
From 50 Pages of Handwritten Notes to a Digital Manuscript with Python and AI

2025/10/27 12:51

We’ve all got them. The notebooks filled with scribbled ideas, the half-finished projects, the “someday” repositories gathering digital dust. For three years, my “someday” project was a 50-page, handwritten draft of a novel. It was a tangible thing, a stack of paper in a box, but the activation energy required to turn it into a working digital manuscript always seemed just out of reach.

Then, life threw a serious curveball: a health scare that came with a flurry of heavy, clinical words. I won’t dwell on the details, but it became a powerful, personal forcing function. The concept of "someday" was suddenly replaced with the urgency of "right now." The project was no longer a hobby; it was a mission.

It was time to digitize. My plan was simple: take photos of each page with my iPhone and feed them into a modern AI with vision capabilities to transcribe the text. What could be easier?

The First Roadblock: Apple’s HEIC Problem

As any developer knows, the gap between a simple plan and a working execution is where the real work happens. I quickly took high-resolution photos of all 50 pages, but when I tried to upload them, I hit an immediate wall.

The native iOS camera format, HEIC (High-Efficiency Image Container), is great for saving space. It’s not so great for compatibility. Many APIs and libraries, including some of the most powerful vision models, are optimized for older, more universal formats like JPEG.

My seamless AI pipeline was blocked at the first step. Manually converting 50+ images was a non-starter. This wasn't a time for tedious tasks; this was a time for building. So, I did what any developer does when faced with a repetitive, boring problem: I wrote a script to fix it.

The Python Script That Unlocked Everything

The beauty of Python is its vast ecosystem of libraries that can solve almost any problem. In this case, Pillow (the friendly fork of PIL) and the pillow-heif library were the perfect tools for the job.
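
If you want to follow along, both libraries install straight from PyPI as Pillow and pillow-heif. A quick, throwaway sanity check that the environment is ready might look something like the snippet below; it's a sketch, not part of the actual conversion script.

# Throwaway sanity check: confirm both libraries are installed
# after running `pip install Pillow pillow-heif`.
from importlib.metadata import version

print("Pillow:", version("Pillow"))
print("pillow-heif:", version("pillow-heif"))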

The goal was simple: point a script at a folder of .heic files and have it spit out high-quality JPEGs in another folder. This little script was the key that unlocked the entire project.

# A simple, effective script to batch convert HEIC files to JPEG
from PIL import Image
import pillow_heif
import os

# --- Configuration ---
# The folder where my iPhone photos were stored
image_folder_path = '/home/j/Desktop/book_notes'
# The destination for the converted files
converted_folder_path = '/home/j/Desktop/book_notes/converted'
# --- End Configuration ---

# Create the destination folder if it doesn't exist
os.makedirs(converted_folder_path, exist_ok=True)

print('start the process yo')

try:
    # A clean one-liner to find all .heic files, case-insensitively
    get_the_files = [f for f in os.listdir(image_folder_path) if f.lower().endswith('.heic')]
    print(f"Found {len(get_the_files)} this many yo")

    for filename in get_the_files:
        print(f"Processing: {filename}")

        # Construct the full path to the source file
        _path = os.path.join(image_folder_path, filename)

        # Create the new JPEG filename
        jpeg_filename = os.path.splitext(filename)[0] + '.jpg'
        jpeg_path = os.path.join(converted_folder_path, jpeg_filename)

        print(f"Converting {filename} -> {jpeg_filename}...")

        # Read the HEIF file
        heif_file = pillow_heif.read_heif(_path)

        # Create a Pillow Image from the data
        image = Image.frombytes(
            heif_file.mode,
            heif_file.size,
            heif_file.data,
            'raw',
        )

        # Save the image as a JPEG with high quality
        image.save(jpeg_path, "JPEG", quality=95)

except Exception as e:
    print(f"An error occurred: {e}")

print('you be done yo!')

This script worked flawlessly. In a matter of seconds, my incompatible photo library became a clean, ordered set of JPEGs, ready for the AI.
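
A side note on the design: pillow-heif can also register itself as a regular Pillow image plugin, which shrinks the conversion loop to a few lines. Here's a minimal sketch of that alternative, reusing the same folder paths as above; I haven't compared it head-to-head with the read_heif approach.

# Alternative sketch: register a Pillow opener so Image.open() reads .heic directly.
import os
from PIL import Image
import pillow_heif

pillow_heif.register_heif_opener()

src = '/home/j/Desktop/book_notes'
dst = '/home/j/Desktop/book_notes/converted'
os.makedirs(dst, exist_ok=True)

for name in os.listdir(src):
    if name.lower().endswith('.heic'):
        out = os.path.join(dst, os.path.splitext(name)[0] + '.jpg')
        # Convert to RGB first in case the source image carries an alpha channel
        Image.open(os.path.join(src, name)).convert('RGB').save(out, 'JPEG', quality=95)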

The Real Surprise: AI as a Story Editor

With the conversion done, I batch-uploaded the JPEGs to a vision-enabled LLM. This is where the true magic of modern AI became apparent.
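
The exact provider doesn't matter much for this story, but for the curious, a single-page transcription request to a vision-capable model might look roughly like the sketch below. I'm showing the OpenAI Python SDK purely as an illustration; the model name, prompt, and file path are placeholders, not a record of exactly what I ran.

# Illustration only: send one converted page to a vision-capable model.
# Model name, prompt, and path are placeholders.
import base64
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

with open('/home/j/Desktop/book_notes/converted/page_01.jpg', 'rb') as f:
    b64 = base64.b64encode(f.read()).decode('utf-8')

response = client.chat.completions.create(
    model='gpt-4o',
    messages=[{
        'role': 'user',
        'content': [
            {'type': 'text', 'text': 'Transcribe the handwritten text on this page and note any page number you see.'},
            {'type': 'image_url', 'image_url': {'url': f'data:image/jpeg;base64,{b64}'}},
        ],
    }],
)
print(response.choices[0].message.content)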

Here’s the thing: in my haste, I hadn’t uploaded the images in the correct order. Page 1 might have been followed by page 15, then page 3. I was expecting to get back a jumble of transcribed text that I would have to painstakingly reassemble.

What I got back was astonishing.

The AI didn't just perform Optical Character Recognition (OCR). It understood the context. It recognized page numbers, chapter headings, and the narrative flow of the text. It not only transcribed the handwriting with incredible accuracy but also re-ordered the disparate image inputs into a perfectly sequential document.

This is a monumental leap from the transcription tools of just a few years ago. We've moved from simple character recognition to contextual understanding. The AI wasn't just a typist; it was acting as a developmental editor.

From Raw Text to a Fine-Tuned Model: The Road Ahead

This initial transcription is the 80/20 solution: it gets me 80% of the way there with 20% of the effort. But it’s just the beginning. My forcing function hasn’t just pushed me to start this project; it has pushed me to think about the entire pipeline from end to end.

Here’s my raw project plan from my notes—the real road map for turning this into a serious, long-term asset.

# PROJECT ROADMAP
# 1. Convert Images (DONE)
#    - Python script handles the HEIC -> JPEG bottleneck.
# 2. Load to Database
#    - Store the raw text and corrected versions for training.
# 3. Run Basic LLM for 80/20 (DONE)
#    - Get the initial transcription.
# 4. Make Corrections
#    - Manually review and correct the AI's output to create a "golden dataset."
# 5. Load to Fine-Tune LLM
#    - Use the corrected text to fine-tune a model specifically on my handwriting and narrative style.
#    - Infrastructure thought: A Digital Ocean droplet or similar cloud VM with a 16-32GB GPU should be sufficient for this. Need to price this out.
# 6. Train
#    - Run the fine-tuning process. Train multiple versions and compare results.
# 7. Test
#    - Feed the fine-tuned model new handwritten pages and test its accuracy against the base model.
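
To make step 2 a little more concrete, here is one possible shape for it; only a sketch, since the plan above doesn't commit to any particular database. A single SQLite table, one row per manuscript page, would cover both the raw transcription and the corrected "golden" version. Table and column names are placeholders.

# Sketch of the step-2 schema: one row per manuscript page. Names are placeholders.
import sqlite3

conn = sqlite3.connect('manuscript.db')
conn.execute("""
    CREATE TABLE IF NOT EXISTS pages (
        page_number     INTEGER PRIMARY KEY,   -- order in the manuscript
        image_file      TEXT NOT NULL,         -- the converted JPEG (step 1)
        raw_text        TEXT,                  -- initial LLM transcription (step 3)
        corrected_text  TEXT                   -- hand-corrected "golden" text (step 4)
    )
""")
conn.commit()
conn.close()

Once corrected_text is filled in for every page, exporting the raw and corrected pairs gives exactly the golden dataset that steps 5 through 7 need.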

Conclusion

A personal crisis can be a powerful lens, clarifying what’s truly important. For me, it was the catalyst to finally stop thinking about a project and start building it. But the journey also revealed how incredibly advanced and accessible the tools at our disposal have become.

A simple Python script solved a frustrating compatibility issue. A modern LLM did more than just transcribe; it understood narrative structure. And the path forward to building a custom-trained model on my own data is no longer the exclusive domain of large tech companies. It's a tangible, achievable project for any developer with a clear goal.

You don't need to wait for a crisis to create your own forcing function. Find that project you've been putting off, identify the first technical hurdle, and write the script that gets you past it. The tools are here. The technology is ready. You just have to start.
