
From 50 Pages of Handwritten Notes to a Digital Manuscript with Python and AI

2025/10/27 12:51

We’ve all got them. The notebooks filled with scribbled ideas, the half-finished projects, the “someday” repositories gathering digital dust. For three years, my “someday” project was a 50-page, handwritten draft of a novel. It was a tangible thing, a stack of paper in a box, but the activation energy required to turn it into a working digital manuscript always seemed just out of reach.

Then life threw a serious curveball: a health scare that came with a flurry of heavy, clinical words. I won’t dwell on the details, but it became a powerful, personal forcing function. The concept of "someday" was suddenly replaced with the urgency of "right now." The project was no longer a hobby; it was a mission.

It was time to digitize. My plan was simple: take photos of each page with my iPhone and feed them into a modern AI with vision capabilities to transcribe the text. What could be easier?

The First Roadblock: Apple’s HEIC Problem

As any developer knows, the gap between a simple plan and a working execution is where the real work happens. I quickly took high-resolution photos of all 50 pages, but when I tried to upload them, I hit an immediate wall.

The native iOS camera format, HEIC (High-Efficiency Image Container), is great for saving space. It’s not so great for compatibility. Many APIs and libraries, including some of the most powerful vision models, are optimized for older, more universal formats like JPEG.

My seamless AI pipeline was blocked at the first step. Manually converting 50+ images was a non-starter. This wasn't a time for tedious tasks; this was a time for building. So, I did what any developer does when faced with a repetitive, boring problem: I wrote a script to fix it.

The Python Script That Unlocked Everything

The beauty of Python is its vast ecosystem of libraries that can solve almost any problem. In this case, Pillow (the friendly fork of PIL) and the pillow-heif library were the perfect tools for the job.

The goal was simple: point a script at a folder of .heic files and have it spit out high-quality JPEGs in another folder. This little script was the key that unlocked the entire project.

# A simple, effective script to batch convert HEIC files to JPEG
from PIL import Image
import pillow_heif
import os

# --- Configuration ---
# The folder where my iPhone photos were stored
image_folder_path = '/home/j/Desktop/book_notes'
# The destination for the converted files
converted_folder_path = '/home/j/Desktop/book_notes/converted'
# --- End Configuration ---

# Create the destination folder if it doesn't exist
os.makedirs(converted_folder_path, exist_ok=True)

print('start the process yo')

try:
    # A clean one-liner to find all .heic files, case-insensitively
    get_the_files = [f for f in os.listdir(image_folder_path) if f.lower().endswith('.heic')]
    print(f"Found {len(get_the_files)} this many yo")

    for filename in get_the_files:
        print(f"Processing: {filename}")

        # Construct the full path to the source file
        _path = os.path.join(image_folder_path, filename)

        # Create the new JPEG filename
        jpeg_filename = os.path.splitext(filename)[0] + '.jpg'
        jpeg_path = os.path.join(converted_folder_path, jpeg_filename)

        print(f"Converting {filename} -> {jpeg_filename}...")

        # Read the HEIF file
        heif_file = pillow_heif.read_heif(_path)

        # Create a Pillow Image from the data
        image = Image.frombytes(
            heif_file.mode,
            heif_file.size,
            heif_file.data,
            'raw',
        )

        # Save the image as a JPEG with high quality
        image.save(jpeg_path, "JPEG", quality=95)

except Exception as e:
    print(f"An error occurred: {e}")

print('you be done yo!')

This script worked flawlessly. In a matter of seconds, my incompatible photo library became a clean, ordered set of JPEGs, ready for the AI.
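Worth noting for anyone adapting this: pillow-heif also ships a register_heif_opener() helper that teaches Pillow to open .heic files directly, which makes the conversion loop even shorter. I went with the explicit read_heif() approach above, but a minimal sketch of the alternative looks like this:

# Alternative sketch: same batch conversion, using pillow-heif's opener so
# Pillow's Image.open() handles HEIC transparently. Not the script I ran above.
import os
from PIL import Image
import pillow_heif

pillow_heif.register_heif_opener()  # registers .heic/.heif support with Pillow

image_folder_path = '/home/j/Desktop/book_notes'
converted_folder_path = '/home/j/Desktop/book_notes/converted'
os.makedirs(converted_folder_path, exist_ok=True)

for filename in os.listdir(image_folder_path):
    if not filename.lower().endswith('.heic'):
        continue
    src = os.path.join(image_folder_path, filename)
    dst = os.path.join(converted_folder_path, os.path.splitext(filename)[0] + '.jpg')
    # Convert to RGB before saving, since JPEG has no alpha channel
    Image.open(src).convert('RGB').save(dst, 'JPEG', quality=95)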

The Real Surprise: AI as a Story Editor

With the conversion done, I batch-uploaded the JPEGs to a vision-enabled LLM. This is where the true magic of modern AI became apparent.
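(The exact model and SDK aren't the point here, and I won't pretend the snippet below is the code I ran. But for concreteness, this is roughly what sending a single converted page to a vision-capable model can look like, using the OpenAI Python SDK and a placeholder model name as stand-ins.)

# Hypothetical sketch: transcribing one converted page with a vision-capable LLM.
# The SDK, model name, and example file path are assumptions for illustration only.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def transcribe_page(jpeg_path: str) -> str:
    # Encode the page image as base64 so it can be sent inline
    with open(jpeg_path, 'rb') as f:
        b64 = base64.b64encode(f.read()).decode('utf-8')
    response = client.chat.completions.create(
        model='gpt-4o',  # placeholder vision-capable model
        messages=[{
            'role': 'user',
            'content': [
                {'type': 'text',
                 'text': 'Transcribe the handwritten text on this page. '
                         'Note any page numbers or chapter headings you see.'},
                {'type': 'image_url',
                 'image_url': {'url': f'data:image/jpeg;base64,{b64}'}},
            ],
        }],
    )
    return response.choices[0].message.content

# Example call on one of the converted JPEGs (filename is illustrative)
print(transcribe_page('/home/j/Desktop/book_notes/converted/page_01.jpg'))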

Here’s the thing: in my haste, I hadn’t uploaded the images in the correct order. Page 1 might have been followed by page 15, then page 3. I was expecting to get back a jumble of transcribed text that I would have to painstakingly reassemble.

What I got back was astonishing.

The AI didn't just perform Optical Character Recognition (OCR). It understood the context. It recognized page numbers, chapter headings, and the narrative flow of the text. It not only transcribed the handwriting with incredible accuracy but also re-ordered the disparate image inputs into a perfectly sequential document.

This is a monumental leap from the transcription tools of just a few years ago. We've moved from simple character recognition to contextual understanding. The AI wasn't just a typist; it was acting as a developmental editor.

From Raw Text to a Fine-Tuned Model: The Road Ahead

This initial transcription is the 80/20 solution. It gets me 80% of the way there with 20% of the effort. But it’s just the beginning. My forcing function has pushed me not only to start this project but also to think through the entire pipeline from end to end.

Here’s my raw project plan from my notes—the real road map for turning this into a serious, long-term asset.

# PROJECT ROADMAP
# 1. Convert Images (DONE)
#    - Python script handles the HEIC -> JPEG bottleneck.
# 2. Load to Database
#    - Store the raw text and corrected versions for training.
# 3. Run Basic LLM for 80/20 (DONE)
#    - Get the initial transcription.
# 4. Make Corrections
#    - Manually review and correct the AI's output to create a "golden dataset."
# 5. Load to Fine-Tune LLM
#    - Use the corrected text to fine-tune a model specifically on my handwriting and narrative style.
#    - Infrastructure thought: A Digital Ocean droplet or similar cloud VM with a 16-32GB GPU should be sufficient for this. Need to price this out.
# 6. Train
#    - Run the fine-tuning process. Train multiple versions and compare results.
# 7. Test
#    - Feed the fine-tuned model new handwritten pages and test its accuracy against the base model.
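Steps 2 and 4 are where the "golden dataset" takes shape. I haven't built this part yet, but the rough idea is to keep each page's raw AI transcription alongside my corrected version, then export the pairs as training examples. A minimal sketch of that, with the SQLite schema, field names, and JSONL format all being assumptions rather than anything I've implemented:

# Hypothetical sketch of roadmap steps 2 and 4: store raw vs. corrected transcriptions,
# then export them as JSONL training pairs for a future fine-tuning run.
import json
import sqlite3

conn = sqlite3.connect('manuscript.db')
conn.execute("""
    CREATE TABLE IF NOT EXISTS pages (
        page_number INTEGER PRIMARY KEY,
        image_path TEXT,          -- converted JPEG for this page
        raw_transcription TEXT,   -- what the base LLM produced
        corrected_text TEXT       -- the manually reviewed "golden" version
    )
""")
conn.commit()

def export_training_pairs(jsonl_path: str) -> None:
    """Write (raw, corrected) pairs as JSONL for a fine-tuning run."""
    rows = conn.execute(
        "SELECT raw_transcription, corrected_text FROM pages "
        "WHERE corrected_text IS NOT NULL ORDER BY page_number"
    )
    with open(jsonl_path, 'w', encoding='utf-8') as f:
        for raw, corrected in rows:
            f.write(json.dumps({'input': raw, 'output': corrected}) + '\n')

export_training_pairs('golden_dataset.jsonl')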

Conclusion

A personal crisis can be a powerful lens, clarifying what’s truly important. For me, it was the catalyst to finally stop thinking about a project and start building it. But the journey also revealed how incredibly advanced and accessible the tools at our disposal have become.

A simple Python script solved a frustrating compatibility issue. A modern LLM did more than just transcribe; it understood narrative structure. And the path forward to building a custom-trained model on my own data is no longer the exclusive domain of large tech companies. It's a tangible, achievable project for any developer with a clear goal.

You don't need to wait for a crisis to create your own forcing function. Find that project you've been putting off, identify the first technical hurdle, and write the script that gets you past it. The tools are here. The technology is ready. You just have to start.
