I’ve just scanned a section of a book (in French) that unfortunately uses a very fine typeface and a lot of italics that seem to confuse the OCR.

I’m on Linux, so I’m switching between gscan2pdf (which makes use of the remarkable unpaper program) and Master PDF Editor (a proprietary program) to clean up and deskew the scans before OCRing them, since each program has its own strengths and weaknesses. I did this, got the scanned pages looking pretty good, and then OCRed them using Tesseract (which is an option in gscan2pdf). I also tried GOCR, which produced garbage-level results.
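For reference, a plain command-line Tesseract run on one of the cleaned page images looks roughly like this. This is only a sketch: the file names are placeholders, and it assumes the French traineddata package (`fra`) is installed.

```shell
# OCR a single deskewed page image with the French language model.
# --psm 6 tells Tesseract to assume a single uniform block of text,
# which can help when it otherwise mixes adjacent lines together.
tesseract page-001.png page-001 -l fra --psm 6 txt
```

If the scans have multiple columns or floating captions, trying other `--psm` values (e.g. 4 for variable-sized columns) is worth a shot.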

Tesseract didn’t do too badly, but it occasionally merges lines of text together, despite my trying to get them as straight as possible (and doing what I thought was a pretty good job!). It also inserts spaces in the middle of words and sentences, like this: “J e t’ai m e”, which is annoying to go through and fix by hand, especially since there are a lot of them! Can anyone recommend a better approach, some different software maybe, or is this the best I can reasonably hope for?
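The stray-spaces problem can at least be partly cleaned up after the fact. Here is a minimal Python sketch of one possible heuristic (not anything Tesseract or gscan2pdf provides): greedily merge runs of space-separated fragments whenever the joined form appears in a wordlist. The tiny `vocab` set below is just for illustration; in practice you would load a real French dictionary (e.g. from hunspell/aspell).

```python
def collapse_spurious_spaces(text, vocab):
    """Merge space-separated fragments when the joined form is a known word.

    E.g. with a vocabulary containing "je" and "t'aime",
    "J e t'ai m e" becomes "Je t'aime".
    """
    tokens = text.split(" ")
    out = []
    i = 0
    while i < len(tokens):
        merged = None
        # Greedily try the longest merge first (window of up to 5
        # fragments, an arbitrary cutoff for this sketch).
        for j in range(min(len(tokens), i + 5), i + 1, -1):
            candidate = "".join(tokens[i:j])
            if candidate.lower() in vocab:
                merged = (candidate, j)
                break
        if merged:
            out.append(merged[0])
            i = merged[1]
        else:
            out.append(tokens[i])
            i += 1
    return " ".join(out)

vocab = {"je", "t'aime"}  # in practice, load a full French wordlist
print(collapse_spurious_spaces("J e t'ai m e", vocab))  # -> Je t'aime
```

It won’t catch everything (proper nouns and inflected forms missing from the wordlist stay broken), but it can knock out the bulk of the repetitive fixes.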

  •  donio   ( @donio@beehaw.org ) 
    7 months ago

    OCRmyPDF is what I use as well; I’ve had good luck with it on board-game rulebooks that sometimes come with missing or partial embedded text. Combined with recoll and the Emacs pdf-tools mode, I have it all indexed and at my fingertips.
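    For the French scan in the original post, an OCRmyPDF invocation might look something like the sketch below. The file names are placeholders, and it assumes the `fra` traineddata is installed; `--clean` also requires unpaper (which gscan2pdf already uses).

    ```shell
    # OCR a scanned PDF with the French language model.
    # --deskew straightens crooked pages; --clean runs unpaper on the
    # page images before recognition (the cleaned images are used for
    # OCR only and don't replace the originals in the output).
    ocrmypdf --language fra --deskew --clean scan.pdf scan-ocr.pdf
    ```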