I am trying to extract text from scanned paper forms that contain logos, lines, boxes, and text. From what I've read, I expected Tesseract to segment the page and classify each element. I tried TessBaseAPI::SetPageSegMode() with PSM_AUTO, PSM_AUTO_OSD, and a few others, followed by TessBaseAPI::AnalyseLayout(), but all I get is a single PT_FLOWING_IMAGE block covering the whole page.
However, if I REMOVE the logo from the form, I then get PT_FLOWING_TEXT, PT_HORZ_LINE, and PT_VERT_LINE blocks, and TessBaseAPI::Recognize does a fairly good job recognizing the text, even though the text is not in contiguous blocks and is interspersed among the lines.
I have seen examples online of Tesseract segmenting a page and separately identifying blocks of text and graphics, but I cannot remember where.
So, I am looking for information and advice on how to get Tesseract to accurately segment a form that includes images, and to accurately recognize text interspersed among lines and boxes.
Thank you.