Dear community,
I want to run OCR on a relatively large image data set (more than 1 million images). Is there a way to reduce the computational time, and therefore the cost?
My input is 280 dpi A4 pages, and the engine mode (--oem) will be LSTM.
I could not find a similar topic, and I think the options may be limited. Here are some points I gathered from GitHub and this forum:
1) Build Tesseract with --disable-openmp
2) Build Tesseract with --disable-static and CXXFLAGS="-Wall -g -O2"
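For reference, a build along those lines might look like the following. This is only a sketch of the standard autotools flow against a Tesseract source checkout; the flags are the ones listed above (--disable-openmp keeps each process single-threaded, which suits process-level parallelism):

```shell
# Build sketch (assumes a tesseract source checkout; adjust as needed)
./autogen.sh
./configure --disable-openmp --disable-static CXXFLAGS="-Wall -g -O2"
make -j"$(nproc)"
sudo make install
```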
I plan to run it in the cloud (AWS, Azure, or Google Cloud) with Docker, most likely on a dual-core instance, but if 4 cores would help then I will definitely look into that.
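Since cloud cost scales roughly with CPU-hours, it may be worth estimating the wall-clock time up front. A back-of-the-envelope sketch (the 2 s/page latency is purely an assumption for illustration; measure your own on a sample batch first):

```python
def estimate_hours(n_images: int, secs_per_image: float, n_workers: int) -> float:
    """Rough wall-clock estimate, assuming one OCR process per core
    and near-perfect scaling across workers."""
    return n_images * secs_per_image / n_workers / 3600.0

# Hypothetical numbers: 1M pages at an assumed 2 s/page
two_core = estimate_hours(1_000_000, 2.0, 2)
four_core = estimate_hours(1_000_000, 2.0, 4)
print(f"2 workers: {two_core:.0f} h, 4 workers: {four_core:.0f} h")
```

Note that doubling the cores roughly halves the wall-clock time but not the total CPU-hours, so on most per-vCPU pricing the cost stays about the same; the 4-core instance mainly buys a shorter schedule.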
I have not tried to implement this via the C++ API because I am not sure about the performance gain. I suspect that initialising and shutting down Tesseract for every image (by invoking the CLI once per file) may slow down the whole process. Do you all think implementing the loop with the C++ API would help my case?
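For what it's worth, reusing one initialised engine across many images is what the C++ API allows, and it avoids paying the model-load cost per file. A minimal sketch of that loop (assumes libtesseract and Leptonica are installed; language code and error handling are placeholders):

```cpp
#include <tesseract/baseapi.h>
#include <leptonica/allheaders.h>
#include <cstdio>

int main(int argc, char** argv) {
    tesseract::TessBaseAPI api;
    // Initialise once: English model, LSTM-only engine mode
    if (api.Init(nullptr, "eng", tesseract::OEM_LSTM_ONLY) != 0) {
        std::fprintf(stderr, "Could not initialise tesseract\n");
        return 1;
    }
    // Reuse the same engine instance for every image path given
    for (int i = 1; i < argc; ++i) {
        Pix* image = pixRead(argv[i]);
        if (!image) continue;
        api.SetImage(image);
        char* text = api.GetUTF8Text();  // runs recognition
        std::printf("%s", text);
        delete[] text;
        pixDestroy(&image);
    }
    api.End();  // release the engine once, at the end
    return 0;
}
```

Whether this beats spawning the CLI per file depends on how large the per-image recognition time is relative to start-up; for 280 dpi A4 pages the recognition itself usually dominates, but the init savings still add up over a million files.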
Thank you in advance.