The Vision API first performs layout analysis on the image to locate the text. Once the general location is detected, the OCR module runs text recognition on that region to generate the text. Finally, errors are corrected in a post-processing step by feeding the output through a language model or dictionary. You can find more details here.
If you are using document_text_detection, you might try text_detection instead and see whether you get better results. Several factors could be contributing to this issue (such as poor image quality or a slight angle in the text orientation), so it would be advisable to open a case with support or create a public issue, providing your setup and a sample (non-confidential) document.
Since you are handling receipts, you might also test Document AI or Procurement DocAI, which are specialized in analyzing invoices.