Thanks for the code snippet. One thing to clarify: since the legacy_layout advanced OCR option is implemented on the current stable model ONLY (builtin/legacy does not need it anyway), you can omit the "model" parameter from the code:
from google.cloud import vision

client = vision.ImageAnnotatorClient()
image = vision.Image(content=img_obj)  # img_obj holds the raw image bytes
request = vision.AnnotateImageRequest(image=image, features=[{'type_': vision.Feature.Type.DOCUMENT_TEXT_DETECTION}], image_context={"text_detection_params": {"advanced_ocr_options": ["legacy_layout"]}})
response = client.annotate_image(request)
texts = response.text_annotations
The reason the code in the original post still works is that, after Aug 20, the API ignores builtin/legacy and defaults it to the builtin/stable model.
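For completeness, here is a minimal sketch of the same request with the model set explicitly on the feature, assuming you want to pin it to "builtin/stable" (which is the default, so the behavior is identical to omitting it):

# Sketch: same request as above, but with the feature's model field set explicitly.
# "builtin/stable" is assumed here; leaving it out has the same effect since it is the default.
request = vision.AnnotateImageRequest(image=image, features=[{'type_': vision.Feature.Type.DOCUMENT_TEXT_DETECTION, 'model': 'builtin/stable'}], image_context={"text_detection_params": {"advanced_ocr_options": ["legacy_layout"]}})
response = client.annotate_image(request)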