Hello everyone!
Straight to the point: I am trying to implement a Google Safe Browsing URL check for my anti-phishing project using the corresponding API, but I have hit a dead end. The problem is that I can't properly decode the gRPC response so that I can process it in my script.

It all started with an attempt to decode v5alpha1 responses to urls:search requests using locally compiled _pb2 files generated from the .proto files on the official GitHub page. I kept getting the same error: "google.protobuf.message.DecodeError: Error parsing message with type 'google.security.safebrowsing.v5alpha1.SearchHashesResponse'". So I decided to try hashes:search requests instead, and to my surprise everything worked fine, but only for the test hash that is provided as an example in the documentation. When I wrote a new function to hash my URLs into the expected 4-byte prefix format, I started getting only empty responses, which could mean one of two things: either my algorithm is incorrect and the API understandably finds nothing for the resulting hash, or there is simply no information about that URL at all. But when I check the same URLs from PhishTank on https://transparencyreport.google.com/safe-browsing/search, they are correctly flagged as malicious. So the issue is probably in my algorithm.
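For reference, the hashing scheme described in the Safe Browsing docs is: canonicalize the URL, expand it into host-suffix/path-prefix expressions, SHA-256 each expression, and truncate to a (minimum 4-byte) prefix. Below is a simplified sketch of my understanding of that scheme; it skips most of the canonicalization steps (percent-decoding, IP-address hosts, fragment removal, etc.), so it is not a drop-in implementation:

```python
import hashlib
from urllib.parse import urlparse

def url_expressions(url: str) -> list[str]:
    """Generate the host-suffix / path-prefix expressions from the
    Safe Browsing docs. Simplified: real canonicalization also handles
    percent-escapes, IP hosts, fragments, trailing dots, etc."""
    parsed = urlparse(url if "://" in url else "http://" + url)
    host = parsed.hostname or ""
    path = parsed.path or "/"

    # Host suffixes: the exact host, plus suffixes built from the last
    # five components, successively dropping the leading component.
    labels = host.split(".")
    hosts = [host]
    for i in range(max(0, len(labels) - 5), len(labels) - 1):
        hosts.append(".".join(labels[i:]))
    hosts = list(dict.fromkeys(hosts))  # dedupe, keep order

    # Path prefixes: exact path with query, exact path without query,
    # then successively shorter prefixes ending in "/".
    paths = []
    if parsed.query:
        paths.append(path + "?" + parsed.query)
    paths.append(path)
    parts = path.split("/")
    for i in range(1, min(len(parts), 4)):
        paths.append("/".join(parts[:i]) + "/")
    paths = list(dict.fromkeys(paths))

    return [h + p for h in hosts for p in paths]

def hash_prefix(expression: str, length: int = 4) -> bytes:
    """SHA-256 of one expression, truncated to the prefix length."""
    return hashlib.sha256(expression.encode("utf-8")).digest()[:length]
```

With the documentation's example input "http://a.b.c/1/2.html?param=1" this produces the eight expressions listed in the spec (a.b.c/1/2.html?param=1, a.b.c/1/2.html, a.b.c/, a.b.c/1/, and the same four with host b.c). My understanding is that each resulting 4-byte prefix then has to be encoded (base64 over REST, I believe) before being sent in the hashes:search request, but I am not certain that part of my code is right either.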
And here I am, stuck with a malfunctioning decoder and a broken hashing algorithm.
So my question is: has anybody encountered this "google.protobuf.message.DecodeError: Error parsing message with type 'google.security.safebrowsing.v5.SearchHashesResponse'" error, or is there an example somewhere of the hashing algorithm used to prepare URLs for hashes:search requests?