In general, when you're scraping a site and the site offers a way to get automated access through an API, you should use the API rather than scraping.
To answer your question: when a script makes an HTTP request, the HTTP library adds headers such as "User-Agent" that identify the library to the server. Servers can use this information to deny access to automated clients.
In some cases, changing the User-Agent string to a string associated with a web browser will cause the server to think it's dealing with a human user rather than a script.
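As a minimal sketch of that idea using Python's standard library (the URL and browser string below are placeholders; substitute whatever site and User-Agent you need):

```python
import urllib.request

# Placeholder URL; substitute the site you want to fetch.
url = "https://example.com/"

# urllib's default User-Agent is "Python-urllib/3.x", which some servers
# reject. Supplying a browser-style string instead can avoid that.
request = urllib.request.Request(
    url,
    headers={
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:115.0) "
                      "Gecko/20100101 Firefox/115.0"
    },
)

# urllib normalizes stored header names, hence "User-agent" here.
print(request.get_header("User-agent"))

# urllib.request.urlopen(request) would then send the spoofed header.
```

Third-party libraries like Requests take the same approach: pass a headers dict with a "User-Agent" key when making the request.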
Here's some documentation that explains the issue in a Python context.
This won't work all the time, because servers can use other mechanisms to detect automated clients, such as checking for JavaScript support. In those cases you may be able to use Selenium to script a real web browser.
Leonard