Depending on what you're harvesting and scraping, it's usually best to do it as efficiently as possible. That typically means using a non-GUI HTTP/REST client to fetch the pages and parse the data out of the responses, whether that data is HTML/XML/JSON/PDF/text/binary. In some cases you can skip scraping the rendered site entirely and hit the REST APIs (if accessible) that the site's own front end calls via AJAX. Think code/scripts with HTTP/REST client libraries, and/or tools like curl with scripting.
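For example, here's a minimal sketch using Python's requests library against a hypothetical JSON endpoint; the URL, the page parameter, and the items/name/price fields are placeholders for whatever you actually find in the browser's network tab:

```python
import requests

# Hypothetical AJAX endpoint the site's front end calls; replace with the
# real URL you discover in the browser's developer tools / network tab.
API_URL = "https://example.com/api/items"

resp = requests.get(API_URL, params={"page": 1}, timeout=30)
resp.raise_for_status()

# Assumed response shape: {"items": [{"name": ..., "price": ...}, ...]}
for item in resp.json().get("items", []):
    print(item.get("name"), item.get("price"))
```

No browser, no rendering, just the data, which is why this approach is so much faster when the API is reachable.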
But there are times when the content has to be rendered in a browser before you can scrape it, or when you need to scrape visuals/graphics that only exist once rendered, not plain text or images you can fetch by URL alone. That's where Selenium comes in handy. And where Selenium falls short (say, Flash or Java plugin content), you move on to image-recognition tools like Sikuli.
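A rough sketch of that with Selenium's Python bindings, waiting for JavaScript-rendered elements to appear before reading them; the URL and the .result-row selector are placeholders for your target page:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
try:
    driver.get("https://example.com/js-heavy-page")  # placeholder URL

    # Wait until the JS-rendered rows actually exist before scraping them.
    rows = WebDriverWait(driver, 15).until(
        EC.presence_of_all_elements_located((By.CSS_SELECTOR, ".result-row"))
    )
    for row in rows:
        print(row.text)
finally:
    driver.quit()
```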
When using Selenium for scraping, you'd generally want to run headless and only switch to GUI mode when debugging, since headless runs faster and doesn't tie up your display/screen. Options include headless Chrome, headless Firefox, and (historically) PhantomJS/GhostDriver, though PhantomJS is no longer maintained.
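For instance, a minimal headless-Chrome setup (the exact flag depends on your Chrome/Selenium versions; newer ones accept --headless=new, older ones just --headless):

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Run Chrome without opening a visible window; pages still render JS as usual.
opts = Options()
opts.add_argument("--headless=new")  # or "--headless" on older Chrome/Selenium

driver = webdriver.Chrome(options=opts)
try:
    driver.get("https://example.com")  # placeholder URL
    print(driver.title)
finally:
    driver.quit()
```

Swapping back to GUI mode for debugging is then just a matter of dropping the headless argument.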