The two most basic elements of any programming language are loops and conditional statements. You have enabled loops by allowing link selectors to be their own parent and execute recursively, but there is no way to break out of that loop. If you are scraping a site with thousands of pages, your browser will more than likely crash before the scrape ends. It would be nice to be able to compare the text selector for a page number against a literal: if it is greater than or equal to the literal, execute one link selector or nothing at all; otherwise, execute another link selector.
Sometimes it would be much more efficient if the link selector could accept the text of an actual link to a page, instead of having to navigate to a page by clicking on buttons. There have been multiple occasions where I needed to start scraping on one page to get some general data before going to another page with rows of data. The general data is added to each row, which is redundant but acceptable. What I would really like is a way to combine multiple scrapes into one, so that I can scrape one row of general data from one start page, then go to another start page and scrape multiple rows from multiple pages. But even being able to jump from page to page instead of navigating with links would be an improvement.
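For now, the closest I can get to "combining" is joining the two exports after the fact. A rough sketch of what I mean, assuming both scrapes were exported as CSV (the file names here are just examples):

```
import csv

# One scrape produced a single row of general data, the other produced many
# detail rows; join the general columns onto every detail row.
with open("general.csv", newline="") as f:
    general = next(csv.DictReader(f))

with open("details.csv", newline="") as f:
    detail_rows = list(csv.DictReader(f))

combined = [{**general, **row} for row in detail_rows]

with open("combined.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(combined[0].keys()))
    writer.writeheader()
    writer.writerows(combined)
```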
As an example, I am scraping a site by creating 20 duplicate link selectors that all select the next page, so that after 20 pages the scrape stops without me having to stop it manually. If there were a way to check for page 20 and either select the next page or not, a single conditional link selector executing itself recursively would be all that is needed.
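To make the idea concrete, here is a minimal sketch in plain Python, not in anything Web Scraper supports today. The site is simulated, and "page_number" and "next" stand in for a text selector and a link selector; the point is just the conditional that ends the recursion at the limit.

```
PAGE_LIMIT = 20  # the literal to compare against

# Simulated site: each page has rows, a visible page number, and a link to
# the next page (None on the last page). All names here are made up.
pages = {
    n: {"rows": [f"row {n}-1", f"row {n}-2"],
        "page_number": n,
        "next": n + 1 if n < 100 else None}
    for n in range(1, 101)
}

def scrape(page_id, results):
    page = pages[page_id]
    results.extend(page["rows"])
    # The conditional link selector: follow "next" only while the page-number
    # text is below the literal; otherwise execute nothing at all and let the
    # recursion end on its own.
    if page["page_number"] < PAGE_LIMIT and page["next"] is not None:
        scrape(page["next"], results)
    return results

print(len(scrape(1, [])))  # 40 rows: pages 1 through 20, two rows each
```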
I am using a workaround for the grouped selector, which works for me because it is a limited list. Instead of selecting multiple for an element selector, I just created separate selectors to pick each item I needed in the row.
Yes, I did mean uploading a list of start URLs. I was scraping a game site that has many different servers, but they are all identical, so the same sitemap could be used for each server except for its unique start URL.
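Even being able to generate a sitemap per server from one exported file would go a long way. A minimal sketch of that idea, where the "_id" and "startUrl" field names are my assumptions about the exported JSON, and the server names and URL pattern are made up:

```
import json

servers = ["server1", "server2", "server3"]   # made-up server names

with open("game-sitemap.json") as f:          # the one exported sitemap
    base = json.load(f)

for name in servers:
    sitemap = dict(base)
    sitemap["_id"] = f"{base['_id']}-{name}"                # assumed field name
    sitemap["startUrl"] = [f"https://{name}.example.com/"]  # assumed field name and URL pattern
    with open(f"sitemap-{name}.json", "w") as f:
        json.dump(sitemap, f, indent=2)
```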
It would be nice to be able to load the selectors from a file anyway; then all of your sitemaps could be stored in a convenient folder on your computer. Perhaps you could even execute an entire folder of sitemaps: load each sitemap, perform the scrape, then download the records to a results file.
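The batch run I am picturing would look roughly like this. The run_scrape() function is purely hypothetical and only stands in for whatever would actually drive the scrape, so this is just the shape of the loop, not working code for the extension:

```
import csv
import json
from pathlib import Path

def run_scrape(sitemap):
    """Hypothetical stand-in: perform the scrape for one sitemap and return rows as dicts."""
    raise NotImplementedError

Path("results").mkdir(exist_ok=True)
for path in sorted(Path("sitemaps").glob("*.json")):  # the convenient folder of sitemaps
    with open(path) as f:
        sitemap = json.load(f)
    rows = run_scrape(sitemap)
    with open(Path("results") / f"{path.stem}.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
```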
If you could then schedule those scrapes to run daily or weekly, I could see the potential for using your app to scrape many kinds of data that are updated on a regular basis, but that would begin to overlap with the services your professional version provides.