From the image above, the highlighted line is the container I'm targeting, with the id "watch-related", to capture the links to these videos so I can extract the same data from each of them. From the results I'm getting, the crawler is evidently not picking up all the links on the currently seeded URL, and it finishes crawling after a while. The only way I have managed to get it to crawl recursively is with the dont_filter=True option, but that makes it re-crawl the same pages indefinitely, which is not what I need it to do.

Below is my very simple crawler. Again, I'm not good at this, so my apologies for my poor coding skills. If someone could show me a simple way to get the crawler to scrape recursively while skipping the already extracted URLs, I'll be forever grateful. Thank you in advance.
import scrapy

RELATED_SELECTOR = '#watch-related a ::attr(href)'

# Inside my Spider subclass:
def parse(self, response):
    # Queue a request for every related-video link found on the page
    for article in response.css(RELATED_SELECTOR).extract():
        if article:
            yield scrapy.Request(
                response.urljoin(article),
                callback=self.parse,  # dont_filter=True makes it crawl the same pages indefinitely
            )
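To make it clearer what I mean by "skip the already extracted URLs", here is a rough sketch of the behaviour I'm after, written with a manual seen set. The spider name, the start URL, and the self.seen attribute are just illustrative placeholders, not code I'm actually running; I don't know if this is the idiomatic Scrapy way to do it.

import scrapy

RELATED_SELECTOR = '#watch-related a ::attr(href)'

class RelatedVideosSpider(scrapy.Spider):
    # Illustrative only: name and start URL are placeholders
    name = 'related_videos'
    start_urls = ['https://www.youtube.com/watch?v=XXXXXXXXXXX']

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.seen = set()  # URLs already queued, so each one is requested at most once

    def parse(self, response):
        # Follow every related-video link, but only if we haven't queued it before
        for href in response.css(RELATED_SELECTOR).extract():
            url = response.urljoin(href)
            if url not in self.seen:
                self.seen.add(url)
                yield scrapy.Request(url, callback=self.parse)

That is the idea: keep crawling outward from the seed page, but never revisit a URL that has already been extracted.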