OK, maybe I didn't use the most accurate phrasing.
The XMLFeedSpider offers three parsing options, but I would prefer to use this module instead: it is a more full-featured parser.
It appears to me that I cannot extend XMLFeedSpider, since it only lets me choose from three predefined parsers.
Maybe I will need to extend BaseSpider instead.
I looked into the CrawlSpider implementation, but it pulls in functions from quite a few places, and it also feels like too involved an example for a beginner.
Let's say I start with the base spider. What's confusing me is that I don't understand how link following is achieved. Is it done by yielding Request objects from the parse() method? Which spider will then handle them?
What I take from the documentation is that my parse() method should yield a Request object if I want to continue spidering, an Item object if I want to save something, and nothing if I don't want to do anything more. Is this correct?
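To check my understanding, here is a toy sketch of that contract. This is not Scrapy code; Request, Item, the URLs, and the run_engine loop are stand-ins I made up to mimic how I believe the engine consumes what parse() yields:

```python
# Toy model of the parse() contract: yield a Request to follow a link,
# yield an Item to save data, yield nothing further to stop.
# Request and Item here are simple stand-ins, NOT Scrapy classes.

class Request:
    def __init__(self, url):
        self.url = url

class Item(dict):
    pass

def parse(response_url):
    """Pretend callback: follow one link from the start page, save an item elsewhere."""
    if response_url == "http://example.com/start":
        # Yielding a Request asks the engine to schedule another fetch
        # and call the callback (here, parse again) on its response.
        yield Request("http://example.com/page1")
    else:
        # Yielding an Item hands the scraped data over for storage.
        yield Item(url=response_url)
    # Yielding nothing else simply ends crawling from this response.

def run_engine(start_url):
    """Crude engine loop: process queued requests until none are left."""
    queue, items = [Request(start_url)], []
    while queue:
        request = queue.pop(0)
        for result in parse(request.url):
            if isinstance(result, Request):
                queue.append(result)   # follow the link
            else:
                items.append(result)   # store the scraped item
    return items

print(run_engine("http://example.com/start"))
```

If my reading of the docs is right, the real engine does something similar: it keeps calling the callback on each response and dispatches whatever comes back based on its type.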