You can't do that with CrawlSpider, in case you are wondering. CrawlSpider is specifically designed to deal with shallow/flat rules, so that each page on the site can be treated the same way in terms of which links to follow.
What you can do is subclass BaseSpider (instead of CrawlSpider) and implement your custom logic there. If you think this will be a common pattern among the spiders of your project, the next step would be to implement a new base/generic spider that abstracts this logic and provides a way to declare multi-level rules.

If you reach that point (you've written the generic spider and are using it successfully with a few spiders in your project) and you believe it could be useful in general (outside your project), you could consider documenting it and contributing it to scrapy.contrib, so that it becomes a new built-in generic spider (like CrawlSpider is). This is just to illustrate the lifetime of a generic spider. As you can imagine, many spiders don't get enough momentum to make it into scrapy.contrib, but who knows... maybe the MultiLevelCrawlSpider could be the next one :)
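Just to give you an idea, here's a rough, untested sketch of the BaseSpider approach, with a different callback per level. The spider name, start URL and XPath expressions are placeholders you'd replace with your own, and the exact imports may vary with your Scrapy version:

    # sketch only - adjust imports/XPaths to your project and Scrapy version
    from urlparse import urljoin

    from scrapy.spider import BaseSpider
    from scrapy.http import Request
    from scrapy.selector import HtmlXPathSelector


    class MultiLevelSpider(BaseSpider):
        name = 'multilevel_example'          # placeholder name
        start_urls = ['http://www.example.com/']

        def parse(self, response):
            # level 1: from the start page, follow only category links
            hxs = HtmlXPathSelector(response)
            for href in hxs.select('//a[@class="category"]/@href').extract():
                yield Request(urljoin(response.url, href),
                              callback=self.parse_category)

        def parse_category(self, response):
            # level 2: inside a category, follow only item links
            hxs = HtmlXPathSelector(response)
            for href in hxs.select('//a[@class="item"]/@href').extract():
                yield Request(urljoin(response.url, href),
                              callback=self.parse_item)

        def parse_item(self, response):
            # level 3: extract the data you actually want here
            pass

The point is that each level gets its own callback with its own rules for which links to follow, which is exactly what CrawlSpider's flat rules don't let you express.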
Best,
Pablo.