Feed crawler with URIs taken from WET files

Dimitris Anagnostopoulos

Sep 18, 2017, 9:52:26 AM
to Common Crawl
Hello,

I am kind of new at this, so I would like some help and advice. I have set up a crawler, and what I want to do is feed it with URIs extracted from WET files. For example, if I wanted to feed my crawler all the URIs that refer to mysite.com, I would search the WET files, store all URIs which start with mysite.com, and feed those to my crawler. Is that possible, and beyond that, is it the optimal way to feed my crawler with specific seeds? Thank you for your time!

Dimitris Anagnostopoulos

Sebastian Nagel

Sep 19, 2017, 11:39:31 AM
to common...@googlegroups.com
Hi Dimitris,

> store all URIs which start with mysite.com

Have a look at Common Crawl's URL index
http://index.commoncrawl.org/
or the index files in
s3://commoncrawl/cc-index/collections/CC-MAIN-2017-34/indexes/
(CC-MAIN-2017-34 = August crawl)

That's the easiest way to search for URLs for a specific domain.
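For example, here is a minimal Python sketch (not from the original
messages) of querying the index for everything under mysite.com, the
example domain from your question. It assumes the requests library
and the CC-MAIN-2017-34 collection:

import json
import requests

# Query the Common Crawl URL index (CDX server) for all captures
# under mysite.com in the August 2017 crawl. The trailing /* makes
# it a prefix match; output=json returns one JSON record per line.
resp = requests.get(
    "http://index.commoncrawl.org/CC-MAIN-2017-34-index",
    params={"url": "mysite.com/*", "output": "json"},
)
resp.raise_for_status()

seeds = [json.loads(line)["url"] for line in resp.text.splitlines()]
for url in seeds:
    print(url)

Note that for domains with many captures the server paginates its
results; you can ask for the page count with showNumPages=true and
then fetch page=0, page=1, and so on.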
The WET files are an option if you want to extract URLs based on
text classification or by matching keywords/phrases in text.
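If you do go the WET route, a sketch along these lines could work.
It assumes the warcio library and a WET file you have already
downloaded locally; the file name and keyword are placeholders:

from warcio.archiveiterator import ArchiveIterator

keyword = "mysite"   # placeholder phrase to match in the text
seeds = []

# WET files store the extracted plain text as "conversion" records;
# the original URL is in the WARC-Target-URI header of each record.
with open("example.warc.wet.gz", "rb") as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type != "conversion":
            continue
        text = record.content_stream().read().decode(
            "utf-8", errors="replace")
        if keyword in text:
            seeds.append(
                record.rec_headers.get_header("WARC-Target-URI"))

print("\n".join(seeds))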

Best,
Sebastian
