Common Crawl

Welcome to the Common Crawl Group!

Common Crawl, a non-profit organization, provides an open repository of web crawl data that is freely accessible to all. In doing so, we aim to advance the open web and democratize access to information. Today, the Common Crawl corpus encompasses over two petabytes of web crawl data collected over eight years of ongoing crawls. As the largest and most comprehensive open repository of web crawl data on the cloud, we contribute to the thriving open-data commons that drives innovation, research, and education.
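Because the corpus lives in a public cloud bucket, individual files can be fetched over plain HTTPS. Below is a minimal sketch of turning a relative path (as published in each crawl's `warc.paths.gz` listing) into a downloadable URL; the endpoint and the example path are assumptions for illustration, so check the announcement for your crawl before relying on them.

```python
# Sketch: building a download URL for a file in the public
# Common Crawl bucket. The HTTPS endpoint below is an assumption;
# consult the crawl announcement for the current one.
BASE = "https://commoncrawl.s3.amazonaws.com"

def download_url(warc_path: str) -> str:
    """Join a relative path from warc.paths.gz onto the bucket endpoint."""
    return f"{BASE}/{warc_path.lstrip('/')}"

# Hypothetical path, for illustration only:
url = download_url("crawl-data/CC-MAIN-2016-44/warc.paths.gz")
```

Each monthly crawl publishes such path listings, so scripts usually read the listing first and then fetch the individual WARC/WAT/WET files it names.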

This group is for discussion and collaboration among all those who use or seek to use Common Crawl data and/or share an interest in the open data ecosystem.

Please use this forum to ask questions, share projects and results, and discuss Common Crawl data and the open data ecosystem.

Recent topics (showing 1-20 of 485):
- Meanpath Jan 2014 Torrent - 1.6TB of crawl data from 115m websites (Adam Seabrook, 12/1/16)
- No images with common crawl .warc files and pywb (Gregory Petropoulos, 11/30/16)
- Duplicates (Olexiy Lytvynenko, 11/29/16)
- IRC/Slack? (Oli Lalonde, 11/27/16)
- Re: .wet file encoding (Sebastian Nagel, 11/23/16)
- How to operate on Common Crawl Dataset to extract website URL and the related emails? (Jaffer Wilson, 11/18/16)
- Question about web-sites that are not allowed to be scraped by their owners (11/15/16)
- Updated getting started with Common Crawl (Matt Horridge, 11/14/16)
- http-equiv = refresh (Christian Lund, 11/14/16)
- Re: Unable to see latest crawl dirs in common-crawl bucket (Sebastian Nagel, 11/9/16)
- October 2016 crawl archive now available (Sebastian Nagel, 11/9/16)
- Detecting Home (Christian Lund, 11/3/16)
- Call for ideas: top million crawl (Greg Lindahl, 11/2/16)
- URL list (and file format of url-index) (KP, 11/2/16)
- RSS feeds being tracked in news crawl (Premraj Narkhede, 11/2/16)
- Warc start and end dates (Christian Lund, 11/1/16)
- Unusual search question (TheBean InABox, 11/1/16)
- results limit in index search (Premraj Narkhede, 10/28/16)
- WARC to WET transformation (HTML to plain text) (Vladimir Smatanik, 10/28/16)
- News Crawler seed list (KP, 10/26/16)