easiest way to get a text file of tweets


Bill

Apr 15, 2009, 8:14:09 AM
to Twitter Development Talk
Hi. Can anyone suggest the easiest way to get a text file of, say, 2000
tweets that contain the word 'Japan'?
Thanks.

Bill Claster

Apr 15, 2009, 8:44:45 AM
to Twitter Development Talk
http://groups.google.com/group/twitter-development-talk/browse_thread/thread/1df7a2d9898d93e4#
--
Best wishes,
Bill
Sent from Beppu, Ōita Prefecture, Japan

Abraham Williams

Apr 15, 2009, 8:58:00 AM
to twitter-deve...@googlegroups.com
Not quite what you are looking for, but http://tweetbook.in/oauth lets you export to PDF.
--
Abraham Williams | http://the.hackerconundrum.com
Hacker | http://abrah.am | http://twitter.com/abraham
Web608 | Community Evangelist | http://web608.org
This email is: [ ] blogable [x] ask first [ ] private.
Sent from Madison, Wisconsin, United States

Bill

Apr 15, 2009, 9:14:52 AM
to Twitter Development Talk
Hi, thanks very much. Actually, though, it seems they can only make
a PDF of my own tweets. I was hoping to get just tweets that contain
the word 'japan', but thanks anyway.

On Apr 15, 9:58 pm, Abraham Williams <4bra...@gmail.com> wrote:
> Not quite what you are looking for, but http://tweetbook.in/oauth lets you
> export to PDF.

Matt Sanford

Apr 15, 2009, 10:39:17 AM
to twitter-deve...@googlegroups.com
You could use http://search.twitter.com/search.atom?q=japan and a small bit of scripting. Look in those results for a link rel="next" for the next page. That will let you page your way back to ~1500 tweets.

Thanks;
  — Matt Sanford / @mzsanford
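[Editor's note: Matt's rel="next" approach can be sketched as below. This is an illustrative Python sketch, not code from the thread (the thread's own example further down is PHP); the sample feed fragment mirrors the one Bill pastes later in the thread. Against the live API you would fetch each returned href in turn until no rel="next" link remains.]

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def next_page_url(atom_xml):
    """Return the href of the rel="next" link, or None on the last page."""
    root = ET.fromstring(atom_xml)
    for link in root.findall(ATOM_NS + "link"):
        if link.get("rel") == "next":
            return link.get("href")
    return None

# Illustrative sample of the paging link the search.atom feed returns.
sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <link type="application/atom+xml" rel="next"
        href="http://search.twitter.com/search.atom?max_id=1529989226&amp;page=2&amp;q=japan"/>
</feed>"""

print(next_page_url(sample))
```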

Bill

Apr 15, 2009, 9:11:09 PM
to Twitter Development Talk
Hi. Thanks again. I see:

<link type="application/atom+xml" rel="next" href="http://search.twitter.com/search.atom?max_id=1529989226&amp;page=2&amp;q=japan"/>


but how do I change that, and what do I change it to, in the URL:

http://search.twitter.com/search.atom?q=japan

Is there some part of the URL that I can adjust to get more results?

Bill

On Apr 15, 11:39 pm, Matt Sanford <m...@twitter.com> wrote:
> You could use http://search.twitter.com/search.atom?q=japan and a
> small bit of scripting.

Peter Denton

Apr 15, 2009, 11:08:49 PM
to twitter-deve...@googlegroups.com
<?php
     // Fetch pages of Twitter search results and write the tweet text to a file.
     $page_num = 1;
     $txtString = "";
     $statusUpdate = array();
     while ($page_num <= 2)
     {
         // rpp=100 is the maximum results per page; max_id pins the search
         // to a fixed point in the timeline so the pages don't shift under you.
         $host = "http://search.twitter.com/search.atom?q=japan&max_id=1529989226&rpp=100&page=$page_num";
         $result = file_get_contents($host);
         $xml = new SimpleXMLElement($result);
         foreach ($xml->entry as $entry)
         {
             $statusUpdate[] = $entry->title;
         }
         $page_num++;
     }
     foreach ($statusUpdate as $su)
     {
         $txtString .= $su . "\n";  // one tweet per line
     }

     $myFile = "myTextFile.txt";
     $fh = fopen($myFile, 'w') or die("can't open file");
     fwrite($fh, $txtString);
     fclose($fh);
?>
--
Peter M. Denton
www.twibs.com
in...@twibs.com

Twibs makes Top 20 apps on Twitter - http://tinyurl.com/bopu6c


Peter Denton

Apr 15, 2009, 11:12:48 PM
to twitter-deve...@googlegroups.com
and you could bump the while loop up to 15 to get 1500 results, like...

while ($page_num <= 15 )

notice in the search URL rpp=100 (results per page) and page=$page_num (pagination), so you can get 15 × 100 = 1500 results
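[Editor's note: spelled out, the paging arithmetic from Peter's loop looks like the sketch below — Python rather than the thread's PHP, for illustration only. The max_id value is the one hard-coded in Peter's script; in practice it would come from the first response.]

```python
# Build the 15 search URLs that together cover ~1500 tweets.
base = "http://search.twitter.com/search.atom"
query, max_id, rpp = "japan", 1529989226, 100

urls = [f"{base}?q={query}&max_id={max_id}&rpp={rpp}&page={p}"
        for p in range(1, 16)]

print(len(urls))   # 15 pages at 100 results per page = 1500 tweets
print(urls[0])
```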

Bill

Apr 16, 2009, 7:56:09 AM
to Twitter Development Talk
THANKS Peter!
It works!

On Apr 16, 12:12 pm, Peter Denton <petermden...@gmail.com> wrote:
> and you could bump up the while loop to 15, to get 1500 results, like...
>
> while ($page_num <= 15 )
>
> notice in the search URL rpp=100 (results per page) and page=$page_num
> (pagination), so you can get 1500 results