I have a script that scrapes some stuff from craigslist that now takes
way too long (I might as well look myself). And my heart sinks when I
see "Updating metadata for 176 gems from http://gems.rubyforge.org".
That seriously takes forever (a little hyperbole). Nothing else is
suffering huge network slowness. Even a simple "require 'open-uri';
open('http://www.google.com')" takes 15 seconds (but half a second
tops in a browser). This all happened at a reasonable pace just a few
days ago. Other ruby installations on other machines (with other
operating systems) talk to the internet with appropriate quickness.
I know one other person who's experienced this, but with no solution.
Anyone else? I haven't found similar stories, but maybe that's because
no one else is having this problem, or the solution is so obvious no
one bothers to mention it.
Any pointers or ideas would be hugely appreciated.
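For reference, here's roughly how I've been timing it, to separate the
name lookup from the actual fetch (the host is just an example):

require 'benchmark'
require 'resolv'
require 'open-uri'

host = 'www.google.com'  # example host

# time the pure-Ruby name lookup by itself, then the full fetch;
# if the lookup is quick but the fetch still crawls, the slow part is
# probably the C resolver the socket code uses, not the network itself
puts Benchmark.measure { Resolv.getaddress(host) }
puts Benchmark.measure { open("http://#{host}").read }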
Thanks,
Chris
Try adding this to your script:
# for OSX compatibility: skip reverse DNS lookups on sockets,
# which can hang when the resolver is misbehaving
require 'socket'
Socket.do_not_reverse_lookup = true
Cheers-
- Ezra Zygmuntowicz
-- Founder & Software Architect
-- ez...@engineyard.com
-- EngineYard.com
Any idea where to patch rubygems?
Chris Shea wrote:
> Just a couple of days ago I "upgraded" from Tiger to Leopard, and the
> most horrible thing about it is that anything Ruby does over the
> internet is so very very slow now. `which ruby` tells me I'm still
> using my from-source install in /usr/local/bin, as does `which gem`.
>
> I've had almost exactly the same experience. Really slow internet
> connectivity in general. I think I have it narrowed down to
> wireless using WEP. I hooked up an old fashioned ethernet cable
> (remember those?) and everything is lickety split fast. HTH.
me too. this is a leoTard issue. in general i'm finding it to be
sucking the big one for networking - you can see the same behavior
with curl as you'll find in ruby.
i also disabled ipv6 and am using opendns for lookups. still slower
than tiger though.
a @ http://codeforpeople.com/
--
it is not enough to be compassionate. you must act.
h.h. the 14th dalai lama
I had this problem too and found some bogus DNS servers had crept into
Network Setup, even though everything had been set to DHCP before (and I
assumed after) the upgrade. My guess was that the network stack was
waiting for them to time out before contacting my real DNS server. The
key symptom is that it takes eons to start loading a page/downloading a
file/whatever, but performance is normal once the activity actually
starts.
I know it sounds noob, but check to make sure there aren't any
unexpected DNS Servers in System Prefs -> Network -> Advanced (for the
correct interface) -> DNS.
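If you'd rather check from a terminal, a quick irb one-liner like this
should list the resolvers the pure-Ruby Resolv library would use
(assuming OS X is keeping /etc/resolv.conf in sync with the GUI
settings, which it normally does):

# print the nameserver entries the resolver config currently lists
puts File.readlines('/etc/resolv.conf').grep(/^nameserver/)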
On Jan 14, 2008, at 8:22 PM, ara.t.howard wrote:
> am using opendns for lookups
I tried OpenDNS until the first bad lookup (an address not in DNS), when
OpenDNS hijacked the request and redirected my browser to an ad. It
just seems creepy to have your DNS lookups spoofed like that.
>
> On Jan 14, 2008, at 9:14 PM, s.ross wrote:
>
>> I've had almost exactly the same experience. Really slow internet
>> connectivity in general. I think I have it narrowed down to
>> wireless using WEP. I hooked up an old fashioned ethernet cable
>> (remember those?) and everything is lickety split fast. HTH.
>
> me too. this is a leoTard issue. in general i'm finding it to be
> sucking the big one for networking - you can see the same behavior
> with curl as you'll find in ruby.
>
> i also disabled ipv6 and am using opendns for lookups. still slower
> than tiger though.
>
I had a problem like that. For some reason, removing all locations from
the network prefs and adding them back in fixed it.
Fred
could try
require 'resolv-replace'
When I try that, it comes up with this:
LoadError: no such file to load -- resolve-replace
from
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rubygems/custom_require.rb:27:in
`gem_original_require'
from
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rubygems/custom_require.rb:27:in
`require'
from (irb):1
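Worth noting: that error says resolve-replace, with an extra "e"; the
stdlib file is spelled resolv-replace. With the spelling fixed it should
load, e.g.:

require 'resolv-replace'  # patch the socket classes to resolve names via the pure-Ruby Resolv library
require 'open-uri'
open('http://www.google.com').read  # the lookup now goes through Resolv instead of the C resolver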
After some pointers and ideas, things are running much faster. What
I've done:
1. Disabled IPv6 (thanks s.ross)
2. OpenDNS for lookup (thanks Ara, though those redirects on failed
lookups suck)
3. Started requiring 'resolv-replace' (thanks Roger Pack)
I'm using Ruby and the Internet together at near Tiger speeds. I added
"require 'resolv-replace'" to /usr/local/bin/gem as well, and I'm no
longer scared of "Updating metadata for 132 gems from http://gems.rubyforge.org"
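In case it's useful to anyone: the change to /usr/local/bin/gem is just
a one-line require near the top (the path is whatever `which gem`
reports for you). Mine now starts roughly like this; the shebang and
everything after the require are whatever your install already had:

#!/usr/local/bin/ruby
# load the pure-Ruby resolver before anything opens a socket
require 'resolv-replace'
# ... the rest of the original gem script, unchanged ...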
Thanks everyone. Hope this thread helps anyone else having similar
issues.
Chris
Somehow, editing /etc/hosts once (taking some hosts out) made things
faster, and adding the hosts back in afterward still worked fast. Weird.
Somebody elsewhere also said that running 'Assist me' in Network
preferences fixed their problem. Who knows what it really is :)
-Roger
count = 0
loop do
  # rewrite /etc/hosts every half second, alternating between one and
  # two localhost entries so the file's contents actually change each pass
  File.open('/etc/hosts', 'w') do |f|
    f.write "127.0.0.1 localhost\n"
    f.write "127.0.0.1 localhost2\n" if (count % 2) == 1
  end
  sleep 0.5
  count += 1
end
(save this as a file like 'renew_etc.rb', save a copy of your /etc/hosts
file somewhere, then run
sudo ruby renew_etc.rb)
You could try changing the sleep value.
Note that running this may have unforeseen effects.
I wouldn't believe it if it didn't seem to work.
Note that the default /etc/hosts file is
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
fe80::1%lo0 localhost
if you need to recreate it.
GL.
Is there a way I can have this enabled by default across all scripts?
You could add that code to the lib that gets loaded when you run
require 'rubygems'
since that's pulled in by just about every script.
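For example, near the top of rubygems.rb in your install's site_ruby
(the exact path depends on your build;
/usr/local/lib/ruby/site_ruby/1.8/rubygems.rb is just a guess for a
from-source 1.8 install):

# at the top of rubygems.rb, before anything does network I/O
require 'resolv-replace'  # pure-Ruby DNS for every script that loads rubygems
# ... the rest of rubygems.rb, unchanged ...

Setting RUBYOPT to include -rresolv-replace in your shell profile should
also work, and gets it loaded for every ruby invocation without editing
any files.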