iff flock(LOCK_EX | LOCK_NB) fails, then open again and retry
this should not be possible, should it??
note that the following code SHOULD deadlock - but does not.
CODE:
----CUT----
#!/usr/local/ruby-1.8.0/bin/ruby
threads = []
flags = (File::LOCK_EX | File::LOCK_NB)
system 'touch a b' rescue nil
threads << Thread.new do
  %w(a b).each do |path|
    fd = open(path)
    #until((ret = fd.flock flags)) # this works (deadlock)
    until((ret = open(path).flock flags)) # this doesn't
      Thread.pass
    end
    printf "0 LOCK_EX %s <%s>\n", path, ret
    sleep 0.5
  end
end

threads << Thread.new do
  %w(b a).each do |path|
    fd = open(path)
    #until((ret = fd.flock flags)) # this works (deadlock)
    until((ret = open(path).flock flags)) # this doesn't
      Thread.pass
    end
    printf "1 LOCK_EX %s <%s>\n", path, ret
    sleep 0.5
  end
end
Thread.abort_on_exception = true
threads.map{|thread| thread.join}
----CUT----
OUTPUT:
[ahoward@localhost flock]$ ./flock.rb
0 LOCK_EX a <0>
1 LOCK_EX b <0>
0 LOCK_EX b <0>
1 LOCK_EX a <0>
so, two threads are able to obtain exclusive locks on a file? perhaps i am
making an obvious mistake?? shouldn't each call to open(path).flock be
referring to the same open file table entry and, therefore, not affect the
thread's ability to obtain a LOCK_EX?
-a
====================================
| Ara Howard
| NOAA Forecast Systems Laboratory
| Information and Technology Services
| Data Systems Group
| R/FST 325 Broadway
| Boulder, CO 80305-3328
| Email: ara.t....@noaa.gov
| Phone: 303-497-7238
| Fax: 303-497-7259
| The difference between art and science is that science is what we understand
| well enough to explain to a computer. Art is everything else.
| -- Donald Knuth, "Discover"
| ~ > /bin/sh -c 'for lang in ruby perl; do $lang -e "print \"\x3a\x2d\x29\x0a\""; done'
====================================
[snip]
Aren't locks on a per-process basis? Ruby threads all run in the
same process.
Hal
> Ara.T.Howard wrote:
> > in the code below, i am able to obtain TWO exclusive locks on files using the
> > following logic:
> >
> > iff flock(LOCK_EX | LOCK_NB) fails, then open again and retry
> >
> > this should not be possible should it??
> >
> > note that the following code SHOULD deadlock - but does not.
>
> [snip]
>
> Aren't locks on a per-process basis?
no. notice that this will produce monotonically increasing timestamps:
~/eg/ruby > cat ./flock.rb
#!/usr/local/ruby-1.8.0/bin/ruby
require 'ftools'
require 'tempfile'
def compete who, f
  5.times do |i|
    f.flock File::LOCK_EX
    f.puts format("%s:%d @ %f", who, i, Time.now.to_f)
    f.flock File::LOCK_UN
  end
end

path = __FILE__ + '.out'
fd = open(path, File::WRONLY | File::TRUNC | File::CREAT)

if fork
  compete 'PARENT', fd
  Process.wait rescue nil
  open(path){|f| puts f.read}
  fd.close
  File.rm_f path if File.exist? path
else
  compete 'CHILD', fd
end
~/eg/ruby > ./flock.rb
PARENT:0 @ 1065472160.300986
PARENT:1 @ 1065472160.301536
CHILD:0 @ 1065472160.301911
CHILD:1 @ 1065472160.302190
CHILD:2 @ 1065472160.302303
CHILD:3 @ 1065472160.302411
CHILD:4 @ 1065472160.302520
PARENT:2 @ 1065472160.303144
PARENT:3 @ 1065472160.305026
PARENT:4 @ 1065472160.305172
flocks are durable across processes. i think they are held in the kernel open
file table. on my systems, they even work on nfs between machines! (this is
not normal)
> Ruby threads all run in the same process.
which is an even stronger argument for why no two threads (let alone
processes) should EVER be able to obtain an exclusive lock at the same time.
>On Tue, 7 Oct 2003, Hal Fulton wrote:
>flocks are durable across processes.
I sure hope not! If the process goes away, its locks better disappear, too!
> i think they are held in the kernel open file table.
I've seen them in their own tables (SVR4).
>> Ruby threads all run in the same process.
"Yeah, baby!" (in an Austin Powers dialect).
>which is an even stronger argument for why no two threads (let alone
>processes) should EVER be able to obtain an exclusive lock at the same time.
No, once a process has an exclusive lock, it's free to put additional exclusive
locks on it. It "owns" the file now, so to speak.
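for instance (a minimal sketch, assuming Ruby's File#flock wrapper over flock(2), where flock returns 0 on success and false when LOCK_NB fails; the scratch file name is made up), re-locking through the same descriptor is treated as a lock conversion, not a conflict:

```ruby
# Sketch: taking LOCK_EX twice through the SAME descriptor succeeds both
# times -- the second call is a no-op conversion, not a second competitor.
path = "relock-demo.tmp"                         # hypothetical scratch file
results = File.open(path, "w") do |f|
  r1 = f.flock(File::LOCK_EX | File::LOCK_NB)    # acquired: returns 0
  r2 = f.flock(File::LOCK_EX | File::LOCK_NB)    # same descriptor: still 0
  [r1, r2]
end
File.delete(path)
p results  # => [0, 0]
```

note, though, that on Linux this "ownership" is per open file description: a second *independent* open() of the same file does conflict, even within one process.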
Also, many locking mechanisms are merely advisory, not mandatory.
In Linux, the filesystem has to be mounted with a special flag to allow
actual exclusive file locks. I know that's not the issue here; I was just
pointing out a certain case.
Since you don't print an unlock message, you don't see when a lock is
released. I guess that when the IO instance "fd" is bound to is finalized,
the lock is released too. And that happens at the end of one block
iteration, since "fd" is local to the block. So, IMHO, your test does not
prove multiple locks at the same time.
Reopening the file over and over again (the branch labeled "this doesn't")
is bad practice IMHO, since you take a lock on an IO instance that you
immediately forget again ("fd" is not reassigned).
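this can be checked directly (a sketch assuming flock(2) semantics where each independent open() gets its own lock owner, and closing a descriptor drops its lock -- which is exactly what GC finalization of the forgotten IO does; the file name is made up):

```ruby
path = "flock-close-demo.tmp"      # hypothetical scratch file
File.open(path, "w") {}            # make sure it exists

f1 = File.open(path)
f2 = File.open(path)               # independent open(2) => independent lock owner

r1 = f1.flock(File::LOCK_EX | File::LOCK_NB)  # 0: acquired
r2 = f2.flock(File::LOCK_EX | File::LOCK_NB)  # false: conflicts, even in one process
f1.close                           # closing (or finalizing) f1 releases its lock
r3 = f2.flock(File::LOCK_EX | File::LOCK_NB)  # 0: now it succeeds

f2.close
File.delete(path)
p [r1, r2, r3]  # => [0, false, 0]
```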
> shouldn't each call to open(path).flock be
> referring to the same open file table entry and, therefore, not affect the
> thread's ability to obtain a LOCK_EX?
Reopening the same file IMHO creates a new entry, because you have
duplicate state with regard to seek position etc.
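the distinction can be sketched with dup(2): a duplicated descriptor shares the open file table entry (and therefore the lock), while a fresh open() gets a new entry (a sketch assuming Linux-style flock(2) semantics; the file name is made up):

```ruby
path = "flock-dup-demo.tmp"        # hypothetical scratch file
File.open(path, "w") {}

f1 = File.open(path)
f1.flock(File::LOCK_EX | File::LOCK_NB)       # take the lock via f1

d  = f1.dup                                   # dup(2): SAME entry, shares the lock
rd = d.flock(File::LOCK_EX | File::LOCK_NB)   # 0: no conflict with itself

f2 = File.open(path)                          # fresh open(2): NEW entry
r2 = f2.flock(File::LOCK_EX | File::LOCK_NB)  # false: conflicts with f1/d

[f1, d, f2].each {|io| io.close }
File.delete(path)
p [rd, r2]  # => [0, false]
```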
Try the implementation below that really deadlocks.
Regards
robert
# basic settings
Thread.abort_on_exception = true
FLAGS = (File::LOCK_EX | File::LOCK_NB)

# ruby "touch"
%w(a b).each {|path| File.open(path, "w").close }

# methods
def wait_for_lock(fd)
  until fd.flock(FLAGS)
    puts "Thread #{Thread.current["label"]} waiting"
    Thread.pass
  end
end

def create_thread(label, file1, file2)
  Thread.new(label, file1, file2) do |l,a,b|
    Thread.current["label"] = l
    File.open(a) do |fd_a|
      wait_for_lock fd_a
      puts "Thread #{l} Locked #{a}"
      sleep 0.5 # if removed, the deadlock disappears because of the timing
      File.open(b) do |fd_b|
        wait_for_lock fd_b
        puts "Thread #{l} Locked #{b}"
        sleep 0.5
        puts "Thread #{l} About to unlock #{b}"
      end
      puts "Thread #{l} About to unlock #{a}"
    end
  end
end

#
# MAIN
#
threads = []
threads << create_thread( "0", "a", "b" )
threads << create_thread( "1", "b", "a" )
threads.each{|t| t.join}
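for completeness, the classic way to make the two threads above deadlock-free is to acquire the locks in one global order (a sketch; the file names and helper are made up, and it assumes a Ruby where a blocking flock only blocks the calling thread):

```ruby
paths = %w(a.lock b.lock)          # hypothetical scratch files
paths.each {|path| File.open(path, "w").close }

# Take both locks in a single global order (sorted path names), so no thread
# can hold one file while waiting for the other -- the wait cycle cannot form.
def with_both_locks(x, y)
  first, second = [x, y].sort
  File.open(first) do |fa|
    fa.flock File::LOCK_EX
    File.open(second) do |fb|
      fb.flock File::LOCK_EX
      yield
    end                            # closing fb releases the inner lock
  end                              # closing fa releases the outer lock
end

done = []
t0 = Thread.new { with_both_locks("a.lock", "b.lock") { done << 0 } }
t1 = Thread.new { with_both_locks("b.lock", "a.lock") { done << 1 } }
[t0, t1].each {|t| t.join }
paths.each {|path| File.delete(path) }
p done.sort  # => [0, 1]
```

both threads now lock "a.lock" first, so whichever loses simply waits instead of holding a resource the other needs.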