File locking


Itaru Kitayama

May 5, 2021, 3:35:15 AM
to Spack
How do I check whether a filesystem supports locking? I'd like to invoke Spack processes concurrently if possible.

Gamblin, Todd

May 5, 2021, 4:20:56 AM
to Itaru Kitayama, Spack
Use `mount` and grep for 'lock'. For VAST and NFS you'll see `local_lock=none` in the options; for Lustre you'll see the `flock` option. Not sure about others.
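A small sketch of that check (the option names are only the ones mentioned above; other filesystems may advertise locking differently, so treat the mapping as a heuristic): scan the mount options and flag the lock-related ones.

```python
def lock_hint(options):
    """Heuristic reading of lock-related mount options (per the advice above)."""
    opts = set(options.split(","))
    if "local_lock=none" in opts:
        return "locking is delegated to the server (NFS/VAST style)"
    if "flock" in opts:
        return "flock enabled (Lustre style)"
    return "no lock option visible; test empirically"

# On Linux, check every mount point:
if __name__ == "__main__":
    with open("/proc/mounts") as f:
        for line in f:
            device, mountpoint, fstype, options = line.split()[:4]
            print(mountpoint, fstype, "->", lock_hint(options))
```

A "no lock option visible" result is inconclusive; the lock tests discussed below are the real check.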



Itaru Kitayama

May 5, 2021, 4:34:49 AM
to Gamblin, Todd, Spack
Hi Todd,
I’m on proprietary GPFS on JURECA at JSC.

Gamblin, Todd

May 5, 2021, 4:44:40 AM
to Itaru Kitayama, Spack
GPFS should support locking.  If you want to test things out, you could look at this file:


It’s the lock tests — they can be run in parallel on up to 7 PMI processes.  You’ll need to run Spack with a Python installation that has mpi4py installed, but if you can do that, you can see whether our lock tests pass on your filesystem.

Follow the instructions in the highlighted region at the link above (sorry you have to edit the file — we should probably parametrize that) and let us know what happens.

-Todd

Itaru Kitayama

May 5, 2021, 4:45:19 AM
to Gamblin, Todd, Spack
Awesome, thanks!

Gamblin, Todd

May 5, 2021, 4:46:30 AM
to Itaru Kitayama, Spack
And that should say “MPI” below — not “PMI”.  Let me know how it goes!

Itaru Kitayama

May 5, 2021, 4:47:14 AM
to Gamblin, Todd, Spack
Oops, I take it back: I’m on BeeGFS from time to time, and while I’m on it I can’t use Spack unless locking is set to off.

Itaru Kitayama

May 5, 2021, 5:10:12 AM
to Gamblin, Todd, Spack
$ spack test lock
usage: spack test [-h] SUBCOMMAND ...
spack test: error: argument SUBCOMMAND: invalid choice: 'lock' choose from:
find list remove results run status

I'm on the develop branch.

Gamblin, Todd

May 5, 2021, 9:34:44 AM
to Itaru Kitayama, Spack
Gah. Sorry about that. The old `spack test` is now called `spack unit-test`.



Itaru Kitayama

May 5, 2021, 4:58:16 PM
to Gamblin, Todd, Spack
$ spack unit-test lock
============================= test session starts ==============================
platform linux -- Python 3.5.3, pytest-3.2.5, py-1.4.34, pluggy-0.4.0
rootdir: /work/users/kitayama/projects/spack, inifile: pytest.ini

========================= no tests ran in 0.00 seconds =========================
ERROR: file not found: lock

It seems there are too many tests.

Gamblin, Todd

May 5, 2021, 5:00:18 PM
to Itaru Kitayama, Spack
Sigh. The syntax changed as well.  Sorry, I’m bad at this; I’ll update the docs in that file.  Do this:

spack unit-test -k lock.py

You should see tests parametrized by filesystem, like this:

(spackle):spack> spack unit-test -k lock.py
======================================= test session starts =======================================
platform darwin -- Python 3.7.8, pytest-3.2.5, py-1.4.34, pluggy-0.4.0
rootdir: /Users/gamblin2/Workspace/spack, inifile: pytest.ini
collected 2808 items                                                                               

lib/spack/spack/test/llnl/util/lock.py .............................................ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss...........
===================================== short test summary info =====================================
SKIP [44] /Users/gamblin2/Workspace/spack/lib/spack/spack/test/llnl/util/lock.py:154: requires filesystem: '/nfs/tmp2/gamblin2'
SKIP [44] /Users/gamblin2/Workspace/spack/lib/spack/spack/test/llnl/util/lock.py:154: requires filesystem: '/p/lscratch*/gamblin2'

==================================== slowest 30 test durations ====================================
0.83s call     lib/spack/spack/test/llnl/util/lock.py::test_complex_acquire_and_release_chain[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.27s call     lib/spack/spack/test/llnl/util/lock.py::test_lock_debug_output[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.18s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_write[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.17s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_with_multiple_readers_3_2[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.16s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_with_multiple_readers_3_2_ranges[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.15s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_with_multiple_readers_2_1_ranges[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.14s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_write_ranges_4[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.13s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_with_multiple_readers_2_3_ranges[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.13s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_write_ranges_3[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.13s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_with_multiple_readers_3_1_ranges[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.13s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_with_multiple_readers_2_2[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.13s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_read_ranges_3[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.13s call     lib/spack/spack/test/llnl/util/lock.py::test_read_lock_timeout_on_write_2[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.13s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_with_multiple_readers_3_1[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.13s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_write_3[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.13s call     lib/spack/spack/test/llnl/util/lock.py::test_read_lock_timeout_on_write_3[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.13s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_with_multiple_readers_2_1[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.13s call     lib/spack/spack/test/llnl/util/lock.py::test_read_lock_timeout_on_write_ranges_3[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.13s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_write_ranges_2[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.13s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_read_ranges_5[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.12s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_read_ranges_4[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.12s call     lib/spack/spack/test/llnl/util/lock.py::test_read_lock_timeout_on_write_ranges[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.12s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_write_ranges[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.12s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_read_2[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.12s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_write_2[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.12s call     lib/spack/spack/test/llnl/util/lock.py::test_read_lock_timeout_on_write_ranges_2[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.12s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_read_ranges_2[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.12s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_read_3[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.12s call     lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_read[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
0.12s call     lib/spack/spack/test/llnl/util/lock.py::test_read_lock_timeout_on_write[/var/folders/0s/q_y0zhfj6xdd5n7rn780zz6h001qr7/T]
====================================== 2664 tests deselected ======================================
===================== 56 passed, 88 skipped, 2664 deselected in 12.52 seconds =====================

Itaru Kitayama

May 5, 2021, 5:33:56 PM
to Gamblin, Todd, Spack
I still can't tell whether the directory backed by BeeGFS supports locking.

```

$ spack unit-test -k lock.py
============================= test session starts ==============================
platform linux -- Python 3.5.3, pytest-3.2.5, py-1.4.34, pluggy-0.4.0
rootdir: /work/users/kitayama/projects/spack, inifile: pytest.ini
collected 2809 items

lib/spack/spack/test/llnl/util/lock.py
.........................................................................................ssssssssssssssssssssssssssssssssssssssssssss...........
=========================== short test summary info ============================
SKIP [44] /work/users/kitayama/projects/spack/lib/spack/spack/test/llnl/util/lock.py:154:
requires filesystem: '/p/lscratch*/kitayama'

========================== slowest 30 test durations ===========================
0.86s call lib/spack/spack/test/llnl/util/lock.py::test_complex_acquire_and_release_chain[/work/users/kitayama]
0.84s call lib/spack/spack/test/llnl/util/lock.py::test_complex_acquire_and_release_chain[/tmp]
0.44s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_write[/tmp]
0.16s call lib/spack/spack/test/llnl/util/lock.py::test_lock_debug_output[/tmp]
0.15s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_write_ranges_4[/work/users/kitayama]
0.15s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_write_3[/work/users/kitayama]
0.15s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_with_multiple_readers_2_3_ranges[/work/users/kitayama]
0.15s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_write_ranges_3[/work/users/kitayama]
0.15s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_with_multiple_readers_3_2_ranges[/work/users/kitayama]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_read_ranges_5[/work/users/kitayama]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_with_multiple_readers_3_2[/work/users/kitayama]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_read_3[/work/users/kitayama]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_with_multiple_readers_3_1[/work/users/kitayama]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_read_lock_timeout_on_write_ranges_3[/work/users/kitayama]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_read_lock_timeout_on_write_3[/work/users/kitayama]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_read_ranges_4[/work/users/kitayama]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_write_ranges_2[/tmp]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_with_multiple_readers_3_2_ranges[/tmp]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_write_ranges_4[/tmp]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_with_multiple_readers_2_2[/work/users/kitayama]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_read_ranges_3[/work/users/kitayama]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_read_ranges_3[/tmp]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_write_3[/tmp]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_with_multiple_readers_3_2[/tmp]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_with_multiple_readers_2_3_ranges[/tmp]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_read_lock_timeout_on_write_2[/work/users/kitayama]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_with_multiple_readers_2_1[/work/users/kitayama]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_read_lock_timeout_on_write_ranges_3[/tmp]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_on_write_ranges_3[/tmp]
0.14s call lib/spack/spack/test/llnl/util/lock.py::test_write_lock_timeout_with_multiple_readers_3_1[/tmp]
============================ 2665 tests deselected =============================
=========== 100 passed, 44 skipped, 2665 deselected in 24.70 seconds ===========
```

Gamblin, Todd

May 5, 2021, 5:53:26 PM
to Itaru Kitayama, Spack
If the tests pass, it does.

Itaru Kitayama

May 5, 2021, 5:56:44 PM
to Gamblin, Todd, Spack
Right. I keep getting this:

$ spack install vim
==> Error: [Errno 37] No locks available
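For reference, errno 37 on Linux is ENOLCK, the very "no locks available" condition the lock tests exercise. The numeric value is platform-dependent, so a quick way to confirm is to look it up symbolically:

```python
import errno
import os

# ENOLCK is the symbolic name for this errno; its numeric value
# (37 on Linux) varies across platforms, so use the symbol.
print(errno.errorcode[errno.ENOLCK])   # ENOLCK
print(os.strerror(errno.ENOLCK))       # e.g. "No locks available"
```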

Gamblin, Todd

May 5, 2021, 6:06:47 PM
to Itaru Kitayama, Spack
The tests put a lock file in the directories in the `locations` array, and run all the tests for those paths. It kind of looks like the paths you put in locations either work (/work/users/kitayama, /tmp), or don’t exist (/p/lscratch*/kitayama) and are skipped.

When you run Spack normally, the database is actually wherever your Spack install is (unless you changed install_tree). So test with whatever *that* directory is; I suspect that's where you'll get this error.
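A minimal standalone probe along those lines (not Spack's actual lock test, just a sketch using POSIX fcntl locks): create a file in the directory in question and try to take a byte-range lock on it.

```python
import errno
import fcntl
import os
import tempfile


def supports_locking(directory):
    """Return True if a POSIX byte-range lock can be taken on a file
    in `directory`, False if the filesystem refuses with ENOLCK."""
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)  # acquire, non-blocking
        fcntl.lockf(fd, fcntl.LOCK_UN)                  # release
        return True
    except OSError as err:
        if err.errno == errno.ENOLCK:  # "No locks available"
            return False
        raise
    finally:
        os.close(fd)
        os.remove(path)


if __name__ == "__main__":
    import sys
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    print(target, "supports locking:", supports_locking(target))
```

Pointing this at the install tree (or wherever the Spack database lives) should reproduce the `[Errno 37]` condition if the filesystem is the problem.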

Itaru Kitayama

May 5, 2021, 6:20:46 PM
to Gamblin, Todd, Spack
I see. I just verified that my home directory is backed by NFS with the local_lock option set to none, and the tests are printing streams of Fs.

I'll try to move my user Spack configuration to the BeeGFS-backed regions.

Thank you Todd for following up on my questions.