AFP + Lion


Michael Newbery

Feb 28, 2013, 4:41:07 AM
to zfs-...@googlegroups.com
I have an issue that I don't think I've seen reported before, although it is related to a number of documented issues.

I am sharing a number of pools over AFP, on a machine running Lion (A MacPro 1,1, so running Mountain Lion isn't an option).
The pool is called "Wells" and there are several filesystems defined, one per user.
Wells
Wells/user1
Wells/user2
Wells/user3
Wells/user4

I am sharing this via AFP, and I have the necessary patch https://github.com/joshado/liberate-applefileserver to enable this.

1. All the shares show up called simply "Wells". This appears to be related to the Known Issue http://code.google.com/p/maczfs/wiki/KnownIssues
Any new filesystems that you create, in addition to every zpool's default root filesystem, will still show up in Finder as being additional instances of the zpool's own name. This is a purely cosmetic bug and can be ignored. You can create as many ZFSes as you want on each zpool, and name them whatever you want, and use them however you want. They'll just appear as a bunch of redundantly-named icons on the Finder sidebar.

On the client machine, the names appear as:
Wells
Wells-1
Wells-1-1
Wells-2
Wells-3
Wells1

And different client machines will see different variations on that theme.

This makes selecting the right share extremely difficult.

2. After a random amount of time (minutes, days, weeks) the Share stops working. As in, you can no longer connect to the server, which reports that file sharing is not enabled. However, clients already connected can continue to work just fine. This APPEARS to be related to 10.7 and to the liberate-applefileserver patch.

The cure that works for me is to
a) turn off each of the ZFS shares, one at a time, in System Preferences. This is very slow, tens of seconds each.
b) kill the file server via launchctl stop com.apple.AppleFileServer (which needs to be done twice)
c) Turn off file sharing in System Preferences
d) Turn on file sharing in System Preferences
e) Add the shares back, one at a time (this proceeds at normal speed)

Note that you can't reverse (d) and (e) or do (e) immediately after (a).
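For what it's worth, steps (a), (b) and (e) can be sketched as a script. This only collects and prints the commands rather than running them, since (c) and (d) I still do in System Preferences; using `sharing -r`/`sharing -a` as a scriptable equivalent of the Sharing preference pane is my assumption, and the share names/paths are placeholders.

```shell
#!/bin/sh
# Sketch of the recovery sequence above: print (don't run) the commands.
NL='
'
cmds=""
plan() { cmds="$cmds$*$NL"; }

# (a) remove each ZFS share, one at a time
for s in user1 user2 user3 user4; do
    plan sharing -r "$s"
done

# (b) stop the AFP server -- empirically this needs doing twice
plan launchctl stop com.apple.AppleFileServer
plan launchctl stop com.apple.AppleFileServer

# (c) turn File Sharing off and (d) back on in System Preferences here

# (e) add the shares back, one at a time
for s in user1 user2 user3 user4; do
    plan sharing -a "/Volumes/Wells/$s"
done

printf '%s' "$cmds"
```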

As I am adding/removing the ZFS shares, I get the following entries in the log, which makes me suspect a problem with the liberate-applefileserver patch:
28/02/13 7:01:06.464 PM [0x0-0x725725].com.apple.systempreferences: System Preferences(19292,0x7fff7b5c8960) malloc: reference count underflow for 0x40044dfa0, break on auto_refcount_underflow_error to debug.



Alex Blewitt

Feb 28, 2013, 4:57:22 AM
to zfs-...@googlegroups.com
On 28 Feb 2013, at 09:41, Michael Newbery wrote:

> 2. After a random amount of time (minutes, days, weeks) the Share stops working. As in, you can no longer connect to the server, which reports that file sharing is not enabled. However, clients already connected can continue to work just fine. This APPEARS to be related to 10.7 and to the liberate-applefileserver patch.

Randomly stopping working is a property of Apple File Sharing. I made the mistake of going that way once, realised my mistake, and then backed out to use a real sharing protocol like NFS instead.

Alex



Michael Newbery

Feb 28, 2013, 5:38:52 AM
to zfs-...@googlegroups.com
:)
I recall one of Henry Spencer's sigs from the days of USENET:

 "The N in NFS stands for Not, | Henry Spencer at U of Toronto Zoology or 
    Need, or perhaps Nightmare"| uunet!attcan!utzoo!henry he...@zoo.toronto.edu.

The random stops are, as far as I can tell, only when sharing ZFS. And only since Lion.

I'm using the ZFS server as a Time Machine server; that is, each Wells/userN filesystem contains a sparsebundle that TM backs up to. For things to work as seamlessly as possible, it should serve AFP.


Fastmail Jason

Feb 28, 2013, 8:25:03 AM
to zfs-...@googlegroups.com
Try a small script to keep tickling the pool(s) every so often and see if it's like the USB issue of disks sleeping.
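Something along these lines, run every so often from cron or launchd (a rough sketch; the pool path and filesystem names are placeholders for whatever your setup uses):

```shell
#!/bin/sh
# Touch a marker file in each filesystem so the pool never sits idle.
# POOL defaults to a scratch directory so this sketch runs anywhere;
# on a real server it would be the pool mountpoint, e.g. /Volumes/Wells.
POOL="${POOL:-$(mktemp -d)}"
for fs in user1 user2 user3 user4; do
    mkdir -p "$POOL/$fs"        # stands in for the mounted ZFS filesystem
    touch "$POOL/$fs/.tickle"
done
```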


--
Jason Belec
Sent from my iPad

X Bytor

Feb 28, 2013, 11:38:34 AM
to zfs-...@googlegroups.com
I saw the same problem when moving to Lion. After trying the patch you mention and a couple of other things, the solution I came up with was to use Samba on my Mac and disable the default file sharing. I never could get printer sharing to work correctly so I have to sneaker-net pdfs from my Win7 boxes to my Mac server for printing.

-X

Fastmail Jason

Feb 28, 2013, 12:13:46 PM
to zfs-...@googlegroups.com
Just install Bonjour on the Win7 boxes; simple, quick, and works a treat.



--
Jason Belec
Sent from my iPad

On 2013-02-28, at 11:38 AM, X Bytor <xby...@gmail.com> wrote:

> I saw the same problem when moving to Lion. After trying the patch you mention and a couple of other things, the solution I came up with was to use Samba on my Mac and disable the default file sharing. I never could get printer sharing to work correctly so I have to sneaker-net pdfs from my Win7 boxes to my Mac server for printing.
>
> -X

Michael Newbery

Feb 28, 2013, 1:44:58 PM
to zfs-...@googlegroups.com

On 1/03/2013, at 2:25 AM, Fastmail Jason <jason...@belecmartin.com> wrote:

> Try a small script to keep tickling the pool(s) every so often and see if its like the USB issue of sleeping.

Time Machine runs every hour, on at least one of the filesystems in the pool. There is also another script that touches another filesystem in the pool every hour (not synced to the TM tasks), though that filesystem is not shared.

Do you suggest touching from the server (touching the Wells/* filesystems directly) or through the shares from an external machine?


Michael Newbery

Feb 28, 2013, 1:46:49 PM
to zfs-...@googlegroups.com

On 1/03/2013, at 5:38 AM, X Bytor <xby...@gmail.com> wrote:

> I saw the same problem when moving to Lion. After trying the patch you mention and a couple of other things, the solution I came up with was to use Samba on my Mac and disable the default file sharing. I never could get printer sharing to work correctly so I have to sneaker-net pdfs from my Win7 boxes to my Mac server for printing.
>

Do you mean use the built-in SMB sharing from the System Prefs, or configure SMB directly, or install a different Samba via MacPorts/Homebrew/Fink?


Jason

Feb 28, 2013, 1:48:21 PM
to zfs-...@googlegroups.com
Try directly. The goal is to try to find what is dropping the ball.

Jason
Sent from my iPhone

X Bytor

Feb 28, 2013, 3:14:37 PM
to zfs-...@googlegroups.com
I downloaded the source from samba.org and did my own build and config.

Jason

Feb 28, 2013, 4:14:52 PM
to zfs-...@googlegroups.com
Does that mean all is again perfect in the universe?


Jason
Sent from my iPhone

X Bytor

Feb 28, 2013, 5:00:02 PM
to zfs-...@googlegroups.com
On Thu, Feb 28, 2013 at 3:14 PM, Jason <jason...@belecmartin.com> wrote:
> Does that mean all is again perfect in the universe?


Using samba has kept things rock solid for file sharing. With Apple's stuff, my mounts would randomly drop, especially if they were not actively being used. The only annoying bit is that files I copy from Win7 to the Mac don't have me as the owner or have appropriate permissions set. There's probably something in the samba config file that will fix this.
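If it's the usual ownership mismatch, something like this in the share definition would be my first guess (a hypothetical smb.conf fragment; `force user`, `create mask` and `directory mask` are standard Samba options, but the share name, path and user here are made up):

```ini
[mydata]
   path = /Volumes/tank/mydata
   writable = yes
   ; make everything copied in from Win7 land owned by the Mac-side user
   force user = someuser
   create mask = 0644
   directory mask = 0755
```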

I did get my Mac printer to show up on Win7. The only problem is that things don't print out. It's not a big problem for me because I do all my Photoshop work (and printing) on my Mac.

-X

Michael Newbery

Mar 1, 2013, 5:23:24 AM
to zfs-...@googlegroups.com

On 1/03/2013, at 7:48 AM, Jason <jason...@belecmartin.com> wrote:

> Try directly. The goal is to try to find what is dropping the ball.


Thanks. I've fired off a launchd job to touch a file in each shared filesystem every half hour. We'll see if that does anything.


Graham Perrin

Mar 7, 2013, 2:52:17 PM
to zfs-...@googlegroups.com
On Thursday, 28 February 2013 22:00:02 UTC, xbytor wrote:
 
… With Apples stuff, my mounts would randomly drop especially if it was not actively being used. …

I wonder whether the disconnects were normal. Does it help to consider AFP replay cache capabilities?

Daniel Bethe

Mar 7, 2013, 4:42:53 PM
to zfs-...@googlegroups.com
What do you say now, Michael?  Any news?

Michael Newbery

Mar 8, 2013, 2:41:07 AM
to zfs-...@googlegroups.com
At the moment it is "no news is good news". I have a launchd job that touches a file on every shared filesystem every 30 min, and thus far I have seen no sign of AFP forgetting the shares. However, the problem has previously gone a couple of weeks before recurring, so we have not so much proof as the absence of disproof.

I shall certainly report if the cure does fail.

Thanks for the interest Daniel.

Daniel Bethe

Mar 8, 2013, 3:13:32 AM
to zfs-...@googlegroups.com

> At the moment it is "no news is good news". I have a launchd job that touches a file on every shared filesystem every 30 min, and thus far I have seen no sign of AFP forgetting the shares. However, the problem has previously gone a couple of weeks before recurring, so we have not so much proof as the absence of disproof.


Ok, that's cool. I reported this thread on the GitHub site for that injected-library project, just in case. This is kind of important! It's another example of Apple increasing the NIH (not invented here) factor and walling out alternatives. They do a lot of inclusive stuff, and they do some noninclusive stuff. So we have to be aware and unify our responses.

Michael Newbery

Mar 23, 2013, 7:46:33 PM
to zfs-...@googlegroups.com

On 8/03/2013, at 9:13 PM, Daniel Bethe <d...@smuckola.org> wrote:


>> At the moment it is "no news is good news". I have a launchd job that touches a file on every shared filesystem every 30 min, and thus far I have seen no sign of AFP forgetting the shares. However, the problem has previously gone a couple of weeks before recurring, so we have not so much proof as the absence of disproof.
>
> Ok that's cool. I reported this thread on the github site for that injected library project, just in case. This is kind of important! It's another example of Apple increasing the NIH (not invented here) factor, and walling out alternatives. They do a lot of inclusive stuff, and they do some noninclusive stuff. So we have to be aware and unify our responses.

Well, bad news. The touch fix doesn't seem to have worked, in that I have had the AFP file server 'go away' as before.

However, the 'touch' fix may have improved things, in that it had been working for longer than I remember it doing before.

The last time MAY have been correlated with me sharing a DVD off the server, but that may also have been coincidence.

By the way, with reference to the injected library project, when I am creating and destroying shares on AFP, these are the messages I see in the console log.

24/03/13 12:37:47.183 PM [0x0-0xc90c9].com.apple.systempreferences: System Preferences(10700,0x1027d9000) malloc: reference count underflow for 0x400498c20, break on auto_refcount_underflow_error to debug.
24/03/13 12:37:47.184 PM [0x0-0xc90c9].com.apple.systempreferences: System Preferences(10700,0x1027d9000) malloc: reference count underflow for 0x40043fe80, break on auto_refcount_underflow_error to debug.
24/03/13 12:37:57.640 PM [0x0-0xc90c9].com.apple.systempreferences: System Preferences(10700,0x10392e000) malloc: reference count underflow for 0x40043fe80, break on auto_refcount_underflow_error to debug.
24/03/13 12:37:57.641 PM [0x0-0xc90c9].com.apple.systempreferences: System Preferences(10700,0x10392e000) malloc: reference count underflow for 0x400498c20, break on auto_refcount_underflow_error to debug.
24/03/13 12:37:57.641 PM [0x0-0xc90c9].com.apple.systempreferences: System Preferences(10700,0x10392e000) malloc: reference count underflow for 0x40043fe80, break on auto_refcount_underflow_error to debug.
24/03/13 12:37:57.689 PM [0x0-0xc90c9].com.apple.systempreferences: System Preferences(10700,0x7fff7a7b7960) malloc: reference count underflow for 0x400498c20, break on auto_refcount_underflow_error to debug.
24/03/13 12:38:10.372 PM [0x0-0xc90c9].com.apple.systempreferences: System Preferences(10700,0x7fff7a7b7960) malloc: reference count underflow for 0x40043fe80, break on auto_refcount_underflow_error to debug.
24/03/13 12:38:14.859 PM [0x0-0xc90c9].com.apple.systempreferences: System Preferences(10700,0x10392e000) malloc: reference count underflow for 0x400498c20, break on auto_refcount_underflow_error to debug.
24/03/13 12:38:14.859 PM [0x0-0xc90c9].com.apple.systempreferences: System Preferences(10700,0x10392e000) malloc: reference count underflow for 0x40043fe80, break on auto_refcount_underflow_error to debug.
24/03/13 12:38:14.859 PM [0x0-0xc90c9].com.apple.systempreferences: System Preferences(10700,0x10392e000) malloc: reference count underflow for 0x400498c20, break on auto_refcount_underflow_error to debug.
24/03/13 12:38:14.859 PM [0x0-0xc90c9].com.apple.systempreferences: System Preferences(10700,0x10392e000) malloc: reference count underflow for 0x40043fe80, break on auto_refcount_underflow_error to debug.
24/03/13 12:38:14.991 PM [0x0-0xc90c9].com.apple.systempreferences: System Preferences(10700,0x7fff7a7b7960) malloc: reference count underflow for 0x400498c20, break on auto_refcount_underflow_error to debug.
24/03/13 12:38:23.563 PM [0x0-0xc90c9].com.apple.systempreferences: System Preferences(10700,0x10392e000) malloc: reference count underflow for 0x40043fe80, break on auto_refcount_underflow_error to debug.
24/03/13 12:38:23.566 PM [0x0-0xc90c9].com.apple.systempreferences: System Preferences(10700,0x7fff7a7b7960) malloc: reference count underflow for 0x400498c20, break on auto_refcount_underflow_error to debug.



Fastmail Jason

Mar 23, 2013, 7:53:43 PM
to zfs-...@googlegroups.com
Sorry to hear. Can you detail the steps you take to connect over AFP? I'd like to set up a system matching what you are doing.



--
Jason Belec
Sent from my iPad

Michael Newbery

Mar 25, 2013, 2:37:24 PM
to zfs-...@googlegroups.com

On 24/03/2013, at 12:53 PM, Fastmail Jason <jason...@belecmartin.com> wrote:

> Sorry to hear. Can you detail the steps for you to connect over AFP? I'd like to setup a system matching what you are doing.

OK, thanks. How much detail would you like? What sort of things would you like to know?

Fastmail Jason

Mar 25, 2013, 2:45:41 PM
to zfs-...@googlegroups.com
As much as you can bear to detail. I run a lot of systems, so replication should be doable, and thus a proper solution.


--
Jason Belec
Sent from my iPad

Michael Newbery

Mar 31, 2013, 9:26:25 PM
to zfs-...@googlegroups.com
On 26/03/2013, at 7:45 AM, Fastmail Jason <jason...@belecmartin.com> wrote:

> As much as you can bear to detail as I run a lot of systems so replication should be doable and thus a proper solution.

Sorry for the delay in responding. Let me know if you want any more details (and whether you want them off-list; this is currently going to the whole list in case it proves interesting).

The system is a MacPro1,1 at 2.66 GHz with 5 GB of memory. The OS is Lion (10.7.5, build 11G63).

There are four disks. The boot disk is a standard HFS+ 2 TB Seagate ST2000DL003-9VT166.

The ZFS pool is on three 2 TB Seagate ST2000DM001-1CH164 drives. All disks are on the internal SATA bus.
% diskutil list
/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk0
   1:                        EFI                         209.7 MB   disk0s1
   2:                  Apple_HFS Correlli                1.8 TB     disk0s2
   3:                 Apple_Boot Recovery HD             650.0 MB   disk0s3
   4:                  Apple_HFS Elgar                   100.0 GB   disk0s4
   5:                  Apple_HFS Delius                  52.0 GB    disk0s5
   6:                  Apple_HFS Haydn                   7.1 GB     disk0s6
/dev/disk1
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk1
   1:                        EFI                         209.7 MB   disk1s1
   2:                        ZFS Wells                   2.0 TB     disk1s2
/dev/disk2
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk2
   1:                        EFI                         209.7 MB   disk2s1
   2:                        ZFS Wells                   2.0 TB     disk2s2
/dev/disk3
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk3
   1:                        EFI                         209.7 MB   disk3s1
   2:                        ZFS Wells                   2.0 TB     disk3s2


% zpool status;zpool iostat -v
  pool: Wells
 state: ONLINE
 scrub: scrub completed with 0 errors on Sun Mar 31 05:29:00 2013
config:

NAME         STATE     READ WRITE CKSUM
Wells        ONLINE       0     0     0
 raidz1     ONLINE       0     0     0
   disk1s2  ONLINE       0     0     0
   disk2s2  ONLINE       0     0     0
   disk3s2  ONLINE       0     0     0

errors: No known data errors
                capacity     operations    bandwidth
pool          used  avail   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
Wells        1.50T  3.96T     43     43  3.22M   405K
  raidz1     1.50T  3.96T     43     43  3.22M   405K
    disk1s2      -      -     22     29  1.70M   205K
    disk2s2      -      -     23     30  1.70M   205K
    disk3s2      -      -     22     30  1.70M   205K
-----------  -----  -----  -----  -----  -----  -----

% zfs list
NAME                                        USED  AVAIL  REFER  MOUNTPOINT
Wells                                      1020G  2.58T   579K  /Volumes/Wells
Wells/Mailstore                            5.10G  2.58T  3.99G  /Volumes/Wells/Mailstore
Wells/Mailstore@2013-02-20                 22.6K      -  25.3K  -
Wells/Mailstore@2013-03-09                  116M      -  3.97G  -
Wells/Mailstore@2013-03-17                  110M      -  3.97G  -
Wells/Mailstore@2013-03-25                 90.8M      -  3.98G  -
Wells/Mailstore@2013-03-26                 68.8M      -  3.98G  -
Wells/Mailstore@2013-03-27                 77.0M      -  3.98G  -
Wells/Mailstore@2013-03-28                 63.5M      -  3.99G  -
Wells/Mailstore@2013-03-29                 62.7M      -  3.99G  -
Wells/Mailstore@2013-03-30                 61.2M      -  3.99G  -
Wells/Mailstore@2013-03-31                 19.9M      -  3.99G  -
Wells/Mailstore@2013-04-01T00:12:00_13:00  19.4M      -  3.99G  -
Wells/Mailstore@2013-04-01T01:12:00_13:00  19.4M      -  3.99G  -
Wells/Mailstore@2013-04-01T02:12:00_13:00  19.4M      -  3.99G  -
Wells/Mailstore@2013-04-01T03:12:00_13:00  19.4M      -  3.99G  -
Wells/Mailstore@2013-04-01T04:12:00_13:00  19.4M      -  3.99G  -
Wells/Mailstore@2013-04-01T05:12:00_13:00  19.4M      -  3.99G  -
Wells/Mailstore@2013-04-01T06:12:00_13:00  19.4M      -  3.99G  -
Wells/Mailstore@2013-04-01T07:12:00_13:00  19.4M      -  3.99G  -
Wells/Mailstore@2013-04-01T08:12:00_13:00  37.6M      -  3.99G  -
Wells/Mailstore@2013-04-01T09:12:00_13:00  36.9M      -  3.99G  -
Wells/Mailstore@2013-04-01T10:12:00_13:00  19.4M      -  3.99G  -
Wells/Mailstore@2013-04-01T11:12:00_13:00  19.5M      -  3.99G  -
Wells/Mailstore@2013-04-01T12:12:00_13:00  19.5M      -  3.99G  -
Wells/Mailstore@2013-04-01T13:12:00_13:00  19.4M      -  3.99G  -
Wells/user01                                282K  2.58T  26.6K  /Volumes/Wells/user01
Wells/user01@2013-02-21                    21.3K      -  25.3K  -
Wells/user01@2013-03-08                    21.3K      -  26.6K  -
Wells/user01@2013-03-16                    21.3K      -  26.6K  -
Wells/user01@2013-03-24                    21.3K      -  26.6K  -
Wells/user01@2013-03-25                    21.3K      -  26.6K  -
Wells/user01@2013-03-26                    21.3K      -  26.6K  -
Wells/user01@2013-03-27                    21.3K      -  26.6K  -
Wells/user01@2013-03-28                    21.3K      -  26.6K  -
Wells/user01@2013-03-29                    21.3K      -  26.6K  -
Wells/user01@2013-03-30                    21.3K      -  26.6K  -
Wells/user01@2013-03-31                    21.3K      -  26.6K  -
Wells/user01@2013-04-01                    21.3K      -  26.6K  -
Wells/user02                               268K  2.58T  25.3K  /Volumes/Wells/user02
Wells/user02@2013-02-21                    21.3K      -  25.3K  -
Wells/user02@2013-03-08                    20.0K      -  25.3K  -
Wells/user02@2013-03-16                    20.0K      -  25.3K  -
Wells/user02@2013-03-24                    20.0K      -  25.3K  -
Wells/user02@2013-03-25                    20.0K      -  25.3K  -
Wells/user02@2013-03-26                    20.0K      -  25.3K  -
Wells/user02@2013-03-27                    20.0K      -  25.3K  -
Wells/user02@2013-03-28                    20.0K      -  25.3K  -
Wells/user02@2013-03-29                    21.3K      -  26.6K  -
Wells/user02@2013-03-30                    20.0K      -  25.3K  -
Wells/user02@2013-03-31                    20.0K      -  25.3K  -
Wells/user02@2013-04-01                    20.0K      -  25.3K  -
Wells/user03                               623G  2.58T   584G  /Volumes/Wells/user03
Wells/user03@2013-02-21                   22.6K      -  25.3K  -
Wells/user03@2013-03-08                   1.26G      -   529G  -
Wells/user03@2013-03-16                    296M      -   529G  -
Wells/user03@2013-03-24                    310M      -   530G  -
Wells/user03@2013-03-25                    299M      -   530G  -
Wells/user03@2013-03-26                    337M      -   536G  -
Wells/user03@2013-03-27                    360M      -   548G  -
Wells/user03@2013-03-28                    307M      -   548G  -
Wells/user03@2013-03-29                    441M      -   554G  -
Wells/user03@2013-03-30                    349M      -   582G  -
Wells/user03@2013-03-31                    242M      -   583G  -
Wells/user03@2013-04-01                    276M      -   584G  -
Wells/user04                               393G  2.58T   353G  /Volumes/Wells/user04
Wells/user04@2013-02-21                   11.4M      -  1.05G  -
Wells/user04@2013-03-08                   10.6G      -   325G  -
Wells/user04@2013-03-16                   10.5G      -   350G  -
Wells/user04@2013-03-24                    771M      -   345G  -
Wells/user04@2013-03-25                    568M      -   345G  -
Wells/user04@2013-03-26                    651M      -   346G  -
Wells/user04@2013-03-27                    731M      -   346G  -
Wells/user04@2013-03-28                    585M      -   346G  -
Wells/user04@2013-03-29                    747M      -   346G  -
Wells/user04@2013-03-30                    897M      -   346G  -
Wells/user04@2013-03-31                    862M      -   346G  -
Wells/user04@2013-04-01                    621M      -   346G  -

There is a launchd job that snapshots the userxx filesystems each day and trims old versions. As you can see, there is currently no activity on user01 or user02. These (user01-user04) are the Time Machine filesystems. There is also a backup of the mail system which is snapshotted hourly (this is rsync'd from another machine).

In addition, there is a launchd job that touches each filesystem once every half hour.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
        <string>local.toucher</string>
    <key>ProgramArguments</key>
        <array>
            <string>/Users/user04/ztouch.sh</string>
        </array>
    <key>StartCalendarInterval</key>
        <array>
            <dict>
                <key>Minute</key>
                    <integer>0</integer>
            </dict>
            <dict>
                <key>Minute</key>
                    <integer>30</integer>
            </dict>
        </array>
</dict>
</plist>

% cat /Users/user04/ztouch.sh
#!/bin/sh

# Touch each of the ZFS file systems
fs="user01 user02 user03 user04"
pfx=/Volumes/Wells
sfx=beacon

for i in $fs
do
   touch "$pfx/$i/$sfx"
done