[lio-utils] How to disable acls?

sophana

Mar 9, 2011, 7:12:08 AM
to Linux-iSCSI.org Target Users
Hi

I'd like to set up a 30-node GFS2 filesystem over iSCSI on Scientific
Linux 6 (a RHEL 6 clone).
The application is a compute cluster with many tasks launched by SGE.

I was able to compile and test LIO successfully.
Now I would like to set up GFS2.
The problem is that I don't want to set up node ACLs; I just want
login/password authentication (CHAP?).

If my test works, I want to set up about 20 GFS2 filesystems, each
spanning 30 nodes.
Managing the ACLs will be a pain every time I add a new node to the
cluster.

lio_node has the option --enableaclmode, but there is no
--disableaclmode.

How can I do this?

Your documentation says that CHAP authentication is not recommended,
but it doesn't say ACLs are mandatory...

Thanks

Nicholas A. Bellinger

Mar 13, 2011, 12:24:21 AM
to linux-iscsi-...@googlegroups.com, Linux-iSCSI.org Target Dev
On Wed, 2011-03-09 at 04:12 -0800, sophana wrote:
> Hi
>
> I'd like to set up a 30-node GFS2 filesystem over iSCSI on Scientific
> Linux 6 (a RHEL 6 clone).
> The application is a compute cluster with many tasks launched by SGE.
>

Thanks for the background on your setup. I have also CC'ed the main LIO
development list, as not many people follow the target-users list.

> I was able to compile and test LIO successfully.
> Now I would like to set up GFS2.
> The problem is that I don't want to set up node ACLs; I just want
> login/password authentication (CHAP?).
>

For any type of production usage with cluster clients <-> shared
storage, I would strongly recommend against this kind of shortcut; go
ahead and use explicit NodeACLs for all connected initiators.
Especially if you are using SPC-3 Persistent Reservations (which I
assume is the case for GFS + SCSI fencing), defining each of the iSCSI
initiators using explicit NodeACLs in
/sys/kernel/config/target/$FABRIC/$TARGET_WWN/tpgt_$TPGT/acls/ is going
to be a hard requirement.
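
For reference, a minimal sketch of doing that straight through configfs
(the IQNs below are made-up placeholders, not anything from your setup):

    # Hypothetical target and initiator IQNs, for illustration only
    TARGET=iqn.2011-03.org.example:gfs2-store
    INIT=iqn.2011-03.org.example:node01
    TPG=/sys/kernel/config/target/iscsi/$TARGET/tpgt_1

    # A mkdir under acls/ instantiates an explicit NodeACL for $INIT
    mkdir $TPG/acls/$INIT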

> If my test works, I want to set up about 20 GFS2 filesystems, each
> spanning 30 nodes.
> Managing the ACLs will be a pain every time I add a new node to the
> cluster.
>

The reasoning here is that the LIO SCSI target is required to keep
fabric-dependent Initiator Port state for each individual SCSI
Initiator Port accessing the shared storage LUN (e.g., each iSCSI
session). In TPG demo-mode there are a handful of cases where dynamic
sessions (e.g., no ACL group registered with configfs) do not function
as expected with Persistent Reservations operation+metadata. There is
a way to work around this by enabling the TPG attribute
'cache_dynamic_acls' when 'generate_node_acls=1' is enabled. But
again, and I will repeat: TPG demo-mode operation is *not* intended for
any type of serious production usage, and the above workaround has not
been thoroughly tested with cluster clients.
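
For completeness, if you do want to experiment with that workaround on
a test TPG, it is just two attribute writes (reusing the $TPG
placeholder from the sketch above):

    # Accept sessions from initiators that have no explicit NodeACL
    echo 1 > $TPG/attrib/generate_node_acls
    # Cache the dynamically generated ACLs between sessions
    echo 1 > $TPG/attrib/cache_dynamic_acls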

Note that it is possible to convert initiators connected to a TPG
endpoint in demo-mode by creating the explicit NodeACL once the
initiator has logged in. This means you also need to create a MappedLUN
layout that follows the TPG LUN layout, in order to ensure proper LUN
access once the target configuration has been saved into /etc/target/
and restarted.
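
Assuming the TPG exports a single LUN as lun_0, and reusing the
placeholders from above, the conversion for one logged-in initiator
would look roughly like this (the symlink name is arbitrary):

    # Explicit NodeACL, plus a MappedLUN 0 inside it
    mkdir $TPG/acls/$INIT
    mkdir $TPG/acls/$INIT/lun_0
    # Map NodeACL MappedLUN 0 -> TPG LUN 0 via a configfs symlink
    ln -s $TPG/lun/lun_0 $TPG/acls/$INIT/lun_0/mapped_lun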

> lio_node has the option --enableaclmode, but there is no
> --disableaclmode.
>
> How can I do this?
>

Currently it's not possible, as we expect the CHAP authentication to be
set inside the explicit NodeACL configfs group under
$TARGET_WWN/tpgt_$TPGT/acls/$INITIATOR_WWN/auth/.
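
So with the placeholders from above, per-initiator CHAP would be
configured along these lines (credentials obviously made up):

    # CHAP credentials live in the NodeACL's auth/ group
    echo chapuser > $TPG/acls/$INIT/auth/userid
    echo chappass > $TPG/acls/$INIT/auth/password
    # Require authentication for logins to this TPG
    echo 1 > $TPG/attrib/authentication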

> Your documentation says that CHAP authentication is not recommended,
> but it doesn't say ACLs are mandatory...
>

With the current code this is not possible, and for production cluster
clients please avoid it. With GFS+iSCSI+SCSI fencing, I would
recommend using explicit NodeACLs with a 1:1 TPG LUN <-> NodeACL
MappedLUN layout. Depending upon the number of iSCSI LUNs being
exported per physical target machine, you may also want to distribute
these across multiple TargetName+TargetPortalGroupTag endpoints.
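
The per-initiator steps also script easily, which should take most of
the pain out of adding nodes; a rough sketch for your 30-node case,
again with a made-up IQN naming scheme:

    # One explicit NodeACL + 1:1 MappedLUN per cluster node
    for i in $(seq -w 1 30); do
        INIT=iqn.2011-03.org.example:node$i
        mkdir $TPG/acls/$INIT
        mkdir $TPG/acls/$INIT/lun_0
        ln -s $TPG/lun/lun_0 $TPG/acls/$INIT/lun_0/mapped_lun
    done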

--nab
