The nfs-ganesha project is developing Open Source solutions for NFS and 9P based file servers. The development of nfs-ganesha started at CEA, Paris, France, as a solution for providing NFS access to their tape archive library. It has since grown into a full-featured file server that supports NFSv3, NFSv4.0, NFSv4.1, and NFSv4.2. We have even thrown in 9P protocol support.
Nfs-ganesha is the main file server project. It supports both the NFS and 9P protocols. The NFS support includes NFSv3 with integrated NLM (lock manager) and MNT (mountd) services. It also supports NFSv4, including pNFS (v4.1) and a subset of v4.2 commands. For further information, check the nfs-ganesha wiki. The source code can be found in the git repository.
The issue tracker here is the best place to bring issues to our attention. However, if you have patches to submit, we use GerritHub for that; please see src/CONTRIBUTING_HOWTO.txt for code contributions.

mooshika

This project has not been worked on in quite some time and is not currently tested or supported. Mooshika is a relatively new project that provides an RDMA abstraction layer for protocol transport. It does not yet have a wiki, but the source code can be found in the git repository.
The issue tracker here is the best place to bring issues with ntirpc itself to our attention. Code submissions are accepted by pull request.

nfs-ganesha-debian

nfs-ganesha-debian contains the packaging files for building packages for Debian and Ubuntu. Its source code is in the git repository.
Philippe Deniel and Thomas Leibovici of CEA are the original authors. Since then, the contributor list has grown to include developers from IBM, Red Hat, Panasas, and LinuxBox, as well as a number of individuals.
NFS-Ganesha v2.5.4 or later allows dynamic update of access rules and can make use of highly available Ceph RADOS (distributed object storage) as its shared storage for NFS client recovery data and exports. Use it with Ceph v12.2.2 or later and the ganesha.GaneshaNASHelper2 library class in the manila Queens release or later.
The library makes only modest requirements on the general NFS-Ganesha (in the following: Ganesha) configuration; a best effort was made to remain as agnostic toward it as possible. This section describes the few requirements.
In versions 2.5.4 or later, Ganesha can store NFS client recovery data in Ceph RADOS and can also read exports stored in Ceph RADOS. These features are useful for making a Ganesha server that has access to a Ceph (Luminous or later) storage backend highly available. The Ganesha library class GaneshaNASHelper2 (in manila Queens or later) allows you to store Ganesha exports directly in shared storage, as RADOS objects, by setting the following manila config options in the driver section:
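As a sketch, the driver section of manila.conf might look like the following; the option names are those documented for manila's Ganesha/RADOS support, while the pool and object names are illustrative placeholders:

```
# manila.conf, in the share driver's section (pool/object names illustrative)
ganesha_rados_store_enable = True
# RADOS pool where Ganesha stores its exports and recovery data
ganesha_rados_store_pool_name = nfs-ganesha
# RADOS objects used to index and number the stored exports
ganesha_rados_export_index = ganesha-export-index
ganesha_rados_export_counter = ganesha-export-counter
```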
The driver has to contain a subclass of ganesha.GaneshaNASHelper2, instantiate it along with the driver instance, and delegate the update_access method to it (when appropriate, i.e., when access_proto is NFS).
The Ganesha Library generates sane default export blocks for the exports it manages, with one thing left blank: the so-called FSAL subblock. The helper class has to implement the _fsal_hook method, which returns the FSAL subblock (represented in Python as a dict with string keys and values). It has one mandatory key, Name, whose value should be the name of the FSAL (e.g. "Name": "CEPH"). Further content of it is optional and FSAL specific.
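The shape of such a helper subclass can be sketched as below. Note that GaneshaNASHelperStub is a minimal stand-in written here so the sketch runs standalone; in a real driver you would subclass manila's ganesha.GaneshaNASHelper2 instead, and the extra "Filesystem" key is purely illustrative:

```python
class GaneshaNASHelperStub(object):
    """Minimal stand-in for ganesha.GaneshaNASHelper2 (an assumption,
    only here to keep the sketch self-contained)."""

    def _fsal_hook(self, base, share, access):
        # The base class leaves the FSAL subblock for subclasses to fill in.
        raise NotImplementedError


class CephGaneshaHelper(GaneshaNASHelperStub):
    """Fills in the FSAL subblock for a Ceph-backed export."""

    def _fsal_hook(self, base, share, access):
        # Return the FSAL subblock as a dict with string keys and values.
        # "Name" is the one mandatory key; any other keys are FSAL
        # specific ("Filesystem" below is illustrative, not authoritative).
        return {
            "Name": "CEPH",
            "Filesystem": "cephfs",
        }


helper = CephGaneshaHelper()
fsal = helper._fsal_hook(None, None, None)
print(fsal["Name"])  # the mandatory key
```

The driver then merges this dict into the export block that the library generates for each share.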
The export location for shares of a driver that uses the Ganesha Library will be of the format <ganesha-server>:/share-<share-id>. However, this is incomplete information, because it pertains only to NFSv3 access, which is partially broken. NFSv4 mounts work well, but the actual NFSv4 export paths differ from the above. In detail:
The share is, however, exported through NFSv4, just on paths that differ from the one indicated by the export location, namely at <ganesha-server>:/share-<share-id>--<access-id>, where <access-id> ranges over the IDs of the access rules of the share (and the export with <access-id> is accessible according to the access rule of that ID).
NFS-Ganesha is a user-space file server for the NFS protocol with support for NFSv3, v4, v4.1, and pNFS. It provides a FUSE-compatible File System Abstraction Layer (FSAL) to allow file-system developers to plug in their own storage mechanism and access it from any NFS client. NFS-Ganesha can access FUSE filesystems directly through its FSAL without copying any data to or from the kernel, thus potentially improving response times.
To export any GlusterFS volume or a directory inside a volume, create an EXPORT block for each of those entries in an export configuration file. The following parameters are required to export any entry:

# cat export.conf
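The EXPORT block in export.conf might look like the following sketch; the volume name, paths, and export ID are placeholders, and the FSAL subblock uses the Gluster FSAL:

```
EXPORT {
    Export_Id = 1;             # unique identifier for this export
    Path = "/testvol";         # path of the entry being exported
    Pseudo = "/testvol";       # NFSv4 pseudo-filesystem path
    Access_Type = RW;          # read-write access
    Squash = No_root_squash;   # do not squash root on the client
    SecType = "sys";           # security flavor

    FSAL {
        Name = "GLUSTER";      # mandatory: selects the Gluster FSAL
        Hostname = "localhost";
        Volume = "testvol";    # GlusterFS volume to export
    }
}
```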
In a highly available active-active environment, if an NFS-Ganesha server that is connected to an NFS client running a particular application crashes, the application/NFS client is seamlessly connected to another NFS-Ganesha server without any administrative intervention. The cluster is maintained using Pacemaker and Corosync: Pacemaker acts as the resource manager and Corosync provides the communication layer of the cluster. Data coherency across the multi-head NFS-Ganesha servers in the cluster is achieved using the UPCALL infrastructure, a generic and extensible framework that sends notifications to the respective glusterfs clients (in this case, the NFS-Ganesha servers) when any changes are detected in the backend filesystem.
The ganesha-ha.conf.example file is created in /etc/ganesha when Gluster Storage is installed. Rename the file to ganesha-ha.conf and make the changes as suggested in the following sample ganesha-ha.conf file:
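A sample might look like the following; the cluster name, hostnames, and virtual IPs are placeholders, and the variable names follow those documented for Gluster's NFS-Ganesha HA setup:

```
# Name of the HA cluster created (should not contain underscores)
HA_NAME="ganesha-ha-cluster"
# Hostname of a server in the trusted pool that hosts the shared volume
HA_VOL_SERVER="server1"
# Comma-separated list of the hostnames of the cluster nodes
HA_CLUSTER_NODES="server1,server2,server3"
# One virtual IP per cluster node, used for failover
VIP_server1="10.0.2.1"
VIP_server2="10.0.2.2"
VIP_server3="10.0.2.3"
```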
Before adding a node to the cluster, ensure that all the prerequisites mentioned in the section "Pre-requisites to run NFS-Ganesha" are met. To add a node to the cluster, execute the following command on any of the nodes in the existing NFS-Ganesha cluster:
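The invocation below follows the shape documented for Gluster's ganesha-ha.sh helper script; the configuration directory, hostname, and virtual IP are placeholders, and the script's installed path may vary by distribution:

```
# ganesha-ha.sh --add <HA_CONF_DIR> <HOSTNAME> <NODE-VIP>
```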
Currently, ganesha HA cluster creation is tightly integrated with glusterd, so the user needs to create another trusted storage pool (TSP) using the ganesha nodes, then create the ganesha HA cluster using the above-mentioned steps up to executing "gluster nfs-ganesha enable". Exporting and unexporting should be performed without using the glusterd CLI (follow the manual steps; before performing step 4, replace localhost with the required hostname/IP in "hostname=localhost;" in the export configuration file).
The Parallel Network File System (pNFS) is part of the NFSv4.1 protocol and allows compute clients to access storage devices directly and in parallel. A pNFS cluster consists of an MDS (Meta-Data Server) and DSes (Data Servers). The client sends all read/write requests directly to the DSes, while all other operations are handled by the MDS.
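Since pNFS is part of NFSv4.1, a client must mount with version 4.1 (or later) to use it; the server name and export path below are placeholders:

```
mount -t nfs -o vers=4.1 <MDS-server>:/<export-path> /mnt
```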