Command and mount modules don't work correctly with SSHFS

Михаил Чинков

Dec 23, 2016, 12:56:18 PM
to Ansible Project
Ansible version: 2.0.1.0

Two issues.

I want to mount an SSHFS filesystem as the root user, without a reboot, both temporarily and permanently.

1. A temporary mount can be done with the sshfs command. Here is an example where all credentials and connection parameters (port, user, identity file) are stored in the root user's SSH config.
 
- name: mount sshfs immediately
  command: "sshfs sftp.example.ru:/ /mnt/example/sftp/ -o allow_other"
  become_user: root
  tags:
    - mount
    - temporary


This task doesn't report any error (it comes back with a 'changed' status), but it also doesn't do anything: the FUSE/SSHFS filesystem isn't mounted. When I execute the following command from the shell...

sudo sshfs sftp.example.ru:/ /mnt/example/sftp/ -o allow_other

...it works perfectly. I also tried putting sudo directly into the command in the task for full compatibility, and that doesn't work either. What else can I do to execute the command as the root user?
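
A sketch of what I plan to try next, assuming (I haven't verified this yet) that become_user on its own doesn't actually escalate privileges and that become: true needs to be set alongside it; the second task just dumps the registered result so any sshfs error message becomes visible:

- name: mount sshfs immediately
  command: "sshfs sftp.example.ru:/ /mnt/example/sftp/ -o allow_other"
  become: true          # escalation has to be switched on; become_user alone does not do it
  become_user: root
  register: sshfs_result
  tags:
    - mount
    - temporary

- name: show sshfs output for debugging
  debug:
    var: sshfs_result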

2. The permanent mount should be done with the "mount" module. My implementation:

- name: mount sshfs permanently
  mount:
    name: /mnt/example/sftp/
    src: sftp.example.ru:/
    fstype: fuse.sshfs
    opts:
      "rw,noexec,nosuid,nodev,idmap=user,
      follow_symlinks,allow_other,
      default_permissions,uid=1000,gid=1000"
    state: present
  tags: mount


It works in a straightforward way, converting the task into a filesystem description that is finally written to /etc/fstab. But, unfortunately, it's not idempotent: with two runs of the role, the mount module writes the same filesystem description twice. Should it work this way?
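
One thing I want to test, though I'm only guessing here, is whether the line breaks inside the opts string are the culprit: if the string rendered on each run doesn't match the entry already in /etc/fstab character for character, the module would append a new line every time. A sketch with the options on a single line, and with state: mounted so the filesystem is also mounted immediately instead of only being recorded in fstab:

- name: mount sshfs permanently
  mount:
    name: /mnt/example/sftp/
    src: sftp.example.ru:/
    fstype: fuse.sshfs
    # keep the whole option string on one line so it compares equal to the
    # existing /etc/fstab entry on every run
    opts: "rw,noexec,nosuid,nodev,idmap=user,follow_symlinks,allow_other,default_permissions,uid=1000,gid=1000"
    # 'mounted' writes the fstab entry and mounts the filesystem right away;
    # 'present' only writes the entry
    state: mounted
  tags: mount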

These are probably all my own mistakes, but I couldn't find any information about the root cause. It would be great if anyone who has run into this problem and figured out how to fix it could share.