Parted plugin fails after first run


Lucas Possamai

Dec 6, 2021, 4:08:25 AM
to ansible...@googlegroups.com
Hi all.

I'm having issues with the parted plugin. It works when I run the playbook for the first time, but if I run it again, it fails.

I actually want this task to run only once, but the run_once parameter is not helping either.

------------------------------------------------

- name: Add new partition "{{ data_volume }}"
  run_once: true
  parted:
    device: "{{ data_volume }}"
    number: 2
    fs_type: ext4
    state: present

Error: 
fatal: [localhost]: FAILED! => {"changed": false, "err": "Error: Partition(s) on /dev/nvme2n1 are being used.\n", "msg": "Error while running parted script: /usr/sbin/parted -s -m -a optimal /dev/nvme2n1 -- unit KiB mklabel msdos mkpart primary ext4 0% 100%", "out": "", "rc": 1}

If I unmount the partition, it works. But then it deletes everything in that partition, and I don't want that when running the playbook a second or third time.

------------------------------------------------


All the relevant tasks are below:

- name: Add new partition "{{ data_volume }}"
  run_once: true
  parted:
    device: "{{ data_volume }}"
    number: 2
    fs_type: ext4
    state: present

- name: Create an ext4 filesystem on "{{ data_volume }}" (/data)
  run_once: true
  community.general.filesystem:
    fstype: ext4
    dev: "{{ data_volume }}"

- name: Mount /data
  ansible.posix.mount:
    path: /data
    src: "{{ data_volume }}"
    fstype: ext4
    state: mounted
    opts: defaults

What am I missing?

I'm using Ansible 2.11 and Ubuntu 20.

dulh...@mailbox.org

Dec 6, 2021, 4:21:40 AM
to ansible...@googlegroups.com
I have seen something similar with re-running LVM operations and remember a suggestion to add a force: yes option (I don't recall the exact wording) in order not to fail on re-execution. Wondering whether something similar would help here too.
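
If it helps, the LVM pattern I am thinking of is the force flag documented on community.general.lvol; a minimal sketch only (the vg/lv names are made up, and I am not aware of an equivalent documented flag on the parted module):

# Sketch: community.general.lvol documents 'force' for shrink/remove operations.
# 'data_vg' and 'data_lv' are hypothetical names.
- name: Shrink a logical volume, forcing the operation
  community.general.lvol:
    vg: data_vg
    lv: data_lv
    size: 10g
    force: true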


Lucas Possamai

Dec 6, 2021, 4:51:20 AM
to ansible...@googlegroups.com
On Mon, 6 Dec 2021 at 22:21, dulhaver via Ansible Project <ansible...@googlegroups.com> wrote:
> I have seen something similar with re-running LVM operations and remember a suggestion to add a force: yes option (I don't recall the exact wording) in order not to fail on re-execution. Wondering whether something similar would help here too.

Hmm that didn't help.

This is working for me, but I think it's ugly; there must be another way to achieve this:

- name: Check if {{ data_volume }} is already mounted
  shell: df | grep {{ data_volume }} | wc -l
  with_items: "{{ data_volume }}"
  register: ebs_checked

- name: Create a new ext4 primary partition for /data
  run_once: true
  community.general.parted:
    name: pg_data
    device: "{{ data_volume }}"
    number: 2
    state: present
    fs_type: ext4
  when: "{{item.stdout}} == 0"
  with_items: "{{ ebs_checked.results }}"

- name: Create an ext4 filesystem on "{{ data_volume }}" (/data)
  run_once: true
  community.general.filesystem:
    fstype: ext4
    dev: "{{ data_volume }}"
  when: "{{item.stdout}} == 0"
  with_items: "{{ ebs_checked.results }}"
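
(Side note on the above: wrapping the condition in {{ }} triggers Ansible's warning about templating delimiters in when, since when already takes a raw Jinja2 expression. The same guard without the delimiters:)

- name: Create a new ext4 primary partition for /data
  run_once: true
  community.general.parted:
    name: pg_data
    device: "{{ data_volume }}"
    number: 2
    state: present
    fs_type: ext4
  # df | grep | wc -l yields a string count; compare it as an integer.
  when: item.stdout | int == 0
  with_items: "{{ ebs_checked.results }}"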

Stefan Hornburg (Racke)

Dec 6, 2021, 5:17:22 AM
to ansible...@googlegroups.com
On 06/12/2021 10:50, Lucas Possamai wrote:
> On Mon, 6 Dec 2021 at 22:21, dulhaver via Ansible Project <ansible...@googlegroups.com> wrote:
>
> > I have seen something similar with re-running LVM operations and remember a suggestion to add a *force: yes* option (I don't recall the exact wording) in order not to fail on re-execution. Wondering whether something similar would help here too.
>
> Hmm that didn't help.
>
> This is working for me, but I think it's ugly; there must be another way to achieve this:
>
> - name: Check if {{ data_volume }} is already mounted
>   shell: df | grep {{ data_volume }} | wc -l
>   with_items: "{{ data_volume }}"
>   register: ebs_checked
>

Hello Lucas,

it is more efficient and less fragile to check the "ansible_mounts" fact, e.g.

when: ansible_mounts | selectattr('fstype', 'equalto', 'ext4') | selectattr('mount', 'equalto', data_volume) | list | count > 1

(not tested).
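
For illustration, here is roughly how such a guard could sit on the parted task (equally untested; note that in ansible_mounts the mountpoint lives under the 'mount' key and the source device under 'device', and the mounted source is usually the partition rather than the raw device, hence a prefix match rather than strict equality):

- name: Add new partition "{{ data_volume }}"
  community.general.parted:
    device: "{{ data_volume }}"
    number: 1
    fs_type: ext4
    state: present
  # Skip when any current mount's source device starts with data_volume,
  # e.g. /dev/nvme2n1p1 for data_volume=/dev/nvme2n1. 'match' is a regex
  # test anchored at the start; adapt this assumption to your setup.
  when: ansible_mounts | selectattr('device', 'match', data_volume) | list | count == 0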

Regards
Racke


> - name: Create a new ext4 primary partition for /data
>   run_once: true
>   community.general.parted:
>     name: pg_data
>     device: "{{ data_volume }}"
>     number: 2
>     state: present
>     fs_type: ext4
>   when: "{{item.stdout}} == 0"
>   with_items: "{{ ebs_checked.results }}"
>
> - name: Create an ext4 filesystem on "{{ data_volume }}" (/data)
>   run_once: true
>   community.general.filesystem:
>     fstype: ext4
>     dev: "{{ data_volume }}"
>   when: "{{item.stdout}} == 0"
>   with_items: "{{ ebs_checked.results }}"
>
> - name: Mount /data
>   ansible.posix.mount:
>     path: /data
>     src: "{{ data_volume }}"
>     fstype: ext4
>     state: mounted
>     opts: defaults


--
Ecommerce and Linux consulting + Perl and web application programming.
Debian and Sympa administration.



Todd Lewis

Dec 6, 2021, 12:26:56 PM
to Ansible Project
You're specifying partition 2 without a size. On a blank disk (in my testing at least) this produces a partition 1. Subsequent calls to the first task fail because partition 1 consumes the entire disk, and partition 2 cannot be created. However, changing "number: 2" to "number: 1" in the first task allows subsequent runs to succeed. Can you explain why you're using "2", and how subsequent tasks are expected to find and operate on that partition?

Also, it may be helpful if you tell us what value you have for data_volume.
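
For what it's worth, the parted module can also report what's already on the disk, which would tell us exactly what that first run created. The probe pattern from the module docs (dev_info is just an arbitrary register name):

- name: Read device information (always use unit when probing)
  community.general.parted:
    device: "{{ data_volume }}"
    unit: MiB
  register: dev_info

- name: Show current partitions
  ansible.builtin.debug:
    var: dev_info.partitions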

Lucas Possamai

Dec 6, 2021, 3:00:52 PM
to ansible...@googlegroups.com
On Tue, 7 Dec 2021 at 06:27, Todd Lewis <uto...@gmail.com> wrote:
> You're specifying partition 2 without a size. On a blank disk (in my testing at least) this produces a partition 1. Subsequent calls to the first task fail because partition 1 consumes the entire disk, and partition 2 cannot be created. However, changing "number: 2" to "number: 1" in the first task allows subsequent runs to succeed. Can you explain why you're using "2", and how subsequent tasks are expected to find and operate on that partition?

There is only one partition on that disk. And TBH, I don't know why I put "partition 2" there; I think it was a copy-and-paste leftover. I'll change the partition number from 2 to 1 and test again.
 

> Also, it may be helpful if you tell us what value you have for data_volume.

data_volume is /dev/nvme2n1
pgsql_volume is /dev/nvme1n1

Those are variables, and I need to set them manually, since even when using aws_volume_attachment in Terraform it's not guaranteed that the volume will get the same device name on every attachment. And I couldn't find a way to identify the disks by size.
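
(Though maybe the gathered ansible_devices fact could cover the check-by-size idea; it exposes a human-readable size per block device. An untested sketch, with "100.00 GB" as a placeholder for the real volume size:)

- name: Pick the device whose reported size matches (placeholder size)
  set_fact:
    # ansible_devices is keyed by kernel name (e.g. nvme2n1) and each entry
    # carries a 'size' string such as "100.00 GB".
    data_volume: "/dev/{{ ansible_devices | dict2items | selectattr('value.size', 'equalto', '100.00 GB') | map(attribute='key') | first }}"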

Cheers,
Lucas

Todd Lewis

Dec 6, 2021, 3:27:52 PM
to Ansible Project
I was able to get your playbook to work repeatedly on an external thumb drive. It shows up as /dev/sda, so the full partition, after changing to "number: 1", is /dev/sda1. My significant changes are noted below. Note that dev: and src: in the last two steps are a concatenation of data_volume and number. For an nvme device like /dev/nvme2n1 you'd need to insert a "p" before the partition number: "{{ data_volume }}p{{ number }}".

- hosts: localhost
  vars:
    data_volume: /dev/sda
    number: 1
  tasks:
    - name: Read device information (always use unit when probing)
      community.general.parted:
        device: "{{ data_volume }}"
        unit: MiB
      register: sdb_info


    - name: Add new partition "{{ data_volume }}"
      run_once: true
      community.general.parted:
        device: "{{ data_volume }}"
        number: "{{ number }}"
        fs_type: ext4
        state: present


    - name: Create an ext4 filesystem on "{{ data_volume }}" (/data)
      run_once: true
      community.general.filesystem:
        fstype: ext4
        dev: "{{ data_volume }}{{ number }}"


    - name: Mount /data
      ansible.posix.mount:
        path: /data
        src: "{{ data_volume }}{{ number }}"
        fstype: ext4
        state: mounted
        opts: defaults
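
Since the "p" separator only applies to devices whose kernel names end in a digit (nvme0n1, mmcblk0, and so on), you could also derive the partition path once and reuse it everywhere. An untested sketch; data_partition is a made-up variable name:

    - name: Derive the partition path (names ending in a digit need a 'p' separator)
      set_fact:
        data_partition: "{{ data_volume }}{{ 'p' if data_volume is search('[0-9]$') else '' }}{{ number }}"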

If you do get all the nuances worked out, post back to let us know what you ended up with. Good luck.

Lucas Possamai

Dec 6, 2021, 9:17:04 PM
to ansible...@googlegroups.com
Thanks for your reply.

I tried using your suggestions, but I get this error: 
fatal: [localhost]: FAILED! => {"changed": false, "err": "Error: Partition(s) on /dev/nvme2n1 are being used.\n", "msg": "Error while running parted script: /usr/sbin/parted -s -m -a optimal /dev/nvme2n1 -- unit KiB mklabel msdos mkpart primary ext4 0% 100%", "out": "", "rc": 1}

It works if I unmount the volume, though.
 
This is the code I'm using:

- name: Add new partition "{{ pgsql_volume }}" (/pgsql)
  run_once: true
  community.general.parted:
    device: "{{ pgsql_volume }}"
    number: "{{ number }}"
    fs_type: ext4
    state: present
  when: prod and not awsbau and not slave

- name: Create an ext4 filesystem on "{{ pgsql_volume }}" (/pgsql)
  run_once: true
  community.general.filesystem:
    fstype: ext4
    dev: "{{ pgsql_volume }}p{{ number }}"
  when: prod and not awsbau and not slave

- name: Unmount /data
  ansible.posix.mount:
    path: /data
    state: unmounted

- name: Add new partition "{{ data_volume }}" (/data)
  run_once: true
  community.general.parted:
    device: "{{ data_volume }}"
    number: "{{ number }}"
    fs_type: ext4
    state: present

- name: Create an ext4 filesystem on "{{ data_volume }}" (/data)
  run_once: true
  community.general.filesystem:
    fstype: ext4
    dev: "{{ data_volume }}p{{ number }}"

- name: Mount /pgsql
  ansible.posix.mount:
    path: /pgsql
    src: "{{ pgsql_volume }}p{{ number }}"
    fstype: ext4
    state: mounted
    opts: defaults

- name: Mount /data
  ansible.posix.mount:
    path: /data
    src: "{{ data_volume }}p{{ number }}"
    fstype: ext4
    state: mounted
    opts: defaults

Also, I noticed that the "Read device information (always use unit when probing)" task isn't being used, so I removed it.

Lucas

Todd Lewis

Dec 7, 2021, 7:25:18 AM
to Ansible Project
> Also, I noticed that the "Read device information (always use unit when probing)" task isn't being used, so I removed it.
That's too bad, because its output is exactly what we need in order to see why the next bit is failing. It also contains the information you need to decide whether the "Add new partition…" steps need to execute. But if we get those steps expressed idempotently, it really shouldn't matter. It would be informative to see that output for both data_volume and pgsql_volume. I'd also be interested to see what /etc/fstab entry or entries you've got for these devices.

I'm assuming the error you quoted is from either the "Add new partition "{{ pgsql_volume }}" (/pgsql)" step or the "Add new partition "{{ data_volume }}" (/data)" step, but I don't know which. I understand the desire to keep posts brief, but leaving out step output headers, values of variables, etc. makes us have to guess or scroll back through the thread and piece it together. In the meantime, something's changed, etc. It could be that the one little breaking detail is something you're writing off as insignificant — otherwise you would have fixed it already!

[aside: I've been doing a lot of Ansible support in the last couple of weeks in forums like this, and nearly half the exchanges have been requests for information that was left out - presumably for my/our convenience - from prior posts.]

At any rate, the "Add new partition…" steps are not behaving idempotently: the partition you are declaring already exists. There must be some way to express that so that the community.general.parted module doesn't run "/usr/sbin/parted -s -m -a optimal /dev/nvme2n1 -- unit KiB mklabel msdos mkpart primary ext4 0% 100%", which we know is doomed to fail if there's already a partition on /dev/nvme2n1. (Or is there? Could be a raw device?)
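
Something along these lines might express it (an untested sketch combining the docs' probe pattern with a guard; dev_info is an arbitrary register name, and per the module docs the registered result carries a partitions list):

- name: Read device information (always use unit when probing)
  community.general.parted:
    device: "{{ data_volume }}"
    unit: MiB
  register: dev_info

- name: Add new partition only when the disk has none yet
  community.general.parted:
    device: "{{ data_volume }}"
    number: "{{ number }}"
    fs_type: ext4
    state: present
  # Only repartition a disk that currently has no partitions at all.
  when: dev_info.partitions | length == 0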

Lucas Possamai

Dec 7, 2021, 3:07:08 PM
to ansible...@googlegroups.com
On Wed, 8 Dec 2021 at 01:25, Todd Lewis <uto...@gmail.com> wrote:
> > Also, I noticed that the "Read device information (always use unit when probing)" task isn't being used, so I removed it.
> That's too bad, because its output is exactly what we need in order to see why the next bit is failing. It also contains the information you need to decide whether the "Add new partition…" steps need to execute. But if we get those steps expressed idempotently, it really shouldn't matter. It would be informative to see that output for both data_volume and pgsql_volume. I'd also be interested to see what /etc/fstab entry or entries you've got for these devices.


Oh, sorry about that then. I didn't realize you were going to need the output of those commands.
 
> I'm assuming the error you quoted is from either the "Add new partition "{{ pgsql_volume }}" (/pgsql)" step or the "Add new partition "{{ data_volume }}" (/data)" step, but I don't know which. I understand the desire to keep posts brief, but leaving out step output headers, values of variables, etc. makes us have to guess or scroll back through the thread and piece it together. In the meantime, something's changed, etc. It could be that the one little breaking detail is something you're writing off as insignificant — otherwise you would have fixed it already!
>
> [aside: I've been doing a lot of Ansible support in the last couple of weeks in forums like this, and nearly half the exchanges have been requests for information that was left out - presumably for my/our convenience - from prior posts.]
>
> At any rate, the "Add new partition…" steps are not behaving idempotently: the partition you are declaring already exists. There must be some way to express that so that the community.general.parted module doesn't run "/usr/sbin/parted -s -m -a optimal /dev/nvme2n1 -- unit KiB mklabel msdos mkpart primary ext4 0% 100%", which we know is doomed to fail if there's already a partition on /dev/nvme2n1. (Or is there? Could be a raw device?)

Well, TBH it's working fine now. I created a bunch of files in the /data and /pgsql directories and re-ran the Ansible playbook. The idea was to see whether Ansible would completely destroy the partitions and re-create them, or whether it would recognise that the partitions are already there and skip or report "ok".

I executed the playbook a couple of times and the files remained in their directories, which is really good.
The only extra thing I had to do was add those unmount tasks. The code I sent in my previous email is the entire bit.

Thank you for your help Todd, I'm happy with the result.

Cheers,
Lucas