Daniel Dehennin
Jun 12, 2025, 6:11:53 AM
to salt-...@googlegroups.com
Hello.
I have a formula to set up servers with additional disks. I could do it
with shell like:
#+begin_src bash
for disk in ${DISKS}
do
    # Skip disks that are already configured
    if configured_disk "${disk}"
    then
        continue
    fi
    wipefs -a "${disk}"
    mkfs "${disk}"
    UUID=$(lsblk -n -o UUID "${disk}")
    echo "UUID=${UUID} /media/${UUID} ${OPTIONS} 0 0" >> /etc/fstab
    mount "/media/${UUID}"
done
#+end_src
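For completeness, `configured_disk` above is left undefined; one way to write it, mirroring the `unless` check in the Salt state below, is to test whether the filesystem already carries the `salt-managed` label (the helper name and label are my own convention):

#+begin_src bash
#!/bin/sh
# A disk counts as configured once its filesystem carries the
# label "salt-managed" (the label that mkfs sets below).
configured_disk() {
    blkid -t LABEL=salt-managed "$1" >/dev/null 2>&1
}

if configured_disk /dev/null
then
    echo "/dev/null is already configured"
else
    echo "/dev/null is not configured"
fi
#+end_src

On a device without that label (or where blkid is unavailable), the check fails and the disk is treated as unconfigured, so the loop formats it.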
With Salt states it's a bit more complicated: I have to use slots, and
the output is ugly:
#+begin_src salt
{%- for disk, params in additionnal_disks.items() %}
{%- set mount_options = params.get("mount_options") %}
{%- set mount_user = params.get("user", "root") %}
{%- set mount_group = params.get("group", "root") %}
{%- set mount_dir_mode = params.get("dir_mode", "700") %}
{%- set mount_file_mode = params.get("file_mode", "600") %}
server/setup/disks/{{ disk }}/disk.wipe:
module.run:
- disk.wipe:
- device: {{ disk }}
- unless:
- blkid -t LABEL=salt-managed {{ disk }}
server/setup/disks/{{ disk }}/xfs.mkfs:
module.run:
- xfs.mkfs:
- device: {{ disk }}
- label: salt-managed
- ssize: size=4k
- onchanges:
- module: server/setup/disks/{{ disk }}/disk.wipe
server/setup/disks/chunk-{{ disk }}/mount.mounted:
mount.mounted:
- name: __slot__:salt:cmd.run('/bin/bash -c "echo /media/$(lsblk -n -o UUID {{ disk }})"')
- device: __slot__:salt:cmd.run("lsblk -P -o UUID {{ disk }}")
- fstype: xfs
- persist: True
- mkmnt: True
- opts: {{ mount_options }}
- dump: 0
- pass_num: {{ 2 + loop.index }}
- require:
- module: server/setup/disks/{{ disk }}/xfs.mkfs
server/setup/disks/chunk-{{ disk }}/file.directory:
file.directory:
- name: __slot__:salt:cmd.run('/bin/bash -c "echo /media/$(lsblk -n -o UUID {{ disk }})"')
- user: {{ mount_user }}
- group: {{ mount_group }}
- dir_mode: {{ mount_dir_mode }}
- file_mode: {{ mount_file_mode }}
- require:
- mount: server/setup/disks/chunk-{{ disk }}/mount.mounted
{%- endfor %}
#+end_src
I wonder if, instead of using slots, it could be made more dynamic by
using the reactor or something like:
1. for each disk
   1. test if it's already configured; skip to the next disk if so
   2. wipe the disk
   3. format it
   4. fire an event saying the disk is formatted
When handling the event, the Jinja could then look up information like
the UUID and avoid the need for slots.
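For the record, here is a rough sketch of what that wiring could look like. Everything below is hypothetical: the event tag, the reactor file paths, and the `server.setup.mount` state are made-up names.

#+begin_src salt
# --- in the formula: fire an event once the filesystem exists ---
# (event.send is the standard state for firing onto the event bus)
server/setup/disks/{{ disk }}/event:
  event.send:
    - name: server/setup/disks/formatted
    - data:
        disk: {{ disk }}
    - onchanges:
      - module: server/setup/disks/{{ disk }}/xfs.mkfs

# --- /etc/salt/master.d/reactor.conf: map the tag to a reactor SLS ---
reactor:
  - 'server/setup/disks/formatted':
    - /srv/reactor/mount_disk.sls

# --- /srv/reactor/mount_disk.sls: re-apply a mount state on the minion ---
# "data" is the event payload; data['id'] is the minion that fired it,
# and the dict sent by event.send is under data['data'].
mount_formatted_disk:
  local.state.apply:
    - tgt: {{ data['id'] }}
    - arg:
      - server.setup.mount
    - kwarg:
        pillar:
          formatted_disk: {{ data['data']['disk'] }}
#+end_src

By the time `server.setup.mount` renders on the minion, the filesystem already exists, so plain Jinja (e.g. `salt['cmd.run'](...)` around lsblk) can fetch the UUID at render time and no slot is needed.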
Regards.
--
Daniel Dehennin