salt runners state event


Rémy Dernat

May 9, 2016, 3:42:03 AM
to salt-...@googlegroups.com
Hi,

I am trying to reinstall a machine and automatically add the new key to salt.


However, contrary to the documentation, I first need to delete the old salt key, and then add the new one once the install is finished.

So I did this:

```
salt-key -y -d "$MACHINE"

echo "Rebooting "$MACHINE >> $LOGFILE

salt "$MACHINE" system.reboot && \
salt-run state.event "salt/minion/$MACHINE/start" count=1 quiet=True && \
salt-key -y -a "$MACHINE" && \
salt "$MACHINE" grains.append roles big && \
salt "$MACHINE" state.highstate
```

Yet, I am stuck on:
```
Key for minion bigmem2 deleted.
No minions matched the target. No command was sent, no jid was assigned.
```

And the machine did not reboot. I think salt cannot target a minion whose key has just been deleted, so the reboot command is never sent and the rest does not work.

How could I do that?

Best,
Rémy

Rémy Dernat

May 9, 2016, 10:49:32 AM
to salt-...@googlegroups.com
I updated this part of the code to:

```
echo "Rebooting "$MACHINE >> $LOGFILE

salt "$MACHINE" system.reboot
salt-key -y -d "$MACHINE"
salt-key -y -a "$MACHINE" && \
salt-run state.event "salt/minion/$MACHINE/start" count=1 quiet=True && \
salt "$MACHINE" grains.append roles big && \
salt "$MACHINE" state.highstate
```

But it hangs here:
```
bigmem2
Key for minion bigmem2 deleted.
The key glob 'bigmem2' does not match any unaccepted keys.
```

When I checked with "ps aufx", I saw that the blocking command is:
```
/usr/bin/python /usr/bin/salt-run state.event salt/minion/bigmem2/start count=1 quiet=True
```

Should I increase the count, or put the state.event call in a while loop?

Regards,
Remy

Seth House

May 9, 2016, 5:08:49 PM
to salt users list
The error "does not match any unaccepted keys" is because the minion
generates a key when the daemon starts and then pushes it to the
master when it first connects. That means you need to wait for the
minion to connect, then accept the key, then wait for the start event.

If you run `state.event` with no args you can see which events come in and when.
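For example, you can watch the raw event bus from the master like this (the `pretty=True` flag just formats the event data for reading):

```
# Watch the Salt event bus on the master; press Ctrl-C to stop.
salt-run state.event pretty=True
```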

Since the `salt/auth` event tag does not include the minion ID, I
don't think you can watch for it with the `state.event` runner. You
might need to use Orchestrate instead. Here's an (untested) example:

```
{% set mid = salt.pillar.get('mid') %}

reboot:
  salt.function:
    - name: system.reboot
    - tgt: {{ mid }}

del_old_key:
  salt.wheel:
    - name: key.delete
    - match: {{ mid }}
    - require:
      - salt: reboot

wait_for_connect:
  salt.wait_for_event:
    - name: salt/auth
    - id_list:
      - {{ mid }}
    - onchanges:
      - salt: del_old_key

acc_new_key:
  salt.wheel:
    - name: key.accept
    - match: {{ mid }}
    - require:
      - salt: wait_for_connect

wait_for_start:
  salt.wait_for_event:
    - name: salt/minion/*/start
    - id_list:
      - {{ mid }}
    - require:
      - salt: acc_new_key

set_init_grains:
  salt.function:
    - name: grains.setval
    - tgt: {{ mid }}
    - arg:
      - roles
      - big
```
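An orchestrate like this could then be kicked off with the minion ID passed in as pillar (the file path `orch/reinstall.sls` below is just an assumed example; the `mid` pillar key matches the template above):

```
# Run the orchestrate, passing the minion ID as the 'mid' pillar value.
salt-run state.orchestrate orch.reinstall pillar='{"mid": "bigmem2"}'
```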

Rémy Dernat

May 10, 2016, 3:14:27 AM
to salt-...@googlegroups.com
Thanks Seth! This is very helpful!

I will try that.

Best,
Remy

Rémy Dernat

Jun 1, 2016, 12:04:17 PM
to salt-...@googlegroups.com
Hi again (and sorry to dig up this topic),

I am still stuck on this orchestrate stuff; here is my orchestrate:

/srv/salt/orch/bigmem_init.sls 
```
{% set host = salt.pillar.get('reinstall') %}

set_bigmem_to_install:
  salt.state:
    - tgt: 'faiserv'
    - sls:
      - bigmem.tftp

reboot:
  salt.function:
    - name: system.reboot
    - tgt: {{ host }}

del_old_key:
  salt.wheel:
    - name: key.delete
    - match: {{ host }}
    - watch:
      - salt: reboot

wait_for_connect:
  salt.wait_for_event:
    - name: salt/auth
    - timeout: 1800
    - id_list:
      - {{ host }}
    - require:
      - salt: del_old_key

acc_new_key:
  salt.wheel:
    - name: key.accept
    - match: {{ host }}
    - require:
      - salt: wait_for_connect

wait_for_start:
  salt.wait_for_event:
    - name: salt/minion/*/start
    - timeout: 2700
    - id_list:
      - {{ host }}
    - require:
      - salt: acc_new_key

set_init_grains:
  salt.function:
    - name: grains.append
    - tgt: {{ host }}
    - arg:
      - roles
      - big

global_state:
  salt.state:
    - tgt: {{ host }}
    - highstate: True

postinstall_state:
  salt.state:
    - tgt: {{ host }}
    - sls:
      - mount/bigmem
      - rsync
      - bigmem/mkuser

clean_mediabigvol:
  salt.function:
    - name: cmd.run
    - tgt: {{ host }}
    - arg:
      - rm -rf /media/bigvol/*

```

After the reboot, execution never reaches "del_old_key". So I end up with two keys on my master (the old accepted one, and a new one which is automatically denied), and I get a timeout on wait_for_start.
I tried changing the order of the states in this orchestrate (wait_for_connect, reboot and del_old_key) and swapping require <-> onchanges, but that did not change the behaviour (I did not try 'watch'). I think it hangs waiting for the reboot function to return.
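One idea I have not tried yet (an untested sketch) would be to make the reboot itself non-blocking, so the salt.function state returns before the minion drops off the bus, for instance by scheduling the reboot a minute ahead instead of calling system.reboot directly:

```
# Hypothetical replacement for the 'reboot' state above ('host' as set at the top).
reboot:
  salt.function:
    - name: cmd.run
    - tgt: {{ host }}
    - arg:
      # 'shutdown -r +1' returns immediately and reboots one minute later,
      # so the orchestrate can carry on to del_old_key.
      - shutdown -r +1
```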

Any idea would be _very_ useful :) !!

Cheers,
Remy

Rémy Dernat

Jun 2, 2016, 11:34:10 AM
to Salt-users
I finally found a solution by using the salt API and a custom module. My orchestrate is now:
```
{% set host = salt.pillar.get('reinstall') %}
{% set master = 'master_name' %}


set_bigmem_to_install:
  salt.state:
    - tgt: 'faiserv'
    - sls:
      - bigmem.tftp

reboot:
  salt.function:
    - name: system.reboot
    - tgt: {{ host }}

del_old_key:
#  salt.wheel:
#    - name: key.delete
#    - match: {{ host }}
  salt.function:
    - name: customkeys.rm_key
    - tgt: {{ master }}
    - arg: 
      - {{ host }}

wait_for_connect:
  salt.wait_for_event:
    - name: salt/auth
    - timeout: 1800
    - id_list:
      - {{ host }}
    - onchanges:
      - salt: del_old_key

acc_new_key:
#  salt.wheel:
#    - name: key.accept
#    - match: {{ host }}
#    - include_denied: True
  salt.function:
    - name: customkeys.add_key
    - tgt: {{ master }}
    - arg: 
      - {{ host }}
    - require:
#      - salt: del_old_key
      - salt: wait_for_connect

wait_for_start:
  salt.wait_for_event:
    - name: salt/minion/*/start
    - timeout: 2700
    - id_list:
      - {{ host }}
    - require:
      - salt: acc_new_key

set_init_grains:
  salt.function:
    - name: grains.append
    - tgt: {{ host }}
    - arg:
      - roles
      - big
    - require:
      - salt: acc_new_key

global_state:
  salt.state:
    - tgt: {{ host }}
    - highstate: True
    - require:
      - salt: set_init_grains

postinstall_state:
  salt.state:
    - tgt: {{ host }}
    - sls:
      - mount/bigmem
      - rsync
      - bigmem/mkuser
    - require:
      - salt: set_init_grains

clean_mediabigvol:
  salt.function:
    - name: cmd.run
    - tgt: {{ host }}
    - arg:
      - rm -rf /media/bigvol/*
    - require:
      - salt: acc_new_key
```


I still do not know why it did not work with salt.wheel.key directly in my orchestrate...
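For completeness, here is a minimal sketch of what such a custom execution module could look like (the file path, and the fact that it shells out to salt-key, are assumptions; only the function names come from the orchestrate above — it is targeted at the master's own minion, so it can drive salt-key locally):

```
# Hypothetical /srv/salt/_modules/customkeys.py -- a sketch, not the actual
# module from this thread. __salt__ is injected by Salt's module loader.

def rm_key(minion_id):
    '''Delete the given minion's key on the master.'''
    return __salt__['cmd.run']('salt-key -y -d {0}'.format(minion_id))

def add_key(minion_id):
    '''Accept the given minion's pending key.'''
    return __salt__['cmd.run']('salt-key -y -a {0}'.format(minion_id))
```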

Regards.

Seth House

Jun 7, 2016, 9:55:12 PM
to salt users list
Sorry that I missed your reply here! I'm glad you got it working.
Thanks for replying with your solution. +1