We recently updated our Razor installation after a bad update, and it appears that it has since decided to reinstall systems that hadn't requested reinstallation. An example log, in part, is below. We do have protect_new_nodes set to "true" (config excerpt follows the log).
+---------------------------+----------+-----------------+--------------------------------------+
| 2019-04-10T17:48:15+00:00 | info     | stage_done      | stage: kickstart                     |
+---------------------------+----------+-----------------+--------------------------------------+
| 2019-04-10T17:48:15+00:00 | info     | get_task_file   | template: post_install, url: http:// |
|                           |          |                 | 192.168.1.33:8150/svc/file/789/post_ |
|                           |          |                 | install                              |
+---------------------------+----------+-----------------+--------------------------------------+
| 2019-04-10T17:39:33+00:00 | info     | get_task_file   | template: kickstart, url: http://192 |
|                           |          |                 | .168.1.33:8150/svc/file/789/kickstar |
|                           |          |                 | t                                    |
+---------------------------+----------+-----------------+--------------------------------------+
| 2019-10-14T17:39:11+00:00 | info     | node-booted     | exit_status: 0, actions: updating no |
|                           |          |                 | de metadata: {"update"=>{"hostname"= |
|                           |          |                 | >"aw130", "ipaddress"=>"192.168.1.3" |
|                           |          |                 | , "ip"=>"192.168.1.3", "baseos"=>"7" |
|                           |          |                 | , "solr"=>false}}                    |
+---------------------------+----------+-----------------+--------------------------------------+
| 2019-10-14T17:39:11+00:00 | info     | boot            | task: centos, template: boot_install |
|                           |          |                 | , repo: centos7                      |
+---------------------------+----------+-----------------+--------------------------------------+
| 2019-10-14T17:39:35+00:00 | info     |                 | action: reboot, policy: centos7      |
+---------------------------+----------+-----------------+--------------------------------------+
| 2019-10-14T17:38:35+00:00 | info     | bind            | policy: centos7                      |
+---------------------------+----------+-----------------+--------------------------------------+
| 2019-10-14T17:37:48+00:00 | info     | node-booted     | exit_status: 0, actions: updating no |
|                           |          |                 | de metadata: {"update"=>{"hostname"= |
|                           |          |                 | >"aw130", "ipaddress"=>"192.168.1.3" |
|                           |          |                 | , "ip"=>"192.168.1.3", "baseos"=>"7" |
|                           |          |                 | , "solr"=>false}}                    |
+---------------------------+----------+-----------------+--------------------------------------+
| 2019-10-14T17:37:47+00:00 | info     | boot            | task: microkernel, template: boot    |
+---------------------------+----------+-----------------+--------------------------------------+
| 2019-05-02T18:20:09+00:00 | info     | node-booted     | exit_status: 0, actions: updating no |
|                           |          |                 | de metadata: {"update"=>{"hostname"= |
|                           |          |                 | >"aw130", "ipaddress"=>"192.168.1.3" |
|                           |          |                 | , "ip"=>"192.168.1.3", "baseos"=>"7" |
|                           |          |                 | , "solr"=>false}}                    |
+---------------------------+----------+-----------------+--------------------------------------+
| 2019-05-02T18:19:08+00:00 | info     | boot            | task: microkernel, template: boot    |
+---------------------------+----------+-----------------+--------------------------------------+
| 2019-04-10T17:01:08+00:00 | info     | node-booted     | exit_status: 0, actions: updating no |
|                           |          |                 | de metadata: {"update"=>{"hostname"= |
|                           |          |                 | >"aw130", "ipaddress"=>"192.168.1.3" |
|                           |          |                 | , "ip"=>"192.168.1.3", "baseos"=>"7" |
|                           |          |                 | , "solr"=>false}}                    |
+---------------------------+----------+-----------------+--------------------------------------+
| 2019-04-10T17:01:07+00:00 | info     | boot            | task: microkernel, template: boot    |
+---------------------------+----------+-----------------+--------------------------------------+
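As noted above, protect_new_nodes is enabled. For reference, here is the relevant excerpt from our server config; treat this as a trimmed sketch (the path and the all: section are where the setting lives on our install, and surrounding settings are elided):

# /etc/puppetlabs/razor-server/config.yaml (trimmed; other settings elided)
all:
  protect_new_nodes: true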
What might have triggered this, and how do I best ensure it doesn't happen to any of our other systems? We already deleted the nodes for any systems whose status was "installed: false", but sadly we only did that on 2019-10-15, after the fact. Is there a way to find any others that might still have this hanging over them?
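For what it's worth, this is roughly how we swept for the "installed: false" nodes before deleting them, in case the same check is the right way to catch stragglers. It's a minimal sketch against the public API: the server address is taken from the log above, and the state["installed"] field and the reference-style "id" URLs are assumptions based on how our node details render, so adjust for your setup.

import json
import urllib.request

RAZOR_API = "http://192.168.1.33:8150/api"

def get_json(url):
    """Fetch and decode one JSON document from the Razor server."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Walk the node collection and flag anything not marked installed --
# those are the nodes a policy could re-provision on their next PXE boot.
for ref in get_json(RAZOR_API + "/collections/nodes")["items"]:
    detail = get_json(ref["id"])  # assumption: each item's "id" is its detail URL
    if not detail.get("state", {}).get("installed"):
        policy = (detail.get("policy") or {}).get("name", "none")
        print(f'{detail["name"]}: installed=false, policy={policy}')

Is matching on that one field sufficient, or are there other markers we should be auditing as well?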
Thanks!