Rob,
I do see these messages on all of our nodes, but we have 'reboot-strategy=off', so I figured update_engine just pulls down the update, applies it to the other partition, and then waits for something else to reboot the node. Since only a small percentage of the node population is rebooting, and only fairly recently, it makes me think this isn't related to update_engine. Happy to be shown I'm wrong, though, as that would seem to be an easier problem to solve :)
Apr 17 17:36:58 ip-10-20-0-104.ec2.internal update_engine[896]: </actions>
Apr 17 17:36:58 ip-10-20-0-104.ec2.internal update_engine[896]: </manifest>
Apr 17 17:36:58 ip-10-20-0-104.ec2.internal update_engine[896]: </updatecheck>
Apr 17 17:36:58 ip-10-20-0-104.ec2.internal update_engine[896]: </app>
Apr 17 17:36:58 ip-10-20-0-104.ec2.internal update_engine[896]: </response>
Apr 17 17:36:58 ip-10-20-0-104.ec2.internal update_engine[896]: I0417 17:36:58.152194 896 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 17 17:36:58 ip-10-20-0-104.ec2.internal update_engine[896]: I0417 17:36:58.152201 896 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 17 17:36:58 ip-10-20-0-104.ec2.internal update_engine[896]: I0417 17:36:58.152207 896 update_attempter.cc:290] Processing Done.
Apr 17 17:36:58 ip-10-20-0-104.ec2.internal update_engine[896]: I0417 17:36:58.152809 896 update_attempter.cc:316] Update successfully applied, waiting to reboot.
Apr 17 17:36:58 ip-10-20-0-104.ec2.internal update_engine[896]: I0417 17:36:58.152835 896 update_check_scheduler.cc:74] Next update check in 50m0s
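For what it's worth, a quick way to double-check that a node is sitting in this "update staged, waiting to reboot" state (a sketch, assuming a standard Container Linux box with update_engine_client available and the usual update.conf location):

```shell
# Ask update_engine for its current state. A node that has written the update
# to the inactive partition but not yet rebooted should report
# CURRENT_OP=UPDATE_STATUS_UPDATED_NEED_REBOOT in the output.
update_engine_client -status

# Confirm the reboot strategy really is "off" on this host. Container Linux
# reads overrides from /etc/coreos/update.conf (falling back to
# /usr/share/coreos/update.conf); expect to see REBOOT_STRATEGY=off here.
grep REBOOT_STRATEGY /etc/coreos/update.conf
```

If the status shows UPDATE_STATUS_UPDATED_NEED_REBOOT and the strategy is off, the update has been applied but nothing from update_engine itself should be triggering the reboots, which would point the investigation elsewhere.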