Alternating results with mount Puppet resource


zerozer...@gmail.com

Jan 7, 2014, 9:10:31 AM
to puppet...@googlegroups.com
Hi,
I have a problem with a host which has a Puppet class applied to it.
The host, a CentOS 6.4 machine, has been provisioned and is managed using Foreman, but I'm not sure this is related to the problem.

The class includes a "mount" type resource which should mount an NFS share and add the corresponding line to /etc/fstab.
But the existing fstab file on that machine has an error: one custom line starts with a space (although I cannot find any official documentation stating that this is actually forbidden), and this causes manual application of the Puppet class (puppet agent --test) to *always* fail with the message "Could not evaluate: Field 'device' is required".

Up to here, this is OK. The failure is expected behaviour if the leading space in fstab really is a syntax error.
I've read elsewhere that this error does not come from puppet itself, but rather from the OS when puppet tries to "flush" the fstab file.
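Just to illustrate what I mean (a sketch in Ruby, not Puppet's actual parsing code): splitting an fstab record on whitespace turns a leading space into an empty first field, i.e. an empty 'device':

```ruby
# Illustrative only -- NOT Puppet's real fstab parser. It just shows how a
# whitespace split maps a leading space to an empty 'device' field.
KEYS = %w[device mountpoint fstype options dump pass]

def parse_fstab_line(line)
  # Pair each positional fstab field with its name
  KEYS.zip(line.chomp.split(/\s+/)).to_h
end

ok  = parse_fstab_line("server:/share /mnt/repo nfs defaults 0 0")
bad = parse_fstab_line(" server:/share /mnt/repo nfs defaults 0 0")

puts ok["device"].inspect   # => "server:/share"
puts bad["device"].inspect  # => "" (empty -> "Field 'device' is required")
```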

The problem is with the automated Puppet runs every 30 minutes (the default): 50% of the runs end with no errors in the report, even though the class has not been correctly applied.
One report shows the error, the next one does not, the next shows it again, then again not, and so on.
This can be very misleading, as you could believe everything is ok by looking at the error-free logs.

Any suggestions?

Thanks.
Marco

zerozer...@gmail.com

Jan 8, 2014, 4:12:03 AM
to puppet...@googlegroups.com
On Tuesday, January 7, 2014 3:10:31 PM UTC+1, zerozer...@gmail.com wrote:
 
The problem is with the automated Puppet runs every 30 minutes (the default): 50% of the runs end with no errors in the report, even though the class has not been correctly applied.

Just for completeness, here is an excerpt from puppet's log showing the alternating pattern (host and class names have been masked):

Jan  8 06:25:38 hostname puppet-agent[1330]: (/Stage[main]/Test::Testrepo/Mount[repomount]/ensure) defined 'ensure' as 'mounted'
Jan  8 06:25:38 hostname puppet-agent[1330]: (/Stage[main]/Test::Testrepo/Mount[repomount]) Could not evaluate: Field 'device' is required
Jan  8 06:25:38 hostname puppet-agent[1330]: (/Stage[main]/Test::Testrepo/Yumrepo[test-lab]) Dependency Mount[repomount] has failures: true
Jan  8 06:25:38 hostname puppet-agent[1330]: (/Stage[main]/Test::Testrepo/Yumrepo[test-lab]) Skipping because of failed dependencies
Jan  8 06:25:38 hostname puppet-agent[1330]: Finished catalog run in 0.28 seconds
Jan  8 06:55:41 hostname puppet-agent[1330]: Finished catalog run in 0.27 seconds
Jan  8 07:25:43 hostname puppet-agent[1330]: (/Stage[main]/Test::Testrepo/Mount[repomount]/ensure) defined 'ensure' as 'mounted'
Jan  8 07:25:43 hostname puppet-agent[1330]: (/Stage[main]/Test::Testrepo/Mount[repomount]) Could not evaluate: Field 'device' is required
Jan  8 07:25:43 hostname puppet-agent[1330]: (/Stage[main]/Test::Testrepo/Yumrepo[test-lab]) Dependency Mount[repomount] has failures: true
Jan  8 07:25:43 hostname puppet-agent[1330]: (/Stage[main]/Test::Testrepo/Yumrepo[test-lab]) Skipping because of failed dependencies
Jan  8 07:25:43 hostname puppet-agent[1330]: Finished catalog run in 0.28 seconds
Jan  8 07:55:46 hostname puppet-agent[1330]: Finished catalog run in 0.28 seconds

Thanks for any help.
Marco

Felix Frank

Jan 8, 2014, 5:15:24 AM
to puppet...@googlegroups.com
Hi,

can you reproduce this using puppet agent --onetime?

If not, you may have to activate debug logging in your background agent.
Otherwise, it will be difficult to determine what's happening when and why.

Thanks,
Felix

zerozer...@gmail.com

Jan 9, 2014, 5:59:41 AM
to puppet...@googlegroups.com
On Wednesday, January 8, 2014 11:15:24 AM UTC+1, Felix.Frank wrote:
 
can you reproduce this using puppet agent --onetime?

"puppet agent --onetime" always fails, just like "puppet agent --test".
 
If not, you may have to activate debug logging in your background agent. 

I added "PUPPET_EXTRA_OPTS=--debug" in /etc/sysconfig/puppet, but the output in /var/log/messages is not much different AFAICT:

Jan  9 10:19:09 hostname puppet-agent[21589]: Retrieving plugin
Jan  9 10:19:11 hostname puppet-agent[21589]: Caching catalog for hostname.testlab
Jan  9 10:19:11 hostname puppet-agent[21589]: Applying configuration version '1389175620'
Jan  9 10:19:12 hostname puppet-agent[21589]: Finished catalog run in 0.78 seconds
Jan  9 10:49:12 hostname puppet-agent[21589]: Retrieving plugin
Jan  9 10:49:14 hostname puppet-agent[21589]: Caching catalog for hostname.testlab
Jan  9 10:49:14 hostname puppet-agent[21589]: Applying configuration version '1389175620'
Jan  9 10:49:14 hostname puppet-agent[21589]: (/Stage[main]/Test::Testrepo/Mount[repomount]/ensure) defined 'ensure' as 'mounted'
Jan  9 10:49:14 hostname puppet-agent[21589]: FileBucket got a duplicate file {md5}750b1c78de9911d1a9b5cdcc7e33703d
Jan  9 10:49:14 hostname puppet-agent[21589]: (/Stage[main]/Test::Testrepo/Mount[repomount]) Could not evaluate: Field 'device' is required
Jan  9 10:49:14 hostname puppet-agent[21589]: (/Stage[main]/Test::Testrepo/Yumrepo[test-lab]) Dependency Mount[repomount] has failures: true
Jan  9 10:49:14 hostname puppet-agent[21589]: (/Stage[main]/Test::Testrepo/Yumrepo[test-lab]) Skipping because of failed dependencies
Jan  9 10:49:14 hostname puppet-agent[21589]: Finished catalog run in 0.52 seconds

(This log shows a first run with no errors, and a second one with the error)

Any clues or suggestions to try?

Thank you very much.
Marco

Felix Frank

Jan 9, 2014, 6:11:06 AM
to puppet...@googlegroups.com
Hi,

I'm not sure whether debug logging ends up in /var/log/messages. There
may be syslog configuration issues.

Here's an idea:

puppet agent --no-daemonize --debug --verbose --interval=120 --logdest stdout

That should make your debugging agent run once and then repeat every 2
minutes, dumping the output to your terminal (although I'm not sure
whether --logdest is still a valid parameter).

HTH,
Felix

jcbollinger

Jan 9, 2014, 9:58:10 AM
to puppet...@googlegroups.com


On Thursday, January 9, 2014 5:11:06 AM UTC-6, Felix.Frank wrote:
Hi,

I'm not sure whether debug logging ends up in /var/log/messages. There
may be syslog configuration issues.

Here's an idea:

puppet agent --no-daemonize --debug --verbose --interval=120 --logdest stdout

That should make your debugging agent run once and then repeat every 2
minutes, dumping the output to your terminal (although I'm not sure
whether --logdest is still a valid parameter).



It would probably be best (though not absolutely essential) to shut down the agent daemon first.  Also, --logdest shouldn't be needed in that case.  When run with --no-daemonize, the agent already sends its output to stdout.
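Putting that together, something like this (assuming a sysvinit-style CentOS box; adjust the service command to your setup, and check `puppet agent --help` for the flags your version accepts):

```shell
# Stop the background daemon first so runs don't interleave
service puppet stop

# Run a single agent pass in the foreground; with --no-daemonize the
# debug output already goes to stdout, no --logdest needed
puppet agent --onetime --no-daemonize --debug --verbose
```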


John

zerozer...@gmail.com

Jan 10, 2014, 4:49:04 AM
to puppet...@googlegroups.com
On Thursday, January 9, 2014 3:58:10 PM UTC+1, jcbollinger wrote:
 
puppet agent --no-daemonize --debug --verbose --interval=120 --logdest stdout

The --interval option is not accepted, so I fell back to the default interval of 30 minutes and put:
    PUPPET_EXTRA_OPTS="--debug --verbose --logdest /var/log/puppet/debug.log"
…in /etc/sysconfig/puppet, then restarted the daemon to let it run in the background.

The problem remained the same, of course. Attached is an excerpt of the log containing one run with the error and the next one with no errors.
I can see no differences, except that the section between "Applying configuration" and "Finishing transaction" is completely missing from the second run.
How can that be?

Any clues?
Thanks.
Marco
debug.txt

Felix Frank

Jan 10, 2014, 5:05:47 AM
to puppet...@googlegroups.com
Hi,

looks like a minor bug in the mount provider. You may want to file a
report, although you should be able to work around it trivially by
fixing your fstab, correct?

Thanks,
Felix

zerozer...@gmail.com

Jan 10, 2014, 5:31:03 AM
to puppet...@googlegroups.com
On Friday, January 10, 2014 11:05:47 AM UTC+1, Felix.Frank wrote:

looks like a minor bug in the mount provider. You may want to file a
report, although you should be able to work around it trivially by
fixing your fstab, correct?

Yes, it's not a big problem for me. I just wanted to be sure I understood what was happening.

I'll file a report even though I'm not 100% sure "what" to report… :)

Thanks.
Marco 

Felix Frank

Jan 10, 2014, 5:32:59 AM
to puppet...@googlegroups.com
:) OK.

I think your original post to the list was pretty much to the point.

On 01/10/2014 11:31 AM, zerozer...@gmail.com wrote:
>
> I'll file a report even though I'm not 100% sure "what" to report… :)

zerozer...@gmail.com

Jan 10, 2014, 6:53:51 AM
to puppet...@googlegroups.com
On Friday, January 10, 2014 11:32:59 AM UTC+1, Felix.Frank wrote:
 
I think your original post to the list was pretty much to the point. 

I just realized the machine was still running Puppet 2.6.18.
I have now upgraded to 2.7.23 and will wait for a few runs before filing the issue.
The bug appears to still be there, but the log looks slightly different.

Marco 

jcbollinger

Jan 10, 2014, 9:19:39 AM
to puppet...@googlegroups.com

A v2.6 agent should work with a v2.7 master, but it is worthwhile checking out the effect of the upgrade.

Myself, I'm stumped.  I agree with Felix that it looks like there's a bug in the provider, but that's not really saying much more than "it shouldn't do that".  I am curious, however, whether Puppet is actually modifying /etc/fstab on either the failed or the successful run.  I am also curious whether behavior differs when the dodgy Mount is initially mounted and when it is initially unmounted.  None of that really changes anything, though.


John

Felix Frank

Jan 10, 2014, 9:28:40 AM
to puppet...@googlegroups.com
On 01/10/2014 03:19 PM, jcbollinger wrote:
> Myself, I'm stumped. I agree with Felix that it looks like there's a
> bug in the provider, but that's not really saying much more than "it
> shouldn't do that". I am curious, however, whether Puppet is actually
> modifying /etc/fstab on either the failed or the successful run. I am
> also curious whether behavior differs when the dodgy Mount is initially
> mounted and when it is initially unmounted. None of that really changes
> anything, though.

I can add some assumptions :-)

The upgrade will likely not change the issue: the mount type and
provider have likely not seen much action, code-wise, for a couple of years.

The root cause of the erratic behavior is likely that the ensure
property of the mount type does double duty: it tries to sync both the
mounting state and the content of fstab.

The former likely works, but the latter crashes and burns because the
leading space breaks the fstab parsing.

I cannot yet think of a reason for the agent to oscillate between these
behaviors, but it appears "sort of" plausible, or thinkable at least.

Cheers,
Felix

zerozer...@gmail.com

Jan 10, 2014, 10:48:59 AM
to puppet...@googlegroups.com
On Friday, January 10, 2014 3:19:39 PM UTC+1, jcbollinger wrote:
 
A v2.6 agent should work with a v2.7 master, but it is worthwhile checking out the effect of the upgrade.

The upgrade to 2.7.23 had no effect on the issue.
The master runs 3.3.2 by the way; are there issues with this?
 
Myself, I'm stumped.  I agree with Felix that it looks like there's a bug in the provider, but that's not really saying much more than "it shouldn't do that".  I am curious, however, whether Puppet is actually modifying /etc/fstab on either the failed or the successful run.  I am also curious whether behavior differs when the dodgy Mount is initially mounted and when it is initially unmounted.  None of that really changes anything, though.

Puppet has never managed to modify /etc/fstab.
At the moment I can't check the behaviour with the mount already active.

But I discovered a worse effect.
The Puppet class also includes a yumrepo resource which requires the mount to be active (it adds a local repository pointing at that mount point).
Well, the "successful" run does not stop at the failed mount: it proceeds to actually create the yum repo config file, behaving as if the "require" parameter were ignored!
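For reference, here is roughly how the class is wired (a sketch only: the resource names come from the logs, but the device, paths and repo settings are invented placeholders, not the real values):

```puppet
# Hypothetical reconstruction; only the resource names are from the logs.
class test::testrepo {
  mount { 'repomount':
    name    => '/mnt/repo',               # mount point (placeholder)
    ensure  => mounted,
    device  => 'nfsserver:/export/repo',  # placeholder
    fstype  => 'nfs',
    options => 'defaults',
  }

  yumrepo { 'test-lab':
    ensure   => present,
    baseurl  => 'file:///mnt/repo',       # placeholder local path
    enabled  => 1,
    gpgcheck => 0,
    require  => Mount['repomount'],       # should skip when the mount fails
  }
}
```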

To me it looks like the first, failed run leaves the mount resource in a "suspended" state. On the second run Puppet just "closes/finishes" it without re-processing the resource from scratch, so it doesn't catch the problem the resource might have; this time the processing is considered complete, so Puppet proceeds with the other resources. On the next run it starts everything from scratch, catches the fstab problem, fails again, and so on.


Marco