Puppetlabs Firewall


Danny Roberts

Jul 1, 2014, 10:30:57 AM
to puppet...@googlegroups.com
I am using the Puppetlabs firewall module to manage our firewall. All servers get our core ruleset:

modules/mycompany/manifests/firewall/pre.pp:

class mycompany::firewall::pre {

  Firewall {
    require => undef,
  }

  firewall { '000 accept all icmp':
    proto   => 'icmp',
    action  => 'accept',
  }
  firewall { '001 accept all to lo interface':
    proto   => 'all',
    iniface => 'lo',
    action  => 'accept',
  }
  firewall { '002 accept related established rules':
    proto   => 'all',
    state   => ['RELATED', 'ESTABLISHED'],
    action  => 'accept',
  }

}

modules/mycompany/manifests/firewall/core.pp:

class mycompany::firewall::core {

  firewall { '100 allow SSH':
    proto   => 'tcp',
    port    => [22],
    action  => 'accept',
  }
  firewall { '101 allow salt-minion communication':
    proto   => 'tcp',
    port    => [4505,4506,4510,4511],
    action  => 'accept',
  }
  firewall { '102 allow DNS UDP':
    proto   => 'udp',
    port    => [53],
    action  => 'accept',
  }
  firewall { '103 allow DNS TCP':
    proto   => 'tcp',
    port    => [53],
    action  => 'accept',
  }
  firewall { '104 allow NTP traffic':
    proto   => 'udp',
    port    => [123],
    action  => 'accept',
  }

}

modules/mycompany/manifests/firewall/post.pp:

class mycompany::firewall::post {

  firewall { '999 drop all':
    proto   => 'all',
    action  => 'drop',
    before  => undef,
  }

}
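For context, classes like these are typically glued together with resource defaults in site.pp. The following is a sketch, not Danny's actual glue code; in particular, the purge of unmanaged rules is an assumption, suggested by the "ensure: removed" notices quoted later in the thread:

```puppet
# site.pp (sketch): wire the pre/core/post classes together.
# Purge any iptables rules not managed by Puppet (assumption).
resources { 'firewall':
  purge => true,
}

# Default ordering: every firewall rule runs after the 'pre' rules
# and before the 'post' drop rule.  Note that pre.pp and post.pp
# above deliberately unset these defaults for themselves.
Firewall {
  require => Class['mycompany::firewall::pre'],
  before  => Class['mycompany::firewall::post'],
}

include mycompany::firewall::pre
include mycompany::firewall::core
include mycompany::firewall::post
```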

We also have some rules that are added based on server roles dynamically via hiera:

modules/mycompany/manifests/firewall/puppet.pp:

class mycompany::firewall::puppet {

  firewall { '105 allow puppet communication':
    proto   => 'tcp',
    port    => [8140],
    action  => 'accept',
  }

}

modules/mycompany/manifests/firewall/database.pp:

class mycompany::firewall::database {

  firewall { '106 allow Percona/MySQL communication':
    proto   => 'tcp',
    port    => [3306],
    action  => 'accept',
  }

}
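The role-based wiring mentioned above might look something like this (a sketch; the `classes` Hiera key is an assumption, since the original post does not show its Hiera layout):

```puppet
# site.pp (sketch): declare role-specific firewall classes listed in
# Hiera, e.g. a role's YAML containing
#   classes:
#     - mycompany::firewall::puppet
#     - mycompany::firewall::database
hiera_include('classes')
```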

This worked perfectly when I spun up a server with no role (and therefore no extra rules). However, when I spun up servers with the 'puppet' & 'database' roles (and therefore the extra rules) it hung at:

Notice: /Stage[main]/Mycompany/Firewall[9001 fe701ab7ca74bd49f13b9f0ab39f3254]/ensure: removed

My SSH session eventually disconnects with a broken pipe. The puppet server I spun up yesterday was available when I got into the office this morning, so it seems they do eventually come back, but it takes some time. Is there any reason I am getting cut off like that, and is there any way to avoid it?

Danny Roberts

Jul 2, 2014, 2:13:05 AM
to puppet...@googlegroups.com
Just a slight update: this now seems to happen on any server, not just those with the extra rules.

jcbollinger

Jul 2, 2014, 9:27:05 AM
to puppet...@googlegroups.com


On Tuesday, July 1, 2014 9:30:57 AM UTC-5, Danny Roberts wrote:
I am using the Puppetlabs firewall module to manage our firewall. All servers get our core ruleset:
[...]
This worked perfectly when I spun up a server with no role (and therefore no extra rules). However, when I spun up servers with the 'puppet' & 'database' roles (and therefore the extra rules) it hung at:

Notice: /Stage[main]/Mycompany/Firewall[9001 fe701ab7ca74bd49f13b9f0ab39f3254]/ensure: removed

My SSH session eventually disconnects with a broken pipe. The puppet server I spun up yesterday was available when I got into the office this morning, so it seems they do eventually come back, but it takes some time. Is there any reason I am getting cut off like that, and is there any way to avoid it?


I'm a little confused.  What does your SSH session have to do with it?  I don't find it especially surprising that an existing SSH connection gets severed when the destination machine's firewall is manipulated by Puppet, if that's what you're describing.  I would not necessarily have predicted it, but in retrospect it seems reasonable.

I'm supposing that you were connected remotely via SSH to the machine on which the agent was running, following the progress of the run in real time.  In that case, are you certain that the run was in fact interrupted at all?  Maybe the output from the remote side was curtailed when your SSH connection was disrupted, but the run continued.  Or if you were running un-daemonized, then perhaps the run was interrupted when severing the SSH connection produced a forced logout from the controlling terminal.

Either way, the fact that the subject systems eventually recover on their own makes me suspect that the problem lies in how you were monitoring the run, rather than in your manifests.  You could try running puppet in daemon mode, or otherwise disconnected from a terminal, and checking the log after the fact to make sure everything went as it should.


John

Danny Roberts

Jul 4, 2014, 8:19:17 AM
to puppet...@googlegroups.com
To clarify: we have to use SSH to connect to the servers in this environment; they are all VMs, and the hosting provider does not give us any means of accessing a console (not ideal, but sadly beyond our control).

Our standard process after building a new server is to run Puppet manually once to bring it up to our standard ASAP. Beyond that point Puppet normally runs daemonized.

This is our first production environment that uses the Puppetlabs Firewall module, so it is our first time encountering this in anger. Oddly, the server remains unreachable via SSH for at least two hours afterwards, which is enough time for three or four Puppet runs to sort out any issues. That still seems rather long.

I'm about to try another test by stopping the firewall before doing another Puppet run on a fresh server to see how that behaves.

Ken Barber

Jul 4, 2014, 10:48:33 AM
to Puppet Users
So puppetlabs-firewall is an active provider: whenever a rule 'runs' in the catalog it is applied straight away. You are probably seeing this because a blocking rule (like a DROP, or a default DROP for the table) is being applied before the SSH allowance rule.

Take a close look at the pre/post suggestion here:
https://forge.puppetlabs.com/puppetlabs/firewall#beginning-with-firewall

Notice how it suggests creating a coarse-grained ordering that sets up the "DROP" rule as the very last thing that runs. To be clear, this concept is about Puppet resource execution order, not the order in which the rules appear in iptables (i.e. the number in the title).
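To illustrate the distinction (a hypothetical minimal example, not code from this thread): the numeric prefixes only control position within the iptables chain, so without an explicit relationship Puppet is free to apply the drop rule first, severing SSH mid-run.

```puppet
# Without ordering, Puppet may apply '999 drop all' before
# '100 allow SSH', even though 100 sorts first in the chain.
firewall { '999 drop all':
  proto  => 'all',
  action => 'drop',
}
firewall { '100 allow SSH':
  proto  => 'tcp',
  port   => [22],
  action => 'accept',
}

# One explicit fix: a resource relationship forcing catalog order.
Firewall['100 allow SSH'] -> Firewall['999 drop all']
```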

ken.

Cristian Falcas

Jul 10, 2014, 6:59:38 PM
to puppet...@googlegroups.com
Hi,

We were hitting the same issue. We solved it like this:
- we have a fact that makes sure the iptables service is up on the machine; otherwise, if the service is down, there will be no purge and we would still get the previous rules.
- we put the purge resource in a special class (something like mycompany::firewall::puppet_purge) and force it to run in the last stage (we use stages).
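The approach described above might be sketched as follows. The fact name `iptables_running` and the stage name are assumptions; the post does not give them:

```puppet
# site.pp (sketch): purge unmanaged rules only in a final stage,
# and only when a custom fact reports iptables is actually running.
stage { 'fw_purge': }
Stage['main'] -> Stage['fw_purge']

class mycompany::firewall::puppet_purge {
  # 'iptables_running' is a hypothetical custom fact (string-valued,
  # as facts were in this era of Facter).
  if $::iptables_running == 'true' {
    resources { 'firewall':
      purge => true,
    }
  }
}

class { 'mycompany::firewall::puppet_purge':
  stage => 'fw_purge',
}
```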

Best regards,
Cristian Falcas


