Internet Browsing on Server Subnets


HoneyBadger

May 22, 2012, 10:18:41 AM
to SANS Internet Storm Center / DShield
I'm hoping to solicit some feedback in terms of what others are doing
to limit or prevent Internet browsing on Server subnets. I'm looking
to put a policy in place to completely prevent Internet browsing
(technical control) from all of our Server subnets. Looking at
current traffic patterns, it seems there is currently a good amount of
browsing going on from these subnets. Given the dangers of this, our
plan is to prevent Internet browsing from all servers using our
current web filtering software. We are receiving a good amount of
pushback from our Server engineering group to at least grant them a
back door in case they need to troubleshoot an application w/ a
vendor. I have countered that I can provide a whitelist of sites
that they will be able to browse to, for example microsoft.com,
ibm.com, etc.; however, they would like a complete override that will
allow them to get to any site they want.
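For illustration, a destination-domain whitelist of that kind can be expressed in most web filters and proxies. A rough sketch in Squid syntax (the subnets and domains below are placeholders, not a tested configuration):

```
# squid.conf fragment: illustrative placeholders only
acl server_subnets src 10.10.20.0/24 10.10.30.0/24
acl vendor_sites dstdomain .microsoft.com .ibm.com
http_access allow server_subnets vendor_sites
http_access deny server_subnets
```

The final deny rule blocks all other browsing from the server subnets, while the whitelist stays a single list to maintain.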

I'd like to find out what others in Security are doing? More
specifically:

1. Do you have a policy / technical control in place to prevent
Internet browsing from server subnets?
2. Do you prevent all web traffic or do you have some sort of white
list for traffic coming from server subnets?
3. Have you ever come across a scenario where a vendor specifically
asked you to download / run something from the Internet on the Server
itself?

Thanks in advance.

Jim

May 23, 2012, 1:29:17 PM
to iscds...@googlegroups.com
Due to our hosting provider's bandwidth pricing, I left outbound connections to HTTP, HTTPS, and FTP blocked.
I use a proxy server in the office, via a back-end connection, so I can still access the Internet; it's just routed through the office Internet connection. Traffic is logged on the proxy and the office firewall.
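That routing works for scripts on the servers as well as for browsers. A minimal Python sketch, assuming a hypothetical internal proxy at proxy.internal:3128 (the name and port are placeholders):

```python
import urllib.request

# Hypothetical internal proxy; replace with your office proxy address.
proxy = urllib.request.ProxyHandler({
    "http": "http://proxy.internal:3128",
    "https": "http://proxy.internal:3128",
})
opener = urllib.request.build_opener(proxy)
# After install_opener(), plain urlopen() calls route via the proxy,
# so the traffic shows up in the proxy and firewall logs.
urllib.request.install_opener(opener)
```

The point is that the servers never need direct outbound access; anything they fetch is forced through a logged choke point.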
For me, this keeps it simple ;-)

--
Jim

valdis.k...@vt.edu

May 23, 2012, 1:25:29 PM
to iscds...@googlegroups.com
On Tue, 22 May 2012 07:18:41 -0700, HoneyBadger said:
> I'm hoping to solicit some feedback in terms of what others are doing
> to limit or prevent Internet browsing on Server subnets. I'm looking
> to put a policy in place to completely prevent Internet browsing
> (technical control) from all of our Server subnets.

Be careful what you ask for; you may surely get it.

> Looking at current traffic patterns, it seems there is currently a good amount
> of browsing going on from these subnets. Given the dangers of this, our plan
> is to prevent Internet browsing from all servers using our current web
> filtering software. We are receiving a good amount of pushback from our Server
> engineering group to at least grant them a back door in case they need to
> troubleshoot an application w/ a vendor.

"Given the dangers"? Please quantify the *actual* danger you're worried about.

Probably a lot of that "browsing" is your Redhat servers calling home to RedHatNetwork
for available updates, your Suse servers calling home to Novell, your... you get
the idea.

> I have countered with I can provide a white list of sites that they will be
> able to browse to; for example microsoft.com, ibm.com, etc. however they would
> like a complete override that will allow them to get to any site they want.

You're perfectly welcome to play whack-a-mole and figure out all the machines
needed to make this work. You whitelist rhn.redhat.com, and that makes things
sort of work - till you discover that some of the files are downloaded from another
server at RedHat. And you should plan on starting to drink heavily if a vendor
decides to use a CDN like Akamai.

You might want to stop and think about what actual threat you're trying to close
down by preventing browsing - what's the threat model here? Figure most of the
"internet browsing" is actually automated phone-home activity. If that bothers you,
*turn off the automated facility on the server*. If the problem is sysadmins who
are surfing the web from servers, you probably need to figure out why they're
doing it from the server rather than from their desk, and deal with *that* instead.

(For the record, we have about 10K square feet of raised floor across the hall, and
we don't do any prevention of web traffic - too many automated processes do
phone-home, and none of our sysadmins wants to work in there any longer than
necessary - it's *noisy* in there and nowhere near enough comfortable chairs ;)

Fazzina, Angelo

May 23, 2012, 2:21:15 PM
to iscds...@googlegroups.com
I do not work with MS Windows at all, but I have had to do #3 often with the vendor or manufacturer of server hardware.

Typically the OS should be installed without browser software... yes?

Maybe one server (with a browser) that all the others have access to could fetch the necessary content via sftp, curl, wget, etc.?


This can be a delicate situation to navigate, so I would propose a policy, have it vetted by the governance board/committee,
and go from there to get the goal adopted.
Good Luck
-Angelo Fazzina

Seth Art

May 23, 2012, 3:42:09 PM
to iscds...@googlegroups.com
I think you are on the right track. Using a web proxy solution that
only allows outbound access to a white-list of allowed domains is
definitely a good solution, and one that works. The list might get
large(ish), but no matter what, it is a whitelist, which is a great
step in the right direction.

Of course, you will have to back this up with a firewall ACL that only
allows the web proxy solution to initiate outbound connections to the
Internet. Otherwise, the servers could just bypass the proxy.
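As a hedged sketch of that backstop ACL on a Linux firewall (the addresses and interface name are hypothetical, and a real rule set would also handle established/related state and logging):

```
# Illustrative only: 10.10.20.5 is a hypothetical proxy host on the
# server subnet 10.10.20.0/24; eth1 faces the Internet.
iptables -A FORWARD -o eth1 -s 10.10.20.5 -p tcp -m multiport --dports 80,443 -j ACCEPT
iptables -A FORWARD -o eth1 -s 10.10.20.0/24 -j DROP
```

With a pair of rules like these, a server that ignores its proxy settings simply gets no outbound connectivity at all.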

This protection helps you in multiple ways, mainly by making data
exfiltration and command and control extremely hard to pull off.

If one of your boxes gets owned (i.e., malware on removable media, a
vulnerable publicly accessible network service), and the only
way out of your server subnet is through a proxy that only allows
outbound access to a limited list of websites on one of two ports
(443/tcp and 80/tcp), it is going to be mighty hard for someone to
exfiltrate data out of your network.

Of course this is a defense in depth strategy, not a silver bullet.
There will always be people telling you "But our servers are not
owned". haha. A lot harder to prove than people think.

I do not have as much experience to draw on for your last question,
but I don't really see a problem with a manual override, as long as it
is 1) Temporary, and 2) Documented. I'm less concerned with the
machine that is being updated and watched closely, than I am with the
entire network that is NOT being watched as closely, especially if it
is Internet facing.

Good luck!

Seth

Annie

May 23, 2012, 7:38:21 PM
to iscds...@googlegroups.com
I would perform a business impact analysis before implementing changes that could cost someone their job. Then I would examine the organization's Internet policy, awareness training, and update training. Then it could be of benefit to examine your architecture and consider a proxy connection, a web-filtering whitelist, firewalls, ACLs, and more. It is important to balance the costs to implement and sustain. Planning is key.

Sent from my iPhone

Annie

May 23, 2012, 7:47:26 PM
to iscds...@googlegroups.com
This is a credible response. Good topic.

Sent from my iPhone

valdis.k...@vt.edu

May 23, 2012, 9:10:45 PM
to iscds...@googlegroups.com
On Wed, 23 May 2012 15:42:09 -0400, Seth Art said:

> This protection helps you in multiple ways, mainly making data
> exfiltration, and command and control extremely hard to pull off.

If you think that blocking outbound browsing is going to stop exfiltration,
you're in for a rude surprise.

> If one of your boxes gets owned (i.e., malware on removable media, a
> publicly accessible network service that is vulnerable), and the only
> way out of your server subnet is through a proxy that only allows
> outbound access to a limited list of websites on one of two ports
> (443/tcp and 80/tcp), it is going to be mighty hard for someone to
> exfiltrate data out of your network.

Nope. Not going to be hard at all.

Hint: How did the attacker *kick off* the exfiltration? They *already* have a
connection in from the outside (unless you got pwned off a USB memory stick, in
which case you got *bigger* policy issues)

Tom Byrnes

May 24, 2012, 3:49:09 AM
to iscds...@googlegroups.com
Why should there be ANY "Internet Browsing" from a server subnet?

If you mean http(s) port 80 and 443 outbound requests, then that is a
bit different, in that your servers may, for any one of a number of
legitimate reasons, be trying to contact other locations via outbound
HTTP(S). However, you should KNOW what those target hosts are, and the
reasons for the connections.

If you don't KNOW what outside hosts your servers are supposed to be
connecting to, and why, then you have already lost.

If your policy on your server subnet is default allow outbound, you've
been pwned for ages.



Tom Byrnes

May 24, 2012, 3:49:34 AM
to iscds...@googlegroups.com
Seriously?

Do you actually think this consultantspeak is real advice?

Precisely what actionable advice, other than spend lots of $ on
consultants, did you offer?






Shaun O'Leary

May 24, 2012, 10:48:41 AM
to iscds...@googlegroups.com
I worked for a number of years at an MSO that, in my opinion, dealt
with this fairly.

We did not allow any outbound browser sessions from any server in our
data center.

All updates, FTP, SCP etc. were handled by one sys admin team using
internal and external bastion servers. This was a busy group but all
the other sys admins, and there were lots, worked well with this.
Sometimes there was friction but it was inevitably due to poor
communication, not process. To increase efficiency, we tried to always
use a 'pull' model.

This 'FTP bastion' system supported sys admins as well as many business
units, including Finance and Legal.


Shaun

HoneyBadger

May 24, 2012, 11:15:20 AM
to SANS Internet Storm Center / DShield
We are moving towards that, having a specific team or more likely a
small subset of SysAdmins being able to perform functions such as FTP,
SCP, etc., and only from specific host(s). With all due respect, I
don't have the time or the money to perform a business impact analysis,
and for something like this I don't see the point. I'm convinced that
we can map our expected outbound Internet traffic fairly quickly and
then use our Enterprise web filtering application to create a
whitelist for specific subnets.
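Mapping that expected outbound traffic can be partly automated from proxy or filter logs. A minimal sketch, assuming a Squid-style access log where the request URL is the seventh whitespace-separated field (adjust the index to your filter's export format):

```python
from collections import Counter
from urllib.parse import urlparse

def top_destinations(log_lines, n=10):
    """Tally destination hosts seen in proxy log lines."""
    hosts = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) < 7:
            continue  # skip malformed lines
        url = fields[6]
        # Full URLs (GET) parse normally; CONNECT targets look like
        # "host:443", so fall back to splitting on the colon.
        host = urlparse(url).hostname or url.split(":")[0]
        hosts[host] += 1
    return hosts.most_common(n)
```

Running this over a week or two of logs gives a candidate whitelist to review before turning on enforcement.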