
ping works, but ftp/telnet get "no route to host"


Mark Bixby

Jul 21, 1992, 11:43:20 PM
Why would I be able to ping a site OK, but when I try to ftp or telnet to it
I receive a "no route to host" error? Traceroute can find the host, and it
is apparently on Alternet, if that makes a difference.

Any info would be most helpful to this tcp/ip novice. Thanks.

--
Mark Bixby Internet: ma...@spock.dis.cccd.edu
Coast Community College District 1370 Adams Avenue
District Information Services Costa Mesa, CA, USA 92626
Technical Support (714) 432-5064
"You can tune a file system, but you can't tune a fish." - tunefs(1M)

John Ioannidis

Jul 22, 1992, 9:41:35 AM
In article <BrruC...@spock.dis.cccd.edu> ma...@spock.dis.cccd.edu (Mark Bixby) writes:
>Why would I be able to ping a site OK, but when I try to ftp or telnet to it
>I receive a "no route to host" error? Traceroute can find the host, and it
>is apparently on Alternet, if that makes a difference.
>
>Any info would be most helpful to this tcp/ip novice. Thanks.
>

The site you are trying to ping is running a firewall gateway, because
they're too lazy to beef up their host security and are relying on the
firewall to protect themselves against external attacks. The site
router(s) look inside your packets, and if they are not to/from
specific hosts or ports, they don't forward them, but rather send you
an ICMP host unreachable message back.
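The symptom described above is visible from the client side: the router's ICMP message surfaces in ftp or telnet as the errno EHOSTUNREACH, whose standard text is exactly "no route to host". A minimal sketch in modern Python (illustrative only, and obviously not a tool the original posters had; the host and port are whatever you were probing):

```python
import errno
import os
import socket

# EHOSTUNREACH is the errno the kernel raises when a router answers a
# TCP SYN with an ICMP host-unreachable instead of forwarding it; its
# message is typically "No route to host".
print(os.strerror(errno.EHOSTUNREACH))

def probe(host, port, timeout=5):
    """Attempt a TCP connect and report whether the failure looks like
    administrative filtering rather than a genuine routing problem."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "connected"
    except OSError as e:
        if e.errno == errno.EHOSTUNREACH:
            return "filtered: router returned ICMP host unreachable"
        return "failed: %s" % (os.strerror(e.errno) if e.errno else e)
```

Since ping uses ICMP echo rather than TCP, it can succeed while the same host's TCP ports return this error, which matches the behavior in the original question.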

I wish I had a transcript of Dave Clark's talk at the IETF last week.
He said some great things about firewall gateways and mailbridges, and
how they've essentially destroyed the whole purpose of having an IP
internet, and have forced a lot of us to use mail as a transport-level
protocol.

/ji

In-Real-Life: John "Heldenprogrammer" Ioannidis
E-Mail-To: j...@cs.columbia.edu
V-Mail-To: +1 212 854 8120
P-Mail-To: 450 Computer Science \n Columbia University \n New York, NY 10027

Vernon Schryver

Jul 22, 1992, 12:14:36 PM
In article <BrsM1...@cs.columbia.edu>, j...@cs.columbia.edu (John Ioannidis) writes:
> ....

> The site you are trying to ping is running a firewall gateway, because
> they're too lazy to beef up their host security and are relying on the
> firewall to protect themselves against external attacks....


That your return address is at a university is somehow unsurprising.


Many of us out here in the commercial world have thousands of machines
on corporate networks with minimum internal, inter-machine security.

Using firewalls allows us to do the jobs we're paid to do without
spending so much time fiddling with "security," whether choosing
passwords, typing them, or using FTP instead of rcp (.rhosts are
unsafe, remember?). It is true that parts of the commercial world do
not mind wasting high salaries and far greater "lost opportunity
costs" to have all machines on their networks "secure." As far as I
know, no such commercial organization is what anyone would call
nimble or industry leading. (Yes, before everyone asserts their
Military Industrial employer is different, I'm sure there must be at
least one exception.)

Firewalls are like guards at the front desk instead of patrolling the
halls. Some places have guards in the halls, rules about leaving
papers on your desk, and so forth, but many of us decline to work in
such places.

It is nicest to not have any guards, just as it is nicest to not worry
about locking your door. Unfortunately, zillions of tiny minds,
frequently jejune university students, have proven that the Internet is
too much like a big city to do without locks.

To have someone at a big city university suggest that we lock our
bedrooms instead of our front doors is either amusing or offensive.


Vernon Schryver, v...@sgi.com

Oren Shani

Jul 23, 1992, 1:29:40 AM
In article <BrruC...@spock.dis.cccd.edu> ma...@spock.dis.cccd.edu (Mark Bixby) writes:
>Why would I be able to ping a site OK, but when I try to ftp or telnet to it
>I receive a "no route to host" error? Traceroute can find the host, and it
>is apparently on Alternet, if that makes a difference.
>
>Any info would be most helpful to this tcp/ip novice. Thanks.
>
>--
Looks like there are traffic filters somewhere along the route. Some more
advanced routers and bridges are able to filter traffic and can, for
instance, allow ping or finger while disabling ftp or telnet to certain
targets.
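A filter of that sort amounts to an ordered rule list consulted per packet, first match wins. A toy sketch (Python for illustration; the rule set is invented and is not any real router's syntax):

```python
# Toy packet filter mirroring the behavior described above:
# ICMP (ping) and finger pass, ftp and telnet are refused.
RULES = [
    ("icmp", None, "allow"),   # ping gets through
    ("tcp", 79, "allow"),      # finger
    ("tcp", 21, "deny"),       # ftp  -> ICMP host unreachable
    ("tcp", 23, "deny"),       # telnet -> ICMP host unreachable
]

def filter_packet(proto, dport):
    """Return the action for a packet; default-allow, as in many
    early filtering routers."""
    for rule_proto, rule_port, action in RULES:
        if rule_proto == proto and (rule_port is None or rule_port == dport):
            return action
    return "allow"

print(filter_packet("icmp", None))  # allow
print(filter_packet("tcp", 23))     # deny
```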

This probably should be in the FAQ list (?)
--
__ __ Oren Shani (sh...@genius.tau.ac.il)
/ / / Faculty of Engineering, Tel Aviv university
/ / -- Israel
/__/ . __/ . "Hold your temper" -- The caterpillar to Alice

Gary Heston

Jul 23, 1992, 10:20:26 AM
In article <BrsM1...@cs.columbia.edu> j...@cs.columbia.edu (John Ioannidis) writes:
>In article <BrruC...@spock.dis.cccd.edu> ma...@spock.dis.cccd.edu (Mark Bixby) writes:
>>Why would I be able to ping a site OK, but when I try to ftp or telnet to it
>>I receive a "no route to host" error? ....

>The site you are trying to ping is running a firewall gateway, because
>they're too lazy to beef up their host security and are relying on the
>firewall to protect themselves against external attacks.

I have to take exception to this remark. Use of a firewall doesn't indicate
laziness on the part of a site; it most probably means that the persons
responsible for the Internet connection and security of the site's net are
either too understaffed to maintain all the hosts on their site, or they
don't have control over all the hosts, and are therefore not able to make
them secure. And there are doubtless many sites that suffer from both
problems.

>I wish I had a transcript of Dave Clark's talk at the IETF last week.
>He said some great things about firewall gateways and mailbridges, and
>how they've essentially destroyed the whole purpose of having an IP
>internet, and have forced a lot of us to use mail as a transport-level
>protocol.

Yeah, I'd probably enjoy reading it myself. Unfortunately, with the explosive
growth of the net, it's no longer an approximation of an ideal world. In
an ideal world, we wouldn't need locks on our doors, keyswitches in our
cars, or firewalls on our nets.

There are other considerations, too; accounting for net traffic (it does
cost someone money somewhere down the line), maintaining security of
proprietary, sensitive, confidential, or classified information, and
ensuring that resources are used for the intended purpose by the people
they're provided for.

Flaming admins as being "lazy" because a firewall is in place is *way*
out of line.


--
Gary Heston SCI Systems, Inc. ga...@sci34hub.sci.com site admin
The Chairman of the Board and the CFO speak for SCI. I'm neither.
"Always remember, that someone, somewhere, is making a product that will
make your product obsolete." Georges Doriot, founder of American R & D.

Steve Dyer

Jul 23, 1992, 12:43:14 PM
I was always of the opinion that "firewalls" and "mailbridges" (as they
were originally proposed for use when the ARPAnet and MILNET split) were a
Bad Thing. To a certain extent I still agree. However, after my experience
with this most recent Sun hacker/cracker who was meandering around the net
a few months ago invading hundreds of Suns and Ultrix machines, I have a
different opinion. I was in the unfortunate situation of being responsible
for a group of Suns which were being invaded, and the loss of time and the
disruption which the research group experienced due to this was more than
annoying; it disrupted and sometimes destroyed real work. I was doing
this strictly pro-bono in an informal capacity with a group I am associated
with, but at least I'm pretty familiar with the kinds of problems there are.
God help the burgeoning majority of workstation users who are totally
ignorant of issues like security as it relates to networks.

Listen, unless someone has a dedicated system manager who does nothing
else, and is a security fiend, it is very difficult to be protected
against someone who has infinite time, machine-like patience and an
encyclopedic knowledge of existing security holes in the binary distributions
of systems as shipped from companies like Sun. Oh, and did I mention Sun?
These days, the situation is much more likely to be an autonomous
researcher taking a commodity out of a box and plugging it into their
institution's 10-Base-T connector in the wall. Their interests are not
security; they've purchased this box to get their job done. You can't hand
them a 20 page paper giving them instructions on how to FTP and then apply
the 30 most recent program patches to their version of the OS, that is,
once they've determined that patch #8 doesn't conflict with patch #1
and if they haven't upgraded their OS and wiped out earlier patches which
never got into subsequent commercial releases. And few sites are large
enough and deem it important enough to provide support for this endeavor
centrally.

The creation of CERT was a good idea, but it's so far been mainly
reactive. That's not a criticism, mind you--right now, that's a
full time job as it is.

I see the use of gateways and other technologies to provide a firewall
as inevitable and, for some sites, essential to their use of internetworks
today. It's not just big multi-billion companies worrying about the loss
of trade secrets anymore, it's a matter of allowing unsophisticated users
to get their work done without interference from some sociopath with too
much time on his hands.

There are some real structural problems here. Just one part of this
is the attention to security in their distributed products by the OS
vendors, which is remarkably lackluster. Of course, what do you expect
when they don't warrant their software to actually DO anything anyway
except take up space on a distribution medium? :-)


--
Steve Dyer
dy...@ursa-major.spdcc.com aka {ima,harvard,rayssd,linus,m2c}!spdcc!dyer

Marcus J. will do TCP/IP for food Ranum

Jul 24, 1992, 12:57:48 AM
>I was always under the opinion that "firewalls" and "mailbridges" (as they
>were originally proposed for use when the ARPAnet and MILNET split) were a
>Bad Thing. To a certain extent I still agree.

Firewalls are a lot like Jersey barriers on a freeway. They keep
the irresponsible drunks in the other lane from being able to hit you
head on.

Since a firewall is the individual site's decision, it's neither
good nor bad - it's just something you feel you need or not. There are a
LOT of companies on the net who wouldn't be there if they didn't have a
decent firewall.

mjr.

[Incidentally, some recent stuff about the firewall I run is up for
FTP from decuac.dec.com, /pub/docs/firewall/*]

Marcus J. will do TCP/IP for food Ranum

Jul 24, 1992, 12:52:28 AM
>Use of a firewall doesn't indicate
>laziness on the part of a site; it most probably means that the persons
>responsible for the Internet connection and security of the sites' net are
>either too understaffed to maintain all the hosts on their site, or they
>don't have control over all the hosts, and are therefore not able to make
>them secure.

It also can mean that the site cares about security. There are
loads of sites on the net that run NIS... A firewall helps. Having a
firewall changes your problem domain. Basically, once you are firewalled,
you presumably *still* have some security - but the firewall plays a
multi-part role as:
a) shield - making things harder is always better.
b) fly-paper - detect intrusion attempts, and you *bet* I do.
c) a logger - it is hard to get through any decent firewall
without leaving a logged trace. Note the phraseology
here - many firewalls are not what I would call "decent".

mjr.

Steven Bellovin

Jul 24, 1992, 3:10:56 PM
Let me very strongly second what Marcus Ranum said about the need for
firewalls. No, I don't want to run one. However, given the abominable
state of host security, I have no choice. You can blame software
designers for not paying enough attention to the problem (and certainly,
that's some of the trouble), or you can blame the current state of
software engineering (if all large programs have bugs, then by Murphy's
Law all large network servers have security bugs), or you can blame
lax administration by folks who are more interested in getting their own
work done. It doesn't matter. I may or may not be able to keep my
own machine secure (modulo new surprises from the Hole of the Month Club);
I *know* I can't secure the tens of thousands of machines connected
to AT&T's internal networks.

Are there attackers out there? You'd better believe it. Skeptics are
invited to ftp dist/internet_security/dragon.{dvi,ps} from research.att.com;
it's a draft of a paper I'll be presenting at the UNIX Security Symposium.

--Steve Bellovin

Nicholas R. Trio

Jul 24, 1992, 4:35:44 PM
I do use firewalls/secure servers here at IBM. I've sometimes thought
of myself as a "bad neighbor" or a leech, since my users can get out
to the Internet, but we don't really provide much return service for
others. However, it is a necessary evil because of folks who I'm sure
would love to get into our systems.

When someone asks me about networks, I think of them as a highway, with
folks' host systems/workstations as houses with front doors and locks
on them. In an ideal environment, every workstation or host would have
really secure front doors on them, and everyone can drive up to the houses
anywhere on the net...only if they have the right key can they get
into the house.

The problems are (1) the "front doors" to the computers just aren't strong
enough and (2) even if they were, it's possible to get copies of the
"keys" (passwords, etc.) by sniffing the network. Thus, for the most
part, I have to have a gate that only allows folks to get out, and only
lets in those who are authentic users.

Authentication is possible (many places are using authenticator
"smart cards" like the Digital Pathways' Secure-Net Key, which allow
for authentication of remote users). However, for the immediate future,
I suspect many organizations worried about who's "driving up to their
door" will use firewalls.

Nick Trio (n...@watson.ibm.com)
IBM T.J. Watson Research Center

Thomas Eric Brunner

Jul 24, 1992, 12:10:06 PM
In article <1992Jul23.1...@sci34hub.sci.com> ga...@sci34hub.sci.com (Gary Heston) writes:
>In article <BrsM1...@cs.columbia.edu> j...@cs.columbia.edu (John Ioannidis) writes:
>>In article <BrruC...@spock.dis.cccd.edu> ma...@spock.dis.cccd.edu (Mark Bixby) writes:
>>>Why would I be able to ping a site OK, but when I try to ftp or telnet to it
>>>I receive a "no route to host" error? ....

They are probably doing packet filtering based on ports at one or another of
the routers under their administrative control, for reasons of their own.

>>The site you are trying to ping is running a firewall gateway, because
>>they're too lazy to beef up their host security and are relying on the
>>firewall to protect themselves against external attacks.

Hmm, an ad hominem explanation, always a pleasure to read in a technical list.
Having reluctantly made a few dollars working for or against host security,
I respectfully note to those not entirely convinced by the rationale offered
above, that for reasons of their own, the site in question, or any site,
may have chosen to obtain hosts which meet specific local needs, and don't
as yet meet a narrow subset of features thought of as offering "security"
to ip- (or smtp-, or decnet-, or uucp-) addressable hosts. In short, they
may be heterogeneous, with some hosts meeting higher locally-defined needs
than denial of unauthorized use -- like computing, for instance.

>I have to take exception to this remark. Use of a firewall doesn't indicate
>laziness on the part of a site; it most probably means that the persons
>responsible for the Internet connection and security of the sites' net are
>either too understaffed to maintain all the hosts on their site, or they
>don't have control over all the hosts, and are therefore not able to make
>them secure. And there are doubtless many sites that suffer from both
>problems.

They may also have intellectual property, or operational function, which
they value sufficiently to attempt some form of administrative filtering,
in addition to the staffing and competency issues.

>>I wish I had a transcript of Dave Clark's talk at the IETF last week.
>>He said some great things about firewall gateways and mailbridges, and
>>how they've essentially destroyed the whole purpose of having an IP
>>internet, and have forced a lot of us to use mail as a transport-level
>>protocol.

Dave is usually correct, but as he thinks quite a bit more than many, and
says what he thinks, he is frequently not correct. Send him mail and ask
for a copy, or invite him to write. Perhaps he's been misstated in this
summary of his remarks. In any event, this was not one of the more important
topics on the IETF agenda; had it been, I'm sure there would have been
other points of view expressed, as well as discussion of technical details
of implementation, which are more to the point.

I'm looking forward to John's posting, "Administrative Packet Filtering
Considered Harmful"... in comp.security.misc, or as an internet-draft...

--
#include <std/disclaimer.h>
Eric Brunner, Tule Network Services
uunet!practic!brunner or practic!bru...@uunet.uu.net
trying to understand multiprocessing is like having bees live inside your head.

Jean-Francois Lamy

Jul 24, 1992, 7:50:03 PM
>>The site you are trying to ping is running a firewall gateway, because
>>they're too lazy to beef up their host security and are relying on the
>>firewall to protect themselves against external attacks.

Or have to convince auditors that the advantage the development groups
get by being on the Internet does not compromise the security of the
operational and strategic data on the other sites on the network. If you're
unable to clearly show that the packets from the outside world aren't touching
the production wires, you may find yourself on the wrong side of an
inquisition real quick...

Jean-Francois Lamy | la...@sobeco.com
Computer Networks and Systems | la...@sobeco.ca
Sobeco Ernst & Young, Montreal, Canada H2Z 1Y7 | uunet!sobeco!lamy

Karl Denninger

Jul 25, 1992, 12:05:58 AM
In article <1992Jul23.1...@sci34hub.sci.com> ga...@sci34hub.sci.com (Gary Heston) writes:
>In article <BrsM1...@cs.columbia.edu> j...@cs.columbia.edu (John Ioannidis) writes:
>>In article <BrruC...@spock.dis.cccd.edu> ma...@spock.dis.cccd.edu (Mark Bixby) writes:
>>>Why would I be able to ping a site OK, but when I try to ftp or telnet to it
>>>I receive a "no route to host" error? ....
>
>>The site you are trying to ping is running a firewall gateway, because
>>they're too lazy to beef up their host security and are relying on the
>>firewall to protect themselves against external attacks.
>
>I have to take exception to this remark. Use of a firewall doesn't indicate
>laziness on the part of a site; it most probably means that the persons
>responsible for the Internet connection and security of the sites' net are
>either too understaffed to maintain all the hosts on their site, or they
>don't have control over all the hosts, and are therefore not able to make
>them secure. And there are doubtless many sites that suffer from both
>problems.

There are a number of reasons other than laziness or overwork that an
organization may firewall (seeing as I put these things in, I'll explain):

1) The company may have rather loose security internally (i.e. Runs NIS
for password validation) and doesn't like the idea of someone doing
a ypset/ypcat on their password files, or its moral equivalent.
This "rather loose" part may be a >requirement< by some of their
hardware or software that they cannot control (see some of the major
system vendors for some of the worst offenders here, and their utter
failure to provide off-the-shelf secure alternatives. KERBEROS is
>NOT< adequate in most commercial environments, as tickets expire,
and for most people in these environments that is plain unacceptable).
Organizationally it may be deemed acceptable to have this security
level for employees, but not against outsiders.

2) The organization may have a large number of Mac and PC desktops, all
or any of which may be able to be compromised to someone's extreme
detriment with no good way to stop the abuse. Not all the world's a
Unix.

3) The organization may want to run NFS, and either has too many hosts
to do individual host validation in the exports file, or just can't
for other reasons. This is a real bitch with the NFS mount
protocols; there is >no< good way to stop this from being a
potential problem in many organizations. The ability to wild-card
domain names in the export list (as in "allow anyone from mcs.com
access"), along with a local name server (which prevents spoofing the
reverse lookups) would solve this easily -- but it isn't happening
these days with the major vendors -- again. The NETGROUP paradigm
does >not< help in many of these cases, especially when combined
with the NIS problem.

4) The organization may not like the idea of the local (or
long-distance) teen-age hacker crowd taking pot-shots at their
security on a daily basis across several hundred or thousand hosts.
It is much easier to watch, and defend, one entry point.

5) Change the offender in #4 to some corporate espionage types, and add
a company that has significant computer-based assets, and you have
yet another argument.

>>I wish I had a transcript of Dave Clark's talk at the IETF last week.
>>He said some great things about firewall gateways and mailbridges, and
>>how they've essentially destroyed the whole purpose of having an IP
>>internet, and have forced a lot of us to use mail as a transport-level
>>protocol.

Hogwash. If it is that important to an organization, the firewall can
provide proxy service for it. It is >not< that big a deal to provide a
proxy connection for these kinds of things, >provided< that you have a nice
clean requirement. The generic "anyone can do anything" doesn't cut it in
Corporate America anyway.

>Yeah, I'd probably enjoy reading it myself. Unfortunately, with the explosive
>growth of the net, it's no longer an approximation of an ideal world. In
>an ideal world, we wouldn't need locks on our doors, keyswitches in our
>cars, or firewalls on our nets.
>

>Flaming admins as being "lazy" because a firewall is in place is *way*
>out of line.

Agreed.

--
Karl Denninger (ka...@ddsw1.MCS.COM, <well-connected>!ddsw1!karl)
Data Line: [+1 312 248-0900] Anon. arch. (nuucp) 00:00-06:00 C[SD]T
Request file: /u/public/sources/DIRECTORY/README for instructions

don provan

Jul 25, 1992, 4:34:28 AM
In article <1992Jul24.2...@watson.ibm.com> n...@watson.ibm.com (Nicholas R. Trio) writes:
>The problems are (1) the "front doors" to the computers just aren't strong
>enough and (2) even if they were, it's possible to get copies of the
>"keys" (passwords, etc.) by sniffing the network.

Call me a Capitalist Pig, but no matter how strong my front door is
and no matter how well protected i keep my keys, i still consider my
front yard private property and i might put up a fence to keep people
away from my house.
don provan
do...@novell.com

Doug Karl

Jul 26, 1992, 6:08:25 AM
Folks, I have recently released for anonymous ftp an IP, DECNET, AppleTalk,
etc. firewall. It is in the form of a bridge, similar to PCBRIDGE, called
KarlBridge. If you get a chance please pick it up and let me know what
you think. (Also an inexpensive version is available from a real company
with support and all that stuff.)

OSU KarlBridge can be found on: nisca.acs.ohio-state.edu (128.146.1.7)
/pub/kbridge

FROM THE BROCHURE:

The OSU KarlBridge is a full featured protocol filtering bridge for Ethernet.
It has the most extensive filtering capability of any bridge or router
available. It will filter packets by any Ethernet protocol or Ethernet
address and also IP Socket (incoming and/or outgoing), IP address, IP Subnet,
AppleTalk Server Name, AppleTalk Zone Name, DECnet Object and DECnet address.

Features:
1) Ping'able with SNMP support for MIB-II, Ethernet-like Interface MIB,
and Bridge MIB.
2) Can be configured to filter specific packets by protocol, address,
server name and zone name by the use of easy-to-understand menus.
3) Provides protection against erroneous Ethernet packets that a
standard bridge does not provide.
4) Leaks are greatly reduced compared to standard bridges, due to
unique filtering and timed-learning algorithms.
5) Works on any Thin or Thick Ethernet with 10BASE-T optional.
6) Very good packet forwarding rate; 8100 packets per second.
(If you choose to use your own clone this may vary)
7) Excellent packet forwarding delay; 140 uS (for small packets)
(If you choose to use your own clone this may vary)
8) Easily upgraded, since the code and filters are on standard PC
compatible bootable floppy.

Any questions? Send e-mail to kbr...@osu.edu

Doug Karl
Senior Computer Specialist
Ohio State University

Marcus J. will do TCP/IP for food Ranum

Jul 26, 1992, 5:16:39 PM
kbr...@magnus.acs.ohio-state.edu (Doug Karl) writes:
>Folks, I have recently released for anonymous ftp an IP, DECNET, AppleTalk,
>etc. firewall. It is in the form of a bridge, similar to PCBRIDGE, called
>KarlBridge.

I don't want to start a war, but I'd like to propose that we try
to agree on some terminology. What exactly is a "firewall"? I believe that
a firewall addresses more than just routing and IP connectivity. These
are my rough definitions:

Simple gateway - a node which is reachable on two networks, but has routing
disabled, making it a termination point on both. This is typically
a host with TCP/IP forwarding disabled.

Screening router - a router that can contain some degree of logic to perform
host or service-based access control. Screening routers include some
commercial routers, as well as host-based routers with screening
services. (E.g.: KarlBridge, ULTRIX nodes with screend)

Screened network - a private network that is connected to an untrusted
network via a screening router. It is important to note that a
screened network is a matter of degree, and that in order to
work a screened network must share routes with the untrusted
network.

Screened subnet - a subnet which sits between a private network and an
untrusted network, with a screening router mediating access between
them. In some configurations, screened subnets are configured such
that routes are not given between the private network and the
untrusted network. Often a simple gateway node is installed on the
screened subnet, to act as a network access point.

Trusted application gateway - a software gateway for a given application,
such as a telnet "forwarder" or relay. Sendmail (or at least some
versions of it) is a trusted application gateway.

Firewall - a combination of a security policy with some of the components
above. Specifically, an implementation of the given policy that
is enforced by a combination of screening and/or routing.


I like to think of access (routing and connectivity) in terms of
Direct - routes and traffic are shared between the private network and
the untrusted network.
Indirect - routes and traffic are passed through some kind of controlling
mechanism that prevents routes from being shown between the
private and untrusted networks, and prevents traffic from passing
directly between the two networks. Communication is accomplished
by trusted application gateways.

In other words, in my terminology, a firewall may be made by using
KarlBridge and some policy to build a screened network or screened subnet.
By itself, KarlBridge does not a firewall make; it is a very useful building
block.

Note that many of the components above can be combined, and I
believe that my terminology retains its clarity in such a case. The kind
of firewall I run can be deemed a "screened subnet with a simple gateway,
hosting a suite of trusted application gateways with indirect access".
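The terminology composes, and that is its point. As a sketch (Python, purely illustrative; the class and field names are this sketch's own, not any standard), the description above can be written as a structured value rather than a phrase:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative model of the vocabulary defined above.
@dataclass
class Firewall:
    topology: str                          # e.g. "screened subnet"
    gateway: Optional[str] = None          # e.g. "simple gateway"
    app_gateways: List[str] = field(default_factory=list)
    access: str = "direct"                 # "direct" or "indirect"

    def describe(self):
        parts = [self.topology]
        if self.gateway:
            parts.append("with a " + self.gateway)
        if self.app_gateways:
            parts.append("hosting " + ", ".join(self.app_gateways))
        parts.append("(%s access)" % self.access)
        return " ".join(parts)

mjr = Firewall("screened subnet", "simple gateway",
               ["telnet relay", "sendmail"], "indirect")
print(mjr.describe())
```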

This is not to imply that any one technique is better or worse,
but it's difficult when someone says "I am sheltered by a firewall" to
know if it's a relatively trivial firewall such as a screened network
with fairly wide access for telnet and mail, or if it's something much
more complex, like the AT&T or Digital corporate gateways.

Comments??

mjr.

Marcus J. will do TCP/IP for food Ranum

Jul 26, 1992, 5:36:53 PM
>Firewall - a combination of a security policy with some of the components
> above. Specifically, an implementation of the given policy that
> is enforced by a combination of screening and/or routing.

I should have mentioned that I believe that a "policy" needs to be
something coherent and consistent that is more or less regular. I don't
believe that a firewall can be implemented successfully by just plugging
onto the network and disabling a bunch of stuff until it "works". I'm
as far from a theoretical kind of guy as I think you can get, but I really
think it's very important here to have a clear statement of the goals of
the firewall before any connection is undertaken.

Simply dividing the overall philosophy of the firewall into one
of these two categories or the other will make a huge difference:
"everything not expressly forbidden is permitted"
"everything not expressly permitted is forbidden"

Consider that in the latter case, the administrator's life is
(hopefully!) easier - we tilt instinctively towards more security. In
the former case, the user's life is usually easier - they are free to
do anything that they can think of that the administrator has not
identified as a security risk and blocked.
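The two philosophies differ only in the default action, but that default dominates the outcome. A sketch (Python for illustration; the permitted and forbidden sets are invented):

```python
# The same idea under the two default policies described above.
PERMITTED = {("tcp", 25), ("tcp", 53), ("udp", 53)}  # mail and DNS only

def default_deny(proto, port):
    """Everything not expressly permitted is forbidden."""
    return (proto, port) in PERMITTED

def default_permit(proto, port, forbidden=frozenset({("tcp", 23)})):
    """Everything not expressly forbidden is permitted."""
    return (proto, port) not in forbidden

# A brand-new service (say tcp/6000) is blocked under one policy
# and silently allowed under the other:
print(default_deny("tcp", 6000))    # False
print(default_permit("tcp", 6000))  # True
```

This is why the default-deny administrator sleeps better, and why the default-permit user never has to file a request to try something new.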

mjr.

John Wroclawski

Jul 27, 1992, 3:20:02 PM

In article <1992Jul24.1...@practic.com> bru...@practic.com (Thomas Eric Brunner) writes:

>>I wish I had a transcript of Dave Clark's talk at the IETF last week.
>>He said some great things about firewall gateways and mailbridges, and
>>how they've essentially destroyed the whole purpose of having an IP
>>internet, and have forced a lot of us to use mail as a transport-level
>>protocol.

>Dave is usually correct, but as he thinks quite a bit more than many, and
>says what he thinks, he is frequently not correct. Send him mail and ask
>for a copy, or invite him to write. Perhaps he's been misstated in this
>summary of his remarks.

Dave made two points. The first was that the architecture of the
current internet is end-to-end, and does not have any notion of
firewalls. He argued that because of this, firewalls make some things,
such as introducing new services, much more difficult than they should
be, and as a result people resort to odd things like using mail as a
transport.

Dave's second point is that perhaps it is time to rethink the core
architecture of the internet (or its follow-on) to specifically
-include- mechanism to separate organizational policy functions (such
as authentication, logging, and access control) from the actual
service functions running on the typical host.

His key observation is that if we can provide these "checkpoint"
functions at administrative boundaries as part of a well-designed
architecture, rather than in an ad-hoc manner, we might be able to
achieve two goals at once - provide a network which can -better- meet
the security and usage requirements of a wide range of people, and at
the same time preserve some of the open access which has driven the
incredible growth of the internet so far.

Implicit in this observation is the belief that if we -don't- succeed
in doing something like this, the continuing rapid spread of firewalls
is inevitable, for the simple reason that many people feel they simply
have no choice.

Responding to the same quote, ka...@ddsw1.mcs.com (Karl Denninger) writes:

>Hogwash. If it is that important to an organization, the firewall can
>provide proxy service for it. It is >not< that big a deal to provide a
>proxy connection for these kinds of things, >provided< that you have a nice
>clean requirement. The generic "anyone can do anything" doesn't cut it in
>Corporate America anyway.

I suspect Mr. Denninger will disagree, but I find this to be dangerous
and depressing thinking. A major strength of the Internet is the
ability for new services to come into existence when they're needed,
not several years later when a "nice clean requirement" has been
formulated, written down, approved, standardized, and poked at by a
layer or two of bureaucracy. In other words, the internet is strong
because it can evolve.

Existing firewalls threaten this evolution because they mix up the
notion of enforcing security with the notion of limiting
functionality. This -is- a serious threat. Dave's core argument, that
we should work to develop an architecture which can separate these
functions, seems to offer a useful way out.

John Wroclawski
MIT Lab for Computer Science
j...@lcs.mit.edu

Ric Werme

Jul 27, 1992, 3:29:45 PM
In article <17...@ulysses.att.com> s...@ulysses.att.com (Steven Bellovin) writes:

>Let me very strongly second what Marcus Ranum said about the need for
>firewalls. No, I don't want to run one. However, given the
>abominable state of host security, I have no choice. You can blame

>software designers for not paying enough attention to the problem or
>you can blame the current state of software engineering, or you can


>blame lax administration by folks who are more interested in getting
>their own work done.

Hey, no one has mentioned retraining all the local users who use their
initials for a password! Until we got on the Internet here I was
rather fond of my old password. A little embarrassed that it was on
the Robert T Morris short list, but my coworkers are pretty trustworthy.

I *was* somewhat more embarrassed when I discovered my old password was
on a machine here that did not use the yellow plague, but I think things
are under control now, thank you.
--
| A pride of lions | Eric J Werme |
| A gaggle of geese | 77 Tater St |
| An odd lot of programmers | Mont Vernon NH 03057 |
| A Constitution of Libertarians | Phone: 603-673-3993 |

Dale R. Worley

Jul 27, 1992, 3:36:57 PM
As far as I've seen, much of the problem with firewalls is not that
they exist, but that they are badly configured. For instance, I've
seen firewalls that would allow "inside" users to telnet out, but they
couldn't rlogin out. A good firewall should allow everything that is
permitted and nothing that is forbidden, and so improves security
without adding any additional burden to proper usage.

Dale

Dale Worley Dept. of Math., MIT d...@math.mit.edu
--
What do you mean, *you're* a solipsist?

Cliff Frost

Jul 27, 1992, 5:12:36 PM
|>
|> >>I wish I had a transcript of Dave Clark's talk at the IETF last week.
|> >>He said some great things about firewall gateways and mailbridges, and
|> >>how they've essentially destroyed the whole purpose of having an IP
|> >>internet, and have forced a lot of us to use mail as a transport-level
|> >>protocol.
|>
|> Dave is usually correct, but as he thinks quite a bit more than many, and
|> says what he thinks, he is frequently not correct. Send him mail and ask
|> for a copy, or invite him to write. Perhaps he's been misstated in this
|> summary of his remarks.

Yes, I believe Dave Clark's points have been somewhat twisted in this
discussion.

What I thought I heard him say is that good host security is very important
because without it people are forced to use firewalls and mail relays. It's
hard to argue this point, and Dave Clark most certainly did *not* criticize
anyone for using firewalls or mail relays.

The follow on point (which ji referred to), is that IP (and UDP/TCP and
the support infrastructure) was designed for end-to-end connectivity, so
firewalls and mail relays break the architectural model. It's pretty hard
to argue with this also.

For a common example of the problems this creates, consider the DNS.
The DNS is designed so that all hosts get the same answer to the same
query. This is a problem for people who have corporate internets hidden
from The Internet, because they often want the corporate machines to get
one set of info from the DNS and for machines on The Internet to get a
different set of info. The most common example is MX records, of course.
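A sketch of what such a "two-faced" DNS amounts to (hypothetical names, addresses, and records throughout; real servers of the era did this with separate zone files and resolvers, not code like this):

```python
# Toy split-horizon DNS: the answer depends on who asks.
# Insiders see the real hosts; outsiders see only an MX at the gateway.
import ipaddress

INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")  # assumed corporate range

INTERNAL_VIEW = {"host1.corp.example": ("A", "10.1.2.3"),
                 "corp.example":       ("MX", "mailhub.corp.example")}
EXTERNAL_VIEW = {"corp.example":       ("MX", "gateway.example")}

def resolve(name, client_ip):
    """Answer a query from whichever view matches the client's address."""
    inside = ipaddress.ip_address(client_ip) in INTERNAL_NET
    view = INTERNAL_VIEW if inside else EXTERNAL_VIEW
    return view.get(name)  # None models NXDOMAIN

print(resolve("corp.example", "10.5.5.5"))         # insiders get the real mailhub
print(resolve("corp.example", "192.0.2.1"))        # outsiders get only the gateway
print(resolve("host1.corp.example", "192.0.2.1"))  # internal host hidden outside
```

The point of the sketch is exactly the architectural violation described above: two hosts asking the identical question get different answers.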

Cliff Frost
UC Berkeley

Perry E. Metzger

Jul 27, 1992, 7:07:21 PM
In article <151os4...@agate.berkeley.edu> cl...@garnet.berkeley.edu (Cliff Frost) writes:
>The follow on point (which ji referred to), is that IP (and UDP/TCP and
>the support infrastructure) was designed for end-to-end connectivity, so
>firewalls and mail relays break the architectural model. It's pretty hard
>to argue with this also.

I'll argue with it. Firewalls don't have to be designed to prohibit
all forms of end to end connectivity; good firewalls will allow you to
do things like set up outgoing TCP/IP links and not incoming ones. It's
thus possible to set up reasonable end-to-end communication for
certain services without opening yourself up too badly to the outside
world. Yeah, it's possible to shanghai existing connections, but the
damage the bad guy can do is limited for most services.

--
Perry Metzger pmet...@shearson.com
--
Just say "NO!" to death and taxes.
Extropian and Proud.

Marcus J. will do TCP/IP for food Ranum

Jul 27, 1992, 9:09:21 PM
>Hey, no one has mentioned retraining all the local users who use their
>initials for a password!

Another excellent case for firewalls. I want those users on the
*in*side of the firewall, where my problem domain becomes simply one of
making sure that the 2 user accounts on my firewall system have good
passwords, rather than having to worry about all the users on 40,000
hosts in Digital.

mjr.

Randall Atkinson

Jul 27, 1992, 8:38:39 PM
In article <1992Jul26.2...@decuac.dec.com>,
m...@hussar.dco.dec.com (Marcus J. "will do TCP/IP for food" Ranum) writes:


>Trusted application gateway - a software gateway for a given application,
> such as a telnet "forwarder", or relay. Sendmail is a trusted
> (or at least some versions) application gateway.

The other terms were OK, but the term "trusted" usually refers to a
component believed not to compromise a multilevel security policy
(e.g., Bell-LaPadula). The overloading of meaning that the above
proposes is not helpful, because it tends to confuse terms used in the
same community.

I would suggest just using "application gateway" instead.

Ran
atki...@itd.nrl.navy.mil

Jeffrey Mogul

Jul 27, 1992, 9:03:44 PM
In article <JTW.92Ju...@pmws.lcs.mit.edu> j...@lcs.mit.edu (John Wroclawski) writes:
>Dave [Clark's] second point is that perhaps it is time to rethink the core

>architecture of the internet (or its follow-on) to specifically
>-include- mechanism to separate organizational policy functions (such
>as authentication, logging, and access control) from the actual
>service functions running on the typical host.

Actually, I think this approach was (in embryonic form, at least)
suggested several years ago by Deborah Estrin, who developed the
concept of Visa protocols. In this model, "border" routers pass
only those packets deemed allowable by an Access Control Server (ACS).
Instead of requiring each packet to pass through an ACS on the way
in or out of an organization, the end hosts get cryptographically-
sealed thingies (visas) to stick onto their packets. This allows
a distributed implementation, but also allows the ACS to set whatever
policy is desired. The downside is that the cryptographic stuff could
be rather costly, and the whole model depends on every external-access
host implementing the mechanism. For details, see
Deborah Estrin, Jeffrey C. Mogul, and Gene Tsudik.
Visa Protocols for Controlling Inter-Organization Datagram Flow.
IEEE Journal on Selected Areas in Communication 7(4):486-498, May, 1989.
I'm sure Dave knows about this stuff; Deborah did her dissertation at MIT.
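A toy sketch of the idea, not the protocol from the paper (which used different cryptography); the key handling, names, and the use of a modern HMAC here are all illustrative:

```python
# Visa-style access control, sketched: an Access Control Server (ACS)
# checks policy once and hands the host a keyed seal (a "visa") for the
# flow; border routers sharing the key verify each packet locally,
# without a per-packet trip to the ACS.
import hmac, hashlib

ACS_KEY = b"shared-between-acs-and-border-routers"  # assumed key distribution

def policy_allows(src, dst):
    """Stand-in for the real organizational policy check."""
    return dst != "forbidden.example"

def issue_visa(src, dst):
    """ACS: apply policy, then seal the flow identity cryptographically."""
    if not policy_allows(src, dst):
        raise PermissionError("ACS denies this flow")
    return hmac.new(ACS_KEY, f"{src}->{dst}".encode(), hashlib.sha256).digest()

def border_router_passes(src, dst, visa):
    """Border router: recompute the seal and compare; no ACS contact needed."""
    expected = hmac.new(ACS_KEY, f"{src}->{dst}".encode(), hashlib.sha256).digest()
    return hmac.compare_digest(visa, expected)

visa = issue_visa("inside.corp", "ftp.outside.example")
print(border_router_passes("inside.corp", "ftp.outside.example", visa))  # sealed flow passes
print(border_router_passes("evil.host", "ftp.outside.example", visa))    # forged flow fails
```

This captures the distributed implementation: policy lives in one place (the ACS), but enforcement happens statelessly at every boundary.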

>I suspect Mr. Denninger will disagree, but I find this to be dangerous
>and depressing thinking. A major strength of the Internet is the
>ability for new services to come into existance when they're needed,
>not several years later when a "nice clean requirement" has been
>formulated, written down, approved, standardized, and poked at by a
>layer or two of beaurocracy. In other words, the internet is strong
>because it can evolve.

Alas, while parts of the Internet can (and should) continue to follow
the "everything not forbidden is permitted" approach, which allows for
evolution, other parts have to follow the "everything not permitted
is forbidden" rule. The powers-that-be at a company such as Digital
are not going to permit us to run arbitrary experiments across the
boundary between the nasty wide-open Internet and our soft, naive
internal network. There's nothing intrinsically wrong with this; the
IETF doesn't require that everyone on the Internet experiment with a new
service before blessing it. Once a new service is properly understood,
we can (if we so choose) cut a hole for it in the firewall.

From time to time, I (hidden behind a firewall) find Digital's policies
to be a major pain; I still don't use services such as WAIS because of
the extra effort involved. But, I (and my coworkers) no longer spend
a significant part of our time either tracking down intruders, or
explaining to the corporate security types why they shouldn't shut down
our gateway complex.

-Jeff [who accepts the blame for a few of the firewalls out there]

Gene Tsudik

Jul 28, 1992, 5:44:34 AM

In article <1992Jul28....@PA.dec.com> mo...@pa.dec.com (Jeffrey Mogul) writes:


>In article <JTW.92Ju...@pmws.lcs.mit.edu> j...@lcs.mit.edu
>(John Wroclawski) writes:
>>Dave [Clark's] second point is that perhaps it is time to rethink the core
>>architecture of the internet (or its follow-on) to specifically
>>-include- mechanism to separate organizational policy functions (such
>>as authentication, logging, and access control) from the actual
>>service functions running on the typical host.
>
>Actually, I think this approach was (in embryonic form, at least)
>suggested several years ago by Deborah Estrin, who developed the
>concept of Visa protocols. In this model, "border" routers pass

Visa protocol addresses only end-to-end (in terms of domains, not hosts) access
control/authentication issues. Control of transit traffic is also quite
important and will (may) become more so in the future. Charging for providing
transit services may well become the chief reason for implementing
transit "hurdles" (the term "firewall" doesn't really apply here).
For details, see "Secure Control of Transit Internetwork Traffic" (D. Estrin &
G. Tsudik), Computer Networks and ISDN Systems, October 1991.

>only those packets deemed allowable by an Access Control Server (ACS).
>Instead of requiring each packet to pass through an ACS on the way
>in or out of an organization, the end hosts get cryptographically-
>sealed thingies (visas) to stick onto their packets. This allows
>a distributed implementation, but also allows the ACS to set whatever
>policy is desired. The downside is that the cryptographic stuff could
>be rather costly, and the whole model depends on every external-access
>host implementing the mechanism. For details, see
> Deborah Estrin, Jeffrey C. Mogul, and Gene Tsudik.
> Visa Protocols for Controlling Inter-Organization Datagram Flow.
> IEEE Journal on Selected Areas in Communication 7(4):486-498, May, 1989.

A more up-to-date Visa protocol description can be obtained via anonymous
FTP from jerico.usc.edu (pub/gene/new-visa.ps.Z).

>....

>From time to time, I (hidden behind a firewall) find Digital's policies
>to be a major pain; I still don't use services such as WAIS because of

Ditto for IBM (imho).

>the extra effort involved. But, I (and my coworkers) no longer spend
>a significant part of our time either tracking down intruders, or
>explaining to the corporate security types why they shouldn't shut down
>our gateway complex.
>

--
----------------------
Gene Tsudik
Spiritually at the University of Southern Califlower
Physically at the IBM Zurich Research Laboratory

John Ioannidis

Jul 28, 1992, 11:36:35 AM
[[Too many articles to follow up on individually -- I do work instead
of reading news (for a change) and suddenly there's a flamewar that
no firewall will stop (I know, bad pun)]]

I'll probably follow the suggestion offered and write a "Firewalls
Considered Harmful" paper. Meanwhile, here are some points:

* I'm not advocating that companies should allow uncontrolled access
to their networks -- that would be stupid.

* A firewall is no excuse for lax internal security. To wit:

- In a large organization, there are bound to be "bad guys" (either
through malice, negligence, or sheer stupidity) inside the
organization as well. No firewall is going to protect you against
those.

- A firewall only protects you against *known* external threats.

- If your internal network is insecure, you are vulnerable to anyone
who can get physical access to it. Today this involves tapping
ethernet cables, but tomorrow it may just involve dropping by with
a laptop with a wireless interface. I have a vested interest in
seeing wireless LANs take off -- I don't want them stifled
because of security concerns.

- Think of the Maginot Line.

* The network should switch bits and enforce routing policies -- not
cover up for insecure applications.

* It should not be the job of the millions of system administrators to
patch known holes -- the vendors should be doing that. There is
simply no excuse for vendors shipping us insecure code. (Is it true
that SunOS is still distributed with /etc/hosts.equiv containing a
single '+'? Why do we still have login programs that only accept
eight-character passwords, password files that are publicly
readable, things like NIS that allow uncontrolled access to their
information, etc.? At least we don't get sendmail shipped with the
debug option turned on any more.)

* Having firewalls reduces the urgency (that is, the pressure on the
vendors) of patching those security holes. It's a vicious cycle.

* We've seen analogies such as putting locks on the front door rather
than each individual room, and that it's perfectly acceptable
capitalist behavior to put a firewall gateway in front of your
network. I claim that this is far from being capitalistic; you're
being communist inside, and hiding behind an Iron Curtain.

* The argument "naive users and administrators don't want to deal with
security" has been kicked around. I say that the systems should be
secure from the beginning. I hope it's not too late to do that.

* There are a lot of other security concerns in networked systems that
should be addressed, that have nothing to do with firewalls. If
those concerns are dealt with, firewalls will stop making sense. For
example, I don't want anyone with the root password to be able to
read my files, or log onto my machine and spy on what I'm doing.
That includes the head of the security department, as well as the
guy down the hall that I just had an argument with and wants to kill
my files in revenge. While the latter is probably unavoidable, the
former can be dealt with with proper cryptographic techniques.

Finally,

* Firewalls are an easy solution to a very real and very serious
problem. My point, if a bit idealistic, is that we should *fix* the
problem, rather than patch its manifestations.

* Security, like good manners, starts at home.

Steven Bellovin

Jul 28, 1992, 11:36:14 AM
In article <DRW.92Ju...@jordan.mit.edu>, d...@jordan.mit.edu (Dale R. Worley) writes:
> As far as I've seen, much of the problem with firewalls is not that
> they exist, but that they are badly configured. For instance, I've
> seen firewalls that would allow "inside" users to telnet out, but they
> couldn't rlogin out. A good firewall should allow everything that is
> permitted and nothing that is forbidden, and so improves security
> without adding any additional burden to proper usage.

There are a number of problems with that notion. For one thing, what
you suggest is often impractical. To use your example, how can I
characterize, at the router level, a legal rlogin packet? About all I
can say is that the outside port number in one direction is 513, and
the inside port number is something less than 1024. But when such a
packet floats by, the router has no way of knowing that that's really
rlogin. The *real* definition is that the connection was initiated
from the inside. Otherwise, the packet could be from a connection
initiated *from* port 513 on a dedicated attacker's machine, and to
some service on an inside machine. But routers don't keep track of
connections, they look at individual packets.
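To make the ambiguity concrete, the whole rule can be written as one stateless predicate (a sketch; the field names are mine, not any router's filter syntax):

```python
# The per-packet rule a 1992-era screening router can express for rlogin:
# "outside port is 513 and inside port is below 1024". It has no memory
# of which side initiated the connection.

def rule_matches(pkt):
    return pkt["outside_port"] == 513 and pkt["inside_port"] < 1024

# A genuine reply to an inside-initiated rlogin and a forged probe from an
# attacker who simply binds source port 513 look identical to this rule:
legit  = {"outside_port": 513, "inside_port": 1021}  # reply to inside rlogin
forged = {"outside_port": 513, "inside_port": 515}   # attack on inside service

print(rule_matches(legit), rule_matches(forged))  # the rule passes both
```

The missing fact, "who sent the first packet", is connection state, which is exactly what a packet-at-a-time router doesn't keep.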

Higher-level gateways, such as the one we run, can do what you say.
We've consciously decided not to support rlogin for two reasons.
First, in general we don't think that address-based or name-based
authentication is secure, and -- as a matter of principle -- we won't
ask people to trust us when we won't trust them in that regard. The
second problem is the nature of our gateways. All outbound connections
from AT&T come from one of a very few machines (at the moment, exactly two,
I believe). We can't vouch for the uniqueness of userids -- there
might be another smb, somewhere within the company, who would have the
same privileges as I would when doing an rlogin through the gateway.
That isn't acceptable. For that matter, we don't keep track of the
trust level of every internal machine; if you saw an rlogin attempt
from us claiming a userid of ``root'', you'd have no way of knowing if
that was root on a great gronking huge UTS machine in one of our comp
centers, or root on an MS/DOS laptop. I suppose we could munge the
username field on the rlogin messages, to include the inside host name;
however, that would demand more knowledge of the application protocol
than our gateway currently has. (We run a circuit-level gateway. A
true application gateway, such as DEC's, could do that more easily, I'd
guess.)

UDP applications are even more problematic. We'd love to support
``talk'', or have higher-level access to Archie. And we haven't
given up; we have a number of ideas on how to improve our gateways.
We *don't* want gateways -- but we're very certain we need to have them.


--Steve Bellovin

Vernon Schryver

Jul 28, 1992, 2:05:04 PM
In article <Bs3vC...@cs.columbia.edu>, j...@cs.columbia.edu (John Ioannidis) writes:
> ...

> * A firewall is no excuse for lax internal security. To wit:
> - In a large organization, there are bound to be "bad guys" (either
> through malice, negligence, or sheer stupidity) inside the
> organization as well. No firewall is going to protect you against
> those.

True, but irrelevant. The only effective means to protect against bad
insiders are administrative. Technical solutions to administrative
problems are always less effective.

Most murders and non-sexual assaults are committed by people acquainted
with the victim. Do you conclude that you need dead bolts on your
bedroom doors? Must you sleep alone or with an awake, armed guard?
Far more network damage and attacks come from outside than from inside.

Open networks are no fun to break. A "disgruntled employee" is not
going to "break into" a system to "erase critical files" if those files
are properly archived. Such a bad guy is not going to use the network
to get even, but will commit any of the many more familiar, easier
to commit, harder to detect forms of white collar crime. (You are
talking about criminal acts.)

"Security" does nothing to protect against negligence or stupidity,
because those who are acting stupid or negligent almost always have
all of the passwords, keys, and badges. It is reckless to
unnecessarily run as root, because of the danger of typos, but that
has nothing to do with "security."

Fascist internal security systems generally exist for internal political
reasons, to entertain and justify system administrators. There are
development organizations where the programmers are not allowed to know
the root passwords of their workstations, but all such organizations I
have seen have been less than productive.

> - A firewall only protects you against *known* external threats.
>
> - If your internal network is insecure, you are vulnerable to anyone
> who can get physical access to it. Today this involves tapping
> ethernet cables, but tomorrow it may just involve dropping by with
> a laptop with a wireless interface. I have a vested interest in
> seeing wireless LANs take off -- I don't want t hem stifled
> because of security concerns.

Maybe you should find another vested interest, if your wireless stuff
requires that machines on local networks be protected.

> - Think of the Maginot Line.
>
> * The network should switch bits and enforce routing policies -- not
> cover up for insecure applications.

So, you have locks on your bedroom door, to protect you against your
relatives.

> * It should not be the job of the millions of system administrators to
> patch known holes -- the vendors should be doing that. There is
> simply no excuse for vendors shipping us insecure code. (Is it true
> that SunOS is still distributed with /etc/hosts.equiv containing a
> single '+"? Why do we still have login programs that only accept
> eight-character passwords, password files that are publicly
> readable, things like NIS that allow uncontrolled access to their
> information, etc? At least we don't get sendmail shipped with the
> debug option turned on any more.


So you think you can sell an obligatorily secure system? Like the SCO
C2 UNIX? Where the customer cannot run with open doors? Wrong.

Systems should come in the box with a reasonable amount of security.
That does not imply that everyone must, or should be required to, use it.


> * Having firewalls reduces the urgency (that is, the pressure on the
> vendors) of patching those security holes. It's a vicious cycle.

Wrong. We vendors can fix holes since we generally have source for our
products, and we generally feel more pressure to fix holes for
customers than you can imagine, but we run with firewalls.

> * We've seen analogies such as putting locks on the front door rather
> than each individual room, and that it's perfectly acceptable
> capitalist behavior to put a firewall gateway in front of your
> network. I claim that this is far from being capitalistic; you're
> beeing communist inside, and hiding behind an Iron Curtain.

So, you have locks on your bedroom door, but not on your front door.
Well, from the stories I've read, that makes a certain amount of sense
for New York City.

> ...


> * Firewalls are an easy solution to a very real and very serious
> problem. My point, if a bit idealistic, is that we should *fix* the
> problem, rather than patch its manifestations.
>
> * Security, like good manners, starts at home.

Having good security available is not the same thing as being required
to use it. It is good that locks and safes are available for those
who need them. It would be bad and stupid to expect everyone to
use case-hardened bars on their first floor windows, just because
they are necessary in New York.


Vernon Schryver, v...@sgi.com

Steven Bellovin

Jul 28, 1992, 3:13:56 PM
In article <Bs3vC...@cs.columbia.edu>, j...@cs.columbia.edu (John Ioannidis) writes:
[lots of reasons why firewalls are a bandaid, and how we should fix the
real problem, including getting vendors to ship secure systems]

John is, of course, absolutely right (except where he called firewall
users ``communist'', which may or may not be true (the Internet reaches
lots of places these days...), but is irrelevant and seemed to be
intended as an insult). Vendors should ship secure systems, internal
security measures are necessary in any event, and users and system
administrators should do a better job. I strongly suspect that every
firewall developer is fighting all of those battles, and many more. I
certainly do -- when I worry about TCP/IP security, for example, it's
because AT&T uses it internally, and wants to secure its internal
networks and products.

The problem with John's conclusions are that I have to live in the real
world, which includes people who *must* run old versions of various
operating systems, users who don't pick good passwords, and
administrators who are careless. I can't do anything about any of
those things, except to exhort people to do better. In the mean time,
I try to keep the dragons away from their doors, while hoping for a
better world tomorrow.


--Steve Bellovin

P.S. It occurs to me that John is also wrong when he says that firewalls
only defend against known problems. That's precisely wrong. Fixing holes
only works until some cracker finds a new hole. A good firewall keeps out
anything but a very few services. The network behind the firewall can
be attacked, but only through bugs in either those few services or in
the firewall itself.

While host security can -- and should -- be improved, I'm quite dubious
that it can ever be made good enough. Never mind bad administration --
I don't think the state of the art of software engineering is up to the
task. I take it as axiomatic that all large programs have bugs, and
that therefore security servers will have security bugs. Yes, good design
can minimize the odds and/or the impact -- but I doubt that the holes
can ever be eliminated completely. Looking at it another way, firewalls
are precisely an example of good software engineering practice -- they're
(presumably) small, simple pieces of code, and hence are much less likely
to have bugs.

Perry E. Metzger

Jul 28, 1992, 4:22:11 PM
s...@ulysses.att.com (Steven Bellovin) writes:

>The *real* definition is that the connection was initiated
>from the inside. Otherwise, the packet could be from a connection
>initiated *from* port 513 on a dedicated attacker's machine, and to
>some service on an inside machine. But routers don't keep track of
>connections, they look at individual packets.

I was under the impression that if you filter all the SYN packets from
one direction that aren't SYN ACKs, bingo, you can't initiate any
incoming TCP connections. Nice and stateless. The only flaw is that
implementations that separately ACK the initiating SYN and then send
their own SYN won't be able to connect, but they are rare. Connections
could still be hijacked by various mechanisms, but they can't be
initiated, and there is a limited amount of damage a hijacker can do.
I was also under the impression that some recent routers will actually
let you do this trick.

Am I egregiously wrong about this?
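For concreteness, the stateless check being described amounts to one test on the TCP flags (sketched here in Python; the flag values are the real TCP header bits, the field handling is invented):

```python
# Block inbound connection *attempts* while passing everything else,
# using only the flags of each packet -- no connection state kept.

SYN, ACK = 0x02, 0x10  # bit values from the TCP header

def pass_inbound(tcp_flags):
    """Drop inbound segments that try to open a connection: SYN without ACK.
    SYN+ACK replies, data, and FINs all flow through."""
    return not (tcp_flags & SYN and not tcp_flags & ACK)

print(pass_inbound(SYN))        # bare SYN: new inbound connection, blocked
print(pass_inbound(SYN | ACK))  # reply to an outbound connection, passes
print(pass_inbound(ACK))        # established-connection traffic, passes
```

As the post notes, this only prevents *initiating* connections from outside; it does nothing against hijacking an established one.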

Barry Margolin

Jul 28, 1992, 8:15:34 PM
In article <1992Jul28....@PA.dec.com> mo...@pa.dec.com (Jeffrey Mogul) writes:
>Alas, while parts of the Internet can (and should) continue to follow
>the "everything not forbidden is permitted" approach, which allows for
>evolution, other parts have to follow the "everything not permitted
>is forbidden" rule.

I'm the maintainer of part of our firewall, and I unfortunately have to
agree that in many circumstances (like ours), the second rule is necessary.

We have developers here implementing new protocols all the time, such as
inter-machine diagnostics and special-purpose file access. With the first
rule, these would suddenly become accessible to anyone on the Internet, and
crackers could wreak havoc with our development systems (or maybe even
destroy or get access to proprietary data). The second rule is more
limiting, but it's the only thing that works when the environment on this
side of the firewall is very open.

--
Barry Margolin
System Manager, Thinking Machines Corp.

bar...@think.com {uunet,harvard}!think!barmar

Bob Sutterfield

Jul 29, 1992, 9:36:10 AM
In article <Bs3vC...@cs.columbia.edu> j...@cs.columbia.edu (John Ioannidis) writes:
- A firewall only protects you against *known* external threats.

Our firewall is set up conversely: it permits only traffic that's
strongly suspected (notice I didn't say "known") not to be a threat.
It's configured to give our internal users maximal access to the rest
of the world, to give the rest of the world the sort of access to our
net and hosts that we want them to have, and to ease our burden in
managing systems from dozens of vendors.

I'm not a lazy or unconscientious system/network administrator, I'm a
wily one. This strategy reduces the size of my problem domain, makes
my network manageable with limited staff resources, and lets me go
home at night to my wife and kids where I would otherwise have spent
evenings in the office chasing crackers.

Chris Cox

Jul 29, 1992, 5:53:33 PM
In article <1992Jul28.2...@shearson.com> pmet...@snark.shearson.com (Perry E. Metzger) writes:

>I was under the impression that if you filter all the SYN packets from
>one direction that aren't SYN ACKs, bingo, you can't initiate any
>incoming TCP connections. Nice and stateless. The only flaw is that
>implementations that separately ACK the initiating SYN and then send
>their own SYN won't be able to connect, but they are rare. Connections

That would keep your users' ftp data sessions from being established
(wouldn't it?).
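A sketch of why: in classic ("active-mode") ftp the server opens the data connection back toward the client, so its SYN arrives inbound and the filter drops it; passive mode (PASV), where the client opens both connections, survives. A toy model (flag values are the real TCP header bits; everything else is invented):

```python
# Active vs. passive FTP against an inbound-SYN filter.

SYN, ACK = 0x02, 0x10  # bit values from the TCP header

def inbound_filter(flags):
    """Block inbound connection opens (SYN without ACK); pass the rest."""
    return not (flags & SYN and not flags & ACK)

# Active FTP: after the client's PORT command, the *server* connects in
# from port 20, so a bare SYN arrives inbound.
active_data_open = SYN
# Passive FTP: the client opens the data connection out, so only the
# server's SYN+ACK reply comes back inbound.
passive_data_reply = SYN | ACK

print(inbound_filter(active_data_open))    # blocked: active-mode data fails
print(inbound_filter(passive_data_reply))  # passes: passive mode works
```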

Chris

Chris Cox W0/G4JEC chr...@ramrod.lmt.mn.org
LaserMaster Technologies Tel: (612) 944-6069
7156 Shady Oak Road Fax: (612) 944-5544
Eden Prairie, MN 55344

----- For mail of a more social nature, please use -----

chr...@moron.vware.mn.org -or- chr...@biggus.g4jec.tcman.ampr.org

Gene Tsudik

Jul 30, 1992, 11:00:59 AM
In article <Bs3vC...@cs.columbia.edu> j...@cs.columbia.edu
(John Ioannidis) writes:

>* A firewall is no excuse for lax internal security. To wit:
> - In a large organization, there are bound to be "bad guys" (either
> through malice, negligence, or sheer stupidity) inside the
> organization as well. No firewall is going to protect you against
> those.

True. But it is safe to assume that there is a much larger and much more
"varied" population of "bad guys" outside than in.

> - A firewall only protects you against *known* external threats.

This is not a convincing point. Any security measure, whether implemented in
a firewall gateway or in an end-system, is going to protect you only against
*known* attacks.

>* The network should switch bits and enforce routing policies -- not
> cover up for insecure applications.
>

>* Having firewalls reduces the urgency (that is, the pressure on the
> vendors) of patching those security holes. It's a vicious cycle.
>

>* We've seen analogies such as putting locks on the front door rather
> than each individual room, and that it's perfectly acceptable
> capitalist behavior to put a firewall gateway in front of your
> network. I claim that this is far from being capitalistic; you're

> being communist inside, and hiding behind an Iron Curtain.

I see. So, to be a true capitalist I would have to padlock the doors to
individual rooms (end-systems or hosts) and let the sh*t fly in the corridors,
right?

I'm afraid that no matter how secure you make the hosts, the problem of
securing internal links will remain. You are assuming that the only purpose of
firewalls is to protect otherwise vulnerable hosts.
What about the rest of internal network resources: links, bridges and even
the network interfaces of the very same hosts?

Without firewalls, no matter how secure the OS,
your workstation can be bombarded and flooded with meaningless garbage
traffic from outside of your organization. This can render your
workstation unusable. Moreover, valuable communication resources, e.g.,
critical internal links, can be similarly flooded with trash from the outside
thus denying service to legitimate internal users.

I don't think many people (me included) believe that firewalls are elegant.
They constitute an ugly solution to an even uglier problem.

Cheers,

Gene

g...@zurich.ibm.com

--
----------------------
Gene Tsudik, Member FDIC

Barry Margolin

unread,
Jul 30, 1992, 4:03:41 PM7/30/92
to
In article <l7g11b...@pollux.usc.edu> tsu...@pollux.usc.edu (Gene Tsudik) writes:
>I see. So, to be a true capitalist I would have to padlock the doors to
>individual rooms (end-systems or hosts) and let the sh*t fly in the corridors,
>right?

I think this analogy to a home and rooms is very poor. The same people
live in a home and the rooms, so it's usually reasonable to have the locks
just on the main doors. Then again, my parents did have a lock on their
bedroom door.

But I think better analogies would be to apartment buildings, dormitories,
or hotels. It's usual to have locks on the individual apartments or rooms.
But whether there's a lock on the main entrance, or perhaps a doorman
screening visitors, is a matter of policy. Some people prefer
high-security apartment buildings with a doorman. Hotels, on the other
hand, often don't have much entrance security, so you should lock your
door if you want any expectation of privacy.

Dale R. Worley

unread,
Jul 30, 1992, 4:34:27 PM7/30/92
to
In article <17...@ulysses.att.com> s...@ulysses.att.com (Steven Bellovin) writes:
About all I can say is that the outside port number in one
direction is 513, and the inside port number is something less than
1024. But when such a packet floats by, the router has no way of
knowing that that's really rlogin. The *real* definition is that
the connection was initiated from the inside. Otherwise, the
packet could be from a connection initiated *from* port 513 on a
dedicated attacker's machine, and to some service on an inside
machine. But routers don't keep track of connections, they look at
individual packets.

I haven't studied the matter, but I believe that the more
sophisticated firewalling routers actually *do* track connections. At
least, I've heard claims about what some routers could do that I
couldn't figure out how to do without tracking connections.
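A minimal Python sketch (hypothetical; the class and names are mine, not any vendor's) of the distinction Bellovin draws: a per-packet port test cannot know that "the connection was initiated from the inside", but a filter that remembers outbound SYNs can.

```python
# Hypothetical sketch of connection-tracking filtering: inbound packets are
# admitted only if they belong to a connection whose initial SYN left from
# the inside -- state a stateless per-packet port test cannot recover.

class ConnTracker:
    def __init__(self):
        # (inside_ip, inside_port, outside_ip, outside_port)
        self.initiated = set()

    def outbound(self, src_ip, src_port, dst_ip, dst_port, syn):
        if syn:
            self.initiated.add((src_ip, src_port, dst_ip, dst_port))
        return True  # outbound traffic is always forwarded in this sketch

    def inbound(self, src_ip, src_port, dst_ip, dst_port):
        # Admit only replies to connections initiated from the inside.
        return (dst_ip, dst_port, src_ip, src_port) in self.initiated

fw = ConnTracker()
fw.outbound("10.0.0.5", 1023, "192.0.2.9", 513, syn=True)  # inside rlogin out
print(fw.inbound("192.0.2.9", 513, "10.0.0.5", 1023))  # True: genuine reply
print(fw.inbound("192.0.2.9", 513, "10.0.0.7", 999))   # False: attacker sending *from* port 513
```

The second inbound packet has exactly the port pattern of an rlogin reply, which is why a router that looks only at individual packets cannot reject it.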

Dale Worley Dept. of Math., MIT d...@math.mit.edu
--

Anything that's not nailed down is mine. Anything I can pry loose is not
nailed down.

Marcus J. Ranum

unread,
Jul 30, 1992, 8:09:24 PM7/30/92
to
d...@euclid.mit.edu (Dale R. Worley) writes:

>I haven't studied the matter, but I believe that the more
>sophisticated firewalling routers actually *do* track connections.

I prefer to use a somewhat different approach in general. First
you determine the services that your users have a clear business need
for. Then you develop an application gateway that "knows" that protocol
and can give you decent access control, logging, and piggy-back blocking.
This is much more secure (in my opinion) since you preserve the "that which
is not expressly permitted is prohibited" doctrine and you can incorporate
appropriate per-protocol authentication or authorization as needed.

mjr.

Marcus J. Ranum

unread,
Jul 30, 1992, 8:05:43 PM7/30/92
to
>I think this analogy to a home and rooms is very poor.

I really don't think there *IS* a particularly good analogy.

Part of the problem is that firewalls and their implementation
is not merely a technical problem. There is a whole set of "management"
issues that usually need to be addressed. There are whole sets of CYA
issues that need to be addressed, which don't necessarily improve the
security of the network, but definitely improve the network manager's
claim to showing diligence in securing the network.

In the consulting work I've done for DEC (setting up firewalls)
I've run across various combinations of these issues. Compared to them,
the actual details of locking things down tight are real simple. Every
time I run into these discussions, I try to come up with an analogy for
Internet security - it's pretty hard. Part of the problem is that unlike
a house, you don't always know that you've been robbed; people don't
break into your house and steal a *COPY* of your gun collection, and
vanish after shaving (or somehow magically cleaning) all the rugs to
hide their footprints. This makes the whole thing harder to understand,
especially for someone who is not used to modern networked computing.
You get strange policies like: "it must be impossible to export data
out over the network" - never mind that a fistful of DATs is easier
to hide.

mjr.

Carl Beame

unread,
Jul 30, 1992, 8:11:41 PM7/30/92
to
In article <chrisc.21...@ramrod.lmt.mn.org> chr...@ramrod.lmt.mn.org (Chris Cox) writes:
>In article <1992Jul28.2...@shearson.com> pmet...@snark.shearson.com (Perry E. Metzger) writes:
>
>>I was under the impression that if you filter all the SYN packets from
>>one direction that aren't SYN ACKs, bingo, you can't initiate any
>>incoming TCP connections. Nice and stateless. The only flaw is that
>>implementations that seperately ACK the initiating SYN and then send
>>their own SYN won't be able to connect, but they are rare. Connections
>
>That would eliminate your users from starting ftp data sessions (wouldn't
>it?).
>

If your firewall stopped all remote TCP packets with SYNs which are
for ports < 1024 except the domain name service and SMTP, you could still FTP out and
receive mail and possibly answer domain name requests. For UDP you might want
to inhibit port 111 (portmapper) and 2049 (NFS) and possibly TFTP.

A properly configured firewall router can allow access from the
local net onto the Internet and even allow Internet access to specific
services or servers on the local net. For example: if you want to provide
anonymous FTP from a single host on your local net, just configure the
router to pass FTP SYN requests only to that specific host.

- Carl Beame
Beame & Whiteside Software Ltd.

P.S.: I don't know of any commercial router which can do all this, but public
domain ones could be modified.
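A minimal Python sketch (hypothetical; the function and constants are mine) of the stateless rule set described above, applied to inbound packets:

```python
# Hypothetical sketch of the stateless rules described in the post: block
# inbound TCP SYNs to privileged ports except SMTP and DNS, and inhibit a
# few risky UDP services (portmapper, NFS, TFTP).

TCP_ALLOWED_LOW = {25, 53}       # SMTP, domain name service
UDP_BLOCKED = {111, 2049, 69}    # portmapper, NFS, TFTP

def forward_inbound(proto, dst_port, syn=False):
    """Decide, per packet, whether an inbound packet is forwarded."""
    if proto == "tcp" and syn and dst_port < 1024:
        return dst_port in TCP_ALLOWED_LOW
    if proto == "udp":
        return dst_port not in UDP_BLOCKED
    return True

print(forward_inbound("tcp", 25, syn=True))    # True: inbound mail
print(forward_inbound("tcp", 513, syn=True))   # False: no inbound rlogin
print(forward_inbound("udp", 2049))            # False: NFS inhibited
print(forward_inbound("tcp", 5000, syn=True))  # True: FTP data callbacks to high ports still work
```

The last case is why this rule set does not break outbound FTP: the server's data connection comes back to a port >= 1024 on the client.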

Vernon Schryver

unread,
Jul 31, 1992, 1:11:55 AM7/31/92
to


The meaning I infer for "piggy-back blocking" seems formally
impossible. Does it mean preventing people from using permitted
facilities for impermissible things? Such as piggy-backing a full
featured "gateway" on top of rlogin?

Consider the recent talk of running SLIP over email over UUCP over
DECNET over x.25 over mirrors, or some such.


There is code floating around that does what its name, "gateway,"
implies with an rlogin or telnet session through a firewall. We
recently encountered a version with a different name (it's trivial to
write from scratch) that a "security expert" (sic) from a major network
provider had "helped" a naive, trusting, related employee install. The
thing let people from that network provider's network bypass the
sgi.com firewall. The turkey could not understand why we considered
him no different from any other bad guy who had tried to punch a hole
through the firewall. He only admitted guilt in having his code
go crazy and eat up all of the processes allowed "guest".

Who will guard the guardians?


Vernon Schryver, v...@sgi.com

Roger-Hunen

unread,
Jul 31, 1992, 3:41:54 AM7/31/92
to
In article <chrisc.21...@ramrod.lmt.mn.org> chr...@ramrod.lmt.mn.org (Chris Cox) writes:
>>I was under the impression that if you filter all the SYN packets from
>>one direction that aren't SYN ACKs, bingo, you can't initiate any
>>incoming TCP connections. Nice and stateless. The only flaw is that
>>implementations that seperately ACK the initiating SYN and then send
>>their own SYN won't be able to connect, but they are rare. Connections
>
>That would eliminate your users from starting ftp data sessions (wouldn't
>it?).

So what you really want is application level proxies in the firewall for
TELNET, FTP, MAIL etc. Of course this defaults to the 'everything is
forbidden unless permitted' approach.

Regards,
-Roger

Marcus J. Ranum

unread,
Jul 31, 1992, 10:00:39 AM7/31/92
to
>The meaning I infer for "piggy-back blockin" seems formally
>impossible. Does it mean preventing people from using permitted
>facilities for impermissible things? Such as piggy-backing a full
>featured "gateway" on top of rlogin?

Right. I was referring to using one service to piggy-back another
one. Such as running an interactive login session over an FTP command
stream. ;) Or tunnelling X-windows packets in a telnet session. In general
one may need to block that kind of thing, especially in the case where
the piggy-backing is being done to make an insecure protocol (like X)
available.

I'm not a theorist of any type, so I can't say anything about
formalisms. ;) On the other hand, I do know that ad hack methods can
go a long way towards preventing these kinds of things, *if it is necessary*.
Not all sites will *care* about preventing people from tunnelling X over
telnet, for example. At that point the problem ceases to be a technical
one, and becomes a policy matter - and many of the gateway admins I know
don't set the policy; they are just stuck with enforcing it.

My telnet application gateway, for example, has a rate limiter
in it that will adjust the baud rate of an outgoing telnet session to
a (low) settable value. That way you can run an interactive session nicely,
but if you try to run kermit over it, it'll be unacceptably slow. My
FTP application gateway has rate limiting on the command stream, and also
fires off warnings if commands are sent over the command stream that are
not recognizable telnet commands. Yes, one could encode X packets in the
pathnames of "GET" commands, but then the rate limiter will choke you.
All these interesting events are logged.
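A minimal Python token-bucket sketch (hypothetical; the class and numbers are mine, not the actual gateway code) of the kind of rate limiting described above: interactive typing passes unimpeded, while bulk transfers piggy-backed over the session are throttled to a settable "baud rate".

```python
# Hypothetical sketch of an application-gateway rate limiter: a token
# bucket meters relayed bytes, so a short interactive burst is relayed
# immediately but a kermit transfer or tunnelled X stream is stalled.

class RateLimiter:
    def __init__(self, bytes_per_sec, burst):
        self.rate = bytes_per_sec
        self.burst = burst
        self.tokens = burst
        self.last = 0.0

    def delay_for(self, nbytes, now):
        """Seconds the gateway should stall before relaying nbytes."""
        elapsed = now - self.last
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        self.last = now
        self.tokens -= nbytes
        if self.tokens >= 0:
            return 0.0
        return -self.tokens / self.rate   # wait until the deficit refills

rl = RateLimiter(bytes_per_sec=120, burst=120)   # roughly 1200 baud
print(rl.delay_for(40, now=0.0))    # 0.0: a line of typing goes right through
print(rl.delay_for(1200, now=0.1))  # ~9.2s stall: a bulk burst is choked
```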

The main thing here is not to subscribe to some (imho useless)
formal model of security, but to just try to be reasonable and to make
sure that if someone does Something Bad to your gateway, it leaves
tracks that you can follow. Gateway admins are often in a really nasty
spot - they have to enforce sets of rules that sometimes don't make
a whole lot of sense. I try to enforce them sensibly, and to do anything
I can to make sure that if someone launches a *real* attack on my gateway,
I'll have a chance of catching them, or at least I'll have enough
information about what happened that I can deduce what they were
trying to do, and check for holes.

mjr.

Perry E. Metzger

unread,
Jul 31, 1992, 1:02:03 PM7/31/92
to

In article <1992Jul31....@medtron.medtronic.com> rh0...@medtronic.COM (Roger-Hunen) writes:

>From: rh0...@medtronic.COM (Roger-Hunen)

>So what you really want is application level proxies in the firewall for
>TELNET, FTP, MAIL etc. Of course this defaults to the 'everything is
>forbidden unless permitted' approach.

Ah, but this is a real pain in the posterior, and breaks the semantics
of many applications. By using intelligent filtering of SYN packets,
socket numbers, etc, you can allow easy addition of new services (such
as WAIS) without needing to write new code or break security.

For extra security, I'm considering building a firewall in which
requests to change filtering can come in on a kerberized TCP service,
and then restricting everything. Users wanting to go through the
gateway would have to explicitly open a "narrow hole" through the
firewall in order to get out, thus eliminating any "random"
connections; the holes would time out after a while if no packets of
the appropriate kind had gone through.
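A minimal Python sketch (hypothetical; the class, names, and idle period are mine) of the "narrow hole" idea: an authenticated request opens a hole that matching packets keep alive, and the hole expires if no such traffic passes for a while.

```python
# Hypothetical sketch of user-opened "narrow holes" through a firewall:
# a hole admits packets matching (src, dst, dst_port) and times out after
# `idle` seconds without matching traffic. The authentication step (the
# kerberized request in the post) is assumed to happen before open().

class HoleTable:
    def __init__(self, idle=300):
        self.idle = idle
        self.holes = {}   # (src, dst, dst_port) -> time of last matching packet

    def open(self, src, dst, dst_port, now):
        self.holes[(src, dst, dst_port)] = now

    def permit(self, src, dst, dst_port, now):
        key = (src, dst, dst_port)
        last = self.holes.get(key)
        if last is None or now - last > self.idle:
            self.holes.pop(key, None)   # hole timed out, or never existed
            return False
        self.holes[key] = now           # matching traffic refreshes the hole
        return True

t = HoleTable(idle=300)
t.open("10.0.0.5", "192.0.2.9", 210, now=0)          # user opens a hole out
print(t.permit("10.0.0.5", "192.0.2.9", 210, 60))    # True: within the idle window
print(t.permit("10.0.0.5", "192.0.2.9", 210, 1000))  # False: hole idled out
```

This eliminates "random" connections: anything not explicitly opened, or left idle too long, is dropped.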

Tom Fitzgerald

unread,
Aug 1, 1992, 1:26:29 AM8/1/92
to
d...@euclid.mit.edu (Dale R. Worley) writes:

> I haven't studied the matter, but I believe that the more
> sophisticated firewalling routers actually *do* track connections. At
> least, I've heard claims about what some routers could do that I
> couldn't figure out how to do without tracking connections.

This sounds strange to me - and doesn't sound like a good idea. If a
router has to take over a traffic path because of congestion or a dead
link, or if it's rebooted, it's going to find itself in the middle of a
bunch of connections with no idea of their history. Should it block all
traffic that it didn't see a SYN for?

Are we going to return to the Good Old Days of stateful IMPs that blow away
all the connections through them when they're rebooted?

I suspect the router claims are based on the fact that you can tell a LOT
about a packet from the port numbers, protocol and flags.

--
Tom Fitzgerald Wang Labs fi...@wang.com "I went to the universe today;
1-508-967-5278 Lowell MA, USA It was closed...."

Mark R. Ludwig

unread,
Aug 1, 1992, 3:14:13 AM8/1/92
to
Okay, at the risk of a stuffed mailbox, I'll raise my hand and say I'm
in the process of designing and implementing a firewall. Its purpose
is to allow our generally-soft-hearted internal network to remain so
without exposing our softness to the Internet at large. In my
opinion, the Whole Issue is pragmatism: I don't want to fight with
each of the ten(!) Unix vendors represented over security of their
systems. Furthermore, I don't have the time to replace the
fundamentally-insecure "protocols" which have made this heterogeneous
nightmare manageable (can you say Yellow P..., er, Network Information
Service?).

I do not want to install a firewall. It's extra work I don't want.
However, as a professional who wants to continue in the networking
milieu (morass?), I am compelled. Imagine what would happen if I
blithely connected us to the network and sometime later something bad
happens which is determined to be related to the network. I could
find myself on the wrong end of a lot of things, from inquisitions
("Why didn't you protect us?") to police guns ("You were actually
aiding and abetting the bad guy, who's your friend, weren't you?") to
unemployment lines ("We're completely out of business because we've
lost too much to the competitor's industrial spy to survive in the
marketplace.").

I've read quite a bit about it, but this firewall discussion has
raised some questions in my mind. The approach I'm taking is the (I
think) standard one which knows about specific well-known ports, and
imputes the direction of the connection from the port numbers. What
is not clear is the basis for the assumption that an outgoing FTP or
TELNET connection uses a port greater than (or equal to?) 1024. Is
this just a Unix convention? I'd appreciate if someone could point me
to some additional information about non-Unix hosts.

In article <17...@ulysses.att.com> s...@ulysses.att.com (Steven
Bellovin) writes:

|> UDP applications are even more problematic. We'd love to support
|> ``talk'', or have higher-level access to Archie.

I assume this means non-TELNET access to Archie.

What are the problems with these two? It appears from its protections
(lack of set-uid) that "talk" can't be allocating a port under 1024
for its outgoing connection, so I don't understand the problem. I
probably could answer my own question if I had the source (which I
can't get until we get the firewall installed...).
--
INET: m...@uai.com BANG: uunet!uaisun4!mrl ICBM: USA; Lower Left Coast
"You can get a crowd to clap at a phone number, if the inflection is right."
-- Dave Ross

Robert Elz

unread,
Aug 2, 1992, 8:30:08 PM8/2/92
to
In <1992Jul26.2...@decuac.dec.com> m...@hussar.dco.dec.com (Marcus J. "will do TCP/IP for food" Ranum) writes:

| Simple gateway - a node which is reachable on two networks, but has routing
| disabled, making it a termination point on both. This is typically
| a host with TCP/IP forwarding disabled.

That isn't a gateway at all, its a multi-homed host, which is a term
that's been around since about forever.

kre

Paul Brooks

unread,
Aug 3, 1992, 4:31:21 AM8/3/92
to
In article <MRL.92Au...@sun4.uai.com> m...@uai.com (Mark R. Ludwig) writes:
|
|I've read quite a bit about it, but this firewall discussion has
|raised some questions in my mind. The approach I'm taking is the (I
|think) standard one which knows about specific well-known ports, and
|imputes the direction of the connection from the port numbers. What
|is not clear is the basis for the assumption that an outgoing FTP or
|TELNET connection uses a port greater than (or equal to?) 1024. Is
|this just a Unix convention? I'd appreciate if someone could point me
|to some additional information about non-Unix hosts.

Basically, the assumptions only apply to Unix systems whose sysadmin you
trust! Any PC can be made to use any port number it likes to initiate
connections, including ports under 1024 (or under 256, for that matter).
And if someone is sysadmin of (or can get root access to) a Unix machine,
there is no reason they couldn't change the sources and recompile to allow
low port numbers either. How will your firewall handle an incoming telnet
(TCP) connection using TCP port 23 as the source as well as the destination?
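A minimal Python sketch (hypothetical; the function is mine) of why the port-number heuristic is only a convention: a direction guess based purely on which side uses a port below 1024 is defeated by a crafted source port.

```python
# Hypothetical sketch of the point above: inferring connection direction
# from port numbers alone relies on a BSD convention (clients bind ports
# >= 1024) that nothing in TCP enforces. A crafted packet with a low
# source port makes the guess ambiguous.

def guessed_direction(src_port, dst_port):
    """Guess who is the client, from port numbers alone."""
    if src_port >= 1024 and dst_port < 1024:
        return "client->server"
    if src_port < 1024 and dst_port >= 1024:
        return "server->client"
    return "ambiguous"

print(guessed_direction(1201, 23))   # client->server: a normal telnet client
print(guessed_direction(513, 2000))  # server->client: looks like an rlogin reply
print(guessed_direction(23, 23))     # ambiguous: source port 23 evades the heuristic
```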

--
Paul Brooks |pa...@atlas.abccomp.oz.au|LIFE is a bowl of cherries:
TurboSoft Pty Ltd |p...@newt.phys.unsw.oz.au| sweet at first, until you
248 Johnston St. Annandale| | reach the pits.
Sydney Australia 2038 |ph: +61 2 552 3088 |

don provan

unread,
Aug 3, 1992, 1:50:23 PM8/3/92
to

I think he really did mean "gateway" in the modern sense, not "router".
You telnet in from the outside world through one interface, authenticate
yourself via a login, and then telnet or whatever out of the other
interface to access machines within the sphere of protection.

So it is a multi-homed host, yes, being used as a simple, service level
gateway, as opposed to a multi-homed host that's just being used to
provide services to both interfaces.
don provan
do...@novell.com

Jeffrey Mogul

unread,
Aug 3, 1992, 9:08:35 PM8/3/92
to
In article <bsahs...@wang.com> fi...@wang.com (Tom Fitzgerald) writes:
>d...@euclid.mit.edu (Dale R. Worley) writes:
>> I haven't studied the matter, but I believe that the more
>> sophisticated firewalling routers actually *do* track connections. At
>> least, I've heard claims about what some routers could do that I
>> couldn't figure out how to do without tracking connections.
>
>This sounds strange to me - and doesn't sound like a good idea. If a
>router has to take over a traffic path because of congestion or a dead
>link, or if it's rebooted, it's going to find itself in the middle of a
>bunch of connections with no idea of their history. Should it block all
>traffic that it didn't see a SYN for?
>
>Are we going to return to the Good Old Days of stateful IMPs that blow away
>all the connections through them when they're rebooted?

The screend program (described in "Simple and Flexible Datagram Access
Controls for Unix-based Gateways", in Proc. Summer 1989 USENIX Conference,
pp 203-221) keeps just enough state to match up IP datagram fragments.
Otherwise, there is no way to know the port numbers for a fragment.
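A minimal Python sketch (hypothetical; the class is mine, not screend's actual code) of that fragment-matching state: only the first fragment carries the port numbers, so the screen caches its verdict keyed by the datagram's identifying fields and reuses it for later fragments.

```python
# Hypothetical sketch of screend-style fragment matching: the verdict
# computed on the first fragment (the only one with TCP/UDP ports) is
# cached by (src, dst, proto, ip_id) and applied to later fragments of
# the same datagram.

class FragmentScreen:
    def __init__(self, verdict_for_ports):
        self.verdict_for_ports = verdict_for_ports   # policy on full headers
        self.pending = {}                            # (src, dst, proto, ip_id) -> verdict

    def check(self, src, dst, proto, ip_id, frag_offset, ports=None):
        key = (src, dst, proto, ip_id)
        if frag_offset == 0:                         # first fragment: ports visible
            verdict = self.verdict_for_ports(ports)
            self.pending[key] = verdict
            return verdict
        # Later fragment: reuse the first fragment's verdict; if it arrived
        # out of order (before the first fragment), we have no verdict yet.
        return self.pending.get(key, False)

screen = FragmentScreen(lambda ports: ports[1] != 2049)   # e.g. block NFS
print(screen.check("a", "b", "udp", 7, 0, ports=(1023, 2049)))  # False: blocked
print(screen.check("a", "b", "udp", 7, 1480))                   # False: follows the first
print(screen.check("a", "b", "udp", 8, 0, ports=(1023, 53)))    # True: passed
```

The `pending.get(key, False)` default is also where the out-of-order-fragment limitation mentioned below shows up in this sketch.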

This does make it impossible to run multiple screend instances in
parallel (i.e., to allow different fragments of one datagram to
follow different paths across the firewall router). We believe
that this is not an actual problem, both because few people send
fragments across routers (that usually causes trouble in any case)
and because path-splitting is actually quite rare (especially at
the boundaries of organizations). It does make it hard to exploit
a multiprocessor screening router, but why worry about it?

It also means that fragments received out of order will not always be
delivered. We have not observed this to be a problem.

If screend (or the entire router) crashes, the only state that
is lost applies to particular datagrams in transit, not connections.

However, bear in mind that many people believe that there will have
to be some sort of "soft state" setup in future Internets, to handle
both security and performance policies. The trick will be to avoid
losing a connection when a router crashes, but it's not impossible.

-Jeff

Barry Margolin

unread,
Aug 4, 1992, 12:12:48 AM8/4/92
to
In article <1992Aug4.0...@PA.dec.com> mo...@pa.dec.com (Jeffrey Mogul) writes:
>The screend program (described in "Simple and Flexible Datagram Access
>Controls for Unix-based Gateways", in Proc. Summer 1989 USENIX Conference,
>pp 203-221) keeps just enough state to match up IP datagram fragments.
>Otherwise, there is no way to know the port numbers for a fragment.

Non-initial fragments can simply be forwarded, and filtering should be
applied to the initial fragment when it shows up. If the initial fragment
is rejected then the datagram will never be fully assembled and will be
discarded by the destination host.
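A minimal Python sketch (hypothetical; the function is mine) of this stateless alternative: non-initial fragments pass untouched, and the filtering decision applies only to the initial fragment.

```python
# Hypothetical sketch of the stateless policy described above: forward
# any non-initial fragment; filter only the initial fragment. If the
# initial fragment is dropped, the destination can never reassemble the
# datagram and will eventually discard the rest.

def forward(frag_offset, initial_fragment_permitted):
    if frag_offset > 0:
        return True                      # non-initial fragments sail through
    return initial_fragment_permitted    # policy applies to the first fragment

print(forward(1480, initial_fragment_permitted=False))  # True: forwarded anyway
print(forward(0, initial_fragment_permitted=False))     # False: datagram never reassembles
```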

Robin Pickering

unread,
Aug 4, 1992, 7:34:14 AM8/4/92
to
In article <15l040...@early-bird.think.com> bar...@think.com (Barry Margolin) writes:
>In article <1992Aug4.0...@PA.dec.com> mo...@pa.dec.com (Jeffrey Mogul) writes:
>>The screend program (described in "Simple and Flexible Datagram Access
>>Controls for Unix-based Gateways", in Proc. Summer 1989 USENIX Conference,
>>pp 203-221) keeps just enough state to match up IP datagram fragments.
>>Otherwise, there is no way to know the port numbers for a fragment.
>
>Non-initial fragments can simply be forwarded, and filtering should be
>applied to the initial fragment when it shows up. If the initial fragment
>is rejected then the datagram will never be fully assembled and will be
>discarded by the destination host.

This is the approach which I took for the IP packet filtering code which
we introduced to our gateway UNIX box kernel.

I have some nagging doubts about this causing problems by filling the fragment
reassembly queues of hosts to which the datagrams are sent. A failed connection
or RPC attempt will potentially generate many fragments. These will obviously
time out eventually but the recommended value for IP reassembly timeout is quite
long (60-120 secs). In practice of course TCP SYNs will arrive un-fragmented due
to their size.

Another problem with this approach is that one tells the sending host a
complete load of ambiguous cobblers (ICMP Time Exceeded in Reassembly from the
target and ICMP Host Unreachable from the gateway). Blech!

--
Rob.

Barry Margolin

unread,
Aug 4, 1992, 7:34:35 PM8/4/92
to
In article <1992Aug4.1...@inmos.co.uk> r...@inmos.co.uk (Robin Pickering) writes:
>In article <15l040...@early-bird.think.com> bar...@think.com (Barry Margolin) writes:
>>Non-initial fragments can simply be forwarded, and filtering should be
>>applied to the initial fragment when it shows up.

>I have some nagging doubts about this causing problems by filling the fragment


>reassembly queues of hosts to which the datagrams are sent. A failed connection
>or RPC attempt will potentially generate many fragments. These will obviously
>time out eventually but the recommended value for IP reassembly timeout is quite
>long (60-120 secs). In practice of course TCP SYNs will arrive un-fragmented due
>to their size.

Under normal conditions, I don't think it should be a problem. As you
implied, most SYNs don't have any data, so they won't be fragmented. While
NFS often has to fragment packets containing file data, if you reject the
initial mount request (or the portmapper request before that) then you
won't get as far as sending these large data packets. I think there are
very few protocols that are likely to send fragmented packets before some
initial handshake using small packets, and if you were planning on
filtering the large packets you should also filter out the handshake.

>Another problem with this approach is that one tells the sending host a
>complete load of ambiguous cobblers (ICMP Time Exceeded Reassembly from target
>and ICMP Unreach Host from gateway) blech!.

I don't think he'll get the Time Exceeded message. In order for the target
to send Time Exceeded I believe it has to receive the initial fragment (so
it can include it in the data portion of the ICMP). When reassembly times
out without receiving the initial fragment, the reassembly queue is
discarded silently.

Tom Fitzgerald

unread,
Aug 4, 1992, 8:41:04 PM8/4/92
to
> m...@hussar.dco.dec.com (Marcus J. "will do TCP/IP for food" Ranum) writes:
>>| Simple gateway - a node which is reachable on two networks, but has routing
>>| disabled, making it a termination point on both. This is typically
>>| a host with TCP/IP forwarding disabled.

> k...@cs.mu.oz.au (Robert Elz) writes:
> >That isn't a gateway at all, its a multi-homed host, which is a term
> >that's been around since about forever.

do...@novell.com (don provan) writes:
> I think he really did mean "gateway" in the modern sense, not "router".
> You telnet in from the outside world through one interface, authenticate
> yourself via a login, and then telnet or whatever out of the other
> interface to access machines within the sphere of protection.

Then a better name might be "application level gateway". "Simple" doesn't
really explain what's going on. An application-level gateway can be a lot
more complicated than an IP-level gateway.... And the definition should
probably emphasize the relaying of data, rather than the lack of IP routing;
if the host isn't forwarding data at the application layer, it fits
Marcus's definition but not the label "gateway".

peter da silva

unread,
Aug 5, 1992, 5:34:13 PM8/5/92
to
In article <1992Aug3.1...@novell.com> do...@novell.com (don provan) writes:
> >That isn't a gateway at all, its a multi-homed host, which is a term
> >that's been around since about forever.

> I think he really did mean "gateway" in the modern sense, not "router".

I call this an application-level gateway. We run a number of these between
OSI and TCP networks.
--
Peter da Silva `-_-'
$ EDIT/TECO LOVE 'U`
%TECO-W-OLDJOKE Not war? Have you hugged your wolf today?
Ferranti Intl. Ctls. Corp. Sugar Land, TX 77487-5012 +1 713 274 5180

Steven Bellovin

unread,
Aug 6, 1992, 11:37:40 AM8/6/92
to
In article <15l040...@early-bird.think.com>, bar...@think.com (Barry Margolin) writes:
> Non-initial fragments can simply be forwarded, and filtering should be
> applied to the initial fragment when it shows up. If the initial fragment
> is rejected then the datagram will never be fully assembled and will be
> discarded by the destination host.

That works in most situations. However, if you're worried about an
insider using specialized programs to leak information to an outsider,
this isn't sufficient. Consider the military models -- and you'll see
why security labels are copied on fragmentation.

John Clark

unread,
Aug 12, 1992, 2:41:30 PM8/12/92
to
In article <1992Jul31....@maccs.dcss.mcmaster.ca> be...@maccs.dcss.mcmaster.ca (Carl Beame) writes:
+
+ A properly configured Firewall Router can allow access from the
+local net onto the Internet and even allow Internet access to specific
+services or servers on the local net. For Example: If you want to provide

Having only recently become interested in this type of feature, does
anyone have a list of hardware which supports 'firewall' building?

In a somewhat related topic, what does one do about the password
protection problem for accesses other than anonymous FTP? I saw some
mention of an 'encrypter' which appeared to repacket IP datagrams
with the data portion encrypted. Is there hardware that does this, or
did I really misunderstand the post?


--

John Clark
jcl...@ucsd.edu

Marcus J. Ranum

unread,
Aug 12, 1992, 10:31:31 PM8/12/92
to
jcl...@sdcc3.ucsd.edu (John Clark) writes:

>Having only recently been interrested in this type of feature, does
>any one have list of hardware which supports 'Firewall' building.

We use (not surprisingly) DEC hardware to build host-based
screening routers. See Mogul's paper on the packet screen, for more
details. My gateway here (decuac.dec.com) is built of a DECsystem2100,
a MicroVAXII with a pair of ethernet interfaces, and another
DECsystem2100 internally running IP and DECnet.

mjr.
