Google Groups no longer accepts new Usenet posts or subscriptions. Historical content remains viewable.

Linux or BSD alternative to Windows Home Server


dh003i

Sep 2, 2007, 12:49:16
Sorry if this post is round-about, but I just wanted to explain my
situation before asking a few questions.

I have a question on using Gentoo vs. FreeBSD vs. OpenBSD for a home
server. A few years ago, I tried using Gentoo for my desktop. The user-
guides were great, as was the support offered on the forums, but
setting it up as a desktop was just too much of a hassle (I thus
reverted to my default XP install, also because we need Excel & Word
in business). It was weird, because the first time I did the emerge
for Xfree, and set it up, it worked (although settings weren't all
optimized). For some reason, it never worked after that, even though I
returned the settings to exactly what they had been. I don't mind fiddling
around with settings, but for my laptop, at some point, I just want it
to work easily, because I do a lot of photography & listen to many
podcasts.

Anyways, a home server doesn't need Xfree (thank god), so Gentoo,
FreeBSD, and OpenBSD seem more viable. I've become interested in a
home server, as my 100GB secondary laptop HD is getting full, and I
need someplace for my data. So I found out about Drobo (an idiot-proof
system), and then saw Windows Home Server @ http://snipurl.com/whs_features
[HP EX475 MediaSmart Home Server (AMD Live/ 64 Bit Sempron Processor,
1 TB Hard Drive), with 4 HD bays]. Right now, when home, I sit behind
2 wireless routers arranged in serial (Belkin, then Netgear). I have
each configured, and connect to the Netgear when using my ethernet
cable. I usually connect to the Belkin when using wireless, as it has
superior signal (so I probably ought to reconfigure things so the Belkin
is the 2nd wireless router).

My basic minimal needs are using the server to store my data, which I
want to access from my laptop(s) [my fiance will also be using it].
Our laptops run Windows XP. I'd like to access it via direct ethernet
cable connection, wireless in the house, and also from the internet
(hence, set it up as a true server). At least for a while, I doubt that
many people other than myself, my fiance, and family would be accessing
it.

Issues:

1. As I will be able to access it from the internet, one of my main
concerns is obviously security. How does fBSD stack up in security vs.
oBSD for my purposes?

2. What about the setup of the cable modem, belkin router, netgear
router, and laptops? These routers are wireless, so could I put one of
the routers between the BSD server and that cable modem as an extra
layer of protection (via its firmware firewall) for the server, and
the other one between the server and the laptops (which I want to be
able to access the network wirelessly)? i.e., like this

[Cable modem] <=> [Wireless Router] <=> [Home Server] <=> [Wireless
Router] <=> [Laptops]

Or does having the router's firewall in front of the server
effectively do nothing, as its security measures are insignificant
compared to what I'd have on a BSD server? In any event, I would need
one wireless router between the BSD server and the laptops.

3. My files are all of windows file-types; e.g., the text-files have
windows line-breaking. Will I be able to seamlessly interoperate
between the laptop and the server?

4. What about hardware compatibility? I'd probably have something
like the AMD Live/ 64 Bit Sempron Processor for the server's CPU...how
is BSD's compatibility vs. Gentoo? What about for graphics cards? (one
thing that interests me is maybe farming out some of the work done on
the laptop to the server)

5. One of my forward-looking concerns is scalability & ability to
upgrade. I'll probably be using a tower with at least 4 hard-drive
bays, possibly more, plus the ability for several USB-connected hard-
drives. I may also want to at some point upgrade the CPU.

6. Features...some of the features of WHS I find quite
attractive...can I implement them in BSD or Gentoo as well; e.g.,
* Automatically backup, specifying time, how many copies made, and
what file backed up
* Stream photos, music and videos to PCs on your network or to your TV
or stereo system
* Incremental backups: After initial backup, only changes are backed
up.
* Efficient single copy backup: A single copy of each file is backed
up, no matter how many computers that file resides on in your home
network.

Your thoughts would be greatly appreciated.

Ignoramus20336

Sep 2, 2007, 13:10:23
If you install, e.g. Fedora 7, you likely will have everything working
fine (desktop, sharing, samba, NFS etc). I use Fedora 7 for my home
servers, clients, laptop, etc.

i

General Schvantzkoph

Sep 2, 2007, 14:09:16

Why do you want to make your life so hard? Gentoo is for people who want
to tweak every last knob, it's not for people who just want to get the
job done. Fedora 7 will give you everything you need right out of the
box. It will just work but like any Linux you can still tweak anything
you want to tweak.

Accessing any *nix server from the Internet is best done through ssh.
OpenSSH comes from the FreeBSD people but it's bundled in every Linux,
Unix and BSD distro. SSH gives you an encrypted channel to your machine.
When you set it up make sure that you require RSA authentication and
disable password authentication. Passwords can be guessed, public keys
can't. Also it's a good idea to move the ssh port from the default port
number, 22, to some high port number. There are port scanners that look
for ssh on port 22 and try and guess passwords. If you require RSA
authentication the port scanners will be harmless, but it's still
annoying to see their attempts in your log files. Moving the port to a
high number eliminates this. On your router you port forward the SSH port
to your server, all other ports should be closed.
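The sshd_config settings behind that advice can be sketched as follows (2222 is just an example of a high port; the fragment is written to a local file here rather than to the live /etc/ssh/sshd_config):

```shell
# Sketch of the relevant sshd_config lines: non-default port, key-only
# auth, password logins disabled. Port 2222 is an arbitrary example.
cat > sshd_config.sample <<'EOF'
Port 2222
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
EOF
# Count the two "... no" directives as a quick sanity check:
grep -c ' no$' sshd_config.sample
```

After editing the real config you would restart sshd and forward only that one port on the router, as described above.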

Douglas Mayne

Sep 2, 2007, 14:19:29
On Sun, 02 Sep 2007 16:49:16 +0000, dh003i wrote:

> Sorry if this post is round-about, but I just wanted to explain my
> situation before asking a few questions.
>
> I have a question on using Gentoo vs. FreeBSD vs. OpenBSD for a home
> server. A few years ago, I tried using Gentoo for my desktop. The user-
> guides were great, as was the support offered on the forums, but
> setting it up as a desktop was just too much of a hassle (I thus
> reverted to my default XP install, also because we need Excel & Word
> in business).
>

It's funny that you chose Gentoo as your distribution, but then gave
up so quickly. Most Gentoo users aren't as easily discouraged, as
represented by their willingness to wait up to a week while the complete
system is recompiled. I don't have that amount of patience, myself.
However, there are a lot of choices when it comes to GNU/Linux
distributions. I use Slackware, which is somewhat in the BSD startup
school. IMHO, Slackware plus ssh, plus Samba makes an excellent server.
It can also be setup to boot headless, and does not require X at all.
However, I would install X, even on headless boxes, because X can be
forwarded over ssh.

BTW, your Windows compatibility problem could have been handled nicely by
virtualization. Virtual machines are the ultimate compatibility layer. Too
bad you didn't investigate that option before trotting your horse back to
the Windows barn. Freedom is always the best option.

--
Douglas Mayne


dh003i

Sep 2, 2007, 14:45:23
General Schvantzkoph,

Thanks for your response.

1. Re why I want to make life so hard...

The reason I'm looking at Gentoo and FreeBSD is because I just want a
server, and don't need the stuff that was driving me nuts when I tried
Gentoo for my laptop (i.e., X). Also, I like the customizability and
hence possibility for greater performance. I am somewhat of a control
freak. It will be a while until I need this, so I'm doing the
research now, and will be well-read and ready to start running by the
time I'm ready to set this up. As I understand it, under Fedora, you
do not get compiles customized to your specifications, or your CPU
(which you do get in Gentoo and BSD).

I actually setup everything on my laptop in Gentoo, except for X. Now
that I remember, it was something to do with the ATI 32MB Radeon
Mobility graphics card being problematic for Linux at the time.

2. Re RSA authentication for connecting to SSH, as I understand it,
public GPG keys are enormous sequences of characters. How would I use
such to login to SSH? Is that kind of like the USB key boot key thing,
where the USB key (with your key on it) is required to boot? So I
stick in a USB key when I need to login remotely?

What if I want to login and be able to view it in a windows folder
view? Are there ways to do this, while retaining SSH security?

Also, at some point, I may want to use the home server to serve up web-
pages, e.g., for some of the pictures I have...what do I do then about
security (obviously, then, that computer has to be accessible by web-
browsers, not just via ssh)?

It seems to me like the option to maintain security would then be to
have a separate server box for public access from the web at-
large...but this would duplicate data, right? Or could I set something
up where there's one box that serves up the web-page, but refers to my
home server for the data (i.e., picture files)?

3. Re a wireless router for a firewall vs. a dedicated box running a
minimal BSD or Linux install, what are the advantages / disadvantages
of each?

dh003i

Sep 2, 2007, 15:13:14
Douglas,

1. I did not revert quickly. I used Gentoo for approximately 1 year,
and then spent some time using both systems. I simply could not get X
working with my graphics card (32MB ATI Radeon Mobility)...there were
numerous forums dedicated to this on the Gentoo Forums list, and for
some people it worked, but not everyone could get it set up. Xfree86
had problems with that graphics card. Btw, I did get Xfree86 to
compile and setup properly on my desktop, which had a 64MB Nvidia
GeForce2, along with a WM.

In any event, I got other things set up fine, and used Vim for quite a
long time for my note-taking (I also use it on Windows; I have some
stand-alone ports, and also Cygwin; the "rename" command is incredibly
useful whenever I need to rename 100 files in batch, e.g., change DSC1
to "Picture 1").
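That kind of batch rename can also be done in plain sh with a loop, for anyone without the Debian-style "rename" utility (file names below are made up for the demo):

```shell
# Rename DSC1.jpg, DSC2.jpg, ... to "Picture 1.jpg", "Picture 2.jpg", ...
mkdir -p demo_pics
touch demo_pics/DSC1.jpg demo_pics/DSC2.jpg demo_pics/DSC3.jpg
for f in demo_pics/DSC*.jpg; do
  n=${f#demo_pics/DSC}      # strip the leading path and "DSC" prefix
  n=${n%.jpg}               # strip the ".jpg" suffix, leaving the number
  mv "$f" "demo_pics/Picture $n.jpg"
done
ls demo_pics
```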

2. Re setting up X on my headless server, as it can be forwarded, can
it be forwarded to a Windows computer?

3. Virtualization, such as VMware, only works if you have Xfree86 up
and working, along with a desktop or window manager. I couldn't get
that up and running. In any event, even if I could, I'd then be
running it v. slowly (Excel / Word is already slow enough), and using
lots of space for a dual boot. I didn't have that much HD space.

4. Re giving up on "freedom", I don't think there's a categorical
difference between using a little bit of proprietary software from
within a largely Free (GPL & BSD) system vs. using a significant
amount of Free software (e.g., Firefox) from within a proprietary
system, which is what i do now. Anyways, the drivers for my Radeon GPU
were proprietary to begin with.

Dances With Crows

Sep 2, 2007, 15:20:05
dh003i staggered into the Black Sun and said:

When posting to Usenet, it is considered good manners to include
context and trim it as I have done here. Please do that in the future.
Yes, there is a way to make that "G2" client do that.

> The reason I'm looking at Gentoo and FreeBSD is because I just want a
> server, and don't need the stuff that was driving me nuts when I tried
> Gentoo for my laptop (i.e., X). Also, I like the customizability and
> hence possibility for greater performance.

Gentoo's customizability is a huge advantage if you need it. The main
advantages that Gentoo has over Fedora are A) fewer bugs B) ease of
upgrading. The performance gain you get from custom compilation is,
AFAICT, small.

> I actually setup everything on my laptop in Gentoo, except for X. Now
> that I remember, it was something to do with the ATI 32MB Radeon
> Mobility graphics card being problematic for Linux at the time.

When was this? I had no trouble getting the radeon Xorg module working
with the FireGL2 on my laptop 1.5 years ago. (Then again, I'm much more
experienced with Gentoo than you are.) The main thing you need to do is
to put the lines

VIDEO_CARDS="vesa radeon"
INPUT_DEVICES="keyboard mouse evdev joystick"

...into /etc/make.conf before emerging X. No, this is not as clear as
it could be, but it *is* mentioned in the Gentoo Handbook.

> as I understand it, public GPGP keys are enormous sequences of
> characters. How would I use such to login to SSH?

You take your public key (~/.ssh/id_rsa.pub , usually) and append it to
~/.ssh/authorized_keys2 on the remote machine. The public keys are
about 600 bytes long and in ASCII, hardly an insane amount of data.
If you don't have an ssh public key yet, do ssh-keygen first.
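The mechanics are just a one-line append. A sketch (the user/host names are hypothetical, and the append is simulated on a local directory here; on a real server you'd run the ssh form shown in the comment):

```shell
# Real command would be:
#   cat ~/.ssh/id_rsa.pub | ssh user@server 'cat >> ~/.ssh/authorized_keys2'
# Simulated locally with a placeholder key string (a real one comes from
# ssh-keygen and is a single ~600-byte ASCII line):
echo 'ssh-rsa AAAAB3...example user@laptop' > demo_id_rsa.pub
mkdir -p demo_remote_ssh
cat demo_id_rsa.pub >> demo_remote_ssh/authorized_keys2
grep -c '^ssh-rsa' demo_remote_ssh/authorized_keys2
```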

> What if I want to login and be able to view it in a windows folder
> view? Are there ways to do this, while retaining SSH security?

sftp://user@machine/ in a konqueror window, WinSCP3 in 'Doze.

> Also, at some point, I may want to use the home server to serve up
> webpages, e.g., for some of the pictures I have...what do I do then
> about security?

Make sure apache's configured properly. Make sure the dirs where you
have content aren't world-writable, make sure you aren't running dodgy
PHP scripts. Also, the TOSes of many ISPs prevent users from "running
servers", so make sure you're allowed to run apache.
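One quick hygiene check along those lines (a sketch; /var/www stands in for wherever your content lives, demonstrated on a throwaway directory):

```shell
# List world-writable files under the web content directory.
# On the real box this would be: find /var/www -type f -perm -0002
mkdir -p demo_www
touch demo_www/ok.html demo_www/bad.php
chmod 644 demo_www/ok.html
chmod 666 demo_www/bad.php     # deliberately world-writable for the demo
find demo_www -type f -perm -0002
```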

--
I have to carry this axe.
It's part of my anger management therapy.
--Patrik_Aka_RedX
Matt G|There is no Darkness in Eternity/But only Light too dim for us to see

Douglas Mayne

Sep 2, 2007, 16:08:51
On Sun, 02 Sep 2007 19:13:14 +0000, dh003i wrote:

> Douglas,
>
> 1. I did not revert quickly. I used Gentoo for approximately 1 year,
> and then spent some time using both systems. I simply could not get X
> working with my graphics card (32MB ATI Radeon Mobility)...there were
> numerous forums dedicated to this on the Gentoo Forums list, and for
> some people it worked, but not everyone could get it set up. Xfree86
> had problems with that graphics card. Btw, I did get Xfree86 to
> compile and setup properly on my desktop, which had a 64MB Nvidia
> GeForce2, along with a WM.
>

Of course, people do give up on GNU/Linux and revert to Windows. They
do so for a variety of reasons. The most common reasons that I have
seen are these: either they don't have the patience (or are not willing
to invest the time necessary) to learn the basics; or else they get
stymied by a hardware road block. I don't know about the specific problem
you were having with the NVidia card, but I do know that computer hardware
is fairly easy to come by, especially obsolete hardware. Having a working,
but obsolete, graphics card for $15 is sometimes better than a $400
"state-of-the art," but non-supported one. I think you would have
benefitted by sticking with GNU/Linux in the past couple of years because
you would have gained that much experience. In any case, I doubt you'd
still be struggling with a square one question, "Which is the best
server?" type of question. You'd have that answer in your back pocket.

BTW, X.org has pretty much pre-empted XFree86 now.


>
> In any event, I got other things set up fine, and used Vim for quite a
> long time for my note-taking (I also use it on Windows; I have some
> stand-alone ports, and also Cygwin; the "rename" command is incredibly
> useful whenever I need to rename 100 files in batch, e.g., change DSC1
> to "Picture 1").
>
> 2. Re setting up X on my headless server, as it can be forwarded, can it
> be forwarded to a Windows computer?
>

Yes. Cygwin's X is one choice.


>
> 3. Virtualization, such as VM Ware, only works if you have Xfree86 up
> and working, along with a desktop or window manager. I couldn't get that
> up and running. In any event, even if I could, I'd then be running it v.
> slowly (Excel / Word is already slow enough), and using lots of space
> for a dual boot. I didn't have that much HD space.
>

Older hardware may not provide a suitable platform for virtualization.
Newer hardware with adequate CPU, memory, and storage is now up to the
task. The typical configuration of last year's laptops can be used to run
virtualization with no big performance hit: say >=1.5G CPU, >=1.0G RAM,
>=80G HD. Obviously, today's core 2 duos with 2G RAM would be even faster.
I have used Pentium M based laptops with VMWare without noticing a
significant performance penalty.

>
> 4. Re giving up on "freedom", I don't think there's a categorical
> difference between using a little bit of proprietary software from
> within a largely Free (GPL & BSD) system vs. using a significant
> amount of Free software (e.g., Firefox) from within a proprietary
> system, which is what i do now. Anyways, the drivers for my Radeon GPU
> were proprietary to begin with.
>

From a strictly pragmatic point of view, I feel that virtualization is
the best compromise for running Windows-only applications. From a
security point of view, I believe that IPTables on top of the Linux
kernel is far superior to any Windows firewall which, by definition,
will be built on top of Windows. Virtualization allows Windows to live in a
box, protected by IPTables, and be suspended when not in use.

Also, please see comments on this thread from "Dances with Crows"
regarding posting style when posting through Google groups.

--
Douglas Mayne

dh003i

Sep 2, 2007, 17:03:43
> stymied by a hardware road block. I don't know about the specific problem
> you were having with the NVidia card, but I do know that computer hardware
> is fairly easy to come by, especially obsolete hardware. Having a working,
> but obsolete, graphics card for $15 is sometimes better than a $400
> "state-of-the art," but non-supported one.

Well, considering I bought the desktop before I got Linux, and bought
it partly for its then-great 64MB GeForce 2, I'm not going to get
another (inferior) processor. And when I bought it, it was a gaming
system, largely for the Descent and Tomb Raider series, which I had on
Windows (btw, I wasn't going to buy them again for Linux, I know D3 is
avail for Linux; and Wine was and I bet still is really patchy for
those games).

Also, Gentoo worked fine with my Nvidia graphics card. It did not work
fine with my on-board ATI card in my laptop, which isn't replaceable.

> I think you would have
> benefitted by sticking with GNU/Linux in the past couple of years because
> you would have gained that much experience. In any case, I doubt you'd
> still be struggling with a square one question, "Which is the best
> server?" type of question. You'd have that answer in your back pocket.

I learned quite a bit about Linux installing Gentoo. However, I didn't
learn squat about servers, nor did any of my research cover servers and
routers. That Gentoo Linux can make a pretty decent desktop system
has nothing to do with a server or router. That's like saying if I
used Windows 98 (a desktop OS), I'll somehow magically learn about
what's good for a server.

Douglas Mayne

Sep 2, 2007, 17:37:21
On Sun, 02 Sep 2007 21:03:43 +0000, dh003i wrote:

>> stymied by a hardware road block. I don't know about the specific problem
>> you were having with the NVidia card, but I do know that computer hardware
>> is fairly easy to come by, especially obsolete hardware. Having a working,
>> but obsolete, graphics card for $15 is sometimes better than a $400
>> "state-of-the art," but non-supported one.
>
> Well, considering I bought the desktop before I got Linux, and bought
> it partly for it's then-great 64MB GeForce 2, I'm not going to get
> another (inferior) processor. And when I bought it, it was a gaming
> system, largely for the Descent and Tomb Raider series, which I had on
> Windows (btw, I wasn't going to buy them again for Linux, I know D3 is
> avail for Linux; and Wine was and I bet still is really patchy for
> those games).

Sorry, I'm not a gamer at all.


>
> Also, Gentoo worked fine with my Nvidia graphics card. It did not work
> fine with my on-board ATI card in my laptop, which isn't replaceable.
>
>> I think you would have
>> benefitted by sticking with GNU/Linux in the past couple of years because
>> you would have gained that much experience. In any case, I doubt you'd
>> still be struggling with a square one question, "Which is the best
>> server?" type of question. You'd have that answer in your back pocket.
>
> I learned quite a bit about Linux installing Gentoo. However, I didn't
> learn squat about servers, nor was any of my research on servers and
> routers. That Gentoo Linux can make a pretty descent desktop system
> has nothing to do with a server or router. That's like saying if I
> used Windows 98 (a desktop OS), I'll somehow magically learn about
> what's good for a server.
>

In the *nix world, AFAICT, there's not a tremendous difference between
workstations and servers. That is, assuming you are not trying to build
a load-balanced failover cluster member, akin to those which serve
the Google. That is a different animal. I think that your needs are
somewhat less intensive in building a simple household server.
There's a loose association between working knowledge and the
distributions people pick. I wouldn't expect a newbie to pick
Linux-from-scratch (LFS) and have that much success getting it working.
You came to this newsgroup as a self-professed former Gentoo user. Gentoo
is a Linux distribution that I associate in the same class with LFS.
Gentoo users are generally knowledgeable (because they have to be to get
things working.) You have stated that you worked with it for a year. I
simply assumed you must have learned something in that time.

--
Douglas Mayne

General Schvantzkoph

Sep 2, 2007, 17:39:35

> I learned quite a bit about Linux installing Gentoo. However, I didn't
> learn squat about servers, nor was any of my research on servers and
> routers. That Gentoo Linux can make a pretty descent desktop system has
> nothing to do with a server or router. That's like saying if I used
> Windows 98 (a desktop OS), I'll somehow magically learn about what's
> good for a server.

That's not really a valid comparison. Microsoft differentiates their
offerings in a way that Linux doesn't. Every major Linux distro handles
the full range of tasks from laptops to huge servers. The Linux kernel
can support up to 255 processors with a simple change of a parameter (I'm
not sure what is done in supercomputers with 10s of thousands of
processors but whatever it is it's in the kernel and available to
everybody). The default number of processors supported in Fedora is 64,
in Gentoo that choice would be yours to make so it could be any number
that you wanted. The major Linux distros also compile their kernels to
support as much RAM as you can put on a box, even the 32bit distros.
Microsoft limits XP Pro to about 3.5G, the 32 bit Linuxes all support
64G, the 64bit Linuxes are only limited by the hardware limits. The Linux
distros also come with the full range of server applications, all you
have to do is install them. You get webservers, i.e. Apache, database
servers, SAMBA and NFS, mail servers, clustering, ssh, firewalling,
routing, everything you could want. Microsoft makes you buy the server
edition to get those things. Linux has no limit on the number of users or
the number of connections, Microsoft does. So if you have installed any
Linux distro then you can make that system a server, just install the
apps you want. If you want to run with X off all you have to do is change
the init level from 5 to 3 in /etc/inittab. Configuring the server
applications is also easy, I use webmin which works for most distros. It
allows you to use a browser from any box on your network to administer
your system.
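The runlevel change is a one-line edit. A sketch with sed, applied to a copy rather than the live /etc/inittab (this is the SysV-init layout of distros of that era):

```shell
# Simulate the relevant line of /etc/inittab, then switch the default
# runlevel from 5 (graphical) to 3 (multi-user, no X):
echo 'id:5:initdefault:' > demo_inittab
sed 's/^id:5:initdefault:/id:3:initdefault:/' demo_inittab > demo_inittab.new
cat demo_inittab.new
```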

dh003i

Sep 2, 2007, 18:27:48
> In the *nix world, AFAICT, there's not a tremendous difference between
> workstations and servers. That is, assuming you are not trying to build
> a load-balanced failover cluster member, akin to those which serve
> the Google. That is a different animal. I think that your needs is
> somewhat less intensive in building a simple household server.
>
> There's a loose association between working knowledge and the
> distributions people pick. I wouldn't expect a newbie to pick
> Linux-from-scratch (LFS) and have that much success getting it working.
> You came to this newsgroup as a self-professed former Gentoo user. Gentoo
> is a Linux distribution that I associate in the same class with LFS.
> Gentoo users are generally knowledgable (because they have to be to get
> things working.) You have stated that you worked with it for a year. I
> simply assumed you must have learned something in that time.
>
> --
> Douglas Mayne

Yea, I know that just about any Linux distro can be set up as a
server. Gentoo most certainly can. (btw, Gentoo is a difficult distro
to install -- although the amazing documentation basically makes it
easy, so maybe I should say "technical" to install -- but it is
nowhere near LFS. Gentoo automates the compiling of just about
everything for you, and all you have to do typically is specify USE
flags; in LFS, as I understand it, you compile everything from scratch
the hard way; no meta-package management system to help you out).

Sure, I learned things, but I learned 0 (or almost 0) about servers.
It's quite possible to install and setup any Linux distro as a desktop
without learning one single thing about servers, beyond whatever you
need to know to get your internet working.

My questions were about Gentoo vs. BSD for a server, and the relative
merits of each. Maybe I should re-state them more clearly:

1. How does fBSD stack up in security vs. oBSD for my purposes? (and
also vs. Gentoo?) I will probably eventually have a dedicated firewall
box (replacing the wireless router) between the cable modem and the
server, like this

[Cable modem] <=> [Wireless Router] or [dedicated firewall] <=> [Home
Server] <=> [Wireless Router] <=> [Laptops]

2. Are there any benefits to having multiple firewalls in place in
serial...e.g., right now, I have 2 wireless routers in serial; if I
get a dedicated firewall box, I would have 1 dedicated firewall and 2
router firewalls in serial (possibly). Or is this just useless?

4. I've since learned that FreeBSD and Gentoo support 64-bit
processors fine. However, what about the server applications? (This is
again something I learned nothing about, as I am still using just 32-
bit processors; I figure my server will be 64-bit).

5. One of my forward-looking concerns is scalability & ability to
upgrade: possibility to upgrade to numerous HDs, multiple CPUs,
multiple GPUs, RAM, etc. Thus, my concerns here are the limits in
Linux and BSD on hard-drive space recognizable, and RAM
recognizable.

6. I'm aware that I could probably create scripts to regularly backup
certain files, and use BASH shell commands to specify a schedule for
such; but I was wondering if there were utilities for this. And also,
streaming of photos or video or music to the TV or stereo system? Is
that possible?

7. Finally, the nicest backup features of WHS that I saw were
incremental backups, and efficient single copy backup. If a file's
already backed up, it only backs up changes; likewise, storing only 1
copy of the same file as a backup, if there are multiple copies of
that file in the home network. Are there utilities in BSD or Linux to
do this?

dh003i

Sep 2, 2007, 18:38:20
The other thing I wanted to know is if I can farm off CPU and GPU
intensive tasks from my WinXP laptop to a Linux or BSD server?

Dances With Crows

Sep 2, 2007, 19:53:07
dh003i staggered into the Black Sun and said:
> Douglas Mayne wrote:
>> In the *nix world, AFAICT, there's not a tremendous difference
>> between workstations and servers. There's a loose association
>> between working knowledge and the distributions people pick. Gentoo
>> is a Linux distribution that I associate in the same class with LFS.
> Gentoo automates the compiling of just about everything for you, and
> all you have to do typically is specify USE flats; in LFS, as I
> understand it, you compile everything from scratch the hard way; no
> meta-package management system to help you out).

Package management is a major win. Using a distro without a good
package manager is like trying to roll marbles uphill using your nose.

> Sure, I learned things, but I learned 0 (or almost 0) about servers.

Every major server has (or should have) an associated HOWTO, FAQ, and
probably a wiki. Read those things, then apply the knowledge.

> 1. How does fBSD stack up in security vs. oBSD for my purposes?

0. What's with the numbered paragraphs? Some British authors did that
in the 1960s, then they gave it up. Anyway, OpenBSD is probably the
most secure of the *BSDs in its default install. It's also complete
overkill for your purposes, and a much larger PITA to work with.

> [Cable modem] <=> [Wireless Router] or [dedicated firewall] <=> [Home
> Server] <=> [Wireless Router] <=> [Laptops]

Why 2 802.11x pieces? Wireless is inherently flakier, slower, and more
expensive than wired. Have as few APs as you can just for sanity.

> Are there any benefits to having multiple firewalls in place in
> serial?

Not unless you're doing some more complicated things which you didn't say
you were doing. Put a good iptables ruleset on the first wall and
fuggeddabouddit.
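A starting point for that "good iptables ruleset on the first wall" might look like the script below (a sketch, not a complete policy; port 2222 is an arbitrary example of a forwarded SSH port, and it's saved to a file rather than applied, since applying rules needs root on the firewall box):

```shell
# Default-deny inbound; allow loopback, established replies, and one
# inbound SSH port. Everything else is dropped.
cat > demo_firewall.sh <<'EOF'
#!/bin/sh
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 2222 -j ACCEPT
EOF
chmod +x demo_firewall.sh
grep -c '^iptables' demo_firewall.sh
```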

> I've since learned that FreeBSD and Gentoo support 64-bit processors
> fine. However, what about the server applications?

x86-64 server apps work fine. The real problem with the x86-64 is that
there are still a few X clients that don't work properly (epsxe, wmctrl)
and a few more where workarounds are necessary (OpenOffice, Flash
Player).

> my concerns here are the limits in Linux and BSD on hard-drive space
> recognizeable, and RAM recognizable.

Linux can use up to 255 SCSI disks at once. A single filesystem can be
no larger than 2T unless you use XFS. Maximum partitionable space with
a normal x86 partition table is 2T. To use larger partitions, you need
to use a GPT table, but most x86s can't boot from disks that have a GPT
table. RAM should be OK to at least 64G on the x86-64, and probably
would work if you had more than that.

> I'm aware that I could probably create scripts to regularly backup
> certain files, and use BASH shell commands to specify a schedule for
> such; but I was wondering if there were utilities for this.

man crontab
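For instance, a nightly backup entry looks like this (the script path is hypothetical; the entry is written to a file for inspection, and "crontab mycron" would install it):

```shell
# "30 2 * * *" = 02:30 every day; then the command to run.
echo '30 2 * * * /home/user/bin/backup.sh' > mycron
# Six whitespace-separated fields: five time fields plus the command.
awk '{print NF}' mycron
```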

> And also, [can I do] streaming of photos or video or music to the TV
> or stereo system? Is that possible?

Stereo: Plug stereo speakers into one of the other output jacks on your
sound card. If you only have one output jack, a thing called a Y-cable
may be useful. Use alsamixer (or whatever) to set the main sound output
to that jack if you don't have a Y-cable. TV: Impossible to say since
you didn't provide the make and model of your video card. TV-out with
an nVidia card and the evil binary-only nVidia X module works fine, but
you need to read the README (chapter 16, "configuring TV-out".)

> Finally, the nicest backup features of WHS that I saw were incremental
> backups, and efficient single copy backup. If a file's already backed
> up, it only backs up changes; likewise, storing only 1 copy of the
> same file as a backup, if there are multiple copies of that file in
> the home network. Are there utilities in BSD or Linux to do this?

rsync/unison sounds like it'd probably do what you need. HTH,
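The WHS-style "single copy" behavior maps to rsync's --link-dest option: unchanged files in a new snapshot become hard links into the previous one, so only one copy sits on disk. The real command is sketched in the comment; the underlying hard-link mechanism is demonstrated with ln:

```shell
# Real incremental-snapshot command (sketch):
#   rsync -a --link-dest=../snap1 src/ snap2/
# What --link-dest does for an unchanged file, shown with ln:
mkdir -p snap1 snap2
echo 'hello' > snap1/a.txt
ln snap1/a.txt snap2/a.txt          # hard link: same inode, two names
[ snap1/a.txt -ef snap2/a.txt ] && echo 'one copy on disk, two snapshots'
```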

--
One RAID to rule them all, One RAID to bind them
One RAID to hold the files and in the darkness grind them
In the land of Server where the Unix lies....

Douglas Mayne

2 Sep 2007, 19:55:56
On Sun, 02 Sep 2007 22:27:48 +0000, dh003i wrote:
<snip>

>
> My questions were about Gentoo vs. Linux for a server, and the relative
> merits of each. Maybe I should re-state them more clearly:
>
Caveat: I'll answer your specific questions in more detail below. Keep in
mind that I am not using Gentoo, or any of the BSDs. I use Slackware
Linux (most recently, Slackware version 12.)

>
> 1. How does fBSD stack up in security vs. oBSD for my purposes? (and
> also vs. Gentoo?) I will probably eventually have a dedicated firewall
> box (replacing the wireless router) between the cable modem and the
> server, like this
>
Sorry, I can't advise you because I don't use any of the BSDs. The BSD
firewall code is well regarded, but there are only so many hours in the
day. I came to GNU/Linux from Windows, and its firewall features
(especially IPTables) have been sufficient for my needs. BTW, I learned
almost nothing when using Windows. I only realized there was more to learn
upon switching to GNU/Linux. The guides and howtos at the Linux
Documentation Project are how I got started.

>
> [Cable modem] <=> [Wireless Router] or [dedicated firewall] <=> [Home
> Server] <=> [Wireless Router] <=> [Laptops]
>
> 2. Are there any benefits to having multiple firewalls in place in
> serial...e.g., right now, I have 2 wireless routers in serial; if I
> get a dedicated firewall box, I would have 1 dedicated firewall and 2
> router firewalls in serial (possibly). Or is this just useless?
>

It depends on your design requirements. Multiple firewalls are most often
deployed to isolate more trusted resources from less trusted resources.
For example, if you were to have a file server which is accessible
to users on the internet, then it is probably wise to isolate that server
from the local network. A primary firewall could be used to regulate
overall incoming traffic to a group of servers. Simple servers can
regulate their own traffic. In your case, some entries in your block
diagram might be combined.

I don't know the specifics of your setup, but I think this is an alternate
topology to consider:

+- GNU/Linux Server (Built in firewall)
|
[Cable modem] -+- [Wired/Wireless Router ] <=> Laptops, Workstations, etc.

This assumes that your cable modem is using NAT to hide a local network
on its LAN side (say, 192.168.0.0/24). You should be able to access and
administer your GNU/Linux server through the common connection (network)
provided by the cable modem. In this case, set up the cable modem to do
port forwarding for the specific service ports. From there, the GNU/Linux
server will deal with the specific requests it receives, and can
begin handling them using an iptables ruleset. A useful iptables rule is
to set up rate limiting on ssh, which discourages a lot of casual
attacks.
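As a sketch, rate limiting new ssh connections with iptables might look like this (run as root; the recent-match thresholds are assumptions you would tune for your own setup):

```shell
# Track new connections to port 22, and drop any source address that
# opens more than 3 new connections within 60 seconds.
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
  -m recent --set --name SSH
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
  -m recent --update --seconds 60 --hitcount 4 --name SSH -j DROP
```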

>
> 4. I've since learned that FreeBSD and Gentoo support 64-bit processors
> fine. However, what about the server applications? (This is again
> something I learned nothing about, as I am still using just 32- bit
> processors; I figure my server will be 64-bit).
>

My systems don't have enough memory to worry about whether they are 32 or
64-bit. Some major distributions have versions which have been
precompiled for 64-bit. There is starting to be more interest in 64-bit,
but RAM is a limiting factor for a lot of systems. 32-bit systems have
quite a bit of life remaining, IMO.


>
> 5. One of my foreward-looking concerns is scalability & ability to
> upgrade: possibility to upgrade to numerous HD, multiple CPUs, multiple
> GPUs, RAM, etc. Thus, my concerns here are the limits in Linux and BSD
> on hard-drive space recognizeable, and RAM recognizable.
>

It would be nice. However, almost every new generation of motherboards
requires wholesale replacement of the CPU, memory, motherboard, and
possibly the case and power supply also. If cost is no object, then the
sky is the limit. Personally, cost is an object for me. I try to keep cost
in check. For example, I have just started upgrading the P-III
architecture boards which I have used up until now. The Intel Core 2
architecture offers a 10x to 100x performance boost. I think I have
probably saved a lot of money waiting for a compelling upgrade. I foresee
that the new Core 2 boards that I am rolling out will have a long life,
too.


>
> 6. I'm aware that I could probably create scripts to regularly backup
> certain files, and use BASH shell commands to specify a schedule for
> such; but I was wondering if there were utilities for this. And also,
> streaming of photos or video or music to the TV or stereo system? Is
> that possible?

Making backups is a big topic in itself. It can be easy, or it can be
relatively complex. People tend to roll out solutions which "fit" their
needs. One solution that I often recommend is to take a snapshot using an
external disk, either to the storage on another network computer, or
directly to an external disk (USB 2.0). I see that a lot of people on
these groups recommend Amanda, and similar programs. The goal of these
programs is to provide a self-booting backup set.

I have no experience with streaming. I know that any two computers are
free to communicate over the network. The network apps you want probably
exist. I administer, send/receive files, run applications, etc. all using
ssh.


>
> 7. Finally, the nicest backup features of WHS that I saw were
> incremental backups, and efficient single copy backup. If a file's
> already backed up, it only backs up changes; likewise, storing only 1
> copy of the same file as a backup, if there are multiple copies of that
> file in the home network. Are there utilities in BSD or Linux to do
> this?
>

Incremental backups can be easy or hard, too. In the simplest case, you
only need to compare two file listings and look for differences. Changed
files are selected for backup. Use the backup tool of your choice; I
often use tar.

For more complex cases, where the files being backed up may be open,
more advanced tools are probably required. Newer Windows versions
have the ability to create automatic snapshots in the background, and
somehow track versions for you. The facility most similar to this in
GNU/Linux, AFAIK, is the Linux kernel's device mapper facility.

More info about device mapper...
I did some experiments recently with device mapper snapshots. This
article is a good starting point:
http://linuxgazette.net/114/kapil.html

BTW, a lot of the facilities provided by LVM v2 are really a higher level
interface to device mapper. Using LVM v2 is probably more appropriate
than using device mapper directly. LVM handles the details of setting
up device-mapper's targets. The snapshot-origin target allows a file
to be backed up in a prior fixed state, while it is allowed to be
changing at the same time in another view. However, I noticed significant
performance degradation in my simple tests when using the snapshot-origin
target. A quick google shows that the kernel developers are aware of this
problem and are working on changes to the kernel IO design which will
address the problem and simplify the IO architecture to avoid corner
cases. I think that I noticed the speed problem because I have become
accustomed to speedy file transfers, even when using journalling file
systems, such as XFS, and other device mapper targets, such as dm-crypt.
Also, device mapper's snapshot does not suffer the same performance
degradation as snapshot-origin.
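For what it's worth, the LVM2 route boils down to something like this sketch (the volume group, LV, and mount point names are assumptions; it must be run as root on a box that already uses LVM):

```shell
# Create a 1G copy-on-write snapshot of the "home" logical volume...
lvcreate --size 1G --snapshot --name home_snap /dev/vg0/home
# ...mount the frozen view read-only and back it up while the original
# keeps changing underneath...
mount -o ro /dev/vg0/home_snap /mnt/snap
tar -czf /backup/home-snapshot.tar.gz -C /mnt/snap .
# ...then discard the snapshot.
umount /mnt/snap
lvremove -f /dev/vg0/home_snap
```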

In short, I am impressed with the work which has been done building the
framework on the Linux kernel. Windows may offer similar features, but as
I said earlier, I choose both free (price) and freedom (free to utilize
as I see fit).

--
Douglas Mayne


Mumia W.

2 Sep 2007, 22:56:38
On 09/02/2007 05:38 PM, dh003i wrote:
> The other thing I wanted to know is if I can farm off CPU and GPU
> intensive tasks from my WinXP laptop to a Linux or BSD server?
>

That depends upon the tasks and the applications doing them. Some
applications support distributed computation and load-balancing, and
some do not.

Mumia W.

2 Sep 2007, 22:54:22
On 09/02/2007 05:27 PM, dh003i wrote:
> [...]

> 4. I've since learned that FreeBSD and Gentoo support 64-bit
> processors fine. However, what about the server applications? (This is
> again something I learned nothing about, as I am still using just 32-
> bit processors; I figure my server will be 64-bit).
>

But the O/S doesn't have to be. AMD64 processors run 32-bit code just
fine. The fact that you have a 64-bit processor does not mean that you
have to get a 64-bit operating system. The advantage of running a 32-bit O/S
is that you get access to 32-bit, closed-source applications for which
no 64-bit version has been made available. Also, I suspect 32-bit apps
should require less memory than 64-bit apps.

> 5. One of my foreward-looking concerns is scalability & ability to
> upgrade: possibility to upgrade to numerous HD, multiple CPUs,
> multiple GPUs, RAM, etc. Thus, my concerns here are the limits in
> Linux and BSD on hard-drive space recognizeable, and RAM
> recognizable.
>
> 6. I'm aware that I could probably create scripts to regularly backup
> certain files, and use BASH shell commands to specify a schedule for
> such; but I was wondering if there were utilities for this. And also,
> streaming of photos or video or music to the TV or stereo system? Is
> that possible?
>

There are many backup programs for Linux. If you install one of the
popular Linux distributions such as Debian, Ubuntu or Fedora, you'll get
to try out many of the backup packages. On Debian, I see afbackup,
amanda, backup-manager, backup2l, bacula, faubackup, flexbackup,
ibackup, rdiff-backup, openafs, cvs, and at least ten others.

> 7. Finally, the nicest backup features of WHS that I saw were
> incremental backups, and efficient single copy backup. If a file's
> already backed up, it only backs up changes; likewise, storing only 1
> copy of the same file as a backup, if there are multiple copies of
> that file in the home network. Are there utilities in BSD or Linux to
> do this?
>

Faubackup, rdiff-backup, and cvs do this, I know; I haven't checked the
others.

Unfortunately you chose the most difficult Linux distribution to install
and configure (Gentoo). Binary distributions such as Debian, Fedora or
Slackware would have been much better choices--especially for someone
new to Linux. Anyway, good luck on your search for a new server O/S.

The Natural Philosopher

3 Sep 2007, 05:35:21
I've just built a headless server using latest Debian etch stable i-386
on a new cheapo celeron board. Runs like a dream on 512M RAM. Very
little configuration required and painless install. Only thing that was
a bit weird was putting WEBMIN on it, which for some reason needed to be
FORCED in..complained about depending on stuff that definitely was
there. Probably expecting older libraries or something. Anyway it ran
fine once installed, and makes a pretty decent way of administering MOST
of what you want.

SAMBA took the usual few minutes to lick into shape, and I had to REMOVE
appletalk..not sure why it installed it in the first place!

Only slight issues I have had are with it losing network connections if
I put this Mac to sleep..in order to move between mac and PC I have a
shared email data directory. Works as long as I don't leave e-mail
running on BOTH. :-)

The Natural Philosopher

3 Sep 2007, 05:45:36
dh003i wrote:
>> stymied by a hardware road block. I don't know about the specific problem
>> you were having with the NVidia card, but I do know that computer hardware
>> is fairly easy to come by, especially obsolete hardware. Having a working,
>> but obsolete, graphics card for $15 is sometimes better than a $400
>> "state-of-the art," but non-supported one.
>
> Well, considering I bought the desktop before I got Linux, and bought
> it partly for it's then-great 64MB GeForce 2, I'm not going to get
> another (inferior) processor.

Think carefully. A file server doesn't need a good processor, nor a huge
amount of RAM. What it needs is a fast network connection and super fast
DISKS.


Super CPU ratings are relevant for running a GUI, or computationally
intensive tasks, but not for simply serving up web pages and files.


What you really need is rock solid hardware, and hardware that runs
*cool*, if its a 24x7 server.

Go for a low power chip, and the best disk you can afford, and don't
worry about CPU power.

The Natural Philosopher

3 Sep 2007, 05:40:59
dh003i wrote:
> General Schvantzkoph,
>
> Thanks for your response.
>
> 1. Re why I want to make life so hard...
>
> The reason I'm looking at Gentoo and FreeBSD is because I just want a
> server, and don't need the stuff that was driving me nuts when I tried
> Gentoo for my laptop (i.e., X).

Well, don't install it then. My Debian asked me if I wanted a GUI and I
said 'no thanks'.

> Also, I like the customizability and
> hence possibility for greater performance. I am somewhat of a control
> freak. This isn't for a while until I need this, so I'm doing the
> research now, and will be well-read and ready to start running by the
> time I'm ready to set this up. As I understand it, under Fedora, you
> do not get compiles customized to your specifications, or your CPU
> (which you do get in Gentoo and BSD).
>

Debian is good for all of that.

> I actually setup everything on my laptop in Gentoo, except for X. Now
> that I remember, it was something to do with the ATI 32MB Radeon
> Mobility graphics card being problematic for Linux at the time.
>
> 2. Re RSA authentication for connecting to SSH, as I understand it,
> public GPGP keys are enormous sequences of characters. How would I use
> such to login to SSH? Is that kind of like the USB key boot key thing,
> where the USB key (with your key on it), is required to boot? So I
> stick in a USB key when I need to login remotely?
>
> What if I want to login and be able to view it in a windows folder
> view? Are there ways to do this, while retaining SSH security?
>
> Also, at some point, I may want to use the home server to serve up web-
> pages, e.g., for some of the pictures I have...what do I do then about
> security (obviously, then, that computer has to be accessible by web-
> browsers, not just via ssh)?
>

Use secure Apache (HTTPS) etc., and set up Apache to only allow access
to the places you want it to.

> It seems to me like the option to maintain security would then be to
> have a separate server box for public access from the web at-
> large...but this would duplicate data, right? Or could I set something
> up where there's one box that serves up the web-page, but refers to my
> home server for the data (i.e., picture files)?

The way this server I am working on is set up, it's behind a
firewall, so only access to Apache and secure Apache (ports 80/443) is
allowed, and I have set up iptables to allow me from here to access it -
and only me, on my IP address - for other services.

Apache is FAIRLY safe if you don't make the obvious mistakes.


>
> 3. Re a wireless router for a firewall vs. a dedicated box running a
> minimal BSD or Linux install, what are the advantages / disadvantages
> of each?
>

I'd never use wireless if I wanted security and reliability. But I would
always go for a separate NAT firewall and run as little as possible on
the Linux. In my case the customer router was only able to provide
minimal incoming firewalling, so I did some on the box itself.

The Natural Philosopher

3 Sep 2007, 05:48:14
dh003i wrote:
> The other thing I wanted to know is if I can farm off CPU and GPU
> intensive tasks from my WinXP laptop to a Linux or BSD server?
>
Not easily, no.

Garamond

3 Sep 2007, 05:58:40
Dances With Crows skrev:

> Anyway, OpenBSD is probably the
> most secure of the *BSDs in its default install. It's also complete
> overkill for your purposes, and a much larger PITA to work with.

FreeBSD is as secure as OpenBSD and also very easy to work with, much
easier than Linux.

The Natural Philosopher

3 Sep 2007, 05:57:50
Douglas Mayne wrote:
> On Sun, 02 Sep 2007 22:27:48 +0000, dh003i wrote:

> My systems don't have enough memory to worry about whether they are 32 or
> 64-bit. Some major distributions have versions which have been
> precompiled for 64-bit. There is starting to be more interest in 64-bit,
> but RAM is a limiting factor for a lot of systems. 32-bit systems have
> quite a bit of life remaining, IMO.

I did some research with my server..the processor is 64 bit, but it
would run either 32 or 64 bit kernels.

I chose the 32 bit one because

- many 3rd party apps are not available yet in a stable 64 bit port
- the 32 bit code was smaller.
- in many tests the 64 bit code wasn't any faster
- I wanted rock solid performance, and the 32 bit has been around a long
time.
- in a file and web server, computational speed is almost irrelevant: what
is needed is a fast network and disk. 32/64 bit makes sod all difference.


>> 5. One of my foreward-looking concerns is scalability & ability to
>> upgrade: possibility to upgrade to numerous HD, multiple CPUs, multiple
>> GPUs, RAM, etc. Thus, my concerns here are the limits in Linux and BSD
>> on hard-drive space recognizeable, and RAM recognizable.
>>
> It would be nice. However, almost every new generation of motherboards
> requires wholesale replacement of the CPU, memory, motherboard, and
> possibly the case and power supply also. If cost is no object, then the
> sky is the limit. Personally, cost is an object for me. I try to keep cost
> in check. For example, I have just started upgrading the P-III
> architecture boards which I have used up until now. The Intel Core 2
> architecture offers a 10x to 100x performance boost. I think I have
> probably saved a lot of money waiting for a compelling upgrade. I forsee
> that the new Core 2 boards that I am rolling out will have a long life,
> too.

I have an old pentium here with a few hundred Mbyte RAM that is capable
of serving files as fast as the network can deliver them.


You are a gamer, that takes CPU power. Serving web pages and files does
not.

Start thinking professionally, and gear your hardware not to the sexy
marketing, but to the requirements of the job in hand.


>> 6. I'm aware that I could probably create scripts to regularly backup
>> certain files, and use BASH shell commands to specify a schedule for
>> such; but I was wondering if there were utilities for this. And also,
>> streaming of photos or video or music to the TV or stereo system? Is
>> that possible?
> Making backups is a big topic in itself. It can be easy, or it can be
> relatively complex. People tend to roll out solutions which "fit" their
> needs. One solution that I often recommend is to take a snapshot using an
> external disk, either to the storage on another network computer, or
> directly to an external disk (USB 2.0). I see that a lot of people on
> these groups recommend Amanda, and similar programs. The goal of these
> programs is to provide a self-booting backup set.
>

The biggest problem with backups is that the backup media size is now
well outstripped by the disk size. You are better off with a twin
machine or twin disk setup than hoping to burn a tape once a day as we
used to do.

I'd be interested in strategies too.

The Natural Philosopher

3 Sep 2007, 06:00:43

There are a few differences in the installation to tune the system for
server use.

More swap disk maybe. Slightly more severe partitioning. But that's
about it really. Apart from that - with Debian just select 'server
installation' and it makes intelligent guesses anyway!

Message deleted

Balwinder S Dheeman

3 Sep 2007, 07:12:43

IMHO, the FreeBSD, NetBSD and/or OpenBSD kind of meta distributions --
like Gentoo -- will eat up a lot of your machine's time and resources to
install and/or upgrade software. That's what I, for one, want to convey
as an opinion from the vast experience I have as a Unix sysadmin.

I bet none of the others can beat Ubuntu and/or Debian, whether for a
server or a desktop; so you would do better to go for a good binary
distribution of Linux, which is easy to maintain, update, and upgrade.

--
Dr Balwinder S "bsd" Dheeman Registered Linux User: #229709
Anu'z Linux@HOME Machines: #168573, 170593, 259192
Chandigarh, UT, 160062, India Gentoo, Fedora, Debian/FreeBSD/XP
Home: http://cto.homelinux.net/~bsd/ Visit: http://counter.li.org/

Message deleted

Johan Lindquist

3 Sep 2007, 07:35:33
So anyway, it was like, 13:12 CEST Sep 03 2007, you know? Oh, and, yeah,
Balwinder S Dheeman was all like, "Dude,

> On 09/03/2007 03:28 PM, Garamond wrote:

>> FreeBSD is as secure as OpenBSD and also very easy to work with,
>> much easier then Linux.

I've seen nothing in FreeBSD that makes it an easier system to work
with compared to the user friendly Linux distributions. I'm sure there
are Linux distributions that are more difficult than *BSD, tho.

> IMHO, the FreeBSD, NetBSD and, or OpenBSD kind of meta distributions
> -- like Gentoo -- will eat out a lot of your machines time and
> resources to install and, or upgrade software. That's what, I for
> one, want to convey as an opinion from a vast experience I have as a
> Unix sysadmin.

All of them, as far as I know, can use binary packages, so as far as
time and resources go it's not really different. Gentoo might not be
at its best when doing it that way, but for OpenBSD at least, I think
the recommendation is to use binary packages instead of compiling
ports yourself.

> I bet, none of the others can beat Ubuntu and, or Debian, no matter
> be it a server and, or desktop; so better you go for a good binary
> distribution of Linux, which is easy to maintain, update and, or
> upgrade.

This I will totally agree with.

--
Time flies like an arrow, fruit flies like a banana. Perth ---> *
13:29:57 up 5 days, 3:31, 1 user, load average: 0.09, 0.07, 0.01
Linux 2.6.22.5 x86_64 GNU/Linux Registered Linux user #261729

Douglas Mayne

3 Sep 2007, 11:02:51
On Mon, 03 Sep 2007 10:57:50 +0100, The Natural Philosopher wrote:

> Douglas Mayne wrote:
>> On Sun, 02 Sep 2007 22:27:48 +0000, dh003i wrote:
>
>> My systems don't have enough memory to worry about whether they are 32 or
>> 64-bit. Some major distributions have versions which have been
>> precompiled for 64-bit. There is starting to be more interest in 64-bit,
>> but RAM is a limiting factor for a lot of systems. 32-bit systems have
>> quite a bit of life remaining, IMO.
>
> I did some research with my server..the processor is 64 bit, but it
> would run either 32 or 64 bit kernels.
>
> I chose the 32 bit one because
>
> - many 3rd party apps are not available yet in a stable 64 bit port
> - the 32 bit code was smaller.
> - in many tests the 64 bit code wasn't any faster
> - I wanted rock solid performance,and the 32 bit has been around a long
> time.
> - in a file and web server, computaional speed is almost irrelevant: wah
> is needed is a fast network and disk. 32/64 bit makes sod all difference.
>

Thanks for this info. If I understand the memory model correctly, the
computer can have more memory than 4G, but only 4G can be paged in at
one time. This limits the amount of memory the kernel can give each
process to be at most* 2^32 bytes. (*Memory available to each process will
be less than that because the kernel uses some of the address space for
its own processes, probably along some division line (say, 3G user + 1G
kernel).) The advantage of 64-bit is its larger address space, which will
not place a 4G limit on any one process. The 64-bit address space is so
large that it is probably not possilbe to _ever_ build a computer which
has consumed its entire address space. Correct?


>
>
>
>>> 5. One of my foreward-looking concerns is scalability & ability to
>>> upgrade: possibility to upgrade to numerous HD, multiple CPUs,
>>> multiple GPUs, RAM, etc. Thus, my concerns here are the limits in
>>> Linux and BSD on hard-drive space recognizeable, and RAM recognizable.
>>>
>> It would be nice. However, almost every new generation of motherboards
>> requires wholesale replacement of the CPU, memory, motherboard, and
>> possibly the case and power supply also. If cost is no object, then the
>> sky is the limit. Personally, cost is an object for me. I try to keep
>> cost in check. For example, I have just started upgrading the P-III
>> architecture boards which I have used up until now. The Intel Core 2
>> architecture offers a 10x to 100x performance boost. I think I have
>> probably saved a lot of money waiting for a compelling upgrade. I
>> forsee that the new Core 2 boards that I am rolling out will have a
>> long life, too.
>
> I have an old pentium here with a few hundred Mbyte RAM that is capable
> of serving files as fast as the network can deliver them.
>
>
> You are a gamer, that takes CPU power. Serving web pages and files does
> not.
>
> Start thinking professionally, and gear your hardware not to the sexy
> marketing, but to the requirements of the job in hand.
>

It's hard not to get caught up in the blitz. Some of the current
generation motherboards/CPUs provide a very big bang for the buck. We
are literally awash in CPU power- the current generation of boards is
probably overkill for a lot of the uses they will be put to. Still, a
new board may be the best choice in the long run, even if it is
overpowered, because it may have a long service life. That said, I
also believe upgrades should be rolled out only as necessary.

Actually, this remains my current upgrade strategy:
1. Workstations are upgraded to Core 2 Duo (E6600 CPU, typ), if the user
needs the power. The typical user is doing complex spreadsheet/database
and mathematical work. They have said the upgrade provides a boost of
between 10x to 100x when compared to their old computer.

2. Motherboard, CPU, and memory from old workstations are P-III class. The
motherboard will accept 1.5G RAM. Some workstations had the full amount
installed. These are being inspected, and used to upgrade some office
servers which could benefit from newer hardware and are currently using
even more obsolete hardware than that. These boards are also appropriate
for use as clones, see below.

BTW, this strategy is probably only appropriate for a small office. Larger
offices are less interested in recycling their equipment in this manner,
for a variety of reasons. Probably, it is mostly due to the fact they
like working with "heavy iron." ;-) They don't like to "piss around" on
the small stuff.


>
>>> 6. I'm aware that I could probably create scripts to regularly backup
>>> certain files, and use BASH shell commands to specify a schedule for
>>> such; but I was wondering if there were utilities for this. And also,
>>> streaming of photos or video or music to the TV or stereo system? Is
>>> that possible?
>> Making backups is a big topic in itself. It can be easy, or it can be
>> relatively complex. People tend to roll out solutions which "fit" their
>> needs. One solution that I often recommend is to take a snapshot using
>> an external disk, either to the storage on another network computer, or
>> directly to an external disk (USB 2.0). I see that a lot of people on
>> these groups recommend Amanda, and similar programs. The goal of these
>> programs is to provide a self-booting backup set.
>>
>>
> The biggest problem with backups is that the backup media size is now
> well outstripped by the disk size. You are better off with a twin
> machine or twin disk setup than hoping to burn a tape once a day as we
> used to do.
>
> I'd be interested i strategies too.
>

I have started using a poor-man's hot-swap on some critical systems; it's
a "warm" swap, I guess ;-). The goal is to provide a quicker recovery than
would be possible if restoring the server from absolute scratch. I
estimate this idea could save 2-4 hours of time restoring the server under
some failure modes. It only relies on one fact: hardware is cheap. In the
past, this assumption was not true. However, Moore's law has definitely
caught up. In my setup, I can afford to have another similar system
sitting beside critical servers. These warm swap servers are initially
setup by copying the source server (cloning). After the initial copy, some
minor tweaks are required to make the server and its clone non-identical
(unique IP address, etc.). That step is to ensure that there will be no
network conflicts when they are booted at the same time (next step.) Most
of the time the warm swap computer is turned off; it is only turned on
once every 2-4 weeks. When the clone is booted, a simple restore operation
is performed to bring the served data up to date. Also, OS patches which
have been issued in the interim may be applied as necessary, but this
technique is primarily a method for quickly snapshotting the _data
partition_ which is being served.

The clone is brought up to date in two steps:
1. The data from relative backups which have been made since the
last sync are restored to the clone.
2. Files are closed on the source server, then the clone begins an rsync
operation against it. Because of step 1, this step should complete quickly
and limit the amount of time which the server is effectively offline to
users. Files must be closed to ensure a clean, consistent backup. This
step should complete in a few minutes.

In the event the clone is needed due to a failure, it can be brought
online quickly. It will be powered on, restored (as above) using the most
recent snapshot, and will assume the identity (IP, name) of its source
server at boot.

BTW, there is still potential for data loss using this method. If this
potential is not acceptable, then use a true failover cluster (or
something similar.)

BTW, this method only _supplements_ other backup tools that I
use. It obviously does not protect against failure modes where both
the server and its clone are destroyed. In that case, restoration of the
backup set will require more time and rely on other backups being
available. I use external USB 2.0 hard drives for offsite backups. These
are a good bang for the buck, too. Their capacity is approaching 1TB per
disk. I did a rough calculation recently which compared LTO tape vs. USB
HD at current price levels. IIRC, tape was cheaper if your backup set was
larger than 40TB. I have less data than that.

--
Douglas Mayne

Jean-David Beyer

3 Sep 2007, 11:33:00
With my Red Hat Enterprise Linux 5 2.6.18-8.1.8.el5PAE kernel each user
process can have almost 4GBytes RAM (far more than 3GBytes), and the kernel
can also get 4GBytes at one time, but more as you will see in a minute.
Provided you have the right 32-bit processors (I have 32-bit Intel Xeon
processors) and the right chip set (I have Intel E7501 chip set) and enough
memory (I have 8GBytes).

Notice the top lines of top:

top - 11:20:09 up 25 days, 14:42, 4 users, load average: 5.37, 5.30, 5.13
Tasks: 186 total, 6 running, 179 sleeping, 1 stopped, 0 zombie
Cpu0 : 11.1%us, 3.5%sy, 71.9%ni, 9.6%id, 1.5%wa, 0.0%hi, 2.3%si, 0.0%st
Cpu1 : 8.3%us, 2.8%sy, 73.8%ni, 14.1%id, 0.8%wa, 0.3%hi, 0.0%si, 0.0%st
Cpu2 : 4.5%us, 6.0%sy, 79.6%ni, 8.6%id, 1.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu3 : 8.1%us, 4.0%sy, 82.6%ni, 3.8%id, 1.3%wa, 0.0%hi, 0.3%si, 0.0%st
Mem: 8185340k total, 7203956k used, 981384k free, 249396k buffers
Swap: 4096496k total, 1420k used, 4095076k free, 5929768k cached

The kernel gets almost 6GBytes for the cache and 0.25GBytes for buffers.
This is possible because, while the processors have only 32 bits of
address space, the chip set allows up to 36 bits of physical address
space, so the hardware can take up to 16GBytes of RAM if the motherboard
supports it (mine does) and your finances permit.

It is true that any one process can get only 4GBytes of virtual address
space, but so far that has not been a problem for me. I am running some
PostgreSQL stuff and it is working quite well with only 1GByte or so for
its biggest process:

  PID USER     S  VIRT  RES  SHR %MEM %CPU     TIME+ P COMMAND
 1718 postgres D 1003m  73m  72m  0.9   14  10:48.32 0 postgres: jdbeyer stock [local] COMMIT
26210 postgres S 1001m  16m  15m  0.2    0 204:55.16 0 /usr/bin/postmaster -p 5432 -D /srv/dbms/data
26212 postgres S  8152  944  552  0.0    0   0:00.78 3 postgres: logger process
26214 postgres S 1002m 983m 983m 12.3    0   0:43.67 1 postgres: writer process
26215 postgres S  9152 1692  300  0.0    0   8:59.23 0 postgres: stats buffer process

I have given the postgres writer process more memory, and it will take it,
but it does not go much faster. Even though the database is split over six
10,000 RPM SCSI hard drives, it is still limited by how fast the drives
can seek.

--
.~. Jean-David Beyer Registered Linux User 85642.
/V\ PGP-Key: 9A2FC99A Registered Machine 241939.
/( )\ Shrewsbury, New Jersey http://counter.li.org
^^-^^ 11:15:01 up 25 days, 14:37, 3 users, load average: 5.34, 5.29, 5.09

Gregory Shearman

Sep 3, 2007, 19:21:24
Balwinder S Dheeman wrote:

> IMHO, the FreeBSD, NetBSD and/or OpenBSD kind of meta-distributions --
> like Gentoo -- will eat up a lot of your machine's time and resources to
> install and/or upgrade software. That's what I, for one, want to convey
> as an opinion from my vast experience as a Unix sysadmin.

Really? I hadn't noticed. Even my old Pentium III (Katmai) with 128MB of
memory and a blistering 450MHz processor can update itself without any
noticeable performance hit (admittedly, it is only a
router/firewall/squid proxy).

--
Regards,

Gregory.
Gentoo Linux - Penguin Power

John Thompson

Sep 3, 2007, 22:40:30
On 2007-09-03, Johan Lindquist <sp...@smilfinken.net> wrote:

> So anyway, it was like, 13:12 CEST Sep 03 2007, you know? Oh, and, yeah,
> Balwinder S Dheeman was all like, "Dude,
>> On 09/03/2007 03:28 PM, Garamond wrote:
>
>>> FreeBSD is as secure as OpenBSD and also very easy to work with,
>>> much easier than Linux.

> I've seen nothing in FreeBSD that makes it an easier system to work
> with compared to the user friendly Linux distributions.

The "ports" collection. It rocks.
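
For anyone who hasn't used it: the ports tree is just a directory
hierarchy of Makefiles, and installing software is a make invocation in
the right directory. The sketch below only prints the steps rather than
running them (the port path is a hypothetical example, and the real
commands need a FreeBSD box):

```shell
# Typical ports workflow on FreeBSD (echoed here rather than executed,
# since these steps only make sense on a FreeBSD system):
port="/usr/ports/www/squid"   # hypothetical example port
echo "cd $port"
echo "make install clean"     # fetch, build, install, remove work files
```

The `install clean` targets handle fetching distfiles, building with
your local options, and tidying up the work directory afterward.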

--

John (jo...@os2.dhs.org)
