Being interested in what's for sale I went to the appropriate forum
and found more good stuff.
Then I decided to do a simple search, clicked on search, entered
my parameters and....
WTF is with the excessive security on just doing a search? Both a
question AND image verification? I can see requiring stuff like
that for registration, but to just do a simple search of the forum's
messages? Pleeeeaaaaase.
If I have to do that every time I want to search the forums then
I won't be doing any more searching.
Brian
--
http://www.skywise711.com - Lasers, Seismology, Astronomy, Skepticism
Seismic FAQ: http://www.skywise711.com/SeismicFAQ/SeismicFAQ.html
Quake "predictions": http://www.skywise711.com/quakes/EQDB/index.html
Sed quis custodiet ipsos Custodes?
I believe the point is to discourage excessive server load from fly-by
searching... This is the case for most forums out there... and that is
good that non-registered users can do searches, especially for a forum
that has no financial backup or real income source. After registration
the search feature is nothing more than a couple of clicks and your
parameters... you should try it :P
> I believe the point is to discourage excessive server load from fly-by
> searching... This is the case for most forums out there... and that is
> good that non-registered users can do searches, especially for a forum
> that has no financial backup or real income source. After registration
> the search feature is nothing more than a couple of clicks and your
> parameters... you should try it :P
I've been on the net a long time. This is the first time I've ever
had to jump through hoops on ANY web site just to do a search.
And what? Are there malicious bots running around out there taking
up server bandwidth doing massive amounts of bogus searches? A new
form of DOS attack?
I figured it might disappear after registration, but first impressions,
ya know?
If I can't easily test drive, I'm not likely to buy. I'll see if
it's possible to contact the webmaster without jumping through too
many hoops. For this, some would be expected.
Brian
--
http://www.skywise711.com - Lasers, Seismology, Astronomy, Skepticism
"Skywise" <in...@oblivion.nothing.com> wrote in message
news:13jd52f...@corp.supernews.com...
Yeah, same here :)
> The bulletin board system they use is used by several other groups and they
> all require that if you aren't a user.
I now see it's something different. I thought the software was
phpBB as it looked so much like it.
> Even though you don't think it is necessary, it is. Otherwise
> the developers wouldn't have bothered to implement that feature.
Why? Why is it "necessary" to require the user to do a word AND
image verification to do a search?
> I like it the way it is. It keeps whiners from signing up and
> posting.
Are you responding to the question I asked or the one you
_thought_ I asked? I'm not talking about registration and
posting. I expect security measures for that.
I am talking about using the search function.
And as others have pointed out, there is a way around it. Which
just leads to my question again - "why?"
>
>> Even though you don't think it is necessary, it is. Otherwise
>> the developers wouldn't have bothered to implement that feature.
>
> Why? Why is it "necessary" to require the user to do a word AND
> image verification to do a search?
These "features" are a mess in my opinion. I really dislike them and
would rather get rid of them, but I cannot for a few reasons. Please
prepare for half a dozen catch-22's.
The image verification they use in vBulletin and phpBB is currently
machine-readable; there are quite a few packages out there in use by
spammers that can jump through it with no more than a few spare
processor cycles. There are better image generators out there that make
it marginally more difficult for a computer to read, and I have tried
them, but I found out rather quickly that people couldn't read them.
The word verification part is really neat in my opinion. A bot is
incapable of guessing it and a spammer would have to expend more effort
than it's worth to post a thread to the board that would *hopefully* be
deleted by me as quickly as it was posted.
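Robert's word check is easy to picture. Here's a minimal sketch of the idea in Python (the question, answer, and function name are all made up for illustration, not taken from the actual vBulletin add-on):

```python
# Sketch of a "word verification" (question-and-answer) human check.
# A generic spam bot can often defeat weak image CAPTCHAs, but it
# cannot guess the free-form answer to a site-specific question.

QUESTIONS = {
    "What gas is in a HeNe laser besides helium?": "neon",
}

def passes_word_check(question: str, response: str) -> bool:
    # Normalise the response so " Neon " and "neon" both pass.
    expected = QUESTIONS.get(question)
    return expected is not None and response.strip().lower() == expected
```

The point is the per-site secret: a spammer would have to hand-code an answer for each forum, which is exactly the "more effort than it's worth" barrier Robert describes.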
I do hear what you are saying: "What the hell is the point in keeping
spammers and their bots from accessing the search feature?"
It's not them I am worried about, it's my host. The search feature is
very processor and disk intensive and the load created by even your
average web spider, whose job it is to *click* submit and cache that
page, and every link in that page, and all the links in subsequent
pages... Well, you get the idea. The load is mind boggling, especially
for dynamically generated pages like the output of the search function.
What would be an "identical" page as far as output is concerned to a
human simply is not to these automated systems.
I tried using a robots.txt exclusion for the search page but it was
useless as there is a unique per-user session ID attached to pretty much
every single php file/function in vBulletin. On a side note: I consider
the use of SIDs a developer's attempt at building a better square wheel.
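For reference, the robots.txt attempt Robert mentions would have looked something like this (the path is assumed, not taken from the actual site); the trouble is that with a unique session ID woven into every generated link, crawlers of that era would still wander the dynamic pages through URLs the exclusion never anticipated:

```
# Hypothetical exclusion for a vBulletin forum's expensive pages.
User-agent: *
Disallow: /forums/search.php
```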
I wanted to keep the search function available for everyone yet minimize
the load associated with it. Most forums out there disable the search
function entirely for non-registered users... Please note that I am not
saying the solution I use is ideal or otherwise better than other sites
out there just because I make it available... Just that I actually went
out of my way to make it available, even if its ease of accessibility
is lacking.
>
>
>> I like it the way it is. It keeps whiners from signing up and
>> posting.
I wasn't worried about "whiners" when I enabled these packages, but yes:
I can see your point. The original purpose was for spam-bot reduction
and has worked masterfully. Because of the "word" verification I have
yet to delete a single outright spambot... Now the "Chinese laser
vendors who serve no purpose other than promoting their product, loudly"
are another equally vicious animal entirely. :)
>
> Are you responding to the question I asked or the one you
> _thought_ I asked? I'm not talking about registration and
> posting. I expect security measures for that.
>
> I am talking about using the search function.
>
> And as others have pointed out, there is a way around it. Which
> just leads to my question again - "why?"
As to why, simply: There isn't a better solution, I am limited by what I
can do at my current hosting provider, and dedicated hosting is cost
prohibitive.
>
> Brian
Hope this helps, though I doubt it will given it doesn't even come close
to fixing the problem in question.
Another problem you will run across is that attachments are disabled for
guests of the forum. I did try to remove this limitation; however, the
permissions control system is unwilling to cooperate with me with
regard to guest "accounts".
If you, or anyone, has any more questions and/or suggestions relating to
PL I will be glad to help to the best of my ability.
-Robert
> I wanted to keep the search function available for everyone yet minimize
> the load associated with it. Most forums out there disable the search
> function entirely for non-registered users... Please note that I am not
> saying the solution I use is ideal or otherwise better than other sites
> out there just because I make it available... Just that I actually went
> out of my way to make it available, even if its ease of accessibility
> is lacking.
>
Why not get Google to do it? Works for me.
I use both, but Google (while a tad late to catch up) caches most things
over a week or two old, and if it's caching anyway, let Google do it. They
even provide highlighting of the found terms, and if they cared about
people hitting them hard for that they'd stop it happening, and they have
tolerated it for years. They make (BIG) money, it's part of how Google
works and I doubt they'll change it unless they think it's costing them
for no gain. Other agencies out there are collecting board posts in an
effort to form a common portal, no doubt thinking it will make them an
indispensable part of some profitable operation, so it's something we
should turn to our advantage by using their bandwidth if they can find what
we want through it. They certainly use everyone else's to build up a big
library of what was never theirs to start with. How do I know these things
happen? Found them accidentally through Google. And I can avoid that too,
"site:photonlexicon.com" as one search term in Google will do that
perfectly.
Here in usenetland we sometimes complain about how Google's nature allows
spammers and other nonsensicals to post to waste time, space and bandwidth,
but in this case the same nature works in our favour, so we should use it.
Hello there:
I put some more thought into the guest searching problem and feel that
the "word" check is more than adequate and have removed the "image"
check portion.
Hope this helps some.
-Robert from the "Redundant Nested Redundancy Removal Team" at PL. :)
> Skywise wrote:
>> Robert, I appreciate you taking the time to write that detailed
>> and thoughtful reply. It does much to negate my first impression.
>>
>> Brian
>
> Hello there:
>
> I put some more thought into the guest searching problem and feel that
> the "word" check is more than adequate and have removed the "image"
> check portion.
>
> Hope this helps some.
Sounds like a reasonable compromise.
Now if only I can keep my computer running long enough to check
the site out more.... been dealing with an over heating video
card. I get all sorts of garbage on screen I've not seen the
likes of since my days with 8 bit computers and my playing
around with moving the video segment into executable memory.
I don't know if this will help any but here it goes:
I had a server with on-board video that went nuts once. It simply
started generating more heat, and there was no room for a new
heat sink, and I didn't want to track down another motherboard. Boy do I
hate embedded hardware.
The chip in question was an ATI something-or-other and the server
obviously had no need for the "3d hardware acceleration" junk and the
related fast clock speed. I ended up using some 3rd party modification
software to roll the clock speed back a bunch and I never had a problem
again.
As a side note: I joked with my friends that I managed to save a
glorious 32 cents a year on electricity.
-Robert
>> Sounds like a reasonable compromise.
>>
>> Now if only I can keep my computer running long enough to check
>> the site out more.... been dealing with an over heating video
>> card. I get all sorts of garbage on screen I've not seen the
>> likes of since my days with 8 bit computers and my playing
>> around with moving the video segment into executable memory.
>>
>> Brian
>
> I don't know if this will help any but here it goes:
>
> I had a server with on-board video that went nuts once. It simply
> started generating more heat, and there was no room for a new
> heat sink, and I didn't want to track down another motherboard. Boy do I
> hate embedded hardware.
Mine is an actual PCI card, an ATI Radeon 9600. The fan on it has
been flaky since shortly after I bought it two years ago. It
chatters when it first starts up but eventually settles into nice
quiet proper operation. However, I typically do not turn off my
computer so after two solid years of running I'm sure it's had
enough by now.
I do seem to be getting by at the moment by adding some extra air
flow, disconnecting 3 of the hard drives, and no longer running
Seti@Home, which meant 100% CPU usage all the time.
> The chip in question was an ATI something-or-other and the server
> obviously had no need for the "3d hardware acceleration" junk and the
> related fast clock speed. I ended up using some 3rd party modification
> software to roll the clock speed back a bunch and I never had a problem
> again.
My card came with overclocking software, but I never used it.
Didn't want to burn up the money I spent for a measly extra 2%
frame rate.
> As a side note: I joked with my friends that I managed to save a
> glorious 32 cents a year on electricity.
HAHAHAH!!! cute...
Anyway, I'm using this as an excuse to build a new system. When
done, then I will replace this card with an older one I still
have laying around and reinstall the system and use this machine
for other purposes.
I have exotic cooling in mind for the new system. One of my many
ideas is sticking it in a small fridge.
>
>I have exotic cooling in mind for the new system. One of my many
>ideas is sticking it in a small fridge.
Ooooh, condensation would be the death of that one.. Maybe
20,000 desiccant packets and duct taping it permanently closed would
do the trick. Need to get some of that cooling fluid that they used
with Crays, just dunk the whole thing in a fishtank full of that stuff
with chiller lines run through it.. Yeah, that's the ticket...
d.
> Now if only I can keep my computer running long enough to check
> the site out more.... been dealing with an over heating video
> card. I get all sorts of garbage on screen I've not seen the
> likes of since my days with 8 bit computers and my playing
> around with moving the video segment into executable memory.
>
I got strange video problems on an ITX mainboard that had been running hot
for a year. I replaced all the electrolytics, and the problem vanished.
> I have exotic cooling in mind for the new system. One of my many
> ideas is sticking it in a small fridge.
>
Best idea I know of is a beer chiller; they're very cheap, very stable, and
they pump whatever fluid you need round the system and keep all the heat
exchanging and refrigeration stuff neatly in one place outside the machine,
and they have an enormous cooling capacity, and it's much easier to avoid
the stresses and condensation problems you get with TECs.
Never done any of that though, I don't overclock either, I just get the
best airflow I can from as few fans as possible to avoid turbulence and
noise. Adding one of those 'heat pipe' thingers is a nice idea though; it's
a tiny sealed phase-change device that gives you an effective thermal
conductivity far greater than a copper bar of similar size, so you can duct
a lot of heat out of a tight space with one.
Not done that either, but I will if the need ever arises, it's a really
neat idea.
> Ooooh, condensation would be the death of that one..
Yep. Beer chillers are nice though. They can pump so much heat that you
don't need a large thermal gradient to remove it effectively, so less
condensation. I still like good airflow from a large slow fan better
though, and to rely on efficient conduction from hot sources into that air
flow. And I'm going to get a small heat pipe too, even if all I ever do is
play with it, the idea is so cool.
Somewhere around here I saw a complete water cooling system for a
pc for under $200, and that included two "caps", one for the processor
and one for video. I think it was one of the companies that's in cahoots
with our version of Radio Shack, "The Source" (owned by Panasonic).
If you're interested, I'll dig back through the links for it...
The kit included all the hoses, heat exchanger (kinda made to look
like an external USB drive enclosure, and dead quiet), and even their
"cooling fluid"..
d.
Sounds cool, but I like the heat pipe idea better. The scale those things
can work on is so compact they can duct heat around parts the way small
cables duct electrical power. Still interesting though, if that's a heat
pump and exchanger all in the space of an external drive enclosure.
Actually, my most serious idea so far is this:
A sealed box, perhaps several "outs" but definitely only one
"in". This is to keep a positive pressure on the inside whose
source is the only input.
This input would be filtered to keep out dust. I have something
like this now but really fine dust still makes it in, so may
go to HEPA filters.
My current box actually has negative pressure as there is dust
buildup in all the nooks and crannies, especially around
the DVD drives, etc...
TE cooling of incoming airflow as needed. At first, this might
just be manual control with a pot, but ideally I'd want some
active control: it would only go on as needed, driven by temp
sensors. Also, I'd have ambient and cooled airflow temp
sensors so that I don't have too much of a thermal difference
that could lead to condensation.
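The condensation limit Brian is designing around is just the dew point, so the control logic can be sketched like this (the Magnus coefficients are the commonly used pair; the function names and the 2 C margin are my own invention):

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Magnus approximation for dew point, good for roughly -45..60 C."""
    b, c = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100.0) + b * temp_c / (c + temp_c)
    return c * gamma / (b - gamma)

def safe_to_cool(ambient_c: float, rel_humidity_pct: float,
                 chilled_air_c: float, margin_c: float = 2.0) -> bool:
    """Only run the TE cooler while the chilled intake air stays a
    couple of degrees above the room's dew point."""
    return chilled_air_c > dew_point_c(ambient_c, rel_humidity_pct) + margin_c
```

At 25 C and 50% relative humidity the dew point works out to about 13.9 C, so chilling intake air to 18 C or so leaves a comfortable margin.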
I have experimented with this concept to a degree already. I
rigged up an inlet box with a piece of A/C filter and a medium
sized A/C powered muffin fan. Before the fan is a car heater's
heat exchanger, through which I pumped ice water from an Igloo
lunch bucket. Was quite effective as long as I had the chilled
water. CPU temp dropped as much as 18F. It's all made of
cardboard and styrofoam board taped together. Hey, it's only a
prototype!!!
>I have experimented with this concept to a degree already. I
>rigged up an inlet box with a piece of A/C filter and a medium
>sized A/C powered muffin fan. Before the fan is a car heater's
>heat exchanger, through which I pumped ice water from an Igloo
>lunch bucket. Was quite effective as long as I had the chilled
>water. CPU temp dropped as much as 18F. It's all made of
>cardboard and styrofoam board taped together. Hey, it's only a
>prototype!!!
>
>Brian
Not trying to be snide... What is it that you're trying to accomplish,
being able to overclock, or just get over some environmental
problems? With the low (relatively) cost of mboards/processors
these days, the amount of gizmos required to overclock seems
to outweigh just upgrading to a faster box. Short of needing something
akin to an old Cray, where they had no choice as you can only air
condition so much of a football field..
Anyway, truly just curious if there is some new big gain in
processor speeds that can be attained with an 18F drop. I've never
tried it and don't know what can be done via that route.. I have added
a fan to my ATI AGP video card as it ran hot, and stuck extra fans
on the case to just keep air moving around, other than that I think I
would just spring for a new mboard..
d.
> Not trying to be snide... What is it that you're trying to accomplish,
Lifetime and reliability. I have no interest in overclocking.
My system is normally on 24/7. Partly because I run Seti@Home
nearly all the time (100% CPU load), partly because I am still stuck
on dialup and let my system download at night or while I'm off to
work, and to a lesser degree because I hate waiting for my system to
boot up and get all my software up and running.
The more powerful computers get, the more forced air they need
to stay cool. The more forced air, the more dust that gets in.
I unfortunately live in a dirtier than normal environment. It
bothers me to think that I'm breathing the same stuff that's
collecting on my CPU cooling fins.
Also, I don't run the A/C that much because I end up cooling
the whole damned house when all I want is the room I'm in.
Finally, there's that geeky nerdy cool factor.
BTW, my video went haywire again last night and on a whim I
tried a screen capture and was surprised that it was recorded.
http://www.skywise711.com/misc/badvideo.jpg
>
>BTW, my video went haywire again last night and on a whim I
>tried a screen capture and was surprised that it was recorded.
>
>http://www.skywise711.com/misc/badvideo.jpg
>
>Brian
Well, I'm fairly clueless as to how video cards do their thing..
It looks to me like either a memory or timing problem. Have
you any kind of video diagnostic utilities? I've got a couple
around here somewhere that will run through all the video
memory, etc. and isolate most problems. One of them was
a freebie and actually is the best of the bunch. Try looking
for something from SiSoftware, their software really does
a good job of identifying just about anything you'd want to
know about your pc, and the free version I'm running
covered pretty much everything..
(I have nothing to do with SiSoftware).
d.
So, how do y'all like Photonlexicon? Talk about getting side-tracked/
hijacked...
>>BTW, my video went haywire again last night and on a whim I
>>tried a screen capture and was surprised that it was recorded.
>>
>>http://www.skywise711.com/misc/badvideo.jpg
>>
>>Brian
>
> Well, I'm fairly clueless as to how video cards do their thing..
> It looks to me like either a memory or timing problem.
Screenshots access the frame buffer, so you see whatever is about to be
converted to analog rather than what the host computer thinks is meant to
be seen.
I had that problem on a PCI video card once and chucked it out, thinking
that it was an overheated or static-damaged IC causing it, but since fixing
an ITX mainboard earlier this year I think it was an error in timing caused
by digital noise. So again, I think the thing to do is replace electrolytic
capacitors. Those are likely the first thing to fail with prolonged use and
heat as a cause.
> So again, I think the thing to do is replace electrolytic
> capacitors. Those are likely the first thing to fail with prolonged use and
> heat as a cause.
If the goal is to save the video card....
If I use this as an excuse to build a whole new system (this one is
two years old and was already obsolete when I built it) I'll be
buying a new video card anyway. Then this system would be reduced
to secondary use, and for that I still have a Matrox G400 from the
previous computer which will be adequate for that use.
So from that perspective, there's not much point in the effort,
unless it's to prove or disprove the idea.
But, I am currently downloading the SANDRA software Doug suggested.
I'll report back if it finds anything.
> Lostgallifreyan <no-...@nowhere.net> wrote in
> news:Xns99EB9951290...@140.99.99.130:
>
>> So again, I think the thing to do is replace electrolytic
>> capacitors. Those are likely the first thing to fail with prolonged
>> use and heat as a cause.
>
> If the goal is to save the video card....
>
> If I use this as an excuse to build a whole new system (this one is
> two years old and was already obsolete when I built it) I'll be
> buying a new video card anyway. Then this system would be reduced
> to secondary use, and for that I still have a Matrox G400 from the
> previous computer which will be adequate for that use.
>
> So from that perspective, there's not much point in the effort,
> unless it's to prove or disprove the idea.
>
> But, I am currently downloading the SANDRA software Doug suggested.
> I'll report back if it finds anything.
>
> Brian
Trust me, replacing caps is easier than agonising over the implications if
you don't. For example, there are several new-old-stock mainboards from the
'bad caps' era of recent past. Awesome way to get a new board if replacing
a few caps is all it takes. Main problem is that a lot of people will still
pay too much for those boards, so it's not that easy yet to find a bargain.
>Not trying to be snide... What is it that you're trying to accomplish,
>being able to overclock, or just get over some environmental
>problems? With the low (relatively) cost of mboards/processors
>these days, the amount of gizmos required to overclock seems
>to outweigh just upgrading to a faster box.
My E4300, a $100 or so processor at the time that runs at 1.8 stock, runs
at 3.0 GHz without much special attention or fans. The closest stock
relative, the E6850 (at 3.0 GHz stock but more cache), only came out half
a year later and cost 2.5 times as much.
It takes extremely careful selection of CPU models, in particular, but
overclocking is still well worth doing in some cases. It wasn't
particularly worthwhile on the A64 (X2) or P4 lines, though.
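(A side note on the arithmetic of that overclock: Core 2 multipliers were locked, so the clock scales with the front-side bus. The 9x multiplier and FSB figures below are the commonly quoted E4300 numbers, stated from memory, not from the post above.)

```python
# E4300: locked 9x multiplier, so core clock = multiplier * FSB clock.
multiplier = 9
stock_fsb_mhz = 200        # the "800 MT/s" quad-pumped bus
overclocked_fsb_mhz = 333  # bumped into "1333 MT/s" territory

stock_ghz = multiplier * stock_fsb_mhz / 1000
oc_ghz = multiplier * overclocked_fsb_mhz / 1000
print(stock_ghz, round(oc_ghz, 1))  # 1.8 stock, ~3.0 overclocked
```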
Jasper
>> But, I am currently downloading the SANDRA software Doug suggested.
>> I'll report back if it finds anything.
>Trust me, replacing caps is easier than agonising over the implications if
>you don't.
True for real electronics, less so for PCs. They're so commoditised that
refurbishing couple-year-old models isn't really worth the effort.
Especially given that motherboards are 4 to 6 layer PCBs which aren't
exactly easy to solder/desolder without tearing internal tracks.
>For example, there are several new-old-stock mainboards from the
>'bad caps' era of recent past. Awesome way to get a new board if replacing
>a few caps is all it takes. Main problem is that a lot of people will still
>pay too much for those boards, so it's not that easy yet to find a bargain.
Abit BE6-II and Abit A7V333 are the ones I have lying around, IIRC. Both
with visible secretions from the caps. Yours for the price of shipping
intercontinental, if you want 'em.
Jasper
> On Sat, 17 Nov 2007 22:51:45 GMT, Lostgallifreyan <no-...@nowhere.net>
> wrote:
>>Skywise <in...@oblivion.nothing.com> wrote in
>>news:13jur78...@corp.supernews.com:
>
>>> But, I am currently downloading the SANDRA software Doug suggested.
>>> I'll report back if it finds anything.
>
>>Trust me, replacing caps is easier than agonising over the
>>implications if you don't.
>
> True for real electronics, less so for PCs. They're so commoditised
> that refurbishing couple-year-old models isn't really worth the
> effort. Especially given that motherboards are 4 to 6 layer PCBs which
> aren't exactly easy to solder/desolder without tearing internal
> tracks.
>
PC's not real electronics? :) Interesting concept. It is definitely worth
it when you buy unused ITX mainboards like the Via Epia MII12000. Those are
still on sale, but have known bad caps. It's worth buying those boards for
their small size, high efficiency, high compatibility, and changing the
caps early in their life. Avoiding tearing internal tracks is easy enough,
they're clamped inside the FR4 layers, they're not going to go anywhere,
and unless you pull hard, the through-hole plating isn't either. Just keep
moderate tension on them while heating the pins, till they slide free, then
drill the solder with a bit just matching the new cap's leads. As the hole
is always bigger than the leads, it's impossible to damage a significant
part, if any, of the through-hole lining this way.
The cost and trouble of maintaining these boards is far less than that of
changing the OS extensively. I develop a system over time (with backup
images) so it's very stable, well known, unwanted parts removed, so I can
rely on it and all the stuff added to it since. I choose a board that will
support everything I need to do with a computer, easily, and get a few
spares, that way I don't have to play with the machine the way some people
have their cars up on blocks for half the time. :) Also, the amount of
waste per ten years is close to zero.
I fixed one of mine this way without a hitch. I won't do this till the
other newer ones show signs of need. I have some spare caps, enough for
more than one other board. I rely on at least one of those boards at all
times, so this way I can swap one at any time with no change to the system
running on it.
>>For example, there are several new-old-stock mainboards from the
>>'bad caps' era of recent past. Awesome way to get a new board if
>>replacing a few caps is all it takes. Main problem is that a lot of
>>people will still pay too much for those boards, so it's not that easy
>>yet to find a bargain.
>
> Abit BE6-II and Abit A7V333 are the ones I have lying around, IIRC.
> Both with visible secretions from the caps. Yours for the price of
> shipping intercontinental, if you want 'em.
>
> Jasper
>
Nice offer. I pass though, they're fairly old now, and at least one of them
has lots of jumpers. The one case I have that will take a full sized ATX
board needs as new and fast as I can justify buying, when I get to it, and
getting to it won't be that easy, physically, once it's in, so I need
jumperless settings wherever possible.
> My E4300, a $100 or so processor at the time that runs at 1.8 stock, runs
> at 3.0 GHz without much special attention or fans. The closest stock
> relative, the E6850 (at 3.0 GHz stock but more cache), only came out half
> a year later and cost 2.5 times as much.
>
That's cool but the economy doesn't add up. The timing DOES add up, if you
can't get that speed early enough for your need, but when you consider the
worth of data, it's almost always far more than the $150 (£75) you save
even if it's just lots of repeated downtime in restoring from backups.
It's arguable that the time and effort alone of achieving it is worth more
than the price difference, even in hours spent and valued at a minimum
wage. I wouldn't endanger my data with a significant risk of slipped bits
unless I was running a system entirely (data included) out of ROM (can
hard drives have a hardware read-only lock?). At least that way the boot
can be repeatable unless the machine is almost too shaky to boot at all.
I could justify it in terms of fun and excitement, but not economically.
Same as I like to drive a cooled Rohm laser diode at 300 mA for 200+ mW
output, but I wouldn't be able to justify relying on it enough to sell, for
example. At least not to anyone who wasn't into doing the same thing, and
bought it for that reason.
>>>Trust me, replacing caps is easier than agonising over the
>>>implications if you don't.
>>
>> True for real electronics, less so for PCs. They're so commoditised
>> that refurbishing couple-year-old models isn't really worth the
>> effort. Especially given that motherboards are 4 to 6 layer PCBs which
>> aren't exactly easy to solder/desolder without tearing internal
>> tracks.
>
>PC's not real electronics? :)
Nope. Change too fast, get better too fast, drop in price too fast. Buying
new or young-old tends to be a lot more cost-effective than buying old-old
(the cap thing was 5 years or so ago, now) and fiddling about with it.
>Interesting concept. It is definitely worth
>it when you buy unused ITX mainboards like the Via Epia MII12000. Those are
>still on sale, but have known bad caps. It's worth buying those boards for
>their small size, high efficiency, high compatibility, and changing the
>caps early in their life.
I'd expect a rather significant discount if I had to faff around with them
that much to get them to work properly out of the box. I thought the
M2-1200 was way too new to be afflicted with the bad caps? Or is this a
later batch of bad caps?
>The cost and trouble of maintaining these boards is far less than that of
>changing the OS extensively. I develop a system over time (with backup
>images) so it's very stable, well known, unwanted parts removed, so I can
>rely on it and all the stuff added to it since.
Embedded stuff is much more "real electronics" than it is "commodity-based
PC", though.
>I choose a board that will
>support everything I need to do with a computer, easily, and get a few
>spares, that way I don't have to play with the machine the way some people
>have their cars up on blocks for half the time. :) Also, the amount of
>waste per ten years is close to zero.
So you use that as a main desktop PC? I suspect you're rather atypical in
your usage. Most people would vastly prefer a brand-new PC with a
brand-new Vista to keeping the same Debian-STABLE image for 10 years only
with occasional dist-upgrades.
>> Abit BE6-II and Abit A7V333 are the ones I have lying around, IIRC.
>> Both with visible secretions from the caps. Yours for the price of
>> shipping intercontinental, if you want 'em.
>Nice offer. I pass though, they're fairly old now, and at least one of them
>has lots of jumpers. The one case I have that will take a full sized ATX
>board needs as new and fast as I can justify buying, when I get to it, and
>getting to it won't be that easy, physically, once it's in, so I need
>jumperless settings wherever possible.
More power to ya. I'm past the point of bothering much with them myself --
my current home server is a Celeron 1.4 Tualatin on a Tualatin-compatible
Slocket to an Asus P3B, which could be replaced with either of those
boards, but a) that works well enough, and I know Asuses of that era have
good caps, and Abits don't, so I'd have to change the whole lot, all
several dozen, and b) I've got a complete Athlon XP 2500+/1GB waiting
around to become the new server, which is my old desktop freed up when I
moved to my new Core 2 Duo E4300 1.8 @ 3.0 GHz/2GB.
Jasper
>That's cool but the economy doesn't add up. The timing DOES add up, if you
>can't get that speed early enough for your need, but when you consider the
>worth of data, it's almost always far more than the $150 (£75) you save
>even if it's just lots of repeated downtime in restoring from backups.
What downtime are you talking about, kemosabe? I did *extensive* stability
tests. Even then, an E_TOO_HOT is more likely to result in a system crash
than data corruption. Even on $TOY_OS, let alone a real OS.
But, I repeat, outside of high, unairconned summer it's not gonna happen.
And even then I only need to go back to 2.4G to have rock solid stability,
when the temperature climbs past 40 (which, I need not remind you, is the
kind of temperature that when found in a datacentre results in about an
LD50 of boxes falling over...). If I were willing to sacrifice stability,
I'd be running at somewhere north of 3.5 G.
>It's arguable that the time and effort alone of achieving it is worth more
>than the price difference, even in hours spent and valued at a minimum
>wage. I wouldn't endanger my data with a significant risk of slipped bits
>unless I was running a system entirely (data included) out of ROM (can
>hard drives have a hardware read-only lock?).
Firmware, yes. And said firmware runs on a completely different processor,
the one in the drive. Some drives include one, but you might need to buy
10,000 of them before the maker will actually spec the jumper in.
>At least that way the boot
>can be repeatable unless the machine is almost too shaky to boot at all.
>
>I could justify it in terms of fun and excitement, but not economically.
The E4300, with a near-guaranteed stable 66% overclock (100%, if you go
high-end-cooling), is/was *particularly* good, though. Much like the old
Celeron 300A, the Celeron Coppermine 533/566 (both of which were near
certain stable 50% overclocks), and very damn little in between. Most of
the time, 20 or 30% is all you get even with the slowest processors from a
line, which means you gain relatively little.
>Same as I like to drive a cooled Rohm laser diode at 300 mA for 200+ mW
>output, but I wouldn't be able to justify relying on it enough to sell, for
>example. At least not to anyone who wasn't into doing the same thing, and
>bought it for that reason.
Well, sure. It's an enthusiast's game, not a commercial proposition.
Although I'll note that there *are* occasional stock-overclocked OEM PC
clones out there. Not even all Alienware and outfits like that -- Dell
sells/sold at one point a liquid-cooled, stock-overclocked, dual-dualcore
Core 2 Extreme machine, or something along those lines. Hell, the quad G5
from Apple was liquid-cooled as stock, and while that may not technically
count as overclocked that's mainly because Apple has a significant say in
what the manufacturer prints on their chips. The thermal envelope they
specified to be able to reach those speeds and that processor density was
staggering.
Jasper
> I'd expect a rather significant discount if I had to faff around with
> them that much to get them to work properly out of the box. I thought
> the M2-1200 was way too new to be afflicted with the bad caps? Or is
> this a later batch of bad caps?
>
Second of four wasn't so well discounted, but the last two were. They're
all with bad caps though. GSC I think, well recorded as bad, and on the
first board they started dying within a year at around 40+C. (Confined 1U
rack later fitted with radial fan). Most were fine but I replaced the lot
with low ESR Panasonic caps rated for 105C. Took about an hour and a half,
which is a lot less than most software changes and testing take.
>>The cost and trouble of maintaining these boards is far less than that
>>of changing the OS extensively. I develop a system over time (with
>>backup images) so it's very stable, well known, unwanted parts
>>removed, so I can rely on it and all the stuff added to it since.
>
> Embedded stuff is much more "real electronics" than it is
> "commodity-based PC", though.
>
Well, yes. I still think of all of them as real electronics, it's just that
too many corners are cut with boards punted out on a best-buy basis. The
MII12000's are nice, they cope with decent XviD video, 3D CAD, they won't
handle fast 3D gaming but I don't do any. They're embedded boards capable
of most desktop and workstation sorts of tasks. That's what attracted me to
them. They also have a huge number of possibilities for connecting stuff
without adding anything, so they look cheap as soon as you consider the
cost of cards you don't need.
> So you use that as a main desktop PC? I suspect you're rather atypical
> in your usage. Most people would vastly prefer a brand-new PC with a
> brand-new Vista to keeping the same Debian-STABLE image for 10 years
> only with occasional dist-upgrades.
>
W98 SE. Highly modified though. Reduced to a core install of 70 MB, (38 in
compressed image), then built up with tools and extensions I picked up over
time. 48 bit LBA support in a patch by Rudolph Loew, and various other
things designed to coax it to cope with later stuff most people would buy
WXP for. Getting it to take DirectX v9 was fun, I had to create a minimal
(and removable) cryptographic support patch myself just to convince it that it
was allowed to go in at all, in the absence of Internet Explorer. :) But it
works. Getting Bluetooth composite devices to go is a bitch though, still
not solved this.
In short, getting the customised install I want with WXP, or OpenBSD, will
be harder than persisting with what I know, and represents nulling ten
years of work and replacing with a lot of unknown risks to fill another ten
years easily. Faced with that, it's better to stay, or at least to avoid
throwing it away. Means more CPU and other resources for tasks done,
instead of the OS doing them, anyway, so I'm happy. I do work with WXP at
times, but I don't like it much.
I dare to suggest that in many cases the CPU advantage for tasks gained
this way might be more than if I'd managed to run WXP on an overclocked
system.
A very generous margin for OC'ing is interesting, it makes me wonder why
the maker didn't always exploit it directly somehow. Just as laser diodes
are often enhanced with microlenses to reduce light spill, allowing simple
aspheres to collimate them, I wonder if the makers of CPU's might not stick
very small heat pipes into the core to conduct heat out better than via
solid silver. That could allow basic fans and heatsinks (also with heat
pipes, as is common now) to manage a very effective air-cooled equivalent
to liquid cooling, as heatpipes are able to clamp the two ends of their
thermal system at very close to the same temperature, something that
normally needs large bore water cooling to do.
While I agree that system instability is more common than data corruption,
if that instability feeds damaged data to the slower parts of the system it
can still propagate corruption. Why it does not do so as often, I don't
know, but I'm not experienced enough with overclocking to want to risk it.
The Epias are pretty much embedded stuff, yeah, and thus qualify as 'real
electronics' to an extent that normal PCs don't. They show up *as*
embedded boards quite frequently for tasks that don't require it to be
quite as rugged and require more horsepower than most embedded boards will
deliver.
>> So you use that as a main desktop PC? I suspect you're rather atypical
>> in your usage. Most people would vastly prefer a brand-new PC with a
>> brand-new Vista to keeping the same Debian-STABLE image for 10 years
>> only with occasional dist-upgrades.
>
>W98 SE. Highly modified though. Reduced to a core install of 70 MB, (38 in
>compressed image), then built up with tools and extensions I picked up over
>time. 48 bit LBA support in a patch by Rudolph Loew, and various other
>things designed to coax it to cope with later stuff most people would buy
>WXP for. Getting it to take DirectX v9 was fun, I had to create a minimal
>(and removable) cryptographic support patch myself just to convince it that it
>was allowed to go in at all, in the absence of Internet Explorer. :) But it
>works. Getting Bluetooth composite devices to go is a bitch though, still
>not solved this.
>In short, getting the customised install I want with WXP, or OpenBSD, will
>be harder than persisting with what I know, and represents nulling ten
>years of work and replacing with a lot of unknown risks to fill another ten
>years easily.
It's gonna keep getting harder and harder to cope with new hardware, and
for that matter new software, though. So if you ever run into a new
problem, you're going to have trouble finding ready-made solutions. There
are also file-compatibility issues. And you won't be playing HD video
through it, or a number of other things. If what you want is an absolutely
dependable thing that ever does only what you wanted back when, it's gonna
work for a while yet. But eventually this approach will break down. 10 or
even 5 years ago, you still heard about people using the internet from
their C64 (even if it was only in jest). Now... not so much.
>Faced with that, it's better to stay, or at least to avoid
>throwing it away. Means more CPU and other resources for tasks done,
>instead of the OS doing them, anyway, so I'm happy. I do work with WXP at
>times, but I don't like it much.
If you're going to get comfortable with something new, I would suggest
going either straight to Vista, or to one of the *nixes. You mention
OpenBSD, which might well fit your style -- they're pretty paranoid about
letting in new code.
I would suggest that you prepare for being ready to move to a newer
platform somewhere around 2015-2020 at the latest, for reasons of physical
failure if nothing else, and it might be a good idea to have a toy machine
around to figure out how you'd work that, while you still have the
dependable one for real work.
>I dare to suggest that in many cases the CPU advantage for tasks gained
>this way might be more than if I'd managed to run WXP on an overclocked
>system.
Even with the overheads for the pretty pictures, depending on the
definition of 'tasks', I suspect that my machine with dual 3.0 GHz Core 2
series CPUs will outperform your single 1.2G Via C3 -- which isn't even as
good at performance-per-clock as the P3, let alone the Core 2.
If you start including the pretty pictures of the app itself, things might
well get iffy though. My WinXP with Word2003, or Vista with Word2007, will
not be significantly faster in user experience than, say, a stripped 98SE
with Word 97. If they could match it at all. But, say, running SuperPI, or
other pure-math calculation stress-tests with very few if any visual
extras, I'll lay long odds that you wouldn't get within a factor of 2.
And I likes my eye-candy, incidentally.
Jasper
>> The E4300, with a near-guaranteed stable 66% overclock (100%, if you
>> go high-end-cooling), is/was *particularly* good, though. Much like
>> the old Celeron 300A, the Celeron Coppermine 533/566 (both of which
>> were near certain stable 50% overclocks), and very damn little in
>> between. Most of the time, 20 or 30% is all you get even with the
>> slowest processors from a line, which means you gain relatively
>> little.
>> Well, sure. It's an enthusiast's game, not a commercial proposition.
>> Although I'll note that there *are* occasional stock-overclocked OEM
>> PC clones out there. Not even all Alienware and outfits like that --
>> Dell sells/sold at one point a liquid-cooled, stock-overclocked,
>> dual-dualcore Core 2 Extreme machine, or something along those lines.
>> Hell, the quad G5 from Apple was liquid-cooled as stock, and while
>> that may not technically count as overclocked that's mainly because
>> Apple has a significant say in what the manufacturer prints on their
>> chips. The thermal envelope they specified to be able to reach those
>> speeds and that processor density was staggering.
>A very generous margin for OC'ing is interesting, it makes me wonder why
>the maker didn't always exploit it directly somehow. Just as laser diodes
Well (please note that since my usual reference source has packed up shop,
all figures and codenames are approximate, but I suspect that they're
actually pretty good), in the case of the Celeron 300A, this was when the
P2 was all the rage. It existed in the .35 micron 'Klamath' core, at 233
and 266 MHz with 66MHz bus speed, and in the .25 micron 'Deschutes' core
at 300, 350, 400, 450 MHz and 100 MHz bus speed.
Both of these were P6 (Pentium Pro) based processor cores, and shipped on
a special PCB containing also 512 kB of cache chips that ran at half the
core speed and with fairly high latency. Cache memory chips that fast were
a very significant cost issue, so especially the higher models were pretty
expensive. I think there may have been slower-than-half cache on the
higher models as well.
This was when Intel first conceived of the 'Celeron' concept, a cheap and
cheerful processor suitable for things like secretary's PCs which didn't
have to have the awesome processor power needed to run something like the
boss' Solitaire, since all they did was run the entire business on them.
Ahem. Anyway.
To make this Celeron processor, codenamed 'Covington', they took a
Deschutes core and put it on a PCB *without* any cache, and sold it at 266
and 300 MHz. This made it sloooooooooow. Like molasses mixed with treacle.
Even then, people were taking these chips and telling the motherboard to
run the FSB at 100 rather than 66 MHz, and they could pretty much all do
it. Just they were still only about as fast as one of the slower P2s.
Then Intel came up with the next one, codenamed 'Mendocino', they took the
same P6 core, stuck 128 kB of on-chip, full-clock-speed cache on it, and
made it on the same .25 micron process as the faster Deschutes Pentium
IIs. They then sold it with a 66 MHz bus speed and 300 MHz core speed, the
famed "300A", as well as some faster ones, and yet again they could
overclock almost all the time to that 450 MHz, without any special
measures. By extension, so could the cores in the P2s, but the cache chips
couldn't overclock much at all so the net result was very little gain.
But a Mendocino at 450 MHz, with 128 kB of full-speed, on-chip cache, was
pretty much just as fast as a P2 Deschutes with 512 kB of slower
half-speed cache, one or the other wins depending on app. That meant that
the *cheapest* processor in Intel's stable could be made, with a simple
adjustment, to be exactly as fast as the *most expensive* processor in
Intel's stable at the time.
Similar stories, but slightly less extreme, applied to the Celeron 533/566
at 800/850 and the E4300 half a year ago.
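The arithmetic behind those bus overclocks is just the locked multiplier times the FSB; a quick sketch using the figures quoted above (the 66.67 MHz nominal bus figure is my assumption of the exact value):

```python
# Core clock = FSB x locked multiplier, so overclocking these chips
# meant raising the FSB. Celeron 300A: 4.5x multiplier, 66 -> 100 MHz bus.
def core_mhz(fsb_mhz, multiplier):
    return fsb_mhz * multiplier

print(core_mhz(66.67, 4.5))   # ~300 MHz at stock
print(core_mhz(100, 4.5))     # 450.0 MHz overclocked
```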
>are often enhanced with microlenses to reduce light spill, allowing simple
>aspheres to collimate them, I wonder if the makers of CPU's might not stick
>very small heat pipes into the core to conduct heat out better than via
>solid silver.
Microminiaturised heatpipes for integration into a chip's surface have
been demonstrated in a lab, but aren't yet ready for production, and
that'll take a while yet.
>That could allow basic fans and heatsinks (also with heat
>pipes, as is common now) to manage a very effective air-cooled equivalent
>to liquid cooling, as heatpipes are able to clamp the two ends of their
>thermal system at very close to the same temperature, something that
>normally needs large bore water cooling to do.
But you still need to get rid of that heat, even if you can get it to the
fins easily. Even if you have perfect superconduction right to the fins,
there's *still* going to be, say, 100 Watts pumping into them, which is a
bunch. And knowing CPU manufacturers, they're going to use the headroom to
make a CPU that puts out 200W, rather than to make something run cool and
reliable. At least for the flagship product(s).
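That headroom argument can be put in numbers with the usual thermal-resistance model. The wattages and the 0.25 C/W junction-to-air figure below are illustrative assumptions, not any real cooler's spec:

```python
# Steady-state die temperature from a simple thermal-resistance model:
# T_junction = T_ambient + P * R_theta (junction-to-air, in C/W).
# All figures here are assumed for illustration.
def junction_temp_c(t_ambient_c, power_w, r_theta):
    return t_ambient_c + power_w * r_theta

print(junction_temp_c(25, 100, 0.25))   # 50.0 C: 100 W through a good cooler
print(junction_temp_c(25, 200, 0.25))   # 75.0 C: the "200 W flagship" case
```

Halve the resistance with better heatpipes and, as predicted, the temptation is to spend the gain on wattage rather than on running cooler.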
>While I agree that system instability is more common than data corruption,
>if that instability feeds damaged data to the slower parts of the system it
>can still propagate corruption. Why it does not do so as often, I don't
>know, but I'm not experienced enough with overclocking to want to risk it.
Bits fall over all the time in memory on a modern PC. It's inevitable due
to background radiation and/or cosmic rays and the very small size of
memory cells these days.
They'll do so at a greater rate if you're pushing the memory or CPU beyond
its design specifications, but that's just a matter of degree, not of
occurrence.
And note that typically, you're not overclocking beyond *design* limits
(well, not if you're like me, anyway), but rather beyond the limits to
which a particular chip was *tested*. And not even always that -- after
production, chips are tested for stability at several speeds and 'binned'
as "safe at this speed" or "the other". Traditionally, out of every wafer,
a few percent would reach the maximum speed, a couple more medium, and
most would be low-grade chips and a couple would be discarded as failures.
That corresponds nicely with a pricing model involving a couple of high
priced chips selling, and mostly low-grade cheap ones. Ever since the late
90s or so, chip makers have been taking such careful control of their
production facilities -- more accurate and repeatable equipment, better
cleanrooms, better tracking procedures and bunnysuits, more automation --
that it quite often happens that almost the entire production run goes
into the highest or one-but-highest bins. But you still need to sell lots
of cheap slow stuff, and a couple of very fast CPUs. Easy to do: you just
print a lower number on the fast CPUs.
What this translates to is that to a large extent (but not quite
universally), almost all the chips made on a certain production line, be
they sold as 1.8 or 3+ GHz chips, have pretty much the same maximum speed
capabilities. Core 2 Duo 65 nm (.065 micron) chips pretty much all have a
top speed somewhere in the 3.3 to 3.6 GHz range with a given amount of
cooling. Be they sold as a $120 E4300 or a $999 X6800.
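The binning-then-down-labelling story can be sketched in a few lines; the speed grades, the distribution and the pass criterion below are all made up for illustration, nothing to do with Intel's actual process:

```python
# Speed binning: test each die at descending grades, label it with the
# first grade it passes. On a tight modern process nearly everything lands
# in the top bins; slow SKUs are then filled by printing a lower number on
# fast silicon. All numbers here are illustrative assumptions.
import random

GRADES_MHZ = [3600, 3000, 2400, 1800]   # hypothetical speed bins

def bin_die(max_stable_mhz):
    """Highest grade the die passes, or None if it fails even the lowest."""
    for grade in GRADES_MHZ:
        if max_stable_mhz >= grade:
            return grade
    return None

random.seed(42)
# A well-controlled line: max stable speeds cluster tightly around 3450 MHz.
dies = [random.gauss(3450, 120) for _ in range(1000)]
counts = {}
for d in dies:
    g = bin_die(d)
    counts[g] = counts.get(g, 0) + 1
print(counts)   # almost every die lands in the 3000 or 3600 bin
```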
The biggest variation is between the Conroe and Allendale chips, E4300
being an Allendale (E4xxx) designed with half the cache. Some of the
slower Conroes (E6xxx) were sold with only half the cache enabled,
functionally equivalent to an Allendale, but with Conroe-like slightly
higher overclock potentials.
Presumably those would also have been planned to use up Conroes where
there was an error in one half or the other of the cache, but it looks
like Intel's control of the 65 nm process and resultant yields was too
high for that to happen much.
Jasper
> The Epias are pretty much embedded stuff, yeah, and thus qualify as
> 'real electronics' to an extent that normal PCs don't. They show up
> *as* embedded boards quite frequently for tasks that don't require it
> to be quite as rugged and require more horsepower than most embedded
> boards will deliver.
>
> It's gonna keep getting harder and harder to cope with new hardware,
> and for that matter new software, though. So if you ever run into a
> new problem, you're going to have trouble finding ready-made
> solutions. There are also file-compatibility issues. And you won't be
> playing HD video through it, or a number of other things. If what you
> want is an absolutely dependable thing that ever does only what you
> wanted back when, it's gonna work for a while yet. But eventually this
> approach will break down. 10 or even 5 years ago, you still heard
> about people using the internet from their C64 (even if it was only in
> jest). Now... not so much.
>
True. I just got round to replacing my reliance on old Psion Organiser XP's
with a Dell Axim X50v. :) A big jump. But that's how I like it. It takes a
lot to adapt to something new, and the best things don't happen often
enough to justify me jumping at every change. It takes far more creativity
to make them than I'll ever adapt to on my own; this isn't a reflection of
quality, or lack of it. It's useful though, it means I get to avoid wasting
time on the real blind alleys.
I think that so long as the basic things those Epias will do for me remain
doable, I can stay with them. Just setting one up as a WXP box will get me
past problems that W98 makes intractable. Plenty of good stuff on W98 that
is also intractably awkward on WXP... File compatibility issues are
actually the least of it. :) The real difficulty is always hardware, either
by not being fast enough, or by not having driver support. A good standard
machine usually widens the window of opportunity.
One interesting thing I considered was the Pico ITX boards for a small
portable machine, but in this case I decided to rule it out. Far better and
cheaper to use a recently discontinued high end PDA. I can buy two,
maybe three, for the cost of a Pico ITX board, and I won't have to think
about how to cobble a touch screen to it either..
> If you're going to get comfortable with something new, I would suggest
> going either straight to Vista
Nay, nay and thrice nay! >:) Not a chance. I'll diverge to OpenBSD rather
than take the trouble to make Vista work. There are lots of reports of
difficulty with it, and it will be a long time before it is refined to
anything like speed and efficiency, and it will be enthusiasts, not MS
themselves, who mainly achieve this, same as for WXP.
>, or to one of the *nixes. You mention
> OpenBSD, which might well fit your style -- they're pretty paranoid
> about letting in new code.
>
Yes, maybe too much, not sure. But they do like to make a stable base and
not rip up foundations just because they want to push new stuff for sale. I
have two vaguely conflicting desires, one being to rootle away in the
depths of the system as far as my limited knowledge can take me, the other
is to avoid that entirely and rely on stability I can trust. OpenBSD is
ideal but austere, I found I liked messing with the basic setting up better
than I liked trying to use it generally. It's more fun to be a mechanic
with than to drive. Oddly, M$ have done an excellently small and efficient
job of Windows Mobile 2003 on the X50v. It's looking a bit old now but as a
complete device it's very cool. I've been reading in several places that
most people who make firmware upgrades end up going back, it's that good.
> I would suggest that you prepare for being ready to move to a newer
> platform somewhere around 2015-2020 at the latest, for reasons of
> physical failure if nothing else, and it might be a good idea to have
> a toy machine around to figure out how you'd work that, while you
> still have the dependable one for real work.
>
>>I dare to suggest that in many cases the CPU advantage for tasks
>>gained this way might be more than if I'd managed to run WXP on an
>>overclocked system.
>
> Even with the overheads for the pretty pictures, depending on the
> definition of 'tasks', I suspect that my machine with dual 3.0 GHz
> Core 2 series CPUs will outperform your single 1.2G Via C3 -- which
> isn't even as good at performance-per-clock as the P3, let alone the
> Core 2.
>
> If you start including the pretty pictures of the app itself, things
> might well get iffy though. My WinXP with Word2003, or Vista with
> Word2007, will not be significantly faster in user experience than,
> say, a stripped 98SE with Word 97. If they could match it at all. But,
> say, running SuperPI, or other pure-math calculation stress-tests with
> very few if any visual extras, I'll lay long odds that you wouldn't
> get within a factor of 2.
>
> And I likes my eye-candy, incidentally.
>
>
> Jasper
>
Me too, though I like it simple. The old W95 shell was a work of
brilliance. Just two lines for light/highlight, or shadow/dark shadow,
gave illusory depth cues, allowing customisation with colours alone
to produce extremely clean GUI shells that use our semiconscious visual
processing to enhance them. It's extremely minimal, like the use of only
two speakers to effectively create a strong stereo image. Everything since
then has been less brilliant. MS and many of the Linuxes now include shells
with this trick. In grey and teal it sucks, but with nice glowing buttons
on dark frames it looks as cool as a stage full of the very finest music
gear. :)
By 2020 I'll certainly have moved to other systems, but the odds are I'll
still keep the older ones as they'll be built steadily to do a large number
of things well. The newer materials used to withstand flow soldering and
oven reflows are likely to easily last half a century. There will be
exceptions, but most will prove the rule. The two drives of gaining
efficiency by new design, and prolonging the usefulness so they can survive
aging, will converge. Probably already happening. If I can run a small OS
on new fast tech then I'm happy, I get the best of both, a system I know,
and the power to do that much more with it. I think the software will
outlive the hardware, unless some major world affair prevents the current
speed of hardware development.
The Via C3 isn't that hot, I know, but that's why I chose it. Its
efficiency is awesome, it does a lot considering how little energy it uses.
It should last a very long time. I'll always want something faster for
demanding processes, but most of the time, I'm not asking a lot of it, and
it definitely does enough. It was the first CPU I found that satisfied all
the demands I needed to make, so I no longer needed to go faster, I could
choose low power instead. For the long haul, that is ideal.
I passed over most of the CPU details because I don't know enough to follow
in the specifics, but I agree, the fast on-chip cache is a must. I stayed
well away from those bizarre slot one Intel devices, and went for AMD K6-2.
It wasn't as powerful and it was impossible to overclock, I tried, it
wouldn't take another 50 MHz without causing lockups at boot. It wasn't
hot, it never got hot, it just didn't like it. I think my taste for cool
efficient chips began with that chip, maybe. It ran cooler than a 486. I
think that its ability to do things cool and with reasonable speed had to
do with the on-chip cache. Shortly afterwards, Intel went back to a similar
form.
> Microminiaturised heatpipes for integration into a chip's surface have
> been demonstrated in a lab, but aren't yet ready for production, and
> that'll take a while yet.
>
Will be worth the wait.
>>That could allow basic fans and heatsinks (also with heat
>>pipes, as is common now) to manage a very effective air-cooled
>>equivalent to liquid cooling, as heatpipes are able to clamp the two
>>ends of their thermal system at very close to the same temperature,
>>something that normally needs large bore water cooling to do.
>
> But you still need to get rid of that heat, even if you can get it to
> the fins easily. Even if you have perfect superconduction right to the
> fins, there's *still* going to be, say, 100 Watts pumping into them,
> which is a bunch. And knowing CPU manufacturers, they're going to use
> the headroom to make a CPU that puts out 200W, rather than to make
> something run cool and reliable. At least for the flagship product(s).
>
Doesn't matter though, you can get rid of it. The real danger with strong
hot CPU's is the small area of contact with the top of them. Once you have
a way to couple to a large surface area with a very small temperature
difference between emitter and CPU core, you've got close to a theoretical
ideal. You could use solid diamond and not outdo a good heatpipe. Maybe you
could, but I wouldn't want to spend money trying. :) Nothing wrong with
pushing to use the extra headroom. If they didn't, others would. Same
reason people want to push their laser diodes harder. If the company can do
it, they get extra loot. And if that saves the buyer the effort and can
guarantee performance, the buyer will pay it.
> Bits fall over all the time in memory on a modern PC. It's inevitable
> due to background radiation and/or cosmic rays and the very small size
> of memory cells these days.
>
True, but that's the main limitation of all reduction. So long as the
weakest point isn't too weak it will be ok. I think it will take time to
tell, though fine analysis can allow prediction. Costly, but still cheaper
than waiting for a firm who wants to be first. Same thing seems to plague
single mode laser diodes. No-one can get past 220 mW or so CW with decent
lifetime. There are improvements, but nothing dramatic, it's like the whole
industry needs an entirely new way to do it, or it won't get any further.
> They'll do so at a greater rate if you're pushing the memory or CPU
> beyond its design specifications, but that's just a matter of degree,
> not of occurrence.
>
> And note that typically, you're not overclocking beyond *design*
> limits (well, not if you're like me, anyway), but rather beyond the
> limits to which a particular chip was *tested*. And not even always
> that -- after production, chips are tested for stability at several
> speeds and 'binned' as "safe at this speed" or "the other".
> Traditionally, out of every wafer, a few percent would reach the
> maximum speed, a couple more medium, and most would be low-grade chips
> and a couple would be discarded as failures.
>
Agreed. Testing takes time, and it's faster to spec lower and sell early.
Opnext laser diodes were the same I think. They sold diodes as 80 mW that
were very similar to 50 mW diodes. I think they were the same, but made a
little more accurately, consistently, and with hindsight to tell them what
they could get away with.
> That corresponds nicely with a pricing model involving a couple of
> high priced chips selling, and mostly low-grade cheap ones. Ever since
> the late 90s or so, chip makers have been taking such careful control
> of their production facilities -- more accurate and repeatable
> equipment, better cleanrooms, better tracking procedures and
> bunnysuits, more automation -- that it quite often happens that almost
> the entire production run goes into the highest or one-but-highest
> bins. But you still need to sell lots of cheap slow stuff, and a
> couple of very fast CPUs. Easy to do: you just print a lower number on
> the fast CPUs.
>
Interesting. And believable. I tend to go with consistency if I can find
it, it usually indicates a higher margin for envelope pushing. No logical
reason why that should be unless the midrange parts are as good as higher
rated parts. Still a flaw there though. Sure, drop the price, but why
under-spec? What they lose on high end sales, they more than make up for on
a slightly higher cost of the low end, which people will pay if they know
they're really getting that much more. Lots of buyers, too.
> What this translates to is that to a large extent (but not quite
> universally), almost all the chips made on a certain production line,
> be they sold as 1.8 or 3+ GHz chips, have pretty much the same maximum
> speed capabilities. Core 2 Duo 65 nm (.065 micron) chips pretty much
> all have a top speed somewhere in the 3.3 to 3.6 GHz range with a
> given amount of cooling. Be they sold as a $120 E4300 or a $999 X6800.
>
Ok, but I bet it comes down to endurance. Again, with those laser diodes,
MOST single mode DVD diodes can do 220 mW CW for a while, but the question
is how long for, and will they mode hop frantically while doing it? There
must similarly be more to CPU specs than speed. Error checking takes time,
so a slower device with lower error rates and fewer retries might be faster. Hare
and tortoise...
> The biggest variation is between the Conroe and Allendale chips, E4300
> being an Allendale (E4xxx) designed with half the cache. Some of the
> slower Conroes (E6xxx) were sold with only half the cache enabled,
> functionally equivalent to an Allendale, but with Conroe-like slightly
> higher overclock potentials.
>
> Presumably those would also have been planned to use up Conroes where
> there was an error in one half or the other of the cache, but it looks
> like Intel's control of the 65 nm process and resultant yields was too
> high for that to happen much.
>
> Jasper
>
>
Yes, redundancy planned for and never used might make up a lot of that
margin. Also, if the caches get hot, a half-size cache allows some freedom
for the core to lose more heat.
The details of all the CPU differences confuse me, the main thing I look
for is a good ratio of performance to waste heat. :) If that's good, and
the devices are widely reported to be consistent, then I start feeling
hungry.