"Elite Bastards: A previous interview mentioned using multi-threading
to help speed up area/cell loading for when players move about the
game world. Are the devs using multi-threading in other ways to
facilitate game performance?
Gavin Carter: The game’s code takes advantage of the multithreaded
nature of the Xbox 360 and multithreaded PCs to improve just about
every aspect of the game. The primary function is to improve
framerates by off-loading some work from the main thread to the other
processors. We do a variety of tasks on other threads depending on the
situation – be it sound and music, renderer tasks, physics
calculations, or anything else that could benefit. Loading also gets
spread across hardware threads to aid in load times and provide a more
seamless experience for the player."
For the full interview see:-
http://www.elitebastards.com/page.php?pageid=12316
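The background loading Carter describes can be sketched with nothing more exotic than std::async. This is a toy illustration, not Bethesda's actual code -- the cell/asset names are invented, and it uses modern C++ for brevity (in 2005 this would be Win32 threads or pthreads):

```cpp
#include <future>
#include <vector>

// Hypothetical stand-in for loading one world cell's assets.
// In a real engine this would stream geometry and textures from disk.
std::vector<int> load_cell(int cell_id) {
    std::vector<int> assets;
    for (int i = 0; i < 4; ++i)
        assets.push_back(cell_id * 100 + i);  // fake asset handles
    return assets;
}

// Kick off loads for the cells around the player on worker threads,
// then gather the results; the main thread stays free to render.
std::vector<std::vector<int>> load_neighbouring_cells(const std::vector<int>& ids) {
    std::vector<std::future<std::vector<int>>> jobs;
    for (int id : ids)
        jobs.push_back(std::async(std::launch::async, load_cell, id));
    std::vector<std::vector<int>> loaded;
    for (auto& j : jobs)
        loaded.push_back(j.get());  // blocks only when a cell is needed
    return loaded;
}
```

The point is the shape, not the details: the expensive I/O happens off the main thread, which is exactly the "more seamless experience" being claimed.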
------------------------------------------------------------
Is this the beginning of the end of single-core PCs for
high-performance gaming? Seems as if a dual (or more) core CPU
should be high on the PC gamer's 2006 shopping list. AMD is committing
to the M2 socket/DDR2 for all future processors (single, dual, quad
core) starting next year. Seems as if an M2 motherboard with a
dual-core CPU might be a useful place to start for those who have not
yet made the dual-core leap. Or maybe Intel will have sorted out their
mess and Conroe-based dual-cores will be in quantity production ----
avoid anything with a P4 legacy.........
John Lewis
- Technology early-birds are flying guinea-pigs.
[snip]
> Gavin Carter: The game’s code takes advantage of the multithreaded
> nature of the Xbox 360 and multithreaded PCs to improve just about
> every aspect of the game. The primary function is to improve
> framerates by off-loading some work from the main thread to the other
> processors. We do a variety of tasks on other threads depending on the
> situation – be it sound and music, renderer tasks, physics
> calculations, or anything else that could benefit. Loading also gets
> spread across hardware threads to aid in load times and provide a more
> seamless experience for the player."
Games already do this. For example, when the JA2 code was released,
it showed all the sound processing was done in another thread.
Really nothing new.
Except that the threads are now shared between two parallel processors
with their own caches: no resource contention with two simultaneous
threads, and only half the latency with more. Sharing threads on a
single processor can potentially end up with zero or negative benefit
if the threads are contending for the same resources. And with dual
(or more) independent cores there is none of the Intel HT
limited-resource nonsense, where it is totally unpredictable
whether turning on HT will speed up multithreaded execution of a
specific program or slow it down.
So do expect genuine and significant performance improvement with
threaded execution of games in a multi-core environment, especially
with a second core available to parallel-process those functions that
truly throttle all execution and data movement on a single processor,
such as AI and physics computations. Multithreading in a
multi-core/multi-processor environment may be new to computer gaming,
but it is well embedded in professional applications. Professional
software packages such as, for example, Adobe Premiere take full
advantage of multi-core/multiprocessor hardware, and show speed
improvements of at least 50% on dual-core vs. single-core for
compute-intensive operations such as video encoding or
effects rendering.
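The kind of win being claimed is easy to sketch: split an embarrassingly parallel physics-style loop across two hardware threads, so each core works out of its own cache. A toy illustration with invented function names, in modern C++ for brevity:

```cpp
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Advance a range of particle positions by one timestep (toy "physics").
static void integrate_range(std::vector<double>& pos,
                            const std::vector<double>& vel,
                            double dt, std::size_t begin, std::size_t end) {
    for (std::size_t i = begin; i < end; ++i)
        pos[i] += vel[i] * dt;
}

// On a dual-core machine the two halves genuinely run in parallel,
// each core out of its own cache -- the scenario described above.
void integrate_parallel(std::vector<double>& pos,
                        const std::vector<double>& vel, double dt) {
    std::size_t mid = pos.size() / 2;
    std::thread worker(integrate_range, std::ref(pos), std::cref(vel),
                       dt, std::size_t{0}, mid);
    integrate_range(pos, vel, dt, mid, pos.size());  // main thread takes the rest
    worker.join();
}
```

The two halves touch disjoint elements, so no locking is needed -- which is also why physics loops are among the first things games push onto a second core.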
1) the initial card will be PCI bus. Ugh, no PCI Express at all? I think it's
acceptable for a soundcard or low-speed network card to be PCI, but do we need
yet another PCI card in our PCs?
2) it will have a 40mm fan and may require additional power, because the
chip is being fabbed using an outdated process to cut costs. More noise,
more power.
3) ATI is gunning for GPGPU to have graphics cards do physics calculations.
ATI and NVidia are not about to let an upstart break into their turf.
Therefore, Ageia is, at best, going to be the next 3DFX. Either Microsoft
or some Open Source project is going to pre-empt them with an API, or a more
established third party will step in. Maybe Ageia will have to resign
themselves to writing software implementations.
OTOH, with a dual core, you might not get as big a boost in physics,
but at least you'll know most any game (and even Ageia) is going to be
running a physics engine in software for some time. And I've already got a
CPU fan, so I won't really be adding any more noise to my system (and Cool
'n' Quiet is a very good solution to the noise/heat tradeoff).
I have been wondering why exactly dual core/multi-threaded would be so
hard to program for. Couldn't you just run DSP functions on a separate core
(CPUs can run DSP code)? Most soundcard/physics hardware that's real
or proposed is basically just a DSP. At any rate, I'd imagine middleware
will make this stuff a lot easier.
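For what it's worth, the "DSP on a separate core" idea is straightforward to sketch: hand an audio block to a worker thread and collect the filtered result when the mixer needs it. A toy example (the filter and function names are invented; modern C++ used for brevity):

```cpp
#include <cstddef>
#include <future>
#include <vector>

// Toy DSP kernel: 2-tap moving average, the sort of per-sample work a
// soundcard's DSP would otherwise do.
std::vector<float> smooth(std::vector<float> in) {
    std::vector<float> out(in.size());
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i] = (i == 0) ? in[0] : 0.5f * (in[i] + in[i - 1]);
    return out;
}

// Hand the audio block to the second core and keep going; collect the
// filtered block only when the mixer actually needs it.
std::vector<float> process_audio_async(const std::vector<float>& block) {
    auto job = std::async(std::launch::async, smooth, block);
    // ... main thread would run game logic here ...
    return job.get();
}
```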
No doubt, Bethesda will need at least 2 top-end CPUs on a top-end rig to get
this bloatware running at more than 5 fps. Of course, when you add in the
program & system crashes + restarts/reboots, it will probably really average
out to <1 fps >8^D
--
A killfile is a friend for life.
Replace 'spamfree' with the other word for 'maze' to reply via email.
They need some developer to be the id Software of roleplaying games and
actually create a good engine that can be used for RPGs. Gothic's engine
wasn't too bad; maybe they could work that into something.
I am going to be building a new game machine but not until summer of
next year. There are several developments going on here. It is not only
dual-core but 64-bit dual-core. AMD is ahead right now with good
working dual-core 64-bit processors and Intel is miserably stumbling.
But, AMD is having serious difficulties making profits from their 90
nanometer plants much less moving on.
Intel has 65 nanometer single-core parts going but has had a dickens of a
time with leakage, meaning heat problems. Still, they will be shipping
65 nanometer single-core processors in the first part of next year.
The really strange development was a few weeks ago, there was a
mysterious announcement from Intel that they had solved their leakage
problem, with 45 nanometer plants expected by the middle of next year.
I would wait for these 45 nanometer processors and expect to see them
in dual-core 64-bit with 1GHz FSB towards the middle of next year.
Clock rates are expected to start going up again. While there can be a
lot of discussion if clock rates reflect overall performance, some
types of calculations are totally proportional to the clock rate.
> get cheaper. The Ageia PhysX chip I am having big doubts about, for a
> couple reasons:
It may not catch on for the PC but the Nintendo Revolution is said to
have a physics co-processor.
> I have been wondering why exactly dual core/multi-threaded would be so
> hard to program for. Couldn't you just run DSP functions on a separate core
This really is at the foundation of the issue. Dual-core processors
are just an evolution of multiple-processor motherboards, which have
been around a long time. The primary topic is mainly resource
allocation and management. CPUs on dual or quad processor motherboards
still have to use the same bank of memory and busses, so that ends up
being shared, with all the rules and limitations that sharing requires.
So, even while multiple CPU motherboards have been around for a long
time, only a very few software apps ever went multi-threaded, Photoshop
being one of the major ones and some graphics programs. They didn't
achieve twice the performance with dual processors but could come
close.
Now that dual-core processors are emerging on mainstream PCs, games
are just at the very beginning of starting to use multi-threaded
parallel processing. In fact, the only two that are really doing it,
beyond just simple data buffering and other housekeeping, are Call of
Duty 2 and now Oblivion, if the reports are factual.
Games really getting into multi-threaded parallel processing are just
at the beginning of the learning curve right now. There is no doubt
that developments will ramp up as more experience is developed. As for
DSP, it is similar to graphics accelerator cards. It can be done with
general purpose CPUs but not as efficiently or as fast as dedicated
hardware. Still, there will no doubt be physics and DSP run on the
other core of a dual-core processor, maybe not as fast as a dedicated
application co-processor but probably good enough to see all kinds of
performance improvements. This may take a while though, probably another
year or so.
Hey, I don't care which decent 1st/3rd person engine they use, so long as
the thing doesn't crash every 5 mins & looks half decent. Isn't Ravensoft
doing something in this regard with Doom's engine - some sort of CRPG?
> I am going to be building a new game machine but not until summer of
>next year. There are several developments going on here. It is not only
>dual-core but 64-bit dual-core. AMD is ahead right now with good
>working dual-core 64-bit processors and Intel is miserably stumbling.
>But, AMD is having serious difficulties making profits from their 90
>nanometer plants much less moving on.
>
Really???
Er, which planet are you broadcasting from???? Please look at AMD's
financials from the last quarter/fiscal year, specifically the
Processor Group.
> .... bleh, bleh, bleh, deleted
> This really is at the foundation of the issue. Dual-core processors
>are just an evolution from multiple processor motherboards which have
>been around a long time. The primary topic is mainly about resource
>allocation and management. Dual or quad processor motherboards CPUs
>still have to use the same bank of memory and busses, so that ends up
> being shared, with all the rules and limitations that sharing requires.
Really? You seem to have completely forgotten the all-important
internal independent processor caches in your "dual-core
analysis(??)". It's really horrible that the processor manufacturers
consume all that unnecessary on-chip silicon-area with this
useless cache memory and then have the gall to charge us big
bucks for it.............
>So, even while multiple CPU motherboards have been around for a long
>time, only a very few software apps ever went multi-threaded,
What are you raving about?? Most compute-intensive and/or
data-intensive professional applications are multithreaded and take
full advantage of multiple-processor configurations - silicon design
tools, audio processing tools, video processing tools, math
computation tools, large data-base managers etc.
> Photoshop
>being one of the major ones and some graphics programs. They didn't
>achieve twice the performance with dual processors but could come
>close.
>
Maybe you should do a little background reading before making a fool
of yourself in public............. ??
>Now that dual-core processors are emerging on mainstream PCs, games
>are just at the very beginning of starting to use mult-threaded
>parallel processing. In fact, the only two that are really doing it,
>beyond just simple data buffering and other housekeeping, are Call of
>Duty 2 and now Oblivion, if the reports are factual.
>
> Games really getting into multi-threaded parallel processing are just
>at the beginning of the learning curve right now. There is no doubt
>that developments will ramp up as more experience is developed. As for
>DSP, it is similar to graphics accelerator cards. It can be done with
>general purpose CPUs but not as efficiently or as fast as dedicated
>hardware. Still, there will no doubt be physics and DSP run on the
>other core of a dual-core processor, maybe not as fast as a dedicated
>application co-processor but probably good enough to see all kinds of
>performance improvements. This may take awhile though, probably another
>year or so.
And in the last 2 paragraphs, have you told us anything new, or are
you just regurgitating the contents of various threads (some started
by me...) during the past few months on these newsgroups ?
I believe that's what Epic's Tim Sweeney was talking about in a recent
interview on Anandtech. Writing physics, animation and sound code for a
second core is not that hard, but doing things like AI and scripting he
found to be a waste of time (too complicated), so he's not going to bother
with it.
Obviously, what is needed is more middleware so developers don't have to
figure out all this microcode stuff. They need to be able to focus on
content, not microcoding for processors.
This is complete codswallop.
Writing for multiple threads is a well documented process. There are
interface points between processes and shared data areas which may or
may not, depending on sensitivity, be protected by mutex semaphores.
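A minimal sketch of the pattern just described -- two threads touching a shared data area through mutual exclusion. The struct and function names are invented; std::mutex is used here, though in 2005 this would be CreateMutex or pthread_mutex:

```cpp
#include <mutex>
#include <thread>

// Shared data area of the kind described above: both the game thread
// and the sound thread may touch it, so access goes through a mutex.
struct SharedState {
    std::mutex m;
    long frames_mixed = 0;
};

void mix_frames(SharedState& s, int n) {
    for (int i = 0; i < n; ++i) {
        std::lock_guard<std::mutex> lock(s.m);  // the mutex protection
        ++s.frames_mixed;                       // safe: one thread at a time
    }
}

long run_two_mixers(int n_each) {
    SharedState s;
    std::thread a(mix_frames, std::ref(s), n_each);
    std::thread b(mix_frames, std::ref(s), n_each);
    a.join();
    b.join();
    return s.frames_mixed;  // deterministic because of the mutex
}
```

Well documented, as the poster says -- the hard part is deciding which data actually needs the lock, not writing the lock itself.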
>
> Obviously, what is needed is more middleware so developers don't have to
> figure out all this microcode stuff. They need to be able to focus on
> content, not microcoding for processors.
>
>
What the hell are you talking about? What microcoding? If it's not the
same processor instruction set then it's simply a case of compiling for
the correct instruction set. As it is, multiple cores just take
single-processor threading to the next level, but generally the cores
will share the same instruction set. Multiprocessor/multi-core is not a
new thing outside of the PC world.
>
> This is complete codswallop.
>
codswallop Noun. Nonsense. coffin dodger Noun. An elderly person. Derog.
coffin nail Noun. A cigarette. coggy Noun. See 'croggy'. [NE
Midlands/Northern use] coin it (in) Verb. To make large amounts of money,
to...
http://www.peevish.co.uk/slang/c.htm
Talk about getting educated! Usenet is great :)
McG.
> codswallop Noun. Nonsense. coffin dodger Noun. An elderly person. Derog.
>coffin nail Noun. A cigarette. coggy Noun. See 'croggy'. [NE
>Midlands/Northern use] coin it (in) Verb. To make large amounts of money,
>to...
>http://www.peevish.co.uk/slang/c.htm
This is what the phrase was based on:
http://www.antiquebottles.com/codd/
The drink in the bottle was called (maybe slang) Wallop.
--
Andrew, contact via interpleb.blogspot.com
Help make Usenet a better place: English is read downwards,
please don't top post. Trim replies to quote only relevant text.
Check groups.google.com before asking an obvious question.
Earth, which apparently is not the same planet you are broadcasting
from. Maybe you are from the planet Opteron?
> financials from the last quarter/fiscal year, specifically the
> Processor Group.
The processor group has been slightly profitable for two years now,
after 4 years of heavy losses. That is a lot of loss to make up. And
while their profits are up from last year, the actual net gain is small
and compared to Intel, minuscule. It makes new plant investment
difficult if you have profit gains in the millions but new plants
costing billions. What I am referring to is the present day situation
and what is DEVELOPING (I don't understand why so many people have so
many problems with tenses in time). I didn't say AMD was NOT going to
65nm, just that it is a struggle while Intel is already manufacturing
65nm. Here is an interesting article on the present day situation with
AMD:
http://www.penstarsys.com/editor/company/amd/q3_2005/
AMD Q3 2005
"This Friday Fab 36 officially opens its doors and starts fabricating
65 nm parts. 65 nm is a complex process, and it probably takes around
10 to 12 weeks to complete a wafer. Fab 36 can do around 5000+ wafer
starts a week, and it uses 300 mm wafers. On the current 90 nm process
the Athlon 64 512 KB L2 die is about 84 mm square, the 1 MB L2 model is
112.9 mm square, the X2 with 512 KB of L2 is around 156 mm square, and
the X2 with 1 MB is around 199 mm square. Going to 65 nm with few
changes to the product should result in die sizes of around 59 mm
square, 79 mm square, 109 mm square, and 140 mm square respectively.
On 65 nm the 512 K L2 Athlon 64's will be tiny! Not only that, but
they will most likely be very energy efficient. Heat production will
probably be about the same as the 90 nm version, mainly due to the
smaller contact surface such a chip would have with the heat spreader.
We would also expect to see a jump in overall processor speeds once 65
nm really hits its stride."
However, the recent news that Intel has basically made a breakthrough
in 45nm is actually quite earth-shattering, if true. Basically, Intel
is saying they know they have a lot of problems with their 65nm
fabrication but they are going to ship essentially defective parts
anyway. In the meantime, they have not only solved the leakage/heat
problems but will go down another scale in dimensions and start
increasing clock speeds again. But, this could be Intel marketing. This
recent article probably emphasizes the present day situation, but if
Intel can be believed, this is all going to turn around:
3DS Max and Maya benchmarks on dual dual-core Paxville Xeon versus dual
dual-core Opteron:
http://www.gamepc.com/labs/view_content.asp?id=paxville&page=7
Full article (which caused a huge uproar yesterday and the website was
hacked for awhile afterward):
http://www.gamepc.com/labs/view_content.asp?id=paxville&page=1
> > This really is at the foundation of the issue. Dual-core processors
> >are just an evolution from multiple processor motherboards which have
> >been around a long time. The primary topic is mainly about resource
> >allocation and management. Dual or quad processor motherboards CPUs
> >still have to use the same bank of memory and busses, so that ends up
> >being shared, with all the rules and limitations that sharing requires.
>
> Really? You seem to have completely forgotten the all-important
> internal independent processor caches in your " dual-core
> analysis(??)". It's really horrible that the processor manufacturers
> consume all that unnecessary on-chip silicon-area with this
> useless cache memory and then have the gall to charge us big
> bucks for it.............
Re-read the last sentence of that paragraph of my post. I was referring to
conventional dual or quad processor motherboards, not the new dual-core
processors. Of course, that is the primary improvement of dual-core
processors, to provide local on-die cache to assist in the sharing of
common off-die data and memory resources. You sure do jump to illogical
conclusions based on mis-reading the context of a sentence.
> >So, even while multiple CPU motherboards have been around for a long
> >time, only a very few software apps ever went multi-threaded,
>
> What are you raving about?? Most compute-intensive and/or
> data-intensive professional applications are multithreaded and take
> full advantage of multiple-processor configurations - silicon design
> tools, audio processing tools, video processing tools, math
> computation tools, large data-base managers etc.
MAINSTREAM APPS. Are you saying gaming is not "compute-intensive"? You
are also confusing MULTI-PROCESSING with MULTI-THREADING. This is the
crux of the complexity problem, when people can't even distinguish the
difference between the two, much less effectively program for the
different conditions. That is why MULTI-THREADING is only now emerging
for mainstream applications like games and productivity software.
> > Photoshop
> >being one of the major ones and some graphics programs. They didn't
> >achieve twice the performance with dual processors but could come
> >close.
> >
>
> Maybe you should do a little background reading before making a fool
> of yourself in public............. ??
I USE several MULTI-THREADED applications, not just read about them
which is all you apparently do. If you actually USED multi-threaded
applications, you would know they have menu items to manage the number
of threads used such as 3DS Max and Maya (although these are now going
to merge together somehow). It is going to be interesting to see how
games are going to implement these characteristics. PC gaming is
dictated by minimum configurations, so that is where the difficulty is
going to lie, how to keep armies of poor, cheap bastards like you from
wailing on the Usenet that the game is crap because it hardly runs on
their 300MHz Celeron.
> And in the last 2 paragraphs, have you told us anything new, or are
> you just regurgitating the contents of various threads (some started
> by me...) during the past few months on these newsgroups ?
And what was added by the above, except to diverge from the original
subject, as usual? The subject is about games just STARTING to use
MULTI-THREADING on multiple core processors. If this was so
cut-and-dried as you and Walter make it seem, then why hasn't it been
done before, AGAIN WITH GAMING, NOT HIGH END APPS? Of course, the
retort is that there weren't many multi-processor platforms around
before but it would have seemed some kind of attempts would have been
made, if only for concept demonstrations. Since you seem to be kind of
thick, this main subject of discussion is what is behind recent
comments by Gabe Newell and John Carmack. Those guys have gone from
early youth to middle age programming for many single processor
architectures and that is why they are now making some rather public
groanings about how hard game programming is going to be. They know
MULTI-PROCESSING, they are just not familiar with MULTI-THREADING, and
it is going to be a learning curve. That is all.
Yes. Multithreaded applications are pretty easy to screw up, and difficult
to diagnose.
Then there's this popular perception, even amongst skilled
software people, that somehow "more cores is better". I'm reminded of the
recently announced Azul Systems 24-core Java processor. I searched to see
if I could find any documents regarding their backplane. No. But geeze. 24
cores? How are you going to keep the processors fed? Sounds like stall
city to me.
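One concrete reason they're easy to screw up and hard to diagnose: an unprotected read-modify-write races, and the loss is intermittent, so it survives testing. A sketch of the fix (std::atomic here purely for illustration; the function name is invented):

```cpp
#include <atomic>
#include <thread>

// With a plain `long` counter, the two threads' read-modify-write
// cycles interleave and increments are silently lost -- sometimes.
// That "sometimes" is exactly what makes these bugs hard to diagnose:
// the code passes on one run and fails on the next. std::atomic makes
// each increment indivisible, so the total is always exact.
long count_from_two_threads(int n_each) {
    std::atomic<long> counter{0};
    auto work = [&] { for (int i = 0; i < n_each; ++i) ++counter; };
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    return counter.load();
}
```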
IMO, the main reason that AMD multiprocessor mobos are kicking Intel's
ass at the moment has little to do with the processor, and everything
to do with the NUMA architecture and the HyperTransport links
they built it with...
C//
I don't doubt that Tim Sweeney can't figure it out, it took the idiot years
to get OpenGL or D3D support working properly for his Unreal engines.
> Obviously, what is needed is more middleware so developers don't have to
> figure out all this microcode stuff. They need to be able to focus on
> content, not microcoding for processors.
OpenMP?
olaf
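OpenMP is indeed the kind of middleware-ish answer being asked for: one pragma parallelises a loop, and a compiler without OpenMP simply ignores it, so the same source runs serially with identical results. A minimal sketch (invented function; compile with -fopenmp or /openmp to actually get threads):

```cpp
#include <cstddef>
#include <vector>

// Apply a per-element "effect" across a buffer. With OpenMP enabled the
// iterations are split across cores automatically; without it, the
// pragma is ignored and the loop runs serially -- same output either way.
std::vector<float> brighten(std::vector<float> pixels, float gain) {
    // signed index: OpenMP 2.x work-sharing loops require one
    #pragma omp parallel for
    for (std::ptrdiff_t i = 0;
         i < static_cast<std::ptrdiff_t>(pixels.size()); ++i)
        pixels[i] *= gain;
    return pixels;
}
```

No explicit threads, locks, or joins -- which is the whole pitch: the developer stays focused on the loop body, not the threading.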
Morrowind not a great performer? Morrowind runs smoothly on every rig I've
seen so far; even an old 2.8 runs it no worries with a mid-range card. And
I've _never_ seen it crash, even on said mid-range computer, and my current
computer. So as for crashes, that's probably machine dependent. Like most
games nowadays.
Oblivion is going to be the next step in graphics for its time, just as
Morrowind was. They say from the start that it's going to run slowly, if not
at all, on normal rigs when it's released; they don't hide it. They accept
that their games are ahead of their time in terms of graphics. And as I've
played all the latest 3rd person shooters and not really been impressed by
any of their graphics, I could believe it too.
Ceo-