Threads in gaming

kimo

Nov 5, 2006, 11:58:25 PM

The approach that Valve finally chose was a combination of the coarse-
and fine-grained methods, with some extra enhancements thrown in. Some
systems were split across multiple cores using coarse threading. Other
tasks, such as VVIS (the calculation of which objects are visible to the
player from their point of view), were split up using fine-grained
threading. Lastly, whenever part of a core was idle, work that could be
precalculated without lagging or adversely affecting the game experience
(such as AI calculations or pathfinding) was queued up to be delivered to
the game engine later.
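
The article doesn't show any code, but the "precalculate now, deliver to the
engine later" part boils down to a plain producer/consumer results queue.
A rough sketch follows; the names (PathResult, ResultQueue) are invented for
illustration and this is not Valve's implementation:

// Workers drop finished, precalculated results here whenever a core would
// otherwise sit idle; the game thread drains the queue once per frame.
#include <deque>
#include <mutex>
#include <utility>
#include <vector>

struct PathResult {
    int entity_id;
    std::vector<int> waypoints;   // stand-in for real pathfinding output
};

class ResultQueue {
    std::mutex m_;
    std::deque<PathResult> done_;
public:
    // Called from worker threads as background jobs complete.
    void push(PathResult r) {
        std::lock_guard<std::mutex> lock(m_);
        done_.push_back(std::move(r));
    }
    // Called by the game thread at a safe point in the frame.
    std::deque<PathResult> drain() {
        std::lock_guard<std::mutex> lock(m_);
        std::deque<PathResult> out;
        out.swap(done_);
        return out;
    }
};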

Valve's approach was the most difficult of all possible methods for
utilizing multiple cores, but if they could pull it off, it would
deliver the maximum possible benefits on systems like Intel's new
quad-core Kentsfield chips.

To deliver this hybrid threading platform, Valve made use of expert
programmers like Tom Leonard, who was writing multithreaded code as
early as 1991 when he worked on C++ development tools for companies
like Zortech and Symantec. Tom walked us through the thought process
behind Valve's new threading model.

More details:
http://arstechnica.com/articles/paedia/cpu/valve-multicore.ars

Chris Thomasson

Nov 6, 2006, 1:23:16 AM

"kimo" <kimocr...@gmail.com> wrote in message
news:1162789105.2...@i42g2000cwa.googlegroups.com...

[...]

> More details:
> http://arstechnica.com/articles/paedia/cpu/valve-multicore.ars


http://arstechnica.com/articles/paedia/cpu/valve-multicore.ars/2
(refer to the last paragraph in the 'The new game engine pipeline'
section...)


Here is a brief and, IMHO, accurate paraphrase of something the author
mentioned that immediately raised a lot of red flags in my mind. It goes
something like this:

... so much multithreading


... special efforts avoid the dreaded deadlocks/pitfalls


... a programmer use lock-free to implement spin-lock


... replace traditional mutex/sema


... spinlock interlocked ... still deadlocks


... implemented read/write lock ...


It sounds to me like they created a spinlock, then created a read/write
mutex... Also, here are three direct quotes that basically explain it all:


-- "The new pipeline featured so much multithreading that special efforts
were needed to avoid the dreaded deadlocks and other pitfalls mentioned
earlier."


-- "The spin loop utilizes a new interlock instruction that is built into
the CPU. However, there were still too many deadlocks."


-- "A read/write lock allowed many threads to read the same bit of memory,
but only one thread to write to it."
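
For reference, what those last two quotes describe is textbook stuff: a spin
loop around an interlocked exchange, and a shared/exclusive (read/write) lock.
Two bare-bones sketches in today's C++ atomics; these illustrate the concepts
only, they are not Valve's code, and every name in them is made up:

// A spin lock built on an interlocked (atomic exchange) instruction.
#include <atomic>

class spin_lock {
    std::atomic_flag locked_ = ATOMIC_FLAG_INIT;
public:
    void lock() {
        // test_and_set compiles down to an interlocked exchange; spin until
        // we observe the flag clear. This busy-wait is what pegs a core.
        while (locked_.test_and_set(std::memory_order_acquire)) { }
    }
    void unlock() {
        locked_.clear(std::memory_order_release);
    }
};

And the read/write lock, which lets many threads read the same bit of memory
but only one write to it:

// Many readers, one writer. std::shared_mutex obviously didn't exist in
// 2006, but it captures exactly what a read/write lock buys you.
#include <shared_mutex>

struct shared_state {
    mutable std::shared_mutex rw_;
    int value = 0;

    int read() const {
        std::shared_lock<std::shared_mutex> r(rw_);    // many at once
        return value;
    }
    void write(int v) {
        std::unique_lock<std::shared_mutex> w(rw_);    // one at a time
        value = v;
    }
};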


I am also guessing that they are using some kind of event loop... Like
futures... IMHO, their implementation is probably not the best way to do
things... Those three quotes could mean that there is at least a possibility
that my assertion holds true...
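
If it really is futures, the shape would be something like the sketch below.
This is pure guesswork on my part, written with std::async/std::future just
to show the pattern (which obviously isn't what they had available in 2006);
compute_path() is an invented stand-in for the real work:

// Kick expensive work off early, collect the result later in the frame,
// but only if it has already finished.
#include <chrono>
#include <future>
#include <vector>

std::vector<int> compute_path(int /*entity_id*/) {
    return {1, 2, 3};   // stand-in for an expensive pathfinding job
}

void frame_example() {
    std::future<std::vector<int>> path =
        std::async(std::launch::async, compute_path, 42);

    // ... the rest of the frame's work runs here ...

    if (path.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
        std::vector<int> waypoints = path.get();
        (void)waypoints;   // hand off to the AI/navigation system
    }
    // If it isn't ready yet, a real engine would keep the future around and
    // check again next frame rather than wait on it.
}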


Need to think some more on this article... Humm...


Joe Seigh

Nov 6, 2006, 7:37:25 AM

Chris Thomasson wrote:
> "kimo" <kimocr...@gmail.com> wrote in message
> news:1162789105.2...@i42g2000cwa.googlegroups.com...
>
> [...]
>
>
>>More details:
>>http://arstechnica.com/articles/paedia/cpu/valve-multicore.ars
>
>
...


>
>
> Here is a brief and, IMHO, accurate paraphrase of something the author
> mentioned that immediately raised a lot of red flags in my mind. It goes
> something like this:
>
...
>
>

> I am also guessing that they are using some kind of event loop... Like
> futures... IMHO, their implementation is probably not the best way to do
> things... Those three quotes could mean that there is at least a possibility
> that my assertion holds true...
>
>
>
>
> Need to think some more on this article... Humm...
>
>

The author of the article doesn't understand multi-threading very well.
The sentence placement is awkward. E.g. placing the term "lock-free
algorithms" near "spin lock".

The use of spin locks means they're probably not using more threads than
cpus and the game threads are running at highest priority so they
don't preempt too much. Using spin locks pretty much guarantees your
cpus are running at 100% whether they're doing any work or not.


The use of a read/write lock means the lock-free code they're using is
probably write-only stuff like lock-free fifo and lifo queues.
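
E.g. a Treiber-style lifo where producers only ever push is about as simple
as lock-free gets. A sketch (illustrative only, not their code; pop and node
reclamation are left out, since that is where the ABA/reclamation pain lives):

// Lock-free LIFO push: one CAS per push, no lock taken by producers.
#include <atomic>

struct node {
    int value;
    node* next;
};

class lock_free_lifo {
    std::atomic<node*> head_{nullptr};
public:
    void push(node* n) {
        n->next = head_.load(std::memory_order_relaxed);
        // On failure the CAS reloads the current head into n->next, so we
        // just retry until our node is linked in.
        while (!head_.compare_exchange_weak(n->next, n,
                                            std::memory_order_release,
                                            std::memory_order_relaxed)) {
        }
    }
};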

Probably most of the work involved breaking up the computations and doing
data dependency analysis on it so it was scheduled in the right order.
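
The bookkeeping for that is the usual ready-list idea: a job only becomes
runnable once every job it depends on has finished. A toy sketch with
invented names, just to show the shape of it:

// Toy dependency scheduling: hand jobs out in an order that respects the
// data dependencies (a topological order).
#include <cstddef>
#include <queue>
#include <vector>

struct job {
    std::vector<std::size_t> dependents;   // jobs that consume this job's output
    std::size_t unmet_deps = 0;            // inputs still pending
};

std::vector<std::size_t> schedule(std::vector<job>& jobs) {
    std::queue<std::size_t> ready;
    for (std::size_t i = 0; i < jobs.size(); ++i)
        if (jobs[i].unmet_deps == 0) ready.push(i);

    std::vector<std::size_t> order;
    while (!ready.empty()) {
        std::size_t j = ready.front(); ready.pop();
        order.push_back(j);                        // safe to run j now
        for (std::size_t d : jobs[j].dependents)   // j's output is available,
            if (--jobs[d].unmet_deps == 0)         // so its consumers may be ready
                ready.push(d);
    }
    return order;
}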


--
Joe Seigh

When you get lemons, you make lemonade.
When you get hardware, you make software.

Chris Thomasson

Nov 6, 2006, 8:16:48 AM

> The author of the article doesn't understand multi-threading very well.

I think I am going to have to agree with you on this one...

> The sentence placement is awkward. E.g. placing the term "lock-free
> algorithms" near "spin lock".

Well, as long as you try real hard not to use the spinlock... Then I guess
prefixing 'lock-free' with the word 'mostly' will do:


http://groups.google.com/group/comp.programming.threads/msg/2f21a151d3916592

http://groups.google.com/group/comp.programming.threads/msg/221d018c41b18c59


;^)


> The use of spin locks means they're probably not using more threads than
> cpus and the game threads are running at highest priority so they
> don't preempt too much. Using spin locks pretty much guarantees your
> cpus are running at 100% whether they're doing any work or not.

The fact that the author mentions frequent deadlocking doesn't help things
either. I guess they have to understand how to build a lock-based
synchronization algorithm before they jump into lock-free ones?

:O


> The use of a read/write lock means the lock-free code they're using is
> probably write-only stuff like lock-free fifo and lifo queues.

I bet they are using the queue algorithm from Michael and Scott. You know,
the one that is infested with atomic operations. E.g., it takes more
atomic-ops/membars than an uncontended mutex to push, and takes an equal
amount of atomic-ops/membars to pop...
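
For what it's worth, the push side of that queue looks roughly like the
sketch below (reconstructed from memory, not their code; nodes are never
freed here, which conveniently dodges the ABA/reclamation problems a real
version has to solve). You can count the interlocked ops yourself:

// Enqueue path of a Michael & Scott style lock-free queue.
#include <atomic>
#include <utility>

template <typename T>
class ms_queue {
    struct node {
        T value{};
        std::atomic<node*> next{nullptr};
    };
    std::atomic<node*> head_;
    std::atomic<node*> tail_;
public:
    ms_queue() {
        node* dummy = new node;    // the algorithm always keeps one dummy node
        head_.store(dummy);
        tail_.store(dummy);
    }
    void push(T v) {
        node* n = new node;
        n->value = std::move(v);
        for (;;) {
            node* tail = tail_.load();
            node* next = tail->next.load();
            if (tail != tail_.load()) continue;      // snapshot went stale, retry
            if (next != nullptr) {
                // Tail is lagging; help swing it forward (an extra CAS).
                tail_.compare_exchange_weak(tail, next);
                continue;
            }
            node* expected = nullptr;
            // CAS #1: link the new node after the current last node.
            if (tail->next.compare_exchange_weak(expected, n)) {
                // CAS #2: swing the tail to the new node.
                tail_.compare_exchange_weak(tail, n);
                return;
            }
        }
    }
};
// Even uncontended, a push costs at least two interlocked CAS operations,
// versus roughly one atomic op to lock and one to unlock an uncontended mutex.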


> Probably most of the work involved breaking up the computations and doing
> data dependency analysis on it so it was scheduled in the right order.

Deadlocks and scheduling don't mix? I have a pretty good idea of how I
would design the synchronization characteristics of a high-end 3D game
engine, but after listening to these guys I want to keep it all top-secret
for now, if they think what they've got is good!


lol....


:^)


kimo

Nov 7, 2006, 10:17:30 PM

Hey, why don't you guys post your excellent analysis at the ARS site -
you might educate the people who read it and the author!


On Nov 6, 5:16 am, "Chris Thomasson" <cris...@comcast.net> wrote:
[...]