
Oh No! Not again! Direct3D vs OpenGL


<- Chameleon ->

Aug 17, 2003, 5:01:04 PM
I have some questions for you; they come from things I have been wondering about.

Right now we have DirectX 9 and OpenGL 1.4.

1. Which is faster, if we use them as well as we can? (OK! OK! It depends on what we are programming, but I ask in general.)

2. Once, many games were written in OpenGL, more than were written in DirectX. Why are more games written for DirectX today?

3. Is there anything that DirectX can do and OpenGL cannot?

4. Which of the two better supports the newest capabilities of the newest video cards?

P.S. I am not a troll!


WTH

Aug 17, 2003, 5:19:34 PM
> 1. Which is faster, if we use them as well as we can? (OK! OK! It depends on what we are programming, but I ask in general.)

There is no longer an 'in general' answer available. In 'general' they are
the same.

> 2. Once, many games were written in OpenGL, more than were written in DirectX. Why are more games written for DirectX today?

There weren't ever many games written in OpenGL, and at the same time there
weren't many games written in DirectX. Now there are games written in
OpenGL, and many written in DirectX. This is 'opinion' territory and a very
subjective topic. IMHO, (slightly) more games (lately) are being written in
DirectX because many developers carried over from the '2D' era of DirectX
hegemony. Many of these developers knew DX6/DX7 from doing RTS games, so they
found it easier to move to D3D than OpenGL when they went to 2D/3D mixes.
That's not necessarily the correct thing to do, but it was most likely a
matter of mental comfort.

> 3. Is there anything that DirectX can do and OpenGL cannot?

Well, sort of and not really at the same time. There are things in DirectX
and OpenGL that are easier to do in DirectX. There are also things in OGL
and DirectX that are (very subjectively) easier to do in OpenGL. You really
need to state what it is you want to do, what your team's experience (or
yours) is, et cetera. Those are the deciding issues generally speaking
(unless you need to run on non-Windows platforms, in which case the choice is
easy-peasy.)

> 4. Which of the two better supports the newest capabilities of the newest video cards?

Well, sadly, DirectX tends to move much faster than OGL in accommodating new
ideas/features; however, the extension architecture exists specifically to
make new features available to OpenGL users as soon as drivers are
published. Technically speaking, OpenGL gets everything first; however, in
reality DirectX incorporates new features and ideas much faster than OpenGL.

Using the extension system is more complicated than relying on DirectX
support (not because of OpenGL but because of hardware vendors screwing each
other over mostly.)

> P.S. I am not a troll!

It is a valid and fair question, but you REALLY need to give people an idea
of what you want to do and what you have that will help you accomplish that
goal. Then you can get insightful advice about which API may or may not suit
you better.

WTH


<- Chameleon ->

Aug 17, 2003, 5:52:07 PM
> > 1. Which is faster, if we use them as well as we can? (OK! OK! It depends on what we are programming, but I ask in general.)
>
> There is no longer an 'in general' answer available. In 'general' they are
> the same.

OK! This is a good answer! (And good news!)

> It is a valid and fair question, but you REALLY need to give people an idea
> of what you want to do and what you have that will help you accomplish that
> goal. Then you can get insightful advice about which API may or may not suit
> you better.

It was only a question about whether OpenGL's growth matches DirectX's growth.
Thanks for your reply


WTH

Aug 17, 2003, 5:55:47 PM
> > It is a valid and fair question, but you REALLY need to give people an idea
> > of what you want to do and what you have that will help you accomplish that
> > goal. Then you can get insightful advice about which API may or may not suit
> > you better.
>
> It was only a question about whether OpenGL's growth matches DirectX's growth.
> Thanks for your reply

It isn't. OpenGL grows (as an API) VERY slowly. DirectX grows (relatively)
quickly. HOWEVER, OpenGL doesn't take the philosophical viewpoint that
every new feature must be incorporated into the API; it takes the
"extensions" philosophy. Meaning, new features can be used in OpenGL
immediately, as long as the hardware maker provides an OpenGL 'extension'
which can be used. The downside is that everybody's extensions are generally
different, and the same feature on two different cards not only requires two
different function calls BUT the requirements of those calls may be
substantially different as well.
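
For illustration, here is a minimal sketch of the usual extension dance on Windows (check the extension string, then fetch the entry point with wglGetProcAddress); the extension named is a real ARB one, but error handling is kept to a bare minimum and a current GL context is assumed:

#include <windows.h>
#include <GL/gl.h>
#include <string.h>

// Function-pointer type for glActiveTextureARB, normally taken from glext.h.
typedef void (APIENTRY *PFNGLACTIVETEXTUREARBPROC)(GLenum texture);
static PFNGLACTIVETEXTUREARBPROC glActiveTextureARB = NULL;

int LoadMultitexture(void)
{
    const char* exts = (const char*)glGetString(GL_EXTENSIONS);
    // 1. Check that the driver advertises the extension at all
    //    (a naive substring test, as was typical at the time).
    if (!exts || !strstr(exts, "GL_ARB_multitexture"))
        return 0;
    // 2. Fetch the entry point; it only exists if the driver exports it.
    glActiveTextureARB =
        (PFNGLACTIVETEXTUREARBPROC)wglGetProcAddress("glActiveTextureARB");
    return glActiveTextureARB != NULL;
}

With a vendor-specific extension the same dance has to happen once per vendor, which is exactly the duplication described above.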

WTH

wogston

Aug 17, 2003, 6:11:20 PM
> 1. Which is faster, if we use them as well as we can? (OK! OK! It depends on what we are programming, but I ask in general.)

Properly used, their performance should be equal, assuming the drivers are of
roughly the same quality for both APIs. OpenGL has semantically more ways to
push vertices to the hardware (see the sketch after this list):

- display lists
- immediate mode
- vertex arrays
- various extensions
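
For illustration, a minimal sketch of two of those paths, drawing the same triangle with standard GL 1.1 calls (window and context setup omitted):

#include <GL/gl.h>

static const GLfloat tri[] = { 0.f, 1.f, 0.f,  -1.f, -1.f, 0.f,  1.f, -1.f, 0.f };

void DrawImmediate(void)
{
    glBegin(GL_TRIANGLES);          // immediate mode: one call per vertex
    glVertex3f( 0.f,  1.f, 0.f);
    glVertex3f(-1.f, -1.f, 0.f);
    glVertex3f( 1.f, -1.f, 0.f);
    glEnd();
}

void DrawWithVertexArray(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);   // vertex arrays: one pointer,
    glVertexPointer(3, GL_FLOAT, 0, tri);   // one draw call
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}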

DirectX 9.0 uses a "stream" model, where streams can come from two primary
sources:

- VertexBuffer
- user supplied memory address

The VertexBuffer(s) can be in video memory, system memory or AGP memory, but
the input semantics and API calls are still the same. The two most commonly
used ones are:

::DrawPrimitive()
::DrawIndexedPrimitive()

and their mutations:

::DrawPrimitiveUP()
::DrawIndexedPrimitiveUP()

... where the UP versions mean User Pointer, i.e. a user-supplied memory
address. The first two use a VertexBuffer object as the stream source.
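
For illustration, a minimal sketch of the two routes; device creation and vertex buffer filling are omitted, and the position-plus-colour vertex layout is just an assumed example:

#include <d3d9.h>

struct Vertex { float x, y, z; DWORD color; };       // matches the FVF below
const DWORD kFVF = D3DFVF_XYZ | D3DFVF_DIFFUSE;

void DrawFromUserPointer(IDirect3DDevice9* dev, const Vertex* verts)
{
    dev->SetFVF(kFVF);
    // "UP" variant: one triangle straight from a user-supplied pointer.
    dev->DrawPrimitiveUP(D3DPT_TRIANGLELIST, 1, verts, sizeof(Vertex));
}

void DrawFromVertexBuffer(IDirect3DDevice9* dev, IDirect3DVertexBuffer9* vb)
{
    dev->SetFVF(kFVF);
    // VertexBuffer variant: bind the buffer to stream 0, then draw.
    dev->SetStreamSource(0, vb, 0, sizeof(Vertex));
    dev->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 1);
}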

I personally feel that the streaming model is more unified, regardless of
where the vertices are stored. That doesn't mean I think it is a nicer model to
program with. I think you should try both, get a basic renderer going on
both, and see which you like better. Performance-wise it is easier to shoot
yourself in the foot with GL, but of course it isn't too difficult to achieve
in DirectX 9.0 either.


> 2. Once, many games were written in OpenGL, more than were written in DirectX. Why are more games written for DirectX today?

I would argue that most *commercially successful* games are not
programmed against EITHER API, as it appears that game publishers don't want to
hear the acronym PC. They want to hear PS2, PS2, PS2... and a little Xbox here and
there, and I don't feel that you would get much advantage from using OpenGL or
an equivalent API on PS2, as it is architecturally very hostile towards
OpenGL -- assuming you want to optimize the last drop of performance out of
the hardware.

If we look at PC games only, then I suppose DirectX is more popular because
developers, or at least publishers, feel that it is, I dunno, less hassle to
support only a single rendering API, and if they have to choose between DirectX
and OpenGL, they feel DX is a safer bet. Just a guess!


> 3. Is there anything that DirectX can do and OpenGL cannot?

DirectX doesn't have extensions; that's the biggest real-world difference.
In practice this just means that DX has a fixed feature set for each version,
and you still have to check the caps to see whether a feature is supported by the hardware or not.

This kind of gives an advantage to OGL, but on the other hand the feature
set is better defined in DX, since there aren't multiple vendor-specific
extensions for the same feature set. For the developer this doesn't make as
dramatic a difference as it sounds, as you still have to implement multiple
render paths unless you want to use only the lowest common denominator for your
baseline system requirements.
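
For illustration, a minimal sketch of what "checking caps" means in Direct3D 9; the device is assumed to exist already, and the two caps bits queried here are only examples:

#include <d3d9.h>

bool SupportsCubeMapsAndTwoSidedStencil(IDirect3DDevice9* dev)
{
    D3DCAPS9 caps;
    if (FAILED(dev->GetDeviceCaps(&caps)))
        return false;
    // Even though these features are part of the DX9 feature set, the
    // hardware still has to advertise them before you use them.
    bool cubeMaps = (caps.TextureCaps & D3DPTEXTURECAPS_CUBEMAP) != 0;
    bool twoSided = (caps.StencilCaps & D3DSTENCILCAPS_TWOSIDED) != 0;
    return cubeMaps && twoSided;
}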


> 4. Which of the two better supports the newest capabilities of the newest video cards?

They leapfrog over each other from one version and extension to the next.
Currently both represent the hardware features well enough. I find shader
development more mature with DX, but this is because I worked with DX more
recently so that's a biased opinion.


> P.S. I am not a troll!

Me neither. ;-)


Andy V

Aug 17, 2003, 6:49:55 PM
WTH wrote:

> It isn't. OpenGL grows (as an API) VERY slowly.

Have you noticed that the OpenGL ARB (architecture review board, the
group in charge of the specification) has committed to producing a new
release of the standard approximately every 12 months? Currently, this
is faster than Direct3D.

--
Andy V

WTH

Aug 17, 2003, 7:30:49 PM
> Have you noticed that the OpenGL ARB (architecture review board, the
> group in charge of the specification) has committed to producing a new
> release of the standard approximately every 12 months? Currently, this
> is faster than Direct3D.

Do you realize the types of changes that occur in these releases? There's a
reason it's "1.4" and not 2.0. Also, how long does it take to actually see
an implementation once this has happened?

OpenGL is MUCH MUCH slower to include functionality into the API. Nobody is
saying this is bad (I don't think it is necessarily bad), it is just the way
it is.

WTH

WTH

Aug 17, 2003, 7:34:37 PM
> I personally feel that the streaming model is more unified, regardless of
> where the vertices are stored. That doesn't mean I think it is a nicer model to
> program with. I think you should try both, get a basic renderer going on
> both, and see which you like better. Performance-wise it is easier to shoot
> yourself in the foot with GL, but of course it isn't too difficult to achieve
> in DirectX 9.0 either.

It is a nicer model (the DirectX model) when dealing with shaders, but in
general, the programmatic differences between D3D and OpenGL are not great.
They are both great tools for 3D.

> I would argue that most *commercially successful* games are not
> programmed against EITHER API, as it appears that game publishers don't want to
> hear the acronym PC. They want to hear PS2, PS2, PS2... and a little Xbox here and
> there, and I don't feel that you would get much advantage from using OpenGL or
> an equivalent API on PS2, as it is architecturally very hostile towards
> OpenGL -- assuming you want to optimize the last drop of performance out of
> the hardware.

True, an engine generally abstracts the rasterization components from the
rendering system.

> If we look at PC games only, then I suppose DirectX is more popular because
> developers, or at least publishers, feel that it is, I dunno, less hassle to
> support only a single rendering API, and if they have to choose between DirectX
> and OpenGL, they feel DX is a safer bet. Just a guess!

I really think it has to do with carry over from the days when the 2D stuff
was all DirectX (sadly.)

> DirectX doesn't have extensions; that's the biggest real-world difference.
> In practice this just means that DX has a fixed feature set for each version,
> and you still have to check the caps to see whether a feature is supported by the hardware or not.
>
> This kind of gives an advantage to OGL, but on the other hand the feature
> set is better defined in DX, since there aren't multiple vendor-specific
> extensions for the same feature set. For the developer this doesn't make as
> dramatic a difference as it sounds, as you still have to implement multiple
> render paths unless you want to use only the lowest common denominator for your
> baseline system requirements.

Well put.

bunny

Aug 18, 2003, 1:24:33 AM
<- Chameleon -> wrote:
> I have some questions to us:
> These questions are my apprehension's products

These are my observations from writing applications which use both. I
would like to add that I am certainly no expert with either!

>
> We have this time DirectX 9 and OpenGL 1.4
>
> 1. Which is faster, if we use them as well as we can? (OK! OK! It depends on what we are programming, but I ask in general.)


Both are of comparable speed, I have found. Provided you use the features
of both that, um, marry up, they aren't too different at all and the resulting
performance is the same.

>
> 3. Is there anything that DirectX can do and OpenGL cannot?

One thing that I don't seem to be able to do with OpenGL is get more
direct ('scuse the pun) control of the hardware. OpenGL is designed to be
pretty hardware neutral and doesn't concern itself with many of the
things that DirectX does, for example where textures are stored, whether
we are using hardware TnL, how to copy between different surface types;
the list goes on. Personally I prefer OpenGL. I hadn't used DirectX in a
project prior to 9, but it seems 9, compared to earlier incarnations, is
more and more taking on the OpenGL ideas, and I haven't really found any
features of OpenGL that I am missing in DirectX, nor have I found a
reason to use Direct3D's more fine-grained control. Conversely, I really
like Direct3D's flexible vertex format idea; I reckon this is something
which should be put into OpenGL (maybe it is and I don't know it).
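
For illustration, a minimal sketch of the flexible vertex format idea; the layout below is just an assumed example, not taken from any particular engine:

#include <d3d9.h>

// The FVF bitmask describes the vertex layout, and the struct must match it
// field for field, in the order the flags prescribe.
struct MyVertex {
    float x, y, z;      // D3DFVF_XYZ
    float nx, ny, nz;   // D3DFVF_NORMAL
    float u, v;         // D3DFVF_TEX1 (one set of 2D texture coordinates)
};
const DWORD MY_FVF = D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_TEX1;

// Later: dev->SetFVF(MY_FVF); dev->SetStreamSource(0, vb, 0, sizeof(MyVertex));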

For me, however, the crux of the matter, which has determined that
OpenGL support from my application will always be secondary, is support
for hardware video. DirectX isn't just a 3D API; it also encompasses
DirectShow (or is it called DirectX Media now, who knows!), and this means that
I can easily integrate hardware accelerated 2D video and 3D scenes.
Sure, there are extensions like render-to-texture on OpenGL that help
a lot, but with DirectX, and specifically with a tool called the Video Mixing
Renderer 9, integrated video and 3D seem to be far better.

(For the record, apparently on SGI it's a much better situation, but I am
targeting home computers for now.)

>
> 4. Which of the two better supports the newest capabilities of the newest video cards?

I think this is a matter of taste; both support the newest features of the
video cards. Which you choose is down to your programming style and
probably what best fits with your idealized vision of a 3D API. For me
neither is perfect; my program uses an idealized 3D API that is perfect
for it and then, further within the program, goes about making DirectX and
OpenGL fit this model.

>
> P.S. I am not a troll!

I wonder what sort of response you would get posting the same question
to microsoft.public.win32.programmer.directx.graphics; there seem to be
a fair few programmers there who also use both OpenGL and DirectX.

Bunny

Andy V

Aug 18, 2003, 7:23:14 PM
I wrote:

>>Have you noticed that the OpenGL ARB (architecture review board, the
>>group in charge of the specification) has committed to producing a new
>>release of the standard approximately every 12 months? Currently, this
>>is faster than Direct3D.

WTH wrote:
> Do you realize the types of changes that occur in these releases?

Yes, I follow the OpenGL news assiduously.

> There's a
> reason it's "1.4" and not 2.0.

This is because SGI had developed "GL" (a.k.a. Iris GL) over the course
of many years, and that experience made OpenGL 1.0 a very highly
functional interface. When tweaking is what needs to be done, minor
revisions are fine.

On the Direct3D side, the first release came from an internal layer
inside a software package, and it was not very usable as a device
interface. Multiple major revisions later, it is.

> Also, how long does it take to actually see
> an implementation once this has happened?

Generally the functionality is available in *advance* of the
specification, as extensions. The release of the final version does lag
the official endorsement and new API, but generally not unduly so.

> OpenGL is MUCH MUCH slower to include functionality into the API. Nobody is
> saying this is bad (I don't think it is necessarily bad), it is just the way
> it is.

This is what has changed the last couple of years. New functionality is
making it from concept to extension to ARB-endorsed extension to API
much faster now.

OpenGL 2.0 is coming out in a matter of months. There isn't any
technical need for this to be version 2.0 rather than 1.5 -- nothing was
taken out of the API. Rather, it is a statement the OpenGL shading
language is such a major change in functionality that it deserves a new
major version number.

--
Andy V

WTH

Aug 18, 2003, 10:33:13 PM
Stop trying to make this a 'OGL better than D3D' contest.

> This is because SGI had developed "GL" (a.k.a. Iris GL) over the course
> of many years, and that experience made OpenGL 1.0 a very highly
> functional interface. When tweaking is what needs to be done, minor
> revisions are fine.

Horsesh*t. OpenGL 1.x is still 1.x because the ARB takes forever to
incorporate anything into OpenGL, NOT because OpenGL already has everything
it needs, lol. You'll probably argue next that a programmable pipeline
should not be a part of the OpenGL API.

> On the Direct3D side, the first release came from an internal layer
> inside a software package, and it was not very usable as a device
> interface. Multiple major revisions later, it is.

Yes, from DX5 onwards it was easy to use. That was in 1998, more than 5
years ago.

> > Also, how long does it take to actually see
> > an implementation once this has happened?
>
> Generally the functionality is available in *advance* of the
> specification, as extensions. The release of the final version does lag
> the official endorsement and new API, but generally not unduly so.

Yes, and nobody with an objective viewpoint would view 'extensions' as a
satisfactory method for supporting all the new features available from modern
3D hardware.

> > OpenGL is MUCH MUCH slower to include functionality into the API. Nobody is
> > saying this is bad (I don't think it is necessarily bad), it is just the way
> > it is.
>
> This is what has changed the last couple of years. New functionality is
> making it from concept to extension to ARB-endorsed extension to API
> much faster now.

No, it has not. 1.1 was how long ago? 1.4 is the latest. 1.5/2.0 are who
knows when...

> OpenGL 2.0 is coming out in a matter of months.

People have been saying this for more than a year.

> There isn't any technical need for this to be version 2.0 rather than 1.5 -- nothing was
> taken out of the API. Rather, it is a statement the OpenGL shading
> language is such a major change in functionality that it deserves a new
> major version number.

Yes, exactly, OpenGL incorporates a new major feature, finally, into the
API. It only took how long for this to happen?

There are great advantages to using OpenGL, you should realize that new
feature availability for games and any product that isn't 'in house' where
you can control the hardware is NOT a strong point of OpenGL.

WTH

WTH

Aug 18, 2003, 10:35:57 PM
> I think you can sum this up with one word: "neither".

Agreed totally. They are very VERY similar in performance.

> I think COM, Hungarian notation, etc. is horrible to use.

Does OpenGL require COM or Hungarian notation? I'm just asking because
Direct3D doesn't require it either. You used to have to use COM with
DirectX but no longer. You can if you want to though. Hungarian notation?
Never been necessary AFAIK...

WTH


Andy V

Aug 18, 2003, 11:15:45 PM
WTH wrote:

> Stop trying to make this a 'OGL better than D3D' contest.

I was responding to you.

>>This is because SGI had developed "GL" (a.k.a. Iris GL) over the course
>>of many years, and that experience made OpenGL 1.0 a very highly
>>functional interface. When tweaking is what needs to be done, minor
>>revisions are fine.
>
>
> Horsesh*t. OpenGL 1.x is still 1.x because the ARB takes forever to
> incorporate anything into OpenGL, NOT because OpenGL already has everything
> it needs, lol. You'll probably argue next that a programmable pipeline
> should not be a part of the OpenGL API.

Horsesh*t right back. OpenGL is still 1.x because it was possible to add
new functionality in an incremental fashion. I never said it had
everything it needed. I would never argue that a programmable pipeline
isn't a good part of OpenGL now that the technology is available. I
don't think it is powerful *enough* yet.

>>On the Direct3D side, the first release came from an internal layer
>>inside a software package, and it was not very usable as a device
>>interface. Multiple major revisions later, it is.
>
>
> Yes, From DX5 onwards it was easy to use. That was in 1998, more than 5
> years ago.

Yes, and Direct3D has not changed drastically since then. Microsoft now
has a usable interface.

>>Generally the functionality is available in *advance* of the
>>specification, as extensions. The release of the final version does lag
>>the official endorsement and new API, but generally not unduly so.
>
>
> Yes, and nobody with an objective viewpoint would view 'extensions' as a
> satisfactory method for supporting all the new features available from modern
> 3D hardware.

I won't claim to be objective (what fun would that be?), but I don't
think extensions are worse than capability bits.

>>This is what has changed the last couple of years. New functionality is
>>making it from concept to extension to ARB-endorsed extension to API
>>much faster now.
>
>
> No, it has not. 1.1 was how long ago? 1.4 is the latest. 1.5/2.0 are who
> knows when...

1.5 was approved in July, 2003 -- per the BOF at Siggraph this year, it
is the

"Third annual revision since the ARB committed to a yearly release cycle"

I do note that the documentation isn't ready yet.

>>OpenGL 2.0 is coming out in a matter of months.
>
>
> People have been saying this for more than a year.

True.

> Yes, exactly, OpenGL incorporates a new major feature, finally, into the
> API. It only took how long for this to happen?

The commodity graphics market took that long to get past where OpenGL
started. I remember the early days of OpenGL when IHVs were struggling
to be able to support 1.0 at all, let alone in hardware.

> There are great advantages to using OpenGL, you should realize that new
> feature availability for games and any product that isn't 'in house' where
> you can control the hardware is NOT a strong point of OpenGL.

You've lost me here. There are essentially two commodity graphics
providers and they seem to provide similar support for Direct3D and
OpenGL, plus or minus a release or two.

--
Andy V

fungus

Aug 19, 2003, 4:06:01 AM
WTH wrote:
> Stop trying to make this a 'OGL better than D3D' contest.
>

Hey, take a look at the name of the newsgroup!


> Horsesh*t. OpenGL 1.x is still 1.x because the ARB takes forever to
> incorporate anything into OpenGL, NOT because OpenGL already has everything
> it needs, lol.

Yet strangely enough OpenGL is still as good as Direct3D.

> ...and nobody with an objective viewpoint would view 'extensions' as a
> satisfactory method for supporting all the new features available from modern
> 3D hardware.
>

Rubbish. It works perfectly.

It's certainly no worse than D3D capability bits.

>>OpenGL 2.0 is coming out in a matter of months.
>
>
> People have been saying this for more than a year.
>

Not me. For the last year or so I've been saying it's
due out in SIGGRAPH (September).

--
<\___/> For email, remove my socks.
/ O O \
\_____/ FTB. Why isn't there mouse-flavored cat food?

stingelf

Aug 19, 2003, 7:57:52 AM
> Does OpenGL require COM or Hungarian notation? I'm just asking because
> Direct3D doesn't require it either. You used to have to use COM with
> DirectX but no longer. You can if you want to though. Hungarian notation?
> Never been necessary AFAIK...

Don't you implicitly use COM when creating a D3D device? Maybe you
meant you don't have to explicitly query for interfaces now? Just want
to make sure!

Immanuel Albrecht

Aug 19, 2003, 8:38:47 AM
"WTH" <spam...@Ih8it.com> wrote in
news:ORf0b.18217$f44....@fe04.atl2.webusenet.com:

_LP_DIRECTD3DDEVICE and whatever they're all called. But OpenGL also uses
Hungarian notation. Once it was useful, when using C, where you could not have
different functions with different parameters use the same name.
Maybe OpenGL uses more Hungarian than DirectX, but that might be only my
own experience.


--
http://xrxixpx.rip-productions.de

wogston

Aug 19, 2003, 8:56:37 AM
> _LP_DIRECTD3DDEVICE and what they're all called. But OpenGL also uses
> hungarian notation. Once it was useful when using C and you could not have

LPDIRECT... is just a typedef for the IDirect3DDevice9 interface, which is the
proper interface type. Besides Microsoft using the typedef'd names in
their own examples, I don't see them in any code I or my colleagues write -
what would be the point? ;-)


wogston

Aug 19, 2003, 8:58:07 AM
> Everything Microsoft does is Hungarian Notation.
>
> "LPDWORDLPLPDWORDCSTR"...urk!

Yes, in the 1980's... look at DirectX 9.0, .NET etc. and you will see that their
latest conventions and APIs have changed. The Win32 API is old and backward
compatible, so that legacy stuff is still hanging in there.

If you develop Managed DirectX 9 applications there is no sight -at all- of
those abominations. ;-)


WTH

Aug 19, 2003, 10:12:18 AM
Yes, internally DX uses COM (which is a good thing for many reasons which we
can discuss if you like), but you yourself do not have to make use of COM
programming to use D3D anymore, unless you want to use the older interfaces.

WTH

"stingelf" <spip...@yahoo.com> wrote in message
news:882fe461.03081...@posting.google.com...

WTH

Aug 19, 2003, 10:15:40 AM
I don't think most people would consider LPDIRECT3DDEVICE to be Hungarian
notation.
They would probably think it is just a shortened version of
LONGPOINTERDIRECT3DDEVICE.

Now, in_szFileName IS an example of Hungarian notation, because 'in' denotes
that the variable is an argument passed to the function/method and 'sz' that it
is a zero-terminated string. That one is clearly Hungarian in nature ;).

WTH

"Immanuel Albrecht" <xrx...@gmx.de> wrote in message
news:bht5on$kgp$02$1...@news.t-online.com...

Ruud van Gaal

Aug 19, 2003, 10:19:23 AM
On Tue, 19 Aug 2003 14:38:47 +0200, Immanuel Albrecht <xrx...@gmx.de>
wrote:

>"WTH" <spam...@Ih8it.com> wrote in
>news:ORf0b.18217$f44....@fe04.atl2.webusenet.com:
>
>>> I think you can sum this up with one word: "neither".
>>
>> Agreed totally. They are very VERY similar in performance.
>>
>>> I think COM, Hungarian notation, etc. is horrible to use.
>>
>> Does OpenGL require COM or Hungarian notation? I'm just asking
>> because Direct3D doesn't require it either. You used to have to use
>> COM with DirectX but no longer. You can if you want to though.
>> Hungarian notation? Never been necessary AFAIK...
>
>_LP_DIRECTD3DDEVICE and what they're all called. But OpenGL also uses
>hungarian notation. Once it was useful when using C and you could not have
>different functions with different parameters use the same name.

It seems to have merits for notations like 'cCustomers' which means
it's a count. It's all personal preference in the end.
Myself, I don't like Hungarian because:
- I like to be able to read source code as mostly English, without
having to strip 'LP' 'LPCSTR' and such in my mind.
- I always find that while I'm typing things, I'm just waiting for my
fingers to finish. Extra characters that are not a description of the
variable's context waste my time.
- Things like 'LP' state the type. So for every variable you
implicitly must know the type.
- Even worse, changing the underlying type (the message LPARAM and
WPARAM anyone) causes confusion.

I can just stand the 'gl' prefix before each OpenGL function. ;-)


Ruud van Gaal
Free car sim: http://www.racer.nl/
Pencil art : http://www.marketgraph.nl/gallery/

Alex Mizrahi

Aug 19, 2003, 8:19:08 AM
Hello, WTH!

You wrote on Mon, 18 Aug 2003 22:35:57 -0400:
>> I think COM, Hungarian notation, etc. is horrible to use.

W> Does OpenGL require COM or Hungarian notation? I'm just asking
W> because
W> Direct3D doesn't require it either. You used to have to use COM with
W> DirectX but no longer.

it's no longer using COM interfaces for everything? that's something new...

you mean that it can work as pure C structs, with methods passing 'this' as a
parameter?
but that's really ugly, to me, and it requires much more work to bind it
to another programming language, and on some languages it's possibly
impossible at all.
with OpenGL pure C can be used; it's as standard as an API can be.

there is Jogl, there are OpenGL bindings for Ocaml, Ada... are there
functioning d3d bindings for these languages?

W> You can if you want to though. Hungarian notation?
W> Never been necessary AFAIK...

OGL only requires the GL_ or gl prefix for each enumeration member/function;
the other part is mostly a plain english word, so it's clear enough for
anybody, including newbies, to easily understand code.
in Direct3D they use prefixes for all enumerations, types and so on, so
programs look long and frightening. compare:

device->SetRenderState( D3DRS_SRCBLEND, D3DBLEND_SRCALPHA );
device->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA );

glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

OpenGL version is 2.5 times shorter.

With best regards, Alex Mizrahi aka killer_storm.


WTH

Aug 19, 2003, 12:53:45 PM
> it's no more using COM interfaces for everything? that's something new..

Actually, you haven't been forced to use COM interfaces directly for a while
now.

> you mean that it can work as pure C structs, with methods passing 'this' as a
> parameter?

I don't know. It is a Windows-only API, ergo, almost no one uses straight
C. I think you can use the helper functions from C but I haven't done this
personally.

> but it's really ugly, as for me, and it requires much more work to bind it
> with some programming language. and possibly on some languages it's
> impossible at all.
> with OpenGL pure C can be used; it's as standard as an API can be.

? What do 'pure C' and 'standard' have to do with each other? As a C/C++
guy, I find the idea of smart pointers much more alluring than 'pure C'.

> there is Jogl, there are OpenGL bindings to Ocaml, Ada.. are there
> functioning d3d bindings with this languages?

I don't know, but why would there be? Direct3D is for games (although you
can use it for whatever you want), why would anyone write a game in Ocaml,
or (ack) Ada? In any case, the OGL bindings for many of those languages are
incomplete and/or out of date (not all of them.)

> W> You can if you want to though. Hungarian notation?
> W> Never been necessary AFAIK...
>
> OGL only requires the GL_ or gl prefix for each enumeration member/function,
> other part of it is mostly plain english word, so it's clear enough for
> anybody, including newbies, to easily understand code.
> in Direct3D they use prefixes for all enumerations, types and so on, so
> programs look long and frightening. compare:

No it isn't. How is GL_SRC_ALPHA clearer than D3DBLEND_SRCALPHA?

>
> device->SetRenderState( D3DRS_SRCBLEND, D3DBLEND_SRCALPHA );
> device->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA );
>
> glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
>
> OpenGL version is 2.5 times shorter.

Shorter, but it is actually clearer in D3D what D3DRS_SRCBLEND means than
GL_SRC_ALPHA. D3D Render State option, versus GL option... D3D Blend
option versus GL option. All the GL options are clumped together. It is
clearer to newbies about which options are usable where (in the pipeline) by
giving them a clearer name. (Neither of them is friendly in any case, lol.)

WTH:)


WTH

Aug 19, 2003, 12:55:37 PM
Issues like notation are different between hobbyist developers and
professional developers.
Then there is the difference between professionals in small dev teams versus
professionals in large dev teams. The larger your team, the more
complicated your source base, the greater the need for proper notation.

WTH

"Ruud van Gaal" <KILLSP...@marketgraph.nl> wrote in message
news:3f453100....@news.xs4all.nl...

wogston

Aug 19, 2003, 1:21:06 PM
> (While OpenGL apps I write ten years ago still compile
> and run perfectly. Ahem.)

'Perfectly' is a loose term; something using immediate mode might run, but not at the best
speeds possible on current hardware. DirectX apps I wrote for DX3 will also
compile and run, so what? ;-)


Immanuel Albrecht

Aug 19, 2003, 5:06:32 PM
"WTH" <ih8...@spamtrap.com> wrote in
news:vk4lgs5...@corp.supernews.com:

>> but it's really ugly, as for me, and it requires much more work to
>> bind it with some programming language. and possibly on some
>> languages it's impossible at all.
>> with OpenGL pure C can be used, it's as standart as API can be.
>
> ? What do 'pure C' and 'standard' have to do with each other? As a
> C/C++ guy, I find the idea of smart pointers much more alluring than
> 'pure C'.

The C96 standard assures what C code may look like in order to be portable. But the
main rule is that performance comes first, then everything else. That's
why many people just think of C as a better macro assembler.
Smart pointers will cost you performance. Anything smart will do that,
because you no longer have to care for things that are necessary in some
cases but not always. That's the whole deal: you trade performance for
easier programming. So here it really depends on what kind of task
you're setting out to do.

>> there is Jogl, there are OpenGL bindings to Ocaml, Ada.. are there
>> functioning d3d bindings with this languages?
>
> I don't know, but why would there be? Direct3D is for games (although
> you can use it for whatever you want),

I think games account for the least graphics processing time in the world.
Ever watched Final Fantasy?

> why would anyone write a game
> in Ocaml, or (ack) Ada? In any case, the OGL bindings for many of
> those languages are incomplete and/or out of date (not all of them.)

As long as they work on the target platform (not necessarily an Intel
one) it'll be ok.

>
> No it isn't. How is GL_SRC_ALPHA clearer than D3D
>
>>
>> device->SetRenderState( D3DRS_SRCBLEND, D3DBLEND_SRCALPHA );
>> device->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA );
>>
>> glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
>>
>> OpenGL version is 2.5 times shorter.
>
> Shorter, but it is actually clearer in D3D what D3DRS_SRCBLEND means
> than GL_SRC_ALPHA. D3D Render State option, versus GL option... D3D
> Blend option versus GL option. All the GL options are clumped
> together. It is clearer to newbies about which options are usable
> where (in the pipeline) by giving them a clearer name.

Well, if you only look at the parameters, D3DRS_SRCBLEND might be
more useful. But the function name in OpenGL is much more problem-oriented
than in Direct3D. What part of glBlendFunc lacks the information in
D3DRS_SRCBLEND? I mean that I just want to set the blend func, and not a
render state. I cannot imagine that calling an 8+2*4 parameter
function twice can be as effective as calling a 2*4 parameter function once.


--
http://xrxixpx.rip-productions.de

Alex Mizrahi

Aug 19, 2003, 5:02:52 PM
Hello, WTH!

You wrote on Tue, 19 Aug 2003 12:53:45 -0400:

>> it's no more using COM interfaces for everything? that's something
>> new..

W> Actually, you haven't been forced to use COM interfaces directly for
W> a while now.

>> you mean that it can work as pure C structs with methods passign this
>> as parameter?

W> I don't know. It is a windows only API, ergo, almost no one uses
W> straight
W> C. I think you can use the Helper functions from C but I haven't
W> done this personally.

there are no magic solutions to make a COM application work as a non-COM
application.
even if somebody writes a static dll function wrapper, it will have to
take a 'handle' (which will actually be an interface pointer). opengl has only one
such handle, somewhere in TLS as i understand it; there is no need for other
handles because of the state design.
but as i understand there is no such wrapper, at least no standard one (for a
non-standard one, you could implement calls to d3d via ogl - but it would not be
pure d3d).
in the case of pure C they do this trick in the header:

#if !defined(__cplusplus) || defined(CINTERFACE)
#define IDirect3D9_QueryInterface(p,a,b) (p)->lpVtbl->QueryInterface(p,a,b)

so it's still COM, but it's masked (only for C, using the preprocessor), and is
very ugly (such long and ugly constructs make programmers more tired, and thus
decrease the quality of applications). so it's a cosmetic-only solution.

>> but it's really ugly, as for me, and it requires much more work to
>> bind it with some programming language. and possibly on some
>> languages it's impossible at all.
>> with OpenGL pure C can be used, it's as standart as API can be.

W> ? What do 'pure C' and 'standard' have to do with each other?

plain functions in plain dlls are supported by most languages. classes and
COM interfaces are special, more complex constructs that are specific to
C++ (and similar languages). this is an implementation of the object-oriented idea
via the mechanism of virtual functions - pointers to such functions are written
in the structure to which the interface actually points, and that interface
pointer is also passed to the functions. C++ supports this efficiently and
has transparent syntax for it, but for other languages it can be an unnatural
way (there are a lot of other ways to implement the object-oriented
idea), and such constructs in those languages will be very bad looking or totally
impossible.

W> As a
W> C/C++
W> guy, I find the idea of smart pointers much more alluring than 'pure
W> C'.

there are a lot of languages that have different useful stuff. but their
internal structures are simply not as popular as C++'s.

>> there is Jogl, there are OpenGL bindings to Ocaml, Ada.. are there
>> functioning d3d bindings with this languages?

W> I don't know, but why would there be? Direct3D is for games
W> (although you can use it for whatever you want), why would anyone
W> write a game in Ocaml, or (ack) Ada?

many games use scripting languages, having C/C++ only for the parts of the engine
where high performance is needed.
and there are more applications that can use 3d graphics than just games.
some applications are easier to write in a language other than C++, and the part
that uses 3d may not be the main part.

W> In any case, the OGL bindings for many of those languages are
W> incomplete and/or out of date (not all of them.)

but it's easy to do bindings yourself. i bet i can write a perl script in
10-20 minutes that will produce an OGL header from gl.h for any language that
supports a foreign function interface (if i know the syntax of that language). i'm
not sure that writing a binding for d3d is as easy.
OpenGL has very few requirements to run - a language doesn't even have to have
pointers to run it (at least with limited performance).
so OGL will be unbeatable when the task is something other than yet another game
written in C++ for Windows by fans of Microsoft's bureaucratic APIs.


W> Shorter, but it is actually clearer in D3D what D3DRS_SRCBLEND means than
W> GL_SRC_ALPHA. D3D Render State option, versus GL option... D3D Blend
W> option versus GL option. All the GL options are clumped together. It is
W> clearer to newbies about which options are usable where (in the pipeline) by
W> giving them a clearer name. (Neither of them is friendly in any case, lol.)

i think i have seen enough direct3d code (and code inspired by direct3d) to
conclude that the OpenGL API is more programmer-friendly and strains the
brain less with redundant information.

Alex Mizrahi

Aug 19, 2003, 5:10:29 PM
Hello, WTH!
You wrote on Tue, 19 Aug 2003 12:55:37 -0400:

W> Issues like notation are different between hobbyist developers and
W> professional developers.
W> Then there is the difference between professionals in small dev teams
W> versus professionals in large dev teams. The larger your team, the
W> more complicated your source base, the greater the need for proper
W> notation.

have you seen the Java naming conventions? Java can be used for large
applications developed by large teams, and, by the way, according to
www.TIOBE.com it is more popular than C or C++.
in Java most things look like simple human language words. complexity can be
handled via nested packages. for example, if you're using Direct3D you can
just import the Direct3D package and stop writing those horrible prefixes. i
suspect the same is done in .NET, but i don't believe that all D3D
developers will go .NET soon.

WTH

Aug 19, 2003, 8:04:35 PM
> > Stop trying to make this a 'OGL better than D3D' contest.
>
> I was responding to you.

I wasn't saying one is better than the other, you are being entirely
defensive about OpenGL. According to you, there's nothing better about D3D
than OpenGL. You're entitled to your opinion of course, just like you're
entitled to be wrong. Just like if some DirectX zealot started boasting
that D3D is better than OpenGL at everything I'd skin his hide.

> Horsesh*t right back. OpenGL is still 1.x because it was possible to add
> new functionality in an incremental fashion. I never said it had
> everything it needed. I would never argue that a programmable pipeline
> isn't a good part of OpenGL now that the technology is available. I
> don't think it is powerful *enough* yet.

You're either playing stupid or incredibly naive if you think OpenGL is at
1.x because there hasn't been a need to add major functionality in the past
10 years. No major functionality has been added to the API since 1.0 with
the exception of the extensions work which is a temporary solution to a
serious problem that has become less temporary every year.

> >>On the Direct3D side, the first release came from an internal layer
> >>inside a software package, and it was not very usable as a device
> >>interface. Multiple major revisions later, it is.
> >
> >
> > Yes, From DX5 onwards it was easy to use. That was in 1998, more than 5
> > years ago.
>
> Yes, and Direct3D has not changed drastically since then. Microsoft now
> has a usable interface.

Maybe you should actually learn something about D3D before saying stupid
crap like "Direct3D has not changed drastically since then." You're
incredibly uneducated about D3D if you believe that.

> >>Generally the functionality is available in *advance* of the
> >>specification, as extensions. The release of the final version does lag
> >>the official endorsement and new API, but generally not unduly so.
> >
> >
> > Yes, and nobody with an objective viewpoint would view 'extensions' as a
> > satisfactory method for supporting all the new features available from modern
> > 3D hardware.
>
> I won't claim to be objective (what fun would that be?), but I don't
> think extensions are worse than capability bits.

Sorry, this thread was started by somebody who obviously wanted an objective
opinion about the benefits offered by two 'competing' APIs. If you want to
be a zealot, you're not helping the guy out.

> >>This is what has changed the last couple of years. New functionality is
> >>making it from concept to extension to ARB-endorsed extension to API
> >>much faster now.

Yes, great, a faster route to a poor solution... Wonderful. You really
don't see the extensions implementation as a serious weakness for OpenGL as
a gaming API?

> > No, it has not. 1.1 was how long ago? 1.4 is the latest. 1.5/2.0 are who
> > knows when...
>
> 1.5 was approved in July, 2003 -- per the BOF at Siggraph this year, it
> is the

Great, who has an implementation of 1.5?

> "Third annual revision since the ARB committed to a yearly release cycle"

Revision of the spec. You really need to realize that nobody gives a rat's
ass what the spec says, developers care about what it can do. Example,
everyone is enamored (myself included) with the 2.0 spec and variations
thereof; however, nobody is holding their breath on seeing it for a long
time. Every couple of months, out comes the "6 months to wait..."

> >>OpenGL 2.0 is coming out in a matter of months.
> >
> > People have been saying this for more than a year.
>
> True.

So stop using it as an example of how "fast" the ARB get things done with
OGL.

> > Yes, exactly, OpenGL incorporates a new major feature, finally, into the
> > API. It only took how long for this to happen?
>
> The commodity graphics market took that long to get past where OpenGL
> started. I remember the early days of OpenGL when IHVs were struggling
> to be able to support 1.0 at all, let alone in hardware.

What the hell does THAT have to do with anything, LOL. You want to write
off the snail pace set by the ARB to the fact that hardware vendors were
slow to implement 1.0 in hardware? In any case, it wasn't difficult to get
a 1.0 software rasterizer and then there was Cosmo (don't know if we should
bring that up... lol)

> > There are great advantages to using OpenGL, you should realize that new
> > feature availability for games and any product that isn't 'in house' where
> > you can control the hardware is NOT a strong point of OpenGL.
>
> You've lost me here. There are essentially two commodity graphics
> providers and they seem to provide similar support for Direct3D and
> OpenGL, plus or minus a release or two.

Of course I have. Do you actually know anything about deploying software
upon a broad variety of video cards? What is just about the only thing
professional game developers dislike about OpenGL? I'll give you a hint, it
starts with ext and ends with ensions.

WTH

WTH

Aug 19, 2003, 8:18:39 PM
> C96 standard assures what C code may look like to be portable. But the
> main rule is, that performance comes first, then anything else. That's
> why many people just think C as a better macro assembler.
> Smart pointers will cost you performance. Anything smart will do that,
> because you do not have to care for things that are necessary in some
> cases but not always. That's the whole deal, you sell performance in gain
> of easier programming. So here it really depends on what kind of task
> you're up to do.

I'm sorry, but in a 3D visualization program, it is extremely unlikely that
your choice of C versus C++ will make an appreciable difference now. Your
bottlenecks will be elsewhere :).

> I think least graphic processing time in the world is used for games.
> Ever watched Final Fantasy?

The art work in FF series is fantastic, the graphics themselves are not very
impressive. Is FF even rendered with OpenGL (I doubt it, but maybe)?

> > why would anyone write a game
> > in Ocaml, or (ack) Ada? In any case, the OGL bindings for many of
> > those languages are incomplete and/or out of date (not all of them.)
>
> As long as they work on the target platform (not necessarily an Intel
> one) it'll be ok.

Yes, but the point being that this thread was started by someone asking
about games and differences between OGL and D3D regarding development.

> Well. If you might only look at the parameters, D3DRS_SRCBLEND might be
> more useful. But the funtion name of OpenGL is much more problem oriented
> then in Direct3d. What part of gl_BlendFunc_ lacks the information in
> D3DRS_SRCBLEND? I mean, that I just want to set the blend func, and not a
> render state. I cannot imagine that calling twice a 8+2*4 parameter
> function can be as effective as calling once a 2*4 parameter function.

I agree, although I know this is subjective, I much prefer the OGL function
naming. It is still an 'easier to get into' API for 3D. Clean, simple yet
powerful.

WTH

WTH

Aug 19, 2003, 8:20:15 PM
> have you seen the Java naming conventions? Java can be used for large
> applications developed by large teams, and, by the way, according to
> www.TIOBE.com it is more popular than C or C++.
> in Java most things look like simple human language words. complexity can be
> handled via nested packages. for example, if you're using Direct3D you can
> just import the Direct3D package and stop writing those horrible prefixes. i
> suspect the same is done in .NET, but i don't believe that all D3D
> developers will go .NET soon.

Yes, I spent a couple of years doing EJB, and Java is a very interesting and
fun language. Enormous drawbacks but also very important new directions.

WTH

WTH

Aug 19, 2003, 8:13:28 PM
> > Stop trying to make this a 'OGL better than D3D' contest.
>
> Hey, take a look at the name of the newsgroup!

That's not what the thread is about, try reading it again fungus.

> Yet strangely enough OpenGL is still as good as Direct3D.

Sadly, it no longer is. It was until DX8 came out. Now that DX9 is out
with HLSL (Shaders 2.0), it is falling behind. THIS is why I'm complaining.
I don't want this to continue. People who think everything is copacetic
will be the death of OpenGL in the gaming world. Soon its only saving grace
will be "cross platform." What if MS challenged that (I doubt they would
though, there's no market growth for *nix based visualization.)

> > ...and nobody with an objective viewpoint would view 'extensions' as a
> > satisfactory method for supporting all the new features available from modern
> > 3D hardware.
> >
>
> Rubbish. It works perfectly.

You think so because you don't understand the benefit/cost tradeoff in using
extensions. Good luck writing a commercially successful game in OpenGL that
supports a wide range of video cards and uses late version extensions.

> It's certainly no worse than D3D capability bits.

What on earth are you talking about? LOL. The entire reason that D3D is
leaving OpenGL behind (in the game development arena) is because MS listens
to the HW vendors, roadmaps features and provides a driver model for the
next version of DX, then the HW vendors support the driver model AND THE
SAME CODE WORKS ACROSS ALL VIDEO CARDS that have the DX version of a
particular driver. Yes some cards have more or less z-buffer depth,
different stencil depths, but good luck running your shaders across two
different OGL cards when you can use the same code across dozens of DX
cards.

> >>OpenGL 2.0 is coming out in a matter of months.
> >
> >
> > People have been saying this for more than a year.
> >
>
> Not me. For the last year or so I've been saying it's
> due out in SIGGRAPH (September).

Great, you don't even know when SIGGRAPH "was". Try "it's already over" and
it took place at the end of July. I was there, didn't hear much talk about
2.0 being out anytime in the next few months. Most people just pretending
2.0 wasn't even on the horizon.

WTH

Andy V

Aug 19, 2003, 9:22:31 PM
WTH wrote:

>>>Stop trying to make this a 'OGL better than D3D' contest.
>>
>>I was responding to you.
>
>
> I wasn't saying one is better than the other,

It certainly didn't come across that way, neither in what I responded to
nor in subsequent replies.

> you are being entirely
> defensive about OpenGL.

You haven't said anything about Direct3D for me to be defensive about.
Not that I'm likely to, of course.

> According to you, there's nothing better about D3D
> than OpenGL.

Again, I haven't mentioned anything -- I'm sure I could if I tried hard
enough.

> You're entitled to your opinion of course, just like you're
> entitled to be wrong. Just like if some DirectX zealot started boasting
> that D3D is better than OpenGL at everything I'd skin his hide.

Really?

>>Horsesh*t right back. OpenGL is still 1.x because it was possible to add
>>new functionality in an incremental fashion. I never said it had
>>everything it needed. I would never argue that a programmable pipeline
>>isn't a good part of OpenGL now that the technology is available. I
>>don't think it is powerful *enough* yet.
>
>
> You're either playing stupid or incredibly naive if you think OpenGL is at
> 1.x because there hasn't been a need to add major functionality in the past
> 10 years. No major functionality has been added to the API since 1.0 with
> the exception of the extensions work which is a temporary solution to a
> serious problem that has become less temporary every year.

There certainly has been "major functionality" added since 1.0 --
texture objects, vertex arrays, 3D textures, separate specular color,
texture level of detail, multitexture, compressed textures, cube maps,
multisample, auto mipmap generation, depth textures, vertex programs and
fragment programs. However, all of it was added without changing the
meaning of older programs, and all of it can be retrofitted into older
programs with a minimal amount of effort.

>>>>On the Direct3D side, the first release came from an internal layer
>>>>inside a software package, and it was not very usable as a device
>>>>interface. Multiple major revisions later, it is.
>>>
>>>
>>>Yes, From DX5 onwards it was easy to use. That was in 1998, more than 5
>>>years ago.
>>
>>Yes, and Direct3D has not changed drastically since then. Microsoft now
>>has a usable interface.
>
>
> Maybe you should actually learn something about D3D before saying stupid
> crap like "Direct3D has not changed drastically since then." You're
> incredibly uneducated about D3D if you believe that.

"Drastically", meaning that D3D programs did not need to be drastically
changed to use the new functionality, as they did from DX3 to DX5. (I
may have the versions wrong here -- when they went from a vertex buffer
to an immediate mode interface.) Changes since then have been major, but
not drastic.

>>>>Generally the functionality is available in *advance* of the
>>>>specification, as extensions. The release of the final version does lag
>>>>the official endorsement and new API, but generally not unduly so.
>>>
>>>
>>>Yes, and nobody with an objective viewpoint would view 'extensions' as a
>>>satisfactory method for supporting all the new features available from modern
>>>3D hardware.
>>
>>I won't claim to be objective (what fun would that be?), but I don't
>>think extensions are worse than capability bits.
>
>
> Sorry, this thread was started by somebody who obviously wanted an objective
> opinion about the benefits offered by two 'competing' APIs. If you want to
> be a zealot, you're not helping the guy out.

I don't see your submissions as being objective.

>>>>This is what has changed the last couple of years. New functionality is
>>>>making it from concept to extension to ARB-endorsed extension to API
>>>>much faster now.
>
>
> Yes, great, a faster route to a poor solution... Wonderful. You really
> don't see the extensions implementation as a serious weakness for OpenGL as
> a gaming API?

It is a serious weakness, but no more difficult to handle in a program
than capability bits.

> Revision of the spec. You really need to realize that nobody gives a rat's
> ass what the spec says, developers care about what it can do.

I certainly realize that developers care about what it can do, and one
of the best ways to know that is to understand the spec.

> Example,
> everyone is enamored (myself included) with the 2.0 spec and variations
> thereof; however, nobody is holding their breath on seeing it for a long
> time. Every couple of months, out comes the "6 months to wait..."

Open processes do take longer; often longer than the participants realize.

>>>>OpenGL 2.0 is coming out in a matter of months.
>>>
>>>People have been saying this for more than a year.
>>
>>True.
>
>
> So stop using it as an example of how "fast" the ARB get things done with
> OGL.

2.0 is quite a major amount of work. I'm not surprised that it isn't
ready yet.

>>>Yes, exactly, OpenGL incorporates a new major feature, finally, into the
>>>API. It only took how long for this to happen?
>>
>>The commodity graphics market took that long to get past where OpenGL
>>started. I remember the early days of OpenGL when IHVs were struggling
>>to be able to support 1.0 at all, let alone in hardware.
>
>
> What the hell does THAT have to do with anything, LOL.

OpenGL can't evolve faster than the IHVs are willing and able to go.

> You want to write
> off the snail pace set by the ARB to the fact that hardware vendors were
> slow to implement 1.0 in hardware? In any case, it wasn't difficult to get
> a 1.0 software rasterizer and then there was Cosmo (don't know if we should
> bring that up... lol)

I'm glad we are able to give you lots of laughs.


>
>
>>>There are great advantages to using OpenGL, you should realize that new
>>>feature availability for games and any product that isn't 'in house'
>
> where
>
>>>you can control the hardware is NOT a strong point of OpenGL.
>>
>>You've lost me here. There are essentially two commodity graphics
>>providers and they seem to provide similar support for Direct3D and
>>OpenGL, plus or minus a release or two.
>
>
> Of course I have. Do you actually know anything about deploying software
> upon a broad variety of video cards?

Absolutely. I support software on a broad variety of video cards and
operating systems.

> What is just about the only thing
> professional game developers dislike about OpenGL? I'll give you a hint, it
> starts with ext and ends with ensions.

Extensions are your friends. They allow IHVs to implement new
functionality faster than any other option. Certainly it can be faster
than waiting for either the ARB or Microsoft to bless something new.
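
For what it's worth, the detect-and-load dance is only a few lines. A
minimal sketch for Windows, using GL_ARB_multitexture as the example
(the extension name and entry point are the real ARB ones; everything
else is just illustrative):

#include <windows.h>
#include <GL/gl.h>
#include <string.h>

/* Normally this typedef comes from glext.h. */
typedef void (APIENTRY *PFNGLACTIVETEXTUREARBPROC)(GLenum texture);

/* Returns the glActiveTextureARB pointer if the driver advertises
   GL_ARB_multitexture, or NULL so the caller can fall back to a
   single-texture code path.  Requires a current GL context. */
PFNGLACTIVETEXTUREARBPROC LoadActiveTextureARB(void)
{
    const char* exts = (const char*) glGetString(GL_EXTENSIONS);
    if (exts == NULL || strstr(exts, "GL_ARB_multitexture") == NULL)
        return NULL;
    return (PFNGLACTIVETEXTUREARBPROC)
        wglGetProcAddress("glActiveTextureARB");
}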

--
Andy V

stingelf

unread,
Aug 20, 2003, 3:57:22 AM8/20/03
to
> Yes, internally DX uses COM (which is a good thing for many reasons which we
> can discuss if you like), but you yourself do not have to make use of COM
> programming to use D3D anymore, unless you want to use the older interfaces.

Ok that makes sense. Actually I do seem to recall the early versions
of direct3D requiring you to call QueryInterface... I'm not sure tho.

Momchil Velikov

unread,
Aug 20, 2003, 4:04:24 AM8/20/03
to
"WTH" <spam...@Ih8it.com> wrote in message news:<e6z0b.709$Hf....@fe03.atl2.webusenet.com>...

> > > Stop trying to make this a 'OGL better than D3D' contest.
> >
> > Hey, take a look at the name of the newsgroup!
>
> That's not what the thread is about, try reading it again fungus.
>
> > Yet strangely enough OpenGL is still as good as Direct3D.
>
> Sadly, it no longer is. It was until DX8 came out. Now that DX9 is out
> with HLSL (Shaders 2.0), it is falling behind.

Eh? You seem to conveniently ignore the existence of
ARB_vertex_program, ARB_fragment_program and Cg.

> THIS is why I'm complaining.
> I don't want this to continue. People who think everything is copacetic
> will be the death of OpenGL in the gaming world. Soon its only saving grace
> will be "cross platform." What if MS challenged that (I doubt they would
> though, there's no market growth for *nix based visualization.)
>
> > > ...and nobody with an objective viewpoint would view 'extensions' as a
> > > satisfactory method for supporter all the new features available from
> modern
> > > 3D hardware.
> > >
> >
> > Rubbish. It works perfectly.
>
> You think so because you don't understand the benefit/cost tradeoff in using
> extensions. Good luck writing a commercially successful game in OpenGL that
> supports a wide range of video cards and uses late version extensions.

You do not seem to have heard of "abstraction", "information hiding",
"separating interface and implementation", etc. Separate code paths
(for the sake of taking advantage of vendor/ARB extensions) add zero
complexity to the system; they're just a simple matter of programming,
one code monkey-month more.
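
To make that concrete, a tiny sketch of the kind of abstraction I mean
(all names are made up for illustration, not taken from any real
engine):

// The rest of the renderer programs against this interface; each code
// path (ARB extension, vendor extension, software fallback) lives
// behind it, so adding one more path touches nothing outside its file.
class VertexProgramPath {
public:
    virtual ~VertexProgramPath() {}
    virtual bool Available() const = 0;        // e.g. extension exposed?
    virtual void Upload(const char* text) = 0; // hand program to driver
};

class ArbVertexProgramPath : public VertexProgramPath {
public:
    bool Available() const { return false; }   // would check GL_ARB_vertex_program
    void Upload(const char*) {}                // would call the ARB entry points
};

class FixedFunctionPath : public VertexProgramPath {
public:
    bool Available() const { return true; }    // always works, just less capable
    void Upload(const char*) {}                // ignore program, use fixed pipe
};

// Called once at startup; the caller never cares which path it got.
VertexProgramPath* ChoosePath()
{
    VertexProgramPath* p = new ArbVertexProgramPath;
    if (p->Available()) return p;
    delete p;
    return new FixedFunctionPath;
}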

> ... but good luck running your shaders across two


> different OGL cards when you can use the same code across dozens of DX
> cards.

It's not a matter of luck. It usually JustWorks(tm). Anyway, whether
it works or not is a quality of implementation issue, not the one or
the other API advantage/drawback.

That said, I see DirectX Graphics and OpenGL as roughly equivalent,
with one DX drawback (besides being ugly, proprietary and single
platform) and one OpenGL advantage:

- DX has only interleaved vertex buffers (correct me if I'm wrong),
which
 a) slows it down when you need to update only part of the data (say
only normals or only texunit 5 coords), because you'd have to upload
ALL of it to GART/video memory.
 b) makes integrating vertex data into application-specific data
structures harder, probably making it necessary to copy stuff around.

- The OpenGL extension mechanism is a clear advantage IMHO: it both
exposes the latest and greatest features of the latest hardware and
facilitates getting proven (in the field) features into the core
standard.

~velco

Immanuel Albrecht

unread,
Aug 20, 2003, 4:05:29 AM8/20/03
to
"WTH" <spam...@Ih8it.com> wrote in
news:f6z0b.710$Hf....@fe03.atl2.webusenet.com:

> The art work in FF series is fantastic, the graphics themselves are
> not very impressive. Is FF even rendered with OpenGL (I doubt it, but
> maybe)?

"Four SGI 2000 series high-performance servers, four Silicon Graphics(R)
Onyx2(R) visualization systems, 167 Silicon Graphics(R) Octane(R) visual
workstations and other SGI systems were used to create the film.
Alias|Wavefront(TM) Maya(R) software was used for animation authoring on
the SGI machines, and Pixar RenderMan(R) software was run on Linux(R)
OS-based systems." (taken from:
http://www.arstechnica.com/wankerdesk/01q3/ff-interview/ff-interview-1.html).

It would surprise me if those SGI workstations did not make use of
OpenGL. After all, who started OpenGL in the first place?

Ruud van Gaal

unread,
Aug 20, 2003, 4:42:53 AM8/20/03
to
On Wed, 20 Aug 2003 10:05:29 +0200, Immanuel Albrecht <xrx...@gmx.de>
wrote:

>"WTH" <spam...@Ih8it.com> wrote in

SGIs use OpenGL, but that's just previews.
Running Pixar RenderMan on Linux sounds more like the actual software
rendering process (little to do with OpenGL). Raytracing and such.

wogston

unread,
Aug 20, 2003, 4:52:45 AM8/20/03
to
> - DX has only interleaved vertex buffers (correct me of I'm wrong),
> which
> a) slows it down when you need to update only part of the data (say
> only normals or only texunit 5 coords), because you'd have to upload
> ALL of it to GART/video memory.
> b) makes integrating vertex data into application specific data
> structures harder, probably making necessasy copying stuff around.

It supports multiple streams, at least in DirectX 9. It has also
"evolved" since FVF was introduced; FVF felt like a good idea (it still
is) when it was introduced, but the current vertex declaration model is
even better.
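
For anyone who hasn't used it, here is a minimal sketch of the vertex
declaration model with two streams (the device and the two vertex
buffers are assumed to exist already; error checking omitted), so
updating the normals never touches the positions:

#include <d3d9.h>

// Positions come from stream 0, normals from stream 1.
const D3DVERTEXELEMENT9 decl[] =
{
    { 0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT,
      D3DDECLUSAGE_POSITION, 0 },
    { 1, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT,
      D3DDECLUSAGE_NORMAL, 0 },
    D3DDECL_END()
};

void BindStreams(IDirect3DDevice9* device,
                 IDirect3DVertexBuffer9* positions,
                 IDirect3DVertexBuffer9* normals)
{
    IDirect3DVertexDeclaration9* vdecl = 0;
    device->CreateVertexDeclaration(decl, &vdecl);
    device->SetVertexDeclaration(vdecl);   // Release() it at shutdown

    // Two separate buffers: locking/updating the normals only touches
    // stream 1, the positions in stream 0 stay where they are.
    device->SetStreamSource(0, positions, 0, 3 * sizeof(float));
    device->SetStreamSource(1, normals,   0, 3 * sizeof(float));
}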

One thing HLSL and Cg are doing "wrong" is the semantic binding. A
semantic is good for output to the fragment processor, obviously, so
that it knows what to do with the data. It's also "good" for input to
the fragment processor, so that it knows what to do with the data if the
"fixed" pipe is being used. Ditto for the VS if it is using the fixed
pipeline.

But when both vertex and fragment processors are programmed with custom
shaders, the only semantic that doesn't get in the way and is useful is
the fragment program output. Semantics just limit how many per-vertex
properties can be streamed into the program. I could be wrong, but it
looks likely something will be done about this in DirectX 10. ;-)

One way this could work is that the user defines only the size of a
single element when creating the vertex buffer object. When the user
copies stuff to the vbo, he could define a struct in C or C++, for
instance, which matches the layout he uses on the HLSL side (alignment
issues aside, they are a "problem" the same way with the vdecl model);
then the vertex program would compute offsets to vertex components and
so on. In the case of multistreaming, the streams would be combined in
indexed order into an "input stream", so that the format would again be
predictable (and logical) without vertex semantics. Meanwhile, I use
semantics to bind vertex components.. so far I haven't run out of
semantics yet.. but that is just a matter of time. ;)

Speaking of Cg and HLSL, HLSL has one advantage in its implementation of
the same language: the ability to define the entry point's symbolic name
when compiling. That way you can combine vertex and pixel programs into
the same source file. Sometimes this is very convenient. I use the
symbols vsmain() and psmain() when I write such processing pipes.
Similar functionality for Cg would be a welcome addition.
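
For illustration, a minimal sketch of what that looks like through D3DX
in DX9 (the shader text is just a stub; error handling and the constant
table are left out):

#include <d3dx9.h>
#include <string.h>

// One source string, two entry points: vsmain() and psmain().
const char* source =
    "float4x4 mvp;\n"
    "float4 vsmain(float4 pos : POSITION) : POSITION"
    " { return mul(pos, mvp); }\n"
    "float4 psmain() : COLOR { return float4(1, 1, 1, 1); }\n";

void CompileBoth(LPD3DXBUFFER* vsCode, LPD3DXBUFFER* psCode)
{
    // Same source, just a different entry point and profile per call.
    D3DXCompileShader(source, (UINT)strlen(source), NULL, NULL,
                      "vsmain", "vs_2_0", 0, vsCode, NULL, NULL);
    D3DXCompileShader(source, (UINT)strlen(source), NULL, NULL,
                      "psmain", "ps_2_0", 0, psCode, NULL, NULL);
}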


Immanuel Albrecht

unread,
Aug 20, 2003, 6:36:43 AM8/20/03
to
KILLSP...@marketgraph.nl (Ruud van Gaal) wrote in
news:3f433453...@news.xs4all.nl:


>>Would be surprising to me if those SGI workstations would not make use
>>of OpenGL. Since who began with OpenGL?
>
> SGIs use OpenGL, but that's just previews.
> Running Pixar RenderMan on Linux sounds more like the actual software
> rendering process (little to do with OpenGL). Raytracing and such.

Of course, but how many previews do you see before you go to final stage?

Dave Eberly

unread,
Aug 20, 2003, 9:29:05 AM8/20/03
to
"wogston" <sp...@nothere.net> wrote in message
news:bhvcsu$9gh$1...@phys-news1.kolumbus.fi...

> Since speaking of Cg and HLSL, HLSL has one advantage in it's
implementation
> of the same language. It's the ability to define the entry point symbolic
> name when compiling. This way can combine vertex and pixer programs into
> same sourcefile. Sometimes this is very convenient. I use symbols vsmain()
> and psmain() when I write such processing pipes. Similiar functionality
for
> Cg would be welcome addition.

The "-entry" option for the Cg executable cgc.exe to specify the
entry point works for me. My Cg vertex and pixel shader programs
are in the same .cg file. I also generate output files that contain both
DX9 and OpenGL shaders so that my graphics engine just loads a
single program.

As always, the "Direct3D vs. OpenGL" threads are entertaining
with participants passionate about their choices. However, when
you work on graphics for a living, you invariably have clients who
want one or the other. The solution is to support both and not
bother with a debate. Each API has its advantages and
disadvantages, so just deal with it.

--
Dave Eberly
ebe...@magic-software.com
http://www.magic-software.com
http://www.wild-magic.com


WTH

unread,
Aug 20, 2003, 11:04:21 AM8/20/03
to
Oh yes, I was an addref'ing freak in those days... COM was a very smart and
(probably) very tough decision for the DirectX team to make.

WTH;)

"stingelf" <spip...@yahoo.com> wrote in message
news:882fe461.03081...@posting.google.com...

WTH

unread,
Aug 20, 2003, 11:28:57 AM8/20/03
to

> there are no magic solutions to make COM application working as non-COM
> application.
> even if somebody will write static dll function wrapper, it will have to
> take 'handle'(which actually will be interface pointer). opengl has only
one
> such handle somewherein TLS, as i understand, there is no need of other
> handles because of state design.

You don't know what you're talking about here. You simply use a helper
function to ask for a pointer to the D3D object you wish to use. You don't
have to addref, you don't have to query interface, you don't have to
release. You can work at that level if you wish, but you have no need to.
You don't have to know anything about COM to use DX now. Stop confusing
people with your ignorance about D3D. You can use direct creation if you
wish, but indirect is just the same as any other API.
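
To make that concrete, a minimal sketch of device creation in DX9: no
CoCreateInstance, no QueryInterface, just plain calls (hWnd is assumed
to exist, error checking omitted; the one COM habit left is calling
Release when you are done):

#include <d3d9.h>

IDirect3DDevice9* CreateSimpleDevice(HWND hWnd)
{
    // Plain function call, no COM ceremony.
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);

    D3DPRESENT_PARAMETERS pp = {0};
    pp.Windowed = TRUE;
    pp.SwapEffect = D3DSWAPEFFECT_DISCARD;

    IDirect3DDevice9* device = 0;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                      D3DCREATE_SOFTWARE_VERTEXPROCESSING, &pp, &device);

    // Release() is just another call; keep d3d around if you prefer
    // and release it at shutdown instead.
    d3d->Release();
    return device;
}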

> but as i understand there is no such wrapper, at least standard one(for
> non-standard, one can implement calls to d3d via ogl - but it will not be
> pure d3d).
> in case of pure C they do such trick in header:

Are you still talking about using D3D from C? LOL. There are two different
situations which you are trying to lump together. (1)You are writing C and
using a C++ compiler, absolutely no weird crap. (2)You are writing C and
using a C compiler, you need to pass a pointer to itself as the first
argument. Holy crap that is tough...

> #if !defined(__cplusplus) || defined(CINTERFACE)
> #define IDirect3D9_QueryInterface(p,a,b)
(p)->lpVtbl->QueryInterface(p,a,b)
>
> so it's still COM, but it's masked(only for C using preprocessor), and is
> very ugly(such long and ugly constucts make programmers more tired, and
thus
> decreases quality of applications). so it's kosmetic-only solution.

You are incredible. "kosmetic [sic] only solution"? WTF are you talking
about? Have you ever provided access to C++ constructs from C?

> >> but it's really ugly, as for me, and it requires much more work to
> >> bind it with some programming language. and possibly on some
> >> languages it's impossible at all.
> >> with OpenGL pure C can be used, it's as standart as API can be.

It isn't ugly you idiot, you don't work in the header files, maybe you
should move out of the dark ages of 3D graphics and actually use C++ in any
case, lol.

I spent 3 years maintaining an ANSI C codebase between IRIX, Solaris,
HP-Unix, and NT, and providing C++ wrappers and C wrappers. This included a
DirectX renderer back in the 'execute buffer' times. It isn't ugly, and it
isn't difficult.

> W> ? What do 'pure C' and 'standard' have to do with each other?
>
> plain functions in plain dlls are supported by most languages. klasses and
> COM interfaces are special more komplex konstrukts that are specific to
> C++(and similar languages). this is implementation of object-oriented idea
> via mechanizm of virtual functions - pointers to such functions are
written
> in structure to which actually interface points, and also that interface
> pointer is passed to functions. C++ does efficiently support such way and
> has transparent syntax for this, but for other languages this can be not a
> natural way (there are a lot of other ways to implement object-oriented
> idea), and such constucts in that languages will very bad looking or
totally
> impossible.

LOL. "very bad looking". Try to be objective. In any case, none of the
tripe you posted above has anything to do with your original statement about
"pure C" and there being a "standard." I would spend the 5 minutes
deconstructing your argument except it has nothing to do with this thread.

You appear to be trying to turn this into a C versus C++ thread. If you
limit yourself to working in C, that's fine, but don't try to make OpenGL
look better than D3D because you think it is more 'C friendly.' VERY few
game developers work in just C (none that I am aware of, but I don't know
many of them.)

> many games use scripting languages, having C/C++ only for parts of engine
> where high performance is needed.
> and there is more applications that can use 3d graphics other than games.

This thread is about why there don't seem to be many OGL games versus
D3D games. As for games using scripting languages, yes, they do, but
ONLY FOR SCRIPTABLE EVENTS. The game logic itself does not run in
script; the input stage, the task stage, the AI stage, the render stage,
et cetera are NOT in a scripting language. You appear to be trying to
make it seem like they are.

> some applications are easier to write on language other than C++, and part
> that uses 3d can be not the main part.

Are you a professional 3D graphics developer?

> W> In any case, the OGL bindings for many of those languages are
> W> incomplete and/or out of date (not all of them.)
>
> but it's easy to do bingings yourself.

You can't argue in one place that D3D is bad because using it from C
requires ugly headers and then argue that a value of OpenGL is that you
can go and generate your own bindings, which are much more complicated
and beyond most newbies. You can't have your cake and eat it too.

> i bet i can write perl script in
> 10-20 minutes that will produce OGL header from gl.h for any language that
> supports foriegn function interface(if i know syntax of that language).
i'm
> not sure that write binging with d3d is easy enough.

You're not sure, you know why? Because you don't know squat about D3D.

> OpenGL has very few requirements to run - language doesn't even have to
have
> pointers to run it(at least with limited performance).
> so OGL will be unreachable when task is other than another one game
written
> on C++ for Windows by Mircosoft beurocratic API fans.

So what? How many people are going to write games that are in languages
that don't support pointers? Why the hell would you want to? You need to
get a job in 3D graphics.

> W> Shorter, but it is actually clearer in D3D what D3DRS_SRCBLEND means
> than
> W> GL_SRC_ALPHA. D3D Render State option, versus GL option... D3D Blend
> W> option versus GL option. All the GL options are clumped together. It
> is
> W> clearer to newbies about which options are usable where (in the
> pipeline) by
> W> giving them a clearer name. (Neither of them is friendly in any case,
> lol.)
>
> i think i have seen direct3d code(and code inspired by direct3d) enough to
> make a conclusion that OpenGL API is more programmer-friendly and strains
> brain by redundant information less.

Not any more. You need to go use DX8 or DX9. Your D3D knowledge is out of
date.

WTH


WTH

unread,
Aug 20, 2003, 12:19:33 PM8/20/03
to
I thought you meant the series of games... Sorry.

WTH

"Immanuel Albrecht" <xrx...@gmx.de> wrote in message

news:bhva49$lj9$02$2...@news.t-online.com...

wogston

unread,
Aug 20, 2003, 12:35:29 PM8/20/03
to
> The "-entry" option for the Cg executable cgc.exe to specify the
> entry point works for me. My Cg vertex and pixel shader programs
> are in the same .cg file. I also generate output files that contain both
> DX9 and OpenGL shaders so that my graphics engine just loads a
> single program.

Any way to bind when loading dynamically? (THAT is the big question, I
generate HLSL code dynamically from renderstates and sometimes compile
that... so I use compiler backend in runtime, not always, but sometimes, and
then HLSL way is better)


> As always, the "Direct3D vs. OpenGL" threads are entertaining
> with participants passionate about their choices. However, when
> you work on graphics for a living, you invariably have clients who

I do graphics programming for a living, but I take it passionately, as
it started as a hobby 15 years ago. Still have the passion; it never
went anywhere. I must be unique and very lucky. ;-)

That said, I don't feel very dramatically zealotish, as I use both APIs
in my work. Currently I am getting into OpenGL ES, as it is in demand in
my line of work more than OpenGL 1.x for desktop systems; for the
desktop, Windows is currently the platform where I seem to be needed
most, and DirectX 9 is a good choice for that. No regrets. I do OpenGL
programming on the desktop as well, but that is not as frequent as DX 9.
It's not my call most of the time, I'm just doing the work... API
choices are not always mine to make.

If someone wants to fight which API is better, that's their problem.


wogston

unread,
Aug 20, 2003, 12:45:25 PM8/20/03
to
> Of course, but how many previews do you see before you go to final stage?

3DSMAX is also used nowadays by the movie industry, and there DirectX 8
is the recommended editor renderer. That says nothing about what
renderer is used for the movie frames. Nor does it say what happens
after rendering, when composing the final frames from layers which are
produced on a wide range of graphics packages.

Nor is much said about what is used to produce data for Flame,
Combustion, etc., which have SGI versions and very heavy price tags for
the heavy iron used for rendering.

DirectX 8 / 9 and OpenGL are both good for previews as they are, and the
movies themselves aren't rendered with either API; they're just realtime
tools. OpenGL is multi-platform, DirectX isn't (with the exception of a
few game consoles).

All the same, we should think about how productive it is to 'fight' over
which API is better..


wogston

unread,
Aug 20, 2003, 12:47:20 PM8/20/03
to
> generate HLSL code dynamically from renderstates and sometimes compile
> that... so I use compiler backend in runtime, not always, but sometimes,
and

p.s. those I compile from memory, but it's still preferable if same
front-end can handle both types of generated source (memory stream vs. file
stream)


wogston

unread,
Aug 20, 2003, 12:48:50 PM8/20/03
to
> etc. which have sgi versions and have very heavy pricetags for heavy-iron
> used for rendering.

Of course Linux (and Windows) networks are also seen, instead of "big
iron".. but everyone knows that; somehow mentioning it just makes you
look (in your own opinion, err.. mine in this case) like the sharpest
tool in the box. ;-)


WTH

unread,
Aug 20, 2003, 1:03:42 PM8/20/03
to
> It certainly didn't come across that way, neither in what I responded to
> nor in subsequent replies.

Of course it didn't, you didn't read my post objectively.

> > you are being entirely
> > defensive about OpenGL.
>
> You haven't said anything about Direct3D for me to be defensive about.
> Not that I'm likely to, of course.

That is poor logic. If I don't state something about D3D, you can't be
defensive about OpenGL? Really? You couldn't become defensive about OpenGL
if I criticise some aspect of it w/o mentioning Direct3D? I think that
would be a resounding 'yes.'

> Again, I haven't mentioned anything -- I'm sure I could if I tried hard
> enough.

Do you realize how this thread started, before you made it an "OpenGL has no
flaws" thread?

> > You're entitled to your opinion of course, just like you're
> > entitled to be wrong. Just like if some DirectX zealot started boasting
> > that D3D is better than OpenGL at everything I'd skin his hide.
>
> Really?

Yes, and I have. Zealots in the DirectX group think I'm anti-DirectX when
somebody asks if they should learn OpenGL or DirectX and they bag on OpenGL.

>
> >>Horsesh*t right back. OpenGL is still 1.x because it was possible to add
> >>new functionality in an incremental fashion. I never said it had
> >>everything it needed. I would never argue that a programmable pipeline
> >>isn't a good part of OpenGL now that the technology is available. I
> >>don't think it is powerful *enough* yet.
> >
> >
> > You're either playing stupid or incredibly naive if you think OpenGL is
at
> > 1.x because there hasn't been a need to add major functionality in the
past
> > 10 years. No major functionality has been added to the API since 1.0
with
> > the exception of the extensions work which is a temporary solution to a
> > serious problem that has become less temporary every year.
>
> There certainly has been "major functionality" added since 1.0 --
> texture objects, vertex arrays, 3D textures, separate specular color,
> texture level of detail, multitexture, compressed textures, cube maps,
> multisample, auto mipmap generation, depth textures, vertex programs and
> fragment programs. However, all of it was added without changing the
> meaning of older programs, and all of it can be retrofitted into older
> programs with a minimal amount of effort.

Vertex arrays as major functionality? No...
Texture objects as major functionality? No...
3D textures as major functionality? No...
Separate specular color as major functionality? Maybe.
MipMaps (texture LOD) as major functionality? Already existed in 1.0
multisample? Yes
auto mipmap generation as major functionality? No... BTW, found in earlier
versions of the GL family in any case.
depth textures as major functionality? No... BTW, only if your card has a
1.4 version driver
vertex programs? You mean, the ARB extension? Not part of the API, Ibid.
fragment programs? You mean, the ARB extension? Not part of the API, Ibid.

Adding vertex and fragment support to earlier code with a minimal effort?
LOL. ONLY if your data just happens to be organized in a particular
fashion.

> "Drastically", meaning that D3D programs did not need to be drastically
> changed to use the new functionality, as they did from DX3 to DX5. (I
> may have the versions wrong here -- when they went for a vertex buffer
> to an immediate mode interface.) Changes since then have been major, but
> not drastic.

You are referring to execute buffers and drawprimitive. Why was that
drastic? It is just major functionality.

> I don't see your submissions as being objective.

That's because you're always defensive about OpenGL. I admire aspects of
OGL and criticise others. The same goes for D3D. Anything you don't think
is being done right in OGL?

> I certainly realize that developers care about what it can do, and one
> of the best ways to know that is to understand the spec.

Who cares if you understand the 2.0 spec IF YOU CAN'T USE IT. It's almost
like the idea of claiming that OGL is at 1.5 now. The SPEC is at 1.5 now,
implementations are mostly at 1.3 and some at 1.4. That's just on Windows
btw. Other OSes are even further behind.

> > Example,
> > everyone is enamored (myself included) with the 2.0 spec and variations
> > thereof; however, nobody is holding their breath on seeing it for a long
> > time. Every couple of months, out comes the "6 months to wait..."
>
> Open processes do take longer; often longer than the participant realize.

LOL. Open processes do not take longer due to their 'open' nature. The
OpenGL ARB is notorious for being sloooooow. I'm sure you'll refute that.

> > What the hell does THAT have to do with anything, LOL.
>
> OpenGL can't evolve faster than the IHVs are willing and able to go.

That's not accurate. The IHVs who mattered during OpenGL's infancy were
all more than willing to go faster (SGI, Sun, HP.) Going from 1.0 to 1.2
was not held up by people like 3Dfx, nVidia, Rendition, ATI, Matrox, et
al.

> > You want to write
> > off the snail pace set by the ARB to the fact that hardware vendors were
> > slow to implement 1.0 in hardware? In any case, it wasn't difficult to
get
> > a 1.0 software rasterizer and then there was Cosmo (don't know if we
should
> > bring that up... lol)
>
> I'm glad we are able to give you lots of laughs.

Either laugh or cry at the remarkably narrow vision you have. I prefer to
find it humorous in a sort of 'sad clown' manner.

> >>>There are great advantages to using OpenGL, you should realize that new
> >>>feature availability for games and any product that isn't 'in house'
> >
> > where
> >
> >>>you can control the hardware is NOT a strong point of OpenGL.

> >>You've lost me here. There are essentially two commodity graphics
> >>providers and they seem to provide similar support for Direct3D and
> >>OpenGL, plus or minus a release or two.

Are you now only talking about OpenGL as a Windows rendering API? There are
MANY IHVs involved in offering OpenGL solutions. BTW, seen many full 1.4
implementations yet? I've seen plenty of partials.

> > Of course I have. Do you actually know anything about deploying
software
> > upon a broad variety of video cards?
>
> Absolutely. I support software on a broad variety of video cards and
> operating systems.

How many of your 1.4 drivers support the full 1.4 spec?

> > What is just about the only thing
> > professional game developers dislike about OpenGL? I'll give you a hint,
it
> > starts with ext and ends with ensions.
>
> Extensions are your friends. They allow IHVs to implement new
> functionality faster than any other option. Certainly it can be faster
> than waiting for either the ARB or Microsoft to bless something new.

Extensions are your friends only if you work on limited hardware.

WTH


Dave Eberly

unread,
Aug 20, 2003, 1:12:22 PM8/20/03
to
"wogston" <sp...@nothere.net> wrote in message
news:bi080j$sq9$1...@phys-news1.kolumbus.fi...

> Any way to bind when loading dynamically? (THAT is the big question, I
> generate HLSL code dynamically from renderstates and sometimes compile
> that... so I use compiler backend in runtime, not always, but sometimes,
and
> then HLSL way is better)

I believe the Cg runtime supports this. The contractor I hired
to add shader support to my system wanted to put in the
dynamic loading support, but I preferred not to have that on
the first release of the engine update.

WTH

unread,
Aug 20, 2003, 1:12:07 PM8/20/03
to
> Eh? You seem to conveniently ignore the existance of
> ARB_vertex_program, ARB_fragment_program and Cg.

Cg runs on what? ARB_vertex_program/fragment is actually implemented by
how many 1.4 drivers?

> You do not seem to have heard of abstraction", "information hiding",
> "separating interface and implementation", etc. Separate code paths
> (for the sake of taking advantage of a vendor/ARB extensions) add zero
> complexity to the system, they're just a simple matter of programming,
> one code monkey/month more.

"Separate code paths add zero complexity to the system"? Surely you are
joking. If they don't, what exactly would you say introduces complexity to
software? NOT writing code? Lol... Not implementing multiple directions
to solve the same problem? That is very funny.

> > ... but good luck running your shaders across two
> > different OGL cards when you can use the same code across dozens of DX
> > cards.
>
> It's not a matter of luck. It usually JustWorks(tm). Anyway, whether
> it works or not is a quality of implementation issue, not the one or
> the other API advantage/drawback.

Yes, that is the whole point. Because different OpenGL cards use different
extensions to support things that SHOULD be defined by the API, you must
write more code in OpenGL than you would in D3D.

> That said, I see DirectX Graphics and OpenGL as roughly equivalent,
> with one DX drawback (besides being ugly, proprietary and single
> platform) and one OpenGL advantage:

It hasn't been ugly for two versions now (although it WAS ugly as hell, try
1996 for ugly as hell.)
Proprietary? Do you think you can implement OpenGL and call it OpenGL for
nothing?

> - DX has only interleaved vertex buffers (correct me of I'm wrong),
> which

You're wrong.

> a) slows it down when you need to update only part of the data (say
> only normals or only texunit 5 coords), because you'd have to upload
> ALL of it to GART/video memory.

Wrong again.

> b) makes integrating vertex data into application specific data
> structures harder, probably making necessasy copying stuff around.

Wrong.

> - OpenGL extensions mechanism are a clear advantage IMHO: it both
> exposes latest and greatest features of the latest hardware and
> facilitates getting proven (on the field) features into the core
> standard.

Yes, the benefit of extensions is that they are immediate. Not something
that game developers really care about. As for "facilitates getting
features into the core standard", I think that is both untrue and
damaging to OpenGL. Not only does it say "you don't need to include it
because cards will simply offer an extension instead", it tells the ARB
that they don't need to think on their own about OGL: "we can just
incorporate the things we find hardware vendors doing that work." In
other words, only the IHVs drive the API; it isn't a co-operative thing
like it is with D3D.

WTH


WTH

unread,
Aug 20, 2003, 1:18:13 PM8/20/03
to
> One thing HLSL and Cg are doing "wrong" is the semantic binding. Semantic
is
> good for output to fragment processor, obviously, so that it knows what to
> do with the data. It's also "good" for input to fragment processor, so
that
> it knows what to do with the data if "fixed" pipe is being used. Ditto for
> VS if it is using fixed pipeline.

You realize that you can construct an HLSL shader in real-time with DX9? ;)

> But when both vertex and fragment processors are programmed with custom
> shaders, the only semantic that doesn't get in the way and is useful is
the
> fragment program output. Semantics just limit how much per-vertex
properties
> can be streamed into the program. I could be wrong, but it looks likely
> something will be done about this in DirectX 10. ;-)

I'm not sure I understand what problem you are describing, could you
restate that for me? (please)
You have used multi-stream vertex and pixel shaders, yes?

> Since speaking of Cg and HLSL, HLSL has one advantage in it's
implementation
> of the same language. It's the ability to define the entry point symbolic
> name when compiling. This way can combine vertex and pixer programs into
> same sourcefile. Sometimes this is very convenient. I use symbols vsmain()
> and psmain() when I write such processing pipes. Similiar functionality
for
> Cg would be welcome addition.

I think you can do that with Cg as well, unfortunately, you can't compile Cg
shaders in real-time on the 'client side' (can you?)

WTH


WTH

unread,
Aug 20, 2003, 1:19:52 PM8/20/03
to
> I believe the Cg runtime supports this. The contractor I hired
> to add shader support to my system wanted to put in the
> dynamic loading support, but I preferred not to have that on
> the first release of the engine update.

Sh*t, I didn't know that. Thanks for the info (very interesting.)

WTH


WTH

unread,
Aug 20, 2003, 1:21:03 PM8/20/03
to
> All the same, we should think how productive it is to 'fight' which is
> better API..

Sadly, my post to the original post's author sparked a crusade when all I
tried to do was objectively point out some of the differences between the
two (as I use both at work as well.)

WTH


WTH

unread,
Aug 20, 2003, 1:21:32 PM8/20/03
to
> Ofcourse Linux (and Windows) networks are also seen, instead of "big
iron"..
> but everyone knows that, but somehow mentioning it makes you look (in your
> own opinion, err.. mine in this case) sharpest tool in the box. ;-)

LOL, can we call an IR2 "big iron" now? Hehe...

WTH:)


WTH

unread,
Aug 20, 2003, 1:22:16 PM8/20/03
to
> I think you can do that with Cg as well, unfortunately, you can't compile
Cg
> shaders in real-time on the 'client side' (can you?)

I think Dave just pointed out that you can... My bad.

WTH


wogston

unread,
Aug 20, 2003, 2:02:23 PM8/20/03
to
> Sh*t, I didn't know that. Thanks for the info (very interesting.)

What API call is it?


wogston

unread,
Aug 20, 2003, 2:08:25 PM8/20/03
to
> You realize that you can construct an HLSL shader in real-time with DX9?
;)

I do, actually.


> I'm not sure I understand what problem you are describing, could you
restate
> that for me? (please)
> You have used multi-stream vertext and pixel shaders, yes?

Limited number of bind semantics.

The number of *input* streams has nothing to do with that; each
component (called a 'semantic' in HLSL) is 'named' (read: 'has a
semantic'). I have yet to run out of semantics, but that situation is as
sure to come as the sun is to rise tomorrow. Meanwhile I have faith in
my ability to write work-arounds, and that I am not the only programmer
on the face of the Earth who is aware of the forthcoming issue,
especially since I posted my thoughts on Usenet.


> I think you can do that with Cg as well, unfortunately, you can't compile
Cg
> shaders in real-time on the 'client side' (can you?)

You can, but the entry-point cannot be defined anywhere in the API I can
see, if you have found it I have use for the knowledge.


wogston

unread,
Aug 20, 2003, 2:09:53 PM8/20/03
to
> I think Dave just pointed out that you can... My bad.

He should point out the entry point to the method/function which
achieves this. Using the offline compiler isn't "it", unless I want to
write the shader into a file, compile it, and load the compiled binary
into my application. I would prefer compiling from memory.

If there is an entry point for this, giving its name will help pin-point
it in the documentation, so I can read the method description and get
started.


Stefanus Du Toit

unread,
Aug 20, 2003, 2:45:21 PM8/20/03
to
"wogston" <sp...@nothere.net> writes:
> > I think Dave just pointed out that you can... My bad.
>
> He should point out the entry-point to the method/function which achieves
> this. Using offline compiler isn't "it", unless I want to write the shader
> into file, compile, and load the compiled binary to my application. I would
> prefer compiling from the memory.

If "metaprogramming" shaders interests you, then you may want to look
at our language called Sh, which lets you do these sorts of things
very easily and naturally, at least if you're using C++.

See http://libsh.sf.net/ for more information.

Beware that we've just released our first usable release last
month. It is however being worked on heavily -- I will be releasing
another release this week with a working optimizer which drastically
improves the generated code and a Windows port. In the future there
are plans for all sorts of virtualization and generic
stream-processing support, which several people are working on.

Just couldn't resist the temptation for this shameless plug.

--
Stefanus Du Toit; http://3.141592.org/; gpg 4bf2e217; #include <iostream>
template<int i,int j=i-1>struct I{I(){if(i%j)I<i,j-1>();else I<i-1>();}};
template<int i>struct I<i,1>{I(){std::cout<<i<<'\n';I<i-1>();}};template<
>struct I<1,0>{I(){}};int main(){I<50>();}/* Use -ftemplate-depth-5000 */

WTH

unread,
Aug 20, 2003, 2:56:34 PM8/20/03
to

"wogston" <sp...@nothere.net> wrote in message
news:bi0der$b6f$1...@phys-news1.kolumbus.fi...

> > You realize that you can construct an HLSL shader in real-time with DX9?
> ;)
>
> I do, actually.

Sorry, I received your other post (pointing this out) after I posted this.
I like constructing them programmatically as well.

> Limited number of bind semantics.
>
> Number of *input* streams have nothing to do with that, each component
> (called 'semantic' in HLSL) is 'named' (read: 'has a semantic'). I'm yet
to
> run out of semantics but the situation is sure as sun is to rise tomorrow
to
> come. Meanwhile I have faith in my ability to write work-arounds and that
I
> am not the single programmer on face of the Earth who is aware of the
> forthcoming issue, especially since I posted my thoughts in the Usenet.

Sorry, I was confused because I thought you were referring to something
that wasn't optional. Semantics are optional, so I thought you were
talking about 'binding' in relation to binding a semantic to a register
(I was confused, wasn't I... ;))

> > I think you can do that with Cg as well, unfortunately, you can't
compile
> Cg
> > shaders in real-time on the 'client side' (can you?)
>
> You can, but the entry-point cannot be defined anywhere in the API I can
> see, if you have found it I have use for the knowledge.

Hopefully Dave will tell us.

WTH:)


Dave Eberly

unread,
Aug 20, 2003, 3:01:11 PM8/20/03
to
"wogston" <sp...@nothere.net> wrote in message
news:bi0dhi$baf$1...@phys-news1.kolumbus.fi...

> He should point out the entry-point to the method/function
> which achieves this.

I did point this out. It is called "-entry". Here are the
command lines for compiling the vertex shader (vmain
is the entry) and the pixel shader (pmain is the entry),
both shaders in the same .cg file.

cgc iridescence.cg -profile vs_2_x -entry vmain
cgc iridescence.cg -profile ps_2_x -entry pmain

Dave Eberly

unread,
Aug 20, 2003, 3:02:37 PM8/20/03
to
"wogston" <sp...@nothere.net> wrote in message
news:bi0d3g$ag8$1...@phys-news1.kolumbus.fi...

> > Sh*t, I didn't know that. Thanks for the info (very interesting.)
>
> What API call is it?

I will soon be posting Wild Magic 2. You can download
the source to see how I handle .cg files.

WTH

unread,
Aug 20, 2003, 3:24:45 PM8/20/03
to
You can do this with the Cg runtime? For dynamic shaders?

WTH

wogston

unread,
Aug 20, 2003, 3:30:01 PM8/20/03
to
> I will soon be posting Wild Magic 2. You can download
> the source to see how I handle .cg files.

CGDLL_API CGprogram cgCreateProgram(CGcontext ctx,
CGenum program_type,
const char *program,
CGprofile profile,
const char *entry,
const char **args);

It seems this is new in Cg Toolkit 1.1 -- it shows that it takes some
use to get things right. ;)
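
A minimal sketch of using it to compile straight from memory (the
profile and entry name are just examples; error checking omitted):

#include <Cg/cg.h>

// Compile a vertex program from an in-memory source string, picking
// the entry point by name: no temporary file, no offline cgc run.
CGprogram CompileFromMemory(const char* source)
{
    CGcontext ctx = cgCreateContext();
    return cgCreateProgram(ctx, CG_SOURCE, source,
                           CG_PROFILE_ARBVP1,   // example profile
                           "vsmain",            // entry point symbol
                           NULL);               // extra compiler args
}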


wogston

unread,
Aug 20, 2003, 3:30:51 PM8/20/03
to
> Just couldn't resist the temptation for this shameless plug.

Not at all, it's great that you brought this to our attention (I'm sure I
speak for more people than for myself only), so thanks!


wogston

unread,
Aug 20, 2003, 3:31:24 PM8/20/03
to
> I did point this out. It is called "-entry". Here are the
> command lines for compiling the vertex shader (vmain
> is the entry) and the pixel shader (pmain is the entry),
> both shaders in the same .cg file.

Wrong answer, but thanks anyway. ;-)


Dave Eberly

unread,
Aug 20, 2003, 3:43:39 PM8/20/03
to

"wogston" <sp...@nothere.net> wrote in message
news:bi0i7q$kl0$1...@phys-news1.kolumbus.fi...

> It seems this is new on Cg Toolkit 1.1-- this shows that it takes some use
> to get things right. ;)

I only have used version 1.1, so was not aware of the entry
point problem in version 1.0.

Dave Eberly

unread,
Aug 20, 2003, 3:44:46 PM8/20/03
to
"wogston" <sp...@nothere.net> wrote in message
news:bi0iae$kv0$1...@phys-news1.kolumbus.fi...

Maybe this is the right answer to your wrong question :)
At any rate, I am not the one having problems with Cg.
It works for me...

wogston

unread,
Aug 20, 2003, 3:44:50 PM8/20/03
to
> You can do this with the Cg runtime? For dynamic shaders?

Yeah, I found it after downloading the Cg 1.1 Toolkit; it wasn't too
hard. I used 1.0 and it didn't have this (or if it did, I didn't find it
back then and was 'forced' to use HLSL, since with HLSL things just
started working and I didn't have to be Mr. Detective.. things rolled
out on their own very easily).

Cg and HLSL being the same language (*1 The Cg Tutorial by M. Kilgard
points out some differences, if anyone is interested), the biggest
difference now seems to be, if any, which compiler backend does the
better optimization job.

I found a few bugs in the Microsoft HLSL compiler backend and they are
supposedly already fixed, but the Summer 2003 interim release isn't
publicly out yet.. only registered DirectX 9.0 developers have access to
it, so HLSL isn't "known bug-free" for the general public at this time..
besides these few issues, it generates pretty good code. Now that Cg 1.1
supports what I need, I could just as well compare them.. I'll opt for
the better compiler, an easy trick to do, as I have abstracted the code
generation and state management anyway to match our internal libraries'
workflow better.

This is what it looks like to compile HLSL shader from a file (just example,
there's more than this):

const char shaderfile[] = "test.shader";

mpVertexShader = CreateVertexShader(shaderfile,"vsmain");
mpPixelShader = CreatePixelShader(shaderfile,"psmain");

Binding renderstate to C++ program:

ShaderHandle handle = mpVertexShader->GetShaderHandle("modelviewproj");

... that's just the backend, but replacing HLSL with Cg will be trivial;
that's one of the perks abstraction gives.. this goes back to the time
when I was still comparing them. For a "pipeline" shader, where the
pixel and vertex programs are coupled tightly together, it's just a
single CreateShader() call. I also have a "standard" API for accessing
the most common, unified render states like the M, V, P transformations
and such. Obviously custom states aren't unified. etc.

Shaders are really, really easy to work with all the same, and it's not
magic to use them. Unfortunately there is always this "New Things Are
Hard" mindset among some programmers; they seem to go "aaaah" & "ooooh"
when someone does something buzzwordish. Shaders had this "bang!" effect
for early applications, where a game demo's only merit was that it used
shaders (but not in any way that indicated a performance gain over doing
things the traditional way, or that did something that wasn't already
possible in the first place).

My first shader was a Gouraud-shaded triangle.. that was the biggest
"learning experience" part of it; after that, it was down to business
and just hammering the code in, i.e. the intellectual exercise was over,
back to work. ;-)


WTH

unread,
Aug 20, 2003, 3:46:07 PM8/20/03
to
Very cool...

WTH

"wogston" <sp...@nothere.net> wrote in message

news:bi0i7q$kl0$1...@phys-news1.kolumbus.fi...

wogston

unread,
Aug 20, 2003, 3:46:32 PM8/20/03
to
> I only have used version 1.1, so was not aware of the entry
> point problem in version 1.0.

1.0 effectively steered me to the ways of HLSL last fall; it's my own
fault I didn't update my knowledge.


Dave Eberly

unread,
Aug 20, 2003, 3:47:42 PM8/20/03
to
"wogston" <sp...@nothere.net> wrote in message
news:bi0iae$kv0$1...@phys-news1.kolumbus.fi...

I should have also pointed out that the Cg API call you
posted in another response is the one that is used by
the "CgConverter" tool to generate the shader files for
my engine. As you discovered, that API allows you to
compile at run time...

Will R

unread,
Aug 20, 2003, 4:45:17 PM8/20/03
to
>Yes, great, a faster route to a poor solution... Wonderful. You really
>don't see the extensions implementation as a serious weakness for OpenGL as
>a gaming API?
>

I'm just going to butt in randomly, and hopefully not lengthen this thread too
much.

In DX, IIRC, you have to check for things like hardware TnL or pixel shaders,
right? I'm not a big DX guy, so I could be wrong.

In OpenGL, you have to make an extensions query to check for things like pixel
shaders, or vertex buffers.

With DX, a feature is only accessible if MS decides it belongs in the official
API. With OpenGL, anybody and his uncle can throw an extension into their
drivers.

So, regardless of whether you are using DX or OGL, you need to make some
queries to the API to find out what features are supported by the user's
hardware. In DX, you have to wait until a feature makes it into a major
API revision. In OpenGL, you get access to the features a whisker
quicker, but you may wind up with similar extensions from competing
manufacturers, unless an extension eventually gets rolled into an OGL
revision.

You may also wind up with a similar issue in DX -- IIRC, the pixel shader
support in DX 8 was a bit wonky. ps 1.1 was for nVidia, and ps 1.2 was for
ATI, or something like that (I never personally coded any DX 8, let alone any
DX 8 pixel shader programs...). So, despite the centralisation of the DX API,
you still wound up with two semi-competing "extensions."

The OGL way has some tradeoffs, relative to the DX way. But, the reverse is
also true, and for the most part, DX and OGL are equally terrible when it comes
to supporting new features. They just take different ways to get there.
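
To put code to that, a minimal side-by-side sketch of the two kinds of
query (DX9-style caps on one side, the GL extension string on the other;
error handling omitted, and the exact caps field you test obviously
depends on the feature):

#include <d3d9.h>
#include <GL/gl.h>
#include <string.h>

// Direct3D: ask the device for its caps and test the version you need.
bool HasVertexShaders(IDirect3DDevice9* device)
{
    D3DCAPS9 caps;
    device->GetDeviceCaps(&caps);
    return caps.VertexShaderVersion >= D3DVS_VERSION(1, 1);
}

// OpenGL: scan the extension string for the name you need.
bool HasArbVertexProgram()
{
    const char* exts = (const char*) glGetString(GL_EXTENSIONS);
    return exts != 0 && strstr(exts, "GL_ARB_vertex_program") != 0;
}
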
------------------
Woooogy
I have to go back in time to pretend to be myself when I tell myself to tell
myself, because I don't remember having been told by myself to tell myself. I
love temporal mechanics.

Alex Mizrahi

unread,
Aug 20, 2003, 7:35:51 PM8/20/03
to
Hello, WTH!
You wrote on Wed, 20 Aug 2003 11:28:57 -0400:

W> You don't know what you're talking about here.

i DO know what i'm talking about.

W> You simply use a helper function to ask for a pointer to the D3D object
W> you wish to use. You don't have to addref, you don't have to query
W> interface, you don't have to release. You can work at that level if you
W> wish, but you have no need to.
W> You don't have to know anything about COM to use DX now. Stop
W> confusing people with your ignorance about D3D. You can use direct
W> creation if you wish, but indirect is just the same as any other API.

what helpers do you mean? i haven't seen any examples of them. you mean
D3DX? really, there is not much need to do QueryInterface, but
addref/release will always be needed (you need to somehow say that you
don't need that object anymore, ye?)
even in C++ that style produces overburdened code, too heavy to
understand and write.
in languages that natively support COM at a high level - C# for
example - it can be really nice code. but that's the smart wrapper
working; a lot of work has to be done to write such a wrapper.
in languages that support COM worse than C++, working with D3D will be
hell, unless a special wrapper is written. OpenGL code will be nice in
most languages w/o any wrappers.

>> but as i understand there is no such wrapper, at least standard
>> one(for non-standard, one can implement calls to d3d via ogl - but it
>> will not be pure d3d).
>> in case of pure C they do such trick in header:

W> Are you still talking about using D3D from C? LOL. There are two
W> different situations which you are trying to lump together. (1)You
W> are writing C and using a C++ compiler, absolutely no weird crap.
W> (2)You are writing C and using a C compiler, you need to pass a
W> pointer to itself as the first argument. Holy crap that is tough...

i don't like the C language, i don't even know it well. i'm speaking
about working with 3d graphics from languages that support COM at the
same level as C does.

>>>> but it's really ugly, as for me, and it requires much more work to
>>>> bind it with some programming language. and possibly on some
>>>> languages it's impossible at all.
>>>> with OpenGL pure C can be used, it's as standart as API can be.

W> It isn't ugly you idiot, you don't work in the header files, maybe
W> you should move out of the dark ages of 3D graphics and actually use
W> C++ in any case, lol.

i'm writing in C++ most of the time.

W> I spent 3 years maintaining an ANSI C codebase between IRIX, Solaris,
W> HP-Unix, and NT, and providing C++ wrappers and C wrappers. This
W> included a
W> DirectX renderer back in the 'execute buffer' times. It isn't ugly,
W> and it isn't difficult.

ye, it's really beautiful to write

IDirect3DDevice9_SetRenderState(device, D3DRS_SRCBLEND,
D3DBLEND_SRCALPHA );
IDirect3DDevice9_SetRenderState(device, D3DRS_DESTBLEND,
D3DBLEND_INVSRCALPHA );

or

device->lpVtbl->SetRenderState(device..

instead of

glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);

W> LOL. "very bad looking". Try to be objective. In any case, none of
W> the tripe you posted above has anything to do with your original
W> statement about "pure C" and there being a "standard." I would spend
W> the 5 minutes deconstructing your argument except it has nothing to
W> do with this thread.

W> You appear to be trying to turn this into a C versus C++ thread. If
W> you limit yourself to working in C, that's fine, but don't try to
W> make OpenGL look better than D3D because you think it is more 'C
W> friendly.'

OpenGL is more 'C friendly' - more compatible with different languages.
to work with D3D at as clean a level as OpenGL offers by nature, you
have to use C# or similar languages. you don't agree with such
arguments?

>> many games use scripting languages, having C/C++ only for parts of
>> engine where high performance is needed.
>> and there is more applications that can use 3d graphics other than
>> games.

W> This thread is about why there don't seem to be many OGL games versus
W> D3D games.

really? this part of the thread is about D3D style - COM-like interfaces
and 'hungarian notation'. that's what i'm discussing. 3d graphics for
games was one question of the original posting, but i think that doesn't
mean the whole thread should be only about games.

W> As for games using scripting languages, yes, they do, but ONLY FOR
W> SCRIPTABLE EVENTS.
W> The game logic itself does not run in script, the input stage, the task
W> stage, the AI stage, the render stage, et cetera are NOT in a scripting
W> language. You appear to be trying to make it seem like they are.

not only. in "Blade of Darkness" Python was used for different things,
up to resource initialization and texture loading. as i've heard,
saving/loading was done there by saving/loading the whole Python virtual
machine.
i've also heard about a game with a lot written in Java, something like
a flight simulator, IL maybe.

C++ is not the best language for AI, as far as i know. it is better to
use something like Lisp.

and there are functional programming languages that can make C++ look
'stone age'. i don't have experience with them, but people are saying
they are a lot better than imperative languages for some tasks (fans say
for all tasks). there is a Quake written in Haskell, and they say it's
not slower than the C Quake. the strong point of functional languages is
that they can optimize programs on the fly, deeper than a C++ optimizer
ever can. that Quake accelerated itself after some time.

>> some applications are easier to write on language other than C++, and
>> part that uses 3d can be not the main part.

W> Are you a professional 3D graphics developer?

what do you mean by 'professional graphics developer'?
i'm a student at an Applied Math faculty and don't have a job yet, but
i've been working with 3d graphics since 1999, and have written some 3d
graphics programs.
you can find them at http://indirect3d.sf.net http://i3dfm.sf.net

W>>> In any case, the OGL bindings for many of those languages are
W>>> incomplete and/or out of date (not all of them.)

>> but it's easy to do bingings yourself.

W> You can't argue in one place that D3D is bad because using it from C
W> requires ugly headers and then argue that a value of OpenGL is that
W> you can go and generate your own bindings which are much more
W> complicated and beyond most newbies. You can't have your cake and
W> eat it to.

opengl is more easily bindable than d3d. what is not ok in this statement?

>> i bet i can write perl script in 10-20 minutes that will produce OGL
>> header from gl.h for any language that supports foriegn function
>> interface(if i know syntax of that language).
W> i'm
>> not sure that write binging with d3d is easy enough.

W> You're not sure, you know why? Because you don't know squat about
W> D3D.

i know what d3d8 is about. if i need to, i think i can start writing d3d
programs after just a few days of refreshing the api functions.

we were developing an Indirect3D library (link is above) which did
rendering via d3d8 and opengl, with almost equal abilities. it's not
some 'engine for rendering a cube'; 1 meg of source code, 35k lines is
something, ye? by the way, it uses COM interfaces and is written in C++,
so you understand that COM doesn't frighten me. i was not developing the
d3d rendering part, but i've looked through the sources and seen
equal-functionality code in ogl and d3d, so i can compare which is more
elegant in my opinion.

>> OpenGL has very few requirements to run - language doesn't even have
>> to
W> have
>> pointers to run it(at least with limited performance).
>> so OGL will be unreachable when task is other than another one game
W> written
>> on C++ for Windows by Mircosoft beurocratic API fans.

W> So what? How many people are going to write games that are in
W> languages that don't support pointers? Why the hell would you want
W> to?

as far as i know, games for PC (and xbox) have only some percent of the
game market; most is taken by stuff like the PlayStation 2. so it's not
a very good idea to apply 3d graphics only to games. there are a lot of
different applications, some visualization..

personally i'm going to apply 3d graphics in the UI field, making
ordinary applications look 3d. you can take a look at www.3dna.com -
there people are creating a 3d shell; last time i visited there were a
lot of quotes from gamedev people saying 'what a kewl thing'.

With best regards, Alex Mizrahi aka killer_storm.


Momchil Velikov

unread,
Aug 21, 2003, 4:38:21 AM8/21/03
to
"WTH" <ih8...@spamtrap.com> wrote in message news:<vk7b2jh...@corp.supernews.com>...

> > Eh? You seem to conveniently ignore the existance of
> > ARB_vertex_program, ARB_fragment_program and Cg.
>
> Cg runs on what? ARB_vertext_program/fragment is actually implemented by
> how many 1.4 drivers?

Certainly in the drivers of all cards capable of running DX8 and up.

> > You do not seem to have heard of abstraction", "information hiding",
> > "separating interface and implementation", etc. Separate code paths
> > (for the sake of taking advantage of a vendor/ARB extensions) add zero
> > complexity to the system, they're just a simple matter of programming,
> > one code monkey/month more.
>
> "Separate code paths add zero complexity to the system"? Surely you are
> joking. If they don't, what exactly would you say introduces complexity to
> software? NOT writing code? Lol... Not implement multiple directions to
> solve the same problem? That is very funny.

Most complexity comes from the interaction between components, not
from the complexity of the components themselves. After all, that's
one of the outcomes of proper design - small, simple components.

>
> > > ... but good luck running your shaders across two
> > > different OGL cards when you can use the same code across dozens of DX
> > > cards.
> >
> > It's not a matter of luck. It usually JustWorks(tm). Anyway, whether
> > it works or not is a quality of implementation issue, not the one or
> > the other API advantage/drawback.
>
> Yes, that is the whole point. Because different OpenGL cards use different
> extensions to support things that SHOULD be defined by the API, you must
> write more code in OpenGL than you would in D3D.

No, there's one API - ARB_vertex_program/ARB_fragment_program.

>
> > That said, I see DirectX Graphics and OpenGL as roughly equivalent,
> > with one DX drawback (besides being ugly, proprietary and single
> > platform) and one OpenGL advantage:
>
> It hasn't been ugly for two versions now (although it WAS ugly as hell, try
> 1996 for ugly as hell.)
> Proprietary? Do you think you can implement OpenGL and call it OpenGL for
> nothing?

By "proprietary" I meant "controlled by a single company".

>
> > - DX has only interleaved vertex buffers (correct me of I'm wrong),
> > which
>
> You're wrong.

Yeah, it appears DX9 finally caught up with OpenGL (that and polygon
offset).
And stencil buffers before. And what not ...

Fact is that MS do not actually need the ARB/extensions process -
they just use OpenGL's and rip off the good stuff which comes out of
it.

>
> > a) slows it down when you need to update only part of the data (say
> > only normals or only texunit 5 coords), because you'd have to upload
> > ALL of it to GART/video memory.
>
> Wrong again.
>
> > b) makes integrating vertex data into application specific data
> > structures harder, probably making necessasy copying stuff around.
>
> Wrong.

Yeah, yeah, wrong, but be specific - wrong only if the above
assumption is not true, and it is true in DX < 9, no ?

> > - OpenGL extensions mechanism are a clear advantage IMHO: it both
> > exposes latest and greatest features of the latest hardware and
> > facilitates getting proven (on the field) features into the core
> > standard.
>
> Yes, the benefit of extensions are that they are immediate. Not something
> that game developers really care about. As for "facilitates getting
> features into the core standard", I think that is both untrue and damaging
> to OpenGL. Not only does it say "you don't need to include it because cards
> will simply offer an extension instead" it tells the ARB that they don't
> need to think on their own about OGL, "we can just incorporate the things we
> find hardware vendors doing that works."

Wrong. Change it to "we can just incorporate the things developers
found useful". The extension process provides alternative solutions
for problems and allows developers to actually pick the better one.
Every standards committee should codify *existing* practice, standards
"invented" by committees usually die young. Like PS 1.0/1.1, no ?

> In otherwords, only the IHVs drive
> the API, it isn't a co-operative thing like it is with D3D.

Huh ? Cooperative ? How so ? Can you provide some examples of
cooperative interaction between MS and IHVs, which resulted in such
and such DX feature ?

~velco

wogston

unread,
Aug 21, 2003, 8:02:18 AM8/21/03
to
> Huh ? Cooperative ? How so ? Can you provide some examples of
> cooperative interaction between MS and IHVs, which resulted in such
> and such DX feature ?

EMBM, S3TC (=DXTC), PS 1.3+ for ATI; NV was driving DOT3 and PS1.1 as they
had their Register Combiners in HW, etc. etc.


WTH

unread,
Aug 21, 2003, 10:20:09 AM8/21/03
to
> > Cg runs on what? ARB_vertext_program/fragment is actually implemented
by
> > how many 1.4 drivers?
>
> Certainly in the drivers of all cards capable of running DX8 and up.

Not at all. Look at the consumer range of graphics cards. You can't use
*_vertex_program on a GeForce2 MX, or a GeForce4 MX (I think, I'm not
positive about this one.) There are many cards that do not have the ability
to use vertex shaders (or pixel shaders.) In D3D, D3D handles that in
software for you (rather well actually), in OpenGL, guess what happens? ;)
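
(To make that concrete - a rough, untested DX9 sketch; 'hWnd' is just
whatever window you already have. The behavior flag on device creation is
all it takes to get the CPU fallback for vertex shaders:)

// request software vertex processing so vertex shaders still run on
// cards with no hardware VS support
IDirect3D9 *d3d = Direct3DCreate9(D3D_SDK_VERSION);
D3DPRESENT_PARAMETERS pp;
ZeroMemory(&pp, sizeof(pp));
pp.Windowed   = TRUE;
pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
IDirect3DDevice9 *dev = NULL;
d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                  D3DCREATE_SOFTWARE_VERTEXPROCESSING, &pp, &dev);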

> > "Separate code paths add zero complexity to the system"? Surely you are
> > joking. If they don't, what exactly would you say introduces complexity
to
> > software? NOT writing code? Lol... Not implement multiple directions
to
> > solve the same problem? That is very funny.
>
> Most complexity comes from the interaction between components, not
> from the complexity of the components themselves. After all that's
> one of the outcomes of the proper design - small, simple components.

Yes, that is one tiny aspect of complexity, and it assumes that your
components have complexities that do not exhibit themselves unless forced to
interact. Multiple code paths are a much simpler and more obvious example of
introducing complexity into a code base.

> > > > ... but good luck running your shaders across two
> > > > different OGL cards when you can use the same code across dozens of
DX
> > > > cards.
> > >
> > > It's not a matter of luck. It usually JustWorks(tm). Anyway, whether
> > > it works or not is a quality of implementation issue, not the one or
> > > the other API advantage/drawback.
> >
> > Yes, that is the whole point. Because different OpenGL cards use
different
> > extensions to support things that SHOULD be defined by the API, you must
> > write more code in OpenGL than you would in D3D.
>
> No, there's one API - ARB_vertex_program/ARB_fragment_program.

Which some 1.4 drivers support and some do not, and some video cards have
1.4 drivers and some do not...


> > > That said, I see DirectX Graphics and OpenGL as roughly equivalent,
> > > with one DX drawback (besides being ugly, proprietary and single
> > > platform) and one OpenGL advantage:
> >
> > It hasn't been ugly for two versions now (although it WAS ugly as hell,
try
> > 1996 for ugly as hell.)
> > Proprietary? Do you think you can implement OpenGL and call it OpenGL
for
> > nothing?
>
> By "proprietary" I meant "controlled by a single company".

It isn't controlled in the same way that other MS APIs are controlled (like
MFC for example.) It is fairly cooperative between MS and the IHVs
(probably because MS is still 'dating' them to get them to leave OpenGL.)

> > > - DX has only interleaved vertex buffers (correct me of I'm wrong),
> > > which
> >
> > You're wrong.
>
> Yeah, it appears DX9 finally caught up with OpenGL (that and polygon
> offset).
> And stencil buffers before. And what not ...

Actually, DX9 has gone right past OpenGL. Examine the streaming system.
This is what I am complaining about in this thread. I am an OpenGL fan, D3D
has been quickly catching OpenGL for the past 5 years, and has now blown
right by. I want OpenGL to catch up. For this to happen, people need to
pull their heads out of the sand and see that D3D is now AT LEAST the equal
of OpenGL in every area (and superior in some) with the exception of
portability. There's no reason for this to be the case. Sadly, nVidia,
ATI, Matrox, and the others use the OpenGL ground as a place to fight. They
don't do that in the DirectX arena because MS specifies what they should or
should not support (after consulting them as a group.)

> Fact it that MS do not actually need the ARB/extensions process -
> they just use the OpenGL's one and rip off the good stuff which comes
> out.

? Sorry, but I could do single pass multi-texture on WinTel with DX before
you could do it with OpenGL. I can use vertex streams in DX right now and
not use them in OpenGL. I can use Shaders in DX with vertex streams, right
now, in DX, and not in OpenGL. I could use shaders in DX before I could use
them in OpenGL (with the exception of one video card.)

> > > a) slows it down when you need to update only part of the data (say
> > > only normals or only texunit 5 coords), because you'd have to upload
> > > ALL of it to GART/video memory.
> >
> > Wrong again.
> >
> > > b) makes integrating vertex data into application specific data
> > > structures harder, probably making necessasy copying stuff around.
> >
> > Wrong.
>
> Yeah, yeah, wrong, but be specific - wrong only if the above
> assumption is not true, and it is true in DX < 9, no ?

Do you want to start comparing against OpenGL 1.2? Or would you rather
stick with 1.4?

> > Yes, the benefit of extensions are that they are immediate. Not
something
> > that game developers really care about. As for "facilitates getting
> > features into the core standard", I think that is both untrue and
damaging
> > to OpenGL. Not only does it say "you don't need to include it because
cards
> > will simply offer an extension instead" it tells the ARB that they don't
> > need to think on their own about OGL, "we can just incorporate the
things we
> > find hardware vendors doing that works."
>
> Wrong. Change it to "we can just incorporate the things developers
> found useful". The extension process provides alternative solutions
> for problems and allows developers to actually pick the better one.
> Every standards committee should codify *existing* practice, standards
> "invented" by committees usually die young. Like PS 1.0/1.1, no ?

The alternative to simply sitting around and waiting for 'valuable ideas' to
make themselves evident is to talk to the IHVs, who are constantly trying to
outdo each other. Ask them for their 'developer feedback', et cetera.

> > In otherwords, only the IHVs drive
> > the API, it isn't a co-operative thing like it is with D3D.
>
> Huh ? Cooperative ? How so ? Can you provide some examples of
> cooperative interaction between MS and IHVs, which resulted in such
> and such DX feature ?

Uhm, nearly all of the features from DX7 onwards.

WTH


WTH

unread,
Aug 21, 2003, 10:53:24 AM8/21/03
to
> what helpers do you mean? haven't seen any examples from. you mean D3DX?
> really there is no much need to do QueryInterfaces, but addref/release
will
> always be needed(you need somehow say that you don't need that object
> anymore, ye?)

No. Alex, I think you are stuck on the D3D from a couple years back. You
should really look at it now. DX9 is fantastic. That is a scary thing for
OpenGL. This is why I am putting so much effort into this thread. Direct3D
is leaving OpenGL behind RIGHT NOW. That is a bad thing.

> even on C++ that style produces overburdened, too heavy to understand and
> write code.

For school children? In any case, you're wrong about what you do or don't
need to do to use DirectX.

> on languages that natively support COM at high level - C# for example - it
> can be really nice code. but that's smart wrapper working, a lot of work
has
> to be done to write such wrapper.
> on languages that support COM worse than C++ working with D3D will be as
> hell, unless special wrapper is written. OpenGL code will be nice with
most
> languages w/o any wrappers.

Worse than C++? For example? In any case, D3D is designed to work from
C++, but uses COM so that you can use it from any language. Now, go install
the default Java SDK and try to use OpenGL. You can't do it, you know why?
Because you have to create wrappers to generate 'native' code, to make your
Java code non-portable. Defeats the whole purpose of using Java. To use
OpenGL from any language other than C/C++, you must create wrappers as well.
You don't for DirectX unless it is a language that can't support COM (most
compiled languages do.)

> i don't like C language, i don't even know it well. i'm speaking about
> working with 3d graphics from languages that support COM on level same as
C.

Such as? Java doesn't. What language in particular are you referring to?

> ye, it's really beatiful to write
>
> IDirect3DDevice9_SetRenderState(device, D3DRS_SRCBLEND,
> D3DBLEND_SRCALPHA );
> IDirect3DDevice9_SetRenderState(device, D3DRS_DESTBLEND,
> D3DBLEND_INVSRCALPHA );

Yes, it is. Just as it is beautiful to write:

glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );

You really see a difference in complexity between the two? Be objective now
;).

> device->lpVtbl->SetRenderState(device..
>
> instead of
>
> glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);

Sigh... ONLY if you are using C and no wrapper.

> OpenGL is more 'C friendly' - more kompatible with different languages. to
> work with D3D on as klean level as OpenGL is by nature you have to use C#
or
> same languages. you don't agree with such arguments?

No, it is not more 'C friendly.'

It LOOKS slightly cleaner when using it with C. But you can't use OpenGL
from other languages in any case (not without wrappers), whereas you can use
Direct3D from other languages WITHOUT wrappers.

> >> many games use scripting languages, having C/C++ only for parts of
> >> engine where high performance is needed.
> >> and there is more applications that can use 3d graphics other than
> >> games.
>
> W> This thread is about why there don't seem to be many OGL games versus
> W> D3D games.
>
> really? this part of thread is about D3D style - COM-like interfaces and
> 'hungarian notation'. that's what i'm discussing. 3d graphics for games
was
> one question of original posting, but i think that doesn't mean that whole
> thread should be only about games.

No, YOU have been trying to make a new thread about 'style.' Go start the
thread outside of this one. Style was an aspect of the original discussion,
you have now tried to make it the entire discussion.

> W> As for games using scripting languages, yes, they do, but ONLY FOR
> W> SCRIPTABLE EVENTS.
> W> The game logic itself does not run in script, the input stage, the
task
> W> stage, the AI stage, the render stage, et cetera are NOT in a
scripting
> W> language. You appear to be trying to make it seem like they are.
>
> not only. in "Blade of Darkenss" Python was used for different things up
to
> resouces initializing and texture loading. as i've heard, saving/loading
was
> done there by saving/loading whole Python virtual machine.
> i've also heard about game with a lot written on Java, something like
> aviasimulator, IL maybe.

Yes, and "Blade of Darkness" was a very unusual example of a game
implementation (very good one as well), but you are trying to make it sound
like this is common. It is not. It is VERY rare.

> C++ is not best language for AI, as far as i know. better to use something
> like Lisp.

WTF does that have to do with OpenGL/D3D?

> and there are functional programming languages, that can make C++ 'stone
> age'. i didn't have experience with them, but people are saying they are a
> lot better than imperative languages for some tasks(fans say that at all
> task). there is Quake written on Haskell, and they say it's not slower
than
> C quake. the strong point of functional languages is that they can on fly
> optimize programs deeper than C++ optimizer can ever. that Quake
accelerated
> itself after some time.

You shouldn't believe everything you read/hear. Who cares if one language is
better than another for generic or obscure purposes.

People code commercial retail games almost exclusively in C++. They know
better than you about what they should or should not be doing.

> what do you mean by 'professional graphics developer'?
> i'm a student of Applied Math faculty and don't get a job yet, but i've
> working with 3d graphics since 1999, and have written some 3d graphics
> programs.

My point being, you sound like an academic. You haven't dealt with the real
world issues of development. This is why you think all the points of an
argument hold the same value. As if it matters that code style has the same
importance as feature set, et cetera.

> >> but it's easy to do bingings yourself.

Really? Then you can easily create a D3D wrapper for your C calls and avoid
the ->lpVtbl-> issue that so horrifies you.
I love how you seem to argue both ways on the same topic, ergo, my thinking
you're not a professional developer.

> W> You can't argue in one place that D3D is bad because using it from C
> W> requires ugly headers and then argue that a value of OpenGL is that
> W> you can go and generate your own bindings which are much more
> W> complicated and beyond most newbies. You can't have your cake and
> W> eat it to.
>
> opengl is more easily bindable than d3d. what is not ok in this statement?

It isn't true. How is opengl more "bindable" than D3D? Not true for Java,
not true for any COM capable language.

> i know what d3d8 is about. if i need i think i can start writting d3d
> programs just after few days of api funcitons refreshing.

Of course. That begs the question, how have you gotten so confused?

> we were developing some Indirect3D library(link is above) which did
> rendering via d3d8 and opengl, with almost equal abilities. it's not some
> 'engine to rendering cube', 1 meg of source code, 35k lines is something,
> ye? by the way, it uses COM interfaces and is written in C++, so you
> understand that COM doesn't frightens me. i was not developing d3d
rendering
> part, but i've looked through sources and watched equal functionality code
> in ogl and d3d, so i can kompare which is more elegant in my opinion.

Wow, 35k lines. Lol. Try >3 million (at SoftImage.)
Why don't you stop using other people's observations and learn for yourself.
I really shouldn't have bothered to argue with you because you don't know
what you're talking about in any case. You have simply "looked through
sources", lol.

> >> OpenGL has very few requirements to run - language doesn't even have

You are outright lying here. Take a new machine, put an OS on it. Install
GL for that platform (if necessary.) EXACTLY how many languages does OpenGL
on that machine work with? Answer? C/C++. That's it.

> as far as i know, games for PC(and xbox) have only some percent of game
> market, most is taken by stuff like Play Station2. so it's not a very good
> idea to apply 3d graphics only to games. there are a lot of different
> applications, some visualization..

"only some percent", hmmm... BILLIONS of dollars?
Of course it isn't a good idea to apply 3d graphics to only game UNLESS THE
THREAD IS ABOUT 3D GRAPHICS FOR GAMES.

> personally i'm going to apply 3d graphics in UI field making ordinary
> applications 3d look. you can take a look at www.3dna.com - there people
are
> creating 3d shell, last time i visited it there were a lot quotes from
> gamedev people saying 'what a kewl thing'.

Good luck, honestly :). Somebody needs to make the cognitive leap from 2D
to 3D in general.

Alex, a little bit of advice, take a look at DX9 w/o any preconceptions or
biases. By no means am I advocating not using OpenGL, I use OpenGL everyday
at work, I am advocating you opening your mind to the idea that both APIs
can be very valuable and have specific strengths, not just one.

WTH


Ruud van Gaal

unread,
Aug 21, 2003, 12:04:05 PM8/21/03
to
On Wed, 20 Aug 2003 12:36:43 +0200, Immanuel Albrecht <xrx...@gmx.de>
wrote:

>KILLSP...@marketgraph.nl (Ruud van Gaal) wrote in
>news:3f433453...@news.xs4all.nl:
>
>
>>>Would be surprising to me if those SGI workstations would not make use
>>>of OpenGL. Since who began with OpenGL?
>>
>> SGIs use OpenGL, but that's just previews.
>> Running Pixar RenderMan on Linux sounds more like the actual software
>> rendering process (little to do with OpenGL). Raytracing and such.
>
>Of course, but how many previews do you see before you go to final stage?

I doubt that, if DX existed on SGIs and you could select it (like with
Max on a PC), it would make much difference. Not enough to justify
selecting DX over OpenGL or vice versa.
What it comes down to for those situations is (CPU) rendering power;
that is just CPU speed & architecture, and has very little to do with 'OGL
vs. DX'. That's what I was trying to point out.


Ruud van Gaal
Free car sim: http://www.racer.nl/
Pencil art : http://www.marketgraph.nl/gallery/

Momchil Velikov

unread,
Aug 21, 2003, 12:08:24 PM8/21/03
to
"wogston" <sp...@nothere.net> wrote in message news:<bhvcsu$9gh$1...@phys-news1.kolumbus.fi>...

> > - DX has only interleaved vertex buffers (correct me of I'm wrong),
> > which
> > a) slows it down when you need to update only part of the data (say
> > only normals or only texunit 5 coords), because you'd have to upload
> > ALL of it to GART/video memory.
> > b) makes integrating vertex data into application specific data
> > structures harder, probably making necessasy copying stuff around.
>
> One thing HLSL and Cg are doing "wrong" is the semantic binding.

In Cg with ARBvp1 profile you can specify bindings for up to 15
generic vertex attributes, with no API/language semantic attached
whatsoever.

Also, no one prevents you from using NORMAL binding for texture unit
5 coords and TEXUNIT3 binding for vertex position in your vertex
programs, as long as you do not forget that glNormalPointer actually
means texture coordinates when your vertex program is active :)

It is that simple - several streams of 4-vectors, with semantics
imposed by the vertex program and nothing else.
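
A rough, untested sketch of what that looks like on the C side with
ARB_vertex_program generic attributes (the attribute indices and array
names here are made up for illustration):

/* the vertex program decides what each generic attribute means */
glEnableVertexAttribArrayARB(0);
glVertexAttribPointerARB(0, 4, GL_FLOAT, GL_FALSE, 0, positions);
glEnableVertexAttribArrayARB(5);   /* could be "texunit 5 coords", or anything */
glVertexAttribPointerARB(5, 4, GL_FLOAT, GL_FALSE, 0, extra_data);
glDrawArrays(GL_TRIANGLES, 0, vertex_count);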

> Since speaking of Cg and HLSL, HLSL has one advantage in it's implementation
> of the same language. It's the ability to define the entry point symbolic
> name when compiling.

'Twas answered elsewhere ...

~velco

wogston

unread,
Aug 21, 2003, 4:00:20 PM8/21/03
to
> It is that simple - several streams of 4-vectors, with semantics
> imposed by the vertex program and nothing else.

Diddidyydiyiioyidii... it's so simple that it's not far off running out of
components. I am doing texture synthesis on the GPU so that I don't have to move
1024x1024x4x1.33 bytes of data through the bus just so that I can generate a
new Unique Texture for the caching subsystem. These vertex and
pixel programs get kind of complex after a while, when you add different
noise generation, etc.

I'm yet to run out of semantics, but it's not difficult to see that happening
in a few years' time.. and then... does the API scale? I hope it will.


Momchil Velikov

unread,
Aug 22, 2003, 2:24:07 AM8/22/03
to
"wogston" <sp...@nothere.net> wrote in message news:<bi38cm$4ug$1...@phys-news1.kolumbus.fi>...

It already does, the number of vertex attributes is limited by
MAX_VERTEX_ATTRIBS_ARB, which the implementation can increment with no
changes to the API.
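
(E.g., a two-line untested sketch of querying it at runtime:

GLint max_attribs = 0;
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS_ARB, &max_attribs); /* spec minimum is 16 */

and the vertex program just uses vertex.attrib[0..max_attribs-1].)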

~velco

Momchil Velikov

unread,
Aug 22, 2003, 3:08:00 AM8/22/03
to
"WTH" <ih8...@spamtrap.com> wrote in message news:<vk9lat4...@corp.supernews.com>...

> > > Cg runs on what? ARB_vertext_program/fragment is actually implemented
> by
> > > how many 1.4 drivers?
> >
> > Certainly in the drivers of all cards capable of running DX8 and up.
>
> Not at all. Look at the consumer range of graphics cards. You can't use
> *_vertex_program on a GeForce2 MX, or a GeForce4 MX (I think, I'm not
> positive about this one.) There are many cards that do not have the ability
> to use vertex shaders (or pixel shaders.) In D3D, D3D handles that in
> software for you (rather well actually), in OpenGL, guess what happens? ;)

I meant, of course, cards capable of running vertex programs in
hardware.

The CPU is always there, even if the card does not support
ARB_vertex_program, so one can perform the computations on the CPU
anyway, without the need for murky assembly or a stripped-down C variant.

And it is a quality of implementation issue again, not an API deficiency.
Nothing prevents a software implementation of ARB_vertex_program.
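
Checking whether the extension is there at all is the usual string test -
an untested sketch, and note the naive strstr has the well-known
prefix-match caveat:

#include <string.h>
#include <GL/gl.h>

const char *ext = (const char *) glGetString(GL_EXTENSIONS);
int have_arb_vp = ext && strstr(ext, "GL_ARB_vertex_program") != NULL;
if (!have_arb_vp) {
    /* fall back to doing the per-vertex math on the CPU */
}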

>
> > > "Separate code paths add zero complexity to the system"? Surely you are
> > > joking. If they don't, what exactly would you say introduces complexity
> to
> > > software? NOT writing code? Lol... Not implement multiple directions
> to
> > > solve the same problem? That is very funny.
> >
> > Most complexity comes from the interaction between components, not
> > from the complexity of the components themselves. After all that's
> > one of the outcomes of the proper design - small, simple components.
>
> Yes, that is one tiny aspect of complexity, and it assumes that you
> components have complexities that do not exhibit themselves unless forced to
> interact. Multiple code paths is a much simpler and obvious example of
> introducing complexity into a code base.

That's one point - complexity/simplicity is hidden behind interfaces.
Exposing complexity usually means exposing implementation - bad.

>
> > > > > ... but good luck running your shaders across two
> > > > > different OGL cards when you can use the same code across dozens of
> DX
> > > > > cards.
> > > >
> > > > It's not a matter of luck. It usually JustWorks(tm). Anyway, whether
> > > > it works or not is a quality of implementation issue, not the one or
> > > > the other API advantage/drawback.
> > >
> > > Yes, that is the whole point. Because different OpenGL cards use
> different
> > > extensions to support things that SHOULD be defined by the API, you must
> > > write more code in OpenGL than you would in D3D.
> >
> > No, there's one API - ARB_vertex_program/ARB_fragment_program.
>
> Which some 1.4 drivers support, and some do not, which some video cards have
> 1.4 drivers and some do not...

ARB_vertex_program - ATI 8500+, nVidia GF3+ (without MX crap), Matrox
Parhelia, 3Dlabs VP (and nVidia GF256/GF2/GF4MX in software).

ARB_fragment_program - ATI 9500+, nVidia GFFX

And these are pretty much the ones which matter. See
http://www.delphi3d.net/hardware/index.php for more information.

> Actually, DX9 has gone right past OpenGL. Examine the streaming system.

I stand corrected about the vertex streams. However, I fail to notice
the difference/advantages compared to OpenGL vertex arrays, which were
there since version 1.1

> This is what I am complaining about in this thread. I am an OpenGL fan, D3D
> has been quickly catching OpenGL for the past 5 years, and has now blown
> right by. I want OpenGL to catch up. For this to happen, people need to
> pull their heads out of the sand and see that D3D is now AT LEAST the equal
> of OpenGL in every area

Yep, it appears almost equal.

> (and superior in some) with the exception of

superior in which areas?

> > Fact it that MS do not actually need the ARB/extensions process -
> > they just use the OpenGL's one and rip off the good stuff which comes
> > out.
>
> ? Sorry, but I could do single pass multi-texture on WinTel with DX before
> you could do it with OpenGL.

The earliest mention of multitexture I can find is this:
http://www.opengl.org/developers/about/arb/notes/arb-feb.html

I strongly suspect SGI already had SGIS_multitexture at that time
(someone with more information?) How about DX ?

> I can use vertex streams in DX right now and
> not use them in OpenGL. I can use Shaders in DX with vertex streams, right
> now, in DX, and not in OpenGL. I could use shaders in DX before I could use
> them in OpenGL (with the exception of one video card.)

All of these you can do now with OpenGL. Speaking of shaders, the ATI and
nVidia programmable pipeline extensions were no less proprietary than
DX - and the fact that they were different was a good thing - it
allowed developers to choose/express preference, instead of stuff
being pushed down their throats.

> > Yeah, yeah, wrong, but be specific - wrong only if the above
> > assumption is not true, and it is true in DX < 9, no ?
>
> Do you want to start comparing against OpenGL 1.2? Or would you rather
> stick with 1.4?

Vertex arrays are in the spec since 1.1, dunno the exact date, but
before December 1994.


Anyway, when comparing it's better to compare on concrete hardware.

> > > Yes, the benefit of extensions are that they are immediate. Not
> something
> > > that game developers really care about. As for "facilitates getting
> > > features into the core standard", I think that is both untrue and
> damaging
> > > to OpenGL. Not only does it say "you don't need to include it because
> cards
> > > will simply offer an extension instead" it tells the ARB that they don't
> > > need to think on their own about OGL, "we can just incorporate the
> things we
> > > find hardware vendors doing that works."
> >
> > Wrong. Change it to "we can just incorporate the things developers
> > found useful". The extension process provides alternative solutions
> > for problems and allows developers to actually pick the better one.
> > Every standards committee should codify *existing* practice, standards
> > "invented" by committees usually die young. Like PS 1.0/1.1, no ?
>
> The alternative to simply sitting around and waiting for 'valuable ideas' to
> make themselves evident is to talk to the IHVs who are constantly trying to
> out do each other. Ask them for their 'developer feedback', et cetera.

No need to "talk to the IHVs" because they are part of the
standardization process. If you see the ARB as a separate entity,
please, take a look at the list of the ARB voting members
http://www.opengl.org/developers/about/arb.html

> > > In otherwords, only the IHVs drive
> > > the API, it isn't a co-operative thing like it is with D3D.
> >
> > Huh ? Cooperative ? How so ? Can you provide some examples of
> > cooperative interaction between MS and IHVs, which resulted in such
> > and such DX feature ?
>
> Uhm, nearly all of the features from DX7 onwards.

Of course, one can only speculate about this, because MS/IHV
interactions are closed.

~velco

wogston

unread,
Aug 22, 2003, 6:44:38 AM8/22/03
to
> It already does, the number of vertex attributes is limited by
> MAX_VERTEX_ATTRIBS_ARB, which the implementation can increment with no
> changes to the API.

That's cool, but off-topic; I was talking about Microsoft removing the semantic
limitation from HLSL.. now I understand why you so stubbornly refused to
face the reality. Ditto.


wogston

unread,
Aug 22, 2003, 6:57:50 AM8/22/03
to
> > Actually, DX9 has gone right past OpenGL. Examine the streaming system.
>
> I stand corrected about the vertex streams. However, I fail to notice
> the difference/advantages compared to OpenGL vertex arrays, which were
> there since version 1.1

OpenGL vertex arrays are the equivalent of DrawIndexedPrimitiveUP and
DrawPrimitiveUP, which are not the preferred ways to stream vertices
(DrawPrimitive and DrawIndexedPrimitive are an order of magnitude faster). They
are just a mechanism to stream from a user supplied memory address when the
data is generated dynamically - but even in these cases the IHVs recommend
having a dynamic vertex buffer, locking it, filling it, unlocking it and issuing
the drawing command, because this approach, if the dynamic vertex buffer has the
no-overwrite flag defined when filling, can dynamically re-allocate the memory
from an internal pool and not stall the existing command queue.
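
Roughly, as an untested DX9 sketch ('dev' is an already-created device;
VB_BYTES, Vertex and the offset bookkeeping are placeholders - the lock
flags are the interesting part):

// create once: dynamic, write-only VB in the default pool
IDirect3DVertexBuffer9 *vb = NULL;
dev->CreateVertexBuffer(VB_BYTES, D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY,
                        0, D3DPOOL_DEFAULT, &vb, NULL);

// per frame: DISCARD when starting over, NOOVERWRITE when appending,
// so the driver never stalls on a buffer the GPU is still reading from
void *p = NULL;
vb->Lock(offset, bytes, &p,
         offset == 0 ? D3DLOCK_DISCARD : D3DLOCK_NOOVERWRITE);
memcpy(p, vertices, bytes);
vb->Unlock();
dev->SetStreamSource(0, vb, 0, sizeof(Vertex));
dev->DrawPrimitive(D3DPT_TRIANGLELIST, first_vertex, tri_count);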

So OpenGL vertex arrays are hardly the equivalent of the GOOD way to draw stuff,
and they are not legendarily fast compared to immediate mode. Display lists
are a similarly ignored "optimization" on consumer hardware.


> Vertex arrays are in the spec since 1.1, dunno the exact data, but
> before December 1994.

And performance-wise their effect has always been very small, whereas the
vertex buffer / stream model has enabled the GPU to really shine.

The current 100 million triangles per second in real-world applications wouldn't
be possible if every single vertex were moved through even an AGP 8X bus.
Even though PCI traditionally has had poor interconnection and internal buses,
it has managed to outpace dedicated, 10 times more expensive professional
systems. The introduction of PCI Express in the coming quarters is going to
change the way we can use the GPU; the bus is full-duplex, low-latency, and
high-bandwidth.. even if the APIs remain roughly the same, the way we can *use*
the GPU will be changed.

This is the situation where vertex arrays have a more useful position in the
API... it took roughly a decade for PC hardware to actually catch up with what
SGI, for instance, had 10 years ago; until that time, these cool innovations
were ahead of their time & not so useful on their own (for PC programmers).

wogston

unread,
Aug 22, 2003, 7:06:05 AM8/22/03
to
> It already does, the number of vertex attributes is limited by
> MAX_VERTEX_ATTRIBS_ARB, which the implementation can increment with no
> changes to the API.

Also,

The Cg Tutorial, 2.1.1:
"An output structure differents from a conventional C or C++ structure
because it includes SEMANTICS for each member."

The Cg Tutorial, 2.1.16:
"Semantics are, in a sense, the glue that binds a Cg program to the rest of
the graphics pipeline."

In Cg, in fact, the struct you input from C/C++ does not need to be binary
compatible with the one you declare in the Cg source. The Cg runtime
connects the members in the input to the correct semantics with keywords like
"POSITION" and "COLOR", and so on.

I need to transfer a lot of states from vertex program to pixel program, and
I see the Cg specs say that output structs need to have a semantic. Alright,
I give up, how is that done with MAX_VERTEX_ATTRIBS_ARB?


JB West

unread,
Aug 22, 2003, 10:41:37 AM8/22/03
to
The "Extension mechanism" that is being maligned is there precisely because
Microsoft
refuses to upgrade their OpenGL! In the rest of the non-Windows graphics
world, we
have direct access to new OpenGL features w/o having to go the extra step of
the
extension mechanism. It's an example of the power of the API setup that
allows it to be an effective
API *even when* a major vendor refuses to participate in the progress of the
API.
And, to work on laptops *and supercomputing clusters*, and everything in
between,
including 64-bit viz for many years now.

Microsoft ignores these market spaces.

Microsoft, on the other hand, has taken the tack that they can force
software developers to
alter code as API's change. This is probably OK for high-volume short-life
stuff (games), but
extremely costly for low-volume professional/scientific applications which
take months
and months to re-develop and re-certify. That's why some ISV's don't want
API /functional changes more
than ~ yearly -- it's impossible to deploy large complex applications at
that speed.

As the burden of installed software "inertia" increases, all API's
inevitably slow down
in their velocity of change.

Extension mechanisms allow for gradual uptake, with strict backwards
compatibility.
Both API's have now reached the crtitcal mass where drastic changes to
backwards
compatibility are no longer very feasible -- too many man-years of ISV
software is
is out there in customer's hands.

OpenGL has an EXPLICIT CONTRACT to not break backwards compatibility.
D3D does not. If you are betting the life of your company on an API, this is
a very
serious issue to consider; not the only one, but an important one.

Thus, API's, like all software, has a maturity life-cycle. OpenGL, D3D
... -- these too shall
pass, in their time.

So: One is definitely better for some, one is definitely better for others,
and there's a large
grey zone where either is more-or-less about the same. You see a
distribution of software to
match that reality. A lot of CAD, sci-viz & etc is definitely largely
OpenGL. Are OpenGL games
inherently inferior/superior to D3D games -- I don't think so. Grey zone.
32-bit, Windows-only
graphics in certain spaces -- D3D. Swell. Life is good.

Everyone wins from competing implementations.

"fungus" <open...@SOCKSartlum.com> wrote in message
news:JTk0b.1313588$iM4.2...@telenews.teleline.es...
> WTH wrote:
> > Stop trying to make this a 'OGL better than D3D' contest.
> >
>
> Hey, take a look at the name of the newsgroup!
>
>
> > Horsesh*t. OpenGL 1.x is still 1.x because the ARB takes forever to
> > incorporate anything into OpenGL, NOT because OpenGL already has
everything
> > it needs, lol.
>
> Yet strangely enough OpenGL is still as good as Direct3D.
>
> > ...and nobody with an objective viewpoint would view 'extensions' as a
> > satisfactory method for supporter all the new features available from
modern
> > 3D hardware.
> >
>
> Rubbish. It works perfectly.
>
> It's certainly no worse than D3D capability bits.
>
> >>OpenGL 2.0 is coming out in a matter of months.
> >
> >
> > People have been saying this for more than a year.
> >
>
> Not me. For the last year or so I've been saying it's
> due out in SIGGRAPH (September).
>
>
>
> --
> <\___/> For email, remove my socks.
> / O O \
> \_____/ FTB. Why isn't there mouse-flavored cat food?
>
>
>


Momchil Velikov

unread,
Aug 23, 2003, 3:57:11 AM8/23/03
to
"wogston" <sp...@nothere.net> wrote in message news:<bi4svg$sn0$1...@phys-news1.kolumbus.fi>...

> > > Actually, DX9 has gone right past OpenGL. Examine the streaming system.
> >
> > I stand corrected about the vertex streams. However, I fail to notice
> > the difference/advantages compared to OpenGL vertex arrays, which were
> > there since version 1.1
>
> OpenGL vertex arrays are equivalent of DrawIndexedPrimitievUP and
> DrawPrimitiveUP,

Not true. Rather, OpenGL vertex arrays are similar to streams, with
their data source being in system or GART or video memory.

> which are not prefered ways to stream vertices (the
> DrawPrimitive and DrawIndexedPrimitive) are order of magnitude faster. They
> are just mechanism to stream from user supplied memory address, when the
> data is generated dynamically- but even in these cases the IHV's recommend
> hacing a dynamic vertexbuffer, locking it, filling it, unlocking and doing
> drawing command, because this approach, if the dynamic vertexbuffer has
> overwrite flag defined when filling, can dynamically re-allocate the memory
> from internal pool and not stall existing command queue.

No. This is beneficial, because:
a) data is copied in bulk, as opposed to vertex at a time
b) there's no possibility of the vertex data changing between
commands, so the card can transfer it in parallel with the user
issuing drawing commands.

> So OpenGL vertex arrays are hardly equivalent of the GOOD way to draw stuff,

Not true. They, together with ARB_vertex_buffer, are *the* fastest way
to feed vertex attributes to the card, in no way slower than DX
streams.

> and they are not legendarily fast compared to immediate mode. Display lists
> are similiarly ignored "optimization" on consumer hardware.

Not true. See these results, which include VAR/VAO/display
lists/immediate mode/whatnot http://www.fl-tw.com/opengl/GeomBench/

> > Vertex arrays are in the spec since 1.1, dunno the exact data, but
> > before December 1994.
>
> And performance wise their effect always been very small, where vertexbuffer
> / stream model has enabled the GPU to really shine.

Vertex arrays can be used with or without vertex buffer (objects).

> Current 100 million triangles per second on real-world applications wouldn't
> be possible, if every single vertex were moved through, even, AGP 8X bus.

Huh? Surely every single vertex is moved through the AGP bus? Maybe
you mean "moved alone"? That's what vertex arrays allow you - to
perform batch operations (with {Multi}DrawArrays,
{Multi}DrawElements).
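
(I.e., something like the following untested sketch instead of a
glVertex* loop - 'positions' and 'indices' being your own arrays:)

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, positions);
glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, indices);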

> Even if PCI traditionally has had poor interconnection and internal buses,
> it has managed to outpace dedicated, 10 times more expensive professional
> systems. The introduction of PCI Express in the next quarters is going to
> change the way we can use GPU, the bus is full-duplex, low-latency, and
> high-bandwidth.. even if API's remain roughly the same, the way we can *use*
> the GPU will be changed.

irrelevant (besides being doubtful speculation)

> This is the situation, where vertex arrays have more useful position in the
> API... roughly a decade for PC hardware to actually catch up with what SGI
> for instance had 10 years ago, until this time, these cool innovations were
> ahead of their time & not so useful on their own (for PC programmers).

*sigh* see above. Batching improves performance.

~velco

Momchil Velikov

unread,
Aug 23, 2003, 4:01:36 AM8/23/03
to
"wogston" <sp...@nothere.net> wrote in message news:<bi4s6p$rb6$1...@phys-news1.kolumbus.fi>...

Wasn't it you who said "One thing HLSL and Cg are doing
"wrong" is the semantic binding"? Or do you think Cg is used with DX
only? How did I "stubbornly refuse to face the reality" by talking
about Cg and ARBvp in an OpenGL group, after you mentioned Cg?

~velco

Momchil Velikov

unread,
Aug 23, 2003, 4:05:17 AM8/23/03
to
"wogston" <sp...@nothere.net> wrote in message news:<bi4tev$f5$1...@phys-news1.kolumbus.fi>...

> > It already does, the number of vertex attributes is limited by
> > MAX_VERTEX_ATTRIBS_ARB, which the implementation can increment with no
> > changes to the API.
>
> I need to transfer a lot of states from vertex program to pixel program, and
> I see the Cg specs say that output structs need to have a semantic. Alright,
> I give up, how is that done with MAX_VERTEX_ATTRIBS_ARB?

Sorry, I misunderstood you. When you said "Semantics just limit how
much per-vertex properties can be streamed into the program", I
assumed that since you were talking about per-vertex properties you
meant vertex programs.

You need more interpolated inputs to the fragment program ?

~velco

wogston

unread,
Aug 23, 2003, 12:08:30 PM8/23/03
to
> Wasn't that you that said say "One thing HLSL and Cg are doing
> "wrong" is the semantic binding ?" Or you think Cg is used with DX
> only ? How did I "stubbornly refuse to face the reality" by talking
> about Cg and ARBvp in an OpenGL group, after you mentioned Cg ?

I didn't talk about ARBvp, and quite frankly you shouldn't piss your pants
if I talk about DirectX 9 in an OpenGL group when it was on-topic.


wogston

unread,
Aug 23, 2003, 12:09:38 PM8/23/03
to
> You need more interpolated inputs to the fragment program ?

Not at this time, but in the future I can see myself needing more. I thought
using the term "semantic" was well defined within HLSL and Cg, but I can
now see that we simply had a miscommunication and I was the cause, so apologies
are in order; consider yourself apologized. ;-)


wogston

unread,
Aug 23, 2003, 12:59:32 PM8/23/03
to
> > OpenGL vertex arrays are equivalent of DrawIndexedPrimitievUP and
> > DrawPrimitiveUP,
>
> Not true. Rather, OpenGL vertex arrays are similar to streams, with
> their data source being in system or GART or video memory.

They only supply a pointer; the user cannot guarantee that the memory pointer
he provides is mapped to the AGP aperture. This needs driver-level control, so
at best an OpenGL vertex array can *memcpy* the data to an appropriate location.


> > which are not prefered ways to stream vertices (the
> > DrawPrimitive and DrawIndexedPrimitive) are order of magnitude faster.
They
> > are just mechanism to stream from user supplied memory address, when the
> > data is generated dynamically- but even in these cases the IHV's
recommend
> > hacing a dynamic vertexbuffer, locking it, filling it, unlocking and
doing
> > drawing command, because this approach, if the dynamic vertexbuffer has
> > overwrite flag defined when filling, can dynamically re-allocate the
memory
> > from internal pool and not stall existing command queue.
>
> No. This is beneficial, because:
> a) data is copied in bulk, as opposed to vertex at a time
> b) there's no posibility of the vertex data changing between
> commands, so the card can transfer it in parallel with the user
> issuing drawing commands.

A) DrawPrimitive() doesn't transfer vertices at all if the current streams
are created in video memory and there isn't swapping (i.e. enough memory),
so this model is very efficient.

B) This is a non-issue with the streaming model. If you read what I wrote again,
the real reason for the extra efficiency with the no-overwrite flag comes from
the fact that if there is already an existing DrawPrimitive() command currently
being processed, the API doesn't have to wait for it to execute before it can
lock, so that it doesn't write into a region of memory that is currently being
read. No-overwrite allows the driver to dynamically grab memory from the
available pool but keep the "binding" of this memory to the current VB object;
the previously owned memory will be released to the pool. Actually the driver
has a lot more freedom to do what it wants, but this is what the DirectX
optimization guides from nVidia say about the topic, so I'm very likely to
listen to what the hardware developers have to say.


> > So OpenGL vertex arrays are hardly equivalent of the GOOD way to draw
stuff,
>
> Not true. They, together with ARB_vertex_buffer are *the* fastest way
> to feed vertex attributes to the card, in no way slower that DX
> streams.

That's the ARB_vertex_buffer way, not plain vertex arrays. I would compare them like
this:

DrawPrimitiveUP <-> vertex arrays
DrawPrimitive <-> ARB_vertex_buffer (+ vertex arrays)

This is the difference, get it? =)


> Not true. See these results, which include VAR/VAO/display
> lists/immediate mode/whatnot http://www.fl-tw.com/opengl/GeomBench/

These statistics only prove that it is very difficult to write an optimized
OpenGL application: I can count over 20 different ways to
render, and none of them is the fastest on every particular 3D accelerator. In
fact, if I combine multiple charts I get more like 50-100 different ways to
render, but a single chart has around 20 figures.

This is my system:
(comments after the chart)

-System---------------------------------------------------------------------
----------------
Vendor: GenuineIntel
Name: Unknown processor
Speed: 1692 Mhz
-OpenGL---------------------------------------------------------------------
----------------
Vendor: ATI Technologies Inc.
Version: 1.3.3842 WinXP Release
Renderer: RADEON 9700 PRO x86/SSE2
Extensions: GL_ARB_multitexture GL_EXT_texture_env_add
GL_EXT_compiled_vertex_array
GL_S3_s3tc GL_ARB_depth_texture GL_ARB_fragment_program
GL_ARB_multisample
GL_ARB_point_parameters GL_ARB_shadow GL_ARB_shadow_ambient
GL_ARB_texture_border_clamp
GL_ARB_texture_compression GL_ARB_texture_cube_map
GL_ARB_texture_env_add
GL_ARB_texture_env_combine GL_ARB_texture_env_crossbar
GL_ARB_texture_env_dot3
GL_ARB_texture_mirrored_repeat GL_ARB_transpose_matrix
GL_ARB_vertex_blend
GL_ARB_vertex_program GL_ARB_window_pos GL_ATI_draw_buffers
GL_ATI_element_array
GL_ATI_envmap_bumpmap GL_ATI_fragment_shader
GL_ATI_map_object_buffer
GL_ATI_separate_stencil GL_ATI_texture_env_combine3
GL_ATI_texture_float
GL_ATI_texture_mirror_once GL_ATI_vertex_array_object
GL_ARB_vertex_buffer_object
GL_ATI_vertex_attrib_array_object GL_ATI_vertex_streams
GL_ATIX_texture_env_combine3
GL_ATIX_texture_env_route
GL_ATIX_vertex_shader_output_point_size GL_EXT_abgr
GL_EXT_bgra GL_EXT_blend_color GL_EXT_blend_func_separate
GL_EXT_blend_minmax
GL_EXT_blend_subtract GL_EXT_clip_volume_hint
GL_EXT_draw_range_elements
GL_EXT_fog_coord GL_EXT_multi_draw_arrays GL_EXT_packed_pixels
GL_EXT_point_parameters
GL_EXT_rescale_normal GL_EXT_secondary_color
GL_EXT_separate_specular_color
GL_EXT_stencil_wrap GL_EXT_texgen_reflection GL_EXT_texture3D
GL_EXT_texture_compression_s3tc
GL_EXT_texture_cube_map GL_EXT_texture_edge_clamp
GL_EXT_texture_env_combine
GL_EXT_texture_env_dot3 GL_EXT_texture_filter_anisotropic
GL_EXT_texture_lod_bias
GL_EXT_texture_object GL_EXT_texture_rectangle
GL_EXT_vertex_array
GL_EXT_vertex_shader GL_HP_occlusion_test
GL_NV_texgen_reflection GL_NV_blend_square
GL_NV_occlusion_query GL_SGI_color_matrix
GL_SGIS_texture_edge_clamp
GL_SGIS_texture_border_clamp GL_SGIS_texture_lod
GL_SGIS_generate_mipmap
GL_SGIS_multitexture GL_SUN_multi_draw_arrays GL_WIN_swap_hint
WGL_EXT_extensions_string
WGL_EXT_swap_control
----------------------------------------------------------------------------
----------------
IM Display List (H 0 I S I): 184.09 FPS, 141.40 MTS, 768k Tris
(T109/288)
IM Display List (H 0 I S S): 183.75 FPS, 141.14 MTS, 768k Tris
(T125/288)
VA Display List (H 0 F S I): 183.75 FPS, 141.14 MTS, 768k Tris
(T137/288)
IM Display List (H 0 F S S): 183.42 FPS, 140.89 MTS, 768k Tris
(T121/288)
VA Display List (H 0 I S I): 183.36 FPS, 140.84 MTS, 768k Tris
(T141/288)
VA Display List (H 0 I S S): 183.36 FPS, 140.84 MTS, 768k Tris
(T157/288)
IM Display List (H 0 F S I): 183.09 FPS, 140.63 MTS, 768k Tris
(T105/288)
VA Display List (H 0 F S S): 177.76 FPS, 136.54 MTS, 768k Tris
(T153/288)
Static VAO (H 0 F S S): 164.45 FPS, 126.31 MTS, 768k Tris
(T185/288)
Dynamic VAO (H 0 F S S): 156.07 FPS, 119.88 MTS, 768k Tris
(T217/288)
Static VAO (H 0 F S I): 143.48 FPS, 110.20 MTS, 768k Tris
(T169/288)
Dynamic VAO (H 0 F S I): 142.48 FPS, 109.44 MTS, 768k Tris
(T201/288)
Dynamic VAO (H 0 I S S): 138.81 FPS, 106.62 MTS, 768k Tris
(T221/288)
Static VAO (H 0 I S S): 133.82 FPS, 102.79 MTS, 768k Tris
(T189/288)
Static VAO (H 0 I S I): 124.13 FPS, 95.34 MTS, 768k Tris
(T173/288)
Dynamic VAO (H 0 I S I): 123.83 FPS, 95.12 MTS, 768k Tris
(T205/288)
Static VAO (H 0 I L S): 119.51 FPS, 91.79 MTS, 768k Tris
(T181/288)
Static VAO (H 0 F L S): 118.14 FPS, 90.74 MTS, 768k Tris
(T177/288)
Dynamic VAO (H 0 F L S): 112.85 FPS, 86.68 MTS, 768k Tris
(T209/288)
Dynamic VAO (H 0 I L S): 107.82 FPS, 82.82 MTS, 768k Tris
(T213/288)
IM Display List (H 0 F L I): 65.03 FPS, 49.95 MTS, 768k Tris
(T97/288)
VA Display List (H 0 I L I): 65.01 FPS, 49.93 MTS, 768k Tris
(T133/288)
IM Display List (H 0 F L S): 64.91 FPS, 49.86 MTS, 768k Tris
(T113/288)
VA Display List (H 0 F L I): 64.91 FPS, 49.86 MTS, 768k Tris
(T129/288)
IM Display List (H 0 I L I): 64.89 FPS, 49.84 MTS, 768k Tris
(T101/288)
IM Display List (H 0 I L S): 64.68 FPS, 49.68 MTS, 768k Tris
(T117/288)
VA Display List (H 0 I L S): 64.25 FPS, 49.35 MTS, 768k Tris
(T149/288)
Static VAO (H 0 I L I): 60.92 FPS, 46.79 MTS, 768k Tris
(T165/288)
Static VAO (H 0 F L I): 59.37 FPS, 45.60 MTS, 768k Tris
(T161/288)
Dynamic VAO (H 0 F L I): 58.57 FPS, 44.99 MTS, 768k Tris
(T193/288)
Dynamic VAO (H 0 I L I): 56.91 FPS, 43.71 MTS, 768k Tris
(T197/288)
VA Display List (H 0 F L S): 34.89 FPS, 26.80 MTS, 768k Tris
(T145/288)
Vertex arrays (H 0 F S S): 34.05 FPS, 26.15 MTS, 768k Tris
(T57/288)
Vertex arrays (H 0 F S I): 33.84 FPS, 25.99 MTS, 768k Tris
(T41/288)
Vertex arrays (H 0 I S S): 32.85 FPS, 25.23 MTS, 768k Tris
(T61/288)
Vertex arrays (H 0 I S I): 32.51 FPS, 24.97 MTS, 768k Tris
(T45/288)
Immediate mode (H 0 F S I): 30.19 FPS, 23.19 MTS, 768k Tris
(T9/288)
Immediate mode (H 0 F S S): 27.45 FPS, 21.08 MTS, 768k Tris
(T25/288)
Immediate mode (H 0 I S I): 25.05 FPS, 19.24 MTS, 768k Tris
(T13/288)
Immediate mode (H 0 I S S): 24.22 FPS, 18.60 MTS, 768k Tris
(T29/288)
IM Display List (H 3 I S S): 22.82 FPS, 17.53 MTS, 768k Tris
(T127/288)
IM Display List (H 3 F S S): 22.82 FPS, 17.53 MTS, 768k Tris
(T123/288)
VA Display List (H 3 F S I): 22.81 FPS, 17.52 MTS, 768k Tris
(T139/288)
VA Display List (H 3 F S S): 22.81 FPS, 17.52 MTS, 768k Tris
(T155/288)
VA Display List (H 3 I S S): 22.74 FPS, 17.47 MTS, 768k Tris
(T159/288)
IM Display List (H 3 F S I): 22.74 FPS, 17.47 MTS, 768k Tris
(T107/288)
Compiled vertex arrays (H 0 F S I): 22.74 FPS, 17.47 MTS, 768k Tris
(T73/288)
IM Display List (H 3 I S I): 22.74 FPS, 17.47 MTS, 768k Tris
(T111/288)
Dynamic VAO (H 3 I S I): 22.73 FPS, 17.46 MTS, 768k Tris
(T207/288)
Static VAO (H 3 I S S): 22.67 FPS, 17.41 MTS, 768k Tris
(T191/288)
Static VAO (H 3 F L S): 22.67 FPS, 17.41 MTS, 768k Tris
(T179/288)
Static VAO (H 3 I S I): 22.64 FPS, 17.39 MTS, 768k Tris
(T175/288)
Dynamic VAO (H 3 F S S): 22.56 FPS, 17.33 MTS, 768k Tris
(T219/288)
Dynamic VAO (H 3 F L I): 22.49 FPS, 17.27 MTS, 768k Tris
(T195/288)
Compiled vertex arrays (H 0 F S S): 22.48 FPS, 17.27 MTS, 768k Tris
(T89/288)
Static VAO (H 3 I L S): 22.41 FPS, 17.21 MTS, 768k Tris
(T183/288)
Dynamic VAO (H 3 I S S): 22.33 FPS, 17.15 MTS, 768k Tris
(T223/288)
Static VAO (H 3 I L I): 22.30 FPS, 17.13 MTS, 768k Tris
(T167/288)
Static VAO (H 3 F S S): 22.30 FPS, 17.13 MTS, 768k Tris
(T187/288)
Static VAO (H 3 F S I): 22.23 FPS, 17.07 MTS, 768k Tris
(T171/288)
Static VAO (H 3 F L I): 22.23 FPS, 17.07 MTS, 768k Tris
(T163/288)
Dynamic VAO (H 3 F S I): 22.22 FPS, 17.07 MTS, 768k Tris
(T203/288)
Dynamic VAO (H 3 I L I): 22.16 FPS, 17.02 MTS, 768k Tris
(T199/288)
Dynamic VAO (H 3 I L S): 21.83 FPS, 16.76 MTS, 768k Tris
(T215/288)
Compiled vertex arrays (H 0 I S S): 20.17 FPS, 15.49 MTS, 768k Tris
(T93/288)
Dynamic VAO (H 3 F L S): 19.97 FPS, 15.34 MTS, 768k Tris
(T211/288)
Compiled vertex arrays (H 0 I S I): 19.91 FPS, 15.29 MTS, 768k Tris
(T77/288)
VA Display List (H 3 I S I): 19.17 FPS, 14.73 MTS, 768k Tris
(T143/288)
Vertex arrays (H 3 F S S): 15.06 FPS, 11.57 MTS, 768k Tris
(T59/288)
Vertex arrays (H 3 F S I): 14.98 FPS, 11.50 MTS, 768k Tris
(T43/288)
Vertex arrays (H 3 I S I): 14.88 FPS, 11.43 MTS, 768k Tris
(T47/288)
Vertex arrays (H 3 I S S): 14.88 FPS, 11.43 MTS, 768k Tris
(T63/288)
Immediate mode (H 3 I S I): 13.18 FPS, 10.13 MTS, 768k Tris
(T15/288)
Immediate mode (H 3 F S I): 13.09 FPS, 10.06 MTS, 768k Tris
(T11/288)
Immediate mode (H 3 I S S): 13.05 FPS, 10.03 MTS, 768k Tris
(T31/288)
Immediate mode (H 0 F L S): 11.98 FPS, 9.20 MTS, 768k Tris
(T17/288)
Immediate mode (H 3 F S S): 11.82 FPS, 9.08 MTS, 768k Tris
(T27/288)
Immediate mode (H 0 I L I): 11.39 FPS, 8.75 MTS, 768k Tris
(T5/288)
Immediate mode (H 0 I L S): 11.38 FPS, 8.74 MTS, 768k Tris
(T21/288)
Immediate mode (H 0 F L I): 10.65 FPS, 8.18 MTS, 768k Tris
(T1/288)
Compiled vertex arrays (H 0 F L S): 9.62 FPS, 7.39 MTS, 768k Tris
(T81/288)
Compiled vertex arrays (H 0 F L I): 9.34 FPS, 7.18 MTS, 768k Tris
(T65/288)
Compiled vertex arrays (H 0 I L S): 9.14 FPS, 7.02 MTS, 768k Tris
(T85/288)
Compiled vertex arrays (H 0 I L I): 8.93 FPS, 6.86 MTS, 768k Tris
(T69/288)
Compiled vertex arrays (H 3 F S S): 8.81 FPS, 6.77 MTS, 768k Tris
(T91/288)
Compiled vertex arrays (H 3 I S S): 8.75 FPS, 6.72 MTS, 768k Tris
(T95/288)
Compiled vertex arrays (H 3 F S I): 8.75 FPS, 6.72 MTS, 768k Tris
(T75/288)
Compiled vertex arrays (H 3 I S I): 8.63 FPS, 6.63 MTS, 768k Tris
(T79/288)
Vertex arrays (H 0 F L S): 8.57 FPS, 6.58 MTS, 768k Tris
(T49/288)
Vertex arrays (H 0 F L I): 8.43 FPS, 6.47 MTS, 768k Tris
(T33/288)
Vertex arrays (H 0 I L S): 8.26 FPS, 6.35 MTS, 768k Tris
(T53/288)
Vertex arrays (H 0 I L I): 8.21 FPS, 6.31 MTS, 768k Tris
(T37/288)
IM Display List (H 3 I L I): 7.88 FPS, 6.06 MTS, 768k Tris
(T103/288)
IM Display List (H 3 F L I): 7.88 FPS, 6.06 MTS, 768k Tris
(T99/288)
IM Display List (H 3 F L S): 7.88 FPS, 6.06 MTS, 768k Tris
(T115/288)
VA Display List (H 3 F L I): 7.88 FPS, 6.06 MTS, 768k Tris
(T131/288)
IM Display List (H 3 I L S): 7.88 FPS, 6.05 MTS, 768k Tris
(T119/288)
VA Display List (H 3 I L I): 7.88 FPS, 6.05 MTS, 768k Tris
(T135/288)
VA Display List (H 3 I L S): 7.36 FPS, 5.65 MTS, 768k Tris
(T151/288)
VA Display List (H 3 F L S): 7.34 FPS, 5.64 MTS, 768k Tris
(T147/288)
Immediate mode (H 3 I L S): 4.99 FPS, 3.84 MTS, 768k Tris
(T23/288)
Immediate mode (H 3 I L I): 4.98 FPS, 3.82 MTS, 768k Tris (T7/288)
Immediate mode (H 3 F L S): 4.77 FPS, 3.66 MTS, 768k Tris
(T19/288)
Immediate mode (H 3 F L I): 4.74 FPS, 3.64 MTS, 768k Tris (T3/288)
Compiled vertex arrays (H 3 I L S): 3.80 FPS, 2.92 MTS, 768k Tris
(T87/288)
Compiled vertex arrays (H 3 F L S): 3.79 FPS, 2.91 MTS, 768k Tris
(T83/288)
Compiled vertex arrays (H 3 I L I): 3.76 FPS, 2.88 MTS, 768k Tris
(T71/288)
Compiled vertex arrays (H 3 F L I): 3.71 FPS, 2.85 MTS, 768k Tris
(T67/288)
Vertex arrays (H 3 I L S): 3.68 FPS, 2.82 MTS, 768k Tris
(T55/288)
Vertex arrays (H 3 F L S): 3.60 FPS, 2.77 MTS, 768k Tris
(T51/288)
Vertex arrays (H 3 I L I): 3.57 FPS, 2.74 MTS, 768k Tris
(T39/288)
Vertex arrays (H 3 F L I): 3.54 FPS, 2.72 MTS, 768k Tris
(T35/288)


First, I was aware of this benchmark prior to your notifying me of its
existence - I follow the area VERY closely as it's my passion and also puts
food on the table and pays the mortgages. Dream job. Now to the comments.


1.
IM Display List (H 0 F S S): 183.42 FPS, 140.89 MTS, 768k Tris
(T121/288)
Static VAO (H 0 F S S): 164.45 FPS, 126.31 MTS, 768k Tris
(T185/288)

OK, on RADEON 9700 display lists are a hair faster than static VAO.

2.
Vertex arrays are nearly 100 times slower than the fastest vertex
"streaming" model, and slower than *immediate mode*, so I really don't grasp
how you can claim they are efficient. I compare them still to
DrawPrimitiveUP() and DrawIndexedPrimitiveUP(), ie. the UP*() methods. They
just aren't fast like you claim.

What is fast are *extensions*, which actually place the data where it is
accessible faster. Those aren't OpenGL vertex arrays, like you claim. I
already made the difference very clear in my previous post, yet you
misunderstood it (on purpose?).


> > > Vertex arrays are in the spec since 1.1, dunno the exact data, but
> > > before December 1994.
> >
> > And performance wise their effect always been very small, where
vertexbuffer
> > / stream model has enabled the GPU to really shine.

Ditto. See above chart.


> Vertex array can by used with or without vertex buffer (objects).

You're confusing extensions with the GL 1.1 functionality, which is at the
bottom of the chart I posted above.


> > Current 100 million triangles per second on real-world applications
wouldn't
> > be possible, if every single vertex were moved through, even, AGP 8X
bus.
>
> Huh? Surely every single vertex is moved through the AGP bus? Maybe
> you mean "moved alone" ? That's what vertex array allow you - to
> perform batch operations (with {Multi}DrawArrays,
> {Multi}DrawElements).

They are moved through the AGP bus once, then rendered from video memory
100's or 1000's of times. This makes the bus traffic overhead negligible. I'm
doing continuous level-of-detail, billboard generation, etc. 100% on the GPU
from streams which reside in video memory, and the bus traffic is NIL when
rendering (except for indices).

I'm writing vertex programs to implement the particle movement, rotation,
coloring and so on. This eats semantics, but it's worth it, I can have
300,000 billboards running at 100 frames per second on ATI RADEON 9700 with
each having unique:

- rotation
- scale
- color
- texture
- position
- velocity

These are the most important criteria, the vertex program implements the
"scene" properties of particle movement such as gravity and position based
velocity delta vector computation.
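
The idea, very roughly (an untested ARB_vertex_program sketch - the
attribute/parameter assignments are made up for illustration, and the real
programs are of course much longer; this one only advances position by
velocity * time):

static const char vp_src[] =
    "!!ARBvp1.0\n"
    "ATTRIB pos    = vertex.position;\n"
    "ATTRIB vel    = vertex.attrib[1];\n"      /* per-particle velocity */
    "PARAM  t      = program.env[0];\n"        /* elapsed time */
    "PARAM  mvp[4] = { state.matrix.mvp };\n"
    "TEMP   p;\n"
    "MAD    p, vel, t.x, pos;\n"               /* p = pos + vel * t */
    "DP4    result.position.x, mvp[0], p;\n"
    "DP4    result.position.y, mvp[1], p;\n"
    "DP4    result.position.z, mvp[2], p;\n"
    "DP4    result.position.w, mvp[3], p;\n"
    "MOV    result.color, vertex.color;\n"
    "END\n";

GLuint prog;
glGenProgramsARB(1, &prog);
glBindProgramARB(GL_VERTEX_PROGRAM_ARB, prog);
glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei) strlen(vp_src), vp_src);
glEnable(GL_VERTEX_PROGRAM_ARB);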

The Unique Texturing is possible because I map multiple particle textures
into larger physical textures, and the system places the particle drawing
queues so that the texture switching overhead is minimized. If every
particle were *truly* unique and randomly changing its texture then obviously
this performance wouldn't be possible, but the system throughput is as
stated above in real-world scenarios.

The system generates texture coordinates, rotation vectors etc. in the GPU
vertex program (VertexShader) on the fly, so the storage is minimized and I can
have more particles. Basically the memory consumption is high due to the fact
that GPU based billboarding, or particle rendering, requires 4 vertices per
particle. When the VS 3.0 hardware is out, a single vertex will do,
thanks to the vertex sampling frequency functionality in VS 3.0 level hardware;
it's just a memory saving. ;_)

Also, vertex samplers will be a great addition, since they allow storing
Frequently Similar Clusters of Similar Data (tm) in textures and using
1D texture coordinates (a single float, or even "just" a color component!) as
an index into the data. So the current GPU programming model is still limited,
but things are improving.

(Now is your time to state that the current model is not limited and that I
am incompetent ;)


> irrelevant (besides being doubtful speculation)

You think it is irrelevant that the main system processor and memory have a
virtually no-latency, full-duplex connection to the GPU's programmable vector
processor array?

You think that the only benefit from PCI Express will be lower latency and
higher bandwidth? I hope you are not in a managing position anywhere near
any major vendor of components for these near-future systems; your lack of
vision would seriously endanger the possibilities this new bus architecture
enables.


> > This is the situation, where vertex arrays have more useful position in
the
> > API... roughly a decade for PC hardware to actually catch up with what
SGI
> > for instance had 10 years ago, until this time, these cool innovations
were
> > ahead of their time & not so useful on their own (for PC programmers).
>
> *sigh* see above. Batching improves performance.

*sigh*, you are assuming every single vertex is streamed through the bus
connecting the system memory to the graphics processor, and basing your own
sigh on that incorrect assumption.

The two biggest "wins" from batching (sketched below) are:

- less DrawPrimitive overhead; every DrawPrimitive() burns CPU time
- fewer renderstate changes; every renderstate change burns CPU and GPU time

Bus traffic is a non-issue when the optimal situation is that ALL data is in
the GPU's local memory; at least every proficient GPU programmer aims for
that- if not 100%, then as close to that as possible.
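
A minimal sketch of both wins, with illustrative types and names (a real
engine would sort on far more state than a single texture): sort the draw items
by state, merge the indices of each bucket, and issue one state change and one
draw call per bucket.

#include <GL/gl.h>
#include <algorithm>
#include <vector>

struct DrawItem
{
    GLuint          texture;     // the render state we sort on
    const GLushort* indices;     // indices into the shared vertex arrays
    GLsizei         indexCount;
};

static bool ByTexture(const DrawItem& a, const DrawItem& b) { return a.texture < b.texture; }

void DrawBatched(std::vector<DrawItem>& items)
{
    std::sort(items.begin(), items.end(), ByTexture);
    std::vector<GLushort> batch;                           // scratch index list
    for (size_t i = 0; i < items.size(); )
    {
        const GLuint tex = items[i].texture;
        batch.clear();
        for (; i < items.size() && items[i].texture == tex; ++i)
            batch.insert(batch.end(), items[i].indices,
                         items[i].indices + items[i].indexCount);
        glBindTexture(GL_TEXTURE_2D, tex);                 // one state change per bucket
        glDrawElements(GL_TRIANGLES, (GLsizei)batch.size(),
                       GL_UNSIGNED_SHORT, &batch[0]);      // one draw call per bucket
    }
}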

It seems we two are from two completely different worlds.


Momchil Velikov

unread,
Aug 24, 2003, 9:24:51 AM8/24/03
to
"wogston" <sp...@nothere.net> wrote in message news:<bi86ho$rdh$1...@phys-news1.kolumbus.fi>...

> > > OpenGL vertex arrays are equivalent of DrawIndexedPrimitievUP and
> > > DrawPrimitiveUP,
> >
> > Not true. Rather, OpenGL vertex arrays are similar to streams, with
> > their data source being in system or GART or video memory.
>
> They only supply a pointer,

See the specification of ARB_vertex_buffer_object for an example. The
pointer parameter is actually an offset within GART/video memory
when a vertex buffer is bound to the ARRAY_BUFFER_ARB target.
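
A few lines make the point concrete (size, vertexData, indexCount and indices
are placeholders here): after glBindBufferARB, the final argument of
glVertexPointer is no longer a client-memory address but a byte offset into the
bound buffer.

GLuint vbo;
glGenBuffersARB(1, &vbo);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, size, vertexData, GL_STATIC_DRAW_ARB); // upload once
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (const GLvoid*)0);   // 0 = offset into the VBO, not a pointer
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indices);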

> > > which are not prefered ways to stream vertices (the
> > > DrawPrimitive and DrawIndexedPrimitive) are order of magnitude faster.
> They
> > > are just mechanism to stream from user supplied memory address, when the
> > > data is generated dynamically- but even in these cases the IHV's
> recommend
> > > hacing a dynamic vertexbuffer, locking it, filling it, unlocking and
> doing
> > > drawing command, because this approach, if the dynamic vertexbuffer has
> > > overwrite flag defined when filling, can dynamically re-allocate the
> memory
> > > from internal pool and not stall existing command queue.
> >
> > No. This is beneficial, because:
> > a) data is copied in bulk, as opposed to vertex at a time
> > b) there's no posibility of the vertex data changing between
> > commands, so the card can transfer it in parallel with the user
> > issuing drawing commands.
>
> A) DrawPrimitive() doesn't transfer vertices at all, if the current streams
> are created into video memory and there isn't swapping (ie. enough memory),
> so this model is very efficient.

Yup, vertices are prolly already copied in bulk.

> B) This is non-issue with streaming model, if you read what I wrote again,
> the real reason for extra efficiency with nooverwrite flag is from the fact,
> that is there is already existing DrawPrimitive() command currently being
> processed, the API doesn't have to wait for it to execute until it can lock,
> so that it doesn't write into region of memory that is currently being read.
> Nooverwrite allows driver to dynamically grab memory from available pool but
> keep the "binding" to this memory to the current VB object, the previously
> owned memory will be released to the pool. Actually the driver has a lot
> more freedom to do what it wants, but this is what DirectX optimization
> guides from nVidia say about the topic so I'm very likely to listen to what
> the hardware developers have to say.

Sure, this is true, only that its performance implications are
minimal. You don't copy vertex data each frame, much less several
times per frame. Thus stuff you do once every 100 frames affects the
frame rate 100 times less.

> > > So OpenGL vertex arrays are hardly equivalent of the GOOD way to draw
> stuff,
> >
> > Not true. They, together with ARB_vertex_buffer are *the* fastest way
> > to feed vertex attributes to the card, in no way slower that DX
> > streams.
>
> That's ARB_vertex_buffer way, not vertex array. I would compare them like
> this:
>
> DrawPrimitiveUP <-> vertex arrays
> DrawPrimitive <-> ARB_vertex_buffer (+ vertex arrays)
>
> This is the difference, get it? =)

Ah, I'm pretty much aware of it :) (Actually, there's no OpenGL
equivalent to DrawPrimitiveUP, which, BTW, is a good thing; I do not
want to (and can't) change the source because some array happens to not fit
in GART/video memory. Plain vertex arrays are rather equivalent to a
stream with the stream source being a system memory vertex buffer.)

> 2.
> Vertex arrays are nearly 100 times slower than the fastest vertex
> "streaming" model,

Nowadays I do not separate vertex arrays from vertex buffers.
Extensions have been around for a while (VAO/VAR), and for me an ARB
extension is pretty much a core feature.

> and slower than *immediate mode*, so I really don't grasp

It must be some weird way of using them then. My test program does
100/440/1275 FPS for immediate mode/plain vertex arrays/VBO. Here's
the part which makes the difference:

#if VA
  glDrawArrays (*pp, off, *cp);
#else
  glBegin (*pp);
  for (unsigned int i = 0; i < *cp; i++)
    glVertex3f (coord [3 * (off + i)],
                coord [3 * (off + i) + 1],
                coord [3 * (off + i) + 2]);
  glEnd ();
#endif

> What is fast, are *extensions*, which actually place the data where it is
> accessible faster. Those aren't OpenGL Vertex arrays, like you claim. I
> already made the difference very clear in my previous post, yet you
> misunderstood it (on purpose?).

Extensions are simply a way to use the vertex arrays feature.

> > > > Vertex arrays are in the spec since 1.1, dunno the exact data, but
> > > > before December 1994.
> > >
> > > And performance wise their effect always been very small, where
> vertexbuffer
> > > / stream model has enabled the GPU to really shine.
>
> Ditto. See above chart.
>
> > Vertex array can by used with or without vertex buffer (objects).
>
> You're confusing extensions to the GL 1.1 functionality, which is at bottom
> of the chart I posted above.

Extensions are at the heart of OpenGL. I'm not confusing anything. I
just assume that OpenGL is used with extensions. Certainly I do not
claim the 1995(?) MS s/w implementation of OpenGL 1.1 competes with
DX9.

> > > Current 100 million triangles per second on real-world applications
> wouldn't
> > > be possible, if every single vertex were moved through, even, AGP 8X
> bus.
> >
> > Huh? Surely every single vertex is moved through the AGP bus? Maybe
> > you mean "moved alone" ? That's what vertex array allow you - to
> > perform batch operations (with {Multi}DrawArrays,
> > {Multi}DrawElements).
>
> They are moved through the AGP bus once, then rendered from video memory
> 100's or 1000's of times. This makes the bus traffic overhead neglible. I'm
> doing continuous level-of-detail, billboard generation, etc. 100% on GPU
> from streams which reside in video memory and the bus traffic is NIL, when
> rendering (except for indices).

How do you measure the bus traffic? Do you notice a difference between
static and dynamic vertex buffers (presumably video and GART,
respectively)? I haven't noticed any; maybe it's an OpenGL driver
quality issue.

> > irrelevant (besides being doubtful speculation)
>
> You think it is irrelevant, that the main system processor and memory has
> virtually no-latency full-duplex connection to the GPU's programmable vector
> processor array?

Yep, I bet future OpenGL revisions will allow my current programs to
take advantage of whatever advantages the new PCI-X based architectures
provide with virtually no source changes, like they've done for the
last 10 years.

It may be relevant to DX programmers ... or maybe not even for them -
they are accustomed to rewriting stuff :)

> > > This is the situation, where vertex arrays have more useful position in
> the
> > > API... roughly a decade for PC hardware to actually catch up with what
> SGI
> > > for instance had 10 years ago, until this time, these cool innovations
> were
> > > ahead of their time & not so useful on their own (for PC programmers).
> >
> > *sigh* see above. Batching improves performance.
>
> *sigh*, you are assuming every single vertex is streamed through bus
> connecting the GPU to the system memory, to the graphics processor and
> basing your own sigh onto that incorrect knowledge.

How's this incorrect? Or maybe your statement is imprecise/incomplete?

> The biggest two "wins" from batching are:
>
> - less DrawPrimitive overhead, every DrawPrimitive() burns CPU time
> - less renderstate changes, every renderstate change burns CPU and GPU time
>
> Bus traffic is non-issue, when the optimal situation is that ALL data is in
> the GPU's local memory, atleast every proficient GPU programmer aims for
> that- if not 100%, as close to that as possible.
>
> It seems we two are from two completely different worlds.

Yeah, one world where they say "vertex is streamed through bus
connecting the GPU to the system memory" and the other where they say
"vertex is streamed through bus connecting the GPU to the system
memory each frame".

~velco

wogston

unread,
Aug 24, 2003, 5:46:06 PM8/24/03
to
> > > Not true. Rather, OpenGL vertex arrays are similar to streams, with
> > > their data source being in system or GART or video memory.
> >
> > They only supply a pointer,
>
> See the specification of ARB_vertex_buffer_object for an example. The
> pointer parameter is actually an offset within a GART/video memory
> when a vertex buffer is bound to ARRAY_BUFFER_ARB target.

Still the same misconception: vertex array != ARB_vertex_buffer; they are
NOT the same thing. A vertex array only supplies a pointer, with no efficient
AGP support. See my previous post for the chart.


> > A) DrawPrimitive() doesn't transfer vertices at all, if the current
streams
> > are created into video memory and there isn't swapping (ie. enough
memory),
> > so this model is very efficient.
>
> Yup, vertices are prolly already copied in bulk.

How they are transferred is irrelevant, as the overhead is amortized
by the fact that the GPU-local vertices are used hundreds of times.


> Sure, this is true, only that its performance implications are
> minimal. You don't copy vertex data each frame, much less several
> times per frame. Thus stuff you do each 100 frames affects a single
> frame/the frame rate 100 times less.

No, this was an "immediate mode" optimization, where every single vertex *IS*
generated by the CPU/FPU and uploaded to GPU local memory for rendering.
A dynamic vertex buffer locked with the nooverwrite flag is FASTER than DPUP
or DIPUP.
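
A hedged sketch of that locking pattern (illustrative code, not the poster's):
append into a dynamic VB with D3DLOCK_NOOVERWRITE until it is full, then
D3DLOCK_DISCARD lets the driver substitute a fresh block of memory instead of
stalling on a DrawPrimitive that may still be reading the old one.

#include <d3d9.h>
#include <cstring>

extern IDirect3DDevice9*       device;   // assumed to exist elsewhere
extern IDirect3DVertexBuffer9* vb;       // created with D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY in D3DPOOL_DEFAULT
const UINT VB_BYTES = 256 * 1024;
static UINT writePos = 0;

// bytes is assumed to be a multiple of stride; SetFVF / vertex declaration set elsewhere
void AppendAndDraw(const void* verts, UINT bytes, UINT triCount, UINT stride)
{
    DWORD flags = D3DLOCK_NOOVERWRITE;   // promise: won't touch data the GPU may still read
    if (writePos + bytes > VB_BYTES)     // wrapped around: ask for a fresh buffer instead of stalling
    {
        writePos = 0;
        flags = D3DLOCK_DISCARD;
    }
    void* dst = 0;
    vb->Lock(writePos, bytes, &dst, flags);
    std::memcpy(dst, verts, bytes);
    vb->Unlock();

    device->SetStreamSource(0, vb, 0, stride);
    device->DrawPrimitive(D3DPT_TRIANGLELIST, writePos / stride, triCount);
    writePos += bytes;
}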


> > DrawPrimitiveUP <-> vertex arrays
> > DrawPrimitive <-> ARB_vertex_buffer (+ vertex arrays)
> >
> > This is the difference, get it? =)
>
> Ah, I'm pretty much aware of it :) (Actually, there's no OpenGL
> equivalent to DrawPrimitiveUP, which, BTW, is a good thing, I do not
> want/can't to change the source because some array happens to not fit
> in GART/video memory. Plain vertex arrays are rather equivalent to a
> stream with the stream source being a system memory vertex buffer).

i.e. the UP model, the User Pointer model. I'm thinking from the perspective of
the hardware: what path the data takes from the application to the GPU. In both
cases the source is a user pointer, so the path in the driver is likely very
similar: non-AGP-aperture system memory, through the bus to GPU local memory
for vertex processing. Both are at the low end of the performance spectrum.
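
For contrast, the UP path in a few lines (the vertex format is illustrative):
the vertex data stays in plain system memory and is handed to the driver on
every call, much like a plain glVertexPointer on a client-memory array.

#include <d3d9.h>

struct XYZDiffuse { float x, y, z; DWORD color; };   // matches D3DFVF_XYZ | D3DFVF_DIFFUSE

void DrawTrianglesUP(IDirect3DDevice9* device, const XYZDiffuse* verts, UINT triCount)
{
    device->SetFVF(D3DFVF_XYZ | D3DFVF_DIFFUSE);
    // the vertices are pulled out of system memory on every single call
    device->DrawPrimitiveUP(D3DPT_TRIANGLELIST, triCount, verts, sizeof(XYZDiffuse));
}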


> Nowdays I do not separate vertex arrays from vertex buffers.
> Extensions has been around for a while (VAO/VAR) and for me an ARB
> extension is pretty much a core feature.

In 1994, when the vertex array was introduced in OpenGL 1.1, which is what you
proposed, it certainly was? And what if the extension doesn't exist? All the
same, vertex array != ARB_vertex_buffer .. if ARB_vertex_buffer exists,
great, but it still isn't the vertex array; it's an extension.


> > and slower than *immediate mode*, so I really don't grasp
>
> It must be some weird way to use them then. My test program does
> 100/440/1275 FPS for immediate mode/plain vertex arrays/VBO. Here's
> the part, which makes the difference:
>
> #if VA
> glDrawArrays (*pp, off, *cp);
> #else
> glBegin (*pp);
> for (unsigned int i = 0; i < *cp; i++)
> glVertex3f (coord [3 * (off + i)], coord [3 * (off + i) + 1], coord
> [3 * (off + i) + 2]);
> glEnd ();
> #endif

You pointed to the application for reference; I ran it on a quite up-to-date
x86 Windows OpenGL ICD.


> Extensions are simply a way to use the vertex arrays feature.

You mean extension to use the ARB_vertex_buffer feature? ;-)


> Extensions are at the heart of OpenGL. I'm not confising anything. I
> just assume that OpenGL is used with extensions. Certainly I do not
> claim the 1995(?) MS s/w implementaion of OpenGL 1.1 competes with
> DX9.

That doesn't change the fact that the feature introduced in 1994 for GL 1.1 is
still the same in 2003, since OpenGL guarantees backward compatibility, does
it not? The extensions introduced since then, such as ARB_vertex_buffer,
introduce alternative, much more efficient paths for contemporary hardware.
Obviously the vertex array alone was inadequate to maximize performance. I'm
sure you agree with that.


> How do you measure the bus traffic ? Do you notifce difference between
> static and dynamic vertex buffers (presumably video and GART,
> respectively) ? I haven't noticed any, maybe it's OpenGL driver
> quality issue.

You don't even have to go as far as measuring the bus traffic to see its
impact on the framerate. I aim only for the smoothest possible framerate in
my applications and renderpaths; there's no room for sub-optimal. Bus traffic
must go.


> > You think it is irrelevant, that the main system processor and memory
has
> > virtually no-latency full-duplex connection to the GPU's programmable
vector
> > processor array?
>
> Yep, I bet future OpenGL revisions will allow my current programs to
> take advantage of whatever advatages the new PCI-X based architectures
> provide with virtually no source changes, like they've done for the
> last 10 years.

PCI-X != PCI Express; they are two different buses. Besides this little error,
the above statement confirms my guess that you lack vision for the
possibilities, other than accelerating 10-year-old programs, that the faster
interconnect between graphics hardware components would bring into the game.


> It may be relevant to DX programmers ... or maybe not even for them -
> they are accustomed to rewriting stuff :)

If this is supposed to be a stab, allow me to remind you that applications I
wrote for DirectX 3 still compile and run, and I can still maintain and
optimize them. I, however, find no need to touch 10-year-old source code,
because the applications written then have long since reached the end of
their road.

I keep rendering separated from the rest of the source code. I can change the
graphics component from DirectX to OpenGL to Glide 2 to Glide 3 to S3 Metal
to Pyramid to various software renderers to X to Y to Z, with minimal fuss
required for any other relevant part of the applications or the application
development framework. Rendering the graphics is just one very tiny overall
part- the level of abstraction isn't "Draw Triangle" or "Draw N Triangles"
but rather "Draw Image", which gives quite a bit of leverage for switching
APIs when it feels appropriate.

However, I do NOT support (anymore) device abstraction where I have
different output devices like GL, D3D, ... since this approach found the end
of its useful lifetime when software rendering went out of demand (thank
God for that). The only reason I had it was that previously (until as late
as 1999-2000) I kept this arrangement for rendering because I was extending a
design from 1994-1995; it would still work adequately, but I have moved on
since then. ;)


> How's this incorrect ? Or maybe your statement is imprecise/incomplete ?

Because every single vertex isn't streamed through the bus to GPU local memory
frequently enough for it to make any difference, when your rendering backend
is engineered to use GPU local memory?


> > It seems we two are from two completely different worlds.
>
> Yeah, one world where they say "vertex is streamed through bus
> connecting the GPU to the system memory" and the other where they say
> "vertex is streamed through bus connecting the GPU to the system
> memory each frame".

You do realize that you position yourself in the "each frame" world,
right?


wogston

unread,
Aug 24, 2003, 6:20:02 PM8/24/03
to
> Yeah, one world where they say "vertex is streamed through bus
> connecting the GPU to the system memory" and the other where they say
> "vertex is streamed through bus connecting the GPU to the system
> memory each frame".

Just in case you have no clue what I am talking about,

::DrawPrimitive() uses the current VertexBuffer object(s) as the source
streams. If I created the current VertexBuffer object(s) in GPU local memory,
try to guess where the vertices are streamed from?

GPU local memory! If I'm out of memory and swapping, then the driver can
stream from the AGP aperture, or from system memory using the AGP aperture as
a staging area, or directly from system memory, or whatnot. The driver is in
charge: the hardware manufacturer can decide which path is the
fastest for their hardware.
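
In code, roughly (a sketch with illustrative names, not the poster's engine):
create the buffer WRITEONLY in the default pool so the driver is free to place
it in GPU local memory, fill it once, and afterwards every DrawPrimitive
sources its vertices from there.

#include <d3d9.h>
#include <cstring>

IDirect3DVertexBuffer9* CreateStaticVB(IDirect3DDevice9* device, const void* verts, UINT bytes)
{
    IDirect3DVertexBuffer9* vb = 0;
    device->CreateVertexBuffer(bytes, D3DUSAGE_WRITEONLY, D3DFVF_XYZ,
                               D3DPOOL_DEFAULT, &vb, 0);
    void* dst = 0;
    vb->Lock(0, bytes, &dst, 0);       // filled once, up front
    std::memcpy(dst, verts, bytes);
    vb->Unlock();
    return vb;
}

void DrawStatic(IDirect3DDevice9* device, IDirect3DVertexBuffer9* vb, UINT triCount)
{
    device->SetFVF(D3DFVF_XYZ);
    device->SetStreamSource(0, vb, 0, 3 * sizeof(float));
    device->DrawPrimitive(D3DPT_TRIANGLELIST, 0, triCount);   // no per-call vertex upload
}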

The situation where we are swapping is one I avoid by writing a
scalable design, whenever it makes sense, that is. For instance, let's
imagine I am rendering a 4K by 4K heightfield with continuous level of detail,
tessellation, and geomorphing.. in this case there is
some cutoff point where I have to start dividing the resolution of the
rendering in source space into one quarter.

Here's a video of such situation:

www.liimatta.org/misc/scape1.mpg

It's work-in-progress, but I prepared a video for your benefit. I only store
position vectors in local memory; everything else, including texture
coordinates, fog, the geomorphing factor, the detail texturing factor, etc., is
generated in the vertex program. The only limitation on how far I can
render is the amount of memory I have to work with. I'm using roughly 10% of
the GPU power of the RADEON 9700 PRO on a Pentium4 1.7GHz in the video. I'm
always storing all the data at full resolution on the GPU in this version, and
am currently working on two things to enable the code to render "to the
horizon" (oh, I forgot to mention the horizon is also generated.. i.e.
curvature is generated in the vertex program as well).

- geometry caching
- texture synthesis

The caching is needed, because I'm still generating the rendering data in
the CPU/FPU and uploading it to buffers I maintain-- it's based on two
techniques combined:

1. heightfield + micro and macro detail generated with simple Perlin noise
with different amplitudes and frequencies, combined (see the sketch after this
list). The generation is varied through modulation with the sincos of the
sample's position, using high and low frequencies of the position for separate
sincosines so that the detail isn't an ordered grid.

2. TODO: implement the Perlin noise as a pixel program to generate the detail
maps for normal-map generation inside the GPU, so that bus traffic is
eliminated even when going through the cache as it does now
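
A minimal sketch of point 1 above, leaving out the sincos modulation (noise2 is
a placeholder for whatever smooth 2D noise function is actually used): a few
octaves at doubling frequency and halving amplitude are summed to give micro
and macro detail.

float noise2(float x, float y);        // assumed: smooth noise in roughly [-1, 1]

float DetailHeight(float x, float y, int octaves)
{
    float sum = 0.0f, amplitude = 1.0f, frequency = 1.0f;
    for (int i = 0; i < octaves; ++i)
    {
        sum += amplitude * noise2(x * frequency, y * frequency);
        amplitude *= 0.5f;             // each octave contributes half as much...
        frequency *= 2.0f;             // ...at twice the frequency
    }
    return sum;
}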

Texture synthesis, however, is the thing this really needs the most, as I'm
currently using a single texture just repeating over the landscape, which
doesn't look too good. Texture synthesis is based on render-to-texture to
combine the different layers into a Unique Texture for each tile, i.e. no
multipass rendering to generate the diffuse map in the pixel program (except
once, for the cache); that's a waste of good bandwidth. I already have three
maps and room for one extra map, so the number of maps is four, which is
single-pass for most up-to-date hardware (waste not, want not).

OK, so the video only does a 4K by 4K heightfield for now, since the
bottleneck is normal map generation (hey, per-pixel, or rather per-texel,
lighting is very good with adaptive level of detail because the lighting
solution is uniform for the whole screen and doesn't *snap* when vertices are
removed or added). And yes, 99% of the workload is done on the GPU.

These are the kinds of lengths I go to for optimization's sake: it's not to
enable me to run this thing at 500 frames per second, that I do already.. it
is to enable me to push the horizon.. because GPU local memory is a limited
resource, we must make the BEST use of it. To be able to do THAT,
we must generate the data, even for the cache, LOCALLY. There simply isn't
room for bus traffic; it must go.

Secondly, the GPU is much faster at vector computation than the CPU/FPU
(x87/sse/sse2/3dnow!), which is an added bonus in favour of GPU-local data
synthesis. Which in turn is why I see the introduction of a fast
interconnect such as PCI Express as a very, very lucrative
proposal for the kind of programming I currently do. It will allow much
faster, lower-latency commanding of the GPU and collecting the results *fast*
into CPU local memory (since it's in practice local memory now for both
systems!), making processing with a general-purpose programmable ALU practical.
I can basically work on the same dataset with SSE, SSE2, and GPU vertex and
pixel programs with no overhead at all.

And the best you can see coming from a fast, low-latency interconnect is that
existing OpenGL applications will run faster; for me, existing applications
should automatically run faster on future systems anyway...


Momchil Velikov

unread,
Aug 25, 2003, 2:43:06 AM8/25/03
to
"wogston" <sp...@nothere.net> wrote in message news:<bibdml$mar$1...@phys-news1.kolumbus.fi>...

> > Yeah, one world where they say "vertex is streamed through bus
> > connecting the GPU to the system memory" and the other where they say
> > "vertex is streamed through bus connecting the GPU to the system
> > memory each frame".
>
> Just in case you have no clue what I am talking about,

I have enough clue; I'm just pointing out that your statements are
imprecise and that has led to a number of misunderstandings already.

~velco

Momchil Velikov

unread,
Aug 25, 2003, 3:39:31 AM8/25/03
to
"wogston" <sp...@nothere.net> wrote in message news:<bibbn3$ct3$1...@phys-news1.kolumbus.fi>...

I see no reason for it to be faster if you do it less than once per frame.
In both cases the data goes from a user buffer into a driver buffer. Do you
have any numbers?

> > > DrawPrimitiveUP <-> vertex arrays
> > > DrawPrimitive <-> ARB_vertex_buffer (+ vertex arrays)
> > >
> > > This is the difference, get it? =)
> >
> > Ah, I'm pretty much aware of it :) (Actually, there's no OpenGL
> > equivalent to DrawPrimitiveUP, which, BTW, is a good thing, I do not
> > want/can't to change the source because some array happens to not fit
> > in GART/video memory. Plain vertex arrays are rather equivalent to a
> > stream with the stream source being a system memory vertex buffer).
>
> ie. the UP model, User Pointer model. I'm thinking from the perspective of
> the hardware: what path the data takes from application to the GPU. In both
> the source is User Pointer, so the path in driver is likely very similiar.
> Non-AGP aperture system memory, through bus to the GPU local memory for
> vertex processing. Both are at the low-end of the performance spectrum.

The data always travels the same path; the difference is how often it
does.

> > Nowdays I do not separate vertex arrays from vertex buffers.
> > Extensions has been around for a while (VAO/VAR) and for me an ARB
> > extension is pretty much a core feature.
>
> In 1994, when vertex array was introduced in OpenGL 1.1, which is what you
> proposed, it certainly was? And, if the extension doesn't exist? All the
> same, vertex array != ARB_vertex_buffer .. if ARB_vertex_buffer exists,
> great, but it still isn't vertex array, it's extension.
>
> > > and slower than *immediate mode*, so I really don't grasp
> >
> > It must be some weird way to use them then. My test program does
> > 100/440/1275 FPS for immediate mode/plain vertex arrays/VBO. Here's
> > the part, which makes the difference:
> >
> > #if VA
> > glDrawArrays (*pp, off, *cp);
> > #else
> > glBegin (*pp);
> > for (unsigned int i = 0; i < *cp; i++)
> > glVertex3f (coord [3 * (off + i)], coord [3 * (off + i) + 1], coord
> > [3 * (off + i) + 2]);
> > glEnd ();
> > #endif
>
> You pointed the application for reference, I ran it on quite up-to-date x86
> Windows OpenGL ICD.

I pointed to it for the data about display lists. I have not examined the
source, thus I cannot recommend for/against using it as a reference
benchmark.

Anyway, I just ran it on a GF4, and with 2 million tris it went up from
9 to 11 FPS (IM vs. VA).

> > Extensions are simply a way to use the vertex arrays feature.
>
> You mean extension to use the ARB_vertex_buffer feature? ;-)

Nope. Vertex arrays are the core. There are different ways to
allocate them, some of these ways not available on one platform, some
not available on another. CVA/VAR/VAO/VBO are different ways to use
VA.

>
> > Extensions are at the heart of OpenGL. I'm not confising anything. I
> > just assume that OpenGL is used with extensions. Certainly I do not
> > claim the 1995(?) MS s/w implementaion of OpenGL 1.1 competes with
> > DX9.
>
> Doesn't change the fact that feature introduced in 1994 for GL 1.1 isn't
> still the same in 2003, since OpenGL guarantees backward compatibility, does
> is not?

Ah, but the code which used to say "glDrawElements" still says
"glDrawElements" no matter if there's CVA, VBO or none of them.

> The extensions introduces since then, such as ARB_vertex_buffer
> introduce alternative, much more efficient paths for contemporary hardware
> solutions. Obviously the vertex array alone was inadequate to maximize the
> performance. I'm sure you agree to that.

On some hardware, yes. That's one example where the superiority of OpenGL
shows: using extensions you can, in a *binary* compatible way, use the
vertex array feature in the way that is most efficient for the platform.

> > > You think it is irrelevant, that the main system processor and memory
> has
> > > virtually no-latency full-duplex connection to the GPU's programmable
> vector
> > > processor array?
> >
> > Yep, I bet future OpenGL revisions will allow my current programs to
> > take advantage of whatever advatages the new PCI-X based architectures
> > provide with virtually no source changes, like they've done for the
> > last 10 years.
>
> PCI-X != PCI Express, two different buses. Besides this little error, the
> above statement confirms my guess that you lack vision for the possibilities
> other than accelerating 10-year-old programs the faster interconnection
> between graphics hardware components would bring into the game.

Well, sure, I do not claim to be a prophet/visionary/messiah :)

I've yet to study the PCI Express spec, but some preliminary
information (like the lack of cache coherency support) already sounds
bad. We'll see if there's some support for atomic read-modify-write
transactions ... If not, well, it's not gonna change the paradigm
much. That's my vision :)

> > It may be relevant to DX programmers ... or maybe not even for them -
> > they are accustomed to rewriting stuff :)
>
> If this is supposed to be a stab, allow me to remind you that applications I
> wrote for DirectX 3 still compile and run, and I can still maintain and
> optimize them. I however, find no need to touch 10 year old sourcecode
> because the applications written then have long since found end of their
> road.

I.e. got rewritten? :P


~velco

wogston

unread,
Aug 25, 2003, 5:30:40 AM8/25/03
to
> I see no reason to be faster if you do it less than once per frame.
> In both cases data goes from user buffer into driver buffer. Do you
> have any numbers ?

Let's try ONCE more..

UP means that the driver, when rendering, uses the UP as the vertex "stream"
source. The non-UP versions of the API calls use the VB as the source. This way:

vertex array = uses a user-supplied pointer as the source, "uploads" per
glDrawElements()
ARB_vertex_buffer = uses a driver-supplied source, "uploads" only once

Similarly,

DrawPrimitiveUP() uses a user-supplied pointer as the source, "uploads" per
call to DrawPrimitiveUP()
DrawPrimitive() uses a user-supplied vertex buffer object as the source,
"uploads" only once, when the user writes to the VB

Now you surely are able to tell the difference between DPUP() and DP(), and
the comparison to VA and VBO and why I made the comparison.. the 1994 GL 1.1
VA functionality is comparable to DPUP, and the performance is similarly
at the low end of the performance spectrum, and I told you why; things *should*
be clear now.

What numbers precisely? The overhead for ARB_vertex_buffer and DP is
negligible when the application developer is GPU-local-memory savvy; that's my
whole point: VA by itself, as it existed in 1994 GL 1.1, is
not enough to take advantage of the hardware's power, as is demonstrated over
and over again.

As far as the nooverwrite flag and dynamic VBs go, compared to a static r+w
VB the performance difference is not "dramatic", but it is still in the ~10%
class, and that means nearly a million triangles a second. At 100 frames per
second this means an extra 10K triangles per frame at no extra cost in
development time.


> The data always travels the same path, the difference is how often it
> does.

Excuse me, but that was _my_ point. ;-)


> Anyway, I just ran it on a GF4 and it went up with 2 million tris from
> 9 to 11 (IM vs. VA).

Very low numbers for a GF4; it should do at least 50-60 M per sec. IM and VA
still seem to be low-performance; I'd use them only if there was no other
way.


> Nope. Vertex arrays are the core. There are different ways to
> allocate them, some of these ways not available on one platform, some
> not available on another. CVA/VAR/VAO/VBO are different ways to use
> VA.

Now we are talking about the CVA/VAR/VAO/VBO *extensions*, not VA.. VA means
that the GL reads vertices from a user-supplied pointer and is not very
efficient on PEECEE hardware, because UMA designs or XBAR switches are
not very common in the PEECEE world; therefore a mechanism to allocate GPU
local memory was required and was indeed implemented through these extensions.

They are not different ways to use VA, they are different ways to
*implement* VA, much more efficient ways, and they have their unique names
because they are NOT VA.


> > Doesn't change the fact that feature introduced in 1994 for GL 1.1 isn't
> > still the same in 2003, since OpenGL guarantees backward compatibility,
does
> > is not?
>
> Ah, but the code which used to say "glDrawElements" still says
> "glDrawElements" no matter if there's CVA, VBO or none of them.

The code also has to detect whether CVA, VBO or any other extension is present
and use the extension appropriately, which in my opinion definitely requires
typing in some source code. Similarly, DX9 code says DrawPrimitive()
regardless of what kind of VB object is created, and DrawPrimitiveUP() if
the source is raw memory.


> On some hardware, yes. That's one example where superiority of OpenGL
> shows: using extensions you can in a *binary* compatible way use the
> vertex array feature in the most efficient for the platform way.

DirectX 9 also has *binary* compatibility for this activity, since the
::DrawPrimitive() API call still generates the same binary sequence
regardless of the type of the currently set VB, so I could say that this is
actually an example where the superiority of DirectX 9 shows.

The difference here is that I leave detection of the most efficient
renderpath to the hardware developer, in the device driver.. I don't have to
do it manually; in my opinion THIS is the superior way.. YMMV, obviously..


> I'm yet to study the PCI Express spec, but some preliminary
> information (like the lack of cache coherency support) already sounds
> bad. Will see if there's some support for atomic read-modify-write
> transactions ... If not, well, it's not gonna change the paradigm
> much. That's my vision :)

My vision is that DirectX 10 will have the paradigm shift well mapped into
practical interfaces.. :)


> > If this is supposed to be a stab, allow me to remind you that
applications I
> > wrote for DirectX 3 still compile and run, and I can still maintain and
> > optimize them. I however, find no need to touch 10 year old sourcecode
> > because the applications written then have long since found end of their
> > road.
>
> I.e. got rewritten ? :P

Well, of course! I have grown a lot as a programmer in the past 8 years,
dude.. and I'm damn proud of it, too! There's very little code from that "era"
which is of any significant use. The only remaining code is the Platform
Libraries, which have been rewritten since then and are available for public
scrutiny and use as Open Source; the latest revision can be downloaded here:

www.twilight3d.com/files/prophecy/prophecy333.zip

The rest is closed-source, proprietary libraries and software retail
products, but this code is free. This code is platform and rendering API
neutral. That's the only kind of code that survives the test of time.. like
I said earlier, you have to rewrite: upgrade and maintain your OpenGL
source code as well, to take advantage of the latest developments in the
OpenGL API and its extensions, if you want to use the latest cutting-edge
features. The same obviously applies to DirectX source code. Version 3 was
still using Execute Buffers; the transition to a modern DirectX 9 codebase
needs heavy rewriting, true. DX3 wasn't very good, but it was the best bet at
the time for Windows.. OpenGL wasn't accelerated, 3dfx Voodoo Graphics wasn't
yet released.. Permedia and the few other accelerators that preceded it
weren't yet out, and Glint-based products cost thousands of dollars and were
out of reach of consumers. OpenGL was too slow for that era's processors
(the 486 was still common, the Pentium was just being introduced...);
basically you DID want to write a custom T&L pipeline for your application.
The DX3 apps I wrote were software renderers, yes, you BET I would want to
rewrite the applications.. *IF* I wanted to rewrite the applications... which
I don't want to do, unless you want to pay me to do so? I didn't think so... ;-)

Regarding the DX3 era and hardware that *did* come out, at the time it was
3dfx 0wning the market for consumer 3D acceleration as far as *games* were
concerned. This meant that Glide was the DOMINANT rendering API, not D3D or
OpenGL. It was also the best performer in town for 3dfx hardware of that
era (3dfx Voodoo Graphics, Voodoo2, Voodoo Rush!, etc..); running Quake2
using an OpenGL ICD was impossible since 3dfx didn't have an ICD. You had
to manually dig the OpenGL entry points out of the driver DLL and call them
through pointers; you surely remember that "wonderful" OpenGL coding
experience? Voodoo was too big a share of the market to ignore.. I'm not
likely to forget.. nowadays things of course are much better regarding OpenGL,
since vendors *HAVE TO* write a proper ICD to run all these wonderful OpenGL
applications that exist today; the pioneering work was already done by id
Software, without whom there wouldn't be much OpenGL + Windows + games to
talk about.. of course this thread isn't about games, Windows, or even x86..
with OpenGL I'm sure we're talking about everything that goes into that,
which means 100's of platforms, soon including OpenGL ES... ;-)
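
For the curious, this is roughly what "digging the entry points out of the
driver DLL" looked like (a sketch; the DLL name here is a placeholder for
whatever the miniGL driver of the day was actually called):

#include <windows.h>
#include <GL/gl.h>

typedef void (APIENTRY *PFNGLBEGIN)(GLenum mode);
typedef void (APIENTRY *PFNGLVERTEX3F)(GLfloat x, GLfloat y, GLfloat z);
typedef void (APIENTRY *PFNGLEND)(void);

HMODULE       gl          = LoadLibraryA("3dfxgl.dll");   // placeholder DLL name
PFNGLBEGIN    pglBegin    = (PFNGLBEGIN)   GetProcAddress(gl, "glBegin");
PFNGLVERTEX3F pglVertex3f = (PFNGLVERTEX3F)GetProcAddress(gl, "glVertex3f");
PFNGLEND      pglEnd      = (PFNGLEND)     GetProcAddress(gl, "glEnd");

// ...and every GL call then goes through the pointers: pglBegin(GL_TRIANGLES); etc.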

However, if the codebase is DirectX 6 based, the rewriting isn't very
heavy... since DX6 the API has stabilized; the changes from DX8 to
DX9 are mostly negligible for the core functionality. I have maintained and
developed enough OpenGL software to know that it is neither more nor less
trivial than developing DirectX source code, as I find developing both
trivial.. it's just a matter of which is more trivial, and that's not
something I'd lose sleep over.

What I *DO* lose sleep over are the things that API fanatics don't even
think about. Here's an example: bilinear filtering.. D3D and GL use a
different hotspot for the filter; it's +0.5 texels off.. so.. if I am making a
continuously mapped landscape (*cough*) and want to have an OpenGL renderer
for it (*cough*), I have to use a different texcoordgen matrix so that I don't
get seams between textures..
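
A minimal sketch of that kind of compensation (the sign and exact magnitude
here are an assumption about one renderer's convention, not a statement of
which API is "right"): nudge the texture matrix by half a texel on the GL path
so both renderers sample the tiles with the same hotspot.

#include <GL/gl.h>

void ApplyHalfTexelOffset(float texWidth, float texHeight)
{
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glTranslatef(0.5f / texWidth, 0.5f / texHeight, 0.0f);   // shift sampling by half a texel
    glMatrixMode(GL_MODELVIEW);
}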

The same thing is behind the reason why OpenGL has an extension to change the
border clamping mode for texture mapping.. it's because NV and ATI, for
instance, use a different hotspot in hardware in GL (ATI does it right, NV
does it wrong, but a while ago most apps were developed on NV hardware and
produced filtering artifacts on ATI; ATI decided to expose their hardware's
ability to adjust the hotspot in software to match the D3D filtering rules NV
actually used at the time in their GL drivers.. so ATI fixed most apps by
exposing the incorrect D3D filtering as an extension; old applications, still
beware!). I don't know if NV has remedied this filtering issue since then,
whether OpenGL programmers still use *incorrect* filtering by default to look
correct on NV hardware, or how portable this kind of code is between different
OpenGL implementations.. or whether other programmers even CARE?

In Direct3D I don't have this particular issue, for example. It's this sort
of "little stuff" that exists behind the scenes that nobody talks about
(except me, seemingly, being The Greatest Troll of All Time, or so I at least
believe..)


Regards,
TGTOAT ;)


wogston

unread,
Aug 25, 2003, 5:39:19 AM8/25/03
to
> As far as the nooverwrite and dynamic VB goes, the difference to static
r+w
> VB, the performance difference is not "dramatic", but still ~10% class and
> this means nearly million triangles a second. At 100 frames per second
this
> means extra 10K triangles per frame at no extra cost in development time.

Disclaimer: on my current development box hardware.. but I have witnessed a
similar percentage in different configurations as well (I have bundles of 3D
cards for testing).

