I've got my Radeon X800 XT (AGP) working successfully, but in OS/2 the
default SNAP-imposed clock settings are 400 MHz core clock and 400 MHz memory
clock.
I have successfully used the ATI utility in WinXP to test the video card and can
reliably (w/o ANY issues) run the card at 540/600 MHz clocks. I would like to try
this under OS/2, if anything just out of curiosity, to see if the display
performance improves.
It is my understanding that SNAP implements the video-card/chipset-specific
logic as binary 'nucleus'-type code. I believe this lives in the
'graphics.bpd' file. Does anyone have any ideas on how I could go about tweaking
the default clock settings? Or, for that matter, on how to interpret the
information in the 'graphics.bpd' file?
Here is the relevant SNAP log information:
Graphics device configuration:
Manufacturer......... ATI
Chipset.............. Radeon X800 Series
Bus Type............. AGP bus
Memory............... 131072 KB
DAC.................. ATI Internal 24 bit DAC
Clock................ ATI Internal Clock
Memory Clock......... 400 MHz
Default Memory Clock. 400 MHz
Maximum Memory Clock. 400 MHz
Driver Revision...... 3.2, Build 29
Driver Build......... Sep 25 2006
Certified Version.... 1.60
Certified Date....... Sep 25 2006
BIOS header information:
'U.n.+.........................IBM...............'
' 761295520......??..............07/07/04,13:42:1'
'9...............D.87...... .....113-A26104-102..'
'....R420.AGP.DDR3...RADEON X800 XT VIVO.....YOU '
'HAVE NOT CONNECTED THE POWER CABLE TO YOUR VIDEO'
' CARD.PLEASE REFER TO THE 'GETTING STARTED GUIDE'
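In the meantime, one idea I had was to simply scan the binary for candidate
clock values. Here is a quick sketch I hacked up; note that it rests entirely
on a guess, namely that the clocks sit in 'graphics.bpd' as raw 16-bit
little-endian MHz values:

/* bpdscan.c - scan a SNAP .bpd file for candidate clock values.
 * Pure guesswork: assumes the clocks are stored as raw 16-bit
 * little-endian MHz values somewhere in the file. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "graphics.bpd";
    unsigned target  = (argc > 2) ? (unsigned)atoi(argv[2]) : 400; /* MHz */
    FILE *f = fopen(path, "rb");
    long off = 0;
    int c, prev = -1;

    if (!f) { perror(path); return 1; }
    /* Slide a 2-byte little-endian window across the file. */
    while ((c = fgetc(f)) != EOF) {
        if (prev >= 0 && ((unsigned)prev | ((unsigned)c << 8)) == target)
            printf("candidate at offset 0x%lx\n", off - 1);
        prev = c;
        off++;
    }
    fclose(f);
    return 0;
}

Every hit is just a 2-byte pattern match, of course, so I'd expect plenty of
false positives; the idea would be to compare dumps from cards with different
default clocks.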
> I have successfully used the ATI utility in WinXP to test the video card and
> can reliably (w/o ANY issues) run the card at 540/600 MHz clocks. I would
> like to try this under OS/2, if anything just out of curiosity, to see if
> the display performance improves.
The tool for that is GAMemClk, but...
> Memory Clock......... 400 MHz
> Default Memory Clock. 400 MHz
> Maximum Memory Clock. 400 MHz
... this says it can't be changed (overclocking was not implemented in
all drivers).
I really doubt that the OS/2 driver can dispatch enough work to the GPU
for overclocking to be of any use in any case. Except for the entropy
boost, of course.
Marcel
Yes indeed...but I'm guessing that the MAX value is stored somewhere in the
driver file...and that is what I'm attempting to 'unlock'.
I mean, to be honest, if I'm running it at 400 MHz, I am actually underclocking
my particular video card...
> Dariusz Piatkowski wrote:
> > I've got my Radeon X800 XT (AGP) working successfully, but in OS/2 the
> > default SNAP-imposed clock settings are 400 MHz core clock and 400 MHz
> > memory clock.
SNIP
> I really doubt that the OS/2 driver can dispatch enough work to the GPU
> for overclocking to be of any use in any case. Except for the entropy
> boost, of course.
> Marcel
You may be absolutely right...but there is only one way to find out for
sure...if the boost turns out to be heat only, as you suggest, then letting it
default to the underclocked 400 MHz setting might be acceptable.
> Yes indeed...but I'm guessing that the MAX value is stored somewhere in the
> driver file...and that is what I'm attempting to 'unlock'.
>
> I mean, to be honest, if I'm running it at 400 MHz, I am actually
> underclocking my particular video card...
As I recall, it doesn't actually set that; it just asks the video BIOS
for the info and reports it back. Why the video BIOS doesn't report
accurate info is one question; other questions are whether the BIOS
interfaces changed, or whether there is a bug in the driver.
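If you want to poke at the BIOS image directly, the header at least is easy to
identify. A minimal sanity check on a ROM dump could look like the sketch
below; it only verifies the standard 0x55AA expansion-ROM signature and ATI's
'761295520' magic string (both visible in the header dump earlier in the
thread), and makes no claim about where any clock tables live beyond that:

/* romcheck.c - sanity-check a dumped video BIOS image. */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    unsigned char buf[512];
    const char magic[] = "761295520";  /* ATI's ASCII magic number */
    size_t n, i, m = strlen(magic);
    FILE *f;

    if (argc < 2) { fprintf(stderr, "usage: romcheck <bios.rom>\n"); return 1; }
    f = fopen(argv[1], "rb");
    if (!f) { perror(argv[1]); return 1; }
    n = fread(buf, 1, sizeof buf, f);
    fclose(f);

    /* All PC expansion ROMs start with 0x55 0xAA, then a size byte. */
    if (n < 3 || buf[0] != 0x55 || buf[1] != 0xAA) {
        fprintf(stderr, "no 0x55AA expansion-ROM signature\n");
        return 1;
    }
    printf("ROM signature OK, size field = %u x 512 bytes\n", buf[2]);

    for (i = 0; i + m <= n; i++) {
        if (memcmp(buf + i, magic, m) == 0) {
            printf("ATI magic \"%s\" at offset 0x%lx\n",
                   magic, (unsigned long)i);
            return 0;
        }
    }
    printf("ATI magic string not in the first %lu bytes\n", (unsigned long)n);
    return 0;
}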
Given your excellent suggestion about the gamemclk program, I went ahead and made
some changes to the source code to test a few things out:
1) I disabled the clock MAX check
2) I disabled the 'programmable engine clock' check (both sketched below)
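In toy form, the shape of what I cut out is roughly this; the names below are
mine, paraphrased from memory, not the actual SNAP source:

/* toy reconstruction of the two gamemclk guards I disabled --
 * names are mine, not the SNAP tree's */
#include <stdio.h>

#define HAVE_PROG_CLOCKS 0   /* what the driver reports for my card */
#define MAX_MEM_CLOCK   400  /* the reported maximum, in MHz */

static void set_clocks(int engine, int mem)
{
    /* stand-in for the real SNAP driver call */
    printf("requesting %d/%d MHz\n", engine, mem);
}

int main(void)
{
    int engine = 540, mem = 600;   /* the values I wanted to try */

#if 0  /* guard 1: 'programmable engine clock' check, disabled */
    if (!HAVE_PROG_CLOCKS) {
        fprintf(stderr, "engine clock is not programmable\n");
        return 1;
    }
#endif
#if 0  /* guard 2: clamp to the reported maximum, disabled */
    if (mem > MAX_MEM_CLOCK)
        mem = MAX_MEM_CLOCK;
#endif
    /* with both guards out, the requested values go straight through */
    set_clocks(engine, mem);
    return 0;
}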
I had at some point downloaded the SNAP source tree from somewhere on
the web...I think it was around the time I was trying to figure out why my
1920x1200 Samsung 245T display wasn't being recognized by the SNAP drivers.
Anyway, given your suggestion, I changed the code to test this out by simply
removing those 'safety' checks. I recompiled that single program...however,
the results aren't that great...and maybe they reaffirm what you were saying.
Even though the application accepts whatever core/mem clock values I give it,
and even though those values are being passed to the SNAP driver, the graphics
performance benchmarks I am running (Sysbench 0.9.5d) suggest that the values
have no impact on the performance of the video card.
Sure would be nice to understand what is really going on...and why the default
driver settings are 275 MHz core clock and 400 MHz memory clock, when the card
itself defaults to 500/500.
Thanks!
Sorry that I'm more than a month late to this party, but I think you
guys were headed in the right direction. I know my nVidia card has
several power modes, which it chooses on the fly based on demand. The
base clock rates vary as the power modes change. You'll probably not
see higher clocks unless you activate some 3D functionality or
on-board colorspace conversion / video decoding.
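Just to illustrate the idea (a toy sketch of my own, not taken from any actual
driver source): the driver keeps a small table of power modes and picks one
from the current demand, so an idle 2D desktop simply never selects the top
clocks.

/* toy demand-based power-mode selection -- invented for
 * illustration, not from any real driver */
#include <stdio.h>

struct power_mode { const char *name; int core_mhz; int mem_mhz; };

static const struct power_mode modes[] = {
    { "idle/2D", 100, 200 },
    { "video",   250, 400 },
    { "3D",      400, 500 },
};

/* map a 0..100 demand estimate onto a mode */
static const struct power_mode *pick_mode(int demand)
{
    if (demand < 30) return &modes[0];
    if (demand < 70) return &modes[1];
    return &modes[2];
}

int main(void)
{
    int demand;
    for (demand = 0; demand <= 100; demand += 50) {
        const struct power_mode *m = pick_mode(demand);
        printf("demand %3d%% -> %-7s (%d/%d MHz)\n",
               demand, m->name, m->core_mhz, m->mem_mhz);
    }
    return 0;
}

On a box that only ever runs a 2D desktop, a benchmark like Sysbench might
never push the card out of the low mode, which would be consistent with the
flat numbers reported above.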
--
Reverse the parts of the e-mail address to reply by mail.