I am working on a PCI driver and have tested it on several hardware
combinations. The test output of the WDF driver on a PIII dual-processor
machine was different from the others; here is a summary.
The tests pass on uniprocessor and quad-core machines, but on the
dual-processor PC a small number of the 15000 tests fail.
Each test run generates around 60000 interrupts along with read and write
DMA accesses, which in turn raise further interrupts depending on the data
read or written.
I am currently synchronizing all read, write, and IOCTL requests, so to my
understanding there is no chance that two requests could be running
simultaneously on any core.
I just wanted to know: is there any special care to be taken on
dual-processor systems compared to multicore PCs?
Dual-processor configuration: PIII, 550 MHz, 386 MB RAM.
I hope WDF works fine on older processor configurations as well.
Thanks,
Prafulla
"kota" <ko...@discussions.microsoft.com> wrote in message
news:210066D8-8FBB-41E9...@microsoft.com...
Thanks for your reply. Could you let me know whether any special care is
needed in the driver with respect to differences between multicore and
multiprocessor systems?
I have protected the hardware registers and shared data with a spinlock
between the interrupt DPC and the IOCTL functions.
All IOCTL, read, and write functions are serialized with the help of a
mutex.
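For reference, KMDF can also provide this serialization natively with a
sequential queue; a minimal sketch, where device and the MyEvtIo* callback
names are illustrative:

    // In EvtDriverDeviceAdd: a sequential default queue delivers only
    // one request (read, write, or IOCTL) at a time, so no extra mutex
    // is needed for request-level serialization.
    WDF_IO_QUEUE_CONFIG queueConfig;
    NTSTATUS status;

    WDF_IO_QUEUE_CONFIG_INIT_DEFAULT_QUEUE(&queueConfig,
                                           WdfIoQueueDispatchSequential);
    queueConfig.EvtIoRead          = MyEvtIoRead;          // illustrative
    queueConfig.EvtIoWrite         = MyEvtIoWrite;         // illustrative
    queueConfig.EvtIoDeviceControl = MyEvtIoDeviceControl; // illustrative

    status = WdfIoQueueCreate(device, &queueConfig,
                              WDF_NO_OBJECT_ATTRIBUTES, WDF_NO_HANDLE);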
In the ISR, the global PCI interrupt is disabled and
WdfInterruptQueueDpcForIsr is used to queue the DPC, which services all
interrupts. Even though the global PCI interrupt is disabled, if any
interrupt condition arises during DPC execution, the DPC checks all the
interrupt flags again and services those as well.
If there are no more pending interrupts, the global PCI interrupt is
re-enabled at the end of the DPC.
With the above design, my understanding is that only one DPC will be
executing at any given time.
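For reference, the DPC side of that design might look roughly like this; a
sketch only, where GetDeviceContext, RegLock, ReadIntStatus,
ServiceInterrupts, and EnableGlobalPciInterrupt are hypothetical names:

    VOID MyEvtInterruptDpc(WDFINTERRUPT Interrupt, WDFOBJECT AssociatedObject)
    {
        PDEVICE_CONTEXT ctx =
            GetDeviceContext(WdfInterruptGetDevice(Interrupt));
        ULONG pending;

        UNREFERENCED_PARAMETER(AssociatedObject);

        WdfSpinLockAcquire(ctx->RegLock);  // same lock as the IOCTL paths

        // Keep servicing until no interrupt flags remain; conditions
        // that latch while we run are picked up on the next pass.
        while ((pending = ReadIntStatus(ctx)) != 0) {
            ServiceInterrupts(ctx, pending);
        }

        EnableGlobalPciInterrupt(ctx);     // re-arm at the end of the DPC

        WdfSpinLockRelease(ctx->RegLock);
    }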
In WDF, affinity can be set with WdfInterruptSetPolicy on OS >= Vista; how
can I do this on XP? In the earlier (NT-model) driver, IoConnectInterrupt
was used to set the affinity.
Are there any alternate methods available in WDF to set this on XP?
The affinity value, read from the interrupt resource's u.Interrupt field (a
KAFFINITY), is 0x3.
0x3 ==> IrqPolicyAllProcessorsInMachine. Does this indicate that the
hardware has no preference for any particular processor? In that case, can
I change the affinity value to direct interrupts to one processor?
The previous (NT-model) driver works fine, with zero failures, on this
hardware configuration.
Is any specific driver design required in WDF for multiprocessor systems?
Thanks,
Kota
"kota" <ko...@discussions.microsoft.com> wrote in message
news:01DEA010-605F-4544...@microsoft.com...
From the WDF documentation:
---------------
The WdfInterruptSetPolicy method is available in version 1.0 and later
versions of KMDF.
Windows Vista and later versions of the operating system allow drivers to
specify an interrupt's priority, processor affinity, and affinity policy.
These driver-supplied values override information that driver INF files or
system administrators place in the registry.
If a driver is running on an operating system version that is earlier than
Windows Vista, the framework ignores the values that the driver specifies
when it calls WdfInterruptSetPolicy.
---------------------
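A minimal sketch of such a call; the WdfIrqPolicySpecifiedProcessors policy
and the 0x1 mask (processor 0 only) are illustrative, and per the quoted
text, KMDF ignores these values on pre-Vista systems:

    // In EvtDriverDeviceAdd, after WdfInterruptCreate has returned the
    // WDFINTERRUPT handle: request delivery on processor 0 only.
    WdfInterruptSetPolicy(interrupt,
                          WdfIrqPolicySpecifiedProcessors,
                          WdfIrqPriorityNormal,
                          (KAFFINITY)0x1);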
The affinity value, read from the interrupt resource's u.Interrupt field (a
KAFFINITY), is 0x3.
0x3 ==> IrqPolicyAllProcessorsInMachine. Does this indicate that the
hardware has no preference for any particular processor? In that case, can
I change the affinity value to direct interrupts to one processor? Can you
clarify?
The previous driver also uses the same value, and it does not direct
interrupts to any one processor.
Thanks,
Kota
Next thing: check whether the registers are mapped uncached. If they are
cached, it may happen that different threads read different data from the
registers. As I remember, there is a way to disable caching for ports.
If you use the registers in memory mode, make sure that you declare your
pointer to the memory as volatile, or the optimizer may produce
unpredictable results (but I think this is not your problem, if the
information that your driver runs properly on another machine is correct).
Avoid reading or writing any registers in your ISR; just let the DPC do
that job. The ISR should only queue the DPC. While the ISR runs, the DPC
may already be running on another CPU!
If you protect your registers with a spinlock in the DPC, then even if the
same code runs on a different CPU it doesn't matter, because your registers
are protected.
Very important: do not mix mutexes with spinlocks. The only safe way to
protect registers against a DPC routine (which typically services the
interrupt causes and re-arms them) is a spinlock. Therefore, if you want to
read registers in your IOCTL routines, use a spinlock there as well, and do
not forget: the same spinlock instance as in your DPC.
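In KMDF terms, "the same spinlock instance" might look like this; a sketch
with illustrative names (ctx, RegLock, Regs, STATUS_REG):

    // One WDFSPINLOCK, created once (e.g. in EvtDriverDeviceAdd) and
    // kept in the device context, guards every register access:
    //     WdfSpinLockCreate(WDF_NO_OBJECT_ATTRIBUTES, &ctx->RegLock);
    //
    // IOCTL path -- acquires the very same lock object as the DPC does:
    WdfSpinLockAcquire(ctx->RegLock);
    value = ctx->Regs[STATUS_REG];     // Regs: volatile ULONG * mapping
    WdfSpinLockRelease(ctx->RegLock);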
Last but not least, a keyword: bridge latency times.
Sometimes there are "mysterious" caches, called bridges, on PCI boards.
If you program a register, this does not mean that the value is really set
on the PCI device by the time the write routine returns. The only way to
guarantee that the register(s) are set is to finish with a dummy read; then
you have the guarantee that the value has really arrived. It is a little
time-consuming, so do it only when you need the register content to be
updated, guaranteed, right after writing. Don't be afraid: not all PCI
devices behave this way (just the ones with bad timing constraints).
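A sketch of that flushing read, again with illustrative names (Regs as a
volatile ULONG * register mapping, CONTROL_REG as a register index):

    // A posted write may still be sitting in a PCI bridge when the
    // store returns; a read from the same device forces it out,
    // because PCI ordering pushes posted writes ahead of the read.
    ctx->Regs[CONTROL_REG] = value;    // posted write
    (VOID) ctx->Regs[CONTROL_REG];     // dummy read: flush the write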
Thanks for your detailed reply; please see my responses below.
"DeStefan" wrote:
> Are you sure that you protect your registers with the same instance of
> the spinlock object? If you are using different spinlock instances, it
> can happen that the spinlock is acquired on multiple CPUs, and therefore
> simultaneous writes or reads of the registers take place.
> YOU MUST protect all registers with the same spinlock object.
>
---- Yes, I am taking care of this.
> Next thing: check whether the registers are mapped uncached. If they are
> cached, it may happen that different threads read different data from the
> registers. As I remember, there is a way to disable caching for ports.
------ The registers are noncached; this is specified in the MmMapIoSpace
call.
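For reference, such a noncached mapping, done in EvtDevicePrepareHardware
from the translated resources, might look like this sketch (descriptor and
ctx->Regs are illustrative names):

    // descriptor: a CM_PARTIAL_RESOURCE_DESCRIPTOR of type
    // CmResourceTypeMemory from the translated resource list.
    ctx->Regs = (volatile ULONG *)
        MmMapIoSpace(descriptor->u.Memory.Start,
                     descriptor->u.Memory.Length,
                     MmNonCached);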
> If you use the registers in memory mode, make sure that you declare your
> pointer to the memory as volatile, or the optimizer may produce
> unpredictable results (but I think this is not your problem, if the
> information that your driver runs properly on another machine is correct).
>
------- I am using volatile wherever appropriate.
> Avoid reading or writing any registers in your ISR; just let the DPC do
> that job. The ISR should only queue the DPC. While the ISR runs, the DPC
> may already be running on another CPU!
> If you protect your registers with a spinlock in the DPC, then even if
> the same code runs on a different CPU it doesn't matter, because your
> registers are protected.
>
----- I am doing it the same way.
> Very important: do not mix mutexes with spinlocks. The only safe way to
> protect registers against a DPC routine (which typically services the
> interrupt causes and re-arms them) is a spinlock. Therefore, if you want
> to read registers in your IOCTL routines, use a spinlock there as well,
> and do not forget: the same spinlock instance as in your DPC.
>
----- I am using the same spinlock in both places.
> Last but not least, a keyword: bridge latency times.
> Sometimes there are "mysterious" caches, called bridges, on PCI boards.
> If you program a register, this does not mean that the value is really
> set on the PCI device by the time the write routine returns. The only way
> to guarantee that the register(s) are set is to finish with a dummy read;
> then you have the guarantee that the value has really arrived. It is a
> little time-consuming, so do it only when you need the register content
> to be updated, guaranteed, right after writing. Don't be afraid: not all
> PCI devices behave this way (just the ones with bad timing constraints).
>
------ After every write I flush it, i.e., I do a dummy read from that
page, which makes sure the data has been written.
But let me tell you about an experience I had with Intel 965 mainboard(s):
the PCI timing was slightly (really slightly) faster than on other
reference boards.
Because of that, I also saw very mysterious effects with port ins and outs.
So you may be searching for a software bug where there is none.
I searched for a software bug for over a month, and I got tired of
continually finding workarounds for the problems.
We involved our hardware department, and they supported me in analyzing the
PCI bus (an oscilloscope and a tangle of cables!). We had no success in our
efforts.
Also, the manufacturer (Intel) was not willing to change the timings
(indeed, why should they? we are a small company), and for most
applications the board was sufficient. Manipulating the hardware registers
of the PCI bridge was not an adequate way for us either, because that is a
really gruesome method.
The only way to test against this problem is to use the same processor on a
different mainboard from another manufacturer.
Even though you might be able to fix such things with some software tricks,
developing and researching them is so time-consuming that you should
consider saying: "This mainboard type is forbidden for use with your
driver." I know that this is much easier to write than to practice, but in
fact this was the only solution we practiced in our company.
Sorry again that I have no further ideas for you.
The correct way is to use READ_REGISTER_ULONG and friends instead.
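For example, a sketch with illustrative names (RegBase as the PUCHAR mapped
base, STATUS_OFFSET and CONTROL_OFFSET as byte offsets):

    // READ_REGISTER_ULONG / WRITE_REGISTER_ULONG include the required
    // compiler and memory barriers, so no bare volatile dereference is
    // needed and the compiler cannot reorder or cache the accesses.
    status = READ_REGISTER_ULONG((PULONG)(ctx->RegBase + STATUS_OFFSET));
    WRITE_REGISTER_ULONG((PULONG)(ctx->RegBase + CONTROL_OFFSET), value);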
> Avoid reading or writing any registers in your ISR.
You must at least:
- determine whether it is really your hardware that is interrupting;
- mark the interrupt as serviced by register writes; otherwise the ISR
will be called again and again.
> The ISR should only queue the DPC. While the ISR runs, the DPC may
> already be running on another CPU!
...and the DPC can be interrupted by the next ISR, and several instances of
DpcForIsr can run on several CPUs...
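A minimal sketch of an ISR that does only that much, with hypothetical
helper names (GetDeviceContext, DeviceIsInterrupting,
DisableGlobalPciInterrupt):

    BOOLEAN MyEvtInterruptIsr(WDFINTERRUPT Interrupt, ULONG MessageID)
    {
        PDEVICE_CONTEXT ctx =
            GetDeviceContext(WdfInterruptGetDevice(Interrupt));

        UNREFERENCED_PARAMETER(MessageID);

        // On a (possibly shared) PCI line, first check whether this
        // device is actually asserting the interrupt.
        if (!DeviceIsInterrupting(ctx)) {
            return FALSE;                       // not ours
        }

        // Quiet the device so the line deasserts; otherwise the ISR
        // would be called again and again.
        DisableGlobalPciInterrupt(ctx);

        WdfInterruptQueueDpcForIsr(Interrupt);  // defer real work to DPC
        return TRUE;
    }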
--
Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
ma...@storagecraft.com
http://www.storagecraft.com