
Konrad Rzeszutek Wilk
Jun 10, 2013, 5:10:01 PM
Please see attached patch. It fixes it for me.

Konrad Rzeszutek Wilk
Jun 10, 2013, 5:10:02 PM
There are two toolstacks that can instruct the Xen PCI frontend
and backend to change states: 'xm' (Python code with a daemon)
and 'xl' (a C library that does not keep state changes).

With 'xm', the path to disconnect a PCI device (xm pci-detach
<guest> <BDF>) is:

4(Connected)->7(Reconfiguring*)-> 8(Reconfigured)-> 4(Connected)->5(Closing*).

The * is for states that the toolstack sets. For 'xl', it is similar:

4(Connected)->7(Reconfiguring*)-> 8(Reconfigured)-> 4(Connected)

Both of them also tear down the XenBus structure, so the backend
state ends up going to 3(Initialised), and pcifront_xenbus_remove is called.

When a PCI device is plugged in (xm pci-attach <guest> <BDF>)
both of them follow the same pattern:
2(InitWait*), 3(Initialized*), 4(Connected*)->4(Connected).

[xen-pcifront ignores the 2,3 state changes and only acts when
4 (Connected) has been reached]
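
For reference, the numeric values above are the XenbusState codes from
Xen's interface headers; a sketch of the enumeration (the canonical
definition lives in xen/interface/io/xenbus.h) is:

enum xenbus_state {
        XenbusStateUnknown       = 0,
        XenbusStateInitialising  = 1,
        XenbusStateInitWait      = 2,  /* early init done; waiting for hotplug */
        XenbusStateInitialised   = 3,
        XenbusStateConnected     = 4,
        XenbusStateClosing       = 5,
        XenbusStateClosed        = 6,
        XenbusStateReconfiguring = 7,
        XenbusStateReconfigured  = 8,
};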

The problem is that git commit 3d925320e9e2de162bd138bf97816bda8c3f71be
("xen/pcifront: Use Xen-SWIOTLB when initting if required") introduced
a mechanism to initialize the SWIOTLB when the Xen PCI frontend moves to
the Connected state. It also added an aggressive seatbelt check that
warns the user if one tries to change to the Connected state without
first hitting the Closing state:

pcifront pci-0: PCI frontend already installed!

However, that check can be relaxed: we can continue working even
if the frontend is instructed to be in the 'Connected' state with
no devices and then gets tickled into the 'Connected' state again.

In other words, the 4(Connected)->5(Closing)->4(Connected) sequence
was expected, while 4(Connected)-> (anything but 5(Closing)) ->4(Connected)
was not. This patch removes the aggressive check and allows
Xen pcifront to work with the 'xl' toolstack.

Cc: Bjorn Helgaas <bhel...@google.com>
Cc: linu...@vger.kernel.org
Cc: sta...@vger.kernel.org
Signed-off-by: Konrad Rzeszutek Wilk <konra...@oracle.com>
---
drivers/pci/xen-pcifront.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index ac99515..cc46e253 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -675,10 +675,9 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
         if (!pcifront_dev) {
                 dev_info(&pdev->xdev->dev, "Installing PCI frontend\n");
                 pcifront_dev = pdev;
-        } else {
-                dev_err(&pdev->xdev->dev, "PCI frontend already installed!\n");
+        } else
                 err = -EEXIST;
-        }
+
         spin_unlock(&pcifront_dev_lock);

         if (!err && !swiotlb_nr_tbl()) {
@@ -846,7 +845,7 @@ static int pcifront_try_connect(struct pcifront_device *pdev)
                 goto out;

         err = pcifront_connect_and_init_dma(pdev);
-        if (err) {
+        if (err && err != -EEXIST) {
                 xenbus_dev_fatal(pdev->xdev, err,
                                  "Error setting up PCI Frontend");
                 goto out;
--
1.8.1.4

Jan Beulich
Jun 11, 2013, 3:30:02 AM
I actually think this shouldn't be worked around here, but fixed in
xl. Any device removed from a guest should be driven towards
the "Closed" state.

Jan

George Dunlap
Jun 11, 2013, 5:10:03 AM
Yeah, that seems pretty obvious to me. The weird thing is that this
wasn't noticed before -- does this work in 4.2? Have you been doing
this test all along, or has it only broken recently?

I've reproduced it on one of my test boxes; let me see if I can sort it out.

-George

konrad wilk
Jun 11, 2013, 9:10:03 AM
There is also the per-device state. Those are moved to 5(Closing) while
the whole connection is still in the 4(Connected) state. In essence, all
of the per-device states are closed; it is just that the global state is
still Connected.
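
A minimal kernel-style sketch of that distinction, assuming the
xen-pcifront convention of one "state-%d" xenstore node per device
alongside the single bus "state" node (dump_device_states() is a
hypothetical helper for illustration, not code from the driver):

/* Read the per-device state nodes ("state-0", "state-1", ...) that
 * live next to the single bus "state" node on the backend side. */
static void dump_device_states(struct xenbus_device *xdev, int num_devs)
{
        int i;

        for (i = 0; i < num_devs; i++) {
                char node[16];
                int state;

                snprintf(node, sizeof(node), "state-%d", i);
                /* xenbus_scanf() returns the number of items matched,
                 * or a negative errno on failure. */
                if (xenbus_scanf(XBT_NIL, xdev->otherend, node,
                                 "%d", &state) != 1)
                        state = XenbusStateUnknown;

                dev_info(&xdev->dev, "device %d state: %d\n", i, state);
        }
}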


>
> Yeah, that seems pretty obvious to me. The weird thing is that this
> wasn't noticed before -- does this work in 4.2? Have you been doing
> this test all along, or has it only broken recently?

I just reproduced this in Xen 4.2. I believe the reason I did not
see this before is because I was primarily using 'xm'.
>
> I've reproduced it on one of my test boxes; let me see if I can sort
> it out.

OK.

OS Engineering
Jun 11, 2013, 11:20:02 AM
Hi Jens,

In continuation of our previous communication, we have carried out a performance comparison of EnhanceIO, bcache and dm-cache.

We found that EnhanceIO provides better throughput than bcache and dm-cache on a zipf workload (theta=1.2) for write-through caches.
However, for write-back caches, we found that dm-cache had the best throughput, followed by EnhanceIO and then bcache. dm-cache commits on-disk metadata every time a REQ_SYNC or REQ_FUA bio is written; if no such requests are made, it commits metadata once every second, so if power is lost it may lose some recent writes. EnhanceIO and bcache, by contrast, do not acknowledge I/O completion until both the I/O and its metadata hit the SSD. Hence, EnhanceIO and bcache provide higher data integrity at a cost in performance.

The fio config and setup information follows:
HDD : 100GB
SSD : 20GB
Mode : write through / write back
Cache block_size : 4KB for bcache and EnhanceIO, 256KB for dm-cache

The other options have been left to their default values.

Note:
1) In the case of dm-cache, we used two partitions of the same SSD: a 1GB partition for metadata and a 20GB partition as the caching device. This was done to ensure a fair comparison, as EnhanceIO and bcache do not use a separate metadata device.

2) To ensure proper cache warm-up, we turned off sequential bypass in bcache. This does not affect our performance results, as they are taken for a random workload.

For each test, we first performed a warm up run using the following fio options:
fio --direct=1 --size=90% --filesize=20G --blocksize=4k --ioengine=libaio --rw=rw --rwmixread=100 --rwmixwrite=0 --iodepth=8 ...

Then, we performed our actual run with the following fio options:
fio --direct=1 --size=100% --filesize=20G --blocksize=4k --ioengine=libaio --rw=randrw --rwmixread=90 --rwmixwrite=10 --iodepth=8 --numjobs=4 --random_distribution=zipf:1.2 ...

============================= Write Through ===============================
Type        Read Latency(ms)   Write Latency(ms)   Read(MB/s)   Write(MB/s)
===========================================================================
EnhanceIO   1.58               16.53               32.91        3.65
bcache      0.58               31.05               27.17        3.02
dm-cache    0.24               27.45               31.05        3.44

============================= Write Back ==================================
Type        Read Latency(ms)   Write Latency(ms)   Read(MB/s)   Write(MB/s)
===========================================================================
EnhanceIO   0.34               4.98                138.72       15.40
bcache      0.95               1.76                106.82       11.85
dm-cache    0.58               0.55                193.76       21.52

============================ Base Line ====================================
Type        Read Latency(ms)   Write Latency(ms)   Read(MB/s)   Write(MB/s)
===========================================================================
HDD         6.22               27.23               13.51        1.49
SSD         0.47               0.42                235.87       26.21

We have written scripts that aid in cache creation, deletion, and performance runs for all of these caching solutions. These scripts can be found at:
https://github.com/stec-inc/EnhanceIO/tree/master/performance_test

Thanks and Regards,
sTec Team


George Dunlap
Jun 11, 2013, 11:50:02 AM
On 06/10/2013 10:06 PM, Konrad Rzeszutek Wilk wrote:
> There are two toolstacks that can instruct the Xen PCI frontend
> and backend to change states: 'xm' (Python code with a daemon)
> and 'xl' (a C library that does not keep state changes).
>
> With 'xm', the path to disconnect a PCI device (xm pci-detach
> <guest> <BDF>) is:
>
> 4(Connected)->7(Reconfiguring*)-> 8(Reconfigured)-> 4(Connected)->5(Closing*).
>
> The * is for states that the toolstack sets. For 'xl', it is similar:
>
> 4(Connected)->7(Reconfiguring*)-> 8(Reconfigured)-> 4(Connected)
>
> Both of them also tear down the XenBus structure, so the backend
> state ends up going to 3(Initialised), and pcifront_xenbus_remove is called.

So I looked a little bit into this; there are actually two different
states involved in this handshake. In order to disconnect a
*device*, xl signals using the *bus* state, like this:
* Wait for the *bus* to be in state 4(Connected)
* Set the *device* state to 5(Closing)
* Set the *bus* state to 7(Reconfiguring)
* Wait for the *bus* state to return to 4(Connected)

So are all of these states you see the *bus* state? And why would you
disconnect the whole pci bus if you're only removing one device?
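
For concreteness, a minimal userspace sketch of those four steps against
xenstore using libxenstore; BUS_STATE and DEV_STATE are placeholder
paths for illustration only, not the nodes xl actually computes:

/* Sketch of the four-step unplug handshake above (link with -lxenstore;
 * the header is <xenstore.h> on recent Xen, <xs.h> on older trees). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <xenstore.h>

#define BUS_STATE "/local/domain/0/backend/pci/1/0/state"   /* hypothetical */
#define DEV_STATE "/local/domain/0/backend/pci/1/0/state-0" /* hypothetical */

static int read_state(struct xs_handle *xs, const char *path)
{
        unsigned int len;
        char *val = xs_read(xs, XBT_NULL, path, &len);
        int state = val ? atoi(val) : 0;

        free(val);
        return state;
}

static void write_state(struct xs_handle *xs, const char *path, int state)
{
        char buf[4];

        snprintf(buf, sizeof(buf), "%d", state);
        xs_write(xs, XBT_NULL, path, buf, strlen(buf));
}

int main(void)
{
        struct xs_handle *xs = xs_open(0);

        if (!xs)
                return 1;
        while (read_state(xs, BUS_STATE) != 4)  /* 1. wait for bus 4(Connected) */
                usleep(100000);
        write_state(xs, DEV_STATE, 5);          /* 2. device -> 5(Closing) */
        write_state(xs, BUS_STATE, 7);          /* 3. bus -> 7(Reconfiguring) */
        while (read_state(xs, BUS_STATE) != 4)  /* 4. wait for bus 4(Connected) */
                usleep(100000);
        xs_close(xs);
        return 0;
}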

-George

konrad wilk
Jun 11, 2013, 12:10:02 PM

On 6/11/2013 11:36 AM, George Dunlap wrote:
> On 06/10/2013 10:06 PM, Konrad Rzeszutek Wilk wrote:
>> There are two toolstacks that can instruct the Xen PCI frontend
>> and backend to change states: 'xm' (Python code with a daemon)
>> and 'xl' (a C library that does not keep state changes).
>>
>> With 'xm', the path to disconnect a PCI device (xm pci-detach
>> <guest> <BDF>) is:
>>
>> 4(Connected)->7(Reconfiguring*)-> 8(Reconfigured)->
>> 4(Connected)->5(Closing*).
>>
>> The * is for states that the toolstack sets. For 'xl', it is similar:
>>
>> 4(Connected)->7(Reconfiguring*)-> 8(Reconfigured)-> 4(Connected)
>>
>> Both of them also tear down the XenBus structure, so the backend
>> state ends up going to 3(Initialised), and
>> pcifront_xenbus_remove is called.
>
> So I looked a little bit into this; there are actually two different
> states involved in this handshake. In order to disconnect a
> *device*, xl signals using the *bus* state, like this:
> * Wait for the *bus* to be in state 4(Connected)
> * Set the *device* state to 5(Closing)
> * Set the *bus* state to 7(Reconfiguring)
> * Wait for the *bus* state to return to 4(Connected)
>
> So are all of these states you see the *bus* state? And why would you
> disconnect the whole pci bus if you're only removing one device?

Correct. The states I enumerated are *bus* states, not per-device states.
I presume (though I haven't checked xm) that Xend has some logic to only
disconnect the bus once all of the PCI devices have been disconnected;
'xl' does not do that.

The testing I did was just with one PCI device.

George Dunlap
Jun 11, 2013, 12:20:02 PM
Ah, OK -- I see now. The problem is that the code on the Linux side
didn't know about the whole "4->7->8->4" thing to unplug a device. In
all likelihood, if you had used xm with two devices (so that the bus
didn't get disconnected), you would have run across the same error.

So at least part of the problem *is* a bug in Linux.

That doesn't explain why I have problems doing this on Debian's version
of 3.2 -- unless the "fix" you mentioned above was backported to the
stable kernel, perhaps?

konrad wilk
Jun 11, 2013, 12:30:03 PM
Right.
>
> That doesn't explain why I have problems doing this on Debian's
> version of 3.2 -- unless the "fix" you mentioned above was backported
> to the stable kernel, perhaps?
No. It was a feature.

George Dunlap
Jun 12, 2013, 9:50:01 AM
On 12/06/13 14:45, Konrad Rzeszutek Wilk wrote:
> Good! Bjorn, would you be OK Ack-ing the patch I sent (attached here
> for reference) or putting it in your queue for Linus?
>
> My plan would be to send it to Linus in the 3.11 merge window.

One nit -- "to work with the 'xl' toolstack" -- didn't we theorize this
would also be broken with xm if you had two devices passed through?

Konrad Rzeszutek Wilk
Jun 12, 2013, 9:50:01 AM
On Tue, Jun 11, 2013 at 05:17:45PM +0100, George Dunlap wrote:
0001-xen-pci-Deal-with-toolstack-missing-an-XenbusStateCl.patch

Konrad Rzeszutek Wilk
Jun 12, 2013, 10:30:01 AM
Yes. I will fix up the title to reflect that shortly (say Friday?)

Thanks for your sharp eyes.

Bjorn Helgaas
Jun 12, 2013, 1:30:03 PM
Sure; this is your baby :) Why don't you handle it via your tree,
since it's more related to xen than any PCI core stuff.

Acked-by: Bjorn Helgaas <bhel...@google.com>


Konrad Rzeszutek Wilk
Jun 14, 2013, 12:30:02 PM
> >> So at least part of the problem *is* a bug in Linux.
> >
> > Good! Bjorn, would you be OK Ack-ing the patch I sent (attached here
> > for reference) or putting it in your queue for Linus?
> >
> > My plan would be to send it to Linus in the 3.11 merge window.
>
> Sure; this is your baby :) Why don't you handle it via your tree,
> since it's more related to xen than any PCI core stuff.

OK. Thanks!


Konrad Rzeszutek Wilk
Nov 4, 2013, 3:50:03 PM

> Sure; this is your baby :) Why don't you handle it via your tree,
> since it's more related to xen than any PCI core stuff.
>
> Acked-by: Bjorn Helgaas <bhel...@google.com>

Definitely fixed in v3.12. Just tested it and it works.

George, Ian, how do I "close" a bug in http://bugs.xenproject.org/xen/bug/12 ?