Thanks for answering,
I'm not sure Dom0 has its own CPU in that case, but the problem happens
even when no DomUs are running (it is not much worse with DomUs
installed). That's why I don't understand why I get these interrupt
latency problems: a normal Linux kernel behaves perfectly under the
same load (no IRQ loss).
I don't think scheduling is the cause here. Of course, better
scheduling would mean less load and would improve my case, but the
main problem seems to be in how interrupts are handled.
Do you think there is anything I could try? Oh, and I tried 3.1 this
morning (with the basic 2.6.18 Xen kernel, without any customization),
and so far I see the same problems without a single DomU running.
François.
Petersson, Mats wrote:
-----Original Message-----
From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
[mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of
François Delawarde
Sent: 21 May 2007 15:20
To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Fwd: Re: [Xen-devel] Zaptel PCI IRQ problem]
Hi,
Sorry to insist, but I would really like to be able to use Xen and my
zaptel hardware together in Dom0. I was just wondering whether the 3.1
release contains any changes relative to 3.0.4 in IRQ handling or
scheduling that could work around my problem.
As far as I can see, there's no improvement in 3.1 over 3.0.4 in how interrupts are handled or how the scheduling works [I'm not really sure how that could practically be improved without losing performance elsewhere - this is a case of "you can make it right for some people some of the time, but not for all people all of the time"].
This can possibly be solved by restricting which other domains run on the same CPU as Dom0. There will certainly be some load on Dom0 because qemu-dm runs there, but unless you're running disk or network benchmarks in your DomU, you should get reasonable performance in Dom0 without much effort.
If you share the Dom0 CPU with DomUs, then you have little chance of getting it to work.
All this assumes I understand correctly that the latency between the interrupt and the actual code executing in user mode is the key to the problem. Making sure Dom0 runs on its own CPU ensures there's very little overhead compared to native.
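Something along these lines should do it with xm (the domain name and CPU numbers below are only examples, assuming a multi-core box):

    # give Dom0 a physical core of its own (VCPU 0 -> CPU 0)
    xm vcpu-pin Domain-0 0 0

    # keep the guest's VCPUs off CPU 0 (repeat per guest/VCPU)
    xm vcpu-pin mydomu 0 1-3

To make the guest part permanent, put cpus = "1-3" in the domU config file.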
--
Mats
Thanks,
François.
----
I actually first asked on the Asterisk mailing lists, and a few people
told me that it was Xen's fault, as it was not yet 'mature' enough to
have good IRQ handling under load.
Note that I ran tests over the last few days, as I wasn't sure whether
it was Xen or not, and the exact same system works perfectly with a
normal Linux kernel (same config file, except that the Xen stuff is
removed). A Dom0 kernel without any VMs running behaves the way I
described (badly), and I tried both schedulers (sedf and credit)
without success.
It doesn't appear to be a load problem, as the load is about the same
as with the non-Xen kernel I tried; the difference is in how IRQs are
handled under load.
I'm talking about a machine that is certainly not overloaded, but that
once in a while suffers some iowait for disk access. Under the Xen
kernel, if I kill everything I can and leave only Asterisk with at
most one simultaneous conversation, it works quite nicely.
I'm using the Debian patches for 2.6.18 (I think they actually come
from Fedora), and just want to know whether this issue is known,
whether it has been or will be resolved in future versions, whether
there is any way I can deal with it through kernel configuration, or
whether I should wait a few more months/years to be able to use Xen in
my specific setting.
Thanks,
François.
Ian Pratt wrote:
I'm currently trying to run an Asterisk server in a Xen kernel under
Dom0 (Debian kernel 2.6.18 with Xen hypervisor 3.0.4). I had read of
some possible timing issues with ztdummy (using rtc) under DomU, but I
have a zaptel-compatible PCI card (TDM400P), and I experience big
problems with missed IRQs every time there is a bit of load on the
server (for example, when an HVM DomU is running). The card is
supposed to report 1000 interrupts per second, but it doesn't, and the
consequence is horrible crackling sound in calls. Running the zttest
utility to check the stability of those interrupts under a small bit
of load, I get:
I believe folks have had success running asterisk in a domU and
assigning the PCI device directly to the guest. It's best to set the
affinity masks for other guests and dom0 such that the domU with
asterisk in it has a dedicated physical CPU core.
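Roughly, you hide the card from dom0 with pciback and hand it to the guest in its config file; the PCI address below is just an example, take the real one from lspci:

    # dom0 kernel command line (pciback compiled in):
    pciback.hide=(0000:00:0a.0)

    # in the domU config file:
    pci  = [ '0000:00:0a.0' ]
    cpus = "3"    # dedicate a physical core to this guest

If pciback is built as a module, 'modprobe pciback hide=(0000:00:0a.0)' should do the same job.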
We ran asterisk on an older version of Xen without any
problems, and nothing has changed that should affect Xen's
ability to do this. [you could try using the sedf scheduler
if you still have problems with 'credit']
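The scheduler is selected at boot on the Xen line in grub, e.g. (paths and versions here are illustrative):

    # /boot/grub/menu.lst
    kernel /boot/xen-3.0.4.gz sched=sedf
    module /boot/vmlinuz-2.6.18-xen root=/dev/sda1 ro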
Ian
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel