xen-users

RE: [Xen-users] preferred hardware for xen?

To: "Gabor Szokoli" <szocske@xxxxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] preferred hardware for xen?
From: "Petersson, Mats" <Mats.Petersson@xxxxxxx>
Date: Mon, 14 Aug 2006 13:57:31 +0200
Delivery-date: Mon, 14 Aug 2006 04:58:37 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <de47c0230608140415j6813b522pfda64583139e3c60@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Aca/kwjdSSbiPPz2Qdeg+AqkNo+2MwAA8C2w
Thread-topic: [Xen-users] preferred hardware for xen?
 

> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of 
> Gabor Szokoli
> Sent: 14 August 2006 12:15
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-users] preferred hardware for xen?
> 
> Hi!
> 
> We are a small software development company, making a niche product which
> runs on dedicated HW in production environments. We normally ship with
> Debian on 32-bit SMP x86, like 1U IBM eServers.
> For internal testing purposes, however, our developers need to run
> dozens of instances independently, so we split bigger servers up into
> virtual machines.
> 
> We would like to migrate away from VMware, and are currently experimenting
> with Xen. Because we use two of our older P4 servers for the Xen tests, we
> can be tempted to blame the recurring "soft lockup detected" incidents on
> them. We get them consistently under high load.
> 
> My question is thus twofold:
> 
> 1, What's up with the "soft lockup" kernel panic thing? I see it
> mentioned all over the 'net, multiple bugs open, but have not found
> any writeup or explanation of what it really is, and how development
> on avoiding it is going. All I could figure out from the source is
> that the guest domain kernel panics when it has not been scheduled
> on all CPUs within 10 seconds. I can understand this is trouble in a
> regular OS, but a virtual one should instead be happy to experience
> such CPU affinity, no?
> We currently run 3.0.2+hg9697-0, kernel 2.6.16-2-xen-686 #1 SMP,
> Debian stable + libc6-xen from testing.

As far as I understand it, the soft lockup problem is caused by exhausting
the CPU: heavy loads running in Dom0 can starve the other domains, so they
"soft-lock" because they don't get to run while they are blocked by Dom0.
Running three CPU-intensive applications in three different domains on a
two-core machine, with something heavy also running in Dom0, would exhibit
this problem.
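
To illustrate what the guest is actually complaining about, here is a rough
sketch of the soft-lockup watchdog idea (a simplified illustration, NOT the
actual Linux 2.6.16 code): a low-priority per-CPU watchdog thread records a
timestamp whenever it actually gets to run, and the timer interrupt complains
if that timestamp is more than ~10 seconds old - which is exactly what
happens when the hypervisor doesn't schedule that VCPU for that long.

/* Simplified sketch of a per-CPU soft-lockup watchdog - not the real
 * kernel code, just the 10-second idea described above. */
#include <stdio.h>
#include <time.h>

#define SOFTLOCKUP_THRESHOLD 10          /* seconds without being scheduled */

static time_t last_touched;              /* per-CPU variable in the real kernel */

/* Run by a low-priority watchdog thread whenever it actually gets CPU time. */
static void touch_watchdog(void) { last_touched = time(NULL); }

/* Run from the timer tick; complains if the watchdog thread has been
 * starved - e.g. because the VCPU itself wasn't scheduled by the hypervisor. */
static void check_softlockup(void)
{
    if (time(NULL) - last_touched > SOFTLOCKUP_THRESHOLD)
        printf("BUG: soft lockup detected on CPU#0!\n");
}

int main(void)
{
    touch_watchdog();
    last_touched -= 11;                  /* pretend we were starved for 11s */
    check_softlockup();                  /* prints the soft-lockup message */
    return 0;
}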

However, with the latest code you get the "Credit scheduler", which is
more flexible in its approach to running domains; in particular, it can
migrate a VCPU from one physical core to another, rather than the old
approach of assigning a VCPU to a core at creation time and sticking to
that assignment.
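
Very roughly, the idea is this (a toy sketch of the concept, not the real
Xen scheduler code): each domain earns credit in proportion to its weight,
running VCPUs burn credit, and a physical CPU with nothing creditable to run
locally can pull a runnable VCPU over from another CPU - that pull is the
migration the old scheduler couldn't do.

/* Toy illustration of the credit-scheduler idea: weight-proportional
 * credit, and work-stealing across physical CPUs.  Not Xen code. */
#include <stdio.h>

struct vcpu { const char *name; int credit; };

/* Periodically hand out credit in proportion to each domain's weight. */
static void accounting(struct vcpu *v, int weight, int total_weight)
{
    v->credit += 300 * weight / total_weight;   /* 300 = credits per period */
}

/* A physical CPU picks a local VCPU with credit left; if there is none,
 * it may pull ("migrate") a runnable VCPU from another CPU's queue. */
static struct vcpu *pick(struct vcpu *local, struct vcpu *remote)
{
    if (local && local->credit > 0)
        return local;
    return remote;                              /* migration across cores */
}

int main(void)
{
    struct vcpu d1 = { "dom1-vcpu0", 0 }, d2 = { "dom2-vcpu0", 0 };
    accounting(&d1, 256, 512);                  /* equal weights: half each */
    accounting(&d2, 256, 512);
    d1.credit = 0;                              /* dom1 has burnt its credit */
    printf("CPU0 runs %s\n", pick(&d1, &d2)->name);
    return 0;
}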

So with the latest code (to be released as 3.0.3) it shouldn't happen
often - if at all. 

There is a discussion of this in last week's xen-devel archive, under a
title like "Soft lockups blocking Xen" or some such, where Ian Pratt
explains why switching to the latest code-base fixes the issue [also,
apparently one of the open bugs is the same person reporting it twice,
once for 32-bit and once for 64-bit].

By the way, this is MY RECOLLECTION of the problems - I would research
it more if your business depends on it, not just take my word for it. 
> 
> 2, We plan to purchase a few new servers to slice up. Should we look
> for something with Vanderpool? Pacifica? Anything else we should
> insist on to increase the probability of Xen working fine on it?
> Anything we should make sure is not in them? Are more or fewer
> CPUs/cores better?

Pacifica, which should now be called AMD-V or SVM, is your best choice.
But then I would say that, wouldn't I... ;-) If nothing else, AMD's
solution is less likely to be hindered by things like a "wrong BIOS" or
some such, since it comes with a new socket and there's no way that the
old BIOS would work with the new processor anyway... On the other hand,
we're not able to sell Opterons in Rev F (the revision that contains SVM)
just yet - and don't ask me when either, as I'm a software engineer, not
sales/marketing... So if you desperately need something multi-socket this
week, you'll have to go with Intel for sure. Any large manufacturer (HP,
IBM, etc.) should be able to give you some indication of when they will
have Rev F Opteron systems available, I would think...


More (real) cores is always better for virtualization, that's for sure.
Bear in mind that, particularly for hardware-assisted virtualization (HVM
-> SVM/VT), there is extra processor work involved in just about any of
a couple of dozen fairly common kernel operations (page-table manipulation
being one of the more notable ones).
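
To make that concrete, the extra work comes from the world switch: a guest
page-table update that is a single privileged instruction on bare metal
instead traps into the hypervisor, which has to validate it and fix up its
own (shadow) page tables before resuming the guest. A conceptual sketch
only (not real hypervisor code):

/* Conceptual illustration of why guest page-table updates cost more
 * under HVM: each privileged operation causes a world switch
 * (VMEXIT/#VMEXIT) into the hypervisor before the guest can continue. */
#include <stdio.h>

static void vmexit_handler(const char *reason)
{
    /* In a real hypervisor: decode the exit, validate the guest's new
     * page-table entry, update the shadow page tables, then resume. */
    printf("hypervisor: handling exit for '%s' (extra work vs. bare metal)\n",
           reason);
}

static void guest_writes_cr3(void)
{
    /* On bare metal this is one privileged instruction; under HVM it
     * traps, so the cost is the instruction plus the round trip below. */
    vmexit_handler("mov to CR3 / page-table update");
}

int main(void)
{
    guest_writes_cr3();
    return 0;
}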

--
Mats
> 
> 
> Thanks for any tips or advice:
> 
> Gabor Szokoli
> 
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-users
> 
> 
> 



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
