Re: [Xen-devel] QLogic Fibre Channel HBA Support

To: Steve Traugott <stevegt@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] QLogic Fibre Channel HBA Support
From: "Brian Wolfe" <brianw@xxxxxxxxxxxx>
Date: Mon Jul 5 13:50:24 2004 CDT
Cc: xen-devel@xxxxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 05 Jul 2004 19:54:01 +0100
Envelope-to: steven.hand@xxxxxxxxxxxx
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
Reply-to: brianw@xxxxxxxxxxxx
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
Yeah, I'm still seeing a hang on the very last domain in the list. I haven't 
been able to track down exactly what is causing it. Up until now I thought that 
its unique aspect of running Apache with PHP 5 beta 1 was causing it, with OOM 
due to php5b1 memory leaks....

Other than that, none of the other domains are hanging...even with nfs1 
periodically rebooting spontaneously (a hardware issue I'm tracking down on it).

I'm waiting for 2.0-alpha to be officially released before I start monkeying 
with the bleeding edge again. Once that happens we will start prototyping the 
xenctld in Python using the existing tools.
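
Purely as a sketch of the rough shape that could take (the socket path, the 
command names, and the management tool it shells out to are all placeholders, 
not anything that exists today), a first cut in Python might look like:

    #!/usr/bin/env python
    # Hypothetical sketch of a minimal control daemon: listen on a Unix
    # socket, read one command per connection, and shell out to an assumed
    # existing management tool.  Nothing here reflects an actual design.
    import os
    import socket
    import subprocess

    SOCKET_PATH = "/var/run/xenctld.sock"     # placeholder path
    ALLOWED = {"list", "create", "destroy"}   # placeholder command set

    def handle(line):
        parts = line.split()
        if not parts or parts[0] not in ALLOWED:
            return "ERR unknown command\n"
        # "xen-manage" is a stand-in name, not a real Xen utility.
        result = subprocess.run(["xen-manage"] + parts,
                                capture_output=True, text=True)
        return result.stdout or result.stderr

    def main():
        if os.path.exists(SOCKET_PATH):
            os.unlink(SOCKET_PATH)
        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        srv.bind(SOCKET_PATH)
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            with conn:
                data = conn.recv(4096).decode()
                conn.sendall(handle(data).encode())

    if __name__ == "__main__":
        main()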


On Mon, 5 Jul 2004 11:33:57 -0700
 Steve Traugott <stevegt@xxxxxxxxxxxxx> said...

Hi Brian,

On Mon, Jul 05, 2004 at 04:31:33AM +0000, Brian Wolfe wrote:
> As for stable enough to host, I'm using a 1.3 version from May or June
> (can't remember the version off the top of my head) to do virtual
> server hosting for my own machines and for several clients.
> Interestingly enough, because of the setup I moved to I'm getting
> better overall performance using 2 main NFS servers with 8-disk raid-5
> arrays than with individual machines using mirror sets!

When using NFS roots you haven't seen any hung clients?  I was getting a
lot of those; I got rid of a lot of the hangs by using the /dev/urandom
workaround, but still got a few after that, attributed to a long-lived
Linux NFS client kernel bug.  See these threads:

    05 May 2004:  xenolinux /dev/random
    12 May 2004:  Xen hangs with NFS root under high loads

I'm sure 1.3 fixes the first, but what about the second?
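
For anyone following along, the workaround boils down to replacing /dev/random 
inside the domU with a node backed by the non-blocking urandom device, so reads 
never stall waiting for entropy. A rough sketch, assuming the guest's NFS root 
is mounted at a hypothetical /mnt/domU-root (the device numbers are the standard 
Linux ones: 1,8 for random and 1,9 for urandom):

    import os
    import stat

    # Assumed workaround: recreate /dev/random in the guest root as a char
    # device backed by urandom (major 1, minor 9) instead of the blocking
    # random device (major 1, minor 8).  The mount path is illustrative only.
    dev_random = "/mnt/domU-root/dev/random"
    if os.path.exists(dev_random):
        os.remove(dev_random)
    os.mknod(dev_random, 0o666 | stat.S_IFCHR, os.makedev(1, 9))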

Steve
-- 
Stephen G. Traugott  (KG6HDQ)
UNIX/Linux Infrastructure Architect, TerraLuna LLC
stevegt@xxxxxxxxxxxxx 
http://www.stevegt.com -- http://Infrastructures.Org 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel