xen-devel

RE: [Xen-devel] 4kb or 8kb kernel stacks?

To: "Ted Kaczmarek" <tedkaz@xxxxxxxxxxxxx>, "Chris Bainbridge" <chris.bainbridge@xxxxxxxxx>
Subject: RE: [Xen-devel] 4kb or 8kb kernel stacks?
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Sun, 2 Oct 2005 18:11:42 +0100
Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Sun, 02 Oct 2005 17:09:24 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcXG5rdyVYiq0aAJQF+5cV/cL50QEgAiiwYw
Thread-topic: [Xen-devel] 4kb or 8kb kernel stacks?
  
> Appears that 4k stacks on recent changesets is the root of
> my problem bringing SMP guests online. I switched to 8k
> kernel stacks and now I have no more problems bringing up SMP guests.
> 
> Is testing with 4k and 8k stacks part of the test suite?

Not currently, but since they're becoming the default in 2.6.13 we'll update
our configs accordingly.
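
(For reference, the i386 stack-size knob is CONFIG_4KSTACKS; the lines below
are a rough sketch of the two .config settings, assuming a 2.6 tree:)

  # 4KB kernel stacks; interrupts run on separate per-CPU IRQ stacks:
  CONFIG_4KSTACKS=y

  # 8KB kernel stacks (option disabled):
  # CONFIG_4KSTACKS is not set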

> I may be being ignorant here (very likely, hehe), but the 
> changes to irq handling must have some kind of ramification.

That's not the first place I'd look...

Ian

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
