WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

Re: [Xen-devel] Re: 2.6.37-rc1 mainline domU - BUG: unable to handle kernel paging request

To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Subject: Re: [Xen-devel] Re: 2.6.37-rc1 mainline domU - BUG: unable to handle kernel paging request
From: Boris Derzhavets <bderzhavets@xxxxxxxxx>
Date: Tue, 16 Nov 2010 13:42:54 -0800 (PST)
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, Bruce Edge <bruce.edge@xxxxxxxxx>
> So what I think you are saying is that you keep on getting the bug in DomU?
> Is the stack-trace the same as in rc1?

Yes.
When I want to get 1-2 hours of stable work, I run:

# service network restart
# service nfs restart

in Dom0.
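
To confirm the export is actually back after the restart (just a sanity
check with the standard NFS tools, nothing Xen-specific):

# exportfs -v
# showmount -e localhost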

I also believe that the presence of xen-pcifront.fix.patch makes things
much more stable on F14.
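
Applying it by hand on top of an unpacked rc tree is straightforward
(the directory name below is just an example):

# cd linux-2.6.37-rc2
# patch -p1 < xen-pcifront.fix.patch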

Boris.

--- On Tue, 11/16/10, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:

From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Subject: Re: [Xen-devel] Re: 2.6.37-rc1 mainline domU - BUG: unable to handle kernel paging request
To: "Boris Derzhavets" <bderzhavets@xxxxxxxxx>
Cc: "Jeremy Fitzhardinge" <jeremy@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, "Bruce Edge" <bruce.edge@xxxxxxxxx>
Date: Tuesday, November 16, 2010, 4:15 PM

On Tue, Nov 16, 2010 at 12:43:28PM -0800, Boris Derzhavets wrote:
> > Huh. I .. what? I am confused. I thought we established that the issue
> > was not related to Xen PCI front? You also seem to uncomment the
> > upstream.core.patches and the xen.pvhvm.patch - why?
>
> I cannot uncomment upstream.core.patches and the xen.pvhvm.patch;
> they give failed HUNKs.

Uhh.. I am even more confused.
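
If it helps, patch(1) can report exactly which hunks fail without
touching the tree:

# patch -p1 --dry-run < xen.pvhvm.patch

That would at least tell us whether those patches are simply out of
date against rc2.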
>
> > Ok, they are.. v2.6.37-rc2 which came out today has the fixes
>
> I am pretty sure rc2 doesn't contain everything from xen.next-2.6.37.patch,
> gntdev's stuff for sure. I've built 2.6.37-rc2 kernel rpms and loaded
> kernel-2.6.37-rc2.git0.xendom0.x86_64 under Xen 4.0.1.
> The device /dev/xen/gntdev has not been created. I understand that it's
> unrelated to DomU (related to Dom0), but once again, with rc2 in DomU I
> cannot get 3.2 GB copied over to DomU from the NFS share in Dom0.

So what I think you are saying is that you keep on getting the bug in DomU?
Is the stack-trace the same as in rc1?
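
A quick way to double-check whether a given tag actually carries the
gntdev code (run from a mainline git checkout; the path is an assumption
about where the driver would land):

# git ls-tree v2.6.37-rc2 -- drivers/xen/gntdev.c

No output would mean the driver isn't in that tag, which would explain
the missing /dev/xen/gntdev node.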


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel