xen-devel


To: Tim Deegan <tim@xxxxxxx>
Subject: Re: [Xen-devel] [PATCH 1/2] enable event channel wake-up for mem_event interfaces
From: Olaf Hering <olaf@xxxxxxxxx>
Date: Thu, 27 Oct 2011 16:22:57 +0200
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Adin Scannell <adin@xxxxxxxxxxxxxxx>
In-reply-to: <20111006110715.GC21091@xxxxxxxxxxxxxxxxxxxxx>
References: <CAAJKtqoPDzEEY7xLQbFyOXrwNhBUJyV274LzRT-=0fPMbYjWkw@xxxxxxxxxxxxxx> <20111006110715.GC21091@xxxxxxxxxxxxxxxxxxxxx>
On Thu, Oct 06, Tim Deegan wrote:

> At 17:24 -0400 on 28 Sep (1317230698), Adin Scannell wrote:
> > -void mem_event_put_request(struct domain *d, struct mem_event_domain *med, mem_event_request_t *req)
> > +static inline int mem_event_ring_free(struct domain *d, struct mem_event_domain *med)
> > +{
> > +    int free_requests;
> > +
> > +    free_requests = RING_FREE_REQUESTS(&med->front_ring);
> > +    if ( unlikely(free_requests < d->max_vcpus) )
> > +    {
> > +        /* This may happen. */
> > +        gdprintk(XENLOG_INFO, "mem_event request slots for domain %d: %d\n",
> > +                               d->domain_id, free_requests);
> > +        WARN_ON(1);
> 
> If this is something that might happen on production systems (and is
> basically benign except for the performance), we shouldn't print a full
> WARN().  The printk is more than enough.
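
For reference, I assume the variant Tim describes would simply keep the printk and drop the WARN_ON(), along these lines (just a sketch based on the quoted hunk, not the final code):

    static inline int mem_event_ring_free(struct domain *d,
                                          struct mem_event_domain *med)
    {
        int free_requests = RING_FREE_REQUESTS(&med->front_ring);

        if ( unlikely(free_requests < d->max_vcpus) )
            /* Can happen in practice; informational only, no WARN_ON(). */
            gdprintk(XENLOG_INFO, "mem_event request slots for domain %d: %d\n",
                     d->domain_id, free_requests);

        return free_requests;
    }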

While I haven't reviewed the whole patch (sorry for that), one thing that
will break is p2m_mem_paging_populate() when it is called from dom0.

If the ring is full, the gfn state may already have been moved from the
paging-out state to the paging-in state. But since the ring was full, no
request was sent to xenpaging, which means the gfn remains in
p2m_ram_paging_in_start until the guest eventually tries to access the
gfn as well. Dom0 will call p2m_mem_paging_populate() again and again (I
think), but there will be no attempt to send a new request once the ring
has free slots again, because the gfn is already in the page-in path and
the calling vcpu does not belong to the guest.
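
To illustrate, here is a standalone toy model of that sequence (all names
are made up, this is not the real p2m/mem_event code): the first call moves
the gfn into the page-in path but drops the request because the ring is
full, and every later call takes the early-return branch, so xenpaging
never receives a request for the gfn:

    /* Toy model only; states and helpers are invented for illustration. */
    #include <stdio.h>
    #include <stdbool.h>

    enum gfn_state { PAGED_OUT, PAGING_IN_START, RAM };

    static enum gfn_state state = PAGED_OUT;
    static bool ring_full = true;      /* the ring happens to be full on the first call */
    static bool request_sent = false;

    /* Simplified stand-in for p2m_mem_paging_populate() called from dom0. */
    static void populate_from_dom0(void)
    {
        if ( state == PAGED_OUT )
            state = PAGING_IN_START;   /* gfn moves into the page-in path ... */
        else
            return;                    /* ... so every later call bails out here */

        if ( ring_full )
            return;                    /* ... but the request to the pager is dropped */

        request_sent = true;
    }

    int main(void)
    {
        populate_from_dom0();          /* ring full: state flips, request is lost */
        ring_full = false;             /* the ring drains later */
        populate_from_dom0();          /* retry: early return, still no request */
        printf("state=%d request_sent=%d\n", state, request_sent);
        return 0;                      /* prints state=1 request_sent=0 */
    }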

I have some wild ideas about how to handle this situation, but the patch
as it stands will break page-in attempts from xenpaging itself.

Olaf

