To: Adin Scannell <adin@xxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH 1/2] enable event channel wake-up for mem_event interfaces
From: Tim Deegan <tim@xxxxxxx>
Date: Thu, 6 Oct 2011 12:07:15 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 06 Oct 2011 04:08:32 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <CAAJKtqoPDzEEY7xLQbFyOXrwNhBUJyV274LzRT-=0fPMbYjWkw@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <CAAJKtqoPDzEEY7xLQbFyOXrwNhBUJyV274LzRT-=0fPMbYjWkw@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.4.2.1i

Hi,

The general approach seems good.  A few comments on the detail, below:

At 17:24 -0400 on 28 Sep (1317230698), Adin Scannell wrote:
> -void mem_event_put_request(struct domain *d, struct mem_event_domain *med, mem_event_request_t *req)
> +static inline int mem_event_ring_free(struct domain *d, struct mem_event_domain *med)
> +{
> +    int free_requests;
> +
> +    free_requests = RING_FREE_REQUESTS(&med->front_ring);
> +    if ( unlikely(free_requests < d->max_vcpus) )
> +    {
> +        /* This may happen. */
> +        gdprintk(XENLOG_INFO, "mem_event request slots for domain %d: %d\n",
> +                               d->domain_id, free_requests);
> +        WARN_ON(1);

If this is something that might happen on production systems (and is
basically benign apart from the performance impact), we shouldn't print
a full WARN().  The printk is more than enough.
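
Something like this, i.e. keeping the message and dropping the WARN_ON()
(just a sketch of what I mean):

    if ( unlikely(free_requests < d->max_vcpus) )
    {
        /* This may happen. */
        gdprintk(XENLOG_INFO, "mem_event request slots for domain %d: %d\n",
                 d->domain_id, free_requests);
    }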

> +    }
> +
> +    return free_requests;
> +}
> +
> +int mem_event_put_request(struct domain *d, struct mem_event_domain *med, mem_event_request_t *req)
>  {
>      mem_event_front_ring_t *front_ring;
>      RING_IDX req_prod;
>  
> +    if( mem_event_check_ring(d, med) < 0 )

Xen coding style has a space between the 'if' and the '(' (here
and elsewhere).
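
For the line above, that would be:

    if ( mem_event_check_ring(d, med) < 0 )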

> @@ -135,16 +166,28 @@ void mem_event_put_request(struct domain
>      req_prod++;
>  
>      /* Update ring */
> -    med->req_producers--;
>      front_ring->req_prod_pvt = req_prod;
>      RING_PUSH_REQUESTS(front_ring);
>  
> +    /*
> +     * We ensure that each vcpu can put at least *one* event -- because some
> +     * events are not repeatable, such as dropping a page.  This will ensure no
> +     * vCPU is left with an event that they must place on the ring, but cannot.
> +     * They will be paused after the event is placed.
> +     * See large comment below in mem_event_unpause_vcpus().
> +     */
> +    if( current->domain->domain_id == d->domain_id &&
> +        mem_event_ring_free(d, med) < d->max_vcpus )
> +        mem_event_mark_and_pause(current, med);
> +

This idiom of comparing domain-ids cropped up in the earlier mem-event
patches and seems to be spreading, but the right check is just
(current->domain == d).
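
So the check above would read something like (untested sketch):

    if ( current->domain == d &&
         mem_event_ring_free(d, med) < d->max_vcpus )
        mem_event_mark_and_pause(current, med);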

Also: are there cases where current->domain != d?  If so, can't those cases
cause the ring to fill up?
  
> -void mem_event_unpause_vcpus(struct domain *d)
> +void mem_event_unpause_vcpus(struct domain *d, struct mem_event_domain *med)
>  {
>      struct vcpu *v;
> +    int free;
> +    int online = d->max_vcpus;
>  
> +    if( !med->blocked )
> +        return;
> +
> +    mem_event_ring_lock(med);
> +    free = mem_event_ring_free(d, med);
> +
> +    /*
> +     * We ensure that we only have vCPUs online if there are enough free slots
> +     * for their memory events to be processed.  This will ensure that no
> +     * memory events are lost (due to the fact that certain types of events
> +     * cannot be replayed, we need to ensure that there is space in the ring
> +     * for when they are hit). 
> +     * See large comment above in mem_event_put_request().
> +     */
>      for_each_vcpu ( d, v )
> +        if ( test_bit(_VPF_mem_event, &v->pause_flags) )
> +            online--;
> + 
> +    for_each_vcpu ( d, v )
> +    {
> +        if ( !(med->blocked) || online >= mem_event_ring_free(d, med) )
> +            break;

Is there a risk that under heavy mem-event loads vcpu 0 might starve
other vcpus entirely because they're never allowed to unpause here?

>          if ( test_and_clear_bit(_VPF_mem_event, &v->pause_flags) )
> +        {
>              vcpu_wake(v);
> +            online++;
> +            med->blocked--;
> +        }
> +    }
> +
> +    mem_event_ring_unlock(med);
>  }
>  
> -void mem_event_mark_and_pause(struct vcpu *v)
> +void mem_event_mark_and_pause(struct vcpu *v, struct mem_event_domain *med)
>  {
> -    set_bit(_VPF_mem_event, &v->pause_flags);
> -    vcpu_sleep_nosync(v);
> -}
> -
> -void mem_event_put_req_producers(struct mem_event_domain *med)
> -{
> -    mem_event_ring_lock(med);
> -    med->req_producers--;
> -    mem_event_ring_unlock(med);
> +    if ( !test_bit(_VPF_mem_event, &v->pause_flags) )
> +    {
> +        set_bit(_VPF_mem_event, &v->pause_flags);

Does this need to be an atomic test-and-set?
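
If it does, test_and_set_bit() would do the check and the set in one
atomic step, e.g.:

    if ( !test_and_set_bit(_VPF_mem_event, &v->pause_flags) )

(and the separate set_bit() inside the block can then go away).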

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
