This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel][RFC]Dynamic modes support for PV xenfb (included)

To: Markus Armbruster <armbru@xxxxxxxxxx>
Subject: Re: [Xen-devel][RFC]Dynamic modes support for PV xenfb (included)
From: Pat Campbell <plc@xxxxxxxxxx>
Date: Tue, 05 Feb 2008 16:38:06 -0700
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, "Daniel P. Berrange" <berrange@xxxxxxxxxx>
Delivery-date: Tue, 05 Feb 2008 15:41:18 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <87wspjlogu.fsf@xxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <47A75E45.20907@xxxxxxxxxx> <87wspjlogu.fsf@xxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird (X11/20070801)
Markus Armbruster wrote:
> Pat Campbell <plc@xxxxxxxxxx> writes:
[cut out a whole bunch; replying here to the point about protecting some variables]
>> +
>> +    xenfb_send_event(info, &event);
>>  }
>>  static int xenfb_queue_full(struct xenfb_info *info)
>> @@ -209,6 +250,16 @@ static void xenfb_update_screen(struct x
>>      xenfb_do_update(info, x1, y1, x2 - x1, y2 - y1);
>>  }
>> +static void xenfb_resize_screen(struct xenfb_info *info)
>> +{
>> +    if (xenfb_queue_full(info))
>> +            return;
>> +
>> +    info->resize_dpy = 0;
>> +    xenfb_do_resize(info);
>> +    xenfb_refresh(info, 0, 0, info->xres, info->yres);
> Hmm, ugly.  See next comment.
>> +}
>> +
>>  static int xenfb_thread(void *data)
>>  {
>>      struct xenfb_info *info = data;
>> @@ -217,6 +268,9 @@ static int xenfb_thread(void *data)
>>              if (info->dirty) {
>>                      info->dirty = 0;
>>                      xenfb_update_screen(info);
>> +            }
>> +            if (info->resize_dpy) {
>> +                    xenfb_resize_screen(info);
>>              }
> Both screen resizes and dirtying don't go to the backend immediately.
> Instead, they accumulate in struct xenfb_info (dirty rectangle and
> resize_dpy flag) until they get pushed by xenfb_thread().
> The way things work, a resize triggers three events:
> 1. The update event for the dirty rectangle.  In theory, the dirty
> rectangle could be clean, but I expect it to be quite dirty in
> practice, as user space typically redraws everything right after a
> resize.
> 2. The resize event.
> 3. Another update event, because xenfb_resize_screen() dirties the
> whole screen.  This event is delayed until the next iteration of
> xenfb_thread()'s loop.
> I'd rather do it like this: decree that resize events count as whole
> screen updates.  Make xenfb_resize_screen() clear the dirty rectangle
> (optional, saves an update event).  Test resize_dpy before dirty.
> Also: you access resize_dpy, xres, yres and fb_stride from multiple
> threads without synchronization.  I fear that's not safe.  Don't you
> have to protect them with a spinlock, like dirty_lock protects the
> dirty rectangle?  Could reuse dirty_lock, I guess.
Don't need to protect these three:
  fb_stride: Eliminated; now using fb_info->fix.line_length, which is read
only after probe().
  xres, yres:
     Set in xenfb_set_par(), read only in all other threads.

While thinking about this variable it occurred to me that a resize would
invalidate any screen updates that are in the queue.  Resizing from
xenfb_thread() was done so that I could ensure the resize did not get
lost due to a queue-full situation.  So, if I clear the queue
(cons = prod), I can add the resize event packet directly from
xenfb_set_par(), getting rid of resizing from within the thread.  The
change would require a new spinlock to protect the queue, but would
eliminate the deferred resize handling in xenfb_thread() entirely.

Setting cons = prod should be safe.  Worst case is when a clear occurs
while the backend is in its event loop; the loop then processes events
it did not need to.

xenfb_resize_screen() becomes (called directly from xenfb_set_par()):
    spin_lock_irqsave(&info->queue_lock, flags);
    info->page->out_cons = info->page->out_prod;
    spin_unlock_irqrestore(&info->queue_lock, flags);
    xenfb_refresh(info, 0, 0, info->xres, info->yres);

xenfb_send_event() becomes:
    spin_lock_irqsave(&info->queue_lock, flags);
    prod = info->page->out_prod;
    /* caller ensures !xenfb_queue_full() */
    mb();           /* ensure ring space available */
    XENFB_OUT_RING_REF(info->page, prod) = *event;
    wmb();          /* ensure ring contents visible */
    info->page->out_prod = prod + 1;
    spin_unlock_irqrestore(&info->queue_lock, flags);

Thoughts on this approach?

