WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-ia64-devel

RE: [Xen-devel][RFC]degradation on IPF due to hypercall set irq

To: "Keir Fraser" <keir@xxxxxxxxxxxxx>
Subject: RE: [Xen-devel][RFC]degradation on IPF due to hypercall set irq
From: "Xu, Anthony" <anthony.xu@xxxxxxxxx>
Date: Fri, 24 Nov 2006 22:45:47 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, xen-ia64-devel <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 24 Nov 2006 06:46:08 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AccN5wYmUmir46SdT8a+ptVHeNZ8fgAHbs2mAACOJaAAASp7iwAAG0VmAAB2zNAAAmxzrgAAEGDgAAAZWmIAAC5KoAAAk0fTAAD34HAAAGv3kgBSjAIgAAqAtLAABhntKgAJf9Cg
Thread-topic: [Xen-devel][RFC]degradation on IPF due to hypercall set irq
Keir Fraser wrote on 24 Nov 2006 at 17:47:
> On 24/11/06 07:08, "Xu, Anthony" <anthony.xu@xxxxxxxxx> wrote:
> 
>> This patch is for comments; it is based on IPF and may not apply to
>> the IA32 side.
>> 
>> This patch delivers the interrupt and the IO-completion notification in
>> the same hypercall, xc_hvm_set_irq_and_wakeup_cpu, which eliminates all
>> the unnecessary hypercalls. In the meantime, I add a mechanism for the
>> IDE DMA thread to wake up the qemu main (select) loop so that it can
>> deliver the IDE DMA interrupt.
> 
> Firstly, this patch is not against unstable tip.
> 
> Secondly, we should make multicalls work rather than kludge a
> set_irq_and_notify_io_done type of hypercall. Applications are free
> to include any of the Xen public headers. We really just need a
> xc_multicall() API function.   

This patch is not meant for check-in, and it is similar to a multicall;
I just want to see whether a multicall can achieve the same performance
as the shared PIC line.
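
For reference, here is a minimal sketch of the batching Keir describes,
assuming a hypothetical xc_multicall() wrapper (no such function exists in
libxc today) that submits pre-built multicall entries in one trip through
the guest kernel. The struct mirrors the layout of Xen's public
multicall_entry; the helper names, parameters and sub-op choices are
illustrative only, not the actual patch:

    #include <string.h>

    /* Mirrors struct multicall_entry from Xen's public headers. */
    struct multicall_entry {
        unsigned long op;        /* __HYPERVISOR_* hypercall number */
        unsigned long result;    /* filled in by Xen on return      */
        unsigned long args[6];
    };

    /* Hypothetical libxc wrapper: one ioctl/hypercall submits all entries. */
    int xc_multicall(int xc_handle, struct multicall_entry *calls, int nr);

    int set_irq_and_finish_io(int xc_handle,
                              unsigned long set_irq_op, unsigned long irq_arg,
                              unsigned long io_done_op, unsigned long io_arg)
    {
        struct multicall_entry mc[2];
        memset(mc, 0, sizeof(mc));

        /* Entry 0: assert/deassert the virtual interrupt line. */
        mc[0].op = set_irq_op;
        mc[0].args[0] = irq_arg;

        /* Entry 1: tell Xen the emulated IO request has completed, so the
         * vcpu blocked in Xen can be rescheduled. */
        mc[1].op = io_done_op;
        mc[1].args[0] = io_arg;

        /* Two operations, but only one user/kernel/hypervisor transition. */
        return xc_multicall(xc_handle, mc, 2);
    }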

> 
> Thirdly, either we should keep the independent IDE-DMA thread or it
> should be entirely incorporated into the main qemu thread. Are pipe
> writes much faster than just doing a hypercall? If much slower, why
> is that? Could you work out a way of generically making IPC
> hypercalls faster (particularly from privileged user space -- could
> you trap straight to the hypervisor from user space rather than
> bouncing through guest kernel?).      

Trapping straight to the hypervisor from user space can definitely improve
performance, but it would equally improve performance with the shared PIC
irq line. The degradation is caused by the extra hypercalls needed to
deliver the irq line status, so the degradation would still exist.
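
As background for the pipe-write question above: the wakeup mechanism in
the experimental patch is essentially the usual self-pipe trick for a
select() loop. A minimal sketch, with hypothetical function names rather
than the actual qemu code: the IDE DMA thread writes one byte to a pipe
whose read end sits in the main loop's select() set, so the main loop
wakes up and can deliver the IDE DMA interrupt itself.

    #include <unistd.h>
    #include <sys/select.h>

    static int wakeup_pipe[2];       /* [0] = read end, [1] = write end */

    /* Called once at start-up. */
    int wakeup_init(void)
    {
        return pipe(wakeup_pipe);
    }

    /* Called from the IDE DMA thread when a request completes. */
    void wakeup_main_loop(void)
    {
        char c = 0;
        (void)write(wakeup_pipe[1], &c, 1);      /* one byte is enough */
    }

    /* Simplified main (select) loop iteration. */
    void main_loop_iteration(void)
    {
        fd_set rfds;

        FD_ZERO(&rfds);
        FD_SET(wakeup_pipe[0], &rfds);

        if (select(wakeup_pipe[0] + 1, &rfds, NULL, NULL, NULL) > 0 &&
            FD_ISSET(wakeup_pipe[0], &rfds)) {
            char buf[16];
            (void)read(wakeup_pipe[0], buf, sizeof(buf));   /* drain */
            /* ...deliver the pending IDE DMA interrupt here... */
        }
    }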

Firstly, I just want to verify whether the hypercall approach can achieve
similar or better performance than the shared PIC irq line.

After reading the shared PIC line code again, I have the following finding.

xc_evtchn_notify actually cannot notify an HVM domain that an interrupt has
occurred. As the code segment below shows, the event channel can only wake
up an HVM domain that is blocked on an IO operation. So in fact dom0 does
not notify the HVM domain that an interrupt has occurred, which means some
of the xc_evtchn_notify calls intended to notify the HVM domain are
unnecessary. The HVM domain finds these pending interrupts only because it
traps into the hypervisor at least HZ times per second, and possibly more
often because of the large number of VM exits.


        /* Excerpt from Xen's event-channel delivery code
         * (xen/common/event_channel.c): a channel consumed by Xen itself
         * only wakes the remote vcpu if that vcpu is currently blocked in
         * Xen on an emulated IO request. */
        if ( rchn->consumer_is_xen )
        {
            /* Xen consumers need notification only if they are blocked. */
            if ( test_and_clear_bit(_VCPUF_blocked_in_xen,
                                    &rvcpu->vcpu_flags) )
                vcpu_wake(rvcpu);
        }
        else
            /* ...normal guest notification path, elided in this excerpt... */


So far, I cannot see that the hypercall approach is faster than the shared
PIC line.

Can you enlighten me on why we should use a hypercall?



> 
>  -- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel