To: "Dong, Eddie" <eddie.dong@xxxxxxxxx>, <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-ia64-devel] machine ITM = nearest guest ITM vs. full guest time virtualization
From: "Magenheimer, Dan (HP Labs Fort Collins)" <dan.magenheimer@xxxxxx>
Date: Sat, 30 Apr 2005 21:18:23 -0700
Delivery-date: Sun, 01 May 2005 04:18:19 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-id: DIscussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcVNYTbiS1g8dJcOT9aDwf6uWGSJAwAnrY6Q
Thread-topic: [Xen-ia64-devel] machine ITM = nearest guest ITM vs. full guest time virtualization
I think you misunderstand the current Xen/ia64 timer
implementation.  It is a bit different from Xen/x86,
as ac_timer is not needed for guests.

Your example of 16 VMs each running 4 VPs doesn't
result in 64x timer IRQs, because a guest can only be
delivered a timer tick when it is running and can only
change ITM when it is running.  Also, I think SMP
OSes generally choose a single processor to handle clock ticks
rather than having each processor get interrupted.  Thus
the timer should fire at most twice as frequently as
the maximum frequency of Xen and all the domains.

E.g., in the current implementation, each Linux domain
asks for 1024 ticks/second and Xen itself asks for
1024 ticks/second.  (The frequency for Xen is probably too
high, but that's what it's set to right now.)  No matter
how many domains are running, the timer will fire at
most 2048 times/second.

If the guest sets ITC, an offset is used as you suggest
in your proposal.  I don't think this is implemented
yet (because Linux doesn't set ITC).
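
A minimal sketch of that offset scheme, in C; the structure, field, and
helper names here are illustrative assumptions, not the actual Xen/ia64
code:

#include <stdint.h>

struct vcpu_time {
    uint64_t itc_offset;    /* guest ITC = machine ITC + itc_offset */
};

/* Assumed wrapper around reading the machine ar.itc register. */
extern uint64_t machine_itc(void);

/* Guest reads ITC: apply the per-VCPU offset. */
uint64_t guest_get_itc(struct vcpu_time *vt)
{
    return machine_itc() + vt->itc_offset;
}

/* Guest writes ITC: record the difference instead of touching the
 * machine register, so other domains and Xen itself are unaffected. */
void guest_set_itc(struct vcpu_time *vt, uint64_t val)
{
    vt->itc_offset = val - machine_itc();
}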

On rereading your proposal, I'm not sure I see how it is
different from the current implementation, other than that
you use the ac_timer queue to call vcpu_pend_interrupt,
while the current implementation uses ITM directly, keeping
track of whether the next tick was for the domain or
for Xen itself.
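
For comparison, a sketch of that "nearest ITM" scheme: program the
machine ITM with whichever deadline comes first (Xen's own next tick or
the running domain's ITM) and remember whose tick it was.  Again, all
names here are illustrative, not the actual Xen/ia64 code:

#include <stdint.h>

/* Assumed wrapper around writing the machine cr.itm register. */
extern void write_machine_itm(uint64_t val);

enum tick_owner { TICK_FOR_XEN, TICK_FOR_GUEST };
static enum tick_owner next_tick_owner;

void program_machine_itm(uint64_t xen_next_tick, uint64_t guest_itm,
                         int guest_timer_enabled)
{
    uint64_t next = xen_next_tick;

    next_tick_owner = TICK_FOR_XEN;
    if (guest_timer_enabled && guest_itm < next) {
        next = guest_itm;
        next_tick_owner = TICK_FOR_GUEST;
    }
    /* The timer interrupt handler would check next_tick_owner to decide
     * whether to do Xen's own tick work or pend the guest's timer IRQ,
     * then reprogram ITM for whichever deadline remains. */
    write_machine_itm(next);
}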

> -----Original Message-----
> From: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf 
> Of Dong, Eddie
> Sent: Saturday, April 30, 2005 2:47 AM
> To: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
> Cc: xen-devel
> Subject: [Xen-ia64-devel] machine ITM = nearest guest ITM vs. 
> full guest time virtualization
> 
> Dan:
>       For the guest time (vtimer) implementation, the current approach
> is to set the machine ITM to the nearest guest ITM (or HV next ITM) and
> to set the machine ITC to the guest ITC.  Yes, it may have benefits when
> the number of guest domains is small, but how about my full
> virtualization suggestion?
>       1: Each VP keeps an internal data structure that includes at
> least vITM and an offset from machine ITC to guest ITC.  This offset is
> updated when the guest sets ITC (thus guest ITC = machine ITC + offset).
>       2: Each time the guest sets ITM with guest ITC < guest ITM and
> vITV enabled, we add a vtime_ac_timer for notification.
>       3: When this vtime_ac_timer is due, the callback function will
> call vcpu_pend_interrupt to pend the vTimer IRQ.
>       4: In this way the machine ITC/ITM is fully used by the HV.
>       5: When this VP is scheduled out, the vtime_ac_timer should be
> removed to reduce the ac_timer list length and improve scalability.
>       6: When the VP is scheduled in, the VMM checks whether it is due;
> if it became due while descheduled, it injects the guest timer IRQ.  If
> not, it re-adds the vtime_ac_timer.
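
A rough sketch, in C, of how steps 1-6 above could fit together.  All
structure, field, and helper names below are illustrative assumptions;
the real code would presumably go through Xen's ac_timer interface and
vcpu_pend_interrupt rather than these stand-ins:

#include <stdint.h>

struct vtime {
    uint64_t vitm;           /* guest's programmed ITM                  */
    uint64_t itc_offset;     /* guest ITC = machine ITC + itc_offset    */
    int      vitv_enabled;   /* guest timer vector unmasked             */
    int      deadline_set;   /* a guest timer deadline is outstanding   */
};

/* Assumed helpers standing in for the real hypervisor interfaces. */
extern uint64_t machine_itc(void);                      /* read ar.itc  */
extern void queue_vtimer(struct vtime *vt, uint64_t guest_cycles_left);
extern void dequeue_vtimer(struct vtime *vt);
extern void pend_guest_timer_irq(struct vtime *vt);     /* pend vTimer  */

/* Step 2: guest writes ITM -> arm a software timer for the delta. */
void guest_set_itm(struct vtime *vt, uint64_t val)
{
    uint64_t gitc = machine_itc() + vt->itc_offset;

    vt->vitm = val;
    if (vt->vitv_enabled && gitc < val) {
        queue_vtimer(vt, val - gitc);
        vt->deadline_set = 1;
    }
}

/* Step 3: timer expiry -> pend the guest's virtual timer IRQ. */
void vtimer_callback(struct vtime *vt)
{
    vt->deadline_set = 0;
    pend_guest_timer_irq(vt);
}

/* Step 5: on deschedule, drop the queued timer so the ac_timer list
 * stays short; the outstanding deadline itself is kept in vt->vitm. */
void vtimer_schedule_out(struct vtime *vt)
{
    if (vt->deadline_set)
        dequeue_vtimer(vt);
}

/* Step 6: on reschedule, inject if the deadline passed while the VP
 * was descheduled, otherwise re-arm for the remaining guest cycles. */
void vtimer_schedule_in(struct vtime *vt)
{
    uint64_t gitc;

    if (!vt->deadline_set)
        return;
    gitc = machine_itc() + vt->itc_offset;
    if (gitc >= vt->vitm) {
        vt->deadline_set = 0;
        pend_guest_timer_irq(vt);
    } else {
        queue_vtimer(vt, vt->vitm - gitc);
    }
}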
> 
>       Pros for the current implementation:
>       1: The guest timer fires at a more accurate time.
> 
>       Cons:
>       1: Serious scalability issue.  If there are 16 VMs running, each
> with 4 VPs, the current implementation will see 64 times more HV timer
> IRQs.
>       2: If domain N sets ITC, I am afraid the current implementation
> will be hard pressed to handle it.
>       3: HV jiffies are hard to track, including stime_irq,
> get_time_delta(), and the Xen common macro NOW().
> 
>       Pros for full guest time virtualization:
>       1: Good scalability.  Each LP only sees one vtime_ac_timer pending
> in the ac_timer list no matter how many VMs exist.
>       2: Seamless for domain0 and domain N.
> 
>       Cons:
>       1: It may fire a little later than the exact expected time.
> 
> 
> 
>       This approach could also be used for the x86 lsapic timer.
> Eddie
> 

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel