Re: [Xen-devel] event TRC_MEM_PAGE_GRANT_TRANSFER
> This code is in dom0, not in Xen itself, so you won't be able to do it with
> the tracebuffer - you'd need to set up a dom0-based solution.
Good point. I am not sure whether I can add xentrace instrumentation (or
something similar) to dom0 code.
> > As for native Linux tools, if I run them in dom0, will that be able to
> > tell me all about disk and network IO for the guest? I am not allowed to
> > run anything inside domU, to meet the black-box requirement.
> Well, if you measure the amount of traffic the vifs have seen, that should
> tell you how much network IO has occurred. Block-wise, I'm not sure, but I
> think each domain gets a block IO thread so that the IO schedulers can be
> used to regulate each domain... If I'm right and that patch got applied,
> you may be able to view IO stats for these kernel threads. Otherwise, you
> could maybe find the patch and apply it to dom0 yourself.
I will try finding this patch... IO stats for these threads can definitely
help. As you said, global info may not be that difficult; it's getting this
info at the domain level that's the real issue.
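Something like the following is what I have in mind (a sketch only - I am
assuming the backend threads carry recognizable per-domain names, which
varies by dom0 kernel version, and that the dom0 kernel has per-task IO
accounting so /proc/<pid>/io exists):

  # Find the per-domain block backend kernel threads; the thread name
  # format (e.g. blkback.<domid>.<device>) is a guess and varies by kernel.
  ps -eo pid,comm | grep blkback

  # Per-task IO counters (needs task IO accounting in the dom0 kernel);
  # 1234 stands for the pid of one of the threads found above.
  cat /proc/1234/io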
For example, something I am trying to find out is this: does the network
send operation block for a certain application, either due to network
congestion or due to sockets blocking on the other end? This requires me to
see the incoming send requests from the guest to dom0 and check whether dom0
can actually send them out at a fast enough rate. I am not sure if this
strategy will actually work, so I am just exploring.
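As a first experiment I might just sample the byte counters on the guest's
vif from dom0 and watch the rate (a sketch; vif1.0 is a made-up name
following the usual vif<domid>.<devid> convention, here domain 1, device 0):

  # Counters are from dom0's point of view, so bytes the guest *sends*
  # show up as rx_bytes on the vif backend interface.
  VIF=vif1.0
  prev=$(cat /sys/class/net/$VIF/statistics/rx_bytes)
  while sleep 1; do
      cur=$(cat /sys/class/net/$VIF/statistics/rx_bytes)
      echo "guest tx: $((cur - prev)) bytes/s"
      prev=$cur
  done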
> FWIW I think xentop (and therefore libxenstat - which you could link
> against, although it's not guaranteed not to change between Xen releases)
> provides access to some IO performance statistics - maybe that would serve
> your needs.

xentop outputs netrx and nettx info per domain --> this can be quite useful.
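For logging this over time, something like the following should work
(assuming xentop's batch-mode flags behave as its usage text describes):

  # Batch mode: one snapshot every 5 seconds, 100 iterations, suitable
  # for redirecting to a log file; includes netrx/nettx per domain.
  xentop -b -d 5 -i 100 > xentop.log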
What is VBD in the xentop output? Does it correspond to page switches? I
tried to find more details about what xentop reports, but I couldn't find
anything online.
I think using this together with iostat and vmstat could also tell me
something useful. iostat will give me global info though, so if I can
correlate that with the domain-level info from xentop, that might work at
some level.
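A crude way to correlate them might be to timestamp each collector's output
and merge the logs on time afterwards, e.g. (sketch):

  # Prefix every output line with a Unix timestamp so the global and
  # per-domain logs can be merged on time later.
  iostat -x 5    | while read l; do echo "$(date +%s) $l"; done > iostat.log &
  vmstat 5       | while read l; do echo "$(date +%s) $l"; done > vmstat.log &
  xentop -b -d 5 | while read l; do echo "$(date +%s) $l"; done > xentop.log &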
BTW, all the replies here are immensely useful... I am learning something
new from every comment. Once I am able to get something done, hopefully I
can summarize my findings on tracking Xen stats for the benefit of the Xen
community. Thanks again.
cheers, Ashish
> Hope that helps,
> Mark
> > cheers,
> > Ashish
> >
> > On 4/24/07, Mark Williamson <mark.williamson@xxxxxxxxxxxx> wrote:
> > > > I am using xentrace to understand performance bottlenecks for an
> > > > application inside domU.
> > > > My question is how can I distinguish between network IO events and
> > > > disk IO events using xentrace?
> > >
> > > I don't know if there are trace events currently generated for grant
> > > copy operations - if not, you could add them and use these to judge
> > > the amount of incoming network traffic.
> > >
> > > Outgoing network traffic and disk IO are harder to distinguish since
> > > they both just use temporary sharing grants. It might be easier to use
> > > some sort of IO monitoring tools within dom0 and the domU in question,
> > > similarly to how you would on a native Linux system.
> > >
> > > > A second related question is: can I figure out disk queue waiting
> > > > times and serving times (and similarly for the network) to figure
> > > > out any bottlenecks or any external stress on these resources that
> > > > may be causing the guest machine to slow down?
> > >
> > > Again, it's worth taking a look at native Linux tools. XenMon may
> > > provide some feedback but I imagine you might have already tried this?
> > >
> > > For more complete profiling, Xenoprof allows you to run oprofile
> > > against multiple domains (and Xen itself) at once.
> > >
> > > > I greatly appreciate any advice/insight from fellow members here.
> > >
> > > Sorry not to be more specific.
> > >
> > > Cheers,
> > > Mark
> > >
> > > > cheers,
> > > > Ashish
> > > >
> > > > On 23/1/07 19:31, "Rob Gardner" <rob.gardner@xxxxxx> wrote:
> > > > > > Grant transfers are no longer used to move network data from
> > > > > > netback to netfront (except for backward compatibility with old
> > > > > > netfront drivers).
> > > > >
> > > > > Yeah, got that. ;) Could you explain what mechanisms are currently
> > > > > used to move data for net I/O and disk I/O between domains, and in
> > > > > particular, can you suggest where in the code I could put trace
> > > > > calls to be able to count I/O's? Thanks.
> > > >
> > > > Everything is done via grant-map/unmap commands as it always was,
> > > > except network receive (netback->netfront), which is done via
> > > > grant-copy commands (one per contiguous fragment of network packet).
> > > >
> > > > -- Keir
> > >
> > > --
> > > Dave: Just a question. What use is a unicycle with no seat? And no pedals!
> > > Mark: To answer a question with a question: What use is a skateboard?
> > > Dave: Skateboards have wheels.
> > > Mark: My wheel has a wheel!
>
> --
> Dave: Just a question. What use is a unicycle with no seat? And no pedals!
> Mark: To answer a question with a question: What use is a skateboard?
> Dave: Skateboards have wheels.
> Mark: My wheel has a wheel!
--
http://www.cs.northwestern.edu/~agupta
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel