[Xen-devel] Re: Capping i/o ops
I would like to see this too. Back when we used UML, a patch created by Chris at
Linode allowed a guest to be throttled. The system worked on tokens: each VPS
had a bucket of tokens, each I/O operation consumed one token, and the bucket
was refilled with a specified number of tokens per second. If the bucket ran
dry, the guest was throttled and had to wait until more tokens were added to
the bucket.
Here's the patch, which might give you some idea of what to do:
http://theshore.net/~caker/patches/token-limiter-v5.patch
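In case it helps, the core idea boils down to something like the sketch below.
This is just an illustration of the scheme, not code from the patch itself;
the names and numbers are made up.

#include <stdio.h>

struct io_bucket {
    unsigned long tokens;      /* tokens currently available */
    unsigned long capacity;    /* maximum the bucket can hold */
    unsigned long refill_rate; /* tokens added per second */
};

/* Called once per second (e.g. from a timer) to top the bucket up. */
static void bucket_refill(struct io_bucket *b)
{
    b->tokens += b->refill_rate;
    if (b->tokens > b->capacity)
        b->tokens = b->capacity;
}

/* Each I/O operation costs one token.  Returns 1 if the operation may
 * proceed, 0 if the guest has to wait for the next refill. */
static int bucket_charge(struct io_bucket *b)
{
    if (b->tokens == 0)
        return 0; /* bucket dry: throttle */
    b->tokens--;
    return 1;
}

int main(void)
{
    /* One bucket per VPS: bursts of up to 5 ops, 2 ops/s sustained. */
    struct io_bucket vps = { .tokens = 5, .capacity = 5, .refill_rate = 2 };
    int allowed = 0, denied = 0, i;

    for (i = 0; i < 10; i++) /* ten ops arriving within one second */
        bucket_charge(&vps) ? allowed++ : denied++;

    printf("allowed=%d denied=%d\n", allowed, denied); /* allowed=5 denied=5 */
    bucket_refill(&vps); /* a second later, 2 more ops become possible */
    return 0;
}

The nice property is that the capacity bounds burst size while the refill rate
bounds sustained throughput, so short spikes get through but a guest can't hog
the disk indefinitely.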
~Shaun
"Pim van Riezen" <pi+lists@xxxxxxxxxxxx> wrote in message
news:DAC7D946-A93F-47AC-B033-699933363075@xxxxxxxxxxxxxxx
Hi All,
I've been trying to get a grip on disk I/O for our Xen setup. We're dealing
with server workloads that can, at unexpected times, become seriously
I/O-bound, to the point that a single guest can cannibalize the available
throughput of the Fibre Channel array. Since we're dealing with a shared
medium and multiple Xen hosts, I'm looking for a way to put an upper cap on
the number of I/O operations individual guests can perform.
Although this is not entirely scientific, it would allow an informed
administrator to limit the number of operations guest A on host X can perform
to, e.g., 50% of an observed maximum, so that guest B on host Y is still
guaranteed access at a minimum performance level. The case could also be made
for quality-of-service assurances at the single-host level, but the multi-host
scenario is the more interesting viewpoint: it precludes solutions that rely
on automatic optimization, since there are no trivial automatic solutions to
be had in this scenario.
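To put hypothetical numbers on that (purely illustrative, nothing here is
measured):

#include <stdio.h>

int main(void)
{
    /* Suppose the shared array is observed to peak at 20,000 IOPS. */
    unsigned long observed_max_iops = 20000;
    unsigned long guest_a_cap = observed_max_iops / 2; /* 50% cap */

    /* In token-bucket terms, this cap would simply become the refill
     * rate of guest A's bucket; whatever A cannot consume is headroom
     * guaranteed to remain for guest B. */
    printf("guest A capped at %lu IOPS, leaving at least %lu IOPS for B\n",
           guest_a_cap, observed_max_iops - guest_a_cap);
    return 0;
}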
I'm not afraid to get my hands dirty and come up with a proof of concept or
even a complete patch, but before I dive in: would this approach stand a
decent chance of making sense? The current approach seems to be "throw the
requests at the block layer and let the kernel, the hardware, or God sort it
out". Has any previous thought been given to this problem area that I
can/should read up on?
Cheers,
Pim van Riezen
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel