[Xen-devel] Re: RFC: I/O bandwidth controller
To: fernando@xxxxxxxxxxxxx
Subject: [Xen-devel] Re: RFC: I/O bandwidth controller
From: Ryo Tsuruta <ryov@xxxxxxxxxxxxx>
Date: Wed, 06 Aug 2008 15:18:24 +0900 (JST)
Cc: taka@xxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, uchida@xxxxxxxxxxxxx, containers@xxxxxxxxxxxxxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, dave@xxxxxxxxxxxxxxxxxx, yoshikawa.takuya@xxxxxxxxxxxxx, dm-devel@xxxxxxxxxx, agk@xxxxxxxxxxxxxx, virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx, ngupta@xxxxxxxxxx, righi.andrea@xxxxxxxxx
In-reply-to: <1217985189.3154.57.camel@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
References: <20080804.175126.193692178.ryov@xxxxxxxxxxxxx> <1217870433.20260.101.camel@nimitz> <1217985189.3154.57.camel@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Hi Fernando,
> This RFC ended up being a bit longer than I had originally intended, but
> hopefully it will serve as the start of a fruitful discussion.
Thanks a lot for posting the RFC.
> *** Goals
> 1. Cgroups-aware I/O scheduling (being able to define arbitrary
> groupings of processes and treat each group as a single scheduling
> entity).
> 2. Being able to perform I/O bandwidth control independently on each
> device.
> 3. I/O bandwidth shaping.
> 4. Scheduler-independent I/O bandwidth control.
> 5. Usable with stacking devices (md, dm and other devices of that
> ilk).
> 6. I/O tracking (handle buffered and asynchronous I/O properly).
>
> The list of goals above is not exhaustive and it is also likely to
> contain some not-so-nice-to-have features so your feedback would be
> appreciated.
I'd like to add the following item to the goals.
7. Being able to select the bandwidth control policy from multiple
alternatives (proportional share, maximum rate limiting, ...), just as
the I/O scheduler can be selected.
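
To make item 7 concrete, here is a small userspace model (not kernel
code; every name in it is made up for illustration) of a controller that
picks its charging policy from a table by name, much like an I/O
scheduler is selected per device:

/*
 * Illustrative userspace model only: selectable bandwidth control
 * policies exposed as a pluggable operations table.
 */
#include <stdio.h>
#include <string.h>

struct io_policy {
    const char *name;
    /* cost of a request of 'bytes' under this policy's parameter */
    unsigned long (*charge)(unsigned long param, unsigned long bytes);
};

/* proportional share: groups compete according to their relative weight */
static unsigned long charge_proportional(unsigned long weight, unsigned long bytes)
{
    return bytes / (weight ? weight : 1);
}

/* maximum rate limiting: every byte counts against a fixed per-group budget */
static unsigned long charge_max_rate(unsigned long limit, unsigned long bytes)
{
    (void)limit;
    return bytes;
}

static const struct io_policy policies[] = {
    { "weight",   charge_proportional },
    { "max-rate", charge_max_rate },
};

static const struct io_policy *find_policy(const char *name)
{
    for (size_t i = 0; i < sizeof(policies) / sizeof(policies[0]); i++)
        if (!strcmp(policies[i].name, name))
            return &policies[i];
    return NULL;
}

int main(void)
{
    const struct io_policy *p = find_policy("weight");
    if (p)
        printf("%s: 128KiB request at weight 100 costs %lu tokens\n",
               p->name, p->charge(100, 128 * 1024));
    return 0;
}

The point is only that the policy becomes a pluggable operations table
rather than something baked into the controller itself.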
> *** How to move on
>
> As discussed before, it probably makes sense to have both a block layer
> I/O controller and an elevator-based one, and they could certainly
> cohabitate. As discussed before, all of them need I/O tracking
> capabilities so I would like to suggest the plan below to get things
> started:
>
> - Improve the I/O tracking patches (see (6) above) until they are in
> mergeable shape.
> - Fix CFQ and AS to use the new I/O tracking functionality to show its
> benefits. If the performance impact is acceptable this should suffice to
> convince the respective maintainer and get the I/O tracking patches
> merged.
> - Implement a block layer resource controller. dm-ioband is a working
> solution and feature rich but its dependency on the dm infrastructure is
> likely to find opposition (the dm layer does not handle barriers
> properly and the maximum size of I/O requests can be limited in some
> cases). In such a case, we could either try to build a standalone
> resource controller based on dm-ioband (which would probably hook into
> generic_make_request) or try to come up with something new.
> - If the I/O tracking patches make it into the kernel we could move on
> and try to get the Cgroup extensions to CFQ and AS mentioned before (see
> (1), (2), and (3) above for details) merged.
> - Delegate the task of controlling the rate at which a task can
> generate dirty pages to the memory controller.
I agree with your plan.
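On the standalone controller idea, here is a very rough sketch of how a
hook near generic_make_request might look. All helpers and types below
except struct bio and its bi_size field are hypothetical; this is not
code from dm-ioband or any posted patch, only the intended control flow:

/* Hypothetical sketch: throttle at bio submission, per owning cgroup. */
static bool blkio_try_charge(struct bio *bio)
{
        struct blkio_group *grp = blkio_find_group(bio);   /* owning cgroup */

        if (blkio_over_limit(grp, bio->bi_size)) {
                /* Over its share: park the bio on a per-group list and let
                 * a worker resubmit it once the group's budget refills. */
                blkio_defer_bio(grp, bio);
                return false;
        }
        blkio_account(grp, bio->bi_size);   /* charge and let the bio proceed */
        return true;
}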
We will keep improving bio-cgroup and porting it to the latest kernel.
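For reference, the tracking works roughly like this: remember which
cgroup dirtied a page, and read that back when writeback later turns the
page into a bio, so the I/O is charged to the original dirtier instead of
the flusher thread. The sketch below uses hypothetical helper names, not
the exact interfaces in the posted patches:

/* buffered-write path, runs in the dirtying task's context */
static void biotrack_set_owner(struct page *page)
{
        /* store the current task's cgroup id next to the page, e.g. in the
         * per-page structure already used by the memory controller */
        page_owner_of(page)->blkio_id = current_blkio_id();
}

/* writeback path, runs in pdflush/kswapd context */
static unsigned int biotrack_owner_of(struct page *page)
{
        /* used to charge the bio built from this page to the dirtier */
        return page_owner_of(page)->blkio_id;
}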
Thanks,
Ryo Tsuruta
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel