[Xen-devel] Re: [PATCH 4/7] bio-cgroup: Split the cgroup memory subsystem into two parts

To: kamezawa.hiroyu@xxxxxxxxxxxxxx
Subject: [Xen-devel] Re: [PATCH 4/7] bio-cgroup: Split the cgroup memory subsystem into two parts
From: Hirokazu Takahashi <taka@xxxxxxxxxxxxx>
Date: Thu, 07 Aug 2008 17:45:10 +0900 (JST)
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, containers@xxxxxxxxxxxxxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx, dm-devel@xxxxxxxxxx, agk@xxxxxxxxxxxxxx, ryov@xxxxxxxxxxxxx, balbir@xxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 07 Aug 2008 01:45:33 -0700
In-reply-to: <20080807172113.0788f800.kamezawa.hiroyu@xxxxxxxxxxxxxx>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
References: <16255819.1218030343593.kamezawa.hiroyu@xxxxxxxxxxxxxx> <20080807.162512.22162413.taka@xxxxxxxxxxxxx> <20080807172113.0788f800.kamezawa.hiroyu@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Hi,
> > > >I've just noticed that most of the overhead comes from the spin-locks
> > > >taken when reclaiming the pages inside mem_cgroups and the spin-locks
> > > >protecting the links between pages and page_cgroups.
> > > The overhead of the page <-> page_cgroup lock cannot be caught by
> > > lock_stat now. Do you have numbers?
> > > But ok, there are too many locks ;(
> >
> > The problem is that every time the lock is held, the associated
> > cache line is flushed.
> I think "page" and "page_cgroup" are not such heavily shared objects in the fast path.
> The memory footprint is also important here.
> (anyway, I'd like to remove lock_page_cgroup() when I find a chance)
OK.
> > > >The latter overhead comes from the policy your team has chosen,
> > > >that page_cgroup structures are allocated on demand. I still feel
> > > >this approach doesn't make any sense because the Linux kernel tries
> > > >to make use of as many pages as it can, so most of them end up
> > > >being assigned a related page_cgroup. It would make us happy
> > > >if page_cgroups were allocated at boot time.
> > > >
> > > Now, a multi-sized page cache has been discussed for a long time. If that's our
> > > direction, on-demand page_cgroup allocation makes sense.
> >
> > I don't think I can agree with this.
> > When a multi-sized page cache is introduced, some data structures will be
> > allocated to manage the multi-sized pages.
> Maybe not; it will be encoded into struct page.
It would be nice and simple if it were.
> > I think page_cgroups should be allocated at the same time.
> > This approach will make things simple.
> yes, of course.
>
> >
> > It seems like the on-demand allocation approach leads not only to
> > overhead but also to complexity and a lot of race conditions.
> > If you allocate page_cgroups when allocating page structures,
> > you can get rid of most of the locks, and you don't have to care about
> > allocation errors of page_cgroups anymore.
> >
> > And it will also give us flexibility that memcg related data can be
> > referred/updated inside critical sections.
> >
> But it's not good for systems with small "NORMAL" pages.
Even on a system with small "NORMAL" pages, if you want to use the
memcg feature, you have to allocate page_cgroups for most of the pages
in the system. It's impossible to avoid the allocation as long as you
use memcg.
> This discussion should be done again when more users of page_cgroup appear and
> its overhead is obvious.
Thanks,
Hirokazu Takahashi.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel