Hi George,
Do you know how many threads your master xapi is running? (You could install
gdb, attach it to the xapi process, and type something like 'thread apply all bt'.)
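If you just want the count, something like this in dom0 might do (a sketch;
it assumes pgrep is available and that the oldest xapi process is the main
daemon):

    # count threads via /proc, no debugger needed
    ls /proc/$(pgrep -o xapi)/task | wc -l

    # or dump every thread's backtrace non-interactively
    gdb -batch -p $(pgrep -o xapi) -ex 'thread apply all bt'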
The current limit is 300 threads, based on the default 10MiB thread stack. As
a short-term workaround, I believe it's possible to reduce the thread stack to
5MiB (or maybe 1MiB) to raise the maximum number of threads you can have in a
single process.
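For example, something along these lines might work (only a sketch: it
assumes xapi inherits the soft stack limit from the shell that restarts it,
and that the init script lives at /etc/init.d/xapi; if xapi sets its thread
stack size itself, this won't help):

    # lower the default thread stack to 5MiB, then restart xapi
    ulimit -s 5120
    /etc/init.d/xapi restart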
In the medium term I'd like to fully understand (and hopefully reduce) the
number of connections made by slaves to the master, and then to switch to a
different threading model.
Cheers,
Dave
> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of George Shuklin
> Sent: 21 July 2010 20:51
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-users] [Fwd: XCP - extreme high load on pool master]
>
> Well... I can accept the idea of a 'high load' on the master, but my main
> concern is the single-thread problem. Most of the CPU time is used by a
> single xapi process, so if I increase the number of hosts and VMs per host
> even slightly (for example, 40 VMs per host on 16 hosts gives about 640 VMs
> per pool) and xapi cannot serve every request in a single thread... I don't
> know what will happen, but I don't like it already.
>
> About the console problem...
>
> After a few tests with http(s) tunneling I settled on simple ssh tunneling
> from the host's localhost to my machine's localhost (I connect using the -L
> switch of ssh and run xvncviewer localhost:59xx).
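> For example, for a VM whose VNC console listens on port 5901 on the host
> (host name and port here are placeholders, not my real setup):
>
>     ssh -L 5901:localhost:5901 root@xcp-host
>     xvncviewer localhost:5901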
>
> One other problem is detecting the VNC port number for a given VM... Right
> now I use a hack like: xe vm-list uuid=... params=domid; ps aux | grep
> (this domid); netstat -lpn (this pid) - but it's not very accurate and not
> very scriptable...
>
>
>
> On Wed, 21/07/2010 at 16:39 -0400, Vern Burke wrote:
> > Thousands of VMs on a single XCP pool? It's just my opinion of course,
> > but I wouldn't try to run 100:1 (or worse) virtualization ratios unless
> > you're running 12 cores (and a ton of memory) or better in a box.
> >
> > Keep in mind that the pool master is doing a ton of work for the entire
> > pool, which explains why its load is higher than the slaves'. In my
> > cloud, I generally reserve the pool master for admin work rather than
> > running production workloads on it.
> >
> > The reason for this is that there's still an ongoing bug in XCP's
> > developer console: you can only connect to the console of a VM that's
> > running on the pool master. Try to connect to a VM that's on any of the
> > slaves and you get just a blank white window.
> >
> >
> > Vern Burke
> >
> > SwiftWater Telecom
> > http://www.swiftwatertel.com
> > Xen Cloud Control System
> > http://www.xencloudcontrol.com
> >
> >
> > On 7/21/2010 4:13 PM, George Shuklin wrote:
> > > Good day.
> > >
> > > We are trying to test an XCP cloud under a production-like load (4 hosts,
> > > each with 24GB of memory and 8 cores).
> > >
> > > But with just 30-40 virtual machines I get an extreme load in dom0 on
> > > the pool master host: the load average is about 3.5-6, and most of the
> > > time is used by the xapi and stunnel processes.
> > >
> > > It really bothers me: what happens under a higher load, with a few
> > > thousand VMs and about 10-16 hosts in the pool...
> > >
> > > top data:
> > >
> > > Tasks:  95 total,   3 running,  91 sleeping,   0 stopped,   1 zombie
> > > Cpu(s): 19.4%us, 42.1%sy, 0.0%ni, 35.8%id, 1.3%wa, 0.0%hi, 1.0%si, 0.3%st
> > > Mem:    746496k total,  731256k used,   15240k free,   31372k buffers
> > > Swap:   524280k total,     128k used,  524152k free,  498872k cached
> > >
> > > PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> > > 17 -3 315m 14m 5844 S 52.3 2.0 5370:41 xapi
> > > 17 -3 22524 15m 1192 S 8.3 2.2 875:16.18 stunnel
> > > 15 -5 0 0 0 S 0.7 0.0 54:28.78 netback
> > > 10 -10 6384 1868 892 S 0.3 0.3 22:03.14 ovs-vswitchd
> > >
> > > dom0 on the non-master hosts is loaded at about 25-30% each.
> > >
> > >
> > >
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users