> After knocking off the dust, here it is. Allows max_vcpus to be set in
> the config file. If not present, it defaults to 8.
Thanks. I think "vcpus_max" might be a better name, though.
Ian
> Signed-off-by: Ryan Grimm <grimm@xxxxxxxxxx>
>
> diff -r ec03b24a2d83 -r 263d3eb8c182 docs/man/xmdomain.cfg.pod.5
> --- a/docs/man/xmdomain.cfg.pod.5 Tue Aug 15 19:53:55 2006 +0100
> +++ b/docs/man/xmdomain.cfg.pod.5 Wed Aug 16 13:45:35 2006 -0500
> @@ -176,6 +176,13 @@ kernel supports. For instance:
>
> Will cause the domain to boot to runlevel 4.
>
> +=item B<max_vcpus>
> +
> +The number of virtual cpus a domain can bring up in its life. In order
> +to use this the xen kernel must be compiled with SMP support.
> +
> +This defaults to 8, meaning the domain can bring up at most 8 vcpus.
> +
> =item B<nfs_server>
>
> The IP address of the NFS server to use as the root device for the
> diff -r ec03b24a2d83 -r 263d3eb8c182 tools/examples/xmexample1
> --- a/tools/examples/xmexample1 Tue Aug 15 19:53:55 2006 +0100
> +++ b/tools/examples/xmexample1 Wed Aug 16 13:45:35 2006 -0500
> @@ -34,6 +34,9 @@ name = "ExampleDomain"
> #cpus = "" # leave to Xen to pick
> #cpus = "0" # all vcpus run on CPU0
> #cpus = "0-3,5,^1" # run on cpus 0,2,3,5
> +
> +# Max number of Virtual CPUS a domain can have in its life
> +#max_vcpus = 8
>
> # Number of Virtual CPUS to use, default is 1
> #vcpus = 1
> diff -r ec03b24a2d83 -r 263d3eb8c182 tools/examples/xmexample2
> --- a/tools/examples/xmexample2 Tue Aug 15 19:53:55 2006 +0100
> +++ b/tools/examples/xmexample2 Wed Aug 16 13:45:35 2006 -0500
> @@ -64,6 +64,9 @@ name = "VM%d" % vmid
> #cpus = "0" # all vcpus run on CPU0
> #cpus = "0-3,5,^1" # run on cpus 0,2,3,5
> #cpus = "%s" % vmid # set based on vmid (mod number of CPUs)
> +
> +# Max number of Virtual CPUS a domain can have in its life
> +max_vcpus = 8
>
> # Number of Virtual CPUS to use, default is 1
> #vcpus = 1
> diff -r ec03b24a2d83 -r 263d3eb8c182 tools/python/xen/xend/XendDomainInfo.py
> --- a/tools/python/xen/xend/XendDomainInfo.py Tue Aug 15 19:53:55 2006 +0100
> +++ b/tools/python/xen/xend/XendDomainInfo.py Wed Aug 16 13:45:35 2006 -0500
> @@ -128,6 +128,7 @@ ROUNDTRIPPING_CONFIG_ENTRIES = [
> ROUNDTRIPPING_CONFIG_ENTRIES = [
> ('uuid', str),
> ('vcpus', int),
> + ('max_vcpus', int),
> ('vcpu_avail', int),
> ('cpu_weight', float),
> ('memory', int),
> @@ -567,6 +568,7 @@ class XendDomainInfo:
> avail = int(1)
>
> defaultInfo('vcpus', lambda: avail)
> + defaultInfo('max_vcpus', lambda: 8)
> defaultInfo('online_vcpus', lambda: self.info['vcpus'])
> defaultInfo('max_vcpu_id', lambda: self.info['vcpus']-1)
> defaultInfo('vcpu_avail', lambda: (1 << self.info['vcpus']) - 1)
> @@ -749,7 +751,7 @@ class XendDomainInfo:
> return 'offline'
>
> result = {}
> - for v in range(0, self.info['vcpus']):
> + for v in range(0, self.info['max_vcpus']):
> result["cpu/%d/availability" % v] = availability(v)
> return result
>
> @@ -1231,7 +1233,7 @@ class XendDomainInfo:
> self.recreateDom()
>
> # Set maximum number of vcpus in domain
> - xc.domain_max_vcpus(self.domid, int(self.info['vcpus']))
> + xc.domain_max_vcpus(self.domid, int(self.info['max_vcpus']))
>
>
> def introduceDomain(self):
> diff -r ec03b24a2d83 -r 263d3eb8c182 tools/python/xen/xm/create.py
> --- a/tools/python/xen/xm/create.py Tue Aug 15 19:53:55 2006 +0100
> +++ b/tools/python/xen/xm/create.py Wed Aug 16 13:45:35 2006 -0500
> @@ -177,6 +177,10 @@ gopts.var('apic', val='APIC',
> gopts.var('apic', val='APIC',
> fn=set_int, default=0,
> use="Disable or enable APIC of HVM domain.")
> +
> +gopts.var('max_vcpus', val='VCPUS',
> + fn=set_int, default=8,
> + use="max # of Virtual CPUS a domain will have in its
life.")
>
> gopts.var('vcpus', val='VCPUS',
> fn=set_int, default=1,
> @@ -667,7 +671,7 @@ def make_config(vals):
> config.append([n, v])
>
> map(add_conf, ['name', 'memory', 'maxmem', 'restart', 'on_poweroff',
> - 'on_reboot', 'on_crash', 'vcpus', 'features'])
> + 'on_reboot', 'on_crash', 'vcpus', 'max_vcpus', 'features'])
>
> if vals.uuid is not None:
> config.append(['uuid', vals.uuid])
>
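For anyone wanting to try this, my reading of the patch is that usage in a
domain config file would look something like the following (the 4/2 values
are purely illustrative, not defaults from the patch):

    # allocate 4 vcpus in the hypervisor for the whole of the domain's life
    max_vcpus = 4
    # bring only 2 of them online at boot; the rest can be enabled later
    vcpus = 2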
>
> On Mon, Aug 14, 2006 at 05:46:05PM -0500, Ryan Harper wrote:
> > * Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx> [2006-08-14 17:41]:
> > > > > Either Keir's cpu[X] = "Y" approach or my cpu = [ "A","B","C" ]
> > > > > approach seems workable.
> > > >
> > > > Your last email seemed to indicate to me that you didn't like using
> > > > quoted values in a list to separate per-vcpu cpumask values. Maybe I
> > > > was mistaken.
> > >
> > > If it's an honest python list I have no problem. Your example appeared
> > > to be some quoting within a string.
> >
> > OK.
> >
> > > My approach is a list too...
> > >
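(To spell out the two syntaxes being compared, with made-up values; neither
form is being presented here as final:

    cpus = "0-3,5,^1"            # one string mask applied to every vcpu
    cpus = [ "0-1", "2", "3" ]   # honest python list, one mask per vcpu

the list form being what's meant above by "per-vcpu cpumask values".)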
> > > > > BTW: does the right thing happen in the face of vcpu hot plugging?
> > > > > i.e. if I unplug a vcpu and put it back in do I keep the old mask?
> > > > > If I add vcpus what mask do they get?
> > > >
> > > > unplug events only affect a vcpu's status. The internal struct
> > > > vcpu in the hypervisor is not de-allocated/re-allocated during hotplug
> > > > events.
> > > >
> > > > We don't currently support a hotadd for vcpus that weren't allocated
> > > > at domain creation time. The current method for simulating hot-add
> > > > would be to start a domain with 32 VCPUS and disable all but the
> > > > number of vcpus you currently want. Ryan Grimm posted a patch back
> > > > in February that had xend do this by adding a new config option,
> > > > max_vcpus, which was used when calling xc_domain_max_vcpus(), having
> > > > the hypervisor alloc that max number of vcpus and then using the
> > > > vcpus parameter to determine how many to bring online.
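For reference, the mechanism described above reduces to two pieces that are
both visible in the patch (a sketch, not the exact code; the self.info
values are whatever the config supplied):

    # reserve the full complement of vcpus in the hypervisor up front
    xc.domain_max_vcpus(self.domid, int(self.info['max_vcpus']))
    # the low 'vcpus' bits of vcpu_avail mark which ones come up online
    self.info['vcpu_avail'] = (1 << self.info['vcpus']) - 1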
> > >
> > > I like the idea of having a vcpus_max
> >
> > I'll see if Ryan Grimm can dust that one off and resend it.
> >
> > --
> > Ryan Harper
> > Software Engineer; Linux Technology Center
> > IBM Corp., Austin, Tx
> > (512) 838-9253 T/L: 678-9253
> > ryanh@xxxxxxxxxx
> >
>
> --
> Thanks,
> Ryan Grimm
> IBM Linux Technology Center
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel