On Wed, 13 Apr 2011, Jan Kiszka wrote:
> On 2011-04-13 13:49, Stefano Stabellini wrote:
> > On Wed, 13 Apr 2011, Jan Kiszka wrote:
> >> On 2011-04-13 12:56, Stefano Stabellini wrote:
> >>> On Tue, 12 Apr 2011, Jan Kiszka wrote:
> >>>> Well, either you have a use for the VCPU state (how do you do migration
> >>>> in Xen without it?), or you should probably teach QEMU in a careful &
> >>>> clean way to run its device model without VCPUs - and without any
> >>>> TCG-related memory consumption. For the latter, you would likely receive
> >>>> kudos from KVM people as well.
> >>>>
> >>>> BTW, if you happen to support that crazy vmport under Xen, not updating
> >>>> the VCPU state will break your neck. Also, lacking VCPUs prevent the
> >>>> usage of analysis and debugging features of QEMU (monitor, gdbstub).
> >>>
> >>> We don't use the vcpu state in qemu because qemu takes care of device
> >>> emulation only; under xen the vcpu state is saved and restored by the
> >>> hypervisor.
> >>
> >> Just out of curiosity: So you are extracting the device states out of
> >> QEMU on migration, do the same with the VCPU states from the hypervisor
> >> (which wouldn't be that different from KVM in fact), and then transfer
> >> that to the destination node? Is there a technical or historical reason
> >> for this split-up? I mean, you still need some managing instance that
> >> does the state transportation and VM control on both sides, i.e. someone
> >> for the job that QEMU is doing for TCG or KVM migrations.
> >
> > That someone is the "toolstack", I guess libvirt would be the closest
> > thing to our toolstack in the kvm world.
> > The reason why we have a toolstack performing this task rather than qemu
> > is that pure PV guests don't need device emulation, so we don't even
> > have qemu running most of the times if there are only linux guests
> > installed in the system.
>
> Ah, for that use case it makes some sense to me.
>
> I bet there would also be some value in consolidating the "toolstack"
> functionality over bare qemu/libvirt infrastructure (if we ignored all
> existing interfaces and dependencies for a moment).
We have a libxenlight driver for libvirt already: it doesn't support
migration yet but when it does it will probably reuse the libvirt
infrastructure for doing that.
However, it will probably be libvirt that makes the libxenlight calls
to perform the VCPU save/restore, so that we don't add a qemu
dependency for traditional PV guests...
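To make the split described in this thread concrete, here is a minimal Python sketch of a toolstack-driven migration flow: VCPU state comes from the hypervisor, device-model state comes from qemu (and only exists when qemu is running at all), and the toolstack combines both into the migration image. All function and field names here are hypothetical stand-ins, not actual libxl/libxenctrl/qemu APIs:

```python
# Hypothetical sketch of toolstack-orchestrated migration under Xen.
# The toolstack (not qemu) drives the process; names are illustrative only.

def save_vcpu_state(domid):
    # Under Xen this would be obtained from the hypervisor
    # (e.g. via a libxenctrl-style call), never from qemu. Stubbed here.
    return {"domid": domid, "vcpus": [{"id": 0, "regs": "..."}]}

def save_device_state(domid):
    # Device-model state comes from qemu, which only runs for guests
    # that need device emulation (HVM guests). Stubbed here.
    return {"domid": domid, "devices": [{"name": "nic0", "state": "..."}]}

def build_migration_image(domid, has_device_model):
    # Pure PV guests have no qemu process, so the image carries
    # only hypervisor-side state; HVM guests add the qemu half.
    image = {"vcpu": save_vcpu_state(domid)}
    if has_device_model:
        image["devices"] = save_device_state(domid)
    return image

pv_image = build_migration_image(1, has_device_model=False)
hvm_image = build_migration_image(2, has_device_model=True)
```

The point the sketch captures is the one made above: because PV guests never involve qemu, the code path that saves VCPU state cannot live in qemu, so it sits in the toolstack (or, eventually, in libvirt via the libxenlight driver).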
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel