Re: [Xen-devel] VM save/restore
On 17/08/2012 22:28, "Junjie Wei" <junjie.wei@xxxxxxxxxx> wrote:
> Hello,
>
> There is a problem in Xen 4.1.2 and earlier versions with VM save/restore.
> When a VM is configured with VCPUs > 64, it can be started or stopped,
> but it cannot be saved. This happens with both PVM and HVM guests.
>
> # xm vcpu-list 3 | grep OVM_OL5U7_X86_64_PVM_10GB | wc -l
> 65
>
> # xm save 3 vm.save
> Error: /usr/lib64/xen/bin/xc_save 27 3 0 0 0 failed
>
> /var/log/xen/xend.log: INFO (XendCheckpoint:416)
> xc: error: Too many VCPUS in guest!: Internal error
>
> It was caused by a hard-coded limit in tools/libxc/xc_domain_save.c:
>
>     if ( info.max_vcpu_id >= 64 )
>     {
>         ERROR("Too many VCPUS in guest!");
>         goto out;
>     }
>
> And also in tools/libxc/xc_domain_restore.c:
>
>     case XC_SAVE_ID_VCPU_INFO:
>         buf->new_ctxt_format = 1;
>         if ( RDEXACT(fd, &buf->max_vcpu_id, sizeof(buf->max_vcpu_id)) ||
>              buf->max_vcpu_id >= 64 ||
>              RDEXACT(fd, &buf->vcpumap, sizeof(uint64_t)) )
>         {
>             PERROR("Error when reading max_vcpu_id");
>             return -1;
>         }
>
> The code above is in both xen-4.1.2 and xen-unstable.
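(For context: the 64 in both checks comes from the stream carrying the set of
online VCPUs as a single uint64_t bitmap, one bit per VCPU, so there is simply
no bit available for vcpu ids of 64 or above. A standalone illustration of
that encoding follows; it is not actual libxc code.)

    /* Illustration only, not libxc code: why a single 64-bit vcpumap
     * caps the saveable guest at 64 VCPUs (ids 0..63). */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t vcpumap = 0;              /* one bit per online VCPU    */
        unsigned int max_vcpu_id = 64;     /* a 65-VCPU guest: ids 0..64 */

        for (unsigned int i = 0; i <= max_vcpu_id; i++) {
            if (i >= 64) {
                /* No bit left: this VCPU cannot be recorded as online. */
                fprintf(stderr, "VCPU %u does not fit in a 64-bit vcpumap\n", i);
                continue;
            }
            vcpumap |= (uint64_t)1 << i;   /* mark VCPU i online */
        }

        printf("vcpumap = %#018" PRIx64 "\n", vcpumap);
        return 0;
    }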
>
> I think if a VM can be successfully started, then save/restore should
> also work. So I made a patch and did some testing.
The check for 64 VCPUs is to cover the fact that we only save/restore a 64-bit
vcpumap. That would need fixing too, surely, or CPUs > 64 would be offline
after restore, I would imagine.
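To illustrate one possible direction (a sketch with my own names, not the
actual stream format and not the proposed patch): carry the map as an array of
64-bit words sized from max_vcpu_id, and on restore leave offline any VCPU
whose bit is clear.

    /* Sketch only: a variable-width online-VCPU map sized from max_vcpu_id,
     * instead of one fixed uint64_t.  Names and layout here are assumptions
     * for illustration, not the real libxc stream format. */
    #include <stdint.h>

    #define VCPUMAP_BITS_PER_WORD 64

    /* Number of 64-bit words needed to cover VCPU ids 0..max_vcpu_id. */
    static inline unsigned int vcpumap_words(unsigned int max_vcpu_id)
    {
        return max_vcpu_id / VCPUMAP_BITS_PER_WORD + 1;
    }

    /* Mark VCPU vcpu_id as online. */
    static inline void vcpumap_set(uint64_t *map, unsigned int vcpu_id)
    {
        map[vcpu_id / VCPUMAP_BITS_PER_WORD] |=
            (uint64_t)1 << (vcpu_id % VCPUMAP_BITS_PER_WORD);
    }

    /* Test whether VCPU vcpu_id is marked online. */
    static inline int vcpumap_test(const uint64_t *map, unsigned int vcpu_id)
    {
        return (map[vcpu_id / VCPUMAP_BITS_PER_WORD] >>
                (vcpu_id % VCPUMAP_BITS_PER_WORD)) & 1;
    }

    /* The save side would then write vcpumap_words(max_vcpu_id) words after
     * max_vcpu_id, the restore side would read the same number of words, and
     * any VCPU whose bit is clear would stay offline after restore. */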
And what is a PVM guest?
-- Keir
> The above problem is gone, but there are new ones.
>
> Let me summarize the results here.
>
> With the patch, save/restore works fine whenever the VM can be started,
> except in the following two cases.
>
> 1) 32-bit guests can be configured with VCPUs > 32 and started,
> but the guest can only make use of 32 of them.
>
> 2) 32-bit PVM guests can be configured with VCPUs > 64 and started,
> but `xm save' does not work.
>
> See the testing below for details. The limit of 128 VCPUs for HVM
> guests is already taken into account.
>
> Could you please review the patch and help with these two cases?
>
>
> Thanks,
> Junjie
>
> -= Test environment =-
>
> [root@ovs087 HVM_X86_64]# cat /etc/ovs-release
> Oracle VM server release 3.2.1
>
> [root@ovs087 HVM_X86_64]# uname -a
> Linux ovs087 2.6.39-200.30.1.el5uek #1 SMP Thu Jul 12 21:47:09 EDT 2012
> x86_64 x86_64 x86_64 GNU/Linux
>
> [root@ovs087 HVM_X86_64]# rpm -qa | grep xen
> xenpvboot-0.1-8.el5
> xen-devel-4.1.2-39
> xen-tools-4.1.2-39
> xen-4.1.2-39
>
> -= PVM x86_64, 128 VCPUs =-
>
> [root@ovs087 PVM_X86_64]# xm list
> Name                         ID  Mem VCPUs State  Time(s)
> Domain-0                      0  511     8 r-----  6916.9
> OVM_OL5U7_X86_64_PVM_10GB     9 2048   128 r-----    48.1
>
> [root@ovs087 PVM_X86_64]# xm save 9 vm.save
>
> [root@ovs087 PVM_X86_64]# xm restore vm.save
>
> [root@ovs087 PVM_X86_64]# xm list
> Name                         ID  Mem VCPUs State  Time(s)
> Domain-0                      0  511     8 r-----  7076.7
> OVM_OL5U7_X86_64_PVM_10GB    10 2048   128 r-----    51.6
>
> -= PVM x86_64, 256 VCPUs =-
>
> [root@ovs087 PVM_X86_64]# xm list
> Name                         ID  Mem VCPUs State  Time(s)
> Domain-0                      0  511     8 r----- 10398.1
> OVM_OL5U7_X86_64_PVM_10GB    35 2048   256 r-----    30.4
>
> [root@ovs087 PVM_X86_64]# xm save 35 vm.save
>
> [root@ovs087 PVM_X86_64]# xm restore vm.save
>
> [root@ovs087 PVM_X86_64]# xm list
> Name                         ID  Mem VCPUs State  Time(s)
> Domain-0                      0  511     8 r----- 10572.1
> OVM_OL5U7_X86_64_PVM_10GB    36 2048   256 r-----  1466.9
>
> -= HVM x86_64, 128 VCPUs =-
>
> [root@ovs087 HVM_X86_64]# xm list
> Name                         ID  Mem VCPUs State  Time(s)
> Domain-0                      0  511     8 r-----  8017.4
> OVM_OL5U7_X86_64_PVHVM_10GB  19 2048   128 r-----   343.7
>
> [root@ovs087 HVM_X86_64]# xm save 19 vm.save
>
> [root@ovs087 HVM_X86_64]# xm restore vm.save
>
> [root@ovs087 HVM_X86_64]# xm list
> Name                         ID  Mem VCPUs State  Time(s)
> Domain-0                      0  511     8 r-----  8241.1
> OVM_OL5U7_X86_64_PVHVM_10GB  20 2048   128 r-----   121.7
>
> -= PVM x86, 64 VCPUs =-
>
> [root@ovs087 PVM_X86]# xm list
> Name                         ID  Mem VCPUs State  Time(s)
> Domain-0                      0  511     8 r----- 36798.0
> OVM_OL5U7_X86_PVM_10GB       54 2048    32 r-----    92.8
>
> [root@ovs087 PVM_X86]# xm vcpu-list 54 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
> 64
>
> [root@ovs087 PVM_X86]# xm save 54 vm.save
>
> [root@ovs087 PVM_X86]# xm restore vm.save
>
> [root@ovs087 PVM_X86]# xm list
> Name                         ID  Mem VCPUs State  Time(s)
> Domain-0                      0  511     8 r----- 36959.3
> OVM_OL5U7_X86_PVM_10GB       55 2048    32 r-----    51.0
>
> [root@ovs087 PVM_X86]# xm vcpu-list 55 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
> 64
>
> 32-bit PVM, 65 VCPUs:
>
> [root@ovs087 PVM_X86]# xm list
> Name                         ID  Mem VCPUs State  Time(s)
> Domain-0                      0  511     8 r----- 36975.9
> OVM_OL5U7_X86_PVM_10GB       56 2048    32 r-----     8.6
>
> [root@ovs087 PVM_X86]# xm vcpu-list 56 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
> 65
>
> [root@ovs087 PVM_X86]# xm list
> Name                         ID  Mem VCPUs State  Time(s)
> Domain-0                      0  511     8 r----- 36977.7
> OVM_OL5U7_X86_PVM_10GB       56 2048    32 r-----    24.8
>
> [root@ovs087 PVM_X86]# xm vcpu-list 56 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
> 65
>
> [root@ovs087 PVM_X86]# xm save 56 vm.save
> Error: /usr/lib64/xen/bin/xc_save 26 56 0 0 0 failed
>
> /var/log/xen/xend.log: INFO (XendCheckpoint:416)
> xc: error: No context for VCPU64 (61 = No data available): Internal error
>
> -= HVM x86, 64 VCPUs =-
>
> [root@ovs087 HVM_X86]# xm list
> Name                         ID  Mem VCPUs State  Time(s)
> Domain-0                      0  511     8 r----- 36506.1
> OVM_OL5U7_X86_PVHVM_10GB     52 2048    32 r-----    68.6
>
> [root@ovs087 HVM_X86]# xm vcpu-list 52 | grep OVM_OL5U7_X86_PVHVM_10GB | wc -l
> 64
>
> [root@ovs087 HVM_X86]# xm save 52 vm.save
>
> [root@ovs087 HVM_X86]# xm restore vm.save
>
> [root@ovs087 HVM_X86]# xm list
> Name                         ID  Mem VCPUs State  Time(s)
> Domain-0                      0  511     8 r----- 36730.5
> OVM_OL5U7_X86_PVHVM_10GB     53 2048    32 r-----    19.8
>
> [root@ovs087 HVM_X86]# xm vcpu-list 53 | grep OVM_OL5U7_X86_PVHVM_10GB | wc -l
> 64
>
> -= HVM x86, 128 VCPUs =-
>
> [root@ovs087 HVM_X86]# xm list
> Name                         ID  Mem VCPUs State  Time(s)
> Domain-0                      0  511     8 r----- 36261.1
> OVM_OL5U7_X86_PVHVM_10GB     50 2048    32 r-----    34.9
>
> [root@ovs087 HVM_X86]# xm vcpu-list 50 | grep OVM_OL5U7_X86_PVHVM_10GB | wc -l
> 128
>
> [root@ovs087 HVM_X86]# xm save 50 vm.save
>
> [root@ovs087 HVM_X86]# xm restore vm.save
>
> [root@ovs087 HVM_X86]# xm list
> Name                         ID  Mem VCPUs State  Time(s)
> Domain-0                      0  511     8 r----- 36480.5
> OVM_OL5U7_X86_PVHVM_10GB     51 2048    32 r-----    20.3
>
> [root@ovs087 HVM_X86]# xm vcpu-list 51 | grep OVM_OL5U7_X86_PVHVM_10GB | wc -l
> 128
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel