xen-devel

Re: [Xen-devel] Re: [Xen-users] Problem with restore/migration with Xen 4.0.0 and Jeremy kernel (2.6.32.12)

To: Pasi Kärkkäinen <pasik@xxxxxx>
Subject: Re: [Xen-devel] Re: [Xen-users] Problem with restore/migration with Xen 4.0.0 and Jeremy kernel (2.6.32.12)
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Thu, 13 May 2010 10:08:23 -0700
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, xen-users@xxxxxxxxxxxxxxxxxxx, Pierre POMES <ppomes@xxxxxxxxxxxx>
Delivery-date: Thu, 13 May 2010 10:09:37 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20100513133116.GN17817@xxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4BE54B34.8090300@xxxxxxxxxxxx> <4BEB1D7A.2040904@xxxxxxxxxxxx> <20100513133116.GN17817@xxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100430 Fedora/3.0.4-2.fc12 Lightning/1.0b2pre Thunderbird/3.0.4
On 05/13/2010 06:31 AM, Pasi Kärkkäinen wrote:
> Forwarding to xen-devel ..
>
> Jeremy: Have you heard of this before? save/restore/migration
> takes 4-10x longer on pvops 2.6.32 compared to xenlinux 2.6.32.
>
> He verified that the DEBUG options in the kernel .configs are the same.
>   

No, I wasn't aware of any big save/restore performance differences.  Is
the difference caused by a pvops dom0 or domU or both?

One materially different thing is that pvops kernels support preemption,
which requires all processes to be frozen before a suspend.  I wonder if
disabling preemption makes a difference (assuming that it is the domU
which is causing the slowdown).
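
Something along these lines could rule that out (a rough sketch; the .config
path and kernel name below are only examples, not the reporter's actual setup):

    grep CONFIG_PREEMPT /boot/config-2.6.32.12-pvops   # see what the pvops domU kernel uses
    # then rebuild that kernel with preemption disabled and retry the save:
    #   CONFIG_PREEMPT_NONE=y
    #   # CONFIG_PREEMPT_VOLUNTARY is not set
    #   # CONFIG_PREEMPT is not set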

Ah, but the report is that it's the restore which is very slow, which
suggests that it is the dom0 environment that is causing problems.
Does "top" show a particular process being very CPU-bound during the
restore?  Or is it I/O-bound?
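
For example, something like this in dom0 while the restore is running
(the process names are just guesses about what might show up):

    top -b -n 5 -d 2    # look for xc_restore, xend or a qemu-dm process pegging a CPU
    vmstat 1            # a high "wa" column would point at I/O wait instead
    iostat -x 1         # per-device utilisation, from the sysstat package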

    J

> -- Pasi
>
> On Wed, May 12, 2010 at 05:28:26PM -0400, Pierre POMES wrote:
>   
>> Hi,
>>
>> First, sorry for the double posting...
>>
>> I just built a 2.6.32.10 kernel with Andrew Lyon's patches (so it is a  
>> "xenlinux" kernel, not a "pvops" kernel).
>>
>> Live migration and restore operations are between 4 and 10 times faster  
>> with this kernel. Furthermore, during live migration, hang times in the  
>> domU are shorter (1-2 seconds versus 1-15 seconds for a domU with  
>> 256 MB of RAM).
>>
>> Error messages "Error when reading batch size" / "error when buffering  
>> batch, finishing" are still in my logs.
>>
>> Regarding timings, everything is now similar to what I had with Xen 3.x  
>> on top of xenlinux kernels.
>>
>> Regards,
>> Pierre
>>
>>
>>
>>     
>>> Hi all,
>>>
>>> I am using Xen 4.0.0 on top of Ubuntu Lucid (amd64), with the Jeremy  
>>> kernel taken from git (xen/stable-2.6.32.x branch, 2.6.32.12 when I am  
>>> writing this email). This kernel is also used in my domU.
>>>
>>> I can save a domU without any problem, but restoring it may take 2 to  
>>> 5 minutes from a 1 GB checkpoint file (the domU has 1 GB of RAM). There  
>>> are also errors in /var/log/xen/xend.log, "Error when reading batch size"  
>>> and "error when buffering batch, finishing":
>>>
>>> [2010-05-08 04:23:16 9497] DEBUG (XendDomainInfo:1804) Storing domain  
>>> details: {'image/entry': '18446744071587529216', 'console/port': '2',  
>>> 'image/loader': 'generic', 'vm':  
>>> '/vm/156ea44d-6707-cbe6-2d58-7bea4792dff4',  
>>> 'control/platform-feature-multiprocessor-suspend': '1',  
>>> 'image/hv-start-low': '18446603336221196288', 'image/guest-os':  
>>> 'linux', 'image/virt-base': '18446744071562067968', 'memory/target':  
>>> '1048576', 'image/guest-version': '2.6', 'image/pae-mode': 'yes',  
>>> 'description': '', 'console/limit': '1048576', 'image/paddr-offset':  
>>> '0', 'image/hypercall-page': '18446744071578882048',  
>>> 'image/suspend-cancel': '1', 'cpu/0/availability': 'online',  
>>> 'image/features/pae-pgdir-above-4gb': '1',  
>>> 'image/features/writable-page-tables': '0', 'console/type':  
>>> 'xenconsoled', 'name': 'domusample', 'domid': '10',  
>>> 'image/xen-version': 'xen-3.0', 'store/port': '1'}
>>> [2010-05-08 04:23:16 9497] DEBUG (XendCheckpoint:286)  
>>> restore:shadow=0x0, _static_max=0x40000000, _static_min=0x0,
>>> [2010-05-08 04:23:16 9497] DEBUG (XendCheckpoint:305) [xc_restore]:  
>>> /usr/lib/xen/bin/xc_restore 22 10 1 2 0 0 0 0
>>> [2010-05-08 04:23:16 9497] INFO (XendCheckpoint:423) xc_domain_restore  
>>> start: p2m_size = 40000
>>> [2010-05-08 04:23:16 9497] INFO (XendCheckpoint:423) Reloading memory  
>>> pages:   0%
>>> [2010-05-08 04:25:53 9497] INFO (XendCheckpoint:423) ERROR Internal  
>>> error: Error when reading batch size
>>> [2010-05-08 04:25:53 9497] INFO (XendCheckpoint:423) ERROR Internal  
>>> error: error when buffering batch, finishing
>>> [2010-05-08 04:25:53 9497] INFO (XendCheckpoint:423)
>>> [2010-05-08 04:25:53 9497] INFO (XendCheckpoint:423) ^H^H^H^H100%
>>> [2010-05-08 04:25:53 9497] INFO (XendCheckpoint:423) Memory reloaded  
>>> (0 pages)
>>> [2010-05-08 04:25:53 9497] INFO (XendCheckpoint:423) read VCPU 0
>>>
>>> Live migration has the same problem: it may take several minutes to  
>>> complete. Please note that restore and migration do not fail, but they  
>>> are just very slow.
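>>>
>>> For reference, a rough way to reproduce the timings (the checkpoint path
>>> here is only an example):
>>>
>>> time xm save domusample /var/lib/xen/save/domusample.chk
>>> time xm restore /var/lib/xen/save/domusample.chk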
>>>
>>> My domu is on top of DRBD, and the config file is:
>>>
>>>
>>> -------------
>>> kernel      = '/boot/vmlinuz-2.6.32.12-it-xen'
>>> ramdisk     = '/boot/initrd.img-2.6.32.12-it-xen'
>>> memory      = '1024'
>>>
>>> #
>>> #  Disk device(s).
>>> #
>>> root        = '/dev/xvda2 ro'
>>> disk        = [
>>>                   'drbd:domusampleswap,xvda1,w',
>>>                   'drbd:domusampleslash,xvda2,w',
>>>               ]
>>>
>>>
>>>
>>> #
>>> #  Hostname
>>> #
>>> name        = 'domusample'
>>>
>>> #
>>> #  Networking
>>> #
>>> vif         = [ 'mac=00:16:3E:58:FC:F9' ]
>>>
>>> #
>>> #  Behaviour
>>> #
>>> on_poweroff = 'destroy'
>>> on_reboot   = 'restart'
>>> on_crash    = 'restart'
>>>
>>> extra = '2 console=hvc0'
>>> ----------
>>>
>>> I do not have any ideas here.
>>>
>>> Has anybody already encountered (and solved) this issue?
>>>
>>> Thanks.
>>> Pierre
>>>
>>>
>>>
>>> _______________________________________________
>>> Xen-users mailing list
>>> Xen-users@xxxxxxxxxxxxxxxxxxx
>>> http://lists.xensource.com/xen-users
>>>       
>>
>> _______________________________________________
>> Xen-users mailing list
>> Xen-users@xxxxxxxxxxxxxxxxxxx
>> http://lists.xensource.com/xen-users
>>     
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
>
>   


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
