Hi again,
I think I posted some detail to xen-users a while ago about our problems
with network I/O.
Here:
http://lists.xensource.com/archives/html/xen-users/2006-11/msg00834.html
We were unable to fix this issue, but have limited the symptoms by only
running one or two virtual machines per physical network card.
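For reference, the workaround boils down to one line per guest config; a minimal sketch in Xen's Python-syntax domU config, with hypothetical bridge names (xenbr0/xenbr1) standing in for one bridge per physical card:

```python
# Hypothetical domU config fragment (Xen 3.0-style Python config).
# Each guest's vif is pinned to the bridge of a dedicated physical NIC,
# so no single card carries traffic for more than one or two guests.
vif = ['bridge=xenbr0']   # guests assigned to the second NIC would use 'bridge=xenbr1'
```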
/Daniel
On 3/15/07 3:05 PM, "Chris Fanning" <christopher.fanning@xxxxxxxxx> wrote:
> Hi Daniel,
>
> Well, I've got this up and running on the workbench. I'm still using a
> 100 Mbit/s network but intend to upgrade.
>
>> We still use Xen in production, but due to network I/O performance issues,
>> I wouldn't recommend our setup if you intend to run more than one or two
>> virtual machines on each dom0.
>
> Can you please tell me more about this? I wouldn't like to continue
> down this road if it's a dead end.
>
> Thanks.
> Chris.
>
> On 3/13/07, Daniel J. Nielsen <djn@xxxxxxxxxx> wrote:
>> Hi Chris,
>>
>> We still use Xen in production, but due to network I/O performance issues,
>> I wouldn't recommend our setup if you intend to run more than one or two
>> virtual machines on each dom0.
>>
>> In the case described below, we discovered we had missed the experimental
>> support for hotpluggable CPUs in our custom Debian kernels. A recompile
>> later and all worked without a hitch.
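>> In case it helps, a quick way we could have caught that (a sketch; the
>> Debian config path under /boot is an assumption, adjust for your kernel):

```shell
# Check whether the running kernel was built with hotpluggable CPU
# support; Debian normally keeps the build config under /boot.
CONFIG="/boot/config-$(uname -r)"
if [ -r "$CONFIG" ]; then
    grep CONFIG_HOTPLUG_CPU "$CONFIG" || echo "CONFIG_HOTPLUG_CPU not set"
else
    echo "no kernel config at $CONFIG; try zcat /proc/config.gz"
fi
```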
>>
>> As to network cards, I'm not sure. We use the ones provided in our HP
>> ProLiant servers. For one of our servers, there are two:
>>
>> Broadcom Corporation NetXtreme BCM5704 Gigabit Ethernet (rev 10)
>>
>> I hope this clears things up. I'm not subscribed to xen-users anymore (I
>> just peruse the archives), so please include me in any replies.
>>
>> /Daniel
>>
>> On 3/13/07 9:11 AM, "Chris Fanning" <christopher.fanning@xxxxxxxxx> wrote:
>>
>>> Hello Daniel,
>>>
>>> I am trying to set up the same installation that you mention.
>>> I have dom0's on NFS root and domU's on AoE.
>>>
>>> At present I've got everything on 100 Mbit/s and it doesn't work very
>>> well. xend takes about 20 seconds to start up, and domU's don't recover
>>> their network connection after migration. I'd like to try it at 1000 Mbit/s.
>>>
>>> Can you please recommend the network cards I should use? I have
>>> some D-Links, but (for some reason) the modules don't get loaded even
>>> though lspci does show the cards.
>>> The thin server boxes need to boot via PXE (of course).
>>>
>>> Thanks.
>>> Chris.
>>>
>>> On 9/15/06, Daniel Nielsen <djn@xxxxxxxxxx> wrote:
>>>> Hi.
>>>>
>>>> We are currently migrating to Xen for our production servers, version
>>>> 3.0.2-2. But we are having problems with the live-migration feature.
>>>>
>>>> Our setup is this:
>>>>
>>>> We run debian-stable (sarge), with selected packages from backports.org.
>>>> Our glibc is patched to be "Xen-friendly". In our test setup, we have
>>>> two dom0's, both netbooting from a central NFS/tftpboot server, i.e. not
>>>> storing anything locally. Both dom0's have two ethernet ports: eth0 is
>>>> used by the dom0 and eth1 is bridged to Xen.
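>>>> The bridging side is just Xen's stock network-bridge script pointed at
>>>> the second NIC; a sketch of the relevant /etc/xen/xend-config.sxp lines
>>>> (exact values are assumptions, adjust to your interface names):

```
# /etc/xen/xend-config.sxp: bridge the guests onto eth1, leaving
# eth0 untouched for dom0's own (NFS-root) traffic
(network-script 'network-bridge netdev=eth1')
(vif-script vif-bridge)
```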
>>>>
>>>> Our domUs also use an NFS root, also debian sarge, and they use the same
>>>> kernel. They have no "ties" to the local machine except for network
>>>> access; they do not mount any local drives or files as drives. Everything
>>>> runs exclusively through NFS and in RAM.
>>>>
>>>> When migrating machines (our dom0's are named after fictional planets, and
>>>> virtual machines after fictional spaceships):
>>>>
>>>> geonosis:/ root# xm migrate --live serenity lv426
>>>> it just hangs.
>>>>
>>>> A machine called serenity pops up on lv426:
>>>>
>>>> lv426:/ root# xm list
>>>> Name              ID  Mem(MiB)  VCPUs  State   Time(s)
>>>> Domain-0           0       128      4  r-----  21106.6
>>>> serenity           8      2048      1  --p---      0.0
>>>> lv426:/ root#
>>>>
>>>> But nothing happens.
>>>>
>>>> If we migrate a lower-mem domU with e.g. 256 MiB, it works without a hitch.
>>>> If we migrate a domU with e.g. 512 MiB, it sometimes works and other times
>>>> it doesn't. But for domUs with 2 GiB RAM, it consistently fails.
>>>>
>>>> In the above example, if we wait quite a few hours, serenity will stop
>>>> responding, and geonosis will be left with:
>>>>
>>>> geonosis:/ root# xm list
>>>> Name              ID  Mem(MiB)  VCPUs  State   Time(s)
>>>> Domain-0           0       128      4  r-----  21106.6
>>>> Zombie-serenity    8      2048      2  -----d   3707.8
>>>> geonosis:/ root#
>>>>
>>>>
>>>> I have attached the relevant entries from the xend.log files from both
>>>> geonosis and lv426.
>>>>
>>>> I hope somebody is able to clear up what we are missing.
>>>>
>>>> I noticed in geonosis.log that it wants 2057 MiB. I'm unsure what that
>>>> means...?
>>>>
>>>>
>>>> /Daniel
>>>> Portalen
>>>>
>>>>
>>>>
>>
>>
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users