xen-users

Re: [Xen-users] disk access best practice

To: "Brian Krusic" <brian@xxxxxxxxxx>
Subject: Re: [Xen-users] disk access best practice
From: "Todd Deshane" <deshantm@xxxxxxxxx>
Date: Tue, 6 Jan 2009 13:20:42 -0500
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 06 Jan 2009 10:21:21 -0800
In-reply-to: <6BDB134C-4C25-4474-BDEA-4F5FEE7101E2@xxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <6BDB134C-4C25-4474-BDEA-4F5FEE7101E2@xxxxxxxxxx>
Reply-to: deshantm@xxxxxxxxx
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
On Tue, Jan 6, 2009 at 12:58 PM, Brian Krusic <brian@xxxxxxxxxx> wrote:
> Hi all,
>
> While I've read some FAQs, forums, and Professional Xen Virtualization, I
> would like your take on this.
>

You should read Running Xen ;)

> I've 2 paravirtualized domUs running, each using a tap:aio disk image located
> on a local 500GB RAID.
>
> While performance seems fine both interactively and using benchmarks, is
> there a practical limit to the image size before I should start breaking it
> up?
>

If there is a hard limit on the file system for the maximum file size, that
could be an issue.
Otherwise, it really depends on usage, backup considerations, etc.
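As a rough sanity check (just a sketch with made-up paths, and
shutil.disk_usage needs Python 3.3+), you could compare the planned image
size against the free space on the backing filesystem before committing to
it; the filesystem's own documented maximum file size is a separate limit
you would still have to verify:

  import shutil

  image_dir = '/srv/xen'          # hypothetical directory holding the domU images
  planned_size = 20 * 1024 ** 4   # e.g. your planned 20TB image

  usage = shutil.disk_usage(image_dir)
  if planned_size > usage.free:
      raise SystemExit('only %d bytes free under %s, need %d'
                       % (usage.free, image_dir, planned_size))
  print('ok: %d bytes free, %d planned' % (usage.free, planned_size))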

> I plan to build another dom0 box with a 24TB RAID on it, hosting 2
> paravirtualized domUs, one of which will need 20TB.
>
> Should I break up the domU into 2 images, 1 for the OS and the other for
> storage needs?
>

This can be beneficial in a general sense: it simplifies backups, and it
can also improve performance, just as with a non-virtualized system
writing to different physical disks.
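For what it's worth, a minimal sketch of the disk line such a domU config
might use, with one image for the OS and one for bulk storage (the paths
and device names here are made up):

  disk = [ 'tap:aio:/srv/xen/bigdomu-os.img,xvda,w',
           'tap:aio:/srv/xen/bigdomu-data.img,xvdb,w' ]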

> So my questions are;
>
> 1 - What's a practical single disk image size?

Others may have experience with very large disks....

> 2 - Should I pre allocate all image space during domU creation or have it
> dynamically grow?
>

It depends on the performance you need. Dynamically growing images incur
some performance degradation, but they also save you a lot of space, so it
is a trade-off. In practice, if you break things up, you could have a
mixture of disks: the performance-critical ones pre-allocated, and the
less performance-critical, less used ones dynamically grown (aka sparse
files).
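If it helps, here is a small sketch of the difference (paths and sizes are
made up, and os.posix_fallocate needs a fairly recent Python on Linux): a
sparse file only has its length set, while a pre-allocated one has its
blocks reserved up front.

  import os

  size = 10 * 1024 ** 3            # example: a 10GB image

  # Sparse image: length is set, blocks are only allocated when written.
  with open('/srv/xen/sparse.img', 'wb') as f:
      f.truncate(size)

  # Pre-allocated image: ask the filesystem to reserve all blocks now.
  with open('/srv/xen/prealloc.img', 'wb') as f:
      os.posix_fallocate(f.fileno(), 0, size)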

Hope that helps some.

Cheers,
Todd

-- 
Todd Deshane
http://todddeshane.net
http://runningxen.com

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
