xen-devel

Re: [Xen-devel] blkif shared ring

To: Samvel Yuri <samvelox@xxxxxxxxx>
Subject: Re: [Xen-devel] blkif shared ring
From: Samuel Thibault <samuel.thibault@xxxxxxxxxxxx>
Date: Wed, 19 Nov 2008 00:39:43 +0100
Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 18 Nov 2008 15:40:11 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <663860.81684.qm@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Mail-followup-to: Samuel Thibault <samuel.thibault@xxxxxxxxxxxx>, Samvel Yuri <samvelox@xxxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
References: <663860.81684.qm@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.12-2006-07-14
Hello,

Samvel Yuri, on Tue 18 Nov 2008 15:29:48 -0800, wrote:
> Wouldn't the shared ring likely be a bottleneck with a very large number
> of IOs? Has anybody run into such an issue?

Compared to disk speeds, it is almost a bottleneck nowadays. In Mini-OS
I could achieve almost native performance, but only by using the whole
buffer page. When disks get faster, one page won't be enough.
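
Roughly, assuming the usual blkif constants (a 32-slot ring in one 4 KiB
shared page, and BLKIF_MAX_SEGMENTS_PER_REQUEST == 11 one-page segments per
request; exact values may differ between trees), one page bounds the data in
flight to about 1.4 MiB. A quick back-of-the-envelope sketch:

#include <stdio.h>

/* Assumed values: the standard blkif ring fits 32 requests in one
 * 4 KiB shared page, and each request can carry up to
 * BLKIF_MAX_SEGMENTS_PER_REQUEST == 11 segments of one page each. */
#define RING_SLOTS    32
#define SEGS_PER_REQ  11
#define SEG_BYTES     4096

int main(void)
{
    unsigned long in_flight =
        (unsigned long)RING_SLOTS * SEGS_PER_REQ * SEG_BYTES;

    printf("max in-flight data per ring: %lu KiB (~%lu MiB)\n",
           in_flight / 1024, in_flight / (1024 * 1024));
    /* 32 * 11 * 4 KiB = 1408 KiB, i.e. roughly 1.4 MiB */
    return 0;
}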

Samuel

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
