RE: [Xen-API] XCP: dom0 scalability

To: "xen-api@xxxxxxxxxxxxxxxxxxx" <xen-api@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-API] XCP: dom0 scalability
From: George Shuklin <george.shuklin@xxxxxxxxx>
Date: Mon, 11 Oct 2010 17:36:54 +0400
Delivery-date: Mon, 11 Oct 2010 06:37:10 -0700
In-reply-to: <81A73678E76EA642801C8F2E4823AD219331853A53@xxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-api-request@lists.xensource.com?subject=help>
List-id: Discussion of API issues surrounding Xen <xen-api.lists.xensource.com>
List-post: <mailto:xen-api@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-api>, <mailto:xen-api-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-api>, <mailto:xen-api-request@lists.xensource.com?subject=unsubscribe>
References: <81A73678E76EA642801C8F2E4823AD219331853A47@xxxxxxxxxxxxxxxxxxxxxxxxx> <1286646005.3706.6.camel@xxxxxxxxxxxxxxxx> <81A73678E76EA642801C8F2E4823AD219331853A53@xxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-api-bounces@xxxxxxxxxxxxxxxxxxx
I think memcpy has limited use for high-load data exchange. For
low-load exchange we already have XenStore.
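
For reference, here is a minimal sketch of that low-load path over
XenStore, using the libxenstore C API; the key path and payload are
made up for illustration:

/* Low-bandwidth inter-domain exchange via XenStore, using the
 * libxenstore C API (xs.h). The key path and payload are only
 * illustrative; a real protocol would agree on a layout under
 * each domain's data/ subtree and use watches, not one-shot reads. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <xs.h>

int main(void)
{
    struct xs_handle *xsh = xs_daemon_open();
    if (!xsh) { perror("xs_daemon_open"); return 1; }

    /* writer side: publish a small value */
    const char *msg = "hello-peer";
    if (!xs_write(xsh, XBT_NULL, "/local/domain/0/data/d2d", msg, strlen(msg)))
        fprintf(stderr, "xs_write failed\n");

    /* reader side: a peer domain would read the same path */
    unsigned int len;
    char *val = xs_read(xsh, XBT_NULL, "/local/domain/0/data/d2d", &len);
    if (val) { printf("got %.*s\n", (int)len, val); free(val); }

    xs_daemon_close(xsh);
    return 0;
}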

I think the new D2D (domain-to-domain) communication protocol should
give more speed than IP-based communication via netchannel, by cutting
the overhead of IP headers: addresses, TTL, checksums and so on.
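
To make that concrete, a minimal sketch of what such a D2D channel
could look like on top of Xen's generic shared-ring macros (the d2d
message layout itself is hypothetical); the whole per-message 'header'
is one small descriptor:

/* Sketch of a D2D channel on Xen's generic shared-ring macros
 * (xen/interface/io/ring.h). The d2d request/response layout is
 * hypothetical. Per-message overhead is this 12-byte descriptor,
 * with no addresses, TTL or checksum to process per packet. */
#include <xen/interface/io/ring.h>
#include <xen/interface/grant_table.h>

struct d2d_request {            /* hypothetical message descriptor */
    uint32_t    id;             /* request id, echoed in response  */
    uint32_t    len;            /* payload bytes in the data page  */
    grant_ref_t gref;           /* grant ref of the payload page   */
};

struct d2d_response {
    uint32_t id;
    int16_t  status;
};

/* Generates struct d2d_sring plus the front/back ring types. */
DEFINE_RING_TYPES(d2d, struct d2d_request, struct d2d_response);

/* Frontend setup over one page already shared with the peer. */
static void d2d_front_setup(void *page, struct d2d_front_ring *front)
{
    struct d2d_sring *sring = page;
    SHARED_RING_INIT(sring);
    FRONT_RING_INIT(front, sring, PAGE_SIZE);
}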

You point out the problem of the dying domain. I think we have to
split it into cases. As I understand it, the main problem is an 'other
(bad) domain' not returning pages to a normal domain, which wants to
shut down but cannot because the bad domain is still holding its pages.

I think this has a simple solution: the good domain simply transfers
the 'snatched' pages (or pointers to them?) to some service in dom0,
which holds them until the bad domain finally gives them up.

The same approach is used by init in Linux for zombie processes. Of
course this solution will not fix the buggy behaviour of the bad
domain, but it lets the good domain rest in peace without side effects.
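
The analogy is literal: init adopts orphans and reaps their zombie
entries with waitpid(), releasing the last resources a dead process
still pins. A standalone illustration of that reaping step:

/* Standalone illustration of the init/zombie analogy: the adopting
 * parent reaps a dead child with waitpid(), releasing the process
 * table entry the zombie still pins (the "snatched pages" here). */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }
    if (pid == 0)
        _exit(42);   /* child dies at once, lingering as a zombie */

    int status;
    /* parent acting as "init": collect the zombie, freeing its slot */
    if (waitpid(pid, &status, 0) == pid && WIFEXITED(status))
        printf("reaped child %d, exit code %d\n",
               (int)pid, WEXITSTATUS(status));
    return 0;
}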

And if dom0 finds that a bad domain has snatched too many pages from
dying domains, it may: 1st, request the pages back from the bad domain
(if the domain returns them, everything is OK; maybe the 'good' domain
was the one at fault); 2nd, if the bad domain refuses to give up the
zombie pages after accumulating a few thousand of them, use the most
effective way to cure the patient: domain_destroy() it as a buggy
domain (and maybe restart it later).
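
In pseudo-C that policy might look like the sketch below; every helper
name is hypothetical and only stands in for the steps above:

/* Hypothetical sketch of the dom0 zombie-page policy above. None of
 * these helpers are real Xen APIs; they stand in for "count what is
 * owed", "ask nicely" and the domain_destroy() last resort. */
#define ZOMBIE_PAGE_LIMIT 4096          /* "a few thousand" pages */

struct domain_ref { int domid; };

extern unsigned long zombie_pages_held(struct domain_ref d);        /* hypothetical */
extern int request_pages_back(struct domain_ref d);                 /* hypothetical */
extern void domain_destroy_and_maybe_restart(struct domain_ref d);  /* hypothetical */

void zombie_page_service_tick(struct domain_ref bad)
{
    unsigned long held = zombie_pages_held(bad);
    if (held == 0)
        return;

    /* 1st: ask the suspect to return the pages voluntarily; if it
     * complies, perhaps the "good" domain was the one at fault. */
    if (request_pages_back(bad) == 0)
        return;

    /* 2nd: past the threshold, apply the most effective cure. */
    if (held > ZOMBIE_PAGE_LIMIT)
        domain_destroy_and_maybe_restart(bad);
}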
 

On Mon, 11/10/2010 at 14:08 +0100, Dave Scott wrote:
> Hi George,
> 
> > And one more...
> > 
> > I think this is the point where we should think again about XenSockets.
> > They would be perfect for IDC.
> 
> I definitely agree that we need to create a nice IDC mechanism :)
> 
> I don't really know what the best mechanism would be... I've heard people 
> talk about a few different options.
> 
> One option is to create a protocol similar to blkback/blkfront and 
> netback/netfront i.e. using shared memory pages and event channels. This 
> ought to be the highest bandwidth approach. One potential downside is that a 
> software bug in one endpoint can prevent the other domain being cleaned up 
> properly -- the domain will remain in the 'D'ying state until all memory is 
> returned to it.
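
To make that clean-up hazard concrete: with grant tables the granting
side cannot safely free a page while the peer still maps it. A sketch
against the Linux gnttab API (names recalled from memory, so treat
them as assumptions):

/* Why a shared page pins its granting domain: sketch against the
 * Linux gnttab API of this era (function names recalled from
 * memory; verify against drivers/xen/grant-table.c before use). */
#include <linux/mm.h>
#include <xen/grant_table.h>

void d2d_try_teardown(grant_ref_t ref, struct page *page)
{
    /* While the peer still maps the page we must NOT free it:
     * revoking now would hand the peer a dangling frame. So the
     * page stays allocated, and a shutting-down domain lingers
     * in the 'D'ying state until the peer finally lets go. */
    if (gnttab_query_foreign_access(ref))
        return;              /* retry later; page remains pinned */

    /* safe now: revoke the grant and release the backing page */
    gnttab_end_foreign_access(ref, 0 /* writable */,
                              (unsigned long)page_address(page));
}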
> 
> Another option is to create a protocol where instead of sharing memory pages, 
> memcpy() is used to move the data between domains. I think the XCI people 
> might be using a variant of this already.
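
For what it's worth, the hypervisor already offers a primitive in this
style: the grant-copy operation (GNTTABOP_copy), where Xen itself
performs the copy so neither domain keeps a long-lived mapping of the
other's pages. A simplified, approximate view of its request structure
(field layout recalled from public/grant_table.h; check the real
header before relying on it):

/* Simplified view of Xen's grant-copy request: the hypervisor does
 * the memcpy between domains, so no foreign mapping outlives the
 * call. Treat the details as an approximation of the real struct. */
#include <stdint.h>

typedef uint32_t grant_ref_t;
typedef uint16_t domid_t;

struct gnttab_copy_sketch {
    struct {
        union {
            grant_ref_t ref;   /* grant reference (foreign page), or */
            uint64_t    gmfn;  /* raw frame number (own page)        */
        } u;
        domid_t  domid;        /* domain owning that page            */
        uint16_t offset;       /* byte offset within the page        */
    } source, dest;
    uint16_t len;              /* bytes to copy, within one page     */
    uint16_t flags;            /* e.g. GNTCOPY_source_gref           */
    int16_t  status;           /* GNTST_okay on success              */
};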
> 
> There may be other options... what do you think?
> 
> Cheers,
> Dave
> 
> 
> > 
> > On Fri, 08/10/2010 at 20:54 +0100, Dave Scott wrote:
> > > Hi,
> > >
> > > I've added a draft of a doc called "the dom0 capacity crunch" to the
> > > wiki:
> > >
> > > http://wiki.xensource.com/xenwiki/XCP_Overview?action=AttachFile&do=get&target=xcp_dom0_capacity_crunch.pdf
> > >
> > > The doc contains a proposal for dealing with ever-increasing dom0
> > > load, primarily by moving the load out of dom0 (stubdoms, helper
> > > domains etc) and, where still necessary, tweaking the number of dom0
> > > vcpus. I think this is becoming pretty important and we'll need to
> > > work on this asap.
> > >
> > > Comments are welcome.
> > >
> > > Cheers,
> > > Dave
> > >
> > > _______________________________________________
> > > xen-api mailing list
> > > xen-api@xxxxxxxxxxxxxxxxxxx
> > > http://lists.xensource.com/mailman/listinfo/xen-api
> > 
> > 
> > 
> > _______________________________________________
> > xen-api mailing list
> > xen-api@xxxxxxxxxxxxxxxxxxx
> > http://lists.xensource.com/mailman/listinfo/xen-api



_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api