[Xen-ia64-devel] RE: Latest status about multiple domains on XEN/IPF

To: "Magenheimer, Dan \(HP Labs Fort Collins\)" <dan.magenheimer@xxxxxx>
Subject: [Xen-ia64-devel] RE: Latest status about multiple domains on XEN/IPF
From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
Date: Fri, 16 Sep 2005 08:45:22 +0800
Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Fri, 16 Sep 2005 00:43:06 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcW4ZfOzzchRRTCPTFuQnRhZtvFcDwADcXEgAAKwUYAACZFncAAB1poAABrDDSAABg+TcAAau19QAAO834AAEcfo8AAMivGQAA1I17A=
Thread-topic: Latest status about multiple domains on XEN/IPF

Yeah, it seems we're on the same page now. I suspect the console issue may
also be behind the blkfront connection failure, since an unwanted delay may
cause a timeout. Still needs more investigation. ;-(
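
To make the suspicion concrete: if blkfront's wait for the backend is
bounded, a long enough stall anywhere in event/console delivery shows up
as a connect failure. A minimal sketch of that pattern (the names here
are illustrative, not the actual blkfront code):

#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/sched.h>

#define CONNECT_TIMEOUT (10 * HZ)           /* illustrative bound */

/* Poll a (hypothetical) backend state word until it reaches
 * 'connected', giving up after CONNECT_TIMEOUT jiffies. */
static int wait_for_backend(volatile int *state, int connected)
{
        unsigned long deadline = jiffies + CONNECT_TIMEOUT;

        while (*state != connected) {
                if (time_after(jiffies, deadline))
                        return -ETIMEDOUT;  /* seen as "failed to connect" */
                set_current_state(TASK_INTERRUPTIBLE);
                schedule_timeout(1);        /* yield for one tick */
        }
        return 0;
}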

Thanks,
Kevin

>-----Original Message-----
>From: Magenheimer, Dan (HP Labs Fort Collins) [mailto:dan.magenheimer@xxxxxx]
>Sent: September 16, 2005 3:24
>To: Tian, Kevin
>Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
>Subject: RE: Latest status about multiple domains on XEN/IPF
>
>I got it all built with all the patches.  I am now
>able to run xend.  But when I do "xm create"
>I just get as far as:
>
>xen-event-channel using irq 233
>store-evtchn = 1
>
>and then the 0+1+01 (etc) debug output.
>
>Wait... I tried launching another domain and got
>further.  Or I guess this is just delayed console
>output from the first "xm create"?
>
>It gets as far as:
>Xen virtual console successfully installed as tty0
>Event-channel device installed.
>xen_blk: Initialising virtual block device driver
>
>and then nothing else.
>
>So I tried launching some more domains (with name=xxx).
>Now I get as far as the kernel unable-to-mount-root
>panic.
>
>It's hard to tell what is working because of the console
>problems (which I see you have posted a question about on
>xen-devel).
>
>Dan
>
>> -----Original Message-----
>> From: Tian, Kevin [mailto:kevin.tian@xxxxxxxxx]
>> Sent: Thursday, September 15, 2005 6:32 AM
>> To: Magenheimer, Dan (HP Labs Fort Collins)
>> Cc: ipf-xen
>> Subject: RE: Latest status about multiple domains on XEN/IPF
>>
>> Hi, Dan,
>>
>>      Attached is an updated xeno patch (the xen patch is still the
>> same), though with no functional enhancement. Some admittedly ugly
>> Makefile changes are required to build the latest xenolinux.hg. ;-)
>> Together with the other patch I sent to the mailing list for fixing
>> the domU crash (which took most of my day), I hope you can reach the
>> same point as me:
>>      blkfront failing to connect to xenstore, and a panic when
>> mounting the root fs.
>>
>> Thanks,
>> Kevin
>>
>> >-----Original Message-----
>> >From: Magenheimer, Dan (HP Labs Fort Collins)
>> [mailto:dan.magenheimer@xxxxxx]
>> >Sent: September 15, 2005 12:05
>> >To: Tian, Kevin
>> >Cc: ipf-xen
>> >Subject: RE: Latest status about multiple domains on XEN/IPF
>> >
>> >>   Thanks for the comments. When I sent out the patch, I didn't
>> >> mean it as the final one, just something for you to continue
>> >> debugging with. So the style is a bit messy, and most of your
>> >> comments regarding coding style are correct. I'll be more careful
>> >> next time, even when sending out a temporary patch.
>> >
>> >Oh, OK.  I didn't realize it was a "continue debug" patch.
>> >
>> >> >I haven't seen any machine crashes, but I am both
>> >> >running on a different machine and exercising it
>> >> >differently.  If you have any test to reproduce
>> >> >it, please let me know.  I have noticed that
>> >> >running "hg clone" seems to reproducibly cause
>> >> >a segmentation fault... I haven't had any time
>> >> >to try to track this down.  (I think Intel has better
>> >> >hardware debugging capabilities... perhaps if you
>> >> >can reproduce this, someone on the Intel team can
>> >> >track it down?)
>> >>
>> >> I see the crash when a domU is executing. Actually, if only dom0
>> >> is up, it can run safely for several days.
>> >
>> >OK.  Yes, I have seen dom0 stay up for many days too;
>> >that's why I was concerned to hear it was crashing.
>> >
>> >> >When I last tried, I wasn't able to get xend to
>> >> >run (lots of python errors).  It looks like you
>> >> >have gotten it to run?
>> >>
>> >> Could it be due to the python version? The default python
>> >> version on EL3 is 2.2, and with it we saw many python errors
>> >> before. Now we're using 2.4.1.
>> >
>> >I am using 2.3.5 but that has always worked before.
>> >
>> >> One more question: did you try xend with all my patches applied?
>> >> Without the change to do_memory_ops, which is explained below,
>> >> xend doesn't start, since its memory reservation request will
>> >> fail.
>> >
>> >I bet that is the problem.  I haven't tried it since
>> >receiving your patch and will try it again tomorrow.
>> >
>> >> >3) In privcmd.c (other than the same comment about
>> >> >   ifdef'ing every change), why did you change the
>> >> >   direct_remap_... --> remap__... define back?
>> >> >   Was it incorrect or just a style change?  Again,
>> >> >   I am trying to change the patches to something that
>> >> >   will likely be more acceptable upstream and
>> >> >   I think we will be able to move this simple
>> >> >   define into an asm header file.  If my change
>> >> >   to your patch is broken, please let me know.
>> >>
>> >> But as you may have noticed, the two functions require different
>> >> parameters: one takes an mm_struct and the other a vma. So your
>> >> previous change is incorrect.
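>> >>
>> >> To make the mismatch concrete, the shapes are roughly as below
>> >> (signatures trimmed and from memory, so treat them as a sketch
>> >> rather than the exact tree):
>> >>
>> >> /* Xen-specific helper: walks page tables via an mm_struct */
>> >> int direct_remap_area_pages(struct mm_struct *mm,
>> >>                             unsigned long address,
>> >>                             unsigned long machine_addr,
>> >>                             unsigned long size,
>> >>                             pgprot_t prot,
>> >>                             domid_t domid);
>> >>
>> >> /* generic kernel helper: takes the vma itself */
>> >> int remap_pfn_range(struct vm_area_struct *vma,
>> >>                     unsigned long addr,
>> >>                     unsigned long pfn,
>> >>                     unsigned long size,
>> >>                     pgprot_t prot);
>> >>
>> >> A bare #define aliasing one name to the other can't be right,
>> >> since the first argument types differ.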
>> >
>> >No, I missed that difference entirely!  Good catch!
>> >
>> >> >6) I will add your patch to hypercall.c (in the hypervisor).
>> >> >   But the comment immediately preceding concerns me...
>> >> >   are reservations implemented or not?  (I think not,
>> >> >   unless maybe they are only in VTI?)
>> >>
>> >> No, neither handles the reservation. However, the issue is that
>> >> nr_extents is no longer a level-1 parameter that the previous
>> >> code could simply retrieve from pt_regs. Now it's a sub-field in
>> >> a new reservation structure, and the latter is the only parameter
>> >> passed in. So I had to add the logic above to get nr_extents and
>> >> return the result the caller wants.
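>> >>
>> >> In other words, the shape changed roughly like this (field and
>> >> function names from memory, so read it as a sketch rather than
>> >> the exact interface):
>> >>
>> >> /* old style: nr_extents was its own hypercall argument, so it
>> >>  * could be pulled straight from pt_regs */
>> >> long do_dom_mem_op(unsigned int op,
>> >>                    unsigned long *extent_list,
>> >>                    unsigned long nr_extents,
>> >>                    unsigned int extent_order);
>> >>
>> >> /* new style: one pointer argument; nr_extents lives inside it */
>> >> struct xen_memory_reservation {
>> >>     unsigned long *extent_start;
>> >>     unsigned long  nr_extents;
>> >>     unsigned int   extent_order;
>> >>     unsigned int   address_bits;
>> >>     domid_t        domid;
>> >> };
>> >>
>> >> long do_memory_op(int cmd, void *arg)
>> >> {
>> >>     struct xen_memory_reservation res;
>> >>
>> >>     /* arg is the only level-1 parameter now, so the handler must
>> >>      * copy the structure in before it can see nr_extents */
>> >>     if (copy_from_user(&res, arg, sizeof(res)))
>> >>         return -EFAULT;
>> >>
>> >>     /* ... act on cmd, then return how many of res.nr_extents
>> >>      *     extents were actually handled ... */
>> >>     return res.nr_extents;
>> >> }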
>> >
>> >OK.
>> >
>> >If you have an updated patch by the end of your day,
>> >please send it and I will try it out tomorrow.
>> >
>> >Dan
>>

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
