
[Xen-devel] [xen-4.0-testing test] 7462: trouble: pass/preparing/queued

flight 7462 xen-4.0-testing running [real]

Failures and problems with tests :-(

Tests which did not succeed and are blocking:
 build-amd64-pvops             1 hosts-allocate               running
 build-i386-pvops              1 hosts-allocate               running
 test-amd64-amd64-pair           <none executed>              queued
 test-amd64-amd64-pv             <none executed>              queued
 test-amd64-amd64-win            <none executed>              queued
 test-amd64-amd64-xl-win         <none executed>              queued
 test-amd64-amd64-xl             <none executed>              queued
 test-amd64-i386-pair            <none executed>              queued
 test-amd64-i386-pv              <none executed>              queued
 test-amd64-i386-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-win-vcpus1      <none executed>              queued
 test-amd64-i386-win             <none executed>              queued
 test-amd64-i386-xl-credit2      <none executed>              queued
 test-amd64-i386-xl-multivcpu    <none executed>              queued
 test-amd64-i386-xl-win-vcpus1    <none executed>              queued
 test-amd64-i386-xl              <none executed>              queued
 test-amd64-xcpkern-i386-pair  2 hosts-allocate               running
 test-amd64-xcpkern-i386-pv    2 hosts-allocate               running
 test-amd64-xcpkern-i386-rhel6hvm-amd  2 hosts-allocate               running
 test-amd64-xcpkern-i386-rhel6hvm-intel  2 hosts-allocate               running
 test-amd64-xcpkern-i386-win   2 hosts-allocate               running
 test-amd64-xcpkern-i386-xl-credit2  2 hosts-allocate               running
 test-amd64-xcpkern-i386-xl-multivcpu  2 hosts-allocate               running
 test-amd64-xcpkern-i386-xl-win  2 hosts-allocate               running
 test-amd64-xcpkern-i386-xl    2 hosts-allocate               running
 test-i386-i386-pair             <none executed>              queued
 test-i386-i386-pv               <none executed>              queued
 test-i386-i386-win              <none executed>              queued
 test-i386-i386-xl-win           <none executed>              queued
 test-i386-i386-xl               <none executed>              queued
 test-i386-xcpkern-i386-pair   2 hosts-allocate               running
 test-i386-xcpkern-i386-pv     2 hosts-allocate               running
 test-i386-xcpkern-i386-win    2 hosts-allocate               running
 test-i386-xcpkern-i386-xl     2 hosts-allocate               running

version targeted for testing:
 xen                  b11ae09ae58b
baseline version:
 xen                  5054ed412032

People who touched revisions under test:
  Ian Campbell <Ian.Campbell@xxxxxxxxxx>
  Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
  Jim Fehlig <jfehlig@xxxxxxxxxx>
  Keir Fraser <keir@xxxxxxx>

 build-i386-xcpkern                                           pass     
 build-amd64                                                  pass     
 build-i386                                                   pass     
 build-amd64-oldkern                                          pass     
 build-i386-oldkern                                           pass     
 build-amd64-pvops                                            preparing
 build-i386-pvops                                             preparing
 test-amd64-amd64-xl                                          queued   
 test-amd64-i386-xl                                           queued   
 test-i386-i386-xl                                            queued   
 test-amd64-xcpkern-i386-xl                                   preparing
 test-i386-xcpkern-i386-xl                                    preparing
 test-amd64-i386-rhel6hvm-amd                                 queued   
 test-amd64-xcpkern-i386-rhel6hvm-amd                         preparing
 test-amd64-i386-xl-credit2                                   queued   
 test-amd64-xcpkern-i386-xl-credit2                           preparing
 test-amd64-i386-rhel6hvm-intel                               queued   
 test-amd64-xcpkern-i386-rhel6hvm-intel                       preparing
 test-amd64-i386-xl-multivcpu                                 queued   
 test-amd64-xcpkern-i386-xl-multivcpu                         preparing
 test-amd64-amd64-pair                                        queued   
 test-amd64-i386-pair                                         queued   
 test-i386-i386-pair                                          queued   
 test-amd64-xcpkern-i386-pair                                 preparing
 test-i386-xcpkern-i386-pair                                  preparing
 test-amd64-amd64-pv                                          queued   
 test-amd64-i386-pv                                           queued   
 test-i386-i386-pv                                            queued   
 test-amd64-xcpkern-i386-pv                                   preparing
 test-i386-xcpkern-i386-pv                                    preparing
 test-amd64-i386-win-vcpus1                                   queued   
 test-amd64-i386-xl-win-vcpus1                                queued   
 test-amd64-amd64-win                                         queued   
 test-amd64-i386-win                                          queued   
 test-i386-i386-win                                           queued   
 test-amd64-xcpkern-i386-win                                  preparing
 test-i386-xcpkern-i386-win                                   preparing
 test-amd64-amd64-xl-win                                      queued   
 test-i386-i386-xl-win                                        queued   
 test-amd64-xcpkern-i386-xl-win                               preparing

sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at

Test harness code can be found at

Not pushing.

changeset:   21503:b11ae09ae58b
tag:         tip
user:        Ian Campbell <ian.campbell@xxxxxxxxxx>
date:        Sat May 28 09:29:40 2011 +0100
    IOMMU: Fail if intremap is not available and iommu=required/force.
    Rather than sprinkling panic()s throughout the setup code, hoist the
    check up into common code.
    Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Signed-off-by: Keir Fraser <keir@xxxxxxx>
    Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    xen-unstable changeset:   23402:f979a1a69fe3
    xen-unstable date:        Thu May 26 08:18:44 2011 +0100
changeset:   21502:5768b9b19aaf
user:        Markus Gross <gross@xxxxxxxxxxxxx>
date:        Sat May 28 09:28:28 2011 +0100
    libxc: obtain correct length of p2m during core dumping
    while implementing core dumping functionality for the libxl driver
    of libvirt, I discovered an issue with mapping pages of a pv guest.
    After dumping the core of a pv guest the domain was not cleared up
    properly and some pages were not unmapped. This issue is similar
    to the one reported here:
    In xc_domain_dumpcore_via_callback in the file xc_core.c the function
    xc_core_arch_map_p2m is called to map P2M_FL_ENTRIES pages to the
    variable p2m.
    But to unmap the pages later, the dinfo->p2m_size has to be set
    accordingly. This was not done; instead, a variable named p2m_size
    was set. This way P2M_FL_ENTRIES was always zero and the pages were
    left mapped.
    Signed-off-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    Committed-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    xen-unstable changeset:   23374:8bd7b5e98f2a
    xen-unstable date:        Tue May 24 15:00:16 2011 +0100
changeset:   21501:3220df717f10
user:        Jim Fehlig <jfehlig@xxxxxxxxxx>
date:        Sat May 28 09:26:32 2011 +0100
    libxc: after saving, unmap correct amount for live_m2p
    With some help from Olaf, I've finally got to the bottom of an issue I
    came across while trying to implement save/restore in the libvirt
    libxenlight driver.  After issuing the save operation, the saved
    domain was not being cleaned up properly and left in this state from
    xl's perspective
    xen33:# xl list
    Name                   ID   Mem VCPUs      State   Time(s)
    Domain-0                0  6821     8     r-----     122.5
    (null)                  2     2     2     --pssd      10.8
    Checking the libvirtd /proc/$pid/maps I found this
    7f3798984000-7f3798b86000 r--s 00002000 00:03 4026532097
    So not all pages belonging to the domain were unmapped from
    libvirtd.  In tools/libxc/xc_domain_save.c we found that
    P2M_FL_ENTRIES were being mapped but only P2M_FLL_ENTRIES were being
    unmapped.  The attached patch changes the unmapping to use the same
    P2M_FL_ENTRIES macro.  I'm not too familiar with this code though so
    posting here for review.
    I suspect this was not noticed before since most (all?) processes
    doing save terminate after the save and are not long-running like
    libvirtd.
    Ian Campbell writes:
    > Looks like I introduced this in 18558:ccf0205255e1, sorry!
    > I guess it is also wrong in the error path out of map_and_save_p2m_table
    > and so we also need [another hunk].
    This change should be backported to relevant earlier trees. -iwj
    From: Jim Fehlig <jfehlig@xxxxxxxxxx>
    From: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
    Signed-off-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    Cc: Olaf Hering <olaf@xxxxxxxxx>
    Acked-by: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
    Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    Committed-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    xen-unstable changeset:   23373:171007b4e2c4
    xen-unstable date:        Tue May 24 14:50:00 2011 +0100
changeset:   21500:5054ed412032
user:        Keir Fraser <keir@xxxxxxx>
date:        Tue May 24 08:21:48 2011 +0100
    Added signature for changeset d4cefc444b74
(qemu changes not included)
