[Xen-devel] [xen-unstable test] 5743: regressions - FAIL

To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] [xen-unstable test] 5743: regressions - FAIL
From: xen.org <ian.jackson@xxxxxxxxxxxxx>
Date: Sat, 12 Feb 2011 05:41:19 +0000
Cc: ian.jackson@xxxxxxxxxxxxx
Delivery-date: Fri, 11 Feb 2011 21:42:27 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
flight 5743 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/5743/

Regressions :-(

Tests which did not succeed and are blocking:
 test-amd64-amd64-xl-win       8 guest-saverestore          fail REGR. vs. 5740
 test-amd64-i386-xl-win-vcpus1  8 guest-saverestore         fail REGR. vs. 5740
 test-amd64-xcpkern-i386-xl-win  8 guest-saverestore        fail REGR. vs. 5740
 test-i386-i386-xl-win         8 guest-saverestore          fail REGR. vs. 5740

Tests which are failing intermittently (not blocking):
 test-amd64-xcpkern-i386-xl-credit2 17 guest-destroy          fail pass in 5741

Tests which did not succeed, but are not blocking,
including regressions (tests previously passed) regarded as allowable:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-rhel6hvm-amd  7 redhat-install               fail    like 5740
 test-amd64-i386-rhel6hvm-intel  8 guest-saverestore            fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-xcpkern-i386-rhel6hvm-amd  8 guest-saverestore      fail never pass
 test-amd64-xcpkern-i386-rhel6hvm-intel  8 guest-saverestore    fail never pass
 test-amd64-xcpkern-i386-win  16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-xcpkern-i386-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  67f2fed57034
baseline version:
 xen                  c64dcc4d2eca

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@xxxxxxxxxx>
  Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
  Patrick Scharrenberg <pittipatti@xxxxxx>
  Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
  Tim Deegan <Tim.Deegan@xxxxxxxxxx>
------------------------------------------------------------

jobs:
 build-i386-xcpkern                                           pass     
 build-amd64                                                  pass     
 build-i386                                                   pass     
 build-amd64-oldkern                                          pass     
 build-i386-oldkern                                           pass     
 build-amd64-pvops                                            pass     
 build-i386-pvops                                             pass     
 test-amd64-amd64-xl                                          pass     
 test-amd64-i386-xl                                           pass     
 test-i386-i386-xl                                            pass     
 test-amd64-xcpkern-i386-xl                                   pass     
 test-i386-xcpkern-i386-xl                                    pass     
 test-amd64-i386-rhel6hvm-amd                                 fail     
 test-amd64-xcpkern-i386-rhel6hvm-amd                         fail     
 test-amd64-i386-xl-credit2                                   pass     
 test-amd64-xcpkern-i386-xl-credit2                           fail     
 test-amd64-i386-rhel6hvm-intel                               fail     
 test-amd64-xcpkern-i386-rhel6hvm-intel                       fail     
 test-amd64-i386-xl-multivcpu                                 pass     
 test-amd64-xcpkern-i386-xl-multivcpu                         pass     
 test-amd64-amd64-pair                                        pass     
 test-amd64-i386-pair                                         pass     
 test-i386-i386-pair                                          pass     
 test-amd64-xcpkern-i386-pair                                 pass     
 test-i386-xcpkern-i386-pair                                  pass     
 test-amd64-amd64-pv                                          pass     
 test-amd64-i386-pv                                           pass     
 test-i386-i386-pv                                            pass     
 test-amd64-xcpkern-i386-pv                                   pass     
 test-i386-xcpkern-i386-pv                                    pass     
 test-amd64-i386-win-vcpus1                                   fail     
 test-amd64-i386-xl-win-vcpus1                                fail     
 test-amd64-amd64-win                                         fail     
 test-amd64-i386-win                                          fail     
 test-i386-i386-win                                           fail     
 test-amd64-xcpkern-i386-win                                  fail     
 test-i386-xcpkern-i386-win                                   fail     
 test-amd64-amd64-xl-win                                      fail     
 test-i386-i386-xl-win                                        fail     
 test-amd64-xcpkern-i386-xl-win                               fail     


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   22911:67f2fed57034
tag:         tip
user:        Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
date:        Fri Feb 11 18:22:37 2011 +0000
    
    QEMU_TAG update
    
    
changeset:   22910:d4bc41a8cecb
user:        Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
date:        Fri Feb 11 18:21:35 2011 +0000
    
    tools/hotplug/Linux: Use correct device name for vifs in setup scripts
    
    In vif-common.sh, set the shell variable "dev" to the new interface
    name when interfaces are renamed, and consistently use this variable
    in all the vif scripts.
    
    This fixes hotplug of renamed interfaces.
    
    From: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
    From: Patrick Scharrenberg <pittipatti@xxxxxx>
    Signed-off-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    Signed-off-by: Patrick Scharrenberg <pittipatti@xxxxxx>
    Committed-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
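    
    A rough shell sketch of the pattern this change introduces (not the
    actual vif-common.sh code; everything except the "dev" variable name
    is made up for illustration):
    
        # Resolve the interface name once, allowing for a rename, and use
        # $dev for everything afterwards instead of the original $vif.
        dev=$vif                      # name the hotplug event handed us
        if [ -n "$new_name" ]; then   # $new_name: hypothetical variable
            dev=$new_name             # holding the renamed interface
        fi
        ip link set "$dev" up         # later operations act on $dev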
    
    
changeset:   22909:6868f7f3ab3f
user:        Ian Campbell <ian.campbell@xxxxxxxxxx>
date:        Fri Feb 11 17:57:32 2011 +0000
    
    libxl/xl: improve behaviour when guest fails to suspend itself.
    
    The PV suspend protocol requires guest co-operation, whereby the guest
    must respond to a suspend request written to the xenstore control node
    by clearing the node and then making a suspend hypercall.
    
    Currently, when a guest fails to do this, libxl times out and returns
    a generic failure code to the caller.
    
    In response to this failure xl attempts to resume the guest. However,
    if the guest has not responded to the suspend request then there is no
    guarantee that the guest has made the suspend hypercall (in fact it is
    quite unlikely). Since the resume process attempts to modify the
    return value of the hypercall (to indicate a cancelled suspend), this
    results in the guest's eax/rax register being corrupted!
    
    To fix this, change libxl to do the following:
       * Wait for the guest to acknowledge the suspend request.
         - on timeout cancel the suspend request.
           - if cancellation is successful then return a new error code to
             indicate that the guest is not responding.
           - if the cancel does not succeed then we raced with the guest
             which actually did acknowledge at the last minute, so
             continue.
       * Wait for the guest to suspend.
         - on timeout return the standard error code as before
       * Guest successfully suspended, return success.
    
    Lastly in xl do not attempt to resume a guest if it has not responded
    to the suspend request.
    
    Tested by live migration of PVops kernels which either ignore the
    suspend request, have already crashed, or suspend/resume correctly.
    In the first two cases the source domain is left alone (and continues
    to function in the first case), and in the third the migration is
    successful.
    
    Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    Committed-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
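    
    A rough shell sketch of the xenstore side of this handshake, as seen
    from dom0 (libxl does this in C; the domid, timeout and cancellation
    below are simplified assumptions, and libxl actually cancels inside a
    xenstore transaction to avoid racing with the guest):
    
        domid=5                                  # example guest domain id
        node=/local/domain/$domid/control/shutdown
        
        xenstore-write "$node" suspend           # ask the guest to suspend
        
        # Wait for the guest to acknowledge by clearing the node.
        i=0
        while [ "$(xenstore-read "$node" 2>/dev/null)" = suspend ]; do
            i=$((i + 1))
            if [ "$i" -gt 60 ]; then             # ~60s timeout (assumption)
                xenstore-write "$node" ""        # simplified cancellation
                echo "guest did not acknowledge suspend request" >&2
                exit 1
            fi
            sleep 1
        done
        # The guest has cleared the node and should now be making the
        # suspend hypercall; the caller then waits for the domain to
        # actually report itself suspended.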
    
    
changeset:   22908:c4b843d0b5f4
user:        Ian Campbell <ian.campbell@xxxxxxxxxx>
date:        Fri Feb 11 17:56:24 2011 +0000
    
    libxl: allow guest to write "control/shutdown" xenstore node.
    
    The PV shutdown/reboot/suspend protocol requires that the guest
    acknowledge a request by clearing the node; it is therefore necessary
    to allow the guest to write to the node.
    
    Currently libxl is quite relaxed about this protocol and doesn't
    really seem to mind that the guest is unable to write the node to
    perform the acknowledgement. However, in a follow-up patch, libxl
    needs to be able to detect that a guest has acknowledged a suspend
    request.
    
    A side effect of this change is that an empty "control/shutdown" node
    is created upon domain creation instead of only being created when a
    shutdown/reboot/suspend is requested. This should not (and does not
    in my tests) have any negative impact on the guest.
    
    Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Committed-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
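    
    For illustration, the effect of this change corresponds roughly to the
    following commands run at domain creation time (domid 5 is an example;
    the permission syntax is from memory, see xenstore-chmod(1)):
    
        domid=5                              # example guest domain id
        node=/local/domain/$domid/control/shutdown
        
        xenstore-write "$node" ""            # empty node, created up front
        xenstore-chmod "$node" n0 "b$domid"  # owned by dom0, no default
                                             # access; the guest may read
                                             # and write, so it can clear
                                             # the node to acknowledge a
                                             # request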
    
    
changeset:   22907:9280f1674705
user:        Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
date:        Fri Feb 11 17:53:08 2011 +0000
    
    libxl: do not call libxl__file_reference_unmap twice
    
    Fix double free due to libxl__file_reference_unmap(&info->kernel) called
    multiple times: first at the end of libxl__domain_build and then in
    libxl_domain_build_info_destroy.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
    Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    Committed-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    
    
changeset:   22906:4376c4f0196f
user:        Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
date:        Fri Feb 11 17:49:13 2011 +0000
    
    libxc: increase lzma max memory constant to 128Mby
    
    According to lzma's configure.ac (!) the minimum memory limit to cope
    with arbitrary input is 128Mby (!)
    
    This is obviously an unreasonable amount of memory for this kind of
    task, but we need to increase the constant limit for it not to
    randomly fail.  So do so.
    
    Signed-off-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    
    
changeset:   22905:6c22ae0f6540
user:        Tim Deegan <Tim.Deegan@xxxxxxxxxx>
date:        Fri Feb 11 16:51:44 2011 +0000
    
    x86/mm: fix typo in 22897:21df67ee7040
    that caused the wrong page to be freed.
    
    Signed-off-by: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
    
    
changeset:   22904:c64dcc4d2eca
user:        Keir Fraser <keir@xxxxxxx>
date:        Thu Feb 10 17:24:41 2011 +0000
    
    Update Xen version to 4.1.0-rc5-pre
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
