flight 6752 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/6752/
Regressions :-(
Tests which did not succeed and are blocking:
test-amd64-xcpkern-i386-xl-credit2 11 guest-localmigrate fail REGR. vs. 6751
test-i386-xcpkern-i386-pair 8 xen-boot/dst_host fail REGR. vs. 6751
Tests which did not succeed, but are not blocking,
including regressions (tests previously passed) regarded as allowable:
test-amd64-amd64-win 16 leak-check/check fail never pass
test-amd64-amd64-xl-win 13 guest-stop fail never pass
test-amd64-i386-rhel6hvm-amd 8 guest-saverestore fail never pass
test-amd64-i386-rhel6hvm-intel 8 guest-saverestore fail never pass
test-amd64-i386-win-vcpus1 16 leak-check/check fail never pass
test-amd64-i386-win 16 leak-check/check fail never pass
test-amd64-i386-xl-win-vcpus1 13 guest-stop fail never pass
test-amd64-xcpkern-i386-rhel6hvm-amd 8 guest-saverestore fail never pass
test-amd64-xcpkern-i386-rhel6hvm-intel 8 guest-saverestore fail never pass
test-amd64-xcpkern-i386-win 16 leak-check/check fail never pass
test-amd64-xcpkern-i386-xl-win 13 guest-stop fail never pass
test-i386-i386-win 16 leak-check/check fail never pass
test-i386-i386-xl-win 13 guest-stop fail never pass
test-i386-xcpkern-i386-win 16 leak-check/check fail never pass
version targeted for testing:
xen fbfee2a01a91
baseline version:
xen 967e1925775c
------------------------------------------------------------
People who touched revisions under test:
Jan Beulich <jbeulich@xxxxxxxxxx>
Machon Gregory <mbgrego@xxxxxxxxxxxxxx>
------------------------------------------------------------
jobs:
build-i386-xcpkern pass
build-amd64 pass
build-i386 pass
build-amd64-oldkern pass
build-i386-oldkern pass
build-amd64-pvops pass
build-i386-pvops pass
test-amd64-amd64-xl pass
test-amd64-i386-xl pass
test-i386-i386-xl pass
test-amd64-xcpkern-i386-xl pass
test-i386-xcpkern-i386-xl pass
test-amd64-i386-rhel6hvm-amd fail
test-amd64-xcpkern-i386-rhel6hvm-amd fail
test-amd64-i386-xl-credit2 pass
test-amd64-xcpkern-i386-xl-credit2 fail
test-amd64-i386-rhel6hvm-intel fail
test-amd64-xcpkern-i386-rhel6hvm-intel fail
test-amd64-i386-xl-multivcpu pass
test-amd64-xcpkern-i386-xl-multivcpu pass
test-amd64-amd64-pair pass
test-amd64-i386-pair pass
test-i386-i386-pair pass
test-amd64-xcpkern-i386-pair pass
test-i386-xcpkern-i386-pair fail
test-amd64-amd64-pv pass
test-amd64-i386-pv pass
test-i386-i386-pv pass
test-amd64-xcpkern-i386-pv pass
test-i386-xcpkern-i386-pv pass
test-amd64-i386-win-vcpus1 fail
test-amd64-i386-xl-win-vcpus1 fail
test-amd64-amd64-win fail
test-amd64-i386-win fail
test-i386-i386-win fail
test-amd64-xcpkern-i386-win fail
test-i386-xcpkern-i386-win fail
test-amd64-amd64-xl-win fail
test-i386-i386-xl-win fail
test-amd64-xcpkern-i386-xl-win fail
------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images
Logs, config files, etc. are available at
http://www.chiark.greenend.org.uk/~xensrcts/logs
Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary
Not pushing.
------------------------------------------------------------
changeset: 23146:fbfee2a01a91
tag: tip
user: Jan Beulich <jbeulich@xxxxxxxxxx>
date: Tue Apr 05 13:05:05 2011 +0100
passthrough: use domain pirq as index of struct hvm_irq_dpci's hvm_timer
array
Since d->nr_pirqs is guaranteed not to be larger than nr_irqs,
indexing arrays by the former ought to be preferred. In the case
given, the indices so far had to be computed specially in a number of
cases, whereas the indices used now are all readily available.
This opens up the possibility of folding the ->mirq[] and ->hvm_timer[]
members of struct hvm_irq_dpci into a single array, possibly with some
members overlaid in a union to reduce size (see
http://lists.xensource.com/archives/html/xen-devel/2011-03/msg02006.html).
Such a space saving wouldn't, however, suffice to generally bring the
respective allocation sizes here below PAGE_SIZE, not even when
converting the array of structures into an array of pointers to
structures. Whether a multi-level lookup mechanism would make sense
here is questionable, as it can be expected that for domains other
than Dom0 (which isn't HVM, and hence shouldn't use these data
structures - see
http://lists.xensource.com/archives/html/xen-devel/2011-03/msg02004.html)
only very few entries would commonly be in use. An obvious
alternative would be to use rb or radix trees (both currently only
used in tmem).
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
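For illustration, below is a minimal userspace sketch of the folding idea
mentioned in the message above: a single entry array indexed directly by
domain pirq (sized by nr_pirqs rather than nr_irqs), combining the data
previously held in ->mirq[] and ->hvm_timer[]. Every name and field here
is hypothetical and does not reflect Xen's actual struct hvm_irq_dpci.

    /* Hypothetical sketch only -- not Xen's actual struct hvm_irq_dpci. */
    #include <stdlib.h>

    struct dpci_pirq_entry {
        unsigned int flags;
        /* Former ->mirq[pirq] contents (mapping/binding state). */
        unsigned int machine_gsi;
        /* Former ->hvm_timer[pirq] contents (injection timer state). */
        unsigned long timer_expires;
        /*
         * Some members of the two halves could further be overlaid in a
         * union to reduce size, as suggested in the message above.
         */
    };

    struct hvm_irq_dpci_sketch {
        struct dpci_pirq_entry *entries; /* indexed by domain pirq, [0, nr_pirqs) */
        unsigned int nr_pirqs;
    };

    /* Allocate one entry per domain pirq (d->nr_pirqs <= nr_irqs). */
    static struct hvm_irq_dpci_sketch *dpci_alloc(unsigned int nr_pirqs)
    {
        struct hvm_irq_dpci_sketch *dpci = calloc(1, sizeof(*dpci));

        if ( !dpci )
            return NULL;
        dpci->entries = calloc(nr_pirqs, sizeof(*dpci->entries));
        if ( !dpci->entries )
        {
            free(dpci);
            return NULL;
        }
        dpci->nr_pirqs = nr_pirqs;
        return dpci;
    }

    int main(void)
    {
        struct hvm_irq_dpci_sketch *dpci = dpci_alloc(96 /* e.g. d->nr_pirqs */);

        if ( !dpci )
            return 1;
        dpci->entries[3].timer_expires = 42; /* index directly by pirq */
        free(dpci->entries);
        free(dpci);
        return 0;
    }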
changeset: 23145:4fe0442aa5b7
user: Jan Beulich <jbeulich@xxxxxxxxxx>
date: Tue Apr 05 13:03:29 2011 +0100
x86: introduce alloc_vcpu_guest_context()
This is necessary because on x86-64 struct vcpu_guest_context is
larger than PAGE_SIZE, and hence not suitable for a general-purpose
runtime allocation. On x86-32, FIX_PAE_HIGHMEM_* fixmap entries are
being re-used, while on x86-64 new per-CPU fixmap entries get
introduced. The implication of using per-CPU fixmaps is that these
allocations have to happen from non-preemptible hypercall context
(which they all do).
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
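As a rough illustration of the allocation scheme described above, the
following userspace sketch (with entirely invented names; the real code
uses Xen's fixmap machinery) backs an over-page-sized context with
individually allocated pages instead of one multi-page contiguous
allocation. In the hypervisor those pages would then be mapped at
consecutive per-CPU fixmap slots to appear contiguous.

    /* Userspace model only; all names here are invented, not Xen's. */
    #include <stdlib.h>
    #include <string.h>

    #define PAGE_SIZE 4096u

    struct guest_context_sketch {
        unsigned char regs[5000];  /* stand-in: deliberately > one page */
    };

    #define GC_PAGES \
        ((sizeof(struct guest_context_sketch) + PAGE_SIZE - 1) / PAGE_SIZE)

    struct percpu_gc_mapping {
        void *page[GC_PAGES];      /* stands in for per-CPU fixmap slots */
    };

    /* Back the context page-by-page; no multi-page contiguous allocation. */
    void *alloc_vcpu_guest_context_sketch(struct percpu_gc_mapping *map)
    {
        size_t i;

        for ( i = 0; i < GC_PAGES; i++ )
        {
            map->page[i] = aligned_alloc(PAGE_SIZE, PAGE_SIZE);
            if ( !map->page[i] )
            {
                while ( i-- )
                    free(map->page[i]);
                return NULL;
            }
            memset(map->page[i], 0, PAGE_SIZE);
        }
        /*
         * In the hypervisor the pages would now be mapped at consecutive
         * per-CPU fixmap addresses to appear contiguous; userspace cannot
         * model that, so the first page merely stands in for the result.
         */
        return map->page[0];
    }

    int main(void)
    {
        struct percpu_gc_mapping map;

        return alloc_vcpu_guest_context_sketch(&map) ? 0 : 1;
    }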
changeset: 23144:37c4f7d492a4
user: Jan Beulich <jbeulich@xxxxxxxxxx>
date: Tue Apr 05 13:02:57 2011 +0100
x86: split struct domain
This is accomplished by converting a couple of embedded arrays (in one
case a structure containing an array) into separately allocated
pointers, and (just as for struct arch_vcpu in a prior patch)
overlaying some PV-only fields with HVM-only ones.
One particularly noteworthy change in the opposite direction is that
of PITState - this field so far lived in the HVM-only portion, but is
being used by PV guests too, and hence needed to be moved out of
struct hvm_domain.
The change to XENMEM_set_memory_map (and hence libxl__build_pre() and
the movement of the E820-related pieces to struct pv_domain) is
subject to a positive response to a query sent to xen-devel regarding
the need for this to happen for HVM guests (see
http://lists.xensource.com/archives/html/xen-devel/2011-03/msg01848.html).
The protection of arch.hvm_domain.irq.dpci accesses by is_hvm_domain()
is subject to confirmation that the field is used for HVM guests only
(see
http://lists.xensource.com/archives/html/xen-devel/2011-03/msg02004.html).
In the absence of any reply to these queries, and given the early
state of 4.2 development, I think it should be acceptable to take the
risk of having to later undo/redo some of this.
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
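The core transformation described for struct domain -- an embedded array
becoming a separately allocated pointer so that the containing structure
shrinks -- can be sketched roughly as follows. The field names and sizes
are hypothetical stand-ins, not the real Xen declarations.

    /* Hypothetical sketch of the embedded-array -> pointer conversion. */
    #include <stdio.h>
    #include <stdlib.h>

    #define NR_ENTRIES 128

    struct domain_before {
        int id;
        unsigned long e820_map[NR_ENTRIES]; /* embedded: always costs full size */
    };

    struct domain_after {
        int id;
        unsigned long *e820_map;            /* allocated separately, when needed */
    };

    int main(void)
    {
        struct domain_after d = { .id = 1, .e820_map = NULL };

        /* Allocation is deferred to the point where the data is required. */
        d.e820_map = calloc(NR_ENTRIES, sizeof(*d.e820_map));
        if ( !d.e820_map )
            return 1;

        printf("sizeof before: %zu, sizeof after: %zu\n",
               sizeof(struct domain_before), sizeof(struct domain_after));
        free(d.e820_map);
        return 0;
    }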
changeset: 23143:2f7f24fe5924
user: Jan Beulich <jbeulich@xxxxxxxxxx>
date: Tue Apr 05 13:02:00 2011 +0100
x86: move pv-only members of struct vcpu to struct pv_vcpu
... thus further shrinking overall size of struct arch_vcpu.
This has a minor effect on XEN_DOMCTL_{get,set}_ext_vcpucontext - for
HVM guests, some meaningless fields will no longer get stored or
retrieved: reads will now return zero, and writes are required to be
(mostly) zero (the same as was already done on x86-32).
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
changeset: 23142:f5e8d152a565
user: Jan Beulich <jbeulich@xxxxxxxxxx>
date: Tue Apr 05 13:01:25 2011 +0100
x86: split struct vcpu
This is accomplished by splitting the guest_context member, which by
itself is larger than a page on x86-64. Quite a number of fields of
this structure are completely meaningless for HVM guests, and thus a
new struct pv_vcpu gets introduced, which is being overlaid with
struct hvm_vcpu in struct arch_vcpu. The one member that is mostly
responsible for the large size is trap_ctxt, which now gets allocated
separately (unless it fits on the same page as struct arch_vcpu, as is
currently the case for x86-32), and only for non-HVM, non-idle
domains.
This change pointed out a latent problem in arch_set_info_guest(),
which is permitted to be called on already initialized vCPUs, but so
far copied the new state into struct arch_vcpu without (in this case)
actually going through all the necessary accounting/validation steps.
The logic gets changed so that the pieces that bypass accounting will
at least be verified to be no different from the currently active
bits, and the whole change will fail if they differ. The logic does
*not* get adjusted here to do full error recovery; that is, partially
modified state still does not get unrolled in case of failure.
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
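A compact sketch of the two mechanisms described above -- overlaying
mutually exclusive PV and HVM state in a union, and allocating the large
trap context separately, and only for PV vCPUs -- using invented names
rather than Xen's real declarations:

    /* Hypothetical sketch; names and sizes are illustrative, not Xen's. */
    #include <stdbool.h>
    #include <stdlib.h>

    struct trap_info_sketch {
        unsigned long address;
        unsigned int vector, flags;
    };

    struct pv_vcpu_sketch {
        struct trap_info_sketch *trap_ctxt; /* large, PV-only: behind a pointer */
        unsigned long kernel_sp;
    };

    struct hvm_vcpu_sketch {
        unsigned long vmcs_state;
    };

    struct arch_vcpu_sketch {
        bool is_hvm;
        union {                             /* PV and HVM state never coexist */
            struct pv_vcpu_sketch pv;
            struct hvm_vcpu_sketch hvm;
        } u;
    };

    /* trap_ctxt gets allocated only for PV (non-HVM, non-idle) vCPUs. */
    int vcpu_init_sketch(struct arch_vcpu_sketch *v, bool is_hvm)
    {
        v->is_hvm = is_hvm;
        if ( is_hvm )
            return 0;
        v->u.pv.trap_ctxt = calloc(256, sizeof(*v->u.pv.trap_ctxt));
        return v->u.pv.trap_ctxt ? 0 : -1;
    }

    int main(void)
    {
        struct arch_vcpu_sketch pv_vcpu = { 0 }, hvm_vcpu = { 0 };

        if ( vcpu_init_sketch(&pv_vcpu, false) )
            return 1;
        return vcpu_init_sketch(&hvm_vcpu, true);
    }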
changeset: 23141:c2d7a9fd7364
user: Jan Beulich <jbeulich@xxxxxxxxxx>
date: Tue Apr 05 13:00:54 2011 +0100
Remove direct cpumask_t members from struct vcpu and struct domain
The CPU masks embedded in these structures prevent their size from
being made independent of NR_CPUS.
The basic concept (in xen/include/cpumask.h) is taken from recent Linux.
For scalability purposes, many other uses of cpumask_t should be
replaced by cpumask_var_t, particularly local variables of functions.
This implies that no functions should have by-value cpumask_t
parameters, and that the whole old cpumask interface (cpus_...())
should go away in favor of the new (cpumask_...()) one.
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
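The interface change being advocated can be illustrated with a small
self-contained mock-up of the new usage style. The cpumask_* names follow
the Linux-derived interface the message refers to, but the definitions
below are a userspace stand-in, not the hypervisor's implementation.

    /* Userspace mock-up of the new cpumask usage style; not Xen's code. */
    #include <limits.h>
    #include <stdbool.h>
    #include <stdlib.h>

    #define NR_CPUS 256
    #define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))
    #define MASK_LONGS ((NR_CPUS + BITS_PER_LONG - 1) / BITS_PER_LONG)

    typedef struct { unsigned long bits[MASK_LONGS]; } cpumask_t;
    typedef cpumask_t *cpumask_var_t;  /* separately allocated, not embedded */

    static bool alloc_cpumask_var(cpumask_var_t *mask)
    {
        *mask = calloc(1, sizeof(cpumask_t));
        return *mask != NULL;
    }

    static void free_cpumask_var(cpumask_var_t mask)
    {
        free(mask);
    }

    static void cpumask_set_cpu(unsigned int cpu, cpumask_var_t mask)
    {
        mask->bits[cpu / BITS_PER_LONG] |= 1UL << (cpu % BITS_PER_LONG);
    }

    /* New style: masks are passed by pointer, never cpumask_t by value. */
    static void pin_vcpu_sketch(const cpumask_t *affinity)
    {
        (void)affinity; /* a real caller would copy or inspect the mask here */
    }

    int main(void)
    {
        cpumask_var_t affinity;

        if ( !alloc_cpumask_var(&affinity) ) /* replaces an embedded cpumask_t */
            return 1;
        cpumask_set_cpu(3, affinity);        /* cpumask_...() replaces cpus_...() */
        pin_vcpu_sketch(affinity);
        free_cpumask_var(affinity);
        return 0;
    }

The point of the cpumask_var_t style is that the mask storage is sized and
allocated at run time, so structures like struct vcpu and struct domain no
longer grow with NR_CPUS.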
changeset: 23140:967e1925775c
user: Machon Gregory <mbgrego@xxxxxxxxxxxxxx>
date: Mon Apr 04 15:54:45 2011 +0100
xsm: Error code consistency
Signed-off-by: Machon Gregory <mbgrego@xxxxxxxxxxxxxx>
(qemu changes not included)
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel