xen-devel

Re: [Xen-devel] Xen Roadmap proposal

I think that to be successful in the market we still need to focus on
stability, and we need to make usability our number one priority.

Whilst there is a regression test suite, it's not clear how much of the
code it actually covers, and there are areas such as concurrency and
error injection which are clearly hardly tested at all.

It would be useful to establish some metrics for the test suite's code
coverage, and also a process for enumerating the tests that need to be
created for any new feature dropped into the tree.  We need to be able
to continually track how much testing work is outstanding and make sure
that we don't fall behind in developing regression tests.

One of the difficulties with creating the test suite is that it requires
expert knowledge of the testing requirements of each area.  For the
development process to be scalable, it would be ideal if developers
contributed a list of the tests required to fully exercise any new
features they introduce (i.e. a test plan) together with a set of
implemented tests.  This has already been happening to some extent,
which is good, but I think we need to see a bit more of it.

If I were developing a hypervisor from scratch, I would make it
self-hosting simply because it would make testing easier: it would be
possible to bring up a cluster of hypervisor instances on a single
physical machine and, for example, simulate the effect of power failure
during inter-machine migration.  With that kind of environment, random
simulated error injection, and random management API calls drawn from a
probability distribution chosen to maximise code coverage, it is
possible to create a very effective regression stress test which sets a
very high threshold for code quality.
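
To make that concrete, here is a rough sketch (in Python) of the sort
of driver loop I have in mind.  It is purely illustrative: the
operation names, the weights and the fault-injection hook are
assumptions, not existing interfaces; each stub would wrap a real
management call in practice.

    # Illustrative sketch of a randomised stress-test driver.  The
    # operations are placeholders for real management API calls and the
    # weights would be tuned empirically to maximise code coverage.

    import random

    def create_domain():        pass  # placeholder management call
    def destroy_domain():       pass
    def migrate_domain():       pass
    def inject_power_failure(): pass  # e.g. hard-kill a nested guest

    # (operation, relative weight)
    operations = [
        (create_domain,        5),
        (destroy_domain,       5),
        (migrate_domain,       3),
        (inject_power_failure, 1),
    ]

    def pick_operation(ops):
        # Draw one operation according to the weighted distribution.
        total = sum(weight for _, weight in ops)
        r = random.uniform(0, total)
        for op, weight in ops:
            r -= weight
            if r <= 0:
                return op
        return ops[-1][0]

    def stress(iterations, seed=0):
        random.seed(seed)  # fix the seed so failures are reproducible
        for _ in range(iterations):
            pick_operation(operations)()

    stress(1000)

The important property is that every failure is reproducible from the
seed, so a crash found overnight can be replayed the next morning.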

We have to start from where we are, however, and now that Xen is
reasonably stable in simple, good-path operation, it might be time to
start some discussion about how to adapt xm-test or build a stress-test
environment for testing concurrency, inter-machine operations and error
injection, including the effects of power failure.
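
For the concurrency side specifically, even something as blunt as the
following sketch would exercise paths that simple good-path testing
never touches.  It is only an illustration: the config file names and
guest names are made up, and error handling is omitted.

    # Sketch: hammer the toolstack with concurrent domain lifecycle
    # operations to flush out races.  Config and guest names are
    # hypothetical and assume each config defines a matching guest.

    import subprocess
    import threading

    CONFIGS = ["/etc/xen/stress-guest-%d.cfg" % i for i in range(4)]
    ITERATIONS = 50

    def lifecycle_loop(cfg, name):
        # Repeatedly create and destroy one guest as fast as possible.
        for _ in range(ITERATIONS):
            subprocess.call(["xm", "create", cfg])
            subprocess.call(["xm", "destroy", name])

    threads = [threading.Thread(target=lifecycle_loop,
                                args=(cfg, "stress-guest-%d" % i))
               for i, cfg in enumerate(CONFIGS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()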

On the usability front, I think what is most lacking is a community
vision for an out-of-the-box Xen solution.  Whilst it is good that, at
the cutting edge, the open-source nature of Xen means it is developing
in all directions at once, for market acceptance there needs to be a
core idea of a Xen product with well-defined, stable, supported
configurations and a supported feature set that is easy to deploy and
'just works'.

It may be the intention to leave this productization step to
distributions and other commercial interests but it's such a lot of work
and so important for success in the market that any common core effort
on this front can only be beneficial to all parties.

So, I'd place emphasis on stability backed by trusted metrics and
usability as an explicit common goal.

My 2p.

Harry.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
