[Xen-devel] FreeBSD netfront.c / problem
Hello!
I'm working on a guest OS port which uses NetBSD drivers, and I'm
currently implementing the netfront support. I'm basing the driver on
the FreeBSD 7.0 one, and I've run into a problem in
network_alloc_rx_buffers.
Parts of the code, with line numbers, are given below, and I'll try
to explain where the problem occurs.
The basic problem is that the hypercall on line 112 below fails
because Xen is handed the page number -1.
1 static void
2 network_alloc_rx_buffers(struct netfront_info *sc)
3 {
[...]
48 for (i = 0, m_new = sc->xn_rx_batch; m_new;
49 i++, sc->xn_rx_batchlen--, m_new = next) {
50
[...]
70 rx_pfn_array[i] = vtomach(mtod(m_new,vm_offset_t)) >> PAGE_SHIFT;
The above call gives -1 (i.e., an invalid virtual -> machine
translation) every second time, since some mbufs are placed on the
same page ...
71
72 /* Remove this page from pseudo phys map before passing back to Xen. */
73 xen_phys_machine[((unsigned long)m_new->m_ext.ext_args >> PAGE_SHIFT)]
74     = INVALID_P2M_ENTRY;
... because this invalidates the mapping for the whole page. I
therefore get every second entry in rx_pfn_array set to -1.
75
76 rx_mcl[i].op = __HYPERVISOR_update_va_mapping;
77 rx_mcl[i].args[0] = (unsigned long)mtod(m_new,vm_offset_t);
78 rx_mcl[i].args[1] = 0;
79 rx_mcl[i].args[2] = 0;
80
81 }
[...]
111 /* Zap PTEs and give away pages in one big multicall. */
112 (void)HYPERVISOR_multicall(rx_mcl, i+1);
[...]
126 }
... So the multicall fails.
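To make the failure concrete, here is a minimal standalone sketch
(with made-up addresses, not taken from the driver) of why two
buffers on the same 4 KB page share a frame number, so that the p2m
invalidation done for the first one also breaks the vtomach()
translation for the second:

/* Minimal sketch with hypothetical addresses: two 2 KB buffers on
 * one 4 KB page shift down to the same page frame number. */
#include <stdio.h>

#define PAGE_SHIFT 12 /* 4 KB pages, as on x86 */

int
main(void)
{
	unsigned long buf0 = 0x100000;    /* data area of first mbuf */
	unsigned long buf1 = buf0 + 2048; /* second mbuf, same page  */

	/* Both print 0x100: one p2m entry covers both buffers. */
	printf("pfn0 = %#lx, pfn1 = %#lx\n",
	    buf0 >> PAGE_SHIFT, buf1 >> PAGE_SHIFT);
	return 0;
}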
Questions: I don't know FreeBSD very well, but does FreeBSD always
place mbufs on separate pages? How could the above code work
otherwise?
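For what it's worth, one way I could imagine guaranteeing one buffer
per page (a sketch under my own assumptions, not the actual FreeBSD
7.0 netfront code) would be to back each RX mbuf with a page-sized
MJUMPAGESIZE cluster, whose data area is page-aligned:

/* Hedged sketch, not verified against the real driver: attach a
 * PAGE_SIZE cluster so the mbuf's data never shares a frame with
 * another buffer. */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>

static struct mbuf *
alloc_page_backed_mbuf(void)
{
	struct mbuf *m;

	MGETHDR(m, M_DONTWAIT, MT_DATA);
	if (m == NULL)
		return (NULL);

	/* m_cljget() returns NULL if the cluster allocation failed. */
	if (m_cljget(m, M_DONTWAIT, MJUMPAGESIZE) == NULL) {
		m_freem(m);
		return (NULL);
	}
	return (m);
}

A regular 2 KB cluster zone can hand out two clusters from the same
page, which would be consistent with every second entry going bad.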
I hope I didn't just make a stupid mistake when porting the code.
// Simon