======= 2008-06-24 07:21:28 you wrote: =======
>> what about another question:
>>
>> In netback's netif_be_start_xmit():
>>
>>     if (!netif->copying_receiver ||
>>         ((skb_headlen(skb) + offset_in_page(skb->data)) >= PAGE_SIZE)) {
>>         struct sk_buff *nskb = netbk_copy_skb(skb);
>>
>> which calls skb_copy_bits().
>>
>> This means that in rx-flip mode the skb data and frags are first copied
>> into another buffer, and that buffer is then flipped with domU's pages.
>> So a copy happens in dom0 before the page flip with domU. Does rx-flip
>> make any sense, then?
>
>The reason why you have to copy is to make sure that you pull the skb
>into a fresh page (one which doesn't have any other pieces of data in
>it). It's only then that you can flip it to DomU.
>It used to be the case that the backend was more efficient in rx-flip
>mode: some/most skb copies were avoided by keeping a close eye on the
>skb allocation. Since rx-flip is now only/mostly kept for backward
>compatibility (rx-copy is the default), the code was simplified.
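For reference, a minimal sketch of what that copy step amounts to; the
real netbk_copy_skb also handles frags and alignment, so treat the
details here as illustrative only:

    struct sk_buff *nskb;

    /* Allocate a new skb whose head lives alone in fresh page(s). */
    nskb = alloc_skb(SKB_MAX_HEAD(0), GFP_ATOMIC);
    if (nskb) {
        /* Copy the old skb's payload into the fresh memory... */
        skb_copy_bits(skb, 0, skb_put(nskb, skb_headlen(skb)),
                      skb_headlen(skb));
        /* ...so the machine frame under nskb holds nothing but this
         * packet and can safely be flipped to DomU wholesale. */
    }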
You mean that in previous Xen releases rx-flip was more efficient than it
is today, that rx-flip is kept in the Xen source tree only for backward
compatibility, and that there is therefore no need to optimize it further.
Is my understanding right?
>Note: with grant-table copy you can ask the hypervisor to copy
>sub-page regions (in our case you can match the size of the skb).
>
>Cheers
>Gr(z)egor(z)
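To illustrate the sub-page copy note above: with a GNTTABOP_copy hypercall
the backend can ask Xen to copy an exact byte range into a frontend-granted
page. A rough sketch, assuming the frontend has supplied a grant reference
rx_gref (field names as in xen/include/public/grant_table.h):

    struct gnttab_copy op;

    op.source.u.gmfn = virt_to_mfn(skb->data);  /* local source frame */
    op.source.domid  = DOMID_SELF;
    op.source.offset = offset_in_page(skb->data);
    op.dest.u.ref    = rx_gref;                 /* frontend's grant */
    op.dest.domid    = netif->domid;
    op.dest.offset   = 0;
    op.len           = skb_headlen(skb);        /* only the bytes in use */
    op.flags         = GNTCOPY_dest_gref;       /* dest is a grant ref */

    HYPERVISOR_grant_table_op(GNTTABOP_copy, &op, 1);
    /* on success, op.status == GNTST_okay */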
>>
>>
>>
>> ======= 2008-06-23 19:22:37 you wrote: =======
>>
>>>> Thanks. Maybe I understand now.
>>>> Another question.
>>>> In netback:
>>>>
>>>>     static inline void maybe_schedule_tx_action(void)
>>>>     {
>>>>         smp_mb();
>>>>         if ((NR_PENDING_REQS < (MAX_PENDING_REQS/2)) &&
>>>>             !list_empty(&net_schedule_list))
>>>>             tasklet_schedule(&net_tx_tasklet);
>>>>     }
>>>> ... ...
>>>>
>>>> while
>>>>
>>>>     #define NR_PENDING_REQS (MAX_PENDING_REQS - pending_prod + pending_cons)
>>>>
>>>> that is, NR_PENDING_REQS = MAX_PENDING_REQS - (pending_prod - pending_cons).
>>>>
>>>> My question is: when can NR_PENDING_REQS fail to satisfy the condition
>>>> (NR_PENDING_REQS < MAX_PENDING_REQS/2)? And why is the case
>>>> NR_PENDING_REQS >= MAX_PENDING_REQS/2 not handled?
>>>
>>>If there are many pending requests (more than MAX/2), new requests
>>>will have to wait on the frontend-backend rings (a form of congestion
>>>control). When some of the pending requests get dealt with (and
>>>NR_PENDING_REQS falls below MAX/2), new ones will be accepted.
>>>
>>>Also, note that net_tx_tasklet doesn't only get scheduled by
>>>maybe_schedule_tx_action. There are also:
>>>- netbk_tx_pending_timeout: triggered by a timeout timer
>>>- netif_idx_release: triggered by a net data page being released
>>>Together, the three net_tx_tasklet schedulers guarantee a continuous
>>>flow of skbs out of netback.
>>>
>>>Cheers
>>>Gr(z)egor(z)
>>>
>>>
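A worked example of the accounting behind that answer, assuming
MAX_PENDING_REQS is 256 (its usual value in netback):

    /* pending_ring holds the indices of *free* pending slots, so
     * pending_prod - pending_cons counts the free slots, and
     * NR_PENDING_REQS = MAX_PENDING_REQS - free = requests in flight.
     *
     * With MAX_PENDING_REQS == 256 (so MAX/2 == 128):
     *   200 requests in flight -> NR_PENDING_REQS == 200 >= 128,
     *     so maybe_schedule_tx_action() does nothing and new requests
     *     simply wait on the shared ring;
     *   completions free 80 slots -> NR_PENDING_REQS == 120 < 128,
     *     so the tasklet is scheduled and draining resumes. */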
>>>>
>>>>
>>>> ======= 2008-06-19 15:22:58 you wrote: =======
>>>>
>>>>>The aim is to achieve batching and scheduling of work (to some extent) on
>>>>>both the transmit and receive paths. So no vif gets starved, and larger
>>>>>batches of packets are handled more efficiently.
>>>>>
>>>>> -- Keir
>>>>>
>>>>>
>>>>>On 19/6/08 01:40, "Zang Hongyong" <zanghongyong@xxxxxxxxxx> wrote:
>>>>>
>>>>>> Thanks again!
>>>>>> Another question about netback.
>>>>>> A tasklet is used on both the tx and rx paths. Let's look at rx:
>>>>>> before the tasklet runs, packets from all vnifs are queued together,
>>>>>> and in the tasklet each packet is dequeued and handed to the proper
>>>>>> netfront in its domU.
>>>>>> 1) Why not handle each packet directly, without the shared queue and
>>>>>> tasklet machinery?
>>>>>> 2) Are the shared queue and tasklet fair to all vnifs? For example,
>>>>>> when vif1.0 receives, the netback driver puts its packet on the
>>>>>> shared queue and schedules the tasklet, but inside the tasklet
>>>>>> packets belonging to other vifs may be handled first.
>>>>>> I've noticed that on tx, netfront hands each packet directly to its
>>>>>> own ring/request machinery.
>>>>>>
>>>>>>
>>>>>>
>>>>>> ======= 2008-06-19 00:37:40 you wrote: =======
>>>>>>
>>>>>>>> Many thanks!
>>>>>>>> So, on tx, after the data page has been sent by the native NIC
>>>>>>>> driver in dom0, the page is freed; netif_page_release() is then
>>>>>>>> called, which tells netback to unmap the page granted by domU and
>>>>>>>> to send its tx response.
>>>>>>>>
>>>>>>>> Is that so?
>>>>>>>
>>>>>>> Correct.
>>>>>>>
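A rough sketch of that release path; note that the pending-index recovery
helper below is hypothetical (real trees derive the index from the page in
tree-specific ways):

    static void netif_page_release(struct page *page)
    {
        /* The NIC driver's kfree_skb() dropped the last reference;
         * reset the count so the page can be reused. */
        init_page_count(page);

        /* Queue the slot for dealloc: the tx tasklet will unmap the
         * DomU grant and post the tx response on the ring.
         * pending_index_of() is a stand-in, not a real function. */
        netif_idx_release(pending_index_of(page));
    }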
>>>>>>>> If so, what about a bad NIC driver which doesn't call free_page()
>>>>>>>> after sending the data out of the machine?
>>>>>>>
>>>>>>> Well, it could happen if there were a memory leak in the driver,
>>>>>>> but that would also be present in non-xenified Linux. We are
>>>>>>> hoping for bug-free device drivers.
>>>>>>>
>>>>>>>
>>>>>>>> And why is mmap_pages allocated with
>>>>>>>> alloc_empty_pages_and_pagevec(MAX_PENDING_REQS)?
>>>>>>>> Could mmap_pages be allocated with alloc_vm_area() and
>>>>>>>> vmalloc_to_page() instead?
>>>>>>>
>>>>>>> alloc_empty_pages_and_pagevec() balloons machine memory frames away
>>>>>>> from Dom0; you are therefore left with pseudo-physical pages that
>>>>>>> are not backed by real memory. You want that, because you'll
>>>>>>> substitute DomU's memory frame in their place. I don't think
>>>>>>> alloc_vm_area() does that: it would only allocate a virtually
>>>>>>> contiguous range of memory.
>>>>>>>
>>>>>>> Cheers
>>>>>>> Gr(z)egor(z)
>>>>>>>
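For context, the allocation under discussion, roughly as it appears in
netback's init (error handling trimmed):

    /* One empty pseudo-physical page per possible in-flight request. */
    mmap_pages = alloc_empty_pages_and_pagevec(MAX_PENDING_REQS);
    if (mmap_pages == NULL)
        return -ENOMEM;

    /* Each mmap_pages[i] has no machine frame behind it; a grant-mapped
     * DomU frame is installed in its place while request i is in flight. */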
>>>>>>>>
>>>>>>>> Please forgive my silly questions above.
>>>>>>>>
>>>>>>>>
>>>>>>>> ======= 2008-06-18 18:52:27 you wrote: =======
>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>> when netback initializes mmap_pages, it does
>>>>>>>>>>     SetPageForeign(page, netif_page_release);
>>>>>>>>>> that is, page->index = netif_page_release,
>>>>>>>>>> yet netif_page_release is a function.
>>>>>>>>>
>>>>>>>>> netif_page_release is indeed a function, and therefore
>>>>>>>>>     page->index = netif_page_release
>>>>>>>>> stores the netif_page_release function pointer in 'index'.
>>>>>>>>>
>>>>>>>>>> so what's the meaning of SetPageForeign?
>>>>>>>>>
>>>>>>>>> Setting a page foreign means that the page is owned by another domain,
>>>>>>>>> and that some care needs to be taken when freeing it.
>>>>>>>>>
>>>>>>>>>> And when will the function netif_page_release() be called?
>>>>>>>>>
>>>>>>>>> Whenever PageForeignDestructor is invoked (as it calls the
>>>>>>>>> destructor function stored in the 'index' field).
>>>>>>>>> PageForeignDestructor is called from:
>>>>>>>>> - __free_pages_ok
>>>>>>>>> - free_hot_cold_page
>>>>>>>>>
>>>>>>>>> Hope this helps.
>>>>>>>>>
>>>>>>>>> Cheers
>>>>>>>>> Gr(z)egor(z)
>>>>>>>>>
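Putting those two answers together, the foreign-page machinery looks
roughly like this (reconstructed from memory of the Xen Linux patches, so
treat it as a sketch rather than the exact code):

    /* Mark a page foreign and stash its destructor in page->index. */
    #define SetPageForeign(_page, dtor) do {        \
        set_bit(PG_foreign, &(_page)->flags);       \
        (_page)->index = (long)(dtor);              \
    } while (0)

    /* Called from __free_pages_ok()/free_hot_cold_page() in place of
     * actually freeing the page: invokes the stored destructor. */
    #define PageForeignDestructor(_page) \
        (((void (*)(struct page *))(_page)->index)(_page))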
= = = = = = = = = = = = = = = = = = = =
Zang Hongyong
zanghongyong@xxxxxxxxxx
2008-06-24
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel