xen-devel

Re: [Xen-devel] [PATCH V3 2/3] xen-blkfront: teach blkfront driver handle trim request

To: "Jan Beulich" <JBeulich@xxxxxxxx>
Subject: Re: [Xen-devel] [PATCH V3 2/3] xen-blkfront: teach blkfront driver handle trim request
From: "Dong Yang Li" <lidongyang@xxxxxxxx>
Date: Mon, 29 Aug 2011 04:33:14 -0600
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, owen.smith@xxxxxxxxxx
Delivery-date: Mon, 29 Aug 2011 08:58:27 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
As I've said, if we propagate the host trim device's rq->limits.max_discard_sectors
to xenstore, that value describes the whole physical device. If we are only
exporting a partition to the guest, the number is simply wrong for that guest,
so we should use the capacity in the device case as well (see the sketch below).
Thanks


Li Dongyang
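
To make the size mismatch concrete, here is a minimal user-space sketch (plain C,
not kernel code, and not part of this series); the disk and partition sizes are
made-up numbers, chosen only to show that the host's whole-device discard limit
can be far larger than the capacity a partition-backed vbd presents to the guest.

/* User-space illustration only: compare the whole-device discard limit
 * the host sees with the capacity a partition-backed vbd would report.
 * All sizes below are invented for the example. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* hypothetical 500 GiB physical disk, 512-byte sectors */
	uint64_t phys_sectors = 500ULL * 1024 * 1024 * 1024 / 512;
	/* the host's queue advertises discards up to the whole device */
	uint64_t host_max_discard_sectors = phys_sectors;
	/* but only a 20 GiB partition is exported to the guest */
	uint64_t vbd_capacity_sectors = 20ULL * 1024 * 1024 * 1024 / 512;

	printf("host max_discard_sectors : %llu\n",
	       (unsigned long long)host_max_discard_sectors);
	printf("guest-visible capacity   : %llu\n",
	       (unsigned long long)vbd_capacity_sectors);
	/* propagating the host limit unchanged would let the guest issue
	 * discard requests far larger than its own disk */
	return 0;
}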

>>> Jan Beulich 08/25/11 3:02 PM >>>
>>> On 25.08.11 at 08:37, Li Dongyang  wrote:
> On Wed, Aug 24, 2011 at 6:42 PM, Jan Beulich  wrote:
>>>>> On 24.08.11 at 11:23, Li Dongyang  wrote:
>>> The blkfront driver will now read feature-trim from xenstore and set up
>>> the request queue with the trim parameters, so that discard requests can
>>> be forwarded to the backend driver.
>>>
>>> Signed-off-by: Li Dongyang 
>>> ---
>>>  drivers/block/xen-blkfront.c |  111 +++++++++++++++++++++++++++++++++---------
>>>  1 files changed, 88 insertions(+), 23 deletions(-)
>>>
>>> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
>>> index 9ea8c25..aa3cede 100644
>>> --- a/drivers/block/xen-blkfront.c
>>> +++ b/drivers/block/xen-blkfront.c
>>> @@ -98,6 +98,9 @@ struct blkfront_info
>>>       unsigned long shadow_free;
>>>       unsigned int feature_flush;
>>>       unsigned int flush_op;
>>> +     unsigned int feature_trim;
>>> +     unsigned int discard_granularity;
>>> +     unsigned int discard_alignment;
>>>       int is_ready;
>>>  };
>>>
>>> @@ -302,29 +305,36 @@ static int blkif_queue_request(struct request *req)
>>>               ring_req->operation = info->flush_op;
>>>       }
>>>
>>> -     ring_req->nr_segments = blk_rq_map_sg(req->q, req, info->sg);
>>> -     BUG_ON(ring_req->nr_segments > BLKIF_MAX_SEGMENTS_PER_REQUEST);
>>> +     if (unlikely(req->cmd_flags & REQ_DISCARD)) {
>>> +             /* id, sector_number and handle are set above. */
>>> +             ring_req->operation = BLKIF_OP_TRIM;
>>> +             ring_req->nr_segments = 0;
>>> +             ring_req->u.trim.nr_sectors = blk_rq_sectors(req);
>>> +     } else {
>>> +             ring_req->nr_segments = blk_rq_map_sg(req->q, req, info->sg);
>>> +             BUG_ON(ring_req->nr_segments > BLKIF_MAX_SEGMENTS_PER_REQUEST);
>>>
>>> -     for_each_sg(info->sg, sg, ring_req->nr_segments, i) {
>>> -             buffer_mfn = pfn_to_mfn(page_to_pfn(sg_page(sg)));
>>> -             fsect = sg->offset >> 9;
>>> -             lsect = fsect + (sg->length >> 9) - 1;
>>> -             /* install a grant reference. */
>>> -             ref = gnttab_claim_grant_reference(&gref_head);
>>> -             BUG_ON(ref == -ENOSPC);
>>> +             for_each_sg(info->sg, sg, ring_req->nr_segments, i) {
>>> +                     buffer_mfn = pfn_to_mfn(page_to_pfn(sg_page(sg)));
>>> +                     fsect = sg->offset >> 9;
>>> +                     lsect = fsect + (sg->length >> 9) - 1;
>>> +                     /* install a grant reference. */
>>> +                     ref = gnttab_claim_grant_reference(&gref_head);
>>> +                     BUG_ON(ref == -ENOSPC);
>>>
>>> -             gnttab_grant_foreign_access_ref(
>>> -                             ref,
>>> -                             info->xbdev->otherend_id,
>>> -                             buffer_mfn,
>>> -                             rq_data_dir(req) );
>>> -
>>> -             info->shadow[id].frame[i] = mfn_to_pfn(buffer_mfn);
>>> -             ring_req->u.rw.seg[i] =
>>> -                             (struct blkif_request_segment) {
>>> -                                     .gref       = ref,
>>> -                                     .first_sect = fsect,
>>> -                                     .last_sect  = lsect };
>>> +                     gnttab_grant_foreign_access_ref(
>>> +                                     ref,
>>> +                                     info->xbdev->otherend_id,
>>> +                                     buffer_mfn,
>>> +                                     rq_data_dir(req));
>>> +
>>> +                     info->shadow[id].frame[i] = mfn_to_pfn(buffer_mfn);
>>> +                     ring_req->u.rw.seg[i] =
>>> +                                     (struct blkif_request_segment) {
>>> +                                             .gref       = ref,
>>> +                                             .first_sect = fsect,
>>> +                                             .last_sect  = lsect };
>>> +             }
>>>       }
>>>
>>>       info->ring.req_prod_pvt++;
>>> @@ -399,6 +409,7 @@ wait:
>>>  static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size)
>>>  {
>>>       struct request_queue *rq;
>>> +     struct blkfront_info *info = gd->private_data;
>>>
>>>       rq = blk_init_queue(do_blkif_request, &blkif_io_lock);
>>>       if (rq == NULL)
>>> @@ -406,6 +417,13 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size)
>>>
>>>       queue_flag_set_unlocked(QUEUE_FLAG_VIRT, rq);
>>>
>>> +     if (info->feature_trim) {
>>> +             queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, rq);
>>> +             blk_queue_max_discard_sectors(rq, get_capacity(gd));
>>> +             rq->limits.discard_granularity = info->discard_granularity;
>>> +             rq->limits.discard_alignment = info->discard_alignment;
>>
>> Don't you also need to set rq->limits.max_discard_sectors here (since
>> blkdev_issue_discard() doesn't do anything when it is zero)? And wouldn't
>> that need to be propagated from the backend, too?
> max_discard_sectors is set by blk_queue_max_discard_sectors() above ;-)

Oh, silly me to overlook that.

> rq->limits.max_discard_sectors is the full phy device size; if we only
> assign a partition to the guest, that number is incorrect for the guest,
> so max_discard_sectors should be the capacity the guest will see. Thanks

Using the capacity seems right only for the file: case; I'd still think
the backend should pass the underlying disk's setting for the phy:
one.

Jan
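
For the phy: case Jan describes, a minimal sketch of how a backend might pass the
underlying disk's setting, with the capacity as a fallback for file:, could look
like the snippet below. This is an illustration only, not code from the series;
the xenstore key name "discard-max-sectors" and the helper itself are assumptions
made for the example.

#include <linux/blkdev.h>
#include <xen/xenbus.h>

/* Illustration only: write the discard limit the guest should see.
 * For phy: it comes from the underlying disk's queue limits; for
 * file: (or when the queue reports no limit) fall back to the vbd
 * capacity.  Key name and helper are assumptions, not from the patch. */
static int write_discard_limit(struct xenbus_device *dev,
			       struct xenbus_transaction xbt,
			       struct block_device *bdev,
			       bool is_phy, sector_t vbd_sectors)
{
	struct request_queue *q = bdev ? bdev_get_queue(bdev) : NULL;
	unsigned long long limit;

	if (is_phy && q && q->limits.max_discard_sectors)
		limit = q->limits.max_discard_sectors;	/* whole-disk setting */
	else
		limit = vbd_sectors;			/* file: uses the capacity */

	return xenbus_printf(xbt, dev->nodename,
			     "discard-max-sectors", "%llu", limit);
}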



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
