>> Do you think that when the xenblk backend calls make_response() the
>> request has been committed to disk?
>> We did a simple test and found that without the BIO_RW_SYNC flag set,
>> even when the file is opened with O_DIRECT or O_SYNC, a VM crash or
>> power outage loses a lot of data, sometimes more than 1MB, so the VM
>> is at high risk of data loss. With the BIO_RW_SYNC flag set when
>> calling submit_bio, and the file opened with O_DIRECT or O_SYNC, the
>> data is synced properly.
>What's the vbd type in this case: raw partition, lvm, qcow file, ...?
Raw partition; after a crash or power outage, lots of data is lost :(
>The existing BLKIF_OP_WRITE_BARRIER and BLKIF_OP_FLUSH_DISKCACHE should
>suffice to implement O_SYNC on the blkfront side, I think. O_DIRECT doesn't
>mean writes are synchronous to the platters -- just means the buffer cache
>is bypassed -- which should generally be the case on the blkback side
>anyway.
However, the frontend driver cannot see the write's flags at all.
If we could tell that a request comes from an O_SYNC write, it would be
easy to handle without touching the filesystem layer.
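
As a rough illustration (not actual blkfront code): if such writes were
marked with REQ_RW_SYNC on the struct request, as suggested below for
O_DIRECT, blkfront could forward the hint when it builds the ring request.
This is only a sketch against a 2.6.2x-era block layer, and the "sync"
field is made up -- the current blkif protocol has no such flag, which is
exactly the problem:

#include <linux/blkdev.h>
#include <xen/interface/io/blkif.h>

/*
 * Sketch only, not actual blkfront code.  Assumes a 2.6.2x-era block
 * layer where a sync write carries REQ_RW_SYNC in req->cmd_flags.
 * The "sync" field in the ring request is hypothetical.
 */
static void blkif_fill_ring_request_sketch(struct request *req,
					   struct blkif_request *ring_req)
{
	ring_req->operation = (rq_data_dir(req) == READ)
				? BLKIF_OP_READ : BLKIF_OP_WRITE;

	/* Hypothetical sync hint forwarded to the backend. */
	if (rq_data_dir(req) == WRITE && (req->cmd_flags & REQ_RW_SYNC))
		ring_req->sync = 1;	/* not a real blkif field today */

	/* id, sector_number and the segment list would be filled in
	 * the same way blkfront already does. */
}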
O_DIRECT is a different story, and not only because of the buffer cache:
if a read/write is direct I/O, the request needs to be committed as soon
as possible, which means unplugging the request_queue when the request
comes from an O_DIRECT write (such a request should be marked with the
REQ_RW_SYNC flag).
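
On the backend side, once such a hint reached blkback it could simply be
ORed into the rw flags passed to submit_bio, so the block layer marks the
request REQ_RW_SYNC and unplugs the queue right away. Again only a sketch,
assuming the 2.6.2x submit_bio(int rw, struct bio *) signature; "want_sync"
stands in for however the hint would actually arrive (e.g. the hypothetical
ring flag above):

#include <linux/bio.h>
#include <linux/fs.h>

/*
 * Sketch only, not the actual blkback patch.  Setting BIO_RW_SYNC on
 * the submitted bio makes the block layer mark the resulting request
 * REQ_RW_SYNC and unplug the queue immediately, instead of letting
 * the write sit in the backend.
 */
static void blkback_submit_write_sketch(struct bio *bio, int want_sync)
{
	int rw = WRITE;

	if (want_sync)
		rw |= (1 << BIO_RW_SYNC);

	submit_bio(rw, bio);
}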
Thanks,
Joe
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel