[PATCH 0/6] virtio-fs: initial support for DAX window

Fotis Xenakis

Apr 20, 2020, 4:48:07 PM
to osv...@googlegroups.com, Fotis Xenakis
This adds initial support for utilizing the DAX window of a virtio-fs
device, building on top of the low-level support for shared memory
regions in PCI virtio devices.

The focus for this patch series is on getting the implementation right
and not on optimizing performance (which is the motive behind the DAX
window in the device). This is why the window is used in the simplest
possible way: holding a single mapping at a time, set up on a
per-request basis, to accommodate a single read() operation. The next
step (which I am already looking into) will be to make better use of
this resource, with the first approach being to integrate it with the
page cache.

To test it out after applying these patches, one should add
"cache-size=" to the arguments passed to the respective QEMU device
flag, as detailed in [1].
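
A minimal invocation along these lines should work (the sizes and paths
here are illustrative; see [1] for the authoritative set of flags):

    qemu-system-x86_64 ... \
        -chardev socket,id=char0,path=/tmp/vhostqemu \
        -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=myfs,cache-size=2G \
        -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on \
        -numa node,memdev=mem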

Note that the last patch is not necessary functionality-wise, but tries
to simplify the inner structure of the two components (the driver and
the filesystem). Please feel free to point out if it is not in the right
direction and / or should not be included in this patch series.

All feedback is more than welcome.

References:
[1] Official virtio-fs QEMU howto:
https://virtio-fs.gitlab.io/howto-qemu.html

Fotis Xenakis (6):
virtio-fs: minor code improvements in driver
virtio-fs: minor code improvements in filesystem
virtio-fs: update fuse protocol header
virtio-fs: add driver support for the DAX window
virtio-fs: add basic read using the DAX window
virtio-fs: refactor driver / fs

drivers/virtio-fs.cc | 114 +++++----
drivers/virtio-fs.hh | 46 +++-
fs/virtiofs/fuse_kernel.h | 82 +++----
fs/virtiofs/virtiofs.hh | 2 +-
fs/virtiofs/virtiofs_i.hh | 29 +--
fs/virtiofs/virtiofs_vfsops.cc | 140 +++++------
fs/virtiofs/virtiofs_vnops.cc | 430 ++++++++++++++++++++++-----------
7 files changed, 492 insertions(+), 351 deletions(-)

--
2.26.1

Fotis Xenakis

Apr 20, 2020, 5:01:35 PM
to osv...@googlegroups.com, Fotis Xenakis
These include:
- Checking memory allocations (see the short excerpt below)
- Using static_cast instead of reinterpret_cast where possible
- Formatting and consistency
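
As a short excerpt, the allocation-check pattern now applied in
make_request() (taken from the hunk below):

    auto* fs_request = new (std::nothrow) fs_req(req);
    if (!fs_request) {
        return ENOMEM;
    }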

Signed-off-by: Fotis Xenakis <fo...@windowslive.com>
---
drivers/virtio-fs.cc | 82 ++++++++++++++++++++++++--------------------
drivers/virtio-fs.hh | 13 +++----
2 files changed, 52 insertions(+), 43 deletions(-)

diff --git a/drivers/virtio-fs.cc b/drivers/virtio-fs.cc
index 4869306d..d95f7740 100644
--- a/drivers/virtio-fs.cc
+++ b/drivers/virtio-fs.cc
@@ -28,7 +28,7 @@

using namespace memory;

-void fuse_req_wait(struct fuse_request* req)
+void fuse_req_wait(fuse_request* req)
{
WITH_LOCK(req->req_mutex) {
req->req_wait.wait(req->req_mutex);
@@ -37,37 +37,37 @@ void fuse_req_wait(struct fuse_request* req)

namespace virtio {

-static int fuse_make_request(void *driver, struct fuse_request* req)
+static int fuse_make_request(void* driver, fuse_request* req)
{
- auto fs_driver = reinterpret_cast<fs*>(driver);
+ auto fs_driver = static_cast<fs*>(driver);
return fs_driver->make_request(req);
}

-static void fuse_req_done(struct fuse_request* req)
+static void fuse_req_done(fuse_request* req)
{
WITH_LOCK(req->req_mutex) {
req->req_wait.wake_one(req->req_mutex);
}
}

-static void fuse_req_enqueue_input(vring* queue, struct fuse_request* req)
+static void fuse_req_enqueue_input(vring* queue, fuse_request* req)
{
// Header goes first
queue->add_out_sg(&req->in_header, sizeof(struct fuse_in_header));
- //
+
// Add fuse in arguments as out sg
- if (req->input_args_size) {
+ if (req->input_args_size > 0) {
queue->add_out_sg(req->input_args_data, req->input_args_size);
}
}

-static void fuse_req_enqueue_output(vring* queue, struct fuse_request* req)
+static void fuse_req_enqueue_output(vring* queue, fuse_request* req)
{
// Header goes first
queue->add_in_sg(&req->out_header, sizeof(struct fuse_out_header));
- //
+
// Add fuse out arguments as in sg
- if (req->output_args_size) {
+ if (req->output_args_size > 0) {
queue->add_in_sg(req->output_args_data, req->output_args_size);
}
}
@@ -93,14 +93,13 @@ struct driver fs_driver = {
bool fs::ack_irq()
{
auto isr = _dev.read_and_ack_isr();
- auto queue = get_virt_queue(VQ_REQUEST);
+ auto* queue = get_virt_queue(VQ_REQUEST);

if (isr) {
queue->disable_interrupts();
return true;
- } else {
- return false;
}
+ return false;
}

fs::fs(virtio_device& virtio_dev)
@@ -113,7 +112,6 @@ fs::fs(virtio_device& virtio_dev)
// Steps 4, 5 & 6 - negotiate and confirm features
setup_features();
read_config();
-
if (_config.num_queues < 1) {
virtio_i("Expected at least one request queue -> baling out!\n");
return;
@@ -122,18 +120,21 @@ fs::fs(virtio_device& virtio_dev)
// Step 7 - generic init of virtqueues
probe_virt_queues();

- //register the single irq callback for the block
+ // register the single irq callback for the block
sched::thread* t = sched::thread::make([this] { this->req_done(); },
- sched::thread::attr().name("virtio-fs"));
+ sched::thread::attr().name("virtio-fs"));
t->start();
- auto queue = get_virt_queue(VQ_REQUEST);
+ auto* queue = get_virt_queue(VQ_REQUEST);

interrupt_factory int_factory;
- int_factory.register_msi_bindings = [queue, t](interrupt_manager &msi) {
- msi.easy_register( {{ VQ_REQUEST, [=] { queue->disable_interrupts(); }, t }});
+ int_factory.register_msi_bindings = [queue, t](interrupt_manager& msi) {
+ msi.easy_register({
+ {VQ_HIPRIO, nullptr, nullptr},
+ {VQ_REQUEST, [=] { queue->disable_interrupts(); }, t}
+ });
};

- int_factory.create_pci_interrupt = [this,t](pci::device &pci_dev) {
+ int_factory.create_pci_interrupt = [this, t](pci::device& pci_dev) {
return new pci_interrupt(
pci_dev,
[=] { return this->ack_irq(); },
@@ -141,10 +142,10 @@ fs::fs(virtio_device& virtio_dev)
};

#ifndef AARCH64_PORT_STUB
- int_factory.create_gsi_edge_interrupt = [this,t]() {
+ int_factory.create_gsi_edge_interrupt = [this, t]() {
return new gsi_edge_interrupt(
- _dev.get_irq(),
- [=] { if (this->ack_irq()) t->wake(); });
+ _dev.get_irq(),
+ [=] { if (this->ack_irq()) t->wake(); });
};
#endif

@@ -159,37 +160,40 @@ fs::fs(virtio_device& virtio_dev)
std::string dev_name("virtiofs");
dev_name += std::to_string(_disk_idx++);

- struct device *dev = device_create(&fs_driver, dev_name.c_str(), D_BLK); //TODO Should it be really D_BLK?
- struct fuse_strategy *strategy = reinterpret_cast<struct fuse_strategy*>(dev->private_data);
+ struct device* dev = device_create(&fs_driver, dev_name.c_str(), D_BLK); // TODO Should it be really D_BLK?
+ auto* strategy = static_cast<fuse_strategy*>(dev->private_data);
strategy->drv = this;
strategy->make_request = fuse_make_request;

- debugf("virtio-fs: Add device instance %d as [%s]\n", _id, dev_name.c_str());
+ debugf("virtio-fs: Add device instance %d as [%s]\n", _id,
+ dev_name.c_str());
}

fs::~fs()
{
- //TODO: In theory maintain the list of free instances and gc it
+ // TODO: In theory maintain the list of free instances and gc it
// including the thread objects and their stack
}

void fs::read_config()
{
- virtio_conf_read(0, &(_config.tag[0]), sizeof(_config.tag));
- virtio_conf_read(offsetof(fs_config,num_queues), &(_config.num_queues), sizeof(_config.num_queues));
- debugf("virtio-fs: Detected device with tag: [%s] and num_queues: %d\n", _config.tag, _config.num_queues);
+ virtio_conf_read(0, &_config, sizeof(_config));
+ debugf("virtio-fs: Detected device with tag: [%s] and num_queues: %d\n",
+ _config.tag, _config.num_queues);
}

void fs::req_done()
{
auto* queue = get_virt_queue(VQ_REQUEST);
- fs_req* req;

- while (1) {
+ while (true) {
virtio_driver::wait_for_queue(queue, &vring::used_ring_not_empty);

+ fs_req* req;
u32 len;
- while((req = static_cast<fs_req*>(queue->get_buf_elem(&len))) != nullptr) {
+ while ((req = static_cast<fs_req*>(queue->get_buf_elem(&len))) !=
+ nullptr) {
+
fuse_req_done(req->fuse_req);
delete req;
queue->get_buf_finalize();
@@ -200,12 +204,13 @@ void fs::req_done()
}
}

-int fs::make_request(struct fuse_request* req)
+int fs::make_request(fuse_request* req)
{
// The lock is here for parallel requests protection
WITH_LOCK(_lock) {
-
- if (!req) return EIO;
+ if (!req) {
+ return EIO;
+ }

auto* queue = get_virt_queue(VQ_REQUEST);

@@ -214,7 +219,10 @@ int fs::make_request(struct fuse_request* req)
fuse_req_enqueue_input(queue, req);
fuse_req_enqueue_output(queue, req);

- auto* fs_request = new fs_req(req);
+ auto* fs_request = new (std::nothrow) fs_req(req);
+ if (!fs_request) {
+ return ENOMEM;
+ }
queue->add_buf_wait(fs_request);
queue->kick();

diff --git a/drivers/virtio-fs.hh b/drivers/virtio-fs.hh
index efdb956d..626bd906 100644
--- a/drivers/virtio-fs.hh
+++ b/drivers/virtio-fs.hh
@@ -17,8 +17,8 @@
namespace virtio {

enum {
- VQ_HIPRIO,
- VQ_REQUEST
+ VQ_HIPRIO = 0,
+ VQ_REQUEST = 1
};

class fs : public virtio_driver {
@@ -34,7 +34,7 @@ public:
virtual std::string get_name() const { return _driver_name; }
void read_config();

- int make_request(struct fuse_request*);
+ int make_request(fuse_request*);

void req_done();
int64_t size();
@@ -42,18 +42,19 @@ public:
bool ack_irq();

static hw_driver* probe(hw_device* dev);
+
private:
struct fs_req {
- fs_req(struct fuse_request* f) :fuse_req(f) {};
+ fs_req(fuse_request* f) : fuse_req(f) {};
~fs_req() {};

- struct fuse_request* fuse_req;
+ fuse_request* fuse_req;
};

std::string _driver_name;
fs_config _config;

- //maintains the virtio instance number for multiple drives
+ // maintains the virtio instance number for multiple drives
static int _instance;
int _id;
// This mutex protects parallel make_request invocations
--
2.26.1

Fotis Xenakis

Apr 20, 2020, 5:03:27 PM
to osv...@googlegroups.com, Fotis Xenakis
These include:
- Checking memory allocations
- Using smart pointers where possible (see the short excerpt below)
- Using static_cast instead of reinterpret_cast or C-style casts where
possible
- Formatting and consistency
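
As a short excerpt, the smart-pointer variant of the same
allocation-check pattern (condensed from the virtiofs_mount() hunk
below):

    std::unique_ptr<fuse_init_in> in_args {new (std::nothrow) fuse_init_in()};
    if (!in_args) {
        return ENOMEM;
    }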

Signed-off-by: Fotis Xenakis <fo...@windowslive.com>
---
fs/virtiofs/virtiofs.hh | 2 +-
fs/virtiofs/virtiofs_i.hh | 19 +-
fs/virtiofs/virtiofs_vfsops.cc | 136 +++++++-------
fs/virtiofs/virtiofs_vnops.cc | 318 ++++++++++++++++++---------------
4 files changed, 245 insertions(+), 230 deletions(-)

diff --git a/fs/virtiofs/virtiofs.hh b/fs/virtiofs/virtiofs.hh
index 892c9ca7..475d5eba 100644
--- a/fs/virtiofs/virtiofs.hh
+++ b/fs/virtiofs/virtiofs.hh
@@ -32,7 +32,7 @@ struct virtiofs_file_data {
uint64_t file_handle;
};

-void virtiofs_set_vnode(struct vnode *vnode, struct virtiofs_inode *inode);
+void virtiofs_set_vnode(struct vnode* vnode, struct virtiofs_inode* inode);

extern struct vfsops virtiofs_vfsops;
extern struct vnops virtiofs_vnops;
diff --git a/fs/virtiofs/virtiofs_i.hh b/fs/virtiofs/virtiofs_i.hh
index c5dc10d2..17fbcd36 100644
--- a/fs/virtiofs/virtiofs_i.hh
+++ b/fs/virtiofs/virtiofs_i.hh
@@ -12,15 +12,14 @@
#include <osv/mutex.h>
#include <osv/waitqueue.hh>

-struct fuse_request
-{
+struct fuse_request {
struct fuse_in_header in_header;
struct fuse_out_header out_header;

- void *input_args_data;
+ void* input_args_data;
size_t input_args_size;

- void *output_args_data;
+ void* output_args_data;
size_t output_args_size;

mutex_t req_mutex;
@@ -28,14 +27,14 @@ struct fuse_request
};

struct fuse_strategy {
- void *drv;
- int (*make_request)(void*, struct fuse_request*);
+ void* drv;
+ int (*make_request)(void*, fuse_request*);
};

-int fuse_req_send_and_receive_reply(fuse_strategy* strategy, uint32_t opcode, uint64_t nodeid,
- void *input_args_data, size_t input_args_size,
- void *output_args_data, size_t output_args_size);
+int fuse_req_send_and_receive_reply(fuse_strategy* strategy, uint32_t opcode,
+ uint64_t nodeid, void* input_args_data, size_t input_args_size,
+ void* output_args_data, size_t output_args_size);

-void fuse_req_wait(struct fuse_request* req);
+void fuse_req_wait(fuse_request* req);

#endif
diff --git a/fs/virtiofs/virtiofs_vfsops.cc b/fs/virtiofs/virtiofs_vfsops.cc
index 4e8bf26e..968f93fc 100644
--- a/fs/virtiofs/virtiofs_vfsops.cc
+++ b/fs/virtiofs/virtiofs_vfsops.cc
@@ -13,36 +13,21 @@
#include "virtiofs.hh"
#include "virtiofs_i.hh"

-static int virtiofs_mount(struct mount *mp, const char *dev, int flags, const void *data);
-static int virtiofs_sync(struct mount *mp);
-static int virtiofs_statfs(struct mount *mp, struct statfs *statp);
-static int virtiofs_unmount(struct mount *mp, int flags);
+static std::atomic<uint64_t> fuse_unique_id(1);

-#define virtiofs_vget ((vfsop_vget_t)vfs_nullop)
-
-struct vfsops virtiofs_vfsops = {
- virtiofs_mount, /* mount */
- virtiofs_unmount, /* unmount */
- virtiofs_sync, /* sync */
- virtiofs_vget, /* vget */
- virtiofs_statfs, /* statfs */
- &virtiofs_vnops /* vnops */
-};
-
-std::atomic<uint64_t> fuse_unique_id(1);
-
-int fuse_req_send_and_receive_reply(fuse_strategy* strategy, uint32_t opcode, uint64_t nodeid,
- void *input_args_data, size_t input_args_size, void *output_args_data, size_t output_args_size)
+int fuse_req_send_and_receive_reply(fuse_strategy* strategy, uint32_t opcode,
+ uint64_t nodeid, void* input_args_data, size_t input_args_size,
+ void* output_args_data, size_t output_args_size)
{
- auto *req = new (std::nothrow) fuse_request();
-
- req->in_header.len = 0; //TODO
+ std::unique_ptr<fuse_request> req {new (std::nothrow) fuse_request()};
+ if (!req) {
+ return ENOMEM;
+ }
+ req->in_header.len = sizeof(req->in_header) + input_args_size;
req->in_header.opcode = opcode;
- req->in_header.unique = fuse_unique_id.fetch_add(1, std::memory_order_relaxed);
+ req->in_header.unique = fuse_unique_id.fetch_add(1,
+ std::memory_order_relaxed);
req->in_header.nodeid = nodeid;
- req->in_header.uid = 0;
- req->in_header.gid = 0;
- req->in_header.pid = 0;

req->input_args_data = input_args_data;
req->input_args_size = input_args_size;
@@ -51,18 +36,17 @@ int fuse_req_send_and_receive_reply(fuse_strategy* strategy, uint32_t opcode, ui
req->output_args_size = output_args_size;

assert(strategy->drv);
- strategy->make_request(strategy->drv, req);
- fuse_req_wait(req);
+ strategy->make_request(strategy->drv, req.get());
+ fuse_req_wait(req.get());

int error = -req->out_header.error;
- delete req;

return error;
}

-void virtiofs_set_vnode(struct vnode *vnode, struct virtiofs_inode *inode)
+void virtiofs_set_vnode(struct vnode* vnode, struct virtiofs_inode* inode)
{
- if (vnode == nullptr || inode == nullptr) {
+ if (!vnode || !inode) {
return;
}

@@ -82,81 +66,85 @@ void virtiofs_set_vnode(struct vnode *vnode, struct virtiofs_inode *inode)
vnode->v_size = inode->attr.size;
}

-static int
-virtiofs_mount(struct mount *mp, const char *dev, int flags, const void *data) {
- struct device *device;
- int error = -1;
+static int virtiofs_mount(struct mount* mp, const char* dev, int flags,
+ const void* data)
+{
+ struct device* device;

- error = device_open(dev + 5, DO_RDWR, &device);
+ int error = device_open(dev + strlen("/dev/"), DO_RDWR, &device);
if (error) {
kprintf("[virtiofs] Error opening device!\n");
return error;
}

- mp->m_dev = device;
-
- auto *in_args = new(std::nothrow) fuse_init_in();
+ std::unique_ptr<fuse_init_in> in_args {new (std::nothrow) fuse_init_in()};
+ std::unique_ptr<fuse_init_out> out_args {new (std::nothrow) fuse_init_out};
+ if (!in_args || !out_args) {
+ return ENOMEM;
+ }
in_args->major = FUSE_KERNEL_VERSION;
in_args->minor = FUSE_KERNEL_MINOR_VERSION;
in_args->max_readahead = PAGE_SIZE;
- in_args->flags = 0; //TODO Investigate which flags to set
-
- auto *out_args = new(std::nothrow) fuse_init_out();
+ in_args->flags = 0; // TODO: Verify that we need not set any flag

- auto *strategy = reinterpret_cast<fuse_strategy *>(device->private_data);
+ auto* strategy = static_cast<fuse_strategy*>(device->private_data);
error = fuse_req_send_and_receive_reply(strategy, FUSE_INIT, FUSE_ROOT_ID,
- in_args, sizeof(*in_args), out_args, sizeof(*out_args));
-
- if (!error) {
- virtiofs_debug("Initialized fuse filesystem with version major: %d, minor: %d\n",
- out_args->major, out_args->minor);
-
- auto *root_node = new virtiofs_inode();
- root_node->nodeid = FUSE_ROOT_ID;
- root_node->attr.mode = S_IFDIR;
+ in_args.get(), sizeof(*in_args), out_args.get(), sizeof(*out_args));
+ if (error) {
+ kprintf("[virtiofs] Failed to initialize fuse filesystem!\n");
+ return error;
+ }
+ // TODO: Handle version negotiation

- virtiofs_set_vnode(mp->m_root->d_vnode, root_node);
+ virtiofs_debug("Initialized fuse filesystem with version major: %d, "
+ "minor: %d\n", out_args->major, out_args->minor);

- mp->m_data = strategy;
- mp->m_dev = device;
- } else {
- kprintf("[virtiofs] Failed to initialized fuse filesystem!\n");
+ auto* root_node {new (std::nothrow) virtiofs_inode()};
+ if (!root_node) {
+ return ENOMEM;
}
+ root_node->nodeid = FUSE_ROOT_ID;
+ root_node->attr.mode = S_IFDIR;

- delete out_args;
- delete in_args;
+ virtiofs_set_vnode(mp->m_root->d_vnode, root_node);

- return error;
-}
+ mp->m_data = strategy;
+ mp->m_dev = device;

-static int virtiofs_sync(struct mount *mp) {
return 0;
}

-static int virtiofs_statfs(struct mount *mp, struct statfs *statp)
+static int virtiofs_sync(struct mount* mp)
{
- //TODO
- //struct virtiofs_info *virtiofs = (struct virtiofs_info *) mp->m_data;
+ return 0;
+}

- //statp->f_bsize = sb->block_size;
+static int virtiofs_statfs(struct mount* mp, struct statfs* statp)
+{
+ // TODO: Call FUSE_STATFS

- // Total blocks
- //statp->f_blocks = sb->structure_info_blocks_count + sb->structure_info_first_block;
// Read only. 0 blocks free
statp->f_bfree = 0;
statp->f_bavail = 0;

statp->f_ffree = 0;
- //statp->f_files = sb->inodes_count; //Needs to be inode count
-
- statp->f_namelen = 0; //FIXME

return 0;
}

-static int
-virtiofs_unmount(struct mount *mp, int flags)
+static int virtiofs_unmount(struct mount* mp, int flags)
{
- struct device *dev = mp->m_dev;
+ struct device* dev = mp->m_dev;
return device_close(dev);
}
+
+#define virtiofs_vget ((vfsop_vget_t)vfs_nullop)
+
+struct vfsops virtiofs_vfsops = {
+ virtiofs_mount, /* mount */
+ virtiofs_unmount, /* unmount */
+ virtiofs_sync, /* sync */
+ virtiofs_vget, /* vget */
+ virtiofs_statfs, /* statfs */
+ &virtiofs_vnops /* vnops */
+};
diff --git a/fs/virtiofs/virtiofs_vnops.cc b/fs/virtiofs/virtiofs_vnops.cc
index 3c212274..7fbb2cd2 100644
--- a/fs/virtiofs/virtiofs_vnops.cc
+++ b/fs/virtiofs/virtiofs_vnops.cc
@@ -27,206 +27,233 @@
#include "virtiofs.hh"
#include "virtiofs_i.hh"

-#define VERIFY_READ_INPUT_ARGUMENTS() \
- /* Cant read directories */\
- if (vnode->v_type == VDIR) \
- return EISDIR; \
- /* Cant read anything but reg */\
- if (vnode->v_type != VREG) \
- return EINVAL; \
- /* Cant start reading before the first byte */\
- if (uio->uio_offset < 0) \
- return EINVAL; \
- /* Need to read more than 1 byte */\
- if (uio->uio_resid == 0) \
- return 0; \
- /* Cant read after the end of the file */\
- if (uio->uio_offset >= (off_t)vnode->v_size) \
- return 0;
+static constexpr uint32_t OPEN_FLAGS = O_RDONLY;

-int virtiofs_init(void) {
+int virtiofs_init()
+{
return 0;
}

-static int virtiofs_lookup(struct vnode *vnode, char *name, struct vnode **vpp)
+static int virtiofs_lookup(struct vnode* vnode, char* name, struct vnode** vpp)
{
- struct virtiofs_inode *inode = (struct virtiofs_inode *) vnode->v_data;
- struct vnode *vp = nullptr;
+ auto* inode = static_cast<virtiofs_inode*>(vnode->v_data);

if (*name == '\0') {
return ENOENT;
}

if (!S_ISDIR(inode->attr.mode)) {
- kprintf("[virtiofs] inode:%d, ABORTED lookup of %s because not a directory\n", inode->nodeid, name);
+ kprintf("[virtiofs] inode:%lld, ABORTED lookup of %s because not a "
+ "directory\n", inode->nodeid, name);
return ENOTDIR;
}

- auto *out_args = new (std::nothrow) fuse_entry_out();
- auto input = new char[strlen(name) + 1];
- strcpy(input, name);
-
- auto *strategy = reinterpret_cast<fuse_strategy*>(vnode->v_mount->m_data);
- int error = fuse_req_send_and_receive_reply(strategy, FUSE_LOOKUP, inode->nodeid,
- input, strlen(name) + 1, out_args, sizeof(*out_args));
-
- if (!error) {
- if (vget(vnode->v_mount, out_args->nodeid, &vp)) { //TODO: Will it ever work? Revisit
- virtiofs_debug("lookup found vp in cache!\n");
- *vpp = vp;
- return 0;
- }
-
- auto *new_inode = new virtiofs_inode();
- new_inode->nodeid = out_args->nodeid;
- virtiofs_debug("inode %d, lookup found inode %d for %s!\n", inode->nodeid, new_inode->nodeid, name);
- memcpy(&new_inode->attr, &out_args->attr, sizeof(out_args->attr));
+ auto in_args_len = strlen(name) + 1;
+ std::unique_ptr<char[]> in_args {new (std::nothrow) char[in_args_len]};
+ std::unique_ptr<fuse_entry_out> out_args {
+ new (std::nothrow) fuse_entry_out};
+ if (!out_args || !in_args) {
+ return ENOMEM;
+ }
+ strcpy(in_args.get(), name);
+
+ auto* strategy = static_cast<fuse_strategy*>(vnode->v_mount->m_data);
+ auto error = fuse_req_send_and_receive_reply(strategy, FUSE_LOOKUP,
+ inode->nodeid, in_args.get(), in_args_len, out_args.get(),
+ sizeof(*out_args));
+ if (error) {
+ kprintf("[virtiofs] inode:%lld, lookup failed to find %s\n",
+ inode->nodeid, name);
+ // TODO: Implement proper error handling by sending FUSE_FORGET
+ return error;
+ }

- virtiofs_set_vnode(vp, new_inode);
+ struct vnode* vp;
+ // TODO OPT: Should we even use the cache? (consult spec on metadata)
+ if (vget(vnode->v_mount, out_args->nodeid, &vp) == 1) {
+ virtiofs_debug("lookup found vp in cache!\n");
*vpp = vp;
- } else {
- kprintf("[virtiofs] inode:%d, lookup failed to find %s\n", inode->nodeid, name);
- //TODO Implement proper error handling by sending FUSE_FORGET
+ return 0;
}

- delete input;
- delete out_args;
+ auto* new_inode = new (std::nothrow) virtiofs_inode;
+ if (!new_inode) {
+ return ENOMEM;
+ }
+ new_inode->nodeid = out_args->nodeid;
+ virtiofs_debug("inode %lld, lookup found inode %lld for %s!\n",
+ inode->nodeid, new_inode->nodeid, name);
+ memcpy(&new_inode->attr, &out_args->attr, sizeof(out_args->attr));

- return error;
+ virtiofs_set_vnode(vp, new_inode);
+ *vpp = vp;
+
+ return 0;
}

-static int virtiofs_open(struct file *fp)
+static int virtiofs_open(struct file* fp)
{
if ((file_flags(fp) & FWRITE)) {
- // Do no allow opening files to write
- return (EROFS);
+ // Do not allow opening files to write
+ return EROFS;
}

- struct vnode *vnode = file_dentry(fp)->d_vnode;
- struct virtiofs_inode *inode = (struct virtiofs_inode *) vnode->v_data;
+ auto* vnode = file_dentry(fp)->d_vnode;
+ auto* inode = static_cast<virtiofs_inode*>(vnode->v_data);

- auto *out_args = new (std::nothrow) fuse_open_out();
- auto *input_args = new (std::nothrow) fuse_open_in();
- input_args->flags = O_RDONLY;
-
- auto *strategy = reinterpret_cast<fuse_strategy*>(vnode->v_mount->m_data);
- int error = fuse_req_send_and_receive_reply(strategy, FUSE_OPEN, inode->nodeid,
- input_args, sizeof(*input_args), out_args, sizeof(*out_args));
+ std::unique_ptr<fuse_open_in> in_args {new (std::nothrow) fuse_open_in()};
+ std::unique_ptr<fuse_open_out> out_args {new (std::nothrow) fuse_open_out};
+ if (!out_args || !in_args) {
+ return ENOMEM;
+ }
+ in_args->flags = OPEN_FLAGS;
+
+ auto* strategy = static_cast<fuse_strategy*>(vnode->v_mount->m_data);
+ auto error = fuse_req_send_and_receive_reply(strategy, FUSE_OPEN,
+ inode->nodeid, in_args.get(), sizeof(*in_args), out_args.get(),
+ sizeof(*out_args));
+ if (error) {
+ kprintf("[virtiofs] inode %lld, open failed\n", inode->nodeid);
+ return error;
+ }

- if (!error) {
- virtiofs_debug("inode %d, opened\n", inode->nodeid);
+ virtiofs_debug("inode %lld, opened\n", inode->nodeid);

- auto *file_data = new virtiofs_file_data();
- file_data->file_handle = out_args->fh;
- fp->f_data = file_data;
+ auto* f_data = new (std::nothrow) virtiofs_file_data;
+ if (!f_data) {
+ return ENOMEM;
}
+ f_data->file_handle = out_args->fh;
+ // TODO OPT: Consult and possibly act upon out_args->open_flags
+ file_setdata(fp, f_data);

- delete input_args;
- delete out_args;
-
- return error;
+ return 0;
}

-static int virtiofs_close(struct vnode *vnode, struct file *fp)
+static int virtiofs_close(struct vnode* vnode, struct file* fp)
{
- struct virtiofs_inode *inode = (struct virtiofs_inode *) vnode->v_data;
-
- auto *input_args = new (std::nothrow) fuse_release_in();
- auto *file_data = reinterpret_cast<virtiofs_file_data*>(fp->f_data);
- input_args->fh = file_data->file_handle;
+ auto* inode = static_cast<virtiofs_inode*>(vnode->v_data);

- auto *strategy = reinterpret_cast<fuse_strategy*>(vnode->v_mount->m_data);
- auto error = fuse_req_send_and_receive_reply(strategy, FUSE_RELEASE, inode->nodeid,
- input_args, sizeof(*input_args), nullptr, 0);
-
- if (!error) {
- fp->f_data = nullptr;
- delete file_data;
- virtiofs_debug("inode %d, closed\n", inode->nodeid);
+ std::unique_ptr<fuse_release_in> in_args {
+ new (std::nothrow) fuse_release_in()};
+ if (!in_args) {
+ return ENOMEM;
+ }
+ auto* f_data = static_cast<virtiofs_file_data*>(file_data(fp));
+ in_args->fh = f_data->file_handle;
+ in_args->flags = OPEN_FLAGS; // need to be same as in FUSE_OPEN
+
+ auto* strategy = static_cast<fuse_strategy*>(vnode->v_mount->m_data);
+ auto error = fuse_req_send_and_receive_reply(strategy, FUSE_RELEASE,
+ inode->nodeid, in_args.get(), sizeof(*in_args), nullptr, 0);
+ if (error) {
+ kprintf("[virtiofs] inode %lld, close failed\n", inode->nodeid);
+ return error;
}

- //TODO: Investigate if we should send FUSE_FORGET once all handles to the file closed on our side
+ file_setdata(fp, nullptr);
+ delete f_data;
+ virtiofs_debug("inode %lld, closed\n", inode->nodeid);

- delete input_args;
+ // TODO: Investigate if we should send FUSE_FORGET once all handles to the
+ // file closed on our side

- return error;
+ return 0;
}

-static int virtiofs_readlink(struct vnode *vnode, struct uio *uio)
+static int virtiofs_readlink(struct vnode* vnode, struct uio* uio)
{
- struct virtiofs_inode *inode = (struct virtiofs_inode *) vnode->v_data;
-
- auto *link_path = new (std::nothrow) char[PATH_MAX];
+ auto* inode = static_cast<virtiofs_inode*>(vnode->v_data);

- auto *strategy = reinterpret_cast<fuse_strategy*>(vnode->v_mount->m_data);
- int error = fuse_req_send_and_receive_reply(strategy, FUSE_READLINK, inode->nodeid,
- nullptr, 0, link_path, PATH_MAX);
-
- int ret = 0;
- if (!error) {
- virtiofs_debug("inode %d, read symlink [%s]\n", inode->nodeid, link_path);
- ret = uiomove(link_path, strlen(link_path), uio);
- } else {
- kprintf("[virtiofs] Error reading data\n");
- ret = error;
+ std::unique_ptr<char[]> link_path {new (std::nothrow) char[PATH_MAX]};
+ if (!link_path) {
+ return ENOMEM;
}

- delete link_path;
+ auto* strategy = static_cast<fuse_strategy*>(vnode->v_mount->m_data);
+ auto error = fuse_req_send_and_receive_reply(strategy, FUSE_READLINK,
+ inode->nodeid, nullptr, 0, link_path.get(), PATH_MAX);
+ if (error) {
+ kprintf("[virtiofs] inode %lld, readlink failed\n", inode->nodeid);
+ return error;
+ }

- return ret;
+ virtiofs_debug("inode %lld, read symlink [%s]\n", inode->nodeid,
+ link_path.get());
+ return uiomove(link_path.get(), strlen(link_path.get()), uio);
}

-//TODO: Optimize it to reduce number of exits to host (each fuse_req_send_and_receive_reply())
-// by reading eagerly "ahead/around" just like ROFS does and caching it
-static int virtiofs_read(struct vnode *vnode, struct file *fp, struct uio *uio, int ioflag)
+// TODO: Optimize it to reduce number of exits to host (each
+// fuse_req_send_and_receive_reply()) by reading eagerly "ahead/around" just
+// like ROFS does and caching it
+static int virtiofs_read(struct vnode* vnode, struct file* fp, struct uio* uio,
+ int ioflag)
{
- struct virtiofs_inode *inode = (struct virtiofs_inode *) vnode->v_data;
+ auto* inode = static_cast<virtiofs_inode*>(vnode->v_data);

- VERIFY_READ_INPUT_ARGUMENTS()
+ // Can't read directories
+ if (vnode->v_type == VDIR) {
+ return EISDIR;
+ }
+ // Can't read anything but reg
+ if (vnode->v_type != VREG) {
+ return EINVAL;
+ }
+ // Can't start reading before the first byte
+ if (uio->uio_offset < 0) {
+ return EINVAL;
+ }
+ // Need to read at least 1 byte
+ if (uio->uio_resid == 0) {
+ return 0;
+ }
+ // Can't read after the end of the file
+ if (uio->uio_offset >= vnode->v_size) {
+ return 0;
+ }

// Total read amount is what they requested, or what is left
- uint64_t read_amt = std::min<uint64_t>(inode->attr.size - uio->uio_offset, uio->uio_resid);
- void *buf = malloc(read_amt);
-
- auto *input_args = new (std::nothrow) fuse_read_in();
- auto *file_data = reinterpret_cast<virtiofs_file_data*>(fp->f_data);
- input_args->fh = file_data->file_handle;
- input_args->offset = uio->uio_offset;
- input_args->size = read_amt;
- input_args->flags = ioflag;
- input_args->lock_owner = 0;
-
- virtiofs_debug("inode %d, reading %d bytes at offset %d\n", inode->nodeid, read_amt, uio->uio_offset);
-
- auto *strategy = reinterpret_cast<fuse_strategy*>(vnode->v_mount->m_data);
- auto error = fuse_req_send_and_receive_reply(strategy, FUSE_READ, inode->nodeid,
- input_args, sizeof(*input_args), buf, read_amt);
-
- int ret = 0;
- if (!error) {
- ret = uiomove(buf, read_amt, uio);
- } else {
- kprintf("[virtiofs] Error reading data\n");
- ret = error;
+ auto read_amt = std::min<uint64_t>(uio->uio_resid,
+ inode->attr.size - uio->uio_offset);
+ std::unique_ptr<u8[]> buf {new (std::nothrow) u8[read_amt]};
+ std::unique_ptr<fuse_read_in> in_args {new (std::nothrow) fuse_read_in()};
+ if (!buf || !in_args) {
+ return ENOMEM;
+ }
+ auto* f_data = static_cast<virtiofs_file_data*>(file_data(fp));
+ in_args->fh = f_data->file_handle;
+ in_args->offset = uio->uio_offset;
+ in_args->size = read_amt;
+ in_args->flags = ioflag;
+
+ virtiofs_debug("inode %lld, reading %lld bytes at offset %lld\n",
+ inode->nodeid, read_amt, uio->uio_offset);
+
+ auto* strategy = static_cast<fuse_strategy*>(vnode->v_mount->m_data);
+ auto error = fuse_req_send_and_receive_reply(strategy, FUSE_READ,
+ inode->nodeid, in_args.get(), sizeof(*in_args), buf.get(), read_amt);
+ if (error) {
+ kprintf("[virtiofs] inode %lld, read failed\n", inode->nodeid);
+ return error;
}

- free(buf);
- free(input_args);
-
- return ret;
+ return uiomove(buf.get(), read_amt, uio);
}
-//
-static int virtiofs_readdir(struct vnode *vnode, struct file *fp, struct dirent *dir)
+
+static int virtiofs_readdir(struct vnode* vnode, struct file* fp,
+ struct dirent* dir)
{
- //TODO Implement
+ // TODO: Implement
return EPERM;
}

-static int virtiofs_getattr(struct vnode *vnode, struct vattr *attr)
+static int virtiofs_getattr(struct vnode* vnode, struct vattr* attr)
{
- struct virtiofs_inode *inode = (struct virtiofs_inode *) vnode->v_data;
+ auto* inode = static_cast<virtiofs_inode*>(vnode->v_data);

- attr->va_mode = 0555; //Is it really correct?
+ // TODO: Call FUSE_GETATTR? But figure out if fuse_getattr_in.fh is
+ // necessary (look at the flags)
+ attr->va_mode = 0555; // TODO: Is it really correct?

if (S_ISDIR(inode->attr.mode)) {
attr->va_type = VDIR;
@@ -277,10 +304,11 @@ struct vnops virtiofs_vnops = {
virtiofs_getattr, /* getattr */
virtiofs_setattr, /* setattr - returns error when called */
virtiofs_inactive, /* inactive */
- virtiofs_truncate, /* truncate - returns error when called*/
- virtiofs_link, /* link - returns error when called*/
- virtiofs_arc, /* arc */ //TODO: Implement to allow memory re-use when mapping files, investigate using virtio-fs DAX
- virtiofs_fallocate, /* fallocate - returns error when called*/
+ virtiofs_truncate, /* truncate - returns error when called */
+ virtiofs_link, /* link - returns error when called */
+ virtiofs_arc, /* arc */ //TODO: Implement to allow memory re-use when
+ // mapping files, investigate using virtio-fs DAX
+ virtiofs_fallocate, /* fallocate - returns error when called */
virtiofs_readlink, /* read link */
- virtiofs_symlink /* symbolic link - returns error when called*/
+ virtiofs_symlink /* symbolic link - returns error when called */
};
--
2.26.1

Fotis Xenakis

Apr 20, 2020, 5:04:27 PM
to osv...@googlegroups.com, Fotis Xenakis
Copied from virtiofsd @ 32006c66f2578af4121d7effaccae4aa4fa12e46. This
includes the definitions for FUSE_SETUPMAPPING and FUSE_REMOVEMAPPING.

Signed-off-by: Fotis Xenakis <fo...@windowslive.com>
---
fs/virtiofs/fuse_kernel.h | 82 ++++++++++++++++++---------------------
1 file changed, 38 insertions(+), 44 deletions(-)

diff --git a/fs/virtiofs/fuse_kernel.h b/fs/virtiofs/fuse_kernel.h
index 018a00a2..ce46046a 100644
--- a/fs/virtiofs/fuse_kernel.h
+++ b/fs/virtiofs/fuse_kernel.h
@@ -44,7 +44,6 @@
* - add lock_owner field to fuse_setattr_in, fuse_read_in and fuse_write_in
* - add blksize field to fuse_attr
* - add file flags field to fuse_read_in and fuse_write_in
- * - Add ATIME_NOW and MTIME_NOW flags to fuse_setattr_in
*
* 7.10
* - add nonseekable open flag
@@ -55,7 +54,7 @@
* - add POLL message and NOTIFY_POLL notification
*
* 7.12
- * - add umask flag to input argument of create, mknod and mkdir
+ * - add umask flag to input argument of open, mknod and mkdir
* - add notification messages for invalidation of inodes and
* directory entries
*
@@ -120,19 +119,6 @@
*
* 7.28
* - add FUSE_COPY_FILE_RANGE
- * - add FOPEN_CACHE_DIR
- * - add FUSE_MAX_PAGES, add max_pages to init_out
- * - add FUSE_CACHE_SYMLINKS
- *
- * 7.29
- * - add FUSE_NO_OPENDIR_SUPPORT flag
- *
- * 7.30
- * - add FUSE_EXPLICIT_INVAL_DATA
- * - add FUSE_IOCTL_COMPAT_X32
- *
- * 7.31
- * - add FUSE_WRITE_KILL_PRIV flag
*/

#ifndef _LINUX_FUSE_H
@@ -168,7 +154,7 @@
#define FUSE_KERNEL_VERSION 7

/** Minor version number of this interface */
-#define FUSE_KERNEL_MINOR_VERSION 31
+#define FUSE_KERNEL_MINOR_VERSION 27

/** The node ID of the root inode */
#define FUSE_ROOT_ID 1
@@ -236,14 +222,10 @@ struct fuse_file_lock {
* FOPEN_DIRECT_IO: bypass page cache for this open file
* FOPEN_KEEP_CACHE: don't invalidate the data cache on open
* FOPEN_NONSEEKABLE: the file is not seekable
- * FOPEN_CACHE_DIR: allow caching this directory
- * FOPEN_STREAM: the file is stream-like (no file position at all)
*/
#define FOPEN_DIRECT_IO (1 << 0)
#define FOPEN_KEEP_CACHE (1 << 1)
#define FOPEN_NONSEEKABLE (1 << 2)
-#define FOPEN_CACHE_DIR (1 << 3)
-#define FOPEN_STREAM (1 << 4)

/**
* INIT request/reply flags
@@ -270,10 +252,6 @@ struct fuse_file_lock {
* FUSE_HANDLE_KILLPRIV: fs handles killing suid/sgid/cap on write/chown/trunc
* FUSE_POSIX_ACL: filesystem supports posix acls
* FUSE_ABORT_ERROR: reading the device after abort returns ECONNABORTED
- * FUSE_MAX_PAGES: init_out.max_pages contains the max number of req pages
- * FUSE_CACHE_SYMLINKS: cache READLINK responses
- * FUSE_NO_OPENDIR_SUPPORT: kernel supports zero-message opendir
- * FUSE_EXPLICIT_INVAL_DATA: only invalidate cached pages on explicit request
*/
#define FUSE_ASYNC_READ (1 << 0)
#define FUSE_POSIX_LOCKS (1 << 1)
@@ -297,10 +275,6 @@ struct fuse_file_lock {
#define FUSE_HANDLE_KILLPRIV (1 << 19)
#define FUSE_POSIX_ACL (1 << 20)
#define FUSE_ABORT_ERROR (1 << 21)
-#define FUSE_MAX_PAGES (1 << 22)
-#define FUSE_CACHE_SYMLINKS (1 << 23)
-#define FUSE_NO_OPENDIR_SUPPORT (1 << 24)
-#define FUSE_EXPLICIT_INVAL_DATA (1 << 25)

/**
* CUSE INIT request/reply flags
@@ -330,11 +304,9 @@ struct fuse_file_lock {
*
* FUSE_WRITE_CACHE: delayed write from page cache, file handle is guessed
* FUSE_WRITE_LOCKOWNER: lock_owner field is valid
- * FUSE_WRITE_KILL_PRIV: kill suid and sgid bits
*/
#define FUSE_WRITE_CACHE (1 << 0)
#define FUSE_WRITE_LOCKOWNER (1 << 1)
-#define FUSE_WRITE_KILL_PRIV (1 << 2)

/**
* Read flags
@@ -349,7 +321,6 @@ struct fuse_file_lock {
* FUSE_IOCTL_RETRY: retry with new iovecs
* FUSE_IOCTL_32BIT: 32bit ioctl
* FUSE_IOCTL_DIR: is a directory
- * FUSE_IOCTL_COMPAT_X32: x32 compat ioctl on 64bit machine (64bit time_t)
*
* FUSE_IOCTL_MAX_IOV: maximum of in_iovecs + out_iovecs
*/
@@ -358,7 +329,6 @@ struct fuse_file_lock {
#define FUSE_IOCTL_RETRY (1 << 2)
#define FUSE_IOCTL_32BIT (1 << 3)
#define FUSE_IOCTL_DIR (1 << 4)
-#define FUSE_IOCTL_COMPAT_X32 (1 << 5)

#define FUSE_IOCTL_MAX_IOV 256

@@ -369,13 +339,6 @@ struct fuse_file_lock {
*/
#define FUSE_POLL_SCHEDULE_NOTIFY (1 << 0)

-/**
- * Fsync flags
- *
- * FUSE_FSYNC_FDATASYNC: Sync data only, not metadata
- */
-#define FUSE_FSYNC_FDATASYNC (1 << 0)
-
enum fuse_opcode {
FUSE_LOOKUP = 1,
FUSE_FORGET = 2, /* no reply */
@@ -422,9 +385,11 @@ enum fuse_opcode {
FUSE_RENAME2 = 45,
FUSE_LSEEK = 46,
FUSE_COPY_FILE_RANGE = 47,
+ FUSE_SETUPMAPPING = 48,
+ FUSE_REMOVEMAPPING = 49,

/* CUSE specific operations */
- CUSE_INIT = 4096
+ CUSE_INIT = 4096,
};

enum fuse_notify_code {
@@ -434,7 +399,7 @@ enum fuse_notify_code {
FUSE_NOTIFY_STORE = 4,
FUSE_NOTIFY_RETRIEVE = 5,
FUSE_NOTIFY_DELETE = 6,
- FUSE_NOTIFY_CODE_MAX
+ FUSE_NOTIFY_CODE_MAX,
};

/* The read buffer is required to be at least 8k, but may be much larger */
@@ -651,9 +616,7 @@ struct fuse_init_out {
uint16_t congestion_threshold;
uint32_t max_write;
uint32_t time_gran;
- uint16_t max_pages;
- uint16_t padding;
- uint32_t unused[8];
+ uint32_t unused[9];
};

#define CUSE_INIT_INFO_MAX 4096
@@ -845,4 +808,35 @@ struct fuse_copy_file_range_in {
uint64_t flags;
};

+#define FUSE_SETUPMAPPING_ENTRIES 8
+#define FUSE_SETUPMAPPING_FLAG_WRITE (1ull << 0)
+struct fuse_setupmapping_in {
+ /* An already open handle */
+ uint64_t fh;
+ /* Offset into the file to start the mapping */
+ uint64_t foffset;
+ /* Length of mapping required */
+ uint64_t len;
+ /* Flags, FUSE_SETUPMAPPING_FLAG_* */
+ uint64_t flags;
+ /* memory offset in to dax window */
+ uint64_t moffset;
+};
+
+struct fuse_setupmapping_out {
+ /* Offsets into the cache of mappings */
+ uint64_t coffset[FUSE_SETUPMAPPING_ENTRIES];
+ /* Lengths of each mapping */
+ uint64_t len[FUSE_SETUPMAPPING_ENTRIES];
+};
+
+struct fuse_removemapping_in {
+ /* An already open handle */
+ uint64_t fh;
+ /* Offset into the dax to start the unmapping */
+ uint64_t moffset;
+ /* Length of mapping required */
+ uint64_t len;
+};
+
#endif /* _LINUX_FUSE_H */
--
2.26.1

Fotis Xenakis

Apr 20, 2020, 5:05:25 PM
to osv...@googlegroups.com, Fotis Xenakis
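This makes the driver query the device for shared memory region 0 and,
if present, record it as the DAX window (address and length, plus a lock
used later to serialize mappings), exposing it to the filesystem via
get_dax().
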
Signed-off-by: Fotis Xenakis <fo...@windowslive.com>
---
drivers/virtio-fs.cc | 12 ++++++++++++
drivers/virtio-fs.hh | 10 ++++++++++
2 files changed, 22 insertions(+)

diff --git a/drivers/virtio-fs.cc b/drivers/virtio-fs.cc
index d95f7740..b7363040 100644
--- a/drivers/virtio-fs.cc
+++ b/drivers/virtio-fs.cc
@@ -180,6 +180,18 @@ void fs::read_config()
virtio_conf_read(0, &_config, sizeof(_config));
debugf("virtio-fs: Detected device with tag: [%s] and num_queues: %d\n",
_config.tag, _config.num_queues);
+
+ // Query for DAX window
+ mmioaddr_t dax_addr;
+ u64 dax_len;
+ if (_dev.get_shm(0, dax_addr, dax_len)) {
+ _dax.addr = dax_addr;
+ _dax.len = dax_len;
+ debugf("virtio-fs: Detected DAX window with length %lld\n", dax_len);
+ } else {
+ _dax.addr = mmio_nullptr;
+ _dax.len = 0;
+ }
}

void fs::req_done()
diff --git a/drivers/virtio-fs.hh b/drivers/virtio-fs.hh
index 626bd906..d1c116de 100644
--- a/drivers/virtio-fs.hh
+++ b/drivers/virtio-fs.hh
@@ -28,6 +28,12 @@ public:
u32 num_queues;
} __attribute__((packed));

+ struct dax_window {
+ mmioaddr_t addr;
+ u64 len;
+ mutex lock;
+ };
+
explicit fs(virtio_device& dev);
virtual ~fs();

@@ -35,6 +41,9 @@ public:
void read_config();

int make_request(fuse_request*);
+ dax_window* get_dax() {
+ return (_dax.addr != mmio_nullptr) ? &_dax : nullptr;
+ }

void req_done();
int64_t size();
@@ -53,6 +62,7 @@ private:

std::string _driver_name;
fs_config _config;
+ dax_window _dax;

// maintains the virtio instance number for multiple drives
static int _instance;
--
2.26.1

Fotis Xenakis

Apr 20, 2020, 5:06:19 PM
to osv...@googlegroups.com, Fotis Xenakis
When the DAX window is available from the device, the filesystem prefers
to use it instead of the regular FUSE_READ request. If the direct read
fails, FUSE_READ is used as a fallback.

To use the DAX window, a part of the file is mapped into it with
FUSE_SETUPMAPPING, the contents are copied from it to the user buffers,
and the mapping is cleaned up with FUSE_REMOVEMAPPING. In this naive
implementation, the window holds a single mapping at a time, with no
caching or readahead.
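
In outline, the direct read path is (a simplified sketch; the locking,
alignment handling and error paths are in the patch itself):

    fuse_setupmapping_in setup {};
    setup.fh = file_handle;
    setup.foffset = align_down(offset, alignment); // host-page aligned
    setup.moffset = 0;                             // start of the DAX window
    setup.len = read_amt + (offset - setup.foffset);
    fuse_setupmapping_out out {};                  // unused, see note in the patch
    // FUSE_SETUPMAPPING: map part of the file into the DAX window
    fuse_req_send_and_receive_reply(strategy, FUSE_SETUPMAPPING, nodeid,
        &setup, sizeof(setup), &out, sizeof(out));

    // Copy straight out of the window into the user buffers
    uiomove(dax_addr + (offset - setup.foffset), read_amt, uio);

    fuse_removemapping_in unmap {};
    unmap.fh = setup.fh;
    unmap.moffset = setup.moffset;
    unmap.len = setup.len;
    // FUSE_REMOVEMAPPING: release the window for the next request
    fuse_req_send_and_receive_reply(strategy, FUSE_REMOVEMAPPING, nodeid,
        &unmap, sizeof(unmap), nullptr, 0);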

Signed-off-by: Fotis Xenakis <fo...@windowslive.com>
---
fs/virtiofs/virtiofs_vnops.cc | 167 +++++++++++++++++++++++++++++-----
1 file changed, 144 insertions(+), 23 deletions(-)

diff --git a/fs/virtiofs/virtiofs_vnops.cc b/fs/virtiofs/virtiofs_vnops.cc
index 7fbb2cd2..9551ff07 100644
--- a/fs/virtiofs/virtiofs_vnops.cc
+++ b/fs/virtiofs/virtiofs_vnops.cc
@@ -23,9 +23,11 @@
#include <sys/types.h>
#include <osv/device.h>
#include <osv/sched.hh>
+#include <osv/mmio.hh>

#include "virtiofs.hh"
#include "virtiofs_i.hh"
+#include "drivers/virtio-fs.hh"

static constexpr uint32_t OPEN_FLAGS = O_RDONLY;

@@ -183,14 +185,139 @@ static int virtiofs_readlink(struct vnode* vnode, struct uio* uio)
return uiomove(link_path.get(), strlen(link_path.get()), uio);
}

+// Read @read_amt bytes from @inode, using the DAX window.
+static int virtiofs_read_direct(virtiofs_inode& inode, u64 file_handle,
+ u64 read_amt, fuse_strategy& strategy, struct uio& uio)
+{
+ auto* drv = static_cast<virtio::fs*>(strategy.drv);
+ auto* dax = drv->get_dax();
+ // Enter the critical path: setup mapping -> read -> remove mapping
+ std::lock_guard<mutex> guard {dax->lock};
+
+ // Setup mapping
+ // NOTE: There are restrictions on the arguments to FUSE_SETUPMAPPING (in
+ // the future will be negotiated with FUSE_INIT, from the spec: "Alignment
+ // constraints for FUSE_SETUPMAPPING and FUSE_REMOVEMAPPING requests are
+ // communicated during FUSE_INIT negotiation"):
+ // - foffset: multiple of host's page size (passed to host mmap())
+ // - len: not larger than remaining file?
+ // - moffset: multiple of host's page size (passed to host mmap())
+ std::unique_ptr<fuse_setupmapping_in> in_args {
+ new (std::nothrow) fuse_setupmapping_in()};
+ if (!in_args) {
+ return ENOMEM;
+ }
+ in_args->fh = file_handle;
+ in_args->flags = 0;
+ uint64_t moffset = 0;
+ in_args->moffset = moffset;
+
+ // TODO: When implemented in virtiofsd, get alignment from FUSE_INIT
+ uint64_t alignment = 1 << 12;
+ auto foffset = align_down(static_cast<uint64_t>(uio.uio_offset), alignment);
+ in_args->foffset = foffset;
+
+ // The possible excess part of the file mapped due to alignment constraints
+ // NOTE: map_excess <= alignment
+ auto map_excess = uio.uio_offset - foffset;
+ if (moffset + map_excess >= dax->len) {
+ // No usable room in DAX window due to map_excess
+ return ENOBUFS;
+ }
+ // Actual read amount is read_amt, or what fits in the DAX window
+ auto read_amt_act = std::min<uint64_t>(read_amt,
+ dax->len - moffset - map_excess);
+ in_args->len = read_amt_act + map_excess;
+
+ // NOTE: This is not used, and seems like it will go away in the future (it
+ // is absent in the development branches of virtiofsd).
+ std::unique_ptr<fuse_setupmapping_out> out_args {
+ new (std::nothrow) fuse_setupmapping_out};
+ if (!out_args) {
+ return ENOMEM;
+ }
+
+ virtiofs_debug("inode %lld, setting up mapping (foffset=%lld, len=%lld, "
+ "moffset=%lld)\n", inode.nodeid, in_args->foffset,
+ in_args->len, in_args->moffset);
+ auto error = fuse_req_send_and_receive_reply(&strategy, FUSE_SETUPMAPPING,
+ inode.nodeid, in_args.get(), sizeof(*in_args), out_args.get(),
+ sizeof(*out_args));
+ if (error) {
+ kprintf("[virtiofs] inode %lld, mapping setup failed\n", inode.nodeid);
+ return error;
+ }
+
+ // Read from the DAX window
+ // NOTE: It shouldn't be necessary to use the mmio* interface (i.e. volatile
+ // accesses). From the spec: "Drivers map this shared memory region with
+ // writeback caching as if it were regular RAM."
+ // The location of the requested data in the DAX window
+ auto req_data = dax->addr + moffset + map_excess;
+ error = uiomove(const_cast<void*>(req_data), read_amt_act, &uio);
+ if (error) {
+ kprintf("[virtiofs] inode %lld, uiomove failed\n", inode.nodeid);
+ return error;
+ }
+
+ // Remove mapping
+ // NOTE: This is only necessary when FUSE_SETUPMAPPING fails. From the spec:
+ // "If the device runs out of resources the FUSE_SETUPMAPPING request fails
+ // until resources are available again following FUSE_REMOVEMAPPING."
+ std::unique_ptr<fuse_removemapping_in> iargs {
+ new (std::nothrow) fuse_removemapping_in()};
+ if (!iargs) {
+ return ENOMEM;
+ }
+ iargs->fh = in_args->fh;
+ iargs->moffset = in_args->moffset;
+ iargs->len = in_args->len;
+
+ virtiofs_debug("inode %lld, removing mapping (moffset=%lld, len=%lld)\n",
+ inode.nodeid, iargs->moffset, iargs->len);
+ error = fuse_req_send_and_receive_reply(&strategy, FUSE_REMOVEMAPPING,
+ inode.nodeid, iargs.get(), sizeof(*iargs), nullptr, 0);
+ if (error) {
+ kprintf("[virtiofs] inode %lld, mapping removal failed\n",
+ inode.nodeid);
+ return error;
+ }
+
+ return 0;
+}
+
+// Read @read_amt bytes from @inode, using the fallback FUSE_READ mechanism.
+static int virtiofs_read_fallback(virtiofs_inode& inode, u64 file_handle,
+ u32 read_amt, u32 flags, fuse_strategy& strategy, struct uio& uio)
+{
+ std::unique_ptr<fuse_read_in> in_args {new (std::nothrow) fuse_read_in()};
+ std::unique_ptr<u8[]> buf {new (std::nothrow) u8[read_amt]};
+ if (!in_args || !buf) {
+ return ENOMEM;
+ }
+ in_args->fh = file_handle;
+ in_args->offset = uio.uio_offset;
+ in_args->size = read_amt;
+ in_args->flags = flags;
+
+ virtiofs_debug("inode %lld, reading %lld bytes at offset %lld\n",
+ inode.nodeid, read_amt, uio.uio_offset);
+ auto error = fuse_req_send_and_receive_reply(&strategy, FUSE_READ,
+ inode.nodeid, in_args.get(), sizeof(*in_args), buf.get(), read_amt);
+ if (error) {
+ kprintf("[virtiofs] inode %lld, read failed\n", inode.nodeid);
+ return error;
+ }
+
+ return uiomove(buf.get(), read_amt, &uio);
+}
+
// TODO: Optimize it to reduce number of exits to host (each
// fuse_req_send_and_receive_reply()) by reading eagerly "ahead/around" just
// like ROFS does and caching it
static int virtiofs_read(struct vnode* vnode, struct file* fp, struct uio* uio,
int ioflag)
{
- auto* inode = static_cast<virtiofs_inode*>(vnode->v_data);
-
// Can't read directories
if (vnode->v_type == VDIR) {
return EISDIR;
@@ -212,32 +339,26 @@ static int virtiofs_read(struct vnode* vnode, struct file* fp, struct uio* uio,
return 0;
}

+ auto* inode = static_cast<virtiofs_inode*>(vnode->v_data);
+ auto* file_data = static_cast<virtiofs_file_data*>(fp->f_data);
+ auto* strategy = static_cast<fuse_strategy*>(vnode->v_mount->m_data);
+
// Total read amount is what they requested, or what is left
auto read_amt = std::min<uint64_t>(uio->uio_resid,
inode->attr.size - uio->uio_offset);
- std::unique_ptr<u8[]> buf {new (std::nothrow) u8[read_amt]};
- std::unique_ptr<fuse_read_in> in_args {new (std::nothrow) fuse_read_in()};
- if (!buf || !in_args) {
- return ENOMEM;
- }
- auto* f_data = static_cast<virtiofs_file_data*>(file_data(fp));
- in_args->fh = f_data->file_handle;
- in_args->offset = uio->uio_offset;
- in_args->size = read_amt;
- in_args->flags = ioflag;

- virtiofs_debug("inode %lld, reading %lld bytes at offset %lld\n",
- inode->nodeid, read_amt, uio->uio_offset);
+ auto* drv = static_cast<virtio::fs*>(strategy->drv);
+ if (drv->get_dax()) {
+ // Try to read from DAX
+ if (!virtiofs_read_direct(*inode, file_data->file_handle, read_amt,
+ *strategy, *uio)) {

- auto* strategy = static_cast<fuse_strategy*>(vnode->v_mount->m_data);
- auto error = fuse_req_send_and_receive_reply(strategy, FUSE_READ,
- inode->nodeid, in_args.get(), sizeof(*in_args), buf.get(), read_amt);
- if (error) {
- kprintf("[virtiofs] inode %lld, read failed\n", inode->nodeid);
- return error;
+ return 0;
+ }
}
-
- return uiomove(buf.get(), read_amt, uio);
+ // DAX unavailable or failed, use fallback
+ return virtiofs_read_fallback(*inode, file_data->file_handle, read_amt,
+ ioflag, *strategy, *uio);
}

static int virtiofs_readdir(struct vnode* vnode, struct file* fp,
@@ -307,7 +428,7 @@ struct vnops virtiofs_vnops = {
virtiofs_truncate, /* truncate - returns error when called */
virtiofs_link, /* link - returns error when called */
virtiofs_arc, /* arc */ //TODO: Implement to allow memory re-use when
- // mapping files, investigate using virtio-fs DAX
+ // mapping files
virtiofs_fallocate, /* fallocate - returns error when called */
virtiofs_readlink, /* read link */

Fotis Xenakis

Apr 20, 2020, 5:07:18 PM
to osv...@googlegroups.com, Fotis Xenakis
Since in virtio-fs the filesystem is very tightly coupled with the
driver, this tries to make the dependence of the former on the latter
explicit, as well as to simplify both.

This includes:
- The definition of fuse_request is moved from the fs to the driver,
since it is part of the interface the driver provides. It is also
enhanced with methods, somewhat promoting it to a "proper" class.
- fuse_strategy, as a layer of indirection to the driver, is removed and
the dependence on the driver is made explicit instead (see the call-site
comparison below).
- Last, virtio::fs::fs_req is removed and fuse_request is used in its
place, since it offers no value with fuse_request now defined in the
driver.
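
At the call sites this amounts to the following change (excerpted from
the virtiofs_vfsops.cc hunk below):

    // before
    strategy->make_request(strategy->drv, req.get());
    fuse_req_wait(req.get());
    // after
    drv->make_request(req.get());
    req->wait();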

Signed-off-by: Fotis Xenakis <fo...@windowslive.com>
---
drivers/virtio-fs.cc | 42 +++++++++++++---------------------
drivers/virtio-fs.hh | 27 +++++++++++++++-------
fs/virtiofs/virtiofs_i.hh | 24 ++-----------------
fs/virtiofs/virtiofs_vfsops.cc | 16 +++++++------
fs/virtiofs/virtiofs_vnops.cc | 37 ++++++++++++++----------------
5 files changed, 63 insertions(+), 83 deletions(-)

diff --git a/drivers/virtio-fs.cc b/drivers/virtio-fs.cc
index b7363040..af1246c1 100644
--- a/drivers/virtio-fs.cc
+++ b/drivers/virtio-fs.cc
@@ -28,25 +28,23 @@

using namespace memory;

-void fuse_req_wait(fuse_request* req)
-{
- WITH_LOCK(req->req_mutex) {
- req->req_wait.wait(req->req_mutex);
- }
-}
+using fuse_request = virtio::fs::fuse_request;

namespace virtio {

-static int fuse_make_request(void* driver, fuse_request* req)
+// Wait for the request to be marked as completed.
+void fs::fuse_request::wait()
{
- auto fs_driver = static_cast<fs*>(driver);
- return fs_driver->make_request(req);
+ WITH_LOCK(req_mutex) {
+ req_wait.wait(req_mutex);
+ }
}

-static void fuse_req_done(fuse_request* req)
+// Mark the request as completed.
+void fs::fuse_request::done()
{
- WITH_LOCK(req->req_mutex) {
- req->req_wait.wake_one(req->req_mutex);
+ WITH_LOCK(req_mutex) {
+ req_wait.wake_one(req_mutex);
}
}

@@ -87,7 +85,7 @@ static struct devops fs_devops {
struct driver fs_driver = {
"virtio_fs",
&fs_devops,
- sizeof(struct fuse_strategy),
+ sizeof(fs*),
};

bool fs::ack_irq()
@@ -161,10 +159,7 @@ fs::fs(virtio_device& virtio_dev)
dev_name += std::to_string(_disk_idx++);

struct device* dev = device_create(&fs_driver, dev_name.c_str(), D_BLK); // TODO Should it be really D_BLK?
- auto* strategy = static_cast<fuse_strategy*>(dev->private_data);
- strategy->drv = this;
- strategy->make_request = fuse_make_request;
-
+ dev->private_data = this;
debugf("virtio-fs: Add device instance %d as [%s]\n", _id,
dev_name.c_str());
}
@@ -201,13 +196,12 @@ void fs::req_done()
while (true) {
virtio_driver::wait_for_queue(queue, &vring::used_ring_not_empty);

- fs_req* req;
+ fuse_request* req;
u32 len;
- while ((req = static_cast<fs_req*>(queue->get_buf_elem(&len))) !=
+ while ((req = static_cast<fuse_request*>(queue->get_buf_elem(&len))) !=
nullptr) {

- fuse_req_done(req->fuse_req);
- delete req;
+ req->done();
queue->get_buf_finalize();
}

@@ -231,11 +225,7 @@ int fs::make_request(fuse_request* req)
fuse_req_enqueue_input(queue, req);
fuse_req_enqueue_output(queue, req);

- auto* fs_request = new (std::nothrow) fs_req(req);
- if (!fs_request) {
- return ENOMEM;
- }
- queue->add_buf_wait(fs_request);
+ queue->add_buf_wait(req);
queue->kick();

return 0;
diff --git a/drivers/virtio-fs.hh b/drivers/virtio-fs.hh
index d1c116de..f35fd710 100644
--- a/drivers/virtio-fs.hh
+++ b/drivers/virtio-fs.hh
@@ -12,7 +12,7 @@
#include <osv/waitqueue.hh>
#include "drivers/virtio.hh"
#include "drivers/virtio-device.hh"
-#include "fs/virtiofs/virtiofs_i.hh"
+#include "fs/virtiofs/fuse_kernel.h"

namespace virtio {

@@ -23,6 +23,24 @@ enum {

class fs : public virtio_driver {
public:
+ struct fuse_request {
+ struct fuse_in_header in_header;
+ struct fuse_out_header out_header;
+
+ void* input_args_data;
+ size_t input_args_size;
+
+ void* output_args_data;
+ size_t output_args_size;
+
+ void wait();
+ void done();
+
+ private:
+ mutex_t req_mutex;
+ waitqueue req_wait;
+ };
+
struct fs_config {
char tag[36];
u32 num_queues;
@@ -53,13 +71,6 @@ public:
static hw_driver* probe(hw_device* dev);

private:
- struct fs_req {
- fs_req(fuse_request* f) : fuse_req(f) {};
- ~fs_req() {};
-
- fuse_request* fuse_req;
- };
-
std::string _driver_name;
fs_config _config;
dax_window _dax;
diff --git a/fs/virtiofs/virtiofs_i.hh b/fs/virtiofs/virtiofs_i.hh
index 17fbcd36..76533d74 100644
--- a/fs/virtiofs/virtiofs_i.hh
+++ b/fs/virtiofs/virtiofs_i.hh
@@ -11,30 +11,10 @@
#include "fuse_kernel.h"
#include <osv/mutex.h>
#include <osv/waitqueue.hh>
+#include "drivers/virtio-fs.hh"

-struct fuse_request {
- struct fuse_in_header in_header;
- struct fuse_out_header out_header;
-
- void* input_args_data;
- size_t input_args_size;
-
- void* output_args_data;
- size_t output_args_size;
-
- mutex_t req_mutex;
- waitqueue req_wait;
-};
-
-struct fuse_strategy {
- void* drv;
- int (*make_request)(void*, fuse_request*);
-};
-
-int fuse_req_send_and_receive_reply(fuse_strategy* strategy, uint32_t opcode,
+int fuse_req_send_and_receive_reply(virtio::fs* drv, uint32_t opcode,
uint64_t nodeid, void* input_args_data, size_t input_args_size,
void* output_args_data, size_t output_args_size);

-void fuse_req_wait(fuse_request* req);
-
#endif
diff --git a/fs/virtiofs/virtiofs_vfsops.cc b/fs/virtiofs/virtiofs_vfsops.cc
index 968f93fc..ee5725e4 100644
--- a/fs/virtiofs/virtiofs_vfsops.cc
+++ b/fs/virtiofs/virtiofs_vfsops.cc
@@ -13,9 +13,11 @@
#include "virtiofs.hh"
#include "virtiofs_i.hh"

+using fuse_request = virtio::fs::fuse_request;
+
static std::atomic<uint64_t> fuse_unique_id(1);

-int fuse_req_send_and_receive_reply(fuse_strategy* strategy, uint32_t opcode,
+int fuse_req_send_and_receive_reply(virtio::fs* drv, uint32_t opcode,
uint64_t nodeid, void* input_args_data, size_t input_args_size,
void* output_args_data, size_t output_args_size)
{
@@ -35,9 +37,9 @@ int fuse_req_send_and_receive_reply(fuse_strategy* strategy, uint32_t opcode,
req->output_args_data = output_args_data;
req->output_args_size = output_args_size;

- assert(strategy->drv);
- strategy->make_request(strategy->drv, req.get());
- fuse_req_wait(req.get());
+ assert(drv);
+ drv->make_request(req.get());
+ req->wait();

int error = -req->out_header.error;

@@ -87,8 +89,8 @@ static int virtiofs_mount(struct mount* mp, const char* dev, int flags,
in_args->max_readahead = PAGE_SIZE;
in_args->flags = 0; // TODO: Verify that we need not set any flag

- auto* strategy = static_cast<fuse_strategy*>(device->private_data);
- error = fuse_req_send_and_receive_reply(strategy, FUSE_INIT, FUSE_ROOT_ID,
+ auto* drv = static_cast<virtio::fs*>(device->private_data);
+ error = fuse_req_send_and_receive_reply(drv, FUSE_INIT, FUSE_ROOT_ID,
in_args.get(), sizeof(*in_args), out_args.get(), sizeof(*out_args));
if (error) {
kprintf("[virtiofs] Failed to initialize fuse filesystem!\n");
@@ -108,7 +110,7 @@ static int virtiofs_mount(struct mount* mp, const char* dev, int flags,

virtiofs_set_vnode(mp->m_root->d_vnode, root_node);

- mp->m_data = strategy;
+ mp->m_data = drv;
mp->m_dev = device;

return 0;
diff --git a/fs/virtiofs/virtiofs_vnops.cc b/fs/virtiofs/virtiofs_vnops.cc
index 9551ff07..6779eb93 100644
--- a/fs/virtiofs/virtiofs_vnops.cc
+++ b/fs/virtiofs/virtiofs_vnops.cc
@@ -27,7 +27,6 @@

#include "virtiofs.hh"
#include "virtiofs_i.hh"
-#include "drivers/virtio-fs.hh"

static constexpr uint32_t OPEN_FLAGS = O_RDONLY;

@@ -59,8 +58,8 @@ static int virtiofs_lookup(struct vnode* vnode, char* name, struct vnode** vpp)
}
strcpy(in_args.get(), name);

- auto* strategy = static_cast<fuse_strategy*>(vnode->v_mount->m_data);
- auto error = fuse_req_send_and_receive_reply(strategy, FUSE_LOOKUP,
+ auto* drv = static_cast<virtio::fs*>(vnode->v_mount->m_data);
+ auto error = fuse_req_send_and_receive_reply(drv, FUSE_LOOKUP,
inode->nodeid, in_args.get(), in_args_len, out_args.get(),
sizeof(*out_args));
if (error) {
@@ -110,8 +109,8 @@ static int virtiofs_open(struct file* fp)
}
in_args->flags = OPEN_FLAGS;

- auto* strategy = static_cast<fuse_strategy*>(vnode->v_mount->m_data);
- auto error = fuse_req_send_and_receive_reply(strategy, FUSE_OPEN,
+ auto* drv = static_cast<virtio::fs*>(vnode->v_mount->m_data);
+ auto error = fuse_req_send_and_receive_reply(drv, FUSE_OPEN,
inode->nodeid, in_args.get(), sizeof(*in_args), out_args.get(),
sizeof(*out_args));
if (error) {
@@ -145,8 +144,8 @@ static int virtiofs_close(struct vnode* vnode, struct file* fp)
in_args->fh = f_data->file_handle;
in_args->flags = OPEN_FLAGS; // need to be same as in FUSE_OPEN

- auto* strategy = static_cast<fuse_strategy*>(vnode->v_mount->m_data);
- auto error = fuse_req_send_and_receive_reply(strategy, FUSE_RELEASE,
+ auto* drv = static_cast<virtio::fs*>(vnode->v_mount->m_data);
+ auto error = fuse_req_send_and_receive_reply(drv, FUSE_RELEASE,
inode->nodeid, in_args.get(), sizeof(*in_args), nullptr, 0);
if (error) {
kprintf("[virtiofs] inode %lld, close failed\n", inode->nodeid);
@@ -172,8 +171,8 @@ static int virtiofs_readlink(struct vnode* vnode, struct uio* uio)
return ENOMEM;
}

- auto* strategy = static_cast<fuse_strategy*>(vnode->v_mount->m_data);
- auto error = fuse_req_send_and_receive_reply(strategy, FUSE_READLINK,
+ auto* drv = static_cast<virtio::fs*>(vnode->v_mount->m_data);
+ auto error = fuse_req_send_and_receive_reply(drv, FUSE_READLINK,
inode->nodeid, nullptr, 0, link_path.get(), PATH_MAX);
if (error) {
kprintf("[virtiofs] inode %lld, readlink failed\n", inode->nodeid);
@@ -187,10 +186,9 @@ static int virtiofs_readlink(struct vnode* vnode, struct uio* uio)

// Read @read_amt bytes from @inode, using the DAX window.
static int virtiofs_read_direct(virtiofs_inode& inode, u64 file_handle,
- u64 read_amt, fuse_strategy& strategy, struct uio& uio)
+ u64 read_amt, virtio::fs& drv, struct uio& uio)
{
- auto* drv = static_cast<virtio::fs*>(strategy.drv);
- auto* dax = drv->get_dax();
+ auto* dax = drv.get_dax();
// Enter the critical path: setup mapping -> read -> remove mapping
std::lock_guard<mutex> guard {dax->lock};

@@ -240,7 +238,7 @@ static int virtiofs_read_direct(virtiofs_inode& inode, u64 file_handle,
virtiofs_debug("inode %lld, setting up mapping (foffset=%lld, len=%lld, "
"moffset=%lld)\n", inode.nodeid, in_args->foffset,
in_args->len, in_args->moffset);
- auto error = fuse_req_send_and_receive_reply(&strategy, FUSE_SETUPMAPPING,
+ auto error = fuse_req_send_and_receive_reply(&drv, FUSE_SETUPMAPPING,
inode.nodeid, in_args.get(), sizeof(*in_args), out_args.get(),
sizeof(*out_args));
if (error) {
@@ -275,7 +273,7 @@ static int virtiofs_read_direct(virtiofs_inode& inode, u64 file_handle,

virtiofs_debug("inode %lld, removing mapping (moffset=%lld, len=%lld)\n",
inode.nodeid, iargs->moffset, iargs->len);
- error = fuse_req_send_and_receive_reply(&strategy, FUSE_REMOVEMAPPING,
+ error = fuse_req_send_and_receive_reply(&drv, FUSE_REMOVEMAPPING,
inode.nodeid, iargs.get(), sizeof(*iargs), nullptr, 0);
if (error) {
kprintf("[virtiofs] inode %lld, mapping removal failed\n",
@@ -288,7 +286,7 @@ static int virtiofs_read_direct(virtiofs_inode& inode, u64 file_handle,

// Read @read_amt bytes from @inode, using the fallback FUSE_READ mechanism.
static int virtiofs_read_fallback(virtiofs_inode& inode, u64 file_handle,
- u32 read_amt, u32 flags, fuse_strategy& strategy, struct uio& uio)
+ u32 read_amt, u32 flags, virtio::fs& drv, struct uio& uio)
{
std::unique_ptr<fuse_read_in> in_args {new (std::nothrow) fuse_read_in()};
std::unique_ptr<u8[]> buf {new (std::nothrow) u8[read_amt]};
@@ -302,7 +300,7 @@ static int virtiofs_read_fallback(virtiofs_inode& inode, u64 file_handle,

virtiofs_debug("inode %lld, reading %lld bytes at offset %lld\n",
inode.nodeid, read_amt, uio.uio_offset);
- auto error = fuse_req_send_and_receive_reply(&strategy, FUSE_READ,
+ auto error = fuse_req_send_and_receive_reply(&drv, FUSE_READ,
inode.nodeid, in_args.get(), sizeof(*in_args), buf.get(), read_amt);
if (error) {
kprintf("[virtiofs] inode %lld, read failed\n", inode.nodeid);
@@ -341,24 +339,23 @@ static int virtiofs_read(struct vnode* vnode, struct file* fp, struct uio* uio,

auto* inode = static_cast<virtiofs_inode*>(vnode->v_data);
auto* file_data = static_cast<virtiofs_file_data*>(fp->f_data);
- auto* strategy = static_cast<fuse_strategy*>(vnode->v_mount->m_data);
+ auto* drv = static_cast<virtio::fs*>(vnode->v_mount->m_data);

// Total read amount is what they requested, or what is left
auto read_amt = std::min<uint64_t>(uio->uio_resid,
inode->attr.size - uio->uio_offset);

- auto* drv = static_cast<virtio::fs*>(strategy->drv);
if (drv->get_dax()) {
// Try to read from DAX
if (!virtiofs_read_direct(*inode, file_data->file_handle, read_amt,
- *strategy, *uio)) {
+ *drv, *uio)) {

return 0;
}
}
// DAX unavailable or failed, use fallback
return virtiofs_read_fallback(*inode, file_data->file_handle, read_amt,
- ioflag, *strategy, *uio);
+ ioflag, *drv, *uio);
}

static int virtiofs_readdir(struct vnode* vnode, struct file* fp,
--
2.26.1
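
To summarize the refactor for reviewers: the fuse_strategy indirection is gone, and vnode operations reach the driver directly through the mount data. Below is a minimal sketch of the resulting call shape. It is illustrative only: virtiofs_getattr_sketch is a hypothetical operation, but virtiofs_open / virtiofs_lookup in the diff above follow exactly this pattern.

#include <errno.h>
#include <memory>
#include <new>

// Hypothetical vnode op, showing the post-refactor request path
static int virtiofs_getattr_sketch(struct vnode* vnode)
{
    auto* inode = static_cast<virtiofs_inode*>(vnode->v_data);
    // The mount data now holds the driver itself, not a fuse_strategy shim
    auto* drv = static_cast<virtio::fs*>(vnode->v_mount->m_data);

    std::unique_ptr<fuse_getattr_in> in_args {
        new (std::nothrow) fuse_getattr_in()};
    std::unique_ptr<fuse_attr_out> out_args {
        new (std::nothrow) fuse_attr_out()};
    if (!in_args || !out_args) {
        return ENOMEM;
    }

    // Single entry point: enqueues the request on the device's request
    // virtqueue and blocks on req->wait() until the device posts the reply
    return fuse_req_send_and_receive_reply(drv, FUSE_GETATTR, inode->nodeid,
        in_args.get(), sizeof(*in_args), out_args.get(), sizeof(*out_args));
}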

Waldek Kozaczuk

unread,
Apr 21, 2020, 12:50:36 AM
to OSv Development
Fotis,

Great work!

Unfortunately, I will not have time to review these patches until next week, unless somebody else has time and wants to jump in.

Waldek

Commit Bot

unread,
Apr 29, 2020, 12:34:19 AM
to osv...@googlegroups.com, Fotis Xenakis
From: Fotis Xenakis <fo...@windowslive.com>
Committer: Waldemar Kozaczuk <jwkoz...@gmail.com>
Branch: master

virtio-fs: minor code improvements in driver

These include:
- Checking memory allocations
- Using static_cast instead of reinterpret_cast where possible
- Formatting and consistency

Signed-off-by: Fotis Xenakis <fo...@windowslive.com>
Message-Id: <VI1PR03MB43837C6759...@VI1PR03MB4383.eurprd03.prod.outlook.com>

---
diff --git a/drivers/virtio-fs.cc b/drivers/virtio-fs.cc
@@ -122,29 +120,32 @@ fs::fs(virtio_device& virtio_dev)
// Step 7 - generic init of virtqueues
probe_virt_queues();

- //register the single irq callback for the block
+ // register the single irq callback for the block
sched::thread* t = sched::thread::make([this] { this->req_done(); },
- sched::thread::attr().name("virtio-fs"));
+ sched::thread::attr().name("virtio-fs"));
t->start();
- auto queue = get_virt_queue(VQ_REQUEST);
+ auto* queue = get_virt_queue(VQ_REQUEST);

interrupt_factory int_factory;
- int_factory.register_msi_bindings = [queue, t](interrupt_manager &msi) {
- msi.easy_register( {{ VQ_REQUEST, [=] { queue->disable_interrupts(); }, t }});
+ int_factory.register_msi_bindings = [queue, t](interrupt_manager& msi) {
+ msi.easy_register({
+ {VQ_HIPRIO, nullptr, nullptr},
+ {VQ_REQUEST, [=] { queue->disable_interrupts(); }, t}
+ });
};

- int_factory.create_pci_interrupt = [this,t](pci::device &pci_dev) {
+ int_factory.create_pci_interrupt = [this, t](pci::device& pci_dev) {
return new pci_interrupt(
pci_dev,
[=] { return this->ack_irq(); },
[=] { t->wake(); });
--- a/drivers/virtio-fs.hh
+++ b/drivers/virtio-fs.hh
@@ -17,8 +17,8 @@
namespace virtio {

enum {
- VQ_HIPRIO,
- VQ_REQUEST
+ VQ_HIPRIO = 0,
+ VQ_REQUEST = 1
};

class fs : public virtio_driver {
@@ -34,26 +34,27 @@ public:
virtual std::string get_name() const { return _driver_name; }
void read_config();

- int make_request(struct fuse_request*);
+ int make_request(fuse_request*);

void req_done();
int64_t size();

Commit Bot

unread,
Apr 29, 2020, 12:34:21 AM
to osv...@googlegroups.com, Fotis Xenakis
From: Fotis Xenakis <fo...@windowslive.com>
Committer: Waldemar Kozaczuk <jwkoz...@gmail.com>
Branch: master

virtio-fs: minor code improvements in filesystem

These include:
- Checking memory allocations
- Using smart pointers where possible
- Using static_cast instead of reinterpret_cast or C-style cast where
possible
- Formatting and consistency

Signed-off-by: Fotis Xenakis <fo...@windowslive.com>
Message-Id: <VI1PR03MB438375635D...@VI1PR03MB4383.eurprd03.prod.outlook.com>

---
diff --git a/fs/virtiofs/virtiofs.hh b/fs/virtiofs/virtiofs.hh
--- a/fs/virtiofs/virtiofs.hh
+++ b/fs/virtiofs/virtiofs.hh
@@ -32,7 +32,7 @@ struct virtiofs_file_data {
uint64_t file_handle;
};

-void virtiofs_set_vnode(struct vnode *vnode, struct virtiofs_inode *inode);
+void virtiofs_set_vnode(struct vnode* vnode, struct virtiofs_inode* inode);

extern struct vfsops virtiofs_vfsops;
extern struct vnops virtiofs_vnops;
diff --git a/fs/virtiofs/virtiofs_i.hh b/fs/virtiofs/virtiofs_i.hh
--- a/fs/virtiofs/virtiofs_i.hh
+++ b/fs/virtiofs/virtiofs_i.hh
@@ -12,30 +12,29 @@
#include <osv/mutex.h>
#include <osv/waitqueue.hh>

-struct fuse_request
-{
+struct fuse_request {
struct fuse_in_header in_header;
struct fuse_out_header out_header;

- void *input_args_data;
+ void* input_args_data;
size_t input_args_size;

- void *output_args_data;
+ void* output_args_data;
size_t output_args_size;

mutex_t req_mutex;
waitqueue req_wait;
};

struct fuse_strategy {
- void *drv;
- int (*make_request)(void*, struct fuse_request*);
+ void* drv;
+ int (*make_request)(void*, fuse_request*);
};

-int fuse_req_send_and_receive_reply(fuse_strategy* strategy, uint32_t opcode, uint64_t nodeid,
- void *input_args_data, size_t input_args_size,
- void *output_args_data, size_t output_args_size);
+int fuse_req_send_and_receive_reply(fuse_strategy* strategy, uint32_t opcode,
+ uint64_t nodeid, void* input_args_data, size_t input_args_size,
+ void* output_args_data, size_t output_args_size);

-void fuse_req_wait(struct fuse_request* req);
+void fuse_req_wait(fuse_request* req);

#endif
diff --git a/fs/virtiofs/virtiofs_vfsops.cc b/fs/virtiofs/virtiofs_vfsops.cc

Commit Bot

unread,
Apr 29, 2020, 12:34:23 AM
to osv...@googlegroups.com, Fotis Xenakis
From: Fotis Xenakis <fo...@windowslive.com>
Committer: Waldemar Kozaczuk <jwkoz...@gmail.com>
Branch: master

virtio-fs: update fuse protocol header

Copy from virtiofsd @ 32006c66f2578af4121d7effaccae4aa4fa12e46. This
includes the definitions for FUSE_SETUPMAPPING and FUSE_REMOVEMAPPING.

Signed-off-by: Fotis Xenakis <fo...@windowslive.com>
Message-Id: <VI1PR03MB4383C4316C...@VI1PR03MB4383.eurprd03.prod.outlook.com>

---
diff --git a/fs/virtiofs/fuse_kernel.h b/fs/virtiofs/fuse_kernel.h

Commit Bot

unread,
Apr 29, 2020, 12:38:38 AM
to osv...@googlegroups.com, Fotis Xenakis
From: Fotis Xenakis <fo...@windowslive.com>
Committer: Waldemar Kozaczuk <jwkoz...@gmail.com>
Branch: master

virtio-fs: add driver support for the DAX window

Signed-off-by: Fotis Xenakis <fo...@windowslive.com>
Message-Id: <VI1PR03MB4383C53D7C...@VI1PR03MB4383.eurprd03.prod.outlook.com>

---
diff --git a/drivers/virtio-fs.cc b/drivers/virtio-fs.cc
--- a/drivers/virtio-fs.cc
+++ b/drivers/virtio-fs.cc
@@ -180,6 +180,18 @@ void fs::read_config()
virtio_conf_read(0, &_config, sizeof(_config));
debugf("virtio-fs: Detected device with tag: [%s] and num_queues: %d\n",
_config.tag, _config.num_queues);
+
+ // Query for DAX window
+ mmioaddr_t dax_addr;
+ u64 dax_len;
+ if (_dev.get_shm(0, dax_addr, dax_len)) {
+ _dax.addr = dax_addr;
+ _dax.len = dax_len;
+ debugf("virtio-fs: Detected DAX window with length %lld\n", dax_len);
+ } else {
+ _dax.addr = mmio_nullptr;
+ _dax.len = 0;
+ }
}

void fs::req_done()
diff --git a/drivers/virtio-fs.hh b/drivers/virtio-fs.hh
--- a/drivers/virtio-fs.hh
+++ b/drivers/virtio-fs.hh
@@ -28,13 +28,22 @@ public:
u32 num_queues;
} __attribute__((packed));

+ struct dax_window {
+ mmioaddr_t addr;
+ u64 len;
+ mutex lock;
+ };
+
explicit fs(virtio_device& dev);
virtual ~fs();

virtual std::string get_name() const { return _driver_name; }
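
Before the test reports that follow, a minimal sketch of how the window added in this commit is meant to be used by the read patch: one mapping at a time, set up and torn down around each read, under the window lock. This is illustrative only: it borrows the refactored request signature from the last patch in the series, the removemapping field layout follows the usage visible in the earlier diff, and the memcpy stands in for the real copy into the uio.

#include <cstring>   // memcpy
#include <mutex>     // std::lock_guard

// Sketch of the per-request DAX critical path:
// FUSE_SETUPMAPPING -> copy from the window -> FUSE_REMOVEMAPPING
static int dax_read_sketch(virtio::fs& drv, virtiofs_inode& inode,
    u64 file_handle, u64 foffset, u64 len, void* dst)
{
    auto* dax = drv.get_dax();
    // The window holds a single mapping at a time, so the whole path is
    // serialized on the window lock
    std::lock_guard<mutex> guard {dax->lock};

    // 1. Map [foffset, foffset + len) of the file at offset 0 of the window
    fuse_setupmapping_in setup_args {};
    setup_args.fh = file_handle;
    setup_args.foffset = foffset;
    setup_args.len = len;
    setup_args.moffset = 0;
    auto error = fuse_req_send_and_receive_reply(&drv, FUSE_SETUPMAPPING,
        inode.nodeid, &setup_args, sizeof(setup_args), nullptr, 0);
    if (error) {
        return error;
    }

    // 2. The file range is now visible in the shared memory region; copy it
    //    out (placeholder: the actual patch copies into the uio instead)
    memcpy(dst, (const void*)dax->addr, len);

    // 3. Remove the mapping so the window can be reused by the next request
    fuse_removemapping_in remove_args {};
    remove_args.moffset = 0;
    remove_args.len = len;
    return fuse_req_send_and_receive_reply(&drv, FUSE_REMOVEMAPPING,
        inode.nodeid, &remove_args, sizeof(remove_args), nullptr, 0);
}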

Waldek Kozaczuk

unread,
Apr 29, 2020, 12:48:02 PM
to OSv Development
I have applied this patch, but when I started testing your later patches that enable the DAX logic, I got error messages about a wrong protocol version:

OSv v0.54.0-179-g2f92fc91
4 CPUs detected
Firmware vendor: SeaBIOS
bsd: initializing - done
VFS: mounting ramfs at /
VFS: mounting devfs at /dev
net: initializing - done
vga: Add VGA device instance
eth0: ethernet address: 52:54:00:12:34:56
virtio-blk: Add blk device instances 0 as vblk0, devsize=6470656
random: virtio-rng registered as a source.
virtio-fs: Detected device with tag: [myfs] and num_queues: 1
virtio-fs: Detected DAX window with length 67108864
virtio-fs: Add device instance 0 as [virtiofs1]
random: intel drng, rdrand registered as a source.
random: <Software, Yarrow> initialized
VFS: unmounting /dev
VFS: mounting rofs at /rofs
VFS: mounting devfs at /dev
VFS: mounting procfs at /proc
VFS: mounting sysfs at /sys
VFS: mounting ramfs at /tmp
VFS: mounting virtiofs at /virtiofs
[virtiofs] Failed to initialize fuse filesystem!
failed to mount virtiofs, error = Protocol error
[I/43 dhcp]: Broadcasting DHCPDISCOVER message with xid: [1603537588]
[I/43 dhcp]: Waiting for IP...
[I/55 dhcp]: Received DHCPOFFER message from DHCP server: 192.168.122.1 regarding offerred IP address: 192.168.122.15
[I/55 dhcp]: Broadcasting DHCPREQUEST message with xid: [1603537588] to SELECT offered IP: 192.168.122.15
[I/55 dhcp]: Received DHCPACK message from DHCP server: 192.168.122.1 regarding offerred IP address: 192.168.122.15
[I/55 dhcp]: Server acknowledged IP 192.168.122.15 for interface eth0 with time to lease in seconds: 86400
eth0: 192.168.122.15
[I/55 dhcp]: Configuring eth0: ip 192.168.122.15 subnet mask 255.255.255.0 gateway 192.168.122.1 MTU 1500
Booted up in 140.48 ms
Cmdline: /virtiofs/hello
Failed to load object: /virtiofs/hello. Powering off.

# and from virtiofsd
[7426562093843] [ID: 00000008] INIT: 7.27
[7426562097664] [ID: 00000008] flags=0x00000000
[7426562100498] [ID: 00000008] max_readahead=0x00001000
[7426562104503] [ID: 00000008] fuse: unsupported protocol version: 7.27
[7426562119457] [ID: 00000008]    unique: 1, error: -71 (Protocol error), outsize: 16
[7426562125006] [ID: 00000008] virtio_send_msg: elem 0: with 2 in desc of length 80
[7426577096593] [ID: 00000001] virtio_loop: Got VU event

This happens both with stock QEMU 5.0 (just released a couple of days ago, which does not seem to have DAX support yet) and with the QEMU version from https://gitlab.com/virtio-fs/qemu/-/commits/virtio-dev (see the virtio-dev branch).

I had to bump the minor version to 31, and then it worked. Could you please investigate?

Waldek
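
For anyone else hitting this: the minor version OSv advertises in FUSE_INIT comes from fs/virtiofs/fuse_kernel.h, and this virtiofsd rejects 7.27. Below is a minimal sketch of the handshake with the updated header, condensed from virtiofs_mount() in the diffs above (illustrative; "device" is the virtiofs device backing the mount, as in the mount code).

// FUSE_INIT handshake at mount time. FUSE_KERNEL_VERSION is 7 and, after
// the header update in this series, FUSE_KERNEL_MINOR_VERSION is 31.
std::unique_ptr<fuse_init_in> in_args {new (std::nothrow) fuse_init_in()};
std::unique_ptr<fuse_init_out> out_args {new (std::nothrow) fuse_init_out()};
if (!in_args || !out_args) {
    return ENOMEM;
}
in_args->major = FUSE_KERNEL_VERSION;        // 7
in_args->minor = FUSE_KERNEL_MINOR_VERSION;  // 31 with the updated header
in_args->max_readahead = PAGE_SIZE;
in_args->flags = 0;

auto* drv = static_cast<virtio::fs*>(device->private_data);
int error = fuse_req_send_and_receive_reply(drv, FUSE_INIT, FUSE_ROOT_ID,
    in_args.get(), sizeof(*in_args), out_args.get(), sizeof(*out_args));
// On success, out_args->minor carries the minor version the daemon settled
// on -- see the "Initialized fuse filesystem with version major: 7,
// minor: 31" boot line in the successful run in the next message.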

Waldek Kozaczuk

unread,
Apr 29, 2020, 1:30:25 PM
to OSv Development
Let me start with the results of my testing this patch. First I tried with stock QEMU 5.0, just to verify that the non-DAX logic still works. In general it does; however, I encountered the protocol mismatch error I reported in my other email.

Stock QEMU still does not have DAX support, so I used the one from https://gitlab.com/virtio-fs/qemu/-/commits/virtio-dev (shall I be using this?) to test the DAX logic.

When I ran a simple example I got this:

# In another window
./build/virtiofsd --socket-path=/tmp/vhostqemu -o source=~/projects/osv/apps/native-example -o cache=always -d

# Main
/home/wkozaczuk/projects/qemu/build/x86_64-softmmu/qemu-system-x86_64 \
-m 4G \
-smp 4 \
-vnc :1 \
-gdb tcp::1234,server,nowait \
-kernel /home/wkozaczuk/projects/osv/build/last/kernel.elf \
-append "$1" \
-device virtio-blk-pci,id=blk0,drive=hd0,scsi=off \
-drive file=/home/wkozaczuk/projects/osv/build/last/usr.img,if=none,id=hd0,cache=none,aio=native \
-netdev user,id=un0,net=192.168.122.0/24,host=192.168.122.1 \
-device virtio-net-pci,netdev=un0 \
-device virtio-rng-pci \
-enable-kvm \
-cpu host,+x2apic \
-chardev stdio,mux=on,id=stdio,signal=off \
-mon chardev=stdio,mode=readline \
-device isa-serial,chardev=stdio \
-chardev socket,id=char0,path=/tmp/vhostqemu \
-device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=myfs,cache-size=64M \
-object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on -numa node,memdev=mem #do we need that line?

OSv v0.54.0-179-g2f92fc91
4 CPUs detected
Firmware vendor: SeaBIOS
bsd: initializing - done
VFS: mounting ramfs at /
VFS: mounting devfs at /dev
net: initializing - done
vga: Add VGA device instance
eth0: ethernet address: 52:54:00:12:34:56
virtio-blk: Add blk device instances 0 as vblk0, devsize=6470656
random: virtio-rng registered as a source.
virtio-fs: Detected device with tag: [myfs] and num_queues: 1
virtio-fs: Detected DAX window with length 67108864
virtio-fs: Add device instance 0 as [virtiofs1]
random: intel drng, rdrand registered as a source.
random: <Software, Yarrow> initialized
VFS: unmounting /dev
VFS: mounting rofs at /rofs
VFS: mounting devfs at /dev
VFS: mounting procfs at /proc
VFS: mounting sysfs at /sys
VFS: mounting ramfs at /tmp
VFS: mounting virtiofs at /virtiofs
[virtiofs] Initialized fuse filesystem with version major: 7, minor: 31
[I/43 dhcp]: Broadcasting DHCPDISCOVER message with xid: [1369429892]
[I/43 dhcp]: Waiting for IP...
[I/55 dhcp]: Received DHCPOFFER message from DHCP server: 192.168.122.1 regarding offerred IP address: 192.168.122.15
[I/55 dhcp]: Broadcasting DHCPREQUEST message with xid: [1369429892] to SELECT offered IP: 192.168.122.15
[I/55 dhcp]: Received DHCPACK message from DHCP server: 192.168.122.1 regarding offerred IP address: 192.168.122.15
[I/55 dhcp]: Server acknowledged IP 192.168.122.15 for interface eth0 with time to lease in seconds: 86400
eth0: 192.168.122.15
[I/55 dhcp]: Configuring eth0: ip 192.168.122.15 subnet mask 255.255.255.0 gateway 192.168.122.1 MTU 1500
Booted up in 145.94 ms
Cmdline: /virtiofs/hello
[virtiofs] inode 1, lookup found inode 2 for hello!
[virtiofs] inode 1, lookup found inode 2 for hello!
[virtiofs] inode 2, opened
[virtiofs] inode 2, setting up mapping (foffset=0, len=64, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+40 from 0
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 64 bytes at offset 0
[virtiofs] inode 2, setting up mapping (foffset=0, len=120, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+78 from 0
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 56 bytes at offset 64
[virtiofs] inode 2, setting up mapping (foffset=0, len=176, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+b0 from 0
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 56 bytes at offset 120
[virtiofs] inode 2, setting up mapping (foffset=0, len=232, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+e8 from 0
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 56 bytes at offset 176
[virtiofs] inode 2, setting up mapping (foffset=0, len=288, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+120 from 0
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 56 bytes at offset 232
[virtiofs] inode 2, setting up mapping (foffset=0, len=344, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+158 from 0
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 56 bytes at offset 288
[virtiofs] inode 2, setting up mapping (foffset=0, len=400, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+190 from 0
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 56 bytes at offset 344
[virtiofs] inode 2, setting up mapping (foffset=0, len=456, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+1c8 from 0
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 56 bytes at offset 400
[virtiofs] inode 2, setting up mapping (foffset=0, len=512, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+200 from 0
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 56 bytes at offset 456
[virtiofs] inode 2, setting up mapping (foffset=0, len=568, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+238 from 0
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 56 bytes at offset 512
[virtiofs] inode 2, setting up mapping (foffset=0, len=624, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+270 from 0
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 56 bytes at offset 568
[virtiofs] inode 2, setting up mapping (foffset=0, len=680, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+2a8 from 0
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 56 bytes at offset 624
[virtiofs] inode 2, setting up mapping (foffset=0, len=736, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+2e0 from 0
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 56 bytes at offset 680
[virtiofs] inode 2, setting up mapping (foffset=0, len=792, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+318 from 0
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 56 bytes at offset 736
[virtiofs] inode 2, setting up mapping (foffset=12288, len=4408, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+1138 from 3000
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 1984 bytes at offset 14712
[virtiofs] inode 2, setting up mapping (foffset=12288, len=4408, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+1138 from 3000
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 1984 bytes at offset 14712
[virtiofs] inode 2, setting up mapping (foffset=12288, len=2421, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+975 from 3000
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 282 bytes at offset 14427
[virtiofs] inode 2, setting up mapping (foffset=12288, len=4408, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+1138 from 3000
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 1984 bytes at offset 14712
[virtiofs] inode 2, setting up mapping (foffset=12288, len=4408, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+1138 from 3000
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 1984 bytes at offset 14712
[virtiofs] inode 2, setting up mapping (foffset=12288, len=4408, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+1138 from 3000
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 1984 bytes at offset 14712
[virtiofs] inode 2, setting up mapping (foffset=12288, len=4096, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+1000 from 3000
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 4096 bytes at offset 12288
[virtiofs] inode 2, setting up mapping (foffset=0, len=4096, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+1000 from 0
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 4096 bytes at offset 0
[virtiofs] inode 2, setting up mapping (foffset=8192, len=4096, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+1000 from 2000
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 4096 bytes at offset 8192
[virtiofs] inode 2, setting up mapping (foffset=12288, len=4408, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+1138 from 3000
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 1984 bytes at offset 14712
[virtiofs] inode 2, setting up mapping (foffset=4096, len=4096, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+1000 from 1000
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 4096 bytes at offset 4096
[virtiofs] inode 2, setting up mapping (foffset=8192, len=4096, moffset=0)
vhost_user_fs_slave_map: map failed err 19 [0] 0+1000 from 2000
[virtiofs] inode 2, mapping setup failed
[virtiofs] inode 2, reading 4096 bytes at offset 8192
Hello from C code
random: device unblocked.
[virtiofs] inode 2, closed
[I/0 dhcp]: Unicasting DHCPRELEASE message with xid: [2037604332] from client: 192.168.122.15 to server: 192.168.122.1
VFS: unmounting /dev
VFS: unmounting /proc
VFS: unmounting /
ROFS: spent 0.88 ms reading from disk
ROFS: read 31 512-byte blocks from disk
ROFS: allocated 28 512-byte blocks of cache memory
ROFS: hit ratio is 90.91%
Powering off.

# --- from virtiofs daemon
[9219297364154] [ID: 00028096] virtio_session_mount: Waiting for vhost-user socket connection...
[9249162235652] [ID: 00028096] virtio_session_mount: Received vhost-user socket connection
[9249164340260] [ID: 00000001] virtio_loop: Entry
[9249164372505] [ID: 00000001] virtio_loop: Waiting for VU event
[9249184550001] [ID: 00000001] virtio_loop: Got VU event
[9249184589992] [ID: 00000001] virtio_loop: Waiting for VU event
[9249184611833] [ID: 00000001] virtio_loop: Got VU event
[9249184625473] [ID: 00000001] virtio_loop: Waiting for VU event
[9249184641865] [ID: 00000001] virtio_loop: Got VU event
[9249184651170] [ID: 00000001] virtio_loop: Waiting for VU event
[9249184655179] [ID: 00000001] virtio_loop: Got VU event
[9249184664978] [ID: 00000001] virtio_loop: Waiting for VU event
[9249184689289] [ID: 00000001] virtio_loop: Got VU event
[9249184700075] [ID: 00000001] virtio_loop: Waiting for VU event
[9249184711125] [ID: 00000001] virtio_loop: Got VU event
[9249184716746] [ID: 00000001] virtio_loop: Waiting for VU event
[9249184720836] [ID: 00000001] virtio_loop: Got VU event
[9249184731893] [ID: 00000001] virtio_loop: Waiting for VU event
[9249184745853] [ID: 00000001] virtio_loop: Got VU event
[9249184752940] [ID: 00000001] virtio_loop: Waiting for VU event
[9249184758110] [ID: 00000001] virtio_loop: Got VU event
[9249184764199] [ID: 00000001] virtio_loop: Waiting for VU event
[9249364236204] [ID: 00000001] virtio_loop: Got VU event
[9249364265160] [ID: 00000001] virtio_loop: Waiting for VU event
[9249364273732] [ID: 00000001] virtio_loop: Got VU event
[9249364285075] [ID: 00000001] virtio_loop: Waiting for VU event
[9249364290786] [ID: 00000001] virtio_loop: Got VU event
[9249364297956] [ID: 00000001] virtio_loop: Waiting for VU event
[9249364303844] [ID: 00000001] virtio_loop: Got VU event
[9249364328100] [ID: 00000001] virtio_loop: Waiting for VU event
[9249364343703] [ID: 00000001] virtio_loop: Got VU event
[9249364353594] [ID: 00000001] virtio_loop: Waiting for VU event
[9249364360048] [ID: 00000001] virtio_loop: Got VU event
[9249364367820] [ID: 00000001] virtio_loop: Waiting for VU event
[9249364373924] [ID: 00000001] virtio_loop: Got VU event
[9249364391839] [ID: 00000001] virtio_loop: Waiting for VU event
[9249364398187] [ID: 00000001] virtio_loop: Got VU event
[9249364407842] [ID: 00000001] fv_queue_set_started: qidx=0 started=1
[9249364464882] [ID: 00000001] virtio_loop: Waiting for VU event
[9249364474399] [ID: 00000001] virtio_loop: Got VU event
[9249364481756] [ID: 00000001] virtio_loop: Waiting for VU event
[9249364485904] [ID: 00000001] virtio_loop: Got VU event
[9249364492800] [ID: 00000001] virtio_loop: Waiting for VU event
[9249364497196] [ID: 00000001] virtio_loop: Got VU event
[9249364508482] [ID: 00000001] virtio_loop: Waiting for VU event
[9249364515407] [ID: 00000001] virtio_loop: Got VU event
[9249364524257] [ID: 00000001] fv_queue_set_started: qidx=1 started=1
[9249364573934] [ID: 00000001] virtio_loop: Waiting for VU event
[9249364585011] [ID: 00000001] virtio_loop: Got VU event
[9249364600006] [ID: 00000001] virtio_loop: Waiting for VU event
[9249364625104] [ID: 00000001] virtio_loop: Got VU event
[9249364654277] [ID: 00000001] virtio_loop: Waiting for VU event
[9249364688094] [ID: 00000001] virtio_loop: Got VU event
[9249364702845] [ID: 00000001] virtio_loop: Waiting for VU event
[9249364710061] [ID: 00000001] virtio_loop: Got VU event
[9249364723919] [ID: 00000001] virtio_loop: Waiting for VU event
[9249366067416] [ID: 00000005] fv_queue_thread: Start for queue 1 kick_fd 12
[9249366082934] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249366077326] [ID: 00000003] fv_queue_thread: Start for queue 0 kick_fd 9
[9249366090195] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249366093393] [ID: 00000003] fv_queue_thread: Waiting for Queue 0 event
[9249366101372] [ID: 00000003] fv_queue_thread: Got queue event on Queue 0
[9249366102360] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 0 out: 0
[9249366110558] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249366112383] [ID: 00000003] fv_queue_thread: Queue 0 gave evalue: 1 available: in: 0 out: 0
[9249366118974] [ID: 00000003] fv_queue_thread: Waiting for Queue 0 event
[9249374892326] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249374904012] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 80 out: 56
[9249374934131] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249374952478] [ID: 00000009] fv_queue_worker: elem 0: with 2 out desc of length 56 bad_in_num=0 bad_out_num=0
[9249374973101] [ID: 00000009] unique: 1, opcode: INIT (26), nodeid: 1, insize: 56, pid: 0
[9249374978513] [ID: 00000009] INIT: 7.31
[9249374981936] [ID: 00000009] flags=0x00000000
[9249374986298] [ID: 00000009] max_readahead=0x00001000
[9249374990203] [ID: 00000009]    INIT: 7.31
[9249374993468] [ID: 00000009]    flags=0x00000020
[9249374996481] [ID: 00000009]    max_readahead=0x00001000
[9249374999609] [ID: 00000009]    max_write=0x00020000
[9249375002781] [ID: 00000009]    max_background=0
[9249375005895] [ID: 00000009]    congestion_threshold=0
[9249375008949] [ID: 00000009]    time_gran=1
[9249375011950] [ID: 00000009]    map_alignment=0
[9249375015234] [ID: 00000009]    unique: 1, success, outsize: 80
[9249375018746] [ID: 00000009] virtio_send_msg: elem 0: with 2 in desc of length 80
[9249388628910] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249388645266] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 46
[9249388654201] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249388676022] [ID: 00000013] fv_queue_worker: elem 0: with 2 out desc of length 46 bad_in_num=0 bad_out_num=0
[9249388698563] [ID: 00000013] unique: 2, opcode: LOOKUP (1), nodeid: 1, insize: 46, pid: 0
[9249388704802] [ID: 00000013] lo_lookup(parent=1, name=hello)
[9249388742711] [ID: 00000013]   1/hello -> 2 (version_table[0]=0)
[9249388747683] [ID: 00000013]    unique: 2, success, outsize: 144
[9249388751520] [ID: 00000013] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249389594247] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249389602640] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 46
[9249389612928] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249389643953] [ID: 00000122] fv_queue_worker: elem 0: with 2 out desc of length 46 bad_in_num=0 bad_out_num=0
[9249389658227] [ID: 00000122] unique: 3, opcode: LOOKUP (1), nodeid: 1, insize: 46, pid: 0
[9249389663078] [ID: 00000122] lo_lookup(parent=1, name=hello)
[9249389676746] [ID: 00000122]   1/hello -> 2 (version_table[0]=0)
[9249389680957] [ID: 00000122]    unique: 3, success, outsize: 144
[9249389684394] [ID: 00000122] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249390464470] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249390470787] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 32 out: 48
[9249390480181] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249390497618] [ID: 00000011] fv_queue_worker: elem 0: with 2 out desc of length 48 bad_in_num=0 bad_out_num=0
[9249390511576] [ID: 00000011] unique: 4, opcode: OPEN (14), nodeid: 2, insize: 48, pid: 0
[9249390517684] [ID: 00000011] lo_open(ino=2, flags=0)
[9249390539984] [ID: 00000011]    unique: 4, success, outsize: 32
[9249390544506] [ID: 00000011] virtio_send_msg: elem 0: with 2 in desc of length 32
[9249391978294] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249391984602] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249391993855] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249392005767] [ID: 00000015] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249392017220] [ID: 00000015] unique: 5, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249392021582] [ID: 00000015] lo_setupmapping(ino=2, fi=0x0x7fc47fffec60, foffset=0, len=64, moffset=0, flags=0)
[9249392105560] [ID: 00000015] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249392121208] [ID: 00000015]    unique: 5, error: -22 (Invalid argument), outsize: 16
[9249392125024] [ID: 00000015] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249393460800] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249393471742] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 80 out: 80
[9249393483093] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249393500057] [ID: 00000129] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249393512842] [ID: 00000129] unique: 6, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249393518827] [ID: 00000129] lo_read(ino=2, size=64, off=0)
[9249393523326] [ID: 00000129] virtio_send_data_iov: count=1 len=64 iov_len=16
[9249393527829] [ID: 00000129] virtio_send_data_iov: elem 0: with 2 in desc of length 80
[9249393532868] [ID: 00000129] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=64
[9249393540554] [ID: 00000129] virtio_send_data_iov: preadv ret=64 len=64
[9249394566540] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249394574279] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249394586046] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249394599505] [ID: 00000017] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249394612392] [ID: 00000017] unique: 7, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249394617078] [ID: 00000017] lo_setupmapping(ino=2, fi=0x0x7fc47effcc60, foffset=0, len=120, moffset=0, flags=0)
[9249394670847] [ID: 00000017] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249394677377] [ID: 00000017]    unique: 7, error: -22 (Invalid argument), outsize: 16
[9249394681457] [ID: 00000017] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249396052993] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249396060605] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 72 out: 80
[9249396070277] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249396079960] [ID: 00000112] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249396090280] [ID: 00000112] unique: 8, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249396093654] [ID: 00000112] lo_read(ino=2, size=56, off=64)
[9249396096462] [ID: 00000112] virtio_send_data_iov: count=1 len=56 iov_len=16
[9249396099045] [ID: 00000112] virtio_send_data_iov: elem 0: with 2 in desc of length 72
[9249396101787] [ID: 00000112] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=56
[9249396108136] [ID: 00000112] virtio_send_data_iov: preadv ret=56 len=56
[9249397164852] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249397173531] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249397182093] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249397201306] [ID: 00000019] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249397214127] [ID: 00000019] unique: 9, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249397217527] [ID: 00000019] lo_setupmapping(ino=2, fi=0x0x7fc47dffac60, foffset=0, len=176, moffset=0, flags=0)
[9249397435430] [ID: 00000019] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249397441524] [ID: 00000019]    unique: 9, error: -22 (Invalid argument), outsize: 16
[9249397454078] [ID: 00000019] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249398867198] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249398875435] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 72 out: 80
[9249398883776] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249398902017] [ID: 00000127] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249398914633] [ID: 00000127] unique: 10, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249398919140] [ID: 00000127] lo_read(ino=2, size=56, off=120)
[9249398922546] [ID: 00000127] virtio_send_data_iov: count=1 len=56 iov_len=16
[9249398925731] [ID: 00000127] virtio_send_data_iov: elem 0: with 2 in desc of length 72
[9249398928997] [ID: 00000127] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=56
[9249398934407] [ID: 00000127] virtio_send_data_iov: preadv ret=56 len=56
[9249399907784] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249399916384] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249399925085] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249399936672] [ID: 00000125] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249399948784] [ID: 00000125] unique: 11, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249399952702] [ID: 00000125] lo_setupmapping(ino=2, fi=0x0x7fc42a7a3c60, foffset=0, len=232, moffset=0, flags=0)
[9249400057859] [ID: 00000125] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249400063163] [ID: 00000125]    unique: 11, error: -22 (Invalid argument), outsize: 16
[9249400066644] [ID: 00000125] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249401401872] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249401413855] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 72 out: 80
[9249401429460] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249401448982] [ID: 00000021] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249401463655] [ID: 00000021] unique: 12, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249401471468] [ID: 00000021] lo_read(ino=2, size=56, off=176)
[9249401476605] [ID: 00000021] virtio_send_data_iov: count=1 len=56 iov_len=16
[9249401481259] [ID: 00000021] virtio_send_data_iov: elem 0: with 2 in desc of length 72
[9249401485284] [ID: 00000021] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=56
[9249401492772] [ID: 00000021] virtio_send_data_iov: preadv ret=56 len=56
[9249402674616] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249402685068] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249402693532] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249402706746] [ID: 00000124] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249402719429] [ID: 00000124] unique: 13, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249402723771] [ID: 00000124] lo_setupmapping(ino=2, fi=0x0x7fc42b7a5c60, foffset=0, len=288, moffset=0, flags=0)
[9249402786159] [ID: 00000124] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249402792708] [ID: 00000124]    unique: 13, error: -22 (Invalid argument), outsize: 16
[9249402796575] [ID: 00000124] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249404278683] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249404287324] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 72 out: 80
[9249404295093] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249404303042] [ID: 00000025] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249404312989] [ID: 00000025] unique: 14, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249404318560] [ID: 00000025] lo_read(ino=2, size=56, off=232)
[9249404322486] [ID: 00000025] virtio_send_data_iov: count=1 len=56 iov_len=16
[9249404326525] [ID: 00000025] virtio_send_data_iov: elem 0: with 2 in desc of length 72
[9249404329419] [ID: 00000025] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=56
[9249404334827] [ID: 00000025] virtio_send_data_iov: preadv ret=56 len=56
[9249405432252] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249405438333] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249405447219] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249405454613] [ID: 00000131] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249405472306] [ID: 00000131] unique: 15, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249405476323] [ID: 00000131] lo_setupmapping(ino=2, fi=0x0x7fc427f9ec60, foffset=0, len=344, moffset=0, flags=0)
[9249405525957] [ID: 00000131] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249405535271] [ID: 00000131]    unique: 15, error: -22 (Invalid argument), outsize: 16
[9249405539101] [ID: 00000131] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249406879761] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249406887945] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 72 out: 80
[9249406895289] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249406914015] [ID: 00000023] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249406930268] [ID: 00000023] unique: 16, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249406938332] [ID: 00000023] lo_read(ino=2, size=56, off=288)
[9249406943668] [ID: 00000023] virtio_send_data_iov: count=1 len=56 iov_len=16
[9249406948359] [ID: 00000023] virtio_send_data_iov: elem 0: with 2 in desc of length 72
[9249406953179] [ID: 00000023] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=56
[9249406960758] [ID: 00000023] virtio_send_data_iov: preadv ret=56 len=56
[9249407984132] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249407988447] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249407994289] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249408009393] [ID: 00000027] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249408030468] [ID: 00000027] unique: 17, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249408035255] [ID: 00000027] lo_setupmapping(ino=2, fi=0x0x7fc474ff8c60, foffset=0, len=400, moffset=0, flags=0)
[9249408080642] [ID: 00000027] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249408085935] [ID: 00000027]    unique: 17, error: -22 (Invalid argument), outsize: 16
[9249408089673] [ID: 00000027] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249409434730] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249409443594] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 72 out: 80
[9249409451642] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249409460710] [ID: 00000029] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249409470596] [ID: 00000029] unique: 18, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249409475258] [ID: 00000029] lo_read(ino=2, size=56, off=344)
[9249409478934] [ID: 00000029] virtio_send_data_iov: count=1 len=56 iov_len=16
[9249409483191] [ID: 00000029] virtio_send_data_iov: elem 0: with 2 in desc of length 72
[9249409486073] [ID: 00000029] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=56
[9249409491600] [ID: 00000029] virtio_send_data_iov: preadv ret=56 len=56
[9249410515108] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249410520635] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249410526005] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249410542743] [ID: 00000030] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249410552967] [ID: 00000030] unique: 19, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249410558956] [ID: 00000030] lo_setupmapping(ino=2, fi=0x0x7fc45e7fbc60, foffset=0, len=456, moffset=0, flags=0)
[9249410604244] [ID: 00000030] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249410610355] [ID: 00000030]    unique: 19, error: -22 (Invalid argument), outsize: 16
[9249410614587] [ID: 00000030] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249411963400] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249411973593] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 72 out: 80
[9249411981221] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249412001349] [ID: 00000032] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249412031262] [ID: 00000032] unique: 20, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249412037867] [ID: 00000032] lo_read(ino=2, size=56, off=400)
[9249412051745] [ID: 00000032] virtio_send_data_iov: count=1 len=56 iov_len=16
[9249412058841] [ID: 00000032] virtio_send_data_iov: elem 0: with 2 in desc of length 72
[9249412065563] [ID: 00000032] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=56
[9249412073262] [ID: 00000032] virtio_send_data_iov: preadv ret=56 len=56
[9249413213652] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249413223919] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249413231981] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249413251947] [ID: 00000034] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249413264355] [ID: 00000034] unique: 21, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249413268787] [ID: 00000034] lo_setupmapping(ino=2, fi=0x0x7fc45cff8c60, foffset=0, len=512, moffset=0, flags=0)
[9249413341678] [ID: 00000034] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249413354571] [ID: 00000034]    unique: 21, error: -22 (Invalid argument), outsize: 16
[9249413361446] [ID: 00000034] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249414705309] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249414714816] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 72 out: 80
[9249414722988] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249414740195] [ID: 00000036] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249414758729] [ID: 00000036] unique: 22, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249414765835] [ID: 00000036] lo_read(ino=2, size=56, off=456)
[9249414771786] [ID: 00000036] virtio_send_data_iov: count=1 len=56 iov_len=16
[9249414777754] [ID: 00000036] virtio_send_data_iov: elem 0: with 2 in desc of length 72
[9249414782104] [ID: 00000036] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=56
[9249414797471] [ID: 00000036] virtio_send_data_iov: preadv ret=56 len=56
[9249415831266] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249415838956] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249415846458] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249415856280] [ID: 00000038] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249415870991] [ID: 00000038] unique: 23, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249415875928] [ID: 00000038] lo_setupmapping(ino=2, fi=0x0x7fc4567fbc60, foffset=0, len=568, moffset=0, flags=0)
[9249415933471] [ID: 00000038] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249415943171] [ID: 00000038]    unique: 23, error: -22 (Invalid argument), outsize: 16
[9249415947035] [ID: 00000038] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249417573765] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249417582689] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 72 out: 80
[9249417594057] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249417608220] [ID: 00000040] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249417621683] [ID: 00000040] unique: 24, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249417626295] [ID: 00000040] lo_read(ino=2, size=56, off=512)
[9249417630112] [ID: 00000040] virtio_send_data_iov: count=1 len=56 iov_len=16
[9249417633690] [ID: 00000040] virtio_send_data_iov: elem 0: with 2 in desc of length 72
[9249417637063] [ID: 00000040] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=56
[9249417642654] [ID: 00000040] virtio_send_data_iov: preadv ret=56 len=56
[9249418678590] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249418689608] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249418701287] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249418713968] [ID: 00000044] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249418726583] [ID: 00000044] unique: 25, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249418731564] [ID: 00000044] lo_setupmapping(ino=2, fi=0x0x7fc4537f5c60, foffset=0, len=624, moffset=0, flags=0)
[9249418785917] [ID: 00000044] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249418791436] [ID: 00000044]    unique: 25, error: -22 (Invalid argument), outsize: 16
[9249418797053] [ID: 00000044] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249420150574] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249420161560] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 72 out: 80
[9249420178264] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249420195968] [ID: 00000048] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249420208740] [ID: 00000048] unique: 26, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249420214504] [ID: 00000048] lo_read(ino=2, size=56, off=568)
[9249420218730] [ID: 00000048] virtio_send_data_iov: count=1 len=56 iov_len=16
[9249420222815] [ID: 00000048] virtio_send_data_iov: elem 0: with 2 in desc of length 72
[9249420227093] [ID: 00000048] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=56
[9249420233726] [ID: 00000048] virtio_send_data_iov: preadv ret=56 len=56
[9249421278721] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249421289524] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249421301503] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249421311683] [ID: 00000050] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249421324004] [ID: 00000050] unique: 27, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249421329441] [ID: 00000050] lo_setupmapping(ino=2, fi=0x0x7fc4507efc60, foffset=0, len=680, moffset=0, flags=0)
[9249421393017] [ID: 00000050] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249421398311] [ID: 00000050]    unique: 27, error: -22 (Invalid argument), outsize: 16
[9249421402197] [ID: 00000050] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249422866367] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249422876294] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 72 out: 80
[9249422887417] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249422898493] [ID: 00000054] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249422914125] [ID: 00000054] unique: 28, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249422918423] [ID: 00000054] lo_read(ino=2, size=56, off=624)
[9249422921976] [ID: 00000054] virtio_send_data_iov: count=1 len=56 iov_len=16
[9249422925300] [ID: 00000054] virtio_send_data_iov: elem 0: with 2 in desc of length 72
[9249422928661] [ID: 00000054] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=56
[9249422934098] [ID: 00000054] virtio_send_data_iov: preadv ret=56 len=56
[9249424035222] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249424045973] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249424058539] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249424067805] [ID: 00000056] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249424084667] [ID: 00000056] unique: 29, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249424088981] [ID: 00000056] lo_setupmapping(ino=2, fi=0x0x7fc44d7e9c60, foffset=0, len=736, moffset=0, flags=0)
[9249424143981] [ID: 00000056] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249424149690] [ID: 00000056]    unique: 29, error: -22 (Invalid argument), outsize: 16
[9249424153340] [ID: 00000056] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249425589058] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249425599584] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 72 out: 80
[9249425610643] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249425621279] [ID: 00000058] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249425634936] [ID: 00000058] unique: 30, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249425639627] [ID: 00000058] lo_read(ino=2, size=56, off=680)
[9249425644148] [ID: 00000058] virtio_send_data_iov: count=1 len=56 iov_len=16
[9249425648562] [ID: 00000058] virtio_send_data_iov: elem 0: with 2 in desc of length 72
[9249425654030] [ID: 00000058] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=56
[9249425660813] [ID: 00000058] virtio_send_data_iov: preadv ret=56 len=56
[9249426739268] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249426750015] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249426761654] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249426771475] [ID: 00000060] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249426783902] [ID: 00000060] unique: 31, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249426788195] [ID: 00000060] lo_setupmapping(ino=2, fi=0x0x7fc44b7e5c60, foffset=0, len=792, moffset=0, flags=0)
[9249426844482] [ID: 00000060] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249426849814] [ID: 00000060]    unique: 31, error: -22 (Invalid argument), outsize: 16
[9249426853495] [ID: 00000060] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249428214855] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249428226334] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 72 out: 80
[9249428237262] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249428249163] [ID: 00000062] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249428259449] [ID: 00000062] unique: 32, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249428265865] [ID: 00000062] lo_read(ino=2, size=56, off=736)
[9249428271070] [ID: 00000062] virtio_send_data_iov: count=1 len=56 iov_len=16
[9249428275771] [ID: 00000062] virtio_send_data_iov: elem 0: with 2 in desc of length 72
[9249428279814] [ID: 00000062] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=56
[9249428284585] [ID: 00000062] virtio_send_data_iov: preadv ret=56 len=56
[9249429529022] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249429539506] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249429550845] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249429575903] [ID: 00000064] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249429589969] [ID: 00000064] unique: 33, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249429594656] [ID: 00000064] lo_setupmapping(ino=2, fi=0x0x7fc4497e1c60, foffset=12288, len=4408, moffset=0, flags=0)
[9249429658623] [ID: 00000064] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249429668079] [ID: 00000064]    unique: 33, error: -22 (Invalid argument), outsize: 16
[9249429672100] [ID: 00000064] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249431249435] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249431258222] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 2000 out: 80
[9249431267803] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249431296620] [ID: 00000052] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249431313480] [ID: 00000052] unique: 34, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249431318126] [ID: 00000052] lo_read(ino=2, size=1984, off=14712)
[9249431321739] [ID: 00000052] virtio_send_data_iov: count=1 len=1984 iov_len=16
[9249431325952] [ID: 00000052] virtio_send_data_iov: elem 0: with 2 in desc of length 2000
[9249431329537] [ID: 00000052] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=1984
[9249431340910] [ID: 00000052] virtio_send_data_iov: preadv ret=1984 len=1984
[9249432551419] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249432562638] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249432574271] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249432595255] [ID: 00000046] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249432610977] [ID: 00000046] unique: 35, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249432615592] [ID: 00000046] lo_setupmapping(ino=2, fi=0x0x7fc4527f3c60, foffset=12288, len=4408, moffset=0, flags=0)
[9249432673379] [ID: 00000046] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249432679633] [ID: 00000046]    unique: 35, error: -22 (Invalid argument), outsize: 16
[9249432683485] [ID: 00000046] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249434119655] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249434129616] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 2000 out: 80
[9249434141294] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249434153916] [ID: 00000042] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249434165310] [ID: 00000042] unique: 36, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249434172689] [ID: 00000042] lo_read(ino=2, size=1984, off=14712)
[9249434176581] [ID: 00000042] virtio_send_data_iov: count=1 len=1984 iov_len=16
[9249434181037] [ID: 00000042] virtio_send_data_iov: elem 0: with 2 in desc of length 2000
[9249434185161] [ID: 00000042] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=1984
[9249434194370] [ID: 00000042] virtio_send_data_iov: preadv ret=1984 len=1984
[9249435315449] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249435326335] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249435338034] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249435359443] [ID: 00000066] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249435374762] [ID: 00000066] unique: 37, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249435379411] [ID: 00000066] lo_setupmapping(ino=2, fi=0x0x7fc4487dfc60, foffset=12288, len=2421, moffset=0, flags=0)
[9249435442972] [ID: 00000066] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249435452267] [ID: 00000066]    unique: 37, error: -22 (Invalid argument), outsize: 16
[9249435456387] [ID: 00000066] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249436950347] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249436959327] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 298 out: 80
[9249436969095] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249436975029] [ID: 00000068] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249436985792] [ID: 00000068] unique: 38, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249436989910] [ID: 00000068] lo_read(ino=2, size=282, off=14427)
[9249436993328] [ID: 00000068] virtio_send_data_iov: count=1 len=282 iov_len=16
[9249436996596] [ID: 00000068] virtio_send_data_iov: elem 0: with 2 in desc of length 298
[9249437000026] [ID: 00000068] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=282
[9249437005232] [ID: 00000068] virtio_send_data_iov: preadv ret=282 len=282
[9249438253081] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249438263713] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249438274935] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249438302486] [ID: 00000070] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249438322250] [ID: 00000070] unique: 39, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249438336469] [ID: 00000070] lo_setupmapping(ino=2, fi=0x0x7fc4467dbc60, foffset=12288, len=4408, moffset=0, flags=0)
[9249438433404] [ID: 00000070] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249438442916] [ID: 00000070]    unique: 39, error: -22 (Invalid argument), outsize: 16
[9249438446996] [ID: 00000070] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249440042716] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249440053584] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 2000 out: 80
[9249440069937] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249440097006] [ID: 00000072] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249440111277] [ID: 00000072] unique: 40, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249440116005] [ID: 00000072] lo_read(ino=2, size=1984, off=14712)
[9249440119497] [ID: 00000072] virtio_send_data_iov: count=1 len=1984 iov_len=16
[9249440122957] [ID: 00000072] virtio_send_data_iov: elem 0: with 2 in desc of length 2000
[9249440126403] [ID: 00000072] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=1984
[9249440135489] [ID: 00000072] virtio_send_data_iov: preadv ret=1984 len=1984
[9249441368116] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249441377282] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249441387982] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249441410827] [ID: 00000074] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249441426509] [ID: 00000074] unique: 41, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249441431336] [ID: 00000074] lo_setupmapping(ino=2, fi=0x0x7fc4447d7c60, foffset=12288, len=4408, moffset=0, flags=0)
[9249441497022] [ID: 00000074] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249441506401] [ID: 00000074]    unique: 41, error: -22 (Invalid argument), outsize: 16
[9249441510520] [ID: 00000074] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249443039583] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249443050748] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 2000 out: 80
[9249443061514] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249443089122] [ID: 00000076] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249443103541] [ID: 00000076] unique: 42, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249443108060] [ID: 00000076] lo_read(ino=2, size=1984, off=14712)
[9249443111666] [ID: 00000076] virtio_send_data_iov: count=1 len=1984 iov_len=16
[9249443115119] [ID: 00000076] virtio_send_data_iov: elem 0: with 2 in desc of length 2000
[9249443118535] [ID: 00000076] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=1984
[9249443127035] [ID: 00000076] virtio_send_data_iov: preadv ret=1984 len=1984
[9249444352667] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249444363829] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249444376477] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249444412707] [ID: 00000078] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249444428008] [ID: 00000078] unique: 43, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249444433925] [ID: 00000078] lo_setupmapping(ino=2, fi=0x0x7fc4427d3c60, foffset=12288, len=4408, moffset=0, flags=0)
[9249444498288] [ID: 00000078] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249444507095] [ID: 00000078]    unique: 43, error: -22 (Invalid argument), outsize: 16
[9249444515650] [ID: 00000078] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249446080932] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249446091801] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 2000 out: 80
[9249446100935] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249446111754] [ID: 00000080] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249446129973] [ID: 00000080] unique: 44, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249446134963] [ID: 00000080] lo_read(ino=2, size=1984, off=14712)
[9249446138976] [ID: 00000080] virtio_send_data_iov: count=1 len=1984 iov_len=16
[9249446142661] [ID: 00000080] virtio_send_data_iov: elem 0: with 2 in desc of length 2000
[9249446147622] [ID: 00000080] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=1984
[9249446159016] [ID: 00000080] virtio_send_data_iov: preadv ret=1984 len=1984
[9249447273184] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249447283049] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249447291590] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249447310978] [ID: 00000082] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249447323258] [ID: 00000082] unique: 45, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249447327784] [ID: 00000082] lo_setupmapping(ino=2, fi=0x0x7fc4407cfc60, foffset=12288, len=4096, moffset=0, flags=0)
[9249447381378] [ID: 00000082] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249447392159] [ID: 00000082]    unique: 45, error: -22 (Invalid argument), outsize: 16
[9249447396725] [ID: 00000082] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249448796554] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249448805336] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 4112 out: 80
[9249448813514] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249448822045] [ID: 00000084] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249448836780] [ID: 00000084] unique: 46, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249448841323] [ID: 00000084] lo_read(ino=2, size=4096, off=12288)
[9249448844927] [ID: 00000084] virtio_send_data_iov: count=1 len=4096 iov_len=16
[9249448848428] [ID: 00000084] virtio_send_data_iov: elem 0: with 2 in desc of length 4112
[9249448858468] [ID: 00000084] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=4096
[9249448868128] [ID: 00000084] virtio_send_data_iov: preadv ret=4096 len=4096
[9249449923263] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249449933732] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249449944261] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249449961189] [ID: 00000086] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249449976655] [ID: 00000086] unique: 47, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249449982981] [ID: 00000086] lo_setupmapping(ino=2, fi=0x0x7fc43e7cbc60, foffset=0, len=4096, moffset=0, flags=0)
[9249450038443] [ID: 00000086] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249450046486] [ID: 00000086]    unique: 47, error: -22 (Invalid argument), outsize: 16
[9249450051979] [ID: 00000086] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249451474863] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249451483807] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 4112 out: 80
[9249451491831] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249451500567] [ID: 00000088] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249451516851] [ID: 00000088] unique: 48, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249451521376] [ID: 00000088] lo_read(ino=2, size=4096, off=0)
[9249451525084] [ID: 00000088] virtio_send_data_iov: count=1 len=4096 iov_len=16
[9249451528705] [ID: 00000088] virtio_send_data_iov: elem 0: with 2 in desc of length 4112
[9249451532117] [ID: 00000088] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=4096
[9249451540885] [ID: 00000088] virtio_send_data_iov: preadv ret=4096 len=4096
[9249452715908] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249452722975] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249452730618] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249452750621] [ID: 00000090] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249452763645] [ID: 00000090] unique: 49, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249452768157] [ID: 00000090] lo_setupmapping(ino=2, fi=0x0x7fc43c7c7c60, foffset=8192, len=4096, moffset=0, flags=0)
[9249452816901] [ID: 00000090] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249452822179] [ID: 00000090]    unique: 49, error: -22 (Invalid argument), outsize: 16
[9249452825877] [ID: 00000090] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249454464358] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249454475182] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 4112 out: 80
[9249454483938] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249454503112] [ID: 00000092] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249454515255] [ID: 00000092] unique: 50, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249454521381] [ID: 00000092] lo_read(ino=2, size=4096, off=8192)
[9249454525079] [ID: 00000092] virtio_send_data_iov: count=1 len=4096 iov_len=16
[9249454528483] [ID: 00000092] virtio_send_data_iov: elem 0: with 2 in desc of length 4112
[9249454531986] [ID: 00000092] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=4096
[9249454541787] [ID: 00000092] virtio_send_data_iov: preadv ret=4096 len=4096
[9249455803590] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249455817426] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249455826370] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249455837353] [ID: 00000094] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249455850582] [ID: 00000094] unique: 51, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249455855519] [ID: 00000094] lo_setupmapping(ino=2, fi=0x0x7fc43a7c3c60, foffset=12288, len=4408, moffset=0, flags=0)
[9249455921460] [ID: 00000094] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249455926962] [ID: 00000094]    unique: 51, error: -22 (Invalid argument), outsize: 16
[9249455930802] [ID: 00000094] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249457394900] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249457404861] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 2000 out: 80
[9249457421060] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249457433366] [ID: 00000096] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249457445217] [ID: 00000096] unique: 52, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249457449691] [ID: 00000096] lo_read(ino=2, size=1984, off=14712)
[9249457453334] [ID: 00000096] virtio_send_data_iov: count=1 len=1984 iov_len=16
[9249457457469] [ID: 00000096] virtio_send_data_iov: elem 0: with 2 in desc of length 2000
[9249457460878] [ID: 00000096] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=1984
[9249457469330] [ID: 00000096] virtio_send_data_iov: preadv ret=1984 len=1984
[9249458792504] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249458799560] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249458807725] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249458813861] [ID: 00000098] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249458826518] [ID: 00000098] unique: 53, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249458832431] [ID: 00000098] lo_setupmapping(ino=2, fi=0x0x7fc4387bfc60, foffset=4096, len=4096, moffset=0, flags=0)
[9249458880896] [ID: 00000098] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249458890404] [ID: 00000098]    unique: 53, error: -22 (Invalid argument), outsize: 16
[9249458894649] [ID: 00000098] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249460424844] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249460435299] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 4112 out: 80
[9249460443724] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249460456693] [ID: 00000100] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249460468963] [ID: 00000100] unique: 54, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249460473347] [ID: 00000100] lo_read(ino=2, size=4096, off=4096)
[9249460476905] [ID: 00000100] virtio_send_data_iov: count=1 len=4096 iov_len=16
[9249460480236] [ID: 00000100] virtio_send_data_iov: elem 0: with 2 in desc of length 4112
[9249460483594] [ID: 00000100] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=4096
[9249460493186] [ID: 00000100] virtio_send_data_iov: preadv ret=4096 len=4096
[9249461699884] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249461708219] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 144 out: 80
[9249461716420] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249461742161] [ID: 00000102] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249461755778] [ID: 00000102] unique: 55, opcode: SETUPMAPPING (48), nodeid: 2, insize: 80, pid: 0
[9249461760432] [ID: 00000102] lo_setupmapping(ino=2, fi=0x0x7fc4367bbc60, foffset=8192, len=4096, moffset=0, flags=0)
[9249461807420] [ID: 00000102] lo_setupmapping: map over virtio failed (ino=2fd=0 moffset=0x0)
[9249461814102] [ID: 00000102]    unique: 55, error: -22 (Invalid argument), outsize: 16
[9249461817867] [ID: 00000102] virtio_send_msg: elem 0: with 2 in desc of length 144
[9249463334013] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249463342264] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 4112 out: 80
[9249463360372] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249463368682] [ID: 00000104] fv_queue_worker: elem 0: with 2 out desc of length 80 bad_in_num=0 bad_out_num=0
[9249463382180] [ID: 00000104] unique: 56, opcode: READ (15), nodeid: 2, insize: 80, pid: 0
[9249463387414] [ID: 00000104] lo_read(ino=2, size=4096, off=8192)
[9249463391120] [ID: 00000104] virtio_send_data_iov: count=1 len=4096 iov_len=16
[9249463395261] [ID: 00000104] virtio_send_data_iov: elem 0: with 2 in desc of length 4112
[9249463398108] [ID: 00000104] virtio_send_data_iov: after skip skip_size=0 in_sg_cpy_count=1 in_sg_left=4096
[9249463406239] [ID: 00000104] virtio_send_data_iov: preadv ret=4096 len=4096
[9249465829335] [ID: 00000005] fv_queue_thread: Got queue event on Queue 1
[9249465842040] [ID: 00000005] fv_queue_thread: Queue 1 gave evalue: 1 available: in: 16 out: 64
[9249465853777] [ID: 00000005] fv_queue_thread: Waiting for Queue 1 event
[9249465865103] [ID: 00000106] fv_queue_worker: elem 0: with 2 out desc of length 64 bad_in_num=0 bad_out_num=0
[9249465883682] [ID: 00000106] unique: 57, opcode: RELEASE (18), nodeid: 2, insize: 64, pid: 0
[9249465891096] [ID: 00000106]    unique: 57, success, outsize: 16
[9249465894793] [ID: 00000106] virtio_send_msg: elem 0: with 1 in desc of length 16
[9249473431762] [ID: 00000001] virtio_loop: Got VU event
[9249473446407] [ID: 00000001] fv_queue_set_started: qidx=0 started=0
[9249473473138] [ID: 00000003] fv_queue_thread: kill event on queue 0 - quitting
[9249473800677] [ID: 00000001] fv_remove_watch: TODO! fd=9
[9249473826110] [ID: 00000001] virtio_loop: Waiting for VU event
[9249473871901] [ID: 00000001] virtio_loop: Got VU event
[9249473891700] [ID: 00000001] fv_queue_set_started: qidx=1 started=0
[9249473916239] [ID: 00000005] fv_queue_thread: kill event on queue 1 - quitting
[9249474240476] [ID: 00000001] fv_remove_watch: TODO! fd=12
[9249474262549] [ID: 00000001] virtio_loop: Waiting for VU event
[9249474280765] [ID: 00000001] virtio_loop: Got VU event
[9249474300740] [ID: 00000001] virtio_loop: Waiting for VU event
[9249474313854] [ID: 00000001] virtio_loop: Got VU event
[9249474334494] [ID: 00000001] virtio_loop: Waiting for VU event
[9249475656888] [ID: 00000001] virtio_loop: Unexpected poll revents 11
[9249475668892] [ID: 00000001] virtio_loop: Exit
[9249572770324] [ID: 00000001] fv_panic: libvhost-user: Error while recvmsg: Connection reset by peer

As you can see, the DAX mapping requests failed, but the fallback logic to a regular FUSE_READ worked fine.

BTW, is there a way to make the virtiofsd daemon not terminate after every single OSv run?

I am using Ubuntu 19.10.

Waldek
As I understand it, this is a temporary solution until we integrate the DAX logic with the page cache, right? Eventually, pages mapped with FUSE_SETUPMAPPING should stay in the page cache until the file gets unmapped? Right now we copy data from the DAX window, but eventually we would like not to, which is the whole point of DAX, right?
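In other words, my understanding of the current read path is roughly the following (a sketch only, not the actual patch code: fuse_call() and dax_window_base() are made-up helper names, and the exact request layouts depend on the negotiated FUSE protocol version):

#include <cstring>       // memcpy
#include "fuse_kernel.h" // fuse_setupmapping_in etc. (per negotiated version)

// Rough sketch of the current single-mapping-per-request DAX read.
int dax_read_sketch(uint64_t nodeid, uint64_t fh, void* buf, size_t len,
                    off_t offset)
{
    // 1. Map [offset, offset + len) of the file at window offset 0
    //    (a single mapping at a time, always at the window start).
    fuse_setupmapping_in map_in = {};
    map_in.fh = fh;
    map_in.foffset = offset;
    map_in.len = len;
    map_in.moffset = 0;
    int err = fuse_call(FUSE_SETUPMAPPING, nodeid, &map_in, sizeof(map_in));
    if (err) {
        return err; // the caller falls back to a regular FUSE_READ
    }

    // 2. The window is device memory mapped into the guest, so the data
    //    is simply copied out of it into the user buffer.
    memcpy(buf, dax_window_base(), len);

    // 3. Tear the mapping down so the next request can reuse the window.
    fuse_removemapping_in unmap_in = {}; // layout per protocol version
    return fuse_call(FUSE_REMOVEMAPPING, nodeid, &unmap_in, sizeof(unmap_in));
}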

Not right now, but going forward, as we design the integration with the page cache, we should think of a way to have a simple read-ahead cache in virtio-fs, just like ROFS has, so we can optimize reading even when DAX is not enabled. In other words, eventually the page cache should either point to pages from the DAX window (if DAX is on) or to pages in a local cache where we would keep data read using regular FUSE_READ. Ideally, we should refactor the read-ahead/around cache in ROFS to make it more generic and usable with virtiofs. To make that more concrete, such a generic cache could expose an interface along these lines (purely hypothetical, nothing like this exists yet):
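#include <cstdint>
#include <sys/types.h> // off_t

// Hypothetical interface for a generic read-ahead page cache shared by
// ROFS and virtiofs; all names here are illustrative only.
struct file_page_cache {
    // Return a page for (nodeid, offset): either a pointer into the DAX
    // window (DAX on) or into a local page filled by a regular FUSE_READ.
    virtual void* get_page(uint64_t nodeid, off_t offset) = 0;
    // Release the page so it (or its window range) can be recycled.
    virtual void put_page(uint64_t nodeid, off_t offset) = 0;
    virtual ~file_page_cache() = default;
};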

But all that is the future.

Waldek Kozaczuk

unread,
Apr 29, 2020, 2:21:13 PM4/29/20
to OSv Development
I think your patch looks good and I like your simplifications.

Couple of things to make sure we have covered all bases. 

1) Are we sure none of these changes break any thread-safety? 
2) Are we certain we do not need to use "alloc_phys_contiguous_aligned" in some places to make sure the host sees contiguous physical memory? Currently, we use new in all virtiofs-related code, which uses regular malloc behind the scenes.
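For reference, switching an allocation over would look roughly like the sketch below, assuming OSv's memory::alloc_phys_contiguous_aligned / memory::free_phys_contiguous_aligned pair; whether the virtio-fs buffers actually need this is exactly the open question:

#include <osv/mempool.hh>
#include <cstddef>

// Sketch: allocate a request buffer guaranteed to be physically
// contiguous and aligned, instead of relying on plain operator new.
void* alloc_request_buffer(size_t size)
{
    return memory::alloc_phys_contiguous_aligned(size,
                                                 alignof(std::max_align_t));
}

void free_request_buffer(void* buf)
{
    memory::free_phys_contiguous_aligned(buf);
}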


On Monday, April 20, 2020 at 5:07:18 PM UTC-4, Fotis Xenakis wrote:
Since in virtio-fs the filesystem is very tightly coupled with the
driver, this tries to make clear the dependence of the first on the
second, as well as simplify.
Agree. 

This includes:
- The definition of fuse_request is moved from the fs to the driver,
  since it is part of the interface it provides. Also, it is enhanced
  with methods, somewhat promoting it to a "proper" class.
I like this. 
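Just to make sure I read the direction right, something roughly like this (a sketch only, not the patch itself; the member names are assumptions)?

#include <osv/mutex.h>   // mutex, WITH_LOCK
#include <osv/condvar.h> // condvar
#include "fuse_kernel.h" // fuse_in_header / fuse_out_header

// Sketch of fuse_request as a "proper" class owned by the driver.
struct fuse_request {
    fuse_in_header in_header;   // filled in by the fs layer per operation
    fuse_out_header out_header; // filled in on completion

    mutex req_mutex;
    condvar req_wait;

    // Block the caller until the device completes the request.
    void wait() {
        WITH_LOCK(req_mutex) {
            req_wait.wait(req_mutex);
        }
    }

    // Wake the waiter; called from the driver's completion path.
    void done() {
        WITH_LOCK(req_mutex) {
            req_wait.wake_one(req_mutex);
        }
    }
};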

Fotis Xenakis

unread,
Apr 30, 2020, 6:19:14 PM4/30/20
to OSv Development
Indeed, QEMU 5.0 does not support DAX, and the virtiofsd in QEMU 5.0 won't accept any minor version other than 7.31, as far as I can see, thus the mount fails.
Both on the QEMU and the Linux side, DAX is not close to upstreaming yet. Although it seems no longer marked as "experimental", I think it's still under development (not verified with the devs) and that's the source of some instability.

To summarize:
  • Upstream QEMU 5.0 includes stable virtio-fs support, with the basic feature set. It negotiates FUSE 7.31 (latest in upstream Linux).
  • Downstream virtio-fs QEMU currently contains:
    • The default (thus recommended in the docs) virtio-fs branch. This negotiates FUSE 7.27 and supports DAX. This is the one I have based my patches upon, because it is the most stable with DAX support.
    • The development branches, virtio-dev and virtio-fs-dev (don't know what distinguishes them TBH). They both negotiate FUSE 7.31 and support DAX (with changed protocol details). These iterate quickly, so I haven't used them.
I hadn't anticipated this hard constraint upstream, which poses a problem, since I guess we want to be compatible with it.
My plan is to reach out to the virtio-fs devs, asking for the status of DAX in the dev branches. If they deem it stabilized, I will probably try to go with those, offering upstream compatibility and DAX.
Otherwise, we could have a hybrid approach, compatible with upstream for the stable features, but following the more stale "virtio-fs" downstream branch as far as DAX is concerned.
What do you think?

Waldek Kozaczuk

unread,
Apr 30, 2020, 6:31:00 PM4/30/20
to Fotis Xenakis, OSv Development
On Thu, Apr 30, 2020 at 6:19 PM Fotis Xenakis <fo...@windowslive.com> wrote:
Indeed, QEMU 5.0 does not support DAX, and the virtiofsd in QEMU 5.0 won't accept any minor version other than 7.31, as far as I can see, thus the mount fails.
Both on the QEMU and the Linux side, DAX is not close to upstreaming yet. Although it seems no longer marked as "experimental", I think it's still under development (not verified with the devs) and that's the source of some instability.

To summarize:
  • Upstream QEMU 5.0 includes stable virtio-fs support, with the basic feature set. It negotiates FUSE 7.31 (latest in upstream Linux).
  • Downstream virtio-fs QEMU currently contains:
    • The default (thus recommended in the docs) virtio-fs branch. This negotiates FUSE 7.27 and supports DAX. This is the one I have based my patches upon, because it is the most stable with DAX support.
    • The development branches, virtio-dev and virtio-fs-dev (don't know what distinguishes them TBH). They both negotiate FUSE 7.31 and support DAX (with changed protocol details). These iterate quickly, so I haven't used them.
I hadn't anticipated this hard constraint upstream, which poses a problem, since I guess we want to be compatible with it.
My plan is to reach out to the virtio-fs devs, asking for the status of DAX in the dev branches. If they deem it stabilized, I will probably try to go with those, offering upstream compatibility and DAX.
Otherwise, we could have a hybrid approach, compatible with upstream for the stable features, but following the more stale "virtio-fs" downstream branch as far as DAX is concerned.
What do you think?
I am not sure I 100% understand what you are proposing. Adding some kind of negotiation logic on the OSv side that will be able to deal with both 7.27 and 7.31 and "advertise" accordingly? Can we simply send 31 if there is no DAX window detected in the driver layer and 27 otherwise?

I guess for now we could just keep the header per 7.31 and add FUSE_SETUPMAPPING and FUSE_REMOVEMAPPING to our header, no?
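In code that negotiation could be as small as the following (dax_window() is an assumed accessor on the driver, not an existing one):

#include <cstdint>
#include "drivers/virtio-fs.hh" // virtio::fs

// Sketch of the proposed negotiation: advertise FUSE 7.27 only when the
// device actually exposes a DAX window, and 7.31 (which stock QEMU 5.0
// virtiofsd insists on) otherwise.
uint32_t pick_minor_version(virtio::fs* drv)
{
    return drv->dax_window() ? 27 : 31;
}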

Meanwhile I will roll back this particular patch to make OSv work with stock QEMU and virtiofsd.

Fotis Xenakis

unread,
Apr 30, 2020, 6:40:43 PM4/30/20
to OSv Development
Stock QEMU still does not have DAX support so I used one from https://gitlab.com/virtio-fs/qemu/-/commits/virtio-dev (shall I be using this?) to test the DAX logic.
The branch you mention is under active development and will not work with my current patches. Those are based upon the more stable virtio-fs branch.

BTW, is there a way to make the virtiofsd daemon not terminate after every single OSv run?
I am not aware of a way to make virtiofsd not terminate every time; it would sure be nice not having to restart it all the time though...

As I understand it, this is a temporary solution until we integrate the DAX logic with the page cache, right? Eventually, pages mapped with FUSE_SETUPMAPPING should stay in the page cache until the file gets unmapped? Right now we copy data from the DAX window, but eventually we would like not to, which is the whole point of DAX, right?
That would be ideal, but I can't see how we could avoid the copy to the user buffers while retaining proper read() semantics(?). We might be able to do it in the case of mmap() though (my only concern is that the DAX window is device memory, but that doesn't seem to be a problem).

Not right now, but going forward, as we design the integration with the page cache, we should think of a way to have a simple read-ahead cache in virtio-fs, just like ROFS has, so we can optimize reading even when DAX is not enabled. In other words, eventually the page cache should either point to pages from the DAX window (if DAX is on) or to pages in a local cache where we would keep data read using regular FUSE_READ. Ideally, we should refactor the read-ahead/around cache in ROFS to make it more generic and usable with virtiofs.

But all that is the future.
 
I totally agree with both points!
...

Waldek Kozaczuk

unread,
Apr 30, 2020, 6:47:35 PM4/30/20
to Fotis Xenakis, OSv Development
On Thu, Apr 30, 2020 at 6:40 PM Fotis Xenakis <fo...@windowslive.com> wrote:
Stock QEMU still does not have DAX support so I used one from https://gitlab.com/virtio-fs/qemu/-/commits/virtio-dev (shall I be using this?) to test the DAX logic.
The branch you mention is under active development and will not work with my current patches. Those are based upon the more stable virtio-fs branch.

BTW, is there a way to make the virtiofsd daemon not terminate after every single OSv run?
I am not aware of a way to make virtiofsd not terminate every time; it would sure be nice not having to restart it all the time though...

As I understand it, this is a temporary solution until we integrate the DAX logic with the page cache, right? Eventually, pages mapped with FUSE_SETUPMAPPING should stay in the page cache until the file gets unmapped? Right now we copy data from the DAX window, but eventually we would like not to, which is the whole point of DAX, right?
That would be ideal, but I can't see how we could avoid the copy to the user buffers while retaining proper read() semantics(?). We might be able to do it in the case of mmap() though (my only concern is that the DAX window is device memory, but that doesn't seem to be a problem).
Yes, we cannot avoid it in virtiofs_read(), where btw it may not make sense in the long term to use DAX over a regular FUSE_READ.
For mmap() I agree. It could be as simple (well, probably not quite as simple) as what we do in ROFS (see rofs_map_cached_page()).
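Something along these lines, perhaps (entirely hypothetical names; the point is just that the fault handler could hand out DAX window pages instead of copying):

#include <cstdint>
#include "drivers/virtio-fs.hh" // virtio::fs

// Hypothetical mmap() fault path by analogy with rofs_map_cached_page():
// map the faulting page straight to the DAX window range that an earlier
// FUSE_SETUPMAPPING established for this part of the file.
void* virtiofs_map_dax_page(virtio::fs* drv, uint64_t moffset)
{
    // The DAX window is device memory already visible to the guest, so a
    // page-table entry can point directly into it.
    return static_cast<char*>(drv->dax_window_base()) + moffset;
}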

Fotis Xenakis

unread,
Apr 30, 2020, 6:49:36 PM4/30/20
to OSv Development

On Friday, May 1, 2020 at 1:31:00 AM UTC+3, Waldek Kozaczuk wrote:


On Thu, Apr 30, 2020 at 6:19 PM Fotis Xenakis <fo...@windowslive.com> wrote:
Indeed, QEMU 5.0 does not support DAX, and the virtiofsd in QEMU 5.0 won't accept any minor version other than 7.31, as far as I can see, thus the mount fails.
Both on the QEMU and the Linux side, DAX is not close to upstreaming yet. Although it seems no longer marked as "experimental", I think it's still under development (not verified with the devs) and that's the source of some instability.

To summarize:
  • Upstream QEMU 5.0 includes stable virtio-fs support, with the basic feature set. It negotiates FUSE 7.31 (latest in upstream Linux).
  • Downstream virtio-fs QEMU currently contains:
    • The default (thus recommended in the docs) virtio-fs branch. This negotiates FUSE 7.27 and supports DAX. This is the one I have based my patches upon, because it is the most stable with DAX support.
    • The development branches, virtio-dev and virtio-fs-dev (don't know what distinguishes them TBH). They both negotiate FUSE 7.31 and support DAX (with changed protocol details). These iterate quickly, so I haven't used them.
I hadn't anticipated this hard constraint upstream, which poses a problem, since I guess we want to be compatible with it.
My plan is to reach out to the virtio-fs devs, asking for the status of DAX in the dev branches. If they deem it stabilized, I will probably try to go with those, offering upstream compatibility and DAX.
Otherwise, we could have a hybrid approach, compatible with upstream for the stable features, but following the more stale "virtio-fs" downstream branch as far as DAX is concerned.
What do you think?
I am not sure I 100% understand what you are proposing. Adding some kind of negotiation logic on the OSv side that will be able to deal with both 7.27 and 7.31 and "advertise" accordingly? Can we simply send 31 if there is no DAX window detected in the driver layer and 27 otherwise?

I guess for now we could just keep the header per 7.31 and add FUSE_SETUPMAPPING and FUSE_REMOVEMAPPING to our header, no?
This is the "hybrid" approach I was thinking of above and the one I will go with for now.
Also, I will contact the virtio-fs devs for insight on how the project will evolve in the near future.

Meanwhile I will roll back this particular patch to make OSv work with stock QEMU and virtiofsd.
Absolutely, this makes sense.

Fotis Xenakis

unread,
Apr 30, 2020, 6:59:47 PM4/30/20
to OSv Development

On Wednesday, April 29, 2020 at 9:21:13 PM UTC+3, Waldek Kozaczuk wrote:
I think your patch looks good and I like your simplifications.

Couple of things to make sure we have covered all bases. 

1) Are we sure none of these changes break any thread-safety? 
I had checked for this, both by trying to reason about the code and by testing with a program with concurrent read()ers, but on the occasion of the second version I shall check again, paying more attention to these changes.
2) Are we certain we do not need to use "alloc_phys_contiguous_aligned" in some places to make sure the host sees contiguous physical memory? Currently, we use new in all virtiofs-related code, which uses regular malloc behind the scenes.
This is actually a valid point I hadn't given much thought to. I will look into it, thank you!

Waldek Kozaczuk

unread,
May 1, 2020, 4:18:23 PM5/1/20
to OSv Development


On Thursday, April 30, 2020 at 6:40:43 PM UTC-4, Fotis Xenakis wrote:
Stock QEMU still does not have DAX support so I used one from https://gitlab.com/virtio-fs/qemu/-/commits/virtio-dev (shall I be using this?) to test the DAX logic.
The branch you mention is under active development and will not work with my current patches. Those are based upon the more stable virtio-fs branch.

BTW, is there a way to make the virtiofsd daemon not terminate after every single OSv run?
I am not aware of a way to make virtiofsd not terminate every time; it would sure be nice not having to restart it all the time though...
I have just sent 2 patches that should make testing easier: run.py can automatically start virtiofsd, and it is possible to pass the virtiofs mount point information as a boot parameter instead of having to add it to /etc/fstab.
...