This is the fourth version of this patchset. Changes since v3:
- Dropped a prep patch; it has since been merged into mainline.
- Add a work-to-do list to the bdi. This is struct bdi_work. Each
wb thread will notice and execute work queued on bdi->work_list. The
arguments are which sb (or NULL for all) to flush and how many pages
to flush. A sketch of the idea follows this list.
- Fix a bug where not all bdi's would end up on the bdi_list, so potentially
some data would not be flushed.
- Make wb_kupdated() pass on wbc->older_than_this so we maintain the same
behaviour for kupdated flushes.
- Have the wb thread flush before sleeping, to avoid losing the
first flush on lazy register.
- Rebase to newer kernels.
- Little fixes here and there.
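To make the work item concrete, here is a minimal sketch of the idea in
kernel C. This is not the exact struct bdi_work from the patches; the
real one may carry more state (e.g. RCU or completion tracking), and
the field names here are illustrative:

#include <linux/list.h>
#include <linux/fs.h>

/*
 * Illustrative sketch of a per-bdi work item; the actual struct
 * bdi_work in the patchset may differ.
 */
struct bdi_work {
	struct list_head list;		/* entry on bdi->work_list */
	struct super_block *sb;		/* sb to flush, or NULL for all */
	long nr_pages;			/* how many pages to flush */
};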
So generally not a lot of changes; the major one is using the ->work_list
and getting rid of writeback_acquire()/writeback_release(). This fixes
the concern Jan Kara had about sync/WB_SYNC_ALL requests being missed
if writeback was already in progress.
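As a hedged sketch of why queueing fixes that: instead of bailing out
when writeback_acquire() fails, callers append a work item that the wb
thread drains in order, so a WB_SYNC_ALL request is eventually executed
rather than silently dropped. The wb_lock name below is an assumption
for illustration, not the exact patch code:

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

/*
 * Sketch only: queue work instead of skipping it when writeback is
 * already in progress. The wb_lock protecting ->work_list is assumed
 * here; the wait queue lives in the embedded struct bdi_writeback.
 */
static void bdi_queue_work(struct backing_dev_info *bdi, struct bdi_work *work)
{
	spin_lock(&bdi->wb_lock);
	list_add_tail(&work->list, &bdi->work_list);
	spin_unlock(&bdi->wb_lock);

	wake_up(&bdi->wb.wait);		/* kick the flusher thread */
}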
I've run a few benchmarks today:
1) Large file writes from a single process
2) Random file writes from multiple (16) processes.
Each benchmark was run 3 times on each kernel. The disk used was an
Intel X25-E and it was security erased before each run for consistency.
2.6.30-rc6 (22ef37eed673587ac984965dc88ba94c68873291) is the baseline,
with results normalized to 100. The filesystem was ext4 without barriers.
The system was a Core 2 Quad with 2GB of memory.
Kernel      Test    TPS    CPU
-------------------------------
Baseline     1      100    100
Writeback    1      101     95
Baseline     2      100    100
Writeback    2      105     94
For the sequential test, throughput is almost identical but CPU usage is
a lot lower. For the random-write case with 16 threads, the transaction
rate is up with the writeback patches and CPU usage is down as well.
So pretty good results for this initial test; I'd expect larger
improvements on systems with more disks. As soon as Intel sends me
4 more drives for testing, I'll update the results :-)
You can pull the patches from the block git repo, branch is 'writeback':
git://git.kernel.dk/linux-2.6-block.git writeback
---
b/block/blk-core.c | 1
b/drivers/block/aoe/aoeblk.c | 1
b/drivers/char/mem.c | 1
b/fs/btrfs/disk-io.c | 24 +
b/fs/buffer.c | 2
b/fs/char_dev.c | 1
b/fs/configfs/inode.c | 1
b/fs/fs-writeback.c | 689 ++++++++++++++++++++++++++++++++----------
b/fs/fuse/inode.c | 1
b/fs/hugetlbfs/inode.c | 1
b/fs/nfs/client.c | 1
b/fs/ntfs/super.c | 32 -
b/fs/ocfs2/dlm/dlmfs.c | 1
b/fs/ramfs/inode.c | 1
b/fs/super.c | 3
b/fs/sync.c | 2
b/fs/sysfs/inode.c | 1
b/fs/ubifs/super.c | 1
b/include/linux/backing-dev.h | 74 ++++
b/include/linux/fs.h | 11
b/include/linux/writeback.h | 15
b/kernel/cgroup.c | 1
b/mm/Makefile | 2
b/mm/backing-dev.c | 481 ++++++++++++++++++++++++++++-
b/mm/page-writeback.c | 144 --------
b/mm/swap_state.c | 1
b/mm/vmscan.c | 2
mm/pdflush.c | 269 ----------------
28 files changed, 1130 insertions(+), 634 deletions(-)
--
Jens Axboe
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/fs-writeback.c | 140 +++++++++++++++++-----------
include/linux/backing-dev.h | 42 +++++----
mm/backing-dev.c | 218 ++++++++++++++++++++++++++++--------------
3 files changed, 256 insertions(+), 144 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 8a25d14..50e21e8 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -46,9 +46,11 @@ int nr_pdflush_threads;
* unless they implement their own. Which is somewhat inefficient, as this
* may prevent concurrent writeback against multiple devices.
*/
-static int writeback_acquire(struct backing_dev_info *bdi)
+static int writeback_acquire(struct bdi_writeback *wb)
{
- return !test_and_set_bit(BDI_pdflush, &bdi->state);
+ struct backing_dev_info *bdi = wb->bdi;
+
+ return !test_and_set_bit(wb->nr, &bdi->wb_active);
}
/**
@@ -59,19 +61,38 @@ static int writeback_acquire(struct backing_dev_info *bdi)
*/
int writeback_in_progress(struct backing_dev_info *bdi)
{
- return test_bit(BDI_pdflush, &bdi->state);
+ return bdi->wb_active != 0;
}
/**
* writeback_release - relinquish exclusive writeback access against a device.
* @bdi: the device's backing_dev_info structure
*/
-static void writeback_release(struct backing_dev_info *bdi)
+static void writeback_release(struct bdi_writeback *wb)
{
- WARN_ON_ONCE(!writeback_in_progress(bdi));
- bdi->wb_arg.nr_pages = 0;
- bdi->wb_arg.sb = NULL;
- clear_bit(BDI_pdflush, &bdi->state);
+ struct backing_dev_info *bdi = wb->bdi;
+
+ wb->nr_pages = 0;
+ wb->sb = NULL;
+ clear_bit(wb->nr, &bdi->wb_active);
+}
+
+static void wb_start_writeback(struct bdi_writeback *wb, struct super_block *sb,
+ long nr_pages)
+{
+ if (!wb_has_dirty_io(wb))
+ return;
+
+ if (writeback_acquire(wb)) {
+ wb->nr_pages = nr_pages;
+ wb->sb = sb;
+
+ /*
+ * make above store seen before the task is woken
+ */
+ smp_mb();
+ wake_up(&wb->wait);
+ }
}
int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
@@ -81,21 +102,12 @@ int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
* This only happens the first time someone kicks this bdi, so put
* it out-of-line.
*/
- if (unlikely(!bdi->task)) {
+ if (unlikely(!bdi->wb.task)) {
bdi_add_default_flusher_task(bdi);
return 1;
}
- if (writeback_acquire(bdi)) {
- bdi->wb_arg.nr_pages = nr_pages;
- bdi->wb_arg.sb = sb;
- /*
- * make above store seen before the task is woken
- */
- smp_mb();
- wake_up(&bdi->wait);
- }
-
+ wb_start_writeback(&bdi->wb, sb, nr_pages);
return 0;
}
@@ -123,12 +135,12 @@ int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
* older_than_this takes precedence over nr_to_write. So we'll only write back
* all dirty pages if they are all attached to "old" mappings.
*/
-static void bdi_kupdated(struct backing_dev_info *bdi)
+static void wb_kupdated(struct bdi_writeback *wb)
{
unsigned long oldest_jif;
long nr_to_write;
struct writeback_control wbc = {
- .bdi = bdi,
+ .bdi = wb->bdi,
.sync_mode = WB_SYNC_NONE,
.older_than_this = &oldest_jif,
.nr_to_write = 0,
@@ -155,15 +167,19 @@ static void bdi_kupdated(struct backing_dev_info *bdi)
}
}
-static void bdi_pdflush(struct backing_dev_info *bdi)
+static void generic_sync_wb_inodes(struct bdi_writeback *wb,
+ struct super_block *sb,
+ struct writeback_control *wbc);
+
+static void wb_writeback(struct bdi_writeback *wb)
{
struct writeback_control wbc = {
- .bdi = bdi,
+ .bdi = wb->bdi,
.sync_mode = WB_SYNC_NONE,
.older_than_this = NULL,
.range_cyclic = 1,
};
- long nr_pages = bdi->wb_arg.nr_pages;
+ long nr_pages = wb->nr_pages;
for (;;) {
unsigned long background_thresh, dirty_thresh;
@@ -177,7 +193,7 @@ static void bdi_pdflush(struct backing_dev_info *bdi)
wbc.encountered_congestion = 0;
wbc.nr_to_write = MAX_WRITEBACK_PAGES;
wbc.pages_skipped = 0;
- generic_sync_bdi_inodes(bdi->wb_arg.sb, &wbc);
+ generic_sync_wb_inodes(wb, wb->sb, &wbc);
nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
/*
* If we ran out of stuff to write, bail unless more_io got set
@@ -194,13 +210,13 @@ static void bdi_pdflush(struct backing_dev_info *bdi)
* Handle writeback of dirty data for the device backed by this bdi. Also
* wakes up periodically and does kupdated style flushing.
*/
-int bdi_writeback_task(struct backing_dev_info *bdi)
+int bdi_writeback_task(struct bdi_writeback *wb)
{
while (!kthread_should_stop()) {
unsigned long wait_jiffies;
DEFINE_WAIT(wait);
- prepare_to_wait(&bdi->wait, &wait, TASK_INTERRUPTIBLE);
+ prepare_to_wait(&wb->wait, &wait, TASK_INTERRUPTIBLE);
wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
schedule_timeout(wait_jiffies);
try_to_freeze();
@@ -219,13 +235,13 @@ int bdi_writeback_task(struct backing_dev_info *bdi)
* pdflush style writeout.
*
*/
- if (writeback_acquire(bdi))
- bdi_kupdated(bdi);
+ if (writeback_acquire(wb))
+ wb_kupdated(wb);
else
- bdi_pdflush(bdi);
+ wb_writeback(wb);
- writeback_release(bdi);
- finish_wait(&bdi->wait, &wait);
+ writeback_release(wb);
+ finish_wait(&wb->wait, &wait);
}
return 0;
@@ -248,6 +264,14 @@ restart:
rcu_read_unlock();
}
+/*
+ * We have only a single wb per bdi, so just return that.
+ */
+static inline struct bdi_writeback *inode_get_wb(struct inode *inode)
+{
+ return &inode_to_bdi(inode)->wb;
+}
+
/**
* __mark_inode_dirty - internal function
* @inode: inode to mark
@@ -346,9 +370,10 @@ void __mark_inode_dirty(struct inode *inode, int flags)
* reposition it (that would break b_dirty time-ordering).
*/
if (!was_dirty) {
+ struct bdi_writeback *wb = inode_get_wb(inode);
+
inode->dirtied_when = jiffies;
- list_move(&inode->i_list,
- &inode_to_bdi(inode)->b_dirty);
+ list_move(&inode->i_list, &wb->b_dirty);
}
}
out:
@@ -375,16 +400,16 @@ static int write_inode(struct inode *inode, int sync)
*/
static void redirty_tail(struct inode *inode)
{
- struct backing_dev_info *bdi = inode_to_bdi(inode);
+ struct bdi_writeback *wb = inode_get_wb(inode);
- if (!list_empty(&bdi->b_dirty)) {
+ if (!list_empty(&wb->b_dirty)) {
struct inode *tail;
- tail = list_entry(bdi->b_dirty.next, struct inode, i_list);
+ tail = list_entry(wb->b_dirty.next, struct inode, i_list);
if (time_before(inode->dirtied_when, tail->dirtied_when))
inode->dirtied_when = jiffies;
}
- list_move(&inode->i_list, &bdi->b_dirty);
+ list_move(&inode->i_list, &wb->b_dirty);
}
/*
@@ -392,7 +417,9 @@ static void redirty_tail(struct inode *inode)
*/
static void requeue_io(struct inode *inode)
{
- list_move(&inode->i_list, &inode_to_bdi(inode)->b_more_io);
+ struct bdi_writeback *wb = inode_get_wb(inode);
+
+ list_move(&inode->i_list, &wb->b_more_io);
}
static void inode_sync_complete(struct inode *inode)
@@ -439,11 +466,10 @@ static void move_expired_inodes(struct list_head *delaying_queue,
/*
* Queue all expired dirty inodes for io, eldest first.
*/
-static void queue_io(struct backing_dev_info *bdi,
- unsigned long *older_than_this)
+static void queue_io(struct bdi_writeback *wb, unsigned long *older_than_this)
{
- list_splice_init(&bdi->b_more_io, bdi->b_io.prev);
- move_expired_inodes(&bdi->b_dirty, &bdi->b_io, older_than_this);
+ list_splice_init(&wb->b_more_io, wb->b_io.prev);
+ move_expired_inodes(&wb->b_dirty, &wb->b_io, older_than_this);
}
/*
@@ -604,20 +630,20 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
return __sync_single_inode(inode, wbc);
}
-void generic_sync_bdi_inodes(struct super_block *sb,
- struct writeback_control *wbc)
+static void generic_sync_wb_inodes(struct bdi_writeback *wb,
+ struct super_block *sb,
+ struct writeback_control *wbc)
{
const int is_blkdev_sb = sb_is_blkdev_sb(sb);
- struct backing_dev_info *bdi = wbc->bdi;
const unsigned long start = jiffies; /* livelock avoidance */
spin_lock(&inode_lock);
- if (!wbc->for_kupdate || list_empty(&bdi->b_io))
- queue_io(bdi, wbc->older_than_this);
+ if (!wbc->for_kupdate || list_empty(&wb->b_io))
+ queue_io(wb, wbc->older_than_this);
- while (!list_empty(&bdi->b_io)) {
- struct inode *inode = list_entry(bdi->b_io.prev,
+ while (!list_empty(&wb->b_io)) {
+ struct inode *inode = list_entry(wb->b_io.prev,
struct inode, i_list);
long pages_skipped;
@@ -629,7 +655,7 @@ void generic_sync_bdi_inodes(struct super_block *sb,
continue;
}
- if (!bdi_cap_writeback_dirty(bdi)) {
+ if (!bdi_cap_writeback_dirty(wb->bdi)) {
redirty_tail(inode);
if (is_blkdev_sb) {
/*
@@ -651,7 +677,7 @@ void generic_sync_bdi_inodes(struct super_block *sb,
continue;
}
- if (wbc->nonblocking && bdi_write_congested(bdi)) {
+ if (wbc->nonblocking && bdi_write_congested(wb->bdi)) {
wbc->encountered_congestion = 1;
if (!is_blkdev_sb)
break; /* Skip a congested fs */
@@ -685,7 +711,7 @@ void generic_sync_bdi_inodes(struct super_block *sb,
wbc->more_io = 1;
break;
}
- if (!list_empty(&bdi->b_more_io))
+ if (!list_empty(&wb->b_more_io))
wbc->more_io = 1;
}
@@ -693,6 +719,14 @@ void generic_sync_bdi_inodes(struct super_block *sb,
/* Leave any unwritten inodes on b_io */
}
+void generic_sync_bdi_inodes(struct super_block *sb,
+ struct writeback_control *wbc)
+{
+ struct backing_dev_info *bdi = wbc->bdi;
+
+ generic_sync_wb_inodes(&bdi->wb, sb, wbc);
+}
+
/*
* Write out a superblock's list of dirty inodes. A wait will be performed
* upon no inodes, all inodes or the final one, depending upon sync_mode.
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index a848eea..a0c70f1 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -23,8 +23,8 @@ struct dentry;
* Bits in backing_dev_info.state
*/
enum bdi_state {
- BDI_pdflush, /* A pdflush thread is working this device */
BDI_pending, /* On its way to being activated */
+ BDI_wb_alloc, /* Default embedded wb allocated */
BDI_async_congested, /* The async (write) queue is getting full */
BDI_sync_congested, /* The sync queue is getting full */
BDI_unused, /* Available bits start here */
@@ -40,15 +40,23 @@ enum bdi_stat_item {
#define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
-struct bdi_writeback_arg {
- unsigned long nr_pages;
- struct super_block *sb;
+struct bdi_writeback {
+ struct backing_dev_info *bdi; /* our parent bdi */
+ unsigned int nr;
+
+ struct task_struct *task; /* writeback task */
+ wait_queue_head_t wait;
+ struct list_head b_dirty; /* dirty inodes */
+ struct list_head b_io; /* parked for writeback */
+ struct list_head b_more_io; /* parked for more writeback */
+
+ unsigned long nr_pages;
+ struct super_block *sb;
};
struct backing_dev_info {
- struct list_head bdi_list;
struct rcu_head rcu_head;
-
+ struct list_head bdi_list;
unsigned long ra_pages; /* max readahead in PAGE_CACHE_SIZE units */
unsigned long state; /* Always use atomic bitops on this */
unsigned int capabilities; /* Device capabilities */
@@ -65,14 +73,11 @@ struct backing_dev_info {
unsigned int min_ratio;
unsigned int max_ratio, max_prop_frac;
- struct device *dev;
+ struct bdi_writeback wb; /* default writeback info for this bdi */
+ unsigned long wb_active; /* bitmap of active tasks */
+ unsigned long wb_mask; /* number of registered tasks */
- struct task_struct *task; /* writeback task */
- wait_queue_head_t wait;
- struct bdi_writeback_arg wb_arg; /* protected by BDI_pdflush */
- struct list_head b_dirty; /* dirty inodes */
- struct list_head b_io; /* parked for writeback */
- struct list_head b_more_io; /* parked for more writeback */
+ struct device *dev;
#ifdef CONFIG_DEBUG_FS
struct dentry *debug_dir;
@@ -89,18 +94,19 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
void bdi_unregister(struct backing_dev_info *bdi);
int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
long nr_pages);
-int bdi_writeback_task(struct backing_dev_info *bdi);
+int bdi_writeback_task(struct bdi_writeback *wb);
void bdi_writeback_all(struct super_block *sb, long nr_pages);
void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
+int bdi_has_dirty_io(struct backing_dev_info *bdi);
extern spinlock_t bdi_lock;
extern struct list_head bdi_list;
-static inline int bdi_has_dirty_io(struct backing_dev_info *bdi)
+static inline int wb_has_dirty_io(struct bdi_writeback *wb)
{
- return !list_empty(&bdi->b_dirty) ||
- !list_empty(&bdi->b_io) ||
- !list_empty(&bdi->b_more_io);
+ return !list_empty(&wb->b_dirty) ||
+ !list_empty(&wb->b_io) ||
+ !list_empty(&wb->b_more_io);
}
static inline void __add_bdi_stat(struct backing_dev_info *bdi,
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index c759449..677a8c6 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -199,17 +199,59 @@ static int __init default_bdi_init(void)
}
subsys_initcall(default_bdi_init);
+static void bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
+{
+ memset(wb, 0, sizeof(*wb));
+
+ wb->bdi = bdi;
+ init_waitqueue_head(&wb->wait);
+ INIT_LIST_HEAD(&wb->b_dirty);
+ INIT_LIST_HEAD(&wb->b_io);
+ INIT_LIST_HEAD(&wb->b_more_io);
+}
+
+static void bdi_flush_io(struct backing_dev_info *bdi)
+{
+ struct writeback_control wbc = {
+ .bdi = bdi,
+ .sync_mode = WB_SYNC_NONE,
+ .older_than_this = NULL,
+ .range_cyclic = 1,
+ .nr_to_write = 1024,
+ };
+
+ generic_sync_bdi_inodes(NULL, &wbc);
+}
+
+static int wb_assign_nr(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+{
+ set_bit(0, &bdi->wb_mask);
+ wb->nr = 0;
+ return 0;
+}
+
+static void bdi_put_wb(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+{
+ clear_bit(wb->nr, &bdi->wb_mask);
+ clear_bit(BDI_wb_alloc, &bdi->state);
+}
+
+static struct bdi_writeback *bdi_new_wb(struct backing_dev_info *bdi)
+{
+ struct bdi_writeback *wb;
+
+ set_bit(BDI_wb_alloc, &bdi->state);
+ wb = &bdi->wb;
+ wb_assign_nr(bdi, wb);
+ return wb;
+}
+
static int bdi_start_fn(void *ptr)
{
- struct backing_dev_info *bdi = ptr;
+ struct bdi_writeback *wb = ptr;
+ struct backing_dev_info *bdi = wb->bdi;
struct task_struct *tsk = current;
-
- /*
- * Add us to the active bdi_list
- */
- spin_lock_bh(&bdi_lock);
- list_add_rcu(&bdi->bdi_list, &bdi_list);
- spin_unlock_bh(&bdi_lock);
+ int ret;
tsk->flags |= PF_FLUSHER | PF_SWAPWRITE;
set_freezable();
@@ -225,77 +267,81 @@ static int bdi_start_fn(void *ptr)
clear_bit(BDI_pending, &bdi->state);
wake_up_bit(&bdi->state, BDI_pending);
- return bdi_writeback_task(bdi);
+ ret = bdi_writeback_task(wb);
+
+ bdi_put_wb(bdi, wb);
+ return ret;
+}
+
+int bdi_has_dirty_io(struct backing_dev_info *bdi)
+{
+ return wb_has_dirty_io(&bdi->wb);
}
static int bdi_forker_task(void *ptr)
{
- struct backing_dev_info *bdi, *me = ptr;
+ struct bdi_writeback *me = ptr;
for (;;) {
+ struct backing_dev_info *bdi;
+ struct bdi_writeback *wb;
DEFINE_WAIT(wait);
/*
* Should never trigger on the default bdi
*/
- WARN_ON(bdi_has_dirty_io(me));
+ if (wb_has_dirty_io(me)) {
+ bdi_flush_io(me->bdi);
+ WARN_ON(1);
+ }
prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
+
smp_mb();
if (list_empty(&bdi_pending_list))
schedule();
- else {
+
+ finish_wait(&me->wait, &wait);
repeat:
- bdi = NULL;
+ bdi = NULL;
+ spin_lock_bh(&bdi_lock);
+ if (!list_empty(&bdi_pending_list)) {
+ bdi = list_entry(bdi_pending_list.next,
+ struct backing_dev_info, bdi_list);
+ list_del_init(&bdi->bdi_list);
+ }
+ spin_unlock_bh(&bdi_lock);
- spin_lock_bh(&bdi_lock);
- if (!list_empty(&bdi_pending_list)) {
- bdi = list_entry(bdi_pending_list.next,
- struct backing_dev_info,
- bdi_list);
- list_del_init(&bdi->bdi_list);
- }
- spin_unlock_bh(&bdi_lock);
+ if (!bdi)
+ continue;
- /*
- * If no bdi or bdi already got setup, continue
- */
- if (!bdi || bdi->task)
- continue;
+ wb = bdi_new_wb(bdi);
+ if (!wb)
+ goto readd_flush;
- bdi->task = kthread_run(bdi_start_fn, bdi, "bdi-%s",
+ wb->task = kthread_run(bdi_start_fn, wb, "bdi-%s",
dev_name(bdi->dev));
+ /*
+ * If task creation fails, then readd the bdi to
+ * the pending list and force writeout of the bdi
+ * from this forker thread. That will free some memory
+ * and we can try again.
+ */
+ if (!wb->task) {
+ bdi_put_wb(bdi, wb);
+readd_flush:
/*
- * If task creation fails, then readd the bdi to
- * the pending list and force writeout of the bdi
- * from this forker thread. That will free some memory
- * and we can try again.
+ * Add this 'bdi' to the back, so we get
+ * a chance to flush other bdi's to free
+ * memory.
*/
- if (!bdi->task) {
- struct writeback_control wbc = {
- .bdi = bdi,
- .sync_mode = WB_SYNC_NONE,
- .older_than_this = NULL,
- .range_cyclic = 1,
- };
-
- /*
- * Add this 'bdi' to the back, so we get
- * a chance to flush other bdi's to free
- * memory.
- */
- spin_lock_bh(&bdi_lock);
- list_add_tail(&bdi->bdi_list,
- &bdi_pending_list);
- spin_unlock_bh(&bdi_lock);
-
- wbc.nr_to_write = 1024;
- generic_sync_bdi_inodes(NULL, &wbc);
- goto repeat;
- }
- }
+ spin_lock_bh(&bdi_lock);
+ list_add_tail(&bdi->bdi_list, &bdi_pending_list);
+ spin_unlock_bh(&bdi_lock);
- finish_wait(&me->wait, &wait);
+ bdi_flush_io(bdi);
+ goto repeat;
+ }
}
return 0;
@@ -318,11 +364,21 @@ static void bdi_add_to_pending(struct rcu_head *head)
list_add_tail(&bdi->bdi_list, &bdi_pending_list);
spin_unlock(&bdi_lock);
- wake_up(&default_backing_dev_info.wait);
+ wake_up(&default_backing_dev_info.wb.wait);
}
+/*
+ * Add a new flusher task that gets created for any bdi
+ * that has dirty data pending writeout
+ */
void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
{
+ if (!bdi_cap_writeback_dirty(bdi))
+ return;
+
+ /*
+ * Someone already marked this pending for task creation
+ */
if (test_and_set_bit(BDI_pending, &bdi->state))
return;
@@ -363,9 +419,18 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
* on-demand when they need it.
*/
if (bdi_cap_flush_forker(bdi)) {
- bdi->task = kthread_run(bdi_forker_task, bdi, "bdi-%s",
+ struct bdi_writeback *wb;
+
+ wb = bdi_new_wb(bdi);
+ if (!wb) {
+ ret = -ENOMEM;
+ goto exit;
+ }
+
+ wb->task = kthread_run(bdi_forker_task, wb, "bdi-%s",
dev_name(dev));
- if (!bdi->task) {
+ if (!wb->task) {
+ bdi_put_wb(bdi, wb);
ret = -ENOMEM;
goto exit;
}
@@ -395,34 +460,44 @@ static int sched_wait(void *word)
return 0;
}
+/*
+ * Remove bdi from global list and shutdown any threads we have running
+ */
static void bdi_wb_shutdown(struct backing_dev_info *bdi)
{
+ if (!bdi_cap_writeback_dirty(bdi))
+ return;
+
/*
* If setup is pending, wait for that to complete first
+ * Make sure nobody finds us on the bdi_list anymore
*/
wait_on_bit(&bdi->state, BDI_pending, sched_wait, TASK_UNINTERRUPTIBLE);
+ /*
+ * Make sure nobody finds us on the bdi_list anymore
+ */
spin_lock_bh(&bdi_lock);
list_del_rcu(&bdi->bdi_list);
spin_unlock_bh(&bdi_lock);
/*
- * In case the bdi is freed right after unregister, we need to
- * make sure any RCU sections have exited
+ * Now make sure that anybody who is currently looking at us from
+ * the bdi_list iteration have exited.
*/
synchronize_rcu();
+
+ /*
+ * Finally, kill the kernel thread
+ */
+ kthread_stop(bdi->wb.task);
}
void bdi_unregister(struct backing_dev_info *bdi)
{
if (bdi->dev) {
- if (!bdi_cap_flush_forker(bdi)) {
+ if (!bdi_cap_flush_forker(bdi))
bdi_wb_shutdown(bdi);
- if (bdi->task) {
- kthread_stop(bdi->task);
- bdi->task = NULL;
- }
- }
bdi_debug_unregister(bdi);
device_unregister(bdi->dev);
bdi->dev = NULL;
@@ -440,11 +515,10 @@ int bdi_init(struct backing_dev_info *bdi)
bdi->min_ratio = 0;
bdi->max_ratio = 100;
bdi->max_prop_frac = PROP_FRAC_BASE;
- init_waitqueue_head(&bdi->wait);
INIT_LIST_HEAD(&bdi->bdi_list);
- INIT_LIST_HEAD(&bdi->b_io);
- INIT_LIST_HEAD(&bdi->b_dirty);
- INIT_LIST_HEAD(&bdi->b_more_io);
+ bdi->wb_mask = bdi->wb_active = 0;
+
+ bdi_wb_init(&bdi->wb, bdi);
for (i = 0; i < NR_BDI_STAT_ITEMS; i++) {
err = percpu_counter_init(&bdi->bdi_stat[i], 0);
@@ -469,9 +543,7 @@ void bdi_destroy(struct backing_dev_info *bdi)
{
int i;
- WARN_ON(!list_empty(&bdi->b_dirty));
- WARN_ON(!list_empty(&bdi->b_io));
- WARN_ON(!list_empty(&bdi->b_more_io));
+ WARN_ON(bdi_has_dirty_io(bdi));
bdi_unregister(bdi);
--
1.6.3.rc0.1.gf800
With the patches applied, vmstat shows a steady writeout rate (bo
column) during the file creation phase:
r b swpd free buff cache si so bi bo in cs us sy id wa
0 1 0 608848 2652 375372 0 0 0 71024 604 24 1 10 48 42
0 1 0 549644 2712 433736 0 0 0 60692 505 27 1 8 48 44
1 0 0 476928 2784 505192 0 0 4 29540 553 24 0 9 53 37
0 1 0 457972 2808 524008 0 0 0 54876 331 16 0 4 38 58
0 1 0 366128 2928 614284 0 0 4 92168 710 58 0 13 53 34
0 1 0 295092 3000 684140 0 0 0 62924 572 23 0 9 53 37
0 1 0 236592 3064 741704 0 0 4 58256 523 17 0 8 48 44
0 1 0 165608 3132 811464 0 0 0 57460 560 21 0 8 54 38
0 1 0 102952 3200 873164 0 0 4 74748 540 29 1 10 48 41
0 1 0 48604 3252 926472 0 0 0 53248 469 29 0 7 47 45
whereas vanilla tends to fluctuate a lot in the creation phase:
r b swpd free buff cache si so bi bo in cs us sy id wa
1 1 0 678716 5792 303380 0 0 0 74064 565 50 1 11 52 36
1 0 0 662488 5864 319396 0 0 4 352 302 329 0 2 47 51
0 1 0 599312 5924 381468 0 0 0 78164 516 55 0 9 51 40
0 1 0 519952 6008 459516 0 0 4 78156 622 56 1 11 52 37
1 1 0 436640 6092 541632 0 0 0 82244 622 54 0 11 48 41
0 1 0 436640 6092 541660 0 0 0 8 152 39 0 0 51 49
0 1 0 332224 6200 644252 0 0 4 102800 728 46 1 13 49 36
1 0 0 274492 6260 701056 0 0 4 12328 459 49 0 7 50 43
0 1 0 211220 6324 763356 0 0 0 106940 515 37 1 10 51 39
1 0 0 160412 6376 813468 0 0 0 8224 415 43 0 6 49 45
1 1 0 85980 6452 886556 0 0 4 113516 575 39 1 11 54 34
0 2 0 85968 6452 886620 0 0 0 1640 158 211 0 0 46 54
So apart from seemingly behaving better for buffered writeout, this also
allows us to potentially have more than one bdi thread flushing out data.
This may be useful for NUMA-type setups.
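As a rough illustration of that direction, the wb_mask bitmap introduced
in this patch could hand out per-thread slots as below. Note that the
posted code only ever assigns bit 0 (one thread per bdi); this
generalization is illustrative, not part of the patch:

#include <linux/bitops.h>

/*
 * Illustrative only: pick the next free writeback-thread slot from
 * bdi->wb_mask. The patch's wb_assign_nr() hardcodes bit 0.
 */
static int wb_assign_nr_any(struct backing_dev_info *bdi, struct bdi_writeback *wb)
{
	int nr;

	do {
		nr = find_first_zero_bit(&bdi->wb_mask, BITS_PER_LONG);
		if (nr >= BITS_PER_LONG)
			return -EBUSY;	/* all slots in use */
	} while (test_and_set_bit(nr, &bdi->wb_mask));

	wb->nr = nr;
	return 0;
}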
A 10-disk test with btrfs performs 26% faster with per-bdi flushing.
Other tests are pending.
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/buffer.c | 2 +-
fs/fs-writeback.c | 309 ++++++++++++++++++++++++++-----------------
fs/ntfs/super.c | 32 +----
fs/sync.c | 2 +-
include/linux/backing-dev.h | 28 ++++
include/linux/fs.h | 3 +-
include/linux/writeback.h | 2 +-
mm/backing-dev.c | 198 ++++++++++++++++++++++++++--
mm/page-writeback.c | 141 +-------------------
mm/vmscan.c | 2 +-
10 files changed, 416 insertions(+), 303 deletions(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index aed2977..14f0802 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -281,7 +281,7 @@ static void free_more_memory(void)
struct zone *zone;
int nid;
- wakeup_pdflush(1024);
+ wakeup_flusher_threads(1024);
yield();
for_each_online_node(nid) {
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 34c8d1d..c40345c 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -19,6 +19,8 @@
#include <linux/sched.h>
#include <linux/fs.h>
#include <linux/mm.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
#include <linux/writeback.h>
#include <linux/blkdev.h>
#include <linux/backing-dev.h>
@@ -61,10 +63,186 @@ int writeback_in_progress(struct backing_dev_info *bdi)
*/
static void writeback_release(struct backing_dev_info *bdi)
{
- BUG_ON(!writeback_in_progress(bdi));
+ WARN_ON_ONCE(!writeback_in_progress(bdi));
+ bdi->wb_arg.nr_pages = 0;
+ bdi->wb_arg.sb = NULL;
clear_bit(BDI_pdflush, &bdi->state);
}
+int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
+ long nr_pages)
+{
+ /*
+ * This only happens the first time someone kicks this bdi, so put
+ * it out-of-line.
+ */
+ if (unlikely(!bdi->task)) {
+ bdi_add_default_flusher_task(bdi);
+ return 1;
+ }
+
+ if (writeback_acquire(bdi)) {
+ bdi->wb_arg.nr_pages = nr_pages;
+ bdi->wb_arg.sb = sb;
+ /*
+ * make above store seen before the task is woken
+ */
+ smp_mb();
+ wake_up(&bdi->wait);
+ }
+
+ return 0;
+}
+
+/*
+ * The maximum number of pages to writeout in a single bdi flush/kupdate
+ * operation. We do this so we don't hold I_SYNC against an inode for
+ * enormous amounts of time, which would block a userspace task which has
+ * been forced to throttle against that inode. Also, the code reevaluates
+ * the dirty each time it has written this many pages.
+ */
+#define MAX_WRITEBACK_PAGES 1024
+
+/*
+ * Periodic writeback of "old" data.
+ *
+ * Define "old": the first time one of an inode's pages is dirtied, we mark the
+ * dirtying-time in the inode's address_space. So this periodic writeback code
+ * just walks the superblock inode list, writing back any inodes which are
+ * older than a specific point in time.
+ *
+ * Try to run once per dirty_writeback_interval. But if a writeback event
+ * takes longer than a dirty_writeback_interval interval, then leave a
+ * one-second gap.
+ *
+ * older_than_this takes precedence over nr_to_write. So we'll only write back
+ * all dirty pages if they are all attached to "old" mappings.
+ */
+static void bdi_kupdated(struct backing_dev_info *bdi)
+{
+ unsigned long oldest_jif;
+ long nr_to_write;
+ struct writeback_control wbc = {
+ .bdi = bdi,
+ .sync_mode = WB_SYNC_NONE,
+ .older_than_this = &oldest_jif,
+ .nr_to_write = 0,
+ .for_kupdate = 1,
+ .range_cyclic = 1,
+ };
+
+ sync_supers();
+
+ oldest_jif = jiffies - msecs_to_jiffies(dirty_expire_interval * 10);
+
+ nr_to_write = global_page_state(NR_FILE_DIRTY) +
+ global_page_state(NR_UNSTABLE_NFS) +
+ (inodes_stat.nr_inodes - inodes_stat.nr_unused);
+
+ while (nr_to_write > 0) {
+ wbc.more_io = 0;
+ wbc.encountered_congestion = 0;
+ wbc.nr_to_write = MAX_WRITEBACK_PAGES;
+ generic_sync_bdi_inodes(NULL, &wbc);
+ if (wbc.nr_to_write > 0)
+ break; /* All the old data is written */
+ nr_to_write -= MAX_WRITEBACK_PAGES;
+ }
+}
+
+static void bdi_pdflush(struct backing_dev_info *bdi)
+{
+ struct writeback_control wbc = {
+ .bdi = bdi,
+ .sync_mode = WB_SYNC_NONE,
+ .older_than_this = NULL,
+ .range_cyclic = 1,
+ };
+ long nr_pages = bdi->wb_arg.nr_pages;
+
+ for (;;) {
+ unsigned long background_thresh, dirty_thresh;
+ get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
+ if ((global_page_state(NR_FILE_DIRTY) +
+ global_page_state(NR_UNSTABLE_NFS) < background_thresh) &&
+ nr_pages <= 0)
+ break;
+
+ wbc.more_io = 0;
+ wbc.encountered_congestion = 0;
+ wbc.nr_to_write = MAX_WRITEBACK_PAGES;
+ wbc.pages_skipped = 0;
+ generic_sync_bdi_inodes(bdi->wb_arg.sb, &wbc);
+ nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
+ /*
+ * If we ran out of stuff to write, bail unless more_io got set
+ */
+ if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
+ if (wbc.more_io)
+ continue;
+ break;
+ }
+ }
+}
+
+/*
+ * Handle writeback of dirty data for the device backed by this bdi. Also
+ * wakes up periodically and does kupdated style flushing.
+ */
+int bdi_writeback_task(struct backing_dev_info *bdi)
+{
+ while (!kthread_should_stop()) {
+ unsigned long wait_jiffies;
+ DEFINE_WAIT(wait);
+
+ prepare_to_wait(&bdi->wait, &wait, TASK_INTERRUPTIBLE);
+ wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
+ schedule_timeout(wait_jiffies);
+ try_to_freeze();
+
+ /*
+ * We get here in two cases:
+ *
+ * schedule_timeout() returned because the dirty writeback
+ * interval has elapsed. If that happens, we will be able
+ * to acquire the writeback lock and will proceed to do
+ * kupdated style writeout.
+ *
+ * Someone called bdi_start_writeback(), which will acquire
+ * the writeback lock. This means our writeback_acquire()
+ * below will fail and we call into bdi_pdflush() for
+ * pdflush style writeout.
+ *
+ */
+ if (writeback_acquire(bdi))
+ bdi_kupdated(bdi);
+ else
+ bdi_pdflush(bdi);
+
+ writeback_release(bdi);
+ finish_wait(&bdi->wait, &wait);
+ }
+
+ return 0;
+}
+
+void bdi_writeback_all(struct super_block *sb, long nr_pages)
+{
+ struct backing_dev_info *bdi;
+
+ rcu_read_lock();
+
+restart:
+ list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {
+ if (!bdi_has_dirty_io(bdi))
+ continue;
+ if (bdi_start_writeback(bdi, sb, nr_pages))
+ goto restart;
+ }
+
+ rcu_read_unlock();
+}
+
/**
* __mark_inode_dirty - internal function
* @inode: inode to mark
@@ -263,46 +441,6 @@ static void queue_io(struct backing_dev_info *bdi,
move_expired_inodes(&bdi->b_dirty, &bdi->b_io, older_than_this);
}
-static int sb_on_inode_list(struct super_block *sb, struct list_head *list)
-{
- struct inode *inode;
- int ret = 0;
-
- spin_lock(&inode_lock);
- list_for_each_entry(inode, list, i_list) {
- if (inode->i_sb == sb) {
- ret = 1;
- break;
- }
- }
- spin_unlock(&inode_lock);
- return ret;
-}
-
-int sb_has_dirty_inodes(struct super_block *sb)
-{
- struct backing_dev_info *bdi;
- int ret = 0;
-
- /*
- * This is REALLY expensive right now, but it'll go away
- * when the bdi writeback is introduced
- */
- rcu_read_lock();
- list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {
- if (sb_on_inode_list(sb, &bdi->b_dirty) ||
- sb_on_inode_list(sb, &bdi->b_io) ||
- sb_on_inode_list(sb, &bdi->b_more_io)) {
- ret = 1;
- break;
- }
- }
- rcu_read_unlock();
-
- return ret;
-}
-EXPORT_SYMBOL(sb_has_dirty_inodes);
-
/*
* Write a single inode's dirty pages and inode data out to disk.
* If `wait' is set, wait on the writeout.
@@ -461,11 +599,11 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
return __sync_single_inode(inode, wbc);
}
-static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
- struct writeback_control *wbc,
- struct super_block *sb,
- int is_blkdev_sb)
+void generic_sync_bdi_inodes(struct super_block *sb,
+ struct writeback_control *wbc)
{
+ const int is_blkdev_sb = sb_is_blkdev_sb(sb);
+ struct backing_dev_info *bdi = wbc->bdi;
const unsigned long start = jiffies; /* livelock avoidance */
spin_lock(&inode_lock);
@@ -516,13 +654,6 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
continue; /* Skip a congested blockdev */
}
- if (wbc->bdi && bdi != wbc->bdi) {
- if (!is_blkdev_sb)
- break; /* fs has the wrong queue */
- requeue_io(inode);
- continue; /* blockdev has wrong queue */
- }
-
/*
* Was this inode dirtied after sync_sb_inodes was called?
* This keeps sync from extra jobs and livelock.
@@ -530,16 +661,10 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
if (inode_dirtied_after(inode, start))
break;
- /* Is another pdflush already flushing this queue? */
- if (current_is_pdflush() && !writeback_acquire(bdi))
- break;
-
BUG_ON(inode->i_state & I_FREEING);
__iget(inode);
pages_skipped = wbc->pages_skipped;
__writeback_single_inode(inode, wbc);
- if (current_is_pdflush())
- writeback_release(bdi);
if (wbc->pages_skipped != pages_skipped) {
/*
* writeback is not making progress due to locked
@@ -578,11 +703,6 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
* a variety of queues, so all inodes are searched. For other superblocks,
* assume that all inodes are backed by the same queue.
*
- * FIXME: this linear search could get expensive with many fileystems. But
- * how to fix? We need to go from an address_space to all inodes which share
- * a queue with that address_space. (Easy: have a global "dirty superblocks"
- * list).
- *
* The inodes to be written are parked on bdi->b_io. They are moved back onto
* bdi->b_dirty as they are selected for writing. This way, none can be missed
* on the writer throttling path, and we get decent balancing between many
@@ -591,13 +711,10 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
void generic_sync_sb_inodes(struct super_block *sb,
struct writeback_control *wbc)
{
- const int is_blkdev_sb = sb_is_blkdev_sb(sb);
- struct backing_dev_info *bdi;
-
- rcu_read_lock();
- list_for_each_entry_rcu(bdi, &bdi_list, bdi_list)
- generic_sync_bdi_inodes(bdi, wbc, sb, is_blkdev_sb);
- rcu_read_unlock();
+ if (wbc->bdi)
+ bdi_start_writeback(wbc->bdi, sb, 0);
+ else
+ bdi_writeback_all(sb, 0);
if (wbc->sync_mode == WB_SYNC_ALL) {
struct inode *inode, *old_inode = NULL;
@@ -653,58 +770,6 @@ static void sync_sb_inodes(struct super_block *sb,
}
/*
- * Start writeback of dirty pagecache data against all unlocked inodes.
- *
- * Note:
- * We don't need to grab a reference to superblock here. If it has non-empty
- * ->b_dirty it's hadn't been killed yet and kill_super() won't proceed
- * past sync_inodes_sb() until the ->b_dirty/b_io/b_more_io lists are all
- * empty. Since __sync_single_inode() regains inode_lock before it finally moves
- * inode from superblock lists we are OK.
- *
- * If `older_than_this' is non-zero then only flush inodes which have a
- * flushtime older than *older_than_this.
- *
- * If `bdi' is non-zero then we will scan the first inode against each
- * superblock until we find the matching ones. One group will be the dirty
- * inodes against a filesystem. Then when we hit the dummy blockdev superblock,
- * sync_sb_inodes will seekout the blockdev which matches `bdi'. Maybe not
- * super-efficient but we're about to do a ton of I/O...
- */
-void
-writeback_inodes(struct writeback_control *wbc)
-{
- struct super_block *sb;
-
- might_sleep();
- spin_lock(&sb_lock);
-restart:
- list_for_each_entry_reverse(sb, &super_blocks, s_list) {
- if (sb_has_dirty_inodes(sb)) {
- /* we're making our own get_super here */
- sb->s_count++;
- spin_unlock(&sb_lock);
- /*
- * If we can't get the readlock, there's no sense in
- * waiting around, most of the time the FS is going to
- * be unmounted by the time it is released.
- */
- if (down_read_trylock(&sb->s_umount)) {
- if (sb->s_root)
- sync_sb_inodes(sb, wbc);
- up_read(&sb->s_umount);
- }
- spin_lock(&sb_lock);
- if (__put_super_and_need_restart(sb))
- goto restart;
- }
- if (wbc->nr_to_write <= 0)
- break;
- }
- spin_unlock(&sb_lock);
-}
-
-/*
* writeback and wait upon the filesystem's dirty inodes. The caller will
* do this in two passes - one to write, and one to wait.
*
diff --git a/fs/ntfs/super.c b/fs/ntfs/super.c
index f76951d..c4cb157 100644
--- a/fs/ntfs/super.c
+++ b/fs/ntfs/super.c
@@ -2373,39 +2373,13 @@ static void ntfs_put_super(struct super_block *sb)
vol->mftmirr_ino = NULL;
}
/*
- * If any dirty inodes are left, throw away all mft data page cache
- * pages to allow a clean umount. This should never happen any more
- * due to mft.c::ntfs_mft_writepage() cleaning all the dirty pages as
- * the underlying mft records are written out and cleaned. If it does,
+ * We should have no dirty inodes left, due to
+ * mft.c::ntfs_mft_writepage() cleaning all the dirty pages as
+ * the underlying mft records are written out and cleaned. If it does
* happen anyway, we want to know...
*/
ntfs_commit_inode(vol->mft_ino);
write_inode_now(vol->mft_ino, 1);
- if (sb_has_dirty_inodes(sb)) {
- const char *s1, *s2;
-
- mutex_lock(&vol->mft_ino->i_mutex);
- truncate_inode_pages(vol->mft_ino->i_mapping, 0);
- mutex_unlock(&vol->mft_ino->i_mutex);
- write_inode_now(vol->mft_ino, 1);
- if (sb_has_dirty_inodes(sb)) {
- static const char *_s1 = "inodes";
- static const char *_s2 = "";
- s1 = _s1;
- s2 = _s2;
- } else {
- static const char *_s1 = "mft pages";
- static const char *_s2 = "They have been thrown "
- "away. ";
- s1 = _s1;
- s2 = _s2;
- }
- ntfs_error(sb, "Dirty %s found at umount time. %sYou should "
- "run chkdsk. Please email "
- "linux-n...@lists.sourceforge.net and say "
- "that you saw this message. Thank you.", s1,
- s2);
- }
#endif /* NTFS_RW */
iput(vol->mft_ino);
diff --git a/fs/sync.c b/fs/sync.c
index 7abc65f..3887f10 100644
--- a/fs/sync.c
+++ b/fs/sync.c
@@ -23,7 +23,7 @@
*/
static void do_sync(unsigned long wait)
{
- wakeup_pdflush(0);
+ wakeup_flusher_threads(0);
sync_inodes(0); /* All mappings, inodes and their blockdevs */
vfs_dq_sync(NULL);
sync_supers(); /* Write the superblocks */
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 86668c7..a848eea 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -24,6 +24,7 @@ struct dentry;
*/
enum bdi_state {
BDI_pdflush, /* A pdflush thread is working this device */
+ BDI_pending, /* On its way to being activated */
BDI_async_congested, /* The async (write) queue is getting full */
BDI_sync_congested, /* The sync queue is getting full */
BDI_unused, /* Available bits start here */
@@ -39,8 +40,14 @@ enum bdi_stat_item {
#define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
+struct bdi_writeback_arg {
+ unsigned long nr_pages;
+ struct super_block *sb;
+};
+
struct backing_dev_info {
struct list_head bdi_list;
+ struct rcu_head rcu_head;
unsigned long ra_pages; /* max readahead in PAGE_CACHE_SIZE units */
unsigned long state; /* Always use atomic bitops on this */
@@ -60,6 +67,9 @@ struct backing_dev_info {
struct device *dev;
+ struct task_struct *task; /* writeback task */
+ wait_queue_head_t wait;
+ struct bdi_writeback_arg wb_arg; /* protected by BDI_pdflush */
struct list_head b_dirty; /* dirty inodes */
struct list_head b_io; /* parked for writeback */
struct list_head b_more_io; /* parked for more writeback */
@@ -77,10 +87,22 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
const char *fmt, ...);
int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
void bdi_unregister(struct backing_dev_info *bdi);
+int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
+ long nr_pages);
+int bdi_writeback_task(struct backing_dev_info *bdi);
+void bdi_writeback_all(struct super_block *sb, long nr_pages);
+void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
extern spinlock_t bdi_lock;
extern struct list_head bdi_list;
+static inline int bdi_has_dirty_io(struct backing_dev_info *bdi)
+{
+ return !list_empty(&bdi->b_dirty) ||
+ !list_empty(&bdi->b_io) ||
+ !list_empty(&bdi->b_more_io);
+}
+
static inline void __add_bdi_stat(struct backing_dev_info *bdi,
enum bdi_stat_item item, s64 amount)
{
@@ -196,6 +218,7 @@ int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned int max_ratio);
#define BDI_CAP_EXEC_MAP 0x00000040
#define BDI_CAP_NO_ACCT_WB 0x00000080
#define BDI_CAP_SWAP_BACKED 0x00000100
+#define BDI_CAP_FLUSH_FORKER 0x00000200
#define BDI_CAP_VMFLAGS \
(BDI_CAP_READ_MAP | BDI_CAP_WRITE_MAP | BDI_CAP_EXEC_MAP)
@@ -265,6 +288,11 @@ static inline bool bdi_cap_swap_backed(struct backing_dev_info *bdi)
return bdi->capabilities & BDI_CAP_SWAP_BACKED;
}
+static inline bool bdi_cap_flush_forker(struct backing_dev_info *bdi)
+{
+ return bdi->capabilities & BDI_CAP_FLUSH_FORKER;
+}
+
static inline bool mapping_cap_writeback_dirty(struct address_space *mapping)
{
return bdi_cap_writeback_dirty(mapping->backing_dev_info);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 6b475d4..ecdc544 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -2063,6 +2063,8 @@ extern int invalidate_inode_pages2_range(struct address_space *mapping,
pgoff_t start, pgoff_t end);
extern void generic_sync_sb_inodes(struct super_block *sb,
struct writeback_control *wbc);
+extern void generic_sync_bdi_inodes(struct super_block *sb,
+ struct writeback_control *);
extern int write_inode_now(struct inode *, int);
extern int filemap_fdatawrite(struct address_space *);
extern int filemap_flush(struct address_space *);
@@ -2180,7 +2182,6 @@ extern int bdev_read_only(struct block_device *);
extern int set_blocksize(struct block_device *, int);
extern int sb_set_blocksize(struct super_block *, int);
extern int sb_min_blocksize(struct super_block *, int);
-extern int sb_has_dirty_inodes(struct super_block *);
extern int generic_file_mmap(struct file *, struct vm_area_struct *);
extern int generic_file_readonly_mmap(struct file *, struct vm_area_struct *);
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 9344547..a8e9f78 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -99,7 +99,7 @@ static inline void inode_sync_wait(struct inode *inode)
/*
* mm/page-writeback.c
*/
-int wakeup_pdflush(long nr_pages);
+void wakeup_flusher_threads(long nr_pages);
void laptop_io_completion(void);
void laptop_sync_completion(void);
void throttle_vm_writeout(gfp_t gfp_mask);
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 883ee8a..c759449 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -1,8 +1,11 @@
#include <linux/wait.h>
#include <linux/backing-dev.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
#include <linux/fs.h>
#include <linux/pagemap.h>
+#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/module.h>
#include <linux/writeback.h>
@@ -16,7 +19,7 @@ EXPORT_SYMBOL(default_unplug_io_fn);
struct backing_dev_info default_backing_dev_info = {
.ra_pages = VM_MAX_READAHEAD * 1024 / PAGE_CACHE_SIZE,
.state = 0,
- .capabilities = BDI_CAP_MAP_COPY,
+ .capabilities = BDI_CAP_MAP_COPY | BDI_CAP_FLUSH_FORKER,
.unplug_io_fn = default_unplug_io_fn,
};
EXPORT_SYMBOL_GPL(default_backing_dev_info);
@@ -24,6 +27,7 @@ EXPORT_SYMBOL_GPL(default_backing_dev_info);
static struct class *bdi_class;
DEFINE_SPINLOCK(bdi_lock);
LIST_HEAD(bdi_list);
+LIST_HEAD(bdi_pending_list);
#ifdef CONFIG_DEBUG_FS
#include <linux/debugfs.h>
@@ -195,6 +199,146 @@ static int __init default_bdi_init(void)
}
subsys_initcall(default_bdi_init);
+static int bdi_start_fn(void *ptr)
+{
+ struct backing_dev_info *bdi = ptr;
+ struct task_struct *tsk = current;
+
+ /*
+ * Add us to the active bdi_list
+ */
+ spin_lock_bh(&bdi_lock);
+ list_add_rcu(&bdi->bdi_list, &bdi_list);
+ spin_unlock_bh(&bdi_lock);
+
+ tsk->flags |= PF_FLUSHER | PF_SWAPWRITE;
+ set_freezable();
+
+ /*
+ * Our parent may run at a different priority, just set us to normal
+ */
+ set_user_nice(tsk, 0);
+
+ /*
+ * Clear pending bit and wakeup anybody waiting to tear us down
+ */
+ clear_bit(BDI_pending, &bdi->state);
+ wake_up_bit(&bdi->state, BDI_pending);
+
+ return bdi_writeback_task(bdi);
+}
+
+static int bdi_forker_task(void *ptr)
+{
+ struct backing_dev_info *bdi, *me = ptr;
+
+ for (;;) {
+ DEFINE_WAIT(wait);
+
+ /*
+ * Should never trigger on the default bdi
+ */
+ WARN_ON(bdi_has_dirty_io(me));
+
+ prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
+ smp_mb();
+ if (list_empty(&bdi_pending_list))
+ schedule();
+ else {
+repeat:
+ bdi = NULL;
+
+ spin_lock_bh(&bdi_lock);
+ if (!list_empty(&bdi_pending_list)) {
+ bdi = list_entry(bdi_pending_list.next,
+ struct backing_dev_info,
+ bdi_list);
+ list_del_init(&bdi->bdi_list);
+ }
+ spin_unlock_bh(&bdi_lock);
+
+ /*
+ * If no bdi or bdi already got setup, continue
+ */
+ if (!bdi || bdi->task)
+ continue;
+
+ bdi->task = kthread_run(bdi_start_fn, bdi, "bdi-%s",
+ dev_name(bdi->dev));
+ /*
+ * If task creation fails, then readd the bdi to
+ * the pending list and force writeout of the bdi
+ * from this forker thread. That will free some memory
+ * and we can try again.
+ */
+ if (!bdi->task) {
+ struct writeback_control wbc = {
+ .bdi = bdi,
+ .sync_mode = WB_SYNC_NONE,
+ .older_than_this = NULL,
+ .range_cyclic = 1,
+ };
+
+ /*
+ * Add this 'bdi' to the back, so we get
+ * a chance to flush other bdi's to free
+ * memory.
+ */
+ spin_lock_bh(&bdi_lock);
+ list_add_tail(&bdi->bdi_list,
+ &bdi_pending_list);
+ spin_unlock_bh(&bdi_lock);
+
+ wbc.nr_to_write = 1024;
+ generic_sync_bdi_inodes(NULL, &wbc);
+ goto repeat;
+ }
+ }
+
+ finish_wait(&me->wait, &wait);
+ }
+
+ return 0;
+}
+
+/*
+ * Grace period has now ended, init bdi->bdi_list and add us to the
+ * list of bdi's that are pending for task creation. Wake up
+ * bdi_forker_task() to finish the job and add us back to the
+ * active bdi_list.
+ */
+static void bdi_add_to_pending(struct rcu_head *head)
+{
+ struct backing_dev_info *bdi;
+
+ bdi = container_of(head, struct backing_dev_info, rcu_head);
+ INIT_LIST_HEAD(&bdi->bdi_list);
+
+ spin_lock(&bdi_lock);
+ list_add_tail(&bdi->bdi_list, &bdi_pending_list);
+ spin_unlock(&bdi_lock);
+
+ wake_up(&default_backing_dev_info.wait);
+}
+
+void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
+{
+ if (test_and_set_bit(BDI_pending, &bdi->state))
+ return;
+
+ spin_lock_bh(&bdi_lock);
+ list_del_rcu(&bdi->bdi_list);
+ spin_unlock_bh(&bdi_lock);
+
+ /*
+ * We need to wait for the current grace period to end,
+ * in case others were browsing the bdi_list as well.
+ * So defer the adding and wakeup to after the RCU
+ * grace period has ended.
+ */
+ call_rcu(&bdi->rcu_head, bdi_add_to_pending);
+}
+
int bdi_register(struct backing_dev_info *bdi, struct device *parent,
const char *fmt, ...)
{
@@ -213,9 +357,23 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
goto exit;
}
- spin_lock(&bdi_lock);
- list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
- spin_unlock(&bdi_lock);
+ /*
+ * Just start the forker thread for our default backing_dev_info,
+ * and add other bdi's to the list. They will get a thread created
+ * on-demand when they need it.
+ */
+ if (bdi_cap_flush_forker(bdi)) {
+ bdi->task = kthread_run(bdi_forker_task, bdi, "bdi-%s",
+ dev_name(dev));
+ if (!bdi->task) {
+ ret = -ENOMEM;
+ goto exit;
+ }
+ } else {
+ spin_lock_bh(&bdi_lock);
+ list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
+ spin_unlock_bh(&bdi_lock);
+ }
bdi->dev = dev;
bdi_debug_register(bdi, dev_name(dev));
@@ -231,11 +389,22 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev)
}
EXPORT_SYMBOL(bdi_register_dev);
-static void bdi_remove_from_list(struct backing_dev_info *bdi)
+static int sched_wait(void *word)
{
- spin_lock(&bdi_lock);
+ schedule();
+ return 0;
+}
+
+static void bdi_wb_shutdown(struct backing_dev_info *bdi)
+{
+ /*
+ * If setup is pending, wait for that to complete first
+ */
+ wait_on_bit(&bdi->state, BDI_pending, sched_wait, TASK_UNINTERRUPTIBLE);
+
+ spin_lock_bh(&bdi_lock);
list_del_rcu(&bdi->bdi_list);
- spin_unlock(&bdi_lock);
+ spin_unlock_bh(&bdi_lock);
/*
* In case the bdi is freed right after unregister, we need to
@@ -247,7 +416,13 @@ static void bdi_remove_from_list(struct backing_dev_info *bdi)
void bdi_unregister(struct backing_dev_info *bdi)
{
if (bdi->dev) {
- bdi_remove_from_list(bdi);
+ if (!bdi_cap_flush_forker(bdi)) {
+ bdi_wb_shutdown(bdi);
+ if (bdi->task) {
+ kthread_stop(bdi->task);
+ bdi->task = NULL;
+ }
+ }
bdi_debug_unregister(bdi);
device_unregister(bdi->dev);
bdi->dev = NULL;
@@ -257,14 +432,15 @@ EXPORT_SYMBOL(bdi_unregister);
int bdi_init(struct backing_dev_info *bdi)
{
- int i;
- int err;
+ int i, err;
+ INIT_RCU_HEAD(&bdi->rcu_head);
bdi->dev = NULL;
bdi->min_ratio = 0;
bdi->max_ratio = 100;
bdi->max_prop_frac = PROP_FRAC_BASE;
+ init_waitqueue_head(&bdi->wait);
INIT_LIST_HEAD(&bdi->bdi_list);
INIT_LIST_HEAD(&bdi->b_io);
INIT_LIST_HEAD(&bdi->b_dirty);
@@ -283,8 +459,6 @@ int bdi_init(struct backing_dev_info *bdi)
err:
while (i--)
percpu_counter_destroy(&bdi->bdi_stat[i]);
-
- bdi_remove_from_list(bdi);
}
return err;
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 2296ff4..76269f8 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -36,15 +36,6 @@
#include <linux/pagevec.h>
/*
- * The maximum number of pages to writeout in a single bdflush/kupdate
- * operation. We do this so we don't hold I_SYNC against an inode for
- * enormous amounts of time, which would block a userspace task which has
- * been forced to throttle against that inode. Also, the code reevaluates
- * the dirty each time it has written this many pages.
- */
-#define MAX_WRITEBACK_PAGES 1024
-
-/*
* After a CPU has dirtied this many pages, balance_dirty_pages_ratelimited
* will look to see if it needs to force writeback or throttling.
*/
@@ -117,8 +108,6 @@ EXPORT_SYMBOL(laptop_mode);
/* End of sysctl-exported parameters */
-static void background_writeout(unsigned long _min_pages);
-
/*
* Scale the writeback cache size proportional to the relative writeout speeds.
*
@@ -541,7 +530,7 @@ static void balance_dirty_pages(struct address_space *mapping)
* been flushed to permanent storage.
*/
if (bdi_nr_reclaimable) {
- writeback_inodes(&wbc);
+ generic_sync_bdi_inodes(NULL, &wbc);
pages_written += write_chunk - wbc.nr_to_write;
get_dirty_limits(&background_thresh, &dirty_thresh,
&bdi_thresh, bdi);
@@ -592,7 +581,7 @@ static void balance_dirty_pages(struct address_space *mapping)
(!laptop_mode && (global_page_state(NR_FILE_DIRTY)
+ global_page_state(NR_UNSTABLE_NFS)
> background_thresh)))
- pdflush_operation(background_writeout, 0);
+ bdi_start_writeback(bdi, NULL, 0);
}
void set_page_dirty_balance(struct page *page, int page_mkwrite)
@@ -677,152 +666,36 @@ void throttle_vm_writeout(gfp_t gfp_mask)
}
/*
- * writeback at least _min_pages, and keep writing until the amount of dirty
- * memory is less than the background threshold, or until we're all clean.
- */
-static void background_writeout(unsigned long _min_pages)
-{
- long min_pages = _min_pages;
- struct writeback_control wbc = {
- .bdi = NULL,
- .sync_mode = WB_SYNC_NONE,
- .older_than_this = NULL,
- .nr_to_write = 0,
- .nonblocking = 1,
- .range_cyclic = 1,
- };
-
- for ( ; ; ) {
- unsigned long background_thresh;
- unsigned long dirty_thresh;
-
- get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
- if (global_page_state(NR_FILE_DIRTY) +
- global_page_state(NR_UNSTABLE_NFS) < background_thresh
- && min_pages <= 0)
- break;
- wbc.more_io = 0;
- wbc.encountered_congestion = 0;
- wbc.nr_to_write = MAX_WRITEBACK_PAGES;
- wbc.pages_skipped = 0;
- writeback_inodes(&wbc);
- min_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
- if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
- /* Wrote less than expected */
- if (wbc.encountered_congestion || wbc.more_io)
- congestion_wait(WRITE, HZ/10);
- else
- break;
- }
- }
-}
-
-/*
* Start writeback of `nr_pages' pages. If `nr_pages' is zero, write back
* the whole world. Returns 0 if a pdflush thread was dispatched. Returns
* -1 if all pdflush threads were busy.
*/
-int wakeup_pdflush(long nr_pages)
+void wakeup_flusher_threads(long nr_pages)
{
if (nr_pages == 0)
nr_pages = global_page_state(NR_FILE_DIRTY) +
global_page_state(NR_UNSTABLE_NFS);
- return pdflush_operation(background_writeout, nr_pages);
+ bdi_writeback_all(NULL, nr_pages);
+ return;
}
-static void wb_timer_fn(unsigned long unused);
static void laptop_timer_fn(unsigned long unused);
-static DEFINE_TIMER(wb_timer, wb_timer_fn, 0, 0);
static DEFINE_TIMER(laptop_mode_wb_timer, laptop_timer_fn, 0, 0);
/*
- * Periodic writeback of "old" data.
- *
- * Define "old": the first time one of an inode's pages is dirtied, we mark the
- * dirtying-time in the inode's address_space. So this periodic writeback code
- * just walks the superblock inode list, writing back any inodes which are
- * older than a specific point in time.
- *
- * Try to run once per dirty_writeback_interval. But if a writeback event
- * takes longer than a dirty_writeback_interval interval, then leave a
- * one-second gap.
- *
- * older_than_this takes precedence over nr_to_write. So we'll only write back
- * all dirty pages if they are all attached to "old" mappings.
- */
-static void wb_kupdate(unsigned long arg)
-{
- unsigned long oldest_jif;
- unsigned long start_jif;
- unsigned long next_jif;
- long nr_to_write;
- struct writeback_control wbc = {
- .bdi = NULL,
- .sync_mode = WB_SYNC_NONE,
- .older_than_this = &oldest_jif,
- .nr_to_write = 0,
- .nonblocking = 1,
- .for_kupdate = 1,
- .range_cyclic = 1,
- };
-
- sync_supers();
-
- oldest_jif = jiffies - msecs_to_jiffies(dirty_expire_interval * 10);
- start_jif = jiffies;
- next_jif = start_jif + msecs_to_jiffies(dirty_writeback_interval * 10);
- nr_to_write = global_page_state(NR_FILE_DIRTY) +
- global_page_state(NR_UNSTABLE_NFS) +
- (inodes_stat.nr_inodes - inodes_stat.nr_unused);
- while (nr_to_write > 0) {
- wbc.more_io = 0;
- wbc.encountered_congestion = 0;
- wbc.nr_to_write = MAX_WRITEBACK_PAGES;
- writeback_inodes(&wbc);
- if (wbc.nr_to_write > 0) {
- if (wbc.encountered_congestion || wbc.more_io)
- congestion_wait(WRITE, HZ/10);
- else
- break; /* All the old data is written */
- }
- nr_to_write -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
- }
- if (time_before(next_jif, jiffies + HZ))
- next_jif = jiffies + HZ;
- if (dirty_writeback_interval)
- mod_timer(&wb_timer, next_jif);
-}
-
-/*
* sysctl handler for /proc/sys/vm/dirty_writeback_centisecs
*/
int dirty_writeback_centisecs_handler(ctl_table *table, int write,
struct file *file, void __user *buffer, size_t *length, loff_t *ppos)
{
proc_dointvec(table, write, file, buffer, length, ppos);
- if (dirty_writeback_interval)
- mod_timer(&wb_timer, jiffies +
- msecs_to_jiffies(dirty_writeback_interval * 10));
- else
- del_timer(&wb_timer);
return 0;
}
-static void wb_timer_fn(unsigned long unused)
-{
- if (pdflush_operation(wb_kupdate, 0) < 0)
- mod_timer(&wb_timer, jiffies + HZ); /* delay 1 second */
-}
-
-static void laptop_flush(unsigned long unused)
-{
- sys_sync();
-}
-
static void laptop_timer_fn(unsigned long unused)
{
- pdflush_operation(laptop_flush, 0);
+ wakeup_flusher_threads(0);
}
/*
@@ -905,8 +778,6 @@ void __init page_writeback_init(void)
{
int shift;
- mod_timer(&wb_timer,
- jiffies + msecs_to_jiffies(dirty_writeback_interval * 10));
writeback_set_ratelimit();
register_cpu_notifier(&ratelimit_nb);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5fa3eda..e37fd38 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1654,7 +1654,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
*/
if (total_scanned > sc->swap_cluster_max +
sc->swap_cluster_max / 2) {
- wakeup_pdflush(laptop_mode ? 0 : total_scanned);
+ wakeup_flusher_threads(laptop_mode ? 0 : total_scanned);
sc->may_writepage = 1;
}
--
1.6.3.rc0.1.gf800
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/fs-writeback.c | 2 +-
include/linux/writeback.h | 1 +
mm/backing-dev.c | 30 ++++++++++++++++++------------
3 files changed, 20 insertions(+), 13 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index efdce88..d9cd3b7 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -335,7 +335,7 @@ static void wb_writeback(struct bdi_writeback *wb)
* This will be inlined in bdi_writeback_task() once we get rid of any
* dirty inodes on the default_backing_dev_info
*/
-static void wb_do_writeback(struct bdi_writeback *wb)
+void wb_do_writeback(struct bdi_writeback *wb)
{
/*
* We get here in two cases:
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index baf04a9..e414702 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -69,6 +69,7 @@ void writeback_inodes(struct writeback_control *wbc);
int inode_wait(void *);
void sync_inodes_sb(struct super_block *, int wait);
void sync_inodes(int wait);
+void wb_do_writeback(struct bdi_writeback *wb);
/* writeback.h requires fs.h; it, too, is not included from here. */
static inline void wait_on_inode(struct inode *inode)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index b4bcb14..89d6eea 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -386,20 +386,26 @@ static int bdi_forker_task(void *ptr)
struct backing_dev_info *bdi;
struct bdi_writeback *wb;
- /*
- * Should never trigger on the default bdi
- */
- if (wb_has_dirty_io(me)) {
- bdi_flush_io(me->bdi);
- WARN_ON(1);
- }
-
prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
smp_mb();
if (list_empty(&bdi_pending_list))
schedule();
+ /*
+ * Ideally we'd like not to see any dirty inodes on the
+ * default_backing_dev_info. Until these are tracked down,
+ * perform the same writeback here that bdi_writeback_task
+ * does. For logic, see comment in
+ * fs/fs-writeback.c:bdi_writeback_task()
+ */
+ if (wb_has_dirty_io(me) || !list_empty(&me->bdi->work_list))
+ wb_do_writeback(me);
+
+ /*
+ * This is our real job - check for pending entries in
+ * bdi_pending_list, and create the tasks that got added
+ */
repeat:
bdi = NULL;
spin_lock_bh(&bdi_lock);
@@ -567,12 +573,12 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
ret = -ENOMEM;
goto exit;
}
- } else {
- spin_lock_bh(&bdi_lock);
- list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
- spin_unlock_bh(&bdi_lock);
}
+ spin_lock_bh(&bdi_lock);
+ list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
+ spin_unlock_bh(&bdi_lock);
+
bdi->dev = dev;
bdi_debug_register(bdi, dev_name(dev));
--
1.6.3.rc0.1.gf800
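As a reading aid, the reshuffled forker loop condensed into one place
(names as in the hunks above; error handling and the actual forking
elided): the thread now flushes anything that landed on the default bdi
before it goes looking for pending bdis that need their own flusher task.

	for (;;) {
		prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
		smp_mb();
		if (list_empty(&bdi_pending_list))
			schedule();

		/* first flush stray work/dirty IO on the default bdi */
		if (wb_has_dirty_io(me) || !list_empty(&me->bdi->work_list))
			wb_do_writeback(me);

		/* then pop a bdi off bdi_pending_list and kthread_run() its task */
	}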
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/btrfs/disk-io.c | 23 ++++++++++++++++++-----
1 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 4b0ea0b..2dc19c9 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1345,12 +1345,24 @@ static void btrfs_unplug_io_fn(struct backing_dev_info *bdi, struct page *page)
free_extent_map(em);
}
+/*
+ * If this fails, caller must call bdi_destroy() to get rid of the
+ * bdi again.
+ */
static int setup_bdi(struct btrfs_fs_info *info, struct backing_dev_info *bdi)
{
- bdi_init(bdi);
+ int err;
+
+ bdi->capabilities = BDI_CAP_MAP_COPY;
+ err = bdi_init(bdi);
+ if (err)
+ return err;
+
+ err = bdi_register(bdi, NULL, "btrfs");
+ if (err)
+ return err;
+
bdi->ra_pages = default_backing_dev_info.ra_pages;
- bdi->state = 0;
- bdi->capabilities = default_backing_dev_info.capabilities;
bdi->unplug_io_fn = btrfs_unplug_io_fn;
bdi->unplug_io_data = info;
bdi->congested_fn = btrfs_congested_fn;
@@ -1574,7 +1586,8 @@ struct btrfs_root *open_ctree(struct super_block *sb,
fs_info->sb = sb;
fs_info->max_extent = (u64)-1;
fs_info->max_inline = 8192 * 1024;
- setup_bdi(fs_info, &fs_info->bdi);
+ if (setup_bdi(fs_info, &fs_info->bdi))
+ goto fail_bdi;
fs_info->btree_inode = new_inode(sb);
fs_info->btree_inode->i_ino = 1;
fs_info->btree_inode->i_nlink = 1;
@@ -1931,8 +1944,8 @@ fail_iput:
btrfs_close_devices(fs_info->fs_devices);
btrfs_mapping_tree_free(&fs_info->mapping_tree);
+fail_bdi:
bdi_destroy(&fs_info->bdi);
-
fail:
kfree(extent_root);
kfree(tree_root);
--
1.6.3.rc0.1.gf800
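A hypothetical caller, just to spell out the contract the new comment
states: once bdi_init() has succeeded, bdi_destroy() is the single
cleanup point, even when it was the bdi_register() step inside
setup_bdi() that failed.

	if (setup_bdi(fs_info, &fs_info->bdi))
		goto fail_bdi;	/* unwinds via bdi_destroy(&fs_info->bdi) */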
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
mm/backing-dev.c | 43 +++++++++++++++++++++++++++++++++++++++----
1 files changed, 39 insertions(+), 4 deletions(-)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 89d6eea..314b739 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -43,9 +43,33 @@ static void bdi_debug_init(void)
static int bdi_debug_stats_show(struct seq_file *m, void *v)
{
struct backing_dev_info *bdi = m->private;
+ struct bdi_writeback *wb;
unsigned long background_thresh;
unsigned long dirty_thresh;
unsigned long bdi_thresh;
+ unsigned long nr_dirty, nr_io, nr_more_io, nr_wb;
+ struct inode *inode;
+
+ /*
+ * inode lock is enough here, the bdi->wb_list is protected by
+ * RCU on the reader side
+ */
+ nr_wb = nr_dirty = nr_io = nr_more_io = 0;
+ spin_lock(&inode_lock);
+ list_for_each_entry(wb, &bdi->wb_list, list) {
+ nr_wb++;
+ list_for_each_entry(inode, &wb->b_dirty, i_list)
+ nr_dirty++;
+ list_for_each_entry(inode, &wb->b_io, i_list)
+ nr_io++;
+ list_for_each_entry(inode, &wb->b_more_io, i_list)
+ nr_more_io++;
+ }
+ spin_unlock(&inode_lock);
+
+ nr_dirty <<= (PAGE_CACHE_SHIFT - 10);
+ nr_io <<= (PAGE_CACHE_SHIFT - 10);
+ nr_more_io <<= (PAGE_CACHE_SHIFT - 10);
get_dirty_limits(&background_thresh, &dirty_thresh, &bdi_thresh, bdi);
@@ -55,12 +79,23 @@ static int bdi_debug_stats_show(struct seq_file *m, void *v)
"BdiReclaimable: %8lu kB\n"
"BdiDirtyThresh: %8lu kB\n"
"DirtyThresh: %8lu kB\n"
- "BackgroundThresh: %8lu kB\n",
+ "BackgroundThresh: %8lu kB\n"
+ "WriteBack threads:%8lu\n"
+ "b_dirty: %8lu\n"
+ "b_io: %8lu\n"
+ "b_more_io: %8lu\n"
+ "bdi: %8p\n"
+ "bdi_list: %8u\n"
+ "state: %8lx\n"
+ "wb_mask: %8lx\n"
+ "wb_list: %8u\n"
+ "wb_cnt: %8u\n",
(unsigned long) K(bdi_stat(bdi, BDI_WRITEBACK)),
(unsigned long) K(bdi_stat(bdi, BDI_RECLAIMABLE)),
- K(bdi_thresh),
- K(dirty_thresh),
- K(background_thresh));
+ K(bdi_thresh), K(dirty_thresh),
+ K(background_thresh), nr_wb, nr_dirty, nr_io, nr_more_io,
+ bdi, !list_empty(&bdi->bdi_list), bdi->state, bdi->wb_mask,
+ !list_empty(&bdi->wb_list), bdi->wb_cnt);
#undef K
return 0;
--
1.6.3.rc0.1.gf800
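One detail worth spelling out: the left shifts convert the counts
gathered above into the kB units the rest of the stats output prints. A
minimal sketch of the arithmetic (the hunk applies it to the per-list
inode counts; pages_to_kb is a made-up name, the patch open-codes the
shift):

	/*
	 * nr << (PAGE_CACHE_SHIFT - 10) turns a count of pages into
	 * kilobytes: with 4K pages PAGE_CACHE_SHIFT is 12, so this
	 * multiplies by 4. Same as nr * (PAGE_CACHE_SIZE / 1024).
	 */
	static inline unsigned long pages_to_kb(unsigned long nr)
	{
		return nr << (PAGE_CACHE_SHIFT - 10);
	}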
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/fs-writeback.c | 52 ++++++++++++++++++++++++++++++++++--------
include/linux/backing-dev.h | 5 ++++
include/linux/writeback.h | 2 +-
3 files changed, 48 insertions(+), 11 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index d9cd3b7..7e70f80 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -226,10 +226,10 @@ int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
* older_than_this takes precedence over nr_to_write. So we'll only write back
* all dirty pages if they are all attached to "old" mappings.
*/
-static void wb_kupdated(struct bdi_writeback *wb)
+static long wb_kupdated(struct bdi_writeback *wb)
{
unsigned long oldest_jif;
- long nr_to_write;
+ long nr_to_write, wrote = 0;
struct writeback_control wbc = {
.bdi = wb->bdi,
.sync_mode = WB_SYNC_NONE,
@@ -252,13 +252,16 @@ static void wb_kupdated(struct bdi_writeback *wb)
wbc.encountered_congestion = 0;
wbc.nr_to_write = MAX_WRITEBACK_PAGES;
generic_sync_wb_inodes(wb, NULL, &wbc);
+ wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;
if (wbc.nr_to_write > 0)
break; /* All the old data is written */
nr_to_write -= MAX_WRITEBACK_PAGES;
}
+
+ return wrote;
}
-static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
+static long __wb_writeback(struct bdi_writeback *wb, long nr_pages,
struct super_block *sb)
{
struct writeback_control wbc = {
@@ -267,6 +270,7 @@ static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
.older_than_this = NULL,
.range_cyclic = 1,
};
+ long wrote = 0;
for (;;) {
unsigned long background_thresh, dirty_thresh;
@@ -283,6 +287,7 @@ static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
wbc.pages_skipped = 0;
generic_sync_wb_inodes(wb, sb, &wbc);
nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
+ wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;
/*
* If we ran out of stuff to write, bail unless more_io got set
*/
@@ -292,6 +297,8 @@ static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
break;
}
}
+
+ return wrote;
}
/*
@@ -317,26 +324,31 @@ static struct bdi_work *get_next_work_item(struct backing_dev_info *bdi,
return ret;
}
-static void wb_writeback(struct bdi_writeback *wb)
+static long wb_writeback(struct bdi_writeback *wb)
{
struct backing_dev_info *bdi = wb->bdi;
struct bdi_work *work;
+ long wrote = 0;
while ((work = get_next_work_item(bdi, wb)) != NULL) {
struct super_block *sb = bdi_work_sb(work);
long nr_pages = work->nr_pages;
wb_clear_pending(wb, work);
- __wb_writeback(wb, nr_pages, sb);
+ wrote += __wb_writeback(wb, nr_pages, sb);
}
+
+ return wrote;
}
/*
* This will be inlined in bdi_writeback_task() once we get rid of any
* dirty inodes on the default_backing_dev_info
*/
-void wb_do_writeback(struct bdi_writeback *wb)
+long wb_do_writeback(struct bdi_writeback *wb)
{
+ long wrote;
+
/*
* We get here in two cases:
*
@@ -348,9 +360,11 @@ void wb_do_writeback(struct bdi_writeback *wb)
* items on the work_list. Process those.
*/
if (list_empty(&wb->bdi->work_list))
- wb_kupdated(wb);
+ wrote = wb_kupdated(wb);
else
- wb_writeback(wb);
+ wrote = wb_writeback(wb);
+
+ return wrote;
}
/*
@@ -359,12 +373,30 @@ void wb_do_writeback(struct bdi_writeback *wb)
*/
int bdi_writeback_task(struct bdi_writeback *wb)
{
+ unsigned long last_active = jiffies;
+ unsigned long wait_jiffies = -1UL;
+ long pages_written;
DEFINE_WAIT(wait);
while (!kthread_should_stop()) {
- unsigned long wait_jiffies;
- wb_do_writeback(wb);
+ pages_written = wb_do_writeback(wb);
+
+ if (pages_written)
+ last_active = jiffies;
+ else if (wait_jiffies != -1UL) {
+ unsigned long max_idle;
+
+ /*
+ * Longest period of inactivity that we tolerate. If we
+ * see dirty data again later, the task will get
+ * recreated automatically.
+ */
+ max_idle = max(5UL * 60 * HZ, wait_jiffies);
+ if (time_after(jiffies, max_idle + last_active) &&
+ wb_is_default_task(wb))
+ break;
+ }
prepare_to_wait(&wb->wait, &wait, TASK_INTERRUPTIBLE);
wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 6ccfa35..5d93237 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -112,6 +112,11 @@ int bdi_has_dirty_io(struct backing_dev_info *bdi);
extern spinlock_t bdi_lock;
extern struct list_head bdi_list;
+static inline int wb_is_default_task(struct bdi_writeback *wb)
+{
+ return wb == &wb->bdi->wb;
+}
+
static inline int bdi_wblist_needs_lock(struct backing_dev_info *bdi)
{
return test_bit(BDI_wblist_lock, &bdi->state);
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index e414702..30e318b 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -69,7 +69,7 @@ void writeback_inodes(struct writeback_control *wbc);
int inode_wait(void *);
void sync_inodes_sb(struct super_block *, int wait);
void sync_inodes(int wait);
-void wb_do_writeback(struct bdi_writeback *wb);
+long wb_do_writeback(struct bdi_writeback *wb);
/* writeback.h requires fs.h; it, too, is not included from here. */
static inline void wait_on_inode(struct inode *inode)
--
1.6.3.rc0.1.gf800
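Restating the exit rule this patch adds, as a standalone sketch with the
names from the hunk: a flusher thread that hasn't written any pages for
max(5 minutes, one writeback interval) may exit, and only the default
per-bdi task takes that exit, since the forker thread recreates it
automatically when dirty data shows up again.

	max_idle = max(5UL * 60 * HZ, wait_jiffies);
	if (time_after(jiffies, max_idle + last_active) &&
	    wb_is_default_task(wb))
		break;	/* thread exits; lazily respawned on new dirty data */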
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/fs-writeback.c | 7 +++++++
include/linux/backing-dev.h | 1 +
mm/backing-dev.c | 6 ++++++
3 files changed, 14 insertions(+), 0 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 7e70f80..a287c09 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -557,6 +557,13 @@ void __mark_inode_dirty(struct inode *inode, int flags)
*/
if (!was_dirty) {
struct bdi_writeback *wb = inode_get_wb(inode);
+ struct backing_dev_info *bdi = wb->bdi;
+
+ if (bdi_cap_writeback_dirty(bdi) &&
+ !test_bit(BDI_registered, &bdi->state)) {
+ WARN_ON(1);
+ printk("bdi-%s not registered\n", bdi->name);
+ }
inode->dirtied_when = jiffies;
list_move(&inode->i_list, &wb->b_dirty);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 14fa7b1..7c2874f 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -30,6 +30,7 @@ enum bdi_state {
BDI_wblist_lock, /* bdi->wb_list now needs locking */
BDI_async_congested, /* The async (write) queue is getting full */
BDI_sync_congested, /* The sync queue is getting full */
+ BDI_registered, /* bdi_register() was done */
BDI_unused, /* Available bits start here */
};
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 89a8385..d45251f 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -514,6 +514,11 @@ static void bdi_add_one_flusher_task(struct backing_dev_info *bdi,
if (!bdi_cap_writeback_dirty(bdi))
return;
+ if (WARN_ON(!test_bit(BDI_registered, &bdi->state))) {
+ printk("bdi %p/%s is not registered!\n", bdi, bdi->name);
+ return;
+ }
+
/*
* Check with the helper whether to proceed adding a task. Will only
* abort if two or more simultaneous calls to
@@ -617,6 +622,7 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
bdi->dev = dev;
bdi_debug_register(bdi, dev_name(dev));
+ set_bit(BDI_registered, &bdi->state);
exit:
return ret;
--
1.6.3.rc0.1.gf800
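The invariant those warnings enforce, shown as a hypothetical setup
sketch (the "example" name is made up): a writeback-capable bdi must be
registered before inodes backed by it are dirtied or flusher tasks are
added for it.

	static int example_setup(struct backing_dev_info *bdi)
	{
		int err;

		err = bdi_init(bdi);
		if (err)
			return err;

		/* sets BDI_registered; must precede any __mark_inode_dirty() */
		err = bdi_register(bdi, NULL, "example");
		if (err)
			bdi_destroy(bdi);

		return err;
	}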
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/fs-writeback.c | 196 +++++++++++++++++++++++++++---------------
fs/super.c | 3 -
include/linux/backing-dev.h | 9 ++
include/linux/fs.h | 5 +-
mm/backing-dev.c | 30 +++++++
mm/page-writeback.c | 1 -
6 files changed, 166 insertions(+), 78 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 91013ff..34c8d1d 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -25,6 +25,7 @@
#include <linux/buffer_head.h>
#include "internal.h"
+#define inode_to_bdi(inode) ((inode)->i_mapping->backing_dev_info)
/**
* writeback_acquire - attempt to get exclusive writeback access to a device
@@ -158,12 +159,13 @@ void __mark_inode_dirty(struct inode *inode, int flags)
goto out;
/*
- * If the inode was already on s_dirty/s_io/s_more_io, don't
- * reposition it (that would break s_dirty time-ordering).
+ * If the inode was already on b_dirty/b_io/b_more_io, don't
+ * reposition it (that would break b_dirty time-ordering).
*/
if (!was_dirty) {
inode->dirtied_when = jiffies;
- list_move(&inode->i_list, &sb->s_dirty);
+ list_move(&inode->i_list,
+ &inode_to_bdi(inode)->b_dirty);
}
}
out:
@@ -184,31 +186,30 @@ static int write_inode(struct inode *inode, int sync)
* furthest end of its superblock's dirty-inode list.
*
* Before stamping the inode's ->dirtied_when, we check to see whether it is
- * already the most-recently-dirtied inode on the s_dirty list. If that is
+ * already the most-recently-dirtied inode on the b_dirty list. If that is
* the case then the inode must have been redirtied while it was being written
* out and we don't reset its dirtied_when.
*/
static void redirty_tail(struct inode *inode)
{
- struct super_block *sb = inode->i_sb;
+ struct backing_dev_info *bdi = inode_to_bdi(inode);
- if (!list_empty(&sb->s_dirty)) {
- struct inode *tail_inode;
+ if (!list_empty(&bdi->b_dirty)) {
+ struct inode *tail;
- tail_inode = list_entry(sb->s_dirty.next, struct inode, i_list);
- if (time_before(inode->dirtied_when,
- tail_inode->dirtied_when))
+ tail = list_entry(bdi->b_dirty.next, struct inode, i_list);
+ if (time_before(inode->dirtied_when, tail->dirtied_when))
inode->dirtied_when = jiffies;
}
- list_move(&inode->i_list, &sb->s_dirty);
+ list_move(&inode->i_list, &bdi->b_dirty);
}
/*
- * requeue inode for re-scanning after sb->s_io list is exhausted.
+ * requeue inode for re-scanning after bdi->b_io list is exhausted.
*/
static void requeue_io(struct inode *inode)
{
- list_move(&inode->i_list, &inode->i_sb->s_more_io);
+ list_move(&inode->i_list, &inode_to_bdi(inode)->b_more_io);
}
static void inode_sync_complete(struct inode *inode)
@@ -255,18 +256,50 @@ static void move_expired_inodes(struct list_head *delaying_queue,
/*
* Queue all expired dirty inodes for io, eldest first.
*/
-static void queue_io(struct super_block *sb,
- unsigned long *older_than_this)
+static void queue_io(struct backing_dev_info *bdi,
+ unsigned long *older_than_this)
+{
+ list_splice_init(&bdi->b_more_io, bdi->b_io.prev);
+ move_expired_inodes(&bdi->b_dirty, &bdi->b_io, older_than_this);
+}
+
+static int sb_on_inode_list(struct super_block *sb, struct list_head *list)
{
- list_splice_init(&sb->s_more_io, sb->s_io.prev);
- move_expired_inodes(&sb->s_dirty, &sb->s_io, older_than_this);
+ struct inode *inode;
+ int ret = 0;
+
+ spin_lock(&inode_lock);
+ list_for_each_entry(inode, list, i_list) {
+ if (inode->i_sb == sb) {
+ ret = 1;
+ break;
+ }
+ }
+ spin_unlock(&inode_lock);
+ return ret;
}
int sb_has_dirty_inodes(struct super_block *sb)
{
- return !list_empty(&sb->s_dirty) ||
- !list_empty(&sb->s_io) ||
- !list_empty(&sb->s_more_io);
+ struct backing_dev_info *bdi;
+ int ret = 0;
+
+ /*
+ * This is REALLY expensive right now, but it'll go away
+ * when the bdi writeback is introduced
+ */
+ rcu_read_lock();
+ list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {
+ if (sb_on_inode_list(sb, &bdi->b_dirty) ||
+ sb_on_inode_list(sb, &bdi->b_io) ||
+ sb_on_inode_list(sb, &bdi->b_more_io)) {
+ ret = 1;
+ break;
+ }
+ }
+ rcu_read_unlock();
+
+ return ret;
}
EXPORT_SYMBOL(sb_has_dirty_inodes);
@@ -322,11 +355,11 @@ __sync_single_inode(struct inode *inode, struct writeback_control *wbc)
/*
* We didn't write back all the pages. nfs_writepages()
* sometimes bales out without doing anything. Redirty
- * the inode; Move it from s_io onto s_more_io/s_dirty.
+ * the inode; Move it from b_io onto b_more_io/b_dirty.
*/
/*
* akpm: if the caller was the kupdate function we put
- * this inode at the head of s_dirty so it gets first
+ * this inode at the head of b_dirty so it gets first
* consideration. Otherwise, move it to the tail, for
* the reasons described there. I'm not really sure
* how much sense this makes. Presumably I had a good
@@ -336,7 +369,7 @@ __sync_single_inode(struct inode *inode, struct writeback_control *wbc)
if (wbc->for_kupdate) {
/*
* For the kupdate function we move the inode
- * to s_more_io so it will get more writeout as
+ * to b_more_io so it will get more writeout as
* soon as the queue becomes uncongested.
*/
inode->i_state |= I_DIRTY_PAGES;
@@ -402,10 +435,10 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
if ((wbc->sync_mode != WB_SYNC_ALL) && (inode->i_state & I_SYNC)) {
/*
* We're skipping this inode because it's locked, and we're not
- * doing writeback-for-data-integrity. Move it to s_more_io so
- * that writeback can proceed with the other inodes on s_io.
+ * doing writeback-for-data-integrity. Move it to b_more_io so
+ * that writeback can proceed with the other inodes on b_io.
* We'll have another go at writing back this inode when we
- * completed a full scan of s_io.
+ * completed a full scan of b_io.
*/
requeue_io(inode);
return 0;
@@ -428,51 +461,34 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
return __sync_single_inode(inode, wbc);
}
-/*
- * Write out a superblock's list of dirty inodes. A wait will be performed
- * upon no inodes, all inodes or the final one, depending upon sync_mode.
- *
- * If older_than_this is non-NULL, then only write out inodes which
- * had their first dirtying at a time earlier than *older_than_this.
- *
- * If we're a pdflush thread, then implement pdflush collision avoidance
- * against the entire list.
- *
- * If `bdi' is non-zero then we're being asked to writeback a specific queue.
- * This function assumes that the blockdev superblock's inodes are backed by
- * a variety of queues, so all inodes are searched. For other superblocks,
- * assume that all inodes are backed by the same queue.
- *
- * FIXME: this linear search could get expensive with many fileystems. But
- * how to fix? We need to go from an address_space to all inodes which share
- * a queue with that address_space. (Easy: have a global "dirty superblocks"
- * list).
- *
- * The inodes to be written are parked on sb->s_io. They are moved back onto
- * sb->s_dirty as they are selected for writing. This way, none can be missed
- * on the writer throttling path, and we get decent balancing between many
- * throttled threads: we don't want them all piling up on inode_sync_wait.
- */
-void generic_sync_sb_inodes(struct super_block *sb,
- struct writeback_control *wbc)
+static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
+ struct writeback_control *wbc,
+ struct super_block *sb,
+ int is_blkdev_sb)
{
const unsigned long start = jiffies; /* livelock avoidance */
- int sync = wbc->sync_mode == WB_SYNC_ALL;
spin_lock(&inode_lock);
- if (!wbc->for_kupdate || list_empty(&sb->s_io))
- queue_io(sb, wbc->older_than_this);
- while (!list_empty(&sb->s_io)) {
- struct inode *inode = list_entry(sb->s_io.prev,
+ if (!wbc->for_kupdate || list_empty(&bdi->b_io))
+ queue_io(bdi, wbc->older_than_this);
+
+ while (!list_empty(&bdi->b_io)) {
+ struct inode *inode = list_entry(bdi->b_io.prev,
struct inode, i_list);
- struct address_space *mapping = inode->i_mapping;
- struct backing_dev_info *bdi = mapping->backing_dev_info;
long pages_skipped;
+ /*
+ * super block given and doesn't match, skip this inode
+ */
+ if (sb && sb != inode->i_sb) {
+ redirty_tail(inode);
+ continue;
+ }
+
if (!bdi_cap_writeback_dirty(bdi)) {
redirty_tail(inode);
- if (sb_is_blkdev_sb(sb)) {
+ if (is_blkdev_sb) {
/*
* Dirty memory-backed blockdev: the ramdisk
* driver does this. Skip just this inode
@@ -494,14 +510,14 @@ void generic_sync_sb_inodes(struct super_block *sb,
if (wbc->nonblocking && bdi_write_congested(bdi)) {
wbc->encountered_congestion = 1;
- if (!sb_is_blkdev_sb(sb))
+ if (!is_blkdev_sb)
break; /* Skip a congested fs */
requeue_io(inode);
continue; /* Skip a congested blockdev */
}
if (wbc->bdi && bdi != wbc->bdi) {
- if (!sb_is_blkdev_sb(sb))
+ if (!is_blkdev_sb)
break; /* fs has the wrong queue */
requeue_io(inode);
continue; /* blockdev has wrong queue */
@@ -539,13 +555,55 @@ void generic_sync_sb_inodes(struct super_block *sb,
wbc->more_io = 1;
break;
}
- if (!list_empty(&sb->s_more_io))
+ if (!list_empty(&bdi->b_more_io))
wbc->more_io = 1;
}
- if (sync) {
+ spin_unlock(&inode_lock);
+ /* Leave any unwritten inodes on b_io */
+}
+
+/*
+ * Write out a superblock's list of dirty inodes. A wait will be performed
+ * upon no inodes, all inodes or the final one, depending upon sync_mode.
+ *
+ * If older_than_this is non-NULL, then only write out inodes which
+ * had their first dirtying at a time earlier than *older_than_this.
+ *
+ * If we're a pdflush thread, then implement pdflush collision avoidance
+ * against the entire list.
+ *
+ * If `bdi' is non-zero then we're being asked to write back a specific queue.
+ * This function assumes that the blockdev superblock's inodes are backed by
+ * a variety of queues, so all inodes are searched. For other superblocks,
+ * assume that all inodes are backed by the same queue.
+ *
+ * FIXME: this linear search could get expensive with many filesystems. But
+ * how to fix? We need to go from an address_space to all inodes which share
+ * a queue with that address_space. (Easy: have a global "dirty superblocks"
+ * list).
+ *
+ * The inodes to be written are parked on bdi->b_io. They are moved back onto
+ * bdi->b_dirty as they are selected for writing. This way, none can be missed
+ * on the writer throttling path, and we get decent balancing between many
+ * throttled threads: we don't want them all piling up on inode_sync_wait.
+ */
+void generic_sync_sb_inodes(struct super_block *sb,
+ struct writeback_control *wbc)
+{
+ const int is_blkdev_sb = sb_is_blkdev_sb(sb);
+ struct backing_dev_info *bdi;
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(bdi, &bdi_list, bdi_list)
+ generic_sync_bdi_inodes(bdi, wbc, sb, is_blkdev_sb);
+ rcu_read_unlock();
+
+ if (wbc->sync_mode == WB_SYNC_ALL) {
struct inode *inode, *old_inode = NULL;
+ spin_lock(&inode_lock);
+
/*
* Data integrity sync. Must wait for all pages under writeback,
* because there may have been pages dirtied before our sync
@@ -583,10 +641,8 @@ void generic_sync_sb_inodes(struct super_block *sb,
}
spin_unlock(&inode_lock);
iput(old_inode);
- } else
- spin_unlock(&inode_lock);
+ }
- return; /* Leave any unwritten inodes on s_io */
}
EXPORT_SYMBOL_GPL(generic_sync_sb_inodes);
@@ -601,8 +657,8 @@ static void sync_sb_inodes(struct super_block *sb,
*
* Note:
* We don't need to grab a reference to superblock here. If it has non-empty
- * ->s_dirty it's hadn't been killed yet and kill_super() won't proceed
- * past sync_inodes_sb() until the ->s_dirty/s_io/s_more_io lists are all
+ * ->b_dirty it hadn't been killed yet and kill_super() won't proceed
+ * past sync_inodes_sb() until the ->b_dirty/b_io/b_more_io lists are all
* empty. Since __sync_single_inode() regains inode_lock before it finally moves
* inode from superblock lists we are OK.
*
diff --git a/fs/super.c b/fs/super.c
index 1943fdf..76dd5b2 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -64,9 +64,6 @@ static struct super_block *alloc_super(struct file_system_type *type)
s = NULL;
goto out;
}
- INIT_LIST_HEAD(&s->s_dirty);
- INIT_LIST_HEAD(&s->s_io);
- INIT_LIST_HEAD(&s->s_more_io);
INIT_LIST_HEAD(&s->s_files);
INIT_LIST_HEAD(&s->s_instances);
INIT_HLIST_HEAD(&s->s_anon);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 0ec2c59..86668c7 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -40,6 +40,8 @@ enum bdi_stat_item {
#define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
struct backing_dev_info {
+ struct list_head bdi_list;
+
unsigned long ra_pages; /* max readahead in PAGE_CACHE_SIZE units */
unsigned long state; /* Always use atomic bitops on this */
unsigned int capabilities; /* Device capabilities */
@@ -58,6 +60,10 @@ struct backing_dev_info {
struct device *dev;
+ struct list_head b_dirty; /* dirty inodes */
+ struct list_head b_io; /* parked for writeback */
+ struct list_head b_more_io; /* parked for more writeback */
+
#ifdef CONFIG_DEBUG_FS
struct dentry *debug_dir;
struct dentry *debug_stats;
@@ -72,6 +78,9 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
void bdi_unregister(struct backing_dev_info *bdi);
+extern spinlock_t bdi_lock;
+extern struct list_head bdi_list;
+
static inline void __add_bdi_stat(struct backing_dev_info *bdi,
enum bdi_stat_item item, s64 amount)
{
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 3b534e5..6b475d4 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -712,7 +712,7 @@ static inline int mapping_writably_mapped(struct address_space *mapping)
struct inode {
struct hlist_node i_hash;
- struct list_head i_list;
+ struct list_head i_list; /* backing dev IO list */
struct list_head i_sb_list;
struct list_head i_dentry;
unsigned long i_ino;
@@ -1329,9 +1329,6 @@ struct super_block {
struct xattr_handler **s_xattr;
struct list_head s_inodes; /* all inodes */
- struct list_head s_dirty; /* dirty inodes */
- struct list_head s_io; /* parked for writeback */
- struct list_head s_more_io; /* parked for more writeback */
struct hlist_head s_anon; /* anonymous dentries for (nfs) exporting */
struct list_head s_files;
/* s_dentry_lru and s_nr_dentry_unused are protected by dcache_lock */
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 493b468..883ee8a 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -22,6 +22,8 @@ struct backing_dev_info default_backing_dev_info = {
EXPORT_SYMBOL_GPL(default_backing_dev_info);
static struct class *bdi_class;
+DEFINE_SPINLOCK(bdi_lock);
+LIST_HEAD(bdi_list);
#ifdef CONFIG_DEBUG_FS
#include <linux/debugfs.h>
@@ -211,6 +213,10 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
goto exit;
}
+ spin_lock(&bdi_lock);
+ list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
+ spin_unlock(&bdi_lock);
+
bdi->dev = dev;
bdi_debug_register(bdi, dev_name(dev));
@@ -225,9 +231,23 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev)
}
EXPORT_SYMBOL(bdi_register_dev);
+static void bdi_remove_from_list(struct backing_dev_info *bdi)
+{
+ spin_lock(&bdi_lock);
+ list_del_rcu(&bdi->bdi_list);
+ spin_unlock(&bdi_lock);
+
+ /*
+ * In case the bdi is freed right after unregister, we need to
+ * make sure any RCU sections have exited
+ */
+ synchronize_rcu();
+}
+
void bdi_unregister(struct backing_dev_info *bdi)
{
if (bdi->dev) {
+ bdi_remove_from_list(bdi);
bdi_debug_unregister(bdi);
device_unregister(bdi->dev);
bdi->dev = NULL;
@@ -245,6 +265,10 @@ int bdi_init(struct backing_dev_info *bdi)
bdi->min_ratio = 0;
bdi->max_ratio = 100;
bdi->max_prop_frac = PROP_FRAC_BASE;
+ INIT_LIST_HEAD(&bdi->bdi_list);
+ INIT_LIST_HEAD(&bdi->b_io);
+ INIT_LIST_HEAD(&bdi->b_dirty);
+ INIT_LIST_HEAD(&bdi->b_more_io);
for (i = 0; i < NR_BDI_STAT_ITEMS; i++) {
err = percpu_counter_init(&bdi->bdi_stat[i], 0);
@@ -259,6 +283,8 @@ int bdi_init(struct backing_dev_info *bdi)
err:
while (i--)
percpu_counter_destroy(&bdi->bdi_stat[i]);
+
+ bdi_remove_from_list(bdi);
}
return err;
@@ -269,6 +295,10 @@ void bdi_destroy(struct backing_dev_info *bdi)
{
int i;
+ WARN_ON(!list_empty(&bdi->b_dirty));
+ WARN_ON(!list_empty(&bdi->b_io));
+ WARN_ON(!list_empty(&bdi->b_more_io));
+
bdi_unregister(bdi);
for (i = 0; i < NR_BDI_STAT_ITEMS; i++)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index bb553c3..2296ff4 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -319,7 +319,6 @@ static void task_dirty_limit(struct task_struct *tsk, long *pdirty)
/*
*
*/
-static DEFINE_SPINLOCK(bdi_lock);
static unsigned int bdi_min_ratio;
int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
--
1.6.3.rc0.1.gf800
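For orientation, the inode flow this patch moves from the super_block to
the bdi, sketched as a comment (helper names from the hunks above):

	/*
	 *  __mark_inode_dirty()  ->  bdi->b_dirty   (time-ordered)
	 *  queue_io()            ->  bdi->b_io      (expired inodes queued for
	 *                                            writeout, b_more_io spliced
	 *                                            back in for another pass)
	 *  requeue_io()          ->  bdi->b_more_io (needs more writeout once
	 *                                            b_io is exhausted)
	 *  redirty_tail()        ->  bdi->b_dirty   (redirtied or skipped)
	 */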
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/fs-writeback.c | 328 ++++++++++++++++++++++++++++++++-----------
include/linux/backing-dev.h | 31 ++++-
include/linux/fs.h | 3 +
mm/backing-dev.c | 257 ++++++++++++++++++++++++++--------
mm/page-writeback.c | 4 +-
5 files changed, 479 insertions(+), 144 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 50e21e8..efdce88 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -34,84 +34,175 @@
*/
int nr_pdflush_threads;
-/**
- * writeback_acquire - attempt to get exclusive writeback access to a device
- * @bdi: the device's backing_dev_info structure
- *
- * It is a waste of resources to have more than one pdflush thread blocked on
- * a single request queue. Exclusion at the request_queue level is obtained
- * via a flag in the request_queue's backing_dev_info.state.
- *
- * Non-request_queue-backed address_spaces will share default_backing_dev_info,
- * unless they implement their own. Which is somewhat inefficient, as this
- * may prevent concurrent writeback against multiple devices.
+static void generic_sync_wb_inodes(struct bdi_writeback *wb,
+ struct super_block *sb,
+ struct writeback_control *wbc);
+
+/*
+ * Work items for the bdi_writeback threads
*/
-static int writeback_acquire(struct bdi_writeback *wb)
+struct bdi_work {
+ struct list_head list;
+ struct rcu_head rcu_head;
+
+ unsigned long seen;
+ atomic_t pending;
+
+ unsigned long sb_data;
+ unsigned long nr_pages;
+
+ unsigned long state;
+};
+
+static struct super_block *bdi_work_sb(struct bdi_work *work)
{
- struct backing_dev_info *bdi = wb->bdi;
+ return (struct super_block *) (work->sb_data & ~1UL);
+}
+
+static inline bool bdi_work_on_stack(struct bdi_work *work)
+{
+ return work->sb_data & 1UL;
+}
+
+static inline void bdi_work_init(struct bdi_work *work, struct super_block *sb,
+ unsigned long nr_pages)
+{
+ INIT_RCU_HEAD(&work->rcu_head);
+ work->sb_data = (unsigned long) sb;
+ work->nr_pages = nr_pages;
+ work->state = 0;
+}
- return !test_and_set_bit(wb->nr, &bdi->wb_active);
+static inline void bdi_work_init_on_stack(struct bdi_work *work,
+ struct super_block *sb,
+ unsigned long nr_pages)
+{
+ bdi_work_init(work, sb, nr_pages);
+ set_bit(0, &work->state);
+ work->sb_data |= 1UL;
}
/**
* writeback_in_progress - determine whether there is writeback in progress
* @bdi: the device's backing_dev_info structure.
*
- * Determine whether there is writeback in progress against a backing device.
+ * Determine whether there is writeback waiting to be handled against a
+ * backing device.
*/
int writeback_in_progress(struct backing_dev_info *bdi)
{
- return bdi->wb_active != 0;
+ return !list_empty(&bdi->work_list);
}
-/**
- * writeback_release - relinquish exclusive writeback access against a device.
- * @bdi: the device's backing_dev_info structure
- */
-static void writeback_release(struct bdi_writeback *wb)
+static void bdi_work_free(struct rcu_head *head)
{
- struct backing_dev_info *bdi = wb->bdi;
+ struct bdi_work *work = container_of(head, struct bdi_work, rcu_head);
- wb->nr_pages = 0;
- wb->sb = NULL;
- clear_bit(wb->nr, &bdi->wb_active);
+ if (!bdi_work_on_stack(work))
+ kfree(work);
+ else {
+ clear_bit(0, &work->state);
+ wake_up_bit(&work->state, 0);
+ }
}
-static void wb_start_writeback(struct bdi_writeback *wb, struct super_block *sb,
- long nr_pages)
+static void wb_clear_pending(struct bdi_writeback *wb, struct bdi_work *work)
{
- if (!wb_has_dirty_io(wb))
- return;
+ /*
+ * The caller has retrieved the work arguments from this work,
+ * drop our reference. If this is the last ref, delete and free it
+ */
+ if (atomic_dec_and_test(&work->pending)) {
+ struct backing_dev_info *bdi = wb->bdi;
- if (writeback_acquire(wb)) {
- wb->nr_pages = nr_pages;
- wb->sb = sb;
+ spin_lock(&bdi->wb_lock);
+ list_del_rcu(&work->list);
+ spin_unlock(&bdi->wb_lock);
+
+ call_rcu(&work->rcu_head, bdi_work_free);
+ }
+}
+
+static void wb_start_writeback(struct bdi_writeback *wb, struct bdi_work *work)
+{
+ /*
+ * If we failed to allocate the bdi work item, always wake up the wb
+ * thread. As a safety precaution, it'll then flush out everything
+ */
+ if (!wb_has_dirty_io(wb) && work)
+ wb_clear_pending(wb, work);
+ else
+ wake_up(&wb->wait);
+}
+
+static int bdi_queue_writeback(struct backing_dev_info *bdi,
+ struct bdi_work *work)
+{
+ if (work) {
+ work->seen = bdi->wb_mask;
+ atomic_set(&work->pending, bdi->wb_cnt);
/*
- * make above store seen before the task is woken
+ * Make sure stores are seen before it appears on the list
*/
smp_mb();
- wake_up(&wb->wait);
+
+ spin_lock(&bdi->wb_lock);
+ list_add_tail_rcu(&work->list, &bdi->work_list);
+ spin_unlock(&bdi->wb_lock);
}
-}
-int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
- long nr_pages)
-{
/*
* This only happens the first time someone kicks this bdi, so put
* it out-of-line.
*/
- if (unlikely(!bdi->wb.task)) {
+ if (unlikely(list_empty_careful(&bdi->wb_list))) {
bdi_add_default_flusher_task(bdi);
return 1;
}
- wb_start_writeback(&bdi->wb, sb, nr_pages);
+ if (!bdi_wblist_needs_lock(bdi))
+ wb_start_writeback(&bdi->wb, work);
+ else {
+ struct bdi_writeback *wb;
+ int idx;
+
+ idx = srcu_read_lock(&bdi->srcu);
+
+ list_for_each_entry_rcu(wb, &bdi->wb_list, list)
+ wb_start_writeback(wb, work);
+
+ srcu_read_unlock(&bdi->srcu, idx);
+ }
+
return 0;
}
/*
+ * Used for on-stack allocated work items. The caller needs to wait until
+ * the wb threads have acked the work before it's safe to continue.
+ */
+static void bdi_wait_on_work_start(struct bdi_work *work)
+{
+ wait_on_bit(&work->state, 0, bdi_sched_wait, TASK_UNINTERRUPTIBLE);
+}
+
+int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
+ long nr_pages)
+{
+ struct bdi_work work;
+ int ret;
+
+ bdi_work_init_on_stack(&work, sb, nr_pages);
+
+ ret = bdi_queue_writeback(bdi, &work);
+
+ bdi_wait_on_work_start(&work);
+
+ return ret;
+}
+
+/*
* The maximum number of pages to writeout in a single bdi flush/kupdate
* operation. We do this so we don't hold I_SYNC against an inode for
* enormous amounts of time, which would block a userspace task which has
@@ -160,18 +251,15 @@ static void wb_kupdated(struct bdi_writeback *wb)
wbc.more_io = 0;
wbc.encountered_congestion = 0;
wbc.nr_to_write = MAX_WRITEBACK_PAGES;
- generic_sync_bdi_inodes(NULL, &wbc);
+ generic_sync_wb_inodes(wb, NULL, &wbc);
if (wbc.nr_to_write > 0)
break; /* All the old data is written */
nr_to_write -= MAX_WRITEBACK_PAGES;
}
}
-static void generic_sync_wb_inodes(struct bdi_writeback *wb,
- struct super_block *sb,
- struct writeback_control *wbc);
-
-static void wb_writeback(struct bdi_writeback *wb)
+static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
+ struct super_block *sb)
{
struct writeback_control wbc = {
.bdi = wb->bdi,
@@ -179,10 +267,10 @@ static void wb_writeback(struct bdi_writeback *wb)
.older_than_this = NULL,
.range_cyclic = 1,
};
- long nr_pages = wb->nr_pages;
for (;;) {
unsigned long background_thresh, dirty_thresh;
+
get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
if ((global_page_state(NR_FILE_DIRTY) +
global_page_state(NR_UNSTABLE_NFS) < background_thresh) &&
@@ -193,7 +281,7 @@ static void wb_writeback(struct bdi_writeback *wb)
wbc.encountered_congestion = 0;
wbc.nr_to_write = MAX_WRITEBACK_PAGES;
wbc.pages_skipped = 0;
- generic_sync_wb_inodes(wb, wb->sb, &wbc);
+ generic_sync_wb_inodes(wb, sb, &wbc);
nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
/*
* If we ran out of stuff to write, bail unless more_io got set
@@ -207,69 +295,135 @@ static void wb_writeback(struct bdi_writeback *wb)
}
/*
+ * Return the next bdi_work struct that hasn't been processed by this
+ * wb thread yet
+ */
+static struct bdi_work *get_next_work_item(struct backing_dev_info *bdi,
+ struct bdi_writeback *wb)
+{
+ struct bdi_work *work, *ret = NULL;
+
+ rcu_read_lock();
+
+ list_for_each_entry_rcu(work, &bdi->work_list, list) {
+ if (!test_and_clear_bit(wb->nr, &work->seen))
+ continue;
+
+ ret = work;
+ break;
+ }
+
+ rcu_read_unlock();
+ return ret;
+}
+
+static void wb_writeback(struct bdi_writeback *wb)
+{
+ struct backing_dev_info *bdi = wb->bdi;
+ struct bdi_work *work;
+
+ while ((work = get_next_work_item(bdi, wb)) != NULL) {
+ struct super_block *sb = bdi_work_sb(work);
+ long nr_pages = work->nr_pages;
+
+ wb_clear_pending(wb, work);
+ __wb_writeback(wb, nr_pages, sb);
+ }
+}
+
+/*
+ * This will be inlined in bdi_writeback_task() once we get rid of any
+ * dirty inodes on the default_backing_dev_info
+ */
+static void wb_do_writeback(struct bdi_writeback *wb)
+{
+ /*
+ * We get here in two cases:
+ *
+ * schedule_timeout() returned because the dirty writeback
+ * interval has elapsed. If that happens, the work item list
+ * will be empty and we will proceed to do kupdated style writeout.
+ *
+ * Someone called bdi_start_writeback(), which put one/more work
+ * items on the work_list. Process those.
+ */
+ if (list_empty(&wb->bdi->work_list))
+ wb_kupdated(wb);
+ else
+ wb_writeback(wb);
+}
+
+/*
* Handle writeback of dirty data for the device backed by this bdi. Also
* wakes up periodically and does kupdated style flushing.
*/
int bdi_writeback_task(struct bdi_writeback *wb)
{
+ DEFINE_WAIT(wait);
+
while (!kthread_should_stop()) {
unsigned long wait_jiffies;
- DEFINE_WAIT(wait);
+
+ wb_do_writeback(wb);
prepare_to_wait(&wb->wait, &wait, TASK_INTERRUPTIBLE);
wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
schedule_timeout(wait_jiffies);
try_to_freeze();
-
- /*
- * We get here in two cases:
- *
- * schedule_timeout() returned because the dirty writeback
- * interval has elapsed. If that happens, we will be able
- * to acquire the writeback lock and will proceed to do
- * kupdated style writeout.
- *
- * Someone called bdi_start_writeback(), which will acquire
- * the writeback lock. This means our writeback_acquire()
- * below will fail and we call into bdi_pdflush() for
- * pdflush style writeout.
- *
- */
- if (writeback_acquire(wb))
- wb_kupdated(wb);
- else
- wb_writeback(wb);
-
- writeback_release(wb);
- finish_wait(&wb->wait, &wait);
}
+ finish_wait(&wb->wait, &wait);
return 0;
}
void bdi_writeback_all(struct super_block *sb, long nr_pages)
{
- struct backing_dev_info *bdi;
+ struct list_head *entry = &bdi_list;
rcu_read_lock();
-restart:
- list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {
+ list_for_each_continue_rcu(entry, &bdi_list) {
+ struct backing_dev_info *bdi;
+ struct list_head *next;
+ struct bdi_work *work;
+
+ bdi = list_entry(entry, struct backing_dev_info, bdi_list);
if (!bdi_has_dirty_io(bdi))
continue;
- if (bdi_start_writeback(bdi, sb, nr_pages))
- goto restart;
+
+ /*
+ * If this allocation fails, we just wakeup the thread and
+ * let it do kupdate writeback
+ */
+ work = kmalloc(sizeof(*work), GFP_ATOMIC);
+ if (work)
+ bdi_work_init(work, sb, nr_pages);
+
+ /*
+ * Prepare to start from previous entry if this one gets moved
+ * to the bdi_pending list.
+ */
+ next = entry->prev;
+ if (bdi_queue_writeback(bdi, work))
+ entry = next;
}
rcu_read_unlock();
}
/*
- * We have only a single wb per bdi, so just return that.
+ * If the filesystem didn't provide a way to map an inode to a dedicated
+ * flusher thread, it doesn't support more than 1 thread. So we know it's
+ * the default thread, return that.
*/
static inline struct bdi_writeback *inode_get_wb(struct inode *inode)
{
- return &inode_to_bdi(inode)->wb;
+ const struct super_operations *sop = inode->i_sb->s_op;
+
+ if (!sop->inode_get_wb)
+ return &inode_to_bdi(inode)->wb;
+
+ return sop->inode_get_wb(inode);
}
/**
@@ -723,8 +877,24 @@ void generic_sync_bdi_inodes(struct super_block *sb,
struct writeback_control *wbc)
{
struct backing_dev_info *bdi = wbc->bdi;
+ struct bdi_writeback *wb;
+
+ /*
+ * Common case is just a single wb thread and that is embedded in
+ * the bdi, so it doesn't need locking
+ */
+ if (!bdi_wblist_needs_lock(bdi))
+ generic_sync_wb_inodes(&bdi->wb, sb, wbc);
+ else {
+ int idx;
- generic_sync_wb_inodes(&bdi->wb, sb, wbc);
+ idx = srcu_read_lock(&bdi->srcu);
+
+ list_for_each_entry_rcu(wb, &bdi->wb_list, list)
+ generic_sync_wb_inodes(wb, sb, wbc);
+
+ srcu_read_unlock(&bdi->srcu, idx);
+ }
}
/*
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index a0c70f1..6ccfa35 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -13,6 +13,8 @@
#include <linux/proportions.h>
#include <linux/kernel.h>
#include <linux/fs.h>
+#include <linux/sched.h>
+#include <linux/srcu.h>
#include <asm/atomic.h>
struct page;
@@ -25,6 +27,7 @@ struct dentry;
enum bdi_state {
BDI_pending, /* On its way to being activated */
BDI_wb_alloc, /* Default embedded wb allocated */
+ BDI_wblist_lock, /* bdi->wb_list now needs locking */
BDI_async_congested, /* The async (write) queue is getting full */
BDI_sync_congested, /* The sync queue is getting full */
BDI_unused, /* Available bits start here */
@@ -41,6 +44,8 @@ enum bdi_stat_item {
#define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
struct bdi_writeback {
+ struct list_head list; /* hangs off the bdi */
+
struct backing_dev_info *bdi; /* our parent bdi */
unsigned int nr;
@@ -49,13 +54,13 @@ struct bdi_writeback {
struct list_head b_dirty; /* dirty inodes */
struct list_head b_io; /* parked for writeback */
struct list_head b_more_io; /* parked for more writeback */
-
- unsigned long nr_pages;
- struct super_block *sb;
};
+#define BDI_MAX_FLUSHERS 32
+
struct backing_dev_info {
struct rcu_head rcu_head;
+ struct srcu_struct srcu; /* for wb_list read side protection */
struct list_head bdi_list;
unsigned long ra_pages; /* max readahead in PAGE_CACHE_SIZE units */
unsigned long state; /* Always use atomic bitops on this */
@@ -74,8 +79,12 @@ struct backing_dev_info {
unsigned int max_ratio, max_prop_frac;
struct bdi_writeback wb; /* default writeback info for this bdi */
- unsigned long wb_active; /* bitmap of active tasks */
- unsigned long wb_mask; /* number of registered tasks */
+ spinlock_t wb_lock; /* protects update side of wb_list */
+ struct list_head wb_list; /* the flusher threads hanging off this bdi */
+ unsigned long wb_mask; /* bitmask of registered tasks */
+ unsigned int wb_cnt; /* number of registered tasks */
+
+ struct list_head work_list;
struct device *dev;
@@ -97,11 +106,17 @@ int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
int bdi_writeback_task(struct bdi_writeback *wb);
void bdi_writeback_all(struct super_block *sb, long nr_pages);
void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
+void bdi_add_flusher_task(struct backing_dev_info *bdi);
int bdi_has_dirty_io(struct backing_dev_info *bdi);
extern spinlock_t bdi_lock;
extern struct list_head bdi_list;
+static inline int bdi_wblist_needs_lock(struct backing_dev_info *bdi)
+{
+ return test_bit(BDI_wblist_lock, &bdi->state);
+}
+
static inline int wb_has_dirty_io(struct bdi_writeback *wb)
{
return !list_empty(&wb->b_dirty) ||
@@ -314,4 +329,10 @@ static inline bool mapping_cap_swap_backed(struct address_space *mapping)
return bdi_cap_swap_backed(mapping->backing_dev_info);
}
+static inline int bdi_sched_wait(void *word)
+{
+ schedule();
+ return 0;
+}
+
#endif /* _LINUX_BACKING_DEV_H */
diff --git a/include/linux/fs.h b/include/linux/fs.h
index ecdc544..d3bda5d 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1550,11 +1550,14 @@ extern ssize_t vfs_readv(struct file *, const struct iovec __user *,
extern ssize_t vfs_writev(struct file *, const struct iovec __user *,
unsigned long, loff_t *);
+struct bdi_writeback;
+
struct super_operations {
struct inode *(*alloc_inode)(struct super_block *sb);
void (*destroy_inode)(struct inode *);
void (*dirty_inode) (struct inode *);
+ struct bdi_writeback *(*inode_get_wb) (struct inode *);
int (*write_inode) (struct inode *, int);
void (*drop_inode) (struct inode *);
void (*delete_inode) (struct inode *);
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 677a8c6..b4bcb14 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -199,7 +199,42 @@ static int __init default_bdi_init(void)
}
subsys_initcall(default_bdi_init);
-static void bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
+static int wb_assign_nr(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+{
+ unsigned long mask = BDI_MAX_FLUSHERS - 1;
+ unsigned int nr;
+
+ do {
+ if ((bdi->wb_mask & mask) == mask)
+ return 1;
+
+ nr = find_first_zero_bit(&bdi->wb_mask, BDI_MAX_FLUSHERS);
+ } while (test_and_set_bit(nr, &bdi->wb_mask));
+
+ wb->nr = nr;
+
+ spin_lock(&bdi->wb_lock);
+ bdi->wb_cnt++;
+ spin_unlock(&bdi->wb_lock);
+
+ return 0;
+}
+
+static void bdi_put_wb(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+{
+ clear_bit(wb->nr, &bdi->wb_mask);
+
+ if (wb == &bdi->wb)
+ clear_bit(BDI_wb_alloc, &bdi->state);
+ else
+ kfree(wb);
+
+ spin_lock(&bdi->wb_lock);
+ bdi->wb_cnt--;
+ spin_unlock(&bdi->wb_lock);
+}
+
+static int bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
{
memset(wb, 0, sizeof(*wb));
@@ -208,6 +243,30 @@ static void bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
INIT_LIST_HEAD(&wb->b_dirty);
INIT_LIST_HEAD(&wb->b_io);
INIT_LIST_HEAD(&wb->b_more_io);
+
+ return wb_assign_nr(bdi, wb);
+}
+
+static struct bdi_writeback *bdi_new_wb(struct backing_dev_info *bdi)
+{
+ struct bdi_writeback *wb;
+
+ /*
+ * Use the default bdi->wb if it isn't claimed yet, else allocate one
+ */
+ if (!test_and_set_bit(BDI_wb_alloc, &bdi->state))
+ wb = &bdi->wb;
+ else {
+ wb = kmalloc(sizeof(struct bdi_writeback), GFP_KERNEL);
+ if (wb) {
+ if (bdi_wb_init(wb, bdi)) {
+ kfree(wb);
+ wb = NULL;
+ }
+ }
+ }
+
+ return wb;
}
static void bdi_flush_io(struct backing_dev_info *bdi)
@@ -223,35 +282,26 @@ static void bdi_flush_io(struct backing_dev_info *bdi)
generic_sync_bdi_inodes(NULL, &wbc);
}
-static int wb_assign_nr(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+static void bdi_task_init(struct backing_dev_info *bdi,
+ struct bdi_writeback *wb)
{
- set_bit(0, &bdi->wb_mask);
- wb->nr = 0;
- return 0;
-}
+ struct task_struct *tsk = current;
+ int was_empty;
-static void bdi_put_wb(struct backing_dev_info *bdi, struct bdi_writeback *wb)
-{
- clear_bit(wb->nr, &bdi->wb_mask);
- clear_bit(BDI_wb_alloc, &bdi->state);
-}
+ /*
+ * Add us to the bdi's wb_list. If we are adding threads beyond
+ * the default embedded bdi_writeback, then we need to start using
+ * proper locking. Check the list for empty first, then set the
+ * BDI_wblist_lock flag if there's > 1 entry on the list now
+ */
+ spin_lock(&bdi->wb_lock);
-static struct bdi_writeback *bdi_new_wb(struct backing_dev_info *bdi)
-{
- struct bdi_writeback *wb;
+ was_empty = list_empty(&bdi->wb_list);
+ list_add_tail_rcu(&wb->list, &bdi->wb_list);
+ if (!was_empty)
+ set_bit(BDI_wblist_lock, &bdi->state);
- set_bit(BDI_wb_alloc, &bdi->state);
- wb = &bdi->wb;
- wb_assign_nr(bdi, wb);
- return wb;
-}
-
-static int bdi_start_fn(void *ptr)
-{
- struct bdi_writeback *wb = ptr;
- struct backing_dev_info *bdi = wb->bdi;
- struct task_struct *tsk = current;
- int ret;
+ spin_unlock(&bdi->wb_lock);
tsk->flags |= PF_FLUSHER | PF_SWAPWRITE;
set_freezable();
@@ -260,6 +310,15 @@ static int bdi_start_fn(void *ptr)
* Our parent may run at a different priority, just set us to normal
*/
set_user_nice(tsk, 0);
+}
+
+static int bdi_start_fn(void *ptr)
+{
+ struct bdi_writeback *wb = ptr;
+ struct backing_dev_info *bdi = wb->bdi;
+ int ret;
+
+ bdi_task_init(bdi, wb);
/*
* Clear pending bit and wakeup anybody waiting to tear us down
@@ -267,25 +326,65 @@ static int bdi_start_fn(void *ptr)
clear_bit(BDI_pending, &bdi->state);
wake_up_bit(&bdi->state, BDI_pending);
+ /*
+ * Make us discoverable on the bdi_list again
+ */
+ spin_lock(&bdi_lock);
+ list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
+ spin_unlock(&bdi_lock);
+
ret = bdi_writeback_task(wb);
+ /*
+ * Remove us from the list
+ */
+ spin_lock(&bdi->wb_lock);
+ list_del_rcu(&wb->list);
+ spin_unlock(&bdi->wb_lock);
+
+ /*
+ * wait for rcu grace period to end, so we can free wb
+ */
+ synchronize_srcu(&bdi->srcu);
+
bdi_put_wb(bdi, wb);
return ret;
}
int bdi_has_dirty_io(struct backing_dev_info *bdi)
{
- return wb_has_dirty_io(&bdi->wb);
+ struct bdi_writeback *wb;
+ int ret = 0;
+
+ if (!bdi_wblist_needs_lock(bdi))
+ ret = wb_has_dirty_io(&bdi->wb);
+ else {
+ int idx;
+
+ idx = srcu_read_lock(&bdi->srcu);
+
+ list_for_each_entry_rcu(wb, &bdi->wb_list, list) {
+ ret = wb_has_dirty_io(wb);
+ if (ret)
+ break;
+ }
+
+ srcu_read_unlock(&bdi->srcu, idx);
+ }
+
+ return ret;
}
static int bdi_forker_task(void *ptr)
{
struct bdi_writeback *me = ptr;
+ DEFINE_WAIT(wait);
+
+ bdi_task_init(me->bdi, me);
for (;;) {
struct backing_dev_info *bdi;
struct bdi_writeback *wb;
- DEFINE_WAIT(wait);
/*
* Should never trigger on the default bdi
@@ -301,7 +400,6 @@ static int bdi_forker_task(void *ptr)
if (list_empty(&bdi_pending_list))
schedule();
- finish_wait(&me->wait, &wait);
repeat:
bdi = NULL;
spin_lock_bh(&bdi_lock);
@@ -344,6 +442,7 @@ readd_flush:
}
}
+ finish_wait(&me->wait, &wait);
return 0;
}
@@ -367,34 +466,68 @@ static void bdi_add_to_pending(struct rcu_head *head)
wake_up(&default_backing_dev_info.wb.wait);
}
-/*
- * Add a new flusher task that gets created for any bdi
- * that has dirty data pending writeout
- */
-void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
+static void bdi_add_one_flusher_task(struct backing_dev_info *bdi,
+ int(*func)(struct backing_dev_info *))
{
if (!bdi_cap_writeback_dirty(bdi))
return;
/*
- * Someone already marked this pending for task creation
+ * Check with the helper whether to proceed adding a task. Will only
+ * abort if two or more simultaneous calls to
+ * bdi_add_default_flusher_task() occurred; further additions will block
+ * waiting for previous additions to finish.
*/
- if (test_and_set_bit(BDI_pending, &bdi->state))
- return;
+ if (!func(bdi)) {
+ spin_lock_bh(&bdi_lock);
+ list_del_rcu(&bdi->bdi_list);
+ spin_unlock_bh(&bdi_lock);
- spin_lock_bh(&bdi_lock);
- list_del_rcu(&bdi->bdi_list);
- spin_unlock_bh(&bdi_lock);
+ /*
+ * We need to wait for the current grace period to end,
+ * in case others were browsing the bdi_list as well.
+ * So defer the adding and wakeup to after the RCU
+ * grace period has ended.
+ */
+ call_rcu(&bdi->rcu_head, bdi_add_to_pending);
+ }
+}
- /*
- * We need to wait for the current grace period to end,
- * in case others were browsing the bdi_list as well.
- * So defer the adding and wakeup to after the RCU
- * grace period has ended.
- */
- call_rcu(&bdi->rcu_head, bdi_add_to_pending);
+static int flusher_add_helper_block(struct backing_dev_info *bdi)
+{
+ wait_on_bit_lock(&bdi->state, BDI_pending, bdi_sched_wait,
+ TASK_UNINTERRUPTIBLE);
+ return 0;
+}
+
+static int flusher_add_helper_test(struct backing_dev_info *bdi)
+{
+ return test_and_set_bit(BDI_pending, &bdi->state);
+}
+
+/*
+ * Add the default flusher task that gets created for any bdi
+ * that has dirty data pending writeout
+ */
+void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
+{
+ bdi_add_one_flusher_task(bdi, flusher_add_helper_test);
}
+/**
+ * bdi_add_flusher_task - add one more flusher task to this @bdi
+ * @bdi: the bdi
+ *
+ * Add an additional flusher task to this @bdi. Will block waiting on
+ * previous additions, if any.
+ *
+ */
+void bdi_add_flusher_task(struct backing_dev_info *bdi)
+{
+ bdi_add_one_flusher_task(bdi, flusher_add_helper_block);
+}
+EXPORT_SYMBOL(bdi_add_flusher_task);
+
int bdi_register(struct backing_dev_info *bdi, struct device *parent,
const char *fmt, ...)
{
@@ -454,17 +587,13 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev)
}
EXPORT_SYMBOL(bdi_register_dev);
-static int sched_wait(void *word)
-{
- schedule();
- return 0;
-}
-
/*
* Remove bdi from global list and shutdown any threads we have running
*/
static void bdi_wb_shutdown(struct backing_dev_info *bdi)
{
+ struct bdi_writeback *wb;
+
if (!bdi_cap_writeback_dirty(bdi))
return;
@@ -472,7 +601,8 @@ static void bdi_wb_shutdown(struct backing_dev_info *bdi)
* If setup is pending, wait for that to complete first
* Make sure nobody finds us on the bdi_list anymore
*/
- wait_on_bit(&bdi->state, BDI_pending, sched_wait, TASK_UNINTERRUPTIBLE);
+ wait_on_bit(&bdi->state, BDI_pending, bdi_sched_wait,
+ TASK_UNINTERRUPTIBLE);
/*
* Make sure nobody finds us on the bdi_list anymore
@@ -488,9 +618,11 @@ static void bdi_wb_shutdown(struct backing_dev_info *bdi)
synchronize_rcu();
/*
- * Finally, kill the kernel thread
+ * Finally, kill the kernel threads. We don't need to be RCU
+ * safe anymore, since the bdi is gone from visibility.
*/
- kthread_stop(bdi->wb.task);
+ list_for_each_entry(wb, &bdi->wb_list, list)
+ kthread_stop(wb->task);
}
void bdi_unregister(struct backing_dev_info *bdi)
@@ -515,8 +647,12 @@ int bdi_init(struct backing_dev_info *bdi)
bdi->min_ratio = 0;
bdi->max_ratio = 100;
bdi->max_prop_frac = PROP_FRAC_BASE;
+ spin_lock_init(&bdi->wb_lock);
+ bdi->wb_mask = 0;
+ bdi->wb_cnt = 0;
INIT_LIST_HEAD(&bdi->bdi_list);
- bdi->wb_mask = bdi->wb_active = 0;
+ INIT_LIST_HEAD(&bdi->wb_list);
+ INIT_LIST_HEAD(&bdi->work_list);
bdi_wb_init(&bdi->wb, bdi);
@@ -526,10 +662,15 @@ int bdi_init(struct backing_dev_info *bdi)
goto err;
}
+ err = init_srcu_struct(&bdi->srcu);
+ if (err)
+ goto err;
+
bdi->dirty_exceeded = 0;
err = prop_local_init_percpu(&bdi->completions);
if (err) {
+ cleanup_srcu_struct(&bdi->srcu);
err:
while (i--)
percpu_counter_destroy(&bdi->bdi_stat[i]);
@@ -547,6 +688,8 @@ void bdi_destroy(struct backing_dev_info *bdi)
bdi_unregister(bdi);
+ cleanup_srcu_struct(&bdi->srcu);
+
for (i = 0; i < NR_BDI_STAT_ITEMS; i++)
percpu_counter_destroy(&bdi->bdi_stat[i]);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 76269f8..de3178a 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -667,8 +667,7 @@ void throttle_vm_writeout(gfp_t gfp_mask)
/*
* Start writeback of `nr_pages' pages. If `nr_pages' is zero, write back
- * the whole world. Returns 0 if a pdflush thread was dispatched. Returns
- * -1 if all pdflush threads were busy.
+ * the whole world.
*/
void wakeup_flusher_threads(long nr_pages)
{
@@ -676,7 +675,6 @@ void wakeup_flusher_threads(long nr_pages)
nr_pages = global_page_state(NR_FILE_DIRTY) +
global_page_state(NR_UNSTABLE_NFS);
bdi_writeback_all(NULL, nr_pages);
- return;
}
static void laptop_timer_fn(unsigned long unused);
--
1.6.3.rc0.1.gf800
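Since the bdi_work lifetime rules are the subtle part of the above, here
is the protocol condensed as I read it (a sketch, not authoritative):

	/*
	 * - work->sb_data holds the super_block pointer; bit 0 tags an
	 *   on-stack item (pointer alignment keeps bit 0 free), which is
	 *   why bdi_work_sb() masks with ~1UL.
	 * - work->seen is a bitmap of wb threads that must still process
	 *   the item; each thread claims its bit exactly once in
	 *   get_next_work_item().
	 * - work->pending counts the remaining references; whoever drops
	 *   it to zero unlinks the item under wb_lock and, after an RCU
	 *   grace period, kfree()s a heap item or clears bit 0 of
	 *   work->state to wake an on-stack waiter in
	 *   bdi_wait_on_work_start().
	 */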
Applied V4 to 2.6.30-rc6 and got some conflict reports.
----------patch-2----------
patching file fs/buffer.c
patching file fs/fs-writeback.c
patching file fs/ntfs/super.c
patching file fs/sync.c
patching file include/linux/backing-dev.h
patching file include/linux/fs.h
patching file include/linux/writeback.h
patching file mm/backing-dev.c
patching file mm/page-writeback.c
Hunk #5 FAILED at 666.
1 out of 6 hunks FAILED -- saving rejects to file mm/page-writeback.c.rej
patching file mm/vmscan.c
----------patch-3----------
patching file fs/fs-writeback.c
patching file include/linux/writeback.h
patching file mm/Makefile
patching file mm/pdflush.c
----------patch-4----------
patching file fs/fs-writeback.c
patching file include/linux/backing-dev.h
patching file mm/backing-dev.c
----------patch-5----------
patching file fs/fs-writeback.c
patching file include/linux/backing-dev.h
patching file include/linux/fs.h
patching file mm/backing-dev.c
patching file mm/page-writeback.c
Hunk #1 succeeded at 708 with fuzz 2 (offset 41 lines).
Hunk #2 FAILED at 716.
1 out of 2 hunks FAILED -- saving rejects to file mm/page-writeback.c.rej
Then I manually fixed the conflicts, but compilation reported errors.
Your patches don't seem to be clean.
CC fs/exec.o
mm/page-writeback.c: In function 'background_writeout':
mm/page-writeback.c:695: error: 'MAX_WRITEBACK_PAGES' undeclared (first use in this function)
mm/page-writeback.c:695: error: (Each undeclared identifier is reported only once
mm/page-writeback.c:695: error: for each function it appears in.)
mm/page-writeback.c: In function 'wb_kupdate':
mm/page-writeback.c:769: error: 'MAX_WRITEBACK_PAGES' undeclared (first use in this function)
mm/page-writeback.c: In function 'wb_timer_fn':
mm/page-writeback.c:802: error: implicit declaration of function 'pdflush_operation'
make[1]: *** [mm/page-writeback.o] Error 1
make[1]: *** Waiting for unfinished jobs....
CC fs/pipe.o
Yanmin
>
> - Little fixes here and there.
>
> So generally not a lot of changes, the major one is using the ->work_list
> and getting rid of writeback_acquire()/writeback_release(). This fixes
> the concern Jan Kara had about missing sync/WB_SYNC_ALL, if writeback
> was already in progress.
>
> I've run a few benchmarks today:
>
> 1) Large file writes from a single process
> 2) Random file writes from multiple (16) processes.
It's not against -rc6, it's against current -git. And current -git had a
one-liner fixup to the centisec calculation, so it'll fail. If you apply
the below patch to -rc6, then the series should apply cleanly on top of
that.
> Then I manually fixed the conflicts, but compilation reported errors.
> Your patches don't seem to be clean.
>
> CC fs/exec.o
> mm/page-writeback.c: In function 'background_writeout':
> mm/page-writeback.c:695: error: 'MAX_WRITEBACK_PAGES' undeclared (first use in this function)
> mm/page-writeback.c:695: error: (Each undeclared identifier is reported only once
> mm/page-writeback.c:695: error: for each function it appears in.)
> mm/page-writeback.c: In function 'wb_kupdate':
> mm/page-writeback.c:769: error: 'MAX_WRITEBACK_PAGES' undeclared (first use in this function)
> mm/page-writeback.c: In function 'wb_timer_fn':
> mm/page-writeback.c:802: error: implicit declaration of function 'pdflush_operation'
> make[1]: *** [mm/page-writeback.o] Error 1
> make[1]: *** Waiting for unfinished jobs....
> CC fs/pipe.o
You still have remnants of pdflush, so there's definitely something wrong
with your manual patching :-)
I'm attaching two patches - apply #1 to -rc6, and then #2 is a roll-up
of the patch series that you can apply next.
--
Jens Axboe
Thanks,
Yanmin
I'm interested in this slight change of behaviour, when over the
background dirty limit background_writeout will write any dirty pages
while bdi_start_writeout writes only pages for the current bdi. Are
there any benefits in making this change?
Thinking about the case of 2 apps writing to different bdis. When app A
stops writing, then next time app B goes over the background dirty
threshold it will only be able to write its own pages, leaving any from
app A dirty until they reach their age limit.
So we may be keeping dirty pages for the app that's finished longer than
necessary. Keeping pages for a finished app while flushing pages from a
running app seems a bit strange. I guess this is an odd corner case and
may not be worth worrying about, but I'd be interested to hear what you
think.
Do you think your new code will require any changes to the per bdi dirty
limits? It may be informative & interesting to run some tests writing to
fast & slow devices at the same time.
regards
Richard
The function in question balances dirty pages against a specific address
space, which has a specific mapping. The async part of the background
writeout could be global, as you mention. The whole thing is a bit weird
in balance_dirty_pages(); for instance, it checks for writeout against a
given queue, then proceeds to do a global writeout if it isn't busy. At least
it's consistent now.
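In sketch form, the pre-patch behaviour being described (2.6.30-era names,
simplified and from memory, so treat this as illustrative only):

	/* the busy check is against one particular bdi ... */
	if (writeback_in_progress(bdi))
		return;

	/* ... but the writeout it then kicks off is global */
	pdflush_operation(background_writeout, 0);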
> So we may be keeping dirty pages for the app that's finished longer than
> necessary. Keeping pages for a finished app while flushing pages from a
> running app seems a bit strange. I guess this is an odd corner case and
> may not be worth worrying about, but I'd be interested to hear what you
> think.
The kupdated() initiated background writeout will take care of that, if
nobody does a sync on that data first. If nobody is dirtying new data on
the given bdi, then it seems perfectly fine to let normal background
writeout handle it.
> Do you think your new code will require any changes to the per bdi dirty
> limits? It may be informative & interesting to run some tests writing to
> fast & slow devices at the same time.
Generally the code should behave fairly closely to the existing pdflush
based code, so I don't think bdi dirty limit tweaking will be necessary.
I'd definitely welcome some testing though, particularly slow vs fast as
you mention. I've mainly been doing benchmarking to make sure we don't
regress on performance, and that has been for fairly similar hardware.
Since testing does take a lot of time, it would be nice if someone else
would gather their own experiences, especially in areas that have been
problematic in the past (slow vs fast devices, for instance!).
--
Jens Axboe
Thanks for the explanation.
I'm definitely going to test this, although I don't have any interesting
hardware, only a basic workstation. But I'll let you know if I turn up
anything useful.
Balance_dirty_pages contains Peter Zijlstra's per bdi write throttling
code and I wonder if it will need tuning for best performance with your
changes, just because some of its assumptions may have changed. I'll run
some tests here and see what happens. Peter may have some insight and
possibly useful test cases.
regards
Richard
Any testing is useful, so go for it.
> Balance_dirty_pages contains Peter Zijlstra's per bdi write throttling
> code and I wonder if it will need tuning for best performance with your
> changes, just because some of its assumptions may have changed. I'll run
> some tests here and see what happens. Peter may have some insight and
> possibly useful test cases.
I'm assuming those are sitting in -mm? I'll take a look.
--
Jens Axboe
Nah, those got merged ages ago (.24 iirc) and I don't think that would
need any touch ups wrt this series.
> I'm attaching two patches - apply #1 to -rc6, and then #2 is a roll-up
> of the patch series that you can apply next.
Jens,
I run into 2 issues with kernel 2.6.30-rc6+BDI_Flusher_V4. Below is one.
Tue May 19 00:00:00 CST 2009
BUG: unable to handle kernel NULL pointer dereference at 00000000000001d8
IP: [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
PGD 0
Oops: 0000 [#1] SMP
last sysfs file: /sys/block/sdb/stat
CPU 0
Modules linked in: igb
Pid: 1445, comm: bdi-8:16 Not tainted 2.6.30-rc6-bdiflusherv4 #1 X8DTN
RIP: 0010:[<ffffffff803f3c4c>] [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
RSP: 0018:ffff8800bd04da60 EFLAGS: 00010206
RAX: 0000000000000000 RBX: ffff8801be45d500 RCX: 00000000038a0df8
RDX: 0000000000000008 RSI: 0000000000000576 RDI: ffff8801bf408680
RBP: ffff8801be45d500 R08: ffffe20001ee8140 R09: ffff8800bd04da98
R10: 0000000000000000 R11: ffff8800bd72eb40 R12: ffff8801be45d500
R13: ffff88005f51f310 R14: 0000000000000008 R15: ffff8800b15a5458
FS: 0000000000000000(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b
CR2: 00000000000001d8 CR3: 0000000000201000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process bdi-8:16 (pid: 1445, threadinfo ffff8800bd04c000, task ffff8800bd1b75f0)
Stack:
0000000000000008 ffffffff8027a613 00000000848dc000 ffffffffffffffff
ffff8800a8190f50 ffffffff00000012 ffff8800a81938e0 ffffc2000000001b
0000000000000000 0000000000000000 ffffe200026f9c30 0000000000000000
Call Trace:
[<ffffffff8027a613>] ? mempool_alloc+0x59/0x10f
[<ffffffff803f3f70>] ? submit_bio+0xaa/0xb1
[<ffffffff802c6a3f>] ? submit_bh+0xe3/0x103
[<ffffffff802c92ea>] ? __block_write_full_page+0x1fb/0x2f2
[<ffffffff802c7d6a>] ? end_buffer_async_write+0x0/0xfb
[<ffffffff8027e8d2>] ? __writepage+0xa/0x25
[<ffffffff8027f036>] ? write_cache_pages+0x21c/0x338
[<ffffffff8027e8c8>] ? __writepage+0x0/0x25
[<ffffffff8027f195>] ? do_writepages+0x27/0x2d
[<ffffffff802c22c1>] ? __writeback_single_inode+0x159/0x2b3
[<ffffffff8071e52a>] ? thread_return+0x3e/0xaa
[<ffffffff8027f267>] ? determine_dirtyable_memory+0xd/0x1d
[<ffffffff8027f2dd>] ? get_dirty_limits+0x1d/0x255
[<ffffffff802c27bc>] ? generic_sync_wb_inodes+0x1b4/0x220
[<ffffffff802c3130>] ? wb_do_writeback+0x16c/0x215
[<ffffffff802c323e>] ? bdi_writeback_task+0x65/0x10d
[<ffffffff8024cc06>] ? autoremove_wake_function+0x0/0x2e
[<ffffffff8024cb27>] ? bit_waitqueue+0x10/0xa0
[<ffffffff80289257>] ? bdi_start_fn+0x0/0xba
[<ffffffff802892c6>] ? bdi_start_fn+0x6f/0xba
[<ffffffff8024c860>] ? kthread+0x54/0x80
[<ffffffff8020c97a>] ? child_rip+0xa/0x20
[<ffffffff8024c80c>] ? kthread+0x0/0x80
[<ffffffff8020c970>] ? child_rip+0x0/0x20
The panic happened at the beginning of an mmap randrw run after an mmap randwrite run.
It's triggered in __generic_make_request => bdev_get_queue(bio->bi_bdev),
because bio->bi_bdev->bd_disk is equal to NULL (see the sketch below the callchain).
The callchain is:
bdi_writeback_task =>
wb_do_writeback =>
generic_sync_wb_inodes =>
__writeback_single_inode =>
...
__block_write_full_page =>
submit_bh =>
submit_bio=>
generic_make_request
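For reference, the dereference happens inside bdev_get_queue(); a minimal
sketch of the 2.6.30-era helper (simplified, not copied verbatim):

	static inline struct request_queue *bdev_get_queue(struct block_device *bdev)
	{
		return bdev->bd_disk->queue;	/* bd_disk is NULL here -> oops */
	}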
yanmin
Wow, that is really odd. Can you pass the details of the test you ran?
--
Jens Axboe
I found one issue yesterday and one today that could cause issues, not
sure it would explain this one. But at least it's worth a try, if it's
reproducible. I'm attaching the three patches I have against the posted
series. The one in the middle is just an optimization, the first and
third are the bug fixes.
--
Jens Axboe
>
> I found one issue yesterday and one today that could cause issues, not
> sure it would explain this one. But at least it's worth a try, if it's
> reproducible.
I just reproduced it a moment ago manually.
[global]
direct=0
ioengine=mmap
iodepth=256
iodepth_batch=32
size=4G
bs=4k
pre_read=1
overwrite=1
numjobs=1
loops=5
runtime=600
group_reporting
directory=/mnt/stp/fiodata
[job_group0_sub0]
startdelay=0
rw=randwrite
filename=data0/f1:data0/f2
This fio includes my pre_read patch to pre-read the files into memory.
Before starting the second test, I dropped the caches with:
# echo 3 > /proc/sys/vm/drop_caches
I suspect the drop_caches triggers it.
> I'm attaching the three patches I have against the posted
> series. The one in the middle is just an optimization, the first and
> third are the bug fixes.
I will test it tomorrow.
Thanks, will try this. What filesystem and mount options did you use?
> > I'm attaching the three patches I have against the posted
> > series. The one in the middle is just an optimization, the first and
> > third are the bug fixes.
> I will test it tomorrow.
--
Jens Axboe
No luck reproducing so far. In other news, I have finally merged your
fio pre_read patch :-)
I've run it here many times, works fine with the current writeback
branch. Since I did the runs anyway, I did comparisons between mainline
and writeback for this test. Each test was run 10 times, averages below.
The throughput deviated less than 1MB/sec, so results are very stable.
CPU usage percentages were always within 0.5%.
Kernel Throughput usr sys disk util
-----------------------------------------------------------------
writeback 175MB/sec 17.55% 43.04% 97.80%
vanilla 147MB/sec 13.44% 47.33% 85.98%
The results for this test are particularly interesting, since it's very
heavy on the writeback side; the pdflush/bdi threads were pretty busy. User
time is up (even when corrected for the higher throughput), but system time is
down a lot. Vanilla isn't close to keeping the disk busy; with the
writeback patches we are basically there (100% would be pretty much
impossible to reach).
Please try with the patches I sent. If you still see problems, we need
to look more closely into that.
Some of it; most of it is due to switching from one fixed thread to
potentially having lots more. The code movement is mostly due to
other callers now having to use functions that were below them, and I'd
rather move them around than have prototypes at the top.
It would be easy to unify the two patches, but I wanted to separate the
switch from pdflush to 1 bdi thread from the transition from 1 bdi
thread to several.
--
Jens Axboe
>
Quite a lot of the changes in this function (and the creation of bdi_flush_io())
are just cleanups of patch #2, so it would be nice to move them there...
This whole chunk is just a cleanup of patch #2, isn't it? Maybe move
it there?
> @@ -440,11 +515,10 @@ int bdi_init(struct backing_dev_info *bdi)
> bdi->min_ratio = 0;
> bdi->max_ratio = 100;
> bdi->max_prop_frac = PROP_FRAC_BASE;
> - init_waitqueue_head(&bdi->wait);
> INIT_LIST_HEAD(&bdi->bdi_list);
> - INIT_LIST_HEAD(&bdi->b_io);
> - INIT_LIST_HEAD(&bdi->b_dirty);
> - INIT_LIST_HEAD(&bdi->b_more_io);
> + bdi->wb_mask = bdi->wb_active = 0;
> +
> + bdi_wb_init(&bdi->wb, bdi);
>
> for (i = 0; i < NR_BDI_STAT_ITEMS; i++) {
> err = percpu_counter_init(&bdi->bdi_stat[i], 0);
> @@ -469,9 +543,7 @@ void bdi_destroy(struct backing_dev_info *bdi)
> {
> int i;
>
> - WARN_ON(!list_empty(&bdi->b_dirty));
> - WARN_ON(!list_empty(&bdi->b_io));
> - WARN_ON(!list_empty(&bdi->b_more_io));
> + WARN_ON(bdi_has_dirty_io(bdi));
>
> bdi_unregister(bdi);
>
Honza
--
Jan Kara <ja...@suse.cz>
SUSE Labs, CR
Thanks! I took the liberty of killing some of the code in between, to
make it easier to see.
> > +void bdi_writeback_all(struct super_block *sb, long nr_pages)
> > +{
> > + struct backing_dev_info *bdi;
> > +
> > + rcu_read_lock();
> > +
> > +restart:
> > + list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {
> Isn't the RCU list here a bit overengineered? AFAICS we use the list
> only here and, if I'm grepping right, generic_sync_sb_inodes() is currently
> only used for data integrity sync (after your patches) from fs-writeback.c
> and by UBIFS to do the equivalent of writeback_inodes(). So a simple spinlock
> guarding the list should be just fine. Or am I missing something?
Sure, we could. But it's really not that much of a difference,
implementation-wise.
> > @@ -591,13 +711,10 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
> > void generic_sync_sb_inodes(struct super_block *sb,
> > struct writeback_control *wbc)
> > {
> > - const int is_blkdev_sb = sb_is_blkdev_sb(sb);
> > - struct backing_dev_info *bdi;
> > -
> > - rcu_read_lock();
> > - list_for_each_entry_rcu(bdi, &bdi_list, bdi_list)
> > - generic_sync_bdi_inodes(bdi, wbc, sb, is_blkdev_sb);
> > - rcu_read_unlock();
> > + if (wbc->bdi)
> > + bdi_start_writeback(wbc->bdi, sb, 0);
> > + else
> > + bdi_writeback_all(sb, 0);
> It does not work like this. The way you call writeback here, you never
> end up calling __writeback_single_inode() with WB_SYNC_ALL set in the wbc (your
> writeback routines always call inode writeback with WB_SYNC_NONE). And
> that is required for a proper data integrity sync... So you have to somehow
> propagate this down to the writeback thread.
Good point, we need to pass down sync mode too. Not a big problem, we
can just add that to bdi_work as well.
> Alternatively, what probably makes a lot of sense is to separate the data
> integrity sync path from plain data writeback. In the first case we care
> more about correctness, in the second case we care more about performance
> and overall throughput.
Yep agree, that would clean it up as well. I'll include that in the next
revision, I think I'll post it on friday.
> BTW your patch also significantly changes one thing: with your patch, data
> integrity sync is done by the flusher threads, while previously it was done from
> the context of the thread calling sync(). I'm undecided whether it is a
> good or bad thing but it definitely deserves a comment in the changelog.
I'll look at the implications of this again, perhaps it'll be better to
just switch it back for now.
> > +static int bdi_forker_task(void *ptr)
> > +{
> > + struct backing_dev_info *bdi, *me = ptr;
> > +
> > + for (;;) {
> > + DEFINE_WAIT(wait);
> > +
> > + /*
> > + * Should never trigger on the default bdi
> > + */
> > + WARN_ON(bdi_has_dirty_io(me));
> > +
> > + prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
> > + smp_mb();
> Wouldn't the code look simpler like:
> spin_lock_bh(&bdi_lock);
> if (list_empty(&bdi_pending_list)) {
> spin_unlock_bh(&bdi_lock);
> schedule();
> } else {
> bdi = list_entry(bdi_pending_list.next,
> struct backing_dev_info, bdi_list);
> list_del_init(&bdi->bdi_list);
> spin_unlock_bh(&bdi_lock);
> if (bdi->task)
> continue;
> ... do work ...
> }
Not a bad suggestion, I'll fiddle around with it a bit.
Thanks for your review, Jan - always helpful!
--
Jens Axboe
A few comments here. Mainly, I still don't think sys_sync() is
working right - see the comments below.
Isn't the RCU list here a bit overengineered? AFAICS we use the list
only here and, if I'm grepping right, generic_sync_sb_inodes() is currently
only used for data integrity sync (after your patches) from fs-writeback.c
and by UBIFS to do the equivalent of writeback_inodes(). So a simple spinlock
guarding the list should be just fine. Or am I missing something?
> + if (!bdi_has_dirty_io(bdi))
It does not work like this. The way you call writeback here, you never
end up calling __writeback_single_inode() with WB_SYNC_ALL set in the wbc (your
writeback routines always call inode writeback with WB_SYNC_NONE). And
that is required for a proper data integrity sync... So you have to somehow
propagate this down to the writeback thread.
Alternatively, what probably makes a lot of sense is to separate the data
integrity sync path from plain data writeback. In the first case we care
more about correctness, in the second case we care more about performance
and overall throughput.
BTW your patch also significantly changes one thing: with your patch, data
integrity sync is done by the flusher threads, while previously it was done from
the context of the thread calling sync(). I'm undecided whether it is a
good or bad thing but it definitely deserves a comment in the changelog.
> if (wbc->sync_mode == WB_SYNC_ALL) {
Wouldn't the code look simpler like:
spin_lock_bh(&bdi_lock);
if (list_empty(&bdi_pending_list)) {
spin_unlock_bh(&bdi_lock);
schedule();
} else {
bdi = list_entry(bdi_pending_list.next,
struct backing_dev_info, bdi_list);
list_del_init(&bdi->bdi_list);
spin_unlock_bh(&bdi_lock);
if (bdi->task)
continue;
... do work ...
}
> + if (list_empty(&bdi_pending_list))
Jan Kara <ja...@suse.cz>
SUSE Labs, CR
OK, I'll double check for silly changes between the two. Since I added
some functionality at the end of the series and then later moved it back
up the chain, it's quite likely that there are silly diffs between those
two patches.
> It would be easy to unify the two patches, but I wanted to separate the
> switch from pdflush to 1 bdi thread from the transition from 1 bdi
> thread to several.
Yes, this is probably desirable.
Honza
--
Jan Kara <ja...@suse.cz>
SUSE Labs, CR
It's a fine rule, I agree ;-)
I'll take another look at this when splitting the sync paths.
Honza
--
Jan Kara <ja...@suse.cz>
SUSE Labs, CR
Btw, there has been quite a bit of work on the higher level sync code in
the VFS tree, and I have some TODO list items for the lower level sync
code. The most important one would be splitting data and metadata
writeback.
Currently __sync_single_inode first calls do_writepages to write back
the data, then write_inode to potentially write the metadata and then
finally filemap_fdatawait to wait for the inode to be completed.
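In sketch form (heavily simplified, illustrative only):

	/* __sync_single_inode, roughly */
	ret = do_writepages(mapping, wbc);	/* 1) issue data I/O */
	if (do_sync)
		write_inode(inode, wait);	/* 2) write metadata */
	if (wait)
		filemap_fdatawait(mapping);	/* 3) wait for data - after metadata */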
Now, for one thing, doing the data wait after the metadata writeout is
wrong for all those filesystems performing some kind of metadata updates
in the I/O completion handler; e.g. XFS has to work around this
by doing a wait by itself in its write_inode handler.
Second, inodes are usually clustered together, so if a filesystem can
issue multiple dirty inodes at the same time, performance will be much
better.
So an optimal sync would first issue data I/O for all the inodes it
wants to write back, then wait for the data I/O to finish, and finally
write out the inodes in big clusters.
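Roughly like this (pseudocode; the dirty list and the clustering helper
below are invented for illustration):

	/* 1) issue data I/O for every dirty inode, without waiting */
	list_for_each_entry(inode, dirty_list, i_list)
		do_writepages(inode->i_mapping, &wbc_nowait);

	/* 2) wait for all of the data I/O to complete */
	list_for_each_entry(inode, dirty_list, i_list)
		filemap_fdatawait(inode->i_mapping);

	/* 3) finally write out the inodes themselves, in big clusters */
	write_inode_clusters(sb);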
I'm not quite sure when we'll get to that, just making sure we don't
work against this direction anywhere.
And yeah, I really need to take a detailed look at the current
incarnation of your patchset :)
--
Please do, I'm particularly interested in the possibility of having
multiple inode placements. Would it be feasible to have the inode
backing differentiated by type (e.g. data or metadata)?
--
Jens Axboe
Yes, it really should go out of this patchset and into a prep patch.
Anton, care to comment?
--
Jens Axboe
You need to remove the above line, too. It does not make sense to
leave half a sentence there...
Otherwise you can apply this patch if you really want. It is just a
debug/band-aid. I used to have problems where dirty inodes were left
over, and I had put that in to allow the unmount to succeed properly. I
believe that should not happen any more, as explained in the comment
above, but I left the fixup code in as a sanity check that would produce
output in the system log that people would hopefully report, should my
fix not be correct/sufficient...
Best regards,
Anton
--
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge, CB2 3QH, UK
Linux NTFS maintainer, http://www.linux-ntfs.org/
> In other news, I have finally merged your
> fio pre_read patch :-)
Thanks.
>
> I've run it here many times, works fine with the current writeback
> branch. Since I did the runs anyway, I did comparisons between mainline
> and writeback for this test. Each test was run 10 times, averages below.
> The throughput deviated less than 1MB/sec, so results are very stable.
> CPU usage percentages were always within 0.5%.
>
> Kernel Throughput usr sys disk util
> -----------------------------------------------------------------
> writeback 175MB/sec 17.55% 43.04% 97.80%
> vanilla 147MB/sec 13.44% 47.33% 85.98%
>
> The results for this test are particularly interesting, since it's very
> heavy on the writeback side; the pdflush/bdi threads were pretty busy. User
> time is up (even when corrected for the higher throughput), but system time is
> down a lot. Vanilla isn't close to keeping the disk busy; with the
> writeback patches we are basically there (100% would be pretty much
> impossible to reach).
>
> Please try with the patches I sent. If you still see problems, we need
> to look more closely into that.
I tried the new patches. They seem to improve fio mmap randwrite 4k by about
50% on this machine (single disk). The old panic disappears, but there is a new panic.
[ROOT@LKP-NE01 ~]# BUG: unable to handle kernel NULL pointer dereference at 0000000000000190
IP: [<ffffffff803270b6>] ext3_invalidatepage+0x18/0x38
PGD 0
Oops: 0000 [#1] SMP
last sysfs file: /sys/block/sdb/stat
CPU 0
Modules linked in: igb
Pid: 7681, comm: umount Not tainted 2.6.30-rc6-bdiflusherv4fix #1 X8DTN
RIP: 0010:[<ffffffff803270b6>] [<ffffffff803270b6>] ext3_invalidatepage+0x18/0x38
RSP: 0018:ffff8801bdc47d20 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffffe200058514a0 RCX: 0000000000000002
RDX: 000000000000000e RSI: 0000000000000000 RDI: ffffe200058514a0
RBP: 0000000000000000 R08: 0000000000000003 R09: 000000000000000e
R10: 000000000000000d R11: ffffffff8032709e R12: 0000000000000000
R13: 0000000000000000 R14: ffff8801bdc47d78 R15: ffff8800bc0dd888
FS: 00007f48d77237d0(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000000000000190 CR3: 00000000bc867000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process umount (pid: 7681, threadinfo ffff8801bdc46000, task ffff8801bde194d0)
Stack:
ffffffff80280ef7 ffffe200058514a0 ffffffff80280ffd ffff8801bdc47d78
0000000e0290c538 000000000049d801 0000000000000000 0000000000000000
ffffffffffffffff 000000000000000e 0000000000000000 ffffe200058514a0
Call Trace:
[<ffffffff80280ef7>] ? truncate_complete_page+0x1d/0x59
[<ffffffff80280ffd>] ? truncate_inode_pages_range+0xca/0x32e
[<ffffffff802ba8bc>] ? dispose_list+0x39/0xe4
[<ffffffff802bac68>] ? invalidate_inodes+0xf1/0x10f
[<ffffffff802ab77b>] ? generic_shutdown_super+0x78/0xde
[<ffffffff802ab803>] ? kill_block_super+0x22/0x3a
[<ffffffff802abe49>] ? deactivate_super+0x5f/0x76
[<ffffffff802bdf2f>] ? sys_umount+0x2cd/0x2fc
[<ffffffff8020ba2b>] ? system_call_fastpath+0x16/0x1b
ext3_invalidatepage => EXT3_JOURNAL(page->mapping->host), where
EXT3_SB(inode->i_sb) is equal to NULL.
It seems umount triggers the new panic.
Yanmin
Honza
--
Jan Kara <ja...@suse.cz>
SUSE Labs, CR
BUG: unable to handle kernel NULL pointer dereference at 00000000000001d8
IP: [<ffffffff803f3cec>] generic_make_request+0x10a/0x384
PGD 0
Oops: 0000 [#1] SMP
last sysfs file: /sys/block/sdb/stat
CPU 0
Modules linked in: igb
Pid: 1446, comm: bdi-8:16 Not tainted 2.6.30-rc6-bdiflusherV4fix #1 X8DTN
RIP: 0010:[<ffffffff803f3cec>] [<ffffffff803f3cec>] generic_make_request+0x10a/0x384
RSP: 0018:ffff8800bd295a60 EFLAGS: 00010206
RAX: 0000000000000000 RBX: ffff8800bd405b00 RCX: 0000000002cd1a40
RDX: 0000000000000008 RSI: 0000000000000576 RDI: ffff8801bf4096c0
RBP: ffff8800bd405b00 R08: ffffe20006141cf8 R09: ffff8800bd295a98
R10: 0000000000000000 R11: ffff8800bd405c80 R12: ffff8800bd405b00
R13: ffff88008bc4c150 R14: 0000000000000008 R15: ffff88008059dda0
FS: 0000000000000000(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b
CR2: 00000000000001d8 CR3: 0000000000201000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process bdi-8:16 (pid: 1446, threadinfo ffff8800bd294000, task ffff8800bd2375f0)
Stack:
0000000000000008 ffffffff8027a613 00000000bd0f60d0 ffffffffffffffff
ffff88007b5cfb10 0000000000000001 ffff88007d504000 ffff880000000006
0000000000011200 ffff8800bd61d444 ffffffffffffffcf 0000000000000000
Call Trace:
[<ffffffff8027a613>] ? mempool_alloc+0x59/0x10f
[<ffffffff803f4010>] ? submit_bio+0xaa/0xb1
[<ffffffff802c6aeb>] ? submit_bh+0xe3/0x103
[<ffffffff802c9396>] ? __block_write_full_page+0x1fb/0x2f2
[<ffffffff802c7e16>] ? end_buffer_async_write+0x0/0xfb
[<ffffffff8027e8d2>] ? __writepage+0xa/0x25
[<ffffffff8027f036>] ? write_cache_pages+0x21c/0x338
[<ffffffff8027e8c8>] ? __writepage+0x0/0x25
[<ffffffff8027f195>] ? do_writepages+0x27/0x2d
[<ffffffff802c22c9>] ? __writeback_single_inode+0x159/0x2b3
[<ffffffff8071e5ca>] ? thread_return+0x3e/0xaa
[<ffffffff8027f267>] ? determine_dirtyable_memory+0xd/0x1d
[<ffffffff8027f2dd>] ? get_dirty_limits+0x1d/0x255
[<ffffffff802c27c4>] ? generic_sync_wb_inodes+0x1b4/0x220
[<ffffffff802c31dd>] ? wb_do_writeback+0x16c/0x215
[<ffffffff802c32eb>] ? bdi_writeback_task+0x65/0x10d
[<ffffffff8024cc06>] ? autoremove_wake_function+0x0/0x2e
[<ffffffff8024cb27>] ? bit_waitqueue+0x10/0xa0
[<ffffffff80289257>] ? bdi_start_fn+0x0/0xc0
[<ffffffff802892cc>] ? bdi_start_fn+0x75/0xc0
[<ffffffff8024c860>] ? kthread+0x54/0x80
[<ffffffff8020c97a>] ? child_rip+0xa/0x20
[<ffffffff8024c80c>] ? kthread+0x0/0x80
[<ffffffff8020c970>] ? child_rip+0x0/0x20
Code: 39 c8 0f 82 ba 01 00 00 44 89 f0 c7 44 24 14 00 00 00 00 48 c7 44 24 18 ff ff ff ff 48 89 04 24 48 8b 7d 10 48 8b 87
RIP [<ffffffff803f3cec>] generic_make_request+0x10a/0x384
The former got fixed this morning, btw:
http://git.kernel.dk/?p=linux-2.6-block.git;a=commit;h=237af7b3c87a37ab8aacd99eb842e6bd35a30289
--
Jens Axboe
Could this be due to the missing WB_SYNC_ALL carry? Or the out-of-line
flushing in generic_sync_sb_inodes()? The latter could be exposing a
missing wait somewhere.
I'll see about reproducing and fixing it locally.
--
Jens Axboe
Thanks, I'll get this reproduced and fixed. Can you post the results
you got comparing writeback and vanilla in the meantime?
--
Jens Axboe
Please try with this combined patch against what you are running now; it
should resolve the issue. It needs a bit more work, but I'm running out
of time today. I'll get it finalized, cleaned up, and integrated. Then
I'll post a new revision of the patch set.
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index f80afaa..e9fc346 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -50,6 +50,7 @@ struct bdi_work {
unsigned long sb_data;
unsigned long nr_pages;
+ enum writeback_sync_modes sync_mode;
unsigned long state;
};
@@ -65,19 +66,22 @@ static inline bool bdi_work_on_stack(struct bdi_work *work)
}
static inline void bdi_work_init(struct bdi_work *work, struct super_block *sb,
- unsigned long nr_pages)
+ unsigned long nr_pages,
+ enum writeback_sync_modes sync_mode)
{
INIT_RCU_HEAD(&work->rcu_head);
work->sb_data = (unsigned long) sb;
work->nr_pages = nr_pages;
+ work->sync_mode = sync_mode;
work->state = 0;
}
static inline void bdi_work_init_on_stack(struct bdi_work *work,
struct super_block *sb,
- unsigned long nr_pages)
+ unsigned long nr_pages,
+ enum writeback_sync_modes sync_mode)
{
- bdi_work_init(work, sb, nr_pages);
+ bdi_work_init(work, sb, nr_pages, sync_mode);
set_bit(0, &work->state);
work->sb_data |= 1UL;
}
@@ -189,17 +193,17 @@ static void bdi_wait_on_work_start(struct bdi_work *work)
}
int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
- long nr_pages)
+ long nr_pages, enum writeback_sync_modes sync_mode)
{
struct bdi_work work_stack, *work;
int ret;
work = kmalloc(sizeof(*work), GFP_ATOMIC);
if (work)
- bdi_work_init(work, sb, nr_pages);
+ bdi_work_init(work, sb, nr_pages, sync_mode);
else {
work = &work_stack;
- bdi_work_init_on_stack(work, sb, nr_pages);
+ bdi_work_init_on_stack(work, sb, nr_pages, sync_mode);
}
ret = bdi_queue_writeback(bdi, work);
@@ -274,11 +278,12 @@ static long wb_kupdated(struct bdi_writeback *wb)
}
static long __wb_writeback(struct bdi_writeback *wb, long nr_pages,
- struct super_block *sb)
+ struct super_block *sb,
+ enum writeback_sync_modes sync_mode)
{
struct writeback_control wbc = {
.bdi = wb->bdi,
- .sync_mode = WB_SYNC_NONE,
+ .sync_mode = sync_mode,
.older_than_this = NULL,
.range_cyclic = 1,
};
@@ -345,9 +350,10 @@ static long wb_writeback(struct bdi_writeback *wb)
while ((work = get_next_work_item(bdi, wb)) != NULL) {
struct super_block *sb = bdi_work_sb(work);
long nr_pages = work->nr_pages;
+ enum writeback_sync_modes sync_mode = work->sync_mode;
wb_clear_pending(wb, work);
- wrote += __wb_writeback(wb, nr_pages, sb);
+ wrote += __wb_writeback(wb, nr_pages, sb, sync_mode);
}
return wrote;
@@ -420,39 +426,36 @@ int bdi_writeback_task(struct bdi_writeback *wb)
return 0;
}
-void bdi_writeback_all(struct super_block *sb, long nr_pages)
+/*
+ * Do in-line writeback of all backing devices. Expensive!
+ */
+void bdi_writeback_all(struct super_block *sb, long nr_pages,
+ enum writeback_sync_modes sync_mode)
{
- struct list_head *entry = &bdi_list;
+ struct backing_dev_info *bdi;
- rcu_read_lock();
+ mutex_lock(&bdi_mutex);
- list_for_each_continue_rcu(entry, &bdi_list) {
- struct backing_dev_info *bdi;
- struct list_head *next;
- struct bdi_work *work;
-
- bdi = list_entry(entry, struct backing_dev_info, bdi_list);
+ list_for_each_entry(bdi, &bdi_list, bdi_list) {
if (!bdi_has_dirty_io(bdi))
continue;
- /*
- * If this allocation fails, we just wakeup the thread and
- * let it do kupdate writeback
- */
- work = kmalloc(sizeof(*work), GFP_ATOMIC);
- if (work)
- bdi_work_init(work, sb, nr_pages);
+ if (!bdi_wblist_needs_lock(bdi))
+ __wb_writeback(&bdi->wb, 0, sb, sync_mode);
+ else {
+ struct bdi_writeback *wb;
+ int idx;
- /*
- * Prepare to start from previous entry if this one gets moved
- * to the bdi_pending list.
- */
- next = entry->prev;
- if (bdi_queue_writeback(bdi, work))
- entry = next;
+ idx = srcu_read_lock(&bdi->srcu);
+
+ list_for_each_entry_rcu(wb, &bdi->wb_list, list)
+ __wb_writeback(wb, 0, sb, sync_mode);
+
+ srcu_read_unlock(&bdi->srcu, idx);
+ }
}
- rcu_read_unlock();
+ mutex_unlock(&bdi_mutex);
}
/*
@@ -972,9 +975,9 @@ void generic_sync_sb_inodes(struct super_block *sb,
struct writeback_control *wbc)
{
if (wbc->bdi)
- bdi_start_writeback(wbc->bdi, sb, 0);
+ generic_sync_bdi_inodes(sb, wbc);
else
- bdi_writeback_all(sb, 0);
+ bdi_writeback_all(sb, 0, wbc->sync_mode);
if (wbc->sync_mode == WB_SYNC_ALL) {
struct inode *inode, *old_inode = NULL;
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 7c2874f..c9ddca4 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -15,6 +15,7 @@
#include <linux/fs.h>
#include <linux/sched.h>
#include <linux/srcu.h>
+#include <linux/writeback.h>
#include <asm/atomic.h>
struct page;
@@ -60,7 +61,6 @@ struct bdi_writeback {
#define BDI_MAX_FLUSHERS 32
struct backing_dev_info {
- struct rcu_head rcu_head;
struct srcu_struct srcu; /* for wb_list read side protection */
struct list_head bdi_list;
unsigned long ra_pages; /* max readahead in PAGE_CACHE_SIZE units */
@@ -105,14 +105,15 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
void bdi_unregister(struct backing_dev_info *bdi);
int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
- long nr_pages);
+ long nr_pages, enum writeback_sync_modes sync_mode);
int bdi_writeback_task(struct bdi_writeback *wb);
-void bdi_writeback_all(struct super_block *sb, long nr_pages);
+void bdi_writeback_all(struct super_block *sb, long nr_pages,
+ enum writeback_sync_modes sync_mode);
void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
void bdi_add_flusher_task(struct backing_dev_info *bdi);
int bdi_has_dirty_io(struct backing_dev_info *bdi);
-extern spinlock_t bdi_lock;
+extern struct mutex bdi_mutex;
extern struct list_head bdi_list;
static inline int wb_is_default_task(struct bdi_writeback *wb)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 60578bc..0e09051 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -26,7 +26,7 @@ struct backing_dev_info default_backing_dev_info = {
EXPORT_SYMBOL_GPL(default_backing_dev_info);
static struct class *bdi_class;
-DEFINE_SPINLOCK(bdi_lock);
+DEFINE_MUTEX(bdi_mutex);
LIST_HEAD(bdi_list);
LIST_HEAD(bdi_pending_list);
@@ -360,14 +360,15 @@ static int bdi_start_fn(void *ptr)
* Clear pending bit and wakeup anybody waiting to tear us down
*/
clear_bit(BDI_pending, &bdi->state);
+ smp_mb__after_clear_bit();
wake_up_bit(&bdi->state, BDI_pending);
/*
* Make us discoverable on the bdi_list again
*/
- spin_lock_bh(&bdi_lock);
- list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
- spin_unlock_bh(&bdi_lock);
+ mutex_lock(&bdi_mutex);
+ list_add_tail(&bdi->bdi_list, &bdi_list);
+ mutex_unlock(&bdi_mutex);
ret = bdi_writeback_task(wb);
@@ -422,12 +423,6 @@ static int bdi_forker_task(void *ptr)
struct backing_dev_info *bdi;
struct bdi_writeback *wb;
- prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
-
- smp_mb();
- if (list_empty(&bdi_pending_list))
- schedule();
-
/*
* Ideally we'd like not to see any dirty inodes on the
* default_backing_dev_info. Until these are tracked down,
@@ -438,19 +433,23 @@ static int bdi_forker_task(void *ptr)
if (wb_has_dirty_io(me) || !list_empty(&me->bdi->work_list))
wb_do_writeback(me);
+ prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
+
+ mutex_lock(&bdi_mutex);
+ if (list_empty(&bdi_pending_list)) {
+ mutex_unlock(&bdi_mutex);
+ schedule();
+ continue;
+ }
+
/*
* This is our real job - check for pending entries in
* bdi_pending_list, and create the tasks that got added
*/
-repeat:
- bdi = NULL;
- spin_lock_bh(&bdi_lock);
- if (!list_empty(&bdi_pending_list)) {
- bdi = list_entry(bdi_pending_list.next,
+ bdi = list_entry(bdi_pending_list.next,
struct backing_dev_info, bdi_list);
- list_del_init(&bdi->bdi_list);
- }
- spin_unlock_bh(&bdi_lock);
+ list_del_init(&bdi->bdi_list);
+ mutex_unlock(&bdi_mutex);
if (!bdi)
continue;
@@ -475,12 +474,11 @@ readd_flush:
* a chance to flush other bdi's to free
* memory.
*/
- spin_lock_bh(&bdi_lock);
+ mutex_lock(&bdi_mutex);
list_add_tail(&bdi->bdi_list, &bdi_pending_list);
- spin_unlock_bh(&bdi_lock);
+ mutex_unlock(&bdi_mutex);
bdi_flush_io(bdi);
- goto repeat;
}
}
@@ -488,26 +486,6 @@ readd_flush:
return 0;
}
-/*
- * Grace period has now ended, init bdi->bdi_list and add us to the
- * list of bdi's that are pending for task creation. Wake up
- * bdi_forker_task() to finish the job and add us back to the
- * active bdi_list.
- */
-static void bdi_add_to_pending(struct rcu_head *head)
-{
- struct backing_dev_info *bdi;
-
- bdi = container_of(head, struct backing_dev_info, rcu_head);
- INIT_LIST_HEAD(&bdi->bdi_list);
-
- spin_lock(&bdi_lock);
- list_add_tail(&bdi->bdi_list, &bdi_pending_list);
- spin_unlock(&bdi_lock);
-
- wake_up(&default_backing_dev_info.wb.wait);
-}
-
static void bdi_add_one_flusher_task(struct backing_dev_info *bdi,
int(*func)(struct backing_dev_info *))
{
@@ -526,17 +504,15 @@ static void bdi_add_one_flusher_task(struct backing_dev_info *bdi,
* waiting for previous additions to finish.
*/
if (!func(bdi)) {
- spin_lock_bh(&bdi_lock);
- list_del_rcu(&bdi->bdi_list);
- spin_unlock_bh(&bdi_lock);
+ mutex_lock(&bdi_mutex);
+ list_move_tail(&bdi->bdi_list, &bdi_pending_list);
+ mutex_unlock(&bdi_mutex);
/*
- * We need to wait for the current grace period to end,
- * in case others were browsing the bdi_list as well.
- * So defer the adding and wakeup to after the RCU
- * grace period has ended.
+ * We are now on the pending list, wake up bdi_forker_task()
+ * to finish the job and add us back to the active bdi_list
*/
- call_rcu(&bdi->rcu_head, bdi_add_to_pending);
+ wake_up(&default_backing_dev_info.wb.wait);
}
}
@@ -593,6 +569,14 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
goto exit;
}
+ mutex_lock(&bdi_mutex);
+ list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
+ mutex_unlock(&bdi_mutex);
+
+ bdi->dev = dev;
+ bdi_debug_register(bdi, dev_name(dev));
+ set_bit(BDI_registered, &bdi->state);
+
/*
* Just start the forker thread for our default backing_dev_info,
* and add other bdi's to the list. They will get a thread created
@@ -614,16 +598,16 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
ret = -ENOMEM;
goto exit;
}
+ } else {
+ /*
+ * start the default thread. this will exit if nothing
+ * happens for a while, but it's important to start it here
+ * or we will not notice that we have dirty data there,
+ * until memory pressure sets in.
+ */
+ bdi_add_default_flusher_task(bdi);
}
- spin_lock_bh(&bdi_lock);
- list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
- spin_unlock_bh(&bdi_lock);
-
- bdi->dev = dev;
- bdi_debug_register(bdi, dev_name(dev));
- set_bit(BDI_registered, &bdi->state);
-
exit:
return ret;
}
@@ -655,15 +639,9 @@ static void bdi_wb_shutdown(struct backing_dev_info *bdi)
/*
* Make sure nobody finds us on the bdi_list anymore
*/
- spin_lock_bh(&bdi_lock);
+ mutex_lock(&bdi_mutex);
list_del_rcu(&bdi->bdi_list);
- spin_unlock_bh(&bdi_lock);
-
- /*
- * Now make sure that anybody who is currently looking at us from
- * the bdi_list iteration have exited.
- */
- synchronize_rcu();
+ mutex_unlock(&bdi_mutex);
/*
* Finally, kill the kernel threads. We don't need to be RCU
@@ -689,7 +667,6 @@ int bdi_init(struct backing_dev_info *bdi)
{
int i, err;
- INIT_RCU_HEAD(&bdi->rcu_head);
bdi->dev = NULL;
bdi->min_ratio = 0;
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index de3178a..f1785bb 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -313,9 +313,8 @@ static unsigned int bdi_min_ratio;
int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
{
int ret = 0;
- unsigned long flags;
- spin_lock_irqsave(&bdi_lock, flags);
+ mutex_lock(&bdi_mutex);
if (min_ratio > bdi->max_ratio) {
ret = -EINVAL;
} else {
@@ -327,27 +326,26 @@ int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
ret = -EINVAL;
}
}
- spin_unlock_irqrestore(&bdi_lock, flags);
+ mutex_unlock(&bdi_mutex);
return ret;
}
int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned max_ratio)
{
- unsigned long flags;
int ret = 0;
if (max_ratio > 100)
return -EINVAL;
- spin_lock_irqsave(&bdi_lock, flags);
+ mutex_lock(&bdi_mutex);
if (bdi->min_ratio > max_ratio) {
ret = -EINVAL;
} else {
bdi->max_ratio = max_ratio;
bdi->max_prop_frac = (PROP_FRAC_BASE * max_ratio) / 100;
}
- spin_unlock_irqrestore(&bdi_lock, flags);
+ mutex_unlock(&bdi_mutex);
return ret;
}
@@ -581,7 +579,7 @@ static void balance_dirty_pages(struct address_space *mapping)
(!laptop_mode && (global_page_state(NR_FILE_DIRTY)
+ global_page_state(NR_UNSTABLE_NFS)
> background_thresh)))
- bdi_start_writeback(bdi, NULL, 0);
+ bdi_start_writeback(bdi, NULL, 0, WB_SYNC_NONE);
}
void set_page_dirty_balance(struct page *page, int page_mkwrite)
@@ -674,7 +672,7 @@ void wakeup_flusher_threads(long nr_pages)
if (nr_pages == 0)
nr_pages = global_page_state(NR_FILE_DIRTY) +
global_page_state(NR_UNSTABLE_NFS);
- bdi_writeback_all(NULL, nr_pages);
+ bdi_writeback_all(NULL, nr_pages, WB_SYNC_NONE);
}
static void laptop_timer_fn(unsigned long unused);
--
This one has been tested and has a few more tweaks, so please try
that! It should be pretty close to final now; I'll repost the series on
Monday.
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index f80afaa..33357c3 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -136,6 +140,9 @@ static void wb_start_writeback(struct bdi_writeback *wb, struct bdi_work *work)
wake_up(&wb->wait);
}
+/*
+ * Add work to bdi work list.
+ */
static int bdi_queue_writeback(struct backing_dev_info *bdi,
struct bdi_work *work)
{
@@ -189,17 +196,17 @@ static void bdi_wait_on_work_start(struct bdi_work *work)
}
int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
- long nr_pages)
+ long nr_pages, enum writeback_sync_modes sync_mode)
{
struct bdi_work work_stack, *work;
int ret;
work = kmalloc(sizeof(*work), GFP_ATOMIC);
if (work)
- bdi_work_init(work, sb, nr_pages);
+ bdi_work_init(work, sb, nr_pages, sync_mode);
else {
work = &work_stack;
- bdi_work_init_on_stack(work, sb, nr_pages);
+ bdi_work_init_on_stack(work, sb, nr_pages, sync_mode);
}
ret = bdi_queue_writeback(bdi, work);
@@ -273,24 +280,31 @@ static long wb_kupdated(struct bdi_writeback *wb)
return wrote;
}
+static inline bool over_bground_thresh(void)
+{
+ unsigned long background_thresh, dirty_thresh;
+
+ get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
+
+ return (global_page_state(NR_FILE_DIRTY) +
+ global_page_state(NR_UNSTABLE_NFS) >= background_thresh);
+}
+
static long __wb_writeback(struct bdi_writeback *wb, long nr_pages,
- struct super_block *sb)
+ struct super_block *sb,
+ enum writeback_sync_modes sync_mode)
{
struct writeback_control wbc = {
.bdi = wb->bdi,
- .sync_mode = WB_SYNC_NONE,
+ .sync_mode = sync_mode,
.older_than_this = NULL,
.range_cyclic = 1,
};
long wrote = 0;
for (;;) {
- unsigned long background_thresh, dirty_thresh;
-
- get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
- if ((global_page_state(NR_FILE_DIRTY) +
- global_page_state(NR_UNSTABLE_NFS) < background_thresh) &&
- nr_pages <= 0)
+ if (sync_mode == WB_SYNC_NONE && nr_pages <= 0 &&
+ !over_bground_thresh())
break;
wbc.more_io = 0;
@@ -345,9 +359,10 @@ static long wb_writeback(struct bdi_writeback *wb)
while ((work = get_next_work_item(bdi, wb)) != NULL) {
struct super_block *sb = bdi_work_sb(work);
long nr_pages = work->nr_pages;
+ enum writeback_sync_modes sync_mode = work->sync_mode;
wb_clear_pending(wb, work);
- wrote += __wb_writeback(wb, nr_pages, sb);
+ wrote += __wb_writeback(wb, nr_pages, sb, sync_mode);
}
return wrote;
@@ -420,39 +435,36 @@ int bdi_writeback_task(struct bdi_writeback *wb)
return 0;
}
-void bdi_writeback_all(struct super_block *sb, long nr_pages)
+/*
+ * Do in-line writeback for all backing devices. Expensive!
+ */
+void bdi_writeback_all(struct super_block *sb, long nr_pages,
+ enum writeback_sync_modes sync_mode)
{
- struct list_head *entry = &bdi_list;
+ struct backing_dev_info *bdi, *tmp;
- rcu_read_lock();
-
- list_for_each_continue_rcu(entry, &bdi_list) {
- struct backing_dev_info *bdi;
- struct list_head *next;
- struct bdi_work *work;
+ mutex_lock(&bdi_lock);
- bdi = list_entry(entry, struct backing_dev_info, bdi_list);
+ list_for_each_entry_safe(bdi, tmp, &bdi_list, bdi_list) {
if (!bdi_has_dirty_io(bdi))
continue;
- /*
- * If this allocation fails, we just wakeup the thread and
- * let it do kupdate writeback
- */
- work = kmalloc(sizeof(*work), GFP_ATOMIC);
- if (work)
- bdi_work_init(work, sb, nr_pages);
+ if (!bdi_wblist_needs_lock(bdi))
+ __wb_writeback(&bdi->wb, 0, sb, sync_mode);
+ else {
+ struct bdi_writeback *wb;
+ int idx;
- /*
- * Prepare to start from previous entry if this one gets moved
- * to the bdi_pending list.
- */
- next = entry->prev;
- if (bdi_queue_writeback(bdi, work))
- entry = next;
+ idx = srcu_read_lock(&bdi->srcu);
+
+ list_for_each_entry_rcu(wb, &bdi->wb_list, list)
+ __wb_writeback(wb, 0, sb, sync_mode);
+
+ srcu_read_unlock(&bdi->srcu, idx);
+ }
}
- rcu_read_unlock();
+ mutex_unlock(&bdi_lock);
}
/*
@@ -972,9 +984,9 @@ void generic_sync_sb_inodes(struct super_block *sb,
struct writeback_control *wbc)
{
if (wbc->bdi)
- bdi_start_writeback(wbc->bdi, sb, 0);
+ generic_sync_bdi_inodes(sb, wbc);
else
- bdi_writeback_all(sb, 0);
+ bdi_writeback_all(sb, 0, wbc->sync_mode);
if (wbc->sync_mode == WB_SYNC_ALL) {
struct inode *inode, *old_inode = NULL;
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 7c2874f..0b20d4b 100644
+extern struct mutex bdi_lock;
extern struct list_head bdi_list;
static inline int wb_is_default_task(struct bdi_writeback *wb)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 60578bc..3ce3b57 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -26,7 +26,7 @@ struct backing_dev_info default_backing_dev_info = {
EXPORT_SYMBOL_GPL(default_backing_dev_info);
static struct class *bdi_class;
-DEFINE_SPINLOCK(bdi_lock);
+DEFINE_MUTEX(bdi_lock);
LIST_HEAD(bdi_list);
LIST_HEAD(bdi_pending_list);
@@ -360,14 +360,15 @@ static int bdi_start_fn(void *ptr)
* Clear pending bit and wakeup anybody waiting to tear us down
*/
clear_bit(BDI_pending, &bdi->state);
+ smp_mb__after_clear_bit();
wake_up_bit(&bdi->state, BDI_pending);
/*
* Make us discoverable on the bdi_list again
*/
- spin_lock_bh(&bdi_lock);
- list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
- spin_unlock_bh(&bdi_lock);
+ mutex_lock(&bdi_lock);
+ list_add_tail(&bdi->bdi_list, &bdi_list);
+ mutex_unlock(&bdi_lock);
ret = bdi_writeback_task(wb);
@@ -419,15 +420,9 @@ static int bdi_forker_task(void *ptr)
bdi_task_init(me->bdi, me);
for (;;) {
- struct backing_dev_info *bdi;
+ struct backing_dev_info *bdi, *tmp;
struct bdi_writeback *wb;
- prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
-
- smp_mb();
- if (list_empty(&bdi_pending_list))
- schedule();
-
/*
* Ideally we'd like not to see any dirty inodes on the
* default_backing_dev_info. Until these are tracked down,
@@ -438,19 +433,39 @@ static int bdi_forker_task(void *ptr)
if (wb_has_dirty_io(me) || !list_empty(&me->bdi->work_list))
wb_do_writeback(me);
+ prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
+
+ mutex_lock(&bdi_lock);
+
+ /*
+ * Check if any existing bdi's have dirty data without
+ * a thread registered. If so, set that up.
+ */
+ list_for_each_entry_safe(bdi, tmp, &bdi_list, bdi_list) {
+ if (!list_empty(&bdi->wb_list) ||
+ !bdi_has_dirty_io(bdi))
+ continue;
+
+ bdi_add_default_flusher_task(bdi);
+ }
+
+ if (list_empty(&bdi_pending_list)) {
+ unsigned long wait;
+
+ mutex_unlock(&bdi_lock);
+ wait = msecs_to_jiffies(dirty_writeback_interval * 10);
+ schedule_timeout(wait);
+ continue;
+ }
+
/*
* This is our real job - check for pending entries in
* bdi_pending_list, and create the tasks that got added
*/
-repeat:
- bdi = NULL;
- spin_lock_bh(&bdi_lock);
- if (!list_empty(&bdi_pending_list)) {
- bdi = list_entry(bdi_pending_list.next,
+ bdi = list_entry(bdi_pending_list.next,
struct backing_dev_info, bdi_list);
- list_del_init(&bdi->bdi_list);
- }
- spin_unlock_bh(&bdi_lock);
+ list_del_init(&bdi->bdi_list);
+ mutex_unlock(&bdi_lock);
if (!bdi)
continue;
@@ -475,12 +490,11 @@ readd_flush:
* a chance to flush other bdi's to free
* memory.
*/
- spin_lock_bh(&bdi_lock);
+ mutex_lock(&bdi_lock);
list_add_tail(&bdi->bdi_list, &bdi_pending_list);
- spin_unlock_bh(&bdi_lock);
+ mutex_unlock(&bdi_lock);
bdi_flush_io(bdi);
- goto repeat;
}
}
@@ -489,25 +503,8 @@ readd_flush:
}
/*
- * Grace period has now ended, init bdi->bdi_list and add us to the
- * list of bdi's that are pending for task creation. Wake up
- * bdi_forker_task() to finish the job and add us back to the
- * active bdi_list.
+ * bdi_lock held on entry
*/
-static void bdi_add_to_pending(struct rcu_head *head)
-{
- struct backing_dev_info *bdi;
-
- bdi = container_of(head, struct backing_dev_info, rcu_head);
- INIT_LIST_HEAD(&bdi->bdi_list);
-
- spin_lock(&bdi_lock);
- list_add_tail(&bdi->bdi_list, &bdi_pending_list);
- spin_unlock(&bdi_lock);
-
- wake_up(&default_backing_dev_info.wb.wait);
-}
-
static void bdi_add_one_flusher_task(struct backing_dev_info *bdi,
int(*func)(struct backing_dev_info *))
{
@@ -526,24 +523,22 @@ static void bdi_add_one_flusher_task(struct backing_dev_info *bdi,
* waiting for previous additions to finish.
*/
if (!func(bdi)) {
- spin_lock_bh(&bdi_lock);
- list_del_rcu(&bdi->bdi_list);
- spin_unlock_bh(&bdi_lock);
+ list_move_tail(&bdi->bdi_list, &bdi_pending_list);
/*
- * We need to wait for the current grace period to end,
- * in case others were browsing the bdi_list as well.
- * So defer the adding and wakeup to after the RCU
- * grace period has ended.
+ * We are now on the pending list, wake up bdi_forker_task()
+ * to finish the job and add us back to the active bdi_list
*/
- call_rcu(&bdi->rcu_head, bdi_add_to_pending);
+ wake_up(&default_backing_dev_info.wb.wait);
}
}
static int flusher_add_helper_block(struct backing_dev_info *bdi)
{
+ mutex_unlock(&bdi_lock);
wait_on_bit_lock(&bdi->state, BDI_pending, bdi_sched_wait,
TASK_UNINTERRUPTIBLE);
+ mutex_lock(&bdi_lock);
return 0;
}
@@ -571,7 +566,9 @@ void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
*/
void bdi_add_flusher_task(struct backing_dev_info *bdi)
{
+ mutex_lock(&bdi_lock);
bdi_add_one_flusher_task(bdi, flusher_add_helper_block);
+ mutex_unlock(&bdi_lock);
}
EXPORT_SYMBOL(bdi_add_flusher_task);
@@ -593,6 +590,14 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
goto exit;
}
+ mutex_lock(&bdi_lock);
+ list_add_tail(&bdi->bdi_list, &bdi_list);
+ mutex_unlock(&bdi_lock);
+
+ bdi->dev = dev;
+ bdi_debug_register(bdi, dev_name(dev));
+ set_bit(BDI_registered, &bdi->state);
+
/*
* Just start the forker thread for our default backing_dev_info,
* and add other bdi's to the list. They will get a thread created
@@ -616,14 +621,6 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
}
}
- spin_lock_bh(&bdi_lock);
- list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
- spin_unlock_bh(&bdi_lock);
-
- bdi->dev = dev;
- bdi_debug_register(bdi, dev_name(dev));
- set_bit(BDI_registered, &bdi->state);
-
exit:
return ret;
}
@@ -655,15 +652,9 @@ static void bdi_wb_shutdown(struct backing_dev_info *bdi)
/*
* Make sure nobody finds us on the bdi_list anymore
*/
- spin_lock_bh(&bdi_lock);
- list_del_rcu(&bdi->bdi_list);
- spin_unlock_bh(&bdi_lock);
-
- /*
- * Now make sure that anybody who is currently looking at us from
- * the bdi_list iteration have exited.
- */
- synchronize_rcu();
+ mutex_lock(&bdi_lock);
+ list_del(&bdi->bdi_list);
+ mutex_unlock(&bdi_lock);
/*
* Finally, kill the kernel threads. We don't need to be RCU
@@ -689,7 +680,6 @@ int bdi_init(struct backing_dev_info *bdi)
{
int i, err;
- INIT_RCU_HEAD(&bdi->rcu_head);
bdi->dev = NULL;
bdi->min_ratio = 0;
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index de3178a..7dd7de7 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -313,9 +313,8 @@ static unsigned int bdi_min_ratio;
int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
{
int ret = 0;
- unsigned long flags;
- spin_lock_irqsave(&bdi_lock, flags);
+ mutex_lock(&bdi_lock);
if (min_ratio > bdi->max_ratio) {
ret = -EINVAL;
} else {
@@ -327,27 +326,26 @@ int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
ret = -EINVAL;
}
}
- spin_unlock_irqrestore(&bdi_lock, flags);
+ mutex_unlock(&bdi_lock);
return ret;
}
int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned max_ratio)
{
- unsigned long flags;
int ret = 0;
if (max_ratio > 100)
return -EINVAL;
- spin_lock_irqsave(&bdi_lock, flags);
+ mutex_lock(&bdi_lock);
if (bdi->min_ratio > max_ratio) {
ret = -EINVAL;
} else {
bdi->max_ratio = max_ratio;
bdi->max_prop_frac = (PROP_FRAC_BASE * max_ratio) / 100;
}
- spin_unlock_irqrestore(&bdi_lock, flags);
+ mutex_unlock(&bdi_lock);
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/fs-writeback.c | 52 ++++++++++++++++++++++++++++++++++--------
include/linux/backing-dev.h | 5 ++++
include/linux/writeback.h | 2 +-
3 files changed, 48 insertions(+), 11 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 47f5ace..1292a88 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -247,10 +247,10 @@ int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
* older_than_this takes precedence over nr_to_write. So we'll only write back
* all dirty pages if they are all attached to "old" mappings.
*/
-static void wb_kupdated(struct bdi_writeback *wb)
+static long wb_kupdated(struct bdi_writeback *wb)
{
unsigned long oldest_jif;
- long nr_to_write;
+ long nr_to_write, wrote = 0;
struct writeback_control wbc = {
.bdi = wb->bdi,
.sync_mode = WB_SYNC_NONE,
@@ -273,10 +273,13 @@ static void wb_kupdated(struct bdi_writeback *wb)
wbc.encountered_congestion = 0;
wbc.nr_to_write = MAX_WRITEBACK_PAGES;
generic_sync_wb_inodes(wb, NULL, &wbc);
+ wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;
if (wbc.nr_to_write > 0)
break; /* All the old data is written */
nr_to_write -= MAX_WRITEBACK_PAGES;
}
+
+ return wrote;
}
static inline bool over_bground_thresh(void)
@@ -289,7 +292,7 @@ static inline bool over_bground_thresh(void)
global_page_state(NR_UNSTABLE_NFS) >= background_thresh);
}
-static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
+static long __wb_writeback(struct bdi_writeback *wb, long nr_pages,
struct super_block *sb,
enum writeback_sync_modes sync_mode)
{
@@ -299,6 +302,7 @@ static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
.older_than_this = NULL,
.range_cyclic = 1,
};
+ long wrote = 0;
for (;;) {
if (sync_mode == WB_SYNC_NONE && nr_pages <= 0 &&
@@ -311,6 +315,7 @@ static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
wbc.pages_skipped = 0;
generic_sync_wb_inodes(wb, sb, &wbc);
nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
+ wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;
/*
* If we ran out of stuff to write, bail unless more_io got set
*/
@@ -320,6 +325,8 @@ static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
break;
}
}
+
+ return wrote;
}
/*
@@ -345,10 +352,11 @@ static struct bdi_work *get_next_work_item(struct backing_dev_info *bdi,
return ret;
}
-static void wb_writeback(struct bdi_writeback *wb)
+static long wb_writeback(struct bdi_writeback *wb)
{
struct backing_dev_info *bdi = wb->bdi;
struct bdi_work *work;
+ long wrote = 0;
while ((work = get_next_work_item(bdi, wb)) != NULL) {
struct super_block *sb = bdi_work_sb(work);
@@ -356,16 +364,20 @@ static void wb_writeback(struct bdi_writeback *wb)
enum writeback_sync_modes sync_mode = work->sync_mode;
wb_clear_pending(wb, work);
- __wb_writeback(wb, nr_pages, sb, sync_mode);
+ wrote += __wb_writeback(wb, nr_pages, sb, sync_mode);
}
+
+ return wrote;
}
/*
* This will be inlined in bdi_writeback_task() once we get rid of any
* dirty inodes on the default_backing_dev_info
*/
-void wb_do_writeback(struct bdi_writeback *wb)
+long wb_do_writeback(struct bdi_writeback *wb)
{
+ long wrote;
+
/*
* We get here in two cases:
*
@@ -377,9 +389,11 @@ void wb_do_writeback(struct bdi_writeback *wb)
* items on the work_list. Process those.
*/
if (list_empty(&wb->bdi->work_list))
- wb_kupdated(wb);
+ wrote = wb_kupdated(wb);
else
- wb_writeback(wb);
+ wrote = wb_writeback(wb);
+
+ return wrote;
}
/*
@@ -388,12 +402,30 @@ void wb_do_writeback(struct bdi_writeback *wb)
*/
int bdi_writeback_task(struct bdi_writeback *wb)
{
+ unsigned long last_active = jiffies;
+ unsigned long wait_jiffies = -1UL;
+ long pages_written;
DEFINE_WAIT(wait);
while (!kthread_should_stop()) {
- unsigned long wait_jiffies;
- wb_do_writeback(wb);
+ pages_written = wb_do_writeback(wb);
+
+ if (pages_written)
+ last_active = jiffies;
+ else if (wait_jiffies != -1UL) {
+ unsigned long max_idle;
+
+ /*
+ * Longest period of inactivity that we tolerate. If we
+ * see dirty data again later, the task will get
+ * recreated automatically.
+ */
+ max_idle = max(5UL * 60 * HZ, wait_jiffies);
+ if (time_after(jiffies, max_idle + last_active) &&
+ wb_is_default_task(wb))
+ break;
+ }
prepare_to_wait(&wb->wait, &wait, TASK_INTERRUPTIBLE);
wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 72b4797..53e6c8d 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -113,6 +113,11 @@ int bdi_has_dirty_io(struct backing_dev_info *bdi);
extern struct mutex bdi_lock;
extern struct list_head bdi_list;
+static inline int wb_is_default_task(struct bdi_writeback *wb)
+{
+ return wb == &wb->bdi->wb;
+}
+
static inline int bdi_wblist_needs_lock(struct backing_dev_info *bdi)
{
return test_bit(BDI_wblist_lock, &bdi->state);
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index e414702..30e318b 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -69,7 +69,7 @@ void writeback_inodes(struct writeback_control *wbc);
int inode_wait(void *);
void sync_inodes_sb(struct super_block *, int wait);
void sync_inodes(int wait);
-void wb_do_writeback(struct bdi_writeback *wb);
+long wb_do_writeback(struct bdi_writeback *wb);
/* writeback.h requires fs.h; it, too, is not included from here. */
static inline void wait_on_inode(struct inode *inode)
--
1.6.3.rc0.1.gf800
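As an aside on the patch above: the new exit policy in bdi_writeback_task()
can be restated as a sketch (variable names and the wb_is_default_task()
helper are from the diff; this is an illustration, not the literal code):

	/*
	 * Sketch: a flusher thread that has written nothing for longer
	 * than max(5 minutes, one writeback interval) exits, but only
	 * if it is a bdi's default task, since only those get recreated
	 * on demand by the forker thread when dirty data reappears.
	 */
	if (!pages_written && wait_jiffies != -1UL) {
		unsigned long max_idle = max(5UL * 60 * HZ, wait_jiffies);

		if (time_after(jiffies, last_active + max_idle) &&
		    wb_is_default_task(wb))
			break;	/* leave the loop; the thread exits */
	}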
Add code to mpage.c to properly propagate read vs read-ahead (READA)
information to the block layer, and let the elevator core check for and
prevent such merges.
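To illustrate the intent (a sketch only, not part of the patch; the bio
flags and helpers are the 2.6.30-era ones): the filesystem submits the
speculative tail of a read as READA, and the elevator then refuses to grow
a synchronous request with such a bio on non-rotational devices:

	/*
	 * Filesystem side: rw is READ for the part the caller is
	 * waiting on, READA for pages only marked for read-ahead.
	 */
	int rw = PageReadahead(page) ? READA : READ;

	submit_bio(rw, bio);

	/* Elevator side: refuse to merge READA into a sync request */
	if (blk_queue_nonrot(q) && bio_rw_ahead(bio) &&
	    !(rq->cmd_flags & REQ_FAILFAST_DEV))
		return 0;	/* no merge */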
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
block/elevator.c | 7 +++++++
fs/mpage.c | 30 ++++++++++++++++++++++++------
2 files changed, 31 insertions(+), 6 deletions(-)
diff --git a/block/elevator.c b/block/elevator.c
index 6261b24..17cfaa2 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -67,6 +67,13 @@ static int elv_iosched_allow_merge(struct request *rq, struct bio *bio)
{
struct request_queue *q = rq->q;
+ /*
+ * Disallow merge of a read-ahead bio into a normal request for SSD
+ */
+ if (blk_queue_nonrot(q) &&
+ bio_rw_ahead(bio) && !(rq->cmd_flags & REQ_FAILFAST_DEV))
+ return 0;
+
if (q->elv_ops.elevator_allow_merge_fn)
return elv_call_allow_merge_fn(q, rq, bio);
diff --git a/fs/mpage.c b/fs/mpage.c
index 680ba60..d02cf51 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -180,11 +180,18 @@ do_mpage_readpage(struct bio *bio, struct page *page, unsigned nr_pages,
unsigned page_block;
unsigned first_hole = blocks_per_page;
struct block_device *bdev = NULL;
- int length;
+ int length, rw;
int fully_mapped = 1;
unsigned nblocks;
unsigned relative_block;
+ /*
+ * If there's some read-ahead in this range, be sure to tell
+ * the block layer about it. We start off as a READ, then switch
+ * to READA if we spot the read-ahead marker on the page.
+ */
+ rw = READ;
+
if (page_has_buffers(page))
goto confused;
@@ -289,7 +296,7 @@ do_mpage_readpage(struct bio *bio, struct page *page, unsigned nr_pages,
* This page will go to BIO. Do we need to send this BIO off first?
*/
if (bio && (*last_block_in_bio != blocks[0] - 1))
- bio = mpage_bio_submit(READ, bio);
+ bio = mpage_bio_submit(rw, bio);
alloc_new:
if (bio == NULL) {
@@ -301,8 +308,19 @@ alloc_new:
}
length = first_hole << blkbits;
- if (bio_add_page(bio, page, length, 0) < length) {
- bio = mpage_bio_submit(READ, bio);
+
+ /*
+ * If this is an SSD, don't merge the read-ahead part of the IO
+ * with the actual request. We want the interesting part to complete
+ * as quickly as possible.
+ */
+ if (blk_queue_nonrot(bdev_get_queue(bdev)) &&
+ bio->bi_size && PageReadahead(page)) {
+ bio = mpage_bio_submit(rw, bio);
+ rw = READA;
+ goto alloc_new;
+ } else if (bio_add_page(bio, page, length, 0) < length) {
+ bio = mpage_bio_submit(rw, bio);
goto alloc_new;
}
@@ -310,7 +328,7 @@ alloc_new:
nblocks = map_bh->b_size >> blkbits;
if ((buffer_boundary(map_bh) && relative_block == nblocks) ||
(first_hole != blocks_per_page))
- bio = mpage_bio_submit(READ, bio);
+ bio = mpage_bio_submit(rw, bio);
else
*last_block_in_bio = blocks[blocks_per_page - 1];
out:
@@ -318,7 +336,7 @@ out:
confused:
if (bio)
- bio = mpage_bio_submit(READ, bio);
+ bio = mpage_bio_submit(rw, bio);
if (!PageUptodate(page))
block_read_full_page(page, get_block);
else
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/fs-writeback.c | 145 ++++++++++++++++++++++++---------------
include/linux/backing-dev.h | 40 ++++++-----
mm/backing-dev.c | 161 ++++++++++++++++++++++++++++++++-----------
3 files changed, 233 insertions(+), 113 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index ca4d9da..7a9f0b0 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -46,9 +46,11 @@ int nr_pdflush_threads;
* unless they implement their own. Which is somewhat inefficient, as this
* may prevent concurrent writeback against multiple devices.
*/
-static int writeback_acquire(struct backing_dev_info *bdi)
+static int writeback_acquire(struct bdi_writeback *wb)
{
- return !test_and_set_bit(BDI_pdflush, &bdi->state);
+ struct backing_dev_info *bdi = wb->bdi;
+
+ return !test_and_set_bit(wb->nr, &bdi->wb_active);
}
/**
@@ -59,19 +61,40 @@ static int writeback_acquire(struct backing_dev_info *bdi)
*/
int writeback_in_progress(struct backing_dev_info *bdi)
{
- return test_bit(BDI_pdflush, &bdi->state);
+ return bdi->wb_active != 0;
}
/**
* writeback_release - relinquish exclusive writeback access against a device.
* @bdi: the device's backing_dev_info structure
*/
-static void writeback_release(struct backing_dev_info *bdi)
+static void writeback_release(struct bdi_writeback *wb)
{
- WARN_ON_ONCE(!writeback_in_progress(bdi));
- bdi->wb_arg.nr_pages = 0;
- bdi->wb_arg.sb = NULL;
- clear_bit(BDI_pdflush, &bdi->state);
+ struct backing_dev_info *bdi = wb->bdi;
+
+ wb->nr_pages = 0;
+ wb->sb = NULL;
+ clear_bit(wb->nr, &bdi->wb_active);
+}
+
+static void wb_start_writeback(struct bdi_writeback *wb, struct super_block *sb,
+ long nr_pages,
+ enum writeback_sync_modes sync_mode)
+{
+ if (!wb_has_dirty_io(wb))
+ return;
+
+ if (writeback_acquire(wb)) {
+ wb->nr_pages = nr_pages;
+ wb->sb = sb;
+ wb->sync_mode = sync_mode;
+
+ /*
+ * make above store seen before the task is woken
+ */
+ smp_mb();
+ wake_up(&wb->wait);
+ }
}
int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
@@ -81,22 +104,12 @@ int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
* This only happens the first time someone kicks this bdi, so put
* it out-of-line.
*/
- if (unlikely(!bdi->task)) {
+ if (unlikely(!bdi->wb.task)) {
bdi_add_default_flusher_task(bdi);
return 1;
}
- if (writeback_acquire(bdi)) {
- bdi->wb_arg.nr_pages = nr_pages;
- bdi->wb_arg.sb = sb;
- bdi->wb_arg.sync_mode = sync_mode;
- /*
- * make above store seen before the task is woken
- */
- smp_mb();
- wake_up(&bdi->wait);
- }
-
+ wb_start_writeback(&bdi->wb, sb, nr_pages, sync_mode);
return 0;
}
@@ -124,12 +137,12 @@ int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
* older_than_this takes precedence over nr_to_write. So we'll only write back
* all dirty pages if they are all attached to "old" mappings.
*/
-static void bdi_kupdated(struct backing_dev_info *bdi)
+static void wb_kupdated(struct bdi_writeback *wb)
{
unsigned long oldest_jif;
long nr_to_write;
struct writeback_control wbc = {
- .bdi = bdi,
+ .bdi = wb->bdi,
.sync_mode = WB_SYNC_NONE,
.older_than_this = &oldest_jif,
.nr_to_write = 0,
@@ -166,15 +179,19 @@ static inline bool over_bground_thresh(void)
global_page_state(NR_UNSTABLE_NFS) >= background_thresh);
}
-static void bdi_pdflush(struct backing_dev_info *bdi)
+static void generic_sync_wb_inodes(struct bdi_writeback *wb,
+ struct super_block *sb,
+ struct writeback_control *wbc);
+
+static void wb_writeback(struct bdi_writeback *wb)
{
struct writeback_control wbc = {
- .bdi = bdi,
- .sync_mode = bdi->wb_arg.sync_mode,
+ .bdi = wb->bdi,
+ .sync_mode = wb->sync_mode,
.older_than_this = NULL,
.range_cyclic = 1,
};
- long nr_pages = bdi->wb_arg.nr_pages;
+ long nr_pages = wb->nr_pages;
for (;;) {
if (wbc.sync_mode == WB_SYNC_NONE && nr_pages <= 0 &&
@@ -185,7 +202,7 @@ static void bdi_pdflush(struct backing_dev_info *bdi)
wbc.encountered_congestion = 0;
wbc.nr_to_write = MAX_WRITEBACK_PAGES;
wbc.pages_skipped = 0;
- generic_sync_bdi_inodes(bdi->wb_arg.sb, &wbc);
+ generic_sync_wb_inodes(wb, wb->sb, &wbc);
nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
/*
* If we ran out of stuff to write, bail unless more_io got set
@@ -202,13 +219,13 @@ static void bdi_pdflush(struct backing_dev_info *bdi)
* Handle writeback of dirty data for the device backed by this bdi. Also
* wakes up periodically and does kupdated style flushing.
*/
-int bdi_writeback_task(struct backing_dev_info *bdi)
+int bdi_writeback_task(struct bdi_writeback *wb)
{
while (!kthread_should_stop()) {
unsigned long wait_jiffies;
DEFINE_WAIT(wait);
- prepare_to_wait(&bdi->wait, &wait, TASK_INTERRUPTIBLE);
+ prepare_to_wait(&wb->wait, &wait, TASK_INTERRUPTIBLE);
wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
schedule_timeout(wait_jiffies);
try_to_freeze();
@@ -227,13 +244,13 @@ int bdi_writeback_task(struct backing_dev_info *bdi)
* pdflush style writeout.
*
*/
- if (writeback_acquire(bdi))
- bdi_kupdated(bdi);
+ if (writeback_acquire(wb))
+ wb_kupdated(wb);
else
- bdi_pdflush(bdi);
+ wb_writeback(wb);
- writeback_release(bdi);
- finish_wait(&bdi->wait, &wait);
+ writeback_release(wb);
+ finish_wait(&wb->wait, &wait);
}
return 0;
@@ -255,6 +272,14 @@ void bdi_writeback_all(struct super_block *sb, long nr_pages,
mutex_unlock(&bdi_lock);
}
+/*
+ * We have only a single wb per bdi, so just return that.
+ */
+static inline struct bdi_writeback *inode_get_wb(struct inode *inode)
+{
+ return &inode_to_bdi(inode)->wb;
+}
+
/**
* __mark_inode_dirty - internal function
* @inode: inode to mark
@@ -353,9 +378,10 @@ void __mark_inode_dirty(struct inode *inode, int flags)
* reposition it (that would break b_dirty time-ordering).
*/
if (!was_dirty) {
+ struct bdi_writeback *wb = inode_get_wb(inode);
+
inode->dirtied_when = jiffies;
- list_move(&inode->i_list,
- &inode_to_bdi(inode)->b_dirty);
+ list_move(&inode->i_list, &wb->b_dirty);
}
}
out:
@@ -382,16 +408,16 @@ static int write_inode(struct inode *inode, int sync)
*/
static void redirty_tail(struct inode *inode)
{
- struct backing_dev_info *bdi = inode_to_bdi(inode);
+ struct bdi_writeback *wb = inode_get_wb(inode);
- if (!list_empty(&bdi->b_dirty)) {
+ if (!list_empty(&wb->b_dirty)) {
struct inode *tail;
- tail = list_entry(bdi->b_dirty.next, struct inode, i_list);
+ tail = list_entry(wb->b_dirty.next, struct inode, i_list);
if (time_before(inode->dirtied_when, tail->dirtied_when))
inode->dirtied_when = jiffies;
}
- list_move(&inode->i_list, &bdi->b_dirty);
+ list_move(&inode->i_list, &wb->b_dirty);
}
/*
@@ -399,7 +425,9 @@ static void redirty_tail(struct inode *inode)
*/
static void requeue_io(struct inode *inode)
{
- list_move(&inode->i_list, &inode_to_bdi(inode)->b_more_io);
+ struct bdi_writeback *wb = inode_get_wb(inode);
+
+ list_move(&inode->i_list, &wb->b_more_io);
}
static void inode_sync_complete(struct inode *inode)
@@ -446,11 +474,10 @@ static void move_expired_inodes(struct list_head *delaying_queue,
/*
* Queue all expired dirty inodes for io, eldest first.
*/
-static void queue_io(struct backing_dev_info *bdi,
- unsigned long *older_than_this)
+static void queue_io(struct bdi_writeback *wb, unsigned long *older_than_this)
{
- list_splice_init(&bdi->b_more_io, bdi->b_io.prev);
- move_expired_inodes(&bdi->b_dirty, &bdi->b_io, older_than_this);
+ list_splice_init(&wb->b_more_io, wb->b_io.prev);
+ move_expired_inodes(&wb->b_dirty, &wb->b_io, older_than_this);
}
/*
@@ -611,20 +638,20 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
return __sync_single_inode(inode, wbc);
}
-void generic_sync_bdi_inodes(struct super_block *sb,
- struct writeback_control *wbc)
+static void generic_sync_wb_inodes(struct bdi_writeback *wb,
+ struct super_block *sb,
+ struct writeback_control *wbc)
{
const int is_blkdev_sb = sb_is_blkdev_sb(sb);
- struct backing_dev_info *bdi = wbc->bdi;
const unsigned long start = jiffies; /* livelock avoidance */
spin_lock(&inode_lock);
- if (!wbc->for_kupdate || list_empty(&bdi->b_io))
- queue_io(bdi, wbc->older_than_this);
+ if (!wbc->for_kupdate || list_empty(&wb->b_io))
+ queue_io(wb, wbc->older_than_this);
- while (!list_empty(&bdi->b_io)) {
- struct inode *inode = list_entry(bdi->b_io.prev,
+ while (!list_empty(&wb->b_io)) {
+ struct inode *inode = list_entry(wb->b_io.prev,
struct inode, i_list);
long pages_skipped;
@@ -636,7 +663,7 @@ void generic_sync_bdi_inodes(struct super_block *sb,
continue;
}
- if (!bdi_cap_writeback_dirty(bdi)) {
+ if (!bdi_cap_writeback_dirty(wb->bdi)) {
redirty_tail(inode);
if (is_blkdev_sb) {
/*
@@ -658,7 +685,7 @@ void generic_sync_bdi_inodes(struct super_block *sb,
continue;
}
- if (wbc->nonblocking && bdi_write_congested(bdi)) {
+ if (wbc->nonblocking && bdi_write_congested(wb->bdi)) {
wbc->encountered_congestion = 1;
if (!is_blkdev_sb)
break; /* Skip a congested fs */
@@ -692,7 +719,7 @@ void generic_sync_bdi_inodes(struct super_block *sb,
wbc->more_io = 1;
break;
}
- if (!list_empty(&bdi->b_more_io))
+ if (!list_empty(&wb->b_more_io))
wbc->more_io = 1;
}
@@ -700,6 +727,14 @@ void generic_sync_bdi_inodes(struct super_block *sb,
/* Leave any unwritten inodes on b_io */
}
+void generic_sync_bdi_inodes(struct super_block *sb,
+ struct writeback_control *wbc)
+{
+ struct backing_dev_info *bdi = wbc->bdi;
+
+ generic_sync_wb_inodes(&bdi->wb, sb, wbc);
+}
+
/*
* Write out a superblock's list of dirty inodes. A wait will be performed
* upon no inodes, all inodes or the final one, depending upon sync_mode.
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index f164925..77dc62c 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -24,8 +24,8 @@ struct dentry;
* Bits in backing_dev_info.state
*/
enum bdi_state {
- BDI_pdflush, /* A pdflush thread is working this device */
BDI_pending, /* On its way to being activated */
+ BDI_wb_alloc, /* Default embedded wb allocated */
BDI_async_congested, /* The async (write) queue is getting full */
BDI_sync_congested, /* The sync queue is getting full */
BDI_unused, /* Available bits start here */
@@ -41,15 +41,23 @@ enum bdi_stat_item {
#define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
-struct bdi_writeback_arg {
- unsigned long nr_pages;
- struct super_block *sb;
+struct bdi_writeback {
+ struct backing_dev_info *bdi; /* our parent bdi */
+ unsigned int nr;
+
+ struct task_struct *task; /* writeback task */
+ wait_queue_head_t wait;
+ struct list_head b_dirty; /* dirty inodes */
+ struct list_head b_io; /* parked for writeback */
+ struct list_head b_more_io; /* parked for more writeback */
+
+ unsigned long nr_pages;
+ struct super_block *sb;
enum writeback_sync_modes sync_mode;
};
struct backing_dev_info {
struct list_head bdi_list;
-
unsigned long ra_pages; /* max readahead in PAGE_CACHE_SIZE units */
unsigned long state; /* Always use atomic bitops on this */
unsigned int capabilities; /* Device capabilities */
@@ -66,14 +74,11 @@ struct backing_dev_info {
unsigned int min_ratio;
unsigned int max_ratio, max_prop_frac;
- struct device *dev;
+ struct bdi_writeback wb; /* default writeback info for this bdi */
+ unsigned long wb_active; /* bitmap of active tasks */
+ unsigned long wb_mask; /* number of registered tasks */
- struct task_struct *task; /* writeback task */
- wait_queue_head_t wait;
- struct bdi_writeback_arg wb_arg; /* protected by BDI_pdflush */
- struct list_head b_dirty; /* dirty inodes */
- struct list_head b_io; /* parked for writeback */
- struct list_head b_more_io; /* parked for more writeback */
+ struct device *dev;
#ifdef CONFIG_DEBUG_FS
struct dentry *debug_dir;
@@ -90,19 +95,20 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
void bdi_unregister(struct backing_dev_info *bdi);
int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
long nr_pages, enum writeback_sync_modes sync_mode);
-int bdi_writeback_task(struct backing_dev_info *bdi);
+int bdi_writeback_task(struct bdi_writeback *wb);
void bdi_writeback_all(struct super_block *sb, long nr_pages,
enum writeback_sync_modes sync_mode);
void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
+int bdi_has_dirty_io(struct backing_dev_info *bdi);
extern struct mutex bdi_lock;
extern struct list_head bdi_list;
-static inline int bdi_has_dirty_io(struct backing_dev_info *bdi)
+static inline int wb_has_dirty_io(struct bdi_writeback *wb)
{
- return !list_empty(&bdi->b_dirty) ||
- !list_empty(&bdi->b_io) ||
- !list_empty(&bdi->b_more_io);
+ return !list_empty(&wb->b_dirty) ||
+ !list_empty(&wb->b_io) ||
+ !list_empty(&wb->b_more_io);
}
static inline void __add_bdi_stat(struct backing_dev_info *bdi,
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 57c8487..df90b0e 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -199,17 +199,59 @@ static int __init default_bdi_init(void)
}
subsys_initcall(default_bdi_init);
+static void bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
+{
+ memset(wb, 0, sizeof(*wb));
+
+ wb->bdi = bdi;
+ init_waitqueue_head(&wb->wait);
+ INIT_LIST_HEAD(&wb->b_dirty);
+ INIT_LIST_HEAD(&wb->b_io);
+ INIT_LIST_HEAD(&wb->b_more_io);
+}
+
+static void bdi_flush_io(struct backing_dev_info *bdi)
+{
+ struct writeback_control wbc = {
+ .bdi = bdi,
+ .sync_mode = WB_SYNC_NONE,
+ .older_than_this = NULL,
+ .range_cyclic = 1,
+ .nr_to_write = 1024,
+ };
+
+ generic_sync_bdi_inodes(NULL, &wbc);
+}
+
+static int wb_assign_nr(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+{
+ set_bit(0, &bdi->wb_mask);
+ wb->nr = 0;
+ return 0;
+}
+
+static void bdi_put_wb(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+{
+ clear_bit(wb->nr, &bdi->wb_mask);
+ clear_bit(BDI_wb_alloc, &bdi->state);
+}
+
+static struct bdi_writeback *bdi_new_wb(struct backing_dev_info *bdi)
+{
+ struct bdi_writeback *wb;
+
+ set_bit(BDI_wb_alloc, &bdi->state);
+ wb = &bdi->wb;
+ wb_assign_nr(bdi, wb);
+ return wb;
+}
+
static int bdi_start_fn(void *ptr)
{
- struct backing_dev_info *bdi = ptr;
+ struct bdi_writeback *wb = ptr;
+ struct backing_dev_info *bdi = wb->bdi;
struct task_struct *tsk = current;
-
- /*
- * Add us to the active bdi_list
- */
- mutex_lock(&bdi_lock);
- list_add(&bdi->bdi_list, &bdi_list);
- mutex_unlock(&bdi_lock);
+ int ret;
tsk->flags |= PF_FLUSHER | PF_SWAPWRITE;
set_freezable();
@@ -226,21 +268,33 @@ static int bdi_start_fn(void *ptr)
smp_mb__after_clear_bit();
wake_up_bit(&bdi->state, BDI_pending);
- return bdi_writeback_task(bdi);
+ ret = bdi_writeback_task(wb);
+
+ bdi_put_wb(bdi, wb);
+ return ret;
+}
+
+int bdi_has_dirty_io(struct backing_dev_info *bdi)
+{
+ return wb_has_dirty_io(&bdi->wb);
}
static int bdi_forker_task(void *ptr)
{
- struct backing_dev_info *me = ptr;
+ struct bdi_writeback *me = ptr;
DEFINE_WAIT(wait);
for (;;) {
struct backing_dev_info *bdi, *tmp;
+ struct bdi_writeback *wb;
/*
* Should never trigger on the default bdi
*/
- WARN_ON(bdi_has_dirty_io(me));
+ if (wb_has_dirty_io(me)) {
+ bdi_flush_io(me->bdi);
+ WARN_ON(1);
+ }
prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
@@ -251,7 +305,7 @@ static int bdi_forker_task(void *ptr)
* a thread registered. If so, set that up.
*/
list_for_each_entry_safe(bdi, tmp, &bdi_list, bdi_list) {
- if (bdi->task || !bdi_has_dirty_io(bdi))
+ if (bdi->wb.task || !bdi_has_dirty_io(bdi))
continue;
bdi_add_default_flusher_task(bdi);
@@ -272,24 +326,22 @@ static int bdi_forker_task(void *ptr)
list_del_init(&bdi->bdi_list);
mutex_unlock(&bdi_lock);
- BUG_ON(bdi->task);
+ wb = bdi_new_wb(bdi);
+ if (!wb)
+ goto readd_flush;
- bdi->task = kthread_run(bdi_start_fn, bdi, "bdi-%s",
+ wb->task = kthread_run(bdi_start_fn, wb, "bdi-%s",
dev_name(bdi->dev));
+
/*
* If task creation fails, then readd the bdi to
* the pending list and force writeout of the bdi
* from this forker thread. That will free some memory
* and we can try again.
*/
- if (!bdi->task) {
- struct writeback_control wbc = {
- .bdi = bdi,
- .sync_mode = WB_SYNC_NONE,
- .older_than_this = NULL,
- .range_cyclic = 1,
- };
-
+ if (!wb->task) {
+ bdi_put_wb(bdi, wb);
+readd_flush:
/*
* Add this 'bdi' to the back, so we get
* a chance to flush other bdi's to free
@@ -299,8 +351,7 @@ static int bdi_forker_task(void *ptr)
list_add_tail(&bdi->bdi_list, &bdi_pending_list);
mutex_unlock(&bdi_lock);
- wbc.nr_to_write = 1024;
- generic_sync_bdi_inodes(NULL, &wbc);
+ bdi_flush_io(bdi);
}
}
@@ -308,8 +359,18 @@ static int bdi_forker_task(void *ptr)
return 0;
}
+/*
+ * Add a new flusher task that gets created for any bdi
+ * that has dirty data pending writeout
+ */
void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
{
+ if (!bdi_cap_writeback_dirty(bdi))
+ return;
+
+ /*
+ * Someone already marked this pending for task creation
+ */
if (test_and_set_bit(BDI_pending, &bdi->state))
return;
@@ -317,7 +378,7 @@ void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
list_move_tail(&bdi->bdi_list, &bdi_pending_list);
mutex_unlock(&bdi_lock);
- wake_up(&default_backing_dev_info.wait);
+ wake_up(&default_backing_dev_info.wb.wait);
}
int bdi_register(struct backing_dev_info *bdi, struct device *parent,
@@ -350,13 +411,23 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
* on-demand when they need it.
*/
if (bdi_cap_flush_forker(bdi)) {
- bdi->task = kthread_run(bdi_forker_task, bdi, "bdi-%s",
+ struct bdi_writeback *wb;
+
+ wb = bdi_new_wb(bdi);
+ if (!wb) {
+ ret = -ENOMEM;
+ goto remove_err;
+ }
+
+ wb->task = kthread_run(bdi_forker_task, wb, "bdi-%s",
dev_name(dev));
- if (!bdi->task) {
+ if (!wb->task) {
+ bdi_put_wb(bdi, wb);
+ ret = -ENOMEM;
+remove_err:
mutex_lock(&bdi_lock);
list_del(&bdi->bdi_list);
mutex_unlock(&bdi_lock);
- ret = -ENOMEM;
goto exit;
}
}
@@ -379,28 +450,39 @@ static int sched_wait(void *word)
return 0;
}
+/*
+ * Remove bdi from global list and shutdown any threads we have running
+ */
static void bdi_wb_shutdown(struct backing_dev_info *bdi)
{
+ if (!bdi_cap_writeback_dirty(bdi))
+ return;
+
/*
* If setup is pending, wait for that to complete first
*/
wait_on_bit(&bdi->state, BDI_pending, sched_wait, TASK_UNINTERRUPTIBLE);
+ /*
+ * Make sure nobody finds us on the bdi_list anymore
+ */
mutex_lock(&bdi_lock);
list_del(&bdi->bdi_list);
mutex_unlock(&bdi_lock);
+
+ /*
+ * Finally, kill the kernel thread
+ */
+ kthread_stop(bdi->wb.task);
}
void bdi_unregister(struct backing_dev_info *bdi)
{
if (bdi->dev) {
- if (!bdi_cap_flush_forker(bdi)) {
+ if (!bdi_cap_flush_forker(bdi))
bdi_wb_shutdown(bdi);
- if (bdi->task) {
- kthread_stop(bdi->task);
- bdi->task = NULL;
- }
- }
+
bdi_debug_unregister(bdi);
device_unregister(bdi->dev);
bdi->dev = NULL;
@@ -417,11 +499,10 @@ int bdi_init(struct backing_dev_info *bdi)
bdi->min_ratio = 0;
bdi->max_ratio = 100;
bdi->max_prop_frac = PROP_FRAC_BASE;
- init_waitqueue_head(&bdi->wait);
INIT_LIST_HEAD(&bdi->bdi_list);
- INIT_LIST_HEAD(&bdi->b_io);
- INIT_LIST_HEAD(&bdi->b_dirty);
- INIT_LIST_HEAD(&bdi->b_more_io);
+ bdi->wb_mask = bdi->wb_active = 0;
+
+ bdi_wb_init(&bdi->wb, bdi);
for (i = 0; i < NR_BDI_STAT_ITEMS; i++) {
err = percpu_counter_init(&bdi->bdi_stat[i], 0);
@@ -446,9 +527,7 @@ void bdi_destroy(struct backing_dev_info *bdi)
{
int i;
- WARN_ON(!list_empty(&bdi->b_dirty));
- WARN_ON(!list_empty(&bdi->b_io));
- WARN_ON(!list_empty(&bdi->b_more_io));
+ WARN_ON(bdi_has_dirty_io(bdi));
bdi_unregister(bdi);
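The caller-visible shape after this refactoring, as a hedged sketch (not
in the patch; inode_to_bdi() is the helper used in the diff above): each
bdi now carries one embedded struct bdi_writeback, and writeback is kicked
per bdi:

	/*
	 * Sketch: kick background writeback for the bdi backing an
	 * inode. The bdi's embedded wb thread picks the work up.
	 */
	struct backing_dev_info *bdi = inode_to_bdi(inode);

	if (bdi_has_dirty_io(bdi))
		bdi_start_writeback(bdi, inode->i_sb, nr_pages,
				    WB_SYNC_NONE);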
Acked-by: Anton Altaparmakov <ai...@cam.ac.uk>
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/ntfs/super.c | 33 +++------------------------------
1 files changed, 3 insertions(+), 30 deletions(-)
diff --git a/fs/ntfs/super.c b/fs/ntfs/super.c
index f76951d..3fc03bd 100644
--- a/fs/ntfs/super.c
+++ b/fs/ntfs/super.c
@@ -2373,39 +2373,12 @@ static void ntfs_put_super(struct super_block *sb)
vol->mftmirr_ino = NULL;
}
/*
- * If any dirty inodes are left, throw away all mft data page cache
- * pages to allow a clean umount. This should never happen any more
- * due to mft.c::ntfs_mft_writepage() cleaning all the dirty pages as
- * the underlying mft records are written out and cleaned. If it does,
- * happen anyway, we want to know...
+ * We should have no dirty inodes left, due to
+ * mft.c::ntfs_mft_writepage() cleaning all the dirty pages as
+ * the underlying mft records are written out and cleaned.
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/fs-writeback.c | 2 +-
include/linux/writeback.h | 1 +
mm/backing-dev.c | 16 +++++++++++-----
3 files changed, 13 insertions(+), 6 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 563860c..47f5ace 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -364,7 +364,7 @@ static void wb_writeback(struct bdi_writeback *wb)
* This will be inlined in bdi_writeback_task() once we get rid of any
* dirty inodes on the default_backing_dev_info
*/
-static void wb_do_writeback(struct bdi_writeback *wb)
+void wb_do_writeback(struct bdi_writeback *wb)
{
/*
* We get here in two cases:
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index baf04a9..e414702 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -69,6 +69,7 @@ void writeback_inodes(struct writeback_control *wbc);
int inode_wait(void *);
void sync_inodes_sb(struct super_block *, int wait);
void sync_inodes(int wait);
+void wb_do_writeback(struct bdi_writeback *wb);
/* writeback.h requires fs.h; it, too, is not included from here. */
static inline void wait_on_inode(struct inode *inode)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 3e74041..3a032be 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -388,12 +388,14 @@ static int bdi_forker_task(void *ptr)
struct bdi_writeback *wb;
/*
- * Should never trigger on the default bdi
+ * Ideally we'd like not to see any dirty inodes on the
+ * default_backing_dev_info. Until these are tracked down,
+ * perform the same writeback here that bdi_writeback_task
+ * does. For logic, see comment in
+ * fs/fs-writeback.c:bdi_writeback_task()
*/
- if (wb_has_dirty_io(me)) {
- bdi_flush_io(me->bdi);
- WARN_ON(1);
- }
+ if (wb_has_dirty_io(me) || !list_empty(&me->bdi->work_list))
+ wb_do_writeback(me);
prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
@@ -420,6 +422,10 @@ static int bdi_forker_task(void *ptr)
continue;
}
+ /*
+ * This is our real job - check for pending entries in
+ * bdi_pending_list, and create the tasks that got added
+ */
bdi = list_entry(bdi_pending_list.next, struct backing_dev_info,
bdi_list);
list_del_init(&bdi->bdi_list);
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
drivers/ata/libata-core.c | 11 +++++------
1 files changed, 5 insertions(+), 6 deletions(-)
diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
index c924230..ca4d208 100644
--- a/drivers/ata/libata-core.c
+++ b/drivers/ata/libata-core.c
@@ -5031,7 +5031,6 @@ int ata_qc_complete_multiple(struct ata_port *ap, u32 qc_active)
{
int nr_done = 0;
u32 done_mask;
- int i;
done_mask = ap->qc_active ^ qc_active;
@@ -5041,16 +5040,16 @@ int ata_qc_complete_multiple(struct ata_port *ap, u32 qc_active)
return -EINVAL;
}
- for (i = 0; i < ATA_MAX_QUEUE; i++) {
+ while (done_mask) {
struct ata_queued_cmd *qc;
+ unsigned int tag = __ffs(done_mask);
- if (!(done_mask & (1 << i)))
- continue;
-
- if ((qc = ata_qc_from_tag(ap, i))) {
+ qc = ata_qc_from_tag(ap, tag);
+ if (qc) {
ata_qc_complete(qc);
nr_done++;
}
+ done_mask &= ~(1 << tag);
}
return nr_done;
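A note on the conversion above: it uses the usual lowest-set-bit walk over
the completion mask instead of scanning every possible tag. In isolation
(a generic sketch; handle_tag() is a hypothetical stand-in for the per-tag
work):

	u32 mask = done_mask;	/* completions to process */

	while (mask) {
		unsigned int tag = __ffs(mask);

		handle_tag(tag);	/* hypothetical per-tag work */
		mask &= ~(1U << tag);
	}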
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/fs-writeback.c | 7 +++++++
include/linux/backing-dev.h | 1 +
mm/backing-dev.c | 6 ++++++
3 files changed, 14 insertions(+), 0 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 1292a88..bf8e0d5 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -583,6 +583,13 @@ void __mark_inode_dirty(struct inode *inode, int flags)
*/
if (!was_dirty) {
struct bdi_writeback *wb = inode_get_wb(inode);
+ struct backing_dev_info *bdi = wb->bdi;
+
+ if (bdi_cap_writeback_dirty(bdi) &&
+ !test_bit(BDI_registered, &bdi->state)) {
+ WARN_ON(1);
+ printk("bdi-%s not registered\n", bdi->name);
+ }
inode->dirtied_when = jiffies;
list_move(&inode->i_list, &wb->b_dirty);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 4507569..0b20d4b 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -31,6 +31,7 @@ enum bdi_state {
BDI_wblist_lock, /* bdi->wb_list now needs locking */
BDI_async_congested, /* The async (write) queue is getting full */
BDI_sync_congested, /* The sync queue is getting full */
+ BDI_registered, /* bdi_register() was done */
BDI_unused, /* Available bits start here */
};
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 0834ff9..ed66081 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -504,6 +504,11 @@ static void bdi_add_one_flusher_task(struct backing_dev_info *bdi,
if (!bdi_cap_writeback_dirty(bdi))
return;
+ if (WARN_ON(!test_bit(BDI_registered, &bdi->state))) {
+ printk("bdi %p/%s is not registered!\n", bdi, bdi->name);
+ return;
+ }
+
/*
* Check with the helper whether to proceed adding a task. Will only
* abort if there are two or more simultaneous calls to
@@ -612,6 +617,7 @@ remove_err:
}
bdi_debug_register(bdi, dev_name(dev));
+ set_bit(BDI_registered, &bdi->state);
exit:
return ret;
Also fixes a failure to check the bdi_init() return value, and a bogus
inheritance of ->capabilities flags from the default bdi.
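The resulting setup contract, sketched (error paths condensed; setup_bdi()
and the label are from the patch below):

	/*
	 * Sketch: once bdi_init() has succeeded, bdi_destroy() is the
	 * single cleanup point, whether bdi_register() or anything
	 * later in setup_bdi() fails.
	 */
	if (setup_bdi(fs_info, &fs_info->bdi))
		goto fail_bdi;	/* ends in bdi_destroy() */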
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/btrfs/disk-io.c | 23 ++++++++++++++++++-----
1 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 4b0ea0b..2dc19c9 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1345,12 +1345,24 @@ static void btrfs_unplug_io_fn(struct backing_dev_info *bdi, struct page *page)
free_extent_map(em);
}
+/*
+ * If this fails, caller must call bdi_destroy() to get rid of the
+ * bdi again.
+ */
static int setup_bdi(struct btrfs_fs_info *info, struct backing_dev_info *bdi)
{
- bdi_init(bdi);
+ int err;
+
+ bdi->capabilities = BDI_CAP_MAP_COPY;
+ err = bdi_init(bdi);
+ if (err)
+ return err;
+
+ err = bdi_register(bdi, NULL, "btrfs");
+ if (err)
+ return err;
+
bdi->ra_pages = default_backing_dev_info.ra_pages;
- bdi->state = 0;
- bdi->capabilities = default_backing_dev_info.capabilities;
bdi->unplug_io_fn = btrfs_unplug_io_fn;
bdi->unplug_io_data = info;
bdi->congested_fn = btrfs_congested_fn;
@@ -1574,7 +1586,8 @@ struct btrfs_root *open_ctree(struct super_block *sb,
fs_info->sb = sb;
fs_info->max_extent = (u64)-1;
fs_info->max_inline = 8192 * 1024;
- setup_bdi(fs_info, &fs_info->bdi);
+ if (setup_bdi(fs_info, &fs_info->bdi))
+ goto fail_bdi;
fs_info->btree_inode = new_inode(sb);
fs_info->btree_inode->i_ino = 1;
fs_info->btree_inode->i_nlink = 1;
@@ -1931,8 +1944,8 @@ fail_iput:
btrfs_close_devices(fs_info->fs_devices);
btrfs_mapping_tree_free(&fs_info->mapping_tree);
+fail_bdi:
bdi_destroy(&fs_info->bdi);
-
fail:
kfree(extent_root);
kfree(tree_root);
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
block/Makefile | 2 +-
block/blk-ipoll.c | 160 +++++++++++++++++++++++++++++++++++++++++++++
drivers/ata/ahci.c | 53 ++++++++++++++-
include/linux/blk-ipoll.h | 38 +++++++++++
include/linux/interrupt.h | 1 +
include/linux/libata.h | 2 +
6 files changed, 252 insertions(+), 4 deletions(-)
create mode 100644 block/blk-ipoll.c
create mode 100644 include/linux/blk-ipoll.h
diff --git a/block/Makefile b/block/Makefile
index e9fa4dd..537e88a 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -5,7 +5,7 @@
obj-$(CONFIG_BLOCK) := elevator.o blk-core.o blk-tag.o blk-sysfs.o \
blk-barrier.o blk-settings.o blk-ioc.o blk-map.o \
blk-exec.o blk-merge.o blk-softirq.o blk-timeout.o \
- ioctl.o genhd.o scsi_ioctl.o cmd-filter.o
+ blk-ipoll.o ioctl.o genhd.o scsi_ioctl.o cmd-filter.o
obj-$(CONFIG_BLK_DEV_BSG) += bsg.o
obj-$(CONFIG_IOSCHED_NOOP) += noop-iosched.o
diff --git a/block/blk-ipoll.c b/block/blk-ipoll.c
new file mode 100644
index 0000000..700b74d
--- /dev/null
+++ b/block/blk-ipoll.c
@@ -0,0 +1,160 @@
+/*
+ * Functions related to interrupt-poll handling in the block layer. This
+ * is similar to NAPI for network devices.
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/bio.h>
+#include <linux/blkdev.h>
+#include <linux/interrupt.h>
+#include <linux/cpu.h>
+#include <linux/blk-ipoll.h>
+
+#include "blk.h"
+
+static DEFINE_PER_CPU(struct list_head, blk_cpu_ipoll);
+
+void blk_ipoll_sched(struct blk_ipoll *ipoll)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+ list_add_tail(&ipoll->list, &__get_cpu_var(blk_cpu_ipoll));
+ __raise_softirq_irqoff(BLOCK_IPOLL_SOFTIRQ);
+ local_irq_restore(flags);
+}
+EXPORT_SYMBOL(blk_ipoll_sched);
+
+void __blk_ipoll_complete(struct blk_ipoll *ipoll)
+{
+ list_del(&ipoll->list);
+ smp_mb__before_clear_bit();
+ clear_bit(IPOLL_F_SCHED, &ipoll->state);
+}
+
+void blk_ipoll_complete(struct blk_ipoll *ipoll)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+ __blk_ipoll_complete(ipoll);
+ local_irq_restore(flags);
+}
+
+static void blk_ipoll_softirq(struct softirq_action *h)
+{
+ struct list_head *list = &__get_cpu_var(blk_cpu_ipoll);
+ unsigned long start_time = jiffies;
+ int rearm = 0, budget = 64;
+
+ local_irq_disable();
+
+ while (!list_empty(list)) {
+ struct blk_ipoll *ipoll;
+ int work, weight;
+
+ /*
+ * If softirq window is exhausted then punt.
+ */
+ if (budget <= 0 || jiffies != start_time) {
+ rearm = 1;
+ break;
+ }
+
+ local_irq_enable();
+
+ /* Even though interrupts have been re-enabled, this
+ * access is safe because interrupts can only add new
+ * entries to the tail of this list, and only ->ipoll()
+ * calls can remove this head entry from the list.
+ */
+ ipoll = list_entry(list->next, struct blk_ipoll, list);
+
+ weight = ipoll->weight;
+ work = ipoll->ipoll(ipoll, weight);
+ budget -= work;
+
+ local_irq_disable();
+
+ /* Drivers must not modify the ipoll state if they
+ * consume the entire weight. In such cases this code
+ * still "owns" the ipoll instance and therefore can
+ * move the instance around on the list at will.
+ */
+ if (work >= weight) {
+ if (blk_ipoll_disable_pending(ipoll))
+ __blk_ipoll_complete(ipoll);
+ else
+ list_move_tail(&ipoll->list, list);
+ }
+ }
+
+ if (rearm)
+ __raise_softirq_irqoff(BLOCK_IPOLL_SOFTIRQ);
+
+ local_irq_enable();
+}
+
+void blk_ipoll_disable(struct blk_ipoll *ipoll)
+{
+ set_bit(IPOLL_F_DISABLE, &ipoll->state);
+ while (test_and_set_bit(IPOLL_F_SCHED, &ipoll->state))
+ msleep(1);
+ clear_bit(IPOLL_F_DISABLE, &ipoll->state);
+}
+EXPORT_SYMBOL(blk_ipoll_disable);
+
+void blk_ipoll_enable(struct blk_ipoll *ipoll)
+{
+ BUG_ON(!test_bit(IPOLL_F_SCHED, &ipoll->state));
+ smp_mb__before_clear_bit();
+ clear_bit(IPOLL_F_SCHED, &ipoll->state);
+}
+EXPORT_SYMBOL(blk_ipoll_enable);
+
+void blk_ipoll_init(struct blk_ipoll *ipoll, int weight, blk_ipoll_fn *poll_fn)
+{
+ memset(ipoll, 0, sizeof(*ipoll));
+ INIT_LIST_HEAD(&ipoll->list);
+ ipoll->weight = weight;
+ ipoll->ipoll = poll_fn;
+}
+EXPORT_SYMBOL(blk_ipoll_init);
+
+static int __cpuinit blk_ipoll_cpu_notify(struct notifier_block *self,
+ unsigned long action, void *hcpu)
+{
+ /*
+ * If a CPU goes away, splice its entries to the current CPU
+ * and trigger a run of the softirq
+ */
+ if (action == CPU_DEAD || action == CPU_DEAD_FROZEN) {
+ int cpu = (unsigned long) hcpu;
+
+ local_irq_disable();
+ list_splice_init(&per_cpu(blk_cpu_ipoll, cpu),
+ &__get_cpu_var(blk_cpu_ipoll));
+ raise_softirq_irqoff(BLOCK_IPOLL_SOFTIRQ);
+ local_irq_enable();
+ }
+
+ return NOTIFY_OK;
+}
+
+static struct notifier_block __cpuinitdata blk_ipoll_cpu_notifier = {
+ .notifier_call = blk_ipoll_cpu_notify,
+};
+
+static __init int blk_ipoll_setup(void)
+{
+ int i;
+
+ for_each_possible_cpu(i)
+ INIT_LIST_HEAD(&per_cpu(blk_cpu_ipoll, i));
+
+ open_softirq(BLOCK_IPOLL_SOFTIRQ, blk_ipoll_softirq);
+ register_hotcpu_notifier(&blk_ipoll_cpu_notifier);
+ return 0;
+}
+subsys_initcall(blk_ipoll_setup);
diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
index 08186ec..9701f93 100644
--- a/drivers/ata/ahci.c
+++ b/drivers/ata/ahci.c
@@ -45,6 +45,7 @@
#include <scsi/scsi_host.h>
#include <scsi/scsi_cmnd.h>
#include <linux/libata.h>
+#include <linux/blk-ipoll.h>
#define DRV_NAME "ahci"
#define DRV_VERSION "3.0"
@@ -2047,7 +2048,7 @@ static void ahci_error_intr(struct ata_port *ap, u32 irq_stat)
ata_port_abort(ap);
}
-static void ahci_port_intr(struct ata_port *ap)
+static int ahci_port_intr(struct ata_port *ap)
{
void __iomem *port_mmio = ahci_port_base(ap);
struct ata_eh_info *ehi = &ap->link.eh_info;
@@ -2077,7 +2078,7 @@ static void ahci_port_intr(struct ata_port *ap)
if (unlikely(status & PORT_IRQ_ERROR)) {
ahci_error_intr(ap, status);
- return;
+ return 0;
}
if (status & PORT_IRQ_SDB_FIS) {
@@ -2118,7 +2119,48 @@ static void ahci_port_intr(struct ata_port *ap)
ehi->err_mask |= AC_ERR_HSM;
ehi->action |= ATA_EH_RESET;
ata_port_freeze(ap);
+ rc = 0;
+ }
+
+ return rc;
+}
+
+static void ap_irq_disable(struct ata_port *ap)
+{
+ void __iomem *port_mmio = ahci_port_base(ap);
+
+ writel(0, port_mmio + PORT_IRQ_MASK);
+}
+
+static void ap_irq_enable(struct ata_port *ap)
+{
+ void __iomem *port_mmio = ahci_port_base(ap);
+ struct ahci_port_priv *pp = ap->private_data;
+
+ writel(pp->intr_mask, port_mmio + PORT_IRQ_MASK);
+}
+
+static int ahci_ipoll(struct blk_ipoll *ipoll, int budget)
+{
+ struct ata_port *ap = container_of(ipoll, struct ata_port, ipoll);
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&ap->host->lock, flags);
+ ret = ahci_port_intr(ap);
+ spin_unlock_irqrestore(&ap->host->lock, flags);
+
+ if (ret > ipoll->max) {
+ printk("new ipoll max of %d\n", ret);
+ ipoll->max = ret;
+ }
+
+ if (ret < budget) {
+ blk_ipoll_complete(ipoll);
+ ap_irq_enable(ap);
}
+
+ return ret;
}
static irqreturn_t ahci_interrupt(int irq, void *dev_instance)
@@ -2151,7 +2193,10 @@ static irqreturn_t ahci_interrupt(int irq, void *dev_instance)
ap = host->ports[i];
if (ap) {
- ahci_port_intr(ap);
+ if (blk_ipoll_sched_prep(&ap->ipoll)) {
+ ap_irq_disable(ap);
+ blk_ipoll_sched(&ap->ipoll);
+ }
VPRINTK("port %u\n", i);
} else {
VPRINTK("port %u (no irq)\n", i);
@@ -2407,6 +2452,8 @@ static int ahci_port_start(struct ata_port *ap)
ap->private_data = pp;
+ blk_ipoll_init(&ap->ipoll, 32, ahci_ipoll);
+
/* engage engines, captain */
return ahci_port_resume(ap);
}
diff --git a/include/linux/blk-ipoll.h b/include/linux/blk-ipoll.h
new file mode 100644
index 0000000..dcc638f
--- /dev/null
+++ b/include/linux/blk-ipoll.h
@@ -0,0 +1,38 @@
+#ifndef BLK_IPOLL_H
+#define BLK_IPOLL_H
+
+struct blk_ipoll;
+typedef int (blk_ipoll_fn)(struct blk_ipoll *, int);
+
+struct blk_ipoll {
+ struct list_head list;
+ unsigned long state;
+ int weight;
+ int max;
+ blk_ipoll_fn *ipoll;
+};
+
+enum {
+ IPOLL_F_SCHED = 0,
+ IPOLL_F_DISABLE = 1,
+};
+
+static inline int blk_ipoll_sched_prep(struct blk_ipoll *ipoll)
+{
+ return !test_bit(IPOLL_F_DISABLE, &ipoll->state) &&
+ !test_and_set_bit(IPOLL_F_SCHED, &ipoll->state);
+}
+
+static inline int blk_ipoll_disable_pending(struct blk_ipoll *ipoll)
+{
+ return test_bit(IPOLL_F_DISABLE, &ipoll->state);
+}
+
+extern void blk_ipoll_sched(struct blk_ipoll *);
+extern void blk_ipoll_init(struct blk_ipoll *, int, blk_ipoll_fn *);
+extern void blk_ipoll_complete(struct blk_ipoll *);
+extern void __blk_ipoll_complete(struct blk_ipoll *);
+extern void blk_ipoll_enable(struct blk_ipoll *);
+extern void blk_ipoll_disable(struct blk_ipoll *);
+
+#endif
diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 91bb76f..514cd75 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -335,6 +335,7 @@ enum
NET_TX_SOFTIRQ,
NET_RX_SOFTIRQ,
BLOCK_SOFTIRQ,
+ BLOCK_IPOLL_SOFTIRQ,
TASKLET_SOFTIRQ,
SCHED_SOFTIRQ,
HRTIMER_SOFTIRQ,
diff --git a/include/linux/libata.h b/include/linux/libata.h
index cf1e54e..9f9df5e 100644
--- a/include/linux/libata.h
+++ b/include/linux/libata.h
@@ -37,6 +37,7 @@
#include <scsi/scsi_host.h>
#include <linux/acpi.h>
#include <linux/cdrom.h>
+#include <linux/blk-ipoll.h>
/*
* Define if arch has non-standard setup. This is a _PCI_ standard
@@ -759,6 +760,7 @@ struct ata_port {
#endif
/* owned by EH */
u8 sector_buf[ATA_SECT_SIZE] ____cacheline_aligned;
+ struct blk_ipoll ipoll;
};
/* The following initializer overrides a method to NULL whether one of
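For a driver other than ahci, the conversion would look roughly like this
(a hypothetical sketch; the foo_* names are made up, the blk_ipoll calls
are the ones added above):

	/* hypothetical driver glue for the blk-ipoll API */
	static irqreturn_t foo_interrupt(int irq, void *data)
	{
		struct foo_port *port = data;

		/* mask the interrupt, hand completions off to ipoll */
		if (blk_ipoll_sched_prep(&port->ipoll)) {
			foo_irq_disable(port);
			blk_ipoll_sched(&port->ipoll);
		}
		return IRQ_HANDLED;
	}

	static int foo_ipoll(struct blk_ipoll *ipoll, int budget)
	{
		struct foo_port *port = container_of(ipoll,
						struct foo_port, ipoll);
		int done = foo_reap_completions(port, budget);

		/* under budget: done for now, unmask the interrupt */
		if (done < budget) {
			blk_ipoll_complete(ipoll);
			foo_irq_enable(port);
		}
		return done;
	}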
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
mm/backing-dev.c | 38 ++++++++++++++++++++++++++++++++++----
1 files changed, 34 insertions(+), 4 deletions(-)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 3a032be..fcc0b2a 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -43,9 +43,29 @@ static void bdi_debug_init(void)
static int bdi_debug_stats_show(struct seq_file *m, void *v)
{
struct backing_dev_info *bdi = m->private;
+ struct bdi_writeback *wb;
unsigned long background_thresh;
unsigned long dirty_thresh;
unsigned long bdi_thresh;
+ unsigned long nr_dirty, nr_io, nr_more_io, nr_wb;
+ struct inode *inode;
+
+ /*
+ * inode lock is enough here, the bdi->wb_list is protected by
+ * RCU on the reader side
+ */
+ nr_wb = nr_dirty = nr_io = nr_more_io = 0;
+ spin_lock(&inode_lock);
+ list_for_each_entry(wb, &bdi->wb_list, list) {
+ nr_wb++;
+ list_for_each_entry(inode, &wb->b_dirty, i_list)
+ nr_dirty++;
+ list_for_each_entry(inode, &wb->b_io, i_list)
+ nr_io++;
+ list_for_each_entry(inode, &wb->b_more_io, i_list)
+ nr_more_io++;
+ }
+ spin_unlock(&inode_lock);
get_dirty_limits(&background_thresh, &dirty_thresh, &bdi_thresh, bdi);
@@ -55,12 +75,22 @@ static int bdi_debug_stats_show(struct seq_file *m, void *v)
"BdiReclaimable: %8lu kB\n"
"BdiDirtyThresh: %8lu kB\n"
"DirtyThresh: %8lu kB\n"
- "BackgroundThresh: %8lu kB\n",
+ "BackgroundThresh: %8lu kB\n"
+ "WriteBack threads:%8lu\n"
+ "b_dirty: %8lu\n"
+ "b_io: %8lu\n"
+ "b_more_io: %8lu\n"
+ "bdi_list: %8u\n"
+ "state: %8lx\n"
+ "wb_mask: %8lx\n"
+ "wb_list: %8u\n"
+ "wb_cnt: %8u\n",
(unsigned long) K(bdi_stat(bdi, BDI_WRITEBACK)),
(unsigned long) K(bdi_stat(bdi, BDI_RECLAIMABLE)),
- K(bdi_thresh),
- K(dirty_thresh),
- K(background_thresh));
+ K(bdi_thresh), K(dirty_thresh),
+ K(background_thresh), nr_wb, nr_dirty, nr_io, nr_more_io,
+ !list_empty(&bdi->bdi_list), bdi->state, bdi->wb_mask,
+ !list_empty(&bdi->wb_list), bdi->wb_cnt);
#undef K
return 0;
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
drivers/scsi/scsi.c | 44 ++++++++++----------------------------------
include/scsi/scsi_cmnd.h | 12 ++++++------
2 files changed, 16 insertions(+), 40 deletions(-)
diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
index 166417a..6a993af 100644
--- a/drivers/scsi/scsi.c
+++ b/drivers/scsi/scsi.c
@@ -133,7 +133,6 @@ EXPORT_SYMBOL(scsi_device_type);
struct scsi_host_cmd_pool {
struct kmem_cache *cmd_slab;
- struct kmem_cache *sense_slab;
unsigned int users;
char *cmd_name;
char *sense_name;
@@ -167,20 +166,9 @@ static DEFINE_MUTEX(host_cmd_pool_mutex);
static struct scsi_cmnd *
scsi_pool_alloc_command(struct scsi_host_cmd_pool *pool, gfp_t gfp_mask)
{
- struct scsi_cmnd *cmd;
-
- cmd = kmem_cache_zalloc(pool->cmd_slab, gfp_mask | pool->gfp_mask);
- if (!cmd)
- return NULL;
+ gfp_t gfp = gfp_mask | pool->gfp_mask;
- cmd->sense_buffer = kmem_cache_alloc(pool->sense_slab,
- gfp_mask | pool->gfp_mask);
- if (!cmd->sense_buffer) {
- kmem_cache_free(pool->cmd_slab, cmd);
- return NULL;
- }
-
- return cmd;
+ return kmem_cache_zalloc(pool->cmd_slab, gfp);
}
/**
@@ -198,7 +186,6 @@ scsi_pool_free_command(struct scsi_host_cmd_pool *pool,
if (cmd->prot_sdb)
kmem_cache_free(scsi_sdb_cache, cmd->prot_sdb);
- kmem_cache_free(pool->sense_slab, cmd->sense_buffer);
kmem_cache_free(pool->cmd_slab, cmd);
}
@@ -242,7 +229,6 @@ scsi_host_alloc_command(struct Scsi_Host *shost, gfp_t gfp_mask)
struct scsi_cmnd *__scsi_get_command(struct Scsi_Host *shost, gfp_t gfp_mask)
{
struct scsi_cmnd *cmd;
- unsigned char *buf;
cmd = scsi_host_alloc_command(shost, gfp_mask);
@@ -257,11 +243,8 @@ struct scsi_cmnd *__scsi_get_command(struct Scsi_Host *shost, gfp_t gfp_mask)
}
spin_unlock_irqrestore(&shost->free_list_lock, flags);
- if (cmd) {
- buf = cmd->sense_buffer;
+ if (cmd)
memset(cmd, 0, sizeof(*cmd));
- cmd->sense_buffer = buf;
- }
}
return cmd;
@@ -361,19 +344,13 @@ static struct scsi_host_cmd_pool *scsi_get_host_cmd_pool(gfp_t gfp_mask)
pool = (gfp_mask & __GFP_DMA) ? &scsi_cmd_dma_pool :
&scsi_cmd_pool;
if (!pool->users) {
- pool->cmd_slab = kmem_cache_create(pool->cmd_name,
- sizeof(struct scsi_cmnd), 0,
- pool->slab_flags, NULL);
- if (!pool->cmd_slab)
- goto fail;
+ unsigned int slab_size;
- pool->sense_slab = kmem_cache_create(pool->sense_name,
- SCSI_SENSE_BUFFERSIZE, 0,
- pool->slab_flags, NULL);
- if (!pool->sense_slab) {
- kmem_cache_destroy(pool->cmd_slab);
+ slab_size = sizeof(struct scsi_cmnd) + SCSI_SENSE_BUFFERSIZE;
+ pool->cmd_slab = kmem_cache_create(pool->cmd_name, slab_size,
+ 0, pool->slab_flags, NULL);
+ if (!pool->cmd_slab)
goto fail;
- }
}
pool->users++;
@@ -397,10 +374,9 @@ static void scsi_put_host_cmd_pool(gfp_t gfp_mask)
*/
BUG_ON(pool->users == 0);
- if (!--pool->users) {
+ if (!--pool->users)
kmem_cache_destroy(pool->cmd_slab);
- kmem_cache_destroy(pool->sense_slab);
- }
+
mutex_unlock(&host_cmd_pool_mutex);
}
diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
index 43b50d3..649ad36 100644
--- a/include/scsi/scsi_cmnd.h
+++ b/include/scsi/scsi_cmnd.h
@@ -102,12 +102,6 @@ struct scsi_cmnd {
struct request *request; /* The command we are
working on */
-#define SCSI_SENSE_BUFFERSIZE 96
- unsigned char *sense_buffer;
- /* obtained by REQUEST SENSE when
- * CHECK CONDITION is received on original
- * command (auto-sense) */
-
/* Low-level done function - can be used by low-level driver to point
* to completion function. Not used by mid/upper level code. */
void (*scsi_done) (struct scsi_cmnd *);
@@ -129,6 +123,12 @@ struct scsi_cmnd {
int result; /* Status code from lower level driver */
unsigned char tag; /* SCSI-II queued command tag */
+
+#define SCSI_SENSE_BUFFERSIZE 96
+ unsigned char sense_buffer[0];
+ /* obtained by REQUEST SENSE when
+ * CHECK CONDITION is received on original
+ * command (auto-sense) */
};
extern struct scsi_cmnd *scsi_get_command(struct scsi_device *, gfp_t);
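The layout change in miniature (illustration only; sense here is a
hypothetical source buffer): with sense_buffer[0] as a trailing array, the
sense data lives right after the struct, so one slab object covers both
and the old save-and-restore of a separately allocated sense pointer goes
away.

	/* zeroing the command no longer clobbers a separate pointer */
	memset(cmd, 0, sizeof(*cmd));
	memcpy(cmd->sense_buffer, sense, SCSI_SENSE_BUFFERSIZE);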
Typically, you'd set up a cache for the full depth of the device. The
queue depth (nr_requests) defaults to 128, so by doing:
echo 128 > /sys/block/sda/queue/rq_cache
you would turn this feature on for sda. Writing "0" to the file
will turn it back off.
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
block/blk-core.c | 43 ++++++++++++++++++++++++++-
block/blk-sysfs.c | 74 ++++++++++++++++++++++++++++++++++++++++++++++++
include/linux/blkdev.h | 5 +++
3 files changed, 120 insertions(+), 2 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index c89883b..fe1eca4 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -635,17 +635,56 @@ int blk_get_queue(struct request_queue *q)
return 1;
}
+static struct request *blk_rq_cache_alloc(struct request_queue *q)
+{
+ int tag;
+
+ do {
+ if (q->rq_cache_last != -1) {
+ tag = q->rq_cache_last;
+ q->rq_cache_last = -1;
+ } else {
+ tag = find_first_zero_bit(q->rq_cache_map,
+ q->rq_cache_sz);
+ }
+ if (tag >= q->rq_cache_sz)
+ return NULL;
+ } while (test_and_set_bit_lock(tag, q->rq_cache_map));
+
+ return &q->rq_cache[tag];
+}
+
+static int blk_rq_cache_free(struct request_queue *q, struct request *rq)
+{
+ if (!q->rq_cache)
+ return 1;
+ if (rq >= &q->rq_cache[0] && rq <= &q->rq_cache[q->rq_cache_sz - 1]) {
+ unsigned long idx = rq - q->rq_cache;
+
+ clear_bit(idx, q->rq_cache_map);
+ q->rq_cache_last = idx;
+ return 0;
+ }
+
+ return 1;
+}
+
static inline void blk_free_request(struct request_queue *q, struct request *rq)
{
if (rq->cmd_flags & REQ_ELVPRIV)
elv_put_request(q, rq);
- mempool_free(rq, q->rq.rq_pool);
+ if (blk_rq_cache_free(q, rq))
+ mempool_free(rq, q->rq.rq_pool);
}
static struct request *
blk_alloc_request(struct request_queue *q, int flags, int priv, gfp_t gfp_mask)
{
- struct request *rq = mempool_alloc(q->rq.rq_pool, gfp_mask);
+ struct request *rq;
+
+ rq = blk_rq_cache_alloc(q);
+ if (!rq)
+ rq = mempool_alloc(q->rq.rq_pool, gfp_mask);
if (!rq)
return NULL;
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 3ff9bba..c2d8a71 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -218,6 +218,68 @@ static ssize_t queue_iostats_store(struct request_queue *q, const char *page,
return ret;
}
+static ssize_t queue_rq_cache_show(struct request_queue *q, char *page)
+{
+ return queue_var_show(q->rq_cache_sz, page);
+}
+
+static ssize_t
+queue_rq_cache_store(struct request_queue *q, const char *page, size_t count)
+{
+ unsigned long *rq_cache_map = NULL;
+ struct request *rq_cache = NULL;
+ unsigned long val;
+ ssize_t ret;
+
+ /*
+ * alloc cache up front
+ */
+ ret = queue_var_store(&val, page, count);
+ if (val) {
+ unsigned int map_sz;
+
+ if (val > q->nr_requests)
+ val = q->nr_requests;
+
+ rq_cache = kcalloc(val, sizeof(*rq_cache), GFP_KERNEL);
+ if (!rq_cache)
+ return -ENOMEM;
+
+ map_sz = BITS_TO_LONGS(val) * sizeof(unsigned long);
+ rq_cache_map = kzalloc(map_sz, GFP_KERNEL);
+ if (!rq_cache_map) {
+ kfree(rq_cache);
+ return -ENOMEM;
+ }
+ }
+
+ spin_lock_irq(q->queue_lock);
+ elv_quiesce_start(q);
+
+ /*
+ * free existing rqcache
+ */
+ if (q->rq_cache_sz) {
+ kfree(q->rq_cache);
+ kfree(q->rq_cache_map);
+ q->rq_cache = NULL;
+ q->rq_cache_map = NULL;
+ q->rq_cache_sz = 0;
+ }
+
+ if (val) {
+ memset(rq_cache, 0, val * sizeof(struct request));
+ q->rq_cache = rq_cache;
+ q->rq_cache_map = rq_cache_map;
+ q->rq_cache_sz = val;
+ q->rq_cache_last = -1;
+ }
+
+ elv_quiesce_end(q);
+ spin_unlock_irq(q->queue_lock);
+ return ret;
+}
+
static struct queue_sysfs_entry queue_requests_entry = {
.attr = {.name = "nr_requests", .mode = S_IRUGO | S_IWUSR },
.show = queue_requests_show,
@@ -276,6 +338,12 @@ static struct queue_sysfs_entry queue_iostats_entry = {
.store = queue_iostats_store,
};
+static struct queue_sysfs_entry queue_rqcache_entry = {
+ .attr = {.name = "rq_cache", .mode = S_IRUGO | S_IWUSR },
+ .show = queue_rq_cache_show,
+ .store = queue_rq_cache_store,
+};
+
static struct attribute *default_attrs[] = {
&queue_requests_entry.attr,
&queue_ra_entry.attr,
@@ -287,6 +355,7 @@ static struct attribute *default_attrs[] = {
&queue_nomerges_entry.attr,
&queue_rq_affinity_entry.attr,
&queue_iostats_entry.attr,
+ &queue_rqcache_entry.attr,
NULL,
};
@@ -363,6 +432,11 @@ static void blk_release_queue(struct kobject *kobj)
if (q->queue_tags)
__blk_queue_free_tags(q);
+ if (q->rq_cache) {
+ kfree(q->rq_cache);
+ kfree(q->rq_cache_map);
+ }
+
blk_trace_shutdown(q);
bdi_destroy(&q->backing_dev_info);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b4f71f1..c00f050 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -444,6 +444,11 @@ struct request_queue
struct bsg_class_device bsg_dev;
#endif
struct blk_cmd_filter cmd_filter;
+
+ struct request *rq_cache;
+ unsigned int rq_cache_sz;
+ unsigned int rq_cache_last;
+ unsigned long *rq_cache_map;
};
#define QUEUE_FLAG_CLUSTER 0 /* cluster several segments into 1 */
r b swpd free buff cache si so bi bo in cs us sy id wa
0 1 0 608848 2652 375372 0 0 0 71024 604 24 1 10 48 42
0 1 0 549644 2712 433736 0 0 0 60692 505 27 1 8 48 44
1 0 0 476928 2784 505192 0 0 4 29540 553 24 0 9 53 37
0 1 0 457972 2808 524008 0 0 0 54876 331 16 0 4 38 58
0 1 0 366128 2928 614284 0 0 4 92168 710 58 0 13 53 34
0 1 0 295092 3000 684140 0 0 0 62924 572 23 0 9 53 37
0 1 0 236592 3064 741704 0 0 4 58256 523 17 0 8 48 44
0 1 0 165608 3132 811464 0 0 0 57460 560 21 0 8 54 38
0 1 0 102952 3200 873164 0 0 4 74748 540 29 1 10 48 41
0 1 0 48604 3252 926472 0 0 0 53248 469 29 0 7 47 45
where vanilla tends to fluctuate a lot in the creation phase:
r b swpd free buff cache si so bi bo in cs us sy id wa
1 1 0 678716 5792 303380 0 0 0 74064 565 50 1 11 52 36
1 0 0 662488 5864 319396 0 0 4 352 302 329 0 2 47 51
0 1 0 599312 5924 381468 0 0 0 78164 516 55 0 9 51 40
0 1 0 519952 6008 459516 0 0 4 78156 622 56 1 11 52 37
1 1 0 436640 6092 541632 0 0 0 82244 622 54 0 11 48 41
0 1 0 436640 6092 541660 0 0 0 8 152 39 0 0 51 49
0 1 0 332224 6200 644252 0 0 4 102800 728 46 1 13 49 36
1 0 0 274492 6260 701056 0 0 4 12328 459 49 0 7 50 43
0 1 0 211220 6324 763356 0 0 0 106940 515 37 1 10 51 39
1 0 0 160412 6376 813468 0 0 0 8224 415 43 0 6 49 45
1 1 0 85980 6452 886556 0 0 4 113516 575 39 1 11 54 34
0 2 0 85968 6452 886620 0 0 0 1640 158 211 0 0 46 54
So apart from seemingly behaving better for buffered writeout, this also
allows us to have more than one bdi flusher thread writing out data, which
may be useful for NUMA-type setups.
A 10-disk btrfs test performs 26% faster with per-bdi flushing. Other
tests are pending.
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/buffer.c | 2 +-
fs/fs-writeback.c | 316 ++++++++++++++++++++++++++-----------------
fs/sync.c | 2 +-
include/linux/backing-dev.h | 30 ++++
include/linux/fs.h | 3 +-
include/linux/writeback.h | 2 +-
mm/backing-dev.c | 181 +++++++++++++++++++++++--
mm/page-writeback.c | 141 +------------------
mm/vmscan.c | 2 +-
9 files changed, 402 insertions(+), 277 deletions(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index aed2977..14f0802 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -281,7 +281,7 @@ static void free_more_memory(void)
struct zone *zone;
int nid;
- wakeup_pdflush(1024);
+ wakeup_flusher_threads(1024);
yield();
for_each_online_node(nid) {
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 1137408..7cb4d02 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -19,6 +19,8 @@
#include <linux/sched.h>
#include <linux/fs.h>
#include <linux/mm.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
#include <linux/writeback.h>
#include <linux/blkdev.h>
#include <linux/backing-dev.h>
@@ -61,10 +63,193 @@ int writeback_in_progress(struct backing_dev_info *bdi)
*/
static void writeback_release(struct backing_dev_info *bdi)
{
- BUG_ON(!writeback_in_progress(bdi));
+ WARN_ON_ONCE(!writeback_in_progress(bdi));
+ bdi->wb_arg.nr_pages = 0;
+ bdi->wb_arg.sb = NULL;
clear_bit(BDI_pdflush, &bdi->state);
}
+int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
+ long nr_pages, enum writeback_sync_modes sync_mode)
+{
+ /*
+ * This only happens the first time someone kicks this bdi, so put
+ * it out-of-line.
+ */
+ if (unlikely(!bdi->task)) {
+ bdi_add_default_flusher_task(bdi);
+ return 1;
+ }
+
+ if (writeback_acquire(bdi)) {
+ bdi->wb_arg.nr_pages = nr_pages;
+ bdi->wb_arg.sb = sb;
+ bdi->wb_arg.sync_mode = sync_mode;
+ /*
+ * make above store seen before the task is woken
+ */
+ smp_mb();
+ wake_up(&bdi->wait);
+ }
+
+ return 0;
+}
+
+/*
+ * The maximum number of pages to writeout in a single bdi flush/kupdate
+ * operation. We do this so we don't hold I_SYNC against an inode for
+ * enormous amounts of time, which would block a userspace task which has
+ * been forced to throttle against that inode. Also, the code re-evaluates
+ * the dirty state each time it has written this many pages.
+ */
+#define MAX_WRITEBACK_PAGES 1024
+
+/*
+ * Periodic writeback of "old" data.
+ *
+ * Define "old": the first time one of an inode's pages is dirtied, we mark the
+ * dirtying-time in the inode's address_space. So this periodic writeback code
+ * just walks the superblock inode list, writing back any inodes which are
+ * older than a specific point in time.
+ *
+ * Try to run once per dirty_writeback_interval. But if a writeback event
+ * takes longer than one dirty_writeback_interval, then leave a
+ * one-second gap.
+ *
+ * older_than_this takes precedence over nr_to_write. So we'll only write back
+ * all dirty pages if they are all attached to "old" mappings.
+ */
+static void bdi_kupdated(struct backing_dev_info *bdi)
+{
+ unsigned long oldest_jif;
+ long nr_to_write;
+ struct writeback_control wbc = {
+ .bdi = bdi,
+ .sync_mode = WB_SYNC_NONE,
+ .older_than_this = &oldest_jif,
+ .nr_to_write = 0,
+ .for_kupdate = 1,
+ .range_cyclic = 1,
+ };
+
+ sync_supers();
+
+ oldest_jif = jiffies - msecs_to_jiffies(dirty_expire_interval * 10);
+
+ nr_to_write = global_page_state(NR_FILE_DIRTY) +
+ global_page_state(NR_UNSTABLE_NFS) +
+ (inodes_stat.nr_inodes - inodes_stat.nr_unused);
+
+ while (nr_to_write > 0) {
+ wbc.more_io = 0;
+ wbc.encountered_congestion = 0;
+ wbc.nr_to_write = MAX_WRITEBACK_PAGES;
+ generic_sync_bdi_inodes(NULL, &wbc);
+ if (wbc.nr_to_write > 0)
+ break; /* All the old data is written */
+ nr_to_write -= MAX_WRITEBACK_PAGES;
+ }
+}
+
+static inline bool over_bground_thresh(void)
+{
+ unsigned long background_thresh, dirty_thresh;
+
+ get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
+
+ return (global_page_state(NR_FILE_DIRTY) +
+ global_page_state(NR_UNSTABLE_NFS) >= background_thresh);
+}
+
+static void bdi_pdflush(struct backing_dev_info *bdi)
+{
+ struct writeback_control wbc = {
+ .bdi = bdi,
+ .sync_mode = bdi->wb_arg.sync_mode,
+ .older_than_this = NULL,
+ .range_cyclic = 1,
+ };
+ long nr_pages = bdi->wb_arg.nr_pages;
+
+ for (;;) {
+ if (wbc.sync_mode == WB_SYNC_NONE && nr_pages <= 0 &&
+ !over_bground_thresh())
+ break;
+
+ wbc.more_io = 0;
+ wbc.encountered_congestion = 0;
+ wbc.nr_to_write = MAX_WRITEBACK_PAGES;
+ wbc.pages_skipped = 0;
+ generic_sync_bdi_inodes(bdi->wb_arg.sb, &wbc);
+ nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
+ /*
+ * If we ran out of stuff to write, bail unless more_io got set
+ */
+ if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
+ if (wbc.more_io)
+ continue;
+ break;
+ }
+ }
+}
+
+/*
+ * Handle writeback of dirty data for the device backed by this bdi. Also
+ * wakes up periodically and does kupdated style flushing.
+ */
+int bdi_writeback_task(struct backing_dev_info *bdi)
+{
+ while (!kthread_should_stop()) {
+ unsigned long wait_jiffies;
+ DEFINE_WAIT(wait);
+
+ prepare_to_wait(&bdi->wait, &wait, TASK_INTERRUPTIBLE);
+ wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
+ schedule_timeout(wait_jiffies);
+ try_to_freeze();
+
+ /*
+ * We get here in two cases:
+ *
+ * schedule_timeout() returned because the dirty writeback
+ * interval has elapsed. If that happens, we will be able
+ * to acquire the writeback lock and will proceed to do
+ * kupdated style writeout.
+ *
+ * Someone called bdi_start_writeback(), which will acquire
+ * the writeback lock. This means our writeback_acquire()
+ * below will fail and we call into bdi_pdflush() for
+ * pdflush style writeout.
+ *
+ */
+ if (writeback_acquire(bdi))
+ bdi_kupdated(bdi);
+ else
+ bdi_pdflush(bdi);
+
+ writeback_release(bdi);
+ finish_wait(&bdi->wait, &wait);
+ }
+
+ return 0;
+}
+
+void bdi_writeback_all(struct super_block *sb, long nr_pages,
+ enum writeback_sync_modes sync_mode)
+{
+ struct backing_dev_info *bdi, *tmp;
+
+ mutex_lock(&bdi_lock);
+
+ list_for_each_entry_safe(bdi, tmp, &bdi_list, bdi_list) {
+ if (!bdi_has_dirty_io(bdi))
+ continue;
+ bdi_start_writeback(bdi, sb, nr_pages, sync_mode);
+ }
+
+ mutex_unlock(&bdi_lock);
+}
+
/**
* __mark_inode_dirty - internal function
* @inode: inode to mark
@@ -263,46 +448,6 @@ static void queue_io(struct backing_dev_info *bdi,
move_expired_inodes(&bdi->b_dirty, &bdi->b_io, older_than_this);
}
-static int sb_on_inode_list(struct super_block *sb, struct list_head *list)
-{
- struct inode *inode;
- int ret = 0;
-
- spin_lock(&inode_lock);
- list_for_each_entry(inode, list, i_list) {
- if (inode->i_sb == sb) {
- ret = 1;
- break;
- }
- }
- spin_unlock(&inode_lock);
- return ret;
-}
-
-int sb_has_dirty_inodes(struct super_block *sb)
-{
- struct backing_dev_info *bdi;
- int ret = 0;
-
- /*
- * This is REALLY expensive right now, but it'll go away
- * when the bdi writeback is introduced
- */
- mutex_lock(&bdi_lock);
- list_for_each_entry(bdi, &bdi_list, bdi_list) {
- if (sb_on_inode_list(sb, &bdi->b_dirty) ||
- sb_on_inode_list(sb, &bdi->b_io) ||
- sb_on_inode_list(sb, &bdi->b_more_io)) {
- ret = 1;
- break;
- }
- }
- mutex_unlock(&bdi_lock);
-
- return ret;
-}
-EXPORT_SYMBOL(sb_has_dirty_inodes);
-
/*
* Write a single inode's dirty pages and inode data out to disk.
* If `wait' is set, wait on the writeout.
@@ -461,11 +606,11 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
return __sync_single_inode(inode, wbc);
}
-static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
- struct writeback_control *wbc,
- struct super_block *sb,
- int is_blkdev_sb)
+void generic_sync_bdi_inodes(struct super_block *sb,
+ struct writeback_control *wbc)
{
+ const int is_blkdev_sb = sb_is_blkdev_sb(sb);
+ struct backing_dev_info *bdi = wbc->bdi;
const unsigned long start = jiffies; /* livelock avoidance */
spin_lock(&inode_lock);
@@ -516,13 +661,6 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
continue; /* Skip a congested blockdev */
}
- if (wbc->bdi && bdi != wbc->bdi) {
- if (!is_blkdev_sb)
- break; /* fs has the wrong queue */
- requeue_io(inode);
- continue; /* blockdev has wrong queue */
- }
-
/*
* Was this inode dirtied after sync_sb_inodes was called?
* This keeps sync from extra jobs and livelock.
@@ -530,16 +668,10 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
if (inode_dirtied_after(inode, start))
break;
- /* Is another pdflush already flushing this queue? */
- if (current_is_pdflush() && !writeback_acquire(bdi))
- break;
-
BUG_ON(inode->i_state & I_FREEING);
__iget(inode);
pages_skipped = wbc->pages_skipped;
__writeback_single_inode(inode, wbc);
- if (current_is_pdflush())
- writeback_release(bdi);
if (wbc->pages_skipped != pages_skipped) {
/*
* writeback is not making progress due to locked
@@ -578,11 +710,6 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
* a variety of queues, so all inodes are searched. For other superblocks,
* assume that all inodes are backed by the same queue.
*
- * FIXME: this linear search could get expensive with many fileystems. But
- * how to fix? We need to go from an address_space to all inodes which share
- * a queue with that address_space. (Easy: have a global "dirty superblocks"
- * list).
- *
* The inodes to be written are parked on bdi->b_io. They are moved back onto
* bdi->b_dirty as they are selected for writing. This way, none can be missed
* on the writer throttling path, and we get decent balancing between many
@@ -591,13 +718,10 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
void generic_sync_sb_inodes(struct super_block *sb,
struct writeback_control *wbc)
{
- const int is_blkdev_sb = sb_is_blkdev_sb(sb);
- struct backing_dev_info *bdi;
-
- mutex_lock(&bdi_lock);
- list_for_each_entry(bdi, &bdi_list, bdi_list)
- generic_sync_bdi_inodes(bdi, wbc, sb, is_blkdev_sb);
- mutex_unlock(&bdi_lock);
+ if (wbc->bdi)
+ generic_sync_bdi_inodes(sb, wbc);
+ else
+ bdi_writeback_all(sb, wbc->nr_to_write, wbc->sync_mode);
if (wbc->sync_mode == WB_SYNC_ALL) {
struct inode *inode, *old_inode = NULL;
@@ -653,58 +777,6 @@ static void sync_sb_inodes(struct super_block *sb,
}
/*
- * Start writeback of dirty pagecache data against all unlocked inodes.
- *
- * Note:
- * We don't need to grab a reference to superblock here. If it has non-empty
- * ->b_dirty it's hadn't been killed yet and kill_super() won't proceed
- * past sync_inodes_sb() until the ->b_dirty/b_io/b_more_io lists are all
- * empty. Since __sync_single_inode() regains inode_lock before it finally moves
- * inode from superblock lists we are OK.
- *
- * If `older_than_this' is non-zero then only flush inodes which have a
- * flushtime older than *older_than_this.
- *
- * If `bdi' is non-zero then we will scan the first inode against each
- * superblock until we find the matching ones. One group will be the dirty
- * inodes against a filesystem. Then when we hit the dummy blockdev superblock,
- * sync_sb_inodes will seekout the blockdev which matches `bdi'. Maybe not
- * super-efficient but we're about to do a ton of I/O...
- */
-void
-writeback_inodes(struct writeback_control *wbc)
-{
- struct super_block *sb;
-
- might_sleep();
- spin_lock(&sb_lock);
-restart:
- list_for_each_entry_reverse(sb, &super_blocks, s_list) {
- if (sb_has_dirty_inodes(sb)) {
- /* we're making our own get_super here */
- sb->s_count++;
- spin_unlock(&sb_lock);
- /*
- * If we can't get the readlock, there's no sense in
- * waiting around, most of the time the FS is going to
- * be unmounted by the time it is released.
- */
- if (down_read_trylock(&sb->s_umount)) {
- if (sb->s_root)
- sync_sb_inodes(sb, wbc);
- up_read(&sb->s_umount);
- }
- spin_lock(&sb_lock);
- if (__put_super_and_need_restart(sb))
- goto restart;
- }
- if (wbc->nr_to_write <= 0)
- break;
- }
- spin_unlock(&sb_lock);
-}
-
-/*
* writeback and wait upon the filesystem's dirty inodes. The caller will
* do this in two passes - one to write, and one to wait.
*
diff --git a/fs/sync.c b/fs/sync.c
index 7abc65f..3887f10 100644
--- a/fs/sync.c
+++ b/fs/sync.c
@@ -23,7 +23,7 @@
*/
static void do_sync(unsigned long wait)
{
- wakeup_pdflush(0);
+ wakeup_flusher_threads(0);
sync_inodes(0); /* All mappings, inodes and their blockdevs */
vfs_dq_sync(NULL);
sync_supers(); /* Write the superblocks */
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 8719c87..f164925 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -13,6 +13,7 @@
#include <linux/proportions.h>
#include <linux/kernel.h>
#include <linux/fs.h>
+#include <linux/writeback.h>
#include <asm/atomic.h>
struct page;
@@ -24,6 +25,7 @@ struct dentry;
*/
enum bdi_state {
BDI_pdflush, /* A pdflush thread is working this device */
+ BDI_pending, /* On its way to being activated */
BDI_async_congested, /* The async (write) queue is getting full */
BDI_sync_congested, /* The sync queue is getting full */
BDI_unused, /* Available bits start here */
@@ -39,6 +41,12 @@ enum bdi_stat_item {
#define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
+struct bdi_writeback_arg {
+ unsigned long nr_pages;
+ struct super_block *sb;
+ enum writeback_sync_modes sync_mode;
+};
+
struct backing_dev_info {
struct list_head bdi_list;
@@ -60,6 +68,9 @@ struct backing_dev_info {
struct device *dev;
+ struct task_struct *task; /* writeback task */
+ wait_queue_head_t wait;
+ struct bdi_writeback_arg wb_arg; /* protected by BDI_pdflush */
struct list_head b_dirty; /* dirty inodes */
struct list_head b_io; /* parked for writeback */
struct list_head b_more_io; /* parked for more writeback */
@@ -77,10 +88,23 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
const char *fmt, ...);
int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
void bdi_unregister(struct backing_dev_info *bdi);
+int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
+ long nr_pages, enum writeback_sync_modes sync_mode);
+int bdi_writeback_task(struct backing_dev_info *bdi);
+void bdi_writeback_all(struct super_block *sb, long nr_pages,
+ enum writeback_sync_modes sync_mode);
+void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
extern struct mutex bdi_lock;
extern struct list_head bdi_list;
+static inline int bdi_has_dirty_io(struct backing_dev_info *bdi)
+{
+ return !list_empty(&bdi->b_dirty) ||
+ !list_empty(&bdi->b_io) ||
+ !list_empty(&bdi->b_more_io);
+}
+
static inline void __add_bdi_stat(struct backing_dev_info *bdi,
enum bdi_stat_item item, s64 amount)
{
@@ -196,6 +220,7 @@ int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned int max_ratio);
#define BDI_CAP_EXEC_MAP 0x00000040
#define BDI_CAP_NO_ACCT_WB 0x00000080
#define BDI_CAP_SWAP_BACKED 0x00000100
+#define BDI_CAP_FLUSH_FORKER 0x00000200
#define BDI_CAP_VMFLAGS \
(BDI_CAP_READ_MAP | BDI_CAP_WRITE_MAP | BDI_CAP_EXEC_MAP)
@@ -265,6 +290,11 @@ static inline bool bdi_cap_swap_backed(struct backing_dev_info *bdi)
return bdi->capabilities & BDI_CAP_SWAP_BACKED;
}
+static inline bool bdi_cap_flush_forker(struct backing_dev_info *bdi)
+{
+ return bdi->capabilities & BDI_CAP_FLUSH_FORKER;
+}
+
static inline bool mapping_cap_writeback_dirty(struct address_space *mapping)
{
return bdi_cap_writeback_dirty(mapping->backing_dev_info);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 6b475d4..ecdc544 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -2063,6 +2063,8 @@ extern int invalidate_inode_pages2_range(struct address_space *mapping,
pgoff_t start, pgoff_t end);
extern void generic_sync_sb_inodes(struct super_block *sb,
struct writeback_control *wbc);
+extern void generic_sync_bdi_inodes(struct super_block *sb,
+ struct writeback_control *);
extern int write_inode_now(struct inode *, int);
extern int filemap_fdatawrite(struct address_space *);
extern int filemap_flush(struct address_space *);
@@ -2180,7 +2182,6 @@ extern int bdev_read_only(struct block_device *);
extern int set_blocksize(struct block_device *, int);
extern int sb_set_blocksize(struct super_block *, int);
extern int sb_min_blocksize(struct super_block *, int);
-extern int sb_has_dirty_inodes(struct super_block *);
extern int generic_file_mmap(struct file *, struct vm_area_struct *);
extern int generic_file_readonly_mmap(struct file *, struct vm_area_struct *);
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 9344547..a8e9f78 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -99,7 +99,7 @@ static inline void inode_sync_wait(struct inode *inode)
/*
* mm/page-writeback.c
*/
-int wakeup_pdflush(long nr_pages);
+void wakeup_flusher_threads(long nr_pages);
void laptop_io_completion(void);
void laptop_sync_completion(void);
void throttle_vm_writeout(gfp_t gfp_mask);
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 186fdce..57c8487 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -1,8 +1,11 @@
#include <linux/wait.h>
#include <linux/backing-dev.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
#include <linux/fs.h>
#include <linux/pagemap.h>
+#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/module.h>
#include <linux/writeback.h>
@@ -16,7 +19,7 @@ EXPORT_SYMBOL(default_unplug_io_fn);
struct backing_dev_info default_backing_dev_info = {
.ra_pages = VM_MAX_READAHEAD * 1024 / PAGE_CACHE_SIZE,
.state = 0,
- .capabilities = BDI_CAP_MAP_COPY,
+ .capabilities = BDI_CAP_MAP_COPY | BDI_CAP_FLUSH_FORKER,
.unplug_io_fn = default_unplug_io_fn,
};
EXPORT_SYMBOL_GPL(default_backing_dev_info);
@@ -24,6 +27,7 @@ EXPORT_SYMBOL_GPL(default_backing_dev_info);
static struct class *bdi_class;
DEFINE_MUTEX(bdi_lock);
LIST_HEAD(bdi_list);
+LIST_HEAD(bdi_pending_list);
#ifdef CONFIG_DEBUG_FS
#include <linux/debugfs.h>
@@ -195,6 +199,127 @@ static int __init default_bdi_init(void)
}
subsys_initcall(default_bdi_init);
+static int bdi_start_fn(void *ptr)
+{
+ struct backing_dev_info *bdi = ptr;
+ struct task_struct *tsk = current;
+
+ /*
+ * Add us to the active bdi_list
+ */
+ mutex_lock(&bdi_lock);
+ list_add(&bdi->bdi_list, &bdi_list);
+ mutex_unlock(&bdi_lock);
+
+ tsk->flags |= PF_FLUSHER | PF_SWAPWRITE;
+ set_freezable();
+
+ /*
+ * Our parent may run at a different priority, just set us to normal
+ */
+ set_user_nice(tsk, 0);
+
+ /*
+ * Clear pending bit and wakeup anybody waiting to tear us down
+ */
+ clear_bit(BDI_pending, &bdi->state);
+ smp_mb__after_clear_bit();
+ wake_up_bit(&bdi->state, BDI_pending);
+
+ return bdi_writeback_task(bdi);
+}
+
+static int bdi_forker_task(void *ptr)
+{
+ struct backing_dev_info *me = ptr;
+ DEFINE_WAIT(wait);
+
+ for (;;) {
+ struct backing_dev_info *bdi, *tmp;
+
+ /*
+ * Should never trigger on the default bdi
+ */
+ WARN_ON(bdi_has_dirty_io(me));
+
+ prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
+
+ mutex_lock(&bdi_lock);
+
+ /*
+ * Check if any existing bdi's have dirty data without
+ * a thread registered. If so, set that up.
+ */
+ list_for_each_entry_safe(bdi, tmp, &bdi_list, bdi_list) {
+ if (bdi->task || !bdi_has_dirty_io(bdi))
+ continue;
+
+ bdi_add_default_flusher_task(bdi);
+ }
+
+ if (list_empty(&bdi_pending_list)) {
+ unsigned long wait;
+
+ mutex_unlock(&bdi_lock);
+ wait = msecs_to_jiffies(dirty_writeback_interval * 10);
+ schedule_timeout(wait);
+ try_to_freeze();
+ continue;
+ }
+
+ bdi = list_entry(bdi_pending_list.next, struct backing_dev_info,
+ bdi_list);
+ list_del_init(&bdi->bdi_list);
+ mutex_unlock(&bdi_lock);
+
+ BUG_ON(bdi->task);
+
+ bdi->task = kthread_run(bdi_start_fn, bdi, "bdi-%s",
+ dev_name(bdi->dev));
+ /*
+ * If task creation fails, then readd the bdi to
+ * the pending list and force writeout of the bdi
+ * from this forker thread. That will free some memory
+ * and we can try again.
+ */
+ if (!bdi->task) {
+ struct writeback_control wbc = {
+ .bdi = bdi,
+ .sync_mode = WB_SYNC_NONE,
+ .older_than_this = NULL,
+ .range_cyclic = 1,
+ };
+
+ /*
+ * Add this 'bdi' to the back, so we get
+ * a chance to flush other bdi's to free
+ * memory.
+ */
+ mutex_lock(&bdi_lock);
+ list_add_tail(&bdi->bdi_list, &bdi_pending_list);
+ mutex_unlock(&bdi_lock);
+
+ wbc.nr_to_write = 1024;
+ generic_sync_bdi_inodes(NULL, &wbc);
+ }
+ }
+
+ finish_wait(&me->wait, &wait);
+ return 0;
+}
+
+void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
+{
+ if (test_and_set_bit(BDI_pending, &bdi->state))
+ return;
+
+ mutex_lock(&bdi_lock);
+ list_move_tail(&bdi->bdi_list, &bdi_pending_list);
+ mutex_unlock(&bdi_lock);
+
+ wake_up(&default_backing_dev_info.wait);
+}
+
int bdi_register(struct backing_dev_info *bdi, struct device *parent,
const char *fmt, ...)
{
@@ -214,12 +339,29 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
}
mutex_lock(&bdi_lock);
- list_add_tail_rcu(&bdi->bdi_list, &bdi_list);
+ list_add_tail(&bdi->bdi_list, &bdi_list);
mutex_unlock(&bdi_lock);
bdi->dev = dev;
- bdi_debug_register(bdi, dev_name(dev));
+ /*
+ * Just start the forker thread for our default backing_dev_info,
+ * and add other bdi's to the list. They will get a thread created
+ * on-demand when they need it.
+ */
+ if (bdi_cap_flush_forker(bdi)) {
+ bdi->task = kthread_run(bdi_forker_task, bdi, "bdi-%s",
+ dev_name(dev));
+ if (!bdi->task) {
+ mutex_lock(&bdi_lock);
+ list_del(&bdi->bdi_list);
+ mutex_unlock(&bdi_lock);
+ ret = -ENOMEM;
+ goto exit;
+ }
+ }
+
+ bdi_debug_register(bdi, dev_name(dev));
exit:
return ret;
}
@@ -231,23 +373,34 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev)
}
EXPORT_SYMBOL(bdi_register_dev);
-static void bdi_remove_from_list(struct backing_dev_info *bdi)
+static int sched_wait(void *word)
{
- mutex_lock(&bdi_lock);
- list_del_rcu(&bdi->bdi_list);
- mutex_unlock(&bdi_lock);
+ schedule();
+ return 0;
+}
+static void bdi_wb_shutdown(struct backing_dev_info *bdi)
+{
/*
- * In case the bdi is freed right after unregister, we need to
- * make sure any RCU sections have exited
+ * If setup is pending, wait for that to complete first
*/
- synchronize_rcu();
+ wait_on_bit(&bdi->state, BDI_pending, sched_wait, TASK_UNINTERRUPTIBLE);
+
+ mutex_lock(&bdi_lock);
+ list_del(&bdi->bdi_list);
+ mutex_unlock(&bdi_lock);
}
void bdi_unregister(struct backing_dev_info *bdi)
{
if (bdi->dev) {
- bdi_remove_from_list(bdi);
+ if (!bdi_cap_flush_forker(bdi)) {
+ bdi_wb_shutdown(bdi);
+ if (bdi->task) {
+ kthread_stop(bdi->task);
+ bdi->task = NULL;
+ }
+ }
bdi_debug_unregister(bdi);
device_unregister(bdi->dev);
bdi->dev = NULL;
@@ -257,14 +410,14 @@ EXPORT_SYMBOL(bdi_unregister);
int bdi_init(struct backing_dev_info *bdi)
{
- int i;
- int err;
+ int i, err;
bdi->dev = NULL;
bdi->min_ratio = 0;
bdi->max_ratio = 100;
bdi->max_prop_frac = PROP_FRAC_BASE;
+ init_waitqueue_head(&bdi->wait);
INIT_LIST_HEAD(&bdi->bdi_list);
INIT_LIST_HEAD(&bdi->b_io);
INIT_LIST_HEAD(&bdi->b_dirty);
@@ -283,8 +436,6 @@ int bdi_init(struct backing_dev_info *bdi)
err:
while (i--)
percpu_counter_destroy(&bdi->bdi_stat[i]);
-
- bdi_remove_from_list(bdi);
}
return err;
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 7c44314..54a4a65 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -36,15 +36,6 @@
#include <linux/pagevec.h>
/*
- * The maximum number of pages to writeout in a single bdflush/kupdate
- * operation. We do this so we don't hold I_SYNC against an inode for
- * enormous amounts of time, which would block a userspace task which has
- * been forced to throttle against that inode. Also, the code reevaluates
- * the dirty each time it has written this many pages.
- */
-#define MAX_WRITEBACK_PAGES 1024
-
-/*
* After a CPU has dirtied this many pages, balance_dirty_pages_ratelimited
* will look to see if it needs to force writeback or throttling.
*/
@@ -117,8 +108,6 @@ EXPORT_SYMBOL(laptop_mode);
/* End of sysctl-exported parameters */
-static void background_writeout(unsigned long _min_pages);
-
/*
* Scale the writeback cache size proportional to the relative writeout speeds.
*
@@ -539,7 +528,7 @@ static void balance_dirty_pages(struct address_space *mapping)
* been flushed to permanent storage.
*/
if (bdi_nr_reclaimable) {
- writeback_inodes(&wbc);
+ generic_sync_bdi_inodes(NULL, &wbc);
pages_written += write_chunk - wbc.nr_to_write;
get_dirty_limits(&background_thresh, &dirty_thresh,
&bdi_thresh, bdi);
@@ -590,7 +579,7 @@ static void balance_dirty_pages(struct address_space *mapping)
(!laptop_mode && (global_page_state(NR_FILE_DIRTY)
+ global_page_state(NR_UNSTABLE_NFS)
> background_thresh)))
- pdflush_operation(background_writeout, 0);
+ bdi_start_writeback(bdi, NULL, 0, WB_SYNC_NONE);
}
void set_page_dirty_balance(struct page *page, int page_mkwrite)
@@ -675,152 +664,36 @@ void throttle_vm_writeout(gfp_t gfp_mask)
}
/*
- * writeback at least _min_pages, and keep writing until the amount of dirty
- * memory is less than the background threshold, or until we're all clean.
- */
-static void background_writeout(unsigned long _min_pages)
-{
- long min_pages = _min_pages;
- struct writeback_control wbc = {
- .bdi = NULL,
- .sync_mode = WB_SYNC_NONE,
- .older_than_this = NULL,
- .nr_to_write = 0,
- .nonblocking = 1,
- .range_cyclic = 1,
- };
-
- for ( ; ; ) {
- unsigned long background_thresh;
- unsigned long dirty_thresh;
-
- get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
- if (global_page_state(NR_FILE_DIRTY) +
- global_page_state(NR_UNSTABLE_NFS) < background_thresh
- && min_pages <= 0)
- break;
- wbc.more_io = 0;
- wbc.encountered_congestion = 0;
- wbc.nr_to_write = MAX_WRITEBACK_PAGES;
- wbc.pages_skipped = 0;
- writeback_inodes(&wbc);
- min_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
- if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
- /* Wrote less than expected */
- if (wbc.encountered_congestion || wbc.more_io)
- congestion_wait(WRITE, HZ/10);
- else
- break;
- }
- }
-}
-
-/*
* Start writeback of `nr_pages' pages. If `nr_pages' is zero, write back
* the whole world. Returns 0 if a pdflush thread was dispatched. Returns
* -1 if all pdflush threads were busy.
*/
-int wakeup_pdflush(long nr_pages)
+void wakeup_flusher_threads(long nr_pages)
{
if (nr_pages == 0)
nr_pages = global_page_state(NR_FILE_DIRTY) +
global_page_state(NR_UNSTABLE_NFS);
- return pdflush_operation(background_writeout, nr_pages);
+ bdi_writeback_all(NULL, nr_pages, WB_SYNC_NONE);
+ return;
}
-static void wb_timer_fn(unsigned long unused);
static void laptop_timer_fn(unsigned long unused);
-static DEFINE_TIMER(wb_timer, wb_timer_fn, 0, 0);
static DEFINE_TIMER(laptop_mode_wb_timer, laptop_timer_fn, 0, 0);
/*
- * Periodic writeback of "old" data.
- *
- * Define "old": the first time one of an inode's pages is dirtied, we mark the
- * dirtying-time in the inode's address_space. So this periodic writeback code
- * just walks the superblock inode list, writing back any inodes which are
- * older than a specific point in time.
- *
- * Try to run once per dirty_writeback_interval. But if a writeback event
- * takes longer than a dirty_writeback_interval interval, then leave a
- * one-second gap.
- *
- * older_than_this takes precedence over nr_to_write. So we'll only write back
- * all dirty pages if they are all attached to "old" mappings.
- */
-static void wb_kupdate(unsigned long arg)
-{
- unsigned long oldest_jif;
- unsigned long start_jif;
- unsigned long next_jif;
- long nr_to_write;
- struct writeback_control wbc = {
- .bdi = NULL,
- .sync_mode = WB_SYNC_NONE,
- .older_than_this = &oldest_jif,
- .nr_to_write = 0,
- .nonblocking = 1,
- .for_kupdate = 1,
- .range_cyclic = 1,
- };
-
- sync_supers();
-
- oldest_jif = jiffies - msecs_to_jiffies(dirty_expire_interval * 10);
- start_jif = jiffies;
- next_jif = start_jif + msecs_to_jiffies(dirty_writeback_interval * 10);
- nr_to_write = global_page_state(NR_FILE_DIRTY) +
- global_page_state(NR_UNSTABLE_NFS) +
- (inodes_stat.nr_inodes - inodes_stat.nr_unused);
- while (nr_to_write > 0) {
- wbc.more_io = 0;
- wbc.encountered_congestion = 0;
- wbc.nr_to_write = MAX_WRITEBACK_PAGES;
- writeback_inodes(&wbc);
- if (wbc.nr_to_write > 0) {
- if (wbc.encountered_congestion || wbc.more_io)
- congestion_wait(WRITE, HZ/10);
- else
- break; /* All the old data is written */
- }
- nr_to_write -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
- }
- if (time_before(next_jif, jiffies + HZ))
- next_jif = jiffies + HZ;
- if (dirty_writeback_interval)
- mod_timer(&wb_timer, next_jif);
-}
-
-/*
* sysctl handler for /proc/sys/vm/dirty_writeback_centisecs
*/
int dirty_writeback_centisecs_handler(ctl_table *table, int write,
struct file *file, void __user *buffer, size_t *length, loff_t *ppos)
{
proc_dointvec(table, write, file, buffer, length, ppos);
- if (dirty_writeback_interval)
- mod_timer(&wb_timer, jiffies +
- msecs_to_jiffies(dirty_writeback_interval * 10));
- else
- del_timer(&wb_timer);
return 0;
}
-static void wb_timer_fn(unsigned long unused)
-{
- if (pdflush_operation(wb_kupdate, 0) < 0)
- mod_timer(&wb_timer, jiffies + HZ); /* delay 1 second */
-}
-
-static void laptop_flush(unsigned long unused)
-{
- sys_sync();
-}
-
static void laptop_timer_fn(unsigned long unused)
{
- pdflush_operation(laptop_flush, 0);
+ wakeup_flusher_threads(0);
}
/*
@@ -903,8 +776,6 @@ void __init page_writeback_init(void)
{
int shift;
- mod_timer(&wb_timer,
- jiffies + msecs_to_jiffies(dirty_writeback_interval * 10));
writeback_set_ratelimit();
register_cpu_notifier(&ratelimit_nb);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5fa3eda..e37fd38 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1654,7 +1654,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
*/
if (total_scanned > sc->swap_cluster_max +
sc->swap_cluster_max / 2) {
- wakeup_pdflush(laptop_mode ? 0 : total_scanned);
+ wakeup_flusher_threads(laptop_mode ? 0 : total_scanned);
sc->may_writepage = 1;
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/aio.c | 151 +++++++++++++++++++++++++++++++++------------------
include/linux/aio.h | 11 ++--
2 files changed, 103 insertions(+), 59 deletions(-)
diff --git a/fs/aio.c b/fs/aio.c
index 76da125..98c82f2 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -79,9 +79,8 @@ static int __init aio_setup(void)
return 0;
}
-static void aio_free_ring(struct kioctx *ctx)
+static void __aio_free_ring(struct kioctx *ctx, struct aio_ring_info *info)
{
- struct aio_ring_info *info = &ctx->ring_info;
long i;
for (i=0; i<info->nr_pages; i++)
@@ -99,16 +98,28 @@ static void aio_free_ring(struct kioctx *ctx)
info->nr = 0;
}
-static int aio_setup_ring(struct kioctx *ctx)
+static void aio_free_ring(struct kioctx *ctx)
+{
+ unsigned int i;
+
+ for_each_possible_cpu(i) {
+ struct aio_ring_info *info = per_cpu_ptr(ctx->ring_info, i);
+
+ __aio_free_ring(ctx, info);
+ }
+ free_percpu(ctx->ring_info);
+ ctx->ring_info = NULL;
+}
+
+static int __aio_setup_ring(struct kioctx *ctx, struct aio_ring_info *info)
{
struct aio_ring *ring;
- struct aio_ring_info *info = &ctx->ring_info;
unsigned nr_events = ctx->max_reqs;
unsigned long size;
int nr_pages;
- /* Compensate for the ring buffer's head/tail overlap entry */
- nr_events += 2; /* 1 is required, 2 for good luck */
+ /* round nr_event to next power of 2 */
+ nr_events = roundup_pow_of_two(nr_events);
size = sizeof(struct aio_ring);
size += sizeof(struct io_event) * nr_events;
@@ -117,8 +128,6 @@ static int aio_setup_ring(struct kioctx *ctx)
if (nr_pages < 0)
return -EINVAL;
- nr_events = (PAGE_SIZE * nr_pages - sizeof(struct aio_ring)) / sizeof(struct io_event);
-
info->nr = 0;
info->ring_pages = info->internal_pages;
if (nr_pages > AIO_RING_PAGES) {
@@ -158,7 +167,8 @@ static int aio_setup_ring(struct kioctx *ctx)
ring = kmap_atomic(info->ring_pages[0], KM_USER0);
ring->nr = nr_events; /* user copy */
ring->id = ctx->user_id;
- ring->head = ring->tail = 0;
+ atomic_set(&ring->head, 0);
+ ring->tail = 0;
ring->magic = AIO_RING_MAGIC;
ring->compat_features = AIO_RING_COMPAT_FEATURES;
ring->incompat_features = AIO_RING_INCOMPAT_FEATURES;
@@ -168,6 +178,27 @@ static int aio_setup_ring(struct kioctx *ctx)
return 0;
}
+static int aio_setup_ring(struct kioctx *ctx)
+{
+ unsigned int i;
+ int ret;
+
+ ctx->ring_info = alloc_percpu(struct aio_ring_info);
+ if (!ctx->ring_info)
+ return -ENOMEM;
+
+ ret = 0;
+ for_each_possible_cpu(i) {
+ struct aio_ring_info *info = per_cpu_ptr(ctx->ring_info, i);
+ int err;
+
+ err = __aio_setup_ring(ctx, info);
+ if (err && !ret)
+ ret = err;
+ }
+
+ return ret;
+}
/* aio_ring_event: returns a pointer to the event at the given index from
* kmap_atomic(, km). Release the pointer with put_aio_ring_event();
@@ -176,8 +207,8 @@ static int aio_setup_ring(struct kioctx *ctx)
#define AIO_EVENTS_FIRST_PAGE ((PAGE_SIZE - sizeof(struct aio_ring)) / sizeof(struct io_event))
#define AIO_EVENTS_OFFSET (AIO_EVENTS_PER_PAGE - AIO_EVENTS_FIRST_PAGE)
-#define aio_ring_event(info, nr, km) ({ \
- unsigned pos = (nr) + AIO_EVENTS_OFFSET; \
+#define aio_ring_event(info, __nr, km) ({ \
+ unsigned pos = ((__nr) & ((info)->nr - 1)) + AIO_EVENTS_OFFSET; \
struct io_event *__event; \
__event = kmap_atomic( \
(info)->ring_pages[pos / AIO_EVENTS_PER_PAGE], km); \
@@ -262,7 +293,6 @@ static struct kioctx *ioctx_alloc(unsigned nr_events)
atomic_set(&ctx->users, 1);
spin_lock_init(&ctx->ctx_lock);
- spin_lock_init(&ctx->ring_info.ring_lock);
init_waitqueue_head(&ctx->wait);
INIT_LIST_HEAD(&ctx->active_reqs);
@@ -426,6 +456,7 @@ void exit_aio(struct mm_struct *mm)
static struct kiocb *__aio_get_req(struct kioctx *ctx)
{
struct kiocb *req = NULL;
+ struct aio_ring_info *info;
struct aio_ring *ring;
int okay = 0;
@@ -448,15 +479,18 @@ static struct kiocb *__aio_get_req(struct kioctx *ctx)
/* Check if the completion queue has enough free space to
* accept an event from this io.
*/
- spin_lock_irq(&ctx->ctx_lock);
- ring = kmap_atomic(ctx->ring_info.ring_pages[0], KM_USER0);
- if (ctx->reqs_active < aio_ring_avail(&ctx->ring_info, ring)) {
+ local_irq_disable();
+ info = per_cpu_ptr(ctx->ring_info, smp_processor_id());
+ ring = kmap_atomic(info->ring_pages[0], KM_IRQ0);
+ if (ctx->reqs_active < aio_ring_avail(info, ring)) {
+ spin_lock(&ctx->ctx_lock);
list_add(&req->ki_list, &ctx->active_reqs);
ctx->reqs_active++;
+ spin_unlock(&ctx->ctx_lock);
okay = 1;
}
- kunmap_atomic(ring, KM_USER0);
- spin_unlock_irq(&ctx->ctx_lock);
+ kunmap_atomic(ring, KM_IRQ0);
+ local_irq_enable();
if (!okay) {
kmem_cache_free(kiocb_cachep, req);
@@ -578,9 +612,11 @@ int aio_put_req(struct kiocb *req)
{
struct kioctx *ctx = req->ki_ctx;
int ret;
+
spin_lock_irq(&ctx->ctx_lock);
ret = __aio_put_req(ctx, req);
spin_unlock_irq(&ctx->ctx_lock);
+
return ret;
}
@@ -954,7 +990,7 @@ int aio_complete(struct kiocb *iocb, long res, long res2)
struct aio_ring *ring;
struct io_event *event;
unsigned long flags;
- unsigned long tail;
+ unsigned tail;
int ret;
/*
@@ -972,15 +1008,14 @@ int aio_complete(struct kiocb *iocb, long res, long res2)
return 1;
}
- info = &ctx->ring_info;
-
/* add a completion event to the ring buffer.
* must be done holding ctx->ctx_lock to prevent
* other code from messing with the tail
* pointer since we might be called from irq
* context.
*/
- spin_lock_irqsave(&ctx->ctx_lock, flags);
+ local_irq_save(flags);
+ info = per_cpu_ptr(ctx->ring_info, smp_processor_id());
if (iocb->ki_run_list.prev && !list_empty(&iocb->ki_run_list))
list_del_init(&iocb->ki_run_list);
@@ -996,8 +1031,6 @@ int aio_complete(struct kiocb *iocb, long res, long res2)
tail = info->tail;
event = aio_ring_event(info, tail, KM_IRQ0);
- if (++tail >= info->nr)
- tail = 0;
event->obj = (u64)(unsigned long)iocb->ki_obj.user;
event->data = iocb->ki_user_data;
@@ -1013,13 +1046,14 @@ int aio_complete(struct kiocb *iocb, long res, long res2)
*/
smp_wmb(); /* make event visible before updating tail */
+ tail++;
info->tail = tail;
ring->tail = tail;
put_aio_ring_event(event, KM_IRQ0);
kunmap_atomic(ring, KM_IRQ1);
- pr_debug("added to ring %p at [%lu]\n", iocb, tail);
+ pr_debug("added to ring %p at [%u]\n", iocb, tail);
/*
* Check if the user asked us to deliver the result through an
@@ -1031,7 +1065,9 @@ int aio_complete(struct kiocb *iocb, long res, long res2)
put_rq:
/* everything turned out well, dispose of the aiocb. */
+ spin_lock(&ctx->ctx_lock);
ret = __aio_put_req(ctx, iocb);
+ spin_unlock(&ctx->ctx_lock);
/*
* We have to order our ring_info tail store above and test
@@ -1044,49 +1080,58 @@ put_rq:
if (waitqueue_active(&ctx->wait))
wake_up(&ctx->wait);
- spin_unlock_irqrestore(&ctx->ctx_lock, flags);
+ local_irq_restore(flags);
+ return ret;
+}
+
+static int __aio_read_evt(struct aio_ring_info *info, struct aio_ring *ring,
+ struct io_event *ent)
+{
+ struct io_event *evp;
+ unsigned head;
+ int ret = 0;
+
+ do {
+ head = atomic_read(&ring->head);
+ if (head == ring->tail)
+ break;
+ evp = aio_ring_event(info, head, KM_USER1);
+ *ent = *evp;
+ smp_mb(); /* finish reading the event before updating the head */
+ ++ret;
+ put_aio_ring_event(evp, KM_USER1);
+ } while (head != atomic_cmpxchg(&ring->head, head, head + 1));
+
return ret;
}
/* aio_read_evt
* Pull an event off of the ioctx's event ring. Returns the number of
* events fetched (0 or 1 ;-)
- * FIXME: make this use cmpxchg.
- * TODO: make the ringbuffer user mmap()able (requires FIXME).
+ * TODO: make the ringbuffer user mmap()able
*/
static int aio_read_evt(struct kioctx *ioctx, struct io_event *ent)
{
- struct aio_ring_info *info = &ioctx->ring_info;
- struct aio_ring *ring;
- unsigned long head;
- int ret = 0;
+ int i, ret = 0;
- ring = kmap_atomic(info->ring_pages[0], KM_USER0);
- dprintk("in aio_read_evt h%lu t%lu m%lu\n",
- (unsigned long)ring->head, (unsigned long)ring->tail,
- (unsigned long)ring->nr);
+ for_each_possible_cpu(i) {
+ struct aio_ring_info *info;
+ struct aio_ring *ring;
- if (ring->head == ring->tail)
- goto out;
+ info = per_cpu_ptr(ioctx->ring_info, i);
+ ring = kmap_atomic(info->ring_pages[0], KM_USER0);
+ dprintk("in aio_read_evt h%u t%u m%u\n",
+ atomic_read(&ring->head), ring->tail, ring->nr);
- spin_lock(&info->ring_lock);
-
- head = ring->head % info->nr;
- if (head != ring->tail) {
- struct io_event *evp = aio_ring_event(info, head, KM_USER1);
- *ent = *evp;
- head = (head + 1) % info->nr;
- smp_mb(); /* finish reading the event before updatng the head */
- ring->head = head;
- ret = 1;
- put_aio_ring_event(evp, KM_USER1);
+ ret = __aio_read_evt(info, ring, ent);
+ kunmap_atomic(ring, KM_USER0);
+ if (ret)
+ break;
}
- spin_unlock(&info->ring_lock);
-out:
- kunmap_atomic(ring, KM_USER0);
- dprintk("leaving aio_read_evt: %d h%lu t%lu\n", ret,
- (unsigned long)ring->head, (unsigned long)ring->tail);
+ dprintk("leaving aio_read_evt: %d h%u t%u\n", ret,
+ atomic_read(&ring->head), ring->tail);
+
return ret;
}
diff --git a/include/linux/aio.h b/include/linux/aio.h
index b16a957..9a7acb4 100644
--- a/include/linux/aio.h
+++ b/include/linux/aio.h
@@ -149,7 +149,7 @@ struct kiocb {
struct aio_ring {
unsigned id; /* kernel internal index number */
unsigned nr; /* number of io_events */
- unsigned head;
+ atomic_t head;
unsigned tail;
unsigned magic;
@@ -157,11 +157,11 @@ struct aio_ring {
unsigned incompat_features;
unsigned header_length; /* size of aio_ring */
-
- struct io_event io_events[0];
+ struct io_event io_events[0];
}; /* 128 bytes + ring size */
-#define aio_ring_avail(info, ring) (((ring)->head + (info)->nr - 1 - (ring)->tail) % (info)->nr)
+#define aio_ring_avail(info, ring) \
+ ((info)->nr + (unsigned) atomic_read(&(ring)->head) - (ring)->tail)
#define AIO_RING_PAGES 8
struct aio_ring_info {
@@ -169,7 +169,6 @@ struct aio_ring_info {
unsigned long mmap_size;
struct page **ring_pages;
- spinlock_t ring_lock;
long nr_pages;
unsigned nr, tail;
@@ -197,7 +196,7 @@ struct kioctx {
/* sys_io_setup currently limits this to an unsigned int */
unsigned max_reqs;
- struct aio_ring_info ring_info;
+ struct aio_ring_info *ring_info;
struct delayed_work wq;
yeah, as I later posted, this wasn't meant to be sent out as part of
the writeback series :-)
> But that patch looks good to me, avoiding one allocation for each
> command and simplifying the code. I've tried to remember why these were
> two slabs to start with but can't find any reason.
>
> Btw, we might just want to declare the sense buffer directly as a sized
> array in the scsi command as there really doesn't seem to be a reason
> not to allocate it.
That is also a workable solution. I've been trying to cut down on the
number of allocations required per-IO, and there's definitely still some
low hanging fruit there. Some of it is already included, like the inline
io_vecs in the bio.
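To make that concrete, the inline-vec trick amounts to embedding a small
array in the containing structure so the common case needs only one
allocation. A minimal sketch (the names, and the count of 4, are made up
for illustration -- this is not the actual bio layout):

struct foo_io {
	unsigned short	vcnt;		/* number of vecs in use */
	struct iovec	*vecs;		/* points at inline_vecs for small IO */
	struct iovec	inline_vecs[4];	/* small IO needs no second allocation */
};

static struct foo_io *foo_io_alloc(unsigned short nr_vecs, gfp_t gfp)
{
	struct foo_io *io = kmalloc(sizeof(*io), gfp);

	if (!io)
		return NULL;
	io->vcnt = nr_vecs;
	if (nr_vecs <= 4) {
		io->vecs = io->inline_vecs;	/* one allocation total */
	} else {
		io->vecs = kmalloc(nr_vecs * sizeof(struct iovec), gfp);
		if (!io->vecs) {
			kfree(io);
			return NULL;
		}
	}
	return io;
}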
--
Jens Axboe
Might help to send it to linux-scsi to get people to review and apply it
:)
But that patch looks good to me, avoiding one allocation for each
command and simplifying the code. I've tried to remember why these were
two slabs to start with but can't find any reason.
Btw, we might just want to declare the sense buffer directly as a sized
array in the scsi command as there really doesn't seem to be a reason
not to allocate it.
--
Btw, one thing I wanted to do for years is to add ->alloc_cmnd and
->destroy_cmnd methods to the host template which optionally move the
command allocation to the LLDD. That way we can embed the scsi_cmnd
into the driver's per-command structure and eliminate another memory
allocation. Also this would naturally extend the keep-one-cmnd-pool
scheme to drivers without requiring additional code. As a second step it
would also allow killing the scsi_host_cmd_pool by just having
a set of library routines that drivers which need SLAB_CACHE_DMA can
use.
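As a rough sketch, such optional hooks might look like this (hypothetical
names; mydrv_cmd_cache is an assumed driver-owned kmem_cache, and none of
this is an existing API):

/* two new, optional methods in the host template */
struct scsi_cmnd *(*alloc_cmnd)(struct Scsi_Host *host, gfp_t gfp);
void (*destroy_cmnd)(struct Scsi_Host *host, struct scsi_cmnd *cmd);

/* in the LLDD: scsi_cmnd embedded in the per-command structure */
struct mydrv_cmd {
	struct scsi_cmnd	cmd;	/* embedded, no separate allocation */
	u32			tag;	/* driver-private state lives alongside */
};

static struct scsi_cmnd *mydrv_alloc_cmnd(struct Scsi_Host *host, gfp_t gfp)
{
	struct mydrv_cmd *mc = kmem_cache_alloc(mydrv_cmd_cache, gfp);

	return mc ? &mc->cmd : NULL;
}

static void mydrv_destroy_cmnd(struct Scsi_Host *host, struct scsi_cmnd *cmd)
{
	kmem_cache_free(mydrv_cmd_cache,
			container_of(cmd, struct mydrv_cmd, cmd));
}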
That's a good idea and could kill one more alloc/free per IO. I'll add
that to the mix!
And in case anyone is interested, the patches that got mixed up with the
writeback patches are from the 'ssd' branch. It's basically a mix of
experimental patches for improving performance. Some are crap, some are
worth continuing with. There's been a steady influx of patches from
there to mainline, so it's a continually changing branch. Well, not so
much lately, since I've spent most of the time in the writeback branch.
--
Jens Axboe
yanmin
--
Goodness, thanks for retesting. Can you share some performance
comparisons and what hw/storage you are running it on?
The v5/v6 posting includes these fixes, so it should work fine now.
--
Jens Axboe
Interesting. I wonder how this affects the SLAB vs. SLUB regression
people are seeing on high end machines in OLTP benchmarks.
Pekka
INFO: task fio:6566 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
fio D ffff8800280a9300 4976 6566 6564
ffff88022f8c0de0 0000000000000086 ffff8800b584fcb0 000000000000000a
0000000000000002 ffff88022df0c560 ffff88022df0c8e8 000000010000daea
ffffe200027457d8 0000000000000246 000000c10000000d 0000000000000313
Call Trace:
[<ffffffff802b6897>] ? bdi_sched_wait+0x0/0xd
[<ffffffff807254f6>] ? schedule+0x9/0x1d
[<ffffffff802b68a0>] ? bdi_sched_wait+0x9/0xd
[<ffffffff80725aa5>] ? __wait_on_bit+0x40/0x6f
[<ffffffff802b6897>] ? bdi_sched_wait+0x0/0xd
[<ffffffff80725b40>] ? out_of_line_wait_on_bit+0x6c/0x78
[<ffffffff8024a42e>] ? wake_bit_function+0x0/0x23
[<ffffffff802b62a4>] ? bdi_queue_writeback+0x7a/0xe6
[<ffffffff802b6461>] ? bdi_start_writeback+0x63/0x6c
[<ffffffff8027a3a9>] ? balance_dirty_pages_ratelimited_nr+0x2a9/0x2b8
[<ffffffff80274c90>] ? generic_file_buffered_write+0x1d8/0x2b2
[<ffffffff80275230>] ? __generic_file_aio_write_nolock+0x33b/0x3a5
[<ffffffff802866ab>] ? handle_mm_fault+0x2e5/0x6f3
[<ffffffff80275498>] ? generic_file_aio_write+0x61/0xc1
[<ffffffff80315efe>] ? ext3_file_write+0x16/0x94
[<ffffffff8029d8c2>] ? do_sync_write+0xc9/0x10c
[<ffffffff8024a400>] ? autoremove_wake_function+0x0/0x2e
[<ffffffff8024c8f6>] ? __hrtimer_start_range_ns+0x101/0x114
[<ffffffff8029dfcf>] ? vfs_write+0xad/0x136
[<ffffffff8029e513>] ? sys_write+0x45/0x6e
[<ffffffff8020b9ab>] ? system_call_fastpath+0x16/0x1b
I didn't run into it with the 3 new patches and am not sure if it's resolved.
yanmin
That's the wake_up_bit() race that was fixed with one of the 3 new
patches, so v5/6 should be good here too.
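For anyone following along, the idiom in question is that the side clearing
the bit must issue a barrier before the wakeup, and the sleeper must
re-check the bit inside the bit-wait primitive. A sketch mirroring the
hunks posted earlier, not the exact fix:

/* waker: clear the bit, then make the clear visible before waking */
clear_bit(BDI_pending, &bdi->state);
smp_mb__after_clear_bit();
wake_up_bit(&bdi->state, BDI_pending);

/* sleeper: the bit is re-tested inside the wait primitive */
wait_on_bit(&bdi->state, BDI_pending, sched_wait, TASK_UNINTERRUPTIBLE);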
--
Jens Axboe
> Thanks, I'll get this reproduced and fixed. Can you post the results
> you got comparing writeback and vanilla meanwhile?
I didn't post the results because some test cases benefit from the patches
while others are hurt by them. Sometimes a case benefits from the patches
on one machine but is hurt on another machine.
As a matter of fact, I tested the patches on 4 machines. One machine which
triggered the bug has only 1 disk. The other 3 machines have 1 JBOD per machine.
1) machine lkp-st02 (stoakley): has a fibre channel JBOD with 13 SCSI disks. Every
disk has 1 partition (ext3 filesystem). Memory is 8GB.
2) machine lkp-st01: has a SAS JBOD with 7 SAS disks. Every disk has 2
partitions. 8GB memory.
3) Machine lkp-ne02 (nehalem): has a SATA JBOD with 11 disks. Every disk has
2 partitions. 6GB memory.
The HBA cards connecting to the JBODs either have no RAID capability,
or they have it but I don't turn RAID on.

Mount ext3 with option '-o data=writeback'.
Below results focus on the 3 machines who have JBOD.
I use iozone/tiobench/fio/ffsb for this testing. With iozone/tiobench, I always
use one disk on all machines. But with fio/ffsb, which have lots of subtest cases,
I use all disks of the JBOD connecting to the corresponding machine.
The comparison is between 2.6.30-rc6 and 2.6.30-rc6+V4_patches, or plus
3 new patches (starting with 0001~0003).
1) iozone: 500MB iozone testing shows no difference in results. But 1.2GB testing has
about a 40% regression on rewrite with the 3 new patches (001~003). Without the 3 new
patches, the regression is more than 90%. Write has a similar regression, but it
disappears with the 3 new patches.
2) tiobench: result variation is considered fluctuation.
3) fio: consists of more than 30 sub test cases, including sync/aio/mmap,
plus the combination with block size (less4k/4k/64k, sometimes 128k) and random.
As for write testing, mostly one thread per partition.
Mostly, fio_mmap_randwrite(randrw)_4k_preread has a 5%~30% improvement. But with
the 3 new patches, the improvement becomes smaller, for example dropping from 30% to 14%.
fio_mmap_randwrite has a 5%~10% regression on lkp-st01 and lkp-ne02 (both machines'
JBODs have 2 partitions per disk), but a 2%~15% improvement on lkp-st02 (one partition
per disk). fio_mmap_randrw has similar behavior.
fio_mmap_randwrite_4k_halfbusy (uses 4 disks and a lighter workload than other fio cases)
has about a 20%~30% improvement.
fio sync read has about a 15%~30% regression on lkp-st01, but the regression disappears
with the 3 new patches. The other machines don't have the issue.
aio has no regression.
4) ffsb:
ffsb_create (blocksize 4k, 64k) has a 10%~20% improvement on lkp-st01 and
lkp-ne02, but not on lkp-st02.
The data of other ffsb test cases looks suspicious, so I need to double-check it, or
tune parameters to rerun.
Yanmin
Honza
Jens Hi.
I'm "TO:" this to Tomo.
This is the way it used to be for a long time. It was only recently changed by
Tomo because of a bug on non-cache-coherent arches that need to DMA into the
sense_buffer while, on the other hand, the CPU modifies other scsi_cmnd members.
In my opinion all you need is an __aligned(SMP_CACHE_BYTES) declaration at
sense_buffer[] and let there be a hole at the end before the array. But Tomo
did not like that, so he separated the two.
Ideally there should be a MACRO that is defined to WORD_SIZE on cache-coherent
ARCHs and to SMP_CACHE_BYTES on non-cache-coherent systems and use that size
at the __align() attribute. (So only stupid ARCHES get hurt)
(see below)
You might as well just define the sense array as unsigned char sense_buffer[SCSI_SENSE_BUFFERSIZE]
and save the manual calculation.
+ unsigned char sense_buffer[SCSI_SENSE_BUFFERSIZE] __aligned(CACHE_COHERENT_BYTES);
> + /* obtained by REQUEST SENSE when
> + * CHECK CONDITION is received on original
> + * command (auto-sense) */
> };
>
> extern struct scsi_cmnd *scsi_get_command(struct scsi_device *, gfp_t);
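Spelled out, the suggestion is roughly the following (a sketch only;
CACHE_COHERENT_BYTES and the config symbol are invented names, not existing
kernel symbols):

#ifdef CONFIG_ARCH_CACHE_COHERENT		/* invented symbol */
#define CACHE_COHERENT_BYTES	sizeof(long)	/* word alignment is enough */
#else
#define CACHE_COHERENT_BYTES	SMP_CACHE_BYTES	/* buffer must own its cachelines */
#endif

struct scsi_cmnd {
	/* ... existing members ... */
	unsigned char sense_buffer[SCSI_SENSE_BUFFERSIZE]
			__attribute__((aligned(CACHE_COHERENT_BYTES)));
};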
Thanks
Boaz
I posted an answer to that here:
http://www.spinics.net/lists/kernel/msg889604.html
It was done for non-cache-coherent systems that need to DMA into sense_buffer.
>>> Btw, we might just want to declare the sense buffer directly as a sized
>>> array in the scsi command as there really doesn't seem to be a reason
>>> not to allocate it.
>> That is also a workable solution. I've been trying to cut down on the
>> number of allocations required per-IO, and there's definitely still some
>> low hanging fruit there. Some of it is already included, like the inline
>> io_vecs in the bio.
>
> Btw, one thing I wanted to do for years is to add ->alloc_cmnd and
> ->destroy_cmnd methods to the host template which optionally move the
> command allocation to the LLDD. That way we can embed the scsi_cmnd
> into the driver's per-command structure and eliminate another memory
> allocation.
It is nice in theory, but when trying to implement it I encountered some
problems.
1. If we have a machine with a few types of hosts active, each with its own
cmnd slab, we end up with many more slabs than today, even though in the
end they all happen to be of the same size. (With the pool reserves it
can get big, too.)
2. Some considerations are system-wide and system-dependent (like the above
problem) and should be centralized into one place, so if/when things
change they can be changed in one place.
2.1. Don't trust driver writers to do the right thing.
3. There are common needs that cut across drivers, and no code should be duplicated.
For example Bidi-Commands, use of scsi_ptr, ISA_DMA, ... and so on.
I totally agree with the need and robustness this will give...
So I think we might approach this in a slightly different way.
Hosts specify a size_of_private_command at the host template, which might include
the common scsi_cmnd + sense_buffer + private_cmnd + optional scsi_ptr +
bidi_data_buffer + ...
scsi_ml has a base-two-sized set of slabs that get allocated on first use
(at host registration), and hosts get to share the pools with the same size.
[Alternatively, hosts just keep a reserved-commands list and regular use gets
kmalloced.]
All handling is centralized, with special needs specified at the host template
like dma_mask, ISA flags and such.
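In code, the shared size-indexed pools could look roughly like this (a
sketch only; the names are invented and error handling is trimmed):

static struct kmem_cache *scsi_cmnd_pools[20];	/* one per power-of-two size */
static DEFINE_MUTEX(cmnd_pool_mutex);

static struct kmem_cache *scsi_get_cmnd_pool(size_t size_of_private_command)
{
	size_t size = roundup_pow_of_two(sizeof(struct scsi_cmnd) +
					 size_of_private_command);
	int idx = ilog2(size);
	struct kmem_cache *pool;

	mutex_lock(&cmnd_pool_mutex);
	pool = scsi_cmnd_pools[idx];
	if (!pool) {
		char *name = kasprintf(GFP_KERNEL, "scsi_cmnd-%zu", size);

		pool = kmem_cache_create(name, size, 0, 0, NULL);
		scsi_cmnd_pools[idx] = pool;	/* same-size hosts share it */
	}
	mutex_unlock(&cmnd_pool_mutex);
	return pool;
}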
> Also this would naturally extend the keep-one-cmnd-pool
> scheme to drivers without requiring additional code. As a second step it
> would also allow killing the scsi_host_cmd_pool by just having
> a set of library routines that drivers which need SLAB_CACHE_DMA can
> use.
>
I'm afraid this will need to be done first. Lay out the new facilities
and implement today's lowest denominator on top of that. Then convert
driver by driver. Finally remove the old cruft.
Let's all agree on a rough sketch and we can all get behind it. There are
a few people I know that will help: Matthew Wilcox, me, perhaps Jens
and Christoph.
This will also finally help Andi Kleen's needs with the masked allocators
Boaz
Note that this should be optional. Devices not having their own
per-command structure would continue using the global pools. Those
that have their own per-command structures already have their own pools
anyway.
> Hosts specify a size_of_private_command at the host template, which might include
> the common-scsi_cmnd + sense_buffer + private_cmnd + optional scsi_ptr +
> bidi_data_buffer + ...
That sounds fine, too.
The multiple pools of the same size "issue" can also easily be resolved
by having SCSI provide a way to setup/destroy these pools. Then it can
just reuse an existing pool, if it has the same size.
However, I doubt that this is really a real life issue that's worth
worrying about.
--
Jens Axboe
It could improve it. I think these (bios, requests, commands etc) allocations
are what SLUB has trouble with in that workload, so eliminating one of them
should help it. I guess it will help the other allocators as well, but maybe
with a smaller relative improvement?
I've been testing this from your git version which builds as
2.6.30-rc6-00057-g81eabcf.
Unfortunately it's not doing too well.
When building a kernel with 'make -j 8' on my AMDX2 64bit, the screen
repeatedly locked up for several minutes at a time, and my music player
also froze.
In total the full kernel build took over 80 minutes; normally it's only
about 15.
However, the machine seems to have recovered correctly, and now everything
is back to normal.
Maybe it does need the congestion handling after all?
regards
Richard
Weird, perhaps you hit an unlucky revision. I only use the git branch
for development, and it's continually rebased to collect and split
patches and fixes. So I don't generally recommend using that, just the
posted patches. I build -j8 or larger kernels with the writeback patches
all the time, and haven't seen any issues. That's on a core 2 quad. Just
for kicks, can you send me your .config?
I'll post a new revision tomorrow, if you could try that I'd appreciate
it!
> Maybe it does need the congestion handling after all?
No it does not; by the very nature of the bdi threads being blocking,
congestion is not relevant.
--
Jens Axboe
this seems to come up repeatedly -- I had a proposal a _long_ time ago
that never quite got merged, cf http://lwn.net/Articles/2265/ and
http://lwn.net/Articles/2269/ -- from 2002 (!?). The idea is to go a
step further and create a __dma_buffer annotation for structure members.
Maybe I should resurrect that work one more time?
- R.
> > Ideally there should be a MACRO that is defined to WORD_SIZE on cache-coherent
> > ARCHs and to SMP_CACHE_BYTES on non-cache-coherent systems and use that size
> > at the __align() attribute. (So only stupid ARCHES get hurt)
>
> this seems to come up repeatedly -- I had a proposal a _long_ time ago
> that never quite got merged, cf http://lwn.net/Articles/2265/ and
> http://lwn.net/Articles/2269/ -- from 2002 (!?). The idea is to go a
Yeah, I think Benjamin did that last time:
http://www.mail-archive.com/linux...@vger.kernel.org/msg12632.html
IIRC, James didn't like it so I wrote the current code. I didn't see
any big performance difference with scsi_debug:
http://marc.info/?l=linux-scsi&m=120038907123706&w=2
Jens, do you see a performance difference due to this unification?
Personally, I don't fancy a __cached_alignment__ annotation much. I
prefer to leave it behind a memory allocator.
> step further and create a __dma_buffer annotation for structure members.
> On Mon, May 25, 2009 at 09:46:47AM +0200, Jens Axboe wrote:
> > > But that patch looks good to me, avoiding one allocation for each
> > > command and simplifying the code. I've tried to remember why these were
> > > two slabs to start with but can't find any reason.
> > >
> > > Btw, we might just want to declare the sense buffer directly as a sized
> > > array in the scsi command as there really doesn't seem to be a reason
> > > not to allocate it.
> >
> > That is also a workable solution. I've been trying to cut down on the
> > number of allocations required per-IO, and there's definitely still some
> > low hanging fruit there. Some of it is already included, like the inline
> > io_vecs in the bio.
>
> Btw, one thing I wanted to do for years is to add ->alloc_cmnd and
> ->destroy_cmnd methods to the host template which optionally move the
> command allocation to the LLDD. That way we can embed the scsi_cmnd
> into the driver's per-command structure and eliminate another memory
> allocation. Also this would naturally extend the keep-one-cmnd-pool
> scheme to drivers without requiring additional code. As a second step it
> would also allow killing the scsi_host_cmd_pool by just having
> a set of library routines that drivers which need SLAB_CACHE_DMA can
> use.
We discussed this idea when I rewrote the sense allocation code, I
think.
I like the idea of unifying scsi_cmnd and llds' per-command
structures, however there is one tricky thing about it.
Currently, a lld frees (or reuses) its per-command structure when it
calls scsi_done(). SCSI-ml uses the scsi_cmnd after that, so we need to
change the lifetime management (so we need to inspect all the llds;
e.g. this change will break the iscsi lld).
With that change, we can't tell llds how many per-command structures are
possibly necessary. In general, LLDs want to know the maximum number
of per-command structures; drivers allocate a number of per-command
structures equal to host_template->can_queue.
Oops, as you said, this can be optional (so we don't need to convert
all llds). But as I said, this changes the definition of when a
scsi_cmnd is free, and llds don't like that change, I think.
> On 05/25/2009 10:30 AM, Jens Axboe wrote:
> > Fold the sense buffer into the command, thereby eliminating a slab
> > allocation and free per command.
> >
> > Signed-off-by: Jens Axboe <jens....@oracle.com>
>
> Jens Hi.
>
> I'm "TO:" this to Tomo.
>
> This is the way it used to be for a long time. It was only recently changed by
> Tomo because of a bug on non-cache-coherent arches that need to DMA into the
> sense_buffer while, on the other hand, the CPU modifies other scsi_cmnd members.
>
> In my opinion all you need is an __aligned(SMP_CACHE_BYTES) declaration at
> sense_buffer[] and let there be a hole at the end before the array. But Tomo
> did not like that, so he separated the two.
IIRC, it was not my opinion :) I don't think that putting
CACHE_ALIGNMENT here is a good idea though.
If this separated sense buffer allocation actually hurts the
performance, then I prefer the ->alloc_cmnd and ->destroy_cmnd hook
idea. Then most llds are happy with the current sense buffer
scheme and some can use the ->alloc_cmnd and ->destroy_cmnd hooks for
better performance.
Yes, it's definitely a worthwhile optimization. The problem isn't as
such this specific allocation, it's the total number of allocations we
do for a piece of IO. This sense buffer one is just one of many, I'm
continually working to reduce them. If we get rid of this one and add
the ->alloc_cmd() stuff, we can kill one more. The bio path already lost
one. So in the IO stack, we went from 6 allocations to 3 for a piece of
IO. And then it starts to add up. Even at just 30-50k iops, that's more
than 1% of time in the testing I did.
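As a back-of-the-envelope check (assuming, purely for illustration,
something like 100 nsec for a slab alloc+free pair): 50,000 IOPS * 3 saved
allocations * 100 nsec comes to 15 msec of CPU per second, i.e. roughly
1.5% of one core -- the same ballpark as the number above.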
--
Jens Axboe
I see, thanks. Hmm, possibly slab becomes slower. ;)
Then I think that we need something like the ->alloc_cmd()
method. Let's ask James.
I don't think that it's just about simply adding the hook; there are
some issues that we need to think about. Though Boaz worries a bit
too much, I think.
I'm not sure about this patch if we add ->alloc_cmd(). I doubt that
any llds that don't use ->alloc_cmd() worry about the overhead of
the separate sense buffer allocation. If a lld doesn't define its own
alloc_cmd, then I think it's fine to use the generic command
allocator that does the separate sense buffer allocation.
I think we should do the two things separately. If we can safely inline
the sense buffer in the command by doing the right alignment, then let's
do that. The ->alloc_cmd() approach will be easier to do with an inline
sense buffer.
But there's really no reason to tie the two things together.
--
Jens Axboe
James rejected this in the past. Let's wait for his verdict.
Yeah, we can inline the sense buffer but as we discussed in the past
several times, there are some good reasons that we should not do so, I
think.
> But there's really no reason to tie the two things together.
--
BTW, alignment alone is not enough (Boaz didn't point it out, I
think). You need alignment and a hole after the buffer:
http://lkml.org/lkml/2007/12/20/661
I think that this is one of the good reasons that we should not inline
the sense buffer. We would enlarge scsi_cmnd a lot.
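For reference, "a hole after the buffer" means padding things out so that
no other member shares the cacheline the buffer ends in. Roughly (a
sketch; the padding member is illustrative):

struct scsi_cmnd {
	/* ... other members ... */
	unsigned char sense_buffer[SCSI_SENSE_BUFFERSIZE]
			____cacheline_aligned;
	/* hole: keep the next member off sense_buffer's last cacheline */
	unsigned char __pad[ALIGN(SCSI_SENSE_BUFFERSIZE, SMP_CACHE_BYTES) -
			    SCSI_SENSE_BUFFERSIZE];
	int result;
};

That trailing pad (plus the alignment) is exactly the scsi_cmnd size
growth being objected to.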
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/fs-writeback.c | 52 ++++++++++++++++++++++++++++++++++--------
include/linux/backing-dev.h | 5 ++++
include/linux/writeback.h | 2 +-
3 files changed, 48 insertions(+), 11 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index e4f96b9..c61c797 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -301,10 +301,10 @@ void bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
* older_than_this takes precedence over nr_to_write. So we'll only write back
* all dirty pages if they are all attached to "old" mappings.
*/
-static void wb_kupdated(struct bdi_writeback *wb)
+static long wb_kupdated(struct bdi_writeback *wb)
{
unsigned long oldest_jif;
- long nr_to_write;
+ long nr_to_write, wrote = 0;
struct writeback_control wbc = {
.bdi = wb->bdi,
.sync_mode = WB_SYNC_NONE,
@@ -325,10 +325,13 @@ static void wb_kupdated(struct bdi_writeback *wb)
wbc.encountered_congestion = 0;
wbc.nr_to_write = MAX_WRITEBACK_PAGES;
generic_sync_wb_inodes(wb, NULL, &wbc);
+ wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;
if (wbc.nr_to_write > 0)
break; /* All the old data is written */
nr_to_write -= MAX_WRITEBACK_PAGES;
}
+
+ return wrote;
}
static inline bool over_bground_thresh(void)
@@ -341,7 +344,7 @@ static inline bool over_bground_thresh(void)
global_page_state(NR_UNSTABLE_NFS) >= background_thresh);
}
-static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
+static long __wb_writeback(struct bdi_writeback *wb, long nr_pages,
struct super_block *sb,
enum writeback_sync_modes sync_mode)
{
@@ -351,6 +354,7 @@ static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
.older_than_this = NULL,
.range_cyclic = 1,
};
+ long wrote = 0;
for (;;) {
if (sync_mode == WB_SYNC_NONE && nr_pages <= 0 &&
@@ -363,6 +367,7 @@ static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
wbc.pages_skipped = 0;
generic_sync_wb_inodes(wb, sb, &wbc);
nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
+ wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;
/*
* If we ran out of stuff to write, bail unless more_io got set
*/
@@ -372,6 +377,8 @@ static void __wb_writeback(struct bdi_writeback *wb, long nr_pages,
break;
}
}
+
+ return wrote;
}
/*
@@ -400,10 +407,11 @@ static struct bdi_work *get_next_work_item(struct backing_dev_info *bdi,
/*
* Retrieve work items and do the writeback they describe
*/
-static void wb_writeback(struct bdi_writeback *wb)
+static long wb_writeback(struct bdi_writeback *wb)
{
struct backing_dev_info *bdi = wb->bdi;
struct bdi_work *work;
+ long wrote = 0;
while ((work = get_next_work_item(bdi, wb)) != NULL) {
struct super_block *sb = bdi_work_sb(work);
@@ -417,7 +425,7 @@ static void wb_writeback(struct bdi_writeback *wb)
if (sync_mode == WB_SYNC_NONE)
wb_clear_pending(wb, work);
- __wb_writeback(wb, nr_pages, sb, sync_mode);
+ wrote += __wb_writeback(wb, nr_pages, sb, sync_mode);
/*
* This is a data integrity writeback, so only do the
@@ -426,14 +434,18 @@ static void wb_writeback(struct bdi_writeback *wb)
if (sync_mode == WB_SYNC_ALL)
wb_clear_pending(wb, work);
}
+
+ return wrote;
}
/*
* This will be inlined in bdi_writeback_task() once we get rid of any
* dirty inodes on the default_backing_dev_info
*/
-void wb_do_writeback(struct bdi_writeback *wb)
+long wb_do_writeback(struct bdi_writeback *wb)
{
+ long wrote;
+
/*
* We get here in two cases:
*
@@ -445,9 +457,11 @@ void wb_do_writeback(struct bdi_writeback *wb)
* items on the work_list. Process those.
*/
if (list_empty(&wb->bdi->work_list))
- wb_kupdated(wb);
+ wrote = wb_kupdated(wb);
else
- wb_writeback(wb);
+ wrote = wb_writeback(wb);
+
+ return wrote;
}
/*
@@ -456,12 +470,30 @@ void wb_do_writeback(struct bdi_writeback *wb)
*/
int bdi_writeback_task(struct bdi_writeback *wb)
{
+ unsigned long last_active = jiffies;
+ unsigned long wait_jiffies = -1UL;
+ long pages_written;
DEFINE_WAIT(wait);
while (!kthread_should_stop()) {
- unsigned long wait_jiffies;
- wb_do_writeback(wb);
+ pages_written = wb_do_writeback(wb);
+
+ if (pages_written)
+ last_active = jiffies;
+ else if (wait_jiffies != -1UL) {
+ unsigned long max_idle;
+
+ /*
+ * Longest period of inactivity that we tolerate. If we
+ * see dirty data again later, the task will get
+ * recreated automatically.
+ */
+ max_idle = max(5UL * 60 * HZ, wait_jiffies);
+ if (time_after(jiffies, max_idle + last_active) &&
+ wb_is_default_task(wb))
+ break;
+ }
prepare_to_wait(&wb->wait, &wait, TASK_INTERRUPTIBLE);
wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 0559cf8..9523df3 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -113,6 +113,11 @@ int bdi_has_dirty_io(struct backing_dev_info *bdi);
extern struct mutex bdi_lock;
extern struct list_head bdi_list;
+static inline int wb_is_default_task(struct bdi_writeback *wb)
+{
+ return wb == &wb->bdi->wb;
+}
+
static inline int bdi_wblist_needs_lock(struct backing_dev_info *bdi)
{
return test_bit(BDI_wblist_lock, &bdi->state);
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index e414702..30e318b 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -69,7 +69,7 @@ void writeback_inodes(struct writeback_control *wbc);
int inode_wait(void *);
void sync_inodes_sb(struct super_block *, int wait);
void sync_inodes(int wait);
-void wb_do_writeback(struct bdi_writeback *wb);
+long wb_do_writeback(struct bdi_writeback *wb);
/* writeback.h requires fs.h; it, too, is not included from here. */
static inline void wait_on_inode(struct inode *inode)
--
1.6.3.rc0.1.gf800
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/fs-writeback.c | 2 +-
include/linux/writeback.h | 1 +
mm/backing-dev.c | 8 ++++++--
3 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 8e0902e..e4f96b9 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -432,7 +432,7 @@ static void wb_writeback(struct bdi_writeback *wb)
* This will be inlined in bdi_writeback_task() once we get rid of any
* dirty inodes on the default_backing_dev_info
*/
-static void wb_do_writeback(struct bdi_writeback *wb)
+void wb_do_writeback(struct bdi_writeback *wb)
{
/*
* We get here in two cases:
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index baf04a9..e414702 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -69,6 +69,7 @@ void writeback_inodes(struct writeback_control *wbc);
int inode_wait(void *);
void sync_inodes_sb(struct super_block *, int wait);
void sync_inodes(int wait);
+void wb_do_writeback(struct bdi_writeback *wb);
/* writeback.h requires fs.h; it, too, is not included from here. */
static inline void wait_on_inode(struct inode *inode)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 57e44e3..977c171 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -396,8 +396,8 @@ static int bdi_forker_task(void *ptr)
* Temporary measure, we want to make sure we don't see
* dirty data on the default backing_dev_info
*/
- if (wb_has_dirty_io(me))
- bdi_flush_io(me->bdi);
+ if (wb_has_dirty_io(me) || !list_empty(&me->bdi->work_list))
+ wb_do_writeback(me);
prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
@@ -424,6 +424,10 @@ static int bdi_forker_task(void *ptr)
continue;
}
+ /*
+ * This is our real job - check for pending entries in
+ * bdi_pending_list, and create the tasks that got added
+ */
bdi = list_entry(bdi_pending_list.next, struct backing_dev_info,
bdi_list);
list_del_init(&bdi->bdi_list);
Acked-by: Anton Altaparmakov <ai...@cam.ac.uk>
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/ntfs/super.c | 33 +++------------------------------
1 files changed, 3 insertions(+), 30 deletions(-)
diff --git a/fs/ntfs/super.c b/fs/ntfs/super.c
index f76951d..3fc03bd 100644
--- a/fs/ntfs/super.c
+++ b/fs/ntfs/super.c
@@ -2373,39 +2373,12 @@ static void ntfs_put_super(struct super_block *sb)
vol->mftmirr_ino = NULL;
}
/*
- * If any dirty inodes are left, throw away all mft data page cache
- * pages to allow a clean umount. This should never happen any more
- * due to mft.c::ntfs_mft_writepage() cleaning all the dirty pages as
- * the underlying mft records are written out and cleaned. If it does,
- * happen anyway, we want to know...
+ * We should have no dirty inodes left, due to
+ * mft.c::ntfs_mft_writepage() cleaning all the dirty pages as
+ * the underlying mft records are written out and cleaned.
*/
ntfs_commit_inode(vol->mft_ino);
write_inode_now(vol->mft_ino, 1);
- if (sb_has_dirty_inodes(sb)) {
- const char *s1, *s2;
-
- mutex_lock(&vol->mft_ino->i_mutex);
- truncate_inode_pages(vol->mft_ino->i_mapping, 0);
- mutex_unlock(&vol->mft_ino->i_mutex);
- write_inode_now(vol->mft_ino, 1);
- if (sb_has_dirty_inodes(sb)) {
- static const char *_s1 = "inodes";
- static const char *_s2 = "";
- s1 = _s1;
- s2 = _s2;
- } else {
- static const char *_s1 = "mft pages";
- static const char *_s2 = "They have been thrown "
- "away. ";
- s1 = _s1;
- s2 = _s2;
- }
- ntfs_error(sb, "Dirty %s found at umount time. %sYou should "
- "run chkdsk. Please email "
- "linux-n...@lists.sourceforge.net and say "
- "that you saw this message. Thank you.", s1,
- s2);
- }
#endif /* NTFS_RW */
iput(vol->mft_ino);
Signed-off-by: Jens Axboe <jens....@oracle.com>
---
fs/fs-writeback.c | 145 ++++++++++++++++++++++++++----------------
include/linux/backing-dev.h | 40 +++++++-----
mm/backing-dev.c | 128 ++++++++++++++++++++++++++++++--------
3 files changed, 215 insertions(+), 98 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 7a558a6..e72db8b 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -46,9 +46,11 @@ int nr_pdflush_threads;
* unless they implement their own. Which is somewhat inefficient, as this
* may prevent concurrent writeback against multiple devices.
*/
-static int writeback_acquire(struct backing_dev_info *bdi)
+static int writeback_acquire(struct bdi_writeback *wb)
{
- return !test_and_set_bit(BDI_pdflush, &bdi->state);
+ struct backing_dev_info *bdi = wb->bdi;
+
+ return !test_and_set_bit(wb->nr, &bdi->wb_active);
}
/**
@@ -59,19 +61,40 @@ static int writeback_acquire(struct backing_dev_info *bdi)
*/
int writeback_in_progress(struct backing_dev_info *bdi)
{
- return test_bit(BDI_pdflush, &bdi->state);
+ return bdi->wb_active != 0;
}
/**
* writeback_release - relinquish exclusive writeback access against a device.
* @bdi: the device's backing_dev_info structure
*/
-static void writeback_release(struct backing_dev_info *bdi)
+static void writeback_release(struct bdi_writeback *wb)
{
- WARN_ON_ONCE(!writeback_in_progress(bdi));
- bdi->wb_arg.nr_pages = 0;
- bdi->wb_arg.sb = NULL;
- clear_bit(BDI_pdflush, &bdi->state);
+ struct backing_dev_info *bdi = wb->bdi;
+
+ wb->nr_pages = 0;
+ wb->sb = NULL;
+ clear_bit(wb->nr, &bdi->wb_active);
+}
+
+static void wb_start_writeback(struct bdi_writeback *wb, struct super_block *sb,
+ long nr_pages,
+ enum writeback_sync_modes sync_mode)
+{
+ if (!wb_has_dirty_io(wb))
+ return;
+
+ if (writeback_acquire(wb)) {
+ wb->nr_pages = nr_pages;
+ wb->sb = sb;
+ wb->sync_mode = sync_mode;
+
+ /*
+ * make above store seen before the task is woken
+ */
+ smp_mb();
+ wake_up(&wb->wait);
+ }
}
int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
@@ -81,22 +104,12 @@ int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
* This only happens the first time someone kicks this bdi, so put
* it out-of-line.
*/
- if (unlikely(!bdi->task)) {
+ if (unlikely(!bdi->wb.task)) {
bdi_add_default_flusher_task(bdi);
return 1;
}
- if (writeback_acquire(bdi)) {
- bdi->wb_arg.nr_pages = nr_pages;
- bdi->wb_arg.sb = sb;
- bdi->wb_arg.sync_mode = sync_mode;
- /*
- * make above store seen before the task is woken
- */
- smp_mb();
- wake_up(&bdi->wait);
- }
-
+ wb_start_writeback(&bdi->wb, sb, nr_pages, sync_mode);
return 0;
}
@@ -124,12 +137,12 @@ int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
* older_than_this takes precedence over nr_to_write. So we'll only write back
* all dirty pages if they are all attached to "old" mappings.
*/
-static void bdi_kupdated(struct backing_dev_info *bdi)
+static void wb_kupdated(struct bdi_writeback *wb)
{
unsigned long oldest_jif;
long nr_to_write;
struct writeback_control wbc = {
- .bdi = bdi,
+ .bdi = wb->bdi,
.sync_mode = WB_SYNC_NONE,
.older_than_this = &oldest_jif,
.nr_to_write = 0,
@@ -164,15 +177,19 @@ static inline bool over_bground_thresh(void)
global_page_state(NR_UNSTABLE_NFS) >= background_thresh);
}
-static void bdi_pdflush(struct backing_dev_info *bdi)
+static void generic_sync_wb_inodes(struct bdi_writeback *wb,
+ struct super_block *sb,
+ struct writeback_control *wbc);
+
+static void wb_writeback(struct bdi_writeback *wb)
{
struct writeback_control wbc = {
- .bdi = bdi,
- .sync_mode = bdi->wb_arg.sync_mode,
+ .bdi = wb->bdi,
+ .sync_mode = wb->sync_mode,
.older_than_this = NULL,
.range_cyclic = 1,
};
- long nr_pages = bdi->wb_arg.nr_pages;
+ long nr_pages = wb->nr_pages;
for (;;) {
if (wbc.sync_mode == WB_SYNC_NONE && nr_pages <= 0 &&
@@ -183,7 +200,7 @@ static void bdi_pdflush(struct backing_dev_info *bdi)
wbc.encountered_congestion = 0;
wbc.nr_to_write = MAX_WRITEBACK_PAGES;
wbc.pages_skipped = 0;
- generic_sync_bdi_inodes(bdi->wb_arg.sb, &wbc);
+ generic_sync_wb_inodes(wb, wb->sb, &wbc);
nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
/*
* If we ran out of stuff to write, bail unless more_io got set
@@ -200,13 +217,13 @@ static void bdi_pdflush(struct backing_dev_info *bdi)
* Handle writeback of dirty data for the device backed by this bdi. Also
* wakes up periodically and does kupdated style flushing.
*/
-int bdi_writeback_task(struct backing_dev_info *bdi)
+int bdi_writeback_task(struct bdi_writeback *wb)
{
while (!kthread_should_stop()) {
unsigned long wait_jiffies;
DEFINE_WAIT(wait);
- prepare_to_wait(&bdi->wait, &wait, TASK_INTERRUPTIBLE);
+ prepare_to_wait(&wb->wait, &wait, TASK_INTERRUPTIBLE);
wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
schedule_timeout(wait_jiffies);
try_to_freeze();
@@ -225,13 +242,13 @@ int bdi_writeback_task(struct backing_dev_info *bdi)
* pdflush style writeout.
*
*/
- if (writeback_acquire(bdi))
- bdi_kupdated(bdi);
+ if (writeback_acquire(wb))
+ wb_kupdated(wb);
else
- bdi_pdflush(bdi);
+ wb_writeback(wb);
- writeback_release(bdi);
- finish_wait(&bdi->wait, &wait);
+ writeback_release(wb);
+ finish_wait(&wb->wait, &wait);
}
return 0;
@@ -253,6 +270,14 @@ void bdi_writeback_all(struct super_block *sb, long nr_pages,
mutex_unlock(&bdi_lock);
}
+/*
+ * We have only a single wb per bdi, so just return that.
+ */
+static inline struct bdi_writeback *inode_get_wb(struct inode *inode)
+{
+ return &inode_to_bdi(inode)->wb;
+}
+
/**
* __mark_inode_dirty - internal function
* @inode: inode to mark
@@ -351,9 +376,10 @@ void __mark_inode_dirty(struct inode *inode, int flags)
* reposition it (that would break b_dirty time-ordering).
*/
if (!was_dirty) {
+ struct bdi_writeback *wb = inode_get_wb(inode);
+
inode->dirtied_when = jiffies;
- list_move(&inode->i_list,
- &inode_to_bdi(inode)->b_dirty);
+ list_move(&inode->i_list, &wb->b_dirty);
}
}
out:
@@ -380,16 +406,16 @@ static int write_inode(struct inode *inode, int sync)
*/
static void redirty_tail(struct inode *inode)
{
- struct backing_dev_info *bdi = inode_to_bdi(inode);
+ struct bdi_writeback *wb = inode_get_wb(inode);
- if (!list_empty(&bdi->b_dirty)) {
+ if (!list_empty(&wb->b_dirty)) {
struct inode *tail;
- tail = list_entry(bdi->b_dirty.next, struct inode, i_list);
+ tail = list_entry(wb->b_dirty.next, struct inode, i_list);
if (time_before(inode->dirtied_when, tail->dirtied_when))
inode->dirtied_when = jiffies;
}
- list_move(&inode->i_list, &bdi->b_dirty);
+ list_move(&inode->i_list, &wb->b_dirty);
}
/*
@@ -397,7 +423,9 @@ static void redirty_tail(struct inode *inode)
*/
static void requeue_io(struct inode *inode)
{
- list_move(&inode->i_list, &inode_to_bdi(inode)->b_more_io);
+ struct bdi_writeback *wb = inode_get_wb(inode);
+
+ list_move(&inode->i_list, &wb->b_more_io);
}
static void inode_sync_complete(struct inode *inode)
@@ -444,11 +472,10 @@ static void move_expired_inodes(struct list_head *delaying_queue,
/*
* Queue all expired dirty inodes for io, eldest first.
*/
-static void queue_io(struct backing_dev_info *bdi,
- unsigned long *older_than_this)
+static void queue_io(struct bdi_writeback *wb, unsigned long *older_than_this)
{
- list_splice_init(&bdi->b_more_io, bdi->b_io.prev);
- move_expired_inodes(&bdi->b_dirty, &bdi->b_io, older_than_this);
+ list_splice_init(&wb->b_more_io, wb->b_io.prev);
+ move_expired_inodes(&wb->b_dirty, &wb->b_io, older_than_this);
}
/*
@@ -609,20 +636,20 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
return __sync_single_inode(inode, wbc);
}
-void generic_sync_bdi_inodes(struct super_block *sb,
- struct writeback_control *wbc)
+static void generic_sync_wb_inodes(struct bdi_writeback *wb,
+ struct super_block *sb,
+ struct writeback_control *wbc)
{
const int is_blkdev_sb = sb_is_blkdev_sb(sb);
- struct backing_dev_info *bdi = wbc->bdi;
const unsigned long start = jiffies; /* livelock avoidance */
spin_lock(&inode_lock);
- if (!wbc->for_kupdate || list_empty(&bdi->b_io))
- queue_io(bdi, wbc->older_than_this);
+ if (!wbc->for_kupdate || list_empty(&wb->b_io))
+ queue_io(wb, wbc->older_than_this);
- while (!list_empty(&bdi->b_io)) {
- struct inode *inode = list_entry(bdi->b_io.prev,
+ while (!list_empty(&wb->b_io)) {
+ struct inode *inode = list_entry(wb->b_io.prev,
struct inode, i_list);
long pages_skipped;
@@ -634,7 +661,7 @@ void generic_sync_bdi_inodes(struct super_block *sb,
continue;
}
- if (!bdi_cap_writeback_dirty(bdi)) {
+ if (!bdi_cap_writeback_dirty(wb->bdi)) {
redirty_tail(inode);
if (is_blkdev_sb) {
/*
@@ -656,7 +683,7 @@ void generic_sync_bdi_inodes(struct super_block *sb,
continue;
}
- if (wbc->nonblocking && bdi_write_congested(bdi)) {
+ if (wbc->nonblocking && bdi_write_congested(wb->bdi)) {
wbc->encountered_congestion = 1;
if (!is_blkdev_sb)
break; /* Skip a congested fs */
@@ -690,7 +717,7 @@ void generic_sync_bdi_inodes(struct super_block *sb,
wbc->more_io = 1;
break;
}
- if (!list_empty(&bdi->b_more_io))
+ if (!list_empty(&wb->b_more_io))
wbc->more_io = 1;
}
@@ -698,6 +725,14 @@ void generic_sync_bdi_inodes(struct super_block *sb,
/* Leave any unwritten inodes on b_io */
}
+void generic_sync_bdi_inodes(struct super_block *sb,
+ struct writeback_control *wbc)
+{
+ struct backing_dev_info *bdi = wbc->bdi;
+
+ generic_sync_wb_inodes(&bdi->wb, sb, wbc);
+}
+
/*
* Write out a superblock's list of dirty inodes. A wait will be performed
* upon no inodes, all inodes or the final one, depending upon sync_mode.
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index f164925..77dc62c 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -24,8 +24,8 @@ struct dentry;
* Bits in backing_dev_info.state
*/
enum bdi_state {
- BDI_pdflush, /* A pdflush thread is working this device */
BDI_pending, /* On its way to being activated */
+ BDI_wb_alloc, /* Default embedded wb allocated */
BDI_async_congested, /* The async (write) queue is getting full */
BDI_sync_congested, /* The sync queue is getting full */
BDI_unused, /* Available bits start here */
@@ -41,15 +41,23 @@ enum bdi_stat_item {
#define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
-struct bdi_writeback_arg {
- unsigned long nr_pages;
- struct super_block *sb;
+struct bdi_writeback {
+ struct backing_dev_info *bdi; /* our parent bdi */
+ unsigned int nr;
+
+ struct task_struct *task; /* writeback task */
+ wait_queue_head_t wait;
+ struct list_head b_dirty; /* dirty inodes */
+ struct list_head b_io; /* parked for writeback */
+ struct list_head b_more_io; /* parked for more writeback */
+
+ unsigned long nr_pages;
+ struct super_block *sb;
enum writeback_sync_modes sync_mode;
};
struct backing_dev_info {
struct list_head bdi_list;
-
unsigned long ra_pages; /* max readahead in PAGE_CACHE_SIZE units */
unsigned long state; /* Always use atomic bitops on this */
unsigned int capabilities; /* Device capabilities */
@@ -66,14 +74,11 @@ struct backing_dev_info {
unsigned int min_ratio;
unsigned int max_ratio, max_prop_frac;
- struct device *dev;
+ struct bdi_writeback wb; /* default writeback info for this bdi */
+ unsigned long wb_active; /* bitmap of active tasks */
+ unsigned long wb_mask; /* number of registered tasks */
- struct task_struct *task; /* writeback task */
- wait_queue_head_t wait;
- struct bdi_writeback_arg wb_arg; /* protected by BDI_pdflush */
- struct list_head b_dirty; /* dirty inodes */
- struct list_head b_io; /* parked for writeback */
- struct list_head b_more_io; /* parked for more writeback */
+ struct device *dev;
#ifdef CONFIG_DEBUG_FS
struct dentry *debug_dir;
@@ -90,19 +95,20 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
void bdi_unregister(struct backing_dev_info *bdi);
int bdi_start_writeback(struct backing_dev_info *bdi, struct super_block *sb,
long nr_pages, enum writeback_sync_modes sync_mode);
-int bdi_writeback_task(struct backing_dev_info *bdi);
+int bdi_writeback_task(struct bdi_writeback *wb);
void bdi_writeback_all(struct super_block *sb, long nr_pages,
enum writeback_sync_modes sync_mode);
void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
+int bdi_has_dirty_io(struct backing_dev_info *bdi);
extern struct mutex bdi_lock;
extern struct list_head bdi_list;
-static inline int bdi_has_dirty_io(struct backing_dev_info *bdi)
+static inline int wb_has_dirty_io(struct bdi_writeback *wb)
{
- return !list_empty(&bdi->b_dirty) ||
- !list_empty(&bdi->b_io) ||
- !list_empty(&bdi->b_more_io);
+ return !list_empty(&wb->b_dirty) ||
+ !list_empty(&wb->b_io) ||
+ !list_empty(&wb->b_more_io);
}
static inline void __add_bdi_stat(struct backing_dev_info *bdi,
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index bae3d4f..c8201f0 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -199,10 +199,46 @@ static int __init default_bdi_init(void)
}
subsys_initcall(default_bdi_init);
+static void bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
+{
+ memset(wb, 0, sizeof(*wb));
+
+ wb->bdi = bdi;
+ init_waitqueue_head(&wb->wait);
+ INIT_LIST_HEAD(&wb->b_dirty);
+ INIT_LIST_HEAD(&wb->b_io);
+ INIT_LIST_HEAD(&wb->b_more_io);
+}
+
+static int wb_assign_nr(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+{
+ set_bit(0, &bdi->wb_mask);
+ wb->nr = 0;
+ return 0;
+}
+
+static void bdi_put_wb(struct backing_dev_info *bdi, struct bdi_writeback *wb)
+{
+ clear_bit(wb->nr, &bdi->wb_mask);
+ clear_bit(BDI_wb_alloc, &bdi->state);
+}
+
+static struct bdi_writeback *bdi_new_wb(struct backing_dev_info *bdi)
+{
+ struct bdi_writeback *wb;
+
+ set_bit(BDI_wb_alloc, &bdi->state);
+ wb = &bdi->wb;
+ wb_assign_nr(bdi, wb);
+ return wb;
+}
+
static int bdi_start_fn(void *ptr)
{
- struct backing_dev_info *bdi = ptr;
+ struct bdi_writeback *wb = ptr;
+ struct backing_dev_info *bdi = wb->bdi;
struct task_struct *tsk = current;
+ int ret;
/*
* Add us to the active bdi_list
@@ -226,7 +262,15 @@ static int bdi_start_fn(void *ptr)
smp_mb__after_clear_bit();
wake_up_bit(&bdi->state, BDI_pending);
- return bdi_writeback_task(bdi);
+ ret = bdi_writeback_task(wb);
+
+ bdi_put_wb(bdi, wb);
+ return ret;
+}
+
+int bdi_has_dirty_io(struct backing_dev_info *bdi)
+{
+ return wb_has_dirty_io(&bdi->wb);
}
static void bdi_flush_io(struct backing_dev_info *bdi)
@@ -244,11 +288,12 @@ static void bdi_flush_io(struct backing_dev_info *bdi)
static int bdi_forker_task(void *ptr)
{
- struct backing_dev_info *me = ptr;
+ struct bdi_writeback *me = ptr;
DEFINE_WAIT(wait);
for (;;) {
struct backing_dev_info *bdi, *tmp;
+ struct bdi_writeback *wb;
/*
* Do this periodically, like kupdated() did before.
@@ -259,8 +304,8 @@ static int bdi_forker_task(void *ptr)
* Temporary measure, we want to make sure we don't see
* dirty data on the default backing_dev_info
*/
- if (bdi_has_dirty_io(me))
- bdi_flush_io(me);
+ if (wb_has_dirty_io(me))
+ bdi_flush_io(me->bdi);
prepare_to_wait(&me->wait, &wait, TASK_INTERRUPTIBLE);
@@ -271,7 +316,7 @@ static int bdi_forker_task(void *ptr)
* a thread registered. If so, set that up.
*/
list_for_each_entry_safe(bdi, tmp, &bdi_list, bdi_list) {
- if (bdi->task || !bdi_has_dirty_io(bdi))
+ if (bdi->wb.task || !bdi_has_dirty_io(bdi))
continue;
bdi_add_default_flusher_task(bdi);
@@ -292,17 +337,22 @@ static int bdi_forker_task(void *ptr)
list_del_init(&bdi->bdi_list);
mutex_unlock(&bdi_lock);
- BUG_ON(bdi->task);
+ wb = bdi_new_wb(bdi);
+ if (!wb)
+ goto readd_flush;
- bdi->task = kthread_run(bdi_start_fn, bdi, "bdi-%s",
+ wb->task = kthread_run(bdi_start_fn, wb, "bdi-%s",
dev_name(bdi->dev));
+
/*
* If task creation fails, then readd the bdi to
* the pending list and force writeout of the bdi
* from this forker thread. That will free some memory
* and we can try again.
*/
- if (!bdi->task) {
+ if (!wb->task) {
+ bdi_put_wb(bdi, wb);
+readd_flush:
/*
* Add this 'bdi' to the back, so we get
* a chance to flush other bdi's to free
@@ -320,8 +370,18 @@ static int bdi_forker_task(void *ptr)
return 0;
}
+/*
+ * Add a new flusher task that gets created for any bdi
+ * that has dirty data pending writeout
+ */
void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
{
+ if (!bdi_cap_writeback_dirty(bdi))
+ return;
+
+ /*
+ * Someone already marked this pending for task creation
+ */
if (test_and_set_bit(BDI_pending, &bdi->state))
return;
@@ -329,7 +389,7 @@ void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
list_move_tail(&bdi->bdi_list, &bdi_pending_list);
mutex_unlock(&bdi_lock);
- wake_up(&default_backing_dev_info.wait);
+ wake_up(&default_backing_dev_info.wb.wait);
}
int bdi_register(struct backing_dev_info *bdi, struct device *parent,
@@ -362,13 +422,23 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
* on-demand when they need it.
*/
if (bdi_cap_flush_forker(bdi)) {
- bdi->task = kthread_run(bdi_forker_task, bdi, "bdi-%s",
+ struct bdi_writeback *wb;
+
+ wb = bdi_new_wb(bdi);
+ if (!wb) {
+ ret = -ENOMEM;
+ goto remove_err;
+ }
+
+ wb->task = kthread_run(bdi_forker_task, wb, "bdi-%s",
dev_name(dev));
- if (!bdi->task) {
+ if (!wb->task) {
+ bdi_put_wb(bdi, wb);
+ ret = -ENOMEM;
+remove_err:
mutex_lock(&bdi_lock);
list_del(&bdi->bdi_list);
mutex_unlock(&bdi_lock);
- ret = -ENOMEM;
goto exit;
}
}
@@ -391,28 +461,37 @@ static int sched_wait(void *word)
return 0;
}
+/*
+ * Remove bdi from global list and shutdown any threads we have running
+ */
static void bdi_wb_shutdown(struct backing_dev_info *bdi)
{
+ if (!bdi_cap_writeback_dirty(bdi))
+ return;
+
/*
* If setup is pending, wait for that to complete first
*/
wait_on_bit(&bdi->state, BDI_pending, sched_wait, TASK_UNINTERRUPTIBLE);
+ /*
+ * Make sure nobody finds us on the bdi_list anymore
+ */
mutex_lock(&bdi_lock);
list_del(&bdi->bdi_list);
mutex_unlock(&bdi_lock);
+
+ /*
+ * Finally, kill the kernel thread
+ */
+ kthread_stop(bdi->wb.task);
}
void bdi_unregister(struct backing_dev_info *bdi)
{
if (bdi->dev) {
- if (!bdi_cap_flush_forker(bdi)) {
+ if (!bdi_cap_flush_forker(bdi))
bdi_wb_shutdown(bdi);
- if (bdi->task) {
- kthread_stop(bdi->task);
- bdi->task = NULL;
- }
- }
bdi_debug_unregister(bdi);
device_unregister(bdi->dev);
bdi->dev = NULL;
@@ -429,11 +508,10 @@ int bdi_init(struct backing_dev_info *bdi)
bdi->min_ratio = 0;
bdi->max_ratio = 100;
bdi->max_prop_frac = PROP_FRAC_BASE;
- init_waitqueue_head(&bdi->wait);
INIT_LIST_HEAD(&bdi->bdi_list);
- INIT_LIST_HEAD(&bdi->b_io);
- INIT_LIST_HEAD(&bdi->b_dirty);
- INIT_LIST_HEAD(&bdi->b_more_io);
+ bdi->wb_mask = bdi->wb_active = 0;
+
+ bdi_wb_init(&bdi->wb, bdi);
for (i = 0; i < NR_BDI_STAT_ITEMS; i++) {
err = percpu_counter_init(&bdi->bdi_stat[i], 0);
@@ -458,9 +536,7 @@ void bdi_destroy(struct backing_dev_info *bdi)
{
int i;
- WARN_ON(!list_empty(&bdi->b_dirty));
- WARN_ON(!list_empty(&bdi->b_io));
- WARN_ON(!list_empty(&bdi->b_more_io));
+ WARN_ON(bdi_has_dirty_io(bdi));
bdi_unregister(bdi);
OK, so the reason for the original problems when the sense buffer was
inlined in the scsi_cmnd was that we need to DMA to the sense
buffer but not to the command. Plus the command is in fairly constant
use, so we get cacheline interference unless the two are always on
separate cachelines. This necessitates opening up a hole in the command
to achieve this (you can pad to the next cacheline if you can guarantee
that the command always begins on a cacheline boundary; if not, it has
to be 2*cacheline). The L1 cacheline can be up to 128 bytes on some
architectures, so we'd need to know the waste of space is worth it in
terms of speed. The other problem is that the entire command now has to
be allocated in DMAable memory, which restricts the allocation on some
systems.
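(For reference, a minimal sketch of the inline layout under discussion. This
is not the real scsi_cmnd; the sizes are illustrative:)

	#define SMP_CACHE_BYTES		64	/* assumed L1 line size */
	#define SCSI_SENSE_BUFFERSIZE	96

	struct cmd_with_inline_sense {
		/* CPU-touched members live here... */
		unsigned long serial_number;

		/*
		 * The device DMAs into this array. Aligning it makes the
		 * compiler leave a hole before it and rounds the struct size
		 * up to a whole number of lines, so nothing else shares the
		 * DMA'd line(s). That only holds if the allocation itself
		 * starts on a cacheline, which is exactly the guarantee we
		 * may not have; otherwise the padding roughly doubles.
		 */
		unsigned char sense_buffer[SCSI_SENSE_BUFFERSIZE]
			__attribute__((aligned(SMP_CACHE_BYTES)));
	};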
> Yeah, we can inline the sense buffer but as we discussed in the past
> several times, there are some good reasons that we should not do so, I
> think.
There are several other approaches:
1. Keep the sense buffer packed in the command but disallow DMA to
it, which fixes all the alignment problems. Then we supply a
set of rotating DMA buffers to drivers which need to do the DMA
(which isn't the majority).
2. Sense is a comparative rarity, so use a more compact pooling
scheme and discard sense buffers for reuse as soon as we know they're not
used (as in, at softirq time when no sense was collected).
I'd need a little more clarity on the actual size of the problem before
making any choices.
The other thing to bear in mind is that two allocations of M and N might
be more costly than a single allocation of N+M; however, an allocation
of M+N+extra can end up more costly if the extra causes more page
reclaim before we get an actual command.
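(To make the M+N+extra point concrete with made-up numbers: with power-of-two
slab buckets, a 400-byte command plus a separate 96-byte sense buffer comes
from the 512- and 128-byte buckets, 640 bytes total. Folded together they are
496 bytes, which still fits the 512 bucket and wins. But add a cacheline hole
on a 128-byte-line machine and the combined object can cross 512 bytes and land
in the 1024 bucket, at which point the "extra" costs more memory, and
potentially more reclaim, than the two separate allocations did.)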
James
I'm not sure if this is what you meant by option 2 or not, but one
proposal was to keep a number of sense buffers around per-host, and only
allocate extras when we run close to empty.
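(A sketch of that kind of per-host reserve. All names, sizes and the locking
are invented for illustration; this is not real scsi-ml code:)

	#define NR_RESERVED_SENSE	4	/* assumed reserve depth */

	struct sense_pool {
		spinlock_t	lock;
		void		*free[NR_RESERVED_SENSE];
		int		nr_free;
	};

	static void *sense_buf_get(struct sense_pool *pool, gfp_t gfp)
	{
		unsigned long flags;
		void *buf = NULL;

		spin_lock_irqsave(&pool->lock, flags);
		if (pool->nr_free)
			buf = pool->free[--pool->nr_free];
		spin_unlock_irqrestore(&pool->lock, flags);

		/* reserve ran dry: allocate an extra buffer */
		if (!buf)
			buf = kmalloc(SCSI_SENSE_BUFFERSIZE, gfp);
		return buf;
	}

	static void sense_buf_put(struct sense_pool *pool, void *buf)
	{
		unsigned long flags;

		spin_lock_irqsave(&pool->lock, flags);
		if (pool->nr_free < NR_RESERVED_SENSE) {
			pool->free[pool->nr_free++] = buf;
			buf = NULL;
		}
		spin_unlock_irqrestore(&pool->lock, flags);
		kfree(buf);	/* extras go back to slab; kfree(NULL) is a no-op */
	}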
--
Matthew Wilcox Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours. We can't possibly take such
a retrograde step."
Yeah, I think that there are good reasons why we shouldn't inline the
sense buffer. As I already wrote, it seems that the DMA requirement
wasn't properly understood; it's not just about alignment.
> > Yeah, we can inline the sense buffer but as we discussed in the past
> > several times, there are some good reasons that we should not do so, I
> > think.
>
> There are several other approaches:
>
> 1. Keep the sense buffer packed in the command but disallow DMA to
> it, which fixes all the alignment problems. Then we supply a
> set of rotating DMA buffers to drivers which need to do the DMA
> (which isn't the majority).
Can we just fix some drivers not to DMA into the sense buffer in
scsi_cmnd? IIRC, there are only five or six drivers that do that.
This is not so.
All drivers that go through scsi_eh_prep_cmnd() will eventually DMA through
the regular read path, including all the drivers that do nothing and let
scsi-ml do the REQUEST_SENSE.
Actually, I have exact numbers from the last time I went through all of this.
Boaz
This one is not possible, because in the majority of cases it is scsi-ml that
issues the DMA request, through scsi_eh_prep_cmnd() and a regular read.
The drivers don't even know anything about it.
> 2. Sense is a comparative rarity, so use a more compact pooling
> scheme and discard sense buffers for reuse as soon as we know they're not
> used (as in, at softirq time when no sense was collected).
>
This is the way to go for sure. And it's only needed on arches with
non-coherent caches; all the well-behaved arches can just use embedded
sense just fine.
> I'd need a little more clarity on the actual size of the problem before
> making any choices.
>
> The other thing to bear in mind is that two allocations of M and N might
> be more costly than a single allocation of N+M; however, an allocation
> of M+N+extra can end up more costly if the extra causes more page
> reclaim before we get an actual command.
>
> James
>
Boaz
I retract that "not possible": yes, scsi-ml is one more possible client of
the "rotating DMA buffers".
>> 2. Sense is a comparative rarity, so use a more compact pooling
>> scheme and discard sense buffers for reuse as soon as we know they're not
>> used (as in, at softirq time when no sense was collected).
>>
>
> This is the way to go for sure. And it's only needed on arches with
> non-coherent caches; all the well-behaved arches can just use embedded
> sense just fine.
>
>> I'd need a little more clarity on the actual size of the problem before
>> making any choices.
>>
>> The other thing to bear in mind is that two allocations of M and N might
>> be more costly than a single allocation of N+M; however, an allocation
>> of M+N+extra can end up more costly if the extra causes more page
>> reclaim before we get an actual command.
>>
>> James
>>
> Boaz
> On 05/26/2009 06:31 PM, FUJITA Tomonori wrote:
> >
> > Can we just fix some drivers not to DMA into the sense buffer in
> > scsi_cmnd? IIRC, there are only five or six drivers that do that.
>
> This is not so.
> All drivers that go through scsi_eh_prep_cmnd() will eventually DMA through
> the regular read path, including all the drivers that do nothing and let
> scsi-ml do the REQUEST_SENSE.
>
> Actually, I have exact numbers from the last time I went through all of this.
Hmm, we discussed this before, I think.
scsi-ml uses scsi_eh_prep_cmnd() only via scsi_send_eh_cmnd(). There are
some users of scsi_send_eh_cmnd() in scsi-ml, but only scsi_request_sense()
does DMA into the sense_buffer of scsi_cmnd.
Only scsi_error_handler() uses scsi_request_sense(), and
scsi_send_eh_cmnd() works synchronously. So scsi-ml can easily avoid
the DMA into the sense_buffer of scsi_cmnd if we have one sense
buffer per scsi_host.
Sure we did; I sent those patches. To summarize, there are 3 types of drivers:
1. Only memcpy into sense_buffer - 60%
2. Use scsi_eh_prep_cmnd and DMA read into the sense buffer.
2.1 Do nothing and let scsi-ml do scsi_eh_prep_cmnd - 30%
3. Prepare DMA descriptors for sense_buffer before execution - 10%
> scsi-ml uses scsi_eh_prep_cmnd only via scsi_send_eh_cmnd(). There are
> some users of scsi_send_eh_cmnd in scsi-ml but only scsi_request_sense
> does the DMA in the sense_buffer of scsi_cmnd.
>
Also drivers use scsi_eh_prep_cmnd at interrupt time and proceed to
DMA into the sense_buffer.
> Only scsi_error_handler() uses scsi_request_sense() and
> scsi_send_eh_cmnd() works synchronously. So scsi-ml can easily avoid
> the the DMA in the sense_buffer of scsi_cmnd if we have one sense
> buffer per scsi_host.
Not so. As James explained back then, once you have a CHECK_CONDITION return,
the per-host queue is frozen, yes. But as soon as you send the REQUEST_SENSE,
the target queue is unfrozen again and all in-flight commands can error, long
before the REQUEST_SENSE returns.
Boaz
Hmm, I'm not sure what you mean.
Why is 'all in-flight commands can error' a problem? The sense_buffer
per host is used only by the scsi_eh kernel thread.