The 2013 Linux Storage, Filesystem, and Memory Management Summit
Plenary sessions
The following sessions involved the entire group of nearly 100 developers:
- Lock scaling: fine-grained locking
is often seen as the path to greater scalability, but what happens
when increasing the number of locks makes the system less scalable
instead?
- Page forking: might the performance
problems associated with stable pages
be better addressed by a switch to an entirely different solution to
the problem of implementing stable writes within filesystems?
- The shrinker API is the source of
a number of problems in memory management and beyond; here, those
problems were discussed in the context of a proposal for an improved
shrinker API (see the sketch after this list).
- A storage technology update: what can
we expect from upcoming storage devices, and how will the kernel
handle them?
- FUSE and cloud storage: how can we make FUSE work better?
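As background for the shrinker discussion above: the rework being floated splits the old, single shrink() callback into separate "count" and "scan" operations, so that the memory-management core can ask how many objects are freeable without forcing an actual freeing pass. Below is a minimal sketch in that style (the form the API eventually took in the mainline kernel); the demo_cache_* helpers are hypothetical placeholders for whatever cache the shrinker manages.

```c
/*
 * Sketch of a shrinker using the count/scan split; demo_cache_*
 * are hypothetical placeholders, not real kernel functions.
 */
#include <linux/shrinker.h>

extern unsigned long demo_cache_count(void);
extern unsigned long demo_cache_trim(unsigned long nr);

/* Cheap query: how many objects could be freed right now? */
static unsigned long demo_count_objects(struct shrinker *shrink,
					struct shrink_control *sc)
{
	return demo_cache_count();
}

/* The actual work: free up to sc->nr_to_scan objects. */
static unsigned long demo_scan_objects(struct shrinker *shrink,
				       struct shrink_control *sc)
{
	unsigned long freed = demo_cache_trim(sc->nr_to_scan);

	/* Tell the core to stop if no progress can be made. */
	return freed ? freed : SHRINK_STOP;
}

static struct shrinker demo_shrinker = {
	.count_objects	= demo_count_objects,
	.scan_objects	= demo_scan_objects,
	.seeks		= DEFAULT_SEEKS,
};

/* ... then register_shrinker(&demo_shrinker) from init code. */
```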
MM-only sessions
The memory management developers had a number of sessions where they closed themselves up in a tiny, refrigerated room for MM-specific discussions. Reports from these sessions include:
- mmap_sem and filesystems: complexities
around the use of the memory management semaphore are creating pain
for filesystem developers.
- In-kernel compression: an apparent
resolution to the long-running debate between zswap and zcache.
- Various short topics including
hardware-initiated paging from coprocessors, process exit times, and
volatile ranges.
- Writeback latency: the inevitable
writeback discussion focused on a handful of specific problems in
need of solutions in the near future.
- Toward better swapping: how to improve swap performance, especially when
the available swap devices have different performance characteristics.
- Improving the out-of-memory killer:
will we ever find a better way to kill off processes when the system
runs out of memory?
- Soft reclaim: making reclaim in control groups work better — though universal agreement on just how things should behave does not yet exist.
Filesystem and Storage sessions
The bulk of the non-plenary sessions were for both Filesystem and Storage developers. Here are the reports from those discussions:
- Storage data integrity: What are the
right interfaces for handling storage data integrity information?
- Unit attentions and thin provisioning
thresholds: When a storage array hits its "soft" threshold, it will
generate a "unit attention"; what does the kernel need to do to handle
that situation?
- I/O hints: Higher layers can provide
hints to the storage layer about how the stored data will be used and accessed,
but it is not clear what filesystems should do to pass along any hints they
get or to generate some of their own.
- Copy offload: How to support
offloading data copies to servers or storage arrays.
- dm-cache and bcache: the future of two
storage-caching technologies for Linux.
- Error returns: filesystems could use
better error information from the storage layer.
- Storage management: how do we ease the
task of creating and managing filesystems on Linux systems?
- O_DIRECT: the kernel's direct I/O code
is complicated, fragile, and hard to change. Is it time to start
over?
- Reducing io_submit() latency: submitting asynchronous I/O operations can block for long periods of time, which is not what callers want. Various ways of addressing this problem were discussed, but no easy solutions are at hand; the sketch after this list shows where the blocking happens.
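For readers unfamiliar with the interface in question: io_submit() is the system call with which user space queues asynchronous I/O requests. A minimal libaio example follows, marking the nominally asynchronous submission call that can nonetheless stall; the file name is a hypothetical placeholder and error handling is trimmed.

```c
/* Build with: gcc -o aio_demo aio_demo.c -laio */
#define _GNU_SOURCE
#include <libaio.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	io_context_t ctx = 0;
	struct iocb cb, *cbs[1] = { &cb };
	struct io_event ev;
	void *buf;
	int fd = open("testfile", O_RDONLY | O_DIRECT);

	if (fd < 0 || io_setup(1, &ctx) || posix_memalign(&buf, 4096, 4096))
		return 1;

	io_prep_pread(&cb, fd, buf, 4096, 0);
	/*
	 * The submission itself: nominally asynchronous, but this
	 * call can block if, for example, the filesystem must do
	 * metadata I/O or the request queue is congested.
	 */
	if (io_submit(ctx, 1, cbs) != 1)
		return 1;
	if (io_getevents(ctx, 1, 1, &ev, NULL) != 1)
		return 1;
	printf("read completed: res=%lu\n", (unsigned long)ev.res);
	io_destroy(ctx);
	close(fd);
	return 0;
}
```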
Filesystem-only sessions
- NFS status: what is going on in the
NFS subsystem.
- Btrfs status: what has happened with
the next-generation Linux filesystem, and when will it be ready for
production use?
- User-space filesystem servers: what
can the kernel do to support user-space servers like Samba and
NFS-GANESHA?
- Range locking: a proposal to lock
portions of files within the kernel (see the sketch after this list).
- FedFS: where things stand with the creation of a Federated Filesystem implementation for Linux.
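As context for the range-locking item above: user space has long been able to lock byte ranges of files with fcntl(), while the proposal under discussion concerned an analogous primitive inside the kernel. Here is a minimal sketch of the existing user-space interface, with a hypothetical file name:

```c
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
	struct flock fl = {
		.l_type   = F_WRLCK,	/* exclusive lock */
		.l_whence = SEEK_SET,
		.l_start  = 0,		/* lock the first 4096 bytes */
		.l_len    = 4096,
	};
	int fd = open("testfile", O_RDWR);

	if (fd < 0)
		return 1;
	if (fcntl(fd, F_SETLKW, &fl) == -1) {	/* wait for the lock */
		perror("fcntl");
		return 1;
	}
	/* ... operate on the locked range ... */
	fl.l_type = F_UNLCK;
	fcntl(fd, F_SETLK, &fl);
	close(fd);
	return 0;
}
```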
Storage-only sessions
- Reducing SCSI latency: the SCSI
stack is having a hard time keeping up with the fastest drives; what
can be done to speed things up?
- SCSI testing: it would be nice to
have a test suite for SCSI devices; after this session, one may well
be in the works.
- Error handling and firmware updates: a look at some current problems with handling failing drives, along with the question of how to perform online firmware updates on SATA devices.
Before anybody asks: the taking of the group picture was a somewhat confused event this year, and we were unable to take a picture of our own. So we have no such picture to post at this time.
The Linux Foundation has posted a
set of photos from the event, including the group picture.