I/O architecture

Jon Taylor

Jul 29, 2011, 12:33:49 PM
to perlos
These are my notes on a completely generic I/O driver system based on
resource templates and specifications. A language like Perl, which can
be "rebuilt" at high levels of abstraction, is perfect for this type
of system.

The Resource Description Framework (RDF) is a W3C meta-language spec,
usually serialized as XML, for describing resources - here, conserved
quantities. RDF is to be compiled through low-level code templates.
Abstractions are layered in hierarchically, with more and more
specific constraint sections. At the top is the system device queue
handler - usually only one per machine - typically polled or
interrupt-driven rather than scheduled.
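
As a rough sketch of what such a resource description might look like,
here is a hypothetical RDF/XML fragment for a serial port. The io:
vocabulary and all of its property names are invented for illustration;
only the rdf: namespace is standard.

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:io="http://example.org/io-driver#">
  <!-- Hypothetical vocabulary: every io: term below is invented -->
  <rdf:Description rdf:about="#uart0">
    <io:resourceType>serial-port</io:resourceType>
    <io:baseAddress>0x3F8</io:baseAddress>
    <io:registerWidth>8</io:registerWidth>
    <io:maxBaudRate>115200</io:maxBaudRate>
  </rdf:Description>
</rdf:RDF>
```

A compiler pass would walk descriptions like this and stamp out the
low-level code templates they constrain.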

Template execution is driven through PerlIO layers. Layers are "drawn
up" into sections within a master template contained and managed in a
higher-level layer. Ideally everything would be statically defined at
build time and contained in generated code in a single layer, but this
doesn't allow for runtime addition of unspecced driver code (a new
machine on the network, a device added to a bus, device discovery in
general, native driver integration, etc.).
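
A minimal sketch of the layering idea - not PerlIO itself, just the
concept of each layer wrapping the one below and transforming data on
the way through. All class names here are invented:

```python
class Layer:
    """A stream layer that delegates to the layer below it."""
    def __init__(self, below=None):
        self.below = below

    def write(self, data):
        if self.below is not None:
            self.below.write(data)

class RawBuffer(Layer):
    """Bottom layer: just accumulates raw bytes."""
    def __init__(self):
        super().__init__()
        self.buf = bytearray()

    def write(self, data):
        self.buf.extend(data)

class CrLfLayer(Layer):
    """Middle layer: translates \\n to \\r\\n on the way down,
    analogous in spirit to PerlIO's :crlf layer."""
    def write(self, data):
        self.below.write(data.replace(b"\n", b"\r\n"))

# "Drawing up" a stack of layers at runtime:
raw = RawBuffer()
stream = CrLfLayer(raw)
stream.write(b"hello\n")
# raw.buf now holds the translated bytes
```

The point of the runtime stack is exactly the flexibility mentioned
above: a newly discovered device can have a layer pushed onto the
stack without rebuilding the generated code.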

Specific low-level profiles, generic high-level interfaces. Profiles
are tuned "in the middle", between the resource and interface
sections; this is the actual "device driver", which usually ends up
being a time<->space constraint conversion specification matrix.
Tuning is done either at compile time with a profiler tool, or at
runtime (either at code init time or as an ongoing process).
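
A toy sketch of such a time<->space matrix: each row is a candidate
profile (buffer size traded against expected latency), and tuning
picks the cheapest profile satisfying the current constraint. All
numbers and names are made up for illustration.

```python
profiles = [
    # (name, buffer_bytes, latency_us) - illustrative values only
    ("tiny",   64,    500),
    ("small",  512,   120),
    ("medium", 4096,  40),
    ("large",  65536, 10),
]

def tune(max_latency_us):
    """Return the smallest-buffer profile meeting the latency bound."""
    candidates = [p for p in profiles if p[2] <= max_latency_us]
    if not candidates:
        raise ValueError("no profile satisfies the constraint")
    return min(candidates, key=lambda p: p[1])

print(tune(100))  # smallest buffer whose latency fits under 100us
```

A runtime tuner would simply re-run this selection as measured
latencies drift, which is the "ongoing process" case above.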

The top level is always a queue - in, out and/or through, with
optional out-of-band sections to interact with the lower levels of the
I/O hierarchy more directly. All constraint-based IO in the system
flows through this management interface. Usually implemented with
PerlIO standard IO functions using a virtual filesystem hierarchy
interface style - open a file, read and/or write data to the file
through a protocol template, close the file.
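
Sketched in code, the file-style interface over the top-level queue
might look like this. The DeviceQueue, DeviceFile, and path scheme are
all hypothetical; the shape is the open/write/close flow described
above.

```python
import collections

class DeviceQueue:
    """Top-level queue: all constraint-based I/O flows through here."""
    def __init__(self):
        self.pending = collections.deque()

    def submit(self, path, op, payload=None):
        self.pending.append((path, op, payload))

class DeviceFile:
    """File-like handle over a virtual device path."""
    def __init__(self, queue, path):
        self.queue, self.path = queue, path
        queue.submit(path, "open")

    def write(self, data):
        self.queue.submit(self.path, "write", data)

    def close(self):
        self.queue.submit(self.path, "close")

q = DeviceQueue()
f = DeviceFile(q, "/dev/bus0/disk1")
f.write(b"block data")
f.close()
# q.pending now holds the open, write, close operations in order
```

Out-of-band interaction would bypass DeviceFile and talk to the queue
(or the layers below it) directly.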

The bottom level is usually a simple set of mapping- and constraint-
based resource definitions: address spaces and formats, bus widths/
timings, register- or DMA-word-based command queue formats, inter-
section timing constraints or resource allocation tradeoff matrices,
interrupt or exception message type generation specs, and so on.
Usually any fundamental resource will be either time- or space-bound
as a constraint, or some sort of meta-definition spec which maps over
a lower-level constraint space.
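
As a sketch, bottom-level definitions can be plain data. Every field
name below is hypothetical; the point is that each fundamental
resource reduces to a space bound, a time bound, or a mapping over
lower-level constraints.

```python
uart0 = {
    "address_space": {"base": 0x3F8, "size": 8},   # space constraint
    "bus": {"width_bits": 8, "cycle_ns": 100},     # time constraint
    "commands": {"format": "register", "queue_depth": 1},
    "interrupts": {"irq": 4, "message_type": "level"},
}

def is_time_bound(spec):
    """Crude classifier: any field carrying a time unit."""
    return any(k.endswith(("_ns", "_us", "_ms")) for k in spec)

def is_space_bound(spec):
    """Crude classifier: any field describing an address range."""
    return "base" in spec or "size" in spec

assert is_space_bound(uart0["address_space"])
assert is_time_bound(uart0["bus"])
```

A meta-definition spec would then be a third kind of entry whose
fields reference other definitions rather than raw numbers.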

If the underlying resource constraint space is well mapped, the whole
system should be able to tune itself and run close to optimally. The
process of converting a parallel constraint-space traversal back and
forth to a serialized abstract interface is simply a shortest-path
traversal over a basis-space graph representing the resource
constraint matrix. You then use a depth-first traversal over the tree
spanning the shortest path to find the least-cost traversal, or
"optimality run". You often see this type of algorithm implemented in
I/O hardware such as SCSI controllers with tagged command queues -
such systems allow out-of-order or parallelizable I/O to be optimally
scheduled and managed. Systems software which manages I/O should
virtualize this (presumably optimal) approach.
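
A sketch of that tagged-queue reordering in software: given
out-of-order requests tagged with block addresses, a greedy
nearest-first pass (a cheap stand-in for a true shortest-path search
over the constraint graph) orders them to reduce total seek distance.
The tags and block numbers are illustrative.

```python
def schedule(requests, head=0):
    """Greedy nearest-first ordering of (tag, block) requests,
    starting from the current head position."""
    pending = list(requests)
    order = []
    pos = head
    while pending:
        # Pick the request closest to the current position.
        nxt = min(pending, key=lambda r: abs(r[1] - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt[1]
    return order

reqs = [("t1", 90), ("t2", 10), ("t3", 50), ("t4", 15)]
print(schedule(reqs, head=0))
# Total seek for this ordering is 90 blocks, versus 245 for FIFO order.
```

Greedy nearest-first is not guaranteed optimal in general; a real
scheduler would search the constraint graph properly, but the shape of
the problem is the same.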

Jon