On 2017-10-13 18:04:42 +0000, Chris said:
> It's been some time, but did check with Stevens before posting the above.
Not sure what that "Stevens" is in reference to; that is, which resource or
posting you mean by that.
> That wasn't clear, only saying that *most* implementations support the
> ping server in the kernel.
> It didn't answer the question as to how it handles multiple requests at once...
On OpenVMS, a device driver can generate its responses directly within
the driver or from within the driver interrupt routine; it can create
and use fork processes to process the request; it can pass the request
to an ancillary control process (ACP), which can require both a mode
change and a context switch; or it can pass the request off to a
process that's demand-started, started at boot, or managed as a pool
akin to what Apache does with its worker processes. Some drivers have
associated physical hardware, and others are pseudo-drivers:
software-only device drivers.
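As a rough sketch of those dispatch choices in C: every name here is
made up for illustration, none are actual OpenVMS driver-interface
routines, and a real driver would queue the fork routine to run later
at fork IPL rather than calling it directly as this toy does.

    #include <stdbool.h>
    #include <stdio.h>

    struct packet { int kind; };    /* stand-in for a received request */

    static bool is_trivial(const struct packet *p) { return p->kind == 0; }

    /* Answered directly at interrupt level; no fork, no process. */
    static void reply_inline(struct packet *p)
    { printf("replied inline to kind %d\n", p->kind); }

    /* Deferred to a driver fork process; still in kernel mode. */
    static void fork_routine(struct packet *p)
    { printf("processed kind %d at fork level\n", p->kind); }

    /* Handed to an ACP or helper process: mode change, context switch. */
    static void hand_off_to_process(struct packet *p)
    { printf("handed kind %d to a process\n", p->kind); }

    /* The interrupt routine picks the cheapest level that can do the job. */
    void interrupt_service(struct packet *p)
    {
        if (is_trivial(p))
            reply_inline(p);
        else if (p->kind < 10)
            fork_routine(p);
        else
            hand_off_to_process(p);
    }

    int main(void)
    {
        struct packet a = {0}, b = {3}, c = {42};
        interrupt_service(&a);
        interrupt_service(&b);
        interrupt_service(&c);
        return 0;
    }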
What OpenVMS specifically does for this case within HPE TCP/IP
Services or within the upcoming VSI TCPIP stack, I don't know. I've
not looked at the implementation of either of those IP stacks, and
this is not the sort of stuff that tends to be documented. I have
done more than a little networking work in kernel mode, including with
the VCI API, and more than a little device driver coding and debugging
work over the years.
Fork processes are not at all similar to what OpenVMS developers
(folks writing and debugging code outside of kernel and driver
development) think of as processes.
> but the general method, as per inetd, is to offload and fork / spawn a
> new process to handle the request.
Possible, but for something as simple as an ICMP echo reply, an OpenVMS
process creation (whether through inetd or otherwise) with the mode
switch and the context switch would be more overhead than might be
preferred, and it'd also mean interesting system loading behavior
secondary to the arrival of a flood of ICMP ping requests. Which is
why I'd expect that the IP device driver itself just answers the ping
request with an echo reply; why bother transferring data from the kernel
to an outer mode, with the corresponding mode switch and probably also
a process context switch? NIC driver to IP driver and back to NIC
driver and back out...
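To make that concrete, here's a minimal, platform-neutral C sketch of
what an in-kernel responder has to do with the ICMP portion of the
packet. This is just the RFC 792 mechanics, not the TCP/IP Services or
VSI TCPIP implementation; the caller would also swap the IP source and
destination addresses and hand the buffer back to the NIC driver.

    #include <stddef.h>
    #include <stdint.h>

    /* RFC 1071 Internet checksum; endian-agnostic when the result is
     * stored back without byte-swapping. */
    static uint16_t inet_checksum(const void *data, size_t len)
    {
        const uint16_t *p = data;
        uint32_t sum = 0;
        while (len > 1) { sum += *p++; len -= 2; }
        if (len) sum += *(const uint8_t *)p;   /* trailing odd byte */
        while (sum >> 16) sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)~sum;
    }

    /* Minimal ICMP header per RFC 792. */
    struct icmp_hdr {
        uint8_t  type;      /* 8 = echo request, 0 = echo reply */
        uint8_t  code;
        uint16_t checksum;
        uint16_t id;
        uint16_t seq;
    };

    /* Turn an echo request into an echo reply in place; the id, the
     * sequence number, and the payload go back unchanged. */
    void icmp_echo_reply_in_place(void *pkt, size_t len)
    {
        struct icmp_hdr *h = pkt;
        if (len < sizeof *h || h->type != 8 || h->code != 0)
            return;             /* not an echo request; leave it alone */
        h->type = 0;            /* echo reply */
        h->checksum = 0;
        h->checksum = inet_checksum(pkt, len);
    }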
> Otherwise, you could lose requests while servicing the current one.
Drivers use constructs such as spinlocks, IPLs, fork locks and device
locks to prevent data corruption or loss and to coordinate access to
critical sections, and use queues or fork processes to avoid request
loss.
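For the queueing half of that, here's a minimal sketch in portable C11.
OpenVMS drivers would use spinlocks at the appropriate IPL and the
interlocked-queue instructions rather than these atomics, so treat the
names and mechanism as illustrative only: the interrupt side queues the
request and returns quickly, so a second request arriving immediately
afterwards is not lost, and the fork-level side drains the queue later.

    #include <stdatomic.h>
    #include <stddef.h>

    struct request {
        struct request *next;
        /* ... request payload ... */
    };

    static struct request *head, *tail;
    static atomic_flag qlock = ATOMIC_FLAG_INIT;

    /* Interrupt side: append under the lock and return. */
    void enqueue_request(struct request *r)
    {
        r->next = NULL;
        while (atomic_flag_test_and_set_explicit(&qlock,
                                                 memory_order_acquire))
            ;                                   /* spin */
        if (tail) tail->next = r; else head = r;
        tail = r;
        atomic_flag_clear_explicit(&qlock, memory_order_release);
    }

    /* Fork-level side: pull one request, or NULL if the queue is empty. */
    struct request *dequeue_request(void)
    {
        while (atomic_flag_test_and_set_explicit(&qlock,
                                                 memory_order_acquire))
            ;
        struct request *r = head;
        if (r) { head = r->next; if (!head) tail = NULL; }
        atomic_flag_clear_explicit(&qlock, memory_order_release);
        return r;
    }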
Given that this is ICMP, the loss of a packet or three due to excessive
system loading is likely not going to be a particularly disruptive
problem, nor a misbehavior that would be considered a fatal error.
> Perhaps ping is a special case, optimised for speed, but not enough info.
ICMP is comparatively simple traffic. Managing and buffering a TCP
stream is somewhat more involved, for instance.
> None of that negates the original assertion, that all code running in a
> multitasking os runs within a defined process context...
That assertion is incorrect. All code running in a multitasking OS is
implemented however that particular multitasking OS happens to be
designed. On the particular operating system known as OpenVMS, code
can be invoked without having any process context, and other code can
be invoked with or within process context: from system virtual address
space with a specific process context available, from system virtual
address space with who-knows-what in process virtual address space
(possibly including nothing), or from process virtual address space
within process context. Process context is one of the abstractions
that's commonly used, and it's a familiar one. But no, I have no idea
why some folks here in this thread are seemingly wrapped around the
axle about what's executing in a process context and what isn't,
either.
I'm not sure why anybody would be particularly interested in the
implementation details of ICMP ping, either, beyond the developers
supporting the code involved. So long as a ping echo request gets an
echo reply, there are no security vulnerabilities, and getting hit with
a barrage of pings doesn't substantially degrade server performance.
What a particular Unix stack does? Dunno. Check with the folks that
know that particular platform better. I'd expect to find some
differences between implementations, too; I'd not expect that Linux,
BSD, DragonFly BSD, seL4 and Minix all implemented their networking in
the same way, for instance.
Apropos of some of the more general discussions around the different
sorts of kernel designs and implementation languages available for
same, there's _The Case for Writing a Kernel in Rust_ from last month
or so. And no, I don't expect a kernel rewrite on OpenVMS, nor would I
expect the existing development team to seek to migrate to new
implementation languages.
http://www.cs.virginia.edu/~bjc8c/papers/levy17rustkernel.pdf