On Thu, 8 Nov 2018 02:59:42 -0800 (PST)
> ..OK, so imagine there's a process running on an OS, and it does a
> syscall (int 80 on Linux, I believe....?) - so then, the CPU jumps
> inside the kernel, right? Now, say another process comes along, and
> also, while doing it's stuff, does a syscall...
I hope you don't mind, I added alt.os.development, as I'm becoming
somewhat rusty on this topic, but here goes anyway ... I tried to
keep it concise, but I failed. There is a lot to learn and plenty of
inter-related computer science (CS) concepts involved. Sorry, there
is nothing I can do about that. You'll just have to invest the time.
Also, I've kept code examples to small sketches rather than full
programs in assembly or a high-level language, because with the
amount of work to be done, complete examples become large and
difficult to explain or understand.
So, if the CPU or processor jumped inside the kernel, how could
another process be executing? That's what you should be asking at
that point in your paragraph. AISI, there are basically two answers
to this question. The first answer is for processors and code
executing a single thread, i.e., single-processing (on a
uniprocessor, i.e., a single core). The second is for processors
and code executing multiple threads, i.e., multi-processing (such as
multi-core) or parallel computing. In essence, single-processing is
like one of those children's adventure books where you can choose
which path the story takes. No matter what path is taken, there is
only one of you involved in the story. Multi-processing is like
when there are multiple people, i.e., your friends, all reading the
same book at the same time. So, some of you could be flipping to
the same pages, while taking different paths. (Was that too bizarre
an introductory explanation? ...)
a) With single-threaded code, such as code from any high-level
programming language running on a single-core processor, i.e., prior
to multi-core processors and not involving parallel computing, there
is only a single point of processor execution at any given time.
I.e., after the Int 80h routine executes, the code must exit the
kernel and return to the calling thread, then that thread must be
interrupted somehow to switch to another thread. This is done
either preemptively, such as by a hardware interrupt, or
cooperatively, by releasing execution control such as by a software
interrupt (like Int 80h) or a call/jump instruction. The former is
called preemptive multi-tasking. The latter is called cooperative
multi-tasking. At this point, the interrupted task must be halted
and its context saved. Now, the OS must decide which thread to
execute next. The code that does this is called a scheduler. After
that, a stack and execution context are set up for the thread to be
called. Finally, code execution must transfer to the other thread.
This is called a context switch for preemptive multi-tasking, and a
yield for cooperative multi-tasking.
Now, with hardware interrupts involved, it is possible for two
different threads to attempt to execute the same code. If the same
piece of code in the kernel is designed to be executed by multiple
threads, it's called reentrant code. Reentrant means it can safely
be entered again before an earlier invocation has finished.
Usually, only the kernel code needs to be reentrant in a
single-threaded environment, since hardware interrupts drive the
preemptive multi-tasking. If the OS/kernel code is not reentrant,
then you'll experience either some type of crash or data corruption
if the code is called again, or sometimes a halt or long delay will
occur until the earlier thread is finished, depending on the OS
design. If delays or halts are involved, the implementation uses
blocking, locking, or mutual exclusion (a mutex) to prevent
reentrancy. Reentrant code typically saves all registers and
switches stacks, which is commonly done by a context switch for
preemptive multi-tasking, but not by a yield for cooperative
multi-tasking.
b) When you have multi-processing, i.e., multiple processor cores
or parallel computing, it's very easy for software, e.g., different
threads, to attempt to execute the same piece of code. In a
single-processing environment (as described above in a), this
normally occurs only via hardware interrupts. So, all of the code
in a multi-processing environment is usually constructed to be
reentrant, both code in the kernel and code for applications, e.g.,
by the compiler, or by the programmer for assembly. With
multi-processing, you also have to take into account whether the
system shares a memory space or separates the memory space for each
thread. Generally, memory is shared - accessible by any executing
thread - in the most common computing design, i.e., the von Neumann
architecture. So, data and program objects located in shared
memory, i.e., globals or file-scope variables, are to be avoided.
This is because multiple threads executing on different processor
cores may attempt to write different data into the same shared
memory, resulting in data corruption. The reentrant coding takes
care of saving/restoring registers and creating separate stacks for
each thread, so as not to overwrite each other's data. However, in
a multi-processing environment the programmer must take care to use
only locals or auto-scope variables, which are usually created on a
procedure stack (or activation record), instead of globals or
file-scope variables, which are stored in shared memory.
> at this point there
> will be 2 processes running, but it will be 2 different points within
> the kernel which will be being executed - am I right about this? So -
> there's ONE kernel, ie. ONE body of code, multiple points within
> which will be being executed? I find it very hard to envision the
> whole thing....
See above, and the links below, which describe these concepts.
You might also want to add these to that list:
PS. I used the term "thread" throughout, but it may actually be a
"process" or "task" etc ... Consult Wikipedia.
"The most ironic outcome is the most probable." Elon Musk
Could someone tell Elon that means Tesla implodes? ...