> Suppose you could design the ultimate game driver? What kinds of
> features would you like to see? These can be either features
> existing in current game drivers, or features as of yet
> unimplemented.
Define "game driver" and I might be able to tell you. Every time I've
seen the term it's been in connection with a combat mud like an LP or
Diku, never a tiny - yet you crossposted to r.g.m.tiny, so it
presumably has something to do with them.
der Mouse
Personally, I'd like to see a more modular design done along object-oriented
principles. I'd like to be able to plug different modules in and out for
different effects. For example, suppose I want to implement a paging-style
memory manager to keep the memory use more-or-less constant. I'd like to
do this WITHOUT having to reach into all the different places in the
gamedriver where memory is allocated.
Also, most drivers I've seen are monolithic, single-process systems. What
about multiple processes? I have my own opinions on this, but I'd like to
hear what others have to say.
Tom Ault
Hmmm.... That one's interesting... I would like to see the "ultimate
game driver" contain the following:
1> A combat and social interaction system like that of a Diku.
2> An object and robot set like that of a MUSH.
3> The ability to interchange and create new types of items,
races, languages, monsters, etc. easily (ie, without having
to alter the source code)
That about sums it up for me...
DW
--
"Don't be intimidated with these [Suns]. They are scum, the only good
thing about them is that they make a cool sound when you drop them
from the top of Kirkbride." - Sven Heinicke (Widener U)
-- D S Wallace: wal...@cs.widener.edu ---------- Yes, this is a .sig. --
In LPMUD (and its derivatives/cousins), the game driver is a compiler/
interpreter that runs a game (mudlib). ie there is a distinction here
which some tiny mud'ers are not familiar with. In LPMUD, the game driver
has nothing to do with (at least, it shouldn't have) skills, character
classes, combat, socials, etc. In this respect, it's unclear why you
crossposted to r.g.m.tiny... For the purposes of discussion, I'll assume
that was an error (and remove it from the Newsgroups field).
>Personally, I'd like to see a more modular design done along object-oriented
>principles. I'd like to be able to plug different modules in and out for
>different effects. For example, suppose I want to implement a paging-style
>memory manager to keep the memory use more-or-less constant. I'd like to
>do this WITHOUT having to reach into all the different places in the
>gamedriver where memory is allocated.
A clean modular design is always nice. At present, there are alternative
malloc packages to replace the default system malloc...but I think something
on the order of a paging-style memory manager would be difficult--it would
either be system-dependent, to handle the low-level details, or require all
references to be indirect. It might be better to handle paging explicitly
through another module (eg the scheduler) than to hack around with the memory
manager.
Another example of where replaceable modules would be ideal is where there are
size vs speed issues--for example, an implementation of mappings that uses
alists, vs one which uses extendible hash tables, vs one which uses binary
trees.
There is also a need to make the game driver as independent as possible from
the mudlib--to some extent, I believe this should also apply to efuns.
>Also, most drivers I've seen are monolithic, single-process systems. What
>about multiple processes? I have my own opinions on this, but I'd like to
>hear what others have to say.
Drivers that fully support multiple processes (pre-emption, synchronization,
blocking, etc) are a ways off, IMHO. In designing such a driver, consideration
must be given to making the coding of the mudlib simple. LPC coders already
take for granted the ease by which they can manipulate shared data, files, etc.
Imagine the debugging problems, questions/complaints, unexplained data
corruption problems, that will come with semaphores, P() & V(), flock(), etc.
So...are you contemplating actually writing the ultimate game driver, or are
you looking for ideas to nag some driver hackers with? :-)
I mean both the underlying C code for the game itself, as well as any code
written in the game's programming language (if any). I guess what I'm going
for is a list of features that people would like to see implemented in MUDs.
This can be anything from better database management to more characterization
aids.
Basically, in Tiny lingo, "game driver" == server, although I've extended
the concept to not only include the game code itself, but how the technical
aspects of world design are implemented on the server itself.
> der Mouse
>
> mo...@mcrcim.mcgill.edu
Tom Ault
Mad Roboticist in Training
I'm currently implementing a game driver from scratch. I'm trying pretty hard
to keep even the idea of ``a game'' out of the interpreter.
>>Personally, I'd like to see a more modular design done along object-oriented
>>principles.
I just took a break to convert to C++. Now it doesn't work....
>>I'd like to be able to plug different modules in and out for
>>different effects. For example, suppose I want to implement a paging-style
>>memory manager to keep the memory use more-or-less constant.
What exactly do you mean? I think paging is best left to the operating system.
I think memory allocation techniques could try to work with the paging system
if possible, but you never know if what's good for one machine is bad for
another.
Has anyone experimented with the vadvise() system call? It's in SunOS4.1.2 at
least, and probably a lot of other systems.
>There is also a need to make the game driver as independent as possible from
>the mudlib--to some extent, I believe this should also apply to efuns.
I'm forgetting my LPMUD stuff. efuns are builtin functions?
>>Also, most drivers I've seen are monolithic, single-process systems. What
>>about multiple processes? I have my own opinions on this, but I'd like to
>>hear what others have to say.
>Drivers that fully support multiple processes (pre-emption, synchronization,
>blocking, etc) are a ways off, IMHO.
NOT. I've been planning for multiple threads from the beginning. The main
problem with threading is compatibility. I'm currently implementing this with
Sun's Light Weight Process library, but that's not compatible with any other
OS's, and it's been dumped from Solaris 2.0. I have Frank Mueller's
implementation of POSIX threads for Suns, but I can't afford to order the
POSIX standard yet (the only documentation) so I'm not using it. I'm trying to
hide the threads in a module so that in the future it will be more portable.
I think I'll call my mud SunMUD II, after the original LPMUD SunMUD, and
because it probably won't work on anything but a Sun at first.
>In designing such a driver, consideration
>must be given to making the coding of the mudlib simple.
I'm still trying to figure out how to do network connections. I don't like
the LPMUD model where verbs and input are in the parser. But putting the
handling in the player object seems just as ugly. I want my (privileged)
objects to be able to open /dev/network and read player commands from that.
I'm imagining something like this for /init.c:
    file new;

    while(-1 != (new = open("/dev/network")))
    {
        LoadObject("/object/player", "init", new);
        close(new);
    }
and then this in player.c:
    init(file input)
    {
        for(;;)
        {
            /* get some input from `input' */
            /* do something with the line */
        }
        close(input);
        DestroyObject();
    }
Parsing verbs in the player object ain't going to be nice. And my
interpreter has no idea of objects being "inside" other objects. That's
something else I'd like to keep in the "mudlib" and out of the parser.
I want my interpreter to be nothing more than an interpreter.
>LPC coders already
>take for granted the ease by which they can manipulate shared data, files, etc.
Yeh, this won't be a mud for novice wizards for a long long time. It probably
won't be a mud for anyone but me for another year at least.
>Imagine the debugging problems, questions/complaints, unexplained data
>corruption problems, that will come with semaphores, P() & V(), flock(), etc.
flock???? Dang, didn't even think of that!
>So...are you contemplating actually writing the ultimate game driver, or are
>you looking for ideas to nag some driver hackers with? :-)
I'm looking for ideas. I don't think I'll have the ultimate game driver, but
I've sure learned a lot about sockets, grammars, C programming and threads.
--
| Rob Quinn |
| r...@phys.ksu.edu |
| Quin...@KSUVM.BITNET |
Here's something I once wrote up. It's not complete but might give you
the general idea.
-- cut here --
The MudOS driver is the program (written in C in the case of LPmud) which
provides the low-level support that makes the mudlib possible. The driver
does many things including:
0) accepts connections from remote machines (via a communications port) and
attaches those connections to the login object (/adm/login.c on TMI).
1) provides a set of external functions (efuns) that may be called from
within LPC objects.
2) compiles files into a compact internal tokenized form via the new(filename)
(or clone_object(filename)) efun.
3) interprets (executes) objects represented in the tokenized form. The
two main ways in which code gets executed are as follows:
a) the driver calls functions in objects based on input received from
users (via the communications port). The specific functions that get
called depend on what associations the objects of the mud have specified
between player-typed commands and functions (via the
add_action(function_name,command_name) efun). The driver also
calls functions in LPC objects from within certain efuns (such as "init",
"create", "clean_up", etc.).
b) objects can cause the driver to execute code in other objects via
the call_other(object,function_name,args,...) efun. An alternate
form of the call_other efun is object->function_name(args,...).
Ok, i want a driver that i just leave a lot of energy on, like a big bang,
and it'll create its own dimensions and physics and time, then someone
who thinks they're really powerful will create a couple things called
'humans' and plonk another few life forms about, this powerful being will
call itself a 'god' and think it's wonderful, and i can just pat it on its
head and say "Yeah, Right.." while it does all my work for me and creates
a nice little gaming world
~Beebop
Have you ever tried to explain the workings of input_to() in LPmud
to a new wizard?
(funny story, I once was trying to explain input_to() to someone the
"stupid" way. We were both getting really frustrated until he finally
lit up like a spotlight and said, "Oh! You mean it's a nonblocking,
asynchronous version of scanf!" Heh.)
Sulam @ Gnosis
Here .sig, here here, Sulam has a tidbit for you.
>toma...@cs.cmu.edu (Thomas Galen Ault) writes:
>>Suppose you could design the ultimate game driver? What kinds of features
>>would you like to see? These can be either features existing in current
>>game drivers, or features as of yet unimplemented.
>In LPMUD (and its derivatives/cousins), the game driver is a compiler/
>interpreter that runs a game (mudlib). ie there is a distinction here
>which some tiny mud'ers are not familiar with. In LPMUD, the game driver
>has nothing to do with (at least, it shouldn't have) skills, character
>classes, combat, socials, etc. In this respect, it's unclear why you
>crossposted to r.g.m.tiny... For the purposes of discussion, I'll assume
>that was an error (and remove it from the Newsgroups field).
>>Personally, I'd like to see a more modular design done along object-oriented
>>principles. I'd like to be able to plug different modules in and out for
>>different effects. For example, suppose I want to implement a paging-style
>>memory manager to keep the memory use more-or-less constant. I'd like to
>>do this WITHOUT having to reach into all the different places in the
>>gamedriver where memory is allocated.
A worthy hack would be to modularize the 'compiler'. For instance, what if
I wanted to write the lexer & parser to accept a different grammar? LPC could
be replaced with some other variant, or a whole new language altogether.
Lisp, maybe? The point is that modern compilers do something like this
on the backend when generating machine code, and using that idea on a mud
lexer/parser would be nice to see. (When I say modern compilers, I'm
referring to ones like the GNU suite of compilers.)
>Drivers that fully support multiple processes (pre-emption, synchronization,
>blocking, etc) are a ways off, IMHO. In designing such a driver, consideration
>must be given to making the coding of the mudlib simple. LPC coders already
>take for granted the ease by which they can manipulate shared data, files, etc.
>Imagine the debugging problems, questions/complaints, unexplained data
>corruption problems, that will come with semaphores, P() & V(), flock(), etc.
And then when drivers get hacked for parallel architecture machines..gack.
I'm working with a Connection Machine now, trying to port a driver to it. Not
as much fun as I anticipated.
As for who will sit down and write the driver that does _the right thing_,
it depends on who has the lightest load from their professors :) :)
--
jeff wandling <j...@huxley.wwu.edu>
Where will you get the driver and the energy? What will cause the `Bang!'?
>and it'll create its own dimensions and physics and time, then someone
Well, I would suggest that it doesn't create the second law of
thermodynamics in its physics! It could cause a lot of problems if your
little universe is supposed to be a closed system.
>who thinks they're really powerful will create a couple things called
>'humans' and plonk another few life forms about, this powerful being will
>call itself a 'god' and think it's wonderful, and i can just pat it on its
Will this powerful person be a member of the little universe or does he
transcend it? If he is a member then who created him - you or someone
else?
>head and say "Yeah, Right.." while it does all my work for me and creates
>a nice little gaming world
Interesting concept. Are you planning on writing such a driver?
:-)
--------------------------------
.Marty.!
Lost in Space! (or is it Japan?)
<pau...@tai.jkj.sii.co.jp>
Right. Unfortunately, my ideal set of efuns would probably just have the
appearance of being independent of the mudlib. For example, instead of
find_player() and find_living()...merge them into the find_object() efun,
but also taking flags, specifying the nature of the search.
>>Drivers that fully support multiple processes (pre-emption, synchronization,
>>blocking, etc) are a ways off, IMHO.
> NOT. I've been planning for multiple threads from the beginning. The main
By a ways off, I meant becoming mainstream/widespread/popular. Sort of a
prerequisite objective if you're designing the ultimate driver. The difficulty
will be finding a portable implementation.
>problem with threading is compatibility. I'm currently implementing this with
>Sun's Light Weight Process library, but that's not compatible with any other
>OS's, and it's been dumped from Solaris 2.0. I have Frank Mueller's
>implementation of POSIX threads for Suns, but I can't afford to order the
>POSIX standard yet (the only documentation) so I'm not using it. I'm trying to
>hide the threads in a module so that in the future it will be more portable.
> I think I'll call my mud SunMUD II, after the original LPMUD SunMUD, and
>because it probably won't work on anything but a sun at first.
Another problem is that the Sun kernel has to be built with the LWP
configuration enabled. Which at the moment means I can't qualify as a
beta tester. :-(
> Parsing verbs in the player object ain't going to be nice. And my
>interpreter has no idea of objects being "inside" other objects. That's
Don't you mean objects "encapsulated" within other objects? :-)
The notion of inventory doesn't have to be in the parser. The object could
have an array of objects, and functions to manipulate them. This should
pretty much be doable all at the mudlib level.
>something else I'd like to keep in the "mudlib" and out of the parser.
>I want my interpreter to be nothing more than an interpreter.
I think that's the same approach the Sludge developers took.
> I'm looking for ideas. I don't think I'll have the ultimate game driver, but
>I've sure learned a lot about sockets, grammars, C programming and threads.
And who said Mud wasn't educational? :-)
No...but I should probably thank Xurbax for his patience in explaining
things like this to me.
>(funny story, I once was trying to explain input_to() to someone the
>"stupid" way. We were both getting really frustrated until he finally
>lit up like a spotlight and said, "Oh! You mean it's a nonblocking,
>asynchronous version of scanf!" Heh.)
Heh...when you take it upon yourself to introduce newbie wizzes to the
wondrous world of LPC, you sometimes get stuck in stupid-explanation-mode.
:-) And kudos to the MudOS people for their man page on input_to().
Wasn't it neat how the old (circa LPMud 2.4.5) bulletin boards allowed two
users to enter a message at the same time and then intersperse their messages
together?
Why? Why not roll your own? To me anyway, it doesn't make sense to
do OS level tasking, such as LWP or the like. Instead, let the
LPMUD interpreter do the context switching itself. What is needed
then are multiple stacks. A context switch (for the initial
implementation) would then only occur when a program is blocked for
input (a variant of input_to). This is practically the only
use I can see for multitasking in LPMUD anyway (an impractical
use would be to have an infinite loop be split into chunks).
Ie, you could write stuff like:
name = input("What is your name:");
and do that entire login sequence in a single (readable) function.
Since everything else is bounded by the eval limit, it's simpler to
wait for the program to end before starting the next rather than
switch between them. It would be more efficient to use call_outs
rather than a loop with a sleep() in it because in the latter you need
a separate stack for that process. If you added in more general
processes, with time slices, etc, then you suddenly have to worry
about output being interleaved (shouts could show up in the middle of
the who command's output, etc), and synchronization, shared variables,
etc. Shared vars could possibly cause a problem when using input(),
but could be solved by just documenting that this is a bad thing to
do.
To implement this, all that is needed is a method of switching
between interpreter stacks. It is not a good idea to preallocate
a set of them however, since you never know how many people will
have pending inputs. Maybe a small set that can be expanded as
needed. If the stacks are too big, then lots of memory could
possibly get wasted. Even if you used LWP, you still need to
be concerned with saving the old stack (else two processes
could screw up each other). In an input(), you mark the player
as blocked, and switch to the next task, or wait until someone
types a command, etc. (heartbeats and resets could use the same
static process). Then when the input comes in, you put that
task into the ready queue.
Finally, another possible use for OS level tasking would be
to separate execution of commands, etc, with the compiling of
LPC. For instance, if a big file needs compiling, the player
that caused it to load will block while a second process
actually does the parsing. Another player can then execute
commands in the meantime. (on many machines, this compiling
is the most noticeable lag to players.) But this would be
much harder to implement.
--
Darin Johnson
djoh...@ucsd.edu
This is the first time I've ever eaten a patient -- Northern Exposure
>Hmmm.... That one's interesting... I would like to see the "ultimate
>game driver" contain the following:
> 1> A combat and social interaction system like that of a Diku.
> 2> An object and robot set like that of a MUSH.
> 3> The ability to interchange and create new types of items,
> races, languages, monsters, etc. easily (ie, without having
> to alter the source code)
Most of the things you mention have nothing to do with the driver. A
good driver should be flexible, powerful and, err, object orientated I
suppose. Most of this depends on the language. If you make it flexible
enough and nice enough to program in, all of the above things can be
written in it. Sure, it may not be as fast as a Diku... But...
Personally I think hard-wiring too much into the driver tends to make
for a "sameness" in muds, as it is harder to change thingys.
David.
[DDT] Pink fish forever.
--
David Bennett, gu...@uniwa.uwa.oz.au | University Computer Club
Where Pink fish swim backwards. | c/o Guild of Undergraduates
These words I am singing now mean nothing more than meow to an animal - TMBG
Disclaimer: Any spelling mistakes in this article are all entirly my fault. Any grammer errors spotted in this article were put there because I could.
Yeep. Much of the latter will be world-specific....
In that case, there isn't much I can say except for wanting a real
language. (Such muds probably exist; my experience has been very
limited in this regard: DaemonMUCK 2.2+, 2.2.8.5d and my evolution
thereof, and fuzzball.) I occasionally think I should look at moos.
(You might as well skip the letters telling me "yes, moos are the
greatest thing since sliced bread" or "no, moos are terribly misguided
and have it all wrong".)
Everything else depends so much on the world the mud's founders choose
to put in place that I can't give specifics. But I do want
consistency: the world-building should hang together, and the mud
should be true to that world.
der Mouse
: Steve
if i fail to explain it any better, sorry. anyhow, here goes.
in a tiny* there is a single program, this program looks after
the maintenance of the database, the connections of players to and
from the database, and most of the command words within the database.
so you have a set of fixed objects of a limited number of types -
the type being controlled by flags usually that have a very similar
structure, eg a set of attributes (name,lock,etc.) that are always present
and those that can be set at will with user-defined names, special
attributes often having features coded into the game driver.
to alter the game driver, ie change a command, change the object loading
method or set certain attributes to have special features, you must edit
the c-source code, and recompile.
the major difference with lpmuds is that for them there are effectively
TWO of what you are calling "drivers". the first is the driver that
handles connections via the port, memory allocation at a low level,
but mainly is responsible for providing services to the second driver
(the MUD lib).
when you fire the first driver up (hence to be called driver), it will
compile up a specific c-style file (NOT c, written in an object oriented
STYLE of c called LPC) into tokens, which it then stores in memory
using its clever memory handling. this file that is created is then
responsible for loading up other objects.
if you think of the driver as unix, with its security system, and
various system calls (cd,ls,cat,mkdir,etc.) and what sits on top
ie. the MUDlib, to be the shell and other applications you get
a slighty clearer picture.
ok so i connect up to the mud.
the driver calls the mudlib file it first compiled, and finds the
function within it called connect.
this function will ask the driver to compile and load another object
(usually called the player object).
the driver then transfers effective control of the connectee to that
object.
the only words that the driver will recognise from the connectee
are those words that the player object has been programmed to
accept - IN THE MUDlib.
so to change the effect of typing "north", all i have to do is
edit the LPC file called "player.c" and force the driver to compile
it and use that instead.
all other objects are controlled in the same manner.
eg. when i type "north"
the room will have registered the word "north" with the driver as
being a command word for people within that room.
the driver will then call the function within the room that is
related to the command "north".
this command will load in the room to the "north" and request that the
driver move the player object to the other room.
on this move, a function "init" will be called in the next room
that will ensure that the commands within the next room are
initialised correctly.
so what are these objects, and what can you get them to do?
basically anything you want, if you know how.
you can define local variables within objects the same as in any c
code. you can define functions within the file that will allow
other objects access to the variables, or not as the case may be.
there are a large number of functions within the driver that support
objects, as well as having a very nice type set including easy to
use arrays.
eg.
int* fred;
fred = ({10,11,12});
sets up fred as being an array of integers, then gives it a value
of 10,11,12 in order.
arrays can be of ANY type, including the generic type (mixed) which
allows you to have arrays of arrays (the implementation may get
complex, but the theory is easy).
...and mappings (only in the TMI-2 drivers i believe, though i could
be wrong)
eg.
mapping fred;
fred = ([10:"jane", 11:"john", 12:"jim"]);
if(fred[10] == "jane") // true
...and objects, which are pointers to another object currently
existing under the driver's control.
eg.
object* all_inventory(object location);
returns an array of pointers to all the objects currently within
"location"
...and of course, integer and string types (STRINGs, not char*).
eg.
string fred;
fred = "There are "+5+" houses\n";
would change 5 into "5"(as a string), but only if it is an int + a string.
anyway.
so what of skills? levels? fighting?
you define them yourself.
there is also a method of informing the driver that you would like
the driver to call a function within the calling object every second.
the driver will then call this function (heart_beat()) every second until
it is switched off or the object gets destructed.
so you simply get this function to check various variables within
the object to see whether it is offensive and whether there are objects
present to fight.
then this would call the function to attempt to hit the opposing player,
this would check the variables associated with fighting skills
in the "attacking" object and defensive skill variables in
the "defending" object.
ok, so just to recap: the driver handles the interface to and from the
mudlib, with all game commands, skills and everything else defined within
the mudlib via various calls to supporting features within the driver.
hope this was some use.
onto the subject matter now.
i see a huge world (virtual map handler > 25000 rooms) with villages
(based on ascii maps with special hotspots) with taxes, post offices
estate agents, ranged missile fire, bleeding, stunning, elemental damage
on weapons, gods, prayers with chanting, spells with chanting and magic
components, jail cells, libraries, town halls, lord mayors, voting for
guild positions, entirely skill based with no levels, sacrificing
objects for power (lets hear it for tinymud) and above all
DEATH IS FATAL!
sorry, i haven't finished it yet, and probably never will, but it whiles
away those boring hours waiting for lectures.
duncan
>Right. Unfortunately, my ideal set of efuns would probably just have the
>appearance of being independent of the mudlib. For example, instead of
>find_player() and find_living()...merge them into the find_object() efun,
>but also taking flags, specifying the nature of the search.
I don't see the advantage of this. Different functionality should get
a different name. Actually, this approach would eventually lead to
a single efun comprising all other efuns.
The idea of efuns is to define a builtin class which is automatically
inherited by all classes defined in the mudlib. A better idea would be
to split up this root class into a number of builtin classes and
let programmers inherit what they need. This should be a much cleaner
approach.
>>>Drivers that fully support multiple processes (pre-emption, synchronization,
>>>blocking, etc) are a ways off, IMHO.
>> NOT. I've been planning for multiple threads from the beginning. The main
>By a ways off, I meant becoming mainstream/widespread/popular. Sort of a
>prerequisite objective if you're designing the ultimate driver. The difficulty
>will be finding a portable implementation.
>>problem with threading is compatibility. I'm currently implementing this with
>>Sun's Light Weight Process library, but that's not compatible with any other
>>OS's, and it's been dumped from Solaris 2.0. I have Frank Mueller's
>>implementation of POSIX threads for Suns, but I can't afford to order the
>>POSIX standard yet (the only documentation) so I'm not using it. I'm trying to
>>hide the threads in a module so that in the future it will be more portable.
>> I think I'll call my mud SunMUD II, after the original LPMUD SunMUD, and
>>because it probably won't work on anything but a sun at first.
>Another problem is that the Sun kernel has to be built with the LWP
>configuration enabled. Which at the moment means I can't qualify as a
>beta tester. :-(
My, don't you people know what you're up against? A parallel/distributed
mud gives you all kinds of consistency problems. Try some papers on
distributed simulation, like:
Fujimoto, R.M., 1990, Parallel Discrete Event Simulation, CACM, Vol. 33,
No. 10, October 1990, for an overview of problems
and techniques.
Chandy, K.M. and Misra, J., 1981, Asynchronous Distributed Simulation
via a Sequence of Parallel Computations, CACM, Vol. 24, No. 4, April 1981,
for the so-called 'conservative methods'.
Jefferson, D.R., 1985, Virtual Time, ACM Transactions on Programming
Languages and Systems 7:3, 404-425, for the so-called 'optimistic
approach'.
Disregarding this might well lead to effects like people walking through
closed doors, items getting picked up by more than one person, etc.
Semaphores and the like are far too low-level to handle those problems.
And while we're on this subject I might as well add that IMHO, due to
the problems mentioned above, a distributed mud on the internet is
currently an impossibility (well, perhaps not _impossible_, but certainly
not feasible).
>> Parsing verbs in the player object ain't going to be nice. And my
Add a builtin class "grammar" to handle the time-consuming part in
the driver.
>>interpreter has no idea of objects being "inside" other objects. That's
>>something else I'd like to keep in the "mudlib" and out of the parser.
>>I want my interpreter to be nothing more than an interpreter.
Fine, but will it run on anything short of a CRAY?
Reimer Behrends
Basically, this is rubbish.
>And while we're on this subject I might as well add that IMHO due to
>the problems mentioned above, a distirbuted mud on the internet is
>currently an impossiblity (well, perhaps not _impossible_, but certainly
>not feasible).
Been done, actually. It's not *easy*, but if you know what you're
doing, it's certainly possible. Distributed applications aren't that
bad, as long as you design with care, to sidestep the issues that make it
hard. Appropriate atomicity, blah blah blah. The hard part is getting it
to actually go faster on many boxes than it does on one.
You may imagine a long list of references here if you like, but
I don't care to inflate my cause by dropping a bunch of names. However,
see UnterMUD and COOLMUD for two, rather different, approaches to
distributed muds, both of which work.
Andrew
> Reimer Behrends
>
This is something more along the lines of what I have been thinking of
lately. If we do as has been suggested in a few places, and remove
find_object(), then the only way to get a pointer to another object is
to be in the room with it, or to have it give you one. Then most of
the traffic across our net would be transferring object definitions.
Sulam @ Gnosis
I didn't realize .sigs were so flighty.
But by making efuns called find_player and find_living, you are moving the
idea of `player' and `living' into the driver. This is what we don't want.
>Actually, this approach would eventually lead to
>a single efun comprising all other efuns.
No, it will lead to a library of builtin functions, each used for a certain
`idea' of programming.
>The idea of efuns is to define a builtin class which is automatically
>inherited by all classes defined in the mudlib.
Yes. By putting game stuff into the driver, you've made all future things
inherit the game stuff.
Put game stuff at the top of your mudlib and let the rest of your mudlib
inherit it.
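That layering can be sketched in a few lines of Python (purely illustrative; the class and attribute names are invented for this example, not from any real driver): the driver layer knows nothing about the game, and everything game-specific enters at the top of the mudlib and is inherited from there.

```python
class DriverObject:
    """The driver layer: only generic builtins (efun-like operations),
    with no game ideas such as 'player' or 'living' baked in.
    (Sketch with invented names.)"""
    def clone(self):
        # A generic builtin: make another object of my own class.
        return type(self)()

class GameObject(DriverObject):
    """Top of the mudlib: the game stuff lives here, and the rest of
    the mudlib inherits it rather than the driver providing it."""
    def __init__(self):
        self.living = False

class Player(GameObject):
    def __init__(self):
        super().__init__()
        self.living = True
```

In this arrangement, something like find_living() becomes a mudlib-level search over GameObject instances instead of a driver efun.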
>My, don't you people know what you're up against? A parallel/distributed
>mud gives you all kinds of consistency problems.
I think they are solvable problems.
>Try some papers on
>distributed simulation, like:
ftp sites?
>Semaphores and the like are far too low-level to handle those problems.
Why do you say that?
>Fine, but will it run on anything short of a CRAY?
Actually, I'm interested in comparing my current single threaded interpreter
to other game drivers out there.
Could someone try this (or similar) code on another common driver and tell
me what they get?
static int fact(int a) {
    if (a > 1) return a * fact(a - 1); else return 1;
}

static int init(int status)
{
    int c, d, t;

    d = 12;               /* Max fact() that won't overflow long int */
    c = 200; t = time();
    for (; --c; ) {
        if (4.790016e+8 != fact(d))
            { printf("Failure!\n"); break; }
    }
    printf("Elapsed time in seconds:", time() - t, '\n');
    return 0;
}
%time ./c_parser
Elapsed time in seconds:13
13.530u 0.460s 0:14.74 94.9% 0+381k 3+2io 0pf+0w
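For what it's worth, the benchmark translates almost line for line into plain Python. This is a hedged sketch (same loop count, same fact(12) check as above), useful only for sanity-checking the logic locally, since timings across different interpreters and machines aren't directly comparable anyway:

```python
import time

def fact(a):
    # Recursive factorial, as in the driver benchmark above.
    if a > 1:
        return a * fact(a - 1)
    return 1

def bench(loops=200, d=12):
    # d=12 is the largest factorial that fits in a signed 32-bit int.
    t = time.time()
    for _ in range(loops):
        if fact(d) != 479001600:
            print("Failure!")
            break
    return time.time() - t

elapsed = bench()
print("Elapsed time in seconds:", elapsed)
```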
Under MudOS 0.9.15, running the equivalent code takes 0.125 cpu seconds on
a 68040 25Mhz NeXT cube.
gestalt[12] /usr/local/mud/bin/driver -f7 config.basis
MudOS 0.9.15.4
loading config file: /usr/local/mud/etc/config.basis
*Warning: Missing line in config file:
default error message
init_addr_server: connect: Connection refused
]factorial test
]time taken: 125
Shutting down new user conn...
int fact(int a)
{
    if (a > 1) return a * fact(a - 1); else return 1;
}

void test7()
{
    int i, time;
    mapping r, s;
    int c, d, t;

    r = rusage();
    time = r["usertime"];
    d = 12;               /* Max fact() that won't overflow long int */
    for (c = 200; --c; ) {
        if (fact(d) != 479001600)
            { printf("Failure!\n"); break; }
    }
    r = rusage();
    write("time taken: " + (r["usertime"] - time) + "\n");
}
Running that init() function on MudOS, on an HP 9000/827 system (~60 MIPS),
the init() function took 0 seconds realtime. CPU time was
.09 seconds total (u + s).
Yours counted a bit of other time in the setting up of the c_parser
program, since you 'time'd it all (mine was just a timing of the
init() function and its calls of fact()); however, your realtime
in init() is still 13 seconds.
But then again, this all depends on what type of computer you're using.
*shrug* You may be beating the hell out of MudOS... if those CPU times
are for a Commodore 64.
>My, don't you people know what you're up against? A parallel/distributed
>mud gives you all kinds of consistency problems. Try some papers on
>distributed simulation, like:
Argh. Try adding language features like events, locks, conditions,
asynchronous requests, and sleep(). It can be done - in fact, you
can provide quite a nice language for doing it by allowing annotations on
your event handling functions. See my last article for a discussion
of why you'd *want* to (even in the same server), and add to it things
like "goal-directed assistance for NPCs" - i.e. going west is dependent
on the door being open (these can be supported by precisely the
same set of annotations).
All of the papers you cite are relevant to distributed message passing
systems like coolmud - but they don't address the LPmud situation. The
way to do serious distributed LPmudding is to be able to migrate objects
and to ensure that objects in the same room are on the same server -
because 95% of all call-others, etc, are between objects in the same
inventory tree. This just means that we could run a mud across our network
of DECstations (NFS, so code is shared; variables can be pumped down a
socket on object migration), but we wouldn't provide full service to a
mud running a different mudlib (it'd have to create a shell object for
a player that wanders in). There are "obvious" (and efficient)
algorithms for migrating objects, finding moving objects, etc.
>And while we're on this subject I might as well add that IMHO due to
>the problems mentioned above, a distributed mud on the internet is
>currently an impossibility (well, perhaps not _impossible_, but certainly
>not feasible).
I suspect you're just too young to remember the massive flamewars
we've had on this subject :)
Cheers,
Mike.
--
Mike McGaughey AARNET: mm...@bruce.cs.monash.edu.au
"Head stompin', ass kickin', finger licking nastiness"
[game-driver, mudlib, efuns, 'player', 'living', LPC code (?) etc deleted]
Guys, really! ;)
This thread started out with pretty general stuff, but this is techno-LPC
stuff, and hardly deserves the wide cross-posting.
Please check your Newsgroups line before replying on specific issues, thanx :)
--
Gnort @ { DikuII | Unicorn | Alex | Discworld | Igor } gn...@daimi.aau.dk
I tried almost the same code - only r["usertime"] replaced with r[0],
and the type mapping swapped for mixed *, to compensate for the different
rusage return values - on my 486/33 with my current working version of the
gamedriver (there's nothing new for the benchmarked stuff compared to
3.2@22, which is already running on some muds).
I got a time of 80 milliseconds.
Well, this was indeed a bit hazy, considering that a jiffy is 10 ms.
So I raised the loop count to 2000. It repeatedly showed 790 ms.
As a little teaser for the recursion fanatics: the iterative version
(c initialised to 2000 as well) was 420..430 ms.
Amylaar
amy...@mcshh.hanse.de - this site expires on the 1st of march
amy...@cs.tu-berlin.de - mail will be forwarded to the local account
amy...@jwminhh.hanse.de - mail will be read from the 1st of march on... :-)
Joern> gar...@ccwf.cc.utexas.edu (John Garnett) writes:
>Under MudOS 0.9.15, running the equivalent code takes 0.125 cpu seconds on
>a 68040 25Mhz NeXT cube.
>gestalt[12] /usr/local/mud/bin/driver -f7 config.basis
>MudOS 0.9.15.4
>loading config file: /usr/local/mud/etc/config.basis
>*Warning: Missing line in config file:
> default error message
>init_addr_server: connect: Connection refused
>]factorial test
>]time taken: 125
>Shutting down new user conn...
Joern> I tried almost the same code - only r["usertime"] replaced with r[0],
Joern> and type mapping swapped for mixed *, to compensate different rusage
Joern> return values - on my 486/33 with my current working version of the
Joern> gamedriver (There's nothing new for the benchmarked stuff compared to
Joern> 3.2@22 , which is already run on some muds).
Joern> I got a time of 80 milliseconds.
Joern> Well, this was indeed a bit hazy, considering that a jiffy is 10 ms.
Joern> So i raised the loop count to 2000 . It showed repeatedly 790 ms.
I tested BatMUD's driver on two machines with loop count 2000
lancelot Sparcstn IPC 36M SunOS 4.1.3 (UNIX)
palikka Sparcstn 2 52M SunOS 4.1.3 (UNIX)
driver on palikka had been running 11 hours, 80 players online
malloc about 32 megs
driver on lancelot was freshly booted only to make this test
results: palikka 1040, lancelot 1770
Petri
--
--------------------------------------------------------------------------
Petri Virkkula | Internet: Petri.V...@hut.fi
JMT 11 H 168 | pvir...@nic.funet.fi
02150 Espoo | X.400 : /G=Petri/S=Virkkula/O=hut/ADMD=fumail/C=fi/
FINLAND | Voice : +358 0 455 1277
--------------------------------------------------------------------------
I wrote> I tested BatMUD's driver on two machines with loop count 2000
I wrote> lancelot Sparcstn IPC 36M SunOS 4.1.3 (UNIX)
I wrote> palikka Sparcstn 2 52M SunOS 4.1.3 (UNIX)
I wrote> driver on palikka had been running 11 hours, 80 players online
I wrote> malloc about 32 megs
I wrote> driver on lancelot was freshly booted only to make this test
I wrote> results: palikka 1040, lancelot 1770
I made one more test on an HP workstation (9000/705) running
HP-UX 8.07. The result was 1990 (the driver was compiled only for this
test). I made the test again on palikka after a reboot and the result
was 1010. None of the machines had much other activity
(palikka had a load average of 1 when I made the first test; all other
tests were made when the load average was about 0.1-0.5).
What can we learn about this? All these benchmarks are quite
worthless when their purpose is to compare different drivers.
>Well, this was indeed a bit hazy, considering that a jiffy is 10 ms.
>So i raised the loop count to 2000 . It showed repeatedly 790 ms.
>As a little teaser for the recursion fanatics: the iterative version
>( c initialised to 2000 as well ) was 420..430 ms.
Bah! That just means it's time to add tail recursion elimination
to your compiler :)
> But - again, perhaps I am too lazy? It is laziness not to be wanting
> a fully fledged Distributed Global Snapshot algorithm each time
> I want to write something non-trivial? And getting irreproducible
> errors for things trivial? Yes, I am lazy. I think the driver(s)
> should take care of problems caused by making it distributed, lest
> I end up with two thirds of my code taking care of the last third
> to be functional on a distributed system.
this is a consideration, to be sure. however, i found that, in
writing the simple mudlib/database which comes with COOLMUD, i had
to use semaphores (locks) in exactly 2 places: parsing input, and moving
objects. i did also have to check in quite a few places for
references to objects which were on downed servers (for example, when
the room you're in suddenly disappears). a little extra code, made
easier by exception handling. nowhere near 2/3 of the code. putting
this into the language (instead of expecting the gamedriver to handle
it) allows the programmer to take action appropriate to the situation.
> Let's start with a simple example. Adventurer A closes gate G from
> one side. At the same time Dragon D tries to walk through this
> gate. Both is done in different threads.
if you care to, take another look at the COOLMUD distribution and grep
located_obj.cl for:
lock("moveto");
(this is a semaphore, although i didn't know that when i invented it,
since i hadn't taken my concurrency course yet. :)
> Of course, by means of monitors and semaphores and whatever you
> like you'll be able to fix the problems mentioned. However, this
> will lead to unnecessarily complicated code.
i don't think it's unnecessarily complicated code. however, there could
be situations i haven't handled. the advantage of using it in an object-
oriented system is that once you've written the object-moving code once
(for example), all your objects can inherit it.
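That write-once-and-inherit point can be sketched outside of COOLMUD. Here is a hedged Python illustration (class and method names are invented for the example; this is not COOLMUD code): a base class whose check-and-move is a single lock-guarded critical section, inherited by everything that can be moved.

```python
import threading

class Located:
    """Illustrative sketch only: the check-and-move critical section
    is written once in a base class, so every object that inherits it
    gets safe moving for free."""
    def __init__(self):
        self._lock = threading.Lock()   # plays the role of lock("moveto")
        self.environment = None
        self.contents = []

    def move_to(self, dest, expected_env=None):
        # Check and move atomically: with the lock held, a second
        # taker sees the environment has already changed and fails,
        # so an object can't end up "taken" by two players at once.
        with self._lock:
            if expected_env is not None and self.environment is not expected_env:
                return False
            if self.environment is not None:
                self.environment.contents.remove(self)
            dest.contents.append(self)
            self.environment = dest
            return True
```

Two near-simultaneous takers both call move_to(player, expected_env=room); only the first succeeds, which is exactly the race the quoted example worried about.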
> [ description of TIME WARP approach deleted ]
interesting. i had similar thoughts, re: rolling back the database
for incorrect sequencing (a la rollbacks in SQL, although for a
different reason). however, unless i'm misunderstanding, timestamping
would be difficult across the Internet. see, for instance, the
problems encountered by the various metrics used by the routing
protocols on the Inet.
> Even object migration won't cure the problem. Even if we can
> make 99% of all calls local. With Time Warp, the 100th call
> will bring the faster system to a standstill for a short time.
> With conservative methods, the calls may be local. The null-messages
> or the deadlock recovery are not.
hmmm. will it actually bring the whole system to a halt, or just
one thread?
btw, there's an interesting discussion of object migration protocols
over in comp.object.
-- sfw "people are always telling me It Can't Be Done.
that ain't gonna stop me trying."
--
Stephen F. White
sfw...@sciborg.uwaterloo.ca
"I don't even know what reality is." - David Lynch
That's not the question. The question is 'just how much locality does a
mud exhibit?'.
If the answer is 'a lot', then distribution is no problem. If it's very
little, then you may have problems.
You seem to imply that your answer to the question would be 'very little
locality'. This I find hard to swallow, purely on intuitive grounds.
: But I guess by now everybody who thinks he's got something to say
: on this subject is dying to flame me. Thus I had better get on
No flame here. I just think you're being a bit pessimistic.
: with my discussion of the problems implicated by distributed mudding
: to give them food for thought.
:
:
: THE PROBLEM
: ===========
:
: Let's start with a simple example. Adventurer A closes gate G from
: one side. At the same time Dragon D tries to walk through this
: gate. Both is done in different threads. To make things easier we
: assume that a thread can only be interrupted at a function call
: (if we would take even this possibility out, we'd have a single-
: threaded system).
That is one hell of an assumption, and I don't know why you did it. It
doesn't simplify things that much. *shrug*. Let it stand, makes no
diff.
: Currently the dragon stands in room RD and the
: adventurer in room RA. The gate is represented by two different
: objects, GD in room RD, and GA in room RA.
Ahh. Now we see your problem. Faulty implementation. :)
The gate should be one object. Let's proceed for now...
: [ classic example of concurrent pitfall deleted ].
:
: Another example: Two players try to pick up an object almost
: simultaneously. Imagine code like:
: if (ob->environment()==this_player()->environment())
: {
: write("You take it.\n");
: ob->move_to(this_player());
: }
: Now consider what happens if both threads are split up in a way
: that allows the check for both players to succeed, and therefore
: moves the object to both players. However, it will end up with
: only one of them since it doesn't multiply.
This is being silly. (Sorry, you get a flame here.) These are 'straw
man' arguments. You make a silly statement, show that it is silly, and
claim this accomplishes something.
Of COURSE you can write code that doesn't work with threads. You can
write code that doesn't work in ANY language. Big deal.
You want to write correct code, and as you say....
: Of course, by means of monitors and semaphores and whatever you
: like you'll be able to fix the problems mentioned.
Exactly. You program around the problem inherent in input_to()
statements. Why don't you do the same for multi-threaded code??
: However, this
: will lead to unnecessarily complicated code. And it doesn't solve
: the general problem which can be stated as:
:
: How do I avoid event B influence event A if event A happens before
: event B? In other words, how do I avoid situations where time
: flows backwards, defying causality?
:
: There is no way to predict the flow of control and lock all affected
: objects in advance. The sequence of function calls isn't computable,
: not even in a single-threaded system.
You really are being silly. A multi-threaded system requires a
different way of programming. You can't expect single-threaded code to
work. Quick solutions to the problems above...
Picking up an object:

    room->move(ob, me);    /* simple enough.. */

room code: /* Kludgy implementation. Don't bother flaming. :) */

    move(what, where) {
        static semaphore s;
        Wait(s);
        what->move(where);
        Signal(s);
    }
Possibly better way of doing things:

    if (ob->move(me))
        printf("That object is not in this room.");

object code:

    move(dest) {{    /* The {{ is a language feature that makes
                        this routine mutex w.r.t. itself */
        if (!reachable(dest, this_object()))
            return -ENOREACH;
        move_object(dest);
        return 0;
    }}

/* reachable() simply checks that you can reach the object from where */
/* you are. */
Now (I think..) You get onto problems with distributed objects.
(hell, maybe you did before and I didn't notice.. :)
: [ big bunch of stuff dedicated to solving a problem I don't think ]
: [ exists. Please tell me if I am wrong.. ]
I don't understand why you want all this. Why not just write code that
is order independent? Don't depend on a global ordering, only on a
local ordering, and everything is fine.
If I understand right, this is all because of the perceived problem
with the gate right? Try code like..
gate.c:

    Int open;

    Monitor {
        cross(me, where) {
            if (!open) {
                printf("You hit your head on that closed gate and die.");
                me->die();
            } else {
                me->move(where);
                printf("You pass thru the gate");
                /* do groovy room messages */
            }
        }
        close() {
            open = 0;
        }
    }

/* Monitor is a keyword that simply maintains mutex between the
   functions in the enclosure */
Problem solved simply by using local time (i.e. gate time). Nothing
else is important, so why depend on it?
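The Monitor idea above behaves like a per-object mutex. In Python terms, a hedged sketch might look like this (the names are invented for illustration; a Lock stands in for the Monitor keyword):

```python
import threading

class Gate:
    """Sketch of the gate as one object that serializes its own events.
    Whoever acquires the monitor first establishes 'gate time'; nothing
    outside the gate needs a global clock."""
    def __init__(self):
        self._monitor = threading.Lock()   # stands in for Monitor { ... }
        self.open = True

    def cross(self, who):
        with self._monitor:
            if not self.open:
                return who + " hits their head on the closed gate."
            return who + " passes through the gate."

    def close(self):
        with self._monitor:
            self.open = False
```

If the adventurer's close() wins the lock, the dragon's cross() sees a closed gate; if the dragon wins, it gets through first. Either outcome is locally consistent, which is the whole point.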
: DISTRIBUTED MUDS
: ================
: [ description of problems that happen when you use global time ]
:
: But can't we avoid time? Afraid not. If we want causality, we
: need an A-happens-before-B relation. And therefore, we need time.
Sorry. This is just not true. The only thing we need is LOCAL
causality. The only time 'time' is important is for determining a
local ordering.
The key is to pick the right object to handle the ordering.
Enter/exit messages:
let the room handle the order in which people enter and leave a
room. This ensures that everyone in the room sees people enter and
leave in the right order. No need for a global time. Everyone in the
room is on local time.
Shouts:
have a single object to handle global messages. You want to shout, you
do broadcast->shout("hi");. Everyone gets the right ordering. No
probs.
Can you think of any situation where it can't be solved by picking an
object to handle the ordering?
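The shout case can be sketched in a few lines (hedged Python illustration, invented names): one object owns the ordering, so every listener sees the same sequence without any notion of global time.

```python
import threading

class Broadcast:
    """One object owns the ordering: every shout is delivered under a
    lock, so all listeners observe the same sequence. Local time only;
    no global clock is needed. (Illustrative sketch, invented names.)"""
    def __init__(self):
        self._lock = threading.Lock()
        self.listeners = []   # each listener is just an inbox list here

    def shout(self, who, msg):
        with self._lock:
            for inbox in self.listeners:
                inbox.append((who, msg))
```

Whatever order concurrent shouts acquire the lock in becomes *the* order, for everyone.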
: Reimer Behrends
Michael
[stuff about benchmarks deleted]
> What can we learn about this? All these benchmarks are quite
> worthless when their purpose is to compare different drivers.
uh.....why? Upon what basis do you make the above statement? Are you in the
"we shouldn't compare different drivers because they're different" camp?
In my experience the people in that camp are usually the ones with the slower
drivers.
In my opinion, comparisons are useful -- they (like all benchmarks) should be
taken with a grain of salt, but I would definitely not call them "quite
worthless". I might go so far as to say that for a MUD driver for general use
(running an average sized game), features are more important than raw speed,
but: features combined with speed benchmarks will enable a good decision.
Dwayne (Jacques) Fontenot
> This handles small latencies, doesn't handle (gracefully)
>long latencies,
Agreed. If you read Jefferson's paper you'll find he relies on
'time locality' a lot like a virtual memory system relies on
'space locality'.
>and ignores the cases in which processes/channels
>completely fail.
Yes, all those methods rely entirely on a safe system where
messages don't get lost. There is, however, a paper on
computing GVT (can't conjure up the exact reference now)
where the algorithm presented can also be used to track down
lost messages.
>(It is hard to back out the fact that you've already
>printed the message 'A dragon walks through the gate', eh?)
As I wrote in my previous posting, I/O is handled differently
from computations that change only the internal state. In fact,
no messages will be printed unless the system is sure it won't be
rolled back beyond the timestamp of the event that did the output.
>>CHANDY-MISRA
>>============
>>
>>The Chandy-Misra family of methods comprises, contrary to Time Warp,
>>'conservative' mechanisms - conservative, because they execute
>>a call only if they know there won't be any other calls which
>>can arrive later and violate the correct order of execution.
>>
> This is kind of interesting, but with slow channels, it seems
>to slow everything down?
Yes. Time Warp is supposed to be faster than those methods. However,
this depends heavily on the hardware and the implementation.
>And also doesn't cope with the fact that
>things break. When one of the channels attached to you dies, you have
>to time out, and hope the fellow on the other end didn't send you a
>message you didn't get etc etc. All, loosely speaking, solvable
>problems, but you're simply stuck with the fact that you're going to
>be doing the wrong thing sometimes.
Quite right. This is a major problem. But I never claimed a distributed
mud would be easy. In fact, I said more or less the opposite.
>>DISTRIBUTED MUDS
>>================
>>
>[Stuff deleted]
>>Even object migration won't cure the problem. Even if we can
>>make 99% of all calls local.
> It seems viable to allow migration, and force 100% of calls
>to be local. This is the UnterMUD approach; it provides a very weak
>form of distributed MUD that works very well on the Internet.
I don't know UnterMUD. Yet, for any system that is Turing-complete
(at least as far as a maximum evalcost constraint permits), I doubt
that it will work well in all but trivial cases. I mean, simply
walking a linked list of objects (like a list of all users) would
move all the objects in this list to a single server.
> I speculate that a stronger form of distribution is Not
>Possible across the internet as it exists today, which is roughly
>the point I was trying to make.
> I'd be fascinated to know what Behrends thinks *will* work.
Oh, quite a lot of things WILL work. But I think none of them
will be a) easy to implement and b) sufficiently fast. Well, b)
you can probably have if you take Michael O'Reilly's approach and
wrap everything up in several layers of semaphores. ;-)
Reimer Behrends
Dwayne> In article <PETRI.VIRKKULA...@lesti.hut.fi> Petri.V...@hut.fi (Petri Virkkula) writes:
Dwayne> [stuff about benchmarks deleted]
> What can we learn about this? All these benchmarks are quite
> worthless when their purpose is to compare different drivers.
Dwayne> uh.....why? Upon what basis do you make the above statement? Are you in the
Dwayne> "we shouldn't compare different drivers because they're different" camp?
No, because they were run on different machines with different
operating systems. The numbers that I posted should have shown my
point.
Dwayne> In my experience the people in that camp are usually the ones with the slower
Dwayne> drivers.
That sounds like you want to start a flamewar ;) I have never
claimed (somebody else might have done it) that BatMUD has the
fastest driver in the world. And these numbers show that BatMUD's
driver is faster than MudOS ;-) but whether that is the truth, you
can't say based on this kind of benchmark.
Dwayne> In my opinion, comparisons are useful -- they (like all benchmarks) should be
Dwayne> taken with a grain of salt, but I would definitely not call them "quite
Dwayne> worthless". I might go so far as to say that for a MUD driver for general use
Dwayne> (running an average sized game), features are more important than raw speed,
Dwayne> but: features combined with speed benchmarks will enable a good decision.
I agree with that, but what I was trying to say was that those
benchmarks we did don't tell the absolute truth as long as they
are run under different environments. See the numbers I
posted: 1010, 1770 and 1990. Those should tell how much the
speed depends on the machine (btw, that 1990 was with a
debugging mudlib containing only a login object with 3 commands
and no password checking etc.; the gamedriver itself was the same
in all cases).
>This is being silly. (Sorry, you get a flame here). These are 'straw
>man' arguments. You make a silly statement, show that it is silly, and
>claim this accomplishes something.
>Of COURSE you can write code that doesn't work with threads. You can
>write code that doesn't work in ANY language. Big deal.
Except, of course, PERL, which by definition works no matter what you stick
in it. ;)
>: However, this
>: will lead to unnecessarily complicated code. And it doesn't solve
>: the general problem which can be stated as:
>: How do I avoid event B influence event A if event A happens before
>: event B? In other words, how do I avoid situations where time
>: flows backwards, defying causality?
>: There is no way to predict the flow of control and lock all affected
>: objects in advance. The sequence of function calls isn't computable,
>: not even in a single-threaded system.
This seems to me to imply no control over the threading at all. Let me
offer, as an example, the case of cooperative threads. In a cooperatively
threaded system you do indeed know when a switch will occur, and can
therefore follow the thread of control, preventing, for example, attempts
to access a single data structure by two threads.
In fact, this inherently works pretty well with the line-based and
server-oriented concept of a mud. Each thread maintains the state
information automatically, each thread gets its turn and each completes
whatever it has to do within the given time slice. In most cases, you're
probably not likely to have a mandelbrot thread running that is sucking up
more CPU time than you'd like and which needs to be made to run more
frequently. In my own threading experiments, I had each thread process
input, display output and then release before checking for additional
input. Thus, the player types close gate and the dragon attempts to move
through the gate (in essence input from an NPC). If the player's input is
processed first the gate is closed first, if the NPC's input is processed
first, the dragon arrives first. In either case, the result is clear.
Effectively, this scheduling is very similar to the typical round robin
scheduling of a single-thread server. So where do the advantages lie?
Well, in such things as not needing to maintain state information. Thus
one person can be entering the game, while another sits at the edit menu,
another at the bulletin board, another at a card game, etc., with no need
to keep messy state information floating around and building huge switch
statements that break up input based upon the current state, etc. This
eliminates things like the saving of a finished board message under a Diku
(under a single thread a callback could be used to perform the same sort of
thing, but this results in a more complicated structure, more complicated
code, etc., and multi-threading just makes it all the easier).
Obviously a semaphore system is also possible, but this strikes me as
typically unnecessary, given the sort of concurrency you'll find present in
a mud.
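A toy version of that cooperative model can be written with Python generators (a hedged sketch with invented names, not real mud code): each "thread" gives up control only at its own chosen yield points, so shared state needs no locks, and the close-gate / go-west race simply resolves in whichever order the scheduler reaches the two threads.

```python
def player(name, commands, world):
    # A cooperative "thread": it only loses control at yield, so
    # reading and updating shared state between yields is atomic
    # without any semaphores.
    for cmd in commands:
        world.append(name + ": " + cmd)
        yield   # explicit context-switch point

def run(threads):
    # Round-robin scheduler, much like a single-threaded server loop.
    done = object()
    while threads:
        threads = [t for t in threads if next(t, done) is not done]

world = []
run([player("A", ["close gate", "look"], world),
     player("B", ["go west"], world)])
print(world)
```

Whichever player's input is processed first "happens" first; in either case the result is well defined, which is exactly the point being made above.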
>You really are being silly. A multi-threaded system required a
>different way of programing. You can't expect single threaded code to
>work. Quick soln's to problems above...
Actually, you can, to the degree that I mentioned above. You throw in
some thread code, you turn the big massive state switch thingy into a thread
routine (which will make it much cleaner) and then appropriately place your
releasing of the threads, etc. It's far less work than a full semaphore/
monitor system, and if the concurrency of multiple threads isn't needed (say
on a multi-processing system where you want to allocate more than one
processor at once), I doubt the efficiency is degraded. In fact, I would
estimate that efficiency is increased, as you are determining the most
effective places to perform context switching. Are we really worried about
time sharing? I mean, most muds are time oriented themselves, thus a player
can only perform activities so fast with regard to his or her fellow players,
etc. The least of my worries is stopping a player's thread because he or she
has taken up too much time. If all of the players have input on a given time
increment, I want to pick up all of the input before picking up one of the
player's secondary input, etc.
>Now (I think..) You get onto problems with distributed objects.
>(hell, maybe you did before and I didn't notice.. :)
Well, again, the above doesn't really apply to distributed or multi-processor
models. Although, for a distributed system it just seems more a matter of
agreeing how to share data between the two (that is, who should maintain
control over the gate and should process requests to open/close it, etc.).
Thus this model could probably hold up over a distributed case. Of course,
you could treat a multi-processor case as a distributed case in a sense, too.
I wouldn't say semaphores are a lot of work, as much as perhaps I might
suggest that debugging semaphores is a lot of work (i.e., if you forget to
protect a data item it may show up as a pretty strange error, and finding the
cause of the problem might be less than trivial in some cases).
>: But can't we avoid time? Afraid not. If we want causality, we
>: need an A-happens-before-B relation. And therefore, we need time.
>Sorry. This is just not true. The only thing we need is LOCAL
>causality. The only time 'time' is important, is for determining a
>local ordering.
This is sort of what I meant above, as well. If I want to close the gate and
I'm an object from server A, and server B owns the gate as well as the dragon
trying to pass through it, then I simply pass my request to close the gate to
server B and allow server B to determine which event is received first. Now,
it may be that I actually sent my command to close the gate before the dragon
decided to move, and network lag allows the dragon to move first. This isn't
much different from having net lag delay the processing of your input, really.
It's going to happen. Your only other choice is to delay server B until
some sort of ack or nack is available from server A, to simply tell server B
whether or not server A did anything to it. Thus you end up with the
slowest server/connection setting the maximum speed. This isn't really a
viable solution.
It would be interesting to see some sort of protocol evolve like NTP,
where the muds work on remaining synchronous as far as game time is
concerned (thus, although each one has its own local timing [which should
already be consistent between servers, to a degree], if say a server was late
for a tick, the other muds might try to adjust slightly to compensate). All
this tries to assure is that it's the same date/hour on every mud, everyone
has healed the same amounts within the same periods of time, etc. Of
course, radical differences in game time should probably be accounted for by
a simple updating to the farthest advanced period, so that the speed of the
servers, again, doesn't get slowed down to the slowest speed.
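The adjust-or-jump rule described above could be sketched like this (a hedged Python illustration; the function name, the threshold, and the halfway-nudge policy are all assumptions invented for the example, not part of any real protocol):

```python
def resync(local_tick, peer_ticks):
    """Toy game-time rule: never run time backwards; if a peer is
    radically ahead, jump straight to the most advanced tick;
    otherwise nudge partway toward it, NTP-style."""
    most_advanced = max([local_tick] + peer_ticks)
    if most_advanced - local_tick > 10:   # "radical difference" threshold (assumed)
        return most_advanced              # update to the farthest advanced period
    # Small drift: gradual correction, so no server is dragged down
    # to the slowest peer's speed.
    return local_tick + (most_advanced - local_tick) // 2
```

A server 50 ticks behind jumps forward immediately; one 5 ticks behind closes half the gap per resync, and a server that is already ahead of its peers stays where it is.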
>Can you think of any situation where it can't be solved by picking a
>object to handle the ordering??
Or basically, allow the locality of the object to determine the ordering of
events.
--
Shawn L. Baird (Scarrow) | "By all means, take the moral high ground --
bai...@ursula.ee.pdx.edu | all that heavenly backlighting makes you a
-------------------------| much easier target." --Solomon Short
tub...@cs.tu-berlin.de (TUB Multi User Domain) writes:
>amol...@nmsu.edu (Andrew Molitor) writes:
>>In article <1mrmes$m...@news.cs.tu-berlin.de> r_be...@informatik.uni-kl.de writes:
>>>CHANDY-MISRA
>>>============
>>>
>>>The Chandy-Misra family of methods comprises, contrary to Time Warp,
>>>'conservative' mechanisms - conservative, because they execute
>>>a call only if they know there won't be any other calls which
>>>can arrive later and violate the correct order of execution.
>>>
>> This is kind of interesting, but with slow channels, it seems
>>to slow everything down?
>Yes. Time Warp is supposed to be faster than those methods. However,
>this depends heavily on the hardware and the implementation.
Firstly, the conservative PDES mechanisms that were introduced by Chandy &
Misra (and independently by Bryant) are only usable for a set of physical
processes (which are simulated by a set of logical processes) in which
a _static specification_ of the communication links is given. In other words,
the topology of the network to be simulated must be known at load time. I
can't imagine this will be the case in distributed muds.
Secondly: yes, it depends heavily on the kind of simulation and the
hardware whether or not conservative mechanisms are slower than optimistic
ones. In some cases, using knowledge of the topology of the network and other
techniques (e.g. techniques to reduce the number of Null messages),
conservative mechanisms can outperform optimistic ones. The latter may suffer
from great roll-back overheads in cases where time locality is hard to
obtain (and this _certainly_ is a huge problem!).
/Andy
--
Andy Pimentel
University of Amsterdam "If the facts don't fit the theory,
Dept. of Computer Science change the facts."
Email: pime...@fwi.uva.nl Albert Einstein.
Wow, I really suck!
>Well, this was indeed a bit hazy, considering that a jiffy is 10 ms.
>So i raised the loop count to 2000 . It showed repeatedly 790 ms.
I'm impressed. Time to optimize.....
Has anyone taken a close look at AT&T's Concurrent C++? It's a licensed
product, available only to our CIS department here so I stopped using it, but
I seem to recall reading that it had a lot of network support. When you create
a thread, you can specify if you want it on the local processor, an alternate
processor on the same machine, or an alternate machine. After that you treat
the (possibly remote) thread just like any other thread and let the runtime
library take care of the internals and message passing.
At least that's what I recall. Any mud'ers out there used it?
Heh. Well, having implemented most of a server before I got
sidetracked [:(], a) isn't too difficult to achieve. Basically, if you
throw out LPmud and start from scratch with a true OO system, and
don't make silly assumptions, it all pretty much falls out. The
hardest thing I found was getting the damned language compiler to
work! (Optimizing a strictly typed language is a bitch.)
: Reimer Behrends
Michael
While co-operative threads are very useful, they aren't quite so
useful in muds. The main problem is that because you would (ideally)
have large code reuse, you can't guarantee that none of your calls
will block. Blocking should automatically result in a context switch.
If you call something, and it blocks, you get a context switch you
hadn't planned for.
While you can program around this, it gives rise to race conditions in
poor code, and is basically a hassle. Furthermore, if someone changes
the base code, your code will need to change too. This is a major
no-no.
If you write your code so that it doesn't rely on lower level code not
switching, then you basically have everything that you need for
pre-emptive threading.
: >You really are being silly. A multi-threaded system required a
: >different way of programing. You can't expect single threaded code to
: >work. Quick soln's to problems above...
:
: Actually, you can, to the degree that I mentioned above. You throw in
: some thread code, you turn the big massive state switch thingy into a thread
: routine (which will make it much cleaner) and then appropriately place your
: releasing of the threads, etc. It's far less work than a full semaphore/
: monitor system, and if the concurrency of multiple threads isn't needed (say
: on a multi-processing system where you want to allocate more than one
: processor at once), I doubt the efficiency is degraded.
There is very little overhead involved with semaphores/monitors. Much
of this would be driver supported with only 1 or 2 lines of C at
various critical places.
: In fact, I would
: estimate that efficiency is increased, as you are determining the most
: effective places to perform context switching.
This is very unlikely. You context switch under 2 instances.
1) You would block. The pre-emptive scheduler will catch this anyway.
2) You feel you are using too much time. Problem is you can't
actually measure this, so you have to guess. The pre-emptive scheduler
doesn't have to guess.
: Are we really worried about
: time sharing?
Well, most people would be, yes.
The main problem is that while much of the code only wakes up for a
short time, there is the occasional piece that needs a lot of
low-priority CPU time. LPmuds are particularly bad at handling this.
: I mean, most muds are time oriented themselves, thus a player
: can only perform activities so fast with regard to his or her fellow players,
: etc. The least of my worries is stopping a player's thread because he or she
: has taken up too much time. If all of the players have input on a given time
: increment, I want to pick up all of the input before picking up one of the
: player's secondary input, etc.
I really don't know what you are getting at here.
: >Now (I think..) You get onto problems with distributed objects.
: >(hell, maybe you did before and I didn't notice.. :)
:
: Well, again, the above doesn't really apply to distributed or multi-processor
: models.
That's true, and that is one reason why it isn't under consideration.
We were discussing distributed muds. :)
: Although, for a distributed system it just seems more a manner of
: agreeing how to share data between the two (that is, who should maintain
: control over the gate and should process requests to open/close it,
: etc.).
This is like saying 'Swimming is just like moving your arms and legs
to push yourself through the water' :). Says nothing.
: I wouldn't say semaphores are a lot of work, as much as perhaps I might
: suggest that debugging semaphores is a lot of work (i.e., if you forget to
: protect a data item it may show up as a pretty strange error, and finding the
: cause of the problem might be less than trivial in some cases).
Debugging is never easy. Concurrent debugging is even worse. This is
one reason why I am going the strictly typed way of things. The only
thing you can do is be careful how you write your code. Write
defensively. Good practice in any case.
: >Sorry. This is just not true. The only thing we need is LOCAL
: >casuality. The only time 'time' is important, is for determining a
: >local ordering..
:
: This is sort of what I meant above, as well. If I want to close the gate and
: I'm an object from server A and server B owns the gate as well as the dragon
: trying to pass through it, then simply pass my request to open the gate to
: server B and allow server B to determine which event is received first.
This is a little jumbled. If the player is on A, and the gate is on B, then A
does an RPC or passes a method (pick your paradigm) to B. B doesn't
decide which to do next; it simply processes them in the order it gets
them.
: It's going to happen. Your only other choice is to delay server B until
: some sort of ack or nack is available from server A to simpy tell server B
: whether or not server A did anything to it. Thus you end up with the
: slowest server/connection becomes the maximum speed. This isn't really a
: viable solution.
This is not even a solution. You want every server to broadcast a
message asking who wants to mod an object before you mod it!?
: It would be interesting to see some sort of protocol evolve like NTP,
: where the muds work on remaining synchronous as far as game time is
: concerned (thus, although each one has its own local timing [which should
: already be consitant between servers, to a degree], if say a server was late
: for a tick, the other muds might try to adjust slightly to compensate). All
: this tries to assure is that it's the same date/hour on every mud, everyone
: has healed the same amounts within the same periods of time, etc. Of
: course, radical differences in game time should probably be accounted for by
: a simple updating to the farthest advanced period, so that the speed of the
: servers, again, doesn't get slowed down to the slowest speed.
Hmm. Maybe I am not making myself clear. There is absolutely NO point
in getting servers to operate on the same time. Time moves at the same
speed for every server anyway. (A second is still almost exactly the
same length in London as in New York, after all!) How would it make a
difference if server A was 8 seconds ahead of server B? Everything
works on time deltas anyway. And the time delta is going to be the
same no matter what server it is on.
: >Can you think of any situation where it can't be solved by picking a
: >object to handle the ordering??
:
: Or basically, allow the locality of the object to determine the ordering of
: events.
Yup.
Michael
>: >: There is no way to predict the flow of control and lock all affected
>: >: objects in advance. The sequence of function calls isn't computable,
>: >: not even in a single-threaded system.
>: This seems to me to imply no control over the threading at all. Let me
>: offer, as an example, the case of cooperative threads.
>While co-operative threads are very useful, they aren't quite so
>useful in muds. The main problem is that because your would (ideally)
>be have large co-reuse, you can't guarentee that none of your calls
>will block. Blocking should automatically result in a context switch.
>If you call something, and it blocks, you get a context switch you
>hadn't planned for.
Seems like people are still thinking of one cpu divided among many threads.
Remember on a modern multiprocessing machine there could be two or more cpu's
so that you can have two threads running at the same time. Best/worst case
is that you have one CPU per thread, or one machine on the net per thread.
When I read some of the posts it seems like the people are saying this could
never work without realizing there's already a large, established, working
system that has faced all of these problems before. It's called UNIX. When I
saw the example of the player and the dragon both trying to access the same
door I thought about file locking. Since when was exclusive access to a file hard to
implement? Yeh yeh, I know about the problems with locking files over NFS. It's
still not impossible.
Ever played netrek? That's one server, and two processes per player. For the
average game that's 33 processes, all talking together with shared memory
segments and sockets.
Ever used X Windows? I have about 20 windows up now, and each could be doing
its own thing, all writing to the display at the same time. Isn't the windowing
system for Plan 9 even more distributed?
You're telling me they can do it and we can't? Get real.
>This is very unlikely. You context switch under 2 instances.
Or never. There's a difference between blocking and context switching. I think
the idea of a context switch is too low level and we shouldn't be thinking
about where they might happen.
>1) You would block. The pre-emptive scheduler will catch this anyway.
Both Sun's LWP library and Concurrent C++ take care of this internally in
their libraries. It's not something the programmer has to worry too much about.
You can give threads priorities to help determine which gets the CPU time when
there's not enough to go around. I imagine it's the same or similar in other
languages like Ada.
>2) You feel you are useing too much time. Problem is you can't
>actually measure this, so you have to guess. The pre-emptive scheduler
>doesn't have to guess.
Yes, you might want to be ``nice'', just in case there isn't one cpu
per thread. I imagine for instance that I might do a yield() after handling
each line of player input, or lower the priority of a user's thread if I know
they are about to do some massive IO like a file upload.
>: >Now (I think..) You get onto problems with distributed objects.
There's no reason to ever know the object is ``distributed''. When /bin/cat
writes to stdout does it stop to wonder if stdout is a file, a named pipe,
a remote file over NFS, or a network socket setup by telnet/rlogin? A good
operating system abstracts this. A good concurrent/multiprocessing library
will too.
>Thats true, and that is one reason why it isn't under consideration.
>We were discussing distributed muds. :)
No, I started the conversation with multiprocessing and threads actually and
was promptly told it was near impossible.
But I think they are nearly the same problem.
>Debugging is never easy. Concurrent debugging is even worse.
Yeh! What he/she said.
>Hmm. Maybe I am not makeing myself clear. This is obsolutely NO point
>in getting servers to operate on the same time.
Ever been compiling a package on two different machines on an NFS disk? It
can really confuse Make when the write times on the files are for a future
date.
Simple (not best) solution: Don't distribute your clocks.
>Time moves at the same speed for every server anyway.
Make one master clock.
>> Make one master clock.
>I disagree. To my knowledge muds are _not_ time driven, but event driven.
>A synchronous approach will give poor performance. For that reason a PDES
>algorithm should be used (probably an optimistic one, in order to have the
>possibility to dynamically create new processes).
It really depends on the nature of your design. In a combat MUD, for example,
there are many time based events (healing ticks, combat ticks, mobile activity
ticks, etc.), which need to be somewhat synchronized between machines. But,
I think you'd get more efficiency if you used separate clocks and they had
some sort of way of alleviating creeping disparities.
Umm. LPmud has NO blocking calls. It is just not possible for LPC to
block. It has to be this way as it is a single threaded driver.......
The whole point of going multi-threaded is so that you can block, and
so that you don't need to explicitly preserve state.
: Thus, your typical MUD
: server is already non-blocking. The steps to continue to assure non-
: blocking I/O are quite trivial and do not degrade performance as much as you
: would have us believe.
Sorry. I must have missed something. Did I talk about performance??
: Typically there are only about two places you could
: block. One, disk I/O to and from the server's database. You have a couple
: of options here ... many MUDs simply make the assumption that any blocking
: will result in a trivial delay and simply ignore the fact that such a call
: will/may block. Your other choice is to perform asynchronous disk I/O. In
: fact, with multiple threads this can be even easier, as you can craft a
: thread designed to server your asynchronous disk I/O requests for you.
There's a bit of a problem here... Most unices won't do async disk
I/O.
: As far as player input goes, you typically drive it on a timed event
: anyway, since you only want to process commands at a certain rate
: (disallowing some people to issue commands more quickly just because
: they can flood your server). Again, the solution is trivial.
This isn't quite correct. While you want to limit commands/second, you
DON'T want to do it by executing commands on second boundaries or
some other silly method like that. You process a command as soon as you
get it, UNLESS the last command was received less than a second ago.
Hmm. You missed a few things that can block.
a) network I/O. Unlike disk I/O, networks can and frequently do return
EAGAIN (or EWOULDBLOCK for those BSD ppl). This SHOULD result in a
context switch in a multi-threaded environment. Or do you want your
user level code to be doing..
while (send_to_player() == EAGAIN) pause();
??
b) Any sort of sleep() or user requested pause.
c) Waiting for user input (You seemed to skim over that above). This
should result in a context switch only if it would block. I.e if there
is no input waiting.
: >While you can program around this, it gives rise to race conditions in
: >poor code, and is basically a hassle. Furthermore, if someone changes
: >the base code, your code will need to change too. This is a major
: >no-no.
:
: This is completely erroneous. The disk I/O and connection I/O routines
: should be hidden at the lowest level anyway.
Fine. They are hidden at the lowest levels.
: I perform context switches
: only in the lowest level routines, and the higher level routines have no
: concept of where a context switch will occur.
If this is the case, then you might as well have preemptive threading.
Am I missing something? If it doesn't know where a context switch will
occur, you might as well be preemptive.
: >If you write your code so that is doesn't rely on lower level code not
: >switching, then you basically have everything that you need for
: >pre-emptive threading.
:
: When will you write upper level code that is completely independant of the
: fact that context switching is taking place?
Whenever you know that nothing else will be using this data structure
at the same time as you are.
I.e. any access to object local variables (i.e. the majority of the
code for rooms,player object et al).
The only code that needs to be aware is code that is doing object
external accesses.
: You talk about semaphores and
: monitors and at the same time act displeased because I have some idea that
: threading is taking place at a lower level.
I am being stupid again. What are you trying to get across here?
: I suppose knowing when and
: where the context switches will take place is a bad idea?
IMHO, yes.
My one major case for pre-emptive scheduling is that you can (almost)
guarantee that the entire mud won't hang. With co-operative, a loop
like 'for (;;);' will stop every player dead.
LPmud kludges around this by breaking if too many instructions are
run at once. Would you do this too? Witness Shattered's problems with
this. (Note that it doesn't have to be on purpose. Any code that
accidentally degenerates to the above will kill the mud.)
: Or would you
: prefer me to label the context switchable routines something like:
: get_io_and_a_context_switch_will_occur? I submit to you that the low level
: code may change significantly, but the locations of the context switches
: will not change as it is fundamental to the method I proposed.
True enough. You just say that any function call may result in a
context switch and you are safe.
: >There is very little overhead involved with semaphores/monitors. Much
: >of this would be driver supported with only 1 or 2 lines of C at
: >various critical places.
:
: Hmmm, a lot can be done in a line of C, as witness the obfuscated C code
: contests.
Heh.
: But seriously, semaphores/monitors also add a large degree of
: design/debug time. Something that may not even be neccesary to take. If
: I thought context switching over some sort of quantized, prioritized system
: was needed, I'd use it. Personally, the concept that something can be
: actually thought out in advance, rather than just always blindly using the
: most generic form, can be okay.
Co-operative:
 + easier to program in, if not familiar with concurrent
 programming.
 - Have to have some recovery mechanism for infinite loops.
 - Little dynamic scheduling available.
 - Hard to have CPU quotas.
Pre-emptive:
 - more difficult to program in.
 + No need to guard against infinite loops (to the extent that
 they aren't immediately fatal).
 + Can have dynamic scheduling.
 + Easier to implement hard + soft CPU quotas. (Don't know if
 this is important to you. It would be for me. I intend
 to give players the ability to write programs for their own
 use.)
You weigh it up. Feel free to add all the ones I missed, pls.
: >: In fact, I would
: >: estimate that efficiency is increased, as you are determining the most
: >: effective places to perform context switching.
:
: >This is very unlikely. You context switch under 2 instances.
: >1) You would block. The pre-emptive scheduler will catch this anyway.
: >2) You feel you are useing too much time. Problem is you can't
: >actually measure this, so you have to guess. The pre-emptive scheduler
: >doesn't have to guess.
:
: Wrongo. Assuming your threads aren't in the kernel itself, you run into
: blocking problems either way.
Huh? The kernel returns EAGAIN (or EWOULDBLOCK). This means that it
would block. This means you need to hang around until it's ready
before you try again.
choices:a) sleep on select().
b) context switch to something that doesn't need that (blocked)
service.
a) is obviously not right. So you use b). b) is done at the user
level. What is so bad? The only difference between co-operative and
preemptive is that with co-operative your mud level code needs to
check. With pre-emptive, your driver level code can handle it
invisibly.
: As far as using too much time, no, this
: isn't a problem. Look at the nature of the server. If you want to assure
: that player B is processed after player A and before player A's new
: request, then time scheduling is _not_ important.
I DON'T want to assure that B is processed between A's requests. What if B
goes and asks for a dynamic map of the entire mud? What if B's command
resolves to an infinite loop? Do you want everyone to stop and wait
for B???
: >: Are we really worried about
: >: time sharing?
: >Well, most people would be, yes.
:
: Most people would probably take an open look at what the server was designed
: to do before making blanket statements based off of some idealogical stand-
: point as to how the universe should work.
The server was designed to be multi-user, yes? This means that you
probably want some degree of equity and sharing between users? This
means that time-sharing is a very viable way to go. Care to try and
illustrate the viewpoint that time-sharing is NOT the way to go??
: >The main problem is that while much of the code only wakes up for a
: >short time, there is the occasional piece that needs a lot of
: >low-priority CPU time. LPmuds are particulary bad at handleing this.
:
: Well, surprise surprise ... I'm not talking about LPmud. This isn't the sort
: of problem that should be solved by building off of an existing server as
: much as one written from scratch.
I agree. LPmud was an example. I sure wouldn't be writing a
multi-threaded mud to be an LPmud clone.
: In a time synchronized environment it
: doesn't really matter. It's often better to lose a tick and make sure
: everyone has finished what they were supposed to accomplish in a tick than to
: delay one to the next tick. If you can't process everything you need to in
: each thread within the space of one tick, your main solution is going to have
: to be rewriting that thread or altering your tick lengths.
So if I run a command that involves a LOT of processing, you think
that everyone else should wait until I have finished??
: Perhaps you need to think about what the server is supposed to accomplish
: before you start thinking about what will and will not be effective in
: solving your dilemna.
How about you state what YOU think the server is trying to do?
: Well, most of what you were discussing doesn't have any direct relevance
: either, so lets just say you could've fooled me. If you want to discuss
: distribution, fine, but distribution concepts really aren't dependant on
: how to properly semaphore your code. A distributed server using the
: threaded system I mentioned would be no more difficult than a distributed
: server in any other sense.
Are you really saying what I think you are? That you can
co-operatively thread a multi-processing program? The mind boggles.
The whole idea of multi-processing implies that you have concurrent
threads (which is what pre-emptive threading simulates).
: Although what I said doesn't apply, it didn't
: apply in the same manner the previous discussion didn't apply.
Now THAT is a classic piece of writing. :)
: >: Although, for a distributed system it just seems more a manner of
: >: agreeing how to share data between the two (that is, who should maintain
: >: control over the gate and should process requests to open/close it,
: >: etc.).
: >This is like saying 'Swimming is just like moveing your arms and legs
: >to push yourself through the water' :). Says nothing.
:
: Well, all I was referring to here, was the above mish-mash (which I notice
: you didn't bother to quote),
Apologies. The post was long enough already.
: which made it out into some theological
: argument about which would get to the gate first, the adventurer or the
: dragon. It treated the gate as two discrete objects which was, as another
: poster
me.
: mentioned, the inherent problem. Treating the gate as one object
: was the solution.
Apologies again, 'cos I was wrong. It doesn't 'say nothing'; it states
what I thought was fairly self-evident. Blame it on hot weather and
late nights.
: >Debugging is never easy. Concurrent debugging is even worse. This is
: >one reason why I am going the strictly typed way of things. The only
: >thing you can do is be careful how you write you code. Write
: >defensively. Good practice in anycase.
:
: This is true. But you spend your time trying to convince me that what I
: talked about is not good programming practice. This is something akin to
: telling something they've used poor programming practices because they used
: the processor's add instruction rather than doing it under Mathematica.
Heh. I am not saying that it's bad programming. I am saying that it is
a bad way to be solving the 'problem' posed by muds. I think co-op
threading is fantastic in its place. I have used it for some fairly
major projects, and regret very much that it isn't portable under
unix. (Sounds like "some of my best friends are frogs!")
However, I really don't think that co-op threading is the way to
program distributed/threaded muds. That is what I have been trying to
show. While you can program a threaded mud co-operatively, and as you
say, do it reasonably well, the paradigm just doesn't extend to
multi-threaded/distributed muds, and thus you have gained little.
: This was the argument presented above mine. I was simply explaining why
: it was infeasible. If you want to correct him, fine, just don't bother
: trying to make it look like I'm the one you're correcting.
Oops.
: What you want to assure is that game time on both
: machines remains fairly consistant ... after having your server up for a
: month or so, a time bias of around an hour of game time could be somewhat
: disconcerting.
What I was getting at was, why dump this burden on the mud? Why not
sync game time to real time, and then rely on the machine the mud is
running on to maintain its sync?
Michael
>Umm. LPmud has NO blocking calls. It is just not possible for LPC to
>block. It has to be this way as it is a single threaded driver.......
>The whole point of going multi-threaded is so that you can block, and
>so that you don't need to explicitly preserve state.
Gosh, try to write some blocking code without kernel level support ...
surprise, you can't ... if you ever blocked, all your other threads would,
yes, _stop_. The only thing you come up with is a clever simulation of
blocking coded into the threads themselves, which is something I have put
in.
>There's a bit of a problem here... Most unices won't do async disk
>I/O.
Actually, it may depend on the definition of most ... SunOS has it, and
SunOS is a very common OS. Regardless, show me how you can block a process
without blocking every thread in that process if the kernel doesn't support
it? You'll notice Sun LWP suggests you link with their non-blocking I/O
library ...
>: As far as player input goes, you typically drive it on a timed event
>: anyway, since you only want to process commands at a certain rate
>: (disallowing some people to issue commands more quickly just because
>: they can flood your server). Again, the solution is trivial.
>This isn't quite correct. While you want to limit commands/second, you
>DON'T want to do it by executing commands on second boundries or
>someother silly method like that. You process a command as soon as you
>get it, UNLESS the last command was received less than a second ago.
That really depends on the nature of your server. I am, primarily,
interested only in combat based servers. I need to delay people so that
their physical actions take up time. Also, processing is more in the nature
of 10th-of-a-second delays, not one second.
>Hmm. You missed a few things that can block.
>a) network I/O. Unlike disk I/O, networks can and frequently do return
>EAGAIN (or EWOULDBLOCK for those BSD ppl). This SHOULD result in a
>context switch in a multi-threaded environment. Or do you want your
>user level code to be doing..
> while (send_to_player() == EAGAIN) pause();
>??
>b) Any sort of sleep() or user requested pause.
>c) Waiting for user input (You seemed to skim over that above). This
>should result in a context switch only if it would block. I.e if there
>is no input waiting.
I forget how I addressed "a)" (sorry, seem to have left my mind at home
today). As to "b)", my threads contain their own sleep function which
releases the thread (in essence a thread-level blocking) ... when sleep is
called it is assumed a context switch may occur ... if all threads are
sleeping, the entire process will block until the wake-up of the first one
waiting. And in the case of "c)" a context switch is performed after the
processing of the command ... that is, the input is picked up, acted upon
and then a context switch occurs.
>: >While you can program around this, it gives rise to race conditions in
>: >poor code, and is basically a hassle. Furthermore, if someone changes
>: >the base code, your code will need to change too. This is a major
>: >no-no.
>:
>: This is completely erroneous. The disk I/O and connection I/O routines
>: should be hidden at the lowest level anyway.
>Fine. They are hidden at the lowest levels.
>: I perform context switches
>: only in the lowest level routines, and the higher level routines have no
>: concept of where a context switch will occur.
>If this is the case, then you might as well have preemptive threading.
>Am I missing something? If it doesn't know where a context switch will
>occur, you might as be preemptive.
>: >If you write your code so that is doesn't rely on lower level code not
>: >switching, then you basically have everything that you need for
>: >pre-emptive threading.
>:
>: When will you write upper level code that is completely independant of the
>: fact that context switching is taking place?
>Whenever you know that nothing else will be useing this data structure
>at the same time as you are.
Which is exactly what I do, but with no semaphores, etc. I've never said
semaphores were invalid (I left that to some other idiot), I simply stated
that there were methods that allowed one to code in a more traditional
manner. By limiting where context switches occur, I limit the number of
cases I have to protect. If a context switch could occur anywhere, I'd have
to protect _all_ shared data structures.
>I.e. any access to object local variables (i.e. the majority of the
>code for rooms,player object et al).
>The only code that needs to be aware is code that is doing object
>external accesses.
This requires a lot of careful coding, especially if I want to avoid wasting
unnecessary resources ... for example, should I lock the entire object or just
an attribute? If I'm locking a list of objects, should I just lock the
entire list or each object as I modify it? Or perhaps the entire function
should be locked?
>: You talk about semaphores and
>: monitors and at the same time act displeased because I have some idea that
>: threading is taking place at a lower level.
>I am being stupid again. What are you trying to get across here?
>: I suppose knowing when and
>: where the context switches will take place is a bad idea?
>IMHO, yes.
>My one major case for pre-emptive scheduling, is that you can (almost)
>guarentee that the entire mud won't hang. With co-operative, a look
>like 'for (;;);' will stop every player dead.
Okay, so what's your point? With a for(;;) loop in a non-threaded system the
exact same thing could happen. I do believe that the original person
was complaining about wanting to be able to write code that felt like it was
for a singly threaded system ...
>Lpmud kludges around this, but breaking if too many intructions are
>run at once. Would you do this to? Witness shattered problems with
>this. (note that it doesn't have to be on purpose. Any code that
>accidently degrenates to the above will kill the mud).
Well, firstly, you make a faulty assumption about the nature of my server.
If I placed this as C code then yes, it most likely would. As far as the
interpreted side goes, each program will have its own thread, and each thread
will be limited to how much it can process before a context switch occurs.
The interpreted side works quite differently from the plain C code, however.
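As a rough sketch of how an interpreted thread could be budgeted this way (the `VMThread` and `run_slice` names are invented for illustration, not taken from any real driver):

```c
#include <stddef.h>

#define QUANTUM 100   /* interpreted instructions allowed per slice */

typedef struct {
    int pc;       /* program counter into the bytecode array */
    int budget;   /* instructions left in the current slice */
    int done;     /* set once the program has run to completion */
} VMThread;

/* Execute one slice of an interpreted thread: run until the program
 * finishes or the instruction budget is exhausted, then return so the
 * scheduler can give another thread its turn.  Returns 1 when the
 * program has finished, 0 when it was cut off by the quota. */
int run_slice(VMThread *t, const int *code, int codelen) {
    (void)code;   /* a real interpreter would decode code[t->pc] here */
    t->budget = QUANTUM;
    while (!t->done && t->budget-- > 0) {
        if (++t->pc >= codelen)   /* "execute" one instruction (stubbed) */
            t->done = 1;
    }
    return t->done;
}
```

The point being that a runaway interpreted program can never hog the driver for more than QUANTUM instructions at a stretch, whatever its code does.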
>: Or would you
>: prefer me to label the context switchable routines something like:
>: get_io_and_a_context_switch_will_occur? I submit to you that the low level
>: code may change significantly, but the locations of the context switches
>: will not change as it is fundamental to the method I proposed.
>True enough. You just say that any function call may result in a
>context switch and you are safe.
No, I specify which low level routines can (or more likely will) result in
a context switch. No other low level routines will do so.
>: >There is very little overhead involved with semaphores/monitors. Much
>: >of this would be driver supported with only 1 or 2 lines of C at
>: >various critical places.
>:
>: Hmmm, a lot can be done in a line of C, as witness the obfuscated C code
>: contests.
>Heh.
>: But seriously, semaphores/monitors also add a large degree of
>: design/debug time. A risk that may not even be necessary to take. If
>: I thought context switching over some sort of quantized, prioritized system
>: was needed, I'd use it. Personally, the concept that something can be
>: actually thought out in advance, rather than just always blindly using the
>: most generic form, can be okay.
>Co-operative:
> + easier to program in, if not familiar with concurrent
> prog'ing.
> - Have to have some recovery mechanism for infinite loops.
Eh? Not that I've seen. If you place an infinite loop in your hard
code, tough luck ... it's a bug. Shit happens.
> - Little dynamic scheduling available.
Little _need_ for dynamic scheduling. You perform your own scheduling,
per se.
> - hard to have CPU quota's.
I'm not interested in having CPU quotas. I'm interested in serving
everyone in a round robin fashion without fail.
>Pre-emptive:
> - more difficult to program in.
> + No need to guard against infinite loops (to the extent that
> they aren't immediately fatal).
Great, we can allow bad programming practices and they can get away
with them ... on the other hand, if it's something written in the interpreted
language, infinite loops won't hurt me either.
> + Can have dynamic scheduling.
Gosh, my job can get delayed for eternity because everyone else's
requests draw less of a load ... cool.
> + Easier to implement hard + soft CPU quotas. (don't know if
> this is important to you. It would be for me. I intend
> to give players the ability to write programs for their own
> use).
Again, this really isn't relevant for a couple of reasons ... players
who write programs for me will be going through the interpreted side ... it
can impose quotas in the sense that that side runs sort of like a virtual
machine.
>You weigh it up. Feel free to add all the ones I missed pls.
>: >: In fact, I would
>: >: estimate that efficiency is increased, as you are determining the most
>: >: effective places to perform context switching.
>:
>: >This is very unlikely. You context switch under 2 instances.
>: >1) You would block. The pre-emptive scheduler will catch this anyway.
>: >2) You feel you are using too much time. Problem is you can't
>: >actually measure this, so you have to guess. The pre-emptive scheduler
>: >doesn't have to guess.
>:
>: Wrongo. Assuming your threads aren't in the kernel itself, you run into
>: blocking problems either way.
>Huh? The kernel returns EAGAIN (or EWOULDBLOCK). This means that it
>would block. This means you need to hang around until it's ready
>before you try again.
If your thread routines do not lie within the kernel, your threads
run as a single process, okay? If that process should block, _all_ threads
within the process will block as well. Thus the process must _never_ block
if you want the non-blocked threads to continue during the interim.
>choices:a) sleep on select().
> b) context switch to something that doesn't need that (blocked)
> service.
>a) is obviously not right. So you use b). b) is done at the user
>level. What is so bad? The only diff between co-operative and
>preemptive, is that with co-operative your mud level code needs to
>check. With pre-emptive, your driver level code can handle it
>invisibly.
If you're talking about blocking individual threads, there is no such thing
without kernel level support. Any blocking mechanism, such as in Sun LWP,
is faked for you, something I also do.
>: As far as using too much time, no, this
>: isn't a problem. Look at the nature of the server. If you want to assure
>: that player B is processed after player A and before player A's new
>: request, then time scheduling is _not_ important.
>I DON'T want to assure that B is processed between A. What if B goes
>and asks for a dynamic map of the entire mud? What if B's command
>resolved to an infinite loop? Do you want everyone to stop waiting
>for B???
Well, I'm not talking about a distributed system, and therein perhaps lies
the rub. I believe I mentioned that a lot of what I was talking about
didn't apply to distributed systems ... for my needs, I'm somewhat doubtful
a distributed system would even work that well, as reliability must be very
high.
>: >: Are we really worried about
>: >: time sharing?
>: >Well, most people would be, yes.
>:
>: Most people would probably take an open look at what the server was designed
>: to do before making blanket statements based off of some ideological
>: standpoint as to how the universe should work.
>The server was designed to be multi-user, yes? This means that you
>probably want some degree of equity and sharing between users? This
>means that time-sharing is a very viable way to go. Care to try and
>illustrate the viewpoint that timesharing is NOT the way to go??
No. Multi-user does not imply you want equity and it would be a mistake to
assume so. In my case I want "command equity" if you would like to call it
such. Everyone should be able to submit commands at the rate delegated them
by the server (it will vary, as you will be delayed for every action, etc.),
those commands should complete within the minimum iota, and when you submit
your next command, and the server is willing to take it, it should then be
processed.
>: >The main problem is that while much of the code only wakes up for a
>: >short time, there is the occasional piece that needs a lot of
>: >low-priority CPU time. LPmuds are particularly bad at handling this.
>:
>: Well, surprise surprise ... I'm not talking about LPmud. This isn't the sort
>: of problem that should be solved by building off of an existing server as
>: much as one written from scratch.
>I agree. LPmud was an example. I sure wouldn't be writing a
>multi-threaded mud to be an LPmud clone.
>: In a time synchronized environment it
>: doesn't really matter. It's often better to lose a tick and make sure
>: everyone has finished what they were supposed to accomplish in a tick than to
>: delay one to the next tick. If you can't process everything you need to in
>: each thread within the space of one tick, your main solution is going to have
>: to be rewriting that thread or altering your tick lengths.
>So if I run a command that involves a LOT of processing, you think
>that everyone else should wait until I have finished??
In my particular case, yes. On the other hand, the design should try to
limit the occurrences of commands which will result in delaying the other
users. This has a lot to do with why it wouldn't work well distributedly ...
I need to be able to process things in a timely fashion.
>: Perhaps you need to think about what the server is supposed to accomplish
>: before you start thinking about what will and will not be effective in
>: solving your dilemma.
>How about you state what YOU think the server is trying to do?
Okay, I've tried to do some of that in the above. I will submit that we are
probably talking about two different concepts. Try to remember that I'm not
trying to say that your view is invalid, only to show you why I feel in my
case my view is also valid.
>: Well, most of what you were discussing doesn't have any direct relevance
>: either, so let's just say you could've fooled me. If you want to discuss
>: distribution, fine, but distribution concepts really aren't dependent on
>: how to properly semaphore your code. A distributed server using the
>: threaded system I mentioned would be no more difficult than a distributed
>: server in any other sense.
>Are you really saying what I think you are? That you can
>co-operatively thread a multi-processing program? The mind boggles.
>The whole idea of multi-processing implies that you have concurrent
>threads (which is what pre-emptive threading simulates).
Actually, it doesn't. Scheduling strategies have little to do with multi-
processing. A batch system of yesteryear was multi-processing, but not
necessarily time-shared. It just seems to me that people are too ready to
bandy about the concept of time sharing when it may or may not be applicable
to the given problem. One must evaluate the problem and determine an
effective solution. Solutions are pure and acknowledge no popular theory of
the time.
>: Although what I said doesn't apply, it didn't
>: apply in the same manner the previous discussion didn't apply.
>Now THAT is a classic piece of writing. :)
*chuckle* Indirect directedness, if you'll pardon my directness of purpose.
Hey, I learned all my tricks at the Department of Redundancy Department.
>: >: Although, for a distributed system it just seems more a manner of
>: >: agreeing how to share data between the two (that is, who should maintain
>: >: control over the gate and should process requests to open/close it,
>: >: etc.).
>: >This is like saying 'Swimming is just like moving your arms and legs
>: >to push yourself through the water' :). Says nothing.
>:
>: Well, all I was referring to here, was the above mish-mash (which I notice
>: you didn't bother to quote),
>Apologies. The post was long enough already.
True, true ... we may collapse the internet with the weight of our quoted
articles and the whole world will be a black hole ... and one wonders where
we will have gotten.
>: which made it out into some theological
>: argument about which would get to the gate first, the adventurer or the
>: dragon. It treated the gate as two discrete objects which was, as another
>: poster
>me.
>: mentioned, the inherent problem. Treating the gate as one object
>: was the solution.
>Apologies again, 'cos I was wrong. it doesn't 'say nothing', it states
>what I thought was fairly self evident. Blame it on hot weather and
>late nights.
>: >Debugging is never easy. Concurrent debugging is even worse. This is
>: >one reason why I am going the strictly typed way of things. The only
>: >thing you can do is be careful how you write your code. Write
>: >defensively. Good practice in any case.
>:
>: This is true. But you spend your time trying to convince me that what I
>: talked about is not good programming practice. This is something akin to
>: telling something they've used poor programming practices because they used
>: the processor's add instruction rather than doing it under Mathematica.
>Heh. I am not saying that it's bad programming. I am saying that it is
>a bad way to be solving the 'problem' posed by mud. I think co-op
>threading is fantastic in its place. I have used it for some fairly
>major projects, and regret very much that it isn't portable under
>unix. (Sounds like "some of my best friends are frogs!")
I think it depends on your desired mud. Hmmm, and my own cooperative
threads are fairly portable ... it requires me to make some modifications to
the code that hacks the setjmp table for each architecture, but I found that
easier than trying to get Sun LWP on non-Suns or find any other C threading
interface that I could live with.
>However, I really don't think that co-op threading is the way to
>program distributed/threaded muds. That is what I have been trying to
>show. While you can program a thread mud co-operatively, and as you
>say, do it reasonably well, the paradigm just doesn't extend to
>multi-threaded/distributed muds, and thus you have gained little.
I'd probably have to agree with you. If you want to design a distributed
mud, I'd probably suggest doing it the correct way. The best way would
probably be to move to something like Solaris 2.0, which should have kernel
level thread support along with multi-processor intrinsics, and use a nice
OO language of your choice. However, at this point I haven't found an OO
language I can stand (and believe me, it's not the concepts that scare me,
it's the syntax), and I wanted a very high level of portability. I
have managed to port my thread code to several BSD systems (Sun 3's, Sun 4's,
Dynix, Dynix/PTX, UTek) with little or no problems.
>: What you want to assure is that game time on both
>: machines remains fairly consistent ... after having your server up for a
>: month or so, a time bias of around an hour of game time could be somewhat
>: disconcerting.
>What I was getting at was, why dump this burden on the mud? Why not
>sync game time to real time, and then rely on the machine the mud is
>running on to maintain its sync.
Well, the problem here might be that two servers running on different
machines would grow out of sync with each other (real time wise) ... the
fact is that two machines running clocks set at the same time will slowly
develop a bias between themselves ... something NTP helps to correct. Call
this whimsy mostly, though, as the uptime of a mud server probably makes
this negligible ... you should at least coordinate time at setup though, so
that if one system clock reads 123734172 and the other 123736734, you can
both realize that this means the same game time. This, of course, is
exactly what you talk about above.
Well, anyway, I don't know if I managed to respond to all of the points
above ... my mind is fried and needs rest, but I wanted to eke out a reply
before I seek oblivion. I do understand your point of view (I think). Ah
well, history proves all men fools.
This is sort of the problem. As written they really are time driven but
they should be event driven ;)
(Much of the 'work' seems to be in the polling portion of the code and
not the event-driven portion. This, of course, is changing as people
rewrite stock code to be more event oriented)
Blocking code W/O kernel support?? Sure..
.....
Wait(s); /* This will block until s is Signal()'ed */
Where's the kernel level support?
I don't think this is what you meant somehow. :) This is way off topic
anyway. My prose above was referring to 'user level' blocking. I.e.
trying to access a blocked resource. Obviously kernel level blocking
isn't something you can do anything about.
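For the record, the user-level blocking described above can be sketched like so. `Sem`, `Wait`, and `Signal` follow the snippet's naming, but the yield stub only counts instead of really switching threads, so a `Wait()` on an empty semaphore would spin here; in a real co-operative scheduler the yield runs other threads, one of which eventually calls `Signal()`:

```c
typedef struct {
    int count;   /* number of available "signals" */
} Sem;

static int blocked_yields;   /* demo counter; see stub below */

/* Stand-in for a real user-level context switch. */
static void coop_yield(void) { blocked_yields++; }

/* Blocks only this thread, with no kernel involvement: it simply
 * yields the CPU until the semaphore has been Signal()'ed. */
void Wait(Sem *s) {
    while (s->count == 0)
        coop_yield();
    s->count--;
}

void Signal(Sem *s) { s->count++; }
```

No kernel support anywhere in sight, which is the point being made.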
: >There's a bit of a problem here... Most unices won't do async disk
^^^^^^
: >I/O.
: Actually, it may depend on the definition of most ... SunOS has it, and
: SunOS is a very common OS. Regardless, show me how you can block a process
: without blocking every thread in that process if the kernel doesn't support
: it? You'll notice Sun LWP suggests you link with their non-blocking I/O
: library ...
SunOS does it. Linux, 386BSD, Ultrix, Xenix, SysVr4, ... don't do it.
It isn't something I would rely on existing.
Of course you can't help it if the kernel blocks. Why do you keep
pointing this out? It just isn't something you worry about. You only
worry about the things you CAN do something about. I.e. thread-level
blocking, not process level.
: >This isn't quite correct. While you want to limit commands/second, you
: >DON'T want to do it by executing commands on second boundaries or
: >some other silly method like that. You process a command as soon as you
: >get it, UNLESS the last command was received less than a second ago.
:
: That really depends on the nature of your server. I am, primarily,
: interested only in combat based servers. I need to delay people so that
: their physical actions take up time. Also, processing is more on the nature
: of 10th of a second delays, not one second.
Yup. I was just saying that if they haven't done anything for a while,
and then they type something, you don't want to block that until the
unit boundary ticks over. Why add the extra delay? They are already
idle.
: >: When will you write upper level code that is completely independent of the
: >: fact that context switching is taking place?
:
: >Whenever you know that nothing else will be using this data structure
: >at the same time as you are.
:
: Which is exactly what I do, but with no semaphores, etc. I've never said
: semaphores were invalid (I left that to some other idiot), I simply stated
: that there were methods that allowed one to code in a more traditional
: manner. By limiting where context switches occur, I limit the number of
: cases I have to protect. If a context switch could occur anywhere, I'd have
: to protect _all_ shared data structures.
True enough.
: >I.e. any access to object local variables (i.e. the majority of the
: >code for rooms,player object et al).
:
: >The only code that needs to be aware is code that is doing object
: >external accesses.
:
: This requires a lot of careful coding, especially if I want to avoid wasting
: unnecessary resources ... for example, should I lock the entire object or just
: an attribute? If I'm locking a list of objects, should I just lock the
: entire list or each object as I modify it? Or perhaps the entire function
: should be locked?
The better solution, I think, is encapsulation. You try not to have
multiple objects accessing some data structure. Instead you
encapsulate the data into some object, and then have everything ask
that object. Provides 'monitor'-like functionality.
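A sketch of that encapsulation in C (the `Inventory` type is invented for the example): because outside code can only go through these entry functions, and a co-operative thread never yields inside them, the array and its count can never be observed half-updated:

```c
#define INV_CAP 32

/* All state is private to the object; nothing else touches the fields. */
typedef struct {
    int items[INV_CAP];
    int len;
} Inventory;

/* Monitor-style entry point: the array slot and the count are updated
 * together, and since nothing in here can context switch (in a
 * co-operative system), no other thread sees them out of step. */
int inv_add(Inventory *inv, int item) {
    if (inv->len >= INV_CAP)
        return -1;                    /* full */
    inv->items[inv->len++] = item;
    return 0;
}

int inv_count(const Inventory *inv) { return inv->len; }
```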
: >My one major case for pre-emptive scheduling, is that you can (almost)
: >guarantee that the entire mud won't hang. With co-operative, a loop
: >like 'for (;;);' will stop every player dead.
:
: Okay, so what's your point? With a for(;;) loop in a non-threaded system the
: exact same thing that could happen.
I am saying that, with single-t or co-op threaded systems, the above
is a problem. In single-t muds it is kludged around (break if taking
too long). In pre-emptive systems, it is not an unrecoverable problem.
: I do believe that the original person
: was complaining about wanting to be able to write code that felt like it was
: for a singly threaded system ...
Is THAT what started all this? Wow. :) That's right. I think I said
that it wasn't a worthwhile expectation, and you disagreed. Got it.
Now I know what we are talking about. Phew. That's a relief. :)
: >Lpmud kludges around this, by breaking if too many instructions are
: >run at once. Would you do this too? Witness Shattered's problems with
: >this. (note that it doesn't have to be on purpose. Any code that
: >accidentally degenerates to the above will kill the mud).
:
: Well, firstly, you make a faulty assumption about the nature of my server.
: If I placed this as C code then yes, it most likely would. As far as the
: interpreted side goes, each program will have its own thread, and each thread
: will be limited to how much it can process before a context switch
: occurs.
Sorry. You can't do this. Because you end up with an async context
switch (you don't know when you are going to run out of time), and
your code assumes sync switches.
: >: Or would you
: >: prefer me to label the context switchable routines something like:
: >: get_io_and_a_context_switch_will_occur? I submit to you that the low level
: >: code may change significantly, but the locations of the context switches
: >: will not change as it is fundamental to the method I proposed.
:
: >True enough. You just say that any function call may result in a
: >context switch and you are safe.
:
: No, I specify which low level routines can (or more likely will) result in
: a context switch. No other low level routines will do so.
Can you see that this is a bad assumption to make? At this moment, the
code will work. And then 4 months down the track, someone will add or
change something, and oops, that call can now result in a context
switch. Hmm. Either lose the functionality, or re-write the higher
level code....
: >: But seriously, semaphores/monitors also add a large degree of
: >: design/debug time. A risk that may not even be necessary to take. If
: >: I thought context switching over some sort of quantized, prioritized system
: >: was needed, I'd use it. Personally, the concept that something can be
: >: actually thought out in advance, rather than just always blindly using the
: >: most generic form, can be okay.
:
: >Co-operative:
: > + easier to program in, if not familiar with concurrent
: > prog'ing.
: > - Have to have some recovery mechanism for infinite loops.
:
: Eh? Not that I've seen. If you place an infinite loop in your hard
: code, tough luck ... it's a bug. Shit happens.
The point I was trying to make, is a thread level loop will cause all
other threads to hang. This doesn't happen in pre-emptive threaded
systems.
: > - Little dynamic scheduling available.
:
: Little _need_ for dynamic scheduling. You perform your own scheduling,
: per se.
This makes your scheduling hard-coded. Changing priorities requires a
re-write of source code...
: > - hard to have CPU quota's.
:
: I'm not interested in having CPU quotas. I'm interested in serving
: everyone in a round robin fashion without fail.
Ok. It's not a design goal I would have, but if that's what you want.
: >Pre-emptive:
: > - more difficult to program in.
: > + No need to guard against infinite loops (to the extent that
: > they aren't immediately fatal).
:
: Great, we can allow bad programming practices and they can get away
: with them ... on the other hand, if it's something written in the interpreted
: language, infinite loops won't hurt me either.
No. The point I was making is that bad programming is not totally
fatal. I.e. it doesn't hang every other thread.
: > + Can have dynamic scheduling.
:
: Gosh, my job can get delayed for eternity because everyone else's
: requests draw less of a load ... cool.
Grin. Look up concepts like 'total fairness', and 'partial fairness'
etc etc. A decent scheduler will never suspend a thread indefinitely.
: > + Easier to implement hard + soft CPU quotas. (don't know if
: > this is important to you. It would be for me. I intend
: > to give players the ability to write programs for their own
: > use).
:
: Again, this really isn't relevant for a couple of reasons ... players
: who write programs for me will be going through the interpreted side ... it
: can impose quotas in the sense that that side runs sort of like a virtual
: machine.
We are talking interpreted side here always. Were we talking at cross
purposes? I have only been concerned with thread/interpreted level
programming.
Hmmm. I think we ARE talking at cross purposes. When I say 'thread' I
am talking about an interpreted level thread. The C code may or may
not be threaded (makes no diff). The interpreted code IS threaded.
: >choices:a) sleep on select().
: > b) context switch to something that doesn't need that (blocked)
: > service.
: >a) is obviously not right. So you use b). b) is done at the user
: >level. What is so bad? The only diff between co-operative and
: >preemptive, is that with co-operative your mud level code needs to
: >check. With pre-emptive, your driver level code can handle it
: >invisibly.
:
: If you're talking about blocking individual threads, there is no such thing
: without kernel level support. Any blocking mechanism, such as in Sun LWP,
: is faked for you, something I also do.
See above for blocking code W/O kernel support.
: >: As far as using too much time, no, this
: >: isn't a problem. Look at the nature of the server. If you want to assure
: >: that player B is processed after player A and before player A's new
: >: request, then time scheduling is _not_ important.
:
: >I DON'T want to assure that B is processed between A. What if B goes
: >and asks for a dynamic map of the entire mud? What if B's command
: >resolved to an infinite loop? Do you want everyone to stop waiting
: >for B???
:
: Well, I'm not talking about a distributed system, and therein perhaps lies
: the rub.
Hmm. The above doesn't assume distributed. 'Dynamic' means the map is
built on-the-fly by some sort of tree search. It's just an example of
a long-running command.
: I believe I mentioned that a lot of what I was talking about
: didn't apply to distributed systems ... for my needs, I'm somewhat doubtful
: a distributed system would even work that well, as reliability must be very
: high.
Very true. Reliability is something that is very messy to work around
on distributed systems.
: >The server was designed to be multi-user, yes? This means that you
: >probably want some degree of equity and sharing between users? This
: >means that time-sharing is a very viable way to go. Care to try and
: >illustrate the viewpoint that timesharing is NOT the way to go??
:
: No. Multi-user does not imply you want equity and it would be a mistake to
: assume so. In my case I want "command equity" if you would like to call it
: such. Everyone should be able to submit commands at the rate delegated them
: by the server (it will vary, as you will be delayed for every action, etc.),
: those commands should complete within the minimum iota, and when you submit
^^^^^^^ maximum?
: your next command, and the server is willing to take it, it should then be
: processed.
Hmm. Ref the example of one person starting a LONG job. Do you want
everyone to stop until his command finishes??
: >So if I run a command that involves a LOT of processing, you think
: >that everyone else should wait until I have finished??
:
: In my particular case, yes. On the other hand, the design should try to
: limit the occurrences of commands which will result in delaying the other
: users. This has a lot to do with why it wouldn't work well distributedly ...
: I need to be able to process things in a timely fashion.
Ok. And what if a command degenerates to a (thread-level) infinite loop?
Everything will freeze and no-one else will be able to get a command
processed, right?
: >: Perhaps you need to think about what the server is supposed to accomplish
: >: before you start thinking about what will and will not be effective in
: >: solving your dilemma.
:
: >How about you state what YOU think the server is trying to do?
:
: Okay, I've tried to do some of that in the above. I will submit that we are
: probably talking about two different concepts. Try to remember that I'm not
: trying to say that your view is invalid, only to show you why I feel in my
: case my view is also valid.
Quick summary:
You think a co-operatively threaded system is worth
implementing, as it gives you threads (interpreted level, of course),
and yet is still able to be programmed in much the same way as a
single-threaded environment.
I think that while you do get threads as above, they aren't
as useful as they would be if pre-emptive. If pre-emptive, you avoid
some problems (fairness et al) while incurring others (sync, sharing).
Also, the pre-emptive paradigm is directly extensible to
distributed/multi-processor while co-operative is not.
How does this sit with you?
: >Are you really saying what I think you are? That you can
: >co-operatively thread a multi-processing program? The mind boggles.
: >The whole idea of multi-processing implies that you have concurrent
: >threads (which is what pre-emptive threading simulates).
:
: Actually, it doesn't. Scheduling strategies have little to do with multi-
: processing. A batch system of yesteryear was multi-processing, but not
: necessarily time-shared. It just seems to me that people are too ready to
: bandy about the concept of time sharing when it may or may not be applicable
: to the given problem. One must evaluate the problem and determine an
: effective solution. Solutions are pure and acknowledge no popular theory of
: the time.
What I think I was getting at was code like...
Array a; /* all variables are accessible by some other thread. */
Int len;
Value v;
v = get_value(); /* may context switch . no probs */
a += ({ v }); /* add to end of array */
len++;
return;
will work just fine under co-op threading, but fails to work under any
sort of multi-processing environment (multi-p == some env where more
than one thread can run at once).
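To make the point concrete: under a true multi-processing environment the same append is only safe if the read-modify-write is locked. A sketch using POSIX threads (a standard interface; all names below are invented for the demo):

```c
#include <pthread.h>
#include <stddef.h>

enum { PER_THREAD = 10000 };

static int a[2 * PER_THREAD];   /* shared array */
static int len;                 /* shared length */
static pthread_mutex_t len_lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread appends PER_THREAD values.  Without the mutex, the
 * `a[len++] = v` read-modify-write can interleave between processors
 * and lose updates; with it, array and count stay consistent. */
static void *appender(void *arg) {
    int v = *(int *)arg;
    for (int i = 0; i < PER_THREAD; i++) {
        pthread_mutex_lock(&len_lock);
        a[len++] = v;
        pthread_mutex_unlock(&len_lock);
    }
    return NULL;
}

/* Runs two appenders concurrently; returns the final length. */
int run_appenders(void) {
    pthread_t t1, t2;
    int v1 = 1, v2 = 2;
    pthread_create(&t1, NULL, appender, &v1);
    pthread_create(&t2, NULL, appender, &v2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return len;
}
```

Under co-op threading the lock/unlock pair is unnecessary, which is exactly the convenience being argued over.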
: >Apologies. The post was long enough already.
:
: True, true ... we may collapse the internet with the weight of our quoted
: articles and the whole world will be a black hole ... and one wonders where
: we will have gotten.
I would venture that we would have gotten smaller... :)
: >Heh. I am not saying that it's bad programming. I am saying that it is
: >a bad way to be solving the 'problem' posed by mud. I think co-op
: >threading is fantastic in its place. I have used it for some fairly
: >major projects, and regret very much that it isn't portable under
: >unix. (Sounds like "some of my best friends are frogs!")
:
: I think it depends on your desired mud. Hmmm, and my own cooperative
: threads are fairly portable ... it requires me to make some modifications to
: the code that hacks the setjmp table for each architecture, but I found that
: easier than trying to get Sun LWP on non-Suns or find any other C threading
: interface that I could live with.
That's the nice thing about co-op threads. You can do it in C in a VERY
small amount of code.
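For illustration, a minimal co-operative switch can be written with POSIX `ucontext` (a portable alternative to hand-patching setjmp buffers; every name below is invented for the demo):

```c
#include <ucontext.h>

static ucontext_t main_ctx, co_ctx;
static char co_stack[64 * 1024];     /* private stack for the coroutine */
static int trace[6], ntrace;         /* records the interleaving */

/* The coroutine does a step of work, then voluntarily hands back the CPU. */
static void coroutine(void) {
    for (int i = 0; i < 3; i++) {
        trace[ntrace++] = 100 + i;
        swapcontext(&co_ctx, &main_ctx);   /* co-operative yield */
    }
}

/* Sets up the coroutine's context, then ping-pongs control with it.
 * Returns the number of trace entries (6 when both sides ran 3 steps). */
int run_ping_pong(void) {
    getcontext(&co_ctx);
    co_ctx.uc_stack.ss_sp = co_stack;
    co_ctx.uc_stack.ss_size = sizeof co_stack;
    co_ctx.uc_link = &main_ctx;            /* where to go if it returns */
    makecontext(&co_ctx, coroutine, 0);
    for (int i = 0; i < 3; i++) {
        trace[ntrace++] = i;               /* "main thread" work */
        swapcontext(&main_ctx, &co_ctx);   /* switch into the coroutine */
    }
    return ntrace;
}
```

The trace comes out strictly interleaved: each side runs exactly until it chooses to yield, which is the predictability being argued for.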
Note that you don't need threaded C to support a threaded game
language. It is a bit more difficult, but it is certainly possible.
: >However, I really don't think that co-op threading is the way to
: >program distributed/threaded muds. That is what I have been trying to
: >show. While you can program a thread mud co-operatively, and as you
: >say, do it reasonably well, the paradigm just doesn't extend to
: >multi-threaded/distributed muds, and thus you have gained little.
:
: I'd probably have to agree with you. If you want to design a distributed
: mud, I'd probably suggest doing it the correct way. The best way would
: probably be to move to something like Solaris 2.0, which should have kernel
: level thread support along with multi-processor intrinsics, and use a nice
: OO language of your choice. However, at this point I haven't found an OO
: language I can stand (and believe me, it's not the concepts that scare me,
: it's the syntax)
Agreed!!!!!!!!
: and I wanted a very high level of portability. I
: have managed to port my thread code to several BSD systems (Sun 3's, Sun 4's,
: Dynix, Dynix/PTX, UTek) with little or no problems.
See above about not needing C level threads.
: Well, anyway, I don't know if I managed to respond to all of the points
: above ... my mind is fried and needs rest, but I wanted to eke out a reply
: before I seek oblivion. I do understand your point of view (I think). Ah
: well, history proves all men fools.
Amen.
Michael