Optimization-based versus "constructive" PCG


Julian Togelius

Oct 22, 2009, 1:23:48 PM
to Procedural Content Generation
Reading the introductions, it strikes me that many of us take an
optimization-based or "generate and test" approach to PCG. That is:
create lots of content instances more or less randomly, test them
according to some fitness/objective function, throw away the bad
instances and create new ones. This is certainly true for me, Cameron,
Erin, and Simon, who use evolutionary methods; Gillian certainly
creates many more platform levels than she needs.
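
In rough Python, the loop I mean looks something like this (the
content representation and the fitness threshold here are just
placeholders, not anyone's actual system):

    import random

    def random_instance():
        # placeholder: a content instance is just a parameter vector
        return [random.random() for _ in range(10)]

    def fitness(instance):
        # placeholder objective; a real one would test playability etc.
        return sum(instance) / len(instance)

    def generate_and_test(n_wanted, threshold=0.6):
        kept = []
        while len(kept) < n_wanted:     # no bound on how long this runs!
            candidate = random_instance()
            if fitness(candidate) >= threshold:
                kept.append(candidate)  # the bad instances are simply dropped
        return kept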

However, it would seem to me that this approach would be very unusual
for online PCG in actual games, for two reasons: (1) it takes time,
and (2) you don't know how long it will take. I would imagine that
games like Civilization use some rule-based mechanism for creating
their landscapes, one which generates a single instance and is
guaranteed to stop after a certain number of steps. But I don't know
of any actual examples of such "constructive" algorithms. Do any of
you know of any?

Julian

Mark Riedl

Oct 22, 2009, 2:19:25 PM
to procedur...@googlegroups.com
Julian, your email triggered a memory. I recommend the following paper:

Johnson-Laird, P.N. (2002). How Jazz Musicians Improvise. Music
Perception, 19(3), 415-442.

It has nothing to do with games, but, in the context of jazz, lays out
three types of "creative" algorithms:

-- neo-Darwinian
-- neo-Lamarckian
-- hybrid of the above

Neo-Darwinian describes what you are talking about, Julian:
generate-and-test. The allusion to genetic algorithms is apparent.
Neo-Lamarckian algorithms use the evaluation metrics as a model in the
generation of successors. The hybrid approach is a compromise between
the other two. Pros and cons of each are discussed.

Mark Riedl
---
Assistant Professor
College of Computing
Georgia Institute of Technology
---
ri...@cc.gatech.edu
http://www.cc.gatech.edu/~riedl/

Tarn Adams

Oct 22, 2009, 3:59:07 PM
to Procedural Content Generation
Yeah, generally speaking, you'd want to have control over when the
algorithm finishes. For map generation, fractals are often used.
Last I heard, Perlin noise is still what comes up the most. Another
example is the diamond-square algorithm (this was the first page I
found, via Wikipedia; there might be a better writeup somewhere):
http://www.gameprogrammer.com/fractal.html
For landscapes in particular, a lot of the early generators involve
just seeding some land blobs throughout an ocean world (or vice versa)
and having them grow out in random directions for N steps, and this
can still give you a good enough map if the final image is what you
are looking for.
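
A minimal sketch of that blob-growth idea, assuming a square tile grid
(all the constants are arbitrary; the point is that it always finishes
after a fixed number of steps):

    import random

    def blob_landscape(width=64, height=64, seeds=8, steps=2000):
        grid = [[0] * width for _ in range(height)]  # 0 = ocean, 1 = land
        land = []
        # seed a few land blobs throughout the ocean
        for _ in range(seeds):
            x, y = random.randrange(width), random.randrange(height)
            grid[y][x] = 1
            land.append((x, y))
        # grow the blobs in random directions for N steps (wrapping at edges)
        for _ in range(steps):
            x, y = random.choice(land)
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            nx, ny = (x + dx) % width, (y + dy) % height
            if grid[ny][nx] == 0:
                grid[ny][nx] = 1
                land.append((nx, ny))
        return grid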

I think a lot of the "generate and test" phase happens during
development, when time isn't as critical. Once you've got a fractal
landscape, you'll often need to massage the rules a lot to get the
game consistently giving you something you can keep, based on
aesthetics or some quality related to a game mechanic. The random
creature generator I've been working on has worked this way. In the
game itself, there are no rejected creatures, but I've personally
rejected many, many critters and rewritten the rules to avoid those
critters being generated again. On the other hand, I never got my
Dwarf Fortress world maps to the point where I have a guaranteed
keeper -- it'll run through several rejects first, but I explicitly
show that process to the user with a little snapshot of each one so
they know at least vaguely what's going on, and it attempts to salvage
a reject before it throws it away (for instance, if it decides that
the fractal has too many low altitude points, it'll raise the overall
height of the map instead of giving up).

As far as I know, random dungeon generators in Roguelike games (Rogue,
Nethack, Moria/Angband, ADOM, Dungeon Crawl, etc.) also generally just
make one dungeon each time without rejects. A lot of those are open
source if you want to dig up specifics -- I'm not familiar enough with
any of them to point something concrete out. A lot of it can work by
exhausting space, which you can do in a known amount of time roughly
speaking. For instance, for a "maze level", you might generate a
straight path from the entrance to the exit, deform it N times to make
it suitably loopy, then send off random length side branches from
random floor tiles until the level is exhausted. Then you could do
the iterative phase of rule-writing here until you have consistently
good mazes, and call it good enough. There are doubtless better ways
to create optimal mazes in whatever sense. It's just an example of
how something might work constructively. I think one of my maze
makers works roughly like that, anyway.
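
A stripped-down sketch of such a maze maker (I've left out the "deform
it N times" pass and invented all the constants; the bounded number of
branch attempts is what guarantees it stops):

    import random

    # move two cells at a time so walls stay between corridors
    STEPS = [(2, 0), (-2, 0), (0, 2), (0, -2)]

    def maze_level(w=39, h=19, branch_tries=500):
        grid = [['#'] * w for _ in range(h)]
        # main path: a straight corridor from entrance (left) to exit (right)
        mid = h // 2
        for x in range(1, w - 1):
            grid[mid][x] = '.'
        floor = [(x, mid) for x in range(1, w - 1)]
        # side branches: random walks from random floor tiles
        for _ in range(branch_tries):
            cx, cy = random.choice(floor)
            for _ in range(random.randint(2, 10)):
                dx, dy = random.choice(STEPS)
                nx, ny = cx + dx, cy + dy
                if 1 <= nx < w - 1 and 1 <= ny < h - 1 and grid[ny][nx] == '#':
                    grid[cy + dy // 2][cx + dx // 2] = '.'  # open the wall in between
                    grid[ny][nx] = '.'
                    floor.append((nx, ny))
                    cx, cy = nx, ny
                else:
                    break  # hit existing floor or the border; stop this branch
        return grid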

Other random map generators (Diablo or Daggerfall, say) work by
gluing pre-made modules together in random configurations. Once a
certain number of modules are placed, including whatever special
modules are necessary for the game to be playable (entry/exit, etc.),
you have a map. The gluing might include things like hallways of
random lengths/slopes, to break things up a bit, and other less
pre-made features.

A simple idea for building generation I saw kicked around in the
roguelike dev newsgroup many years ago was partitioning rectangles.
Take a rectangle, partition it with a vertical or horizontal wall at
random, and place one or more openings in the wall. This gives you
two connected rectangles -- repeat! The algorithm always finishes in
a set amount of time because the rectangles are getting smaller at
each stage. At first, you'll end up with walls blocking doors and so
on, so during the rule-massage portion, you'll select your wall
locations a little more carefully, but in the end, you end up with
something that's passable in a lot of situations. Then, given more
development time, you can alter the rules significantly to produce
better buildings.
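
A rough sketch of that partitioning idea (sizes and door placement are
made up, and it deliberately still has the blocked-door flaw described
above -- a later wall can land right on an earlier opening):

    import random

    def partition(grid, x0, y0, x1, y1, min_size=4):
        w, h = x1 - x0, y1 - y0
        if w < min_size * 2 and h < min_size * 2:
            return  # rectangle too small to split; recursion always bottoms out
        if w >= h:
            # vertical wall at a random column, with a single opening
            wx = random.randint(x0 + min_size, x1 - min_size)
            door = random.randint(y0, y1 - 1)
            for y in range(y0, y1):
                if y != door:
                    grid[y][wx] = '#'
            partition(grid, x0, y0, wx, y1, min_size)
            partition(grid, wx + 1, y0, x1, y1, min_size)
        else:
            # horizontal wall, same idea
            wy = random.randint(y0 + min_size, y1 - min_size)
            door = random.randint(x0, x1 - 1)
            for x in range(x0, x1):
                if x != door:
                    grid[wy][x] = '#'
            partition(grid, x0, y0, x1, wy, min_size)
            partition(grid, x0, wy + 1, x1, y1, min_size)

    # usage: building = [[' '] * 40 for _ in range(20)]; partition(building, 0, 0, 40, 20)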

There are sometimes in-play "generate and test" moments within a map
generation, if you've got an overall handle on them. For instance,
you might randomly distribute open rooms in a cavern, but reject
placements that put two rooms too close to each other. Given enough
space and few enough rooms, this generate-reject loop ends after some
very short period of time, so you don't have to worry about getting
stuck, even if the process is otherwise completely random (and could
theoretically last forever). If it ends up being a problem, you can
force the placement to occur more uniformly (for example by
constraining each room to a different area on some grid), but if your
goal is to generate as wide a variety of maps as possible, then the
fewer unnecessary constraints you can get away with the better. This
example doesn't involve any optimization after the generate and test
phase, but as long as the timing is kept under control it would be
fine.
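
That inner loop might look like this sketch, with a retry cap as the
safety valve (all parameters invented):

    import random

    def place_rooms(n_rooms=6, world=100.0, min_dist=15.0, max_tries=1000):
        rooms, tries = [], 0
        while len(rooms) < n_rooms and tries < max_tries:  # cap bounds the runtime
            tries += 1
            x, y = random.uniform(0, world), random.uniform(0, world)
            # generate-and-test moment: reject rooms too close to existing ones
            if all((x - rx) ** 2 + (y - ry) ** 2 >= min_dist ** 2 for rx, ry in rooms):
                rooms.append((x, y))
        return rooms  # may come back short; the caller can relax min_dist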

I don't remember if Spelunky is open-source or not, but it's an
example of a platformer with random levels, and it probably doesn't
involve too many rejection steps, although I really have no idea.

So, yeah, in my experience it's all pretty much done with
"constructive" algorithms right now. I think most of their generators
are custom-built with certain building block algorithms that are
widely known (Perlin noise, midpoint displacement/diamond-square,
random walks, subdivision, etc) and then optimized manually, but
anything that evolves during play would be practical if it were fast
enough, easy enough to implement (or kick over to a third-party tool)
and the evolution, if noticeable, fit the theme of the game. And of
course using something like that to optimize not-so-manually during
development would be cool, even if it doesn't actually happen during
the game itself. I don't know anything about evolutionary algorithms
though, so I guess I should stop rambling now.

Tarn

E. Hastings

Oct 22, 2009, 9:27:07 PM
to Procedural Content Generation
Actually, caring about whether or not some final solution is reached
depends on the type of content you are generating. If you are
generating game levels, yeah, you need to resolve a solution quickly.
However, if you are generating more transient content (guns, items,
magic spells, ships, vehicles, etc.) then the fun for the player is
in the search. Think of the Diablo loot model: you find a lot of
worthless loot, but you also find a lot of good stuff.

The goal isn't for the player to ever find the "optimal item"; the
goal is for them to continually find novel items. Diablo 2 is a
9-year-old 2D game and only has 5 levels, yet it still has thousands
of players on Bnet every night and is #30 on Xfire. The search for
loot (or whatever) keeps people playing.

Adam Smith

Oct 22, 2009, 10:23:07 PM
to procedur...@googlegroups.com
My g&t vs constructive ramblings:

(I'm trying to use "artifact production" as a local replacement for "procedural generation" here, one that doesn't overlap with the "generate" of generate-and-test.)

I take the view that anything that makes things can be reconciled with the generate and test view (which includes some important extremes).  I don't do this because I'm really attached to g&t architectures, but because plotting your system somewhere on the axis immediately makes you think of what it might look like for it to be somewhere else on the axis.  Additionally, thinking this way has gotten me to think about the role of search in the general artifact production area.

I suppose the canonical generate-and-test setup involves a generator that produces complete artifacts and a test process that simply says yes or no for each artifact.  The product of such a setup is some space of artifacts, and the definition of the space is split between the generation rules and the testing rules.  Even within this perspective there are some interesting extremes.  You can imagine a detailed-generate-and-trivial-test system that might, for example, apply a very intricate set of guaranteed-safe mutations to some base artifact as its generation process and, for its test process, threshold some simple measure, say, number of polygons less than a fixed constant.  Such a setup is nearly perfectly "constructive" in Julian's terms, but it has this little appendix of a test condition (which you immediately think to either remove altogether or replace with a more subtle measure to sculpt the resulting space).  At the other extreme, of mindless-generate-and-detailed-test, you might generate all possible text documents and test them by submitting them to conferences and seeing which are accepted.  Generation-heavy systems are really hard to make if you are very particular about the space you are describing, and test-focused systems either break at the whim of your artifact source or simply never produce output.  Clearly, the magic lies somewhere in managing the specificity of the space you are trying to populate and in how well you can align your generate and test phases.

Extremes aside, it often makes sense to intermingle generate and test -- generating partial artifacts and rejecting some early on, backtracking or performing minor repairs if needed. By baking more and more of your test into the generate process (and packing the test as close as possible to the start of the process) you can build artifact-producer thingies that are both efficient (in terms of avoiding wasted computation) and effective (in producing all and only those artifacts that you are interested in).  However, baking your test into your generator is easier said than done.  I suppose the general case (for hard domains) always involves some kind of backtracking search and, outside of Prolog, this can get tedious to implement (and I'm known to suggest Prolog for such things).  In this old book I have (The Craft of Prolog), I recall a chapter specifically about generate-and-test in the context of logical solutions, with some discussion of how to squeeze the search out of your algorithm and apply deterministic, forward-only inference when possible.  Interestingly, the author suggests an approach to solving problems that starts with producing a naive generate-and-test solution and then iteratively refactoring the program towards an as-deterministic-as-possible result that still satisfies all of the required conditions.  This Prolog thing is one approach, sure, but it's tuned for the case where there is a very symbolic, logical definition of the space of artifacts you are interested in.  In the context of games, you are often a lot more flexible about what you will accept, and you further have some hints as to which phases of generation are reasonable (shape the mountains before you place the villages, etc.), so often a more ad-hoc approach is more realistic.  However, as you do try to pull in tighter control of your generated space, it seems you might need to be ready to start thinking in terms of backtracking search once again, even if your domain didn't initially require it.
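
A toy sketch of that intermingling in Python rather than Prolog: the
artifact is just a list of pieces, partial artifacts are tested (and
pruned) as early as possible, and failure backtracks to the previous
choice point. The example predicates at the bottom are invented for
illustration.

    def build(partial, pieces, ok_partial, ok_complete, size):
        if not ok_partial(partial):
            return None  # early test: prune this whole branch now
        if len(partial) == size:
            # the expensive, final test only ever sees survivors
            return partial if ok_complete(partial) else None
        for piece in pieces:
            result = build(partial + [piece], pieces, ok_partial, ok_complete, size)
            if result is not None:
                return result  # success somewhere down this branch
        return None  # dead end: backtrack to the caller

    # e.g. a strictly rising sequence of four digits summing to exactly 20:
    print(build([], range(10),
                lambda p: all(a < b for a, b in zip(p, p[1:])) and sum(p) <= 20,
                lambda p: sum(p) == 20,
                size=4))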

On the general theme of balancing undergeneration and overgeneration (your generative space being more restrictive or less restrictive than ideal) with efficiency, both in terms of how fast a system spits out new artifacts of interest and in terms of how much design/programming effort you had to put in to get it to work, there is another approach which I haven't been able to shoehorn into a Prolog-style backtracking search ideology: generate-and-categorize.  I wish I had a good external reference for this, but all I can think of is (wouldn't you know) my own paper about Tableau Machine, an interactive generative art system designed for long-term interaction: http://www.aaai.org/Papers/Symposia/Spring/2008/SS-08-03/SS08-03-013.pdf

The general idea behind generate-and-categorize (a term I didn't invent until I wrote it just now) is to delay the actual filtering part of your test process until the last possible moment, but do your generic evaluations as early as possible.  In the context of my generative art system, I used several context-free design grammars to lay out the basic generation process, defining five distinct-but-related generative spaces. Images could be sampled from these spaces with minimal effort (say, thousands per second), but all I really knew about them after generation was which rules they came from and, symbolically, which shapes were placed where.  In order to wrangle the emergent properties of my images (such as that an image appeared vertically symmetric, or that it was lop-sided) I rendered my vector images to a low-resolution bitmap and applied some basic image filters and metrics.  If I were doing a traditional generate-and-test system I might use thresholds on these metrics to accept or reject the image right then, but instead I just stored the symbolic image along with a bunch of metadata I now knew about it (prescribed: ruleset=2; emergent: balance=-0.5, concentration=0.9, coverage=0.2) into a giant database for later.  When it came time to install the system in real houses, I didn't ship any generation code. I only shipped a 100MB database of compressed image sources and let the system efficiently recall a random image with desirable properties for the given situation from an indexed catalog.  The result was that our system could produce a new image to display in about one second, but the criteria for this selection essentially required over 200 cpu-hours of effort. Once the image was selected it was tweaked just a bit more for final display (replacing grayscales with colors from a selected palette and scaling to fit the desired display).  Highly relevant, highly detailed artifacts were produced on a fixed time budget -- I'd like to use this technique again.  We came up with this scheme because of really practical concerns, like not wanting to have to get the generation component working on someone else's machine, and needing to generate artifacts on a time budget when the conditions for accepting an artifact were changing all of the time, but I think it should be applicable in other areas (particularly if you want to allow player preferences to enter the process but you don't yet have any knowledge of the players).
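
In rough code, the shape of the scheme is something like the sketch
below -- make_image and measure are hypothetical stand-ins for the
grammar sampling and the image metrics, not the actual Tableau Machine
code:

    import random
    import sqlite3

    def make_image(ruleset):
        # stand-in for sampling a symbolic image from a design grammar
        return "rules=%d;shapes=%d" % (ruleset, random.randint(1, 99))

    def measure(source):
        # stand-in for rendering the image and running metrics on it
        return random.uniform(-1, 1), random.uniform(0, 1)  # balance, coverage

    # offline: generate cheaply, evaluate, and store -- no accept/reject yet
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE art (source TEXT, ruleset INT, balance REAL, coverage REAL)")
    for _ in range(10000):
        ruleset = random.randint(1, 5)
        source = make_image(ruleset)
        balance, coverage = measure(source)
        db.execute("INSERT INTO art VALUES (?, ?, ?, ?)",
                   (source, ruleset, balance, coverage))

    # on site: the filtering finally happens, against whatever criteria apply now
    row = db.execute("SELECT source FROM art WHERE balance BETWEEN -0.2 AND 0.2 "
                     "AND coverage > 0.5 ORDER BY RANDOM() LIMIT 1").fetchone()
    print(row)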

I didn't work it into my ramblings yet, but here are my thoughts specifically on the relation between constructive production and generate-and-test production: I think constructive techniques are just generate and test without backtracking search (though they may still take a long time to produce a result if their generation steps are complex).  If you are aiming for a space that can only be reached with intermingled generate and test with backtracking search, but don't want to put in all of the effort to get it perfect, you can relax your system by stripping out one of the test layers (causing it to overgenerate, letting through a few undesirables) or stripping out one of the generation rules (causing it to undergenerate, never producing a particular kind of desirable artifact).

I tend to categorize repair actions (like shifting up the terrain in DF to salvage a questionable map) as a kind of self-contained backtracking: the strategy of generating a close variation of the artifact in hand to pass a particular test, without abandoning the whole last step of work (it only runs when a test has already failed). But maybe repair and generate-and-categorize methods could be combined and enriched to the point where they form a whole new paradigm for artifact production that doesn't bow down to either the constructive or the generate-and-test methods.  This might be some scheme where a system builds lots and lots of artifacts (including small, reusable pieces of them) and performs mostly intelligent retrieval and minor tweaking to produce relevant artifacts, without ever having a clear generation phase or evaluation step.  Well, at least this is how I operate when I'm playing with Legos.

Adam

Mark J. Nelson

Oct 23, 2009, 11:24:12 PM
to procedur...@googlegroups.com
Mark (Riedl)'s pointer to jazz improvisation reminds me of another
distinction that comes up on and off: whether to have an explicit notion
of content quality, like an evaluation function, in the first place.
Some kinds of generative algorithms try to mimic the style of a
particular designer or set of designers by capturing some model of their
design *decisions* (when they'd do a particular thing in a particular
context), without explicitly representing why the result is "good". A
game analog might be a level generator that had no explicit notion of
"level quality", but does know things like: John Romero would probably
put the keycard here, and since I'm generating Romero-like levels,
that's what I should do.

Roguelikes were already brought up, which can be seen as hand-coded
versions of that; Harold Cohen's AARON, in a different domain, is
another hand-coded example. The decisions could also be learned from
examples of a designer's output, which might be particularly practical
in the game domain: it seems like a lot of the motivation for PCG in
industry is a hope that we can do things like, "here's 200 examples of
the sort of thing we want, give us a bunch more in the same style
please". An example of that in jazz, which reminded me of this approach,
is this system that learns a generative statistical model of how Charlie
Parker does solo-trading:

B. Thom (2000). Unsupervised Learning and Interactive Jazz/Blues
Improvisation. Proceedings of AAAI 2000.
http://www.cs.cmu.edu/~bthom/PAPERS/aaai2k.pdf

-Mark

---
PhD student, School of Interactive Computing, Georgia Tech
Researcher, Expressive Intelligence Studio, UC Santa Cruz
http://www.cc.gatech.edu/~mnelson/

Julian Togelius

Oct 24, 2009, 1:06:00 PM
to proceduralcontent
Adam says:
"I suppose the canonical generate-and-test setup involves a generator
that produces complete artifacts and a test process that simply says
yes or no for each artifact."

The way I see it is that a generate-and-test system consists of a
generator that produces complete artifacts according to some
parameters, and a test system that assigns some
score/fitness/"goodness" to the artifact. The score does not need to
be a single scalar, but could be a vector, specifying how good the
artifact is according to several different criteria. The
parametrization for the next artifact(s) will then in some way depend
on the parametrization of the previous artifact(s), for example the
next generation could be created from the previous generation. (In the
case of a score vector, you use a multiobjective evolutionary
algorithm.)
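
A minimal sketch of that loop (with a scalar fitness for brevity; a
score vector would need a multiobjective algorithm in place of the
simple sort):

    import random

    def evolve(fitness, length=10, pop_size=20, generations=50):
        # each artifact is parametrized by a vector; each generation is
        # created from the previous one
        pop = [[random.random() for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]  # keep the best half
            children = [[g + random.gauss(0, 0.1) for g in random.choice(parents)]
                        for _ in range(pop_size - len(parents))]
            pop = parents + children
        return max(pop, key=fitness)

    # e.g. push every parameter towards 0.5:
    best = evolve(lambda art: -sum((g - 0.5) ** 2 for g in art))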

So I would see what Adam calls generate-and-test as a degenerate case
of an optimization-based approach, where there is only a single
score/fitness dimension, this dimension is discrete and only has two
possible values (accept or reject), and where the parameters of the
new artifact that is generated might not depend in a meaningful way on
the parameters of the previous artifact that was generated.

I tend to get into taxonomizing now and then, and especially for a
research field which doesn't have a textbook yet I think there's a
case for doing so. Hmm... maybe it's unfair to call g&t a "degenerate
case". Maybe we should call both optimization-based approaches and the
procedure Adam refers to as different forms of g&t?

Julian


--
Julian Togelius
Assistant Professor
IT University of Copenhagen
Rued Langgaards Vej 7
2300 Copenhagen S
Denmark
jul...@togelius.com
http://julian.togelius.com
+46-705-192088

Julian Togelius

Oct 24, 2009, 1:06:02 PM
to proceduralcontent
A related distinction could be whether the content is crucial to your
progress (e.g. a dungeon you have to pass through) or whether it's
optional (e.g. loot you can choose to pick up or not, or a dungeon you
don't need to enter). The latter can afford to be a bit messed up, so
you can use a softer/graduated testing procedure; i.e. if the
optimization procedure didn't come out well this time, and your
particular piece of content only reached fitness 0.7, you can still
throw it into the game. I guess allowing for plenty of optional
content is key to designing PCG-based games.

Julian


Adam Smith

Oct 24, 2009, 8:44:39 PM
to procedur...@googlegroups.com

> So I would see what Adam calls generate-and-test as a degenerate case
> of an optimization-based approach
I agree with Julian's categorization. All optimizing thingies will generate and test, but only some generate and test processes actually form a causal loop where the test from one cycle affects the generation of the next.

In the context of procedural content generation problems, "optimization-based" solutions are a particularly intelligent approach, though they may be overkill if your problem isn't particularly hard (in the case of optional content, perhaps).

Take, for example, basic terrain generation with the diamond-square algorithm.  If all you need is a suitably craggy terrain, with some basic requirements on the statistics of bumpiness that are ensured by the subdivision approach, then the algorithm works fine without feeding back into itself.  We don't really expect diamond-square to get better over time, and we don't need it to.  You might say that there wasn't really much opportunity for sequential optimization because there wasn't a clear test phase in diamond-square, so let's look at another example.

Suppose we were doing procedural poem generation (for an extremely loose definition of poem).  One approach would be to use some English grammar model to generate, say, two sentences.  A test phase might look up the last words of the sentences and ensure that they were mentioned in some standard rhyming dictionary. To me, this is generate and test, as clear as can be, but the result is likely to be pretty slow at generating results and, furthermore, sloppy about what it lets through as a "poem" in the end.

Even though the original process had no numbers in it, one could optimization-ify the process (to at least achieve much better efficiency in this case) by assigning weights to various English grammar constructions and other weights to individual words used as endings.  Through repeated runs of the basic generate and test cycle, weights could be updated at the end of each test based on success or failure.  Over time, the process might learn that it should bias towards building sentences that end in adverbs (which tend to all end in -ly in English), and to strongly avoid ever using "orange" as the last word of the first sentence, because of the notorious difficulty in finding a rhyming partner.  By transforming the basic generate and test process into a learning loop which improves over time, it becomes reasonable to stick in more nuanced generation and test processes which might be more computationally expensive.
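
A toy sketch of that learning loop, with a pair of ending words
standing in for the two generated sentences and a tiny hand-made
rhyming table (everything here is invented for illustration):

    import random
    from collections import defaultdict

    ENDINGS = ["orange", "day", "way", "slowly", "quickly", "blue"]
    RHYMES = {"day": "ay", "way": "ay", "slowly": "ly", "quickly": "ly"}

    weights = defaultdict(lambda: 1.0)

    def pick_ending():
        # roulette-wheel selection biased by the learned weights
        r = random.uniform(0, sum(weights[w] for w in ENDINGS))
        for w in ENDINGS:
            r -= weights[w]
            if r <= 0:
                return w
        return ENDINGS[-1]

    for _ in range(1000):
        a, b = pick_ending(), pick_ending()  # "generate" two sentence endings
        rhymed = a != b and RHYMES.get(a) is not None and RHYMES.get(a) == RHYMES.get(b)
        for w in (a, b):                     # "test", then update the weights
            weights[w] *= 1.1 if rhymed else 0.95

    print(sorted(ENDINGS, key=lambda w: -weights[w]))  # "orange" sinks to the bottom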

Incidentally, I think any snapshot of an optimization-based approach can be looked at purely in terms of generate and test.  Because of this, for some applications, it might make sense to build a fairly rich optimization-based system in-house, let it run for a while, and then only ship a snapshot of your generator in your final product.  The result would be something that retains a lot of variation, but only variation in the local space around artifacts already known to be of high-quality.

Adam

Mark Nelson

Oct 25, 2009, 5:38:33 PM
to Procedural Content Generation
On Oct 24, 5:44 pm, Adam Smith <rndmcn...@gmail.com> wrote:
> Suppose we were doing procedural poem generation
[...]
> Over time, the process might learn that it should bias towards
> building sentences that end in adverbs (which tend to all end in -ly
> in English), and to strongly avoid ever using "orange" as the last
> word of the first sentence, because of the notorious difficulty in
> finding a rhyming partner.  By transforming the basic generate and
> test process into a learning loop which improves over time, it
> becomes reasonable to stick in more nuanced generation and test
> processes which might be more computationally expensive.

This sounds a lot like what the ML and operations-research people call
model-based optimization, no? Basically interleave a distribution-
estimation step with a candidate-generation step that uses the
estimated distribution as bias, and repeat. One example I'm familiar
with (since I took a class from one of the authors) is MIMIC, which on
each iteration estimates the n% worst part of the remaining search
space, and avoids generating examples from there in the next iteration
(e.g. "avoid generating anything that ends with 'orange'"),
progressively ratcheting towards what's hopefully the good part of the
space: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.6732
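
A minimal univariate sketch of that interleaving over bitstrings
(PBIL-flavoured rather than MIMIC proper, which also models
dependencies between variables):

    import random

    def model_based_optimize(fitness, length=20, pop_size=50, iters=100, lr=0.1):
        p = [0.5] * length  # estimated distribution: one probability per bit
        for _ in range(iters):
            # candidate generation biased by the current distribution estimate
            pop = [[1 if random.random() < pi else 0 for pi in p]
                   for _ in range(pop_size)]
            best = max(pop, key=fitness)
            # distribution estimation: nudge the model towards the best sample
            p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, best)]
        return p

    # e.g. the model progressively ratchets towards all-ones bitstrings:
    print(model_based_optimize(sum))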

And a decent (if somewhat dry) survey of such methods:
http://iridia0.ulb.ac.be/~mbiro/paperi/ZloBirMeuDor2004aor.pdf

-Mark

Julian Togelius

Oct 25, 2009, 5:46:16 PM
to procedur...@googlegroups.com
> This sounds a lot like what the ML and operations-research people call
> model-based optimization, no? Basically interleave a distribution-
> estimation step with a candidate-generation step that uses the
> estimated distribution as bias, and repeat. One example I'm familiar
> with (since I took a class from one of the authors) is MIMIC, which on
> each iteration estimates the n% worst part of the remaining search
> space, and avoids generating examples from there in the next iteration
> (e.g. "avoid generating anything that ends with 'orange'"),
> progressively ratcheting towards what's hopefully the good part of the
> space: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.6732
>
> And a decent (if somewhat dry) survey of such methods:
> http://iridia0.ulb.ac.be/~mbiro/paperi/ZloBirMeuDor2004aor.pdf
>
> -Mark
>

The "estimation of distribution" flavour of evolutionary algorithms
(e.g. CMA-ES) is a form of model-based optimization, as acknowledged
in the Zlochin et al. paper you refer to. There are also good arguments
that many other EAs (e.g. evolution strategies with self-adaptation,
or differential evolution) are implicitly model-based.

Moral: every good idea has been discovered independently by many
people and comes with many labels (in different research communities).

Julian

Mark J. Nelson

Oct 25, 2009, 8:25:23 PM
to procedur...@googlegroups.com
Julian Togelius wrote:
> I would imagine that games like Civilization use some rule-based
> mechanism for creating their landscapes, one which generates a single
> instance and is guaranteed to stop after a certain number of steps.
> But I don't know of any actual examples of such "constructive"
> algorithms. Do any of you know of any?
>

I recently ran across a description of the galaxy-generation algorithm
for _Elite_ (1984), which has the amusing motivation of reducing RAM
requirements -- content is procedurally generated (deterministically) so
it doesn't have to be actually stored.

From
<http://en.wikipedia.org/wiki/Elite_(computer_game)#Technical_innovations>:

"The Elite universe contains eight galaxies, each galaxy containing 256
planets to explore. Due to the limited capabilities of 8-bit computers,
these worlds are procedurally generated. A single seed number is run
through a fixed algorithm the appropriate number of times and creates a
sequence of numbers determining each planet's complete composition
(position in the galaxy, prices of commodities, and even name and local
details — text strings are chosen numerically from a lookup table and
assembled to produce unique descriptions for each planet). This means
that no extra memory is needed to store the characteristics of each
planet, yet each is unique and has fixed properties. Each galaxy is also
procedurally generated from the first.

"However, the use of procedural generation created a few problems. There
are a number of poorly located systems that can be reached only by
galactic hyperspace — these are more than 7 light years from their
nearest neighbour, thus trapping the traveller. Braben and Bell also
checked that none of the system names were profane."
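
The general trick -- run a fixed pseudo-random recurrence from a seed
and never store the results -- can be sketched like this (an
illustrative hand-rolled LCG with made-up planet fields, not Elite's
actual routine):

    def planet(galaxy_seed, index):
        s = (galaxy_seed + index * 2654435761) & 0xFFFFFFFF
        def nxt():
            # a standard 32-bit linear congruential step; any fixed,
            # platform-independent recurrence would do
            nonlocal s
            s = (s * 1664525 + 1013904223) & 0xFFFFFFFF
            return s
        syllables = ["za", "le", "xe", "ge", "ri", "ve", "ti", "on"]
        name = "".join(syllables[nxt() % len(syllables)]
                       for _ in range(2 + nxt() % 3)).capitalize()
        return {"name": name, "x": nxt() % 256, "y": nxt() % 256,
                "tech_level": nxt() % 15}

    # nothing is stored: the same inputs always regenerate the same planet
    print(planet(0x5A4A, 7))
    print(planet(0x5A4A, 7))  # identical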

-Mark

Andrew Doull

Oct 26, 2009, 3:57:29 AM
to Procedural Content Generation
On Oct 23, 6:59 am, Tarn Adams <tarn.ad...@gmail.com> wrote:
> As far as I know, random dungeon generators in Roguelike games (Rogue,
> Nethack, Moria/Angband, ADOM, Dungeon Crawl, etc.) also generally just
> make one dungeon each time without rejects.

The Angband generation algorithm doesn't reject, but it is also not
constructive in the sense of guaranteeing well-formed (e.g.
connected) maps. It takes an approach similar to Tribal Trouble,
choosing an algorithm which generates well-formed maps 'most' of the
time. The white paper on map generation for Tribal Trouble includes a
proof to establish that a sufficiently high percentage of maps are
well formed. Angband also lets the player drive the generate and test
loop, by allowing an infinite number of levels to be entered and
making it easy to leave most levels without penalty.

Andrew

Andrew Doull

Oct 26, 2009, 4:07:33 AM
to Procedural Content Generation
On Oct 26, 11:25 am, "Mark J. Nelson" <mnel...@cc.gatech.edu> wrote:
> Julian Togelius wrote:
> > I would imagine that games like Civilization use some rule-based
> > mechanism for creating their landscapes, one which generates a single
> > instance and is guaranteed to stop after a certain number of steps.
> > But I don't know of any actual examples of such "constructive"
> > algorithms. Do any of you know of any?
>
> I recently ran across a description of the galaxy-generation algorithm
> for _Elite_ (1984), which has the amusing motivation of reducing RAM
> requirements--- content is procedurally generated (deterministically) so
> it doesn't have to be actually stored.

There are several games which have used what I classify as dynamic
world generation with this motivation in mind
(http://pcg.wikidot.com/pcg-algorithm:dynamic-world-generation). The
most written-about example is Infinity: The Quest for Earth, whose
developer has an excellent technical blog at
http://www.gamedev.net/community/forums/mod/journal/journal.asp?jn=263350.

Andrew

Andrew Doull

Oct 26, 2009, 4:11:46 AM
to Procedural Content Generation
On Oct 26, 6:57 pm, Andrew Doull <andrewdo...@gmail.com> wrote:
> On Oct 23, 6:59 am, Tarn Adams <tarn.ad...@gmail.com> wrote:
>
> > As far as I know, random dungeon generators in Roguelike games (Rogue,
> > Nethack, Moria/Angband, ADOM, Dungeon Crawl, etc.) also generally just
> > make one dungeon each time without rejects.
>
> The Angband generation algorithm doesn't reject, but it is also not
> constructive in the sense of guaranteeing well-formed (e.g.
> connected) maps.

Sorry to reply to my own post, but it's worth pointing out that the
tunnel connection algorithm used for this is so fragile that
practically every Angband variant modifies it to use a generate and
test approach.

Andrew