The Simple Genetic Algorithm: Foundations And Theory (Complex Adaptive Systems) Book 15


Gifford Brickley

Jul 11, 2024, 11:31:38 AM
to tickvolnetfwilf

A standard representation of each candidate solution is as an array of bits (also called bit set or bit string).[4] Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming; a mix of both linear chromosomes and trees is explored in gene expression programming.
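The fixed-size alignment described above is what makes simple crossover possible: two bit strings can be cut at the same position and their tails swapped. A minimal one-point crossover sketch in Python (the function name and list-of-bits encoding are illustrative choices, not from the text):

```python
import random

def one_point_crossover(parent_a, parent_b):
    """Recombine two equal-length bit strings at a random cut point."""
    assert len(parent_a) == len(parent_b)
    cut = random.randrange(1, len(parent_a))  # cut strictly inside the string
    child_a = parent_a[:cut] + parent_b[cut:]
    child_b = parent_b[:cut] + parent_a[cut:]
    return child_a, child_b

a = [1, 1, 1, 1, 1, 1]
b = [0, 0, 0, 0, 0, 0]
c1, c2 = one_point_crossover(a, b)
```

Because both parents have the same length, the children are automatically well-formed chromosomes; with variable-length representations this alignment guarantee is lost, which is why crossover becomes more complex there.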

The simplest algorithm represents each chromosome as a bit string. Typically, numeric parameters can be represented by integers, though it is possible to use floating-point representations. The floating-point representation is natural to evolution strategies and evolutionary programming. The notion of real-valued genetic algorithms has been offered, but it is really a misnomer because it does not truly reflect the building-block theory proposed by John Henry Holland in the 1970s. That theory is not without support, though, based on theoretical and experimental results (see below). The basic algorithm performs crossover and mutation at the bit level. Other variants treat the chromosome as a list of numbers which are indexes into an instruction table, nodes in a linked list, hashes, objects, or any other imaginable data structure. Crossover and mutation are performed so as to respect data element boundaries. For most data types, specific variation operators can be designed. Different chromosomal data types seem to work better or worse for different specific problem domains.
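Bit-level mutation, the other basic operator mentioned above, can be sketched as follows (a minimal illustration; the per-bit flip rate and function name are assumptions for the example):

```python
import random

def bit_flip_mutation(chromosome, rate=0.01):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ 1 if random.random() < rate else bit
            for bit in chromosome]

mutant = bit_flip_mutation([0, 1, 0, 1, 1, 0], rate=0.1)
```

For non-binary chromosomes the same idea applies element-wise, with the flip replaced by a type-appropriate perturbation (e.g. adding Gaussian noise to a real-valued gene) so that data element boundaries are respected.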

Introduces topics in complex adaptive systems, including: definitions of complexity, fractals, dynamical systems and chaos, cellular automata, artificial life, game theory, neural networks, genetic algorithms and network models. Regular programming projects are required. Prerequisite: 251L and MATH 1512.
Restriction: admitted to School of Engineering.

Description: Biological organisms cope with the demands of their environments using solutions quite unlike the traditional human-engineered approaches to problem solving. Biological systems tend to be adaptive, reactive, and distributed. Bio-inspired computing is a field devoted to tackling complex problems using computational methods modeled after design principles encountered in nature. This course is strongly grounded in the foundations of complex systems and theoretical biology. It aims to provide an understanding of the distributed architectures of natural complex systems, and how those are used to produce computational tools with enhanced robustness, scalability, and flexibility, and which can interface more effectively with humans. It is a multi-disciplinary field strongly based on biology, complexity, computer science, informatics, cognitive science, robotics, and cybernetics.

Abstract: Water resources are the key factors affecting the sustainable development of inland river irrigation districts. The establishment of a water resources management model is helpful to realize the coordinated development of water, society, and ecology. To address the contradiction between water use and ecological vulnerability, this study drew on complex adaptive system (CAS) theory and adopted an agent-based modeling (ABM) approach. Taking the Huaitoutala irrigation district as the research object, a water resource management model considering ecological balance was established, with the potential water resources tapping in the source area as an effective constraint. This study took 2016 as the datum year; the water consumption and comprehensive benefits of four water-saving irrigation scenarios in different characteristic years were simulated and optimized under the conditions of the current water supply and 10% and 15% potential water resources tapping. The results showed that the model considering the behavior and adaptability of the agents can well optimize and simulate water use in the irrigation district. Under the application of potential water resources tapping and high-efficiency water-saving technology, the water utilization efficiency (WUE) of the irrigation area was significantly improved. The comprehensive benefits of the irrigation district increased the proportion of ecological water, which was conducive to the sustainable development of the irrigation district and the ecological protection of inland rivers.

Keywords: complex adaptive system; inland river irrigation district; water resources allocation; adaptive agent; ecological development

Genetic algorithms, developed by John Holland and his collaborators in the 1960s and 1970s, are a model or abstraction of biological evolution based on Charles Darwin's theory of natural selection. Holland was the first to use crossover, recombination, mutation and selection in the study of adaptive and artificial systems. These genetic operators are the essential components of genetic algorithms as a problem-solving strategy. Since then, many variants of genetic algorithms have been developed and applied to a wide range of optimization problems, from graph colouring to pattern recognition, from discrete systems (such as the travelling salesman problem) to continuous systems (e.g., the efficient design of airfoils in aerospace engineering), and from financial markets to multiobjective engineering optimization.

Each iteration, which leads to a new population, is called a generation. Fixed-length character strings are used in most genetic algorithms at each generation, although there is substantial research on variable-length strings and coding structures. The coding of the objective function is usually in the form of binary arrays or real-valued arrays in adaptive genetic algorithms. An important issue is the formulation or choice of an appropriate fitness function that determines the selection criterion in a particular problem. For the minimization of a function using genetic algorithms, one simple way of constructing a fitness function is to use the simplest form \(F = A - y\) with \(A\) being a large constant (though \(A = 0\) will do if the fitness does not need to be non-negative) and \(y = f(\mathbf{x})\ .\) Thus the objective is to maximize the fitness function and subsequently to minimize the objective function \(f(\mathbf{x})\ .\) However, there are many different ways of defining a fitness function. For example, we can assign an individual a fitness relative to the whole population \[ F(x_i) = \frac{f(\xi_i)}{\sum_{i=1}^{N} f(\xi_i)}, \] where \(\xi_i\) is the phenotypic value of individual \(i\ ,\) and \(N\) is the population size. The form of the fitness function should ensure that chromosomes with higher fitness are selected more often than those with lower fitness. Poor fitness functions may result in incorrect or meaningless solutions.
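The construction \(F = A - y\) and fitness-proportionate selection can be sketched in Python as follows (a minimal illustration; the value of the constant \(A\), the helper names, and roulette-wheel selection as the specific selection scheme are assumptions for the example):

```python
import random

def fitness_from_objective(y, A=1000.0):
    # F = A - y turns minimizing the objective y into maximizing fitness F;
    # A is a large constant that keeps F non-negative over the search range.
    return A - y

def roulette_select(population, fitnesses):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    running = 0.0
    for individual, f in zip(population, fitnesses):
        running += f
        if running >= pick:
            return individual
    return population[-1]  # guard against floating-point round-off

chromosomes = ["0110", "1011", "0001"]
objective_values = [4.0, 1.0, 9.0]
fits = [fitness_from_objective(y) for y in objective_values]
parent = roulette_select(chromosomes, fits)
```

This realizes the requirement stated above: chromosomes with higher fitness are selected more often than those with lower fitness, while low-fitness chromosomes are not excluded entirely.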

In 1975, Holland published the groundbreaking book Adaptation in Natural and Artificial Systems, which has been cited more than 50,000 times. Intended to be the foundation for a general theory of adaptation, this book introduced genetic algorithms as a mathematical idealization that Holland used to develop his theory of schemata in adaptive systems. Later, genetic algorithms became widely used as an optimization and search method in computer science.

In recent work in the theoretical foundations of cognitive science, it has become commonplace to separate three distinct levels of analysis of information-processing systems. David Marr (1982) has dubbed the three levels the computational, the algorithmic, and the implementational; Zenon Pylyshyn (1984) calls them the semantic, the syntactic, and the physical; and textbooks in cognitive psychology sometimes call them the levels of content, form, and medium (e.g. Glass, Holyoak, and Santa 1979). But a mistaken distinction (however ubiquitous) by any other name will do no more work. I think that there is something promising but nonetheless misleading about the "three-levels" analysis of cognitive systems. In what follows, I'll try to show this in part by bringing to bear some of what the recent philosophy of science has to offer on the analysis of complex systems. First I'll look briefly at a few general ideas in the philosophy of science on this general topic; I'll then turn to an explicit look at the Marr-style three-level view and its shortcomings, and suggest how this account might be revised to avoid these problems.

1. Levels of organization

It has become a central tenet of the current conventional wisdom in the philosophy of science that complex systems are to be seen as typically having multiple levels of organization. The standard model of the multiple levels of a complex system is a rough hierarchy, with the components at each ascending level being some kind of composite made up of the entities present at the next level down. We thus often have explanations of a system's behavior at higher (coarser-grained) and lower (finer-grained) levels. The behavior of a complex system -- a particular organism, say -- might then be explained at various levels of organization, including (but not restricted to) ones which are biochemical, cellular, and psychological.
And similarly, a given computer can be analyzed and its behavior explained by characterizing it in terms of the structure of its component logic gates, the machine language program it's running, the Lisp or Pascal program it's running, the accounting task it's performing, and so on. Higher-level explanations allow us to explain as a natural class things with different underlying physical structures -- that is, types which are multiply realizable. [See, e.g., Fodor (1974), Pylyshyn (1984), esp. chapter 1, and Kitcher (1984), esp. pp. 343-6, for discussions of this central concept.] Thus, we can explain generically how transistors, resistors, capacitors, and power sources interact to form a kind of amplifier independent of considerations about the various kinds of materials composing these parts, or account for the relatively independent assortment of genes at meiosis without concerning ourselves with the exact underlying chemical mechanisms. Similar points can be made for indefinitely many cases: how an adding machine works, an internal combustion engine, a four-chambered heart, and so on. This strength of capturing generalizations has many aspects. One is of course that higher-level explanations typically allow for reasonable explanations and predictions on the basis of far different and often far less detailed information about the system. So, for example, we can predict the distribution of inherited traits of organisms via classical genetics without knowing anything about DNA, or predict the answer a given computer will give to an arithmetic problem while remaining ignorant of the electrical properties of semiconductors. What's critical here is not so much the fact of whether a given higher-level phenomenon is actually implemented in the world in different physical ways. Rather, it's the indifference to the particularities of lower-level realization that's critical.
To say that the higher-level determination of process is indifferent to implementation is roughly to say that if the higher-level processes occurred, regardless of implementation, this would account for the behaviors under consideration. This general view of complex systems depends on idealizing about the behavior of particular lower-level structures; viewing them simply in terms of their normal input/output functions and their local contribution to the behavior of the larger system rather than in terms of the details of their internal structures. Generality of explanation is achieved by taxonomizing subsystems via input/output functions while remaining indifferent to the internal structure by which they might produce that function. The computer is again the easiest illustration: The analysis of the behavior of a given computer in terms of, say, the LISP program it's running is a perfectly good one; but it leaves totally unspecified how a given primitive Lisp function (such as car(students) -- i.e. "give me the first item on the list students") is calculated. The Lisp program is completely compatible with any way you like of representing the lists in memory, or even with different underlying machine architectures (e.g. it could be implemented on a Von Neumann machine, a Turing machine, or whatever). The question of exactly when such idealization about the behavior of components is appropriate is a difficult one. But clearly at least this constraint is critical: The idealization must be close enough to the real behavior of the system to adequately capture it in some normal range of working conditions. Thus, an adder which generates overflow errors only above 10^1,000,000 will typically be perfectly well idealized simply as an adder, but one which does so anywhere above 7 probably will not. Exactly how close the real behavior and the ideal must match may be a question with no perfectly general and systematic answer.

2. Function and context

The idea of specifying the components of a larger system in terms of their overall functional role with respect to that embedding system is a central aspect of our general framework of explanation which plays several key roles in the understanding of complex systems. The functional properties of parts of complex systems are context-dependent properties -- ones which depend on occurring in the right context, and not just on the local and intrinsic properties of the particular event or object itself. The phenomenon of context-dependence of course shows up often in the taxonomies of various sciences. Examples abound: The position of a given DNA sequence with respect to the rest of the genetic material is critical to its status as a gene; type-identical DNA sequences at different loci can play different hereditary roles -- be different genes, if you like. So for a particular DNA sequence to be, say, a brown-eye gene, it must be in an appropriate position on a particular chromosome. Similarly for a given action of a computer's CPU, such as storing the contents of internal register A at the memory location whose address is contained in register X: Two instances of that very same action might, given different positions in a program, differ completely in terms of their functional properties at the higher level: At one place in a program, it might be "set the carry digit from the last addition", and at another, "add the new letter onto the current line of text". And for mechanical systems -- e.g. a carburetor: The functional properties of being a choke or being a throttle are context-dependent. The very same physically characterized air flow valve can be a choke in one context (i.e. when it occurs above the fuel jets) and a throttle in another (when it occurs below the jets); whether a given valve is a choke or a throttle depends on its surrounding context.
By"contextualizing" objects in this way we shift from acategorization of them in terms of local and intrinsic propertiesto their context-dependent functional ones.One critical role that such functional analyses play is that ofilluminating the lower levels of the system: We quite typicallyneed to know what a complex system is doing at a higherlevel in order to find out how at the lower level itaccomplishes that task -- that is, we often need to know thefunction of complex system being analyzed to know what aspects ofstructure to look at. Not only is there typically a mass ofdetail at the lower levels which must be sorted through, butsalience at the lower levels can be misleading, and can fail topick out which lower-level properties are important tounderstanding the overall working of the complex system. So, totake a standard example: If you think that the heart is basicallya noisemaker, the lower-level properties which will seem mostsignificant might be things like the resonant frequency of thevarious chambers, or the transient noises created by the movementof the various valves. Or if you think of a computer as a radiosignal emitter, you will see as most salient the high-frequencyswitching transients of the transistors and the exact frequency ofclock signals, and basically ignore the difference between 0's and1's represented by different DC voltages. Understanding thebehavior of a complex system requires knowing which aspects of thecomplex mass of lower-level properties are significant in making acontribution to the overall behavior of the system; and thisdepends on having some sense of the higher-level functioning ofthe system. (Oatley 1980 gives a nice illustration of some ofthese points by discussing a thought-experiment where we imaginefinding a typical microcomputer on trip to a distant planet and --not knowing what it is -- apply various sorts of researchtechniques to it. 
The example does a nice job of illustrating some of the biases built into various methods and in particular brings out the importance of putting lower-level data into a higher-level framework.) A couple of observations about function and context-dependence which will be important to us later: One is that considerations about context-dependence can and should arise at more than one level of analysis of a complex system, and may have quite different answers at the different levels. For example, the properties of DNA sequences as objects of chemistry depend only on their local physical structure. But their properties as genes depend on their overall contribution to the phenotype; and what contribution they make to the phenotype is highly dependent on context -- on where the sequence is in relation to the rest of the genetic materials, and on the precise nature of the coding mechanisms which act on the sequences. Or from the point of view of a LISP program, a function like (car (list)) (i.e. `get the first item on the list named "list"') is a local characterization of that action, whereas an appropriate functional characterization of that operation in a particular case might be "get the name of the student with the highest score on the midterm". But from the machine language point of view, the Lisp characterization would be a functional analysis of some sequence of machine language instructions -- instructions which might play a different role in some other context. (Failing to notice this level-relativity of the issue of context-dependence has had widespread consequences in the philosophy of the higher-level sciences.
For a clear example of this, see McClamrock (in press), which shows how Fodor's defense of "methodological individualism" (Fodor 1987) relies on exactly this error.) The other observation is that contextualizing or de-contextualizing can be done without a concurrent shift in level -- that is, we might reinterpret the functions of parts against a broader background of the system without at the same time shifting the level, or size, of the parts. The shift between assembly language and machine language is roughly like this. The degree of abstraction is essentially the same: assembly language is essentially a one-to-one translation of the raw numbers which the CPU operates on into mnemonics for the functional role the number is playing in that instance. Thus, at one point in the program, a given raw number n might be the op code for a command to the CPU, and at another be data to be operated on (e.g. added, moved, etc.). The size or grain of the functional units has not changed, but the functional/contextual characterization has.

3. The three levels

In chapter 1.2 of Vision, David Marr presents his variant on the "three levels" story. His summary of "the three levels at which any machine carrying out an information-processing task must be understood":

  • Computational theory: What is the goal of the computation, why is it appropriate, and what is the logic of the strategy by which it can be carried out?

  • Representation and algorithm: How can this computational theory be implemented? In particular, what is the representation for the input and output, and what is the algorithm for the transformation?

  • Hardware implementation: How can the representation and algorithm be realized physically?
