
Compositional Autoconstructive Dynamics

Kyle Harrington

DEMO Lab, Brandeis University, Waltham, MA, [email protected]

Emma Tosch

DEMO Lab, Brandeis University, Waltham, MA, [email protected]

Lee Spector

School of Cognitive Science, Hampshire College, Amherst, MA, [email protected]

Jordan Pollack

DEMO Lab, Brandeis University, Waltham, MA, [email protected]

Autoconstructive evolution is the idea of evolving programs through self-creation. This is an alternative to the hand-coded variation operators utilized in traditional genetic programming (GP) and the deliberately limited implementations of meta-GP. In the latter case strategies generally involve adapting the variation operators, which are then used in accordance with traditional GP. On the other hand, autoconstruction offers the ability to adapt algorithmic reproductive mechanisms specific to individuals in the evolving population. We study multiple methods of compositional autoconstruction, a form of autoconstruction based on function composition. While much of the previous work on autoconstruction has investigated traditional GP problems, we investigate the effect of autoconstructive evolution on two problems: Order, which models order-sensitive program semantics, and Majority, which models the evolutionary acquisition of semantic components. In doing so we show that compositional autoconstruction exhibits surprising dynamics of evolutionary improvement, and that compositional autoconstruction can be comparable to GP. This advance is a step towards the search for the open-ended evolution of problem-solving techniques.


1 Introduction

Evolutionary computation is a collection of techniques inspired by evolution. Evolutionary algorithms (EA) operate on populations of individuals. At the heart of any EA there is a three-step loop: evaluate, select, and vary. The details of these steps and the addition of intermediate steps constitute the study of evolutionary computation. In this research we focus on genetic programming (GP), the evolution of computer programs [13]. GP is related to genetic algorithms (GA), a field that classically studies the crossover and mutation of fixed-length bitstring genomes. GP moves beyond the fixed-length bitstring representation and uses computer programs as genomes. A detailed review of GP is beyond the scope of this paper; for additional details see [13].

Meta-evolutionary techniques have been plagued by the "meta-meta-... problem" since the proposal of the first meta-evolution algorithm [16]. The "meta-meta-... problem" is: if a meta-population varies a problem-solving population, what varies the meta-population? The simplest option is to add additional meta-layers. However, not only does the addition of meta-meta-populations not solve the problem, but it also leads to a significant cost in evaluating the performance of higher-order meta-populations. In practice the use of more than one layer of meta-evolution is avoided; instead, researchers investigate various levels of the evolutionary process.

For all intents and purposes, there are three levels at which adaptation may take place within an EA: populations, individuals, and components. Population-level adaptation generally involves update rules on evolutionary parameters such as mutation and crossover rates. Individual-level adaptation involves similar parametric adjustments; however, the parameters are unique to individuals. Component-level adaptation associates parameters with individual components that influence variation. The distinction between these levels of adaptive evolution is fuzzy: some instances of individual-level adaptation associate values with components within individuals that bias variation [2], while some forms of individual-level adaptation may have direct effects upon variation within the entire population [3]. These levels of adaptation have been described in detail elsewhere [1]; however, parametric adaptation is not the only option.

The evolution of programs in GP involves the variation of symbolic expressions. Although some of the previously mentioned techniques involve meta-evolutionary processes working with GP, the first application of GP to itself was done in the context of a sea of GP programs [14]. In this work a program is capable of using template-based matching to search the sea of programs for a match that can be permanently incorporated into the program's structure. Programs are solutions to simple boolean problems. In concluding, the researcher notes that although the probability of spontaneous emergence of productive self-replication is low, the approach of self-improving GP is in fact computationally tractable.

A notable piece of later work on meta-evolution in GP was meta-GP [5]. In this work, a population of variation operators (the meta-population) acts on a population of problem-solving programs.


The variation operators are tree-manipulation programs that are capable of performing recombinatory variation by using two solution programs to create a new candidate solution. With this additional meta-population it was shown that meta-GP can perform comparably to GP. However, the meta-population is evolved with hand-coded algorithms. This use of human-designed variation operators acting upon the meta-population circumvents the complications of layered meta-populations, but it puts an inherent limit upon the evolutionary process.

An advantage of the meta-GP approach is that it introduces less bias than most adaptive algorithms by evolving the operators of variation. However, the use of a standard EA on the meta-population is an additional constraint that is not required. Autoconstructive evolutionary algorithms (AEAs) combine all levels of adaptive algorithms while avoiding the "meta-meta-... problem." Individuals in an AEA contain their own variation code. When autoconstructing, a program may use itself, or chosen or selected individuals from the population, as inputs when producing a child program. Self-directed selection has been explored for age- and geography-based methods [17]. However, efficient autoconstruction continues to elude researchers [19].

We study a subset of autoconstruction called "compositional autoconstruction," in which programs are autoconstructively composed by other programs for the production of offspring. Three forms of composition are explored: self-composition, asymmetric collision, and collision composition, which are discussed in detail later in the paper. In this work we present a study of the dynamics of autoconstructive evolution on two problems that model particular features of the evolution of program semantics: Order and Majority [8]. The Order problem is designed to model the organization of conditional branching, such as the true and false clauses of if-statements. The Majority problem, on the other hand, models a general property of GP: the acquisition of more beneficial code than detrimental code. These problems serve to characterize autoconstructive evolution both by indicating its performance on a problem-solving task and by demonstrating its evolutionary capabilities.

2 The Push Language

Push is a stack-based programming language specifically designed for evolutionary computation [22]. Instructions, such as arithmetic, program flow, and code manipulation instructions, take (pop) input from and output (push) to a set of global typed stacks. Types include integers, floating-point numbers, booleans, code, and tags (inexactly matching identifiers that support a form of modularity [21]). A particularly innovative feature in Push is the exec stack [20]. The exec stack provides instructions for self-modification during execution through a stack. Aside from the content of the language itself, Push has two primary properties that make it useful for evolution. First, instructions do not enforce a syntactic structure because of the use of global stacks for instruction inputs and outputs. Second, when the necessary inputs for an instruction are not available on the stacks, the instruction simply has no effect and execution continues (the instruction acts as a "NOOP").


Rather than reiterating a description of the language, we present an example here and direct the reader to the detailed descriptions of the language available in [22, 20, 21].

We begin by showing an example of simple arithmetic: (7 INTEGER.+ 3 INTEGER.* 0.5 INTEGER.DUP INTEGER.+). As Push programs are evaluated left to right, we begin with 7, which is pushed onto the integer stack. INTEGER.+ is evaluated, but because there is only one value on the integer stack, INTEGER.+ has no effect. 3 is then pushed onto the integer stack. INTEGER.* pops 3 and then 7 and pushes 21. 0.5 is pushed onto the float stack. The instruction INTEGER.DUP takes the top item on the integer stack and pushes a copy of it onto the integer stack, so now there are two copies of 21 on the integer stack. Finally, INTEGER.+ pops both copies of 21 and pushes 42. The result of evaluating this expression is 0.5 and 42 on top of the float and integer stacks, respectively.
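For concreteness, the following is a minimal sketch, in Clojure, of the stack-based evaluation just described. It is not the actual Push interpreter: the state representation and the instruction table are simplified, and only the three instructions used in the example are modeled.

```clojure
;; Minimal sketch of Push-style evaluation for the arithmetic example above.
;; Not the actual Push interpreter: state and instruction set are simplified.
(def instructions
  {'INTEGER.+   (fn [{:keys [integer] :as state}]
                  (if (>= (count integer) 2)
                    (let [[a b & more] integer]
                      (assoc state :integer (conj more (+ a b))))
                    state))                          ; too few inputs => NOOP
   'INTEGER.*   (fn [{:keys [integer] :as state}]
                  (if (>= (count integer) 2)
                    (let [[a b & more] integer]
                      (assoc state :integer (conj more (* a b))))
                    state))
   'INTEGER.DUP (fn [{:keys [integer] :as state}]
                  (if (seq integer)
                    (update state :integer conj (first integer))
                    state))})

(defn eval-push
  "Evaluate a flat Push-like program left to right against typed stacks."
  [program]
  (reduce (fn [state token]
            (cond
              (integer? token) (update state :integer conj token)
              (float? token)   (update state :float conj token)
              :else            (if-let [f (instructions token)]
                                 (f state)
                                 state)))            ; unknown token => NOOP
          {:integer '() :float '()}
          program))

;; (eval-push '(7 INTEGER.+ 3 INTEGER.* 0.5 INTEGER.DUP INTEGER.+))
;; => {:integer (42), :float (0.5)}
```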

The PushGP evolutionary framework has been the primary mechanism for evolving problem solutions with Push. In general PushGP can act like any GP system; it is distinguished from standard GP primarily by its Push-program-specific algorithms. Although many forms of variation have been implemented in PushGP, crossover, mutation, and replacement are the only forms of non-autoconstructive reproduction used in this study.

Autoconstruction was one of the primary considerations in the design of Push. The code type was introduced to allow for simple manipulation of programs and has been the primary datatype involved in autoconstruction studies thus far [22, 19]. The code type implements a large number of instructions, many of which are inspired by Common Lisp. This diverse autoconstructive vocabulary has led to a number of interesting solutions presented in the previously mentioned studies. However, the combinatorics of evolving programs are exponentially unfavorable with respect to the number of possible instructions, even with Push's evolution-friendly design.

The most recent version of Push has been developed in the programming language Clojure, a mostly-functional version of Lisp that runs on the Java Virtual Machine [9]. Clojure has revisited the idea of the zipper, a functional data structure that represents a location within a tree [11]. A zipper allows for simple tree traversal with directional commands. Zippers have been incorporated into Push as a reified data type with their own stack. In addition to the traditional zipper movements, instructions are added that allow for subtree manipulation and random movement. Although a larger number of zipper instructions have been added to Push, the subset used in this study is shown in table 1. The simplicity of these tree-based instructions facilitates autoconstruction.
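To make the zipper idea concrete, here is a small example using Clojure's standard clojure.zip library, which provides the kind of location-based navigation and subtree replacement that the ZIP instructions in table 1 build on. The wiring of these operations into Push's zipper stack is omitted; this is only an illustration of the underlying data structure, not code from the Push implementation.

```clojure
;; Navigating and editing a nested-list program with clojure.zip.
(require '[clojure.zip :as zip])

(def program '(3 (5) (16 (10) 10) -8))

;; Build a zipper over the program and walk to a subtree, mirroring
;; ZIP.DOWN / ZIP.RIGHT movements.
(def loc
  (-> (zip/seq-zip program)
      zip/down          ; at 3
      zip/right         ; at (5)
      zip/right))       ; at (16 (10) 10)

(zip/node loc)                       ;=> (16 (10) 10)
(zip/root (zip/replace loc '(42)))   ;=> (3 (5) (42) -8)

;; Swapping the subtrees rooted at two locations (cf. ZIP.SWAPSUB) amounts to
;; two replacements, each using the other location's node. Each zipper carries
;; its own tree, so this also works across two different programs.
(defn swap-subtrees [loc-a loc-b]
  [(zip/replace loc-a (zip/node loc-b))
   (zip/replace loc-b (zip/node loc-a))])
```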

3 Compositional Autoconstruction

We use the phrase "compositional autoconstruction" here to emphasize a view of autoconstruction as something akin to function composition, in which programs are applied to programs in order to produce new programs.1 This usage follows that of Fontana [7], and it also highlights connections to the "compositional evolution" work of Watson [23], which emphasizes additional forms of biological composition such as symbiogenesis.


Instruction    Description
ZIP.DOWN       Move the top zipper deeper in the tree, if possible.
ZIP.LEFT       Move the top zipper to the left sibling, if possible.
ZIP.RIGHT      Move the top zipper to the right sibling, if possible.
ZIP.RAND       Replace the subtree rooted at the top zipper's location with a random subtree.
ZIP.ROOT       Move the top zipper to the root of the tree.
ZIP.RLOC       Move the top zipper to a random location in the subtree rooted at the top zipper.
ZIP.RRLOC      Move the top zipper to a random location in the tree.
ZIP.SWAP       Pop two zippers and push them back onto the stack in reverse order.
ZIP.SWAPSUB    Swap the subtrees rooted at the top two zippers.
INTEGER.ERC    An ephemeral random integer in [−16, −1] ∪ [1, 16].
EXEC.NOOP      A placeholder instruction used for "padding" that has no effect.

Table 1: Instruction set used in this study.

In the more specific context of autoconstructive evolution, "compositional autoconstruction" emphasizes the nature of the inputs and outputs of programs that produce offspring in an autoconstructive population. In particular, we will consider programs that are applied only to their own code in order to produce offspring (without having access to other programs in the population), programs that are applied only to the code of single, randomly selected programs in the population to produce offspring (without having access to their own code), and programs that are applied both to their own code and to the code of other single programs in the population. In contrast, many previous autoconstructive evolution systems have allowed arbitrary access to any code in the population [17, 18], while others have restricted access but without the systematic delineation of access restrictions that we present here [19].

In the present work, because we are interested in studying the self-organized dynamics of autoconstructive populations, we also avoid most of the constraints on reproductive behavior that have been used in prior systems (e.g., the "improvement-based" constraints on reproduction in [19]). The only constraint on autoconstruction in this work, aside from the constraint that programs cannot exceed the global size limit, is a "no cloning" rule: a child may not have the exact same program as its parent.

1 Note that this is somewhat different from the standard mathematical usage, in which composition refers to the application of a function to the output of another function. In compositional autoconstruction this kind of composition does occur across generations, with the outputs of programs (which are themselves programs) serving as the input to the programs of the next generation. But within the reproductive processes of a single generation, programs are applied to the code of other programs, not to their outputs.

Furthermore, for all experiments presented in this study the zipper instructions have no direct interaction with the problem-specific instructions, and vice versa. This is because both the Order and Majority problems are scored via inspection as opposed to evaluation. This may help to facilitate autoconstruction, and in any event it helps us to focus our study on the dynamics of autoconstructive populations by allowing autoconstructive and problem-specific instructions to be mostly independent. Although some of the more ambitious goals of autoconstruction involve code-aware instructions (see [17, 19]), we show that autoconstruction can have acceptable performance with just simple tree-based instructions.

3.1 Self-composition

Self-composition is one of the most basic forms of compositional autoconstruction. A similar approach is taken in [19], where Push's code stack is used with a large instruction set. This form of autoconstruction is a type of mutation: during self-composing autoconstruction a program can perform mutations at particular locations in a copy of its own program in order to create a child. The reproduction rule for self-composition can be written as

f(f) = h

where f is the parent and h is the child. This form of autoconstruction is entirely self-contained within an individual, and for this reason there is no external dependency on the GP population. The next form of autoconstruction we consider is asymmetric collision, which introduces a population-level dependency.

3.2 Asymmetric Collision

Asymmetric collision compositional autoconstruction is inspired by the AlChemy system [7]. In the AlChemy system a collection of objects (programs) is contained within a reactor. The system evolves by iteratively selecting two objects at random which "interact." Interaction occurs via function composition, the result of which is subjected to a collision rule. In practice the collision rule is used to define boundary conditions, such as an evaluation limit (to ensure termination) and replication of input functions. In our system asymmetric collision follows essentially the same algorithm: two programs are selected from the population and one is composed by the other. Asymmetric collisions introduce indirect interaction amongst individuals within a population. Asymmetric collision composition can be expressed as

f(g) = h


where f is the active parent, g is the passive parent, and h is the child. Like self-composition, asymmetric collision composition is a form of mutation. The only code available to the autoconstructive process is g's code and code obtained from the random code generator via ZIP.RAND. In the next section we introduce collision compositional autoconstruction, which gives programs the ability to exchange genetic information during the autoconstructive process.

3.3 Collision

Collision compositional autoconstruction is a recombinatory form of autoconstruction. We present a simple binary form of collision compositional autoconstruction in the form

f(g, f) = h

where f is the active parent, g is the passive parent, and h is the child. The composition of two programs allows exchange of genetic information between the two programs. The importance of recombination in evolutionary computation has been emphasized in the fields of genetic algorithms and genetic programming [10, 13]. We now go on to explore the properties of these forms of compositional autoconstruction.
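The three reproduction rules can be summarized as Clojure function skeletons. The function run-autoconstruction below is hypothetical: it stands in for executing the active parent's zipper code with the given programs available as inputs, which in the real system is done by the Push interpreter. It is given a trivial placeholder definition here only so that the sketch is self-contained.

```clojure
;; Sketch of the three composition modes as reproduction rules. Names do not
;; correspond to the actual PushGP implementation.
(defn run-autoconstruction
  "Hypothetical stand-in for running the active parent f's zipper code with
  the given input programs on the zipper stack. A real implementation would
  execute f in the Push interpreter; this placeholder returns the first input."
  [f inputs]
  (first inputs))

(defn self-composition [f]
  ;; f(f) = h : the parent operates on a copy of its own code.
  (run-autoconstruction f [f]))

(defn asymmetric-collision [f g]
  ;; f(g) = h : the active parent operates only on the passive parent's code.
  (run-autoconstruction f [g]))

(defn collision [f g]
  ;; f(g, f) = h : the active parent sees both codes, enabling recombination.
  (run-autoconstruction f [g f]))

(defn reproduce
  "Apply a composition mode and enforce the no-cloning rule: reject children
  identical to the active parent."
  [compose f & parents]
  (let [child (apply compose f parents)]
    (when (not= child f) child)))
```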

4 Autoconstruction of Program Semantics

The Order and Majority problems are model problems for studying the evolution of program semantics in genetic programming [8]. The Order problem is designed to model conditional structures in programs, such as the ordering of true and false clauses of if-statements. The Majority problem models the evolutionary accumulation of semantically important components. An analysis of the hardness of these two problems is presented in [4]. When originally presented, these problems were used to model the effects of different types of variation operations. Within the context of Push and autoconstructive evolution both problems are of interest; however, the implications of these problems in Push are slightly different. These implications are presented in the relevant sections.

For both problems the problem size was 16. The lowest possible fitness is 0, while the highest is 16. The problem-specific "instructions" are just integers in the ranges [−16, −1] and [1, 16]; no other Push instructions are used aside from the ZIP instructions listed in table 1 and the "padding" instructions described below. The positive integers represent components that should occur earlier during evaluation in the Order problem, and beneficial components in the Majority problem. The negative integers represent detrimental components in both problems. In these experiments the autoconstructive configurations are compared to one another and to standard PushGP, using instruction padding (the inclusion of NOOP instructions) in any configuration with fewer than the maximum number of instructions of any configuration. This is done to make the combinatorics of the problem spaces equivalent.2


Parameter                        Value
Selection method                 tournaments of 5
Population size                  1000
Maximum program size             500
Maximum number of generations    1001
Number of runs per condition     250

Table 2: Parameters used.

The parameters used in the following experiments are presented in table 2. The performance of parameter settings is measured with the median success generation and computational effort. The median success generation is the generation in which an individual in the population achieves a fitness of 16. Computational effort is a metric developed for GP [13] to quantify the amount of computation required to solve a problem. It is computed by estimating the probability that a population contains a success at a given generation. This probability distribution is then used to compute how many individuals must be processed to solve the problem with a certain probability, z. In this study z = 99%. The other metric used is diversity, which is the ratio of the number of distinct programs to the total number of programs in the population. In this study we show that compositional autoconstruction can be comparable to GP, meaning that its computational effort and median success generation are on par with those of GP.
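As a reference for how these metrics are computed, the following sketch implements Koza's computational-effort calculation and the diversity ratio in Clojure, assuming per-run results recorded as the generation of first success (nil for unsuccessful runs). This is our own restatement of the standard formulas, not code from the system under study.

```clojure
;; Koza's computational effort: for each generation i, estimate the cumulative
;; success probability P(M, i), the number of independent runs R needed to
;; succeed with probability z, and the individuals processed M * (i + 1) * R;
;; the effort is the minimum of these values over all generations.
(defn computational-effort
  "success-gens: generation of first success per run (nil if never solved).
  m: population size. z: target probability (e.g. 0.99). max-gen: last
  generation considered. Returns nil if no run succeeded."
  [success-gens m z max-gen]
  (let [runs (count success-gens)
        efforts
        (for [i (range (inc max-gen))
              :let [successes (count (filter #(and % (<= % i)) success-gens))]
              :when (pos? successes)
              :let [p (/ successes runs)
                    r (max 1 (long (Math/ceil (/ (Math/log (- 1 z))
                                                 (Math/log (- 1.0 p))))))]]
          (* m (inc i) r))]
    (when (seq efforts)
      (apply min efforts))))

(defn diversity
  "Ratio of distinct programs to total programs in a population."
  [population]
  (/ (count (distinct population)) (count population)))

;; e.g. (computational-effort [3 7 nil 5] 1000 0.99 10)
```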

4.1 Order

The stack-based nature of Push means that the Order problem represents any order-specific binary stack-based instruction, such as division, subtraction, conditionals, loops, code manipulation, etc. Although it is possible that intermediate instructions could interfere with stack contents (i.e., the addition or removal of stack items), we do not introduce distance between complementary instructions.

The fitness of an Order solution, r(f), is computed as

∀ z_i ∈ [1, s]:  v_i = 1 if M(z_i, f) < M(−z_i, f), otherwise v_i = 0

r(f) = ∑_i v_i        (1)

where s is the problem size and M(z, f) is the number of elements preceding the first occurrence of z within program f in depth-first order. An example program is (ZIP.RRLOC (3 (5) (16 (10) ZIP.RAND 10) -8) -6 (-8 (ZIP.RIGHT 13)) (-1 ((-11 (-6 13 (9)) -15) 8 (12) 8 -15) 10) (ZIP.RLOC) (ZIP.ROOT) 12). This autoconstructive mutation program has a fitness of 7 and produces a child using standard subtree replacement mutation (which is implemented by the first two zipper instructions, with the subsequent zipper instructions having no effect).

2 Collision composition uses 2 additional instructions (ZIP.SWAP and ZIP.SWAPSUB) not accounted for in the instruction set padding.

Figure 1: Comparison of compositional autoconstruction to GP on Order. Fitness changes are normalized to range from 0 to a maximum of 1. Panels: (a) computational effort, (b) median success generation, (c) GP, (d) self-composition, (e) asymmetric collision, (f) collision.
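A direct way to compute the Order fitness of a program represented as a nested list is sketched below, taking leaves in depth-first order via flatten. Treating an absent integer as if it occurred after all leaves is an assumption made for illustration, not a detail taken from the paper.

```clojure
;; Sketch of the Order fitness in equation (1) for nested-list programs.
(defn first-position
  "Number of leaves preceding the first occurrence of z in f (depth-first),
  or the total number of leaves when z does not occur (assumption)."
  [z f]
  (count (take-while #(not= % z) (flatten f))))

(defn order-fitness
  "Count of i in 1..s for which i appears before -i in the program."
  [f s]
  (count (filter (fn [i] (< (first-position i f)
                            (first-position (- i) f)))
                 (range 1 (inc s)))))
```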


Overall performance on the Order problem is shown in figure 1. It is clear from the computational effort scores that Order is a fairly easy problem for all of the evolutionary techniques to solve. While GP outperforms autoconstruction in computational effort, autoconstruction performs similarly when considering the median success generation. Autoconstruction's higher computational effort is not surprising for two reasons. First, good variation operators are used consistently, beginning with the first generation in GP, while the autoconstructive components of programs are initialized randomly. Second, Order is a simple problem on which standard GP operators are known to work well. However, this does not diminish the informativeness of this model problem.

Fitness dynamics for the Order problem are shown in figures 1c-1f. Each point represents a generation in one of the experimental conditions. In the standard GP condition (figure 1c) there are moderate increases in fitness with a high and consistent level of diversity. The consistently high level of diversity is due to the variation operators used in GP: they generally make small changes that do not significantly decrease the fitness of individuals, so distinct variants survive into subsequent generations rather than the same solutions being repeatedly evaluated. Figure 1d shows the performance of self-composition, which sometimes produces larger improvements in fitness but suffers from lower levels of diversity. Of particular note is the cluster of generations with high diversity that show large improvements in fitness. This may be due either to the ease of improving the initially random and diverse population or to speciation. Figure 1e shows asymmetric collision compositional autoconstruction. This form of autoconstruction exhibits particularly interesting dynamics: there are many changes in fitness that lead to near-optimal and optimal individuals. This method of autoconstruction also appears to maintain consistent levels of diversity, albeit not as consistently high as standard GP. Collision compositional autoconstruction (figure 1f) behaves similarly to self-composition while producing outliers with large improvements in fitness, similar to those produced by asymmetric collisions. However, this form of binary collision appears to suffer from complications in diversity, most likely due to its relation to self-composition. Potential explanations are suggested in the discussion.

The Order problem is designed to model order-based semantic dependencies within programs. As such, Order primarily serves to model the ability of an EA to produce problem solutions. The results support the hypothesis that autoconstruction is in fact capable of facilitating the search for problem solutions in ways that are notably different from traditional GP. We now consider the Majority problem, which models properties of the evolutionary process itself.

4.2 Majority

The Majority problem awards fitness for maintaining at least as many beneficial components as detrimental components. Majority has been described as a weak property of GP solutions. The primary property modeled by Majority in Push is the ability of a program to push a sufficient amount of correct information onto the stacks. This is of interest when considering the probability that an individual can be used to produce a new and better individual. In the context of autoconstruction this is a particularly important question: how well does autoconstruction improve individuals?


Figure 2: Comparison of compositional autoconstruction to GP on Majority. Fitness changes are normalized to range from 0 to a maximum of 1. Panels: (a) computational effort, (b) median success generation, (c) GP, (d) self-composition, (e) asymmetric collision, (f) collision.


The fitness of a Majority solution, r(f), is computed as

∀ z_i ∈ [1, s]:  v_i = 1 if N(z_i, f) ≥ N(−z_i, f) and N(z_i, f) > 0, otherwise v_i = 0

r(f) = ∑_i v_i        (2)

where s is the problem size and N(z, f) is the number of instances of integer z in program f. The example program presented in the previous section on Order has a Majority fitness of 1.
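The Majority fitness is even simpler to compute; the sketch below counts occurrences among the leaves of a nested-list program, following equation (2). As with the Order sketch, the nested-list representation and flattening are illustrative assumptions.

```clojure
;; Sketch of the Majority fitness in equation (2): component i scores when
;; i occurs at least once and at least as often as -i.
(defn occurrences
  "Number of instances of integer z among the leaves of program f."
  [z f]
  (count (filter #(= % z) (flatten f))))

(defn majority-fitness
  [f s]
  (count (filter (fn [i]
                   (let [pos (occurrences i f)
                         neg (occurrences (- i) f)]
                     (and (pos? pos) (>= pos neg))))
                 (range 1 (inc s)))))
```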

Overall performance on Majority is shown in figures 2a and 2b. Self-composing autoconstruction performs comparably to GP, both in terms of computational effort and median success generation. This is most likely because the Majority problem is focused on programs that contain more of a certain set of components. This implementation of the problem simply has 32 problem-specific components, all in the form of terminals, and many programs in the population will contain all of these components. In such situations, a self-composing autoconstructive program can easily evolve to preserve a beneficial set of components.

Fitness dynamics for the Majority problem are shown in figures 2c-2f. The performance of standard GP on Majority (figure 2c) is similar to that seen on the Order problem: a high level of diversity is maintained with moderate improvements in fitness. In figure 2d, the performance of self-composition on the Majority problem appears markedly poorer than on the Order problem, given the low levels of diversity and moderate improvements in fitness. However, as can be seen in both the computational effort and median success generation, self-composition is still the best-performing method of compositional autoconstruction. There is also a recurrence of the distinct cluster of generations in which large improvements in fitness are made at a relatively high level of diversity. Asymmetric collisions (figure 2e) follow a trend similar to the one seen on the Order problem, with a higher density of generations at high levels of diversity with large improvements in fitness. Collision compositional autoconstruction (figure 2f) performs particularly poorly on the Majority problem. This is surprising, given the ability of this form of composition to mimic both self-composition and asymmetric collision. Nevertheless, aside from a number of outliers, collision composition leads to lower levels of diversity and produces smaller increases in fitness than even standard GP. It is clear from these results that self-composition possesses some unique capability to perform well under certain circumstances, yet suffers from complications of diversity. On the other hand, asymmetric collisions tend to support higher levels of diversity while producing large improvements in fitness.

5 Discussion

We show that autoconstruction can perform comparably to GP on problems that model program semantics.


The initial randomization of autoconstructive programs should slow evolution, but surprisingly the extent to which it does so appears to be reasonably bounded. One of the primary expectations of autoconstruction is that it will be of greatest use for difficult problems on which standard GP fails to find solutions. While this may still be true, given that GP does outperform autoconstruction here, the comparable performance of autoconstruction on problems of program semantics elucidates the benefits of autoconstruction. Our data show that the AEAs considered here often produce larger jumps in program space than traditional GP search, which can be seen in the large increases in fitness produced by compositional autoconstruction.

On the Order problem we observe that compositional autoconstruction can generally find successful programs in a number of generations similar to standard GP. This observation is substantiated by the median success generations. Computational effort metrics are important to present because of their use as a gold standard of GP performance. However, when autoconstruction is not subjected to techniques for reinforcing diversity, it is not uncommon for diversity to crash within a population. For this reason it is important to draw conclusions primarily from the median success generation results.

The consistent diversity and significant improvements seen for asymmetric compositional autoconstruction in both the Order and Majority problems are of particular interest to practitioners of autoconstruction. The only difference between asymmetric collisions and self-composition is the subject of the composition. However, because of the implementation of these problems there is little linkage between autoconstructive functionality and the inspection-based fitness score. Thus, the primary difference between self-composition and asymmetric collisions is the source of the input during autoconstruction. This leads us to suggest that not only can the diversity in a population be increased by the application of a wide range of variation operators, but significant improvements in fitness can also be achieved with this strategy. It may be possible to further improve the performance of co-dependent autoconstruction mechanisms by increasing the number of children produced within any given generation.

Aside from the no-cloning rule we do not introduce any reproduction-based selection pressures on the population. This decision is justified by the previous finding of the diversification capabilities of autoconstructive evolution [18] and by the goal of studying autoconstruction in its simplest form. We suggest that crowding-based techniques could be used as a method for increasing diversification. Alternatively, autoconstructive improvement metrics [19] can be treated as additional objectives for use with coevolutionary techniques [12, 6, 15].

One reason that the autoconstructive instruction set used here may outperform instruction sets used in previous studies is that the zipper-based instructions provide the ability to express powerful operations with a small set of operators. This allows useful variation operators to be represented with relatively few instructions. That being said, the combination of code-stack-based autoconstruction and zipper instructions may support even more powerful forms of autoconstructive evolution.

We have shown that compositional autoconstruction can perform comparably to GP and that some forms of autoconstruction produce significantly higher improvements in fitness.


By exploring problems that model program semantics we can see that autoconstruction supports the evolutionary acquisition of beneficial components, as well as the ordering of problem-solving components. By allowing algorithmic mechanisms of variation to evolve within individuals, GP-comparable compositional autoconstruction is a step closer to the open-ended evolution of problem-solving techniques.

6 Acknowledgements

We thank the DEMO Lab and the Hampshire College CI Lab. This material is based upon work supported by the National Science Foundation under Grant No. 1017817 and 0757452. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the National Science Foundation. Thanks also to Hampshire College for support of the Hampshire College Institute for Computational Intelligence.

Bibliography

[1] Angeline, P.J., "Adaptive and self-adaptive evolutionary computations", Computational Intelligence: A Dynamic Systems Perspective (1995).

[2] Angeline, P.J., "Two self-adaptive crossover operations for genetic programming", Advances in Genetic Programming (1995).

[3] Angeline, P.J., and J.B. Pollack, "The evolutionary induction of subroutines", Proc. of the 14th Annual Conf. of the Cognitive Science Society, Lawrence Erlbaum (1992), 236–241.

[4] Durrett, G., F. Neumann, and U.M. O'Reilly, "Computational complexity analysis of simple genetic programming on two problems modeling isolated program semantics", Proc. of the 11th Workshop on Foundations of Genetic Algorithms (2011).

[5] Edmonds, B., "Meta-genetic programming: Co-evolving the operators of variation", Turk. J. Elec. Engin. (2001).

[6] Ficici, S., and J. Pollack, "Pareto optimality in coevolutionary learning", Advances in Artificial Life (2001), 316–325.

[7] Fontana, W., "Algorithmic chemistry", Artificial Life II (1991).

[8] Goldberg, D., and U.M. O'Reilly, "Where does the good stuff go, and why? How contextual semantics influences program structure in simple genetic programming", Genetic Programming (1998), 16–36.


[9] Hickey, R., "The Clojure programming language", Proc. of the 2008 Symp. on Dynamic Languages (2008), 1.

[10] Holland, J.H., Adaptation in Natural and Artificial Systems (1975).

[11] Huet, G., "Functional pearl: The zipper", J. Functional Programming (1997).

[12] Juille, H., and J.B. Pollack, "Co-evolving intertwined spirals", Proc. of the 5th Annual Conf. on Evolutionary Programming (1996).

[13] Koza, J.R., Genetic Programming: On the Programming of Computers by Means of Natural Selection (1992).

[14] Koza, J.R., "Spontaneous emergence of self-replicating and evolutionarily self-improving computer programs", Artificial Life III, vol. 17 (1994), 225–262.

[15] Popovici, E., A. Bucci, R.P. Wiegand, and E.D. de Jong, "Coevolutionary principles", Proc. of the 11th Workshop on Foundations of Genetic Algorithms (2011).

[16] Schmidhuber, J., Evolutionary principles in self-referential learning (On learning how to learn: The meta-meta-... hook), PhD thesis, Institut für Informatik, Technische Universität München (1987).

[17] Spector, L., "Autoconstructive evolution: Push, PushGP, and Pushpop", Proc. of the Genetic and Evolutionary Computation Conference (2001), 137–146.

[18] Spector, L., "Adaptive populations of endogenously diversifying Pushpop organisms are reliably diverse", Proc. of Artificial Life VIII (2002), 142.

[19] Spector, L., "Towards practical autoconstructive evolution: Self-evolution of problem-solving genetic programming systems", Genetic Programming Theory and Practice VIII (2010), 17–33.

[20] Spector, L., J. Klein, and M. Keijzer, "The Push3 execution stack and the evolution of control", Proc. of the 2005 Conf. on Genetic and Evolutionary Computation, ACM (2005), 1689–1696.

[21] Spector, L., B. Martin, K. Harrington, and T. Helmuth, "Tag-based modules in genetic programming", Proc. of the Genetic and Evolutionary Computation Conference (GECCO-2011) (2011).

[22] Spector, L., and A. Robinson, "Genetic programming and autoconstructive evolution with the Push programming language", Genetic Programming and Evolvable Machines 3, 1 (2002), 7–40.

[23] Watson, R.A., Compositional Evolution: Interdisciplinary Investigations in Evolvability, Modularity, and Symbiosis, PhD thesis (2002).
