
---

Evolution of Machines: The Golem Project

In the Golem project (Genetically Organized Lifelike Electro Mechanics) we conducted a set of experiments in which simple electro-mechanical systems evolved from scratch to yield physical locomoting machines. Like biological lifeforms, whose structure and function exploit the behaviors afforded by their own chemical and mechanical medium, our evolved creatures take advantage of the nature of their own medium - thermoplastic, motors, and artificial neurons. We thus achieve autonomy of design and construction using evolution in a limited-universe physical simulation, coupled to off-the-shelf rapid manufacturing technology. This is the first time robots have been robotically designed and robotically fabricated.


RAAM: Recursive Auto-Associative Memory

We are working on giving neural networks the ability to store and manipulate complex data structures of arbitrary size. Current research focuses on the possibility of representing large numbers of tree structures using the fractal attractors obtained by treating the network weights as an iterated function system (IFS).
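The recursive compression at the heart of a RAAM can be sketched in a few lines. This is an untrained toy with a hypothetical width and random weights, not the lab's trained networks; it only shows how two fixed-width child vectors are compressed into one parent vector, so trees of arbitrary shape reduce to a single vector by repeated application:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                   # hypothetical representation width

# Encoder: two D-dim children -> one D-dim parent; decoder inverts it.
# Weights are random (untrained); training would use auto-association.
We = rng.normal(0, 0.3, (D, 2 * D))
Wd = rng.normal(0, 0.3, (2 * D, D))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def encode(left, right):
    return sigmoid(We @ np.concatenate([left, right]))

def decode(parent):
    out = sigmoid(Wd @ parent)
    return out[:D], out[D:]

# Encode the binary tree ((a b) c) by recursive compression.
a, b, c = (rng.random(D) for _ in range(3))
root = encode(encode(a, b), c)          # a single fixed-width vector
```

Decoding applies `decode` recursively to reconstruct the leaves; the IFS view arises from iterating such maps over the space of representations.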


Coevolutionary dynamics in a minimal substrate - 'number games'

One of the central difficulties of coevolutionary methods arises from "intransitive superiority" - in a two-player game, for example, the fact that A beats B, and B beats C, does not exclude the possibility that C beats A. Such cyclic superiority in a coevolutionary substrate is hypothesized to cause cycles in the dynamics of the population such that it "chases its own tail" - traveling through some part of strategy space more than once despite apparent improvement with each step. It is often difficult to know whether an application domain contains such difficulties and to verify this hypothesis in the failure of a given coevolutionary set-up. In this work we elucidate some of the issues and concepts in an abstract domain where the dynamics of coevolution can be studied simply and directly.
We define three simple "number games" that illustrate intransitive superiority and resultant oscillatory dynamics, as well as some other relevant concepts. These include the distinction between a player's perceived performance and performance with respect to an external metric, and the significance of strategies with a multi-dimensional nature. These features alone can also cause oscillatory behavior and coevolutionary failure.
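The flavor of such a game can be sketched in a few lines. The comparison rule below (win on the dimension in which the two players differ least) is one illustrative way to get intransitivity, not necessarily the exact game from the paper:

```python
def beats(a, b):
    """a beats b if a is larger in the dimension where the two players
    differ least (ties in difference broken toward the first dimension)."""
    d = min(range(len(a)), key=lambda i: abs(a[i] - b[i]))
    return a[d] > b[d]

# A cyclic triple: A beats B, B beats C, and yet C beats A.
A, B, C = (1, 6), (4, 5), (2, 4)
```

A population evolving under such a payoff structure can keep "improving" locally while cycling through the same region of strategy space.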


Computer Evolution of Buildable Objects

We believe that not just the software, but also the physical body of a robot could be the result of an evolutionary process.

A step in this direction is the evolution of buildable lego structures, designed by the computer through the combination of genetic algorithms and physical simulation.

Examples of evolved structures include a Lego bridge that supports its own weight while cantilevering from one side to reach the other, a crane, and a table. EvoCAD is an interactive building tool in which computer and human collaborate in creating a design.

See also our project on the evolution of machines.



Spellbee

Our research explores learning science at the juncture of evolution and game theory and the technology of on-line communities. We have developed, and are currently testing, a proof of concept for our research questions. Our focus is basic science: evaluating approaches for the automatic management of peer matching and motivation in learning communities. We are implementing computational methods for matching learners together and performing experiments on motivating peers to create appropriate challenges for each other. Our system is implemented as a framework in which central issues in educational multiplayer learning groups can be tested empirically. We will continue to develop our web delivery system, addressing usability and workflow from the varying perspectives of students, educators, and evaluators. We are currently actively testing this project; please visit us.


A Game-Theoretic Investigation of Selection Methods in Evolutionary Algorithms

The replicator equation used in evolutionary game theory (EGT) assumes that strategies reproduce in direct proportion to their payoffs; this is akin to the use of fitness-proportionate selection in an evolutionary algorithm (EA). In this paper, we investigate how various other selection methods commonly used in EAs can affect the discrete-time dynamics of EGT. In particular, we show that the existence of evolutionarily stable strategies (ESS) is sensitive to the selection method used. Rather than maintain the dynamics and equilibria of EGT, the selection methods we test impose a fixed-point dynamic virtually unrelated to the payoffs of the game matrix, produce limit cycles, or induce chaos. These results are significant to the field of evolutionary computation because EGT can be understood as a coevolutionary algorithm operating under ideal conditions: an infinite population, noiseless payoffs, and complete knowledge of the phenotype space. Thus, certain selection methods, which may operate effectively in simple evolution, are pathological in an ideal-world coevolutionary algorithm, and therefore dubious under real-world conditions.
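For reference, the discrete-time replicator map that fitness-proportionate selection corresponds to can be sketched on a Hawk-Dove game. The payoff matrix below uses V=2, C=4 shifted by a baseline of 2 so all entries are positive (an assumption needed for proportionate selection), which places the mixed equilibrium at half hawks:

```python
import numpy as np

# Hawk-Dove (V=2, C=4) with a baseline fitness of 2 added so every
# payoff is positive, as fitness-proportionate selection requires.
A = np.array([[1.0, 4.0],
              [2.0, 3.0]])

def replicator_step(x, A):
    """Discrete-time replicator map: strategies reproduce in proportion
    to payoff, x_i' = x_i * (A x)_i / (x . A x)."""
    f = A @ x
    return x * f / (x @ f)

x = np.array([0.9, 0.1])               # start hawk-heavy
for _ in range(200):
    x = replicator_step(x, A)
# the trajectory settles at the mixed equilibrium (0.5, 0.5)
```

Swapping this proportional update for truncation, ranking, or tournament-style updates is exactly the kind of substitution whose dynamical consequences the paper examines.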


Automated Modular Design

The ultimate success of search algorithms as tools for design automation is critically dependent on their scaling properties. Any open-ended design problem that is based only on the direct composition of elementary building blocks grows combinatorially complex with the size of the problem, and search algorithms that use such an encoding will not scale to complex tasks.

In this project we propose the use of generative encodings as a way to evolve objects with modularity and reuse of parts. Using this system we evolve tables and locomoting robots, called genobots.
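A minimal illustration of a generative encoding (a toy string-rewriting system in the spirit of L-systems, with made-up symbols) shows how a small genome can describe a phenotype with repeated modules:

```python
def expand(axiom, rules, depth):
    """Apply string-rewrite rules `depth` times. Because one rule can be
    referenced many times, the phenotype grows much faster than the
    genome that encodes it - the essence of modularity and reuse."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(c, c) for c in s)
    return s

# A made-up genome: a body built from four copies of one leg module.
rules = {"B": "LLLL", "L": "[fj]"}      # B=body, L=leg, f=bar, j=joint
phenotype = expand("B", rules, 2)       # one leg definition, reused four times
```

A mutation to the single "L" rule changes all four legs at once, which is what lets evolution coordinate repeated parts.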

The source code for this research project is also available for download from here.

Symbiosis and its Role in Evolutionary Adaptation

Our investigations concern the role of symbiosis as an enabling mechanism in evolutionary adaptation. Biological evidence suggests that symbiosis has been a critical factor in several 'major transitions in evolution'. "Symbiogenesis" is the genesis of new species via the genetic integration of symbionts. This combination of pre-adapted parts is a fundamentally different source of evolutionary innovation from the Darwinian, gradual accumulation of 'blind' variations. Though the fact of symbiogenesis in the natural history of evolution is established, the role of symbiosis is not yet integrated into our theories of adaptation. In fact, mutually beneficial relationships are generally treated as a curio - an aberration on the otherwise relentless path of mutually exclusive competition. Our research attempts to understand when and how symbiosis will be significant in evolutionary adaptation, and to provide a coherent model of evolutionary adaptation that explains the balance between individualism and higher levels of selection.

Coevolving the "Ideal" Trainer

Coevolution provides a framework to implement search heuristics that are more elaborate than those driving the exploration of the state space in canonical evolutionary systems. However, some drawbacks must also be overcome in order to ensure continuous progress over the long term. This project introduces the concept of coevolutionary learning and presents a search procedure that successfully addresses the underlying impediments in coevolutionary search. This algorithm is presented in the context of evolving cellular automata rules for a classification task. This work resulted in a significant improvement over the previously known best rules for this task.
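The evaluation at the core of such experiments can be sketched as follows. The density classification task asks a one-dimensional binary cellular automaton to relax to all-1s exactly when the initial configuration holds a majority of 1s; the rule below is the weak hand-made radius-1 local-majority baseline, standing in for the evolved rules:

```python
def ca_step(state, rule, r=1):
    """One synchronous update of a binary CA on a ring; `rule` maps each
    (2r+1)-bit neighborhood, read as a binary number, to a cell value."""
    n = len(state)
    return [rule[sum(state[(i + j) % n] << (r - j) for j in range(-r, r + 1))]
            for i in range(n)]

def classifies(rule, state, steps=100):
    """Density classification: the CA should relax to all-1s iff the
    initial configuration contains a majority of 1s (else all-0s)."""
    target = int(2 * sum(state) > len(state))
    for _ in range(steps):
        state = ca_step(state, rule)
    return all(c == target for c in state)

# Radius-1 local majority: a weak hand-made baseline, not an evolved rule.
majority = [int(bin(n).count("1") > 1) for n in range(8)]
```

A rule's fitness is then the fraction of random initial conditions it classifies correctly, and the coevolutionary trainer shapes which initial conditions are presented.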

Internet Online Genetic Algorithms

Genetic programming is a computer learning method that imitates nature's selection process to lead a population of computer programs towards improving levels of performance.

Tron is a dynamic game that is difficult for computers to learn. Playing against itself, a computer might believe it is doing a good job when it is not, because it lacks a point of reference (i.e., a really good player) to compare against.

In this experiment, we have put a genetic learning algorithm online. A "background" GA generates players by having the computer play itself. A "foreground" GA leads the evolutionary process, evaluating players by their performance against real people.
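The selection loop underlying either GA can be sketched on a toy problem. This is a generic illustration on OneMax (maximize the number of 1-bits) with made-up parameters, not the Tron players or their representation:

```python
import random

def onemax_ga(n_bits=32, pop_size=40, gens=100, p_mut=0.02, seed=0):
    """Minimal generational GA on OneMax: binary-tournament selection,
    one-point crossover, and per-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            a, b = rng.sample(pop, 2)          # binary tournament
            return a if sum(a) >= sum(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutate
            nxt.append(child)
        pop = nxt
    return max(sum(ind) for ind in pop)
```

In the online experiment, the fitness call is replaced by games: self-play in the background GA, games against human visitors in the foreground GA.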


Learning Backgammon

Following Tesauro's work on TD-Gammon, we used simple hill-climbing in a 4000-parameter feed-forward network to develop a competitive backgammon evaluation function. No back-propagation, reinforcement, or temporal difference learning methods were employed. Instead we start with an initial champion of all zero weights and proceed simply by playing the current champion network against a slightly mutated challenger and changing weights when the challenger wins. The success of so simple a learning method indicates that backgammon deserves further study as to why such learning is so successful in this domain.
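The champion-challenger loop can be sketched on a toy objective. Here "winning a game" is replaced by scoring closer to a hidden target vector, an assumption standing in for an actual backgammon match, and the winner simply replaces the champion:

```python
import random

def hillclimb_selfplay(dim=10, steps=2000, sigma=0.1, seed=1):
    """Champion-vs-challenger hill-climbing in weight space, on a toy
    objective: the 'match' is decided by distance to a hidden target."""
    rng = random.Random(seed)
    target = [rng.uniform(-1, 1) for _ in range(dim)]
    score = lambda w: -sum((wi - ti) ** 2 for wi, ti in zip(w, target))
    champ = [0.0] * dim                    # all-zero initial champion
    for _ in range(steps):
        challenger = [w + rng.gauss(0, sigma) for w in champ]
        if score(challenger) > score(champ):
            champ = challenger             # the winner becomes champion
    return score(champ)
```

In the real experiment the comparison is a noisy, game-mediated one, which is precisely what makes its success with so blunt a method interesting.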


Coevolutionary Genetic Programming

Using a competitive fitness, we were able to evolve an elegant solution to the intertwined spirals problem. This function uses 52 GP primitives and breaks the plane into two subproblems which combine to form a spiral. You can play with a live MATLAB demo of this work!

Autonomous Evolution of Dynamic Gaits

A trend in robotics is towards legged robots. One of the issues with legged robots is the development of gaits. Typically gaits are developed manually. In this project we report our results on the autonomous evolution of dynamic gaits for the Sony Quadruped Robot. Fitness is determined using the robot's digital camera and infrared sensors. Evolved gaits are faster than those created by hand. Using this technique we evolved a gait used in the consumer version of AIBO.

Embodied Evolution

We define embodied evolution (EE) as evolution taking place in a population of robots. Further, we stipulate that the evolutionary algorithm is to execute in a distributed and asynchronous manner within the population, as in natural evolution. Thus, algorithms that centrally maintain and manipulate the specifications of individual agents are not permitted. We wish to create a population of physical robots that perform their tasks and evolve hands-free – without human intervention of any kind, as Husbands (1992) articulates. Here, we introduce our conception of embodied evolution and report the initial results of experiments that provide the first proof-of-concept for EE.


Evolution of Neural Controllers

This project addresses the problem of creating neural controllers for robots.

The first part of this project involved developing controllers for simple robots performing simple tasks - pole balancing and Luxo locomotion. In the second part we move to more complex robots performing the more complex task of pursuit and evasion. Finally, we show that we can evolve controllers in simulation that transfer to complex robots, such as AIBO.


Exact Representations from Feed-Forward Neural Networks

Neural networks have been hailed as the paradigm of choice for problems that require "human-like" perception. Despite their ample success, their main criticism remains their opaqueness. A network could be performing its function perfectly, responding correctly to every input it is given, yet its internal workings could still be construed as a black box, leaving its user without knowledge of what is happening internally.

We are interested in different ways to tease neural networks open, to analyse what they are representing and how they are "thinking". In this context we present a novel algorithm to extract internal representations from feed-forward perceptron-type neural networks. This representation encapsulates a network's function in terms of polytopic decision regions. Its features are that it is exact (fully describing a network's function), concise (not an incremental collection of approximations), and direct (mapping a network's input directly to its output).

These decision regions can be viewed directly in lower dimensions, or analyzed using bounding rectangles and slicing in higher dimensions. We can even use the algorithm to watch networks during learning, seeing their decision regions and weights changing simultaneously.
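On a tiny hand-built threshold network (a hypothetical example, not the networks analyzed in the project), the idea of reading off polytopic decision regions from hidden-unit sign patterns can be sketched directly:

```python
import itertools
import numpy as np

# A tiny hand-built network: 2 inputs, 3 hard-threshold hidden units, 1 output.
W1 = np.array([[ 1.0,  0.0],
               [ 0.0,  1.0],
               [-1.0, -1.0]])
b1 = np.array([0.0, 0.0, 1.5])
w2 = np.array([1.0, 1.0, 1.0])
b2 = -2.5

def net(x):
    h = (W1 @ x + b1 > 0).astype(float)      # hidden sign pattern
    return float(w2 @ h + b2 > 0)

def decision_polytopes():
    """Each hidden sign pattern s fixes a polytope
    { x : s_i * (W1[i].x + b1[i]) > 0 } on which the output is constant.
    Return the (pattern, half-plane list) pairs whose output is class 1.
    (Some patterns may describe empty polytopes; a full extraction
    would prune those.)"""
    regions = []
    for s in itertools.product([1, -1], repeat=3):
        h = np.array([(si + 1) // 2 for si in s], dtype=float)
        if w2 @ h + b2 > 0:
            halfplanes = [(si * W1[i], si * b1[i]) for i, si in enumerate(s)]
            regions.append((s, halfplanes))  # each half-plane: a.x + b > 0
    return regions

# Here the class-1 region is a single polytope: the triangle
# x > 0, y > 0, x + y < 1.5.
```

The sigmoidal case handled by the actual algorithm is subtler, but the output remains a direct, exact description of the input-to-output map as regions of input space.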


Fractal Neural Networks

We are working on giving neural networks the ability to store and manipulate complex data structures of arbitrary size. Current research focuses on the possibility of representing large numbers of tree structures using the fractal attractors obtained by treating the network weights as an iterated function system (IFS).

This interactive program allows you to experiment with various neural network weights to see the fractal patterns that the network produces at different pixel resolutions.
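A minimal version of the underlying rendering idea, the "chaos game" for an iterated function system, can be sketched with the classic Sierpinski maps standing in for coefficients read off from a network's weights:

```python
import random

def chaos_game(maps, n=20000, seed=0):
    """Approximate an IFS attractor by the chaos game: repeatedly apply a
    randomly chosen affine map (a, b, c, d, e, f):
        (x, y) -> (a*x + b*y + e, c*x + d*y + f)."""
    rng = random.Random(seed)
    x = y = 0.0
    pts = []
    for i in range(n):
        a, b, c, d, e, f = rng.choice(maps)
        x, y = a * x + b * y + e, c * x + d * y + f
        if i > 20:                       # discard the initial transient
            pts.append((x, y))
    return pts

# Three half-scale contractions whose attractor is the Sierpinski triangle.
sierpinski = [(0.5, 0.0, 0.0, 0.5, 0.00, 0.0),
              (0.5, 0.0, 0.0, 0.5, 0.50, 0.0),
              (0.5, 0.0, 0.0, 0.5, 0.25, 0.5)]
pts = chaos_game(sierpinski)
```

Plotting `pts` at different resolutions reproduces the pixel-resolution effect the interactive program lets you explore.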


Fractal Neural Networks: The Blind Watchmaker

This interactive program allows you to evolve a fractal pattern to your liking, using a "Blind Watchmaker" paradigm.


Mind's Eye Project

Following our realization that the dynamics of recurrent neural networks generate fractal, IFS-like images, we have reduced the model to a twelve-weight network. We use hill-climbing to find networks whose dynamics are interesting.
Here is a gallery of images.

The GNARL Project

The GNARL Project combines research in recurrent neural networks and evolutionary methods of machine learning. The Project takes its name from the GNARL (GeNeralized Acquisition of Recurrent Links) engine [Angeline, Saunders, Pollack 1994], which is the central tool used to carry out our experiments.

With regard to neural networks, the Project investigates the dynamics and capabilities of recurrent neural networks, focusing primarily on temporally-oriented tasks. With regard to evolutionary methods, the Project continues to expand and enhance the capabilities of the GNARL system.


Educational Technologies Overview: CEL Goes to School

In the DEMO Lab at Brandeis University's Computer Science Department, we have directed a number of our research projects towards educational technologies. We synthesize several components of our work, uniting a unique combination of technologies and theories from which we build the educational environment. We start with our experience in machine learning, where we have co-evolved software agents to play games. We add to this our success using the Internet to facilitate a new form of co-evolution between humans and software agents. Finally, we carry these concepts into the educational arena.

Based in Theory: For the past 8 years, the DEMO lab has focused on the process of "self-organization": how complex systems can, with enough time and energy, create themselves. We approach problems in this field using reductionist techniques, seeking the core principles of discovery that appear to be at the basis of biological development and evolution. Historically, machine learning has taken its lead from examining theories of human learning. Our research attempts to complete the circle, bringing knowledge gained from studying machine learning to bear in the human learning arena.

Mission and Goals


Theory Modeling

Our pedagogical hypothesis is that students learn best when they are continually challenged and have many opportunities for getting learning rewards. This requires an environment with diverse affordances for learning. We will conduct a focused set of experiments based on our "Teacher's Dilemma" game to discover if we can find the structures and parameters which maximize appropriate challenge in a multiplayer one-to-one learning environment. If we find these parameters, we believe it will be a significant breakthrough in educational science. By implementing these mechanisms on the Internet, our goal is to create an adaptive system that encourages students to be coaches for each other, leading to tremendous efficiency, in terms of the number of adults necessary per child. We will test our theoretic framework on peers of mixed age and diverse geographic regions, using relatively simple matching algorithms to bring like-graded students together for repeated learning opportunities.

---


Comments?
demoweb@demo.cs.brandeis.edu