There’s more than one way to make a stew – but a primordial stew, the original mix of (whatever) materials from which life arose? That was a stew millions, if not hundreds of millions, of years in the making. How can we recreate that evolutionary process within the ephemeral lifespan of a science laboratory? Then there’s the complexity factor. So many things contributed – temperatures, chemical processes, the availability of component resources (e.g. water, organic compounds, trace elements). We’re attempting to put all this together in order to create artificial life. It seems, at best, daunting; perhaps impossible. (It may also be considered blasphemous, but that’s a topic for another time.) Yet there are many voices in the scientific community who say, with certainty, that we will achieve this goal within the century, and perhaps much sooner.
One of the reasons for confidence in the eventual discovery of how life evolved and the recreation of those pathways – the ability to create life ‘from scratch’ (to create life in a test tube, as the expression goes) – is the stunningly rapid advancement of microbiology and bioengineering. More and more detail is accumulating about the properties and processes of reproduction (DNA replication) and the life-sustaining processes of living cells. This detail is no longer the raw description drawn from simple observation, but the (more or less) precise description of chemistry. We’re pursuing the detail down to the molecular level, into the nanoscale. It’s a painstaking process, but as the tools (scientific equipment) and the theory improve, the flow of results is gaining momentum.
Another approach to solving the mysteries of creating life is to make a model and run it on a computer. Better still, run the model not on one computer – not even a supercomputer – but on perhaps thousands of computers. A recent article in the New York Times by veteran science writer John Markoff highlighted an attempt by a scientific team to enlist the help of people with personal computers, creating a network to crunch the fantastic number of repeated (iterated) calculations necessary to mimic the effect of millions of years of evolution.
The effort, dubbed the EvoGrid, is the brainchild and doctoral dissertation topic of Bruce Damer, a Silicon Valley computer scientist who develops simulation software for NASA at a company, Digital Space, based in Santa Cruz, Calif.
Mr. Damer and his chief engineer, Peter Newman, are modeling their effort after the SETI@Home project, which was started by the Search for Extraterrestrial Intelligence, or SETI, program to make use of hundreds of thousands of Internet-connected computers in homes and offices. The project turned these small computers into a vast supercomputer by using pattern recognition software on individual computers to sift through a vast amount of data to look for evidence of faint signals from civilizations elsewhere in the cosmos.
The EvoGrid goal is to detect evidence of self-organizing behavior in computerized simulations that have been constructed to model the first emergence of life in the physical world.
[Source: New York Times]
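To make the volunteer-computing pattern described in that passage a little more concrete, here is a minimal sketch in Python of the general idea: a coordinator splits a long simulation into independent work units, many machines each crunch one unit, and the results flow back to be scored. Everything in it, from the WorkUnit class to the toy "dynamics", is hypothetical and purely illustrative; it is not BOINC, SETI@Home, or EvoGrid code.

```python
# Toy illustration of volunteer computing: a coordinator fans independent
# work units out to many workers and collects their results. All names here
# (WorkUnit, run_work_unit, ...) are hypothetical, not real BOINC/EvoGrid code.

from dataclasses import dataclass
from multiprocessing import Pool
import random


@dataclass
class WorkUnit:
    seed: int    # each unit explores a different random starting condition
    steps: int   # number of simulation steps to iterate


def run_work_unit(unit: WorkUnit) -> dict:
    """Pretend 'simulation': iterate a state many times and report a score.

    A real client would run a chemistry simulation and report back anything
    that looks interesting (e.g. signs of self-organizing behavior).
    """
    rng = random.Random(unit.seed)
    state = 0.0
    for _ in range(unit.steps):
        state = 0.9 * state + rng.uniform(-1.0, 1.0)  # stand-in for real dynamics
    return {"seed": unit.seed, "score": abs(state)}


if __name__ == "__main__":
    # A local process pool stands in for thousands of volunteered computers.
    units = [WorkUnit(seed=s, steps=100_000) for s in range(32)]
    with Pool() as pool:
        results = pool.map(run_work_unit, units)
    best = max(results, key=lambda r: r["score"])
    print(f"most 'interesting' unit: seed={best['seed']}, score={best['score']:.3f}")
```

The appeal of the pattern is that the units are independent: no unit has to wait for another, so adding more volunteered machines simply means more of the search space gets explored per day.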
As pointed out in the article, computer models of such complex systems are problematic. Because the models are tested in abstract terms (not in real life), there’s no guarantee that flaws in the assumptions or in the built-in processes will be detected. Nevertheless, the exercise – if that’s all it is – may provide useful insights. It also expands our knowledge of the capabilities, and limitations, of massive computation. The system being put together for the EvoGrid is interesting in its own right:
To quickly build the EvoGrid, the researchers are relying on two open-source software projects.
Boinc is a system financed by the National Science Foundation that uses the Internet to permit scientists to take advantage of free computing cycles available on network-connected computers. Last week, for example, the system was composed of more than 500,000 computers that generated an average of almost 2.45 petaflops of computing power. By contrast, in June of this year, the world’s most powerful supercomputer, built by I.B.M. at Los Alamos National Laboratory, produced 1.1 petaflops.
To simulate digital evolution, the EvoGrid will use a second program, Gromacs, developed at the University of Groningen in the Netherlands, to model molecular interactions. EvoGrid researchers hope to create a computer model that replicates the early ocean and then use it as a virtual “primordial soup” to quickly evolve digital forms.
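What a molecular dynamics engine like Gromacs does at its heart is integrate the equations of motion for interacting particles, step by tiny step. The sketch below is only a loose illustration of that idea, not Gromacs or EvoGrid code: a handful of particles, a Lennard-Jones interaction between each pair, and a velocity Verlet integrator marching the system forward in time. All parameter values are arbitrary choices in reduced units.

```python
# Toy molecular dynamics: Lennard-Jones pair forces + velocity Verlet.
# Purely illustrative; real packages add neighbor lists, thermostats,
# constraints, and heavy optimization on top of this skeleton.

import numpy as np

EPS, SIGMA, DT, MASS = 1.0, 1.0, 0.002, 1.0  # reduced (dimensionless) units


def lj_forces(pos: np.ndarray) -> tuple[np.ndarray, float]:
    """Pairwise Lennard-Jones forces and total potential energy."""
    n = len(pos)
    forces = np.zeros_like(pos)
    potential = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = rij @ rij
            sr6 = (SIGMA**2 / r2) ** 3            # (sigma/r)^6
            potential += 4 * EPS * (sr6**2 - sr6)
            # -dV/dr projected onto the i-j separation vector
            f = 24 * EPS * (2 * sr6**2 - sr6) / r2 * rij
            forces[i] += f
            forces[j] -= f
    return forces, potential


def simulate(pos: np.ndarray, vel: np.ndarray, steps: int) -> None:
    forces, _ = lj_forces(pos)
    for step in range(steps):
        # velocity Verlet: update positions, recompute forces, update velocities
        pos += vel * DT + 0.5 * forces / MASS * DT**2
        new_forces, potential = lj_forces(pos)
        vel += 0.5 * (forces + new_forces) / MASS * DT
        forces = new_forces
        if step % 500 == 0:
            kinetic = 0.5 * MASS * np.sum(vel**2)
            print(f"step {step:5d}  E_kin={kinetic:.3f}  E_pot={potential:.3f}")


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 9 particles on a small lattice, spacing just past the LJ minimum
    positions = 1.2 * np.array([[i, j] for i in range(3) for j in range(3)], dtype=float)
    velocities = rng.normal(0.0, 0.1, size=(9, 2))
    simulate(positions, velocities, steps=2000)
```

Scale this skeleton up to a virtual ocean’s worth of molecules, run a great many such simulations in parallel across volunteered machines, and watch for the runs where something starts to organize itself – that, in outline, is the EvoGrid bet.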