[Fwd: Comments on "The Singularity is Near"]
From:
"Paul D. Fernhout" <pdfernhout@kurtz-fernhout.com>
To:
Bryan Bishop <kanzure@gmail.com>
Date:
04/29/08 09:17 am
-------- Original Message --------
Subject: Comments on "The Singularity is Near"
Date: Sun, 18 Mar 2001 11:18:55 -0500
From: Paul Fernhout <pdfernhout@kurtz-fernhout.com>
Organization: Kurtz-Fernhout Software
To: Ray Kurzweil <ray@kurzweiltech.com>
CC: Cynthia <cfkurtz@kurtz-fernhout.com>

Ray-

I've been reading your latest book precis (The Singularity is Near).
  http://www.kurzweilai.net/meme1/frame.html

Most of your effort I admire, but there are a few issues in your writing
that come from what does not appear to be a sophisticated view of
evolutionary theory (as sophisticated as the work is in other respects).
Both my wife and I studied Ecology and Evolution (E&E) in the SUNY Stony
Brook PhD program (one of the top schools for that at the time) so we
have some knowledge on the subject.
  http://life.bio.sunysb.edu/ee/

Here are a few things to think about:

1. There is not necessarily increasing complexity in an evolving system.
Populations of organisms evolve for many reasons such as natural
selection, sexual selection, and random walks. If there is momentary
adaptive value in being simpler, then that path may well be the
evolutionary future for that population or an offshoot of it.

Let me give an example. If you have a parasite and a host, you may have
an evolutionary arms race, with the host population evolving in ways
that resist the parasite, and the parasite evolving new ways that
exploit the host. What is important here is that cycles can occur: the
host develops a resistance; the parasite develops a way to overcome it;
the host then loses the resistance, because it is now worthless yet
still consumes metabolic energy to operate (and even to keep the
ability in the DNA); the parasite then loses its ability to overcome
the resistance, because that ability is now also worthless; and then
the host population may again develop a mutation conferring the
resistance, and the cycle continues. I'm sure
you could code up a simple genetic algorithm simulation in a weekend
that would show you this -- and there are several existing free
educational software programs that can also demonstrate this.
  http://cbs.umn.edu/software/populus.html
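
To make that concrete, here is one possible version of that weekend
simulation, sketched in Python: a gene-for-gene model in which
resistance and counter-resistance each carry a small metabolic cost.
The parameter values are illustrative guesses on my part, not
measurements; the point is only that the trait frequencies cycle rather
than marching toward ever-greater complexity.

import random

GENERATIONS = 2000
POP = 1000            # individuals per population
MUTATION = 0.005      # per-individual chance of flipping the trait
COST_RESIST = 0.05    # metabolic cost a host pays to carry resistance
COST_COUNTER = 0.05   # cost a parasite pays to carry the counter-trait
DAMAGE = 0.4          # fitness lost by an infected host
BENEFIT = 0.4         # fitness gained by a parasite that infects

# True = host carries resistance / parasite carries the counter-trait
hosts = [False] * POP
parasites = [False] * POP

def next_generation(traits, fitness_of):
    """Weighted resampling by fitness, plus mutation -- the core GA step."""
    weights = [fitness_of(t) for t in traits]
    offspring = random.choices(traits, weights=weights, k=POP)
    return [(not t) if random.random() < MUTATION else t for t in offspring]

for gen in range(GENERATIONS):
    p_resist = sum(hosts) / POP        # frequency of resistant hosts
    p_counter = sum(parasites) / POP   # frequency of counter-adapted parasites

    # Infection succeeds unless the host is resistant and the parasite
    # lacks the counter-trait.
    def host_fitness(resistant):
        infected = 1.0 if not resistant else p_counter   # expected infection chance
        return 1.0 - (COST_RESIST if resistant else 0.0) - DAMAGE * infected

    def parasite_fitness(counter):
        infects = 1.0 if counter else (1.0 - p_resist)   # expected infection chance
        return 1.0 - (COST_COUNTER if counter else 0.0) + BENEFIT * infects

    hosts = next_generation(hosts, host_fitness)
    parasites = next_generation(parasites, parasite_fitness)

    if gen % 100 == 0:
        print(f"gen {gen:4d}  resistant hosts: {p_resist:.2f}  "
              f"counter-adapted parasites: {p_counter:.2f}")

If you run it, you should see the frequency of resistant hosts rise,
the counter-adapted parasites chase it, and both fall again once the
traits stop paying for their own upkeep.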

2. There is not necessarily an adaptive value to intelligence in a
certain niche -- because intelligence has power, mass, heat-dissipation,
and time costs. For example, consider the Hydra, a tiny multi-tentacled
aquatic creature that lives by stinging smaller organisms such as
Daphnia and pulling them into its body cavity. It has a simple nerve
net it uses to coordinate its feeding behavior. Why doesn't the Hydra
have a human-sized brain? That may sound like
a stupid question, but bear with me. The Hydra could not supply the
energy required to operate a brain from its current feeding behavior. It
could not protect the brain from predators. Its mobility would be
impaired by being attached to a brain that large. It would be unable to
reproduce as quickly. Also, the value of a human-sized brain to a Hydra
is minimal, because there would be little the brain could accomplish
using the Hydra's few microscopic tentacles, limited sensory apparatus
(no eyes, no ears) and limited mobility choices. Further, the Hydra must
react instantly in its tiny world, and a big brain would take too long
to process the information. So, for the Hydra, a large brain makes no
sense.

There are aquatic creatures with brains as big as or larger than human
brains (dolphins and whales), but they have a very different ecological
niche and a totally different scale and physical structure. And there
are a lot fewer whales and dolphins than Hydra in the universe.

3. Chaotic systems (in the Chaos theory sense) like the weather allow
small changes in initial conditions to lead to large differences in
outcomes over time (the so-called "butterfly effect"). Because of this,
there may be a fundamental limit to the predictive value of intelligence
for any given size brain, and further, there may be an extreme law of
diminishing returns on increased intelligence. That is, assume you have
a weather computer that may give you reasonably useful forecasts one day
in advance. To get a two-day forecast with as much accuracy as the
one-day forecast, you may need much more data and a much bigger
computer. You don't need a system twice the size -- you may need one a
thousand or a million times the size. Each day further out you want to
forecast, the more data and the bigger computer you need.
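
To see how unforgiving this is, here is a tiny Python sketch (my
illustration, not anything from the precis) using the standard Lorenz
toy-weather equations: two runs that differ by one part in a billion in
a single measurement track each other for a while and then wander off
to completely different states. The step size and horizon are arbitrary
choices.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude Euler step of the Lorenz equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)          # "today's" measured state of the atmosphere
b = (1.0, 1.0, 1.0 + 1e-9)   # the same state, mismeasured by one part in a billion

for step in range(1, 5001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        gap = abs(a[0] - b[0]) + abs(a[1] - b[1]) + abs(a[2] - b[2])
        print(f"step {step:5d}  divergence between the two forecasts: {gap:.6f}")

The gap between the two runs grows roughly exponentially until it is as
large as the "weather" itself, which is why each extra day of useful
forecast costs so much more than the last.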

Do we build the sensory net and weather computers to give us a
thousand-day forecast? No. We decide we can live with a certain level of
accuracy
-- that it is good enough for the cost, given the diminishing returns on
more extensive predictions. (A rebuttal on weather prediction is to
claim that the weather will be controlled at some point by intelligence
and so will be predictable, as in Alan Kay's adage "the best way to
predict the future is to invent it"...)

What might this mean in a human sense? Perhaps human brains are the
size they are because there isn't much value in being that much
smarter: the cost of the additional intelligence outweighs the
diminishing returns of additional predictive value. For example, some
studies suggest earlier human-like groups such as the Neanderthals or
Cro-Magnons had a larger brain size than present-day humans.
http://www.wsu.edu:8001/vwsu/gened/learn-modules/top_longfor/phychar/culture-humans-2two.html

4. (From my wife's E&E thesis) When most organisms in a system act
intelligently, there is value in acting "dumb" -- because that avoids
the herd effect and the related competition for resources: the "dumb"
organisms wind up in less valued situations, but ones with less
competition.
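
Here is a toy illustration of that point (emphatically not her actual
model, and with made-up numbers): if every "smart" forager heads for
the obviously richest patch, the payoff there is split many ways, and a
forager "dumb" enough to settle for the poor, uncrowded patch can come
out ahead -- at least until too many others copy it.

RICH_PATCH = 100.0    # total food in the attractive patch, shared by its visitors
POOR_PATCH = 10.0     # total food in the unattractive patch, shared by its visitors
FORAGERS = 50         # total population choosing between the two patches

for dumb_foragers in (1, 2, 5, 25):
    smart_foragers = FORAGERS - dumb_foragers
    dumb_share = POOR_PATCH / dumb_foragers    # payoff per "dumb" forager
    smart_share = RICH_PATCH / smart_foragers  # payoff per "smart" forager
    print(f"{dumb_foragers:2d} in the poor patch: {dumb_share:5.2f} each there "
          f"vs {smart_share:5.2f} each in the rich patch")

The advantage is frequency-dependent: being "dumb" pays only while few
others are.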

So, given these ideas, what is the implication for your writing on
evolution? Some parts are making essentially a religious statement of
how you want to see the universe.

You wrote:
> The Next Step in Evolution and the Purpose of Life
>
> But I regard the freeing of the human mind from its severe
> physical limitations of scope and duration as the necessary
> next step in evolution. Evolution, in my view, represents the
> purpose of life. That is, the purpose of life--and of our
> lives--is to evolve. The Singularity then is not a grave
> danger to be avoided. In my view, this next paradigm shift
> represents the goal of our civilization.
>
> What does it mean to evolve? Evolution moves toward
> greater complexity, greater elegance, greater knowledge,
> greater intelligence, greater beauty, greater creativity, and
> more of other abstract and subtle attributes such as love.
> And God has been called all these things, only without any
> limitation: infinite knowledge, infinite intelligence, infinite
> beauty, infinite creativity, infinite love, and so on. Of
> course, even the accelerating growth of evolution never
> achieves an infinite level, but as it explodes exponentially,
> it certainly moves rapidly in that direction. So evolution
> moves inexorably toward our conception of God, albeit never
> quite reaching this ideal. Thus the freeing of our thinking
> from the severe limitations of its biological form may be
> regarded as an essential spiritual quest.


Speaking as a human being with opinions, there is nothing wrong with the
sentiment that there is a purpose for life. This is as opposed to saying
there is a reason for intelligences to think in terms of purposes. As
long as you are clear that this is your statement of feeling, it's fine
to say this. But evolutionary theory does not provide any sort of proof
for your belief. Everyone who survives pretty much needs a set of
beliefs about the universe that make sense to them and give them a
reason for making life worth living. Such beliefs have adaptive value to
humans regardless of their truthfulness. What you outline is a beautiful
belief. I would like to believe it, and I would probably like others to
believe it. The optimism might lead to a self-fulfilling prophecy --
perhaps by helping people progress beyond overly valuing economic and
military competition. But again, it is essentially a religious
statement. (I also like the sci-fi writer James P. Hogan for his
optimistic philosophy on AI -- such as in his book "The Two Faces of
Tomorrow" -- a work that twenty years later still stands for me as a
classic on this topic.)

Beware that people (usually not well versed in E&E) cite ecology and
evolution all the time to support a belief, but that does not make it a
scientifically valid thing to do. For example, it is common to cite
competition as a given in nature, but Lewis Thomas in "The Lives of a Cell"
points out that most successful organisms participate in effectively
collaborative efforts -- like the symbiosis between mitochondria and
eukaryotic cells. Likewise, the fact that something is true in most
species does not mean that humans (given intelligence and culture)
should behave that way -- because if E&E teaches anything, it is that
there is a lot of diversity out there.

The precis you posted, which is otherwise technical and advanced, uses
the technical term "evolution" as it is often colloquially (mis)used to
mean "progress". The two are not the same. And frankly, what is
"progress" for one may be "decay" for another, just as what is "good"
for one may be "evil" for another, as these have to do with individual
goals which may conflict. This weakens your entire argument.

I might go a step further. Because of your essentially "religious"
belief based on a limited view of evolutionary theory, you are ignoring
the obvious issues relating to the diminishing returns of intelligence,
or the adaptive value of "dumber" organisms. Thus, as I pointed out in
an earlier email to you, when you talk of downloading a human-derived
AI into a network, you ignore the fact that that large intelligence may
not be able to compete effectively in the network, in the same way that
a human brain grafted onto a tiny Hydra and thrown into a lake would
not survive. What organisms do survive in a lake? Many, many tiny
things. Maybe a few fish. But the largest number are tiny things like
bacteria, algae, Daphnia and Hydra. By analogy, most of the digital
organisms in a large network will be tiny, and they might rapidly
consume larger creatures or parasitize them. Obviously, you can get big
fish in a lake -- but their numbers are small compared to the numbers of
other smaller organisms.

Because you have been heavily rewarded in your life for being
intelligent in various ways, the value of being unintelligent (or
differently intelligent) is probably a difficult concept to wrestle with
(as it was for me, and as I think it would be for most thinkers).
Ironically, both my wife and I didn't finish our PhDs in E&E in large
part because at the time the innovative computer simulations we wanted
to do were not considered an acceptable way to explore the topic of
evolution at a PhD E&E level -- a situation that a decade later has
changed significantly. Were we less intelligent :-) in some ways (and
perhaps more in others -- see Howard Gardner's Multiple Intelligences),
  http://www.pz.harvard.edu/PIs/HG.htm
we might have PhDs and an easier road to travel.

I think there are rebuttals you could make to some of my points (citing
network effects, such as distributed information leveraging up the
general level of "knowledge" in a larger bacterial DNA pool, for
example), but they require deeper thinking about evolutionary theory
and its implications for digital ecology. Perhaps some of them might
lead to new insights in the academic field of E&E.

In any case, one has to think in broader terms than "progress". In a
digital ecology, the laws might be different than in biological ecology
(for example, replication might be instantaneous), but there will still
be laws, and the system will still be governed by them.

My recommendation to you is to sit down with scientists from an Ecology
and Evolution program and argue with them about your ideas on evolution.
At Stony Brook, you might talk to people like Larry Slobodkin (with
whom I spent many pleasurable days collecting Hydra at Swan Pond) or Lev
Ginzburg (my advisor there) or Doug Futuyma (a world renowned expert in
the field) or Daniel Dykhuizen (an expert in bacterial evolution).
  http://life.bio.sunysb.edu/ee/ee-gen.html#faculty
For example, Dan was the first person to tell me that humans are 10%
bacteria by weight and 90% bacteria by numbers, and that bacteria in
effect are already a gigantic world-encompassing supercomputer,
transferring information (e.g. a new gene) around the globe in a matter
of days.

You could probably find people closer to your location at Harvard (like
Stephen Jay Gould).
  http://environment.harvard.edu/henvdir/GOULD_STEPHEN_JAY.html
You might want to talk to several such researchers at more than one
school and build your own opinions.

Your voice is too widely listened to for you not to write with a more
sophisticated understanding of evolutionary theory when it is at the
core of much of your argument. One of the reasons I myself spent years
studying E&E was exactly the reason you need to as well -- I wanted to
think about the future of humanity and intelligent life in the
universe, especially in terms of what I could do to predict the future
by inventing it (i.e., space habitat systems that replicate themselves
from sunlight and asteroid ore and, as a network, maintain a high level
of ecological and intellectual diversity against the forces of
entropy). In
the graduate program at Stony Brook I learned many surprising things --
things going beyond the naive view of evolution and ecology portrayed in
many high school textbooks or the popular media.

Best of luck with your book.

-Paul Fernhout
Kurtz-Fernhout Software
=========================================================
Developers of custom software and educational simulations
Creators of the Garden with Insight(TM) garden simulator
http://www.kurtz-fernhout.com