Version 1: scrapped, mostly because it was generally incomprehensible.
There's a way to avoid "importing intelligence" into your models while staying grounded in physical reality, and still get the same things done even if you assumed "IQ", without the unnecessary misdirection and "getting lost in the semantic forest." And even if the "IQ people" were still going to do projects related to brains and such, they'll still have to go find real, physical references back to reality and experimentation to get any relevant results.
I've become kind of annoyed with this, so instead of just stewing
I figured I'd help out and keep a growing file of selections from
people who seem to have a clue commenting on Eli's blog, in particular
the comments that he continues to ignore. This is just from recent
posts.
Comments that Eliezer should not ignore
"The default, loss of control, followed by a Null future containing
little or no utility. Versus extremely precise steering through
"impossible" problems to get to any sort of Good future whatsoever."
But this is just repeating the same thing over and over. 'Precise
steering' in your sense has never existed historically, yet we exist
in a non-null state. This is essentially what Robin extrapolates as
continuing, while you postulate a breakdown of historical precedent
via abstractions he considers unvetted.
In other words, 'loss of control' is begging the question in this context.
Posted by: Aron | December 16, 2008 at 03:31 PM
I have been following your blog for a while, and find it extremely
entertaining and also informative.
However, some criticism:
1. You obviously suffer from what Nassim Taleb calls "ludic fallacy".
That is, applying "perfect" mathematical and logical reasoning to a
highly "imperfect" world. A more direct definition would be "linear
wishful thinking" in an extremely complex, non-linear environment.
2. It is admirable that one can afford to indulge in such
conversations, as you do. However, bear in mind that the very notion
of self you imply in your post is very, very questionable (Talking
about the presentation of self in everyday life, Erving Goffman once
said: "when the individual is in the immediate presence of others, his
activity will have a promissory character." Do you, somehow, recognize
yourself? ;) ).
3. Being humble is so difficult when one is young and extremely
intelligent. However, bear in mind that in the long run, what matters
is not who will rule the world, or even whether one will get the Nobel
Prize. What matters is the human condition. Bearing this in mind will
not hamper your scientific efforts, but will provide you with much
more ambiguity – the right fertilizer for wisdom.
and of course, some of your comments--
…but the self-taught will simply extend their knowledge when a lack appears to them.
Yes, this point is key to the topic at hand, as well as to the problem of meaningful growth of any intelligent agent, regardless of its substrate and facility for (recursive) improvement. But in this particular forum, due to the particular biases which tend to predominate among those whose very nature tends to enforce relatively narrow (albeit deep) scope of interaction, the emphasis should be not on "will simply extend" but on "when a lack appears."
In this forum, and others like it, we characteristically fail to distinguish between the relative ease of learning from the already abstracted explicit and latent regularities in our environment and the fundamentally hard (and increasingly harder) problem of extracting novelty of pragmatic value from an exponentially expanding space of possibilities.
Therein lies the problem—and the opportunity—of increasingly effective agency within an environment of even more rapidly increasing uncertainty. There never was or will be safety or certainty in any ultimate sense, from the point of view of any (necessarily subjective) agent. So let us each embrace this aspect of reality and strive, not for safety but for meaningful growth.
Posted by: Jef Allbright | December 06, 2008 at 01:24 PM
There's really no paradox, nor any sharp moral dichotomy between human and machine reasoning. Of course the ends justify the means -- to the extent that any moral agent can fully specify the ends.
But in an *interesting world* of combinatorial explosion of indirect consequences, and worse yet, critically underspecified inputs to any such supposed moral calculations, no system of reasoning can get very far betting on longer-term *specific* consequences. Rather the moral agent must necessarily fall back on heuristics, fundamentally hard-to-gain wisdom based on increasingly effective interaction with relevant aspects of the environment of interaction, promoting *in principle* a model of evolving values increasingly coherent over increasing context, with effect over increasing scope of consequences.
Posted by: Jef Allbright | October 14, 2008 at 06:12 PM
I keep seeing Eliezer orbiting this attractor, and then veering off as he encounters contradictions to a few deeply held assumptions. I remain hopeful that the prodigious effort going into the essays on this site will eventually (and virtually) serve as that book.
Posted by: Jef Allbright | October 14, 2008 at 06:43 PM
Eliezer, A few years ago I sat across from you at dinner and mentioned how much you reminded me of my younger self. I expected, incorrectly, that you would receive this with the appreciation of a person being understood, but saw instead on your face an only partially muted expression of snide mirth. For the next hour you sat quietly as the conversation continued around us, and on my drive home from the Bay Area back to Santa Barbara I spent a bit more time reflecting on the various interactions during the dinner and updating my model of others and you.
For as long as I can remember, well before the age of 4, I've always experienced myself from both within and without as you describe. On rare occasions I've found someone else who knows what I'm talking about, but I can't say I've ever known anyone closely for whom it's such a strong and constant part of their subjective experience as it has been for me. The emotions come and go, in all their intensity, but they are both felt and observed. The observations of the observations are also observed, and all this up to, typically and noticeably, about 4 levels of abstraction. (Reflected in my natural writing style as well.)
This leads easily and naturally to a model representing a part of oneself dealing with another part of oneself. That worked well for me up until about the age of 18, when a combination of long-standing unsatisfied epistemological questions about induction and entropy, and readings (Pirsig, Buckminster Fuller, Hofstadter, Dennett, and some of the more coherent and higher-integrity books on Buddhism), led me to question and then reorganize my model of my relationship to my world. At some point about 7 years later (about 1985) it hit me one day that I had completely given up belief in an essential "me", while fully embracing a pragmatic "me". It was interesting to observe myself over the next few years; every 6 months or so I would exclaim to myself (if no one else cared to listen) that I could feel more and more pieces settling into a coherent and expanding whole. It was joyful and liberating in that everything worked just as before, but I had to accommodate one less hypothesis, and certain areas of thinking, meta-ethics in particular, became significantly more coherent and extensible. [For example, a piece of the puzzle I have yet to encounter in your writing is the functional self-similarity of agency extended from the "individual" to groups.]
Meanwhile I continued in my career as a technical manager and father, and had yet to read Cosmides and Tooby, Kahneman and Tversky, E.T. Jaynes or Judea Pearl -- but when I found them they felt like long lost family.
I know of many reasons why it's difficult, nigh impossible, to convey this conceptual leap, and hardly any reason why one would want to make it, other than one whose values already drive him to continue to refine his model of reality.
I offer this reflection on my own development, not as a "me too" or any sort of game of competition of perceived superiority, but only as a gentle reminder that, as you've already seen in your previous development, what appears to be a coherent model now, can and likely will be upgraded (not replaced) to accommodate a future, expanded, context of observations.
Posted by: Jef Allbright | October 22, 2008 at 03:45 PM
re: resistance to elucidation
"I've never really held with these types of classifications myself. I will note that time and again I see Robin trying to pass off his personal opinion as the 'standard consensual agreement' on a given subject."
What fascinates me, because it is so fundamental to progress yet so transparent and overlooked, like the air we must breathe, is how we lack an effective popular lexicon and grammar for everyday discussion of topics clustering around cooperation, agreement, synergistic advantage...the whole field of interactive epistemology applicable to intentional action of groups.
Myers Briggs is useful but flawed; the "Big Five" classification scheme is statistically better, but still flawed because the model is statistical rather than generative, therefore increasingly wrong with increasing deviation from the mean. [Significant to many of us deviants hereabout.]
I erred in bringing temperament into the discussion, providing a facile target detracting from my main point, which has not been acknowledged. (Indeed, a very strong "N" by temperament might be expected to over-compensate within academia by stressing the "S" of "objective" (and tacitly authoritative) facts.)
But more significant seems your statement regarding a tendency to "pass off" personal opinion as the "standard consensual agreement." I don't see Robin doing that at all, indeed I think he's typically very clear about distinguishing between consensus and the (capital T) Truth to which he seems to refer.
That this topic is so resistant to elucidation is, in my opinion, why it's interesting.
Posted by: Jef Allbright | December 24, 2007 at 01:28 PM
"And that's why I always say that the power of natural selection comes from the selection part, not the mutation part."
And the power of the internal combustion engine comes from the fuel part... Right, or at least, not even wrong. It seems that my congratulations a few months ago for your apparent imminent escape from simple reductionism were premature.
Posted by: Jef Allbright | November 11, 2008 at 02:32 PM
It might be worthwhile to note that cogent critiques of the proposition that a machine intelligence might very suddenly "become a singleton Power" do not deny the inefficiencies of the human cognitive architecture, which offer improvement via recursive introspection and recoding, nor do they deny the improvements easily available via substitution and expansion of more capable hardware and I/O.
They do, however, highlight the distinction between a vastly powerful machine madly exploring vast reaches of a much vaster "up-arrow" space of mathematical complexity, and a machine of the same power bounded in growth of intelligence -- by definition necessarily relevant -- due to starvation for relevant novelty in its environment of interaction.
If, Feynman-like, we imagine the present state of knowledge about our world in terms of a distribution of vertical domains, like silos, some broader with relevance to many diverse facets of real-world interaction, some thin and towering into the haze of leading-edge mathematical reality, then we can imagine the powerful machine quickly identifying and making a multitude of latent connections and meta-connections, filling in the space between the silos and even somewhat above -- but to what extent, given the inevitable diminishing returns among the latent, and the resulting starvation for the novel?
Given such boundedness, speculation is redirected to growth in ecological terms, and the Red Queen's Race continues ever faster.
Posted by: Jef Allbright | November 03, 2008 at 06:46 PM
This form of reasoning, while correct within a *specified* context, is dangerously flawed with regard to application within contexts sufficiently complex that outcomes cannot be effectively modeled. This includes much of moral interest to humans. In such cases, as with evolutionary computation, an optimum strategy exploits best-known principles synergistically promoting a maximally-coherent set of present values, rather than targeting illusory, realistically unspecifiable consequences. Your "rationality" is correct but incomplete. This speaks as well to the well-known paradoxes of all consequentialist ethics.
Posted by: Jef Allbright | January 22, 2008 at 08:47 PM
(that last one should have had a link to …)
It seems you've missed the point here on a point common to Eastern Wisdom and to systems theory. The "deep wisdom" which you would mock refers to the deep sense there is no actual "self" separate from that which acts, thus thinking in terms of "trying" is an incoherent and thus irrelevant distraction. Other than its derivative implication that to squander attention is to reduce one's effectiveness, it says nothing about the probability of success, which in systems-theoretic terms is necessarily outside the agent's domain.
Reminds me of the frustratingly common incoherence of people thinking that they decide intentionally according to their innate values, in ignorance of the reality that they are nothing more nor less than the values expressed by their nature.
Posted by: Jef Allbright | October 01, 2008 at 07:20 PM
While heuristics such as "personhood" and "rights" are useful within context, in the bigger picture there is no fundamental distinction between exploitation of humans, emulated humans, chimps, dolphins, dogs, chickens or artificial agents of various degrees of "sentience." In the more coherent moral calculus, it's not about personhood, but agency exploiting sources of synergistic advantage...of course that's only a fragment of the formula.
Posted by: Jef Allbright | November 22, 2008 at 11:51 AM
I'm in strong agreement with Peter's examples above. I would generalize by saying that the epistemic "dark side" tends to arise whenever there's an implicit discounting of the importance of increasing context. In other words, whenever, for the sake of expediency, "the truth", "the right", "the good", etc., is treated categorically rather than contextually (or equivalently, as if the context were fixed or fully specified).
Posted by: Jef Allbright | October 18, 2008 at 09:43 AM
"I don't think that even Buddhism allows that."
Remove whatever cultural or personal contextual trappings you find draped over a particular expression of Buddhism, and you'll find it very clear that Buddhism does "allow" that, or more precisely, un-asks that question.
As you chip away at unfounded beliefs, including the belief in an essential self (however defined), or the belief that there can be a "problem to be solved" independent of a context for its specification, you may arrive at the realization of a view of the world flipped inside-out, with everything working just as before, less a few paradoxes.
The wisdom of "adult" problem-solving is not so much about knowing the "right" answers and methods, but about increasingly effective knowledge of what *doesn't* work. And from the point of view of any necessarily subjective agent in an increasingly uncertain world, that's all there ever was or is.
Certainty is always a special case.
Posted by: Jef Allbright | October 04, 2008 at 01:43 PM
Eliezer, it's a pleasure to see you arrive at this point. With an effective understanding of the subjective/objective aspects supporting a realistic metaethics, I look forward to your continued progress and contributions in terms of the dynamics of increasingly effective evolutionary (in the broadest sense) development for meaningful growth: promoting a model of (subjective) fine-grained, hierarchical values with increasing coherence over increasing context of meaning-making, implementing principles of (objective) instrumental action increasingly effective over increasing scope of consequences. Wash, rinse, repeat...
There's no escape from the Red Queen's race, but despite the lack of objective milestones or markers of "right", there's real progress to be made in the direction of increasing rightness.
Society has been doing pretty well at the increasingly objective model of instrumental action commonly known as warranted scientific knowledge. Now if we could get similar focus on the challenges of values-elicitation, inductive biases, etc., leading to an increasingly effective (and coherent) model of agent values...
Posted by: Jef Allbright | July 29, 2008 at 12:12 AM
Anyway, this is getting painful, so I'll stop here for now. I don't
know why I'm doing this either, it's not like he will ever read this,
but at least it's something of net history worth observing for such a
Big Important Person.
Dagon has made a point I referred to in the previous post: in the sentence “I have unlimited power” there are four unknown terms.
What is I? What does individuality include? How is it generated? Eliezer does not consider the elusive notion of self, because he is too focused on the highly hypothetical assumption of “self” that we adhere to in Western societies. However, should he take off the hat of ingenuity for a while, he would discover that merely defining “self” is extremely difficult, if not impossible.
“Unlimited” goes in the same basket as “perfect”. Both are human concepts that do not work well in a multidimensional reality. “Power” is another murky concept, because in social science it is the potential ability of one agent to influence another. However, in your post it seems we are talking about power as some universal quality to manipulate matter, energy, and time. One of the few things that quantum mechanics and relativity theory agree about is that this is probably impossible.
“I have unlimited power” implies total volitional control of a human being (presumably Robin Hanson) over the spacetime continuum. This is all the more ridiculous, because Robin is part of the system itself.
The notion of having such power, but being somehow separated from it is also highly dubious. Dagon rightly points to the transformative effect such “power” would have (only that it is impossible :) ). Going back to identity: things we have (or rather, things we THINK we have), do transform us. So Eliezer may want to unwind the argument. The canvas is flawed, methinks.
Posted by: V.G. | December 17, 2008 at 09:32 AM