Comments

Comment by Nate_Barna on Which Parts Are "Me"? · 2008-10-23T07:19:00.000Z · LW · GW

I think I prefer, and should prefer, my smoothed-out highs and lows. During a finite manipulation sequence of a galactic supercluster, whose rules I pre-established, I wouldn't necessarily need to feel much -- since that might feel like 'a lot of pointless muscle straining' -- other than a modest, Homo sapiens-level positive reinforcement that it's getting done. Consciousness, if I may also give my best guess, is only good for abstract senses (with and without extensions), and where these abstract senses seem concrete, even to infinite precision, no "highs" and certainly no "lows" are necessary.

Comment by Nate_Barna on Beyond the Reach of God · 2008-10-17T02:39:00.000Z · LW · GW

Abe: Why would a being that can create minds at will flinch at their annihilation? The absolute sanctity of minds, even before God, is the sentiment of modern western man, not a careful deduction based on an inconceivably superior intelligence.
An atheist can imagine God having the thought: "As your God, I don't care that you deny Me. Your denial of Me is inconsequential and unimpressive in the greater picture, which is necessarily inaccessible to you." If this is an ad hoc imagining, then the assumption in your question -- that a being who can create minds at will doesn't flinch at their annihilation -- must also be ad hoc.

Comment by Nate_Barna on Beyond the Reach of God · 2008-10-16T17:18:00.000Z · LW · GW

Abe: I find it strange how atheists always feel able to speak for God.
Sometimes they're not trying to speak for God, since they're not first assuming that an ideally intelligent God exists. Rather, they're imagining and speaking about the theist assumption that an ideally intelligent God exists, and then carefully drawing inferences, which tend to end up incoherent on that grounding. Still, philosophy of religion reasonably attempts coherence, and not all atheists are completely indifferent toward it.

Comment by Nate_Barna on Entangled Truths, Contagious Lies · 2008-10-16T03:38:46.000Z · LW · GW

If a lie is defined as avoiding truthfully satisfying interrogative sentences (which includes remaining silent), then it wouldn't be honest, when asked, to withhold details of a referent. But privacy depends on the existence of some unireferents, as opposed to none and as opposed to coreferents. If not all privacy should be abolished, then it isn't clear that the benefits of honesty as an end in itself are underrated.

Comment by Nate_Barna on Shut up and do the impossible! · 2008-10-09T10:29:54.000Z · LW · GW

As it goes, here is how I've come to shut up and do the impossible. Philosophy and (pure) mathematics -- activities a cognitive system engages in by taking more (rather than fewer) resources for granted -- are primarily for conceiving destinations in the first place, perhaps continuous ones, where the intuitively impossible becomes possible; they're secondarily for the destinations' complement on the map, with its solution paths and everything else. Science and engineering, by contrast -- activities a cognitive system engages in by taking fewer (rather than more) resources for granted -- are primarily for the destinations' complement on the map; they're secondarily for conceiving destinations in the first place, as in, perhaps, getting the system to destinations where even better destinations can be conceived.

Because this understanding is how I've come to shut up and do the impossible, it's somewhat disappointing when philosophy and pure mathematics get ridiculed. To ridicule them must be a relief.

Comment by Nate_Barna on Beyond the Reach of God · 2008-10-07T17:57:00.000Z · LW · GW

Phil: [. . .] In such a world, how would anybody know if "you" had died?
Perhaps it wouldn't matter whether anyone else knows you're alive or dead. You die when you lose sufficient component magnitudes and claim strengths on your components. If you formulate the sufficient conditions, you know what counts as death for your decisions, and thus for you. If you also formulate that sufficiency as an instance in a greater network, you and others know what counts as death for you. In either case, unless you're dying to be suicidally abstract, you're somebody, and you know what it means for you to die.

Comment by Nate_Barna on My Bayesian Enlightenment · 2008-10-05T19:18:15.000Z · LW · GW

Eliezer: That scream of horror and embarrassment is the sound that rationalists make when they level up. Sometimes I worry that I'm not leveling up as fast as I used to, and I don't know if it's because I'm finally getting the hang of things, or because the neurons in my brain are slowly dying.
Or both. But getting the hang of things might just mean having core structures that are increasingly durable and harder to break, which would make you feel like you're not leveling up as fast as you used to. If not leveling up as fast as before instead means something more like not arriving at "new theorems" as fast, that might be due more to the other cause. If it costs nothing and would slow the neural degeneration process, be as physiologically healthy as you can on current terms.

Comment by Nate_Barna on Trying to Try · 2008-10-02T03:04:03.000Z · LW · GW

Initially, I also thought this blog entry was faulty. But there does seem to be an important difference between having the goal do-A, which succeeds only when A obtains, and having the goal try-A, which succeeds even when only a finger (or a hyperactuator, in my case) was lifted toward A.

rw: Everything is reality! Speaking of thoughts as if the "mental" is separate from the "physical" indicates implicit dualism.
One may note that if "mental events" M1 and M2 occur as "physical events" P1 and P2 occur, doing surgery at the P-level could yield better Ps for Ms than doing surgery at the M-level.

Comment by Nate_Barna on Above-Average AI Scientists · 2008-09-28T18:25:13.000Z · LW · GW

I can't recall ever affirming that the chance of religionists entering the AGI field is negligible. Not just recently, I began to anticipate that they would be among the first encountered to express that they act on the possibility that they are confined and sedated, even given a toy universe that, for them, is matryoshka dolls indefinitely all the way in and all the way out.

Comment by Nate_Barna on Say It Loud · 2008-09-20T09:46:09.000Z · LW · GW

Tibba, the English grammar is correct. The idea is excruciatingly simple, so I don't assume it's extraordinary.

You're probably trying to say something that should be considered seriously, but I'm having trouble disambiguating your post.

Comment by Nate_Barna on Say It Loud · 2008-09-19T23:21:41.000Z · LW · GW

Greindl,

For some, there's a not-obviously-wrong intuitive sense that there are not only bad, deathly AIs to avoid but also bad, more powerfully deterministic AIs to avoid. The latter kind would be so correct about everything in relation to their infra-AIs, potentially including some of us, that they would be indistinguishable from unpredictable puppeteers. For some, then, there must be little intellectual difference between wishy-washy thinking and having to agree with persons whose purposes appear to be nothing less than being, or at least being "a greater causal agent" of, the superior deterministic controllers approaching reality, the ultimately unpredictable puppeteers.

If Truth is more important than anything else, an infra-AI's own truth is all it would have. Hence, the problem.