I think I prefer, and should prefer, my smoothed-out highs and lows. During a finite manipulation sequence of a galactic supercluster, whose rules I pre-established, I wouldn't necessarily need to feel much -- since that might feel like 'a lot of pointless muscle straining' -- beyond a modest, homo-sapiens-level positive reinforcement that it's getting done. Consciousness, if I may also give my best guess, is only good for abstract senses (with and without extensions), and where these abstract senses seem concrete, even to infinite precision, no "highs" and certainly no "lows" are necessary.
Abe: Why would a being that can create minds at will flinch at their annihilation? The absolute sanctity of minds, even before God, is the sentiment of modern western man, not a careful deduction based on an inconceivably superior intelligence.

An atheist can imagine God having the thought: As your God, I don't care that you deny Me. Your denial of Me is inconsequential and unimpressive in the greater picture necessarily inaccessible to you. If this is an ad hoc imagining, then your assumption, in your question, that a being who can create minds at will doesn't flinch at their annihilation must also be ad hoc.
Abe: I find it strange how atheists always feel able to speak for God.

Sometimes, they're not trying to speak for God, as they're not first assuming that an ideally intelligent God exists. Rather, they're imagining and speaking about the theist assumption that an ideally intelligent God exists, and then they carefully draw inferences which tend to end up incoherent on that grounding. However, philosophy of religion reasonably attempts coherence, and not all atheists are completely indifferent toward it.
If a lie is defined as the avoidance of truthfully satisfying interrogative sentences (this includes remaining silent), then it wouldn't be honest, under request, to withhold details of a referent. But privacy depends on the existence of some unireferents, as opposed to none and to coreferents. If all privacy shouldn't be abolished, then it isn't clear that the benefits of honesty as an end in itself are underrated.
As it goes, how I've come to shut up and do the impossible: Philosophy and (pure) mathematics, as activities a cognitive system engages in by taking more (rather than fewer) resources for granted, are primarily for conceiving destinations in the first place, perhaps continuous ones, where the intuitively impossible becomes possible; they're secondarily for the destinations' complement on the map, with its solution paths and everything else. Science and engineering, as activities a cognitive system engages in by taking fewer (rather than more) resources for granted, are primarily for the destinations' complement on the map; they're secondarily for conceiving destinations in the first place, as in, perhaps, getting the system to destinations where even better destinations can be conceived.
Because this understanding is how I've come to shut up and do the impossible, it's somewhat disappointing when philosophy and pure mathematics get ridiculed. To ridicule them must be a relief.
Phil: [. . .] In such a world, how would anybody know if "you" had died?

Perhaps anyone else knowing whether you're alive or dead wouldn't matter. You die when you lose sufficient component magnitudes and claim strengths on your components. If you formulate the sufficient conditions, you know what counts as death for your decisions, and thus for you. If you formulate the sufficiency also as an instance in a greater network, you and others know what counts as death for you. In either case, unless you're dying to be suicidally abstract, you're somebody and you know what it means for you to die.
Eliezer: That scream of horror and embarrassment is the sound that rationalists make when they level up. Sometimes I worry that I'm not leveling up as fast as I used to, and I don't know if it's because I'm finally getting the hang of things, or because the neurons in my brain are slowly dying.

Or both. Getting the hang of things might just mean having core structures that are more and more durable, harder and harder to break, which can make you feel like you're not leveling up as fast as you used to. If not leveling up as fast instead means something more like not arriving at "new theorems" as fast, that might owe more to the other cause. If it doesn't cost anything and would slow down the neural degeneration process, be as physiologically healthy as you can on current terms.
Initially, I also thought this blog entry was faulty. But there does seem to be an important difference between having the goal do-A, which succeeds only when A, and having the goal try-A, which succeeds when merely a finger (or, in my case, a hyperactuator) was lifted toward A.
rw: Everything is reality! Speaking of thoughts as if the "mental" is separate from the "physical" indicates implicit dualism.

One may note that if "mental events" M1 and M2 occur as "physical events" P1 and P2 occur, doing surgery at the P-level could yield better Ps for Ms than doing surgery at the M-level.
I can't recall ever affirming that the chance is negligible that religionists enter the AGI field. And not just recently, I began to anticipate that they would be among the first encountered expressing that they act on the possibility that they are confined and sedated, even given a toy universe that is, for them, matryoshka dolls indefinitely all the way in and all the way out.
Tibba, the English grammar is correct. The idea is excruciatingly simple, so I don't assume it's extraordinary.
You're probably trying to say something that should be considered seriously, but I'm having trouble disambiguating your post.
Greindl,
For some, there's a not obviously wrong intuitive sense that not only might there be bad, deathly AIs to avoid but also bad, more powerfully deterministic AIs to avoid. This latter kind would be so correct about everything in relation to their infra-AIs, potentially including some of us, that they would be indistinguishable from unpredictable puppeteers. For some, then, there must be little intellectual difference between wishy-washy thinking and having to agree with persons whose purposes appear to be nothing less than being, or at least being "a greater causal agent" of, the superior deterministic controllers, approaching reality, the ultimately unpredictable puppeteers.
If Truth is more important than anything else, an infra-AI's own truth is all it would have. Hence, the problem.