Comments

Comment by Eniac on Why I think there's a one-in-six chance of an imminent global nuclear war · 2022-10-09T21:50:21.638Z · LW · GW

In other words, both the kaboom and the KABOOM must be initiated by Putin and have no upsides for him. That’s two huge hurdles to Armageddon, and I find your probability estimates overly pessimistic.

Comment by Eniac on Why I think there's a one-in-six chance of an imminent global nuclear war · 2022-10-09T21:35:27.443Z · LW · GW

If Putin uses a nuke in Ukraine, NATO will respond by decimating the Russian invasion force in Ukraine (probably excluding Crimea) with conventional air power. That should be seen as de-escalating, since 1) only a nuclear response can really be an escalation to a nuclear provocation, and 2) Russia’s pre-war border is not violated. It will allow the Ukrainians to take back their land to pre-war boundaries (“Vietnam”). Putin knows this (it’s probably been “explained” to him by Western leaders), so the likelihood he would choose that path is small. Even if he does, I think the likelihood he would opt for further nuclear escalation with a first strike beyond Ukraine is much smaller than you claim. The West would never opt for a nuclear first strike, because of their clear conventional advantage.

Comment by Eniac on Stupid Questions December 2014 · 2014-12-13T19:05:41.680Z · LW · GW

Good point.

I suppose it boils down to what you include when you say "mind". I think the part of our mind that talks and writes is not very different from the part that thinks. So, if you narrowly, but reasonably, define the "mind" as only the conscious, thinking part of our personality, it might not be so farfetched to think a reasonable reconstruction of it from writings is possible.

Thought and language are closely related. Ask yourself: How many of my thoughts could I put into language, given a good effort? My gut feeling is "most of them", but I could be wrong. The same goes for memories. If a memory cannot be expressed, can it even be called a memory?

Comment by Eniac on Stupid Questions December 2014 · 2014-12-11T03:39:52.906Z · LW · GW

Yes, making them would be incredibly hard, and because of their relatively short lifetimes, it would be extremely surprising to find any lying around somewhere. Atom-sized black holes would be very heavy and not produce much Hawking radiation, as you say. Smaller ones would produce more Hawking radiation, be even harder to feed, and evaporate much faster.

Comment by Eniac on Stupid Questions December 2014 · 2014-12-10T05:06:05.985Z · LW · GW

The task you describe, at least the part where no whole brain transplant is involved, can be divided into two parts: 1) extracting the essential information about your mind from your brain, and 2) implanting that same information back into another brain.

Either of these could be achieved in two radically different ways: a) psychologically, i.e. by interview or memoir writing on the extraction side and "brain-washing" on the implanting side, or b) technologically, i.e. by functional MRI, electro-encephalography, etc on the extraction side. It is hard for me to envision a technological implantation method.

Either way, it seems to me that once we understand the mind well enough to do any of this, it will turn out easiest to just do the extraction part and then simulate the mind on a computer, instead of implanting it into a new body. Eliminate the wetware, and gain the benefit of regular backups, copious copies, and Moore's law for increasing effectiveness. Also, this would be ethically much more tractable.

It seems to me this could also be the solution to the unfriendly AI problem. What if the AI are us? Then yielding the world to them would suddenly not be so much of a problem.

Comment by Eniac on Stupid Questions December 2014 · 2014-12-10T04:41:14.928Z · LW · GW

You might want to check out Centauri Dreams, best blog ever and dedicated to this issue.

Comment by Eniac on Stupid Questions December 2014 · 2014-12-10T04:34:38.190Z · LW · GW

Throwing mass into a black hole is harder than it sounds. Conveniently sized black holes that you would actually have a chance at moving around are extremely small, much smaller than atoms, I believe. I think they would just sit there without eating much, despite strenuous efforts at feeding them. The cross-section is way too small.

To make matters worse, such holes would emit a lot of Hawking radiation, which would a) interfere with trying to feed them, and b) quickly evaporate them, ending in an intense flash of gamma rays.
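
For a sense of scale, here is a rough back-of-the-envelope sketch using the standard Schwarzschild-radius and Hawking-radiation formulas (the example masses are arbitrary illustrations; a hole with an atom-scale radius of ~1e-10 m works out to roughly 7e16 kg):

```python
# Back-of-the-envelope Hawking numbers for small black holes.
# Standard textbook formulas; the example masses are arbitrary illustrations.
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J s
k_B = 1.381e-23     # Boltzmann constant, J/K

def schwarzschild_radius(m):
    return 2 * G * m / c**2                                  # metres

def hawking_temperature(m):
    return hbar * c**3 / (8 * math.pi * G * m * k_B)         # kelvin

def hawking_power(m):
    return hbar * c**6 / (15360 * math.pi * G**2 * m**2)     # watts

def evaporation_time(m):
    return 5120 * math.pi * G**2 * m**3 / (hbar * c**4)      # seconds

for mass in (1e9, 1e12, 6.7e16):   # kg: tiny, asteroid-mass, ~atom-sized hole
    print(f"M = {mass:.1e} kg: "
          f"r_s = {schwarzschild_radius(mass):.2e} m, "
          f"T = {hawking_temperature(mass):.2e} K, "
          f"P = {hawking_power(mass):.2e} W, "
          f"t_evap = {evaporation_time(mass) / 3.15e7:.2e} yr")
```

The scaling is the point: temperature and power grow as the hole shrinks, while lifetime falls off as the cube of the mass.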

Comment by Eniac on Stupid Questions December 2014 · 2014-12-10T04:24:40.601Z · LW · GW

Hah, thanks for pointing this out. I must have read or heard of this before and then forgotten about it, except in my subconscious. Looks like they have done the math, too, and it figures. Cool!

Comment by Eniac on Stupid Questions December 2014 · 2014-12-09T01:31:18.491Z · LW · GW

Well, this is not pumping, but it might be much more efficient: As I understand, the polar ice caps are in an equilibrium between snowfall and runoff. If you could somehow wall in a large portion of polar ice, such that it cannot flow away, it might rise to a much higher level and sequester enough water to make a difference in sea levels. A super-large version of a hydroelectric dam, in effect, for ice.

It might also help to have a very high wall around the patch to keep air from circulating, keeping the cold polar air where it is and reducing evaporation/sublimation.
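
Rough numbers, under round assumptions I am making up here (ocean area ~3.6e8 km^2, a walled-in enclosure of 1e6 km^2), suggest the ice would have to pile up a few hundred metres higher for each metre of sea-level drop:

```python
# Rough scale of the "ice dam" idea: how thick must extra ice pile up
# over a walled-in polar area to lower global sea level by a given amount?
# All inputs are round-number assumptions for illustration.

ocean_area_m2  = 3.6e14    # ~3.6e8 km^2 of ocean surface
sea_level_drop = 1.0       # target drop in metres
water_density  = 1000.0    # kg/m^3 (ignoring the seawater/freshwater difference)
ice_density    = 917.0     # kg/m^3
walled_area_m2 = 1.0e12    # assumed 1e6 km^2 enclosure

water_volume = ocean_area_m2 * sea_level_drop              # m^3 of water removed
ice_volume = water_volume * water_density / ice_density    # same mass stored as ice
extra_thickness = ice_volume / walled_area_m2              # metres of added ice

print(f"Water to sequester: {water_volume:.2e} m^3")
print(f"Stored as ice:      {ice_volume:.2e} m^3")
print(f"Extra ice thickness over the enclosure: {extra_thickness:.0f} m")
```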

Comment by Eniac on Stupid Questions December 2014 · 2014-12-09T01:10:30.134Z · LW · GW

I think you have something there. You could design a complex, but at least metastable orbit for an asteroid sized object that, in each period, would fly by both Earth and, say, Jupiter. Because it is metastable, only very small course corrections would be necessary to keep it going, and it could be arranged such that at every pass Earth gets pushed out just a little bit, and Jupiter pulled in. With the right sized asteroid, it seems feasible that this process could yield the desired results after billions of years.
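
A crude momentum-exchange estimate gives a feel for the number of flybys involved. The asteroid mass, the target orbit change, and the per-pass velocity kick below are assumed round numbers, not a worked-out mission design; the only formula used is the small-tangential-burn relation dv ≈ v·Δa/(2a):

```python
# Crude momentum-exchange estimate for moving Earth outward with repeated
# asteroid flybys. Asteroid mass and per-pass kick are assumed round numbers.
import math

G = 6.674e-11
M_sun = 1.989e30     # kg
M_earth = 5.972e24   # kg
AU = 1.496e11        # m

a0 = 1.0 * AU                      # current semi-major axis
delta_a = 0.1 * AU                 # desired outward shift (assumed target)
v_orb = math.sqrt(G * M_sun / a0)  # Earth's orbital speed, ~29.8 km/s

# Tangential delta-v needed for a small change in semi-major axis.
dv_needed = v_orb * delta_a / (2 * a0)

m_ast = 1e19        # kg, a ~100 km class asteroid (assumed)
kick_to_ast = 1e4   # m/s transferred to the asteroid per close pass (assumed)
dv_per_pass = (m_ast / M_earth) * kick_to_ast   # momentum conservation

print(f"delta-v needed by Earth: {dv_needed:.0f} m/s")
print(f"delta-v per flyby:       {dv_per_pass:.2e} m/s")
print(f"flybys required:         {dv_needed / dv_per_pass:.2e}")
```

With these numbers it comes out to on the order of 1e5 passes, which is why the timescale ends up in the billions of years for realistic encounter frequencies.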

Comment by Eniac on Linked decisions an a "nice" solution for the Fermi paradox · 2014-12-09T00:37:23.808Z · LW · GW

I don't see how this amounts to central control. At best it is parallel predetermination, but that breaks down because the actions of the AI are determined by the environment, not the utility function alone. Central control implies two-way communication and is impractical when the latency is measured in decades.

Comment by Eniac on Linked decisions an a "nice" solution for the Fermi paradox · 2014-12-08T01:20:29.540Z · LW · GW

"so either civilizations are expanding to less than 1000 stars on average, or they're not using radio waves, or our guesses about how common they are are wrong"

Absent FTL communication, it is hard to imagine a scenario in which any central control remains after civilization has spread to more than a few stars. There would be no stopping the expansion after that, so the first explanation is unlikely.

A civilization whose area of expansion includes our own solar system would be perceivable by many means other than radio, so the second explanation is really not relevant.

That leaves the third as the most likely explanation, I am afraid.

Comment by Eniac on [link] On the abundance of extraterrestrial life after the Kepler mission · 2014-12-08T00:02:01.036Z · LW · GW

My own favorite hypothesis goes like this: Our universe is most likely to be the simplest one that contains me (us, observers, conscious beings, whatever your favorite rendition of the anthropic principle). It is not likely to be much larger than necessary for creating me. The reason it is as large as it is, then, is that that's what it takes. The answer, then, is that something like me exists only once. More would be a waste of universal size and/or complexity, and Occam forbids it.

Is this as crazy as it sounds?

Comment by Eniac on [link] On the abundance of extraterrestrial life after the Kepler mission · 2014-12-07T23:47:51.550Z · LW · GW

I agree. However, considering that Kepler is not actually sensitive enough to detect Earth-sized planets in the habitable zone of sun-like stars, both of these numbers are extrapolations, and the 7-15% and 20% figures are presumably well within each other's error bounds.

Comment by Eniac on [link] On the abundance of extraterrestrial life after the Kepler mission · 2014-12-07T23:43:03.694Z · LW · GW

That is true. However, if "any value could be assigned to Fb", then any value can be made to come out of the Drake equation, except for an upper bound. Updating on Rb can shift around that upper bound, but it tells you nothing about the really small values that decide whether we are alone in the universe or not.
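
A minimal sketch of that point, with every factor other than Fb set to an arbitrary placeholder value (none of these are estimates), shows the output sweeping over just as many orders of magnitude as Fb itself:

```python
# Minimal Drake-equation sketch: with every other factor held fixed, the
# unconstrained factor f_b (the paper's Fb, the fraction of habitable planets
# that develop life) sweeps the answer over as many orders of magnitude as
# f_b itself. All factor values are illustrative placeholders, not estimates.

R_star = 2.0   # star formation rate, stars/yr
f_p = 1.0      # fraction of stars with planets
n_e = 0.2      # habitable planets per such system
f_i = 0.1      # fraction of biospheres that evolve intelligence
f_c = 0.1      # fraction of those that become communicative
L_c = 1e4      # longevity of a communicative civilization, yr

for f_b in (1.0, 1e-3, 1e-6, 1e-9, 1e-12):
    N = R_star * f_p * n_e * f_b * f_i * f_c * L_c
    print(f"f_b = {f_b:.0e}  ->  N = {N:.2e} civilizations")
```

Only the top of that range (f_b = 1) is pinned down; the bottom is wide open, which is exactly where the "are we alone" question lives.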

Comment by Eniac on In order to greatly reduce X-risk, design self-replicating spacecraft without AGI · 2014-12-07T04:54:24.255Z · LW · GW

One of the prototypical payoffs of self-replication that I have seen mentioned is solar farms in the desert that live off sand or rocks and produce arbitrarily large acreage of photovoltaics, which can then be used as a replacement for oil. This requires full self-replication, including chemical raw material processing, which is not easy to demonstrate.
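
The payoff comes from compounding. A toy calculation, with an assumed seed unit size and doubling time (both made up for illustration), shows how quickly the acreage runs away:

```python
# Toy numbers for the payoff of full self-replication: one seed factory that
# copies itself doubles the deployed photovoltaic area every replication
# period. Seed area and doubling time are assumed values.

seed_area_km2 = 1.0    # area of PV the first unit deploys (assumed)
doubling_years = 2.0   # time for one unit to copy itself (assumed)

for years in (10, 20, 30, 40):
    doublings = years / doubling_years
    area = seed_area_km2 * 2 ** doublings
    print(f"after {years:2d} yr: ~{area:.3e} km^2 of photovoltaics")
```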

I am not sure a good business case could be made for the more limited form of self-replication where the "raw material" is machine parts that only need to be assembled. That would be much easier to demonstrate, so I think a business case for it would be extremely valuable.

Comment by Eniac on In order to greatly reduce X-risk, design self-replicating spacecraft without AGI · 2014-12-07T04:39:27.047Z · LW · GW

Bacteria perform quite well at expanding into an environment, and they are not intelligent.

Comment by Eniac on In order to greatly reduce X-risk, design self-replicating spacecraft without AGI · 2014-12-07T04:34:56.437Z · LW · GW

I think this is because Freitas and Drexler and others who might have pursued clanking replicators became concerned with nanotechnology instead. It seems to me that clanking replicators are much easier, because we already have all the tools and components to build them (screwdrivers, electric motors, microchips, etc.). Nanotechnology, while incorporating the same ideas, is far less feasible and may be seen as a red herring that has cost us 30 years of progress in self-replicating machines. Clanking replicators are also much less dangerous, because it is much easier to pull the plug or throw in a wrench when something goes wrong.

Comment by Eniac on In order to greatly reduce X-risk, design self-replicating spacecraft without AGI · 2014-12-07T04:23:16.196Z · LW · GW

"It seems to me that you are making a map-territory confusion here. Existential risks are in the territory."

If I understand the reasoning correctly, it is that we only know the map. We do not know the territory. The territory could be of many different kinds, as long as they are consistent with the map. Adding SRS to the map rules out some of the less safe territories, i.e. reduces our estimated existential risk. It is a Bayesian-type argument.
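
A toy version of that update, with entirely made-up numbers: spread a prior over possible territories, zero out the ones the new map rules out, renormalize, and the probability mass on the unsafe ones drops:

```python
# Toy Bayes update for the map/territory argument above: territories
# inconsistent with the newly mapped fact get likelihood 0, and the estimated
# risk (probability mass on unsafe territories) falls. All numbers are made up.

# (label, prior probability, is_unsafe, consistent_with_new_fact)
territories = [
    ("A", 0.25, False, True),
    ("B", 0.25, False, True),
    ("C", 0.25, True,  True),
    ("D", 0.25, True,  False),   # an unsafe territory the new fact rules out
]

prior_risk = sum(p for _, p, unsafe, _ in territories if unsafe)

posterior = [(name, p * (1.0 if ok else 0.0), unsafe)
             for name, p, unsafe, ok in territories]
Z = sum(p for _, p, _ in posterior)
post_risk = sum(p for _, p, unsafe in posterior if unsafe) / Z

print(f"risk estimate before: {prior_risk:.2f}")
print(f"risk estimate after:  {post_risk:.2f}")
```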

Comment by Eniac on [link] On the abundance of extraterrestrial life after the Kepler mission · 2014-12-06T04:19:41.085Z · LW · GW

This is indeed unexpected. It appears the belief in aliens has been waning instead of waxing as we find out more and more about the universe.

"So what happens if we find all these biologically feasible exoplanets that just don't have any life on them?"

We go forth and put some there, of course!

Comment by Eniac on [link] On the abundance of extraterrestrial life after the Kepler mission · 2014-12-06T04:11:39.860Z · LW · GW

Estimates? Here are some quotes from the paper on those "estimates":

"Also Lc, the average longevity of a communicative civilization, cannot be inducted from its short history on Earth and could be anywhere between a few hundred years and billions of years."

"Bayesian analysis demonstrates that as long as Earth remains the only known planet with biotic life, any value could be assigned to Fb"

You tell me how valuable these estimates are, in view of their precision....

Comment by Eniac on Rationality Quotes December 2014 · 2014-12-06T03:25:07.523Z · LW · GW

Reality is merely an illusion, albeit a very persistent one.

  • Albert Einstein

(http://www.brainyquote.com/quotes/quotes/a/alberteins100298.html)