Posts

No Fire in the Equations 2023-01-28T21:16:24.896Z
Pivot! 2021-09-12T20:39:41.535Z
The AI research process as incipient superintelligence 2021-08-27T20:16:20.513Z
Could you have stopped Chernobyl? 2021-08-27T01:48:02.332Z
What is the problem? 2021-08-11T22:33:04.435Z
Carlos Ramirez's Shortform 2021-08-07T00:35:53.663Z

Comments

Comment by Carlos Ramirez (carlos-ramirez) on Pivot! · 2021-09-13T17:44:21.408Z · LW · GW

The Presence of Everything has two posts so far, and both are examples of the sort of panoramic analogical thinking we need to undo the various Gordian Knots bedeviling us.

Comment by Carlos Ramirez (carlos-ramirez) on Pivot! · 2021-09-13T14:35:13.045Z · LW · GW

Disease is down. War is down. Poverty is down. Democracy is up (on the timescale of centuries). Photovoltaics are cheaper than coal. This all seems worthwhile to me. If world peace, health and prosperity aren't worthwhile then what is?

These things are worthwhile, but the list misses critical stuff. In particular, it does not capture what makes us shudder when thinking of dystopias like the Combine: namely, how well developed our spirituality is. You can't pin that down with a number.

I don't think there's a conflict between weird contemplative stuff and making the world better in a measurable way. If the two conflict then you're doing the contemplative stuff wrong.

There is a conflict if the scientistic/materialistic worldview continues its dominance, because that worldview insists it alone is valid, and that the spiritual paths provide, at best, psychotherapeutic copes. That state of affairs is unacceptable.

When evil people see good, they try to undermine it. When good people see good, they celebrate it.

How do you think I undermine science? I just point out there is a tenebrous principle currently underpinning it. Science can very easily proceed without that. Science can get back its heart, and its sanity too, probably with increased effectiveness to boot.

Comment by Carlos Ramirez (carlos-ramirez) on Pivot! · 2021-09-12T22:01:33.143Z · LW · GW

Do you think that there is a non-secular way forward? Did you previously (before your belief update) think there is a non-secular way forward?

 

Yes, I did always think there was a non-secular way forward for all sorts of problems. It's just that I realized AI X-risk is just one head of an immense hydra: technological X-risks. I'm more interested in slaying that hydra than in coming up with ways to deal with just one of its myriad heads.

those indicators seem pretty meaningful for me. Life expectancy, poverty rates, etc.

Yeah, the indicators are worth something, but they are certainly not everything! Slavish devotion to the indicators renders one blind to critical stuff, such as X-risk, but also to things like humanity becoming something hideous or pathetic in the future.

Why are the standard arguments against religion/magic and for materialism and reductionism not compelling to you anymore?

The hard problem of consciousness, combined with learning the actual tenets of Hinduism (read the Ashtavakra Gita), was the big one for me. Dostoyevsky also did a bang-up job depicting the spiritual poverty of the materialist worldview.

Comment by Carlos Ramirez (carlos-ramirez) on Could you have stopped Chernobyl? · 2021-09-12T20:48:13.890Z · LW · GW

I think I'm the only one who found that confusing.

 

It makes sense because we don't have good stories that drill into our heads that negligence -> bad stuff, or incompetence -> bad stuff. When those things happen, it's just noise.

We have bad guys -> bad stuff instead. Which is why HBO's Chernobyl is rather important: it is a very well produced negligence -> bad stuff story.

Comment by Carlos Ramirez (carlos-ramirez) on Could you have stopped Chernobyl? · 2021-09-12T20:42:10.592Z · LW · GW

Eh. We can afford to take things slow. What you describe are barely costs.

Comment by Carlos Ramirez (carlos-ramirez) on Could you have stopped Chernobyl? · 2021-08-29T23:27:32.493Z · LW · GW

Well, INSAG-7 is 148 pages that I will not read in full, as Chernobyl is not my primary interest. But I did find this in it:

5.2.2. Departure from test procedures 

It is not disputed that the test was initiated at a power level (200 MW(th)) well below that prescribed in the test procedures. Some of the recent comments addressed to INSAG boil down to an argument that this was acceptable because nothing in normal procedures forbade it. However, the facts are that:

— the test procedure was altered on an ad hoc basis;

— the reason for this was the operators' inability to achieve the prescribed test power level;

— this was the case because of reactor conditions that arose owing to the previous operation at half power and the subsequent reduction to very low power levels;

— as a result, when the test was initiated the disposition of the control rods, the power distribution in the core and the thermal-hydraulic conditions were such as to render the reactor highly unstable.

When the reactor power could not be restored to the intended level of 700 MW(th), the operating staff did not stop and think, but on the spot they modified the test conditions to match their view at that moment of the prevailing conditions. Well planned procedures are very important when tests are to take place at a nuclear plant. These procedures should be strictly followed. Where in the process it is found that the initial procedures are defective or they will not work as planned, tests should cease while a carefully preplanned process is followed to evaluate any changes contemplated.

5.2.3. Other deficiencies in safety culture

The foregoing discussion is in many ways an indication of lack of safety culture. Criticism of lack of safety culture was a major component of INSAG-1, and the present review does not diminish that charge. Two examples already mentioned are worthy of emphasis, since they bear on the particular instincts required in reactor operation. The reactor was operated with boiling of the coolant water in the core and at the same time with little or no subcooling at the pump intakes and at the core inlet. Such a mode of operation in itself could have led to a destructive accident of the kind that did ultimately occur, in view of the characteristics of positive reactivity feedback of the RBMK reactor. Failure to recognize the need to avoid such a situation points to the flaws in operating a nuclear power plant without a thorough and searching safety analysis, and with a staff untutored in the findings of such a safety analysis and not steeped in safety culture. This last remark is especially pertinent to the second point, which concerns operation of the reactor with almost all control and safety rods withdrawn to positions where they would be ineffective in achieving a quick reduction in reactivity if shutdown were suddenly needed. Awareness of the necessity of avoiding such a situation should be second nature to any responsible operating staff and to any designers responsible for the elaboration of operating instructions for the plant.

Sounds like HBO's Chernobyl only erred in making it seem like only Dyatlov was negligent that night, as opposed to everyone in the room. But even without that, the series does show that the big takeaway was that the USSR as a whole was negligent.

Comment by Carlos Ramirez (carlos-ramirez) on Could you have stopped Chernobyl? · 2021-08-29T23:09:41.052Z · LW · GW

I highlight later on that Beirut is a much more pertinent situation. No control room there either, just failures of coordination and initiative.

Also, experts are not omnipotent. At this point, I don't think there are arguments that will convince the ones who are deniers, which is not all of them. It is now a matter of reining that field, and others, in.

Comment by Carlos Ramirez (carlos-ramirez) on Could you have stopped Chernobyl? · 2021-08-28T15:40:50.940Z · LW · GW

That's good to know, though the question remains why no one did that in Beirut.

Fauci 

I don't think reflexive circling-the-wagons around the experts happens in every context. Certainly not much of that happens for economists or psychometricians...

Comment by Carlos Ramirez (carlos-ramirez) on Could you have stopped Chernobyl? · 2021-08-27T20:25:15.411Z · LW · GW

As Ikaxas said. It's now fixed. 

Comment by Carlos Ramirez (carlos-ramirez) on Could you have stopped Chernobyl? · 2021-08-27T20:23:14.328Z · LW · GW

I'll be there. Been thinking about what precisely to ask. Probably something about how it seems we don't take AI risk seriously enough. This is assuming the current chip shortage has not, in fact, been deliberately engineered by the Future of Humanity Institute, of course...

Comment by Carlos Ramirez (carlos-ramirez) on Could you have stopped Chernobyl? · 2021-08-27T20:18:12.912Z · LW · GW

Will keep this in mind moving forward. The Beirut analogy is better at any rate.

Comment by Carlos Ramirez (carlos-ramirez) on Google’s Ethical AI team and AI Safety · 2021-08-15T00:38:25.571Z · LW · GW

Necroing.

"This perspective" being smuggling in LW alignment into corps through expanding the fear of the AI "making mistakes" to include our fears?

Comment by Carlos Ramirez (carlos-ramirez) on Carlos Ramirez's Shortform · 2021-08-06T22:58:29.275Z · LW · GW

A lucid analogy:

AI is a caveman looking at a group of caveman shamans performing a strange ritual that summons many strange and somewhat useful beings. Some of the shamans say they're eventually going to summon a Messiah that will solve all problems. Others say there's a chance of summoning a world-devouring demon by mistake. Yet others say neither will happen: the sprites and elementals the ritual has been bringing into the world are all the ritual can do.

Who should the caveman listen to and why? For bonus points, try sticking to the frame of the analogy.

Comment by Carlos Ramirez (carlos-ramirez) on Carlos Ramirez's Shortform · 2021-08-06T19:46:18.870Z · LW · GW

Civilization collapsing is blatantly better than rogue superintelligence, as it's plausibly a recoverable disaster, so yes, that is my honest belief. I don't consider non-organics to be moral entities, since I also believe they're not sentient. Yeah, I'm aware those views are contested, but then, what the hell isn't when it comes to philosophy? There are philosophers who argue for post-intentionalism, the view that our words, language and thoughts aren't actually about anything, for crying out loud.

Comment by Carlos Ramirez (carlos-ramirez) on Carlos Ramirez's Shortform · 2021-08-06T18:00:25.363Z · LW · GW

The thing is, though, there isn't a dichotomy between agents and processes. Everything physical (except maybe the elementary particles) is a process in the final analysis, as Heraclitus claimed. Even actual individual persons, the paradigmatic examples of agents, are also processes: the activity of the brain and body only ever stops at death. The appearance of people as monadic agents is just that, an appearance, and not actually real.

This might sound like too much philosophical woo-woo, but it does have pragmatic consequences: since agents are a facade over what is actually a process, the question becomes how you actually tell which processes do or do not have goals. It's not obvious that processes that can pass as agents are the only ones that have goals.

EDIT: Think about it like this: when a river floods and kills hundreds or thousands, was it misaligned? Talk of agents and alignment only makes sense in certain contexts, and only as a heuristic! And I think AI X-risk is a context in which talking in terms of agents and alignment obfuscates enough critical features of the subject that the discourse starts excluding genuine understanding.

EDIT 2: The above edit being mostly a different way of agreeing with you. I guess my original point is "The scientific research process is dangerous, and for the same reasons rogue superintelligence would be: opacity, constantly increasing capabilities, and difficulty of guaranteeing alignment". I still disagree with you (and with my own example of the river actually) that non-agents (processes) can't be misaligned. All natural forces can be fairly characterized as misaligned, as they are indifferent to our values, and this does not make them agentic (that would be animism). In fact, I would say "a dangerous non-agent is not misaligned" is false, and "a dangerous non-agent is misaligned" is a tautology in most contexts (depends on to whom it is dangerous).

Comment by Carlos Ramirez (carlos-ramirez) on Carlos Ramirez's Shortform · 2021-08-06T01:05:33.937Z · LW · GW

Given that the process of scientific research has many AGI traits (opaque, self-improving, amoral as a whole), I wonder how rational it is for laypersons to trust it. I suspect the answer is, not very. Primarily because, just like an AGI improving itself, it doesn't seem to be possible for anyone, not even insiders in the process, to actually guarantee the process will not, in its endless iterations, produce an X-risk. And indeed, said process is the only plausible source of manmade X-risk. This is basically Bostrom's technological black ball thought experiment in the Vulnerable World Hypothesis. But Bostrom's proposed solution is to double down, with his panopticon.

I have an intuition that such instances of doubling down are indications the scientific research process itself is misaligned.

Comment by Carlos Ramirez (carlos-ramirez) on Carlos Ramirez's Shortform · 2021-08-06T00:48:07.780Z · LW · GW

I started an AI X-Risk awareness twitter account. Introducing @GoodVibesNoAI. It's about collating reasons to believe civilization will collapse before it gets to spawn a rogue superintelligence that consumes all matter in the Laniakea supercluster. A good outcome, all things considered.

What do you think about it? Any particular people to follow? I've also considered doing a weekly roundup of the articles I post on it and turning that into a newsletter.