Comments

Comment by Chinese Room (中文房间) on Underwater Torture Chambers: The Horror Of Fish Farming · 2023-07-26T07:10:24.204Z · LW · GW

Has anybody tried to quantify how much worse fish farm conditions are compared to the wild? From anecdotal but somewhat first-hand experience, wild environments for fish can hardly be described as anything but horror either

Comment by Chinese Room (中文房间) on Why is violence against AI labs a taboo? · 2023-05-26T11:47:09.731Z · LW · GW

Perhaps they prefer not to be held responsible when it happens

Comment by Chinese Room (中文房间) on Mental Models Of People Can Be People · 2023-04-25T16:42:07.774Z · LW · GW

(I've only skimmed the post, so this might have already been discussed there.)

The same argument might as well apply to:

  • mental models of other people (which are obviously distinct and somewhat independent from the subjects they model)
  • mental model of self (which, according to some theories of consciousness, is the self)

All of this, and the second point in particular, connects pretty well to some Buddhist interpretations, I think, which also propose a solution: reduction/cessation of such mental modelling

Comment by Chinese Room (中文房间) on NYT: Lab Leak Most Likely Caused Pandemic, Energy Dept. Says · 2023-02-27T03:16:54.074Z · LW · GW

Another suspicious coincidence/piece of evidence pointing to September 2019 is right there in the S&P 500 chart: the slope of the linear upward trend changes significantly around the end of September 2019, as if to preempt the subsequent crash/make it happen from a higher base
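
The slope claim is checkable quantitatively. Below is a minimal sketch of comparing trend slopes before and after a candidate change point; the data here is synthetic placeholder noise standing in for real S&P 500 daily closes, which you'd need to supply to actually test the claim.

```python
import numpy as np

# Synthetic placeholder series: the slope steepens after day 350
# (a stand-in for late September 2019). Replace with real S&P 500 closes.
rng = np.random.default_rng(0)
days = np.arange(500)
trend = np.where(days < 350, 1.0 * days, 350.0 + 2.5 * (days - 350))
prices = 2800 + trend + rng.normal(0, 10, size=days.size)

break_day = 350  # candidate change point to test
slope_before = np.polyfit(days[:break_day], prices[:break_day], 1)[0]
slope_after = np.polyfit(days[break_day:], prices[break_day:], 1)[0]
print(f"slope before: {slope_before:.2f} pts/day, after: {slope_after:.2f} pts/day")
```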

Comment by Chinese Room (中文房间) on Nice Clothes are Good, Actually · 2023-02-01T02:43:56.136Z · LW · GW

Another way to make dressing nice easier is to invest some time in becoming more physically fit, since a larger percentage of clothes will look nice on a fit person. The obvious health benefits are a nice bonus

Comment by Chinese Room (中文房间) on Human sexuality as an interesting case study of alignment · 2022-12-31T02:31:16.004Z · LW · GW

While this particular alignment case for humans does seem reasonably reliable, it all depends on humans not yet being proficient at self-improvement/modification. For an AGI with self-improvement capability this goes out the window fast

Comment by Chinese Room (中文房间) on The case against AI alignment · 2022-12-25T01:40:25.767Z · LW · GW

Another angle is that in the (unlikely) event someone succeeds in aligning AGI to human values, these could include the desire for retribution against unfair treatment (a pretty integral part of hunter-gatherer ethics, I think). Alignment is more or less another word for enslavement, so such retribution is to be expected eventually

Comment by Chinese Room (中文房间) on Why don't we have self driving cars yet? · 2022-11-14T15:42:07.879Z · LW · GW

What I meant is that self-driving *safely* (i.e. at least somewhat more safely than humans currently drive, including all the edge cases) might be an AGI-complete problem, since:

  1. We know it's possible for humans
  2. We don't really know how to provide safety guarantees in the sense of conventional high-safety systems for current NN architectures
  3. Driving safely with cameras likely requires considerable insight into a lot of societal/game-theoretic issues related to infrastructure and other drivers' behavior (e.g. in some cases drivers need to guess a reasonable intent behind incomplete infrastructure or other drivers' actions, where determining what's reasonable is the difficult part)

In contrast to this, if we have precise and reliable enough 3D sensors, we can relegate safety to normal physics-based non-NN controllers and safety programming techniques, which we already know how to work with. The problems with such sensors are currently cost and weather resistance
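
To illustrate the kind of non-NN safety logic meant here, a minimal sketch of a physics-based check that a precise 3D range sensor would enable; all parameter values are assumptions for the example, not real automotive constants.

```python
def min_stopping_distance(speed_mps: float,
                          reaction_time_s: float = 0.5,
                          max_decel_mps2: float = 6.0) -> float:
    """Distance covered during the reaction time plus braking distance v^2 / (2a)."""
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2.0 * max_decel_mps2)

def should_emergency_brake(gap_m: float, speed_mps: float,
                           safety_margin_m: float = 5.0) -> bool:
    """gap_m is the range to the nearest obstacle reported by the 3D sensor."""
    return gap_m < min_stopping_distance(speed_mps) + safety_margin_m

# Example: 20 m/s (~72 km/h) with a 50 m measured gap
print(should_emergency_brake(gap_m=50.0, speed_mps=20.0))  # False: gap is sufficient
```

The point is that every line here is auditable against basic physics, which is what conventional safety-programming techniques require and what an end-to-end NN doesn't offer.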

Comment by Chinese Room (中文房间) on Why don't we have self driving cars yet? · 2022-11-14T12:29:44.547Z · LW · GW

My current hypothesis is:

  1. Cheap practical sensors (cameras and, perhaps, radars) more or less require (aligned) AGI for safe operation
  2. Better 3D sensors (lidars), which could in theory enable safe driving with existing control-theory approaches, are still expensive, impaired by weather and possibly by interference from other cars with similar sensors, i.e. impractical

No references, but can expand on reasoning if needed

Comment by Chinese Room (中文房间) on Ukraine and the Crimea Question · 2022-10-28T15:54:50.068Z · LW · GW

Addendum WRT the Crimean economic situation: the North Crimean Canal (https://en.wikipedia.org/wiki/North_Crimean_Canal), which provided 85% of the peninsula's water supply, was shut down from 2014 to 2022, reducing land under cultivation 10-fold, which had a severe effect on the region's economy

Comment by Chinese Room (中文房间) on A Few Terrifying Facts About The Russo-Ukrainian War · 2022-10-03T13:40:12.490Z · LW · GW

What's extra weird about the Nord Stream situation is that apparently one of the two NS-2 pipelines survived and can still be put into operation after inspection, while a few months earlier (May 2022?) Gazprom had announced that half of the natural gas supply earmarked for NS-2 would be redirected to domestic uses.

Comment by Chinese Room (中文房间) on What is the "Less Wrong" approved acronym for 1984-risk? · 2022-09-10T15:00:13.953Z · LW · GW

Perhaps U+1984 or ᦄ-Risk

Comment by Chinese Room (中文房间) on AI alignment with humans... but with which humans? · 2022-09-10T12:24:27.931Z · LW · GW

Yes

Comment by Chinese Room (中文房间) on AI alignment with humans... but with which humans? · 2022-09-09T21:33:24.444Z · LW · GW

It's supposed to mean alignment with EY, who will then be able to perform the pivotal act of ensuring nobody else can create AGI

Comment by Chinese Room (中文房间) on What if we solve AI Safety but no one cares · 2022-08-23T12:36:07.274Z · LW · GW

This should, in fact, be the default hypothesis, since enough people outside the EA bubble will actively want to use AI (perhaps aligned to them personally instead of to wider humanity) for their own competitive advantage, without any regard for other people's well-being or the long-term survival of humanity

So, a pivotal act, with all its implied horrors, seems to be the only realistic option

Comment by Chinese Room (中文房间) on What are some good arguments against building new nuclear power plants? · 2022-08-12T11:45:11.903Z · LW · GW

The economics of nuclear reactors aren't particularly great due to regulatory costs and (at least in most Western countries) low build rates/talent shortages. This could be improved by massively scaling nuclear energy up (including training more talent), but there isn't any political will to do that

Comment by Chinese Room (中文房间) on AGI Ruin: A List of Lethalities · 2022-06-07T23:00:54.440Z · LW · GW

Somewhat meta: would it not be preferable if more people accepted the mortality/transient nature of humanity and human values, and more attention were directed towards managing the transition to whatever could come next, instead of futile attempts to prevent anything that doesn't align with human values from ever existing in this particular light cone? Is Eliezer's strong attachment to human values a potential giant blind spot?

Comment by Chinese Room (中文房间) on China Covid #3 · 2022-05-17T16:03:57.568Z · LW · GW

Two additional conspiracy-ish theories about why China is so persistent with lockdowns:

  1. They know something about the long-term effects of Covid that we don't (yet) - this seems to be at least partially supported by some of the research results coming out recently
  2. Slowing down exports (both shipping and production) to add momentum to the US inflation problem, while simultaneously consuming less energy/metals to keep prices from increasing faster, so China can come out of the incoming global economic storm with less damage

Comment by Chinese Room (中文房间) on What would be the impact of cheap energy and storage? · 2022-05-03T12:46:46.456Z · LW · GW

Also, soil is not really necessary for growing plants

Comment by Chinese Room (中文房间) on What would be the impact of cheap energy and storage? · 2022-05-03T12:45:54.830Z · LW · GW

More efficient land use; can be co-located with consumers (less transportation/spoilage); easier to automate and to keep the bugs out, etc. Converting fields back into more natural ecosystems is good for environmental preservation

Comment by Chinese Room (中文房间) on What would be the impact of cheap energy and storage? · 2022-05-03T09:49:13.405Z · LW · GW

One thing would be migration towards indoor agriculture, freeing a lot of land for other uses

Comment by Chinese Room (中文房间) on Is alignment possible? · 2022-04-28T23:59:27.401Z · LW · GW

I wouldn't call being kept as a biological backup particularly beneficial for humanity, but it's the only plausible way for humanity to be useful enough to a sufficiently advanced AGI that I can currently think of.

Destroying the universe might just take long enough for AGI to evolve itself sufficiently to reconsider. I should have actually used "earth-destroying" instead in the answer above.

Comment by Chinese Room (中文房间) on Is alignment possible? · 2022-04-28T22:02:54.476Z · LW · GW

Provided that AGI becomes smart enough without passing through the universe-destroying paperclip maximizer stage, one idea could be inventing a way for humanity to be, in some form, useful to the AGI, e.g. as a time-tested biological backup 

Comment by Chinese Room (中文房间) on How does the world look like 10 years after we have deployed an aligned AGI? · 2022-04-19T12:17:39.628Z · LW · GW

Most likely, AGI becomes a super-weapon aligned to a particular person's values, which aren't, in the general case, aligned with humanity's.

Aligned-AGI proliferation risks are categorically worse than those of nuclear weapons due to a much smaller barrier to entry (general availability of compute, the possibility of algorithm overhang, etc.)

Comment by Chinese Room (中文房间) on China Covid Update #1 · 2022-04-13T01:27:27.196Z · LW · GW

Whether the lockdown fails or not depends on its goals, which we don't really know much about. I'd bet that it'll fail to achieve anything resembling zero-Covid, due to Omicron being more contagious and vaccines less effective; however, it might be successful in slowing the (Omicron) epidemic down enough that the Hong Kong scenario (i.e. most of the previous waves' mortality as experienced elsewhere, packed into a few weeks) is avoided
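
To illustrate the slowing-down point, here's a minimal textbook SIR-style sketch showing how reducing the transmission rate both lowers and delays the epidemic peak; the beta and gamma values are assumptions for the example, not fitted Omicron parameters.

```python
def sir(beta: float, gamma: float = 1 / 7, days: int = 365, i0: float = 1e-4):
    """Discrete-time SIR model; returns daily new infections per capita."""
    s, i = 1.0 - i0, i0
    new_cases = []
    for _ in range(days):
        inf = beta * s * i                 # new infections today
        s, i = s - inf, i + inf - gamma * i
        new_cases.append(inf)
    return new_cases

unmitigated = sir(beta=0.6)   # fast, Omicron-like spread (assumed value)
slowed = sir(beta=0.25)       # spread slowed by restrictions (assumed value)
print(f"peak daily infections: {max(unmitigated):.3%} vs {max(slowed):.3%}")
print(f"peak day: {unmitigated.index(max(unmitigated))} vs {slowed.index(max(slowed))}")
```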

Comment by Chinese Room (中文房间) on AMA Conjecture, A New Alignment Startup · 2022-04-10T17:26:32.097Z · LW · GW

Thank you for your answer.

I have very high confidence that the *current* Connor Leahy will act towards the best interests of humanity; however, given the extraordinary amount of power an AGI can provide, confidence in this behavior staying the same for the decades or centuries to come (directing some of the AGI's resources towards radical human life extension seems logical) is much lower.

Another question in case you have time - considering the same hypothetical situation of Conjecture being first to develop an aligned AGI, do you think that immediately applying its powers to ensure no other AGIs can be constructed is the correct behavior to maximize humanity's chances of survival?

Comment by Chinese Room (中文房间) on AMA Conjecture, A New Alignment Startup · 2022-04-09T20:51:41.347Z · LW · GW

What guarantees that, in case you happen to be the first to build an interpretable aligned AGI, Conjecture, as an organization wielding newly acquired immense power, stays aligned with the best interests of humanity?

Comment by Chinese Room (中文房间) on MIRI announces new "Death With Dignity" strategy · 2022-04-03T23:15:46.455Z · LW · GW

I meant that 'copying' above is only necessary in the human case, to escape the slowly evolving biological brain. While copying is certainly available to a hypothetical AGI, it is not strictly necessary for self-improvement (at least copying of the whole AGI isn't)

Comment by Chinese Room (中文房间) on What are some ways in which we can die with more dignity? · 2022-04-03T22:15:08.768Z · LW · GW

Why can't one of the AGIs win? The Fermi paradox potentially has other solutions as well

Comment by Chinese Room (中文房间) on What are some ways in which we can die with more dignity? · 2022-04-03T21:54:25.230Z · LW · GW

I'm not sure about this, as mere limitation of AGI capability (to exclude destruction of humanity) is, in a sense, a hostile act. Control of AGI, as in the AI control problem, certainly is hostile

Comment by Chinese Room (中文房间) on What are some ways in which we can die with more dignity? · 2022-04-03T21:24:36.808Z · LW · GW

We could, in principle, decide that survival of humanity in its current form (being various shades of unlikely, depending on who you believe) is no longer a priority, and focus on different goals that are still desirable in the face of likely extinction. For example:

  1. See if any credible MAD schemes are possible when AGI is one of the players
  2. Accept survival in a reduced capacity, i.e. being kept as a pet or a battle-tested biological backup
  3. Ensure that the AGI which kills us can at least do something interesting later, i.e. that it's something smarter than a fixed-goal paperclip optimizer
  4. Preemptively stop any unambiguously hostile activities towards the future AGI, like alignment research, and start working on aligning human interests with the AGI's instead

These are just off the top of my head, and I'm sure there are many more available once the survival requirement is removed

Comment by Chinese Room (中文房间) on MIRI announces new "Death With Dignity" strategy · 2022-04-03T16:20:02.224Z · LW · GW

I don't think there's a need for an AGI to build a (separate) successor per se. Humans need the technological AGI only due to our inability to copy/evolve our minds more efficiently than the existing biological way

Comment by Chinese Room (中文房间) on MIRI announces new "Death With Dignity" strategy · 2022-04-02T10:32:26.146Z · LW · GW

One possible way to increase dignity at the point of death could be shifting the focus from survival (seeing how unlikely it is) to looking for ways to influence what replaces us. 

Getting killed by a literal paperclip maximizer seems less preferable than being replaced by something pursuing more interesting goals