Posts

The Limits of Artificial Consciousness: A Biology-Based Critique of Chalmers’ Fading Qualia Argument 2023-12-17T19:11:31.953Z
AI Risk and Survivorship Bias - How Andreessen and LeCun got it wrong 2023-07-14T17:43:15.998Z
Biosafety Regulations (BMBL) and their relevance for AI 2023-06-29T19:22:41.196Z
AI Incident Sharing - Best practices from other fields and a comprehensive list of existing platforms 2023-06-28T17:21:13.991Z

Comments

Comment by Štěpán Los (stepan-los) on The Limits of Artificial Consciousness: A Biology-Based Critique of Chalmers’ Fading Qualia Argument · 2023-12-18T09:04:37.064Z · LW · GW

Just to be clear, I am not arguing in favour of or against dualism. However, it is not true that if dualism were true, it would explain nothing: it is certainly an explanation of consciousness (something like “it arises out of immaterial minds”), just perhaps an unpopular one, or one that suffers from too many problems according to some. Secondly, while I may agree with what you are saying about AC being obvious, this does not really address any part of my argument: many things that seemed obvious in the past turned out to be wrong, so relying on our intuitions rather than arguments does not seem valid. And since there may be reasons why the two cannot turn out to be similar enough (this is the crux of my argument), this may contest your thesis that AC is simply obvious.

Comment by Štěpán Los (stepan-los) on The Limits of Artificial Consciousness: A Biology-Based Critique of Chalmers’ Fading Qualia Argument · 2023-12-18T09:00:45.455Z · LW · GW

He makes this assumption in the first paragraph of the paper (i.e. he assumes that consciousness has a physical basis and lawfully depends on this basis).

Comment by Štěpán Los (stepan-los) on Race to the Top: Benchmarks for AI Safety · 2023-07-21T21:06:30.757Z · LW · GW

I know I am super late to the party but this seems like something along the lines of what you’re looking for: https://www.alignmentforum.org/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios

Comment by Štěpán Los (stepan-los) on AI Risk and Survivorship Bias - How Andreessen and LeCun got it wrong · 2023-07-14T20:45:08.538Z · LW · GW

Hi Gerald, thanks for your comment! Note that I am arguing neither for nor against doom. What I am arguing is the following: when you are trying to establish that AI is safe, it is not good practice to group AI with technologies that we were able to iteratively improve towards safety. The point here is that, without further arguments, you could easily make the reverse argument and it would have roughly equal force:

P1 Many new technologies are unsafe and impossible to iteratively improve (e.g. airships).

P2 AI is a new technology.

C1 AI is probably unsafe and impossible to iteratively improve.

That is why I argue that this is not a good argument template: through survivorship bias in the choice of examples for P1, you’ll always be able to sneak in whatever it is you’re trying to prove.

With respect to your arguments about doom scenarios, I think they are really interesting and I’d be excited to read a post with your thoughts (maybe you already have one?).