Comments

Comment by Konstantin P (konstantin-pilz) on Anthropic is further accelerating the Arms Race? · 2023-04-11T12:28:46.921Z · LW · GW

The way we communicate changes how people think. So if they currently just think of AI as normal competition, but then realize it's worth racing to powerful systems, we may give them the intention to race. And worse, we might get additional actors to join in, such as the DOD, which would accelerate things even further.

Comment by Konstantin P (konstantin-pilz) on Anthropic is further accelerating the Arms Race? · 2023-04-07T10:56:22.706Z · LW · GW

Please don't call it an arms race, or it might become one. (Let's not spread that meme to onlookers.) This is just about the wording, not the content.

Comment by Konstantin P (konstantin-pilz) on One-day applied rationality workshop in Berlin Aug 29 (after LWCW) · 2022-07-31T20:09:30.384Z · LW · GW

Alright, I'm sold :D

Comment by Konstantin P (konstantin-pilz) on One-day applied rationality workshop in Berlin Aug 29 (after LWCW) · 2022-07-28T20:10:38.105Z · LW · GW

This sounds quite promising, but I find it hard to evaluate whether it's going to be worth 90€. Why the high price?

Comment by Konstantin P (konstantin-pilz) on Confusion about neuroscience/cognitive science as a danger for AI Alignment · 2022-06-23T06:04:37.221Z · LW · GW

Interesting argument, though I don't quite agree with the conclusion to stay away from brain-like AGI safety.

I think you could argue that, if the assumption holds that AGI will likely be brain-like, it would be very important for safety researchers to take up this perspective before mainstream AI research realizes it.

I think there is also a case to be made that you could probably tell the safety community about your discovery without speeding up mainstream AI research, but this depends on what exactly your discovery is (e.g. it might work for theoretical work, less so for practical work).

Even if you were very convinced that brain-like AGI is the only way we can get there, it should still be possible to do research that differentially speeds up safety. For example, if you discovered some kind of architecture that would be very useful for capabilities, you could refrain from laying out how it would be useful, and instead work on the assumption that future AI will look that way and base your safety work on that.