Posts

Freedom Is All We Need 2023-04-27T00:09:16.619Z

Comments

Comment by Leo Glisic on Freedom Is All We Need · 2023-04-29T21:12:31.588Z

>>If things happen the right way, we will get a lot of freedom as a consequence of that. But starting with freedom has various problems of type "my freedom to make future X is incompatible with your freedom to make it non-X".

Yes, I would anticipate a lot of incompatibilities. But in that scenario, the ASI would be incentivized to find ways to optimize for both people's freedom simultaneously. Maybe each person gets 70% of their values fulfilled instead of 100%. But over time, with new creativity and new capabilities, the ASI would be able to nudge that to 75%, then 80%, and so on. It's an endless optimization exercise.
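
To make that concrete, here is a minimal toy sketch of the idea. Everything in it — the future names, the fulfillment numbers, and the max-min selection rule — is my own illustrative assumption, not a real proposal:

```python
# Toy sketch: two people with incompatible preferences over possible futures.
# All names and numbers are hypothetical illustrations.

futures = {
    # future_name: (fraction of A's values fulfilled, fraction of B's)
    "all_X":      (1.00, 0.10),  # great for A, bad for B
    "all_nonX":   (0.10, 1.00),  # great for B, bad for A
    "compromise": (0.70, 0.70),  # neither fully satisfied
}

def pick_future(options):
    # Max-min fairness: choose the future that maximizes the
    # worse-off person's value fulfillment.
    return max(options, key=lambda name: min(options[name]))

print(pick_future(futures))  # -> "compromise"

# As capabilities grow, new options appear and the achievable floor rises:
futures["better_compromise"] = (0.75, 0.76)
print(pick_future(futures))  # -> "better_compromise"
```

The point is just that the optimizer never fully satisfies either person, but each new capability can raise the floor for both — the "endless optimization exercise" above.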

>>Second reason is safety. Yes, we can punish the people who do the bad things, but that often does not reverse the harm done, e.g. if they killed someone.

Crime and criminal justice are difficult problems we'll have to grapple with no matter what. I would argue the goal here would be to incentivize the ASI to find ways to implement criminal justice as well as possible. Yes, sometimes you have to separate a murderer from the rest of society; but is there a way to properly rehabilitate them? Certainly things can be done much better than they are today, and I think this approach would set us on a path to keep improving them over time.

>>Hypothetically, a sufficiently powerful AI with perfect surveillance could allow people do whatever they want, because it could always prevent any crime or tragedy at the last moment. 

Perfect surveillance would not make me (or many other people) feel free, so I'm not sure this would be the right solution for everyone. I imagine some people would prefer it, though; for them, the ASI could offer higher security in exchange for privacy, while for people like me, it would weight privacy more heavily.

>>I should have ignored you and made someone else happy instead who would keep being happy the next day, too!"

I would imagine a freedom-optimizing ASI would direct its efforts toward the areas with the greatest return. This would mean that someone who is volatile with their values, as you describe, would not receive the same level of effort from an ASI (nor should they) as someone who is consistent, at least until they become more consistent.

>>Ah, a methodological problem is that when you ask people "how free do you feel?" they may actually interpret the question differently, and instead report on how satisfied they are, or something.

Great point, and this is certainly one of the challenges with this approach: not just how people interpret what it means to feel free, but also the danger of words changing meaning over time for society as a whole. An example might be how the word 'liberal' meant something much closer to 'libertarian' a hundred years ago.

Comment by Leo Glisic on Freedom Is All We Need · 2023-04-29T21:01:10.344Z

>>But I don't think it is likely that adding an extra 9 to its chance of victory would take centuries.

This is one point I think we gloss over when we say 'an AI much smarter than us would have a million ways to kill us and there's nothing we could do about it, since it could perfectly predict everything we are going to do.' Upon closer analysis, this isn't precisely true. Life is not a game of chess: first, there are infinitely many possible futures rather than a finite set, so no matter how intelligent you are, you can't enumerate them all and calculate backwards. Second, the world is extremely chaotic, so no amount of modeling, even with a million or a billion times the computing power of human brains, will allow you to perfectly predict how things will play out given any action. There will always be uncertainty, and I would argue at a much higher level than is commonly assumed.

If it takes, say, 50 years to go from 95% to 99% certainty, that still leaves a 1% chance of failure. What if waiting another 50 years gets it to 99.9% (and I would argue even that level of certainty would be really difficult to achieve, even for an ASI)? And why not wait another 50 years to get to 99.99%? At some point there are enough 9s, but over the remaining life of the universe, an extra couple of hundred years spent gaining a few more 9s seems almost certainly worth it. If you are an ASI with a near-infinite time horizon, why leave anything to chance (or why not minimize that chance as much as super-intelligently possible)?
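
A back-of-the-envelope sketch of this logic, where the horizon length, probabilities, and waiting times are all made-up placeholders and the value of success is modeled simply as years of remaining future:

```python
# Toy expected-value model of "why not wait for more 9s?"
# All numbers are illustrative assumptions, not estimates.

HORIZON_YEARS = 10**10  # stand-in for "remaining life of the universe"

def expected_value(p_success, delay_years):
    # Value of the whole future if the attempt succeeds, reduced only
    # by the years spent waiting before acting.
    return p_success * (HORIZON_YEARS - delay_years)

for p, delay in [(0.95, 0), (0.99, 50), (0.999, 100), (0.9999, 150)]:
    print(f"p={p:<7} wait={delay:>3}y  EV={expected_value(p, delay):.4e}")

# Each extra 9 multiplies the expected value by far more than a 50-year
# delay subtracts from it, so under this model a patient optimizer
# keeps waiting.
```

Under these (admittedly crude) assumptions, the cost of a few centuries of delay is negligible next to the gain from shaving down the failure probability, which is the intuition behind the argument above.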

>>This sounds like an assumption that we can get from the point "humans are too dangerous to rebel against" to the point "humans pose no obstacle to AI's goals", without passing through the point "humans are annoying, but no longer dangerous" somewhere in between. 

That's an excellent point; to be clear, I'm not assuming that, only saying it may be the case. Perhaps some kind of symbiosis develops between humans and the AI such that the cost-benefit analysis tips in favor of 'it's worth it to put in the extra effort to keep humans alive.' But my overall hypothesis only requires that this would extend our survival by a decent amount of time, not that the AI would keep us alive indefinitely.

Comment by Leo Glisic on Freedom Is All We Need · 2023-04-27T16:34:06.624Z

Agreed. I tried to capture this under 'Potential Challenges' item #4. My hope is that people would value the environment and sustainability beyond just their own short-term interests, but it's not clear whether that would happen to a sufficient degree.

Comment by Leo Glisic on Open & Welcome Thread – April 2023 · 2023-04-25T21:21:30.760Z

Hi everyone, I'm Leo. I've been thinking about the AI existential threat for several years (since reading Bostrom's Superintelligence), but much more so recently with the advent of ChatGPT. I'm looking forward to learning more about the AI safety field and openly (and humbly) discussing ideas with others here!