An Ignorant View on Ineffectiveness of AI Safety
post by Iknownothing · 2023-01-07T01:29:59.126Z · LW · GW · 7 comments
I am ignorant of AI and AI Safety to a very large degree. It is very likely that much of what I say here will seem silly to an actual expert, or has been thought of long ago, tried, and taken as far as anyone knew how. But if anything here could help, or could be turned into something that helps, I would rather have spent the time writing this than not.
AI Safety and alignment, as far as this ignorant one knows, have been very ineffective. Current progress is reckless and very, very fast, with little to no restraint. The main approach in the actual industry seems to be to patch things up as they come along, and only really if they affect the profit margin. Meanwhile, the experts who aren't in favor of this seem to have amounted to little more than educated picketers.
Something I don't understand: other areas such as climate change and urban planning (which, I appreciate, are very different fields; I use them only because the power dynamics are slightly similar) share a dynamic of experts with little personal power advocating for changes that are better for people generally, against larger, more powerful organizations that profit hugely from the way things are currently done. Yet people in AI Safety seem to be doing little besides advocacy. What they especially don't seem to be doing, and what experts in climate change and urban planning are doing, is offering profitable alternatives, or even alternative courses of action, perhaps aimed at another powerful group. As a small-scale example, if I offer solar panels to people in my village, I can appeal to how they'll save money and gain self-governance. I can point out how quiet they are compared to a generator, sympathizing that they might not rank the climate as highly as I do because they have other problems. An urban planning advocate might appeal to how good drivers would face far fewer poor drivers on the road if more of those drivers were in buses, and buses didn't get stuck in traffic, or how there might be more free parking for drivers if there were better bike lanes and bike parking. Essentially: putting forth an idea so that it appeals not just to themselves but to others as well, and is plainly profitable to those others.
I don't see this in AI Safety. The attitude often seems to be of the 'woe is me' kind, with the underlying idea that there are people foolishly doing things that will doom everyone, and that if those people don't listen to AI Safety, they are dumb and going to do bad things. Even if that is true, it does not seem effective or helpful to me. Of course, maybe I am completely wrong about this.
Perhaps one of the main flaws of AI Safety is the very poor alternatives offered. The main alternative seems to be to do the same thing more slowly and more carefully. That will obviously be less profitable, but the huge company should do it anyway, because it is the sensible and right thing to do. In my sincere opinion, if anyone is actually trying to take this approach, and it's not just an arrogant and ignorant misunderstanding of mine, they are very, very foolish and self-centered.
A lot of what AI currently does that is visible to the general public seems like it could be replicated without AI, e.g. producing boilerplate code that can be quickly edited. There are already huge libraries of boilerplate code. Surely a system connecting most of them, which lets the user search for a template, quickly make changes to it, and maybe even make and upload templates of their own, is not something extraordinary. But it does seem like it would be useful. The same could be said for art: how useful would it be for an artist to select the outline of a person or thing from a dropdown or search bar, stretch it as much as they like, and then have it filled in with a specific gradient? And, as with the code, create and upload their own boilerplates. A simple way to make these things profitable might be to make older boilerplates free while ones from, say, the last month are only available to subscribed users. Yet, as far as I am aware, this does not yet exist and is not being worked on. It would not even have to be a separate application; in fact, it might be better as a set of extensions for already widely used IDEs or drawing apps.
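To make the boilerplate idea concrete, here is a minimal sketch in Python of what such a registry might look like. Everything in it (the Registry class, the 30-day free window, the flask-hello example) is invented purely for illustration; it is not an existing product or API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class Boilerplate:
    name: str
    tags: list[str]
    code: str
    uploaded: datetime


@dataclass
class Registry:
    entries: list[Boilerplate] = field(default_factory=list)
    free_after: timedelta = timedelta(days=30)  # entries older than this are free

    def upload(self, name: str, tags: list[str], code: str) -> None:
        """Add a user-contributed boilerplate, stamped with the upload time."""
        self.entries.append(Boilerplate(name, tags, code, datetime.now()))

    def search(self, query: str, subscribed: bool = False) -> list[Boilerplate]:
        """Match the query against names and tags; recent entries are subscriber-only."""
        q = query.lower()
        now = datetime.now()
        return [
            bp for bp in self.entries
            if (q in bp.name.lower() or any(q == t.lower() for t in bp.tags))
            and (subscribed or now - bp.uploaded > self.free_after)
        ]


if __name__ == "__main__":
    reg = Registry()
    reg.upload("flask-hello", ["python", "web"],
               "from flask import Flask\napp = Flask(__name__)")
    # A free user sees nothing yet, since the entry is under a month old...
    print(len(reg.search("flask")))                   # 0
    # ...while a subscriber sees the fresh upload.
    print(len(reg.search("flask", subscribed=True)))  # 1
```

In an IDE extension, the same gating rule would just sit behind the search endpoint rather than in a local class, but the monetization logic (old templates free, fresh ones gated) is the whole trick.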
Something else I don't see being done is creating incentives for AI engineers and other workers not to go work for companies such as OpenAI, DeepMind, etc., or building things that compete for the resources such companies need.
I also see no mention of targeting the companies that fund them. Or the systems that allow such monopolies to grow so fat that they easily have the money to fund them. Or the systems that make unions so feeble that they talk about working with AI instead of fighting against it.
The main reason such things are not being worked on, it seems to me, is that AI Safety people actually do want AI. They actually want the research to be done. They actually want an AGI to be made that will replace the human workforce. But they want it done more slowly and more carefully. I find this silly. How inspiring would the man who blocked the tanks near Tiananmen Square have been if, instead of standing in front of them, he had walked alongside them, asking them to slow down and maybe think about not using their guns? It would be laughable.
7 comments
comment by Mitchell_Porter · 2023-01-07T05:58:41.883Z · LW(p) · GW(p)
people in AI Safety ... especially don't seem to be ... offering profitable alternatives
The problem is that AI is a general-purpose tool, so there is no single alternative one could offer in its place. It's far simpler to just ban AI entirely.
One could imagine an approach similar to the UN Security Council's handling of nuclear weapons. The nonproliferation treaty says that only the permanent members of the UNSC (the big five victors of World War II) are allowed to have nuclear weapons, but they promise to help other states with civilian uses of nuclear energy. An "AI nonproliferation treaty" could likewise say that only the permanent members are allowed to have this technology, and that they must nationalize it or otherwise keep it under tight control, but will make it available to other states in limited form.
I don't think any of that will happen, either. Not unless something happens that deeply frightens the great powers. Just as AI is irresistibly tempting for those who seek profit, it is also irresistibly tempting for those who seek power.
So I am definitely in the camp of those who are trying to improve the odds that AI will be benevolent, rather than fighting to stop it from happening at all. I actually would like to see the "stop AI, or at least slow it down [LW · GW]" body of opinion become better organized. But I think it would need to be based outside the AI research and AI safety communities.
Replies from: Iknownothing
↑ comment by Iknownothing · 2023-03-03T13:14:11.747Z · LW(p) · GW(p)
The trouble I see with banning AI, versus banning nuclear weapons, is that it's a lot harder to detect and catch people who are making AI. Banning AI is more like banning drugs or gambling: it could be done, but the effectiveness really varies. Creating a narrative against using it, since it's bad for your health; associating it with addicts; making it clear how it's not profitable even if it seems that way on the surface; controlling the components used to make it, etc., all seem much more effective.
I agree that AI is very tempting for those who seek profit, but I don't agree with the irresistibility. I think a sufficiently tech-savvy businessman who's looking for long-term profits, on the scale of at least decades rather than years, can see how unprofitable AI will be.
Something that is not fully understood and gets harder and harder to understand; that discourages the people who wanted to study to become experts, yet needs those experts to verify its results; and that is very energy- and computation-intensive on top of all that, is not sustainable. And that's without even considering it at some point having its own will, which is unlikely to be in line with your own.
Now, many businessmen seeking short-term profits will certainly be attracted to it, and perhaps some long-term ones will also think they can ride the wave for a bit and then cash out. Or some businessmen powerful enough to think they'll be the ones left holding the AI that essentially becomes the economy.
Take this with a lot of salt please, I'm very ignorant on a lot of this.
With what I know, even in the scenarios where we get well-aligned AGI, which seems very unlikely, it's much more likely to be used to further cement the power of authoritarian governments or corporations than to help people. Any help will likely be a side effect, or a necessary step for said government or corporation to gain more power.
If we say that empowering people, helping people be able to help themselves, and helping people feel fulfilled and happy is the goal, it seems to me that we must focus on tech and laws that move us away from things like AI, and more towards fixing tax evasion, making solar panels more efficient and cheaper, urban planning that allows walkable cities, reducing our need for the Internet, etc.
Replies from: Iknownothing, Iknownothing
↑ comment by Iknownothing · 2023-05-11T11:24:20.421Z · LW(p) · GW(p)
Being a bit less ignorant now, I disagree with a lot of "I agree that AI is very tempting for those who seek profit, but I don't agree with the irresistibility. I think a sufficiently tech-savvy businessman who's looking for long-term profits, on the scale of at least decades rather than years, can see how unprofitable AI will be."
↑ comment by Iknownothing · 2023-03-03T13:16:56.145Z · LW(p) · GW(p)
One of the biggest things I think we can immediately do is not consume online entertainment. Have more in-person play and fun, and encourage it in others too. The more this is done, the less data is available for training AI.
Replies from: knowsnothing
↑ comment by knowsnothing · 2023-07-24T18:54:52.602Z · LW(p) · GW(p)
I disagree with this now.
comment by AnthonyC · 2023-05-13T02:28:04.203Z · LW(p) · GW(p)
I think it might be helpful to re-examine the analogy to climate change. Today's proponents of increased focus on AI safety and alignment are not comparable to today's climate scientists; they're comparable to the climate scientists of the 1970s and 1980s. Since then, many trillions of dollars and millions of careers have been spent developing and scaling up the alternatives to business-as-usual that are finally starting to make global decarbonization feasible.
It's only in the past handful of years that unsubsidized wind and solar power became cost-competitive with fossil fuels in most of the world. Getting here required massive advances in semiconductor engineering, power electronics, and many other fields. We still aren't at a point where energy storage is cost-competitive in most places, but we've gotten better at grid management, which has been enough for now. Less than 15 years ago, many people in the relevant industries still seriously believed that every watt of wind and solar on the grid needed to be matched by a watt of dispatchable (usually natural gas) power plants to deal with intermittency. It's only in the last decade that we've gotten enough improvements in battery technology for electric cars to start becoming competitive. And there are still a huge number of advances needed, ones we can now predict and plan for because we can see what the likely solutions are and have examples starting to enter real-world use, but difficult problems nonetheless: mining more critical minerals; replacing coke in steelmaking; zero-carbon cement; synthetic hydrocarbon fuels and hydrogen production for industrial heat, aviation fuel, and shipping fuel; carbon capture and utilization/storage; better recycling methods; plastic alternatives; cleaner agricultural practices. It's a long list, and every item on it is multifaceted.
The first climate scientists to worry about global warming weren't experts in any of these other fields, and it wouldn't have made sense to expect them to be. They were working before the computer models, ice cores, increases in extreme weather events, and so on, so of course they weren't able to convince most people that society should invest so heavily in solving a new, invisible, (for most) hard-to-conceptualize problem that might someday harm their great-grandchildren in some hard-to-anticipate ways.
So the fact that today's experts who talk about AI safety and alignment don't have a ready-to-go solution, or even a readily actionable path to one, really shouldn't be surprising. If they did, they'd either deploy it, or sell it for a huge sum of money to someone who could then deploy it and also promote regulation of everyone else, in order to secure a huge economic advantage.
comment by Iknownothing · 2023-10-31T02:08:59.511Z · LW(p) · GW(p)
I disagree with this paragraph today: "A lot of what AI currently does that is visible to the general public seems like it could be replicated without AI"