AI Capabilities vs. AI Products
post by Darmani · 2023-03-25T01:14:51.867Z
It's a great time to start an AI company. You can create a product in days that would have been unbelievable just two years ago. Thanks to AI, there will soon be a one-person billion-dollar company.
It's a terrible time to start an AI company. The singularity is fast approaching, and any talent that enters the field should instead go work in AI alignment or policy. Living a monastic lifestyle is better than being dead.
But wait! My plan is just to build software that helps people count calories by taking pictures of their meals. How does counting calories help bring about the end of the world?
In the past, any company worthy of being called an AI company would be training models to do things AI had never done before. But now it just means using the same pre-built AI models as everyone else, programming them in English. Calling a calorie-counting app an "AI company" and worrying about its effect on AI timelines sounds a lot like worrying about Korean car manufacturers becoming dominant because an increasing number of Korean "car companies" deliver cookies.
We need to distinguish between AI capabilities companies, which actually advance the state of the art in AI, and AI product companies, which merely use it.
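To make the "AI product company" side of that distinction concrete, here is a minimal sketch of what "programming pre-built models in English" might look like for the calorie-counting example. It assumes the openai Python package (the v0.27-era ChatCompletion API) and an off-the-shelf captioning model from Hugging Face; the model choices, prompt, and file name are illustrative, not a real product.

```python
# A hypothetical "AI product": no model training, just English
# instructions to pre-built models. Assumes the openai package
# (v0.27-era API) and the transformers library; all model choices
# and prompts here are illustrative.
import os

import openai
from transformers import pipeline

openai.api_key = os.environ["OPENAI_API_KEY"]

# Step 1: describe the meal photo with a pre-trained captioning model.
captioner = pipeline("image-to-text",
                     model="Salesforce/blip-image-captioning-base")

def estimate_calories(image_path: str) -> str:
    description = captioner(image_path)[0]["generated_text"]

    # Step 2: "program in English" -- ask a pre-built LLM to do the
    # domain work. The entire product logic lives in this prompt.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"A photo of a meal shows: {description}. "
                       "Estimate the total calories and briefly explain.",
        }],
    )
    return response.choices[0].message.content

print(estimate_calories("lunch.jpg"))
```

Nothing in this sketch trains a model or pushes the frontier; all of the "product" is in the English prompt.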
So, my two questions:
- To what extent do AI product companies increase AI capabilities?
- Is it unethical for someone who believes in a high probability of AI doom to found an AI product company?
Simple economics does tell us that, all else being equal, the more AI product companies there are, the faster AI capabilities will advance. But, at an individual level, should someone considering starting an AI product company shy away from it?
I don't know. Here are some arguments for and against:
Building AI Products is Unethical
- Argument: You are increasing AI capabilities by paying money to AI vendors, who will invest it in further capabilities research.
Counterargument: Many believe LLMs will become commoditized. Then there will be so many vendors that none will make an economic profit. Competitive leaked or open-source LLMs are also likely to appear soon; you will be able to run them yourself, with very little value flowing to AI capabilities creators.
- Argument: The barrier to entry is so low that every vertical will face stiff competition. One dimension people will compete on is the power of their foundation models. Even if you aren't paying OpenAI to create a GPT-6 that controls robots, you'll be pressuring your competitors to do so.
Counterargument: If you don't compete and leave your would-be competitor a monopoly, would that be any different? Monopolies can be great at investing in technology; just look at Bell Labs.
- Argument: There is a displacement effect: if you go and build AI calorie counting, then maybe someone who would have built AI calorie counting instead builds AI fitness coaching, someone who would have built AI fitness coaching instead builds AI middle management, and someone who would have built AI middle management goes off and takes a Senior Scientist role at OpenAI.
Counterargument: That kind of dynamic takes years to play out. Word doesn't spread that quickly, and people don't change their plans overnight. More likely, that person builds a calorie-counting app anyway, loses to you after six months, and then goes back to paying the bills as an ordinary software engineer. Or as an AI alignment researcher.
Counter-counterargument: Base rates suggest that, if someone loses their first option of what to work on, AI capabilities will be a more common second choice than AI alignment. And as long as AI capabilities work pays more, this will be especially true among the entrepreneurial.
Building AI Products is Not Unethical
- Argument: It's important to have people who care about AI safety within the new AI power structure, or at least as influential customers of those who are. Consider the example of Oskar Schindler, who used his position as an arms manufacturer and Nazi Party member to save over 1,000 Jews.
Counterargument: Power and money are addictive. OpenAI, DeepMind, and Anthropic were all founded by people who care deeply about AI safety. Look how that's turned out.
- Argument: Starting an AI company is the best way to become rich in March 2023. A rich person who cares about AI alignment does more good for the world than a poor person who cares about it, no matter how talented.
Counterargument: There may not be time to become rich and then fund AI safety work before the world ends.
- Argument: My AI product helps people become healthier, happier, and more productive. It will help AI safety researchers speed up their progress.
Counterargument: Are you really going to only sell your product to people on the right side of the AI race? Will your investors let you?
1 comment
comment by Jay Bailey · 2023-03-25T22:32:36.067Z
I like this dichotomy. I've been saying for a while that I don't think "companies that only commercialise existing models and don't do anything that pushes forward the frontier" are meaningfully increasing x-risk. This is a long and unwieldy statement - I prefer "AI product companies" as a shorthand.
For a concrete example, I think that working on AI capabilities as an upskilling method for alignment is a bad idea, but working on AI products as an upskilling method for alignment would be fine.