AI Capabilities vs. AI Products

post by Darmani · 2023-03-25T01:14:51.867Z · LW · GW · 1 comments

Contents

  Building AI Products is Unethical
  Building AI Products is Not Unethical

It's a great time to start an AI company. You can create a product in days that would have been unbelievable just two years ago. Thanks to AI, there will soon be a one-person billion-dollar company.

It's a terrible time to start an AI company. The singularity is fast approaching, and any talent that enters the field should instead go work in AI alignment or policy. Living a monastic lifestyle is better than being dead.

But wait! My plan is just to build software that helps people count calories by taking pictures of their meals. How does counting calories help bring about the end of the world?


In the past, any company worthy of being called an AI company would be training models to do things AI had never done before. But now it just means using the same pre-built AI models, programming them in English. Calling a calorie-counting app an "AI company" and worrying about its effect on AI timelines sounds a lot like worrying about Korean car manufacturers becoming dominant because an increasing number of Korean "car companies" deliver cookies.

We need to distinguish between AI capabilities companies, which actually advance the state of the art in AI, vs. AI product companies, which merely use it.

So, my two questions:

  1. To what extent do AI product companies increase AI capabilities?
  2. Is it unethical for someone who believes in a high probability of AI doom to found an AI product company?

Simple economics does tell us that, all else being equal, the more AI product companies there are, the better AI capabilities will be. But, at an individual level, should someone considering starting an AI product company shy away from it?

I don't know. Here are some arguments for and against:
 

Building AI Products is Unethical

Building AI Products is Not Unethical

1 comment


comment by Jay Bailey · 2023-03-25T22:32:36.067Z · LW(p) · GW(p)

I like this dichotomy. I've been saying for a bit that I don't think "companies that only commercialise existing models and don't do anything that pushes forward the frontier" are meaningfully increasing x-risk. This is a long and unwieldy statement - I prefer "AI product companies" as a shorthand.

For a concrete example, I think that working on AI capabilities as an upskilling method for alignment is a bad idea, but working on AI products as an upskilling method for alignment would be fine.