Should people build productizations of open source AI models?
post by lc · 2023-11-02T01:26:47.516Z · LW · GW
This is a question post.
In early 2021, before I had developed any opinions on AI safety, my startup cofounder and I were contacted by a group of people who wanted an alternative to AIDungeon, an AI CYOA platform. They were frustrated with the site's management, and especially its decision to start banning users for writing NSFW content. We were almost out of money at that point, so we built them a new frontend over an EleutherAI model, one that encrypted user content and didn't filter it. To our surprise, the site ended up doing better than anything else we had tried so far, at least enough to tide us over.
We worked on HoloAI for around six months after that. Then, between December 2021 and April 2022, my model of AI R&D went from "neutral tech research" to "the worst thing you could possibly be doing". While we were not AI researchers, and in retrospect HoloAI was unlikely to have grown massively even if we had kept at it, we decided to stop making improvements. The site is still online, but nowadays I just answer support requests for the people who already use it as a writing editor.
I still very much agree with our choice to switch to a field unrelated to AI, but nowadays I'm not actually sure the X-risk concerns applied to what we were doing. We never did any foundation model research, we weren't paying OpenAI for tokens or competing against them for customers, and HoloAI didn't release code that could be used with other models. On the other hand, it seems plausible that products like HoloAI have a downstream effect: by competing in the AI product category, you indirectly encourage others to make those innovations. We weren't paying OpenAI, but there were similar products on the market that did, and now they have to do something to stay ahead.
So what should be the social consensus about these sorts of utilities? Or for that matter, any software that uses proprietary models built by the research labs?
Part of my reason for asking this question here is that I expect many of the new companies in the next 5-10 years to be, to some degree, AI productizations. We landed on HoloAI mostly by chance, but putting some kind of LLM into various parts of the economy may well account for a large proportion of how the technology sector grows going forward. Further, lots of people in the tech industry will be working on these tools or supporting them, and "doing something else" may not be easy without deliberately maneuvering yourself around that growth.