Announcing Encultured AI: Building a Video Game
post by Andrew_Critch, Nick Hay (nickjhay) · 2022-08-18T02:16:26.726Z · LW · GW · 26 comments
Also available on the EA Forum.
Preceded By: Encultured AI Pre-planning, Part 2: Providing a Service
If you've read to the end of our last post, you may have guessed: we're building a video game!
This is gonna be fun :)
Our homepage: https://encultured.ai/
Will Encultured save the world?
Is this business plan too good to be true? Can you actually save the world by making a video game?
Well, no. Encultured on its own will not be enough to make the whole world safe and happy forever, and we'd prefer not to be judged by that criterion. The amount of control over the world that's needed to fully pivot humanity from an unsafe path onto a safe one is, simply put, more control than we're aiming to have. And, that's pretty core to our culture. From our homepage:
Still, we don’t believe our company or products alone will make the difference between a positive future for humanity versus a negative one, and we’re not aiming to have that kind of power over the world. Rather, we’re aiming to take part in a global ecosystem of companies using AI to benefit humanity, by making our products, services, and scientific platform available to other institutions and researchers.
Our goal is to play a part in what will be or could be a prosperous civilization. And for us, that means building a successful video game that we can use in valuable ways to help the world in the future!
Fun is a pretty good target for us to optimize
You might ask: how are we going to optimize for making a fun game and helping the world at the same time? The short answer is that creating a game world in which lots of people are having fun in diverse and interesting ways in fact creates an amazing sandbox for play-testing AI alignment & cooperation. If an experimental new AI enters the game and ruins the fun for everyone — either by overtly wrecking in-game assets, subtly affecting the game culture in ways people don't like, or both — then we're in a good position to say that it probably shouldn't be deployed autonomously in the real world, either. In the long run, if we're as successful as we hope as a game company, we can start posing safety challenges to top AI labs of the form "Tell your AI to play this game in a way that humans end up endorsing."
Thus, we think the market incentive to grow our user base in ways they find fun is going to be highly aligned with our long-term goals. Along the way, we want our platform to enable humanity to learn as many valuable lessons as possible about human↔AI interaction, in a low-stakes game environment before having to learn those lessons the hard way in the real world.
Principles to exemplify
In preparation for growing as a game company, we’ve put a lot of thought into how to ensure our game has a positive rather than negative impact on the world, accounting for its scientific impact, its memetic impact, as well as the intrinsic moral value of the game as a positive experience for people.
Below are some guiding principles we’re planning to follow, not just for ourselves, but also to set an example for other game companies:
- Pursue: Fun! We’re putting a lot of thought into not only how our game can be fun, but also ensuring that the process of working at Encultured and building the game is itself fun and enjoyable. We think fun and playfulness are key for generating outcomes we want, including low-stakes high-information settings for interacting with AI systems.
- Maintain: opportunities to experiment. No matter how our product develops, we’re committed to maintaining its value as a platform for experiments, especially experiments that help humanity navigate the present and future development of AI technology.
- Avoid: teaching bad lessons. On the margin, we expect our game to incentivize cooperation over conflict, relative to other games. If players demand some amount of in-game violence, we might enable it, but only along with other features that reward people/groups for finding ways to avoid violence (like in the real world). We hope that our creativity in this regard can set a positive example for other game companies.
- Avoid: in-game suffering. Unlike other game developers, we are committed to ensuring that the entities in our game are not themselves susceptible to conscious suffering. Today’s narrow AI systems are not likely to be entities that suffer, but if that changes, we’ll be on the lookout to avoid it, and to promote industry-wide standards for minimizing the in-game suffering of algorithmic entities.
- Avoid: uncontrolled intelligence explosions. This should go without saying given our founding team, but: we expect to be much more careful than other companies to ensure that recursively self-improving intelligent agents don't form within our game and break out onto the internet! Again, with today's AI technology (especially as used in our video game as planned), this possibility is extremely unlikely; however, as AI progresses, we're going to exercise and promote industry-wide caution around the potential for intelligence explosions.
- Pursue: more fun :) We want our developers’ sense of creativity and our users’ sense of fun to drive our product development for the most part; otherwise, we’ll miss out on a huge number of connections with people who can teach us valuable lessons about how human↔AI interactions should work.
So, that’s it. Make a fun game, make sure it remains a healthy and tolerant place for experiments with AI safety and alignment, and be safe and ethical ourselves in the ways we want all game companies to be safe and ethical. We hope you’ll like it!
If we're very lucky and the global development of AI technology moves in a really safe and positive direction — e.g., if we end up with a well-functioning Comprehensive AI Services economy — maybe our game will even stick around as a long-lasting source of healthy entertainment. While it's beyond our ability to unilaterally prevent every disaster that could derail such a positive future, it's definitely our intention to help steer things in that direction.
Also, we’re hiring! Definitely reach out to our team via contact@encultured.ai if you have any questions or ideas to share, or if you might want to get involved :)
26 comments
Comments sorted by top scores.
comment by ShardPhoenix · 2022-08-18T05:23:53.334Z · LW(p) · GW(p)
How will you compete in the market while optimizing for various things other than what players want?
↑ comment by Jack R (Jack Ryan) · 2022-08-18T06:11:33.467Z · LW(p) · GW(p)
What are some of the "various things" you have in mind here? It seems possible to me that something like "AI alignment testing" is straightforwardly upstream of what players want, but maybe you were thinking of something else
comment by mako yass (MakoYass) · 2022-08-18T10:28:12.494Z · LW(p) · GW(p)
I'd guess that the main AI-exacerbating thing that the game industry does is provoke consumers to subsidize hardware development. I don't know if this is worth worrying about (have you weighed the numbers?), but do you plan on like, promoting low-spec art-styles to curb demand for increasing realism? :] I wonder if there's a tension between realism and user-customizability that you might be able to inflame (typically, very detailed artstyles are more expensive to work in and are harder to kitbash, but it's also possible that stronger hardware would simplify asset pipelines in some ways: raytracing could actually simplify a lot of lighting stuff, right?).
comment by Lone Pine (conor-sullivan) · 2022-08-18T03:21:30.390Z · LW(p) · GW(p)
It sounds like the game will be an MMO, is that correct?
comment by Chase Carter · 2024-08-30T18:59:00.105Z · LW(p) · GW(p)
@Andrew_Critch 2 years later, it looks like Encultured.ai has pivoted away from the game concept to the (unrelated?) healthcare domain (as mentioned here: Encultured AI | Blog). I'd be really interested in reading a postmortem of the game / agent-environment concept and whether you think it would be a worthwhile project for another team to pick up (either in broad concept or literally starting from Encultured's designs & prototype(s)).
comment by Donald Hobson (donald-hobson) · 2022-08-20T00:34:55.912Z · LW(p) · GW(p)
What will you do that makes your game better than the many other computer games out there?
I presume that random members of the public will be playing this game, not just a handful of experts.
Once the AI realizes it's in a game and being tested, you have basically no reason to expect its behaviour in the game to be correlated with its behaviour in reality. Given random people who are making basically no attempt to keep this secret, any human-level AI will realize this. Freely interacting with large numbers of gullible internet randos isn't the best from a safety standpoint either. I mean, a truly superintelligent AI will persuade MIRI to let it out of any box, but a top-human-level AI could more easily trick internet randos.
If you kept the controls much tighter, and only had a few AI experts interacting with it, you could possibly have an AI smart enough to be useful, and dumb enough to not realize it's in a box being tested. But this makes getting enough training data hard.
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2022-08-20T13:45:52.785Z · LW(p) · GW(p)
I agree strongly with your point about experts vs laypersons interacting with the AI, Donald. I personally am really excited about the potential of game / simulation-world interactions with varying numbers of AI agents and human experts. But I think this is a high-risk scenario if you are trying to evaluate a new cutting-edge model to test how dangerous it is. I think this project as stated by Encultured AI has value iff it carefully uses only well-vetted models under careful surveillance, and is made available for evaluating new models only in special high-security settings.
comment by MSRayne · 2022-08-18T12:55:51.638Z · LW(p) · GW(p)
Do any of you actually have game design or development experience, or at least e.g. years of daydreaming and failed prototypes behind you? I'd like to point out that making a video game, particularly an MMO (which it looks like you're thinking of), is very hard, and there are many problems that can ruin its success quite apart from the actual quality of the game.
You also haven't said anything about the actual design itself yet; my prior, from lots and lots of personal experience, is to expect people who are making a Great Game That Will Help Change The World!!! to have tons of Cool Ideas and no discipline and to try to turn it into a kitchen sink and lose all momentum. (I've been those people. Over and over again.)
So: what are the actual game mechanics? What do players actually do in your game? And what makes that action fun?
looks at your website
Oh dear. You don't even have a job listing for a game designer. And lots of talk about the AI you want to make, and as far as I can see, nothing about the game you want to make. Your goals are admirable; but my prediction as of right now is ~80% that you will be shocked at the difficulty of the undertaking and your own lack of readiness for it and give up within a year.
My suggestion is that you help other game studios that already exist build AI for their games in return for their agreement to allow you to use those contexts to explore the alignment-relevant questions, rather than attempting to make games yourself. I think that would be much, much more likely to succeed and require far less effort and struggle on your part.
Evidence that my opinion ought to matter to you: I've wanted to make games since I read Chris Crawford's "On Game Design" when I was 11, and I have had a PERFECT EPIC GAME in my head for years that constantly changes form and never settles down, and I've never managed to actually build even a small prototype without giving up in frustration and self hate. I've also been part of several "small, passionate" teams of people who'd never made a game before and were trying to do so, two of which actually had a professional software developer at the head, and all of them failed miserably within a month or two with nothing to show for it.
↑ comment by SarahNibs (GuySrinivasan) · 2022-08-18T15:37:59.443Z · LW(p) · GW(p)
Brandon has been a professional game developer since 1998, starting his career at Epic Games with engineering and design on Unreal Tournament and Unreal Engine 1.0. More recently, Brandon spent 12 years at Valve wearing (and inventing) hats. Many, many hats… Brandon has spent considerable amounts of time in development and leadership on Team Fortress 2 and Dota 2 where he wrote mountains of code and pioneered modern approaches to game development. Also an advisor for the Makers Fund family of companies, Brandon offers his expertise to game startups at all stages of growth.
↑ comment by mako yass (MakoYass) · 2022-08-19T01:22:07.041Z · LW(p) · GW(p)
I'm not seeing much game design here (sorry, I missed a word).
My experience as a designer, building out a genre of "peacewagers" (games that aren't zero-sum but also aren't strictly cooperative; the set of games where honest negotiation is possible), is that it actually is very likely that someone who's mostly worked in established genres would drastically underestimate the amount of design thought required to make a completely new kind of game work. And they're trying to make a new kind of game, so I wouldn't be surprised if they just fell over irrecoverably as soon as they strayed from the yellow brick road they've lived their whole lives within. When you're building a new genre... you have to figure out so much about what can be done there, what the challenge is, and what the appeal is, and how to elegantly communicate all of that to players and make them want it.
So... I've been working on semi-cooperative games for a few years now, and I might be able to help with that (I'm also familiar with Rust, and have built a basic game engine of my own for some unreleased stuff, in C++). But I don't get the impression from the site that they appreciate the difficulty of design, or that they'd appreciate me, so I haven't applied.
↑ comment by mako yass (MakoYass) · 2022-08-19T00:57:40.407Z · LW(p) · GW(p)
I see that this is getting quite a lot of agreement points. I would also like to add my agreement, this is probably a true quote. I agree that it's probably a true quote. Your claim that this was written somewhere is probably true.
↑ comment by MSRayne · 2022-08-18T18:12:12.875Z · LW(p) · GW(p)
I don't think I remember seeing any of this information on the front page of the website. If it's not there, it maybe ought to be?
↑ comment by Raemon · 2022-08-18T18:58:46.612Z · LW(p) · GW(p)
It's in the "team" section (you have to click on Brandon to get the info, but it does say Game Developer by default before doing any clicks)
↑ comment by MSRayne · 2022-08-18T19:01:48.840Z · LW(p) · GW(p)
Weird! How did I not notice that?
Well, I still think my pessimism was warranted given the epistemic state I had when I made the comment. I still don't have a clue what the game is about though, and I think that's a legitimate question... or did I miss an explanation of that on the website too? :/
↑ comment by Raemon · 2022-08-18T19:50:41.088Z · LW(p) · GW(p)
I think it's generally right to have a prior of pessimism about projects in this reference class, although I think you went overboard in assuming your initial read was right. (Critch also has several successful non-game projects under his belt, which I think is relevant.)
comment by ViktoriaMalyasova · 2022-10-04T22:46:49.583Z · LW(p) · GW(p)
How are your goals not met by existing cooperative games? E.g. Stardew Valley is a cooperative farming simulator, Satisfactory is about building a factory together. No violence or suffering there.
comment by joshc (joshua-clymer) · 2022-08-28T02:49:28.262Z · LW(p) · GW(p)
Your emphasis on how 'fun' your organization is is kind of off-putting (I noticed this on your website as well). I think it gives me the impression that you are not very serious about what you are doing. Maybe it's just me though.
↑ comment by Ben Pace (Benito) · 2022-08-28T02:51:31.497Z · LW(p) · GW(p)
(FWIW I think Critch is very serious about achieving his goals and saving the world, even while also being able to have a lot of fun while working on it.)
↑ comment by joshc (joshua-clymer) · 2022-08-28T03:03:47.181Z · LW(p) · GW(p)
I don't doubt this. I was more reporting on how the branding came across to me.
comment by Alex Flint (alexflint) · 2022-10-03T18:02:54.271Z · LW(p) · GW(p)
In order to be the kind of healthy participant in the world AI ecosystem that you are describing, I have the sense that the object-level product you build must be good for the world independent of the goodness of the experiments it enables or the datasets it generates. So I think you face the challenge of building a game that, in addition to being successful, is on its own good for the world.
↑ comment by the gears to ascension (lahwran) · 2022-10-04T04:42:21.816Z · LW(p) · GW(p)
needs to provide the same kind of useful moral-tradeoff-relevant training data for humans as for ai, I'd imagine
comment by Jack R (Jack Ryan) · 2022-08-30T05:13:09.129Z · LW(p) · GW(p)
Seems like this guy has already started trying to use GPT-3 in a videogame: GPT3 AI Game Prototype
comment by NickyP (Nicky) · 2022-08-23T23:33:29.580Z · LW(p) · GW(p)
Maybe you have seen it before, but Veloren looks like a project with people you should talk with. They are building an open source voxel MMO in Rust, and you might be able to collaborate with them. I think most people working on it are doing it as a side hobby project.
comment by philip_b (crabman) · 2022-08-20T22:14:50.840Z · LW(p) · GW(p)
Is this the beginning of Friendship is Optimal?
comment by Nathan Helm-Burger (nathan-helm-burger) · 2022-08-19T21:13:01.222Z · LW(p) · GW(p)
Are you considering starting with a text-based (or primarily text-only, with optional graphical elements) game? I think that could help a lot with the initial development hurdles and integration with LLMs. There's already a lot of open source code for multiplayer text-based games which could give you a jumping off point. Using that to start with would let you get right into the important part of game design and integration with LLMs. Also, there's some neat work that has already been done with single and multiplayer text-based games and LLMs.
As for how to make a text-based game appealing to modern gamers, I've been thinking that using the new art-generating models to auto-generate art that optionally accompanies the existing text descriptions of 'rooms' in some old text-based games would be a very cool revamp of an old genre. Especially if the players can use in-game currency to create their own text-based game objects which the AI would then illustrate for them. (I've played multiplayer online text-based games where in-game currency was used to purchase creation of 'rooms' and 'objects', so the code for this is already available and well play-tested.) I think a game combining text-based generative creation with strategic challenges and resource-acquisition / trade-economy elements sounds like an ideal milieu for building experimental mini-games for human & AI agents.
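To make that concrete, here's a minimal sketch of how currency-gated object creation plus model-generated room illustrations might fit together. It's not based on any actual Encultured design: the `illustrate` function is a hypothetical stand-in for whatever image-generation API would be used, and the cost and data structures are made up for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

CREATION_COST = 10  # hypothetical in-game currency price for authoring an object


@dataclass
class Player:
    name: str
    currency: int = 50


@dataclass
class Room:
    title: str
    description: str
    objects: List[str] = field(default_factory=list)
    image: Optional[str] = None


def illustrate(description: str) -> str:
    # Stand-in for a call to an image-generation model; here it just
    # returns a placeholder string instead of a real image URL/path.
    return f"<generated illustration for: {description[:48]}...>"


def create_object(player: Player, room: Room, obj_description: str) -> bool:
    """Spend in-game currency to add a player-authored object to a room,
    then regenerate the room's illustration so the new object appears."""
    if player.currency < CREATION_COST:
        return False
    player.currency -= CREATION_COST
    room.objects.append(obj_description)
    room.image = illustrate(room.description + " Contains: " + "; ".join(room.objects))
    return True


if __name__ == "__main__":
    tavern = Room("Tavern", "A low-beamed tavern lit by a single lantern.")
    alice = Player("alice")
    if create_object(alice, tavern, "a chessboard mid-game on the corner table"):
        print(tavern.image)
        print(f"{alice.name} has {alice.currency} coins left")
```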
Safety Concern: I feel compelled to point out that you'd have to monitor the AI agents' interactions with human players very closely and continuously because this has the potential to turn very dangerous very fast. An AI agent which strategically plans and takes actions including bargaining with humans is on a clear path to dangerous territory if left unchecked.
comment by Andrew McKnight (andrew-mcknight) · 2022-08-22T19:38:08.112Z · LW(p) · GW(p)
Wouldn't the kind of alignment you'd be able to test behaviorally in a game be unrelated to scalable alignment?