Personal Agents: The First Step in Emergent AI Society
post by Andrey Seryakov (andrey-seryakov) · 2025-03-26
While writing the AI Society paper [LW · GW], I thought of AI agents as bots we send online for specific tasks, but now I believe our personal AIs will be constantly online. Just as we stay connected to the mobile network so people can reach us, our agents will stay online so other people's agents can reach them. Think of them as personal assistants who work 24/7, know a great deal about you, and handle routine tasks.
For example, suppose you want to host a dinner for your friends: some may have allergies, some hate certain foods, and so on. How would you do this now? You would message each of them and ask directly. What might happen in the future? You simply ask your agent, and it communicates with your friends' agents, producing a list of ingredients to avoid and even a list of suitable recipes from your cookbook. Done instantly, since agents are always online and ready to answer. Health issues, financial situation, music preferences, geolocation, social connections... such agents could become shadows of us, knowing everything.
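The dinner example can be sketched in code. This is a minimal illustration, not a real agent protocol: the `guest_agents` callables stand in for agent-to-agent requests, and `DietaryProfile` is a hypothetical structure for whatever a friend's agent is willing to share.

```python
from dataclasses import dataclass, field

@dataclass
class DietaryProfile:
    """What a friend's agent is willing to share about food constraints."""
    allergies: set = field(default_factory=set)
    dislikes: set = field(default_factory=set)

def plan_dinner(guest_agents, recipes):
    """Query each guest's agent and filter the host's cookbook.

    `guest_agents` maps a guest's name to a callable returning their
    DietaryProfile -- a stand-in for a real agent-to-agent request.
    `recipes` maps recipe names to the set of ingredients they use.
    """
    avoid = set()
    for name, ask in guest_agents.items():
        profile = ask()  # instant: the other agent is always online
        avoid |= profile.allergies | profile.dislikes
    safe = {r for r, ingredients in recipes.items() if not (ingredients & avoid)}
    return avoid, safe
```

The interesting part is not the set arithmetic but the assumption it encodes: every query resolves immediately because the counterpart agent is always reachable.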
These personal AI assistants create powerful new incentives for agents to cooperate:
1. Persistent Relationships: Unlike task-specific bots, personal assistants will form ongoing relationships with other agents, making the creation of trust networks much more plausible.
2. Information Exchange Protocols: These will already be available to them. As Wright argues in the "Here's Charlie!" paper, agents will need structured protocols from the start to share sensitive data securely.
3. Trust Verification Systems: Personal agents will already have developer-provided mechanisms to verify which sources are authoritative for specific types of information. You may want to share your health data with your doctor's agent, but not with your insurer's. Or you want to be sure your agent is buying you a ticket from a legitimate site, not a phishing one.
4. Multi-Level Negotiations: Naturally, there will be many situations where multiple users' interests must be balanced (as in the dinner example), and agents will need to engage in complex negotiations representing their users' preferences, constraints, and priorities. Every social structure requires that its members be able to find common ground.
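The trust-verification point above can be made concrete with a tiny sketch of a per-category sharing policy. Everything here is hypothetical: the category and role names, and the assumption that the requester's identity (doctor's agent vs. insurer's agent) has already been verified by some developer-provided attestation mechanism.

```python
# Hypothetical policy: which agent roles may receive which data categories.
SHARING_POLICY = {
    "health":   {"doctor"},              # doctor's agent may see health data
    "location": {"family", "doctor"},
    "finances": set(),                   # shared with no one by default
}

def may_share(category: str, requester_role: str) -> bool:
    """Return True if data in `category` may be sent to an agent acting in
    `requester_role`. Unknown categories default to sharing with nobody."""
    return requester_role in SHARING_POLICY.get(category, set())
```

The design choice worth noting is the default-deny rule: a category absent from the policy is shared with no one, which is the safe failure mode for a shadow that "knows everything."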
The technical side of such agents is already emerging. Sure, alignment (if your agent breaks a law, it is you who goes to jail) and personal data protection are huge issues. But I doubt they will stop this from happening. You can read about the challenges of implementing such agents in section 6 of Bonatti et al., "Towards Computer-Using Personal Agents".
The resource scarcity pressures I discussed in my original article [LW · GW] become even more relevant with personal assistants and provoke further collaboration:
1. Personal Hosting: Would you give OpenAI all your data and control over many aspects of your life? I would not. Therefore such agents will often be based on open-source models and run locally on users' machines, which places strong restrictions on the resources available to them.
2. Friends' Resource Sharing: Imagine you and your friends working together on a project. All of you have agents, and you collaborate, so why shouldn't your agents collaborate too? And why shouldn't they draw on a common resource pool to do it?
3. Coordination Networks: Multiple agents that know their owners' goals and personal context could negotiate mutually beneficial outcomes. Why not book a charter flight to Thailand together instead of each searching for an individual ticket?
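The charter-flight example boils down to a simple pooled decision, sketched below under invented assumptions: each agent reports only the cheapest individual fare it found, and the group charters only if an equal split beats every fare. A real negotiation would also weigh dates, comfort, and unequal willingness to pay.

```python
def coordinate_trip(individual_fares, charter_total):
    """Decide whether pooled agents should book a shared charter.

    `individual_fares` maps each user to the cheapest ticket their agent
    found; `charter_total` is the full price of the shared charter.
    Returns ("charter", cost_per_person) if splitting the charter is
    cheaper for everyone, else ("individual", individual_fares).
    """
    share = charter_total / len(individual_fares)
    if all(share < fare for fare in individual_fares.values()):
        return "charter", share
    return "individual", individual_fares
```

The rule here is deliberately Pareto-style: the pooled option is chosen only when every participant is strictly better off, which is the kind of common ground a negotiation among self-interested agents would have to find.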
Yes, the emergence of personal AI assistants could be a large step toward the self-regulating AI society I described previously. These agents would not just fulfill individual human needs but would naturally develop increasingly sophisticated cooperation networks, and we ourselves will provide all the instruments for it.