Timon Harz
December 12, 2024
AI can now create a replica of your personality
New research from Stanford and Google DeepMind shows that a two-hour interview is sufficient to accurately capture your values and preferences.

Imagine sitting down for a two-hour spoken interview with an AI model. A friendly voice guides you through a conversation covering everything from your childhood and formative memories to your career and views on immigration policy. Afterward, a virtual replica of you can reflect your values and preferences with striking accuracy.
This is now possible, according to new research from a team that includes Stanford and Google DeepMind researchers, published on arXiv and awaiting peer review.
Led by Joon Sung Park, a PhD student in computer science at Stanford, the team interviewed 1,000 participants with diverse backgrounds, including differences in age, gender, race, region, education, and political ideology. Participants were compensated up to $100. Based on these interviews, the team created virtual replicas of the individuals. To test how accurately the agents mirrored their human counterparts, participants took personality tests, social surveys, and logic games, completing them twice, two weeks apart. The agents then took the same tests, and their answers matched the participants' with 85% accuracy, judged against how consistently the participants reproduced their own answers across the two-week gap.
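To make that 85% figure concrete, here is a minimal sketch of a normalized-accuracy metric of the kind the article describes: the agent's raw agreement with a participant, divided by the participant's own test-retest consistency two weeks apart. The function and variable names are illustrative, not the study's published code.

```python
# Hypothetical sketch (not the paper's implementation): normalize
# agent-vs-human agreement by the human's own two-week self-consistency.

def agreement(a, b):
    """Fraction of survey items on which two answer lists match."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def normalized_accuracy(agent, human_week1, human_week2):
    raw = agreement(agent, human_week1)                 # agent vs. human
    consistency = agreement(human_week1, human_week2)   # human vs. self
    return raw / consistency

# Toy example: 10 survey answers
human_t1 = [1, 2, 3, 1, 2, 3, 1, 2, 3, 1]
human_t2 = [1, 2, 3, 1, 2, 3, 1, 2, 2, 1]  # 90% self-consistent
agent    = [1, 2, 3, 1, 2, 2, 1, 2, 2, 1]  # 80% raw agreement

print(round(normalized_accuracy(agent, human_t1, human_t2), 3))  # 0.889
```

The normalization matters: if a person only agrees with themselves 90% of the time on a retest, an agent cannot reasonably be held to a higher bar than that.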
“If you could have multiple ‘yous’ running around, making decisions exactly as you would—that, I think, is the future,” Park says.
In the paper, the replicas are referred to as simulation agents. The goal behind creating them is to help researchers in social sciences and other fields conduct studies that would be too costly, impractical, or unethical with real human participants. By developing AI models that replicate real human behavior, researchers can test everything from the effectiveness of social media interventions in combating misinformation to identifying the behaviors that lead to traffic jams.
These simulation agents differ slightly from the tool-based agents currently dominating AI work. Tool-based agents are designed to perform tasks for you rather than engage in conversation. For example, they might enter data, retrieve stored information, or eventually handle tasks like booking travel or scheduling appointments. Salesforce introduced its own tool-based agents in September, Anthropic followed in October, and OpenAI plans to release theirs in January, according to Bloomberg.
While these two types of agents differ, they share common ground. Research on simulation agents, like the ones in this study, is expected to enhance the development of AI agents in general, says John Horton, an associate professor of information technologies at MIT Sloan School of Management. He founded a company focused on AI-simulated participants for research.
“This paper demonstrates a hybrid approach: using real humans to create personas that can then be used programmatically in ways that would not be possible with real humans,” Horton told us in an email.
However, the research carries important caveats, not least the risks it highlights. Just as image-generation technology has made it easy to create harmful deepfakes of people without their consent, agent-generation technology raises concerns about how easily people could build online personas that say or authorize things the real person never intended.
The evaluation methods the team used to test how accurately the AI agents replicated their human counterparts were relatively basic. These included the General Social Survey, which gathers data on demographics, happiness, behaviors, and more, as well as assessments of the Big Five personality traits: openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism. While these tests are commonly used in social science research, they don't claim to capture all the unique aspects of an individual's personality. The AI agents also struggled to replicate human behavior in economic games like the "dictator game," which is designed to probe values such as fairness.
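For context, the dictator game mentioned above is simple to state: one player unilaterally decides how to split a fixed endowment with a passive recipient, and the share given away is commonly read as a rough signal of fairness. The sketch below is purely illustrative and is not the study's code.

```python
# Minimal sketch of the dictator game (illustrative only): one player
# splits a fixed endowment; the other player has no say.

def dictator_game(endowment, amount_given):
    """Return (dictator's payoff, recipient's payoff) for a given split."""
    if not 0 <= amount_given <= endowment:
        raise ValueError("amount_given must be between 0 and the endowment")
    return endowment - amount_given, amount_given

print(dictator_game(100, 0))   # purely self-interested: (100, 0)
print(dictator_game(100, 50))  # even split: (50, 50)
```

Standard economic theory predicts the self-interested split, yet real participants routinely give some share away, which is exactly the kind of value-laden behavior the agents found hard to reproduce.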
To create an AI agent that accurately replicates people, the researchers needed a way to distill our individuality into a form that AI models could understand. They opted for qualitative interviews, Park explains. He became convinced that interviews were the most effective method for learning about someone after participating in numerous podcasts following a 2023 paper he wrote on generative agents, which sparked significant interest in the field. “I’d do a two-hour podcast interview, and afterward, I’d feel like, wow, people know a lot about me now,” he says. “Two hours can be very powerful.”
These interviews can uncover personal details that surveys are less likely to capture. “Imagine someone who had cancer but was cured last year—that’s unique information that reveals a lot about how you might think or behave,” he says. Crafting survey questions to prompt such specific memories and responses would be challenging.
Interviews aren’t the only approach, though. Companies like Tavus, which create "digital twins" of users, can have their AI models analyze customer emails or other data. While it usually requires a large data set to replicate someone's personality this way, Tavus CEO Hassaan Raza notes that the new paper suggests a more efficient method.
“What’s really interesting here is that they show you might not need that much information,” Raza says, adding that his company plans to experiment with this approach. “What if you just talk to an AI interviewer for 30 minutes today, 30 minutes tomorrow? Then we could use that to construct a digital twin of you.”
Press contact
Timon Harz
oneboardhq@outlook.com