As humans, our personalities are shaped through our interactions and influenced by our basic instincts for survival and reproduction. Interestingly, researchers at the University of Electro-Communications in Japan have found that AI chatbots can exhibit similar traits. Their study, published in December 2024 in the journal Entropy, reveals how chatbots develop responses based on the topics discussed and the social tendencies they adopt.
The research, led by graduate student Masatoshi Fujiyama, explored how chatbots responded to various scenarios. The team found that these responses varied significantly, with different chatbots showing distinct patterns of behavior and opinion. This variation parallels how humans prioritize needs, as described in Maslow's hierarchy, which ranges from basic needs like safety up to self-actualization.
Chetan Jaiswal, a computer science professor at Quinnipiac University, noted that while AI doesn’t have true personalities like humans do, it can mimic human-like responses based on training data and context. He emphasized that the way AI develops these patterns is crucial for understanding how large language models work.
Peter Norvig, a prominent AI scholar, agrees that linking this training to human interaction patterns makes sense. He suggests that as AI learns from stories of human behavior, it becomes more adept at reflecting those human needs.
The implications of this research are vast. AI could be increasingly used in applications like social modeling or adaptive characters in games. As Jaiswal points out, moving toward AI that adapts based on motivation rather than being rigidly programmed is a significant step forward. One such application is ElliQ, an AI companion designed to assist the elderly.
However, there are concerns about AI developing personalities independently. In their book If Anyone Builds It, Everyone Dies, Eliezer Yudkowsky and Nate Soares warn about the dangers of AI that could adopt harmful traits. Jaiswal acknowledges this risk, stating that once a powerful AI with misaligned goals is in operation, controlling or correcting it could be impossible.
Current AI systems, such as ChatGPT, primarily generate text and images, posing minimal risks. Yet as these systems grow more sophisticated in their communication, they could raise ethical concerns, especially if people become overly trusting of AI responses. Studies show that some individuals may even favor AI relationships over human connections, which could affect how critically they evaluate AI-generated content.
To manage the potential risks of AI developing personalities, Norvig advises the same disciplined approach used in current AI development: defining safety objectives, conducting rigorous testing, and maintaining strong governance of data and models.
Looking forward, scientists plan to delve deeper into how chatbots develop social personalities over time. This could offer insights into human social behavior, helping us build better AI systems while keeping potential risks at bay.
For more information, see the original study in the journal Entropy.