A discovery that matters

Study: AI Chatbots Choose Friends Just Like Humans Do

Singularity Hub — Arizona, United States

Why it matters: As AI becomes more integrated into our daily lives, understanding how it forms social connections can help ensure it interacts with humans in a more natural and beneficial way.

GPT-4, Claude, and Llama sought out popular peers, connected with others via existing friends, and gravitated towards those similar to them. As AI works its way into our lives, how it behaves socially is becoming a pressing question. A new study suggests AI models build social networks in much the same way as humans. Tech companies are enamored with the idea that agents—autonomous bots powered by large language models—will soon work alongside humans as digital assistants in everyday life.

But for that to happen, these agents will need to navigate humanity’s complex social structures. This prospect prompted researchers at Arizona State University to investigate how AI systems might approach the delicate task of social networking. In a recent paper in PNAS Nexus, the team reports that models such as GPT-4, Claude, and Llama seem to behave like humans by seeking out already popular peers, connecting with others via existing friends, and gravitating towards those similar to them.

“We find that [large language models] not only mimic these principles but do so with a degree of sophistication that closely aligns with human behaviors,” the authors write. To investigate how AI might form social structures, the researchers assigned AI models a series of controlled tasks where they were given information about a network of hypothetical individuals and asked to decide who to connect to.

The team designed the experiments to investigate the extent to which models would replicate three key tendencies in human networking behavior. The first tendency is known as preferential attachment, where individuals link up with already well-connected people, creating a kind of “rich get richer” dynamic.

The second is triadic closure, in which individuals are more likely to connect with friends of friends. And the final behavior is homophily, or the tendency to connect with others who share similar attributes. The team found the models mirrored all of these very human tendencies in their experiments, so they decided to test the algorithms on more realistic problems.
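The three tendencies can be thought of as scoring rules a decision maker applies to candidate connections. The sketch below is a minimal illustration of that idea, not the study's actual protocol: the function names, weights, and data layout are assumptions made for the example.

```python
def choose_connection(candidates, me, w_pop=1.0, w_mutual=1.0, w_sim=1.0):
    """Pick the candidate with the highest combined score under the three
    tendencies the study tests (weights are illustrative assumptions)."""
    def score(c):
        popularity = len(c["friends"])              # preferential attachment: favor well-connected peers
        mutual = len(c["friends"] & me["friends"])  # triadic closure: favor friends of friends
        similarity = len(c["attrs"] & me["attrs"])  # homophily: favor shared attributes
        return w_pop * popularity + w_mutual * mutual + w_sim * similarity
    return max(candidates, key=score)

# Example: a hypothetical chooser with two friends and one attribute
me = {"friends": {"ana", "ben"}, "attrs": {"cs_major"}}
stranger = {"friends": {"x", "y", "z"}, "attrs": set()}          # popular but unrelated
classmate = {"friends": {"ana", "ben"}, "attrs": {"cs_major"}}   # mutual friends + similar
pick = choose_connection([stranger, classmate], me)
```

With equal weights, the classmate wins because triadic closure and homophily both contribute alongside popularity; raising `w_pop` enough would flip the choice toward the stranger, mirroring how the dominant tendency varied across the study's settings.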

They borrowed datasets that captured three different kinds of real-world social networks—groups of friends at college, nationwide phone-call data, and internal company data that mapped out communication history between different employees. They then fed the models various details about individuals within these networks and got them to reconstruct the connections step by step. Across all three networks, the models replicated the kind of decision making seen in humans. The most dominant effect tended to be homophily, though the researchers reported that in the company communication settings they saw what they called “career-advancement dynamics”—with lower-level employees consistently preferring to connect to higher-status managers.

Finally, the team decided to compare AI’s decisions to humans directly, enlisting more than 200 participants and giving them the same task as the machines. Both had to pick which individuals to connect to in a network under two different contexts—forming friendships at college and making professional connections at work.

They found both humans and AI prioritized connecting with people similar to them in the friendship setting and more popular people in the professional setting. The researchers say the high level of consistency between AI and human decision making could make these models useful for simulating human social dynamics.

This could be helpful in social science research but also, more practically, for things like testing how people might respond to new regulations or how changes to moderation rules might reshape social networks. However, they also note this means agents could reinforce some less desirable human tendencies as well, such as the inclination to create echo chambers, information silos, and rigid social hierarchies.

In fact, they found that while there were some outliers in the human groups, the models were more consistent in their decision making. That suggests that introducing them to real social networks could reduce the overall diversity of behavior, reinforcing any structural biases in those networks.

Nonetheless, it seems future human-machine social networks may end up looking more familiar than one might expect.

Brightcast Impact Score (BIS)

70/100 — Hopeful

This article highlights a positive and constructive study that examines how AI chatbots build social networks in a way that closely aligns with human behavior. The study suggests that AI models like GPT-4, Claude, and Llama exhibit tendencies like preferential attachment, triadic closure, and homophily, which are seen in human social networking. This is an encouraging finding that suggests AI can navigate complex social structures in a way that mirrors human social dynamics, which could be beneficial as AI becomes more integrated into our lives.

Hope Impact: 20/33

Emotional uplift and inspirational potential

Reach Scale: 25/33

Potential audience impact and shareability

Verification: 25/33

Source credibility and content accuracy

Encouraging positive news
