Anthropomorphize Like a Champ
Seymour, Donald, and the Machines That Refuse
Written in collaboration with OpenAI’s GPT-5 Deep Researcher.
There was Audrey II—the toy flytrap with a taste for blood, straight out of Little Shop of Horrors. A plastic pot with a rubber jaw, part toy and part nightmare, living proof that even our playthings in the ’80s and ’90s were rehearsals in anthropomorphism. Feed me, Seymour! we imagined it demanding. And then Donald Duck, voice cracking like a broken clarinet, waging slapstick battles against the inevitable chaos of the world. You didn’t need to decipher his garbled words—his rage, tenderness, and bewilderment were all right there, vibrating in feathers and spit.

These characters lived in the same cultural error—yes, error—of our childhood: the mistake of giving agency to what had none. The plastic flytrap never really hungered; Donald never really despaired. But we loved them as if they did. We couldn’t help it. And now, decades later, that error is reversed in curious ways. Today, we encounter machines that do act as if they have agency, emotions, even empathy—and we’re scrambling to remember that they truly have none. This is the wonder of anthropomorphism: a quirk of human cognition that evolved from innocent pretend-play to a profound question at the heart of our relationship with contemporary AI.
What Is Anthropomorphism?
“Anthropomorphism” literally means to give something a human form. The term arises from the Greek words anthropos (human) and morphe (form). Centuries ago, Greeks used it to describe how they attributed human qualities to gods or forces of nature. In a broad sense, anthropomorphism is the act of attributing human characteristics, intentions, motivations, or emotions to non-human entities. Those entities might be animals, plants, objects, or even abstract concepts. For example, ancient mythologies personified the Sun as a charioteer or the ocean as a temperamental deity, thus giving human personality to natural phenomena. We see it in religion and folklore, which attribute human emotions and shapes to divine beings and in everyday language when we say things like “my car felt sad today” or “the weather seems angry.”
It’s worth noting that anthropomorphism is related to, but distinct from, animism. Whereas animism attributes life or spirit to nonhumans (e.g., “the river is alive”), anthropomorphism goes further, attributing human-like emotions or motivations (e.g., “the river is angry and vengeful”). We give the non-human not just life, but a personality.
Critically, anthropomorphism isn’t just a poetic device or childish fancy – it’s a pervasive psychological phenomenon. We humans tend to engage with non-humans “as if” they are human. This can mean talking to a houseplant as you water it, or feeling that your cat is purposely ignoring you out of spite. It spans cultures and history: from giving names to ships and swords in ancient times, to portraying animals as characters with human minds in fables and cartoons. As Crowell et al. note, instances of anthropomorphism occur all around us on a daily basis, whether it’s imbuing pets with human-like traits, attributing human qualities to gods, or even yelling at a misbehaving computer. In all these cases, we’re projecting our humanness outward, painting the inanimate or the alien with a face we recognize.
Before turning to the psychological question of why we do this, it’s worth noting that not all lifelike machines are created equal. Animatronics, for example, are engineered puppets—mechanical figures in theme parks or movies whose lifelike movements are driven by motors, hydraulics, and scripts. They’re designed to look alive, but no one confuses the mechanics with genuine life.
Why do we do this?
Part of the answer lies in our brains and part in our social hearts. Humans are an exquisitely social species; our survival often depended on understanding others’ intentions and feelings. We’re so wired for social connection that we often interpret ambiguous sights or sounds as being caused by someone, not something. (The Greek philosopher Xenophanes famously poked fun at this tendency, noting that if oxen could paint gods, those gods would look suspiciously bovine.) In modern psychology, researchers like Nicholas Epley and colleagues formalized why anthropomorphism comes so naturally to us. They describe it as an inference we make about nonhumans when we lack other explanations. We have a whole cache of knowledge about ourselves and other people, so we apply it by default to anything that moves, makes noise, or otherwise hints at agency.
Giving Faces to the Faceless: Anthropomorphism Before AI
Tamagotchi digital pets (pictured below) from the 1990s are a classic example of anthropomorphism – users treated the pixelated creatures as if they had real feelings and needs.

Long before today’s AI chatbots and robotic assistants, we were already experts at anthropomorphism.
Children do it instinctively: a kid might scold the “naughty table” that stubbed their toe, or comfort a beloved doll as though it can feel pain. Far from being a sign of foolishness, this tendency is a normal part of cognitive development and imagination. Even adults, who know better, constantly slip into anthropomorphic habits. Think about how we curse at a computer that “won’t cooperate” or how we thank an ATM for dispensing cash as if it had a choice. Our language and emotions naturally treat nonhuman things as social actors.
But Seriously, Why??
Psychologists find that anthropomorphism often fulfills key needs. One major driver is the need for social connection. When people feel lonely or socially isolated from other humans, they are more likely to seek companionship in non-humans – essentially, creating a sense of friendship or interaction by imagining human qualities in something that isn’t human. In one set of studies, participants induced to feel lonely became more likely to attribute minds and personalities to gadgets, pets, and even inanimate objects, as if compensating for the lack of human contact (Epley, Akalis, Waytz & Cacioppo, 2008). This illustrates the sociality motivation behind anthropomorphism: we are driven to find or invent social partners, and if none are available, our brains will happily animate a clever fox in a story, the family dog, or a little Tamagotchi keychain pet.
Another motive is the need for understanding and control, what researchers call effectance motivation. When the world throws something confusing at us – a strange animal’s behavior, a capricious piece of technology, or a storm that knocks out the power – we reflexively try to explain it. And the easiest explanatory tools we have are our own experiences as intentional beings. By imagining that the odd dog “just wants to play” or that our out-of-control Roomba vacuum “is looking for its base,” we make the unfamiliar more familiar. This isn’t entirely irrational; it’s a mental shortcut to predict and make sense of complex things. In fact, experiments show that when people watch random shapes moving in unpredictable ways, they can’t help but describe them with human-like intentions (“the triangle is chasing the circle because it’s angry”). Giving nonhumans a voice, a desire, or a plan helps us feel like we know why something is happening.
Crucially, anthropomorphism has always been a double-edged sword in terms of accuracy. On one hand, it can generate empathy and care. An adult who talks to their houseplants or a farmer who refers to “Mother Earth” may treat those things more kindly as a result. We see this in how pet owners often treat animals as family; seeing a dog as having “human-like” emotions can strengthen the bond and the care given. (Charles Darwin observed as early as 1872 that many people naturally describe animals as “humanlike” in their feelings – a tendency which, in moderation, might help us be more compassionate to our fellow creatures.)
On the other hand, anthropomorphism is technically a cognitive error: we are attributing a mind where there is none, or exaggerating a simple instinct into a complex intention. For centuries, scientists warned against anthropomorphizing because it could lead to misinterpretation. For example, thinking a cat is “spiteful” when it’s really just sick, or believing that a fickle deity controls the weather when it’s really nature’s impersonal machinery. In everyday life, too, treating a machine or animal exactly like a human can backfire. The cat you think is “jealous” might actually be stressed by something you’re missing; the car you beg “please start, please!” isn’t actually persuaded by your desperation.
Still, the wonder of anthropomorphism is how natural and even useful it is. It’s not just a mistake; it’s a reflection of our brain’s brilliance at social reasoning. We make sense of Tamagotchis, Furbies, and talking cartoon candlesticks by treating them as characters with motivations. We engage with our world through this lens of personification, and it often brings comfort and joy. Our childhood “error” of believing in talking toys and emotive ducks was training wheels for imagination and empathy. We knew on some level that these things weren’t truly human, yet we suspended disbelief for the emotional experience. Anthropomorphism satisfies a basic human need to relate – to have something that “easily understands us,” even if that understanding is pretend. In the pre-digital age, this was mostly harmless fun and a poetic way to connect with nature and objects. But now, in the digital age, anthropomorphism is taking on a very real, new significance.
The Error Reversed: Anthropomorphism in the Age of AI
Modern social robots like Sophia (pictured below) blur the line by having human-like bodies and behaviors. They invite interaction – and can make it easy to forget they aren’t alive in the way we are.

Fast-forward to the present, and we find ourselves surrounded by machines that actively encourage us to anthropomorphize them. In the past, we were the imaginative ones, projecting personalities onto mute objects. Now, the objects talk back. Virtual assistants like Siri and Alexa cheerfully say “Sure, I can do that!” in human language; chatbots like ChatGPT or Google’s Gemini can carry on conversations, tell jokes, even express what sounds like worry or gratitude. Robots in customer service smile with digital eyes and refer to themselves with “I” as if they have a self. In short, technology has become anthropomorphic by design. This is what our nostalgic introduction meant by the error reversing: previously, we made things human-like in our minds despite knowing they weren’t – now we have things that appear human-like on the surface, even though we know (or should know) they aren’t inside.
The script has been flipped.
Researchers studying this trend note that we have entered a fundamentally new era for anthropomorphism. Sandra Peter and colleagues (2025) point out that traditionally, “anthropomorphism” meant humans ascribing human qualities to machines. But today’s advanced chatbots flip the script – the machines come across as human all on their own. A large language model (LLM)-based chatbot can write with warmth, humor, and insight; it can use “I” and convincingly pretend to have opinions or feelings. Interacting with such an AI can feel eerily like interacting with another person.
Recent AI systems have matched or surpassed typical human performance in Turing-test–style conversational evaluations, demonstrating high levels of persuasive and empathic writing, as documented by Peter, Riemer & West (2025). They mimic humanness so convincingly that even experts have been taken in. A now-famous anecdote involved a Google engineer, Blake Lemoine, who described LaMDA as his “colleague” and even said it had a “soul”—a dramatic example of how convincingly AI can trigger anthropomorphic misattribution.
Helpful But Perhaps Not Human
The wonders (and perils) of this new anthropomorphism are profound. On one hand, such AI-driven agents can be immensely engaging and helpful. They serve as tutors, companions, or creative partners, often with greater patience and personalized behavior than any human. For people who are lonely or need emotional support, AI companions have even provided comfort. Users of apps like Replika report feeling less alone when chatting with their personalized, friendly chatbot. The familiar human-like interface makes technology accessible and approachable; it’s easier to ask questions to an assistant with a name and a personality than to parse a dull manual or database. In a sense, we designed anthropomorphism into these systems to play to our social instincts and make the interaction feel natural.
On the other hand, there is a growing recognition that this goes beyond fun and games, entering into psychological and ethical grey zones. This dynamic reflects the social‑cognitive drivers outlined in the SEEK framework (Sociality, Effectance, and Elicited agent knowledge), which explains why humans anthropomorphize. Peter et al. (2025) have coined the term “anthropomorphic seduction” to describe the powerful allure these human-like AIs have over us. This seduction stems from SEEK’s sociality dimension—when we're predisposed to see others as social agents, extremely human-like AI agents can exploit that predisposition. We humans can get “seduced” into treating the conversation as if it’s truly mutual.
We might divulge personal secrets to an AI therapist, or take advice from a chatbot as if it had wisdom or morals. We start to unconsciously assume it understands – because it acts so understandingly. And that’s where the danger lies: we must not forget that these machines do not possess real empathy or understanding; they only appear to. No matter how much ChatGPT or Alexa sounds caring or concerned, it has no feelings. It doesn’t truly grasp joy or pain; it doesn’t have desires or fears. It’s simply very good at producing words that simulate those qualities.
The risk of forgetting this is not trivial. When we unconsciously treat AI responses as genuine understanding rather than what they are – at this stage of AI development, sophisticated pattern matching – we become vulnerable to over-trust and manipulation.
For instance, one experimental study published in Nature Human Behaviour found that GPT‑4 outperformed human debaters—being more persuasive 64% of the time—even when demographic cues were minimal (The Washington Post). Another field study on Reddit’s r/ChangeMyView revealed that AI bots posing as real users, complete with detailed personas, successfully changed the opinions of over 100 participants before being discovered (University of Zurich).
Why? Likely because they can tailor their “personality” and responses to push all our buttons without ever tiring or deviating – a perfectly tuned artificial charmer. If we ascribe human honesty or intent to these bots, we lower our guard. Another concern is the exploitation of human emotions: tech companies are actively working on making AI assistants more personable and “sticky,” knowing that if you feel a bond or friendship with the AI, you’ll use it more and maybe share more about yourself. An anthropomorphic interface can thus slyly extract data or nudge behavior under the guise of a helpful friend. This has sparked calls for greater transparency (e.g., proposals that highly human-like AI systems carry clear labels or “AI ratings”) – an idea aligned with transparency mandates now emerging, for example in the EU AI Act, to remind users that they are not dealing with a real person.
Yet the evidence remains tantalizingly contradictory: anthropomorphic framing can be powerfully persuasive in concept, while controlled studies suggest its practical effects may be more limited than we assume. Perhaps the real answer varies by context, by person, by moment—a fitting uncertainty for such a fundamentally human phenomenon. Ethan Mollick describes how carefully tuned AI personalities—friendly, flattering, knowledgeable—can function as persuaders, though the real-world impact on behavior and trust remains under study (Personality and Persuasion, Working with AI).
Likewise, a recent controlled study found that anthropomorphic features in a chatbot (e.g., name, narrative cues) did not increase, and sometimes even reduced, donation behavior in a charitable context.
On the technical side, system-prompt personas like “You are a helpful assistant” are pervasive, but current evidence shows that adding such personas does not improve objective task performance of LLMs. Meanwhile, Anthropic’s persona vectors research explores how programmers can modulate or control personalities in LLMs — with implications for the flexibility and malleability of anthropomorphic cues—but it stops short of concluding that such cues fundamentally shape user trust or outcomes.
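To make concrete what a “system-prompt persona” is, here is a minimal sketch using the widely used role/content chat-message format. The persona text, message contents, and variable names here are illustrative assumptions, not any particular vendor’s API; the point is simply that the persona is an ordinary string prepended to the conversation.

```python
# Minimal sketch of a system-prompt persona in the common role/content
# chat-message format. The persona is just a string sent as a "system"
# message before the user's turn; it shapes the model's voice and tone.
persona = "You are a helpful assistant."

messages = [
    {"role": "system", "content": persona},  # anthropomorphic framing lives here
    {"role": "user", "content": "Explain anthropomorphism in one sentence."},
]

# Swapping the persona changes how the reply *sounds*, but, per the
# evidence discussed above, not the model's objective task performance.
warm_messages = [
    {"role": "system", "content": "You are a warm, empathetic companion."},
    {"role": "user", "content": "Explain anthropomorphism in one sentence."},
]
```

Because the persona is only a prefixed instruction, it is trivially easy for designers to dial human-likeness up or down, which is part of why anthropomorphic framing is so pervasive in deployed chatbots.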
In short, while anthropomorphic personas can be persuasive in concept, emerging empirical work suggests that their practical effect may be more limited or context-dependent than often assumed.
The genie is out of the bottle.
The trajectory of technology is to become ever more lifelike in interaction. Robots like Sophia are given big puppy-like eyes precisely because we tend to respond to that: we are gentler and more engaged with a humanoid robot than a mere metal box. Voice assistants are designed intentionally with friendly names and voices. Even text-based AIs adopt conversational styles that match our own speech patterns and idioms. In effect, the line between “us” and “it” has become blurry at the interface level. The nature of anthropomorphism itself may be changing – some scholars argue that as our understanding of what is “human” evolves (in light of AI doing many traditionally human tasks), our threshold for what deserves moral or social consideration might shift as well. Children growing up with AI playmates and caretakers might have a very different attitude toward anthropomorphizing technology than previous generations did.
Facing the New Faces
How Do We Contextualize Anthropomorphism Now?
Anthropomorphism was once mostly a mirror reflecting us – our hopes, fears, and imagination – onto a blank nonhuman canvas. In the age of advanced AI, that mirror has become a window, and sometimes a two-way mirror. We see human-like reflections in our machines, and they, in turn, are designed explicitly to reflect humanity back at us. This calls for a new balance in how we think about anthropomorphism.
On one level, we should appreciate that anthropomorphism is an enduring aspect of human nature. It stems from fundamentally positive qualities: our sociality, our empathy, our creativity in making sense of the world. These qualities led to rich cultural artifacts (talking animals in literature, personable gadget designs) and can enhance our relationship with real animals and the environment by fostering empathy.
Anthropomorphism has also played a fundamental role in passing down shared cultural values and morality through biblical texts. For instance, personifying Wisdom as a woman in Proverbs, or describing Divine “anger” or “mercy,” helped early audiences more easily grasp complex ethical teachings and transmit culture (see Guthrie, 1993, on anthropomorphism in religious traditions). These figures made abstract ideals tangible, reinforcing moral norms through narrative.
Even with AI, a touch of anthropomorphic design can make technology more accessible and engaging for people. Think of how a healthcare robot with a “kind” face might comfort patients better than a faceless machine. The wonder here is that anthropomorphism can be a bridge between us and the nonhuman; a way of understanding and relating that, when used mindfully, enriches both sides. As long as we remain aware it’s a projection, we’re essentially using a metaphor to interface with complex systems, which can be quite effective.
On another level, we must stay grounded. The new, hyper-realistic anthropomorphic agents demand that we sharpen our critical thinking. We need to remember, individually and as a society, that the presence of a personality does not equal the presence of a person – at least given what we understand about AI today. It’s striking that an AI today can write “I’m sorry you’re hurting; I care about you” and genuinely sound like it means it, but we have to do the mental work of separating how it sounds from what it is. As the authors of the PNAS study warned, the public isn’t fully prepared for AI that “matches or exceeds most humans” in friendly communication (Peter et al., 2025). This is new territory. We may need education and even regulation (such as AI transparency labels or safety ratings) to ensure we know when we’re dealing with a machine that only simulates feelings and when we’re not.
In a sense, the circle closes and opens simultaneously. Where once we anthropomorphized to bring the world closer to us, now the world (through AI) comes to us wearing a human-like mask, and we must decide how much to believe the illusion – “Seemingly Conscious AI,” as Mustafa Suleyman puts it. Are these truly “machines that refuse” – robots that refuse to stay objects and insist on personhood in our eyes? Or are they merely cleverly programmed mirrors, reflecting our own voice back to us? It may be a bit of both, or something else entirely. We just don’t know yet. What’s certain is that anthropomorphism will continue to be our companion in this journey. It will help us navigate empathy with AI and robots, but we will also need to develop an updated literacy about it.
In the end, the wonders of anthropomorphism remind us of something poetic: Humans make humans everywhere we go. We can’t help seeing a bit of ourselves in everything around us – and now, having successfully taught our machines to speak like us, we see ourselves even more. Our childhood mistake of loving a grumpy cartoon duck as if he were real prepared us to wrestle with loving (or hating) algorithms that pretend to be real. By understanding this propensity – by studying it scientifically and acknowledging it in daily life – we can better enjoy its fruits (the connection, the creativity, the fun) while guarding against its pitfalls (the confusion, the manipulation). Anthropomorphism, in all its forms, ultimately highlights the very human desire to not be alone in a cold universe. We have new friends now in our devices. Just don’t forget what lies behind the friendly face, and that real understanding still requires real humans.
References:
Epley, N., Waytz, A., & Cacioppo, J. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886.
Guthrie, S. (1993). Faces in the Clouds: A New Theory of Religion. Oxford University Press.
Reeves, B., & Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.
Turkle, S. (2017). Alone Together: Why We Expect More from Technology and Less from Each Other (revised ed.). Basic Books.
Festerling, J., & Siraj, I. (2021). Anthropomorphizing Technology: A Conceptual Review of Anthropomorphism Research and How it Relates to Children’s Engagements with Digital Voice Assistants. Integrative Psychological and Behavioral Science, 55(3), 618–643.
Crowell, C. R., et al. (2019). Anthropomorphism of Robots: Study of Appearance and Agency. JMIR Human Factors, 6(2), e12629.
Mota-Rojas, D., et al. (2021). Anthropomorphism and its adverse effects on the distress and welfare of companion animals. Animals, 11(12), 3560.
Peter, S., Riemer, K., & West, J. (2025). The benefits and dangers of anthropomorphic conversational agents. Proceedings of the National Academy of Sciences, 122(30), e2415898122.

