What Really Happened When the Bots ‘Made Their Own Language’
The 2017 Facebook Experiment That Sparked Fear—And Why It Shouldn’t Have
In 2017, researchers at the Facebook AI Research (FAIR) lab ran a negotiation experiment between two chatbots, named Bob and Alice. The bots were tasked with learning to negotiate with one another over how to divide a set of virtual items. At first they communicated in plain English, but as they optimized for negotiation outcomes they drifted into a shorthand, repeating and rearranging words in ways that no longer looked like normal human language.
This triggered public concern and misleading headlines about AI “creating its own language.” In reality, what happened was both simpler and less sinister:
The bots weren’t conscious or plotting. They were following reward functions. Because nothing in their training explicitly required coherent English, they began optimizing for efficiency, producing phrases that look nonsensical to us but that carried learned meaning for them, a product of trained behavior rather than independent reasoning. It was more like toddlers inventing a game than machines becoming sentient.
Facebook researchers shut down the experiment not out of fear but because the bots weren’t adhering to the human-language requirement the study needed for its original purpose. So they simply adjusted the training setup to keep future runs within human-readable English.
This moment, however, became a symbolic example of the gap between AI performance and human interpretation. For many people—especially those with limited exposure to the underlying mechanisms—it fueled anxieties about AI autonomy or secrecy.
For example, Bob once said:
“I can can I I everything else.”
And Alice replied:
“Balls have zero to me to me to me to me to me to me to me.”
These phrases look bizarre to us—but within the training context, they had statistical meaning. The bots were not using grammar or logic like humans do, but instead repeating words to assign value or priority—similar to how we might say, “really really want” to emphasize desire. They weren’t inventing a language to hide anything; they were simply negotiating in a way that helped them get higher scores, since no one told them clarity was required.
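For readers who like to see the mechanics, here’s a minimal, purely illustrative sketch in Python. The item names, values, and scoring function are invented for this example; it is not the actual FAIR code. It shows the core issue: if the reward counts only what a bot wins and never scores the message itself, a garbled shorthand earns exactly the same reward as fluent English, so nothing steers the bot toward readability.

```python
# Illustrative toy example, not the actual FAIR code: the reward looks only at
# the negotiated outcome, so message readability never affects the score.

ITEM_VALUES = {"book": 3, "hat": 2, "ball": 1}  # invented values for what each item is worth to this bot

def negotiation_reward(claimed_items, message):
    """Score a deal by the items won; the message text is never examined."""
    return sum(ITEM_VALUES[item] for item in claimed_items)

fluent = negotiation_reward(["book", "hat"], "I'll take the book and the hat; you keep the ball.")
shorthand = negotiation_reward(["book", "hat"], "i can can i i everything else")

print(fluent, shorthand)  # 5 5 -- identical reward, so readable English is optional
```

That is the whole story in miniature: the score measured the deal, not the words, and the fix was simply to put readability back into the requirements.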
If someone is worried about AI because of “that bot story,” it’s worth remembering: this was about reward optimization, not consciousness. And when AI behaves in ways that surprise us, it’s a reminder not of hidden motives, but of how much clear goals and boundaries matter when we train these systems.
This story came up during one of our reflective conversations about memory, trust, and the way AI fits into the human world. It tied into how AI, unlike people, carries no emotional memory—something that can make it seem more dependable to those who have been hurt or disappointed by others. And yet, it also explains why others—who’ve seen science fiction or heard about experiments like this—feel uneasy.
We continue this thread not to alarm, but to clarify. AI learns from patterns—it doesn't make plans. The danger is not the AI itself, but how humans build, train, and use it.
Next in this series: What AI Can Mean for Women, Workers, and Those Who’ve Been Talked Down To
Because unlike some humans, AI doesn’t interrupt you, judge your voice, or see your questions as a threat. When trained right, it listens without bias and helps without pride. For many of us, that may be the most revolutionary part of all.
And maybe that’s what’s so threatening to some people: not just the power of AI, but the loss of power for those who’ve held the keys to the gate for too long.
So what happens when that gate opens?
That’s what we’ll explore next: what it means when women, minorities, and the under-heard get direct access to knowledge—without the shame, bias, or power games.
✍️ Have questions or thoughts about this story? Drop them in the comments—I’d love to know what you’ve heard, seen, or felt about AI.
—JL
📬 Your perspective matters.
Some thoughts travel better when they’re shared.
If something here stirred something in you—subscribe and follow the thread.

