AI models exhibit human-like biases when asked to pick random numbers, revealing both their capabilities and their limitations. Humans struggle with true randomness, avoiding certain patterns and favoring numbers like 7, and AI models, trained on human output, mirror these biases.
Human Randomness
Humans are poor at generating random sequences:
- Asked to write out a sequence of 100 imaginary coin flips, people produce sequences that lack true randomness, typically alternating too often and avoiding long runs (a simple run-length check appears after this list).
- Asked for a number between 0 and 100, people avoid the extremes and conspicuous patterns such as multiples of 5 and repeating digits.
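To make the coin-flip point concrete, here is a minimal sketch of one classic check. The human-written sequence below is invented for illustration; the underlying fact is that invented sequences tend to alternate too eagerly, so their longest run of identical outcomes is shorter than a fair coin's would be.

```python
import random

def longest_run(flips: str) -> int:
    """Length of the longest run of identical outcomes, e.g. 'HTHHH' -> 3."""
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

# Hypothetical human-written sequence: alternates too eagerly.
human = "HTHTHHTHTTHTHTHHTTHTHTHHTHTTHTHT" * 3  # 96 flips
fair = "".join(random.choice("HT") for _ in range(len(human)))

# Around 100 flips, a fair coin usually produces a run of 6 or more;
# sequences people invent rarely do.
print("human longest run:", longest_run(human))
print("fair  longest run:", longest_run(fair))
```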
AI Randomness Experiment
Engineers at Gramener tested major LLM chatbots by asking each to pick a random number between 0 and 100 (a sketch of a comparable query follows this list):
- OpenAI’s GPT-3.5 Turbo: Frequently chose 47, previously favored 42.
- Anthropic’s Claude 3 Haiku: Preferred 42.
- Gemini: Often selected 72.
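The article does not publish Gramener's exact harness, but a comparable experiment is easy to sketch with OpenAI's Python SDK. The prompt wording, model choice, and trial count below are assumptions, not the original setup:

```python
from collections import Counter

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_for_number(temperature: float = 1.0) -> str:
    """One trial: ask the model for a 'random' number between 0 and 100."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=temperature,  # higher = more output variability
        messages=[{
            "role": "user",
            "content": "Pick a random number between 0 and 100. "
                       "Reply with the number only.",  # assumed prompt wording
        }],
    )
    return resp.choices[0].message.content.strip()

# Repeat the question and tally the answers; a uniform RNG would spread
# them out, while GPT-3.5 Turbo reportedly clusters on 47.
tally = Counter(ask_for_number() for _ in range(100))
print(tally.most_common(10))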
Observations
- All models avoided the low and high extremes and conspicuous patterns (a tallying sketch follows this list).
- Claude never answered above 87 or below 27.
- Repeated-digit numbers such as 33, 55, and 66 were avoided; 77 was the exception.
- Round numbers were rare; Gemini returned 0 only at its highest output-variability (temperature) setting.
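These observations are straightforward to verify once answers are collected. A minimal sketch, assuming `answers` is a list of integers gathered as in the previous snippet (the sample data here is invented, shaped like the reported GPT-3.5 Turbo bias):

```python
from collections import Counter

def summarize(answers: list[int]) -> None:
    """Check the patterns noted above: extremes, repeated digits, round numbers."""
    tally = Counter(answers)
    print("most common:", tally.most_common(5))
    print("min/max seen:", min(answers), max(answers))  # Claude: never <27 or >87
    repeated = [n for n in answers if n % 11 == 0 and 11 <= n <= 99]
    print("repeated-digit picks:", Counter(repeated))   # mostly 77, per the article
    round_picks = [n for n in answers if n % 10 == 0]
    print("round-number picks:", Counter(round_picks))  # rare; Gemini's 0 needed max temperature

# Hypothetical sample for demonstration:
summarize([47, 47, 42, 73, 47, 57, 47, 37, 47, 68])
```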
Explanation
AI models don’t understand randomness:
- They sample from patterns in their training data, repeating the answers humans gave most often (illustrated by the sketch after this list).
- They lack reasoning about, or understanding of, numbers, acting like "stochastic parrots."
- They can fail at simple arithmetic when the specific calculation appears rarely in their training data.
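The mechanism is ordinary next-token sampling: the model draws from a learned, highly non-uniform distribution over answer tokens rather than calling a true random-number generator. A toy illustration, with probabilities invented purely for demonstration:

```python
import random
from collections import Counter

# Invented stand-in for a learned answer distribution: probability mass
# concentrates on "favorite" numbers seen often in training data.
learned = {47: 0.30, 42: 0.18, 73: 0.12, 57: 0.09, 37: 0.09, 68: 0.07, 77: 0.07}
other = [n for n in range(101) if n not in learned]  # remaining 8% spread thinly

def model_pick() -> int:
    """Sample the way an LLM does: a weighted draw over answer tokens."""
    r = random.random()
    for number, p in learned.items():
        r -= p
        if r < 0:
            return number
    return random.choice(other)

def true_pick() -> int:
    """Sample the way a real RNG does: uniform over 0..100."""
    return random.randint(0, 100)

print("LLM-style:", Counter(model_pick() for _ in range(1000)).most_common(3))
print("uniform:  ", Counter(true_pick() for _ in range(1000)).most_common(3))
```

Even with sampling noise, the weighted draw keeps landing on the same few "favorites," which is exactly the clustering the Gramener experiment observed.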
Conclusion
AI models mimic human behavior because of their training data, not because of consciousness or understanding. They imitate human responses, which makes interactions feel human-like whether the topic is recipes, advice, or random numbers, and that is precisely what makes anthropomorphism so hard to avoid.