OpenAI’s biometric verification plans for social platform
OpenAI is reportedly working on a social network that would use biometric verification tools to ensure users are real people. According to sources familiar with the project, the company is considering systems like Apple’s Face ID or World’s iris-scanning Orb technology.
The idea appears to be to create what insiders describe as a “real humans only” network. This would position OpenAI’s platform in direct contrast to the bot-driven engagement that has become common on existing social media sites like X.
I think this approach makes sense from a verification standpoint, but it raises some immediate questions about privacy and accessibility. Biometric data is sensitive, and not everyone has access to the latest facial recognition or iris-scanning technology.
World’s technology and token surge
The Forbes report specifically mentioned World’s Orb as a potential verification tool. This device creates a unique identifier from a user’s iris pattern. What’s interesting here is the connection between the companies – World is operated by Tools for Humanity, which was founded by OpenAI’s CEO Sam Altman.
Following the report, World’s native WLD token saw significant movement. It jumped more than 25% in a single day, reaching around $0.55. Market reactions like this show how sensitive crypto markets can be to news about potential partnerships or adoption.
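Those two reported figures let us back out roughly where WLD was trading before the news. A minimal sketch (both inputs are approximations taken from the report, so the result is only an estimate):

```python
def implied_prior_price(post_jump_price: float, pct_gain: float) -> float:
    """Return the implied price before a given percentage gain."""
    return post_jump_price / (1 + pct_gain / 100)

# Reported: a jump of more than 25% in a day, reaching around $0.55
prior = implied_prior_price(0.55, 25)
print(f"Implied pre-jump price: ~${prior:.2f}")  # roughly $0.44
```

In other words, the move took the token from roughly the mid-$0.40s to the mid-$0.50s in a single session.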
Privacy concerns and development status
Privacy advocates have already started raising concerns about the biometric verification approach. Iris scans are particularly sensitive because they’re permanent – you can’t change your iris pattern like you can change a password. If this data were ever compromised, the risks could be long-term.
Sources cautioned, however, that OpenAI’s plans are still in flux. The project remains in early development, built by a small internal team of fewer than ten people. There’s no set timeline for a public launch, and the company’s approach could change significantly before anything reaches users.
The broader context
What OpenAI seems to be attempting here is addressing one of the fundamental problems with current social media: the difficulty of distinguishing real people from automated accounts. Bots have become sophisticated enough to mimic human behavior in concerning ways, influencing everything from political discourse to market sentiment.
But the solution they’re exploring comes with its own set of challenges. Biometric verification creates a permanent record of identity that could be vulnerable to breaches or misuse. There’s also the question of whether people will be willing to submit to iris scans just to use a social platform.
The project’s small team size suggests this is still very much an exploratory effort. OpenAI might be testing the waters to see how users and regulators react before committing significant resources. Given the privacy implications, I’d expect some pushback from advocacy groups and potentially from users themselves.
Still, the market reaction to the news shows there’s interest in verification solutions that work. Whether biometrics is the right approach, or whether less invasive methods might achieve similar results, remains to be seen. The conversation around online identity and verification is only getting more complex as AI systems become more capable of mimicking human behavior.