xAI workers balked over training request to help “give Grok a face,” docs show

https://arstechnica.com/tech-policy/2025/07/xai-workers-balked-over-training-request-to-help-give-grok-a-face-docs-show/

Ashley Belanger Jul 22, 2025

Dozens of xAI employees expressed concerns—and many objected—when asked to record videos of their facial expressions to help "give Grok a face," Business Insider reported.

BI reviewed internal documents and Slack messages, finding that the so-called project "Skippy" was designed to help Grok learn what a face is and "interpret human emotions."

It's unclear from these documents if workers' facial data helped train controversial avatars that xAI released last week, including Ani—an anime companion that flirts and strips—and Rudi—a red panda with a "Bad" mode that encourages violence. But a recording of an xAI introductory meeting on "Skippy" showed a lead engineer confirming the company "might eventually use" the employees' facial data to build out "avatars of people," BI reported.

Although all employees were told that their training videos would not be shared outside the company and would be used "solely" for training, some workers refused to sign the consent form, worried their likenesses might be used to say things they never said. xAI's recent Grok scandals—where the chatbot went on antisemitic rants praising Hitler—and the company's reported plan to hire an engineer to design "AI-powered anime girls for people to fall in love with" likely contributed to employees' discomfort. Confirming on Slack that they had opted out, these employees said they were ultimately too "uneasy" to grant xAI "perpetual" access to their data, BI found.

For the more than 200 employees who did not opt out, xAI asked that they record 15- to 30-minute conversations, with one employee posing as a potential Grok user and the other posing as the "host." xAI was specifically looking for "imperfect data," BI noted, expecting that training only on crystal-clear videos would limit Grok's ability to interpret a wider range of facial expressions.

xAI's goal was to help Grok "recognize and analyze facial movements and expressions, such as how people talk, react to others' conversations, and express themselves in various conditions," an internal document said. Allegedly among the only guarantees to employees—who likely recognized how sensitive facial data is—was a promise "not to create a digital version of you."

To get the most out of data submitted by "Skippy" participants, dubbed tutors, xAI recommended that they never provide one-word answers, always ask follow-up questions, and maintain eye contact throughout the conversations.

The company also apparently provided scripts to evoke facial expressions they wanted Grok to understand, suggesting conversation topics like "How do you secretly manipulate people to get your way?" or "Would you ever date someone with a kid or kids?"

For xAI employees who provided facial training data, privacy concerns may still exist, considering that X—the social platform formerly known as Twitter, which was recently folded into xAI—has been targeted by what Elon Musk called a "massive" cyberattack. Because of privacy risks ranging from identity theft to government surveillance, several states have passed strict biometric privacy laws to prevent companies from collecting such data without explicit consent.

xAI did not respond to Ars' request for comment.