A call for a “professional Redditor” is drawing attention to the ongoing question of credibility and authenticity on both the platform and the internet in general.
Sherwood recently pointed out that financial operations platform Ramp has posted a call for a “professional Redditor”: someone “who is a Reddit power user and understands the platform’s culture, nuances, and unwritten rules” who can help the company promote itself using both paid ads and “organic content.”
The first bullet point on the list of responsibilities for this three-month contract position includes developing and executing a strategy that “provides genuine value to Reddit communities while subtly showcasing Ramp’s benefits through case studies, AMAs, educational content, and thought leadership.”
In other words, it sure sounds like they’re trying to hire a Reddit plant who will quietly and strategically influence others into using their products.
Another requirement: applicants must “have thick skin and can handle Reddit’s critique culture while learning from community responses to refine your approach.”
Rise of Reddit bots and AI engagement farming
If you spend time in any of the larger subreddits, you may start to notice comments throwing around accusations that posts were written by AI or that other commenters are just bots.
The rise of both of these things has become a valid concern on the platform (and on many social media platforms) over the last decade. Some bots are more obvious, hawking products or directing users to specific websites. Others spread misinformation and try to influence public opinion by flooding various sites in large quantities.
Concerns have grown as the adoption of AI has spread, as it allows both bots and human users to simply deploy AI to create more varied copy with more ease and speed. It also allows these accounts to post more content unrelated to their “mission,” creating a better illusion of being an authentic human being.
In a recent experiment, researchers purposely deployed AI-powered bots to r/changemyview to prove just how effective (and dangerous) this can be. The incident prompted Reddit CEO Steve Huffman to share a vague announcement that the site would start working with third-party services to confirm users are human in an attempt to cut down on bot activity.
“Reddit works because it’s human,” he wrote. “It’s one of the few places online where real people share real opinions. That authenticity is what gives Reddit its value.”
Authenticity across social media
While the Ramp job listing for a “professional Redditor” doesn’t mention anything about bots or AI, there is still an insidious quality underlying the idea of someone hired by a brand to “organically” work their company or product into a variety of conversations.
In the U.S., the Federal Trade Commission also has strict rules around disclosure when someone is being paid or otherwise incentivized to push a product or service. Of course, it isn’t unheard of for companies and/or their representatives to try to get away with not disclosing this information.
And Ramp’s request for someone who can identify “relevant threads where Ramp solutions naturally fit the conversation, providing helpful insights without appearing promotional” doesn’t exactly instill confidence about any forthrightness.
But with every corner of the internet seemingly losing credibility to AI and bots at a rapid clip, this seems like an unsurprising addition to the mix. And at least some real person will make $40 to $84 an hour doing it.