Be careful how you treat AI-looking posts

[Featured image: an animated white robot with blue trim and the letters "AI" on its chest stands behind a floating desk with a blue laptop]

One of the big tech stories of 2023 is the sudden popularity of generative AI. Seemingly overnight, the average person gained access to powerful text generators. With a few words in a prompt, anyone can generate a wall of text that sounds authoritative. The problem, of course, is that this output is often confidently wrong. As a result, many communities have begun prohibiting AI-generated content in their communication channels. But you have to be cautious about how you treat things that seem like they’re generated by artificial intelligence.

When prohibiting is good

Because AI can be confidently wrong, it makes sense to prohibit AI-generated answers to support questions and in tutorial content. If the generated output includes commands that harm the user (for example, a plausible-looking but destructive shell command), that's bad for the user and, ultimately, for your project. Confidently wrong answers are still wrong.

When prohibiting is bad

Outside the support and tutorial context, there’s less of an argument for banning AI-generated content. If a post is AI-generated but it’s on-topic and isn’t spammy, what harm does it cause? If someone wants to use AI to help them communicate in other parts of the project, more power to them. Contributors who don’t speak English as a first language might need help writing. Or maybe they’re just not good at writing and the AI model helps them communicate more clearly.

Is it even AI?

Worse, what you think is AI-generated content may not be. This post was inspired by a conversation among Fedora Discussion moderators about a post on that platform. To me, the post read as someone new to contributing who wanted to make a thorough case for publishing an article. It was over-enthusiastic, to be sure, but I've seen this writing style from enough young, enthusiastic contributors to be forgiving of it unless it becomes a multi-post problem.

There's also a recent story from my alma mater. After receiving a complaint that an email he sent was AI-generated, a professor replied, "It's not an AI. I'm just autistic." Neurodivergent people and non-native English speakers are more likely to have their writing incorrectly flagged as AI-generated. If you're not careful, you'll create an unwelcoming environment in your community.

This post’s featured photo by Mohamed Nohassi on Unsplash

Ben formerly led open source messaging at Docker and was the Fedora Program Manager. He is the author of Program Management for Open Source Projects. Ben is an Open Organization Ambassador and frequent conference speaker. His personal website is Funnel Fiasco.

