POLICY
Wikipedia has banned contributors from using artificial intelligence tools to create content for its platform through a recent policy update.
The new guidelines reflect growing concern within the Wikipedia community that AI-generated text conflicts with the platform’s standards on citing reliable and verifiable sources. In the update, Wikipedia noted that text generated by large language models tends to violate a number of its core content policies. “For this reason, the use of LLMs to generate or rewrite article content is prohibited, save for the exceptions given below,” the new policy reads.
Editors can still use AI tools in limited ways, such as fixing typos and making changes to the formatting of an article after it has been reviewed by a Wikipedia volunteer or administrator. It’s also permissible to use AI to translate articles from foreign-language Wikis into English and vice versa, so long as the translation still follows the site’s policies. That means the translator must be fluent in both English and the foreign language in question.
However, Wikipedia stressed that editors must ensure that the tools do not add new information to the articles. It urges caution too, pointing out that AI sometimes changes the meaning of content it edits or translates, and that the outputs may not be accurate or align with the source’s intent.
The new policy does not mention any specific penalties for editors and contributors who use AI-generated content, but Wikipedia’s guidelines around disclosure warn that repeated misuse forms a pattern of “disruptive editing.” Anyone guilty of that could find themselves temporarily suspended from making edits or adding new content, and repeat violators can be permanently banned. Still, Wikipedia does offer an appeal process for overturning such bans.
What isn’t clear is how Wikipedia will actually identify AI-generated content submitted by its human editors. The wording of the policy suggests this will be difficult, because it warns editors checking for factual content not to rely on someone’s writing style alone to determine if something was created by an LLM. Instead, it tells them to focus on whether the content complies with its standards and the contributor’s editing history.
“Some editors may have similar writing styles to LLMs,” the policy reads. “More evidence than just stylistic or linguistic signs is needed to justify sanctions, and it is best to consider the text’s compliance with core intent policies and recent edits by the editor in question.”
Given Wikipedia’s open editing model, which allows anyone to make changes to its articles provided they follow its content policies, banning the use of AI is a sensible move, considering how error-prone LLMs can be. For all of the improvements made to AI text generators, most models remain prone to “hallucinations,” or making unsubstantiated claims. Plagiarism remains a concern too.
Wikipedia has had concerns about AI for a while. Last year, the Wikimedia Foundation, which runs the site, asked AI companies to stop scraping data from its platform and instead use its paid enterprise application programming interface, which allows them to access its content at scale without putting its servers under strain.
Several AI companies have done so, with Microsoft Corp., Google LLC, Amazon Web Services Inc. and Meta Platforms Inc. all agreeing to use the API in January. The API is a paid service designed for large-scale reuse of Wikipedia’s content, and the revenue helps fund Wikipedia’s nonprofit mission.
Wikipedia itself continues to bleed traffic as a result of the growing popularity of AI chatbots. In October, the foundation said human visits to the site had declined by about 8%, as chatbots provide direct answers to users’ questions rather than sending them to its website.