Top AI Conference Bans ChatGPT From Writing Academic Papers

ChatGPT stands for “Chat Generative Pre-trained Transformer”. It can perform natural language generation at such a high level of fluency that it can pass the Turing Test, i.e., fooling humans into believing they are communicating with another human rather than a computer. Even Technocrat AI leaders are recognizing the dangerous implications and are erecting stop signs. ⁃ TN Editor

AI tools can be used to ‘edit’ and ‘polish’ authors’ work, say the conference organizers, but text ‘produced entirely’ by AI is not allowed. This raises the question: where do you draw the line between editing and writing?

One of the world’s most prestigious machine learning conferences has banned authors from using AI tools like ChatGPT to write scientific papers, triggering a debate about the role of AI-generated text in academia.

The International Conference on Machine Learning (ICML) announced the policy earlier this week, stating, “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.” The news sparked widespread discussion on social media, with AI academics and researchers both defending and criticizing the policy. The conference’s organizers responded by publishing a longer statement explaining their thinking. (The ICML responded to requests from The Verge for comment by directing us to this same statement.)

According to the ICML, the rise of publicly accessible AI language models like ChatGPT — a general purpose AI chatbot that launched on the web last November — represents an “exciting” development that nevertheless comes with “unanticipated consequences [and] unanswered questions.” The ICML says these include questions about who owns the output of such systems (they are trained on public data, which is usually collected without consent and sometimes regurgitate this information verbatim) and whether text and images generated by AI should be “considered novel or mere derivatives of existing work.”

The latter question connects to a tricky debate about authorship — that is, who “writes” an AI-generated text: the machine or its human controller? This is particularly important given that the ICML is only banning text “produced entirely” by AI. The conference’s organizers say they are not prohibiting the use of tools like ChatGPT “for editing or polishing author-written text” and note that many authors already use “semi-automated editing tools” like grammar-correcting software Grammarly for this purpose.

“It is certain that these questions, and many more, will be answered over time, as these large-scale generative models are more widely adopted. However, we do not yet have any clear answers to any of these questions,” write the conference’s organizers.

As a result, the ICML says its ban on AI-generated text will be reevaluated next year.

The questions the ICML is addressing may not be easily resolved, though. The availability of AI tools like ChatGPT is causing confusion for many organizations, some of which have responded with their own bans. Last year, coding Q&A site Stack Overflow banned users from submitting responses created with ChatGPT, while New York City’s Department of Education blocked access to the tool for anyone on its network just this week.

In each case, there are different fears about the harmful effects of AI-generated text. One of the most common is that the output of these systems is simply unreliable. These AI tools are vast autocomplete systems, trained to predict the next word in any given sentence. As such, they have no hard-coded database of “facts” to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth, since a sentence sounding plausible does not guarantee its factuality.
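The “vast autocomplete” idea can be made concrete with a deliberately tiny sketch. The bigram model below is an assumption-laden stand-in, not ChatGPT’s actual architecture (which uses a neural network over tokens), but it shows the same core objective: given the words so far, predict the most likely next word from patterns in training text — with no notion of whether the continuation is true.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale training data a real LLM sees.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which: a bigram "autocomplete" table.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed follower of `word`,
    or None if the word was never seen. Note there is no fact
    database here -- only statistics of what tends to come next."""
    followers = next_counts.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in the corpus
```

A real model generates whole passages by repeating this prediction step, which is why its output is fluent and plausible whether or not it happens to be factual.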

In the case of ICML’s ban on AI-generated text, another potential challenge is distinguishing between writing that has only been “polished” or “edited” by AI and that which has been “produced entirely” by these tools. At what point do a number of small AI-guided corrections constitute a larger rewrite? What if a user asks an AI tool to summarize their paper in a snappy abstract? Does this count as freshly generated text (because the text is new) or mere polishing (because it’s a summary of words the author did write)?

Read full story here…

About the Editor

Patrick Wood
Patrick Wood is a leading and critical expert on Sustainable Development, Green Economy, Agenda 21, 2030 Agenda and historic Technocracy. He is the author of Technocracy Rising: The Trojan Horse of Global Transformation (2015) and co-author of Trilaterals Over Washington, Volumes I and II (1978-1980) with the late Antony C. Sutton.
Mug Diller

This is like enacting a law in 1905 that allows for automobiles, but only if pulled by a horse.


BAN authors from using AI tools! Good luck with that. And OJ Simpson is still looking for the real killer.


I was just listening to your Quickening Report on this topic and decided to comment here. (1) You indicate that ChatGPT can perform deep fakes of voice over for videos. This could also be a propaganda ruse so that the Deep State actors can deny what they really said on the video evidence we gather. (2) All I can be certain of is that I do not know what is on the other end of the interface. Who or what is the man behind the curtain? When is it an AI, human, or propagandist? See also the concept of Mechanical…
