Large Language Models for Augmented Reading

By Claire Gorman

A curious scientific paper on millipedes helps us consider what Large Language Models (LLMs) can teach us about the construction and dissemination of environmental intelligence.

Originally published by the MIT Public Interest Technologist.

Illustration of millipede damage to root and tuber crops in Rwanda, with images from the withdrawn preprint.

In June 2023, a curious preprint of a scientific paper about millipedes surfaced online. The paper purported to review recent research concerning the biology, ecology, and agricultural pest behavior of millipedes in Africa, including their impact on soil health, damage to crops, and the influence of climate factors on their distribution. Not long after the preprint was released, however, it came to the attention of scholars who had been cited in its references for papers they had never written. It was quickly determined that the paper had been generated using AI, most likely a publicly available Large Language Model (LLM) interface such as ChatGPT.

As a preprint, this paper had never undergone the scientific process of peer review. If it had, human reviewers would hopefully have noticed telltale signs of artificial authorship, although history shows this is not always the case. In 2014, more than 120 papers were removed from IEEE and Springer websites when it was determined that they had been automatically generated with “SCIgen,” a program for fabricating gibberish computer science research originally implemented in 2005 by MIT graduate students. SCIgen was initially conceived as a practical joke, designed to expose oversights in scientific publishing by pranking negligent conferences with nonsense research later revealed to be machine fabrications. However, the large-scale, undeclared infiltration of the software’s output into real publications is cause for more alarm than amusement.

The encroachment of AI-generated text into scientific publications lends new significance to old questions facing both the academic community and the public, especially around climate change. In climate and environmental science, the social production and dissemination of science is particularly fraught: scientific consensus and media communication of environmental issues have long been disconnected due to political lobbying and deliberate disinformation. Consequently, the view of climate change offered by scientific research has tended to differ significantly from the view presented by public-facing media outlets. As the environmental public navigates this contested corpus of topical online literature, now newly complicated by the specter of AI-generated information even in scientific forums, the task of finding legible and legitimate climate research has become all the more complex.

In the long process of producing, publishing, and communicating scientific research, much interpretation takes place. Scientists collect data and use mathematical or statistical analyses to interpret sets of observations; then they express these findings in words that adhere to the standard form of a scientific paper. In these phases, where original findings are analyzed for the first time, AI-generated text does not belong. When AI enters the authorship of academic publications, it introduces vagueness at best and disguised falsehood at worst; these are not acceptable dynamics in forums where the highest degrees of precision and accuracy are expected.

However, the field-specific expertise needed to fully understand scientific research that meets these standards also tends to make it inaccessible to public audiences. For this reason, the interpretation of science does not typically end with the release of a paper; instead, a subset of scientific research is aggregated, reviewed, and summarized by secondary literature, scientific journalism, and news media before circulating into the social media feeds that correspond to mainstream conversation. These additional interpretive processes translate science into a context and vocabulary that readers outside the scientific community can engage with. It is in this interpretive process, rather than in the production of scientific research, that AI can play a constructive role.

Take, for example, the June 1999 paper “Climate and atmospheric history of the past 420,000 years from the Vostok ice core, Antarctica,” published in Nature by J. R. Petit and colleagues. The paper’s topic of deep climate history might appeal to someone looking for proof that the present era of anthropogenic warming is unlike natural changes in the Earth’s temperature: a scientifically accepted fact that the paper confirms, but that is disproportionately questioned in public-facing media. While the paper is written clearly, it assumes familiarity with concepts that a newcomer to the ice record is unlikely to have; this is where an LLM tool can be constructive. Using an interface like OpenAI’s ChatGPT, a non-expert reader can easily access term definitions and basic concept explanations that make specialized research much easier to understand. Companion questions a reader could ask an LLM while parsing the Vostok paper include “What is the scientific purpose of ice cores?”, “What are precession, obliquity, and eccentricity in the Earth's orbit?”, and “How does ice-albedo feedback work?”. An LLM chatbot can answer these questions quickly, without the distraction of hunting through the internet unguided. While its answers may not fully satisfy an expert, they are more than sufficient for a novice reader to get the gist.
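For readers who would rather script this workflow than type into a chat window, the same companion questions can be posed through an LLM’s API. The sketch below is a minimal illustration, not a prescription: it assumes OpenAI’s Python SDK (installed with pip install openai) and an API key set in the environment, and the model name is a placeholder for any chat-capable model.

```python
# A minimal sketch of "companion questions" asked alongside a difficult
# paper, using OpenAI's Python SDK. Assumes OPENAI_API_KEY is set in the
# environment; the model name below is an illustrative placeholder.
from openai import OpenAI

client = OpenAI()

companion_questions = [
    "What is the scientific purpose of ice cores?",
    "What are precession, obliquity, and eccentricity in the Earth's orbit?",
    "How does ice-albedo feedback work?",
]

for question in companion_questions:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[
            {
                "role": "system",
                "content": "You are a patient tutor explaining climate "
                           "science concepts to a non-expert reader.",
            },
            {"role": "user", "content": question},
        ],
    )
    print(question)
    print(response.choices[0].message.content)
    print()
```

The point is not the particular library but the pattern: small, conceptual questions asked in the margins of a difficult paper.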

This approach draws on some of the greatest strengths of LLMs (their ability to summarize large amounts of aggregated information and their believable conversational tone) while only minimally triggering their risks of bias and imprecision, because it sticks to conceptual scientific questions. Taken further, readers might prompt LLMs to rephrase paragraphs written in confusing or jargon-heavy language, or even summarize entire sections of text, as the sketch below illustrates. When given the entire article, ChatGPT was even able to produce an accurate abstract for the Vostok paper, in more accessible language than the original. While scientific research authors should not compromise the precision or novelty of their research for the speed of artificial text generation, these examples are promising entry points that outline a role for LLMs on the reader’s side of scientific interpretation rather than the scholarly writer’s. Imagining this pathway invites the application of LLM technology to raise the level of scientific knowledge accessible to the environmental public, not to lower it.
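For readers inclined to take up that interpretive role with a bit of scripting, the rephrasing step can be automated as well. Like the earlier sketch, the one below assumes OpenAI’s Python SDK with an API key in the environment; the model name, prompt wording, and passage variable are hypothetical choices, not a fixed recipe.

```python
# A sketch of the reader-side rephrasing step: paste an opaque passage
# and ask for a plain-language version. Same assumptions as the earlier
# sketch (OpenAI SDK, API key in the environment, placeholder model name).
from openai import OpenAI

client = OpenAI()

# Stand-in for whatever paragraph the reader finds hardest to follow.
passage = "(paste a jargon-heavy passage from the paper here)"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative placeholder
    messages=[
        {
            "role": "user",
            "content": (
                "Rephrase the following passage from a climate science "
                "paper in plain language for a general reader, without "
                "adding any claims that are not in the text:\n\n" + passage
            ),
        },
    ],
)
print(response.choices[0].message.content)
```

Here the constraint in the prompt, asking the model not to add claims absent from the text, does some of the interpretive hedging that a careful reader would otherwise do by hand.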


Claire Gorman is a dual master’s student at MIT pursuing degrees in Environmental Planning and Computer Science. Her research interests include deep-learning-based computer vision methods, remote sensing for ecological sustainability, and design as a mediator between science and society. She holds a bachelor’s degree in Computer Science and Architecture from Yale University.
