IT Essentials: 'Impressively innovative' and other inanities

Trying to save time may be taking us towards real-world harm

AI use in academia has exploded in the last 12 months - but attempts to save time could have real-world implications.

Welcome to an Easter-delayed IT Essentials.

It turns out that AI's not just for marketing copy. A recent study found evidence suggesting that gen-AI use in academic papers shot up more than 450% last year, while a separate paper concluded that the same technology is also being used in peer reviews of those papers.

On one hand, I get it: academic papers are long and complex, taking weeks or months to get right. Add the peer review process on top and it can be a year or more before your work is available to the wider community. When your funding is tied to your output, it's no surprise that academics are increasingly opting for the quick route.

But this isn't PR copy or website code; these studies will be seen, cited and used by other researchers, and possibly even applied in practice. There comes a point where AI stops being useful and starts actively introducing harm. Academic papers are a prime example, because existing LLMs are neither objective nor reliably factual.

Unfortunately, until there's a documented case of real-world harm caused directly by the use of AI, these systems will probably remain in use. Some papers might attract criticism or even ridicule, causing publishers to enact new policies, but while academia is structured as it is, there is every incentive for researchers to favour quantity over quality.

On a related note, we've noticed a massive rise in AI use here at Computing, especially in our awards entries: a sharp jump in words like 'impressive', 'innovative', 'notable' and 'versatile' - clear indicators of AI involvement.

We stipulate that awards entries must be written specifically for each category rather than being cut-and-paste marketing copy. We understand that makes the entry process slow, but we do it so our judges only see the best, most compelling projects - ones that stand up to rigorous examination. AI copy doesn't stand up to that scrutiny, and rarely makes it through the shortlisting process.

That's the heart of the matter. AI can massively speed up output, but that output is rarely good enough to take a paper through a formal review process. That's not to say you can't use AI at all, but you must review and edit whatever it produces.

I say this not (just) as a journalist tired of reading some variant on "X is an impressive innovation", but as a person with a long-term health condition. I'd really prefer not to become the documented case of real-world harm that finally changes how AI is used in academia.