Made with ChatGPT

Must we inform our readers when we use powerful new writing tools?

Aaron Mayer
5 min read · Jan 9, 2023

[Cross-posted on Substack — follow along for more]

It’s 2023.

A 5th grader sends an essay to his teacher for a school assignment.

An employee at a non-profit submits a grant proposal to the NIH.

A travel blogger posts an article about a surfing spot in Maui.

They all have one thing in common: they were co-written with ChatGPT.

ChatGPT is built on a large language model (LLM), a relatively new form of AI capable of producing text so original and fluent that human readers often can’t tell whether it was written by a human or a machine.

LLMs represent an incredible development in the field of AI, and conversational proficiency is a major milestone for artificially intelligent systems and their engineers. It’s a momentous scientific achievement, and one we should regard with equal measures of awe and trepidation.

But what’s striking about ChatGPT in particular is that it’s so gosh darn good at a very special craft: writing.

For the first time, everyday folks can access these AI systems and use them as if they were writing alongside a real person. That means ChatGPT and its successors (coming soon, and no doubt more powerful) are poised to fundamentally restructure our relationship with the written word.
