I write about five articles a week. If I worked very hard, I could probably write ten. But what I really wish is that I could write 100 articles per week, discussing every interesting econ paper and piece of news out there in the world. The more my ideas get out into the world, the happier I am. (Don’t worry, I wouldn’t send you 100 emails a week; I’d put the posts into a weekly digest.)
Realistically, the only way I’m ever going to be able to raise my productivity substantially is with the aid of artificial intelligence. Large language models like GPT-3 offer the hope that someday, chatbots like ChatGPT will be able to write large portions of my articles, in my own style, incorporating my own beliefs and ideas, with my own level of understanding and analysis, and with only a little prompting, guidance, and editing from me. Other media outlets are thinking along similar lines — BuzzFeed recently announced that it plans to use ChatGPT to help create some of its content, and its stock rose on the news.
But so far, my attempts to use ChatGPT to help me write my own blog posts — which I will write about in detail at some point — have all ended in failure, for one simple reason: ChatGPT routinely spews out a lot of fake “facts”.
Just to give one example, I was recently discussing the idea of wage councils (also called “wage boards”) with a friend of mine. I referred him to an article written by the economist Arin Dube, arguing in favor of wage boards. Just for fun, my friend decided to have ChatGPT argue the opposite side of the case:
There are just a few problems with this output. First, the quote from Dube is completely fictitious. Second, though he has been interviewed for Bloomberg, Dube has never written an article for that publication. And third, and most importantly, Dube’s position on wage boards is completely opposite to what ChatGPT claims in the above passage. When I showed Dube the exchange, he was less than amused. (Update: I gave Perplexity AI the same prompt, and it invented even more fake facts than ChatGPT!)
What’s more, this is far from an isolated incident. When I try to use ChatGPT as a source for economic statistics, the numbers it reports are routinely fictitious, with misattributed sources. A quick Twitter search for the phrase “none of these papers exist” will demonstrate the chatbot’s well-known tendency to make up fake sources and present them with an air of the utmost confidence and authority.
There’s no way that I can possibly rely on this kind of output for my articles. Including made-up facts would be a breach of trust with my audience, and utterly misrepresenting someone’s position on an issue, as ChatGPT does above, would be absolutely devastating to my credibility. So using it to help write even small portions of my posts, even with very careful “prompt engineering”, would require me to fact-check every detail, which would take far more time and effort than just writing the content myself.
But of course LLMs are still in their early days, and we can expect them to undergo considerable improvement. So I think it’s worth asking why ChatGPT makes up fake facts and sources, and whether this is something we should expect to be fixed over time.
Natural language does not equal knowledge