This article in Scientific American explains how GPT-3, an AI language model, wrote an academic paper about itself after being prompted: “Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text.” That paper is now under review for journal publication and has been posted on HAL, the international French-owned pre-print server. I wonder when, if these systems are left unfettered, it will become impossible for humans to understand the explanations AI writes.
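The article does not say exactly how the prompt was submitted, but for readers curious about the mechanics, the sketch below shows roughly what issuing that prompt to GPT-3 looks like through OpenAI’s (legacy) completions API in Python. The model name, token limit, and temperature are assumptions, not details from the article:

```python
# A minimal sketch of sending the article's prompt to GPT-3 via
# OpenAI's legacy completions API (pre-1.0 openai Python library).
# The model name and sampling parameters below are assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # assumed: replace with your own key

PROMPT = ("Write an academic thesis in 500 words about GPT-3 and add "
          "scientific references and citations inside the text.")

response = openai.Completion.create(
    model="text-davinci-002",  # assumed GPT-3-era model
    prompt=PROMPT,
    max_tokens=800,            # roughly enough room for ~500 words
    temperature=0.7,           # assumed sampling temperature
)

# The generated "thesis" comes back as plain text.
print(response.choices[0].text)
```

The point of the sketch is simply that a single short instruction, not any specialized tooling, was enough to produce a paper-length draft.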
For example, in 2017 a Facebook AI “talked” to another artificial intelligence system to solve a problem, and in the process the two systems created a new, more efficient language for that specific task. It strikes me as likely that AI models designed to explore unknown scientific riddles, if left unfettered, will indeed find answers that we mortals have trouble understanding, even when their predictions prove correct. This is already starting to happen: under human supervision, AI is now finding new cancer-fighting drugs that are being tested for effectiveness. I imagine that eventually those supervisors will be removed as the tools advance beyond the supervisors’ comprehension. If so, when AI writes a paper explaining how it discovered the cure for cancer, will we be able to understand how it found that answer? Will we care? I think the author of the Scientific American article, Almira Osmanovic Thunström, has similar questions:
“We have no way of knowing if the way we chose to present this paper will serve as a great model for future GPT-3 co-authored research, or if it will serve as a cautionary tale. Only time—and peer-review—can tell. Currently, GPT-3’s paper has been assigned an editor at the academic journal to which we submitted it, and it has now been published at the international French-owned pre-print server HAL. The unusual main author is probably the reason behind the prolonged investigation and assessment. We are eagerly awaiting what the paper’s publication, if it occurs, will mean for academia. Perhaps we might move away from basing grants and financial security on how many papers we can produce. After all, with the help of our AI first author, we’d be able to produce one per day.
Perhaps it will lead to nothing. First authorship is still one of the most coveted items in academia, and that is unlikely to perish because of a nonhuman first author. It all comes down to how we will value AI in the future: as a partner or as a tool.
It may seem like a simple thing to answer now, but in a few years, who knows what dilemmas this technology will inspire and we will have to sort out? All we know is, we opened a gate. We just hope we didn’t open a Pandora’s box.”
Overview by Tim Sloane, VP, Payments Innovation at Mercator Advisory Group
Read how bank AIs may be vulnerable to cyberattacks.