ChatGPT and Science


The journal Nature recently wrote, “ChatGPT is not a person. Yet in many ways, this program has had a profound and wide-ranging effect on science in the past year.”

It co-wrote scientific papers, sometimes surreptitiously. It drafted outlines for presentations, grant proposals and classes, churned out computer code, and served as a sounding board for research ideas. It also invented references, made up facts and regurgitated hate speech. Most of all, it captured people’s imaginations: by turns obedient, engaging, entertaining, even terrifying, ChatGPT took on whatever role its interlocutors desired — and some they didn’t.

– Nature

Marinka Zitnik, who works on AI for medical research at Harvard Medical School in Boston, Massachusetts, told Nature, “ChatGPT and related software can help to brainstorm ideas, enhance scientific search engines and identify research gaps in the literature. Models trained in similar ways on scientific data could help to build AI systems that can guide research, perhaps by designing new molecules or simulating cell behaviour”.
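To make the kind of use Zitnik describes concrete, here is a minimal sketch of prompting a ChatGPT-style model to surface candidate research gaps from a handful of paper abstracts. It assumes the OpenAI Python SDK (v1) with an OPENAI_API_KEY set in the environment; the abstracts, prompt wording and model name are illustrative placeholders, not part of the Nature article.

```python
# Hypothetical sketch: asking a ChatGPT-style model to propose research
# gaps given a few abstracts. Assumes the OpenAI Python SDK (v1) and an
# OPENAI_API_KEY in the environment; model name is an illustrative choice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder abstracts standing in for real literature-search results.
abstracts = [
    "We benchmark graph neural networks on protein-interaction data...",
    "A transformer model predicts binding affinity from sequence alone...",
]

prompt = (
    "Given these paper abstracts, list open questions that the papers "
    "do not address, as candidate research gaps:\n\n"
    + "\n\n".join(abstracts)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

As with any such brainstorming aid, the output is a starting point to be checked against the actual literature, not a finding in itself — the same article notes that these models also invent references.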

According to Emily Bender, a computational linguist at the University of Washington, Seattle, there are few appropriate ways to use what she terms “synthetic text-extruding machines”. She points out that ChatGPT has a large environmental impact and problematic biases, and can mislead its users into thinking that its output comes from a person. On top of that, OpenAI is being sued for stealing data and has been accused of exploitative labour practices (by hiring freelancers at low wages).

The Nature article concludes: “No one knows how much more there is to squeeze out of ChatGPT-like systems. Their capabilities might yet be limited by the availability of computing power or new training data. But the generative AI revolution has started. And there’s no turning back.”