AI and Prophets of Doom

In a Star Tribune op-ed, Tyler Cowen, a Bloomberg Opinion columnist and professor of economics at George Mason University, advises against trusting the prophets of doom. Cowen argues that during the Covid pandemic, the health experts who had the dominant voice in public health decisions, including Dr. Anthony Fauci, made decisions that were not appropriate.

He suggests that on artificial intelligence, the wrong experts may make similar mistakes again. Cowen writes that some AI experts have called for a pause on AI research, one "declaring that he was worried about misinformation, mass unemployment and future risks of a more destructive nature". He writes:

What I do not hear, however, is a more systematic cost-benefit analysis of AI progress. Such an analysis would have to consider how AI might fend off other existential risks — deflecting that incoming asteroid, for example, or developing better remedies against climate change — or how AI might cure cancer or otherwise improve our health. Predictions of doom often fail to take into account the risks to America and the world if we pause AI development.

I also do not hear much engagement with the economic arguments that, while labor market transitions are costly, freeing up labor has been one of the major modes of material progress throughout history. The U.S. economy has a remarkable degree of automation already, not just from AI, and currently stands at full employment. If need be, the government could extend social protections to workers in transition rather than halt labor-saving innovations.

He suggests that any decisions on future AI research be based on analyses by experts with "an advanced understanding of the social sciences and political science, not just AI and computer science". Cowen offers his own views on AI:

decentralized social systems are fairly robust; the world has survived some major technological upheavals in the past; national rivalries will always be with us (thus the need to outrace China); and intellectuals can too easily talk themselves into pending doom.

Additional arguments Prof. Cowen makes on future AI research can be found here.