The free rider problem: Why Artificial Intelligence needs to be regulated

It is clear that current AI systems pose plenty of dangers, from racial bias in facial recognition technology to the increased threat of misinformation.

May 10, 2023 - 01:30

On March 22, thousands of researchers and tech leaders – including Elon Musk and Apple co-founder Steve Wozniak – published an open letter calling for a slowdown in the artificial intelligence race. Specifically, the letter recommended that labs pause training of technologies more powerful than OpenAI’s GPT-4, the most sophisticated generation of today’s language-generating AI systems, for at least six months.

Sounding the alarm on risks posed by AI is nothing new – academics have issued warnings about the risks of superintelligent machines for decades. There is still no consensus about the likelihood of creating artificial general intelligence: autonomous AI systems that match or exceed humans at most economically valuable tasks. However, it is clear that current AI systems already pose plenty of dangers, from racial bias in facial recognition technology to the increased threats of misinformation and student cheating.

While the letter calls for industry and policymakers to cooperate, there is currently no mechanism to enforce such a pause. As a philosopher who studies technology ethics, I’ve noticed that AI research exemplifies the “free rider problem.” I’d argue that this should guide how societies respond to its risks – and that good intentions won’t be enough.

Riding for free

Free riding is a common consequence of what philosophers call “collective action problems”. These are situations in which, as a group, everyone would benefit from a particular action, but as...
