How will we govern super-powerful AI?
Watch the newest video from Big Think: https://bigth.ink/NewVideo
Learn skills from the world’s top experts at Big Think Edge: https://bigth.ink/Edge
The question of self-conscious artificial intelligence dominating future humanity is not the most pressing issue we face today, says Allan Dafoe of the Center for the Governance of AI at Oxford’s Future of Humanity Institute. Dafoe argues that AI’s power to generate wealth should make good governance our primary concern.
With thoughtful systems and policies in place, humanity can unlock the full potential of AI with minimal negative consequences. Drafting an AI constitution will also provide the opportunity to learn from the errors of past designs and avoid future conflicts.
Building a framework for governance will require us to get past partisan divides and interests so that society as a whole can benefit from AI in ways that do the greatest good and the least harm.
Allan Dafoe is an associate professor in the International Politics of AI and head of the Centre for the Governance of AI at the Future of Humanity Institute at the University of Oxford. He specializes in AI governance, AI race dynamics, and the international politics of AI. Dafoe’s prior work centered on examining the causes of the liberal peace, and the role of reputation and honor as causes of war.
ALLAN DAFOE: AI is likely to be a profoundly transformative general purpose technology that changes virtually every aspect of society, the economy, politics, and the military. And this is just the beginning. The issue doesn’t come down to consciousness or “Will AI want to dominate the world or will it not?” That’s not the question. The question is: “Will AI be powerful and will it be able to generate wealth?” It’s very likely that it will be able to do both. And so really, given that, the governance of AI is the most important issue facing the world today and especially in the coming decades.
My name is Allan Dafoe, I am the director of the Center for the Governance of AI at the Future of Humanity Institute at the University of Oxford. The core part of my research is to think about the governance question with respect to AI. So this is the problem of how the world can develop AI in a way that maximizes the opportunities and minimizes the risks.
NARRATOR: So why is it so important for us to govern artificial intelligence? Well, first, let’s just consider the ways that natural human intelligence has impacted the world on its own.
DAFOE: In countless ways it’s incredible how far we’ve gone with human intelligence. This human ability, which had all sorts of energy constraints and physical restrictions, has been able to build up this technological civilization, which has produced cellphones and houses, education, penicillin, and flight. Practically everything that we have to be grateful for is a product of human intelligence and human cooperation. With artificial intelligence, we can amplify that and eventually take it beyond our own capacity. And it’s hard for us to know now what that will mean for the economy, for society, for the social impacts and the possibilities that it will bring.
NARRATOR: AI isn’t the first technology that our society has had to grapple with how to govern. In fact, countless technologies like cars, guns, radio, and the internet are all subject to governance. What sets AI apart is the kind of impact it can have on society and on every other technology it touches.
DAFOE: So if we govern AI well, there are likely to be substantial advances in medicine and transportation, helping to reduce global poverty and [it will] help us address climate change. The problem is if we don’t govern it well, it will also create negative externalities in society. Social media may make us more lonely, self-driving cars may cause congestion, autonomous weapons could cause risks of flash escalations and war, or other kinds of military instability. So the first layer is to address these unintended consequences of the advances in AI that are emerging. Then there’s this bigger challenge facing the governance of AI, which is really the question of where do we want to go?
NARRATOR: The way we structure our governance of AI is crucial, possibly to the survival of our species. When we consider how impactful this technology can be, any system that governs its use must be carefully constructed.
DAFOE: There are a lot of instances where a society has stumbled into extremely dangerous situations, World War I perhaps being one of the more illustrative ones, where no single leader really wanted to have this war but, nevertheless, they were…
To read the full transcript, please visit https://bigthink.com/videos/ai-governance
Read more: youtube.com