The Next Step to Ensuring Beneficial AI
Two years ago, after an exciting conference in Puerto Rico that included many of the top minds in AI, we produced two open letters -- one on beneficial AI and one on autonomous weapons -- which were signed and supported by tens of thousands of people. But that was just one step along the path to creating artificial intelligence that will benefit us all. 

This month, we brought together even more AI researchers, entrepreneurs, and thought leaders for our second Beneficial AI Conference, held in Asilomar, California (see videos below). Speakers and panelists discussed the future of AI, economic impacts, legal issues, ethics, and more. And during breakout sessions, groups gathered to discuss what basic principles we could all agree on that could help shape a future of beneficial AI.

As we expressed in a recent post about the process involved in creating the Asilomar Principles:

"We, the organizers, found it extraordinarily inspiring to be a part of the BAI 2017 conference, the Future of Life Institute’s second conference on the future of artificial intelligence. Along with being a gathering of endlessly accomplished and interesting people, it gave a palpable sense of shared mission: a major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and opportunity to shape it for the best.

"We hope that these principles will provide material for vigorous discussion and also aspirational goals for how the power of AI can be used to improve everyone’s lives in coming years."

If you haven't had a chance to review the Principles yet, we encourage you to do so now, and consider joining the thousands of other researchers and concerned citizens who have already signed.

For more in-depth discussion about the Principles, we interviewed Anca Dragan, Yoshua Bengio, Kay Firth-Butterfield, Guruduth Banavar, Francesca Rossi, Toby Walsh, Stefano Ermon, Dan Weld, and Roman Yampolskiy.



Sampling of Asilomar videos on YouTube so far
Superintelligence: Science or Fiction?

Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss the likely outcomes if we succeed in building human-level AI. Moderated by Max Tegmark.

Interactions between the AI Control Problem and the Governance Problem

Nick Bostrom explores the likely outcomes of human-level AI and problems regarding governing AI.

Creating Human-Level AI

AI pioneer Yoshua Bengio explores paths forward to human-level artificial intelligence.

AI and the Economy

Economist Erik Brynjolfsson explores how we can grow our prosperity through automation without leaving people lacking income and meaning.

Public Risk Management for AI: The Path Forward

Law scholar Matt Scherer explores means of mitigating risks to the public from AI.

These are just a sampling of the videos from the conference; many more will be uploaded in the coming days. Please follow our YouTube channel for updates as we add more.
Don't forget to follow us on SoundCloud and iTunes

2016 saw some significant AI developments. To talk about the AI progress of the last year, we turned to Richard Mallah and Ian Goodfellow. Richard is the Director of AI Projects at FLI, a senior advisor to multiple AI companies, and the creator of the highest-rated enterprise text analytics platform. Ian is a research scientist at OpenAI, the lead author of a deep learning textbook, and the inventor of Generative Adversarial Networks. Listen to the podcast here or review the transcript here.

Follow Us
Future of Life Institute
PO Box 454
Winchester, MA 01890
United States