Two years ago, after an exciting conference in Puerto Rico that included many of the top minds in AI, we produced two open letters -- one on beneficial AI and one on autonomous weapons -- which were signed and supported by tens of thousands of people. But that was just one step along the path to creating artificial intelligence that will benefit us all.
This month, we brought together even more AI researchers, entrepreneurs, and thought leaders for our second Beneficial AI Conference, held in Asilomar, California (see videos below). Speakers and panelists discussed the future of AI, economic impacts, legal issues, ethics, and more. And during breakout sessions, groups gathered to discuss what basic principles we could all agree on that could help shape a future of beneficial AI.
As we expressed in a recent post about the process involved in creating the Asilomar Principles:
"We, the organizers, found it extraordinarily inspiring to be a part of the BAI 2017 conference, the Future of Life Institute's second conference on the future of artificial intelligence. Along with being a gathering of endlessly accomplished and interesting people, it gave a palpable sense of shared mission: a major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and opportunity to shape it for the best.
"We hope that these principles will provide material for vigorous discussion and also aspirational goals for how the power of AI can be used to improve everyone's lives in coming years."
If you haven't had a chance to review the Principles yet, we encourage you to do so now, and consider joining the thousands of other researchers and concerned citizens who have already signed.
Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss the likely outcomes if we succeed in building human-level AI. Moderated by Max Tegmark.
Law scholar Matt Scherer explores means of mitigating risks to the public from AI.
These are just a sampling of the videos from the conference; many more will be uploaded in the coming days. Please visit and follow our YouTube channel for updates as new videos are added.
2016 saw some significant AI developments. To discuss the AI progress of the past year, we turned to Richard Mallah and Ian Goodfellow. Richard is the director of AI projects at FLI, a senior advisor to multiple AI companies, and the creator of the highest-rated enterprise text analytics platform. Ian is a research scientist at OpenAI, the lead author of a deep learning textbook, and the inventor of Generative Adversarial Networks. Listen to the podcast here or review the transcript here.
Future of Life Institute
PO Box 454
Winchester, MA 01890