Artificial Intelligence (AI) is going to fundamentally change the way we work and live as algorithms begin to make more consequential decisions for us. Like many new technologies, AI is a source of tremendous opportunities to support the public good, but it also brings risks and new challenges. How do we build ethical, moral, and human values into the future of AI? What governance mechanisms must be in place to minimize AI’s potential harms and maximize its benefits? This discussion will explore how companies can incorporate ethics, inclusion, and transparency to guard against perpetuating bias or circumventing ethical safeguards in areas such as financial transactions, law enforcement, monopolistic behavior, and more.
Manager, Corporate Responsibility and Human Rights
Researcher in Data Ethics, University of Oxford
Turing Research Fellow, The Alan Turing Institute
Cyberlaw Clinic, Harvard Law School
Associate Director, Information and Communications Technology