Stanford's Rob Reich to Serve as Senior Advisor to the U.S. AI Safety Institute (AISI)
Earlier this month, U.S. Secretary of Commerce Gina Raimondo announced that Rob Reich, McGregor-Girand Professor of Social Ethics of Science and Technology and Senior Fellow at the Stanford Institute for HAI, will serve as Senior Advisor to the U.S. AI Safety Institute (AISI), which is housed at the National Institute of Standards and Technology (NIST).
Reich's leave-in-service is made possible in part by Stanford's Scholars in Service program, which is jointly run by Stanford Impact Labs (SIL) and the Haas Center for Public Service.
In conversation with SIL's Kate Green Tripp, Reich discussed the charge of the Institute, his role on the executive leadership team, and how he, as a philosopher, frames and approaches some of the pressing questions that surround AI safety.
Kate Green Tripp: As someone immersed in the overlap of technology, ethics, and policy, can you frame the moment the U.S. is having when it comes to AI safety?
Rob Reich: At the federal level, we're seeing the U.S. AI Safety Institute (AISI) take shape. This institute is, in the language of Silicon Valley, a start-up within the long-established Department of Commerce. It was created in the wake of the October 2023 White House Executive Order on AI.
This marks one of the first attempts by the U.S. federal government to reckon with the AI moment we're in. It follows efforts in the European Union, which passed its AI Act earlier in 2024, and efforts in the U.K. to create its own AI safety institute.
This is an early and important attempt by the U.S. government to come to terms with how to ensure that Americans and the world get the great benefits of artificial intelligence while diminishing some of the existing harms as well as emerging risks of especially powerful AI models.
I'd frame this moment in two key ways. Number one: many people believe that governments around the world missed the opportunity in the 2010s to contain the problems of social media. Only recently do we see attention paid to the rampant privacy concerns, misinformation, disinformation, child pornography, and so on. Governments do not want to repeat that mistake with AI.
Continued at https://impact.stanford.edu/article/stanfords-rob-reich-serve-senior-advisor-us-ai-safety-institute-aisi