Center for Responsible AI @ NYU Newsletter - Issue #4

By Center for Responsible AI • Issue #4
In just one year since NYU R/AI was established, we have started new projects, conducted important research, developed new partnerships, and launched a course and comic book series for responsible AI education.
Much work remains towards making responsible AI synonymous with AI. We are fortunate to be part of a great community inspired by the same vision, and are thankful to our friends and supporters! Read on for news and highlights of our work in 2021. 
Happy holidays, and until next year, 
NYU Center for Responsible AI. 

We’re HIRING! 
Apply to be a Research Scientist or a Research Engineer with NYU R/AI. Visit our website to learn more about current projects and upcoming events.
New Course: WE ARE AI
We Are AI is a 5-week course that introduces the social and ethical dimensions of AI and empowers individuals to engage with how AI is governed. The course will run at the Hunters Point branch of the Queens Public Library from 6 to 7:30pm, starting January 27, 2022.
2021 was a busy year for us! Here are some highlights from events, press, and other things we worked on.
Best of 2021–Events
Demystifying AI
On December 9, Julia Stoyanovich spoke at MIT on the fundamental principles of responsible AI design, development, and deployment, using algorithmic hiring and lending as examples. Read more about the event.
AI Here and There Series
This four-part event series introduces AI to a broad audience, with each event focused on a different subject. Since AI is a global topic, we invited panelists from both Switzerland and the US. Read more about the series.
Co-Opting AI
Mona Sloane ran a series of public, virtual dialogues this year for Co-Opting AI. These talks put the most forward-thinking scholars across technology, design, and inequality into conversation with the public, covering a wide spectrum of concerns. Topics this year included Advertising, Security, Intimacy, and Ideology. For a full list, see here.
Best of 2021–Press
Hiring and AI: Let Job Candidates Know Why They Were Rejected | The Wall Street Journal
Julia Stoyanovich argued in the Wall Street Journal for more transparency in the hiring process: job candidates should see simple, standardized labels that show the factors that go into an AI's decision. Read the article here.
We Need Laws to Take On Racism and Sexism in Hiring Technology | The New York Times
Julia Stoyanovich wrote a New York Times op-ed with Alexandra Reeve Givens and Hilke Schellman on why artificial intelligence used to evaluate job candidates must not become a tool that exacerbates discrimination. Read the op-ed here.
Surveillance Society: Artificial Lighting for a Policed Public | Architectural Review
Mona Sloane wrote about how lighting marks space as dangerous and in need of policing. She connects this with the “permanent state of illumination” facilitated by AI, and how it has become further institutionalized and normalized. Read the article here.
Now is the Time for a Transatlantic Dialog on the Risk of AI | VentureBeat
Mona Sloane and Andrea Renda argue in VentureBeat that rather than reconcile differing approaches to AI regulation after the fact, governments should co-develop a regulatory approach and create the preconditions for mutual learning. Read the article here.
Best of 2021–Projects
The “Data, Responsibly” and “We are AI” comic series are available online. Our comics address the social impacts of data-intensive technology, including AI, and are aimed at current and future data scientists as well as the general public. Read the series.
We Are AI
Our 5-week public education course introduces the basics of AI, discusses some of the social and ethical dimensions of the use of AI in modern life, and empowers individuals to engage with how AI is used and governed. Learn more about We are AI.
AI Procurement Primer
Existing public procurement processes and standards are in urgent need of innovation to address the potential risks and harms of AI systems. This primer draws on research and on input from leading experts in the public sector, data science, civil society, policy, social science, and the law to chart pathways forward. Read the Primer.
A BETTER TECH
With support from NYU R/AI, Mona Sloane led a public interest technology convention and career fair that engaged over 3,000 participants and featured 16 keynote speakers, including Dr. Alondra Nelson, Deputy Director for Science and Society at the White House. Learn more about A BETTER TECH here.
Best of 2021–Publications
Disaggregated Interventions to Reduce Inequality
Lucius Bynum, Joshua R. Loftus & Julia Stoyanovich. ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (2021).
The authors propose an “impact remediation framework” that measures real-world disparities and discovers optimal intervention policies that could help improve equity or access to opportunity for those who are underserved with respect to an outcome of interest. The framework draws on insights from the social sciences brought into the realm of causal modeling and constrained optimization. Read the article.
A Silicon Valley Love Triangle: Hiring Algorithms, Pseudo-Science, and the Quest for Auditability
Mona Sloane, Emanuel Moss & Rumman Chowdhury. Computers and Society (2021).
Mona Sloane and her co-authors suggest a matrix to expose underlying assumptions rooted in pseudoscientific understandings of human nature and capability, and to critically investigate emerging auditing standards and practices that fail to address these assumptions. Read the article.
COVID-19 Brings Data Equity Challenges to the Fore
H.V. Jagadish, Julia Stoyanovich & Bill Howe. ACM Digital Government Research and Practice (2021).
The COVID-19 pandemic is compelling us to make crucial data-driven decisions quickly, bringing together diverse and unreliable sources of information without the usual quality control mechanisms we may employ. These decisions are consequential, and they may give rise to, reinforce, and propagate significant inequities. In this article, the authors propose a framework, called FIDES, for surfacing and reasoning about data equity in such systems. Read the article.
Teaching Responsible Data Science: Charting New Pedagogical Territory
Armanda Lewis & Julia Stoyanovich. International Journal of Artificial Intelligence in Education (2021).
The authors recount the experience of developing and teaching a technical course focused on responsible data science, which tackles the issues of ethics in AI, legal compliance, and data protection among other areas. They also propose pedagogical methods for responsible data science education. Read the article.
Center for Responsible AI

NYU Tandon Future Labs, 370 Jay Street, Brooklyn NY 11201