Lessons Learned
Taking an ethics-first approach to teaching AI
When it comes to artificial intelligence, “bad technology is not just good technology being used for bad purposes,” says computer scientist Augustin Chaintreau. “The harm comes baked into the assumptions.”
Disinformation, biased algorithms, compromised privacy: The potential for great harm dominates much of the wider conversation around artificial intelligence. Chaintreau, associate professor at Columbia Engineering, leads an effort (funded by the Responsible Computer Science Challenge) to create an innovative undergraduate ethics curriculum for the School’s Computer Science Department. His team views “ethics” as a verb, pushing students to actively work for social good, not simply try to avoid bad outcomes.
That idea strongly informs their approach. When it comes to teaching engineering ethics, there are different schools of thought; Chaintreau believes full integration is the most effective, and he and his team are making space to explore the consequences of each technology within every lesson on how that technology is built.
It’s a method he has long favored. Take questions of privacy: Chaintreau, who for more than a decade has led courses on how social networks vacuum up sensitive user data, asks his students to think creatively and proactively on that front. “It’s not just learning to store everything as a secret,” he says. “Users and developers need their own sense of agency. Designing mechanisms to increase transparency does that.”
The example illustrates how addressing ethics is as complex as it is urgent; adding to the complexity is the sector’s deeply unbalanced demographics. Even among STEM fields, computer science faces particular challenges in attracting and retaining diverse talent. In 2019, for example, just over 2% of all doctorates in AI awarded to U.S. residents went to Black engineers, and approximately 3% to Latinx engineers.
Research shows that prioritizing discussions around AI’s social impacts can help foster a more welcoming environment, increasing a sense of belonging for women and underrepresented minorities within the discipline. At Columbia, that’s far from an academic concept. “Even as our students are fielding job offers, they know they can always come back and ask more questions,” Chaintreau says. “They don’t ever have to feel alone.”