https://hai.stanford.edu/news/how-harmful-are-ais-biases-diverse-student-populations

 

How Harmful Are AI’s Biases to Diverse Student Populations?

Postdoctoral fellow Faye-Marie Vassel delves into two interdisciplinary papers that examine how generative AI affects intersectional identities. Here she explores harms—ranging from erasure to subordination—and advocates for a socio-technical framework to address the issue.

 

About a year ago, Khan Academy, the online education platform, launched Khanmigo, a one-on-one, always-available AI tutor designed to support learners without giving away the answers. The pilot has already reached over 65,000 students, with plans to expand to half a million and up to one million by fall.

Khan Academy is not the only education company experimenting with AI. The potential reach of the technology, and its promise of individualized learning assistance, is enormous.

But as with any AI tool that reaches an audience at this scale, the potential for harm is real.

Faye-Marie Vassel – Stanford HAI STEM Education, Equity, and Inclusion Postdoctoral Fellow – has made it a primary focus of her research to determine how we can achieve what she calls equitable “techno futures” for diverse audiences of learners.

“How do students from diverse backgrounds interact with tech? And more pointedly, how might they be negatively impacted by tech, given that we overwhelmingly see masculinized and anglicized names represented in the output of LLMs?” says Vassel. 
