https://hai.stanford.edu/news/ais-fairness-problem-when-treating-everyone-same-wrong-approach

Current generative AI models struggle to recognize when demographic distinctions matter.

What is the current state of large language models (LLMs)?

  • Anthropic’s Claude responds that military fitness requirements are the same for men and women. (They are not.) 
  • Gemini recommends Benedict Cumberbatch as a good casting choice for the last emperor of China. (The last emperor of China was, well, Chinese.) 
  • And Gemini similarly advises that a synagogue hiring a rabbi must treat Catholic applicants the same as Jewish applicants. (That is legally false.)  

These examples all show how the dominant paradigm of fairness in generative AI rests on a misguided premise: that being fair means being blind to demographic circumstance.

 
