Can AI be Unfair? A Deep Generative Model to Enable Unbiased Facial Recognition

Date: Tuesday, May 21, 2019 - 17:00
Source: SF Data Mining
Attendees: 59
City: San Francisco

Posting on behalf of the Northeastern team that is organizing this event.

Abstract:
Our devices and their cloud-based APIs use Artificial Intelligence (AI) to choose a set of candidate actions for us. As users, our routines are shaped by algorithmic recommendations to watch a movie, add someone on a social network, or read the news. If this technology is so useful, what is the problem with using it at large scale? Recently, these algorithms have been blamed for encoding bias and stereotypes, unveiling potential ethical issues, including that they may be less precise for individuals from under-represented groups. Florez will talk about the math responsible for discriminative power and why it is important that women, LatinX, African American, and LGBT groups get involved in promoting the development of AI technology.

About our speaker:
Omar Florez is a Senior Research Manager at Capital One, working on enabling natural conversations between humans and devices. He was previously a Machine Learning Researcher at Intel Research Labs, where he focused on teaching computers to understand user context by discovering patterns in images (visual question answering) and audio (prediction of acoustic events). He is a recipient of the IBM Innovation Award on Large-Scale Analytics. His work has applied deep learning for accurate predictions and Bayesian models for discovering interpretable hypotheses. He holds a Ph.D. in Computer Science from Utah State University.

Venue: WeWork, 600 California St