Stanford EE Computer Systems Colloquium

4:30 PM, Wednesday, March 6, 2019
Shriram Center for Bioengineering and Chemical Engineering Room 104
http://ee380.stanford.edu

Extending the theory of ML for human-centric applications

Jamie Morgenstern
Georgia Tech

About the talk:

Many recent application domains for machine learning deviate from standard modeling assumptions: they include data generated by people who may want to manipulate a system's output, or they require accomplishing a task for which multiple objectives are simultaneously important. For example, an employer might want to promote their job opportunities to people with certain skills, while simultaneously ensuring that a broad range of demographics sees and applies to the job posting. Moreover, if the employer uses a fixed filter to sift out fraudulent applications, the filter will become less useful over time as both fraudulent and honest applicants shift their application contents to pass it. In this talk, I will survey some recent results that take steps toward making ML methods more robust to the natural environments they often face in the real world.

Video:

To access the live webcast of the talk (active at 16:28 on the day of the presentation) and the archived version of the talk, use the URL SU-EE380-20190306. This is a first-class reference and can be transmitted by email, Twitter, etc.

A URL referencing a YouTube view of the lecture will be posted here a week or so following the presentation.

About the Speaker:

Jamie is an assistant professor in the School of Computer Science at Georgia Tech. Prior to this appointment, she was a Warren Center fellow at the University of Pennsylvania, hosted by Michael Kearns, Aaron Roth, and Rakesh Vohra. She completed her PhD at Carnegie Mellon University, working with Avrim Blum. Her work focuses on the social impact of machine learning and the impact of social behavior on ML's guarantees. How should machine learning be made robust to the behavior of the people generating its training or test data? How should we ensure that the models we design do not exacerbate inequalities already present in society?