Margaret Mitchell is a Senior Research Scientist in Google's Research & Machine Intelligence group, working on advancing artificial intelligence towards positive goals. She works on vision-language and grounded language generation, focusing on how to help computers communicate based on what they can process. Her work combines computer vision, natural language processing, social media, statistical methods, and insights from cognitive science.

Before Google, she was a Researcher at Microsoft Research and a founding member of its "Cognition" group, where her work focused on advancing artificial intelligence, with a specific focus on generating language from visual inputs. Before MSR, she was a postdoctoral researcher at the Johns Hopkins University Center of Excellence, where she mainly focused on semantic role labeling and sentiment analysis using graphical models, working under Benjamin Van Durme.

Margaret received her PhD in Computer Science from the University of Aberdeen, supervised by Kees van Deemter and Ehud Reiter, with external supervision from the University of Edinburgh (Ellen Bard) and Oregon Health & Science University (OHSU) (Brian Roark and Richard Sproat). She worked on referring expression generation from statistical, mathematical, and cognitive science perspectives, and also prototyped assistive/augmentative technology for people with language generation difficulties. Her thesis work was on generating natural, human-like reference to visible, everyday objects. She spent a good chunk of 2008 earning a Master's in Computational Linguistics at the University of Washington, studying under Emily Bender and Fei Xia. Simultaneously (2005–2012), she worked on and off at the Center for Spoken Language Understanding, part of OHSU, in Portland, Oregon.
Her title there changed over time (research assistant/associate/visiting scholar), but throughout, she worked under Brian Roark on technology that leverages syntactic and phonetic characteristics to aid people with neurological disorders. She continues to balance her time between language generation, applications for clinical domains, and core AI research.

Abstract summary

The Seen and Unseen Factors Influencing Machine Perception of Images and Language: The success of machine learning has surged recently, with similar algorithmic approaches effectively solving a variety of human-defined tasks. Tasks testing how well machines can perceive images and communicate about them have begun to show promise, and at the same time have exposed strong effects of different types of bias, such as overgeneralization. In this talk, I will detail how machines are learning about the visual world, and discuss how machine learning and different kinds of bias interact.