This poster, presented with Lisa Nguyen at the Information Architecture Summit in Vancouver in March 2017, proposes an interaction model for mixed reality encompassing gesture, gaze, sound, spatial mapping, and emotion. When personal computing software and the Web were young, user interfaces and navigation controls borrowed heavily from physical-world metaphors, from the trash can icon to Web "pages". Since then, we have come to rely on natively digital UI frameworks and pattern libraries. Now we face new design challenges in mixed reality, which allows us to collaborate over digital holograms in physical space. Early mixed reality applications rely on familiar Web controls, porting the digital affordances we already know - buttons, menus, and two-dimensional displays of data - into our physical environment. These familiar controls ease the transition from 2D interactions to 3D. But will this approach succeed, or will it prove too limiting? Recent research in embodied cognition hypothesizes that knowledge is created through the ways our bodies interact with objects in the world. To fully embrace the knowledge-creation and sharing opportunities that mixed reality offers, we designers need to expand our vocabulary of interactions for engaging with digital objects and data in the real world. This poster presents a conceptual framework that will continue to evolve as augmented and mixed reality experiences become part of our daily lives.
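
To make the five interaction channels concrete, here is a minimal sketch of how they might be modeled in code. This is purely illustrative: the poster defines a conceptual framework, not an API, and every type and field name below (Vec3, MixedRealityInput, Hologram, the pose and affect values) is an assumption of this sketch rather than the poster's notation.

```typescript
// Illustrative sketch: the five channels from the poster's interaction model,
// expressed as a discriminated union of input events. All names are
// hypothetical; the poster itself specifies no data structures.

type Vec3 = { x: number; y: number; z: number };

type MixedRealityInput =
  | { channel: "gesture"; pose: string; hand: "left" | "right" }
  | { channel: "gaze"; origin: Vec3; direction: Vec3 }
  | { channel: "sound"; utterance: string; confidence: number }
  | { channel: "spatialMapping"; surfaceMesh: Vec3[] }
  | { channel: "emotion"; affect: "engaged" | "confused" | "frustrated" };

// A hologram responds to whichever channels it subscribes to, rather than
// exposing only ported 2D affordances such as buttons and menus.
interface Hologram {
  handle(input: MixedRealityInput): void;
}

// Example: a data hologram that orients toward the viewer's gaze and
// expands its detail view on a (hypothetical) "pinch" gesture.
const dataHologram: Hologram = {
  handle(input) {
    switch (input.channel) {
      case "gaze":
        console.log("orienting toward gaze ray", input.direction);
        break;
      case "gesture":
        if (input.pose === "pinch") console.log("expanding detail view");
        break;
      default:
        console.log("channel not handled:", input.channel);
    }
  },
};

dataHologram.handle({ channel: "gesture", pose: "pinch", hand: "right" });
```

The point of the sketch is the shape of the model: embodied channels are first-class inputs alongside traditional controls, which is what distinguishes this framework from simply porting 2D widgets into 3D space.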