Multimodal interfaces use computer vision, touch surfaces, and other input methods to enable novel ways of interacting with computers beyond the traditional keyboard and mouse. Devices such as the Wiimote, the iPhone touchscreen, and the Kinect demonstrate techniques like motion sensing, multi-touch, and gesture recognition, and these interfaces will keep improving as hardware and software advance, becoming cheaper, more accurate, and more widely available. This opens opportunities to rethink user interfaces and how people interact with and navigate digital experiences.
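To make the gesture-recognition idea concrete, here is a minimal sketch of one of its simplest forms: classifying a swipe from a trail of touch points. The function name and thresholds are hypothetical, not taken from any of the devices above; real systems use far more robust methods (filtering, machine learning, sensor fusion).

```python
def classify_swipe(points, min_distance=30):
    """Classify a touch trail as a swipe direction.

    points: list of (x, y) screen coordinates in the order touched.
    Assumes screen coordinates where y grows downward.
    Returns "left", "right", "up", "down", or "tap" if the
    finger barely moved. Thresholds are illustrative only.
    """
    if len(points) < 2:
        return "tap"
    x0, y0 = points[0]
    x1, y1 = points[-1]
    dx, dy = x1 - x0, y1 - y0
    # Too little movement overall: treat as a tap, not a swipe.
    if abs(dx) < min_distance and abs(dy) < min_distance:
        return "tap"
    # Dominant axis decides the direction.
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```

For example, a trail moving mostly along the positive x-axis, such as `[(0, 0), (60, 4), (130, 8)]`, would be classified as `"right"`. Production gesture recognizers must also handle velocity, multi-finger input, and noisy sensor data, which is where the steady hardware and software improvements mentioned above matter.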