The document surveys the visual interpretation of hand gestures for human-computer interaction and proposes a pointing-gesture interface, based on a depth camera, for interacting with large displays. The system tracks the user's hand with an RGB-D camera and uses the hand's position and orientation to translate and rotate virtual objects on the display. The document reviews approaches for modeling, recognizing, and analyzing hand gestures, along with applications of gesture-based interaction systems. The methodology presented applies color segmentation to the camera image and tracks the centroid of the segmented hand region; the resulting coordinates drive a virtual object much like a computer mouse.