This paper proposes combining data from the Leap Motion and the Microsoft Kinect to recognize hand gestures more accurately. New features, including extended-finger detection, fingertip positions and angles, and measurements of the hand shape, are extracted from each device and concatenated into a single feature vector. These features are used to train a random forest classifier on a dataset of 10 American Sign Language gestures. The results show higher recognition accuracy than that obtained with either device alone.
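The pipeline described above can be sketched as feature-level fusion followed by classification. The snippet below is a minimal illustration, not the paper's implementation: the feature dimensions, random synthetic data, and scikit-learn's `RandomForestClassifier` are assumptions standing in for the real Leap Motion and Kinect descriptors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-sample descriptors; sizes are placeholders, not the paper's.
# Leap Motion side: fingertip positions/angles and extended-finger counts.
# Kinect side: hand-shape measurements from the depth map.
n_samples, n_classes = 200, 10
leap_feats = rng.normal(size=(n_samples, 15))
kinect_feats = rng.normal(size=(n_samples, 20))
labels = rng.integers(0, n_classes, size=n_samples)

# Feature-level fusion: concatenate both devices' descriptors per sample.
fused = np.concatenate([leap_feats, kinect_feats], axis=1)  # shape (200, 35)

# Train a random forest on the fused feature vectors.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(fused, labels)
predictions = clf.predict(fused[:5])
print(fused.shape)        # (200, 35)
print(predictions.shape)  # (5,)
```

In practice the two sensors would be calibrated and synchronized so each fused row describes the same hand pose, and accuracy would be evaluated on held-out gestures rather than the training data shown here.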