This document proposes using facial gesture information from face keypoints, alongside skeleton keypoints, to improve sign language recognition. It first surveys prior work on sign language recognition and the available datasets. It then proposes adding facial keypoints, generated by a whole-body pose estimator, to an existing skeleton-based model (SL-GCN + SSTCN). With the added facial features, top-1 accuracy on the AUTSL dataset improves over the skeleton-only baseline to 93.26%. However, AUTSL contains few signs that depend on facial expressions, which limits how thoroughly the contribution of facial features can be analyzed. Overall, the document argues that incorporating facial information could be especially valuable for sign languages that rely heavily on facial expressions.
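To make the proposed input change concrete, the following is a minimal sketch of how facial keypoints from a whole-body pose estimator could be appended to the body and hand keypoints before feeding a skeleton-based model. The index ranges assume the COCO-WholeBody 133-point layout, and the function name and the choice to keep all 68 face landmarks are illustrative assumptions, not the document's exact pipeline.

```python
import numpy as np

# Index ranges into a 133-point COCO-WholeBody pose output
# (17 body, 6 feet, 68 face, 21 per hand); which subset of face
# points to keep is an assumption for illustration.
BODY_IDX = list(range(0, 17))      # body joints
FACE_IDX = list(range(23, 91))     # facial landmarks
LHAND_IDX = list(range(91, 112))   # left-hand joints
RHAND_IDX = list(range(112, 133))  # right-hand joints

def build_skeleton_input(wholebody_kps: np.ndarray, use_face: bool = True) -> np.ndarray:
    """Select graph-node keypoints from per-frame whole-body pose estimates.

    wholebody_kps: array of shape (T, 133, 3) holding (x, y, confidence)
    for each of T frames. Returns (T, K, 3), where K depends on use_face.
    """
    idx = BODY_IDX + LHAND_IDX + RHAND_IDX
    if use_face:
        idx = idx + FACE_IDX  # append facial keypoints as extra graph nodes
    return wholebody_kps[:, idx, :]

# Example: a 64-frame clip; adding the face grows the node count from
# 59 to 127, so the GCN's adjacency matrix must also be extended with
# face-to-face edges and a link from the face to the body graph.
clip = np.random.rand(64, 133, 3).astype(np.float32)
print(build_skeleton_input(clip, use_face=True).shape)   # (64, 127, 3)
print(build_skeleton_input(clip, use_face=False).shape)  # (64, 59, 3)
```

As the final comment notes, the keypoint selection is only half of the change: the graph convolution's adjacency structure has to grow with the node set, which is where most of the modeling effort in such an extension would lie.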