This paper proposes the use of RF sensors for American Sign Language (ASL) recognition as a basis for technologies that better serve the Deaf community. It describes partnerships with Deaf organizations formed to understand community needs: a focus group with Deaf participants found that existing technologies are disliked because cameras raise privacy concerns, gloves restrict movement, and developers often lack cultural understanding of the Deaf community. The paper then presents a methodology in which RF sensors non-invasively capture the motion of ASL signs, and machine learning is used to investigate which linguistic properties are observable in the RF data. Preliminary results show that RF sensing can differentiate native ASL signing from imitation signing with 99% accuracy and classify 20 ASL signs with 72.5% accuracy.
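The abstract does not specify the feature extraction or classifier used. As one concrete illustration only, the minimal sketch below assumes a hypothetical pipeline of STFT-based micro-Doppler spectrogram features (the spectrogram_features helper is invented here) fed to an RBF-kernel SVM from scikit-learn; it is a sketch under those assumptions, not the paper's implementation, and the placeholder data stands in for real RF recordings of signs.

# Minimal sketch: classifying RF returns of ASL signs via micro-Doppler
# spectrogram features and an SVM. Feature pipeline and model are
# illustrative assumptions, not the paper's method.
import numpy as np
from scipy.signal import stft
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def spectrogram_features(iq_samples: np.ndarray, fs: float = 1000.0) -> np.ndarray:
    """Turn a complex I/Q radar return into a flattened log-magnitude spectrogram."""
    _, _, Z = stft(iq_samples, fs=fs, nperseg=128, noverlap=96, return_onesided=False)
    return np.log1p(np.abs(Z)).ravel()


# Placeholder data: in practice X would hold RF returns recorded per sign,
# and y the corresponding labels (e.g., the 20 ASL signs in the study).
rng = np.random.default_rng(0)
X = np.stack([
    spectrogram_features(rng.standard_normal(4096) + 1j * rng.standard_normal(4096))
    for _ in range(200)
])
y = np.repeat(np.arange(20), 10)  # hypothetical labels: 20 signs, 10 samples each

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")

With real RF recordings in place of the random placeholder arrays, the same pipeline would report held-out classification accuracy in the spirit of the 20-sign result quoted above; the specific feature and model choices here are design assumptions, not claims about the authors' system.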