This document presents a method for human action recognition using Lagrangian descriptors. The optical flow field of a video is treated as an unsteady vector field, to which Lagrangian analysis is applied. Two Lagrangian descriptors, the finite-time Lyapunov exponent (FTLE) and the time-normalized arc length, are extracted from the video frames to capture motion boundaries and regions of coherent flow. A histogram of oriented gradients (HoG) is computed over each descriptor field, and the resulting features are fused before training an SVM classifier. The method is evaluated on the Weizmann and KTH datasets, demonstrating its ability to recognize basic actions while preserving privacy, since it operates on optical flow fields rather than raw images. Future work will focus on local representations and on recognizing more complex actions.
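As a rough illustration of the descriptor-extraction stage, the two Lagrangian descriptors can be computed from a stack of per-frame optical flow fields. The sketch below is a minimal NumPy approximation, not the authors' implementation: it assumes a nearest-neighbor particle-advection scheme for the flow map, an Eulerian approximation of trajectory arc length, and the standard FTLE definition via the largest eigenvalue of the Cauchy-Green deformation tensor. All function names are hypothetical.

```python
import numpy as np

def integrate_flow_map(flows, dt=1.0):
    """Advect a grid of particles through a sequence of 2D flow fields.

    flows: array of shape (T, H, W, 2) holding per-frame displacements (u, v).
    Returns the final particle positions (xs, ys), i.e. the flow map.
    Nearest-neighbor sampling of the flow is an assumption made for brevity;
    a real implementation would interpolate the flow at sub-pixel positions.
    """
    T, H, W, _ = flows.shape
    ys, xs = np.meshgrid(np.arange(H, dtype=float),
                         np.arange(W, dtype=float), indexing="ij")
    for t in range(T):
        xi = np.clip(np.round(xs).astype(int), 0, W - 1)
        yi = np.clip(np.round(ys).astype(int), 0, H - 1)
        xs = xs + dt * flows[t, yi, xi, 0]
        ys = ys + dt * flows[t, yi, xi, 1]
    return xs, ys

def ftle(flows, dt=1.0):
    """FTLE field: (1/T) * ln sqrt(lambda_max(C)), C = J^T J,
    where J is the spatial gradient of the flow map."""
    T = flows.shape[0] * dt
    xs, ys = integrate_flow_map(flows, dt)
    dxdy, dxdx = np.gradient(xs)   # np.gradient returns (d/drow, d/dcol)
    dydy, dydx = np.gradient(ys)
    # Closed-form largest eigenvalue of the 2x2 symmetric Cauchy-Green tensor
    a = dxdx**2 + dydx**2
    b = dxdx * dxdy + dydx * dydy
    d = dxdy**2 + dydy**2
    lam_max = 0.5 * (a + d) + np.sqrt(np.maximum(0.25 * (a - d)**2 + b**2, 0.0))
    return np.log(np.sqrt(np.maximum(lam_max, 1e-12))) / T

def arc_length(flows, dt=1.0):
    """Time-normalized arc length per pixel: accumulated speed along the
    trajectory, divided by the integration time (Eulerian approximation)."""
    speeds = np.linalg.norm(flows, axis=-1)   # (T, H, W) per-frame speed
    return speeds.sum(axis=0) * dt / (flows.shape[0] * dt)
```

For a uniform translation flow the flow-map Jacobian is the identity, so the FTLE field is zero everywhere, while shearing or diverging flows (motion boundaries between a moving actor and the background) produce ridges of high FTLE; HoG features would then be computed over these descriptor fields.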