This document describes an extended feature-based visual navigation system for UAVs that determines aircraft position and motion from semantic features extracted from camera images. The system combines an onboard camera with inertial sensors, and an image-processing block extracts features such as roads, intersections, and natural landmarks. Adding an optical flow component and matching extracted road splines is shown to improve navigation performance over feature matching alone. The algorithm is evaluated on datasets derived from simulated satellite imagery.
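To illustrate the optical flow component in isolation, the sketch below implements a single-window Lucas-Kanade translation estimate in pure NumPy. This is a hypothetical minimal example, not the system's actual implementation: the function name `lucas_kanade_flow`, the gradient scheme, and the synthetic test pattern are all illustrative choices; a real pipeline would track many local windows over pyramid levels and fuse the result with the inertial and feature-matching estimates.

```python
import numpy as np

def lucas_kanade_flow(prev, curr):
    """Estimate one translational flow vector (dx, dy) between two
    grayscale frames via the Lucas-Kanade least-squares solution.
    Assumes brightness constancy: Ix*dx + Iy*dy + It = 0."""
    # Spatial gradients (central differences) and temporal gradient.
    Iy, Ix = np.gradient(prev.astype(float))
    It = curr.astype(float) - prev.astype(float)
    # Solve the overdetermined system A v = -It in least squares,
    # pooling every pixel in the window into one estimate.
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    v, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return v  # (dx, dy) in pixels

# Synthetic check: a horizontal sinusoid shifted right by one pixel,
# so the true flow is (dx, dy) = (1, 0).
x = np.arange(32)
prev = np.tile(np.sin(2 * np.pi * x / 32), (32, 1))
curr = np.roll(prev, 1, axis=1)
dx, dy = lucas_kanade_flow(prev, curr)
```

In a navigation context, the recovered per-frame displacement would be scaled by altitude and camera intrinsics to yield a ground-velocity estimate that complements absolute position fixes from landmark matching.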