This document presents a methodology for developing a prototype variable message sign (VMS) reading system using machine learning techniques. The system comprises two parts: (1) a VMS recognition model, based on RetinaNet and trained on labeled VMS data, that locates the sign in road images; and (2) an image processing pipeline that normalizes the detected VMS region, extracts its text with Tesseract OCR, and converts the text to speech using IBM Watson. The goal is to help drivers attend safely to variable warning messages by automating VMS detection and transcription, thereby reducing driver distraction.
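The following is a minimal sketch of how such a detect-crop-OCR-speak pipeline could be wired together. It assumes a torchvision RetinaNet fine-tuned on VMS data (the original work may use a different RetinaNet implementation), and the checkpoint path, detection threshold, IBM Cloud credentials, service URL, and voice name are all placeholders, not values from the source.

```python
import torch
from torchvision.models.detection import retinanet_resnet50_fpn
from torchvision.transforms import functional as TF
from PIL import Image, ImageOps
import pytesseract
from ibm_watson import TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# --- 1) VMS detection with a RetinaNet model (hypothetical fine-tuned checkpoint) ---
model = retinanet_resnet50_fpn(num_classes=2)  # background + VMS
model.load_state_dict(torch.load("vms_retinanet.pth", map_location="cpu"))  # placeholder path
model.eval()

def detect_vms(image: Image.Image, score_threshold: float = 0.5):
    """Return the highest-scoring VMS bounding box, or None if no confident detection."""
    with torch.no_grad():
        preds = model([TF.to_tensor(image)])[0]
    keep = preds["scores"] >= score_threshold
    if not keep.any():
        return None
    best = preds["scores"][keep].argmax()
    return preds["boxes"][keep][best].tolist()  # [x1, y1, x2, y2]

# --- 2) Normalize the cropped sign and extract its text with Tesseract OCR ---
def read_vms_text(image: Image.Image, box) -> str:
    crop = image.crop(tuple(int(v) for v in box))
    # Simple normalization: grayscale + contrast stretch to help OCR on LED panels
    crop = ImageOps.autocontrast(crop.convert("L"))
    return pytesseract.image_to_string(crop, config="--psm 6").strip()

# --- 3) Convert the recognized message to speech with IBM Watson Text to Speech ---
def speak(text: str, out_path: str = "vms_message.wav"):
    authenticator = IAMAuthenticator("YOUR_API_KEY")        # placeholder credential
    tts = TextToSpeechV1(authenticator=authenticator)
    tts.set_service_url("https://api.us-south.text-to-speech.watson.cloud.ibm.com")
    audio = tts.synthesize(text, voice="en-US_AllisonV3Voice",
                           accept="audio/wav").get_result()
    with open(out_path, "wb") as f:
        f.write(audio.content)

if __name__ == "__main__":
    frame = Image.open("road_scene.jpg").convert("RGB")  # example input frame
    box = detect_vms(frame)
    if box is not None:
        message = read_vms_text(frame, box)
        if message:
            speak(message)
```

In this sketch each stage is a separate function so the detector, OCR normalization, and speech backend can be swapped or tuned independently; in a deployed system the loop would run over video frames rather than a single image.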