Deep Value has been using Hadoop to simulate trading strategies that trade over 3.5% of the US stock market. We provide both high-frequency market making and execution strategies. Our largest customer is the NYSE, where we provide execution services to the floor broker community. We have taken our high-performance, fault-tolerant Java trading engine and adapted it to run as a MapReduce job. Our execution-engine Mapper pulls out the order-by-order data of all orders going into the US stock market and replays them against our production algorithmic logic. We do this to understand whether changes made to the algorithmic logic improve the overall performance of our trading.

However this approach, although solving one set of issues ("is this approach better than that one"), creates a new set of challenges. These include not blowing our compute budget (EC2 costs add up, so we built our own 50-server base cluster) and dealing with the escalating volume of data that these simulations generate. Luckily these are first-world problems that Hadoop itself can help us address. We will describe how we went about converting our execution engine to use Hadoop and what components are needed to build a suitable trading simulation environment. We will also examine the types of analysis that we have built on top of the trading data that have helped us understand what we are doing.
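The replay idea described above can be sketched in plain Java. This is a hedged illustration, not Deep Value's actual code: in the real system the replay logic would live in a subclass of Hadoop's `org.apache.hadoop.mapreduce.Mapper`, keyed off order records from HDFS. Here the Hadoop machinery is stubbed out so the map step can run standalone; all names (`OrderReplayMapper`, `simulateFill`, the toy fill rule) are invented for the example.

```java
import java.util.*;

// Illustrative stand-in for the mapper described in the abstract.
// A real Hadoop job would extend org.apache.hadoop.mapreduce.Mapper
// and emit (key, value) pairs to the framework; here we simulate the
// map phase over a few synthetic order records.
public class OrderReplayMapper {

    // One order-by-order market record: symbol, side, size, limit price.
    record Order(String symbol, boolean buy, int qty, double limitPx) {}

    // Minimal stand-in for the production execution logic: decide how
    // much of each historical order the strategy would have filled.
    static int simulateFill(Order o) {
        // Toy rule (purely illustrative): fill orders of 1000 shares
        // or fewer in full, larger orders by half.
        return o.qty() <= 1000 ? o.qty() : o.qty() / 2;
    }

    // The "map" step: replay each order against the strategy logic and
    // emit (symbol, filledQty) pairs, which a reducer would aggregate
    // into per-symbol performance statistics.
    static Map<String, Integer> map(List<Order> orders) {
        Map<String, Integer> filledBySymbol = new HashMap<>();
        for (Order o : orders) {
            filledBySymbol.merge(o.symbol(), simulateFill(o), Integer::sum);
        }
        return filledBySymbol;
    }

    public static void main(String[] args) {
        List<Order> replay = List.of(
            new Order("IBM", true, 500, 120.00),
            new Order("IBM", false, 2000, 120.10),
            new Order("MSFT", true, 300, 28.50));
        System.out.println(map(replay));
    }
}
```

Comparing two versions of the algorithmic logic then reduces to running the same replay with two different `simulateFill` implementations and diffing the aggregated output.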