Diving into the deep end of clothing styles (PyData NYC 2017)
Asking whether two items of clothing have a “similar style” is a subjective and difficult question to answer. One can attempt to use implicit signals such as purchases, clicks, and views to learn product similarities, as in the ubiquitous “People who bought this also bought that” section of any ecommerce product page. Such techniques prove difficult in the world of fashion, where clothing can have a short shelf life. One must then resort to using knowledge of the clothing itself in order to learn these similarities. Sometimes this knowledge takes the form of images, and sometimes techniques get deep.
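To make the implicit-signal approach concrete, here is a minimal sketch (not Dia&Co's actual system) of the idea behind “People who bought this also bought that”: given a hypothetical user-item purchase matrix, item-item cosine similarity over co-purchase counts ranks the most related products.

```python
import numpy as np

# Hypothetical purchase matrix: rows = users, columns = items;
# a 1 means that user purchased that item.
purchases = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
])

# Item-item co-purchase counts.
counts = purchases.T @ purchases

# Normalize to cosine similarity between item columns.
norms = np.sqrt(np.diag(counts))
similarity = counts / np.outer(norms, norms)

# Exclude self-similarity, then read off the nearest neighbor of item 0.
np.fill_diagonal(similarity, 0.0)
print(int(similarity[0].argmax()))  # → 1
```

The catch the abstract points to: with a short shelf life, new items have empty columns in this matrix (the cold-start problem), so there is nothing to compute a similarity from.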
I will discuss this journey from implicit signals to an image-based approach that we embarked on at Dia&Co, a fashion retailer. I will start with the multiple non-deep models that failed to produce satisfactory results. These build the foundation and rationale for resorting to admittedly more complicated, deep learning-based methods, a tradeoff between performance and difficulty. While deep learning may be extremely popular, the need for more complex infrastructure (e.g. GPUs), rapidly changing frameworks, and an arguably reduced level of interpretability are significant costs in a production system. The final, successful technique was written from scratch in PyTorch, and results from training this model on our database of product images will be presented. Lastly, details of training and serving model results via an API will be discussed.
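As a rough sketch of the serving side (the talk's own API details are not shown here): once an image model has been trained, each product image is mapped to an embedding vector, and “similar style” becomes nearest-neighbor search in that embedding space. The embeddings below are random stand-ins for what a trained model would produce; the function name and dimensions are hypothetical.

```python
import numpy as np

# Stand-in for embeddings a trained image model would emit:
# one 4-dimensional vector per product (real embeddings are larger).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 4))

def most_similar(query_id, k=3):
    """Return the k product ids whose embeddings are closest
    (by cosine similarity) to the query product's embedding."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = normed @ normed[query_id]
    scores[query_id] = -np.inf  # exclude the query item itself
    return [int(i) for i in np.argsort(scores)[::-1][:k]]

print(most_similar(0))
```

Because similarity comes from the image alone, a brand-new product can be embedded and served the moment its photo exists, sidestepping the cold-start problem that defeats purchase-based signals.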