Subtitling in most translation environments has long been a context-free affair, limited to handling a subtitle file as a text-only exercise. There has been the odd exception to this: Star Transit has offered the ability to see a video and play it synchronously with the text in its translation editor for some years, slowly extending its support to include SRT, VTT, WebVTT and TXT formats for the subtitle file (as far as I know). memoQ recently launched a video preview as well, I think with SRT support (I'm not sure here as it's not easily obtained or installed). SDL only offered support for SRT in terms of extracting the translatable text and giving you a static preview that showed you the timecodes and the text.

Other tool vendors, and for other file formats, often rely on the text in a subtitle file being copied into Microsoft Word, where the timecodes and other meta information can be hidden, allowing the translator to focus on the translatable text… tools like Tortoise Tagger, for example, can be helpful in preparing the files for translation. But none of them provide contextual previews of the video with embedded subtitles supporting positional and formatting information, and none of them provide any useful quality controls for subtitling beyond line length, which is based on the standard QA checks in most translation tools.

This week SDL released some new plugins onto their AppStore that will no doubt kick off some innovation in this area as the need for better audiovisual localization tools increases.
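To make the problem concrete, here is a minimal sketch of why a subtitle file is awkward to treat as plain text: each SRT cue interleaves an index, timecodes and the translatable lines, so a workflow has to separate them (the way the Word trick hides timecodes) before any QA can run. The parsing code and the `check_line_length` helper are my own illustration, not taken from any of the tools mentioned, and the 42-character limit is just a common subtitling convention used here as an assumed default.

```python
import re

# A minimal SRT cue: numeric index, a timecode line, then one or more
# text lines, terminated by a blank line (or end of file).
SRT_BLOCK = re.compile(
    r"(\d+)\s*\n"                      # cue index
    r"(\d{2}:\d{2}:\d{2},\d{3}) --> "  # start timecode
    r"(\d{2}:\d{2}:\d{2},\d{3})\s*\n"  # end timecode
    r"(.*?)(?:\n\n|\Z)",               # text lines until blank line / EOF
    re.DOTALL,
)

def extract_cues(srt_text):
    """Return (start, end, text) tuples, separating the translatable
    text from the metadata a translator shouldn't have to see."""
    return [(m.group(2), m.group(3), m.group(4).strip())
            for m in SRT_BLOCK.finditer(srt_text)]

def check_line_length(cues, max_chars=42):
    """The basic QA check most tools stop at: flag overlong lines.
    42 chars per line is a common subtitling convention (an assumption
    here, not a value from any specific tool)."""
    issues = []
    for start, _end, text in cues:
        for line in text.splitlines():
            if len(line) > max_chars:
                issues.append((start, line))
    return issues

sample = """1
00:00:01,000 --> 00:00:03,500
Hello, world!

2
00:00:04,000 --> 00:00:07,250
This second subtitle line is deliberately far too long to pass the check.
"""

cues = extract_cues(sample)
print(len(cues))                      # → 2
print(check_line_length(cues)[0][0])  # → 00:00:04,000
```

Notice what the sketch does not do: it says nothing about where the subtitle sits on screen, how it is styled, or whether it matches the video, which is exactly the contextual information the tools above have been missing.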