Vikas Ashok from Old Dominion University presented on using visual saliency models to improve web accessibility for people with visual impairments. He described two projects: SVIM, a saliency-driven video magnifier that tracks regions of interest in videos to aid low-vision users, and SAIL, which automatically injects ARIA landmarks into webpages to streamline navigation for blind screen reader users. Both projects use deep learning models to detect salient regions and objects. SVIM clusters salient pixels to position the magnified video viewport, while SAIL identifies salient page areas and tags them with landmarks. Evaluations found that SVIM improved the video-watching experience for low-vision users and that SAIL reduced task completion times for blind screen reader users relative to manually landmarked pages.
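
To make the SVIM viewport step concrete, here is a minimal Python sketch of that kind of pipeline, not the authors' implementation: it takes a per-frame saliency map (from any saliency model), clusters the strongly salient pixels, and returns a padded bounding box a magnifier could track. The threshold, DBSCAN parameters, and padding are placeholder assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def saliency_viewport(saliency_map, threshold=0.6, pad=20):
    """Pick a magnification viewport from one frame's saliency map.

    saliency_map : 2-D float array in [0, 1] produced by a saliency model.
    Returns (top, left, bottom, right) pixel coordinates, or None if
    no region is salient enough.
    """
    # Keep only strongly salient pixels.
    coords = np.argwhere(saliency_map > threshold)  # (row, col) pairs
    if len(coords) == 0:
        return None

    # Cluster salient pixels so scattered noise does not drag the viewport.
    labels = DBSCAN(eps=10, min_samples=20).fit_predict(coords)
    valid = labels[labels >= 0]  # drop DBSCAN noise points (label -1)
    if len(valid) == 0:
        return None

    # Follow the largest cluster, i.e. the dominant region of interest.
    biggest = np.bincount(valid).argmax()
    cluster = coords[labels == biggest]

    # Bounding box plus padding, clamped to the frame.
    h, w = saliency_map.shape
    top, left = cluster.min(axis=0) - pad
    bottom, right = cluster.max(axis=0) + pad
    return (max(top, 0), max(left, 0), min(bottom, h), min(right, w))
```

In a video setting, this box would be recomputed (and smoothed) across frames so the magnified view follows the region of interest rather than jumping per frame.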
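On the SAIL side, once the salient regions of a page are known, the landmark injection itself is a small DOM rewrite. The sketch below assumes the model's output has already been mapped to CSS selectors with human-readable labels; the selectors, labels, and sample page are made up for illustration and are not SAIL's actual interface.

```python
from bs4 import BeautifulSoup

# Hypothetical saliency-model output: selectors of salient regions and labels.
salient_regions = {
    "#search-form": "Search",
    "#results": "Search results",
}

def inject_landmarks(html, regions):
    """Tag salient regions with ARIA landmarks so screen readers can jump to them."""
    soup = BeautifulSoup(html, "html.parser")
    for selector, label in regions.items():
        node = soup.select_one(selector)
        if node is None:
            continue
        # role="region" plus aria-label turns the element into a named landmark.
        node["role"] = "region"
        node["aria-label"] = label
    return str(soup)

page = """<body>
  <form id="search-form"><input type="text"></form>
  <div id="results">...</div>
</body>"""
print(inject_landmarks(page, salient_regions))
```

Screen readers expose such landmarks in their navigation menus, which is what lets users skip directly to the tagged regions instead of reading the page linearly.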