The document proposes optimizing DRAM caches for latency rather than hit rate. It presents the Alloy Cache, a design that stores tag and data together and streams them out in a single access, avoiding tag serialization and thereby reducing hit latency. The Alloy Cache is paired with a Memory Access Predictor that chooses, per request, between serial and parallel access to the DRAM cache and main memory, reducing effective latency while limiting wasted bandwidth. Evaluation shows that the Alloy Cache with a simple predictor outperforms previous designs such as SRAM-tag caches and the LH-Cache, achieving more than 35% speedup compared to 24% for SRAM tags. The design thus delivers better performance than structures previously assumed to be necessary, such as SRAM tag arrays, while remaining simpler and more practical to implement.
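To make the prediction step concrete, below is a minimal sketch of one way such a hit/miss predictor could be organized: a small table of saturating counters indexed by a hash of the requesting instruction's address, with the prediction steering serial versus parallel access and the actual outcome training the counter. The table size, counter width, and indexing scheme are illustrative assumptions, not details taken from the document.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical sketch of an instruction-based hit/miss predictor.
 * Table size and counter width are assumed values for illustration. */

#define MAP_ENTRIES 256          /* assumed predictor table size */
#define CTR_MAX     7            /* 3-bit saturating counters (assumed) */

static uint8_t map_table[MAP_ENTRIES];

/* Hash the instruction address into a table index. */
static unsigned map_index(uint64_t pc) {
    return (unsigned)((pc >> 2) % MAP_ENTRIES);
}

/* Predict: true = likely DRAM-cache hit, so access the cache first and
 * go to main memory only on a confirmed miss (serial access);
 * false = likely miss, so probe the cache and main memory in parallel
 * to hide the miss latency at the cost of extra memory bandwidth. */
bool map_predict_hit(uint64_t pc) {
    return map_table[map_index(pc)] > CTR_MAX / 2;
}

/* Train the counter with the actual outcome of the cache access. */
void map_update(uint64_t pc, bool was_hit) {
    unsigned i = map_index(pc);
    if (was_hit) {
        if (map_table[i] < CTR_MAX) map_table[i]++;
    } else {
        if (map_table[i] > 0) map_table[i]--;
    }
}
```

The saturating counters let the predictor track per-instruction hit/miss behavior cheaply: instructions whose loads usually hit are served serially (no wasted memory traffic), while instructions whose loads usually miss trigger a parallel memory probe so the miss latency is largely hidden.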