This document surveys advances in hardware that have enabled progress in artificial intelligence. It observes that progress in deep learning has historically been gated by available hardware capability, and that larger models and datasets demand ever more computation. Specialized hardware designs for deep learning can improve efficiency through techniques such as reduced-precision data types, pruning of unimportant weights, and bringing computation closer to memory. Future opportunities include non-von Neumann architectures, memory-compute integration, and continued co-design across algorithms, architectures, and circuits to solve domain-specific problems like artificial intelligence as efficiently as possible.
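
As a rough illustration of two of the techniques named above, the Python sketch below prunes small-magnitude weights and then quantizes the survivors to 8-bit integers. The weight matrix, pruning threshold, and symmetric int8 scheme are illustrative assumptions for this sketch, not details taken from the document itself.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical layer weights; real models have far larger matrices.
    w = rng.normal(size=(4, 4)).astype(np.float32)

    # Magnitude pruning: zero out weights whose absolute value falls below
    # a threshold (the cutoff here is an arbitrary assumption; practical
    # systems tune it per layer or per target sparsity).
    threshold = np.float32(0.5)
    pruned = np.where(np.abs(w) < threshold, np.float32(0), w)

    # Reduced precision: symmetric linear quantization of the remaining
    # float32 weights to int8, using a single per-tensor scale factor.
    scale = np.abs(pruned).max() / 127.0
    q = np.round(pruned / scale).astype(np.int8)

    # Dequantize to see how closely the cheap int8 representation tracks
    # the original values; the gap is the cost of the smaller data type.
    approx = q.astype(np.float32) * scale
    print("sparsity:", np.mean(pruned == 0.0))
    print("max quantization error:", np.abs(approx - pruned).max())

Pruned weights can be skipped entirely by sparse hardware, and int8 arithmetic units are substantially smaller and lower-power than float32 units, which is why specialized accelerators lean on both techniques.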