This document discusses cardinality estimation techniques for very large data sets. It begins by outlining goals for a counting solution, such as supporting high-throughput data streams, estimating cardinality within known error thresholds for data sets of up to 1 billion or even 1 trillion elements, and supporting set operations. It then discusses naive solutions and their limitations before introducing intuitions and techniques such as applying multiple hash functions, stochastic averaging, and the HyperLogLog algorithm. HyperLogLog uses a single hash function to map values into m substreams and tracks the maximum number of leading zeros observed in each substream, estimating cardinality with a typical standard error of 1.04/√m. The document concludes by discussing related probabilistic data structures and references.
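To make the summary concrete, the following is a minimal sketch of the HyperLogLog idea described above: a single hash function whose top b bits select one of m = 2^b registers (substreams), each register keeping the maximum rank (position of the leftmost 1-bit) seen, and a harmonic-mean estimate over the registers. The class name, method names, and the choice of SHA-1 via Python's hashlib are illustrative assumptions, not taken from the document.

```python
import hashlib
import math


class HyperLogLog:
    """Minimal HyperLogLog sketch (illustrative, not the document's code)."""

    def __init__(self, b=10):
        self.b = b                     # first b hash bits pick the register
        self.m = 1 << b                # number of substreams/registers
        self.registers = [0] * self.m  # max rank seen per substream
        # Bias-correction constant alpha_m from the HyperLogLog analysis
        self.alpha = 0.7213 / (1 + 1.079 / self.m)

    def _hash(self, value):
        # One 64-bit hash per element (single hash function, as in HLL)
        digest = hashlib.sha1(str(value).encode()).digest()
        return int.from_bytes(digest[:8], "big")

    def add(self, value):
        x = self._hash(value)
        j = x >> (64 - self.b)                 # register index from top b bits
        w = x & ((1 << (64 - self.b)) - 1)     # remaining bits of the hash
        # rank = position of the leftmost 1-bit in w (1-based)
        rank = (64 - self.b) - w.bit_length() + 1
        self.registers[j] = max(self.registers[j], rank)

    def estimate(self):
        # Harmonic mean of 2^register values, scaled by alpha * m^2
        z = sum(2.0 ** -r for r in self.registers)
        e = self.alpha * self.m * self.m / z
        # Small-range correction (linear counting) when many registers are empty
        zeros = self.registers.count(0)
        if e <= 2.5 * self.m and zeros:
            e = self.m * math.log(self.m / zeros)
        return int(e)


if __name__ == "__main__":
    hll = HyperLogLog(b=12)            # 4096 registers, ~1.6% standard error
    for i in range(100_000):
        hll.add(f"user-{i}")
    print(hll.estimate())              # should print a value close to 100,000
```

With b = 12 the sketch uses 4096 registers, so the expected standard error is roughly 1.04/√4096 ≈ 1.6%, illustrating the accuracy/memory trade-off the document describes.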