This document discusses the number systems used in computing, including decimal, binary, hexadecimal, and binary-coded decimal (BCD). It explains how each system uses positional notation with a radix, or base, to represent values: binary uses base 2 with the digits 0 and 1, while hexadecimal uses base 16 with the digits 0-9 and A-F. Conversion between decimal, binary, and hexadecimal is described. Signed and unsigned numbers are also discussed, with two's complement identified as the most common representation of signed binary numbers. Finally, fixed precision and the concept of overflow are introduced, which arise when numbers are stored in a limited number of bits.
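As a minimal illustration of these ideas (a sketch, not taken from the document itself), the following Python example shows positional evaluation of a digit string, the same value written in decimal, binary, and hexadecimal, and an 8-bit two's-complement encoding with wraparound on overflow; the 8-bit width and the helper names are assumptions made for this example.

```python
# Sketch (not from the document): positional notation, base conversion,
# and 8-bit two's complement with wraparound on overflow.

def positional_value(digits, base):
    """Evaluate a digit string in the given base, e.g. 'B5' in base 16."""
    value = 0
    for ch in digits:
        value = value * base + int(ch, base)
    return value

BITS = 8  # assumed fixed precision for the example

def to_twos_complement(n, bits=BITS):
    """Encode a signed integer as an unsigned bit pattern (mod 2**bits)."""
    return n & ((1 << bits) - 1)

def from_twos_complement(pattern, bits=BITS):
    """Interpret an unsigned bit pattern as a signed two's-complement value."""
    if pattern & (1 << (bits - 1)):        # sign bit set -> negative value
        return pattern - (1 << bits)
    return pattern

if __name__ == "__main__":
    # The same quantity in three bases: 181 decimal = 10110101 binary = B5 hex.
    print(positional_value("10110101", 2), positional_value("B5", 16))  # 181 181
    print(bin(181), hex(181))                                           # 0b10110101 0xb5

    # Two's complement: -42 stored in 8 bits, then read back.
    pattern = to_twos_complement(-42)
    print(format(pattern, "08b"), from_twos_complement(pattern))        # 11010110 -42

    # Overflow: 127 + 1 wraps around to -128 in 8-bit two's complement.
    print(from_twos_complement(to_twos_complement(127 + 1)))            # -128
```

The last line demonstrates fixed precision: with only 8 bits available, the result of 127 + 1 cannot be represented and wraps to the most negative value.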