This document proposes Contrastive Representation Learning for Clustering (CRLC), a method that applies the principle of maximizing mutual information across views to learn both cluster-level and instance-level semantics for unsupervised image clustering. CRLC trains an encoder to map images into a representation space by maximizing the agreement between an image's representation and a cluster-assignment probability vector, using either cosine similarity or the log-of-dot-product as the critic. Minimizing the resulting contrastive loss encourages agreement between matched representation–assignment pairs while discouraging agreement with negatives, yielding discriminative representations. Experimental results show that CRLC learns better-separated representations than baseline methods and achieves stronger clustering and semi-supervised learning performance.
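To make the training objective concrete, here is a minimal sketch (not the authors' implementation) of an InfoNCE-style contrastive loss with the two critics the summary mentions: cosine similarity between feature vectors, and log-of-dot-product between cluster-assignment probability vectors. The function names, the temperature `tau`, and the small epsilon are illustrative assumptions; positives are assumed to sit on the diagonal of the score matrix.

```python
import numpy as np

def cosine_critic(z, w, tau=0.5):
    # Cosine-similarity critic between L2-normalized feature vectors,
    # scaled by an (assumed) temperature tau.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    w = w / np.linalg.norm(w, axis=1, keepdims=True)
    return (z @ w.T) / tau

def log_dot_critic(p, q, eps=1e-8):
    # Log-of-dot-product critic between cluster-assignment
    # probability vectors (rows sum to 1); eps avoids log(0).
    return np.log(p @ q.T + eps)

def contrastive_loss(scores):
    # InfoNCE loss: the i-th row's positive is entry (i, i),
    # all other entries in the row serve as negatives.
    scores = scores - scores.max(axis=1, keepdims=True)  # numeric stability
    log_prob = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Perfectly confident, matching cluster assignments across two views
# give a near-zero loss under the log-of-dot-product critic.
p = np.eye(4)
loss_aligned = contrastive_loss(log_dot_critic(p, p))
```

Maximizing the diagonal scores relative to the rest of each row is one standard way to realize "maximizing agreement" between a representation and its cluster-assignment vector; the choice of critic only changes how that agreement is measured.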