This paper proposes a new approach to ensemble learning that combines the outputs of multiple supervised and unsupervised models. The approach formulates the ensemble task as an optimization problem on a bipartite graph, whose objective is to maximize consensus between the supervised predictions and the unsupervised constraints: it favors predictions that vary smoothly over the graph while penalizing deviations from the initial supervised labels. The problem is solved by iteratively propagating probability estimates among neighboring nodes of the graph. Experimental results demonstrate the benefits of the approach over existing alternatives.
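The iterative propagation described above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual algorithm: the bipartite graph links instance nodes to "group" nodes (predicted classes from supervised models and clusters from unsupervised models), and the two update steps alternately average estimates across the graph while pulling supervised groups back toward their initial labels. All names and the penalty parameter `alpha` are assumptions made for this sketch.

```python
import numpy as np

def consensus_propagation(A, y0, is_supervised, alpha=2.0, n_iter=50):
    """Sketch of consensus maximization on a bipartite graph.

    A            : (n_instances, n_groups) 0/1 matrix linking instances to
                   the groups (predicted classes or clusters) that the base
                   models assign them to.
    y0           : (n_groups, n_classes) initial label matrix; rows for
                   supervised groups are one-hot class labels, rows for
                   unsupervised clusters are zero.
    is_supervised: (n_groups,) boolean mask of supervised group nodes.
    alpha        : hypothetical penalty weight for deviating from y0.
    """
    n_inst, _ = A.shape
    n_cls = y0.shape[1]
    U = np.full((n_inst, n_cls), 1.0 / n_cls)  # instance-node estimates
    Q = y0.astype(float).copy()                # group-node estimates

    for _ in range(n_iter):
        # Group step: average adjacent instance estimates, with an extra
        # pull toward the initial labels for supervised groups.
        deg_g = A.sum(axis=0)[:, None]
        pull = alpha * is_supervised.astype(float)[:, None]
        Q = (A.T @ U + pull * y0) / (deg_g + pull)
        # Instance step: smooth each instance over its adjacent groups.
        deg_i = A.sum(axis=1)[:, None]
        U = (A @ Q) / deg_i
    return U  # consolidated class-probability estimates per instance

# Toy example: 4 instances, 2 classes. One supervised model contributes
# groups 0 (class 0) and 1 (class 1); a clustering model contributes
# clusters 2 and 3 with no labels of their own.
A = np.array([[1, 0, 1, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 1, 0, 1]])
y0 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
is_supervised = np.array([True, True, False, False])
U = consensus_propagation(A, y0, is_supervised)
```

In this toy case the cluster nodes agree with the supervised class groups, so the propagation converges to class 0 for the first two instances and class 1 for the last two.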