This document discusses learning an abstract action-value function for task-and-motion planning using a relational state representation and a graph neural network, so that the learned function generalizes across different environments and goals. Rather than estimating values for detailed, fully parameterized actions, it approximates which object should be moved next. The relational state is encoded as a graph: predicates on objects and regions become node features, and relations between entities become edges. The graph neural network outputs the value function, which guides planning by comparing the values of abstract actions, such as picking different objects.
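The encoding and value comparison described above can be sketched as follows. This is a minimal illustration, not the paper's architecture: the predicate features, relations, network sizes, and weights are all hypothetical, and the weights are random stand-ins for learned parameters. One round of message passing over the relational graph produces a per-node Q-value, and planning would compare these values across object nodes to decide which object to pick.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical relational state: nodes are objects and regions with
# predicate features (here: is_object, is_region, at_goal); edges
# encode relations such as "on". All names and sizes are illustrative.
node_feats = np.array([
    [1.0, 0.0, 0.0],   # object A, not at goal
    [1.0, 0.0, 1.0],   # object B, already at goal
    [0.0, 1.0, 0.0],   # region R
])
edges = [(0, 2), (1, 2)]  # "A on R", "B on R"

# Randomly initialized weights stand in for learned GNN parameters.
W_self = rng.normal(size=(3, 8))
W_msg = rng.normal(size=(3, 8))
w_out = rng.normal(size=8)

def gnn_q_values(x, edges):
    """One round of message passing, then a scalar Q-value per node
    for the abstract action 'move this node's object'."""
    msgs = np.zeros((x.shape[0], 8))
    for src, dst in edges:           # sum incoming messages
        msgs[dst] += x[src] @ W_msg
        msgs[src] += x[dst] @ W_msg  # treat relations as undirected
    h = np.tanh(x @ W_self + msgs)   # updated node embeddings
    return h @ w_out                 # one value per node

q = gnn_q_values(node_feats, edges)
best = int(np.argmax(q[:2]))  # compare only the two object nodes
print("Q-values:", np.round(q, 3))
```

Because the same message-passing weights are shared across all nodes and edges, the same network applies unchanged to scenes with different numbers of objects, which is the source of the generalization claimed above.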