The document presents a methodology for automatically assessing participants in chat conversations used for computer-supported collaborative learning (CSCL). It applies natural language processing techniques and heuristics to evaluate conversations along three dimensions: participants' involvement, knowledge, and innovation. The heuristics were tested on a corpus of 7 chat conversations involving 35 students discussing web collaboration technologies. Correlations between the heuristic scores and expert human evaluations were generally high for involvement and innovation, while the knowledge heuristic proved less reliable. The methodology can help identify criteria for effective participation and rank both learners and conversations.