This updated paper revisits an assessment (https://tinyurl.com/yw9329bd), written only a month earlier, of the ability of Large Language Models (LLMs) to contribute to incremental research in mathematics and theoretical physics. Substantial new evidence, particularly the early scientific case studies of GPT-5 reported by OpenAI, demonstrates materially improved capabilities relative to the assumptions underlying the previous analysis. This revision clarifies which limitations of earlier models still apply, which have been weakened by the new results, and what the new empirical frontier implies for cautious, well-structured human–LLM scientific collaboration. The tone remains conservative: the improvements are significant, but they do not eliminate the need for expert oversight, rigorous verification, or principled scientific governance.