1. Survey of reasoning techniques with Language Model prompting
Presenter:
Sanjana Kothari
2. Language Models and Reasoning
Reasoning lies at the core of human language and communication. It enables
decisions, discussions, negotiations, and more. Not long ago, reasoning was a
power that only humans possessed. With the development of Language Models (LMs),
complex and nuanced forms of reasoning, such as common-sense and symbolic
reasoning, have started taking shape. With further advances, these tools may
eventually augment human intelligence with the help of prompting strategies,
narrowing the gap between machine and human intelligence.
3. Language Model Prompting
Language model prompting strategies are the techniques used to prompt a
language model to generate specific text or perform a specific task. They are
designed to guide the model's output and ensure that the generated text is
relevant to the task at hand.
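A minimal sketch of what such prompts look like in practice. The arithmetic word-problem task and the string templates below are illustrative assumptions, not examples from the survey:

```python
# Three common prompting styles expressed as plain string templates.
# The task (simple arithmetic word problems) is hypothetical.

ZERO_SHOT = "Q: A farmer has 3 pens with 4 sheep each. How many sheep in total?\nA:"

# Few-shot: prepend worked examples so the model can infer the task format.
FEW_SHOT = (
    "Q: There are 2 boxes with 5 apples each. How many apples in total?\n"
    "A: 10\n\n"
    "Q: A farmer has 3 pens with 4 sheep each. How many sheep in total?\nA:"
)

# Chain of thought: demonstrations include intermediate reasoning steps,
# nudging the model to reason step by step before answering.
CHAIN_OF_THOUGHT = (
    "Q: There are 2 boxes with 5 apples each. How many apples in total?\n"
    "A: Each box has 5 apples and there are 2 boxes, so 2 * 5 = 10. The answer is 10.\n\n"
    "Q: A farmer has 3 pens with 4 sheep each. How many sheep in total?\nA:"
)

def build_prompt(style: str) -> str:
    """Return the prompt text for a given style name."""
    return {"zero-shot": ZERO_SHOT,
            "few-shot": FEW_SHOT,
            "cot": CHAIN_OF_THOUGHT}[style]

print(build_prompt("few-shot"))
```

The same question is appended last in every style; only the amount and kind of context placed before it changes.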
9. Comparing Language Model Performance
Wei et al. demonstrate that few-shot prompting performs
better on almost all tasks as the model scale increases,
which can be explained by the fact that larger LMs contain
more implicit knowledge for reasoning. CoT prompting
shows greater improvements; however, when the model
scale drops below 100B parameters, CoT prompting yields
no performance gain and may even be detrimental. Another
observation is that PaLM-62B outperforms LaMDA-137B,
possibly because it was trained on a higher-quality corpus.
Surprisingly, at the same parameter scale but with a
different training corpus, Codex significantly outperforms
GPT-3, indicating that pre-training on code not only
enables code generation but may also strengthen
reasoning ability.
10. Comparing Prompts
Prompt construction:
1. Manual construction
2. LM-generated prompts
3. Retrieval-based prompts
Prompting strategies:
1. Zero-shot
2. Few-shot
3. Chain of thought
4. Self-consistency sampling
5. Instruction prompting
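Self-consistency sampling, listed above, can be sketched as: sample several chain-of-thought completions at nonzero temperature, extract each final answer, and take a majority vote. The answer-extraction regex and the simulated completions below are illustrative assumptions:

```python
import re
from collections import Counter
from typing import Optional

def extract_answer(completion: str) -> Optional[str]:
    """Pull the final numeric answer out of a chain-of-thought completion."""
    m = re.search(r"answer is (-?\d+)", completion)
    return m.group(1) if m else None

def self_consistency(completions: list) -> str:
    """Majority-vote over the answers from several sampled reasoning chains."""
    answers = [a for a in (extract_answer(c) for c in completions) if a is not None]
    return Counter(answers).most_common(1)[0][0]

# Stand-ins for completions sampled at temperature > 0 from one CoT prompt;
# two chains agree on 12, one makes an arithmetic slip.
samples = [
    "3 pens times 4 sheep is 12. The answer is 12.",
    "4 + 4 + 4 = 12, so the answer is 12.",
    "3 + 4 = 7, so the answer is 7.",
]
print(self_consistency(samples))  # -> 12
```

The vote is taken over final answers, not over whole reasoning chains, so chains that reach the same answer by different routes still reinforce each other.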
11. Future Direction
1. Theoretical principle of reasoning
2. Efficient reasoning
3. Robust and interpretable reasoning
4. Multimodal reasoning
5. Generalizable reasoning