4. Appendix

References
[1] Edward Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen,
"LoRA: Low-Rank Adaptation of Large Language Models",
https://arxiv.org/pdf/2106.09685 (last accessed: 2025-09-10)

[2] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, Denny Zhou,
"Chain-of-Thought Prompting Elicits Reasoning in Large Language Models",
https://arxiv.org/pdf/2201.11903 (last accessed: 2025-09-10)

[3] Kaiyan Chang, Songcheng Xu, Chenglong Wang, Yingfeng Luo, Xiaoqian Liu, Tong Xiao, Jingbo Zhu,
"Efficient Prompting Methods for Large Language Models: A Survey",
https://arxiv.org/pdf/2404.01077 (last accessed: 2025-09-10)

[4] Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le,
"Finetuned Language Models Are Zero-Shot Learners",
https://arxiv.org/pdf/2109.01652 (last accessed: 2025-09-10)

[5] Vladislav Lialin, Vijeta Deshpande, Xiaowei Yao, Anna Rumshisky,
"Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning",
https://arxiv.org/pdf/2303.15647 (last accessed: 2025-09-10)

[6] Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong, Richard Socher,
"CTRL: A Conditional Transformer Language Model for Controllable Generation",
https://arxiv.org/pdf/1909.05858 (last accessed: 2025-09-10)

[7] Science Council of Japan, Recommendation: "Toward the Realization of a Society That Accepts and Utilizes Generative AI" (提言 生成AIを受容・活用する社会の実現に向けて), February 27, 2025,
https://www.scj.go.jp/ja/info/kohyo/pdf/kohyo-26-t381.pdf (last accessed: 2025-09-10)