[1] Huang S, Grady P. Generative AI Act Two[EB/OL]. (2023-09-21)[2023-10-20]. https://www.sequoiacap.com/article/generative-ai-act-two/.
[2] Introduction to the CALSP application "ChatBK 1.0 Bookan Smart Reference"[EB/OL]. (2023-10-19)[2023-10-20]. https://www.calsp.cn/2023/10/20/bulletin-202310-01/.
[
[3] Brown T, Mann B, Ryder N, et al. Language models are few-shot learners[EB/OL]. [2023-10-20]. https://arxiv.org/abs/2005.14165.
[4] Wei J, Tay Y, Bommasani R, et al. Emergent abilities of large language models[J/OL]. Transactions on Machine Learning Research, 2022(8). [2023-10-20]. https://openreview.net/pdf?id=yzkSU5zdwD.
[5] Karpathy A. State of GPT[EB/OL]. (2023-05-23)[2023-10-20]. https://karpathy.ai/stateofgpt.pdf.
[6] Wu S, Irsoy O, Lu S, et al. BloombergGPT: A large language model for finance[J]. arXiv preprint arXiv:2303.17564, 2023.
[7] Ouyang L, Wu J, Jiang X, et al. Training language models to follow instructions with human feedback[J/OL]. Advances in Neural Information Processing Systems, 2022(35). [2023-10-20]. https://arxiv.org/pdf/2203.02155.pdf.
[8] Brown T, Mann B, Ryder N, et al. Language models are few-shot learners[J]. Advances in Neural Information Processing Systems, 2020, 33: 1877–1901.
[9] IT之家. Large-model products from eight companies including Baidu and ByteDance pass the generative AI registration and may launch services to the public[EB/OL]. (2023-08-31)[2023-10-20]. https://www.ithome.com/0/715/938.htm.
[10] Guo Limin, Fu Yaming. Building smart libraries with large language models: framework and future[J/OL]. Library Journal (图书馆杂志): 1-11 [2023-10-20]. http://kns.cnki.net/kcms/detail/31.1108.G2.20231011.1616.006.html.
[11] Zha D, Bhat Z P, Lai K H, et al. Data-centric artificial intelligence: A survey[EB/OL]. [2023-10-20]. https://arxiv.org/pdf/2303.10158.pdf.
[12] Li X, Yu P, Zhou C, et al. Self-alignment with instruction backtranslation[J]. arXiv preprint arXiv:2308.06259, 2023.
[13] Köpf A, Kilcher Y, von Rütte D, et al. OpenAssistant Conversations: Democratizing large language model alignment[J]. arXiv preprint arXiv:2304.07327, 2023.
[14] Wei J, Bosma M, Zhao V Y, et al. Finetuned language models are zero-shot learners[J]. arXiv preprint arXiv:2109.01652, 2021.
[15] Chang Y, Wang X, Wang J, et al. A survey on evaluation of large language models[J]. arXiv preprint arXiv:2307.03109, 2023.