Naifan Li
LLMs
2025
05-14  Qwen Series: Technical Summary
05-14  Qwen3 Technical Report
02-11  LLM API Development Guide
2024
12-25  LLaMA 4: Next-Generation Open Language Models
12-19  Qwen2.5 Technical Report
07-15  Qwen2 Technical Report
04-18  LLaMA 3: The Most Capable Openly Available LLM to Date
2023
09-28  Qwen Technical Report
07-18  LLaMA 2: Open Foundation and Fine-Tuned Chat Models
05-18  LIMA: Less Is More for Alignment
02-27  LLaMA: Open and Efficient Foundation Language Models
2022
12-20  Self-Instruct: Aligning Language Models with Self-Generated Instructions
05-24  Zero-shot-CoT: Large Language Models are Zero-Shot Reasoners
05-21  Least-to-Most Prompting Enables Complex Reasoning in Large Language Models
03-21  Self-Consistency Improves Chain of Thought Reasoning in Language Models
03-04  InstructGPT: Training language models to follow instructions with human feedback
01-28  Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
2021
09-03  FLAN: Finetuned Language Models Are Zero-Shot Learners
07-29  Summary: Sequence Packing
2020
05-28  GPT-3: Language Models are Few-Shot Learners