Naifan Li

LLMs

2022

Least-to-Most Prompting Enables Complex Reasoning in Large Language Models 05-21
Self-Consistency Improves Chain of Thought Reasoning in Language Models 03-21
InstructGPT: Training language models to follow instructions with human feedback 03-04
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models 01-28

2021

FLAN: Finetuned Language Models Are Zero-Shot Learners 09-03
Summary: Sequence Packing 07-29

2020

GPT-3: Language Models are Few-Shot Learners 05-28

2019

T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer 10-23
GPT-2: Language Models are Unsupervised Multitask Learners 02-14
BBPE: Byte-level Byte Pair Encoding 02-14

2018

GPT-1: Improving Language Understanding by Generative Pre-Training 06-11

2017

Transformer: Attention Is All You Need 06-12
Taxonomy of Natural Language Processing Tasks 06-12
© 2018 - 2026 Naifan Li