Research
Research Area
We are open to any research relevant to NLP and ML. Currently, we are particularly focused on the following topics:
Large Language Models & Foundation Models
Alignment learning (instruction fine-tuning, RLHF, imitation learning)
Complex reasoning (math, code)
LLM self-correction, self-reflection
LLM decoding
AI-Agents
Plug-in modules (e.g., RAG)
Multimodality expansion
Machine Learning & Representation Learning
Uncertainty estimation (measuring the confidence of model outputs)
Learning strategies (e.g., self-supervised learning)
Representation extraction & analysis
Training models in restricted environments (black-box model training & optimization)
Parameter-efficient training (e.g., pruning, knowledge distillation, quantization)