Code and Benchmark Dataset for DRESSing Up LLM: Efficient Stylized Question-Answering via Style Subspace Editing
OpenCompass is an LLM evaluation platform supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) over 100+ datasets.
[NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning
CPL: Critical Plan Step Learning Boosts LLM Generalization in Reasoning Tasks
Robust recipes to align language models with human and AI preferences
[EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasoning in Language Models with Fine-grained Rewards
Repository for the code to the paper "A Dataset for Plain Language Adaptation of Biomedical Abstracts"
From Chain-of-Thought prompting to OpenAI o1 and DeepSeek-R1 🍓
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath
A framework for few-shot evaluation of language models.
A central, open resource for data and tools related to chain-of-thought reasoning in large language models. Developed @ Samwald research group: https://samwald.info/
Small and Efficient Mathematical Reasoning LLMs
[MathCoder, MathCoder-VL] Family of LLMs/LMMs for mathematical reasoning.
The official code for "One Fits All: Power General Time Series Analysis by Pretrained LM (NeurIPS 2023 Spotlight)"
Inference-Time Intervention: Eliciting Truthful Answers from a Language Model
Google Research
Awesome-LLM-Robustness: a curated list of Uncertainty, Reliability and Robustness in Large Language Models
Summary of knowledge points, assignments, and other materials for master's courses at Peking University's School of Software and Microelectronics (Integrated Circuit major)
A professional list on Large (Language) Models and Foundation Models (LLM, LM, FM) for Time Series, Spatiotemporal, and Event Data.
Awesome-LLM: a curated list of Large Language Models
A collection of AWESOME things about domain adaptation
A benchmark for few-shot evaluation of foundation models for electronic health records (EHRs)
Code for "DistCare: Distilling Knowledge from Publicly Available Online EMR Data to Emerging Epidemic for Prognosis" (WWW '21)