Fine-tuned chemical language model for predicting molecular lipophilicity in drug design. Explores parameter-efficient fine-tuning strategies (LoRA, BitFit, IA3), layer freezing techniques, and influence-based data selection. Balances accuracy and computational efficiency for molecular property prediction tasks.
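As a concrete sketch of the LoRA variant of this setup: the snippet below wraps a BERT-style chemical language model in LoRA adapters for single-target regression (lipophilicity). The checkpoint name, rank, and target module names are illustrative assumptions, not this repository's actual settings.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

# Assumed base checkpoint for illustration; any BERT/RoBERTa-style
# chemical language model pretrained on SMILES would slot in here.
BASE_MODEL = "seyonec/ChemBERTa-zinc-base-v1"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    BASE_MODEL,
    num_labels=1,                # single regression target: lipophilicity
    problem_type="regression",
)

# LoRA: inject trainable low-rank update matrices into the attention
# projections; the rest of the encoder stays frozen.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                         # rank of the low-rank decomposition (assumed)
    lora_alpha=16,               # scaling factor for the LoRA update
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention module names in BERT-style encoders
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically ~1% of the full model is trainable
```

BitFit and IA3 fit the same pattern with different trainable subsets: BitFit updates only bias terms, IA3 only learned per-activation scaling vectors.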
A practical introduction to Transformer fine-tuning, designed for participants of the International Olympiad in Artificial Intelligence (IOAI). Covers LoRA, adapters, and the limitations of parameter-efficient methods.
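For concreteness, a bottleneck adapter of the kind such a tutorial covers can be written in a few lines of PyTorch. This is a minimal sketch; the hidden size and bottleneck width are placeholder values.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Houlsby-style adapter: down-project, nonlinearity, up-project,
    plus a residual connection. Only these weights are trained; the
    host Transformer's parameters stay frozen."""

    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()
        # Zero-init the up-projection so the adapter starts as the
        # identity and training begins from the frozen model's behavior.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Quick shape check on a dummy batch.
adapter = BottleneckAdapter(hidden_size=768)
x = torch.randn(2, 16, 768)  # (batch, sequence, hidden)
assert adapter(x).shape == x.shape
```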
Build a production-grade, modular pipeline for fine-tuning large language models with LoRA on domain-specific tasks (e.g., legal QA, medical summarization, financial reasoning).
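One way the core of such a pipeline could be organized is a single typed config plus a model builder, so task-specific data and evaluation modules can be swapped in independently. The field names, checkpoint, and target modules below are illustrative assumptions, not prescribed by any particular codebase.

```python
from dataclasses import dataclass
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

@dataclass(frozen=True)
class FinetuneConfig:
    # Hypothetical fields; the checkpoint is an assumption, not a
    # requirement of the pipeline design.
    base_model: str = "meta-llama/Llama-2-7b-hf"
    task: str = "legal_qa"  # selects the dataset/prompt module to plug in
    lora_rank: int = 16
    lora_alpha: int = 32
    lora_dropout: float = 0.05
    target_modules: tuple[str, ...] = ("q_proj", "v_proj")

def build_model(cfg: FinetuneConfig):
    """Load the frozen base model and attach trainable LoRA adapters."""
    tokenizer = AutoTokenizer.from_pretrained(cfg.base_model)
    model = AutoModelForCausalLM.from_pretrained(cfg.base_model)
    lora = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=cfg.lora_rank,
        lora_alpha=cfg.lora_alpha,
        lora_dropout=cfg.lora_dropout,
        target_modules=list(cfg.target_modules),
    )
    return tokenizer, get_peft_model(model, lora)
```

Keeping all run parameters in one frozen dataclass makes each fine-tuning run reproducible from its config alone, which is the main property a production-grade pipeline needs.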