Stars
YSDA course in Natural Language Processing
A 120-day CUDA learning plan covering daily concepts, exercises, pitfalls, and references (including “Programming Massively Parallel Processors”). Features six capstone projects to solidify GPU par…
ODQM contains metrics for measuring the quality of offline reinforcement learning data
Distributed Reinforcement Learning accelerated by Lightning Fabric
Official Implementation for "In-Context Reinforcement Learning for Variable Action Spaces"
TORAX: Tokamak transport simulation in JAX
A library for mechanistic interpretability of GPT-style language models
Compact, high-quality word embeddings for the Russian language
A topic-centric list of high-quality open datasets.
LISP-like Simple Language Educational Edition, made as a final project for the ITMO Computer Architecture course.
A pretrained language model with 100B parameters
OpenMMLab Pre-training Toolbox and Benchmark
A collection of loss functions for medical image segmentation