We’re thrilled to see our advanced ML models and EMG hardware, which transform the neural signals that control muscles at the wrist into commands that seamlessly drive computer interactions, featured in the latest issue of Nature. Read the story: https://lnkd.in/g6JJwcf8 Find more details on this work and the models on GitHub: https://lnkd.in/g-xiJ2Nm
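For readers curious what "neural signals into commands" looks like in code, here is a minimal, purely illustrative PyTorch sketch of classifying a window of multichannel wrist sEMG into discrete commands. The channel count, window length, command names, and architecture are hypothetical stand-ins, not the models described in the paper or released on GitHub.

```python
import torch
import torch.nn as nn

# Illustrative only: a tiny 1D-conv classifier that maps a short window of
# multichannel wrist sEMG samples to discrete command classes. Channel count,
# window length, and command names are made-up placeholders, not the
# architecture from the Nature paper or the GitHub release.
NUM_CHANNELS = 16      # hypothetical electrode count
WINDOW_SAMPLES = 400   # hypothetical window length (e.g. 200 ms at 2 kHz)
COMMANDS = ["rest", "pinch", "swipe_left", "swipe_right"]

class TinyEMGDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(NUM_CHANNELS, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, len(COMMANDS))

    def forward(self, x):  # x: (batch, channels, samples)
        return self.head(self.features(x).squeeze(-1))

if __name__ == "__main__":
    decoder = TinyEMGDecoder()
    window = torch.randn(1, NUM_CHANNELS, WINDOW_SAMPLES)  # stand-in signal
    command = COMMANDS[decoder(window).argmax(dim=-1).item()]
    print("predicted command:", command)
```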
AI at Meta
Research Services
Menlo Park, California · 983,060 followers
Together with the AI community, we’re pushing boundaries through open science to create a more connected world.
About us
Through open science and collaboration with the AI community, we are pushing the boundaries of artificial intelligence to create a more connected world. We can’t advance the progress of AI alone, so we actively engage with the AI research and academic communities. Our goal is to advance AI in Infrastructure, Natural Language Processing, Generative AI, Vision, Human-Computer Interaction and many other areas of AI, and to enable the community to build safe and responsible solutions to address some of the world’s greatest challenges.
- Website: https://ai.meta.com/
- Industry: Research Services
- Company size: 10,001+ employees
- Headquarters: Menlo Park, California
- Specialties: research, engineering, development, software development, artificial intelligence, machine learning, machine intelligence, deep learning, computer vision, speech recognition, and natural language processing
Updates
-
Meta FAIR recently released the Seamless Interaction Dataset, the largest known high-quality video dataset of its kind, with:
- 4,000+ diverse participants
- 4,000+ hours of footage
- 65k+ interactions
- 5,000+ annotated samples
This dataset of full-body, in-person, face-to-face interaction videos represents a crucial stepping stone to understanding and modeling how people communicate and behave when they’re together, advancing AI's ability to generate more natural conversations and human-like gestures. Download the dataset on @huggingface: https://lnkd.in/ebDSm3Wq Learn more about the dataset: https://lnkd.in/e9V4CVms
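A minimal sketch of browsing the release programmatically with huggingface_hub before downloading anything; the repository id below is an assumption, so confirm it (and the recommended loading path) on the dataset card linked above.

```python
from huggingface_hub import list_repo_files, hf_hub_download

# Assumed dataset id; confirm against the Hugging Face link in the post.
REPO_ID = "facebook/seamless-interaction"

# Enumerate what the release contains before downloading anything; the full
# dataset is thousands of hours of video, so fetch files selectively.
files = list_repo_files(REPO_ID, repo_type="dataset")
print(f"{len(files)} files in {REPO_ID}; first few:")
for path in files[:10]:
    print(" ", path)

# Pull a single file once you know which shard or annotation you need:
# local_path = hf_hub_download(REPO_ID, filename=files[0], repo_type="dataset")
```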
-
"Our mission with the lab is to deliver personal superintelligence to everyone in the world. So that way, we can put that power in every individual's hand." - Mark Watch Mark's full interview with The Information as he goes deeper on Meta's vision for superintelligence and investment in AI compute infrastructure.
-
Today Mark announced Meta's major AI compute investment. See his post: https://lnkd.in/e_QUAKfz
-
Today, the Meta FAIR team is introducing Seamless Interaction, a project dedicated to modeling interpersonal dynamics. Explore the research artifacts below 👇
1️⃣ The project features a family of audiovisual behavioral models, developed in collaboration with Meta’s Codec Avatars lab + Core AI lab, that render speech between two individuals into diverse, expressive full-body gestures and active listening behaviors, allowing the creation of fully embodied avatars in 2D and 3D. Learn more here: https://lnkd.in/gYPNZbuX
2️⃣ We’re also releasing the Seamless Interaction Dataset, with 4,000+ participants and 4,000+ hours of interactions, making it the largest known video dataset of its kind. This dataset enables our audiovisual behavioral models to understand and generate human-like social behaviors, and represents a crucial stepping stone to understanding and modeling how people communicate and behave when they’re together. We’re releasing it here to help the research community advance their work: https://lnkd.in/gnJC9tan
3️⃣ You can also check out this technical report detailing our methodology to build motion models on the dataset, along with an evaluation framework for this type of model. See the report here: https://lnkd.in/gWmQp7PP
Head to our blog to go deeper on the full story: https://lnkd.in/gj_XNZWs
-
AI at Meta reposted this
Tired of manual prompt tweaking? Watch the latest Llama tutorial on how to optimize your existing GPT or other LLM prompts for Llama with `llama-prompt-ops`, the open-source Python library! In this video, Partner Engineer Justin Lee demonstrates installation, project setup, migrating your first prompt, and analyzing performance gains. Watch now to discover:
✨ Why systematic prompt optimization is crucial for migrating from GPT to Llama.
💻 A live code walkthrough of `llama-prompt-ops` in action for a customer service classification task.
Watch the full tutorial here: https://bit.ly/3G5chBr
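As a rough stand-in for what evaluation-driven prompt optimization means (this is not the llama-prompt-ops API; see the video and repo for the library's actual commands and config), the sketch below scores candidate Llama prompt variants for a customer service classification task against a tiny labeled set and keeps the best one. The endpoint, model name, and examples are placeholders.

```python
from openai import OpenAI

# Placeholders: point the client at whatever OpenAI-compatible endpoint serves
# your Llama model; base_url, api_key, and model name are assumptions here.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
MODEL = "llama-3.3-70b-instruct"

# Tiny labeled dev set for a customer-service classification task (made up).
DEV_SET = [
    ("My card was charged twice for one order.", "billing"),
    ("The app crashes every time I open settings.", "technical"),
    ("How do I change my delivery address?", "account"),
]

# Candidate system prompts to compare, e.g. the original GPT prompt and a
# variant reworded for Llama's instruction style.
CANDIDATES = [
    "Classify the customer message as billing, technical, or account.",
    "You are a support triage assistant. Reply with exactly one word: "
    "billing, technical, or account.",
]

def classify(system_prompt: str, message: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system_prompt},
                  {"role": "user", "content": message}],
        temperature=0.0,
    )
    return resp.choices[0].message.content.strip().lower()

def accuracy(system_prompt: str) -> float:
    hits = sum(label in classify(system_prompt, msg) for msg, label in DEV_SET)
    return hits / len(DEV_SET)

# Keep the prompt variant that scores best on the dev set.
best = max(CANDIDATES, key=accuracy)
print("best-scoring prompt:", best)
```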
-
AI at Meta reposted this
Take your AI development skills to the next level with our latest course on DeepLearning.AI, "Building with Llama 4", taught by Andrew Ng and Amit Sangani, Director of Partner Engineering for Meta's AI team. In this comprehensive course, you'll learn how to harness the power of Llama 4, which makes deployment easier than before and achieves more advanced multimodal understanding by prompting over multiple images. You'll discover how to:
- Build applications that reason over visual content, detect objects, and answer image-grounding questions with precision
- Understand the Llama 4 prompt format, especially the new image-related tokens
- Use long context to process entire books and research papers without needing to chunk the data
- Optimize your prompts with the Llama prompt optimization tool
- Create high-quality training data with the Llama Synthetic Data Kit
With hands-on experience using Llama 4 through Meta's official API and other inference providers, you'll be able to build more powerful applications. Start learning now at DeepLearning.AI and unlock the full potential of Llama 4! https://bit.ly/4kMpRc3
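For the multi-image piece specifically, here is a minimal sketch of prompting a Llama 4 model over two images through an OpenAI-compatible chat endpoint. The base URL, API key, model id, and image URLs are placeholders; swap in whatever Meta's official API or your inference provider documents.

```python
from openai import OpenAI

# Placeholders: use the base URL, key, and model id from your Llama 4 provider.
client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_KEY")
MODEL = "Llama-4-Maverick-17B-128E-Instruct"  # assumed model id

IMAGE_URLS = [
    "https://example.com/receipt_page1.jpg",  # placeholder images
    "https://example.com/receipt_page2.jpg",
]

# A single user turn can interleave text with several images, so the model can
# reason across all of them at once (e.g. compare the two receipt pages).
content = [{"type": "text",
            "text": "Compare these two receipts and list any items that "
                    "appear on one but not the other."}]
content += [{"type": "image_url", "image_url": {"url": url}} for url in IMAGE_URLS]

resp = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": content}],
)
print(resp.choices[0].message.content)
```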
-
The response to our first-ever Llama Startup Program was astounding, and after reviewing over 1,000 applications we’re thrilled to announce our first group. This eclectic group of early-stage startups is ready to push the boundaries of what’s possible with Llama and drive innovation in the GenAI market. We’ll offer them support with access to the Llama technical team and cloud credit reimbursements to help offset the cost of building with Llama. Learn more about the Llama Startup Program: https://lnkd.in/gjR-qFcE
-
For #CVPR2025, dive into the latest research papers from some of the brightest minds in AI 👇

Sonata: Advancing Self-Supervised Learning for 3D Point Representations
Sonata represents a significant leap forward in the field of 3D self-supervised learning. By identifying and addressing the geometric shortcut and introducing a flexible, efficient framework, it delivers an exceptionally robust 3D point representation. This work advances the state of the art and sets the stage for future innovations in 3D perception and its applications. Learn more ➡️ https://lnkd.in/gZ5wbJ3r

Reading Recognition in the Wild: A Dataset for Understanding Human Behaviors During Reading from an Egocentric Sensor Suite
This large multimodal dataset features video, eye gaze, and head pose sensor outputs, created to help solve the task of reading recognition from wearable devices. Notably, this is the first egocentric dataset to feature high-frequency eye-tracking data collected at 60 Hz. Explore ➡️ https://lnkd.in/gVw9vTyN
-
We’re sharing V-JEPA 2, a new world model with state-of-the-art performance in visual understanding and prediction. V-JEPA 2 is a 1.2 billion-parameter model, trained on video, that can enable zero-shot planning in robots, allowing them to plan and execute tasks in unfamiliar environments. Learn more about V-JEPA 2 ➡️ https://lnkd.in/gNgpzpvu
As we continue working toward our goal of achieving advanced machine intelligence (AMI), we’re also releasing three new benchmarks for evaluating how well existing models can reason about the physical world from video. Learn more and download the new benchmarks ➡️ https://lnkd.in/gNgpzpvu
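As a rough sketch of what planning with a world model means in practice (not V-JEPA 2's actual interface), the snippet below does random-shooting model predictive control: encode the current and goal observations, roll candidate action sequences through a latent predictor, and execute the first action of the best-scoring rollout. The encoder and predictor here are random stand-in modules.

```python
import torch
import torch.nn as nn

# Stand-in modules: a real system would use the pretrained video encoder and
# its learned latent predictor; these random networks only illustrate the loop.
LATENT_DIM, ACTION_DIM, HORIZON, NUM_CANDIDATES = 64, 7, 5, 256

encoder = nn.Linear(3 * 32 * 32, LATENT_DIM)                # image -> latent (toy)
predictor = nn.Linear(LATENT_DIM + ACTION_DIM, LATENT_DIM)  # (latent, action) -> next latent

def plan(current_frame: torch.Tensor, goal_frame: torch.Tensor) -> torch.Tensor:
    """Pick the first action of the candidate sequence whose predicted final
    latent state lands closest to the goal's latent state."""
    with torch.no_grad():
        z = encoder(current_frame.flatten()).expand(NUM_CANDIDATES, -1)
        z_goal = encoder(goal_frame.flatten())
        actions = torch.randn(NUM_CANDIDATES, HORIZON, ACTION_DIM)  # random shooting
        for t in range(HORIZON):
            z = predictor(torch.cat([z, actions[:, t]], dim=-1))    # latent rollout
        cost = (z - z_goal).norm(dim=-1)   # distance to goal in latent space
        return actions[cost.argmin(), 0]   # execute best first action

if __name__ == "__main__":
    current = torch.randn(3, 32, 32)  # placeholder current observation
    goal = torch.randn(3, 32, 32)     # placeholder goal image
    print("chosen action:", plan(current, goal))
```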