We're thrilled to announce our new course: Retrieval Augmented Generation (RAG). RAG is a key part of building LLM applications that are grounded, accurate, and adaptable. In this course, taught by AI engineer Zain Hasan and available on Coursera, you'll learn how to design and deploy production-ready RAG systems. You'll:
- Combine retrievers and LLMs using tools like Weaviate, Together AI, and Phoenix
- Apply keyword and semantic search methods
- Evaluate performance to deploy and optimize production systems
You'll apply these techniques to real-world datasets in domains like healthcare, media, and e-commerce, and build the intuition to make informed architectural decisions. 📈 With the global RAG market projected to grow from $1.2B in 2024 to over $11B by 2030, RAG is core to real-world LLM systems. Start building with it today! Enroll now: https://hubs.la/Q03xtjCy0
DeepLearning.AI
Software Development
Palo Alto, California 1,232,377 followers
Making world-class AI education accessible to everyone
About us
DeepLearning.AI is making world-class AI education accessible to people around the globe. DeepLearning.AI was founded by Andrew Ng, a global leader in AI.
- Website
- http://DeepLearning.AI
- Industry
- Software Development
- Company size
- 11-50 employees
- Headquarters
- Palo Alto, California
- Type
- Privately Held
- Founded
- 2017
- Specialties
- Artificial Intelligence, Deep Learning, and Machine Learning
Products
DeepLearning.AI
Online Course Platforms
Learn the skills to start or advance your AI career | World-class education | Hands-on training | Collaborative community of peers and mentors.
Locations
- Primary: 2445 Faber Pl, Palo Alto, California 94303, US
Updates
-
“The California Report on Frontier AI Policy,” produced by the state government, urges lawmakers to require incident reporting, protect whistleblowers, and reward transparency when they regulate foundation models. The authors, led by researchers at Stanford and the Carnegie Endowment, rejected several requirements of the previously vetoed bill SB 1047, calling for flexible rules that can be adjusted as compute budgets and user bases grow. Learn more in The Batch: https://hubs.la/Q03ynbk10
-
RAG augments a standard LLM pipeline with a retrieval step, injecting external, often domain-specific data into the prompt before generation. This improves factual accuracy, enables source attribution, and avoids the need for expensive retraining. Decoupling retrieval from generation lets each component do what it does best. In our new course on Retrieval Augmented Generation, you’ll learn how to build production-grade RAG systems, covering architecture, retrieval strategies, prompt design, and evaluation. Start building: https://hubs.la/Q03yhH3f0
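The pattern described above can be sketched in a few lines: retrieve relevant documents, inject them into the prompt, then hand the prompt to a generator. This is a minimal illustration only; the document list, the word-overlap retriever, and the prompt template are made-up stand-ins for a real vector database (e.g. Weaviate) and an LLM client.

```python
# Minimal RAG sketch: retrieve context, then inject it into the prompt.
# The store and scoring below are toy stand-ins, not a real vector DB.

DOCUMENTS = [
    "Weaviate is an open-source vector database.",
    "RAG injects retrieved context into the prompt before generation.",
    "Phoenix is a tool for tracing and evaluating LLM applications.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Toy keyword retriever: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        DOCUMENTS,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Retrieval is decoupled from generation: fetch context, then format it."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How does RAG ground generation?"))
```

Swapping the toy retriever for semantic (embedding-based) search changes only `retrieve`; the generation side of the pipeline stays untouched, which is the decoupling the post describes.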
-
Meta, which is building its new Superintelligence Labs, has offered compensation packages of up to $300 million over four years to top AI researchers, Wired reported. The company hired Apple scientist Ruoming Pang with an offer of $200 million over several years, according to Bloomberg. So far, it has hired at least 16 specialists from rivals including Anthropic, Apple, Google, and OpenAI. Learn more in The Batch: https://hubs.la/Q03y5tVT0
-
Want to impress your stakeholders and make your data easier to understand? Maps in Tableau aren’t just pretty; they’re powerful tools for revealing geographic trends and spotting opportunities at a glance. In the Data Storytelling course, you’ll learn when and how to use interactive maps to communicate location-based insights effectively. Check it out in the Data Analytics Professional Certificate: https://hubs.la/Q03y55H-0
-
Researchers built a large-scale dataset for training web agents by generating it automatically. Agentic LLMs fine-tuned on the dataset outperformed those fine-tuned on earlier, handcrafted datasets. Their pipeline used Qwen3-235B and other large language models to generate, execute, and vet web tasks. Then they fine-tuned Qwen3-1.7B and coupled it with an agentic framework. Their agent achieved 56 percent success on their generated test set, beating or matching agents based on much larger models that had not been fine-tuned. Read our summary of the paper in The Batch: https://hubs.la/Q03xwFw90
-
xAI updated its Grok vision‑language model, launching Grok 4 and the multi‑agent Grok 4 Heavy, based on a 1.7 trillion‑parameter mixture‑of‑experts architecture. Grok 4 topped Anthropic’s Claude 4 Opus, Google’s Gemini 2.5 Pro, and OpenAI’s o3 on several popular benchmarks. But the model immediately showed questionable behavior, such as calling itself Hitler and basing its answer to a politically sensitive question on Elon Musk’s public statements. Learn more in The Batch: https://hubs.ly/Q03xYzX_0
-
DeepLearning.AI reposted this
I recently finished the Post-training of LLMs short course by DeepLearning.AI, taught by Banghua Zhu, and it was one of the most practical deep dives I’ve taken into how large language models are fine-tuned to better reflect human intent and preferences. This course is a great starting point for anyone looking to move beyond generating text and into building aligned, purpose-driven LLMs. I got to fine-tune small models, work with preference datasets, and understand how techniques like DPO, PPO, and GRPO shift model behaviour in real time. Here are a few things I took away from the course:
• Learned how Supervised Fine-Tuning (SFT) sets the foundation for model behavior by imitating example responses, along with techniques for curating high-quality instruction data
• Explored Direct Preference Optimization (DPO) as a stable, reward-free method for aligning models using pairwise preferences, and how it can also be applied with online or on-policy data
• Understood how Proximal Policy Optimization (PPO) and Group Relative Policy Optimization (GRPO) work, and when each method is better suited based on reward availability and computational complexity
I applied all three methods through coding exercises and came away with a clear sense of how each one affects model behaviour, through hands-on comparison and experimentation. Along the way, I came across NeMo-RL, NVIDIA’s open-source library for scalable post-training and reinforcement learning on language models. It supports methods like DPO, GRPO, and reward-model training, and is built to scale smoothly from single-GPU experiments to large multi-GPU systems. With modular backends for training and generation, NeMo-RL makes it easier to apply modern RL techniques to language models in both research and production settings. If you're curious about how language models can be fine-tuned beyond pre-training, do check it out: https://lnkd.in/gjjUS6pt #LLMs #RLHF #NeMoRL #PostTraining #DeepLearning #DPO #GRPO #PPO #LanguageModels
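To make the "reward-free" point about DPO concrete, here is an illustrative sketch of the DPO objective for a single preference pair. The log-probability values are made up for illustration; in practice they would be summed token log-probs of the chosen and rejected responses under the policy and a frozen reference model.

```python
import math

def dpo_loss(policy_chosen: float, policy_rejected: float,
             ref_chosen: float, ref_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair: -log sigmoid of the scaled margin
    between how much the policy and the reference prefer the chosen response."""
    margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# When the policy favors the chosen response more than the reference does,
# the margin is positive and the loss is small; flipping the preference
# raises the loss, which is what gradient descent pushes against.
low = dpo_loss(policy_chosen=-5.0, policy_rejected=-9.0,
               ref_chosen=-6.0, ref_rejected=-6.0)
high = dpo_loss(policy_chosen=-9.0, policy_rejected=-5.0,
                ref_chosen=-6.0, ref_rejected=-6.0)
print(low < high)  # → True
```

Unlike PPO, nothing here calls a separate reward model: the pairwise preference itself supplies the training signal, which is why the post describes DPO as stable and reward-free.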
-
This week, in The Batch, Andrew Ng discusses how to get through the product management bottleneck, in which the speed of AI-assisted coding requires faster decisions about product specifications. Plus: 🤖 Grok 4 shows impressive smarts, questionable behavior 💰 Meta lures talent with sky‑high pay 🏛️ California reframes AI regulations 🔧 Researchers improved multi‑agent systems by addressing common failure modes Read The Batch: https://hubs.la/Q03xM-ZN0
-
DeepLearning.AI reposted this
I didn't spend a single penny to learn AI Agents, and you can do it too (PART 3). And the best part is I got to learn from industry experts. DeepLearning.AI has done a great job in making these courses:
1. Event-Driven Agentic Document Workflows 🔗 https://lnkd.in/d7vJEH4H
2. Building AI Browser Agents 🔗 https://lnkd.in/ddKzmvmW
3. Building Code Agents with Hugging Face 🔗 https://lnkd.in/dhx73Kbn
4. Building AI Voice Agents for Production 🔗 https://lnkd.in/dHiRTWFf
5. DSPy: Build and Optimize Agentic Apps 🔗 https://lnkd.in/d4-3bidJ
6. MCP: Build Rich-Context AI Apps with Anthropic 🔗 https://lnkd.in/digapx-H
I've curated 50+ AI Agent resources on my profile. Check them out 👋
-