Parlant Banner

Hello, Conversation Modeling!

Parlant is the open-source conversation modeling engine for building better, deliberate Agentic UX: it gives you the power of LLMs without the unpredictability, so you can create controlled, compliant, and purposeful conversations.


Website · Introduction · Tutorial · About · Reddit


Introduction Video

Parlant Introduction

Install

pip install parlant

Option 1: Use the CLI

Start the server and interact with the default agent:

parlant-server run
# Now visit http://localhost:8800

Add behavioral guidelines and let Parlant do the rest:

parlant guideline create \
    --condition "the user greets you" \
    --action "thank them for checking out Parlant"
# Now start a new conversation and greet the agent

Option 2: Use the Python SDK

# file: agent.py

import parlant.sdk as p
import asyncio
from textwrap import dedent


@p.tool
async def get_on_sale_car(context: p.ToolContext) -> p.ToolResult:
    return p.ToolResult("Hyundai i20")

@p.tool
async def human_handoff(context: p.ToolContext) -> p.ToolResult:
    # notify_sales is your own function for alerting the sales team (not shown here)
    await notify_sales(context.customer_id, context.session_id)

    return p.ToolResult(
        data="Session handed off to sales team",
        # Disable auto-responding using the AI agent on this session
        # following the next message.
        control={"mode": "manual"},
    )


async def start_conversation_server() -> None:
    async with p.Server() as server:
        agent = await server.create_agent(
            name="Johnny",
            description="You work at a car dealership",
        )

        journey = await agent.create_journey(
            title="Research Car",
            conditions=[
                "The customer wants to buy a new car",
                "The customer expressed general interest in new cars",
            ],
            description=dedent("""\
                Help the customer come to a decision of what new car to get.

                The process goes like this:
                1. First try to actively understand their needs
                2. Once needs are clarified, recommend relevant categories or specific models for consideration."""),
        )

        offer_on_sale_car = await journey.create_guideline(
            condition="the customer indicates they're on a budget",
            action="offer them a car that is on sale",
            tools=[get_on_sale_car],
        )

        transfer_to_sales = await journey.create_guideline(
            condition="the customer clearly stated they wish to buy a specific car",
            action="transfer them to the sales team",
            tools=[human_handoff],
        )

        await transfer_to_sales.prioritize_over(offer_on_sale_car)


asyncio.run(start_conversation_server())

Run python agent.py and visit http://localhost:8800.

Quick Demo

Parlant Banner

What is Conversation Modeling?

You've built an AI agent—that's great! However, when you actually test it, you see it's not handling many customer interactions properly, and your business experts are displeased with it. What do you do?

Enter Conversation Modeling (CM): a new powerful and reliable approach to controlling how your agents interact with your users.

A conversation model is a structured, domain-specific set of principles, actions, objectives, and terms that an agent applies to a given conversation.
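
As a rough mental model, you can picture a conversation model as plain data. The classes below are purely illustrative (they are not Parlant's actual SDK types); they only show what kind of information a conversation model carries:

```python
from dataclasses import dataclass, field


@dataclass
class Guideline:
    condition: str  # when this principle applies, in natural language
    action: str     # what the agent should do when it applies


@dataclass
class ConversationModel:
    guidelines: list[Guideline] = field(default_factory=list)
    glossary: dict[str, str] = field(default_factory=dict)  # term -> strict definition


model = ConversationModel(
    guidelines=[
        Guideline(
            condition="the customer asks about pricing",
            action="quote only the published price list",
        ),
    ],
    glossary={"premium plan": "the mid-tier subscription, not the enterprise tier"},
)
```

In Parlant itself, these pieces are managed entities (created via the SDK or CLI) rather than static data, which is what lets the engine match and enforce them at runtime.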

Why Conversation Modeling?

The problem of getting your AI agent to say what you want it to say is a hard one, experienced by virtually anyone building customer-facing agents. Here's how Conversation Modeling compares to other approaches to solving this problem.

  • Flow engines force the user to interact according to predefined flows. In contrast, a CM engine dynamically adapts to a user's natural interaction patterns while conforming to your rules.

  • Free-form prompt engineering leads to inconsistency, frequently failing to uphold requirements. Conversely, a CM engine leverages structure to enforce conformance to a Conversation Model.

Who uses Parlant?

Parlant is used to deliver complex conversational agents that reliably follow your business protocols in use cases such as:

  • 🏦 Regulated financial services
  • 🏥 Healthcare communications
  • 📜 Legal assistance
  • 🛡️ Compliance-focused use cases
  • 🎯 Brand-sensitive customer service
  • 🤝 Personal advocacy and representation

How is Parlant used?

Developers and data-scientists are using Parlant to:

  • 🤖 Create custom-tailored conversational agents quickly and easily
  • 👣 Define behavioral guidelines for agents to follow (Parlant ensures they are followed reliably)
  • 🛠️ Attach tools with specific guidance on how to properly use them in different contexts
  • 📖 Manage their agents’ glossary to ensure strict interpretation of terms in a conversational context
  • 👤 Add customer-specific information to deliver personalized interactions

How does Parlant work?

graph TD
    API(Parlant REST API) -->|React to Session Trigger| Engine[AI Response Engine]
    Engine -->|Load Domain Terminology| GlossaryStore
    Engine -->|Match Guidelines| GuidelineMatcher
    Engine -->|Infer & Call Tools| ToolCaller
    Engine -->|Tailor Guided Message| MessageComposer

When an agent needs to respond to a customer, Parlant's engine evaluates the situation, checks relevant guidelines, gathers necessary information through your tools, and continuously re-evaluates its approach based on your guidelines as new information emerges. When it's time to generate a message, Parlant implements self-critique mechanisms to ensure that the agent's responses precisely align with your intended behavior as given by the contextually-matched guidelines.
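
In very simplified terms, the response cycle above can be sketched as follows. This is an illustration only: the real engine matches guidelines semantically with an LLM and composes messages with self-critique, whereas this sketch substitutes trivial keyword matching and string concatenation:

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Guideline:
    condition: str
    action: str
    tool: Optional[Callable[[], str]] = None


def respond(message: str, guidelines: list[Guideline]) -> str:
    # 1. Match guidelines relevant to the current situation
    matched = [g for g in guidelines if g.condition in message.lower()]
    # 2. Infer and call tools attached to matched guidelines
    tool_data = [g.tool() for g in matched if g.tool]
    # 3. Compose a message guided by the matched actions
    actions = "; ".join(g.action for g in matched)
    return f"[guided by: {actions or 'no guidelines'}] data={tool_data}"


guidelines = [
    Guideline("budget", "offer a car that is on sale", tool=lambda: "Hyundai i20"),
]
print(respond("I'm on a budget", guidelines))
```

The key point the sketch captures is the ordering: relevant guidelines are selected first, tools are called based on what matched, and only then is the message composed under the matched guidance.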

📚 More technical docs on the architecture and API are available under docs/.

📦 Quickstart

Parlant comes pre-built with responsive session (conversation) management, a detection mechanism for incoherence and contradictions in guidelines, content-filtering, jailbreak protection, an integrated sandbox UI for behavioral testing, native API clients in Python and TypeScript, and other goodies.

$ pip install parlant
$ parlant-server run
$ # Open the sandbox UI at http://localhost:8800 and play

🙋‍♂️🙋‍♀️ Who Is Parlant For?

Parlant is the right tool for the job if you're building an LLM-based chat agent, and:

  1. 🎯 Your use case places a high importance on behavioral precision and consistency, particularly in customer-facing scenarios
  2. 🔄 Your agent is expected to undergo continuous behavioral refinements and changes, and you need a way to implement those changes efficiently and confidently
  3. 📈 You're expected to maintain a growing set of behavioral guidelines, and you need to maintain them coherently and with version-tracking
  4. 💬 Conversational UX and user engagement are important concerns for your use case, and you want to easily control the flow and tone of conversations

⭐ Star Us: Your Support Goes a Long Way!

Star History Chart

🤔 What Makes Parlant Different?

In a word: Guidance. 🧭🚦🤝

Parlant's engine revolves around solving one key problem: How can we reliably guide customer-facing agents to behave in alignment with our needs and intentions?

Hence Parlant's fundamentally different approach to agent building: Managed Guidelines:

parlant guideline create \
  --condition "the customer wants to return an item" \
  --action "get the order number and item name and then help them return it"

By giving structure to behavioral guidelines, and granularizing guidelines (i.e. making each behavioral guideline a first-class entity in the engine), Parlant's engine is able to offer unprecedented control, quality, and efficiency in building LLM-based agents:

  1. 🛡️ Reliability: Running focused self-critique in real-time, per guideline, to ensure it is actually followed
  2. 💡 Explainability: Providing feedback around its interpretation of guidelines in each real-life context, which helps in troubleshooting and improvement
  3. 🔧 Maintainability: Helping you maintain a coherent set of guidelines by detecting and alerting you to possible contradictions (gross or subtle) in your instructions
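
As a toy illustration of the maintainability point, contradiction detection boils down to flagging guidelines whose conditions overlap but whose actions conflict. Parlant's actual coherence checks are semantic (LLM-based) and catch subtle conflicts too; this sketch only catches the grossest case of literally identical conditions:

```python
from dataclasses import dataclass
from itertools import combinations


@dataclass(frozen=True)
class Guideline:
    condition: str
    action: str


def naive_contradictions(guidelines: list[Guideline]) -> list[tuple[Guideline, Guideline]]:
    """Flag pairs with the same condition but different actions."""
    return [
        (a, b)
        for a, b in combinations(guidelines, 2)
        if a.condition == b.condition and a.action != b.action
    ]


gs = [
    Guideline("the customer asks for a refund", "approve it immediately"),
    Guideline("the customer asks for a refund", "escalate to a manager first"),
]
conflicts = naive_contradictions(gs)
```

Because each guideline is a first-class, structured entity rather than a sentence buried in a prompt, this kind of cross-checking becomes possible at all.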

🤖 Works with all major LLM providers

📚 Learning Parlant

To start learning and building with Parlant, visit our documentation portal.

Need help? Ask us anything on Discord. We're happy to answer questions and help you get up and running!

💻 Usage Example

Adding a guideline for an agent—for example, to ask a counter-question to get more info when a customer asks a question:

parlant guideline create \
    --condition "a free-tier customer is asking how to use our product" \
    --action "first seek to understand what they're trying to achieve"

👋 Contributing

We use the Linux-standard Developer Certificate of Origin (DCO.md), so that, by contributing, you confirm that you have the rights to submit your contribution under the Apache 2.0 license (i.e., that the code you're contributing is truly yours to share with the project).

Please consult CONTRIBUTING.md for more details.

Can't wait to get involved? Join us on Discord and let's discuss how you can help shape Parlant. We're excited to work with contributors directly while we set up our formal processes!

Otherwise, feel free to start a discussion or open an issue here on GitHub—freestyle 😎.
