
💻 langchain-xinference

This package contains the LangChain integration for Xinference.

🤝 Support

  • Chat
  • Generate
  • Embeddings
  • Rerank
  • Tool Calling

🚀 Installation

pip install -U langchain-xinference

☕ Chat Models

The ChatXinference class exposes chat models served by Xinference.

from langchain_xinference.chat_models import ChatXinference
from langchain.prompts import PromptTemplate

llm = ChatXinference(
    server_url="http://0.0.0.0:9997",  # replace with your Xinference server URL
    model_uid="<model_uid>",  # replace with the model UID returned when launching the model
)
prompt = PromptTemplate(
    input_variables=["country"],
    template="Q: where can we visit in the capital of {country}? A:",
)
chain = prompt | llm

chain.invoke(input={"country": "France"})

# Streaming
ai_res = chain.stream(input={"country": "France"})
for chunk in ai_res:
    print(chunk.content)

☕ Generate

The Xinference class exposes completion-style LLMs served by Xinference.

from langchain_xinference.llms import Xinference
from langchain.prompts import PromptTemplate

llm = Xinference(
    server_url="http://0.0.0.0:9997",  # replace with your Xinference server URL
    model_uid="<model_uid>",  # replace with the model UID returned when launching the model
)
prompt = PromptTemplate(
    input_variables=["country"],
    template="Q: where can we visit in the capital of {country}? A:",
)
chain = prompt | llm

chain.invoke(input={"country": "France"})

# Streaming
ai_res = chain.stream(input={"country": "France"})
for chunk in ai_res:
    print(chunk)
