A fully local RAG pipeline that answers natural-language questions about movie reviews. Uses Ollama for embeddings and local LLM inference, Chroma as the vector store, and LangChain to retrieve, summarize, and generate answers.
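The retrieval half of this pipeline can be sketched end to end in plain Python. This is an illustrative stand-in, not the project's code: a toy bag-of-words embedding replaces the Ollama embedding model, a list plus cosine similarity replaces Chroma, and the `retrieve` function name is made up here.

```python
import math
import re
from collections import Counter

def embed(text, vocab):
    """Toy bag-of-words embedding; the real pipeline would call an
    Ollama embedding model here instead."""
    counts = Counter(re.findall(r"[a-z]+", text.lower()))
    return [counts[w] for w in vocab]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, reviews, vocab, k=2):
    """Rank stored review chunks by similarity to the question,
    mimicking Chroma's nearest-neighbour search."""
    q = embed(question, vocab)
    return sorted(reviews, key=lambda r: cosine(q, embed(r, vocab)),
                  reverse=True)[:k]

reviews = [
    "A gripping thriller with a stunning final act.",
    "The comedy falls flat despite a talented cast.",
    "Stunning visuals but a thin, predictable plot.",
]
vocab = sorted({w for r in reviews for w in re.findall(r"[a-z]+", r.lower())})
context = retrieve("Which reviews praise stunning visuals?", reviews, vocab)
# `context` would then be passed to the local LLM, along with the
# question, to generate the final answer.
```

In the real stack the generation step hands `context` to the LLM via a LangChain prompt; the ranking logic above is the part Chroma performs internally.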
Self-hosted RAG application for PDF question answering using LangChain, ChromaDB, and Ollama. Features a Flask web interface, vector embeddings, automated chunking, and local LLM inference. Includes a CI/CD pipeline with automated testing.
Modern columnar data format for ML and LLMs, implemented in Rust. Convert from Parquet in two lines of code for 100x faster random access, vector indexing, and data versioning. Compatible with Pandas, DuckDB, Polars, PyArrow, and PyTorch, with more integrations coming.
A project showing how to use Spring AI with OpenAI to chat with the documents in a library. Documents are stored in a combined conventional/vector database. The AI creates embeddings from the documents, which are stored in the vector database; the vector database is then queried for the nearest document, and that document is used by the AI to generate the answer.
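The embed-store-query flow described above can be sketched with an in-memory stand-in for the vector database. This is a minimal illustration, not Spring AI code: the `InMemoryVectorStore` class and the term-frequency `embed` function are invented here, where the real setup would use an OpenAI embedding model and a database-backed vector store.

```python
import math
import re

def embed(text):
    """Toy term-frequency embedding (word -> count); the real
    application would call an OpenAI embedding model instead."""
    vec = {}
    for w in re.findall(r"[a-z]+", text.lower()):
        vec[w] = vec.get(w, 0) + 1
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse (dict) vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class InMemoryVectorStore:
    """Stand-in for the vector database: stores (embedding, document)
    pairs and answers nearest-document queries by cosine similarity."""
    def __init__(self):
        self.entries = []

    def add(self, document):
        self.entries.append((embed(document), document))

    def nearest(self, query):
        q = embed(query)
        return max(self.entries, key=lambda e: cosine(q, e[0]))[1]

store = InMemoryVectorStore()
store.add("Library cards can be renewed online or at the front desk.")
store.add("The reading room is open from nine until six on weekdays.")
best = store.nearest("where can I renew a library card?")
# `best` is the nearest document; the AI would use it as context
# when generating the answer.
```

The design mirrors the description: embeddings are computed once at ingestion and stored alongside the documents, so each query costs only one embedding plus a similarity scan.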