From 38bfc9b96a26aca44dc6ed3ba17b6456ffe019b5 Mon Sep 17 00:00:00 2001 From: Justin Date: Wed, 22 Jan 2025 09:36:54 -0800 Subject: [PATCH 1/2] added Helicone to Observability --- README.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 887ad46..59443c3 100644 --- a/README.md +++ b/README.md @@ -63,7 +63,7 @@ An awesome & curated list of the best LLMOps tools for developers. | [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | Code and documentation to train Stanford's Alpaca models, and generate the data. | ![GitHub Badge](https://img.shields.io/github/stars/tatsu-lab/stanford_alpaca.svg?style=flat-square) | | [BELLE](https://github.com/LianjiaTech/BELLE) | A 7B Large Language Model fine-tune by 34B Chinese Character Corpus, based on LLaMA and Alpaca. | ![GitHub Badge](https://img.shields.io/github/stars/LianjiaTech/BELLE.svg?style=flat-square) | | [Bloom](https://github.com/bigscience-workshop/model_card) | BigScience Large Open-science Open-access Multilingual Language Model | ![GitHub Badge](https://img.shields.io/github/stars/bigscience-workshop/model_card.svg?style=flat-square) | -| [dolly](https://github.com/databrickslabs/dolly) | Databricks’ Dolly, a large language model trained on the Databricks Machine Learning Platform | ![GitHub Badge](https://img.shields.io/github/stars/databrickslabs/dolly.svg?style=flat-square) | +| [dolly](https://github.com/databrickslabs/dolly) | Databricks' Dolly, a large language model trained on the Databricks Machine Learning Platform | ![GitHub Badge](https://img.shields.io/github/stars/databrickslabs/dolly.svg?style=flat-square) | | [Falcon 40B](https://huggingface.co/tiiuae/falcon-40b-instruct) | Falcon-40B-Instruct is a 40B parameters causal decoder-only model built by TII based on Falcon-40B and finetuned on a mixture of Baize. It is made available under the Apache 2.0 license. 
| | | [FastChat (Vicuna)](https://github.com/lm-sys/FastChat) | An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and FastChat-T5. | ![GitHub Badge](https://img.shields.io/github/stars/lm-sys/FastChat.svg?style=flat-square) | | [Gemma](https://www.kaggle.com/models/google/gemma) | Gemma is a family of lightweight, open models built from the research and technology that Google used to create the Gemini models. | | @@ -160,6 +160,7 @@ An awesome & curated list of the best LLMOps tools for developers. | [Fiddler AI](https://github.com/fiddler-labs/fiddler-auditor) | Evaluate, monitor, analyze, and improve machine learning and generative models from pre-production to production. Ship more ML and LLMs into production, and monitor ML and LLM metrics like hallucination, PII, and toxicity. | ![GitHub Badge](https://img.shields.io/github/stars/fiddler-labs/fiddler-auditor.svg?style=flat-square) | | [Giskard](https://github.com/Giskard-AI/giskard) | Testing framework dedicated to ML models, from tabular to LLMs. Detect risks of biases, performance issues and errors in 4 lines of code. | ![GitHub Badge](https://img.shields.io/github/stars/Giskard-AI/giskard.svg?style=flat-square) | | [Great Expectations](https://github.com/great-expectations/great_expectations) | Always know what to expect from your data. | ![GitHub Badge](https://img.shields.io/github/stars/great-expectations/great_expectations.svg?style=flat-square) | +| [Helicone](https://github.com/Helicone/helicone) | Open source LLM observability platform. One line of code to monitor, evaluate, and experiment with features like prompt management, agent tracing, and evaluations. 
| ![GitHub Badge](https://img.shields.io/github/stars/Helicone/helicone.svg?style=flat-square) | | [whylogs](https://github.com/whylabs/whylogs) | The open standard for data logging | ![GitHub Badge](https://img.shields.io/github/stars/whylabs/whylogs.svg?style=flat-square) | **[⬆ back to ToC](#table-of-contents)** @@ -444,7 +445,7 @@ An awesome & curated list of the best LLMOps tools for developers. | [OpenModelZ](https://github.com/tensorchord/openmodelz) | One-click machine learning deployment (LLM, text-to-image and so on) at scale on any cluster (GCP, AWS, Lambda labs, your home lab, or even a single machine). | ![GitHub Badge](https://img.shields.io/github/stars/tensorchord/openmodelz.svg?style=flat-square) | | [Seldon-core](https://github.com/SeldonIO/seldon-core) | An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models | ![GitHub Badge](https://img.shields.io/github/stars/SeldonIO/seldon-core.svg?style=flat-square) | | [Starwhale](https://github.com/star-whale/starwhale) | An MLOps/LLMOps platform for model building, evaluation, and fine-tuning. | ![GitHub Badge](https://img.shields.io/github/stars/star-whale/starwhale.svg?style=flat-square) | -| [TrueFoundry](https://truefoundry.com/llmops) | A PaaS to deploy, Fine-tune and serve LLM Models on a company’s own Infrastructure with Data Security and Optimal GPU and Cost Management. Launch your LLM Application at Production scale with best DevSecOps practices. | | +| [TrueFoundry](https://truefoundry.com/llmops) | A PaaS to deploy, Fine-tune and serve LLM Models on a company's own Infrastructure with Data Security and Optimal GPU and Cost Management. Launch your LLM Application at Production scale with best DevSecOps practices. | | | [Weights & Biases](https://github.com/wandb/wandb) | A lightweight and flexible platform for machine learning experiment tracking, dataset versioning, and model management, enhancing collaboration and streamlining MLOps workflows. 
W&B excels at tracking LLM-powered applications, featuring W&B Prompts for LLM execution flow visualization, input and output monitoring, and secure management of prompts and LLM chain configurations. | ![GitHub Badge](https://img.shields.io/github/stars/wandb/wandb.svg?style=flat-square) | **[⬆ back to ToC](#table-of-contents)** From c61611c114987ab2bc6278e89dda9a941c75aff6 Mon Sep 17 00:00:00 2001 From: Justin Date: Wed, 22 Jan 2025 09:39:03 -0800 Subject: [PATCH 2/2] revert apostrophe --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 59443c3..43aca99 100644 --- a/README.md +++ b/README.md @@ -63,7 +63,7 @@ An awesome & curated list of the best LLMOps tools for developers. | [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | Code and documentation to train Stanford's Alpaca models, and generate the data. | ![GitHub Badge](https://img.shields.io/github/stars/tatsu-lab/stanford_alpaca.svg?style=flat-square) | | [BELLE](https://github.com/LianjiaTech/BELLE) | A 7B Large Language Model fine-tune by 34B Chinese Character Corpus, based on LLaMA and Alpaca. 
| ![GitHub Badge](https://img.shields.io/github/stars/LianjiaTech/BELLE.svg?style=flat-square) | | [Bloom](https://github.com/bigscience-workshop/model_card) | BigScience Large Open-science Open-access Multilingual Language Model | ![GitHub Badge](https://img.shields.io/github/stars/bigscience-workshop/model_card.svg?style=flat-square) | -| [dolly](https://github.com/databrickslabs/dolly) | Databricks' Dolly, a large language model trained on the Databricks Machine Learning Platform | ![GitHub Badge](https://img.shields.io/github/stars/databrickslabs/dolly.svg?style=flat-square) | +| [dolly](https://github.com/databrickslabs/dolly) | Databricks’ Dolly, a large language model trained on the Databricks Machine Learning Platform | ![GitHub Badge](https://img.shields.io/github/stars/databrickslabs/dolly.svg?style=flat-square) | | [Falcon 40B](https://huggingface.co/tiiuae/falcon-40b-instruct) | Falcon-40B-Instruct is a 40B parameters causal decoder-only model built by TII based on Falcon-40B and finetuned on a mixture of Baize. It is made available under the Apache 2.0 license. | | | [FastChat (Vicuna)](https://github.com/lm-sys/FastChat) | An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and FastChat-T5. | ![GitHub Badge](https://img.shields.io/github/stars/lm-sys/FastChat.svg?style=flat-square) | | [Gemma](https://www.kaggle.com/models/google/gemma) | Gemma is a family of lightweight, open models built from the research and technology that Google used to create the Gemini models. | | @@ -445,7 +445,7 @@ An awesome & curated list of the best LLMOps tools for developers. | [OpenModelZ](https://github.com/tensorchord/openmodelz) | One-click machine learning deployment (LLM, text-to-image and so on) at scale on any cluster (GCP, AWS, Lambda labs, your home lab, or even a single machine). 
| ![GitHub Badge](https://img.shields.io/github/stars/tensorchord/openmodelz.svg?style=flat-square) | | [Seldon-core](https://github.com/SeldonIO/seldon-core) | An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models | ![GitHub Badge](https://img.shields.io/github/stars/SeldonIO/seldon-core.svg?style=flat-square) | | [Starwhale](https://github.com/star-whale/starwhale) | An MLOps/LLMOps platform for model building, evaluation, and fine-tuning. | ![GitHub Badge](https://img.shields.io/github/stars/star-whale/starwhale.svg?style=flat-square) | -| [TrueFoundry](https://truefoundry.com/llmops) | A PaaS to deploy, Fine-tune and serve LLM Models on a company's own Infrastructure with Data Security and Optimal GPU and Cost Management. Launch your LLM Application at Production scale with best DevSecOps practices. | | +| [TrueFoundry](https://truefoundry.com/llmops) | A PaaS to deploy, Fine-tune and serve LLM Models on a company’s own Infrastructure with Data Security and Optimal GPU and Cost Management. Launch your LLM Application at Production scale with best DevSecOps practices. | | | [Weights & Biases](https://github.com/wandb/wandb) | A lightweight and flexible platform for machine learning experiment tracking, dataset versioning, and model management, enhancing collaboration and streamlining MLOps workflows. W&B excels at tracking LLM-powered applications, featuring W&B Prompts for LLM execution flow visualization, input and output monitoring, and secure management of prompts and LLM chain configurations. | ![GitHub Badge](https://img.shields.io/github/stars/wandb/wandb.svg?style=flat-square) | **[⬆ back to ToC](#table-of-contents)**