
Conversation


@renovate renovate bot commented Mar 20, 2025

This PR contains the following updates:

| Package | Change | Age | Confidence |
| --- | --- | --- | --- |
| litellm | `==1.49.0` -> `==1.61.15` | age | confidence |

GitHub Vulnerability Alerts

CVE-2024-10188

A vulnerability in BerriAI/litellm, as of commit 26c03c9, allows unauthenticated users to cause a Denial of Service (DoS) by exploiting the use of ast.literal_eval to parse user input. This function is not safe and is prone to DoS attacks, which can crash the litellm Python server.
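A minimal sketch (not litellm's actual code) of why `ast.literal_eval` on untrusted input is a DoS vector: the parser does unbounded work, and on some Python builds very deep nesting can overflow the C stack and crash the process outright, before any validation error surfaces.

```python
# Illustrative only -- ast.literal_eval blocks code execution, but not
# resource exhaustion. An attacker-controlled string of deeply nested
# brackets forces heavy parsing work; on some interpreter builds this
# can crash the process rather than raise cleanly.
import ast

payload = "[" * 10_000 + "]" * 10_000  # attacker-controlled request field

try:
    ast.literal_eval(payload)
except (MemoryError, RecursionError, SyntaxError, ValueError) as exc:
    print(f"parse aborted: {type(exc).__name__}")
```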

CVE-2025-0628

An improper authorization vulnerability exists in the main-latest version of BerriAI/litellm. When a user with the role 'internal_user_viewer' logs into the application, they are provided with an overly privileged API key. This key can be used to access all the admin functionality of the application, including endpoints such as '/users/list' and '/users/get_users'. This vulnerability allows for privilege escalation within the application, enabling any account to become a PROXY ADMIN.
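A hypothetical sketch of the class of fix (the names and scopes below are illustrative, not litellm's API): scope the UI session key to the caller's role instead of handing every login an admin-capable key.

```python
# Hypothetical illustration of role-scoped key issuance; not litellm's
# actual implementation.
ADMIN_ROLES = {"proxy_admin"}

def issue_ui_session_key(user_id: str, role: str) -> dict:
    """Return key metadata whose permissions match the requester's role."""
    if role in ADMIN_ROLES:
        scopes = ["users:list", "users:get", "keys:manage"]
    else:
        # e.g. internal_user_viewer: read-only, self-scoped access only
        scopes = [f"users:read:{user_id}"]
    return {"user_id": user_id, "role": role, "scopes": scopes}

print(issue_ui_session_key("u1", "internal_user_viewer"))
```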

CVE-2024-8984

A Denial of Service (DoS) vulnerability exists in berriai/litellm version v1.44.5. This vulnerability can be exploited by appending characters, such as dashes (-), to the end of a multipart boundary in an HTTP request. The server continuously processes each character, leading to excessive resource consumption and rendering the service unavailable. The issue is unauthenticated and does not require any user interaction, impacting all users of the service.
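A hedged mitigation sketch (not litellm's actual fix): reject pathological multipart boundaries before any per-character parsing work begins, using the RFC 2046 length cap.

```python
# Illustrative pre-parse check; the helper name is an assumption.
MAX_BOUNDARY_LEN = 70  # RFC 2046 limits multipart boundaries to 70 chars

def boundary_is_sane(content_type: str) -> bool:
    """Cheaply validate the boundary parameter of a Content-Type header."""
    _, _, rest = content_type.partition("boundary=")
    boundary = rest.split(";", 1)[0].strip().strip('"')
    return 0 < len(boundary) <= MAX_BOUNDARY_LEN

evil = "multipart/form-data; boundary=" + "-" * 10_000
print(boundary_is_sane(evil))  # False -- request can be rejected up front
print(boundary_is_sane("multipart/form-data; boundary=abc123"))  # True
```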


Release Notes

BerriAI/litellm (litellm)

v1.61.7

What's Changed

New Contributors

Full Changelog: BerriAI/litellm@v1.61.3...v1.61.7

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.7
```

Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 180.0 | 206.98769618433857 | 6.145029010811349 | 6.145029010811349 | 1839 | 1839 | 146.21495699998377 | 3174.8161250000067 |
| Aggregated | Failed ❌ | 180.0 | 206.98769618433857 | 6.145029010811349 | 6.145029010811349 | 1839 | 1839 | 146.21495699998377 | 3174.8161250000067 |

v1.61.3

What's Changed

Full Changelog: BerriAI/litellm@v1.61.1...v1.61.3

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.3
```

Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 110.0 | 127.51554087063036 | 6.408067444109619 | 6.408067444109619 | 1917 | 1917 | 94.95955199997752 | 2825.282969 |
| Aggregated | Failed ❌ | 110.0 | 127.51554087063036 | 6.408067444109619 | 6.408067444109619 | 1917 | 1917 | 94.95955199997752 | 2825.282969 |

v1.61.1

Compare Source

What's Changed

Full Changelog: BerriAI/litellm@v1.61.0...v1.61.1

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.1
```
Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 160.0 | 180.272351294557 | 6.268555221678184 | 0.0 | 1874 | 0 | 118.979319999994 | 3618.562145999988 |
| Aggregated | Passed ✅ | 160.0 | 180.272351294557 | 6.268555221678184 | 0.0 | 1874 | 0 | 118.979319999994 | 3618.562145999988 |

v1.61.0

What's Changed

New Contributors

Full Changelog: BerriAI/litellm@v1.60.8...v1.61.0

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.0
```
Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 180.0 | 213.86169773089247 | 6.297834462789351 | 0.003342799608699231 | 1884 | 1 | 81.07622899996159 | 4173.802059999957 |
| Aggregated | Passed ✅ | 180.0 | 213.86169773089247 | 6.297834462789351 | 0.003342799608699231 | 1884 | 1 | 81.07622899996159 | 4173.802059999957 |

v1.60.8

What's Changed

Full Changelog: BerriAI/litellm@v1.60.6...v1.60.8

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.8
```
Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 170.0 | 189.56173781509457 | 6.206468643400922 | 0.0 | 1855 | 0 | 149.30551800000558 | 3488.08786699999 |
| Aggregated | Passed ✅ | 170.0 | 189.56173781509457 | 6.206468643400922 | 0.0 | 1855 | 0 | 149.30551800000558 | 3488.08786699999 |

v1.60.6

Compare Source

What's Changed

New Contributors

Full Changelog: BerriAI/litellm@v1.60.5...v1.60.6

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.6
```
Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 200.0 | 217.05167674521235 | 6.288425886864887 | 0.0 | 1880 | 0 | 164.17646499996863 | 2306.284880000021 |
| Aggregated | Passed ✅ | 200.0 | 217.05167674521235 | 6.288425886864887 | 0.0 | 1880 | 0 | 164.17646499996863 | 2306.284880000021 |

v1.60.5

Compare Source

What's Changed

New Contributors

Full Changelog: BerriAI/litellm@v1.60.4...v1.60.5

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.5
```
Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 210.0 | 251.44053604962153 | 6.19421782055854 | 0.0 | 1854 | 0 | 167.35073600000305 | 4496.06190000003 |
| Aggregated | Passed ✅ | 210.0 | 251.44053604962153 | 6.19421782055854 | 0.0 | 1854 | 0 | 167.35073600000305 | 4496.06190000003 |

v1.60.4

Compare Source

What's Changed

Full Changelog: BerriAI/litellm@v1.60.2...v1.60.4

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.4
```
Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 210.0 | 243.98647747354212 | 6.187158959524932 | 0.0033407985742575225 | 1852 | 1 | 94.81396500007122 | 3976.009301999966 |
| Aggregated | Passed ✅ | 210.0 | 243.98647747354212 | 6.187158959524932 | 0.0033407985742575225 | 1852 | 1 | 94.81396500007122 | 3976.009301999966 |

v1.60.2

Compare Source

What's Changed

New Contributors

Full Changelog: BerriAI/litellm@v1.60.0...v1.60.2

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.2
```
Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 170.0 | 187.78487681207412 | 6.365583292626693 | 0.0 | 1905 | 0 | 135.5453470000043 | 3644.0179759999864 |
| Aggregated | Passed ✅ | 170.0 | 187.78487681207412 | 6.365583292626693 | 0.0 | 1905 | 0 | 135.5453470000043 | 3644.0179759999864 |

v1.60.0

What's Changed

Important Changes between v1.50.xx and v1.60.0

Known Issues

🚨 Detected issue with Langfuse Logging when Langfuse credentials are stored in DB

New Contributors

Full Changelog: BerriAI/litellm@v1.59.10...v1.60.0

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.0
```
Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 240.0 | 281.07272626532927 | 6.158354312051399 | 0.0 | 1843 | 0 | 215.79772499995897 | 3928.489000000013 |
| Aggregated | Passed ✅ | 240.0 | 281.07272626532927 | 6.158354312051399 | 0.0 | 1843 | 0 | 215.79772499995897 | 3928.489000000013 |

v1.59.10

Compare Source

What's Changed

Full Changelog: BerriAI/litellm@v1.59.9...v1.59.10

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.10
```
Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 210.0 | 239.24647793068146 | 6.21745665443628 | 0.00334092243655899 | 1861 | 1 | 73.25327600000264 | 3903.3159660000083 |
| Aggregated | Passed ✅ | 210.0 | 239.24647793068146 | 6.21745665443628 | 0.00334092243655899 | 1861 | 1 | 73.25327600000264 | 3903.3159660000083 |

v1.59.9

Compare Source

What's Changed

New Contributors

Full Changelog: BerriAI/litellm@v1.59.8...v1.59.9

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.9
```

Don't want to maintain your internal proxy? get in touch 🎉

Configuration

📅 Schedule: Branch creation - "" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@bolt-new-by-stackblitz

Review PR in StackBlitz Codeflow: run & review this pull request in StackBlitz Codeflow.

@codesandbox

codesandbox bot commented Mar 20, 2025

Review or Edit in CodeSandbox

Open the branch in Web Editor, VS Code, or VS Code Insiders

Open Preview

@sourcery-ai

sourcery-ai bot commented Mar 20, 2025

Reviewer's Guide by Sourcery

This pull request updates the litellm dependency to version 1.53.1 to address a security vulnerability (CVE-2024-10188). The update includes several fixes and improvements to the LiteLLM proxy, such as enforcing unique key aliases, Datadog integration enhancements, and bug fixes.

Sequence diagram for key generation with unique alias enforcement

```mermaid
sequenceDiagram
    participant User
    participant LiteLLM Proxy
    participant Database

    User->>LiteLLM Proxy: /key/generate request with alias
    LiteLLM Proxy->>Database: Check if alias already exists
    alt Alias exists
        Database-->>LiteLLM Proxy: Alias already in use
        LiteLLM Proxy-->>User: Error: Alias already exists
    else Alias is unique
        Database-->>LiteLLM Proxy: Alias is unique
        LiteLLM Proxy->>Database: Store new key with alias
        Database-->>LiteLLM Proxy: Key stored successfully
        LiteLLM Proxy-->>User: Key generated successfully
    end
```

Sequence diagram for key update with unique alias enforcement

```mermaid
sequenceDiagram
    participant User
    participant LiteLLM Proxy
    participant Database

    User->>LiteLLM Proxy: /key/update request with alias
    LiteLLM Proxy->>Database: Check if alias already exists
    alt Alias exists and is not owned by the user
        Database-->>LiteLLM Proxy: Alias already in use by another user
        LiteLLM Proxy-->>User: Error: Alias already exists
    else Alias is unique or owned by the user
        Database-->>LiteLLM Proxy: Alias is unique or owned by the user
        LiteLLM Proxy->>Database: Update key with alias
        Database-->>LiteLLM Proxy: Key updated successfully
        LiteLLM Proxy-->>User: Key updated successfully
    end
```
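A minimal sketch of the check both diagrams describe, with hypothetical names (`find_key_by_alias`, `store_key`, `update_key` are illustrative stand-ins, not litellm's database layer):

```python
# Hypothetical unique-alias enforcement for /key/generate and /key/update;
# the db interface is duck-typed for illustration only.
class AliasExistsError(Exception):
    """Raised when a requested key alias is already taken."""

def generate_key(db, alias: str, owner_id: str) -> str:
    if db.find_key_by_alias(alias) is not None:
        raise AliasExistsError(f"alias {alias!r} already exists")
    return db.store_key(alias=alias, owner_id=owner_id)

def update_key(db, alias: str, owner_id: str) -> None:
    existing = db.find_key_by_alias(alias)
    # Allow the update when the alias is free or already owned by the caller.
    if existing is not None and existing.owner_id != owner_id:
        raise AliasExistsError(f"alias {alias!r} belongs to another user")
    db.update_key(alias=alias, owner_id=owner_id)
```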

Sequence diagram for Datadog logging with StandardLoggingPayload

```mermaid
sequenceDiagram
    participant LiteLLM Proxy
    participant LLM Provider
    participant Datadog

    LiteLLM Proxy->>LLM Provider: LLM Request
    LLM Provider-->>LiteLLM Proxy: LLM Response
    LiteLLM Proxy->>LiteLLM Proxy: Create StandardLoggingPayload
    LiteLLM Proxy->>Datadog: Log StandardLoggingPayload
    alt LLM Failure
        LiteLLM Proxy->>Datadog: Log LLM Failure
    end
```
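A hedged sketch of the flow above; the payload fields and the `send` hook are assumptions for illustration, not litellm's actual StandardLoggingPayload schema or Datadog client.

```python
# Illustrative only: build one structured log record per LLM call and
# ship it without ever letting logging take down the proxy.
import json
import time

def build_logging_payload(model: str, status: str, error: str | None = None) -> dict:
    """Build one structured log record for an LLM call."""
    return {"model": model, "status": status, "error": error, "ts": time.time()}

def ship_to_datadog(payload: dict, send) -> None:
    """Serialize and send one record; swallow serialization errors, in the
    spirit of the changelog item about handling JSON errors in Datadog
    exception logging."""
    try:
        send(json.dumps(payload))
    except (TypeError, ValueError):
        pass

# Failure branch from the diagram: the failure is logged as its own record.
ship_to_datadog(build_logging_payload("gpt-4o", "failure", "timeout"), print)
```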

File-Level Changes

| Change | Details | Files |
| --- | --- | --- |
| The litellm dependency was updated to address a security vulnerability. | Updated litellm from version 1.44.8 to 1.53.1 in requirements.txt.<br>Updated litellm from version 1.49.0 to 1.53.1 in apps/ai-gateway/requirements.txt. | requirements.txt<br>apps/ai-gateway/requirements.txt |
| The update includes fixes and improvements to the LiteLLM proxy, such as enforcing unique key aliases, Datadog integration enhancements, and bug fixes. | Enforced unique key aliases on /key/generate and /key/update requests.<br>Enhanced Datadog integration with StandardLoggingPayload and failure logging support.<br>Added PATCH support for LLM endpoints.<br>Fixed /key/update to store budget_duration in the DB.<br>Handled JSON decode errors for Datadog exception logging.<br>Allowed disabling ErrorLogs written to the DB. | requirements.txt<br>apps/ai-gateway/requirements.txt |

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!
  • Generate a plan of action for an issue: Comment @sourcery-ai plan on
    an issue to generate a plan of action for it.

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help


@sourcery-ai sourcery-ai bot left a comment


We have skipped reviewing this pull request. Here's why:

  • It seems to have been created by a bot (hey, renovate[bot]!). We assume it knows what it's doing!
  • We don't review packaging changes - Let us know if you'd like us to change this.

@qodo-merge-pro

qodo-merge-pro bot commented Mar 20, 2025

CI Feedback 🧐

(Feedback updated until commit b3422ca)

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

Action: Build and Test

Failed stage: Setup pnpm [❌]

Failure summary:

The action failed due to two separate issues:

1. The pnpm setup action failed because of conflicting pnpm versions: the GitHub Action config specified version "latest", while package.json specified "pnpm@9.0.0" in its packageManager field. This conflict causes version-mismatch errors like ERR_PNPM_BAD_PM_VERSION.

2. The Codecov test-results action failed because no JUnit XML test reports were found. The log shows "Found 0 test_results files to report" and suggests reviewing the documentation at https://docs.codecov.com/docs/test-result-ingestion-beta.
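A plausible fix for the first issue (a sketch, assuming the workflow keeps pnpm/action-setup) is to pin pnpm in exactly one place and let the action read the packageManager field from package.json; the second issue would separately need the test step to emit JUnit XML for Codecov to ingest.

```yaml
# Hypothetical workflow step: omit the "version" input so pnpm/action-setup
# resolves pnpm from "packageManager": "pnpm@9.0.0" in package.json,
# avoiding ERR_PNPM_BAD_PM_VERSION.
- name: Setup pnpm
  uses: pnpm/action-setup@v4.0.0
```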

Relevant error logs:
```
1:  ##[group]Operating System
2:  Ubuntu
...

136:  ##[endgroup]
137:  ##[warning]Cache not found for keys: Linux-turbo-9b8643e2a93a4370361add7dd13c7266d13bfada, Linux-turbo-
138:  Cache not found for input keys: Linux-turbo-9b8643e2a93a4370361add7dd13c7266d13bfada, Linux-turbo-
139:  ##[group]Run pnpm/action-setup@v4.0.0
140:  with:
141:  version: latest
142:  dest: ~/setup-pnpm
143:  run_install: null
144:  package_json_file: package.json
145:  standalone: false
146:  env:
147:  TURBO_TOKEN: 
148:  TURBO_TEAM: 
149:  ##[endgroup]
150:  ##[group]Running self-installer...
151:  Error: Multiple versions of pnpm specified:
152:  - version latest in the GitHub Action config with the key "version"
153:  - version pnpm@9.0.0 in the package.json with the key "packageManager"
154:  Remove one of these versions to avoid version mismatch errors like ERR_PNPM_BAD_PM_VERSION
155:  at readTarget (/home/runner/work/_actions/pnpm/action-setup/v4.0.0/dist/index.js:1:4528)
156:  at runSelfInstaller (/home/runner/work/_actions/pnpm/action-setup/v4.0.0/dist/index.js:1:3742)
157:  at async install (/home/runner/work/_actions/pnpm/action-setup/v4.0.0/dist/index.js:1:2976)
158:  at async main (/home/runner/work/_actions/pnpm/action-setup/v4.0.0/dist/index.js:1:444)
159:  ##[error]Error: Multiple versions of pnpm specified:
160:    - version latest in the GitHub Action config with the key "version"
161:    - version pnpm@9.0.0 in the package.json with the key "packageManager"
162:  Remove one of these versions to avoid version mismatch errors like ERR_PNPM_BAD_PM_VERSION
163:  ##[group]Run codecov/test-results-action@v1.0.2
...

176:  gpg: Total number processed: 1
177:  gpg:               imported: 1
178:  gpg: Signature made Wed Mar 12 16:00:55 2025 UTC
179:  gpg:                using RSA key 27034E7FDB850E0BBC2C62FF806BB28AED779869
180:  gpg: Good signature from "Codecov Uploader (Codecov Uploader Verification Key) <security@codecov.io>" [unknown]
181:  gpg: WARNING: This key is not certified with a trusted signature!
182:  gpg:          There is no indication that the signature belongs to the owner.
183:  Primary key fingerprint: 2703 4E7F DB85 0E0B BC2C  62FF 806B B28A ED77 9869
184:  ==> Uploader SHASUM verified (39dd112393680356daf701c07f375303aef5de62f06fc80b466b5c3571336014  codecov)
185:  ==> Running version latest
186:  ==> Running version v10.2.1
187:  ==> Running command '/home/runner/work/_actions/codecov/test-results-action/v1.0.2/dist/codecov do-upload'
188:  [command]/home/runner/work/_actions/codecov/test-results-action/v1.0.2/dist/codecov do-upload -C b3422caafddca275e9f866c1181aba7720a9feb7 --report-type test_results
189:  info - 2025-03-23 16:47:27,661 -- ci service found: github-actions
190:  info - 2025-03-23 16:47:27,703 -- Found 0 test_results files to report
191:  error - 2025-03-23 16:47:27,704 -- No JUnit XML reports found. Please review our documentation (https://docs.codecov.com/docs/test-result-ingestion-beta) to generate and upload the file.
192:  info - 2025-03-23 16:47:28,131 -- No test results reports found. Triggering notifications without uploading.
```

@renovate renovate bot force-pushed the renovate/pypi-litellm-vulnerability branch from cf296ff to 7a10b95 on March 20, 2025 at 22:37
@renovate renovate bot changed the title from "chore(deps): update dependency litellm to v1.53.1 [security]" to "chore(deps): update dependency litellm to v1.61.15 [security]" on March 20, 2025
@renovate renovate bot force-pushed the renovate/pypi-litellm-vulnerability branch from 7a10b95 to b3422ca on March 23, 2025 at 16:47
@renovate renovate bot force-pushed the renovate/pypi-litellm-vulnerability branch from b3422ca to 577c360 on November 19, 2025 at 01:59