[BUG]: Invocation of model ID deepseek.r1-v1:0 with on-demand throughput isn’t supported. Retry your request with the ID or ARN of an inference profile that contains this model. #3441
I got the error below when using Bedrock with DeepSeek; it seems that Bedrock requires a request with an inferenceConfig, as in the AWS sample below.
Error msg: Invocation of model ID deepseek.r1-v1:0 with on-demand throughput isn’t supported. Retry your request with the ID or ARN of an inference profile that contains this model.
AWS sample API query:

```json
{
  "modelId": "deepseek.r1-v1:0",
  "contentType": "application/json",
  "accept": "application/json",
  "body": {
    "inferenceConfig": {
      "max_tokens": 512
    },
    "messages": [
      {
        "role": "user",
        "content": "this is where you place your input text"
      }
    ]
  }
}
```
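For what it's worth, the error message itself says the bare model ID can't be used with on-demand throughput and asks for an inference profile instead. Below is a minimal boto3 sketch that mirrors the sample above but targets a cross-region inference profile; the profile ID `us.deepseek.r1-v1:0` and the region `us-east-1` are assumptions on my part, so verify what exists in your account first.

```python
# Minimal sketch (not this project's code): invoke DeepSeek R1 on Bedrock
# through an inference profile instead of the bare model ID.
# ASSUMPTIONS: the cross-region profile "us.deepseek.r1-v1:0" is available
# to your account, and "us-east-1" is your region.
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

request_body = {
    "inferenceConfig": {"max_tokens": 512},
    "messages": [
        {"role": "user", "content": "this is where you place your input text"}
    ],
}

response = client.invoke_model(
    # Inference profile ID, not the bare "deepseek.r1-v1:0" model ID.
    modelId="us.deepseek.r1-v1:0",
    contentType="application/json",
    accept="application/json",
    body=json.dumps(request_body),
)

print(json.loads(response["body"].read()))
```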
Are there known steps to reproduce?
Set the model ID to deepseek.r1-v1:0 on the LLM provider page. A sketch for finding a working profile ID follows below.
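To find an inference profile that actually contains this model, you can enumerate the profiles visible to your account. A small sketch, assuming boto3 and `us-east-1` (pagination omitted for brevity):

```python
# Sketch: list inference profile IDs in this account/region to locate
# one that wraps deepseek.r1-v1:0.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

for profile in bedrock.list_inference_profiles()["inferenceProfileSummaries"]:
    print(profile["inferenceProfileId"])
```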