The calls to Llama-3.2-11B-Vision-Instruct were working until about 2 hours ago, when I started getting this error response: {'code': None, 'message': 'unexpected_error', 'param': None, 'type': 'unexpected_error'}.
Hello @wwmcheung ,
Thank you for reaching out. We understand that you’re encountering the error {'code': None, 'message': 'unexpected_error', 'param': None, 'type': 'unexpected_error'}
with the Llama-3.2-11B-Vision-Instruct model.
To assist you better, could you kindly provide any relevant code snippets, error logs, or additional context related to the issue? This will help us troubleshoot and resolve it.
We appreciate your patience and will update you as soon as we have more information.
Thanks
Regards,
Rohit
The error happens for both Llama-3.2-11B-Vision-Instruct and Llama-3.2-90B-Vision-Instruct. Just go to the Playground, select one of these two models, and upload any image. You'll get unexpected_error, as in my screenshot.
Hello @wwmcheung,
We have identified the issue and, as part of the resolution, we will need to place the Llama-3.2-11B and 90B models, along with the Qwen Audio, into maintenance mode. At this moment, we are unable to provide an estimated time of resolution, but we will keep you updated as soon as we have more information.
We apologize for the inconvenience and appreciate your understanding.
Thanks
Best regards,
Rohit
Sad, this will cost me some money. Anyway, thanks for the update.
Engineering identified the root cause early this morning PDT. They are testing the fix in lower environments. Please PM me the email address you are using for your business account so we can give you a credit reward. The email tied to your community user has only consumed less than 3 cents' worth of tokens to date.
-Coby
Thanks but don’t worry about it @coby.adams. Last night I was working on a hackathon that was due this morning and while I was recording the mandatory demo video, the Vision API stopped working. That’s what I meant by this is going to cost me some money, the prize money. But you never know with hackathons where you stand.
Is this issue still unresolved? I’m having the same problem with Llama-3.2-90B-Vision-Instruct here right now. Thanks.
I should have mentioned that this is working in the Playground but not via the API for me.
Hi @edward.cruz, the issue is resolved.
If you are still seeing errors, could you kindly provide relevant code snippets, error logs, or additional context? This will help us troubleshoot and resolve the issue.
Thanks
I am getting this exact same error. Here is a snippet from my server log:
7AabR2gEPzfUdhAHPTg/4+0Z0qOtlBFrZqHrtbkP2I/51g2sRLDdpkD2bNthP9K/8PJjDpt8K/DrmvUUWByrc5A54+xkuxULte4DixgXUNjHOe3H6TbXi7RaZHBDHGCB17QeupD66vUUDItDFducZxn++YdtGa6ltYhhggIDxx1A+/MnymhK09CVmsu0tJrSsGu7AYf6W6Z/eOeH6LyKLbAyC45UMT3P/ANiLajUVuwrclq1HKgZOY3o22+EKCmWNh27jycDjHzFkf117KhHezmajUJbrLPOqsR6lADFfy9esHr6hZTVsO9HHJXnMGN2r1YZ8/mKnB4weOZ1vC7FSiopV5hpJHl4J7nnPT2nej//Z"
}
}
]
}
],
"stream": true,
"max_completion_tokens": 8192,
"temperature": 0.2,
"top_p": 0.2
}
-
Creating completion stream…
Stream created successfully
Stream object type: <class 'openai.Stream'>
Stream object dir: ['__annotations__', '__class__', '__class_getitem__', '__delattr__', '__dict__', '__dir__', '__doc__', '__enter__', '__eq__', '__exit__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__next__', '__orig_bases__', '__orig_class__', '__parameters__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__slots__', '__str__', '__stream__', '__subclasshook__', '__weakref__', '_cast_to', '_client', '_decoder', '_is_protocol', '_iter_events', '_iterator', 'close', 'response'] -
Processing stream chunks…
Error processing stream chunks: unexpected_error
Error type: <class ‘openai.APIError’>
Traceback (most recent call last):
  File "/home/runner/workspace/experimental/streaming.py", line 274, in send_request_stream_with_fallback
    for chunk in stream:
  File "/home/runner/workspace/.pythonlibs/lib/python3.11/site-packages/openai/_streaming.py", line 46, in __iter__
    for item in self._iterator:
  File "/home/runner/workspace/.pythonlibs/lib/python3.11/site-packages/openai/_streaming.py", line 72, in __stream__
    raise APIError(
openai.APIError: unexpected_error
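For what it's worth, the function name in the traceback (send_request_stream_with_fallback) suggests a fallback path around the stream. A generic sketch of that pattern follows; nothing here is from the actual app code, and a plain exception stands in for openai.APIError so the sketch is self-contained:

```python
# Sketch of a stream-with-fallback pattern, as hinted at by the
# traceback above. All names here are hypothetical; a bare Exception
# handler stands in for catching openai.APIError.
def consume_stream_with_fallback(make_stream, fallback):
    """Collect streamed chunks; if the stream errors mid-way, use the fallback."""
    chunks = []
    try:
        for chunk in make_stream():
            chunks.append(chunk)
        return "".join(chunks)
    except Exception:
        # Streaming failed partway (e.g. unexpected_error): retry without it.
        return fallback()

def broken_stream():
    # Simulates a stream that dies after the first chunk.
    yield "partial "
    raise RuntimeError("unexpected_error")

result = consume_stream_with_fallback(broken_stream, lambda: "full response")
```

Here `result` comes from the fallback call, since the simulated stream raises mid-iteration.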
@edward.cruz can you please try setting "max_completion_tokens": 4000 instead of "max_completion_tokens": 8192? The context length for Llama-3.2-90B-Vision-Instruct is 4K.
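For reference, a minimal sketch of the adjusted request payload. The parameter values other than max_completion_tokens are copied from the log earlier in the thread; the clamping helper is my own addition, not part of any SDK:

```python
# Sketch: cap max_completion_tokens at the model's 4K context window.
MODEL = "Llama-3.2-90B-Vision-Instruct"
CONTEXT_LIMIT = 4096  # 4K total context (prompt tokens + completion tokens)

def build_payload(messages, max_completion_tokens=4000):
    # Clamp the completion budget so it never exceeds the context window.
    capped = min(max_completion_tokens, CONTEXT_LIMIT)
    return {
        "model": MODEL,
        "messages": messages,
        "stream": True,
        "max_completion_tokens": capped,
        "temperature": 0.2,
        "top_p": 0.2,
    }

payload = build_payload([{"role": "user", "content": "Describe the image."}])
```

Note that the prompt (including the encoded image) also counts against the 4K window, so the effective completion budget may need to be smaller still.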
Thanks
I was also looking at your calls in our logs yesterday and it looks like your app made 7 calls at once. Is that expected?
Coby
Hi Coby, this was almost certainly due to an early iteration of my code, in which I assumed that the model could process multiple images in a single prompt, like gpt-4o-mini can, for example. I have since removed that code and restricted it to one image per prompt. HTH.
This seems related: Both Llama-3.2-11B-Vision-Instruct and Llama-3.2-90B-Vision-Instruct in your playground are not currently responding to text prompts.
@edward.cruz We have identified the issue. We are unable to provide an estimated time of resolution, but we will keep you updated as soon as we have more information.
We apologize for the inconvenience and appreciate your understanding.
Thanks
I got an email saying the Llama vision models will be deprecated on Apr 14. Will SambaNova be providing any replacements?
Yes @wwmcheung, we do have a replacement model lined up; we just cannot announce it yet.
Thank you.