I am using the OpenAI NPM package for my app, and I use the types from OpenAI itself for message construction.
import type { ChatCompletionContentPart, ChatCompletionMessageParam } from 'openai/resources/chat/completions'

const userContent: ChatCompletionContentPart[] = [
  { type: 'text', text: query },
  { type: 'image_url', image_url: { url: base64Image, detail: 'auto' } }
]
const userMessage: ChatCompletionMessageParam = { role: 'user', content: userContent, name: versionId }
const messages: ChatCompletionMessageParam[] = [userMessage]
When sending these messages to SambaNova's vision model, I am getting an unexpected_error:
error: {
  code: null,
  message: 'unexpected_error',
  param: null,
  type: 'unexpected_error'
}
But this method works when I use it with OpenAI, just by changing the API key and endpoint. The 90B model works when I don't attach any image (I even get rate limited).
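For context, this is roughly how I point the client at SambaNova; only the key and base URL change between the two providers. Treat it as a minimal sketch (model name is the 90B vision model, everything else is illustrative):

import OpenAI from 'openai'

// Same OpenAI SDK; only the API key and endpoint are swapped for SambaNova.
const client = new OpenAI({
  apiKey: process.env.SAMBANOVA_API_KEY,
  baseURL: 'https://api.sambanova.ai/v1',
})

const completion = await client.chat.completions.create({
  model: 'Llama-3.2-90B-Vision-Instruct',
  messages,
})
console.log(completion.choices[0].message.content)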
This is what the final constructed messages array looks like:
[
  [
    {
      type: 'text',
      text: 'generate a sudoku app in this layout'
    },
    { type: 'image_url', image_url: [Object] }
  ]
]
Hello notagodzilla,
We will look into this issue and let you know as soon as we have an update. If you have any additional information or observations, please feel free to share them with us.
Thank you
Best Regards,
Rohit Vyawahare
Hey @rohit.vyawahare, the only weird observation I have is that the implementation works with OpenAI (I just change the endpoint and API key). I am getting the error with both vision models.
Edit:
When I tried sending a plain URL in image_url instead of base64, I got the following error asking for base64:
{
  error: {
    code: null,
    message: "`image_url` must start with 'data:image/<jpeg|jpg|png|webp>;base64,'",
    param: null,
    type: 'Error'
  }
}
So I'm guessing that my implementation of sending base64 images is correct.
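For reference, a minimal sketch of how a data URL like the one I pass as url: base64Image can be built (the file path and image type here are placeholders, not my actual code):

import { readFileSync } from 'fs'

// Prefix the raw base64 data exactly as the API expects:
// data:image/<jpeg|jpg|png|webp>;base64,<data>
const base64Image = `data:image/png;base64,${readFileSync('./screenshot.png').toString('base64')}`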
Thanks
Hi @notagodzilla
Can you please re-confirm whether you are using this request format for the vision models?
messages: [
  {
    role: "user",
    content: [
      {
        type: "text",
        text: "What are in these images?",
      },
      {
        type: "image_url",
        image_url: {
          url: `data:image/jpeg;base64,${base64Image}`,
          detail: "auto",
        },
      },
    ],
  },
]
A sample request body is mentioned here: sambanova_cloud_api_reference
Thanks & Regards
Hi @notagodzilla, can you verify your message structure and request format? The image data needs to be explicitly included as a base64 string. Make sure your base64 string is properly formatted (data:image/png;base64,<your_image_data>).
Below is my working code in Node.js. I've tried it on my system and it's working fine.
const fs = require('fs'); // For reading files
const axios = require('axios'); // For making API requests
require('dotenv').config(); // To manage environment variables securely

// Function to encode the image to Base64
function encodeImage(imagePath) {
  const imageBuffer = fs.readFileSync(imagePath);
  return imageBuffer.toString('base64');
}

// Path to your image
const imagePath1 = "give your image path"; // your image path

// Getting the base64 encoded string for the image
const base64Image1 = encodeImage(imagePath1);

// SambaNova API key and endpoint setup
const SAMBANOVA_API_KEY = process.env.SAMBANOVA_API_KEY || 'your-api-key'; // Make sure to use your correct API key
const BASE_URL = "https://api.sambanova.ai/v1";

// Prepare the request body for the Vision model
const requestBody = {
  model: "Llama-3.2-90B-Vision-Instruct",
  messages: [
    {
      role: "user",
      content: [
        {
          type: "text",
          text: "What are in these images?",
        },
        {
          type: "image_url",
          image_url: {
            url: `data:image/jpeg;base64,${base64Image1}`,
            detail: "auto",
          },
        },
      ],
    },
  ],
};

// Make the API request to SambaNova (Vision Instruct Model)
async function getSambaNovaResponse() {
  try {
    const response = await axios.post(
      `${BASE_URL}/chat/completions`, // Ensure the URL is correct
      requestBody,
      {
        headers: {
          Authorization: `Bearer ${SAMBANOVA_API_KEY}`,
          "Content-Type": "application/json",
        },
      }
    );
    // Log the response from the SambaNova API
    console.log("Response Text:", response.data.choices[0].message.content);
  } catch (error) {
    console.error("Error occurred while calling SambaNova API:", error.message);
    if (error.response) {
      console.error("Response Status:", error.response.status);
      console.error("Response Data:", error.response.data);
    }
  }
}

// Call the function to get the response
getSambaNovaResponse();
Hey @omkar.gangan, @hjha03144,
Yes, my message structure is the same as what you have provided. I think I have found the issue, and it is weird. I have my system prompt as the first message: { role: 'system', content: [{ type: 'text', text: dedent(GENERATE_PROMPT) }] }. If the role is system, the unexpected_error pops up whenever an image is attached; if no image is attached, it works fine. I just changed the role to user and the API works (with the image attached). Rough sketch of the change below.
cc: @rohit.vyawahare
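A sketch of the workaround, reusing the same variables as my earlier snippet; the prompt message simply gets the user role instead of system:

// Workaround: vision requests fail with unexpected_error when a system message
// is present alongside an image, so send the prompt with role 'user' instead.
const promptMessage: ChatCompletionMessageParam = {
  role: 'user',
  content: [{ type: 'text', text: dedent(GENERATE_PROMPT) }],
}
const messages: ChatCompletionMessageParam[] = [promptMessage, userMessage]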
That’s Great!! Vision models currently do not support system prompts. We are working on it.
Thanks & Regards