Temperature in the API

Hi all, I hope you can help me with the following problem.

I have a use case where I need to generate different samples for the same prompt, which is typically done by setting a high temperature. However, the following request always returns the same response:

curl -H "Authorization: Bearer $SAMBANOVA_API_KEY" -H "Content-Type: application/json" \
     -d '{
"stream": false,
"model": "Meta-Llama-3.2-3B-Instruct",
"temperature": 1.0,
"messages": [
{
"role": "system",
"content": "You are helpful assistant."
},
{
"role": "user",
"content": "write me a poem"
}
]
}'

So it seems that responses are always cached. Is there a way to avoid caching? Alternatively, the OpenAI API has an "n" parameter for requesting multiple candidate generations in one response, but that does not seem to work in the SambaNova API.
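For reference, this is the kind of request I tried, following the OpenAI convention where "n": 3 should return three candidate completions in a single response (here it appears to have no effect):

# Sketch of an OpenAI-style multi-candidate request; "n" is an OpenAI
# chat completions parameter and does not seem to be honored here.
curl https://api.sambanova.ai/v1/chat/completions \
     -H "Authorization: Bearer $SAMBANOVA_API_KEY" \
     -H "Content-Type: application/json" \
     -d '{
       "stream": false,
       "model": "Meta-Llama-3.2-3B-Instruct",
       "temperature": 1.0,
       "n": 3,
       "messages": [
         {"role": "user", "content": "write me a poem"}
       ]
     }'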

Hello Mislav,

Thank you for bringing this to our attention. We are currently looking into the issue you’re experiencing and will get back to you with an update as soon as possible.

We appreciate your patience in the meantime!

Best regards,
Prajwal.

Yeah, I have this problem too.
Maybe it would be better to add a seed parameter so we can make sampling either random or fixed.
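For example, following the OpenAI chat completions convention (just a sketch; as far as I can tell, seed isn't currently supported in the SambaNova API):

# A fixed "seed" would make sampling reproducible; omitting or varying it
# would keep sampling random. This assumes the OpenAI-style field is adopted.
curl https://api.sambanova.ai/v1/chat/completions \
     -H "Authorization: Bearer $SAMBANOVA_API_KEY" \
     -H "Content-Type: application/json" \
     -d '{
       "model": "Meta-Llama-3.2-3B-Instruct",
       "temperature": 1.0,
       "seed": 42,
       "messages": [{"role": "user", "content": "write me a poem"}]
     }'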

Hi,

We’ve identified the issue and our team is working on a fix, which we expect to have in place shortly.

We appreciate your patience and understanding as we work to resolve this. If you have any further questions or need additional assistance in the meantime, feel free to reach out.

Thank you,
Prajwal
