What Models Would You Like to See on SambaNova Cloud?

Full DeepSeek R1. Don't care even if it's more expensive than Llama 3 405B.

Need speed.
No other R1 providers are providing speed.


SambaNova is a good thing, but a 16k context window will devalue any model. I want to use aider with 128k coding models, and SambaNova has no offering for me ;(

However, I appreciate the new Developer plan; at least that's something.


@kollikiran456 R1 is live in the playground, and there is a sign-up waitlist for the API. Give it a spin.

-Coby

@sannysanoff I hear you, and I am passing your exact quote on to our product teams.

-Coby

It’s here and available in the playground. There is a signup waitlist for API access.

-Coby


Aye… For now I'll try it in the playground while waiting for the API to be released.

Make sure you sign up on the waitlist.

-Coby

Hi Coby, one comment: it would also be great to have a model with a large context window, such as Qwen 2.5-Turbo, which is capable of 1 million tokens.

It could preprocess the input data, and a more sophisticated model could then produce the final answer. I could imagine a two-step flow that eliminates iterating over chunks.
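A minimal sketch of the flow I have in mind, assuming an OpenAI-compatible chat endpoint; the base URL and model names here are placeholders I made up, not actual SambaNova offerings:

```python
# Hypothetical two-step flow: a long-context model condenses the raw input,
# then a stronger reasoning model answers from the condensed version.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.sambanova.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

def answer_large_document(document: str, question: str) -> str:
    # Step 1: a long-context model reads the whole document in one pass
    # and extracts only the parts relevant to the question.
    condensed = client.chat.completions.create(
        model="long-context-model",  # placeholder, e.g. a Qwen 2.5-Turbo-class model
        messages=[
            {"role": "system", "content": "Extract the passages relevant to the question."},
            {"role": "user", "content": f"Question: {question}\n\nDocument:\n{document}"},
        ],
    ).choices[0].message.content

    # Step 2: a more sophisticated model produces the final answer
    # from the condensed context, with no chunk-by-chunk iteration.
    final = client.chat.completions.create(
        model="reasoning-model",  # placeholder, e.g. a DeepSeek-R1-class model
        messages=[
            {"role": "user", "content": f"Question: {question}\n\nRelevant context:\n{condensed}"},
        ],
    )
    return final.choices[0].message.content
```

That way the first model handles the context length and the second handles the answer quality, instead of looping over chunks.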

One more addition that could help with one of my project components:

Microsoft OmniParser 2.0:

https://huggingface.co/microsoft/OmniParser-v2.0