Could you update your privacy policy to be explicit about whether and how you store and use the data (prompts, inputs, etc.) sent to serverless model inference under each plan tier? Privacy Policy | SambaNova Systems
@olivier This request has been passed on to our legal team. Please note it will not be an immediate turnaround. In the meantime, please refer to our current Terms of Service.
-Coby
Hello, is there an update on this important topic?
I would be interested too. I'd like to know whether this is safe to use in a company context with a confidential code base, especially when using continue.dev with it: Use QwenCoder2.5 32B on the SambaNova Cloud and never wait for replies.
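For context, this is the kind of setup in question: continue.dev can be pointed at SambaNova's OpenAI-compatible endpoint with a small config entry. This is a sketch only; the provider name, model identifier, and endpoint here are assumptions based on public docs, and until the data-retention terms are clarified it should only be pointed at non-confidential code:

```json
{
  "models": [
    {
      "title": "QwenCoder2.5 32B (SambaNova)",
      "provider": "sambanova",
      "model": "Qwen2.5-Coder-32B-Instruct",
      "apiKey": "YOUR_SAMBANOVA_API_KEY"
    }
  ]
}
```

With a config like this, every edit, autocomplete request, and chat message from the IDE is sent to the serverless endpoint, which is exactly why the retention question matters.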
We understand the situation with the legal team, but we need to be sure that users' confidential company code bases are kept confidential. Together AI and Amazon Bedrock are clear about privacy.
Considering that their core service is processing inputs/outputs, and that this is a common question from developers, for your own sake you should assume there is some degree of logging and access.
This doesn’t mean it is actually happening, but it’s common for companies to be vague on points they expect users won’t like. Other services are explicit because they know it enhances their marketability to developers.
Since LLMs are one of their specialties, it’s very unlikely this point wasn’t considered, and it doesn’t take two months to get a response from a legal team. Responding “we don’t log inputs and outputs” wouldn’t require a legal team if they actually don’t store or log them.
So without additional information, I would assume that data is stored for some unknown period of time and is NOT safe to use with confidential information.