At AIvantGuard, we see a significant security gap in nearly all AI implementations.
The fundamental problem is that everyone focuses on features rather than on the security of their solutions.
To start a movement in the industry, we have released an open-source, OpenAI-compatible API client library that demonstrates the problem.
In all existing tools, credentials and tokens remain in memory, where they can be read by other processes after the process releases that memory. Nearly all currently used languages are affected (a sketch of the mitigation pattern follows below).
In more detail about the problem:
Let’s start fixing the ecosystem and make AI more secure.
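As a rough illustration (a generic sketch, not our library's actual code), here is what the mitigation pattern can look like in Rust, assuming the third-party `zeroize` crate is available; the `Zeroizing` wrapper overwrites the secret before its memory is released, while the `format!` call shows how easily an unprotected copy is created:

```rust
// Cargo.toml dependency (assumption): zeroize = "1"
use zeroize::Zeroizing;

fn main() {
    // Wrap the API key so its bytes are overwritten when the value is dropped,
    // instead of lingering in heap memory the allocator may hand out again.
    let api_key = Zeroizing::new(String::from("sk-example-not-a-real-key"));

    send_request(&api_key);

    // When `api_key` goes out of scope here, Zeroizing wipes the buffer
    // before the allocation is released.
}

fn send_request(key: &str) {
    // Placeholder for an HTTP call that attaches the key as a bearer token.
    let _auth_header = format!("Bearer {key}");
    // Note: this `format!` creates another copy of the secret that is NOT
    // wiped -- exactly the kind of leftover plaintext being discussed.
}
```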
@hello1 Interesting read. Based on this, most of the work needs to be done on the client application side in a traditional AI model interaction, correct?
I think it is also a problem on the server side.
I have doubts whether all received strings are overwritten before the drop operation (see the sketch below).
As most frameworks are based on Python and use Python's memory management, it is a general problem.
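For illustration, a minimal, hypothetical sketch of what "overwritten before the drop operation" means for a received string; `SecretBuf` is an invented name, not an API from any real library:

```rust
// Hypothetical wrapper for a received secret (e.g. a token in a response body).
struct SecretBuf(Vec<u8>);

impl Drop for SecretBuf {
    fn drop(&mut self) {
        // Overwrite the plaintext before the allocation is handed back,
        // so it cannot be read later from freed memory.
        for byte in self.0.iter_mut() {
            *byte = 0;
        }
        // Caveat: the optimizer may remove this loop because the buffer is
        // about to be freed; production code should use a vetted crate
        // (e.g. zeroize) that guards against that.
    }
}

fn main() {
    let token = SecretBuf(b"received-bearer-token".to_vec());
    println!("token length: {}", token.0.len());
    // `token` is wiped in its Drop impl before the memory is released here.
}
```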