Can anyone help me with using CrewAI and the SambaNova API (via langchain_community.chat_models.sambanova) together? I am getting version conflicts and errors when using both of them together. Has anyone already figured out how to fix this?
@debsouryadatta Welcome to the community! Could you possibly share some of your error stacks?
@hello1 You did some work with CrewAI as well, if I recall correctly. If so, could you offer some advice?
Yes, I used it, but in the end I decided to avoid both CrewAI and LangChain.
Overall, with CrewAI, the ChatOpenAI interface can be used, but you need to provide the SambaNova base URL and API key. The rate limits will hit the ceiling quickly, so you need to minimize the complexity of the tasks and build some waits into the calls.
For the models, you need to specify open/{model name} to work with the API.
I hope this helps.
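A dependency-free sketch of what that ChatOpenAI configuration boils down to: an OpenAI-style request pointed at the SambaNova endpoint with a bearer token. The base URL below is an assumption (check SambaNova's docs for the current endpoint), and `build_chat_request` is just an illustrative helper; in practice you would pass `base_url` and `api_key` to `ChatOpenAI` instead.

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible endpoint; verify against SambaNova's docs.
SAMBANOVA_BASE_URL = "https://api.sambanova.ai/v1"

def build_chat_request(model: str, messages: list, api_key: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{SAMBANOVA_BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request(
    "Meta-Llama-3.1-405B-Instruct",
    [{"role": "user", "content": "hello"}],
    os.environ.get("SAMBANOVA_API_KEY", "dummy-key"),
)
```

This is the same shape of call that LangChain's ChatOpenAI makes under the hood once you override its base URL.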
Yes @coby.adams, I got that fixed by just using ChatOpenAI (not ChatSambaNovaCloud) and changing the base URL.
Hey @hello1, so is it not worth it to create agents with CrewAI?
Does it really hit the rate limits that quickly?
Yes, these were my findings. I created a rate-limited interface for the SambaNova cloud and shared it on GitHub.
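For readers who want the idea without hunting down the GitHub repo: a minimal sketch of a rate-limited wrapper. This is not the shared implementation, just an illustration of enforcing a minimum interval between successive API calls; the class and interval here are made up for the example.

```python
import time

class RateLimiter:
    """Enforce a minimum interval between successive calls."""

    def __init__(self, min_interval_s: float):
        self.min_interval_s = min_interval_s
        self._last_call = 0.0

    def wait(self):
        # Sleep only if the previous call was too recent.
        elapsed = time.monotonic() - self._last_call
        if elapsed < self.min_interval_s:
            time.sleep(self.min_interval_s - elapsed)
        self._last_call = time.monotonic()

# Usage: call limiter.wait() right before each LLM request.
limiter = RateLimiter(0.05)
start = time.monotonic()
for _ in range(3):
    limiter.wait()
elapsed = time.monotonic() - start  # at least two enforced 50 ms gaps
```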
What I found is that my custom agent implementation performs far better than the CrewAI one, and as I remove more and more of the LangChain dependencies, I get fewer warnings about vulnerable libraries.
How about LangGraph? Have you used it before, and how does it perform?
I used both LangChain and LangGraph, but after I saw the vulnerabilities, I decided to decommission them from my code, as my goal is to keep everything as secure as possible.
Got it. Anyway, I guess LangGraph is better than CrewAI?
Hi @debsouryadatta,
I know this is old, but I want to give you a quick answer on how to get SambaNova models working with your CrewAI agents.
We have an LLM class that you can use to configure the model like so in your crew.py file:
```python
from crewai import Agent, Task, Crew, Process, LLM
import os

# Configure the LLM to use SambaNova
sambanova_llm = LLM(
    model="sambanova/Meta-Llama-3.1-405B-Instruct",
    api_key=os.environ.get("SAMBANOVA_API_KEY"),
    temperature=0.5
)
```
Make sure you’ve saved your API Key in .env in your root directory.
If you’re using the YAML format to build your CrewAI agent(s), you can update the file to refer to the LLM you want to use and bypass the step above. You still have to save the API Key in your .env file.
```yaml
researcher:
  role: Research Specialist
  goal: Conduct comprehensive research and analysis to gather relevant information,
    synthesize findings, and produce well-documented insights.
  backstory: A dedicated research professional with years of experience in academic
    investigation, literature review, and data analysis, known for thorough and
    methodical approaches to complex research questions.
  verbose: true
  llm: sambanova/Meta-Llama-3.1-405B-Instruct
```
More details on this process in our docs.
@tony thank you so much and welcome to the community!
I would say it is more flexible, and you have more control over the processes.
Okay, I see. Then I should go with LangGraph for my next projects ✌️
Thanks very much for the detailed explanation; it will be helpful. But I guess this won't work with RAG implementations, where we need certain CrewAI tools for vector embeddings and even CrewAI's web search tools.
I'm also using RAG without any of those tools (LangChain/LangGraph or CrewAI).
You can use a database such as SurrealDB, or do the embedding and distance search separately. It is a one-time effort.
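To show what "doing the distance search separately" amounts to: a small, framework-free sketch of cosine-similarity retrieval. The vectors here are tiny stand-ins; in a real setup the embeddings would come from an embedding model or API, and the helper names are illustrative.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, doc_vecs, k=2):
    """Return the ids of the k docs most similar to the query."""
    scored = sorted(
        doc_vecs.items(),
        key=lambda kv: cosine_similarity(query_vec, kv[1]),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

# Toy 2-d "embeddings" standing in for real model output.
docs = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [0.7, 0.7]}
result = top_k([1.0, 0.1], docs, k=2)  # "a" is closest, then "c"
```

Swap the toy dict for vectors stored in any database (SurrealDB, SQLite, etc.) and you have the retrieval half of RAG without pulling in the agent frameworks.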