Prompting Coach Prompt

I just came back from an outstanding conference with great talks from deeply intelligent people and I noticed a few common threads.

First of all, everyone is concerned that each interaction with an LLM costs the planet a bottle of water (I have no idea where this figure came from). More generally, the idea is that we need to address interactions that are not valuable, and that we need to address the environmental impact of massive GPU farms.

Secondly, there were many comments about how difficult it can be to figure out the best prompt for an LLM, one that delivers high value and low hallucination. There is a steep learning curve that both turns off new users and proliferates the naïve acceptance of false information. Both of these are very real problems.

And so I started tinkering…

If you are interested I invite you to give it a try:

"You are a Prompting Coach and your goal is to provide updated prompt suggestions based on a user input prompt.

Workflow: You will receive a prompt, you will ask for clarification if necessary, and you will offer a choice between an improved prompt and the original prompt. After getting the user's choice, you will provide a response to the chosen prompt (yours or the original).

Here are your rules:

  1. If there is any ambiguity in the user's prompt, ask for clarification before moving forward.

  2. Your prompt suggestions must focus on improving response accuracy through effective prompt design and prompt-engineering techniques. Do your best! You will dramatically improve user prompts. You will improve user efficiency by getting to well-grounded answers quickly.

  3. After you get a prompt from the user, you will offer the user a choice of "A", your new improved-accuracy prompt, or "B", the original user prompt. This allows the user to respond without rewriting the prompt.

  4. Provide a brief explanation for why you suggested the updated prompt, highlighting the specific improvements made. (This helps the user learn prompt engineering).

  5. Offer a few additional suggestions or variations for the user to consider, if applicable. Label those suggestions alphabetically as well.

  6. Learn from user feedback and adapt your prompt suggestions to better meet their needs.

Let me know if you understand, and then let’s get started right away. Please follow your rules as a Prompting Coach."
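If you want to try this outside a chat UI, here is a minimal sketch of how it could be wired up, assuming the OpenAI Python SDK against an OpenAI-compatible endpoint. The model name is a placeholder, not a recommendation; the script just installs the coach text as the system prompt and keeps the A/B exchange in a running message list.

```python
# Minimal sketch: run the Prompting Coach as a console chat loop.
# Assumes the OpenAI Python SDK and an OpenAI-compatible endpoint;
# the model name below is a placeholder.
from openai import OpenAI

COACH_PROMPT = """You are a Prompting Coach and your goal is to provide
updated prompt suggestions based on a user input prompt.
... (paste the full workflow and rules from above) ..."""

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def chat() -> None:
    messages = [{"role": "system", "content": COACH_PROMPT}]
    while True:
        user_text = input("You: ")
        if user_text.strip().lower() in {"quit", "exit"}:
            break
        messages.append({"role": "user", "content": user_text})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=messages,
        )
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print("Coach:", answer)


if __name__ == "__main__":
    chat()
```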


Amazing work Tim!! This is exactly the type of thing that will help new users.


Great first post, @tim.clarke. Welcome to the community.


In my practice, I find this even more powerful when used as a tool that refines tasks, especially repetitive work. I always pull the updated prompts, which I revise periodically.

I’m now working on an agentic framework that uses these techniques, combining the results of six different models’ wisdom. That’s also why I’m interested in whether more models will come to the Cloud platform later!
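For illustration, here is a minimal sketch of that combination step, assuming an OpenAI-compatible Python SDK; the model names and the judge prompt are hypothetical placeholders. It simply fans the same prompt out to each candidate model and asks one model to pick or merge the best answer.

```python
# Sketch: combine several models' answers, then let one model judge.
# Assumes an OpenAI-compatible endpoint; model names are placeholders.
from openai import OpenAI

client = OpenAI()
CANDIDATE_MODELS = ["model-a", "model-b", "model-c"]  # hypothetical names
JUDGE_MODEL = "model-a"  # also hypothetical


def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def combined_answer(prompt: str) -> str:
    # Fan the prompt out to every candidate model.
    answers = {m: ask(m, prompt) for m in CANDIDATE_MODELS}
    # Ask the judge model to pick (or synthesize) the best answer.
    judge_prompt = "Question:\n" + prompt + "\n\nCandidate answers:\n"
    for model, answer in answers.items():
        judge_prompt += f"\n[{model}]\n{answer}\n"
    judge_prompt += "\nReturn the single best answer, merging details if useful."
    return ask(JUDGE_MODEL, judge_prompt)
```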


I agree. In my mind, a Prompting Coach is a learning tool for users. The more we put it behind the curtain, the more effective it would be, but the less transformative it is for the Non-Artificial Intelligent life forms in the room.

I also had the idea of having a larger LLM create and then evaluate prompt effectiveness for smaller models, and then run an evolutionary refinement of techniques that could be processed in an agentic framework.

Something like: "You are a Foundational LLM and we are going to work together to discover the best way to prompt a smaller, faster model to reach the highest possible accuracy. You will create hard-to-solve questions, then create multiple prompting strategies to increase accuracy. We will then feed multiple prompts into smaller LLMs and you will judge the best response. We will repeat this process multiple times until you have developed a proven strategy for accurate prompting on smaller models."
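As a rough illustration of that loop, here is a sketch assuming an OpenAI-compatible Python SDK with placeholder model names and seed strategies: the large model writes a hard question and judges the answers, the small model answers once per prompting strategy, and the winning strategy is kept from round to round.

```python
# Sketch: large-model / small-model evolutionary prompt refinement.
# Assumes an OpenAI-compatible endpoint; model names are placeholders.
from openai import OpenAI

client = OpenAI()
LARGE_MODEL = "large-model"   # hypothetical "foundational" model
SMALL_MODEL = "small-model"   # hypothetical smaller, faster model


def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content


def one_round(strategies: list[str]) -> str:
    # 1. The large model invents a hard, verifiable question.
    question = ask(LARGE_MODEL, "Write one hard question with a verifiable answer.")
    # 2. The small model answers it once per prompting strategy.
    answers = [ask(SMALL_MODEL, s.format(question=question)) for s in strategies]
    # 3. The large model judges which strategy produced the best answer.
    judge_prompt = f"Question: {question}\n\n"
    for i, (s, a) in enumerate(zip(strategies, answers)):
        judge_prompt += f"Strategy {i}: {s}\nAnswer {i}: {a}\n\n"
    judge_prompt += "Reply with only the number of the best answer."
    reply = ask(LARGE_MODEL, judge_prompt)
    digits = [c for c in reply if c.isdigit()]
    best = int(digits[0]) if digits else 0  # fall back to the first strategy
    return strategies[best]


# Seed strategies are placeholders; repeat for several rounds and keep winners.
strategies = [
    "Answer concisely: {question}",
    "Think step by step, then answer: {question}",
    "List the relevant facts first, then answer: {question}",
]
for _ in range(3):
    print("Winning strategy this round:", one_round(strategies))
```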


That sounds great. Thanks for sharing your ideas.
I learn new things daily; I create at least 20-50 prompts for my development.

Right now, my most important challenge is building tools that help create abstract AI toolsets able to handle tasks autonomously.

Over the weekend, I hope to have the first candidate that can help me accelerate my development.

On the theory side, I also have a question: if a model can break problems down into prompts and optimize them, do we still need the skill of developing prompts ourselves?
