Hey 👋 I have been working on a Dust master agent that uses specialized child search agents. I am having trouble finding best practices on how to prompt the master agent and the search agents so that the orchestration and communication between agents is optimized. I feel like I am navigating this blind. Do you guys have any recommended resources or tips for these advanced use cases?
Hello Raphael Verdier 👋 The feature is brand new, so we don't have any best practices or official docs to share yet. Do you have specific issues with your agent? I can maybe help you debug. As a rule of thumb, it's important to understand that the instructions give you A LOT of control over how your agent uses the tools it has access to (incl. the "use other agent" tool). You can ask for things like: do this then that / always execute in parallel / never use X unless asked / etc.
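For example, a master agent's instructions could spell out the orchestration explicitly. This is a purely illustrative sketch; the agent names (@sales-search, @docs-search) and the specific rules are made up:

```
You are an orchestrator: you never search yourself, you delegate.
1. Break the user's request into focused sub-questions.
2. For each sub-question, call @sales-search or @docs-search with one self-contained query.
3. Run independent searches in parallel; run dependent ones sequentially.
4. Never call the same child agent more than twice for the same sub-question.
5. Once all results are back, write a single synthesis and note which child agent provided each fact.
```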
Hey, thanks for the answer. I am already trying to orchestrate how it interacts with the child agents, but I find that it is sometimes not responsive to my prompt instructions. That might be related to the issues I reported here: https://dustcommunity.slack.com/archives/C06SHT0F20G/p1753282471503569?thread_ts=1753183979.265599&cid=C06SHT0F20G For context, at the moment my child agents are all specialized search agents, so I can delegate the search and synthesis of results to the child agents and preserve the master agent's context window. Some questions I have about this approach:
What's the optimal way for the master and child agents to communicate with each other? Should I send short or long queries to the child agents? Should my child agents answer in markdown (useful for debugging) or in a structured JSON format (I had issues with this approach)?
If we don't go for a structured JSON answer, is there any benefit in telling the master agent how its child agents will answer?
Is the master agent able to aggregate the knowledge from successive calls to the child agents and draw a conclusion? Is that too much to ask of it? Should I maybe have other agents that draw business conclusions from my search agents' results, so that my master agent only becomes an orchestrator between all the agents?
Is it better to give my agents a lot of detail about how to solve problems, or is it better to stay concise so as not to restrict their approach to the problem too much?
> What's the optimal way for the master and child agents to communicate with each other? Should I send short or long queries to the child agents? Should my child agents answer in markdown (useful for debugging) or in a structured JSON format (I had issues with this approach)?
I'd say it doesn't matter too much
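If you do want something more predictable than free-form prose without going all the way to JSON, a lightweight markdown template for the child agents is usually enough. Just an illustration - adapt the sections to your use case:

```
## Answer
2-3 sentence synthesis of what was found.

## Sources
- <document title> - why it is relevant

## Gaps
What the search did not cover, if anything.
```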
> If we don't go for a structured JSON answer, is there any benefit in telling the master agent how its child agents will answer?
It really depends on the use case - it can be worth doing. But I would be careful not to overcomplicate the master agent's instructions.
> Is the master agent able to aggregate the knowledge from successive calls to the child agents and draw a conclusion? Is that too much to ask of it? Should I maybe have other agents that draw business conclusions from my search agents' results, so that my master agent only becomes an orchestrator between all the agents?
If I am not mistaken, the full conversation is given as context to the master agent (incl. retrieved documents) - but I'd need to retest to be 100% sure.
> Is it better to give my agents a lot of detail about how to solve problems, or is it better to stay concise so as not to restrict their approach to the problem too much?
As a rule of thumb, I usually start high level (give the role, high-level context, objective, and tools it has access to). I then iterate on the instructions to make them work specifically for my use case, i.e. keep them as high level as you can while the agent is still able to do what it's supposed to do.
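Concretely, a first pass at one of your child search agents' instructions could be as short as this. Hypothetical sketch - the space and tool names are made up:

```
Role: You are a search specialist for the Sales Notion space.
Context: A master agent calls you and will combine your answer with other agents' answers.
Objective: Answer exactly the query you receive, nothing more.
Tools: Use the Notion search tool only; never call other agents.
Output: A short synthesis followed by the list of documents you relied on.
```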
Ok thanks for the answers 🙂