Optimizing User Feedback Aggregation: Architecture and Best Practices
Hey team, I'm building a system to aggregate user feedback from various sources. The goal is to first summarize everything into a table, then use a separate agent to prioritize problems based on our ICPs, product direction, and company knowledge. The final output should be an exhaustive list of problem statements, including feedback frequency, severity, verbatims, and sources.

I've split this into multiple specialized agents, and I'm wondering whether this architecture is right or overly complex for the problem. My aim was to specialize each agent's instructions. To define them, I used a separate Dust agent to generate the specific instructions for each individual agent (including the orchestrator). I made sure every agent understands its role within this ecosystem and the data structure it analyzes. I also defined a common result template so that the results produced by the specialized agents all follow the same structure (I've sketched it below the questions for context).

I've reviewed the existing guides, but some of them seem a bit outdated for this specific challenge. Any advice or insights would be greatly appreciated! I'm particularly wondering:
1. Is the architecture I've chosen correct, or is it overly complex for this problem?
2. Are there any patterns or best practices for inter-agent communication within Dust that I should be leveraging more explicitly?
3. Productboard has almost 40,000 notes; will the system only be able to sample them, or can I use the query table feature to effectively process all of them? Or should I feed the agents Productboard feature summaries instead of the raw notes?
4. Do you think this whole setup has a real chance of producing good results, or should I use the specialized agents independently?
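For context, here's roughly what my common result template looks like, expressed as a TypeScript shape. This is just a sketch for illustration; the field names are placeholders, and the real template lives in each agent's instructions:

```typescript
// Illustrative sketch of the common result template each specialized agent fills in.
// Field names are placeholders, not the exact wording used in the agent instructions.
interface ProblemStatement {
  problem: string;                      // one-sentence problem statement
  frequency: number;                    // how many pieces of feedback mention it
  severity: "low" | "medium" | "high";  // assessed impact on users
  verbatims: string[];                  // representative user quotes
  sources: string[];                    // e.g. Productboard note IDs, support tickets
}

// Every agent returns an array of these entries, so intermediate tables
// and the final prioritized output all share the same structure.
type FeedbackSummary = ProblemStatement[];
```

Each specialized agent is instructed to emit its results in this shape, and the orchestrator merges them before the prioritization step.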
Thanks!