Hi team, I'm looking for guidance on using Dust to analyze a set of logs from a tarball. My current approach is to:
1. Use the Dust API to create a folder (note that I couldn't find an API for this).
2. Upload the extracted logs to that folder.
3. Use an agent to identify errors within the logs.
4. (Ideally) Have an option to delete the folder afterwards.
Does this sound like a reasonable approach? Any tips or alternative suggestions would be greatly appreciated!
Hey Romain Jouhannet, looks like a great way to go about it! You have to create the folder manually, but once it's created you can use the API or the Zapier/Make/n8n integration to push new docs into it. You can then create an agent with the "most recent" or "extract data" tools. The former feeds as many docs as possible (fitting the context window) into the answer; the latter loops through all the docs to extract specific data and stores the result in a table. Let me know if you have any other questions!
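For reference, the "extract the tarball, then push each log via the API" step could be sketched roughly like this. This is a minimal illustration, not Dust's confirmed API: the base URL, endpoint path, JSON payload shape, and all names (`workspace_id`, `data_source`, etc.) are assumptions modeled on a generic document-upsert API, so check the real Dust API docs before using it.

```python
import json
import pathlib
import tarfile
import urllib.parse
import urllib.request

DUST_API = "https://dust.tt/api/v1"  # assumed base URL, verify against the docs


def extract_logs(tarball_path, dest_dir):
    """Extract only the *.log files from a support-bundle tarball."""
    with tarfile.open(tarball_path) as tar:
        members = [m for m in tar.getmembers()
                   if m.isfile() and m.name.endswith(".log")]
        tar.extractall(dest_dir, members=members)
    return [pathlib.Path(dest_dir) / m.name for m in members]


def build_upsert_request(workspace_id, data_source, doc_id, text, api_key):
    """Build (but do not send) a document-upsert request.

    The URL shape and JSON body here are assumptions about what a
    folder/data-source upsert endpoint might look like, not Dust specifics.
    """
    url = (f"{DUST_API}/w/{workspace_id}/data_sources/{data_source}"
           f"/documents/{urllib.parse.quote(doc_id, safe='')}")
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, method="POST",
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"})
```

You would then loop over `extract_logs(...)`, build one request per file, and send each with `urllib.request.urlopen(req)`.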
Those logs are meant to be linked to a support ticket our customers open (we provide a self-hosted version of our application), so I was hoping not to have to create the folder manually: I'd potentially have multiple support engineers uploading the contents of multiple support bundles to Dust for analysis, and I don't want to mix them all in the same folder. Do you know of any other way I could go about it? Or could folder creation via the API be added?
Ah! This is a bit trickier, as you'd also need to automate the creation of the agent linked to the new folder. The API doesn't give this kind of flexibility at the moment, unfortunately. One thing you could explore is the new dynamic filtering function: push the customer ID/name as a tag, allow the agent to filter on it, then add by default something like "filter to see only docs with tag: X". I'm not 100% positive it will work in your case, but it is worth exploring.
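To make the tag idea concrete, here's a rough client-side sketch. The `customer:<id>` tag convention and the filter helper are my own illustration of the "one tag per customer, agent filters on it" approach, not part of the Dust API:

```python
def customer_tag(customer_id):
    """One tag per customer/ticket; 'customer:<id>' is an assumed convention."""
    return f"customer:{customer_id}"


def docs_for_customer(docs, customer_id):
    """Illustrates which docs a tag filter would select.

    Each doc is a dict with a 'tags' list, mirroring tags you'd attach
    at upload time.
    """
    tag = customer_tag(customer_id)
    return [d for d in docs if tag in d.get("tags", [])]
```

So each engineer would upload their bundle's logs with `tags=[customer_tag("acme")]`, and the agent instruction would be something like "filter to see only docs with tag: customer:acme", keeping all customers in one folder without mixing their results.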