Extract_Data token limit

Hello,
I'm currently trying to analyse hundreds of files stored in a folder on Dust to identify trends.
I've set up an Extract_Data source that works the way I want, but it only scans the last 5-6 files in the folder and then tells me it has reached its token limit. The files are quite long (Google Meet transcripts of roughly one hour each).
How can I get it to analyse a larger number of files before extracting trends, even if I have to run it several times? (Is there a way to restrict the analysis to paragraphs that mention certain topics, in order to limit token usage?) Thanks in advance

Hi,

Unfortunately there's no way to bypass this, as the Extract_Data tool is limited to the context window of the model you're talking to :confused:
Maybe a good approach would be to run an assistant on each transcript to extract the essence of what you're looking for, then run Extract_Data on those summaries?
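On your side question about limiting the analysis to certain paragraphs: one option is to trim each transcript locally before uploading, keeping only the paragraphs that mention your topics. A minimal Python sketch, assuming plain-text transcripts in a `transcripts/` folder and a hand-picked keyword list (both hypothetical):

```python
from pathlib import Path

# Hypothetical topics of interest -- adjust to your use case.
KEYWORDS = {"pricing", "churn", "roadmap"}

def relevant_paragraphs(text: str) -> str:
    """Keep only the paragraphs that mention at least one keyword."""
    paragraphs = text.split("\n\n")
    kept = [p for p in paragraphs if any(k in p.lower() for k in KEYWORDS)]
    return "\n\n".join(kept)

src = Path("transcripts")  # assumed input folder of .txt exports
dst = Path("filtered")     # filtered copies, much smaller token-wise
dst.mkdir(exist_ok=True)

for f in src.glob("*.txt"):
    filtered = relevant_paragraphs(f.read_text(encoding="utf-8"))
    (dst / f.name).write_text(filtered, encoding="utf-8")
```

You'd then upload the `filtered/` folder instead of the raw transcripts, which should let Extract_Data cover many more files within the same token budget.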

I see.
How could I automatically save each outcome the bot gives me into a folder, so that I can run Extract_Data over that folder at the end of the process?

You'd need to go through the API for that!
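Roughly, the loop would be: send each transcript to an assistant via the Dust API and write each answer into a local folder that you can then feed back to Extract_Data. A sketch of the idea in Python, where the endpoint path, payload shape, and IDs are illustrative placeholders rather than the exact API contract (check the Dust API docs before using):

```python
import os
from pathlib import Path

import requests

DUST_API_KEY = os.environ["DUST_API_KEY"]
WORKSPACE_ID = "YOUR_WORKSPACE_ID"  # placeholder
ASSISTANT_ID = "YOUR_ASSISTANT_ID"  # placeholder

headers = {"Authorization": f"Bearer {DUST_API_KEY}"}
out = Path("summaries")
out.mkdir(exist_ok=True)

for transcript in Path("filtered").glob("*.txt"):
    # Ask the assistant to summarise one transcript.
    # Endpoint and payload are illustrative, not the exact Dust API shape.
    resp = requests.post(
        f"https://dust.tt/api/v1/w/{WORKSPACE_ID}/assistant/conversations",
        headers=headers,
        json={
            "message": {
                "content": "Summarise the key points of this transcript:\n\n"
                + transcript.read_text(encoding="utf-8"),
                "mentions": [{"configurationId": ASSISTANT_ID}],
            },
        },
        timeout=120,
    )
    resp.raise_for_status()
    # The real API may return the conversation asynchronously, so you might
    # need to poll for the agent's answer; this sketch saves the raw response.
    (out / transcript.name).write_text(resp.text, encoding="utf-8")
```

Once the `summaries/` folder is populated, you can upload it as a new Dust folder and point Extract_Data at that instead of the raw transcripts.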