Hey Eloïse Barrège 👋 The question is how easily accessible the data is for agents (i.e. can the agent scrape the content?). LinkedIn is known to be very difficult to scrape: regardless of which URL you give, your agent will struggle to get the content from there.
An approach that works well is to use email newsletters as input. You can sign up for a bunch of newsletters, assign a tag using "filters" in Gmail, and push them into a Dust folder using Make. Once in Dust, you can plug the folder into an agent with the "Include data" tool. This is what I did to generate the newsletters in 🔒[private feed]
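If you'd rather script the Gmail → Dust step than use Make, here's a minimal sketch of the same pipeline: pull messages carrying your Gmail label over IMAP, extract a title and plain-text body, and upsert each one as a document. The Dust endpoint URL, the `newsletters` label name, and the payload shape are assumptions for illustration — check the Dust API docs for your workspace before relying on them.

```python
# Hypothetical sketch of the Gmail-label -> Dust-folder pipeline.
# The Dust endpoint, label name, and payload fields are assumptions.
import email
import imaplib
import json
import urllib.request
from email.header import decode_header

# Assumed Dust document-upsert endpoint — verify against the Dust API docs.
DUST_URL = "https://dust.tt/api/v1/w/{workspace}/data_sources/{ds}/documents/{doc_id}"


def fetch_tagged(user: str, password: str, label: str = "newsletters"):
    """Yield raw RFC 822 bytes for every message under a Gmail label.

    Gmail exposes labels as IMAP folders, so selecting the label folder
    gives you exactly the messages your filter tagged.
    """
    imap = imaplib.IMAP4_SSL("imap.gmail.com")
    imap.login(user, password)
    imap.select(f'"{label}"', readonly=True)
    _, data = imap.search(None, "ALL")
    for num in data[0].split():
        _, parts = imap.fetch(num, "(RFC822)")
        yield parts[0][1]
    imap.logout()


def message_to_doc(raw_bytes: bytes) -> tuple[str, str]:
    """Extract a (title, plain-text body) pair from a raw email message."""
    msg = email.message_from_bytes(raw_bytes)
    subject, enc = decode_header(msg.get("Subject", ""))[0]
    if isinstance(subject, bytes):
        subject = subject.decode(enc or "utf-8", errors="replace")
    body = ""
    for part in msg.walk():
        if part.get_content_type() == "text/plain":
            body += part.get_payload(decode=True).decode(
                part.get_content_charset() or "utf-8", errors="replace"
            )
    return subject, body


def upsert_to_dust(url: str, api_key: str, title: str, text: str):
    """POST one document to the (assumed) Dust upsert endpoint."""
    req = urllib.request.Request(
        url,
        method="POST",
        data=json.dumps({"title": title, "text": text}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)
```

Run on a schedule (cron, GitHub Actions), this replaces the Make scenario: each newsletter email lands in the Dust folder, and the agent picks it up through "Include data" exactly as described above.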