Thank you, Guillaume Linet. I see what you mean. But do we expect large corporate use cases such as SharePoint to expose a native MCP server? Using a third-party MCP server is out of the question for most companies that care about internal access management (exposing data to a third party is simply not considered).
On a different note, the native Dust retrieval engine (with its own vector DB) is what provides a lot of the value in many use cases (it is excellent), and, if I understand correctly, it is enabled by the upfront ingestion of the entire knowledge base. An MCP server relies on the retrieval of the tool it connects to, which is often very poor (hence the need for Dust in the first place).
Taking the Microsoft example again: for most use cases, good ingestion, embedding and retrieval are needed to build agents that can fetch the correct piece of information to accomplish their task. The plain SharePoint search (or even MS's "Copilot Retrieval") often fails to find the right document, page or even sentence the task requires, and I wouldn't expect a future MCP server to outperform that native search.
With MCP, the Dust agent would have to rely on the MCP server's ability to find the right file, roughly the contrast in the sketch below.
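To make the contrast concrete, here is a rough TypeScript sketch. All types and function names are hypothetical (not Dust's or Microsoft's actual APIs): one path ranks over a pre-built embedding index of the ingested knowledge base, the other just forwards the query to whatever search tool the MCP server exposes and takes what it returns.

```typescript
// Hypothetical types, purely illustrative of where retrieval quality comes from.

interface Chunk {
  documentId: string;
  text: string;
  embedding: number[];
}

// Path A: "Dust-style" retrieval over an already-ingested, embedded corpus.
// Quality depends on an ingestion + embedding + ranking pipeline we control.
async function retrieveFromIngestedIndex(
  query: string,
  embed: (text: string) => Promise<number[]>,
  index: Chunk[],
  topK = 5
): Promise<Chunk[]> {
  const queryEmbedding = await embed(query);
  return index
    .map((chunk) => ({ chunk, score: cosineSimilarity(queryEmbedding, chunk.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map(({ chunk }) => chunk);
}

// Path B: "MCP-style" retrieval. The agent delegates to the remote server's
// search tool and gets back whatever that tool's own ranking decides to return.
interface McpSearchTool {
  // Hypothetical signature for a remote "search" tool call.
  callSearch(query: string): Promise<{ documentId: string; snippet: string }[]>;
}

async function retrieveViaMcp(query: string, server: McpSearchTool) {
  return server.callSearch(query);
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1);
}
```

In the second path there is no knob to turn if the tool's ranking is poor; in the first, the whole pipeline is ours to improve.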
If the SharePoint content were wholly ingested but access-partitioned within Dust, you would keep both the excellent quality of Dust's retrieval AND respect for the native SharePoint access rights.
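Something like this is what I have in mind (again, hypothetical types, not the actual Dust connector code): the corpus stays fully ingested and embedded, but each chunk carries the principals allowed to read it in SharePoint, and retrieval filters on the requesting user's identity and groups before ranking.

```typescript
// Hypothetical sketch of access-partitioned retrieval: full ingestion, with
// SharePoint ACLs mirrored onto each chunk and enforced at query time.

interface AclChunk {
  documentId: string;
  text: string;
  embedding: number[];
  allowedPrincipals: Set<string>; // users/groups mirrored from SharePoint ACLs
}

function userCanRead(chunk: AclChunk, userPrincipals: string[]): boolean {
  return userPrincipals.some((p) => chunk.allowedPrincipals.has(p));
}

async function retrieveWithAcl(
  query: string,
  userPrincipals: string[], // the requesting user's identity + group memberships
  embed: (text: string) => Promise<number[]>,
  index: AclChunk[],
  topK = 5
): Promise<AclChunk[]> {
  const queryEmbedding = await embed(query);
  return index
    .filter((chunk) => userCanRead(chunk, userPrincipals)) // enforce SharePoint rights
    .map((chunk) => ({ chunk, score: cosine(queryEmbedding, chunk.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map(({ chunk }) => chunk);
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}
```

The key point is that the access check happens before ranking, inside Dust's own retrieval, so you get Dust-quality results without ever showing a user a chunk their SharePoint permissions wouldn't allow.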