[Feature]: LiteLLM MCP Roadmap #9891
Replies: 7 comments 23 replies
-
-
Can this be made into a discussion, @ishaan-jaff? It'll be easier to track.
-
I think a very common problem is that the number of installed MCP servers explodes very quickly. That makes prompt engineering much more difficult, because you have to carefully tune your prompt so that the model doesn't choose the wrong tools for the task at hand. So I think it would be a great feature if the proxy offered several endpoints, each exposing a different subset of the installed MCP servers: one endpoint for your IDE focused on software development tools, another for a general chat client, and so on. In short, a mechanism for defining different profiles with different sets of MCP servers available.
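To make the idea concrete, here is a minimal sketch of what such profiles could look like. All names (`INSTALLED_SERVERS`, `PROFILES`, `servers_for_profile`) are illustrative assumptions, not existing LiteLLM configuration:

```python
# Hypothetical sketch: expose per-profile MCP endpoints so each client
# only sees a curated subset of the installed servers.

INSTALLED_SERVERS = {
    "github": {"url": "http://localhost:3001/mcp"},
    "filesystem": {"url": "http://localhost:3002/mcp"},
    "jira": {"url": "http://localhost:3003/mcp"},
    "web_search": {"url": "http://localhost:3004/mcp"},
}

# One profile per use case, e.g. /mcp/dev for the IDE, /mcp/chat for chat clients.
PROFILES = {
    "dev": ["github", "filesystem", "jira"],
    "chat": ["web_search"],
}

def servers_for_profile(profile: str) -> dict:
    """Return only the MCP servers visible on the given endpoint profile."""
    allowed = PROFILES.get(profile, [])
    return {name: cfg for name, cfg in INSTALLED_SERVERS.items() if name in allowed}
```

The proxy would then mount one route per profile and only advertise that profile's tools to the model.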
-
Would love to tackle this one for the initial proxy features:
One thing to consider is restricting LiteLLM to the streamable HTTP transport only, since SSE requires a persistent connection. If we wanted to support SSE, we would need to either stay connected continuously, or connect only on tool runs and periodically listen for changes. I think authentication is an important discussion, and I'd be curious to see where you're going with that requirement. It makes sense to pass a token through from the LiteLLM proxy to the MCP servers for server-side authn/authz. Do we also want the LiteLLM proxy to be able to store its own cert and generate JWTs, like the new feature?
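The token pass-through part could be as simple as copying the caller's bearer token onto the outbound streamable HTTP request. A minimal sketch, assuming a helper `build_mcp_headers` that is illustrative and not an existing LiteLLM function:

```python
# Hypothetical sketch of token pass-through: the proxy forwards the caller's
# Authorization header to the downstream MCP server, so authn/authz is
# enforced at the server itself.

def build_mcp_headers(incoming_headers: dict) -> dict:
    """Copy the caller's Authorization header onto the outbound MCP request."""
    headers = {"Content-Type": "application/json"}
    token = incoming_headers.get("Authorization")
    if token:
        # Pass the user's token through unchanged; the MCP server validates it.
        headers["Authorization"] = token
    return headers
```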
-
Can anyone help me understand how LiteLLM MCP relates to the MCP registry and MCP proxy discussions in the links below? modelcontextprotocol/registry#11 https://github.com/orgs/modelcontextprotocol/discussions/73 Is LiteLLM MCP a registry implementation or a proxy? What are the plans to adopt the official registry implementation?
-
Hi team,
One point I'd like to raise for consideration is the importance of forwarding user credentials (or some representation of the user's identity, such as a JWT or access token) to the MCP, in addition to the LiteLLM API key. This would allow the MCP to enforce fine-grained authentication and authorization based on the actual end user's identity. This is especially relevant when:
- Access to specific tools or actions in the MCP is restricted per user or role.
- Audit logs or usage policies need to reflect the individual user, not just the API key.
- Multi-tenant scenarios require isolation at the user level.

Ensuring that user-level context is passed through LiteLLM to the MCP would enable better security, traceability, and policy enforcement. Is this something that's already on the roadmap or being considered?
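As a sketch of the per-user enforcement side, the proxy could read claims from the end user's JWT and gate tool calls on them. The claim name `allowed_tools` and both helpers are assumptions for illustration; real code must verify the JWT signature (e.g. with a JOSE library) before trusting the claims:

```python
# Hypothetical sketch: enforce per-user tool access using claims from the
# end user's JWT, in addition to the LiteLLM API key.

import base64
import json

def decode_jwt_claims(jwt_token: str) -> dict:
    """Decode the payload segment of a JWT. NOTE: this does not verify the
    signature; production code must validate it first."""
    payload_b64 = jwt_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def user_may_call(claims: dict, tool_name: str) -> bool:
    """Allow the call only if the user's claims list the tool."""
    return tool_name in claims.get("allowed_tools", [])
```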
-
What is the roadmap for the other MCP primitives, like resources and prompts? The docs suggest that tools are the only supported type right now. Is it possible to build this in a way such that each new primitive doesn't need additional dev work from LiteLLM to support the new MCP protocol option? Using LiteLLM as a prompt library with access control across teams would be really cool, since you'd get the LiteLLM access controls for free without having to build them into the MCP server.
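The team-scoped prompt library idea could be sketched like this. `TEAM_PROMPT_ACCESS`, `ALL_PROMPTS`, and `list_prompts_for_team` are hypothetical names, not LiteLLM API:

```python
# Hypothetical sketch: gate MCP prompts per team at the proxy, so access
# control lives in LiteLLM rather than in each MCP server.

TEAM_PROMPT_ACCESS = {
    "platform": ["code_review", "incident_summary"],
    "support": ["incident_summary"],
}

ALL_PROMPTS = {
    "code_review": "Review this diff for correctness and style...",
    "incident_summary": "Summarize this incident timeline...",
}

def list_prompts_for_team(team: str) -> dict:
    """Return only the prompts the team is entitled to see."""
    allowed = TEAM_PROMPT_ACCESS.get(team, [])
    return {name: text for name, text in ALL_PROMPTS.items() if name in allowed}
```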
-
The Feature
Starting this to discuss LiteLLM MCP Roadmap
Demo Video
Existing MCP walkthrough here
Are you a ML Ops Team?
No
Twitter / LinkedIn details
No response