
Possible to launch a server and client in separate processes? #84

Open
logan-markewich opened this issue Dec 2, 2024 · 2 comments

@logan-markewich

When looking at the examples, it seems like you always need to launch the server and client in the same script, because they share the read/write variables.

Is there a way to launch these two pieces in their own scripts? If so, is it documented? This feels like an extremely common use case that might be missing.

Trying to write an MCP server integration for tools in llama-index and realized I can't figure it out.

Thanks for any help!

@dsp-ant
Member

dsp-ant commented Dec 2, 2024

I am not sure I fully understand. MCP is a client/server architecture: clients and servers operate independently and speak to each other over a transport.

Implementing a client

If you want to implement a client, you likely want to connect to servers via either STDIO or HTTP+SSE. For STDIO, you spawn a server executable via stdio_client(parameters); this can be done at any point. Commonly, users define a set of servers, and you spawn a client + session for each one. For HTTP+SSE, the server is already listening on a port, and you start a client + session separately that connects to it.

Since you mentioned "in the same script": if you want a stdio client, it does need to spawn a server, but the server is not the same script. STDIO servers are just programs that listen for JSON-RPC messages on stdin and write JSON-RPC messages to stdout.

It might help to take https://modelcontextprotocol.io/llms-full.txt, put it into Claude and ask the model for more help to understand the concept.

Implementing a server

If you are only interested in implementing a server, you never need to start a client. I would recommend a framework like https://github.com/jlowin/fastmcp for ease of use.

I hope this helps. Let me know if you have more questions.

@logan-markewich
Author

logan-markewich commented Dec 2, 2024

@dsp-ant Yea, I guess I'm just confused. The example in the readme is something like:

server_params = StdioServerParameters(
    command="python",
    args=["example_server.py"],
    env=None,
)

# launches the server
async with stdio_client(server_params) as (read, write):
    # launches the client
    async with ClientSession(read, write) as session:
        # Initialize the connection
        await session.initialize()
        ...

But in this example, the client depends on variables (read, write) that you only get by launching the server. How do you launch them separately? I may have missed where this is documented.

An example for the target usage I am looking for:

  1. The user launches an MCP server somewhere:
python ./mcp_server.py
  2. The user runs another script that connects to that server to perform a tool call. Below is the llama-index usage I am trying to implement:
# this would connect to a server and enumerate the available tools
tool = MCPTool(url="127.0.0.1:8000")

# then we can plug that into an agent
agent = FunctionCallingAgent.from_tools([tool], llm=llm)
resp = agent.chat("Hey, use your tool")

If we can achieve something similar to the above, would love to add it to llama-index, but so far I haven't found a way to achieve the above using the current mcp package/docs.
