Server-Client binding address conflict #348

Open
BlueKiji77 opened this issue Nov 5, 2024 · 2 comments
Labels: question (Further information is requested)

Comments

@BlueKiji77

I'm trying to launch an app with a server-client architecture using llama-deploy, but I keep running into a port conflict between the server and the client.
Sorry if this seems obvious; I'm new to this.
### LAUNCHING SERVER FROM TERMINAL

INFO:llama_deploy.message_queues.simple - Launching message queue server at 127.0.0.1:8001
INFO:     Started server process [33231]
INFO:     Waiting for application startup.
INFO:llama_deploy.control_plane.server - Launching control plane server at localhost:8000
INFO:     Started server process [33231]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8001/ (Press CTRL+C to quit)
INFO:     Uvicorn running on http://localhost:8000/ (Press CTRL+C to quit)
INFO:llama_deploy.message_queues.simple - Consumer ControlPlaneServer-481b18ab-ec44-4b02-ba8d-e5fe92c368b9: control_plane has been registered.
INFO:     127.0.0.1:50102 - "POST /register_consumer HTTP/1.1" 200 OK

### LAUNCHING CLIENT FROM TERMINAL

The app is available on localhost:8000 
[Error 98] while attempting to bind on address (127.0.0.1): address already in use.

Server Deployment Code Snippet

@dataclass
class DeploymentConfig:
    local_model: bool = False
    host: str = "localhost"
    port: int = 8000
    service_name: str = "react_workflow"

# Some code ...

    await deploy_core(
        control_plane_config=ControlPlaneConfig(
            host=deployment_config.host,
            port=deployment_config.port
        ),
        message_queue_config=SimpleMessageQueueConfig(),
    )
    
    await deploy_workflow(
        ReActAgent(
            llm=groq_llama_8b,
            tools=query_engine_tools,
            timeout=400,
            verbose=False
        ),
        WorkflowServiceConfig(
            host=deployment_config.host,
            port=deployment_config.port,
            service_name=deployment_config.service_name
        ),
        ControlPlaneConfig(
            host=deployment_config.host,
            port=deployment_config.port
        ),
    )

if __name__ == "__main__":
    import asyncio
    
    # Parse command line arguments
    config = parse_arguments()
    
    # Run with parsed config
    asyncio.run(main(config))

Client Code Snippet

import os

import chainlit as cl
from dotenv import load_dotenv
from llama_deploy import ControlPlaneConfig, LlamaDeployClient

load_dotenv()

# Get configuration from environment variables
CONTROL_PLANE_HOST = os.getenv("CONTROL_PLANE_HOST")
CONTROL_PLANE_PORT = int(os.getenv("CONTROL_PLANE_PORT", 8000))
WORKFLOW_NAME = os.getenv("WORKFLOW_NAME")

if not all([CONTROL_PLANE_HOST, WORKFLOW_NAME]):
    raise ValueError("Missing required environment variables. Please check your .env file.")

# Create a LlamaDeployClient instance
client = LlamaDeployClient(
    ControlPlaneConfig(
        host=CONTROL_PLANE_HOST,
        port=CONTROL_PLANE_PORT
    )
)

@cl.on_chat_start
async def start():
    """Initialize the chat session."""
    # Create a SessionClient instance and store it in the user session
    session = client.create_session()
    cl.user_session.set("llama_session", session)
    
    await cl.Message(
        content="👋 Hello! I'm your AI assistant powered by LlamaIndex. I can help you analyze "
        "Uber and Lyft's financial data from their 2021 10-K reports. What would you like to know?"
    ).send()

@masci (Member) commented Nov 5, 2024

Hey @BlueKiji77, I think the problem is that 8000 is also the default port for Chainlit 😅 Try setting `port: int = 8005` or something like that in your DeploymentConfig.
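
A minimal sketch of that suggestion, assuming the other fields stay as in the snippet above and Chainlit keeps its default port 8000:

    from dataclasses import dataclass

    @dataclass
    class DeploymentConfig:
        local_model: bool = False
        host: str = "localhost"
        port: int = 8005  # any free port other than 8000, Chainlit's default
        service_name: str = "react_workflow"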

@masci added the question (Further information is requested) label on Nov 5, 2024
@BlueKiji77 (Author)

Doesn't work. I get the same error regardless of the port I set the client and the server to listen on. Assigning different ports to the client and the server (8080, 8005) doesn't work either, as one might guess.
