I'm trying to launch an app with a server-client architecture using llama-deploy, but I keep running into a port conflict between the server and the client.
Sorry if this is obvious; I'm new to this.
### Launching Server from Terminal
```
INFO:llama_deploy.message_queues.simple - Launching message queue server at 127.0.0.1:8001
INFO:     Started server process [33231]
INFO:     Waiting for application startup.
INFO:llama_deploy.control_plane.server - Launching control plane server at localhost:8000
INFO:     Started server process [33231]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8001/ (Press CTRL+C to quit)
INFO:     Uvicorn running on http://localhost:8000/ (Press CTRL+C to quit)
INFO:llama_deploy.message_queues.simple - Consumer ControlPlaneServer-481b18ab-ec44-4b02-ba8d-e5fe92c368b9: control_plane has been registered.
INFO:     127.0.0.1:50102 - "POST /register_consumer HTTP/1.1" 200 OK
```
### Launching Client from Terminal
```
The app is available on localhost:8000
[Errno 98] error while attempting to bind on address (127.0.0.1): address already in use.
```
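A quick way to confirm what's happening is to probe the port before launching. A minimal diagnostic sketch using only the standard library (host and port values here just mirror the logs above):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) == 0

# With the server from above still running, 8000 is taken:
print(port_in_use(8000))  # True
```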
Server Deployment Code Snippet
```python
from dataclasses import dataclass

from llama_deploy import (
    ControlPlaneConfig,
    SimpleMessageQueueConfig,
    WorkflowServiceConfig,
    deploy_core,
    deploy_workflow,
)


@dataclass
class DeploymentConfig:
    local_model: bool = False
    host: str = "localhost"
    port: int = 8000
    service_name: str = "react_workflow"


async def main(deployment_config: DeploymentConfig):
    # Some code ...
    await deploy_core(
        control_plane_config=ControlPlaneConfig(
            host=deployment_config.host,
            port=deployment_config.port,
        ),
        message_queue_config=SimpleMessageQueueConfig(),
    )
    await deploy_workflow(
        ReActAgent(
            llm=groq_llama_8b,
            tools=query_engine_tools,
            timeout=400,
            verbose=False,
        ),
        WorkflowServiceConfig(
            # Note: same host/port as the control plane above
            host=deployment_config.host,
            port=deployment_config.port,
            service_name=deployment_config.service_name,
        ),
        ControlPlaneConfig(
            host=deployment_config.host,
            port=deployment_config.port,
        ),
    )


if __name__ == "__main__":
    import asyncio

    # Parse command-line arguments
    config = parse_arguments()
    # Run with the parsed config
    asyncio.run(main(config))
```
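parse_arguments() isn't shown above; for context, a minimal argparse sketch of what it could look like. The flag names here are illustrative assumptions, not the original code:

```python
import argparse

def parse_arguments() -> DeploymentConfig:
    # Hypothetical sketch: flag names are assumptions for illustration
    parser = argparse.ArgumentParser()
    parser.add_argument("--host", default="localhost")
    parser.add_argument("--port", type=int, default=8000)
    parser.add_argument("--service-name", default="react_workflow")
    parser.add_argument("--local-model", action="store_true")
    args = parser.parse_args()
    return DeploymentConfig(
        local_model=args.local_model,
        host=args.host,
        port=args.port,
        service_name=args.service_name,
    )
```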
Client Code Snippet
```python
import os

import chainlit as cl
from dotenv import load_dotenv
from llama_deploy import ControlPlaneConfig, LlamaDeployClient

load_dotenv()

# Get configuration from environment variables
CONTROL_PLANE_HOST = os.getenv("CONTROL_PLANE_HOST")
CONTROL_PLANE_PORT = int(os.getenv("CONTROL_PLANE_PORT", 8000))
WORKFLOW_NAME = os.getenv("WORKFLOW_NAME")

if not all([CONTROL_PLANE_HOST, WORKFLOW_NAME]):
    raise ValueError("Missing required environment variables. Please check your .env file.")

# Create a LlamaDeployClient instance
client = LlamaDeployClient(
    ControlPlaneConfig(
        host=CONTROL_PLANE_HOST,
        port=CONTROL_PLANE_PORT,
    )
)


@cl.on_chat_start
async def start():
    """Initialize the chat session."""
    # Create a session and store it in the Chainlit user session
    session = client.create_session()
    cl.user_session.set("llama_session", session)
    await cl.Message(
        content="👋 Hello! I'm your AI assistant powered by LlamaIndex. I can help you analyze "
        "Uber and Lyft's financial data from their 2021 10-K reports. What would you like to know?"
    ).send()
```
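For completeness, the .env file the client reads would look something like this (values mirror the server defaults above; WORKFLOW_NAME matching the service_name is an assumption):

```
CONTROL_PLANE_HOST=localhost
CONTROL_PLANE_PORT=8000
WORKFLOW_NAME=react_workflow
```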
Hey @BlueKiji77, I think the problem is that 8000 is also the default port for Chainlit 😅 Try putting `port: int = 8005` or something like that in your DeploymentConfig.
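i.e. something like this (a sketch of the suggested change; any free port other than 8000 should work, since Chainlit serves its UI on 8000 by default):

```python
@dataclass
class DeploymentConfig:
    local_model: bool = False
    host: str = "localhost"
    port: int = 8005  # moved off 8000 to avoid colliding with Chainlit's default port
    service_name: str = "react_workflow"
```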
That doesn't work. I get the same error regardless of which port I set the client and server to listen on. Assigning different ports to the client and the server (8080 and 8005) doesn't work either, as one might guess.