
[Bug]: Very long command execution in terminal on macOS #207

Open
kevin-support-bot bot opened this issue Jan 13, 2025 · 4 comments

kevin-support-bot bot commented Jan 13, 2025

Upstream issue: All-Hands-AI#6218


@Proger666, Would you ask the agent to run in non-interactive mode?


# sample program that asks for input from the user
code = '''
print("Enter your name:")
name = input()
print("Hello, " + name + "!")
'''
with open('test.py', 'w') as f:
    f.write(code)

# then run the script in the terminal:
python test.py

Does the same happen with the above command? It should raise a no-change timeout after 30 seconds.
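
For context, here is a minimal sketch (using a plain subprocess, not OpenHands' actual runtime) of why such a command stalls and how a 30-second no-change timeout would catch it:

import subprocess

# Launch the interactive script without ever providing input: input()
# blocks forever, so wait() raises TimeoutExpired after 30 seconds,
# roughly the "no change timeout" behaviour described above.
proc = subprocess.Popen(
    ["python", "test.py"],
    stdin=subprocess.PIPE,   # keep stdin open, but write nothing to it
    stdout=subprocess.PIPE,
)
try:
    proc.wait(timeout=30)
except subprocess.TimeoutExpired:
    proc.kill()
    proc.communicate()       # reap the killed process
    print("No output change within 30 seconds; treating it as timed out.")

Conversely, feeding the input up front, e.g. proc.communicate(input=b"Alice\n", timeout=30), makes the same script non-interactive and it finishes immediately.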

@Proger666

How do I do that using the OpenHands Docker container?


SmartManoj commented Jan 13, 2025

You could manually create the file and run that command in the frontend terminal.

Development guide

@Proger666

Pardon, but I don't get it.

I ran the instance as:

docker run -it --rm --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.19-nikolaik \
    -e LOG_ALL_EVENTS=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/.openhands-state:/.openhands-state \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:0.19

I provided a remote endpoint to the LLM and started a task.

I thought it was a resource usage issue, but the LLM load at the point of command execution is 0%. RAM is sufficient and CPU load is under 5%.

I tried changing the agent from CodeAct to Code, but it has the same issue. By the way, it works with only a small delay at the start, but after a while the delay grows exponentially.


SmartManoj commented Jan 13, 2025

Are you using a local LLM?

Would you ask the agent to use the default preset? Docs
