This README explains why this project uses a virtual environment and how to create, activate, and work with it.
The virtual environment is used to isolate the project's dependencies (libraries, packages, etc.) from other projects and the system-wide Python installation. By doing this, we ensure that the specific versions of the tools we need are available only for this project and don’t interfere with other Python environments.
We named the virtual environment `research_ai_env` to match the project name, making it easy to identify and manage in the future.
- Dependency Isolation: Prevents conflicts between different projects by keeping project-specific libraries and versions in their own isolated environment.
- Easy Dependency Management: Allows you to install the exact libraries required for this project without affecting other Python projects on your system.
- Portability: Makes it easier to recreate the same environment on other systems by exporting the list of dependencies to `requirements.txt` (see the sketch after this list).
- Organized Project Structure: Naming the virtual environment after the project helps you keep things organized, especially when managing multiple projects.
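To export the current environment's dependencies to `requirements.txt`, a minimal sketch using the standard `pip freeze` command (run it with the virtual environment activated):

```bash
# Write the currently installed packages and their pinned versions
# to requirements.txt so the environment can be recreated elsewhere.
pip freeze > requirements.txt
```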
- Create the Virtual Environment (only do this once):
  Run the following commands in the terminal within your project directory. The first line removes any previous environment of the same name; the second creates a virtual environment named `research_ai_env`:
  rm -rf research_ai_env
  python3 -m venv research_ai_env
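To confirm the environment was created correctly, you can ask its bundled interpreter for its version (a quick sanity check on Linux/macOS; not required):

```bash
# The venv ships its own python executable; this should print a Python 3.x version.
research_ai_env/bin/python --version
```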
- Activate the Virtual Environment:
  You need to activate the virtual environment whenever you work on the project:
  source research_ai_env/bin/activate
  After activating, your terminal prompt will change to show `(research_ai_env)`, indicating that the virtual environment is active.
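If you are ever unsure whether the environment is active, two quick checks (illustrative; both should point inside the `research_ai_env/` directory when it is active):

```bash
which python   # expected: .../research_ai_env/bin/python
pip -V         # expected: pip running from inside research_ai_env/
```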
- Install Project Dependencies:
  Once the virtual environment is activated, install the required dependencies by running:
  pip install -r requirements.txt
- Deactivate the Virtual Environment:
  When you’re done working, deactivate the virtual environment by simply running:
  deactivate
Whenever you want to start working on this project in the future:
- Navigate to the Project Directory:
  cd path/to/your/project
- Activate the Virtual Environment:
  - On Windows:
    research_ai_env\Scripts\activate
  - On Linux/macOS:
    source research_ai_env/bin/activate
- Start Working on the Project: You are now inside the project-specific environment, where all dependencies are isolated for this project (a full example session is sketched after this list).
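Putting the steps together, a typical session might look like this (a sketch; the project path and the `main.py` entry point are hypothetical placeholders for your actual ones):

```bash
cd path/to/your/project
source research_ai_env/bin/activate   # on Windows: research_ai_env\Scripts\activate
python main.py                        # hypothetical entry point; run your actual script here
deactivate                            # when you are done
```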
- Always make sure to activate the virtual environment before running your project’s scripts or installing new packages.
- The `research_ai_env` directory contains all the files related to the virtual environment. This folder should not be shared when you distribute the project; only the `requirements.txt` file, which lists your dependencies, needs to be shared.
- To recreate the environment on another machine, create and activate a new virtual environment there, then run (full sequence sketched below):
  pip install -r requirements.txt
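For reference, the full recreate-from-scratch sequence on a new Linux/macOS machine might look like this (a sketch, assuming Python 3 is installed and you keep the same environment name):

```bash
python3 -m venv research_ai_env
source research_ai_env/bin/activate
pip install -r requirements.txt
```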
To check if the Ollama service is running, use the following command:
sudo systemctl status ollama.service
If the service is not running, follow the steps below to set it up.
- Create a systemd service file for Ollama:
  sudo nano /etc/systemd/system/ollama.service
- Add the following content (update the paths as needed):
  [Unit]
  Description=Ollama Server
  After=network.target

  [Service]
  ExecStart=/path/to/ollama/binary --port 11434
  WorkingDirectory=/path/to/ollama/
  Restart=always
  User=your-username
  Group=your-group
  Environment=DISPLAY=:0
  StandardOutput=journal
  StandardError=journal

  [Install]
  WantedBy=multi-user.target
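If you are unsure what to put in `ExecStart`, you can look up where the `ollama` binary lives first (the official install script typically places it under `/usr/local/bin`, but verify on your system):

```bash
# Print the full path of the ollama binary on your PATH.
which ollama
```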
- Reload systemd to recognize the new service:
  sudo systemctl daemon-reload
- Enable the Ollama service to start on boot:
  sudo systemctl enable ollama.service
- Start the service:
  sudo systemctl start ollama.service
- Check the status:
  sudo systemctl status ollama.service
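Once the service is active, you can also confirm that the API is reachable and that the model used in this project is available locally (a quick check; `/api/tags` lists locally pulled models):

```bash
# List locally available models via the HTTP API.
curl http://localhost:11434/api/tags

# Pull the model if it does not appear in the list.
ollama pull llama2-uncensored
```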
To test whether the API and the `llama2-uncensored` model are working correctly, use the following `curl` command:
curl -X POST http://localhost:11434/api/generate \
  -H "Content-Type: application/json" \
  -d '{"model": "llama2-uncensored", "prompt": "What is the capital of France?", "max_tokens": 100}'
You should receive a JSON response if everything is working.
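Note that by default `/api/generate` streams the answer back as a sequence of JSON objects. If you prefer a single JSON object, you can disable streaming (assuming your Ollama version supports the `stream` field):

```bash
curl -X POST http://localhost:11434/api/generate \
  -H "Content-Type: application/json" \
  -d '{"model": "llama2-uncensored", "prompt": "What is the capital of France?", "stream": false}'
```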
If the model doesn't work as expected, check the logs:
sudo journalctl -u ollama.service
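To watch the logs live while you retry a request, `journalctl`'s follow flag is handy:

```bash
# Follow new log lines from the Ollama service as they arrive.
sudo journalctl -u ollama.service -f
```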
If you see warnings about `syslog` being obsolete, update the service file:
- Open the file for editing:
  sudo nano /etc/systemd/system/ollama.service
- Replace:
  StandardOutput=syslog
  StandardError=syslog
  with:
  StandardOutput=journal
  StandardError=journal
- Save, then reload and restart the service:
  sudo systemctl daemon-reload
  sudo systemctl restart ollama.service
- Make sure your Ollama server is listening on the correct port (default: 11434).
- Check GPU compatibility if you are using GPU inference.
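For the GPU check, a quick way to confirm the card and driver are visible to the system (assuming an NVIDIA GPU; adapt for other vendors):

```bash
# Show driver version, detected GPUs, and current memory usage.
nvidia-smi
```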