This guide walks you through setting up a virtual environment, starting the Docker containers, and using `md_creation.py` to generate Markdown documentation for files or directories.
First, create a virtual environment and activate it (on Linux/macOS, activate with `source ollama_venv/bin/activate`):

```bash
python -m venv ollama_venv
```
Install the dependencies listed in `requirements.txt`:

```bash
pip install -r requirements.txt
```
Ensure the `start_run.sh` script has execute permission:

```bash
chmod +x start_run.sh
```
Start the Docker containers:

```bash
./start_run.sh
```

The script installs the GPU build of Ollama if `nvidia-smi` is available; otherwise it falls back to the CPU version, which is considerably slower.
On successful Docker deployment, you should see output like:

```
Checking if Qwen3 model is successfully pulled...
Model Qwen3 successfully pulled and ready.
```
Once Docker is running, you can validate the deployment and proceed to run `md_creation.py`.
Use the following `curl` command from the virtual environment's CLI to validate that the Flask app is running:

```bash
curl http://localhost:5000
```
Expected output:

```json
{"message":"Welcome to the Flask app that uses Ollama!. "}
```
To confirm that the Qwen3 model and Ollama backend are functioning correctly, run the following in the virtual environment's CLI:

```bash
curl -X POST http://localhost:5000/query \
  -H "Content-Type: application/json" \
  -d '{"prompt": "hi"}'
```
Sample output (JSON), containing a `<think>` section followed by the actual response:

```json
{
  "response": "<think>\nOkay, the user said \"hi\". I should respond in a friendly and welcoming manner. Let me make sure to acknowledge their greeting and offer assistance. Maybe add an emoji to keep it light and approachable. I should keep the response short but open-ended to encourage them to ask more questions. Let me check for any typos or errors. Alright, that should work.\n</think>\n\nHello! 😊 How can I assist you today? Let me know if you have any questions or need help with something!"
}
```
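If you consume this endpoint programmatically, you will usually want to strip the reasoning block. A minimal sketch, assuming the response always wraps its reasoning in `<think>...</think>` as shown above:

```python
# Split Qwen3's reasoning from the final answer in a /query response body.
import json

# Shortened stand-in for the response body returned by the curl call above.
raw_body = '{"response": "<think>\\nreasoning goes here\\n</think>\\n\\nHello! How can I assist you today?"}'

text = json.loads(raw_body)["response"]
thinking, sep, answer = text.partition("</think>")
# If no </think> tag is present, fall back to the full text.
print(answer.strip() if sep else text.strip())
```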
The `md_creation.py` script generates Markdown documentation from source code or other readable files. Run it in the virtual environment created above. It accepts several arguments:
- `--dir`
  - Description: File or directory to document (relative or absolute path).
  - Examples (a combined invocation is sketched after this list):
    ```bash
    python md_creation.py --dir np_dev_f6f0f8/docs
    python md_creation.py --dir output.py
    ```
- `--exclude-files`
  - Default: `['.gitignore']`
  - Description: List of files to exclude.
- `--exclude-dirs`
  - Default: `['__pycache__', '.git']`
  - Description: List of directories to exclude.
- `--llm_thinking_file`
  - Default: `True`
  - Description: Enable creation of a `_thinking.md` file with the LLM's reasoning.
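Flags can be combined in one run. Below is a hypothetical invocation via Python's `subprocess`, assuming the list-valued flags accept space-separated values (argparse `nargs`-style; this guide does not confirm the exact syntax):

```python
# Hypothetical combined run of md_creation.py; the syntax for the list-valued
# flags is an assumption, not confirmed by this guide.
import subprocess

subprocess.run(
    [
        "python", "md_creation.py",
        "--dir", "np_dev_f6f0f8/docs",
        "--exclude-files", ".gitignore",
        "--exclude-dirs", "__pycache__", ".git",
        "--llm_thinking_file", "False",
    ],
    check=True,  # raise if the script exits with a non-zero status
)
```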
Running the script will generate two Markdown files in the same location as the input:

- `***_thinking.md` (if `--llm_thinking_file` is not set to `False`): what the LLM was thinking while generating the documentation.
- `***_explain.md`: the actual documentation, explaining the code or content line by line.
🔍 Preview the generated Markdown files in VS Code with `Ctrl + Shift + V`.
You can query the Qwen3 model directly from the CLI within the virtual environment, or call the API from your own scripts:

```bash
curl -X POST http://localhost:5000/query \
  -H "Content-Type: application/json" \
  -d '{"prompt": "hi"}'
```
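For scripted use, here is a minimal Python client sketch using only the standard library; the endpoint and payload match the `curl` call above, and everything else is illustrative:

```python
# query_client.py -- minimal sketch of calling the /query endpoint from Python.
import json
import urllib.request

def query(prompt: str) -> str:
    """POST a prompt to the Flask app and return the raw model response."""
    req = urllib.request.Request(
        "http://localhost:5000/query",
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(query("hi"))
```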
ℹ️ More API features coming soon!
Happy documenting! 📘