
📄 Markdown File Creation for Any Directory or File

This guide walks you through setting up a virtual environment, starting a Docker container, and using md_creation.py to generate Markdown documentation for files or directories.


⚙️ Setup Instructions

1. Create a Virtual Environment

python -m venv ollama_venv
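
Before installing anything, activate the environment so the dependencies land inside it (Linux/macOS shown; on Windows use ollama_venv\Scripts\activate):

source ollama_venv/bin/activate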

2. Install Required Packages

Install the dependencies listed in requirements.txt:

pip install -r requirements.txt

3. Make Startup Script Executable

Ensure the start_run.sh script has execution permission:

chmod +x start_run.sh

4. Start Docker and Containers

Start the Docker containers using:

./start_run.sh

The script installs the GPU version of Ollama if nvidia-smi works; otherwise it falls back to the CPU version, which is considerably slower.

✅ Successful Output

On successful Docker deployment, you should see output like:

Checking if Qwen3 model is successfully pulled...
Model Qwen3 successfully pulled and ready.

Once Docker is running, you can validate the deployment and proceed to run md_creation.py.


✅ Validating the Deployment

🔍 Check if Containers Are Running Properly

From the virtual environment's CLI, use the following curl command to validate that the Flask app is running:

curl http://localhost:5000

Expected Output:

{"message":"Welcome to the Flask app that uses Ollama!. "}

🧪 Validate That the Containers and Ollama Are Working

To confirm that the Qwen3 model and Ollama backend are functioning correctly, run the following in the virtual environment's CLI:

curl -X POST http://localhost:5000/query \
     -H "Content-Type: application/json" \
     -d '{"prompt": "hi"}'

Sample output (JSON), containing both a "think" section and the response:

{
  "response": "<think>\nOkay, the user said \"hi\". I should respond in a friendly and welcoming manner. Let me make sure to acknowledge their greeting and offer assistance. Maybe add an emoji to keep it light and approachable. I should keep the response short but open-ended to encourage them to ask more questions. Let me check for any typos or errors. Alright, that should work.\n</think>\n\nHello! 😊 How can I assist you today? Let me know if you have any questions or need help with something!"
}
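
If you only need the final answer in your own tooling, you can strip the reasoning block before using the response. A minimal sketch in Python, assuming the reasoning is always wrapped in a single <think>...</think> pair as in the sample above:

import re

def strip_thinking(response_text: str) -> str:
    # Drop the model's <think>...</think> reasoning block, keep only the answer.
    return re.sub(r"<think>.*?</think>", "", response_text, flags=re.DOTALL).strip()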

🧠 Running md_creation.py

The md_creation.py script generates Markdown documentation from source code or other readable files. Run it inside the virtual environment created above. It accepts the following arguments (a combined example follows the list):

🔹 Required Argument

  • --dir
    Description: File or directory to document (relative or absolute path).
    Examples:

    python md_creation.py --dir np_dev_f6f0f8/docs
    python md_creation.py --dir output.py

🔹 Optional Arguments

  • --exclude-files
    Default: ['.gitignore']
    Description: List of files to exclude.

  • --exclude-dirs
    Default: ['__pycache__', '.git']
    Description: List of directories to exclude.

  • --llm_thinking_file
    Default: True
    Description: Enable creation of a _thinking.md file with LLM reasoning.
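
Putting it together, a hypothetical run that documents a directory while skipping extra files and folders might look like this (assuming the list arguments are space-separated; check python md_creation.py --help for the exact syntax):

python md_creation.py --dir np_dev_f6f0f8/docs --exclude-dirs __pycache__ .git --exclude-files .gitignore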


📂 Output Files

Running the script will generate two Markdown files in the same location as the input:

  • ***_thinking.md (if --llm_thinking_file is not set to False)

    What the LLM is thinking while generating the documentation.

  • ***_explain.md

    Actual documentation explaining the code or content line-by-line.

🔍 Preview the generated Markdown files in VS Code using Ctrl + Shift + V.


💬 Querying the Qwen3 Model Directly

You can query the Qwen3 model directly from the CLI within the virtual environment, or call the same endpoint from your own scripts via the API:

Example curl request with the prompt "hi":

curl -X POST http://localhost:5000/query \
     -H "Content-Type: application/json" \
     -d '{"prompt": "hi"}'

ℹ️ More API features coming soon!


Happy documenting! 📘

