This guide walks you through setting up a virtual environment, starting a Docker container, and using `md_creation.py` to generate Markdown documentation for files or directories.
Create a virtual environment:

```bash
python -m venv ollama_venv
```

Activate it, then install the dependencies listed in `requirements.txt`:

```bash
source ollama_venv/bin/activate
pip install -r requirements.txt
```

Ensure the `start_run.sh` script has execute permission:

```bash
chmod +x start_run.sh
```

Start the Docker containers:

```bash
./start_run.sh
```

This installs the Ollama GPU version if `nvidia-smi` works; otherwise it installs the Ollama CPU version, which is considerably slower.
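Under the hood, the GPU-or-CPU decision comes down to whether `nvidia-smi` runs successfully. The snippet below is an illustrative sketch of that check, not the actual contents of `start_run.sh`:

```bash
# Illustrative sketch of the GPU check, not the actual start_run.sh logic.
if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
  echo "NVIDIA GPU detected: deploying the Ollama GPU container"
else
  echo "No working GPU found: deploying the (slower) Ollama CPU container"
fi
```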
On successful Docker deployment, you should see output like:
```
Checking if Qwen3 model is successfully pulled...
Model Qwen3 successfully pulled and ready.
```
Once Docker is running, you can validate the deployment and then proceed to run `md_creation.py`.

From the virtual environment's CLI, use the following `curl` command to verify that the Flask app is running:

```bash
curl http://localhost:5000
```

Expected output:

```json
{"message":"Welcome to the Flask app that uses Ollama!. "}
```

To confirm that the Qwen3 model and Ollama backend are functioning correctly, run the following, again from the virtual environment's CLI:
```bash
curl -X POST http://localhost:5000/query \
  -H "Content-Type: application/json" \
  -d '{"prompt": "hi"}'
```

Sample output (JSON), containing both a "think" section and the actual response:
```json
{
  "response": "<think>\nOkay, the user said \"hi\". I should respond in a friendly and welcoming manner. Let me make sure to acknowledge their greeting and offer assistance. Maybe add an emoji to keep it light and approachable. I should keep the response short but open-ended to encourage them to ask more questions. Let me check for any typos or errors. Alright, that should work.\n</think>\n\nHello! 😊 How can I assist you today? Let me know if you have any questions or need help with something!"
}
```
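Note that the `response` field bundles the model's reasoning inside `<think>...</think>` tags ahead of the answer. If you only want the answer, you can strip that block; the pipeline below is a minimal sketch, assuming `jq` is installed:

```bash
# Keep only the final answer by dropping the <think>...</think> lines.
curl -s -X POST http://localhost:5000/query \
  -H "Content-Type: application/json" \
  -d '{"prompt": "hi"}' \
  | jq -r '.response' \
  | sed '/<think>/,/<\/think>/d'
```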
The `md_creation.py` script generates Markdown documentation from source code or other readable files. Run it inside the virtual environment created above. It accepts several arguments (a combined example follows the list):

- `--dir`: file or directory to document (relative or absolute path).
  Examples: `python md_creation.py --dir np_dev_f6f0f8/docs` or `python md_creation.py --dir output.py`
- `--exclude-files` (default: `['.gitignore']`): list of files to exclude.
- `--exclude-dirs` (default: `['__pycache__', '.git']`): list of directories to exclude.
- `--llm_thinking_file` (default: `True`): enable creation of a `_thinking.md` file with the LLM's reasoning.
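Putting it together, a full invocation might look like the following. The path here is a placeholder, and the exact syntax for the list-valued and boolean flags is an assumption, so check `python md_creation.py --help` if it differs:

```bash
# Hypothetical combined run; verify flag syntax against --help.
python md_creation.py --dir my_project/src \
    --exclude-files .gitignore README.md \
    --exclude-dirs __pycache__ .git \
    --llm_thinking_file False
```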
Running the script will generate two Markdown files in the same location as the input:

- `*_thinking.md` (if `--llm_thinking_file` is not set to `False`): what the LLM is thinking while generating the documentation.
- `*_explain.md`: the actual documentation, explaining the code or content line by line.
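For instance, documenting the single file `output.py` from the example above should leave you with something like this (the exact names are an assumption based on the suffixes):

```bash
$ ls
output.py  output_explain.md  output_thinking.md
```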
🔍 Preview the generated Markdown files in VS Code with `Ctrl + Shift + V`.
You can query the Qwen3 model directly from the CLI within the virtual environment, or call the same endpoint as an API from your own scripts:

```bash
curl -X POST http://localhost:5000/query \
  -H "Content-Type: application/json" \
  -d '{"prompt": "hi"}'
```

ℹ️ More API features coming soon!
Happy documenting! 📘