Do not expose sensitive data to the LLM.
This is an unsupported POC; use at your own risk.
This project was driven by the idea that a unit test skeleton could be automatically generated by a single command using AI.
This implementation may be used in a git repository to automatically generate unit tests for Python code using the `unittest` framework. As part of the process, the program runs the tests that it generates and tries to correct any errors, retrying up to the number of times specified by the `--max-cycles` flag.
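The generate, run, and fix cycle described above can be sketched as follows. This is a minimal illustration, not the program's actual internals: the callables and their return shapes are assumptions made for the example.

```python
def generate_with_retries(generate, run, fix, max_cycles=3):
    """Generate tests, run them, and let the model repair failures,
    retrying up to max_cycles times (mirrors the --max-cycles flag).

    `generate`, `run`, and `fix` are hypothetical callables standing in
    for the LLM and test-runner steps; they are not the tool's real API.
    Returns the final tests and whether they ended up passing.
    """
    tests = generate()
    for _ in range(max_cycles):
        ok, errors = run(tests)      # execute the generated tests
        if ok:
            return tests, True
        tests = fix(tests, errors)   # feed failures back to the model
    return tests, False              # may still be failing after max_cycles
```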
Ensure that you review any code generated by this program.
- You must have access to an AWS account with AWS Bedrock available in the `eu-west-2` region, and the Claude model must be available in Bedrock.
- The command must be run in a git repository: to identify the files to write tests for, a `git diff` is performed between the current working directory and the `main` branch.
- New files must be added to the staging area, otherwise they won't be tracked and no changes will be detected.
- The model is currently only configured to write tests using the Python `unittest` framework.
- Due to the stochasticity of the LLM response, each run of the program may produce a slightly different result.
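The changed-file detection described above might look roughly like this. The function names are hypothetical sketches of the `git diff` step, not the tool's real code:

```python
import subprocess

def filter_python_files(paths):
    """Keep only Python source files from a list of changed paths."""
    return [p for p in paths if p.endswith(".py")]

def changed_python_files(base_branch="main"):
    """List changed .py files by diffing the working tree against base_branch.

    Hypothetical sketch: untracked files do not appear in this diff,
    which is why new files must first be added to the staging area.
    """
    out = subprocess.run(
        ["git", "diff", "--name-only", base_branch],
        capture_output=True, text=True, check=True,
    ).stdout
    return filter_python_files(out.splitlines())
```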
- Clone the repository
- `cd` into the repository
- Run `sh create_executable.sh` to build the executable
- Put the executable file on the `PATH` by adding `export PATH="$(pwd)/dist:$PATH"` to your `.bashrc`/`.zshrc` file
- Assume an AWS role with access to Bedrock
- Run `generate_unit_tests --dirs-to-test dir1 dir2`
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| `--dirs-to-test` | Directories containing Python code to test | space-separated list, e.g. `dir1 dir2 dir3` | n/a | yes |
| `--test-path` | Path to existing unit tests | str | `test/` | no |
| `--test-command` | Command to run unit tests | str | `pipenv run python -m unittest` | no |
| `--test-prefix` | Prefix for test paths. The default assumes that the path to an existing unit test mirrors the path to the file under test, with a `test_` prefix added to each component, e.g. `services/script1.py` > `test_services/test_script1.py` | str | `test_` | no |
| `--max-cycles` | Maximum number of attempts the AI will make at generating passing tests | int | 3 | no |
| `--generated-test-path` | Output path for AI-generated tests | str | `ai_test/` | no |