When making changes in the project, please make sure that the corresponding tests still pass.
This project uses Kotlin Multiplatform with JVM tests located in the jvmTest source sets.
To run all JVM tests in the project:

```shell
./gradlew jvmTest
```

To run JVM tests from a specific module:

```shell
./gradlew :<module>:jvmTest
```

For example, to run JVM tests from the agents-test module:

```shell
./gradlew :agents:agents-test:jvmTest
```

To run a specific test class:

```shell
./gradlew :<module>:jvmTest --tests "fully.qualified.TestClassName"
```

For example:

```shell
./gradlew :agents:agents-test:jvmTest --tests "ai.koog.agents.test.SimpleAgentMockedTest"
```

Integration tests are located in the integration-tests module and are used to test interactions with external LLM services.
To run all integration tests in the project:

```shell
./gradlew jvmIntegrationTest
```

To run integration tests from a specific module:

```shell
./gradlew :<module>:jvmIntegrationTest
```

For example, to run integration tests from the integration-tests module:

```shell
./gradlew :integration-tests:jvmIntegrationTest
```

Integration test methods are prefixed with integration_ to distinguish them from unit tests. To run a specific integration test:

```shell
./gradlew :<module>:jvmIntegrationTest --tests "fully.qualified.TestClassName.integration_testMethodName"
```

For example:

```shell
./gradlew :integration-tests:jvmIntegrationTest --tests "ai.koog.integration.tests.SingleLLMPromptExecutorIntegrationTest.integration_testExecute"
```

Integration tests that interact with LLM services require API tokens to be set as environment variables:
- `ANTHROPIC_API_TEST_KEY` - Required for tests using Anthropic's Claude models
- `DEEPSEEK_API_TEST_KEY` - Required for tests using DeepSeek
- `GEMINI_API_TEST_KEY` - Required for tests using Google's Gemini models
- `MISTRAL_AI_API_TEST_KEY` - Required for tests using MistralAI
- `OPEN_AI_API_TEST_KEY` - Required for tests using OpenAI's models
- `OPEN_ROUTER_API_TEST_KEY` - Required for tests using OpenRouter
You need to set these environment variables before running the integration tests that use the corresponding LLM clients.
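On macOS or Linux, one way to do this is to export the keys in your shell session before invoking Gradle. The values below are placeholders, not real keys:

```shell
# Placeholder values - substitute your real API keys.
export OPEN_AI_API_TEST_KEY="placeholder-openai-key"
export ANTHROPIC_API_TEST_KEY="placeholder-anthropic-key"

# Confirm a variable is set before running the tests:
echo "$OPEN_AI_API_TEST_KEY"
```

Exported variables only apply to the current shell session, so they must be re-exported (or placed in your shell profile) for new terminals.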
To simplify development, you can also create an env.properties file (already gitignored) using env.template.properties as a template.
The properties specified there will be automatically applied as environment variables when you run any test task.
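A minimal env.properties sketch might look like the following. The key names come from the list above; the values are placeholders, and the authoritative set of keys is in env.template.properties:

```properties
# Placeholder values - substitute your real API keys.
OPEN_AI_API_TEST_KEY=placeholder-openai-key
ANTHROPIC_API_TEST_KEY=placeholder-anthropic-key
```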
If you don't have API keys for certain LLM providers, you can skip the tests for those providers using the skip.llm.providers system property. This is useful when you want to run integration tests but only have API keys for some of the providers.
To skip tests for specific providers, use the -Dskip.llm.providers flag with a comma-separated list of provider IDs:
```shell
./gradlew :integration-tests:jvmIntegrationTest -Dskip.llm.providers=openai,google
```

This will skip all tests that use OpenAI and Google models, but still run tests for other providers like Anthropic.
Available provider IDs:
- `openai` - Skip tests using OpenAI models
- `anthropic` - Skip tests using Anthropic models
- `google` - Skip tests using Google models
- `openrouter` - Skip tests using OpenRouter models
You can also run a specific test class with provider skipping:
```shell
./gradlew :integration-tests:jvmIntegrationTest --tests "ai.koog.integration.tests.AIAgentIntegrationTest" -Dskip.llm.providers=anthropic,gemini
```

When tests are skipped due to provider filtering, they will be reported as "skipped" in the test results rather than "failed".
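The property value is simply a comma-separated list of provider IDs. As a standalone illustration of how such a value breaks apart into individual IDs (this is not the project's actual filtering code):

```shell
# Split a comma-separated provider list into individual IDs.
providers="anthropic,gemini"
for id in ${providers//,/ }; do
  echo "skipping provider: $id"
done
```

Note that the list must not contain spaces around the commas, since the whole value is passed as a single `-D` argument to the JVM.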
Ollama tests are integration tests that use the Ollama LLM client. These tests are located in the integration-tests
module and are prefixed with ollama_ to distinguish them from other integration tests.
To run all Ollama tests in the project:

```shell
./gradlew jvmOllamaTest
```

To run Ollama tests from a specific module:

```shell
./gradlew :integration-tests:jvmOllamaTest
```

To run a specific Ollama test:

```shell
./gradlew :integration-tests:jvmOllamaTest --tests "fully.qualified.TestClassName.ollama_testMethodName"
```

For example:

```shell
./gradlew :integration-tests:jvmOllamaTest --tests "ai.koog.integration.tests.OllamaClientIntegrationTest.ollama_test execute simple prompt"
```

By default, Ollama tests use a Docker container to run the Ollama server. You need to:
- Have Docker installed and running on your machine
- Set the `OLLAMA_IMAGE_URL` environment variable to the URL of the Ollama image to use

For example:

```shell
export OLLAMA_IMAGE_URL="ollama/ollama:latest"
```

Alternatively, you can use a local Ollama client:

- Install Ollama from https://ollama.com/download
- Pull the required model (e.g., `llama3`)
- Comment out the `@field:InjectOllamaTestFixture` annotation in the test class
- Manually specify the executor and model in the test
Example of modifying a test to use a local Ollama client:
```kotlin
// @ExtendWith(OllamaTestFixtureExtension::class)
class OllamaClientIntegrationTest {
    companion object {
        // @field:InjectOllamaTestFixture // Comment out this line
        private lateinit var fixture: OllamaTestFixture

        // Manually initialize executor and model
        private val executor = SingleLLMPromptExecutor(OllamaClient("http://localhost:11434"))
        private val model = OllamaModels.Meta.LLAMA_3_2
    }

    // Test methods...
}
```

Note that when using a local Ollama client, you need to ensure that the model specified in the test is pulled and available in your local Ollama installation.