This guide covers VT Code's comprehensive test suite, including unit tests, integration tests, benchmarks, and testing best practices.
VT Code includes a multi-layered test suite designed to ensure reliability and performance:
- Unit Tests: Test individual components and functions
- Integration Tests: Test end-to-end functionality
- Performance Benchmarks: Measure and track performance
- Mock Testing: Test with realistic mock data
```bash
# Run all tests
cargo test

# Run tests with detailed output
cargo test -- --nocapture

# Run specific test
cargo test test_tool_registry

# Run tests for specific module
cargo test tools::

# Run tests in release mode
cargo test --release
```

```bash
# Run only integration tests
cargo test --test integration_tests

# Run integration tests with output
cargo test --test integration_tests -- --nocapture
```

```bash
# Run all benchmarks
cargo bench

# Run specific Criterion benches used in this workspace
cargo bench -p vtcode-core --bench tool_pipeline
cargo bench -p vtcode-tools --bench cache_bench
```

```bash
# List fuzz targets
cargo +nightly fuzz list

# Build and run a target for 60 seconds
cargo +nightly fuzz build shell_parser
cargo +nightly fuzz run shell_parser -- -max_total_time=60
```

See the Fuzzing Guide for target details, corpus layout, and crash reproduction.
```text
tests/
    mod.rs                  # Test module declarations
    common.rs               # Shared test utilities
    mock_data.rs            # Mock data and responses
    integration_tests.rs    # End-to-end integration tests
benches/
    tool_pipeline.rs        # vtcode-core tool pipeline benchmarks
    cache_bench.rs          # vtcode-tools cache benchmarks
src/
    lib.rs                  # Unit tests for library exports
    tools.rs                # Unit tests for tool registry
    tree_sitter/
        analyzer.rs         # Unit tests for tree-sitter analyzer
```
Unit tests live in the source files alongside the code they test:
```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_specific_functionality() {
        // Test code here
    }
}
```

Integration tests live in `tests/integration_tests.rs`:
```rust
#[cfg(test)]
mod integration_tests {
    use vtcode::tools::ToolRegistry;
    use serde_json::json;

    #[tokio::test]
    async fn test_tool_integration() {
        // Integration test code here
    }
}
```

Compliance tests live in standalone files under `tests/`:
- `tests/open_responses_compliance.rs`: validates strict adherence to the Open Responses specification.

```bash
# Run Open Responses compliance tests
cargo test --test open_responses_compliance
```

Benchmarks live in the `benches/` directory:
```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn benchmark_function(c: &mut Criterion) {
    // Benchmark setup and execution
}

criterion_group!(benches, benchmark_function);
criterion_main!(benches);
```

Test the file system tools:
```rust
#[tokio::test]
async fn test_list_files_tool() {
    let env = create_test_project();
    let mut registry = ToolRegistry::new();
    let args = json!({
        "path": "."
    });
    let result = registry.execute("list_files", args).await;
    assert!(result.is_ok());
}
```

Test code analysis capabilities:
```rust
#[test]
fn test_parse_rust_code() {
    let analyzer = create_test_analyzer();
    let rust_code = r#"fn main() { println!("Hello"); }"#;
    let result = analyzer.parse(rust_code, LanguageSupport::Rust);
    assert!(result.is_ok());
}
```

Test regex-based search:
```rust
#[tokio::test]
async fn test_grep_file_tool() {
    let env = TestEnv::new();
    let content = "fn main() { println!(\"test\"); }";
    env.create_test_file("test.rs", content);

    let mut registry = ToolRegistry::new();
    let args = json!({
        "pattern": "fn main",
        "path": "."
    });
    let result = registry.execute("grep_file", args).await;
    assert!(result.is_ok());
}
```

Use the shared helpers from `tests/common.rs`:

```rust
use tests::common::{TestEnv, create_test_project};

#[test]
fn test_with_test_environment() {
    let env = TestEnv::new();
    env.create_test_file("test.txt", "content");
    // Test code here
}
```

Use the canned responses from `tests/mock_data.rs`:

```rust
use tests::mock_data::MockGeminiResponses;

#[test]
fn test_with_mock_response() {
    let response = MockGeminiResponses::simple_function_call();
    assert!(response["candidates"].is_array());
}
```

File operations with `TestEnv`:

```rust
use tests::common::TestEnv;

#[test]
fn test_file_operations() {
    let env = TestEnv::new();

    // Create test files
    env.create_test_file("main.rs", "fn main() {}");
    env.create_test_dir("src");

    // Test operations
}
```

```bash
cargo bench -p vtcode-core --bench tool_pipeline
```

Measures:
- Rate limiter throughput and latency
- Tool pipeline outcome construction overhead
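To make concrete what a rate-limiter benchmark exercises, here is a minimal token-bucket sketch; the `TokenBucket` type is hypothetical and illustrative only, not VT Code's actual limiter:

```rust
use std::time::Instant;

/// Hypothetical token-bucket limiter, for illustration only.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last_refill: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self {
            capacity,
            tokens: capacity,
            refill_per_sec,
            last_refill: Instant::now(),
        }
    }

    /// Try to consume one token; returns false when rate-limited.
    fn try_acquire(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_refill).as_secs_f64();
        // Refill proportionally to elapsed time, capped at capacity.
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last_refill = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut bucket = TokenBucket::new(2.0, 1.0);
    // Two tokens available up front, then the bucket is drained.
    assert!(bucket.try_acquire());
    assert!(bucket.try_acquire());
    assert!(!bucket.try_acquire());
}
```

A throughput bench would call `try_acquire` in a tight loop and measure calls per second; the latency side measures the cost of a single acquire.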
```bash
cargo bench -p vtcode-tools --bench cache_bench
```

Measures:
- LRU insert/get throughput
- Owned vs `Arc` retrieval overhead
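The owned-vs-`Arc` distinction can be sketched with std types alone: an owned cache hit clones the whole buffer, while an `Arc`-based hit only bumps a reference count. This is illustrative; the actual cache types live in `vtcode-tools`:

```rust
use std::sync::Arc;

fn main() {
    let owned: String = "cached value".repeat(1000);
    let shared: Arc<str> = Arc::from(owned.as_str());

    // An owned "get" clones the full buffer each time.
    let owned_hit = owned.clone();

    // An Arc-based "get" only increments a reference count.
    let shared_hit = Arc::clone(&shared);

    assert_eq!(owned_hit.len(), shared_hit.len());
    assert_eq!(Arc::strong_count(&shared), 2);
}
```

The bench quantifies that trade-off: `Arc` retrieval is cheap and constant-time, while owned retrieval scales with value size but avoids shared ownership.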
- Unit Tests: Test individual functions and methods
- Integration Tests: Test component interactions
- End-to-End Tests: Test complete workflows
- Performance Tests: Benchmark critical paths
```rust
#[test]
fn test_descriptive_name() {
    // Test implementation
}

#[tokio::test]
async fn test_async_functionality() {
    // Async test implementation
}
```

```rust
// Prefer specific assertions
assert_eq!(result, expected_value);
assert!(condition, "Descriptive message");

// Use appropriate matchers
assert!(result.is_ok());
assert!(error_msg.contains("expected text"));
```

```rust
#[test]
fn test_independent_functionality() {
    let env = TestEnv::new(); // Fresh environment for each test
    // Test implementation
}
```

```yaml
name: Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: dtolnay/rust-toolchain@stable
      - run: cargo test
      - run: cargo bench
```

```bash
# Install cargo-tarpaulin
cargo install cargo-tarpaulin

# Generate coverage report
cargo tarpaulin --out Html

# Open coverage report
open tarpaulin-report.html
```

```bash
# Re-run a single failing test by name
cargo test failing_test_name

# Run with backtrace
RUST_BACKTRACE=1 cargo test
```

```rust
#[test]
fn test_with_debug_output() {
    let result = some_function();
    println!("Debug: {:?}", result); // Will show in --nocapture mode
    assert!(result.is_ok());
}
```

```bash
# Capture baseline and latest local metrics
./scripts/perf/baseline.sh baseline
./scripts/perf/baseline.sh latest

# Compare baseline vs latest
./scripts/perf/compare.sh
```

- Unit tests for all public functions
- Integration tests for component interactions
- Error handling tests
- Edge case testing
- Performance benchmarks
- Documentation examples tested
- Cross-platform compatibility
- Memory leak testing (if applicable)
**Test fails intermittently**
- Check for race conditions in async tests
- Ensure proper test isolation
- Use unique test data for each test
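One way to keep tests isolated is to derive a unique scratch path per test, so concurrent tests never collide on shared files. A std-only sketch (VT Code's own `TestEnv` may already handle this; the `unique_test_dir` helper here is hypothetical):

```rust
use std::path::PathBuf;
use std::sync::atomic::{AtomicU64, Ordering};

// Process-wide counter so two tests in the same run never share a path.
static COUNTER: AtomicU64 = AtomicU64::new(0);

/// Build a path unique per process and per call.
fn unique_test_dir(name: &str) -> PathBuf {
    let id = COUNTER.fetch_add(1, Ordering::Relaxed);
    std::env::temp_dir().join(format!(
        "vtcode-test-{}-{}-{}",
        name,
        std::process::id(),
        id
    ))
}

fn main() {
    let a = unique_test_dir("grep");
    let b = unique_test_dir("grep");
    assert_ne!(a, b); // same test name, still distinct paths

    std::fs::create_dir_all(&a).unwrap();
    assert!(a.exists());
    std::fs::remove_dir_all(&a).unwrap();
}
```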
**Benchmark results vary**
- Run benchmarks multiple times
- Use statistical significance testing
- Consider environmental factors
**Mock setup is complex**
- Simplify test scenarios
- Use builder patterns for complex objects
- Consider integration tests instead of complex mocks
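A builder keeps complex mock construction readable by letting each test set only the fields it cares about. A sketch, where `MockResponse` is an illustrative type rather than part of VT Code's test utilities:

```rust
/// Illustrative mock object, built up field by field.
#[derive(Debug, Default)]
struct MockResponse {
    status: u16,
    body: String,
    tool_calls: Vec<String>,
}

#[derive(Default)]
struct MockResponseBuilder {
    inner: MockResponse,
}

impl MockResponseBuilder {
    fn status(mut self, status: u16) -> Self {
        self.inner.status = status;
        self
    }

    fn body(mut self, body: &str) -> Self {
        self.inner.body = body.to_string();
        self
    }

    fn tool_call(mut self, name: &str) -> Self {
        self.inner.tool_calls.push(name.to_string());
        self
    }

    fn build(self) -> MockResponse {
        self.inner
    }
}

fn main() {
    // Only the fields a given test cares about need to be set.
    let resp = MockResponseBuilder::default()
        .status(200)
        .tool_call("list_files")
        .build();

    assert_eq!(resp.status, 200);
    assert_eq!(resp.tool_calls, vec!["list_files".to_string()]);
    assert!(resp.body.is_empty());
}
```

Unset fields fall back to `Default`, so adding a new field to the mock does not break existing tests.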
**Happy Testing!**