All notable changes to this project will be documented in this file. This format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
Documentation and API updates.
- Objective field in the class docstrings for specifying the objective of the metric
- Added .readthedocs.yml file for readthedocs configuration
- General description of the package in the Getting Started section of the documentation
- Updated unit tests for the API to reflect the new changes in the metric categorization
- Updated developer guidelines for pre-commit hooks
- Updated Makefile hook for documentation generation with output directory
- Reorganized the documentation structure to the new `metric_group` categorization
- Removed any reference to the previous categorizations from the API models
- Changed `mkdocs` heading level from 3 to 4 for better organization
- `FrechetDistance` metric now uses `InceptionFID` from piq as a default feature extractor
- CLI device specification
- CLI compute arguments in the reports directory
- Preprocessing transform in image datasets when using the API
- Moved `sklearn` and `gudhi` dependencies to the main dependency tree
- Default image feature extractor is now `vit_b_32`
- Confusing synthetic and input metric goals were aggregated into `quality`, `privacy`, `annotation` and `utility` categories
- Moved metrics to specific folders based on `metric_group` (feature-based, data-based)
- Fixed project configuration conflict between `setup.py` and `pyproject.toml` by reverting to Poetry as the main build engine
- Updated dependencies in `pyproject.toml` and ignored versions in the requirements folder
- Fixed validation type error in the CLI
- Fixed example `tabular` and `time-series` scripts to use the CLI application
- Removed `setup.py` file as it is deprecated in favor of `pyproject.toml`
- Removed `cookiecutter` template files as they are not needed for the project
- Changed default image extractor model from `dino_vits8` to `vit_b_32`
- Bumped version to 0.1.0 to indicate the first stable release of the package
- Documentation for image, tabular and time-series class metrics
- Tabular and time-series unit tests for direct import from the package
- Dummy notebook with examples of quality evaluation on the Imagenette dataset
- Mkdocs documentation for the package (only available locally and for metric classes)
- Each modality has a section in the documentation site
- Metrics are organized as `input_val` and `synthesis_val` groups
- Groups may be divided into other subgroups depending on metrics' specificities
- Added distance metric parameter for the PRDC metrics
- Entire code structure moved to package level under `src/` (see the sketch below)
  - Allows for easier import of metrics and models and correct package builds
  - Package submodules defined in upper `__init__.py` files to allow for direct import of metrics
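
As a minimal sketch of the re-export pattern, assuming a hypothetical package name `mypackage` and module path (the real layout may differ):

```python
# src/mypackage/__init__.py -- package and module names are illustrative.
# Re-exporting metric classes at the package root enables direct imports such
# as `from mypackage import FrechetDistance` instead of deep submodule paths.
from .metrics.image.frechet import FrechetDistance  # hypothetical path

__all__ = ["FrechetDistance"]
```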
- Updated `mkdocs` from `1.5.3` to `1.6.1` to fix a bug in the previous version
- Updated `mkdocs-material` from `9.4.14` to `9.5.34` to fix a bug in the previous version
- Updated `mkdocstrings` from `0.24.0` to `0.26.0` to fix a bug in the previous version
- Standard version of `main.py` and documentation on how to run locally
- Support for specific feature extractors in the `main.py` script
- Added key -> value/array dtypes for metric results
- API data access is now controlled by the `API_DATA_ROOT` environment variable in `.env`
- Modality batch sizes are now controlled by the `<MODALITY>_BATCH_SIZE` environment variables in `.env` (see the sketch after this list)
- Reorganized metrics into metric group categories to account for shared requirements. A diagram explaining the new metric structure can be found here
  - Input metrics are grouped under `quality`, `privacy` or `annotation` categories
  - Synthesized metrics are grouped under `feature` or `data` categories
- Older and confusing `metric_subgoals` were moved to the `metric_goal` field
- `metric_goal` field is an optional subcategory of the `metric_group` field
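
As a brief, hedged sketch of how the `.env` variables above could be read, using `IMAGE_BATCH_SIZE` as one instance of `<MODALITY>_BATCH_SIZE` (the default values here are assumptions, not the package's actual fallbacks):

```python
import os

# API_DATA_ROOT controls where the API reads datasets from; the per-modality
# batch size variables (here IMAGE_BATCH_SIZE) control batching.
data_root = os.environ.get("API_DATA_ROOT", "./data")
image_batch_size = int(os.environ.get("IMAGE_BATCH_SIZE", "32"))

print(f"Serving datasets from {data_root}; image batch size {image_batch_size}")
```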
- Integrated image COCO annotation metrics into the API
  - Annotation files must be present under the `input_val/annotations/` directory
  - Annotation files must be in COCO format and the filename given as a parameter in the API
- API unit testing for every modality with FastAPI testing utilities
- Documentation of shared metrics and image metrics
- Added `annotation_file` field to the API for image datasets (see the sketch below)
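
A hedged client-side sketch of passing `annotation_file`; the endpoint URL, route, and payload shape are assumptions rather than the documented API:

```python
import requests

# Hypothetical request: the annotation filename refers to a COCO file that the
# server expects under input_val/annotations/.
payload = {
    "metric_group": "annotation",             # mandatory group field
    "annotation_file": "instances_val.json",  # illustrative filename
}
response = requests.post("http://localhost:8000/metrics/image", json=payload)
print(response.json())
```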
- Using a resize of 224 in DINO-ViT for image datasets
- Updated `pycaret` from `3.1.0` to `3.3.1` due to a bug in the previous version
- Removed redundant hooks in the Makefile
- Text datasets are now kept under `text/input_val/dataset/` for consistency with the other modalities
- Input layers now extend the `InputLayer` abstract class for consistency
- Renamed input variables in input layers from `data_src1`, `data_src2` to `reference` and `target`
- Refactored the code from a properties-based structure to a more modular class-based structure (see the sketch after this list)
  - Metrics are now classes that inherit from a base `Metric` class
  - Each metric has a `compute` method that returns the metric value
  - Older property attributes are now class attributes of the specific metric class
  - Introduced the `MetricResult` class to enforce and validate return types of the `compute` method
  - Input layers provide batched samples to the metrics via the `batched_samples` property
  - Embedder models are now passed to the `get_embeddings` method of the input layers (allows for more flexibility)
  - Moved common API and CLI code to the `common` package to avoid code repetition
  - Added the `ComputationManager` class to manage the computation of metrics. Allows for the batch computation and merging of metric results and the computation of metrics in parallel. Also keeps track of a shared state of the metrics and their results.
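
For illustration, a minimal sketch of this class-based structure; the signatures and the toy metric are assumptions, and the real `Metric` and `MetricResult` classes carry more validation and metadata:

```python
from dataclasses import dataclass, field

@dataclass
class MetricResult:
    """Simplified stand-in for the package's MetricResult return type."""
    value: float
    dtype: str = "float"                       # result type field
    stats: dict = field(default_factory=dict)  # optional extra statistics

class Metric:
    """Base class every metric inherits from."""
    def compute(self, reference, target) -> MetricResult:
        raise NotImplementedError

class MeanDifference(Metric):
    """Toy metric: absolute difference between the two sample means."""
    def compute(self, reference, target) -> MetricResult:
        diff = abs(sum(reference) / len(reference) - sum(target) / len(target))
        return MetricResult(value=diff)

print(MeanDifference().compute([1, 2, 3], [2, 3, 4]).value)  # 1.0
```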
- Removed FhP git dependencies from the `pyproject.toml` file (not needed)
- Dockerfile now uses only the `requirements-prod.txt` file for production builds
- Dockerfile pip install now uses the torch-cpu source for lighter builds
- Removed unused environment variables from the `.env` file
- Standard error messages are now returned by the API for invalid requests
- `evaluation_level` field in the API is now optional
- `metric_goal` field in the API is now optional
- `metric_group` field in the API is mandatory and indicates the group of the metric
- `annotation_file` optional field for the API to specify the annotation file for image datasets
- `metric_subgoals` field in the API was removed and replaced by the `metric_goal` field
- `metric_plots` field in the returned JSON of the API was removed
  - Each metric now has a `plot_params` field that contains the plot parameters
- Each metric now has an optional `stats` field that contains additional statistical information of the metric results
  - Previous iterations returned these statistics in separate metric results
- The response body now contains an `errors` field to indicate any errors caught during the metric computation (see the sketch below)
  - This is only used for known errors/warnings that occur in a specific metric (e.g. a metric that is supposed to be between 0 and 1 but is not due to numerical instability). Other API errors are still returned as HTTP status codes.
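
An illustrative sketch of how these response fields might fit together; the top-level keys and example values are assumptions, not the documented schema:

```python
# Hypothetical shape of an API response body carrying one metric result.
example_response = {
    "results": {                              # assumed top-level key
        "some_metric": {
            "value": 0.87,
            "dtype": "float",                 # metric result type field
            "plot_params": {"kind": "bar"},   # plotting hints (replaces metric_plots)
            "stats": {"std": 0.02},           # optional extra statistics
        }
    },
    "errors": [
        "some_metric: value outside [0, 1] due to numerical instability"
    ],
}
print(example_response["errors"][0])
```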
- Renamed metric result `type` field to `dtype` to avoid conflicts with Python's `type` function
- Renamed the `dtype` in the metric result to `subtype`