Releases: openlayer-ai/openlayer-python
v0.0.0a6
Pin versions of `requests` and `urllib3`
v0.0.0a5
Fixes OPEN-3628: Conda not available in subprocess calls when running in a Docker container
v0.0.0a4
Bump version
v0.0.0a3
Fix AWS storage type
v0.0.0a2
Update StorageType enum
v0.0.0a01
Shift back to PyPI because setuptools is not available on Test PyPI
v0.3.0
Added
- A `Project` helper class.
- A convenience method `create_or_load_project`, which loads a project if it has already been created.
- Accepts AZURE as a `DeploymentType`.
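The create-or-load behavior described above can be sketched generically. This is a minimal illustration of the pattern only; the `Project` class, `registry` argument, and function signature below are hypothetical stand-ins, not the library's actual API.

```python
# Hypothetical sketch of a "create or load" convenience pattern:
# return an existing project by name, or create and register a new one.

class Project:
    """Stand-in for a project record; TaskType is required at creation."""
    def __init__(self, name, task_type):
        self.name = name
        self.task_type = task_type

def create_or_load_project(registry, name, task_type):
    """Load the project if it already exists in `registry`; otherwise create it."""
    if name in registry:
        return registry[name]           # load the existing project
    project = Project(name, task_type)  # creating requires a task type
    registry[name] = project
    return project

registry = {}
first = create_or_load_project(registry, "churn-model", "tabular-classification")
second = create_or_load_project(registry, "churn-model", "tabular-classification")
assert first is second  # the second call loads rather than re-creating
```

The point of such a method is idempotence: calling it twice with the same name never creates a duplicate, so scripts can be re-run safely.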
Changed
- Compatibility with the Unbox API OpenAPI refactor.
- Models and datasets must be added to projects.
- Deprecates `categorical_features_map` in favor of `categorical_feature_names` for model and dataset uploads.
- Moved the `TaskType` attribute from the `Model` level to the `Project` level. Creating a `Project` now requires specifying the `TaskType`.
- Removed `name` from `add_dataset`.
- Changed `description` to `commit_message` in `add_dataset`, `add_dataframe`, and `add_model`.
- `requirements_txt_file` is no longer optional for model uploads.
- NLP dataset character limit is now 1000 characters.
Fixed
- More comprehensive model and dataset upload validation.
- Bug with duplicate feature names for NLP datasets when uploading the same dataset twice.
- Added `protobuf==3.2.0` to requirements to fix a bug with model deployment.
0.3.0rc1
Added
- A `Project` helper class.
- A convenience method `create_or_load_project`, which loads a project if it has already been created.
- Accepts AZURE as a `DeploymentType`.
Changed
- Compatibility with the Unbox API OpenAPI refactor.
- Models and datasets must be added to projects.
- Deprecates `categorical_features_map` in favor of `categorical_feature_names` for model and dataset uploads.
- Moved the `TaskType` attribute from the `Model` level to the `Project` level. Creating a `Project` now requires specifying the `TaskType`.
- Removed `name` from `add_dataset`.
- Changed `description` to `commit_message` in `add_dataset`, `add_dataframe`, and `add_model`.
- `requirements_txt_file` is no longer optional for model uploads.
- NLP dataset character limit is now 1000 characters.
Fixed
- More comprehensive model and dataset upload validation.
- Bug with duplicate feature names for NLP datasets when uploading the same dataset twice.
- Added `protobuf==3.2.0` to requirements to fix a bug with model deployment.
0.3.0a5
Changed
- NLP dataset character limit to 1000 characters.
Fixed
- Issue with duplicate feature names for NLP datasets.
v0.3.0a4
Fixed
- `explainability_tokenizer` validation.