Releases: solliancenet/foundationallm
Release 0.9.2
FoundationaLLM Version 0.9.2 Release Notes
Introduction
Welcome to FoundationaLLM version 0.9.2! This release includes new features, enhancements, performance improvements and bug fixes. Below is a detailed summary of the changes.
Enhancements and Features
- Added CheckName action for APIEndpointConfiguration resources
- Added default subcategory values on configuration resources
- Added the Agent workflow and tools options in the Management Portal UX
- Added Prompt category and create Prompt option to the Management Portal UX
- Removed orchestration settings Agent validation
- Updated the agent workflow check so that Azure OpenAI Assistants capabilities rely only on workflow settings
- Removed legacy agent AI model and prompt settings
- Added message image from content artifact
Bug Fixes
- Fixed orchestration selection logic
- Cleaned up Prompt form and updated the content artifact style
- Fixed invalid chat session query in URL on startup
Improvements
- Improved the Mobile view for the Management Portal
- Populated OpenAI Assistant information in workflow
- Improved the generation of content artifacts by the DALL-E tool
- Improved User Portal toast notifications in the UX
- Updated deployments in support of 0.9.2 Quick Start and Standard
Contact Information
For support and further inquiries regarding this release, please reach out to us:
- Support Contact: https://foundationallm.ai/contact
- Website: FoundationaLLM (https://foundationallm.ai)
Conclusion
We hope you enjoy the new features and improvements in FoundationaLLM version 0.9.2. Your feedback continues to be instrumental in driving our product forward. Thank you for your continued support.
Release 0.9.1
FoundationaLLM Version 0.9.1 Release Notes
Introduction
Welcome to FoundationaLLM version 0.9.1! This release includes new features, enhancements, performance improvements, bug fixes, and updates to the documentation. Below is a detailed summary of the changes.
Enhancements and Features
- Added initial definitions of the AgentWorkflow classes.
- Added Private Storage component per agent.
- Added a username tooltip and a reusable tooltip component to the Management Portal
- Introduced Amazon Bedrock as a language provider and added Entra ID managed identity support to its service.
- Introduced LangGraph ReAct Workflow
- Added PackageName to AgentTool
- Initial implementation of the DALL-E Image Generation tool
- Introduced Python Tool Plugins for agents
- Added IndexingProfileObjectIds and TextEmbeddingModelNames to AgentTool
- Added the ability to export chat conversations in User Portal
- Added FoundationaLLM Skunkworks for experimental LangChain tools
- Added support for agent access tokens as an alternative to Entra ID authentication
- Added in-memory cache for resource providers
- Added semantic search and reranker to Azure AI Search retriever
- Added Semantic caching
Bug Fixes
- Fixed issue with a null agent capabilities property when loading the agent in the Management Portal
- Added CosmosDB Data Contributor Role to Gatekeeper API
- Event Grid: services no longer need to be restarted manually after an agent is created or updated.
- Several fixes for accessibility in the Management Portal and the Chat Portal.
Improvements
- Improved agent listing in the User Portal
- Improved the User Portal conversation experience
- Added several Deployment updates to make Quick Start and Standard deployments smoother and faster
- Documented the new Branding capabilities in the Management Portal.
- Renamed the Citation class to ContentArtifact and parsed it from ToolMessages
- Enhanced CoreAPI authentication
- Added RunnableConfig to LangGraph call to support passing vars to tools
- Added tools array to default agent resource template
- Improved logging capabilities in the Python SDK
- Implemented rating comments in the backend
- Allowed conditional display of tokens, prompts, ratings, and comments in the Management Portal
- Extended the use of OpenTelemetry to Core API entry points
- Added prompt editor to the Management Portal
- Linked LangChain API tracing to main FoundationaLLM tracing
- Enabled optional persistence of completion requests
- Added optional sqlalchemy dependency
- Improved telemetry hierarchy organization
- Updated Certbot to use Ubuntu 22.04 and RSA keys for Standard Deployments
- Added new documentation for Standard Deployment.
- Added Vector Stores for indexing profiles in the management portal
- Added the aiohttp library to allow async HTTP requests in the Python SDK instead of the blocking requests package.
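The move from the blocking `requests` package to `aiohttp` matters because async calls can overlap instead of queuing one after another. A minimal sketch of that concurrency benefit, using `asyncio.sleep` as a stand-in for network latency (illustrative only, not SDK code):

```python
import asyncio
import time

async def fake_request(latency: float) -> str:
    # Stand-in for an awaitable HTTP call such as aiohttp's session.get().
    await asyncio.sleep(latency)
    return "ok"

def run_concurrent(n: int, latency: float) -> tuple[list[str], float]:
    # Run n simulated requests concurrently; with a blocking client the
    # total time would be roughly n * latency, here it is ~latency.
    async def gather_all() -> list[str]:
        return await asyncio.gather(*(fake_request(latency) for _ in range(n)))

    start = time.perf_counter()
    results = asyncio.run(gather_all())
    return results, time.perf_counter() - start
```

With `aiohttp`, the same pattern applies by awaiting `session.get(...)` calls inside an `aiohttp.ClientSession` instead of `asyncio.sleep`.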
Contact Information
For support and further inquiries regarding this release, please reach out to us:
- Support Contact: https://foundationallm.ai/contact
- Website: FoundationaLLM (https://foundationallm.ai)
Conclusion
We hope you enjoy the new features and improvements in FoundationaLLM version 0.9.1. Your feedback continues to be instrumental in driving our product forward. Thank you for your continued support.
Release 0.8.4
FoundationaLLM Version 0.8.4 Release Notes
Introduction
Welcome to FoundationaLLM version 0.8.4! This release includes new features, enhancements, performance improvements, bug fixes, and updates to the documentation. Below is a detailed summary of the changes.
Enhancements and Features
- Polymorphic Serialization Support for Agents: Addressed the issue of all agents being deserialized as `AgentBase` due to the lack of polymorphic serialization attributes.
- KeyVault URI Addition to ACA Deployment
- Management Portal Agent Model: Fixed breaking changes to the API layer from the Management Portal and added the optional `agent_prompt` to the internal context agent.
- PPTX Text Extraction Support
Bug Fixes
- Removal of OpenTelemetry from the Gatekeeper Integration API: `GatekeeperIntegrationAPI` does not reference the Python SDK, so it cannot access the Telemetry class; OpenTelemetry has been temporarily removed from `GatekeeperIntegrationAPI`.
- App Config and Connection String Validations: Fixed how the App Config connection string environment variable is passed into deployed images and updated the resource locator logic used when deleting an agent.
- Fixed Issue with Vectorization Resource Providers: Configuration values were not being synchronized across multiple instances of various services.
- Fixed Authorization Errors and Inconsistencies: Managed identity-based authentication was not working with the Authorization API.
Improvements
- Updated Host File Generator: The old method of generating host files missed one of the hosts related to Cosmos DB.
- Enhanced Data Lake Storage and Vectorization Capabilities: Hierarchical namespace (HNS) was not enabled on the Quick Start storage account.
- Event Profiles and Grid Resources: Added Event Grid resources to support configuration change events.
- Enhanced Polymorphism and Management Portal UI: Fixed the lack of proper serialization polymorphism in vectorization profiles.
Contact Information
For support and further inquiries regarding this release, please reach out to us:
- Support Contact: https://foundationallm.ai/contact
- Website: FoundationaLLM (https://foundationallm.ai)
Conclusion
We hope you enjoy the new features and improvements in FoundationaLLM version 0.8.4. Your feedback continues to be instrumental in driving our product forward. Thank you for your continued support.
Release 0.8.3
FoundationaLLM Version 0.8.3 Release Notes
Introduction
Welcome to FoundationaLLM version 0.8.3! This release includes new features, enhancements, performance improvements, bug fixes, and updates to the documentation. Below is a detailed summary of the changes.
New Features
- Management Portal UI Adjustments: Enhanced the user management interface to improve usability and aesthetics.
- Content Identification and Vectorization: Improvements to the content identification process and vectorization algorithms.
- Vectorization Unit Tests: Implemented new unit tests for comprehensive testing of vectorization features.
- Agent-to-Agent Conversations: Added support for agent-to-agent conversations, bringing other agents into a conversation using the `@agent` pattern to enhance overall interaction capabilities.
- Enkrypt Guardrails: Integrated the Enkrypt Guardrails service with the Gatekeeper API
- Prompt Shields: Integrated the Prompt Shields service with the Gatekeeper API
Enhancements
- Management Portal Branding: Improved branding elements within the management portal for a more cohesive visual identity.
- Config Resource Provider: Added missing configuration health checks to ensure system stability.
- Python Resource Provider Defaults: Set default values for Python-based resource providers to streamline configurations.
Bug Fixes
- Text Splitting Based on Tokens: Fixed issues with text splitting when token limits are reached.
- Invalid Parameter Removal: Corrected parameter issues in application Bicep files to prevent configuration errors.
- Vectorization Worker Build: Fixed build issues related to the vectorization worker to ensure smooth deployment.
- OpenTelemetry Integration Issues: Addressed reference issues for integrating OpenTelemetry across various APIs.
- Legacy Agent Selection: Added support for appending legacy agent names through the `FoundationaLLM:Branding:AllowAgentSelection` App Config setting
- Gatekeeper API: Multiple changes to the Gatekeeper Integration API for stability.
Documentation Updates
- Knowledge Management Agent: Updated documentation to reflect changes in the knowledge management agent.
- Vectorization Request Documentation: Added and refined documentation for vectorization request processes.
- Basic API Docs Quality Checks: Conducted quality checks and updates to the basic API documentation for precision.
Performance Improvements
- Vectorization Optimizations: Updated algorithms and internal processes to significantly boost performance.
- Event Handling Support: Generalized event handling improvements to ensure robust processing across different scenarios.
- Refined Object Identifiers: Enhanced mechanisms for managing agent and vectorization profiles to reduce overhead and increase efficiency.
Contact Information
For support and further inquiries regarding this release, please reach out to us:
- Support Contact: https://foundationallm.ai/contact
- Website: FoundationaLLM (https://foundationallm.ai)
Conclusion
We hope you enjoy the new features and improvements in FoundationaLLM version 0.8.3. Your feedback continues to be instrumental in driving our product forward. Thank you for your continued support.
Release 0.7.1
Improvements
- Fixed a downstream package dependency issue for the use of MS Presidio in the Gatekeeper Integration API.
- Fixed the logic for asynchronous vectorization processing while improving performance.
- Fixed the bring-your-own Azure OpenAI deployment pipeline.
- Fixed a permission issue preventing the Gatekeeper API from accessing Azure Content Safety.
Release 0.7.0
Gateway API
The Gateway API is a load balancing and resiliency solution for embeddings. It sits in front of Azure OpenAI, serving vectorization embedding requests with the correct model and automatically handling rate limits.
- Vectorization Text Embedding Profiles can be configured to use `GatewayTextEmbedding`, complementing the existing `SemanticKernelTextEmbedding`
- Vectorization with the Gateway API only supports asynchronous requests
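The rate-limit handling that the Gateway API automates can be pictured as a retry loop with exponential backoff in front of the embedding service. The sketch below is a simplified illustration of that pattern, not the actual Gateway implementation:

```python
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 (rate limit) response from the model endpoint."""

def with_backoff(call, max_retries: int = 5, base_delay: float = 0.01):
    # Retry `call` on rate-limit errors, doubling the wait each time.
    # `call` is any zero-argument function, e.g. an embedding request.
    delay = base_delay
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            time.sleep(delay)  # wait before retrying
            delay *= 2         # exponential backoff
```

Because retries can stretch a single request over many seconds, this kind of logic pairs naturally with the asynchronous-only request model noted above.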
Agent RBAC
Agent-level RBAC enables FoundationaLLM administrators to manage access to individual agents, protecting organizations from data exfiltration. When a user creates an agent through the Management API, they will automatically be granted Owner access.
Vectorization Request Management Through the Management API
Users can submit and trigger Vectorization requests through the Management API, rather than the separate Vectorization API, improving consistency across the platform. Creating and triggering Vectorization requests are handled as two separate HTTP requests.
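Assuming the two-step flow described above, a client sketch might look like the following. The endpoint paths and payload fields here are hypothetical illustrations, not the documented Management API contract:

```python
# Hypothetical sketch of the create-then-trigger flow. Resource paths follow
# the resource-provider style seen elsewhere in FoundationaLLM, but the exact
# segments and body fields are assumptions for illustration only.

def build_create_request(instance_id: str, request_name: str,
                         content_id: str) -> tuple[str, dict]:
    # Step 1: an HTTP request that creates (registers) the vectorization request.
    path = (f"/instances/{instance_id}/providers/FoundationaLLM.Vectorization"
            f"/vectorizationRequests/{request_name}")
    body = {"content_identifier": content_id, "processing_type": "Asynchronous"}
    return path, body

def build_trigger_request(instance_id: str, request_name: str) -> str:
    # Step 2: a separate HTTP request that triggers processing of the
    # previously created request.
    return (f"/instances/{instance_id}/providers/FoundationaLLM.Vectorization"
            f"/vectorizationRequests/{request_name}/process")
```

Splitting creation from triggering lets a client stage a request (and inspect or amend it) before committing it to the processing pipeline.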
Citations Available in the Chat UI
Knowledge Management agents without Inline Contexts will include citations, indicating the document from the vector store used to answer the user's request.
Agent to Agent Conversations
Through the Semantic Kernel API, FoundationaLLM enables robust agent-to-agent interactions. Users can develop complex, multi-agent workflows that perform well across a variety of tasks.
End to end Testing architecture
With the release of 0.7.0, FoundationaLLM has established an extensive architecture for end-to-end (E2E) testing.
Improvements
- User portal session linking and loading improvements
- Documentation updates for ACA and AKS deployments
- Added fix to ensure API keys are unique
- Restructured folders and moved files
- Added support for prompt injection detection
- Added support for authorizing multiple resources in a single request
- Vectorization pipeline execution and state management improvements
- Added the ability for invocation of external orchestration services
- Added support for synchronous and asynchronous OneLake vectorization
- Added support for GPT-3.5 1106 and GPT-4o
Release 0.6.0
Changes to the 0.6.0 release
This document outlines the changes made to the FoundationaLLM project in the 0.6.0 release.
Zero trust - removing dependencies on API keys
The following components now support Entra ID managed identity-based authentication:
- Azure CosmosDB service
- Azure OpenAI in LangChain
- AzureAIDirect orchestrator
- AzureOpenAIDirect orchestrator
Citations
Citations (explainability) justify the responses returned by the agent by identifying the sources on which each response is based. This release includes the API portion of this feature; the UI portion will follow in upcoming releases.
Release 0.5.0
Features
`AzureAIDirect` orchestrator
Allows pointing agents directly (no orchestrators involved) to any LLM deployed in an Azure Machine Learning workspace (e.g., Llama-2 or Mistral models).
`AzureOpenAIDirect` orchestrator
Allows pointing agents directly (no orchestrators involved) to any LLM deployed in an Azure OpenAI deployment.
Override LLM parameters in completion requests
A new section is available in completion requests that allows direct overrides of LLM parameters (e.g., `top_p`, `temperature`, and `logprobs` for GPT).
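A hypothetical sketch of what such a request body might look like; the field names are assumptions for illustration, not the exact API contract:

```python
# Builds a completion request dict with optional LLM parameter overrides.
# "user_prompt", "settings", and "model_parameters" are illustrative names,
# not the documented FoundationaLLM schema.

def build_completion_request(prompt: str, **overrides) -> dict:
    request = {"user_prompt": prompt}
    if overrides:
        # The 0.5.0 addition: a section that overrides LLM parameters
        # (e.g., top_p, temperature, logprobs) directly per request.
        request["settings"] = {"model_parameters": dict(overrides)}
    return request
```

Per-request overrides like these let callers tune sampling behavior without editing the agent's stored configuration.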
RBAC Roles
RBAC roles (`Reader`, `Contributor`, and `User Access Administrator`) are now activated on the Management API, Core API, and Agent Factory API.
Vectorization
- Improved validation of vectorization requests (immediately rejecting requests for unsupported file types).
- Stopped vectorization request processing after N failed attempts at any given step.
- Added dynamic pacing of processing in the vectorization worker.
- Added support for custom metadata on vectorization requests.
Zero trust - removing dependencies on API keys
The following components now have Entra ID managed identity-based authentication support:
- Vectorization content sources
- Resource providers
- Azure AI Search
- Authorization store and API
- Azure AI Content Safety
The following components are getting Entra ID managed identity-based authentication support in the next release:
- Azure CosmosDB service
- Azure OpenAI in LangChain
- `AzureAIDirect` orchestrator
- `AzureOpenAIDirect` orchestrator
Management Portal & API Updates
Data Sources
- Data Sources consolidate Vectorization Content Source Profiles, Text Partitioning Profiles, and Text Embedding Profiles
- Users simply need to create a Data Source and select a target Azure AI Search Index to run end-to-end Vectorization from the Management Portal
- Content Source Profiles, Text Partitioning Profiles, and Text Embedding Profiles will remain available for more advanced use cases
Configuration Management
- Management Portal automatically configures Azure App Configuration keys and Azure Key Vault secrets for new Data Sources
- Management API enables management of all Azure App Configuration keys and Azure Key Vault secrets
API Changes
- Agents
- Core API
- Session-less Completion: Removal of the `X-AGENT-HINT` header; the agent name is now passed in the JSON body
- Vectorization path casing
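The session-less completion change can be sketched as follows; the body field names are illustrative assumptions, while the removed `X-AGENT-HINT` header name comes from the notes above:

```python
# Sketch of the 0.5.0 Core API change: the agent name moves out of the
# X-AGENT-HINT request header and into the JSON body. "user_prompt" and
# "agent_name" are assumed field names for illustration.

def build_sessionless_completion(prompt: str, agent_name: str) -> tuple[dict, dict]:
    headers = {"Content-Type": "application/json"}  # no X-AGENT-HINT header anymore
    body = {"user_prompt": prompt, "agent_name": agent_name}
    return headers, body
```

Carrying the agent name in the body keeps the full request self-describing, which simplifies logging, caching, and proxying compared to an out-of-band header.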
Release 0.4.2
Features
Fixes the issue with the prompt prefix not being added to the context for the Internal Context agent.
Release 0.4.1
Features
Fixes support for the vectorization of PPTX files.