Releases: oracle/weblogic-kubernetes-operator
Operator 3.2.2
- Resolved an issue where the operator would retry Kubernetes API requests that timed out without a backoff, causing increased network utilization #2300.
- Resolved an issue where the operator would select the incorrect WebLogic Server port for administrative traffic #2301.
- Resolved an issue where the operator, when running in a large and heavily loaded Kubernetes cluster, would not properly detect when a domain had been deleted and recreated #2305 and #2314.
- Resolved an issue where the operator would fail to recover and begin processing in a namespace if the operator did not immediately have privileges in that namespace when it was first detected #2315.
- The operator logs a message when it cannot generate a NamespaceWatchingStopped Event in a namespace because the operator no longer has privileges in that namespace #2323.
- Resolved an issue where the operator would repeatedly replace a ConfigMap, causing increased network utilization #2321.
- Resolved an issue where the operator would repeatedly read a Secret, causing increased network utilization #2326.
- Resolved an issue where the operator was not honoring domain.spec.adminServer.serverService #2334.
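The #2300 fix above concerns retrying timed-out Kubernetes API requests with a backoff. As a point of reference, the sketch below shows a generic capped exponential backoff for a retried call; the class and constants are hypothetical and illustrative, not the operator's actual implementation.

```java
import java.time.Duration;
import java.util.function.Supplier;

// Illustrative retry helper: capped exponential backoff, of the kind one
// might apply to timed-out Kubernetes API calls. All names are hypothetical.
public class RetryWithBackoff {
    static final Duration INITIAL = Duration.ofMillis(500);
    static final Duration MAX = Duration.ofSeconds(30);

    // Delay before the given retry attempt (0-based): doubles each attempt,
    // capped at MAX.
    static Duration delayFor(int attempt) {
        long millis = INITIAL.toMillis() << Math.min(attempt, 30); // bound shift to avoid overflow
        return millis >= MAX.toMillis() ? MAX : Duration.ofMillis(millis);
    }

    // Retry the call up to maxAttempts times, sleeping between failures.
    static <T> T callWithRetry(Supplier<T> call, int maxAttempts) throws InterruptedException {
        RuntimeException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e;
                Thread.sleep(delayFor(attempt).toMillis());
            }
        }
        throw last;
    }
}
```

Without a cap and a growing delay, each timeout triggers an immediate retry, which is the increased-network-utilization behavior the release note describes.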
Operator 3.2.1
Updated the versions of several dependencies, including the Oracle Linux base of the container image.
Operator 3.2.0
Features:
- The operator's container image is based on Oracle Linux 8.
- WebLogic Server container images based on Oracle Linux 8 are supported.
- Online updates of dynamic configuration changes for Model in Image.
- Automatic injection of the WebLogic Monitoring Exporter as a sidecar container.
- Events are generated at important moments in the life cycle of the operator or a domain.
- PodDisruptionBudgets are generated for clusters, improving the ability to maintain cluster availability during planned node shutdowns and Kubernetes upgrades.
- Additional scripts to assist with common tasks, such as the scaleCluster.sh script.
- Support for TCP and UDP on the same channel when the SIP protocol is used.
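A PodDisruptionBudget caps how many cluster members a voluntary disruption (node drain, upgrade) may take down at once. One common policy is to tolerate a single disrupted member; the helper below is a hypothetical illustration of that arithmetic, not necessarily the operator's exact rule.

```java
// Hypothetical illustration of deriving a PodDisruptionBudget minAvailable
// value from a cluster's replica count: keep all but one member available
// during voluntary disruptions. Not the operator's actual code.
public class PdbMath {
    static int minAvailable(int replicas) {
        // A zero- or one-replica cluster has nothing extra to protect;
        // otherwise require replicas - 1 members to stay up.
        return Math.max(replicas - 1, 0);
    }
}
```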
Fixes for Bugs or Regressions:
- All fixes included in 3.1.1 through 3.1.4 are included in 3.2.0.
- Resolved an issue where clustered Managed Servers would not start when the Administration Server was not running (#2093).
- Model in Image generated domain home file systems that exceed 1 MB are supported (#2095).
- An event and status update are generated when a cluster can't be scaled past the cluster's maximum size (#2097).
- Improved the operator's ability to recover from failures during its initialization (#2118).
- Improved the ability for scalingAction.sh to discover the latest API version (#2130).
- Resolved an issue where the operator's log would show incorrect warnings related to missing RBAC permissions (#2138).
- Captured WDT errors related to validating the model (#2140).
- Resolved an issue where the operator incorrectly missed Secrets or ConfigMaps in namespaces with a large number of either resource (#2199).
- Resolved an issue where the operator could report incorrect information about an introspection job that failed (#2201).
- Resolved an issue where a Service with the older naming pattern from operator 3.0.x could be stranded (#2208).
- Resolved an issue in the domain and cluster start scripts related to overrides at specific Managed Servers (#2222).
- The operator supports logging rotation and maximum file size configurations through Helm chart values (#2229).
- Resolved an issue supporting session replication when Istio is in use (#2242).
- Resolved an issue where the operator could swallow exceptions related to SSL negotiation failure (#2251).
- Resolved an issue where introspection would detect the wrong SSL port (#2256).
- Resolved an issue where introspection would fail if a referenced Secret or ConfigMap name was too long (#2257).
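The #2257 fix involved referenced Secret or ConfigMap names that were too long. Kubernetes object names must be valid DNS-1123 subdomains of at most 253 characters, so an over-long or badly formed name fails only once it is used. The sketch below is a hypothetical up-front check of those rules, not the operator's actual validation code.

```java
import java.util.regex.Pattern;

// Hypothetical pre-validation of a Kubernetes object name against the
// DNS-1123 subdomain rules: at most 253 characters, lowercase alphanumeric
// labels separated by '.', with '-' allowed inside a label.
public class K8sNames {
    static final int MAX_NAME_LENGTH = 253;
    static final Pattern DNS1123_SUBDOMAIN =
        Pattern.compile("[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*");

    static boolean isValidName(String name) {
        return name.length() <= MAX_NAME_LENGTH
            && DNS1123_SUBDOMAIN.matcher(name).matches();
    }
}
```

Checking a referenced name before running introspection surfaces the error immediately instead of deep inside a failed job.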
Operator 3.1.4
Resolved an issue where the operator would ignore live data that was older than cached data, such as following an etcd restore #2196.
Updated Kubernetes Java Client and Bouncy Castle dependencies.
Operator 3.1.3
Resolved an issue that caused some WebLogic Servers to fail to start in large Kubernetes clusters where Kubernetes watch notifications were not reliably delivered. #2188
Resolved an issue that caused the operator to ignore some namespaces it was configured to manage in Kubernetes clusters that had more than 500 namespaces. #2189
Operator 3.1.2
Resolved an issue where the operator failed to start servers whose pods were configured with an annotation containing a forward slash #2089.
Operator 3.1.1
This version resolves an issue that caused unexpected server restarts when the domain had multiple WebLogic clusters (#2109).
Operator 3.0.4
This release contains a back-ported fix from 3.1.0 for Managed Server pods that do not properly restart following a rolling activity.
Operator 3.1.0
- All fixes included in 3.0.1, 3.0.2, and 3.0.3 are included in 3.1.0.
- Sample scripts to start and stop server instances #2002
- Support running with OpenShift restrictive SCC #2007
- Updated default resource and Java options #1775
- Introspection failures are logged to the operator's log #1787
- Mirror introspector log to a rotating file in the log home #1827
- Reflect introspector status to domain status #1832
- Ensure operator detects pod state changes even when watch events are not delivered #1811
- Support configurable WDT model home #1828
- Namespace management enhancements #1860
- Limit concurrent pod shut down while scaling down a cluster #1892
- List continuation and watch bookmark support #1881
- Fix scaling script when used with dedicated namespace mode #1921
- Fix token substitution for mount paths #1911
- Validate existence of service accounts during Helm chart processing #1939
- Use Kubernetes Java Client 10.0.0 #1937
- Better validation and guidance when using longer domainUID values #1979
- Update pods with label for introspection version #2012
- Fix validation error during introspection for certain static clusters #2014
- Correct issue in wl-pod-wait.sh sample script #2018
- Correct processing of ALWAYS serverStartPolicy #2020
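Item #1911 above concerns token substitution in volume mount paths, where a path such as "$(DOMAIN_HOME)/logs" has its $(NAME) tokens replaced with concrete values. The sketch below is an illustrative, hypothetical implementation of that kind of substitution, not the operator's actual code.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative $(NAME)-style token substitution as might be applied to a
// mount path. Known tokens are replaced; unknown tokens are left untouched.
public class TokenSubst {
    static final Pattern TOKEN = Pattern.compile("\\$\\(([A-Za-z_][A-Za-z0-9_]*)\\)");

    static String substitute(String path, Map<String, String> values) {
        Matcher m = TOKEN.matcher(path);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            // group(1) is the token name; fall back to the literal token text.
            String replacement = values.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(out, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

Matcher.quoteReplacement is needed so that characters like '$' in the substituted value are not themselves interpreted as replacement syntax.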
Operator 3.0.3
This release contains a fix for pods that are stuck in the Terminating state after an unexpected shut down of a worker node.