This pillar of the maturity model covers ....
Reduce impact by optimising energy consumption within your build processes. For example, enabling developers to catch problems sooner on their local machines means fewer resources are wasted elsewhere in the lifecycle. We often 'build on every commit', which, whilst convenient, can also be a huge waste of resources.
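As an illustration, the sketch below only rebuilds the components whose files actually changed since a known-good ref, rather than rebuilding everything on every commit. It assumes a hypothetical mono-repo laid out as `services/<component>/...` and a placeholder `build_component` helper; adapt both to your own layout and build tooling.

```python
# A minimal sketch, assuming a hypothetical mono-repo laid out as
# services/<component>/...: only rebuild the components whose files changed
# since the last known-good ref, instead of rebuilding everything on every commit.
import subprocess

def changed_components(since_ref: str = "origin/main") -> set[str]:
    """Return the top-level components touched since `since_ref`."""
    diff = subprocess.run(
        ["git", "diff", "--name-only", since_ref, "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return {
        path.split("/")[1]                 # services/<component>/...
        for path in diff
        if path.startswith("services/") and path.count("/") >= 2
    }

def build_component(name: str) -> None:
    # Placeholder for your real build step (compile, test, package).
    print(f"building {name}")

if __name__ == "__main__":
    for component in sorted(changed_components()):
        build_component(component)
```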
Measure | Description | Score: 1 | Score: 3 | Score: 5 |
---|---|---|---|---|
Static Analysis | There are now several tools that can measure the carbon awareness of code - are you using them to identify improvements? | We do static analysis, but we don't apply a green software lens to this. | We do static analysis and also use green software scanners. There are lots of false positives, but we're working through this to find those genuine improvements. | We do static analysis and also use green software scanners. We've tuned our implementation of this to improve the signal-to-noise ratio so our teams get really targeted feedback. |
CI/CD | Have you optimised your CI/CD infrastructure to reduce waste and improve efficiency? | We have dedicated CI/CD infrastructure with always-on build-agents. | We have dedicated CI/CD build infrastructure but build agents only get provisioned when there is work to do. | We make use of a SaaS hosted CI/CD solution where local build agents only get provisioned when there is work to do. |
Peer reviews | Does your process interrogate the efficiency of the solution, or does it just focus on the fact that it works? Does your peer review process reward the efficiency of the solution being developed and consider non-functional concerns? | Any changes we make require an approval where another member of the team has to review/approve changes. | Any changes we make require an approval from a more senior member of the team, with green software efficiency being prioritised. | As well as manual reviews and approvals, we have integrated automated tools to give us feedback on the efficiency of the code. |
When it comes to making those improvements to your software to reduce impact, how easily can you do so without affecting other things? By preferring decoupled architectures and reducing your unit of change into smaller chunks, you can deliver better services to your users faster and more safely, because there is less risk of breaking something else.
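To make this concrete, here is a minimal sketch of one common decoupling technique, API versioning, shown with FastAPI purely as an assumption - any web framework works. Existing consumers stay on `/v1` while a leaner `/v2` can be rolled out and optimised independently, without a coordinated release of everything.

```python
# Minimal sketch of API versioning (FastAPI is an assumption here): the /v1
# payload stays stable for existing consumers while /v2 can evolve and be
# optimised independently, so a green improvement doesn't force a big-bang release.
from fastapi import APIRouter, FastAPI

app = FastAPI()

v1 = APIRouter(prefix="/v1")
v2 = APIRouter(prefix="/v2")

@v1.get("/report")
def report_v1() -> dict:
    # Original payload shape, kept stable for existing consumers.
    return {"items": [], "generated": "full"}

@v2.get("/report")
def report_v2(fields: str = "summary") -> dict:
    # Leaner endpoint that lets callers request only the data they need.
    return {"items": [], "generated": fields}

app.include_router(v1)
app.include_router(v2)
```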
Measure | Description | Score: 1 | Score: 3 | Score: 5 |
---|---|---|---|---|
Reducing waste | Two of the most important areas of digital hygiene that can have a huge impact are removing resources that are no longer needed and keeping environments short-lived. | Our environments tend to be long-lived whether they are being actively used or not. Given how environments bloat over time, we will almost certainly find provisioned resources which aren't really needed. | We use concepts such as infrastructure as code to ensure unused resources get cleaned up over time, though generally speaking our environments tend to be long-lived; production-sized environments are limited. | Our environments tend to be short-lived and we provision them on demand using infrastructure as code, scaling out to production-sized environments as required, with automatic clean-up processes in place to remove them after use. |
Automation | Leverage automation to make otherwise burdensome waste-reduction tasks more accessible, e.g. destroying environments overnight knowing you can quickly redeploy them when required (a sketch of such a clean-up job follows this table). | We use a little automation, but the human/process cost of redeploying things rules out many activities we might otherwise wish to do to optimise resource use. | We use infrastructure as code automation in combination with configuration management to apply changes. Applying new code changes tends to be fully automated. | We use infrastructure as code automation in combination with configuration management to rapidly build new environments and apply code changes in an end-to-end manner. |
Release modularity | When you have smaller parts to change, you can change them more easily. If your system is made of flexible and decoupled components, you can improve your green software practices without worrying about breaking something else. | We release everything relating to our service at the same time, so managing dependencies is not a problem. Our releases tend to be less frequent as a result of having to deploy everything. | Many of our components are independent of each other, so we can release them whenever we wish to make a change. We can release these components as often as we like, though engaging with third parties does slow this down. | All of our components can be independently deployed; we utilise approaches like API versioning and work with third parties to streamline our integration when necessary. |
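The clean-up job referenced in the automation row might look something like the sketch below. It assumes resources are tagged `lifecycle=ephemeral` with an ISO 8601 `expires-at` tag, and it is shown with boto3/EC2 purely as an illustration - the same idea applies to any provider or infrastructure-as-code stack.

```python
# Minimal sketch of an automated clean-up job for short-lived environments,
# assuming resources are tagged lifecycle=ephemeral and expires-at=<ISO 8601
# timestamp with an explicit UTC offset>. boto3/EC2 is used only as an example.
from datetime import datetime, timezone
import boto3

def expired_ephemeral_instances(ec2) -> list[str]:
    """Return the IDs of ephemeral instances whose expiry time has passed."""
    now = datetime.now(timezone.utc)
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:lifecycle", "Values": ["ephemeral"]}]
    )["Reservations"]
    expired = []
    for reservation in reservations:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            expires = tags.get("expires-at")
            if expires and datetime.fromisoformat(expires) < now:
                expired.append(instance["InstanceId"])
    return expired

if __name__ == "__main__":
    ec2 = boto3.client("ec2")
    instances = expired_ephemeral_instances(ec2)
    if instances:
        ec2.terminate_instances(InstanceIds=instances)
```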
Monitoring data can be used to identify trends to continuously improve and optimise your service. For example, understanding that usage follows a specific cycle/seasonality may inform decisions around how you target the use of resources to reduce waste.
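As a simple illustration of acting on such a trend, the sketch below derives a per-hour scaling plan from observed request volumes. The per-replica capacity figure and the sample data are assumptions you would replace with your own measurements.

```python
# Minimal sketch: derive a per-hour scaling plan from observed request volumes,
# so capacity follows the daily cycle instead of running at peak size all day.
# The per-replica capacity is an assumed figure to calibrate from monitoring data.
from math import ceil

REQUESTS_PER_REPLICA_PER_HOUR = 50_000  # assumed capacity of one replica

def scaling_plan(hourly_requests: dict[int, int], min_replicas: int = 1) -> dict[int, int]:
    """Map hour-of-day to a recommended replica count."""
    return {
        hour: max(min_replicas, ceil(requests / REQUESTS_PER_REPLICA_PER_HOUR))
        for hour, requests in hourly_requests.items()
    }

if __name__ == "__main__":
    observed = {9: 400_000, 13: 250_000, 3: 20_000}   # illustrative daily cycle
    print(scaling_plan(observed))                      # {9: 8, 13: 5, 3: 1}
```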
Measure | Description | Score: 1 | Score: 3 | Score: 5 |
---|---|---|---|---|
Patching | Updates often contain performance improvements, and updating your software regularly to ensure you're using the latest version gives you the benefit of those improvements. | We apply urgent security updates when they are available. | We apply security updates when they are available and consider patching/upgrading other components when new features touch those components. | We regularly patch and upgrade all software underpinning the service. |
Feature toggling | Being able to dynamically control/toggle features within your application provides opportunities to optimise for certain user scenarios (a sketch of a carbon-aware toggle follows this table). | We don't make use of this. Turning a feature off would necessitate a code change. | We can toggle certain software features on or off to reduce the environmental impact of our software. | We automatically toggle certain software features on or off to reduce the environmental impact of our software based on external parameters. |
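The carbon-aware toggle mentioned above might, in sketch form, look like this. `current_carbon_intensity()` is a hypothetical stub to wire up to whichever regional carbon-intensity signal you have access to, and the threshold and feature names are illustrative only.

```python
# Minimal sketch of a carbon-aware feature toggle: non-essential features are
# switched off when grid carbon intensity is high. The threshold, feature names
# and current_carbon_intensity() stub are all assumptions to adapt locally.
HIGH_CARBON_THRESHOLD = 300  # gCO2/kWh - an assumed cut-off to tune per region

def current_carbon_intensity() -> float:
    """Hypothetical placeholder for a call to a carbon-intensity data source."""
    return 250.0

def feature_enabled(feature: str) -> bool:
    non_essential = {"preview_generation", "nightly_reindex"}
    if feature in non_essential and current_carbon_intensity() > HIGH_CARBON_THRESHOLD:
        return False
    return True

if __name__ == "__main__":
    print(feature_enabled("preview_generation"))
```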
Testing is an integral part of how we validate the quality of the software solutions we build, but do you test in a way that minimises the resources required to prove whether it works? For example, do you always run all tests, or just the subset related to the area where you've made the change?
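One way to run only the relevant subset is to map changed files to their tests, as in the sketch below. It assumes a hypothetical `src/<package>` layout mirrored by `tests/<package>` and pytest as the runner.

```python
# Minimal sketch of change-aware test selection: map changed source files to
# their test directories and run only those, rather than the whole suite.
# Assumes a hypothetical src/<package> layout mirrored by tests/<package>.
import subprocess
import sys

def changed_packages(since_ref: str = "origin/main") -> set[str]:
    """Return the top-level packages touched since `since_ref`."""
    diff = subprocess.run(
        ["git", "diff", "--name-only", since_ref, "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return {p.split("/")[1] for p in diff if p.startswith("src/") and p.count("/") >= 2}

if __name__ == "__main__":
    targets = [f"tests/{pkg}" for pkg in sorted(changed_packages())]
    if targets:
        sys.exit(subprocess.call(["pytest", *targets]))
    print("No code changes detected - skipping test run.")
```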
Measure | Description | Score: 1 | Score: 3 | Score: 5 |
---|---|---|---|---|
Change aware | Testing only what has changed means you don't waste resources testing what hasn't. | We execute all tests on each code commit. | We execute all tests relating only to the component which has been changed and its integrations. | We execute all tests relating only to the feature which has been added/changed and its integrations. |
Non-Functional Requirements (NFRs) | You should keep investing in testing/executing NFRs as part of a regular cadence. Otherwise, how do you know that things like performance are not being made worse? | Around major releases we do significant NFR testing, e.g. system testing, but on each code change in between we don't really apply any. | We regularly undertake some form of scheduled NFR testing to review system performance and to test component resiliency. | On each code change we are able to execute some element of NFR testing relating to the feature or component that has been changed. |
Tolerance | Sometimes tests are binary pass/fail, and in other cases adding failure tolerances is more appropriate. Do you fail the build when any of your tests fail? What is an acceptable level of failure to break the build? (A sketch of a tolerance gate follows this table.) | A test either passes or breaks the build. | We have some rudimentary tolerance checks built in to break the build under certain conditions, e.g. if static analysis has found more than 5 high-priority green software actions that require fixing. | We integrate the output of NFR testing to determine failure tolerances for new builds, e.g. if performance degrades. |
Production Datasets | You will get unexpected results if you don’t test with production data e.g. You might not notice big performance issues with a small dataset. | We don't regularly test with production datasets that replicate either the scale or variance of the data. | We have some environments where we regularly test with production datasets to ensure that performance/behaviour remains consistent with test data. | We have on-demand environments where we can regularly test with production datasets to simulate production size, scale and variance. |
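The tolerance gate mentioned above could be sketched as follows. It assumes a hypothetical JSON scanner report containing findings with a `severity` field; the threshold is the illustrative figure from the score 3 example, not a recommendation.

```python
# Minimal sketch of a tolerance gate: fail the build only when a scanner report
# exceeds an agreed threshold, rather than on any single finding. The report
# format and threshold are assumptions to adapt to whatever scanner you use.
import json
import sys

MAX_HIGH_PRIORITY_FINDINGS = 5  # assumed tolerance, as in the score 3 example

def high_priority_count(report_path: str) -> int:
    """Count high-severity findings in a JSON list of scanner findings."""
    with open(report_path) as handle:
        findings = json.load(handle)
    return sum(1 for finding in findings if finding.get("severity") == "high")

if __name__ == "__main__":
    count = high_priority_count("scanner-report.json")
    if count > MAX_HIGH_PRIORITY_FINDINGS:
        print(f"{count} high-priority findings exceed the tolerance - failing build.")
        sys.exit(1)
    print(f"{count} high-priority findings within tolerance.")
```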