4. Output
ARG (Assessment Report Generator) produces a PowerPoint presentation covering one or more applications, along with multiple spreadsheets of corroborating data, all drawn from the Highlight or AIP REST APIs. This version ships with two distinct templates: one designed solely for Highlight, and one for combined AIP and Highlight decks.
A key benefit of ARG is the ability to produce professional, client-ready PowerPoint presentations from the supplied templates. Delivering a visually appealing and cohesive presentation matters, and the pre-made templates save time while ensuring a consistent, polished final product that communicates your message effectively.
Note that while the provided templates offer a good starting point, some information cannot be retrieved automatically through the REST APIs, so certain sections must be updated manually. These sections are marked with a yellow highlight or a sample-page banner for easy identification. Likewise, when ARG generates supporting artifacts, these must be manually attached to the deck. This ensures the final product is accurate and reflects all necessary information.
In certain situations tag replacement may fail. When this happens, the original tag remains in the final deck and must be replaced manually with the correct information. This behavior indicates a bug in ARG: if you encounter it, please report it so it can be addressed and fixed as soon as possible.
Creating and maintaining ARG templates greatly improves efficiency and consistency when working with CAST Highlight or AIP REST API data. An ARG template is essentially a PowerPoint deck containing tag, table, and chart markers, which ARG replaces with data collected from these sources. This page explains how to create and maintain ARG templates so you can present and analyze your data in a professional, streamlined manner.
Tags, enclosed in curly brackets ({tag name}), are textual elements that are replaced with values taken from the configuration file, the Highlight REST API, or the AIP REST API, or calculated by ARG. Tags are not limited to a single category; they are subdivided into subcategories such as portfolio-level and application-level tags.
| TAG Name | Description |
|---|---|
| company | configuration company element value |
| project | project element value |
| app_cnt | application count |
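The replacement mechanism described above can be sketched as a simple substitution pass over the template text. This is a minimal illustration, not ARG's actual implementation; the `tags` dictionary and its values are hypothetical stand-ins for data ARG would pull from the configuration file or the REST APIs.

```python
import re

# Hypothetical tag values; in ARG these come from the configuration
# file, the Highlight/AIP REST APIs, or ARG's own calculations.
tags = {
    "company": "ACME Corp",      # configuration company element value
    "project": "Modernization",  # project element value
    "app_cnt": "3",              # application count
}

def replace_tags(text: str, values: dict) -> str:
    """Replace each {tag name} with its value. Unknown tags are left
    in place, which mirrors ARG's behavior when replacement fails:
    the original tag stays visible in the deck for manual follow-up."""
    return re.sub(
        r"\{(\w+)\}",
        lambda m: values.get(m.group(1), m.group(0)),
        text,
    )

print(replace_tags("{company} - {project} ({app_cnt} apps)", tags))
# ACME Corp - Modernization (3 apps)
```

Leaving unresolved tags intact (rather than substituting an empty string) makes missing data easy to spot when reviewing the generated deck.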
When working with the Highlight REST API, ARG uses two types of Highlight tags: portfolio (port_hl) and application (app1_hl). Portfolio tags provide information aggregated across all applications listed in the configuration file, while application tags provide information specific to each application. An ARG template should refer only to application 1: when the configuration file includes multiple applications, ARG automatically replaces the app1 in the template with the corresponding number for each application.
The following list shows all Highlight tags currently available for use in an ARG template. To use a tag in a template, the TAG name must be prefixed with either "port_hl_" or "app1_hl_".
| TAG Name | Description | Notes |
|---|---|---|
| cloudReady_score | The score found in the Cloud Ready tile | |
| greenIndex_score | The score found in the Green Impact tile | |
| openSourceSafety_score | The score found in the Open Source Safety tile | |
| oss_total_components | number of open source components | Distinct count for all applications in the portfolio |
| oss_total_licenses | number of open source licenses | Distinct count for all applications in the portfolio |
| softwareAgility_score | The score found in the Agility tile | |
| softwareElegance_score | The score found in the Elegance tile | |
| softwareHealth_score | The score found in the Health tile | |
| softwareResiliency_score | The score found in the Resiliency tile | |
| technology | List of all technologies | In order of LOC count |
| total_loc | Total number of lines | For the portfolio tag, the sum across all applications |
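The app1-to-appN rewriting described above can be sketched as a per-application duplication pass. This is an illustrative sketch only, not ARG's implementation; the `applications` list and the `{app1_hl_total_loc}` sample text are hypothetical.

```python
import re

# Hypothetical application list; in ARG the applications come from
# the configuration file.
applications = ["webstore", "billing", "crm"]

def expand_app_tags(template_text: str, app_count: int) -> list:
    """Produce one copy of the template text per application,
    rewriting app1_hl_ tags to app2_hl_, app3_hl_, ... so that the
    template author only ever writes tags for application 1."""
    copies = []
    for n in range(1, app_count + 1):
        copies.append(re.sub(r"\bapp1_hl_", f"app{n}_hl_", template_text))
    return copies

for line in expand_app_tags("{app1_hl_total_loc} lines of code", len(applications)):
    print(line)
# {app1_hl_total_loc} lines of code
# {app2_hl_total_loc} lines of code
# {app3_hl_total_loc} lines of code
```

This is why templates should reference only application 1: any appN tag written by hand would not match the rewrite pattern and would be left behind.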
- Updated the installation process to be more reliable.
  - Installation will no longer fail if Python is already installed on the system.
  - If Python is present, the installer will install only the required packages.
  - If Python is not present, the installer will install a secondary Python runtime automatically.
- Enhanced Delta Report Functionality
  - The delta report now compares data between two snapshots, providing a clear view of changes.
  - It accurately highlights what has increased, decreased, or remained unchanged compared to the previous snapshot.
- Imaging Placeholders Delta Implemented
  - Delta calculations are now applied to all imaging placeholders, enabling better tracking of changes over time.
- Improved Delta Representation
  - Current snapshot values are displayed alongside the difference from the previous snapshot in brackets.
  - Symbols indicate whether the metric has increased (+) or decreased (−), making trends easier to interpret.
- Overall Health Factors Added
  - Added aggregated health metrics for various components to provide a holistic view of system performance.
- Delta Report: Introduced functionality to compare two snapshots and highlight variations between scans, enabling users to easily identify changes and trends over time.
- Fixed: tiles not being populated on the Executive Summary slide.
- Fixed: tiles not being populated on the Overview slide.
- Cloud Native Compliance Assessment [PaaS]: added % sign for Cloud Maturity, Cloud Scan, Boosters and Blockers.
- Fixed: scores now appear on the Green Impact slide.
- Cloud Maturity sheet: effort is now shown as a float value.
- Cloud Maturity sheet: NB Roadblocks sorted from largest to smallest.
- GreenIT sheet: effort is now shown as a float value.
- Now working as desired for multiple applications.
- Remediation & Grade Improvement (slide #13) - automate the 4 examples with rules name: top 1 for "Immediate Plan" and "Near-Term Plan", top 2 for "Mid-Term Plan".
- First tab: replace the last column header "comment" with "Priority" (action plan sheet).
- Second tab: remove the last column "RuleId" (action plan sheet).
- Remove the last column: "Files" (Cloud Maturity sheet).
- Sort violations highest to lowest for each action plan.
- Cloud native slide: add percentage sign for Cloud Maturity, Cloud Scan, Boosters and Blockers.
- Automate the health factors within the title (last 2 or lowest if below 3).
- Technical Overview and Key takeaways:
- Table technology distribution: check the Fix Now counts.
- Table bottom left: automate the health factor within the title (related to the rule in the first row) (slide #9).
- Last line background color (slides 10, 11, 12).
- Fix total violation count in ISO table.
- Open-Source Software (OSS) IP Risk Assessment (slide #14): in the Licenses table, the last character is trimmed in the final column ("Top 5 latest components").
- Automate the wording about the commented-out code lines (slide #11): "good" if below 1%, "quite significant" if between 1% and 2%, "significant" if above 2%.
- The com.castsoftware.uc.arg and com.castsoftware.uc.python.common packages are no longer downloaded from PyPI; instead, they are stored as .nupkg files and available under the src folder.
- Some other bug fixes.
- Corrections made to installation
- Added AWS and Microsoft templates
- Custom tags using Json file
- Enable table rollover to next page
- General bug fixes
- Highlight only assessment deck upgrade
  - Summary Page for both portfolio and application level
  - Cloud Ready Page
  - Enable text replacement by selection name
- MRI assessment deck upgrade
  - Application Sizing
  - Strength and Improvement
  - Technology Detail
- Bug fixes
- Add spillover
- Add new pages
  - Highlight Cloud Ready
  - MRI Overview
- Expand action plan warning when effort not found
- Enhance error trapping
- Enable ARG to be used with the Integrated Dashboard.
- Correct Action priority count.
- Improved error trapping
- Add ARG to CAST-Extend
- Add installation script
- Update readme file
- Update GitHub Wiki
- Include Highlight Only and AIP and Highlight templates in extend package
- Rework Highlight benchmark page to be more user intuitive
- Highlight tiles
- Add best, worst, industry average to all Highlight tiles
- Number formatting changes
- Add internal table for calculated text replacement (will be moved to external json file)
- Highlight Summary page bug fixes
- Highlight Green Impact page bug fixes
- Add Highlight benchmark pages
- If PPT file is locked when saving, prompt the user to retry or quit
- Add Green Impact page
- Correct Value exception error when no OSS and License issues found
- Show proper technology on AppMarq Slide
- Correct Value exception error when sizing application
- Converted to run as a python module
- Enable ARG to be run from under OneClick
- Add OSS effort and violations to fix now totals
- Move common code to python.common package
- Updates for new overview Slide
- Slider not setting for TQI
- Correct ipython vulnerability
- Make table rows fit data being filled, remove/add rows accordingly
- Add root cause to strengths and improvement table
- Convert config file to json format
- Move templates to teams
- Updated documentation
- Rename the generated deck to Project <proj_name> - Tech DD Findings
- Generate ISO-5055 slide
- General formatting changes
- Rename Critical violations to Violations under Critical rules (all pages)
- Doc slide - remove * *Health Improvement Categories
- Add the grades improvement table of the "Fix Now" issues within the related slide. Change the title by "Remediation plan for {projectName} - Grades improvement simulation"
- Add the full list of BLOCKERS and BOOSTERS within an embedded Excel sheet on the Cloud readiness slide.
- Increase daily rate
- Correct Highlight only generation issue introduced in version 1.4.0
- Add document page automation
- Split near- and long-term issue person days
- Automation of the TQI - Strengths and Improvement page
- Split AIP and HL person days on slide#5 (Action Plan to mitigate)
- Correct issue - miscalculation for the Total $estimate in slide#2 (Overview)
Corrected the following issues:
- slide 0: substitutions are not done for month and year (“{month} {year}”)
- slide 6: near term issues are not displayed (“… plan for near term, including {app1_near_term_vio_txt}…”)
- slide 8: size of the application is not displayed (“{app1_sml_size} size application with…”)
- slide 10: comment ratio and commented-out ratio are not displayed (“… at {app1_comment_pcnt}%.” and “…less than {app1_comment_out_pcnt}%.”)
- slide 18: action plan estimates for mid-term are not displayed (“${app1_mid_term_cost}K implementation cost” and “Max. {app1_mid_term_eff} person-days implementation effort”)
- slide 19: mid-term items are not displayed (“{app1_mid_term_vio_cnt} items should be planned in mid-term…”)
- slide 19: long-term items number is incorrect as it contains mid-term items + long-term items. Therefore, mid-term items must be excluded from this calculation.