Automation tools for various work tasks including lab operations, ServiceNow, and Jira.
- ServiceNow to Jira AI Processor: LLM-powered ticket classification, analysis, and Jira ticket preparation using Ollama
- QA Automation: Run automated QA tests on lab exercises
- Lab Operations: Create, start, stop, delete, and manage labs
- User Impersonation: Switch to different users for testing
- Multi-Environment Support: Works with ROL, Factory, and China environments
- ServiceNow Auto-Assignment: Automated ticket assignment with team-specific configurations
- Link Checker: Validate external links in course content with PDF/JSON reports and Jira integration
- Python 3.8+
- Firefox or Chrome browser
- Geckodriver (for Firefox) or Chromedriver (for Chrome)
- Podman or Docker (for Link Checker - optional but recommended for faster link validation)
The scripts are intended to be installed on your local machine, not on the lab workstation. They have only been tested on Fedora 42.
- Clone the repository:

  git clone https://github.com/carias-rh/lx-toolbox.git
  cd lx-toolbox

- Set up your credentials:

  # Copy the template
  cp env.template .env
  # Edit with your credentials
  vim .env  # or use your preferred editor

- Configure settings:

  cp config/config.ini.example config/config.ini
  # Edit config/config.ini as needed
  vim config/config.ini

- Run the installer (creates the virtualenv, installs dependencies and drivers, and deploys lx-toolbox):

  # Basic installation (skips Ollama - recommended for faster setup)
  ansible-playbook playbooks/setup.yml -K --skip-tags ai-capabilities
  # Full installation including Ollama (for SNOW AI features)
  ansible-playbook playbooks/setup.yml -K
After running the Ansible setup.yml playbook, simplified wrapper commands are installed to /usr/bin/ and available system-wide:
# Start a lab
start rh124-9.3
start do180 factory # Use Factory environment
# Stop a lab
stop rh124-9.3
stop do180 factory
# Delete a lab
delete rh124-9.3
delete do180 factory
# Recreate a lab (delete + create)
recreate rh124-9.3
recreate do180 factory
# Impersonate a user
impersonate rh124-9.3 jsmith
impersonate do180-4.14 jsmith
# Run QA automation
qa 124 2.3 # Run QA on chapter 2.3
qa do180 ch02s05 factory # Run on Factory
# Check links in course content
check-links --course do280-4.18
check-links --course rh124 --all-versions
check-links --course do180 --create-jira
check-links # Check all courses
# Process SNOW tickets with AI
snow-ai # Process tickets from user queue
snow-ai --ticket INC12345678 # Process specific ticket
snow-ai -t INC111 -t INC222 # Process multiple tickets

All wrapper commands support -h or --help for usage information.
For advanced usage or when wrapper commands aren't available, use lx-tool directly:
# Lab operations
lx-tool lab start rh124-9.3 --env rol --browser firefox
lx-tool lab stop rh124-9.3
lx-tool lab create rh124-9.3
lx-tool lab delete rh124-9.3
lx-tool lab recreate rh124-9.3
lx-tool lab impersonate rh124-9.3 student01

The ServiceNow to Jira AI Processor uses a local LLM (Ollama) to analyze and classify ServiceNow Feedback tickets, automatically preparing Jira tickets for content or environment issues.
- Ollama: Install from ollama.ai
- LLM Model: Download a model (e.g., `ollama pull ministral-3:8b`)
Add the following to your .env file:
# LLM Provider (default: ollama)
LLM_PROVIDER=ollama
# Ollama Configuration
OLLAMA_MODEL=ministral-3:8b
OLLAMA_COMMAND=/usr/local/bin/ollama
# Your signature for responses
SIGNATURE_NAME=Your Name

Configure the ServiceNow URL paths in the config/config.ini file:

SNOW_BASE_URL = <ask to carias>
SNOW_FEEDBACK_QUEUE_PATH = <ask to carias>

# Process all tickets in your feedback queue (wrapper command)
snow-ai
# Process specific tickets
snow-ai -t RITM0123456 -t RITM0123457
# Use Chrome instead of Firefox
snow-ai --browser chrome
# Run in a specific environment
snow-ai --env factory

This opens a separate browser window per ticket, with organized tabs for efficient processing.
The SNOW AI Processor follows this workflow for each ticket:
- Ticket Retrieval: Fetches ticket details from ServiceNow (description, course, chapter, section, URL)
- Classification: Uses the LLM to classify the ticket as:
  - Content Issue: Typos, incorrect instructions, missing steps, outdated content
  - Environment Issue: Lab script failures, stuck labs, platform limitations
  - Manual Review: UI suggestions, complaints, complex issues
- Language Detection & Translation: Detects non-English feedback and translates it
- Guide Text Extraction: Navigates to the course section in ROL and extracts the relevant content
- Analysis: Uses the LLM to:
  - Compare student feedback with the guide text
  - Determine if the issue is valid
  - Suggest corrections
  - Generate a Jira-ready title
- Response Generation: Crafts a professional response to the student
- Jira Preparation: Opens a prefilled Jira ticket with:
  - Course and section information
  - Translated issue description
  - Suggested workaround
  - Component and version pre-selected
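The classification step in this workflow can be sketched as a constrained prompt plus a defensive parse. This is an illustrative assumption about how such a step might look; the function names, labels, and prompt wording are hypothetical, not lx-toolbox's actual API:

```python
# Hypothetical sketch of the LLM classification step. Labels mirror the
# three categories listed above; anything unrecognized falls back to
# manual review so a bad model reply never auto-files a ticket.
CATEGORIES = {"content", "environment", "manual_review"}

def build_classification_prompt(ticket_description: str) -> str:
    """Assemble a constrained prompt so the model's reply is easy to parse."""
    return (
        "Classify the following student feedback as one of: "
        "content, environment, manual_review.\n"
        f"Feedback: {ticket_description}\n"
        "Answer with the label only."
    )

def parse_classification(llm_reply: str) -> str:
    """Normalize the model reply; fall back to manual review when unsure."""
    label = llm_reply.strip().lower().replace(" ", "_")
    return label if label in CATEGORIES else "manual_review"
```

The fallback matters in practice: local models occasionally answer with extra prose, and routing those replies to manual review is safer than guessing.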
Each ticket gets its own browser window with 4 tabs:
| Tab | Content |
|---|---|
| Tab 1 | ServiceNow ticket (zoomed for readability) |
| Tab 2 | ROL course section (guide content) |
| Tab 3 | Jira search (find similar existing tickets) |
| Tab 4 | Jira create (prefilled new ticket form) |
This layout enables quick context switching and manual review before submitting tickets.
Content Issues (automatically creates Jira):
- Typos or grammatical errors
- Incorrect or outdated instructions
- Missing steps or commands
- Mismatches between video and guide
- Incomplete exercise information
Environment Issues (suggests debugging steps):
- Lab start/finish script failures
- Labs stuck in starting/stopping state
- Platform-specific limitations (e.g., vim visual mode)
- OpenShift lab first-boot delays
Video Issues (creates Jira if videos exist, otherwise informs student):
- Video not available or cannot be found
- Video doesn't match section/chapter content
- Video subtitles incorrect, missing, or translation issues
- Video technical problems (bad cuts, audio sync)
- Video player not working
- Exercise/guide mismatch with instructor video
The Link Checker functionality validates all external links in course content (References sections) and generates comprehensive reports.
Basic Usage:
# Check links in a specific course
check-links --course do280-4.18
# Check all available versions of a course
check-links --course rh124-9.3 --all-versions
# Check all courses in the catalog
check-links

Key Features:
- Automatic link validation: Checks all external links in course References sections
- Screenshot capture: Takes screenshots of each visited link (including error pages)
- Comprehensive reports: Generates both PDF and JSON reports
- Retry mechanism: Automatically retries failed links before final report
- Multi-version support: Can check all versions of a course
- Anti-crawler workaround: Works around websites that block automated clients
- Jira integration: Optionally creates prefilled Jira tickets for broken links
- Fast validation: Uses the containerized `linkchecker` tool for quick checks when screenshots aren't needed
- Authenticated access: Automatically logs into access.redhat.com to check knowledge base articles that require authentication (avoiding 403 Forbidden errors)
Report Structure:
- PDF Report: Includes course summary, broken links with hyperlinks, embedded screenshots, and detailed section-by-section breakdown
- JSON Report: Machine-readable format for alternative validation methods
- Screenshots: Organized by run timestamp, course name, version, and section name
Command Options:
- `--course, -c`: Specific course ID to check (e.g., `rh124-9.3`). Checks all courses if omitted
- `--env, -e`: Environment (rol, factory, china). Default: rol
- `--browser, -b`: Browser to use (firefox, chrome). Default: firefox
- `--headless/--no-headless`: Run browser in headless mode. Default: no-headless
- `--output-dir, -d`: Directory to save reports. Default: current directory
- `--screenshots/--no-screenshots`: Take screenshots of visited links. Default: screenshots enabled
- `--screenshots-dir, -s`: Directory to save screenshots. Default: `./link_checker_screenshots`
- `--retry/--no-retry`: Retry failed links before generating report. Default: retry enabled
- `--all-versions/--single-version`: Check all available versions of the course. Default: single version
- `--create-jira/--no-jira`: Create prefilled Jira tickets for broken links. Default: disabled
Examples:
# Quick check without screenshots (faster)
check-links --course do280-4.18 --no-screenshots
# Check all versions with custom output directory
check-links --course rh124-9.3 --all-versions --output-dir ./reports
# Check all courses and create Jira tickets for broken links
check-links --create-jira
# Custom screenshots directory
check-links --course do280-4.18 --screenshots-dir ./my_screenshots
# Skip retry for faster execution
check-links --course do280-4.18 --no-retry
# Headless mode for automated runs
check-links --course do280-4.18 --headless

How It Works:
- Navigation: Logs into ROL and navigates to the course(s)
- Section Discovery: Expands the table of contents and collects all sections (excluding summaries, labs, quizzes, etc.)
- Link Extraction: Extracts all external links from References sections
- Link Validation:
- Uses containerized
linkcheckerfor fast validation (when screenshots disabled) - Uses Selenium for validation with screenshots
- Supports parallel checking for faster execution
- Uses containerized
- Retry Round: Optionally retries failed links to catch transient errors
- Report Generation: Creates PDF (with screenshots) and JSON reports
- Jira Integration: Optionally creates prefilled Jira tickets with broken links, hyperlinks to course sections, and attached reports
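The parallel validation and retry rounds described above can be sketched with a thread pool. This is a simplified illustration, not the tool's actual linkchecker/Selenium backends; `check_fn` is a hypothetical injected probe so the retry logic can be exercised offline:

```python
# Sketch of parallel link validation with a retry round for transient
# failures. check_fn(url) -> bool stands in for the real backends
# (containerized linkchecker, or Selenium with screenshots).
from concurrent.futures import ThreadPoolExecutor

def check_links(urls, check_fn, retries=1, workers=8):
    """Return {url: ok} after an initial pass plus optional retry rounds."""
    results = {}

    def probe(url):
        return url, check_fn(url)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        results.update(pool.map(probe, urls))
    # Retry round: failed links get another chance before the final report.
    for _ in range(retries):
        failed = [u for u, ok in results.items() if not ok]
        if not failed:
            break
        with ThreadPoolExecutor(max_workers=workers) as pool:
            results.update(pool.map(probe, failed))
    return results
```

Injecting the probe function keeps the orchestration testable and mirrors the tool's ability to swap between fast and screenshot-capturing backends.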
Screenshot Organization:
Screenshots are organized in a hierarchical structure:
link_checker_screenshots/
└── 20251228_143022/ # Run timestamp
└── do280/ # Course name
└── 4.18/ # Version
└── Section_1.2/ # Section name
└── screenshot_*.png
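The hierarchy above could be assembled along these lines. The timestamp format matches the example tree, but the helper name and the section-name sanitization rule are assumptions, not the tool's actual code:

```python
# Hypothetical helper building the screenshot directory path shown above:
# base/<run timestamp>/<course>/<version>/<section>.
from datetime import datetime
from pathlib import Path

def screenshot_dir(base, course, version, section, now=None):
    """Return the per-section screenshot directory as a Path."""
    ts = (now or datetime.now()).strftime("%Y%m%d_%H%M%S")
    # Assumed sanitization: spaces become underscores, slashes dashes.
    safe_section = section.replace(" ", "_").replace("/", "-")
    return Path(base) / ts / course / version / safe_section
```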
Jira Ticket Features:
When --create-jira is enabled, the tool creates prefilled Jira tickets with:
- Course information and version
- Table of broken links with clickable section hyperlinks
- Links to search for existing tickets (prevents duplicates)
- Attached PDF and JSON reports
- Opens in a new window to avoid interfering with link checking
Excluded Sections:
The following section types are automatically excluded from checking:
- Sections containing "summary" in the title
- Sections containing "lab:", "guided exercise:", "quiz:" in the title
- "Comprehensive Review" sections
- "Preface" sections
- `--env, -e`: Specify environment (rol, factory, china)
- `--browser, -b`: Choose browser (firefox, chrome)
- `--headless`: Run in headless mode
- `--no-headless`: Run with browser visible
See CREDENTIALS.md for detailed information on setting up credentials.
The automation assists with login by autofilling credentials when configured:
- Username is autofilled when `RH_USERNAME` is set

- Create a `.env` file with your credentials:

  RH_USERNAME=your_username
  SNOW_INSTANCE_URL=
  SNOW_API_USER=your_snow_user
  SNOW_API_PASSWORD=your_snow_password

- Test your configuration:

  lx-tool config
  lx-tool snow test
The ServiceNow auto-assignment feature supports multiple teams with different configurations:
- T1 Team (`t1`): RHT Learner Experience
  - Handles standard support requests
  - Supports round-robin assignment
  - Automatic acknowledgment messages
  - Handles special cases (iqlaserpress.net emails)
- T2 Team (`t2`): RHT Learner Experience - T2
  - Handles feedback tickets
  - Automatic user name lookup via LMS API
  - Auto-resolution for specific reporters
  - Course information extraction
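The round-robin assignment supported for the T1 team can be sketched minimally. The assignee names here are hypothetical; the real tool reads assignees from the team configuration:

```python
# Minimal round-robin sketch: tickets are handed to team members in
# rotating order, wrapping back to the first member after the last.
from itertools import cycle

class RoundRobinAssigner:
    """Yield assignees in a repeating rotation."""
    def __init__(self, assignees):
        self._cycle = cycle(assignees)

    def next_assignee(self) -> str:
        return next(self._cycle)
```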
To add a new team, modify the `_load_team_configurations()` method in `lx_toolbox/core/servicenow_handler.py`:
teams["new_team"] = TeamConfig(
team_name="Your Team Name",
assignment_group_id="your_group_sys_id",
category="Your Category",
acknowledgment_template="Your template with {customer_name} and {assignee_name}",
# ... other configuration options
)

# Test ServiceNow and LMS connections
lx-tool snow test
# List unassigned tickets for a team
lx-tool snow list-tickets t1
lx-tool snow list-tickets t2 --limit 20
# Run single auto-assignment cycle
lx-tool snow assign t1
lx-tool snow assign t2 --assignee "John Doe"
# Run continuous auto-assignment
lx-tool snow assign t1 --continuous
lx-tool snow assign t2 --continuous --interval 120

lx-toolbox/
├── lx_toolbox/ # Main Python package
│ ├── core/ # Core business logic
│ │ ├── base_selenium_driver.py
│ │ ├── jira_handler.py
│ │ ├── lab_manager.py
│ │ ├── servicenow_autoassign.py
│ │ ├── servicenow_handler.py
│ │ └── snow_ai_processor.py
│ ├── utils/ # Utility modules
│ │ ├── config_manager.py
│ │ ├── course_resolver.py
│ │ ├── helpers.py
│ │ └── keyboard_handler.py
│ └── main.py # CLI entry point
├── config/ # Configuration files
│ └── config.ini
├── scripts/ # Wrapper scripts
│ └── lx-tool
├── playbooks/ # Legacy Ansible files (to be removed)
├── .env # Environment variables (credentials)
└── requirements.txt # Python dependencies
If you encounter "Username not configured":
- Ensure the `.env` file exists in the project root
- Check that variable names are correct (case-sensitive)
- Verify there are no spaces around `=` in the `.env` file
- Run `lx-tool config` to check the configuration
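The "no spaces around `=`" advice is easiest to see with a strict `.env` parser sketch. This is a hypothetical helper, not lx-toolbox's actual loader (which may be more forgiving):

```python
# Strict KEY=VALUE parser: spaces around '=' become part of the key or
# value, which is why lookups like env["RH_USERNAME"] then fail.
def parse_env(text: str) -> dict:
    """Parse .env-style text, skipping blank lines and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key] = value
    return env
```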
For SNOW AI Processor issues:
- Verify Ollama is running: `ollama list`
- Test the model directly: `ollama run ministral-3:8b "Hello"`
- Check that the `OLLAMA_COMMAND` path in `.env` is correct
- Ensure the model specified in `OLLAMA_MODEL` is downloaded
- For slow responses, consider a smaller model (e.g., `qwen3:8b`)
For ServiceNow issues:
- Test connections with `lx-tool snow test`
- Verify ServiceNow credentials and permissions
- Check that assignment group IDs are correct
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request