ACTIVE DEPLOYMENT
Case Study · QA Automation · Enterprise SaaS

2,000+ Automated Tests
Replacing 3 Weeks
of Manual QA.

An enterprise service management platform of comparable scope to ServiceNow — tickets, workflows, SLA management, multi-tenancy, approval chains. Manual testing at this complexity was blocking every release. We're changing that.

Playwright · Cypress · AI Test Generation · Enterprise SaaS · Multi-Tenant · Active Deployment
Target outcomes — in progress
Regression cycle target: 3wk → hrs
Test scenarios being built: 2,000+
Coverage achieved to date: 68%
Bugs found in first sprint: 42

The platform — and why QA at this scale is different

This client operates an enterprise service management platform serving large organisations across multiple industries. The platform manages IT and facilities service tickets, workflow automation, SLA tracking, approval chains, asset management, and reporting — comparable in scope and complexity to ServiceNow, built as a custom platform.

Custom-built enterprise platforms at this scale have a QA problem that off-the-shelf tools don't solve. The platform carries tenant-specific configuration — workflows, SLA rules, and approval logic differ for every enterprise customer. A feature that works under one tenant's configuration can break silently under another's, so every change has to be verified against multiple configurations. That combinatorial load is what manual processes cannot keep up with.

Before this engagement, a full regression cycle took three weeks. Given that the platform releases every two weeks, the team was always catching up — testing the last release while the next one was already being built. Bugs were reaching production regularly.

The complexity we're testing against

Multi-tenant configuration matrix

Each enterprise customer has different workflow rules, SLA thresholds, approval chains, and field configurations. Tests must verify behaviour across representative tenant configurations, not just the default setup.
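As a sketch of the parameterisation pattern, the matrix below expands a list of representative tenant configurations into one test descriptor per tenant-and-scenario pair. Tenant names, fields, and scenarios here are illustrative placeholders, not the client's real configurations:

```typescript
// Illustrative tenant configurations — field names are assumptions.
interface TenantConfig {
  name: string;
  slaHours: number;       // SLA threshold for high-priority tickets
  approvalLevels: number; // depth of the approval chain
}

const tenants: TenantConfig[] = [
  { name: "default", slaHours: 4, approvalLevels: 1 },
  { name: "acme-manufacturing", slaHours: 2, approvalLevels: 3 },
  { name: "northwind-gov", slaHours: 8, approvalLevels: 2 },
];

const scenarios = ["create-ticket", "breach-sla", "approve-request"];

// One test case per tenant × scenario combination.
function buildMatrix(tenants: TenantConfig[], scenarios: string[]) {
  return tenants.flatMap((tenant) =>
    scenarios.map((scenario) => ({
      title: `[${tenant.name}] ${scenario}`,
      tenant,
      scenario,
    }))
  );
}

// In a Playwright suite, the same loop registers real tests:
//   for (const c of buildMatrix(tenants, scenarios)) {
//     test(c.title, async ({ page }) => { /* run c.scenario as c.tenant */ });
//   }
console.log(buildMatrix(tenants, scenarios).length); // 9
```

Because the matrix is data-driven, adding a new representative tenant extends every scenario automatically rather than requiring hand-written duplicates.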

Complex approval workflows

Approval chains with conditional routing — based on ticket type, priority, cost threshold, and organisational hierarchy. Each path is a test scenario. AI generates the permutations from the workflow specification.
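To show the shape of that permutation work, here is a minimal sketch: a conditional routing function plus an enumeration of every type, priority, and cost-band combination. The routing rules and names below are invented for illustration — the real scenarios are generated from the client's workflow specification:

```typescript
// Illustrative routing spec — rules and approver names are assumptions.
type Priority = "P1" | "P2" | "P3";

interface Ticket {
  type: "incident" | "change" | "purchase";
  priority: Priority;
  cost: number;
}

// Conditional routing: each matching branch appends an approver.
function approvalChain(t: Ticket): string[] {
  const chain: string[] = ["line-manager"];
  if (t.type === "change") chain.push("change-advisory-board");
  if (t.type === "purchase" && t.cost > 10_000) chain.push("finance-director");
  if (t.priority === "P1") chain.push("service-owner");
  return chain;
}

// Enumerate every type × priority × cost-band combination; each
// permutation is one test scenario with its expected approval chain.
const types = ["incident", "change", "purchase"] as const;
const priorities: Priority[] = ["P1", "P2", "P3"];
const costBands = [500, 50_000];

const permutations = types.flatMap((type) =>
  priorities.flatMap((priority) =>
    costBands.map((cost) => ({
      type, priority, cost,
      expected: approvalChain({ type, priority, cost }),
    }))
  )
);
console.log(permutations.length); // 18
```

Each generated permutation becomes an end-to-end test that drives a ticket through the UI and asserts the observed approval chain matches `expected`.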

SLA timing and escalation logic

SLA breach detection, escalation triggers, and notification logic are all time-dependent. Playwright's clock mocking makes this tractable — without it, these tests would have to wait on real timers.
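In the real suite this relies on Playwright's clock API (`page.clock.install` to pin the time, `page.clock.fastForward` to jump past the SLA window). The framework-free sketch below shows the same pattern with an injectable fake clock; the thresholds and field names are illustrative:

```typescript
// Framework-free sketch of time-dependent SLA testing. Playwright's
// page.clock.install / page.clock.fastForward follow the same idea:
// inject a controllable clock instead of waiting on real timers.
class FakeClock {
  constructor(public now: number) {}
  fastForward(ms: number) { this.now += ms; }
}

interface SlaTicket {
  openedAt: number; // epoch ms
  slaHours: number; // breach threshold
  escalated: boolean;
}

// The breach check under test: escalate once the SLA window elapses.
function checkSla(ticket: SlaTicket, clock: FakeClock): SlaTicket {
  const deadline = ticket.openedAt + ticket.slaHours * 3_600_000;
  return clock.now >= deadline ? { ...ticket, escalated: true } : ticket;
}

const clock = new FakeClock(Date.parse("2024-01-15T09:00:00Z"));
let ticket: SlaTicket = { openedAt: clock.now, slaHours: 4, escalated: false };

ticket = checkSla(ticket, clock);          // 09:00 — within SLA
clock.fastForward(4 * 3_600_000 + 60_000); // jump 4h01m instantly
ticket = checkSla(ticket, clock);          // 13:01 — breached
console.log(ticket.escalated); // true
```

A four-hour breach scenario runs in milliseconds instead of four hours, which is what makes full SLA coverage feasible in a CI pipeline.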

Role-based access control

Six distinct user roles with different permissions across 40+ platform features. Every role × feature combination is a test case. AI generated the full RBAC test matrix from the permissions specification document in under an hour.
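The expansion itself is mechanical once the permissions specification is machine-readable. A minimal sketch, with placeholder roles, features, and rules standing in for the client's actual specification document:

```typescript
// Placeholder roles and features — the real matrix is derived from the
// client's permissions specification, not these illustrative values.
const roles = ["admin", "agent", "approver", "requester", "auditor", "guest"];
const features = ["create-ticket", "close-ticket", "edit-sla", "view-reports"];

// Permission lookup: which roles may use which feature (invented rules).
const allowed: Record<string, string[]> = {
  "create-ticket": ["admin", "agent", "requester"],
  "close-ticket": ["admin", "agent"],
  "edit-sla": ["admin"],
  "view-reports": ["admin", "auditor", "approver"],
};

// Every role × feature pair becomes one test case with an expected
// outcome — "allow" paths and "deny" paths are both asserted.
const rbacMatrix = roles.flatMap((role) =>
  features.map((feature) => ({
    role,
    feature,
    expect: allowed[feature].includes(role) ? "allow" : "deny",
  }))
);
console.log(rbacMatrix.length); // 24
```

Testing the deny paths matters as much as the allow paths: a role that can see a feature it shouldn't is exactly the kind of bug manual spot-checks miss.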

The approach

We chose a dual-framework approach: Playwright for end-to-end workflow and integration tests, Cypress for component-level testing of the React front-end. Both suites integrate into the same GitHub Actions pipeline and report into the same dashboard.

01

Test infrastructure setup and CI/CD integration

✓ Complete
Playwright and Cypress configured, GitHub Actions pipeline built, test environments provisioned (staging, UAT, demo tenants). All infrastructure deployed before a single test case was written.
3 isolated test environments with representative tenant configurations
Parallel execution across Chromium, Firefox, and WebKit
Test result dashboard with Allure reporting integrated with Jira
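A `playwright.config.ts` for this kind of parallel three-browser setup might look like the sketch below. Project names, the retry count, and the Allure reporter wiring (via the `allure-playwright` plugin) are assumptions, not the client's actual configuration:

```typescript
// Sketch of a Playwright config for parallel cross-browser runs.
// Requires @playwright/test; reporter settings are illustrative.
import { defineConfig, devices } from "@playwright/test";

export default defineConfig({
  fullyParallel: true, // spread test files across worker processes
  retries: 1,          // one retry to absorb environment flake in CI
  projects: [
    { name: "chromium", use: { ...devices["Desktop Chrome"] } },
    { name: "firefox",  use: { ...devices["Desktop Firefox"] } },
    { name: "webkit",   use: { ...devices["Desktop Safari"] } },
  ],
  // Allure output feeds the Jira-integrated dashboard.
  reporter: [["line"], ["allure-playwright"]],
});
```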
02

Critical path coverage — ticket lifecycle and SLA management

✓ Complete
The core ticket lifecycle — creation through to resolution — and the SLA monitoring logic that wraps it. 340 test cases covering every ticket type, priority level, and SLA configuration.
340 test cases covering full ticket lifecycle
SLA timer manipulation tests — Playwright mocks time to test breach logic without waiting
42 bugs identified in first sprint, 38 resolved before production deployment
03

Approval workflow and RBAC automation

In Progress
AI-generated test matrix from the permissions specification and workflow documentation. Complex conditional approval chains being tested across 6 user roles and 8 workflow types.
AI generated 480 test scenarios from the RBAC specification document
Conditional workflow routing tested across all approval path permutations
Currently at 68% coverage — target 85% by end of sprint
04

Multi-tenant configuration testing and API coverage

Upcoming
Parameterised tests that run against all representative tenant configurations simultaneously. API integration coverage for the 60+ REST endpoints. Performance benchmarks added to the pipeline.

Progress to date

Test coverage by module
Ticket lifecycle: 94%
SLA management: 91%
User authentication: 100%
Approval workflows: 68%
RBAC / permissions: 58%
API endpoints: 24%
42 bugs found in the first sprint of testing. The first automated run against this manually tested codebase surfaced 42 issues — not regressions introduced recently, but edge cases that had sat in the code for months without being caught. This is the pattern we see consistently: automated testing doesn't just prevent future problems, it surfaces existing ones that manual testing missed.

What this will look like when complete

When the full suite is deployed, every commit to the main branch will trigger 2,000+ test cases across the full platform. Running in parallel, the entire suite will complete in under 15 minutes. The three-week manual regression cycle will no longer exist: releases will ship on a two-week cadence with continuous regression assurance rather than a three-week QA gate.

The client's QA engineers will shift from running manual regression scripts to maintaining and expanding the automated suite, analysing failure patterns, and building new scenario coverage as features are added. The work becomes higher-quality and more meaningful.
