Case Study · QA Automation · Print SaaS

PrintPlanr: 3 Days of Manual QA to 6 Minutes

How Infomaze built an AI-powered Playwright test suite for its own cloud print MIS — and what that taught us about deploying QA automation for clients.

Playwright · AI Test Generation · GitHub Actions · Cloud Print MIS · Regression Automation
Outcomes delivered
3d → 6min · Full regression cycle time
4,800+ · Test cases run per day
96% · Core flow test coverage
0 · Regressions reaching production

The situation

PrintPlanr is Infomaze's own cloud print MIS — built in 2010 and now ranked in the global top 10 for cloud print management. The platform handles the complete production workflow: job creation, press scheduling, prepress, finishing, customer portal, invoicing, and dispatch. It also hosts five AI capabilities that are live in production.

When you build your own SaaS, you also own your own QA problem. For years, PrintPlanr releases were gated by a manual regression process. A senior QA engineer would spend three full days running through the critical user flows — job creation, press assignment, customer ordering, invoicing — across multiple browsers and customer configurations before any release was approved for production.

This wasn't unusual. It was how SaaS QA worked before modern automation tools. But as the platform grew — more modules, more integration points, more AI features — three days started becoming four. And the pressure to release more frequently was going in the opposite direction.

The challenges we faced

⏱️ 3-day regression blocking every release

PrintPlanr's production workflow has 40+ distinct flows across job management, press scheduling, customer portal, and invoicing. Running them manually meant any release required a full week — three days QA, then fixes, then confirmation testing.

🖨️ Print-specific complexity

Print workflows are non-trivial to test. Substrate types, press assignments, colour profiles, and finishing options create a matrix of combinations. The AI press recommendation engine added another layer — you needed to verify both the recommendation logic and the UI response.

🌐 Multi-browser, multi-customer configuration

PrintPlanr customers configure the platform differently — custom templates, different modules enabled, varied pricing structures. A feature that worked for one customer configuration could break silently for another.

🤖 AI features added new test surface

When we shipped the AI press recommendation engine, AI job coding, and the AI support chatbot, each feature added test scenarios that the existing manual process struggled to cover consistently.

The solution: Playwright + AI-generated tests

We chose Playwright for two reasons: it handles multi-browser testing natively (Chromium, Firefox, and WebKit from one suite), and it is the framework where, in our experience, AI code generation is currently most mature. The combination of Playwright's stability and AI-assisted test authoring was the key to building coverage fast.
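
Multi-browser coverage from one suite is a configuration-level feature in Playwright. A minimal sketch of the project setup (standard Playwright defaults, not our exact config file):

// playwright.config.ts — one suite, three browser projects, run from the same command
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
  ],
});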

Phase 1: Core flow coverage (weeks 1–3)

We started with the 12 flows that a release must never break — the business-critical paths that a PrintPlanr customer uses every day. Job creation through to invoice generation. Press assignment. Customer portal ordering. User authentication.

For each flow, we fed the user story and the existing manual test steps into an AI code generation prompt with our Playwright framework setup. The AI generated the test scaffold — correct selectors using data-testid attributes, proper async handling, assertion structure. Our engineers reviewed, refined edge cases, and committed. What would have taken a week of manual script writing per flow took one to two days.

Sample — press assignment flow (AI-generated, engineer-reviewed)
// US-118: Press Assignment — AI selects optimal press for job attributes
// Generated from user story + manual test steps by AI
// Reviewed and refined by Infomaze QA engineer

import { test, expect } from '@playwright/test';
import { loginAs, createJob } from '../helpers/printplanr';

test.describe('Press Assignment — AI Recommendation', () => {

  test.beforeEach(async ({ page }) => {
    await loginAs(page, 'production_manager');
  });

  test('CMYK A4 flyer routes to correct press type', async ({ page }) => {
    await createJob(page, {
      product: 'A4 Flyer', colourProfile: 'CMYK',
      substrate: '130gsm Gloss', qty: 5000
    });
    await page.click('[data-testid="get-ai-recommendation"]');

    // AI should recommend a B2 or larger press for 5K run
    const rec = page.locator('[data-testid="press-recommendation"]');
    await expect(rec).toBeVisible();
    await expect(rec.locator('.press-format'))
      .toContainText(/B[12]/);

    // Accept recommendation and verify job assigned
    await page.click('[data-testid="accept-recommendation"]');
    await expect(page.locator('.job-press-badge'))
      .not.toBeEmpty();
  });

  test('digital product routes away from litho press', async ({ page }) => {
    await createJob(page, {
      product: 'Business Cards', qty: 250, colourProfile: '4/4'
    });
    await page.click('[data-testid="get-ai-recommendation"]');
    const rec = page.locator('[data-testid="press-recommendation"]');
    await expect(rec.locator('.press-type'))
      .toContainText('Digital');
  });
});
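
The loginAs and createJob helpers imported above encapsulate shared setup. A rough sketch of their shape; the routes, selectors, and credential handling below are illustrative assumptions, not PrintPlanr's actual code:

// helpers/printplanr.ts — illustrative sketch only
import { Page } from '@playwright/test';

export async function loginAs(page: Page, role: string) {
  await page.goto('/login');
  // Hypothetical test-account naming and env-var credential; real handling differs
  await page.fill('[data-testid="username"]', `${role}@printplanr.test`);
  await page.fill('[data-testid="password"]', process.env.PP_TEST_PASSWORD ?? '');
  await page.click('[data-testid="login-submit"]');
  await page.waitForURL('**/dashboard');
}

export async function createJob(page: Page, attrs: Record<string, string | number>) {
  await page.goto('/jobs/new');
  // Assumes one data-testid-tagged field per job attribute
  for (const [field, value] of Object.entries(attrs)) {
    await page.fill(`[data-testid="job-${field}"]`, String(value));
  }
  await page.click('[data-testid="job-save"]');
}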

Phase 2: API and integration coverage (weeks 4–5)

PrintPlanr integrates with courier booking APIs, payment gateways, and customer-specific MIS integrations. Each integration point was a potential regression source. Using Playwright's built-in API testing layer, we built an integration test suite that runs alongside the UI tests — same command, same pipeline, same report.
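
Playwright's request fixture makes those integration checks first-class tests in the same suite. A minimal sketch, with a hypothetical endpoint path, payload, and response field rather than the real courier API:

import { test, expect } from '@playwright/test';

test('courier booking returns a tracking reference', async ({ request }) => {
  // POST to the booking endpoint (path and payload are illustrative)
  const res = await request.post('/api/dispatch/bookings', {
    data: { jobId: 'JOB-1042', service: 'next-day' },
  });
  expect(res.ok()).toBeTruthy();
  const body = await res.json();
  expect(body.trackingRef).toBeTruthy(); // field name assumed for illustration
});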

Phase 3: CI/CD integration and daily automation (week 6)

The complete suite was integrated into GitHub Actions. Every pull request triggers the full suite across Chromium, Firefox, and WebKit in parallel. A PR cannot be merged if any test is failing. The pipeline takes 6 minutes for the full regression — previously 3 days for manual QA.
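
The PR gate itself is a short workflow file. A trimmed sketch using common Playwright CI defaults, not our exact file:

name: regression
on: pull_request
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        browser: [chromium, firefox, webkit]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install --with-deps ${{ matrix.browser }}
      # Each browser runs as a parallel job; any failure blocks the merge
      - run: npx playwright test --project=${{ matrix.browser }}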

The unexpected benefit: Once the tests were running on every commit, we started finding bugs that had existed for months — edge cases in the press assignment logic, a race condition in the invoice generation flow, an accessibility issue in the customer portal that no manual tester had caught because it only appeared in Firefox. Automated testing doesn't just prevent future regressions. It uncovers existing ones.

Results

3d → 6min · Regression cycle
4,847 · Tests run per day
96% · Core flow coverage
70% · Test authoring time saved vs manual scripting
3 · Browsers tested simultaneously
0 · Regressions in production since deployment

What we learned from doing this on our own product

Building this for PrintPlanr — a product we know deeply — gave us something more valuable than a case study: a repeatable methodology we now deploy for clients. The lessons:

Start with business-critical flows, not full coverage. The temptation is to automate everything. The right move is to automate the 12 flows that cannot break — and do them properly — before expanding coverage.

Add data-testid attributes to your frontend before writing tests. Tests that rely on CSS selectors or text content break constantly as the UI evolves. Data test IDs are stable contracts between developers and QA.
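
In Playwright terms, the contract looks like this (a generic example, not an actual PrintPlanr test):

import { test } from '@playwright/test';

test('save job', async ({ page }) => {
  // Brittle: tied to styling classes and button copy
  await page.click('.btn-primary:has-text("Save job")');

  // Stable: survives UI refactors as long as the test ID stays
  await page.getByTestId('job-save').click();
});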

AI code generation is genuinely useful but not magic. The AI generates a strong scaffold — correct structure, proper async patterns, sensible assertions. The engineer still needs to review it, add the edge cases the AI didn't anticipate, and refine the assertions to match actual business rules. The saving is real; it's roughly 70% of the authoring time.
