Most MVPs aren't minimal. They're feature-complete products with "MVP" written on the brief. They take twice as long, cost twice as much, and launch into a market that's already moved. Here's the framework we use, and the discipline that makes it work.
By Infomaze Product Engineering · 10 min read · Build vs Buy Series
We've built 15+ SaaS products, including our own. We've scoped, built, launched, and iterated on MVPs across print management, field service, lending, property search, telecom inventory, and small business job management. And we've seen the pattern that kills most MVP projects before they reach customers.
The pattern is scope expansion: the slow accumulation of features that each seem essential and collectively produce a product that takes 12 months to build when it should have taken 16 weeks.
Here's what we've learned about preventing it.
The over-building pattern: how it always starts the same way
The scope expansion sequence is almost universal:

1. The scope is agreed: the core workflow, and nothing else.
2. "We need user roles; otherwise the sales team will see what the ops team sees." Added. 14 weeks.
3. "Can we add email notifications? Users need to know when something changes." Added. 15 weeks.
4. "The CEO wants a reporting dashboard for launch. Just basic KPIs." Added. 17 weeks.
5. "We should have an API so we can integrate with our CRM later." Added. 19 weeks.
6. "Actually the mobile experience needs to be better; users will mostly be on phones." Major rework. 24 weeks.
7. Launch. The market has moved. The feedback that would have shaped features 2-6 never arrived because the MVP never shipped to get it.
Every feature added in steps 2-6 was reasonable in isolation. Collectively they doubled the timeline and eliminated the entire value of launching early, which is to learn from real users before building more.
What MVP actually means, and what it doesn't
MVP: the definition that actually guides decisions
Minimum: The least you can build while still delivering the core value proposition. Not the least you'd be comfortable showing. Not the least that has all the features you think users want. The least that puts the core workflow in front of real users.
Viable: A real user can complete a real task and get real value from it. Not a demo. Not a prototype that requires hand-holding. Something that works, reliably, for the specific workflow it was designed for.
Product: Designed and architected to scale. Not a prototype that will be thrown away. The architecture decisions made in the MVP determine how expensive every subsequent feature is. MVP architecture matters.
An MVP with bad architecture is a liability โ you rebuild it at exactly the moment customers are asking you to add features.
The scoping discipline: three columns
When we scope an MVP with a client, every proposed feature goes into one of three columns. The column it goes into is determined by one question: can a user complete the core workflow without this feature?
In MVP
Core workflow โ the thing the product exists to do
Authentication โ users need to log in
Data that the core workflow depends on
Error states that would leave users stuck
The one thing that makes this different from alternatives
Not in MVP
Advanced reporting and analytics
Admin panels for managing users at scale
API access for third-party integrations
Advanced notification systems
Anything someone prefaced with "eventually we'll need..."
Post-MVP
Features users ask for after launch
Performance optimisations under real load
Integrations with other systems
Role-based access and permissions
Everything in the "out" column that turned out to matter
The discipline is in the first column. Every feature proposed for the MVP gets the same question: can a user complete the core workflow without this? If yes, it goes in the second or third column. No exceptions made for "but it would only take a day" or "users will definitely expect this."
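The triage rule is simple enough to state as code. The sketch below is illustrative only: the function, its parameters, and the example features are hypothetical, but the gating question and the three column names come straight from this section.

```python
# Illustrative sketch of the three-column triage. The single gating question
# and the column names are from the article; everything else is hypothetical.

def triage(core_workflow_blocked_without_it: bool,
           requested_by_real_users_after_launch: bool = False) -> str:
    """Assign a proposed feature to one of the three scoping columns."""
    if core_workflow_blocked_without_it:
        return "In MVP"      # users cannot complete the core workflow without it
    if requested_by_real_users_after_launch:
        return "Post-MVP"    # evidence from real users, built after launch
    return "Not in MVP"      # reasonable, but not necessary to ship

# Hypothetical examples of the question being applied:
backlog = {
    "authentication": triage(core_workflow_blocked_without_it=True),
    "advanced reporting": triage(core_workflow_blocked_without_it=False),
    "crm integration users asked for": triage(
        core_workflow_blocked_without_it=False,
        requested_by_real_users_after_launch=True),
}
```

Note that "but it would only take a day" doesn't appear as a parameter: estimated effort never changes the column. Only the answer to the question does.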
Five tests for every proposed MVP feature
1. Can a user complete the core workflow without this feature?
If yes, it's post-MVP. This is the primary test. A user who can't complete the core workflow without the feature has an MVP that doesn't work. A user who can complete it without the feature has a feature that might be nice but isn't necessary.
2. Is this for the user or for us?
Admin panels, analytics dashboards, audit logs, and bulk data management tools are things the team building the product needs, not things users need to get value from it. They're important, but they're not MVP features. Build them in V1.1.
3. Are you assuming users will want this, or do you know?
"Users will definitely want notifications" is an assumption. "Three users in our pilot said they needed notifications to do their job" is evidence. The entire point of shipping an MVP is to replace assumptions with evidence. Don't build features based on assumptions you could test after launch.
4. What's the cost of launching without it vs the cost of adding it post-launch?
Some features are genuinely cheaper to include in the initial build than to add later (things that touch the data model or the core architecture). Most features are not; they can be added cleanly after launch. Know the difference before including something "because it'll be cheaper to do now."
5. What is this feature delaying?
Every feature added to MVP scope delays launch. Make that delay explicit: "Adding this feature delays launch by 2 weeks. In those 2 weeks, we won't be getting feedback from real users. Is this feature more valuable than 2 weeks of real user data?" Usually the answer is no.
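Test 5 is the one that benefits most from being stated in numbers. A hypothetical helper like the one below (the function name and message wording are ours, not from any real tool) forces the delay into the conversation:

```python
# Hypothetical helper for test 5: phrase a proposed MVP feature
# as the launch delay it creates.

def delay_tradeoff(feature: str, build_weeks: float, launch_week: float) -> str:
    """Make the cost of a pre-launch feature explicit in weeks of lost feedback."""
    new_launch = launch_week + build_weeks
    return (f"Adding '{feature}' delays launch from week {launch_week:g} "
            f"to week {new_launch:g}: {build_weeks:g} week(s) in which no "
            f"real user feedback is collected.")

msg = delay_tradeoff("reporting dashboard", build_weeks=2, launch_week=14)
print(msg)
```

"Is this feature worth more than two weeks of real user data?" is a different, and usually harder, question than "do we want this feature?"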
Architecture is not optional, even in an MVP
The one thing we never cut from MVP scope is architecture. The multi-tenant data model. The authentication layer designed to support SSO later. The API structure that allows integrations to be added without rearchitecting. The deployment pipeline that supports zero-downtime updates.
These aren't features; users don't experience them directly. But they determine the cost of every feature built after the MVP. Getting them wrong in the MVP means rebuilding them at the moment you're trying to grow.
Always in MVP: architecture
Multi-tenant data model if it's a SaaS platform
Authentication and session management designed for scale
API structure that doesn't require breaking changes to extend
Zero-downtime deployment capability
Error logging and basic monitoring
Billing infrastructure hooks even if billing isn't live yet
Post-MVP: features
Self-service billing and subscription management UI
Advanced role-based access control
Analytics and reporting dashboards
Public API documentation and developer portal
Advanced notification and webhook systems
White-label and custom branding options
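The "always in MVP" architecture items are concrete, not abstract. "Multi-tenant data model", for instance, mostly means that every domain table carries a tenant key from day one. The sketch below is an assumption-laden illustration (SQLite for brevity, with table and column names we invented, not taken from any Infomaze product):

```python
# Minimal sketch of a multi-tenant data model. Table and column
# names are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tenants (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
-- Every domain table carries tenant_id from V1, so row-level isolation
-- exists even while there is only one customer.
CREATE TABLE jobs (
    id        INTEGER PRIMARY KEY,
    tenant_id INTEGER NOT NULL REFERENCES tenants(id),
    title     TEXT NOT NULL,
    status    TEXT NOT NULL DEFAULT 'open'
);
CREATE INDEX idx_jobs_tenant ON jobs(tenant_id);
""")
conn.execute("INSERT INTO tenants (id, name) VALUES (1, 'Acme Plumbing'), (2, 'Beta Print')")
conn.execute("INSERT INTO jobs (tenant_id, title) VALUES (1, 'Fix boiler'), (2, 'Print run')")

# Every query is scoped by tenant: the habit the MVP architecture enforces.
acme_jobs = conn.execute(
    "SELECT title FROM jobs WHERE tenant_id = ?", (1,)).fetchall()
```

Retrofitting that tenant_id onto a single-tenant schema later means touching every table and every query at once, which is why it sits in the "always" column.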
From our own products: what V1 actually looked like
SkedPlanr
Three screens. That's it.
V1 had: job creation, job list, job details with status update. A plumber could create a job, track it, and mark it complete. Everything else (quoting, invoicing, customer communication) came after the core workflow was validated with real users.
The architecture was multi-tenant and billing-ready from day one. The features weren't.
PrintPlanr
Job creation and basic production tracking.
V1 solved the core problem: a print job could be created, assigned to a press, tracked through production, and marked complete. No press recommendation engine. No AI job coding. No customer portal. Those came from watching what real users needed after launch.
15 years of features were built from feedback. None were assumed at V1.
SystemTask (now ElementIQ)
Work order creation and technician assignment.
The original system did two things: create work orders and assign them to technicians. Everything that ElementIQ is today โ full ERP, inventory, billing, reporting โ grew from those two core functions over 20 years of real use by real clients.
20 years of evolution started with two functions. Start simple.
FieldPlanr
Job management and technician mobile app.
V1: field service businesses could create jobs and dispatch technicians, and technicians could update status from the mobile app. Customer portal, reporting, automated invoicing: all came later from what customers said they needed.
The mobile app was in V1 because it was core. The customer portal wasn't.
The conversation we have that most teams don't
At every MVP scoping session, we ask one question that most teams avoid: "What is the minimum version of this product that would embarrass you to show customers, but would still let them do the thing it's designed for?"
The answer to that question is closer to the right MVP scope than any feature list will get you. The discomfort of showing something minimal is exactly what drives over-building. Getting comfortable with shipping something that works but isn't finished is the discipline that separates teams that ship from teams that are always two months from launch.
The other conversation we have: "What do we need to learn from the first 50 users that we can't learn without shipping?" The answer to this question tells you what the MVP has to do, and makes it easier to cut everything it doesn't need to do.
The honest summary
MVP scope is kept minimal by discipline, not by default. The default is over-building, because every feature seems important when you're close to the product, and because shipping something minimal feels uncomfortable.
The discipline is applying the same question to every proposed feature ("can a user complete the core workflow without this?") and being willing to accept the answer when it's yes.
The architecture exception is real: don't cut the things that determine the cost of every future feature. Multi-tenant data model, clean API structure, deployment infrastructure: these belong in every MVP. Features that users haven't asked for yet don't.
The payoff is real too: a product that ships in 14 weeks and gets 8 weeks of real user feedback before V1.1 is built will produce better V1.1 features than a product that spent those 8 weeks building features nobody asked for.
Ready to scope your MVP and actually ship it?
Free product workshop: we run the scoping session, apply the discipline, and give you a realistic scope and timeline before any commitment.