What We Find When We Take Over Failed Projects | Infomaze
Infomaze Perspective · Project Rescue · Honest Account
What We Find When We Take Over Failed Projects
After 23 years and dozens of project rescues, patterns emerge. The code is rarely the whole problem. The relationship is usually part of it. The expectations almost always are. Here's what we actually find — and what determines whether a project is salvageable or needs to be rebuilt.
By Infomaze Engineering · 11 min read · Infomaze Perspective
Every project rescue starts with the same conversation. A client explains what went wrong — the previous agency disappeared, the freelancer stopped responding, the internal team ran out of capacity, the software was delivered but doesn't work. They want to know if it's fixable, how long it will take, and how much it will cost.
We've had this conversation dozens of times. And we've learned that the answer to all three questions depends almost entirely on what we find when we actually look at the project — not what we're told about it.
Here's what we actually find.
The eight patterns that appear in almost every failed project
01
No documentation. Not a single comment in the code.
The most common finding — and the one that multiplies the cost of every other problem. Code without documentation or comments requires the rescue team to reverse-engineer every decision. Why is this field stored as a string when it should be a number? Why does this function do three unrelated things? Why is there a hardcoded value in the middle of a calculation? Without documentation, every question requires investigation instead of a quick read. Crystal Networks — our most constrained rescue — had zero documentation. Week one was entirely mapping what the codebase did before we could identify what needed fixing.
Every Infomaze build ships with documentation as a deliverable. Not optional — standard.
02
No version control. Or version control that was never properly used.
Files on a single machine. FTP uploads directly to a live server with no development environment. A Git repository with one branch and one commit message that says "update." Version control exists to preserve history, enable rollback, and allow multiple people to work safely on the same codebase. When it's missing or unused, you can't see what changed, when, or why. You can't roll back a bad deploy. You can't safely make changes without risking what's already working. Some rescues have required us to reverse-engineer the history of a codebase from a live production server — there was nowhere else to look.
The Crystal Networks rescue was recovered entirely from a running production instance — the only copy of the code that existed.
03
The brief was never fully agreed — in writing.
Scope disputes are at the root of more failed projects than technical problems are. The client says "we agreed you'd build X." The development team says "that was never in scope." Both are genuinely right from their own memory of a conversation. Written, signed specifications that both parties agreed to before build started are the single most reliable prevention. When we do project autopsies, the absence of a written spec is almost always present. The client's mental model of what was agreed and the developer's mental model diverged — and neither party noticed until the deliverable arrived and looked wrong.
We produce a written specification with explicit client sign-off before any development begins. Not as bureaucracy — as protection for both parties.
04
No test environment — changes made directly to production.
A surprising number of projects are developed and maintained directly on a live production server. No staging environment, no local development setup, no testing before changes go live. This produces two failure modes: things that break silently because nobody tested the edge cases, and a team that becomes increasingly afraid to make changes because "last time we touched that it broke something else." The fear of making changes is one of the clearest signs that a codebase is in trouble — it means the team has lost confidence in what's safe to touch.
When we inherit a production-only codebase, establishing a proper development environment is the first thing we do before any other changes.
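One simple discipline that makes a separate environment stick is keeping all environment-specific values in configuration rather than in code. A minimal sketch, assuming invented names (`APP_ENV`, the settings keys, and the connection strings are all illustrative, not from any real rescue):

```python
import os

# Hypothetical settings table: the environment names, keys, and values
# here are placeholders for illustration only.
ENVIRONMENTS = {
    "development": {"db_url": "sqlite:///dev.db", "debug": True},
    "staging": {"db_url": "sqlite:///staging.db", "debug": True},
    "production": {"db_url": "sqlite:///prod.db", "debug": False},
}

def load_settings(env=None):
    """Resolve settings for the requested environment.

    Defaults to 'development' so a fresh checkout can never
    accidentally point at production.
    """
    name = env or os.environ.get("APP_ENV", "development")
    if name not in ENVIRONMENTS:
        raise ValueError("Unknown environment: %r" % name)
    return ENVIRONMENTS[name]

settings = load_settings()  # no APP_ENV set: development defaults
```

The detail that matters is the default: a misconfigured machine falls back to development, not production, so a mistake is inconvenient instead of destructive.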
05
The architecture was never designed for scale — or sometimes for anything.
Projects that started small and grew — or projects that started with the wrong architecture from the beginning. Database queries that fetch the entire table before filtering in application code. No caching layer, so every page load hits the database fresh. Business logic scattered across UI files, API controllers, database stored procedures, and scheduled jobs with no coherent separation. The application works at low load and degrades rapidly as usage grows. Sometimes the architecture is random enough that it's genuinely unclear why certain decisions were made — and the original developer isn't available to ask.
The real estate platform we rebuilt had reports that took 2+ minutes because of exactly this — no proper indexing, nested subqueries, no caching. Rebuilt from scratch: under 5 seconds.
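The fetch-everything-then-filter anti-pattern is easy to show concretely. A hedged sketch using an in-memory SQLite database (the table, column, and city names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE listings (id INTEGER PRIMARY KEY, city TEXT, price INTEGER)"
)
conn.executemany(
    "INSERT INTO listings (city, price) VALUES (?, ?)",
    [("Mysore", 100 + i) for i in range(1000)] + [("Bangalore", 500)],
)

# Anti-pattern: pull the entire table across the wire,
# then filter in application code.
all_rows = conn.execute("SELECT * FROM listings").fetchall()
matches_slow = [r for r in all_rows if r[1] == "Bangalore"]  # 1001 rows fetched to keep 1

# Fix: push the filter into SQL and give the database an index to use,
# so it reads only the rows that match.
conn.execute("CREATE INDEX idx_listings_city ON listings (city)")
matches_fast = conn.execute(
    "SELECT * FROM listings WHERE city = ?", ("Bangalore",)
).fetchall()

assert matches_slow == matches_fast
```

At a thousand rows both versions feel instant; at a few million, the first version is the two-minute report and the second is the five-second one.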
06
The budget ran out before the project was finished — and nobody said so.
A pattern particularly common with fixed-price engagements that were scoped too loosely. The budget is consumed, the development team runs out of commercial motivation to continue, but the project is declared "done" rather than "out of budget." The client receives something that looks finished but has critical gaps — missing validations, placeholder features that don't work, error states that weren't handled. The previous team has moved on. The client discovers the gaps when users find them. The project rescue begins not from a build failure but from a commercial failure dressed as a technical one.
We price projects with contingency built in and flag budget status at every milestone. A project that's running over budget should never be a surprise to the client.
07
Security was treated as optional.
SQL injection vulnerabilities that have been open for years. Passwords stored in plain text. API keys committed to version control. User sessions that don't expire. Admin panels accessible without authentication if you know the URL. Security problems are the most dangerous finding in a rescue because they're often invisible — the application functions normally, users get their work done, and the vulnerability sits quietly until it's exploited. We've found serious security issues in projects that had been in production for years without incident, simply because nobody had looked.
Security audit is a mandatory part of every project rescue. We won't deploy changes to an insecure codebase without disclosing the findings to the client first.
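Two of those findings — injectable queries and plaintext passwords — have well-known fixes that fit in a few lines. A minimal sketch using Python's standard library (the schema and user names are invented; a production system would typically use a dedicated password-hashing library on top of this):

```python
import hashlib
import hmac
import os
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash BLOB, salt BLOB)")

def create_user(name, password):
    # Never store the password itself: store a salted, slow hash.
    salt = os.urandom(16)
    pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    conn.execute("INSERT INTO users VALUES (?, ?, ?)", (name, pw_hash, salt))

def check_password(name, password):
    # Parameterised query: user input is bound as data, never spliced
    # into the SQL text, so "alice' OR '1'='1" is just a literal string.
    row = conn.execute(
        "SELECT pw_hash, salt FROM users WHERE name = ?", (name,)
    ).fetchone()
    if row is None:
        return False
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), row[1], 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, row[0])

create_user("alice", "s3cret")
```

The injection fix and the password fix are independent, but rescues tend to need both at once: codebases that concatenate SQL strings are usually the same ones storing passwords in plain text.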
08
The client's expectations are significantly higher than the brief.
Not the most common finding in the code — but the most common finding in the relationship. The client expected a finished product. What was scoped was an MVP. The client expected certain features to be included. They were post-MVP items nobody wrote down. The client expected the software to behave a certain way in edge cases nobody discussed. Managing expectations is as much a part of a successful project as writing good code. Many "failed projects" we rescue are technically complete — the deliverable matches the written spec. But the client received something different from what they imagined, and nobody closed that gap early enough.
We do explicit expectation-setting conversations at the start, at midpoints, and before delivery. The written spec is necessary but not sufficient.
The three questions that determine whether we rebuild or rescue
When we inherit a failed project, we have to answer three questions before we can give a client an honest recommendation. Many clients want to hear "yes, we can fix it" immediately. We don't answer that question until we know the answer to these three first.
Is the architecture fixable — or does it require a rebuild to be reliable?
Some architecture problems can be fixed incrementally — add a caching layer, refactor the worst query patterns, extract the business logic from the UI layer. Others are structural: if the data model is wrong, everything built on top of it is wrong too, and incremental fixes just add more layers over a bad foundation. We assess this question honestly — because the answer determines whether rescue is cheaper than rebuild or the reverse. A rebuild recommended honestly at the start is less expensive than a rescue that fails two years later.
Is there enough of the right code to save — or is starting fresh faster?
Sometimes the codebase has more salvageable components than it first appears. Authentication, database schema, core business logic — if these are sound, the rescue can build on them. Other times, the code is so tangled that extracting anything useful takes longer than writing it properly from scratch. We've recommended rebuilds when the existing codebase would have produced a slower, less reliable result at higher cost than starting clean. We've recommended rescues when the core was sound and the problems were in layers that could be replaced without touching what worked.
What caused the failure — and is that cause still present?
If a project failed because the previous development team was under-resourced, a new team solves that. If it failed because the scope was never properly defined, a rescue that doesn't start with scope definition will fail again. If it failed because the client's expectations were significantly higher than what was contracted, those expectations need to be reset before any new development starts. We ask this question directly — and we've declined projects where the root cause was something we couldn't fix from our side, because taking on a project that's going to fail again doesn't serve anyone.
How we approach the first two weeks of a rescue
// Our standard rescue triage — first two weeks
Days 1–3 — Archaeology
Map what exists
Read the entire codebase — structure, patterns, anomalies
Reverse-engineer database schema if undocumented
Identify all external dependencies and integrations
Establish what the application currently does vs what it should do
Days 4–5 — Triage
Classify what we found
Critical bugs — things that cause data loss or incorrect behaviour
Security vulnerabilities — disclose to client immediately
Architecture problems — structural vs fixable
Missing features vs incomplete features vs working features
Days 6–7 — Honest assessment
Tell the client what we found
Written report: what we found, what it means, what we recommend
Rescue vs rebuild recommendation with rationale
Honest scope for whichever path is recommended
Client decides with full information — not our filtered version
Week 2 — Stabilise
Fix what's critical first
Critical bug fixes — things actively causing harm
Set up proper development environment if missing
Establish version control if absent
Begin documentation of the current state
What clients expect vs what a rescue actually involves
What clients typically expect
Assessment in a day or two — "how hard can it be?"
Fixes available immediately once we understand the code
Most of what was built is salvageable
Cost similar to finishing what the previous team started
Timeline picking up where the previous team left off
Previous team's estimate was probably close to accurate
What a rescue actually involves
1–2 weeks minimum to map an undocumented codebase properly
Stabilisation before fixes — establish safe environment first
Often less salvageable than it looks from the outside
Rescue typically costs 40–80% of a clean build — sometimes more
Previous team's remaining estimate is usually optimistic by 2–3×
Previous team's estimate assumed their own knowledge — the new team starts from zero
The most expensive rescue is one where we discover three months in that the architecture needed a rebuild from day one. The cheapest rescue is one where we told the client that on day seven.
Warning signs you should look for before engaging a development team
Most failed projects show warning signs before they fail. Here are the ones that appear most consistently in our rescue clients' histories.
📋
No written specification was signed
A development engagement that starts without a written, signed specification agreed by both parties is missing the only reliable protection against scope disputes. If the development team resists writing a spec, that's the warning sign.
📊
No milestone-based progress reporting
If you're not receiving structured progress reports at defined milestones — not just "it's going well" conversations — you don't have visibility of where the project actually is. Problems that surface late are expensive. Problems surfaced early are manageable.
🔑
You don't have access to the code repository
If the development team is the only party with access to the codebase, you don't own the project — they do. All source code and infrastructure access should be in your name or accessible to you at any point. This is what makes the Crystal Networks scenario possible: the client had no copy of the code.
💸
Requests for additional budget without documented justification
Budget changes on a fixed-price project should come with a documented scope change explanation. "We underestimated" without explaining what was underestimated is a warning sign of either poor initial scoping or scope not being managed properly.
⏰
Repeated delays without root cause explanation
Delays happen. Projects that are delayed repeatedly without honest explanation of the root cause are usually hiding a more fundamental problem — scope that was underestimated, technical decisions that turned out to be wrong, or team capacity that was over-committed.
🔇
Communication becoming less frequent and less specific
Development teams in trouble tend to communicate less and in vaguer terms. If updates that were daily become weekly, and answers that were specific become vague, something has changed. The change is rarely good.
What makes a rescue succeed
We've completed rescues that produced platforms the client grew their business on — Crystal Networks eventually sold, successfully. We've also been honest enough to tell clients their project should be rebuilt rather than rescued, even when that wasn't what they wanted to hear.
The rescues that work have three things in common.
Honest assessment first. A rescue that starts with "yes we can fix it" before understanding what needs fixing is setting up a second failure. The first two weeks of any rescue should produce a written assessment that the client can take to another team if they choose. We'd rather lose the project than take on one that's going to fail again.
Scope discipline from day one. The same discipline that should have been applied to the original project needs to be applied to the rescue. What is the minimum viable rescue — what does the system need to do to be useful? What is deferred to post-rescue? What is left unfixed because it's not critical? The temptation to fix everything simultaneously produces a rescue that becomes as unmanageable as the project it's rescuing.
The client understands what they're paying for. Rescue work is slower than new development because every change requires understanding context that doesn't exist in documentation. Clients who understand this — and who don't measure rescue velocity against greenfield development velocity — allow the rescue team to work at the pace the work requires. Pressure to move faster than the codebase allows produces shortcuts that create the next rescue.
Project in trouble? We'll tell you honestly what we find.
Emergency response within 4 hours. We assess, report, and recommend — without obligation to proceed with us. ISO 27001. NDA before anything is shared.