When Atlantic LNG came to us, the requirement was clear: executive and departmental dashboards pulling together operational and financial data from multiple systems. The challenge was equally clear: IT security policy did not permit direct connectivity from any BI tool to any production database.

In our experience, this is the point where most BI engagements either stall or produce a compromise nobody is satisfied with — a limited dashboard that only shows what's accessible, or a drawn-out negotiation with IT that delays the project by months.

We did neither. We designed an architecture that delivered everything the business needed without touching a single production database. Here's how we thought about it and what we built.


The constraint that looked like a blocker

The standard BI implementation approach is straightforward: connect Power BI (or Tableau, or Qlik) directly to the databases, build a data model, build dashboards. When direct connectivity isn't available, that path is closed.

What most teams miss is that the goal was never "connect Power BI to a database." The goal was "give leadership a reliable, unified view of the business." The database connection is just one way to achieve that — and when it's unavailable, the question becomes: what other ways exist to get clean, current data from these systems into a BI-readable form?

The constraints on day one

  • Direct Power BI connection to any production database — not permitted by IT security policy
  • Multiple source systems across different platforms and technical environments
  • Some data existing only in manual processes — no digital source at all
  • Different data owners across departments with varying technical capability
  • No API access to any of the source systems

Five constraints. Any one of them could have stalled the project. Together they forced us to think about the problem differently — which, in retrospect, produced a better architecture than a straight database connection would have.


The reframe that unlocked the solution

The insight was simple once stated: the BI tool doesn't need to know where the data came from — it only needs the data to be clean, current, and in a format it can read.

That reframe separated the ingestion problem from the visualisation problem. Power BI doesn't care whether its data model was populated from a direct SQL connection or from a staged SharePoint file. If the data arrives correctly validated and structured, the dashboards work identically.

So the question became: how do we get clean, current data from each source system into a structured location that Power BI can read — without any of those systems needing to grant direct database access?

The answer was two ingestion streams, each designed for a different type of data source.


Two streams. One data model.

01

Excel staging exports

  • Each source system's data owner exports structured data to a defined Excel or CSV template on a schedule they control
  • Files deposited to a controlled SharePoint folder — consistent path, consistent format
  • Power BI reads from SharePoint staging — never from the source system directly
  • Format validation on every load — column structure, required fields, data type checks
  • Invalid files flagged immediately — operations team alerted before bad data reaches dashboards
  • Historical data preserved through append-only staging — trend analysis available from day one
02

PowerApps capture

  • For data with no digital source at all — existing only in people's heads or paper processes — we built structured Microsoft PowerApps forms
  • Operational staff enter data through the PowerApp — works on desktop and mobile
  • Submissions write automatically to SharePoint Lists and Microsoft Dataverse
  • Power BI reads from Dataverse — structured, validated, consistent
  • No manual consolidation — data goes from form submission to dashboard-readable in one step
  • Audit trail built in — every submission timestamped and attributed
📊
Both streams feed one Power BI data model. The dashboards read from the model — not from either stream directly. Executive and departmental dashboards have no visibility of how the data arrived. They see clean, validated, consistent data.
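To make the validation gate in the first stream concrete, here is a minimal sketch of the kind of check that runs on every staged file before it reaches the data model. The column names and template shape are illustrative assumptions, not Atlantic LNG's actual templates; in production this logic would live wherever the load runs (Power Query, a flow, or a small script).

```python
import csv
import io

# Hypothetical template for one staging file. Real templates are defined
# per source system; these names are purely illustrative.
EXPECTED_COLUMNS = ["date", "unit", "throughput_tonnes", "downtime_hours"]
REQUIRED_FIELDS = {"date", "unit", "throughput_tonnes"}

def validate_staging_file(raw_text: str) -> list[str]:
    """Return a list of problems; an empty list means the file passes."""
    problems = []
    reader = csv.DictReader(io.StringIO(raw_text))
    # Column-structure check: the header must match the template exactly.
    if reader.fieldnames != EXPECTED_COLUMNS:
        problems.append(f"column structure mismatch: {reader.fieldnames}")
        return problems  # structure is wrong; row checks would be meaningless
    for i, row in enumerate(reader, start=2):  # row 1 is the header
        # Required-field check: flag empty values the dashboards depend on.
        for fld in sorted(REQUIRED_FIELDS):
            if not row[fld].strip():
                problems.append(f"row {i}: required field '{fld}' is empty")
        # Data-type check: throughput must be numeric.
        try:
            float(row["throughput_tonnes"])
        except ValueError:
            problems.append(f"row {i}: throughput_tonnes is not numeric")
    return problems
```

A file that passes returns an empty list and is allowed into staging; anything else is flagged to the operations team before a dashboard ever sees it.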

Why Excel staging is better than it sounds

When we describe this approach to clients, the initial reaction is sometimes scepticism. Excel exports sound like a workaround — less rigorous than a "proper" database connection. In practice, structured Excel staging through a controlled SharePoint location has several genuine advantages over direct database connectivity.

  • Analytical load separated from transactional load. The source databases are never touched by analytical queries. They run at full transactional performance regardless of how many dashboards are open or how many users are refreshing simultaneously.
  • Data quality validation at ingestion. Every file is validated before it enters the data model. A direct database connection takes whatever is in the database — a staging layer can reject data that doesn't meet format and quality standards before it reaches any dashboard.
  • Historical data preserved automatically. Append-only staging means every load adds to the history. Even if the source system only keeps 90 days of data, the staging layer has everything since the pipeline started.
  • Export process creates awareness. When data owners export on a schedule, they see the data they're responsible for. In several cases, this surface-level visibility caught data quality issues in the source system that had been invisible before.
Excel staging isn't a workaround. It's a deliberate architectural choice that separates analytical and transactional workloads, validates data at ingestion, and preserves history. In some environments it's genuinely preferable to direct query.
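The append-only mechanic is simple enough to show in a few lines. This is a sketch under assumed conditions (a flat CSV staging file and already-validated rows); the point it illustrates is that each load is stamped and appended, never overwritten, so history accumulates even when the source system keeps only a rolling window.

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

def append_to_staging(staging_path: Path, rows: list[dict]) -> None:
    """Append validated rows to the staging file, stamping each with the
    load time. Nothing is overwritten, so trend history accumulates even
    if the source system only retains, say, 90 days of data."""
    loaded_at = datetime.now(timezone.utc).isoformat()
    new_file = not staging_path.exists()
    fieldnames = list(rows[0].keys()) + ["loaded_at"]
    with staging_path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        if new_file:
            writer.writeheader()  # header only on the very first load
        for row in rows:
            writer.writerow({**row, "loaded_at": loaded_at})
```

Every subsequent load adds rows under the same header, and the `loaded_at` stamp lets the data model distinguish loads and deduplicate if a file is ever delivered twice.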

The PowerApps decision — digitising data that didn't exist yet

The second stream solved a different problem. Some of the data Atlantic LNG needed in their dashboards didn't exist in any digital system. It lived in the heads of operational staff, in paper forms, in email exchanges, in spreadsheets that sat on individual laptops and never fed into anything central.

The conventional response to this is to say "we can't dashboard data that doesn't exist" and move on. We built the mechanism to make it exist.

Microsoft PowerApps is a low-code form builder that writes directly to Microsoft Dataverse — a structured data store that Power BI connects to natively. We built PowerApps forms for each data category that had no digital source: structured, validated, required fields enforced, submission confirmation, timestamped records. Operational staff completed them on desktop or mobile. The data went immediately into Dataverse. Power BI read from Dataverse.

Three things made this work well in practice:

  • The forms were genuinely simple. We designed each form around exactly the data the dashboard needed — not a comprehensive data capture exercise, just the specific fields required for the specific metrics leadership wanted to see. Short forms get completed. Comprehensive forms get abandoned.
  • Mobile support mattered. Operational staff in an oil and gas environment are often not at a desk. A mobile-accessible form they could complete from their phone or tablet removed the friction that would have killed adoption.
  • Validation was built in. Required fields, data type validation, dropdown menus for categorical choices — the forms prevented the kind of data entry inconsistency that makes manual data useless for analytics.
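The shape of a submission coming out of those forms can be sketched as a small validated record. The field names and categorical choices here are hypothetical, chosen only to show the pattern: required fields enforced, categoricals restricted to a fixed set, and every record timestamped and attributed for the audit trail.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative categorical choices. In the real forms these were dropdown
# menus, which prevent free-text inconsistency at the point of entry.
SHIFT_CHOICES = {"day", "night"}

@dataclass(frozen=True)
class FormSubmission:
    """One validated, attributed record — roughly the shape a form
    submission takes before landing in Dataverse (fields hypothetical)."""
    submitted_by: str   # attribution, for the audit trail
    shift: str          # categorical: must come from SHIFT_CHOICES
    reading: float      # the operational value leadership wants to see
    submitted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def __post_init__(self):
        if not self.submitted_by.strip():
            raise ValueError("submitted_by is required")
        if self.shift not in SHIFT_CHOICES:
            raise ValueError(f"shift must be one of {sorted(SHIFT_CHOICES)}")
```

An invalid submission never becomes a record at all, which is exactly the property that makes manually captured data usable for analytics downstream.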

What we learned — and why this is now a blueprint

01

The constraint shaped a better solution

Not having direct database access forced us to design an ingestion architecture that was more robust than a direct connection would have been. Data quality validation, analytical/transactional separation, and historical preservation all came from the constraint, not despite it.
02

The BI tool's job is visualisation — not ingestion

Power BI's job is to turn a clean data model into useful dashboards. How the data model gets populated is a separate problem. Conflating them leads to the assumption that "no DB access = no BI." They're independent concerns.
03

Non-digital data can be digitised at the point of collection

PowerApps as a data capture layer for operational data that has no existing digital form is a pattern that applies well beyond this project. It's now part of our standard toolkit for constrained environments.
04

IT security constraints often produce better architecture

Direct database connectivity is convenient but carries risk — analytical queries on production systems, potential for BI tools to access data they shouldn't. The staging architecture is more compliant by design, not just by necessity.

Where this approach applies beyond Atlantic LNG

We've described this as the "Atlantic LNG approach" because that's where we refined it. But the same pattern applies well beyond the oil and gas industry.

🏥

Healthcare — clinical system access restrictions

Patient data systems often prohibit direct BI connectivity. Structured export pipelines with proper data governance deliver the analytics without the compliance risk.
🏛️

Government and regulated industries

IT security policies in public sector organisations frequently restrict direct database access. The staging approach satisfies security requirements while delivering full analytical capability.
🏭

Manufacturing — legacy OT systems

Operational technology systems on factory floors often have no API and no safe way to connect directly. Scheduled exports from SCADA or MES systems to a staging layer are the standard approach.
💼

Businesses using third-party SaaS

When your data lives in a SaaS platform that doesn't expose direct database access (only an API or export), a staging architecture built on the export mechanism is often more reliable than API polling.
🔗
Multi-subsidiary organisations

When subsidiaries run different systems that won't grant group-level database access, structured export from each subsidiary into a group-level staging layer delivers consolidated analytics.
📋

Any environment where "no direct access" seems like a blocker

If a previous BI attempt stalled because of connectivity constraints, the staging approach is almost always the solution that was missing.

The honest summary

The assumption that BI requires direct database access is wrong. It requires clean, current, structured data in a place the BI tool can read. How that data gets there is an engineering problem with multiple solutions — direct connection being only one of them.

The Atlantic LNG project taught us to start every constrained BI engagement by asking: "What data exchange mechanisms do these systems support?" rather than "Can we connect directly?" The answer to the second question is sometimes no. The answer to the first question almost always reveals a path forward.

If your BI project has stalled because direct database access isn't available — or if IT security policy is blocking the conventional approach — we've built the alternative. It works. And in several ways, it's better.

✦ The replicable principle
Separate the ingestion problem from the visualisation problem. Solve them independently.
The BI tool's job is to turn a clean, consistent data model into useful dashboards. The ingestion pipeline's job is to create that clean, consistent data model from whatever sources are available, using whatever connectivity those sources permit. When you hold those two jobs clearly separate, "no direct database access" stops being a blocker and becomes an ingestion engineering problem — which has multiple solutions.
Facing similar connectivity constraints?
Free consultation — we map your data sources, assess what access is available, and design an ingestion approach that works within your constraints. No obligation.