Case Study
🏗️ Custom Application Development

42 Million Properties.
Search Results in Under One Second.

A US real estate platform needed to search 42 million properties instantly, plot results live on a Google Map that updated as users dragged or drew search boundaries, and generate complex property history reports that had previously taken over two minutes. We rebuilt the search architecture from scratch. Reports now run in under five seconds.

REAL ESTATE PLATFORM · SEARCH · LIVE
42M properties indexed · <1s search results · <5s reports

SEARCH CAPABILITIES — ALL LIVE
Full-text + geo-spatial search · Elasticsearch
Google Maps live results on drag & pan
Polygon draw — circle or freehand area search
Property history report · <5 seconds

BEFORE vs AFTER
Property report — previous team: >2 minutes · Infomaze rebuild: <5 seconds
ELASTICSEARCH + GOOGLE MAPS API
Sub-second geo-spatial search across 42M records with live map results that update on every user interaction.
42M
Properties searchable — full-text and geo-spatial — results returned in under 1 second
<1s
Search result delivery — Elasticsearch geo-spatial indexing across the full property dataset
2min → <5s
Property history report rebuild — 24× faster than the previous implementation.
Live
Map updates on drag, pan, and polygon draw — results always reflect the current map view
Client: Real Estate Platform (US)
Industry: Real Estate · PropTech
Service: Custom Application Development
Stack: Elasticsearch · Google Maps API · .NET · SQL Server
Scale: 42 million property records
— The Challenge

Search at a scale that breaks conventional database queries.

A dataset of 42 million property records isn't large by data warehouse standards. But searching it in real time — with geo-spatial filtering, full-text matching, and live map integration — is a problem that requires purpose-built architecture, not a faster SQL query.

The platform had three distinct performance problems. First: search results were slow — running complex property queries across 42 million records using conventional database queries meant users waited several seconds per search. Second: the map integration was static — search results were plotted on a map, but the map didn't update as users interacted with it. Third: the property history and sales comparison reports were taking over two minutes to generate — long enough that users assumed they were broken.

We rebuilt all three from the ground up. Elasticsearch replaced the SQL-based search layer. Google Maps API was integrated with a live event listener architecture. The report queries were rebuilt with optimised execution plans and proper indexing strategy.

// Before — the problems
Search queries running 3–8 seconds on SQL Server
Map static — didn't update as user panned or zoomed
No draw-to-search capability
Property history reports taking >2 minutes
No geo-spatial search — only text and filter-based
// After — what we delivered
Elasticsearch geo-spatial search — sub-1-second results
Live map — results update on every drag, pan, zoom
Circle and freehand polygon draw-to-search
Property history reports in under 5 seconds
Sales comparison report rebuilt from scratch

— The Numbers

The before and after that matters most.

Before — Property Report

2min
Previous implementation — complex SQL query on multi-table property database. Long enough that users assumed the feature was broken.

After — Infomaze Rebuild

<5sec
Rebuilt from scratch — optimised query execution plan, proper indexing, data pre-computation where possible. 24× faster.

Before — Property Search

3–8sec
SQL Server queries across 42M records — full table scans on complex filter combinations. Unusable for real-time search.

After — Elasticsearch

<1sec
Elasticsearch geo-spatial index — optimised for the exact query patterns the platform uses. Sub-second on every search.

— The Architecture

Four layers. Each solving a specific performance problem.

Performance at this scale requires every layer of the architecture to be purpose-designed for its specific job. No layer is generic.

Search Layer
Elasticsearch · Geo-spatial index · Full-text + filter · Incremental index updates · Sub-1s across 42M records
Map Integration
Google Maps API · Viewport-bound search queries · Event listener on drag/pan/zoom · Polygon & circle draw · Pin clustering for density
Report Engine
Optimised SQL execution plans · Proper index strategy · Pre-computed aggregations · Sales comparison logic · Property history in <5s
Data Layer
42M property records · Incremental ES index sync · No full re-index on updates · .NET application layer · SQL Server primary store
— Map Search Capabilities

Three ways to define a search area — all updating results live.

🖱️

Drag & Pan

As the user moves the map, search results update automatically to show properties in the current viewport. No button to press. No page reload. Results always reflect what's visible on screen.
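The wiring behind this behaviour can be sketched in a few lines. This is a minimal illustration, not the platform's actual code: it assumes the standard Maps JavaScript API `idle` event (which fires once panning or zooming settles) and an Elasticsearch `geo_bounding_box` filter on an assumed `location` field; `search` is a placeholder for the application's query call.

```typescript
// Map viewport → search filter. Coordinate names follow the Maps API
// convention ({ lat, lng }); field name "location" is an assumption.
type LatLng = { lat: number; lng: number };

// The viewport's NE/SW corners map onto the geo_bounding_box corners:
// top_left = (ne.lat, sw.lng), bottom_right = (sw.lat, ne.lng).
export function viewportFilter(ne: LatLng, sw: LatLng) {
  return {
    geo_bounding_box: {
      location: {
        top_left: { lat: ne.lat, lon: sw.lng },
        bottom_right: { lat: sw.lat, lon: ne.lng },
      },
    },
  };
}

// Debounce so a continuous drag fires one query, not dozens.
export function debounce<T extends unknown[]>(fn: (...args: T) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

// In the browser, the 'idle' event is the natural hook for "results
// always reflect what's visible" (sketch only — `search` is assumed):
//
//   map.addListener("idle", debounce(() => {
//     const b = map.getBounds()!;
//     search(viewportFilter(b.getNorthEast().toJSON(), b.getSouthWest().toJSON()));
//   }, 250));
```

The debounce is the detail that makes "no button to press" practical: without it, every frame of a drag would trigger a query.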

⭕
Circle Draw

Draw a circle on the map to define a radius search area. All properties within the defined radius returned in under a second. Useful for "within X miles of this point" searches.
✏️

Freehand Polygon

Draw any shape on the map to define an irregular search boundary — a neighbourhood, a school catchment area, a specific block. Elasticsearch geo-polygon query returns results within the drawn boundary.
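The two draw modes map onto two standard Elasticsearch geo filters. The fragments below are a sketch under assumptions — the `location` field name is illustrative, and the platform's real query layer is not shown in this case study:

```typescript
type LatLng = { lat: number; lon: number };

// Circle draw → geo_distance: everything within radiusMiles of a centre
// point ("within X miles of this point" searches).
export function circleFilter(center: LatLng, radiusMiles: number) {
  return {
    geo_distance: {
      distance: `${radiusMiles}mi`, // Elasticsearch accepts "mi" units
      location: center,
    },
  };
}

// Freehand polygon → geo_polygon: everything inside the drawn boundary,
// expressed as the list of vertices the user traced.
export function polygonFilter(points: LatLng[]) {
  return {
    geo_polygon: {
      location: { points },
    },
  };
}
```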

— The Search Layer

Why Elasticsearch — and how we implemented it.

SQL Server is the right database for transactional operations on property records — creates, updates, financial transactions, audit trails. It is not the right tool for geo-spatial full-text search across 42 million records in real time. That's exactly what Elasticsearch was built for.

We designed the Elasticsearch index specifically for the query patterns the platform uses — property type, location, price range, square footage, plus the geo-spatial component. The inverted index structure means these queries execute in milliseconds regardless of dataset size.
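An index mapping shaped around those query patterns might look like the sketch below. Field names are illustrative, not the platform's actual schema — the point is that each field type is chosen for the filter it serves:

```typescript
// Hypothetical mapping for the property index: one geo_point for all
// geo queries, keyword/numeric fields for exact and range filters,
// text for full-text matching.
export const propertyMapping = {
  mappings: {
    properties: {
      location: { type: "geo_point" },   // bounding-box, distance, polygon queries
      property_type: { type: "keyword" }, // exact-match filter
      price: { type: "integer" },        // range filter
      sqft: { type: "integer" },         // range filter
      description: { type: "text" },     // full-text search
    },
  },
};
```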

The key architecture decision was incremental index updates — when a property record changes in SQL Server, only that record is updated in the Elasticsearch index. No full re-index required. The index stays current without any performance penalty on update operations.
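The shape of that incremental sync is simple: one changed row produces one single-document write against the standard Elasticsearch document API. The sketch below shows only the request shape — index name, change capture, and HTTP client are all assumptions:

```typescript
// One SQL Server row changed → one document upserted. PUT /{index}/_doc/{id}
// replaces a single document; nothing else in the index is touched.
export function indexUpdateRequest(propertyId: string, doc: Record<string, unknown>) {
  return {
    method: "PUT",
    path: `/properties/_doc/${encodeURIComponent(propertyId)}`,
    body: JSON.stringify(doc),
  };
}
```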

// How the geo-spatial search works
1. User drags map — browser fires viewport bounds event (NE/SW coordinates)
2. Application sends geo-bounding-box query to Elasticsearch with filter parameters
3. Elasticsearch returns matching property IDs and coordinates in <1 second
4. Google Maps renders pins at returned coordinates — clustered at high zoom-out levels
5. Sidebar property list updates simultaneously from same result set
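The request body sent at step 2 can be sketched as one function — viewport corners plus filter parameters in, an Elasticsearch bool-filter query out. Field names and the result cap are illustrative assumptions:

```typescript
type LatLng = { lat: number; lon: number };

export function searchBody(
  ne: LatLng,
  sw: LatLng,
  filters: { property_type?: string; min_price?: number; max_price?: number },
) {
  // Step 1's viewport bounds become the geo_bounding_box clause.
  const clauses: object[] = [
    {
      geo_bounding_box: {
        location: {
          top_left: { lat: ne.lat, lon: sw.lon },
          bottom_right: { lat: sw.lat, lon: ne.lon },
        },
      },
    },
  ];
  // Optional user filters join the same bool filter — all cached,
  // non-scoring clauses in Elasticsearch.
  if (filters.property_type)
    clauses.push({ term: { property_type: filters.property_type } });
  if (filters.min_price !== undefined || filters.max_price !== undefined)
    clauses.push({ range: { price: { gte: filters.min_price, lte: filters.max_price } } });

  // Step 3 returns only what the map and sidebar need: IDs + coordinates.
  return { _source: ["location"], size: 500, query: { bool: { filter: clauses } } };
}
```

Using `filter` rather than `must` keeps every clause non-scoring and cacheable, which is what you want when the query only selects pins to plot.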
✦ The indexing decision that made it work
Index design determines search performance. Not hardware.
The query patterns are known before the index is designed. We modelled the exact filter combinations users make and designed the Elasticsearch index to execute those patterns optimally. Throwing more servers at a poorly designed index produces a faster-failing system, not a faster search.

— The Report Rebuild

From two minutes to five seconds — without changing the report output.

The property history and sales comparison reports produced exactly the same output before and after. What changed was everything underneath them.

The previous report implementation ran a series of complex SQL queries across multiple tables — joins across the 42M property table, the sales history table, and several lookup tables — without adequate indexing and with inefficient query structure. Execution time exceeded two minutes on typical property queries.

We audited the existing query execution plans using SQL Server's query analyser, identified the bottlenecks — missing indexes, suboptimal join order, repeated table scans — and rebuilt the queries from scratch with proper execution plan optimisation.

Where the report logic required aggregations that could be pre-computed (median sale prices by area, comparable property statistics), we implemented background pre-computation — the heavy calculation runs on a schedule, and the report reads from the pre-computed result rather than recalculating on demand.
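The read path after that change can be sketched as follows — a minimal illustration, assuming an in-memory store and invented names (`areaStats`, `refresh`); the real platform would back this with a table refreshed by the scheduled job:

```typescript
// Report reads hit a pre-computed result, not a live aggregation over 42M rows.
type AreaStats = { medianSalePrice: number; comparableCount: number; computedAt: Date };

const precomputed = new Map<string, AreaStats>(); // populated by a background job

export function areaStats(areaId: string): AreaStats {
  const hit = precomputed.get(areaId);
  if (hit) return hit; // normal path: O(1) lookup, no recalculation per request
  // Cold area — the slow path the schedule exists to avoid.
  throw new Error(`stats for ${areaId} not yet computed`);
}

export function refresh(areaId: string, stats: AreaStats) {
  precomputed.set(areaId, stats); // called on schedule by the heavy calculation
}
```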

What we found in the original queries
Missing indexes on frequently-joined columns
Nested subqueries forcing repeated full table scans
Join order not matching cardinality — large-to-small instead of small-to-large
Aggregations computed on every report request with no caching
What we did differently
Composite indexes designed for the specific query patterns
Query rebuilt with CTEs and correct join order
Pre-computed aggregations refreshed on schedule
Execution plans validated before deployment

— Results

The numbers that summarise the project.

42M
Properties searchable in real time via Elasticsearch
<1s
Search results — including geo-spatial filtering and full-text
24×
Faster property reports — from 2+ minutes to under 5 seconds
3
Search modes — drag, circle draw, freehand polygon
— Related Services & Resources
🏗️
Build New Applications
Custom platform development
🔌
API Integration
Google Maps and third-party APIs
⚙️
Case Study: ElementIQ
20-year ERP evolution
💰
Case Study: QuickZ
Automated lending platform
🏠
All Dev Services
Custom application overview
🏗️ Custom Development
Discuss a Similar Project
Tell us about your performance or search challenge — we'll map the right architecture.
🔒 ISO 27001 · NDA before any details shared · No spam

