A US real estate platform needed to search 42 million properties instantly, plot results live on a Google Map that updated as users dragged or drew search boundaries, and generate complex property history reports that had previously taken over two minutes. We rebuilt the search architecture from scratch. Reports now run in under five seconds.
42 million property records isn't a large database by data warehouse standards. But searching it in real time — with geo-spatial filtering, full-text matching, and live map integration — is a problem that requires purpose-built architecture, not a faster SQL query.
The platform had three distinct performance problems. First, search was slow: complex property queries across 42 million records ran through conventional SQL, so users waited several seconds per search. Second, the map integration was static: results were plotted on a map, but the map didn't update as users panned, zoomed, or drew new boundaries. Third, the property history and sales comparison reports took over two minutes to generate, long enough that users assumed they were broken.
We rebuilt all three from the ground up. Elasticsearch replaced the SQL-based search layer. The Google Maps API was integrated with a live event-listener architecture so results refresh as the viewport changes. The report queries were rebuilt with optimised execution plans and a proper indexing strategy.
Performance at this scale requires every layer of the architecture to be purpose-designed for its specific job. No layer is generic.
SQL Server is the right database for transactional operations on property records — creates, updates, financial transactions, audit trails. It is not the right tool for geo-spatial full-text search across 42 million records in real time. That's exactly what Elasticsearch was built for.
We designed the Elasticsearch index specifically for the query patterns the platform uses — property type, location, price range, square footage, plus the geo-spatial component. The inverted index structure means these queries execute in milliseconds regardless of dataset size.
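To make that concrete, here is a minimal sketch of the kind of filtered query such an index serves. The field names (`property_type`, `price`, `sqft`, `location`) and the map-bounds shape are illustrative assumptions, not the platform's actual schema:

```python
def build_property_query(property_type, min_price, max_price,
                         min_sqft, map_bounds):
    """Build an Elasticsearch bool query combining the common search
    filters with a geo bounding box taken from the visible map area.
    Field names are illustrative, not the platform's real schema."""
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"property_type": property_type}},
                    {"range": {"price": {"gte": min_price, "lte": max_price}}},
                    {"range": {"sqft": {"gte": min_sqft}}},
                    {"geo_bounding_box": {
                        "location": {
                            "top_left": map_bounds["top_left"],
                            "bottom_right": map_bounds["bottom_right"],
                        }
                    }},
                ]
            }
        }
    }

query = build_property_query(
    "single_family", 250_000, 600_000, 1_500,
    {"top_left": {"lat": 40.9, "lon": -74.3},
     "bottom_right": {"lat": 40.5, "lon": -73.7}},
)
```

Putting every clause in the `filter` context rather than `must` matters here: filters skip relevance scoring and Elasticsearch can cache them, which is part of why queries like this stay in the millisecond range.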
The key architecture decision was incremental index updates — when a property record changes in SQL Server, only that record is updated in the Elasticsearch index. No full re-index required. The index stays current without any performance penalty on update operations.
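A hypothetical change handler shows the shape of that incremental update: when a row changes in SQL Server (via a trigger, change data capture, or an application-level hook), only that one document is pushed into the index. The index name, document fields, and record keys below are assumptions for illustration:

```python
def to_search_document(record):
    """Map a SQL Server property row onto the search index document.
    Field names are illustrative assumptions."""
    return {
        "property_type": record["type"],
        "price": record["list_price"],
        "sqft": record["square_footage"],
        "location": {"lat": record["lat"], "lon": record["lon"]},
    }

def on_property_changed(es_client, record):
    """Upsert a single document; because the document id is the stable
    property id, re-indexing the same record updates it in place and
    no full re-index is ever needed."""
    es_client.index(
        index="properties",
        id=record["property_id"],
        document=to_search_document(record),
    )
```

`es_client` stands in for an Elasticsearch client instance; the stable `id` is what turns repeated writes into in-place updates rather than duplicates.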
The property history and sales comparison reports produced exactly the same output before and after. What changed was everything underneath them.
The previous report implementation ran a series of complex SQL queries across multiple tables — joins across the 42M property table, the sales history table, and several lookup tables — without adequate indexing and with inefficient query structure. Execution time exceeded two minutes on typical property queries.
We audited the existing query execution plans in SQL Server, identified the bottlenecks (missing indexes, suboptimal join order, repeated table scans) and rebuilt the queries from scratch around efficient execution plans.
Where the report logic required aggregations that could be pre-computed (median sale prices by area, comparable property statistics), we implemented background pre-computation — the heavy calculation runs on a schedule, and the report reads from the pre-computed result rather than recalculating on demand.
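The split between the scheduled job and the report read path can be sketched as follows. The function names and the in-memory results store are illustrative assumptions; in production the pre-computed values would live in a results table:

```python
from collections import defaultdict
from statistics import median

precomputed_medians = {}   # stands in for a pre-computed results table

def refresh_area_medians(sales):
    """Scheduled job: recompute the median sale price for every area
    from the full sales history. This is the heavy calculation."""
    by_area = defaultdict(list)
    for sale in sales:
        by_area[sale["area"]].append(sale["price"])
    precomputed_medians.clear()
    precomputed_medians.update(
        {area: median(prices) for area, prices in by_area.items()}
    )

def report_median(area):
    """Report path: a cheap lookup against the pre-computed results
    instead of aggregating millions of rows on demand."""
    return precomputed_medians[area]
```

The report's cost no longer depends on the size of the sales history, only on how fresh the scheduled job keeps the pre-computed results.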