When I joined Temeda in 2019, my first assignment was to optimize the application's initial load time for a new, very large client. The core entity in the application was the Asset, and this customer had nearly 30,000 of them. Assets were dropped as points on a map using the Google Maps library and also loaded into several data tables. The application was not built to handle that much data during initialization, so it took nearly 60 seconds to bootstrap. Here are the steps I took to resolve this.

Split the load process

During initialization, the application loaded ALL data for that client before becoming responsive. Up to that point, this approach had worked well: caching all of the data on the local device kept the app snappy. With this client's volume of data, however, the application could not load effectively. I theorized that I could split the initialization process into two phases:

  1. The first phase would be to grab the average lat, long, and asset count for a given map segment.
  2. The second phase would asynchronously load the remaining asset data until it was fully cached.
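The two phases above can be sketched roughly as follows. This is a minimal illustration, not the real Temeda code; `fetchClusterPoints` and `cacheRemainingAssets` are hypothetical names standing in for the actual API calls.

```javascript
// Phase 1 (stub): return averaged points per map segment so the map
// can render immediately. The real data comes from a new endpoint.
async function fetchClusterPoints(bounds) {
  return [{ lat: 41.8, lon: -87.6, count: 1200 }];
}

// Phase 2 (stub): page through the full asset list in the background
// until everything is cached locally.
async function cacheRemainingAssets(totalCount, batchSize) {
  let cached = 0;
  while (cached < totalCount) {
    cached = Math.min(cached + batchSize, totalCount); // one batch
  }
  return cached;
}

async function initialize(bounds) {
  // Phase 1 makes the map usable right away.
  const clusters = await fetchClusterPoints(bounds);
  // Phase 2 completes afterward without blocking the UI.
  const cached = await cacheRemainingAssets(30000, 5000);
  return { clusters, cached };
}
```

The key design point is that the app only waits on phase 1 before rendering; phase 2 resolves later and simply flips loading indicators off as data arrives.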

Phase 1: Loading average asset data

Work on this phase involved making a number of changes to both the AngularJS client and the ASP.NET WebAPI backend. In the backend, I extended one of the several initialization API endpoints to also return the total number of assets that user had access to. If that count exceeded a threshold, the client loaded differently: instead of making the API calls needed to return ALL asset data, it hit a new endpoint that returned averaged data for a given number of map segments.

POST /api/clusterpoints
Content-Type: application/json

{
	"lat1": 0000,
	"lon1": 0000,
	"lat2": 0000,
	"lon2": 0000,
	"rows": 4,
	"cols": 4
}

The request above (albeit faked) would be used to calculate 16 segments (4x4) on the map between the two sets of coordinates given. I used these to craft a SQL query that returned only the assets whose lat/long fell within the overall coordinates. Then, using the coordinates of each of the 16 map segments, I could count the assets in each segment and calculate their average lat/long, returning a total of 16 records in a JSON structure that the client would use to render on the map.
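The segment-averaging step can be sketched in plain JavaScript. This is illustrative grid math under assumed field names (`lat`, `lon`), not the production SQL; the real work happened server-side against the filtered result set.

```javascript
// Bucket assets into a rows x cols grid over the bounding box, then
// return each non-empty segment's asset count and average position.
function clusterPoints(assets, { lat1, lon1, lat2, lon2, rows, cols }) {
  const latStep = (lat2 - lat1) / rows;
  const lonStep = (lon2 - lon1) / cols;
  // One accumulator per segment: running coordinate sums and a count.
  const cells = Array.from({ length: rows * cols }, () => ({
    latSum: 0, lonSum: 0, count: 0,
  }));

  for (const a of assets) {
    // Clamp so points on the far edge land in the last row/column.
    const r = Math.min(rows - 1, Math.floor((a.lat - lat1) / latStep));
    const c = Math.min(cols - 1, Math.floor((a.lon - lon1) / lonStep));
    const cell = cells[r * cols + c];
    cell.latSum += a.lat;
    cell.lonSum += a.lon;
    cell.count += 1;
  }

  return cells
    .filter((c) => c.count > 0)
    .map((c) => ({
      lat: c.latSum / c.count,
      lon: c.lonSum / c.count,
      count: c.count,
    }));
}
```

With `rows: 4, cols: 4` this yields at most 16 records, one per populated segment, which is cheap to render no matter how many assets sit behind each point.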

Even after the app finished initializing, this same logic was used whenever the map was zoomed out beyond a given point, keeping the application rendering data quickly.

Phase 2: Asynchronously cache data

The application was responsive after the first phase, but since many other components of the application assumed the data was fully loaded before they would function properly, those components showed loading indicators while the caching process completed. The init logic was updated to take the total number of assets and load them in batches of 5,000 in parallel using JavaScript promises. This also required building a new API endpoint that supported pagination, so the data could be requested from SQL Server in pages.
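The batched parallel loading can be sketched as below. The 5,000-asset batch size comes from the text; `fetchAssetPage` is a hypothetical stand-in for the paginated endpoint (stubbed here so the sketch is self-contained).

```javascript
const BATCH_SIZE = 5000;

// Stub standing in for the paginated API endpoint; in reality this
// would be an HTTP call carrying an offset and a page size.
async function fetchAssetPage(offset, limit) {
  const total = 12500; // pretend this client has 12,500 assets
  const count = Math.max(0, Math.min(limit, total - offset));
  return Array.from({ length: count }, (_, i) => ({ id: offset + i }));
}

// Fire one request per batch in parallel, then flatten the pages
// into the fully cached asset list.
async function cacheAllAssets(totalCount) {
  const requests = [];
  for (let offset = 0; offset < totalCount; offset += BATCH_SIZE) {
    requests.push(fetchAssetPage(offset, BATCH_SIZE));
  }
  const pages = await Promise.all(requests);
  return pages.flat();
}
```

Because the total asset count is known from phase 1, the client can compute every page offset up front and issue all the requests at once with `Promise.all`, rather than paging serially.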