Product Overview

Data Dictionary

Global Data Counts

About Mobile Location Data



Cross Account Bucket Access



Getting Started with Quadrant Mobile Location Data

A. Create a Database, Table and Partition

B. How To Run Basic Location Data Queries

i. Scale and Trend

ii. Depth

iii. Accuracy

C. How Geohash Works (Coming Soon)


All You Need To Know About Data Evaluation

A. Best Practices

B. SDK vs Bidstream Data

C. Data Evaluation using AWS Athena


Location Data Algorithms and Queries

A. Geo-fencing Query

B. Footfall Query

C. Nearest POI Model

D. Location Algorithms



Android Integration

iOS Integration

Integrate with Unity3D for Android

Integrate with Unity3D for iOS


About The Alliance

The Data

Use Cases & Data Science Algorithms

Access APAC Data Alliance Data with AWS S3



Quadrant analyses the quality of the location data we provide to our buyers; below are some of the steps we take to ensure it is of the highest quality possible for their particular use case. Quadrant sources location data only from SDKs, as IP address, bidstream, and cell tower triangulation data are not nearly as accurate. Peeling back the layers of location data to assess its overall quality means looking at a variety of key data metrics.

Let’s take a closer look at some of these metrics below:


DAU/MAU Ratio - One of the baseline metrics we look at when analysing location data for quality is the ratio of Daily Active Users (DAU) to Monthly Active Users (MAU). In a nutshell, this helps us approximate how consistently a panel (a group of mobile devices) appears over the course of a month. The higher the ratio, the better.
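As a rough illustration, the ratio can be computed from per-device daily sightings. The data shape below (a mapping of device IDs to the set of dates each device was seen) is a hypothetical sketch, not Quadrant's actual schema:

```python
from datetime import date

# Hypothetical daily sightings: device_id -> set of dates the device was seen.
sightings = {
    "device-a": {date(2023, 5, d) for d in range(1, 31)},  # seen every day
    "device-b": {date(2023, 5, 1), date(2023, 5, 15)},     # seen twice
    "device-c": {date(2023, 5, 10)},                       # seen once
}

def dau_mau_ratio(sightings, year, month, days_in_month):
    """Average daily active devices divided by monthly active devices."""
    days = [date(year, month, d) for d in range(1, days_in_month + 1)]
    # MAU: devices seen on at least one day of the month.
    mau = sum(1 for s in sightings.values() if any(d in s for d in days))
    # Average DAU: mean number of devices seen per day.
    avg_dau = sum(
        sum(1 for s in sightings.values() if d in s) for d in days
    ) / len(days)
    return avg_dau / mau if mau else 0.0

ratio = dau_mau_ratio(sightings, 2023, 5, 30)
```

A ratio near 1 would mean the average device in the panel is seen almost every day of the month.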

Data Completeness - The amount of data captured is dependent on a number of factors including device hardware, SDK collection methodology, user opt-in permission, etc. As such, one common issue seen with location data is incomplete or missing data fields. At Quadrant, we developed a metric known as “Data Completeness” (the percentage of each data attribute that contains verifiable data). This allows data buyers to quickly and easily assess the amount of missing data points in each attribute.
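A minimal sketch of such a completeness check, assuming records arrive as dictionaries with `None` marking a missing value (the field names here are illustrative, not Quadrant's actual attributes):

```python
# Hypothetical raw records; None marks a missing field.
records = [
    {"device_id": "a", "latitude": 1.30, "longitude": 103.8, "ip_address": None},
    {"device_id": "b", "latitude": 1.35, "longitude": None,  "ip_address": "10.0.0.1"},
    {"device_id": "c", "latitude": None, "longitude": 103.9, "ip_address": None},
]

def completeness(records):
    """Percentage of non-missing values for each attribute."""
    fields = records[0].keys()
    return {
        f: 100.0 * sum(1 for r in records if r[f] is not None) / len(records)
        for f in fields
    }

report = completeness(records)
```

Here `report["device_id"]` would be 100.0 while `ip_address` would score roughly 33.3, flagging it as a sparsely populated attribute.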

Horizontal Accuracy - Another key metric we always consider is Horizontal Accuracy (HA). For GPS data, a Horizontal Accuracy of 10 meters or below is generally considered very good; in fact, we tend to reject data sources with high HA values. In our Data Quality Dashboard, this metric is visualised as a histogram. It’s worth noting that HA can vary with a user’s environment and weather conditions. For example, in certain built-up areas or if there is bad weather, readings can be less accurate. Conversely, clear skies and an open line of sight to satellites will likely result in better HA.
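Such a histogram can be sketched by bucketing readings into accuracy bands; the bucket edges below are illustrative choices, not the dashboard's actual bins:

```python
from collections import Counter

# Hypothetical horizontal-accuracy readings in meters.
ha_meters = [3.2, 7.8, 9.9, 12.5, 24.0, 55.0, 4.1, 101.0]

# Illustrative bucket edges (meters); lower values = better accuracy.
edges = [10, 25, 50, 100]

def bucket(value):
    """Assign a reading to the first band whose edge it does not exceed."""
    for edge in edges:
        if value <= edge:
            return f"<= {edge} m"
    return f"> {edges[-1]} m"

histogram = Counter(bucket(v) for v in ha_meters)
```

With these sample readings, half fall in the "very good" band of 10 meters or below, and the long tail above 100 meters would argue for rejecting the source.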

Days Seen Per Month - Days Seen Per Month is a metric that gets even more granular than DAU/MAU. It enables us to see the distribution of devices over a given period of time; we start by evaluating the number of days each device is seen over the course of the month.
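One way to sketch this, assuming a hypothetical list of (device, day-of-month) sighting events:

```python
from collections import Counter

# Hypothetical sighting events: (device_id, day_of_month).
events = [
    ("a", 1), ("a", 2), ("a", 2), ("a", 15),
    ("b", 3), ("b", 4),
    ("c", 9),
]

# Distinct days each device was seen this month.
days_seen = {dev: len({day for d, day in events if d == dev})
             for dev in {d for d, _ in events}}

# Distribution: how many devices were seen on exactly N distinct days.
distribution = Counter(days_seen.values())
```

The same distinct-count pattern extends to Hours Seen Per Day by substituting hours within a day for days within a month.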


Hours Seen Per Day - Like Days Seen Per Month, the number of Hours Seen Per Day is, for most use cases, more valuable when it is higher. This should be intuitive: a higher count means we are recording a more complete picture of a user’s daily activity in terms of where they are located on an hour-by-hour basis.


Evidence of Bidstream Data - The final check we will share is essential to ensuring a high-quality, consistent data set. We remove all data from providers that show evidence of bidstream data by looking for the following three red flags:


  - Lack of movement: This tends to be an indicator of low-quality location data, whereas high-quality data shows lots of movement.
  - “Kansas farm” (and other similar phenomena): A large number of devices at the same coordinate, beyond what can reasonably be expected, is always a red flag.
  - Teleportation: By this we mean the same device appearing in multiple countries or regions within the same 24-hour period.
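The teleportation check, for instance, can be sketched as follows; the ping tuples and the 24-hour window are illustrative assumptions, not Quadrant's actual pipeline:

```python
from datetime import datetime

# Hypothetical pings: (device_id, country, timestamp).
pings = [
    ("dev-1", "SG", datetime(2023, 5, 1, 8, 0)),
    ("dev-1", "US", datetime(2023, 5, 1, 14, 0)),  # two countries within 24 h
    ("dev-2", "SG", datetime(2023, 5, 1, 8, 0)),
    ("dev-2", "SG", datetime(2023, 5, 2, 9, 0)),
]

def teleporting_devices(pings, window_hours=24):
    """Flag devices seen in more than one country within the time window."""
    flagged = set()
    seen = {}  # device_id -> list of (country, timestamp)
    for dev, country, ts in sorted(pings, key=lambda p: p[2]):
        for prev_country, prev_ts in seen.get(dev, []):
            if (country != prev_country
                    and (ts - prev_ts).total_seconds() <= window_hours * 3600):
                flagged.add(dev)
        seen.setdefault(dev, []).append((country, ts))
    return flagged

suspects = teleporting_devices(pings)
```

In this sample, only `dev-1` is flagged: it appears in two countries six hours apart, while `dev-2` stays in one country.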