Chrome User Experience Report

The Chrome User Experience Report is a public dataset of key user experience metrics for popular origins on the web, as experienced by Chrome users under real-world conditions.

Methodology

The Chrome User Experience Report is powered by real user measurement of key user experience metrics across the public web, aggregated from users who have opted in to syncing their browsing history, have not set up a Sync passphrase, and have usage statistic reporting enabled. The resulting metrics are aggregated by origin and split across the dimensions outlined below.

Metrics

Metrics provided by the Chrome User Experience Report are powered by standard web platform APIs exposed by modern browsers and aggregated to origin-level resolution. Site owners who want more detailed (URL-level) analysis and insight into their site performance can use the same APIs to gather detailed real user measurement (RUM) data for their own origins.

For guidance on which metrics to track and optimize for, and for best practices on interpreting real user measurement data, refer to our user-centric performance documentation.

First Paint

Defined by the Paint Timing API and available in Chrome M60+:

“First Paint reports the time when the browser first rendered after navigation. This excludes the default background paint, but includes non-default background paint. This is the first key moment developers care about in page load – when the browser has started to render the page.”

First Contentful Paint

Defined by the Paint Timing API and available in Chrome M60+:

“First Contentful Paint reports the time when the browser first rendered any text, image (including background images), non-white canvas or SVG. This includes text with pending webfonts. This is the first time users could start consuming page content.”
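As a minimal sketch of gathering these paint metrics with the same web platform API, the following TypeScript registers a PerformanceObserver for Paint Timing entries; the sendToAnalytics reporting hook is a hypothetical placeholder, not part of any API:

  // Hypothetical reporting hook; replace with your own analytics endpoint.
  function sendToAnalytics(metric: string, value: number): void {
    console.log(metric, value);
  }

  // Observe Paint Timing entries (Chrome M60+).
  const paintObserver = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // entry.name is 'first-paint' or 'first-contentful-paint';
      // entry.startTime is milliseconds since navigation start.
      sendToAnalytics(entry.name, entry.startTime);
    }
  });
  paintObserver.observe({entryTypes: ['paint']});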

DOMContentLoaded

Defined by the HTML specification:

“The DOMContentLoaded event is fired when the initial HTML document has been completely loaded and parsed, without waiting for stylesheets, images, and subframes to finish loading.” - MDN.

onload

Defined by the HTML specification:

“The load event is fired when the page and its dependent resources have finished loading.” - MDN.
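As a rough sketch, both milestones can be observed in the browser with standard event listeners; performance.now() reports milliseconds since navigation start, which is close enough for illustration:

  // Log DOMContentLoaded and load timings, in milliseconds since navigation start.
  document.addEventListener('DOMContentLoaded', () => {
    console.log('DOMContentLoaded:', performance.now());
  });
  window.addEventListener('load', () => {
    console.log('load:', performance.now());
  });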

Dimensions

Performance of web content can vary significantly based on device type, properties of the network, and other variables. To help segment and understand the user experience across these key variables, the Chrome User Experience Report provides the following dimensions.

Effective Connection Type

Defined by the Network Information API and available in Chrome M62+:

“Provides the effective connection type (“slow-2g”, “2g”, “3g”, “4g”, or “offline”) as determined by round-trip and bandwidth values based on real user measurement observations.”
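As a sketch, the effective connection type can be read in the browser as follows; note that navigator.connection is not included in the default TypeScript DOM typings, hence the cast:

  // Read the effective connection type where the Network Information API
  // is available (Chrome M62+).
  const connection = (navigator as any).connection;
  if (connection && connection.effectiveType) {
    console.log('Effective connection type:', connection.effectiveType);
  }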

Device Type

Coarse device classification (“mobile”, “tablet”, or “desktop”), as communicated via the User-Agent string.

Data format

The report is provided as a public Google BigQuery dataset containing the aggregated user experience metrics for a sample of origins on the web. Each row in the dataset contains a nested record of user experience for a particular origin, split by key dimensions.

Field                            Value
origin                           "https://example.com"
effective_connection_type.name   "4G"
form_factor.name                 "mobile"
first_paint.histogram.start      1000
first_paint.histogram.end        1200
first_paint.histogram.density    0.123

For example, the above shows a sample record from the Chrome User Experience Report, which indicates that 12.3% of page loads had a “first paint time” measurement in the range of 1000-1200 milliseconds when loading “https://example.com” on a “mobile” device over a “4G”-like connection. To obtain the cumulative fraction of users experiencing a first paint time below 1200 milliseconds, add up the densities of all records whose histogram “end” value is less than or equal to 1200.
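As a minimal sketch of that cumulative calculation, here is the same arithmetic in TypeScript; the bin values below are invented for illustration:

  // Hypothetical histogram bins for one origin/dimension combination;
  // the densities across a full histogram sum to 1.0.
  interface Bin {
    start: number;   // bin start, in milliseconds
    end: number;     // bin end, in milliseconds
    density: number; // fraction of page loads falling in this bin
  }

  const firstPaintBins: Bin[] = [
    {start: 0, end: 400, density: 0.180},
    {start: 400, end: 800, density: 0.250},
    {start: 800, end: 1000, density: 0.140},
    {start: 1000, end: 1200, density: 0.123},
    {start: 1200, end: 1600, density: 0.307},
  ];

  // Cumulative fraction of page loads with first paint at or under 1200 ms.
  const under1200ms = firstPaintBins
    .filter((bin) => bin.end <= 1200)
    .reduce((sum, bin) => sum + bin.density, 0); // 0.693 with these values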

Getting started

The Chrome User Experience Report is provided as a public dataset on Google BigQuery. To access the dataset, you’ll need a Google account and a Google Cloud project; refer to our step-by-step guide and the guided tour of how to query the dataset.
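As a rough sketch of a programmatic query, the following uses the @google-cloud/bigquery Node.js client. The monthly table name and the repeated bin record layout are assumptions for illustration, so confirm them against the dataset schema in BigQuery:

  import {BigQuery} from '@google-cloud/bigquery';

  async function main(): Promise<void> {
    const bigquery = new BigQuery(); // uses your default Google Cloud project credentials

    // Assumed table and schema: a monthly table in the public
    // chrome-ux-report dataset, with histogram bins as a repeated record.
    const query = `
      SELECT SUM(bin.density) AS density
      FROM \`chrome-ux-report.all.201710\`,
        UNNEST(first_paint.histogram.bin) AS bin
      WHERE origin = 'https://example.com'
        AND bin.end <= 1200`;

    const [rows] = await bigquery.query({query});
    console.log(rows);
  }

  main().catch(console.error);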

Analysis tips & best practices

Consider population differences across origins

The metrics provided by the Chrome User Experience Report are powered by real user measurement data. As a result, the data reflects how real users experienced the visited origin and, unlike synthetic or local testing where the test is performed under fixed and simulated conditions, captures the full range of external factors that shape and contribute to the final user experience.

For example, differences in the population of users accessing a particular origin can contribute meaningful differences to the user experience. If a site is frequented by visitors with more modern devices or faster networks, the results may appear “fast” even if the site is not well optimized. Conversely, a well-optimized site that attracts a wider population of users, or a population with a larger fraction of users on slower devices or networks, may appear “slow”.

When performing head-to-head comparisons across origins, it is important to account for and control for population differences: segment by the provided dimensions, such as device type and effective connection type, and consider external factors such as the size of the population and the countries from which the origin is accessed.
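For example, a segmented query (again assuming the table name and bin layout from the earlier sketch, and runnable with the same client call) might compute the fast-paint fraction separately per device and connection type:

  // density here is the fraction of all page loads for the origin that
  // fall in this segment and finish first paint within 1200 ms.
  const segmentedQuery = `
    SELECT
      form_factor.name AS device,
      effective_connection_type.name AS connection,
      SUM(bin.density) AS density
    FROM \`chrome-ux-report.all.201710\`,
      UNNEST(first_paint.histogram.bin) AS bin
    WHERE origin = 'https://example.com'
      AND bin.end <= 1200
    GROUP BY device, connection
    ORDER BY density DESC`;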

Consider population size differences across origins

The Chrome User Experience Report aggregates data for each origin, with the “density” values across all dimension-metric histograms summing to a value of “1.0”. This provides insight into the distribution of experiences across the key dimensions for a single origin.

However, when aggregating data from multiple origins, for example within an industry vertical, be careful with the types of conclusions being drawn: adding up densities for the same metric across multiple origins does not account for relative population differences across origins.

For example, site A may have ten million visitors, while site B has ten thousand. In both cases, the histogram densities for each origin sum to “1.0”, and the dataset does not provide any absolute metrics about the population size of individual origins, or relative population size differences across origins. As a result, if you add together the densities from A and B and average the results, you will treat the two origins as equals even though A has three orders of magnitude more traffic.
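As a small worked sketch of this pitfall, with visit counts that are invented for illustration (the dataset itself provides no population sizes):

  // Hypothetical fraction of "fast" page loads for two origins.
  // Visit counts are NOT in the dataset; they are invented for illustration.
  const siteA = {fastDensity: 0.95, visits: 10_000_000};
  const siteB = {fastDensity: 0.40, visits: 10_000};

  // A naive average treats both origins as equals:
  const naive = (siteA.fastDensity + siteB.fastDensity) / 2; // 0.675

  // A traffic-weighted average, if population data were available elsewhere:
  const weighted =
    (siteA.fastDensity * siteA.visits + siteB.fastDensity * siteB.visits) /
    (siteA.visits + siteB.visits); // ~0.9495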

Consider Chrome population differences

The Chrome User Experience Report is powered by real user measurement aggregated from Chrome users who have opted in to syncing their browsing history, have not set up a Sync passphrase, and have usage statistic reporting enabled. This population may not be representative of the broader user base for a particular origin, and its makeup may differ from origin to origin. Further, the data does not account for users on other browsers.

As a result, be careful with the conclusions you draw when looking at a cross-section of origins or when comparing individual origins: avoid absolute comparisons and consider the other population factors outlined in the sections above.

Feedback and suggestions

We would love to hear your feedback, questions, and suggestions to help us improve the Chrome User Experience Report. Please join the conversation on our public Google Group.