Frontend Performance Measuring, KPIs, and Monitoring
Frontend performance matters for user experience, conversion, ad placements, and good old SEO. To build fast websites, you must define performance metrics and KPIs and monitor, understand, and act on them.
- Determine Your Performance KPIs
- Core Web Vitals
- Performance KPI 1: Time to First Byte (TTFB)
- Performance KPI 2: Largest Contentful Paint (LCP)
- Performance KPI 3: Cumulative Layout Shift (CLS)
- Performance KPI 4: Interaction to Next Paint (INP)
- Other Performance Metrics
- How to Measure Frontend Performance?
- What About Frontend Performance Monitoring?
- Adopt Performance First Mindset
Core Web Vitals, LCP, Cumulative Layout Shift, FID, Webpagetest.org, Google Lighthouse, PageSpeed Insights, CrUX, or the latest one, Interaction to Next Paint (INP): we're sure you've stumbled upon at least one of these in your quest to improve frontend performance.
But how do you measure performance? And why do you need to be data-driven and continuously monitor, optimize, and build a performance testing and optimization culture?
Let's explore that. Let's go millisecond hunting.
Frontend performance KPIs, or Key Performance Indicators, are the first thing you must establish before measuring or improving anything. With your KPIs in place, you have defined the rules of the game and established a common language for talking about frontend performance across your organization.
We highly recommend that you choose KPIs that are common in the industry. This allows you to benefit from a wide selection of measurement tools and focus your attention on the same metrics as Search Engines, which factor in speed as part of the page ranking.
The best option for this at the moment is the Core Web Vitals. They do not encompass all the available frontend performance metrics, but they include the most important ones that can be reliably measured.
In May 2020, Google announced Core Web Vitals, a set of metrics that quantify the quality of user experience on the web. Since then, Core Web Vitals measurement has been added to most of the tools used to assess frontend performance and page speed.
The Core Web Vitals consist of (as of the time of this article) three different metrics:
- Largest Contentful Paint (LCP)
- Interaction to Next Paint (INP), which replaced First Input Delay (FID) on March 12th, 2024.
- Cumulative Layout Shift (CLS)
You can see these three metrics as part of a performance audit done with the popular online performance tool PageSpeed Insights.
The three Core Web Vital metrics are represented in the image above. Make a note of the heading Field Data. This indicates that these performance audits have been pulled from actual users on the target website (collected automatically when using Google Chrome). These numbers reflect real experiences that the users are getting and are also what Google uses as part of the page ranking algorithm.
When measuring performance, there are a lot of factors that influence the results.
- The device (laptop, cell phone)
- The network type (wifi, cellular, cable)
- The network health (at home or a busy café)
A rule of thumb here is that the device's speed and the quality of the network largely determine the performance result. That means you can get vastly different results for the same website if you test with another device or over a different network setup.
On the other hand, Lab Data is not based on actual user reporting but on a single test run using a simulated device on a throttled network.
Lab data is not what your users see and is not used by Google in their ranking algorithm. It is only helpful for developers who can use the results of a single test to check if changes in the code had any impact on performance.
In other words, it is super helpful for the developer to analyze how changes to the code affect frontend performance, but it is not at all relevant when telling the world how fast your site is.
When we talk about Core Web Vitals, for Google it is not just about them: Core Web Vitals are part of the Page Experience signals, which you must pass to improve your rankings.
On top of that, not all CWV reports are created equal. What we mean by that is that when it comes to Core Web Vitals for SEO, the only place to check them is Search Console, which uses data from the Chrome User Experience Report, or CrUX, the official dataset of the Web Vitals program (we'll talk some more about it later on).
Finally, don't be misled into thinking performance is all there is. Google Search always seeks to show the most relevant content, even if the page experience is sub-par. Performance matters a great deal, but relevance still comes first.
Time to First Byte (TTFB) is an indicator of server efficiency. A delayed TTFB might suggest server congestion, inefficient server-side scripting, or network issues. Ideally, the TTFB should be under 200 milliseconds for a server in close proximity to the user. However, factors like CDN usage can impact this.
Our dedicated TTFB page linked above has more on how to improve TTFB.
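If you want a quick, rough TTFB reading for the page you are on, the Navigation Timing API exposes it in the browser. This is just a console sketch; for field data you would rely on a RUM library or CrUX instead:

```javascript
// Rough TTFB reading for the current page, run in the browser console.
const [nav] = performance.getEntriesByType('navigation');
if (nav) {
  // responseStart marks the arrival of the first byte of the response.
  console.log(`TTFB: ${Math.round(nav.responseStart - nav.startTime)} ms`);
}
```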
As the most influential of the Core Web Vitals, Largest Contentful Paint (LCP) makes up 25% of the Lighthouse score, making it the number one frontend performance KPI. It measures the time it takes for the largest content element in the viewport (typically a hero image or a large block of text) to be rendered on the user's device, and it correlates directly with perceived user experience.
In general, to improve LCP, you'd optimize and compress images, leverage browser caching, prioritize above-the-fold content, reduce server response times, etc., depending on the use case, of course. Our dedicated LCP page linked above has more on how to improve LCP.
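To see which element the browser currently considers the LCP candidate, you can watch largest-contentful-paint entries with a PerformanceObserver. A quick diagnostic sketch for Chromium-based browsers:

```javascript
// Log LCP candidates as they are reported; the last entry before user input is the final LCP.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log('LCP candidate:', Math.round(entry.startTime), 'ms', entry.element);
  }
}).observe({type: 'largest-contentful-paint', buffered: true});
```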
Claiming 15% of the total Lighthouse performance score, Cumulative Layout Shift is now established as an essential metric that you should not ignore. It measures all of the layout's small and large "jumps," i.e., unexpected shifts of visible content while the page loads and as the user interacts with it. CLS measures the instability of content, so you want to keep this score as close to 0 as possible; a score of 0 indicates that the layout did not shift at all.
Some optimization techniques can help you improve CLS scores, like specifying dimensions for images, videos, and other media elements, using the CSS aspect-ratio property, etc. Our dedicated CLS page linked above has more on how to improve CLS.
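To find out which elements are actually shifting, you can log layout-shift entries; only shifts that happen without recent user input count toward CLS. A diagnostic sketch:

```javascript
// Log layout shifts that count toward CLS, along with the elements that moved.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    if (!entry.hadRecentInput) {
      console.log('Layout shift:', entry.value.toFixed(4), entry.sources?.map((s) => s.node));
    }
  }
}).observe({type: 'layout-shift', buffered: true});
```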
On March 12th, 2024, Google sunsetted First Input Delay (FID) as a Core Web Vital and replaced it with Interaction to Next Paint (INP), which measures the time from a user interaction, such as a click, tap, or key press, until the browser paints the next frame. It is only logical to use INP as one of your KPIs from now on.
The key difference between FID and INP lies in their scope and focus. While FID focuses solely on the first input delay, capturing the initial responsiveness of the page, INP provides a broader assessment of interactive performance throughout the user's entire session. INP is particularly useful for understanding a page's overall interactivity and responsiveness, helping developers identify and address issues that might not be evident from looking at the initial load performance alone.
This means we all must recalibrate our expectations against the new baseline now that INP has hit the scene.
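If you want to see which interactions are dragging your INP down, the Event Timing API (which INP is built on) lets you surface slow interactions directly. A diagnostic sketch for the browser console; the 200 ms threshold is just an example:

```javascript
// Surface slow interactions; INP is derived from event timing entries like these.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log(`${entry.name} took ${Math.round(entry.duration)} ms on`, entry.target);
  }
}).observe({type: 'event', buffered: true, durationThreshold: 200});
```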
Total Blocking Time (TBT) makes up 30% of the Lighthouse performance score. Although it is not part of the Core Web Vitals, it is a good lab-based stand-in for responsiveness when performing one-off Lighthouse tests, because INP, which measures the time from when the user interacts with your site to when the visual feedback from that interaction is painted on the screen, can only be measured with real user interactions. Together, these two should give you a better picture of what you need to do.
TBT measures the time a user is blocked from interacting with the page during the critical page load phase, and it is a metric that you want to get as close to 0 as possible.
A score of 0 indicates that the user is not blocked, whereas a score of, let's say, 1000 indicates that the user was blocked for an accumulated time of 1 second. This is not a very good user experience. You typically want to keep this number below 300 milliseconds to ensure a happy journey for the user.
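TBT is derived from long tasks: any main-thread task longer than 50 ms contributes its excess to the total. Here is a rough sketch that approximates it in the browser; the official Lighthouse number only counts tasks in a specific window of the load, so treat this as an estimate and register the observer as early as possible (e.g., in an inline script in the head):

```javascript
// Rough estimate of blocking time: sum everything above 50 ms for each long task.
let estimatedBlockingTime = 0;
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    estimatedBlockingTime += Math.max(0, entry.duration - 50);
  }
  console.log(`Estimated blocking time so far: ${Math.round(estimatedBlockingTime)} ms`);
}).observe({type: 'longtask'});
```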
These metrics are the most important when defining your performance KPIs. There are a bunch of other metrics (speed index comes to mind), and they all serve a common purpose: to improve the experienced speed for the end-user. Frontend performance is an essential factor in digital product design, and the metrics described above measure just that.
When optimizing frontend performance, you also need to monitor your backend performance. For example, with a headless commerce approach, you must ensure your back end has a fast eCommerce API.
As the most accessible tool, PageSpeed Insights is web-based and uses Lighthouse behind the scenes. Anyone in your organization can use it, making it an excellent place to start.
Open Chrome DevTools, go to the Lighthouse panel (formerly called Audits), and run a test with the Performance category checked. It uses Lighthouse in the background and gives back a range of metrics, with the Core Web Vitals taking center stage.
If you want an even more detailed report, you can use the DevTools audit instead of PageSpeed Insights, which is especially useful for local development.
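If you want to run the same audits in CI or from a script, Lighthouse can also be driven programmatically. A minimal sketch using the lighthouse and chrome-launcher npm packages, assuming a recent ESM-based version of Lighthouse (check the docs for your installed version):

```javascript
// Run a performance-only Lighthouse audit against a URL and print the score.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const chrome = await chromeLauncher.launch({chromeFlags: ['--headless']});
const result = await lighthouse('https://example.com', {
  port: chrome.port,
  onlyCategories: ['performance'],
  output: 'json',
});

console.log('Performance score:', Math.round(result.lhr.categories.performance.score * 100));
await chrome.kill();
```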
If you haven't done so already, add your website to Google Search Console. It is a free Google tool that helps you stay on top of your website's SEO performance: you can track and measure which keywords you rank for and with which URLs, and also diagnose technical SEO issues like Core Web Vitals and the crawlability of your website.
As we already mentioned, if you are analyzing your CWV for SEO, this is the place to do it. The Core Web Vitals report data comes from the CrUX report, which is then used in Search to influence rankings.
Search Console shows how CrUX data influences the page experience ranking factor by URL and URL group, meaning it considers URL-level CWV data when deciding where to rank you in SERPs.
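The same CrUX field data is also available through the CrUX API, which is handy if you want to pull 75th-percentile values into your own dashboards. A sketch; CRUX_API_KEY is a placeholder for a key you create in Google Cloud, and the URL is an example:

```javascript
// Query the CrUX API for the p75 Core Web Vitals of a specific URL (phone traffic).
const CRUX_API_KEY = 'YOUR_API_KEY'; // placeholder
const response = await fetch(
  `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${CRUX_API_KEY}`,
  {
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify({
      url: 'https://example.com/',
      formFactor: 'PHONE',
      metrics: ['largest_contentful_paint', 'cumulative_layout_shift', 'interaction_to_next_paint'],
    }),
  }
);
const {record} = await response.json();
for (const [name, data] of Object.entries(record.metrics)) {
  console.log(name, 'p75:', data.percentiles.p75);
}
```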
If you want to examine your website's performance, visit webpagetest.org. This online service provides page tests on a fleet of servers located worldwide. The audit reports you get back show you in incredible detail all the behind-the-scenes of a page load and can be beneficial if you want to debug a particular front-end performance problem.
It is also convenient when you want to perform tests where you are interested in the metrics measured from a particular region around the globe.
GTmetrix is another excellent website performance analytics tool. It's built to help you analyze the performance of your website and provide you with a list of actionable recommendations to improve it, plus a bunch of cool features like Speed Visualization, the Waterfall Chart, and even video capture of the page load for paid users.
If you want to track performance for your actual site visitors, we recommend the web-vitals library. It's easy to use and integrates well with whatever frontend solution you might be running. Recent versions of the library report INP instead of FID and expose the metrics through on-prefixed callbacks:
import {onCLS, onINP, onLCP} from 'web-vitals';
onCLS(console.log);
onINP(console.log);
onLCP(console.log);
This data can be collected for all your users and sent to an analytics tool, such as Google Analytics. From there, you will get a crystal clear picture of how your site is performing for your user base. You can also get valuable insights by correlating measured frontend performance with things like conversion rate.
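In practice, instead of logging to the console, you report each metric to your analytics backend as soon as its value is final. A common pattern looks roughly like this; the /analytics endpoint is a placeholder for whatever collector you use:

```javascript
import {onCLS, onINP, onLCP, onTTFB} from 'web-vitals';

// Send each finalized metric to a collection endpoint; sendBeacon survives page unloads.
function sendToAnalytics(metric) {
  const body = JSON.stringify({name: metric.name, value: metric.value, id: metric.id});
  if (!(navigator.sendBeacon && navigator.sendBeacon('/analytics', body))) {
    fetch('/analytics', {method: 'POST', body, keepalive: true});
  }
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
onTTFB(sendToAnalytics);
```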
Frontend performance monitoring is crucial as it ensures users receive a seamless experience when interacting with web applications. Monitoring needs have surged with the increasing complexity of modern websites and applications. Downtime or performance degradation can have severe consequences, making frontend monitoring indispensable.
There are several performance monitoring tools tailored for front-end web applications, and Google Lighthouse is one of them. However, it doesn't stop at performance monitoring. With it, you can conduct audits that cover several dimensions, such as performance, accessibility, SEO, and even best practices for progressive web apps.
Then, there is Sentry, an open-source tool that provides engineering teams with tools to detect and solve user-impacting bugs. With features like transaction tracing and performance views, Sentry is a robust choice for front-end monitoring.
We should also mention Pingdom and Sematext. Pingdom offers uptime monitoring with features like SSL and real-user monitoring with customizable alerts and detailed performance reporting. Sematext provides a comprehensive suite of monitoring tools with features like real-time alerts, support for major frameworks, and unified log management.
Finally, there is LogRocket. Though not strictly speaking a performance monitoring tool, it does provide a comprehensive understanding of how users engage with and behave on your website.
Installing and configuring performance monitoring tools usually follows a standard procedure, although specifics might vary based on the tool.
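As an example of what that usually looks like, here is a sketch of enabling performance tracing with Sentry's browser SDK. The DSN is a placeholder, and exact option and integration names vary between SDK versions, so check the docs for the version you install:

```javascript
import * as Sentry from '@sentry/browser';

Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0', // placeholder DSN from your Sentry project
  integrations: [Sentry.browserTracingIntegration()],    // captures pageload/navigation transactions
  tracesSampleRate: 0.1,                                  // sample 10% of transactions in production
});
```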
A transaction summary in frontend performance refers to an aggregated report capturing crucial metrics and details of a user transaction on a web application. It provides a holistic view of a specific user interaction or series of interactions on a web application or site through:
- Header Information shows the URL, HTTP method (like GET or POST), status code (like 200 for success), and overall time taken.
- Waterfall View offers a step-by-step breakdown of the request and response process. It helps identify bottlenecks in content loading or third-party scripts.
- Trace Details may include information about JavaScript function calls, AJAX requests, and other client-side operations.
- Metadata holds contextual data about the user's browser, location, device type, etc.
Transaction Summary provides a granular view of user interactions, helping teams understand, analyze, and optimize the user journey.
Performance issues can arise for a myriad of reasons, and troubleshooting them calls for a systematic approach to identifying and rectifying them. It usually involves these steps (though not necessarily in this order):
- Identify the Issue. First, review performance metrics to pinpoint potential problems. Is it a server-side issue or content rendering? Then, compare current metrics with historical data to spot discrepancies.
- Check the Usual Suspects. Third-party scripts, images and media files, JavaScript, and CSS usually influence performance. Making sure they are in order is crucial. Check our frontend performance checklist for more info.
- Check Browser Caching. Is there a caching strategy in place? Check the settings (see the sketch after this list for a quick way to inspect caching headers).
- Check Database and Server Performance. Bad database optimizations and server configurations can bottleneck performance.
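For the caching check, a quick way to see what headers your assets actually ship with is to request a few of them and inspect the response. A sketch you can run in your site's browser console or in a recent Node.js ESM script; the paths are examples, swap in your own:

```javascript
// Spot-check caching headers for a few same-origin assets.
const assets = ['/', '/static/main.js', '/static/main.css']; // example paths, adjust to your site
for (const path of assets) {
  const res = await fetch(path, {method: 'HEAD'});
  console.log(path, '→', res.headers.get('cache-control') ?? 'no cache-control header');
}
```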
Optimizing for frontend performance is not something you do once and never have to worry about again. It's an ongoing effort that requires commitment from everyone involved. Browsers evolve, devices get faster, metrics change, and the way we craft websites changes with them.
What never changes, though, is the importance of a friction-free user journey, where you put the user first and never compromise on performance. On top of that, performance has become one of the search ranking signals, which means it is effectively a part of your SEO efforts (especially for eCommerce SEO).
Remember, milliseconds matter🤘
Deep Dive👇
Keeping Websites Fast when Loading Google Tag Manager
Google Tag Manager can influence your overall website performance just as any other external 3rd party script. Luckily, there are a couple of things that you can do in that regard.
The Cost of 3rd Party Scripts: Understanding and Managing the Impact on Website Performance
3rd party scripts are also notorious for hurting your website’s performance in addition to posing security risks. With that being the case, let’s look at the cost of third-party scripts and how to manage their impact better.
Behind the Scenes of crystallize.com: All the Performance Tricks and Hacks
Putting those frontend performance checklist suggestions to work.