Metrics to pay attention to when optimizing web page performance

In today’s lightning-fast digital landscape, website speed is no longer a luxury – it’s a fundamental requirement. Every developer should possess the knowledge to analyze and optimize web page performance for a seamless user experience. After all, a speedy website translates into higher engagement, lower bounce rates, and ultimately, increased conversions.

The High Cost of Slow Websites

The detrimental effects of sluggish websites are well documented: studies consistently link slower pages to higher bounce rates, lower engagement, and lost conversions.

Prioritizing Core Web Vitals

Forget outdated metrics – Google prioritizes Core Web Vitals for website performance evaluation. These metrics measure real-world user experience and directly impact search engine rankings. Here’s a breakdown of the three key Core Web Vitals:

  1. Largest Contentful Paint (LCP): This tracks the time it takes for the largest content element to render. Optimize images and preload critical content to improve LCP (ideally under 2.5 seconds).
  2. Interaction to Next Paint (INP): This metric measures the user’s perceived responsiveness to interactions. Aim for an INP of under 200 milliseconds.
  3. Cumulative Layout Shift (CLS): This metric assesses how much your page layout shifts as elements load. Use pre-defined dimensions for images and avoid lazy loading critical content to minimize CLS (ideally below a score of 0.1). A sketch for measuring these metrics in the browser follows this list.
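
To see where your pages actually stand, you can observe these metrics in the browser with the standard PerformanceObserver API. A minimal sketch (the console logging is a placeholder for whatever reporting you use):

// Log the latest Largest Contentful Paint candidate.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lastEntry = entries[entries.length - 1];
  console.log("LCP (ms):", lastEntry.startTime);
}).observe({ type: "largest-contentful-paint", buffered: true });

// Accumulate layout-shift scores, ignoring shifts caused by recent user input
// (those don't count toward CLS).
let clsScore = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) clsScore += entry.value;
  }
  console.log("CLS so far:", clsScore);
}).observe({ type: "layout-shift", buffered: true });

// INP is easiest to get from Google's web-vitals library:
// import { onINP } from "web-vitals";
// onINP((metric) => console.log("INP (ms):", metric.value));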

Optimizing for Interactivity

Beyond loading speed, interactivity matters. Here’s how to ensure your page feels responsive:

  • Time to Interactive (TTI): This measures the time it takes for your page to become fully interactive. Reduce unnecessary JavaScript and optimize the critical rendering path to achieve a TTI under 3.8 seconds.
  • Total Blocking Time (TBT): This metric focuses on how long your main thread is blocked by JavaScript execution. Minimize render-blocking JavaScript and leverage code splitting to keep TBT below 200 milliseconds, as shown in the sketch after this list.
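
Code splitting with dynamic import() is one of the most direct ways to cut TBT: heavy JavaScript moves out of the initial bundle and loads only when needed. A minimal sketch, using a hypothetical heavy-chart module:

// Before: a top-level import pulls the chart library into the main bundle,
// where it blocks the main thread during startup.
// import { renderChart } from "./heavy-chart";

// After: webpack splits "./heavy-chart" (hypothetical) into its own chunk,
// fetched only when the user actually asks for the chart.
document.querySelector("#show-chart").addEventListener("click", async () => {
  const { renderChart } = await import("./heavy-chart");
  renderChart(document.querySelector("#chart-container"));
});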

Actionable Steps for Improvement

  • Leverage a CDN: Consider a content delivery network (CDN) to improve content delivery speed for geographically dispersed users. Monitor CDN performance, including cache hit rate and first-byte time. Remember to carefully consider the Time-to-Live (TTL) of your content. A longer TTL can improve performance by reducing the number of requests to your origin server, but it can also lead to stale content if not managed properly.
  • Minify and Optimize Resources: Reduce file sizes and optimize images for web delivery.
  • Implement Lazy Loading: Load non-critical content below the fold only when the user scrolls down to improve initial page load.
  • Utilize Browser Caching: Enable browser caching for static assets to reduce server requests on subsequent visits. A header sketch covering both browser caching and CDN TTLs follows this list.
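
Both browser caching and CDN TTLs are usually controlled with the Cache-Control header. A minimal sketch, assuming a Node.js server using Express (the paths and cache ages are illustrative):

const express = require("express");
const app = express();

// Fingerprinted assets (e.g., app.3f2a1c.js) can have a long TTL: the filename
// changes whenever the content does, so stale copies are never served.
app.use("/assets", express.static("dist/assets", {
  maxAge: "30d",   // becomes Cache-Control: max-age=2592000
  immutable: true, // the browser skips revalidation within that window
}));

// Keep the HTML itself short-lived so deploys propagate quickly.
app.get("/", (req, res) => {
  res.set("Cache-Control", "max-age=60");
  res.sendFile(__dirname + "/dist/index.html");
});

app.listen(3000);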

Other Considerations

While Core Web Vitals and interactivity metrics provide a solid foundation, there are other factors to consider for comprehensive website performance optimization:

  • Network Performance: Although not directly measured by Lighthouse, network response times significantly impact user experience. Tools like Google PageSpeed Insights can help identify network bottlenecks.
  • Server-Side Optimization: Optimizing server response times and resource processing can significantly improve perceived website performance.

Continuous Monitoring and Improvement

Remember, website performance is an ongoing process. Regularly monitor your website’s performance metrics using tools like Google PageSpeed Insights and Lighthouse. Continuously analyze and optimize your code, content, and infrastructure to ensure a top-notch user experience.

Optimize your bundle size by eliminating dead code / tree-shaking in Webpack

When building modern JavaScript apps (regardless of browser or server-side use), it’s important to know what your dependencies are and what you actually use from them. If no care is given to this, your bundle may end up very large and result in a non-performant user experience, especially in browser-based applications, where every byte matters.

Today, I want to talk about a very effective method to optimize your bundle size called Tree Shaking.

Traditionally, we install a module and import the methods we use from it. In many modules, the methods are not exported individually; they are all re-exported from a single entry point, and we destructure what we need from that import. The most common example of this is:

import { Box } from "@material-ui/core"

This causes webpack to bundle all of the module’s methods, even the ones we never use.

There are a couple of ways to avoid this. Some libraries like lodash allow you to install only what you need. Instead of installing the entire lodash library, you can install only the piece you need, such as lodash.get or lodash.throttle.

Another method is tree shaking, where we still install the full library, but when we package our bundle, we tell webpack that we are only importing a portion of it.
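
For tree shaking to work at all, webpack needs ES module syntax to survive transpilation (e.g., Babel’s preset-env with modules: false) and production mode for the actual dead-code elimination. A minimal webpack.config.js sketch:

// webpack.config.js
module.exports = {
  mode: "production",   // enables the minifier, which drops unused exports
  optimization: {
    usedExports: true,  // mark exports nothing imports (default in production)
  },
};

// Libraries can additionally declare "sideEffects": false in their
// package.json, telling webpack that unimported files are safe to prune.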

https://material-ui.com/guides/minimizing-bundle-size/#option-1

Instead of:

import { Box } from "@material-ui/core"

Do this:

import Box from "@material-ui/core/Box";

Similarly, for lodash, instead of:

import { groupBy } from "lodash";

Do this:

import groupBy from "lodash/groupBy";

Alternative method

There is also a babel plugin that can do this for you: babel-plugin-tree-shaking-import
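
Following Babel’s naming convention (a plugin published as babel-plugin-X is referenced as "X"), enabling it in your .babelrc should look roughly like the snippet below; check the plugin’s README for the exact options:

{
  "plugins": ["tree-shaking-import"]
}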

Consistent import convention

Another key point to pay attention to with tree shaking is consistency throughout your code. Make sure every import of a module consistently uses the full module path. A single instance of the traditional import-and-destructure style will pull the whole module back into your bundle.
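
For example (file names are hypothetical), this is the situation to avoid:

// a.js — path import, tree-shaking friendly
import groupBy from "lodash/groupBy";

// b.js — one root import like this pulls all of lodash back into the bundle
import { sortBy } from "lodash";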

Another reason to look into using the babel plugin is that it achieves this consistency automatically.

Profiling and optimization on Facebook PHP SDK

If you’re developing a PHP-based Facebook application, you might want to use (or may already be using) the Facebook integration for a little more than just authentication and identification of your users. Even if you have the simplest Facebook app built on the PHP SDK, you probably make API calls regularly.

You write your app and start to see performance issues. You optimize your database interactions and your PHP code; if you still have performance problems after that application-level work, the cost possibly comes from the Facebook calls. Since you use the SDK, you might not know what’s happening in the communication with Facebook, so you want to profile the traffic between your app and the Facebook API servers.

You can add basic timing profiling to your API calls to see how many calls you make, what kind of calls they are, and how long they take to run.

Let’s dive into the SDK, modify it a bit, and start getting profiling information. Here is the actual method you need to modify in the base_facebook.php file:

public function api(/* polymorphic */) {
	$args = func_get_args();
	if (is_array($args[0])) {
		return $this->_restserver($args[0]);
	} else {
		return call_user_func_array(array($this, '_graph'), $args);
	}
}

and we’re modifying it like this:

public function api( /* polymorphic */)
{
	$args = func_get_args();

	$time_start = microtime(true);

	if (is_array($args[0])) {
		$result = $this->_restserver($args[0]);
	} else {
		$result = call_user_func_array(array($this, '_graph'), $args);
	}

	$time_end = microtime(true);
	$time_elapsed = ($time_end - $time_start) * 1000; // convert to milliseconds

	// Record the call in a global array; create it on first use.
	if (!isset($GLOBALS['facebookApiCalls'])) {
		$GLOBALS['facebookApiCalls'] = array();
	}
	$GLOBALS['facebookApiCalls'][] = array(
		'duration' => $time_elapsed,
		'args' => $args,
	);

	return $result;
}

It simply appends each call to a global array named “facebookApiCalls”, recording the API call details as “args” and the timing as “duration”. At the end of your page logic, you can sort this information by duration and print it, filtering to show only the slow calls (for instance, the ones that took over 200 milliseconds).
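
A minimal sketch of that reporting step (the 200 ms threshold is just an example):

// Sort the collected calls by duration, slowest first.
usort($GLOBALS['facebookApiCalls'], function ($a, $b) {
	if ($a['duration'] == $b['duration']) return 0;
	return ($b['duration'] > $a['duration']) ? 1 : -1;
});

// Print only the slow calls.
foreach ($GLOBALS['facebookApiCalls'] as $call) {
	if ($call['duration'] < 200) continue;
	printf("%.1f ms - %s\n", $call['duration'], json_encode($call['args']));
}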

After this profiling, you can start to identify the slow calls. You may also find that you make the same calls multiple times (because of recursion, repeated lookups, etc.); once you can see them, you can optimize and combine them.

This optimization is not only a performance tweak for the user; it also decreases the number of calls made between your server and Facebook’s servers.