Build A Static RSS Reader To Fight Your Inner FOMO

Karin Hendrikse

2024-10-07T13:00:00+00:00

In a fast-paced industry like tech, it can be hard to deal with the fear of missing out on important news. But, as many of us know, there’s an absolutely huge amount of information coming in daily, and finding the right time and balance to keep up can be difficult, if not stressful. A classic piece of technology like an RSS feed is a delightful way of taking back ownership of our own time. In this article, we will create a static Really Simple Syndication (RSS) reader that will bring you the latest curated news only once (yes: once) a day.

We’ll obviously work with RSS technology in the process, but we’re also going to combine it with some things that maybe you haven’t tried before, including Astro (the static site framework), TypeScript (for JavaScript goodies), a package called rss-parser (for connecting things together), as well as scheduled functions and build hooks provided by Netlify (although there are other services that do this).

I chose these technologies purely because I really, really enjoy them! There may be other solutions out there that are more performant, come with more features, or are simply more comfortable to you — and in those cases, I encourage you to swap in whatever you’d like. The most important thing is getting the end result!

The Plan

Here’s how this will go. Astro generates the website. I made the intentional decision to use a static site because I want the different RSS feeds to be fetched only once during build time, and that’s something we can control each time the site is “rebuilt” and redeployed with updates. That’s where Netlify’s scheduled functions come into play, as they let us trigger rebuilds automatically at specific times. There is no need to manually check for updates and deploy them! Cron jobs can just as readily do this if you prefer a server-side solution.

During the triggered rebuild, we’ll let the rss-parser package do exactly what it says it does: parse a list of RSS feeds that are contained in an array. The package also allows us to set a filter for the fetched results so that we only get ones from the past day, week, and so on. Personally, I only render the news from the last seven days to prevent content overload. We’ll get there!

But first…

What Is RSS?

RSS is a web feed technology that you can feed into a reader or news aggregator. Because RSS is standardized, you know what to expect when it comes to the feed’s format. That means we have a ton of fun possibilities when it comes to handling the data that the feed provides. Most news websites have their own RSS feed that you can subscribe to (this is Smashing Magazine’s RSS feed: https://www.smashingmagazine.com/feed/). An RSS feed is capable of updating every time a site publishes new content, which means it can be a quick source of the latest news, but we can tailor that frequency as well.

RSS feeds are written in an Extensible Markup Language (XML) format and have specific elements that can be used within it. Instead of focusing too much on the technicalities here, I’ll give you a link to the RSS specification. Don’t worry; that page should be scannable enough for you to find the most pertinent information you need, like the kinds of elements that are supported and what they represent. For this tutorial, we’re only using the following elements: <channel>, <item>, <title>, <link>, and <pubDate>. We’ll also let our RSS parser package do some of the work for us.
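To make that concrete, here’s a minimal, hand-written sketch of what a feed’s XML looks like (the titles and URLs are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com/</link>
    <description>A placeholder feed for illustration</description>
    <item>
      <title>Hello, RSS</title>
      <link>https://example.com/hello-rss</link>
      <pubDate>Mon, 07 Oct 2024 13:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>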

Creating The Static Site

We’ll start by creating our Astro site! In your terminal, run pnpm create astro@latest. You can use any package manager you want — I’m simply trying out pnpm for myself.

After running the command, Astro’s chat-based helper, Houston, walks through some setup questions to get things started.

 astro   Launch sequence initiated.

   dir   Where should we create your new project?
         ./rss-buddy

  tmpl   How would you like to start your new project?
         Include sample files

    ts   Do you plan to write TypeScript?
         Yes

   use   How strict should TypeScript be?
         Strict

  deps   Install dependencies?
         Yes

   git   Initialize a new git repository?
         Yes

I like to use Astro’s sample files so I can get started quickly, but we’re going to clean them up a bit in the process. Let’s clean up the src/pages/index.astro file by removing everything inside of the <Layout> tags. Then we’re good to go!

From there, we can spin things up by running pnpm start. Your terminal will tell you which localhost address you can find your site at.

Pulling Information From RSS feeds

The src/pages/index.astro file is where we will make an array of RSS feeds we want to follow. We will be using Astro’s template syntax, so between the two code fences (---), create an array of feedSources and add some feeds. If you need inspiration, you can copy this:

const feedSources = [
  'https://www.smashingmagazine.com/feed/',
  'https://developer.mozilla.org/en-US/blog/rss.xml',
  // etc.
]

Now we’ll install the rss-parser package in our project by running pnpm install rss-parser. This package is a small library that turns the XML that we get from fetching an RSS feed into JavaScript objects. This makes it easy for us to read our RSS feeds and manipulate the data any way we want.
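To get a feel for the output before wiring it into Astro, here’s a rough sketch of the kind of object the parser hands back (the field names follow rss-parser’s documented output, but exactly which fields are present varies from feed to feed):

import Parser from 'rss-parser';

const parser = new Parser();
const feed = await parser.parseURL('https://www.smashingmagazine.com/feed/');

console.log(feed.title); // the feed’s own title
console.log(feed.items[0]);
// → { title: '...', link: '...', pubDate: '...', contentSnippet: '...', ... }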

Once the package is installed, open the src/pages/index.astro file, and at the top, we’ll import the rss-parser and instantiate the Parser class.

import Parser from 'rss-parser';
const parser = new Parser();

We use this parser to read our RSS feeds and (surprise!) parse them to JavaScript. We’re going to be dealing with a list of promises here. Normally, I would probably use Promise.all(), but the thing is, this is supposed to be an uncomplicated experience. If one of the feeds doesn’t work for some reason, I’d prefer to simply ignore it.

Why? Well, because Promise.all() rejects everything even if only one of its promises is rejected. That might mean that if one feed doesn’t behave the way I’d expect it to, my entire page would be blank when I grab my hot beverage to read the news in the morning. I do not want to start my day confronted by an error.

Instead, I’ll opt to use Promise.allSettled(). This method will actually let all promises complete even if one of them fails. In our case, this means any feed that errors will just be ignored, which is perfect.
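As a quick illustration of the difference, here’s a minimal sketch of what Promise.allSettled() resolves to when one promise fails:

// Works in any async context; Astro frontmatter supports top-level await
const results = await Promise.allSettled([
  Promise.resolve('feed A parsed'),
  Promise.reject(new Error('feed B is down')),
]);
console.log(results);
// → [
//   { status: 'fulfilled', value: 'feed A parsed' },
//   { status: 'rejected', reason: Error('feed B is down') },
// ]
// Promise.all() would have rejected outright instead.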

Let’s add this to the src/pages/index.astro file:

interface FeedItem {
  feed?: string;
  title?: string;
  link?: string;
  date?: Date;
}

const feedItems: FeedItem[] = [];

await Promise.allSettled(
  feedSources.map(async (source) => {
    try {
      const feed = await parser.parseURL(source);
      feed.items.forEach((item) => {
        const date = item.pubDate ? new Date(item.pubDate) : undefined;
        feedItems.push({
          feed: feed.title,
          title: item.title,
          link: item.link,
          date,
        });
      });
    } catch (error) {
      console.error(`Error fetching feed from ${source}:`, error);
    }
  })
);

This fills an array named feedItems. For each URL in the feedSources array we created earlier, the rss-parser retrieves the items and, yes, parses them into JavaScript. Then, we push whatever data we want onto feedItems! We’ll keep it simple for now and only keep the following:

  • The feed title,
  • The title of the feed item,
  • The link to the item,
  • And the item’s published date.

The next step is to ensure that all items are sorted by date so we’ll truly get the “latest” news. Add this small piece of code to our work:

const sortedFeedItems = feedItems.sort((a, b) => (b.date ?? new Date()).getTime() - (a.date ?? new Date()).getTime());

Oh, and… remember when I said I didn’t want this RSS reader to render anything older than seven days? Let’s tackle that right now since we’re already in this code.

We’ll make a new variable called sevenDaysAgo and assign it a date. We’ll then set that date to seven days ago and use that logic before we add a new item to our feedItems array.

This is what the src/pages/index.astro file should now look like at this point:

---
import Layout from '../layouts/Layout.astro';
import Parser from 'rss-parser';
const parser = new Parser();

const sevenDaysAgo = new Date();
sevenDaysAgo.setDate(sevenDaysAgo.getDate() - 7);

const feedSources = [
  'https://www.smashingmagazine.com/feed/',
  'https://developer.mozilla.org/en-US/blog/rss.xml',
]

interface FeedItem {
  feed?: string;
  title?: string;
  link?: string;
  date?: Date;
}

const feedItems: FeedItem[] = [];

await Promise.allSettled(
  feedSources.map(async (source) => {
    try {
      const feed = await parser.parseURL(source);
      feed.items.forEach((item) => {
        const date = item.pubDate ? new Date(item.pubDate) : undefined;
        if (date && date >= sevenDaysAgo) {
          feedItems.push({
            feed: feed.title,
            title: item.title,
            link: item.link,
            date,
          });
        }
      });
    } catch (error) {
      console.error(`Error fetching feed from ${source}:`, error);
    }
  })
);

const sortedFeedItems = feedItems.sort((a, b) => (b.date ?? new Date()).getTime() - (a.date ?? new Date()).getTime());

---

<Layout>
</Layout>
Rendering XML Data

It’s time to show our news articles on the Astro site! To keep this simple, we’ll format the items in an unordered list rather than some other fancy layout.

All we need to do is update the <Layout> element in the file, sprinkling in each feed item’s title, URL, and publish date.

<Layout>
  <ul>
    {sortedFeedItems.map((item) => (
      <li>
        <a href={item.link}>{item.title}</a>
        <p>{item.feed}, {item.date}</p>
      </li>
    ))}
  </ul>
</Layout>
Go ahead and run pnpm start from the terminal. The page should display an unordered list of feed items. Of course, everything is unstyled at the moment, but luckily for you, you can make it look exactly like you want with CSS!

And remember that there are even more fields available in the XML for each item if you want to display more information. If you log each item during the build (Astro runs this code at build time, so the output shows up in your terminal), you’ll see all of the fields you have at your disposal:

feed.items.forEach((item) => {
  console.log(item);
});

Scheduling Daily Static Site Builds

We’re nearly done! The feeds are being fetched, and they are returning data back to us in JavaScript for use in our Astro page template. Since feeds are updated whenever new content is published, we need a way to fetch the latest items from them.

We want to avoid doing any of this manually. So, let’s set this site on Netlify to gain access to their scheduled functions that trigger a rebuild and their build hooks that do the building. Again, other services do this, and you’re welcome to roll this work with another provider — I’m just partial to Netlify since I work there. In any case, you can follow Netlify’s documentation for setting up a new site.

Once your site is hosted and live, you are ready to schedule your rebuilds. A build hook gives you a URL to use to trigger the new build, looking something like this:

https://api.netlify.com/build_hooks/your-build-hook-id
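You can test the hook straight from your terminal; a POST request to it queues a new build (the ID below is a placeholder):

curl -X POST https://api.netlify.com/build_hooks/your-build-hook-id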

Let’s trigger builds every day at midnight. We’ll use Netlify’s scheduled functions. That’s really why I’m using Netlify to host this in the first place. Having them at the ready via the host greatly simplifies things since there’s no server work or complicated configurations to get this going. Set it and forget it!

We’ll install @netlify/functions in the project and then create the following file in the project’s root directory: netlify/functions/deploy.ts.

This is what we want to add to that file:

// netlify/functions/deploy.ts

import type { Config } from '@netlify/functions';

const BUILD_HOOK =
  'https://api.netlify.com/build_hooks/your-build-hook-id'; // replace me!

export default async (req: Request) => {
  await fetch(BUILD_HOOK, {
    method: 'POST',
  })
};

export const config: Config = {
  schedule: '0 0 * * *',
};

If you commit your code and push it, your site should re-deploy automatically. From that point on, it follows a schedule that rebuilds the site every day at midnight, ready for you to take your morning brew and catch up on everything that you think is important.


5 Best WordPress Backup Plugins for Data Security (2024)

Ever wondered what would happen if your WordPress site suddenly disappeared?

It’s a nightmare scenario, but one that can be easily avoided with the right backup solution in place. For anyone managing a WordPress site, whether it’s a personal blog or a full-scale business, having a reliable backup is crucial.

This post highlights some of the best WordPress backup plugins available, focusing on their key features and how they can protect your site from unexpected data loss, security threats, and other digital disasters.

Let’s take a closer look at how these tools can help keep your WordPress site secure and your data intact.


BlogVault

Pros
  • Complete solution with backups, staging, and security
  • Reliable restore process with a high success rate
  • Integrated staging for safe testing
  • Real-time backups for WooCommerce sites
  • Automated malware scanning
Cons
  • Limited options for backup storage locations
  • High cost might not suit budget-conscious users

BlogVault is a robust WordPress backup plugin that’s ideal for business-critical websites. It offers features like incremental backups, secure cloud storage, and one-click restore, making sure your site can bounce back quickly from any issues. With a strong track record of successful backups, BlogVault provides the reliability you need.

What sets BlogVault apart is its strong data security, with encrypted backups stored across multiple data centers. The plugin also makes it easy to create staging sites, so you can test changes before making them live. Its migration tool is another standout feature, allowing you to move your site to a new host with minimal effort.

Who should use this?

BlogVault is perfect for businesses that need reliable backups, especially if downtime is not an option. It’s also great for developers who need staging environments or anyone who wants comprehensive backup and security for their WordPress site.

Visit BlogVault


Solid Backups

Pros
  • Easy-to-use interface with clear options
  • Efficient incremental backups
Cons
  • No free version or trial available

Previously known as BackupBuddy, Solid Backups has been around since 2010, helping WordPress users keep their sites safe. This plugin allows you to back up your entire site, including your database and files, with just a few clicks. You can choose to back up everything or only specific files and folders, and send these backups to secure locations like Amazon S3, Dropbox, or even your email.

Solid Backups also makes it easy to restore your site, whether you need to recover a single file or roll back your database. You can set up automatic backups on a schedule that works for you, so you don’t have to worry about losing any data.

Who should use this?

Solid Backups is ideal for anyone who wants a dependable and user-friendly backup solution. It’s particularly useful if you manage multiple sites or prefer to set up backups once and let the plugin handle the rest.

Visit Solid Backups


UpdraftPlus

Pros
  • Free version includes scheduled and incremental backups
  • Works with many cloud storage services
  • Light on server resources
  • Easy setup for beginners
  • Premium version offers extra features like site migration
Cons
  • User interface might seem complex at first
  • Advanced features require the premium version

UpdraftPlus is a widely trusted WordPress backup plugin, known for its reliability and ease of use. Whether you’re looking to create a complete backup of your website or just want to save specific parts like plugins, themes, or databases, UpdraftPlus makes it simple. You can choose to store your backups locally on your computer or in the cloud, with support for services like Dropbox, Google Drive, and Amazon S3.

This plugin offers both scheduled and on-demand backups, giving you the flexibility to secure your site whenever needed. While the free version covers essential features, upgrading to the premium version unlocks advanced options like incremental backups and seamless site migration. The straightforward setup ensures that even beginners can quickly get started with protecting their WordPress site.

Who should use this?

UpdraftPlus is a good choice for anyone looking for a reliable backup plugin that’s easy to use. Whether you’re new to WordPress or have been using it for years, this plugin can help keep your site’s data safe, especially if you want to store backups in more than one place.

Visit UpdraftPlus


Jetpack VaultPress Backups

Pros
  • Real-time backups for every site change
  • Quick recovery, even during hosting issues
  • User-friendly interface
Cons
  • Can’t manually choose specific files or directories to back up
  • No option to select your storage provider
  • Could be expensive for large sites

Following UpdraftPlus, Jetpack VaultPress Backups offers a robust solution for WordPress and WooCommerce sites, prioritizing data safety and easy recovery. Unlike many backup options, it provides real-time backups stored securely in the Jetpack Cloud, ensuring that every change on your site is instantly protected.

If your site encounters an issue, you can restore it swiftly with just a few clicks, even if your hosting service is experiencing downtime. With automated backups and a 30-day archive, Jetpack VaultPress Backups delivers peace of mind, making it a dependable choice for those who require continuous site protection.

Who should use this?

This plugin is particularly well-suited for users who prioritize security and need a hands-off, automated backup solution, especially those managing online stores or frequently updated sites.

Visit Jetpack VaultPress Backups


BackWPup

Pros
  • Flexible backup options
  • Proven reliability
  • Easy scheduling for automated backups
Cons
  • Advanced features require the premium version

As you explore backup solutions, BackWPup stands out as a straightforward yet powerful plugin for WordPress users. It simplifies the backup process, allowing you to easily secure your entire site, with storage options ranging from Dropbox to S3 and FTP. Additionally, BackWPup includes restore capabilities in its free version, making it a comprehensive choice for safeguarding your site.

Designed with ease of use in mind, BackWPup caters to both beginners and experienced users alike. You can schedule automatic backups and choose your preferred storage locations, ensuring that your site’s data remains safe and accessible whenever you need it.

Who should use this?

BackWPup is great for anyone looking for a straightforward backup and restore plugin, whether you’re running a small blog or managing several sites.

Visit BackWPup



Best Social Media Plugins for WordPress (2024)

Let’s face it – getting your content noticed online can be tough. But what if you could make it easy for your visitors to share your posts across social media with just a click? That’s where the right social media plugins for WordPress come in. These tools don’t just add share buttons; they help you turn your website into a social hub, boosting engagement and expanding your reach effortlessly.

Whether you want to connect your site with platforms like Facebook, X (formerly Twitter), and Instagram, or need an all-in-one solution for social sharing and following, choosing the best WordPress social media plugins can make a world of difference. In this post, we’re looking into the top plugins that can help you maximize social interaction on your WordPress site, making it more shareable, more engaging, and ultimately more successful.


Social Share Icons & Social Share Buttons


For those who want more flexibility in adding social share icons to their website, this plugin is an excellent choice. It supports a wide range of social media platforms and gives you control over the placement of icons: before or after posts, in sidebars, or even as floating buttons. You can choose from 16 different design styles and add animations or pop-ups to make the buttons more engaging. The plugin also includes share counts and allows you to reorder the icons to fit your preferences.

The free version comes with all the essential features, while the premium version unlocks additional options like themed designs, mobile-optimized displays, and more advanced pop-up and lightbox features. You can also integrate email subscriptions, allowing visitors to easily subscribe to your site. Whether you stick with the free version or upgrade, this plugin provides a robust solution for social sharing.


Smash Balloon Social Photo Feed


If you want to showcase Instagram photos on your WordPress site, this plugin makes it simple. It allows you to display posts from one or multiple Instagram accounts in a fully responsive feed that looks great on any device. Customization is easy, with options to control the size, layout, and style of your feed. You can also add multiple feeds across different pages of your site, each with its own unique look and feel.

Beyond displaying photos, this plugin helps boost social engagement by encouraging visitors to follow your Instagram account directly from your website. It’s also a time-saver, automatically updating your site with new Instagram posts. The Pro version offers even more features, like hashtag feeds, carousels, and shoppable posts, making it a powerful tool for integrating Instagram into your WordPress site.


AddToAny Share Buttons


Enhancing your site’s social sharing capabilities is easy with AddToAny. This plugin offers versatile sharing options, allowing visitors to share your content across a wide range of platforms, including Facebook, Pinterest, LinkedIn, and WhatsApp. You can choose between standard and floating share buttons, customize their appearance, and even add follow buttons to connect with your social media profiles. The plugin also supports fast and official share counts to help you track engagement.

What’s great about AddToAny is its flexibility. It integrates seamlessly with Google Analytics for tracking shares and supports URL shorteners like Bitly. The plugin is optimized for performance, loading asynchronously to keep your site fast, and it works well with a variety of themes, WooCommerce, and multilingual sites. Plus, it’s mobile-ready, with sharp SVG icons and responsive design, making it a reliable choice for boosting social media interaction on your site.


Nextend Social Login and Register


Simplify the registration and login process on your WordPress site with Nextend Social Login and Register. This plugin lets your visitors log in using their social media profiles from platforms like Facebook, Google, and X (formerly Twitter). It eliminates the need for lengthy registration forms, making it easier for users to access your site without having to remember multiple usernames and passwords. The plugin integrates smoothly with your existing WordPress login and registration forms, and users can connect multiple social accounts to a single profile.

Nextend Social Login offers plenty of features in its free version, including one-click registration, customizable redirect URLs, and the ability to use social media profile pictures as avatars. If you need more, the Pro addon expands compatibility with plugins like WooCommerce and BuddyPress and supports additional providers such as LinkedIn and Amazon. With various options for customizing user roles and login layouts, this plugin is a versatile solution for enhancing user experience on your site.


Simple Share Buttons Adder


Sometimes, simplicity is key, and the Simple Share Buttons Adder plugin delivers just that. It allows you to quickly add social share buttons to your WordPress posts and pages without any hassle. The plugin offers a “Modern Share Buttons” tab, which lets you customize your buttons using CSS-based settings. This gives you control over the colors and hover effects of your buttons, allowing them to blend seamlessly with your site’s design. You can also upload and use your own custom images as share buttons for a personalized touch.

The user-friendly interface makes it easy to manage and position your share buttons exactly where you want them. Whether you’re looking to increase social engagement or just need a straightforward way to add share buttons, Simple Share Buttons Adder provides a flexible and reliable solution. For those who want more, the plugin’s dedicated website offers additional resources and customization options.


Social Chat


Engage your visitors directly through WhatsApp with the Social Chat plugin. This handy tool allows customers to start a conversation with you via WhatsApp or WhatsApp Business by simply clicking a button on your site. You can pre-set the first message to streamline communication, making it easier for visitors to reach out. With WhatsApp being one of the most popular messaging apps globally, this plugin can help turn casual visitors into leads by offering them a familiar and direct way to contact you.

The plugin is highly customizable, allowing you to choose different button layouts, colors, and even add a contact information box with a personalized message. For WooCommerce users, there’s a feature to add a WhatsApp button on product pages, making customer support even more accessible. The premium version expands on these features with multiple WhatsApp numbers, custom chatboxes, and integration with Google Analytics, offering a comprehensive solution for customer communication via WhatsApp.


Simple Social Media Share Buttons


When you need a flexible and easy-to-use social sharing solution, the Simple Social Media Share Buttons plugin has you covered. It allows you to place social media buttons in various locations on your site, such as inline with your content, in sidebars, on images, or even as popups and fly-ins. With support for popular platforms like Facebook, WhatsApp, Twitter, LinkedIn, and Pinterest, this plugin ensures your content is easily shareable across the most important social networks.

The plugin is fully customizable, offering options to hide buttons on mobile devices, display share counts, and apply animations to grab your visitors’ attention. It’s also compatible with the NextGEN Gallery plugin, allowing you to add share buttons to your photo galleries. The premium version adds even more features, including advanced positioning, color customization, and additional display options like popups and fly-ins, making it a versatile choice for boosting social engagement on your site.


Social Media Feather


If you’re looking for a lightweight and efficient way to add social sharing and following buttons to your WordPress site, Social Media Feather is a great option. This plugin focuses on simplicity and performance, allowing you to quickly integrate social media buttons into your posts, pages, and custom post types without slowing down your site. It supports all major platforms, including Facebook, Twitter, Pinterest, LinkedIn, and more, helping you extend the reach of your content across various social networks.

One of the standout features of Social Media Feather is its full support for Retina and high-resolution displays, ensuring that your buttons look sharp and professional on any device. The plugin also offers widgets and shortcodes for greater customization, allowing you to control the appearance and placement of your social buttons. For those who want to keep things simple while still providing essential social sharing features, Social Media Feather is a reliable and effective solution.


Generating Unique Random Numbers In JavaScript Using Sets

Amejimaobari Ollornwi

2024-08-26T15:00:00+00:00

JavaScript comes with a lot of built-in functions that allow you to carry out so many different operations. One of these built-in functions is the Math.random() method, which generates a random floating-point number that can then be manipulated into integers.

However, if you wish to generate a series of unique random numbers and create more random effects in your code, you will need to come up with a custom solution for yourself because the Math.random() method on its own cannot do that for you.

In this article, we’re going to be learning how to circumvent this issue and generate a series of unique random numbers using the Set object in JavaScript, which we can then use to create more randomized effects in our code.

Note: This article assumes that you know how to generate random numbers in JavaScript, as well as how to work with sets and arrays.

Generating a Unique Series of Random Numbers

One of the ways to generate a unique series of random numbers in JavaScript is by using Set objects. The reason why we’re making use of sets is because the elements of a set are unique. We can iteratively generate and insert random integers into sets until we get the number of integers we want.

And since sets do not allow duplicate elements, they are going to serve as a filter to remove all of the duplicate numbers that are generated and inserted into them so that we get a set of unique integers.
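For example, duplicates passed to a Set are silently dropped:

const nums = new Set([3, 7, 3, 7, 9]);
console.log(nums.size); // → 3
console.log([...nums]); // → [3, 7, 9]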

Here’s how we are going to approach the work:

  1. Create a Set object.
  2. Define how many random numbers to produce and what range of numbers to use.
  3. Generate each random number and immediately insert the numbers into the Set until the Set is filled with a certain number of them.

The following is a quick example of how the code comes together:

function generateRandomNumbers(count, min, max) {
  // 1: Create a `Set` object
  let uniqueNumbers = new Set();
  while (uniqueNumbers.size < count) {
    // 2: Generate each random number
    uniqueNumbers.add(Math.floor(Math.random() * (max - min + 1)) + min);
  }
  // 3: Immediately insert the numbers into the Set...
  return Array.from(uniqueNumbers);
}
// ...set how many numbers to generate from a given range
console.log(generateRandomNumbers(5, 5, 10));

What the code does is create a new Set object and then generate and add the random numbers to the set until our desired number of integers has been included in the set. The reason why we’re returning an array is because they are easier to work with.

One thing to note, however, is that the number of integers you want to generate (represented by count in the code) cannot exceed the number of unique integers in your range (represented by max - min + 1; a range from 5 to 10, for example, contains only six). Otherwise, the code will run forever. You can add an if statement to the code to ensure that this is always the case:

function generateRandomNumbers(count, min, max) {
  // if statement checks that `count` does not exceed the size of the range
  if (count > max - min + 1) {
    return "count cannot be greater than the number of unique values in the range";
  } else {
    let uniqueNumbers = new Set();
    while (uniqueNumbers.size < count) {
      uniqueNumbers.add(Math.floor(Math.random() * (max - min + 1)) + min);
    }
    return Array.from(uniqueNumbers);
  }
}
console.log(generateRandomNumbers(5, 5, 10));

Using the Series of Unique Random Numbers as Array Indexes

It is one thing to generate a series of random numbers. It’s another thing to use them.

Being able to use a series of random numbers with arrays unlocks so many possibilities: you can use them in shuffling playlists in a music app, randomly sampling data for analysis, or, as I did, shuffling the tiles in a memory game.

Let’s take the code from the last example and work off of it to return random letters of the alphabet. First, we’ll construct an array of letters:

const englishAlphabets = [
  'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 
  'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'
];

// rest of code

Then we map the letters in the range of numbers:

const englishAlphabets = [
  'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 
  'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'
];

// generateRandomNumbers()

const randomAlphabets = randomIndexes.map((index) => englishAlphabets[index]);

In the original code, the generateRandomNumbers() function is logged to the console. This time, we’ll construct a new variable that calls the function so it can be consumed by randomAlphabets:

const englishAlphabets = [
  'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 
  'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'
];

// generateRandomNumbers()

const randomIndexes = generateRandomNumbers(5, 0, 25);
const randomAlphabets = randomIndexes.map((index) => englishAlphabets[index]);

Now we can log the output to the console like we did before to see the results:

const englishAlphabets = [
  'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 
  'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'
];

// generateRandomNumbers()

const randomIndexes = generateRandomNumbers(5, 0, 25);
const randomAlphabets = randomIndexes.map((index) => englishAlphabets[index]);
console.log(randomAlphabets);

And, when we put the generateRandomNumbers() function definition back in, we get the final code:

const englishAlphabets = [
  'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 
  'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'
];
function generateRandomNumbers(count, min, max) {
  if (count > max - min + 1) {
    return "count cannot be greater than the number of unique values in the range";
  } else {
    let uniqueNumbers = new Set();
    while (uniqueNumbers.size < count) {
      uniqueNumbers.add(Math.floor(Math.random() * (max - min + 1)) + min);
    }
    return Array.from(uniqueNumbers);
  }
}

const randomIndexes = generateRandomNumbers(5, 0, 25);
const randomAlphabets = randomIndexes.map((index) => englishAlphabets[index]);
console.log(randomAlphabets);

So, in this example, we created a new array of alphabets by randomly selecting some letters in our englishAlphabets array.

You can pass in a count argument of englishAlphabets.length to the generateRandomNumbers function if you desire to shuffle the elements in the englishAlphabets array instead. This is what I mean:

generateRandomNumbers(englishAlphabets.length, 0, 25);

Wrapping Up

In this article, we’ve discussed how to create randomization in JavaScript by covering how to generate a series of unique random numbers, how to use these random numbers as indexes for arrays, and also some practical applications of randomization.

The best way to learn anything in software development is by consuming content and reinforcing whatever knowledge you’ve gotten from that content by practicing. So, don’t stop here. Run the examples in this tutorial (if you haven’t done so), play around with them, come up with your own unique solutions, and also don’t forget to share your good work. Ciao!


Regexes Got Good: The History And Future Of Regular Expressions In JavaScript

Steven Levithan

2024-08-20T15:00:00+00:00

Modern JavaScript regular expressions have come a long way compared to what you might be familiar with. Regexes can be an amazing tool for searching and replacing text, but they have a longstanding reputation (perhaps outdated, as I’ll show) for being difficult to write and understand.

This is especially true in JavaScript-land, where regexes languished for many years, comparatively underpowered compared to their more modern counterparts in PCRE, Perl, .NET, Java, Ruby, C++, and Python. Those days are over.

In this article, I’ll recount the history of improvements to JavaScript regexes (spoiler: ES2018 and ES2024 changed the game), show examples of modern regex features in action, introduce you to a lightweight JavaScript library that makes JavaScript stand alongside or surpass other modern regex flavors, and end with a preview of active proposals that will continue to improve regexes in future versions of JavaScript (with some of them already working in your browser today).

The History of Regular Expressions in JavaScript

ECMAScript 3, standardized in 1999, introduced Perl-inspired regular expressions to the JavaScript language. Although it got enough things right to make regexes pretty useful (and mostly compatible with other Perl-inspired flavors), there were some big omissions, even then. And while JavaScript waited 10 years for its next standardized version with ES5, other programming languages and regex implementations added useful new features that made their regexes more powerful and readable.

But that was then.

Did you know that nearly every new version of JavaScript has made at least minor improvements to regular expressions?

Let’s take a look at them.

Don’t worry if it’s hard to understand what some of the following features mean — we’ll look more closely at several of the key features afterward.

  • ES5 (2009) fixed unintuitive behavior by creating a new object every time regex literals are evaluated and allowed regex literals to use unescaped forward slashes within character classes (/[/]/).
  • ES6/ES2015 added two new regex flags: y (sticky), which made it easier to use regexes in parsers, and u (unicode), which added several significant Unicode-related improvements along with strict errors. It also added the RegExp.prototype.flags getter, support for subclassing RegExp, and the ability to copy a regex while changing its flags.
  • ES2018 was the edition that finally made JavaScript regexes pretty good. It added the s (dotAll) flag, lookbehind, named capture, and Unicode properties (via \p{...} and \P{...}, which require ES6’s flag u). All of these are extremely useful features, as we’ll see.
  • ES2020 added the string method matchAll, which we’ll also see more of shortly.
  • ES2022 added flag d (hasIndices), which provides start and end indices for matched substrings (demonstrated below).
  • And finally, ES2024 added flag v (unicodeSets) as an upgrade to ES6’s flag u. The v flag adds a set of multicharacter “properties of strings” to \p{...}, multicharacter elements within character classes via \p{...} and \q{...}, nested character classes, set subtraction [A--B] and intersection [A&&B], and different escaping rules within character classes. It also fixed case-insensitive matching for Unicode properties within negated sets [^...].
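As a quick taste of one of these, here’s flag d exposing the start and end indices of a match and its named group:

const match = /(?<word>\w+)/d.exec('Hello there');
console.log(match.indices[0]);          // → [0, 5]
console.log(match.indices.groups.word); // → [0, 5]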

As for whether you can safely use these features in your code today, the answer is yes! The latest of these features, flag v, is supported in Node.js 20 and 2023-era browsers. The rest are supported in 2021-era browsers or earlier.

Each edition from ES2019 to ES2023 also added additional Unicode properties that can be used via \p{...} and \P{...}. And to be a completionist, ES2021 added string method replaceAll — although, when given a regex, the only difference from ES3’s replace is that it throws if not using flag g.

Aside: What Makes a Regex Flavor Good?

With all of these changes, how do JavaScript regular expressions now stack up against other flavors? There are multiple ways to think about this, but here are a few key aspects:

  • Performance.
    This is an important aspect but probably not the main one since mature regex implementations are generally pretty fast. JavaScript is strong on regex performance (at least considering V8’s Irregexp engine, used by Node.js, Chromium-based browsers, and even Firefox; and JavaScriptCore, used by Safari), but it uses a backtracking engine that is missing any syntax for backtracking control — a major limitation that makes ReDoS vulnerability more common.
  • Support for advanced features that handle common or important use cases.
    Here, JavaScript stepped up its game with ES2018 and ES2024. JavaScript is now best in class for some features like lookbehind (with its infinite-length support) and Unicode properties (with multicharacter “properties of strings,” set subtraction and intersection, and script extensions). These features are either not supported or not as robust in many other flavors.
  • Ability to write readable and maintainable patterns.
    Here, native JavaScript has long been the worst of the major flavors since it lacks the x (“extended”) flag that allows insignificant whitespace and comments. Additionally, it lacks regex subroutines and subroutine definition groups (from PCRE and Perl), a powerful set of features that enable writing grammatical regexes that build up complex patterns via composition.

So, it’s a bit of a mixed bag.

JavaScript regexes have become exceptionally powerful, but they’re still missing key features that could make regexes safer, more readable, and more maintainable (all of which hold some people back from using this power).

The good news is that all of these holes can be filled by a JavaScript library, which we’ll see later in this article.

Using JavaScript’s Modern Regex Features

Let’s look at a few of the more useful modern regex features that you might be less familiar with. You should know in advance that this is a moderately advanced guide; if you’re relatively new to regex, you may want to work through an introductory tutorial before continuing.

Named Capture

Often, you want to do more than just check whether a regex matches — you want to extract substrings from the match and do something with them in your code. Named capturing groups allow you to do this in a way that makes your regexes and code more readable and self-documenting.

The following example matches a record with two date fields and captures the values:

const record = 'Admitted: 2024-01-01\nReleased: 2024-01-03';
const re = /^Admitted: (?<admitted>\d{4}-\d{2}-\d{2})\nReleased: (?<released>\d{4}-\d{2}-\d{2})$/;
const match = record.match(re);
console.log(match.groups);
/* → {
  admitted: '2024-01-01',
  released: '2024-01-03'
} */

Don’t worry — although this regex might be challenging to understand, later, we’ll look at a way to make it much more readable. The key things here are that named capturing groups use the syntax (?<name>...), and their results are stored on the groups object of matches.

You can also use named backreferences to rematch whatever a named capturing group matched via \k<name>, and you can use the values within search and replace as follows:

// Change 'FirstName LastName' to 'LastName, FirstName'
const name = 'Shaquille Oatmeal';
name.replace(/(?<first>\w+) (?<last>\w+)/, '$<last>, $<first>');
// → 'Oatmeal, Shaquille'

For advanced regexers who want to use named backreferences within a replacement callback function, the groups object is provided as the last argument. Here’s a fancy example:

function fahrenheitToCelsius(str) {
  const re = /(?<degrees>-?\d+(\.\d+)?)F\b/g;
  return str.replace(re, (...args) => {
    const groups = args.at(-1);
    return Math.round((groups.degrees - 32) * 5/9) + 'C';
  });
}
fahrenheitToCelsius('98.6F');
// → '37C'
fahrenheitToCelsius('May 9 high is 40F and low is 21F');
// → 'May 9 high is 4C and low is -6C'

Lookbehind

Lookbehind (introduced in ES2018) is the complement to lookahead, which has always been supported by JavaScript regexes. Lookahead and lookbehind are assertions (similar to ^ for the start of a string or \b for word boundaries) that don’t consume any characters as part of the match. Lookbehinds succeed or fail based on whether their subpattern can be found immediately before the current match position.

For example, the following regex uses a lookbehind (?<=...) to match the word “cat” (only the word “cat”) if it’s preceded by “fat ”:

const re = /(?<=fat )cat/g;
'cat, fat cat, brat cat'.replace(re, 'pigeon');
// → 'cat, fat pigeon, brat cat'

You can also use negative lookbehind — written as (?<!...) — to invert the assertion. That would make the regex match any instance of “cat” that’s not preceded by “fat ”.

const re = /(?<!fat )cat/g;
'cat, fat cat, brat cat'.replace(re, 'pigeon');
// → 'pigeon, fat cat, brat pigeon'

JavaScript’s implementation of lookbehind is one of the very best (matched only by .NET). Whereas other regex flavors have inconsistent and complex rules for when and whether they allow variable-length patterns inside lookbehind, JavaScript allows you to look behind for any subpattern.
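For example, the variable-length dollar amount inside this lookbehind would be rejected by many other flavors, but JavaScript is happy with it:

// Extract the cents, looking behind past a variable-length dollar amount
/(?<=\$\d+\.)\d+/.exec('Price: $31.50')[0];
// → '50'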

The matchAll Method

JavaScript’s String.prototype.matchAll was added in ES2020 and makes it easier to operate on regex matches in a loop when you need extended match details. Although other solutions were possible before, matchAll is often easier, and it avoids gotchas, such as the need to guard against infinite loops when looping over the results of regexes that might return zero-length matches.

Since matchAll returns an iterator (rather than an array), it’s easy to use it in a for...of loop.

const str = 'abcd'; // any sample string works here
const re = /(?<char1>\w)(?<char2>\w)/g;
for (const match of str.matchAll(re)) {
  const {char1, char2} = match.groups;
  // Print each complete match and matched subpatterns
  console.log(`Matched "${match[0]}" with "${char1}" and "${char2}"`);
}

Note: matchAll requires its regexes to use flag g (global). Also, as with other iterators, you can get all of its results as an array using Array.from or array spreading.

const matches = [...str.matchAll(/./g)];

Unicode Properties

Unicode properties (added in ES2018) give you powerful control over multilingual text, using the syntax \p{...} and its negated version \P{...}. There are hundreds of different properties you can match, which cover a wide variety of Unicode categories, scripts, script extensions, and binary properties.

Note: For more details, check out the documentation on MDN.

Unicode properties require using the flag u (unicode) or v (unicodeSets).

Flag v

Flag v (unicodeSets) was added in ES2024 and is an upgrade to flag u — you can’t use both at the same time. It’s a best practice to always use one of these flags to avoid silently introducing bugs via the default Unicode-unaware mode. The decision on which to use is fairly straightforward. If you’re okay with only supporting environments with flag v (Node.js 20 and 2023-era browsers), then use flag v; otherwise, use flag u.

Flag v adds support for several new regex features, with the coolest probably being set subtraction and intersection. This allows using A--B (within character classes) to match strings in A but not in B or using A&&B to match strings in both A and B. For example:

// Matches all Greek symbols except the letter 'π'
/[\p{Script_Extensions=Greek}--π]/v

// Matches only Greek letters
/[\p{Script_Extensions=Greek}&&\p{Letter}]/v

For more details about flag v, including its other new features, check out this explainer from the Google Chrome team.

A Word on Matching Emoji

Emoji are 🤩🔥😎👌, but how emoji get encoded in text is complicated. If you’re trying to match them with a regex, it’s important to be aware that a single emoji can be composed of one or many individual Unicode code points. Many people (and libraries!) who roll their own emoji regexes miss this point (or implement it poorly) and end up with bugs.

The following details for the emoji “👩🏻‍🏫” (Woman Teacher: Light Skin Tone) show just how complicated emoji can be:

// Code unit length
'👩🏻‍🏫'.length;
// → 7
// Each astral code point (above \uFFFF) is divided into high and low surrogates

// Code point length
[...'👩🏻‍🏫'].length;
// → 4
// These four code points are: \u{1F469} \u{1F3FB} \u{200D} \u{1F3EB}
// \u{1F469} combined with \u{1F3FB} is '👩🏻'
// \u{200D} is a Zero-Width Joiner
// \u{1F3EB} is '🏫'

// Grapheme cluster length (user-perceived characters)
[...new Intl.Segmenter().segment('👩🏻‍🏫')].length;
// → 1

Fortunately, JavaScript added an easy way to match any individual, complete emoji via \p{RGI_Emoji}. Since this is a fancy “property of strings” that can match more than one code point at a time, it requires ES2024’s flag v.
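For example (both emoji here are in the RGI set):

'I ❤️ my 👩🏻‍🏫'.match(/\p{RGI_Emoji}/gv);
// → ['❤️', '👩🏻‍🏫']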

If you want to match emojis in environments without v support, check out the excellent libraries emoji-regex and emoji-regex-xs.

Making Your Regexes More Readable, Maintainable, and Resilient

Despite the improvements to regex features over the years, native JavaScript regexes of sufficient complexity can still be outrageously hard to read and maintain.

ES2018’s named capture was a great addition that made regexes more self-documenting, and ES6’s String.raw tag allows you to avoid escaping all your backslashes when using the RegExp constructor. But for the most part, that’s it in terms of readability.

However, there’s a lightweight and high-performance JavaScript library named regex (by yours truly) that makes regexes dramatically more readable. It does this by adding key missing features from Perl-Compatible Regular Expressions (PCRE) and outputting native JavaScript regexes. You can also use it as a Babel plugin, which means that regex calls are transpiled at build time, so you get a better developer experience without users paying any runtime cost.

PCRE is a popular C library used by PHP for its regex support, and it’s available in countless other programming languages and tools.

Let’s briefly look at some of the ways the regex library, which provides a template tag named regex, can help you write complex regexes that are actually understandable and maintainable by mortals. Note that all of the new syntax described below works identically in PCRE.

Insignificant Whitespace and Comments

By default, regex allows you to freely add whitespace and line comments (starting with #) to your regexes for readability.

import {regex} from 'regex';
const date = regex`
  # Match a date in YYYY-MM-DD format
  (?<year>  \d{4}) - # Year part
  (?<month> \d{2}) - # Month part
  (?<day>   \d{2})   # Day part
`;

This is equivalent to using PCRE’s xx flag.

Subroutines and Subroutine Definition Groups

Subroutines are written as \g<name> (where name refers to a named group), and they treat the referenced group as an independent subpattern that they try to match at the current position. This enables subpattern composition and reuse, which improves readability and maintainability.

For example, the following regex matches an IPv4 address such as “192.168.12.123”:

import {regex} from 'regex';
const ipv4 = regex`\b
  (?<byte> 25[0-5] | 2[0-4]\d | 1\d\d | [1-9]?\d)
  # Match the remaining 3 dot-separated bytes
  (\. \g<byte>){3}
\b`;

You can take this even further by defining subpatterns for use by reference only via subroutine definition groups. Here’s an example that improves the regex for admittance records that we saw earlier in this article:

const record = 'Admitted: 2024-01-01\nReleased: 2024-01-03';
const re = regex`
  ^ Admitted: (?<admitted> \g<date>) \n
    Released: (?<released> \g<date>) $

  (?(DEFINE)
    (?<date>  \g<year>-\g<month>-\g<day>)
    (?<year>  \d{4})
    (?<month> \d{2})
    (?<day>   \d{2})
  )
`;
const match = record.match(re);
console.log(match.groups);
/* → {
  admitted: '2024-01-01',
  released: '2024-01-03'
} */

A Modern Regex Baseline

regex includes the v flag by default, so you never forget to turn it on. And in environments without native v, it automatically switches to flag u while applying v’s escaping rules, so your regexes are forward and backward-compatible.

It also implicitly enables the emulated flags x (insignificant whitespace and comments) and n (“named capture only” mode) by default, so you don’t have to continually opt into their superior modes. And since it’s a raw string template tag, you don’t have to escape your backslashes like with the RegExp constructor.

Atomic Groups and Possessive Quantifiers Can Prevent Catastrophic Backtracking

Atomic groups and possessive quantifiers are another powerful set of features added by the regex library. Although they’re primarily about performance and resilience against catastrophic backtracking (also known as ReDoS or “regular expression denial of service,” a serious issue where certain regexes can take forever when searching particular, not-quite-matching strings), they can also help with readability by allowing you to write simpler patterns.
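Here’s a minimal sketch (the pattern is contrived purely to show the syntax; ++ is a possessive quantifier):

import {regex} from 'regex';

// /^(a+)+$/ is a classic catastrophic-backtracking pattern: on a long run
// of 'a's followed by 'b', a backtracking engine tries exponentially many
// ways to split the 'a's between the quantifiers. A possessive quantifier
// gives nothing back, so the non-match fails immediately:
const re = regex`^(a++)+$`;
re.test('a'.repeat(50) + 'b'); // → false, without the meltdown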

Note: You can learn more in the regex documentation.

What’s Next? Upcoming JavaScript Regex Improvements

There are a variety of active proposals for improving regexes in JavaScript. Below, we’ll look at the three that are well on their way to being included in future editions of the language.

Duplicate Named Capturing Groups

This is a Stage 3 (nearly finalized) proposal. Even better is that, as of recently, it works in all major browsers.

When named capturing was first introduced, it required that all (?<name>...) captures use unique names. However, there are cases when you have multiple alternate paths through a regex, and it would simplify your code to reuse the same group names in each alternative.

For example:

/(?<year>\d{4})-\d\d|\d\d-(?<year>\d{4})/

This proposal enables exactly this, preventing a “duplicate capture group name” error with this example. Note that names must still be unique within each alternative path.

Pattern Modifiers (aka Flag Groups)

This is another Stage 3 proposal. It’s already supported in Chrome/Edge 125 and Opera 111, and it’s coming soon for Firefox. No word yet on Safari.

Pattern modifiers use (?ims:...), (?-ims:...), or (?im-s:...) to turn the flags i, m, and s on or off for only certain parts of a regex.

For example:

/hello-(?i:world)/
// Matches 'hello-WORLD' but not 'HELLO-WORLD'

Escape Regex Special Characters with RegExp.escape

This proposal recently reached Stage 3 and has been a long time coming. It isn’t yet supported in any major browsers. The proposal does what it says on the tin, providing the function RegExp.escape(str), which returns the string with all regex special characters escaped so you can match them literally.

If you need this functionality today, the most widely-used package (with more than 500 million monthly npm downloads) is escape-string-regexp, an ultra-lightweight, single-purpose utility that does minimal escaping. That’s great for most cases, but if you need assurance that your escaped string can safely be used at any arbitrary position within a regex, escape-string-regexp recommends the regex library that we’ve already looked at in this article. The regex library uses interpolation to escape embedded strings in a context-aware way.
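Here’s a sketch of how it’s intended to work per the proposal (it won’t run natively in today’s browsers):

const search = 'file.(1).txt'; // contains the regex special characters . ( )
const re = new RegExp(RegExp.escape(search));
re.test('file.(1).txt'); // → true
re.test('fileX(1)Ytxt'); // → false, since the dots are matched literally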

Conclusion

So there you have it: the past, present, and future of JavaScript regular expressions.

If you want to journey even deeper into the lands of regex, check out Awesome Regex for a list of the best regex testers, tutorials, libraries, and other resources. And for a fun regex crossword puzzle, try your hand at regexle.

May your parsing be prosperous and your regexes be readable.


How To Build A Multilingual Website With Nuxt.js

Tim Benniks

2024-08-01T15:00:00+00:00

This article is sponsored by Hygraph

Internationalization, often abbreviated as i18n, is the process of designing and developing software applications in a way that they can be easily adapted to various spoken languages like English, German, French, and more without requiring substantial changes to the codebase. It involves moving away from hardcoded strings and techniques for translating text, formatting dates and numbers, and handling different character encodings, among other tasks.

Internationalization can give users the choice to access a given website or application in their native language, which can have a positive impression on them, making it crucial for reaching a global audience.

What We’re Making

In this tutorial, we’re making a website that puts these i18n pieces together using a combination of libraries and a UI framework. You’ll want to have intermediate proficiency with JavaScript, Vue, and Nuxt to follow along. Throughout this article, we will learn by examples and incrementally build a multilingual Nuxt website. Together, we will learn how to provide i18n support for different languages, lazy-load locale messages, and switch locale on runtime.

After that, we will explore features like interpolation, pluralization, and date/time translations.

And finally, we will fetch dynamic localized content from an API server using Hygraph as our API server to get localized content. If you do not have a Hygraph account please create one for free before jumping in.

As a final detail, we will use Vuetify as our UI framework, but please feel free to use another framework if you want. The final code for what we’re building is published in a GitHub repository for reference. And finally, you can also take a look at the final result in a live demo.

The nuxt-i18n Library

nuxt-i18n is a library for implementing internationalization in Nuxt.js applications, and it’s what we will be using in this tutorial. The library is built on top of Vue I18n, the de facto standard library for implementing i18n in Vue applications.

What makes nuxt-i18n ideal for our work is that it provides the comprehensive set of features included in Vue I18n while adding more functionalities that are specific to Nuxt, like lazy loading locale messages, route generation and redirection for different locales, SEO metadata per locale, locale-specific domains, and more.

Initial Setup

Start a new Nuxt.js project and set it up with a UI framework of your choice. Again, I will be using Vuetify to establish the interface for this tutorial.

Let us add a basic layout for our website and set up some sample Vue templates.

First, a “Home” page:

<!-- pages/index.vue -->
<template>
  <div>
    <h1>Home</h1>
    <p>This is the home page description</p>
  </div>
</template>

Next, an “About” page:

<!-- pages/about.vue -->
<template>
  <div>
    <h1>About</h1>
    <p>This is the about page description</p>
  </div>
</template>

This gives us a bit of a boilerplate that we can integrate our i18n work into.

Translating Plain Text

The page templates look good, but notice how the text is hardcoded. As far as i18n goes, hardcoded content is difficult to translate into different locales. That is where the nuxt-i18n library comes in, providing the language-specific strings we need for the Vue components in the templates.

We’ll start by installing the library via the command line:

npx nuxi@latest module add i18n

Inside the nuxt.config.ts file, we need to ensure that we have @nuxtjs/i18n inside the modules array. We can use the i18n property to provide module-specific configurations.

// nuxt.config.ts
export default defineNuxtConfig({
  // ...
  modules: [
    ...
    "@nuxtjs/i18n",
    // ...
  ],
  i18n: {
    // nuxt-i18n module configurations here
  }
  // ...
});

Since the nuxt-i18n library is built on top of the Vue I18n library, we can utilize its features in our Nuxt application as well. Let us create a new file, i18n.config.ts, which we will use to provide all vue-i18n configurations.

// i18n.config.ts
export default defineI18nConfig(() => ({
  legacy: false,
  locale: "en",
  messages: {
    en: {
      homePage: {
        title: "Home",
        description: "This is the home page description."
      },
      aboutPage: {
        title: "About",
        description: "This is the about page description."
      },
    },
  },
}));

Here, we have specified internationalization configurations, like using the en locale, and added messages for the en locale. These messages can be used inside the markup in the templates we made with the help of a $t function from Vue I18n.

Next, we need to link the i18n.config.ts configurations in our Nuxt config file.

// nuxt.config.ts
export default defineNuxtConfig({
  ...
  i18n: {
    vueI18n: "./i18n.config.ts"
  }
  ...
});

Now, we can use the $t function in our components — as shown below — to parse strings from our internationalization configurations.

Note: There’s no need to import $t since we have Nuxt’s default auto-import functionality.



  
{{ $t("homePage.title") }} {{ $t("homePage.description") }}

Page with a title and description


Lazy Loading Translations

We have the title and description served from the configurations. Next, we can add more languages to the same config. For example, here’s how we can establish translations for English (en), French (fr), and Spanish (es):

// i18n.config.ts
export default defineI18nConfig(() => ({
  legacy: false,
  locale: "en",
  messages: {
    en: {
      // English
    },
    fr: {
      // French
    },
    es: {
      // Spanish
    }
  },
}));

For a production website with a lot of content that needs translating, it would be unwise to bundle all of the messages from different locales in the main bundle. Instead, we should use the nuxt-i18n lazy loading feature to asynchronously load only the required language rather than all of them at once. Also, having messages for all locales in a single configuration file can become difficult to manage over time, and breaking them up like this makes things easier to find.

Let’s set up the lazy loading feature in nuxt.config.ts:

// etc.
  i18n: {
    vueI18n: "./i18n.config.ts",
    lazy: true,
    langDir: "locales",
    locales: [
      {
        code: "en",
        file: "en.json",
        name: "English",
      },
      {
        code: "es",
        file: "es.json",
        name: "Spanish",
      },
      {
        code: "fr",
        file: "fr.json",
        name: "French",
      },
    ],
    defaultLocale: "en",
    strategy: "no_prefix",
  },

// etc.

This enables lazy loading and specifies the locales directory that will contain our locale files. The locales array configuration specifies from which files Nuxt.js should pick up messages for a specific language.

Now, we can create individual files for each language. I’ll drop all three of them right here:


// locales/en.json
{
  "homePage": {
    "title": "Home",
    "description": "This is the home page description."
  },
  "aboutPage": {
    "title": "About",
    "description": "This is the about page description."
  },
  "selectLocale": {
    "label": "Select Locale"
  },
  "navbar": {
    "homeButton": "Home",
    "aboutButton": "About"
  }
}

// locales/fr.json
{
  "homePage": {
    "title": "Bienvenue sur la page d'accueil",
    "description": "Ceci est la description de la page d'accueil."
  },
  "aboutPage": {
    "title": "À propos de nous",
    "description": "Ceci est la description de la page à propos de nous."
  },
  "selectLocale": {
    "label": "Sélectionner la langue"
  },
  "navbar": {
    "homeButton": "Accueil",
    "aboutButton": "À propos"
  }
}

// locales/es.json
{
  "homePage": {
    "title": "Bienvenido a la página de inicio",
    "description": "Esta es la descripción de la página de inicio."
  },
  "aboutPage": {
    "title": "Sobre nosotros",
    "description": "Esta es la descripción de la página sobre nosotros."
  },
  "selectLocale": {
    "label": "Seleccione el idioma"
  },
  "navbar": {
    "homeButton": "Inicio",
    "aboutButton": "Acerca de"
  }
}

We have set up lazy loading, added multiple languages to our application, and moved our locale messages to separate files. The user gets the right locale for the right message, and the locale messages are kept in a maintainable manner inside the code base.

Switching Between Languages

We have different locales, but to see them in action, we will build a component that can be used to switch between the available locales.



<script setup>
const { locale, locales, setLocale } = useI18n();

const language = computed({
  get: () => locale.value,
  set: (value) => setLocale(value),
});
</script>

<template>
  <label>
    {{ $t("selectLocale.label") }}
    <select v-model="language">
      <option v-for="loc in locales" :key="loc.code" :value="loc.code">
        {{ loc.name }}
      </option>
    </select>
  </label>
</template>

This component uses the useI18n hook provided by the Vue I18n library and a computed property language to get and set the global locale from a select input. To make this even more like a real-world website, we’ll include a small navigation bar that links up all of the website’s pages.



  
    
{{ $t("navbar.homeButton") }} {{ $t("navbar.aboutButton") }}

That’s it! Now, we can switch between languages on the fly.

Showing the current state of the homepage.


Homepage showing a French translation.


We have a basic layout, but I thought we’d take this a step further and build a playground page we can use to explore more i18n features that are pretty useful when building a multilingual website.

Interpolation and Pluralization

Interpolation and pluralization are internationalization techniques for handling dynamic content and grammatical variations across different languages. Interpolation allows developers to insert dynamic variables or expressions into translated strings. Pluralization addresses the complexities of plural forms in languages by selecting the appropriate grammatical form based on numeric values. With the help of interpolation and pluralization, we can create more natural and accurate translations.

To use pluralization in our Nuxt app, we’ll first add a configuration to the English locale file.

// locales/en.json
{
  // etc.
  "playgroundPage": {
    "pluralization": {
      "title": "Pluralization",
      "apple": "No Apple | One Apple | {count} Apples",
      "addApple": "Add"
    }
  }
  // etc.
}

The pluralization configuration set up for the key apple defines an output — No Apple — if a count of 0 is passed to it, a second output — One Apple — if a count of 1 is passed, and a third — 2 Apples, 3 Apples, and so on — if the count passed in is greater than 1.
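Given the messages above, here is how the calls resolve with different counts (following the same call signature used in the component below):

$t("playgroundPage.pluralization.apple", { count: 0 }) // "No Apple"
$t("playgroundPage.pluralization.apple", { count: 1 }) // "One Apple"
$t("playgroundPage.pluralization.apple", { count: 5 }) // "5 Apples"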

Here is how we can use it in a component. Whenever you click the add button, you will see pluralization in action as the strings change.



<script setup>
let appleCount = ref(0);
const addApple = () => {
  appleCount.value += 1;
};
</script>

<template>
  <section>
    <h2>{{ $t("playgroundPage.pluralization.title") }}</h2>

    <p>
      {{ $t("playgroundPage.pluralization.apple", { count: appleCount }) }}
    </p>
    <button @click="addApple">
      {{ $t("playgroundPage.pluralization.addApple") }}
    </button>
  </section>
</template>

To use interpolation in our Nuxt app, first, add a configuration in the English locale file:

// locales/en.json
{
  ...
  "playgroundPage": {
    ... 
    "interpolation": {
      "title": "Interpolation",
      "sayHello": "Hello, {name}",
      "hobby": "My favourite hobby is {0}.",
      "email": "You can reach out to me at {account}{'@'}{domain}.com"
    },
    // etc. 
  }
  // etc.
}

The sayHello message expects an object with a name key when invoked — a process known as named interpolation.

The hobby message expects an array and picks up its 0th element, which is known as list interpolation.

The email message expects an object with the keys account and domain and joins the two with a literal "@" string. This is known as literal interpolation.
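Given those messages, here is what each call resolves to:

$t("playgroundPage.interpolation.sayHello", { name: "Jane" })
// "Hello, Jane"

$t("playgroundPage.interpolation.hobby", ["Football", "Cricket"])
// "My favourite hobby is Football."

$t("playgroundPage.interpolation.email", { account: "johndoe", domain: "hygraph" })
// "You can reach out to me at johndoe@hygraph.com"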

Below is an example of how to use it in the Vue components:



  
    
    
      
        {{ $t("playgroundPage.interpolation.title") }}
      
      
        

{{ $t("playgroundPage.interpolation.sayHello", { name: "Jane", }) }}

{{ $t("playgroundPage.interpolation.hobby", ["Football", "Cricket"]) }}

{{ $t("playgroundPage.interpolation.email", { account: "johndoe", domain: "hygraph", }) }}

Example of pluralization and interpolation

Date & Time Translations

Translating dates and times involves formatting them according to the conventions of different locales. We can use Vue I18n’s features for formatting date strings, handling time zones, and translating day and month names. The configuration for this goes under the datetimeFormats key inside the vue-i18n config object.

// i18n.config.ts
export default defineI18nConfig(() => ({
  fallbackLocale: "en",
  datetimeFormats: {
    en: {
      short: {
        year: "numeric",
        month: "short",
        day: "numeric",
      },
      long: {
        year: "numeric",
        month: "short",
        day: "numeric",
        weekday: "short",
        hour: "numeric",
        minute: "numeric",
        hour12: false,
      },
    },
    fr: {
      short: {
        year: "numeric",
        month: "short",
        day: "numeric",
      },
      long: {
        year: "numeric",
        month: "short",
        day: "numeric",
        weekday: "long",
        hour: "numeric",
        minute: "numeric",
        hour12: true,
      },
    },
    es: {
      short: {
        year: "numeric",
        month: "short",
        day: "numeric",
      },
      long: {
        year: "2-digit",
        month: "short",
        day: "numeric",
        weekday: "long",
        hour: "numeric",
        minute: "numeric",
        hour12: true,
      },
    },
  },
}));

Here, we have set up short and long formats for all three languages. If you are coding along, you will be able to see available configurations for fields, like month and year, thanks to TypeScript and Intellisense features provided by your code editor. To display the translated dates and times in components, we should use the $d function and pass the format to it.



  
    
    
      
        {{ $t("playgroundPage.dateTime.title") }}
      
      
        

Short: {{ (new Date(), $d(new Date(), "short")) }}

Long: {{ (new Date(), $d(new Date(), "long")) }}

Showing a default date and time.


Showing the date and time translated in Spanish.


Localization On the Hygraph Side

We saw how to implement localization with static content. Now, let’s look at how to fetch dynamic localized content in Nuxt.

We can build a blog page in our Nuxt App that fetches data from a server. The server API should accept a locale and return data in that specific locale.

Hygraph has a flexible localization API that allows you to publish and query localized content. If you haven’t created a free Hygraph account yet, you can do that on the Hygraph website to continue following along.

Go to Project Settings → Locales and add locales for the API.

Showing Locales on the Hygraph website


We have added two locales: English and French. Now we need a localized_post model in our schema that has only two fields: title and body. Be sure to mark these as “Localized” fields while creating them.

Showing the schema, which has title and body fields, on the Hygraph website


To add permissions to consume the localized content, go to Project Settings → Access → API Access → Public Content API and assign Read permissions to the localized_post model.

Project settings on the Hygraph website with Public Content API and an assigned Read permission


Now, we can go to the Hygraph API playground and add some localized data to the database with the help of GraphQL mutations. To limit the scope of this example, I am simply adding data from the Hygraph API playground. In an ideal world, a create/update mutation would be triggered from the front end after receiving user input.

Run this mutation in the Hygraph API playground:

mutation createLocalizedPost {
  createLocalizedPost(
    data: {
      title: "A Journey Through the Alps", 
      body: "Exploring the majestic mountains of the Alps offers a thrilling experience. The stunning landscapes, diverse wildlife, and pristine environment make it a perfect destination for nature lovers.", 
      localizations: {
        create: [
          {locale: fr, data: {title: "Un voyage à travers les Alpes", body: "Explorer les majestueuses montagnes des Alpes offre une expérience palpitante. Les paysages époustouflants, la faune diversifiée et l'environnement immaculé en font une destination parfaite pour les amoureux de la nature."}}
        ]
      }
    }
  ) {
    id
  }
}

The mutation above creates a post with the en locale and includes a fr version of the same post. Feel free to add more data to your model if you want to see things work from a broader set of data.

Putting Things Together

Now that we have Hygraph API content ready for consumption, let’s take a moment to understand how it’s consumed inside the Nuxt app.

To do this, we’ll install nuxt-graphql-client to serve as the app’s GraphQL client. This is a minimal GraphQL client for performing GraphQL operations without having to worry about complex configurations, code generation, typing, and other setup tasks.

npx nuxi@latest module add graphql-client

// nuxt.config.ts
export default defineNuxtConfig({
  modules: [
    // ...
    "nuxt-graphql-client"
    // ...
  ],
  runtimeConfig: {
    public: {
      GQL_HOST: 'ADD_YOUR_GQL_HOST_URL_HERE_OR_IN_.env'
    }
  },
});

Next, let’s add our GraphQL queries in graphql/queries.graphql.

query getPosts($locale: [Locale!]!) {
  localizedPosts(locales: $locale) {
    title
    body
  }
}

The GraphQL client will automatically scan .graphql and .gql files and generate client-side code and typings in the .nuxt/gql folder. All we need to do is stop and restart the Nuxt application. After restarting the app, the GraphQL client will allow us to use a GqlGetPosts function to trigger the query.

Now, we can build the Blog page, where we query the Hygraph server and render the dynamic data.

// pages/blog.vue
<script setup lang="ts">
  import type { GetPostsQueryVariables } from "#gql";
  import type { PostItem, Locale } from "../types/types";

  const { locale } = useI18n();
  const posts = ref<PostItem[]>([]);
  const isLoading = ref(false);
  const isError = ref(false);

  const fetchPosts = async (localeValue: Locale) => {
    try {
      isLoading.value = true;
      const variables: GetPostsQueryVariables = {
        locale: [localeValue],
      };
      const data = await GqlGetPosts(variables);
      posts.value = data?.localizedPosts ?? [];
    } catch (err) {
      console.log("Fetch Error, Something went wrong", err);
      isError.value = true;
    } finally {
      isLoading.value = false;
    }
  };

  // Fetch posts on component mount
  onMounted(() => {
    fetchPosts(locale.value as Locale);
  });

  // Watch for locale changes
  watch(locale, (newLocale) => {
    fetchPosts(newLocale as Locale);
  });

This code grabs the current locale from the useI18n hook and passes it to the fetchPosts function when the Vue component is mounted. The fetchPosts function passes the locale to the GraphQL query as a variable and obtains localized data from the Hygraph server. We also have a watcher on the locale so that whenever the user changes the global locale, we make an API call to the server again and fetch posts in that locale.

And, finally, let’s add markup for viewing our fetched data!



  
<template>
  <div>
    <h1>Blogs</h1>

    <p v-if="isLoading">Loading…</p>

    <p v-else-if="isError">Something went wrong while getting blogs. Please check the logs.</p>

    <article v-for="post in posts" :key="post.title">
      <h2>{{ post.title }}</h2>
      <p>{{ post.body }}</p>
    </article>
  </div>
</template>

Awesome! If all goes according to plan, then your app should look something like the one in the following video.

Wrapping Up

Check that out — we just made the functionality for translating content for a multilingual website! Now, a user can select a locale from a list of options, and the app fetches content for the selected locale and automatically updates the displayed content.

Did you think that translations would require more difficult steps? It’s pretty amazing that we’re able to cobble together a couple of libraries, hook them up to an API, and wire everything up to render on a page.

Of course, there are other libraries and resources for handling internationalization in a multilingual context. The exact tooling is less the point than it is seeing what pieces are needed to handle dynamic translations and how they come together.


Uniting Web And Native Apps With 4 Unknown JavaScript APIs


Juan Diego Rodríguez

2024-06-20T18:00:00+00:00
2025-06-20T10:32:35+00:00

A couple of years ago, I came across four JavaScript APIs that landed at the bottom of awareness in the State of JavaScript survey. I took an interest in those APIs because they have so much potential to be useful but don’t get the credit they deserve. Even after a quick search, I was amazed at how many new web APIs have been added to the web platform that aren’t getting their dues, suffering from a lack of awareness and browser support.

That situation can be a “catch-22”:

An API is interesting but lacks awareness due to incomplete support, and there is no immediate need to support it due to low awareness.

Most of these APIs are designed to power progressive web apps (PWA) and close the gap between web and native apps. Bear in mind that creating a PWA involves more than just adding a manifest file. Sure, it’s a PWA by definition, but it functions like a bookmark on your home screen in practice. In reality, we need several APIs to achieve a fully native app experience on the web. And the four APIs I’d like to shed light on are part of that PWA puzzle that brings to the web what we once thought was only possible in native apps.

You can see all these APIs in action in this demo as we go along.

1. Screen Orientation API

The Screen Orientation API can be used to sniff out the device’s current orientation. Once we know whether a user is browsing in a portrait or landscape orientation, we can use it to enhance the UX for mobile devices by changing the UI accordingly. We can also use it to lock the screen in a certain position, which is useful for displaying videos and other full-screen elements that benefit from a wider viewport.

Using the global screen object, you can access various properties the screen uses to render a page, including the screen.orientation object. It has two properties:

  • type: The current screen orientation. It can be: "portrait-primary", "portrait-secondary", "landscape-primary", or "landscape-secondary".
  • angle: The current screen orientation angle. It can be any number from 0 to 360 degrees, but it’s normally set in multiples of 90 degrees (e.g., 0, 90, 180, or 270).

On mobile devices, if the angle is 0 degrees, the type is most often going to evaluate to "portrait" (vertical), but on desktop devices, it is typically "landscape" (horizontal). This makes the type property a precise way of knowing a device’s true position.

The screen.orientation object also has two methods:

  • .lock(): This is an async method that takes a type value as an argument to lock the screen.
  • .unlock(): This method unlocks the screen to its default orientation.

And lastly, screen.orientation fires a change event — and the window fires a matching orientationchange event — to let us know when the orientation has changed.

Browser Support

Browser Support on Screen Orientation API

Source: Caniuse.

Finding And Locking Screen Orientation

Let’s code a short demo using the Screen Orientation API to know the device’s orientation and lock it in its current position.

This can be our HTML boilerplate:

<p>Orientation Type: <span class="orientation-type"></span></p>
<p>Orientation Angle: <span class="orientation-angle"></span></p>

On the JavaScript side, we inject the screen orientation type and angle properties into our HTML.

let currentOrientationType = document.querySelector(".orientation-type");
let currentOrientationAngle = document.querySelector(".orientation-angle");

currentOrientationType.textContent = screen.orientation.type;
currentOrientationAngle.textContent = screen.orientation.angle;

Now, we can see the device’s orientation and angle properties. On my laptop, they are "landscape-primary" and 0.

Screen Orientation type and angle being displayed


If we listen to the window’s orientationchange event, we can see how the values are updated each time the screen rotates.

window.addEventListener("orientationchange", () => {
  currentOrientationType.textContent = screen.orientation.type;
  currentOrientationAngle.textContent = screen.orientation.angle;
});

Screen Orientation type and angle are displayed in portrait mode


To lock the screen, we need to first be in full-screen mode, so we will use another extremely useful feature: the Fullscreen API. Nobody wants a webpage to pop into full-screen mode without their consent, so we need transient activation (i.e., a user click) on a DOM element for it to work.

The Fullscreen API has two methods:

  1. Document.exitFullscreen() is used from the global document object,
  2. Element.requestFullscreen() makes the specified element and its descendants go full-screen.

We want the entire page to be full-screen so we can invoke the method from the root element at the document.documentElement object:

const fullscreenButton = document.querySelector(".fullscreen-button");

fullscreenButton.addEventListener("click", async () => {
  // If it is already in full-screen, exit to normal view
  if (document.fullscreenElement) {
    await document.exitFullscreen();
  } else {
    await document.documentElement.requestFullscreen();
  }
});

Next, we can lock the screen in its current orientation:

const lockButton = document.querySelector(".lock-button");

lockButton.addEventListener("click", async () => {
  try {
    await screen.orientation.lock(screen.orientation.type);
  } catch (error) {
    console.error(error);
  }
});

And do the opposite with the unlock button:

const unlockButton = document.querySelector(".unlock-button");

unlockButton.addEventListener("click", () => {
  screen.orientation.unlock();
});

Can’t We Check Orientation With a Media Query?

Yes! We can indeed check page orientation via the orientation media feature in a CSS media query. However, media queries compute the current orientation by checking if the width is “bigger than the height” for landscape or “smaller” for portrait. By contrast,

The Screen Orientation API checks for the screen rendering the page regardless of the viewport dimensions, making it resistant to inconsistencies that may crop up with page resizing.

You may have noticed how PWAs like Instagram and X force the screen to be in portrait mode even when the native system orientation is unlocked. It is important to notice that this behavior isn’t achieved through the Screen Orientation API, but by setting the orientation property on the manifest.json file to the desired orientation type.

2. Device Orientation API

Another API I’d like to poke at is the Device Orientation API. It provides access to a device’s gyroscope sensors to read the device’s orientation in space — something used all the time in mobile apps, mainly games. The API makes this happen with a deviceorientation event that triggers each time the device moves. It has the following properties:

  • event.alpha: Orientation along the Z-axis, ranging from 0 to 360 degrees.
  • event.beta: Orientation along the X-axis, ranging from -180 to 180 degrees.
  • event.gamma: Orientation along the Y-axis, ranging from -90 to 90 degrees.

Browser Support

Browser Support on Device Orientation API

Source: Caniuse.

Moving Elements With Your Device

In this case, we will make a 3D cube with CSS that can be rotated with your device! The full instructions I used to make the initial CSS cube are credited to David DeSandro and can be found in his introduction to 3D transforms.

See the Pen [Rotate cube [forked]](https://codepen.io/smashingmag/pen/vYwdMNJ) by Dave DeSandro.


You can see raw full HTML in the demo, but let’s print it here for posterity:

<!-- face class names are illustrative; see the CodePen demo for the full markup and CSS -->
<div class="cube">
  <div class="face">1</div>
  <div class="face">2</div>
  <div class="face">3</div>
  <div class="face">4</div>
  <div class="face">5</div>
  <div class="face">6</div>
</div>

<h2>Device Orientation API</h2>

<p>Alpha: <span class="currentAlpha"></span></p>
<p>Beta: <span class="currentBeta"></span></p>
<p>Gamma: <span class="currentGamma"></span></p>

To keep this brief, I won’t explain the CSS code here. Just keep in mind that it provides the necessary styles for the 3D cube, and it can be rotated through all axes using the CSS rotate() function.

Now, with JavaScript, we listen to the window’s deviceorientation event and access the event orientation data:

const currentAlpha = document.querySelector(".currentAlpha");
const currentBeta = document.querySelector(".currentBeta");
const currentGamma = document.querySelector(".currentGamma");

window.addEventListener("deviceorientation", (event) => {
  currentAlpha.textContent = event.alpha;
  currentBeta.textContent = event.beta;
  currentGamma.textContent = event.gamma;
});

To see how the data changes on a desktop device, we can open Chrome’s DevTools and access the Sensors Panel to emulate a rotating device.

Emulating a device rotating on the Chrome DevTools Sensor Panel


To rotate the cube, we change its CSS transform properties according to the device orientation data:

const currentAlpha = document.querySelector(".currentAlpha");
const currentBeta = document.querySelector(".currentBeta");
const currentGamma = document.querySelector(".currentGamma");

const cube = document.querySelector(".cube");

window.addEventListener("deviceorientation", (event) => {
  currentAlpha.textContent = event.alpha;
  currentBeta.textContent = event.beta;
  currentGamma.textContent = event.gamma;

  cube.style.transform = `rotateX(${event.beta}deg) rotateY(${event.gamma}deg) rotateZ(${event.alpha}deg)`;
});

This is the result:

Cube rotated according to emulated device orientation


3. Vibration API

Let’s turn our attention to the Vibration API, which, unsurprisingly, allows access to a device’s vibrating mechanism. This comes in handy when we need to alert users with in-app notifications, like when a process is finished or a message is received. That said, we have to use it sparingly; no one wants their phone blowing up with notifications.

There’s just one method that the Vibration API gives us, and it’s all we need: navigator.vibrate().

vibrate() is available globally from the navigator object and takes an argument for how long a vibration lasts in milliseconds. It can be either a number or an array of numbers representing a pattern of vibrations and pauses.

navigator.vibrate(200); // vibrate 200ms
navigator.vibrate([200, 100, 200]); // vibrate 200ms, wait 100, and vibrate 200ms.

Browser Support

Browser Support on Vibration API

Source: Caniuse.

Vibration API Demo

Let’s make a quick demo where the user inputs how many milliseconds they want their device to vibrate and buttons to start and stop the vibration, starting with the markup:
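<!-- A minimal markup sketch; the input id and class names match the selectors used in the JavaScript below -->
<input type="number" id="milliseconds-input" value="200" />
<button class="vibrate-button">Vibrate</button>
<button class="stop-vibrate-button">Stop Vibration</button>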

We’ll add an event listener for a click and invoke the vibrate() method:

const vibrateButton = document.querySelector(".vibrate-button");
const millisecondsInput = document.querySelector("#milliseconds-input");

vibrateButton.addEventListener("click", () => {
  navigator.vibrate(millisecondsInput.value);
});

To stop vibrating, we override the current vibration with a zero-millisecond vibration.

const stopVibrateButton = document.querySelector(".stop-vibrate-button");

stopVibrateButton.addEventListener("click", () => {
  navigator.vibrate(0);
});

4. Contact Picker API

It used to be that only native apps could connect to a device’s “contacts”. But now we have the fourth and final API I want to look at: the Contact Picker API.

The API grants web apps access to the device’s contact lists. Specifically, we get the contacts.select() async method available through the navigator object, which takes the following two arguments:

  • properties: This is an array containing the information we want to fetch from a contact card, e.g., "name", "address", "email", "tel", and "icon".
  • options: This is an object that can only contain the multiple boolean property to define whether or not the user can select one or multiple contacts at a time.
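Put together, a call might look like this in a supporting browser:

// Let the user pick any number of contacts, reading names and phone numbers
const contacts = await navigator.contacts.select(["name", "tel"], { multiple: true });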

Browser Support

I’m afraid that browser support is next to zilch on this one, limited to Chrome Android, Samsung Internet, and Android’s native web browser at the time I’m writing this.

Browser Support on Contacts Manager API

Source: Caniuse.

Selecting User’s Contacts

We will make another demo to select and display the user’s contacts on the page. Again, starting with the HTML:

<button class="get-contacts">Get Contacts</button>

<p>Contacts:</p>
<ul class="contact-list"></ul>

Then, in JavaScript, we first construct our elements from the DOM and choose which properties we want to pick from the contacts.

const getContactsButton = document.querySelector(".get-contacts");
const contactList = document.querySelector(".contact-list");

const props = ["name", "tel", "icon"];
const options = {multiple: true};

Now, we asynchronously pick the contacts when the user clicks the getContactsButton.


const getContacts = async () => {
  try {
    const contacts = await navigator.contacts.select(props, options);
  } catch (error) {
    console.error(error);
  }
};

getContactsButton.addEventListener("click", getContacts);

Using DOM manipulation, we can then append a list item to each contact and an icon to the contactList element.

const appendContacts = (contacts) => {
  contacts.forEach(({name, tel, icon}) => {
    const contactElement = document.createElement("li");

    contactElement.innerText = `${name}: ${tel}`;
    contactList.appendChild(contactElement);
  });
};

const getContacts = async () => {
  try {
    const contacts = await navigator.contacts.select(props, options);
    appendContacts(contacts);
  } catch (error) {
    console.error(error);
  }
};

getContactsButton.addEventListener("click", getContacts);

Appending an image is a little tricky since we will need to convert it into a URL and append it for each item in the list.

const getIcon = (icon) => {
  if (icon.length > 0) {
    const imageUrl = URL.createObjectURL(icon[0]);
    const imageElement = document.createElement("img");
    imageElement.src = imageUrl;

    return imageElement;
  }
};

const appendContacts = (contacts) => {
  contacts.forEach(({name, tel, icon}) => {
    const contactElement = document.createElement("li");

    contactElement.innerText = `${name}: ${tel}`;
    contactList.appendChild(contactElement);

    // Guard against contacts without an icon, where getIcon() returns undefined
    const imageElement = getIcon(icon);
    if (imageElement) {
      contactElement.appendChild(imageElement);
    }
  });
};

const getContacts = async () => {
  try {
    const contacts = await navigator.contacts.select(props, options);
    appendContacts(contacts);
  } catch (error) {
    console.error(error);
  }
};

getContactsButton.addEventListener("click", getContacts);

And here’s the outcome:

Contact Picker showing three mock contacts


Note: The Contact Picker API will only work if the context is secure, i.e., the page is served over https:// or wss:// URLs.

Conclusion

There we go: four web APIs that I believe would empower us to build more useful and robust PWAs but that have slipped under the radar for many of us. This is, of course, due to inconsistent browser support, so I hope this article can bring awareness to these APIs and give them a better chance of appearing in future browser updates.

Aren’t they interesting? We saw how much control we have over the orientation of a device and its screen, as well as the level of access we get to a device’s hardware features (i.e., vibration) and to information from other apps to use in our own UI.

But as I said much earlier, there’s a sort of infinite loop where a lack of awareness begets a lack of browser support. So, while the four APIs we covered are super interesting, your mileage will inevitably vary when it comes to using them in a production environment. Please tread cautiously and refer to Caniuse for the latest support information, or check for your own devices using WebAPI Check.


How To Hack Your Google Lighthouse Scores In 2024


Salma Alam-Naylor

2024-06-11T18:00:00+00:00
2025-06-20T10:32:35+00:00

This article is sponsored by Sentry.io

Google Lighthouse has been one of the most effective ways to gamify and promote web page performance among developers. Using Lighthouse, we can assess web pages based on overall performance, accessibility, SEO, and what Google considers “best practices”, all with the click of a button.

We might use these tests to evaluate out-of-the-box performance for front-end frameworks or to celebrate performance improvements gained by some diligent refactoring. And you know you love sharing screenshots of your perfect Lighthouse scores on social media. It’s a well-deserved badge of honor worthy of a confetti celebration.

Animated gif of four perfect Google Lighthouse scores with confetti popping in all over the place

Just the fact that Lighthouse gets developers like us talking about performance is a win. But, whilst I don’t want to be a party pooper, the truth is that web performance is far more nuanced than this. In this article, we’ll examine how Google Lighthouse calculates its performance scores, and, using this information, we will attempt to “hack” those scores in our favor, all in the name of fun and science — because in the end, Lighthouse is simply a good, but rough guide for debugging performance. We’ll have some fun with it and see to what extent we can “trick” Lighthouse into handing out better scores than we may deserve.

But first, let’s talk about data.

Field Data Is Important

Local performance testing is a great way to understand if your website performance is trending in the right direction, but it won’t paint a full picture of reality. The World Wide Web is the Wild West, and collectively, we’ve almost certainly lost track of the variety of device types, internet connection speeds, screen sizes, browsers, and browser versions that people are using to access websites — all of which can have an impact on page performance and user experience.

Field data — and lots of it — collected by an application performance monitoring tool like Sentry from real people using your website on their devices will give you a far more accurate report of your website performance than your lab data collected from a small sample size using a high-spec super-powered dev machine under a set of controlled conditions. Philip Walton reported in 2021 that “almost half of all pages that scored 100 on Lighthouse didn’t meet the recommended Core Web Vitals thresholds” based on data from the HTTP Archive.

Web performance is more than a single core web vital metric or Lighthouse performance score. What we’re talking about goes way beyond the type of raw data we’re working with.

Web Performance Is More Than Numbers

Speed is often the first thing that comes up when talking about web performance — just how long does a page take to load? This isn’t the worst thing to measure, but we must bear in mind that speed is probably influenced heavily by business KPIs and sales targets. Google released a report in 2018 suggesting that the probability of bounces increases by 32% if the page load time reaches higher than three seconds, and soars to 123% if the page load time reaches 10 seconds. So, we must conclude that converting more sales requires reducing bounce rates. And to reduce bounce rates, we must make our pages load faster.

But what does “load faster” even mean? At some point, we’re physically incapable of making a web page load any faster. Humans — and the servers that connect them — are spread around the globe, and modern internet infrastructure can only deliver so many bytes at a time.

The bottom line is that page load is not a single moment in time. In an article titled “What is speed?” Google explains that a page load event is:

[…] “an experience that no single metric can fully capture. There are multiple moments during the load experience that can affect whether a user perceives it as ‘fast’, and if you just focus solely on one, you might miss bad experiences that happen during the rest of the time.”

The key word here is experience. Real web performance is less about numbers and speed than it is about how we experience page load and page usability as users. And this segues nicely into a discussion of how Google Lighthouse calculates performance scores. (It’s much less about pure speed than you might think.)

How Google Lighthouse Performance Scores Are Calculated

The Google Lighthouse performance score is calculated using a weighted combination of scores based on core web vital metrics (i.e., First Contentful Paint (FCP), Largest Contentful Paint (LCP), Cumulative Layout Shift (CLS)) and other speed-related metrics (i.e., Speed Index (SI) and Total Blocking Time (TBT)) that are observable throughout the page load timeline.

This is how the metrics are weighted in the overall score:

Metric                   | Weighting (%)
-------------------------|--------------
Total Blocking Time      | 30
Cumulative Layout Shift  | 25
Largest Contentful Paint | 25
First Contentful Paint   | 10
Speed Index              | 10

The weighting assigned to each score gives us insight into how Google prioritizes the different building blocks of a good user experience:

1. A Web Page Should Respond to User Input

The highest weighted metric is Total Blocking Time (TBT), a metric that looks at the total time after the First Contentful Paint (FCP) to help indicate where the main thread may be blocked long enough to prevent speedy responses to user input. The main thread is considered “blocked” any time there’s a JavaScript task running on the main thread for more than 50ms. Minimizing TBT ensures that a web page responds to physical user input (e.g., key presses, mouse clicks, and so on).
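If you want to see these blocking tasks for yourself, here’s a small sketch that uses the Long Tasks API (a Chromium feature at the time of writing) to log any main-thread task that crosses that 50ms threshold:

// Log every main-thread task that runs longer than 50ms
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`Blocking task: ${Math.round(entry.duration)}ms`);
  }
}).observe({ type: "longtask", buffered: true });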

2. A Web Page Should Load Useful Content With No Unexpected Visual Shifts

The next most weighted Lighthouse metrics are Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS). LCP marks the point in the page load timeline when the page’s main content has likely loaded and is therefore useful.

At the point where the main content has likely loaded, you also want to maintain visual stability to ensure that users can use the page and are not affected by unexpected visual shifts (CLS). A good LCP score is anything less than 2.5 seconds (which is a lot higher than we might have thought, given we are often trying to make our websites as fast as possible).

3. A Web Page Should Load Something

The First Contentful Paint (FCP) metric marks the first point in the page load timeline where the user can see something on the screen, and the Speed Index (SI) measures how quickly content is visually displayed during page load over time until the page is “complete”.

Your page is scored based on the speed indices of real websites using performance data from the HTTP Archive. A good FCP score is less than 1.8 seconds and a good SI score is less than 3.4 seconds. Both of these thresholds are higher than you might expect when thinking about speed.

Usability Is Favored Over Raw Speed

Google Lighthouse’s performance scoring is, without a doubt, less about speed and more about usability. Your SI and FCP could be super quick, but if your LCP takes too long to paint, and if CLS is caused by large images or external content taking some time to load and shifting things visually, then your overall performance score will be lower than if your page was a little slower to render the FCP but didn’t cause any CLS. Ultimately, if the page is unresponsive due to JavaScript blocking the main thread for more than 50ms, your performance score will suffer more than if the page was a little slow to paint the FCP.

To understand more about how the weightings of each metric contribute to the final performance score, you can play about with the sliders on the Lighthouse Scoring Calculator, and here’s a rudimentary table demonstrating the effect of skewed individual metric weightings on the overall performance score, proving that page usability and responsiveness is favored over raw speed.

Description                               | FCP (ms) | SI (ms) | LCP (ms) | TBT (ms) | CLS  | Overall Score
------------------------------------------|----------|---------|----------|----------|------|--------------
Slow to show something on screen          | 6000     | 0       | 0        | 0        | 0    | 90
Slow to load content over time            | 0        | 5000    | 0        | 0        | 0    | 90
Slow to load the largest part of the page | 0        | 0       | 6000     | 0        | 0    | 76
Visual shifts occurring during page load  | 0        | 0       | 0        | 0        | 0.82 | 76
Page is unresponsive to user input        | 0        | 0       | 0        | 2000     | 0    | 70

The overall Google Lighthouse performance score is calculated by converting each raw metric value into a score from 0 to 100 according to where it falls on its Lighthouse scoring distribution, which is a log-normal distribution derived from the performance metrics of real website performance data from the HTTP Archive. There are two main takeaways from this mathematically overloaded information:

  1. Your Lighthouse performance score is plotted against real website performance data, not in isolation.
  2. Given that the scoring uses log-normal distribution, the relationship between the individual metric values and the overall score is non-linear, meaning you can make substantial improvements to low-performance scores quite easily, but it becomes more difficult to improve an already high score.

Log-normal distribution curve visualization, high on the left, low on the right.


Read more about how metric scores are determined, including a visualization of the log-normal distribution curve on developer.chrome.com.

Can We “Trick” Google Lighthouse?

I appreciate Google’s focus on usability over pure speed in the web performance conversation. It urges developers to think less about aiming for raw numbers and more about the real experiences we build. That being said, I’ve wondered whether today in 2024, it’s possible to fool Google Lighthouse into believing that a bad page in terms of usability and usefulness is actually a great one.

I put on my lab coat and science goggles to investigate. All tests were conducted:

  • Using the Chromium Lighthouse plugin,
  • In an incognito window in the Arc browser,
  • Using the “navigation” and “mobile” settings (apart from where described differently),
  • By me, in a lab (i.e., no field data).

That all being said, I fully acknowledge that my controlled test environment contradicts my advice at the top of this post, but the experiment is an interesting ride nonetheless. What I hope you’ll take away from this is that Lighthouse scores are only one piece — and a tiny one at that — of a very large and complex web performance puzzle. And, without field data, I’m not sure any of this matters anyway.

How to Hack FCP and LCP Scores

TL;DR: Show the smallest amount of LCP-qualifying content on load to boost the FCP and LCP scores until the Lighthouse test has likely finished.

FCP marks the first point in the page load timeline where the user can see anything at all on the screen, while LCP marks the point in the page load timeline when the main page content (i.e., the largest text or image element) has likely loaded. A fast LCP helps reassure the user that the page is useful. “Likely” and “useful” are the important words to bear in mind here.

What Counts as an LCP Element

The types of elements on a web page considered by Lighthouse for LCP are:

  • <img> elements,
  • <image> elements inside an <svg> element,
  • <video> elements,
  • An element with a background image loaded using the url() function (and not a CSS gradient), and
  • Block-level elements containing text nodes or other inline-level text elements.

The following elements are excluded from LCP consideration due to the likelihood they do not contain useful content:

  • Elements with zero opacity (invisible to the user),
  • Elements that cover the full viewport (likely to be background elements), and
  • Placeholder images or other images with low entropy (i.e., low informational content, such as a solid-colored image).

However, the notion of an image or text element being useful is completely subjective in this case and generally out of the realm of what machine code can reliably determine. For example, I built a page containing nothing but an <h1> element where, after 10 seconds, JavaScript inserts more descriptive text into the DOM and hides the <h1> element.

Lighthouse considers the heading element to be the LCP element in this experiment. At this point, the page load timeline has finished, but the page’s main content has not loaded, even though Lighthouse thinks it is likely to have loaded within those 10 seconds. Lighthouse still awards us with a perfect score of 100 even if the heading is replaced by a single punctuation mark, such as a full stop, which is even less useful.

This test suggests that if you need to load page content via client-side JavaScript, you’ll want to avoid displaying a skeleton loader screen since that requires loading more elements on the page. And since we know the process will take some time — and that we can offload the network request from the main thread to a web worker so it won’t affect the TBT — we can use some arbitrary “splash screen” that contains a minimal viable LCP element (for better FCP scoring). This way, we’re giving Lighthouse the impression that the page is useful to users quicker than it actually is.

All we need to do is include a valid LCP element that contains something that counts as the FCP. While I would never recommend loading your main page content via client-side JavaScript in 2024 (serve static HTML from a CDN instead or build as much of the page as you can on a server), I would definitely not recommend this “hack” for a good user experience, regardless of what the Lighthouse performance score tells you. This approach also won’t earn you any favors with search engines indexing your site, as the robots are unable to discover the main content while it is absent from the DOM.

I also tried this experiment with a variety of random images representing the LCP to make the page even less useful. But given that I used small file sizes — made smaller and converted into “next-gen” image formats using a third-party image API to help with page load speed — it seemed that Lighthouse interpreted the elements as “placeholder images” or images with “low entropy”. As a result, those images were disqualified as LCP elements, which is a good thing and makes the LCP slightly less hackable.

View the demo page and use Chromium DevTools in an incognito window to see the results yourself.

In-browser proof that the non-useful page scored 100 on Lighthouse performance


This hack, however, probably won’t hold up in many other use cases. Discord, for example, uses the “splash screen” approach when you hard-refresh the app in the browser, and it receives a sad 29 performance score.

Compared to my DOM-injected demo, the LCP element was calculated as some content behind the splash screen rather than elements contained within the splash screen content itself, given there were one or more large images in the focused text channel I tested on. One could argue that Lighthouse scores are less important for apps that are behind authentication anyway: they don’t need to be indexed by search engines.

Lighthouse screenshot of a score of 29 next to a blurred-out Discord server channel.


There are likely many other situations where apps serve user-generated content and you might be unable to control the LCP element entirely, particularly regarding images.

For example, if you can control the sizes of all the images on your web pages, you might be able to take advantage of an interesting hack or “optimization” (in very large quotes) to arbitrarily game the system, as was the case of RentPath. In 2021, developers at RentPath managed to improve their Lighthouse performance score by 17 points when increasing the size of image thumbnails on a web page. They convinced Lighthouse to calculate the LCP element as one of the larger thumbnails instead of a Google Map tile on the page, which takes considerably longer to load via JavaScript.

The bottom line is that you can gain higher Lighthouse performance scores if you are aware of your LCP element and in control of it, whether that’s through a hack like RentPath’s or mine or a real-deal improvement. That being said, whilst I’ve described the splash screen approach as a hack in this post, that doesn’t mean this type of experience couldn’t offer a purposeful and joyful experience. Performance and user experience are about understanding what’s happening during page load, and it’s also about intent.

How to Hack CLS Scores

TL;DR: Defer loading content that causes layout shifts until the Lighthouse test has likely finished to make the test think it has enough data. CSS transforms do not negatively impact CLS, except if used in conjunction with new elements added to the DOM.

CLS is measured on a decimal scale; a good score is less than 0.1, and a poor score is greater than 0.25. Lighthouse calculates CLS from the largest burst of unexpected layout shifts that occur during a user’s time on the page based on a combination of the viewport size and the movement of unstable elements in the viewport between two rendered frames. Smaller one-off instances of layout shift may be inconsequential, but a bunch of layout shifts happening one after the other will negatively impact your score.

If you know your page contains annoying layout shifts on load, you can defer them until after the page load event has been completed, thus fooling Lighthouse into thinking there is no CLS. This demo page I created, for example, earns a CLS score of 0.143 even though JavaScript immediately starts adding new text elements to the page, shifting the original content up. By pausing the JavaScript that adds new nodes to the DOM by an arbitrary five seconds with a setTimeout(), Lighthouse doesn’t capture the CLS that takes place.
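A minimal sketch of that deferral hack looks something like this, where the five-second delay and the inserted element are arbitrary stand-ins for the demo’s behavior:

// Wait until the Lighthouse test has (probably) finished sampling layout shifts
setTimeout(() => {
  const paragraph = document.createElement("p");
  paragraph.textContent = "Late-arriving content";
  document.body.prepend(paragraph); // shifts the original content down, uncaptured by the test
}, 5000);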

This other demo page earns a performance score of 100, even though it is arguably less useful and useable than the last page given that the added elements pop in seemingly at random without any user interaction.

Lighthouse performance score of 100 following the second test.


Whilst it is possible to defer layout shift events for a page load test, this hack definitely won’t work for field data and user experience over time (which is a more important focal point, as we discussed earlier). If we perform a “time span” test in Lighthouse on the page with deferred layout shifts, Lighthouse will correctly report a non-green CLS score of around 0.186.

Screenshot of a timespan test performed on the same page with layout shifts.


If you do want to intentionally create a chaotic experience similar to the demo, you can use CSS animations and transforms to more purposefully pop the content into view on the page. In Google’s guide to CLS, they state that “content that moves gradually and naturally from one position to another can often help the user better understand what’s going on and guide them between state changes” — again, highlighting the importance of user experience in context.

On this next demo page, I’m using CSS transform to scale() the text elements from 0 to 1 and move them around the page. The transforms fail to trigger CLS because the text nodes are already in the DOM when the page loads. That said, I did observe in my testing that if the text nodes are added to the DOM programmatically after the page loads via JavaScript and then animated, Lighthouse will indeed detect CLS and score things accordingly.

You Can’t Hack a Speed Index Score

The Speed Index score is based on the visual progress of the page as it loads. The quicker your content loads nearer the beginning of the page load timeline, the better.

It is possible to hack the Speed Index into thinking a page loads more slowly than it does, but there’s no real way to “fake” loading content faster than it actually loads. The only way to make your Speed Index score better is to optimize your web page for loading as much of the page as possible, as soon as possible. Whilst not entirely realistic in the web landscape of 2024 (mainly because it would put designers out of a job), you could go all-in to lower your Speed Index as much as possible by:

  • Delivering static HTML web pages only (no server-side rendering) straight from a CDN,
  • Avoiding images on the page,
  • Minimizing or eliminating CSS, and
  • Preventing JavaScript or any external dependencies from loading.

You Also Can’t (Really) Hack A TBT Score

TBT measures the total time after the FCP where the main thread was blocked by JavaScript tasks for long enough to prevent responses to user input. A good TBT score is anything lower than 200ms.

JavaScript-heavy web applications (such as single-page applications) that perform complex state calculations and DOM manipulation on the client on page load (rather than on the server before sending rendered HTML) are prone to suffering poor TBT scores. In this case, you could probably hack your TBT score by deferring all JavaScript until after the Lighthouse test has finished. That said, you’d need to provide some kind of placeholder content or loading screen to satisfy the FCP and LCP and to inform users that something will happen at some point. Plus, you’d have to go to extra lengths to hack around the front-end framework you’re using. (You don’t want to load a placeholder page that, at some point in the page load timeline, loads a separate React app after an arbitrary amount of time!)
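As a crude sketch of the idea — and absolutely not a recommendation — deferring the heavy JavaScript might look like this, where heavy-app.js is a hypothetical main bundle:

// Defer all heavy JavaScript until the Lighthouse run has likely finished
window.addEventListener("load", () => {
  setTimeout(() => {
    import("./heavy-app.js"); // hypothetical bundle, loaded long after the test window
  }, 10000);
});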

What’s interesting is that while we’re still doing all sorts of fancy things with JavaScript in the client, advances in the modern web ecosystem are helping us all reduce the probability of a less-than-stellar TBT score. Many front-end frameworks, in partnership with modern hosting providers, are capable of rendering pages and processing complex logic on demand without any client-side JavaScript. While eliminating JavaScript on the client is not the goal, we certainly have a lot of options to use a lot less of it, thus minimizing the risk of doing too much computation on the main thread on page load.

Bottom Line: Lighthouse Is Still Just A Rough Guide

Google Lighthouse can’t detect everything that’s wrong with a particular website. Whilst Lighthouse performance scores prioritize page usability in terms of responding to user input, it still can’t detect every terrible usability or accessibility issue in 2024.

In 2019, Manuel Matuzović published an experiment where he intentionally created a terrible page that Lighthouse thought was pretty great. I hypothesized that five years later, Lighthouse might do better; but it doesn’t.

On this final demo page I put together, input events are disabled by CSS and JavaScript, making the page technically unresponsive to user input. After five seconds, JavaScript flips a switch and allows you to click the button. The page still scores 100 for both performance and accessibility.

Lighthouse showing perfect performance and accessibility scores for a useless, inaccessible page.

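The core of that demo boils down to a few lines, sketched here from the description above rather than copied from the demo source:

// Make the page unresponsive to pointer input on load...
document.body.style.pointerEvents = "none";

// ...then flip the switch after five seconds.
setTimeout(() => {
  document.body.style.pointerEvents = "auto";
}, 5_000);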

You really can’t rely on Lighthouse as a substitute for usability testing and common sense.

Some More Silly Hacks

As with everything in life, there’s always a way to game the system. Here are some more tried and tested guaranteed hacks to make sure your Lighthouse performance score artificially knocks everyone else’s out of the park:

  • Only run Lighthouse tests using the fastest and highest-spec hardware.
  • Make sure your internet connection is the fastest it can be; relocate if you need to.
  • Never use field data, only lab data, collected using the aforementioned fastest and highest-spec hardware and super-speed internet connection.
  • Rerun the tests in the lab using different conditions and all the special code hacks I described in this post until you get the result(s) you want to impress your friends, colleagues, and random people on the internet.

Note: The best way to learn about web performance and how to optimize your websites is to do the complete opposite of everything we’ve covered in this article all of the time. And finally, to seriously level up your performance skills, use an application monitoring tool like Sentry. Think of Lighthouse as the canary and Sentry as the real-deal production-data-capturing, lean, mean, web vitals machine.

And finally-finally, here’s the link to the full demo site for educational purposes.


Scaling Success: Key Insights And Practical Takeaways

Addy Osmani

2024-06-04T12:00:00+00:00
2025-06-20T10:32:35+00:00

Building successful web products at scale is a multifaceted challenge that demands a combination of technical expertise, strategic decision-making, and a growth-oriented mindset. In Success at Scale, I dive into case studies from some of the web’s most renowned products, uncovering the strategies and philosophies that propelled them to the forefront of their industries.

Here you will find some of the insights I’ve gleaned from these success stories, part of an ongoing effort to build a roadmap for teams striving to achieve scalable success in the ever-evolving digital landscape.

Cultivating A Mindset For Scaling Success

The foundation of scaling success lies in fostering the right mindset within your team. The case studies in Success at Scale highlight several critical mindsets that permeate the culture of successful organizations.

User-Centricity

Successful teams prioritize the user experience above all else.

They invest in understanding their users’ needs, behaviors, and pain points and relentlessly strive to deliver value. Instagram’s performance optimization journey exemplifies this mindset, focusing on improving perceived speed and reducing user frustration, leading to significant gains in engagement and retention.

By placing the user at the center of every decision, Instagram was able to identify and prioritize the most impactful optimizations, such as preloading critical resources and leveraging adaptive loading strategies. This user-centric approach allowed them to deliver a seamless and delightful experience to their vast user base, even as their platform grew in complexity.

Data-Driven Decision Making

Scaling success relies on data, not assumptions.

Teams must embrace a data-driven approach, leveraging metrics and analytics to guide their decisions and measure impact. Shopify’s UI performance improvements showcase the power of data-driven optimization, using detailed profiling and user data to prioritize efforts and drive meaningful results.

By analyzing user interactions, identifying performance bottlenecks, and continuously monitoring key metrics, Shopify was able to make informed decisions that directly improved the user experience. This data-driven mindset allowed them to allocate resources effectively, focusing on the areas that yielded the greatest impact on performance and user satisfaction.

Continuous Improvement

Scaling is an ongoing process, not a one-time achievement.

Successful teams foster a culture of continuous improvement, constantly seeking opportunities to optimize and refine their products. Smashing Magazine’s case study on enhancing Core Web Vitals demonstrates the impact of iterative enhancements, leading to significant performance gains and improved user satisfaction.

By regularly assessing their performance metrics, identifying areas for improvement, and implementing incremental optimizations, Smashing Magazine was able to continuously elevate the user experience. This mindset of continuous improvement ensures that the product remains fast, reliable, and responsive to user needs, even as it scales in complexity and user base.

Collaboration And Inclusivity

Silos hinder scalability.

High-performing teams promote collaboration and inclusivity, ensuring that diverse perspectives are valued and leveraged. The Understood’s accessibility journey highlights the power of cross-functional collaboration, with designers, developers, and accessibility experts working together to create inclusive experiences for all users.

By fostering open communication, knowledge sharing, and a shared commitment to accessibility, The Understood was able to embed inclusive design practices throughout its development process. This collaborative and inclusive approach not only resulted in a more accessible product but also cultivated a culture of empathy and user-centricity that permeated all aspects of their work.

Making Strategic Decisions for Scalability

Beyond cultivating the right mindset, scaling success requires making strategic decisions that lay the foundation for sustainable growth.

Technology Choices

Selecting the right technologies and frameworks can significantly impact scalability. Factors like performance, maintainability, and developer experience should be carefully considered. Notion’s migration to Next.js exemplifies the importance of choosing a technology stack that aligns with long-term scalability goals.

By adopting Next.js, Notion was able to leverage its performance optimizations, such as server-side rendering and efficient code splitting, to deliver fast and responsive pages. Additionally, the developer-friendly ecosystem of Next.js and its strong community support enabled Notion’s team to focus on building features and optimizing the user experience rather than grappling with low-level infrastructure concerns. This strategic technology choice laid the foundation for Notion’s scalable and maintainable architecture.

Ship Only The Code A User Needs, When They Need It

This best practice is key to ensuring pages load fast without over-eagerly delivering JavaScript the user may not need yet. For example, Instagram made a concerted effort to improve the web performance of instagram.com, resulting in a nearly 50% cumulative improvement in feed page load time. A key area of focus has been shipping less JavaScript code to users, particularly on the critical rendering path.

The Instagram team found that the uncompressed size of JavaScript is more important for performance than the compressed size, as larger uncompressed bundles take more time to parse and execute on the client, especially on mobile devices. Two optimizations they implemented to reduce JS parse/execute time were inline requires (only executing code when it’s first used vs. eagerly on initial load) and serving ES2017+ code to modern browsers to avoid transpilation overhead. Inline requires improved Time-to-Interactive metrics by 12%, and the ES2017+ bundle was 5.7% smaller and 3% faster than the transpiled version.
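To make the inline requires idea concrete, here is a generic sketch (not Instagram’s actual code) of deferring a module’s parse and execute cost until the code path that needs it actually runs; renderComments and ./comments are hypothetical:

// Eager version: import { renderComments } from "./comments";
// would parse and execute the module during the initial load,
// even if the user never opens the comments.

// Inline-require version: the cost is paid on first use instead.
async function onCommentsVisible(postId: string): Promise<void> {
  const { renderComments } = await import("./comments");
  renderComments(postId);
}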

While good progress has been made, the Instagram team acknowledges there are still many opportunities for further optimization. Potential areas to explore could include the following:

  • Improved code-splitting, moving more logic off the critical path,
  • Optimizing scrolling performance,
  • Adapting to varying network conditions,
  • Modularizing their Redux state management.

Continued efforts will be needed to keep instagram.com performing well as new features are added and the product grows in complexity.

Accessibility Integration

Accessibility should be an integral part of the product development process, not an afterthought.

Wix’s comprehensive approach to accessibility, encompassing keyboard navigation, screen reader support, and infrastructure for future development, showcases the importance of building inclusivity into the product’s core.

By considering accessibility requirements from the initial design stages and involving accessibility experts throughout the development process, Wix was able to create a platform that empowered its users to build accessible websites. This holistic approach to accessibility not only benefited end-users but also positioned Wix as a leader in inclusive web design, attracting a wider user base and fostering a culture of empathy and inclusivity within the organization.

Developer Experience Investment

Investing in a positive developer experience is essential for attracting and retaining talent, fostering productivity, and accelerating development.

Apideck’s case study in the book highlights the impact of a great developer experience on community building and product velocity.

By providing well-documented APIs, intuitive SDKs, and comprehensive developer resources, Apideck was able to cultivate a thriving developer community. This investment in developer experience not only made it easier for developers to integrate with Apideck’s platform but also fostered a sense of collaboration and knowledge sharing within the community. As a result, Apideck was able to accelerate product development, leverage community contributions, and continuously improve its offering based on developer feedback.

Leveraging Performance Optimization Techniques

Achieving optimal performance is a critical aspect of scaling success. The case studies in Success at Scale showcase various performance optimization techniques that have proven effective.

Progressive Enhancement and Graceful Degradation

Building resilient web experiences that perform well across a range of devices and network conditions requires a progressive enhancement approach. Pinafore’s case study in Success at Scale highlights the benefits of ensuring core functionality remains accessible even in low-bandwidth or JavaScript-constrained environments.

By leveraging server-side rendering and delivering a usable experience even when JavaScript fails to load, Pinafore demonstrates the importance of progressive enhancement. This approach not only improves performance and resilience but also ensures that the application remains accessible to a wider range of users, including those with older devices or limited connectivity. By gracefully degrading functionality in constrained environments, Pinafore provides a reliable and inclusive experience for all users.

Adaptive Loading Strategies

The book’s case study on Tinder highlights the power of sophisticated adaptive loading strategies. By dynamically adjusting the content and resources delivered based on the user’s device capabilities and network conditions, Tinder ensures a seamless experience across a wide range of devices and connectivity scenarios. Tinder’s adaptive loading approach involves techniques like dynamic code splitting, conditional resource loading, and real-time network quality detection. This allows the application to optimize the delivery of critical resources, prioritize essential content, and minimize the impact of poor network conditions on the user experience.

By adapting to the user’s context, Tinder delivers a fast and responsive experience, even in challenging environments.
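As a generic illustration of the idea (not Tinder’s implementation, and with hypothetical asset paths), adaptive loading can start as simply as checking the experimental Network Information API and falling back to lighter resources on slow connections:

// navigator.connection is not available in all browsers, so treat it
// as an optional, progressive enhancement.
type NetworkInformation = { effectiveType?: "slow-2g" | "2g" | "3g" | "4g" };

const connection = (navigator as Navigator & { connection?: NetworkInformation })
  .connection;
const isSlow =
  connection?.effectiveType === "slow-2g" || connection?.effectiveType === "2g";

// Serve a lighter hero image and skip optional bundles on slow networks.
const heroSrc = isSlow ? "/img/hero-small.jpg" : "/img/hero-large.jpg";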

Efficient Resource Management

Effective management of resources, such as images and third-party scripts, can significantly impact performance. eBay’s journey showcases the importance of optimizing image delivery, leveraging techniques like lazy loading and responsive images to reduce page weight and improve load times.

By implementing lazy loading, eBay ensures that images are only loaded when they are likely to be viewed by the user, reducing initial page load time and conserving bandwidth. Additionally, by serving appropriately sized images based on the user’s device and screen size, eBay minimizes the transfer of unnecessary data and improves the overall loading performance. These resource management optimizations, combined with other techniques like caching and CDN utilization, enable eBay to deliver a fast and efficient experience to its global user base.
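A generic sketch of those two techniques together (not eBay’s markup; the image paths are hypothetical) could look like this as a JSX component:

// Native lazy loading defers the fetch until the image nears the
// viewport; srcset and sizes let the browser pick an appropriately
// sized file for the device and screen.
export function ProductImage() {
  return (
    <img
      src="/img/product-400.jpg"
      srcset="/img/product-400.jpg 400w, /img/product-800.jpg 800w"
      sizes="(max-width: 600px) 400px, 800px"
      loading="lazy"
      alt="Product photo"
    />
  );
}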

Continuous Performance Monitoring

Regularly monitoring and analyzing performance metrics is crucial for identifying bottlenecks and opportunities for optimization. The case study on Yahoo! Japan News demonstrates the impact of continuous performance monitoring, using tools like Lighthouse and real user monitoring to identify and address performance issues proactively.

By establishing a performance monitoring infrastructure, Yahoo! Japan News gains visibility into the real-world performance experienced by their users. This data-driven approach allows them to identify performance regression, pinpoint specific areas for improvement, and measure the impact of their optimizations. Continuous monitoring also enables Yahoo! Japan News to set performance baselines, track progress over time, and ensure that performance remains a top priority as the application evolves.
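One way to script a recurring lab check of your own (a sketch using the open-source lighthouse and chrome-launcher npm packages, not Yahoo! Japan News’s setup) is to run Lighthouse programmatically and track the score over time:

import lighthouse from "lighthouse";
import * as chromeLauncher from "chrome-launcher";

// Launch a headless Chrome instance for Lighthouse to drive.
const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });

const result = await lighthouse("https://example.com/", {
  port: chrome.port,
  onlyCategories: ["performance"],
});

// The score is reported on a 0-1 scale; persist or alert on it
// to catch regressions over time.
console.log("Performance score:", result?.lhr.categories.performance.score);

await chrome.kill();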

Embracing Accessibility and Inclusive Design

Creating inclusive web experiences that cater to diverse user needs is not only an ethical imperative but also a critical factor in scaling success. The case studies in Success at Scale emphasize the importance of accessibility and inclusive design.

Comprehensive Accessibility Testing

Ensuring accessibility requires a combination of automated testing tools and manual evaluation. LinkedIn’s approach to automated accessibility testing demonstrates the value of integrating accessibility checks into the development workflow, catching potential issues early, and reducing the reliance on manual testing alone.

By leveraging tools like Deque’s axe and integrating accessibility tests into their continuous integration pipeline, LinkedIn can identify and address accessibility issues before they reach production. This proactive approach to accessibility testing not only improves the overall accessibility of the platform but also reduces the cost and effort associated with retroactive fixes. However, LinkedIn also recognizes the importance of manual testing and user feedback in uncovering complex accessibility issues that automated tools may miss. By combining automated checks with manual evaluation, LinkedIn ensures a comprehensive approach to accessibility testing.
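As a minimal sketch of what that kind of integration can look like (using the open-source @axe-core/playwright package rather than LinkedIn’s internal tooling):

import { test, expect } from "@playwright/test";
import AxeBuilder from "@axe-core/playwright";

test("home page has no detectable accessibility violations", async ({ page }) => {
  await page.goto("https://example.com/");

  // Run axe against the rendered page and fail the build on violations.
  const results = await new AxeBuilder({ page }).analyze();
  expect(results.violations).toEqual([]);
});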

Inclusive Design Practices

Designing with accessibility in mind from the outset leads to more inclusive and usable products. Success at Scale’s case study on Intercom’s accessible messenger highlights the importance of considering diverse user needs, such as keyboard navigation and screen reader compatibility, throughout the design process.

By embracing inclusive design principles, Intercom ensures that their messenger is usable by a wide range of users, including those with visual, motor, or cognitive impairments. This involves considering factors such as color contrast, font legibility, focus management, and clear labeling of interactive elements. By designing with empathy and understanding the diverse needs of their users, Intercom creates a messenger experience that is intuitive, accessible, and inclusive. This approach not only benefits users with disabilities but also leads to a more user-friendly and resilient product overall.

User Research And Feedback

Engaging with users with disabilities and incorporating their feedback is essential for creating truly inclusive experiences. The Understood’s journey emphasizes the value of user research and collaboration with accessibility experts to identify and address accessibility barriers effectively.

By conducting usability studies with users who have diverse abilities and working closely with accessibility consultants, The Understood gains invaluable insights into the real-world challenges faced by their users. This user-centered approach allows them to identify pain points, gather feedback on proposed solutions, and iteratively improve the accessibility of their platform.

By involving users with disabilities throughout the design and development process, The Understood ensures that their products not only meet accessibility standards but also provide a meaningful and inclusive experience for all users.

Accessibility As A Shared Responsibility

Promoting accessibility as a shared responsibility across the organization fosters a culture of inclusivity. Shopify’s case study underscores the importance of educating and empowering teams to prioritize accessibility, recognizing it as a fundamental aspect of the user experience rather than a mere technical checkbox.

By providing accessibility training, guidelines, and resources to designers, developers, and content creators, Shopify ensures that accessibility is considered at every stage of the product development lifecycle. This shared responsibility approach helps to build accessibility into the core of Shopify’s products and fosters a culture of inclusivity and empathy. By making accessibility everyone’s responsibility, Shopify not only improves the usability of their platform but also sets an example for the wider industry on the importance of inclusive design.

Fostering A Culture of Collaboration And Knowledge Sharing

Scaling success requires a culture that promotes collaboration, knowledge sharing, and continuous learning. The case studies in Success at Scale highlight the impact of effective collaboration and knowledge management practices.

Cross-Functional Collaboration

Breaking down silos and fostering cross-functional collaboration accelerates problem-solving and innovation. Airbnb’s design system journey showcases the power of collaboration between design and engineering teams, leading to a cohesive and scalable design language across web and mobile platforms.

By establishing a shared language and a set of reusable components, Airbnb’s design system enables designers and developers to work together more efficiently. Regular collaboration sessions, such as design critiques and code reviews, help to align both teams and ensure that the design system evolves in a way that meets the needs of all stakeholders. This cross-functional approach not only improves the consistency and quality of the user experience but also accelerates the development process by reducing duplication of effort and promoting code reuse.

Knowledge Sharing And Documentation

Capturing and sharing knowledge across the organization is crucial for maintaining consistency and enabling the efficient onboarding of new team members. Stripe’s investment in internal frameworks and documentation exemplifies the value of creating a shared understanding and facilitating knowledge transfer.

By maintaining comprehensive documentation, code examples, and best practices, Stripe ensures that developers can quickly grasp the intricacies of their internal tools and frameworks. This documentation-driven culture not only reduces the learning curve for new hires but also promotes consistency and adherence to established patterns and practices. Regular knowledge-sharing sessions, such as tech talks and lunch-and-learns, further reinforce this culture of learning and collaboration, enabling team members to learn from each other’s experiences and stay up-to-date with the latest developments.

Communities Of Practice

Establishing communities of practice around specific domains, such as accessibility or performance, promotes knowledge sharing and continuous improvement. Shopify’s accessibility guild demonstrates the impact of creating a dedicated space for experts and advocates to collaborate, share best practices, and drive accessibility initiatives forward.

By bringing together individuals passionate about accessibility from across the organization, Shopify’s accessibility guild fosters a sense of community and collective ownership. Regular meetings, workshops, and hackathons provide opportunities for members to share their knowledge, discuss challenges, and collaborate on solutions. This community-driven approach not only accelerates the adoption of accessibility best practices but also helps to build a culture of inclusivity and empathy throughout the organization.

Leveraging Open Source And External Expertise

Collaborating with the wider developer community and leveraging open-source solutions can accelerate development and provide valuable insights. Pinafore’s journey highlights the benefits of engaging with accessibility experts and incorporating their feedback to create a more inclusive and accessible web experience.

By actively seeking input from the accessibility community and leveraging open-source accessibility tools and libraries, Pinafore was able to identify and address accessibility issues more effectively. This collaborative approach not only improved the accessibility of the application but also contributed back to the wider community by sharing their learnings and experiences. By embracing open-source collaboration and learning from external experts, teams can accelerate their own accessibility efforts and contribute to the collective knowledge of the industry.

The Path To Sustainable Success

Achieving scalable success in the web development landscape requires a multifaceted approach that encompasses the right mindset, strategic decision-making, and continuous learning. The Success at Scale book provides a comprehensive exploration of these elements, offering deep insights and practical guidance for teams at all stages of their scaling journey.

By cultivating a user-centric, data-driven, and inclusive mindset, teams can prioritize the needs of their users and make informed decisions that drive meaningful results. Adopting a culture of continuous improvement and collaboration ensures that teams are always striving to optimize and refine their products, leveraging the collective knowledge and expertise of their members.

Making strategic technology choices, such as selecting performance-oriented frameworks and investing in developer experience, lays the foundation for scalable and maintainable architectures. Implementing performance optimization techniques, such as adaptive loading, efficient resource management, and continuous monitoring, helps teams deliver fast and responsive experiences to their users.

Embracing accessibility and inclusive design practices not only ensures that products are usable by a wide range of users but also fosters a culture of empathy and user-centricity. By incorporating accessibility testing, inclusive design principles, and user feedback into the development process, teams can create products that are both technically sound and meaningfully inclusive.

Fostering a culture of collaboration, knowledge sharing, and continuous learning is essential for scaling success. By breaking down silos, promoting cross-functional collaboration, and investing in documentation and communities of practice, teams can accelerate problem-solving, drive innovation, and build a shared understanding of their products and practices.

The case studies featured in Success at Scale serve as powerful examples of how these principles and strategies can be applied in real-world contexts. By learning from the successes and challenges of industry leaders, teams can gain valuable insights and inspiration for their own scaling journeys.

As you embark on your path to scaling success, remember that it is an ongoing process of iteration, learning, and adaptation. Embrace the mindsets and strategies outlined in this article, dive deeper into the learnings from the Success at Scale book, and continually refine your approach based on the unique needs of your users and the evolving landscape of web development.

Conclusion

Scaling successful web products requires a holistic approach that combines technical excellence, strategic decision-making, and a growth-oriented mindset. By learning from the experiences of industry leaders, as showcased in the Success at Scale book, teams can gain valuable insights and practical guidance on their journey towards sustainable success.

Cultivating a user-centric, data-driven, and inclusive mindset lays the foundation for scalability. By prioritizing the needs of users, making informed decisions based on data, and fostering a culture of continuous improvement and collaboration, teams can create products that deliver meaningful value and drive long-term growth.

Making strategic decisions around technology choices, performance optimization, accessibility integration, and developer experience investment sets the stage for scalable and maintainable architectures. By leveraging proven optimization techniques, embracing inclusive design practices, and investing in the tools and processes that empower developers, teams can build products that are fast and resilient.

Through ongoing collaboration, knowledge sharing, and a commitment to learning, teams can navigate the complexities of scaling success and create products that make a lasting impact in the digital landscape.


Success at Scale is available as a quality hardcover (print + eBook, $44) with free worldwide shipping and a 100-day money-back guarantee, or as a standalone DRM-free eBook ($19) in ePUB, Kindle, and PDF formats, included with your Smashing Membership.

We’re Trying Out Something New

In an effort to conserve resources here at Smashing, we’re trying something new with Success at Scale. The printed book is 304 pages, and we make an expanded PDF version available to everyone who purchases a print book. This accomplishes a few good things:

  • We will use less paper and materials because we are making a smaller printed book;
  • We’ll use fewer resources in general to print, ship, and store the books, leading to a smaller carbon footprint; and
  • Keeping the book at more manageable size means we can continue to offer free shipping on all Smashing orders!

Smashing Books have always been printed with materials from FSC Certified forests. We are committed to finding new ways to conserve resources while still bringing you the best possible reading experience.



Community Matters ❤️

Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be free for Smashing Members. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! 😉

More Smashing Books & Goodies

Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.

In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Heather and Steven are two of these people. Have you checked out their books already?


The Era Of Platform Primitives Is Finally Here

Atila Fassina

2024-05-28T12:00:00+00:00
2025-06-20T10:32:35+00:00

This article is sponsored by Netlify

In the past, the web ecosystem moved at a very slow pace. Developers would go years without a new language feature or working around a weird browser quirk. This pushed our technical leaders to come up with creative solutions to circumvent the platform’s shortcomings. We invented bundling, polyfills, and transformation steps to make things work everywhere with less of a hassle.

Slowly, we moved towards some sort of consensus on what we need as an ecosystem. We now have TypeScript and Vite as clear preferences—pushing the needle of what it means to build consistent experiences for the web. Application frameworks have built whole ecosystems on top of them: SolidStart, Nuxt, Remix, and Analog are examples of incredible tools built with such primitives. We can say that Vite and TypeScript are tooling primitives that empower the creation of others in diverse ecosystems.

With bundling and transformation needs somewhat defined, it was only natural that framework authors would move their gaze to the next layer they needed to abstract: the server.

Server Primitives

The UnJS folks have been consistently building agnostic tooling that can be reused in different ecosystems. Thanks to them, we now have frameworks and libraries such as H3 (a minimal Node.js server framework built with TypeScript), which enables Nitro (a whole server runtime powered by Vite and H3), which in turn enabled Vinxi (an application bundler and server runtime that abstracts Nitro and Vite).

Nitro is already used by three major frameworks (Nuxt, Analog, and SolidStart), while Vinxi is also used by SolidStart. This means that any platform that supports one of these will definitely be able to support the others with zero additional effort.

This is not about taking a bigger slice of the cake, but about making the cake bigger for everyone.

Frameworks, platforms, developers, and users all benefit from it. We bet on our ecosystem together instead of working in silos with monolithic solutions, empowering our developer-users to gain transferable skills and truly choose the best tool for the job, with less vendor lock-in than ever before.

Serverless Rejoins Conversation

Such initiatives have probably been noticed by serverless platforms like Netlify. With Platform Primitives, frameworks can leverage agnostic solutions for common necessities such as Incremental Static Regeneration (ISR), Image Optimization, and key/value (KV) storage.

As the name implies, Netlify Platform Primitives are a group of abstractions and helpers made available at a platform level for either frameworks or developers to leverage when building their applications. This brings additional functionality to every framework simultaneously. This is a big and powerful shift because, up until now, each framework would have to create its own solutions and backport such strategies to compatibility layers within each platform.

Moreover, developers would have to wait for a feature to first land on a framework and subsequently for support to arrive in their platform of choice. Now, as long as they’re using Netlify, those primitives are available directly without any effort and time put in by the framework authors. This empowers every ecosystem in a single measure.

Serverless doesn’t mean there are no servers; it means developers don’t need to handle the server infrastructure themselves. It’s not a misnomer, but a form of Infrastructure as a Service.

As mentioned before, Netlify Platform Primitives are three different features:

  1. Image CDN
    A content delivery network for images. It can handle format transformation and size optimization via URL query strings.
  2. Caching
    Basic primitives for their server runtime that help manage the caching directives for browser, server, and CDN runtimes smoothly.
  3. Blobs
    A key/value (KV) storage option is automatically available to your project through their SDK.

Let’s take a quick dive into each of these features and explore how they can increase our productivity with a serverless fullstack experience.

Image CDN

Every image in a /public directory can be served through a Netlify function. This means it’s possible to access it through a /.netlify/images path. So, without adding sharp or any other image optimization package to your stack, deploying to Netlify allows us to serve our users a better format without transforming assets at build time. In SolidStart, in a few lines of code, we could have an Image component that transforms other formats to .webp.

import { splitProps, type JSX } from "solid-js";

const SITE_URL = "https://example.com";

interface Props extends JSX.ImgHTMLAttributes<HTMLImageElement> {
  format?: "webp" | "jpeg" | "png" | "avif" | "preserve";
  quality?: number | "preserve";
}

const getQuality = (quality: Props["quality"]) => {
  if (quality === "preserve") return "";
  return `&q=${quality || "75"}`;
};

function getFormat(format: Props["format"]) {
  switch (format) {
    case "preserve":
      return "";
    case "jpeg":
      return `&fm=jpeg`;
    case "png":
      return `&fm=png`;
    case "avif":
      return `&fm=avif`;
    case "webp":
    default:
      return `&fm=webp`;
  }
}

export function Image(props: Props) {
  // Keep our custom options out of the spread so only valid
  // attributes land on the rendered element.
  const [local, rest] = splitProps(props, ["format", "quality", "src"]);

  // Serve the image through Netlify's Image CDN endpoint, appending the
  // format and quality options as query-string parameters.
  return (
    <img
      {...rest}
      src={`${SITE_URL}/.netlify/images?url=${local.src}${getFormat(local.format)}${getQuality(local.quality)}`}
    />
  );
}

Notice the above component is even slightly more complex than the bare essentials because we’re enforcing some default optimizations. Our getFormat function transforms images to .webp by default: it’s a broadly supported format that’s significantly smaller than the most common formats, without any perceivable loss in quality. Our getQuality function reduces the image quality to 75% by default; as a rule of thumb, there isn’t any perceivable loss in quality for large images, while still providing a significant size optimization.

Caching

By default, Netlify caching is quite extensive for your regular artifacts – unless there’s a new deployment or the cache is flushed manually, resources will last for 365 days. However, because server/edge functions are dynamic in nature, there’s no default caching to prevent serving stale content to end-users. This means that if you have one of these functions in production, chances are there’s some caching to be leveraged to reduce processing time (and expenses).

By adding a cache-control header, you’ve already done 80% of the work of optimizing how your resources are served to users. Here are some commonly used cache-control compositions:

{
  "cache-control": "public, max-age=0, stale-while-revalidate=86400"
}

  • public: Store in a shared cache.
  • max-age=0: The resource is immediately stale.
  • stale-while-revalidate=86400: If the cache has been stale for less than one day, return the cached value and revalidate it in the background.

{
  "cache-control": "public, max-age=86400, must-revalidate"
}

  • public: Store in a shared cache.
  • max-age=86400: The resource is fresh for one day.
  • must-revalidate: If a request arrives when the resource is already stale, the cache must be revalidated before a response is sent to the user.

Note: For more extensive information about possible compositions of Cache-Control directives, check the MDN entry on Cache-Control.

The cache is a type of key/value storage. So, once our responses are set with proper cache control, platforms have some heuristics to define what the key will be for our resource within the cache storage. The Web Platform has a second very powerful header that can dictate how our cache behaves.

The Vary response header is composed of a list of headers that affect the validity of the resource (the method and the endpoint URL are always considered; there is no need to add them). It allows platforms to factor in location, language, and other header-driven patterns when deciding how long a response can be considered fresh.

The Vary response header is a foundational piece of a special header in Netlify’s caching primitive: the Netlify-Vary header takes a set of instructions describing which parts of the request the cache key should be based on. It is possible to tune a response key not only by the presence of a header but also by its value.

  • query: vary by the value of some or all request query parameters.
  • header: vary by the value of one or more request headers.
  • language: vary by the languages from the Accept-Language header.
  • country: vary by the country inferred from a GeoIP lookup on the request IP address.
  • cookie: vary by the value of one or more request cookie keys.

This header offers fine-grained control over how your resources are cached, allowing for some creative strategies to optimize how your app performs for specific users.
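As a rough sketch of how this comes together (assuming a Netlify Function that returns a standard Response; the page query parameter and renderPage helper are hypothetical), combining Cache-Control with Netlify-Vary might look like this:

// Hypothetical renderer standing in for your application logic.
async function renderPage(req: Request): Promise<string> {
  return `<h1>Hello from ${new URL(req.url).pathname}</h1>`;
}

export default async (req: Request) => {
  const body = await renderPage(req);

  return new Response(body, {
    headers: {
      "content-type": "text/html",
      "cache-control": "public, max-age=0, stale-while-revalidate=86400",
      // Key the cached response on the "page" query parameter only,
      // instead of on the full query string.
      "netlify-vary": "query=page",
    },
  });
};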

Blob Storage

This is a highly available key/value store that is ideal for frequent reads and infrequent writes. Blob stores are automatically available and provisioned for any Netlify project.

It’s possible to write to a blob from your runtime or push data to a deployment-specific store. For example, this is how an action could register the number of likes in a store with SolidStart.

import { getStore } from "@netlify/blobs";
import { action } from "@solidjs/router";

export const upVote = action(async (formData: FormData) => {
  "use server";

  const postId = formData.get("id");
  const postVotes = formData.get("votes");

  if (typeof postId !== "string" || typeof postVotes !== "string") return;

  // Grab the "posts" blob store and persist the incremented vote count.
  const store = getStore("posts");
  const voteSum = Number(postVotes) + 1;

  await store.set(postId, String(voteSum));

  console.log("done");
  return voteSum;
});

Final Thoughts

With high-quality primitives, we can enable library and framework creators to create thin integration layers and adapters. This way, instead of focusing on how any specific platform operates, it will be possible to focus on the actual user experience and practical use cases for such features. Monoliths and deeply integrated tooling make sense for building platforms fast, with strong vendor lock-in, but that’s not what the community needs. Betting on the web platform is a more sensible and future-friendly way forward.

Let me know in the comments what your take is about unbiased tooling versus opinionated setups!
