Thursday, January 18, 2024

CSS Responsive Multi-Line Ribbon Shapes (Part 1)

 Ribbons have been used to accent designs for many years now. But the way we approach them in CSS has evolved with the introduction of newer features, like calc(), color-mix(), and trigonometric functions. In this article, Temani Afif combines background and gradient tricks to create ribbon shapes in CSS that are not only responsive but support multi-line text and are easily adjustable with a few CSS variables.

Back in the early 2010s, it was nearly impossible to avoid ribbon shapes in web designs. It was actually back in 2010 that Chris Coyier shared a CSS snippet that I am sure has been used thousands of times over.

And for good reason: ribbons are fun and interesting to look at. They’re often used for headings, but that’s not all, of course. You’ll find corner ribbons on product cards (“Sale!”), badges with trimmed ribbon ends (“First Place!”), or even ribbons as icons for bookmarks. Ribbons are playful, wrapping around elements, adding depth and visual anchors to catch the eye’s attention.

I have created a collection of more than 100 ribbon shapes, and we are going to study a few of them in this little two-part series. The challenge is to rely on a single element to create different kinds of ribbon shapes. What we really want is to create a shape that accommodates as many lines of text as you throw at it. In other words, there are no fixed dimensions or magic numbers — the shape should adapt to its content.

Here is a demo of what we are building in this first part:

See the Pen Responsive multi-line ribbon shapes by Temani Afif.

You can play with the text, adjust the screen size, change the font properties, and the shape will always fit the content perfectly. Cool, right? Don’t look at the code just yet because we will build this together from scratch.

How Does It Work?

We are going to rely on a single HTML element, an <h1> in this case, though you can use any element you’d like as long as it can contain text.

<h1>Your text goes here</h1>

Now, if you look closely at the ribbon shapes, you can notice a general layout that is the same for both designs. There’s really one piece that repeats over and over.

Sure, this is not the exact ribbon shape we want, but all we are missing is the cutouts on the ends. The idea is to first start with this generic design and add the extra decoration as we go.

Both ribbons in the demo we looked at are built using pretty much the same exact CSS; the only differences are nuances that help differentiate them, like color and decoration. That’s my secret sauce! Most of the ribbons from my generator share a common code structure, and I merely adjust a few values to get different variations.

Let’s Start With The Gradients

Any time I hear that a component’s design needs to be repeated, I instantly think of background gradients. They are perfect for creating repeatable patterns, and they are capable of drawing lines with hard stops between colors.

We’re essentially talking about applying a background behind a text element. Each line of text gets the background, repeated for as many lines of text as there happen to be. So, the gradient needs to be as tall as one line of text. If you didn’t know it, we recently got the new line height (lh) unit in CSS that allows us to get the computed value of the element’s line-height. In our case, 1lh will always be equal to the height of one line of text.

Note: It appears that Safari uses the computed line height of a parent element rather than basing the lh unit on the element itself. I’ve accounted for that in the code by explicitly setting a line-height on the body element, which is the parent in our specific case. But hopefully, that will be unnecessary at some point in the future.

Let’s tackle our first gradient. It’s a rectangular shape behind the text that covers part of the line and leaves breathing space between the lines.

The gradient’s red color is set to 70% of the height, which leaves 30% of transparent color to account for the space between lines.

h1 {
  --c: #d81a14;
  
  background-image: linear-gradient(var(--c) 70%, #0000 0);
  background-position: 0 .15lh;
  background-size: 100% 1lh;
}

Nothing too complex, right? We’ve established a background gradient on an h1 element. The color is controlled with a CSS variable (--c), and we’ve sized it with the lh unit to align it with the text content.

Note that the offset (.15lh) is equal to half the space between lines. We could have used a gradient with three color values (e.g., transparent, #d81a14, and transparent), but it’s more efficient and readable to keep things to two colors and then apply an offset.
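
If you’re curious what that three-color version would look like, here is a sketch of the equivalent gradient — my own illustration, not code from the demo. A 70% band offset by .15lh is the same as a band running from 15% to 85% of each line with no offset:

h1 {
  --c: #d81a14;

  /* same visual result: transparent until 15%, color from 15% to 85%, transparent after */
  background-image: linear-gradient(#0000 15%, var(--c) 0 85%, #0000 0);
  background-size: 100% 1lh;
}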

Next, we need a second gradient for the wrapped or slanted part of the ribbon. This gradient is positioned behind the first one. The following figure demonstrates this with a little opacity added to the front ribbon’s color to see the relationship better.

Here’s how I approached it:

linear-gradient(to bottom right, #0000 50%, red 0 X, #0000 0);

This time, we’re using keywords to set the gradient’s direction (to bottom right). Meanwhile, the color starts at the diagonal (50%) instead of its default 0% and should stop at a value we’re calling X as a placeholder. This value is a bit tricky, so let’s get a visual that illustrates what we’re doing.

The green arrow illustrates the gradient direction, and we can see the different color stops: 50%, X, and 100%. We can apply some geometry rules to solve for X:

(X - 50%) / (100% - 50%) = 70% / 100%
X - 50% = 0.7 * 50% = 35%
X = 85%

This gives us the exact point for the end of the gradient’s hard color stop. We can apply the 85% value to our gradient configuration in CSS:

h1 {
  --c: #d81a14;
  
  background-image: 
    linear-gradient(var(--c) 70%, #0000 0), 
    linear-gradient(to bottom left, #0000 50%, color-mix(in srgb, var(--c), #000 40%) 0 85%, #0000 0);
  background-position: 0 .15lh;
  background-size: 100% 1lh;
}

You’re probably noticing that I added the new color-mix() function to the second gradient. Why introduce it now? Because we can use it to mix the main color (#d81a14) with white or black. This allows us to get darker or lighter values of the color without having to introduce more color values and variables to the mix. It helps keep things efficient!

See the Pen The gradient configuration by Temani Afif.

We have accomplished the main piece of the design! We can turn our attention to creating the ribbon shape. You will notice some unwanted repetition at the top and the bottom. Don’t worry about it; it will be fixed in the next section.

Next, Let’s Make The Ribbons

Before we move on, let’s take a moment to remember that we’re making two ribbons. The demo at the beginning of this article provides two examples: a red one and a green one. They’re similar in structure but differ in the visual details.

For the first one, we’re taking the start and end of the ribbon and basically clipping a triangle out of it. We’ll do a similar thing with the second ribbon example with an extra fold step for the cutout part.

The First Ribbon

The only thing we need to do for the first ribbon is apply a clip-path to cut the triangular shape out from the ribbon’s ends while trimming unwanted artifacts from the repeating gradient at the top and bottom of the ribbon.

We have all of the coordinates we need to make our cuts using the polygon() function on the clip-path property. Coordinates are not always intuitive, but I have expanded the code and added a few comments below to help you identify some of the points from the figure.

h1 {
  --r: 10px; /* control the cutout */

  clip-path: polygon(
   0 .15lh, /* top-left corner */
   100% .15lh, /* top-right corner */
   calc(100% - var(--r)) .5lh, /* top-right cutout */
   100% .85lh,
   100% calc(100% - .15lh), /* bottom-right corner  */
   0 calc(100% - .15lh), /* bottom-left corner */
   var(--r) calc(100% - .5lh), /* bottom-left cutout */
   0 calc(100% - .85lh)
  );
}

This completes the first ribbon! Now, we can wrap things up (pun intended) with the second ribbon.

The Second Ribbon

We will use both pseudo-elements to complete the shape. The idea can be broken down like this:

  1. We create two rectangles that are placed at the start and end of the ribbon.
  2. We rotate the two rectangles with an angle that we define using a new variable, --a.
  3. We apply a clip-path to create the triangle cutout and trim where the green gradient overflows the top and bottom of the shape.

First, the variables:

h1 {
  --r: 10px;  /* controls the cutout */
  --a: 20deg; /* controls the rotation */
  --s: 6em;   /* controls the size */
}

Next, we’ll apply the styles that the :before and :after pseudo-elements share in common:

h1:before,
h1:after {
  content: "";
  position: absolute;
  height: .7lh;
  width: var(--s);
  background: color-mix(in srgb, var(--c), #000 40%);
  rotate: var(--a);
}

Then, we position each pseudo-element and make our clips:

h1:before {
  top: .15lh;
  right: 0;
  transform-origin: top right;
  clip-path: polygon(0 0, 100% 0, calc(100% - .7lh / tan(var(--a))) 100%, 0 100%, var(--r) 50%);
}

h1:after {
  bottom: .15lh;
  left: 0;
  transform-origin: bottom left;
  clip-path: polygon(calc(.7lh / tan(var(--a))) 0, 100% 0, calc(100% - var(--r)) 50%, 100% 100%, 0 100%);
}

We are almost done! We still have some unwanted overflow where the repeating gradient bleeds out of the top and bottom of the shape. Plus, we need small cutouts to match the pseudo-elements’ shapes.

It’s clip-path again to the rescue, this time on the main element:

clip-path: polygon(
    0 .15lh, /* top-left corner */
    calc(100% - .7lh/sin(var(--a))) .15lh, /* where the rotated pseudo-element meets the top edge */
    calc(100% - .7lh/sin(var(--a)) - 999px) calc(.15lh - 999px*tan(var(--a))), /* 999px: an arbitrarily large value extending the cut along the pseudo-element’s angle */
    100% -999px,
    100% .15lh, /* top-right corner */
    calc(100% - .7lh*tan(var(--a)/2)) .85lh, /* small cutout matching the folded part */
    100% 1lh,
    100% calc(100% - .15lh), /* bottom-right corner */
    calc(.7lh/sin(var(--a))) calc(100% - .15lh), /* same logic, mirrored at the bottom left */
    calc(.7lh/sin(var(--a)) + 999px) calc(100% - .15lh + 999px*tan(var(--a))),
    0 999px,
    0 calc(100% - .15lh), /* bottom-left corner */
    calc(.7lh*tan(var(--a)/2)) calc(100% - .85lh),
    0 calc(100% - 1lh)
);

Ugh, looks scary! I’m taking advantage of a new set of trigonometric functions that help a bunch with the calculations but probably look foreign and confusing if you’re seeing them for the first time. There is a mathematical explanation behind each value in the snippet that I’d love to explain, but it’s long-winded. That said, I’m more than happy to explain them in greater detail if you drop me a line in the comments.

Our second ribbon is completed! Here is the full demo again with both variations.

See the Pen Responsive multi-line ribbon shapes by Temani Afif.

Wrapping Up

We looked at two ribbon variations that use almost the same code structure, but we can make many, many more the same way. Your homework, if you accept it, will be to make the following variations using what you have learned so far.

You can still find the code within my ribbons collection, but it’s a good exercise to try writing it without peeking. Maybe you will find a different implementation than mine and want to share it with me in the comments! In the next article of this two-part series, we will increase the complexity and produce two more interesting ribbon shapes.

An Actionable And Reliable Usability Questionnaire With Only 7 Items: Inuit

 Inuit (short for “Interface Usability Instrument”) is a new questionnaire you can use to assess the usability of your user interface. It has been designed to be more diagnostic than existing usability instruments such as SUS, and for use with machine learning, all while asking fewer questions than other questionnaires. This article explores how and why Inuit has been developed and why we can be sure that it actually measures usability, and reliably so.

A lot of contemporary usability evaluation relies on easily measurable and readily available metrics like conversion rates, task success rates, and time on task, even though it’s questionable how well these are suited for reliably capturing a concept as complex as usability in its entirety.

The same holds for user experience. When an instrument is used to measure usability, e.g., in controlled user studies or via live intercepts, it’s often the simple single ease question, which is generally not a bad choice, but has its limits.

Note: For more information on usability evaluation, you can check the article “Current Practice in Measuring Usability: Challenges to Usability Studies and Research” by Kasper Hornbæk and “Growth Marketing Considered Harmful” by Maximilian Speicher.

Ultimately, when you intend to precisely and reliably measure the usability of a digital product, there’s no way around a scientifically well-founded instrument or, in everyday terms, a “questionnaire.” The most famous one is probably SUS, the System Usability Scale, but there are also some others around. Two examples are UMUX, the Usability Measure for User Experience, and SUMI, the Software Usability Measurement Inventory.

To join this party, in this article, we introduce Inuit (the Interface Usability Instrument), a new usability questionnaire. We will share how and why it was developed and how it’s different from the questionnaires mentioned above.

To immediately cut to the chase: With a scale from 1 (“completely disagree”) to 5 (“completely agree”), Inuit looks as follows. The parts in square brackets can be adapted to your specific interface, e.g., products in an online shop, articles on a news website, or results in a search engine.

Q1: I found [the information] I was looking for.
Q2: I could easily understand [the provided information].
Q3: I was confused using [the interface].
Q4: I was distracted by elements of [the interface].
Q5: Typography and layout added to readability.
Q6: There was too much information presented in too little space.
Q7: [My desired information] was easily reachable.

The Inuit metric (a score between 0 and 100, analogous to SUS) can then be calculated as follows:

(Q1 + Q2 + Q5 + Q7 - Q3 - Q4 - Q6 + 11) * 100/28

Why 11 and 28?

We have seven items rated on a scale from 1 to 5, but for some (Q1, Q2, Q5, Q7), 5 is the best rating, and for some (Q3, Q4, Q6), 1 is the best rating. Hence, we need to subtract the latter from 6 when we add up everything: Q1 + Q2 + Q5 + Q7 + (6-Q3) + (6-Q4) + (6-Q6) = Q1 + Q2 + Q5 + Q7 - Q3 - Q4 - Q6 + 18. This gives us an overall score between 7 and 35.
Now, we want to normalize this to a score between 0 and 100. For this, we first subtract 7 for a score between 0 and 28: Q1 + Q2 + Q5 + Q7 - Q3 - Q4 - Q6 + 18 - 7 = Q1 + Q2 + Q5 + Q7 - Q3 - Q4 - Q6 + 11. Finally, for a score between 0 and 100, we need to divide everything by 28 and multiply by 100: (Q1 + Q2 + Q5 + Q7 - Q3 - Q4 - Q6 + 11) * 100/28.
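
To make the arithmetic concrete, here is a small TypeScript sketch of the scoring — our own illustration of the formula above, not code from the original paper:

type Ratings = [number, number, number, number, number, number, number]; // Q1..Q7, each 1-5

function inuitScore([q1, q2, q3, q4, q5, q6, q7]: Ratings): number {
  // Q3, Q4, and Q6 are negatively worded, so they are subtracted;
  // +11 shifts the raw sum into a 0-28 range, and *100/28 normalizes it to 0-100.
  return ((q1 + q2 + q5 + q7 - q3 - q4 - q6 + 11) * 100) / 28;
}

inuitScore([5, 5, 1, 1, 5, 1, 5]); // best possible answers -> 100
inuitScore([1, 1, 5, 5, 1, 5, 1]); // worst possible answers -> 0
inuitScore([4, 4, 2, 2, 4, 3, 4]); // a mixed response -> ~71.4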

You might have noticed that, compared to SUS with its 10 items, Inuit consists of only 7 questions. Apart from that, it has two more advantages:

  • Inuit has been designed to provide training data for machine-learning models that can then automatically predict usability from user interactions or web analytics data.
  • Its items (i.e., the questions) are diagnostic, at least to a certain degree. This means you see what’s wrong with your interface simply by looking at the results from the questionnaire. Have a bad rating for readability (Q5)? You should make the text in your interface more readable.

Now, at this point, you can either accept all this and simply get going with Inuit to measure the usability of your digital product (we’d be delighted). Or, if you’re interested in the details, you’re very welcome to keep reading (we’d be even more delighted).


“So, Why Did You Develop Yet Another Usability Questionnaire?”

You probably already guessed that Inuit wasn’t developed just for fun or because there aren’t enough questionnaires around. But to answer this, we have to reach back a bit.

In 2014, Max was a Ph.D. student busy working on his dissertation. The goal of it all was to find a way to determine the usability of an interface automatically from users’ interactions, such as what they do with the mouse cursor and how they scroll, rather than making participants in a user study fill out pages and pages of questions. Additionally, the cherry on top should be to also automatically propose optimizations for the interface (e.g., if user interactions suggest the interface is not readable, make the text larger).

To be able to achieve this, however, it was first necessary to determine if and how well certain interactions (mouse cursor movements, mouse cursor speed, scrolling behavior, and so on) predict the usability — or rather its individual aspects — of an interface. This meant collecting training data through users’ interactions with an interface and their usability assessments of that interface. Then, one could investigate how well (combinations of) tracked interactions predict (aspects of) usability using regression and/or machine-learning models. So far, so good, as far as the theory is concerned.

In practice, one important decision that would have huge implications for the project was how to collect the usability assessments mentioned above when gathering the training data. Since usability is a latent variable, meaning it can’t be observed directly, a proper instrument (i.e., a questionnaire) is necessary to assess it. And the most famous one is undeniably the System Usability Scale (SUS). It should’ve been an obvious choice, shouldn’t it?

A closer look showed that, while SUS would be perfectly well suited to train statistical models to infer usability from interactions, it simply wasn’t the perfect fit. This was the case mainly for two reasons:

  1. First, many questions contained in SUS (“I think that I would like to use this system frequently,” “I found the various functions in this system were well integrated,” and “I felt very confident using the system,” among others) describe the effects of good or bad usability — users feel confident because the system is highly usable, and so on. But they don’t describe the aspects of usability that cause them, e.g., bad understandability. This makes it difficult to know what should be done to make it better. What exactly should we change to make users feel more confident? The questions are not diagnostic or “actionable” and require further qualitative research to uncover the causes of bad ratings. It’s the same for UMUX and SUMI.
  2. Second, with just 10 items, SUS is already a very small questionnaire. However, the fewer items, the less friction and the more motivated users are to actually answer. So, is ten really the minimum, or would a proper questionnaire with fewer items be possible?

With these considerations in mind, Max went on and ultimately developed Inuit, the instrument presented in the introduction. He ended up with seven items that were better suited for the needs of his Ph.D. project and more actionable than those of SUS.

“How do you know this actually measures usability?”

Inuit was developed in a two-step process. The first step was a review of established guidelines and checklists with more than 250 rules for good usability, which were filtered based on the requirements above and resulted in a first draft for the new usability instrument. This draft was then discussed and refined in expert interviews with nine usability professionals.

The final draft of Inuit, with the seven factors informativeness (Q1), understandability (Q2), confusion (Q3), distraction (Q4), readability (Q5), information density (Q6), and reachability (Q7), was evaluated using a confirmatory factor analysis (CFA).

CFA is a method for assessing construct validity, which means it “is used to test whether measures of a construct are consistent with a researcher’s understanding of the nature of that construct” or “to test whether the data fit a hypothesized measurement model.”
Wikipedia

Put very simply, by using a CFA, we can check how well a theory matches the practice. In our case, the “construct” or “hypothesized measurement model” (theory) was Inuit, and the data (practice) came from a user study with 81 participants in which four news websites were evaluated using an Inuit questionnaire.

In a CFA, there are various metrics that show how well a construct fits the data. Two well-established ones are CFI, the comparative fit index, and RMSEA, the root mean square error of approximation — both range from 0 to 1.

For CFI, 0.95 or higher is “accepted as an indicator of good fit” (Wikipedia). Inuit’s value was 0.971. For RMSEA, “values less than 0.05 are good, values between 0.05 and 0.08 are acceptable” (Kim et al.). Inuit’s value was 0.063. This means our theory matches the practice, or Inuit’s questions do indeed measure usability.

Case Study #1

Inuit was first put into practice in 2014 at Unister GmbH, which at that time ran travel search engines like fluege.de and reisen.de, and was developing an entirely new semantic search engine. The results page of this search engine, named BlueKiwi, was evaluated in a user study with 81 participants using Inuit.

In this first study, the overall score averaged across all participants was 59.9. Ratings were especially bad for informativeness (Q1), information density (Q6), and reachability (Q7). Based on these results, BlueKiwi’s search results page was redesigned.

Among other things, the number of advertisements was reduced (better reachability), search results were displayed more concisely (better informativeness), and everything was more clearly aligned and separated (better information density). See the figure below for the full list of changes.

After the redesign, we ran another study, in which the overall Inuit score increased to 67.5 (up 7.6 points, or roughly 13%), with improvements in every single one of the seven items.

“Why Wait 9 Years To Write This Article?” 

There were various factors at play. One was what’s called the research–practice gap. It’s often difficult for academic work to gain traction outside the academic community. One reason for this is that work that is part of a Ph.D. project is often a little neglected after it has served its purpose — being published in a research paper, included in a thesis, and presented at a Ph.D. defense — which is pretty much exactly what happened to Inuit.

Case Study #2

Another factor, however, was that we wanted to put the instrument into practice in a real-world industry setting over a longer period of time first, and we got the chance to do that only relatively recently.

We ran a longitudinal study over a period of almost two years in which we ran quarterly benchmarks of multiple e-commerce websites using both SUS and Inuit, with a total of 6,368 users. The results of these benchmarks were included in the dashboard of product KPIs and regularly shared with the team of 6 product managers. After roughly two years of conducting and sharing benchmarks, we interviewed the product managers about their use of the data, challenges, wishes, and potential for improvement.

What a high-level analysis showed was that all of the product managers, in one way or another, described Inuit as more intuitive to understand, less abstract, and more actionable compared to SUS when looking at both instruments as a whole.

They found most of Inuit’s items more specific and easier to interpret and, therefore, more relevant from a product manager’s perspective. SUS, in contrast, was described as, e.g., “good for [the] overall score” and the bird’s eye view. Virtually all product managers, however, wished for even more specific insights into where exactly on the website usability problems occur. One suggested building an optimal instrument by combining certain items from both SUS and Inuit.

As part of the analysis, we computed Cronbach’s α for Inuit (based on 3,190 answers) as well as SUS (based on 3,178 answers).

Cronbach’s α is a statistical measure for the internal consistency of an instrument, which can be interpreted as “the extent to which all of the items of a test measure the same latent variable [i.e., usability].”
Wikipedia

Values of 0.7 or above are generally deemed acceptable. Inuit reached a value of 0.7; SUS a value of 0.8.

To top things off, Inuit and SUS showed a considerable (Pearson’s r = 0.53) and highly significant (p < 0.001) correlation when looking at overall scores aggregated over the different e-commerce websites and tasks the study participants had to complete.

In layman’s terms: when the SUS score goes up, the Inuit score goes up; when the SUS score goes down, the Inuit score goes down. Both questionnaires measure the same thing (with a very, very rough approximation of INUIT = 0.6 × SUS + 17).

Since these first results were so encouraging, we decided to write this general, more practice-oriented overview article about Inuit now. A deeper analysis of our big dataset, however, is yet to be conducted, and our current plan is to report findings in much more detail separately.

“Why Do You Think Inuit Is Better Than SUS?”

We don’t think so (or that it’s better than any scientifically founded usability instrument, for that matter). There are many ways to measure the same latent variable, in this case, usability. Both questionnaires, SUS and Inuit, have proven that they can measure the usability of an interface. Still, they were developed in different contexts and with different goals and requirements in mind.

So, to address the question of when it’s better to use which, as true researchers, we have to say “it depends” (annoying, isn’t it?).

SUS, which has been around since the 1990s, is probably the most popular and well-established usability instrument. It’s been studied and validated over and over, which Inuit, of course, can’t compete with yet and still has a long way to go. If the goal is to compare scores at a high level and even tap into public benchmark numbers for orientation, SUS would be preferable.

However, by design, Inuit has two advantages over SUS:

  1. Inuit has only seven items and is still a “complete” usability instrument.
    30% fewer questions can be a major factor when it comes to motivating users to fill out a questionnaire. Assuming that a big part of remote online studies is done quickly in passing and with short attention spans, designing efficient studies that generate reliable output and minimize effects like participant fatigue can be a major challenge for researchers.
  2. Inuit’s items have been specifically designed to be more actionable for practitioners and lend themselves better to manual analysis and inferring potential interface optimizations.
    As we’ve learned in our second case study, talking to actual product managers revealed that for them, the results of a usability assessment should always be as specific as possible. Comparing the items of both, Inuit points to more concrete areas to improve than SUS, which was perceived as rather vague.

“Where Can I Use Inuit?”

Generally, in any scenario that involves an interface and a task — either defined by you or the user themselves. In the studies mentioned and described above, we could demonstrate that Inuit works well in controlled as well as natural-use settings and with news websites, search engines, and e-commerce shops.

Now, of course, we can’t evaluate Inuit with every possible kind of interface, and that is part of the reason for this article. Inuit has been around and publicly available since 2014, and we have no idea if and how it has been used by other researchers — but if you have used it, please let us know about it. We’d be thrilled to hear about your experience and results.

The questions presented at the beginning of the article are relatively focused on finding information because that’s where Inuit is historically coming from and because most of the things users do involve the finding of information of some kind. (Please keep in mind that information doesn’t have to be text. On the contrary, most information is non-textual.) But those questions can be adapted as long as they still reflect the underlying aspects of usability, which are informativeness, understandability, confusion, distraction, readability, information density, and reachability.

Say, for instance, you want to evaluate a module from an e-learning course, e.g., in the form of an annotated video with a subsequent quiz. To accommodate the task at hand, Q1 could be rephrased to “I had all the information necessary to complete the module” and Q7 to “All the information necessary to complete the module was easily reachable.”

Conclusion

There are plenty of usability questionnaires out there, and we have added a new one to the pool — Inuit. Why? Because sometimes, you find yourself in a situation where none of the existing questionnaires is the perfect fit. Inuit has been designed to be more diagnostic than existing usability instruments such as SUS, and for use with machine learning, all while asking fewer questions than other questionnaires. So, if any of this seems relevant to your use cases or context of work, why not give it a try?

From a scientific and statistical point of view, in a confirmatory factor analysis (CFA), Inuit has demonstrated that its questions do indeed measure usability. On top of that, it’s consistent and correlates well with SUS, based on data from a large-scale, longitudinal user study.

Note: If you want to dive deeper into the science behind Inuit, e.g., how exactly the items/questions were chosen, you can read the corresponding research paper “Inuit: The Interface Usability Instrument,” which was presented at the 2015 HCI International Conference. If you want to learn more about how Inuit can be used to train machine-learning models, read “Ensuring Web Interface Quality through Usability-Based Split Testing.” And finally, if you want to see how Inuit can be used as the basis for a tool that automatically proposes optimizations for an interface, you can refer to “S.O.S.: Does Your Search Engine Results Page (SERP) Need Help?” which was presented at the 2015 ACM Conference on Human Factors in Computing Systems.

Addressing Accessibility Concerns With Using Fluid Type

 The CSS clamp() function is often paired with viewport units for “fluid” font sizing that scales the text up and down at different viewport sizes. As common as this technique is, several voices warn that it opens up situations where text can fail WCAG Success Criterion 1.4.4, which specifies that text should scale up to at least 200% when the user’s browser reaches its 500% maximum zoom level. Max Barvian takes a deep look at the issue and offers ideas to help address it.

You may already be familiar with the CSS clamp() function. You may even be using it to fluidly scale a font size based on the browser viewport. Adrian Bece demonstrated the concept in another Smashing Magazine article just last year. It’s a clever CSS “trick” that has been floating around for a while.

But if you’ve used the clamp()-based fluid type technique yourself, then you may have also run into articles that offer a warning about it. For example, Adrian mentions this in his article:

“It’s important to reiterate that using rem values doesn’t automagically make fluid typography accessible for all users; it only allows the font sizes to respond to user font preferences. Using the CSS clamp function in combination with the viewport units to achieve fluid sizing introduces another set of drawbacks that we need to consider.”

Here’s Una Kravets with a few words about it on web.dev:

“Limiting how large text can get with max() or clamp() can cause a WCAG failure under 1.4.4 Resize text (AA), because a user may be unable to scale the text to 200% of its original size. Be certain to test the results with zoom.”

Trys Mudford also has something to say about it in the Utopia blog:

“Adrian Roselli quite rightly warns that clamp can have a knock-on effect on the maximum font-size when the user explicitly sets a browser text zoom preference. As with any feature affecting typography, ensure you test thoroughly before using it in production.”

Mudford cites Adrian Roselli, who appears to be the core source of the other warnings:

“When you use vw units or limit how large text can get with clamp(), there is a chance a user may be unable to scale the text to 200% of its original size. If that happens, it is WCAG failure under 1.4.4 Resize text (AA) so be certain to test the results with zoom.”

So, what’s going on here? And how can we address any accessibility issues so we can keep fluidly scaling our text? That is exactly what I want to discuss in this article. Together, we will review what the WCAG guidelines say to understand the issue, then explore how we might be able to use clamp() in a way that adheres to WCAG Success Criterion (SC) 1.4.4.

WCAG Success Criterion 1.4.4

Let’s first review what WCAG Success Criterion 1.4.4 says about resizing text:

“Except for captions and images of text, text can be resized without assistive technology up to 200 percent without loss of content or functionality.”

Normally, if we’re setting CSS font-size to a non-fluid value, e.g., font-size: 2rem, we never have to worry about resizing behavior. All modern browsers can zoom up to 500% without additional assistive technology.

So, what’s the deal with sizing text with viewport units like this:

h1 {
  font-size: 5vw;
}

Here’s a simple example demonstrating the problem. I suggest viewing it in either Chrome or Firefox because zooming in Safari can behave differently.

See the Pen vw-based font-size scaling [forked] by Maxwell Barvian.

If you click the zoom buttons in the demo’s bottom toolbar, you’ll notice that although the page zoom level changes, the text doesn’t get smaller. Nothing really changes, in fact.

The issue is that, unlike rem and px values, browsers do not scale viewport-based units when zooming the page. This makes sense when thinking about it. The viewport itself doesn’t change when the user zooms in or out of a page. Where we see font-size: 1rem display like font-size: 0.5rem at a 50% zoom, font-size: 5vw stays the same size at all zoom levels.

Herein lies the accessibility issue. Font sizes based on vw — or any other viewport-based units for that matter — could potentially fail to scale to two times their original size the way WCAG SC 1.4.4 wants them to. That’s true even at 500%, which is the maximum zoom level for most browsers. If a user needs to zoom in at that scale, then we need to respect that for legibility.

Back To clamp()

Where does clamp() fit into all of this? After all, many of us don’t rely solely on vw units to size type; we use any of the many tools that are capable of generating a clamped function with a rem or px-based component. Here’s one example that scales text between 16px and 48px when the viewport is between 320px and 1280px. I’m using px values for simplicity’s sake, but it’s better to use rem in terms of accessibility.

h1 {
  font-size: clamp(16px, 5.33px + 3.33vw, 48px);
}

Try zooming into the next demo to see how the text behaves with this approach.

See the Pen clamp()ed font-size [forked] by Maxwell Barvian.

Is this font size accessible? In other words, if we zoom the page to the browser’s 500% maximum, does the content display at least double its original size? If we open the demo in full-page view and resize the browser width to, say, 1500px, notice what happens when we zoom in to 500%.

The text only scales up to 80px, or 1.67 times its original size, even though we zoomed the entire page to five times its original size. And because WCAG SC 1.4.4 requires that text can scale to at least two times its original size, this simple example would fail an accessibility audit, at least in most browsers at certain viewport widths.

Surely this can’t be a problem for all clamped font sizes with vw units, right? What about one that only increases from 16px to 18px:

h1 {
  font-size: clamp(16px, 15.33px + 0.208vw, 18px);
}

The vw part of that inner calc() function (clamp() supports calc() without explicitly declaring it) is so small that it couldn’t possibly cause the same accessibility failure, right?

Sure enough, even though it doesn’t get quite to 500% of its original size when the page is zoomed to 500%, the size of the text certainly passes the 200% zoom specified in WCAG SC 1.4.4.

So, clamped viewport-based font sizes fail WCAG SC 1.4.4 in some cases but not in others. The only advice I’ve seen for determining which situations pass or fail is to check each of them manually, as Adrian Roselli originally suggested. But that’s time-consuming and imprecise because the functions don’t scale intuitively.

There must be some relationship between our inputs — i.e., the minimum font size, maximum font size, minimum breakpoint, and maximum breakpoint — that can help us determine when they pose accessibility issues.

Thinking Mathematically

If we think about this problem mathematically, we really want to ensure that z₅(v) ≥ 2z₁(v). Let’s break that down.

z₁(v) and z₅(v) are functions that take the viewport width, v, as their input and return a font size at a 100% zoom level and a 500% zoom level, respectively. In other words, what we want to know is at what range of viewport widths will z₅(v) be less than 2×z₁(v), which represents the minimum size outlined in WCAG SC 1.4.4?

Using the first clamp() example we looked at that failed WCAG SC 1.4.4, we know that the z₁ function is the clamp() expression:

z₁(v) = clamp(16, 5.33 + 0.0333v, 48)

Notice: The vw units are divided by 100 to translate from CSS where 100vw equals the viewport width in pixels.

As for the z₅ function, it’s tempting to think that z₅ = 5z₁. But remember what we learned from that first demo: viewport-based units don’t scale up with the browser’s zoom level. This means z₅ is more correctly expressed like this:

z₅(v) = clamp(16*5, 5.33*5 + 0.0333v, 48*5)

Notice: This scales everything up by 5 (or 500%), except for v. This simulates how the browser scales the page when zooming.

Let’s represent the clamp() function mathematically. We can convert it to a piecewise function, meaning z₁(v) and z₅(v) would ultimately look like the following figure:

At 100% zoom: z₁(v) = 16 when v ≤ 320, 5.33 + 0.0333v when 320 < v < 1280, and 48 when v ≥ 1280.
At 500% zoom: z₅(v) = 80 when v ≤ 1600, 26.65 + 0.0333v when 1600 < v < 6400, and 240 when v ≥ 6400.

We can graph these functions to help visualize the problem. Here’s the base function, z₁(v), with the viewport width, v, on the x-axis:


This looks about right. The font size stays at 16px until the viewport is 320px wide, and it increases linearly from there before it hits 48px at a viewport width of 1280px. So far, so good.

Here’s a more interesting graph comparing 2z₁(v) and z₅(v):


Can you spot the accessibility failure on this graph? When z₅(v) (in green) is less than 2z₁(v) (in teal), the viewport-based font size fails WCAG SC 1.4.4.

Let’s zoom into the bottom-left region for a closer look:


This figure indicates that failure occurs when the browser width is approximately between 1050px and 2100px. You can verify this by opening the original demo again and zooming into it at different viewport widths. When the viewport is less than 1050px or greater than 2100px, the text should scale up to at least two times its original size at a 500% zoom. But when it’s in between 1050px and 2100px, it doesn’t.

Hint: We have to manually measure the text — e.g., take a screenshot — because browsers don’t show zoomed values in DevTools.

General Solutions

For simplicity’s sake, we’ve only focused on one clamp() expression so far. Can we generalize these findings somehow to ensure any clamped expression passes WCAG SC 1.4.4?

Let’s take a closer look at what’s happening in the failure above. Notice that the problem is caused because 2z₁(v) — the SC 1.4.4 requirement — reaches its peak before z₅(v) starts increasing.

When would that be the case? Everything in 2z₁(v) is scaled by 200%, including the slope of the line (v). The function reaches its peak value at the same viewport width where z₁(v) reaches its peak value (the maximum 1280px breakpoint). That peak value is two times the maximum font size we want, which, in this case, is 2*48, or 96px.

[Figure: a graph showing 2z₁(v) reaching its peak value at the maximum breakpoint.]

However, the slope of z₅(v) is the same as z₁(v). In other words, the function doesn’t start increasing from its lowest clamped point — five times the minimum font size we want — until the viewport width is five times the minimum breakpoint. In this case, that is 5*320, or 1600px.

[Figure: a graph showing z₅(v) sitting at its lowest clamped point until five times the minimum breakpoint.]

Thinking about this generally, we can say that if 2z₁(v) peaks before z₅(v) starts increasing — that is, if the maximum breakpoint is less than five times the minimum breakpoint — then the peak value of 2z₁(v) must be less than or equal to the lowest clamped value of z₅(v). In other words, two times the maximum font size must be less than or equal to five times the minimum font size.

Or simpler still: The maximum value must be less than or equal to 2.5 times the minimum value.

What about when the maximum breakpoint is more than five times the minimum breakpoint? Let’s see what our graph looks like when we change the maximum breakpoint from 1280px to 1664px and the maximum font size to 40px:

Technically, we could get away with a slightly higher maximum font size. To figure out just how much higher, we’d have to solve for z₅(v) ≥ 2z₁(v) at the point when 2z₁(v) reaches its peak, which is when v equals the maximum breakpoint. (Hat tip to my brother, Zach Barvian, whose excellent math skills helped me with this.)

To save you the math, you can play around with this calculator to see which combinations pass WCAG SC 1.4.4.
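
If you prefer code to calculators, here is a small TypeScript sketch of the same check, based on the px-based z₁(v) and z₅(v) functions from earlier — my own illustration, not the calculator’s actual source:

interface FluidConfig {
  minFont: number; // minimum font size, px
  maxFont: number; // maximum font size, px
  minBp: number;   // minimum breakpoint, px
  maxBp: number;   // maximum breakpoint, px
}

// Font size at 100% zoom for viewport width v, i.e., z1(v).
function z1({ minFont, maxFont, minBp, maxBp }: FluidConfig, v: number): number {
  const slope = (maxFont - minFont) / (maxBp - minBp);
  const intercept = minFont - slope * minBp;
  return Math.min(Math.max(intercept + slope * v, minFont), maxFont);
}

// Visual font size at 500% zoom, i.e., z5(v): everything is scaled by 5
// except the slope, because viewport units ignore page zoom.
function z5(cfg: FluidConfig, v: number): number {
  const slope = (cfg.maxFont - cfg.minFont) / (cfg.maxBp - cfg.minBp);
  const intercept = 5 * (cfg.minFont - slope * cfg.minBp);
  return Math.min(Math.max(intercept + slope * v, 5 * cfg.minFont), 5 * cfg.maxFont);
}

// Scan viewport widths and report where z5(v) < 2 * z1(v), the SC 1.4.4 failure zone.
function failureRange(cfg: FluidConfig, from = 320, to = 8000): [number, number] | null {
  let start: number | null = null;
  let end: number | null = null;
  for (let v = from; v <= to; v++) {
    if (z5(cfg, v) < 2 * z1(cfg, v)) {
      start ??= v;
      end = v;
    }
  }
  return start === null ? null : [start, end as number];
}

failureRange({ minFont: 16, maxFont: 48, minBp: 320, maxBp: 1280 }); // ~[1041, 2079]: the ~1050px-2100px window
failureRange({ minFont: 16, maxFont: 40, minBp: 320, maxBp: 1280 }); // null: 40 <= 2.5 * 16, so it always passes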

Conclusion

Summing up what we’ve covered:

  • If the maximum font size is less than or equal to 2.5 times the minimum font size, then the text will always pass WCAG SC 1.4.4, at least on all modern browsers.
  • If the maximum breakpoint is greater than five times the minimum breakpoint, it is possible to get away with a slightly higher maximum font size. That said, the increase is negligible, and that is a large breakpoint range to use in practice.

Importantly, that first rule is true for non-fluid responsive type as well. If you open this pen, for example, notice that it uses regular media queries to increase the h1 element’s size from an initial value of 1rem to 3rem (which violates our first rule), with an in-between stop for 2rem.

If you zoom in at 500% with a browser width of approximately 1000px, you will see that the text doesn’t reach 200% of its initial size. This makes sense because if you were to describe 2z₁(v) and z₅(v) mathematically, they would be even simpler piecewise functions with the same maximum and minimum limitations. This guideline would hold for any function describing a font size with a known minimum and maximum.

In the future, of course, we may get more tools from browsers to address these issues and accommodate even larger maximum font sizes. In the meantime, though, I hope you find this article helpful when building responsive frontends.

WordPress Playground: From 5-Minute Install To Instant Spin-Up

 WordPress Playground began as an experiment to see what a self-hosted WordPress experience might look like without the requirement of having to actually install WordPress. A year later, the experiment has evolved into a full-fledged project that not only abstracts installation but opens up WordPress to a slew of new opportunities. Ganesh Dahal demonstrates how WordPress Playground works and gets deep into how it might be used.

Many things have changed in WordPress over the years, but installation has largely remained the same: download WordPress, drop it on a server, create a database, sprinkle in some configuration, and presto, we have a WordPress site. This process was once lovingly referred to as the “famous five-minute install,” although that moniker seems to have faded with time, particularly as many hosting providers offer a more streamlined experience.

But what if WordPress didn’t require any setup at all? As in, you tap a link, and WordPress spins up a site for you right there, on demand? That’s probably difficult to imagine, considering WordPress runs on top of PHP, MySQL databases, and Apache. It’s not the most portable system.

That’s the aim of WordPress Playground, which got its first public boost when Matt Mullenweg introduced it during State of Word 2022.

In that clip, Matt describes WordPress Playground as a “virtual machine,” and it’s an apt description because all of the standard requirements for running WordPress — PHP, web server, database, and so on — are created directly in the browser. It completely abstracts installation to the extent that there’s absolutely no learning curve or technical knowledge to get a WordPress instance up and running!

How WordPress Playground Works

There’s a catch to all of this: the instance is temporary.

That’s right. If you even do so much as refresh your browser, the instance is gone. It’s as though the site evaporates into thin air. It’s not even captured in the Web Archive’s Wayback Machine. The site effectively never existed. Matt is right in calling WordPress Playground a virtual machine because it’s effectively a simulation of WordPress.

I’m not the right person to show you how WordPress Playground works under the hood, nor is that the scope of what I’m writing about. What I can tell you about, though, are the primary components that pull it all together:

  • WebAssembly (Wasm),
  • SQLite,
  • Service Worker API.

This cocktail of tech is what allows PHP to run in the browser (WebAssembly), data to be saved to a self-contained database engine (SQLite), and the site to run offline without a proper internet connection (Service Worker API).

The result is what you might call a “disposable WordPress sandbox” more than a live site you would find in a production environment. Playground creates instances that are more like isolated testing environments. In fact, it’s an ideal place to test code, themes, and plugins in different environments because Playground is capable of running different versions of PHP and WordPress, and all it takes is hitting Playground’s API with a URL. This one, for example, triggers a WordPress instance running WordPress 6.2 in a PHP 8.2 environment:

https://playground.wordpress.net/?php=8.2&wp=6.2

We’ll get to more features in a moment. The important thing to know right now is that WordPress Playground is a virtual machine that mimics a server environment capable of running WordPress directly in the browser without actually accessing a real server. So, it’s a WordPress simulation disconnected from the web but running in a web browser, not totally unlike a local WordPress installation that exists only on your machine.

Other WordPress Sandboxes

WordPress Playground is not the only — or even the first — solution to offer a disposable WordPress sandbox that can be used for testing. Some of the more popular options that already exist include TasteWP and, until recently, WP Sandbox.

That last one, WP Sandbox, formally shut its doors on September 23, 2023. It was owned by Liquid Web by way of StellarWP, which had acquired the service when they were part of Modern Tribe. I don’t have any inside information as far as why they decided to close the shop, but the introduction of WordPress Playground couldn’t have helped. I’m sure the other services are mulling over whether Playground will impact them and their businesses.

It’s not like Playground is exactly the same as other WordPress sandboxes. We are talking about a different technology stack. For instance, some services are built on top of a WordPress multi-site network, where you’re essentially spinning up a site on their server.

Despite the functional similarities between Playground and other sandboxes, there’s still plenty of room in the market, at least at the time of this writing.

Playground is still very much in its infancy and resembles a developer tool more than a consumer product, given the need to interact with its API.

For example, let’s look at the onboarding experience for TasteWP. You can create a new WordPress site instantly by hitting the following URL: https://tastewp.com/new/. After a brief series of actions, TasteWP welcomes you to your new site directly in the WordPress admin. You’re already logged in and everything.


Notice how the URL is a subdomain of a TasteWP-related top-level domain: hangingpurpose.s1-tastewp.com. It generates an instance on the multi-site network and establishes a URL for it based on a randomized naming system.

There’s a giant countdown timer on the screen that indicates when the site is scheduled to expire. That makes sense, right? Allowing anyone and everyone to create a site on the spot without so much as a login could become taxing on the server, so allowing sites to self-destruct on a schedule likely has as much to do with self-preservation as it does with economics.

Speaking of economics, the countdown timer is immediately followed by a call to action to upgrade, which buys you permanence, extra server space, and customer support.

Without upgrading, though, you are only allowed two free instant sites. But if you create an account and log into TasteWP, then you can create up to six test sites on a free pricing tier.

That’s a look at the “quick” onboarding, but TasteWP does indeed have a more robust way to spin up a WordPress testing site with a set of advanced configurations, including which WordPress version to use with which version of PHP, settings you might normally define in wp-config.php, and options for adding specific themes and plugins.

So, how does that compare to WordPress Playground? Perhaps the greatest difference is that a TasteWP site is connected to the internet. It’s not a WordPress simulation, but an actual instance with a URL you can link up and share with others… as long as the site hasn’t expired. That could very well be enough of a differentiation to warrant more players in this space, even with WordPress Playground hanging around.

I wanted to give you a sense of what’s already offered before actually unboxing WordPress Playground. Now that we know what else is out there, let’s turn our attention back to Playground and explore it.

Starting Up WordPress Playground

One of the first interesting things about WordPress Playground is that it is available in not just one but several places. I wouldn’t liken it completely to a service like TasteWP, where you create an account to create and manage WordPress instances. It’s more like a developer tool, one that you can reach for when testing your work in a WordPress environment.

You can simply hit the playground.wordpress.net URL in your browser to launch a new site on the spot. Or, you can launch an instance from the command line. Perhaps you prefer to use the official Chrome extension instead. Whatever the case, let’s look at those options.

1. Using The WordPress Playground URL

This is the most straightforward way to get a WordPress Playground instance up and running. That’s because all you do is visit the playground.wordpress.net address in the browser, and a WordPress site is created immediately.

This is exactly how the WordPress Playground demo works, prompting you to click a button to open a new WordPress site. In fact, try clicking the following button to create one now.

Create A WordPress Site

If you want to use a specific version of WordPress and PHP in your Playground, all it takes is adding a couple of parameters to the URL. For example, we can instruct Playground to run WordPress 6.2 on PHP 8.2 with the following URL:

https://playground.wordpress.net/?php=8.2&wp=6.2

You can even try out the developmental versions of WordPress using Playground by using the following parameter:

https://playground.wordpress.net/?wp=beta

2. Using The GitHub Repository

True to the WordPress ethos, WordPress Playground is very much an open-source project. The repo is available over at GitHub, and we can pull it into a local environment and use WordPress Playground right from a terminal.

First, let’s clone the repository from the command line:

git clone https://github.com/WordPress/wordpress-playground.git

There is a slightly faster alternative that fetches just the latest revision:

git clone -b trunk --single-branch --depth 1 git@github.com:WordPress/wordpress-playground.git

Now that we have the WordPress Playground package in our local environment, we can formally install it:

cd wordpress-playground
npm install
npm run dev

Once the local server is running, we should get a URL from the terminal that we can use to access the new Playground instance, likely pointed to http://localhost:5400/website-server/.

We are also able to set which versions of WordPress and PHP to use in the virtual environment. This is done with the wp-now CLI tool covered in the next section; for example, this command triggers a new WordPress 5.9 instance running on PHP 7.4:

wp-now start --wp=5.9 --php=7.4

3. Using wp-now In The Command Line

An even quicker way to get Playground running from the command line is to globally install the wp-now CLI tool:

npm install -g @wp-now/wp-now

This way, we can create a new Playground instance anytime you want with a single command:

wp-now start

Be sure that you’re using Node 18 or higher. Otherwise, you’re likely to bump into some errors. Once the command executes, however, the browser will automatically open a new tab pointing to the new instance. You’re already signed into WordPress and everything!

We can configure the environment just as we could with the npm package:

wp-now start --wp=5.9 --php=7.4

A neat thing about this method is that there are several different “modes” you can run this in, and which one you use depends on the directory you’re in when running the command. For example, if you run the command from a directory that already contains WordPress, then Playground will automatically recognize that and run the directory as a full WordPress installation. Or, it’s possible to execute the command from a directory that contains nothing but an index.php file, and Playground will start the server and run requests through that file.

There are other options, including modes for theme, plugin, wp-content, and wordpress-develop, that are worth checking out in the documentation.

4. Using The Visual Studio Code Extension

WordPress Playground is also available as a Visual Studio Code extension. It provides a nice one-click process to launch a local WordPress site.

Installing the extension adds a WordPress icon to the sidebar menu that, when clicked, opens a panel for launching a new WordPress Playground site.

Open a project folder, click the “Start WordPress Server” button, and the Playground extension boots up a new site on the spot. The extension also provides server details, including the local URL, the mode it’s in, and settings to change which versions of WordPress and PHP are in use.


One thing I noticed while poking at the instance is that it automatically installs and activates the SQLite Database Integration plugin. Obviously, that’s a required component for things to work, but I thought it was worth pointing out that the installation does indeed include at least one pre-installed plugin right out of the gate.

5. Using A Chrome Extension To Preview Themes & Plugins

Have you ever found yourself perusing the WordPress Theme Directory and wanting to take a particular theme out for a test drive? There’s already a “Preview” button baked right into the directory to do exactly that.

That’s nice, as it opens up the theme in a frame that looks a lot like the classic WordPress Customizer.

But how cool would it be to really open up the theme and see what it is like to do actual tasks with it in the WordPress admin, such as creating a post, editing a page, or exploring its block patterns?

That is what the “Open in WordPress Playground” extension for Chrome can do. It literally adds a button to “Preview” a theme in a fresh WordPress Playground instance that, when clicked, allows you to interact with the theme in a real WordPress environment.

I tried out the extension, and it worked as described, and not only that, but it works with the WordPress Plugin Directory as well. In other words, it’s now possible to try a new plugin on the spot without having to install, activate, and test it yourself in some sandbox or, worse, your live or staging WordPress environments.

This is a potential game-changer as far as lowering the barrier to entry for using WordPress and for theme and plugin developers offering a convenient way to provide users with a demo experience. I can easily imagine a future where paid commercial plugins adopt a similar user experience to help reduce refunds from customers merely wanting to try a plugin before formally committing to it.

The extension is available free of charge in the Chrome Web Store, but you can check out the source code in its GitHub repository as well. While we’re on it, it’s worth noting that this is a third-party extension rather than an official WordPress or Automattic release.

The Default Playground Site

No matter which Playground method you use, the instances that spin up are nearly identical. For example, all of the methods we covered have the WordPress Twenty Twenty-Three theme installed and activated by default. That makes a lot of sense: a standard WordPress installation does the same.

Similarly, all of the instances we covered make use of the SQLite Database Integration plugin developed by the WordPress Performance Team. This also makes sense: we need the plugin to establish a database. From the plugin’s description, it sounds like the intent is to eventually integrate it into WordPress Core, so perhaps we’ll see zero plugins in a default Playground instance at some point.

There are a few differences between instances. They’re not massive, but worth calling out so you know what you are activating or have available when using a particular method to create a WordPress instance. The following table breaks down the current components included in each method at the time of this writing:

| Method | WordPress Version | PHP Version | Themes | Plugins |
| --- | --- | --- | --- | --- |
| WordPress Playground website | 6.3.2 | 8.0 | Twenty Twenty-Three (active) | SQLite Database Integration (active) |
| GitHub repo | 6.3.2 | 8.0 | Twenty Twenty-Three (active) | SQLite Database Integration (active) |
| wp-now package | 6.3.2 | 8.0.10-dev | Twenty Twenty-Three (active), Twenty Twenty-Two, Twenty Twenty-One | Akismet, Hello Dolly, SQLite Database Integration (active) |
| VS Code extension | 6.3.2 | 7.4 | Twenty Twenty-Three (active), Twenty Twenty-Two, Twenty Twenty-One | Akismet, Hello Dolly, SQLite Database Integration (active) |
| Chrome extension | 6.3.2 | 8.0 | Twenty Twenty-Three (active) | SQLite Database Integration (active) |

And, of course, any other differences would come from how you configure an instance. For example, if you run the wp-now package on the command line when you’re in a directory with WordPress and several themes and plugins installed, then those themes and plugins will be available to activate and use. Similarly, using the Chrome Extension on any WordPress Theme Directory page or Plugin Directory page will install that particular theme or plugin.

Installing Themes, Plugins, and Block Patterns

In a standard WordPress installation, you might log into the WordPress admin, navigate to Appearance → Themes, and install a new theme straight from the WordPress Theme Directory. That’s because your site has a web connection and is able to pull things in from WordPress.org. Since a WordPress Playground instance created from the WordPress Playground website (which is essentially the same as the Chrome extension) is not technically connected to the internet, there is no way to install themes and plugins on it.

If you want the same sort of point-and-click experience in your Playground site that you would get in a standard WordPress installation, then go with the GitHub repo, the wp-now package, or the VS Code extension. Each of these is indeed connected to the internet and is able to install themes and plugins directly from the WordPress admin.

You may notice a note about using the Query API to install a theme or plugin to a WordPress Playground instance that is disconnected from the web:

“Playground does not yet support connecting to the themes directory yet. You can still upload a theme or install it using the Query API (e.g. ?theme=pendant).”

That’s right! We’re still able to load in whatever theme we want by passing the theme’s slug into the Playground URL used to generate the site. For example,

https://playground.wordpress.net/?theme=ollie

The same goes for plugins:

https://playground.wordpress.net/?plugin=jetpack

And if we want to bundle multiple plugins, we can pass each plugin as a separate parameter, chained with an ampersand (&) in the URL:

https://playground.wordpress.net/?plugin=jetpack&plugin=woocommerce

It does not appear that we can do the same thing with themes. If you’re testing several themes in a single instance, then it’s probably best to use the wp-now package or the VS Code extension when pointing at a directory that already includes those themes.

What about block patterns, you ask? We only get two pre-defined patterns in a default WordPress Playground instance created on Playground’s site: Posts and Call to Action.


That’s because block patterns, too, are served to the WordPress admin from an internet connection. We get a much wider selection of options when creating an instance using any of the methods that establish a local host connection.

There appears to be no way, unfortunately, to import patterns with the Query API like we can for themes and plugins. The best way to bring in a new pattern, it seems, is to either bundle it in the theme you are using (or pointing to) or manually navigate to the Block Pattern Directory and use the “Copy” option to paste a pattern into the page or post you are testing in Playground.

Importing & Exporting Playgrounds

The transience of a WordPress Playground instance is its appeal. The site practically evaporates into thin air with the trigger of a page refresh. But what if you actually want to preserve an instance? Perhaps you need to come back to your work later. Or maybe you’re working on a visual tweak and want to demo it for your team. Playground instances can indeed be exported and even imported into other instances.

Open up a new WordPress site over at playground.wordpress.net and locate the Upload and Download icons at the top-right corner of the frame.

No worries, this is not a step-by-step tutorial on how to click buttons. The only thing you really need to know is that these buttons are only available in instances created at the WordPress Playground site or when using the Chrome Extension to preview themes and plugins at WordPress.org.

What’s more interesting is what we get when exporting an instance. We get a ZIP file — wordpress-playground.zip to be exact — as you might expect. Extract that, and what we have is the entire website, including the full WordPress installation. It resembles any other standard WordPress project with a wp-content directory that contains the source files for the installed themes and plugins, as well as media library uploads.

The only difference I could spot between this WordPress Playground package and a standard project is that Playground provides the SQLite database in the export, also conveniently located in the wp-content directory.
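As a quick sketch, poking at the export from the command line might look something like this (the extraction folder name is mine, and the exact subdirectory holding the SQLite database may vary):

unzip wordpress-playground.zip -d playground-export
ls playground-export/wp-content
# themes/  plugins/  uploads/  ...plus the SQLite database files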

This is a complete WordPress project. Now that we have it and have confirmed it has everything we would expect a WordPress site to have, we can use Playground’s importing feature to replicate the exported site in a brand-new WordPress Playground instance. Click the Upload icon in the frame of the new instance, then follow the prompts to upload the ZIP file we downloaded from the original instance.

You can probably guess what comes next. If we can export a complete WordPress site with Playground, we can not only import that site into a new Playground instance but import it to a hosting provider as well.

In other words, it’s possible to use Playground as a testing ground for development and then ship it to a production or staging environment when ready. Similarly, the exported files can be committed to a GitHub repo where your production files are, and that triggers a fresh build in production. However you choose to roll!

Sharing Playgrounds

There are clear benefits to being able to import and export Playground sites. WordPress has never been a more portable system; you know that if you’ve migrated WordPress sites and data. But when WordPress is able to move around as freely as it does with Playground, it opens up new possibilities for how we share work.

Sharing With The Query API

We’ve been using the Query API in many examples. It’s extremely convenient in that you append parameters on the WordPress Playground site, hit the URL, and a site spins up with everything specified.

The WordPress Playground site is hosted, so sharing a specific configuration of a Playground site only requires you to share a URL with the site’s configurations appended as parameters. For example, this link shares the Blue Note theme configured with the Gutenberg plugin:

https://playground.wordpress.net/?plugin=gutenberg&theme=blue-note

We can do a little more than that, like link directly to the post editor:

https://playground.wordpress.net/?plugin=gutenberg&theme=blue-note&url=/wp-admin/post-new.php

Even better, let’s link someone to the theme’s templates in the Site Editor:

https://playground.wordpress.net/?plugin=gutenberg&theme=blue-note&url=/wp-admin/site-editor.php?path=%2Fwp_template
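Notice the percent-encoded path in that last link. If you’re generating these links in code, a small sketch (entirely my own, using the standard URLSearchParams API) takes care of the encoding for you:

// Build a shareable Playground URL; URLSearchParams handles the percent-encoding
const params = new URLSearchParams({
  plugin: 'gutenberg',
  theme: 'blue-note',
  url: '/wp-admin/site-editor.php?path=/wp_template',
});
console.log(`https://playground.wordpress.net/?${params}`);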

Again, there are plenty more parameters than what we have explored in this article that are worth checking out in the WordPress Playground documentation.

Sharing With An Embedded iFrame

We already know this is possible because the best example of it is the WordPress Playground developer page. There’s a Playground instance running and embedded directly on the page. Even when you spin up a new Playground instance, you’re effectively running an iframe within an iframe.

Let’s say we want to embed a WordPress site configured with the Pendant theme and the Gutenberg plugin:

<iframe width="800" height="650" src="https://playground.wordpress.net/?plugin=gutenberg&theme=pendant&mode=seamless" allowfullscreen></iframe>

So, really, what we’re doing is using the source URL in a different context. We can share the URL with someone, and they get to access the configured site in a browser. In this case, however, we are dropping the URL into an iframe element in HTML, and the Playground instance renders on the page.

Not to get too meta, but it’s pretty neat that we can log into a WordPress production site, create a new page, and embed a Playground instance on the page with the Custom HTML Block.

What I like about sharing Playground sites this way is that the instance is effectively preserved and always accessible. Sure, the data will not persist on a page refresh, but create the URL once, and you always have a copy of it previewed on another page that you host.

Speaking of which, WordPress Playground can be self-hosted. You have to imagine that the current Playground API hosted at playground.wordpress.net will get overburdened over time, assuming that Playground catches on with the community. If their server is overworked, I expect that the hosted API will either go away (breaking existing instances) or at least be locked for creating new instances.

That’s why self-hosting WordPress Playground might be a good idea in the long run. I can see WordPress developers and agencies reaching for this to provide customers and clients with demo work. There’s so much potential and nuance to self-hosting Playground that it might even be worth its own article.

The documentation provides a list of parameters that can be used in the Playground URL.

Sharing With JSON Blueprints

This “modern” era of WordPress is all about block-based layouts that lean more heavily into JavaScript, where PHP has typically been the top boss. And with this transition, we gained the ability to create entire WordPress themes without ever opening a template file, thanks to the introduction of theme.json.

Playground can also be configured with structured data. In fact, you can see the Playground website’s JSON configurations via this link. It’s pretty incredible that we can both configure a Playground site without writing code and share the file with others to sync environments.

Here is an example pulled directly from the Playground docs:

{
  "$schema": "https://playground.wordpress.net/blueprint-schema.json",
  "landingPage": "/wp-admin/",
  "preferredVersions": {
    "php": "8.0",
    "wp": "latest"
  },
  "steps": [
    {
      "step": "login",
      "username": "admin",
      "password": "password"
    }
  ]
}

We totally can send this file to someone to clone a site we’re working on. Or, we can use the file in a self-hosted context, and others can pull it into their own blueprint.
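If the blueprint file is hosted somewhere public, the Playground site can load it directly. My understanding is that this works through a blueprint-url query parameter, though treat the exact parameter name as an assumption worth verifying against the documentation:

https://playground.wordpress.net/?blueprint-url=https://example.com/my-blueprint.json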

Interestingly, we can even ditch the blueprint file altogether and write the structured data as URL fragments instead:

https://playground.wordpress.net/#{"preferredVersions": {"php":"7.4", "wp":"5.9"}}

That might get untenable really fast, but it is nice that the WordPress Playground team is thinking about all of the possible ways we might want to port WordPress.

Advanced Playground Configurations

Up to now, we’ve looked at a variety of ways to configure WordPress Playground using APIs that are provided by or based on playground.wordpress.net. It’s fast, convenient, and pretty darn flexible for something so new and experimental.

But let’s say you need full control to configure a Playground instance. I mean everything, from which themes and plugins are preinstalled to prepublished pages and posts, defining php.ini memory limits, you name it. The JavaScript API is what you’ll need because it is capable of executing PHP code, making requests, managing files and directories, and configuring parts of WordPress in ways that none of the other approaches offer.

The JavaScript API is integrated into an iframe and uses the @wp-playground/client npm package. The Playground docs provide the following example in its “Quick Start” guide.

<iframe id="wp" style="width: 100%; height: 300px; border: 1px solid #000;"></iframe>

<script type="module">
  // Use unpkg for convenience
  import { startPlaygroundWeb } from 'https://unpkg.com/@wp-playground/client/index.js';

  const client = await startPlaygroundWeb({
    iframe: document.getElementById('wp'),
    remoteUrl: `https://playground.wordpress.net/remote.html`,
  });
  // Let's wait until Playground is fully loaded
  await client.isReady();
</script>

This is an overly simplistic example that demonstrates how the JavaScript API is embedded in a page in an iframe. The Playground docs provide a better example of how PHP is used within JavaScript to do things, like execute a file pointed at a specific path:

php.writeFile(
  "/www/index.php",
  `<?php echo "Hello world!";`
);
const result = await php.run({
  scriptPath: "/www/index.php"
});
// result.text === "Hello world!"

Adam Zieliński and Thomas Nattestad offer a nicely commented example with multiple tasks in the article they published over at web.dev:

import {
  connectPlayground,
  login,
  installPlugin,
} from '@wp-playground/client';

const client = await connectPlayground(
  document.getElementById('wp'), // An iframe
  { loadRemote: 'https://playground.wordpress.net/remote.html' },
);
await client.isReady();

// Login the user as admin and go to the post editor:
await login(client, 'admin', 'password');
await client.goTo('/wp-admin/post-new.php');

// Run arbitrary PHP code:
await client.run({ code: '<?php echo "Hi!"; ?>' });

// Install a plugin:
const plugin = await fetchZipFile();
await installPlugin(client, plugin);

Once again, the scope and breadth of using the JavaScript API for advanced configurations is yet another topic that might warrant its own article.

Wrapping Up

WordPress Playground is an excellent new platform that’s an ideal testing environment for WordPress themes, plugins… or even WordPress itself. Despite the fact that it is still in its early days, Playground is already capable of some pretty incredible stuff that makes WordPress more portable than ever.

We looked at lots of ways that Playground accomplishes this. Just want to check out a new theme? Use the playground.wordpress.net URL configured with parameters supported by the Query API, or grab the Chrome extension. Need to do a quick test of your theme in a different PHP environment? Use the wp-now package to spin up a test site locally. Want to let others demo a plugin you made? Embed Playground in an iframe on your site.

WordPress Playground is an evolving space, so keep your eye on it. You can participate in the discussion and request a feature through a pull request or report an issue that you encounter in your testing. In the meantime, you may want to be aware of what the WordPress Playground team has identified as known limitations of the service:

  • No access to plugins and theme directories in the browser.
    The theme and plugin directories are not accessible because Playground instances are self-contained virtual environments that are not connected to the internet.
  • Instances are destroyed on a browser refresh.
    Because WordPress Playground uses a browser-based temporary database, all changes and uploads are lost after a browser refresh. If you want to preserve your changes, though, use the export feature to download a zipped archive of the instance. Meanwhile, this is something the team is working on.
  • iFrame issues with anchor links.
    Clicking a link in a Playground instance that is embedded on a page in an iframe may trigger the main page to refresh, causing the instance to reset.
  • iFrame rendering issues.
    There are reports where setting the iframe’s src attribute to a blobbed URL instead of an HTTP URL breaks links to assets, including CSS and images.

How will you use WordPress Playground? WordPress Playground creator Adam Zieliński recently shipped a service that uses Playground to preview pull requests in GitHub. We all know that WordPress has never put a strong emphasis on developer experience (DX) the same way other technical stacks do, like static site generators and headless configurations. But this is exactly the sort of way that I imagine Playground improving DX to make developing for WordPress easier and, yes, fun.

References & Resources