
Sunday, July 20, 2025

Creating The “Moving Highlight” Navigation Bar With JavaScript And CSS

 

In this tutorial, Blake Lundquist walks us through two methods of creating the “moving-highlight” navigation pattern using only plain JavaScript and CSS. The first technique uses the getBoundingClientRect method to explicitly animate the border between navigation bar items when they are clicked. The second approach achieves the same functionality using the new View Transition API.

I recently came across an old jQuery tutorial demonstrating a “moving highlight” navigation bar and decided the concept was due for a modern upgrade. With this pattern, the border around the active navigation item animates directly from one element to another as the user clicks on menu items. In 2025, we have much better tools to manipulate the DOM via vanilla JavaScript. New features like the View Transition API make progressive enhancement more easily achievable and handle a lot of the animation minutiae.


In this tutorial, I will demonstrate two methods of creating the “moving highlight” navigation bar using plain JavaScript and CSS. The first example uses the getBoundingClientRect method to explicitly animate the border between navigation bar items when they are clicked. The second example achieves the same functionality using the new View Transition API.

The Initial Markup

Let’s assume that we have a single-page application where content changes without the page being reloaded. The starting HTML and CSS are your standard navigation bar with an additional div element containing an id of #highlight. We give the first navigation item a class of .active.

See the Pen Moving Highlight Navbar Starting Markup [forked] by Blake Lundquist.

For this version, we will position the #highlight element around the element with the .active class to create a border. We can utilize absolute positioning and animate the element across the navigation bar to create the desired effect. We’ll hide it off-screen initially by adding left: -200px and include transition styles for all properties so that any changes in the position and size of the element will happen gradually.

#highlight {
  z-index: 0;
  position: absolute;
  height: 100%;
  width: 100px;
  left: -200px;
  border: 2px solid green;
  box-sizing: border-box;
  transition: all 0.2s ease;
}

Add A Boilerplate Event Handler For Click Interactions

We want the highlight element to animate when a user changes the .active navigation item. Let’s add a click event handler to the nav element, then filter for events caused only by elements matching our desired selector. In this case, we only want to change the .active nav item if the user clicks on a link that does not already have the .active class.

Initially, we can call console.log to ensure the handler fires only when expected:

const navbar = document.querySelector('nav');

navbar.addEventListener('click', function (event) {
  // return if the clicked element doesn't have the correct selector
  if (!event.target.matches('nav a:not(.active)')) {
    return;
  }
  
  console.log('click');
});

Open your browser console and try clicking different items in the navigation bar. You should only see "click" being logged when you select a new item in the navigation bar.

Now that we know our event handler is working on the correct elements, let’s add code to move the .active class to the navigation item that was clicked. We can use the object passed into the event handler to find the element that triggered the event and give that element a class of .active after removing it from the previously active item.

const navbar = document.querySelector('nav');

navbar.addEventListener('click', function (event) {
  // return if the clicked element doesn't have the correct selector
  if (!event.target.matches('nav a:not(.active)')) {
    return;
  }
  
-  console.log('click');
+  document.querySelector('nav a.active').classList.remove('active');
+  event.target.classList.add('active');
  
});

Our #highlight element needs to move across the navigation bar and position itself around the active item. Let’s write a function to calculate a new position and width. Since the #highlight selector has transition styles applied, it will move gradually when its position changes.

Using getBoundingClientRect, we can get information about the position and size of an element. We calculate the width of the active navigation item and its offset from the left boundary of the parent element. Then, we assign styles to the highlight element so that its size and position match.

// handler for moving the highlight
const moveHighlight = () => {
  const activeNavItem = document.querySelector('a.active');
  const highlighterElement = document.querySelector('#highlight');
  
  const width = activeNavItem.offsetWidth;

  const itemPos = activeNavItem.getBoundingClientRect();
  const navbarPos = navbar.getBoundingClientRect();
  const relativePosX = itemPos.left - navbarPos.left;

  const styles = {
    left: `${relativePosX}px`,
    width: `${width}px`,
  };

  Object.assign(highlighterElement.style, styles);
}

Let’s call our new function when the click event fires:

navbar.addEventListener('click', function (event) {
  // return if the clicked element doesn't have the correct selector
  if (!event.target.matches('nav a:not(.active)')) {
    return;
  }
  
  document.querySelector('nav a.active').classList.remove('active');
  event.target.classList.add('active');
  
+  moveHighlight();
});

Finally, let’s also call the function immediately so that the border moves behind our initial active item when the page first loads:

// handler for moving the highlight
const moveHighlight = () => {
 // ...
}

// display the highlight when the page loads
moveHighlight();

Now, the border moves across the navigation bar when a new item is selected. Try clicking the different navigation links to animate the navigation bar.

See the Pen Moving Highlight Navbar [forked] by Blake Lundquist.

That only took a few lines of vanilla JavaScript and could easily be extended to account for other interactions, like mouseover events.
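For example, a hover-driven version might look something like the sketch below. It’s a hypothetical extension rather than part of the demo, and it assumes moveHighlight is adjusted to accept an optional target element that defaults to the active item:

// assumes: const moveHighlight = (target = document.querySelector('a.active')) => { ... }

// move the highlight to whichever link is hovered
navbar.addEventListener('mouseover', (event) => {
  if (!event.target.matches('nav a')) {
    return;
  }
  moveHighlight(event.target);
});

// snap the highlight back to the active item when the pointer leaves the nav
navbar.addEventListener('mouseout', () => {
  moveHighlight();
});

In the next section, we will explore refactoring this feature using the View Transition API.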

Using The View Transition API

The View Transition API provides functionality to create animated transitions between website views. Under the hood, the API creates snapshots of “before” and “after” views and then handles transitioning between them. View transitions are useful for creating animations between documents, providing the native-app-like user experience featured in frameworks like Astro. However, the API also provides handlers meant for SPA-style applications. We will use it to reduce the JavaScript needed in our implementation and more easily create fallback functionality.

For this approach, we no longer need a separate #highlight element. Instead, we can style the .active navigation item directly using pseudo-selectors and let the View Transition API handle the animation between the before-and-after UI states when a new navigation item is clicked.

We’ll start by getting rid of the #highlight element and its associated CSS and replacing it with styles for the nav a::after pseudo-selector:

<nav>
  - <div id="highlight"></div>
  <a href="#" class="active">Home</a>
  <a href="#services">Services</a>
  <a href="#about">About</a>
  <a href="#contact">Contact</a>
</nav>
- #highlight {
-  z-index: 0;
-  position: absolute;
-  height: 100%;
-  width: 100px;
-  left: -200px;
-  border: 2px solid green;
-  box-sizing: border-box;
-  transition: all 0.2s ease;
- }

+ nav a::after {
+  content: " ";
+  position: absolute;
+  left: 0;
+  top: 0;
+  width: 100%;
+  height: 100%;
+  border: none;
+  box-sizing: border-box;
+ }

For the .active class, we include the view-transition-name property, thus unlocking the magic of the View Transition API. Once we trigger the view transition and change the location of the .active navigation item in the DOM, “before” and “after” snapshots will be taken, and the browser will animate the border across the bar. We’ll give our view transition the name of highlight, but we could theoretically give it any name.

nav a.active::after {
  border: 2px solid green;
  view-transition-name: highlight;
}

Once we have a selector that contains a view-transition-name property, the only remaining step is to trigger the transition using the startViewTransition method and pass in a callback function.

const navbar = document.querySelector('nav');

// Change the active nav item on click
navbar.addEventListener('click', async function (event) {

  if (!event.target.matches('nav a:not(.active)')) {
    return;
  }
  
  document.startViewTransition(() => {
    document.querySelector('nav a.active').classList.remove('active');

    event.target.classList.add('active');
  });
});

Above is a revised version of the click handler. Instead of doing all the calculations for the size and position of the moving border ourselves, the View Transition API handles all of it for us. We only need to call document.startViewTransition and pass in a callback function to change the item that has the .active class!

Adjusting The View Transition

At this point, when clicking on a navigation link, you’ll notice that the transition works, but some strange sizing issues are visible.


This sizing inconsistency is caused by aspect ratio changes during the view transition. We won’t go into detail here, but Jake Archibald offers a thorough explanation you can read for more information. In short, to ensure the height of the border stays uniform throughout the transition, we need to declare an explicit height for the ::view-transition-old and ::view-transition-new pseudo-selectors representing a static snapshot of the old and new view, respectively.

::view-transition-old(highlight) {
  height: 100%;
}

::view-transition-new(highlight) {
  height: 100%;
}

Let’s do some final refactoring to tidy up our code by moving the callback to a separate function and adding a fallback for when view transitions aren’t supported:

const navbar = document.querySelector('nav');

// change the item that has the .active class applied
const setActiveElement = (elem) => {
  document.querySelector('nav a.active').classList.remove('active');
  elem.classList.add('active');
}

// Start view transition and pass in a callback on click
navbar.addEventListener('click', async function (event) {
  if (!event.target.matches('nav a:not(.active)')) {
    return;
  }

  // Fallback for browsers that don't support View Transitions:
  if (!document.startViewTransition) {
    setActiveElement(event.target);
    return;
  }
  
  document.startViewTransition(() => setActiveElement(event.target));
});

Here’s our view transition-powered navigation bar! Observe the smooth transition when you click on the different links.

See the Pen Moving Highlight Navbar with View Transition [forked] by Blake Lundquist.

Conclusion 

Animations and transitions between website UI states used to require many kilobytes of external libraries, along with verbose, confusing, and error-prone code, but vanilla JavaScript and CSS have since incorporated features to achieve native-app-like interactions without breaking the bank. We demonstrated this by implementing the “moving highlight” navigation pattern using two approaches: CSS transitions combined with the getBoundingClientRect() method and the View Transition API.


Decoding The SVG path Element: Line Commands

 

SVG is easy — until you meet path. However, it’s not as confusing as it initially looks. In this first installment of a pair of articles, Myriam Frisano aims to teach you the basics of <path> and its sometimes mystifying commands. With simple examples and visualizations, she’ll help you understand the easy syntax and underlying rules of SVG’s most powerful element so that by the end, you’re fully able to translate SVG semantic tags into a language path understands.

In a previous article, we looked at some practical examples of how to code SVG by hand. In that guide, we covered the basics of the SVG elements rect, circle, ellipse, line, polyline, and polygon (and also g).

This time around, we are going to tackle a more advanced topic, the absolute powerhouse of SVG elements: path. Don’t get me wrong; I still stand by my point that image paths are better drawn in vector programs than coded (unless you’re the type of creative who makes non-logical visual art in code — then go forth and create awe-inspiring wonders; you’re probably not the audience of this article). But when it comes to technical drawings and data visualizations, the path element unlocks a wide array of possibilities and opens up the world of hand-coded SVGs.

The path syntax can be really complex. We’re going to tackle it in two separate parts. In this first installment, we’re learning all about straight and angular paths. In the second part, we’ll make lines bend, twist, and turn.

Required Knowledge And Guide Structure

Note: If you are unfamiliar with the basics of SVG, such as the subject of viewBox and the basic syntax of the simple elements (rect, line, g, and so on), I recommend reading my guide before diving into this one. You should also familiarize yourself with <text> if you want to understand each line of code in the examples.

Before we get started, I want to quickly recap how I code SVG using JavaScript. I don’t like dealing with numbers and math, and reading SVG code with numbers filled into every attribute makes me lose all understanding of it. By giving coordinates names and having all my math easy to parse and write out, I have a much better time with this type of code, and I think you will, too.

The goal of this article is more about understanding path syntax than it is about doing placement or how to leverage loops and other more basic things. So, I will not run you through the entire setup of each example. I’ll instead share snippets of the code, but they may be slightly adjusted from the CodePen or simplified to make this article easier to read. However, if there are specific questions about code that are not part of the text in the CodePen demos, the comment section is open.

To keep this all framework-agnostic, the code is written in vanilla JavaScript (though, really, TypeScript is your friend the more complicated your SVG becomes, and I missed it when writing some of these).

Setting Up For Success 

As the path element relies on our understanding of some of the coordinates we plug into the commands, I think it is a lot easier if we have a bit of visual orientation. So, all of the examples will be coded on top of a visual representation of a traditional viewBox setup with the origin in the top-left corner (so, values in the shape of 0 0 ${width} ${height}).

I added text labels as well to make it easier to point you to specific areas within the grid.

Please note that I recommend being careful when adding text within the <text> element in SVG if you want your text to be accessible. If the graphic relies on text scaling like the rest of your website, it would be better to have it rendered through HTML. But for our examples here, it should be sufficient.

So, this is what we’ll be plotting on top of:

See the Pen SVG Viewbox Grid Visual [forked] by Myriam.

Alright, we now have a ViewBox Visualizing Grid. I think we’re ready for our first session with the beast.

Enter path And The All-Powerful d Attribute

The <path> element has a d attribute, which speaks its own language. So, within d, you’re talking in terms of “commands”.

When I think of non-path versus path elements, I like to think that the reason why we have to write much more complex drawing instructions is this: All non-path elements are just dumber paths. In the background, they have one pre-drawn path shape that they will always render based on a few parameters you pass in. But path has no default shape. The shape logic has to be exposed to you, while it can be neatly hidden away for all other elements.

Let’s learn about those commands.

Where It All Begins: M

The first, which is where each path begins, is the M command, which moves the pen to a point. This command places your starting point, but it does not draw a single thing. A path with just an M command is an auto-delete when cleaning up SVG files.

It takes two arguments: the x and y coordinates of your start position.

const uselessPathCommand = `M${start.x} ${start.y}`;

Basic Line Commands: L, H, V

These are fun and easy: L, H, and V all draw a line from the current point to the point specified.

L takes two arguments, the x and y positions of the point you want to draw to.

const pathCommandL = `M${start.x} ${start.y} L${end.x} ${end.y}`;

H and V, on the other hand, only take one argument because they are only drawing a line in one direction. For H, you specify the x position, and for V, you specify the y position. The other value is implied.

const pathCommandH = `M${start.x} ${start.y} H${end.x}`;
const pathCommandV = `M${start.x} ${start.y} V${end.y}`;

To visualize how this works, I created a function that draws the path, as well as points with labels on them, so we can see what happens.

See the Pen Simple Lines with path [forked] by Myriam.

We have three lines in that image. The L command is used for the red path. It starts with M at (10,10), then moves diagonally down to (100,100). The command is: M10 10 L100 100.

The blue line is horizontal. It starts at (10,55) and should end at (100, 55). We could use the L command, but we’d have to write 55 again. So, instead, we write M10 55 H100, and then SVG knows to look back at the y value of M for the y value of H.

It’s the same thing for the green line, but when we use the V command, SVG knows to refer back to the x value of M for the x value of V.

If we compare the resulting horizontal path with the same implementation in a <line> element, we may

  1. Notice how much more efficient path can be, and
  2. Remove quite a bit of meaning for anyone who doesn’t speak path.

Because, as we look at these strings, one of them is called “line”. And while the rest doesn’t mean anything out of context, the line definitely conjures a specific image in our heads.

<path d="M 10 55 H 100" />
<line x1="10" y1="55" x2="100" y2="55" />

Making Polygons And Polylines With Z

In the previous section, we learned how path can behave like <line>, which is pretty cool. But it can do more. It can also act like polyline and polygon.

Remember how those two basically work the same way, but polygon connects the first and last point, while polyline does not? The path element can do the same thing. There is a separate command to close the path with a line, which is the Z command.

const polyline2Points = `M${start.x} ${start.y} L${p1.x} ${p1.y} L${p2.x} ${p2.y}`;
const polygon2Points  = `M${start.x} ${start.y} L${p1.x} ${p1.y} L${p2.x} ${p2.y} Z`;

So, let’s see this in action and create a repeating triangle shape. Every odd time, it’s open, and every even time, it’s closed. Pretty neat!

See the Pen Alternating Triangles [forked] by Myriam.

When it comes to comparing path versus polygon and polyline, the other tags tell us about their names, but I would argue that fewer people know what a polygon is versus what a line is (and probably even fewer know what a polyline is. Heck, even the program I’m writing this article in tells me polyline is not a valid word). The argument to use these two tags over path for legibility is weak, in my opinion, and I guess you’d probably agree that this looks like equal levels of meaningless string given to an SVG element.

<path d="M0 0 L86.6 50 L0 100 Z" />
<polygon points="0,0 86.6,50 0,100" />

<path d="M0 0 L86.6 50 L0 100" />
<polyline points="0,0 86.6,50 0,100" />

Relative Commands: m, l, h, v

All of the line commands exist in absolute and relative versions. The difference is that the relative commands are lowercase, e.g., m, l, h, and v. The relative commands are always relative to the last point, so instead of declaring an x value, you’re declaring a dx value, saying this is how many units you’re moving.

Before we look at the example visually, I want you to look at the following three sets of line commands. Try not to look at the CodePen beforehand.

const lines = [
  { d: `M10 10 L 10 30 L 30 30`, color: "var(--_red)" },
  { d: `M40 10 l 0 20 l 20 0`, color: "var(--_blue)" },
  { d: `M70 10 l 0 20 L 90 30`, color: "var(--_green)" }
];

As I mentioned, I hate looking at numbers without meaning, but there is one number whose meaning is pretty constant in most contexts: 0. Seeing a 0 in combination with a command I just learned is relative instantly tells me that nothing is happening along that axis. Seeing l 0 20 by itself tells me that this line only moves along one axis instead of two.

And looking at that entire blue path command, the repeated 20 value gives me a sense that the shape might have some regularity to it. The first path does a bit of that by repeating 10 and 30. But the third? As someone who can’t do math in my head, that third string gives me nothing.

Now, you might be surprised, but they all draw the same shape, just in different places.

See the Pen SVG Compound Paths [forked] by Myriam.

So, how valuable is it that we can recognize the regularity in the blue path? Not very, in my opinion. In some cases, going with the relative value is easier than an absolute one. In other cases, the absolute is king. Neither is better nor worse.

And, in all cases, that previous example would be much more efficient if it were set up with a variable for the gap, a variable for the shape size, and a function that generates the path definition and is called from within a loop, taking in the index to properly calculate each start point.
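To sketch what that refactor could look like (the variable names and the helper are my own, not from the demo):

const size = 20;  // width and height of each shape
const gap = 10;   // horizontal space between shapes

// hypothetical helper: builds one shape, closing every second one with Z
const buildShape = (index) => {
  const startX = 10 + index * (size + gap);
  const close = index % 2 === 1 ? " Z" : "";
  return `M${startX} 10 l0 ${size} l${size} 0${close}`;
};

const paths = Array.from({ length: 3 }, (_, i) => buildShape(i));
// => ["M10 10 l0 20 l20 0", "M40 10 l0 20 l20 0 Z", "M70 10 l0 20 l20 0"]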

Jumping Points: How To Make Compound Paths

Another very useful thing is something you don’t see visually in the previous CodePen, but it relates to the grid and its code.

I snuck in a grid drawing update.

With the method used in earlier examples, using line to draw the grid, the above CodePen would’ve rendered the grid with 14 separate elements. If you go and inspect the final code of that last CodePen, you’ll notice that there is just a single path element within the .grid group.

It looks like this, which is not fun to look at but holds the secret to how it’s possible:

<path d="M0 0 H110 M0 10 H110 M0 20 H110 M0 30 H110 M0 0 V45 M10 0 V45 M20 0 V45 M30 0 V45 M40 0 V45 M50 0 V45 M60 0 V45 M70 0 V45 M80 0 V45 M90 0 V45" stroke="currentColor" stroke-width="0.2" fill="none"></path>

If we take a close look, we may notice that there are multiple M commands. This is the magic of compound paths.

Since the M/m commands don’t actually draw and just place the cursor, a path can have jumps.

So, whenever we have multiple paths that share common styling and don’t need to have separate interactions, we can just chain them together to make our code shorter.
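As a rough sketch of how a grid like the one above could be generated (my own variable names, with dimensions mirroring the path string we just looked at):

const gridWidth = 110;
const gridHeight = 45;
const cell = 10;

let d = "";

// horizontal lines: one M/H pair per row
for (let y = 0; y <= 30; y += cell) {
  d += `M0 ${y} H${gridWidth} `;
}

// vertical lines: one M/V pair per column
for (let x = 0; x <= 90; x += cell) {
  d += `M${x} 0 V${gridHeight} `;
}

// d now holds a single compound path describing the entire grid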

Coming Up Next

Armed with this knowledge, we’re now able to replace line, polyline, and polygon with path commands and combine them in compound paths. But there is so much more to uncover because path doesn’t just offer foreign-language versions of lines but also gives us the option to code circles and ellipses that have open space and can sometimes also bend, twist, and turn. We’ll refer to those as curves and arcs, and discuss them more explicitly in the next article.

Saturday, July 19, 2025

Collaboration: The Most Underrated UX Skill No One Talks About

 

We often spotlight wireframes, research, or tools like Figma, but none of that moves the needle if we can’t collaborate well. Great UX doesn’t happen in isolation. It takes conversations with engineers, alignment with product, sales, and other stakeholders, as well as the ability to listen, adapt, and co-create. That’s where design becomes a team sport, and when your ability to capture the outcomes multiplies the UX impact.

When people talk about UX, it’s usually about the things they can see and interact with, like wireframes and prototypes, smart interactions, and design tools like Figma, Miro, or Maze. Some of the outputs are even glamorized, like design systems, research reports, and pixel-perfect UI designs. But here’s the truth I’ve seen again and again in over two decades of working in UX: none of that moves the needle if there is no collaboration.

Great UX doesn’t happen in isolation. It happens through conversations with engineers, product managers, customer-facing teams, and the customer support teams who manage support tickets. Amazing UX ideas come alive in messy Miro sessions, cross-functional workshops, and those online chats (e.g., Slack or Teams) where people align, adapt, and co-create.

Some of the most impactful moments in my career weren’t when I was “designing” in the traditional sense. They came from gaining incredible insights while discussing problems with teammates who have varied experiences, brainstorming, and arriving at ideas that I never could have come up with on my own. As I always say, ten minds in a room will come up with ten times as many ideas as one mind. Often, that sheer volume of ideas is the most useful outcome.

There have been times when a team helped reframe a problem in a workshop, took vague and conflicting feedback and clarified a path forward, or when I sat with a sales rep and heard the same user complaint show up in multiple conversations. This is when design becomes a team sport, and when your ability to capture the outcomes multiplies the UX impact.


Why This Article Matters Now

The reason collaboration feels so urgent now is that the way we work has changed since COVID, according to a study published by the US Department of Labor. Teams are more cross-functional, often remote, and increasingly complex. Silos are easier to fall into, due to distance or lack of face-to-face contact, and yet alignment has never been more important. We can’t afford to see collaboration as a “nice to have” anymore. It’s a core skill, especially in UX, where our work touches so many parts of an organisation.

Let’s break down what collaboration in UX really means, and why it deserves way more attention than it gets.

What Is Collaboration In UX, Really?

Let’s start by clearing up a misconception. Collaboration is not the same as cooperation.

  • Cooperation: “You do your thing, I’ll do mine, and we’ll check in later.”
  • Collaboration: “Let’s figure this out together and co-own the outcome.”

Collaboration, as defined in the book Communication Concepts, published by Deakin University, involves working with others to produce outputs and/or achieve shared goals. The outcome of collaboration is typically a tangible product or a measurable achievement, such as solving a problem or making a decision. Here’s an example from a recent project:

Recently, I worked on a fraud alert platform for a fintech business. It was a six-month project, and we had zero access to users, as the product had not yet hit the market. Also, the users were highly specialised in the B2B finance space and were difficult to find. Additionally, the team members I needed to collaborate with were based in Malaysia and Melbourne, while I am located in Sydney.

Instead of treating that as a dead end, we turned inward: collaborating with subject matter experts, professional services consultants, compliance specialists, and customer support team members who had deep knowledge of fraud patterns and customer pain points. Through bi-weekly workshops using a Miro board, iterative feedback loops, and sketching sessions, we worked on design solution options. I even asked them to present their own design version as part of the process.


After months of iterating on the fraud investigation platform through these collaboration sessions, I ended up with two different design frameworks for the investigator’s dashboard. Instead of just presenting the “best one” and hoping for buy-in, I ran a voting exercise with PMs, engineers, SMEs, and customer support. Everyone had a voice. The winning design was created and validated with the input of the team, resulting in an outcome that solved many problems for the end user and was owned by the entire team. That’s collaboration!


It is definitely one of the most satisfying projects of my career.

On the other hand, I recently caught up with an old colleague who now serves as a product owner. Her story was a cautionary tale: the design team had gone ahead with a major redesign of an app without looping her in until late in the game. Not surprisingly, the new design missed several key product constraints and business goals. It had to be scrapped and redone, with her now at the table. That experience reinforced what we all know deep down: your best work rarely happens in isolation.

As illustrated in my experience, true collaboration can span many roles. It’s not just between designers and PMs. It can also include QA testers who identify real-world issues, content strategists who ensure our language is clear and inclusive, sales representatives who interact with customers on a daily basis, marketers who understand the brand’s voice, and, of course, customer support agents who are often the first to hear when something goes wrong. The best outcomes arrive when we’re open to different perspectives and inputs.

Why Is Collaboration So Overlooked?

If collaboration is so powerful, why don’t we talk about it more?

In my experience, one reason is the myth of the “lone UX hero”. Many of us entered the field inspired by stories of design geniuses revolutionising products on their own. Our portfolios often reflect that as well. We showcase our solo work, our processes, and our wins. Job descriptions often reinforce the idea of the solo UX designer, listing tool proficiency and deliverables more than soft skills and team dynamics.

And then there’s the team culture in many organisations of “just get the work done”, which often leads to fewer meetings and tighter deadlines. As a result, collaboration can feel inefficient and wasteful. I have also worked with designers where perfectionism and territoriality creep in — “This is my design” — which kills the open, communal spirit that collaboration needs.

When Collaboration Is The User Research

In an ideal world, we’d always have direct access to users. But let’s be real. Sometimes that just doesn’t happen. Whether it’s due to budget constraints, time limitations, or layers of bureaucracy, talking to end users isn’t always possible. That’s where collaboration with team members becomes even more crucial.

The next best thing to talking to users? Talking to the people who talk to users. Sales teams, customer success reps, tech support, and field engineers. They’re all user researchers in disguise!

On another project, a B2C online identity capture tool, end users were having trouble completing the key onboarding task. My role was to redesign the onboarding experience, but I was unable to schedule interviews with end users due to budget and time constraints, so I turned to the sales and tech support teams.

I conducted multiple mini-workshops to identify the most common onboarding issues they had heard directly from our customers. This led to a huge “aha” moment: most users dropped off before the document capture process. They may have been struggling with a lack of instruction, not knowing the required time, or not understanding the steps involved in completing the onboarding process.

That insight reframed my approach, and we ultimately redesigned the flow to prioritize orientation and clear instructions before proceeding to the setup steps. Below is an example of one of the screen designs, including some of the instructions we added.

This kind of collaboration is user research. It’s not a substitute for talking to users directly, but it’s a powerful proxy when you have limited options.

But What About Using AI?

Glad you asked! Even AI tools, which are increasingly being used for idea generation, pattern recognition, or rapid prototyping, don’t replace collaboration; they just change the shape of it.

AI can help you explore design patterns, draft user flows, or generate multiple variations of a layout in seconds. It’s fantastic for getting past creative blocks or pressure-testing your assumptions. But let’s be clear: these tools are accelerators, not oracles. As innovation and strategy consultant Nathan Waterhouse points out, AI can point you in a direction, but it can’t tell you which direction is the right one in your specific context. That still requires human judgment, empathy, and an understanding of the messy realities of users and business goals.

You still need people, especially those closest to your users, to validate, challenge, and evolve any AI-generated idea. For instance, you might use ChatGPT to brainstorm onboarding flows for a SaaS tool, but if you’re not involving customer support reps who regularly hear “I didn’t know where to start” or “I couldn’t even log in,” you’re just working with assumptions. The same applies to engineers who know what is technically feasible or PMs who understand where the business is headed.

AI can generate ideas, but only collaboration turns those ideas into something usable, valuable, and real. Think of it as a powerful ingredient, but not the whole recipe.

How To Strengthen Your UX Collaboration Skills?

If collaboration doesn’t come naturally or hasn’t been a focus, that’s okay. Like any skill, it can be practiced and improved. Here are a few ways to level up:

  1. Cultivate curiosity about your teammates.
    Ask engineers what keeps them up at night. Learn what metrics your PMs care about. Understand the types of tickets the support team handles most frequently. The more you care about their challenges, the more they’ll care about yours.
  2. Get comfortable facilitating.
    You don’t need to be a certified Design Sprint master, but learning how to run a structured conversation, align stakeholders, or synthesize different points of view is hugely valuable. Even a simple “What’s working? What’s not?” retro can be an amazing starting point in identifying where you need to focus next.
  3. Share early, share often.
    Don’t wait until your designs are polished to get input. Messy sketches and rough prototypes invite collaboration. When others feel like they’ve helped shape the work, they’re more invested in its success.
  4. Practice active listening.
    When someone critiques your work, don’t immediately defend. Pause. Ask follow-up questions. Reframe the feedback. Collaboration isn’t about consensus; it’s about finding a shared direction that can honour multiple truths.
  5. Co-own the outcome.
    Let go of your ego. The best UX work isn’t “your” work. It’s the result of many voices, skill sets, and conversations converging toward a solution that helps users. It’s not “I”, it’s “we” that will solve this problem together.

Conclusion: UX Is A Team Sport

Great design doesn’t emerge from a vacuum. It comes from open dialogue, cross-functional understanding, and a shared commitment to solving real problems for real people.

If there’s one thing I wish every early-career designer knew, it’s this:

Collaboration is not a side skill. It’s the engine behind every meaningful design outcome. And for seasoned professionals, it’s the superpower that turns good teams into great ones.

So next time you’re tempted to go heads-down and just “crank out a design,” pause to reflect. Ask who else should be in the room. And invite them in, not just to review your work, but to help create it.

Because in the end, the best UX isn’t just what you make. It’s what you make together.

Designing For Neurodiversity

 Designing for neurodiversity means recognizing that people aren’t edge cases but individuals with varied ways of thinking and navigating the web. So, how can we create more inclusive experiences that work better for everyone?

Neurodivergent needs are often considered as an edge case that doesn’t fit into common user journeys or flows. Neurodiversity tends to get overlooked in the design process. Or it is tackled late in the process, and only if there is enough time.

But people aren’t edge cases. Every person is just a different person, performing tasks and navigating the web in a different way. So how can we design better, more inclusive experiences that cater to different needs and, ultimately, benefit everyone? Let’s take a closer look.

Neurodiversity Or Neurodivergent?

There is quite a bit of confusion about both terms on the web. Different people think and experience the world differently, and neurodiversity sees differences as natural variations, not deficits. It distinguishes between neurotypical and neurodivergent people.

  • Neurotypical people see the world in a “typical”, widely expected way.
  • Neurodivergent people experience the world differently, for example, people with ADHD, dyslexia, dyscalculia, synesthesia, and hyperlexia.

According to various sources, around 15–40% of the population has neurodivergent traits. These traits can be innate (e.g., autism) or acquired (e.g., trauma). But they are always on a spectrum, and vary a lot. A person with autism is not neurodiverse — they are neurodivergent.

One of the main strengths of neurodivergent people is how imaginative and creative they are, coming up with out-of-the-box ideas quickly, with exceptional levels of attention, strong long-term memory, a unique perspective, unbeatable accuracy, and a strong sense of justice and fairness.

Being different in a world that, to some degree, still doesn’t accept these differences is exhausting. So unsurprisingly, neurodivergent people often bring along determination, resilience, and high levels of empathy.

Design With People, Not For Them

As a designer, I often see myself as a path-maker. I’m designing reliable paths for people to navigate to their goals comfortably. Without being blocked. Or confused. Or locked out.

That means respecting the simple fact that people’s needs, tasks, and user journeys are all different, and that they evolve over time. And: most importantly, it means considering them very early in the process.

Better accessibility is better for everyone. Instead of making decisions that need to be reverted or refined to be compliant, we can bring a diverse group of people — with accessibility needs, with neurodiversity, frequent and infrequent users, experts, newcomers — in the process, and design with them, rather than for them.

Neurodiversity & Inclusive Design Resources

A wonderful resource that helps us design for cognitive accessibility is Stéphanie Walter’s Neurodiversity and UX toolkit. It includes practical guidelines, tools, and resources to better understand and design for dyslexia, dyscalculia, autism, and ADHD.


Another fantastic resource is Will Soward’s Neurodiversity Design System. It combines neurodiversity and user experience design into a set of design standards and principles that you can use to design accessible learning interfaces.

Last but not least, I’ve been putting together a few summaries about neurodiversity and inclusive design over the last few years, so you might find them helpful, too.

A huge thank-you to everyone who has been writing, speaking, and sharing articles, resources, and toolkits on designing for diversity. The topic is often forgotten and overlooked, but it has an incredible impact. 👏🏼👏🏽👏🏾

Reliably Detecting Third-Party Cookie Blocking In 2025

 

The web is mired in a struggle to eliminate third-party cookies, with the World Wide Web Consortium Technical Architecture Group leading the charge. But there are obstacles preventing this from happening, and, as a result, many essential web features continue to rely on cookies to function properly. That’s why detecting third-party cookie blocking isn’t just good technical hygiene but a frontline defense for user experience.

The web is beginning to part ways with third-party cookies, a technology it once heavily relied on. Introduced in 1994 by Netscape to support features like virtual shopping carts, cookies have long been a staple of web functionality. However, concerns over privacy and security have led to a concerted effort to eliminate them. The World Wide Web Consortium Technical Architecture Group (W3C TAG) has been vocal in advocating for the complete removal of third-party cookies from the web platform.

Major browsers (Chrome, Safari, Firefox, and Edge) are responding by phasing them out, though the transition is gradual. While this shift enhances user privacy, it also disrupts legitimate functionalities that rely on third-party cookies, such as single sign-on (SSO), fraud prevention, and embedded services. And because there is still no universal ban in place and many essential web features continue to depend on these cookies, developers must detect when third-party cookies are blocked so that applications can respond gracefully.

Yes, the ideal solution is to move away from third-party cookies altogether and redesign our integrations using privacy-first, purpose-built alternatives as soon as possible. But in reality, that migration can take months or even years, especially for legacy systems or third-party vendors. Meanwhile, users are already browsing with third-party cookies disabled and often have no idea that anything is missing.

Imagine a travel booking platform that embeds an iframe from a third-party partner to display live train or flight schedules. This embedded service uses a cookie on its own domain to authenticate the user and personalize content, like showing saved trips or loyalty rewards. But when the browser blocks third-party cookies, the iframe cannot access that data. Instead of a seamless experience, the user sees an error, a blank screen, or a login prompt that doesn’t work.

And while your team is still planning a long-term integration overhaul, this is already happening to real users. They don’t see a cookie policy; they just see a broken booking flow.

Detecting third-party cookie blocking isn’t just good technical hygiene but a frontline defense for user experience.

Why It’s Hard To Tell If Third-Party Cookies Are Blocked

Detecting whether third-party cookies are supported isn’t as simple as calling navigator.cookieEnabled. Even a well-intentioned check like this one may look safe, but it still won’t tell you what you actually need to know:

// DOES NOT detect third-party cookie blocking
function areCookiesEnabled() {
  if (navigator.cookieEnabled === false) {
    return false;
  }

  try {
    document.cookie = "test_cookie=1; SameSite=None; Secure";
    const hasCookie = document.cookie.includes("test_cookie=1");
    document.cookie = "test_cookie=; Max-Age=0; SameSite=None; Secure";

    return hasCookie;
  } catch (e) {
    return false;
  }
}

This function only confirms that cookies work in the current (first-party) context. It says nothing about third-party scenarios, like an iframe on another domain. Worse, it’s misleading: in some browsers, navigator.cookieEnabled may still return true inside a third-party iframe even when cookies are blocked. Others might behave differently, leading to inconsistent and unreliable detection.

These cross-browser inconsistencies — combined with the limitations of document.cookie — make it clear that there is no shortcut for detection. To truly detect third-party cookie blocking, we need to understand how different browsers actually behave in embedded third-party contexts.

How Modern Browsers Handle Third-Party Cookies

The behavior of modern browsers directly affects which detection methods will work and which ones silently fail.

Since version 13.1, Safari blocks all third-party cookies by default, with no exceptions, even if the user previously interacted with the embedded domain. This policy is part of Intelligent Tracking Prevention (ITP).

For embedded content (such as an SSO iframe) that requires cookie access, Safari exposes the Storage Access API, which requires a user gesture to grant storage permission. As a result, a test for third-party cookie support will nearly always fail in Safari unless the iframe explicitly requests access via this API.

Firefox’s Total Cookie Protection isolates cookies on a per-site basis. Third-party cookies can still be set and read, but they are partitioned by the top-level site, meaning a cookie set by the same third-party on siteA.com and siteB.com is stored separately and cannot be shared.

As of Firefox 102, this behavior is enabled by default in the Standard (default) mode of Enhanced Tracking Protection. Unlike the Strict mode — which blocks third-party cookies entirely, similar to Safari — the Standard mode does not block them outright. Instead, it neutralizes their tracking capability by isolating them per site.

As a result, even if a test shows that a third-party cookie was successfully set, it may be useless for cross-site logins or shared sessions due to this partitioning. Detection logic needs to account for that.

Chrome: From Deprecation Plans To Privacy Sandbox (And Industry Pushback)

Chromium-based browsers still allow third-party cookies by default — but the story is changing. Starting with Chrome 80, third-party cookies must be explicitly marked with SameSite=None; Secure, or they will be rejected.

In January 2020, Google announced their intention to phase out third-party cookies by 2022. However, the timeline was updated multiple times, first in June 2021 when the company pushed the rollout to begin in mid-2023 and conclude by the end of that year. Additional postponements followed in July 2022, December 2023, and April 2024.

In July 2024, Google clarified that there is no plan to unilaterally deprecate third-party cookies or force users into a new model without consent. Instead, Chrome is shifting to a user-choice interface that will allow individuals to decide whether to block or allow third-party cookies globally.

This change was influenced in part by substantial pushback from the advertising industry, as well as ongoing regulatory oversight, including scrutiny by the UK Competition and Markets Authority (CMA) into Google’s Privacy Sandbox initiative. The CMA confirmed in a 2025 update that there is no intention to force a deprecation or trigger automatic prompts for cookie blocking.

As for now, third-party cookies remain enabled by default in Chrome. The new user-facing controls and the broader Privacy Sandbox ecosystem are still in various stages of experimentation and limited rollout.

Edge (Chromium-Based): Tracker-Focused Blocking With User Configurability

Edge (which is a Chromium-based browser) shares Chrome’s handling of third-party cookies, including the SameSite=None; Secure requirement. Additionally, Edge introduces Tracking Prevention modes: Basic, Balanced (default), and Strict. In Balanced mode, it blocks known third-party trackers using Microsoft’s maintained list but allows many third-party cookies that are not classified as trackers. Strict mode blocks more resource loads than Balanced, which may result in some websites not behaving as expected.

Other Browsers: What About Them?

Privacy-focused browsers, like Brave, block third-party cookies by default as part of their strong anti-tracking stance.

Internet Explorer (IE) 11 allowed third-party cookies depending on user privacy settings and the presence of Platform for Privacy Preferences (P3P) headers. However, IE usage is now negligible. Notably, the default “Medium” privacy setting in IE could block third-party cookies unless a valid P3P policy was present.

Older versions of Safari had partial third-party cookie restrictions (such as “Allow from websites I visit”), but, as mentioned before, this was replaced with full blocking via ITP.

As of 2025, all major browsers either block or isolate third-party cookies by default, with the exception of Chrome, which still allows them in standard browsing mode pending the rollout of its new user-choice model.

To account for these variations, your detection strategy must be grounded in real-world testing — specifically by reproducing a genuine third-party context such as loading your script within an iframe on a cross-origin domain — rather than relying on browser names or versions.

Overview Of Detection Techniques

Over the years, many techniques have been used to detect third-party cookie blocking. Most are unreliable or obsolete. Here’s a quick walkthrough of what doesn’t work (and why) and what does.

Basic JavaScript API Checks (Misleading)

As mentioned earlier, the navigator.cookieEnabled or setting document.cookie on the main page doesn’t reflect cross-site cookie status:

  • In third-party iframes, navigator.cookieEnabled often returns true even when cookies are blocked.
  • Setting document.cookie in the parent doesn’t test the third-party context.

These checks are first-party only. Avoid using them for detection.

Storage Hacks Via localStorage (Obsolete)

Previously, some developers inferred cookie support by checking whether window.localStorage worked inside a third-party iframe — a trick aimed at older Safari versions that blocked all third-party storage.

Modern browsers often allow localStorage even when cookies are blocked. This leads to false positives and is no longer reliable.

Server-Assisted Cookie Probe (Valid, But Heavyweight)

One classic method involves setting a cookie from a third-party domain via HTTP and then checking if it comes back:

  1. Load a script/image from a third-party server that sets a cookie.
  2. Immediately load another resource, and the server checks whether the cookie was sent.

This works, but it:

  • Requires custom server-side logic,
  • Depends on HTTP caching, response headers, and cookie attributes (SameSite=None; Secure), and
  • Adds development and infrastructure complexity.

While this is technically valid, it is not suitable for a front-end-only approach, which is our focus here.

Storage Access API (Supplemental Signal)

The document.hasStorageAccess() method allows embedded third-party content to check if it has access to unpartitioned cookies:

  • Chrome
    Supports hasStorageAccess() and requestStorageAccess() starting from version 119. Additionally, hasUnpartitionedCookieAccess() is available as an alias for hasStorageAccess() from version 125 onwards.
  • Firefox
    Supports both hasStorageAccess() and requestStorageAccess() methods.
  • Safari
    Supports the Storage Access API. However, access must always be triggered by a user interaction. For example, even calling requestStorageAccess() without a direct user gesture (like a click) is ignored.

In Chrome and Firefox, storage access may even be granted automatically, based on browser heuristics or site engagement, without an explicit user gesture.

This API is particularly useful for detecting scenarios where cookies are present but partitioned (e.g., Firefox’s Total Cookie Protection), as it helps determine if the iframe has unrestricted cookie access. But for now, it’s still best used as a supplemental signal, rather than a standalone check.
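As a small illustration, a supplemental check inside an embedded frame could look like this (a minimal sketch; remember that a true result does not guarantee a usable cross-site session in partitioning browsers):

async function checkUnpartitionedCookieAccess() {
  // not all browsers implement the Storage Access API
  if (typeof document.hasStorageAccess !== "function") {
    return null; // unknown; combine with other signals
  }

  try {
    // resolves true if this frame has access to its unpartitioned cookies
    return await document.hasStorageAccess();
  } catch (e) {
    return false;
  }
}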

iFrame + postMessage (Best Practice)

Despite the existence of the Storage Access API, at the time of writing, this remains the most reliable and browser-compatible method:

  1. Embed a hidden iframe from a third-party domain.
  2. Inside the iframe, attempt to set a test cookie.
  3. Use window.postMessage to report success or failure to the parent.

This approach works across all major browsers (when properly configured), requires no server (kind of, more on that next), and simulates a real-world third-party scenario.

We’ll implement this step-by-step next.

Bonus: Sec-Fetch-Storage-Access

Chrome (starting in version 133) is introducing Sec-Fetch-Storage-Access, an HTTP request header sent with cross-site requests to indicate whether the iframe has access to unpartitioned cookies. This header is only visible to servers and cannot be accessed via JavaScript. It’s useful for back-end analytics but not applicable for client-side cookie detection.

As of May 2025, this feature is only implemented in Chrome and is not supported by other browsers. However, it’s still good to know that it’s part of the evolving ecosystem.

Step-by-Step: Detecting Third-Party Cookies Via iFrame

So, what did I mean when I said that the last method we looked at “requires no server”? While this method doesn’t require any back-end logic (like server-set cookies or response inspection), it does require access to a separate domain — or at least a cross-site subdomain — to simulate a third-party environment. This means the following:

  • You must serve the test page from a different domain or public subdomain, e.g., example.com and cookietest.example.com,
  • The domain needs HTTPS (for SameSite=None; Secure cookies to work), and
  • You’ll need to host a simple static file (the test page), even if no server code is involved.

Once that’s set up, the rest of the logic is fully client-side.

Step 1: Create The Cookie Test Page

Minimal version (e.g., https://cookietest.example.com/cookie-check.html):

<!DOCTYPE html>
<html>
  <body>
    <script>
      document.cookie = "thirdparty_test=1; SameSite=None; Secure; Path=/;";
      const cookieFound = document.cookie.includes("thirdparty_test=1");
    
      const sendResult = (status) => window.parent?.postMessage(status, "*");
    
      if (cookieFound && document.hasStorageAccess instanceof Function) {
        document.hasStorageAccess().then((hasAccess) => {
          sendResult(hasAccess ? "TP_COOKIE_SUPPORTED" : "TP_COOKIE_BLOCKED");
        }).catch(() => sendResult("TP_COOKIE_BLOCKED"));
      } else {
        sendResult(cookieFound ? "TP_COOKIE_SUPPORTED" : "TP_COOKIE_BLOCKED");
      }
    </script>
  </body>
</html>

Make sure the page is served over HTTPS, and the cookie uses SameSite=None; Secure. Without these attributes, modern browsers will silently reject it.

Step 2: Embed The iFrame And Listen For The Result

On your main page:

function checkThirdPartyCookies() {
  return new Promise((resolve) => {
    const iframe = document.createElement('iframe');
    iframe.style.display = 'none';
    iframe.src = "https://cookietest.example.com/cookie-check.html"; // your subdomain
    document.body.appendChild(iframe);

    let resolved = false;
    const cleanup = (result, timedOut = false) => {
      if (resolved) return;
      resolved = true;
      window.removeEventListener('message', onMessage);
      iframe.remove();
      resolve({ thirdPartyCookiesEnabled: result, timedOut });
    };

    const onMessage = (event) => {
      if (["TP_COOKIE_SUPPORTED", "TP_COOKIE_BLOCKED"].includes(event.data)) {
        cleanup(event.data === "TP_COOKIE_SUPPORTED", false);
      }
    };

    window.addEventListener('message', onMessage);
    setTimeout(() => cleanup(false, true), 1000);
  });
}

Example usage:

checkThirdPartyCookies().then(({ thirdPartyCookiesEnabled, timedOut }) => {
  if (!thirdPartyCookiesEnabled) {
    someCookiesBlockedCallback(); // Third-party cookies are blocked.
    if (timedOut) {
      // No response received (iframe possibly blocked).
      // Optional fallback UX goes here.
      someCookiesBlockedTimeoutCallback();
    }
  }
});

Step 3: Enhance Detection With The Storage Access API

In Safari, even when third-party cookies are blocked, users can manually grant access through the Storage Access API — but only in response to a user gesture.

Here’s how you could implement that in your iframe test page:

<button id="enable-cookies">This embedded content requires cookie access. Click below to continue.</button>

<script>
  document.getElementById('enable-cookies')?.addEventListener('click', async () => {
    if (typeof document.requestStorageAccess !== 'function') {
      return;
    }

    try {
      // resolves on success; rejects if the user or browser denies access
      await document.requestStorageAccess();
      window.parent.postMessage("TP_STORAGE_ACCESS_GRANTED", "*");
    } catch (e) {
      window.parent.postMessage("TP_STORAGE_ACCESS_DENIED", "*");
    }
  });
</script>

Then, on the parent page, you can listen for this message and retry detection if needed:

// Inside the same `onMessage` listener from before:
if (event.data === "TP_STORAGE_ACCESS_GRANTED") {
  // Optionally: retry the cookie test, or reload iframe logic
  checkThirdPartyCookies().then(handleResultAgain);
}

(Bonus) A Purely Client-Side Fallback (Not Perfect, But Sometimes Necessary)

In some situations, you might not have access to a second domain or can’t host third-party content under your control. That makes the iframe method unfeasible.

When that’s the case, your best option is to combine multiple signals — basic cookie checks, hasStorageAccess(), localStorage fallbacks, and maybe even passive indicators like load failures or timeouts — to infer whether third-party cookies are likely blocked.

The important caveat: This will never be 100% accurate. But, in constrained environments, “better something than nothing” may still improve the UX.

Here’s a basic example:

async function inferCookieSupportFallback() {
  let hasCookieAPI = navigator.cookieEnabled;
  let canSetCookie = false;
  let hasStorageAccess = false;

  try {
    document.cookie = "testfallback=1; SameSite=None; Secure; Path=/;";
    canSetCookie = document.cookie.includes("test_fallback=1");

    document.cookie = "test_fallback=; Max-Age=0; Path=/;";
  } catch (_) {
    canSetCookie = false;
  }

  if (typeof document.hasStorageAccess === "function") {
    try {
      hasStorageAccess = await document.hasStorageAccess();
    } catch (_) {}
  }

  return {
    inferredThirdPartyCookies: hasCookieAPI && canSetCookie && hasStorageAccess,
    raw: { hasCookieAPI, canSetCookie, hasStorageAccess }
  };
}

Example usage:

inferCookieSupportFallback().then(({ inferredThirdPartyCookies }) => {
  if (inferredThirdPartyCookies) {
    console.log("Cookies likely supported. Likely, yes.");
  } else {
    console.warn("Cookies may be blocked or partitioned.");
    // You could inform the user or adjust behavior accordingly
  }
});

Use this fallback when:

  • You’re building a JavaScript-only widget embedded on unknown sites,
  • You don’t control a second domain (or the team refuses to add one), or
  • You just need some visibility into user-side behavior (e.g., debugging UX issues).

Don’t rely on it for security-critical logic (e.g., auth gating)! But it may help tailor the user experience, surface warnings, or decide whether to attempt a fallback SSO flow. Again, it’s better to have something rather than nothing.

Fallback Strategies When Third-Party Cookies Are Blocked

Detecting blocked cookies is only half the battle. Once you know they’re unavailable, what can you do? Here are some practical options that might be useful for you:

Redirect-Based Flows

For auth-related flows, switch from embedded iframes to top-level redirects. Let the user authenticate directly on the identity provider’s site, then redirect back. It works in all browsers, but the UX might be less seamless.
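A bare-bones sketch of that pattern follows; the identity provider URL and parameter names here are placeholders, not any specific provider’s API:

// break out of the iframe and authenticate at the top level,
// then let the identity provider redirect back to this page
function redirectToLogin() {
  const returnUrl = encodeURIComponent(window.location.href);
  // hypothetical endpoint and parameter, for illustration only
  window.top.location.href = `https://id.example.com/login?redirect_uri=${returnUrl}`;
}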

Request Storage Access

Prompt the user using requestStorageAccess() after a clear UI gesture (Safari requires this). Use this to re-enable cookies without leaving the page.

Token-Based Communication

Pass session info directly from parent to iframe via postMessage (or, where it fits, URL parameters). This avoids reliance on cookies entirely but requires coordination between both sides:

// Parent
const iframe = document.getElementById('my-iframe');

iframe.onload = () => {
  const token = getAccessTokenSomehow(); // JWT or anything else
  iframe.contentWindow.postMessage(
    { type: 'AUTH_TOKEN', token },
    'https://iframe.example.com' // Set the correct origin!
  );
};

// iframe
window.addEventListener('message', (event) => {
  if (event.origin !== 'https://parent.example.com') return;

  const { type, token } = event.data;

  if (type === 'AUTH_TOKEN') {
    validateAndUseToken(token); // process JWT, init session, etc
  }
});

Partitioned Cookies (CHIPS)

Chrome (since version 114) and other Chromium-based browsers now support cookies with the Partitioned attribute (known as CHIPS), allowing per-top-site cookie isolation. This is useful for widgets like chat or embedded forms where cross-site identity isn’t needed.

Note: Firefox and Safari don’t support the Partitioned cookie attribute. Firefox enforces cookie partitioning by default using a different mechanism (Total Cookie Protection), while Safari blocks third-party cookies entirely.

But be careful: partitioned cookies are treated as “blocked” by basic detection logic, so refine your checks if you rely on them.
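For reference, a partitioned cookie could be set from the embedded context like this (a minimal sketch; the cookie name and value are placeholders, and the same attribute works in a Set-Cookie response header):

// inside the third-party iframe: this cookie is keyed to the combination
// of the iframe's origin and the top-level site embedding it
document.cookie = "widget_session=abc123; Path=/; SameSite=None; Secure; Partitioned;";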

Final Thought: Transparency, Transition, And The Path Forward

Third-party cookies are disappearing, albeit gradually and unevenly. Until the transition is complete, your job as a developer is to bridge the gap between technical limitations and real-world user experience. That means:

  • Keep an eye on the standards.
    APIs like FedCM and Privacy Sandbox features (Topics, Attribution Reporting, Fenced Frames) are reshaping how we handle identity and analytics without relying on cross-site cookies.
  • Combine detection with graceful fallback.
    Whether it’s offering a redirect flow, using requestStorageAccess(), or falling back to token-based messaging — every small UX improvement adds up.
  • Inform your users.
    Users shouldn’t be left wondering why something worked in one browser but silently broke in another. Don’t let them feel like they did something wrong — just help them move forward. A clear, friendly message can prevent this confusion.

The good news? You don’t need a perfect solution today, just a resilient one. By detecting issues early and handling them thoughtfully, you protect both your users and your future architecture, one cookie-less browser at a time.

And as seen with Chrome’s pivot away from automatic deprecation, the transition is not always linear. Industry feedback, regulatory oversight, and evolving technical realities continue to shape both the timeline and the solutions.

And don’t forget: having something is better than nothing.