
Wednesday, April 9, 2025

How To Argue Against AI-First Research

 Companies have been turning their attention to “synthetic,” AI-driven user testing. However, as convenient as it might seem, it’s dangerous, expensive, and usually diminishes user value. Let’s take a closer look at why exactly it is problematic and how we can argue against it to make a case for UX research with real users.

With AI upon us, companies have recently been turning their attention to “synthetic” user testing — AI-driven research that replaces UX research. There, questions are answered by AI-generated “customers,” and human tasks are “performed” by AI agents.

However, AI isn’t being used just for desk research or discovery; it’s actual usability testing with “AI personas” that mimic the behavior of real customers within the actual product. It’s like UX research, just… well, without the users.


If this sounds worrying, confusing, and outlandish, it is — but that doesn’t stop companies from adopting AI “research” to drive business decisions. Unsurprisingly, the undertaking can be dangerous, risky, and expensive, and it usually diminishes user value.

Fast, Cheap, Easy… And Imaginary

Erika Hall famously noted that “design is only as ‘human-centered’ as the business model allows.” If a company is heavily driven by hunches, assumptions, and strong opinions, there will be little to no interest in properly done UX research in the first place.

But unlike UX research, AI research (conveniently called synthetic testing) is fast, cheap, and easy to re-run. It doesn’t raise uncomfortable questions, and it doesn’t flag wrong assumptions. It doesn’t require user recruitment, much time, or long-winded debates.

And: it can manage thousands of AI personas at once. By studying AI-generated output, we can discover common journeys, navigation patterns, and common expectations. We can anticipate how people behave and what they would do.

Well, that’s the big promise. And that’s where we start running into big problems.

LLMs Are People Pleasers

Good UX research has roots in what actually happened, not what might have happened or what might happen in the future.

By nature, LLMs are trained to provide the most “plausible” or most likely output based on patterns captured in their training data. These patterns, however, emerge from expected behaviors of statistically “average” profiles extracted from content on the web. But these people don’t exist; they never have.

By default, user segments are not scoped and not curated. They don’t represent the customer base of any product. So to be useful, AI must be prompted carefully, explaining who the users are, what they do, and how they behave. Otherwise, the output won’t match user needs and won’t apply to our users.


When “producing” user insights, LLMs can’t generate unexpected things beyond what we’re already asking about.

Human researchers, in comparison, are able to define what’s relevant as the process unfolds. In actual user testing, insights can help shift priorities or radically reimagine the problem we’re trying to solve, as well as potential business outcomes.

Real insights come from unexpected behavior, from reading behavioral clues and emotions, from observing a person doing the opposite of what they said. We can’t replicate it with LLMs.

AI User Research Isn’t “Better Than Nothing”

Pavel Samsonov articulates that things that merely sound like something customers might say are worthless. But things that customers actually have said, done, or experienced carry inherent value (although they could be exaggerated). We just need to interpret them correctly.

AI user research isn’t “better than nothing” or “more effective.” It creates an illusion of customer experiences that never happened and are at best good guesses but at worst misleading and non-applicable. Relying on AI-generated “insights” alone isn’t much different than reading tea leaves.

The Cost Of Mechanical Decisions

We often hear about the breakthrough of automation and knowledge generation with AI. Yet we forget that automation often comes at a cost: the cost of mechanical decisions that are typically indiscriminate, favor uniformity, and erode quality.

As Maria Rosala and Kate Moran write, the problem with AI research is that it most certainly will be misrepresentative, and without real research, you won’t catch and correct those inaccuracies. Making decisions without talking to real customers is dangerous, harmful, and expensive.

Beyond that, synthetic testing assumes that people fit in well-defined boxes, which is rarely true. Human behavior is shaped by our experiences, situations, and habits, which can’t be replicated by text generation alone. AI strengthens biases, supports hunches, and amplifies stereotypes.

Triangulate Insights Instead Of Verifying Them

Of course, AI can provide useful starting points to explore early in the process. But inherently, it also invites false impressions and unverified conclusions — presented with an incredible level of confidence and certainty.

Starting with human research conducted with real customers using a real product is just much more reliable. After doing so, we can still apply AI to see if we perhaps missed something critical in user interviews. AI can enhance but not replace UX research.

Also, when we do use AI for desk research, it can be tempting to try to “validate” AI “insights” with actual user testing. However, once we plant a seed of insight in our head, it’s easy to recognize its signs everywhere — even if it really isn’t there.

Instead, we study actual customers, then triangulate data: track clusters or most heavily trafficked parts of the product. It might be that analytics and AI desk research confirm your hypothesis. That would give you a much stronger standing to move forward in the process.

Wrapping Up

I might sound like a broken record, but I keep wondering why we feel the urgency to replace UX work with automated AI tools. Good design requires a good amount of critical thinking, observation, and planning.

To me personally, cleaning up after AI-generated output takes way more time than doing the actual work. There is an incredible value in talking to people who actually use your product.

I would always choose one day with a real customer instead of one hour with 1,000 synthetic users pretending to be humans.


Monday, April 7, 2025

Previewing Content Changes In Your Work With document.designMode

 

You probably already know that you can use developer tools in your browser to make on-the-spot changes to a webpage — simply click the node in the Inspector and make your edits. But have you tried document.designMode? Victor Ayomipo explains how it can be used to preview content changes and demonstrates several use cases where it comes in handy for everything from basic content editing to improving team collaboration.

So, you just deployed a change to your website. Congrats! Everything went according to plan, but now that you look at your work in production, you start questioning your change. Perhaps that change was as simple as a new heading and doesn’t seem to fit the space. Maybe you added an image, but it just doesn’t feel right in that specific context.

What do you do? Do you start deploying more changes? It’s not like you need to crack open Illustrator or Figma to mock up a small change like that, but previewing your changes before deploying them would still be helpful.

Enter document.designMode. It’s not new. In fact, I just recently came across it for the first time and had one of those “Wait, this exists?” moments because it’s a tool we’ve had forever, even in Internet Explorer 6. But for some reason, I’m only now hearing about it, and it turns out that many of my colleagues are also hearing about it for the first time.

What exactly is document.designMode? Perhaps a little video can help demonstrate how it allows you to make direct edits to a page.

Video demonstration of how document.designMode works.

At its simplest, document.designMode makes webpages editable, similar to a text editor. I’d say it’s like having an edit mode for the web — one can click anywhere on a webpage to modify existing text, move stuff around, and even delete elements. It’s like having Apple’s “Distraction Control” feature at your beck and call.

I think this is a useful tool for developers, designers, clients, and regular users alike.

You might be wondering if this is just like contentEditable because, at a glance, they both look similar. But no, the two serve different purposes. contentEditable is more focused on making a specific element editable, while document.designMode makes the whole page editable.
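A quick sketch makes the difference tangible (the markup here is illustrative):

<!-- contentEditable makes just this one element editable -->
<div contenteditable="true">You can edit this element only.</div>

<script>
  // document.designMode makes the entire page editable
  document.designMode = "on";
</script>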

How To Enable document.designMode In DevTools

Enabling document.designMode can be done in the browser’s developer tools:

  1. Right-click anywhere on a webpage and click Inspect.
  2. Click the Console tab.
  3. Type document.designMode = "on" and press Enter.

To turn it off, refresh the page. That’s it.

Another method is to create a bookmark that activates the mode when clicked:

  1. Create a new bookmark in your browser.
  2. You can name it whatever, e.g., “EDIT_MODE”.
  3. Input this code in the URL field:
javascript:(function(){document.designMode = document.designMode === 'on' ? 'off' : 'on';})();

And now you have a switch that toggles document.designMode on and off.

Use Cases

There are many interesting, creative, and useful ways to use this tool.

Basic Content Editing

I dare say this is the core purpose of document.designMode, which is essentially editing any text element of a webpage for whatever reason. It could be the headings, paragraphs, or even bullet points. Whatever the case, your browser effectively becomes a “What You See Is What You Get” (WYSIWYG) editor, where you can make and preview changes on the spot.

Basic content editing using document.designMode.

Landing Page A/B Testing

Let’s say you have a product website with existing copy, but then you check out your competitors, and their copy looks more appealing. Naturally, you’d want to test it out. Instead of editing on the back end or taking notes for later, you can use document.designMode to immediately see how that copy variation would fit into the landing page layout and then easily compare and contrast the two versions.

Landing page A/B testing with document.designMode.

This could also be useful for copywriters or solo developers.

SEO Title And Meta Description

Everyone wants their website to rank at the top of search results because that means more traffic. However, as broad as SEO is as a practice, the <title> tag and <meta> description are a website’s first impression in search results, both for visitors and search engines, as they can make or break the click-through rate.

The question that arises is, how do you know if certain text gets cut off in search results? I think document.designMode can fix that before pushing it live.

SEO title and meta description with document.designMode.

With this tool, I think it’d be a lot easier to see how different title lengths look when truncated, whether the keywords are instantly visible, and how compelling it’d be compared to other competitors on the same search result.

Developer Workflows

To be completely honest, developers probably won’t want to use document.designMode for actual development work. However, it can still be handy for breaking stuff on a website, moving elements around, repositioning images, deleting UI elements, and undoing what was deleted, all in real time.

This could help if you’re skeptical about the position of an element or feel a button might do better at the top than at the bottom. It sure beats rearranging elements in the codebase just to see whether an element would look good positioned differently. But again, most of the time, we’re developing in a local environment where these things can be done just as effectively, so your mileage may vary as far as how useful you find document.designMode in your development work.

Client And Team Collaboration

It is a no-brainer that some clients almost always have last-minute change requests — stuff like “Can we remove this button?” or “Let’s edit the pricing features in the free tier.”

To the client, these are just little tweaks, but to you, it could be a hassle to start up your development environment to make those changes. I believe document.designMode can assist in such cases by making those changes in seconds without touching production, then sharing screenshots with the client.

Client and team collaboration with document.designMode.

It could also become useful in team meetings when discussing UI changes. Seeing changes in real-time through screen sharing can help facilitate discussion and lead to quicker conclusions.

Live DOM Tutorials

For beginners learning web development, I feel like document.designMode can help provide a first look at how it feels to manipulate a webpage and immediately see the results — sort of like a pre-web development stage, even before touching a code editor.

As learners experiment with moving things around, an instructor can explain how each change works and affects the flow of the page.

Social Media Content Preview

We can use the same idea to preview social media posts before publishing them! For instance, document.designMode can help gauge the effectiveness of different call-to-action phrases or visualize how ad copy would look when users stumble upon it while scrolling through the platform. This would be effective on any social media platform.

Social media content preview with document.designMode.

Memes

I didn’t think it’d be fair not to add this. It might seem out of place, but let’s be frank: creating memes is probably one of the first things that comes to mind when anyone discovers document.designMode.

You can create parody versions of social posts, tweak article headlines, change product prices, and manipulate YouTube views or Reddit comments, just to name a few of the ways you could meme things. Just remember: this shouldn’t be used to spread false information or cause actual harm. Please keep it respectful and ethical!

Conclusion

document.designMode = "on" is one of those delightful browser tricks that can be immediately useful when you discover it for the first time. It’s a raw and primitive tool, but you can’t deny its utility and purpose.

So, give it a try, show it to your colleagues, or even edit this article. You never know when it might be exactly what you need.


Saturday, April 5, 2025

Building A Drupal To Storyblok Migration Tool: An Engineering Perspective

This article shares the engineering and architectural choices made by the team at Storyblok and how real-world migration challenges were addressed using modern PHP practices.

Content management is evolving. The traditional monolithic CMS approach is giving way to headless architectures, where content management and presentation are decoupled. This shift brings new challenges, particularly when organizations need to migrate from legacy systems to modern headless platforms.

Our team encountered this scenario when creating a migration path from Drupal to Storyblok. These systems handle content architecture quite differently — Drupal uses an entity-field model integrated with PHP, while Storyblok employs a flexible Stories and Blocks structure designed for headless delivery.

If you just need to use a script to do a simple — yet extensible — content migration from Drupal to Storyblok, I already shared step-by-step instructions on how to download and use it. If you’re interested in the process of creating such a script so that you can write your own (possibly) better version, stay here!

We observed that developers sometimes struggle with manual content transfers and custom scripts when migrating between CMSs. This led us to develop and share our migration approach, which we implemented as an open-source tool that others could use as a reference for their migration needs.

Our solution combines two main components: a custom Drush command that handles content mapping and transformation, and a new PHP client for Storyblok’s Management API that leverages modern language features for improved developer experience.

We’ll explore the engineering decisions behind this tool’s development, examining our architectural choices and how we addressed real-world migration challenges using modern PHP practices.

Note: You can find the complete source code of the migration tool in the Drupal exporter repo.

Planning The Migration Architecture

The journey from Drupal to Storyblok presents unique architectural challenges. The fundamental difference lies in how these systems conceptualize content: Drupal structures content as entities with fields, while Storyblok uses a component-based approach with Stories and Blocks.

Initial Requirements Analysis

A successful migration tool needs to understand both systems intimately. Drupal’s content model relies heavily on its Entity API, storing content as structured field collections within entities. A typical Drupal article might contain fields for the title, body content, images, and taxonomies. Storyblok, on the other hand, structures content as stories that contain blocks, reusable components that can be nested and arranged in a flexible way. It’s a subtle difference that shaped our technical requirements, particularly around content mapping and data transformation, but ultimately, it’s easy to see the relationships between the two content models.

Technical Constraints

Early in development, we identified several key constraints. Storyblok’s Management API enforces rate limits that affect how quickly we can transfer content. Media assets must first be uploaded and then linked. Error recovery becomes essential when migrating hundreds of pieces of content.

The brand-new Management API PHP client handles these constraints through built-in retry mechanisms and response validation, so in writing a migration script, we don’t need to worry about them.

Tool Selection

We chose Drush as our command-line interface for several reasons. First, it’s deeply integrated with Drupal’s bootstrap process, providing direct access to the Entity API and field data. Second, Drupal developers are already familiar with its conventions, making our tool more accessible.

The decision to develop a new Management API client came from our experience with the evolution of PHP since we developed the first PHP client, and our goal to provide developers with a dedicated tool for this specific API that offered an improved DX and a tailored set of features.

This groundwork shaped how we approached the migration workflow.

The Building Blocks: A New Management API Client

A content migration tool interacts heavily with Storyblok’s Management API — creating stories, uploading assets, and managing tags. Each operation needs to be reliable and predictable. Our brand-new client simplifies these interactions through intuitive method calls. The client handles authentication, request formatting, and response parsing behind the scenes, letting devs focus on content operations rather than API mechanics.
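The original snippet isn’t reproduced here, but a minimal sketch of what such a call can look like is below. Class and method names are illustrative assumptions, not the client’s confirmed API:

// Hypothetical sketch — names are illustrative, not the confirmed client API
$client = new ManagementApiClient('your-personal-access-token');

// Create a story in a given space; the client takes care of auth headers,
// request formatting, and response parsing
$response = $client->stories($spaceId)->create([
    'story' => [
        'name' => 'My first migrated story',
        'slug' => 'my-first-migrated-story',
        'content' => ['component' => 'article', 'title' => 'Hello Storyblok'],
    ],
]);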

Design For Reliability

Content migrations often involve hundreds of API calls. Our client includes built-in mechanisms for handling common scenarios like rate limiting and failed requests, and its response handling pattern provides clear feedback about operation success. A logger can be injected into the client class, as we did with the Drush logger in our migration script from Drupal.
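As a rough sketch of that pattern (again, the response methods are assumptions for illustration; the logger calls are Drush’s own):

// Hypothetical sketch — response handling with an injected Drush logger
$response = $client->stories($spaceId)->create($payload);

if ($response->isOk()) {
    $this->logger()->success('Story created with ID ' . $response->data()['story']['id']);
} else {
    // Failed requests surface a readable message instead of a raw HTTP error
    $this->logger()->error('Story creation failed: ' . $response->getErrorMessage());
}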

Improving The Development Experience

Beyond basic API operations, the client reduces cognitive load through predictable patterns. Data objects provide a structured way to prepare content for Storyblok, and this pattern validates data early in the process, catching potential issues before they reach the API.
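A sketch of the idea, with an illustrative data object (the actual class names in the client may differ):

// Hypothetical sketch — a data object validates content before any API call
$storyData = new StoryData([
    'name'    => $node->getTitle(),
    'slug'    => 'article-' . $node->id(),
    'content' => [
        'component' => 'article',
        'title'     => $node->getTitle(),
        'body'      => $node->get('body')->value,
    ],
]);

// Invalid or missing fields would surface here, long before the request is sent
$client->stories($spaceId)->create($storyData);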

Designing The Migration Workflow

Moving from Drupal’s entity-based structure to Storyblok’s component model required careful planning of the migration workflow. Our goal was to create a process that would be both reliable and adaptable to different content structures.

Command Structure

The migration leverages Drupal’s entity query system to extract content systematically. By default, access checks were disabled (a reversible business decision) to focus solely on migrating published nodes.
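In Drupal terms, the extraction step can look roughly like this (a sketch using the core Entity API):

use Drupal\node\Entity\Node;

// Collect all published article nodes, skipping access checks
$nids = \Drupal::entityQuery('node')
  ->condition('type', 'article')
  ->condition('status', 1)
  ->accessCheck(FALSE)
  ->execute();

// Load the full entities for field mapping
foreach (Node::loadMultiple($nids) as $node) {
  // ... map fields and push the node to Storyblok ...
}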

Key Steps And Insights

  • Text Fields

    • Required minimal effort: values like value() mapped directly to Storyblok fields.
    • Rich text posed no encoding challenges, enabling straightforward 1:1 transfers.
  • Handling Images (see the sketch after this list)

    1. Upload: Assets were sent to an AWS S3 bucket.
    2. Link: Storyblok’s Asset API upload() method returned an object_id, simplifying field mapping.
    3. Assign: The asset ID and filename were attached to the story.
  • Managing Tags

    • Tags extracted from Drupal were pre-created via Storyblok’s Tag API (optional but ensures consistency).
    • When assigning tags to stories, Storyblok automatically creates missing ones, streamlining the process.
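For the image steps above, the flow can be sketched like this (the upload() method and object_id come from the article; the surrounding names are illustrative):

// 1. Upload the file; behind the scenes it lands in an AWS S3 bucket
$asset = $client->assets($spaceId)->upload('/path/to/image.jpg');

// 2. Link: upload() returns an object carrying the asset's object_id
$assetId = $asset['object_id'];

// 3. Assign the asset ID and filename to the story's image field
$story['content']['image'] = [
    'id'       => $assetId,
    'filename' => $asset['filename'],
];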

Why Staged Workflows Matter

The migration avoids broken references by prioritizing dependencies (assets first, tags next, content last). While pre-creating tags adds control, teams can adapt this logic — for example, letting Storyblok auto-generate tags to save time.

Flexibility is key: every decision (access checks, tag workflows) can be adjusted to align with project goals.

Real-World Implementation Challenges

Migrating content between Drupal and Storyblok presents challenges that you, as the implementer, may encounter.

For example, when dealing with large datasets, you may find that Drupal sites with thousands of nodes can quickly hit the rate limits enforced by Storyblok’s management API. In such cases, a batching mechanism for your requests is worth considering. Instead of processing every node at once, you can process a subset of records, wait for a short period of time, and then continue.
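A simple batching loop along these lines can keep you under the limits (the batch size and pause are arbitrary choices):

// Process nodes in small batches with a pause between them
$batches = array_chunk($nids, 25);

foreach ($batches as $batch) {
    foreach (\Drupal\node\Entity\Node::loadMultiple($batch) as $node) {
        // ... transform and push this node to Storyblok ...
    }
    sleep(1); // short pause so we stay under the rate limit
}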

Alternatively, you could use the createBulk method of the Story API in the Management API, which allows you to handle multiple story creations with built-in rate limit handling and retries.

Another potential hurdle is the conversion of complex field types, especially when Drupal’s nested structures or Paragraph fields need to be mapped to Storyblok’s more flexible block-based model.

One approach is first to analyze the nesting depth and structure of the Drupal content, then flatten deeply nested elements into reusable Storyblok components while maintaining the correct hierarchy. For example, a paragraph field with embedded media and text can be split into blocks within Storyblok, with each component representing a logical section of content. By structuring data this way before migration, you ensure that content remains editable and properly structured in the new system.

Data consistency is another aspect that you need to manage carefully. When migrating hundreds of records, partial failures are always risky. One approach to managing this is to log detailed information for each migration operation and implement a retry mechanism for failed operations.

For example, wrapping API calls in a try-catch block and logging errors can be a practical way to ensure that no records are silently dropped. When dealing with fields such as taxonomy terms or tags created on the fly in Storyblok, you may run into duplication issues. A good practice is to perform a check before creating a new tag. This could involve maintaining a local cache of previously created tags and checking against them before sending a create request to the API.

The same goes for images; a check could ensure you don’t upload the same asset twice.
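Both checks fit in a few lines; here is a hedged sketch of the tag-cache idea (the tag-creation call is illustrative):

// Local cache of tags created during this run to avoid duplicates
$createdTags = [];

function ensureTag(string $name, $client, int $spaceId, array &$createdTags): void {
    if (isset($createdTags[$name])) {
        return; // already created, skip the API call
    }
    try {
        $client->tags($spaceId)->create(['name' => $name]); // illustrative call
        $createdTags[$name] = true;
    } catch (\Throwable $e) {
        // Log instead of silently dropping the record
        \Drupal::logger('drupal_exporter')->error($e->getMessage());
    }
}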

Lessons Learned And Looking Forward

A dedicated API client for Storyblok streamlined interactions, abstracting backend complexity while improving code maintainability. Early use of structured data objects to prepare content proved critical, enabling pre-emptive error detection and reducing API failures.

We also ran into some challenges and see room for improvement:

  • Encoding issues in rich text (e.g., HTML entities) were resolved with a pre-processing step,
  • Performance bottlenecks with large text/images required memory optimization and refined request handling.

Enhancements could include support for Drupal Layout Builder, advanced validation layers, or dynamic asset management systems.

Friday, April 4, 2025

Web Components Vs. Framework Components: What’s The Difference?

 

Some critics question the agnostic nature of Web Components, with some even arguing that they are not real components. This article explores the topic in depth, comparing Web Components and framework components, highlighting their strengths and trade-offs, and evaluating their performance.

It might surprise you that a distinction exists regarding the word “component,” especially in front-end development, where “component” is often used and associated with front-end frameworks and libraries. A component is a piece of code that encapsulates specific functionality and presentation. Components in front-end applications have a similar function: building reusable user interfaces. However, their implementations are different.

Web — or “framework-agnostic” — components are standard web technologies for building reusable, self-sustained HTML elements. They consist of Custom Elements, Shadow DOM, and HTML template elements. On the other hand, framework components are reusable UIs explicitly tailored to the framework in which they are created. Unlike Web Components, which can be used in any framework, framework components are useless outside their frameworks.

Some critics question the agnostic nature of Web Components and even go so far as to state that they are not real components because they do not conform to the agreed-upon nature of components. This article comprehensively compares web and framework components, examines the arguments regarding Web Components agnosticism, and considers the performance aspects of Web and framework components.

What Makes A Component?

Several criteria could be satisfied for a piece of code to be called a component, but only a few are essential:

  • Reusability,
  • Props and data handling,
  • Encapsulation.

Reusability is the primary purpose of a component, as it emphasizes the DRY (don’t repeat yourself) principle. A component should be designed to be reused in different parts of an application or across multiple applications. Also, a component should be able to accept data (in the form of props) from its parent components and optionally pass data back through callbacks or events. Components are regarded as self-contained units; therefore, they should encapsulate their logic, styles, and state.

If there’s one thing we are certain of, framework components capture these criteria well, but what about their counterparts, Web Components?

Understanding Web Components

Web Components are a set of web APIs that allow developers to create custom, reusable HTML tags that serve a specific function. Based on existing web standards, they permit developers to extend HTML with new elements, custom behaviour, and encapsulated styling.

Web Components are built based on three web specifications:

  • Custom Elements,
  • Shadow DOM,
  • HTML templates.

Each specification can exist independently, but when combined, they produce a web component.

Custom Elements

The Custom Elements API provides a way to define and use new types of DOM elements that can be reused.

// Define a Custom Element
class MyCustomElement extends HTMLElement {
  constructor() {
    super();
  }

  connectedCallback() {
    this.innerHTML = `
      <p>Hello from MyCustomElement!</p>
    `;
  }
}

// Register the Custom Element
customElements.define('my-custom-element', MyCustomElement);

Shadow DOM

The Shadow DOM has been around since before the concept of Web Components. Browsers have used a nonstandard version for years for default browser controls that are not regular DOM nodes. It is a part of the DOM that is less reachable than typical light DOM elements as far as JavaScript and CSS are concerned, so its contents are more encapsulated as standalone elements.

// Create a Custom Element with Shadow DOM
class MyShadowElement extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' });
  }

  connectedCallback() {
    this.shadowRoot.innerHTML = `
      <style>
        p {
          color: green;
        }
      </style>
      <p>Content in Shadow DOM</p>
    `;
  }
}

// Register the Custom Element
customElements.define('my-shadow-element', MyShadowElement);

HTML Templates

The HTML Templates API enables developers to write markup templates that are not rendered at the start of the app but can be instantiated at runtime with JavaScript. HTML templates define the structure of Custom Elements in Web Components.

<!-- Define a reusable template -->
<template id="my-component-template">
  <style>
    p {
      color: red;
    }
  </style>
  <p>Hello from an HTML template!</p>
</template>

// my-component.js
export class MyComponent extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' });
  }

  connectedCallback() {
    // Stamp the template's content into the Shadow DOM
    const template = document.getElementById('my-component-template');
    this.shadowRoot.appendChild(template.content.cloneNode(true));
  }
}

// Register the Custom Element
customElements.define('my-component', MyComponent);

<!-- Import the ES Module -->
<script type="module">
  import { MyComponent } from './my-component.js';
</script>

Web Components are often described as framework-agnostic because they rely on native browser APIs rather than being tied to any specific JavaScript framework or library. This means that Web Components can be used in any web application, regardless of whether it is built with React, Angular, Vue, or even vanilla JavaScript. Due to their supposed framework-agnostic nature, they can be created and integrated into any modern front-end framework and still function with little to no modifications. But are they actually framework-agnostic?

The Reality Of Framework-Agnosticism In Web Components

Framework-agnosticism is a term describing self-sufficient software — an element in this case — that can be integrated into any framework with minimal or no modifications and still operate efficiently, as expected.

Web Components can be integrated into any framework, but not without changes that can range from minimal to complex, especially around styles and HTML arrangement. Integration may also require additional configuration or polyfills for full browser support. This drawback is why some developers do not consider Web Components to be framework-agnostic. Still, beyond these configurations and edits, Web Components can easily fit into any front-end framework, including but not limited to React, Angular, and Vue.

Framework Components: Strengths And Limitations

Framework components are framework-specific reusable bits of code. They are regarded as the building blocks of the framework on which they are built and possess several benefits over Web Components, including the following:

  • An established ecosystem and community support,
  • Developer-friendly integrations and tools,
  • Comprehensive documentation and resources,
  • Core functionality,
  • Tested code,
  • Fast development,
  • Cross-browser support, and
  • Performance optimizations.

Examples of commonly employed front-end framework components include React components, Vue components, and Angular directives. React supports a virtual DOM and one-way data binding, which allows for efficient updates and a component-based model. Vue is a lightweight framework with a flexible and easy-to-learn component system. Angular, unlike React, offers a two-way data binding component model with a TypeScript focus. Other front-end framework components include Svelte components, SolidJS components, and more.

Framework components are designed to operate within a specific JavaScript framework such as React, Vue, or Angular and, therefore, sit directly on top of the framework’s architecture, APIs, and conventions. For instance, React components use JSX and React’s state management, while Angular components leverage Angular’s template syntax and dependency injection. As for benefits, they offer an excellent developer experience and performance; as for drawbacks, they are not flexible or reusable outside their framework.

In addition, a state known as vendor lock-in is created when developers become so reliant on some framework or library that they are unable to switch to another. This is possible with framework components because they are developed to be operational only in the framework environment.

Comparative Analysis

Framework and Web Components have their respective strengths and weaknesses and are appropriate for different scenarios. However, a comparative analysis based on several criteria can help clarify the distinction between the two.

Encapsulation And Styling: Scoped Vs. Isolated

Encapsulation is a trademark of components, but Web Components and framework components handle it differently. Web Components provide isolated encapsulation with the Shadow DOM, which creates a separate DOM tree that shields a component’s styles and structure from external manipulation. That ensures a Web Component will look and behave the same wherever it is used.

However, this isolation can make it difficult for developers who need to customize styles, as external CSS cannot cross the Shadow DOM without explicit workarounds (e.g., CSS custom properties). Most frameworks use scoped styling instead, which limits CSS to a component using class names, CSS-in-JS, or module systems. While this prevents styles from leaking out, it does not entirely prevent external styles from leaking in, with the possibility of conflicts. Libraries like Vue and Svelte support scoped CSS by default, while React often falls back to libraries like styled-components.

Reusability And Interoperability

Web Components are better for reusable components that are useful for multiple frameworks or vanilla JavaScript applications. In addition, they are useful when the encapsulation and isolation of styles and behavior must be strict or when you want to leverage native browser APIs without too much reliance on other libraries.

Framework components are, however, helpful when you need to leverage features and optimizations provided by the framework (e.g., React’s reconciliation algorithm, Angular’s change detection) or take advantage of the mature ecosystem and tools available. You can also use framework components if your team is already familiar with the framework and its conventions since that will make your development process easier.

Performance Considerations

Another critical factor in choosing between Web and framework components is performance. While both can be extremely performant, there are instances where one will be quicker than the other.

For Web Components, native browser implementation can lead to optimized rendering and reduced overhead, but older browsers may require polyfills, which add to the initial load. While React and Angular provide specific optimizations (e.g., virtual DOM, change detection) that improve performance in large, dynamic applications, they add overhead due to the framework runtime and additional libraries.

Developer Experience

Developer experience is another fundamental consideration regarding Web Components versus framework components. Ease of use and learning curve can play a large role in determining development time and manageability. Availability of tooling and community support can influence developer experience, too.

Web Components use native browser APIs and, therefore, feel comfortable to developers who know HTML, CSS, and JavaScript, but they come with a steeper learning curve due to additional concepts like the Shadow DOM, Custom Elements, and templates. Also, Web Components have a smaller community and less documentation compared to famous frameworks like React, Angular, and Vue.

Side-by-Side Comparison

Web Components Benefits:

  • Native browser support can lead to efficient rendering and reduced overhead.
  • Smaller bundle sizes and native browser support can lead to faster load times.
  • Leverage native browser APIs, making them accessible to developers familiar with HTML, CSS, and JavaScript.
  • Native browser support means fewer dependencies and the potential for better performance.

Framework Components Benefits:

  • Frameworks like React and Angular provide specific optimizations (e.g., virtual DOM, change detection) that can improve performance for large, dynamic applications.
  • Frameworks often provide tools for optimizing bundle sizes and lazy loading components.
  • Extensive documentation, which makes it easier for developers to get started.
  • Rich ecosystem with extensive tooling, libraries, and community support.

Web Components Drawbacks:

  • Older browsers may require polyfills, which can add to the initial load time.
  • Steeper learning curve due to additional concepts like Shadow DOM and Custom Elements.
  • Smaller ecosystem and fewer community resources compared to popular frameworks.

Framework Components Drawbacks:

  • Framework-specific components can add overhead due to the framework’s runtime and additional libraries.
  • Requires familiarity with the framework’s conventions and APIs.
  • Tied to the framework, making it harder to switch to a different framework.

To summarize, the choice between Web Components and framework components depends on the specific need of your project or team, which can include cross-framework reusability, performance, and developer experience.

Conclusion

Web Components are the new standard for agnostic, interoperable, and reusable components. Although their base technologies need further upgrades and modifications to match the standards set by framework components, they deserve the title of “components.” Through a detailed comparative analysis, we’ve explored the strengths and weaknesses of Web Components and framework components, gaining insight into their differences. Along the way, we also uncovered useful workarounds for integrating Web Components into front-end frameworks for those interested in that approach.


Tuesday, March 18, 2025

How To Prevent WordPress SQL Injection Attacks

What makes WordPress websites such a prime target for hackers? This article delves into how WordPress SQL injection attacks work and shares strategies for removing and preventing them.

Did you know that your WordPress site could be a target for hackers right now? That’s right! Today, WordPress powers over 43% of all websites on the internet. That kind of popularity makes WordPress sites a big target for hackers.

One of the most harmful ways they attack is through an SQL injection. An SQL injection may break your website, steal data, and destroy your content. More than that, it can lock you out of your website! Sounds scary, right? But don’t worry, you can protect your site. That is what this article is about.

What Is SQL?

SQL stands for Structured Query Language. It is a way to talk to databases, which store and organize a lot of data, such as user details, posts, or comments on a website. SQL helps us ask the database for information or give it new data to store.

When writing an SQL query, you ask the database a question or give it a task. For example, if you want to see all users on your site, an SQL query can retrieve that list.

SQL is powerful and vital since all WordPress sites use databases to store content.


What Is An SQL Injection Attack?

WordPress SQL injection attacks try to gain access to your site’s database. An SQL injection (SQLi) lets hackers exploit a vulnerable SQL query to run a query they made. The attack occurs when a hacker tricks a database into running harmful SQL commands.

Hackers can send these commands via input fields on your site, such as those in login forms or search bars. If the website does not check input carefully, a command can grant access to the database. Imagine a hacker typing an SQL command instead of typing a username. It may fool the database and show private data such as passwords and emails. The attacker could use it to change or delete database data.

Your database holds all your user-generated data and content. It stores pages, posts, links, comments, and users. For the “bad” guys, it is a goldmine of valuable data.

SQL injections are dangerous as they let hackers steal data or take control of a website, and they can compromise a site very fast. A WordPress firewall helps prevent SQL injection attacks.

SQL Injections: Three Main Types

There are three main kinds of SQL injection attacks. Each type works differently, but they all try to fool the database. We’re going to look at every single type.

In-Band SQLi

This is perhaps the most common type of attack. A hacker sends the command and gets the results using the same communication channel. They make a request and get the answer right away.

There are two types of In-band SQLi injection attacks:

  • Error-based SQLi,
  • Union-based SQLi.

With error-based SQLi, the hacker causes the database to give an error message. This message may reveal crucial data, such as database structure and settings.

What about union-based SQLi attacks? The hacker uses the SQL UNION statement to combine their request with a standard query. It can give them access to other data stored in the database.

Inferential SQLi

With inferential SQLi, the hacker will not see the results at once. Instead, they send queries that yield “yes” or “no” answers. Hackers can reveal the database structure or data by how the site responds.

They do that in two common ways:

  • Boolean-based SQLi,
  • Time-based SQLi.

Through Boolean-based SQLi, the hacker sends queries that can only be “true” or “false.” For example, is this user ID more than 100? This allows hackers to gather more data about the site based on how it reacts.

In time-based SQLi, the hacker asks a query that makes the database take longer to reply if the answer is “yes.” They can figure out what they need to know due to the delay.

Out-of-band SQLi

Out-of-band SQLi is a less common but equally dangerous type of attack. Hackers use various ways to get results. Usually, they connect the database to a server they control.

The hacker does not see the results all at once. However, they can get the data sent somewhere else via email or a network connection. This method applies when the site blocks ordinary SQL injection methods.

Why Preventing SQL Injection Is Crucial

SQL injections are a giant risk for websites. They can lead to various harms — stolen data, website damage, legal issues, loss of trust, and more.

Hackers can steal data like usernames, passwords, and emails. They may cause damage by deleting and changing your data. Besides, it messes up your site structure, making it unusable.

Is your user data stolen? You might face legal troubles if your site handles sensitive data. People may lose trust in you if they see that your site got hacked. As a result, the reputation of your site can suffer.

Thus, it is so vital to prevent SQL injections before they occur.

11 Ways To Prevent WordPress SQL Injection Attacks

OK, so we know what SQL is and that WordPress relies on it. We also know that attackers take advantage of SQL vulnerabilities. I’ve collected 11 tips for keeping your WordPress site free of SQL injections. The tips limit your vulnerability and secure your site from SQL injection attacks.

1. Validate User Input

SQL injection attacks usually occur via forms or input fields on your site. It could be a login form, a search box, a contact form, or a comment section. If a hacker enters bad SQL commands into one of these fields, they may fool your site into running those commands, giving them access to your database.

Hence, always sanitize and validate all input data on your site. Users should not be able to submit data if it does not follow a specific format. The easiest way to enforce this is to use a plugin like Formidable Forms, an advanced form builder. That said, WordPress has many built-in functions to sanitize and validate input on your own, including sanitize_text_field(), sanitize_email(), and sanitize_url().

The validation cleans up user inputs before they get sent to your database. These functions strip out unwanted characters and ensure the data is safe to store.
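For instance, a minimal sketch of sanitizing and validating a contact-form submission with those built-in functions might look like this:

// Sanitize raw input before doing anything else with it
$name  = sanitize_text_field( $_POST['name'] ?? '' );
$email = sanitize_email( $_POST['email'] ?? '' );

// Validate: reject the submission if the email is not well-formed
if ( ! is_email( $email ) ) {
    wp_die( 'Please enter a valid email address.' );
}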

2. Avoid Dynamic SQL

Dynamic SQL allows you to create SQL statements on the fly at runtime. How does dynamic SQL work compared to static SQL? You can create flexible and general SQL queries adjusted to various conditions. As a result, dynamic SQL is typically slower than static SQL, as it demands runtime parsing.

Dynamic SQL can be more vulnerable to SQL injection attacks. It occurs when the bad guy alters a query by injecting evil SQL code. The database may respond and run this harmful code. As a result, the attacker can access data, corrupt it, or even hack your entire database.

How do you keep your WordPress site safe? Use prepared statements, stored procedures, or parameterized queries.
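In WordPress, parameterized queries typically mean $wpdb->prepare(), which escapes values through placeholders before the query runs. A small sketch:

global $wpdb;

$author_id = 42; // example value coming from user input

// %d and %s placeholders let prepare() safely escape the values
$titles = $wpdb->get_results(
    $wpdb->prepare(
        "SELECT post_title FROM {$wpdb->posts} WHERE post_author = %d AND post_status = %s",
        $author_id,
        'publish'
    )
);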

3. Regularly Update WordPress Themes And Plugins

Keeping WordPress and all plugins updated is the first step in keeping your site safe. Hackers often look for old software versions with known security issues.

There are regular security updates for WordPress, themes, and plugins. They fix security issues. You leave your site open to attacks as you ignore these updates.

To stay safe, set up automatic updates for minor WordPress versions. Check for theme and plugin updates often. Only use trusted plugins from the official WordPress source or well-known developers.

By updating often, you close many ways hackers could attack.

4. Add A WordPress Firewall

A firewall is one of the best ways to keep your WordPress website safe. It is a shield for your WordPress site and a security guard that checks all incoming traffic. The firewall decides who can enter your site and who gets blocked.

There are five main types of WordPress firewalls:

  • Plugin-based firewalls,
  • Web application firewalls,
  • Cloud-based firewalls,
  • DNS-level firewalls,
  • Application-level firewalls.

Plugin-based firewalls are installed on your WordPress site and work from within your website to block bad traffic. Web application firewalls filter, check, and block traffic to and from a web service, detecting and defending against the risky security flaws most common in web traffic. Cloud-based firewalls work from outside your site and block bad traffic before it even reaches your site. DNS-level firewalls route your site traffic through their cloud proxy servers, letting only real traffic reach your web server. Finally, application-level firewalls check the traffic once it reaches your server, before most of the WordPress scripts load.

Stable security plugins like Sucuri and Wordfence can also act as firewalls.

5. Hide Your WordPress Version

Older WordPress versions display the WordPress version in the admin footer. It’s not always a bad thing to show your version of WordPress, but revealing it does provide ammo to hackers who want to exploit vulnerabilities in outdated WordPress versions.

Are you using an older WordPress version? You can still hide your WordPress version:

  • With a security plugin such as Sucuri or Wordfence to clear the version number or
  • By adding a little bit of code to your functions.php file.
// Return an empty string instead of the version number
function hide_wordpress_version() {
  return '';
}

// 'the_generator' filters the version info WordPress prints
// in the page head and RSS feeds
add_filter('the_generator', 'hide_wordpress_version');

This code stops your WordPress version number from showing in the page’s head section and RSS feeds. It adds a small but helpful layer of security by making your version more difficult for hackers to detect.

6. Make Custom Database Error Notices

Bad guys can see how your database is set up via error notices. To stop that, create a custom database error notice for users to see. Hackers will find it harder to detect weak spots on your site when you hide error details. The site will stay much safer when you show less data on the front end.

To do that, copy and paste the code into a new db-error.php file. Jeff Starr has a classic article on the topic from 2009 with an example:

<?php // Custom WordPress Database Error Page
  header('HTTP/1.1 503 Service Temporarily Unavailable');
  header('Status: 503 Service Temporarily Unavailable');
  header('Retry-After: 600'); // 600 seconds = 10 minutes
    
// If you want to send an email to yourself upon an error
// mail("your@email.com", "Database Error", "There is a problem with the database!", "From: Db Error Watching");
?>  
<!DOCTYPE HTML>
<html>
  <head>
    <title>Database Error</title>
    <style>
      body { padding: 50px; background: #04A9EA; color: #fff; font-size: 30px; }
      .box { display: flex; align-items: center; justify-content: center; }
    </style>
</head>

  <body>
    <div class="box">
      <h1>Something went wrong</h1>
    </div>
  </body>
</html>

Now save the file in the root of your /wp-content/ folder for it to take effect.

7. Set Access And Permission Limits For User Roles

Assign only the permissions that each role demands to do its tasks. For example, Editors may not need access to the WordPress database or plugin settings. Improve site security by giving only the admin role full dashboard access. Limiting access to features for fewer roles reduces the odds of an SQL injection attack.

8. Enable Two-factor Authentication

A great way to protect your WordPress site is to apply two-factor authentication (2FA). Why? Since it adds an extra layer of security to your login page. Even if a hacker cracks your password, they still won’t be able to log in without getting access to the 2FA code.

Setting up 2FA on WordPress goes like this:

  1. Install a two-factor authentication plugin.
    Google Authenticator by miniOrange, Two-Factor, and WP 2FA by Melapress are good options.
  2. Pick your authentication method.
    The plugins often have three choices: SMS codes, authentication apps, or security keys.
  3. Link your account.
    Are you using Google Authenticator? Start and scan the QR code inside the plugin settings to connect it. If you use SMS, enter your phone number and get codes via text.
  4. Test it.
    Log out of WordPress and try to log in again. First, enter your username and password as always. Second, you complete the 2FA step and type in the code you receive via SMS or email.
  5. Enable backup codes (optional).
    Some plugins let you generate backup codes. Save these in a safe spot in case you lose access to your phone or email.

9. Delete All Unneeded Database Functions

Make sure to erase tables you no longer use and delete junk or unapproved comments. Your database will be more resistant to hackers who try to exploit sensitive data.

10. Monitor Your Site For Unusual Activity

Watch for unusual activity on your site. You can check for actions like many failed login attempts or strange traffic spikes. Security plugins such as Wordfence or Sucuri alert you when something seems odd. That helps to catch issues before they get worse.

11. Backup Your Site Regularly

Running regular backups is crucial. With a backup, you can quickly restore your site to its original state if it gets hacked. You want to do this anytime you execute a significant update on your site, including theme and plugin updates.

Create a backup plan that suits your needs. For example, if you publish new content every day, then it may be a good idea to back up your database and files daily.

Many security plugins offer automated backups. Of course, you can also use backup plugins like UpdraftPlus or Solid Security. You should store backup copies in various locations, such as Dropbox and Google Drive. It will give you peace of mind.

How To Remove SQL Injection From Your Site

Let’s say you are already under attack and are dealing with an active SQL injection on your site. It’s not like any of the preventative measures we’ve covered will help all that much. Here’s what you can do to fight back and defend your site:

  • Check your database for changes. Look for strange entries in user accounts, content, or plugin settings.
  • Erase evil code. Scan your site with a security plugin like Wordfence or Sucuri to find and erase harmful code.
  • Restore a clean backup. Is the damage vast? Restoring your site from an existing backup could be the best option.
  • Change all passwords. Alter your passwords for the WordPress admin, the database, and the hosting account.
  • Harden your site security. After cleaning your site, take the 11 steps we covered earlier to prevent future attacks.

Conclusion

Hackers love weak sites. They look for easy ways to break in, steal data, and cause harm. One of the tricks they often use is SQL injection. If they find a way in, they can steal private data, alter your content, or even take over your site. That’s bad news both for you and your visitors.

But here is the good news: You can stop them! It is possible to block these attacks before they happen by taking the correct steps. And you don’t need to be a tech expert.

Many people ignore website security until it’s too late. They think, “Why would a hacker target my site?” But hackers don’t attack only big sites. They attack any site with weak security. So, even small blogs and new websites are in danger. Once a hacker gets in, this person can cause you lots of damage. Fixing a hacked site takes time, effort, and money. But stopping an attack before it happens? That’s much easier.

Hackers don’t sit and wait, so why should you? Thousands of sites get attacked daily, so don’t let yours be the next one. Update your site, add a firewall, enable 2FA, and check your security settings. These small steps can help prevent giant issues in the future.

Your site needs protection against the bad guys. You have worked hard to build it. Never neglect to update and protect it. After that, your site will be safer and sounder.

Monday, March 17, 2025

How To Build Confidence In Your UX Work

 UX initiatives are often seen as a disruption rather than a means to solving existing problems in an organization. In this post, we’ll explore how you can build trust for your UX work, gain support, and make a noticeable impact.

When I start any UX project, typically, there is very little confidence in the successful outcome of my UX initiatives. In fact, there is quite a lot of reluctance and hesitation, especially from teams that have been burnt by empty promises and poor delivery in the past.

Good UX has a huge impact on business. But often, we need to build up confidence in our upcoming UX projects. For me, an effective way to do that is to address critical bottlenecks and uncover hidden deficiencies — the ones that affect the people I’ll be working with.

Let’s take a closer look at what this can look like.

UX Doesn’t Disrupt, It Solves Problems

Bottlenecks are usually the most disruptive part of any company. Almost every team, every unit, and every department has one. It’s often well-known by employees as they complain about it, but it rarely finds its way to senior management as they are detached from daily operations.

The Iceberg of Ignorance: Sidney Yoshida discovered that leadership is usually unaware of the organization’s real problems.

The bottleneck can be the only senior developer on the team, a broken legacy tool, or a confusing flow that throws errors left and right — there’s always a bottleneck, and it’s usually the reason for long waiting times, delayed delivery, and cutting corners in all the wrong places.

We might not be able to fix the bottleneck. But for a smooth flow of work, we need to ensure that non-constraint resources don’t produce more than the constraint can handle. All processes and initiatives must be aligned to support and maximize the efficiency of the constraint.

So before doing any UX work, look out for things that slow down the organization. Show that it’s not UX work that disrupts work, but it’s internal disruptions that UX can help with. And once you’ve delivered even a tiny bit of value, you might be surprised how quickly people will want to see more of what you have in store for them.

The Work Is Never Just “The Work”

Meetings, reviews, experimentation, pitching, deployment, support, updates, fixes — unplanned work blocks other work from being completed. Exposing the root causes of unplanned work and finding critical bottlenecks that slow down delivery is not only the first step we need to take when we want to improve existing workflows, but it is also a good starting point for showing the value of UX.

Why it’s never just the work.
The work is never just “the work.” In every project, as well as before and after it, there is a lot of invisible, and often unplanned, work going on.

To learn more about the points that create friction in people’s day-to-day work, set up 1:1s with the team and ask them what slows them down. Find a problem that affects everyone. Perhaps too much work in progress is causing late delivery and low quality? Or are lengthy meetings stealing precious time?

One frequently overlooked detail is that we can’t manage work that is invisible. That’s why it is so important to visualize the work first. Once we know the bottleneck, we can suggest ways to improve it: introducing 20% idle time if the workload is too high, for example, or making meetings slightly shorter to make room for other work.

The Theory Of Constraints

The idea that the work is never just “the work” is deeply connected to the Theory of Constraints developed by Dr. Eliyahu M. Goldratt. It shows that any improvement made anywhere other than at the bottleneck is an illusion.

Any improvement made after the bottleneck is useless because that part of the system will always remain starved, waiting for work from the bottleneck. And any improvement made before the bottleneck just results in more work piling up at the bottleneck.

UX Strategy Components
Components of UX Strategy: it’s difficult to build confidence in your UX work without preparing a proper UX strategy ahead of time.

Wait Time = Busy ÷ Idle

To improve flow, sometimes we need to freeze the work and bring focus to one single project. Just as important as throttling the release of work is managing the handoffs. The wait time for a given resource is the percentage of time that the resource is busy divided by the percentage of time it’s idle. If a resource is 50% utilized, the wait time is 50/50, or 1 unit.

If the resource is 90% utilized, the wait time is 90/10, or 9 times longer. And if it’s utilized 99% of the time, the wait time is 99/1, so 99 times longer than if that resource is 50% utilized. The critical part is to make wait times visible so you know when your work spends days sitting in someone’s queue.

The exact times don’t matter, but if a resource is busy 99% of the time, the wait time will explode.
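To make the arithmetic concrete, here is a minimal JavaScript sketch of the formula (the utilization figures are just the examples from above):

// Wait time factor = % busy / % idle (a relative factor, not absolute hours).
function waitTime(utilization) {
  const busy = utilization;       // e.g., 0.9 for a resource that is 90% busy
  const idle = 1 - utilization;   // the remaining slack
  return busy / idle;
}

[0.5, 0.9, 0.99].forEach((u) => {
  console.log(`${(u * 100).toFixed(0)}% utilized -> wait time factor ${waitTime(u).toFixed(0)}`);
});
// 50% utilized -> wait time factor 1
// 90% utilized -> wait time factor 9
// 99% utilized -> wait time factor 99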

Avoid 100% Occupation

Our goal is to maximize flow: that means exploiting the constraint while creating idle time for non-constraints, so that overall system performance is optimized.

One surprising finding for me was that any attempt to maximize the utilization of all resources — 100% occupation across all departments — can actually be counterproductive. As Goldratt noted, “An hour lost at a bottleneck is an hour out of the entire system. An hour saved at a non-bottleneck is worthless.”

Recommended Read: “The Phoenix Project”

The Phoenix Project
“The Phoenix Project” by Gene Kim, Kevin Behr, and George Spafford is a wonderful novel about the struggles of shipping. (Large preview)

I can only wholeheartedly recommend The Phoenix Project, an absolutely incredible book that goes into all the fine details of the Theory of Constraints described above.

It’s not a design book but a great book for designers who want to be more strategic about their work. It’s a delightful and very real read about the struggles of shipping (albeit on a more technical side).

Wrapping Up

People don’t like sudden changes and uncertainty, and UX work often disrupts their usual ways of working. Unsurprisingly, most people tend to block it by default. So before we introduce big changes, we need to get their support for our UX initiatives.

We need to build confidence and show them the value that UX work can have for their day-to-day work. To achieve that, we can work together with them, listening to the pain points they encounter in their workflows and to the things that slow them down.

Once we’ve uncovered internal disruptions, we can tackle these critical bottlenecks and suggest steps to make existing workflows more efficient. That’s the foundation for gaining their trust and showing them that UX work doesn’t disrupt; it’s here to solve problems.

How To Fix Largest Contentful Paint Issues With Subpart Analysis

 Struggling with slow Largest Contentful Paint (LCP)? Newly introduced by Google, LCP subparts help you pinpoint where page load delays come from. This data is now available in the Chrome UX Report, providing real-visitor insights to speed up your site and boost rankings. This article unpacks what LCP subparts are, what they mean for your website speed, and how you can measure them.

The Largest Contentful Paint (LCP) in Core Web Vitals measures how quickly a website loads from a visitor’s perspective. It looks at how long after opening a page the largest content element becomes visible. If your website is loading slowly, that’s bad for user experience and can also cause your site to rank lower in Google.

When trying to fix LCP issues, it’s not always clear what to focus on. Is the server too slow? Are images too big? Is the content not being displayed? Google has been working to address that recently by introducing LCP subparts, which tell you where page load delays are coming from. They’ve also added this data to the Chrome UX Report, allowing you to see what causes delays for real visitors on your website!

Let’s take a look at what the LCP subparts are, what they mean for your website speed, and how you can measure them.

The Four LCP Subparts

LCP subparts split the Largest Contentful Paint metric into four different components:

  1. Time to First Byte (TTFB): How quickly the server responds to the document request.
  2. Resource Load Delay: Time spent before the LCP image starts to download.
  3. Resource Load Duration: Time spent downloading the LCP image.
  4. Element Render Delay: Time between the LCP resource finishing loading and the element being displayed.

The resource timings only apply if the largest page element is an image or background image. For text elements, the Load Delay and Load Duration components are always zero.
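If you want to compute these subparts yourself in the browser, the sketch below follows the definitions above using standard performance APIs (it assumes the LCP element is an image loaded as a regular resource):

// Derive the four subparts from the LCP entry, the navigation entry,
// and the LCP image's resource timing entry.
new PerformanceObserver((entryList) => {
  const lcpEntry = entryList.getEntries().at(-1);
  const navEntry = performance.getEntriesByType('navigation')[0];
  const lcpResEntry = performance
    .getEntriesByType('resource')
    .find((e) => e.name === lcpEntry.url);

  const ttfb = navEntry.responseStart;
  // Clamp the boundaries so that text LCPs (no image resource)
  // report zero for both resource subparts.
  const loadStart = Math.max(ttfb, lcpResEntry ? lcpResEntry.requestStart : 0);
  const loadEnd = Math.max(loadStart, lcpResEntry ? lcpResEntry.responseEnd : 0);
  const renderTime = Math.max(loadEnd, lcpEntry.startTime);

  console.table({
    'Time to First Byte': ttfb,
    'Resource Load Delay': loadStart - ttfb,
    'Resource Load Duration': loadEnd - loadStart,
    'Element Render Delay': renderTime - loadEnd,
  });
}).observe({ type: 'largest-contentful-paint', buffered: true });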

How To Measure LCP Subparts

One way to measure how much each component contributes to the LCP score on your website is to use DebugBear’s website speed test. Expand the Largest Contentful Paint metric to see subparts and other details related to your LCP score.

Here, we can see that TTFB and image Load Duration together account for 78% of the overall LCP score. That tells us that these two components are the most impactful places to start optimizing.

LCP Subparts
(Large preview)

What’s happening during each of these stages? A network request waterfall can help us understand what resources are loading through each stage.

The LCP Image Discovery view filters the waterfall visualization to just the resources that are relevant to displaying the Largest Contentful Paint image. In this case, each of the first three stages contains one request, and the final stage finishes quickly with no new resources loaded. But that depends on your specific website and won’t always be the case.

LCP image discovery
(Large preview)

Time To First Byte

The first step to display the largest page element is fetching the document HTML. We recently published an article about how to improve the TTFB metric.

In this example, we can see that creating the server connection doesn’t take all that long. Most of the time is spent waiting for the server to generate the page HTML. So, to improve the TTFB, we need to speed up that process or cache the HTML so we can skip the HTML generation entirely.
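What caching looks like depends entirely on your stack. As one hypothetical sketch, a Node/Express-style server could let a CDN cache the generated HTML so that most visitors never wait for HTML generation at all (the route, cache durations, and renderPage() are made-up examples):

const express = require('express');
const app = express();

app.get('/article/:slug', async (req, res) => {
  // s-maxage=300: a CDN may serve this cached HTML for 5 minutes.
  // stale-while-revalidate: keep serving stale HTML while refreshing in the background.
  res.set('Cache-Control', 'public, s-maxage=300, stale-while-revalidate=3600');
  res.send(await renderPage(req.params.slug)); // renderPage() stands in for your HTML generation
});

app.listen(3000);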

Resource Load Delay

The “resource” we want to load is the LCP image. Ideally, we just have an <img> tag near the top of the HTML, and the browser finds it right away and starts loading it.

But sometimes, we get a Load Delay, as is the case here. Instead of loading the image directly, the page uses lazysizes, an image lazy loading library that only loads the LCP image once it has detected that it will appear in the viewport.

Part of the Load Delay is caused by having to download that JavaScript library. But the browser also needs to complete the page layout and start rendering content before the library will know that the image is in the viewport. After finishing the request, there’s a CPU task (in orange) that leads up to the First Contentful Paint milestone, when the page starts rendering. Only then does the library trigger the LCP image request.

Load Delay
(Large preview)

How do we optimize this? First of all, instead of using a lazy loading library, you can use the native loading="lazy" image attribute. That way, loading images no longer depends on first loading JavaScript code.

More importantly, the LCP image should not be lazily loaded at all. That way, the browser can start loading it as soon as the HTML code is ready. According to Google, you should aim to eliminate resource load delay entirely.
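As a rough sketch (the selectors, class names, and URLs are examples, not from this article), the fix is best expressed directly in the markup, though it can also be approximated at runtime:

// Ideally in the HTML itself, so the preload scanner finds the hero early:
//   <img src="/hero.jpg" fetchpriority="high" loading="eager">
//   <img src="/gallery-1.jpg" loading="lazy">
// As a runtime fallback, make sure the hero is never lazy-loaded:
const hero = document.querySelector('img.hero');
hero.loading = 'eager';        // never lazy-load the LCP image
hero.fetchPriority = 'high';   // hint the browser to fetch it first

// Below-the-fold images can rely on the native attribute, no library needed:
document.querySelectorAll('img.below-fold').forEach((img) => {
  img.loading = 'lazy';
});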

Resource Load Duration

The Load Duration subpart is probably the most straightforward: you need to download the LCP image before you can display it!

In this example, the image is loaded from the same domain as the HTML. That’s good because the browser doesn’t have to connect to a new server.

Other techniques you can use to reduce the load duration include serving appropriately sized images, using a modern format such as WebP or AVIF, compressing images more aggressively, and delivering them from a CDN close to your visitors.

Element Render Delay

The fourth and final LCP component, Render Delay, is often the most confusing. The resource has loaded, but for some reason, the browser isn’t ready to show it to the user yet!

Luckily, in the example we’ve been looking at so far, the LCP image appears quickly after it’s been loaded. One common reason for render delay is that the LCP element is not an image. In that case, the render delay is caused by render-blocking scripts and stylesheets. The text can only appear after these have loaded and the browser has completed the rendering process.

Render Delay
(Large preview)
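If render-blocking stylesheets are the culprit, one common mitigation is to load non-critical CSS in a way that doesn’t block rendering. Here is a minimal sketch with an example file name; the markup variant is shown in the comment:

// Markup version of the same idea:
//   <link rel="stylesheet" href="/non-critical.css" media="print" onload="this.media='all'">
// JavaScript version: stylesheets injected after parsing don't block the first render.
const css = document.createElement('link');
css.rel = 'stylesheet';
css.href = '/non-critical.css';
document.head.append(css);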

Another reason you might see render delay is when the website preloads the LCP image. Preloading is a good idea, as it practically eliminates any load delay and ensures the image is loaded early.

However, if the image finishes downloading before the page is ready to render, you’ll see an increase in render delay on the page. And that’s fine! You’ve improved your website speed overall, but after optimizing your image, you’ve uncovered a new bottleneck to focus on.

Render Delay with preloaded LCP image
(Large preview)
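For reference, a preload hint for the LCP image is typically a single line placed early in the HTML head (the URL here is an example):

// Markup version, placed early in the <head>:
//   <link rel="preload" as="image" href="/hero.jpg" fetchpriority="high">
// Scripted equivalent (less effective, since it only runs once this script executes):
const hint = document.createElement('link');
hint.rel = 'preload';
hint.as = 'image';
hint.href = '/hero.jpg';
hint.fetchPriority = 'high';
document.head.append(hint);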

LCP Subparts In Real User CrUX Data

Looking at the Largest Contentful Paint subparts in lab-based tests can provide a lot of insight into where you can optimize. But all too often, the LCP in the lab doesn’t match what’s happening for real users!

That’s why, in February 2025, Google started including subpart data in the CrUX data report. It’s not (yet?) included in PageSpeed Insights, but you can see those metrics in DebugBear’s “Web Vitals” tab.

Subpart data in the CrUX data report
(Large preview)

One super useful bit of info here is the LCP resource type: it tells you how many visitors saw the LCP element as a text element or an image.

Even for the same page, different visitors will see slightly different content. For example, different elements are visible based on the device size, or some visitors will see a cookie banner while others see the actual page content.

To make the data easier to interpret, Google only reports subpart data for images.

If the LCP element is usually text on the page, then the subparts info won’t be very helpful, as it won’t apply to most of your visitors.

But breaking down a text LCP is relatively easy: everything that’s not part of the TTFB is render delay.

Track Subparts On Your Website With Real User Monitoring

Lab data doesn’t always match what real users experience. CrUX data is high-level: it’s only reported for high-traffic pages and takes at least four weeks to fully update after a change has been rolled out.

That’s why a real-user monitoring tool like DebugBear comes in handy when fixing your LCP scores. You can track scores across all pages on your website over time and get dedicated dashboards for each LCP subpart.

Dashboards for each LCP subpart

You can also review specific visitor experiences, see what the LCP image was for them, inspect a request waterfall, and check LCP subpart timings. Sign up for a free trial.

DebugBear tool where you can review visitor experiences and check LCP subpart timings
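If you’d rather instrument this yourself, the open-source web-vitals library also exposes these subparts in its attribution build. Here is a minimal sketch, assuming the v4 field names and a hypothetical /analytics endpoint:

import { onLCP } from 'web-vitals/attribution';

onLCP(({ value, attribution }) => {
  // Each field mirrors one of the four subparts described above.
  navigator.sendBeacon('/analytics', JSON.stringify({
    lcp: value,
    ttfb: attribution.timeToFirstByte,
    loadDelay: attribution.resourceLoadDelay,
    loadDuration: attribution.resourceLoadDuration,
    renderDelay: attribution.elementRenderDelay,
  }));
});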

Conclusion

Having more granular metric data available for the Largest Contentful Paint gives web developers a big leg up when making their website faster.

Including subparts in CrUX provides new insight into how real visitors experience your website and can tell you whether the optimizations you’re considering would really be impactful.