
Friday, May 1, 2026

Identifying Necessary Transparency Moments In Agentic AI (Part 1)

 Designing for agentic AI requires attention to both the system’s behavior and the transparency of its actions. Between the black box and the data dump lies a more thoughtful approach.

This article explores how to map decision points and reveal the right moments to build trust through clarity, not noise.

Designing for autonomous agents presents a unique frustration. We hand a complex task to an AI, it vanishes for 30 seconds (or 30 minutes), and then it returns with a result. We stare at the screen. Did it work? Did it hallucinate? Did it check the compliance database or skip that step?

We typically respond to this anxiety with one of two extremes. We either keep the system a Black Box, hiding everything to maintain simplicity, or we panic and provide a Data Dump, streaming every log line and API call to the user.

Neither approach directly addresses the nuance needed to provide users with the ideal level of transparency.

The Black Box leaves users feeling powerless. The Data Dump creates notification blindness, destroying the efficiency the agent promised to provide. Users ignore the constant stream of information until something breaks, at which point they lack the context to fix it.

We need an organized way to find the balance. In my previous article, “Designing For Agentic AI”, we looked at interface elements that build trust, like showing the AI’s intended action beforehand (Intent Previews) and giving users control over how much the AI does on its own (Autonomy Dials). But knowing which elements to use is only part of the challenge. The harder question for designers is knowing when to use them.

How do you know which specific moment in a 30-second workflow requires an Intent Preview and which can be handled with a simple log entry?

This article provides a method to answer that question. We will walk through the Decision Node Audit, a process that gets designers and engineers in the same room to map backend logic to the user interface. You will learn how to pinpoint the exact moments a user needs an update on what the AI is doing. We will also cover an Impact/Risk matrix that helps you prioritize which decision nodes to display and which design pattern to pair with each.

Transparency Moments: A Case Study

Consider Meridian (not its real name), an insurance company that uses an agentic AI to process initial accident claims. The user uploads photos of vehicle damage and the police report. The agent then disappears for a minute before returning with a risk assessment and a proposed payout range.

Initially, Meridian’s interface simply showed “Calculating Claim Status.” Users grew frustrated. They had submitted several detailed documents and felt uncertain about whether the AI had even reviewed the police report, which contained mitigating circumstances. The Black Box created distrust.

To fix this, the design team conducted a Decision Node Audit. They found that the AI performed three distinct, probability-based steps, with numerous smaller steps embedded:

  • Image Analysis
    The agent compared the damage photos against a database of typical car crash scenarios to estimate the repair cost. This involved a confidence score.
  • Textual Review
    It scanned the police report for keywords that affect liability (e.g., fault, weather conditions, sobriety). This involved a probability assessment of legal standing.
  • Policy Cross Reference
    It matched the claim details against the user’s specific policy terms, searching for exceptions or coverage limits. This also involved probabilistic matching.

The team turned these steps into transparency moments. The interface sequence was updated to:

  • Assessing Damage Photos: Comparing against 500 vehicle impact profiles.
  • Reviewing Police Report: Analyzing liability keywords and legal precedent.
  • Verifying Policy Coverage: Checking for specific exclusions in your plan.

The system still took the same amount of time, but the explicit communication about the agent’s internal workings restored user confidence. Users understood that the AI was performing the complex task it was designed for, and they knew exactly where to focus their attention if the final assessment seemed inaccurate. This design choice transformed a moment of anxiety into a moment of connection with the user.

Applying the Impact/Risk Matrix: What We Chose to Hide

Most AI experiences have no shortage of events and decision nodes that could potentially be displayed during processing. One of the most critical outcomes of the audit was to decide what to keep invisible. In the Meridian example, the backend logs generated 50+ events per claim. We could have defaulted to displaying each event as it was processed. Instead, we applied the risk matrix to prune them:

  • Log Event: Pinging Server West-2 for redundancy check.
    • Filter Verdict: Hide. (Low Stakes, High Technicality).
  • Log Event: Comparing repair estimate to BlueBook value.
    • Filter Verdict: Show. (High Stakes, impacts user’s payout).

By cutting out the unnecessary details, the important information — like the coverage verification — carried more weight. The result was an interface that felt open without being noisy.
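To make the pruning concrete, here is a minimal sketch of that filter logic, assuming each backend event is tagged with stakes and a technicality flag. The tagging scheme is an illustration for the sketch, not Meridian’s actual implementation.

```typescript
// A minimal sketch of the event-pruning rule, assuming each backend event is
// tagged with stakes and a "technical plumbing" flag (an assumption, not
// Meridian's real schema).
type Stakes = "low" | "high";

interface AgentEvent {
  description: string; // e.g., "Pinging Server West-2 for redundancy check"
  stakes: Stakes;      // does this event affect the user's outcome (the payout)?
  technical: boolean;  // backend plumbing the user cannot act on?
}

// Show high-stakes events; hide low-stakes technical noise.
function shouldDisplay(event: AgentEvent): boolean {
  if (event.stakes === "high") return true;
  return !event.technical;
}

const log: AgentEvent[] = [
  { description: "Pinging Server West-2 for redundancy check", stakes: "low", technical: true },
  { description: "Comparing repair estimate to BlueBook value", stakes: "high", technical: false },
];

console.log(log.filter(shouldDisplay).map((e) => e.description));
// -> ["Comparing repair estimate to BlueBook value"]
```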

This approach uses the idea that people feel better about a service when they can see the work being done. By showing the specific steps (Assessing, Reviewing, Verifying), we changed a 30-second wait from a time of worry (“Is it broken?”) to a time of feeling like something valuable is being created (“It’s thinking”).

Let’s now take a closer look at how we can review the decision-making process in our products to identify key moments that require clear information.

The Decision Node Audit

Transparency fails when we treat it as a style choice rather than a functional requirement. We have a tendency to ask, “What should the UI look like?” before we ask, “What is the agent actually deciding?”

The Decision Node Audit is a straightforward way to make AI systems easier to understand. It works by carefully mapping out the system’s internal process. The main goal is to find and clearly define the exact moments where the system stops following its set rules and instead makes a choice based on chance or estimation. By mapping this structure, creators can show these points of uncertainty directly to the people using the system. This changes system updates from being vague statements to specific, reliable reports about how the AI reached its conclusion.

In addition to the insurance case study above, I recently worked with a team building a procurement agent. The system reviewed vendor contracts and flagged risks. Originally, the screen displayed a simple progress bar: “Reviewing contracts.” Users hated it. Our research indicated they felt anxious about the legal implications of a missing clause.

We fixed this by conducting a Decision Node Audit. I’ve included a step-by-step checklist for conducting this audit at the conclusion of this article.

We ran a session with the engineers and outlined how the system works. We identified “Decision Points” — moments where the AI had to choose between two good options.

In standard computer programs, the process is clear: if A happens, then B will always happen. In AI systems, the process is often based on chance. The AI thinks A is probably the best choice, but it might only be 65% certain.

In the contract system, we found a moment when the AI checked the liability terms against our company rules. It was rarely a perfect match. The AI had to decide if a 90% match was good enough. This was a key decision point.
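As a rough illustration, the sketch below models such a decision node: a similarity score, a threshold, and a user-facing status for each branch. The 0.9 cutoff and all names are assumptions for the sketch, not the team’s actual code.

```typescript
// An illustrative decision node: the agent accepts a clause match only above
// a confidence threshold and narrates the uncertain branch to the user.
interface ClauseMatch {
  similarity: number; // 0..1 similarity to the standard liability template
}

const ACCEPT_THRESHOLD = 0.9; // assumed cutoff for the sketch

function evaluateLiabilityClause(match: ClauseMatch): { accepted: boolean; status: string } {
  if (match.similarity >= ACCEPT_THRESHOLD) {
    return { accepted: true, status: "Liability clause matches the standard template." };
  }
  // The ambiguous branch is exactly the moment worth surfacing in the UI.
  return {
    accepted: false,
    status: "Liability clause varies from standard template. Analyzing risk level.",
  };
}

console.log(evaluateLiabilityClause({ similarity: 0.86 }).status);
```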

Figure 1: This diagram shows how to connect a hidden system decision based on probability (an Ambiguity Point) to a visible moment of explanation for the user (a Transparency Moment).

Once we identified this node, we exposed it to the user. Instead of “Reviewing contracts,” the interface updated to say: “Liability clause varies from standard template. Analyzing risk level.”

This specific update gave users confidence. They knew the agent checked the liability clause. They understood the reason for the delay and gained trust that the desired action was occurring on the back end. They also knew where to dig in deeper once the agent generated the contract.

To check how the AI makes decisions, you need to work closely with your engineers, product managers, business analysts, and key people who are making the choices (often hidden) that affect how the AI tool functions. Draw out the steps the tool takes. Mark every spot where the process changes direction because a probability is met. These are the places where you should focus on being more transparent.

As shown in Figure 2 below, the Decision Node Audit involves these steps:

  1. Get the team together: Bring in the product owners, business analysts, designers, key decision-makers, and the engineers who built the AI. For example:

    Think about a product team building an AI tool designed to review messy legal contracts. The team includes the UX designer, the product manager, the UX researcher, a practicing lawyer who acts as the subject-matter expert, and the backend engineer who wrote the text-analysis code.

  2. Draw the whole process: Document every step the AI takes, from the user’s first action to the final result.

    The team stands at a whiteboard and sketches the entire sequence for a key workflow that involves the AI searching for a liability clause in a complex contract. The lawyer uploads a fifty-page PDF → The system converts the document into readable text. → The AI scans the pages for liability clauses. → The user waits. → Moments or minutes later, the tool highlights the found paragraphs in yellow on the user interface. They do this for many other workflows that the tool accommodates as well.

  3. Find where things are unclear: Look at the process map for any spot where the AI compares options or inputs that don’t have one perfect match.

    The team looks at the whiteboard to spot the ambiguous steps. Converting an image to text follows strict rules. Finding a specific liability clause involves guesswork. Every firm writes these clauses differently, so the AI has to weigh multiple options and make a prediction instead of finding an exact word match.

  4. Identify the ‘best guess’ steps: For each unclear spot, check if the system uses a confidence score (for example, is it 85% sure?). These are the points where the AI makes a final choice.

    The system has to guess (give a probability) which paragraph(s) closely resemble a standard liability clause. It assigns a confidence score to its best guess. That guess is a decision node. The interface needs to tell the lawyer it is highlighting a potential match, rather than stating it found the definitive clause.

  5. Examine the choice: For each choice point, figure out the specific internal math or comparison being done (e.g., matching a part of a contract to a policy or comparing a picture of a broken car to a library of damaged car photos).

    The engineer explains that the system compares the various paragraphs against a database of standard liability clauses from past firm cases. It calculates a text similarity score to decide on a match based on probabilities.

  6. Write clear explanations: Create messages for the user that clearly describe the specific internal action happening when the AI makes a choice.

    The content designer writes a specific message for this exact moment. The text reads: “Comparing document text to standard firm clauses to identify potential liability risks.”

  7. Update the screen: Put these new, clear explanations into the user interface, replacing vague messages like “Reviewing contracts.”

    The design team removes the generic Processing PDF loading spinner. They insert the new explanation into a status bar located right above the document viewer while the AI thinks.

  8. Check for Trust: Make sure the new screen messages give users a simple reason for any wait time or result, which should make them feel more confident and trusting.

Figure 2: A product team maps the decision nodes of an AI legal tool to design transparent interface messages. (Comic generated using Google Gemini/Nano Banana)

The Impact/Risk Matrix

Once you look closely at the AI’s process, you’ll likely find many points where it makes a choice. An AI might make dozens of small choices for a single complex task. Showing them all creates too much unnecessary information. You need to group these choices.

You can use an Impact/Risk Matrix to sort these choices based on the type of action the AI is taking. Here is how the matrix plays out at its extremes:

First, look for low-stakes and low-impact decisions.

Low Stakes / Low Impact

  • Example: Organizing a file structure or renaming a document.
  • Transparency Need: Minimal. A subtle toast notification or a log entry suffices. Users can undo these actions easily.

Then identify the high-stakes and high-impact decisions.

High Stakes / High Impact

  • Example: Rejecting a loan application or executing a stock trade.
  • Transparency Need: High. These actions require Proof of Work. The system must demonstrate the rationale before or immediately as it acts.

Consider a financial trading bot that treats all buy/sell orders the same. It executes a $5 trade with the same opacity as a $50,000 trade. Users are left to wonder whether the tool even recognizes the stakes of the larger amount. They need the system to pause and show its work for the high-stakes trades. The solution is to introduce a Reviewing Logic state for any transaction exceeding a specific dollar amount, allowing the user to see the factors driving the decision before execution.
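A minimal sketch of that gate, assuming a hypothetical $10,000 review threshold (the threshold and names are invented for the example):

```typescript
// Sketch of the "Reviewing Logic" gate. The threshold is a stand-in.
type ExecutionMode = "auto-execute" | "reviewing-logic";

const REVIEW_THRESHOLD_USD = 10_000;

function chooseMode(orderValueUsd: number): ExecutionMode {
  // Small orders run silently; large ones pause and show their reasoning first.
  return orderValueUsd >= REVIEW_THRESHOLD_USD ? "reviewing-logic" : "auto-execute";
}

console.log(chooseMode(5));      // "auto-execute": the $5 trade proceeds quietly
console.log(chooseMode(50_000)); // "reviewing-logic": the $50,000 trade shows its work
```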

Mapping Nodes to Patterns: A Design Pattern Selection Rubric

Once you have identified your experience’s key decision nodes, you must decide which UI pattern applies to each one you’ll display. In Designing For Agentic AI, we introduced patterns like the Intent Preview (for high-stakes control) and the Action Audit (for retrospective safety). The decisive factor in choosing between them is reversibility.

We filter every decision node through the impact matrix in order to assign the correct pattern:

High Stakes & Irreversible: These nodes require an Intent Preview. Because the user cannot easily undo the action (e.g., permanently deleting a database), the transparency moment must happen before execution. The system must pause, explain its intent, and require confirmation.

High Stakes & Reversible: These nodes can rely on the Action Audit & Undo pattern. If the AI-powered sales agent moves a lead to a different pipeline, it can do so autonomously as long as it notifies the user and offers an immediate Undo button.

By strictly categorizing nodes this way, we avoid “alert fatigue.” We reserve the high-friction Intent Preview only for the truly irreversible moments, while relying on the Action Audit to maintain speed for everything else.


  • Low Impact + Reversible: Auto-Execute. UI: Passive Toast / Log. Ex: Renaming a file.
  • Low Impact + Irreversible: Confirm. UI: Simple Undo option. Ex: Archiving an email.
  • High Impact + Reversible: Review. UI: Notification + Review Trail. Ex: Sending a draft to a client.
  • High Impact + Irreversible: Intent Preview. UI: Modal / Explicit Permission. Ex: Deleting a server.

Table 1: The impact and reversibility matrix can then be used to map your moments of transparency to design patterns.
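Treated as data, Table 1 reduces to a small lookup. The sketch below uses the four pattern names from the table; expressing the matrix this way is an illustration, not a prescribed API.

```typescript
// Table 1 as a lookup from (impact, reversibility) to a UI pattern.
type Impact = "low" | "high";
type Reversibility = "reversible" | "irreversible";

const PATTERNS: Record<Impact, Record<Reversibility, string>> = {
  low: {
    reversible: "Auto-Execute: passive toast / log",
    irreversible: "Confirm: simple undo option",
  },
  high: {
    reversible: "Review: notification + review trail",
    irreversible: "Intent Preview: modal / explicit permission",
  },
};

function patternFor(impact: Impact, reversibility: Reversibility): string {
  return PATTERNS[impact][reversibility];
}

console.log(patternFor("high", "irreversible")); // deleting a server lands here
```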

Qualitative Validation: “The Wait, Why?” Test

You can identify potential nodes on a whiteboard, but you must validate them with human behavior. You need to verify whether your map matches the user’s mental model. I use a protocol called the “Wait, Why?” Test.

Ask a user to watch the agent complete a task. Instruct them to speak aloud. Whenever they ask a question like “Wait, why did it do that?”, “Is it stuck?”, or “Did it hear me?”, you mark a timestamp.

These questions signal user confusion. The user feels their control slipping away. For example, in a study for a healthcare scheduling assistant, users watched the agent book an appointment. The screen sat static for four seconds. Participants consistently asked, “Is it checking my calendar or the doctor’s?”

Figure 3: The Wait, Why? Protocol. A timeline illustrating how silence creates anxiety. By mapping the specific moment users ask ‘Is it stuck?’, designers can insert transparency exactly when it is needed. 

That question revealed a missing Transparency Moment. The system needed to split that four-second wait into two distinct steps: “Checking your availability” followed by “Syncing with provider schedule.”

This small change reduced users’ expressed levels of anxiety.

Transparency fails when it only describes a system action. The interface must connect the technical process to the user’s specific goal. A screen displaying “Checking your availability” falls flat because it lacks context. The user understands that the AI is looking at a calendar, but they do not know why.

We must pair the action with the outcome. Those same two steps work harder when they name the user’s goal: first “Checking your calendar to find open times,” then “Syncing with the provider’s schedule to secure your appointment.” This grounds the technical process in the user’s actual life.
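To make that concrete, here is a minimal status-narrator sketch that renders each goal-grounded step before the underlying work runs. The fixed delay stands in for real backend calls; the step copy comes from the example above.

```typescript
// A minimal status narrator: show the goal-grounded step, then do the work.
const steps = [
  "Checking your calendar to find open times",
  "Syncing with the provider's schedule to secure your appointment",
];

async function runWithStatus(render: (msg: string) => void): Promise<void> {
  for (const step of steps) {
    render(step); // update the UI before the step, not after it completes
    await new Promise((resolve) => setTimeout(resolve, 2000)); // placeholder work
  }
  render("Appointment booked");
}

runWithStatus((msg) => console.log(msg));
```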

Consider an AI managing inventory for a local cafe. The system encounters a supply shortage. An interface reading “contacting vendor” or “reviewing options” creates anxiety. The manager wonders if the system is canceling the order or buying an expensive alternative. A better approach is to explain the intended result: “Evaluating alternative suppliers to maintain your Friday delivery schedule.” This tells the user exactly what the AI is trying to achieve.

Operationalizing the Audit 

You have completed the Decision Node Audit and filtered your list through the Impact and Risk Matrix. You now have a list of essential moments for being transparent. Next, you need to create them in the UI. This step requires teamwork across different departments. You can’t design transparency by yourself using a design tool. You need to understand how the system works behind the scenes.

Start with a Logic Review. Meet with your lead system designer. Bring your map of decision nodes. You need to confirm that the system can actually share these states. I often find that the technical system doesn’t reveal the exact state I want to show. The engineer might say the system just returns a general “working” status. You must push for a detailed update. You need the system to send a specific notice when it switches from reading text to checking rules. Without that technical connection, your design is impossible to build.
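Structurally, the “detailed update” you are pushing for might look like the hedged sketch below: a backend status bus that emits named phases instead of a single generic “working” flag. The phase names are invented for the example.

```typescript
// Sketch of the technical hook: named phases emitted over a status bus.
type AgentPhase =
  | { kind: "reading-text"; page: number }
  | { kind: "checking-rules"; rule: string }
  | { kind: "done" };

class AgentStatusBus {
  private listeners: Array<(phase: AgentPhase) => void> = [];

  subscribe(listener: (phase: AgentPhase) => void): void {
    this.listeners.push(listener);
  }

  emit(phase: AgentPhase): void {
    this.listeners.forEach((listen) => listen(phase));
  }
}

const bus = new AgentStatusBus();
bus.subscribe((phase) => console.log("UI update:", phase.kind));
bus.emit({ kind: "reading-text", page: 12 });       // distinct state, not "working"
bus.emit({ kind: "checking-rules", rule: "liability-clause" });
bus.emit({ kind: "done" });
```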

Next, involve the Content Design team. You have the technical reason for the AI’s action, but you need a clear, human-friendly explanation. Engineers provide the underlying process, but content designers provide the way it’s communicated. Do not write these messages alone. A developer might write “Executing function 402,” which is technically correct but meaningless to the user. A designer might write “Thinking,” which is friendly but too vague. A content strategist finds the right middle ground. They create specific phrases, such as “Scanning for liability risks”, that show the AI is working without confusing the user.

Finally, test the transparency of your messages. Don’t wait until the final product is built to see if the text works. I conduct comparison tests on simple prototypes where the only thing that changes is the status message. For example, I show one group (Group A) a message that says “Verifying identity” and another group (Group B) a message that says “Checking government databases” (these are made-up examples, but you understand the point). Then I ask them which AI feels safer. You’ll often discover that certain words cause worry, while others build trust. You must treat the wording as something you need to test and prove effective.

How This Changes the Design Process

Conducting these audits has the potential to strengthen how a team works together. We stop handing off polished design files. We start using messy prototypes and shared spreadsheets. The core tool becomes a transparency matrix. Engineers and the content designers edit this spreadsheet together. They map the exact technical codes to the words the user will read.

Teams will experience friction during the logic review. Imagine a designer asking the engineer how the AI decides to decline a transaction submitted on an expense report. The engineer might say the backend only outputs a generic status code like “Error: Missing Data”. The designer states that this isn’t actionable information on the screen. The designer negotiates with the engineer to create a specific technical hook. The engineer writes a new rule so the system reports exactly what is missing, such as a missing receipt image.
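One way to picture that hook is as a transparency-matrix entry: specific technical codes on one side, the words the user reads on the other. The codes and copy below are made up for the sketch.

```typescript
// An illustrative transparency-matrix lookup: technical code -> user copy.
const TRANSPARENCY_MATRIX: Record<string, string> = {
  "ERR_MISSING_DATA.receipt_image":
    "This expense is missing a receipt image. Upload one to continue.",
  "ERR_MISSING_DATA.approver":
    "This report needs an approver before it can be submitted.",
};

function userMessage(code: string): string {
  // Fall back to honest, plain language rather than leaking raw codes.
  return TRANSPARENCY_MATRIX[code] ?? "Something is missing from this report.";
}

console.log(userMessage("ERR_MISSING_DATA.receipt_image"));
```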

Content designers act as translators during this phase. A developer might write a technically accurate string like “Calculating confidence threshold for vendor matching.” A content designer translates that string into a phrase that builds trust for a specific outcome. The strategist rewrites it as “Comparing local vendor prices to secure your Friday delivery.” The user understands the action and the result.

The entire cross-functional team sits in on user testing sessions. They watch a real person react to different status messages. Seeing a user panic because the screen says “Executing trade” forces the team to rethink their approach. The engineers and designers align on better wording. They change the text to “Verifying sufficient funds” before buying stock. Testing together guarantees the final interface serves both the system logic and the user’s peace of mind.

It does require time to incorporate these additional activities into the team’s calendar. However, the end result should be a team that communicates more openly, and users who have a better understanding of what their AI-powered tools are doing on their behalf (and why). This integrated approach is a cornerstone of designing truly trustworthy AI experiences.

Trust Is A Design Choice

We often view trust as an emotional byproduct of a good user experience. It is easier to view trust as a mechanical result of predictable communication.

We build trust by showing the right information at the right time. We destroy it by overwhelming the user or hiding the machinery completely.

Start with the Decision Node Audit, particularly for agentic AI tools and products. Find the moments where the system makes a judgment call. Map those moments to the Risk Matrix. If the stakes are high, open the box. Show the work.

In the next article, we will look at how to design these moments: how to write the copy, structure the UI, and handle the inevitable errors when the agent gets it wrong.

Appendix: The Decision Node Audit Checklist

Phase 1: Setup and Mapping

✅ Get the team together: Bring in the product owners, business analysts, designers, key decision-makers, and the engineers who built the AI.

Hint: You need the engineers to explain the actual backend logic. Do not attempt this step alone.

✅ Draw the whole process: Document every step the AI takes, from the user’s first action to the final result.

Hint: A physical whiteboard session often works best for drawing out these initial steps.

Phase 2: Locating the Hidden Logic

✅ Find where things are unclear: Look at the process map for any spot where the AI compares options or inputs that do not have one perfect match.

✅ Identify the best guess steps: For each unclear spot, check if the system uses a confidence score. For example, ask if the system is 85 percent sure. These are the points where the AI makes a final choice.

✅ Examine the choice: For each choice point, figure out the specific internal math or comparison being done. An example is matching a part of a contract to a policy. Another example involves comparing a picture of a broken car to a library of damaged car photos.

Phase 3: Creating the User Experience

✅ Write clear explanations: Create messages for the user that clearly describe the specific internal action happening when the AI makes a choice.

Hint: Ground your messages in concrete reality. If an AI books a meeting with a client at a local cafe, tell the user the system is checking the cafe reservation system.

✅ Update the screen: Put these new, clear explanations into the user interface. Replace vague messages like Reviewing contracts with your specific explanations.

✅ Check for Trust: Make sure the new screen messages give users a simple reason for any wait time or result. This should make them feel confident and trusting.

Hint: Test these messages with actual users to verify they understand the specific outcome being achieved.

Wednesday, April 29, 2026

The UX Designer’s Nightmare: When “Production-Ready” Becomes A Design Deliverable

 

In a rush to embrace AI, the industry is redefining what it means to be a UX designer, blurring the line between design and engineering. Carrie Webster explores what’s gained, what’s lost, and why designers need to remain the guardians of the user experience.

In early 2026, I noticed that the UX designer’s toolkit seemed to shift overnight. The industry standard “Should designers code?” debate was abruptly settled by the market, not through a consensus of our craft, but through the brute force of job requirements. If you browse LinkedIn today, you’ll notice a stark change: UX roles increasingly demand AI-augmented development, technical orchestration, and production-ready prototyping.

For many, including myself, this is the ultimate design job nightmare. We are being asked to deliver both the “vibe” and the “code” simultaneously, using AI agents to bridge a technical gap that previously took years of computer science knowledge and coding experience to cross. But as the industry rushes to meet these new expectations, they are discovering that AI-generated functional code is not always good code.

The LinkedIn Pressure Cooker: Role Creep In 2026

The job market is sending a clear signal. While traditional graphic design roles are expected to grow by only 3% through 2034, UX, UI, and Product Design roles are projected to grow by 16% over the same period.

However, this growth is increasingly tied to the rise of AI product development, where “design skills” have recently become the #1 most in-demand capability, even ahead of coding and cloud infrastructure. Companies building these platforms are no longer just looking for visual designers; they need professionals who can “translate technical capability into human-centered experiences.”

This creates a high-stakes environment for the UX designer. We are no longer just responsible for the interface; we are expected to understand the technical logic well enough to ensure that complex AI capabilities feel intuitive, safe, and useful for the human on the other side of the screen. Designers are being pushed toward a “design engineer” model, where we must bridge the gap between abstract AI logic and user-facing code.

A recent survey found that 73% of designers now view AI as a primary collaborator rather than just a tool. However, this “collaboration” often looks like “role creep.” Recruiters are often not just looking for someone who understands user empathy and information architecture — they want someone who can also prompt a React component into existence and push it to a repository!

This shift has created a competency gap.

As an experienced senior designer who has spent decades mastering the nuances of cognitive load, accessibility standards, and ethnographic research, I am suddenly finding myself being judged on my ability to debug a CSS Flexbox issue or manage a Git branch.

The nightmare isn’t the technology itself. It’s the reallocation of value.

Businesses are beginning to value the speed of output over the quality of the experience, fundamentally changing what it means to be a “successful” designer in 2026.

Tools that allow designers to switch from design to code. (Image source: Figma)

The Competence Trap: Two Job Skill Sets, One Average Result

There is potentially a very dangerous myth circulating in boardrooms that AI makes a designer “equal” to an engineer. This narrative suggests that because an LLM can generate a functional JavaScript event handler, the person prompting it doesn’t need to understand the underlying logic. In reality, attempting to master two disparate, deep fields simultaneously will most likely lead to being averagely competent at both.

The “Averagely Competent” Dilemma

For a senior UX designer to become a senior-level coder is like asking a master chef to also be a master plumber because “they both work in the kitchen.” You might get the water running, but you won’t know why the pipes are rattling.

  • The “cognitive offloading” risk.
    Research shows that while AI can speed up task completion, it often leads to a significant decrease in conceptual mastery. In a controlled study, participants using AI assistance scored 17% lower on comprehension tests than those who coded by hand.
  • The debugging gap.
    The largest performance gap between AI-reliant users and hand-coders is in debugging. When a designer uses AI to write code they don’t fully understand, they don’t have the ability to identify when and why it fails.
Using AI tools impedes coding skill formation. (Image source: Anthropic)

So, if a designer ships an AI-generated component that breaks during a high-traffic event and cannot manually trace the logic, they are no longer an expert. They are now a liability.

The High Cost Of Unoptimised Code 

Any experienced code engineer will tell you that creating code with AI without the right prompt leads to a lot of rework. Because most designers lack the technical foundation to audit the code the AI gives them, they are inadvertently shipping massive amounts of “Quality Debt”.

Common Issues In Designer-Generated AI Code

  • The security flaw
    Recent reports indicate that up to 92% of AI-generated codebases contain at least one critical vulnerability. A designer might see a functioning login form, unaware that it has an 86% failure rate in XSS defense, the security measures that prevent attackers from injecting malicious scripts into trusted websites.
  • The accessibility illusion
    AI often generates “functional” applications that lack semantic integrity. A designer might prompt a “beautiful and functional toggle switch,” but the AI may provide a non-semantic <div> that lacks keyboard focus and screen-reader compatibility, creating Accessibility Debt that is expensive to fix later (see the sketch after this list).
  • The performance penalty
    AI-generated code tends to be verbose. AI is linked to 4x more code duplication than human-written code. This verbosity slows down page loads, creates massive CSS files, and negatively impacts SEO. To a business, the task looks “done.” To a user with a slow connection or a screen reader, the site is a nightmare.
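To illustrate the toggle-switch gap, here is a small plain-DOM TypeScript sketch contrasting the non-semantic div with a real button. The markup details are illustrative, not a recommended component.

```typescript
// The "functional" div AI often produces: clickable with a mouse, but skipped
// by keyboard tabbing and announced as nothing by screen readers.
const fakeToggle = document.createElement("div");
fakeToggle.className = "toggle";
fakeToggle.onclick = () => fakeToggle.classList.toggle("on");

// The semantic version: focusable by default, announced as a toggle button,
// and operable with Space/Enter with no extra code.
const realToggle = document.createElement("button");
realToggle.setAttribute("aria-pressed", "false");
realToggle.textContent = "Notifications";
realToggle.addEventListener("click", () => {
  const pressed = realToggle.getAttribute("aria-pressed") === "true";
  realToggle.setAttribute("aria-pressed", String(!pressed));
});

document.body.append(fakeToggle, realToggle);
```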

Creating More Work, Not Less

The promise of AI was that designers could ship features without bothering the engineers. The reality has been the birth of a “Rework Tax” that is draining engineering resources across the industry.

  • Cleaning up
    Organisations are finding that while velocity increases, incidents per Pull Request are also rising by 23.5%. Some engineering teams now spend a significant portion of their week cleaning up “AI slop” delivered by design teams who skipped a rigorous review process.
  • The communication gap
    Only 69% of designers feel AI improves the quality of their work, compared to 82% of developers. This gap exists because “code that compiles” is not the same as “code that is maintainable.”

When a designer hands off AI-generated code that ignores a company’s internal naming conventions or management patterns, they aren’t helping the engineer; they are creating a puzzle that someone else has to solve later.

Typical issues that developers face with AI-generated code. (Image source: Netcorp)

The Solution 

We need to move away from the nightmare of the “Solo Full-Stack Designer” and toward a model of designer/coder collaboration.

The ideal reality:

  • The Partnership
    Instead of designers trying to be mediocre coders, they should work in a human-AI-human loop. A senior UX designer should work with an engineer to use AI; the designer creates prompts for intent, accessibility, and user flow, while the engineer creates prompts for architecture and performance.
  • Design systems as guardrails
    To prevent accessibility debt from spreading at scale, accessible components must be the default in your design system. AI should be used to feed these tokens into your UI, ensuring that even generated code stays within the “source of truth.”

Beyond The Prompt

The industry is currently in a state of “AI Infatuation,” but the pendulum will eventually swing back toward quality.

The UX designer’s nightmare ends when we stop trying to compete with AI tools at what they do best (generating syntax) and keep our focus on what they cannot do (understanding human complexity).

Businesses that prioritise “designer-shipped code” without engineering oversight will eventually face a reckoning of technical debt, security breaches, and accessibility lawsuits. The designers who thrive in 2026 and beyond will be those who refuse to be “prompt operators” and instead position themselves as the guardians of the user experience. This is the perfect outcome for experienced designers and for the industry.

Our value has always been our ability to advocate for the human on the other side of the screen. We must use AI to augment our design thinking, allowing us to test more ideas and iterate faster, but we must never let it replace the specialised engineering expertise that ensures our designs technically work for everyone.

Summary Checklist for UX Designers 

  • Work Together.
    Use AI-made code as a starting point to talk with your developers. Don’t use it as a shortcut to avoid working with them. Ask them to help you with prompts for code creation for the best outcomes.
  • Understand the “Why”.
    Never submit code you don’t understand. If you can’t explain how the AI-generated logic works, don’t include it in your work.
  • Build for Everyone.
    Good design is more than just looks. Use AI to check if your code works for people using screen readers or keyboards, not just to make things look pretty.

The Site-Search Paradox: Why The Big Box Always Wins

 

Success in modern UX isn’t about having the most content. It’s about having the most findable content. Yet even with more data and better tools than ever, internal search often fails, leaving users to rely on global search engines to find a single page on a local site. Why does the “Big Box” still win, and how can we bring users back?

In the early days of the web, the search bar was a luxury, added to a site once it became “too big” to navigate by clicking. We treated it like an index at the back of a book: a literal, alphabetical list of words that pointed to specific pages. If you typed the exact word the author used, you found what you needed. If you didn’t, you were met with a “0 Results Found” screen that felt like a digital dead end.

Twenty-five years later, we are still building search bars that act like 1990s index cards, even though the humans using them have been fundamentally rewired. Today, when a user lands on your site and can’t find what they need in the global navigation within seconds, they don’t try to learn your taxonomy. They head for the search box. But if that box fails them, and demands they use your specific brand vocabulary, or punishes them for a typo, they do something that should keep every UX designer awake at night. They leave your site, go to Google, and type site:yourwebsite.com [query]. Or, worse still, they just type in their query and end up on a competitor’s website. I personally use Google over a site’s search nearly every time.

This is the Site-Search Paradox. In an era where we have more data and better tools than ever, our internal search experiences are often so poor that users prefer to use a trillion-dollar global search engine to find a single page on a local site. As Information Architects and UX designers, we have to ask, why does the “Big Box” win, and how can we take our users back?

The “Syntax Tax” And The Death Of Exact Match

The primary reason site search fails is what I call the Syntax Tax. This is the cognitive load we place on users when we require them to guess the exact string of characters we’ve used in our database.

Research by Origin Growth on Search vs Navigate shows that roughly 50% of users go straight to the search bar upon landing on a site. For example, when a user types “sofa” into a furniture site that has categorised everything under “couches,” and the site returns nothing, the user doesn’t think, “Ah, I should try a synonym.” They think, “This site doesn’t have what I want.”

This is a failure of Information Architecture (IA). We’ve built our systems to match strings (literal sequences of letters) rather than things (the concepts behind the words). When we force users to match our internal vocabulary, we are taxing their brainpower.

Keyword Search vs. Semantic Search. (Image source: Gerrid Smith)

Why Google Wins: It’s Not Power, It’s Context 

It is easy to throw our hands up and say, “We can’t compete with Google’s engineering.” But Google’s success isn’t just about raw power; it’s about contextual understanding. While we often treat search as a technical utility, Google treats it as an IA challenge.

Data from the Baymard Institute reveals that 41% of e-commerce sites fail to support even basic symbols or abbreviations, and this often leads to users abandoning a site after a single failed search attempt. Google wins because it uses stemming and lemmatization — IA techniques that recognize “running” and “ran” are the same intent. Most internal searches are “blind” to this context, treating “Running Shoe” and “Running Shoes” as entirely different entities.

If your site search can’t handle a simple plural or a common misspelling, you are effectively charging your users a tax for being human.
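To see what stemming buys you, consider this deliberately naive sketch. Real engines rely on proper stemming or lemmatization libraries; the suffix-stripping below only illustrates the principle.

```typescript
// A deliberately naive stemmer, for illustration only. Do not ship this rule set.
function naiveStem(token: string): string {
  return token
    .toLowerCase()
    .replace(/(ing|ed)$/, "") // running -> runn, matched -> match
    .replace(/s$/, "");       // shoes -> shoe
}

function normalize(query: string): string[] {
  return query.split(/\s+/).filter(Boolean).map(naiveStem);
}

// "Running Shoe" and "running shoes" now produce identical index terms.
console.log(normalize("Running Shoe"));  // ["runn", "shoe"]
console.log(normalize("running shoes")); // ["runn", "shoe"]
```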

User Query Friction vs. User Flow. (Image source: Created with Gemini)

The UX Of “Maybe”: Designing For Probabilistic Results

In traditional IA, we think in binaries: A page is either in a category, or it isn’t. A search result is either a match or it isn’t. Modern search, which users now expect, is probabilistic. It deals in “confidence levels.”

According to Forrester, users who search are two to three times more likely to convert than those who don’t, provided the search works. And 80% of users on e-commerce sites exit a site due to poor search results.

As designers, we rarely design for the middle ground. We design a “Results Found” page and a “No Results” page. We miss the most important state: The “Did You Mean?” State. A well-designed search interface should provide “Fuzzy” matches. Instead of a cold “0 Results Found” screen, we should be using our metadata to say, “We didn’t find that in ‘Electronics,’ but we found 3 matches in ‘Accessories’.” By designing for “Maybe,” we can keep the user in the flow.
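As a sketch, the “Maybe” state is a third branch between “results” and “no results.” The category names and copy below are illustrative.

```typescript
// The "Maybe" state as a third branch between exact results and a dead end.
interface SearchOutcome {
  kind: "exact" | "maybe" | "none";
  message: string;
}

function resolve(exactHits: number, fuzzyHits: number, fuzzyCategory: string): SearchOutcome {
  if (exactHits > 0) {
    return { kind: "exact", message: `${exactHits} results found` };
  }
  if (fuzzyHits > 0) {
    // Keep the user in the flow instead of showing "0 Results Found".
    return {
      kind: "maybe",
      message: `We didn't find that in 'Electronics', but we found ${fuzzyHits} matches in '${fuzzyCategory}'.`,
    };
  }
  return { kind: "none", message: "No results. Try a broader term or browse categories." };
}

console.log(resolve(0, 3, "Accessories").message);
```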

Case Study: The Cost Of “Invisible” Content 

To understand why IA is the fuel for the search engine, we must look at how data is structured behind the scenes. In my 25 years of practice, I’ve seen that the “findability” of a page is directly tied to its structured metadata.

Consider a large-scale enterprise I worked with that had over 5,000 technical documents. Their internal search was returning irrelevant results because the “Title” tag of every document was the internal SKU number (e.g., “DOC-9928-X”) rather than the human-readable name.

By reviewing the search logs, we discovered that users were searching for “installation guide.” Because that phrase didn’t appear in the SKU-based title, the engine ignored the most relevant files. We implemented a Controlled Vocabulary, which was a set of standardised terms that mapped SKUs to human language. Within three months, the “Exit Rate” from the search page dropped by 40%. This wasn’t an algorithmic fix; it was an IA fix. It proves that a search engine is only as good as the map we give it.

The Internal Language Gap 

Throughout my two decades in UX, I’ve noticed a recurring theme: internal teams often suffer from “the curse of knowledge.” We become so immersed in our own corporate vocabulary, sometimes called business jargon, that we forget the user doesn’t speak our language.

I once worked with a financial institution that was frustrated by high call volumes to their support centre. Users were complaining they couldn’t find “loan payoff” information on the site. When we looked at the search logs, “loan payoff” was the #1 searched term that resulted in zero hits.

Why? Because the institution’s IA team had labelled every relevant page under the formal term “Loan Release.” To the bank, a “payoff” was a process, but a “Loan Release” was the legal document that was the “thing” in the database. Because the search engine was looking for literal character strings, it refused to connect the user’s desperate need with the company’s official solution.

This is where the IA professional must act as a translator. By simply adding “loan payoff” as a hidden metadata keyword to the Loan Release pages, we solved a multi-million dollar support problem. We didn’t need a faster server; we needed a more empathetic taxonomy.
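Mechanically, the fix can be as small as a synonym bridge consulted before the index lookup. The term pairs mirror the anecdotes above; the function itself is an invented illustration.

```typescript
// A synonym bridge: map the user's words to the index's internal labels.
const SYNONYM_BRIDGE: Record<string, string> = {
  "loan payoff": "loan release",
  "couch": "sofa",
};

function canonicalize(query: string): string {
  const q = query.trim().toLowerCase();
  return SYNONYM_BRIDGE[q] ?? q; // fall through unchanged when no bridge exists
}

console.log(canonicalize("Loan Payoff")); // "loan release": the term the index knows
```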

The 4-Step Site-Search Audit Framework

If you want to reclaim your search box from Google, you cannot simply “set it and forget it.” You must treat search as a living product. Here is the framework I use to audit and optimise search experiences:

Phase 1: The “Zero-result” Audit

Pull your search logs from the last 90 days. Filter for all queries that returned zero results. Group these into three buckets (a triage sketch follows the list):

  • True gaps
    Content the user wants that you simply don’t have (a signal for your content strategy team).
  • Synonym gaps
    Content you have, but described in words the user doesn’t use (e.g., “Sofa” vs “Couch”).
  • Format gaps
    The user is looking for a “video” or “PDF,” but your search only indexes HTML text.
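Here is a triage sketch for those three buckets, assuming your logs export query text and frequency. The detection rules are placeholders; a real audit would use your own synonym list and index metadata.

```typescript
// Bucket zero-result queries into true gaps, synonym gaps, and format gaps.
interface ZeroResultQuery {
  query: string;
  count: number;
}

type Bucket = "true-gap" | "synonym-gap" | "format-gap";

const KNOWN_SYNONYMS = ["couch", "settee"];        // terms we cover under other names
const FORMAT_TERMS = ["video", "pdf", "download"]; // formats the index ignores

function triage(entry: ZeroResultQuery): Bucket {
  const q = entry.query.toLowerCase();
  if (FORMAT_TERMS.some((t) => q.includes(t))) return "format-gap";
  if (KNOWN_SYNONYMS.some((s) => q.includes(s))) return "synonym-gap";
  return "true-gap";
}

const log: ZeroResultQuery[] = [
  { query: "couch", count: 412 },
  { query: "installation guide pdf", count: 97 },
  { query: "standing desk", count: 58 },
];

log.forEach((e) => console.log(`${e.query}: ${triage(e)}`));
```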

Phase 2: Query Intent Mapping

Analyse the top 50 most common queries. Are they Navigational (looking for a specific page), Informational (looking for “how to”), or Transactional (looking for a specific product)? Your search UI should look different for each. A navigational search should “Quick-Link” the user directly to the destination, bypassing the results page entirely.

Phase 3: The “Fuzzy” Matching Test

Intentionally mistype your top 10 products. Use plurals, common typos, and American vs. British English spellings (e.g., “Color” vs. “Colour”). If your search fails these tests, your engine lacks “stemming” support. This is a technical requirement you must advocate for to your engineering team.
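This test is easy to automate. The sketch below feeds misspelled variants to a stubbed search endpoint that only does exact matching, which is precisely the failure mode you are probing for. The endpoint and product names are stand-ins for your real search API.

```typescript
// Stub standing in for your real search endpoint: exact matching only.
async function searchApi(query: string): Promise<{ resultCount: number }> {
  const index = ["running shoes", "colour swatch"]; // pretend catalogue
  return { resultCount: index.includes(query.toLowerCase()) ? 1 : 0 };
}

const VARIANTS: Record<string, string[]> = {
  "running shoes": ["running shoe", "runnig shoes"], // singular + typo
  "colour swatch": ["color swatch"],                 // UK vs. US spelling
};

async function fuzzyAudit(): Promise<void> {
  for (const [product, variants] of Object.entries(VARIANTS)) {
    for (const variant of variants) {
      const { resultCount } = await searchApi(variant);
      if (resultCount === 0) {
        console.warn(`FAIL: "${variant}" (variant of "${product}") returned zero results`);
      }
    }
  }
}

fuzzyAudit();
```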

Phase 4: Scoping And Filtering UX

Look at your results page. Does it offer filters that actually make sense? If a user searches for “shoes,” they should see filters for Size and Colour. Generic filters can be as bad as no filters.

Reclaiming The Search Box: A Strategy For IA Professionals 

To stop the exodus to Google, we must move beyond the “Box” and look at the scaffolding.

Step A: Implement semantic scaffolding.
Don’t just return a list of links. Use your IA to provide context. If a user searches for a product, show them the product, but also show them the manual, the FAQs, and the related parts. This “associative” search mimics how the human brain works and how Google operates.

Step B: Stop being a librarian, start being a concierge.
A librarian tells you exactly where the book is on the shelf. A concierge listens to what you want to achieve and gives you a recommendation. Your search bar should use predictive text not just to complete words, but to suggest intentions.

Using a “Google-powered” search bar, as seen on the University of Chicago website, is essentially an admission that a site’s internal organisation has become too complex for its own navigation to handle. While it is a quick “fix” for massive institutions to ensure users find something, it is generally a poor choice for businesses with deep content.

Example of a university website using Google-powered search. (Source: University of Chicago)

By delegating the search to Google, you surrender the user experience to an outside algorithm. You lose the ability to promote specific products, you expose your users to third-party ads, and you train your customers to leave your ecosystem the moment they need help. For a business, search should be a curated conversation that guides a customer toward a goal, not a generic list of links that pushes them back to the open web.

Shows search results with useful options when there are no exact matches. Additional suggestions are provided, including a “Did you mean” feature to help connect users with similar items. (Image source: Crate & Barrel)

The Simple Search UX Checklist

Here is a final checklist for reference when you are building the search experience for your users. Work with your product team to ensure you are engaging with the right team members.

  • Kill the dead-end.
    Never just say “No results found.” If an exact match isn’t there, suggest a similar category, a popular product, or a way to contact support.
  • Fix “almost” matches.
    Make sure the search can handle plurals (like “plant” vs. “plants”) and common typos. Users shouldn’t be punished for a slip of the thumb.
  • Predict the user’s goal.
    Use an “auto-suggest” menu to show helpful actions (like “Track my order”) or categories, not just a list of words.
  • Talk like a human.
    Look at your search logs to see the words people actually use. If they type “couch” and you call it “sofa,” create a bridge in the background so they find what they need anyway.
  • Smart filtering.
    Only show filters that matter. If someone searches for “shoes,” show them size and color filters, not a generic list that applies to the whole site.
  • Show, don’t just list.
    Use small thumbnails and clear labels in the search results so users can see the difference between a product, a blog post, and a help article at a glance.
  • Speed is trust.
    If the search takes more than a second, use a loading animation. If it’s too slow, people will immediately go back to Google.
  • Check the “failure” logs.
    Once a month, look at what people searched for that returned zero results. This is your “to-do list” for fixing your site’s navigation.

Conclusion: The Search Bar Is A Conversation

The search box is the only place on your site where the user tells us exactly, in their own words, what they want. When we fail to understand those words, when we let the “Big Box” of Google do the work for us, we aren’t just losing a page view. We are losing the opportunity to prove that we understand our customers.

Success in modern UX isn’t about having the most content; it’s about having the most findable content. It’s time to stop taxing users for their syntax and start designing for their intent.

By moving from literal string matching to semantic understanding, and by supporting our search engines with robust, human-centered Information Architecture, we can finally close the gap.

Monday, April 27, 2026

A Practical Guide To Design Principles

 

by Vitaly.

We often see design principles as rigid guidelines that dictate design decisions. But actually, they are an incredible tool to rally the team around a shared purpose and document the values and beliefs that an organization embodies.

They align teams and inform decision-making. They also keep us afloat amidst all the hype, big assumptions, desire for faster delivery, and AI workslop. But how do we choose the right ones, and how do we get started? Let’s find out.

Real-World Design Principles

In times when we can generate any passable design and code within minutes, we need to decide better what’s worth designing and building — and what values we want our products to embody.

It’s similar to voice and tone. You might not design it intentionally, but then end users will define it for you. And so, without principles, many company initiatives are random, sporadic, ad-hoc — and feel vague, inconsistent, or simply dull to the outside world.

Design principles are guidelines and design considerations that designers apply with discretion — by default, without debating or discussing what has already been agreed upon.

One fantastic resource that I keep coming back to after all these years is Ben Brignell’s Principles.design. It has 230 pointers for design principles and methods, searchable and tagged, covering everything from language and infrastructure to hardware and organizations.

10 Principles Of Good Design 

There is no shortage of principles out there. But the good ones are more than just being visionary — they have a point of view, and they explain what we don’t do as much as what we do. They also explain what we stand for in the world — beyond profits, stock prices, and all the hype and noise around us.

10 legendary principles for good design, by Dieter Rams. Still relevant, after all these years.

Many years ago, I encountered Dieter Rams’ 10 principles of good design (see above), a very humble, practical and tangible overview of principles that were informing, shaping, and guarding his design work at Braun.

There are no visionary claims, and no big bold statements: just a clear overview of what we do, and where our ambition and care lie for the products we are designing. It’s honest, sincere, and in many ways beautifully humane.

Examples Of Design Principles 

There are plenty of wonderful examples that I keep close:

Design Principles In Design Systems

How To Establish Design Principles

Design principles can be personal, but usually they are committed to and shaped by the entire product team. Design principles aren’t just for designers. A user’s experience spans everything from performance to support to customer service, and ideally, participants would cover these areas as well.

In practice, though, establishing principles might feel incredibly challenging. They are abstract and fluffy and often ambiguous, and often very difficult to agree upon.

One of many workshop kits for a design principles workshop. 

You can get started with a simple 8-step workshop (inspired by Marcin Treder, Maria Meireles and Better):

  1. Pre-session Research
    Study how users speak about the products, what they appreciate, and the words they use.
  2. Get Into Principles Mode
    Invite 6–8 participants, ask them to choose their favorite object, and describe it in 3 words.
  3. Product Analogies
    Compare the product to tangible items (e.g., ‘a Porsche 911’ or ‘a Braun audio system’).
  4. Extract Attributes
    Individually, in silence, everyone writes 3–5 initial principles, which are then grouped by theme for review.
  5. Link Attributes To Research
    Link attributes to actual user pain points or desires, to make sure they are grounded in reality.
  6. Value Statements
    We write ‘We want X because of Y’ sentences that express the rationale behind our thinking.
  7. Move to Principles
    Remove analogies to create enduring rules that will guide our design process.
  8. Reality Check
    Search for both positive and negative examples in our products to see where principles are being met or ignored.
Voting for the most relevant sentences in keyword groups. From Better.

Useful Starter Kits For Principles Workshops

Wrapping Up 

Creating principles is only a small portion of the work; most work is about effectively sharing and embedding them. It’s difficult to get anywhere without finding ways to make design principles a default — by revisiting settings, templates, naming conventions, and output.

Principles help avoid endless discussions that often stem from personal preferences or taste. But design should not be a matter of taste; it must be guided by our goals and values. Design principles can help with just that.

Useful Resources

Why Senior Engineers Go Quiet — The Hidden 3-Week Warning Before Failure

 

Delivery intelligence for technology leaders · Issue #40 · Every Wednesday

Not 3 months. Not 3 weeks of runway. 3 weeks until something breaks.

Disclaimer: Details in this issue have been changed to protect client confidentiality. The situation and the lesson are real.


She had been the most engaged person in every stand-up for 7 months.

The lead engineer on our most complex workstream. Sharp, direct, the kind of person who would call out a bad architectural decision in front of the client without hesitation. She had identified three significant technical risks in the first quarter, all of which were caught before they reached the backlog. The team trusted her completely.

In week 29, I noticed she had not challenged anything in stand-up for 6 days. She was answering her 3 questions. She was present. She was doing her work. She had just stopped.

"Yesterday I finished the integration layer. Today I'm continuing the same. No blockers."

Eleven days later, the integration layer failed under load. It was not a simple bug. It was a structural decision that had been made in week 26 - one she had known was wrong, had decided not to raise, and had spent the following three weeks building around.

When I asked her why she had not flagged it, her answer was more honest than I had expected: "I raised two things in month 5 and both times I felt like I was being managed rather than heard. I decided it wasn't worth the energy."


Today's menu:

🚨  The problem: The specific silence pattern that precedes every major technical failure I have seen in 14 years and why it is always the most capable person who goes quiet first

💸  What it costs: Why the silence of senior engineers is the most accurate leading indicator of delivery risk available and why it never appears on a RAID log

✅  The fix: The 3-week intervention that catches the pattern before it becomes a crisis

⚠️  The silence pattern — what it is and why the best person goes first

Senior engineers go quiet in stand-ups for a specific, identifiable reason that is almost never the one delivery leaders assume. It is not burnout — burnout produces agitation before silence. It is not disengagement — disengagement produces lower quality work, not lower verbal frequency. And it is almost never satisfaction.

The most common cause of senior-engineer silence in a well-functioning team is a specific, rational cost-benefit calculation: the engineer has assessed that the cost of raising a concern — the social friction, the pushback, the feeling of being managed — is greater than the benefit of having the concern heard. And they have made this calculation based on evidence from the programme.

Senior engineers do not go quiet because they have nothing to say. They go quiet because they have learned that saying it is not worth it.

The reason the best engineer goes first is exactly their seniority. A junior developer raises concerns because they are still learning the social norms of the team. A senior engineer has already made the social calculation with much more precision — and has enough other things to focus on that withdrawing from verbal risk-raising is a rational conservation of energy.


🤔  Quiz

You lead a 12-person engineering team. Your most senior developer, historically engaged, opinionated, and reliable, has given a variation of "all good, no blockers" in stand-up for 8 consecutive days. Delivery metrics are normal. What is the right first action?

A)  Nothing — consistent delivery metrics are the signal that matters, not verbal frequency

B)  Ask them directly in the stand-up: "Are you sure there are no blockers? You've been very quiet this week."

C)  Have a private 15-minute conversation outside the stand-up: "I've noticed you have been less vocal recently, is there anything you're sitting on that you haven't raised?"

D)  Send a team-wide message encouraging everyone to raise concerns more actively

👉  Answer at the end of this issue

💡  The fix

Three interventions timed specifically to the 3-week window before the silence becomes a structural problem.

✅  Fix 1: Week 1 — The private, direct conversation

Within 5 days of noticing the silence, a 15-minute private conversation. Not a performance conversation. Not a welfare check. A specific, respectful inquiry:

"I've noticed you've been less vocal in stand-ups recently. I want to make sure I'm not missing something important. Is there anything you are sitting on technically or otherwise that you haven't raised?"

The silence after this question is important. Do not fill it. The engineer is doing a rapid cost-benefit recalculation: is this person going to hear what I say, or manage it? Your job in the first 90 seconds after they speak is to demonstrate, specifically and behaviourally, that you are doing the former.

If they say "nothing, all fine": thank them and leave the door open. Watch for a second week of silence. If that occurs, the calculation has been made and you need the next intervention.


✅  Fix 2: Week 2 — The concern-cost audit

If the private conversation has not surfaced the concern, the issue is structural: the cost of raising concerns in this team is too high. Run a 45-minute session with the senior engineers only, no junior team members, built around one question:

"Think of the last time you identified a concern on this programme and decided not to raise it. What made you decide not to raise it?"

Write every answer on a whiteboard. Do not defend against any of them. Do not explain or contextualise. Just write them down and read them back. The act of making the concern-cost visible, in a room, without defensiveness, is often enough to change the dynamic because it signals that this is now a problem you are taking seriously rather than managing.


✅  Fix 3: Week 3 — The structural fix: build concern-raising into the process, not the person

If the silence persists into week 3, the issue is not about the individual engineer. It is about the environment. Three structural changes that reduce the cost of raising concerns at the team level:

  • The end-of-day risk log. A standing Slack channel or equivalent where the only acceptable input is "I noticed X today and I am not sure whether it is a risk." No resolution required. No follow-up demanded. Just observation. The programme lead acknowledges every entry within 24 hours.
  • The pre-commitment check. Before any architectural or technical decision is finalised, one question is asked of the most senior person who disagreed with it: "What would need to happen for you to be proven right?" This gives dissent a legitimate structural role instead of requiring it to be raised as a confrontation.
  • The monthly technical retrospective. Separate from the delivery retrospective. Focused entirely on technical decisions: what are we building that we are not comfortable with? This is where the concerns that are too technical to raise in a delivery forum find a legitimate place.


🎯  What to do this week

This week, track one metric you have probably never tracked before:

  • For the three most senior engineers on your team: how many substantive observations (concerns, challenges, technical flags) have they made in stand-ups and ceremonies in the last 10 working days?
  • Not "did they attend." Not "are they performing." How many times did they say something that was not a status update?

If the answer is fewer than two per person per week, you may have a silence pattern forming. You now have 3 weeks before it becomes a structural problem.
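To make the arithmetic concrete, here is a minimal sketch of that tally in Python, assuming you jot one line per substantive observation after each stand-up. The names, notes, and threshold variable are invented for illustration; this is not a prescribed tool.

```python
from collections import Counter

# Hypothetical stand-up notes for the last 10 working days.
# Each entry records one substantive observation (a concern,
# challenge, or technical flag); plain status updates are not logged.
observations = [
    ("priya", "flagged schema migration risk"),
    ("priya", "challenged the caching decision"),
    ("marcus", "questioned the load-test assumptions"),
]

senior_engineers = ["priya", "marcus", "lena"]

# "Fewer than two per person per week" over a 10-working-day
# (two-week) window means fewer than 4 observations in total.
THRESHOLD = 4

counts = Counter(name for name, _ in observations)
for name in senior_engineers:
    n = counts[name]  # Counter returns 0 for names with no entries
    if n < THRESHOLD:
        print(f"{name}: {n} substantive observations in 10 days; possible silence pattern forming")
```

A spreadsheet or a notebook margin works just as well; the point is counting observations rather than attendance.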


Want the concern-cost audit questions — the exact 45-minute format?

Reply "silence" to this email and I'll send it directly to you.

🌐  Around the web this week

⚡  1 tool: TeamRetro — asynchronous retrospective tool with anonymous input. The specific use case: a standing "what am I not saying" prompt that team members can contribute to before the synchronous session. The anonymity removes the social cost before the concern reaches the room.

📊  1 number: Google's Project Aristotle research found that psychological safety, the belief that one will not be punished for raising concerns, is the single strongest predictor of team effectiveness across the 180 teams studied. It ranked above all technical, organisational, and individual competence factors. The silence pattern is what happens when psychological safety has already failed.

💬  1 quote: "What got you here won't get you there." - Marshall Goldsmith. For delivery leaders, what got you here (confidence, decisiveness, pattern-matching from experience) is precisely what creates the silence in your best people. Success and the conditions for future success are not the same thing.


👉  Quiz answer

C — private, direct, and non-accusatory.

Option A is the most common response and the most dangerous: it treats delivery metrics as the only signal that matters, which is exactly the assumption that allows this pattern to reach crisis. Option B creates a public moment that compounds the social cost already causing the silence. Option D is too diffuse to address the specific pattern and may actually increase the cost of raising concerns by making concern-raising feel scrutinised.

Option C works because it is private, it is observational rather than accusatory ("I've noticed" not "why aren't you"), and it specifically names the concern about unraised issues. It gives the engineer a low-cost way to surface what they are sitting on without requiring them to do it in front of the team. The phrase "is there anything you're sitting on" is important — it names the specific pattern you are looking for.


The lead engineer I described in the opening story is, as far as I know, thriving. She left the programme at the end of that engagement — not because of the incident, but because the engagement ended.

What she taught me by going quiet, by deciding three weeks before the failure that raising concerns was not worth the energy, is something I now treat as the most important signal in any programme I lead. Not the RAG status. Not the velocity chart. The voice of the most capable person in the room.

When that voice goes quiet, everything else is noise. The 3-week clock is already running.

Think of the most technically capable person on your current team. When did they last challenge something (a decision, an assumption, a plan) in a group setting? If you cannot remember, the clock may already be running.

Hit reply. I read everything.

Until next Wednesday,

Aman

www.amansingh.pro


If this issue named something you have been watching but could not describe, forward it to one person who needs to read it.