Affinity Designer (by Serif) is a powerful vector/raster hybrid, available via a one-time purchase rather than a subscription. It offers full control over vector nodes, smooth gradients, and precise export capabilities, making it ideal for logo creation and multi-output design workflows.
Sketch (by Sketch B.V.) is macOS-exclusive and built for UI/UX designers. It excels with its intuitive grid system, Auto Layout, and reusable components, streamlining responsive design workflows.
Reddit users note:
“Sketch is UI design software… but Affinity Designer is both Illustrator and Photoshop.”
“Affinity Designer offers advanced mirroring and export tools, while Sketch remains unbeatable for layout grids.”
Alternatives Worth Knowing
Paid, Feature-Rich Tools
Adobe Illustrator — The industry-standard vector tool. More expensive but unmatched if you're embedded in the Adobe ecosystem.
CorelDRAW Graphics Suite — A versatile design suite with strong typography and illustration tools.
Xara Designer Pro+ — Fast, user-friendly, with integrated vector and photo tools—ideal for all-in-one design flows.
Free & Open-Source
Inkscape — A top-tier vector editor that’s open source and supports SVG, powerful path tools, and scripting.
GIMP — A Photoshop-like free raster editor. Great for photo editing but less seamless for vector workflows.
Gravit Designer & Lunacy — Lightweight, cross-platform vector tools. Ideal for quick UI or graphic tasks, with Lunacy offering Sketch file compatibility.
Web-Based & Template-Focused
Canva — Drag-and-drop, template-driven design platform—great for social media visuals and brand content.
Vectr — Simplified browser vector editor; collaborative but limited in advanced features.
Kittl — A rising web design platform combining templates, vector tools, and AI—all in browser, growing fast with millions of users.
Boxy SVG — Minimalist SVG editor with filters and vector conversion; simple and web-friendly.
AI & Platform Highlights
According to TechRadar’s recent roundup:
Affinity Designer 2 is celebrated for powerful vector and raster tools, one-time pricing, and its value compared to subscription-based rivals.
Free picks include GIMP for editing flexibility and Canva for intuitive templated designs.
Learn why flip phones still matter in 2025, and how you can build and launch web apps for these tiny devices.
Flip phones aren’t dead. On the contrary, 200+ million non-smartphones are sold annually. That’s roughly equivalent to the number of iPhones sold in 2024. Even in the United States, millions of flip phones are sold each year. As network operators struggle to shut down 2G service,
new incentives are offered to encourage device upgrades that further
increase demand for budget-friendly flip phones. This is especially true
across South Asia and Africa, where an iPhone is unaffordable for the
vast majority of the population (it takes two months of work on an average Indian salary to afford the cheapest iPhone).
Like their “smart” counterparts, flip phones (technically, this category is called “feature phones”) are becoming increasingly capable. They
now offer features you’d expect from a smartphone, like 4G, WiFi,
Bluetooth, and the ability to run apps. If you are targeting users in
South Asia and Africa, or niches in Europe and North America, there are
flip phone app platforms like Cloud Phone and KaiOS.
Building for these platforms is similar to developing a Progressive Web
App (PWA), with distribution managed across several app stores.
Jargon Busting
Flip phones go by many names. Non-smartphones are jokingly called “dumb phones”. The technology industry calls this device category “feature phones”. Regionally, they are also known as button phones or basic mobiles in Europe, and keypad mobiles in India. They all share a few traits: they are budget phones with small screens and physical buttons.
Why Build Apps For Flip Phones?
It’s
a common misconception that people who use flip phones do not want
apps. In fact, many first-time internet users are eager to discover new
content and services. While this market isn’t as lucrative as Apple’s
App Store, there are a few reasons why you should build for flip phones.
Organic Growth
You do not need to pay to acquire flip phone users. Unlike Android or iOS, where the cost per install (CPI) averages around $2.5–3.3 according to GoGoChart, flip phone apps generate substantial organic downloads.
Brand Introduction
When flip phone users eventually upgrade to smartphones, they will search for the apps they are already familiar with. This will, in turn, generate more installs on the Google Play Store and, to a lesser extent, the Apple App Store.
Low Competition
There are ~1,700 KaiOS apps and fewer Cloud Phone widgets. Meanwhile, Google Play has over 1.55 million Android apps to choose from. It is much easier to stand out as one in a thousand than one in a million.
Technical Foundations
Flip phones could not always run apps. It wasn’t until the Ovi Store (later renamed the “Nokia Store”) launched in 2009, a year after Apple’s App Store, that flip phones got installable, third-party applications. At the time, apps were written for the fragmented Java 2 Platform, Micro Edition (J2ME) runtime, available only on select Nokia models, and often required integration with poorly-documented, proprietary packages like the Nokia UI API.
Today, flip phone platforms have rejected native runtimes in favor of standard web technologies in an effort to reduce barriers to entry and attract a wider pool of software developers. Apps running on modern flip phones are primarily written in languages many developers are familiar with — HTML, CSS, and JavaScript — and with them come a set of trade-offs and considerations.
Hardware
Flip
phones are affordable because they use low-end, often outdated,
hardware. On the bottom end are budget phones with a real-time operating
system (RTOS) running on chips like the Unisoc T107
with as little as 16MB of RAM. These phones typically support Opera
Mini and Cloud Phone. At the upper end is the recently-released TCL Flip 4 running KaiOS 4.0 on the Qualcomm Snapdragon 4s with 1GB of RAM.
While
it is difficult to accurately compare such different hardware, Apple’s
latest iPhone 16 Pro has 500x more memory (8GB RAM) and supports
download speeds up to 1,000x faster than a low-end flip phone (4G LTE
CAT-1).
Performance
You might think that flip phone apps are severely limited by the scarce resources of budget hardware. This is the case for KaiOS, since apps are executed on
the device. Code needs to be minified, thumbnails downsized, and
performance evaluated across a range of real devices. You cannot simply
test on your desktop with a small viewport.
However, as remote browsers,
both Cloud Phone and Opera Mini overcome hardware constraints by
offloading computationally expensive rendering to servers. This means performance is generally comparable to modern desktops, but can lead to a few quirky and, at times, unintuitive characteristics.
For
instance, if your app fetches a 1MB file to display a data table, this
does not consume 1MB of the user’s mobile data. Only changes to the
screen contents get streamed to the user, consuming bandwidth. On the
other hand, data is consumed by complex animations and page transitions,
because each frame is at least a partial screen refresh. Despite this
quirk, Opera Mini estimates it saves up to 90% of data compared to conventional browsers.
Security
Do not store sensitive data
in browser storage. This holds true for flip phones, where the security
concerns are similar to those of traditional web browsers. Although
apps cannot generally access data from other apps, KaiOS does not
encrypt client-side data. The implications are different for remote
browsers.
Despite
their staying power, these devices go largely ignored by nearly every
web development framework and library. Popular front-end web frameworks
like Bootstrap v5 categorize all screens below 576px as extra small. Another popular choice, Tailwind,
sets the smallest CSS breakpoint — a specific width where the layout
changes to accommodate an optimal viewing experience across different
devices — even higher at 40em (640px). Design industry experts like the Nielsen Norman Group suggest the smallest breakpoint
“is intended for mobile and generally is up to 500px.” Standards like
these advocate for a one-size-fits-all approach on small screens, but
some small design changes can make a big difference for new internet
users.
Small screens vary considerably in size, resolution, contrast, and brightness.
Shrinks poorly: Screenshots of A List Apart, Chrome for Developers, and MDN Web Docs on Cloud Phone.
Shrinks well: Screenshots of Rest of World, BBC News, and TED Talks on Cloud Phone.
Most websites render too large for flip phones.
They use fonts that are too big, graphics that are too detailed, and
sticky headers that occupy a quarter of the screen. To make matters
worse, many websites disable horizontal scrolling
by hiding content that overflows horizontally. This allows for smooth
scrolling on a touchscreen, but also makes it impossible to read text
that extends beyond the viewport on flip phones.
The table below
includes physical display size, resolution, and examples to better
understand the diversity of small screens across flip phones and budget
smartphones.
Note: Flip
phones have small screens typically between 1.8”–2.8” with a resolution
of 240x320 (QVGA) or 128x160 (QQVGA). For comparison, an Apple Watch
Series 10 has a 1.8” screen with a resolution of 416x496. By modern
standards, flip phone displays are small with low resolution, pixel
density, contrast, and brightness.
Develop For Small Screens
Add
custom, named breakpoints to your framework’s defaults, rather than
manually using media queries to override layout dimensions defined by
classes.
Bootstrap v5
Bootstrap defines a map, $grid-breakpoints, in the _variables.scss Sass file that contains the default breakpoints from SM (576px) to XXL (1400px). Use the map-merge() function to extend the default and add your own breakpoint.
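A sketch of what that might look like (the import paths and the custom “qvga” breakpoint name are assumptions). Note that map-merge() appends new keys at the end of the map, and Bootstrap asserts that breakpoint values ascend, so a small flip-phone breakpoint is simplest to add by defining the map in ascending order before importing _variables.scss:

```scss
@import "bootstrap/scss/functions";

// map-merge() works as-is when the new breakpoint is the largest:
// $grid-breakpoints: map-merge($grid-breakpoints, (xxxl: 1600px));

// For a small 240px QVGA breakpoint, define the map in ascending
// order; the !default definition in _variables.scss is then skipped.
$grid-breakpoints: (
  xs: 0,
  qvga: 240px,  // custom: QVGA flip phone screens
  sm: 576px,
  md: 768px,
  lg: 992px,
  xl: 1200px,
  xxl: 1400px
);

@import "bootstrap/scss/bootstrap";
```

Responsive utilities like `.d-qvga-none` and mixins like `media-breakpoint-up(qvga)` then pick up the custom breakpoint automatically.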
Successful
flip phone apps support keyboard navigation using the directional pad
(D-pad). This is the same navigation pattern as TV remotes: four arrow
keys (up, down, left, right) and the central button. To build a great
flip phone-optimized app, provide a navigation scheme
where the user can quickly learn how to navigate your app using these
limited controls. Ensure users can navigate to all visible controls on
the screen.
Navigating PodLP using d-pad (left) and a virtual cursor (right).
Although
some flip phone platforms support spatial navigation using an emulated
cursor, it is not universally available and creates a worse user
experience. Moreover, while apps that support keyboard navigation will
work with an emulated cursor, this isn’t necessarily true the other way
around. Opera Mini Native only offers a virtual cursor, Cloud Phone only
offers spatial navigation, and KaiOS supports both.
If you develop with keyboard accessibility in mind, supporting flip phone navigation is easy. As general guidelines, never remove a focus outline. Instead, override default styles and use box shadows
to match your app’s color scheme while fitting appropriately. Autofocus
on the first item in a sequence — list or grid — but be careful to
avoid keyboard traps. Finally, make sure that the lists scroll the newly-focused item completely into view.
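As a sketch, the focus styling described above might look like this in CSS (the selectors and colors are placeholders for your app’s own scheme):

```css
/* Replace the default outline with a box-shadow focus ring that
   matches the app's color scheme. Removing the outline is only safe
   because a clearly visible replacement is provided. */
button:focus,
a:focus,
[tabindex]:focus {
  outline: none;
  box-shadow: 0 0 0 3px #0a84ff; /* placeholder accent color */
}
```

Scrolling the newly-focused list item fully into view can then be handled with `element.scrollIntoView({ block: "nearest" })`.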
Don’t Make Users Type
If you have ever been frustrated typing a long message on your smartphone, only to have it accidentally erased, imagine that frustration when the message was typed using T9 on a flip phone. Despite advancements in predictive typing, it’s a chore to fill forms and compose even a single 180-character Tweet with just nine keys.
Whatever you do, don’t make flip phone users type!
Fortunately, it is easy to adapt designs to require less typing. Prefer numbers whenever possible. Allow users to register using their phone number (which is easy to type), send a PIN code or one-time password (OTP) that contains only numbers, and look up address details from a postal code. Each of these saves tremendous time and avoids frustration that often leads to user attrition.
Alternatively, integrate with single-sign-on (SSO)
providers to “Log in with Google,” so users do not have to retype
passwords that security teams require to be at least eight characters
long and contain a letter, number, and symbol. Just keep in mind that
many new internet users won’t have an email address. They may not know
how to access it, or their phone might not be able to access emails.
Finally, allow users to search by voice
when it is available. As difficult as it is typing English using T9,
it’s much harder typing a language like Tamil, which has over 90M
speakers across South India and Sri Lanka. Despite decades of
advancement, technologies like auto-complete and predictive typing are
seldom available for such languages. While imperfect, there are AI
models like Whisper Tamil that can perform speech-to-text, thanks to researchers at universities like the Speech Lab at IIT Madras.
Flip Phone Browsers And Operating Systems
Another challenge with developing web apps for flip phones is their fragmented ecosystem.
Various companies have used different approaches to allow websites and
apps to run on limited hardware. There are at least three major
web-based platforms that all operate differently:
Cloud Phone is the most recent solution, launched in December 2023, using a modern Puffin (Chromium) based remote browser that serves as an app store.
KaiOS, launched in 2016 using Firefox OS as its foundation, is a mobile operating system where the entire system is a web browser.
Opera Mini Native is by far the oldest, launched in 2005 as an ad-supported remote browser that still uses the decade-old, discontinued Presto engine.
Although both platforms are remote browsers, there are significant differences between Cloud Phone and Opera Mini that are not immediately apparent.
Left to right: Nokia 6300 4G (KaiOS), Viettel Sumo 4G V1S (Cloud Phone), and Itel Neo R60+ (Opera Mini).
Flip
phones have come a long way, but each platform supports different
capabilities. You may need to remove or scale back features based on
what is supported. It is best to target the lowest common denominator that is feasible for your application.
For
information-heavy news websites, wikis, or blogs, Opera Mini’s outdated
technology works well enough. For video streaming services, both Cloud
Phone and KaiOS work well. Conversely, remote browsers like Opera Mini
and Cloud Phone cannot handle high frame rates, so only KaiOS is
suitable for real-time interactive games. Just like with design, there
is no one-size-fits-all approach to flip phone development. Even though
all platforms are web-based, they require different tradeoffs.
Tiny Screens, Big Impact
The
flip phone market is growing, particularly for 4G-enabled models.
Reliance’s JioPhone is among the most successful models, selling more
than 135 million units
of its flagship KaiOS-enabled phone. The company plans to increase 4G
flip phone rollout steadily as it migrates India’s 250 million 2G users
to 4G and 5G.
Estimates
of the total active flip phone market size are difficult to come by,
and harder still to find a breakdown by platform. KaiOS claims to enable
“over 160 million phones worldwide,” while “over 300 million people use Opera Mini to stay connected.” Just a year after launch, Cloud Phone states that, “one million Cloud Phone users
already access the service from 90 countries.” By most estimates, there
are already hundreds of millions of web-enabled flip phone users eager
to discover new products and services.
Conclusion
Hundreds
of millions still rely on flip phones to stay connected. Yet, these
users go largely ignored even by products that target emerging markets. Modern software development often prioritizes the latest and greatest over finding ways to affordably serve more than 2.6 billion unconnected people. If you are not designing for small screens using keyboard navigation, you’re shutting out an entire population from accessing your service.
Flip phones still matter in 2025.
With ongoing network transitions, millions will upgrade, and millions
more will connect for the first time using 4G flip phones. This creates
an opportunity to put your app into the hands of the newly connected.
And thanks to modern remote browser technology, it is now easier than
ever to build and launch your app on flip phones without costly and
time-consuming optimizations to function on low-end hardware.
Designing a new AI feature? Where do you even begin? Here’s a simple,
practical overview with useful design patterns for better AI
experiences.
So you need to design a new AI feature for your product. How would you start? How do you design flows and interactions? And how do you ensure that the new feature doesn’t get abandoned by users after a few runs?
In this article, I’d love to share a very simple but systematic approach to how I think about designing AI experiences. Hopefully, it will help you get a bit more clarity about how to get started.
The Receding Role of AI Chat
One of the key recent shifts is a slow move away from traditional “chat-alike” AI interfaces. As Luke Wroblewski wrote, when agents can use multiple tools, call other agents and run in the background, users orchestrate AI work more — there’s a lot less chatting back and forth.
In fact, chatbots are rarely a great experience paradigm — mostly because the burden of articulating intent efficiently lies on the user. In practice, that’s remarkably difficult to do well and very time-consuming.
Chat doesn’t go away, of course, but it’s being complemented with task-oriented UIs
— temperature controls, knobs, sliders, buttons, semantic spreadsheets,
infinite canvases — with AI providing predefined options, presets, and
templates.
There, AI emphasizes the work, the plan, the tasks — the outcome, instead of the chat input. The results are experiences that truly amplify users’ work by sprinkling a bit of AI in places where it delivers real value.
To design better AI experiences, we need to study and shape five key areas.
Input UX: Expressing Intent
Conversational AI is a very slow way of helping users express and articulate their intent. Usability tests show
that users often get lost in editing, reviewing, typing, and re-typing.
It’s painfully slow, often taking 30-60 seconds for input.
As it
turns out, people have a hard time expressing their intent well. In
fact, instead of writing prompts manually, it’s a good idea to ask AI to write a prompt to feed itself.
With Flora AI, users can still write prompts, but they visualize their intent
with nodes by connecting various sources visually. Instead of
elaborately explaining to AI how we need the pipeline to work, we attach
nodes and commands on a canvas.
With Krea.ai, users can move abstract shapes (on the left) to explain their goal to AI and study the outcome (on the right).
With input for AI, being precise is slow and challenging. Instead, we can abstract away the object we want to manipulate, and give AI precise input by moving that abstracted object on a canvas. That’s what Krea.ai does.
In summary, we can minimize the burden of typing prompts manually — with AI-generated pre-prompts, prompt extensions, query builders, and also voice input.
Output UX: Displaying Outcomes
AI output doesn’t have to be merely plain text or a list of bullet points. It should drive people to insights faster. For example, we could visualize output by creating additional explanations based on the user’s goal and motivations.
Visualizing outcome through style lenses. By Amelia Wattenberger.
For example, Amelia Wattenberger visualized AI output for her text editor PenPal by adding style lenses through which to explore the content. The output could be visualized by sentence length and along scales like Sad–Happy and Concrete–Abstract.
The outcome could also be visualized on a map, which, of course, is expected for an AI GIS analyst. Also, users can access individual data layers, turn them on and off, and hence explore the data on the map.
We can also use forced ranking and prioritizations to suggest best options
and avoid choice paralysis — even if a user asks for top 10
recommendations. We can think about ways to present results as a data
table, or a dashboard, or a visualization on a map, or as a structured
JSON file, for example.
Refinement UX: Tweaking Output
Users often need to cherry-pick some bits from the AI output and bring them together in a new place — and often they need to expand on one section, synthesize bits from another section, or just refine the outcome to meet their needs.
Refinement is usually the most painful part of the experience,
with many fine details being left to users to explain elaborately. But
we can use good old-fashioned UI controls like knobs, sliders, buttons,
and so on to improve that experience, similar to how Adobe Firefly does
it (image above).
Presets living on the side in Elicit, an example by Maggie Appleton.
We can also use presets, bookmarks, and allow users to highlight specific parts of the outcome that they’d like to change — with contextual prompts acting on highlighted parts of the output, rather than global prompts.
With AI agents, we can now also allow users to initiate tasks that AI can perform on their behalf, such as scheduling events, planning, and deep research. We could also ask to sort results or filter them in a specific way.
Suggesting actions on Elicit, an example by Maggie Appleton.
But we can also add features to help users use AI output better — e.g., by visualizing it, making it shareable, allowing transformations between formats, or also posting to Slack, Jira, and so on.
AI Integration: Where Work Happens
Many AI interactions are locked within a specific product, but good AI experiences happen where the actual work happens. It would be quite unusual to expect a dedicated section for Autocomplete, for example, but we do so for AI features.
Dovetail AI integrates with plenty of platforms, from Jira and Notion to Slack and Teams, where the actual work happens.
The actual boost in productivity comes when users rely on AI as a co-pilot or little helper in the tools they use daily for work. It’s seamless integrations into Slack, Teams, Jira, GitHub, and so on — the tools that people use anyway. Dia Browser and Dovetail are great examples of it in action.
Wrapping Up
Along these five areas, we can explore ways to minimize the cost of interaction
with a textbox, and allow users to interact with the points of interest
directly, by tapping, clicking, selecting, highlighting, and
bookmarking.
Many products are obsessed with being AI-first. But you might be way better off by being AI-second
instead. The difference is that we focus on user needs and sprinkle a
bit of AI across customer journeys where it actually adds value.
And AI products don’t have to be AI-only. There is a lot of value in mapping into the mental models that people have adopted over the years and enhancing them with AI, similar to how we do it with browsers’ autofill, rather than leaving users in front of a frightening and omnipresent text box.
Event listeners are essential for interactivity in JavaScript, but they can quietly cause memory leaks if not removed properly. And what if your event listener needs parameters? That’s where things get interesting. This article shares which JavaScript features make handling parameters with event handlers both possible and well-supported.
JavaScript event listeners are very important, as they exist in
almost every web application that requires interactivity. As common as
they are, it is also essential for them to be managed properly.
Improperly managed event listeners can lead to memory leaks and can
sometimes cause performance issues in extreme cases.
Here’s the real problem: JavaScript event listeners are often not removed after they are added.
And when they are added, they do not require parameters most of the
time — except in rare cases, which makes them a little trickier to
handle.
A common scenario where you may need to use parameters
with event handlers is when you have a dynamic list of tasks, where each
task in the list has a “Delete” button attached to an event handler
that uses the task’s ID as a parameter to remove the task. In a
situation like this, it is a good idea to remove the event listener once
the task has been completed to ensure that the deleted element can be
successfully cleaned up, a process known as garbage collection.
A Common Mistake When Adding Event Listeners
A very common mistake when adding parameters to event handlers is calling the function with its parameters inside the addEventListener() method. This is what I mean:
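The snippet isn’t included in this excerpt, so here is a minimal sketch of the mistake, with a plain EventTarget standing in for a DOM button and a hypothetical handleClick(taskId) handler:

```javascript
// Stand-in for a DOM button; in the browser this would come from
// document.querySelector("button").
const button = new EventTarget();

let callCount = 0;

function handleClick(taskId) {
  callCount += 1;
  console.log(`Deleting task ${taskId}`);
}

// Mistake: handleClick(42) executes right here, and its return value
// (undefined) is what actually gets passed to addEventListener.
try {
  button.addEventListener("click", handleClick(42));
} catch (error) {
  // Some environments throw a TypeError here instead.
}

console.log(callCount); // 1 — the handler already ran before any click
```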
The browser responds to this line by immediately calling
the function, irrespective of whether or not the click event has
happened. In other words, the function is invoked right away instead of
being deferred, so it never fires when the click event actually occurs.
You may also receive the following console error in some cases:
Uncaught TypeError: Failed to execute 'addEventListener' on 'EventTarget': parameter 2 is not of type 'Object'.
This error makes sense because the second parameter of the addEventListener method can only accept a JavaScript function, an object with a handleEvent() method, or simply null. A quick and easy way to avoid this error is by changing the second parameter of the addEventListener method to an arrow or anonymous function.
The only hiccup with using arrow and anonymous functions is that they cannot be removed with the traditional removeEventListener() method; you will have to make use of AbortController, which may be overkill for simple cases. AbortController shines when you have multiple event listeners to remove at once.
For simple cases where you have just one or two event listeners to remove, the removeEventListener() method still proves useful. However, in order to make use of it, you’ll need to store a reference to the listener function.
Using Parameters With Event Handlers
There
are several ways to include parameters with event handlers. However,
for the purpose of this demonstration, we are going to constrain our
focus to the following two:
Option 1: Arrow And Anonymous Functions
Using arrow and anonymous functions is the fastest and easiest way to get the job done.
To add an event handler with parameters using arrow and anonymous functions, we call the function we’re going to create inside an arrow function attached to the event listener. After that, we can create the function with parameters:
function handleClick(event, param1, param2) {
console.log(param1, param2, event.type, event.target);
}
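Putting the two pieces together, a self-contained sketch might look like this (an EventTarget stands in for a DOM button, and the parameter values are placeholders):

```javascript
// Stand-in for a DOM button; in the browser this would come from
// document.querySelector("button").
const button = new EventTarget();

function handleClick(event, param1, param2) {
  console.log(param1, param2, event.type);
}

// The arrow function defers the call: handleClick only runs when
// the click event actually fires.
button.addEventListener("click", (event) => {
  handleClick(event, "first", "second");
});

// Simulate a click; in the browser, the user would trigger this.
button.dispatchEvent(new Event("click")); // logs: first second click
```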
Note that with this method, removing the event listener requires the AbortController. To remove the event listener, we create a new AbortController object and then retrieve the AbortSignal object from it:
const controller = new AbortController();
const { signal } = controller;
Next, we can pass the signal from the controller as an option in the addEventListener() method:
Now we can remove the event listener by calling AbortController.abort():
controller.abort()
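Bringing those pieces together, a self-contained sketch (an EventTarget stands in for a DOM element, and the logged values are placeholders):

```javascript
const controller = new AbortController();
const { signal } = controller;

const button = new EventTarget(); // stand-in for a DOM element

button.addEventListener(
  "click",
  (event) => console.log("clicked:", event.type),
  { signal } // the listener is detached when the signal aborts
);

controller.abort(); // removes the listener

// This dispatch logs nothing: the listener is already gone.
button.dispatchEvent(new Event("click"));
```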
Option 2: Closures
Closures
in JavaScript are another feature that can help us with event handlers.
Remember the mistake that produced a type error? That mistake can also
be corrected with closures. Specifically, with closures, a function can
access variables from its outer scope.
In other words, we can access the parameters we need in the event handler from the outer function:
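The snippet isn’t shown in this excerpt; a sketch of the closure pattern might be (an EventTarget stands in for a DOM element, and createHandler and its parameters are illustrative names):

```javascript
const button = new EventTarget(); // stand-in for a DOM element

// The outer function receives the parameters; the inner function it
// returns becomes the event handler and "closes over" them.
function createHandler(param1, param2) {
  return function (event) {
    console.log(param1, param2, event.type);
  };
}

// createHandler() is invoked here; the function it returns is what
// gets registered as the listener.
button.addEventListener("click", createHandler("first", "second"));

button.dispatchEvent(new Event("click")); // logs: first second click
```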
This establishes a function that returns another function. The outer function is called as the second argument of the addEventListener() method so that the inner function it returns is registered as the event handler. And with the power of closures, the parameters from the outer function remain available for use in the inner function.
Notice how the event
object is made available to the inner function. This is because the
inner function is what is being attached as the event handler. The event
object is passed to the function automatically because it’s the event
handler.
To remove the event listener, we can use the AbortController like we did before. However, this time, let’s see how we can do that using the removeEventListener() method instead.
In order for the removeEventListener method to work, a reference to the function returned by createHandler needs to be stored and passed to both the addEventListener and removeEventListener methods:
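A self-contained sketch of that pattern (again with an EventTarget standing in for a DOM element, and illustrative names):

```javascript
const button = new EventTarget(); // stand-in for a DOM element

function createHandler(param1, param2) {
  return (event) => console.log(param1, param2, event.type);
}

// Store the returned function so the exact same reference can be
// passed to both addEventListener and removeEventListener.
const handler = createHandler("first", "second");

button.addEventListener("click", handler);

// Later, when the listener is no longer needed:
button.removeEventListener("click", handler);

// This dispatch logs nothing: the listener was removed.
button.dispatchEvent(new Event("click"));
```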
It is good practice
to always remove event listeners whenever they are no longer needed to
prevent memory leaks. Most times, event handlers do not require
parameters; however, in rare cases, they do. Using JavaScript features
like closures, AbortController, and removeEventListener, handling parameters with event handlers is both possible and well-supported.
Ensuring your product communicates clearly to a global audience is not
just about localisation. Even for products that have a proper
localisation process, English often remains the default language for UI
and communications. This article focuses on how you can make English
content clear and inclusive for non-native users. It offers a
practical guide based on the author’s own experience as a non-native
English-speaking content designer, defining the user experience for
international companies.
A few years ago, I was in a design review at a fintech company,
polishing the expense management flows. It was a routine session where
we reviewed the logic behind content and design decisions.
While
looking over the statuses for submitted expenses, I noticed a label
saying ‘In approval’. I paused, re-read it again, and asked myself:
“Where is it? Are the results in? Where can I find them? Are they sending me to the app section called “Approval”?”
This tiny label made me question what was happening with my money, and this feeling of uncertainty was quite anxiety-inducing.
My
team, all native English speakers, did not flinch, even for a second,
and moved forward to discuss other parts of the flow. I was the only
non-native speaker in the room, and while the label made perfect sense
to them, it still felt off to me.
After a quick discussion, we landed on ‘Pending approval’ — the simplest and most widely recognised option internationally. More importantly, this wording makes it clear that there’s an approval process, and it hasn’t taken place yet. There’s no need to go anywhere to do it.
Some might call it nitpicking, but
that was exactly the moment I realised how invisible — yet powerful —
the non-native speaker’s perspective can be.
Native
speakers often write on instinct, which works much like autopilot. This
can often lead to overconfidence in content that, in reality, is too
culturally specific, vague, or complex. And that content may not be
understood by 3 in 4 people who read it.
If your team shares the same native language, content clarity remains assumed by default rather than proven through pressure testing.
The price for that is the accessibility of your product. A study by the National Library of Medicine found that US adults who were proficient in English but did not use it as their primary language were significantly less likely to be insured, even when provided with the same level of service as everyone else.
In
other words, they did not finish the process of securing a healthcare
provider — a process that’s vital to their well-being, in part, due to
unclear or inaccessible communication.
If people abandon the
process of getting something as vital as healthcare insurance, it’s easy
to imagine them dropping out during checkout, account setup, or app
onboarding.
Leaving
a large portion of your audience outside of the “clarity zone” limits
your global growth, and the issue will only expand over time.
Non-native
content designers, by contrast, do not write on autopilot. Because of
their experience learning English, they’re much more likely to tune into
nuances, complexity, and cultural exclusions that native speakers often
overlook. That’s the key to designing for everyone rather than 1 in 4.
Non-native Content Designers Make Your UX Global
Spotting The Clutter And Cognitive Load Issues
When
a non-native speaker has to pause, re-read something, or question the
meaning of what’s written, they quickly identify it as a friction point
in the user experience.
Why it’s important: Every
extra second users have to spend understanding your content makes them
more likely to abandon the task. This is a high price that companies pay
for not prioritising clarity.
Cognitive load is not just about complex sentences but also about speed. There’s plenty of research confirming that non-native speakers read more slowly than native speakers.
This is especially important when you work on the visibility of system
status — time-sensitive content that the user needs to scan and
understand quickly.
One example you can experience firsthand is an
ATM cycling through a series of updates and instructions. Even when they’re
quite similar, it’s overwhelming to realise you missed one because you
couldn’t finish reading it.
Rapid-fire updates like these increase frustration and the chances of errors.
An ATM showing six variations of content in under eight seconds.
Always Advocating For Plain English
They
tend to review and rewrite things more often to find the easiest way to
communicate the message. What a native speaker may consider clear
enough might be dense or difficult for a non-native to understand.
Why it’s important: Simple content better scales across countries, languages, and cultures.
Catching Culture-specific Assumptions And References
When
things do not make sense, non-native speakers challenge them. Beyond
idioms and other obvious traps, native speakers tend to assume that
their life experience is shared by most English-speaking users.
Cultural differences might even exist
within one globally shared language. Have you tried saying ‘soccer’
instead of ‘football’ in a conversation with someone from the UK? These
details may not only cause confusion but also upset people.
Why it’s important:
Making sure your product is free from culture-specific references makes
your product more inclusive and safeguards you from alienating your
users.
They Have Another Level Of Empathy For The Global Audience
Being
a non-native speaker themselves, they have experience with products
that do not speak clearly to them. They’ve been in the global user’s
shoes and know how it impacts the experience.
Why it’s important:
Empathy is a key driver towards design decisions that take into account
the diverse cultural and linguistic background of the users.
How Non-native Content Design Can Shape Your Approach To Design
Your
product won’t become better overnight simply because you read an
inspiring article telling you that you need to have a more diverse team.
I get it. So here are concrete changes that you can make in your design
workflows and hiring routines to make sure your content is accessible
globally.
Run Copy Reviews With Non-native Readers
When
you launch a new feature or product, it’s a standard practice to run QA
sessions to review visuals and interactions. When your team does not
include the non-native perspective, the content is usually overlooked
and considered fine as long as it’s grammatically correct.
I know,
having a dedicated localisation team to pressure-test your content for
clarity is a privilege, but you can always start small.
At one of my previous companies, we established a ‘clarity heroes council’
— a small team of non-native English speakers with diverse cultural and
linguistic backgrounds. During our reviews, they often asked questions
that surprised us and highlighted where clarity was missing:
What’s a “grace period”?
What will happen when I tap “settle the payment”?
These
questions flag potential problems and help you save both money and
reputation by avoiding thousands of customer service tickets.
Review Existing Flows For Clarity
Even
if your product does not have major releases regularly, it accumulates
small changes over time. They’re often plugged in as fixes or small
improvements, and can be easily overlooked from a QA perspective.
A
good start is a regular review of the flows that are critical to
your business metrics: onboarding, checkout, and so on. Block off some
time for your team quarterly or even annually, depending on your product
size, to come together and check whether your key content pieces serve
the global audience well.
Usually, a proper review is conducted by
a team: a product designer, a content designer, an engineer, a product
manager, and a researcher. The idea is to go over the flows, research
insights, and customer feedback together. For that, having a non-native
speaker on the audit task force will be essential.
If you’ve never done an audit before, try this template as it covers everything you need to start.
Make Sure Your Content Guidelines Are Global-ready
If
you haven’t done it already, make sure your voice & tone
documentation includes details about the level of English your company
is catering to.
This might mean working with the brand team to
find ways to make sure your brand voice comes through to all users
without sacrificing clarity and comprehension. Use examples and showcase
the difference between sounding smart or playful vs sounding clear.
Leaning
too much towards brand personality is where cultural differences
usually shine through. As a user, you might’ve seen it many times.
Here’s a banking app that wanted to seem relaxed and relatable by
introducing ‘Dang it’ as the only call-to-action on the screen.
Sometimes even bank apps accidentally sacrifice clarity to showcase brand personality.
However,
users with different linguistic backgrounds might not be familiar with
this expression. Worse, they might see it as an action, leaving them
unsure of what will actually happen after tapping it.
Considering
how much content is generated with AI today, your guidelines have to
account for both tone and clarity. This way, when you feed these
requirements to the AI, you’ll see the output that will not just be
grammatically correct but also easy to understand.
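To make this concrete, here is a minimal, hypothetical sketch of how guideline rules could be assembled into a single generation prompt. The rule wording and the `build_prompt` helper are illustrative assumptions, not a real production setup, and no particular AI vendor or API is implied:

```python
# Hypothetical sketch: combining voice & tone and clarity guidelines into a
# reusable prompt for whichever text-generation model your team uses.
# All rules and copy below are made-up examples, not real guidelines.

CLARITY_RULES = [
    "Write for readers whose first language is not English.",
    "Prefer plain wording over idioms: 'Pending approval', not 'In approval'.",
    "Keep sentences under 20 words.",
]

TONE_RULES = [
    "Friendly but neutral; avoid slang such as 'Dang it'.",
]

def build_prompt(draft: str) -> str:
    """Combine guideline rules with a copy draft into one prompt string."""
    rules = "\n".join(f"- {rule}" for rule in CLARITY_RULES + TONE_RULES)
    return (
        "Rewrite the UI copy below so it follows these rules:\n"
        f"{rules}\n\n"
        f"Copy: {draft}"
    )

print(build_prompt("Dang it, your payment is in approval."))
```

Keeping the rules in one place means every generated draft is checked against the same clarity bar, instead of each writer (or each prompt) reinventing it.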
Incorporate Global English Heuristics Into Your Definition Of Success
Basic heuristic principles are often documented as a part of overarching guidelines to help UX teams do a better job. The Nielsen Norman Group usability heuristics cover the essential ones, but it doesn’t mean you shouldn’t introduce your own. To complement this list, add this principle:
Aim
for global understanding: Content and design should communicate clearly
to any user regardless of cultural or language background.
You can suggest criteria to ensure it’s clear how to evaluate this:
Action transparency: Is it clear what happens next when the user proceeds to the next screen or page?
Minimal ambiguity: Is the content open to multiple interpretations?
International clarity: Does this content work in a non-Western context?
Bring A Non-native Perspective To Your Research, Too
This one is often overlooked, but collaboration between the research team and non-native speaking writers
is super helpful. If your research involves a survey or interview, they
can help you double-check whether there is complex or ambiguous
language used in the questions unintentionally.
In a study published in the Journal of Usability Studies,
37% of non-native speakers failed to answer a question that
included a word they did not recognise or could not recall the meaning
of. The question asked whether they found the system “cumbersome to
use”, and basing measurements on such unreliable data would have a
negative impact on the UX of your product.
Another study, published in the Journal of User Experience, highlights how important clarity is in surveys. While most people in the study interpreted the question “How do you feel about … ?” as “What’s your opinion on …?”, some took it literally and proceeded to describe their emotions instead.
This
means that even familiar terms can be misinterpreted. To get precise
research results, it’s worth defining key terms and concepts to ensure
common understanding with participants.
Globalise Your Glossary
At
Klarna, we often ran into the challenge of inconsistent translations for
key terms. A well-defined English term could end up with three to
five different versions in Italian or German. Sometimes, even the same
features or app sections could be referred to differently depending on
the market — this led to user confusion.
To address this, we introduced a shared term base — a controlled vocabulary that included:
English term,
Definition,
Approved translations for all markets,
Approved and forbidden synonyms.
Importantly, the term selection was dictated by user research, not by assumption or personal preferences of the team.
This Notion template shows how a controlled vocabulary can look.
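The same structure can live in code as well as in a document. Below is a minimal sketch of a term base plus a check that flags forbidden synonyms in UI copy; the entry, its translations, and the synonyms are invented for illustration and are not Klarna’s actual glossary:

```python
# Minimal sketch of a controlled vocabulary (term base).
# Every term, definition, and translation here is an invented example.

TERM_BASE = {
    "pending approval": {
        "definition": "Submitted and awaiting review; no user action needed.",
        "translations": {"de": "Genehmigung ausstehend", "it": "In attesa di approvazione"},
        "approved_synonyms": ["awaiting approval"],
        "forbidden_synonyms": ["in approval", "under sign-off"],
    },
}

def flag_forbidden_terms(copy: str) -> list[str]:
    """Return every forbidden synonym found in a piece of UI copy.

    Naive substring matching — enough for a sketch, though a real check
    would want word-boundary matching to avoid false positives.
    """
    text = copy.lower()
    return [
        phrase
        for entry in TERM_BASE.values()
        for phrase in entry["forbidden_synonyms"]
        if phrase in text
    ]

print(flag_forbidden_terms("Your expense is In approval."))  # → ['in approval']
```

A check like this can run in a content linter or CI step, so copy that contradicts the glossary is caught before it ships.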
We
used a similar setup. Our new glossary was shared internally across
teams, from product to customer service. The result? Support tickets
related to unclear language in the UI (or directions in the user
journey) dropped by 18%. This included tasks like finding instructions
on how to make a payment (especially with less popular payment methods
like bank transfer), where the late fee details are located, or whether
it’s possible to postpone a payment. And yes, all of these features
were available, and the team believed they were quite easy to find.
A
glossary like this can live as an add-on to your guidelines. This way,
you will be able to quickly get new joiners up to speed, keep product
copy ready for localisation, and defend your decisions with
stakeholders.
Approach Your Team Growth With An Open Mind
‘Looking
for a native speaker’ still appears in job listings for UX writers
and content designers. There’s no point in assuming it’s intentional
discrimination. It’s a misunderstanding that stems from not fully
accepting that our job is more about building the user experience than writing grammatically correct text.
Here are a few tips to make sure you hire the best talent and treat your applicants fairly:
Remove the ‘native speaker’ and ‘fluency’ requirement.
Instead,
focus on the core part of our job: add ‘clear communicator’, ‘ability
to simplify’, or ‘experience writing for a global audience’.
Judge the work, not the accent.
Over
the years, plenty of studies have confirmed that accent bias is real —
people with an unusual or foreign accent are considered less hirable.
While some may argue that an accent can affect the efficiency of
internal communications, that’s not enough to justify overlooking an
applicant’s good work.
My
personal experience with the accent is that it mostly depends on the
situation you’re in. When I’m in a friendly environment and do not feel
anxiety, my English flows much better as I do not overthink how I sound.
Ironically, when I’m in a room with my team full of British
native speakers, I sometimes default to my Slavic accent. The question
is: does it make my content design expertise or writing any worse? Not
in the slightest.
Therefore, make sure you judge the portfolios,
the ideas behind interview answers, and whiteboard challenge
presentations, instead of focusing on whether the candidate’s accent
implies they might not be a good writer.
Good Global Products Need Great Non-native Content Design
Non-native
content designers do not have a negative impact on your team’s writing.
They sharpen it by helping you look at your content through the lens of
your real user base. In the globalised world, linguistic purity no longer benefits your product’s user experience.
Try
these practical steps and leverage the non-native speaking lens of your
content designers to design better international products.