
Wednesday, February 19, 2025

How OWASP Helps You Secure Your Full-Stack Web Applications

 

The OWASP vulnerabilities list is the perfect starting point for web developers looking to strengthen their security expertise. Let’s discover how these vulnerabilities materialize in full-stack web applications and how to prevent them.

Security can be an intimidating topic for web developers. The vocabulary is rich and full of acronyms. Trends evolve quickly as hackers and analysts play a perpetual cat-and-mouse game. Vulnerabilities stem from little details we cannot afford to spend too much time on during our day-to-day operations.

JavaScript developers already have a lot on their plate with the emergence of a new wave of innovative architectures, such as React Server Components, the Next.js App Router, or Astro islands.

So, let’s take a focused approach. What we need is to be able to detect and mitigate the most common security issues. A top ten of the most common vulnerabilities would be ideal.

Meet The OWASP Top 10

Guess what: there happens to be such a top ten of the most common vulnerabilities, curated by experts in the field!

It is provided by the OWASP Foundation, and it’s an extremely valuable resource for getting started with security.

OWASP stands for “Open Worldwide Application Security Project.” It’s a nonprofit foundation whose goal is to make software more secure globally. It supports many open-source projects and produces high-quality education resources, including the OWASP top 10 vulnerabilities list.

We will dive into each item of the OWASP top 10 to understand how to recognize these vulnerabilities in a full-stack application.

Note: I will use Next.js as an example, but this knowledge applies to any similar full-stack architecture, even outside of the JavaScript ecosystem.

Let’s start our countdown towards a safer web!

Number 10: Server-Side Request Forgery (SSRF)

You may have heard about Server-Side Rendering, aka SSR. Well, you can consider SSRF to be its evil twin acronym.

Server-Side Request Forgery can be summed up as letting an attacker fire requests using your backend server. Besides hosting costs that may spike, the main problem is that the attacker benefits from your server’s privileges. In a complex architecture, this means being able to target your internal private services using your own compromised server.

SSR is good, but SSRF is bad!

Here is an example. Our app lets a user input a URL and summarizes the content of the target page server-side using an AI SDK. A mischievous user passes localhost:3000 as the URL instead of a website they’d like to summarize. Your server will fire a request against itself or any other service running on port 3000 in your backend infrastructure. This is a severe SSRF vulnerability!

You’ll want to be careful when firing requests based on user inputs, especially server-side.
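
Here is a minimal sketch of such a guard for the summarization feature described above. The helper name summarize and the checks are illustrative; a production defense would also need to handle redirects, DNS rebinding, and private IP ranges.

// A hypothetical guard for the URL summarization feature
export async function summarizeUrl(rawUrl) {
  const url = new URL(rawUrl); // throws on malformed input

  // Only allow plain HTTP(S), never other protocols
  if (!["http:", "https:"].includes(url.protocol)) {
    throw new Error("Unsupported protocol");
  }
  // Refuse obviously internal hosts (a real allowlist is safer)
  if (url.hostname === "localhost" || url.hostname.startsWith("127.")) {
    throw new Error("Internal addresses are not allowed");
  }

  const response = await fetch(url);
  // summarize() stands in for the AI SDK call from the example
  return summarize(await response.text());
}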

Number 9: Security Logging And Monitoring Failures

I wish we could establish a telepathic connection with our beloved Node.js server running in the backend. Instead, the best thing we have to see what happens in the cloud is a dreadful stream of unstructured pieces of text we name “logs.”

Yet we will have to deal with that, not only for debugging or performance optimization but also because logs are often the only information you’ll get to discover and remediate a security issue.

As a starting point, you might want to focus on logging the most important transactions of your application, exactly like you would prioritize writing end-to-end tests. In most applications, this means login, signup, payouts, mail sending, and so on. In a bigger company, a more complete telemetry solution is a must-have, such as OpenTelemetry, Sentry, or Datadog.
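
As a sketch, logging a login transaction with a structured logger such as pino could look like the snippet below. The verifyCredentials helper is a placeholder for your own logic, and you may want to mask the email before logging it.

import pino from "pino";

const logger = pino();

export async function login(email, password) {
  const user = await verifyCredentials(email, password); // assumed helper
  if (!user) {
    // Log the failure with context, but never log the password itself
    logger.warn({ event: "login_failed", email }, "Failed login attempt");
    return null;
  }
  logger.info({ event: "login_success", userId: user.id }, "User logged in");
  return user;
}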

If you are using React Server Components, you may need to set up a proper logging strategy anyway since it’s not possible to debug them directly from the browser as we used to do for Client components.

Number 8: Software And Data Integrity Failures

The OWASP top 10 vulnerabilities tend to have various levels of granularity, and this one is really a big family. I’d like to focus on supply chain attacks, as they have gained a lot of popularity over the years.

You may have heard about the Log4j vulnerability. It was widely publicized, highly critical, and heavily exploited by hackers. It’s a massive supply chain attack.

In the JavaScript ecosystem, you most probably install your dependencies using NPM. Before picking dependencies, you might want to craft yourself a small list of health indicators.

  • Is the library maintained and tested with proper code?
  • Does it play a critical role in my application?
  • Who is the main contributor?
  • Did I spell it right when installing?

For more serious business, you might want to consider setting up a Software Composition Analysis (SCA) solution; GitHub’s Dependabot is a free one, and Snyk and Datadog are other well-known players.

Number 7: Identification And Authentication Failures

Here is a stereotypical vulnerability belonging to this category: your admin password is leaked. A hacker finds it. Boom, game over.

Password management procedures are beyond the scope of this article, but in the context of full-stack web development, let’s dive deep into how we can prevent brute force attacks using Next.js edge middlewares.

Middlewares are tiny proxies written in JavaScript. They process requests in a way that is supposed to be very, very fast, faster than a normal Node.js endpoint, for example. They are a good fit for handling low-level processing, like blocking malicious IPs or redirecting users towards the correct translation of a page.

One interesting use case is rate limiting. You can quickly improve the security of your applications by limiting people’s ability to spam your POST endpoints, especially login and signup.
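
Here is a rough sketch of what rate limiting could look like in a Next.js middleware. The in-memory Map is only suitable for a demo since it is per-instance and resets on every deploy; real setups usually rely on a shared store such as Redis.

// middleware.js
import { NextResponse } from "next/server";

const WINDOW_MS = 60_000; // 1-minute window
const MAX_REQUESTS = 5;   // max requests per window per IP
const hits = new Map();   // ip -> { count, windowStart }

export function middleware(request) {
  const ip = request.headers.get("x-forwarded-for") ?? "unknown";
  const now = Date.now();
  const entry = hits.get(ip) ?? { count: 0, windowStart: now };

  if (now - entry.windowStart > WINDOW_MS) {
    entry.count = 0;
    entry.windowStart = now;
  }
  entry.count += 1;
  hits.set(ip, entry);

  if (entry.count > MAX_REQUESTS) {
    return new NextResponse("Too many requests", { status: 429 });
  }
  return NextResponse.next();
}

// Only guard the sensitive endpoints
export const config = { matcher: ["/api/login", "/api/signup"] };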

You may go even further by setting up a Web Application Firewall (WAF). A WAF lets developers implement elaborate security rules. This is not something you would set up directly in your application but rather at the host level. For instance, Vercel released its own WAF in 2024.

Number 6: Vulnerable And Outdated Components

We have discussed supply chain attacks earlier. Outdated components are a variation of this vulnerability, where you actually are the person to blame. Sorry about that.

Security vulnerabilities are often discovered by diligent security analysts before an attacker can even start thinking about exploiting them. Thanks, analyst friends! When this happens, they file a Common Vulnerabilities and Exposures (CVE) report and store it in a public database.

The remedy is the same as for supply chain attacks: set up an SCA solution like Dependabot that will regularly check for the use of vulnerable packages in your application.

Your app depends on many packages. Sadly, some of them are probably affected by vulnerabilities that can spread to your application.

Halfway break

I just want to mention at this point how much progress we have made since the beginning of this article. To sum it up:

  • We know how to recognize an SSRF. This is a nasty vulnerability, and it is easy to accidentally introduce while crafting a super cool feature.
  • We have identified monitoring and dependency analysis solutions as important pieces of “support” software for securing applications.
  • We have figured out a good use case for Next.js edge middlewares: rate limiting our authentication endpoints to prevent brute force attacks.

It’s a good time to go grab a tea or coffee. But after that, come back with us because we are going to discover the five most common vulnerabilities affecting web applications!

Number 5: Security Misconfiguration

There are so many configurations that we can mismanage. But let’s focus on the most insightful ones for a web developer learning about security: HTTP headers.

You can use HTTP response headers to pass on a lot of information to the user’s browser about what’s possible or not on your website.

For example, by narrowing down the “Permissions-Policy” header, you can declare that your website will never require access to the user’s camera. This is an extremely powerful protection mechanism in case of a script injection attack (XSS). Even if the hacker manages to run a malicious script in the victim’s browser, the browser will not allow the script to access the camera.
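
As a sketch, here is how such headers could be declared globally in a Next.js app via next.config.js. The policy values are illustrative, not a recommendation.

// next.config.js
module.exports = {
  async headers() {
    return [
      {
        source: "/:path*",
        headers: [
          // Deny camera and microphone access even if a malicious script runs
          { key: "Permissions-Policy", value: "camera=(), microphone=()" },
          // Prevent the site from being embedded in frames (clickjacking)
          { key: "X-Frame-Options", value: "DENY" },
          // Stop browsers from MIME-sniffing responses
          { key: "X-Content-Type-Options", value: "nosniff" },
        ],
      },
    ];
  },
};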

I invite you to observe the security configuration of any template or boilerplate that you use to craft your own websites. Do you understand them properly? Can you improve them? Answering these questions will inevitably lead you to vastly increase the safety of your websites!

Number 4: Insecure Design

I find this one funny, although a bit insulting for us developers.

Bad code is literally the fourth most common cause of vulnerabilities in web applications! You can’t just blame your infrastructure team anymore.

Design is actually not just about code but about the way we use our programming tools to produce software artifacts.

Bad design can create vulnerabilities that are very hard to detect. The cure is good design, and good design is a lot of learning. Keep reading curated learning resources, and everything will be ok!

In the context of full-stack JavaScript frameworks, I would recommend learning how to use them idiomatically, the same way you’d want to learn a foreign language. It’s not just about translating what you already know word-by-word. You need to get a grasp of how a native speaker would phrase their thoughts.

Learning idiomatic Next.js is really, really hard. Trust me, I teach this framework to web developers. Next is all about client and server logic hybridization, and some patterns may not even transfer to competing frameworks with a different architecture like Astro.js or Remix.

Thankfully, the Next.js core team has produced many free learning resources, including articles and documentation specifically focusing on security.

I recommend reading Sebastian Markbåge’s famous article “How to Think About Security in Next.js” as a starting point. If you use Next.js in a professional setting, consider organizing proper training sessions before you start working on high-stakes projects.

Number 3: Injection

Injections are the epitome of vulnerabilities, the quintessence of breaches, and the paragon of security issues. SQL injections are typically very famous, but JavaScript injections are also quite common. Despite being well-known vulnerabilities, injections are still in the top 3 in the OWASP ranking!

Injections are the reason why forcing a React component to render raw HTML is done through an unwelcoming `dangerouslySetInnerHTML` prop.

React doesn’t want you to include user input that could contain a malicious script.

The screenshot below is a demonstration of an injection using images. It could target a message board, for instance. The attacker misused the image posting system. They passed a URL that points towards an API GET endpoint instead of an actual image. Whenever your website’s users see this post in their browser, an authenticated request is fired against your backend, triggering a payment!

As a bonus, having a GET endpoint that triggers side effects such as payments also constitutes a risk of Cross-Site Request Forgery (CSRF), which happens to be SSRF’s client-side cousin.

This image will trigger payments using the end user’s identity when displayed! The mistake lies in using a GET endpoint to trigger payments instead of a POST endpoint.

Even experienced developers can be caught off-guard. Are you aware that dynamic route parameters are user inputs? For instance, [language]/page.jsx in a Next.js or Astro app. I often see clumsy attack attempts when logging them, like “language” being replaced by a path traversal like ../../../../passwords.txt.

Zod is a very popular library for running server-side validation of user inputs. You can add a transform step to sanitize inputs before they are included in database queries or land in places where they could end up being executed as code.
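
For instance, a minimal sketch of validating the [language] parameter with Zod could look like this (the locale pattern is illustrative):

import { z } from "zod";

// Accept only values that look like a locale, e.g., "en" or "en-US";
// a path traversal like "../../../../passwords.txt" fails the pattern
const languageSchema = z.string().regex(/^[a-z]{2}(-[A-Z]{2})?$/);

export function parseLanguage(raw) {
  const result = languageSchema.safeParse(raw);
  if (!result.success) throw new Error("Invalid language parameter");
  return result.data;
}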

Number 2: Cryptographic Failures

A typical discussion between two developers who are in deep, deep trouble:

— We have leaked our database and encryption key. What algorithm was used to encrypt the password again? AES-128 or SHA-512?
— I don’t know, aren’t they the same thing? They transform passwords into gibberish, right?
— Alright. We are in deep, deep trouble.

This vulnerability mostly concerns backend developers who have to deal with sensitive personally identifiable information (PII) or passwords.

To be honest, I don’t know much about these algorithms; I studied computer science way too long ago.

The only thing I remember is that you need non-reversible algorithms to store passwords, aka hashing algorithms. The point is that if the hashed passwords are leaked, and the encryption key is also leaked, it will still be super hard to hack an account (you can’t simply reverse a hash).

In the State of JavaScript survey, we use passwordless authentication with an email magic link, and emails are one-way hashed, so even as admins, we cannot guess a user’s email in our database.

A hashed email generated when a user creates an account: it can’t be reversed even when possessing the encryption key.
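
A minimal sketch of such one-way hashing with Node’s built-in crypto module might look like this. The salt handling is simplified for illustration, and passwords specifically would call for a dedicated algorithm like bcrypt or Argon2.

import { createHash } from "node:crypto";

// The same input always produces the same digest, so hashes can still be
// compared for equality, but the original address cannot be recovered
export function hashEmail(email, salt) {
  return createHash("sha256")
    .update(salt + email.trim().toLowerCase())
    .digest("hex");
}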

And number 1 is…

Such suspense! We are about to discover that the number one vulnerability in the world of web development is…

Broken Access Control! Tada.

Yeah, the name is not super insightful, so let me rephrase it: it’s about people being able to access other people’s accounts or resources they are not allowed to see. That’s more impressive when put this way.

A while ago, I wrote an article about the fact that checking authorization within a layout may leave page content unprotected in Next.js. It’s not a flaw in the framework’s design but a consequence of how React Server Components have a different model than their client counterparts, which in turn affects how layouts work in Next.

Here is a demo of how you can implement a paywall in Next.js that doesn’t protect anything.

// app/layout.jsx
import { cookies } from "next/headers";
import { redirect } from "next/navigation";

// Using cookie-based authentication as usual
async function checkPaid() {
  const token = cookies().get("auth_token")?.value;
  return await db.hasPayments(token); // db is the app's own data layer
}

// Running the payment check in a layout to apply it to all pages
// Sadly, this is not how Next.js works!
export default async function Layout({ children }) {
  // ❌ this won't work as expected!!
  const hasPaid = await checkPaid();
  if (!hasPaid) redirect("/subscribe");
  // then render the underlying page
  return <div>{children}</div>;
}

// app/page.jsx
// ❌ this can be accessed directly
// by adding "RSC=1" to the request that fetches it!
export default function Page() {
  return <div>PAID CONTENT</div>;
}
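
The remedy, sketched below, is to run the check where the protected content actually lives: in the page itself (or even deeper, in the data layer).

// app/page.jsx
import { redirect } from "next/navigation";

export default async function Page() {
  // ✅ the check now runs whenever this page's content is requested
  const hasPaid = await checkPaid(); // same helper as above
  if (!hasPaid) redirect("/subscribe");
  return <div>PAID CONTENT</div>;
}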

What We Have Learned From The Top 5 Vulnerabilities

The most common vulnerabilities are tightly related to application design issues:

  • Copy-pasting configuration without really understanding it.
  • Having an improper understanding of the inner workings of the framework we use. Next.js is a complex beast and doesn’t make our life easier on this point!
  • Picking an algorithm that is not suited for a given task.

These vulnerabilities are tough ones because they confront us with our own limits as web developers. Nobody is perfect, and even the most experienced developers will inevitably write vulnerable code at some point in their lives without even noticing.

How to prevent that? By not staying alone! When in doubt, ask fellow developers; chances are good that someone has faced the same issue and can point you to the right solution.

Where To Head Now?

First, I must insist that you have already done a great job of improving the security of your applications by reading this article. Congratulations!

Most hackers rely on a volume strategy and are not particularly skilled, so they really struggle when confronted with educated developers who can spot and fix the most common vulnerabilities.

By discovering how the OWASP top 10 can affect full-stack JavaScript applications, you’ve just made hackers’ lives much harder!

From there, I can suggest a few directions to get even better at securing your web applications:

  • Try to apply the OWASP top 10 to an application you know well, either a personal project, your company’s codebase, or an open-source solution.
  • Give some third-party security tools a shot. They tend to overwhelm developers with too much information, but keep in mind that most actors in the security field are aware of this issue and actively work to provide more focused vulnerability alerts.
  • I’ve added my favorite security-related resources at the end of the article, so you’ll have plenty to read!

Thanks for reading, and stay secure!

Time To First Byte: Beyond Server Response Time

Optimizing web performance means looking beyond surface-level metrics. Time to First Byte (TTFB) is crucial, but improving it requires more than tweaking server response time. This article breaks down what TTFB is, what causes a poor score, and why reducing server response time alone isn’t enough for optimization and often won’t be the most impactful change you can make to your website.

Loading your website HTML quickly has a big impact on visitor experience. After all, no page content can be displayed until after the first chunk of the HTML has been loaded. That’s why the Time to First Byte (TTFB) metric is important: it measures how soon after navigation the browser starts receiving the HTML response.

Generating the HTML document quickly plays a big part in minimizing TTFB delays. But actually, there’s a lot more to optimizing this metric. In this article, we’ll take a look at what else can cause poor TTFB and what you can do to fix it.

What Components Make Up The Time To First Byte Metric?

TTFB stands for Time to First Byte. But where does it measure from?

Different tools handle this differently. Some only count the time spent sending the HTTP request and getting a response, ignoring everything else that needs to happen first before the resource can be loaded. However, when looking at Google’s Core Web Vitals, TTFB starts from the moment the user starts navigating to a new page. That means TTFB includes:

  • Cross-origin redirects,
  • Time spent connecting to the server,
  • Same-origin redirects, and
  • The actual request for the HTML document.

We can see an example of this in this request waterfall visualization.

Request waterfall visualization

The server response time here is only 183 milliseconds, or about 12% of the overall TTFB metric. Half of the time is instead spent on a cross-origin redirect — a separate HTTP request that returns a redirect response before we can even make the request that returns the website’s HTML code. And when we make that request, most of the time is spent on establishing the server connection.

Connecting to a server on the web typically takes three round trips on the network:

  1. DNS: Looking up the server IP address.
  2. TCP: Establishing a reliable connection to the server.
  3. TLS: Creating a secure encrypted connection.

What Network Latency Means For Time To First Byte

Let’s add up all the network round trips in the example above:

  • 2 server connections: 6 round trips.
  • 2 HTTP requests: 2 round trips.

That means that before we even get the first response byte for our page we actually have to send data back and forth between the browser and a server eight times!

That’s where network latency comes in, or network round trip time (RTT) if we look at the time it takes to send data to a server and receive a response in the browser. On a high-latency connection with a 150 millisecond RTT, making those eight round trips will take 1.2 seconds. So, even if the server always responds instantly, we can’t get a TTFB lower than that number.

Network latency depends a lot on the geographic distances between the visitor’s device and the server the browser is connecting to. You can see the impact of that in practice by running a global TTFB test on a website. Here, I’ve tested a website that’s hosted in Brazil. We get good TTFB scores when testing from Brazil and the US East Coast. However, visitors from Europe, Asia, or Australia wait a while for the website to load.

A map visualization of a global TTFB test

What Content Delivery Networks Mean for Time to First Byte

One way to speed up your website is by using a Content Delivery Network (CDN). These services provide a network of globally distributed server locations. Instead of each round trip going all the way to where your web application is hosted, browsers instead connect to a nearby CDN server (called an edge node). That greatly reduces the time spent on establishing the server connection, improving your overall TTFB metric.

By default, the actual HTML request still has to be sent to your web app. However, if your content isn’t dynamic, you can also cache responses at the CDN edge node. That way, the request can be served entirely through the CDN instead of data traveling all across the world.

If we run a TTFB test on a website that uses a CDN, we can see that each server response comes from a regional data center close to where the request was made. In many cases, we get a TTFB of under 200 milliseconds, thanks to the response already being cached at the edge node.

An expanded view of the TTFB test with a list of test locations and their server responses

How To Improve Time To First Byte

What you need to do to improve your website’s TTFB score depends on what its biggest contributing component is.

  • A lot of time is spent establishing the connection: Use a global CDN.
  • The server response is slow: Optimize your application code or cache the response.
  • Redirects delay TTFB: Avoid chaining redirects and optimize the server returning the redirect response.

TTFB details, including Redirect, DNS Lookup, TCP Connection, SSL Handshake, and Response

Keep in mind that TTFB depends on how visitors are accessing your website. For example, if they are logged into your application, the page content probably can’t be served from the cache. You may also see a spike in TTFB when running an ad campaign, as visitors are redirected through a click-tracking server.

Monitor Real User Time To First Byte

If you want to get a breakdown of what TTFB looks like for different visitors on your website, you need real user monitoring. That way, you can break down how visitor location, login status, or the referrer domain impact real user experience.
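
If you just want a quick look at these components for your own session, the browser exposes them through the Navigation Timing API. Here is a small sketch; note that secureConnectionStart is 0 when no TLS connection was made, and a real monitoring script would send the values to an analytics endpoint rather than log them.

const [nav] = performance.getEntriesByType("navigation");

const ttfb = nav.responseStart; // ms since navigation start
const redirect = nav.redirectEnd - nav.redirectStart;
const dns = nav.domainLookupEnd - nav.domainLookupStart;
const connect = nav.connectEnd - nav.connectStart; // TCP + TLS

console.log({ ttfb, redirect, dns, connect });
// e.g., navigator.sendBeacon("/rum", JSON.stringify({ ttfb }));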

DebugBear can help you collect real user metrics for Time to First Byte, Google Core Web Vitals, and other page speed metrics. You can track individual TTFB components like TCP duration or redirect time and break down website performance by country, ad campaign, and more.

Time to First Byte map

Conclusion

By looking at everything that’s involved in serving the first byte of a website to a visitor, we’ve seen that just reducing server response time isn’t enough and often won’t even be the most impactful change you can make on your website.

Just because your website is fast in one location doesn’t mean it’s fast for everyone, as website speed varies based on where the visitor is accessing your site from.

Content Delivery Networks are an incredibly powerful way to improve TTFB. Even if you don’t use any of their advanced features, just using their global server network saves a lot of time when establishing a server connection.

Taking RWD To The Extreme

 

Let’s look back at the evolution of web design, recalling the days when table layouts were all the rage and Flash games were shaping the online culture. And then responsive web design (RWD) happened — and it often feels like the end of history; well, at least for web design. After all, we still create responsive websites, and that’s The True Way™ of doing layouts on the web. Yet the current year, 2025, marks the 15th anniversary of Ethan Marcotte’s article, which forever changed web development. That’s a whole era in “web” years. So, maybe something happened after RWD, but it was so obvious that it went nearly invisible. Let’s try to uncover this something.

When Ethan Marcotte conceived RWD, web technologies were far less mature than today. As web developers, we started to grasp how to do things with floats after years of stuffing everything inside table cells. There weren’t many possible ways to achieve a responsive site. There were two of them: fluid grids (based on percentages) and media queries, which were a hot new thing back then.

What was lacking was a real layout system that would allow us to lay things out on a page instead of improvising with floating content. We had to wait several years for Flexbox to appear. And CSS Grid followed that.

Undoubtedly, new layout systems native to the browser were groundbreaking 10 years ago. They were revolutionary enough to usher in a new era. In her talk “Everything You Know About Web Design Just Changed” at the An Event Apart conference in 2019, Jen Simmons proposed a name for it: Intrinsic Web Design (IWD). Let’s disarm that fancy word first. According to the Merriam-Webster dictionary, intrinsic means “belonging to the essential nature or constitution of a thing.” In other words, IWD is a natural way of doing design for the web. And that boils down to using CSS layout systems for… laying out things. That’s it.

It does not sound that groundbreaking on its own. But it opens a lot of possibilities that weren’t earlier available with float-based layouts or table ones. We got the best things from both worlds: two-dimensional layouts (like tables with their rows and columns) with wrapping abilities (like floating content when there is not enough space for it). And there are even more goodies, like mixing fixed-sized content with fluid-sized content or intentionally overlapping elements:

See the Pen Overlapping elements [forked] by Comandeer.

As Jen points out in her presentation, this allows us to finally make even fancy designs in the “web” way, eliminating the tension between web designers and developers. No more “This print design can’t be translated for the web!” Well, at least far fewer arguments…

But here’s the strange part: that new era didn’t come. IWD never became a household term the same way that RWD has. We’re still stuck in the good old RWD era. Yet Flexbox and Grid became indispensable tools in (nearly) every web developer’s tool belt. They are so natural and intrinsic that we intuitively started to use them, missing their whole revolutionary aspect. Instead of the groundbreaking revolution of IWD, we chose the longer but steadier evolution of RWD.

Enter The Browser

I believe that IWD paved the way for more radical ideas, even if it hasn’t developed into a bona fide era. And the central point of all of those radical ideas is a browser — that part of the web that sits between our code and the user. Web developers have always had a love-hate relationship with browsers. (Don’t get me started on Internet Explorer!) They often amuse us both with new features (WebGPU for the win!) and cryptic bugs (points suddenly take up more space, what the heck?). But at the end of the day, we tell the browser what to do to display our page the way we want it to be displayed to the user.

In some ways, IWD challenged that approach. CSS layout systems aren’t taking direct orders from a web developer. We can barely hint at what we want them to do. But the final decision lies with the browser. And what if we take it even further?

Heydon Pickering proposed the term algorithmic layouts to describe such an approach. The web is inherently algorithmic. Even the simplest page uses internal algorithms to lay things out: a block of text forms a flow layout that will wrap when there is not enough space in the line. And that’s so obvious that we don’t even think about it. That’s just how text works, and that’s how it has always worked. And yet, there is an algorithm behind that, and behind every CSS layout system too. We can use Flexbox to make a simple layout that displays on a single line by default and falls back to wrapping up multiple lines if there is not enough space, just like text.

See the Pen Resizable flexbox container [forked] by Comandeer.

And we get all of these algorithms for free! The only thing we need to do is to allow Flexbox to wrap with the flex-wrap property. And it wraps by itself. Imagine that you need to calculate when and how the layout should wrap — that would be a nightmare. Fortunately, browsers are good at laying out things. After all, they have been doing it for over 35 years. They’re experienced in that, so just let them handle this stuff. That’s the power of algorithmic layouts: they work the best when left alone.

Andy Bell summed it pretty well during All Day Hey! in 2022 when he recommended that we “be the browser’s mentor, not its micromanager.” Don’t try to be smarter than a browser because it knows things you can’t possibly know. You don’t know what device the user uses — its processing power, current battery level, viewport, and connection stability. You don’t know what assistive technologies the user uses or how they configured their operating system. You don’t know if they disable cookies and JavaScript.

You know only one thing: there is that peculiar thing between your website and the user called browser — and it knows much more about the page and the user than you do. It’s like an excellent translator that you hire for those extremely important business negotiations with someone from a totally foreign culture that you don’t know anything about. But the translator knows it well and translates your words with ease, gently closing the cultural chasm between you and the potential customer. You don’t want to force them to translate your words literally — that could be catastrophic. What you want is to provide them with your message and allow them to do the magic of shaping it into a message understandable to the customer. And the same applies to browsers; they know better how to display your website.

Enter The Declarative Design

I think that Jen, Heydon, and Andy speak of the same thing — an approach that shifts much of the work from the web developer to the browser. Instead of telling it how to do things, we rather tell it what to do and leave it to figure out the “how” part by itself.

As Jeremy Keith notices, there has been a shift from an imperative design (telling the browser “how”) to a declarative one (telling the browser “what”). Specifically, Jeremy says that we ought to “focus on creating the right inputs rather than trying to control every possible output.”

That’s quite similar to what we do with AI today: we meticulously craft our prompts (inputs) and hope to get the right answer (output). However, there is a very important difference between AI and browsers: the latter is not a black box.

Everything (well, most of what) the browser does is described in detail in open web standards, so we’re able to make educated guesses about the output. Granted, we can’t be sure if the user sees the two-column layout on their 8K screen or a one-column layout on their microwave’s small screen (if it can run DOOM, it can run a web browser!). But we know for sure that we defined these two edge cases, and the browser works out everything in between.

In theory, it all sounds nice and easy. Let’s try to make the declarative design more actionable. If we gather the techniques mentioned by Jen, Heydon, Andy, and Jeremy, we will end up with roughly the following list:

Use Native Layout Systems

They’re available in basically every browser on the market and have been for years, and I believe that they are, indeed, widely used. But from time to time, a question pops up: Which layout system should I use? And the answer is: Yes. Mix and match! After all, different elements on the page would work better with different layout systems. Take, for example, this navigation on top with several links in one row that should wrap if there is not enough space. Sounds like Flexbox. Is the main part divided into three columns, with the third column positioned at the bottom of the content? Definitely CSS Grid. As for the text content? Well, that’s flow.

A layout of the page: navigation in the top left corner, based on flexbox; the main area based on grid, divided into three columns and two rows; the first column contains aside content, the second the main content, and the third another aside that occupies the second row.

Native layout systems are here to make the browser work for you — don’t hesitate to use that to your advantage.
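
As a rough sketch in CSS, the page described above could combine both systems like this:

/* The wrapping navigation: flexbox */
nav ul {
  display: flex;
  flex-wrap: wrap; /* rows wrap by themselves when space runs out */
  gap: 1em;
}

/* The three-column main area: grid */
main {
  display: grid;
  grid-template-columns: 1fr 2fr 1fr; /* aside, content, aside */
}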

Start With Semantic HTML

HTML is the backbone of the web. It’s the language that structures and formats the content for the user. And it comes with a huge bonus: it loads and displays to the user, even if CSS and JavaScript fail to load for whatever reason. In other words, the website should still make sense to the user even if the CSS that provides the layout and the JavaScript that provides the interactivity are no-shows. A website is a text document, not so different from the one you can create in a text processor, like Word or LibreWriter.

Semantic HTML also provides important accessibility features, like headings that are often used by screen-reader users for navigating pages. This is why starting not just with any markup but semantic markup for meaningful structure is a crucial step to embracing native web features.

Use Fluid Type With Fluid Space

We often need to adjust the font size of our content when the screen size changes. Smaller screens mean being able to display less content, and larger screens provide more affordance for additional content. This is why we ought to make content as fluid as possible, by which I mean the content should automatically adjust based on the screen’s size. A fluid typographic system optimizes the content’s legibility when it’s being viewed in different contexts.

Nowadays, we can achieve truly fluid type with one line of CSS, thanks to the clamp() function:

font-size: clamp(1rem, calc(1rem + 2.5vw), 6rem);

The maths involved in it goes quite above my head. Thankfully, there is a detailed article on fluid type by Adrian Bece here on Smashing Magazine and Utopia, a handy tool for doing the maths for us. But beware — there be dragons! Or at least possible accessibility issues. By limiting the maximum font size, we could break the ability to zoom the text content, violating one of the WCAG’s requirements (though there are ways to address that).

Fortunately, fluid space is much easier to grasp: if gaps (margins) between elements are defined in font-dependent units (like rem or em), they will scale alongside the font size. Yet rest assured, there are also caveats.
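
A tiny sketch of the idea: the heading’s margin is expressed in em, so it keeps its ratio to the text at any size.

h2 {
  font-size: clamp(1.5rem, 1rem + 2vw, 3rem);
  margin-block: 1em; /* grows and shrinks together with the heading */
}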

Always Bet On Progressive Enhancement

Yes, that’s this over-20-year-old technique for creating web pages. And it’s still relevant today in 2025. Many interesting features have limited availability — like cross-page view transitions. They won’t work for every user, but enabling them is as simple as adding one line of CSS:

@view-transition { navigation: auto; }

It won’t work in some browsers, but it also won’t break anything. And if some browser catches up with the standard, the code is already there, and view transitions start to work in that browser on your website. It’s sort of like opting into the feature when it’s ready.

That’s progressive enhancement at its best: allowing you to make your stairs into an escalator whenever it’s possible.

It applies to many more things in CSS (unsupported grid is just a flow layout, unsupported masonry layout is just a grid, and so on) and other web technologies.

Trust The Browser

Trust it because it knows much more about how safe it is for users to surf the web. Besides, it’s a computer program, and computer programs are pretty good at calculating things. So instead of calculating all these breakpoints ourselves, take their helping hand and allow them to do it for you. Just give them some constraints. Make that <main> element no wider than 60 characters and no narrower than 20 characters — and then relax, watching the browser make it 37 characters on some super rare viewport you’ve never encountered before. It Just Works™.
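
In CSS terms, such a constraint could be expressed like this (a sketch, not a prescription):

main {
  /* never narrower than 20 and never wider than 60 characters */
  inline-size: clamp(20ch, 100%, 60ch);
  margin-inline: auto;
}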

But trusting the browser also means trusting the open web. After all, these algorithms responsible for laying things out are all parts of the standards.

Ditch The “Physical” CSS

That’s a bonus point from me. Layout systems introduced the concept of logical CSS. Flexbox does not have a notion of a left or right side — it has a start and an end. And that way of thinking crept into other areas of CSS, creating the whole CSS Logical Properties and Values module. After working more with layout systems, logical CSS seems much more intuitive than the old “physical” one. It also has at least one advantage over the old way of doing things: it works far better with internationalized content.

See the Pen Physical vs logical CSS [forked] by Comandeer.

The demo above shows the difference between physical and logical CSS. The physical tiles have the text-align: left property applied, while the logical ones have text-align: start. When the “left to right” inline text direction is set, both of them look the same. But when the “right to left” one is set, the logical tiles “move” their start to the right, moving the text alongside it.

Additionally, containers with tiles have their width set — the physical container with the width: 400px property and the logical one with the inline-size: 400px property. They both look the same as long as the block text direction is set to “horizontal.” But when it is set to “vertical,” the logical one switches its width with the height (as now the line of text is going from top to bottom, not from left to right), and the physical one keeps its initial width and height.

Taking It To The Extreme

“What do you mean by taking RWD to the extreme — it’s already pretty extreme!”

I hear you. But I believe that there’s still room for more. The changes described above are a big shift in the RWD world. But this shift is mainly technological. Fluid type without the clamp() method or algorithmic layouts without flexbox and grid couldn’t possibly exist — at least not without some horrible hacks (does anyone still remember CSS locks?). Our web development routine just caught up to what the modern browser can do. Yet, there is still another shift that could happen: a mental one.

I’ll be honest: I’m a die-hard fanatic of using rem and em length units. I’ve been using them for years, but they clicked for me only when I stopped trying to translate them into pixels. And what helped me in it was a… chemistry class I attended many years ago. When working with all these chemical concoctions, you often need to calculate their ratios. There’s that fancy method for doing that:

60 — 100%
20 — x

x = 100% × 20 / 60 = 33.(3)%

After I applied this way of thinking to rem and em units, I entered a new world of thinking about layouts: a ratio-based one. Because there is still a myth that 1 rem roughly equals 16 pixels — except it doesn’t. It could equal any number of pixels because it all depends on what value the user sets in their browser. So, thinking in concrete numbers is, in fact, incompatible with rem and em units. The only fully compatible way is to… keep it as-is.

A confused teenager asks his father: “So how do I know how big is 1 rem?” and the father answers with a smile: “That’s the neat part, you don’t”.

And I know that sounds crazy, but it forces a change in thinking about websites. If you don’t know the most basic information about your content (the font size), you can’t really apply any concrete numbers to your layout. You can only think in ratios. If the font size equals ✕, your heading could equal 2✕, the main column 60✕, some text input 10✕, and so on. This way, everything should work out with any font size and, by extension, scale up with any font size.
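
Expressed in CSS, that ratio-based thinking might look like this sketch, where every size is a multiple of the user’s base font size:

h1 { font-size: 2rem; }       /* 2✕ */
main { inline-size: 60rem; }  /* 60✕ */
input { inline-size: 10rem; } /* 10✕ */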

We’ve already been doing that with layout systems — we allow them to work on ratios and figure out how big each part of the layout should be. And we’ve also been doing that with rem and em units for scaling things up depending on font size. The only thing left is to completely forget the “1rem = 16px” equation and fully embrace the exciting shores of unknown dimensions.

But that sort of mental shift comes with one not-so-straightforward consequence. Not setting the font size and working with the user-provided one instead fully moves the power from the web developer to the browser and, effectively, the user. And the browser can provide us with far more information about user preferences.

Thanks to modern CSS, we can respond to these things. For example, we can switch to dark mode if the user prefers it, limit motion if the user requests it, make clickable areas bigger if the device has a touch screen, and so on. By having this kind of dialogue with the browser, exchanging information (it gives us data about the user, and we give it hints on how to display our content), we end up empowering the user. The content will be displayed in the way they want, and that makes our website far more inclusive and accessible.

After all, the users know what they need best. If they set the default font size to 64 pixels, they would be grateful if we respected that value. We don’t know why they did it (maybe they have some kind of vision impairment, or maybe they simply have a screen far away from them); we only know they did it — and we respect that.

And that’s responsive design for me.

Integrations: From Simple Data Transfer To Modern Composable Architectures

 In today’s web development landscape, the concept of a monolithic application has become increasingly rare. Modern applications are composed of multiple specialized services, each of which handles specific aspects of functionality. This shift didn’t happen overnight — it’s the result of decades of evolution in how we think about and implement data transfer between systems. Let’s explore this journey and see how it shapes modern architectures, particularly in the context of headless CMS solutions.

When computers first started talking to each other, the methods were remarkably simple. In the early days of the Internet, systems exchanged files via FTP or communicated via raw TCP/IP sockets. This direct approach worked well for simple use cases but quickly showed its limitations as applications grew more complex.

# Basic socket server example
import socket

server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind(('localhost', 12345))
server_socket.listen(1)

while True:
    connection, address = server_socket.accept()
    data = connection.recv(1024)
    # Process data and send a response back to the client
    response = data.upper()  # placeholder processing
    connection.send(response)
    connection.close()

The real breakthrough in enabling complex communication between computers on a network came with the introduction of Remote Procedure Calls (RPC) in the 1980s. RPC allowed developers to call procedures on remote systems as if they were local functions, abstracting away the complexity of network communication. This pattern laid the foundation for many of the modern integration approaches we use today.

At its core, RPC implements a client-server model: the client prepares and serializes a procedure call with its parameters and sends the message to a remote server; the server deserializes and executes the procedure, and then sends the response back to the client.

Here’s a simplified example using Python’s XML-RPC.

# Server
from xmlrpc.server import SimpleXMLRPCServer

def calculate_total(items):
    return sum(items)

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(calculate_total)
server.serve_forever()

# Client
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
try:
    result = proxy.calculate_total([1, 2, 3, 4, 5])
except ConnectionError:
    print("Network error occurred")

RPC can operate in both synchronous (blocking) and asynchronous modes.

Modern implementations such as gRPC support streaming and bi-directional communication. In the example below, we define a gRPC service called Calculator with two RPC methods, Calculate, which takes a Numbers message and returns a Result message, and CalculateStream, which sends a stream of Result messages in response.

// protobuf
service Calculator {
  rpc Calculate(Numbers) returns (Result);
  rpc CalculateStream(Numbers) returns (stream Result);
}

Modern Integrations: The Rise Of Web Services And SOA

The late 1990s and early 2000s saw the emergence of Web Services and Service-Oriented Architecture (SOA). SOAP (Simple Object Access Protocol) became the standard for enterprise integration, introducing a more structured approach to system communication.

<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header>
  </soap:Header>
  <soap:Body>
    <m:GetStockPrice xmlns:m="http://www.example.org/stock">
      <m:StockName>IBM</m:StockName>
    </m:GetStockPrice>
  </soap:Body>
</soap:Envelope>

While SOAP provided robust enterprise features, its complexity and verbosity led to the development of simpler alternatives, especially the REST APIs that dominate Web services communication today.

But REST is not alone. Let’s have a look at some modern integration patterns.

RESTful APIs

REST (Representational State Transfer) has become the de facto standard for Web APIs, providing a simple, stateless approach to manipulating resources. Its simplicity and HTTP-based nature make it ideal for web applications.

REST was first defined by Roy Fielding in 2000 as an architectural style built on top of the Web’s standard protocols. Its constraints align perfectly with the goals of the modern Web, such as performance, scalability, reliability, and visibility: client and server separated by an interface and loosely coupled, stateless communication, and cacheable responses.

In modern applications, the most common implementations of RESTful APIs use the JSON format to encode messages for requests and responses.

// Request
async function fetchUserData() {
  const response = await fetch('https://api.example.com/users/123');
  const userData = await response.json();
  return userData;
}

// Response
{
  "id": "123",
  "name": "John Doe",
  "_links": {
    "self": { "href": "/users/123" },
    "orders": { "href": "/users/123/orders" },
    "preferences": { "href": "/users/123/preferences" }
  }
}

GraphQL

GraphQL emerged from Facebook’s internal development needs in 2012 before being open-sourced in 2015. Born out of the challenges of building complex mobile applications, it addressed limitations in traditional REST APIs, particularly the issues of over-fetching and under-fetching data.

At its core, GraphQL is a query language and runtime that provides a type system and declarative data fetching, allowing the client to specify exactly what it wants to fetch from the server.

// graphql
type User {
  id: ID!
  name: String!
  email: String!
  posts: [Post!]!
}

type Post {
  id: ID!
  title: String!
  content: String!
  author: User!
  publishDate: String!
}

query GetUserWithPosts {
  user(id: "123") {
    name
    posts(last: 3) {
      title
      publishDate
    }
  }
}

Often used to build complex UIs with nested data structures, mobile applications, or microservices architectures, it has proven effective at handling complex data requirements at scale and offers a growing ecosystem of tools.

Webhooks

Modern applications often require real-time updates. For example, e-commerce apps need to update inventory levels when a purchase is made, or content management apps need to refresh cached content when a document is edited. Traditional request-response models can struggle to meet these demands because they rely on clients polling servers for updates, which is inefficient and resource-intensive.

Webhooks and event-driven architectures address these needs more effectively. Webhooks let servers send real-time notifications to clients or other systems when specific events happen. This reduces the need for continuous polling. Event-driven architectures go further by decoupling application components. Services can publish and subscribe to events asynchronously, and this makes the system more scalable, responsive, and simpler.

import fastify from 'fastify';

const server = fastify();
server.post('/webhook', async (request, reply) => {
  const event = request.body;

  if (event.type === 'content.published') {
    await refreshCache();
  }

  return reply.code(200).send();
});

await server.listen({ port: 3000 });

This is a simple Node.js function that uses Fastify to set up a web server. It responds to the endpoint /webhook, checks the type field of the JSON request, and refreshes a cache if the event is of type content.published.

With all this background information and technical knowledge, it’s easier to picture the current state of web application development, where a single, monolithic app is no longer the answer to business needs, but a new paradigm has emerged: Composable Architecture.

Composable Architecture And Headless CMSs

This evolution has led us to the concept of composable architecture, where applications are built by combining specialized services. This is where headless CMS solutions have a clear advantage, serving as the perfect example of how modern integration patterns come together.

Headless CMS platforms separate content management from content presentation, allowing you to build specialized frontends relying on a fully-featured content backend. This decoupling facilitates content reuse, independent scaling, and the flexibility to use a dedicated technology or service for each part of the system.

Take Storyblok as an example. Storyblok is a headless CMS designed to help developers build flexible, scalable, and composable applications. Content is exposed via REST or GraphQL APIs, and a long list of events can trigger webhooks. Editors are happy with a great Visual Editor, where they can see changes in real time, and many integrations are available out of the box via a marketplace.

Imagine this ContentDeliveryService in your app, where you can interact with Storyblok’s REST API using the open source JS Client:

import StoryblokClient from "storyblok-js-client";

class ContentDeliveryService {
  constructor(private storyblok: StoryblokClient) {}

  async getPageContent(slug: string) {
    const { data } = await this.storyblok.get(`cdn/stories/${slug}`, {
      version: 'published',
      resolve_relations: 'featured-products.products'
    });

    return data.story;
  }

  async getRelatedContent(tags: string[]) {
    const { data } = await this.storyblok.get('cdn/stories', {
      version: 'published',
      with_tag: tags.join(',')
    });

    return data.stories;
  }
}

The last piece of the puzzle is a real example of integration.

Again, many are already available in the Storyblok marketplace, and you can easily control them from the dashboard. However, to fully leverage the Composable Architecture, we can use the most powerful tool in the developer’s hand: code.

Let’s imagine a modern e-commerce platform that uses Storyblok as its content hub, Shopify for inventory and orders, Algolia for product search, and Stripe for payments.

Once each account is set up and we have our access tokens, we could quickly build a front-end page for our store. This isn’t production-ready code, but just to get a quick idea, let’s use React to build the page for a single product that integrates our services.

First, we should initialize our clients:

import StoryblokClient from "storyblok-js-client";
import { algoliasearch } from "algoliasearch";
import Client from "shopify-buy";


const storyblok = new StoryblokClient({
  accessToken: "your_storyblok_token",
});
const algoliaClient = algoliasearch(
  "your_algolia_app_id",
  "your_algolia_api_key",
);
const shopifyClient = Client.buildClient({
  domain: "your-shopify-store.myshopify.com",
  storefrontAccessToken: "your_storefront_access_token",
});

Given that we created a blok in Storyblok that holds product information such as the product_id, we could write a component that takes the productSlug, fetches the product content from Storyblok, the inventory data from Shopify, and some related products from the Algolia index:

async function fetchProduct() {
  // get the product content from Storyblok
  const { data } = await storyblok.get(`cdn/stories/${productSlug}`);

  // fetch inventory from Shopify
  const shopifyInventory = await shopifyClient.product.fetch(
    data.story.content.product_id
  );

  // fetch related products from the Algolia index (v5 client API)
  const { hits } = await algoliaClient.searchSingleIndex({
    indexName: "products",
    searchParams: { filters: `category:${data.story.content.category}` },
  });

  return { story: data.story, inventory: shopifyInventory, related: hits };
}

We could then set a simple component state:

const [productData, setProductData] = useState(null);
const [inventory, setInventory] = useState(null);
const [relatedProducts, setRelatedProducts] = useState([]);

useEffect(() =>
  // ...
  // combine fetchProduct() with setState to update the state
  // ...

  fetchProduct();
}, [productSlug]);

And return a template with all our data:

<h1>{productData.content.title}</h1>
<p>{productData.content.description}</p>
<h2>Price: ${inventory.variants[0].price}</h2>
<h3>Related Products</h3>
<ul>
  {relatedProducts.map((product) => (
    <li key={product.objectID}>{product.name}</li>
  ))}
</ul>

We could then use an event-driven approach and create a server that listens to our shop events and processes the checkout with Stripe (credits to Manuel Spigolon for this tutorial):

const stripe = require('stripe')

module.exports = async function plugin (app, opts) {
  const stripeClient = stripe(app.config.STRIPE_PRIVATE_KEY)

  app.post('/create-checkout-session', async (request, reply) => {
    const session = await stripeClient.checkout.sessions.create({
      line_items: [...], // from request.body
      mode: 'payment',
      success_url: "https://your-site.com/success",
      cancel_url: "https://your-site.com/cancel",
    })

    return reply.redirect(303, session.url)
  })
  // ...
}

And with this approach, each service is independent of the others, which helps us achieve our business goals (performance, scalability, flexibility) with a good developer experience and a smaller and simpler application that’s easier to maintain.

Conclusion

The integration between headless CMSs and modern web services represents the current and future state of high-performance web applications. By using specialized, decoupled services, developers can focus on business logic and user experience. A composable ecosystem is not only modular but also resilient to the evolving needs of the modern enterprise.

These integrations highlight the importance of mastering API-driven architectures and understanding how different tools can harmoniously fit into a larger tech stack.

In today’s digital landscape, success lies in choosing tools that offer flexibility and efficiency, adapt to evolving demands, and create applications that are future-proof against the challenges of tomorrow.

If you want to dive deeper into the integrations you can build with Storyblok and other services, check out Storyblok’s integrations page. You can also take your projects further by creating your own plugins with Storyblok’s plugin development resources.