Sunday, July 5, 2020

How To Make Performance Visible With GitLab CI And Hoodoo Of GitLab Artifacts

It’s not enough to optimize an application. You need to keep performance from degrading, and the first step is to make performance changes visible.

Performance degradation is a problem we face on a daily basis. We could put effort into making the application blazing fast, but we soon end up where we started. It happens because of new features being added, and because we sometimes don’t give a second thought to the packages we constantly add and update, or to the complexity of our code. Each of these is generally a small thing, but it’s still all about the small things.
We can’t afford to have a slow app. Performance is a competitive advantage that can bring and retain customers. We also can’t afford to regularly spend time optimizing apps all over again: it’s costly and complex. And that means that, despite all of the benefits of performance from a business perspective, it’s hardly profitable. As a first step in coming up with a solution for any problem, we need to make the problem visible. This article will help you with exactly that.
Note: If you have a basic understanding of Node.js, a vague idea about how your CI/CD works, and care about the performance of the app or business advantages it can bring, then we are good to go.

How To Create A Performance Budget For A Project

The first questions we should ask ourselves are:
“What is the performant project?”

“Which metrics should I use?”

“Which values of these metrics are acceptable?”
The metrics selection is outside of the scope of this article and depends highly on the project context, but I recommend that you start by reading User-centric Performance Metrics by Philip Walton.
From my perspective, it’s a good idea to use the size of the library in kilobytes as a metric for the npm package. Why? Well, it’s because if other people are including your code in their projects, they would perhaps want to minimize the impact of your code on their application’s final size.
For the site, I would consider Time To First Byte (TTFB) as a metric. This metric shows how much time it takes for the server to respond with something. This metric is important, but quite vague because it can include anything — starting from server rendering time and ending up with latency problems. So it’s nice to use it in conjunction with Server Timing or OpenTracing to find out what it exactly consists of.
You should also consider such metrics as Time to Interactive (TTI) and First Meaningful Paint (the latter will soon be replaced with Largest Contentful Paint (LCP)). I think both of these are most important — from the perspective of perceived performance.
But bear in mind: metrics are always context-related, so please don’t just take this for granted. Think about what is important in your specific case.
The easiest way to define desired values for metrics is to use your competitors — or even yourself. Also, from time to time, tools such as Performance Budget Calculator may come in handy — just play around with it a little.

Use Competitors For Your Benefit

If you have ever had to run away from an angry bear, then you already know that you don’t need to be an Olympic champion in running to get out of this kind of trouble. You just need to be a little bit faster than the other guy.
So make a competitors list. If these are projects of the same type, then they usually consist of page types similar to each other. For example, for an internet shop, it may be a page with a product list, product details page, shopping cart, checkout, and so on.
  1. Measure the values of your selected metrics on each type of page for your competitors’ projects;
  2. Measure the same metrics on your project;
  3. For each metric, find the competitor value that is closest to yours but still better. Add 20% to it and set that as your next goal.
Why 20%? This is a magic number that supposedly means the difference will be noticeable to the naked eye. You can read more about this number in Denys Mishunov’s article “Why Perceived Performance Matters, Part 1: The Perception Of Time”.
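To make the rule concrete, here is a minimal sketch in JavaScript for a lower-is-better metric (the function name and the sample numbers are hypothetical):

```javascript
// Hypothetical helper: given your value for a lower-is-better metric (e.g.
// TTI in seconds) and your competitors' values, find the closest value that
// is still better than yours and add 20% to get your next goal.
function performanceBudget(yourValue, competitorValues) {
  const better = competitorValues.filter((value) => value < yourValue);
  if (better.length === 0) return yourValue; // you are already the best
  const closest = Math.max(...better); // the closest better value
  return closest * 1.2; // add the 20% "noticeable difference" margin
}

// Example: your TTI is 5s; competitors measure 3.2s, 4.1s and 6s.
console.log(performanceBudget(5, [3.2, 4.1, 6])); // ≈ 4.92
```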

A Fight With A Shadow

Do you have a unique project? Don’t have any competitors? Or you are already better than any of them in all possible senses? It’s not an issue. You can always compete with the only worthy opponent, i.e. yourself. Measure each performance metric of your project on each type of page and then make them better by the same 20%.

Synthetic Tests

There are two ways of measuring performance:
  • Synthetic — data is collected in a controlled environment;
  • RUM (Real User Measurements) — data is collected from real users in production.
In this article, we will use synthetic tests and assume that our project uses GitLab with its built-in CI for project deployment.

Library And Its Size As A Metric

Let’s assume that you’ve decided to develop a library and publish it to NPM. You want to keep it light — much lighter than competitors — so that it has less impact on the resulting project’s end size. This saves clients traffic — sometimes traffic which the client is paying for. It also allows the project to be loaded faster, which is pretty important given the growing mobile share and new markets with slow connection speeds and fragmented internet coverage.

Package For Measuring Library Size

To keep the size of the library as small as possible, we need to carefully watch how it changes over development time. But how can you do that? Well, we could use the Size Limit package created by Andrey Sitnik from Evil Martians.
Let’s install it.
npm i -D size-limit @size-limit/preset-small-lib
Then, add it to package.json.
"scripts": {
+ "size": "size-limit",
  "test": "jest && eslint ."
+ "size-limit": [
+   {
+     "path": "index.js"
+   }
+ ],
The "size-limit": [{},{},…] block contains a list of the files whose size we want to check. In our case, it’s just one single file: index.js.
The size npm script simply runs the size-limit package, which reads the size-limit configuration block mentioned above and checks the size of the files listed there. Let’s run it and see what happens:
npm run size
We can see the size of the file, but this size is not actually under control. Let’s fix that by adding limit to package.json:
"size-limit": [
+   "limit": "2 KB",
    "path": "index.js"
Now if we run the script it will be validated against the limit we set.
A screenshot of the terminal; the size of the file is less than the limit and is shown as green.
In the case that new development changes the file size to the point of exceeding the defined limit, the script will complete with non-zero code. This, aside from other things, means that it will stop the pipeline in the GitLab CI.
Now we can use a git hook to check the file size against the limit before every commit. We can even use the husky package to set this up in a nice and simple way.
Let’s install it.
npm i -D husky
Then, modify our package.json.
"size-limit": [
    "limit": "2 KB",
    "path": "index.js"
+  "husky": {
+    "hooks": {
+      "pre-commit": "npm run size"
+    }
+  },
Now, the npm run size command will be executed automatically before each commit, and if it ends with a non-zero code, the commit will never happen.
But there are many ways to skip hooks (intentionally or even by accident), so we shouldn’t rely on them too much.
Also, it’s important to note that we shouldn’t make this check blocking. Why? Because it’s okay for the size of the library to grow while you are adding new features. We need to make the changes visible, that’s all. This will help to avoid an accidental size increase caused by introducing a helper library that we don’t need. And, perhaps, it will give developers and product owners a reason to consider whether the feature being added is worth the size increase — or whether there are smaller alternative packages. Bundlephobia allows us to find an alternative for almost any NPM package.
So what should we do? Let’s show the change in the file size directly in the merge request! But you don’t push to master directly; you act like a grown-up developer, right?

Running Our Check On GitLab CI

Let’s add a GitLab artifact of the metrics type. An artifact is a file that will “live” after the pipeline operation is finished. This specific type of artifact allows us to show an additional widget in the merge request, displaying the change in the metric’s value between the artifact in master and the one in the feature branch. The format of the metrics artifact is the text-based Prometheus format. To GitLab, the values inside the artifact are just text: GitLab doesn’t understand what exactly has changed in the value — it just knows that the value is different. So, what exactly should we do?
  1. Define artifacts in the pipeline.
  2. Change the script so that it creates an artifact on the pipeline.
To create an artifact we need to change .gitlab-ci.yml this way:
image: node:latest

  - performance

  stage: performance
    - npm ci
    - npm run size
+  artifacts:
+    expire_in: 7 days
+    paths:
+      - metric.txt
+    reports:
+      metrics: metric.txt

Note that the artifact here will have the type reports:metrics.
Now let’s make Size Limit generate a report. To do so we need to change package.json:
"scripts": {
-  "size": "size-limit",
+  "size": "size-limit --json > size-limit.json",
  "test": "jest && eslint ."
size-limit with the --json key will output data in JSON format:
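The exact fields depend on the version of Size Limit, but the output is an array with one entry per checked file and looks roughly like this (the numbers here are made up):

```json
[
  {
    "name": "index.js",
    "passed": true,
    "size": 1450
  }
]
```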

And the redirection > size-limit.json will save the JSON into the file size-limit.json.
Now we need to create an artifact out of this. Format boils down to [metrics name][space][metrics value]. Let’s create the script generate-metric.js:
const report = require('./size-limit.json');
process.stdout.write(`size ${(report[0].size/1024).toFixed(1)}Kb`);
And add it to package.json:
"scripts": {
  "size": "size-limit --json > size-limit.json",
+  "postsize": "node generate-metric.js > metric.txt",
  "test": "jest && eslint ."
Because we have used the post prefix, the npm run size command will run the size script first, and then, automatically, execute the postsize script, which will result in the creation of the metric.txt file, our artifact.
As a result, when we merge this branch to master, change something, and create a new merge request, we will see the metrics widget:

In the widget that appears on the page we, first, see the name of the metric (size) followed by the value of the metric in the feature branch as well as the value in the master within the round brackets.
Now we can actually see how the size of the package changes and make a reasonable decision about whether or not we should merge it.
OK! So, we’ve figured out how to handle the trivial case. If you have multiple files, just separate the metrics with line breaks. As an alternative to Size Limit, you may consider bundlesize. If you are using webpack, you may get all the sizes you need by building with the --profile and --json flags:
webpack --profile --json > stats.json
If you are using next.js, you can use the @next/bundle-analyzer plugin. It’s up to you!
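Coming back to generate-metric.js: if your size-limit configuration lists several files, the script can be extended to emit one metric line per entry. A hypothetical sketch (the report shape mirrors what size-limit --json produces, but field names may vary between versions):

```javascript
// Turn a size-limit JSON report (an array with one entry per checked file)
// into one metric line per file. Metric names may only contain letters,
// digits and underscores, so the file name is sanitized first.
function toMetrics(entries) {
  return entries
    .map((entry) => {
      const name = entry.name.replace(/[^a-zA-Z0-9_]/g, '_');
      return `size_${name} ${(entry.size / 1024).toFixed(1)}Kb`;
    })
    .join('\n');
}

// Example report shape:
const report = [
  { name: 'index.js', size: 1536 },
  { name: 'helpers.js', size: 512 },
];
console.log(toMetrics(report));
// size_index_js 1.5Kb
// size_helpers_js 0.5Kb
```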
To recap the artifact options we used:
  • expire_in: 7 days — the artifact will exist for 7 days.
  • paths — the artifact will be saved as metric.txt in the root directory. If you skip this option, it wouldn’t be possible to download it.
  • reports: metrics: metric.txt — the artifact will have the type reports:metrics.
Tuesday, June 23, 2020

    How To Test A Design Concept For Effectiveness

    Most of us are reasonably comfortable with the idea of carrying out usability testing on a website or prototype. We don’t always get the opportunity, but most people accept that it is a good idea.
    However, when it comes to a design concept, opinion is more divided. Some designers feel it undermines their role, a view that seems to be somewhat backed up by the famous “Forty Shades of Blue” episode, where Google tested which one of forty shades of blue to use for link color.

    It is a position I can sympathize with, and testing certainly doesn’t tell us everything. For example, it cannot come up with the right solution; it can only judge a design that already exists. Neither is the kind of obsessive testing demonstrated by Google healthy for morale, or for most companies’ bottom lines.
    That said, in this post, I want to explore some of the advantages testing design concepts can provide to us as designers, and demonstrate that we can do it cheaply and without slowing down the delivery of the overall project.
    Let’s begin by asking why a designer might favor testing of their design concepts.

    Why You Should Embrace Testing Design Concepts

    Every designer has stories of being caught in revision hell. Endlessly tweaking colors and adjusting layout in the hopes of finally getting sign off from the client.
    It’s not always like that, but every project is a gamble. Will the client sign off immediately, or will you end up with a design concept named “Final-Version-21.sketch”? And that is the problem; you just do not know, which makes project planning and budgeting extremely difficult.

    Testing Makes the Design Process Predictable

    People tend to consider testing a design as a luxury that a project cannot afford. They see it as too time-consuming and expensive. However, in truth, it brings some much-needed predictability to a project that can, in many cases, make things quicker and cheaper.
    Yes, if everything goes smoothly with design sign-off, design testing can slow things down and cost a little money. But, that rarely happens. Usually, a design goes through at least a few rounds of iteration, and occasionally it has to be thrown out entirely.
    Rarely is this because the design is terrible. Instead, it is because stakeholders are unhappy with it.
    By contrast, testing creates a framework for deciding whether a design is right that is not based on personal preference. Some quick testing could approve a design without the need for further iteration or, at worst, would lead to some relatively minor amendments if the designer has done their job.
    In most cases, this proves faster and less expensive than an endless discussion over the direction. However, even when it does not, it is more predictable, which improves product planning.
    Testing also has another related advantage. It changes the basis upon which we assess the design.

    Testing Encourages the Right Focus and Avoids Conflict

    The design has a job to do. In most cases, it has to connect with a user emotionally, while also enabling them to use the website as efficiently as possible. Unfortunately, most design is not assessed on this basis.
    Instead, we often evaluate a design on the simple criteria of whether or not the client likes it. It is this conflict between the role of a design and how it is evaluated that causes disagreements.
    By carrying out testing on a design, you refocus the stakeholders on what matters because you build the test around those criteria instead.
    That has another advantage as well. It helps avoid a lot of the disagreement over the design direction. That is especially true when many people are inputting on the design.
    Because design is subjective, the more people who look at it, the more disagreement there will be. The way this is typically resolved is through a compromise that produces a design that pleases nobody and is often not fit for purpose.
    Testing provides an alternative to that. It leads to less conflict between stakeholders and also ensures the integrity of the design, which ultimately leads to a better product.

    Testing Improves Results

    By using testing to avoid design by committee and focus stakeholders on the right assessment criteria, it almost guarantees a better design in the end.
    However, there is another factor that ensures testing produces better design: the fact that we, as designers, are not infallible.
    Sometimes we misjudge the tone of the design or the mental model of the user. Sometimes we fail to spot how an image undermines the call to action or that the font is too small for an elderly audience. Testing helps us identify these issues early, while they are still easy to fix. Updating a mockup in Sketch or Figma is a lot easier than on a working website.
    So hopefully now you see that design testing is a good idea for all parties concerned. The next question then becomes: how do we carry out design testing?

    How to Implement Design Testing

    Before you can test how well a design concept is working, you first need to be clear about what you are testing. Earlier I said that a design had two jobs. It had to connect with users emotionally and enable people to use the site as efficiently as possible.
    With that in mind, we want to test two things:
    • The brand and personality of the design, which is what dictates whether a design connects with the user emotionally.
    • The usability and visual hierarchy, which enables people to use the site more efficiently.
    It is important to note as well that for the sake of this article, I am presuming all we have is a static mockup of the design, with no interactivity.
    So, let’s start by looking at how we test brand and personality.

    Test Brand and Personality

    Before somebody is willing to act on a website, they have to trust that website. Users have to form a positive first impression.
    In a study published in the journal Behaviour &amp; Information Technology, researchers found that the brain makes decisions in just a 20th of a second of viewing a webpage. What is more, these decisions have a lasting impact.

    In that length of time, the user is judging the website purely on aesthetics, and so we need to ensure those aesthetics communicate the right things.
    We have three ways we can test this, but let’s begin with my personal favorite.
    Semantic Differential Survey
    A semantic differential survey is a fancy name for a simple idea. Before you begin designing, first agree on a list of keywords that you want the design to signal to the end-user. These might be terms like trustworthy, fun or approachable.
    Once you have created the design, you can now test whether it communicates these impressions in the user by running a semantic differential survey.
    Simply show the user the design and ask them to rate the design against each of your keywords.

    The great thing is that if the design rates well against all of the agreed words, not only do you know it is doing its job, it is also hard for stakeholders to reject the design because they don’t like some aspect of it.
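    As a rough illustration of how such results can be summarized, here is a hypothetical sketch that averages each keyword’s ratings across participants (the keywords and the 1–7 scale are example choices):

```javascript
// Average semantic differential ratings per keyword across participants.
function averageRatings(responses) {
  const totals = {};
  const counts = {};
  for (const response of responses) {
    for (const [keyword, score] of Object.entries(response)) {
      totals[keyword] = (totals[keyword] || 0) + score;
      counts[keyword] = (counts[keyword] || 0) + 1;
    }
  }
  const averages = {};
  for (const keyword of Object.keys(totals)) {
    averages[keyword] = totals[keyword] / counts[keyword];
  }
  return averages;
}

// Two example participants rating three agreed keywords on a 1–7 scale:
const responses = [
  { trustworthy: 6, fun: 4, approachable: 5 },
  { trustworthy: 5, fun: 3, approachable: 6 },
];
console.log(averageRatings(responses));
// { trustworthy: 5.5, fun: 3.5, approachable: 5.5 }
```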
    You can use this method to ascertain the most effective approach from multiple designs. However, there is a much simpler test you can also adopt when you have more than one design concept.
    Preference Tests
    A preference test is what it sounds like. You simply show several design concepts to users and ask them to select which approach they prefer.
    However, instead of just asking users to select which design they like most, ask them to select a design based on your keywords. You can ask users to select which design they feel best conveys the keywords you chose.
    You can also apply the same principle of comparison to your competition.
    Competition Testing
    You can run precisely the same kind of preference test as above, but instead, compare your design concept against competitors' websites. That will help you understand whether your design does a better job of communicating the desired keywords compared to the competition.

    The advantage of both types of preference testing is that it discourages stakeholders from adopting a pick-and-mix approach to design. In other words, it encourages them to compare designs in their entirety, rather than selecting different design elements from the competition or different versions, and asking you to combine them into a Frankenstein approach.
    By combining both semantic differential surveys and preference testing, you can build up a clear picture of whether a design’s aesthetics are communicating the right impression. However, we still need to ensure it is usable and that people can find the information or features they need.

    Test Usability and Visual Hierarchy

    A website can look great and give the user the right feel, but if it is hard to use it will have still failed to do its job.
    Once you have a fully built website or even a prototype, testing for usability is easy: combine A/B testing (quantitative) with usability testing (qualitative).

    However, when all you have is a static mockup of the design, it can appear harder to test. Fortunately, that impression is incorrect. What is more, it is worth testing at this early stage, because things will be much easier to fix.
    We have two tests we can do to ascertain usability. The first focuses on navigation and the second on visual hierarchy.
    First-Click Tests
    An influential study into usability, by Bob Bailey and Cari Wolfson, demonstrated the importance of ensuring that the user makes an excellent first choice when navigating your website. They proved that if users got their first click right, they had an 87% chance of completing their task correctly; however, if they got it wrong that dropped to just 46%.
    Fortunately, we can test whether users will make the right first click using an imaginatively named “first click test”.
    In a first-click test, users are given a task (e.g. “Where would you click to contact the website owners?”) and are then shown the design concept.
    The user then clicks the place on the concept that they believe is correct, and the results are recorded. It is that simple.

    The advantage of running a first-click test from the designer’s perspective is that it can resolve disagreements about information architecture by demonstrating whether users understand labeling and the site’s overall structure.
    However, usability isn’t all about clicking. It is also essential that users spot critical content and calls to action. To test for that, you need a 5-second test.
    5-Second Tests
    Research seems to indicate that, on average, you have about 8 seconds to grab a user’s attention and that many users leave a website within 10 to 20 seconds. That means our interfaces have to present the information we want users to see in the most obvious way possible. Put another way: we need to distinguish between the most important and less important information.
    Testing this kind of visual hierarchy can be achieved using a 5-second test.
    Usability Hub describes a five-second test in this way:
    “A five-second test is run by showing an image to a participant for just five seconds, after which the participant answers questions based on their memory and impression of the design.”
    It is important to note not only whether users remembered seeing critical screen elements, but also how quickly they recalled those elements. If users mention less essential elements first, this might indicate they have too much prominence.
    The great thing about a 5-second test is that it can address clients’ concerns that a user might overlook an interface element. Hopefully, that will reduce the number of “make my logo bigger” requests you receive.
    As you can see, testing can help both improve your designs and make design sign off less painful. However, it may be that you have concerns about implementing these tests. Fortunately, it is more straightforward than you think.

    Who and How to Test?

    The good news is that there are some great tools out there to help you run the tests I have outlined in this post. In fact, Usability Hub offers all five tests we have covered, and more.

    You simply create your test and then share the website address they give you with users.
    Of course, finding those users can be challenging, so let’s talk about that.
    When it comes to testing usability, we do not need many users. The Nielsen Norman Group suggests you only need to test with five people because beyond that you see diminishing returns.

    These users can be quickly recruited either from your existing customer base or via friends and family. However, if you want to be a bit pickier about your demographics, services like Usability Hub will recruit participants for as little as a dollar per person.
    Testing aesthetics is trickier because as we have already established design is subjective. That means we need more people to remove any statistical anomalies.
    Once again, the Nielsen Norman Group suggests a number. They say that when you want statistically significant results, you should look for at least 20 people.
    It is also worth noting that in the case of aesthetics, you should test with demographically accurate individuals, something that your testing platform should be able to help you recruit.
    Although that will cost a small amount of money, it will be insignificant compared to the person-hours that would go into debating the best design approach.
    There is also often a concern that it will take a long time. However, in my experience, you can typically get 20 responses in an hour or less. When was the last time you got design approval in under an hour?

    Worth a Try

    Testing a design concept will not solve all your designer woes. However, it will lead to better designs and has the potential to help with the management of stakeholders significantly. And when you consider the minimal investment in making it happen, it makes little sense not to try it on at least one project.

    Thursday, May 21, 2020

    Consuming REST APIs In React With Fetch And Axios

    Consuming REST APIs in a React application can be done in various ways, but in this tutorial, we will be discussing how we can consume REST APIs using two of the most popular methods: Axios (a promise-based HTTP client) and the Fetch API (a browser built-in web API). I will discuss and implement each of these methods in detail and shed light on some of the cool features each of them has to offer.
    APIs are what we can use to supercharge our React applications with data. There are certain operations that can’t be done on the client-side, so these operations are implemented on the server-side. We can then use the APIs to consume the data on the client-side.
    APIs consist of a set of data that is often in JSON format with specified endpoints. When we access data from an API, we want to access specific endpoints within that API framework. We can also say that an API is a contractual agreement between two services over the shape of request and response; the code is just a byproduct. It also contains the terms of this data exchange.
    In React, there are various ways we can consume REST APIs in our applications, these ways include using the JavaScript inbuilt fetch() method and Axios which is a promise-based HTTP client for the browser and Node.js.
    Note: A good knowledge of ReactJS, React Hooks, JavaScript and CSS will come in handy as you work your way throughout this tutorial.
    Let’s get started with learning more about the REST API.

    What Is A REST API

    A REST API is an API that is structured in accordance with the REST architectural style. REST stands for “Representational State Transfer”. It consists of various rules that developers follow when creating APIs.

    The Benefits Of REST APIs

    1. Very easy to learn and understand;
    2. It provides developers with the ability to organize complicated applications into simple resources;
    3. It is easy for external clients to build on your REST API without any complications;
    4. It is very easy to scale;
    5. A REST API is not language or platform-specific, but can be consumed with any language or run on any platform.

    An Example Of A REST API Response

    The way a REST API is structured depends on the product it’s been made for — but the rules of REST must be followed.
    The sample response below is from the Github Open API. We’ll be using this API to build a React app later on in this tutorial.

    "login": "bktivist123",
    "id": 26572907,
    "node_id": "MDQ6VXNlcjI2NTcyOTA3",
    "avatar_url": "",
    "gravatar_id": "",
    "url": "",
    "html_url": "",
    "followers_url": "",
    "following_url": "{/other_user}",
    "gists_url": "{/gist_id}",
    "starred_url": "{/owner}{/repo}",
    "subscriptions_url": "",
    "organizations_url": "",
    "repos_url": "",
    "events_url": "{/privacy}",
    "received_events_url": "",
    "type": "User",
    "site_admin": false,
    "name": "Shesh",
    "company": null,
    "blog": "",
    "location": "Lagos, NN",
    "email": null,
    "hireable": true,
    "bio": "☕ Software Engineer | | Developer Advocate🥑|| ❤ Everything JavaScript",
    "public_repos": 68,
    "public_gists": 1,
    "followers": 130,
    "following": 246,
    "created_at": "2017-03-21T12:55:48Z",
    "updated_at": "2020-05-11T13:02:57Z"

    The response above is from the GitHub REST API when I make a GET request to the users endpoint. It returns all the stored data about a user called hacktivist123. With this response, we can decide to render it whichever way we like in our React app.

    Consuming APIs Using The Fetch API

    The fetch() API is an inbuilt JavaScript method for getting resources from a server or an API endpoint. It’s similar to XMLHttpRequest, but the fetch API provides a more powerful and flexible feature set.
    It defines concepts such as CORS and the HTTP Origin header semantics, supplanting their separate definitions elsewhere.
    The fetch() API method always takes in one compulsory argument: the path or URL to the resource you want to fetch. It returns a promise that resolves to the response of the request, whether the request is successful or not. You can also optionally pass in an init options object as the second argument.
    Once a response has been fetched, there are several inbuilt methods available to define what the body content is and how it should be handled.
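    For instance, a JSON body is read with response.json(). A small self-contained sketch — it builds a Response object directly (available in modern browsers and in Node.js 18+) rather than making a live request:

```javascript
// A Response object standing in for the result of a fetch() call.
const response = new Response('{"greeting": "hello"}', {
  headers: { 'Content-Type': 'application/json' },
});

// json() returns a promise that resolves with the parsed body.
response.json().then((data) => {
  console.log(data.greeting); // "hello"
});

// Other body readers include response.text(), response.blob(),
// response.arrayBuffer() and response.formData().
```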

    The Difference Between The Fetch API And jQuery Ajax

    The Fetch API is different from jQuery Ajax in three main ways, which are:
    1. The promise returned from a fetch() request will not reject when there’s an HTTP error, no matter the nature of the response status. Instead, it will resolve normally; if the response status code is a 400- or 500-type code, it will set the ok property of the response accordingly. A request will only be rejected because of network failure or if something is preventing the request from completing;
    2. fetch() will not allow the use of cross-site cookies, i.e. you cannot carry out a cross-site session using fetch();
    3. fetch() will also not send cookies by default unless you set the credentials in the init option.
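    Because of the first point, HTTP errors are usually detected by checking the ok property of the response explicitly. A sketch, with a constructed Response object (Node.js 18+ or a browser) standing in for a failed request:

```javascript
// A Response with a 404 status, as fetch() would resolve for a missing page.
const response = new Response('Not Found', { status: 404 });

console.log(response.ok);     // false — the status is a 400-type code
console.log(response.status); // 404

if (!response.ok) {
  // This is where you would handle the HTTP error yourself.
  console.log(`HTTP error: ${response.status}`);
}
```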

    Parameters For The Fetch API

    • resource
      This is the path to the resource you want to fetch; it can either be a direct link to the resource path or a request object.
    • init
      This is an object containing any custom settings or credentials you’d like to provide for your fetch() request. The following are a few of the possible options that can be contained in the init object:
      • method
        This is for specifying the HTTP request method e.g GET, POST, etc.
      • headers
        This is for specifying any headers you would like to add to your request, usually contained in an object or an object literal.
      • body
        This is for specifying a body that you want to add to your request: this can be a Blob, BufferSource, FormData, URLSearchParams, USVString, or ReadableStream object
      • mode
        This is for specifying the mode you want to use for the request, e.g., cors, no-cors, or same-origin.
      • credentials
        This is for specifying the request credentials you want to use for the request; this option must be provided if you want cookies to be sent automatically for the current domain.
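    Putting several of these options together, a hypothetical POST request might look like the sketch below (the URL and payload are placeholders; the call is wrapped in a function so that nothing fires until it is invoked):

```javascript
// Hypothetical helper that POSTs a JSON payload to an example endpoint.
function createUser(name) {
  return fetch('https://example.com/api/users', {
    method: 'POST', // the HTTP request method
    headers: { 'Content-Type': 'application/json' }, // request headers
    body: JSON.stringify({ name }), // the request body
    mode: 'cors', // allow a cross-origin request
    credentials: 'same-origin', // only send cookies for the same origin
  }).then((response) => response.json());
}

// Usage (not executed here): createUser('hacktivist123').then(console.log);
```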

    Basic Syntax for Using the Fetch() API

    A basic fetch request is really simple to write. Take a look at the following code:
    const apiUrl = ''; // the URL of the resource you want to fetch
    fetch(apiUrl)
      .then(response => response.json())
      .then(data => console.log(data));
    In the code above, we are fetching data from a URL that returns data as JSON and then printing it to the console. The simplest form of fetch() takes just one argument, the path to the resource you want to fetch, and returns a promise containing the response from the fetch request. This response is an object.
    The response is just a regular HTTP response and not the actual JSON. In order to get the JSON body content from the response, we have to convert the response to actual JSON using the json() method on the response.
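One detail worth noting is that json() itself returns a promise, which is why it appears inside its own .then() step. A mock response object (a stand-in, not a real Response) makes this visible without touching the network:

```javascript
// A mock object standing in for a real Response; its json() returns a
// promise, just like the real method does.
const mockResponse = {
  json: () => Promise.resolve({ message: 'hello' }),
};

const bodyPromise = mockResponse.json();
console.log(bodyPromise instanceof Promise); // true
bodyPromise.then((data) => console.log(data.message)); // logs: hello
```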

    Using Fetch API In React Apps

    Using the Fetch API in React apps is the same as using the Fetch API in plain JavaScript; there is no change in syntax. The only question is deciding where to make the fetch request in our React app. Most fetch requests, or HTTP requests of any sort, are usually made in a React component.
    This request can either be made inside a Lifecycle Method if your component is a Class Component or inside a useEffect() React Hook if your component is a Functional Component.
    For example, in the code below, we will make a fetch request inside a class component, which means we’ll have to do it inside a lifecycle method. In this particular case, our fetch request is made inside the componentDidMount lifecycle method because we want to make the request just after our React component has mounted.
    import React from 'react';

    class MyComponent extends React.Component {
      componentDidMount() {
        const apiUrl = ''; // the URL of the API you want to fetch from
        fetch(apiUrl)
          .then((response) => response.json())
          .then((data) => console.log('This is your data', data));
      }
      render() {
        return <h1>my Component has Mounted, Check the browser console</h1>;
      }
    }

    export default MyComponent;
    In the code above, we are creating a very simple class component that makes a fetch request that logs the final data from the fetch request we have made to the API URL into the browser console after the React component has finished mounting.
    The fetch() method takes in the path to the resource we want to fetch, which is assigned to a variable called apiUrl. After the fetch request has completed, it returns a promise that contains a response object. Then, we extract the JSON body content from the response using the json() method; finally, we log the final data from the promise into the console.

    Let’s Consume A REST API With Fetch Method

    In this section, we will be building a simple React application that consumes an external API using the Fetch method.
    The simple application will display all the repositories, and their descriptions, that belong to a particular user. For this tutorial, I’ll be using my GitHub username; you can also use yours if you wish.
    The first thing we need to do is to generate our React app by using create-react-app:
    npx create-react-app my-repos
    The command above will bootstrap a new React app for us. As soon as our new app has been created, all that’s left to do is to run the following command and begin coding:
    npm start
    If our React app was created properly, we should see the default create-react-app start page in the browser when we navigate to localhost:3000 after running the above command.
    In your src folder, create a new folder called components. This folder will hold all of our React components. In the new folder, create two files titled List.js and withListLoading.js. These two files will hold the components that will be needed in our app.
    The List.js file will handle the display of our Repositories in the form of a list, and the withListLoading.js file will hold a higher-order component that will be displayed when the Fetch request we will be making is still ongoing.
    In the List.js file we created inside the components folder, let’s paste in the following code:
    import React from 'react';

    const List = (props) => {
      const { repos } = props;
      if (!repos || repos.length === 0) return <p>No repos, sorry</p>;
      return (
        <ul>
          <h2 className='list-head'>Available Public Repositories</h2>
          {repos.map((repo) => {
            return (
              <li key={repo.id} className='list'>
                <span className='repo-text'>{repo.name} </span>
                <span className='repo-description'>{repo.description}</span>
              </li>
            );
          })}
        </ul>
      );
    };

    export default List;
    The code above is a basic React list component that displays the data (in this case, the repository names and their descriptions) in a list.
    Now, Let me explain the code bit by bit.
    const { repos } = props;
    Here, we are destructuring the repos prop out of the props object passed to the component.
    if (!repos || repos.length === 0) return <p>No repos, sorry</p>;
    Here, all we are doing is rendering a fallback message when the repos prop is missing or the list of repositories we get from the request is empty.
    return (
      <ul>
        <h2 className='list-head'>Available Public Repositories</h2>
        {repos.map((repo) => {
          return (
            <li key={repo.id} className='list'>
              <span className='repo-text'>{repo.name} </span>
              <span className='repo-description'>{repo.description}</span>
            </li>
          );
        })}
      </ul>
    );
    Here, we are mapping through each of the repositories provided by the API request, extracting each repository’s name and description, and displaying them in a list.
    export default List;
    Here we are exporting our List component so that we can use it somewhere else.
    In the withListLoading.js file we created inside the components folder, let’s paste in the following code:
    import React from 'react';

    function WithListLoading(Component) {
      return function WithLoadingComponent({ isLoading, ...props }) {
        if (!isLoading) return <Component {...props} />;
        return (
          <p style={{ textAlign: 'center', fontSize: '30px' }}>
            Hold on, fetching data may take some time :)
          </p>
        );
      };
    }

    export default WithListLoading;
    The code above is a higher-order React component that takes in another component and adds some logic around it. In our case, the higher-order component checks whether the current isLoading state it receives is true or false. If the current isLoading state is true, it displays the message Hold on, fetching data may take some time :). As soon as the isLoading state changes to false, it renders the component it took in, which in our case is the List component.
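The pattern itself is not React-specific. Stripped of JSX, the same higher-order idea can be sketched with plain functions (withLoading and renderList below are made-up names for illustration):

```javascript
// A plain-JavaScript sketch of the higher-order pattern: take a render
// function and return a new one that short-circuits while loading.
function withLoading(render) {
  return function (isLoading, props) {
    if (!isLoading) return render(props);
    return 'Hold on, fetching data may take some time :)';
  };
}

const renderList = ({ repos }) => `Showing ${repos.length} repos`;
const guardedList = withLoading(renderList);

console.log(guardedList(true, { repos: [] }));         // the loading message
console.log(guardedList(false, { repos: [1, 2, 3] })); // Showing 3 repos
```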
    In your App.js file inside the src folder, let’s paste in the following code:
    import React, { useEffect, useState } from 'react';
    import './App.css';
    import List from './components/List';
    import withListLoading from './components/withListLoading';

    function App() {
      const ListLoading = withListLoading(List);
      const [appState, setAppState] = useState({
        loading: false,
        repos: null,
      });
      useEffect(() => {
        setAppState({ loading: true });
        const apiUrl = ``; // the GitHub API URL for your user's repositories
        fetch(apiUrl)
          .then((res) => res.json())
          .then((repos) => {
            setAppState({ loading: false, repos: repos });
          });
      }, [setAppState]);
      return (
        <div className='App'>
          <div className='container'>
            <h1>My Repositories</h1>
          </div>
          <div className='repo-container'>
            <ListLoading isLoading={appState.loading} repos={appState.repos} />
          </div>
          <div className='footer'>
            Built with{' '}
            <span role='img' aria-label='love'>
              ❤️
            </span>{' '}
            by Shedrack Akintayo
          </div>
        </div>
      );
    }

    export default App;
    Our App.js is a functional component that makes use of React Hooks for handling state and also side effects. If you’re not familiar with React Hooks, read my Getting Started with React Hooks Guide.
    Let me explain the code above bit by bit.
    import React, { useEffect, useState } from 'react';
    import './App.css';
    import List from './components/List';
    import withListLoading from './components/withListLoading';
    Here, we are importing all the external files we need and also the components we created in our components folder. We are also importing the React Hooks we need from React.
    const ListLoading = withListLoading(List);
    const [appState, setAppState] = useState({
      loading: false,
      repos: null,
    });
    Here, we are creating a new component called ListLoading and assigning to it our withListLoading higher-order component wrapped around our List component. We are then creating our state values loading and repos using the useState() React Hook.
    useEffect(() => {
      setAppState({ loading: true });
      const apiUrl = ``; // the GitHub API URL for your user's repositories
      fetch(apiUrl)
        .then((res) => res.json())
        .then((repos) => {
          setAppState({ loading: false, repos: repos });
        });
    }, [setAppState]);
    Here, we are initializing a useEffect() React Hook. In the useEffect() hook, we are setting our initial loading state to true; while this is true, our higher-order component will display a message. We are then creating a constant variable called apiUrl and assigning to it the API URL we’ll be getting the repositories data from.
    We are then making a basic fetch() request like we discussed above and then after the request is done we are setting the app loading state to false and populating the repos state with the data we got from the request.
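That loading-then-populate flow can be sketched outside React with a fake fetch. Here, loadRepos and fakeFetch are illustrative names, and setAppState is a stand-in that just records the latest state instead of re-rendering:

```javascript
// A sketch of the same flow with plain functions instead of React state.
let appState = { loading: false, repos: null };
function setAppState(next) {
  appState = next; // stand-in for React state: just remember the value
}

function loadRepos(fetchFn) {
  setAppState({ loading: true }); // the loading message would show now
  return fetchFn()
    .then((res) => res.json())
    .then((repos) => setAppState({ loading: false, repos: repos }));
}

// A fake fetch resolving with a json() method, standing in for the network:
const fakeFetch = () =>
  Promise.resolve({ json: () => Promise.resolve([{ name: 'demo-repo' }]) });

loadRepos(fakeFetch);
console.log(appState.loading); // true, until the fake request resolves
```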
    return (
      <div className='App'>
        <div className='container'>
          <h1>My Repositories</h1>
        </div>
        <div className='repo-container'>
          <ListLoading isLoading={appState.loading} repos={appState.repos} />
        </div>
      </div>
    );

    export default App;
    Here, we are basically just rendering the component we wrapped with our higher-order component and filling the isLoading prop and repos prop with their state values.
    Now, while the fetch request is still in progress, the browser shows the loading message, courtesy of our withListLoading higher-order component.

    Now, let’s style our project a little bit, in your App.css file, copy and paste this code.
    @import url(''); /* Google Fonts link for the 'Amiri' font used below */
    :root {
      --basic-color: #23cc71;
    }
    .App {
      box-sizing: border-box;
      display: flex;
      justify-content: center;
      align-items: center;
      flex-direction: column;
      font-family: 'Amiri', serif;
      overflow: hidden;
    }
    .container {
      display: flex;
      flex-direction: row;
    }
    .container h1 {
      font-size: 60px;
      text-align: center;
      color: var(--basic-color);
    }
    .repo-container {
      width: 50%;
      height: 700px;
      margin: 50px;
      box-shadow: 0 2px 10px rgba(0, 0, 0, 0.3);
      overflow: scroll;
    }
    @media screen and (max-width: 600px) {
      .repo-container {
        width: 100%;
        margin: 0;
        box-shadow: none;
      }
    }
    .repo-text {
      font-weight: 600;
    }
    .repo-description {
      font-weight: 600;
      font-style: bold;
      color: var(--basic-color);
    }
    .list-head {
      text-align: center;
      font-weight: 800;
      text-transform: uppercase;
    }
    .footer {
      font-size: 15px;
      font-weight: 600;
    }
    .list {
      list-style: circle;
    }
    In the code above, we are styling our app to make it more pleasing to the eye. We have assigned various class names to the elements in our App.js file, and we are using those class names to style our app.

    Now our app looks much better. 😊
    So that’s how we can use the Fetch API to consume a REST API. In the next section, we’ll discuss Axios and how we can use it to consume the same API in the same app.

    Consuming APIs With Axios

    Axios is an easy-to-use, promise-based HTTP client for the browser and Node.js. Since Axios is promise-based, we can take advantage of async and await for more readable asynchronous code. With Axios, we get the ability to intercept and cancel requests; it also has a built-in feature that provides client-side protection against cross-site request forgery.

    Features Of Axios

    • Request and response interception
    • Streamlined error handling
    • Protection against XSRF
    • Support for upload progress
    • Response timeout
    • The ability to cancel requests
    • Support for older browsers
    • Automatic JSON data transformation

    Making Requests With Axios

    Making HTTP requests with Axios is quite easy. The code below shows the basic way to make an HTTP request:
    // Make a GET request
    axios({
      method: 'get',
      url: '',
    });

    // Make a POST request
    axios({
      method: 'post',
      url: '/login',
      data: {
        firstName: 'ssss',
        lastName: 'aaaaa'
      }
    });
    The code above shows the basic ways we can make a GET and POST HTTP request with Axios.
    Axios also provides a set of shorthand methods for performing different HTTP requests. The methods are as follows:
    • axios.request(config)
    • axios.get(url[, config])
    • axios.delete(url[, config])
    • axios.head(url[, config])
    • axios.options(url[, config])
    •[, data[, config]])
    • axios.put(url[, data[, config]])
    • axios.patch(url[, data[, config]])
    For example, if we want to make a similar request like the example code above but with the shorthand methods we can do it like so:
    // Make a GET request with a shorthand method
    axios.get('');

    // Make a POST request with a shorthand method
    axios.post('/signup', {
      firstName: 'ssssss',
      lastName: 'aaaaaa'
    });
    In the code above, we are making the same requests as before, but this time with the shorthand methods. Axios provides flexibility and makes your HTTP requests even more readable.

    Making Multiple Requests With Axios

    Axios provides developers the ability to make and handle simultaneous HTTP requests using the axios.all() method. This method takes in an array of requests and returns a single promise object that resolves only when all of the requests in the array have resolved.
    For example, we can make multiple requests to the GitHub API using the axios.all() method like so:
    axios.all([
      axios.get(''), // first GitHub API URL
      axios.get(''), // second GitHub API URL
    ])
    .then(response => {
      console.log('Date created: ', response[0].data.created_at);
      console.log('Date created: ', response[1].data.created_at);
    });
    The code above makes the two GET requests in parallel; once both have resolved, it logs to the console the created_at field from each of the API responses.
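Since Axios may not be installed in every environment, the behaviour of axios.all() can be mirrored with Promise.all() and already-resolved promises. The created_at values below are made up, and pluckCreatedAt is a hypothetical helper:

```javascript
// Resolved promises standing in for axios.get() calls; each resolves with
// a response-like object carrying a data property.
const userRequest = Promise.resolve({ data: { created_at: '2011-01-25' } });
const repoRequest = Promise.resolve({ data: { created_at: '2015-06-10' } });

// Hypothetical helper that pulls created_at out of each response.
function pluckCreatedAt(responses) {
  return responses.map((response) => response.data.created_at);
}

// Promise.all resolves once both stand-in requests have resolved.
Promise.all([userRequest, repoRequest]).then((responses) => {
  console.log('Dates created: ', pluckCreatedAt(responses));
});
```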

    Let’s Consume A REST API With Axios Client

    In this section, all we’ll be doing is replacing the fetch() method with Axios in our existing React application. All we need to do is install Axios and then use it in our App.js file to make the HTTP request to the GitHub API.
    Now let’s install Axios in our React app by running either of the following:
    With NPM:
    npm install axios
    With Yarn:
    yarn add axios
    After installation is complete, we have to import Axios into App.js by adding the following line at the top of the file:
    import axios from 'axios';
    After adding that line, all we have to do is replace the body of our useEffect() with the following code:
    useEffect(() => {
      setAppState({ loading: true });
      const apiUrl = ''; // the GitHub API URL for your user's repositories
      axios.get(apiUrl).then((repos) => {
        const allRepos = repos.data;
        setAppState({ loading: false, repos: allRepos });
      });
    }, [setAppState]);
    You may have noticed that we have now replaced the fetch API with the Axios shorthand method axios.get to make a get request to the API.
    axios.get(apiUrl).then((repos) => {
      const allRepos = repos.data;
      setAppState({ loading: false, repos: allRepos });
    });
    In this block of code, we are making a GET request, which returns a promise containing the response; we then read the repository data from the response’s data property and assign it to a constant variable called allRepos. Finally, we set the current loading state to false and pass the data from the request to the repos state variable.
    If we did everything correctly, we should see our app still render the same way without any change.

    So this is how we can use Axios client to consume a REST API.

    Fetch vs Axios

    In this section, I will be listing out certain features and then discussing how well Fetch and Axios support them.
    1. Basic Syntax
      Both Fetch and Axios have very simple syntaxes for making requests. But Axios has an upper hand, because it automatically parses JSON responses; when using Axios we skip the step of converting the response to JSON, unlike with fetch(), where we still have to convert the response ourselves. Lastly, Axios’ shorthand methods make specific HTTP requests easier to write.
    2. Browser Compatibility
      One of the many reasons why developers prefer Axios over Fetch is that Axios is supported across a wider range of browsers and versions, unlike Fetch, which is only supported in Chrome 42+, Firefox 39+, Edge 14+, and Safari 10.1+.
    3. Handling Response Timeout
      Setting a timeout for responses is very easy to do in Axios by making use of the timeout option inside the request config. In Fetch, it is not that easy: Fetch provides a similar capability through the AbortController interface, but it takes more code to implement and can get confusing.
    4. Intercepting HTTP Requests
      Axios allows developers to intercept HTTP requests. HTTP interceptors are needed when we need to change HTTP requests from our application to the server. Interceptors give us the ability to do that without having to write extra code.
    5. Making Multiple Requests Simultaneously
      Axios allows us to make multiple HTTP requests with the use of the axios.all() method (I talked about this above). fetch() provides the same feature through Promise.all(): we can run multiple fetch() requests inside it.
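To make the timeout point concrete, here is a sketch of what a fetch() timeout via AbortController might look like. fetchWithTimeout is an illustrative helper, not a standard API, and no network request is made below; we only demonstrate the abort mechanics:

```javascript
// Illustrative helper: abort the fetch if it takes longer than ms.
function fetchWithTimeout(url, ms) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  return fetch(url, { signal: controller.signal }).finally(() =>
    clearTimeout(timer)
  );
}

// The abort mechanics on their own, without any network request:
const controller = new AbortController();
console.log(controller.signal.aborted); // false
controller.abort();
console.log(controller.signal.aborted); // true
```

With Axios, the equivalent is just passing { timeout: ms } in the request config, which is why the comparison favours Axios here.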


    Axios and fetch() are both great ways of consuming APIs, but I advise you to use fetch() when building relatively small applications and Axios when building large applications, for scalability reasons. I hope you enjoyed working through this tutorial. You can always read more on consuming REST APIs with either Fetch or Axios from the references below. If you have any questions, you can leave them in the comments section below and I’ll be happy to answer every single one.