How To Make Performance Visible With GitLab CI And Hoodoo Of GitLab Artifacts
It’s not enough to optimize an application once. You also need to keep performance from degrading, and the first step toward that is to make performance changes visible.
Performance degradation is a problem we face daily. We can put effort into making an application blazing fast, yet we soon end up where we started. It happens because new features keep being added, because we rarely give a second thought to the packages we constantly add and update, and because we don’t always consider the complexity of our code. Each change is a small thing, but performance is all about the small things.
We can’t afford to have a slow app. Performance is a competitive advantage that can bring and retain customers. Nor can we afford to regularly spend time optimizing the app all over again: it’s costly and complex, which means that despite all the business benefits of performance, repeated re-optimization is hardly profitable. The first step in solving any problem is to make it visible, and this article will help you do exactly that.
Note: If you have a basic understanding of Node.js, a vague idea of how your CI/CD works, and care about the performance of your app or the business advantages it can bring, then we are good to go.
How To Create A Performance Budget For A Project
The first questions we should ask ourselves are:
“What is a performant project?”
“Which metrics should I use?”
“Which values of these metrics are acceptable?”
Metric selection is outside the scope of this article and depends highly on the project context, but I recommend that you start by reading User-centric Performance Metrics by Philip Walton.
From my perspective, the size of the library in kilobytes is a good metric for an npm package. Why? Because if other people include your code in their projects, they will likely want to minimize its impact on their application’s final size.
For a site, I would consider Time To First Byte (TTFB) as a metric. It shows how long it takes the server to start responding. TTFB is important but quite vague, because it can include anything from server rendering time to latency problems. So it’s best used in conjunction with Server Timing or OpenTracing to find out what exactly it consists of.
You should also consider metrics such as Time to Interactive (TTI) and First Meaningful Paint (the latter will soon be replaced by Largest Contentful Paint (LCP)). From the perspective of perceived performance, I think both of these are the most important.
But bear in mind: metrics are always context-related, so please don’t just take this for granted. Think about what is important in your specific case.
The easiest way to define target values for your metrics is to use your competitors’ numbers, or even your own. Also, from time to time, tools such as the Performance Budget Calculator may come in handy; just play around with it a little.
If you ever happened to run away from an ecstatically overexcited bear, then you already know that you don’t need to be an Olympic running champion to get out of that trouble. You just need to be a little bit faster than the other guy.
So make a list of competitors. Projects of the same type usually consist of similar page types. For example, for an online shop, those may be a page with a product list, a product details page, a shopping cart, a checkout, and so on.
Measure the values of your selected metrics on each type of page in your competitors’ projects;
Measure the same metrics in your project;
For each metric, find the closest competitor value that is better than yours, add 20% to it, and set that as your next goal.
Do you have a unique project with no competitors? Or are you already better than all of them in every possible sense? It’s not an issue. You can always compete with the only worthy opponent: yourself. Measure each performance metric of your project on each type of page and then improve them by the same 20%.
Synthetic Tests
There are two ways of measuring performance:
Synthetic (in a controlled environment)
RUM (Real User Measurement): data collected from real users in production.
In this article, we will use synthetic tests and assume that our project uses GitLab with its built-in CI for project deployment.
Library And Its Size As A Metric
Let’s assume that you’ve decided to develop a library and publish it to npm. You want to keep it light, much lighter than its competitors, so it has less impact on the final size of the projects that use it. This saves clients’ traffic (sometimes traffic the client literally pays for) and allows the project to load faster, which is pretty important given the growing mobile share and new markets with slow connection speeds and fragmented internet coverage.
Package For Measuring Library Size
To keep the size of the library as small as possible, we need to carefully watch how it changes over development time. But how can we do that? We can use the Size Limit package created by Andrey Sitnik from Evil Martians.
Let’s install it:
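npm install --save-dev size-limit

Note that, depending on your size-limit version, you may also need one of its presets, such as @size-limit/preset-small-lib for libraries (check the size-limit documentation for which preset fits your case).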
The "size-limit":[{},{},…] block contains a list of the size of the files of which we want to check. In our case, it’s just one single file: index.js.
The npm script size simply runs the size-limit package, which reads the size-limit configuration block shown above and checks the size of the files listed there. Let’s run it and see what happens:
npm run size
We can see the size of the file, but this size is not actually under control yet. Let’s fix that by adding a limit to package.json:
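A sketch with an illustrative 2 KB limit (pick whatever value your budget dictates):

"size-limit": [
  {
    "limit": "2 KB",
    "path": "index.js"
  }
]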
Now if we run the script, it will be validated against the limit we set.
(Screenshot: the size of the file is less than the limit and is shown in green.)
If new development changes the file size to the point of exceeding the defined limit, the script will exit with a non-zero code. Among other things, this means that it will stop the pipeline in GitLab CI.
Now we can use a git hook to check the file size against the limit before every commit. We can even use the husky package to set it up in a nice and simple way.
Let’s install it:
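npm install --save-dev husky

And add a hook to package.json (a sketch assuming husky v4-style configuration; newer major versions of husky use a different setup):

"husky": {
  "hooks": {
    "pre-commit": "npm run size"
  }
}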
Now the npm run size command will be executed automatically before each commit, and if it ends with a non-zero code, the commit will never happen.
But there are many ways to skip hooks (intentionally or even by accident), so we shouldn’t rely on them too much.
Also, it’s important to note that we shouldn’t make this check blocking. Why? Because it’s okay for the size of the library to grow while you are adding new features. We need to make the changes visible, that’s all. This will help avoid an accidental size increase caused by introducing a helper library we don’t need. And, perhaps, it will give developers and product owners a reason to consider whether the feature being added is worth the size increase, or whether there are smaller alternative packages. Bundlephobia allows us to find an alternative for almost any npm package.
So what should we do? Let’s show the change in the file size directly in the merge request! But you don’t push to master directly; you act like a grown-up developer, right?
Running Our Check On GitLab CI
Let’s add a GitLab artifact of the metrics type. An artifact is a file that will “live” after the pipeline operation is finished. This specific type of artifact allows us to show an additional widget in the merge request, displaying the change in the value of a metric between the artifact in master and the one in the feature branch. The format of the metrics artifact is the Prometheus text format. To GitLab, the values inside the artifact are just text; GitLab doesn’t understand what exactly has changed in the value, it just knows that the value is different. So, what exactly should we do?
Define artifacts in the pipeline.
Change the script so that it creates an artifact on the pipeline.
To create an artifact we need to change .gitlab-ci.yml this way:
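A sketch of what .gitlab-ci.yml could look like, assuming a Node.js Docker image and a job and stage named at our discretion (adjust to your pipeline; the artifacts section uses GitLab’s metrics report):

image: node:latest

stages:
  - size

size:
  stage: size
  script:
    - npm ci
    - npm run size
  artifacts:
    expire_in: 7 days    # the artifact will exist for 7 days
    paths:
      - metric.txt       # saved in the root catalog; if you skip paths, the artifact can’t be downloaded
    reports:
      metrics: metric.txt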
Running size-limit with the --json flag will output data in JSON format, and the redirection > size-limit.json will save that JSON into the file size-limit.json.
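With the size script changed to size-limit --json > size-limit.json, the saved file could look something like this (an illustrative shape based on size-limit’s JSON reporter; the exact fields may vary between versions):

[
  {
    "name": "index.js",
    "passed": true,
    "size": 1337
  }
]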
Now we need to create an artifact out of this. The format boils down to [metric name][space][metric value]. Let’s create the script generate-metric.js:
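A minimal sketch of such a script, assuming the single-file configuration above (the metric name size is our own choice):

// generate-metric.js
const fs = require('fs');

// Read the JSON report produced by `size-limit --json`
const report = JSON.parse(fs.readFileSync('./size-limit.json', 'utf8'));

// Build a Prometheus-style line: [metric name][space][metric value]
const metric = `size ${report[0].size}`;

// Write the artifact that GitLab will pick up
fs.writeFileSync('./metric.txt', metric + '\n');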
Because we used the post prefix, the npm run size command will run the size script first and then, automatically, execute the postsize script, which results in the creation of metric.txt, our artifact.
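The relevant scripts section of package.json would then look like this (the pre/post prefix behavior is built into npm):

"scripts": {
  "size": "size-limit --json > size-limit.json",
  "postsize": "node generate-metric.js"
}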
As a result, when we merge this branch into master, then change something and create a new merge request, a widget appears on its page. In the widget we first see the name of the metric (size), followed by the value of the metric in the feature branch, as well as the value in master within the round brackets.
Now we can actually see how the size of the package changes and make a reasonable decision about whether we should merge it or not.
Summary
OK! So, we’ve figured out how to handle the trivial case. If you have multiple files, just separate the metrics with line breaks. As an alternative to Size Limit, you may consider bundlesize. If you are using webpack, you can get all the sizes you need by building with the --profile and --json flags:
webpack --profile --json > stats.json
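You can then pull asset sizes out of the resulting stats.json; a rough sketch, assuming the standard assets array of webpack’s stats output (the file name and metric naming are our own choices):

// stats-to-metrics.js
const fs = require('fs');

// Read webpack's stats output
const stats = JSON.parse(fs.readFileSync('./stats.json', 'utf8'));

// One metric line per emitted asset; Prometheus metric names
// may only contain [a-zA-Z0-9_:], so sanitize the asset name
const metrics = stats.assets.map(
  (asset) => `${asset.name.replace(/[^a-zA-Z0-9_]/g, '_')}_size_bytes ${asset.size}`
);

fs.writeFileSync('./metric.txt', metrics.join('\n') + '\n');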
If you are using Next.js, you can use the @next/bundle-analyzer plugin. It’s up to you!