
Friday, November 7, 2025

Azure DevOps Pipelines Tutorial - Continuous Integration

 

Welcome! Today I'll show you how to create a continuous integration YAML pipeline to automatically build and test all changes to your GitHub repository. You will learn how to enable continuous integration, also known as CI, with Azure Pipelines; what a YAML-based pipeline is and why you'd use it; how to create a pipeline that runs on any change to your GitHub repository; how to diagnose and fix issues detected by the pipeline; and how to report the status of the pipeline on GitHub.

Before diving into how to create a pipeline, it is good to understand the typical sequence of steps in Azure Pipelines and how it enables continuous integration scenarios. It all starts with a software developer who has some code ready to go on his box, ready to be pushed via Git into his remote GitHub repository. The push happens either directly or via a pull request, and at that point GitHub, which has already been configured to talk to Azure Pipelines, goes ahead and notifies your Azure Pipelines project that such an event has happened. Say the push happened on the master branch: Azure Pipelines will then read and evaluate what we call the pipeline definition, which is a YAML file stored right in your GitHub repository. That file tells Azure Pipelines which steps to execute for this pipeline, where to execute them, and any constraints or other configuration related to the execution of the pipeline. Once it has read that file, it will queue what we call a run, a series of tasks to execute, in what we call an agent pool.
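To make that concrete, here is a minimal sketch of a pipeline definition (an azure-pipelines.yml file at the root of the repository); the exact file we build later in the video has more to it, but the shape is the same:

    # azure-pipelines.yml - minimal CI pipeline sketch
    trigger:
    - master                     # run on any push to the master branch

    pool:
      vmImage: 'ubuntu-latest'   # Microsoft-hosted Linux agent

    steps:
    - script: echo Hello, world!
      displayName: 'Run a one-line script'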

The agent pool is a set of machines that are ready to receive the requests coming from Azure Pipelines. This agent pool can be a set of VMs (virtual machines), it can be physical machines, just normal physical boxes connected to Azure Pipelines, or it can be Docker containers. The most common way to do things these days is via virtual machines, but a more interesting way is via containers, and we will see that in this video in a few moments. These machines can either be hosted by Microsoft as part of the Azure DevOps product, or they can be self-hosted. If you don't want to worry too much about how to prepare these machines or how to connect them to Azure Pipelines so that they can execute your pipeline, you would just go with Microsoft-hosted; in this case they are VMs, and for public projects you get up to 10 concurrent pipelines that can run simultaneously. Of course, while it is easy to just use those, it comes with restrictions: you don't have any control over the software that goes into those machines, or over the specs of the machines themselves, so depending on what you want to do they may or may not be convenient for you. The other option, self-hosted, has the benefit that you can prepare the entire machine yourself with the exact specs that you need, but of course these machines become an ongoing maintenance task. So it's up to you which one to use; we will be using Microsoft-hosted in this video.

These machines can also be configured to use either the Linux, Windows, or macOS operating system, and the choice really depends on what you want to do in the pipeline. If, let's say, you want to build a microservice, it usually runs just fine on Linux, so you would pick a Linux VM. But if you want to use things like the .NET Framework, or you want to build a UWP (Universal Windows Platform) project, you may need to go with Windows-based VMs. And if you want to build something for iOS, you most likely have to go with macOS.

Then, after all of that has happened, an agent machine is selected from the pool, the agent pulls the source code that you have on GitHub into that machine, and the series of tasks configured in your pipeline executes. One of the most typical and basic tasks is the build step, where the code is built just as you would have built it on your own box; in the .NET case that's dotnet build, and the agent does it for you, building the code automatically. Once it's built, the agent can do other things, like run the tests, via dotnet test or any other kind of test runner that you have configured; it will run all those tests for you. Finally, it will publish the results into the Azure Pipelines UI, and it can also send all sorts of notifications, like emails, if you want to know what happened with the pipeline. So that is the overall flow in Azure Pipelines. It will vary a lot, especially in terms of the tasks executed, depending on what you have configured in your YAML pipeline. And of course there is another side of this, the deployment story, continuous deployment, which we will not cover in this video.

Now, here are a few things that we will be using in this tutorial: first, a couple of .NET Core projects already published in a GitHub repository; second, Git, which we will use to manage changes to the repository; third, the .NET Core 3.0 SDK, which we will need to build and test the code locally; and finally Visual Studio Code, which we will use as our code editor, although you could of course use any other code editor that works best for you.

To illustrate how to enable continuous integration with Azure Pipelines, we're going to use the hello pipelines repository that I have already published to GitHub. This repository has just a couple of very simple .NET Core 3.0 projects. The first one is a Web API, very similar to the one you get if you do dotnet new webapi with the .NET Core CLI. The main thing in this project is the controller we have here, the WeatherForecastController, which has just one API, Get. What it does is return a collection of weather forecasts, where each forecast has a date, a temperature, and a summary, and that summary is just a random string out of a fixed set of strings. The other project is a little test project, an xUnit project, with just one test class containing one very simple test that invokes that API and confirms that the expected number of days is returned.
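For reference, the relevant pieces look roughly like this. This is a sketch based on the default dotnet new webapi template rather than the repo's exact code, with the single xUnit test shown below the controller (the test name is illustrative, and NullLogger stands in for the logger stub mentioned later):

    // WeatherForecastController.cs (sketch)
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Extensions.Logging;

    public class WeatherForecast
    {
        public DateTime Date { get; set; }
        public int TemperatureC { get; set; }
        public string Summary { get; set; }
    }

    [ApiController]
    [Route("[controller]")]
    public class WeatherForecastController : ControllerBase
    {
        private static readonly string[] Summaries = new[]
        {
            "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy"
        };

        private readonly ILogger<WeatherForecastController> _logger;

        public WeatherForecastController(ILogger<WeatherForecastController> logger)
        {
            _logger = logger;
        }

        [HttpGet]
        public IEnumerable<WeatherForecast> Get()
        {
            var rng = new Random();
            // 5 days generated here; keep this number in mind,
            // the pipeline will have something to say about it later
            return Enumerable.Range(1, 5).Select(index => new WeatherForecast
            {
                Date = DateTime.Now.AddDays(index),
                TemperatureC = rng.Next(-20, 55),
                Summary = Summaries[rng.Next(Summaries.Length)]
            })
            .ToArray();
        }
    }

    // WeatherForecastControllerTests.cs (sketch) - the single xUnit test
    using System.Linq;
    using Microsoft.Extensions.Logging.Abstractions;
    using Xunit;

    public class WeatherForecastControllerTests
    {
        [Fact]
        public void Get_ReturnsExpectedNumberOfDays()
        {
            var controller = new WeatherForecastController(
                NullLogger<WeatherForecastController>.Instance);

            var forecasts = controller.Get();

            Assert.Equal(7, forecasts.Count());
        }
    }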

So how do we enable an Azure pipeline for this GitHub project? What you want to do is go to azure.microsoft.com/services/devops/pipelines, and there, depending on whether you already have an Azure DevOps account or not, you'll click either Start free with Pipelines or Sign in to Azure DevOps. Let's assume that we don't have an account yet and we're starting brand new, so: Start free with Pipelines. Now we're going to authenticate; in this case I'm going to use my Microsoft account. Then we're asked for a project name. Your project is the place that's going to host your pipelines and any other Azure DevOps-related artifacts that you want to use across your software development lifecycle. This project we're just going to call hello pipelines. You can choose to make it private, meaning that only you and the people that you invite can see what's going on in the project, or public, meaning anybody can go ahead and see what's going on. Since our repository is public, let's go ahead and make the project public too. I'll click Continue, and this also creates what they call an Azure DevOps organization, which is an uber-container for all the projects that you can have in Azure DevOps. As you can see, an organization has been created with a generated name, and a project has been created, hello pipelines.

Now we're presented with an interesting choice: we're choosing where to get the code from, and at the same time we're presented with the option of using either YAML-based pipelines or the classic editor to create the pipeline. YAML, by the way, stands for Yet Another Markup Language, and it is nothing more than a human-friendly data serialization standard for all programming languages. These days the recommended approach is to go for the YAML-based pipeline, but why would you want to use it as opposed to the classic editor? The classic editor, which is kind of legacy at this point, lets you use a more UI-friendly approach: just drag and drop tasks and do a bunch of things visually in a designer to create your pipeline. The main pitfall of that classic designer is that the pipeline definition itself is not checked in alongside your code. The problem with this, which is not evident as you're starting out, shows up months from now, when you want to go back and build some old code with the same pipeline that you're using today, and in many cases you just can't. Why? Because the pipeline has evolved separately from your code. In the past you may have had other projects, binaries, test code, or artifacts that today are not there and that the pipeline no longer honors. That is what makes the classic editor, and the pipelines created by it, not ideal for a long-term project, so overall I strongly recommend using the YAML-based pipeline. The other thing, of course, is that new Azure Pipelines features are already being introduced only into YAML-based pipelines, like deployment jobs and cron-based scheduled jobs, and probably other things, and those are just not available in the classic editor. So even if it takes a little more effort to learn and read YAML pipelines, I would strongly recommend that you go with them.

Right now, at the time of recording this, there is a feature that we want to use that is not yet broadly available, so we have to enable it explicitly. To do that I'm going to go to my profile, click on Preview features, and turn on the one called multi-stage pipelines.

All right, now: where is my code? My code is in GitHub, so I'll click GitHub. At this point you may be prompted to authenticate to GitHub; in my case it's not prompting me because I already did that and it remembers me. So I'll click on hello pipelines, and now we're taken into GitHub. Why? Because GitHub is asking us for permission to let Azure DevOps get access to the code. Azure DevOps wants to be notified any time code is pushed to GitHub, and for that we need to install the Azure Pipelines application into GitHub, which will be granted the permissions that we see here. So we have to say yes, and I'll authenticate here, again with the Microsoft account. This sets up the connection between GitHub and Azure Pipelines, so Azure Pipelines from now on has access to what's in your GitHub repository.

At this point we're presented with a bunch of options in terms of a template to initialize our YAML file. You can choose among a series of available templates depending on what kind of framework, build tool, or test tool you want to use; there's a bunch of templates for you. In our case we'll keep it simple and go step by step, so we'll go for the starter pipeline. Here we are: an initial, very simple pipeline has been generated for us, so let's start exploring what's going on. I'm going to collapse this section here to have more space, and let's start looking at this. The first thing I'll recommend is to actually go to the link aka.ms/yaml, which I have already opened right here. This page is super useful because it is the entire YAML schema reference. Here you can tell exactly how to structure your YAML file, how pipelines are defined by this YAML file, the conventions, the basics, and a bunch of samples so that you can get to know how to actually build these pipelines. There are also descriptions of all the tasks that are available and a bunch of concepts, so it's a super useful page; keep it handy whenever you're dealing with a YAML-based pipeline.

Now, back to here. One more thing about YAML, by the way, is that it enforces what we call configuration as code, which is the very nice practice of storing your pipeline alongside the code in the repository. This is great because from here on you will know exactly what's going on with changes to the pipeline, as people make changes to it and push them to the GitHub repository. Again, this would not be available with the classic pipeline editor. So, configuration as code: great stuff.

First thing here: the trigger. The trigger defines when this pipeline is going to get kicked off. What it's saying right now is that any time something is pushed or merged into the master branch, this pipeline has to get kicked off. You can change this; it could be any of the branches available in your repository, and there are also options to limit exactly which paths within your branch should trigger a pipeline run. There are other kinds of triggers available too. Right now this is what we call a CI trigger, but you could create a pull request trigger, where the pipeline kicks off whenever a new pull request is created, let's say in GitHub; that's another way to run your pipeline. The other way is a scheduled trigger, where you can say: every hour, go ahead and kick off the pipeline, or every night, or every morning, or once a week, stuff like that. That's also available.
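As a sketch, those variations look like this in YAML (branch and path names here are illustrative):

    # CI trigger: run on pushes to specific branches/paths
    trigger:
      branches:
        include:
        - master
      paths:
        include:
        - src/*

    # Pull request trigger: run when a PR targets these branches
    pr:
    - master

    # Scheduled trigger: cron syntax, e.g. every day at 2 AM UTC
    schedules:
    - cron: "0 2 * * *"
      displayName: Nightly build
      branches:
        include:
        - master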

Next is the pool. We talked about virtual machine pools, or agent pools, before, and here is where you define what kind of machine you want to use. By choosing a VM image you're telling Azure Pipelines, first, that you want to use a Microsoft-hosted virtual machine, and second, in this case, by saying ubuntu, that you want a Linux-based machine. It really depends on what you want to do: you can use ubuntu-latest, you can use windows-latest if you want a Windows virtual machine, or you can use macOS-latest if you want to build on a macOS device. You can also pick specific versions of these images; you don't have to use latest. And again, if you want to know exactly what's available, go back to that YAML schema reference page, and somewhere in there you will find all the virtual machine options available to you. Like I said, we're going to use a hosted image here; we're not going to be managing our own virtual machine.
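In YAML, this choice is a one-liner; a sketch of the options mentioned above:

    # Microsoft-hosted agent pool: pick the VM image by OS
    pool:
      vmImage: 'ubuntu-latest'     # Linux
    # vmImage: 'windows-latest'    # Windows
    # vmImage: 'macOS-latest'      # macOS
    # Specific versions are also available instead of latest,
    # e.g. 'ubuntu-18.04' (check the schema reference for the current list)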

Now, one thing that I like to recommend here is to not run the pipeline directly on the virtual machine. Why? Because usually you don't know exactly what's on that build machine. These machines have tons and tons of tools, frameworks, compilers, test runners, artifacts, and all sorts of things installed on them, and for the very specific project you want continuous integration for, you may not need all those dozens of things, which could have unintended consequences within your pipeline. One thing you can do to avoid all that is to use a container. By using a container you can say: run my pipeline specifically within this container that I am specifying. For instance, in this case we know that we're building and testing .NET Core 3.0 projects, so what we can do, and I already pulled this up, is go and find the .NET Core SDK Docker image, which is right here. I'm going to copy it, and I'm going to say: when you run the pipeline, don't run it directly on the virtual machine; first pull the .NET Core 3.0 SDK container image, run it, and execute my pipeline within that container. That makes sure that only the things you need for your pipeline are used during execution; in this case the .NET Core SDK is all we need, not all the other tooling on the VM. In fact, if you just go with the ubuntu-latest virtual machine, that image does not have the .NET Core 3.0 SDK; it has a previous version as of the time of this recording, so I would have to add an additional task to this pipeline to make sure I actually get the .NET Core 3.0 SDK. So, containers: great stuff. They may add some seconds to your pipeline, but it's totally worth it.
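Here is a sketch of what that looks like in the YAML: the whole job runs inside the .NET Core 3.0 SDK image instead of directly on the hosted VM (this was the image name on MCR at the time of recording):

    pool:
      vmImage: 'ubuntu-latest'

    # Run every step of the pipeline inside this container,
    # not directly on the hosted VM
    container: mcr.microsoft.com/dotnet/core/sdk:3.0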

Now, moving on to steps: this is where you actually declare the actions, the steps, that you want to execute. As an example, the starter pipeline gives us a couple of scripts, but we're just removing them, and what we want to do is add our own steps. There are two ways to add steps. The first way is to just type them, and we can use some IntelliSense here. For instance, the first step that we want here is a task, and this task is going to be the .NET Core CLI task, DotNetCoreCLI@2. We're going to need some inputs, and what we want there is just one command, and that command is called build. The way the projects are set up, we just have to do dotnet build and it will build all the projects; that's all we need to do, and that is the setup for this task. Now you may say: well, I don't want to be typing all this stuff all the time, I have no idea what to put here. Again, keep in mind that you can always go back to the YAML schema, which has the definitions of all the tasks, plus samples and all these things, so you're not alone there. But if you really don't want to type this stuff, there's this thing called the assistant on the right side. You just have to click there, and it opens a list of all the tasks available in Azure Pipelines. You select the one that you care about, and it brings up a bunch of options so that you don't have to type them; you just select them over here. In this case what we want is the test command, because we want to build the code and then test it. And what's the path to the project? Here we're going to use a minimatch expression to say: scan all the directories in the source and find anything that contains Tests in the name of the project; anything that has Tests in the project name will be picked up across the whole repository. Let's also publish those test results, and code coverage if available, into Azure Pipelines. I click Add, and as you can see, that immediately adds the task right here. So you can either type it or pick it from here; it's not as fancy as the old designer, but it's a very handy tool, and it lets us build beautiful YAML pipelines.
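After both steps are in, the steps section looks roughly like this (the exact projects pattern is whatever you chose in the assistant; this one matches any project with Tests in its name):

    steps:
    - task: DotNetCoreCLI@2
      displayName: 'dotnet build'
      inputs:
        command: build

    - task: DotNetCoreCLI@2
      displayName: 'dotnet test'
      inputs:
        command: test
        projects: '**/*Tests/*.csproj'   # minimatch expression
        publishTestResults: true         # send results to the Azure Pipelines UI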

So now the pipeline is pretty much ready to go, and what I'm going to do is just hit Save and run. At this point you're prompted with the option of either committing this directly to the master branch or creating a new branch for this commit; you can potentially even create a pull request if you want to get others' reviews and approvals on it. To keep things simple in this video we'll just commit directly to the master branch, so: Save and run. This has now created the pipeline, and again, remember that the pipeline's YAML is checked in to your repository, so it will live and move forward as your repository moves forward.

Here we are on the pipeline's monitoring page, looking at one specific run. It shows the duration and the state, which was queued just now and has changed to running, so your pipeline is now running. If you want to know exactly what's going on with that pipeline, you can always click on the job, which opens up this UI, and we can walk through what's happening. First, of course, it pulls the Docker container image, the .NET Core 3.0 SDK, because, like we said, the whole pipeline is going to execute inside that container. Now it's checking out the code, pulling it from GitHub into the container, and next it will build the code: just dotnet build. With that, it is building the code, and it looks like something happened during the build, so let's wait for it to finish. OK, the pipeline has finished; let's scroll up a little and see what we can find. We have an error in the build step, in the WeatherForecastController: cannot convert from 'method group' to 'int'. So there's something going on here, and the best thing we can do, I think, is to try to reproduce it locally and see what happens.

Let's go back to the GitHub repository, grab the clone URL, go to my box, and do a git clone. Then let's cd into hello pipelines and open VS Code to see what's going on. All right, here we are; let's close this welcome screen and look again at what the error pointed to: the WeatherForecastController file, line 34. Let's go there, restore packages, and look at line 34. Indeed, there's something going on here. The problem is that we're trying to use a Count property, which does not exist, because Summaries is an array, and arrays don't have a Count property. We could use the Count() method if we brought LINQ in here, but it is more efficient to just use the Length property, which is already computed, so let's do that.
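In code, the offending line and the fix look something like this (a sketch; the surrounding code is the Get method shown earlier):

    // Before: compile error, 'cannot convert from method group to int',
    // because arrays have no Count property; Count() only exists as a
    // LINQ extension method, so 'Summaries.Count' is a method group
    Summary = Summaries[rng.Next(Summaries.Count)]

    // After: arrays already track their size in the Length property,
    // which is both correct here and cheaper than calling Count()
    Summary = Summaries[rng.Next(Summaries.Length)]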

This should fix it, but let's make sure it's actually fixed: let's run the build task, which does a dotnet build for both projects, and see if this fixed it. Yes, indeed, the build succeeded. So let's commit this as 'Use Length instead of Count in Get'. All right, now let's open a new terminal and do git push origin master. This should fix the issue.

Let's go back to our pipelines, and as you can see, just by doing that git push, another build has kicked in. This is what we call continuous integration: any change made to our master branch is immediately exercised by Azure Pipelines, by the continuous integration pipeline. So the pipeline is now running; let's see if we can get a successful run this time. All right, the pipeline has completed, and the job has failed again. One thing I notice, besides the fact that it failed, is that zero tests have passed. First let's review what failed here: the dotnet test step failed, saying that we have an assertion failure in the test, expecting 7 but getting 5. If you go back to the run and click on the section where it reports the test results, it gives us an overall view of all the tests that failed in this run, and if you click on the failed test, it gives you a very nice view of what happened. Like we saw in the error before, there's a place where we're asserting that we expect 7 and we're getting 5, and that's in the only test that we have. So let's go back to the test and see what's going on.

So, let's look at our test over here. This test creates a controller, passing in a stub of the logger that it needs, and it expects to receive 7 days of forecasts, but we are not getting 7 days. Let's see the WeatherForecastController. Huh, so it is generating a range of five days, not seven days; that's the issue. At this point, to fix this, either the test is wrong or our implementation of the method is wrong. Let's assume that the test is actually right, and let's return the seven days that the test is expecting.
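The change itself, assuming the test is the source of truth, is one line in the Get method:

    // Before: generates only 5 days of forecasts
    Enumerable.Range(1, 5)

    // After: generates the 7 days the test expects
    Enumerable.Range(1, 7)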

Let's see if the test is now happy with this: let's run all the tests and expand this a little bit to see what we get. One out of one tests passed, so this fixes it. Let's go back and commit this as 'Fix Get to return the expected number of days'. Then, back to the terminal, and let's do git push origin master.

Back in Pipelines, once again, like magic, a pipeline run kicks off immediately. Let's click here; this will run the pipeline again, and if we're lucky, this time we'll get a successful run. And indeed, this time the job succeeded: we have a one hundred percent pass rate, which means we're good. We can click through and see that all tests are good, there are no failures, so everything's great; the pipeline is ready.

Now, one more thing we may want to do, to reflect the fact that we have a pipeline for our GitHub repository, is to add a status badge to the GitHub page, a badge we can show right there on the repository's front page. To enable that, let's go back to the hello pipelines page, click on the '...' menu, and click Status badge. You can then copy the sample markdown shown there, go back to VS Code, and open the README file (by the way, you should always have a README file; it's super useful for future readers of your repo). Just paste that markdown, hit save, and commit it as 'Add status badge'.
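The sample markdown has this general shape (YOUR_ORG and YOUR_PIPELINE are placeholders for whatever Azure DevOps generated for your organization and pipeline):

    [![Build Status](https://dev.azure.com/YOUR_ORG/hello%20pipelines/_apis/build/status/YOUR_PIPELINE?branchName=master)](https://dev.azure.com/YOUR_ORG/hello%20pipelines/_build/latest?definitionId=1&branchName=master)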

Now let's push this. OK, after doing that, if we go to GitHub and refresh the page, we'll see a status badge right here. Anybody who comes to this repository and wants to know the status of this code will see that the status is, in this case, succeeded; it would say failed if the last build had failed. And if you click on the badge, you'll see the status of the latest build associated with this repository.

So there you go: continuous integration for your GitHub repository, enabled by Azure Pipelines. If this video was useful, please consider hitting the like button, and don't forget to hit subscribe and the notification bell to know right away when I publish new videos. Also, please leave a comment below with any thoughts about this video. Thanks for watching, see you next time!
