
Friday, November 7, 2025

Building a CI/CD pipeline for a containerized ASP.NET Core 3.0 Web API

Learn how to build a Continuous Integration and Continuous Deployment Azure Pipeline for a containerized ASP.NET Core 3.0 Web API. You will learn:

- How to push your project files to a GitHub repository and how to show its build status
- How to create a YAML-based Azure Pipeline to continuously build and publish the container image
- How to use the same pipeline to continuously deploy the container image to AKS
- How to use Azure Pipelines Environments to get the state and history of deployments

0:00

Welcome! Today I'll show you how to build a continuous integration and continuous deployment Azure Pipeline for a containerized ASP.NET Core 3.0 Web API. You will learn how to push your project files to a GitHub repository and how to show its build status, how to create a YAML-based Azure Pipeline to continuously build and publish the container image, how to use the same pipeline to continuously deploy the container image to Azure Kubernetes Service, and how to use Azure Pipelines environments to get the state and history of your deployments.

There are a few things you're going to need to follow this video: Git, which we will use to publish our files to GitHub, and a GitHub account, since we will store all our files there so that the Azure Pipeline can pull them; an Azure subscription, since that's where we have a few Azure resources, like the Azure Container Registry and the Azure Kubernetes Service, that are going to be the target of our pipeline. You will also need to follow the steps in my previous video, deploying an ASP.NET Core 3.0 Web API on AKS, since in that video we created all the files that are going to be needed in this one. Optionally, you could also use Visual Studio Code as your code editor, which we will use pretty much to add or move files around on our local system. And finally Postman, which you can use to more easily query your Web API.

1:41

In the previous video we manually published our container image to an Azure Container Registry, and we also manually deployed the corresponding pod to Azure Kubernetes Service. In this video we will do the same, but in a fully automated way, via an Azure Pipeline. So before moving ahead, what I have done is revert those two manual steps. If we now go to the Azure portal, which you can see right now, and we look at our container registry, you see that there's no longer any repository here, no image. And if we switch to VS Code, where I have a terminal ready, connected to our Kubernetes cluster, and we check the pods via kubectl get pods, you see that there's no longer any pod here. So now let's see how we can come up with a pipeline that can deploy all these things automatically for us.

The first thing we're going to do is move the deployment.yaml and service.yaml files into a new folder; let's call it manifests. So let's put them there. We're doing that so that later on, when we create the pipeline, it will be very easy to tell in which location we have these manifest files, for the purpose of the pipeline using them to apply to the Kubernetes cluster.
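
For reference, here's a sketch of those terminal steps (I'm assuming the manifest files from the previous video are named deployment.yaml and service.yaml; adjust if yours differ):

    # confirm the manual deployment is gone
    kubectl get pods

    # group the Kubernetes manifests in one folder so the pipeline can find them
    mkdir manifests
    mv deployment.yaml service.yaml manifests/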

3:02

The next step is to get our code published to a remote Git repository, and we're going to use GitHub for this. But before we can do that, we need to turn this local directory into an actual Git repository, and to do that the only thing we have to do is run git init. And there it is: this has now turned into a Git repository, and you can tell by the color changes here. Now, one thing that we probably want to do is also add a .gitignore file, so that not all the files get added to the repository, only the ones we actually care about, not all the generated binaries and temporary files. To do that, what I did is go to the dotnet/core repository and find the .gitignore file that they use; this is usually how it goes, you find somebody else's .gitignore and you go from there. So I'll use that: I'll create a new file, .gitignore, and paste the contents. We actually don't need everything that's in this file, so I'm going to clean up a few things. I actually like to keep the .vscode folder checked in, so I'll keep that; we don't need this, we don't need these other ones, and this one at the very end we can also get rid of, save. So now neither the binaries nor the obj directories will get checked in. Next, I'm going to switch to the Source Control hub, and here's where we can commit these changes locally. I'll just put in a message, "add initial files", say yes, and now everything is committed locally.
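
Here's the command-line equivalent of what I just did in VS Code, as a rough sketch:

    # turn the local directory into a Git repository
    git init

    # create a .gitignore (for example, starting from the one in the dotnet/core repo),
    # then stage and commit everything that's left
    git add .
    git commit -m "add initial files"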

4:51

Now it is time to create the remote repository, and like I said, we're going to use GitHub for that. I already created a GitHub account for myself here, and what I'm going to do is just go ahead and create a brand new repository; let's call it hello-aspnetcore, and let's make it public, why not. And let's just click Create repository. So now we have a brand new, empty repository, and what we're going to do is add it as the remote origin of our local repository, so that Git knows how to map our local repo to this remote repository, and from there we can actually push our source files over. So I'll just go ahead and do that: git remote add, and it's done. And now we can do a push to the master branch, so everything that's local will now be published over there. If you just refresh this page, you'll see that all the files are now available on GitHub.
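
In terminal form, those two steps look roughly like this (replace <user> with your own GitHub account):

    # map the local repo to the new GitHub repository
    git remote add origin https://github.com/<user>/hello-aspnetcore.git

    # publish the local commits to the master branch
    git push -u origin master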

6:09

Now we can go ahead and get started with Azure Pipelines, and the place where you want to go to get started with this is this page over here: azure.microsoft.com/services/devops/pipelines. Here you're going to find a "Start free with Pipelines" button, and that's where we're going to click. You may need to authenticate here, and at this point you're asked for a project name, so we'll just go again with hello-aspnetcore. And since our source code is publicly on GitHub, let's also make our pipeline public, why not, so let's say Continue. This creates what Azure DevOps calls a DevOps organization, and inside our organization it creates the project that we just named, hello-aspnetcore. And now we are placed on the pipeline creation page.

7:12

On this page you can pick from a few places for the code that you're going to build and deploy with this pipeline; in our case that's going to be GitHub, so we're going to go for the GitHub option. But before that, before I forget, one thing that you want to do, if you have not done it, is enable the feature that lets you use multi-stage pipelines, which at the point when I'm recording this video is still a preview feature. So if that's the case for you too, just go to your profile here, click on Preview features, and select Multi-stage pipelines right there. You need to do that in order to be able to see some settings that we're going to use ahead. So now we're going to pick the GitHub option here, and this will show all the repositories that we have available on GitHub. It may prompt you for additional authentication; in my case it already has my sign-in info, so that's why it showed the repositories right away. So we pick our hello-aspnetcore repo here, and now it takes us to a page on the GitHub side, where it wants to add what it calls a GitHub App to our repository. That app allows Azure Pipelines to get notifications and to interact in a richer way with the GitHub repository, so this page is pretty much for approving the installation of this app in our GitHub repository. We'll say Approve and install, and authenticate once again.

9:01

Now we are presented with this list of potential templates we can use to generate our initial pipeline. I'll go for "Deploy to Azure Kubernetes Service", because this pretty much covers most of what we want to do here. At this point you're prompted to select your Azure subscription; this is the subscription where you have created your Azure Container Registry and your Kubernetes service, so I'll pick that, hit Continue, and again we need to authenticate here. At this point Azure Pipelines is able to retrieve information about what we have in that subscription, so now it is able to tell the cluster that we may want to use, in this case julio1; the Kubernetes namespace that we want to interact with, which in our case would be just the default namespace; the container registry that we want to use, which as you remember has the same name, julio1; and the image name, which in this case would be hello-aspnetcore, the name of the image that we want to publish to the container registry. Now I'll click on Validate and configure.

10:33

As you can see, we get an initial Azure Pipelines YAML file configured for the options that we just selected. Now, before we go into this pipeline file, one thing I want you to notice is what happened just now as we completed this little mini wizard. If you go to Project settings (let's actually open it in a new tab) and you go to Service connections, you see that a bunch of things got created here. These service connections are the way that Azure DevOps and Azure Pipelines store credentials and connection info for different Azure resources. In this case, this one here is a connection to our Azure Container Registry, this one here is a connection to our Azure Kubernetes Service, and this one here is a connection to our GitHub repository. So this is how Azure DevOps stores these secrets so that you can use them in the pipeline. Just be aware of that, because later on, in the YAML, we will refer to at least one of these.

So let's close this one. The other thing that I'd like you to notice is that an environment got created via the wizard as well. So what are these environments? Let's actually go to the Environments section here; I'll open another tab for this. Environments are a new way that Azure Pipelines has to make a relationship between your pipeline and the place where you're actually deploying, if you're going to go ahead and do a deployment. As you can see, it created this hello-aspnetcore environment, and if you go there, you're going to see that it knows about our Kubernetes cluster and the default namespace in there. If you click in, we don't have anything right now, but from here on out Azure Pipelines has knowledge of what's going on in that cluster, and as we do our deployments it will show here all the resources that get deployed, the history of deployments as correlated to this Kubernetes cluster, and a bunch of interesting information, so that later on you can tell exactly what happened in that cluster for each of our deployments. So it's pretty interesting stuff.

13:12

Now let's go back to the pipeline and start exploring the pipeline that just got generated for us. This YAML file, azure-pipelines.yml, is what we call configuration as code. The idea is that every single configuration change to your pipeline is stored in your repository, alongside everything else. So if you make any change to the pipeline, that change is stored in the repository, and it could even go through a pull request and receive approvals and everything, so that no change goes untracked. It's really a very good best practice, and that's what you get by going through this YAML file.

So what is all the information in this file? Let's just go one by one. First, the trigger. In this case, the trigger means that any time something is pushed or merged into the master branch, this pipeline will automatically trigger; it will just go ahead and run automatically for you. You could configure any other branch or branches here that you want. I'm also going to remove this resources section, because honestly I don't know what it is meant to do, and we don't really need it, so remove that.

Now, variables. This is a series of values that you can use across a bunch of other steps that are going to come up later on; things that you may want to reuse in multiple places, so as to not hard-code them, you can store here as variables and use them multiple times. The first thing we have here is dockerRegistryServiceConnection: as I mentioned, when we went into the service connections screen there was one for Docker, and what we're seeing here is a unique identifier, a GUID, related to that connection. This will be used later on, down there, to specify which container registry this pipeline has to interact with or deploy into. Now we have the image repository, which in our case is actually helloaspnetcore (I don't know where they removed the dash, but yes, this is the repository we want to use); this is the container registry where we need to publish our container image. This one here, dockerfilePath, is a minimatch expression that says: find all the files in the code repository named Dockerfile; that's what we're going to use to build the image before publishing it to ACR. Now, the tag is the tag that's going to be associated with the container image, and by default it offers the build ID, but I actually like to use the build number for this. I find it much easier to correlate a build to the generated container image via the build number; you can also easily do it with the build ID, but the build number is much more straightforward in my mind. So, build number. And this one here, the imagePullSecret, is what you would use if you need to tell your Kubernetes cluster how to connect to the container registry, in the case that the cluster doesn't already know how to interact with it. Now, if you remember, in the previous video we actually configured the Kubernetes cluster with a service principal identity that already has access to the container registry; since we did that, we don't need to create an additional image pull secret for this, so we'll remove that.

Next we have the name of the virtual machine image that we're going to use. Azure Pipelines offers a bunch of types of VMs based on different operating systems, Ubuntu/Linux-based or Windows-based, and different versions of these operating systems. What we're saying here, and it is the default option so we'll just keep it, is: whatever is the latest version of Ubuntu that Azure Pipelines offers. Remember that what this pipeline is going to do is have Azure Pipelines pick one of its virtual machines, according to what we have specified here, and run the pipeline on that machine, so you have to specify what you really want to use as your machine for building and deploying your files.
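
Putting together what we've covered so far, the top of the YAML looks roughly like this; the service connection GUID and the registry hostname are placeholders, since yours will differ:

    trigger:
    - master

    variables:
      # GUID of the Docker registry service connection the wizard created
      dockerRegistryServiceConnection: '<guid-from-service-connections>'
      # repository in ACR that the image is pushed to
      imageRepository: 'helloaspnetcore'
      containerRegistry: 'julio1.azurecr.io'
      # minimatch expression that locates the Dockerfile in the repo
      dockerfilePath: '**/Dockerfile'
      # build number instead of the default build ID, for easier correlation
      tag: '$(Build.BuildNumber)'
      # latest Ubuntu VM image offered by Azure Pipelines
      vmImageName: 'ubuntu-latest'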

17:57

Now we get to the stages section. Stages are a way to group a bunch of jobs and steps that you want to run across your pipeline, and the reason you'd want to use stages (you may not want to; you could just use jobs right away) is that with stages you can do extra things like, for instance, enable approval checks that say: if you want to go ahead and deploy some container or some files into some environment, you first need to get a manual approval from somebody on my team or on some other team. That you can do with stages. In this case we have two stages: the first one is the Build stage and the other one is the Deploy stage.

Here we go with the Build stage first. This will be the stage where we actually build the code, in this case where we build the container, so there's a display name, and we have one job here, which we also called Build. Here's what we specify: there's going to be a pool, the pool of virtual machines where our build is going to run, and we're saying the virtual machine image we want to use, vmImage, is the vmImageName we specified before, over here: ubuntu-latest. This is how you use the variables you declared earlier. And what are the steps in this job? There are really just two steps here. The first one is the Docker task, which is the task you would use, in this case, to build and push the image to the Azure Container Registry. So the command that we're going to use is buildAndPush; the repository is the imageRepository variable that we defined before, which again is helloaspnetcore; the Dockerfile it's going to use is the dockerfilePath variable, again specified before, which is just 'Dockerfile'; the container registry is the dockerRegistryServiceConnection that we also specified before, which again lives in the service connections section that we saw a moment ago; and the tag, as we said, is going to be the build number.

The other step here is the upload, or publish, of the manifests. In fact 'upload' is a deprecated word, as it's saying here, so I'll just use 'publish' instead. What this does is find that manifests folder that we created (as you remember; if we switch quickly to VS Code, it will find this manifests folder right here) and create an artifact called manifests with the files that are inside that folder. This is needed so that in the next stage, the deploy stage, which will potentially run on a different machine (it doesn't have to be the same machine), the pipeline will automatically download these files and use them to do the deployment to Kubernetes. So that's the first stage, the Build stage.
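
The Build stage we just walked through looks roughly like this in the YAML:

    stages:
    - stage: Build
      displayName: Build stage
      jobs:
      - job: Build
        displayName: Build
        pool:
          vmImage: $(vmImageName)
        steps:
        # build the image from the Dockerfile and push it to ACR in one step
        - task: Docker@2
          displayName: Build and push an image to container registry
          inputs:
            command: buildAndPush
            repository: $(imageRepository)
            dockerfile: $(dockerfilePath)
            containerRegistry: $(dockerRegistryServiceConnection)
            tags: |
              $(tag)
        # publish the manifests folder as a pipeline artifact for the next stage
        - publish: manifests
          artifact: manifests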

21:02

Then we go to the Deploy stage; here is where our continuous deployment part happens. As you can see, this stage needs to happen after the Build stage, and that's why we have the dependsOn parameter here. Then we go to the jobs section, and notice that this job here is of type deployment, as specified by this word here. This is a special type of job that allows you to interact with Azure Pipelines environments. We just saw how there's already an environment there, called hello-aspnetcore, and by using a deployment job you're able to interact with that environment; and if that environment happens to have any checks, any approvals that need to happen before you can deploy there, then by using a deployment type of job you're able to actually enforce such checks, and a few other things.

So, OK, we have a job of type deployment, which is called Deploy, and we happen to be using the exact same VM image that we specified for the other stage, but you could totally use a completely different virtual machine image here if you wanted to, on a completely different operating system if needed. And then here's the environment that we're going to use, so you're specifying: OK, it's going to be the hello-aspnetcore environment, with the namespace default. Then we go to the strategy section. The strategy specifies how you want to roll out the changes to your deployment environment, and at the time of this video there's really only one deployment strategy, the one called runOnce, which means that all the steps specified here just go one by one, sequentially. Eventually, though, there are going to be other strategies, like blue-green, canary, and rolling, so that you can say things like: first deploy to 10% of the pods, and then to another 10%; or: first go ahead and deploy to this other set of pods entirely, verify that everything is working there, and then just switch or swap the active set of pods with the other set of pods, and things like that. But today we'll just go ahead with runOnce, which is the one that's available.

And then we specify steps. The first step here is using the Kubernetes manifest task; this is the task that you can use to apply Kubernetes manifest files against the Kubernetes cluster. Now, this first task here, which is actually the one you would use for creating a secret, an image pull secret, we're not going to be using, because like I said before, our cluster already knows how to pull the images from ACR; it already has permissions via the service principal, so we don't need this task today. So the one step that we're going to have here is again the Kubernetes manifest task, but in this case with the deploy action. This task is going to go ahead and take those files that we uploaded in the previous stage into the manifests artifact and download them, both the deployment.yaml and the service.yaml files. We will not use an image pull secret; like I said, we don't need that. And then it's going to go ahead and use the container that's built up from the image repository we specified and the tag we specified, all of them as variables. So it's going to deploy these manifest files, and it's going to be using this container image here. All right, so that's our pipeline.
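
And here's a sketch of the Deploy stage after the edits described above (imagePullSecret pieces removed; the environment name and manifest file names follow this walkthrough, so adjust them to your own setup):

    - stage: Deploy
      displayName: Deploy stage
      dependsOn: Build                # only runs after the Build stage succeeds
      jobs:
      - deployment: Deploy            # a deployment job can target an environment
        displayName: Deploy
        pool:
          vmImage: $(vmImageName)
        environment: 'hello-aspnetcore.default'   # environment.namespace
        strategy:
          runOnce:                    # the only strategy available at recording time
            deploy:
              steps:
              # apply the manifests downloaded from the 'manifests' artifact
              - task: KubernetesManifest@0
                displayName: Deploy to Kubernetes cluster
                inputs:
                  action: deploy
                  manifests: |
                    $(Pipeline.Workspace)/manifests/deployment.yaml
                    $(Pipeline.Workspace)/manifests/service.yaml
                  containers: |
                    $(containerRegistry)/$(imageRepository):$(tag)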

25:04

What we're going to do now is go ahead and click Save and run, and there are two ways to go for this: either we can go ahead and commit this directly to the master branch (like I said, this is configuration as code, so this entire thing is going to get checked into the repository), or we can create a new private branch and then start a pull request with these changes, so that other people can look at it, review it, and provide approvals and things like that. In this case we will not really go into the pull request flow, so we'll just go ahead and commit directly to the master branch: Save and run. And here we are.

At this point the pipeline has just started executing, and you can see there's a summary screen here showing the repository that will be used for this pipeline, the branch that's going to be used, the exact commit associated with this run, and its duration. Down here you can see the stages of this pipeline: like we said, there's a Build stage and a Deploy stage, and the Build stage just started, zero of one jobs completed. We can actually click on this stage, and this will pop up a log view of what's going on exactly in this stage. As you can see, this is very similar to what we did manually before, but now everything is running remotely in the Azure Pipelines environment. So it's just building the container image, which will take a little bit; now it is actually pushing the image directly to our Azure Container Registry; and finally it is publishing the artifact with the manifest files, deployment.yaml and service.yaml; it is creating what Azure DevOps knows as an Azure DevOps artifact for the next stage. And as you can see, the Build stage is now completed, and the Deploy stage should start soon.

It's just starting; let's actually click here to see what's happening in that Deploy stage. So it's downloading the artifact, and now it is deploying to the Kubernetes cluster. As you can see, it is just running kubectl, just like we did locally; now it's running in the pipeline. And it is done.

27:23

Now let's actually go back to that first view that we had. As you can see, we can see the two stages, the Build stage and the Deploy stage, and we can tell right away that everything went successfully. More than that, we can actually go to the environments section here and see the targeted environment, which was the hello-aspnetcore environment, and we can see the job that just got executed, this Deploy job. We can click on it (well, actually that will take us back to where we were before), but if we go to View environment, it will take us to the environment section again, and you can see right away the latest job that executed against our environment; that's very interesting info already. And if we click into the default namespace, we can see the Kubernetes deployment that was created there, the same one we would get if we were to run kubectl get deployments, right here, and it's showing that there's one out of one pods running. And I think we can click here, yes, and we can see more details about this deployment: the exact image and tag that got deployed into the pod, and the pod or pods that are running right now. And if we go back again to the environment, to the default namespace, and go to Services, we can see that our hello-aspnetcore service has also now been deployed, and we have an external IP that we can query if we want to.

So let's go ahead and copy this IP and switch to Postman, where I have already prepared the URL that you would use to query the Web API. I'll just replace 'external IP' with the IP we copied over there, click Send, and here it is: we're getting results from our AKS-hosted Web API right there. Let's also go back here, go back one step, and click on Deployments. Again, this is interesting because you will be able to see all the deployments that happen, in sequence, so it's a very nice way to track what's going on with your environments. We just have one here, but you could have one for the dev environment, one for the test environment, QA, integration, production, and canary, all these environments, right? So it's very easy to track things over here.
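
If you prefer the terminal over Postman, something like this works too; I'm assuming the default /weatherforecast route from the ASP.NET Core 3.0 Web API template here, so adjust the path to whatever your API exposes:

    # find the external IP of the service
    kubectl get service

    # query the Web API through the service's external IP
    curl http://<EXTERNAL-IP>/weatherforecast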

29:52

Now, one last thing that we may want to do is provide information about this build that just happened (which, by the way, let's go back to it; this build here, which was successful) back to the GitHub repository. How about we make it so that we can see, right here on the GitHub page, the current state of that build? That's actually very easy to do. Let's go back to VS Code. The first thing that we have to do is pull the latest version of this repository from the remote repository; to do that, let's do git pull origin master. OK, this is important because we're going to make more changes in this local repository, and we want to make sure that we have the new file that Azure Pipelines added, azure-pipelines.yml, before we add any new files here.

What we're going to do now is add a new file; let's call it README.md, which by the way is always a very good idea to add to a GitHub repository, to describe what's going on with it. I'll just put hello-aspnetcore as the heading here, and then we'll say that this is a sample ASP.NET Core 3.0 Web API project. And now here is where we can add the status of the build. Where do we get the status? Let's go back to Azure Pipelines, and from the pipelines page, let's go to our pipeline here; if we open this '...' menu over there, there's a Status badge option. The status badge is the way for you to show the status of the latest build of this pipeline, in this case 'succeeded'. And then down here there's some sample Markdown that you can copy and then paste into your readme file, for instance. If we look at the preview of this, on the right side you can see the status of the build right there; if it was a failed build, then it would be showing that failed message over there instead. So now that we have this, let's save it and commit it: 'add readme file', commit, and then let's just go ahead and git push origin master. Now we go back to GitHub, and we can refresh the repository, and now we have a readme file with the status of the latest build of the associated pipeline. So anybody who comes to this repository and wants to know if all these files here actually build, if everything is successful, can see that just by looking at this badge; and if they click here, it will take them straight to the last build of this pipeline, if they want to get more details. And since everything is public here, they'll be able to get there without authentication.
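
The resulting README.md is roughly this; the badge image and link URLs are the ones you copy from the pipeline's Status badge dialog, shown here only as placeholders:

    # hello-aspnetcore

    Sample ASP.NET Core 3.0 Web API project.

    [![Build Status](<badge-image-url-from-status-badge-dialog>)](<pipeline-url>)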

33:15

If this video was useful, please consider hitting the like button, and don't forget to hit Subscribe and the notification bell to know right away when I publish new videos. Also, please leave me a comment below with any thoughts about this video. Thanks for watching, and see you next time!

 
