Learn how to build a Continuous Integration and Continuous Deployment Azure Pipeline for a containerized ASP.NET Core 3.0 Web API. You will learn:

- How to push your project files to a GitHub repository and how to show its build status
- How to create a YAML-based Azure Pipeline to continuously build and publish the container image
- How to use the same pipeline to continuously deploy the container image to AKS
- How to use Azure Pipelines Environments to get the state and history of deployments
Welcome! Today I'll show you how to build a continuous integration and continuous deployment Azure Pipeline for a containerized ASP.NET Core 3.0 Web API. You will learn how to push your project files to a GitHub repository and how to show its build status, how to create a YAML-based Azure Pipeline to continuously build and publish the container image, how to use the same pipeline to continuously deploy the container image to Azure Kubernetes Service, and how to use Azure Pipelines Environments to get the state and history of your deployments.

There are a few things you're going to need to follow this video: Git, which we will use to push our files to GitHub; a GitHub account, since we will store all our files there so that the Azure Pipeline can pull them; and an Azure subscription, since that's where we're going to have a few Azure resources, like the Azure Container Registry and the Azure Kubernetes Service cluster that will be the target of our pipeline. You will also need to follow the steps in my previous video, Deploying an ASP.NET Core 3.0 Web API on AKS, since in that video we created all the files that are going to be needed in this one. Optionally, you can also use Visual Studio Code as your code editor, which we will use pretty much just to add or move files around in our local system, and finally Postman, which you can use to more easily query your Web API.
In the previous video, we manually published our container image to an Azure Container Registry, and we also manually deployed the corresponding pod to our Azure Kubernetes Service cluster. In this video we will do the same, but in a fully automated way, via an Azure Pipeline. Before moving ahead, I have reverted those two manual steps. So if we now go to the Azure portal, which you can see right now, and look at our container registry, you'll see that there is no repository and no image here anymore. And if we switch to VS Code, where I have a terminal connected to our Kubernetes cluster, and check the pods via `kubectl get pods`, you'll see that there is no pod here anymore. So now let's see how we can come up with a pipeline that can deploy all of these things automatically for us.
The first thing we're going to do is move the deployment and service YAML files into a new folder; let's call it manifests. So let's put them there. We're doing that so that later on, when we create the pipeline, it will be very easy to tell in which location these manifest files live, so that the pipeline can apply them to the Kubernetes cluster.
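By the way, if you followed the previous video, those two files look roughly like this. This is a minimal sketch only; the resource names, image and port here are illustrative and may differ from what you created there:

```yaml
# manifests/deployment.yaml (from the previous video; names are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-aspnetcore
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-aspnetcore
  template:
    metadata:
      labels:
        app: hello-aspnetcore
    spec:
      containers:
      - name: hello-aspnetcore
        # The pipeline will substitute the freshly built image and tag here.
        image: julio1.azurecr.io/helloaspnetcore
        ports:
        - containerPort: 80
---
# manifests/service.yaml: exposes the pods via a public load balancer
apiVersion: v1
kind: Service
metadata:
  name: hello-aspnetcore
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: hello-aspnetcore
```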
Now, the next step is to get our code pushed into some remote Git repository, and we're going to use GitHub for this. But before we can do that, we need to turn this local directory into an actual Git repository, and to do that the only thing we have to do is run `git init`. And there it is: this is now turned into a Git repository, and you can tell by the color changes here.
One thing that we probably also want to do is add a .gitignore file, so that not all the files get added to the repository, but only the actual ones we care about; not all the generated binaries and temporary files. To do that, I went to the dotnet/core repository and found the .gitignore file that they use. This is usually how it goes: you find somebody else's .gitignore and you go from there. So I'll create a new file, .gitignore, and paste the contents. We actually don't need that much of what's in this file, so I'm going to clean up a few things. I actually like to keep the .vscode folder checked in, so I'll keep that; we don't need this one, we don't need these other ones, and this one at the very end we also get rid of. So now both the binaries and the obj directories will not get checked in. Now I'm going to switch to the Source Control hub, and here is where we can commit these changes locally. I'll just put in a message, "Add initial files", say yes, and now everything is committed locally.
Now it is time to come up with the remote repository, and like I said, we're going to use GitHub for that. I already created a GitHub account for myself here, and what I'm going to do is just go ahead and create a brand new repository. Let's call it hello-aspnet-core, and let's make it public, why not. Let's just create the repository. So now we have a brand new empty repository, and what we're going to do is add it as the remote origin of our local repository, so that Git knows how to map our local repo to this remote repository, and from there we can actually push our source files over there. So I'll just go ahead and do that: `git remote add origin` with the repository URL. It's done, and now we can do a push to the master branch, so everything that's local will now be published over there. If we just refresh this page, you'll see that all the files are now available on GitHub.
Now we can go ahead and get started with Azure Pipelines, and the place you want to go to get started is this page over here: azure.microsoft.com/services/devops/pipelines. Here you're going to find a "Start free with Pipelines" button, and that's what we're going to click. You may need to authenticate here, and at this point you're asked for a project name, so we'll go again with hello-aspnet-core. And since our source code is publicly on GitHub, let's also make our pipeline public, why not. So let's say Continue. This creates what Azure DevOps calls a DevOps organization, and inside that organization it creates the project we mentioned, called hello-aspnet-core. And now we are placed in the pipeline creation page.
In this page you can pick from a few places where the code that you're going to build and deploy from this pipeline lives. In our case that's going to be GitHub, so we're going to go for the GitHub option. But before that, before I forget, one thing you want to do, if you have not done it yet, is enable the feature that lets you use multi-stage pipelines, which at the point when I'm recording this video is still a preview feature. If that's the case for you too, just go to your profile here, click on the three dots, then Preview features, and enable Multi-stage pipelines right there. You need to do that in order to be able to see some pages that we're going to use ahead. So now we're going to pick the GitHub (YAML) option here.
This will show all the repositories that we have available on GitHub. It may prompt you for an additional authentication here; in my case it already has the sign-in info, so it showed the repositories right away. So let's pick our hello-aspnet-core repo here. Now it takes us to a page on the GitHub side, where it wants to add what GitHub calls a GitHub App to our repository. That will allow Azure Pipelines to get notifications and to interact in a richer way with the GitHub repository. So this page is pretty much for approving the installation of this app in our GitHub repository. We'll say Approve and install, and authenticate once again.
Now we are presented with a list of potential templates we can use to generate our initial pipeline. I'll go for "Deploy to Azure Kubernetes Service", because this pretty much covers most of what we want to do here. At this point you're prompted to select your Azure subscription; this is the subscription where you have created your Azure Container Registry and your Kubernetes service, so I'll pick that. Continue, and again we need to authenticate here. Next, at this point Azure Pipelines is able to retrieve information about what we have in that subscription, so it is able to tell the cluster that we may want to use, in this case julio1; the Kubernetes namespace that we want to interact with, which in our case will be just the default namespace; the container registry that we want to use, which as you may remember has the same name, julio1; and the image name, which in this case will be helloaspnetcore, the name of the image that we want to publish to the container registry. Now I'll click on Validate and configure.
As you can see, we get an initial Azure Pipelines YAML file configured for the options we just selected. Now, before we go into this pipeline file, one thing I want you to notice is what happened just now as we completed this little mini wizard. If you go to Project settings (let's actually open it in a new tab) and you go to Service connections, you'll see that a bunch of things got created here. These service connections are the way Azure DevOps and Azure Pipelines store credentials and connection info for different Azure resources. In this case, this one here is a connection to our Azure Container Registry, this one is a connection to our Azure Kubernetes Service cluster, and this one is a connection to our GitHub repository. So this is how Azure DevOps stores these secrets so that you can use them in the pipeline. Just be aware of that, because later on in the YAML we will refer to at least one of these explicitly. So let's close this tab.
The other thing that I'd like you to notice is that an environment got created via the wizard we just went through. So what are these environments? Let's go to the Environments section; I'll open another tab for this. Environments are a new way Azure Pipelines has to make a relationship between your pipeline and the place where this pipeline is actually going to deploy, if it's going to do a deployment. As you can see, it created this hello-aspnet-core environment, and if you go there, you're going to see that it knows about our Kubernetes cluster and the default namespace in there. If you keep going, we don't have anything there right now, but from here on out Azure Pipelines has knowledge of what's going on in that cluster, and as we do our deployments it will show all the resources that get deployed and the history of deployments, correlated to that Kubernetes cluster, plus a bunch of other interesting information, so that later on you can tell exactly what happened for each of our deployments into that Kubernetes cluster. It's pretty interesting stuff.
Now let's go back to the pipeline and start exploring the pipeline that just got generated for us. This Azure Pipelines YAML file is what we call configuration as code. The idea is that every single configuration change to your pipeline is stored in your repository alongside everything else. So if you make any change to the pipeline, that change is stored in the repository, and it could even go through a pull request and receive approvals and everything, so that no change goes untracked. It's really a very good practice, and that's what you get by going with this YAML file.
So what is all the information in this file? Let's just go one by one. First, the trigger. In this case the trigger means that any time something is pushed or merged into the master branch, this pipeline will automatically trigger; it will just go ahead and run automatically for you. You could configure any other branch or branches here that you want. I'm also going to remove this resources section, because honestly I don't know what it is meant to do, and we don't really need it, so let's remove that.
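After that cleanup, the top of the file looks roughly like this (a sketch of the generated YAML):

```yaml
# Run the pipeline automatically whenever something is pushed
# or merged into the master branch.
trigger:
- master
```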
Now, variables. This is a series of values that you can use across the other steps that come later on; things you may want to reuse in multiple places, so that you don't hard-code them in multiple places. You can store them here as variables and use them as many times as you need. The first one we have here is the Docker registry service connection which, as I mentioned when we went into the Service connections screen, is the one for Docker; what we're seeing here is a kind of unique identifier, a GUID, related to that connection. This will be used later on, down there, to specify which container registry this pipeline has to interact with or deploy stuff into. Next we have the image repository, which in our case is actually helloaspnetcore; I don't know where the dash went, but yes, this is the repository we want to use. Then the container registry, where we need to publish our container image. This one here, the Dockerfile path, is a minimatch expression that says: find all the files named Dockerfile in the code repository; that's what we're going to use to build the image before publishing it to ACR. Now, the tag is the tag that's going to be associated with the container image, and by default it offers the build ID. I actually like to use the build number for this; I find it much easier to correlate a build to the generated container via the build number. You can also do it with the build ID, but the build number is much more straightforward in my mind. So, build number. And this one here, the image pull secret, is what you would use if you need to tell your Kubernetes cluster how to connect to the container registry, in the case that the cluster doesn't already know how to talk to it. If you remember, in the previous video we actually configured the Kubernetes cluster with a service principal identity that already has access to the container registry. Since we did that, we don't need to create an additional image pull secret, so we'll remove that.
Next we have the name of the virtual machine image that we're going to use. Azure Pipelines offers a bunch of types of VMs based on different operating systems, Ubuntu Linux-based or Windows-based for example, and different versions of those operating systems. What we're saying here, and it is the default option, is: just go with whatever is the latest version of Ubuntu that Azure Pipelines offers. Remember, what this pipeline is going to do is have Azure Pipelines pick one of its virtual machines according to what we have specified here, and run the pipeline on that machine. So you have to specify what you really want to use as your machine for building and deploying your files.
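Putting it together, the variables section ends up looking something like this. This is a sketch: the GUID and the registry login server are placeholders for whatever the wizard generated in your case:

```yaml
variables:
  # GUID of the Docker registry service connection the wizard created.
  dockerRegistryServiceConnection: '<service-connection-guid>'
  imageRepository: 'helloaspnetcore'
  containerRegistry: 'julio1.azurecr.io'
  # Minimatch pattern: find any file named Dockerfile in the repo.
  dockerfilePath: '**/Dockerfile'
  # Tag images with the build number instead of the default build ID.
  tag: '$(Build.BuildNumber)'

  # Agent VM image name: latest Ubuntu image Azure Pipelines offers.
  vmImageName: 'ubuntu-latest'
```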
Now we get to the stages section. Stages are a way to group a bunch of jobs, and the steps inside them, that you want to run across your pipeline. The reason you'd want to use stages (you don't have to, you could just use jobs right away) is that with stages you can do extra things, like, for instance, enable approval checks that say: if you want to go ahead and deploy some container or some files into some environment, you first need to get a manual approval from somebody on my team or on some other team. That's what you can do with stages. In this case we have two stages: the first one is the build stage and the other one is the deploy stage. Let's go with the build stage first. This is the stage where we actually build the code, in this case where we build the container, hence the display name. We have one job here, which we also call Build, and here's what we specify: there's going to be a pool, which is the pool of virtual machines that is going to host the work of our build, and we're saying that the virtual machine image we want to use is the vmImageName we specified before, over here: ubuntu-latest. So this is how you use the variables you declared before.
And what are the steps in this job? There are really just two steps here. The first one is the Docker task, which is a task you would use, in this case, to build and push the image to the Azure Container Registry. The command we're going to use is buildAndPush; the repository is the imageRepository variable we defined before, which again is helloaspnetcore; the Dockerfile to use comes from the dockerfilePath variable, again specified before as just any Dockerfile; and the container registry is the Docker registry service connection we also specified before, which lives in the Service connections section we saw a moment ago. And the tag, as we said, is going to be the build number.
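So the build stage looks roughly like this. This is a sketch following the "Deploy to Azure Kubernetes Service" template; all the $(...) values come from the variables above:

```yaml
stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    # Build the container image from the Dockerfile and push it to ACR,
    # using the Docker registry service connection for credentials.
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
```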
Now, the other step here is the upload, or publish, of the manifests. In fact, upload is a deprecated word, as the editor is telling us, so I'll just use publish instead. What this does is find the manifests folder that we created (as you remember; if we switch quickly here, it will find this manifests folder right here) and create an artifact called manifests with the files that are inside that folder. This is needed so that the next stage, the deploy stage, which could potentially run on a different machine (it doesn't have to be the same machine), can automatically download these files and use them to do the deployment to Kubernetes. So that's the first stage, the build stage.
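That publish step is just these two lines at the end of the build job's steps (a sketch, indented to sit under the steps above):

```yaml
    # Publish the manifests folder as a pipeline artifact named "manifests",
    # so the deploy stage can download it on whatever agent it runs on.
    - publish: manifests
      artifact: manifests
```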
Then we go to the deploy stage. Here is where our continuous deployment part happens. As you can see, this stage needs to happen after the build stage, and that's why we have the dependsOn parameter here. Then we go to the jobs section, and notice that this job here is of type deployment. As specified by this word here, this is a special type of job that allows you to interact with Azure Pipelines environments. We just saw how there's already an environment there, the hello-aspnet-core environment, and by using a deployment job you are able to interact with that environment. If that environment happens to have any checks, any approvals that need to happen before you can deploy there, then by using a deployment type of job you are able to actually enforce such checks, among a few other things. So, OK: we have a job of type deployment, which is called Deploy, and we happen to be using the exact same VM image that we specified for the other stage, but you could totally use a completely different virtual machine image here if you wanted to, on a completely different operating system if needed. And then here's the environment that we're going to use: we're specifying that it's going to be the hello-aspnet-core environment, for the namespace default.
Then we get to the strategy section. The strategy specifies how you want to roll out the changes to your deployment environment, and at the time of this video there's really only one deployment strategy, called runOnce, which means that all the steps specified here just run one by one, sequentially. Eventually, the expectation is that there are going to be other strategies, like blue-green, canary and rolling, so that you can say things like: first deploy to 10% of the pods, and then to some other percentage; or: first go ahead and deploy to a separate set of pods entirely, verify that everything is working there, and then just switch or swap the active set of pods with the other set, and things like that. But today we'll just go ahead with runOnce, which is the one that's available.
Then we specify the steps. The first step here uses the Kubernetes manifest task, which is the task you can use to apply Kubernetes manifest files against a Kubernetes cluster. This first instance of the task, which is actually the one you would use for creating a secret, an image pull secret, we're not going to be using, because like I said before, our cluster already knows how to pull the images from ACR; it already has permissions via the service principal. So we don't need this task today. The one step that we are going to keep is, again, the Kubernetes manifest task, but in this case with the deploy action. This task is going to take those files that we uploaded in the previous stage into the manifests artifact; it's going to download them, both the deployment YAML and the service YAML files, from the manifests artifact. We will not use an image pull secret, like I said; we don't need that. And then it's going to go ahead and deploy using the container whose registry, image repository and tag we specified, all of them as variables. So it's going to deploy these manifest files, and it's going to be using this container image here. All right, so that's our pipeline.
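Put together, the deploy stage ends up looking something like this. Again a sketch: the environment name is whatever the wizard generated for you, shown here as hello-aspnet-core.default:

```yaml
- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  jobs:
  # A "deployment" job can target an Azure Pipelines environment,
  # picking up its checks, approvals and deployment history.
  - deployment: Deploy
    displayName: Deploy
    pool:
      vmImage: $(vmImageName)
    environment: 'hello-aspnet-core.default'
    strategy:
      runOnce:
        deploy:
          steps:
          # Apply the manifests downloaded from the "manifests" artifact,
          # substituting the image we just built and pushed.
          - task: KubernetesManifest@0
            displayName: Deploy to Kubernetes cluster
            inputs:
              action: deploy
              manifests: |
                $(Pipeline.Workspace)/manifests/deployment.yaml
                $(Pipeline.Workspace)/manifests/service.yaml
              containers: |
                $(containerRegistry)/$(imageRepository):$(tag)
```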
What we're going to do now is go ahead and click Save and run. There are two ways to go here: either we commit this directly to the master branch (like I said, this is configuration as code, so this entire thing is going to get checked in to the repository), or we create a new branch and start a pull request with these changes, so that other people can look at it, review it, provide approvals, and so on. In this case we will not go through the pull request flow, so we'll just go ahead and commit directly to the master branch. Save and run, and here we are.
At this point the pipeline has just started executing, and you can see there's a summary screen here showing the repository that will be used for this pipeline, the branch that will be used, the actual commit associated with this run, and its duration. Down here you can see the stages of this pipeline; like we said, there's a build stage and a deploy stage. The build stage just started: zero of one jobs have completed. We can actually click on this stage, and that pops up a log view of what's going on in it. As you can see, this is very similar to what we did manually before, but now everything is running remotely in the Azure Pipelines environment. It's building the container image, which will take a little bit. And now it is actually pushing the image to our Azure Container Registry. And finally it is publishing the manifest files, the deployment YAML and the service YAML, as what Azure DevOps calls a pipeline artifact, for the next stage. As you can see, the build stage is now completed, and the deploy stage should start soon. It's just starting; let's click here to see what's happening in the deploy stage. It's downloading the artifact, and now it is deploying to the Kubernetes cluster. As you can see, it is just running kubectl, just like we did locally; now it's running in the pipeline. And it is done.
Now let's go back to that first view that we had. We can see the two stages, the build stage and the deploy stage, and we can tell right away that everything went successfully. More than that, we can go to the environments section here and see the targeted environment, which was the hello-aspnet-core environment, and the job that just got executed, this Deploy job. We could click on it, although that would take us back to where we were before; but if we go to View environment, it takes us to that environment section again, and you can see right away the latest job that executed against our environment, which is very interesting info already. If we click into the default namespace, we can see the Kubernetes deployment that was created there; the same thing we would get from `kubectl get deployments` is right here, and it's showing that there's one out of one pods running. And I think we can click here; yes, we can see more details about this deployment: the exact image and tag that got deployed into the pod, and the pod or pods that are running right now. If we go back again to the environment, then into default, and go to Services, we can see that our hello-aspnet-core service has also been deployed, and we have an external IP that we can query if we want to. So let's copy this IP and switch to Postman, where I have already prepared the URL you would use to query the Web API. I'll just replace "externalIP" with the IP we copied over there, click Send, and here it is: we're getting results from our AKS-hosted Web API, right there.
Let's also go back here, one step back, and click on Deployments. Again, this is interesting, because you're able to see all the deployments that happened against this environment, in sequence. It's a very nice way to track what's going on with your environments; we just have one here, but you could have one for the dev environment, one for test, QA, integration, production, canary, all these environments. So it's very easy to track things over here.
Now, one last thing that we may want to do is surface information about this build that just happened (which, by the way, let's go back to it: this build here, which was successful) back into the GitHub repository. How about we make it so that we can see, right here on the GitHub page, what the current state of that build is? That is certainly very easy to do. Let's go back to where we have VS Code. The first thing we have to do is pull the latest version of this repository from the remote repository; in this case, let's do `git pull origin master`. This is important because we're going to make more changes in this local repository, and we want to make sure we have the new file that was added remotely, the azure-pipelines.yml, before we add any new files here. Now what we're going to do is add a new file; let's call it README.md, which, by the way, is always a very good idea to add to a GitHub repository, to describe what's going on in it. I'll just give it the title "Hello ASP.NET Core", and then we'll say that this is a "Sample ASP.NET Core 3.0 Web API project".
Now here is where we can add the status of the build. Where do we get the status? Let's go back to Azure Pipelines, and from the pipelines page let's go to our pipeline. If we open this "..." menu over there, there's a Status badge option. The status badge is the way for you to surface the status of the latest build of this pipeline, in this case succeeded. Down here there's a sample piece of Markdown, an image link pointing at the pipeline's status endpoint, that you can copy and then paste into your README file, for instance. If we look at the preview, on the right side you can see the status of the build right there; if it was a failed build, it would show that failed message instead. Now that we have this, let's save it and commit it ("Add readme file"), and then let's just go ahead and `git push origin master`. Now we go back to GitHub and refresh the repository, and we have a README file with the status of the latest build of the associated pipeline. Anybody who comes to this repository and wants to know whether all these files actually build, and whether everything is successful, can see that by just looking at this badge. And if they click it, it will take them straight into the latest build of this pipeline, if they want more details; and since everything is public here, they should be able to get there without authenticating.
If this video was useful, please consider hitting the like button, and don't forget to hit subscribe and the notification bell to know right away when I publish new videos. Also, please leave me a comment below with any thoughts about this video. Thanks for watching, and see you next time.