Using .NET Core, Docker, And Kubernetes Succinctly : Download Free Book


In a previous article, we talked about using Docker containers for development, underlining the advantages that containers can bring to the software development process. One of these advantages is a simpler deployment process, because container images can be deployed in a straightforward way.

If the application is composed of more than one container, deployment can be a little more complicated, because we need to manage the version of each container and the communication between them. The best solution for this is a container orchestrator, and today that means Kubernetes.

Kubernetes, or K8s, is an open-source container orchestrator started by Google and maintained today by the Cloud Native Computing Foundation. You can use it in your on-premises infrastructure or with one of the leading cloud providers such as Microsoft, Google, or Amazon. Once you understand the basic principles and components behind Kubernetes, you can ask it to deploy your containers, scale them as you want, connect them according to your requirements and security strategy, and update your application with the strategy that best fits your needs.

If you are interested in Docker, Kubernetes, and .NET Core and you want a practical guide to start working with these technologies, you can download the book that I wrote for Syncfusion for free at the link -netcore-docker-and-kubernetes-succinctly

I wrote this book with my typical practical approach, creating from scratch a .NET Core application that uses a SQL Server database, and using Docker for both the development and production environments. After creating the image, I explain in detail how to deploy the application on a Kubernetes cluster installed on our machine. In the last chapter, you can read how to deploy the same application on an AKS (Azure Kubernetes Service) cluster, customizing the data storage to use the solutions offered by Azure and benefiting from the monitoring tools offered by the Microsoft cloud.

Moreover, if you are in Naples on June 26th, we have organized, together with the Cloud Native Computing Foundation meetup of Naples, an evening to spend together talking about these technologies, starting with a technical session on Docker, Kubernetes, and Azure. All the details are here: -IT/cncfnapoli/events/261921731/

In a previous article of mine, I talked about microservices and how to authenticate an Angular client with them using IdentityServer as the authentication authority. In that case, I used the in-memory configuration to simplify the concepts, but in a real application we need to save the data in persistent storage, such as a database.

Since I used MongoDB in a project for some microservices, I thought it would be useful to use the same Mongo instance to save the IdentityServer data as well, obviously in a separate database; this had interesting implications that are worth telling. Moreover, in a real development scenario, you will never have all the microservices and clients in the same repository, so working on localhost can become a problem, but one easily solved with Docker and docker-compose. This aspect also has interesting implications, because it confronts you with some networking considerations that are easy to overlook on localhost.

Since we are going to save on Mongo some objects that are not ours but IdentityServer's, we must instruct the MongoDB driver to ignore the extra elements that the database engine adds when the documents are created, such as the _id property, since they have no matching property on those classes. To do this, I created a private method that I call before any data access, configureMongoDriverIgnoreExtraElements():
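The code for this method did not survive in this copy of the article; a minimal sketch of the idea, using the MongoDB C# driver's convention mechanism (the method name comes from the text, everything else is illustrative), could look like this:

```csharp
using MongoDB.Bson.Serialization.Conventions;

// Register a driver-wide convention so that extra document elements
// (such as _id) that have no matching property on the mapped classes
// are ignored during deserialization instead of throwing.
private static void configureMongoDriverIgnoreExtraElements()
{
    var pack = new ConventionPack { new IgnoreExtraElementsConvention(true) };

    // The filter (here: every type) decides which classes the pack applies to.
    ConventionRegistry.Register("IgnoreExtraElements", pack, type => true);
}
```

Registering the convention once, before the first read or write, is enough; the driver applies it to every class map it builds afterwards.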

Now we need to run the MongoDB daemon, so we can start configuring our docker-compose file, where for the moment we put only MongoDB. If you do not know Docker, you can read a previous article of mine or my free book, which also covers Kubernetes ( -netcore-docker-and-kubernetes-succinctly).

We modify the script to run the container with MongoDB and expose the port on localhost. In general, it is not a good idea to let Mongo save its data inside the container, but for our examples, and only for the development phase, we can avoid creating a volume for it:
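The compose script itself is missing from this copy; a minimal sketch for this first stage, assuming the service name mongodb used later in the text (image tag and port mapping are illustrative), might be:

```yaml
# docker-compose.yml - first stage: only MongoDB.
# No volume is declared, so data lives inside the container
# (acceptable here only because this is a development setup).
version: "3"
services:
  mongodb:
    image: mongo
    ports:
      - "27017:27017"   # exposed on localhost for the dotnet-run phase
```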

At this point we can run the script (docker-compose up) and, once it is up, launch the IdentityServer project with the classic dotnet run. If everything works correctly, you will see the Mongo database collections:

We do not need to expose the MongoDB port on localhost, because the identityserver container will connect to Mongo using the default network created by Docker, resolving the database address through the service name (mongodb). Despite that, I left the port forwarding in place for convenience, so that I can always connect with a client like Robo 3T to inspect the collections. Launching the docker-compose up command, the image for IdentityServer is created (only the first time) and then both containers are started; opening the browser at localhost:5000/account/login, we will see IdentityServer answer correctly:
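Putting the pieces described above together, the compose file at this stage might look like the following sketch (service names come from the text; the build path, image tag, and connection-string key are illustrative):

```yaml
# docker-compose.yml - second stage: MongoDB plus IdentityServer.
# identityserver reaches Mongo through the service name "mongodb"
# on Docker's default network; the host port mapping on mongodb is
# kept only so a client like Robo 3T can inspect the collections.
version: "3"
services:
  mongodb:
    image: mongo
    ports:
      - "27017:27017"
  identityserver:
    build: ./IdentityServer          # path to the IdentityServer Dockerfile
    ports:
      - "5000:5000"
    environment:
      - MongoConnection=mongodb://mongodb:27017
    depends_on:
      - mongodb
```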

Unfortunately, although the authentication page is reachable, we get a CORS error, which would seem to be due to the difference between the client (4200) and server (5000) ports. But why did the same example work without problems in the previous article? Although IdentityServer often responds with a CORS error for other kinds of errors too, such as a data access error, in this case the reported problem is the right one.

In the previous article, the origins of the registered clients were automatically added to the authorized origins, so localhost:4200 should be among them. However, if we consult the official documentation, in the CORS section we discover a side effect of not having used Entity Framework:
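The quoted documentation excerpt and the fix are missing from this copy. In short, without the Entity Framework stores, IdentityServer does not read the allowed origins from the registered clients automatically; one way around this, assuming IdentityServer4, is a custom ICorsPolicyService (class name and the hard-coded origin are illustrative):

```csharp
using System.Threading.Tasks;
using IdentityServer4.Services;

// Answers IdentityServer's CORS checks directly, replacing the
// client-origin lookup that the EF stores would otherwise provide.
public class CustomCorsPolicyService : ICorsPolicyService
{
    public Task<bool> IsOriginAllowedAsync(string origin)
    {
        // Allow the Angular dev server; in a real application, read
        // the allowed origins from configuration or the client store.
        return Task.FromResult(origin == "http://localhost:4200");
    }
}
```

The service would then be registered in the container (for example with services.AddSingleton&lt;ICorsPolicyService, CustomCorsPolicyService&gt;()) so IdentityServer picks it up instead of its default policy.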

This time everything works correctly, giving us access to the application immediately after authentication. If you try to launch the microservices too, you will see that they respond without problems. So are we finished? To complete the tour, we just have to dockerize the two microservices, making the communication parameters with IdentityServer configurable, since we are no longer on localhost but in the default Docker network:
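The configuration snippet is missing here; the idea is that each microservice reads the IdentityServer address from configuration instead of hard-coding localhost, so docker-compose can override it. A sketch, assuming JWT bearer authentication (the configuration key and audience name are illustrative):

```csharp
// In the microservice's Startup.ConfigureServices: the authority comes
// from configuration, so it can be http://localhost:5000 during local
// development and the Docker service name inside the compose network.
services.AddAuthentication("Bearer")
    .AddJwtBearer("Bearer", options =>
    {
        options.Authority = Configuration["IdentityServer:Authority"];
        options.RequireHttpsMetadata = false; // development only
        options.Audience = "api1";            // illustrative API scope
    });
```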

At this point we can use the same Dockerfile used for IdentityServer (they are both ASP.NET Core applications), changing only the name of the assembly to be launched in the ENTRYPOINT:
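The Dockerfile itself did not survive in this copy; a typical multi-stage build for an ASP.NET Core service of that era might look like the following sketch (base image tags, paths, and the assembly name are illustrative, only the ENTRYPOINT line differs between services):

```dockerfile
# Build stage: restore and publish the project.
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: only the published output is copied over.
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
WORKDIR /app
COPY --from=build /app .

# The only per-service change: the assembly to launch.
ENTRYPOINT ["dotnet", "Microservice1.dll"]
```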

Note that the authority address is no longer localhost:5000; instead we use the name of the service (identityserver) to resolve the IP of the IdentityServer container in the default network that Docker creates for us. We launch the docker-compose up command, which will take a little longer to create the images of the microservices, and test the invocations:
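In the compose script, the override described above can be expressed as an environment variable on each microservice; a sketch (service name, build path, and configuration key are illustrative, with the double underscore mapping to the ":" separator in .NET configuration):

```yaml
# Fragment of docker-compose.yml: each microservice gets the
# IdentityServer authority via the Docker service name rather
# than localhost.
services:
  microservice1:
    build: ./Microservice1
    environment:
      - IdentityServer__Authority=http://identityserver:5000
    depends_on:
      - identityserver
```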

Therefore, when generating the token requested by our client, IdentityServer used localhost:5000 as the issuer, while our microservice contacts IdentityServer to validate the token using the authority that we configured in the docker-compose script. In production we would have no problem, since the two would coincide, but in this hybrid case we have to force IdentityServer's hand by using a fixed IssuerUri:
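The configuration line is missing here; assuming IdentityServer4, the issuer can be pinned through the options passed to AddIdentityServer, so that tokens requested via localhost and validated via the Docker service name carry the same iss claim (the URI value is illustrative):

```csharp
// In IdentityServer's Startup.ConfigureServices: fix the issuer so it
// no longer depends on the host name the request arrived on.
services.AddIdentityServer(options =>
{
    options.IssuerUri = "http://identityserver:5000";
});
// ...followed by the existing store and signing-credential setup.
```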
