---
title: Using a database server running as a container
description: .NET Microservices Architecture for Containerized .NET Applications | Using a database server running as a container
keywords: Docker, Microservices, ASP.NET, Container
author: CESARDELATORRE
ms.author: wiwagn
ms.date: 05/26/2017
ms.prod: .net-core
ms.technology: dotnet-docker
ms.topic: article
---
You can have your databases (SQL Server, PostgreSQL, MySQL, etc.) on regular standalone servers, in on-premises clusters, or in PaaS services in the cloud like Azure SQL DB. However, for development and test environments, having your databases running as containers is convenient, because you do not have any external dependencies and simply running the docker-compose command starts the whole application. Having those databases as containers is also great for integration tests, because the database is started in the container and is always populated with the same sample data, so tests can be more predictable.
In eShopOnContainers, there is a container named sql.data defined in the docker-compose.yml file that runs SQL Server for Linux with all the SQL Server databases needed for the microservices. (You could also have one SQL Server container for each database, but that would require more memory assigned to Docker.) The important point in microservices is that each microservice owns its related data, therefore its related SQL database in this case. But the databases can be anywhere.
The SQL Server container in the sample application is configured with the following YAML code in the docker-compose.yml file, which is executed when you run docker-compose up. Note that the YAML code has consolidated configuration information from the generic docker-compose.yml file and the docker-compose.override.yml file. (Usually you would separate the environment settings from the base or static information related to the SQL Server image.)
```yml
  sql.data:
    image: microsoft/mssql-server-linux
    environment:
      - SA_PASSWORD=your@password
      - ACCEPT_EULA=Y
    ports:
      - "5434:1433"
```
The following docker run command can run that container:
```console
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=your@password' -p 1433:1433 -d microsoft/mssql-server-linux
```
However, if you are deploying a multi-container application like eShopOnContainers, it is more convenient to use the docker-compose up command so that it deploys all the required containers for the application.
When you start this SQL Server container for the first time, the container initializes SQL Server with the password that you provide. Once SQL Server is running as a container, you can update the database by connecting through any regular SQL connection, such as from SQL Server Management Studio, Visual Studio, or C# code.
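For example, the following minimal sketch connects from C# code through host port 5434, which the compose file shown earlier maps to the container's port 1433. The connection string values are assumptions based on the settings above, not code from eShopOnContainers:

```csharp
using System;
using System.Data.SqlClient;

class SqlContainerSmokeTest
{
    static void Main()
    {
        // Host port 5434 maps to port 1433 inside the sql.data container,
        // per the ports entry in docker-compose.yml shown earlier.
        var connectionString =
            "Server=localhost,5434;User Id=sa;Password=your@password;";
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var command = new SqlCommand("SELECT @@VERSION", connection))
            {
                // Prints the SQL Server version string reported by the container.
                Console.WriteLine(command.ExecuteScalar());
            }
        }
    }
}
```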
The eShopOnContainers application initializes each microservice database with sample data by seeding it with data on startup, as explained in the following section.
Having SQL Server running as a container is not just useful for a demo where you might not have access to an instance of SQL Server. As noted, it is also great for development and testing environments so that you can easily run integration tests starting from a clean SQL Server image and known data by seeding new sample data.
- **Run the SQL Server Docker image on Linux, Mac, or Windows**
  https://docs.microsoft.com/sql/linux/sql-server-linux-setup-docker
- **Connect and query SQL Server on Linux with sqlcmd**
  https://docs.microsoft.com/sql/linux/sql-server-linux-connect-and-query-sqlcmd
To add data to the database when the application starts up, you can add code like the following to the Configure method in the Startup class of the Web API project:
```csharp
public class Startup
{
    // Other Startup code...
    public void Configure(IApplicationBuilder app,
        IHostingEnvironment env,
        ILoggerFactory loggerFactory)
    {
        // Other Configure code...
        // Seed data through our custom class
        CatalogContextSeed.SeedAsync(app)
            .Wait();
        // Other Configure code...
    }
}
```
The following code in the custom CatalogContextSeed class populates the data.
```csharp
public class CatalogContextSeed
{
    public static async Task SeedAsync(IApplicationBuilder applicationBuilder)
    {
        var context = (CatalogContext)applicationBuilder
            .ApplicationServices.GetService(typeof(CatalogContext));
        using (context)
        {
            // Apply any pending EF Core migrations before seeding.
            context.Database.Migrate();
            // Seed only when the tables are empty, so repeated startups
            // do not duplicate data.
            if (!context.CatalogBrands.Any())
            {
                context.CatalogBrands.AddRange(
                    GetPreconfiguredCatalogBrands());
                await context.SaveChangesAsync();
            }
            if (!context.CatalogTypes.Any())
            {
                context.CatalogTypes.AddRange(
                    GetPreconfiguredCatalogTypes());
                await context.SaveChangesAsync();
            }
        }
    }

    static IEnumerable<CatalogBrand> GetPreconfiguredCatalogBrands()
    {
        return new List<CatalogBrand>()
        {
            new CatalogBrand() { Brand = "Azure" },
            new CatalogBrand() { Brand = ".NET" },
            new CatalogBrand() { Brand = "Visual Studio" },
            new CatalogBrand() { Brand = "SQL Server" }
        };
    }

    static IEnumerable<CatalogType> GetPreconfiguredCatalogTypes()
    {
        return new List<CatalogType>()
        {
            new CatalogType() { Type = "Mug" },
            new CatalogType() { Type = "T-Shirt" },
            new CatalogType() { Type = "Backpack" },
            new CatalogType() { Type = "USB Memory Stick" }
        };
    }
}
```
When you run integration tests, it is useful to have a way to generate data that is consistent with those tests. Being able to create everything from scratch, including an instance of SQL Server running on a container, is great for test environments.
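The predictable seed data makes assertions like the following possible. This is a minimal sketch; the endpoint path and port are hypothetical and depend on how the catalog API is published in your environment:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Xunit;

public class CatalogIntegrationTests
{
    [Fact]
    public async Task GetCatalogBrands_ReturnsSeededBrands()
    {
        // Hypothetical host/port; substitute the address where the
        // containerized catalog API is exposed in your environment.
        using (var client = new HttpClient())
        {
            var response = await client.GetAsync(
                "http://localhost:5101/api/v1/catalog/catalogbrands");
            response.EnsureSuccessStatusCode();
            var json = await response.Content.ReadAsStringAsync();
            // The seed shown above always inserts the "Azure" brand,
            // so the assertion holds on a freshly created container.
            Assert.Contains("Azure", json);
        }
    }
}
```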
Another good choice when running tests is to use the Entity Framework InMemory database provider. You can specify that configuration in the ConfigureServices method of the Startup class in your Web API project:
```csharp
public class Startup
{
    // Other Startup code ...
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSingleton<IConfiguration>(Configuration);

        // DbContext using an InMemory database provider
        services.AddDbContext<CatalogContext>(opt => opt.UseInMemoryDatabase());

        // (Alternative: DbContext using a SQL Server provider)
        //services.AddDbContext<CatalogContext>(c =>
        //{
        //    c.UseSqlServer(Configuration["ConnectionString"]);
        //});
    }
    // Other Startup code ...
}
```
There is an important catch, though. The in-memory database does not enforce many constraints that are specific to a particular database. For instance, you might add a unique index on a column in your EF Core model and write a test to verify that the database does not let you add a duplicate value. But the in-memory database does not enforce unique indexes, so the test would not catch the violation. Therefore, the in-memory database does not behave exactly the same as a real SQL Server database: it does not emulate database-specific constraints.
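The following minimal sketch illustrates that caveat. It assumes a CatalogContext constructor that accepts DbContextOptions and a hypothetical unique index on CatalogBrand.Brand in the relational model; neither is taken verbatim from eShopOnContainers:

```csharp
using Microsoft.EntityFrameworkCore;
using Xunit;

public class InMemoryProviderCaveatTest
{
    [Fact]
    public void DuplicateValues_AreNotRejected_ByInMemoryProvider()
    {
        var options = new DbContextOptionsBuilder<CatalogContext>()
            .UseInMemoryDatabase("caveat-db") // named overload in EF Core 2.0+
            .Options;

        using (var context = new CatalogContext(options))
        {
            // If CatalogBrand.Brand had a unique index, SQL Server would
            // reject the second row; the InMemory provider accepts both.
            context.CatalogBrands.Add(new CatalogBrand { Brand = "Azure" });
            context.CatalogBrands.Add(new CatalogBrand { Brand = "Azure" });
            context.SaveChanges(); // No exception with the InMemory provider.
        }
    }
}
```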
Even so, an in-memory database is still useful for testing and prototyping. But if you want to create accurate integration tests that take into account the behavior of a specific database implementation, you need to use a real database like SQL Server. For that purpose, running SQL Server in a container is a great choice and more accurate than the EF Core InMemory database provider.
You can run Redis in a container, especially for development and testing and for proof-of-concept scenarios. This scenario is convenient, because you can have all your dependencies running on containers—not just for your local development machines, but for your testing environments in your CI/CD pipelines.
However, when you run Redis in production, it is better to look for a high-availability solution like Azure Redis Cache, which runs as a PaaS (Platform as a Service). In your code, you just need to change your connection strings.
Redis provides an official Docker image, which is available on Docker Hub at this URL:
https://hub.docker.com/_/redis/
You can directly run a Docker Redis container by executing the following Docker CLI command in your command prompt:
```console
docker run --name some-redis -d redis
```
The Redis image includes expose:6379 (the port used by Redis), so standard container linking will make it automatically available to the linked containers.
In eShopOnContainers, the basket.api microservice uses a Redis cache running as a container. That basket.data container is defined as part of the multi-container docker-compose.yml file, as shown in the following example:
```yml
#docker-compose.yml file
#...
  basket.data:
    image: redis
    expose:
      - "6379"
```
This code in the docker-compose.yml file defines a container named basket.data based on the redis image, with port 6379 exposed internally, meaning that it will be accessible only from other containers running within the Docker host.
Finally, in the docker-compose.override.yml file, the basket.api microservice for the eShopOnContainers sample defines the connection string to use for that Redis container:
```yml
  basket.api:
    environment:
      # Other data ...
      - ConnectionString=basket.data
      - EventBusConnection=rabbitmq
```
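Inside the microservice, that environment entry surfaces through ASP.NET Core configuration. The following is a minimal sketch of how the basket service might connect using StackExchange.Redis; the helper class and configuration key usage are illustrative, not the exact eShopOnContainers code:

```csharp
using Microsoft.Extensions.Configuration;
using StackExchange.Redis;

public static class RedisConnectionFactory
{
    public static ConnectionMultiplexer Connect(IConfiguration configuration)
    {
        // Environment entries in docker-compose become environment variables
        // inside the container, which ASP.NET Core configuration picks up.
        // Here the value resolves to "basket.data", the Redis container's name
        // on the Docker network; for Azure Redis Cache, only this connection
        // string value would change.
        var connectionString = configuration["ConnectionString"];
        return ConnectionMultiplexer.Connect(connectionString);
    }
}
```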
[!div class="step-by-step"]
[Previous](multi-container-applications-docker-compose.md)
[Next](integration-event-based-microservice-communications.md)