This is a full-day workshop designed to demonstrate key features and strengths of Azure Cosmos DB.
Features covered include, but are not limited to:
- Globally Distributed/Multi-Region Database
- Multi-Model/Multi-Homed API
- Elastic and Independent Scaling of Compute, Throughput and Storage
- Automatic Indexing
- Multiple Defined Consistency Levels [Strong, Bounded-Staleness, Session, Consistent Prefix, Eventual]
- Cosmos DB Change Feed (Event Bus)
- Apache Spark Connector for Cosmos DB Change Feed
- An Azure Account/Subscription
  - You can sign up for a free trial account here
  - If you are using an MSDN or Corporate Subscription, ensure you have permission to create Resource Groups and deploy services into your Account/Subscription
- A Mac, Linux, or Windows device to run the labs
- A Modern web browser [Chrome, Edge, Safari or Firefox]
- Docker
- VSCode
- Mongo CLI
- Gremlin CLI
- Cassandra CLI
- Node.js, Java and .NET Core installed (you can optionally run these in a Docker container)
- Azure Cosmos DB Overview (See Deck/Slides)
- Setting up a Cosmos DB instance with the SQL API
  - Azure Portal
  - Primary DB Model/API Settings
  - Geo-Replication
  - Setting Consistency Models
- Uploading Data to Cosmos DB with Azure Data Factory v2 and Blob Storage
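The provisioning steps above can be scripted instead of clicked through in the Portal. The sketch below uses the Azure CLI to create a resource group and a multi-region Cosmos DB account with Session consistency; the account name, resource group, and regions are placeholder values you would replace with your own (treat this as a config fragment, not a definitive deployment script).

```shell
# Create a resource group to hold the workshop resources (names are placeholders)
az group create --name cosmos-workshop-rg --location eastus

# Create a Cosmos DB account with two replicated regions and Session consistency.
# failoverPriority=0 marks the write region; additional regions are read replicas
# unless a failover is triggered.
az cosmosdb create \
  --name my-workshop-cosmos \
  --resource-group cosmos-workshop-rg \
  --default-consistency-level Session \
  --locations regionName=eastus failoverPriority=0 isZoneRedundant=false \
  --locations regionName=westeurope failoverPriority=1 isZoneRedundant=false
```

Changing `--default-consistency-level` to `BoundedStaleness`, `Strong`, `ConsistentPrefix`, or `Eventual` selects one of the five consistency models discussed in the session.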
-- Break --
- Inserting/Updating/Querying via the Azure Portal as the primary interface
- Demo: Accessing the same data via a different wire-protocol API (e.g. MongoDB and SQL)
- Demo: Deploying a Node.js Web API that accesses your Cosmos DB instance via the MongoDB API
- Advanced Features 1:
  - Indexing (Automatic and Custom)
  - Partitions/Partition Keys
  - Unique Keys
  - Request Units Explained
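The partition key discussion can be made concrete with a small local simulation. The sketch below (hypothetical document shapes and key names, no Cosmos SDK involved) shows how a low-cardinality key funnels every document into one logical partition, while a synthetic key such as `deviceId_date` spreads writes across many partitions:

```javascript
// Sketch: grouping documents by partition key to illustrate logical partitions.
// All field names and data are made-up workshop examples.
function groupByPartitionKey(docs, keyOf) {
  const partitions = new Map();
  for (const doc of docs) {
    const pk = keyOf(doc);
    if (!partitions.has(pk)) partitions.set(pk, []);
    partitions.get(pk).push(doc);
  }
  return partitions;
}

const docs = [
  { deviceId: "d1", date: "2019-01-01", country: "US" },
  { deviceId: "d1", date: "2019-01-02", country: "US" },
  { deviceId: "d2", date: "2019-01-01", country: "US" },
];

// Low-cardinality key: all three documents land in one logical partition,
// concentrating the RU throughput demand on a single partition.
const byCountry = groupByPartitionKey(docs, d => d.country);

// Synthetic key: writes spread across three logical partitions.
const bySynthetic = groupByPartitionKey(docs, d => `${d.deviceId}_${d.date}`);

console.log(byCountry.size);   // 1
console.log(bySynthetic.size); // 3
```

Since provisioned Request Units are divided across physical partitions, a key that concentrates traffic like `country` above turns one partition into a throughput bottleneck even when the container as a whole has RUs to spare.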
-- Lunch --
- Advanced Features 2:
  - Stored Procedures
  - Triggers
  - User Defined Functions (UDFs)
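UDFs are ordinary JavaScript functions registered on a collection and invoked from SQL queries. The sketch below is a hypothetical example (the function name and bracket thresholds are made up for the workshop), which would be called in a query as `SELECT udf.taxBracket(c.income) FROM c`:

```javascript
// Sketch of a Cosmos DB User Defined Function (UDF).
// UDFs run server-side per document during query evaluation; the
// thresholds here are invented workshop values, not a real tax table.
function taxBracket(income) {
  if (income < 1000) return "low";
  if (income < 10000) return "medium";
  return "high";
}
```

Unlike stored procedures and triggers, a UDF has no access to the database context (`getContext()`); it is a pure function of its arguments, which is why it can also be unit-tested locally in plain Node.js.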
- Overview of the Cosmos DB Change Feed (See Deck/Slides)
- Deploy an Azure Functions App that monitors and reacts to changes/events in a Cosmos DB collection via the Cosmos DB Change Feed
- Connecting an Apache Spark Cluster to Cosmos DB with Databricks on Azure