Azure DevOps Fundamentals
35-day structured program covering Git, CI/CD, Containers, IaC, and Kubernetes — with summary notes and hands-on commands for every day.
Course Modules
Click any module to explore day-by-day notes and hands-on labs.
Git & Version Control
Branching, merging, rebase, Azure Repos, branch policies.
Build & Code Quality
Maven, .NET builds, SonarQube analysis and reporting.
CI/CD Foundations
Azure Pipelines, YAML, service connections, .NET & Java pipelines.
Artifacts & Advanced CI/CD
Azure Artifacts feeds, multibranch pipelines, triggers.
Containers & Docker
Dockerfile, volumes, registries, Docker Swarm, overlay networks.
Cloud & IaC
Azure fundamentals, Terraform, ARM templates, Bicep.
Kubernetes & AKS
Pods, Deployments, PVs, AKS setup, Helm charts, monitoring.
Goal: Strong Git fundamentals and repo management in Azure DevOps.
Version Control Basics — Centralized vs Distributed, Git Setup
Centralized VCS (CVCS)
Single server holds all versions. Examples: SVN, TFS. Single point of failure.
Distributed VCS (DVCS)
Every developer has a full copy of the repo. Examples: Git, Mercurial. Works offline.
Why Git?
Fast, branching is cheap, widely adopted, great Azure DevOps integration.
Git Workflow
Working Directory → Staging Area (Index) → Local Repo → Remote Repo
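The four areas can be traced live with git status. A throwaway sketch (repo location, file name, and messages are invented here; needs git 2.28+ for `init -b`):

```shell
# Trace a file through Git's areas in a scratch repo
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.name "Demo" && git config user.email "demo@example.com"

echo "draft" > notes.txt        # 1. working directory only
git status --short              # "?? notes.txt" (untracked)

git add notes.txt               # 2. moved into the staging area (index)
git status --short              # "A  notes.txt" (staged)

git commit -q -m "Add notes"    # 3. recorded in the local repo
git log --oneline               # shows the new commit

# 4. a remote repo would receive it with: git push origin main
```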
Install Git
# Mac
brew install git
git --version
# Windows
winget install --id Git.Git
git --version
Configure Git Identity
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
git config --global core.editor "code --wait"
git config --list
Git init, Commits, Staging, History & References
Three Areas
Working tree (modified), Index/Stage (staged), HEAD (committed). Understand these three areas before running any git command.
Commit Object
SHA-1 hash, author, timestamp, message, pointer to tree and parent commit.
HEAD
A pointer to the current branch tip — moves forward with each commit.
Git References
Branch = movable pointer to commit. Tag = fixed pointer. HEAD = current position.
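These pointers can be inspected directly. A quick sketch in a scratch repo (names are invented; needs git 2.28+ for `init -b`):

```shell
# Branches move, tags stay put (scratch repo)
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.name "Demo" && git config user.email "demo@example.com"

git commit -q --allow-empty -m "First"
cat .git/HEAD                   # "ref: refs/heads/main" (HEAD tracks the branch)
git tag -a v0.1 -m "Fixed pointer"

git commit -q --allow-empty -m "Second"
git rev-parse main              # branch pointer moved to the new commit
git rev-parse "v0.1^{commit}"   # tag still resolves to "First"
```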
# Initialise repo
mkdir myproject && cd myproject
git init
# Stage and commit
echo "Hello" > index.txt
git add index.txt # stage specific file
git add . # stage everything
git status
git commit -m "Initial commit"
# History
git log
git log --oneline --graph --all
# Unstage a file
git restore --staged index.txt
# Show what changed in a commit
git show HEAD
git diff HEAD~1 HEAD
Branching — Feature Branches, Git Flow, Trunk-Based
Git Flow
main → develop → feature/bugfix/hotfix branches. Good for scheduled releases.
Trunk-Based Development
All devs commit to main/trunk frequently. Short-lived feature flags. Preferred for CI/CD.
Feature Branch
Isolated work, merged via PR. Keeps main always deployable.
Branch Naming
Common prefixes: feature/, bugfix/, hotfix/, release/ — e.g. feature/login.
# Create and switch branches
git branch feature/login
git checkout feature/login
# or in one step:
git checkout -b feature/login
# List branches
git branch -a
# Make a commit on feature branch
echo "login page" > login.txt
git add . && git commit -m "Add login page"
# Switch back to main
git checkout main
# See branch graph
git log --oneline --graph --all --decorate
Merging, Tagging & Resolving Conflicts
Fast-Forward Merge
No divergence — branch pointer just moves forward. No merge commit created.
3-Way Merge
Both branches diverged. Creates a merge commit with two parents.
Conflict Markers
<<<<<<< marks your current branch's version, ======= separates the two sides, >>>>>>> marks the incoming version. Edit the file, remove the markers, stage and commit.
Tags
Lightweight = pointer only. Annotated = full object with message, author, date. Use annotated for releases.
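The markers are easiest to understand by manufacturing a conflict on purpose. A disposable sketch (branch, file, and value names are invented; needs git 2.28+ for `init -b`):

```shell
# Create and resolve a conflict in a scratch repo
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.name "Demo" && git config user.email "demo@example.com"

echo "colour: blue" > config.txt
git add . && git commit -q -m "Base"

git checkout -q -b feature/colour
echo "colour: green" > config.txt && git commit -q -am "Prefer green"

git checkout -q main
echo "colour: red" > config.txt && git commit -q -am "Prefer red"

# Both branches changed the same line -> merge stops with a conflict
git merge feature/colour || cat config.txt   # file now shows <<<<<<< ======= >>>>>>>

# Resolve: keep one side, remove the markers, stage, commit
echo "colour: green" > config.txt
git add config.txt && git commit -q -m "Resolve merge conflict"
```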
# Merge feature branch into main
git checkout main
git merge feature/login # fast-forward if possible
git merge --no-ff feature/login # always create merge commit
# Resolve a conflict
# After conflict: edit file, remove markers
git add conflicted-file.txt
git commit -m "Resolve merge conflict"
# Tags
git tag v1.0.0 # lightweight
git tag -a v1.0.0 -m "Release 1.0.0" # annotated
git tag # list tags
git push origin v1.0.0 # push tag
git push origin --tags # push all tags
# Delete a branch after merge
git branch -d feature/login
Advanced Git — Rebase, Stash, Squash, Rewriting History
Rebase
Re-applies commits on top of another branch. Creates clean linear history. Never rebase shared/public branches.
Squash
Combine multiple commits into one before merging. Keeps main history clean.
Stash
Temporarily shelve uncommitted work. Use when switching contexts without committing.
Amend
Fix the last commit message or add forgotten files. Only safe on local commits not yet pushed.
# Rebase feature onto main
git checkout feature/login
git rebase main
# Interactive rebase — squash last 3 commits
git rebase -i HEAD~3
# In editor: change 'pick' to 'squash' or 's' for commits to merge
# Stash
git stash # stash current changes
git stash list # list stashes
git stash pop # apply and drop latest stash
git stash apply stash@{1} # apply specific stash
# Amend last commit
git commit --amend -m "Better message"
# Reset (use carefully)
git reset --soft HEAD~1 # undo commit, keep staged
git reset --mixed HEAD~1 # undo commit, unstage files
git reset --hard HEAD~1 # undo commit, discard changes
SSH Key Gen, Clone, Pull/Push/Fetch, Branching Strategies
fetch vs pull
fetch downloads changes without merging. pull = fetch + merge. Prefer fetch → review → merge.
origin
Default remote name. Can have multiple remotes (e.g., upstream for forked repos).
SSH vs HTTPS
SSH uses key-pair auth — no password prompts. HTTPS uses PAT/credentials.
Tracking Branch
Local branch linked to a remote branch. Git knows where to push/pull automatically.
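The fetch → review → merge flow can be rehearsed with a local directory standing in for the remote (all paths here are throwaway; needs git 2.28+ for `init -b`):

```shell
# fetch vs pull, using a local path as the "remote"
remote=$(mktemp -d) && cd "$remote"
git init -q -b main
git config user.name "Demo" && git config user.email "demo@example.com"
git commit -q --allow-empty -m "First"

work=$(mktemp -d)
git clone -q "$remote" "$work/repo"               # origin points at $remote

git commit -q --allow-empty -m "Upstream change"  # the remote moves on

cd "$work/repo"
git fetch -q origin                   # download only; local main untouched
git log --oneline main..origin/main   # review what came in (one commit)
git merge -q origin/main              # integrate; fetch + merge == pull
```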
# Generate SSH key
ssh-keygen -t ed25519 -C "you@example.com"
cat ~/.ssh/id_ed25519.pub # copy this to Azure DevOps
# Clone via SSH
git clone git@ssh.dev.azure.com:v3/org/project/repo
# Remote operations
git remote -v # list remotes
git remote add origin <url>
git fetch origin # download without merging
git pull origin main # fetch + merge
git push origin feature/login # push branch
git push -u origin feature/login # push + set tracking
# See remote branches
git branch -r
git branch -a
Azure DevOps Repos — Creation, Policies, PR Validations
Branch Policies
Protect main: require PR, minimum reviewers, linked work items, build validation.
PR Validation Build
Triggers a pipeline on every PR — ensures code builds before merge.
Secure Repo
Disable force push, require signed commits, set permissions per team.
Azure DevOps vs GitHub
Azure DevOps: enterprise features, tight ADO pipeline integration. GitHub: open-source friendly, Actions ecosystem.
# Install Azure DevOps extension
az extension add --name azure-devops
# Configure defaults
az devops configure --defaults \
organization=https://dev.azure.com/yourorg \
project=YourProject
# Create repo
az repos create --name my-repo
# List repos
az repos list --output table
# Create PR
az repos pr create \
--repository my-repo \
--source-branch feature/login \
--target-branch main \
--title "Add login page" \
--description "Adds login functionality"
# List PRs
az repos pr list --output table
Goal: Understand build tools and integrate code quality gates into CI pipelines.
Maven — pom.xml, Lifecycle, Plugins
Maven Lifecycle
validate → compile → test → package → verify → install → deploy
pom.xml
Project Object Model. Defines dependencies, plugins, build config, artifact coordinates (groupId, artifactId, version).
Plugins
Surefire (tests), Compiler, JAR, Shade (fat jar). Each lifecycle phase runs plugin goals.
Repository
Local (~/.m2), Central (Maven Central), Remote (Nexus, Azure Artifacts).
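A minimal pom.xml tying these pieces together. The coordinates (com.example:myapp) and the Surefire version pin are illustrative placeholders, not files from the course:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- Artifact coordinates: groupId + artifactId + version -->
  <groupId>com.example</groupId>
  <artifactId>myapp</artifactId>
  <version>1.0.0</version>
  <packaging>jar</packaging>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.13.2</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <!-- Surefire runs unit tests during the `test` phase -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>3.2.5</version>
      </plugin>
    </plugins>
  </build>
</project>
```

Pinning plugin versions keeps builds reproducible across machines and agents.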
# Install Maven
brew install maven # Mac
# or download from maven.apache.org
mvn --version
# Create a new project
mvn archetype:generate \
-DgroupId=com.dpworld \
-DartifactId=myapp \
-DarchetypeArtifactId=maven-archetype-quickstart \
-DinteractiveMode=false
cd myapp
# Maven commands
mvn compile # compile source
mvn test # run tests
mvn package # create JAR
mvn clean package # clean then package
mvn install # install to local repo
# Skip tests (use sparingly)
mvn package -DskipTests
.NET Build Basics + Azure DevOps Integration
dotnet CLI
Cross-platform CLI for .NET. Build, run, test, publish, restore.
.csproj
Project file defining SDK, framework, dependencies (NuGet packages), output type.
NuGet
Package manager for .NET — equivalent to Maven Central. Azure Artifacts hosts private NuGet feeds.
Build Output
bin/Release/net8.0/ — contains DLLs, exe, publish folder. Self-contained or framework-dependent.
# Create .NET app
dotnet new webapi -n MyApi
cd MyApi
# Restore, build, test, publish
dotnet restore
dotnet build
dotnet test
dotnet publish -c Release -o ./out
# Run locally
dotnet run
# Add NuGet package
dotnet add package Newtonsoft.Json
# List packages
dotnet list package
SonarQube — Architecture & Installation
What SonarQube Does
Static code analysis — detects bugs, code smells, vulnerabilities, duplications, and test coverage.
Quality Gate
Pass/fail threshold for merging. Fails if coverage <80%, critical bugs >0, etc. Integrates with PR policies.
Architecture
Source Code → SonarScanner (runs analysis in the pipeline) → SonarQube Server (UI + DB)
Key Metrics
Reliability (bugs), Security (vulnerabilities), Maintainability (code smells), Coverage, Duplications.
# Run SonarQube via Docker (easiest for lab)
docker run -d \
--name sonarqube \
-p 9000:9000 \
-e SONAR_ES_BOOTSTRAP_CHECKS_DISABLE=true \
sonarqube:community
# Access at http://localhost:9000
# Default login: admin / admin
# Change password on first login
SonarQube in Practice — Analysis, Reports, Management
- Create a project in SonarQube UI and generate a token.
- Run SonarScanner from pipeline — passes token + project key + server URL.
- Results appear in SonarQube UI — issues categorised by severity.
- Configure Quality Gate to block PR/merge on failure.
- Add SonarQube Azure DevOps extension from Marketplace for native integration.
steps:
  - task: SonarQubePrepare@5
    inputs:
      SonarQube: 'SonarQube-ServiceConnection'
      scannerMode: 'CLI'
      configMode: 'manual'
      cliProjectKey: 'my-project'
      cliSources: '.'
  - script: dotnet build
  - task: SonarQubeAnalyze@5
  - task: SonarQubePublish@5
    inputs:
      pollingTimeoutSec: '300'
Goal: Build end-to-end CI/CD pipelines for .NET and Java projects using Azure Pipelines.
CI/CD Concepts — Jenkins, GitLab, Azure DevOps Comparison
CI (Continuous Integration)
Auto build + test on every commit. Goal: catch bugs early. Merge to main frequently.
CD (Continuous Delivery)
Auto release to staging — manual approval for prod. Always in deployable state.
CD (Continuous Deployment)
Auto release to production — no manual gate. Requires mature testing culture.
Tool Comparison
Jenkins: self-hosted, plugin-heavy. GitLab CI: built-in, GitLab-only. Azure Pipelines: managed, multi-repo, tight Azure integration.
Azure Pipelines Basics — Classic + YAML
Classic Pipeline
GUI-based drag-and-drop. Easier to start. Stored in Azure DevOps — not version controlled.
YAML Pipeline
Code-as-configuration stored in repo. Version controlled, reviewable, reusable via templates.
Agent
Machine that runs pipeline jobs. Microsoft-hosted (cloud VMs) or self-hosted (your server).
Trigger
What starts the pipeline: push to branch, PR, scheduled, manual, or another pipeline.
trigger:
  - main
pool:
  vmImage: ubuntu-latest
steps:
  - script: echo "Hello from Azure Pipelines!"
    displayName: 'First step'
  - script: |
      echo "Multi-line script"
      echo "Current directory: $(pwd)"
    displayName: 'Multi-line step'
YAML Deep Dive — Stages, Jobs, Steps
Stage
Logical division of pipeline work. Build → Test → Deploy. Stages run sequentially by default.
Job
Unit of work that runs on one agent. Jobs within a stage can run in parallel.
Step
Individual task or script inside a job. Runs sequentially.
dependsOn
Control stage/job ordering. A stage can depend on multiple stages.
The example below covers dependsOn, condition, and environment approvals.
trigger:
  - main
pool:
  vmImage: ubuntu-latest
stages:
  - stage: Build
    jobs:
      - job: BuildApp
        steps:
          - script: echo "Building..."
          - script: dotnet build
  - stage: Test
    dependsOn: Build
    jobs:
      - job: RunTests
        steps:
          - script: dotnet test --logger trx
          - task: PublishTestResults@2
            inputs:
              testResultsFormat: VSTest
              testResultsFiles: '**/*.trx'
  - stage: Deploy
    dependsOn: Test
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - deployment: DeployProd
        environment: production
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "Deploying to prod"
Service Connections & Secure Service Principals
Service Connection
Secure credential store in Azure DevOps for connecting to external services (Azure, Docker, SonarQube, etc.).
Service Principal
App identity in Azure AD. Has Client ID, Tenant ID, Client Secret or Certificate. Grants pipeline access to Azure resources.
Workload Identity Federation
Modern approach — no secret rotation needed. Uses OIDC token exchange. Preferred over client secret.
Scopes
Grant minimum permissions. Contributor on a resource group is usually sufficient.
# Create service principal with Contributor on resource group
az ad sp create-for-rbac \
--name "sp-devops-pipeline" \
--role Contributor \
--scopes /subscriptions/<sub-id>/resourceGroups/rg-prod
# Output contains appId, password, tenant
# Add these to Azure DevOps Service Connection
# Use service connection in pipeline
# Project Settings → Service connections → New → Azure Resource Manager
- task: AzureCLI@2
  inputs:
    azureSubscription: 'MyAzureServiceConnection'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az group list -o table
Classic Releases, Environments & Pipeline Libraries
Environment
Logical target for deployment (dev, staging, prod). Has approval gates, checks, and deployment history.
Variable Groups
Shared variables across pipelines. Link to Key Vault for secrets. Use $(variableName) syntax.
Pipeline Templates
Reusable YAML snippets — template: path/to/template.yml. DRY principle for pipelines.
Approval Gates
Manual or automated checks before deploying to an environment. Configured per environment.
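A sketch of the template pattern the card above describes; the file name (templates/build-dotnet.yml) and parameter are hypothetical:

```yaml
# templates/build-dotnet.yml (hypothetical reusable snippet)
parameters:
  - name: buildConfiguration
    type: string
    default: Release
steps:
  - script: dotnet build --configuration ${{ parameters.buildConfiguration }}
    displayName: Build (${{ parameters.buildConfiguration }})

# azure-pipelines.yml consumes it (shown commented so this stays one document):
# steps:
#   - template: templates/build-dotnet.yml
#     parameters:
#       buildConfiguration: Debug
```

Template parameters are resolved at compile time with ${{ }}, unlike runtime $() variables.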
variables:
  - group: my-variable-group # link variable group
  - name: appName
    value: 'myapp'
stages:
  - stage: DeployDev
    jobs:
      - deployment: Deploy
        environment: dev # environment with optional approval
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "Deploying $(appName) to dev"
  - stage: DeployProd
    dependsOn: DeployDev
    jobs:
      - deployment: Deploy
        environment: production # requires approval gate configured in portal
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "Deploying to prod"
Complete CI/CD Pipeline for .NET Project
- Restore NuGet packages → Build → Run unit tests → Publish test results → Publish build artifacts → Deploy.
- Use the PublishBuildArtifacts task to pass compiled output between stages.
- Use DownloadBuildArtifacts in the release/deploy stage.
- Container-based deployment: push to ACR, pull in deploy stage.
trigger:
  - main
pool:
  vmImage: ubuntu-latest
variables:
  buildConfiguration: Release
stages:
  - stage: Build
    jobs:
      - job: Build
        steps:
          - task: DotNetCoreCLI@2
            displayName: Restore
            inputs:
              command: restore
              projects: '**/*.csproj'
          - task: DotNetCoreCLI@2
            displayName: Build
            inputs:
              command: build
              arguments: '--configuration $(buildConfiguration)'
          - task: DotNetCoreCLI@2
            displayName: Test
            inputs:
              command: test
              arguments: '--configuration $(buildConfiguration) --collect "Code coverage"'
          - task: DotNetCoreCLI@2
            displayName: Publish
            inputs:
              command: publish
              arguments: '--configuration $(buildConfiguration) --output $(Build.ArtifactStagingDirectory)'
          - task: PublishBuildArtifacts@1
            inputs:
              pathToPublish: $(Build.ArtifactStagingDirectory)
              artifactName: drop
Complete CI/CD Pipeline for Java Project
- Maven lifecycle maps directly to pipeline steps: mvn clean package compiles, tests, and packages in one command.
- Publish **/surefire-reports/*.xml as test results (JUnit format).
- Archive the JAR from target/ as a build artifact.
- Java version matrix builds: test on Java 11 and 17 simultaneously using parallel jobs.
trigger:
  - main
pool:
  vmImage: ubuntu-latest
steps:
  - task: JavaToolInstaller@0
    inputs:
      versionSpec: '17'
      jdkArchitectureOption: x64
      jdkSourceOption: PreInstalled
  - task: Maven@3
    displayName: Build and Test
    inputs:
      mavenPomFile: pom.xml
      goals: clean package
      publishJUnitResults: true
      testResultsFiles: '**/surefire-reports/TEST-*.xml'
      javaHomeOption: JDKVersion
      jdkVersionOption: '1.17'
  - task: CopyFiles@2
    inputs:
      contents: '**/target/*.jar'
      targetFolder: $(Build.ArtifactStagingDirectory)
  - task: PublishBuildArtifacts@1
    inputs:
      artifactName: jar-output
Goal: Dependency management with Azure Artifacts and enterprise-grade pipeline patterns.
Azure Artifacts — Feeds, Creating & Promoting
Feed
Private package repository. Supports NuGet, npm, Maven, Python (pip), Cargo.
Upstream Sources
Feed can proxy public registries (npmjs, Maven Central, NuGet.org). Caches packages for reliability.
Views
@local (all), @prerelease, @release. Promote packages between views to control what consumers get.
Retention
Set retention policies to auto-delete old package versions and save storage.
- task: NuGetAuthenticate@1
- task: DotNetCoreCLI@2
  displayName: Pack NuGet
  inputs:
    command: pack
    packagesToPack: '**/*.csproj'
    versioningScheme: byBuildNumber
- task: DotNetCoreCLI@2
  displayName: Push to Feed
  inputs:
    command: push
    packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'
    nuGetFeedType: internal
    publishVstsFeed: 'MyProject/my-feed'
Using Feeds in Pipelines & Securing Feeds
- Use the NuGetAuthenticate or npmAuthenticate task before restore — the pipeline agent auto-gets a token.
- Permissions: Feed Owner, Contributor, Collaborator, Reader. Assign the build service identity as Contributor.
- Dependency confusion attacks: set allowExternalVersions: false on upstream sources.
- Immutable packages: once published, a version cannot be overwritten.
- task: NuGetAuthenticate@1
- task: DotNetCoreCLI@2
  inputs:
    command: restore
    feedsToUse: select
    vstsFeed: 'MyProject/my-feed'
CI/CD Deep Dive — Continuous Build, Deploy, Test, Delivery
Deployment Strategies
Blue-Green, Canary, Rolling, Recreate. AKS and App Service support these natively.
Blue-Green
Two identical environments. Switch traffic after new version validated. Instant rollback by switching back.
Canary
Gradually shift % of traffic to new version. Monitor metrics before full rollout.
Gate Checks
Pre/post deployment gates: Azure Monitor alerts, REST API calls, Query Work Items, Invoke Azure Function.
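As a sketch of blue-green on Kubernetes (the myapp names and version labels are invented): run two Deployments labelled version: blue and version: green, and let the Service selector act as the traffic switch.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
spec:
  selector:
    app: myapp
    version: blue   # flip to "green" once the new version is validated
  ports:
    - port: 80
      targetPort: 8080
```

Flipping selector.version to green moves all traffic at once; changing it back to blue is the instant rollback.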
Multibranch Pipelines, Triggers, Permissions, Notifications
- Branch filters on triggers: include/exclude specific branches or patterns.
- Path filters: only trigger when files in /src change, not /docs.
- PR triggers: a separate pr: block from trigger:.
- Pipeline permissions: who can queue, approve, edit. Set at org, project, or pipeline level.
- Notifications: send Teams/email on pipeline failure via service hooks.
trigger:
  branches:
    include:
      - main
      - release/*
    exclude:
      - feature/*
  paths:
    include:
      - src/
    exclude:
      - docs/
pr:
  branches:
    include:
      - main
  paths:
    include:
      - src/
Goal: Containerize applications and integrate Docker into DevOps pipelines.
Virtualization vs Containerization — Docker Intro & Install
VM vs Container
VMs virtualise hardware (full OS). Containers share host kernel — lightweight, start in seconds, portable.
Docker Architecture
Docker Engine (daemon) → Images → Containers. Docker Hub = public registry. Docker CLI talks to daemon via socket.
Image vs Container
Image = read-only template (blueprint). Container = running instance of an image.
Layers
Images are built in layers. Each instruction in Dockerfile = one layer. Layers are cached and shared.
# Install Docker Desktop (Mac/Windows) from docker.com
docker version
docker info
# Run first container
docker run hello-world
docker run -it ubuntu bash # interactive
docker run -d nginx # detached
# List containers
docker ps # running
docker ps -a # all including stopped
Docker Basics — Images, Containers, Volumes
Volumes
Persist data outside container lifecycle. Bind mount = host path. Named volume = Docker managed. Preferred for databases.
Port Mapping
-p hostPort:containerPort. Container 80 → host 8080: -p 8080:80.
Environment Variables
-e KEY=VALUE at runtime. Prefer env files or secrets management over hardcoding.
Container Lifecycle
create → start → running → stop → remove. docker rm removes container, docker rmi removes image.
# Image management
docker pull nginx:alpine
docker images
docker image inspect nginx:alpine
docker rmi nginx:alpine
# Container with port and volume
docker run -d \
--name myapp \
-p 8080:80 \
-v $(pwd)/html:/usr/share/nginx/html \
-e MY_VAR=hello \
nginx:alpine
# Exec into running container
docker exec -it myapp sh
# Logs
docker logs myapp -f
# Stop and remove
docker stop myapp && docker rm myapp
# Named volume
docker volume create mydata
docker run -d -v mydata:/data postgres
Dockerfile, Multi-Container Apps, Networks
Multi-Stage Build
Build in one stage (with SDK), copy artifact to lean runtime image. Drastically reduces final image size.
Docker Compose
Define multi-container apps in docker-compose.yml. Services, networks, volumes in one file.
Docker Networks
Bridge (default), Host, None, Custom. Containers on same custom network reach each other by name.
.dockerignore
Exclude files from build context — like .gitignore. Speeds up build and reduces image bloat.
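A typical .dockerignore; the entries below are illustrative defaults rather than project requirements:

```text
# .dockerignore — keep these out of the build context
.git
**/bin/
**/obj/
**/node_modules/
docs/
Dockerfile
.dockerignore
```

A smaller context uploads faster to the daemon and avoids cache-busting on irrelevant file changes.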
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/out
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app/out .
EXPOSE 8080
ENTRYPOINT ["dotnet", "MyApi.dll"]
version: '3.8'
services:
  api:
    build: .
    ports: ["8080:8080"]
    depends_on: [db]
    environment:
      - ConnectionString=Host=db;Database=mydb
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=secret
volumes:
  pgdata:
docker build -t myapp:1.0 .
docker compose up -d
docker compose logs -f
docker compose down
Pushing Images — Registries & Docker Swarm Basics
ACR (Azure Container Registry)
Private Docker registry on Azure. Geo-replication, image scanning (Defender for Containers), Tasks for auto-build.
Image Tagging
registry/repo:tag. Always tag with version + latest. Never deploy latest tag to prod.
Docker Swarm
Docker-native clustering. Manager nodes + Worker nodes. Simple setup but Kubernetes is preferred for production.
Service vs Container
In Swarm, a Service defines desired state (replicas, image, ports). Swarm maintains that state.
# Create ACR
az acr create --name myregistry --resource-group rg-lab --sku Basic
# Login
az acr login --name myregistry
# Tag and push
docker tag myapp:1.0 myregistry.azurecr.io/myapp:1.0
docker push myregistry.azurecr.io/myapp:1.0
# List images in ACR
az acr repository list --name myregistry -o table
# Docker Swarm init (single node)
docker swarm init
docker service create --name web --replicas 3 -p 80:80 nginx
docker service ls
docker service scale web=5
Advanced Docker — Overlay Networks, Stack Deployments
Overlay Network
Spans multiple Swarm nodes. Containers on different hosts communicate as if on same network. Uses VXLAN.
Docker Stack
Deploy compose file to Swarm. docker stack deploy. Multi-service app as a single unit.
Secrets in Swarm
docker secret create. Secrets mounted as files in containers at /run/secrets/. Encrypted at rest and in transit.
Rolling Updates
Swarm updates service replicas one at a time. Configure update-parallelism and update-delay.
# Create overlay network
docker network create --driver overlay my-overlay
# Deploy stack from compose file
docker stack deploy -c docker-compose.yml mystack
# Manage stack
docker stack ls
docker stack ps mystack
docker stack services mystack
docker stack rm mystack
# Secrets
echo "mysecretpassword" | docker secret create db_password -
docker service create \
--name db \
--secret db_password \
  postgres:16
Goal: Automate Azure infrastructure provisioning using Terraform, ARM, and Bicep.
Azure Fundamentals — Compute, Storage, Networking, Identity
Resource Hierarchy
Management Group → Subscription → Resource Group → Resource. RBAC and policies apply at any level.
Compute
VMs, AKS, App Service, Container Instances, Functions. Choose based on control vs managed tradeoff.
Networking
VNet, Subnet, NSG, Application Gateway, Azure Firewall. VNets are isolated by default — peering links them.
Identity (Entra ID)
Users, Groups, Service Principals, Managed Identity. RBAC assigns roles (Owner, Contributor, Reader) to principals.
az login
az account list -o table
az account set --subscription "<id>"
# Create resource group
az group create --name rg-lab --location uaenorth
# List resources
az resource list --resource-group rg-lab -o table
# RBAC — assign role
az role assignment create \
--assignee <user-or-sp-objectId> \
--role Contributor \
--scope /subscriptions/<sub-id>/resourceGroups/rg-lab
Terraform Basics — Variables, Blocks, Commands
HCL
HashiCorp Configuration Language. Declarative — describe desired state, Terraform figures out how.
Provider
Plugin that knows how to talk to a platform (Azure, AWS, etc.). Configured in required_providers.
State
Terraform tracks deployed resources in terraform.tfstate. State must match reality — store remotely in Azure Blob.
Plan vs Apply
plan = dry run (what will change). apply = make it so. Always review plan before applying.
Note: terraform plan/apply in Azure Pipelines are exam topics.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}
provider "azurerm" {
  features {}
}
variable "location" {
  default = "uaenorth"
}
resource "azurerm_resource_group" "rg" {
  name     = "rg-terraform-lab"
  location = var.location
  tags = {
    managedBy = "terraform"
  }
}
output "rg_id" {
  value = azurerm_resource_group.rg.id
}
terraform init
terraform plan
terraform apply
terraform show
terraform destroy
Terraform Advanced — Modules, Remote State, Workspaces
Modules
Reusable Terraform configs. Call with module "name" { source = "./modules/network" }. Like functions for infra.
Remote State Backend
Store tfstate in Azure Blob Storage — enables team collaboration, locking, versioning.
Workspaces
Multiple state files from one config. Use for dev/staging/prod environments.
Data Sources
Read existing resources not managed by Terraform. data "azurerm_resource_group" "existing" {}
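A combined sketch of modules and data sources; the module path, address space, and the rg-shared group are placeholders, not resources from the course:

```hcl
# Call a local module (like a function call for infra)
module "network" {
  source        = "./modules/network"   # hypothetical module directory
  location      = var.location
  address_space = ["10.0.0.0/16"]
}

# Read an existing resource group that Terraform does not manage
data "azurerm_resource_group" "existing" {
  name = "rg-shared"
}

# Use data source attributes when placing new resources
resource "azurerm_network_security_group" "nsg" {
  name                = "nsg-app"
  location            = data.azurerm_resource_group.existing.location
  resource_group_name = data.azurerm_resource_group.existing.name
}
```

Data sources are read-only: destroying this config never touches rg-shared itself.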
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "tfstatestorage"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}
terraform workspace new dev
terraform workspace new prod
terraform workspace select dev
terraform workspace list
ARM Templates & Bicep — Parameters, Loops, Pipeline Integration
ARM vs Bicep
ARM = verbose JSON. Bicep = cleaner DSL that compiles to ARM. Both are Azure-native with full resource support.
copy loop
Deploy multiple resources using copy in ARM or [for ... in ...] in Bicep. Replaces repetition.
ARM Deployment Modes
Incremental (default) = add/update only. Complete = deletes resources not in template. Use Complete carefully.
Bicep Modules
Decompose into reusable files. main.bicep calls modules. Same concept as Terraform modules.
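A minimal Bicep for-loop under assumed names (the storage-account names and API version are illustrative); ARM expresses the same idea with a copy block:

```bicep
param location string = resourceGroup().location
param accountNames array = ['stdevdemo001', 'ststgdemo001', 'stprddemo001']

// One declaration, three storage accounts
resource storage 'Microsoft.Storage/storageAccounts@2023-01-01' = [for name in accountNames: {
  name: name
  location: location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
}]
```

The loop variable can also carry an index: [for (name, i) in accountNames: ...].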
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: Resource Group
    azureResourceManagerConnection: 'MyServiceConnection'
    subscriptionId: '$(subscriptionId)'
    action: Create Or Update Resource Group
    resourceGroupName: rg-prod
    location: uaenorth
    templateLocation: Linked artifact
    csmFile: infra/main.bicep
    csmParametersFile: infra/main.bicepparam
    deploymentMode: Incremental
Real-World IaC Design — Deploying Azure Resources via Terraform/ARM
- GitOps for IaC: IaC code lives in the repo; a PR triggers terraform plan, merge triggers apply.
- Drift detection: schedule terraform plan nightly — alert if the output is not empty.
- Secrets in IaC: never in code. Use Key Vault references or pipeline variables marked secret.
- Layered architecture: Foundation (VNet, IAM) → Platform (AKS, DB) → App (namespaces, configs). Deploy in order.
stages:
  - stage: Plan
    jobs:
      - job: TerraformPlan
        steps:
          - task: TerraformInstaller@1
            inputs:
              terraformVersion: 'latest'
          - task: TerraformTaskV4@4
            displayName: Terraform Init
            inputs:
              provider: azurerm
              command: init
              backendServiceArm: 'MyServiceConnection'
              backendAzureRmResourceGroupName: rg-tfstate
              backendAzureRmStorageAccountName: tfstatestorage
              backendAzureRmContainerName: tfstate
              backendAzureRmKey: prod.tfstate
          - task: TerraformTaskV4@4
            displayName: Terraform Plan
            inputs:
              provider: azurerm
              command: plan
              environmentServiceNameAzureRM: 'MyServiceConnection'
  - stage: Apply
    dependsOn: Plan
    jobs:
      - deployment: TerraformApply
        environment: production
        strategy:
          runOnce:
            deploy:
              steps:
                - task: TerraformTaskV4@4
                  inputs:
                    provider: azurerm
                    command: apply
                    environmentServiceNameAzureRM: 'MyServiceConnection'
Goal: Deploy and orchestrate containerized applications with Kubernetes and Azure Kubernetes Service.
Kubernetes Intro — Namespaces, Pods, ReplicaSets, Deployments
Pod
Smallest deployable unit. One or more containers sharing network and storage. Ephemeral by nature.
ReplicaSet
Ensures N replicas of a pod are always running. Rarely created directly — use Deployments.
Deployment
Manages ReplicaSets. Handles rolling updates and rollbacks declaratively.
Namespace
Virtual cluster within a cluster. Isolates resources by team/app/environment. RBAC applies per namespace.
Option 1 — Minikube (Recommended for Beginners)
# Install minikube
brew install minikube
# Start cluster (uses Docker as driver)
minikube start --driver=docker --cpus=2 --memory=4096
# Check status
minikube status
# Get cluster info
kubectl cluster-info
kubectl get nodes # shows 1 node: minikube
# Useful minikube commands
minikube dashboard # opens Kubernetes dashboard in browser
minikube stop # stop cluster (preserves state)
minikube delete # delete cluster completely
minikube ssh # SSH into the minikube VM/container
# Install minikube (Windows)
winget install Kubernetes.minikube
# Start cluster
minikube start --driver=docker --cpus=2 --memory=4096
# Verify
kubectl get nodes
minikube status
Enable Virtualization in BIOS if you hit driver errors on Windows when running minikube start.
Option 2 — kind (Kubernetes in Docker) — Fastest Startup
# Install kind
brew install kind # Mac
# Windows: winget install Kubernetes.kind
# Create single-node cluster
kind create cluster --name lab-cluster
# Multi-node cluster (1 control plane + 2 workers)
cat <<EOF > kind-multi.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF
kind create cluster --name multi-node --config kind-multi.yaml
Option 3 — k3d (Lightweight k3s in Docker) — Best for CI/CD
# Install k3d
brew install k3d # Mac
# Windows: winget install rancher.k3d
# Create cluster
k3d cluster create lab-cluster \
--agents 2 \
--port "8080:80@loadbalancer"
# List clusters
k3d cluster list
# Stop / start
k3d cluster stop lab-cluster
k3d cluster start lab-cluster
# Delete
k3d cluster delete lab-cluster
Comparison — Which to Use?
| Tool | Best For | Speed | Multi-node | Dashboard |
|---|---|---|---|---|
| Minikube | Learning, beginners | Medium | Yes (addons) | Built-in |
| kind | CI/CD pipelines, testing | Fast | Yes (config file) | Manual |
| k3d | Lightweight prod-like setup | Very fast | Yes | Manual |
Install kubectl (if not already installed)
# Mac
brew install kubectl
# Windows
winget install Kubernetes.kubectl
# Verify
kubectl version --client
Deploy Your First App on Local Cluster
# Context
kubectl config get-contexts
kubectl config use-context my-cluster
# Namespaces
kubectl create namespace myapp
kubectl get namespaces
# Deploy an app
kubectl create deployment nginx --image=nginx:alpine -n myapp
kubectl get deployments -n myapp
kubectl get pods -n myapp
kubectl get pods -n myapp -o wide
# Scale
kubectl scale deployment nginx --replicas=3 -n myapp
# Rolling update
kubectl set image deployment/nginx nginx=nginx:1.25 -n myapp
kubectl rollout status deployment/nginx -n myapp
kubectl rollout history deployment/nginx -n myapp
kubectl rollout undo deployment/nginx -n myapp
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myregistry.azurecr.io/myapp:1.0
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "250m"
              memory: "256Mi"
Persistent Volumes, Services & AKS Setup with Azure DevOps
Service Types
ClusterIP (internal only), NodePort (node IP:port), LoadBalancer (Azure LB with public IP), ExternalName.
PV / PVC
PersistentVolume = actual storage. PersistentVolumeClaim = request for storage. Pod mounts PVC. Azure Disk/File are common PV types.
StorageClass
Dynamic provisioning. Default in AKS: managed-csi (Azure Disk). For shared access use azurefile.
AKS + Azure DevOps
Use KubernetesManifest task. Connects via service connection (Kubernetes type). Deploy, bake (Helm), and validate manifests.
# Create AKS cluster
az aks create \
--resource-group rg-lab \
--name aks-lab \
--node-count 2 \
--node-vm-size Standard_D2s_v5 \
--generate-ssh-keys
# Get credentials
az aks get-credentials --resource-group rg-lab --name aks-lab
kubectl get nodes
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-pvc
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: managed-csi
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
- task: KubernetesManifest@1
  inputs:
    action: deploy
    connectionType: azureResourceManager
    azureSubscriptionConnection: 'MyServiceConnection'
    azureResourceGroup: rg-lab
    kubernetesCluster: aks-lab
    namespace: myapp
    manifests: |
      k8s/deployment.yaml
      k8s/service.yaml
    containers: |
      myregistry.azurecr.io/myapp:$(Build.BuildId)
Helm Charts, Monitoring AKS & DevOps Integration
Helm
Package manager for Kubernetes. Chart = templated Kubernetes manifests. Values file overrides defaults per environment.
Chart Structure
Chart.yaml (metadata), values.yaml (defaults), templates/ (manifests with Go templates).
Helm Release
An installed chart instance. helm upgrade --install = idempotent install. helm rollback reverts release.
AKS Monitoring
Container Insights (Azure Monitor) + Log Analytics. Tracks node CPU/memory, pod restarts, logs. Dashboard in Azure portal.
Understand the helm upgrade --install pattern and how values overrides work per environment.
# Install Helm
brew install helm # Mac
winget install Helm.Helm # Windows
# Create a chart
helm create myapp
# Install chart
helm install myapp ./myapp --namespace myapp
# Upgrade with values override
helm upgrade myapp ./myapp \
--namespace myapp \
--set image.tag=$(Build.BuildId) \
--set replicaCount=3
# Rollback
helm rollback myapp 1
# List releases
helm list -A
- task: HelmDeploy@0
  inputs:
    connectionType: Azure Resource Manager
    azureSubscription: 'MyServiceConnection'
    azureResourceGroup: rg-lab
    kubernetesCluster: aks-lab
    namespace: myapp
    command: upgrade
    chartType: FilePath
    chartPath: helm/myapp
    releaseName: myapp
    overrideValues: 'image.tag=$(Build.BuildId),replicaCount=3'
    install: true
# Enable monitoring on AKS
az aks enable-addons \
--addons monitoring \
--name aks-lab \
--resource-group rg-lab \
--workspace-resource-id /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>
# View live pod logs
kubectl logs -f deployment/myapp -n myapp
# Top node/pod
kubectl top nodes
kubectl top pods -n myapp