Are you migrating applications from on-prem to Azure? Do you know what you’re doing? I thought I did. After spending years developing for SharePoint Server, I started migrating applications for a government client to Azure Government. In this session, I’ll share the issues I ran into, as well as my solution, to hopefully spare you the hours of research and influx of gray hairs as you start migrating. We’ll cover topics such as: databases, DevOps, managing multiple environments, and more.
Not quite the conversation/situation when getting moved to new client
DoD client with multiple web applications to migrate from on-prem servers to, ideally, PaaS in Azure Government (AzGov)
Not always full PaaS: some lift-and-shift, some hybrid PaaS/IaaS
Projects
First project was all C# and SQL work
2nd was VB: migrating Telerik OpenAccess to Entity Framework, plus ARM templates
C# & VB in an ASP.NET Web Site project (yeah, that old nightmare!) – PRO TIP: Sonar doesn’t like mixed-language projects. Checkmarx worked.
Became one of the 2 main DevOps Engineers on the client contract for the next 6-ish apps
Hub-and-spoke architecture
No connections back to on-prem
Makes data migrations more difficult
Workarounds required for accessing data that hasn’t migrated
Blobs
Large, unstructured data store
Primarily used to replace user-uploaded content that lived on a file server
File Shares
Large data
Mountable to VM
Shared some PS scripts for VMs early on; our framework later replaced a lot of them
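For context, mounting an Azure file share on a Windows VM is a two-line affair. A minimal sketch (the storage account name, share name, and drive letter here are hypothetical; note the AzGov file endpoint suffix `file.core.usgovcloudapi.net`):

```
REM Persist the storage account credentials, then mount the share as Z:
cmdkey /add:mystorageacct.file.core.usgovcloudapi.net /user:AZURE\mystorageacct /pass:<storage-account-key>
net use Z: \\mystorageacct.file.core.usgovcloudapi.net\appdata /persistent:yes
```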
SAS Tokens
Configurable way to limit access to all or part of a storage account
Token is a URL query string (see framework)
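The shape of a SAS-protected URL, annotated (account, container, and blob names are hypothetical; the endpoint suffix is the AzGov blob domain):

```
https://mystorageacct.blob.core.usgovcloudapi.net/uploads/report.pdf
    ?sv=2021-06-08            <- storage service version
    &st=2023-01-01T00:00:00Z  <- start time
    &se=2023-01-02T00:00:00Z  <- expiry time
    &sp=r                     <- permissions (read-only)
    &sig=<HMAC-SHA256 signature>
```

Append the token's query string to the resource URL and the caller gets exactly the access the token grants, until it expires.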
Security
Can lock down access to specific vnets, subnets, IPs
Portal
Show SAS, Networking
SPOILERS: Data Migration Assistant makes life a bit easier
Unsupported features: Linked Servers (options: SQL Managed Instance (SMI) or elastic queries), SQL Agent Jobs (options: custom code or SMI), T-SQL gaps (system procs like sp_send_dbmail, cross-database queries, the USE statement)
Deprecated Field Types (ntext, text, image) – fairly minor
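The cross-database query gap is the one that bites most code. The elastic-query workaround exposes the remote table as an external table. A sketch, with hypothetical server/database/table names (and a master key must exist before the scoped credential can be created):

```sql
-- Run once in the Azure SQL database that needs remote data
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL ElasticCred
    WITH IDENTITY = 'sqluser', SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE RemoteAppDb
    WITH (TYPE = RDBMS,
          LOCATION = 'myserver.database.usgovcloudapi.net',
          DATABASE_NAME = 'OtherAppDb',
          CREDENTIAL = ElasticCred);

-- Shape must match the remote table's columns
CREATE EXTERNAL TABLE dbo.RemoteOrders (
    OrderId INT,
    Status  NVARCHAR(20)
) WITH (DATA_SOURCE = RemoteAppDb);

-- Works where [OtherAppDb].dbo.Orders would fail in Azure SQL Database
SELECT * FROM dbo.RemoteOrders WHERE Status = 'Open';
```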
How to get data from on-prem to PaaS
Managed Instance backup issue
Data Migration Assistant
SMIs can’t run PowerShell job steps, though
Really hampers the dev setup where Dev VMs keep local copies of the databases to dirty up
YAML isn’t a framework/language, but a data format like JSON or XML
Different implementations for different services (AZDO, GitLab, etc)
Build servers were managed by another team so couldn’t update
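What an Azure DevOps YAML pipeline looks like, for comparison with the classic designer and with GitLab later. A minimal sketch (the pool name is hypothetical; UseDotNet@2 is a stock AZDO task):

```yaml
# Minimal AZDO YAML build pipeline
trigger:
  - main

pool:
  name: ClientManagedAgents   # build agents owned by another team

steps:
  - task: UseDotNet@2
    inputs:
      packageType: sdk
      version: '6.x'
  - script: dotnet build --configuration Release
    displayName: Build
```

The `task:` entries reference AZDO's task catalog, which is the part that doesn't translate to other services' YAML.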
AAA (Azure Automation Account): Azure resource that can execute PS runbooks to deploy resources
Only resource provisioning. App deployment was still pipeline tasks
Classic pipeline exports reference some resources (e.g., task groups) by ID; have to update them after import
Artifactory
1 client didn’t allow CLI uploads, so artifacts had to be uploaded manually…not so continuous
Can’t use CLI in AAA
Idempotence
ARM, Terraform, some CLI
PowerShell: must check that most resources exist before trying to create/modify them
TF has state awareness – won’t deploy/update unchanged resources; double-edged: manual changes will be reverted
MS also offers Bicep. Looks like Terraform. Haven’t played with it yet.
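The existence-check pattern that raw PowerShell forces on you looks like this. A minimal sketch using Az module cmdlets (resource group, plan name, and region are hypothetical):

```powershell
# Raw PowerShell isn't idempotent: check before create
$rg = Get-AzResourceGroup -Name 'app-rg' -ErrorAction SilentlyContinue
if (-not $rg) {
    New-AzResourceGroup -Name 'app-rg' -Location 'usgovvirginia'
}

$plan = Get-AzAppServicePlan -ResourceGroupName 'app-rg' -Name 'app-plan' -ErrorAction SilentlyContinue
if (-not $plan) {
    New-AzAppServicePlan -ResourceGroupName 'app-rg' -Name 'app-plan' `
        -Location 'usgovvirginia' -Tier 'Standard'
}
```

ARM and Terraform do this check for you; in a runbook you write it yourself for every resource type.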
GitLab
can connect using CLI or PowerShell, but have to share connection creds via params
YAML implementation differs from AZDO. No task definitions. All command-line.
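The same build expressed as GitLab CI YAML: no task catalog, every step is a script line. A sketch with hypothetical job names and paths (the image is the public .NET SDK image):

```yaml
stages:
  - build

build-app:
  stage: build
  image: mcr.microsoft.com/dotnet/sdk:6.0
  script:
    - dotnet restore
    - dotnet build --configuration Release
  artifacts:
    paths:
      - bin/Release/
```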
Ansible
Small sample size during PoC
Limited supported resources using dated demo extension/project
AAA for DSC
Manage VM configs using PS DSC
AAA has Node Mgmt functionality with push capabilities
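A minimal DSC configuration of the sort you'd compile in the Automation Account and assign to a node (the configuration name and baseline contents are hypothetical; WindowsFeature and Service are built-in DSC resources):

```powershell
Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        # Ensure IIS is installed
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
        # Keep the web service running
        Service W3SVC {
            Name        = 'W3SVC'
            State       = 'Running'
            StartupType = 'Automatic'
        }
    }
}
```

The Automation Account compiles this to a MOF and pushes/pulls it to registered nodes, re-applying it if a VM drifts.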
https://www.appliedis.com/things-i-wish-i-knew-before-migrating-apps-to-azure-automation-and-deployment/