DX@Scale: Optimizing Salesforce Development and Deployment for large scale projects
Nov. 6, 2020
In this talk, Azlam Abdulsalam and Ramzi Akremi share their experiences from an ongoing Salesforce program: how they build, deploy, and maintain 20+ unlocked packages through a highly optimised pipeline.
1. DX@Scale
Optimising Salesforce Development
and Deployment for Large Scale
Projects
Azlam Abdulsalam
Technical Architect specialising in
Salesforce DevOps
@azlus
Ramzi Akremi
Software Engineering on Salesforce
@rakremi
2. Safe Harbour
Opinions are our own
They are not final as we are still
experimenting and learning from our
mistakes…
But so far, so good!
3. Release 1
Service Cloud
Core CRM
Case Management
Knowledge Base & Communities
Account Team Management
Core business domains
Telephony
Livechat
AWS
Sendgrid
9 inbound integrations
2 outbound integrations
7. A few principles we set at the beginning
The right amount of design upfront; listen to our
pains and act accordingly
Each feature is developed in an individual scratch
org
Keep an entire build below 30 minutes
Zero recurring manual steps
Be adamant about technical debt
Fully Traceable Artefacts
Full Deployment to an environment within 1 hour
O(n)
14. Roadmap
April 2020 Oct 2020 Feb 2021 Jun 2021
Discovery
Marketing Release 2
Sales Cloud / CPQ
Marketing Release 3
Billing
Jun 2020
COVID
Technical Release
Marketing Release 1
Service Cloud
15. • If it hits master, it goes to production
(eventually)
• We never use long-lived branches
• We feature toggle by not deploying
the artefact(s)
• We haven’t yet seen the need for a
finer-grained feature toggling
approach
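Artefact-level feature toggling amounts to filtering the release definition before deployment; a minimal sketch in Python (the package names and toggle map are hypothetical):

```python
def deployable_artifacts(artifacts, toggles):
    """Feature-toggle at the artefact level: an artefact ships only
    when its feature flag is on (flags default to on)."""
    return [a for a in artifacts if toggles.get(a, True)]

artifacts = ["core-crm", "case-mgmt", "new-telephony"]
toggles = {"new-telephony": False}  # feature not released yet
print(deployable_artifacts(artifacts, toggles))  # ['core-crm', 'case-mgmt']
```

The toggle lives in the release definition rather than in the code, which is why no finer-grained mechanism has been needed so far.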
18. If it hits master, it goes to production (eventually)
This means that everything that hits master is
validated (from a business perspective as well as a
technical one)
20. Business validation in Developer scratch org
Merge conflicts (if any), are resolved before the
PR is validated
Dedicated ephemeral CI environment, where we
execute:
Static code analysis using PMD (and CodeScan)
Validate packaging coverage for metadata
Validate test coverage for individual packages by
automatically detecting test classes in a package
Validate Data Packages
Deploy to a sandbox before building the package
to check upgrade behaviour (source deploy)
Pull request validation
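The automatic test-class detection mentioned above can be sketched as a scan for the @isTest annotation. This is a minimal illustration, not the actual sfpowerscripts implementation, and the file names are made up:

```python
import re

def find_test_classes(apex_sources: dict) -> list:
    """Return the names of Apex classes annotated with @isTest.

    apex_sources maps a file name to its source text; in the real
    pipeline these would be the .cls files in one package directory.
    """
    test_classes = []
    for name, source in apex_sources.items():
        # @isTest is case-insensitive in Apex and may carry
        # parameters, e.g. @isTest(SeeAllData=false)
        if re.search(r"@istest\b", source, re.IGNORECASE):
            test_classes.append(name.removesuffix(".cls"))
    return sorted(test_classes)

package = {
    "CaseService.cls": "public class CaseService { }",
    "CaseServiceTest.cls": "@isTest\nprivate class CaseServiceTest { }",
}
print(find_test_classes(package))  # ['CaseServiceTest']
```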
22. So far we have…
Solved the issue of scratch orgs taking a long
time to spin up
Developers are no longer slowed down
Still on the table: a pull request process that can
take quite some time across
Build
Deployment
Tests
23. Optimise Build Stage
• Build only the modified packages
• Build all other packages that depend on
a modified package
• Build packages in parallel where possible
• Do not build packages whose
dependencies have failed
• Discover/resolve dependencies between
packages automatically
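The selective-build rule above, rebuild a modified package plus everything that transitively depends on it, can be sketched like this (the package graph is hypothetical):

```python
def packages_to_build(deps, modified):
    """deps maps package -> list of packages it depends on.
    Return the modified packages plus everything that transitively
    depends on them, i.e. the minimal rebuild set."""
    # invert the graph: package -> packages that depend on it
    dependents = {p: [] for p in deps}
    for pkg, requires in deps.items():
        for dep in requires:
            dependents[dep].append(pkg)
    to_build, queue = set(modified), list(modified)
    while queue:
        for child in dependents[queue.pop()]:
            if child not in to_build:
                to_build.add(child)
                queue.append(child)
    return to_build

# hypothetical package graph
deps = {
    "core-crm": [],
    "case-mgmt": ["core-crm"],
    "telephony": ["core-crm"],
    "livechat": ["telephony"],
}
print(sorted(packages_to_build(deps, {"telephony"})))
# ['livechat', 'telephony']
```

Everything outside the rebuild set keeps its previously built artefact, which is what keeps a full run under the 30-minute budget.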
25. Model your path to production
• Utilise a system that lets you model
your entire path to production, including
manual steps, approval checks, etc.
• Only deploy the artefacts that have
changed, by probing the org, so
we can keep the release definition intact
• Generate automated changelogs so
stakeholders can decide on the
release
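Probing the org so that only changed artefacts are deployed boils down to diffing the release definition against the installed versions; a minimal sketch with made-up package names and versions:

```python
def artifacts_to_deploy(release_definition, installed):
    """Keep the release definition intact, but deploy only the
    artefacts whose version differs from what the org reports."""
    return {
        name: version
        for name, version in release_definition.items()
        if installed.get(name) != version
    }

release = {"core-crm": "1.4.0", "case-mgmt": "2.1.0", "telephony": "1.0.3"}
org = {"core-crm": "1.4.0", "case-mgmt": "2.0.9"}  # telephony not installed yet
print(artifacts_to_deploy(release, org))
# {'case-mgmt': '2.1.0', 'telephony': '1.0.3'}
```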
26. Parallelism and dependencies
The bigger the artefact:
• The longer it takes to build,
validate, test, deploy
• The more likely it will need a
rebuild upon modification
• The more dependencies it will
likely have
The S(ingle responsibility) of SOLID, applied at the artefact level
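Keeping artefacts small also enables parallel builds: packages whose dependencies are already built can be grouped into waves and built concurrently. A sketch of that layering, assuming a hypothetical package graph:

```python
def build_waves(deps):
    """Group packages into waves: every package in a wave has all its
    dependencies satisfied by earlier waves, so each wave can be
    built in parallel."""
    remaining = dict(deps)
    waves = []
    while remaining:
        ready = sorted(p for p, d in remaining.items()
                       if all(dep not in remaining for dep in d))
        if not ready:
            raise ValueError("dependency cycle detected")
        waves.append(ready)
        for p in ready:
            del remaining[p]
    return waves

deps = {
    "core-crm": [],
    "case-mgmt": ["core-crm"],
    "telephony": ["core-crm"],
    "livechat": ["telephony"],
}
print(build_waves(deps))
# [['core-crm'], ['case-mgmt', 'telephony'], ['livechat']]
```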
27. High level structure/Our current structure
Business features/capabilities
1 mono-repo
e.g.
Case Management
Telephony
…
A mix of unlocked, org-dependent, and source artefacts
Technical capabilities
1 mono-repo
e.g.
Generic inbound event handler
Trigger Framework
…
Several unlocked packages
Technical libraries
Several repos
e.g.
Rest Framework
Logging
…
Several unlocked packages
28. src-temp -> the ‘default’ package directory, a staging area for
deciding the location of new metadata; src-temp is not deployed to
any environment other than CI
Don’t limit yourself to a mono-repo; move packages to other
repos by monitoring change frequency and dependencies
Everything lives in the repo, including manual steps, scripts and
environment settings, and they get the same importance as any
other metadata
We found a repo per package to be painful, especially using git
submodules to link packages during the push to the development
environment (scratch org)
30. Profiles
We use them only when we have to
So far we have only two profiles
• A copy of the standard user
• Admin profile
We use permission sets instead
Permission sets are within the artefact
31. Layouts
Option 1
They serve one clearly identified business domain
We package them alongside the other metadata
Option 2
They span across several business domains
We separate them and usually we don’t package
(unlock) them
Example:
• Contact Layout
• Account Layout
• …
35. Unlocked package
Org-dependent package
Source package
Changeset
(In this order)
and Data Package
Which packaging method?
36. What about unit tests?
I/O is usually the reason tests are slow
Every time a unit test requires the database, there
is an impact on speed (µs -> ms)
Technical debt is the second major reason for
slow tests
Apex is object-oriented
SOLID principles
The database is an “implementation detail”
(i.e. Do not sprinkle your code with DML
statements and queries)
Mock the database calls when performing Unit
tests
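The pattern, hiding DML and SOQL behind an interface so a unit test can inject an in-memory double, is shown here in Python for brevity; in Apex the same shape is a selector/repository class replaced by a mock (e.g. via the Apex Stub API) in tests:

```python
class AccountRepository:
    """Production implementation would run SOQL/DML; only this
    class touches the database."""
    def find_by_industry(self, industry):
        raise NotImplementedError("real SOQL query lives here")

class InMemoryAccountRepository(AccountRepository):
    """Test double: no database involved, so the unit test runs in
    microseconds instead of milliseconds."""
    def __init__(self, accounts):
        self.accounts = accounts
    def find_by_industry(self, industry):
        return [a for a in self.accounts if a["Industry"] == industry]

def count_banking_accounts(repo: AccountRepository) -> int:
    # business logic depends on the interface, not on the database
    return len(repo.find_by_industry("Banking"))

repo = InMemoryAccountRepository([
    {"Name": "Acme", "Industry": "Banking"},
    {"Name": "Globex", "Industry": "Retail"},
])
print(count_banking_accounts(repo))  # 1
```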
37. Yes, but it is Salesforce, I
need to access the DB
39. An average of 226 builds/week
An average of 72 package creations/week
One full build takes 25 minutes
8 environments (+ several training environments)
30 minutes for a full deployment to the production org
Our results so far
40. Azlam Abdulsalam
Technical Architect specialising in
Salesforce DevOps
@azlus
Ramzi Akremi
Software Engineering on Salesforce
@rakremi
Thank You!
https://www.npmjs.com/package/sfpowerkit https://dxatscale.gitbook.io/sfpowerscripts/