People using some combination of Apex and Flow/Process Builder can run into ‘Too many DML statements’ or ‘Apex CPU time limit exceeded’ exceptions. Resist the urge to point fingers at a specific implementation detail or an AppExchange package, and test your flows during development against specific scenarios, including differing batch sizes.
By testing your flows with various batch sizes, you will better understand how Flow and Apex handle bulkification differently. This talk was about getting a grip on what actually happens between Process Builder and subprocesses, or Process Builder and invocable methods.
We took measurements for all kinds of scenarios and found that, while performance per record is good, any Process Builder-based solution that invokes subprocesses or flows shows a significant increase in CPU time consumption for every new batch of 200 records processed in a transaction. This is best consumed in the form of our visuals, available in the slides and the GitHub repository. Please also note that Process Builder uses SOQL queries in order to do its job, even when you are not explicitly reaching out to a parent or child object, but only updating a field on the record itself.
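That query consumption is easy to observe yourself. A minimal Execute Anonymous sketch (the `Mock__c` object and its process are hypothetical stand-ins for your own setup):

```apex
// Hypothetical probe: a Process Builder process on Mock__c only sets a
// field on the record itself, yet query consumption still rises.
System.debug('Queries before: ' + Limits.getQueries());
insert new Mock__c(Name = 'probe'); // Process Builder fires here
System.debug('Queries after: ' + Limits.getQueries());
// The delta includes the query Process Builder runs to load the record.
```

Run this in the Developer Console with a simple field-update process active on the object and compare the two debug lines.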
So how does Process Builder bulkify records coming in from Apex? As the documentation states:
“When multiple interviews for the same flow run in one transaction, each interview runs until it reaches a bulkifiable element. Salesforce takes all the interviews that stopped at the same element and intelligently executes those operations together.”
Imagine 2000 records being inserted via Apex, with a Process Builder process responsible for setting a Boolean field on insert.
For each batch of 200 records, Process Builder:
● uses a query to get the 200 records
● instantiates one flow interview per record
● waits for all 200 interviews to reach the same bulkifiable element (a record update in our case)
● executes the bulkifiable action for all 200 interviews together
Whether your Process Builder/Flow implementation will be faced with more than one batch is an important question at design time. It is also worth noting that trying to ‘bulkify’ Process Builder + Flow yourself might achieve the opposite: slower-running solutions and higher system limit consumption.
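The mechanics above can be sketched from the Apex side. A minimal invocable method (class, object, and field names are hypothetical) never receives more than 200 records per call, because Process Builder hands over one bulkified chunk at a time:

```apex
public with sharing class MockFlagService {
    // Called by a Process Builder immediate action.
    // For 2000 inserted records this method is invoked once per
    // 200-record batch, never with more than 200 ids at a time.
    @InvocableMethod(label='Set Flag on Mocks')
    public static void setFlag(List<Id> mockIds) {
        List<Mock__c> updates = new List<Mock__c>();
        for (Id mockId : mockIds) {
            updates.add(new Mock__c(Id = mockId, Flag__c = true));
        }
        update updates; // one DML statement per bulkified chunk
    }
}
```

Writing the method against a `List` parameter keeps it bulk-safe regardless of whether one or 200 interviews reach the action together.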
The repo containing the sample code for the benchmarks, sample data, and a detailed readme can be found at https://github.com/dstdia/CzechDreamin19
Reach out to @ch_sz_knapp and @stangomat on Twitter
2. #CD19
Christian Szandor Knapp
Salesforce MVP & Munich DG Leader
appero
Head of Salesforce Development
@ch_sz_knapp
github.com/szandor72
Daniel Stange
Frankfurt UG Leader
DIA
Technical Architect
@stangomat
github.com/dstdia
3. #CD19
1. Metamorphosis
a. What we witnessed
b. What we’re afraid of
2. The Trial
a. Premises and Paradoxes
b. Test Setup
3. The Judgement
a. Statistical Evidence
b. Forensic Evidence
Itinerary
5. #CD19
One morning, when Gregor Samsa woke from troubled dreams, he found himself…
in 2009: Workflows, Apex
in 2019: Workflows / Processes, Flows, Apex, with Events
Metamorphosis
Screenshots: the old Cloud Flow Designer and the new Flow Builder
6. #CD19
(GIF: ugly bug on a bed, black and white, from the movie)
Here’s what we’re all afraid of
System.LimitException: Too many DML statements
System.LimitException: Apex CPU time limit exceeded
7. #CD19
THE TRIAL
Someone must have sabotaged Josefine K., for one
morning, without having done anything truly wrong, her
org broke.
Franz Kafka, adapted
9. #CD19
Paradoxes - available since Summer 2018
13:01:09:158 (94505994476) |LIMIT_USAGE_FOR_NS|(default)| Maximum CPU time: 240 out of 10000
13:01:08:157 FLOW_START_INTERVIEW_LIMIT_USAGE CPU time in ms: 240 out of 15000
W-6375582
10. #CD19
Paradoxes - look for CPU Time Limit in Flow Limits
13:01:09:158 (94505994476) |LIMIT_USAGE_FOR_NS|(default)| Maximum CPU time: 240 out of 10000
13:01:08:157 FLOW_START_INTERVIEW_LIMIT_USAGE CPU time in ms: 240 out of 15000
https://developer.salesforce.com/docs/atlas.en-us.216.0.salesforce_vpm_guide.meta/salesforce_vpm_guide/vpm_admin_flow_limits_ape
No results
11. #CD19
The right understanding of any matter and a misunderstanding of the same matter do not wholly exclude each other.
The Trial
12. #CD19
Process Builder Quiz: Set a Boolean on Insert
Inserting 2000 records (via Apex, Data Loader, …)
… how many Queries will be consumed?
… how many DML statements will be consumed?
… how much CPU Time will be consumed?
14. #CD19
I like to make use of what I know
Setup:
- Mocking an advanced system with trigger management and logging
- We insert a lot of lightweight custom objects in various code and/or click combinations to measure results
The Trial
16. #CD19
They’re talking about things of which they don’t have the slightest understanding, anyway.
The Trial
17. #CD19
Collecting Evidence
Each scenario runs in three variants: All Clicks, All Code, and Clicks & Invocable Code.
● Single Record Operation: creating a Mock will create a related MockMock.
● Multi Records (single batch, < 200): for every n Mocks, n MockMocks are created. We increase n by 50 in each run, up to 200.
● Fully batched operation (> 200 records): we then fill the full execution context of the Mock trigger with 201, 300, 400, 401, up to 4500 records (23 batches of up to 200 records each). Can it create 4500 MockMocks?
22. #CD19
Forensic Evidence: Debug Logs
A lot of “flow interviews” run (one instance of a flow or PB action per record), yet the DML statements get batched up properly.
23. #CD19
Forensic Evidence: Debug Logs
When we create 400 records in one transaction…
… Process Builder kicks off 400 flow interviews (1 per record). Maximum CPU time in the log points at the slowest interview, not all of them!
… all of them will be paused to collect ‘bulkifiable actions’
… which are executed in ‘bulk’, so invocable Apex receives a maximum of 200 records per call.
(Diagram: 400 single flow interviews, bulkified into chunks of 200.)
25. #CD19
User experience differs widely from system limit consumption!
Fig.: Less than 1 sec of CPU time consumed, but 4.4 secs have elapsed.
Forensic Evidence: Debug Logs
Total Elapsed Time (!= CPU time!) is around 4.4 secs.
26. #CD19
● Make well-informed decisions when choosing your automation tools:
○ Read the documentation and follow up with every release
○ Monitor the operations in the Debug Log and the Analysis Perspective
● KISS - avoid complexity where possible
○ If you cannot draw it, do not build it
○ Take the least complex design. Can you go full Flow? Or full Apex?
○ Remember good ol’ Workflow Rules - low-impact automations!
● Always test your operations with 201 or more records
○ Use Data Loader etc. to insert & update records in a Sandbox
Key Takeaways
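For the last takeaway, a bulk Apex test is the cheapest way to force a second batch on every CI run, instead of reaching for Data Loader each time. A minimal sketch (`Mock__c` is a hypothetical lightweight object):

```apex
@isTest
private class MockBatchBoundaryTest {
    @isTest
    static void crossesTheBatchBoundary() {
        // 201 records push the trigger/flow machinery into a second
        // 200-record chunk, which is where limit surges show up.
        List<Mock__c> mocks = new List<Mock__c>();
        for (Integer i = 0; i < 201; i++) {
            mocks.add(new Mock__c(Name = 'Mock ' + i));
        }
        Test.startTest();
        insert mocks; // fires triggers, Processes and Flows in two batches
        Test.stopTest();
        System.assertEquals(201, [SELECT COUNT() FROM Mock__c]);
    }
}
```

Wrapping the insert in Test.startTest()/Test.stopTest() gives the automation a fresh set of governor limits, so any limit consumed by the declarative logic stands out in the test log.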
27. #CD19
● We are blazing trails to something exciting - an event driven business
platform with secure apps across all devices with visual and
programmatic development
● If you think in events instead of sObject Updates, many things sort
themselves out
Witnessing commotion in the force
“I felt a great disturbance in the Force,
as if millions of [CPU Time Exceptions]
suddenly cried out in terror
and were suddenly silenced.”
Salesforce is a constant deflection which deflects from possibly grasping what it really deflects from
We are witnessing commotion in the force
2009: Apex and Workflows are complicated but not complex. They have their special place in the order of execution and specific rules for how they interact and who wins in the end.
2019: not so much; Flows/PB have their place at the end of the execution order (and are, or are not?, limited to 5 re-entries like Workflows)
BTW, it has never been either clicks or code - we’ve been using both from the very beginning. Not counting those wizards who can simply write new object metadata XMLs by hand and deploy them manually.
And it is also not the ultimate truth; it is just a momentary snapshot.
Add bug item number
Apex CPU Time is not part of flow limits
Our baseline includes code: something to log and something to control triggers.
Your baseline could be a different one. If you do not have any code in your org, your results might differ greatly
Coming back to the baseline: our world is that of orgs several years old with several layers of customisation and/or managed packages
Hence a sophisticated test setup but no complicated operations therein to keep the overhead at a minimum
Handover to Daniel
We all know that this can be automated, and it can be done with almost every tool we have on the platform except workflows (remember they
Key observation: If code runs in a transaction with declarative logic, the latter can force the programmatic logic over its limits.
Note the peaks whenever we cross a trigger context’s bulk size, e.g. from 200 to 201 and from 400 to 401.
CPU time in ms for n = 50 / 100 / 150 / 200 / 300:
- PB with an internal node: 80 / 100 / 136 / 159 / 847 (note the surge when entering a second batch!)
- PB with a subprocess: 85 / 126 / 128 / 165 / 1473 (!)
- PB with a subflow: 68 / 92 / 118 / 162 / 1563
- PB with invocable Apex: 239 / 373 / 607 / 793 / 1504
There is currently no way to catch a CPU time limit exception when it happens in Process Builder! So if it happens, it happens.
And this is only one side of the story. The other side is user experience: the CPU time (red bars) is the smaller part of the time that elapses while the transaction is running. All the while, the user is looking at a spinner in the UI.