9. | Troubleshooting Dashboard Performance
Why loading all information at once is a problem:
● Every category contains a lot of subcategories;
● Repeated query execution;
11. | Troubleshooting Dashboard Performance
Pre-analysis:
● No functionality should be lost;
● Archive data with low impact;
● Not loading all information simultaneously;
13. | Troubleshooting Dashboard Performance
Challenges with archiving data:
● Two entities;
● ServiceRequest;
● ServiceRequestStep;
● ServiceRequestId used everywhere;
● How to do this with low impact?
15. | Troubleshooting Dashboard Performance
Step 1/14:
● Only use CRUD wrappers, not the Create or Update entity actions directly;
16. | Troubleshooting Dashboard Performance
Step 2/14 and 3/14:
● Copy entity;
● In the original entity, add a boolean;
[Diagram: Original entity → New entity]
17. | Troubleshooting Dashboard Performance
Step 4/14:
● Update the CRUD wrappers so that a duplicate of each ServiceRequest record is made in ServiceRequestCurrent;
18. | Troubleshooting Dashboard Performance
Step 5/14, 6/14:
● Write a timer to copy data from ServiceRequest to ServiceRequestCurrent;
● Write a timer to delete data from the copied entity to prevent it from having unnecessary data;
19. | Troubleshooting Dashboard Performance
Step 7/14, 8/14:
● Publish;
● Deploy to QA and Prod.;
● Verify the data in ServiceRequestCurrent;
● Validate the delete timer;
21. | Troubleshooting Dashboard Performance
Step 10/14, 11/14, 12/14:
● Optional - Remove the newly added boolean from the original entity;
● Optional - Remove the timer that migrated the data;
● Deploy to QA and Prod.;
22. | Troubleshooting Dashboard Performance
Step 13/14, 14/14:
● Use performance analytics in LifeTime;
● Make sure to only use the Id of the original entity;
23. | Troubleshooting Dashboard Performance
Conclusion:
● Performance increased;
● Satisfied customer;
● More work needs to be done;
Recently, we were working on an HR department application and were told that the dashboard had performance problems.
Today I will explain how dashboard performance can be improved through data archiving, and why you should avoid loading all data simultaneously.
My name is Daan Brandenburg and I am a consultant at CoolProfs.
So what were the issues we had with the Dashboard?
The key issue with the Dashboard was performance: loading it could take up to ten minutes.
The application, built in 2014, had accumulated a lot of data; it had clearly been used to its full potential.
This presented problems for the HR department, because the main idea behind the Dashboard is, on the one hand, to provide a quick overview of how much each category is used and, on the other hand, to show whether the HR department is behind in its work.
We discovered that data had never been archived in those five years, and that no data-archiving functionality was available for this application.
One of the reasons the Dashboard had performance issues was the use of web blocks on this screen.
In the image you can see that there were web blocks nested within web blocks. All data was loaded immediately, even though not all of it would be shown immediately on the Dashboard.
This is the query used to count the totals for each category (variant entity) and then for each step in that category (variantstep and servicerequeststep).
A category or variant was managed by the HR department.
Activities were counted because every ServiceRequest, once started, launched a BPT process. Only Activity is a system entity; the other entities are specific to this application.
Each new ServiceRequest created multiple ServiceRequestSteps.
This is a query specifically for those ServiceRequests that were handled directly. This is to show the link between a ServiceRequest and a Variant.
This is the query that was used for the bar-chart that is visible on the Dashboard. Again you can see which entities were used.
This image shows why loading all information at once would be a problem as all categories could contain a lot of subcategories. Even though the information of the subcategories would only show when clicking on a category, the information was still loaded immediately.
This also meant that all the queries shown before were executed. Even though a single query is very fast, executing it many times adds up to a long waiting time.
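To make that cost concrete, here is a small sketch (using SQLite and made-up column values in place of the real OutSystems entities): issuing one count query per category does the same work over and over, whereas a single grouped query returns all the totals in one round trip.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ServiceRequest (Id INTEGER PRIMARY KEY, VariantId INT)")
conn.executemany(
    "INSERT INTO ServiceRequest (VariantId) VALUES (?)",
    [(v,) for v in [1, 1, 2, 2, 2, 3]],
)

# N+1 pattern: one count query per category (what the dashboard effectively did).
per_category = {
    variant_id: conn.execute(
        "SELECT COUNT(*) FROM ServiceRequest WHERE VariantId = ?", (variant_id,)
    ).fetchone()[0]
    for variant_id in [1, 2, 3]
}

# Single grouped query: the same totals, one execution.
grouped = dict(
    conn.execute("SELECT VariantId, COUNT(*) FROM ServiceRequest GROUP BY VariantId")
)

print(per_category == grouped)  # True -- both give {1: 2, 2: 3, 3: 1}
```

With six rows the difference is invisible, but with five years of data and dozens of categories, the per-category variant multiplies every query's latency by the number of categories.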
How to improve?
Before I went to this client, some analysis had already been done by colleagues of mine. This Dashboard was very important for the HR manager and team leaders, as they could use it to see how much work needed to be done and when priority needed to be given to a certain category. So no functionality should be lost when coming up with improvements. And changes should have low impact on the rest of the application, since the business had experienced negative impact from predecessors making 'small' changes in the application. The conclusion was to find a way to start archiving data with low impact on the rest of the application, and to not load all information simultaneously.
What we did to not load all data simultaneously is put the webBlock in an If-statement and only load the webBlock when the selected Row was opened.
When clicking on a category (row), an OnClick action would be triggered and in that OnClick Action only the selected Category would be opened to show and load the subcategories.
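The idea behind that fix can be sketched as follows (hypothetical names; the real screen is an OutSystems web block wrapped in an If-statement, not Python): nothing is loaded up front, and the subcategory query only runs for the row the user actually opens.

```python
# Sketch of lazy loading: defer the subcategory query until the row is clicked.
loaded_queries = []  # track which subcategory loads actually ran


def load_subcategories(category_id):
    loaded_queries.append(category_id)  # stands in for the real query
    return [f"sub-{category_id}-{n}" for n in range(3)]


class CategoryRow:
    def __init__(self, category_id):
        self.category_id = category_id
        self._subcategories = None  # nothing loaded up front

    def on_click(self):
        # Only when the row is opened do we run the subcategory query,
        # mirroring the web block wrapped in an If-statement.
        if self._subcategories is None:
            self._subcategories = load_subcategories(self.category_id)
        return self._subcategories


rows = [CategoryRow(c) for c in range(100)]
rows[7].on_click()  # the user opens one category
print(loaded_queries)  # [7] -- only the opened row triggered a query
```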
The improvement of creating an archive table was a more difficult one. It was clear that especially two entities contained a lot of data: ServiceRequest and ServiceRequestStep. However, ServiceRequest was an entity that was used all over the application and mostly a search by Id of the ServiceRequest was done. Moving data then from this table to an archive table would have a high impact on the entire application. I soon realized that a lot of screens would have to change, so before really starting with this, I decided to check online if there were any alternatives with lower impact on the application.
That is when I came across an article by Justin James, an OutSystems MVP. His suggestion was, after all the usual tricks, to start purging old records out of the system and only access them when absolutely necessary. The main point made in this article is to create new entities (ServiceRequestCurrent, for example) and make sure that those entities only contain the data that is relevant for the screen on which the new entities are used. In our case we only needed those ServiceRequests that were closed in the last four weeks and those that were still active. In this way there is a lower impact on the entire application, since you can choose which screens to change by using the new entities.
The main reason we went for this solution, and not for other solutions that are available as well, was the low impact it would have on the application so that we could apply it step by step.
In 14 steps, Justin James explains how to do this and these 14 steps are also the steps I applied for the entities ServiceRequest and ServiceRequestStep.
If you are not using CRUD wrappers in your business logic, and calling the Entity Actions directly, make a set of CRUD wrappers for ServiceRequest, and replace all references to the Entity Actions to the new CRUD wrappers.
Copy the “ServiceRequest” Entity and call it “ServiceRequestCurrent”. Set the type of its “Id” Attribute to “ServiceRequest Identifier”. Do NOT make CRUD wrappers for it.
In ServiceRequest, add an Attribute called “IsCopiedToCurrent” and make sure that it is a Boolean.
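Steps 2 and 3 together look roughly like this in relational terms (a SQLite sketch; the business attributes beyond Id and IsCopiedToCurrent are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Original entity, with the new IsCopiedToCurrent flag (step 3).
conn.execute("""
    CREATE TABLE ServiceRequest (
        Id INTEGER PRIMARY KEY,
        Subject TEXT,               -- hypothetical business attribute
        ClosedOn TEXT,              -- hypothetical business attribute
        IsCopiedToCurrent INTEGER NOT NULL DEFAULT 0
    )""")

# Copied entity (step 2): same shape, but its Id is a plain ServiceRequest
# Identifier, NOT auto-generated -- it always mirrors the original record.
conn.execute("""
    CREATE TABLE ServiceRequestCurrent (
        Id INTEGER PRIMARY KEY,     -- type: ServiceRequest Identifier
        Subject TEXT,
        ClosedOn TEXT
    )""")
```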
Update the CRUD wrappers of ServiceRequest so that the Create and Update wrappers duplicate the record to ServiceRequestCurrent (make sure that they do it after the CreateServiceRequest, so they can set the Id attribute of ServiceRequestCurrent), and that the Delete wrapper performs a hard delete of the matching record from ServiceRequestCurrent. The CRUD wrapper should set the IsCopiedToCurrent Attribute of ServiceRequest to true before writing the ServiceRequest record.
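A sketch of those updated wrappers (SQLite again; the column names are hypothetical): every create and update is mirrored into ServiceRequestCurrent under the same Id, and a delete hard-deletes the duplicate too.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ServiceRequest "
             "(Id INTEGER PRIMARY KEY, Subject TEXT, IsCopiedToCurrent INT)")
conn.execute("CREATE TABLE ServiceRequestCurrent "
             "(Id INTEGER PRIMARY KEY, Subject TEXT)")


def create_service_request(subject):
    # The flag is set as part of the write; the new Id is reused for the duplicate.
    cur = conn.execute(
        "INSERT INTO ServiceRequest (Subject, IsCopiedToCurrent) VALUES (?, 1)",
        (subject,))
    new_id = cur.lastrowid
    conn.execute("INSERT INTO ServiceRequestCurrent (Id, Subject) VALUES (?, ?)",
                 (new_id, subject))
    return new_id


def update_service_request(request_id, subject):
    conn.execute("UPDATE ServiceRequest SET Subject = ?, IsCopiedToCurrent = 1 "
                 "WHERE Id = ?", (subject, request_id))
    # An upsert keeps ServiceRequestCurrent in step with the original.
    conn.execute("INSERT OR REPLACE INTO ServiceRequestCurrent (Id, Subject) "
                 "VALUES (?, ?)", (request_id, subject))


def delete_service_request(request_id):
    conn.execute("DELETE FROM ServiceRequest WHERE Id = ?", (request_id,))
    # Hard delete of the matching duplicate.
    conn.execute("DELETE FROM ServiceRequestCurrent WHERE Id = ?", (request_id,))


rid = create_service_request("New laptop")
update_service_request(rid, "New laptop (approved)")
```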
Write a “When Published” timer to copy data from ServiceRequest to ServiceRequestCurrent. To speed things up, only copy the data within your “this is what I consider current” range. Only select values where IsCopiedToCurrent is false. If you have a lot of data, create a ‘smart timer’ that is able to restart itself to prevent timeouts. After writing each new ServiceRequestCurrent record, set the IsCopiedToCurrent of ServiceRequest to true.
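The 'smart timer' idea can be sketched like this (SQLite stand-in, hypothetical names): each run copies a limited batch of uncopied records and reports whether there may be more work, so the timer can wake itself again instead of running into a timeout.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ServiceRequest (Id INTEGER PRIMARY KEY, "
             "Subject TEXT, IsCopiedToCurrent INT DEFAULT 0)")
conn.execute("CREATE TABLE ServiceRequestCurrent "
             "(Id INTEGER PRIMARY KEY, Subject TEXT)")
conn.executemany("INSERT INTO ServiceRequest (Subject) VALUES (?)",
                 [(f"request {n}",) for n in range(25)])


def migrate_batch(batch_size=10):
    """One 'smart timer' run: copy at most batch_size uncopied records.

    A real version would also filter on the 'this is what I consider
    current' date range. Returns True when there may be more work, so the
    timer can restart itself before hitting a timeout."""
    rows = conn.execute(
        "SELECT Id, Subject FROM ServiceRequest "
        "WHERE IsCopiedToCurrent = 0 LIMIT ?", (batch_size,)).fetchall()
    for request_id, subject in rows:
        conn.execute("INSERT OR REPLACE INTO ServiceRequestCurrent "
                     "(Id, Subject) VALUES (?, ?)", (request_id, subject))
        conn.execute("UPDATE ServiceRequest SET IsCopiedToCurrent = 1 "
                     "WHERE Id = ?", (request_id,))
    return len(rows) == batch_size  # a full batch -> possibly more to do


while migrate_batch():
    pass  # each iteration stands in for one timer wake-up
```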
Write a timer that runs nightly or every week or once a month, or whatever makes sense to you, that detects records in ServiceRequestCurrent that are no longer needed, and performs a hard delete on them to prevent the ServiceRequestCurrent entity from containing unnecessary data.
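Using the four-week window mentioned earlier, such a purge timer could look like this (SQLite stand-in; the ClosedOn column and the fixed demo date are assumptions, with NULL meaning the request is still active):

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ServiceRequestCurrent "
             "(Id INTEGER PRIMARY KEY, ClosedOn TEXT)")  # NULL = still active
today = date(2019, 6, 1)                                 # fixed date for the demo
conn.executemany(
    "INSERT INTO ServiceRequestCurrent (ClosedOn) VALUES (?)",
    [(None,),                                            # active -> keep
     ((today - timedelta(days=7)).isoformat(),),         # recent -> keep
     ((today - timedelta(days=60)).isoformat(),)])       # old -> purge


def purge_old_current(run_date):
    """Nightly timer: hard-delete records closed more than four weeks ago."""
    cutoff = (run_date - timedelta(weeks=4)).isoformat()
    conn.execute("DELETE FROM ServiceRequestCurrent "
                 "WHERE ClosedOn IS NOT NULL AND ClosedOn < ?", (cutoff,))


purge_old_current(today)
```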
Publish
Deploy to QA and Prod. Verify that the appropriate data was copied from ServiceRequest to ServiceRequestCurrent, and that new ServiceRequest records also get copied to ServiceRequestCurrent. Validate that your timer purges records from ServiceRequestCurrent as-expected.
Now, you have two Entities that look identical, and are interchangeable in your queries and screens: ServiceRequest (all records) and ServiceRequestCurrent (only the most recent ones). Now it is possible to change the queries on the Dashboard to use the ServiceRequestCurrent instead of ServiceRequest. This could also be applied to other screens and if needed, you can have a checkbox or similar on the screen that says “Include All Records” or “Show Archived Records” or something, which will choose a query that is using the ServiceRequest Entity instead of ServiceRequestCurrent.
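Because the two entities are interchangeable, switching a screen over is just a matter of which entity its query reads. A sketch of the optional "Include All Records" toggle (SQLite stand-in, hypothetical names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ServiceRequest (Id INTEGER PRIMARY KEY, Subject TEXT)")
conn.execute("CREATE TABLE ServiceRequestCurrent "
             "(Id INTEGER PRIMARY KEY, Subject TEXT)")
conn.executemany("INSERT INTO ServiceRequest (Id, Subject) VALUES (?, ?)",
                 [(1, "old"), (2, "recent")])
conn.execute("INSERT INTO ServiceRequestCurrent (Id, Subject) VALUES (2, 'recent')")


def fetch_requests(include_all_records=False):
    # The two entities look identical, so the screen simply switches the
    # table it reads from based on the checkbox.
    table = "ServiceRequest" if include_all_records else "ServiceRequestCurrent"
    return [row[0] for row in
            conn.execute(f"SELECT Subject FROM {table} ORDER BY Id")]


print(fetch_requests())                          # ['recent']
print(fetch_requests(include_all_records=True))  # ['old', 'recent']
```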
The real magic here happens in this step. Because all of your data is still in ServiceRequest and will never leave, you do not need to do a “big bang” release that changes everything in the whole application all at once. You can change the application slowly but surely as your time permits, with minimal disruption to your roadmap or other timelines.
Remove the “IsCopiedToCurrent” attribute from ServiceRequest.
Remove the “When Published” timer that migrated the data. These steps are optional to keep your application nice and clean.
Deploy to QA and Prod.
Over time, use the performance analytics in LifeTime to get a before/after comparison of the level of improvement.
Make sure that going forwards, if another Entity needs to relate to ServiceRequests, you always use a ServiceRequest Identifier and never a ServiceRequestCurrent Identifier (because that Entity has hard deletes occurring on a regular basis).
The result of all of this was better performance for the Dashboard. Before the changes, the waiting time could be up to ten minutes; after the changes, it was at most 45 seconds. For the client this was sufficient for now, as the difference was quite significant. However, there is a backlog item to further improve the performance of the Dashboard, but only after some other changes have been implemented in the application.