In this diagram, we can see an example of three ADCs: small, medium, and large. I have used just three SteelApp Traffic Manager instances, deployed in the traditional way, each with a fixed capacity set by its license key; an organization, however, may have hundreds of individual load balancers or ADCs.
But what you can also see here is that none of these three instances is running at full capacity: each has unused headroom, a reserve held back in case of peak throughput surges. IT managers do this because adding capacity takes time when it is needed. With a traditional load balancer appliance, it might take a few weeks to order a larger appliance, install it in a rack, and configure it. Even if we are just adding a new feature or expanding the capacity of a virtual ADC, it might still take an hour to order and install a new license before we can use the new capacity.
But suppose you could share that capacity across all the ADCs? In this diagram, I have brought all three of my ADCs into a single resource pool, and I can share the capacity between them. The resource utilization graph shows that this gives me better use of my resources, but it also means I can allocate spare capacity immediately to any ADC that needs additional throughput, scaling to meet changes in demand in seconds rather than weeks.
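The pooling idea above can be illustrated with a minimal sketch. This is a hypothetical model, not a real SteelApp API: it assumes a shared pool of licensed throughput (in Mbps) that ADC instances draw from and return to, so a surge on one instance can borrow the headroom the others are not using. The class and method names are mine, for illustration only.

```python
class CapacityPool:
    """Shared pool of licensed throughput (Mbps) for a fleet of ADCs.
    Hypothetical sketch; names do not correspond to any real product API."""

    def __init__(self, total_mbps):
        self.total_mbps = total_mbps
        self.allocations = {}  # instance name -> allocated Mbps

    def spare(self):
        # Capacity not currently assigned to any instance
        return self.total_mbps - sum(self.allocations.values())

    def allocate(self, instance, mbps):
        """Grant extra capacity to an instance, if the pool has it spare."""
        if mbps > self.spare():
            raise RuntimeError(f"pool exhausted: only {self.spare()} Mbps spare")
        self.allocations[instance] = self.allocations.get(instance, 0) + mbps

    def release(self, instance, mbps):
        """Return capacity to the pool when a traffic surge subsides."""
        current = self.allocations.get(instance, 0)
        self.allocations[instance] = max(0, current - mbps)


# Three ADCs of different sizes share one 1000 Mbps pool.
pool = CapacityPool(total_mbps=1000)
pool.allocate("small", 100)
pool.allocate("medium", 300)
pool.allocate("large", 500)

# A surge hits the small ADC: it borrows the pool's spare capacity
# in seconds, instead of waiting on a new license or appliance.
pool.allocate("small", pool.spare())
print(pool.allocations["small"])  # 200
print(pool.spare())               # 0
```

The point of the sketch is the contrast with fixed per-instance licensing: with three standalone instances, the small ADC would be capped at 100 Mbps no matter how idle its neighbours were; with a pool, the unused reserve becomes a shared burst budget.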