Interactive Data Streaming ™
In-Memory Performance is
More than Just a Database Issue
A key component of a successful business intelligence (BI) implementation—defined as one that
promotes widespread adoption and continued usage—is creating a positive experience for a user audience
characterized by a wide variety of needs and spectrum of technical abilities. Regardless of the user’s
background, high performance—whether referring to the ability to analyze large data sets or provide
rapid responses to user requests or actions—is an important component of this positive user experience.
As a result, the concept of “in-memory” databases, or “caching,” has recently grown in popularity as BI
vendors tout the in-memory storage of data or metadata as the mechanism driving fast access and analysis. With
regard to BI, the concept refers to storing data or metadata in memory (i.e. RAM), rather than physically
on a disk, which enables users to more easily and readily access the data—or any manipulation of that data.
While this concept may not be a new one, its popularity continues to grow as the cost of hardware
memory decreases and CPU processing speeds increase. Combine that scenario with the vast amounts of
data generated in businesses every day and the desire to find more valuable insight within that data, and
suddenly in-memory becomes a very appealing approach for many companies.
Yet caching of data in memory is just one of the many components that can help deliver high performance
in BI. If the rush to in-memory caching results in the neglect of other key areas of the solution, the
gains achieved by such an approach will likely be offset, and even outweighed, by the losses, resulting in
waning adoption and usage.
To address this concern, SwiftKnowledge advocates a multi-faceted, hybrid approach to high performance,
one that optimizes performance for end users throughout their entire experience in the BI solution.
We believe this approach provides specific advantages in the areas of connection pooling, session
state, query creation and data record set loading over a standard, database-centric, in-memory approach.
This tech note explains SwiftKnowledge’s approach to in-memory analytics—an approach powered by our
patented Interactive Data Streaming™ (IDS) technology—and the benefits it helps organizations realize.
A Better Approach to High Performance than In-Memory Analytics
[Figure 1 diagram: session cache, data loader (data packet engine, incremental “load on scroll” queries and page requests), query generator, SwiftKnowledge metadata cache, and connection pooling between the application and the RDBMS.]
Figure 1: Multiple facets of a hybrid approach.
Interactive Data Streaming comprises four essential elements: session cache, data loader, metadata
cache and connection pooling (Figure 1). The combination of all these elements ensures high performance
for an end user throughout any action or activity within the application, whether navigating through con-
tent, running complex queries or loading results for consumption.
Like other BI software applications, IDS also leverages the resident database management system
(DBMS)—Microsoft® SQL Server™ in the sample scenario above—to take advantage of any proprietary,
in-memory cache structures available for the data or metadata.
The SwiftKnowledge session cache operates in a manner similar to cookies for web browsers. Where a
cookie might help a web site determine if you have visited it before, who you are, which areas of the
site interested you, etc., the SwiftKnowledge session cache keeps track of your current state, which can
include the following:
• Currently active dashboard
• Currently active report
• Connection in use
• Group membership
• Cube security, as defined at the group level
• Report save status
SwiftKnowledge uses its session cache to properly assign rights and cube security during user actions,
such as browsing the menu or accessing a report. Since SwiftKnowledge may reference this session infor-
mation frequently, it’s cached for performance enhancement reasons. As a result, as users move about
the application, content retrieval and loading is accelerated.
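The session-state bookkeeping described above can be sketched as a simple per-session store. The class and field names below are illustrative only, chosen to mirror the state items listed above; they are not SwiftKnowledge's actual API.

```python
# Minimal sketch of a per-user session cache (illustrative names only).
class SessionCache:
    def __init__(self):
        self._sessions = {}  # session_id -> state dict

    def get(self, session_id):
        # Create an empty state record on first access.
        return self._sessions.setdefault(session_id, {
            "active_dashboard": None,
            "active_report": None,
            "connection": None,
            "groups": set(),
            "cube_security": {},
            "report_saved": True,
        })

    def update(self, session_id, **state):
        self.get(session_id).update(state)

cache = SessionCache()
cache.update("user-42", active_report="Regional Sales", groups={"analysts"})
print(cache.get("user-42")["active_report"])  # prints "Regional Sales"
```

Because the state lives in memory for the life of the session, rights and cube-security checks read from this dictionary instead of repeatedly querying the repository.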
The SwiftKnowledge approach to data loading benefits users with its speed of delivery. While there may
be several million rows of data available in memory, in a database or on disk, it is impractical to return a full
data set to users because the request could overwhelm the browser. Even if the browser could manage
the request, users would likely be overwhelmed by the amount of data they receive since they only can
process and understand a finite amount of information at any one time. And a user typically does not
need a full data set delivered to their browser if their intent was only to follow a specific path through the
data—perhaps tracking outliers or exceptions—to get to their answers.
The SwiftKnowledge data loader takes these elements into consideration and is largely responsible for
providing the high level of responsiveness SwiftKnowledge end users have come to expect. As a result,
SwiftKnowledge’s data loader and data packet engine interact with the query generator to present data
only if and when the user asks for it.
Equally important to the end user experience is what happens when additional, incremental data is
retrieved and displayed on a report page. To facilitate this experience, a SwiftKnowledge page generator
seamlessly delivers content only to the section of the page that demands it. The rest of the page remains
static, thus avoiding any modification overhead, until the user chooses to interact with the rest of the page.
Figure 2 shows one example of the incremental retrieval of data as executed within SwiftKnowledge. In
this case, data is delivered in a report filter 100 rows at a time from a 4,171-row data set. As the user
scrolls, additional data packets load; this capability is what we refer to as “load on scroll.” The user can
act on the record set at any point (i.e. select a value), retrieve the next data packet, or leave the filter
entirely.
Figure 2: SwiftKnowledge’s “load on scroll” capability.

In Figure 3 (shown on the next page), after the first data packet from a record set loads into the grid
(showing details at the state level), the user drills down on a specific member and expands into another
level of detail (cities within a state), then continues that process several times until seeing specific bank
information. In this situation,
an incremental query is generated and executed, and its results are inserted without re-loading the existing data set.
In each case, at all levels of data, only the first packet loads for each new record set until the user
chooses to explore and load more data on demand. If the entire record set had to re-load each time,
the process would require much more time—resulting in the familiar fade-in and fade-out of pages many
users see with other applications—and user interest and their ability to seamlessly analyze data would
deteriorate, thus degrading adoption and use.
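The drill-down behavior described above can be illustrated as splicing the children returned by an incremental query into the already-loaded grid, leaving the rest of the rows untouched; the names and sample members below are hypothetical.

```python
# Sketch: expanding one member issues an incremental query for its
# children only, and the new rows are spliced into the loaded grid
# in place rather than re-fetching the whole record set.
def drill_down(grid_rows, member, fetch_children):
    """Insert the children of `member` directly after it."""
    idx = grid_rows.index(member)
    children = fetch_children(member)  # incremental query, children only
    return grid_rows[:idx + 1] + children + grid_rows[idx + 1:]

# Hypothetical state -> city drill-down.
states = ["Minnesota", "Wisconsin"]
cities = {"Minnesota": ["  Minneapolis", "  St. Paul"]}
expanded = drill_down(states, "Minnesota", lambda m: cities.get(m, []))
print(expanded)
# ['Minnesota', '  Minneapolis', '  St. Paul', 'Wisconsin']
```

The existing rows are never re-fetched, which is what avoids the full-page fade-in and fade-out described above.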
While retrieving the record set needed for a given report, it is likely other users, or the same user running
another query, may benefit from metadata already present in the SwiftKnowledge cache (while at the
same time adding to that cache), rather than running repeated requests for the same objects on every query.

Figure 3: Incremental queries with SwiftKnowledge.
SwiftKnowledge initially retrieves any necessary metadata needed for query generation and execution,
and then stores the hierarchy names, dimension names, level names and member data in its cache for
future use. The SwiftKnowledge metadata cache expands in content and value as more users explore
different areas of a given cube. This process creates greater efficiency for subsequent query generation
and execution as it becomes “smarter” with each executed query.
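The cache-then-reuse pattern for cube metadata can be sketched with a simple memoized lookup: the first request fetches hierarchy, level and member names from the database, and every later request, from any user, is served from memory. The function and its return shape are illustrative assumptions, not SwiftKnowledge's actual interface.

```python
import functools

# Sketch of a shared metadata cache: the first query for a cube's
# metadata hits the database; later queries are served from memory.
@functools.lru_cache(maxsize=None)
def cube_metadata(cube_name):
    # In a real system this would query the RDBMS/OLAP catalog.
    print(f"fetching metadata for {cube_name} from the database")
    return (("hierarchies",), ("levels",), ("members",))

cube_metadata("Sales")  # cache miss: one database round trip
cube_metadata("Sales")  # cache hit: served from memory, no fetch
```

Each distinct cube explored adds one entry to the cache, which is why the cache "grows smarter" as more users query different areas of the data.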
Creating a database connection on demand can be costly, from both a CPU and time perspective.
SwiftKnowledge alleviates this expense by efficiently managing and reusing existing connections—
both within and across users—which decreases the overall number of new connections needed to meet
the needs of the user population.
Once a connection to a SwiftKnowledge-accessible data source is established, SwiftKnowledge can share
this connection with other users who need access to the same data source. Because most SwiftKnowledge
queries complete in sub-second times, a connection is quickly freed and made available for another user.
For example, a single connection serving one user’s request for a member list when opening a filter box
can be re-assigned to another user’s request to drill down in a grid report, from a higher to a lower level.
In fact, a single connection can serve two concurrent users interacting with multiple reports. For instance,
when a report first displays data on a page, there is a high likelihood that the user will spend a few, if not
several, seconds observing or consuming the output. During this time, another user may retrieve data
from a completely different report that accesses the same data source. This user may also spend time
digesting the output from their report while the first user decides to open a filter box and narrow the
range of their report to home in on the detail they need. As long as neither user requires a connection at the
exact same time, there may never be a need to create a second connection, especially considering that
many queries complete in milliseconds.
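The sharing scheme described above is classic connection pooling. A minimal sketch, assuming a fixed-size pool (the class below is illustrative, not SwiftKnowledge's implementation): a request borrows a connection, runs its typically sub-second query, and returns the connection for the next request.

```python
import queue

# Minimal connection-pool sketch: a fixed set of connections is
# shared within and across users.
class ConnectionPool:
    def __init__(self, make_connection, size=2):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(make_connection())

    def acquire(self, timeout=5):
        # Blocks (up to `timeout` seconds) if every connection is in use.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

pool = ConnectionPool(lambda: "db-connection", size=1)
conn = pool.acquire()   # first user's filter-box query
pool.release(conn)      # freed within milliseconds
conn = pool.acquire()   # second user's drill-down reuses the same connection
```

With queries completing in milliseconds, a single pooled connection can serve many users in turn, which is why a second connection may never need to be created.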