Learn How RealTime Medicare Data Delivers Caregiver Trends Insights Since Taming its Huge Healthcare Data Trove

Transcript of a BriefingsDirect podcast on how a healthcare data collection site met the challenge of increasing volumes by using HP tools.


Transcript

Listen to the podcast. Find it on iTunes. Sponsor: HP

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing sponsored discussion on IT innovation and how it's making an impact on people's lives.

Once again, we're focusing on how companies are adapting to the new style of IT to improve IT performance and deliver better user experiences, as well as better business results. This time, we're coming to you directly from the HP Discover 2014 Conference in Las Vegas. We're here the week of June 9 to learn directly from IT and business leaders alike how big data, cloud, and converged infrastructure implementations are supporting their goals.

Our next innovation case study interview highlights how RealTime Medicare Data analyzes huge volumes of Medicare data and provides analysis to its many customers on the caregiver side of the healthcare sector. Here to explain how they manage such large data requirements for quality, speed, and volume, we're joined by Scott Hannon, CIO of RealTime Medicare Data, who is based in Birmingham, Alabama. Welcome, Scott.

Scott Hannon: Thank you.

Gardner: First, tell us a bit about your organization and some of the major requirements you have from an IT perspective.

Hannon: RealTime Medicare Data has full-census Medicare data, which includes Part A and Part B, and we do analysis on this data. We provide reports in a web-based tool to our customers, who are typically acute-care organizations such as hospitals. We also have a product that provides analysis specific to physicians and their billing practices.
Gardner: And, of course, Medicare is a very large US government program that provides health insurance to the elderly and other qualifying individuals.
Hannon: Yes, that's true.

Gardner: So what sorts of data requirements have you had? Is this a volume, a velocity, a variety type of problem, or all of the above?

Volume problem

Hannon: It's been mostly a volume problem, because we're actually a very small company. There are only three of us in the IT department, but it was just me as the IT department back when I started in 2007. At that time, we had one state, Alabama, and then we began to grow. We grew to seven states, which made up the South region: Florida, Georgia, Tennessee, Alabama, Louisiana, Arkansas, and Mississippi. We found that Microsoft SQL Server was not really going to handle the type of queries we ran against that volume of data.

Currently we have 18 states. We're loading about a terabyte of data per year, which is about 630 million claims, and our database currently houses about 3.7 billion claims.

Gardner: That is a serious volume of data. From the analytics side, what sort of reporting do you do on that data, who gets it, and what are some of their requirements in terms of how they gain strategic benefit from this analysis?

Hannon: Currently, most of our customers are general acute-care hospitals. We have a web-based tool that has reports in it. We provide reports that start at the physician level and reports that start at the provider level, as well as reports you can look at by state. The other great thing about our product is that providers typically have data on themselves, but they can't really compare themselves to the other providers in their market, state, or region. So this allows them to look not only at themselves, but to compare themselves to other places, like their market, their region, and their state.

Gardner: I should think that's hugely important, given that Medicare is a very large portion of funding for many of these organizations in terms of their revenue. Knowing what the market does and how they compare to it is essential.
Hannon: Typically, for a hospital, about 40 to 45 percent of revenue depends on Medicare. The other thing we've found is that most physicians don't change how they practice medicine based on whether it's a Medicare patient, a Blue Cross patient, or whoever their private insurer is. So the insights they gain by looking at our reports reflect roughly 90 to 95 percent of how their business is going to run.
Gardner: It's definitely mission-critical data, then. So you started with a relational database, using standard off-the-shelf products. You grew rapidly, and your volume issues grew. Tell us what the problems were and what requirements led you to seek an alternative.

Exponential increase

Hannon: There were a couple of problems. One, obviously, was the volume. We found that we had to increase the indexes exponentially, because we're talking about 95 percent reads on this database. As I said, Microsoft SQL Server really was not able to handle that volume as we expanded.

The first thing we tried was to move to an Analysis Services back end. For that project, we got an outside party to help us, because we would need to completely redesign our front end to be able to query Analysis Services. That project ended up taking way too long to implement. I started looking at other alternatives and, through pure research, happened to find Vertica. I was reading about it and thought, "I'm not sure how this is even possible." It didn't even seem possible to do this with this amount of data. So we got a trial of it. I started using it and was impressed that it actually could do what it said it could do.

Gardner: As I understand it, Vertica has a column-store architecture. Was that something you understood? What is it about the Vertica approach to data that caught your attention at first, and how has that worked out for you?

Hannon: To me, the biggest advantages were that it uses the standard SQL query language, so I wouldn't have to learn MDX, which Analysis Services requires. I don't understand the complete technical details of column storage, but I understand that it's much faster because it doesn't have to look at every single row. It can build the actual dataset much faster, which gives you much better performance on the front end.
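The intuition Hannon describes can be illustrated with a toy sketch. This shows only the general column-store idea, not how Vertica actually stores data; the claim fields here are made up for illustration.

```python
# Toy illustration of row-store vs. column-store access patterns.
# (Illustrative only; not Vertica's actual storage engine.)

# Row store: each record is kept together, so a scan touches full rows.
row_store = [
    {"claim_id": 1, "state": "AL", "amount": 120.0},
    {"claim_id": 2, "state": "GA", "amount": 75.5},
    {"claim_id": 3, "state": "AL", "amount": 300.0},
]

# Column store: each field is kept as its own contiguous array.
column_store = {
    "claim_id": [1, 2, 3],
    "state": ["AL", "GA", "AL"],
    "amount": [120.0, 75.5, 300.0],
}

# Summing one field in a row store means reading every full record...
total_row = sum(rec["amount"] for rec in row_store)

# ...while a column store reads only the single "amount" array, which is
# why analytic scans over a few columns avoid "looking at every row."
total_col = sum(column_store["amount"])

print(total_row, total_col)  # both 495.5
```

The same aggregate comes out either way; the difference is how much data has to be read to produce it, which is where the front-end speedup Hannon saw comes from.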
Gardner: And what sort of performance have you seen?

Hannon: Typically, we've seen about a tenfold reduction in query time. Before, when we would run reports, it would take about 20 minutes. Now, they take roughly two minutes. We're very happy about that.

Gardner: How long has it been since you implemented Vertica, and what is some of the supporting infrastructure you've relied on?
Hannon: We implemented Vertica back in 2010. We ended up still using Microsoft SQL Server as a querying agent, because it was much easier to keep the interface to SQL Server Reporting Services, which is what our web-based product uses, along with the stored-procedure functionality and the OPENQUERY feature. So we pull the data directly from Vertica and send it through Microsoft SQL Server to the Reporting Services engine.

New tools

Gardner: I've heard from many organizations that this is not only a speed and volume issue; there's also an ability to bring new tools to the process. Have you changed any of the tooling you use for analysis? How have you gone about creating your custom reports?

Hannon: We really haven't changed the reports themselves. It's just that when I design a query to pull a specific set of data, I don't have to worry that it's going to take me 20 minutes to get some data back. I'm not saying that in Vertica every query takes 30 seconds, but the majority of the queries I use don't take that long to bring the data back. It's much improved over the previous solution we were using.

Gardner: Are there any other quality issues, beyond raw speeds and feeds, that you've encountered? What are some of the paybacks you've gotten as a result of this architecture?

Hannon: First of all, I want to say that I didn't have a lot of experience with Unix or Linux on the back end, and I was a little rusty on the experience I did have. But I will tell people not to be afraid of Linux, because Vertica runs on Linux and it's easy. Most of the time, I don't even have to touch it. With that out of the way, one of the biggest advantages of Vertica is that you can expand to multiple nodes to handle the load if you have a larger client base. It's very simple.
You basically just install commodity hardware with whatever flavor of Unix or Linux you prefer and, as long as it's compatible, the installer does all the rest for you once you tell it you're doing multiple nodes. The other thing is that having multiple nodes allows for fault tolerance. That was something we really didn't have with our previous solution. Now we have fault tolerance and load balancing.

Gardner: Any lessons learned as you made this transition from a SQL Server database to a Vertica columnar-store database? You even moved the platform from Windows to Linux. What might you tell others who are pursuing a shift in their data strategy because they're heading somewhere else?
Jump right in

Hannon: As I said before, don't be afraid of Linux. If you're a Microsoft or a Mac shop, just don't be afraid to jump in. Go get the free community edition, or talk to a salesperson and try it out. You won't be disappointed. Since we started using it, they have made multiple improvements to the product.

The other thing I learned is that with OPENQUERY, there are specific ways you have to write the stored procedures. I like to call it "single-quote hell," because when you write OPENQUERY and you have to quote something, there are a lot of additional single quotes that you have to put in there. I learned there was a second way of doing it that lessened that impact.

Gardner: Okay, good. And we're here at Discover. What's interesting for you to learn here at the show, and how does that align with your next steps?

Hannon: I'm definitely interested in seeing all the other capabilities that Vertica has and how other people are using it in their industries and for their customers.

Gardner: In terms of your deployment, are you strictly on-premises for the foreseeable future? Do you have any interest in pursuing hybrid or cloud-based deployments for any of your data services?

Hannon: We actually use a private cloud, which is hosted at TekLinks in Birmingham. We've been that way ever since we started, and that seems to work well for us, because we basically just rent rack space and provide our own equipment. They have the battery backup, power backup generators, and cooling.

Gardner: How about backup and recovery? How are those issues managed for you?

Hannon: We have multiple copies of it on multiple server systems, and we also do cloud backup.

Gardner: I see. So you've got a separate location in the cloud that you use, should something unfortunate happen.

Hannon: Correct.

Gardner: So good insurance for a Medicare insurance database.

Hannon: Absolutely.

Gardner: Okay. We'll leave it there.
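As an aside on the "single-quote hell" Hannon describes: in T-SQL, the query passed to OPENQUERY is itself a string literal, so every single quote inside it must be doubled, and the doubling compounds at each level of nesting. The sketch below builds such statements as plain strings to show the effect; the linked-server name VERTICA and the claims query are hypothetical, made up for illustration.

```python
# Sketch of why quoting compounds when nesting queries in OPENQUERY.
# The linked-server name "VERTICA" and the query text are hypothetical.

def tsql_quote(literal: str) -> str:
    """Escape text for embedding inside a T-SQL string literal:
    every single quote must be doubled."""
    return literal.replace("'", "''")

# The remote query we want Vertica to run, containing one quoted value.
remote_query = "SELECT claim_id FROM claims WHERE state = 'AL'"

# Wrapped in OPENQUERY, the whole query becomes a string literal,
# so its inner quotes double once.
openquery = "SELECT * FROM OPENQUERY(VERTICA, '" + tsql_quote(remote_query) + "')"
print(openquery)
# SELECT * FROM OPENQUERY(VERTICA, 'SELECT claim_id FROM claims WHERE state = ''AL''')

# If that statement is itself embedded in another string (for example,
# dynamic SQL built inside a stored procedure), every quote doubles again.
dynamic_sql = "EXEC ('" + tsql_quote(openquery) + "')"
print(dynamic_sql)
```

Each additional layer of nesting doubles the quotes again, which is why pushing the escaping into one small helper (or avoiding an extra dynamic-SQL layer, as Hannon's "second way" suggests) keeps the stored procedures readable.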
Please join me in thanking our guest. We've been talking about how RealTime Medicare Data is managing a huge volume of data and providing analysis to care providers in 18 states in the US.
So a big thank you to Scott Hannon, CIO at RealTime Medicare Data in Birmingham, Alabama. Thanks.

Hannon: Thank you, Dana.

Gardner: And thanks also to our audience for joining us for this special new style of IT discussion, coming to you directly from the HP Discover 2014 Conference in Las Vegas. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Sponsor: HP

Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.

You may also be interested in:

  • How Capgemini's UK Financial Services Unit Helps Clients Manage Risk Using Big Data Analysis
  • Big data meets the supply chain — SAP's Supplier InfoNet and Ariba Network combine to predict supplier risk
  • Big data should eclipse cloud as priority for enterprises
  • HP Updates HAVEn Big Data Portfolio as Businesses Seek Transformation from More Data and Better Analysis
  • Perfecto Mobile goes to cloud-based testing so developers can build the best apps faster
  • HP's Project HAVEn rationalizes HP's portfolio while giving businesses a path to total data analysis
  • Big data's big payoff arrives as customer experience insights drive new business advantages
  • Fast-changing demands on data centers drive need for uber data center infrastructure management
  • How healthcare SaaS provider PointClickCare masters quality and DevOps using cloud ITSM
  • HP delivers the IT means for struggling enterprises to remake themselves
  • Istanbul-based Finansbank Manages Risk and Security Using HP ArcSight, Server Automation
  • HP Access Catalog smooths the way for streamlined deployment of mobile apps