Metadata Virtualization and Orchestration from Stone Bond Offers Enterprises a Way to Improve Response Time and ROI
Transcript of a BriefingsDirect podcast on how companies can get a handle on exploding data with new technologies that offer better data management.

Listen to the podcast. Find it on iTunes/iPod. Sponsor: Stone Bond Technologies

Dana Gardner: Hi. This is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the need to make sense of the deluge and complexity of data and information swirling in and around modern enterprises.

Most large organizations today are able to identify, classify, and exploit only a small percentage of the total data and information within their systems and processes. Perhaps half of those enterprises actually have a strategy for improving on this. But as business leaders recognize that managing and exploiting information is a core business competency that will increasingly determine their overall success, broader solutions to data distress are being called for. [Disclosure: Stone Bond is a sponsor of BriefingsDirect podcasts.]

Today, we'll look at how metadata-driven data virtualization and improved orchestration can help provide the inclusivity and scale to accomplish far better data management. Such access then leads to improved integration of all information into an approachable resource for actionable business activities.

With us now to help better understand these issues and the market for solutions to these problems are our guests. Please join me in welcoming Noel Yuhanna, Principal Analyst at Forrester Research. Welcome to BriefingsDirect, Noel.

Noel Yuhanna: Thanks.

Gardner: We're also here with Todd Brinegar. He is the Senior Vice President for Sales and Marketing at Stone Bond Technologies. Welcome, Todd.

Todd Brinegar: Dana, how are you? Noel, great to hear you too.

Gardner: Welcome to you both. Let me start with you, Noel.
It's been said often, but it's still hard to overstate, that the size and rate of growth of data and information are overwhelming the business world. Why should we be concerned about this? It's been going on for a while. Why is it at a critical stage now to change how we're addressing these issues?
Yuhanna: Well, data has been growing significantly over the last few years because of different application deployments, different devices, such as mobile devices, and different environments, such as globalization. These are obviously creating a bigger need for integration.

We have customers who have 55,000 databases, and they plan to double this in the next three to four years. Imagine trying to manage 55,000 databases. It's a nightmare. In fact, they don't even know what the count actually is.

Then, they're dealing with unstructured data, which is more than 75 percent of the data. It's a huge challenge trying to manage this unstructured data. Forget about the intrusions and the hackers trying to break in. You can't even manage that data.

Then, obviously, we have the challenges of heterogeneous data sources: structured, unstructured, and semi-structured. Then we have different database types, and data is duplicated quite a lot as well. These are definitely bigger challenges than we've ever seen.

Different data sources

Gardner: We're not just dealing with an increase in data, but we have all these different data sources. We're still dealing with mainframes. We're still adding on new types of data from mobile devices and sensors. It has become overwhelming.

I hear many times people talking about big data, and that big data is one of the top trends in IT. It seems to me that you can't just deal with big data. You have to deal with the right data. It's about picking and choosing the correct data that will bring value to the process, to the analysis, or whatever it is you're trying to accomplish.

So Noel, again, to you, what's the difference between big data and right data?

Yuhanna: It's like GIGO: garbage in, garbage out. A lot of times, organizations that deal with data don't know what data they're dealing with. They don't know whether it's valuable data in the organization. The big challenge is how to deal with this data.

The other thing is making business sense of this data.
That's a very important point. And right data is important. I know a lot of organizations think, "Well, we have big data, but then we want to just aggregate the data and generate reports." But are these reports valuable? Fifty percent of the time they're not, and they've just burned away 1,000 CPU cycles on this big data.

That's where there's a huge opportunity for organizations that are dealing with big data. First of all, you need to understand what this big data means, and ask whether you are going to be utilizing it. Throwing something into the big data framework is useless and pointless, unless you know the data.
Gardner: Todd, reacting to what Noel just said about this very impressive problem, it seems that the old approaches, the old architectures, the connectors and the middleware, aren't going to be up to the task. Why do we have to think differently about a solution set, when we face this deluge, and also about getting to the right data rather than just all the data regardless of its value?

Brinegar: Noel is 100 percent correct, and it is all about the right data, not just a lot of data. It's interesting. We have clients that have a multiplicity of databases. Some they don't even know about or no longer use, but there's relevant data in there. Dana, when you were talking about the ability to attach to mainframes and all legacy systems, as well as incorporate them into today's environment, that's really a big challenge for a lot of integration solutions and a lot of companies.

So the ability to come in, attach, get the right data, make that data actionable, and make it matter to a company is really key critical today. And being able to do that with the lowest cost of ownership in the market and the best time-to-value equation, so that companies aren't creating a huge amount of tech on top of the tech they already have to get this right data, that's really the key critical part.

Gardner: Noel, thinking about how to do this differently, I remember it didn't seem that long ago when the solution to data integration was to create one big honking database and try to put everything in there. Then that's what you'd use to crunch it and do your queries. That clearly was not going to work then, and it's certainly not going to work now.

So what's this notion of orchestration, metadata, and virtualization? Why are some of these architectural approaches being arrived at, especially when we start thinking about the real-time issues?

Holistic dataset

Yuhanna: You have to look at the holistic dataset.
Today, most organizations or business users want to look at complete datasets in terms of how to make business decisions. Typically, what they're seeing is that data has always been in silos, in different repositories, with different data segregations. They did try to bring this all together, as in a warehouse, trying to deliver this value.

But then the volumes of data and the real-time data needs are definitely a big challenge. Warehouses weren't meant to be real time. They were able to handle data, but not in real time.

So this segregation of data into its own layer delivers an even better, superior framework to deliver real-time data and the right data to consumers, to processes, and to applications, whether it's structured, semi-structured, or unstructured data, all coming together from different sources, not only on-premise, but also off-premise, such as partner data and marketplace data coming together and providing that framework to different elements.

We talked about this many years ago and called it the information fabric, which is basically data virtualization that delivers this whole segregation of data in a layer, so that it can be consumed by different applications as a service, and this is all delivered in a real-time manner. Now, an important point here is that it's not just read-only; you can also write back through this virtualized layer, so that changes flow back to the underlying data.

Definitely, things have changed with this new framework, and there are solutions out there that offer the whole framework, not only accessing and integrating data, but also providing metadata, security, integration, and transformation.

Gardner: How about that, Todd Brinegar? When we think about a fabric, when we think about trying to access data, regardless of source, and get it closer to real time, what are the architectural approaches that you think are working better? What are you putting in place yourselves to try to solve this issue?

Brinegar: It's a great lead-in from Noel, because this is exactly the fabric and the framework that Enterprise Enabler, Stone Bond's integration technology, is built on.

What we've done is look at it from a different approach than traditional integration. Instead of taking old technologies and modifying those technologies linearly to effect an integration, bring that data into a staging database, do a transformation, and then massage it, we've looked at it three-dimensionally.

We attach with our AppComms, which are our connectors, to the metadata layer of an application. We don't put an agent within the application. We get the data about the data. We federate that data from multiple sources, unlimited sources, and orchestrate it into a view that a client has. It could be Salesforce.com, SharePoint, a portal, Excel spreadsheets, or anything that they're used to consuming that data in.

Actionable data

Gardner: Just to be clear, Todd, your architecture and solution approach is not only access for analysis, for business intelligence (BI), for dashboards and insights, but also for real-time running application sets. This is actionable data.

Brinegar: Absolutely. With Enterprise Enabler, we're not only a data-integration tool, we're an applications-integration tool. So we are EAI/ETL. We cover that full spectrum of integration. And, as you said, it is a real-time solution, with the ability to access and act on that information in real time.
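[Editor's note: the metadata-driven federation described above can be sketched conceptually in a few lines. This is a generic illustration only, not Stone Bond's actual AppComm API; the source names, field mappings, and join key are invented for the example.]

```python
# A minimal, hypothetical sketch of metadata-driven federation: two live
# sources are mapped onto one canonical "virtual view" at query time,
# with no staging database in between.

# Metadata layer: where each source's records live and how its raw
# field names map onto the view's canonical field names.
SOURCE_METADATA = {
    "crm": {
        "data": [{"cust_id": 1, "cust_name": "Acme Corp"}],
        "field_map": {"cust_id": "id", "cust_name": "name"},
    },
    "billing": {
        "data": [{"account": 1, "balance": 250.0}],
        "field_map": {"account": "id", "balance": "balance"},
    },
}

def fetch(source_name):
    """Stand-in for a connector: read one source and rename its fields
    to canonical names using only the metadata, not custom code."""
    meta = SOURCE_METADATA[source_name]
    for record in meta["data"]:
        yield {canon: record[raw] for raw, canon in meta["field_map"].items()}

def virtual_view():
    """Federate all sources by joining on the canonical 'id' field at
    read time, so consumers see one merged record per entity."""
    merged = {}
    for name in SOURCE_METADATA:
        for row in fetch(name):
            merged.setdefault(row["id"], {}).update(row)
    return list(merged.values())

print(virtual_view())  # one merged record combining CRM and billing fields
```

The point of the sketch is that adding a new source is a metadata change (a new entry in the map), not new integration code, which is the property the speakers attribute to a metadata layer.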
Gardner: We've described why this is a problem and why it's getting worse. We've looked at one approach to ameliorating these issues. But I'm interested in what you get if you do this right.

Let's go back to Noel. For some of the companies that you work with at Forrester, the enterprises that are looking to really differentiate themselves, when they get a better grasp of their data, when they can make it actionable, when they can pull it together from a variety of sources, old and new, on-premises and off-premises, how impactful is this? What sort of benefits are they able to accomplish?

Yuhanna: The good thing about data virtualization is that it's not just a single benefit. There are many, many benefits of data virtualization, and there are customers who are doing real-time BI with data virtualization. As I mentioned, there are drawbacks and limitations in some of the older approaches, technologies, and architectures we've used for decades.

We want real-time BI, in the sense that you can't just wait a day for a report to show up. You need it every hour or every minute. So these are important decisions you've got to make.

Real-time BI is definitely one of the big drivers for data virtualization, but so is having a single version of the truth. As you know, more than 30 percent of data is duplicated in an organization, and that's a very conservative number. Many people don't know how much data is duplicated.

And you have duplication of different kinds of data -- customer data, product data, or internal data. Then the data has a quality issue, because you may change customer data in one application, which touches one database, but the other database is not synchronized.
What you get is inconsistent data, and customers and other business users don't really value the data anymore.

A single version of the truth is a very important deliverable from these solutions, which has never been possible before, unless you have one single database, but most organizations have multiple databases.

There's also the whole dashboard use case. You want to get data from different sources and be able to present business value to the consumers, to the business users. And there are other cases, like enterprise search, where you're able to search data very quickly.

Simpler compliance

Imagine an auditor walking into an organization, wanting to look at data for a particular event, an activity, or a customer, and searching across a thousand resources. It could be a nightmare. The compliance initiative becomes a lot simpler through data virtualization.

Then you're doing things like content-management applications, which need to be delivered in federation and integrate data from many sources to present more valuable information. Also, smartphones and mobile devices want data from different systems, so that it all ties together for their consumers, for the business users, effectively.

So data virtualization has quite a strong value proposition, and typically organizations get a return on investment (ROI) within six months or less with data virtualization.

Gardner: Todd, at Stone Bond, when you look at some of your customers, what are some of the salient paybacks that they're looking for? Is there some low-hanging fruit, for example? It sounds from what Noel said that there are going to be payoffs in areas you might not even have anticipated, but what are the drivers? What are the ones that are making people face the facts when it comes to data virtualization and get going with it?

Brinegar: With Stone Bond and our technology, Enterprise Enabler, the ability to virtualize, federate, and orchestrate, all in real time, is a huge value. The biggest thing, though, is time to value: how quickly can they get the software configured and operational within their enterprise? That is really what is driving a lot of our clients' actions.

When we do an installation, a client can be up and operational, doing their first integration transformations, within the first day. That's a huge time-to-value benefit for that client. Then they can be fully operational with complex integration in under three weeks. That's really astounding in the marketplace.

I have one client that, on one single project, calculated $1.5 million in personnel cost savings in the first year. That's not even taking into account a technology that they may be displacing by putting in Enterprise Enabler. Those are huge components.

Gardner: How about some examples, Todd, use cases? I know sometimes you can name companies and sometimes you can't, but if you have some names that you can share about what the data virtualization value proposition is doing for them, great, but maybe even some use cases if not.

Brinegar: HP is a great example.
HP runs Enterprise Enabler in the supply chain for their Enterprise Server Group. That group provides data to all the suppliers within the Enterprise Server Group on an on-time basis.

They are able to build on demand and take care of their financials in the manufacturing of the servers much more efficiently than they ever have. They were experiencing, I believe, a 10X return on investment within the first year. That's a huge cost benefit for that organization, and it's really kept them a great client of ours.

We do quite a bit of work in the oil business and the oil-field services business, and each one of our clients has experienced a faster ROI and a lower total cost of ownership (TCO).

We announced recently that most of our clients experienced a 300 percent ROI in the first year that they implemented Enterprise Enabler. CenterPoint Energy is a large client of Stone Bond, and they use us for the strategic transformation of how they're handling their data.
How to begin

Gardner: Let's go back to Noel. When it comes to getting started, because this is such a big problem, many times it feels like trying to boil the ocean, because of all the different data types and the legacy involvement. Do you have a sense of where companies that are successful at doing this have begun?

Is there a pattern, a methodology, that helps them get moving toward some of the returns that Todd is talking about, so that data virtualization is getting these assets into the hands of people who can work with them? Any thoughts about where you get started, where you begin your journey?

Yuhanna: One approach is taking an issue, like an application-specific strategy, and building blocks on that; another is going out and looking at an enterprise-wide strategy. For the enterprise-wide strategy, I know that some of the large organizations in financial services, retail, and sales are starting to embark on looking at all of this data in a more holistic manner:

"I've got customer data that is all over the place. I need to make it more consistent. I need to make it more real-time." Those are the things they're dealing with, and I think those are going to be seen more in the coming years.

Obviously, you can't boil the ocean, but you want to start with the data that is most valuable, and this comes back to the point you made about the right data. Start with the right data, and look at the data that is being shared and consumed by many business users and that's going to be valuable for the business itself.

It's also important that you're building blocks on the solution. You can definitely leverage some existing technologies if you want to, but I would definitely recommend looking at newer technologies, because they are faster. They do a lot of caching. They do much faster integration.

As Todd was mentioning, quick ROI is important. You don't have to wait a year to integrate data.
So I think those are critical for organizations going forward. But you also have to look at security, availability, and performance. All of these are critical when you're making decisions about what your architecture is going to look like.

Gardner: Noel, you do a lot of research at Forrester. Are there any reports, white papers, or studies that you could point to that would help people as they start to sort through this and decide where to start, where the right data might be?

Yuhanna: We've actually done extensive research over the last four or five years on this topic. If you look at Information Fabric, this is a reference architecture we've told customers to use when building data virtualization themselves. You can build the data virtualization yourself, but obviously it will take a couple of years to build. It's a bit complex to build, and I think that's why packaged solutions are better for that.

But the Information Fabric reports are there. Also, information as a service is something that we've written about: best practices, use cases, and also vendor solutions around this topic. So information as a service is something that customers could look at to gain understanding.

Case studies

We have use cases and case studies that talk about the different types of deployments, whether it's real-time BI implementations, a single version of the truth, fraud detection, or other types of environments. So we definitely have case studies as well.

There are case studies, reference architectures, and even product surveys that talk about all of these technologies and solutions.

Gardner: Todd, how about at Stone Bond? Do you have white papers, research, or reports that you can point to in order to help people sort through this and perhaps get a better sense of where your technologies are relevant and what your value is?

Brinegar: We do. On our website, stonebond.com, we have our CTO Pamela Szabó's blog posts, which offer a great perspective on data, big data, and the changing face of data usage and virtualization.

I wish everybody would explore the different opportunities and the different technologies that there are for integration, and really determine not just what you need today -- that's important -- but what you will need tomorrow. What's the tech that you're going to carry forward, and how much is the TCO going to be as you move forward? Really make that value decision beyond that one specific project, because you're going to live with the solution for a long time.

Gardner: Very good. We've been listening to a sponsored podcast discussion on the need to make sense of the deluge and the complexity of data and information swirling in and around modern enterprises.
We've also looked at how better data access can lead to improved integration of all information into approachable resources for actual business activities and intelligence.

I want to thank our guests. We've been here with Noel Yuhanna, Principal Analyst at Forrester Research. Thanks so much, Noel.

Yuhanna: Thanks a lot.

Gardner: And also Todd Brinegar, the Senior Vice President of Sales and Marketing at Stone Bond Technologies. Thanks to you too, Todd.

Brinegar: Much appreciated. Thank you very much, Dana. Thank you very much, Noel.
Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Sponsor: Stone Bond Technologies

Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.

You may also be interested in:

• Could Data Sprawl in the Cloud Cost You Your Job?
• How to Deal with Data Sprawl? Could a Sticky Policy Standard Help?
• Tips for Managing System and Data Sprawl Issues
• Stone Bond Keeps Focus on Data Integration for the Masses