A Special Report on Infrastructure Futures: Keeping Pace in the Era of Big Data Growth
How big data analytics imposes huge challenges for storage professionals, and the keys to preparing for the future
David Vellante, David Floyer
Analysis from The Wikibon Project, May 2012
A Wikibon Reprint
View the live research note on Wikibon.
The cumulative effect of decades of IT infrastructure investment around a diverse set of technologies and processes has stifled innovation at organizations around the globe. Layer upon layer of complexity to accommodate a staggering array of applications has created hardened processes that make changes to systems difficult and cumbersome.
The result has been an escalation of labor costs over the years to support this complexity. Ironically, computers are supposed to automate manual tasks, but the statistics show some alarming data that flies in the face of this industry promise. In particular, the percentage of spending on both internal and outsourced IT staff has exploded over the past 15 years. According to Wikibon estimates, of the $250B spent on server- and storage-related hardware and staffing costs last year, nearly 60% was spent on labor. IDC figures provide further evidence of this trend. The research firm's forecasts are even more aggressive than Wikibon's, with estimates that suggest labor costs will approach 70% by 2013 (see Figure 1 below).
The situation is untenable for most IT organizations and is compounded by the explosion of data. Marketers often cite Gartner's three V's of Big Data — volume, velocity, and variety — which refer respectively to data growth, the speed at which organizations are ingesting data, and the diversity in data texture (e.g., structured, unstructured, video, etc.). There is a fourth V that is often overlooked: Value.
WikiTrend: By 2015, the majority of IT organizations will come to the realization that big data analytics is tipping the scales, making information a source of competitive value that can be monetized rather than just a liability that needs to be managed. Organizations that cannot capitalize on data as an opportunity risk losing market share.
From an infrastructure standpoint, Wikibon sees five keys to achieving this vision:
▪ Simplifying IT infrastructure through tighter integration across the hardware stack;
▪ Creating end-to-end virtualization beyond servers into networks, storage, and applications;
▪ Exploiting flash and managing a changing hardware stack by intelligently matching data and media characteristics;
▪ Containing data growth by making storage optimization a fundamental capability of the system;
▪ Developing a service orientation by automating business and IT processes through infrastructure that can support applications across the portfolio, versus within a silo, and provide infrastructure-as-a-service that is "application aware."
This research note is the latest in a series of efforts to aggregate the experiences of users within the Wikibon community and put forth a vision for the future of infrastructure management.
The IT Labor Problem
The trend toward IT consumerization, led by Web giants servicing millions of users, often with a single or very few applications, has ushered in a new sense of urgency for IT organizations. C-level and business-line executives have far better experiences with Web apps from Google, Facebook, and Zynga than with their internal IT systems, as these services have become the poster children of simplicity, rapid change, speed, and a great user experience.
In an effort to simplify IT and reduce costs, traditional IT organizations have aggressively adopted server virtualization and built private clouds. Yet relative to the Web leaders, most IT organizations are still far behind the Internet innovators. The reasons are quite obvious, as large Web properties had the luxury of starting with a clean sheet of paper and have installed highly homogeneous infrastructure built for scale.
Both vendor and user communities are fond of citing statistics that 70% of IT spending is allocated to "Running the Business," while only 30% goes toward growth and innovation. Why is this? The answer can be found by observing IT labor costs over time.
Data derived from researcher IDC (see Figure 1) shows that in 1996, around $30B was spent on IT infrastructure labor costs, which at the time represented only about 30% of total infrastructure costs. By next year, the data says that more than $170B will be spent on managing infrastructure (i.e., labor), which will account for nearly 70% of total infrastructure costs (including capex and opex). This is a whopping 6X increase in labor costs, while overall spending has only increased 2.5X in those 15+ years.
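As a rough sanity check, the 6X and 2.5X figures follow directly from the dollar amounts quoted above; the sketch below works the arithmetic using those approximate figures from the text, not precise IDC data.

    # Rough arithmetic implied by the labor-cost figures quoted above (values in $B).
    labor_1996, labor_share_1996 = 30, 0.30    # ~$30B of labor, ~30% of infrastructure cost
    labor_2013, labor_share_2013 = 170, 0.70   # ~$170B of labor, ~70% of infrastructure cost

    total_1996 = labor_1996 / labor_share_1996   # ~$100B total infrastructure spend
    total_2013 = labor_2013 / labor_share_2013   # ~$243B total infrastructure spend

    print(f"Labor cost growth:  {labor_2013 / labor_1996:.1f}x")   # ~5.7x, i.e. roughly 6X
    print(f"Total spend growth: {total_2013 / total_1996:.1f}x")   # ~2.4x, i.e. roughly 2.5X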
Figure 1 – IT Labor Cost Over Time
Data Source: IDC 2012
What does this data tell us? It says we live in a labor-intensive IT economy and something has to change. The reality is that IT investments primarily go toward labor, and this labor intensity is slowing down innovation. This trend is a primary reason that IT is not keeping pace with business today — it simply doesn't have the economic model to respond quickly at scale. In order for customers to go in new directions and break this gridlock, vendors must address the real cost of computing: people.
The answer is one part technology, one part people, and one part process. Virtualization/cloud is the dominant technology trend, and we live in a world where IT infrastructure and applications, and the security that protects data sources, are viewed as virtual, not physical, entities. The other dominant technology themes reported by Wikibon community practitioners are:
1. A move toward pre-engineered and integrated systems (aka converged infrastructure) that eliminate, or at least reduce, mundane tasks such as patch management;
2. Much more aggressive adoption of virtualization beyond servers;
3. A flash-oriented storage hierarchy that exploits automated operations and a reduction in the manual movement of data — i.e., "smarter systems" that are both automated and application aware, meaning infrastructure can support applications across the portfolio and adjust based on quality-of-service requirements and policy;
4. Products that are inherently efficient and make data reduction features like compression and de-duplication fundamental capabilities rather than optional add-ons, along with new media such as flash and the ability to automate management of the storage infrastructure (a minimal illustrative sketch of chunk-level de-duplication follows this list).
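To make the de-duplication point in item 4 concrete, here is a minimal, illustrative sketch of content-addressed, fixed-size chunk de-duplication. It is a toy example, not any vendor's implementation; real systems typically add compression and variable-size chunking.

    import hashlib

    def dedupe(data: bytes, chunk_size: int = 4096):
        """Fixed-size chunk de-duplication: store each unique chunk once,
        keyed by its SHA-256 digest, plus the ordered references needed to rebuild."""
        store = {}   # digest -> chunk bytes (unique chunks only)
        refs = []    # ordered digests that reconstruct the original data
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)
            refs.append(digest)
        return store, refs

    data = b"A" * 4096 * 8 + b"B" * 4096 * 2      # highly redundant sample data
    store, refs = dedupe(data)
    stored = sum(len(c) for c in store.values())
    print(f"logical {len(data)} bytes -> physical {stored} bytes "
          f"({len(data) / stored:.0f}:1 reduction)")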
From a people standpoint, organizations are updating skills and training people in emerging disciplines, including data science, devops (the intersection of application development and infrastructure operations), and other emerging fields that will enable the monetization of data and deliver hyper increases in productivity.
The goal is that the combination of improved technologies and people skills will lead to new processes that begin to reshape decades of complexity and deliver a much more streamlined set of services that are cloud-like and services-oriented.
The hard reality is that this is a difficult task for most organizations, and an intelligent mix of internal innovation and external sourcing will be required to meet these objectives and close the gap with the Web giants and emerging cloud service providers.
New Models of Infrastructure Management
IT infrastructure management is changing to keep pace as new models challenge existing management practices. Traditional approaches use purpose-built configurations that meet specific application performance, resilience, and space requirements. These are proving wasteful, as infrastructure is often over-provisioned and underutilized.
The transformative model is to build flexible, self-administered services from industry-standard components that can be shared and deployed on an as-needed basis, with usage levels adjusted up or down according to business need. These IT services building blocks can come as services from public cloud and SaaS providers, as services provided by the IT department (private clouds), or, increasingly, as hybrids between private and public infrastructure.
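As an illustration only, the sketch below shows what a self-administered building block with adjustable usage levels and a public/private placement decision might look like. The class, field, and policy names are hypothetical and do not refer to any specific product.

    from dataclasses import dataclass

    @dataclass
    class ServiceRequest:
        app: str
        instances: int          # current usage level, adjusted up or down as business need changes
        data_sensitivity: str   # "low" | "high" -- drives public vs. private placement (toy policy)

    def place(req: ServiceRequest) -> str:
        """Toy placement policy: sensitive data stays on the private cloud;
        everything else may run on public capacity (the hybrid model)."""
        return "private-cloud" if req.data_sensitivity == "high" else "public-cloud"

    req = ServiceRequest(app="order-entry", instances=4, data_sensitivity="high")
    print(place(req))           # -> private-cloud
    req.instances = 12          # scale up for a month-end peak, then back down afterwards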
Efforts by most IT organizations to self-assemble this infrastructure have led to a repeat of current problems, namely that the specification and maintenance of all the parts requires significant staff overhead to build and service the infrastructure. Increasingly, vendors are providing a complete stack of components, including compute, storage, networking, operating system, and infrastructure management software.
Creating and maintaining such a stack is not a trivial task. It will not be sufficient
for vendors or systems integrators to create a marketing or sales bundle of
component parts and then hand over the maintenance to the IT department; the
savings from such a model are minimal over traditional approaches. The stack must
be completely integrated, tested, and maintained by the supplier as a single SKU,
or as a well-documented solution with codified best practices that can be applied for
virtually any application. The resultant stack has to be simple enough that a single
IT group can completely manage the system and resolve virtually any issue on its
own.
Equally important, the cost of the stack must be reasonable and must scale out
efficiently. Service providers are effectively using open-source software and focused
specialist skills to decrease the cost of their services. Internal IT will not be able to
compete with service providers if their software costs are out of line.
The risk to this integrated approach, according to members of the Wikibon
practitioner community, is lock-in. Buyers are concerned that sellers will, over time,
gain pricing power and return to the days of mainframe-like economics. This
concern has merit. Sellers of converged systems today are providing large
incentives to buyers in the form of aggressive pricing and white glove service in an
effort to maintain account control and essentially lock customers into their specific
offering. The best advice is as follows:
▪ Consider converged infrastructure in situations where cloud-like services provide
clear strategic advantage, and the value offsets the risk of lock-in down the
road.
▪ Design processes so that data doesn’t become siloed. In other words, make sure
your data can be migrated easily to other infrastructure.
▪ Don’t sole source. Many providers of integrated infrastructure have realized they
must provide choice of various components such as hypervisor, network, and
server. Keep your options open with a dual-sourcing strategy.
WikiTrend: Despite the risk of lock-in, by 2017, more than 60% of
infrastructure will be purchased as some type of integrated system, either
as a single SKU or a pre-tested reference architecture.
The goal of installing integrated or converged infrastructure is to deliver a world
without stovepipes, where hardware and software can support applications across
the portfolio. The tradeoff of this strategy is it lessens the benefits of tailor-made
infrastructure that exactly meets the needs of an application. For the few
applications that are critical to revenue generation, this will continue to be a viable
model. However, Wikibon users indicate that 90% or more of the applications do
not need a purpose-built approach, and Wikibon has used financial models to
determine that a converged infrastructure environment will cut the operational
costs by more than 50%.
Figure 2 – Traditional Stove-piped Infrastructure Model
Source: Wikibon 2012
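To make the shape of that operational-cost comparison concrete, the sketch below models a simple labor-plus-maintenance-plus-facilities calculation in Python. The administrator counts, salaries, and maintenance figures are hypothetical placeholders, not outputs of Wikibon’s financial models; the point is simply that labor reduction dominates the savings.

    # Simplified opex comparison: stove-piped vs. converged infrastructure.
    # All dollar figures and staffing counts are hypothetical placeholders.

    def annual_opex(admins, cost_per_admin, maintenance, power_cooling):
        """Annual operational cost: labor plus maintenance plus facilities."""
        return admins * cost_per_admin + maintenance + power_cooling

    # Stove-piped estate: separate teams per silo, heavy manual patching and tuning
    traditional = annual_opex(admins=12, cost_per_admin=140_000,
                              maintenance=600_000, power_cooling=250_000)

    # Converged stack: fewer administrators, vendor-maintained single SKU, denser footprint
    converged = annual_opex(admins=5, cost_per_admin=140_000,
                            maintenance=350_000, power_cooling=180_000)

    print(f"Stove-piped opex: ${traditional:,.0f}")
    print(f"Converged opex:   ${converged:,.0f}")
    print(f"Reduction:        {1 - converged / traditional:.0%}")

With these placeholder inputs the model shows roughly a 51% reduction, consistent with the "more than 50%" finding cited above, and the savings are dominated by the smaller labor line.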
The key to exploiting this model is tackling the 90% long tail of applications by
aggregating common technology building blocks into a converged infrastructure.
There are two major objectives in taking this approach:
1. Drive down operational costs by using an integrated stack of hardware,
operating systems, and middleware;
2. Accelerate the deployment of applications.
Figure 3 – Infrastructure 2.0 Services Model
Source: Wikibon 2012
Virtualization: Moving Beyond Servers
Volume servers derived from the consumer space typically could run only one
application per server. The result was servers with very low
utilization rates, usually well below 10%. Specialized servers that can run multiple
applications can achieve higher utilization rates but at much higher system and
software costs.
Hypervisors, such as VMware’s, Microsoft’s Hyper-V, Xen, and those from IBM
and Oracle, have changed the equation. A hypervisor virtualizes the system
resources and allows them to be shared among multiple operating systems. Each
operating system behaves as though it has control of a complete hardware system,
while the hypervisor shares those resources among them.
The result of this innovation is that volume servers can be driven to much higher
utilization levels, three-to-four times that of stand-alone systems. This makes low-
cost volume servers that are derived directly from volume consumer products such
as PCs much more attractive as a foundation for processing and much cheaper than
specialized servers and mainframes. There will still be a place for very high-
performance specialized servers for some applications such as certain performance-
critical databases, but the volume will be much lower.
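As a rough illustration of the consolidation arithmetic behind that utilization gain, the sketch below packs the aggregate demand of lightly used stand-alone servers onto virtualized hosts. The server counts and utilization figures are illustrative assumptions, not survey data.

    # Back-of-envelope server consolidation estimate; all inputs are illustrative.
    import math

    standalone_servers = 200        # physical volume servers, roughly one application each
    standalone_utilization = 0.08   # ~8% average utilization, typical pre-virtualization
    virtualized_utilization = 0.30  # three-to-four times the stand-alone figure, per the text

    # Total demand expressed in "fully busy server" equivalents
    useful_work = standalone_servers * standalone_utilization

    # Hosts needed once that demand is packed onto virtualized volume servers
    virtualized_hosts = math.ceil(useful_work / virtualized_utilization)

    print(f"{standalone_servers} lightly used servers consolidate onto ~{virtualized_hosts} hosts")
    print(f"Consolidation ratio: ~{standalone_servers / virtualized_hosts:.1f}:1")

Under these assumptions, 200 stand-alone servers consolidate onto roughly 54 virtualized hosts, a ratio of nearly 4:1.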
The impact of server virtualization on storage is profound. The I/O path to a server
provides service to many different operating systems and applications. The result is
that the access patterns as seen by the storage devices are much less predictable
and more random. The impact of higher server utilization (and of multi-core
processors) is that IO volumes (IOPS, IOs per second) will be much higher.
Increasingly, fewer processor cycles will be available for housekeeping activities such
as backup.
Server virtualization is changing the way that storage is allocated, monitored, and
managed. Instead of defining LUNs and RAID levels, virtual systems define
virtual disks and expect array information to reflect these virtual machines, their
virtual disks, and the applications they are running. Storage virtualization engines
are enabling the pooling of multiple heterogeneous arrays, providing both
investment protection and flexibility for IT organizations with diverse asset bases.
As well, virtualizing the storage layer dramatically simplifies storage provisioning
and management, much in the same way that server virtualization attacked the
problem of underutilized assets.
Conclusions for Storage: Storage arrays will have to serve much higher volumes
of random read and write IOs with applications using multiple protocols. In
addition, storage arrays will need to work across heterogeneous assets and
virtualized systems and speak the language of virtualization administrators. Newer
storage controllers (often implemented as virtual machines) are evolving that
completely hide the complexities of traditional storage (e.g., LUN and RAID
structures), replacing them with automated, virtual-machine-aware storage that
provides the metrics virtual machine operators (e.g., VMware administrators) need
to monitor performance, resource utilization, and service level agreement (SLA)
compliance at a business application level.
Storage networks will have to adapt to providing a shared transport for the
different protocols. Adapters and switches will increasingly use lossless Ethernet as
the transport mechanism, with the different storage protocols running over it.
Backup processes will need to be re-architected and linked to the application rather
than following a one-size-fits-all approach. Application-consistent snapshots and
continuous backup processes are some of the technologies that will become increasingly important
over time.
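One possible shape of such an application-linked backup cycle is sketched below. The quiesce, snapshot, and replicate functions are hypothetical placeholders standing in for whatever APIs a particular database and storage array actually expose; the point is that the application pause is limited to the snapshot window, while replication of the changed blocks happens afterwards.

    # Sketch of an application-consistent, space-efficient backup cycle.
    # quiesce(), snapshot(), and replicate() are hypothetical placeholders for
    # the database and storage-array interfaces a real implementation would call.
    import time

    def quiesce(app):              # flush buffers, pause writes at a consistent point
        print(f"quiescing {app}")

    def resume(app):               # release the application as quickly as possible
        print(f"resuming {app}")

    def snapshot(volume):          # array-side, changed-blocks-only copy
        snap_id = f"{volume}-snap-{int(time.time())}"
        print(f"created {snap_id}")
        return snap_id

    def replicate(snap_id, site):  # ship only the snapshot deltas off-site
        print(f"replicating {snap_id} to {site}")

    def backup_cycle(app, volume, dr_site):
        quiesce(app)
        try:
            snap = snapshot(volume)   # seconds, not hours: metadata plus changed blocks
        finally:
            resume(app)               # application pause limited to the snapshot window
        replicate(snap, dr_site)      # replication runs outside the application pause

    backup_cycle("orders-db", "vol-orders", "dr-datacenter-2")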
WikiTrend: Virtualization is moving beyond just servers and will impact the
entire infrastructure stack: storage, backup, networks, infrastructure
management, and security. Overall, the strong trend toward converged
infrastructure, where storage function placement is more dynamic and can be
staged optimally in arrays, in virtual machines, or in servers, will necessitate
an end-to-end and more intelligent management paradigm.
Flash Storage: Implications to the Stack
Consumers are happy to pay premiums for flash memory over the price of disk
because of the convenience of flash. For example, the early iPods had disk drives
but were replaced by flash because the device required very little battery power
and had no moving parts. The results were much smaller iPods that would work for
days without recharging and would work after being dropped. This led to huge
consumer volume shipments, and flash storage costs dropped dramatically.
In the data center, systems and operating system architectures have had to
contend with the volatility of processors and high-speed RAM storage. If power was
lost to the system, all data in flight was lost. The solutions were either to protect
the processors and RAM with complicated and expensive battery backup systems or
to write the data out to disk storage, which is non-volatile. The difference between
the speed of disk drives (measured in milliseconds, 10⁻³ seconds) and processor
speed (measured in nanoseconds, 10⁻⁹ seconds) is huge and is a major constraint
on system speed. All systems wait for I/O at the same speed. This is especially
true for
database systems.
Flash storage is much faster than disk drives (measured in microseconds, 10⁻⁶
seconds) and is persistent: when the power is removed the data is not lost. It can
provide an additional
memory level between disk drives and RAM. The impact of flash memory is also
being seen in the iPad effect. The iPad is always on, and the response time for
applications compared with traditional PC systems is remarkable. Applications are
being rewritten to take advantage of this capability, and operating systems are
being changed to take advantage of this additional layer. iPads and similar devices
are forecast to have a major impact on portable PCs, and the technology transfer
will have a major impact within the data center, both at the infrastructure level and
in the design of all software.
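One way to make the latency gap concrete is to count how many cycles a processor could have executed while waiting for a single synchronous I/O. The figures in the sketch below are order-of-magnitude approximations consistent with the millisecond/microsecond/nanosecond ranges above, not measured values.

    # Idle cycles per synchronous I/O for a 3 GHz core.
    # Latency figures are order-of-magnitude approximations.

    cpu_hz = 3e9  # 3 GHz: roughly 0.33 ns per cycle

    latencies_seconds = {
        "DRAM access":       100e-9,  # ~100 nanoseconds
        "Flash (NAND) read": 100e-6,  # ~100 microseconds
        "Disk drive I/O":     10e-3,  # ~10 milliseconds (seek plus rotation)
    }

    for medium, latency in latencies_seconds.items():
        idle_cycles = latency * cpu_hz
        print(f"{medium:<18} {latency * 1e6:>10.1f} µs  ≈ {idle_cycles:>13,.0f} idle cycles")

On these assumptions a disk I/O costs tens of millions of idle cycles, a flash read hundreds of thousands, and a DRAM access only a few hundred, which is why inserting flash between RAM and disk changes the design of both operating systems and applications.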
IO Centric Processing: Big Data Goes Real-time
Wikibon has written extensively about the potential of flash to disrupt industries
and about designing systems and infrastructure in the Big Data IO Centric era. The model
developed by Wikibon is shown in Figure 4.
Figure 4 – Real-time Big Data Processing with IO Centric Storage
Source: Wikibon 2012
The key to this capability is the ability to directly address the flash storage from the
processor with lockable atomic writes, as explained in a previous Wikibon discussion
on designing systems and infrastructure in the Big Data IO Centric era. This
technology has brought down the cost of IO-intensive systems by two orders of
magnitude (100 times), whereas the cost of hard disk-only solutions has remained
constant. This trend will continue.
This technology removes the constraints of disk storage. It allows real-time,
parallel ingest of transactional, operational, and social media data streams, and it
provides sufficient IO at low enough cost to process Big Data transactional systems
in parallel while simultaneously performing the Big Data indexing and metadata
processing that drive Big Data Analytics.
WikiTrend: Flash will enable changes in system and application design that
are profound. Transactional systems will evolve, as flash architectures will
remove locking constraints at the highest performance tier. Big Data
analytics will be integrated with operational systems, and Big Data streams
will become direct inputs to applications, people, devices, and machines.
Metadata extraction, index data and other summary data will become
direct inputs to operational Big Data streams and enable more value to be
derived at lower costs from archival and backup systems.
Conclusions for Storage: Flash will become a ubiquitous technology that will be
used in processors as an additional memory level, in storage arrays as read/write
“Flash cache”, and as a high-speed disk device. Systems management software will
direct high-I/O “hot spots” and low-latency I/O to flash technology and allow high-
density disk drives to store the less active data.
Overall within the data center, flash storage will pull storage closer to the
processor. Because of the heat density constraints mentioned above, it is much
easier to put low power flash memory rather than disk drives very close to the
processor.
The result of more storage being closer to the processor will be for some storage
functionality to move away from storage arrays and filers and closer to the
processor, a trend that is made easier by multi-core processors that have cycles to
spare. The challenge for storage management will be to provide the ability to share
a much more distributed storage resource between processors. Future storage
management will have to contend with sharing storage that is within servers as well
as traditional SANs and filers outside servers.
Storage Efficiency Technologies
Storage efficiency is the ability to reduce the physical capacity required on the disk
drives to store the logical copies of the data as seen by the file systems.
Many of these technologies have become, or are becoming, mainstream capabilities.
Key technologies include:
▪ Storage virtualization:
Storage virtualization allows volumes to be logically broken into
smaller pieces and mapped onto physical storage. This allows much
greater efficiency in storing data, which previously had to be stored
contiguously. This technology also allows dynamic migration of data
within arrays, which can also be used for dynamic tiering.
Sophisticated tiering systems, which allow small chunks of data (sub-
LUN) to be migrated to the best place in the storage hierarchy, have
become a standard feature in most arrays.
▪ Thin provisioning:
Thin provisioning is the ability to provision storage dynamically from a
pool of storage that is shared between volumes. This capability has
been extended to include techniques for detecting zeros (blanks) in file
systems and using no physical space to store them. This again has
become a standard feature expected in storage arrays.
▪ Snapshot technologies:
Space-efficient snapshot technologies can be used to store just the
changed blocks and therefore reduce the space required for copies.
This provides the foundation of a new way of backing up systems
using periodic space-efficient snapshots and replicating these copies
remotely.
▪ Data de-duplication:
Data de-duplication was initially introduced for backup systems, where
many copies of the same or nearly the same data were being stored
for recovery purposes. This technology is now extending to inline
production data, and is set to become a standard feature on storage
controllers.
▪ Data compression:
Originally data compression was an offline process used to reduce the
data held. Data compression is used in almost all tape systems, is now
being extended to online production disk storage systems, and is set
to become a standard feature in many storage controllers. The
standard compression algorithms used are based on LZ (Lempel and
Ziv), and give a compression ratio between 2:1 and 3:1. Compression
is not effective on files that have compression built-in (e.g., JPEG
image files, most audio-visual files). The trend is toward real-time
compression where performance is not compromised. A minimal illustrative
sketch of de-duplication and LZ compression follows this list.
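The sketch below illustrates, under arbitrary assumptions about chunk size and sample data, how fixed-size chunk hashing (the basis of de-duplication) and an LZ-family codec (zlib) combine to reduce the physical capacity required for highly redundant data such as backup streams.

    # Minimal illustration of two storage-efficiency techniques from the list above:
    # fixed-size chunk de-duplication (content hashing) and LZ-based compression.
    # The chunk size and sample data are arbitrary choices for the example.
    import hashlib
    import zlib

    CHUNK = 4096  # 4 KiB chunks, a common de-duplication granularity

    def dedupe(data: bytes):
        """Return (unique_chunk_store, recipe) where the recipe rebuilds the data."""
        store, recipe = {}, []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)   # identical chunks stored once
            recipe.append(digest)
        return store, recipe

    # Highly redundant sample data, as backup streams typically are
    data = (b"customer-record:" + b"A" * 4080) * 256

    store, recipe = dedupe(data)
    deduped = sum(len(c) for c in store.values())
    compressed = sum(len(zlib.compress(c)) for c in store.values())  # LZ77-family codec

    print(f"logical size:      {len(data):>9,} bytes")
    print(f"after de-dup:      {deduped:>9,} bytes ({len(store)} unique of {len(recipe)} chunks)")
    print(f"after compression: {compressed:>9,} bytes")

Real arrays and backup appliances use more sophisticated variable-length chunking and hardware-assisted compression, but the capacity-reduction principle is the same.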
WikiTrend: Storage efficiency technologies will significantly reduce the
amount of storage required. However, they will not reduce the number of
I/Os or the bandwidth required to transfer them. Storage efficiency
techniques will be applied to the most appropriate part of the
infrastructure and become increasingly embedded into systems and
storage design.
Milestones for Next Generation Infrastructure Exploitation
Some key milestones are required to exploit new infrastructure directions in general
and storage infrastructure in particular:
1. Sell the vision to senior business managers.
2. Create a Next Generation Infrastructure Team, including cloud
infrastructure.
3. Set aggressive targets for Infrastructure implementation and cost
savings, in line with external IT service offerings.
4. Select a stack for each set of application suites:
▪ Choose a single-vendor infrastructure stack from a large vendor
that can supply and maintain the hardware and software as a single
stack. The advantage of this approach is that the cost of maintenance
within the IT department can be dramatically reduced if the software is
treated as a single SKU and updated as such, and the hardware
firmware is treated the same way. The disadvantage is lack of choice
for components of the stack, and a higher degree of lock-in.
▪ Limit lock-in with a sourcing strategy. Choose an Ecosystem
Infrastructure Stack of software and hardware components that can
be intermixed. The advantage of this approach is greater choice and
less lock-in, at the expense of significantly increased costs of internal
IT maintenance.
5. Reorganize and flatten IT support by stack(s), and move away from an
organization supporting stovepipes. Give application development and
support groups the responsibility to determine the service levels required,
and the Next Generation Infrastructure team the responsibility to provide the
infrastructure services to meet the SLA. Included in this initiative should be a
move to DevOps, where application development and infrastructure
operation teams are cross-trained with the goal of achieving hyper
productivity.
6. Create a self-service IT environment with a service catalogue and
integrate charge-back or show-back controls (a minimal show-back sketch
follows this list).
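As an illustration of show-back against a service catalogue, the sketch below prices hypothetical usage for two lines of business; the service names, unit prices, and usage figures are invented for the example.

    # Sketch of a show-back calculation against a simple service catalogue.
    # Service names, unit prices, and usage figures are hypothetical.

    catalogue = {                       # price per unit per month
        "vm.standard":      55.00,      # per virtual machine
        "storage.tier1":     0.30,      # per GB, flash-backed
        "storage.tier3":     0.04,      # per GB, capacity tier
        "backup.protected":  0.06,      # per GB protected
    }

    usage_by_lob = {
        "sales": {"vm.standard": 40, "storage.tier1": 2_000,
                  "storage.tier3": 30_000, "backup.protected": 25_000},
        "r_and_d": {"vm.standard": 120, "storage.tier1": 8_000,
                    "storage.tier3": 90_000, "backup.protected": 60_000},
    }

    for lob, usage in usage_by_lob.items():
        bill = sum(catalogue[svc] * qty for svc, qty in usage.items())
        print(f"{lob:<8} monthly show-back: ${bill:,.2f}")

Whether the resulting figures are merely reported (show-back) or actually charged to the line of business (charge-back) is a governance decision, but either way the catalogue makes internal consumption visible and comparable with external cloud pricing.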
From a strategic point of view, it will be important for IT to compete with external
IT infrastructure suppliers where internal data proximity or privacy requirements
dictate the use of private clouds, and use complementary external cloud services
where internal clouds are not economic.
Overall Storage Directions and Conclusions
Storage infrastructure will change significantly with the implementation of a new
generation of infrastructure across the portfolio. There will be a small percentage of
application suites that will require a siloed stack and large scale-up monolithic
arrays, but the long tail (90% of applications suites) will require standard storage
services that are inherently efficient and automated. These storage services will be
more distributed within the stack with increasing amounts of flash devices and
distributed within private and public cloud services. Storage software functionality
will become more elastic and will reside in, or migrate to, the part of the stack that
makes the most practical sense, whether in the array, in the server, or in a
combination of the two.
The I/O connections between storage and servers will become virtualized, with a
combination of virtualized network adapters and other virtual I/O mechanisms. This
approach will save space, drastically reduce cabling, and allow dynamic
reconfiguration of resources. The transport fabrics will be lossless Ethernet with
some use of InfiniBand or other high speed interconnects for inter-processor
communication. Storage will become protocol agnostic. Where possible, storage will
follow a scale-out model, with meta-data management a key component.
The storage infrastructure will allow dynamic transport of data across the network
when required, for instance to support business continuity, and with some
balancing of workloads. However, data volumes and bandwidth are growing at
approximately the same rate, and large-scale movement of data between sites will
not be a viable strategy. Instead, applications (especially business intelligence and
analytics applications) will often be moved to where the data is (the Hadoop model)
rather than pushing data to the code. This will be especially true of Big Data
environments, where vast amounts of semi-structured data will be available within
the private and public clouds.
The criteria for selecting storage vendors will change in the future. Storage vendors
will have significant opportunities for innovation within the stack. They will have to
take a systems approach to storage and be able to move the storage software
functionality to the optimal place within the stack in an automated and intelligent
manner. Distributed storage management functionality will be a critical component of
this strategy, together with seamless integration into backup, recovery, and business
continuance. Storage vendors will need to forge close links with the stack providers,
so that there is a single support system (e.g., remote support), a single update
mechanism for maintenance, and a single stack management system.
Action Item: Next generation storage infrastructure is coming to a theater
near you. The bottom line is that, in order to scale and “compete” with cloud
service providers, internal IT organizations must spend less time on labor-
intensive infrastructure management and more effort on automation and on
providing efficient storage services at scale. The path to this vision will go
through integration in the form of converged infrastructure across the
stack with intelligent management of new types of storage (e.g. flash) and
the integration of Big Data analytics with operational systems to extract
new value from information sources.