DB2 Design for High Availability and Scalability – Surekha Parekh
Are you overwhelmed by the growing amount of data in your environment? Are you maximizing application availability? As the number of tables with billions of rows continues to grow, so do the management challenges. In this session, we will discuss the challenges and solutions for optimum availability and performance, with techniques to efficiently and effectively manage very large amounts of data.
IBM DB2 Analytics Accelerator Trends & Directions by Namik Hrle – Surekha Parekh
IBM DB2 Analytics Accelerator has drawn lots of attention from DB2 for z/OS users. In many respects it presents itself as just another DB2 access path (but what a powerful one!) and its deep integration into DB2 as well as application transparency makes it one of the most exciting DB2 enhancements in years. The IBM DB2 Analytics Accelerator complements DB2 by adding industry leading data intensive complex query performance thanks to being powered by the Netezza engine and enhances DB2 to the ultimate database management system that delivers the best of both worlds: transactional as well as analytical workloads. This presentation brings the latest news from the IDAA development and shows the trends and directions in which this technology develops.
Educational seminar: lessons learned from customer DB2 for z/OS health check... – John Campbell
This presentation, delivered at the Polish DB2 User Group, introduces and discusses the most common issues uncovered by the DB2 for z/OS Development SWAT Team during 360 Degree DB2 for z/OS Continuous Availability Assessment (DB2 360) studies.
DB2 for z/OS Real Storage Monitoring, Control and Planning – John Campbell
Another hot DB2 topic: DB2 for z/OS Real Storage Monitoring, Control and Planning. Check it out and make sure your system runs safely.
Best practices for DB2 for z/OS log based recovery – Florence Dubois
The need to perform a DB2 log-based recovery of multiple objects is a very rare event, but statistically, it is more frequent than a true disaster recovery event (flood, fire, etc.). Taking regular backups is necessary but far from sufficient for anything beyond minor application recovery. If recovery is not prepared, practiced and optimised, it can lead to extended application service downtimes – possibly many hours to several days. This presentation will provide many hints and tips on how to plan, design intelligently, stress test and optimise DB2 log-based recovery.
DB2 11 for z/OS Migration Planning and Early Customer Experiences – John Campbell
This extensive presentation provides guidance to help DB2 for z/OS customers migrate to V11 as quickly as possible, but safely. The material provides additional planning information and shares customer experiences and best practices.
DB2 for z/OS - Starter's guide to memory monitoring and control – Florence Dubois
DB2 for z/OS makes more and more use of REAL memory to improve performance and reduce cost. But if you don't carefully budget and monitor the use of REAL memory on your system, you could be putting your applications at risk. This presentation will go back to the basics and answer the most common questions about REAL memory management, including: How does DB2 use virtual and REAL memory? How do you build a budget based on system settings and buffer pool sizes? How do you size the LFAREA? What are the key performance indicators, and how do you know you are running 'safely'? What can be done to protect the system?
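The budgeting idea described above can be sketched as simple arithmetic: total REAL memory demand is roughly the sum of the buffer pool sizes plus fixed overheads, compared against the LPAR's real storage with some headroom. This is an illustrative sketch only; the pool names, sizes, overhead figure, and the 80% headroom threshold below are invented assumptions, not IBM guidance.

```python
# Illustrative sketch only: pool sizes, the overhead figure and the 80%
# headroom threshold are made-up assumptions, not IBM guidance.

def real_memory_budget_mb(buffer_pools_mb, fixed_overhead_mb, lpar_real_mb,
                          headroom=0.80):
    """Return (total demand in MB, True if it fits within the headroom)."""
    demand = sum(buffer_pools_mb.values()) + fixed_overhead_mb
    return demand, demand <= lpar_real_mb * headroom

# Hypothetical pool sizes in MB against a 16 GB LPAR
pools = {"BP0": 200, "BP1": 4096, "BP8K0": 512}
demand, safe = real_memory_budget_mb(pools, fixed_overhead_mb=1500,
                                     lpar_real_mb=16384)
print(demand, safe)   # 6308 True
```

A real budget would also account for thread storage, EDM pools, and paging behaviour; the point of the sketch is only that the comparison is mechanical once the inputs are known.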
System z Technology Summit: Streamlining Utilities – Surekha Parekh
Most DB2 applications are global non-stop, requiring almost 100% accessibility. Availability demands reduce the amount of time available to perform necessary routine tasks, such as utility maintenance on the underlying data and objects stored in DB2 for z/OS that support critical business applications. In addition, companies are looking for ways to streamline DB2 utility processing to maximize system and personnel resources. How valuable would it be to maximize your use of IBM DB2 Utilities Suite for z/OS for both DB2 9 and DB2 10? What if you could establish DB2 utility practices at a company level and know that they would be monitored and adhered to? Do you want to reduce your batch window during utility sort processing to improve availability and performance? How important would it be to run utilities only on objects when and if it’s necessary? The answers to these questions and more will be revealed in this session.
DB2 for z/OS and DASD-based Disaster Recovery - Blowing away the myths – Florence Dubois
Is your Disaster Recovery solution based on DASD replication functions? In most cases, all you will need to do is a normal restart of DB2 for z/OS. But this assumes the DASD copy is consistent. Otherwise, data corruption is guaranteed and will have to be fixed up, possibly several weeks or months after the event. This presentation will tell you everything you need to know about the Copy Services for IBM System z and what is required to ensure data consistency. It will address the most common myths and misconceptions about these DASD replication solutions. It will also provide hints and tips on how to tune for fast DB2 restart and how to optimise GRECP/LPL recovery.
DB2 for z/OS requires proper allocation and usage of storage for the data stored in its databases, table spaces and tables. But how does DB2 use storage? And where? This presentation breaks down the elements of mainframe storage in terms of how it is used with DB2 databases.
With the laws of physics providing a nice brick wall that chip builders are heading towards for processor clock speed, we are heading into the territory where simply buying a new machine won't necessarily make your batch go faster. So if you can't go short, go wide! This session looks at some of the performance issues and techniques of splitting your batch jobs into parallel streams to do more at once.
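The "go wide" idea above can be shown in miniature: split one batch workload across parallel streams so more work happens at once. This is only a sketch under stated assumptions; `process_item` and the stream count are invented stand-ins, threads are used for simplicity, and a real mainframe batch split must also partition the data so streams do not contend for the same objects.

```python
# Minimal "go wide" sketch: run one batch workload as parallel streams.
# process_item and the stream count are illustrative stand-ins; threads are
# used here for simplicity, where CPU-bound work would use processes.
from concurrent.futures import ThreadPoolExecutor

def process_item(item):
    # stand-in for the real per-record batch work
    return item * item

def run_in_streams(items, streams=4):
    # Executor.map preserves input order even though streams run concurrently
    with ThreadPoolExecutor(max_workers=streams) as pool:
        return list(pool.map(process_item, items))

print(run_in_streams(range(8)))   # [0, 1, 4, 9, 16, 25, 36, 49]
```

The design choice worth noting is that splitting only pays off when the streams are roughly balanced; a single oversized stream recreates the original serial bottleneck.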
Contains information about DSNZPARM, the module that holds the DB2 configuration parameters: the different types of zPARMs, and a way to update them dynamically.
TDWI San Diego 2014: Wendy Lucas Describes how BLU Acceleration Delivers In-T... – IBM Analytics
Originally Published on Sep 25, 2014
Do you experience the snowball effect where you deliver one analytics report and your organization thinks of another and another they need? BLU Acceleration in-memory computing can help. It processes analytics queries at lightning fast speeds, and is a simple to use "load and go" solution. Learn more in this presentation delivered at TDWI San Diego on September 24, 2014.
One of the most CPU-intensive and heavily used functions in every IT environment is sorting. Virtually every application needs to sort data, as well as copy information from one location to another. Utilization of zIIP engines for sort, copy, and compression provides an organization with additional processing capacity and can limit the expansion of the 4-Hour Rolling Average of LPAR Hourly MSU Utilization (4HRA), which will prevent an increase in Monthly License Charges (MLC).
View this IBM Systems Magazine webcast to learn:
• How to assess what's driving peaks in your 4HRA
• 5 advantages of offloading workloads to zIIP engines
• Examples of Fortune 500 companies maximizing their mainframe through zIIP exploitation
IBM Z Cost Reduction Opportunities. Are you missing out? – Precisely
Large companies continue to use mainframes for their most business-critical IT workloads. For these companies, finding ways to get more bang for the mainframe buck, in terms of both costs and performance, is always a high priority. Several converging trends in recent years have made it more challenging than ever to achieve the needed organizational performance at the best possible price point. IT leaders in mainframe departments are seeking out ways to speed processing, especially mundane processing tasks such as sorting, copying, merging, compression, and report generation.
Whether you are looking to get more value from your mainframe investment through enhanced performance, improved efficiency, or modernization, Precisely has multiple solutions for customers running IBM Z systems that can have a dramatic impact on cost and efficiency.
Watch this on-demand webinar to learn about:
• Optimizing mainframe sort workloads
• Leveraging your zIIP processors
• Modernizing your database environment
• Improving visibility into mainframe processing
S de0882 new-generation-tiering-edge2015-v3 – Tony Pearson
IBM offers a variety of storage optimization technologies that balance performance and cost. This session covers Easy Tier, Storage Analytics, and Spectrum Scale.
I hosted a webcast with Sr. VP and GM of HP Storage David Scott. David and I talked about flash-optimized storage and the software defined data center. You can find the audio for the webcast at http://hpstorage.me/ASTB-podcasts - they are number 146 and 147.
Unlock Cost Savings for your IBM z Systems Environment – Precisely
For many larger enterprises, the pressure to squeeze more performance out of existing mainframes, while avoiding or deferring major upgrades, is a top priority. Routine sort processing tasks such as sort, copy, merge, and compression take up a large and disproportionate share of CPU processing time and related mainframe resources. They are thus obvious areas to examine when seeking relief from mainframe cost and performance pressures.
When evaluating potential solutions many organizations want to “try before they buy” to ensure strong ROI. We have developed tools to help you understand your batch processing and calculate your expected ROI.
Watch this on-demand webinar to hear about:
• How our Syncsort MFX solution drives cost savings
• What a “try before you buy” evaluation looks like
• How to analyze results for your specific situation
POWER8 the x86 Server Farm - IBM Business Partners use POWER8 to Lower Client... – Paula Koziol
Discover ISV solutions built using Linux on IBM POWER8 that will lower your Total Cost of Ownership compared to alternative platform technologies. Gain a competitive advantage with these solutions that analyze more data faster and improve performance at a much lower cost. Linux on IBM POWER8 is the only infrastructure that provides both scale-out and scale-up capabilities, optimizing the efficiency of your workloads while reducing your costs and allowing you to handle any business challenge.
This presentation highlights several high value solutions we have developed with Independent Software Vendors (ISV) IBM Business Partners. Hear directly from our ISV partners on the partnership and the value we deliver to our joint customers.
Visit http://www.ibm.com/power for more information.
Also visit the IBM Systems ISVs YouTube video channel for additional ISV Solutions on IBM Power Systems: https://www.youtube.com/playlist?list=PLN0Az2aq_pjgJs-1fFX0fw4BMRy2GeaHC
Follow us at @IBMSystemsISVs (https://twitter.com/IBMSystemsISVs)
Live CEO Interview and Webinar Update on the State of Deduplication – Storage Switzerland
Learn From Two Deduplication Veterans: George Crump, Founder Storage Switzerland and Tom Cook, CEO Permabit:
* Are All Deduplication Methods the Same?
* Why is Dedupe so Valuable in the All-Flash Use Case?
* What Can Go Wrong with Deduplication?
* Ask your deduplication questions to the dedupe panel!
Simplify Data Management and Go Green with Supermicro & Qumulo – Rebekah Rodriguez
Data is growing faster than existing systems are designed to ingest and then analyze. As a result, storage sprawl, wasted resources, and time-consuming complexity are holding back employees and customers from making better business decisions. Supermicro and Qumulo have teamed up to create a simple, sustainable, and fast system to store and manage massive amounts of unstructured data.
Join this webinar to learn how a highly performant and dense infrastructure platform from Qumulo and Supermicro can meet business requirements by taming unstructured data management challenges.
Watch the webinar: https://www.brighttalk.com/webcast/17278/513928
IOD session 3423: analytics patterns of expertise, the fast path to amazing ... – Rachel Bland
Session content from IBM Information On Demand 2013 provides an overview of the IBM Business Intelligence Pattern with BLU Acceleration and explains the underlying technology employed to deliver high speed analysis more quickly and easily than ever before.
JavaOne BOF 5957 Lightning Fast Access to Big Data – Brian Martin
Traditionally data is placed in storage and then, when needed, accessed and acted upon in memory. This results in a natural bottleneck that degrades performance. Today, with in-memory computing, we can take advantage of a better understanding of how data is shaped and stored. In this session, you will learn how in-memory data grids mark an inflection point for enterprise applications, especially in dealing with big data. The session covers how large data sets can be made available and can be accessed nearly instantaneously.
Presentation of the talk given by Carmine Spagnuolo (Postdoctoral Research Fellow, Università degli Studi di Salerno / ACT OR) titled "Technology insights: Decision Science Platform" at the Decision Science Forum 2019, the most important Italian event on Decision Science.
Why z/OS is a Great Platform for Developing and Hosting APIs – Teodoro Cipresso
z/OS Connect Enterprise Edition makes it possible to create new value by enabling the creation of APIs that bring together multiple, disparate, z subsystem assets.
DB2® 11 for z/OS® helps secure and manage high-volume business transactions, batch reporting, and business-critical analytics together, in a single environment, and built on the legendary qualities of service of DB2 and System z. Upgrading to the latest release of this critical software can bring valuable rewards – if you know how to take advantage of it.
Join John Campbell, IBM Distinguished Engineer, for this special event “DB2 11 for z/OS: Technical overview and hot topics.” John will provide a technical overview and review “hot topics” on one of the most stable releases in DB2 history. Announced in October 2013, DB2 11 offers many improvements in performance and CPU savings, including management of large volumes of data with minimal disruption for running applications. Attendees will hear John’s personal “hints and tips” for a fast migration, pitfalls to avoid and important lessons learned from early client experiences. You will learn about many important enhancements and other DB2 11 news.
Overall, this webcast will give you a good foundation of knowledge for building your DB2 11 migration strategy, and understanding how it can benefit you and your enterprise.
IBM DB2 Analytics Accelerator Trends & Directions – Namik Hrle, Surekha Parekh
IBM DB2 Analytics Accelerator has drawn lots of attention from DB2 for z/OS users. In many respects it presents itself as just another DB2 access path (but what a powerful one!) and its deep integration into DB2 as well as application transparency makes it one of the most exciting DB2 enhancements in years. The IBM DB2 Analytics Accelerator complements DB2 by adding industry leading data intensive complex query performance thanks to being powered by the Netezza engine and enhances DB2 to the ultimate database management system that delivers the best of both worlds: transactional as well as analytical workloads. This presentation brings the latest news from the IDAA development and shows the trends and directions in which this technology develops.
Abstract - DB2 10 for z/OS - Where we are today, and where we are going. This session will take you through the latest with DB2 10. What functions are customers finding most valuable, what the latest enhancements are, and what is the current status of DB2 10 in the marketplace? We will also take you through the latest on DB2 11, the status of the ESP, and also touch on some industry trends that are influencing the enhancements that we are planning for DB2 in the future.
Tools for developing and monitoring SQL in DB2 for z/OS – Surekha Parekh
Building optimal applications against DB2 for z/OS is often difficult. As most of the real issues relate to maintenance and to changes in the surrounding parameters, it’s necessary to define a methodology which enables performance monitoring as well as optimization of existing code.
By knowing more about how the SQL in your shop performs, a lot can be done to anticipate future problems, such as identifying performance bottlenecks before they impact users and add to IT costs. By building a performance history table, where you monitor performance as time passes and changes in the environment occur, it’s possible to be proactive and optimize before the costs turn red.
Gain Insight Into DB2 9 And DB2 10 for z/OS Performance Updates And Save Cost... – Surekha Parekh
In this session, we will discuss the latest updates for system and application performance on IBM DB2 9 and DB2 10 for z/OS. Beginning with performance impact and tuning at the system and application level, we’ll have a special focus on topics requested by product representatives and field inquiries. This session will also cover DB2 10 for z/OS and its improved performance and scalability — including general CPU usage reduction and scalability, buffer management, and insert and select functionality — in addition to the reduction of virtual storage constraints. Other topics include improvements to DDF, JDBC, SQLPL and line-of-business performance. You’ll also learn how DB2 9 and DB2 10 interact with IBM z10 and z11 processors.
DB2 for z/OS Update: Data Warehousing On System z – Surekha Parekh
Abstract:
Data warehouses form the foundation of most business analytics solutions. Recent analysis reveals increasing demand for near-real-time data, as well as for integration between day-to-day business applications.
IBM will share insight on today’s business environment and its impact on an IT organization’s ability to deliver a competitive Business Analytics and Data Warehousing strategy. You will learn how DB2 for z/OS, combined with other InfoSphere data movement offerings from IBM, can help enable information on demand and meet user demands.
What’s New For SQL Optimization In DB2 9 And DB2 10 For z/OS – Surekha Parekh
Abstract:
Reducing the total cost of ownership is everyone’s challenge, and it’s one of the key benefits of query optimization. Since its inception in DB2® for z/OS®, the cost-based Optimizer has continually evolved to deliver expert-based query and workload analysis and many other performance-enhancing functions. Based on the latest use by customers, the Optimizer delivers even more in DB2 10. This session will show you how the Optimizer in DB2 10 for z/OS can help reduce your total cost of ownership. We’ll discuss incremental query optimization enhancements available with the upcoming release of DB2 10 that build on DB2 9 enhancements, such as global query optimization. This session will cover the insight discovered in early usage, provide the reason for each enhancement and highlight the most important enhancements.
Opendatabay - Open Data Marketplace.pptx – Opendatabay
Opendatabay.com unlocks the power of data for everyone. Open Data Marketplace fosters a collaborative hub for data enthusiasts to explore, share, and contribute to a vast collection of datasets.
First ever open hub for data enthusiasts to collaborate and innovate. A platform to explore, share, and contribute to a vast collection of datasets. Through robust quality control and innovative technologies like blockchain verification, opendatabay ensures the authenticity and reliability of datasets, empowering users to make data-driven decisions with confidence. Leverage cutting-edge AI technologies to enhance the data exploration, analysis, and discovery experience.
From intelligent search and recommendations to automated data productisation and quotation, Opendatabay's AI-driven features streamline the data workflow. Finding the data you need shouldn't be complex. Opendatabay simplifies the data acquisition process with an intuitive interface and robust search tools. Effortlessly explore, discover, and access the data you need, allowing you to focus on extracting valuable insights. Opendatabay breaks new ground with dedicated, AI-generated synthetic datasets.
Leverage these privacy-preserving datasets for training and testing AI models without compromising sensitive information. Opendatabay prioritizes transparency by providing detailed metadata, provenance information, and usage guidelines for each dataset, ensuring users have a comprehensive understanding of the data they're working with. By leveraging a powerful combination of distributed ledger technology and rigorous third-party audits, Opendatabay ensures the authenticity and reliability of every dataset. Security is at the core of Opendatabay: the marketplace implements stringent security measures, including encryption, access controls, and regular vulnerability assessments, to safeguard your data and protect your privacy.
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2... – pchutichetpong
M Capital Group (“MCG”) expects demand to grow and supply to evolve, driven by institutional investment rotating out of offices and into work from home (“WFH”), and by the ever-expanding need for data storage as global internet usage grows, with experts predicting 5.3 billion users by 2023. These market factors will be underpinned by technological changes, such as progressing cloud services and edge sites, allowing the industry to see strong expected annual growth of 13% over the next 4 years.
Whilst competitive headwinds remain, represented through the recent second bankruptcy filing of Sungard, which blames “COVID-19 and other macroeconomic trends including delayed customer spending decisions, insourcing and reductions in IT spending, energy inflation and reduction in demand for certain services”, the industry has seen key adjustments, where MCG believes that engineering cost management and technological innovation will be paramount to success.
MCG reports that the more favorable market conditions expected over the next few years, helped by the winding down of pandemic restrictions and a hybrid working environment will be driving market momentum forward. The continuous injection of capital by alternative investment firms, as well as the growing infrastructural investment from cloud service providers and social media companies, whose revenues are expected to grow over 3.6x larger by value in 2026, will likely help propel center provision and innovation. These factors paint a promising picture for the industry players that offset rising input costs and adapt to new technologies.
According to M Capital Group: “Specifically, the long-term cost-saving opportunities available from the rise of remote managing will likely aid value growth for the industry. Through margin optimization and further availability of capital for reinvestment, strong players will maintain their competitive foothold, while weaker players exit the market to balance supply and demand.”
Techniques to optimize the pagerank algorithm usually fall in two categories. One is to try reducing the work per iteration, and the other is to try reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, with the same in-links, helps reduce duplicate computations and thus could help reduce iteration time. Road networks often have chains which can be short-circuited before pagerank computation to improve performance. Final ranks of chain nodes can be easily calculated. This could reduce both the iteration time and the number of iterations. If a graph has no dangling nodes, pagerank of each strongly connected component can be computed in topological order. This could help reduce the iteration time, reduce the number of iterations, and also enable multi-iteration concurrency in pagerank computation. The combination of all of the above methods is the STICD algorithm. [sticd] For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
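One of the techniques named above, skipping computation on vertices that have already converged, can be sketched as follows. The graph, damping factor, and tolerance are illustrative assumptions, and freezing a vertex early is a heuristic (its neighbours may still drift slightly); real implementations such as STICD combine this with chain short-circuiting and component ordering.

```python
# Sketch of convergence skipping in PageRank: once a vertex's rank change
# falls below the tolerance, it is frozen and skipped in later iterations.
# Assumes every vertex listed has at least one out-link (no dangling nodes).

def pagerank_skip_converged(out_links, d=0.85, tol=1e-6, max_iter=100):
    nodes = list(out_links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    converged = {v: False for v in nodes}
    # build the reverse adjacency once, up front
    in_links = {v: [] for v in nodes}
    for u, outs in out_links.items():
        for v in outs:
            in_links[v].append(u)
    for _ in range(max_iter):
        done = True
        new_rank = dict(rank)
        for v in nodes:
            if converged[v]:
                continue  # skip vertices that have already settled
            r = (1 - d) / n + d * sum(rank[u] / len(out_links[u])
                                      for u in in_links[v])
            if abs(r - rank[v]) < tol:
                converged[v] = True
            else:
                done = False
            new_rank[v] = r
        rank = new_rank
        if done:
            break
    return rank

print(pagerank_skip_converged({"a": ["b"], "b": ["a"]}))
```

On the symmetric two-node graph above both ranks settle at 0.5 immediately, so every vertex is frozen after the first pass; on larger graphs the saving comes from peripheral vertices converging many iterations before the core does.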
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
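One minimal shape for such automated data-quality checks, sketched in Python with hypothetical field names (order_id, amount) rather than any specific tool's API:

```python
# Hedged sketch of an automated data-quality gate: each rule is a
# (description, predicate) pair, and failing records are flagged with
# their row index at the source instead of flowing downstream.
def validate(records, rules):
    errors = []
    for i, rec in enumerate(records):
        for desc, pred in rules:
            if not pred(rec):
                errors.append((i, desc))
    return errors

rules = [
    ("order_id is present",
     lambda r: r.get("order_id") is not None),
    ("amount is non-negative",
     lambda r: isinstance(r.get("amount"), (int, float)) and r["amount"] >= 0),
]
records = [
    {"order_id": 1, "amount": 9.99},        # clean row
    {"order_id": None, "amount": -5},       # fails both rules
]
errs = validate(records, rules)
```

The row indices carried in the error tuples are what make the downstream root-cause analysis mentioned above possible.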
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
A Time Traveller’s Guide to DB2: Technology Themes for 2014 and Beyond
1. #IDUG
A Time Traveller’s Guide to DB2:
Technology Themes for 2014 and
Beyond
Julian Stuhler
Principal Consultant
Triton Consulting
2. #IDUG
Disclaimer
“Prediction is very difficult, especially if it's about
the future”
Niels Bohr, Nobel laureate in Physics
• Any mention of future features, products or overall strategic direction are purely my personal
opinion, and no reliance should be placed upon them ever coming to pass
• The user assumes the entire risk related to its use of information in this presentation. Triton
Consulting provides such information "as is," and disclaims any and all warranties, whether
express or implied, including (without limitation) any implied warranties of merchantability or
fitness for a particular purpose. In no event will Triton Consulting be liable to you or to any
third party for any direct, indirect, incidental, consequential, special or exemplary damages or
lost profit resulting from any use or misuse of this data.
• DB2 for OS/390 and DB2 for z/OS are trademarks of International Business Machine
corporation. This presentation uses many terms that are trademarks. Wherever we are
aware of trademarks the name has been spelled in capitals.
4. #IDUG
DB2 Today
“The only constant is change”
“Good character is not formed in a week or a
month. It is created little by little, day by day.
Protracted and patient effort is needed to develop
good character.”
Heraclitus
Greek Philosopher
(c.535 BC – 475 BC)
“Good DBAs are not formed in a week or a month. They are created little by
little, day by day. Protracted and patient effort is needed to develop good DBAs.”
5. #IDUG
DB2 Today
June 1 2008 Sept 24 2013
Wordles generated by Tagxedo
http://www.tagxedo.com
Old IBM website from the Wayback Machine
http://web.archive.org http://www-01.ibm.com/software/data/
7. #IDUG
DB2 Technology Themes
• Cost Reduction
• High Availability
• In-Memory Computing
• DB2 Skills Availability
• Database Commoditisation
• Big Data
8. #IDUG
Cost Reduction – Today
• Ongoing focus on improving profitability and ruthlessly
eliminating unnecessary costs
• IT spending is a major cost component for all organisations
• Gartner’s 2013 Worldwide IT Spending analysis showed growth rate
of just 0.4% for 2013
• Managing hardware costs
• Moore’s law is still alive and well as it approaches its 50th birthday
• Compression can dramatically reduce DASD costs
• Adaptive compression in DB2 for LUW V10 can yield spectacular gains
• Further savings possible via actionable compression in BLU
• Overall cost savings being partially offset by move to more expensive
SSD devices (although they are getting cheaper too)
• Virtualisation and consolidation technologies are helping to improve
hardware utilisation rates
• Linux on System z offers some intriguing possibilities here
9. #IDUG
Cost Reduction – Today
• Managing Software licence fees
• MLC pricing on the mainframe means that CPU burned during peak period
(4HRA) directly impacts software costs
• Ongoing focus within DB2 for z/OS to drive down
CPU consumption
• DB2 code optimisation in DB2 10 (and now in DB2 11)
• Increased use of System z speciality engines and
hybrid solutions such as the IBM DB2 Analytic
Accelerator
• Aggressive new packaging options for DB2 for LUW
• AWSE and AESE include lots of additional functionality
such as compression, BLU, pureScale, etc
• Linux on System z can offer major software licence savings
• Managing people costs
• Salary increases have been generally outstripping increase in overall IT
spend, so we’re all consuming a greater proportion of the IT budget
• From ALTER to Autonomics, it’s all about improving productivity and doing
more with less
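The 4HRA mechanism above can be made concrete with a small Python sketch (illustrative numbers, not real billing data): MLC charges track the peak four-hour rolling average of MSU consumption, so moving work out of the peak window, rather than lowering average usage, is what cuts the bill:

```python
# Hedged sketch: peak 4-hour rolling average (4HRA) over hourly MSU samples.
def peak_4hra(msu_by_hour):
    window = 4
    return max(sum(msu_by_hour[i:i + window]) / window
               for i in range(len(msu_by_hour) - window + 1))

flat  = [100] * 24                          # steady load all day
spiky = [60] * 20 + [200, 220, 210, 190]    # quiet day with a batch peak

peak_4hra(flat)    # 100.0
peak_4hra(spiky)   # 205.0 -- billing follows the peak window, not the mean
```

Note that the spiky profile has a lower daily mean than the flat one, yet a far higher 4HRA; that asymmetry is why workload scheduling and offload to speciality engines matter so much for software cost.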
10. #IDUG
Cost Reduction – Tomorrow
• Signs that pressure is easing on
overall IT budgets
• Latest Gartner estimates show 3.2%
annual increase for 2014, to $3.8
trillion
• 6.9% increase in enterprise software
spending, with CRM, DBMS and data
management the major items
• However, Gartner expects a
renewed focus on implementing
new IT systems which will consume
budget
• Current cost management pressures
unlikely to reduce
11. #IDUG
Cost Reduction – Tomorrow
• Hardware
• Moore’s Law under pressure, only has 6-8 years left before physics
dictate fundamental shift from CMOS to other technologies
• Photonics, quantum computers
• Ongoing focus on reducing operational
costs will continue to deliver benefits
• Recent Intel POC submerged high-end
servers in 3M’s dielectric “Novec
Engineered Fluid” to increase server
density and cut cooling costs by up
to 95%
• System z hardware approaching thermal limits for indirect cooling so
mainframes may go this way too
12. #IDUG
Cost Reduction – Tomorrow
• Software Licence Fees
• Increased offload to zIIP, IDAA and other speciality processors and hybrid
solutions
• But what happens when we approach 100% CP offload?
• New MLC models to recognise the changing role of the mainframe
• IBM announcement on 8th April 2014 for new model offering up to 60% reduction on
processor capacity reported for Mobile transactions
http://www-03.ibm.com/press/uk/en/pressrelease/43619.wss
• Practice of “bundling” likely to continue as a way of maintaining software
revenues on distributed platforms
• People Costs
• Skills shortages likely to continue to increase people costs. See skills section
later
• Continued emphasis on autonomics, ease of use and productivity features
13. #IDUG
High Availability – Today
• Impact of down time in critical IT systems has never been
higher
• Revenue loss
• Reputational damage
• Remedial costs
• Regulatory and Contract Compliance Impact
• How much?
• A 2011 Ponemon Institute report calculated average of $5,617 per
minute for large US data centres
• Amazon “went dark” for 49 minutes in Jan 2013, at estimated cost of
$66,240 per minute
• Unplanned outage is usually the most painful, but planned
outage hurts too
14. #IDUG
High Availability – Today
• Relax, you’re working with IBM – DB2 on both platforms is
in good shape for reducing unplanned outage
• Data Sharing on DB2 for z/OS is mature and generally much
better understood by customers than it used to be
• “Gold standard” for continuous availability
• DB2 11 for z/OS contains some valuable new performance
enhancements
• DB2 for LUW pureScale feature implements similar architecture
• Included in AWSE and AESE
• Until recently pureScale supported only on IBM POWER and System
x servers, but as of DB2 10.1 FP2 or DB2 10.5 FP1, non-IBM x86
servers also supported
15. #IDUG
High Availability – Today
• Eliminating planned outage is an ongoing challenge, but
news is generally good and improving all of the time
• Schema change
• Housekeeping
• Preventative maintenance
• Version upgrades
16. #IDUG
High Availability – Tomorrow
• Further data sharing and GDPS enhancements for DB2
for z/OS to re-open the gap with competitors
• Continued expansion of dynamic schema change
capabilities for LUW and z/OS
• Online version upgrades
• Further strides towards truly online version upgrades for DB2 for
z/OS
• First steps for pureScale
17. #IDUG
In-Memory Computing – Today
• Disk access speeds are increasing, but processor speeds are increasing
at an even greater rate
• Therefore, relative “cost” of I/O operations is getting bigger
• Even new (expensive) SSDs are orders of magnitude slower than accessing
processor storage
• Caching data in memory avoids I/O
• Improves elapsed time
• Reduces CPU
• Reduces operational cost
• Allows novel access patterns to be used
• Availability of NAND / flash memory
reduces impact if I/O is required
• SSD
• Flash Express
• Pricing is volatile/complicated,
but memory is a one-off cost
[Diagram: storage access hierarchy — Buffer Pool: nanoseconds (10^-9); DASD cache: <2 milliseconds (10^-3); DASD disk: >5 milliseconds]
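A back-of-envelope calculation using the latency figures above shows why buffer-pool hit ratio dominates: even a 1% miss rate leaves the effective access time roughly 500 times slower than pure memory access. The specific nanosecond figures below are illustrative assumptions, not measured values:

```python
# Effective access time = hit_ratio * memory latency + miss_ratio * disk latency.
# mem_ns and disk_ns are illustrative (~100 ns memory, ~5 ms disk).
def effective_access_ns(hit_ratio, mem_ns=100, disk_ns=5_000_000):
    return hit_ratio * mem_ns + (1 - hit_ratio) * disk_ns

effective_access_ns(0.99)    # ~50,099 ns: the 1% of misses dominates
effective_access_ns(0.999)   # ~5,100 ns: ten times fewer misses, ~10x faster
```

This is the arithmetic behind "relative cost of I/O is getting bigger": as processors speed up, mem_ns shrinks while disk_ns barely moves, so every avoided I/O is worth more.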
18. #IDUG
In-Memory Computing – Today
• OLTP
• Today’s server platforms can cache large amounts of data in memory
• zEC12 can support up to 3TB per CEC (1TB per LPAR)
• High-end Intel-based servers support 6-8TB per server
• Average deployed server memory is increasing on both mainframe and
distributed platforms
• Specific steps being taken to allow DB2 customers to exploit larger
memory footprints for OLTP workloads
• PGFIX(YES) in DB2 9
• PGSTEAL(NONE) and high-performance DBATs in DB2 10
• 1MB / 2GB page frames in DB2 10 / DB2 11
• Large (16MB) and Huge (16GB, AIX only) OS page support in DB2 for
LUW
19. #IDUG
In-Memory Computing – Today
• Analytics
• DB2 10.5 for LUW (AWSE & AESE) includes “BLU” technology - a collection of
novel technologies for optimising analytic queries,
including some specific in-memory techniques
• Columnar data store with patented dynamic
in-memory optimisation for data prefetch and
retention – “treats DRAM as disk”
• Data held in compressed format in memory, while
still allowing joins and predicate evaluation –
“actionable compression”
• Very impressive query performance across a wide
variety of analytic (and even some “heavy” OLTP)
workloads
• 10x – 25x elapsed time improvement is common
• Ability to more fully utilise all of the available
memory / CPU in a given server configuration
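The "actionable compression" idea can be illustrated with a deliberately simplified sketch (not IBM's implementation): encode a column through an order-preserving dictionary, then evaluate a range predicate directly on the small integer codes without ever decompressing the values:

```python
# Toy illustration of evaluating predicates on compressed (encoded) data.
# Codes are assigned in sorted order, so value comparisons map to code
# comparisons: v1 < v2  iff  code[v1] < code[v2].
def encode_column(values):
    ordered = sorted(set(values))               # order-preserving dictionary
    code = {v: i for i, v in enumerate(ordered)}
    return [code[v] for v in values], code

col = ["DE", "US", "FR", "US", "DE", "GB"]
codes, code = encode_column(col)                # codes: [0, 3, 1, 3, 0, 2]

# The predicate country < "GB" becomes an integer comparison on the codes:
limit = code["GB"]
matches = [i for i, c in enumerate(codes) if c < limit]   # rows with DE, FR
```

Real column stores add much more (frequency-based encoding, SIMD evaluation, data skipping), but the core point survives even in this toy: the predicate never touches the decoded strings.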
20. #IDUG
In-Memory Computing – Tomorrow
• Future zEnterprise machines likely to significantly increase maximum
memory capacity per CEC / LPAR
• Cost per GB likely to continue with general downward trend
• Average installed memory per CEC will continue to increase
• DB2 for z/OS may page-fix buffer pools by default
• More common customer use of large / huge page frames
• Page fixing and large page frame support for other DB2 storage areas
(e.g. EDM pool)
• Possible use of pageable 1MB page frames supported by zEC12
• Increased autonomic capability, reduction of memory-specific system
parameters
• DB2 BLU will continue to evolve
• Big push just starting on DB2 BLU in the cloud
21. #IDUG
Database Commoditisation – Today
• We’ve always lived in a heterogeneous world, but perception of
databases as a commodity is increasing
• Many reasons, including
• The ubiquity of SQL
• The rise of packaged solutions
• Java (JDBC, frameworks)
• RDBMS vendor compatibility / migration initiatives
• SOA
• Skills availability and support team size
• The result
• Lack of management awareness of business value of a specific
database
• Support teams and developers working with many database systems
• Lowest common denominator approach
22. #IDUG
Database Commoditisation – Today
• Fight back!
• Make it your mission to keep your management aware of
the unique business value of DB2
• If you have to be a Jack of all trades, at
least try to become a master of one
• Guess which one?
• Take pragmatic approach to lowest
common denominator issue
• Fight the battles worth winning
• Accept the rest
23. #IDUG
DB2 Skills – Today
• DB2 is getting more complex / capable in every release
• At the same time, IBM is trying to make it easier to use / understand
• Great until something needs fixing “under the hood”
• DB2 skills demographic is changing
• Source: My own observations only – no scientific backup!
[Charts: % of DB2 technicians by skill level, today vs. tomorrow, showing the changing demographic]
25. #IDUG
DB2 Skills – Tomorrow
• Jury still out on longer-term impact of greying mainframe
workforce
• IBM making efforts with its Academic Initiative
• Training provided for 80,000 students at over 1,000 schools in 70
countries during past 7 years
• 3 mainframe Massive Open On-line Courses
(MOOCs) will be made available in stages
throughout the year (no cost and
available to anyone, anywhere,
at any time)
• Expansion of DB2’s autonomic
capabilities will help, but requirement
for some deeper specialist skills
likely to continue for foreseeable
future
[Chart: Task Complexity vs. Autonomics]
26. #IDUG
DB2 Skills – Tomorrow
[Charts: permanent UK jobs requiring specific skills as proportion of total demand — SQL, DBA, Performance Tuning, Big Data]
27. #IDUG
Big Data – Today
• Big Data and Analytics are everywhere you look
• What’s a DB2 guy (or girl) to do?
• Things to keep in mind
• Hadoop is not a replacement for existing infrastructure, but a tool to
augment it
• Your role is still vital to your organisation!
• “90% of the world’s data is unstructured, but 90% of the world’s most
important data is structured”
David Barnes, IBM, 2012 IDUG Europe Keynote Speaker
• Database people have been doing big data and analytics for the past
40 years or so, just with different tools and terms (and capitalisation)
• If you have the right attitude / mind-set, a DBA background is an
excellent stepping stone to becoming a wealthy “Data Scientist”
28. #IDUG
Big Data – Today
• One of the secrets to DB2’s longevity is to “embrace and
extend” new technologies, and Big Data is no exception
• DB2 for z/OS
• IBM DB2 Analytics Accelerator for efficiently running complex
query workloads
• SQL extensions in most recent releases to improve query /
analytic workloads
• DB2 for LUW
• BLU Acceleration to dramatically speed up analytics and
reporting, by multiple orders of magnitude
• Part of DB2 for LUW V10.5 (included in AWSE and AESE)
• Remember that DB2 for LUW still holds Guinness World Record
for Largest Data Warehouse (3PB)
29. #IDUG
Big Data – Today
• Integration between DB2 and Hadoop opens new
possibilities for gaining actionable insight
30. #IDUG
Big Data – Tomorrow
• DB2 will continue with “embrace and extend” philosophy
• Efficient interaction with highly optimised big data platforms such as Hadoop /
BigInsights
• Further expand internal analytic / big data capabilities
• One size does NOT fit all!
• Each approach has strengths and
weaknesses, best one is dependent
on application requirements
• NoSQL = Not Only SQL (or YeSQL)
• Several NoSQL databases have
added SQL capabilities
• NoSQL for z/OS!
• Simple Key / value NoSQL database for z/OS, currently freeware
• http://www.nosqlz.com
31. #IDUG
Some Questions to Ponder
• What have you done recently to:
• Reduce the operational costs of the systems you support?
• Improve your personal productivity?
• Make the savings that you’ve made visible to the budget holders?
• Test your failover / disaster recovery arrangements?
• Review your housekeeping / maintenance / upgrade procedures to
ensure you’re maximising availability?
• Improve and expand your DB2 skills?
• Make management aware of the business value of DB2?
• Keep yourself relevant in a Big Data world?
• Prepare for the future?
“The future depends on what you do today”
Mahatma Gandhi
32. #IDUG
Where’s the future I was promised?
Portable fusion
reactor
Self-Tying
Laces
Hoverboard
Flying
cars
33. #IDUG
A Time Traveller's Guide to DB2:
Technology Themes for 2014 and
beyond
Julian Stuhler
Principal Consultant
Triton Consulting
Editor's Notes
Predated Plato by about 100 years. “The weeping philosopher”. The first DBA?

IBM acquired Netezza in 2010. pureData brand announced in late 2011.

Moore's law: over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. Named after Intel co-founder Gordon Moore, who described the trend in his 1965 paper. From thousands of transistors in the early 1970s to billions now (Intel Xeon Phi has 5 billion, 61 cores on 22nm); the 14nm Knights Landing generation has 72 cores. Adaptive compression actually uses two compression approaches. The first employs the same table-level compression dictionary used in classic row compression to compress data based on repetition within a sampling of data from the table as a whole. The second approach uses a page-level dictionary-based compression algorithm to compress data based on data repetition within each page of data. The dictionaries map repeated byte patterns to much smaller symbols; these symbols then replace the longer byte patterns in the table. The table-level compression dictionary is stored within the table object for which it is created, and is used to compress data throughout the table. The page-level compression dictionary is stored with the data in the data page, and is used to compress only the data within that page. DB2 with BLU Acceleration introduces several patented techniques permitting DB2 to not only store data more efficiently, but also to better process it while it is still compressed. BLU Acceleration applies predicates, performs joins, and does grouping, all on the compressed values of column-organized tables. Since no secondary indexes or MQTs are needed on column-organized tables, you save storage space. This combination brings together all resources (I/O bandwidth, bufferpools, memory bandwidth, processor caches, and even machine cycles) through single-instruction, multiple-data (SIMD) operations. SSDs now well under $1 per GB ($40 per GB in 2008). Mention Flash Express?? Hardware utilisation rates of 20% or less common on distributed servers.

DB2 11 “out-of-the-box” DB2 CPU savings: up to 10% for complex OLTP; up to 10% for update-intensive batch workloads; up to 25% for complex reporting queries against uncompressed tables (up to 40% CPU savings against compressed tables). zIIP used for pseudo-deleted index cleanup, log write/read, utilities. Additional CPU savings and performance improvements may be possible with application and/or system changes that take advantage of new DB2 11 enhancements including log replication capture, data sharing using extended log record format, and pureXML. 120 PVUs per core on zLinux EC12 (100 on Intel). Autonomics: pseudo-deleted index cleanup, reorg avoidance; self-tuning memory manager on LUW (less successful on z); temporal, transparent archiving.

Intel Broadwell architecture uses 14nm; existing roadmap goes down to 5nm. 7-5nm accepted as limit of current technology, around 2020-2022. New materials: indium gallium arsenide (InGaAs), indium phosphide (InP), silicon germanium (SiGe), graphene. Photonics. zEC12 still has water cooling options.

zEC12 has a 2:1 ratio of zIIP to CPs. Three times the number of mobile phones in the world as computers. DB2 AWSE/AESE bundle Optim tooling, compression, pureScale, HADR, etc.

JC will discuss DB2 11 data sharing enhancements: LRSN spin reduction, GBP write-around protocol. DB2 10.5 supports HADR with pureScale. Roy Cecil will talk about GDPC later.

Schema change: ALTER TABLE DROP COLUMN. DB2 LUW 10.5 online fixpak.

Data sharing is a key System z competitive differentiator, but the competition is closing the gap: improved automatic DB2 peer recovery; asynch CF lock duplexing for multi-site continuous availability; improved IFI 306 performance for active/active log capture. Dynamic schema change: change DB owner, view changes, change tablespace type, etc. Major theme for Cypress.

DRAM/NAND prices decreased historically, but are now static/rising due to non-technology factors (volume down due to PC shipments down, lower demands of mobile devices, recession, supplier fires, etc).

Pools for a given subsystem can be up to 1TB total in DB2 8, 9 and 10, with a limit of 2x available real storage. Huge page memory is only available on AIX.

By isolating workloads, DB2 can achieve higher cache coherency and eliminate low-level thrashing of the cache. 10x compression in BLU compared to traditional techniques, due to the column-based format. No MQTs, indexes etc needed, so more savings.

Data scientist (from David Barnes): need curiosity and cleverness as well as technical skills; ask questions of the data; learn when and how to simplify the data, and when it's good enough.

IDAA V4 just announced: accelerate a greater range of SQL (e.g. static SQL); workload balancing/failover across multiple accelerators. SQL extensions: DB2 11 includes advanced aggregation grouping sets, GROUP BY ROLLUP. BLU concepts: columnar store; data skipping (skip irrelevant data); dynamic in-memory technology (RAM instead of disk); actionable compression (keep data compressed for processing); parallel vector processing (use of parallel cores/processors).

“Embrace, extend, and extinguish”,[1] also known as “Embrace, extend, and exterminate”,[2] is a phrase that the U.S. Department of Justice found[3] was used internally by Microsoft[4] to describe its strategy for entering product categories involving widely used standards, extending those standards with proprietary capabilities, and then using those differences to disadvantage its competitors. Object-relational, XML and columnar are all examples of DB2 embracing and extending.

NoSQL vs SQL: remember Object DB vs Relational? NoSQL examples: Column: Accumulo, Cassandra, HBase. Document: Clusterpoint, Couchbase, MarkLogic, MongoDB. Key-value: Dynamo, MemcacheDB, Project Voldemort, Redis, Riak. Graph: Allegro, Neo4J, OrientDB, Virtuoso.