The document discusses operational analytics and its performance on Informix, including what operational analytics is, how it can be implemented on Informix, and performance analysis of Informix on Intel platforms. It provides an overview of operational analytics and its challenges, how it can leverage Informix for the complete lifecycle, and benchmarks showing Informix's scaling on Intel's Xeon platforms for operational analytics workloads.
Operational Process Analytics - Why traditional analytics and monitoring are ... (Elmar Weber)
A talk from the Activiti Global User Day 2015 in Paris. It covers operational intelligence: why it is an important topic, particularly for Business Process Management, why current BPM vendors don't cover it, and why the typical reaction of applying Business Intelligence methods is not enough. I then go into how Cupenya solves this, and how easy it is to get started with the open-source Activiti process engine, adding one line of code to provide real-time, predictive operational analytics to business users.
This presentation will help you understand the basic building blocks of Business Intelligence. Learn how decisions are triggered, the complete decision process and who makes decisions in the corporate world.
More importantly, understand the core components of a Business Intelligence architecture, such as a data warehouse, data mining, OLAP (Online Analytical Processing), OLTP (Online Transaction Processing), and data reporting. Each component plays an integral part in enabling today's managers and decision makers to collect, analyze, and interpret data and make it actionable for decision making.
Business intelligence has become an integral capability that must be incorporated to ensure business survival. It is a tool that helps you analyze historical data and forecast the future so that you are always one step ahead in your business.
Please feel free to like, share and comment as you please!
Fully leveraging your data, infrastructure, and IT staff has never been more important than it is now, during these times of fiscal responsibility and evolving business demands. In response, businesses need to maximize their IT by getting increased performance, efficiency, and economics out of their infrastructure and resources.
This presentation focuses on three key technologies that provide particularly compelling opportunities to maximize IT:
-All-flash systems that accelerate access to information for faster decision-making, analysis and productivity.
-Unified storage solutions that enable you to process more, and diverse, workloads in less time while driving capacity efficiencies.
-Unified compute solutions that deliver improved orchestration and automation and enhance the productivity of your IT staff, while avoiding costly over- or under-provisioning.
SAP HANA | SAP HANA Database | Introduction to SAP HANA (James L. Lee)
SAP HANA, SAP HANA implementation scenarios, SAP HANA deployment scenarios, SAP HANA implementations, SAP HANA implementation and modeling, SAP HANA implementation cost, SAP HANA implementation partners, applications based on SAP HANA, SAP HANA databases.
Microsoft SQL Server 2012 Data Warehouse on Hitachi Converged Platform (Hitachi Vantara)
Accelerate breakthrough insights across your organization with Microsoft SQL Server 2012 Data Warehouse running on the mission-critical and ready-to-deploy Hitachi server-storage-networking platform, Hitachi Unified Compute Platform. Amplify infrastructure performance with Hitachi and Microsoft SQL Server 2012 Fast Track Data Warehouse xVelocity in-memory technologies. Learn how your organization can extract 100 million+ records in 2 or 3 seconds versus the 30 minutes required previously. With SQL Server 2012 Fast Track Data Warehouse and Hitachi software, your organization will be able to leverage a data platform that processes any data anywhere. View this webcast to learn: how to reduce deployment time with ready-to-deploy solutions that have been engineered and pre-configured by Hitachi and validated by the Microsoft Fast Track Data Warehouse program; how Hitachi and Microsoft have optimized performance for your data warehouse requirements; and how your organization can realize immediate ROI from your data warehouse investment. For more information on Hitachi Unified Compute Platform please visit: http://www.hds.com/products/hitachi-unified-compute-platform/?WT.ac=us_mg_pro_ucp
Use cases for Hadoop and Big Data Analytics - InfoSphere BigInsights (Gord Sissons)
This presentation is from TDWI's event in Boston during the summer of 2014. IBM InfoSphere BigInsights is IBM's enterprise-grade Hadoop offering. It combines the best of open-source Hadoop with advanced capabilities, including Big SQL, that clients can optionally deploy to get to market faster with a variety of big data and analytic applications.
ADV Slides: Platforming Your Data for Success – Databases, Hadoop, Managed Ha... (DATAVERSITY)
Thirty years is a long time for a technology foundation to be as active as relational databases. Are their replacements here? In this webinar, we say no.
Databases have not sat around while Hadoop emerged. The Hadoop era generated a ton of interest and confusion, but is it still relevant as organizations deploy cloud storage like a kid in a candy store? We'll discuss which platforms to use for which data. This is a critical decision that can dictate two to five times additional work effort if it's a bad fit.
Drop the herd mentality. In reality, there is no “one size fits all” right now. We need to make our platform decisions amidst this backdrop.
This webinar will distinguish these analytic deployment options and help you platform 2020 and beyond for success.
ADV Slides: When and How Data Lakes Fit into a Modern Data Architecture (DATAVERSITY)
Whether to take data ingestion cycles off the ETL tool and the data warehouse or to facilitate competitive Data Science and building algorithms in the organization, the data lake – a place for unmodeled and vast data – will be provisioned widely in 2020.
Though it doesn’t have to be complicated, the data lake has a few key design points that are critical, and it does need to follow some principles for success. Avoid building the data swamp, but not the data lake! The tool ecosystem is building up around the data lake and soon many will have a robust lake and data warehouse. We will discuss policy to keep them straight, send data to its best platform, and keep users’ confidence up in their data platforms.
Data lakes will be built in cloud object storage. We’ll discuss the options there as well.
Get this data point for your data lake journey.
The Most Trusted In-Memory Database in the World - Altibase (Altibase)
Life is a database. How you manage data defines your business. ALTIBASE HDB, with its hybrid architecture, combines the extreme speed of an in-memory database with the storage capacity of an on-disk database in a single unified engine.
ALTIBASE® HDB™ is the only Hybrid DBMS in the industry that combines an in-memory DBMS with an on-disk DBMS, with a single uniform interface, enabling real-time access to large volumes of data, while simplifying and revolutionizing data processing. ALTIBASE XDB is the world’s fastest in-memory DBMS, featuring unprecedented high performance, and supports SQL-99 standard for wide applicability.
Altibase is provider of In-Memory data solutions for real-time access, analysis and distribution of high volumes of data in mission-critical environments.
Please visit our website (www.altibase.com) to learn more about our products and read more about our case studies. Or contact us at info@altibase.com. We look forward to helping you!
ADV Slides: The Evolution of the Data Platform and What It Means to Enterpris... (DATAVERSITY)
Thirty years is a long time for a technology foundation to be as active as relational databases. Are their replacements here?
In this webinar, we look at this foundational technology for modern Data Management and show how it evolved to meet the workloads of today, as well as when other platforms make sense for enterprise data.
[db tech showcase Tokyo 2017] C37: MariaDB ColumnStore analytics engine: use... (Insight Technology, Inc.)
MariaDB ColumnStore is the analytics engine for MariaDB. This talk introduces the product and its use cases, and previews the new features coming in the next major release, 1.1.
Bridging the Last Mile: Getting Data to the People Who Need It (APAC) (Denodo)
Watch full webinar here: https://bit.ly/34iCruM
Many organizations are embarking on strategically important journeys to embrace data and analytics. The goal can be to improve internal efficiencies, improve the customer experience, drive new business models and revenue streams, or – in the public sector – provide better services. All of these goals require empowering employees to act on data and analytics and to make data-driven decisions. However, getting data – the right data at the right time – to these employees is a huge challenge, and traditional technologies and data architectures are simply not up to the task. This webinar will look at how organizations are using Data Virtualization to quickly and efficiently get data to the people who need it.
Attend this session to learn:
- The challenges organizations face when trying to get data to the business users in a timely manner
- How Data Virtualization can accelerate time-to-value for an organization’s data assets
- Examples of leading companies that used data virtualization to get the right data to the users at the right time
Dell Solutions Tour 2014
Jan Bjelde, Manager, Enterprise Solution Group, Dell Norway
Arild Hansen, System Engineer, Server, Dell Norway
Solutions for the software-defined and converged data center
A walkthrough of the Dell VRTX, which may be the world's smallest converged infrastructure: a complete converged and virtual data center, and a component in a Software Defined Storage (SDS) solution. "The software-defined data center (SDDC) turns traditional enterprise solutions upside down. Storage moves onto server platforms (SDS). Network software moves to virtual platforms (SDN+NFV). Infrastructure is simplified and delivered either as stacks of converged infrastructure or as standardized server platforms (hyper-convergence). Dell offers both modernization strategies and solutions for the modern data center. Dell has many new products optimized for the software-defined data center, for converged infrastructure in the virtual data center, or for SAP- or Oracle-type workloads."
Event-Driven Architecture Masterclass: Engineering a Robust, High-performance... (ScyllaDB)
Discover how to avoid common pitfalls when shifting to an event-driven architecture (EDA) in order to boost system recovery and scalability. We cover Kafka Schema Registry, in-broker transformations, event sourcing, and more.
Engineering Machine Learning Data Pipelines Series: Streaming New Data as It ... (Precisely)
Tackling the challenge of designing a machine learning model and putting it into production is the key to getting value back – and the roadblock that stops many promising machine learning projects. After the data scientists have done their part, engineering robust production data pipelines has its own set of challenges. Syncsort software helps the data engineer every step of the way.
Building on the process of finding and matching duplicates to resolve entities, the next step is to set up a continuous streaming flow of data from data sources so that as the sources change, new data automatically gets pushed through the same transformation and cleansing data flow – into the arms of machine learning models.
Some of your sources may already be streaming, but the rest are sitting in transactional databases that change hundreds or thousands of times a day. The challenge is that you can't affect the performance of data sources that run key applications, so putting something like database triggers in place is not the best idea. Using Apache Kafka or similar technologies as the backbone for moving data around doesn't solve the problem of grabbing changes from the source, pushing them into Kafka, and consuming the data from Kafka for processing. If something unexpected happens, like connectivity being lost on either the source or the target side, you don't want to have to fix it or start over because the data is out of sync.
View this 15-minute webcast on-demand to learn how to tackle these challenges in large scale production implementations.
How KeyBank Used Elastic to Build an Enterprise Monitoring Solution (Elasticsearch)
KeyBank is using an iterative design approach to scale their end-to-end enterprise monitoring system with Kafka and Elasticsearch at its core. See how they did it and the lessons learned along the way.
Real World Use Cases and Success Stories for In-Memory Data Grids (TIBCO Acti... (Kai Wähner)
A lot of data grid products are available. TIBCO ActiveSpaces, Oracle Coherence, Infinispan, IBM WebSphere eXtreme Scale, Hazelcast, Gigaspaces, GridGain, Pivotal Gemfire to name most of the important ones. Not SAP HANA!
The goal of my talk was not very technical. Instead, I discussed several different real world use cases and success stories for using in-memory data grids. Here is the abstract for my talk:
NoSQL is not just about different storage alternatives such as document stores, key-value stores, graphs, or column-based databases. The hardware is also getting much more important. Besides common disks and SSDs, enterprises are beginning to use in-memory storage more and more, because a distributed in-memory data grid provides very fast data access and updates. While its performance will vary depending on multiple factors, it is not uncommon for it to be 100 times faster than corresponding database implementations. For this reason and others described in this session, in-memory computing is a great solution for lifting the burden of big data, reducing reliance on costly transactional systems, and building highly scalable, fault-tolerant applications. The session begins with a short introduction to in-memory computing. Afterwards, different frameworks and product alternatives are discussed for implementing in-memory solutions. Finally, the main part of the session shows several real-world use cases where in-memory computing delivers business value by supercharging the infrastructure.
Customer value analysis of big data products (Vikas Sardana)
Business value analysis through a Customer Value Model for software technology choices, with a case study of a big data use case from the mobile advertising industry.
N1QL is a developer favorite because it's SQL for JSON. Developers' lives are going to get easier with the upcoming N1QL features. We have exciting features in many areas, from language to performance, indexing to search, and tuning to transactions. This session will preview the new features for both new and advanced users.
N1QL+GSI: Language and Performance Improvements in Couchbase 5.0 and 5.5 (Keshav Murthy)
N1QL gives developers and enterprises an expressive, powerful, and complete language for querying, transforming, and manipulating JSON data. We’ll begin this session with a brief overview of N1QL and then explore some key enhancements we’ve made in the latest versions of Couchbase Server. Couchbase Server 5.0 has language and performance improvements for pagination, index exploitation, integration, index availability, and more. Couchbase Server 5.5 will offer even more language and performance features for N1QL and global secondary indexes (GSI), including ANSI joins, aggregate performance, index partitioning, auditing, and more. We’ll give you an overview of the new features as well as practical use case examples.
XLDB Lightning Talk: Databases for an Engaged World: Requirements and Design... (Keshav Murthy)
Traditional databases have been designed for systems of record and analytics. Modern enterprises have orders of magnitude more interactions than transactions. Couchbase Server is a rethinking of the database for interactions and engagements, called Systems of Engagement. Memory today is much cheaper than disks were when traditional databases were designed back in the 1970s, and networks are much faster and more reliable than ever before. Application agility is also an extremely important requirement. Today's Couchbase Server is a memory- and network-centric, shared-nothing, auto-partitioned, distributed NoSQL database system that offers both key-based and secondary-index-based data access paths as well as API- and query-based data access capabilities. This lightning talk gives you an overview of the requirements posed by next-generation database applications and an approach to implementation, including "Multi-Dimensional Scaling".
Couchbase 5.5: N1QL and Indexing features (Keshav Murthy)
This deck contains a high-level overview of the N1QL and indexing features in Couchbase 5.5: ANSI joins, hash joins, index partitioning, grouping and aggregation performance, auditing, query performance features, and infrastructure features.
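As a rough sketch of what the ANSI join syntax listed above looks like (the `customers`/`orders` buckets and field names here are hypothetical, not taken from the deck):

```sql
-- ANSI join (Couchbase 5.5+): join on an arbitrary expression,
-- not only on document keys as in earlier lookup joins.
SELECT c.name, o.total
FROM customers c
JOIN orders o ON c.id = o.customer_id
WHERE o.total > 100;
```

Hash joins and index partitioning, also listed above, change how such a join executes rather than how it is written.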
N1QL (SQL for JSON) has a built-in rule-based optimizer. In Couchbase 5.0, N1QL's optimizer has a number of improvements for resource utilization and performance. This deck by Couchbase Principal Engineer Sitaram describes those improvements.
Mindmap: Oracle to Couchbase for developers (Keshav Murthy)
This deck provides a high-level comparison between Oracle and Couchbase: Architecture, database objects, types, data model, SQL & N1QL statements, indexing, optimizer, transactions, SDK and deployment options.
Queries need indexes to speed up execution and optimize resource utilization. Which indexes should you create, and which rules should you follow to create the right indexes for your workload? This presentation gives those rules.
N1QL = SQL + JSON. N1QL gives developers and enterprises an expressive, powerful, and complete language for querying, transforming, and manipulating JSON data. We begin with a brief overview. Couchbase 5.0 has language and performance improvements for pagination, index exploitation, integration, and more. We’ll walk through scenarios, features, and best practices.
From SQL to NoSQL: Structured Querying for JSON (Keshav Murthy)
Can SQL be used to query JSON? SQL is the universally known structured query language, used for well-defined, uniformly structured data, while JSON is the lingua franca of flexible data management, used to define complex, variably structured data objects.
Yes! SQL can most definitely be used to query JSON with Couchbase's SQL query language for JSON, called N1QL (pronounced "nickel").
In this session, we will explore how N1QL extends SQL to provide the flexibility and agility inherent in JSON while leveraging the universality of SQL as a query language.
We will discuss utilizing SQL to query complex JSON objects that include arrays, sets and nested objects.
You will learn about the powerful query expressiveness of N1QL, including the latest features that have been added to the language. We will cover how using N1QL can solve your real-world application challenges, based on the actual queries of Couchbase end-users.
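As an illustrative sketch of querying a nested array of the kind described above (the `travel` bucket and its fields are hypothetical examples, not from the session):

```sql
-- UNNEST flattens the schedule array: the query produces
-- one output row per array element, which can then be filtered.
SELECT t.airline, s.day, s.flight
FROM travel t
UNNEST t.schedule AS s
WHERE s.day = 3;
```

The complementary `ANY ... SATISFIES ... END` form tests for a matching element without flattening the array.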
Tuning for Performance: Indexes & Queries (Keshav Murthy)
There are three things important in databases: performance, performance, performance. From a simple query fetching a document to a query joining millions of documents, designing the right data models and indexes is important. There are many indexes you can create, and many options you can choose for each index. This talk will help you understand tuning N1QL queries, exploiting various types of indexes, analyzing system behavior, and sizing indexes correctly.
Understanding N1QL Optimizer to Tune Queries (Keshav Murthy)
Every flight has a flight plan. Every query has a query plan. You must have seen its text form, called EXPLAIN PLAN. The query optimizer is responsible for creating this query plan for every query, trying to make it optimal. In Couchbase, the query optimizer has to choose the best index for the query, decide which predicates to push down to index scans, create appropriate spans (scan ranges) for each index, understand the sort (ORDER BY) and pagination (OFFSET, LIMIT) requirements, and create the plan accordingly. When you think there is a better plan, you can hint the optimizer with USE INDEX. This talk will teach you how the optimizer selects indexes, index scan methods, and joins. It will teach you how to analyze the optimizer's behavior using the EXPLAIN plan and how to change the choices the optimizer makes.
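A minimal sketch of the two tools the abstract describes, EXPLAIN and USE INDEX (the bucket, field, and index names are hypothetical):

```sql
-- Ask the optimizer to print its plan instead of executing the query:
EXPLAIN SELECT name FROM users WHERE age > 21 ORDER BY age LIMIT 10;

-- Hint a specific GSI index when you believe the optimizer's
-- choice is not the best one:
SELECT name FROM users USE INDEX (idx_users_age USING GSI)
WHERE age > 21;
```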
Utilizing Arrays: Modeling, Querying and Indexing (Keshav Murthy)
Arrays can be simple; arrays can be complex. JSON arrays give you a method to collapse the data model while retaining structure flexibility. Arrays of scalars, objects, and arrays are common structures in a JSON data model. Once you have this, you need to write queries to update and retrieve the data you need efficiently. This talk will discuss modeling and querying arrays. Then, it will discuss using array indexes to help run those queries on arrays faster.
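A sketch of the array-index pattern the talk covers, assuming a hypothetical `travel` bucket whose documents hold a `schedule` array:

```sql
-- Index the individual elements of the array:
CREATE INDEX idx_sched_day ON travel
  (DISTINCT ARRAY s.day FOR s IN schedule END);

-- An ANY ... SATISFIES predicate over the same expression
-- can then be answered from the array index:
SELECT airline FROM travel
WHERE ANY s IN schedule SATISFIES s.day = 3 END;
```

The index key expression and the query predicate must line up for the array index to qualify.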
N1QL supports select, join, project, nest, and unnest operations on flexible-schema documents represented in JSON.
Couchbase 4.5 enhances data modeling and query flexibility.
In a parent-child relationship where child documents point to the parent document, you join from child to parent. But how would you join from parent to child when the parent does not contain a reference to the child? And how would you improve the performance of such a query? This presentation explains the syntax and execution of the query.
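The parent-to-child case is what Couchbase 4.5's index join addresses; a hedged sketch with hypothetical buckets, where only the child (`orders`) holds the reference:

```sql
-- An index on the child's reference to the parent is required:
CREATE INDEX idx_orders_cust ON orders(customer_id);

-- Index join: start FROM the parent even though only the
-- child documents carry the join key.
SELECT c.name, o.total
FROM customers c
JOIN orders o ON KEY o.customer_id FOR c;
```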
Bringing SQL to NoSQL: Rich, Declarative Query for NoSQL (Keshav Murthy)
Abstract
NoSQL databases bring the benefits of schema flexibility and elastic scaling to the enterprise. Until recently, these benefits have come at the expense of giving up rich declarative querying as represented by SQL.
In today's world of agile business, developers and organizations need the benefits of both NoSQL and SQL in a single platform. NoSQL (document) databases provide schema flexibility, fast lookup, and elastic scaling. SQL-based querying provides expressive data access and transformation; separation of querying from modeling and storage; and a unified interface for applications, tools, and users.
Developers need to deliver applications that can easily evolve, perform, and scale. Otherwise, the cost, effort, and delay in keeping up with changing business needs will become significant disadvantages. Organizations need sophisticated and rapid access to their operational data in order to maintain insight into their business. This access should support both pre-defined and ad-hoc querying, and should integrate with standard analytical tools.
This talk will cover how to build applications that combine the benefits of NoSQL and SQL to deliver agility, performance, and scalability. It includes:
- N1QL, which extends SQL to JSON
- JSON data modeling
- Indexing and performance
- Transparent scaling
- Integration and ecosystem
You will walk away with an understanding of the design patterns and best practices for effective utilization of NoSQL document databases - all using open-source technologies.
SQL for JSON: Rich, Declarative Querying for NoSQL Databases and Applications (Keshav Murthy)
In today's world of agile business, Java developers and organizations benefit when JSON-based NoSQL databases and SQL-based querying come together. NoSQL provides schema flexibility and elastic scaling. SQL provides expressive, independent data access. Java developers need to deliver apps that readily evolve, perform, and scale with changing business needs. Organizations need rapid access to their operational data, using standard analytical tools, for insight into their business. In this session, you will learn to build apps that combine NoSQL and SQL for agility, performance, and scalability. This includes:
• JSON data modeling
• Indexing
• Tool integration
Introducing N1QL: New SQL Based Query Language for JSON (Keshav Murthy)
This session introduces N1QL and sets the stage for the rich selection of N1QL-related sessions at Couchbase Connect 2015. N1QL is SQL for JSON, extending the querying power of SQL with the modeling flexibility of JSON. In this session, you will get an introduction to the N1QL language, architecture, and ecosystem, and you will hear the benefits of N1QL for developers and for enterprises.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
1. Operational Analytics on Informix: Architecture and Performance Evaluation
Jantz Tran, Intel – Database Performance
Keshava Murthy, IBM Informix Development
2. Agenda
• Operational analytics
  – What is it? Requirements & challenges.
• Operational analytics with Informix
  – Complete lifecycle discussion.
• Intel® Technology & Roadmap
  – Scaling on the Xeon® E7 Platform
• Performance work and analysis for Informix on Intel
3. What is Operational analytics?
• Operational analytics
  – Focus on excellence in operations
  – Operations of most organizations are complex & multi-faceted
    • Supply chain, production processes, people, partners, etc.
    • HR, Sales, IT, etc.
• More than efficiency, operational excellence needs effective, smarter processes
• Customized experience, repeatable at scale
4. Challenges in Operational Excellence
• Respond quickly to shifts in reality
• React to competition quickly
• Continuously lower the cost
• IT Challenge:
  – handle the volume and response times a modern business requires
  – or use people to provide the flexibility to respond to a developing situation
• False choice
  – The system should handle volume and velocity & be flexible
6. “Most discussions of decision making assume only senior executives make decisions or that only senior executives’ decisions matter. This is a dangerous mistake.”
-- Peter Drucker
7. • What to change?
• What to change to?
• How to cause the change?
9. Business Analytics
• Traditionally, business analytics focuses on customer opportunity and risk management
• Quickly detect shifts in reality
• Make reaction part of routine operations
10. The Changing World of BI Analytics
• Advanced Analytics
  – Improved analytic tools and techniques for statistical and predictive analytics
  – New tools for exploring and visualizing new varieties of data
  – Operational intelligence with embedded BI services and BI automation
• Data Management
  – Analytic relational database systems that offer improved price/performance and libraries of analytic functions
  – In-memory computing for high performance
  – Non-relational systems such as Hadoop for handling new types of data
  – Stream processing/CEP systems for analyzing in-motion data
15. Motivation
• Data warehouse query performance without perspiration
• Consistent query performance without tuning efforts
• More questions, faster answers, better data-driven decisions & business insights
• SKECHERS: acceleration from 60x to 1400x – average acceleration of 450x
16. Informix Ultimate Warehouse Edition
Components: Informix Database Server, Informix Warehouse Accelerator, BI Applications, IBM Smart Analytics Studio
Step 1. Install, configure, start Informix
Step 2. Install, configure, start Accelerator
Step 3. Connect Studio to Informix & add accelerator
Step 4. Design, validate, deploy data mart
Step 5. Load data to accelerator
Ready for Queries
17. Informix Warehouse Accelerator – 11.70.FC5, MACH11 Support
Components: Informix Primary (with SDS1, SDS2, HDR Secondary, and RSS nodes), Informix Warehouse Accelerator, BI Applications, IBM Smart Analytics Studio
Step 1. Install, configure, start Informix
Step 2. Install, configure, start Accelerator
Step 3. Connect Studio to Informix & add accelerator
Step 4. Design, validate, deploy data mart from Primary, SDS, HDR, or RSS
Step 5. Add IWA to sqlhosts; load data to the Accelerator from any node
Ready for Queries
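Step 5 above requires an entry for the accelerator in the Informix sqlhosts file so every node can reach it. A minimal sketch, assuming the accelerator listens locally on port 21022 (server names, host, and port are hypothetical; `dwsoctcp` is the connection protocol used for accelerator entries — check the IWA documentation for your version):

```
# $INFORMIXSQLHOSTS -- fields: dbservername  nettype   hostname          servicename/port
ids_prim    onsoctcp   db1.example.com   9088     # Informix instance (hypothetical)
my_dwa      dwsoctcp   127.0.0.1         21022    # IWA accelerator entry (hypothetical)
```

With this entry present on each node, the data mart can be loaded to the accelerator from any node in the cluster.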
18. Stages & Options for data loading to IWA
• Design the data mart by workload analysis or manually
• Lifecycle: Deploy → Deployed datamart → Load → Datamart in USE → Disable/Enable → Datamart Disabled → Drop → Datamart Deleted
• Refresh options (online operations): partition-based refresh, trickle-feed refresh
• Typical load rates: 300 GB/hr; 10 GB in under 3 minutes
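The deploy/load/disable/drop stages above are driven by administrative functions executed against the database. A minimal sketch, assuming an accelerator named `my_dwa` and a mart named `sales_mart` (both names hypothetical; the function names follow the IWA admin API, so verify exact signatures against your version's documentation):

```sql
-- Load (or refresh) the deployed mart; 'NONE' requests no table locking
EXECUTE FUNCTION ifx_loadMart('my_dwa', 'sales_mart', 'NONE');

-- Take the mart out of use and bring it back (disable / enable)
EXECUTE FUNCTION ifx_setMart('my_dwa', 'sales_mart', 'OFF');
EXECUTE FUNCTION ifx_setMart('my_dwa', 'sales_mart', 'ON');

-- Remove the mart when it is no longer needed
EXECUTE FUNCTION ifx_dropMart('my_dwa', 'sales_mart');
```

Partition-based and trickle-feed refresh avoid full reloads by updating only changed partitions or streaming changes, which is what makes the 10 GB under-3-minutes figure practical.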
19. Scaling on Westmere: Data Warehouse Setup
• TPC-DS schema; web_sales
• Mart data size: 1 terabyte
• web_sales fact table: 4.1 billion rows, 34 partitions
• Dimensions: 13, non-partitioned
(Diagram: per-table row counts, ranging from 22 rows for the smallest dimension through 73,049; 86,400; 7,200; 3,600; 1,800; 360,000; 1.9 million; 15 million; and 30 million, up to 4.1 billion rows in the fact table.)
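A fact table with the partition layout above is declared with Informix's fragmentation clause. A hedged sketch with an abbreviated column list (dbspace names, the interval size, and the boundary value are illustrative, not taken from the benchmark setup):

```sql
-- Abbreviated TPC-DS web_sales fact table, fragmented by the date surrogate key.
-- Interval fragmentation (Informix 11.70+) creates new fragments automatically
-- as rows arrive outside existing ranges.
CREATE TABLE web_sales (
    ws_sold_date_sk  INTEGER,
    ws_item_sk       INTEGER NOT NULL,
    ws_quantity      INTEGER,
    ws_net_paid      DECIMAL(7,2)
)
FRAGMENT BY RANGE (ws_sold_date_sk)
    INTERVAL (1000) STORE IN (dbs1, dbs2)
    PARTITION p0 VALUES < 2450815 IN dbs1;
```

Fragment elimination lets the optimizer skip partitions whose key range falls outside a query's date predicate, which matters at 4.1 billion rows.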
25. INTEL/IWA: Breakthrough technologies for performance
1. Large memory support
   64-bit computing; System x with MAX5 supports up to 6TB on a single SMP box; up to 640GB on each node of a BladeCenter. IWA: compress a large dataset and keep it in memory; avoid IO entirely.
2. Large on-chip cache
   L1 cache 64KB per core, L2 cache 256KB per core, L3 cache about 4-12 MB; additional Translation Lookaside Buffer (TLB). IWA: new algorithms to avoid pipeline flushing and to cache hash tables in L2/L3 cache.
3. Frequency partitioning
   IWA: enabler for effective parallel access to the compressed data during scans; horizontal and vertical partition elimination.
4. Virtualization performance
   Lower overhead: core micro-architecture enhancements, EPT, VPID, and end-to-end HW assist. IWA: helps Informix and IWA run and perform seamlessly in virtualized environments.
5. Hyper-Threading
   2x logical processors; increases processor throughput and overall performance of threaded software. IWA: does not exploit this, since the software is written to avoid pipeline flushing.
6. Single Instruction Multiple Data (SIMD)
   Specialized instructions for manipulating 128-bit data simultaneously. IWA: compresses the data into a deep columnar format optimized to exploit SIMD; used in parallel predicate evaluation during scans.
7. Multi-core, multi-node environment
   Nehalem has 8 cores and Westmere 10 cores, and this trend is expected to continue. IWA: parallelize scan, join, and group operations; keep copies of dimensions to avoid cross-node synchronization.
26. Tick-Tock Development Model: Sustained Microprocessor Leadership
• Intel® Core™ Microarchitecture
  – TOCK: Merom, 65nm (new microarchitecture)
  – TICK: Penryn, 45nm (new process technology)
• Intel® Microarchitecture Codename Nehalem
  – TOCK: Nehalem, 45nm (new microarchitecture)
  – TICK: Westmere, 32nm (new process technology)
• Intel® Microarchitecture Codename Sandy Bridge
  – TOCK: Sandy Bridge, 32nm (new microarchitecture)
  – TICK: Ivy Bridge, 22nm (new process technology)
• Intel® Microarchitecture Codename Haswell
  – TOCK: Haswell, 22nm (new microarchitecture)
  – TICK: Future, 14nm (new process technology)
27. Intel® Xeon® Processor Family for Business (in order of increasing capability)
• Small Business: economical and more dependable vs. desktop
• Entry Servers and Workstations: more features and performance than traditional desktop systems
• Mainstream Enterprise: best combination of performance, power efficiency, and cost
• Cloud Computing: efficient, secure, and open platforms for Internet datacenters and IaaS
• High Performance Computing & Workstations: bandwidth-optimized for high performance analytics & visualization
• Enterprise Server: versatility for infrastructure apps (up to 4S)
• Scalable Enterprise: top-of-the-line performance, scalability, and reliability
• Cloud Computing: highest virtualization density and advanced reliability for private cloud
• Mission Critical: performance and reliability for the most business-critical workloads with outstanding economics
• High Performance Computing: greater scaling and memory capacity
28. Intel® Xeon® Processor E7-8800/4800/2800 Product Families
Building on Xeon® 7500 leadership capabilities:
• More Performance
  – 10 cores / 20 threads
  – 30MB of last-level cache
• More Efficient
  – More performance within the same max CPU TDP as Xeon 7500
  – Lower partial-active & idle power via Intel Intelligent Power Technology(2)
  – Support for Low-Voltage DIMMs(3)
  – Reduced-power memory buffers(4)
• More Expandable
  – Supports 32GB DDR3 DIMMs (2TB per 4-socket system)(1)
• More Security & RAS
  – Security: Intel® Advanced Encryption Standard New Instructions (AES-NI); Intel® Trusted Execution Technology (TXT)
  – Reliability, Availability, Serviceability: Enhanced DRAM Double Device Data Correction; Fine Grained Memory Mirroring
Footnotes:
1. Up to 64 slots per standard 4-socket system x 32GB/DIMM = 2TB
2. Uses similar core and package C6 power states enabled on Intel Xeon 5500/5600 series processors. Requires OS support.
3. Savings dependent on workload and configuration.
4. Memory buffer power savings of up to 1.3W active and 3W idle per buffer, per Intel estimates. Slightly more savings when used with LV-DIMMs.
Delivers more performance, expandability, and RAS while improving energy efficiency.
29. Intel® Xeon® 7500/E7 8-Socket Configuration (IBM® System x3850 X5)
• 4+4 (8S): up to 10 cores and 2.4 GHz per CPU
• Supports 8-socket mode by combining 2 systems via external QPI links
• Memory configuration: 4TB in an 8-socket server; 6TB in 8-socket + MAX5
• Continued 1066MHz support
30. Advanced Reliability Starts With Silicon
Intel® Xeon® processor E7 family RAS capabilities:
Memory
• Inter-socket and intra-socket memory mirroring; mirrored memory board hot add/remove
• Intel® Scalable Memory Interconnect (Intel® SMI) lane failover, clock failover, and packet retry
• Memory address parity; failed DIMM isolation
• Memory board hot add/remove; dynamic memory migration*; OS memory on-lining*
• Failover from, and recovery beyond, Single DRAM Device Failure (SDDC), including SDDC plus random bit error
• Enhanced DRAM Double Device Data Correction; fine-grained memory mirroring
• Memory DIMM and rank sparing
• Memory thermal throttling; demand and patrol scrubbing
CPU/Socket
• Machine Check Architecture (MCA) recovery (MCA-R); Corrected Machine Check Interrupt (CMCI)
• Corrupt data containment mode; viral mode
• OS-assisted processor socket migration*; OS CPU on-lining*
• CPU board hot add at QPI
• Electronically isolated (static) partitioning
• Single core disable for fault-resilient boot
I/O Hub
• Physical IOH hot add; OS IOH on-lining*; PCI-E hot plug
Intel® QuickPath Interconnect
• Intel QPI packet retry; QPI protocol protection via CRC (8-bit or 16-bit rolling)
• QPI clock failover; QPI self-healing
Advanced reliability features work to maintain data integrity.
31. Roadmap: 2012 → 2013/Future
• 2S Efficient Performance: Intel® Xeon® processor E5-2600 product family – 2 sockets, up to 8C/16T per socket, up to 20MB shared cache, “Sandy Bridge” microarchitecture
• 4S Efficient Performance: Intel® Xeon® processor E5-4600 product family – 4 sockets, up to 8C/16T per socket, up to 20MB shared cache, “Sandy Bridge” microarchitecture
• Expandable: Intel® Xeon® processor E7-8800/4800/2800 product families – 2-8 sockets, up to 10C/20T per socket, up to 30MB shared cache, “Westmere” microarchitecture
• Future: Intel® microarchitecture codename Ivy Bridge
34. “Real-world” basis for TPC-E
Modeled business (customer/sponsor provided): Presentation Services ↔ Application and Business Logic Services ↔ Database Services, connected over the network
Examples of user interfaces: workstation, laptop, hand-held, cell phone
Example of external business: Stock Market Exchange (reached over the network)
37. OLAP queries
SELECT t0.c0 AS ct_dtskey,
       t0.c1 AS ct_amt,
       t0.c1 AS c3,
       t0.c2 AS c4,
       MIN(t0.c3) OVER (PARTITION BY t0.c0) AS ct_amt2
FROM (SELECT DISTINCT cash_transaction.ct_dts AS c0,
             SUM(cash_transaction.ct_amt)    OVER (PARTITION BY cash_transaction.ct_dts) AS c1,
             COUNT(cash_transaction.ct_amt)  OVER (PARTITION BY cash_transaction.ct_dts) AS c2,
             STDDEV(cash_transaction.ct_amt) OVER (PARTITION BY cash_transaction.ct_dts) AS c3
      FROM cash_transaction cash_transaction
      WHERE DATE(ct_dts) BETWEEN DATE('2005-01-04') AND DATE('2005-01-05')
        AND ct_name LIKE 'Stop-Loss%') t0;
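Since every window in the inner query is partitioned by the same key (`ct_dts`) and the DISTINCT collapses the result to one row per key, the same per-timestamp aggregates can be computed with a plain GROUP BY. A sketch of the equivalent form, modulo the duplicated output columns (same table and predicates as above; output aliases are mine):

```sql
-- One row per transaction timestamp, with the same SUM/COUNT/STDDEV values
SELECT ct_dts          AS ct_dtskey,
       SUM(ct_amt)     AS ct_amt,
       COUNT(ct_amt)   AS ct_count,
       STDDEV(ct_amt)  AS ct_stddev
FROM cash_transaction
WHERE DATE(ct_dts) BETWEEN DATE('2005-01-04') AND DATE('2005-01-05')
  AND ct_name LIKE 'Stop-Loss%'
GROUP BY ct_dts;
```

The windowed form is used here deliberately: it exercises the OLAP window-function support that IWA accelerates in 12.10.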
38. Intel® Xeon® E7-8870: Test Setup
• Hardware setup
  – Intel® Xeon® E7-8870 processor – 4-socket (40C/80T) and 8-socket (80C/160T) configurations
  – 2.4 GHz, 30MB last-level shared cache
  – 10 TB storage
  – 2 TB RAM
• Software setup
  – Informix and Informix Warehouse Accelerator: v11.70.FC7 and Informix 12.10
  – Both Informix and IWA on the same machine
39. Data Setup
• Data loading
  – 300 GB starting data set
  – Data size is about nnn GB including indexes
    • TPC-E is heavily indexed for performance
  – As we run the OLTP workload, the data size increases
40. IDS 12.10 on Intel Westmere: Multi-user scaling
(Chart: query time in seconds vs. concurrent user counts of 1, 2, 4, 8, 16, 32, and 50, for four configurations: 4-socket without HT, 4-socket with HT, 8-socket without HT, 8-socket with HT.)
41. IDS 12.10 on Intel Westmere: Multi-user scaling
(Chart: number of queries per hour vs. concurrent user counts of 1, 2, 4, 8, 16, 32, and 50, for the same four configurations: 4-socket and 8-socket, each with and without HT.)
47. IBM Informix* Database: Scale-up Optimized for Intel Architecture
• Baseline: Intel Xeon processor E7-4870, Informix* v11.7
• Up to 45% faster (1.45x): Intel® Xeon® processor E7-8870, Informix* v11.7
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
*Other brands and names are the property of their respective owners
48. IBM Informix* Database: Scale-up Optimized for Intel Architecture
• Baseline: Intel Xeon processor E7-4870, Informix* v12.1
• Up to 60% faster (1.6x): Intel® Xeon® processor E7-8870, Informix* v12.1
49. IBM Informix* Database: Scale-up Optimized for Intel Architecture
• Baseline: Intel Xeon processor E7-4870, Informix* v11.7
• Intel® Xeon® processor E7-8870, Informix* v11.7
• Up to 540% (5.4x): Intel Xeon processor E7-4870, Informix* v12.1
• Up to 550% (5.5x): Intel Xeon processor E7-8870, Informix* v12.1
52. Informix Publications
• Bulletin of the Technical Committee on Data Engineering: March 2012, Vol. 35 No. 1
• Real Time Business Intelligence. September 2, 2011 – Seattle, United States
• IBM Data Management Magazine: Supercharging the data warehouse while keeping the costs down
• 2012 Bloor Report: IBM Informix in hybrid workload environments
• 2012 Ovum Analyst report: Informix Accelerates Analytic Integration into OLTP
• DBTA Article: Empowering Business Analysts with Faster Insights
• http://youtu.be/xJd8M-fbMI0
53. Jantz Tran Intel jantz.c.tran@intel.com
Keshava Murthy IBM rkeshav@us.ibm.com
55. Intel – Legal Disclaimers: Performance
• Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing. For more information on performance tests and on the performance of Intel products, go to http://www.intel.com/performance/resources/benchmark_limitations.htm or reference www.intel.com/software/products.
• Intel does not control or audit the design or implementation of third party benchmarks or Web sites referenced in this document. Intel encourages all of its customers to visit the referenced Web sites or others where similar performance benchmarks are reported and confirm whether the referenced benchmarks are accurate and reflect performance of systems available for purchase.
• Relative performance is calculated by assigning a baseline value of 1.0 to one benchmark result, and then dividing the actual benchmark result for the baseline platform into each of the specific benchmark results of each of the other platforms, and assigning them a relative performance number that correlates with the performance improvements reported.
• INFORMATION IN THIS DOCUMENT IS PROVIDED “AS IS”. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO THIS INFORMATION INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
56. IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion.
Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision. The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code or functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any future features or functionality described for our products remains at our sole discretion.
Please Note:
Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.
57. Acknowledgements and Disclaimers (04/23/13)
Availability: References in this presentation to IBM products, programs, or services do not imply that they will be available in all countries in which IBM operates.
The workshops, sessions and materials have been prepared by IBM or the session speakers and reflect their own views. They are provided for informational purposes only, and are neither intended to, nor shall have the effect of being, legal or other guidance or advice to any participant. While efforts were made to verify the completeness and accuracy of the information contained in this presentation, it is provided AS-IS without warranty of any kind, express or implied. IBM shall not be responsible for any damages arising out of the use of, or otherwise related to, this presentation or any other materials. Nothing contained in this presentation is intended to, nor shall have the effect of, creating any warranties or representations from IBM or its suppliers or licensors, or altering the terms and conditions of the applicable license agreement governing the use of IBM software.
59. Do you have a great presentation topic that you’d like to share?
• We’re looking for dynamic, innovative and thought-provoking sessions
• Whether your proposal aims at sharpening skills, sharing best practices, or presenting new ideas and groundbreaking concepts, all proposals are welcome
• Visit the conference website to learn more
The Call for Speakers closes April 30! Hurry to submit your session!
60. Sign Up! Informix Usability Sandbox!
Help shape the future of Informix.
Influence Informix usability and functionality.
Share your experiences and feedback.
Usability Sandbox sessions in Santa Fe 3
April 22-24th, between 9am and 5pm
Sign-up at the IBM Information Table or find Justin McDavid.
*The first 20 participants will get a free IBM t-shirt!
61. Informix RFE (Request For Enhancement) Process – As Simple as 1, 2, 3
1. Submit from the IM RFE site – simply complete the RFE form and click Submit when ready
   • Many fields will be auto-filled as a convenience for you
   • Note that fields marked with the ‘key’ icon, e.g. Company Name and Business Justification, will be kept private for confidentiality purposes
   • Provide as much detail as possible in the Description, Use Case, and Business Justification fields to help the IBM team understand your requirement
2. View via Watchlist
   • Lists all the RFEs that you’re interested in
   • Simple to add an RFE via Search
3. Subscribe to email notifications
   • Specify ‘Opting in for email notifications’
   • Notified when any change occurs to any RFE on your watch list
YouTube: http://www.ibm.com/developerworks/rfe/execute?use_case=tutorials#tut2
Give it a shot! http://www.ibm.com/developerworks/rfe/
Editor's Notes
Slide Purpose: Show full systems and use as chance to highlight the Energy Efficiency enhancements in Intel® Xeon® processor E7 family The Xeon E7 family is designed and built upon Intel’s 32nm Nehalem micro-architecture, which allows us to deliver 25% more cores and cache providing more performance within same maximum TDP as the Xeon 7500 series. It also supports 16 DIMMs per socket, which equates to 2TB of memory for the 4-socket E7-4800 product family – allowing for increased expandability. The Xeon E7 family features energy efficiency technologies including the Intel® Intelligent Power Technology (IPT) which is a shared technology from Intel’s Efficient Performance product line. IPT reduces partial active and idle power in the CPU and memory. Xeon E7 also supports lower power memory as well as memory buffers which support both standard and LV-DIMMs. The Xeon processor E7 family not only includes all of the reliability, availability and serviceability (RAS) features of the previous generation such as machine check architecture-recovery but also includes additional memory error correction features such as Enhanced DRAM Double Device Data Correction (DDDC) and Fine Grained Memory Mirroring. DDDC is an improved memory RAS feature which allows for a 2nd memory error & replacement of DIMMs w/o crashing . Fine Grained Memory Mirroring provides protection against uncorrectable memory errors that would otherwise result in a platform failure and allows for more flexible memory mirroring configurations (allows memory mirroring of just a critical portion of memory, leaving the rest of memory un-mirrored). This enables more cost-effective mirroring by mirroring just the critical portion of memory versus the entire memory space. New security features such as Intel® Advanced Encryption Standard New Instructions (AES-NI) and Intel® Trusted Execution Technology (TXT) are also supported. 
These advanced security features within the Xeon processor E7 family work to maintain data integrity, accelerate encrypted transactions, and maximize business continuity.
The advantage of working together is multiplied when both hardware and software are improved. On 11.7, TPC-E schema, 268 GB database. Operational analytics queries: hand-written report queries and Cognos-generated queries for reports and widgets. Ran with 1, 2, 4, 8, 16, 32, and 50 user configurations. All the queries ran on Informix and IWA.
The advantage of working together is multiplied when both hardware and software are improved. On 11.7, TPC-E schema, 268 GB database. Operational analytics queries: hand-written report queries and Cognos-generated queries for reports and widgets. Ran with 1, 2, 4, 8, 16, 32, and 50 user configurations. All the queries ran on Informix and IWA. Used more complex queries (such as OLAP window functions) since we support them on IWA as of 12.10.
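For context on what "OLAP window functions" refers to in the note above, here is a minimal sketch of a typical report-style window query. This is an illustration only: SQLite is used as a stand-in engine (the benchmark ran on Informix with IWA), and the `trades` table and its columns are hypothetical, not taken from the TPC-E schema.

```python
import sqlite3

# In-memory database with a small hypothetical table of trade volumes.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE trades (symbol TEXT, trade_day INTEGER, volume INTEGER);
INSERT INTO trades VALUES
  ('IBM', 1, 100), ('IBM', 2, 150), ('IBM', 3, 120),
  ('MSFT', 1, 200), ('MSFT', 2, 180);
""")

# OLAP window function: running total of volume per symbol.
# PARTITION BY restarts the sum for each symbol; ORDER BY defines
# the running-total order within each partition.
rows = conn.execute("""
SELECT symbol, trade_day, volume,
       SUM(volume) OVER (PARTITION BY symbol ORDER BY trade_day)
         AS running_volume
FROM trades
ORDER BY symbol, trade_day
""").fetchall()

for row in rows:
    print(row)
```

The same `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` shape is standard SQL, which is why such queries can be pushed down to an accelerator like IWA rather than being evaluated row-by-row in the application.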
The advantage of working together is multiplied when both hardware and software are improved. On 11.7, TPC-E schema, 268 GB database. Operational analytics queries: hand-written report queries and Cognos-generated queries for reports and widgets that can be run on both Informix 11.7 and 12.1. Some of the reports run on Informix only on 11.7 and will run on Informix + IWA on 12.10. There are additional hash-join and other performance improvements; hence, in a multi-user environment, CPU utilization on 12.1 is better, resulting in > 500% improvement for the queries supported by both 11.70 and 12.1. Ran with 1, 2, 4, 8, 16, 32, and 50 user configurations. All the queries ran on Informix and IWA.
The slide shows both the scalability and performance of OLTP and OLAP on Informix (with IWA) in a mixed-workload environment. In this case, we ran a TPC-E (non-audited) OLTP workload concurrently with the OLAP workload described on the previous slides, on Informix 12.10. Observations on the mixed workload: OLTP performance decreased minimally as we increased the OLAP load (from 0 OLAP users to 1, 2, 3, 8, 16, and 32 OLAP users). OLAP performance scaled well whether running by itself or in the mixed-workload environment.
YouTube tutorial for RFE submit, view, and send out notification: http://www.ibm.com/developerworks/rfe/execute?use_case=tutorials#tut2
Note: Transcript for this video: http://www.ibm.com/developerworks/podcasts/demos/special-RFE-process-2/cm-int-special-RFE-process-2.html
What is different from the current requirements system?
- The requirements submitter interacts directly with Product Management; no need to involve Customer Support or a sales rep.
- Requirements go to the back-end system already used by Product Management and Development; there is no separate tracking system that is not "part of the process".
- Improved ability to monitor and manage requirements: watch lists, "me too", groups, voting.
- Crisply defined Service Level Agreements; compliance with SLAs will be monitored monthly by the Informix team.
- A consistent requirements system across IBM Software Group products.