MicroStrategy and Teradata have a long partnership in providing business intelligence capabilities. MicroStrategy is optimized to run on Teradata and leverages many Teradata features and extensions for performance and scalability. These include multi-pass SQL, bulk inserts, Teradata indexing, functions, and syntax. MicroStrategy also integrates with Teradata tools and provides additional functionality like middle-tier computations and caching.
MicroStrategy integrates with Microsoft SQL Server in several ways to optimize analytical queries:
1) MicroStrategy generates SQL Server-specific syntax and pushes down more than 120 functions to take advantage of SQL Server's analytic capabilities.
2) MicroStrategy uses multi-pass SQL and intermediate tables to help answer complex analytical questions, with options like global temporary tables and parallel query execution.
3) MicroStrategy supports key SQL Server features like parallel queries, indexed views, compression, and partitioning to improve performance.
This document provides information about a webinar on SQL Server 2016 Stretch Database presented by Antonios Chatzipavlis. The webinar covers an introduction to Stretch Database, its limitations and pricing, backup and restore of Stretch databases, and frequently asked questions. Antonios Chatzipavlis has over 30 years of experience working with computers and SQL Server. He is a Microsoft Certified Trainer and SQL Server Evangelist who runs the SQL School Greece training organization.
Industry leading
Build mission-critical, intelligent apps with breakthrough scalability, performance, and availability.
Security + performance
Protect data at rest and in motion. SQL Server has had the fewest reported vulnerabilities of any major database for six years running in the NIST vulnerabilities database.
End-to-end mobile BI
Transform data into actionable insights. Deliver visual reports on any device—online or offline—at one-fifth the cost of other self-service solutions.
In-database advanced analytics
Analyze data directly within your SQL Server database using R, the popular statistics language.
Consistent experiences
Whether data is in your datacenter, in your private cloud, or on Microsoft Azure, you’ll get a consistent experience.
SQL Server 2016 includes several new features such as columnstore indexes, in-memory OLTP, live query statistics, temporal tables, and row-level security. It also features improved managed backup functionality, setup support for multiple tempdb data files, and new ways to format and encrypt query results. Advanced capabilities like PolyBase and Stretch Database further enhance analytics and the management of historical data.
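As one illustration of the temporal tables mentioned above, a table can be declared as system-versioned so that SQL Server maintains row history automatically. This is a minimal sketch; the table, column, and history-table names are hypothetical:

```sql
-- A system-versioned (temporal) table: SQL Server keeps every
-- prior row version in the history table automatically.
CREATE TABLE dbo.Employee
(
    EmployeeID INT PRIMARY KEY,
    Salary     MONEY NOT NULL,
    ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));

-- Query the table as it looked at a point in time:
SELECT EmployeeID, Salary
FROM dbo.Employee
FOR SYSTEM_TIME AS OF '2016-06-01T00:00:00';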
This document summarizes new features in SQL Server 2016. It discusses improvements to columnstore indexes, in-memory OLTP, the query store, temporal tables, always encrypted, stretch database, live query statistics, row level security, and dynamic data masking. It provides links to documentation and demos for these features. It also suggests what may be included in future CTP releases and lists resources for learning more about SQL Server 2016.
This document summarizes new features in SQL Server 2016 for SQL Server Integration Services (SSIS), Master Data Services (MDS), Data Quality Services (DQS), Analysis Services (SSAS), and Reporting Services (SSRS). For SSIS, new features include auto-adjusting buffer size, an Azure feature pack, and incremental package deployment. For MDS, improvements include longer attribute names, composite indexes, and entity synchronization. For SSAS, enhancements focus on performance, consistency, and new DAX functions. For SSRS, additions center around treemap/sunburst charts, custom parameters, and HTML5 rendering.
SQL Server 2016 introduces new editions that provide varying levels of capabilities for different workloads. The key editions are Express, Standard, and Enterprise. Express is free and ideal for small applications. Standard provides core data management and business intelligence. Enterprise delivers comprehensive datacenter capabilities for mission critical workloads and advanced analytics. All editions now support new security features and hybrid cloud capabilities like stretch database.
Live Query Statistics and Query Store are new features in SQL Server 2016 that provide insights into query performance. Live Query Statistics allows users to view live execution plans and operator statistics to troubleshoot long-running or problematic queries. Query Store automatically captures query histories, plans, and runtime statistics to help users identify performance regressions and force previous high-performing plans. Both features aim to simplify performance troubleshooting and provide greater visibility into the query optimization and execution process.
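The Query Store workflow described above can be sketched in a few statements. This is a minimal sketch, assuming a hypothetical database name and example query/plan IDs:

```sql
-- Enable Query Store on a database (database name is hypothetical).
ALTER DATABASE SalesDB SET QUERY_STORE = ON;

-- Inspect captured queries, their text, and runtime statistics.
SELECT q.query_id, t.query_sql_text, rs.avg_duration
FROM sys.query_store_query q
JOIN sys.query_store_query_text t ON q.query_text_id = t.query_text_id
JOIN sys.query_store_plan p       ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats rs ON p.plan_id = rs.plan_id;

-- Force a previously well-performing plan for a regressed query
-- (IDs below are placeholders taken from the views above).
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;
```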
SQL Server 2016: Just a Few of Our DBA's Favorite Things (Hostway|HOSTING)
Join Rodney Landrum, Senior DBA Consultant for Ntirety, a division of HOSTING, as he demonstrates his favorite new features of the latest Microsoft SQL Server 2016 Service Pack 1.
During the accompanying webinar and slides, Rodney will touch on the following:
• A demo of his favorite new features in SQL Server 2016 and SP1 including:
o Query Store
o Database Cloning
o Dynamic Data Masking
o Create or Alter
• A review of Enterprise features that are now available in Standard edition
• New information in Dynamic Management Views and the SQL Server error log that will make your DBA's job easier.
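Two of the features in the list above can be sketched briefly. This is a hedged example with hypothetical table, column, and procedure names:

```sql
-- Dynamic Data Masking: obfuscate a column for non-privileged readers.
ALTER TABLE dbo.Customer
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

-- CREATE OR ALTER (new in SQL Server 2016 SP1): deploy a procedure
-- without first checking whether it already exists.
CREATE OR ALTER PROCEDURE dbo.GetCustomers
AS
    SELECT CustomerID, Email FROM dbo.Customer;
```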
This document provides an overview of auditing data access in SQL Server. It discusses various methods for auditing such as using common criteria, SQL Trace, DML triggers, temporal tables, and implementing SQL Server Audit. SQL Server Audit is described as the primary auditing tool in SQL Server that can track both server and database level events. Considerations for implementing and managing SQL Server Audit are also covered.
SQL Server 2016 is now in preview! The newest version promises to deliver real-time built-in advanced analytics, advanced security technology, hybrid cloud scenarios, and rich visualizations on mobile devices.
There are many great reasons to move to SQL Server 2016. However, if you are still running SQL Server 2005, you have another good motivator: the end-of-life clock for SQL Server 2005 is ticking down, and support ends on April 12, 2016.
In this deck we review the significant licensing changes introduced with SQL Server 2012. If our experience as a Microsoft Gold Certified Member has taught us anything, it is this: during migrations, many of our clients get outright lost when trying to figure out how many licenses they have or need. This often leads to under-deployment and, subsequently, serious compliance issues with Microsoft. And yes, in some cases identifying over-deployment means big savings back to your department.
Row-Level Security (RLS) enables row-level access restrictions in SQL Server. RLS uses predicate functions to define the security logic and filters rows for queries based on that logic. Security policies bind the predicate functions to tables, either as filter predicates that silently filter rows or as block predicates that prevent write operations. Best practices include keeping the security logic simple and placing it in a separate schema for maintenance. RLS has some limitations, including incompatibility with FILESTREAM and PolyBase.
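The predicate-function and security-policy mechanism just described can be sketched as follows. The schema, table, and column names are hypothetical; the predicate here simply restricts rows to the current database user:

```sql
-- An inline table-valued predicate function with SCHEMABINDING,
-- as RLS requires.
CREATE SCHEMA Security;
GO
CREATE FUNCTION Security.fn_SalesPredicate(@SalesRep AS SYSNAME)
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_result
           WHERE @SalesRep = USER_NAME();
GO
-- Bind the predicate: FILTER silently hides rows on reads,
-- BLOCK prevents writes that would violate the predicate.
CREATE SECURITY POLICY Security.SalesFilter
    ADD FILTER PREDICATE Security.fn_SalesPredicate(SalesRep)
        ON dbo.Sales,
    ADD BLOCK PREDICATE Security.fn_SalesPredicate(SalesRep)
        ON dbo.Sales AFTER INSERT
    WITH (STATE = ON);
```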
This document provides instructions for creating an Oracle Data Integrator (ODI) project and interface to export data from one flat file to another flat file. It outlines the steps to create a new physical schema for the flat file model, a new ODI model for the flat file source, new ODI source and target datastores, and a new ODI interface to perform the flat file to flat file transformation. The interface can then be executed using the ODI Operator to verify the data export.
This document provides an introduction and background about the presenter along with information about SQL Database. The presenter has over 30,000 hours of training experience with SQL Server and various Microsoft certifications. They created SQL School Greece as a resource for IT professionals and others interested in SQL Server. The presentation will cover what SQL Database is on Azure, its service tiers including basic, standard, and premium, database transaction units (DTUs), the Azure SQL Database logical server, management tools for SQL Database, and securing SQL Database. It concludes with an invitation to sign up for SQL PASS and follow the presenter on social media.
Oracle Autonomous Database for Developers (Tércio Costa)
This document introduces Tércio Costa, an Oracle DBA and ACE member who is an expert on Oracle Autonomous Database. It lists his credentials and certifications. It then briefly discusses the Oracle ACE program membership tiers and its global community of over 500 technical experts. Finally, it mentions some of the key services provided by Oracle Autonomous Database such as provisioning, scaling, management, security, data protection, and optimization.
The document provides an overview and summary of new features in Microsoft SQL Server 2016. It discusses enhancements to the database engine, in-memory OLTP, columnstore indexes, R services, high availability, security, and Reporting Services. Key highlights include support for up to 2TB of durable memory-optimized tables, increased index key size limits, temporal data support, row-level security, and improved integration with Azure and Power BI capabilities. The presentation aims to help users understand and leverage the new and improved features in SQL Server 2016.
CDC was introduced in SQL Server 2008 to capture insert, update, and delete activity on SQL Server tables. It makes the details of changes available in change tables that mirror the structure of the source table. SSIS 2012 components were added to more easily handle CDC in packages. CDC must be enabled on databases and tables to track changes. It is designed to load data warehouses with changes from source systems and maintain audit and change logs. Considerations for using CDC include limiting tracked columns and using different filegroups for change tables to optimize performance.
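The setup and the two performance considerations mentioned above (limiting tracked columns, separate filegroups) can be sketched like this. Database, table, and filegroup names are hypothetical:

```sql
-- Enable CDC at the database level, then on a table.
USE SalesDB;
EXEC sys.sp_cdc_enable_db;

-- Track only the columns needed, and place the change table
-- on its own filegroup to limit overhead.
EXEC sys.sp_cdc_enable_table
    @source_schema        = N'dbo',
    @source_name          = N'Orders',
    @role_name            = NULL,
    @captured_column_list = N'OrderID, Status, ModifiedDate',
    @filegroup_name       = N'CDC_FG';

-- Read all changes captured between two log sequence numbers.
DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('dbo_Orders'),
        @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();
SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_Orders(@from_lsn, @to_lsn, N'all');
```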
An overview of the new features available in SQL Server 2016 including Stretch Database, Always Encrypted, Data Masking, In Memory Operational Analytics and more.
How Clean is your Database? Data Scrubbing for all Skill Sets (Chad Petrovay)
With staff working from home, many institutions are prioritizing data quality projects. Join Chad Petrovay, TMS Administrator at The Morgan Library & Museum, as he shares his deep knowledge of data scrubbing. Power users, system administrators, and SQL experts will learn how to correct and monitor data quality, and are introduced to new low-cost/free tools.
This document provides an introduction and overview of Azure DocumentDB. It discusses how DocumentDB is a fully managed NoSQL database service that provides fast and predictable performance for JSON data through SQL querying capabilities. It also describes how DocumentDB offers features like elastic scaling, high availability, global distribution and ease of development. The document then provides information on starting with DocumentDB, writing queries, and programming capabilities within DocumentDB like stored procedures and triggers.
Eladio Rincón discusses Microsoft SQL Server 2016's Stretch Database capability. Stretch Database allows organizations to migrate cold, historical data from on-premises SQL Server databases to Microsoft Azure for cost savings while still allowing the data to be queried locally and on Azure. The key benefits are reducing storage costs for large datasets, providing indefinite data retention within a consolidated datacenter in Azure, and ensuring business service level agreements are met. Stretch Database uses secure connections and provides backup, restore, and auditing functionality across the on-premises and Azure environments.
The document discusses building a data warehouse in SQL Server. It provides an agenda that covers topics like an overview of data warehousing, data warehouse design, dimension and fact tables, and physical design. It also discusses components of a data warehousing solution like the data warehouse database, ETL processes, and security considerations.
SKILLWISE-SSIS DESIGN PATTERN FOR DATA WAREHOUSING (Skillwise Group)
This document provides an overview of the SSIS design pattern for data warehousing and change data capture. It discusses what design patterns are and how they are commonly used for SSIS and data warehousing projects. It then covers 13 specific patterns including truncate and load, slowly changing dimensions, hashbytes, change data capture, merge, and master/child workflows. The document explains when each pattern is best used and provides pros and cons. It also provides guidance on configuring and using SQL Server change data capture functionality.
oracle data integrator training | oracle data integrator training videos | or... (Nancy Thomas)
Website : http://www.todaycourses.com
Oracle Data Integrator 11g Course Content :
1.Introduction to Oracle Data Integrator
What is Oracle Data Integrator?
Why Oracle Data Integrator?
Overview of ODI 11g Architecture
Overview of ODI 11g Components
About Graphical Modules
Types of ODI Agents
Overview of Oracle Data Integrator Repositories
2. Administrating ODI Repositories and Agents
Administrating the ODI Repositories
Creating Repository Storage Spaces
Creating and Connecting to the Master Repository
Creating and Connecting to the Work Repository
Managing ODI Agents
Creating a Physical Agent
Launching a Listener, Scheduler and Web Agent
Example of Load Balancing
This document summarizes a presentation on Oracle Data Integrator 11g given by Mark Rittman. It introduces ODI and its key features, components, and new capabilities in 11g such as the new Fusion IDE, J2EE deployment option, and integration with other Oracle technologies like WebLogic Server, Enterprise Manager, and OBIEE. The presentation also demonstrates typical development tasks in ODI like creating interfaces and packages.
A paper mill in Comana, Romania hosted a competition and school visit focused on the paper making process. Students from School 279 in Bucharest, Romania learned about the world of books and printing through an unusual journey into the paper mill. The mill uses a Fourdrinier machine and other paper making equipment to transform vegetable fibers like wood pulp into paper.
This document lists the names of several characters and locations including Javier, Renzo, Miguel, Guillermo, The Three Fantastic Creatures, Evil Pooh, The Dungeon, The Phoenix, Phoenix Peck, Oops, Help!, and Winnie Pooh. It also includes the phrase "Ha ha ha!" and the word "Conclution".
This document provides career advice for meteorologists. It discusses priorities like location, weather, quality of life, and job security. It emphasizes chasing good jobs, not just markets. Contract duration is very important, and it's best to negotiate shorter deals. Social media effectiveness is discussed, with photos performing best on Facebook and weather updates on Twitter. Relationship building with emergency managers and avoiding arguments are also advised. Overall, the document offers tips on negotiation, social media, career longevity, and maintaining positivity in the field.
This document discusses teaching English in Bulgarian foreign language schools from the perspective of a teacher. It provides details about the schools, including that they are selective, co-ed, for middle-class students seeking to attend university. It outlines the school's priorities of exam preparation and international competitions. The document also examines methodologies, resources, classroom culture differences, and tips for embracing cultural differences when teaching English.
The document discusses an external round table discussion on securing data and applications with context aware external authorization. It provides an overview of Oracle's Entitlements Server product, which provides dynamic authorization to data, applications, and relational databases with real-time sub-millisecond authorization response. Entitlements Server is part of Oracle's Identity Platform and provides strategic, heterogeneous, and leading authorization capabilities at scale.
The document provides an overview of Week 6 of an English course, including the following key points:
1) The week's activities will include listening to music, reviewing grammar, talking about illnesses, telling jokes, and studying commercials.
2) A grammar exercise focuses on the present perfect simple and present perfect continuous tenses.
3) A vocabulary review covers common medical terms like temperature, cough, and blister from the previous week.
4) Sample questions are provided to ask about health, illnesses, medical history and lifestyle habits.
RODOVIAS RS-ANÁLISE ZERO HORA NOV/2011 A MARÇO/2013-PARTE II (PLANORS)
The European Union has agreed an oil embargo against Russia in response to the invasion of Ukraine. The embargo is part of a sixth package of sanctions and will ban most Russian oil imports into the EU by the end of this year. Some member states still depend heavily on Russian oil and have been granted an exemption, but the embargo is expected to significantly reduce Russia's revenue from oil sales.
Interactive Reader pgs. 105 112 + Rubric, Bohr Model & Lewis Dotjmori1
The document provides a schedule and assignments for a science class. It includes a checklist of assignments with due dates, a priority list of assignments to focus on completing today which includes power notes and interactive reader pages, and homework assignments due on Friday including completing interactive reader pages and star cards.
These are my slides from the Internet Researcher's Conference (#IR15.0) in Daegu, Korea in October 2014... you can read more about it at my research blog over at www.incitestories.com.au
The document outlines the upcoming science class schedule and assignments which include a definition of matter lab due the next day, a binder check on Thursday, and a test on Thursday which requires a half page of notes. Students are instructed to bring specific materials to class like pencils and markers. The document also previews topics that will be covered like states of matter, changes in states, gas laws, and energy transfer relating to temperature changes.
The document discusses Java interfaces. It explains that interfaces declare methods but do not provide implementations, and that classes implement interfaces to provide those method implementations. It provides examples of how to declare an interface and how a class implements an interface. It also discusses why interfaces are useful for allowing classes to have multiple roles and behave in standard ways.
Mba724 s4 2 writing up the final reportRachel Chung
This document provides guidance on key sections of a research paper, including the literature review, methodology, results, and discussion sections. The main points are:
1) The literature review should selectively review high-quality, unbiased prior research and build a case for the paper's research question and hypotheses.
2) The methodology section should describe the study design and procedures in enough detail to allow replication.
3) The results section should present statistical findings without interpretation, including descriptive and inferential statistics relevant to testing the hypotheses.
4) The discussion section should interpret the results in the context of the research questions, hypotheses, and prior literature, and discuss limitations and implications.
Báo cáo vừa được xuất bản với bản quyền thuộc về Ủy ban Kinh Tế của Quốc hội và UNDP tại Việt Nam.Với độ dài 294 trang, báo cáo này được xem là báo cáo công phu thứ 2 trong vòng 4 năm qua( báo cáo đầu tiên là của Chương trình Việt Nam tại Đại học Harvard năm 2009).
Báo cáo Kinh tế vĩ mô được xây dựng hàng năm với cách viết “thân thiện” với Đại biểu Quốc hội và các nhà hoạch định chính sách nhằm tổng kết và đánh giá diễn biến tình hình kinh tế vĩ mô Việt Nam và thế giới, phân tích chuyên sâu một số vấn đề và chính sách kinh tế vĩ mô nổi bật trong năm, đồng thời thảo luận những vấn đề mang tính trung và dài hạn đối với nền kinh tế, từ đó đưa ra các khuyến nghị chính sách thiết thực.
Designing high performance datawarehouseUday Kothari
Just when the world of “Data 1.0” showed some signs of maturing; the “Outside In” driven demands seem to have already initiated some the disruptive changes to the data landscape. Parallel growth in volume, velocity and variety of data coupled with incessant war on finding newer insights and value from data has posed a Big Question: Is Your Data Warehouse Relevant?
In short, the surrounding changes happening real time is the new “Data 2.0”. It is characterized by feeding the ever hungry minds with sharper insights whether it is related to regulation, finance, corporate action, risk management or purely aimed at improving operational efficiencies. The source in this new “Data 2.0” has to be commensurate to the outside in demands from customers, regulators, stakeholders and business users; and hence, you would need a high relformance (relevance + performance) data warehouse which will be relevant to your business eco-system and will have the power to scale exponentially.
We starts this webinar by giving the audiences a sneak preview of what happened in the Data 1.0 world & which characteristics are shaping the new Data 2.0 world. It then delves deep on the challenges that growing data volumes have posed to the Data warehouse teams. It also presents the audiences some of the practical and proven methodologies to address these performance challenges. Finally, in the end it will highlight some of the thought provoking ways to turbo charge your data warehouse related initiatives by leveraging some of the newer technologies like Hadoop. Overall, the webinar will educate audiences with building high performance and relevant data warehouses which is capable of meeting the newer demands while significantly driving down the total cost of ownership.
This document discusses techniques for optimizing Power BI performance. It recommends tracing queries using DAX Studio to identify slow queries and refresh times. Tracing tools like SQL Profiler and log files can provide insights into issues occurring in the data sources, Power BI layer, and across the network. Focusing on optimization by addressing wait times through a scientific process can help resolve long-term performance problems.
Teradata Technology Leadership and InnovationTeradata
Teradata is a global leader in data warehousing and analytics. It provides a range of products including data warehouse appliances, an enterprise data warehouse, and database technology. Teradata's solutions leverage the latest technology and are optimized for performance, flexibility, and integrated analytics to deliver insights faster.
The document discusses several new features in Oracle Database 12c including:
- A new multi-tenant architecture using container databases and pluggable databases.
- Enhanced threaded execution that reduces the number of processes required.
- Ability to gather statistics online during direct-path loads instead of full table scans.
- Option to keep statistics on global temporary tables private to each session.
- Introduction of temporary undo segments to reduce undo in the undo tablespace.
- Ability to add invisible columns to tables.
- Support for multiple indexes on the same column.
- New information lifecycle management features like heat maps and data movement.
- Ability to log all DDL statements for troubleshooting.
- L
Vamshi Krishna Reddy has over 7 months of experience using Informatica Power Center and Teradata for data warehousing projects. He has strong skills in ETL development, data modeling, and debugging Teradata utilities. He has extensive experience developing complex mappings from varied data sources and transformations for data loading and analytics.
Maximizing Data Lake ROI with Data Virtualization: A Technical DemonstrationDenodo
Watch full webinar here: https://bit.ly/3ohtRqm
Companies with corporate data lakes also need a strategy for how to best integrate them with their overall data fabric. To take full advantage of a data lake, data architects must determine what data belongs in the Lake vs. other sources, how end users are going to find and connect to the data they need as well as the best way to leverage the processing power of the data lake. This webinar will provide you with a deep dive look at how the Denodo Platform for data virtualization enables companies to maximize their investment in their corporate data lake.
Watch on-demand this webinar to learn:
- How to create a logical data fabric with Denodo
- How to leverage the a data lake for MPP Acceleration and Summary Views
- How to leverage Presto with Denodo for file based data lakes (ie. S3, ADLS, HDFS, etc.)
Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc...Databricks
Spark SQL is a highly scalable and efficient relational processing engine with ease-to-use APIs and mid-query fault tolerance. It is a core module of Apache Spark. Spark SQL can process, integrate and analyze the data from diverse data sources (e.g., Hive, Cassandra, Kafka and Oracle) and file formats (e.g., Parquet, ORC, CSV, and JSON). This talk will dive into the technical details of SparkSQL spanning the entire lifecycle of a query execution. The audience will get a deeper understanding of Spark SQL and understand how to tune Spark SQL performance.
The document discusses various data modeling techniques for data warehouses including star schemas and column-oriented storage. It notes that traditional OLTP systems are not optimized for data warehousing queries. Star schemas organize data around a central fact table linked to dimension tables and are widely used. However, star schemas can have performance issues like large intermediate results. Column-oriented storage improves performance by storing columns together rather than rows.
Certainly! However, your request is quite broad, as "description for data" can encompass a wide range of topics. Could you please provide more details or specify the type of data you need a description for? For instance, are you looking for a description of a dataset, a specific type of data, or something else? The more information you provide, the better I can assist you.
METASUITE is a data integration software that extracts, transforms, and loads large amounts of data from various sources into targets like data warehouses. It provides a single solution for all data integration functions through an easy to use visual interface. The software utilizes a metadata-driven approach to manage changes and maintain data integration processes. It also offers powerful filtering, transformation and data quality control capabilities.
This white paper discusses Oracle to Netezza migration for a Fortune 100 retailer. It describes the key steps in the migration process including impact analysis, design and development, history load, and testing. Impact analysis identifies all database objects, ETL processes, and applications/reports impacted. Design considerations include data type mapping, SQL conversion, and report changes. History data can be loaded via flat files or ETL. Rigorous testing of database objects, SQL, ETL processes, and data is recommended to identify any issues.
Migration to ClickHouse. Practical guide, by Alexander ZaitsevAltinity Ltd
This document provides a summary of migrating to ClickHouse for analytics use cases. It discusses the author's background and company's requirements, including ingesting 10 billion events per day and retaining data for 3 months. It evaluates ClickHouse limitations and provides recommendations on schema design, data ingestion, sharding, and SQL. Example queries demonstrate ClickHouse performance on large datasets. The document outlines the company's migration timeline and challenges addressed. It concludes with potential future integrations between ClickHouse and MySQL.
The document discusses DeepDB, a storage engine plugin for MySQL that aims to address MySQL's performance and scaling limitations for large datasets and heavy indexing. It does this through techniques like a Cache Ahead Summary Index Tree, Segmented Column Store, Streaming I/O, Extreme Concurrency, and Intelligent Caching. The document provides examples showing DeepDB significantly outperforming MySQL's InnoDB storage engine for tasks like data loading, transactions, queries, backups and more. It positions DeepDB as a drop-in replacement for InnoDB that can scale MySQL to support billions of rows and queries 2x faster while reducing data footprint by 50%.
This document provides an overview of in-memory databases, summarizing different types including row stores, column stores, compressed column stores, and how specific databases like SQLite, Excel, Tableau, Qlik, MonetDB, SQL Server, Oracle, SAP Hana, MemSQL, and others approach in-memory storage. It also discusses hardware considerations like GPUs, FPGAs, and new memory technologies that could enhance in-memory database performance.
Big Data Warehousing Meetup: Dimensional Modeling Still Matters!!!Caserta
Joe Caserta went over the details inside the big data ecosystem and the Caserta Concepts Data Pyramid, which includes Data Ingestion, Data Lake/Data Science Workbench and the Big Data Warehouse. He then dove into the foundation of dimensional data modeling, which is as important as ever in the top tier of the Data Pyramid. Topics covered:
- The 3 grains of Fact Tables
- Modeling the different types of Slowly Changing Dimensions
- Advanced Modeling techniques like Ragged Hierarchies, Bridge Tables, etc.
- ETL Architecture.
He also talked about ModelStorming, a technique used to quickly convert business requirements into an Event Matrix and Dimensional Data Model.
This was a jam-packed abbreviated version of 4 days of rigorous training of these techniques being taught in September by Joe Caserta (Co-Author, with Ralph Kimball, The Data Warehouse ETL Toolkit) and Lawrence Corr (Author, Agile Data Warehouse Design).
For more information, visit http://casertaconcepts.com/.
The document provides details of 6 projects undertaken by Lekkala Sekhar as an ETL developer and Datastage and Teradata expert. The projects involved designing and developing ETL processes to extract, transform and load data from various source systems like flat files, Oracle and Teradata databases into target data warehouses. Technologies used include Datastage 7.5, 8.0, 8.5, 9.1 and 11.3 and Teradata V2R5, 12, 13 and 14.10. Responsibilities included requirement gathering, documentation, job development, testing, performance tuning and support.
Oracle Database 12c includes several new features:
1) Online statistics gathering improves optimizer performance by gathering statistics for new objects during creation instead of requiring a full data scan later.
2) Invisible columns allow adding a column to a table without showing it in SELECT queries or the table definition unless explicitly specified.
3) Multiple indexes on the same column are now supported if they differ in characteristics like being unique/non-unique or using different index types.
Spark SQL allows users to perform relational operations on Spark's RDDs using a DataFrame API. It addresses challenges in existing systems like limited optimization and data sources by providing a DataFrame API that can query both external data and RDDs. Spark SQL leverages a highly extensible optimizer called Catalyst to optimize logical query plans into efficient physical query plans using features of Scala. It has been part of the Spark core distribution since version 1.0 in 2014.
Similar to World2016_T5_S7_TeradataFunctionalOverview (20)
1. Teradata Database and MicroStrategy 10: Functional Overview Including Recommendations for Performance Optimization
MicroStrategy World 2016
2. MicroStrategy and Teradata: Partnership Strength and Value
• Annual Strategy Session
• Optimized SQL for Teradata
• Extensive leverage of Teradata extensions
• High Availability solutions
• Consistent participant in the Teradata Early Adopter program
• Over 350 joint customers
• Industry-leading BI platform
• Relationship since 1995 in enterprise Business Intelligence
• BI applications run natively on Teradata
• Teradata indexing and user-defined functions
• Extended server-based computations
• Enterprise data integration
• MicroStrategy BI performance and scalability
• Largest number of users
• Highest level of BI complexity
• Pre-defined and ad hoc query support
• OLAP extensions
• Teradata uses MicroStrategy SQL for Optimizer testing
• Dedicated engineering resources
4. MicroStrategy Data Access Workflows
There are numerous ways for MicroStrategy to interact with Teradata
• Ad hoc Schema
o For analysts familiar with the data in the database
o Schema is created automatically on the fly
o Optimal time-to-value
• Modeled Schema
o A BI Architect creates a logical model of the data in MicroStrategy
o Analysts or consumers use model objects (attributes and metrics) to express their analytical needs
o MicroStrategy generates multi-pass SQL specific to the database
• Live Connect
o User actions result in interactive queries against the data source
o Good for frequently changing data
• In-Memory Dataset
o Dataset is imported from the database into a multi-dimensional in-memory cube
o Can improve performance and user scale when accessing less frequently updated data sets
5. Push-down Analytics send analytical queries to Teradata
Key technical characteristics
• Most queries access vast amounts of data
• Most queries perform significant calculations
Challenge
• Interactive analysis demands fast query runtimes
MicroStrategy and Teradata work together to tackle this challenge
• MicroStrategy formulates “good queries”
• Teradata executes queries well
6. Many Integration Points Tackle Common Challenges
• Integration with Teradata tools
o Integrates with Teradata's core EDW mixed workload management features
o Unity
o TPTAPI/Export
• Extensions to Teradata functionality
o Vast number of features that complement Teradata's architecture
o Aggregate awareness with physical summary tables
o Middle-tier computation of calculations not available in Teradata
o Middle-tier caching via Intelligent Cubes
o Report caching
• Multi-pass SQL for analytical sophistication
o Ability to answer complex business questions inside Teradata
o Use of volatile tables or derived tables
o Control of primary indexes and statistics collection on intermediate results
• Teradata-specific SQL syntax
o Takes advantage of Teradata's Massively Parallel Processing architecture and rich analytics
o Ordered Analytic (OLAP) functions
o CASE expressions
o Full outer joins
o Set operators
o Subqueries
• Seamless support for key Teradata features
o Couples with underlying Teradata optimizations for superior query performance
o Partitioned primary indexes
o Aggregate join indexes
o Teradata function library and UDFs
o UNICODE character set
o Columnar support
7. Multi-pass SQL For Analytical Sophistication
Ability to answer complex business questions inside Teradata
Each pass is a complete query block:
Pass 1: SELECT … FROM … WHERE … GROUP BY …
Pass 2: SELECT … FROM … WHERE … GROUP BY …
Pass 3: SELECT … FROM … WHERE … GROUP BY …
MicroStrategy offers multiple approaches:
• Derived Table syntax (the default)
• True Temporary Table (Volatile Table) syntax
A simple configuration setting (the VLDB property "Intermediate Table Type") allows switching between the two.
• Intermediate result sets are truly temporary in nature
• They don't require typical protections
8. select pa1.SUBCAT_ID SUBCAT_ID,
a11.SUBCAT_DESC SUBCAT_DESC,
pa1.YEAR_ID YEAR_ID,
pa1.WJXBFS1 WJXBFS1,
pa2.WJXBFS1 WJXBFS2
from (select a12.SUBCAT_ID SUBCAT_ID,
a13.YEAR_ID YEAR_ID,
sum(a11.TOT_UNIT_SALES) WJXBFS1
from ITEM_MNTH_SLS a11
join LU_ITEM a12
on (a11.ITEM_ID = a12.ITEM_ID)
join LU_MONTH a13
on (a11.MONTH_ID = a13.MONTH_ID)
group by a12.SUBCAT_ID,
a13.YEAR_ID
) pa1
…
join (select …
) pa2
on (pa1.SUBCAT_ID = pa2.SUBCAT_ID and
pa1.YEAR_ID = pa2.YEAR_ID)
join LU_SUBCATEG a11
on (pa1.SUBCAT_ID = a11.SUBCAT_ID)
Derived Tables vs. Volatile Tables
By default, MicroStrategy switches from derived table syntax to volatile tables for reports with more than 64 passes.
create volatile table ZZSP00, no fallback, no log(
YEAR_ID SMALLINT,
SUBCAT_ID BYTEINT,
WJXBFS1 FLOAT)
primary index (YEAR_ID, SUBCAT_ID) on commit preserve rows
;insert into ZZSP00
select a13.YEAR_ID YEAR_ID,
a12.SUBCAT_ID SUBCAT_ID,
sum(a11.TOT_UNIT_SALES) WJXBFS1
from ITEM_MNTH_SLS a11
join LU_ITEM a12
on (a11.ITEM_ID = a12.ITEM_ID)
join LU_MONTH a13
on (a11.MONTH_ID = a13.MONTH_ID)
group by a13.YEAR_ID,
a12.SUBCAT_ID
…
select pa1.SUBCAT_ID SUBCAT_ID,
a11.SUBCAT_DESC SUBCAT_DESC,
pa1.YEAR_ID YEAR_ID,
pa1.WJXBFS1 WJXBFS1,
pa2.WJXBFS1 WJXBFS2
from ZZSP00 pa1
join ZZSP01 pa2
on (pa1.SUBCAT_ID = pa2.SUBCAT_ID and
pa1.YEAR_ID = pa2.YEAR_ID)
join LU_SUBCATEG a11
on (pa1.SUBCAT_ID = a11.SUBCAT_ID)
9. Intelligent Table Indexing Improves JOIN Performance
MicroStrategy transparently takes advantage of primary indexes (and partitioned primary indexes) defined on fact tables.
Additionally, MicroStrategy generates primary indexes on intermediate tables.
• System administrators can weight columns and control the size of the index for a particular report
Matching of primary indexes is crucial to join performance.
• Temporary tables are indexed to match the fact tables, which minimizes the database processing that would otherwise be required to repartition the temp table to match the fact table's primary index
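As a sketch of that idea (the DDL below is illustrative, reusing the fact table name from the slide 8 example; column types are assumptions), an intermediate volatile table can be declared with the same primary index as the fact table it will later be joined to, so the join rows hash to the same AMPs and no redistribution is needed:

```sql
-- Hypothetical fact table, distributed on (ITEM_ID, MONTH_ID)
CREATE TABLE ITEM_MNTH_SLS (
  ITEM_ID        INTEGER,
  MONTH_ID       INTEGER,
  TOT_UNIT_SALES FLOAT)
PRIMARY INDEX (ITEM_ID, MONTH_ID);

-- Intermediate table indexed to match the fact table, so a later
-- join between the two can proceed AMP-locally
CREATE VOLATILE TABLE ZZTMP00, NO FALLBACK, NO LOG (
  ITEM_ID  INTEGER,
  MONTH_ID INTEGER,
  WJXBFS1  FLOAT)
PRIMARY INDEX (ITEM_ID, MONTH_ID)
ON COMMIT PRESERVE ROWS;
```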
10. Improved Performance Using Bulk Inserts
Intelligence Server inserts data into intermediate database tables for:
1. Multi-Source Reports
2. Data Mart creation
3. Iterative Analysis (Analytical Engine computations requiring back-and-forth data movement with the database)
Row-by-row inserts are slow: each row requires time-consuming locking/unlocking of the table.
Bulk inserts are fast: parameterized statements insert blocks of data all at once, so rows are inserted in 32K blocks rather than as individual records.
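The contrast can be sketched in SQL terms (the intermediate table and column names below are invented, in the style of the deck's ZZSP tables):

```sql
-- Slow: one request, and one lock/unlock cycle, per row
INSERT INTO ZZMD00 VALUES (101, 201601, 12.0);
INSERT INTO ZZMD00 VALUES (102, 201601, 7.5);
-- ... one statement per row ...

-- Fast: a single parameterized statement, prepared once; the client
-- binds arrays of values, so each execution ships a whole block of
-- rows (up to the 32K message size) in one round trip
INSERT INTO ZZMD00 (ITEM_ID, MONTH_ID, WJXBFS1) VALUES (?, ?, ?);
```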
11. Many Integration Points Tackle Common Challenges
• Integration with Teradata tools
• Extensions to Teradata functionality
• Multi-pass SQL for analytical sophistication
• Teradata-specific SQL syntax
• Seamless support for key Teradata features
12. Teradata-specific SQL Syntax
Takes advantage of Teradata's Massively Parallel Processing architecture and rich analytics
Push down 120+ functions:
• Mathematical
• String
• Statistical
• Date-Time functions, etc.
20+ Teradata-specific tunable settings:
• Full outer joins
• Set operators
• Implicit/explicit table creation type
• Query banding
• Indexing
• Sub-query type, etc.
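A sketch of what push-down means in practice, reusing the fact table from the slide 8 example (the aliasing and ranking logic here are illustrative, not generated MicroStrategy SQL): instead of fetching detail rows and ranking them in the middle tier, the ranking can be expressed as a Teradata ordered analytic function and executed in parallel on the AMPs.

```sql
-- Rank items by total unit sales within each month, inside Teradata
-- rather than in the BI middle tier. RANK() OVER is one of the
-- ordered analytic (OLAP) functions that can be pushed down.
SELECT MONTH_ID,
       ITEM_ID,
       SUM(TOT_UNIT_SALES) AS UNIT_SALES,
       RANK() OVER (PARTITION BY MONTH_ID
                    ORDER BY SUM(TOT_UNIT_SALES) DESC) AS SALES_RANK
FROM ITEM_MNTH_SLS
GROUP BY MONTH_ID, ITEM_ID;
```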
13. Many Integration Points Tackle Common Challenges
• Integration with Teradata tools
• Extensions to Teradata functionality
• Seamless support for key Teradata features
• Multi-pass SQL for analytical sophistication
• Teradata-specific SQL syntax
14. Many Teradata Features Are Transparently Used
Here is a short selection of the most commonly implemented ones
PPI (Partitioned Primary Index)
• Minimizes physical access by targeting only the rows of qualifying partitions, so queries run faster
• Helpful for queries based on range access, such as date ranges
NoPI (No Primary Index)
• Useful for applications that concurrently load data into a staging table
• MicroStrategy can use NoPI for intermediate table creation
AJI (Aggregate Join Index)
• Creation, maintenance, and automatic navigation of pre-aggregations and pre-joined tables
Data Distribution
• Primary indexes are crucial
• The physical profile of tables relates directly to response time for MicroStrategy reports
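A minimal sketch of the first and third features (table, index, and column names are invented; exact restrictions on join index expressions vary by Teradata release):

```sql
-- PPI: a date-range query touches only the qualifying monthly
-- partitions instead of scanning the whole table.
CREATE TABLE SALES_FACT (
  ITEM_ID INTEGER,
  SALE_DT DATE,
  AMT     DECIMAL(12,2))
PRIMARY INDEX (ITEM_ID)
PARTITION BY RANGE_N (SALE_DT BETWEEN DATE '2015-01-01'
                                  AND DATE '2016-12-31'
                      EACH INTERVAL '1' MONTH);

-- AJI: Teradata maintains this pre-aggregation automatically, and
-- the optimizer rewrites matching queries to read it instead of
-- the detail table; no report SQL change is required.
CREATE JOIN INDEX SALES_BY_ITEM AS
SELECT ITEM_ID,
       SUM(AMT) AS TOT_AMT
FROM SALES_FACT
GROUP BY ITEM_ID;
```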
15. Many Integration Points Tackle Common Challenges
• Integration with Teradata tools
• Multi-pass SQL for analytical sophistication
• Teradata-specific SQL syntax
• Seamless support for key Teradata features
• Extensions to Teradata functionality
16. Integration with Teradata Workload Management
Integrates with Teradata's core EDW mixed workload management features
Workload Management (WLM) is necessary to optimize access to shared resources for concurrently executing queries.
The goals of functional workload management are to:
• Optimally leverage available (hardware) resources for performance and throughput
• Prioritize access for high-priority jobs
• Assure resource availability by preventing any small set of jobs from locking up the system
Both MicroStrategy and Teradata provide WLM.
18. Teradata Manages Workload Using Query Bands
Query Bands assign resources to incoming queries
• Teradata allows applications to "tag" each report / SQL statement with identifying information
• MicroStrategy makes use of Query Bands
• Combined execution logs from MicroStrategy (Enterprise Manager) and Teradata (DBQL) enable deep usage analysis
SET QUERY_BAND =
'ApplicationName=MicroStrategy;Version=9.0.1;ClientUser=!u;Source=!p;Action=!o;StartTime=!dT!t;JobID=!j;Importance=!i;sess_id=!s;proj_id=!z;report_guid=!r;'
FOR SESSION;
create volatile table ZZSP00, no fallback, no log(
YEAR_ID INTEGER,
SUBCAT_ID INTEGER,
WJXBFS1 FLOAT)
primary index (YEAR_ID, SUBCAT_ID) on commit preserve rows
;insert into ZZSP00
select a13.YEAR_ID YEAR_ID,
a12.SUBCAT_ID SUBCAT_ID,
…
SET QUERY_BAND = NONE FOR SESSION;
19. MicroStrategy 10 Offers Two Connectivity Options
Performance considerations
• ODBC for push-down reports
o Proven, reliable industry standard
o JDBC on Mac
• TPTAPI (Teradata Parallel Transporter API) for in-memory cube loads
o Enables effective data transfer to MicroStrategy
o Due to API overhead, this is only recommended for data volumes larger than 1 GB
20. Optimal ODBC Connectivity Requires Non-default Settings
Small parameter changes have a big impact on data throughput
Pay special attention to:
• Maximum Response Buffer Size
• Enable Read Ahead / Double Buffering for interleaved fetches
• Session Mode
• Session Character Set for Unicode data
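On Linux/UNIX these settings live in the DSN entry of odbc.ini. A sketch of a tuned DSN follows; the keyword names and values are assumptions drawn from the settings named above, and they vary by driver version, so check your Teradata ODBC driver documentation before relying on them:

```ini
; Illustrative Teradata ODBC DSN tuned for data throughput
[TDPROD]
Driver=/opt/teradata/client/odbc_64/lib/tdataodbc_sb64.so
DBCName=tdprod.example.com   ; hypothetical hostname
MaxRespSize=1048576          ; Maximum Response Buffer Size, in bytes
EnableReadAhead=Yes          ; double-buffers interleaved fetches
SessionMode=ANSI             ; Session Mode
CharacterSet=UTF8            ; Session Character Set for Unicode data
```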
21. TPTAPI: Parallel Sessions out of Teradata into MicroStrategy Cubes
• An alternative means to load/unload data between a Teradata Database server and a client application
• MicroStrategy 10.1 invokes the Export Operator from TPTAPI to export data quickly out of Teradata into MicroStrategy Cubes
• The "FastExport" protocol can export data out of Teradata using parallel sessions, and therefore has a higher throughput rate than a single traditional ODBC session
• Multiple processes are launched to read data in parallel
• TPTAPI further optimizes throughput by enabling multiple "instances"
• For setup and supported configurations, see TN266840 on the MicroStrategy Community website: FAQ on using Teradata Parallel Transporter API (TPTAPI)
[Diagram: Teradata invokes the TPT API; multiple TPT Export sessions stream data in parallel into MicroStrategy cubes]
22. Optimal Performance Requires TPT Parameter Adjustment
Two steps are required to enable use of TPTAPI Export:
1. Enable use of TPTAPI for the Teradata connection
2. Enable use of TPTAPI at the report level (typically a cube report)
• If TPTAPI is enabled for a multi-pass SQL report, MicroStrategy only retrieves the final result set via TPTAPI
• The SQL View allows verification of TPT use
23. MicroStrategy Can Seamlessly Integrate with Teradata Unity
Unity provides an integrated portfolio, turning a multi-system environment into an analytical ecosystem.
MicroStrategy integrates with the Unity server, which effectively manages multiple Teradata systems.
Why do we integrate?
• HA (High Availability) requirements
• Active/Active configurations
• An appliance for transactions and an EDW for MicroStrategy analytics
[Diagram: users and applications connect through a Unity server, which routes TPT traffic, queries, DDL changes, and data dictionary operations to Teradata Systems A and B and keeps their schemas and data synchronized]
24. Teradata Query Grid
• What is Teradata Query Grid?
• How MicroStrategy can use Query Grid
25. Teradata Query Grid: Teradata-Hadoop
Leverage Hadoop resources, reduce data movement
• Bi-directional to Hadoop
• Query push-down
• Easy configuration of server connections
• Query through Teradata
• Sent to Hadoop through Hive
• Results returned to Teradata
• Additional processing joins data in Teradata
• Final results sent back to application/user
26. How MicroStrategy Leverages Query Grid
• MicroStrategy can use the remote tables just like any other table; this should work across ROLAP SQL, Query Builder, Data Import, etc.
• Joining Hadoop tables with Teradata tables and performing analytics
• Importing snapshots (views or tables) from Hadoop; MicroStrategy then queries these snapshots
• Importing data as a permanent or temporary Teradata Database table
[Diagram: MicroStrategy ROLAP/SQL, Query Builder, and Cubes query the Teradata Database, which exports to and imports from Hive tables via the load_to_hcatalog and load_from_hcatalog operators]
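A rough sketch of the Hadoop-join bullet, using the load_from_hcatalog table operator named in the diagram. The hostname, Hive table, and lookup table here are invented, and the exact operator parameters vary by Teradata release, so treat this as an illustration of the shape of such a query rather than authoritative syntax:

```sql
-- Read a Hive table through Query Grid and join it directly to a
-- Teradata dimension table in a single query.
SELECT w.ITEM_ID,
       i.ITEM_DESC,
       COUNT(*) AS CLICKS
FROM load_from_hcatalog(
       USING server('hadoop1.example.com')   -- Hive metastore host
             port('9083')
             username('hive')
             dbname('default')
             tablename('web_clicks')
             columns('*')
     ) AS w
JOIN LU_ITEM i
  ON (w.ITEM_ID = i.ITEM_ID)
GROUP BY w.ITEM_ID, i.ITEM_DESC;
```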
27. Summary
• MicroStrategy and Teradata continue to have a strong partnership. We work together to further optimize our integration to provide a seamless reporting experience.
Call to action:
• Refer to existing best practices for developing MicroStrategy applications. See our jointly authored integration paper in the MicroStrategy Knowledge Base (TN274564) and the FAQ on the TPTAPI implementation (TN266840)
• Make sure to take advantage of DB features designed for analytical workloads
• Look for best practices on taking advantage of data source strengths in the MicroStrategy Community
• MicroStrategy customer requests / requirements should be submitted to http://community.microstrategy.com under the "Ideas" section
• Attend the Claraview Workshop:
o Mobile Productivity: Build an iPhone or iPad App in 50 minutes
o Date/Time: Wednesday @ 11:30am-12:30pm
o Location: Flamingo 3
• Contact information:
o MicroStrategy: Farah Omer, fomer@microstrategy.com
o Teradata: Steve Greenberg, steve.greenberg@teradata.com (for integration questions)
o Claraview: Tyler Rebman, tyler.rebman@claraview.com (for implementation questions)