This document summarizes new features in Teradata Database 13.10 including temporal database capabilities, geospatial enhancements, workload management improvements, and availability/serviceability enhancements. Key features include support for valid time, transaction time, and bitemporal tables, character-based primary partitioned indexes, timestamp partitioning, and increasing the number of available workload definitions in Teradata Active System Management.
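As a quick illustration of the temporal features, a bitemporal table in Teradata pairs a user-supplied valid-time period with a system-maintained transaction-time period. A minimal sketch (table and column names are hypothetical, not from the release notes):

    CREATE MULTISET TABLE edw.policy (
        policy_id  INTEGER NOT NULL,
        premium    DECIMAL(10,2),
        -- business lifetime of the row, supplied by the application
        validity   PERIOD(DATE) NOT NULL AS VALIDTIME,
        -- when the database knew about the row, maintained automatically
        duration   PERIOD(TIMESTAMP(6) WITH TIME ZONE) NOT NULL AS TRANSACTIONTIME
    ) PRIMARY INDEX (policy_id);

    -- "As of" queries can slice on either time dimension
    VALIDTIME AS OF DATE '2010-06-30'
    SELECT policy_id, premium FROM edw.policy;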
Big data architectures and the data lake – James Serra
With so many new technologies, it can be confusing to choose the best approach to building a big data architecture. The data lake is a great new concept, usually built in Hadoop, but what exactly is it and how does it fit in? In this presentation I'll discuss the four most common patterns in big data production implementations, the top-down vs. bottom-up approach to analytics, and how you can use a data lake and an RDBMS data warehouse together. We will go into detail on the characteristics of a data lake and its benefits, and how you still need to perform the same data governance tasks in a data lake as you do in a data warehouse. Come to this presentation to make sure your data lake does not turn into a data swamp!
Data Lakehouse, Data Mesh, and Data Fabric (r1) – James Serra
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean and how do they compare to a data warehouse? In this session I’ll cover all of them in detail and compare the pros and cons of each. I’ll include use cases so you can see what approach will work best for your big data needs.
Enabling a Data Mesh Architecture with Data Virtualization – Denodo
Watch full webinar here: https://bit.ly/3rwWhyv
The Data Mesh architectural design was first proposed in 2019 by Zhamak Dehghani, principal technology consultant at Thoughtworks, a technology company that is closely associated with the development of distributed agile methodology. A data mesh is a distributed, decentralized data infrastructure in which multiple autonomous domains manage and expose their own data, called “data products,” to the rest of the organization.
Organizations adopt a data mesh architecture when they experience shortcomings in highly centralized architectures, such as the lack of domain-specific expertise in data teams, the inflexibility of centralized data repositories in meeting the specific needs of different departments within large organizations, and the slowness of centralized data infrastructures in provisioning data and responding to changes.
In this session, Pablo Alvarez, Global Director of Product Management at Denodo, explains how data virtualization is your best bet for implementing an effective data mesh architecture.
You will learn:
- How data mesh architecture not only enables better performance and agility, but also self-service data access
- The requirements for “data products” in the data mesh world, and how data virtualization supports them
- How data virtualization enables domains in a data mesh to be truly autonomous
- Why a data lake is not automatically a data mesh
- How to implement a simple, functional data mesh architecture using data virtualization
Building Lakehouses on Delta Lake with SQL Analytics Primer – Databricks
You’ve heard the marketing buzz, maybe you have been to a workshop and worked with some Spark, Delta, SQL, Python, or R, but you still need some help putting all the pieces together? Join us as we review some common techniques to build a lakehouse using Delta Lake, use SQL Analytics to perform exploratory analysis, and build connectivity for BI applications.
Wallchart - Data Warehouse Documentation Roadmap – David Walker
All projects need documentation and many companies provide templates as part of a methodology. This document describes the templates, tools and source documents used by Data Management & Warehousing. It serves two purposes:
• For projects using other methodologies or creating their own set of documents to use as a checklist. This allows the project to ensure that the documentation covers the essential areas for describing the data warehouse.
• To demonstrate our approach to our clients by describing the templates and deliverables that are produced.
Documentation, methodologies and templates are inherently both incomplete and flexible. Projects may wish to add, change, remove or ignore any part of any document. Some may also believe that aspects of one document would sit better in another. If this is the case then users of this document and these templates are encouraged to change them to fit their needs.
Data Management & Warehousing believes that the approach or methodology for building a data warehouse should be a series of guides and checklists. This ensures that small teams of relatively skilled people developing the system can cover all aspects of the project while remaining free to deal with the specific issues of their environment and deliver exceptional solutions, rather than a rigid methodology that merely ensures large teams of relatively unskilled staff meet a minimum standard.
Embarking on building a modern data warehouse in the cloud can be an overwhelming experience due to the sheer number of products that can be used, especially when the use cases for many products overlap one another. In this talk I will cover the use cases of many of the Microsoft products you can use when building a modern data warehouse, broken down into four areas: ingest, store, prep, and model & serve. It’s a complicated story that I will try to simplify, giving blunt opinions on when to use which products and the pros and cons of each.
Databricks: A Tool That Empowers You To Do More With Data – Databricks
In this talk we will present how Databricks has enabled the author to achieve more with data: one person can build a coherent data project with data engineering, analysis, and science components, with better collaboration, better productionization methods, larger datasets, and faster turnaround.
The talk will include a demo illustrating how the multiple functionalities of Databricks help to build a coherent data project: Databricks Jobs, Delta Lake, and Auto Loader for data engineering; SQL Analytics for data analysis; Spark ML and MLflow for data science; and Projects for collaboration.
Master the Multi-Clustered Data Warehouse - Snowflake – Matillion
Snowflake is one of the most powerful, efficient data warehouses on the market today—and we joined forces with the Snowflake team to show you how it works!
In this webinar:
- Learn how to optimize Snowflake
- Hear insider tips and tricks on how to improve performance
- Get expert insights from Craig Collier, Technical Architect from Snowflake, and Kalyan Arangam, Solution Architect from Matillion
- Find out how leading brands like Converse, Duo Security, and Pets at Home use Snowflake and Matillion ETL to make data-driven decisions
- Discover how Matillion ETL and Snowflake work together to modernize your data world
- Learn how to utilize the impressive scalability of Snowflake and Matillion
Agile Data Engineering - Intro to Data Vault Modeling (2016) – Kent Graziano
(Updated deck) As we move more and more towards the need for everyone to do Agile Data Warehousing, we need a data modeling method that can be agile with us. Data Vault Data Modeling is an agile data modeling technique for designing highly flexible, scalable, and adaptable data structures for enterprise data warehouse repositories. It is a hybrid approach using the best of 3NF and dimensional modeling. It is not a replacement for star schema data marts (and should not be used as such). This approach has been used in projects around the world (Europe, Australia, USA) for over 10 years but is still not widely known or understood. The purpose of this presentation is to provide attendees with an introduction to the components of the Data Vault Data Model, what they are for and how to build them. The examples will give attendees the basics:
• What the basic components of a DV model are
• How to build and design structures incrementally, without constant refactoring (a rough sketch of the core components follows below)
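As a rough illustration of those components (a sketch with hypothetical names, not taken from the deck): a hub stores business keys, a satellite historizes descriptive attributes, and a link relates hubs.

    -- Hub: one row per business key
    CREATE TABLE hub_customer (
        customer_hkey  CHAR(32)    NOT NULL PRIMARY KEY,  -- hash of the business key
        customer_id    VARCHAR(20) NOT NULL,              -- the business key itself
        load_dts       TIMESTAMP   NOT NULL,
        record_src     VARCHAR(50) NOT NULL
    );

    -- Satellite: descriptive attributes, historized by load timestamp
    CREATE TABLE sat_customer_detail (
        customer_hkey  CHAR(32)     NOT NULL REFERENCES hub_customer,
        load_dts       TIMESTAMP    NOT NULL,
        customer_nm    VARCHAR(100),
        PRIMARY KEY (customer_hkey, load_dts)
    );

    -- Link: relates hubs (e.g., customer to order)
    CREATE TABLE lnk_customer_order (
        link_hkey      CHAR(32)  NOT NULL PRIMARY KEY,
        customer_hkey  CHAR(32)  NOT NULL REFERENCES hub_customer,
        order_hkey     CHAR(32)  NOT NULL,
        load_dts       TIMESTAMP NOT NULL
    );

New attributes or relationships become new satellites or links, which is what lets the model grow incrementally without refactoring the hubs.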
This presentation explains the basics of the ETL (Extract-Transform-Load) concept in relation to data solutions such as data warehousing, data migration, and data integration. CloverETL is presented in depth as an example of an enterprise ETL tool. It also covers typical phases of data integration projects.
Phar Data Platform: From the Lakehouse Paradigm to the Reality – Databricks
Despite the increased availability of ready-to-use generic tools, more and more enterprises are deciding to build in-house data platforms. This practice, common for some time in research labs and digital-native companies, is now making waves across large enterprises that traditionally used proprietary solutions and outsourced most of their IT. The availability of large volumes of data, coupled with increasingly complex analytical use cases driven by innovations in data science, has rendered these traditional, on-premise architectures obsolete in favor of cloud architectures powered by open source technologies.
The idea of building an in-house platform at a larger enterprise comes with many challenges of its own: building an architecture that combines the best elements of data lakes and data warehouses to accommodate all kinds of use cases, from BI to ML; the need to interoperate with all the company’s data and technology, including legacy systems; and a cultural transformation, including a commitment to adopt agile processes and data-driven approaches.
This presentation describes a success story on building a Lakehouse in an enterprise such as LIDL, a successful chain of grocery stores operating in 32 countries worldwide. We will dive into the cloud-based architecture for batch and streaming workloads based on many different source systems of the enterprise and how we applied security on architecture and data. We will detail the creation of a curated Data Lake comprising several layers from a raw ingesting layer up to a layer that presents cleansed and enriched data to the business units as a kind of Data Marketplace.
A lot of focus and effort went into building a semantic Data Lake as a sustainable and easy-to-use basis for the Lakehouse, as opposed to just dumping source data into it. The first use case applied to the Lakehouse is the Lidl Plus loyalty program. It is already deployed to production in 26 countries, with data from more than 30 million customers analyzed on a daily basis. In parallel with productionizing the Lakehouse, a cultural and organizational change process was undertaken to get all involved units to buy into the new data-driven approach.
Data Architecture, Solution Architecture, Platform Architecture — What’s the ... – DATAVERSITY
A solid data architecture is critical to the success of any data initiative. But what is meant by “data architecture”? Throughout the industry, there are many different “flavors” of data architecture, each with its own unique value and use cases for describing key aspects of the data landscape. Join this webinar to demystify the various architecture styles and understand how they can add value to your organization.
Making Data Timelier and More Reliable with Lakehouse Technology – Matei Zaharia
Enterprise data architectures usually contain many systems—data lakes, message queues, and data warehouses—that data must pass through before it can be analyzed. Each transfer step between systems adds a delay and a potential source of errors. What if we could remove all these steps? In recent years, cloud storage and new open source systems have enabled a radically new architecture: the lakehouse, an ACID transactional layer over cloud storage that can provide streaming, management features, indexing, and high-performance access similar to a data warehouse. Thousands of organizations including the largest Internet companies are now using lakehouses to replace separate data lake, warehouse and streaming systems and deliver high-quality data faster internally. I’ll discuss the key trends and recent advances in this area based on Delta Lake, the most widely used open source lakehouse platform, which was developed at Databricks.
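To make the idea concrete, here is a small sketch of that transactional layer in Delta Lake's SQL dialect (the table name and storage path are assumptions, not from the talk):

    -- An ACID table stored as open files in cloud object storage
    CREATE TABLE events (
        event_id   BIGINT,
        event_time TIMESTAMP,
        payload    STRING
    ) USING DELTA
    LOCATION 's3://my-bucket/lakehouse/events';  -- hypothetical path

    -- Upserts are transactional, so readers never see partial writes
    MERGE INTO events t
    USING events_updates s
    ON t.event_id = s.event_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *;

    -- Time travel: query the table as of an earlier version
    SELECT * FROM events VERSION AS OF 12;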
Snowflake concepts and hands-on expertise to help you get started implementing data warehouses using Snowflake, along with the information and skills that will help you master Snowflake essentials.
Introduction to Teradata and How Teradata Works – BigClasses Com
Watch how Teradata works, with an introduction to Teradata: how Teradata Visual Explain works, the Teradata database and tools, the Teradata database model, Teradata hardware and software architecture, Teradata database security, and Teradata storage based on the primary index.
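On the storage point, a hedged one-table sketch (hypothetical names): Teradata hashes the primary index value to decide which AMP stores each row, so the choice of PI drives both data distribution and single-AMP access paths.

    -- Rows are hash-distributed across AMPs on cust_id; equality access
    -- on cust_id is a single-AMP operation.
    CREATE TABLE retail.customer (
        cust_id  INTEGER NOT NULL,
        cust_nm  VARCHAR(100)
    ) PRIMARY INDEX (cust_id);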
How to Use Algorithms to Scale Digital Business – Teradata
Gartner defines digital business as the creation of new business designs by blurring the digital and physical worlds. Digital business creates new business opportunities, but the amount of data generated will eclipse the human ability to process it. Further, many complex decisions will need to be made in timeframes, and at scales, that are impossible for human actors. Gartner analyst Chet Geschickter will share advice on how to leverage algorithmic business principles to drive digital business success.
Time is of the essence - The Fourth Dimension in Oracle Database 12c (on Flas... – Lucas Jellema
Time has always been an important dimension for data in any database, raising questions such as: when was the data created, when are records valid, how did records evolve over time, and can we compare with yesteryear or even travel through time and data? The Oracle Database 12c release added a number of features in this area of time and history. The powerful Flashback mechanism is enhanced in many ways, for example to allow history to be constructed from existing journaling tables and to capture the transaction context as well as the data change. Now, for the first time, Flashback (Query & Data Archive) will become a key element in database design and application implementation.
The support for Valid Time Modeling (aka Temporal Database) makes the database aware of the fact that records have a business lifetime with start and expiry date. This awareness results in many new features that will be discussed and demonstrated.
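For instance, Oracle 12c's Temporal Validity feature lets you declare a valid-time period on a table and query against it directly; a minimal sketch with illustrative names:

    -- Declare a valid-time dimension on the table
    CREATE TABLE policy (
        policy_id  NUMBER PRIMARY KEY,
        premium    NUMBER(10,2),
        valid_from DATE,
        valid_to   DATE,
        PERIOD FOR valid_time (valid_from, valid_to)
    );

    -- Rows valid on a given business date
    SELECT policy_id, premium
    FROM   policy AS OF PERIOD FOR valid_time DATE '2015-01-15';

    -- Flashback Query: the data as the database knew it at a past time
    SELECT policy_id, premium
    FROM   policy AS OF TIMESTAMP SYSTIMESTAMP - INTERVAL '1' HOUR;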
Slides from the Singapore Oracle Sessions presentation on July 13th 2015, sponsored by the Oracle ACE Program and organized by Doug Burns.
This presentation, given by Think Big's senior data scientist Eliano Marques at the Digital Natives conference in Berlin, Germany (November 2015), details how to go from experimentation to productionization for a predictive maintenance use case.
White Paper - Data Warehouse Governance – David Walker
An organisation that is embarking on a data warehousing project is undertaking a long-term development and maintenance programme of a computer system. This system will be critical to the organisation and cost a significant amount of money, therefore control of the system is vital. Governance defines the model the organisation will use to ensure optimal use and re-use of the data warehouse and enforcement of corporate policies (e.g. business design, technical design and application security) and ultimately derive value for money.
This paper has identified five sources of change to the system and the aspects of the system that these sources of change will influence in order to assist the organisation to develop standards and structures to support the development and maintenance of the solution. These standards and structures must then evolve, as the programme develops to meet its changing needs.
“Documentation is not understanding, process is not discipline, formality is not skill”1
The best governance must only be an aid to the development and not an end in itself. Data warehouses are successful because of good understanding, discipline, and the skill of those involved. On the other hand, systems built to a template without understanding, discipline, and skill will inevitably fail to meet the users’ needs and, sooner rather than later, will be left on the shelf, or maintained at a very high cost but with little real use.
PURPOSE of the project is Williams Specialty Company (WSC) reque.docx – amrit47
PURPOSE of the project
Williams Specialty Company (WSC) requested a business automation application that will allow WSC employees to automate customer service, inventory, and quality control. This project will deliver an application that keeps track of these processes with a database. The purpose of the application is to accelerate and improve the management of customer orders and processes. The features that will be included are:
· Create, modify, and store customer orders
· Access to the database (only by the manager and sales person)
· Create memos and e-mails within the employees and save them within the database
· Validate customer orders
· Mark order as “complete”
1. Scope
This section defines the scope of the project by describing the system, the major functions of the application, and the database.
1.1. System Description
1.2. Major Software Functions
The business application being created will allow order capture by the Williams staff, assign each order to a specific employee, and store this information in an Oracle or Microsoft SQL database (to be discussed). It will allow managers and sales people access to the database, provide communication between the employee assigned to the order, the sales team, and management via internal electronic communication (email and memo), and send a customer-satisfaction follow-up communication on completion of the order (via email). It will then mark inventory as complete, helping maintain inventory control and improve order completion.
Diagram To Follow…
1.3. Database Description
The project will utilize an Oracle or Microsoft SQL database engine.
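A minimal sketch of what such a schema might look like (purely illustrative table and column names; the document does not specify a design):

    -- Customer orders captured by WSC staff
    CREATE TABLE customer_order (
        order_id     INT          NOT NULL PRIMARY KEY,
        customer_nm  VARCHAR(100) NOT NULL,
        assigned_to  VARCHAR(50),                         -- employee handling the order
        status       VARCHAR(10)  NOT NULL DEFAULT 'OPEN', -- 'OPEN', 'VALIDATED', 'COMPLETE'
        created_at   DATETIME     NOT NULL
    );

    -- Internal memos and e-mails saved against an order
    CREATE TABLE order_memo (
        memo_id   INT           NOT NULL PRIMARY KEY,
        order_id  INT           NOT NULL REFERENCES customer_order (order_id),
        author    VARCHAR(50)   NOT NULL,
        body      VARCHAR(4000),
        sent_at   DATETIME      NOT NULL
    );

    -- Marking an order as "complete"
    UPDATE customer_order SET status = 'COMPLETE' WHERE order_id = 1001;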
1.4. Design Constraints and Limitations
The first design constraint within this project will be conforming the application to run on the current infrastructure (Windows 7 OS and Windows Server 2012).
The second design constraint will be creating this application in such a way that it is easy to access (a low level of technical knowledge needed) as well as easy to maintain.
Team:
Team Members:
Date:
Project Title:
Team Leader:
Note: All diagrams should be clearly labeled. Remove all text that is shown in RED.
Scope
This section will define the sc ...
Humans are sentient. We perceive. We feel. We listen. The problem is that the more of us you put together, the more we lose these capabilities. We get slower. The idea is: how do we create a company that acts like a single organism, one that identifies opportunities and allows us to work in a faster, exponential world where development happens in months rather than years? Don't let digital transformation become a war of competitive attrition. You may need to invest in your future to change the game.
Teradata Listener™: Radically Simplify Big Data Streaming – Teradata
Teradata Listener™ is an intelligent, self-service solution for ingesting and distributing extremely fast-moving data streams throughout the analytical ecosystem. Listener is designed to be the primary ingestion framework for organizations with multiple data streams. Listener reliably delivers data without loss and provides low-latency ingestion for near real-time applications.
Telematics data provides a wealth of new, actionable insights, particularly when integrated with other enterprise data. But where do you start? How do you prioritize? What is the roadmap? In an interactive workshop learn how to derive more from data so you can do more in your business.
- Find the value of integrating telematics data with traditional data elements, including financial, customer, manufacturing, location and weather data
- Learn how integrated telematics data can improve customer satisfaction, lifecycle management, warranty reserves, supply chain performance, and even engineering & design choices
- Gain practical examples from top manufacturers to improve operational efficiencies, develop new revenue streams, create customer insights, and better understand product performance
The Tools You Need to Build Relationships and Drive Revenue Checklist – Teradata
This Campaign Manager Leadership series paper provides a checklist for marketers when considering blending offline data with online data to improve the customer experience.
Right Message, Right Time: The Secrets to Scaling Email Success – Teradata
This Campaign Manager Leadership Series ebook outlines the 4 keys to an automated email marketing strategy and how marketers can scale to meet these “always-on” customer expectations.
BSI Teradata: The Shocking Case of Home Electronics Planet – Teradata
Home Electronics Planet, a big-box retailer, has digital marketing campaigns that are failing. Their Chief Marketing Officer gets some analytics and data science help from Business Scenario Investigators who recommend changing their search keywords mix, creating tighter customer segments based on product purchase sequencing coupled with real-time web page personalizations, and revising their e-mail marketing to improve business results.
How we did it: BSI: Teradata Case of the Tainted Lasagna – Teradata
Great Brands, a major food producer, faces yet another recall. The government is pointing at Turkey Broccoli Lasagna as the culprit, so the Chief Risk Officer and Chief Supply Chain officer bring in BSI investigators to help them build a better/faster track and trace system, using Big Data analytics.
To see more BSI: Teradata, go to http://www.facebook.com/bsiTeradata
Teradata BSI: Case of the Retail Turnaround – Teradata
This set of PowerPoint slides describes the analytics work of Teradata Business Scenario Investigation employees, who help move Taylor & Swift, a big-box retailer, from a siloed stores-vs.-web approach to an integrated omni-channel retailing approach to customers, marketing, and sales. The team comes up with 5 ideas, 2 of which are tried out. The story illustrates the use of Teradata, Aster, Aprimo, and Tableau as tools to glean faster and deeper analytical insights on big data, specifically web walks.
The BSI team recommends technologies from Teradata Aster and Aprimo, a Teradata company, for better marketing via event-based marketing, Golden Path analytics, and attribution/digital marketing optimization.
5. Temporal Query
Provide a list of members who were reported as covered on Jan. 15, 2000 in the Feb. 1, 2000 NCQA report, with names as accurate as our best data shows today.

With Temporal Support:

    SELECT member.member_id, member.member_nm
    FROM edw.member_x_coverage
         VALIDTIME AS OF DATE '2000-01-15'
         AND TRANSACTIONTIME AS OF DATE '2000-02-01',
         edw.member
    WHERE member_x_coverage.member_id = member.member_id;

Without Temporal Support:

    SELECT member.member_id, member.member_nm
    FROM edw.member_x_coverage coverage, edw.member
    WHERE coverage.member_id = member.member_id
      AND coverage.observation_start_dt <= '2000-02-01'
      AND (coverage.observation_end_dt > '2000-02-01' OR coverage.observation_end_dt IS NULL)
      AND coverage.effective_dt <= '2000-01-15'
      AND (coverage.termination_dt > '2000-01-15' OR coverage.termination_dt IS NULL);
12. Projection of Impact Zone & Storm Path to Google Earth: Where do I deploy my CAT (catastrophe) management team?
V1.2 – Added User Defined Time. Special meaning is associated with each of these times (valid time, transaction time, and user-defined time).
Effective Dates versus Observation Dates (VT vs TT): The data warehouse keeps track of coverage for members in a group health care plan. Effective and expiration dates indicate the period over which a member has specific coverage under a medical plan (valid time). Sometimes it takes a while for the paperwork to get through its hoops in and out of the operational systems, resulting in observation dates (transaction time) significantly later than effective dates.
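A hedged sketch of that distinction in Teradata-style temporal SQL (table and column names are illustrative): the valid-time period carries the business effective and expiration dates, while transaction time records when the warehouse learned of the row, so slow paperwork shows up as a transaction-time start well after the valid-time start.

    -- Coverage is effective from Jan 1, 2000 (valid time), but the row was
    -- not loaded until Feb 10, 2000, so its system-maintained transaction
    -- time begins on Feb 10.
    INSERT INTO edw.member_x_coverage (member_id, plan_cd, coverage_dur)
    VALUES (1001, 'PPO', PERIOD(DATE '2000-01-01', DATE '2001-01-01'));

    -- Asked as of Jan 15, 2000, the warehouse reports no coverage: the row
    -- was valid then, but not yet known.
    VALIDTIME AS OF DATE '2000-01-15'
    AND TRANSACTIONTIME AS OF DATE '2000-01-15'
    SELECT member_id FROM edw.member_x_coverage;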