Event-driven architecture (EDA) is a concept whose core idea is to use asynchronous event delivery as the communication mechanism between an organization's information systems. At the same time, EDA steers the overall architecture toward very loose coupling between systems, enabling more scalable and efficient operation.
An event is, in itself, an abstract concept, but clear conventions can be defined for its structure; with such conventions, events become genuinely business-driven and usable by every system that is interested in them.
Event-driven architecture also connects to another important trend: service orientation, or SOA. EDA and SOA complement each other in two ways: a SOA service can act as a source of events, and SOA services or business processes can be triggered by events. EDA brings service-oriented systems looser coupling, better performance, and the ability to process events flexibly in real time.
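The loose coupling described above can be sketched with a minimal in-memory event bus. This is an illustration of the publish/subscribe idea only, not a production messaging system; the event type `order.created` and its payload fields are invented for the example.

```python
from collections import defaultdict

class EventBus:
    """Minimal in-memory event bus: publishers and subscribers share
    only the event type name, never a direct reference to each other."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Deliver the event to every subscriber interested in this type.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
received = []

# A "SOA service" reacting to a business event it did not produce.
bus.subscribe("order.created", lambda e: received.append(e["order_id"]))

bus.publish("order.created", {"order_id": 42, "total": 99.90})
```

New systems can subscribe to `order.created` later without any change to the publisher, which is exactly the loose coupling EDA aims for.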
In the past few years, the term "data lake" has leaked into our lexicon. But what exactly IS a data lake? Some IT managers confuse data lakes with data warehouses. Some people think data lakes replace data warehouses. Both of these conclusions are false. There is room in your data architecture for both data lakes and data warehouses. They have different use cases, and those use cases can be complementary.
Todd Reichmuth, Solutions Engineer with Snowflake Computing, has spent the past 18 years in the world of data warehousing and big data, first at Netezza and later at IBM. Earlier in 2018 he made the jump to the cloud at Snowflake Computing.
Mike Myer, Sales Director with Snowflake Computing, has spent the past 6 years in security and looks to drive awareness of the better data warehousing and big data solutions available. He was previously at local tech companies FireMon and Lockpath, and joined Snowflake because of its disruptive technology that is truly helping people in the big data world on a day-to-day basis.
The document discusses Delta Live Tables (DLT), a tool from Databricks that allows users to build reliable data pipelines in a declarative way. DLT automates complex ETL tasks, ensures data quality, and provides end-to-end visibility into data pipelines. It unifies batch and streaming data processing with a single SQL API. Customers report that DLT helps them save significant time and effort in managing data at scale, accelerates data pipeline development, and reduces infrastructure costs.
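The declarative idea behind DLT can be sketched in plain Python: each function declares a table plus the tables it depends on, and an engine works out the execution order and applies quality rules. This is a toy illustration of the pattern, not the real DLT API (which uses `@dlt.table` decorators and runs on Databricks); the table names and the quality rule are invented.

```python
# Toy "declarative pipeline": each function declares a table and the
# tables it reads from; the engine derives the execution order.
tables = {}

def table(depends_on=()):
    def register(fn):
        tables[fn.__name__] = (depends_on, fn)
        return fn
    return register

@table()
def raw_orders():
    return [{"id": 1, "amount": 10.0}, {"id": 2, "amount": -5.0}]

@table(depends_on=("raw_orders",))
def clean_orders(raw_orders):
    # A declared "expectation": drop rows that fail the quality rule.
    return [r for r in raw_orders if r["amount"] > 0]

def run_pipeline():
    results = {}
    def materialize(name):
        if name not in results:
            deps, fn = tables[name]
            results[name] = fn(*(materialize(d) for d in deps))
        return results[name]
    for name in tables:
        materialize(name)
    return results
```

The point of the declarative style is that the author states *what* each table is; ordering, dependency tracking, and re-runs are the engine's job.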
Data driven organizations can be challenged to deliver new and growing business intelligence requirements from existing data warehouse platforms, constrained by lack of scalability and performance. The solution for customers is a data warehouse that scales for real-time demands and uses resources in a more optimized and cost-effective manner. Join Snowflake, AWS and Ask.com to learn how Ask.com enhanced BI service levels and decreased expenses while meeting demand to collect, store and analyze over a terabyte of data per day. Snowflake Computing delivers a fast and flexible elastic data warehouse solution that reduces complexity and overhead, built on top of the elasticity, flexibility, and resiliency of AWS.
Join us to learn:
• Learn how Ask.com eliminates data redundancy, and simplifies and accelerates data load, unload, and administration
• Learn how to support new and fluid data consumption patterns with consistently high performance
• Best practices for scaling high data volume on Amazon EC2 and Amazon S3
Who should attend: CIOs, CTOs, CDOs, Directors of IT, IT Administrators, IT Architects, Data Warehouse Developers, Database Administrators, Business Analysts and Data Architects
Microsoft Data Integration Pipelines: Azure Data Factory and SSIS - Mark Kromer
The document discusses tools for building ETL pipelines to consume hybrid data sources and load data into analytics systems at scale. It describes how Azure Data Factory and SQL Server Integration Services can be used to automate pipelines that extract, transform, and load data from both on-premises and cloud data stores into data warehouses and data lakes for analytics. Specific patterns shown include analyzing blog comments, sentiment analysis with machine learning, and loading a modern data warehouse.
The document provides an overview of data mesh principles and hands-on examples for implementing a data mesh. It discusses key concepts of a data mesh including data ownership by domain, treating data as a product, making data available everywhere through self-service, and federated governance of data wherever it resides. Hands-on examples are provided for creating a data mesh topology with Apache Kafka as the underlying infrastructure, developing data products within domains, and exploring consumption of real-time and historical data from the mesh.
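Domain ownership in a Kafka-backed mesh often shows up concretely as a topic-naming convention, so consumers can discover a domain's data products without coupling to the producing team. The `<domain>.<visibility>.<product>` scheme below is an assumption for illustration, not something mandated by Kafka or by data mesh itself.

```python
def topic_name(domain: str, product: str, visibility: str = "public") -> str:
    """Derive a mesh-wide Kafka topic name for a domain-owned data
    product. The naming scheme is an illustrative convention."""
    for part in (domain, product):
        if not part.isidentifier():
            raise ValueError(f"invalid name component: {part!r}")
    return f"{domain}.{visibility}.{product}"

# Each domain publishes its products under its own namespace, so
# consumers discover data without knowing producer internals.
catalog = [
    topic_name("checkout", "orders"),
    topic_name("logistics", "shipments"),
]
```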
Deep dive into Microsoft Purview Data Loss Prevention - Drew Madelung
Are you protecting your data at rest and in transit?
In this session we will go through all the different types of DLP in Microsoft Purview including endpoint, Exchange, Teams, SharePoint, OneDrive, and more. We will discuss the configuration options, why it is important, and the best practices to get started while going through a collection of demos.
You will leave this session with a deeper understanding of the technology and how it can impact your employees' experience.
Modern data lakes are now built on cloud storage, helping organizations leverage the scale and economics of object storage while simplifying overall data storage and analysis flow
Best Practices for implementing Database Security: Comprehensive Database Security - Kal BO
Best Practices for implementing Database Security
Comprehensive Database Security
Saikat Saha
Product Director
Database Security, Oracle
October 02, 2017
Introducing Snowflake, an elastic data warehouse delivered as a service in the cloud. It aims to simplify data warehousing by removing the need for customers to manage infrastructure, scaling, and tuning. Snowflake uses a multi-cluster architecture to provide elastic scaling of storage, compute, and concurrency. It can bring together structured and semi-structured data for analysis without requiring data transformation. Customers have seen significant improvements in performance, cost savings, and the ability to add new workloads compared to traditional on-premises data warehousing solutions.
Delivering Data Democratization in the Cloud with Snowflake - Kent Graziano
This is a brief introduction to Snowflake Cloud Data Platform and our revolutionary architecture. It contains a discussion of some of our unique features along with some real world metrics from our global customer base.
This document discusses Azure HDInsight, a managed Apache Hadoop and Spark platform. It provides a secure environment for building data lakes in the cloud. Key capabilities include ingesting and analyzing data from various sources using technologies like Apache Spark, Hive, Kafka and HBase. It also discusses data storage options, performance, security features and tools for management and monitoring of HDInsight clusters.
Embarking on building a modern data warehouse in the cloud can be an overwhelming experience due to the sheer number of products that can be used, especially when the use cases for many products overlap others. In this talk I will cover the use cases of many of the Microsoft products that you can use when building a modern data warehouse, broken down into four areas: ingest, store, prep, and model & serve. It’s a complicated story that I will try to simplify, giving blunt opinions of when to use what products and the pros/cons of each.
In this presentation, Kaz Ohta, Kiyoto Tamura, and Ankush Rustagi from Treasure Data describe the company's Cloud Data Warehouse service.
"The Treasure Data Cloud Data Warehouse service enables companies to get big data analytics running in days not months without specialist IT resources and for a tenth the cost of other alternatives. Traditional data warehousing solutions - even modern alternatives such as Hadoop - are too expensive, complex and take too long for many companies to implement, so the idea of quickly launching a data warehouse service that uses the power and economics of the Cloud for companies of any size, opens up a huge potential market."
Learn more at: http://treasure-data.com * Watch the presentation video: http://inside-bigdata.com/?p=3531
Data Warehouse - Incremental Migration to the Cloud - Michael Rainey
A data warehouse (DW) migration is no small undertaking, especially when moving from on-premises to the cloud. A typical data warehouse has numerous data sources connecting and loading data into the DW, ETL tools and data integration scripts performing transformations, and reporting, advanced analytics, or ad-hoc query tools accessing the data for insights and analysis. That’s a lot to coordinate and the data warehouse cannot be migrated all at once. Using a data replication technology such as Oracle GoldenGate, the data warehouse migration can be performed incrementally by keeping the data in-sync between the original DW and the new, cloud DW. This session will dive into the steps necessary for this incremental migration approach and walk through a customer use case scenario, leaving attendees with an understanding of how to perform a data warehouse migration to the cloud.
Presented at RMOUG Training Days 2019
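The keep-in-sync step of an incremental migration can be sketched as applying a stream of change records to the cloud copy. The record shape below is an illustrative assumption, not the actual format emitted by Oracle GoldenGate; it only shows the insert/update/delete mechanics that keep the two warehouses converging.

```python
def apply_changes(target: dict, changes: list) -> dict:
    """Apply a stream of change records (as a replication tool might
    capture from the source DW) to the cloud-side copy, keyed by
    primary key. Record shape is an illustrative assumption."""
    for change in changes:
        op, key = change["op"], change["key"]
        if op in ("insert", "update"):
            target[key] = change["row"]
        elif op == "delete":
            target.pop(key, None)
    return target

# Source-side activity captured while the bulk copy was running:
cdc_stream = [
    {"op": "insert", "key": 101, "row": {"customer": "Acme", "total": 50}},
    {"op": "update", "key": 100, "row": {"customer": "Initech", "total": 75}},
    {"op": "delete", "key": 99},
]

cloud_dw = {99: {"customer": "Hooli", "total": 10},
            100: {"customer": "Initech", "total": 60}}
apply_changes(cloud_dw, cdc_stream)
```

Because changes keep flowing until cutover, the old and new warehouses stay in sync and each downstream workload can be moved over one at a time.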
This document is a training presentation on Databricks fundamentals and the data lakehouse concept by Dalibor Wijas from November 2022. It introduces Wijas and his experience. It then discusses what Databricks is, why it is needed, what a data lakehouse is, how Databricks enables the data lakehouse concept using Apache Spark and Delta Lake. It also covers how Databricks supports data engineering, data warehousing, and offers tools for data ingestion, transformation, pipelines and more.
.italo operates an Essential Service by connecting more than 100 million people annually across Italy with its super fast and secure railway. And CISO Enrico Maresca has been on a whirlwind journey of his own.
Formerly a Cyber Security Engineer, Enrico started at .italo as an IT Security Manager. One year later, he was promoted to CISO and tasked with building out – and significantly increasing the maturity level – of the SOC. The result was a huge step forward for .italo.
So how did he successfully achieve this ambitious ask? Join Enrico as he reveals the key insights and lessons learned in his SOC journey, including:
Top challenges faced in improving security posture
Key KPIs implemented in order to measure success
Strategies and approaches applied in the SOC
How MITRE ATT&CK and Splunk Enterprise Security were utilised
Next steps in their maturity journey ahead
Lambda Architecture in the Cloud with Azure Databricks with Andrei Varanovich - Databricks
The term “Lambda Architecture” stands for a generic, scalable and fault-tolerant data processing architecture. As hyper-scale clouds now offer various PaaS services for data ingestion, storage and processing, the need for a revised, cloud-native implementation of the lambda architecture is arising.
In this talk we demonstrate the blueprint for such an implementation in Microsoft Azure, with Azure Databricks — a PaaS Spark offering – as a key component. We go back to some core principles of functional programming and link them to the capabilities of Apache Spark for various end-to-end big data analytics scenarios.
We also illustrate the “Lambda architecture in use” and the associated trade-offs using a real customer scenario – the Rijksmuseum in Amsterdam – where a terabyte-scale Azure-based data platform handles data from 2,500,000 visitors per year.
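The core of a lambda serving layer is merging a complete-but-stale batch view with a fresh-but-partial speed view, with the speed layer winning for keys it has seen. A minimal sketch (the exhibit names and counts are invented for illustration):

```python
# Serving layer: merge the nightly batch view (complete up to
# midnight) with the streaming speed view (today so far).
def merged_view(batch_view: dict, speed_view: dict) -> dict:
    # Keys present in both dicts take the speed-layer value,
    # because it reflects the most recent data.
    return {**batch_view, **speed_view}

# Visitor counts per exhibit:
batch_view = {"night_watch": 1200, "milkmaid": 800}
speed_view = {"night_watch": 1275}

view = merged_view(batch_view, speed_view)
```

The trade-off the talk refers to is exactly this duplication: the same logic exists in both layers, which a cloud-native implementation tries to minimise.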
07 - Defend Against Threats with SIEM Plus XDR Workshop - Microsoft Sentinel ... - carlitocabana
This document provides an overview and summary of Microsoft Sentinel, a cloud-native security information and event management (SIEM) tool powered by artificial intelligence. The summary highlights that Microsoft Sentinel allows organizations to harness the scale of the cloud to optimize security operations, detect evolving threats using machine learning, and expedite incident response. It collects security data from any source at cloud scale, provides analytics and hunting capabilities, integrates threat intelligence, and enables automated incident response through orchestration and playbooks.
Short introduction to different options for ETL & ELT in the Cloud with Microsoft Azure. This is a small accompanying set of slides for my presentations and blogs on this topic
Snowflake: Your Data. No Limits (Session sponsored by Snowflake) - AWS Summit... - Amazon Web Services
Snowflake is a cloud-based data warehouse that is built for the cloud. It was founded in 2012 and has raised $1 billion in funding. Snowflake's architecture separates storage, compute, and metadata services, allowing it to offer unlimited scalability, multiple clusters that can access shared data with no downtime, and full transactional consistency across the system. Snowflake has over 2000 customers including large enterprises that use it for analytics, data science, and sharing large volumes of data securely.
Learn to Use Databricks for Data Science - Databricks
Data scientists face numerous challenges throughout the data science workflow that hinder productivity. As organizations continue to become more data-driven, a collaborative environment is more critical than ever — one that provides easier access and visibility into the data, reports and dashboards built against the data, reproducibility, and insights uncovered within the data. Join us to hear how Databricks’ open and collaborative platform simplifies data science by enabling you to run all types of analytics workloads, from data preparation to exploratory analysis and predictive analytics, at scale — all on one unified platform.
An introduction to Office 365 Advanced Threat Protection (ATP) - Robert Crane
The document describes Microsoft's security solutions for email, files, and collaboration. It discusses how email, attachments, links, and files shared in Teams, OneDrive, and SharePoint are scanned for threats. Advanced Threat Protection uses detonation chambers, reputation blocking, and heuristic clustering to identify malicious content.
As cloud computing continues to gather speed, organizations with years’ worth of data stored on legacy on-premise technologies are facing issues with scale, speed, and complexity. Your customers and business partners are likely eager to get data from you, especially if you can make the process easy and secure.
Challenges with performance are not uncommon and ongoing interventions are required just to “keep the lights on”.
Discover how Snowflake empowers you to meet your analytics needs by unlocking the potential of your data.
Agenda of Webinar :
~Understand Snowflake and its Architecture
~Quickly load data into Snowflake
~Leverage the latest in Snowflake’s unlimited performance and scale to make the data ready for analytics
~Deliver secure and governed access to all data – no more silos
[DSC Europe 22] Lakehouse architecture with Delta Lake and Databricks - Draga... - DataScienceConferenc1
Dragan Berić will take a deep dive into Lakehouse architecture, a game-changing concept bridging the best elements of data lake and data warehouse. The presentation will focus on the Delta Lake format as the foundation of the Lakehouse philosophy, and Databricks as the primary platform for its implementation.
This document summarizes a presentation about Amazon Aurora. It discusses how Aurora provides the speed and availability of commercial databases at a lower cost than open source databases. Aurora is a MySQL and PostgreSQL compatible database that is managed as a service, automating administrative tasks. It utilizes a distributed, self-healing storage system to provide high availability and durability across availability zones.
Demystifying Data Warehousing as a Service - DFW - Kent Graziano
This document provides an overview and introduction to Snowflake's cloud data warehousing capabilities. It begins with the speaker's background and credentials. It then discusses common data challenges organizations face today around data silos, inflexibility, and complexity. The document defines what a cloud data warehouse as a service (DWaaS) is and explains how it can help address these challenges. It provides an agenda for the topics to be covered, including features of Snowflake's cloud DWaaS and how it enables use cases like data mart consolidation and integrated data analytics. The document highlights key aspects of Snowflake's architecture and technology.
This document provides an overview of multi-core processors, including their history, architecture, performance advantages and disadvantages, applications, and future aspects. It discusses how multi-core processors contain two or more independent processors on a single integrated circuit to provide performance and productivity benefits beyond single-core processors. The document also compares different multi-core architectures and provides examples of applications that benefit from multi-core processing like video editing, gaming, and artificial intelligence.
This document summarizes key aspects of CPU processor design, including:
1) It examines two implementations of a MIPS processor - a simple single-cycle version and a more realistic pipelined version. The pipelined version breaks instruction execution into five stages to improve performance.
2) It discusses hazards that can occur in a pipeline like data hazards and branch hazards. Techniques like forwarding, stalling, and branch prediction are used to resolve hazards.
3) The control logic for the pipelined MIPS processor is explained, including how it detects hazards and forwards data between stages when needed. Stalls are also inserted when necessary to ensure correctness.
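The hazard-detection logic described in points 2 and 3 can be sketched in a few lines. The instruction encoding here is heavily simplified (dicts with an opcode, a destination register, and source registers) and is an assumption for illustration, not a real MIPS decoder.

```python
# Detect a load-use data hazard between adjacent instructions and
# decide between forwarding and stalling, as pipelined MIPS control
# logic would. Instruction encoding is simplified for illustration.
def needs_stall(prev, curr):
    """A stall is required when `prev` is a load whose destination
    register is read by `curr` in the very next cycle: forwarding
    cannot help because the loaded value arrives only after MEM."""
    return prev["op"] == "lw" and prev["rd"] in curr.get("rs", ())

def can_forward(prev, curr):
    """An ALU result can be forwarded from the EX/MEM latch straight
    into the next instruction's EX stage, avoiding a stall."""
    return prev["op"] != "lw" and prev["rd"] in curr.get("rs", ())

lw  = {"op": "lw",  "rd": "$t0", "rs": ("$s0",)}
add = {"op": "add", "rd": "$t1", "rs": ("$t0", "$t2")}
sub = {"op": "sub", "rd": "$t3", "rs": ("$t1", "$t4")}
```

So `lw` followed by `add` (which reads `$t0`) forces one stall cycle, while `add` followed by `sub` (which reads `$t1`) is resolved purely by forwarding.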
This document summarizes key aspects of CPU processor design, including:
1) It examines two implementations of a MIPS processor - a simple single-cycle version and a more realistic pipelined version. The pipelined version breaks instruction execution into five stages to improve performance.
2) It discusses hazards that can occur in a pipeline like data hazards and branch hazards. Techniques like forwarding, stalling, and branch prediction are used to resolve hazards.
3) The control logic for the pipelined MIPS processor is explained, including how it detects hazards and forwards data between stages when needed. Stalls are also inserted when necessary to ensure correctness.
This document outlines a presentation on pipelining and data hazards in microprocessors. It begins with rules for participant questions and outlines the topics to be covered: what is pipelining, types of pipelining, data hazards and their types, and solutions to data hazards. It then defines pipelining as executing subsequent instructions before prior ones complete. Types of pipelining include control, data, and structure hazards. Data hazards occur if an instruction uses a value before it is ready, and their types are RAW, WAR, and WAW. Solutions involve forwarding newer register values to bypass stale values in the pipeline and prevent hazards.
The document provides an overview of instruction sets, including:
1) Instruction formats contain operation codes, source/result operand references, and next instruction references. Operands can be located in memory, registers, or immediately within the instruction.
2) Types of operations include data transfer, arithmetic, logical, conversion, I/O, system control, and transfer of control.
3) Addressing modes specify how the target address is identified in the instruction, such as immediate, direct, indirect, register, register indirect, displacement, and stack addressing.
Cache memory is a small, fast memory located close to the CPU that stores frequently accessed instructions and data. It aims to bridge the gap between the fast CPU and slower main memory. Cache memory is organized into blocks that each contain a tag field identifying the memory address, a data field containing the cached data, and status bits. There are different mapping techniques like direct mapping, associative mapping, and set associative mapping to determine how blocks are stored in cache. When cache is full, replacement algorithms like LRU, FIFO, LFU, and random are used to determine which existing block to replace with the new block.
Capgemini Digital Transformation - Beyond the Hypedefault default
This document discusses digital transformation and digital mastery. It defines digital mastery as having strong capabilities in both technology-enabled customer and internal experiences as well as leadership capabilities like vision and governance. Companies are classified into four levels of digital mastery: digital masters, conservatives, fashionistas, and beginners. Digital masters achieve higher financial performance and market valuation through strategically focusing investments on key digital capabilities and leadership practices like having a chief digital officer. The document provides recommendations for companies to assess their digital maturity, develop a vision, build a digital roadmap, and engage their organization to drive successful digital transformation.
There are situations, called hazards, that prevent the next instruction in the instruction stream from executing during its designated cycle
There are three classes of hazards
Structural hazard
Data hazard
Branch hazard
Event-driven Architecture (EDA)
1. Event-driven architecture, i.e. EDA, i.e. "event-based architecture". Eetu Blomqvist – eetu.blomqvist@gofore.com. 20.6.2011 Eetu Blomqvist
2. Event. An interesting occurrence inside or outside the organization. It should be business-driven so that every interested party can interpret it. It consists of two parts: a header and a body. The header contains information that identifies the specific event, such as id, type, name, timestamp, creator identifier, and so on. The body describes what actually happened. The structure of the body content should ideally be defined with, for example, a formal ontology. The content must be informative enough that an interested party does not need to fetch additional information from the event source.
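The header/body split described above can be sketched in code. This is a minimal illustration, not a format the slides define; the field names (event_id, event_type, creator, etc.) and the helper function are assumptions chosen to match the identifying information the slide lists.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import uuid

@dataclass(frozen=True)
class EventHeader:
    event_id: str    # unique identifier for this event instance
    event_type: str  # e.g. "inventory.low_threshold"
    name: str        # human-readable event name
    timestamp: str   # ISO 8601 creation time
    creator: str     # identifier of the producing system

@dataclass(frozen=True)
class Event:
    header: EventHeader
    # The body describes what actually happened; it should be
    # self-contained so consumers need not call back to the source.
    body: dict

def make_event(event_type: str, name: str, creator: str, body: dict) -> Event:
    """Build an event with a generated id and current UTC timestamp."""
    header = EventHeader(
        event_id=str(uuid.uuid4()),
        event_type=event_type,
        name=name,
        timestamp=datetime.now(timezone.utc).isoformat(),
        creator=creator,
    )
    return Event(header=header, body=body)

e = make_event("inventory.low_threshold", "Low Inventory Threshold",
               "inventory-service", {"sku": "BOOK-123", "remaining": 3})
print(e.header.event_type)  # inventory.low_threshold
```

Freezing the dataclasses reflects the idea that an event is a record of something that already happened and should not be mutated in transit.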
3. The event-driven approach. Delivering interesting events to the parties interested in them. Interpreting received events and acting on them. Actions can include service calls, starting process services, sending email, or creating new events. Extremely loose coupling of highly distributed components: the originator of an event has no knowledge of who is interested in the event or how it is processed. Traceability of events in a complex processing chain is challenging. Best suited to asynchronous systems.
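The loose coupling described above can be illustrated with a minimal in-process publish/subscribe sketch (names like EventBus and the dict payloads are illustrative assumptions): the publisher only names an event type and has no knowledge of who, if anyone, consumes the event.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal pub/sub channel: sources and consumers never meet."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict):
        # The source just reports that something happened; delivering
        # the event to interested parties is entirely the bus's concern.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("order.placed", lambda e: received.append(e))
bus.publish("order.placed", {"order_id": 42, "total": 1600})
print(received)  # [{'order_id': 42, 'total': 1600}]
```

A real deployment would use an asynchronous message broker rather than synchronous in-process calls, which is also why the slide notes that the approach fits asynchronous systems best.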
4. EDA and SOA. They complement each other rather than compete. Event-driven SOA: an event triggers calls to SOA services; the services can also be entire business processes. Service as Event Generator: a service produces an event that may indicate, for example, a problem, a defined threshold, or a successful operation; SOA services thus act as event sources. EDA enables real-time data delivery and analysis as well as complex event processing, which SOA does not support as well or as easily.
5. Event processing. Processing styles can be roughly divided into three. Simple processing: delivering interesting events to the parties interested in them; fast and cost-effective. Event stream processing: handles both notable and ordinary events; an ordinary event (e.g. an RFID signal) can be transformed into an interesting one (e.g. "an expensive product left the warehouse"). Complex processing: in addition to delivery, events are interpreted according to a rule set; as a result, new events may be created, among other things; correlation between events is analyzed; due to the complexity, commercial software is practically always needed.
6. The parts of an EDA system. Event generators: create the events; they can filter and transform "ordinary" events into interesting ones. Event bus: delivers events, converted into a standard format, to the processing components; the standard format is a flexible concept and can be, for example, an organization-internal standard. Event processing: handling events and publishing them to interested parties (i.e. triggering the actions based on them); the processing rules can be complex; service calls, emails, new events, event persistence, etc. Event-based action: the parties interested in an event act on it (e.g. a service's internal implementation); the actual reaction to the event.
7. Example of simple event processing. Source: Brenda M. Michelson, Elemental Links, http://dl.dropbox.com/u/20315902/EventDrivenArchitectureOverview_ElementalLinks_Feb2011.pdf
8. Example of simple event processing (2). A customer orders a book, and the seller reserves it from the warehouse. The inventory service makes the reservation and then checks whether enough copies of the book remain in stock. If the number of available copies falls below a defined threshold, the inventory service creates a new "Low Inventory Threshold" event (Event Q). The event is moved to the delivery channel, from which the processor picks it up. The processor has two processing rules for the event: it starts a reorder process (which can be manual, automated, or something in between), and the event is published onward to interested parties. The interested parties are the warehouse buyer and the warehouse manager's administration application.
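The bookstore scenario above can be sketched as code. This is a simplified illustration under assumed names (reserve_book, process, the threshold value); the slides do not specify an implementation.

```python
# Threshold below which the inventory service emits Event Q.
LOW_STOCK_THRESHOLD = 5

def reserve_book(stock: dict, sku: str, emitted: list):
    """Reserve one copy; emit a Low Inventory Threshold event if stock is low."""
    stock[sku] -= 1
    if stock[sku] < LOW_STOCK_THRESHOLD:
        emitted.append({"type": "Low Inventory Threshold",
                        "sku": sku, "remaining": stock[sku]})

def process(event: dict, actions: list):
    """Processor applying the two rules the slide describes."""
    if event["type"] == "Low Inventory Threshold":
        # Rule 1: start the reorder process.
        actions.append(("start_reorder_process", event["sku"]))
        # Rule 2: publish onward to the buyer and the manager's app.
        actions.append(("publish_to_subscribers", event))

stock = {"BOOK-123": 5}
emitted, actions = [], []
reserve_book(stock, "BOOK-123", emitted)  # stock drops to 4 -> Event Q
for ev in emitted:
    process(ev, actions)
print(actions[0][0])  # start_reorder_process
```

Note how the inventory service only emits the event; it neither knows about the reorder process nor about the two subscribers, which is the loose coupling EDA aims for.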
9. Example of event stream processing. Source: Brenda M. Michelson, Elemental Links, http://dl.dropbox.com/u/20315902/EventDrivenArchitectureOverview_ElementalLinks_Feb2011.pdf
10. Example of event stream processing (2). Contains three event flows. Flow 1 (starts from the top left): an RFID sensor creates an event (Event A) every time a product leaves the warehouse. Electronics sales wants to know when high-end products leave the warehouse. To spot those products, RFID events for products under $4,000 are filtered out. What remains is a $5,000 plasma TV, from which a transformation produces an event in the organization's standard format, which is moved to the event bus. The processor acts according to its processing rules and publishes the event onward; the interested party is the warehouse manager's monitoring application. Flows 2 and 3 (start from the bottom left): an order entry application creates an "ordinary" event (Event B) every time an order is entered. Ordinary order events are delivered to the data warehouse via the processor and its publishing function. In addition, for orders over $1,500, the ordering customer's rating should be raised to the next tier.
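The filter-and-transform step of flow 1 can be sketched as follows. The event shapes and field names are assumptions for illustration; the point is that ordinary RFID readings are filtered (items under $4,000 dropped) and the survivors are transformed into a standard-format event.

```python
# Items at or above this price are considered "high-end".
HIGH_END_THRESHOLD = 4000

def to_standard_event(rfid_event: dict) -> dict:
    """Transform a raw RFID reading into a standard-format event."""
    return {"type": "warehouse.high_end_item_shipped",
            "item": rfid_event["item"],
            "price": rfid_event["price"]}

def process_stream(rfid_events):
    """Filter out sub-threshold items, transform the rest."""
    return [to_standard_event(e) for e in rfid_events
            if e["price"] >= HIGH_END_THRESHOLD]

readings = [{"item": "plasma TV", "price": 5000},
            {"item": "toaster", "price": 40}]
print(process_stream(readings))
# [{'type': 'warehouse.high_end_item_shipped', 'item': 'plasma TV', 'price': 5000}]
```

This is exactly the "ordinary event becomes interesting event" pattern the slides describe: the $5,000 plasma TV passes the filter, the $40 toaster does not.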
11. Example of event stream processing (3). A separate router is used for this purpose, because event routing and distribution must not be the responsibility of the event source; the source's only obligation is to report that something happened. The router evaluates the total amount of each order and creates a new "notable" event (Event C) when $1,500 is exceeded. This event, too, is moved to the event bus. According to its processing rules, the processor delivers the event to a SOA service that contains the logic for raising the customer's rating. An example of event-driven SOA!
12. Example of complex processing. Source: Brenda M. Michelson, Elemental Links, http://dl.dropbox.com/u/20315902/EventDrivenArchitectureOverview_ElementalLinks_Feb2011.pdf
13. Example of complex processing (2). Contains three event flows. Flow 1 (starts from the top left): a B2B integration bus is supposed to deliver an event (Event W) to the event bus every 15 minutes, indicating to IT support, among others, that the bus is working. The absence of the event indicates a failure. The processor knows the timestamp of the most recent event; if more than 15 minutes have passed since that timestamp, the process acts according to the rule set: the administrator is alerted via a pager, a new event (Event E) is created for the failure and moved to the event bus, and the new event is published so that the organization's fault management system, which is interested in it, learns about the problem. Flows 2 and 3 (start from the middle left): these illustrate fraud prevention mechanisms. The point-of-sale application creates an event for every purchase made at a point of sale (Event Y). The router evaluates purchases over $1,500, for which new events are created (Event Z). All events are moved to the event bus.
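The heartbeat logic of flow 1 can be sketched as a timeout check. This is a simplified illustration using plain epoch seconds; the function and event names are assumptions, and a real processor would also handle alerting the administrator via its own channel.

```python
# The B2B bus is expected to emit Event W at least this often.
HEARTBEAT_INTERVAL_S = 15 * 60

def check_heartbeat(last_seen: float, now: float, emitted: list) -> bool:
    """Return True if the bus is considered healthy.

    If more than 15 minutes have passed since the last heartbeat,
    emit a failure event (Event E in the slides) onto the bus.
    """
    if now - last_seen > HEARTBEAT_INTERVAL_S:
        emitted.append({"type": "b2b.bus_failure",
                        "last_heartbeat": last_seen})
        return False
    return True

emitted = []
print(check_heartbeat(last_seen=0.0, now=16 * 60, emitted=emitted))  # False
print(check_heartbeat(last_seen=0.0, now=10 * 60, emitted=[]))       # True
```

The interesting property here is that the processor reacts to the *absence* of an event, which requires it to keep state (the last timestamp) across events; simple stateless delivery cannot express this, which is why the slides classify it as complex processing.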
14. Example of complex processing (3). First check: the processor looks for all transactions made with the same credit card within a short time window (10 minutes) that originated in different locations across a wide area (20 miles). If such transactions are found, a call is made to the service that handles customer records under suspicion of fraud. Second check: the processor goes through the customer's purchase history when it receives an event for a purchase over $1,500. If the purchase total is 50% larger than the total of any of the customer's previous purchases, the event is published onward as "suspicious". The customer service team receives the event and starts an inquiry by calling the card holder.
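The two fraud rules above can be sketched as predicates. This is a simplified illustration: the distance between two transactions is passed in as a precomputed value (in a real system it would be derived from store coordinates), and all names and event shapes are assumptions.

```python
TIME_WINDOW_S = 10 * 60   # 10-minute correlation window
DISTANCE_MILES = 20       # "wide area" threshold from the slides

def same_card_far_apart(tx_a: dict, tx_b: dict, miles_apart: float) -> bool:
    """Check 1: same card, within 10 minutes, more than 20 miles apart."""
    return (tx_a["card"] == tx_b["card"]
            and abs(tx_a["time"] - tx_b["time"]) <= TIME_WINDOW_S
            and miles_apart > DISTANCE_MILES)

def suspiciously_large(purchase_total: float, history: list) -> bool:
    """Check 2: total is 50% larger than any previous purchase total."""
    return bool(history) and purchase_total > 1.5 * max(history)

a = {"card": "1234", "time": 0}
b = {"card": "1234", "time": 5 * 60}
print(same_card_far_apart(a, b, miles_apart=35))  # True
print(suspiciously_large(2000, [900, 1200]))      # True
print(suspiciously_large(1600, [1200]))           # False
```

Both checks correlate an incoming event with other events or with historical state, which is the defining feature of the complex processing category introduced on slide 5.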
15. Components of an EDA implementation. Event metadata: event definitions and event processing rules. Event processing: simple processors are often in-house software components, while more complex ones are commercial products (vendors include Oracle and IBM). Development and monitoring tools: easy production of metadata; administration and monitoring of event processing, tracking of event flows, statistics, and so on. Integration with the organization's information systems: event pre-processing (filtering, routing, transformations); service calls and starting business processes; publishing and subscribing to events. A service bus provides many of the services mentioned above. And, of course, the event sources and targets: software, services, processes, data stores, people, and so on, internal or external to the organization.