Lessons Learned from Leveraging Real-Time Power Consumption Data with Apache Kudu
ApacheCon North America 2019
https://www.apachecon.com/acna19/s/#/scheduledEvent/1179
Masahiro Ito
OSS Solution Center
Hitachi, Ltd.
Enabling the Active Data Warehouse with Apache Kudu - Grant Henke
Apache Kudu is an open source data storage engine that makes fast analytics on fast and changing data easy. In this presentation, Grant Henke from Cloudera will provide an overview of what Kudu is, how it works, and how it makes building an active data warehouse for real time analytics easy. Drawing on experiences from some of our largest deployments, this talk will also include an overview of common Kudu use cases and patterns. Additionally, some of the newest Kudu features and what is coming next will be covered.
Understand the Query Plan to Optimize Performance with EXPLAIN and EXPLAIN ANALYZE - EDB
What do you do when you have to deal with poor database and query performance in PostgreSQL and there is no one around to help? Let us introduce you to two important commands in PostgreSQL: EXPLAIN and EXPLAIN ANALYZE. Knowing how to use these tools will help you identify query performance bottlenecks and opportunities. Each produces a query plan detailing the approach the planner took to execute the statement provided.
Attend this webinar to learn:
- What are EXPLAIN and EXPLAIN ANALYZE in PostgreSQL?
- How do they help?
- Know planner tuning parameters
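Running PostgreSQL's EXPLAIN requires a live server, so as an analogous, self-contained illustration the sketch below uses SQLite's EXPLAIN QUERY PLAN (from Python's stdlib sqlite3), which plays the same role: it reports whether the planner chose a full table scan or an index lookup. Table and index names are invented for the example.

```python
import sqlite3

# In-memory database with a table large enough for the planner to care about.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, float(i)) for i in range(1000)])

def plan(sql):
    """Return the planner's textual description of how `sql` would run."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " | ".join(r[-1] for r in rows)   # last column holds the detail text

query = "SELECT * FROM orders WHERE customer_id = 42"
before = plan(query)   # no index yet: the planner must scan the table
conn.execute("CREATE INDEX idx_customer ON orders (customer_id)")
after = plan(query)    # now the planner can use the index

print(before)  # e.g. "SCAN orders"
print(after)   # e.g. "SEARCH orders USING INDEX idx_customer (customer_id=?)"
```

The same before/after workflow applies in PostgreSQL: run EXPLAIN, add or adjust an index, and confirm the plan switched from a sequential scan to an index scan.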
An overview of reference architectures for Postgres - EDB
EDB Reference Architectures are designed to help new and existing users alike to quickly design a deployment architecture that suits their needs. They can be used as either the blueprint for a deployment, or as the basis for a design that enhances and extends the functionality and features offered.
Add-on architectures allow users to easily extend their core database server deployment to add additional features and functionality "building block" style.
In this webinar, we will review the following architectures:
- Single Node
- Multi Node with Asynchronous Replication
- Multi Node with Synchronous Replication
- Add-on Architectures
Speaker:
Michael Willer
Sales Engineer, EDB
Guide to heterogeneous system architecture (HSA) - dibyendu.das
The document provides an overview of heterogeneous system architecture (HSA). HSA enables CPU, GPU, and other processors to work together on a single chip by moving tasks to the best suited processor. It features unified memory access, so all processors can access the same memory address space. This simplifies programming. The HSA Foundation is working to build an ecosystem around HSA through standards and by bringing together industry partners. HSA aims to provide a scalable architecture for programming across devices from smartphones to supercomputers.
Performance tuning your Hadoop/Spark clusters to use cloud storage - DataWorks Summit
Remote storage provides the ability to separate compute and storage, ushering in a new world of infinitely scalable and cost-effective storage. Remote storage in the cloud built to the HDFS standard has unique features that make it a great choice for storing and analyzing petabytes of data at a time. Customers can have unlimited storage capacity without any limit on the number or size of files. At such scale, superior I/O performance becomes an increasingly important consideration when performing analysis on this data. For all workloads, remote storage in the cloud can provide amazing performance when all the different knobs are tuned correctly...
Speaker
Stephen Wu, Senior Program Manager, Microsoft
YARN is a resource manager for Hadoop that allows for more efficient resource utilization and supports non-MapReduce applications. It separates resource management from job scheduling and execution. Key components include the ResourceManager, NodeManagers, and Containers. Ambari can be used to monitor YARN components and applications, configure queues and capacity scheduling, and view metrics and alerts. Future work includes supporting more applications and improving Capacity Scheduler configuration and health checks.
The document discusses the Stinger Initiative from Hortonworks to improve the performance and capabilities of interactive queries in Hive. The initiative takes a two-pronged approach, focusing on improvements to the query engine and the introduction of a new optimized column store file format called ORCFile. A new Tez execution engine is also introduced to avoid bottlenecks in MapReduce and enable lower latency queries. The goal is to extend Hive's ability to handle interactive queries with response times measured in seconds rather than minutes.
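The win from an optimized column store such as ORCFile comes from reading only the columns a query touches. A toy sketch of that idea, with the storage layouts simplified to Python lists (the data and column names are invented):

```python
# Row layout: each record is stored together, so a query that touches one
# column still walks every field of every record.
rows = [{"id": i, "name": "u%d" % i, "clicks": i % 7} for i in range(1000)]

# Column layout: each column is stored contiguously, so a scan of one
# column skips the other columns entirely.
columns = {
    "id":     [r["id"] for r in rows],
    "name":   [r["name"] for r in rows],
    "clicks": [r["clicks"] for r in rows],
}

# For a query like SELECT SUM(clicks), the row store touches 3 fields per
# record; the column store touches only the one column it needs.
row_fields_touched = sum(len(r) for r in rows)     # 3000 fields
col_fields_touched = len(columns["clicks"])        # 1000 fields
total = sum(columns["clicks"])

print(row_fields_touched, col_fields_touched, total)
```

Real formats add run-length and dictionary encoding per column on top of this, which is why the I/O savings in ORCFile are usually far larger than the 3x this sketch shows.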
The document discusses GPU computing and ATI Stream technology. It provides an overview of GPU architecture evolution from 2004 to present, highlighting increasing programmability and computational capabilities. ATI Stream uses OpenCL and DirectCompute for heterogeneous programming across CPUs and GPUs. The document outlines OpenCL programming concepts like memory spaces and execution model, and provides examples of host code to initialize buffers, compile, and run a simple vector addition kernel.
Beginners Guide to High Availability for Postgres - EDB
Highly available databases are essential to organizations depending on mission-critical, 24/7 access to data. PostgreSQL is widely recognized as an excellent open-source database, with critical maturity and features that allow organizations to scale and achieve high availability.
This webinar will explore:
- High availability concepts and workings
- RPO, RTO, and uptime in high availability
- Postgres high availability using:
- Streaming replication
- Logical replication
- Important high availability parameters in Postgres and options to monitor high availability.
- EDB tools (EDB Postgres Failover Manager, BART etc) to create a highly available Postgres architecture
Speaker:
Gaby Schilders
Senior Sales Engineer, EDB
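The streaming replication setup the high-availability webinars above describe comes down to a handful of core settings. A minimal, illustrative postgresql.conf sketch (the host and role names are hypothetical; values should be sized to the actual deployment):

```
# --- primary: postgresql.conf ---
wal_level = replica              # emit enough WAL for a standby to replay
max_wal_senders = 5              # concurrent walsender processes allowed
synchronous_standby_names = 'standby1'   # leave empty for asynchronous replication

# --- standby: postgresql.conf (PostgreSQL 12+) ---
hot_standby = on                 # allow read-only queries during recovery
primary_conninfo = 'host=primary.example.com user=replicator'  # hypothetical host/user
```

Setting synchronous_standby_names trades write latency for a zero-data-loss RPO; leaving it empty gives asynchronous replication with lower latency but a small window of potential loss on failover.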
Kappa Architecture is a software architecture pattern built around an immutable, append-only log. All event processing is performed on the input streams and persisted as real-time views. Apache Flink is well suited to be the processing engine because it supports event-time semantics and stateful exactly-once processing while achieving high throughput and low latency. Apache Kudu is a storage system good at both ingesting streaming data and analysing it using ad-hoc queries (e.g. interactive SQL) and full-scan processes (e.g. Spark/Flink), so Kudu is a good fit for storing the real-time views in a Kappa Architecture. We have developed and open-sourced a connector to integrate Apache Kudu and Apache Flink. It allows reading/writing data from/to Kudu using Flink's DataSet and DataStream APIs. The connector has been submitted to the Apache Bahir project and is already available from the Maven Central repository.
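The core Kappa idea can be sketched in a few lines: producers only ever append to the log, and the real-time view is a fold over that log. A minimal Python sketch, with hypothetical power-meter events (in a real deployment the fold runs continuously in Flink and the view lives in a store like Kudu):

```python
from collections import defaultdict

event_log = []   # immutable in spirit: we only ever append, never rewrite

def append(event):
    """Producers append events; history is never mutated."""
    event_log.append(event)

def build_view(log):
    """Derive the real-time view by folding over the whole log.
    Reprocessing (e.g. after a logic change) is just re-running this fold."""
    totals = defaultdict(float)
    for e in log:
        totals[e["meter"]] += e["kwh"]
    return dict(totals)

append({"meter": "m1", "kwh": 1.5})
append({"meter": "m2", "kwh": 0.7})
append({"meter": "m1", "kwh": 2.0})

view = build_view(event_log)
print(view)  # {'m1': 3.5, 'm2': 0.7}
```

Because the log is the source of truth, fixing a bug in the view logic never requires a separate batch layer: replay the log through the corrected fold.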
Public Sector Virtual Town Hall: High Availability for PostgreSQL - EDB
Highly available databases are essential to organizations depending on mission-critical, 24/7 access to data. PostgreSQL is widely recognized as an excellent open-source database, with critical maturity and features that allow organizations to scale and achieve high availability.
This webinar will explore:
High availability concepts and workings
RPO, RTO, and uptime in high availability
Postgres high availability using streaming replication and logical replication
Important high availability parameters in PostgreSQL and options to monitor high availability
EDB tools (EDB Postgres Failover Manager, BART etc) to create a highly available Postgres architecture
Overcoming write availability challenges of PostgreSQL - EDB
There's no shortage of physical replication solutions for PostgreSQL; they scale horizontally and provide high read availability. But where they fall short is write availability, which leads many users to consider PostgreSQL logical replication. Existing solutions have a single point of failure or depend on a forked, vendor-provided PostgreSQL extension, making reliable, enterprise-class logical replication hard to come by. Furthermore, these solutions put limits on scaling PostgreSQL.
By combining Kafka, an open source event streaming system, with PostgreSQL, customers can get a fault-tolerant, scalable logical replication service. Learn how EDB Replicate leverages Kafka for the high write availability needed by today's demanding consumers, who expect their applications to be always available and won't tolerate latency.
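The fault-tolerance mechanism described above rests on applying changes from a durable log and tracking how far the replica has applied, so a restart resumes cleanly instead of double-applying. A toy Python sketch of that pattern, with an in-memory list standing in for a Kafka topic (the data is invented):

```python
# Each record is a (key, value) change captured from the primary, in log order.
change_log = [("a", 1), ("b", 2), ("a", 3)]

def apply_changes(replica, log, start_offset):
    """Apply every change at or after start_offset; return the new offset.
    Persisting the offset alongside the data is what makes replay safe."""
    for offset in range(start_offset, len(log)):
        key, value = log[offset]
        replica[key] = value          # last write in log order wins
    return len(log)

replica, offset = {}, 0
offset = apply_changes(replica, change_log, offset)   # normal run
offset = apply_changes(replica, change_log, offset)   # replay after a "crash": no-op
print(replica, offset)  # {'a': 3, 'b': 2} 3
```

In the real system Kafka provides the durable, replicated log and consumer offsets; the sketch only shows why tracking the applied offset makes the pipeline restartable.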
Automating a PostgreSQL High Availability Architecture with Ansible - EDB
Highly available databases are essential to organizations depending on mission-critical, 24/7 access to data. Postgres is widely recognized as an excellent open-source database, with critical maturity and features that allow organizations to scale and achieve high availability.
EDB reference architectures are designed to help new and existing users alike to quickly design a deployment architecture that suits their needs. Users can use these reference architectures as a blueprint or as the basis for a design that enhances and extends the functionality and features offered.
This webinar will explore:
- Concepts of High Availability
- Quick review of EDB reference architectures
- EDB tools to create a highly available PostgreSQL architecture
- Options for automating the deployment of reference architectures
- EDB Ansible® roles helping in automating the deployment of reference architectures
- Features and capabilities of Ansible roles
- Automating the provisioning of the resources in the cloud using Terraform™
Traditionally, database systems were optimized either for OLAP or for OLTP workloads. Mainstream DBMSes like Postgres and MySQL are mostly used for OLTP, while Greenplum, Vertica, ClickHouse, Spark SQL, and others are oriented toward analytic queries. But right now many companies do not want to maintain two different data stores for OLAP and OLTP and need to perform analytic queries on the most recent data. I want to discuss which features should be added to Postgres to efficiently handle HTAP workloads.
Senior Data Engineer, David Nhim, will share how News Distribution Network, Inc (NDN) went from generating multiple routine reports daily, taking up valuable time and resources, to instant reporting accessible company wide.
NDN, the fourth largest online video property in the US, quickly analyzes 600 million ad impressions and tests new clusters within minutes using Amazon Redshift.
In this session, we will learn how NDN reshaped their data governance strategy, resulting in valuable resources saved and performance optimization across their organization by using Amazon Redshift and Chartio.
Exploiting machine learning to keep Hadoop clusters healthy - DataWorks Summit
Oath has one of the largest Hadoop footprints, with tens of thousands of jobs run every day. Reliability and consistency are key here. With 50k+ nodes, a considerable number of nodes will have disk, memory, network, or slowness issues. Any hosts with problems serving or running jobs can increase the run times of tightly SLA-bound jobs exponentially, frustrating both users and the support team trying to debug them.
We are constantly working to develop a system that works in tandem with Hadoop to quickly identify and single out pressure points. Here we concentrate on disks: in our experience, disks are the most fragile troublemakers, especially high-density disks. Because of the huge scale and the monetary impact of slow-performing disks, we took on the challenge of building a system to predict worn-out disks and take them out before they become a performance bottleneck and hit jobs' SLAs. The task sounds simple: look for symptoms of hard drive failure and take the affected drives out, right? It is not so straightforward when we are talking about 200k+ disk drives. Just collecting such huge data periodically and reliably is a small challenge compared to analyzing such huge datasets and predicting bad disks. For each disk we have the reallocated sectors count, reported uncorrectable errors, command timeouts, and the uncorrectable sector count; on top of that, each hard disk model has its own interpretation of these statistics. DHEERAJ KAPUR, Principal Engineer, Oath, and SWETHA BANAGIRI
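The per-model interpretation problem the abstract raises can be sketched as a weighted score over the SMART-style counters it lists. A minimal Python sketch; the disk names, weights, and threshold are invented for illustration (a real system would learn per-model weights from failure history):

```python
# Hypothetical SMART-style counters per disk; attribute names follow the abstract.
disks = {
    "disk-001": {"reallocated_sectors": 0,   "uncorrectable_errors": 0,
                 "command_timeouts": 1,      "uncorrectable_sectors": 0},
    "disk-002": {"reallocated_sectors": 120, "uncorrectable_errors": 4,
                 "command_timeouts": 30,     "uncorrectable_sectors": 16},
}

# The same raw counter means different things on different drive models,
# so in practice each model would get its own weights and threshold.
WEIGHTS = {"reallocated_sectors": 1.0, "uncorrectable_errors": 5.0,
           "command_timeouts": 0.5,    "uncorrectable_sectors": 2.0}
THRESHOLD = 50.0

def risk_score(stats):
    """Combine the counters into a single drain-priority score."""
    return sum(WEIGHTS[name] * stats[name] for name in WEIGHTS)

suspect = [d for d, s in disks.items() if risk_score(s) > THRESHOLD]
print(suspect)  # ['disk-002']
```

Disks crossing the threshold would be drained proactively rather than waiting for them to slow down SLA-bound jobs.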
Beginner's Guide to High Availability for Postgres (French) - EDB
Highly available databases are essential to organizations depending on mission-critical, 24/7 access to data. PostgreSQL is widely recognized as an excellent open-source database, with critical maturity and features that allow organizations to scale and achieve high availability.
This webinar will explore:
- High availability concepts and workings
- RPO, RTO, and uptime in high availability
- Postgres high availability using:
- Streaming replication
- Logical replication
- Important high availability parameters in Postgres and options to monitor high availability
- EDB tools (EDB Postgres Failover Manager, BART etc) to create a highly available Postgres architecture
Developing Applications with Hadoop 2.0 and YARN by Abhijit Lele - Hakka Labs
Hadoop 2.0 is approaching. A defining characteristic of Hadoop 2.0 is its next-generation resource management framework, YARN. YARN enables Hadoop to grow beyond its MapReduce origins to embrace multiple workloads spanning interactive queries, batch processing, streaming, and more.
The document summarizes new features in PostgreSQL 13 and EDB Postgres Advanced Server 13. Some key highlights include improvements to vacuum that enable faster parallel vacuum of indexes and vacuum for append-only tables, enhanced security and consistency features like libpq channel binding, and new partitioning capabilities like interval partitioning and automatic hash partitioning. EDB Postgres Advanced Server 13 adds features like the ability to specify indexes during table creation and enhancements to data loading and Oracle compatibility.
"Performance Evaluation, Scalability Analysis, and Optimization Tuning of A..." - Altair
This document summarizes the results of benchmarking and optimizing Altair HyperWorks RADIOSS simulation software on an HPC cluster. Key findings include:
- EDR InfiniBand interconnect provided the best performance and scalability compared to Ethernet or other InfiniBand technologies.
- Increasing CPU cores per node, simulation time, and enabling hybrid MPI/OpenMP parallelization improved performance.
- Tuning the MPI configuration, such as the MPI_Allreduce algorithm, provided significant performance gains.
- Single precision runs were faster than double precision by 47%. Higher CPU frequencies also increased performance.
Kudu is an open source storage engine that provides low-latency random reads and writes while also supporting efficient analytical queries. It horizontally partitions and replicates data across servers for high availability and performance. Kudu integrates with Hadoop ecosystems tools like Impala, Spark, and MapReduce. The demo will cover Kudu architecture, data storage, and how to implement Kudu in a buffer load using Scala and Impala.
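Kudu's horizontal partitioning mentioned above typically hash-partitions rows across tablets by primary key, so a read for a known key goes straight to one tablet instead of scanning them all. A toy Python sketch of that placement with a stable hash (the tablet count and key format are invented):

```python
import hashlib

NUM_TABLETS = 4   # illustrative; a real table declares its partitioning

def tablet_for(key: str) -> int:
    """Map a primary-key string to a tablet, stably across runs and hosts."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_TABLETS

rows = ["sensor-%03d" % i for i in range(100)]
placement = {}
for key in rows:
    placement.setdefault(tablet_for(key), []).append(key)

# Every row lands on exactly one tablet; hashing spreads them roughly evenly.
sizes = {t: len(keys) for t, keys in placement.items()}
print(sizes)
```

Kudu also supports range partitioning (and combining the two), which trades this even spread for efficient range scans over the partition key.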
Hadoop {Submarine} Project: Running Deep Learning Workloads on YARN - DataWorks Summit
Deep learning is useful for enterprise tasks in the fields of speech recognition, image classification, AI chatbots, and machine translation, to name a few.
To train deep learning/machine learning models, applications such as TensorFlow, MXNet, Caffe, and XGBoost can be leveraged, and sometimes these applications are used together to solve different problems.
To make distributed deep learning/machine learning applications easy to launch, manage, and monitor, the Hadoop community has introduced the Submarine project along with other improvements such as first-class GPU support, container-DNS support, and scheduling improvements. These improvements make distributed deep learning/machine learning applications run on YARN as simply as running them locally, letting machine-learning engineers focus on algorithms instead of worrying about the underlying infrastructure. YARN can also better manage a shared cluster that runs deep learning/machine learning alongside other services and ETL jobs.
In this session, we will take a closer look at the Submarine project as well as these other improvements, and show how to run deep learning workloads on YARN with demos. Audiences can start running these workloads on YARN after this talk.
Speakers:
Sunil Govindan, Staff Engineer
Hortonworks
Zhankun Tank, Staff Engineer
Hortonworks
During this webinar, we will review best practices and lessons learned from working with large and mid-size companies on their deployment of PostgreSQL. We will explore the practices that helped industry leaders move through these stages quickly, and get as much value out of PostgreSQL as possible without incurring undue risk.
We have identified a set of levers that companies can use to accelerate their success with PostgreSQL:
- Application Tiering
- Collaboration between DBAs and Development Teams
- Evangelizing
- Standardization and Automation
- Balance of Migration and New Development
The document summarizes the agenda and presentations for a YARN Meet Up in September 2013. Key topics included Hadoop 2.0 beta testing with YARN, a new Application History Server, improving RM reliability through restartability and high availability, Apache Tez and other YARN applications like Samza and Giraph, using YARN at LinkedIn, and a Go programming language YARN application demo. Individual presentations provided details on YARN APIs, existing application compatibility, the Application History Server design and implementation, RM restartability work, RM high availability architecture, and using Tez as a YARN application.
Seamless Replication and Disaster Recovery for Apache Hive Warehouse - Sankar H
As Apache Hadoop clusters become central to an organization's operations, organizations run clusters in more than one data center. Historically, this has been largely driven by the requirements of business continuity planning or geo-localization. It has also recently been gaining a lot of interest from a hybrid cloud perspective, wherein people augment their traditional on-prem setup with cloud-based additions. A robust replication solution is a fundamental requirement in such cases.
Seamless disaster recovery has several challenges. Data, metadata, and transaction information need to be moved in sync. It should also be easy for the users and applications to reason about the state of the replica. The “hadoop scale” also brings unique challenges as bandwidth between clusters can be a limiting factor. The data transfer has to be minimized for replication, failover, as well as fail back scenarios.
In this talk we will discuss how the above challenges are addressed for supporting seamless replication and disaster recovery for Hive.
Introducing Data Redaction - an enabler to data security in EDB Postgres Advanced Server - EDB
With the rapid growth in digitalization, coupled with the current pandemic situation globally, many organizations and businesses are forced to operate remotely and online, more than they would prefer. At such times, how do corporations and businesses ensure data security, especially the secure management of personal information?
There are many techniques used to secure information, such as authentication, authorization, access control, virtual database, and encryption. In this webinar, we focus on Data Redaction - a technique that limits sensitive data exposure in EDB Postgres Advanced Server (EPAS).
This webinar covers:
- What is EDB Data Redaction
- How to limit sensitive data exposure in EPAS
- Provision for Oracle compatibility in EPAS
- Demo
"Hyperledger Weather Report 2019/02/19"
An introduction to notable trends across the Hyperledger community and the headline features of Hyperledger Fabric 1.4.
大島 訓, Global Center for Social Innovation North America, R&D Division, Hitachi America, Ltd.
Presented at the Hyperledger Tokyo Meetup held on February 19.
The document discusses GPU computing and ATI Stream technology. It provides an overview of GPU architecture evolution from 2004 to present, highlighting increasing programmability and computational capabilities. ATI Stream uses OpenCL and DirectCompute for heterogeneous programming across CPUs and GPUs. The document outlines OpenCL programming concepts like memory spaces and execution model, and provides examples of host code to initialize buffers, compile, and run a simple vector addition kernel.
Beginners Guide to High Availability for PostgresEDB
Highly available databases are essential to organizations depending on mission-critical, 24/7 access to data. PostgreSQL is widely recognized as an excellent open-source database, with critical maturity and features that allow organizations to scale and achieve high availability.
This webinar will explore:
- High availability concepts and workings
- RPO, RTO, and uptime in high availability
- Postgres high availability using:
- Streaming replication
- Logical replication
- Important high availability parameters in Postgres and options to monitor high availability.
- EDB tools (EDB Postgres Failover Manager, BART etc) to create a highly available Postgres architecture
Speaker:
Gaby Schilders
Senior Sales Engineer, EDB
Kappa Architecture is a software architecture pattern that makes use of an immutable, append only log. All the processing of the event will be performed in the input streams and persisted as real-time views. Apache Flink is very well suited to be the processing engine because it provides support for event-time semantics, stateful exactly-once processing, and achieves high throughput and low latency at the same time. Apache Kudu Kudu is a storage system good at both ingesting streaming data and analysing it using ad-hoc queries (e.g. interactive SQL based) and full-scan processes (e.g Spark/Flink). So Kudu is a good fit to store the real-time views in a Kappa Architecture. We have developed and open-sourced a connector to integrate Apache Kudu and Apache Flink. It allows reading/writing data from/to Kudu using the DataSet and DataStream Flink’s APIs. The connector has been submitted to the Apache Bahir project and is already available from maven central repository.
Public Sector Virtual Town Hall: High Availability for PostgreSQLEDB
Highly available databases are essential to organizations depending on mission-critical, 24/7 access to data. PostgreSQL is widely recognized as an excellent open-source database, with critical maturity and features that allow organizations to scale and achieve high availability.
This webinar will explore:
High availability concepts and workings
RPO, RTO, and uptime in high availability
Postgres high availability using streaming replication and logical replication
Important high availability parameters in PostgreSQL and options to monitor high availability
EDB tools (EDB Postgres Failover Manager, BART etc) to create a highly available Postgres architecture
Overcoming write availability challenges of PostgreSQLEDB
There's no shortage of physical replication solutions for PostgreSQL, they scale horizontally and provide high read availability. But where they fall short is write availability, which leads many users to consider PostgreSQL logical replication. Existing solutions have a single point of failure or are dependent on a forked, vendor-provided PostgreSQL extension making reliable, enterprise-class logical replication hard to come by. Furthermore, these solutions put limits on scaling PostgreSQL.
By combining Kafka, an open source event streaming system with PostgreSQL, customers can get a fault tolerant, scalable logical replication service. Learn how EDB Replicate leverages Kafka for high write availability needed for today's demanding consumers who expect their applications to be always available and won't tolerate latency.
Automating a PostgreSQL High Availability Architecture with AnsibleEDB
Highly available databases are essential to organizations depending on mission-critical, 24/7 access to data. Postgres is widely recognized as an excellent open-source database, with critical maturity and features that allow organizations to scale and achieve high availability.
EDB reference architectures are designed to help new and existing users alike to quickly design a deployment architecture that suits their needs. Users can use these reference architectures as a blueprint or as the basis for a design that enhances and extends the functionality and features offered.
This webinar will explore:
- Concepts of High Availability
- Quick review of EDB reference architectures
- EDB tools to create a highly available PostgreSQL architecture
- Options for automating the deployment of reference architectures
- EDB Ansible® roles helping in automating the deployment of reference architectures
- Features and capabilities of Ansible roles
- Automating the provisioning of the resources in the cloud using Terraform™
Traditionally database systems were optimized either for OLAP either for OLTP workloads. Such mainstream DBMSes like Postgres,MySQL,... are mostly used for OLTP, while Greenplum, Vertica, Clickhouse, SparkSQL,... are oriented on analytic queries. But right now many companies do not want to have two different data stores for OLAP/OLTP and need to perform analytic queries on most recent data. I want to discuss which features should be added to Postgres to efficiently handle HTAP workload.
Senior Data Engineer, David Nhim, will share how News Distribution Network, Inc (NDN) went from generating multiple routine reports daily, taking up valuable time and resources, to instant reporting accessible company wide.
NDN, the fourth largest online video property in the US, quickly analyzes 600 million ad impressions and tests new clusters within minutes using Amazon Redshift.
In this session, we will learn how NDN reshaped their data governance strategy, resulting in valuable resources saved and performance optimization across their organization by using Amazon Redshift and Chartio.
Exploiting machine learning to keep Hadoop clusters healthyDataWorks Summit
Oath runs one of the largest Hadoop footprints, with tens of thousands of jobs every day, so reliability and consistency are key. Across 50k+ nodes, a considerable number will have disk, memory, network, or slowness issues at any given time. Hosts that struggle to serve or run jobs can increase the run times of tightly SLA-bound jobs exponentially and frustrate both users and the support team trying to debug them.
We are constantly developing systems that work in tandem with Hadoop to quickly identify and single out pressure points. Here we concentrate on disks: in our experience, disks are the most troublesome and fragile components, especially high-density disks. Because of the huge scale and the monetary impact of slow-performing disks, we took on the challenge of building a system to predict worn-out disks and remove them before they become performance bottlenecks and hit jobs’ SLAs. The task sounds simple: look for symptoms of hard-drive failure and take those drives out, right? It is not so straightforward when we are talking about 200k+ disk drives. Just collecting that much data periodically and reliably is a small challenge compared to analyzing such huge datasets and predicting bad disks. For each disk we have statistics such as reallocated sector count, reported uncorrectable errors, command timeouts, and uncorrectable sector count; on top of that, each hard-disk model has its own interpretation of these statistics. DHEERAJ KAPUR, Principal Engineer, Oath and SWETHA BANAGIRI
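The kind of rule-based screening this abstract describes can be sketched in a few lines. The attribute names, thresholds, and scoring cutoff below are illustrative assumptions, not Oath's production values:

```python
# Rule-based screen for worn-out disks from SMART-style counters.
# Thresholds are hypothetical; real deployments tune them per drive model.

THRESHOLDS = {
    "reallocated_sectors": 50,
    "reported_uncorrectable": 10,
    "command_timeouts": 100,
    "uncorrectable_sectors": 10,
}

def risk_score(smart: dict) -> int:
    """Count how many counters exceed their threshold."""
    return sum(1 for attr, limit in THRESHOLDS.items()
               if smart.get(attr, 0) > limit)

def flag_for_replacement(fleet: dict, min_score: int = 2) -> list:
    """Return disk IDs whose score meets the replacement cutoff."""
    return [disk for disk, smart in fleet.items()
            if risk_score(smart) >= min_score]

fleet = {
    "disk-a": {"reallocated_sectors": 120, "command_timeouts": 300},
    "disk-b": {"reallocated_sectors": 3},
}
print(flag_for_replacement(fleet))  # ['disk-a']
```

At 200k+ drives the hard parts are the data pipeline and per-model calibration the talk focuses on, not the scoring itself.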
Beginner's Guide to High Availability for Postgres - French (EDB)
Highly available databases are essential to organizations depending on mission-critical, 24/7 access to data. PostgreSQL is widely recognized as an excellent open-source database, with critical maturity and features that allow organizations to scale and achieve high availability.
This webinar will explore:
- High availability concepts and workings
- RPO, RTO, and uptime in high availability
- Postgres high availability using:
- Streaming replication
- Logical replication
- Important high availability parameters in Postgres and options to monitor high availability
- EDB tools (EDB Postgres Failover Manager, BART etc) to create a highly available Postgres architecture
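The uptime side of these concepts comes down to simple arithmetic: an availability target ("number of nines") implies a yearly downtime budget. A generic illustration, not an EDB tool:

```python
# Convert an availability target into the maximum downtime it
# allows per year, in minutes.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Yearly downtime budget for a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {allowed_downtime_minutes(pct):.1f} min/year")
```

RTO then answers how much of that budget one incident may consume, while RPO bounds how much data loss an incident may cause.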
Developing Applications with Hadoop 2.0 and YARN by Abhijit Lele (Hakka Labs)
Hadoop 2.0 is approaching. A defining characteristic of Hadoop 2.0 is its next generation resource management framework called YARN. YARN enables Hadoop to grow beyond its MapReduce origins to embrace multiple workloads spanning interactive queries, batch processing, streaming & more.
The document summarizes new features in PostgreSQL 13 and EDB Postgres Advanced Server 13. Some key highlights include improvements to vacuum that enable faster parallel vacuum of indexes and vacuum for append-only tables, enhanced security and consistency features like libpq channel binding, and new partitioning capabilities like interval partitioning and automatic hash partitioning. EDB Postgres Advanced Server 13 adds features like the ability to specify indexes during table creation and enhancements to data loading and Oracle compatibility.
"Performance Evaluation, Scalability Analysis, and Optimization Tuning of A... (Altair)
This document summarizes the results of benchmarking and optimizing Altair HyperWorks RADIOSS simulation software on an HPC cluster. Key findings include:
- EDR InfiniBand interconnect provided the best performance and scalability compared to Ethernet or other InfiniBand technologies.
- Increasing CPU cores per node, simulation time, and enabling hybrid MPI/OpenMP parallelization improved performance.
- Tuning the MPI configuration, such as the MPI_Allreduce algorithm, provided significant performance gains.
- Single precision runs were faster than double precision by 47%. Higher CPU frequencies also increased performance.
Kudu is an open source storage engine that provides low-latency random reads and writes while also supporting efficient analytical queries. It horizontally partitions and replicates data across servers for high availability and performance. Kudu integrates with Hadoop ecosystems tools like Impala, Spark, and MapReduce. The demo will cover Kudu architecture, data storage, and how to implement Kudu in a buffer load using Scala and Impala.
Hadoop {Submarine} Project: Running Deep Learning Workloads on YARN (DataWorks Summit)
Deep learning is useful for enterprise tasks in the fields of speech recognition, image classification, AI chatbots, and machine translation, to name a few.
In order to train deep learning/machine learning models, applications such as TensorFlow / MXNet / Caffe / XGBoost can be leveraged. And sometimes these applications will be used together to solve different problems.
To make distributed deep learning/machine learning applications easy to launch, manage, and monitor, the Hadoop community has introduced the Submarine project along with other improvements such as first-class GPU support, container-DNS support, and scheduling improvements. These improvements make running distributed deep learning/machine learning applications on YARN as simple as running them locally, letting machine-learning engineers focus on algorithms instead of worrying about the underlying infrastructure. With these improvements, YARN can also better manage a shared cluster that runs deep learning/machine learning alongside other services and ETL jobs.
In this session, we will take a closer look at the Submarine project as well as these other improvements, and show with demos how to run deep learning workloads on YARN. Attendees can start running these workloads on YARN after this talk.
Speakers:
Sunil Govindan, Staff Engineer
Hortonworks
Zhankun Tank, Staff Engineer
Hortonworks
During this webinar, we will review best practices and lessons learned from working with large and mid-size companies on their deployment of PostgreSQL. We will explore the practices that helped industry leaders move through these stages quickly, and get as much value out of PostgreSQL as possible without incurring undue risk.
We have identified a set of levers that companies can use to accelerate their success with PostgreSQL:
- Application Tiering
- Collaboration between DBAs and Development Teams
- Evangelizing
- Standardization and Automation
- Balance of Migration and New Development
The document summarizes the agenda and presentations for a YARN Meet Up in September 2013. Key topics included Hadoop 2.0 beta testing with YARN, a new Application History Server, improving RM reliability through restartability and high availability, Apache Tez and other YARN applications like Samza and Giraph, using YARN at LinkedIn, and a Go programming language YARN application demo. Individual presentations provided details on YARN APIs, existing application compatibility, the Application History Server design and implementation, RM restartability work, RM high availability architecture, and using Tez as a YARN application.
Seamless Replication and Disaster Recovery for Apache Hive Warehouse (Sankar H)
As Apache Hadoop clusters become central to an organization’s operations, organizations increasingly run clusters in more than one data center. Historically, this has been driven largely by business continuity planning or geo-localization requirements. It has also recently been gaining interest from a hybrid cloud perspective, where a traditional on-prem setup is augmented with cloud-based additions. A robust replication solution is a fundamental requirement in such cases.
Seamless disaster recovery has several challenges. Data, metadata, and transaction information need to be moved in sync. It should also be easy for the users and applications to reason about the state of the replica. The “hadoop scale” also brings unique challenges as bandwidth between clusters can be a limiting factor. The data transfer has to be minimized for replication, failover, as well as fail back scenarios.
In this talk we will discuss how the above challenges are addressed for supporting seamless replication and disaster recovery for Hive.
Introducing Data Redaction - an enabler to data security in EDB Postgres Adva... (EDB)
With the rapid growth in digitalization, coupled with the current pandemic situation globally, many organizations and businesses are forced to operate remotely and online, more than they would prefer. At such times, how do corporations and businesses ensure data security, especially the secure management of personal information?
There are many techniques used to secure information, such as authentication, authorization, access control, virtual database, and encryption. In this webinar, we focus on Data Redaction - a technique that limits sensitive data exposure in EDB Postgres Advanced Server (EPAS).
This webinar covers:
- What is EDB Data Redaction
- How to limit sensitive data exposure in EPAS
- Provision for Oracle compatibility in EPAS
- Demo
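In EPAS, redaction is configured declaratively with redaction policies, but the underlying idea (mask sensitive values on output depending on who is asking) can be sketched in plain Python. The function names and masking rule here are illustrative only:

```python
# Conceptual sketch of output redaction: privileged sessions see the raw
# value, everyone else sees a masked form. Illustrative only -- EPAS
# implements this declaratively with redaction policies, not in Python.

def redact_id(value: str) -> str:
    """Mask all but the last four characters of an identifier."""
    return "x" * (len(value) - 4) + value[-4:]

def select_id(value: str, role: str) -> str:
    """Apply the redaction policy unless the session role is exempt."""
    return value if role == "privacy_officer" else redact_id(value)

print(select_id("123-45-6789", "app_user"))         # xxxxxxx6789
print(select_id("123-45-6789", "privacy_officer"))  # 123-45-6789
```

The key property, which the webinar demonstrates in EPAS, is that the data at rest is unchanged; only what leaves the server is masked.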
Hyperledger Weather Report (2019/02/19)
An introduction to notable trends across the Hyperledger community and the headline features of Hyperledger Fabric 1.4.
Presented by 大島 訓, Global Center for Social Innovation North America, R&D Division, Hitachi America, Ltd.
Delivered at the Hyperledger Tokyo Meetup on February 19.
The document discusses Apache Hive and Apache Druid for fast SQL on big data. It provides performance benchmarks showing Hive LLAP is faster than Presto and Spark SQL for TPC-DS queries. It describes features of Hive LLAP including in-memory caching, query result caching, and metadata caching. It also discusses new Hive 3 features like materialized views and optimizer improvements. The document then provides an overview of Apache Druid's capabilities for real-time ingestion and querying of streaming data before discussing how Hive and Druid can work together, with Hive able to push down queries to Druid.
Sizing Splunk SmartStore - Spend Less and Get More Out of Splunk (Paula Koziol)
Data is growing exponentially; however IT budgets are not. Growth in internal use cases and additional data sources can put organizations under intense pressure to manage spiraling costs. The good news is that help is on the way. We will show how to size and configure Splunk SmartStore to yield significant cost savings, for both current and future data growth. In addition, learn how to configure the Splunk deployment for optimal search performance.
Originally presented at Splunk .conf19 on October 22, 2019
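The sizing arithmetic such a talk walks through is back-of-envelope at its core. The formula and numbers below are illustrative assumptions, not Splunk's official sizing guidance:

```python
# Back-of-envelope SmartStore cache sizing: keep enough local cache on
# each indexer to serve the hot window of recent data. Numbers are
# illustrative only; real sizing follows Splunk's documented guidance.

def cache_size_gb(daily_ingest_gb: float,
                  compression_ratio: float,
                  hot_window_days: int,
                  indexers: int) -> float:
    """Per-indexer cache needed to hold the hot window locally."""
    stored_per_day = daily_ingest_gb * compression_ratio
    return stored_per_day * hot_window_days / indexers

# 1 TB/day ingest, ~50% stored after compression, 30-day hot window,
# spread across 10 indexers:
print(cache_size_gb(1000, 0.5, 30, 10))  # 1500.0 GB per indexer
```

The cost saving comes from the remote object store holding everything older than the hot window, so local storage no longer scales with total retention.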
Kirin User Story: Migrating Mission Critical Applications to OpenStack Privat... (Motoki Kakinuma)
NTT Data is an IT service company.
Kirin is one of the largest beverages companies in Japan.
In this presentation, we will present the user story of migrating all applications from creaky infrastructure to an OpenStack private cloud, including actual challenges, know-how, and future prospects.
The key concept of this project is:
* Mission Critical: Migrate all Kirin enterprise applications to OpenStack private cloud.
* Think Big, Start Small: Start with a small number of apps, and expand rapidly.
* Agility and elasticity: Adopt a PaaS-like automation approach, targeting 50% less development cost and 40% less operational cost.
To achieve all of the above, we decided to use OpenStack IaaS; ICO, an automation product by IBM; serverspec for testing; and Hinemos for monitoring and management.
Starting in August 2014, the project targets 100 VMs / 100 TB of storage in the first-stage migration by the end of 2015. We plan to migrate 500 VMs / 300 TB by the end of 2016, and 2,000 VMs / 1 PB in the final stage.
This document discusses new features in the InnoDB storage engine in MySQL 8.0, including a single shared data dictionary, serialized dictionary information stored in tablespaces, and atomic DDL operations using a new DDL log table. It also describes improvements to ALTER TABLE operations with a new instant ADD COLUMN algorithm that does not require rebuilding tables.
3 Things to Learn About:
-How Kudu is able to fill the analytic gap between HDFS and Apache HBase
-The trade-offs between real-time transactional access and fast analytic performance
-How Kudu provides an option to achieve fast scans and random access from a single API
Azure + DataStax Enterprise Powers Office 365 Per User Store (DataStax Academy)
We will present our O365 use case scenarios, why we chose Cassandra + Spark, and walk through the architecture we chose for running DataStax Enterprise on azure.
How hyperconvergence improves the economics of your IT (NetApp)
The document describes instructions for connecting audio to an online webinar. It provides three options for connecting audio: calling using a computer, calling a phone number, or having the system call back a provided number. It also includes the webinar title and information about asking questions.
Syncsort, Tableau, & Cloudera present: Break the Barriers to Big Data Insight (Precisely)
The document discusses moving legacy data and workloads from traditional data warehouses to Hadoop. It describes how ELT processes on dormant data waste resources and how offloading this data to Hadoop can optimize costs and performance. The presentation includes a demonstration of using Tableau for self-service analytics on data in Hadoop and a case study of a financial organization reducing ELT development time from weeks to hours by offloading mainframe data to Hadoop.
Syncsort, Tableau, & Cloudera present: Break the Barriers to Big Data Insight (Steven Totman)
Demand for quicker access to multiple integrated sources of data continues to rise. Immediate access to data stored in a variety of systems - such as mainframes, data warehouses, and data marts - to mine visually for business intelligence is the competitive differentiation enterprises need to win in today’s economy.
Stop playing the waiting game and learn about a new end-to-end solution for combining, analyzing, and visualizing data from practically any source in your enterprise environment.
Leading organizations are already taking advantage of this architectural innovation to gain modern insights while reducing costs and propelling their businesses ahead of the competition.
Are you tired of waiting? Don't let your architecture hold you back. Access this webinar and hear from a team of industry experts on how you can Break the Barriers to Big Data Insight.
This document discusses how to make software more green and environmentally friendly. It defines green software as software that is carbon efficient, energy efficient, hardware efficient, and carbon aware. It provides recommendations for various roles within an organization on driving green initiatives, including focusing on efficiency for CxOs, architects, infrastructure engineers, and developers. Examples include optimizing resource usage, using public clouds effectively, prioritizing equipment standardization, and developing applications that can run more efficiently.
NVMe and all-flash systems can solve any performance, floor space, and energy problem. At least that is the marketing message many vendors and analysts spread today - but it sounds too good to be true, right?
Like always in real life, there is no clear black or white, but some circumstances you should be aware of – especially if you intend to leverage these technologies.
You may ask yourself: Do I need to rip and replace my existing storage? What is the best way to integrate both? What benefits do I receive?
Well, just join our brief webinar, which also includes a live demo and audience Q&A so you can get the most out of these technologies, make your storage great again and discover:
• How to integrate Flash over NVMe in real life
• How to benefit from Flash/NVMe across your applications
This document presents a technique called VM-aware adaptive storage cache prefetching that uses information from virtual machines to improve storage caching performance in hybrid storage arrays. It exploits access locality based on file layouts obtained from the guest file system and adaptively tunes prefetching by adjusting the prefetch window size based on application performance statistics. An evaluation using TPCx-V benchmarks showed the technique improved performance by over 32% using file layout information and by 6.7% using adaptive prefetch window tuning, compared to other caching approaches.
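Adaptive prefetch window tuning of this kind can be sketched as a feedback loop. This is a schematic reconstruction under assumed parameters (target hit rate, window bounds, growth/backoff rule), not the paper's actual controller:

```python
# Schematic additive-increase / multiplicative-decrease tuning of a
# prefetch window: grow while prefetching helps, back off when it hurts.
# Target hit rate and window bounds are assumed values.

def tune_window(window: int, hit_rate: float,
                target: float = 0.8,
                min_w: int = 1, max_w: int = 64) -> int:
    """Adjust the prefetch window size from the observed cache hit rate."""
    if hit_rate >= target:
        window += 1                       # prefetching pays off: widen gently
    else:
        window = max(window // 2, min_w)  # wasted reads: back off quickly
    return min(window, max_w)

w = 8
for hit in (0.9, 0.9, 0.3, 0.9):
    w = tune_window(w, hit)
print(w)  # 8 -> 9 -> 10 -> 5 -> 6
```

The asymmetry (slow growth, fast backoff) keeps a mis-sized window from polluting the cache for long, which is the intuition behind tying the window to observed performance statistics.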
When your databases support mission-critical applications, latency and outages can hurt your business. That’s why you need monitoring and management tools to help you keep your enterprise Postgres servers — and the applications they support — consistently available and consistently fast. In this webinar, you’ll learn:
- The various tasks — monitoring, administration, etc. — required to keep a database server working well, and how they differ
- Why it’s hard to monitor databases with general-purpose monitoring tools
- The main tools available for enterprise Postgres needs
- How these solutions differ, and when and why to choose each of them for specific cases
By the end of this session, you will have an understanding of how to avoid downtime and optimize the user experience with database monitoring tools.
Simplifying Real-Time Architectures for IoT with Apache Kudu (Cloudera, Inc.)
3 Things to Learn About:
*Building scalable real time architectures for managing data from IoT
*Processing data in real time with components such as Kudu & Spark
*Customer case studies highlighting real-time IoT use cases
Oracle Database 19c - the last of the 12.2 family, and what's new (MarketingArrowECS_CZ)
The document provides an overview of Oracle Database 19c, highlighting its key features and capabilities. It notes that Oracle Database 19c is Oracle's recommended release for all database upgrades. New features in 19c include fast data ingestion support for IoT workloads, SQL statement quarantine, and enhancements to JSON and high availability functionality.
This document provides an overview and summary of a presentation about authentication and authorization for cloud native applications using Keycloak. The presentation introduces Keycloak as an open source identity and access management solution, discusses the importance of authentication and authorization, and describes how Keycloak can be used for authentication methods like single sign-on, social login, and multi-factor authentication as well as authorization standards like OAuth 2.0 and Financial-Grade API 1.0. It also covers Keycloak features that help secure cloud native environments and applications.
The document discusses the challenge of implementing scalable authorization and describes how to use Keycloak's authorization service to achieve it. Keycloak allows defining fine-grained authorization policies and centralizing authorization data, improving scalability. Combined with OPA and CockroachDB, Keycloak can also enhance performance and availability while maintaining a centralized approach. The document provides an overview of Keycloak's authorization capabilities and how they enable scalable and standards-based authorization.
The document describes a session from the KubeCon EU 2023 conference on Keycloak, an open-source identity and access management solution. It provides an overview of the session which was presented by Alexander Schwartz from Red Hat and Yuuichi Nakamura from Hitachi and demonstrated how Keycloak can be used to securely authenticate users to applications like Grafana. It also discusses Keycloak's support for advanced security specifications like FAPI and efforts by the FAPI-SIG working group to promote features needed for compliance.
This document discusses security considerations for API gateway aggregation. It proposes building an API gateway aggregator in front of existing API gateways to expose APIs outside a company while minimizing security risks and impact on existing services. It describes how the aggregator can implement OAuth 2.0 authorization with a centralized authorization server and token exchange to authorize external applications without complicating authorization for internal services. Advanced use cases discussed include supporting the Financial-grade API security profile for highly sensitive data and implementing zero-trust networking.
This document discusses the differences between assertion-based access tokens and handle-based access tokens in OAuth 2.0. Assertion-based tokens are parsable tokens like JWTs that contain user and client information, while handle-based tokens are opaque references. Assertion-based tokens have advantages for performance and scalability but require cryptographic protection, while handle-based tokens require validation through the authorization server. The document then examines scenarios where handle-based tokens could cause problems, such as with multiple authorization servers, and outlines secure validation steps for assertion-based tokens.
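The distinction is easiest to see with an assertion-based token: its payload is just base64url-encoded JSON that any resource server can parse locally, whereas a handle-based token is an opaque string that must be sent back to the authorization server for introspection. A stdlib-only sketch of parsing a JWT's claims (signature verification, which is what makes the claims trustworthy, is deliberately omitted):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Parse the claims of a JWT without verifying the signature.
    Real validation must also check the signature, exp, iss, and aud."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy unsigned token to demonstrate (never accept alg "none" in
# production; this is only to show the header.payload.signature layout).
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=")
claims = base64.urlsafe_b64encode(b'{"sub":"alice","scope":"read"}').rstrip(b"=")
token = b".".join([header, claims, b""]).decode()

print(decode_jwt_payload(token))  # {'sub': 'alice', 'scope': 'read'}
```

A handle-based token admits no such local parse, which is exactly why it needs a round trip to the authorization server and why multiple-server deployments get complicated.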
Yoshiyuki Tabata from Hitachi presented on API specifications and tools that help engineers construct high-security API systems. He discussed standards like OAuth 2.0, OIDC, PKCE, and OAuth MTLS. Useful features for testing include decoding tokens to check validity, and calling authorization server endpoints to validate access control. Implementing these features in mock servers and clients allows engineers to efficiently test if high-security requirements are met before production.
The document discusses implementing security and availability requirements for a banking API system using open source software. It describes using the 3scale API management platform and Keycloak identity management software together to meet authentication, authorization, access control, availability, and standards compliance requirements. Patches were submitted to these open source projects to enhance their features and better support the banking use case.
This document discusses implementing a lightweight zero-trust network using the open source tools Keycloak and NGINX. It begins by explaining the transition from a traditional network security model with clear boundaries between public and private networks to a zero-trust model where security boundaries are defined individually for each service or pod. It then covers how to implement the underlying technologies of JWT validation, mutual TLS authentication, and OAuth MTLS using Keycloak as an authorization server and NGINX as an API gateway. Additional topics discussed include how to secure east-west internal traffic and resolve potential policy decision point chokepoints.
This document discusses identity provider mix-up attacks in OAuth and describes several patterns of these attacks. It also outlines various mitigations and which mitigations are effective against each attack pattern. Specifically, it covers attack patterns that occur before and after the authorization code is obtained, listing three patterns for the former and two for the latter. Finally, it analyzes how the mitigation of using distinct redirect URIs matches up against each combination of attack patterns.
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an... (Jason Yip)
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
"Choosing proper type of scaling", Olena Syrota (Fwdays)
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
"NATO Hackathon Winner: AI-Powered Drug Search", Taras Kloba (Fwdays)
This is a session that details how PostgreSQL's features and Azure AI Services can be effectively used to significantly enhance the search functionality in any application.
In this session, we'll share insights on how we used PostgreSQL to facilitate precise searches across multiple fields in our mobile application. The techniques include using LIKE and ILIKE operators and integrating a trigram-based search to handle potential misspellings, thereby increasing the search accuracy.
We'll also discuss how the azure_ai extension on PostgreSQL databases in Azure and Azure AI Services were utilized to create vectors from user input, a feature beneficial when users wish to find specific items based on text prompts. While our application's case study involves a drug search, the techniques and principles shared in this session can be adapted to improve search functionality in a wide range of applications. Join us to learn how PostgreSQL and Azure AI can be harnessed to enhance your application's search capability.
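The reason trigram search tolerates misspellings can be shown in a few lines of Python mimicking pg_trgm's approach (this toy uses pg_trgm's padding convention, but exact scores may differ slightly from PostgreSQL's):

```python
# Toy version of pg_trgm-style trigram similarity: strings that share
# most of their three-character windows score high even with typos.

def trigrams(s: str) -> set:
    """Trigram set with pg_trgm-like padding (two leading, one trailing space)."""
    padded = "  " + s.lower() + " "
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of the two trigram sets."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

print(round(similarity("ibuprofen", "ibuprofin"), 2))  # 0.54 -- a typo still matches
print(round(similarity("ibuprofen", "aspirin"), 2))    # 0.0  -- unrelated drug
```

In PostgreSQL itself this is `SELECT similarity(name, 'ibuprofin')` with the pg_trgm extension, backed by a GIN or GiST index so the fuzzy match does not scan the whole table.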
LF Energy Webinar: Carbon Data Specifications: Mechanisms to Improve Data Acc... (DanBrown980551)
This LF Energy webinar took place June 20, 2024. It featured:
-Alex Thornton, LF Energy
-Hallie Cramer, Google
-Daniel Roesler, UtilityAPI
-Henry Richardson, WattTime
In response to the urgency and scale required to effectively address climate change, open source solutions offer significant potential for driving innovation and progress. Currently, there is a growing demand for standardization and interoperability in energy data and modeling. Open source standards and specifications within the energy sector can also alleviate challenges associated with data fragmentation, transparency, and accessibility. At the same time, it is crucial to consider privacy and security concerns throughout the development of open source platforms.
This webinar will delve into the motivations behind establishing LF Energy’s Carbon Data Specification Consortium. It will provide an overview of the draft specifications and the ongoing progress made by the respective working groups.
Three primary specifications will be discussed:
-Discovery and client registration, emphasizing transparent processes and secure and private access
-Customer data, centering around customer tariffs, bills, energy usage, and full consumption disclosure
-Power systems data, focusing on grid data, inclusive of transmission and distribution networks, generation, intergrid power flows, and market settlement data
"$10 thousand per minute of downtime: architecture, queues, streaming and fin... (Fwdays)
Direct losses from one minute of downtime: $5-$10 thousand. Reputation is priceless.
As part of the talk, we will consider the architectural strategies necessary for developing highly loaded fintech solutions. We will focus on using queues and streaming to work with and manage large amounts of data efficiently in real time, and to minimize latency.
We will focus special attention on the architectural patterns used in the design of the fintech system, microservices and event-driven architecture, which ensure scalability, fault tolerance, and consistency of the entire system.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf (Chart Kalyan)
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
What is an RPA CoE? Session 2 – CoE Roles (DianaGray10)
In this session, we will review the players involved in the CoE and how each role impacts opportunities.
Topics covered:
• What roles are essential?
• What place in the automation journey does each role play?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
"What does it really mean for your system to be available, or how to define w... (Fwdays)
We will talk about system monitoring from a few different angles. We will start by covering the basics, then discuss SLOs, how to define them, and why understanding the business well is crucial for success in this exercise.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
Northern Engraving | Modern Metal Trim, Nameplates and Appliance Panels (Northern Engraving)
What began over 115 years ago as a supplier of precision gauges to the automotive industry has evolved into being an industry leader in the manufacture of product branding, automotive cockpit trim and decorative appliance trim. Value-added services include in-house Design, Engineering, Program Management, Test Lab and Tool Shops.
Northern Engraving | Nameplate Manufacturing Process - 2024 (Northern Engraving)
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
The Microsoft 365 Migration Tutorial For Beginner.pptx (operationspcvita)
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/