SAP OS/DB Migration using Azure Storage Account
2. Introduction
• This presentation illustrates one possible option for performing an SAP heterogeneous migration from an on-premises SAP ECC system to the Microsoft Azure Public Cloud.
• Key requirements from the client included the ability to migrate the 4 TB SAP ECC system during a 36-hour period over the weekend.
• Whilst the database backup and restore technique was considered, we wanted to take advantage of reorganizing and restructuring the database during the move; for this reason, traditional SAP heterogeneous migration techniques were chosen.
3. Options Considered
• Option #1 – Perform a heterogeneous SAP migration in standalone mode to a local export file system, which is then transferred across the WAN using SFTP.
• Option #2 – Perform a heterogeneous SAP migration using the parallel export/import option utilizing a network file system. The network file system is provided by an export from an NFS server hosted in Azure. The NFS file system is then mounted onto both the source (on-premises) and target (Azure) VMs.
• Option #3 – Perform a heterogeneous SAP migration using the parallel export/import option utilizing an Azure Storage Account in combination with custom scripts and blobxfer.
4. Chosen Option
• Option #1 – This option took too long end-to-end and wouldn't fit into the migration window offered by the client.
• Option #2 – The latency across the WAN with the NFS-mounted file system imposed a long runtime for the migration, despite using the parallel export/import option.
• Option #3 – This was the chosen option in this client case. The upload speed offered by the client's internet connection to the Azure Storage Account and the download speed within Azure produced the best result, allowing the migration to fit into the migration window offered by the client.
5. Process Schematic
[Diagram: on the source side, R3LDCTL, R3LOAD and MIGMON work against the source AnyDB database and the local "DB", "DATA" (export) and "SIGN" (signal) file systems; a custom upload script invokes blobxfer to push files into the <sid> container of an Azure Storage Account on Locally Redundant Storage (LRS); on the target side, a custom download script invokes blobxfer to pull the files down to matching "DATA" and "SIGN" file systems, where MIGMON and R3LOAD load the target AnyDB database. Numbered arrows 1–5 correspond to the steps below; the legend distinguishes data flow from process flow.]
1. R3LDCTL writes STR files to "DATA".
2. R3LOAD writes TOC and data files to "DATA", and MIGMON writes an SGN signal file to "SIGN" to indicate that a package is ready for upload.
3. The custom upload script monitors for signal files and, when one is detected, calls blobxfer to upload the STR, TOC and data files to the Storage Account.
4. The custom download script calls blobxfer to download the STR, TOC and data files from the Storage Account and creates a signal file in "SIGN" to trigger MIGMON to start the package import.
5. R3LOAD reads the TOC and data files from "DATA" and loads the database.
6. Process Detail
• The standard heterogeneous SAP OS/DB migration with the parallel export/import option is started on the source and target systems.
• The custom upload script is started on the source system and uploads the STR files (and WHR files if table splitting was performed).
• The custom upload script then starts monitoring for signal files created by MIGMON indicating that a package is ready for transfer. When a signal file is detected, the TOC and data files associated with the package are uploaded to the Azure Storage Account.
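As an illustration only, a minimal sketch of such an upload watcher in Python follows. Everything environment-specific here is an assumption: the export dump directory (/export/DATA), the signal directory (/export/SIGN), the <package>.SGN signal-file naming, the environment variables carrying the Storage Account credentials, and the blobxfer 1.x command-line syntax. The script actually used on the project was custom; treat this as a sketch of the technique, not the implementation.

    #!/usr/bin/env python3
    # Minimal sketch of the source-side upload watcher (illustrative only).
    # Assumptions: MIGMON drops <package>.SGN files into SIGN_DIR once the
    # matching <package>.TOC and <package>.nnn data files in DATA_DIR are
    # complete, and the blobxfer 1.x CLI is on the PATH.
    import glob
    import os
    import subprocess
    import time

    DATA_DIR = "/export/DATA"    # export dump directory (assumed layout)
    SIGN_DIR = "/export/SIGN"    # MIGMON signal directory (assumed layout)
    ACCOUNT = os.environ["AZURE_STORAGE_ACCOUNT"]   # assumed env variables
    KEY = os.environ["AZURE_STORAGE_KEY"]
    CONTAINER = os.environ.get("SID_CONTAINER", "sid")  # the <sid> container

    def upload(local_path, remote_dir):
        # Push one file into the Storage Account using blobxfer 1.x syntax.
        subprocess.run(
            ["blobxfer", "upload",
             "--storage-account", ACCOUNT,
             "--storage-account-key", KEY,
             "--remote-path", CONTAINER + "/" + remote_dir,
             "--local-path", local_path],
            check=True)

    seen = set()
    while True:    # a real script would stop once MIGMON finishes the export
        for sgn in glob.glob(os.path.join(SIGN_DIR, "*.SGN")):
            package = os.path.splitext(os.path.basename(sgn))[0]
            if package in seen:
                continue
            # Upload the package's TOC and data files, then remember it.
            for f in sorted(glob.glob(os.path.join(DATA_DIR, package + ".*"))):
                upload(f, "DATA")
            seen.add(package)
        time.sleep(10)    # poll interval in seconds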
7. Process Detail
• The custom download script is started on the target system and downloads the STR files (and WHR files if table splitting was performed) and the ready TOC and data files.
• The custom download script creates a signal file to indicate to MIGMON on the target system that a package is ready to load into the target database.
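The download side can be sketched the same way, with the same caveats: the paths, the naming convention and the blobxfer syntax are assumptions, and a production script must verify that every file of a package has fully arrived before signalling MIGMON.

    #!/usr/bin/env python3
    # Minimal sketch of the target-side download watcher (illustrative only).
    # Mirrors the "DATA" folder of the Storage Account to the local import
    # directory, then creates a <package>.SGN file so that MIGMON starts
    # importing the package.
    import glob
    import os
    import subprocess
    import time

    DATA_DIR = "/import/DATA"    # import dump directory (assumed layout)
    SIGN_DIR = "/import/SIGN"    # MIGMON signal directory (assumed layout)
    ACCOUNT = os.environ["AZURE_STORAGE_ACCOUNT"]
    KEY = os.environ["AZURE_STORAGE_KEY"]
    CONTAINER = os.environ.get("SID_CONTAINER", "sid")

    while True:
        # Pull down anything new; blobxfer's skip-on-match options can make
        # the repeated download incremental (check the documented flags).
        subprocess.run(
            ["blobxfer", "download",
             "--storage-account", ACCOUNT,
             "--storage-account-key", KEY,
             "--remote-path", CONTAINER + "/DATA",
             "--local-path", DATA_DIR],
            check=True)
        for toc in glob.glob(os.path.join(DATA_DIR, "*.TOC")):
            package = os.path.splitext(os.path.basename(toc))[0]
            sgn = os.path.join(SIGN_DIR, package + ".SGN")
            if not os.path.exists(sgn):
                open(sgn, "w").close()    # an empty signal file suffices here
        time.sleep(30)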
8. About blobxfer
• blobxfer is an advanced data movement tool and library for Azure Storage Blob and Files.
• blobxfer offers the following functionality:
– Upload files into Azure Storage
– Download files out of Azure Storage
– Command Line Interface (CLI)
– Integration into custom Python scripts and other scripting flavours
For further information see https://github.com/Azure/blobxfer
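For example, assuming the blobxfer 1.x command-line syntax (earlier 0.x releases used positional arguments, so check the project page for the syntax of your installed version), a package file could be pushed and pulled along these lines:

    blobxfer upload --storage-account <account> --storage-account-key <key> --remote-path <sid>/DATA --local-path /export/DATA/SAPAPPL0.001
    blobxfer download --storage-account <account> --storage-account-key <key> --remote-path <sid>/DATA --local-path /import/DATA

The placeholder values (<account>, <key>, <sid>) and the directory layout are illustrative; blobxfer also accepts a shared access signature in place of the account key.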
9. Azure Storage Explorer
• Azure Storage Explorer can be used to monitor the progress of the uploaded migration-related files.
• The first image shows the "DATA" and "DB" virtual folders within the <sid> container, the container being the root location.
• The second image shows an example of the content of the "DATA" virtual folder, with STR and TOC files visible.
10. Azure Storage Explorer
• Provides an explorer-like GUI for working with Azure Storage Accounts.
• Allows administration of data lakes, files, blobs, tables and queues.
For details on how better to use Azure Storage Explorer, please see the following excellent Red Gate article by Supriya Pande from our LinkedIn network:
https://www.red-gate.com/simple-talk/cloud/cloud-development/using-azure-storage-explorer/