The presentation covers how to get started building big data solutions in Azure. Azure provides different Hadoop cluster types for the Hadoop ecosystem. The session covers the basics of HDInsight clusters, including Apache Hadoop, HBase, Storm, and Spark, and how to integrate with HDInsight from .NET using different Hadoop integration frameworks and libraries. It is a jump start for engineers and DBAs with RDBMS experience who want to start working with and developing Hadoop solutions. The session is demo driven and covers the basics of Hadoop open-source products.
Cortana Analytics Workshop: Azure Data Lake (MSAdvAnalytics)
Rajesh Dadhia. This session introduces the newest services in the Cortana Analytics family. Azure Data Lake is a hyper-scale data repository designed for big data analytics workloads. It provides a single place to store any type of data in its native format. In this session, we will show how the HDFS compatibility of Azure Data Lake as a Hadoop File System enables all Hadoop workloads including Azure HDInsight, Hortonworks and Cloudera. Further, we will focus on the key capabilities of the Azure Data Lake that make it an ideal choice for storing, accessing and sharing data for a wide range of analytics applications. Go to https://channel9.msdn.com/ to find the recording of this session.
Progress® DataDirect® Spark SQL ODBC and JDBC drivers deliver fast, high-performance connectivity so your existing BI and analytics applications can access Big Data in Apache Spark.
A presentation prepared for Data Stack as part of their interview process on July 20.
This Ignite-format presentation features 10 items that you might not know about the Spark 1.0 release.
These slides provide highlights of my book HDInsight Essentials. Book link is here: http://www.packtpub.com/establish-a-big-data-solution-using-hdinsight/book
The Fundamentals Guide to HDP and HDInsight (Gert Drapers)
This session gives you an architectural overview and an introduction to the inner workings of HDP 2.0 (http://hortonworks.com/products/hdp-windows/) and HDInsight. The world has embraced the Hadoop toolkit to solve its data problems, from ETL and data warehouses to event-processing pipelines. As Hadoop consists of many components, services, and interfaces, understanding its architecture is crucial before you can successfully integrate it into your own environment.
An introduction to self-service data with Dremio. Dremio reimagines analytics for modern data. Created by veterans of open source and big data technologies, Dremio is a fundamentally new approach that dramatically simplifies and accelerates time to insight. Dremio empowers business users to curate precisely the data they need, from any data source, then accelerate analytical processing for BI tools, machine learning, data science, and SQL clients. Dremio starts to deliver value in minutes, and learns from your data and queries, making your data engineers, analysts, and data scientists more productive.
Hive, Impala, and Spark, Oh My: SQL-on-Hadoop in Cloudera 5.5 (Cloudera, Inc.)
Inefficient data workloads are all too common across enterprises - causing costly delays, breakages, hard-to-maintain complexity, and ultimately lost productivity. For a typical enterprise with multiple data warehouses, thousands of reports, and hundreds of thousands of ETL jobs being executed every day, this loss of productivity is a real problem. Add to all of this the complex handwritten SQL queries, and there can be nearly a million queries executed every month that desperately need to be optimized, especially to take advantage of the benefits of Apache Hadoop. How can enterprises dig through their workloads and inefficiencies to easily see which are the best fit for Hadoop and what’s the fastest path to get there?
Cloudera Navigator Optimizer is the solution: it analyzes existing SQL workloads to provide instant insights and turns them into an intelligent optimization strategy, so you can unlock peak performance and efficiency with Hadoop. The newest addition to Cloudera’s enterprise Hadoop platform, now available in limited beta, Navigator Optimizer has helped customers profile over 1.5 million queries and ultimately save millions by optimizing for Hadoop.
This presentation discusses the following topics:
What is Hadoop?
Need for Hadoop
History of Hadoop
Hadoop Overview
Advantages and Disadvantages of Hadoop
Hadoop Distributed File System
Comparing: RDBMS vs. Hadoop
Advantages and Disadvantages of HDFS
Hadoop frameworks
Modules of Hadoop frameworks
Features of Hadoop
Hadoop Analytics Tools
Simplifying And Accelerating Data Access for Python With Dremio and Apache Arrow (PyData)
By Sudheesh Katkam
PyData New York City 2017
Dremio is a new open source project for a self-service data fabric. Dremio simplifies and accelerates access to data of any size from any source, including relational databases, NoSQL, Hadoop, Parquet, and text files. We'll show you how to visually curate data from any source with Dremio, then access it via Pandas or a Jupyter notebook for rapid access.
PASS Summit - SQL Server 2017 Deep Dive (Travis Wright)
Deep dive into SQL Server 2017, covering SQL Server on Linux, containers, HA improvements, SQL graph, machine learning, Python, adaptive query processing, and much more.
Hadoop, as we know, is a Java-based, massively scalable distributed framework for processing large data sets (several petabytes) across clusters of thousands of commodity computers.
The Hadoop ecosystem has grown over the last few years, and there is a lot of jargon in terms of tools as well as frameworks.
Many organizations are investing and innovating heavily in Hadoop to make it better and easier. The mind map on the next slide should be useful for getting a high-level picture of the ecosystem.
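Hadoop's programming model is easiest to see in miniature. The following pure-Python sketch (illustrative only, not Hadoop code) simulates the three MapReduce phases for a word count, the canonical Hadoop example:

```python
from collections import defaultdict

def map_phase(lines):
    # Emit (word, 1) pairs, as a Hadoop mapper would.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    # Group values by key, as Hadoop's shuffle/sort step does.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the counts for each word, as a Hadoop reducer would.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data on commodity hardware", "big clusters of commodity machines"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts["big"], counts["commodity"])  # 2 2
```

In real Hadoop, the map and reduce functions run in parallel across the cluster and the shuffle moves data between nodes; the logical flow is the same.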
Big Data Day LA 2015 - The AWS Big Data Platform by Michael Limcaco of Amazon (Data Con LA)
Introduction to the AWS Big Data platform, including a discussion of popular use cases and reference architectures (e.g., streaming, real-time intelligence, and analytics). We will review the AWS big data portfolio of services including Amazon S3, Kinesis, DynamoDB, Elastic MapReduce (EMR), Redshift, Aurora and Machine Learning, and learn how customers leverage the platform to manage massive volumes of data from a variety of sources while containing costs.
The Future of Hadoop by Arun Murthy, PMC Apache Hadoop & Cofounder Hortonworks (Data Con LA)
Arun Murthy discusses the future of Hadoop and what the big data world will start to look like next. With the advent of tools like Spark and Flink and the containerization of apps using Docker, there is a lot of momentum in this space. Arun shares his thoughts and ideas on what the future holds.
Bio:
Arun C. Murthy
Arun is an Apache Hadoop PMC member and has been a full-time contributor to the project since its inception in 2006. He is also the lead of the MapReduce project and has focused on building NextGen MapReduce (YARN). Prior to co-founding Hortonworks, Arun was responsible for all MapReduce code and configuration deployed across the 42,000+ servers at Yahoo!. In essence, he was responsible for running Apache Hadoop’s MapReduce as a service for Yahoo!. He also jointly holds the current world sorting record using Apache Hadoop. Follow Arun on Twitter: @acmurthy.
Driving in the Desert - Running Your HDP Cluster with Helion, Openstack, and ... (DataWorks Summit)
DataWorks Summit 2017 - Sydney
Alejandro Tesch, Cloud Evangelist, Asia Pacific and Japan, HPE
Big data is a hot topic for most organisations today as they race to convert vast amounts of data into useful information that can be leveraged to make critical decisions and recommendations within very limited time windows. There is a widely accepted talent gap when it comes to creating and managing Hadoop clusters; even for experts, it can take hours (or days) to get a fully functional Hadoop farm up and running. The HDP Ambari plugin for Sahara addresses most of these challenges by facilitating the deployment of Hortonworks Hadoop clusters and providing a set of open APIs for data analytics tasks in your own cloud. In this presentation we cover why it makes sense to run your data analytics cluster in your cloud, and we demonstrate basic Sahara/Ambari functionality.
The presentation covers in detail how to build intelligent microservices solutions using Azure App Service features in Azure. The presentation is demo driven and demonstrates how to design and provision complete end-to-end solutions using cloud services and Azure App Service capabilities.
The talk covers how to build interactive analytics, dashboards, and reports in PowerBI, using multiple data sources, synchronization schedules, and sharing capabilities. The PowerBI web and client tools provide sophisticated features that let non-technical users build interactive dashboards.
The session will be demo driven and will cover different use cases that use multiple data formats and sources.
-- PowerBI client tool & web capabilities
-- Interactive Dashboards and Reporting.
-- Platform capabilities such as sharing, scheduling and more.
In this session I cover data science principles such as regression, clustering, classification, and recommendation, and how to build programmable components in Azure Machine Learning experiments using data science programming languages. The session illustrates how to implement these concepts using Azure ML Studio.
The session covers how to build web, mobile, and API solutions using the Microsoft Azure platform, and in detail how to build intelligent solutions using Azure App Service. Azure App Service includes web apps, logic apps, and mobile apps as PaaS services for engineers. The session is demo driven and demonstrates how to design and provision complete end-to-end solutions using the Microsoft intelligent cloud.
This presentation covers the basics of TypeScript and will show you how to enable TypeScript features in existing projects. We'll take a guided tour of TypeScript, showing how the language and toolset makes it easier to write cross-platform, large-scale JavaScript applications.
Building predictive models in Azure Machine Learning (Mostafa)
This presentation covers how to build machine learning models and drive insights from data. The session covers how to develop and train models in Python/R using Azure Machine Learning, explores key concepts in data acquisition, preparation, exploration, and visualization, and takes a look at how to build a predictive solution using Azure Machine Learning, R, and Python. It also offers tips and tricks on selecting the right algorithm for your data science problem and on utilizing machine learning to solve it.
Application development and deployment in the traditional datacenter has been a challenge for many organizations primarily due to resource constraints. This has historically led to unfortunate compromises between functionality and security for business applications.
With public cloud providers, we have seen the limitations to technical capabilities fall away; the attainable to the Fortune 500 has become available to organizations of any size.
This yields some exciting new options for the development, deployment, and operation of secure applications. Here you will find the presentation deck and recording of the webinar.
This presentation covers how data science concepts connect to building effective machine learning solutions: how to build end-to-end solutions in Azure ML, and how to build, model, and evaluate algorithms in Azure ML.
Microsoft Azure Hybrid Cloud - Getting Started For Techies (Aidan Finn)
This is my "getting started for techies" presentation on using the Microsoft Azure public cloud to build hybrid cloud solutions in conjunction with Windows Server 2012 R2 Hyper-V and System Center.
Use machine learning to solve classification problems by building binary and multi-class classifiers.
Does your company face business-critical decisions that rely on dynamic transactional data? If you answered “yes,” you need to attend this free event featuring Microsoft analytics tools. We’ll focus on Azure Machine Learning capabilities and explore the following topics:
- Introduction to two-class classification problems
- Classification algorithms (two-class classification)
- Available algorithms in Azure ML
- Real business problems that are solved using two-class classification
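As a rough illustration of the two-class classification idea (not Azure ML code), here is a minimal nearest-centroid classifier in pure Python; the sample points and the "A"/"B" class labels are made up for the example:

```python
def centroid(points):
    # Mean of each coordinate across a list of points.
    dims = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dims)]

def classify(x, centroid_a, centroid_b):
    # Assign x to whichever class centroid is closer (squared Euclidean distance).
    dist = lambda c: sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return "A" if dist(centroid_a) <= dist(centroid_b) else "B"

class_a = [[1.0, 1.0], [1.2, 0.8], [0.9, 1.1]]  # e.g. "legitimate" examples
class_b = [[4.0, 4.2], [3.8, 4.1], [4.1, 3.9]]  # e.g. "fraudulent" examples

ca, cb = centroid(class_a), centroid(class_b)
print(classify([1.1, 1.0], ca, cb))  # A
print(classify([4.0, 4.0], ca, cb))  # B
```

Azure ML's two-class algorithms (logistic regression, boosted trees, and so on) are far more sophisticated, but they answer the same question: which of two classes does a new point belong to?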
The session covers how to get started building big data solutions in Azure. Azure provides different Hadoop cluster types for the Hadoop ecosystem. The session covers the basics of HDInsight clusters, including Apache Hadoop, HBase, Storm, and Spark, and how to integrate with HDInsight from .NET using different Hadoop integration frameworks and libraries. It is a jump start for engineers and DBAs with RDBMS experience who want to start working with and developing Hadoop solutions. The session is demo driven and covers the basics of Hadoop open-source products.
Azure Data Platform Services
HDInsight Clusters in Azure
Data Storage: Apache Hive, Apache HBase, Azure Data Catalog
Data Transformations: Apache Storm, Apache Spark, Azure Data Factory
Healthcare / Life Sciences Use Cases
Big Data Warsaw v 4 I "The Role of Hadoop Ecosystem in Advance Analytics" - R... (Dataconomy Media)
What is Big Data? What is Hadoop? What is MapReduce? How do other components such as Oozie, Hue, Hive, and Impala work? What are the main Hadoop distributions? What is Spark? What are the differences between batch and streaming processing? What are some Business Intelligence solutions, with a focus on business cases?
Apache Hive began at Facebook in 2007, driven by the company's data growth.
Facebook's existing ETL system began to fail over the following few years as more people joined the site.
In August 2008, Facebook decided to move to a more scalable open-source Hadoop environment: Hive.
Facebook, Netflix, and Amazon now support the Apache Hive SQL dialect, known as HiveQL.
Apache Hive is a tool built on top of Hadoop for analyzing large, unstructured data sets using a SQL-like syntax, thus making Hadoop accessible to legions of existing BI and corporate analytics researchers.
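HiveQL deliberately resembles ordinary SQL, which is what makes Hadoop approachable for BI analysts. As a stand-in illustration (using Python's built-in sqlite3 rather than a real Hive cluster, and a made-up page_views table), the kind of aggregation an analyst would write in HiveQL looks like this:

```python
import sqlite3

# An in-memory SQLite database stands in for Hive here; the query text is
# essentially what you would type at a Hive prompt against HDFS-backed tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page_views (user_id TEXT, page TEXT)")
conn.executemany(
    "INSERT INTO page_views VALUES (?, ?)",
    [("u1", "home"), ("u1", "search"), ("u2", "home"), ("u3", "home")],
)

# Count views per page, most viewed first.
rows = conn.execute(
    "SELECT page, COUNT(*) AS views FROM page_views GROUP BY page ORDER BY views DESC"
).fetchall()
print(rows)  # [('home', 3), ('search', 1)]
```

The key difference is execution: Hive compiles such a query into distributed jobs over HDFS data rather than running it against a local database file.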
Comparison between RDBMS, Hadoop, and Apache Hive based on parameters like data variety, data storage, querying, cost, schema, speed, data objects, hardware profile, and use cases. It also mentions benefits and limitations.
The Apache Hadoop software library is essentially a framework that allows for the distributed processing of large datasets across clusters of computers using a simple programming model. Hadoop can scale up from single servers to thousands of machines, each offering local computation and storage.
Architecting big data solutions in the cloud (Mostafa)
The session covers how to architect big data solutions in Azure. Azure provides different Hadoop cluster types. The session covers the basics of the Lambda Architecture and how to build an end-to-end big data solution using Apache Hadoop HDFS, HBase, Hive, Storm, and Spark. It covers how to integrate by building apps in any programming language, such as Java, Python, or Scala, with HDInsight clusters using various Hadoop integration frameworks and libraries. The session is a jump start for architects, engineers, and DBAs with RDBMS experience who are looking to start building big data solutions based on Hadoop. The session is demo driven and covers the basics of Hadoop open-source products.
This session covers how to use the PySpark interface to develop Spark applications, from loading and ingesting data to applying transformations. The session covers working with different data sources, applying transformations, and Python best practices for developing Spark apps. The demo covers integrating Apache Spark apps, in-memory processing capabilities, working with notebooks, and integrating analytics tools into Spark applications.
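PySpark transformations are lazy: nothing runs until an action is called. A real PySpark session needs a Spark installation, so as a rough stdlib-only sketch, Python generators can mimic the same pattern (the rdd_map/rdd_filter helpers below are illustrative, not PySpark APIs):

```python
# PySpark transformations (map, filter) build a plan and only execute when an
# action (e.g. collect, count) runs. Generators behave the same way locally.
def rdd_map(func, data):
    return (func(x) for x in data)       # lazy, like rdd.map(func)

def rdd_filter(pred, data):
    return (x for x in data if pred(x))  # lazy, like rdd.filter(pred)

raw = ["3", "oops", "10", "7"]

# Build the pipeline: parse integers, drop bad records, keep values > 5.
parsed = rdd_map(lambda s: int(s) if s.isdigit() else None, raw)
valid = rdd_filter(lambda n: n is not None, parsed)
large = rdd_filter(lambda n: n > 5, valid)

# Nothing has executed yet; list() plays the role of the collect() action.
result = list(large)
print(result)  # [10, 7]
```

In Spark the same chain is distributed across executors and the data can live in memory between stages, which is where the performance benefit comes from.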
This video covers an introduction to HBase in Azure: what HDInsight clusters are, which cluster types are available, and what Microsoft Azure offers as Hadoop ecosystem components. The video focuses on the HDInsight HBase cluster type, the need for HBase in the Hadoop ecosystem to store NoSQL data, and the available tools (such as the hbase shell) and commands for manipulating data within HBase tables.
The video covers the column-families concept for engineers who come from an RDBMS background.
This video helps any engineer with no Hadoop experience understand the role of HBase in Hadoop and big data applications.
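For readers coming from an RDBMS background, HBase's data model can be sketched as nested maps. The toy Python model below (not the HBase API; cell timestamps and versions are omitted, and the table contents are invented) shows how a row key maps to column families, which map column qualifiers to values:

```python
# Toy in-memory model of an HBase table:
# row key -> column family -> qualifier -> value.
table = {}

def put(row_key, family, qualifier, value):
    table.setdefault(row_key, {}).setdefault(family, {})[qualifier] = value

def get(row_key, family, qualifier):
    return table.get(row_key, {}).get(family, {}).get(qualifier)

# Column families ("info", "stats") are fixed when an HBase table is created;
# qualifiers inside a family can vary freely per row, unlike RDBMS columns.
put("user#1001", "info", "name", "Alice")
put("user#1001", "info", "email", "alice@example.com")
put("user#1001", "stats", "logins", "42")

print(get("user#1001", "info", "name"))     # Alice
print(get("user#1001", "stats", "logins"))  # 42
```

This nesting is why HBase is described as a sparse, column-family-oriented store rather than a relational table: rows need not share the same set of qualifiers.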
We are committed to keeping you safe by letting you know when anything you own is recalled. Our patent-pending process lets you import your purchases from eBay or Amazon, or manually enter anything you own, and get alerts about recalls. The eRecall platform is designed to help you get your refund, fix, or replacement faster. It has powerful features for consumers and businesses alike. Connect and share recall notices through our platform and earn reward points.
eRecall is an innovative service that provides immediate access to product recall information at the time of purchase. Our platform connects government with manufacturers, re-sellers, and consumers. We built an innovative platform/service to make the data accessible to all people and businesses. You are able to check all recalled products you own in an effective, personalized, and easy way! eRecall is built with love in Fairfax, Virginia.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools, like the ChatGPT plugin and the Azure OpenAI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Kubernetes & AI - Beauty and the Beast!?! @KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”: how does this fancy AI technology get managed from an infrastructure operations view? Is it possible to apply our lovely cloud-native principles here as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we will discuss which cloud/on-premise strategy we may need to make AI work on our own infrastructure from an enterprise perspective. I will give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
"Impact of front-end architecture on development cost", Viktor Turskyi (Fwdays)
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Building Big Data Solutions in Azure
1.
2. Session Objectives And Takeaways
Understanding HDInsight cluster types & tiers in Azure
HBase as a Hadoop NoSQL database
Hive as data warehouse software for managing large datasets using SQL
Understanding data processing options in the Hadoop ecosystem using Storm and Spark
3. • HDInsight is a cloud implementation on Microsoft Azure of the rapidly expanding Apache
Hadoop technology stack that is the go-to solution for big data analysis.
• It includes implementations of Apache Spark, HBase, Storm, Pig, Hive, Sqoop, Oozie,
Ambari, and so on.
• HDInsight also integrates with business intelligence (BI) tools such as Power BI, Excel, SQL
Server Analysis Services, and SQL Server Reporting Services.
• HDInsight is available on Windows and Linux
• HDInsight on Linux: A Hadoop cluster on Ubuntu
• HDInsight on Windows: A Hadoop cluster on Windows Server 2012 R2
What is HDInsight
4. • HDInsight provides cluster Types & custom configurations for:
• Hadoop (HDFS)
• HBase
• Storm
• Spark
• R Server (Preview)
• No hardware to purchase or maintain
• HDInsight provides powerful programming extensions for languages such as C# and Java, and for the .NET platform. Use your programming language of choice on Hadoop to create, configure, submit, and monitor Hadoop jobs.
HDInsight clusters on Azure
6. • Apache HBase is an open-source, NoSQL database that is built on Hadoop and modeled
after Google BigTable.
• HBase provides random access and strong consistency for large amounts of unstructured and semi-structured data in a schemaless database organized by column families.
• Data is stored in the rows of a table, and data within a row is grouped by column family.
• The open-source code scales linearly to handle petabytes of data on thousands of nodes.
It can rely on data redundancy, batch processing, and other features that are provided by
distributed applications in the Hadoop ecosystem.
What is HBase
7. The same order record, shown as a relational row and as an HBase row grouped by two column families (Customer and Company):

Order No | Customer Name | Customer Phone | Company Name | Company Address
12012015 | Mostafa | 101-232-2345 | Microsoft | Redmond, WA

Customer family: Customer Name, Customer Phone
Company family: Company Name, Company Address
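The column-family layout above can be sketched as a nested map. This is a hypothetical Python model of how HBase organizes the data, not a real HBase client:

```python
# Hypothetical sketch of how HBase lays out a row by column families.
# Each row key maps to column families; each family holds its own columns.
row_key = "12012015"  # the Order No serves as the row key

hbase_row = {
    row_key: {
        "Customer": {                      # "Customer" column family
            "Name": "Mostafa",
            "Phone": "101-232-2345",
        },
        "Company": {                       # "Company" column family
            "Name": "Microsoft",
            "Address": "Redmond, WA",
        },
    }
}

# Reading a single cell: row key -> column family -> column qualifier
print(hbase_row["12012015"]["Customer"]["Name"])  # Mostafa
```

Grouping related columns into families like this is what lets HBase store and scan each family independently.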
8. • HBase commands:
• create: equivalent to CREATE TABLE in T-SQL
• get: equivalent to SELECT statements in T-SQL
• put: equivalent to UPDATE and INSERT statements in T-SQL
• scan: equivalent to SELECT with no WHERE condition in T-SQL
• The HBase shell is your query tool for executing CRUD commands against an HBase cluster.
• Data can also be managed using the HBase C# API, which provides a client library on top of the HBase REST API.
• An HBase database can also be queried by using Hive with HiveQL.
What is HBase
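The create/get/put/scan verbs above can be illustrated with an in-memory stand-in. The helper names here are hypothetical; real code would use the HBase shell, the REST API, or the C# client library:

```python
# In-memory stand-in for the HBase verbs listed above. Hypothetical
# helpers for illustration only -- not an HBase client.
tables = {}

def create(table, *column_families):
    """create: like CREATE TABLE in T-SQL, but only families are declared."""
    tables[table] = {"families": set(column_families), "rows": {}}

def put(table, row_key, family, qualifier, value):
    """put: insert or update a single cell (UPDATE/INSERT in T-SQL)."""
    row = tables[table]["rows"].setdefault(row_key, {})
    row[f"{family}:{qualifier}"] = value

def get(table, row_key):
    """get: fetch one row by key (SELECT ... WHERE key = ... in T-SQL)."""
    return tables[table]["rows"].get(row_key, {})

def scan(table):
    """scan: iterate all rows (SELECT with no WHERE condition in T-SQL)."""
    return dict(tables[table]["rows"])

create("orders", "Customer", "Company")
put("orders", "12012015", "Customer", "Name", "Mostafa")
put("orders", "12012015", "Company", "Name", "Microsoft")
print(get("orders", "12012015")["Customer:Name"])  # Mostafa
```

Note that, unlike CREATE TABLE, create declares only the column families; individual columns appear on first put.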
9. • Apache Hive is a data warehouse system for Hadoop, which enables data summarization,
querying, and analysis of data by using HiveQL (a query language similar to SQL).
• Hive understands how to work with structured and semi-structured data, such as text files
where the fields are delimited by specific characters.
• Hive also supports custom serializer/deserializers (SerDe) for complex or irregularly
structured data.
• Hive can also be extended through user-defined functions (UDF).
• A UDF allows you to implement functionality or logic that isn't easily modeled in HiveQL.
What is Hive
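What Hive does over delimited text files can be mimicked in plain Python. This hedged sketch (no Hive involved; the sample data is invented) shows the effect of a HiveQL query such as `SELECT company, COUNT(*) FROM orders GROUP BY company`:

```python
# Plain-Python sketch of a HiveQL GROUP BY over comma-delimited text.
# Hive would infer the delimiter from the table's ROW FORMAT DELIMITED
# definition; here we hard-code it. Sample rows are invented.
from collections import Counter

raw_lines = [
    "12012015,Mostafa,Microsoft",
    "12012016,Sara,Contoso",
    "12012017,Adam,Microsoft",
]

# Deserialize each line into fields, as a SerDe would.
rows = [line.split(",") for line in raw_lines]

# SELECT company, COUNT(*) ... GROUP BY company
counts = Counter(company for _, _, company in rows)
print(counts["Microsoft"])  # 2
```

A custom SerDe or UDF would slot into the deserialization or per-row step of exactly this kind of pipeline.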
10.
11. • Apache Storm is a distributed, fault-tolerant, open-source computation system that allows
you to process data in real-time with Hadoop.
• Apache Storm on HDInsight allows you to create distributed, real-time analytics solutions
in the Azure environment by using Apache Hadoop.
• Storm solutions can also provide guaranteed processing of data, with the ability to replay
data that was not successfully processed the first time.
• Ability to write Storm components in C#, Java, and Python.
• Scale up or scale down in Azure without impacting running Storm topologies.
• Easy to provision and use in the Azure portal.
• Visual Studio project templates for Storm apps
What is Apache Storm
12. • Apache Storm apps are submitted as Topologies.
• A topology is a graph of computation that processes streams
• Stream: An unbounded collection of tuples. Streams are produced by spouts and bolts, and they are consumed by bolts.
• Tuple: A named list of dynamically typed values.
• Spout: Consumes data from a data source and emits one or more streams.
• Bolt: Consumes streams, performs processing on tuples, and may emit streams. Bolts are
also responsible for writing data to external storage, such as a queue, HDInsight, HBase, a
blob, or other data store.
• Nimbus: Similar to the JobTracker in Hadoop; it distributes jobs and monitors failures.
Apache Storm Components
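The spout-to-bolt flow above can be sketched in plain Python. Class and method names here are hypothetical stand-ins for Storm's concepts; a real topology would use the Storm APIs in Java, C#, or Python:

```python
# Minimal sketch of Storm's spout -> bolt dataflow in plain Python.
# Hypothetical classes modeling the concepts, not the Storm API.

class SentenceSpout:
    """Spout: consumes a data source and emits a stream of tuples."""
    def __init__(self, sentences):
        self.sentences = sentences

    def emit(self):
        for sentence in self.sentences:
            yield (sentence,)          # a tuple: a named list of values

class SplitBolt:
    """Bolt: consumes a stream, processes tuples, emits a new stream."""
    def process(self, stream):
        for (sentence,) in stream:
            for word in sentence.split():
                yield (word,)

class CountBolt:
    """Bolt: terminal step -- 'writes' word counts to a dict store."""
    def __init__(self):
        self.store = {}

    def process(self, stream):
        for (word,) in stream:
            self.store[word] = self.store.get(word, 0) + 1

# Wire the topology graph: spout -> split bolt -> count bolt
spout = SentenceSpout(["storm processes streams", "streams of tuples"])
counter = CountBolt()
counter.process(SplitBolt().process(spout.emit()))
print(counter.store["streams"])  # 2
```

In a real topology, Nimbus would distribute these components across worker nodes and replay tuples that fail processing.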
13.
14. • Apache Spark™ is a fast and general engine for large-scale data processing.
• Run programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on
disk.
• Write applications quickly in Java, Scala, Python, or R.
• Combine SQL, streaming, and complex analytics.
• Spark's in-memory computation capabilities make it a good choice for iterative algorithms in machine learning and graph computations.
• Spark is also compatible with Azure Blob storage (WASB) so your existing data stored in
Azure can easily be processed via Spark.
• Support for R Server & Azure Data Lake.
What is Apache Spark
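Spark programs chain transformations over distributed collections. This hedged plain-Python sketch models the stages of a PySpark word count (with PySpark it would be roughly `sc.textFile(...).flatMap(...).map(...).reduceByKey(...)`); no cluster or pyspark install is assumed:

```python
# Spark-style word count modeled with Python builtins. Each stage
# mirrors a Spark transformation; sample data is invented.
from collections import defaultdict

lines = ["spark runs in memory", "spark scales out"]

# flatMap: split each line into words
words = [w for line in lines for w in line.split()]

# map: pair each word with a count of 1
pairs = [(w, 1) for w in words]

# reduceByKey: sum the counts per word
def reduce_by_key(pairs):
    acc = defaultdict(int)
    for key, value in pairs:
        acc[key] += value
    return dict(acc)

word_counts = reduce_by_key(pairs)
print(word_counts["spark"])  # 2
```

On a real cluster, Spark keeps the intermediate collections in memory across stages, which is what makes repeated passes (iterative ML, graph algorithms) fast.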
15.
16. Session Objectives And Takeaways
Understanding HDInsight cluster types & tiers in Azure
HBase as a Hadoop NoSQL database
Hive as data warehouse software for managing large datasets using SQL
Understanding data processing options in the Hadoop ecosystem using Storm and Spark
Editor's Notes
The session covers how to get started building big data solutions in Azure. Azure provides different Hadoop cluster types for the Hadoop ecosystem. The session covers the basics of HDInsight clusters, including Apache Hadoop, HBase, Storm, and Spark, and shows how to integrate with HDInsight from .NET using different Hadoop integration frameworks and libraries. It is a jump start for engineers and DBAs with RDBMS experience who are looking to start working with and developing Hadoop solutions. The session is demo driven and covers the basics of Hadoop open-source products.
https://azure.microsoft.com/en-us/documentation/articles/hdinsight-hbase-tutorial-get-started/
A) Working with hbase shell:
Create a table.
Insert a record.
Update a record.
Delete a record.
Create a Hive table that maps to the HBase table we just created.
B) Working with Hive: use the dashboard to create database and tables.
Apache Storm in HDInsight
https://azure.microsoft.com/en-us/documentation/articles/hdinsight-storm-overview/
Tips:
The Nimbus node provides similar functionality to the Hadoop JobTracker, and it assigns tasks to other nodes in the cluster through Zookeeper.
Demo:
https://azure.microsoft.com/en-us/documentation/articles/hdinsight-storm-develop-csharp-visual-studio-topology/
Overview of HDInsight project templates in Visual Studio 2015:
Create a Storm application
Create a Hive application