Scala is a modern multi-paradigm programming language that integrates object-oriented and functional programming. It is statically typed, has a lightweight syntax, and compiles to Java bytecode, so it runs on the Java Virtual Machine and can use Java libraries directly. Scala's standard collections include lists, arrays, sets, and maps, and the language supports higher-order functions, immutable data, pattern matching, and case classes. Build tools such as sbt are used to manage Scala projects.
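The features listed above fit together naturally; a minimal sketch (names like `Person` are just for illustration):

```scala
// Case class: immutable data with equals/hashCode/toString generated.
case class Person(name: String, age: Int)

val people = List(Person("Ada", 36), Person("Alan", 41), Person("Grace", 29))

// Higher-order function: filter takes a predicate function as its argument.
val overThirty = people.filter(_.age > 30)

// Pattern matching deconstructs case classes directly.
def describe(p: Person): String = p match {
  case Person(name, age) if age > 30 => s"$name is over thirty"
  case Person(name, _)               => s"$name is thirty or younger"
}

println(overThirty.map(_.name).mkString(", "))  // prints: Ada, Alan
```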
Machine learning with Apache Spark MLlib | Big Data Hadoop Spark Tutorial | CloudxLab
Big Data with Hadoop & Spark Training: http://bit.ly/2L227PI
This CloudxLab Spark MLlib tutorial helps you to understand Spark MLlib in detail. Below are the topics covered in this tutorial:
1) Introduction to Machine Learning
2) Applications of Machine Learning
3) Machine Learning - Types & Tools
4) Introduction to Spark MLlib
5) Movie Lens Recommendation - Collaborative Filtering in Spark MLlib
Spark real world use cases and optimizations (Gal Marder)
Using Spark for Big Data has become the industry standard. The internet is full of "hello world" examples, but when your Spark job meets production, all hell breaks loose. We will cover real-world use cases, how they were designed, why they didn't work, and how we made them run fast.
There are many programming languages that can be used on the JVM (Java, Scala, Groovy, Kotlin, ...). In this session, we'll outline the main differences between them and discuss the advantages and disadvantages of each.
Introduction to Spark ML Pipelines Workshop (Holden Karau)
Introduction to Spark ML Pipelines Workshop slides; companion Jupyter notebooks in Python & Scala are available from my GitHub at https://github.com/holdenk/spark-intro-ml-pipeline-workshop
Knoldus organized a Meetup on 1 April 2015. In this Meetup, we introduced Spark with Scala. Apache Spark is a fast and general engine for large-scale data processing. Spark is used at a wide range of organizations to process large datasets.
This presentation shows the main Spark characteristics, like RDDs, Transformations, and Actions.
I used this presentation for many Spark intro workshops for the Cluj-Napoca Big Data community: http://www.meetup.com/Big-Data-Data-Science-Meetup-Cluj-Napoca/
Abstract:
Spark 2 is here. While Spark has been the leading cluster computing framework for several years, its second version takes Spark to new heights. In this seminar, we will go over Spark internals and learn the new concepts of Spark 2 to create better scalable big data applications.
Target Audience
Architects, Java/Scala developers, Big Data engineers, team leaders
Prerequisites
Java/Scala knowledge and SQL knowledge
Contents:
- Spark internals
- Architecture
- RDD
- Shuffle explained
- Dataset API
- Spark SQL
- Spark Streaming
Stream processing from single node to a cluster (Gal Marder)
Building data pipelines shouldn't be so hard; you just need to choose the right tools for the task. We will review Akka and Spark Streaming: how they work, how to use them, and when.
Weaving Dataflows with Silk - ScalaMatsuri 2014, Tokyo (Taro L. Saito)
Silk is a framework for building dataflows in Scala. In Silk, users write data-processing code with collection operators (e.g., map, filter, reduce, join). Silk uses Scala macros to construct a DAG of dataflows, whose nodes are annotated with variable names from the program. By using these variable names as markers in the DAG, Silk can support interrupting and resuming dataflows and querying intermediate data. By separating dataflow descriptions from their computation, Silk lets us switch executors, called weavers, between in-memory and cluster computing without modifying the code. In this talk, we will show how Silk helps you run data-processing pipelines as you write the code.
A brief introduction to Spark ML with PySpark for Alpine Academy Spark Workshop #2. This workshop covers basic feature transformation, model training, and prediction. See the corresponding github repo for code examples https://github.com/holdenk/spark-intro-ml-pipeline-workshop
Beyond Shuffling - Global Big Data Tech Conference 2015 SJ (Holden Karau)
Beyond Shuffling - Tips & Tricks for scaling your Apache Spark programs. This talk walks through a number of common mistakes which can keep our Spark programs from scaling and examines the solutions, as well as general techniques useful for moving beyond a proof of concept to production.
While the Java platform has earned a reputation over the last 15 years as a robust application platform with a thriving ecosystem and well-established practices, the Java language has had its share of criticism. Highly verbose, overly didactic, limited feature set; whichever flavor of criticism you prefer, it's patently obvious that Java is playing catch-up to more modern languages with a less rigid evolution path.
The language landscape today is vastly different than it had been five or ten years ago; a wide array of languages are available, designed to suit a variety of flavors: Groovy, Clojure, Scala, Gosu, Kotlin... which should you choose? This lecture focuses on one company's decision to focus on Scala, and presents a case study based on our experiences using Scala in practice, in the hope of providing much-needed real world context to assist your decision.
This presentation was used for the Scala In Practice lecture at the Botzia Israeli Java User Group meeting, May 3rd 2012.
Survey of Spark for Data Pre-Processing and Analytics (Yannick Pouliot)
A short presentation I gave on why Apache Spark is such an impressive analytics platform, particularly for R and Python users. I also discuss how academia can benefit from an Amazon AWS implementation.
Beyond Shuffling - Effective Tips and Tricks for Scaling Spark (Vancouver Sp...) (Holden Karau)
Beyond Shuffling - Tips & Tricks for scaling your Apache Spark programs. This talk walks through a number of common mistakes which can keep our Spark programs from scaling and examines the solutions, as well as general techniques useful for moving beyond a proof of concept to production. It covers topics like effective RDD re-use, considerations for working with key/value data, and finishes up with a preview of some of the work being done to add code generation to Spark ML.
Scalaz is a Scala library for functional programming.
It provides purely functional data structures to complement those from the Scala standard library. It defines a set of foundational type classes (e.g. Functor, Monad) and corresponding instances for a large number of data structures.
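To make the type-class idea concrete, here is a hand-rolled `Functor` encoding in plain Scala: a sketch of the pattern Scalaz uses, not Scalaz's actual API:

```scala
// Type class: a trait describing an operation over some container F[_].
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

object Functor {
  // Instance for List.
  implicit val listFunctor: Functor[List] = new Functor[List] {
    def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
  }

  // Instance for Option.
  implicit val optionFunctor: Functor[Option] = new Functor[Option] {
    def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
  }
}

// A function that works for ANY F with a Functor instance in scope.
def double[F[_]](fa: F[Int])(implicit F: Functor[F]): F[Int] =
  F.map(fa)(_ * 2)
```

Here `double(List(1, 2, 3))` yields `List(2, 4, 6)` and `double(Option(21))` yields `Some(42)`; neither call site names the concrete container.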
An Introduction to Higher Order Functions in Spark SQL with Herman van Hovell (Databricks)
Nested data types offer Apache Spark users powerful ways to manipulate structured data. In particular, they allow you to put complex objects like arrays, maps and structures inside of columns. This can help you model your data in a more natural way.
While this feature is certainly useful, it can be quite cumbersome to manipulate data inside complex objects, because SQL (and Spark) do not have primitives for working with such data: doing so is time-consuming, non-performant, and non-trivial. During this talk we will discuss some commonly used techniques for working with complex objects, and we will introduce new ones based on higher-order functions. Higher-order functions will be part of Spark 2.4 and are a simple and performant extension to SQL that allows users to manipulate complex data such as arrays.
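The semantics of these SQL higher-order functions mirror ordinary collection combinators. As a rough analogy in plain Scala (this stands in for SQL such as `transform(values, x -> x + 1)`; it is not Spark code):

```scala
// One array-valued "column", one array per row.
val rows = Seq(Seq(1, 2, 3), Seq(4, 5))

// transform(values, x -> x + 1): apply a lambda to every element.
val incremented = rows.map(_.map(_ + 1))

// filter(values, x -> x % 2 == 0): keep only matching elements.
val evens = rows.map(_.filter(_ % 2 == 0))

// aggregate(values, 0, (acc, x) -> acc + x): fold an array to one value.
val sums = rows.map(_.foldLeft(0)(_ + _))
```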
The Scala programming language has been gaining momentum recently as an alternative (and some might say successor) to Java on the JVM. This talk will start with an introduction to basic Scala syntax and concepts, then delve into some of Scala's more interesting and unique features. At the end we'll show a brief example of how Scala is used by the Lift web framework to simplify dynamic web apps.
The workshop aims to introduce Scala 2.10+ and was held at Clueda AG.
It gives an introduction to Scala, explaining widely used language features like case classes, pattern matching, futures, and string interpolation.
QCon 2011 - Functions Rock: Functional Programming with Scala (Michael Stal)
This is part I of the tutorial I planned to give at QCon 2011 on functional programming with Scala. It also covers Scala 2.8 as well as upcoming features.
Understanding computer vision with Deep Learning (CloudxLab)
Computer vision is a branch of computer science that deals with recognising objects and people and with identifying patterns in visuals. It is roughly analogous to the vision of an animal.
Topics covered:
1. Overview of Machine Learning
2. Basics of Deep Learning
3. What is computer vision and its use-cases?
4. Various algorithms used in Computer Vision (mostly CNN)
5. Live hands-on demo of either Auto Cameraman or Face recognition system
6. What next?
( Machine Learning & Deep Learning Specialization Training: https://goo.gl/5u2RiS )
This CloudxLab Reinforcement Learning tutorial helps you to understand Reinforcement Learning in detail. Below are the topics covered in this tutorial:
1) What is Reinforcement?
2) Reinforcement Learning an Introduction
3) Reinforcement Learning Example
4) Learning to Optimize Rewards
5) Policy Search - Brute Force Approach, Genetic Algorithms and Optimization Techniques
6) OpenAI Gym
7) The Credit Assignment Problem
8) Inverse Reinforcement Learning
9) Playing Atari with Deep Reinforcement Learning
10) Policy Gradients
11) Markov Decision Processes
Apache Spark - Key Value RDD - Transformations | Big Data Hadoop Spark Tutorial | CloudxLab
Big Data with Hadoop & Spark Training: http://bit.ly/2sm5Ekd
This CloudxLab Key-Value RDD Transformations tutorial helps you to understand Key-Value RDD transformations in detail. Below are the topics covered in this tutorial:
1) Transformations on Key-Value Pair RDD - keys(), values(), groupByKey(), combineByKey(), sortByKey(), subtractByKey(), join(), leftOuterJoin(), rightOuterJoin(), cogroup(), countByKey() and lookup()
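The shapes of these transformations are easy to see with plain Scala pair collections. This is a sketch of the semantics only (no SparkContext, no partitioning or shuffle), with an RDD[(String, Int)] played by a `Seq` of pairs:

```scala
// A pair collection standing in for an RDD[(String, Int)].
val pairs = Seq("a" -> 1, "b" -> 2, "a" -> 3)

// keys() and values()
val ks = pairs.map(_._1)
val vs = pairs.map(_._2)

// groupByKey(): collect all values per key
val grouped = pairs.groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2) }

// what combineByKey()/reduceByKey() compute, ignoring the shuffle details
val summed = pairs.groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2).sum }

// join(): inner join of two pair collections on the key
val other = Seq("a" -> "x", "c" -> "y")
val joined = for {
  (k1, v) <- pairs
  (k2, w) <- other
  if k1 == k2
} yield (k1, (v, w))
```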
Advanced Spark Programming - Part 2 | Big Data Hadoop Spark Tutorial | CloudxLab
Big Data with Hadoop & Spark Training: http://bit.ly/2kyRTuW
This CloudxLab Advanced Spark Programming tutorial helps you to understand Advanced Spark Programming in detail. Below are the topics covered in this slide:
1) Shared Variables - Accumulators & Broadcast Variables
2) Accumulators and Fault Tolerance
3) Custom Accumulators - Version 1.x & Version 2.x
4) Examples of Broadcast Variables
5) Key Performance Considerations - Level of Parallelism
6) Serialization Format - Kryo
7) Memory Management
8) Hardware Provisioning
Apache Spark - Dataframes & Spark SQL - Part 2 | Big Data Hadoop Spark Tutorial | CloudxLab
Big Data with Hadoop & Spark Training: http://bit.ly/2sm9c61
This CloudxLab Introduction to Spark SQL & DataFrames tutorial helps you to understand Spark SQL & DataFrames in detail. Below are the topics covered in this slide:
1) Loading XML
2) What is RPC - Remote Procedure Call
3) Loading AVRO
4) Data Sources - Parquet
5) Creating DataFrames From Hive Table
6) Setting up Distributed SQL Engine
Apache Spark - Dataframes & Spark SQL - Part 1 | Big Data Hadoop Spark Tutorial | CloudxLab
Big Data with Hadoop & Spark Training: http://bit.ly/2sf2z6i
This CloudxLab Introduction to Spark SQL & DataFrames tutorial helps you to understand Spark SQL & DataFrames in detail. Below are the topics covered in this slide:
1) Introduction to DataFrames
2) Creating DataFrames from JSON
3) DataFrame Operations
4) Running SQL Queries Programmatically
5) Datasets
6) Inferring the Schema Using Reflection
7) Programmatically Specifying the Schema
Apache Spark - Running on a Cluster | Big Data Hadoop Spark Tutorial | CloudxLab
Big Data with Hadoop & Spark Training: http://bit.ly/2IUsWca
This CloudxLab Running on a Cluster tutorial helps you to understand running Spark on a cluster in detail. Below are the topics covered in this tutorial:
1) Spark Runtime Architecture
2) Driver Node
3) Scheduling Tasks on Executors
4) Understanding the Architecture
5) Cluster Managers
6) Executors
7) Launching a Program using spark-submit
8) Local Mode & Cluster-Mode
9) Installing Standalone Cluster
10) Cluster Mode - YARN
11) Launching a Program on YARN
12) Cluster Mode - Mesos and AWS EC2
13) Deployment Modes - Client and Cluster
14) Which Cluster Manager to Use?
15) Common flags for spark-submit
Introduction to SparkR | Big Data Hadoop Spark Tutorial | CloudxLab
Big Data with Hadoop & Spark Training: http://bit.ly/2LCTufA
This CloudxLab Introduction to SparkR tutorial helps you to understand SparkR in detail. Below are the topics covered in this tutorial:
1) SparkR (R on Spark)
2) SparkR DataFrames
3) Launch SparkR
4) Creating DataFrames from Local DataFrames
5) DataFrame Operation
6) Creating DataFrames - From JSON
7) Running SQL Queries from SparkR
Introduction to NoSQL | Big Data Hadoop Spark Tutorial | CloudxLab
Big Data with Hadoop & Spark Training: http://bit.ly/2kyP2Ct
This CloudxLab Introduction to NoSQL tutorial helps you to understand NoSQL in detail. Below are the topics covered in this slide:
1) Introduction to NoSQL
2) Scaling Out vs Scaling Up
3) ACID - Properties of DB Transactions
4) RDBMS - Story
5) What is NoSQL?
6) Types Of NoSQL Stores
7) CAP Theorem
8) Serialization
9) Column Oriented Database
10) Column Family Oriented DataStore
Introduction to MapReduce - Hadoop Streaming | Big Data Hadoop Spark Tutorial | CloudxLab
Big Data with Hadoop & Spark Training: http://bit.ly/2sh5b3E
This CloudxLab Hadoop Streaming tutorial helps you to understand Hadoop Streaming in detail. Below are the topics covered in this tutorial:
1) Hadoop Streaming and Why Do We Need it?
2) Writing Streaming Jobs
3) Testing Streaming jobs and Hands-on on CloudxLab
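Hadoop Streaming runs any executable that reads records from stdin and writes `key<TAB>value` lines to stdout. A word-count mapper in that style might look like this (a sketch; the logic is factored into a function so it can be exercised without a cluster):

```scala
import scala.io.Source

object WordCountMapper {
  // Emit one "word<TAB>1" line per word: the classic streaming-mapper contract.
  def mapLines(lines: Iterator[String]): Iterator[String] =
    lines.flatMap(_.toLowerCase.split("\\W+"))
         .filter(_.nonEmpty)
         .map(word => s"$word\t1")

  // Hadoop Streaming invokes the executable and pipes split contents to stdin.
  def main(args: Array[String]): Unit =
    mapLines(Source.stdin.getLines()).foreach(println)
}
```

The matching reducer would read the sorted `word<TAB>1` lines from stdin and sum the counts per word.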
Introduction To TensorFlow | Deep Learning Using TensorFlow | CloudxLab
( Machine Learning & Deep Learning Specialization Training: https://goo.gl/6n3vko )
This CloudxLab TensorFlow tutorial helps you to understand TensorFlow in detail. Below are the topics covered in this tutorial:
1) Why TensorFlow?
2) What are Tensors?
3) What is TensorFlow?
4) Creating your First Graph
5) Linear Regression with TensorFlow
6) Implementing Gradient Descent using TensorFlow
7) Implementing Gradient Descent Using autodiff
8) Implementing Gradient Descent Using an Optimizer
9) Graph Visualization using TensorBoard
10) Name Scopes in TensorFlow
11) Modularity in TensorFlow
12) Sharing Variables in TensorFlow
Introduction to Deep Learning | CloudxLab
( Machine Learning & Deep Learning Specialization Training: https://goo.gl/goQxnL )
This CloudxLab Deep Learning tutorial helps you to understand Deep Learning in detail. Below are the topics covered in this tutorial:
1) What is Deep Learning
2) Deep Learning Applications
3) Artificial Neural Network
4) Deep Learning Neural Networks
5) Deep Learning Frameworks
6) AI vs Machine Learning
In this tutorial, we will learn the following topics -
+ The Curse of Dimensionality
+ Main Approaches for Dimensionality Reduction
+ PCA - Principal Component Analysis
+ Kernel PCA
+ LLE
+ Other Dimensionality Reduction Techniques
In this tutorial, we will learn the following topics -
+ Voting Classifiers
+ Bagging and Pasting
+ Random Patches and Random Subspaces
+ Random Forests
+ Boosting
+ Stacking
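At its core, a hard-voting classifier just takes a majority vote over the base models' predictions. A sketch in Scala with toy threshold "models" (illustrative only, not code from the tutorial):

```scala
// Each base classifier is a function from a feature vector to a label.
type Classifier = Vector[Double] => Int

// Hard voting: predict the label chosen by the most base classifiers.
def vote(models: Seq[Classifier])(x: Vector[Double]): Int =
  models.map(m => m(x))
        .groupBy(identity)
        .maxBy { case (_, votes) => votes.size }
        ._1

// Three toy "models" that threshold the first feature differently.
val models: Seq[Classifier] = Seq(
  x => if (x(0) > 0.3) 1 else 0,
  x => if (x(0) > 0.5) 1 else 0,
  x => if (x(0) > 0.9) 1 else 0
)
```

Bagging and boosting differ only in how the base models are trained; the final prediction is still an aggregation of their outputs (a vote or a weighted average).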
2. Scala
Scala - Scalable Language
● Scala is a modern multi-paradigm programming
language designed to express common programming
patterns in a concise, elegant, and type-safe way
● Integrates the features of object-oriented and
functional languages
3. Scala
Why Scala?
● Statically Typed
■ Many errors are caught at compile time, before deployment
■ Performance
● Lightweight Composable Syntax
■ Low boilerplate
■ Syntax is very similar to other data centric languages
● Stable
■ Has been used in enterprises, the financial sector, retail,
gaming and more for many years
4. Scala
Scala - History
2011 Corporate stewardship
2005 Scala 2.0 written in Scala
2003 First Experimental Release
2001 Decided to create an even better Java
1990s
Martin Odersky (creator of Scala) improved Java by adding
generics to the “javac” compiler
5. Scala
Scala - JVM
● Scala source code gets compiled to Java byte
code
● Resulting bytecode is executed by a JVM - the
Java Virtual Machine
● Java libraries may be used directly in Scala code
and vice versa
6. Scala
// Create a file hello_world.scala
// using nano hello_world.scala
object HelloWorld {
  def main(args: Array[String]): Unit = {
    println("Hello, world!")
  }
}
Scala - Hello World - Example
7. Scala
Scala - Hello World - Compiling and Running
● Compile using scalac
scalac hello_world.scala
● To run it
scala HelloWorld
9. Scala
Scala - Interpreter
To evaluate a single expression, you can use the following:
scala -e 'println("Hello, World!")'
10. Scala
Scala - Variables
● Variables are reserved memory locations to store values
● The compiler allocates memory based on the data type of
the variable
12. Scala
Scala - Variables - Types
● Mutable variables are defined using “var” keyword
○ values can be changed
● Immutable variables are defined using “val” keyword
○ values can not be changed after assignment
13. Scala
● val x: Int = 8 // Immutable, Read Only
● var y: Int = 7 // Mutable
Scala - Variables - Hands-on
14. Scala
What is the importance of immutable variables?
● In large systems, we do not want to be in a situation
where a variable's value changes unexpectedly
● In case of threads, APIs, functions and classes, we may
not want some of the variables to change
Scala - Immutable Variables - Importance
15. Scala
● We can define variables without specifying their data
type
● The Scala compiler can infer the type of the variable
based on the value assigned to it
Scala - Variables - Type Inference
16. Scala
● var x = "hi" // Type Inference (Scala compiler determines the type)
● var x: String = "hi" // Explicitly specifying the type
Scala - Variables - Type Inference
17. Scala
● We should always define the type of variables explicitly
○ Code compiles slightly faster as the compiler does not have to
spend time inferring the type
○ No ambiguity
Scala - Variables - Type Inference
18. Scala
● A class is a way of creating your own data type
● We can create a data type that represents a customer using a class
● A customer might have state such as first name, last name, age,
address and contact number
● A customer class might also define behaviours that transform this
state, such as changing someone’s address or name
● A class is not concrete until it has been instantiated using the new
keyword
Scala - Classes
19. Scala
class Person(val fname: String, val lname: String, val anage: Int) {
  val firstname: String = fname;
  val lastname: String = lname;
  val age = anage;
}

val obj = new Person("Robin", "Gill", 42);
print(obj.age)
print(obj.firstname)
Scala - Classes - Hands-on
20. Scala
Scala - Classes - Methods - Hands-on
class Person(val fname: String, val lname: String, val anage: Int) {
  val firstname: String = fname;
  val lastname: String = lname;
  val age = anage;

  def getFullName(): String = {
    return firstname + " " + lastname;
  }
}

val obj = new Person("Robin", "Gill", 42);
print(obj.getFullName())
21. Scala
● A singleton is a class which can have only one instance at any point
in time
● Methods and values that aren’t associated with individual instances
of a class belong in singleton objects
● A singleton class can be directly accessed via its name
● Create a singleton class using the keyword object
Scala - Singleton Objects
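The singleton bullets above can be sketched as a short hands-on (the `MathUtils` name and its members are made up for illustration):

```scala
// A singleton object: declared with the `object` keyword.
// Exactly one instance exists, and it is accessed directly by name.
object MathUtils {
  val Pi = 3.14159                  // a constant
  def square(x: Int): Int = x * x   // a utility method
}

println(MathUtils.square(5))  // 25 — no `new` needed
println(MathUtils.Pi)
```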
23. Scala
● Objects are useful in defining utility methods and constants as they
are not related to any specific instance of the class
Scala - Singleton Objects
24. Scala
var x = 10;
if (x < 20) {
  println("x is less than 20");
}
Scala - if
25. Scala
var x = 30;
if( x < 20 ){
println("x is less than 20");
} else {
println("x is greater than 20");
}
Scala - if - else
26. Scala
var x = 30;
if (x == 10) {
  println("Value of X is 10");
} else if (x == 20) {
  println("Value of X is 20");
} else if (x == 30) {
  println("Value of X is 30");
} else {
  println("This is else statement");
}
Scala - if - else - if - else
27. Scala
while loop
Repeats a statement or group of statements while a given condition
is true. It tests the condition before executing the loop body.
do-while loop
Like a while statement, except that it tests the condition at the end
of the loop body.
for loop
Executes a sequence of statements multiple times and abbreviates
the code that manages the loop variable.
Scala - loops
28. Scala
Repeats a statement or group of statements while a given condition is
true. It tests the condition before executing the loop body.
Scala - while loop
var a = 10;
// while loop execution
while (a < 20) {
  println("Value of a: " + a);
  a = a + 1;
}

Value of a: 10
Value of a: 11
Value of a: 12
Value of a: 13
Value of a: 14
Value of a: 15
Value of a: 16
Value of a: 17
Value of a: 18
Value of a: 19
29. Scala
Scala - do-while Loop
Value of a: 10
Value of a: 11
Value of a: 12
Value of a: 13
Value of a: 14
Value of a: 15
Value of a: 16
Value of a: 17
Value of a: 18
Value of a: 19

var a = 10;
// do loop execution
do {
  println("Value of a: " + a);
  a = a + 1;
} while (a < 20)
Like a while statement, except that it tests the condition at the end
of the loop body.
30. Scala
Scala - for loop
Value of a: 1
Value of a: 2
Value of a: 3
Value of a: 4
Value of a: 5
Value of a: 6
Value of a: 7
Value of a: 8
Value of a: 9
Value of a: 10

var a = 0;
// for loop execution with a range
for (a <- 1 to 10) {
  println("Value of a: " + a);
}
Executes a sequence of statements multiple times and abbreviates
the code that manages the loop variable.
31. Scala
Scala - Functions - Representations
1. def add(x: Int, y: Int): Int = {
     return x + y
   }

2. def add(x: Int, y: Int) = { // return type is inferred
     x + y                     // "return" keyword is optional
   }

3. // Curly braces are optional on single-line blocks
   def add(x: Int, y: Int) = x + y
32. Scala
● Collections provide different ways to store data
● Scala collections hierarchy represents a structure of collections that
can contain data in different ways
Scala - Collections - Overview
34. Scala
● An ordered collection of data
● May or may not be indexed
● Array, List, Vector
Scala - Collections - Sequence
35. Scala
● Contains elements of same type
● Fixed size, ordered sequence of data
● Values are contiguous in memory
● Indexed by position
Scala - Collections - Sequence - Array
36. Scala
Scala - Collections - Sequence - Array
● var languages = Array("Ruby", "SQL", "Python")
● languages(0) // Access first element — "Ruby"
● languages(1) = "C++" // Update an element in place
● // Iterate over array
for (x <- languages) {
  println(x);
}
● languages :+ "scala" // Returns a new array; languages is unchanged
37. Scala
● List represents linked list with a value and a pointer to the
next element
● Random access is slow because elements are not
contiguous in memory
● Theoretically unbounded in size
Scala - Collections - Sequence - List
38. Scala
● var number_list = List(1, 2, 3)
● number_list :+ 4 // Returns a new list List(1, 2, 3, 4); number_list is unchanged
Scala - Collections - Sequence - List
39. Scala
● Bag of data with no duplicates
● Order is not guaranteed
Scala - Collections - Sets
40. Scala
● var set = Set(76, 5, 9, 1, 2);
● set = set + 9; // 9 already present — set is unchanged
● set = set + 20; // Adds 20
● set(5); // true — tests membership
● set(14); // false
Scala - Collections - Sets
41. Scala
Scala - Collections - Tuples
● Unlike array or list, tuples can hold elements of different
data types
● Example var t = (14, 45.69, "Australia") or
● var t = Tuple3(14, 45.69, "Australia") //Same result
● Can be accessed using a 1-based accessor for each value
42. Scala
Scala - Collections - Tuples
● t._1 // 14
● t._3 // Australia
● Can be deconstructed into names bound to each value in
the tuple
● Tuples are immutable
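A minimal sketch of the 1-based accessors and the deconstruction described above (the value names are illustrative):

```scala
val t = (14, 45.69, "Australia")

// 1-based accessors
println(t._1)  // 14
println(t._3)  // Australia

// Deconstruct the tuple into names bound to each value
val (id, latitude, country) = t
println(country)  // Australia
```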
43. Scala
● Collection of key/value pairs
● Allows indexing values by a specific key for fast access
● Java HashMap, Python dictionary
Scala - Collections - Maps
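The deck has no Map hands-on at this point; a minimal sketch of indexing values by key (the keys and values are made up):

```scala
// Immutable Map — Scala's default
val capitals = Map("India" -> "New Delhi", "Japan" -> "Tokyo")

println(capitals("India"))            // New Delhi — fast access by key
println(capitals.contains("France"))  // false

// "Adding" a pair returns a new map; capitals itself is unchanged
val more = capitals + ("France" -> "Paris")
println(more("France"))               // Paris
```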
45. Scala
Scala - Collections
● Two types of Collection
○ Mutable
○ Immutable
● By default, Scala uses the immutable map
● For mutable maps explicitly import
○ import scala.collection.mutable.Map
https://docs.scala-lang.org/overviews/collections/overview.html
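A short sketch of the explicit import for the mutable variant (the map contents are illustrative):

```scala
import scala.collection.mutable.Map  // explicit import — immutable Map is the default

val scores = Map("maths" -> 90)
scores("maths") = 95          // update in place
scores += ("science" -> 88)   // add a pair in place
println(scores("maths"))      // 95
```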
46. Scala
Scala - Higher Order Functions
● A higher-order function is a function which takes another
function as an argument
● Describes “how” the work is to be done on a collection
47. Scala
● The map applies the function to each value in the
collection and returns a new collection
Scala - Higher Order Functions - Map
49. Scala
var list = List(1, 2, 3)
1. def incrByOne(x: Int): Int = x + 1
   list.map(incrByOne) // List(2, 3, 4)
2. list.map(x => x + 1) // List(2, 3, 4)
3. var list1 = list.map(_ + 1) // Same output
Scala - Higher Order Functions - Map
50. Scala
● flatMap takes a function as an argument and this function
must return a collection
● Returned collection could be empty
● The flatMap applies the function passed to it as argument
on each value in the collection just like map
● The returned collections from each call of function are
then merged together to form a new collection
Scala - Higher Order Functions - flatMap
51. Scala
● var list = List("Python", "Go")
● list.flatMap(lang => lang + "#") // List(P, y, t, h, o, n, #, G, o, #)
● list.flatMap(_ + "#") // Same output
Scala - Higher Order Functions - flatMap
52. Scala
● Filter applies a function to each value in the collection and
returns a new collection with values that satisfy a condition
specified in the function
Scala - Higher Order Functions - Filter
53. Scala
Scala - Higher Order Functions - Filter
● var list = List("Scala", "R", "Python", "Go", "SQL")
● list.filter(lang => lang.contains("S")) // List(Scala, SQL)
54. Scala
● Each of the previous higher order functions returned a
new collection after applying the transformation
● At times we do not want the function to return a new
collection
○ Building a collection we never use wastes memory
on the JVM
● foreach applies a function to each value of collection
without returning a new collection
Scala - Higher Order Functions - foreach
55. Scala
● var list = List("Python", "Go")
● list.foreach(println)
Scala - Higher Order Functions - foreach
58. Scala
Scala - Higher Order Functions - Reduce
● reduce applies a binary function (for example, addition) to
pairs of values again and again until all the values are
reduced to a single value
● Please note, in case of reduce the input collection should
contain elements of the same data type
59. Scala
Scala - Higher Order Functions - Reduce
● var list = List(3, 7, 13, 16)
● list.reduce((x, y) => x + y) // 39
● list.reduce(_ + _) // same
60. Scala
import java.util.{Date, Locale}
import java.text.DateFormat
import java.text.DateFormat._

object USDate {
  def getDate(): String = {
    val now = new Date
    val df = getDateInstance(LONG, Locale.US)
    return df.format(now)
  }
}
Scala - Interaction with Java
61. Scala
● If we are working on a big project containing hundreds of
source files, it becomes really tedious to compile these files
manually
● We need a build tool to manage the compilation of all
these files
● SBT is a build tool for Scala and Java projects, similar to
Java’s Maven or Ant
Scala - Build Tool - SBT
62. Scala
● Sample Scala project is located at CloudxLab GitHub repository
https://github.com/cloudxlab/bigdata/tree/master/scala/sbt
● Clone the repository
git clone https://github.com/cloudxlab/bigdata.git
● Or update the repository
cd ~/bigdata && git pull origin master
Scala - Build Tool - SBT - Demo
63. Scala
● cd ~/bigdata/scala/sbt
● Look at the build.sbt file and the directory layout. Scala files must be
located at
src/main/scala
● Run the project using
sbt run
Scala - Build Tool - SBT - Demo
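The slides point at the build.sbt file without reproducing it; a minimal build.sbt for a project with the src/main/scala layout might look like this (the project name and versions are illustrative, not the actual file from the repository):

```scala
// build.sbt — minimal sketch; project name and versions are made up
name := "hello-sbt"
version := "0.1.0"
scalaVersion := "2.12.18"
```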
64. Scala
● Case classes are regular classes that:
○ Are immutable by default
○ Can be pattern matched
○ Get compiler-generated hashCode and equals methods,
so less boilerplate code
○ Help in writing more expressive and maintainable code
Scala - Case Classes
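The bullets above can be sketched with a small example (the `Point` class and `describe` function are made up for illustration):

```scala
// Case class: immutable fields, no `new` needed (generated apply()),
// and compiler-generated equals/hashCode
case class Point(x: Int, y: Int)

val p = Point(1, 2)
val q = Point(1, 2)
println(p == q)  // true — structural equality from the generated equals

// Pattern matching on a case class
def describe(pt: Point): String = pt match {
  case Point(0, 0) => "origin"
  case Point(x, 0) => "on x-axis at " + x
  case _           => "somewhere else"
}
println(describe(Point(3, 0)))  // on x-axis at 3
```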
65. Scala
● Sample Scala project is located at CloudxLab GitHub repository
https://github.com/cloudxlab/bigdata/tree/master/scala/case_classes
● Clone the repository
git clone https://github.com/cloudxlab/bigdata.git
● Or update the repository
cd ~/bigdata && git pull origin master
Scala - Case Classes - Demo
66. Scala
● cd ~/bigdata/scala/case_classes
● scala case_class.scala // Run case class demo
● scala pattern_matching.scala // Run pattern matching demo
Scala - Case Classes - Demo