Social networks are not new, even though websites like Facebook and Twitter might lead you to believe they are (and no, I'm not talking about Myspace). Social networks are extremely interesting models of human behavior, and their study dates back to the early twentieth century. Thanks to those websites, however, data scientists now have access to far more data than the anthropologists who studied the networks of tribes ever did.
Because networks take a relationship-centered view of the world, the data structures we analyze model real-world behavior and community. Through a suite of algorithms derived from mathematical graph theory, we can compute and predict the behavior of individuals and communities. This has a number of practical applications, from recommendation to law enforcement to election prediction, and more.
NetworkX is an open-source Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks. NetworkX can load, store, and analyze networks; generate new networks; build network models; and draw networks. It is a computational network modelling tool, not a software development tool. The first public release of the library, which is written entirely in Python, was in April 2005.
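NetworkX stores a graph internally as a dictionary-of-dictionaries adjacency structure. As a rough illustration of that idea (this is not NetworkX's actual API, and the node names are made up), a minimal pure-Python sketch:

```python
# Toy undirected graph stored as an adjacency dict, mimicking
# the dict-based structure NetworkX uses internally.
graph = {}

def add_edge(g, u, v):
    """Insert an undirected edge u--v, creating nodes as needed."""
    g.setdefault(u, set()).add(v)
    g.setdefault(v, set()).add(u)

add_edge(graph, "alice", "bob")
add_edge(graph, "bob", "carol")
add_edge(graph, "alice", "carol")

# The degree of a node is just the size of its neighbor set.
degrees = {node: len(nbrs) for node, nbrs in graph.items()}
print(degrees)  # each node has degree 2 in this triangle
```

In NetworkX itself the equivalent operations are one-liners on a `Graph` object, but the underlying representation is the same dictionary-based adjacency idea.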
k-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, serving as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells.
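A minimal sketch of the algorithm just described, in pure Python on one-dimensional data (the data values and iteration count are illustrative assumptions; a real implementation would use scikit-learn or NumPy):

```python
import random

def kmeans(points, k, iters=20, seed=42):
    """Naive 1-D k-means: assign each point to the nearest mean,
    recompute each mean from its cluster, and repeat."""
    rng = random.Random(seed)
    means = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Each observation joins the cluster with the nearest mean.
            i = min(range(k), key=lambda j: abs(p - means[j]))
            clusters[i].append(p)
        # Each mean becomes the prototype (centroid) of its cluster.
        means = [sum(c) / len(c) if c else means[i]
                 for i, c in enumerate(clusters)]
    return sorted(means)

data = [1.0, 1.2, 0.8, 9.8, 10.1, 10.4]
print(kmeans(data, 2))  # two means, near 1.0 and 10.1
```

In one dimension the Voronoi cells mentioned above are simply intervals: every point between two adjacent means is claimed by the nearer one.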
Here is a basic linear algebra review for the Machine Learning class. This is growing into a new class on the mathematics of intelligent systems, in which I will cover:
1. Linear Algebra - from the basics to the Cayley-Hamilton theorem, with applications
2. Mathematical Analysis - from set theory to the Riemann integral
3. Topology - mostly in Hilbert spaces
4. Optimization - convex functions, KKT conditions, duality theory, etc.
It is going to be interesting.
Introduction to Machine Learning with Find-S - Knoldus Inc.
Machine Learning with Find-S algorithm. Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed.
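The Find-S algorithm itself is small enough to sketch directly: starting from the most specific hypothesis, it generalizes one attribute at a time over the positive examples only. A pure-Python sketch (the training data below is an invented "enjoy sport"-style toy set):

```python
def find_s(examples):
    """Find-S: return the maximally specific hypothesis consistent
    with the positive training examples.  Each example is a pair
    (attribute_tuple, label); negative examples are ignored."""
    h = None
    for attrs, label in examples:
        if label != "yes":
            continue  # Find-S never updates on negative examples
        if h is None:
            h = list(attrs)  # start with the first positive example
        else:
            # Generalize: keep matching attributes, wildcard the rest.
            h = [hi if hi == ai else "?" for hi, ai in zip(h, attrs)]
    return h

# Invented training data in the classic "enjoy sport" style.
data = [
    (("sunny", "warm", "normal", "strong"), "yes"),
    (("sunny", "warm", "high",   "strong"), "yes"),
    (("rainy", "cold", "high",   "strong"), "no"),
    (("sunny", "warm", "high",   "strong"), "yes"),
]
print(find_s(data))  # ['sunny', 'warm', '?', 'strong']
```

The result reads as "sunny AND warm AND any humidity AND strong wind," the tightest rule that still covers every positive example seen.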
Social Network Analysis Introduction including Data Structure Graph overview - Doug Needham
Given in Cincinnati on August 18th, 2015, as part of the DataSeed Meetup group.
Introduction to Graph Neural Networks: Basics and Applications - Katsuhiko Is... - Preferred Networks
This presentation explains basic ideas of graph neural networks (GNNs) and their common applications. Primary target audiences are students, engineers and researchers who are new to GNNs but interested in using GNNs for their projects. This is a modified version of the course material for a special lecture on Data Science at Nara Institute of Science and Technology (NAIST), given by Preferred Networks researcher Katsuhiko Ishiguro, PhD.
A pre-conference workshop on Machine Learning was organized as part of #doppa17, the DevOps++ Global Summit 2017. The workshop was conducted by Dr. Vivek Vijay and Dr. Sandeep Yadav. All copyrights are reserved by the authors.
From Data to Knowledge thru Grailog Visualization - giurca
Visualization of Data & Knowledge: graphs remove the entry barrier to logic, moving from 1-dimensional symbol-logic knowledge specification to 2-dimensional graph-logic visualization in a systematic 2D syntax. This supports a human in the loop across knowledge elicitation, specification, validation, and reasoning, and is combinable with graph transformation, ('associative') indexing, and parallel processing for efficient implementation of specifications.
Presentation given at DMZ about Data Structure Graphs.
Also known as Applying Social Network Analysis Techniques to Data Modeling and Data Architecture
Inductive Triple Graphs: A purely functional approach to represent RDF - Jose Emilio Labra Gayo
Slides of my presentation at the 3rd International Workshop on Graph Structures for Knowledge Representation, part of the International Joint Conference on Artificial Intelligence, Beijing, China, 4 August 2013.
This tutorial will provide you with a basic understanding of graph database technology and the ability to quickly begin development of a graph database application. You will have the capability to recognize graph-based problems and present the benefits of using graph technology for problem resolution.
The tutorial will give you an understanding of:
• Graph theory - origins and concepts
• Benefits of graph databases
• Different types of graph databases
• Typical graph database API
• Programming basics
• Use cases
Bring your laptops for a hands-on opportunity to practice with some sample code. A basic understanding of Java programming is a recommended prerequisite for this course. This session is led by the InfiniteGraph technical team, and the demonstration code will be drawn from InfiniteGraph examples; however, the broader educational presentation is product-neutral and not a commercial presentation of their products.
To participate in the hands-on portion of the graph tutorial users must have:
• Java programming experience
• Java Developer Kit (JDK)
• Current InfiniteGraph installed on laptop. (To download visit www.objectivity.com/infinitegraph)
• HelloGraph test – Upon installing IG, run HelloGraph to test the install. (HelloGraph can be found online at http://wiki.infinitegraph.com/2.1/w/index.php?title=Download_Sample_Code)
Leon Guzenda was one of the founding members of Objectivity in 1988 and one of the original architects of Objectivity/DB. He currently works with Objectivity's major customers to help them effectively develop and deploy complex applications and systems that use the industry's highest-performing, most reliable DBMS technology, Objectivity/DB. He also liaises with technology partners and industry groups to help ensure that Objectivity/DB remains at the forefront of database and distributed computing technology. Leon has more than 35 years of experience in the software industry. At Automation Technology Products, he managed the development of the ODBMS for the Cimplex solid modeling and numerical control system. Before that, he was Principal Project Director for International Computers Ltd. in the United Kingdom, delivering major projects for NATO and leading multinationals. He was also design and development manager for ICL's 2900 IDMS product. He spent the first 7 years of his career working in defense and government systems. Leon has a B.S. degree in Electronic Engineering from the University of Wales.
The tutorial describes existing approaches to model graph databases and different techniques implemented in RDF and Database engines including their main drawbacks when a large volume of interconnected data needs to be traversed.
Graph Algorithms, Sparse Algebra, and the GraphBLAS with Janice McMahon - Christopher Conlan
This talk will provide a very brief overview of graph algorithms and their expression using sparse linear algebra, followed by a high-level description of the GraphBLAS library and its usage.
Graphs are among the most important abstract data types in computer science, and the algorithms that operate on them are critical to modern life. Algorithms on graphs are applied in many ways in today's world—from Web rankings to metabolic networks, from finite element meshes to semantic graphs. Graphs have been shown to be powerful tools for modeling these complex problems because of their simplicity and generality. GraphBLAS is an API specification that defines standard building blocks for graph algorithms in the language of linear algebra. Graph algorithms have long taken advantage of the idea that a graph can be represented as a matrix, and graph operations can be performed as linear transformations and other linear algebraic operations on sparse matrices. For example, matrix-vector multiplication can be used to perform a step in a breadth-first search. The GraphBLAS specification (and the various libraries that implement it) provides data structures and functions to compute these linear algebraic operations. In particular, the GraphBLAS specifies sparse matrix objects which map well to graphs where vertices are likely connected to relatively few neighbors (i.e. the degree of a vertex is significantly smaller than the total number of vertices in the graph). The benefits of this approach are reduced algorithmic complexity, ease of implementation, and improved performance.
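The breadth-first-search example above can be made concrete: one BFS step is a matrix-vector product over the Boolean (OR, AND) semiring, masked by the set of unvisited vertices. The dense 0/1 matrix below is an illustrative toy; GraphBLAS itself would use sparse matrix objects:

```python
# Directed adjacency matrix for a 4-vertex toy graph.
A = [
    [0, 1, 1, 0],  # vertex 0 -> 1, 2
    [0, 0, 0, 1],  # vertex 1 -> 3
    [0, 0, 0, 1],  # vertex 2 -> 3
    [0, 0, 0, 0],  # vertex 3 (sink)
]

def bfs_levels(A, source):
    """Level-synchronous BFS where each step is a Boolean
    matrix-vector product: next[j] = OR_i (frontier[i] AND A[i][j]),
    restricted to unvisited vertices."""
    n = len(A)
    frontier = [v == source for v in range(n)]
    visited = frontier[:]
    level, levels = 0, {source: 0}
    while any(frontier):
        level += 1
        nxt = [not visited[j]
               and any(frontier[i] and A[i][j] for i in range(n))
               for j in range(n)]
        for j, reached in enumerate(nxt):
            if reached:
                visited[j] = True
                levels[j] = level
        frontier = nxt
    return levels

print(bfs_levels(A, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```

The inner `any(... and ...)` is exactly the (OR, AND) semiring replacing the usual (+, *) of arithmetic matrix-vector multiplication, which is the core idea the GraphBLAS generalizes.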
This deck is meant to help compliance officers and technical implementors at Virtual Asset Service Providers (VASPs) implement TRISA for travel rule compliance.
The goal of TRISA is to enable compliance with the FATF and FinCEN Travel Rules, as well as Travel Rules implemented by equivalent authorities, without: modifying the core blockchain protocols, incurring increased transaction costs, or modifying virtual currency peer-to-peer transaction flows.
Visual diagnostics for more effective machine learning - Benjamin Bengfort
The model selection process is a search for the best combination of features, algorithm, and hyperparameters that maximize F1, R2, or silhouette scores after cross-validation. This view of machine learning often leads us toward automated processes such as grid searches and random walks. Although this approach allows us to try many combinations, we are often left wondering if we have actually succeeded.
By enhancing model selection with visual diagnostics, data scientists can inject human guidance to steer the search process. Visualizing feature transformations, algorithmic behavior, cross-validation methods, and model performance gives us a peek into the high-dimensional realm in which our models operate. As we continue to tune our models, trying to minimize both bias and variance, these glimpses allow us to be more strategic in our choices. The result is more effective modeling, speedier results, and greater understanding of underlying processes.
Visualization is an integral part of the data science workflow, but visual diagnostics are directly tied to machine learning transformers and models. The Yellowbrick library extends the scikit-learn API providing a Visualizer object, an estimator that learns from data and produces a visualization as a result. In this talk, we will explore feature visualizers, visualizers for classification, clustering, and regression, as well as model analysis visualizers. We'll work through several examples and show how visual diagnostics steer model selection, making machine learning more effective.
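The grid search mentioned above is, at its core, an exhaustive loop over hyperparameter combinations. A minimal pure-Python sketch, with an invented scoring surface standing in for a cross-validated metric such as F1:

```python
import itertools

def grid_search(score_fn, grid):
    """Exhaustive grid search: evaluate every combination of
    hyperparameters and keep the best-scoring one."""
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        s = score_fn(**params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

# Toy scoring surface peaking at alpha=0.1, depth=3 (a stand-in
# for a real cross-validated score; the names are invented).
def cv_score(alpha, depth):
    return 1.0 - (alpha - 0.1) ** 2 - (depth - 3) ** 2

grid = {"alpha": [0.01, 0.1, 1.0], "depth": [1, 3, 5]}
print(grid_search(cv_score, grid))  # ({'alpha': 0.1, 'depth': 3}, 1.0)
```

The loop makes the limitation discussed above visible: the number of evaluations is the product of the grid sizes, which is exactly why visual steering of the search is valuable.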
Visualizing Model Selection with Scikit-Yellowbrick: An Introduction to Devel... - Benjamin Bengfort
This is an overview of the goals and roadmap for the Yellowbrick model visualization library (www.scikit-yb.org). If you're interested in contributing to Yellowbrick or writing visualizers, this is a good place to get started.
In the presentation we discuss the expected workflow of data scientists interacting with the model selection triple and Scikit-Learn. We describe the Yellowbrick API and its relationship to the Scikit-Learn API. We introduce our primary object: the Visualizer, an estimator that learns from data and displays it visually. Finally, we describe the requirements for developing for Yellowbrick, the tools and utilities in place, and how to get started.
Yellowbrick is a suite of visual diagnostic tools called "Visualizers" that extend the Scikit-Learn API to allow human steering of the model selection process. In a nutshell, Yellowbrick combines Scikit-Learn with Matplotlib in the best tradition of the Scikit-Learn documentation, but to produce visualizations for your models!
This presentation was given during the opening session of the 2017 Spring DDL Research Labs.
Network analyses are powerful methods for both visual analytics and machine learning but can suffer as their complexity increases. By embedding time as a structural element rather than a property, we will explore how time series and interactive analysis can be improved on Graph structures. Primarily we will look at decomposition in NLP-extracted concept graphs using NetworkX and Graph Tool.
Machine learning is the hacker art of describing the features of instances that we want to make predictions about, then fitting the data that describes those instances to a model form. Applied machine learning has come a long way from its beginnings in academia, and with tools like Scikit-Learn, it's easier than ever to generate operational models for a wide variety of applications. Thanks to the ease and variety of the tools in Scikit-Learn, the primary job of the data scientist is model selection. Model selection involves performing feature engineering, hyperparameter tuning, and algorithm selection. These dimensions of machine learning often lead computer scientists towards automatic model selection via optimization (maximization) of a model's evaluation metric. However, the search space is large, and grid search approaches to machine learning can easily lead to failure and frustration. Human intuition is still essential to machine learning, and visual analysis in concert with automatic methods can allow data scientists to steer model selection towards better fitted models, faster. In this talk, we will discuss interactive visual methods for better understanding, steering, and tuning machine learning models.
Data products derive their value from data and generate new data in return; as a result, machine learning techniques must be applied to their architecture and their development. Machine learning fits models to make predictions on unknown inputs and must be generalizable and adaptable. As such, fitted models cannot exist in isolation; they must be operationalized and user facing so that applications can benefit from the new data, respond to it, and feed it back into the data product. Data product architectures are therefore life cycles and understanding the data product lifecycle will enable architects to develop robust, failure free workflows and applications. In this talk we will discuss the data product life cycle, explore how to engage a model build, evaluation, and selection phase with an operation and interaction phase. Following the lambda architecture, we will investigate wrapping a central computational store for speed and querying, as well as incorporating a discussion of monitoring, management, and data exploration for hypothesis driven development. From web applications to big data appliances; this architecture serves as a blueprint for handling data services of all sizes!
Entity Resolution is the task of disambiguating manifestations of real world entities through linking and grouping and is often an essential part of the data wrangling process. There are three primary tasks involved in entity resolution: deduplication, record linkage, and canonicalization; each of which serve to improve data quality by reducing irrelevant or repeated data, joining information from disparate records, and providing a single source of information to perform analytics upon. However, due to data quality issues (misspellings or incorrect data), schema variations in different sources, or simply different representations, entity resolution is not a straightforward process and most ER techniques utilize machine learning and other stochastic approaches.
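As a rough illustration of the deduplication task described above, one naive approach blocks records on a normalized key. This is a sketch only; real entity resolution systems use fuzzy matching and learned similarity, and the record values below are invented:

```python
import re
from collections import defaultdict

def normalize(name):
    """Crude canonical key: lowercase, strip punctuation, and sort
    the tokens so that word order ('Smith, John' vs 'John Smith')
    is ignored."""
    tokens = re.sub(r"[^a-z0-9 ]", "", name.lower()).split()
    return " ".join(sorted(tokens))

def deduplicate(records):
    """Group records whose normalized keys collide; each group is a
    candidate set of manifestations of one real-world entity."""
    groups = defaultdict(list)
    for rec in records:
        groups[normalize(rec)].append(rec)
    return list(groups.values())

records = ["John A. Smith", "smith, john a", "Jane Doe", "DOE, Jane"]
for group in deduplicate(records):
    print(group)  # two groups of two records each
```

Key collision is the simplest possible match rule; it handles the "different representations" problem mentioned above but not misspellings, which is where the stochastic and machine-learning-based approaches come in.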
An Interactive Visual Analytics Dashboard for the Employment Situation Report - Benjamin Bengfort
The Employment Situation Report is a monthly news release by the Bureau of Labor Statistics which describes the results of the Current Population Survey. Its release is widely anticipated by economists, journalists, and politicians as it is used to forecast the economic condition of the United States by describing ongoing trends and has a broad impact on public and corporate economic confidence leading directly to investment decisions. The report itself is in a PDF format that is comprised primarily of text and tabular information. Quickly and correctly interpreting the results of the jobs report is vital for quality reporting and decision making, but the report is more suited for longer study than deriving insights. In this project we explore the use of an interactive dashboard for visual analytics upon the released BLS data. Using an application demonstration and a usability study we will show that visually interacting with the most current employment data, users are able to rapidly achieve rich insights similar to those reported on in the text of the Employment Situation Report.
Scikit-Learn is a powerful machine learning library implemented in Python with numeric and scientific computing powerhouses Numpy, Scipy, and matplotlib for extremely fast analysis of small to medium sized data sets. It is open source, commercially usable, and contains many modern machine learning algorithms for classification, regression, clustering, feature extraction, and optimization. For this reason Scikit-Learn is often the first tool in a data scientist's toolkit for machine learning on incoming data sets.
The purpose of this one day course is to serve as an introduction to Machine Learning with Scikit-Learn. We will explore several clustering, classification, and regression algorithms for a variety of machine learning tasks and learn how to implement these tasks with our data using Scikit-Learn and Python. In particular, we will structure our machine learning models as though we were producing a data product, an actionable model that can be used in larger programs or algorithms, rather than simply as a research or investigation methodology.
In this one day workshop, we will introduce Spark at a high level context. Spark is fundamentally different than writing MapReduce jobs so no prior Hadoop experience is needed. You will learn how to interact with Spark on the command line and conduct rapid in-memory data analyses. We will then work on writing Spark applications to perform large cluster-based analyses including SQL-like aggregations, machine learning applications, and graph algorithms. The course will be conducted in Python using PySpark.
Using only simple rules for local interactions, groups of agents can form self-organizing super-organisms or “flocks” that show global emergent behavior. When agents are also extended with memory and goals the resulting flock not only demonstrates emergent behavior, but also collective intelligence: the ability for the group to solve problems that might be beyond the ability of the individual alone. Until now, research has focused on the improvement of particle design for global behavior; however, techniques for human-designed particles are task-specific. In this paper we will demonstrate that evolutionary computing techniques can be applied to design particles, not only to optimize the parameters for movement but also the structure of controlling finite state machines that enable collective intelligence. The evolved design not only exhibits emergent, self-organizing behavior but also significantly outperforms a human design in a specific problem domain. The strategy of the evolved design may be very different from what is intuitive to humans and perhaps reflects more accurately how nature designs systems for problem solving. Furthermore, evolutionary design of particles for collective intelligence is more flexible and able to target a wider array of problems either individually or as a whole.
An Overview of Spanner: Google's Globally Distributed Database - Benjamin Bengfort
Spanner is a globally distributed database that provides external consistency between data centers and stores data in a schema based semi-relational data structure. Not only that, Spanner provides a versioned view of the data that allows for instantaneous snapshot isolation across any segment of the data. This versioned isolation allows Spanner to provide globally consistent reads of the database at a particular time allowing for lock-free read-only transactions (and therefore no communications overhead for consensus during these types of reads). Spanner also provides externally consistent reads and writes with a timestamp-based linear execution of transactions and two phase commits. Spanner is the first distributed database that provides global sharding and replication with strong consistency semantics.
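The versioned, snapshot-isolated reads described above can be illustrated with a toy multi-version store: each write keeps its timestamp, and a read at time t returns the newest version written at or before t. This is only the idea; Spanner's TrueTime machinery is far more involved, and the keys and timestamps below are invented:

```python
import bisect

class VersionedStore:
    """Toy multi-version store: every write is kept alongside its
    timestamp, and a snapshot read at time t sees the newest
    version written at or before t, with no locking."""
    def __init__(self):
        self.versions = {}  # key -> ([timestamps], [values]), ascending

    def write(self, key, value, ts):
        tss, vals = self.versions.setdefault(key, ([], []))
        i = bisect.bisect_right(tss, ts)
        tss.insert(i, ts)
        vals.insert(i, value)

    def read(self, key, ts):
        tss, vals = self.versions.get(key, ([], []))
        i = bisect.bisect_right(tss, ts)  # count of versions with ts' <= ts
        return vals[i - 1] if i else None

store = VersionedStore()
store.write("balance", 100, ts=1)
store.write("balance", 250, ts=5)
print(store.read("balance", ts=3))  # 100 (snapshot taken before ts=5)
print(store.read("balance", ts=9))  # 250
```

Because old versions are never overwritten, a read at any fixed timestamp is automatically consistent and lock-free, which is the property the abstract highlights for Spanner's read-only transactions.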
Natural Language Processing (NLP) is often taught at the academic level from the perspective of computational linguists. However, as data scientists, we have a richer view of the world of natural language - unstructured data that by its very nature has important latent information for humans. NLP practitioners have benefitted from machine learning techniques to unlock meaning from large corpora, and in this class we’ll explore how to do that particularly with Python, the Natural Language Toolkit (NLTK), and to a lesser extent, the Gensim Library.
NLTK is an excellent library for machine learning-based NLP, written in Python by experts from both academia and industry. Python allows you to create rich data applications rapidly, iterating on hypotheses. Gensim provides vector-based topic modeling, which is currently absent in both NLTK and Scikit-Learn. The combination of Python + NLTK means that you can easily add language-aware data products to your larger analytical workflows and applications.
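As a taste of the kind of preprocessing NLTK automates, here is a crude bag-of-words sketch in pure Python (NLTK's tokenizers are far richer; the regular expression and sample sentences here are assumptions for illustration):

```python
import re
from collections import Counter

def tokenize(text):
    """Toy tokenizer: lowercase and keep alphabetic runs.  NLTK
    provides linguistically informed tokenizers instead."""
    return re.findall(r"[a-z']+", text.lower())

docs = [
    "The quick brown fox jumps over the lazy dog.",
    "Never jump over the lazy dog quickly.",
]
# Bag-of-words: each document becomes a term-frequency vector.
bags = [Counter(tokenize(d)) for d in docs]
print(bags[0]["the"])  # 2
```

Term-frequency vectors like these are the starting point for most of the machine-learning techniques the class covers, from classification with NLTK to the vector-based topic models in Gensim.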
A presentation of Hermansky & Morgan's 1994 paper, RASTA Processing of Speech. Learn the dramatic effect of RASTA on critical band analysis when combined with PLP to do speech detection!
Hermansky, Hynek, and Nelson Morgan. "RASTA processing of speech." Speech and Audio Processing, IEEE Transactions on 2.4 (1994): 578-589.
District Data Labs Workshop
Current Workshop: August 23, 2014
Previous Workshops:
- April 5, 2014
Data products are usually software applications that derive their value from data by leveraging the data science pipeline and generate data through their operation. They aren’t apps with data, nor are they one time analyses that produce insights - they are operational and interactive. The rise of these types of applications has directly contributed to the rise of the data scientist and the idea that data scientists are professionals “who are better at statistics than any software engineer and better at software engineering than any statistician.”
These applications have been largely built with Python. Python is flexible enough to develop extremely quickly on many different types of servers and has a rich tradition in web applications. Python contributes to every stage of the data science pipeline including real time ingestion and the production of APIs, and it is powerful enough to perform machine learning computations. In this class we’ll produce a data product with Python, leveraging every stage of the data science pipeline to produce a book recommender.
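A book recommender of the kind described can be sketched with simple item co-occurrence: recommend books that appear most often alongside the user's books on other users' shelves. The data, titles, and weighting below are invented for illustration:

```python
from collections import Counter

# Toy user -> books-read data (titles are made up).
reads = {
    "ana":  {"Dune", "Neuromancer", "Foundation"},
    "ben":  {"Dune", "Foundation", "Hyperion"},
    "cleo": {"Neuromancer", "Snow Crash"},
}

def recommend(user, reads):
    """Score each unseen book by how much its readers' shelves
    overlap with the target user's shelf (item co-occurrence)."""
    mine = reads[user]
    scores = Counter()
    for other, books in reads.items():
        if other == user or not (mine & books):
            continue
        for b in books - mine:
            scores[b] += len(mine & books)  # weight by shelf overlap
    return [b for b, _ in scores.most_common()]

print(recommend("ana", reads))  # ['Hyperion', 'Snow Crash']
```

A production recommender would use ratings, matrix factorization, or learned embeddings, but this captures the core data-product loop: user data in, ranked suggestions out, and new interaction data generated in return.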
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
5. Graph Theory
The mathematical study of the properties and applications of graphs, originally motivated by the study of games of chance.
Traced back to Euler's work on the Königsberg Bridges problem (1735), which led to the concept of Eulerian graphs.
6. A graph, G, consists of a finite set denoted V or V(G), whose elements are called vertices (nodes), together with a collection E or E(G) of ordered or unordered pairs {u, v}, where u, v ∈ V, called edges (links).
7. Graphs can be directed or undirected.
In directed graphs (DiGraphs), the edges are ordered pairs: (u, v).
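A minimal NetworkX sketch of the distinction (the vertex names are illustrative):

```python
import networkx as nx

# Undirected graph: edges are unordered pairs {u, v}
G = nx.Graph()
G.add_edge("u", "v")
print(G.has_edge("v", "u"))  # True

# Directed graph (DiGraph): edges are ordered pairs (u, v)
D = nx.DiGraph()
D.add_edge("u", "v")
print(D.has_edge("u", "v"))  # True
print(D.has_edge("v", "u"))  # False
```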
15. Paths in a Network
(diagram: a path from i to k)
Length(p) = number of edges in the path
Paths(i, j) = set of paths from i to j
Shortest (unweighted) path length
Traversal is about following paths between vertices.
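These path notions map directly onto NetworkX calls; the toy edge list below is made up for illustration:

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([("i", "a"), ("a", "b"), ("b", "k"), ("i", "c"), ("c", "k")])

# Paths(i, k): the set of simple paths from i to k
paths = list(nx.all_simple_paths(G, "i", "k"))
print(len(paths))  # 2: i-a-b-k and i-c-k

# Shortest (unweighted) path length, counted in edges
print(nx.shortest_path_length(G, "i", "k"))  # 2 (via c)
```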
17. Classes of Graph Algorithms
Generally speaking, CS algorithms are designed to solve classes of math problems, but graph algorithms can be further categorized as:
1. Traversal (flow, shortest distance)
2. Search (optimal node location)
3. Subgraphing (find a minimum weighted spanning tree)
4. Clustering (group neighbors of nodes)
18. For Reference
Bellman-Ford Algorithm | Dijkstra's Algorithm
Ford-Fulkerson Algorithm | Kruskal's Algorithm
Nearest Neighbor | Depth-First and Breadth-First Search
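Two of these reference algorithms are built into NetworkX; a sketch on a made-up weighted graph:

```python
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 1), ("b", "c", 2), ("a", "c", 3), ("c", "d", 1),
])

# Subgraphing: minimum weighted spanning tree via Kruskal's algorithm
T = nx.minimum_spanning_tree(G, algorithm="kruskal")
print(T.size(weight="weight"))  # total weight 4: edges a-b, b-c, c-d

# Traversal: single-source shortest distances via Dijkstra's algorithm
dist = nx.single_source_dijkstra_path_length(G, "a")
print(dist["d"])  # 4
```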
23. Graph Analysis for Big Data (Uber Trips in San Francisco)
http://radar.oreilly.com/2014/07/there-are-many-use-cases-for-graph-databases-and-analytics.html
25. Why Graphs?
1. Graphs are abstractions of real life
2. Represent information flows that exist
3. Explicitly demonstrate relationships
4. Enable computations across large datasets
5. Allow us to compute locally on areas of interest with small traversals
6. Because everyone else is doing it (PageRank, SocialGraph)
33-37. Machine Learning using Graphs
- Machine learning is iterative, but iteration can also be seen as traversal.
- Machine learning requires many instances with which to fit a model to make predictions.
- Many domains have structures already modeled as graphs (health records, finance).
- Important analyses are graph algorithms: clusters, influence propagation, centrality.
- Performance benefits on sparse data.
- More understandable implementation.
38. Iterative PageRank in Python
import numpy as np
from scipy.sparse import csc_matrix

def pageRank(G, s=.85, maxerr=.001):
    """G: (n, n) adjacency matrix; s: damping factor."""
    n = G.shape[0]
    # transform G into a row-normalized Markov matrix M
    M = csc_matrix(G, dtype=float)
    rsums = np.array(M.sum(1))[:, 0]   # row sums (out-link weights)
    ri, ci = M.nonzero()
    M.data /= rsums[ri]
    sink = rsums == 0                  # bool array of sink states
    # compute pagerank r until we converge
    ro, r = np.zeros(n), np.ones(n)
    while np.sum(np.abs(r - ro)) > maxerr:
        ro = r.copy()
        for i in range(n):
            Ii = np.array(M[:, i].todense())[:, 0]  # in-links of state i
            Si = sink / float(n)                    # account for sink states
            Ti = np.ones(n) / float(n)              # account for teleportation
            r[i] = ro.dot(Ii * s + Si * s + Ti * (1 - s))
    return r / sum(r)  # return normalized pagerank
41. Classes of Graph Analyses
● Existence: Does there exist [a path, a vertex, a set] within [constraints]?
● Construction: Given a set of [paths, vertices], is a [constraint] graph construction possible?
● Enumeration: How many [vertices, edges] exist with [constraints], and is it possible to list them?
● Optimization: Given several [paths, etc.], is one the best?
44. A social network is a data structure whose nodes are actors (proper nouns: people, organizations, places, roles) that transmit information to each other according to their relationships (links) with other actors.
Semantic definitions of both actors and relationships are illustrative:
Actor: person, organization, place, role
Relationship: friend, acquaintance, penpal, correspondent
Social networks are complex: they have non-trivial topological features that do not occur in simple networks.
Almost any system humans participate and communicate in can be modeled as a social network (hence its rich semantic relevance).
45. Attributes of a Social Network
- Scale-Free Networks
  http://en.wikipedia.org/wiki/Scale-free_network
  - Degree distribution follows a power law
  - Significant topological features (not random)
  - Commonness of vertices with a degree that greatly exceeds the average degree ("hubs"), which serve some purpose
- Small-World Networks
  http://en.wikipedia.org/wiki/Small-world_network
  - Most nodes aren't neighbors but can be reached quickly
  - Typical distance between two nodes grows proportionally to the logarithm of the order of the network
  - Exhibits many specific clusters
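Both network classes can be generated and checked directly in NetworkX; the generator parameters below are illustrative, not canonical:

```python
import networkx as nx

# Scale-free: Barabási-Albert preferential attachment (power-law degrees)
sf = nx.barabasi_albert_graph(n=1000, m=2, seed=42)
degrees = [d for _, d in sf.degree()]
print(sum(degrees) / len(degrees))  # mean degree, roughly 4
print(max(degrees))                 # a "hub" far above the mean

# Small-world: Watts-Strogatz ring lattice with random rewiring
sw = nx.connected_watts_strogatz_graph(n=1000, k=6, p=0.1, seed=42)
print(nx.average_clustering(sw))            # high local clustering
print(nx.average_shortest_path_length(sw))  # short, grows ~ log(n)
```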
46. Degree Distribution
- We've looked so far at per-node properties (degree, etc.), and averaging them gives us some information.
- Instead, let's look at the entire distribution.
Degree Distribution: P(k) = n_k / n, the fraction of the n nodes that have degree k
Power-law distribution: P(k) ∝ k^(−γ), typically with 2 < γ < 3
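The empirical distribution P(k) can be sketched in a few lines (the Barabási-Albert graph here is a stand-in example):

```python
from collections import Counter
import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=2, seed=1)
n = G.number_of_nodes()

# Empirical degree distribution: P(k) = (# nodes of degree k) / n
counts = Counter(d for _, d in G.degree())
P = {k: c / n for k, c in sorted(counts.items())}
print(sum(P.values()))  # probabilities sum to 1
# In a power-law network, small k dominates, with a long tail of hubs
```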
50. Graphs contain semantically relevant information - “Property Graph”
https://github.com/tinkerpop/blueprints/wiki/Property-Graph-Model
51. Property Graph Model
The primary data model for Graphs, containing these elements:
1. a set of vertices
○ each vertex has a unique identifier.
○ each vertex has a set of outgoing edges.
○ each vertex has a set of incoming edges.
○ each vertex has a collection of properties defined by a map
from key to value.
2. a set of edges
○ each edge has a unique identifier.
○ each edge has an outgoing tail vertex.
○ each edge has an incoming head vertex.
○ each edge has a label that denotes the type of relationship
between its two vertices.
○ each edge has a collection of properties defined by a map from
key to value.
52. Graph: An object that contains vertices and edges.
Element: An object that can have any number of key/value pairs
associated with it (i.e. properties)
Vertex: An object that has incoming and outgoing edges.
Edge: An object that has a tail and head vertex.
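In NetworkX, the property-map part of this model is just the attribute dict carried by every vertex and edge; the names and properties below are made up for illustration:

```python
import networkx as nx

G = nx.DiGraph()
# Vertices with key/value properties
G.add_node("alice", kind="person", age=34)
G.add_node("acme", kind="organization")
# An edge with a label (relationship type) and properties
G.add_edge("alice", "acme", label="works_for", since=2019)

print(G.nodes["alice"]["kind"])           # person
print(G.edges["alice", "acme"]["label"])  # works_for
```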
53. Modeling property graphs with labels, relationships, and properties.
http://neo4j.com/developer/guide-data-modeling/
66. Shows which nodes are likely pathways of information, and can be used to determine how a graph will break apart if nodes are removed.
67. Betweenness: the sum of the fraction of all-pairs shortest paths that pass through a particular node. Can also be normalized by the number of nodes or by an edge weight.
68. A measure of reach; how fast information will
spread to all other nodes from a single node.
69. Closeness: the average number of hops required to reach any other node in the network. The reciprocal of the mean shortest-path distance: C(u) = (n − 1) / Σ_v d(u, v), where n is the number of nodes reachable from u.
70. A measure of relative influence: who is closest to the most important people in the graph? Kind of like "power behind the scenes", or influence beyond popularity.
71. Eigenvector: proportional to the sum of centrality scores of the neighborhood.
(PageRank is a stochastic eigenvector scoring)
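All four centrality measures above are one-liners in NetworkX; Zachary's karate club graph (bundled with the library) is a handy test bed:

```python
import networkx as nx

G = nx.karate_club_graph()  # classic 34-node social network

bc = nx.betweenness_centrality(G)  # pathway/bridge nodes
cc = nx.closeness_centrality(G)    # reach: inverse mean distance
ec = nx.eigenvector_centrality(G)  # influence of a node's neighborhood
pr = nx.pagerank(G, alpha=0.85)    # stochastic eigenvector scoring

# The instructor (node 0) and club president (node 33) tend to top every measure
print(max(bc, key=bc.get), max(ec, key=ec.get))
```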
72. Detection of communities or groups that exist in a network by counting triangles.
Measures "transitivity": tripartite relationships that indicate clusters.
T(i) = number of triangles with i as a vertex
Local Clustering Coefficient: C(i) = 2·T(i) / (deg(i)·(deg(i) − 1))
Graph Clustering Coefficient: the average of C(i) over all vertices
Counting the number of triangles is a start toward making inferences about "transitive closures", e.g. predictions or inferences about relationships.
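Triangle counts and both clustering measures, sketched on a made-up four-node graph:

```python
import networkx as nx

# Triangle a-b-c, plus a pendant edge c-d
G = nx.Graph([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")])

print(nx.triangles(G, "a"))   # T(a) = 1
print(nx.clustering(G, "c"))  # 2*T(c) / (deg(c)*(deg(c)-1)) = 2/(3*2) = 1/3
print(nx.transitivity(G))     # graph-level: 3 * triangles / connected triples
```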
73. Green lines are connections from the node, black are the other connections
75. Partitive classification utilizes subgraphing techniques
to find the minimum number of splits required to divide
a graph into two classes. Laplacian Matrices are often
used to count the number of spanning trees.
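A sketch of that spectral idea: the sign of the Laplacian's Fiedler vector (the eigenvector of the second-smallest eigenvalue) splits a graph into two classes. Requires SciPy; the barbell graph is an illustrative example, not the slides' dataset:

```python
import networkx as nx

# Two 5-cliques joined by a single edge: an obvious two-class split
G = nx.barbell_graph(5, 0)

fiedler = nx.fiedler_vector(G, seed=42)  # from the graph Laplacian
groups = [set(), set()]
for node, value in zip(G.nodes(), fiedler):
    groups[value > 0].add(node)
print(groups)  # the two cliques: {0..4} and {5..9}
```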
76. Distance based techniques like k-Nearest Neighbors
embed distances in graphical links, allowing for very
fast computation and blocking of pairwise distance
computations.
77. Just add probability! Bayesian Networks are directed,
acyclic graphs that encode conditional dependencies
and can be trained from data, then used to make
inferences.
79. Layouts
- Open Ord
- http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=731088
- Draws large scale undirected graphs with visual clusters
- Yifan Hu
- http://yifanhu.net/PUB/graph_draw_small.pdf
- Force Directed Layout with multiple levels and quadtree
- Force Atlas
- http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0098679
- A continuous force directed layout (default of Gephi)
- Fruchterman Reingold
- http://cs.brown.edu/~rt/gdhandbook/chapters/force-directed.pdf
- Graph as a system of mass particles (nodes are particles, edges are springs); this is the basis for force-directed layouts
Others: circular, shell, neato, spectral, dot, twopi …
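Several of these layouts ship with NetworkX; each returns a dict of node coordinates that can be passed to a drawing routine:

```python
import networkx as nx

G = nx.karate_club_graph()

# Fruchterman-Reingold (spring) layout: nodes repel, edges pull like springs
pos = nx.spring_layout(G, seed=42)  # dict: node -> (x, y) coordinates
print(len(pos))  # 34 positions, one per node

# A few of the other built-in layouts
circular = nx.circular_layout(G)
shell = nx.shell_layout(G)
spectral = nx.spectral_layout(G)
```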
90. Plan of Study
Extraction of Network from Email
Introduction to NetworkX
Analyzing our Email Networks
Visualizing our Email Network
Relief from Gephi