How is algorithmic thinking applied in (mathematical) finance? What are the main differences in approaches?
Some basics of theoretical computer science are explained, and some hands-on applications to finance are shown together with industry practice.
This document provides an overview of neural art and neural style transfer using convolutional neural networks. It first discusses how visual perception and computer vision work, then explains how neural networks like VGG19 can be used to generate artistic images by combining the content of one image with the artistic style of another. Specifically, it describes how the content image's filter responses are matched and the style image's gram matrix is matched to generate a new image that reflects both the content and style.
Computational Social Science, Lecture 07: Counting Fast, Part I (jakehofman)
This document discusses techniques for finding the intersection between two large datasets efficiently. It begins by noting that while computers can perform billions of operations per second, datasets are huge, ranging from terabytes to petabytes in size. Two techniques for finding intersections are examined: scanning and sorting. Scanning has a runtime of O(n^2) while sorting has a runtime of O(n log n), making sorting faster. Hashing is proposed as an even faster alternative with a runtime of O(n). Reasoning about algorithmic runtimes and how they scale with input size is discussed.
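The hash-based technique the summary mentions can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the lecture's own code:

```python
def intersect(a, b):
    """Find common elements of two sequences in O(n) expected time.

    Building a hash set of the smaller input takes linear time, and
    each membership test is O(1) on average, beating the O(n log n)
    sort-based approach and the O(n^2) nested scan.
    """
    small, large = (a, b) if len(a) <= len(b) else (b, a)
    lookup = set(small)                       # O(min sides) to build
    return [x for x in large if x in lookup]  # one O(1) probe per element

print(intersect([3, 1, 4, 1, 5], [5, 9, 2, 6, 5, 3]))
```

On real terabyte-scale data the same idea appears as a hash join, with the set replaced by a disk- or cluster-backed hash table.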
LSGAN - SIMPle (Simple Idea Meaningful Performance Level up) (Hansol Kang)
LSGAN uses an MSE loss in place of the standard GAN loss, generating more realistic data.
A review of the LSGAN paper and a PyTorch-based implementation.
[Reference]
Mao, Xudong, et al. "Least squares generative adversarial networks." Proceedings of the IEEE International Conference on Computer Vision. 2017.
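The least-squares objectives from Mao et al. can be sketched as plain functions over discriminator outputs. This is a hypothetical illustration using the common label choice a=0 for fake and b=c=1 for real; the slides' own PyTorch implementation is not reproduced here:

```python
def lsgan_d_loss(d_real, d_fake, a=0.0, b=1.0):
    # Least-squares discriminator loss: push D(x) toward label b on
    # real samples and D(G(z)) toward label a on generated samples.
    real_term = sum((d - b) ** 2 for d in d_real) / len(d_real)
    fake_term = sum((d - a) ** 2 for d in d_fake) / len(d_fake)
    return 0.5 * (real_term + fake_term)

def lsgan_g_loss(d_fake, c=1.0):
    # Generator loss: unlike the sigmoid cross-entropy GAN loss, this
    # still penalizes fakes that sit far from the decision boundary,
    # even when they are already on the "real" side.
    return 0.5 * sum((d - c) ** 2 for d in d_fake) / len(d_fake)
```

In the PyTorch setting these reduce to `nn.MSELoss()` against tensors of the target labels, which is why the summary describes LSGAN as "GAN with MSE loss".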
Slides from our PacificVis 2015 presentation.
The paper tackles the problem of “giant hairballs”, the dense and tangled structures that often result from visualization of large social graphs. The authors propose a high-dimensional rotation technique called AGI3D, combined with the ability to filter elements based on social centrality values. AGI3D targets a high-dimensional embedding of a social graph and its projection onto 3D space. It allows the user to rotate the social graph layout in the high-dimensional space by mouse-dragging a vertex. The rotation gives the user the illusion of destructively reshaping the social graph layout, but in reality it helps the user find a preferred position and direction in the high-dimensional space from which to view the internal structure of the layout, leaving it unmodified. A prototype implementation of the proposal, called Social Viewpoint Finder, is tested on about 70 social graphs, and the paper reports four of the analysis results.
This document discusses backtracking algorithms and provides examples for solving problems using backtracking, including:
1) Generating all subsets and permutations of a set using backtracking.
2) The eight queens problem, which can be solved using a backtracking algorithm that places queens on a chessboard one by one while checking for threats.
3) Key components of backtracking algorithms including candidate construction, checking for solutions, and pruning search spaces for efficiency.
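The components listed above (candidate construction, checking for solutions, and pruning) can be illustrated with a compact solver for the eight queens problem. This is a generic sketch, not the document's own pseudocode:

```python
def solve_queens(n):
    """Collect all placements of n non-attacking queens via backtracking."""
    solutions = []

    def place(row, cols):
        if row == n:                       # all rows filled: a solution
            solutions.append(cols[:])
            return
        for col in range(n):               # candidate construction
            # prune: skip columns and diagonals already under threat
            if all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols)):
                cols.append(col)
                place(row + 1, cols)
                cols.pop()                 # undo and try the next candidate

    place(0, [])
    return solutions

print(len(solve_queens(8)))  # the eight queens problem has 92 solutions
```

The same skeleton generates subsets or permutations by changing what counts as a candidate and removing the pruning test.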
The document summarizes a class on Merkle trees. It recaps blockchain basics and the exploration of the Bitcoin Core code. It discusses how transactions are recorded in blocks and introduces Merkle trees as a method to efficiently verify that all transactions are included in a block. The class plan includes a quiz the following week to assess understanding of concepts covered so far, including mining. Students are notified that Project 2 will be assigned and that they should begin work before the next class.
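The pairwise-hashing construction behind Merkle trees can be sketched briefly. This is a simplified illustration: Bitcoin itself uses double SHA-256 over transaction IDs, but the shape of the computation is the same:

```python
import hashlib

def merkle_root(tx_hashes):
    """Compute a Merkle root by hashing pairs until one hash remains.

    A light client can then verify that a transaction is included in a
    block using only O(log n) sibling hashes along the path to the
    root, instead of downloading every transaction.
    """
    level = [hashlib.sha256(tx.encode()).digest() for tx in tx_hashes]
    while len(level) > 1:
        if len(level) % 2:                 # odd count: repeat the last hash
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()

print(merkle_root(["tx1", "tx2", "tx3", "tx4"]))
```

Changing any single input transaction changes the root, which is why storing only the root in the block header is enough to commit to the whole set.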
The document describes experience with partial SIMDization in the Open64 compiler using dynamic programming. It discusses applying partial SIMDization to pack isomorphic instructions into SIMD instructions without fully vectorizing loops. It outlines applying a dynamic programming approach based on the Barik-Zhao-Sarkar algorithm to choose optimal SIMD tile combinations. Initial results show applying this to a hot loop in GROMACS was able to SIMDize over 88% of operations, demonstrating the potential of the technique.
This document discusses the derivation of backpropagation for neural networks. It begins with an overview of logistic regression and the sigmoid activation function. It then shows how to apply the chain rule to calculate the gradients needed for backpropagation. Specifically, it derives that the error term δ for each layer can be calculated as the error from the next layer multiplied by the weights and activation. This allows efficient computation of the gradient for each weight and bias term. Sample Python code is also provided to implement a basic neural network with backpropagation.
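The chain-rule step described above can be shown for the single-unit logistic regression case the document starts from. A minimal sketch with illustrative values (not the document's own code):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sgd_step(w, b, x, y, lr=0.5):
    # Forward pass through one logistic unit.
    a = sigmoid(w * x + b)
    # Applying the chain rule to the cross-entropy loss collapses to
    # the error term delta = a - y at the output; the gradients for
    # the weight and bias are then delta * x and delta. This delta is
    # the base case that backpropagation passes to earlier layers.
    delta = a - y
    return w - lr * delta * x, b - lr * delta

# A few steps push the prediction for (x=1, y=1) toward 1.
w, b = 0.0, 0.0
for _ in range(100):
    w, b = sgd_step(w, b, x=1.0, y=1.0)
print(sigmoid(w * 1.0 + b))
```

In a multi-layer network the same delta is multiplied by the next layer's weights and the activation derivative to obtain each hidden layer's error term, exactly as the summary describes.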
The document discusses error analysis for quasi-Monte Carlo methods used for numerical integration. It introduces the concepts of reproducing kernel Hilbert spaces and mean square discrepancy to analyze integration error. Specifically, it shows that the mean square discrepancy of randomized low-discrepancy point sets can be computed in O(n) operations, whereas the standard discrepancy requires O(n^2) operations, making randomized quasi-Monte Carlo methods more efficient for high-dimensional integration problems.
1) The document analyzes the complexity of scheduling inventory releasing jobs to satisfy time-varying demand.
2) It considers problems where processing times and product quantities are equal or varying, and objectives of minimizing total inventory or maximum inventory.
3) It finds that most general cases are NP-hard but some restricted cases can be solved in polynomial time, such as with identical processing times and a bounded number of product types.
This document discusses lattices and lattice-based cryptography. It defines what a lattice and basis are, provides examples, and describes the properties they must satisfy. It then explains the Goldreich-Goldwasser-Halevi (GGH) encryption scheme which uses lattices for encryption. The document outlines the key operations of GGH encryption - how public and private keys are generated from a lattice basis, how encryption and decryption work, and how the security relies on the closest vector problem. It notes that a 1999 paper by Nguyen showed GGH encryption reveals some plaintext information and the decryption problem can be simplified.
The document discusses building robust machine learning systems that can handle concept drift. It introduces the challenges of concept drift when the underlying data distribution changes over time. It proposes using Gaussian process classifiers with an adaptive training window approach. The approach monitors for concept drift and retrains the model if detected. It tests the approach on artificial data streams with different drift scenarios and finds the adaptive approach performs better than a static model at handling concept drift. Future work could explore other drift detection methods and ensembles of adaptive Gaussian process classifiers.
In this talk we consider the question of how to use QMC with an empirical dataset, such as a set of points generated by MCMC. Using ideas from partitioning for parallel computing, we apply recursive bisection to reorder the points, and then interleave the bits of the QMC coordinates to select the appropriate point from the dataset. Numerical tests show that in the case of known distributions this is almost as effective as applying QMC directly to the original distribution. The same recursive bisection can also be used to thin the dataset, by recursively bisecting down to many small subsets of points, and then randomly selecting one point from each subset. This makes it possible to reduce the size of the dataset greatly without significantly increasing the overall error. Co-author: Fei Xie
This document provides an overview of brute force and divide-and-conquer algorithms. It discusses various brute force algorithms like computing a^n, string matching, closest pair problem, convex hull problems, and exhaustive search algorithms like the traveling salesman problem and knapsack problem. It also analyzes the time efficiency of these brute force algorithms. The document then discusses the divide-and-conquer approach and provides examples like merge sort, quicksort, and matrix multiplication. It provides pseudocode and analysis for mergesort. In summary, the document covers brute force and divide-and-conquer techniques for solving algorithmic problems.
The document provides an overview of recursive and iterative algorithms. It discusses key differences between recursive and iterative algorithms such as definition, application, termination, usage, code size, and time complexity. Examples of recursive algorithms like recursive sum, factorial, binary search, tower of Hanoi, and permutation generator are presented along with pseudocode. Analysis of recursive algorithms like recursive sum, factorial, binary search, Fibonacci number, and tower of Hanoi is demonstrated to determine their time complexities. The document also discusses iterative algorithms, proving an algorithm's correctness, the brute force approach, and store and reuse methods.
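The recursive-versus-iterative contrast, and the "store and reuse" idea the summary mentions, can be illustrated with Fibonacci numbers. A generic example, not the document's own pseudocode:

```python
from functools import lru_cache

def fib_naive(n):
    """Plain recursion: T(n) = T(n-1) + T(n-2) + O(1), exponential time."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Store and reuse: each value is computed once, so O(n) time."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_iter(n):
    """Iterative version: O(n) time, O(1) space, no call-stack growth."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

`fib_naive(50)` would take hours, while `fib_memo(50)` and `fib_iter(50)` return instantly, which is the time-complexity difference the document analyzes.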
In February 2017, the University of Sherbrooke invited me to talk about deep learning and my professional experience to students of the Master's program. These slides are extracted from my original presentation.
The document discusses amortized analysis, which averages the time required to perform a sequence of operations over all operations. It describes three methods of amortized analysis: aggregate analysis, accounting analysis, and potential analysis. As an example, it analyzes the amortized cost of operations on a dynamic table using these three methods and shows that the amortized cost of insertion and deletion is O(1), even though some operations may have higher actual costs when triggering expansions or contractions of the table.
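The aggregate-analysis result for the dynamic table can be checked with a short simulation. An illustrative sketch that counts element writes only:

```python
def total_insert_cost(n):
    """Aggregate analysis of n appends into a doubling table.

    An append normally costs 1; when the table is full, it also pays
    to copy every existing element. The copy costs sum to
    1 + 2 + 4 + ... < 2n, so the total stays under 3n, i.e. O(1)
    amortized per insertion, as the document shows.
    """
    cost, size, capacity = 0, 0, 1
    for _ in range(n):
        if size == capacity:        # expansion: copy `size` elements
            cost += size
            capacity *= 2
        cost += 1                   # the append itself
        size += 1
    return cost

print(total_insert_cost(1000) / 1000)   # amortized cost stays below 3
```

The accounting and potential methods reach the same bound by charging each cheap insertion a little extra to prepay for the next expansion.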
The document discusses the AlexNet convolutional neural network architecture that won the ImageNet challenge in 2012. It describes the layers of AlexNet in detail, including the input and output sizes of each layer. Some key details are that AlexNet had 5 convolutional layers and 3 fully connected layers, used ReLU activations, dropout, data augmentation, and other techniques. It discusses how AlexNet was implemented across two GPUs due to memory constraints. Finally, it briefly mentions subsequent winners of the ImageNet challenge that improved on AlexNet's architecture.
1. Introduction to time and space complexity.
2. Different types of asymptotic notations and their limit definitions.
3. Growth of functions and types of time complexities.
4. Space and time complexity analysis of various algorithms.
This document discusses brute force and divide-and-conquer algorithms. It covers various brute force algorithms like computing a^n, string matching, closest pair problem, convex hull problems, and exhaustive search algorithms like the traveling salesman problem and knapsack problem. It then discusses divide-and-conquer techniques like merge sort, quicksort, and multiplication of large integers. Examples and analysis of algorithms like selection sort, bubble sort, and mergesort are provided.
Multi-scalar multiplication: state of the art and new ideasGus Gutoski
A 90-minute online presentation for zkStudyClub, delivered 2020-06-01. I present a new idea with a demonstrated 5% speed-up for multi-scalar multiplication. When combined with precomputation, this method could yield upwards of 20% speed-up.
This document provides an overview of finite fields and their importance in cryptography. It discusses how finite fields allow for efficient storage and arithmetic operations on integers for encryption algorithms. The document outlines the basic properties of groups, rings, and fields. It also covers modular arithmetic, greatest common divisors, and Euclid's algorithm for computing gcd. The goal is to introduce concepts needed to understand the arithmetic of the AES encryption algorithm, which uses operations in the finite field GF(2^8).
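Euclid's algorithm, and its extended form that yields the modular inverses which make division possible in a finite field, can be sketched as follows (a generic illustration):

```python
def gcd(a, b):
    """Euclid's algorithm: gcd(a, b) = gcd(b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).

    When gcd(a, p) = 1, the coefficient x is the multiplicative inverse
    of a modulo p, the operation that turns Z_p into a field.
    """
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

# inverse of 7 modulo 31: 7 * 9 = 63 = 2*31 + 1
g, x, _ = egcd(7, 31)
print(g, x % 31)
```

AES works in GF(2^8) rather than Z_p, but the same extended-Euclid idea, applied to polynomials, computes the inverses used in its S-box.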
This document summarizes key concepts from a lecture on finite fields and their use in cryptography. It introduces finite fields and explains why they are important for cryptography. It discusses the structure of finite fields, including that every finite field has p^n elements, where p is a prime number. It also provides examples of computing in finite fields through modular arithmetic.
This document summarizes key concepts from a lecture on finite fields including:
- Finite fields have a specific structure with a set number of elements that allows for division, unlike modular arithmetic over integers.
- Modern cryptographic algorithms like AES rely on computations in finite fields to avoid weaknesses from patterns in integer arithmetic.
- The lecture will introduce groups, rings, fields and their properties to provide the foundations for understanding polynomial arithmetic over finite fields used in AES.
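As a concrete taste of the polynomial arithmetic over finite fields used in AES, here is multiplication in GF(2^8) with the AES reduction polynomial. This is a standard sketch; the test values below are the worked examples from the AES specification:

```python
def gf256_mul(a, b):
    """Multiply two elements of GF(2^8) as used by AES.

    Bytes are viewed as polynomials over GF(2); the product is reduced
    modulo the AES polynomial x^8 + x^4 + x^3 + x + 1 (0x11B).
    """
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a                  # polynomial addition is XOR
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF         # multiply a by x
        if carry:
            a ^= 0x1B               # reduce by the AES modulus
    return p

# worked example from the AES specification: {57} * {83} = {c1}
print(hex(gf256_mul(0x57, 0x83)))
```

Because every nonzero byte has an inverse under this multiplication, GF(2^8) supports the division that plain modular arithmetic over 256 lacks, which is the structural point in the bullets above.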
Linear Programming - Simplex Algorithm (Yunus Hatipoglu)
The worst case complexity of the simplex algorithm is exponential. Specifically, in the worst case the simplex algorithm visits all 2^n vertices, where n is the number of variables, and this takes exponential time. However, in practice simplex algorithms usually perform much better than the worst case. The specific pivot rule used can also affect the performance and complexity - some rules tend to work faster than others on average. Linear programming and the simplex algorithm are used in areas like resource allocation, production scheduling, transportation/logistics, and finance to optimize objectives subject to constraints.
Teaching Mathematics Concepts via Computer Algebra Systems (inventionjournals)
Most articles examine computer algebra systems (CAS) as they relate to the teaching and learning of mathematics, from advantages to disadvantages. This paper explores junior undergraduate students' ability to solve and distinguish tricky examples using various CAS technologies. Additionally, an understanding of how CAS technologies are adopted and applied in professional environments is valuable, both in guiding improvements to these tools and in identifying new tools that can aid mathematicians.
The document summarizes a lecture on dynamic programming and matrix chain multiplication. It discusses dynamic programming as a technique for solving optimization problems by breaking them down into overlapping subproblems and storing solutions. It then provides an example of using dynamic programming to find the most efficient way to multiply a chain of matrices by considering all possible parenthesizations and choosing the one with the lowest operation count. A 4-step process for dynamic programming problems is outlined.
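The parenthesization search described above is the classic matrix chain recurrence: the best cost for a subchain tries every split point and reuses the stored costs of the two halves. A compact sketch with illustrative dimensions, not taken from the document:

```python
def matrix_chain_cost(dims):
    """Minimum scalar multiplications to multiply a chain of matrices.

    Matrix i has shape dims[i-1] x dims[i]. m[i][j] holds the best cost
    for the subchain i..j; each entry tries every split point k, the
    overlapping-subproblems structure dynamic programming exploits.
    """
    n = len(dims) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):               # subchain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j]
                          + dims[i - 1] * dims[k] * dims[j]
                          for k in range(i, j))
    return m[1][n]

# for (10x30)(30x5)(5x60), ((A1 A2) A3) costs 1500 + 3000 = 4500
print(matrix_chain_cost([10, 30, 5, 60]))
```

The table fill runs in O(n^3) time over O(n^2) subproblems, versus the exponential number of full parenthesizations a brute-force search would examine.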
A Lexi Search Approach to Generalized Travelling Salesman Problem (BRNSSPublicationHubI)
This document describes a Lexi Search approach for solving the Generalized Traveling Salesman Problem (GTSP). The GTSP involves finding the minimum cost tour for a salesman to visit n cities grouped into clusters, where the salesman must visit clusters in sequence. The author proposes encoding the problem as patterns and words that can be searched lexicographically. A computer program implementing the Lexi Search algorithm is developed and tested on sample problems. The algorithm is shown to solve fairly large problems in less time than other approaches.
This document summarizes a new method for projective splitting algorithms called projective splitting with forward steps. The method allows using forward steps instead of proximal steps when the operator is Lipschitz continuous. This can improve efficiency compared to only using proximal steps. Preliminary computational tests on LASSO problems show the method with greedy block selection and asynchronous delays can speed up convergence compared to non-greedy, synchronous versions. However, more work is still needed to fully understand adaptive step sizes and how to minimize the separation function at each iteration.
These slides contain the basic concepts of computer graphics, compiled with the help of various resources.
We have aimed to visit various fields of computer graphics so that any beginner can follow the basic concepts with the help of pictures and diagrams.
The quality of images and their presentation is a very important parameter by which we can upgrade the quality of the teaching and learning process.
The vector and bitmap methods involve fully describing an image drawing with the help of parameters, which is best suited to computer graphics.
Cluster analysis is an unsupervised learning technique used to group similar data objects into clusters. It aims to partition data into groups called clusters such that objects within a cluster are as similar as possible while objects in different clusters are as dissimilar as possible. The k-means algorithm is commonly used for partitioning-based clustering. It works by randomly selecting k initial cluster centroids and then iteratively assigning data points to their nearest centroid and recalculating the centroids until cluster membership stabilizes. However, k-means is sensitive to outliers and noise since outliers can distort cluster centroids.
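The assignment/update loop described above can be sketched in pure Python. This is a 1-D toy illustration; real implementations work on vectors and use more careful initialization:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Basic k-means: assign each point to its nearest centroid, then
    recompute each centroid as the mean of its cluster, and repeat.

    Because centroids are means, a single extreme outlier can drag a
    centroid far from the rest of its cluster, the sensitivity the
    summary notes.
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                     # assignment step
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]   # update step
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

print(kmeans([1.0, 1.2, 0.8, 9.0, 9.3, 8.7], k=2))
```

On this toy data the loop settles on centroids near 1.0 and 9.0; variants such as k-medoids replace the mean with a representative point to reduce outlier sensitivity.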
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
3. Outline
1st half - build algorithmic lens
We’ll get a quick overview of the fundamentals of algorithms
Take-aways: 1 picture, 2 paradoxes, a few example algorithms
2nd half - look through it on finance
3 practical cases
Some theoretical insights
Blockchain
Auctions
Non-monetary algorithms
Maxim Litvak Finance through algorithmic lens 22 April, 2017 3 / 53
4. Example: find an atom in the universe
How fast can we find a specific atom in the universe?
The current estimate is that there are 10^80 (≈ 2^270) atoms in the universe.
Sounds like a needle in a haystack at least . . .
5. Example: find an atom in the universe
1st attempt
Check each atom to see if it’s the right one
With some luck, the 1st guess works
With no luck, the last guess works (after 10^80 guess operations)
On average we’d need 5 · 10^79 operations
Can we do better than this?
6. Example: find an atom in the universe
Algorithm
Split the universe into 2 halves: which half contains the atom we need?
Repeat
Result
In 270 operations we’ll find it
Could be done by hand in less than 5 minutes . . .
We’ve seen an algorithm being extremely fast on a large input
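The halving idea above is ordinary binary search over a sorted collection; a minimal sketch (the "universe" of 2^20 numbered atoms and the target are illustrative assumptions):

```python
# Binary search: halve the candidate range until the target is found.
def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid, steps
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

# 2^20 "atoms": found in at most ~21 halvings instead of up to 2^20 scans.
atoms = list(range(2**20))
index, steps = binary_search(atoms, 777_777)
print(index, steps)
```

The same logarithmic count scales to the full universe: log2(10^80) ≈ 270 halvings.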
8. Example: Hanoi tower
Task
Move all disks from one peg (tower) to another
Rules:
1. Move one disk at a time
2. A disk must be smaller than the one below it (never place a disk on a smaller one)
3. You can move only the upper disk
9. Example: Hanoi tower
3 moves needed for 2 disks
7 moves are needed for 3 disks
15 moves are needed for 4 disks
. . . about a million moves for 20 disks (2^20 − 1 = 1,048,575)
The number of moves grows exponentially with the number of disks!
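The counts above come from the recurrence "move n−1 disks aside, move the biggest disk, move n−1 disks back", which gives 2^n − 1 moves; a minimal sketch:

```python
# Tower of Hanoi: move n disks from peg a to peg c via peg b.
def hanoi(n, a="A", b="B", c="C", moves=None):
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, a, c, b, moves)   # park n-1 disks on the spare peg
    moves.append((a, c))           # move the largest disk
    hanoi(n - 1, b, a, c, moves)   # bring the n-1 disks back on top
    return moves

# 2 disks -> 3 moves, 3 -> 7, 4 -> 15, 20 -> 1,048,575
print([len(hanoi(n)) for n in (2, 3, 4)])
```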
10. Example: Hanoi tower
We’ve seen a task with an algorithm that is fast on huge input (atom
searching)
We’ve seen a task with an algorithm that is slow on small input
(Hanoi tower)
11. Picture to take away
The height is log(n) (fast), the width is n = 2^h, the number of nodes is 2n − 1 = 2^(h+1) − 1 (slow)
12. Picture to take away
13. Paradoxes - Multiplication
It’s possible to multiply 2 numbers faster than the school method
Andrey Kolmogorov conjectured in 1960 that there’s no multiplication method faster than the school method (used by humanity for thousands of years), and organized a seminar to prove it
A 17-year-old student (Anatoly Karatsuba) found a faster method . . .
20. Paradoxes - Multiplication
Difference
School method - 4 multiplications and some additions
Karatsuba method - 3 multiplications and some more additions
Karatsuba is faster because the saving of one multiplication compounds through the recursion
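A minimal sketch of the Karatsuba recursion, showing the three sub-multiplications per split instead of the school method’s four:

```python
# Karatsuba multiplication: x*y with 3 sub-multiplications instead of 4.
def karatsuba(x, y):
    if x < 10 or y < 10:                     # base case: single digits
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    p = 10 ** m
    x_hi, x_lo = divmod(x, p)
    y_hi, y_lo = divmod(y, p)
    a = karatsuba(x_hi, y_hi)                 # high parts
    b = karatsuba(x_lo, y_lo)                 # low parts
    c = karatsuba(x_hi + x_lo, y_hi + y_lo)   # cross terms: one multiply
    return a * p * p + (c - a - b) * p + b

print(karatsuba(1234, 5678))  # 7006652
```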
21. Paradoxes - Braess paradox
Decreasing the number of opportunities might increase efficiency
In option pricing, by contrast, adding an opportunity can’t reduce the price
22. Paradoxes - Braess paradox
The roads START-DOWN and UP-END take 1 hour; the roads START-UP and DOWN-END depend on traffic (100% of cars on the road: 1 hour, 50%: 30 min)
Initial equilibrium: 1.5h trip time (one half goes the upper way)
Adding a high-speed UP-DOWN highway (with 0h time to cross it) increases the trip time to 2h
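A back-of-the-envelope check of the equilibrium times above, assuming (as on the slide) that a congestion edge costs x hours when a fraction x of all cars uses it and a fixed edge costs 1 hour:

```python
# Braess paradox: travel times before and after adding the free UP-DOWN link.
def route_time(congested_fraction, fixed_hours=1.0):
    # congestion edge costs x hours when fraction x of all cars uses it
    return congested_fraction + fixed_hours

# Before: cars split 50/50 over the two symmetric routes.
before = route_time(0.5)           # 0.5 + 1 = 1.5 hours

# After: the free shortcut makes START-UP + UP-DOWN + DOWN-END dominant,
# so every car takes both congestion edges (fraction 1 on each).
after = 1.0 + 0.0 + 1.0            # 2 hours
print(before, after)
```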
23. What is "fast"
From the algorithmic point of view, "speed" means the asymptotic behaviour of the algorithm as the input grows
For a certain input, algorithm A might be faster than B, but "slower" in the asymptotic sense, i.e. with input big enough B outperforms A
Big-O notation: f(n) = O(g(n)) if, up to a constant factor, g(n) catches up with or outperforms f(n) from some n onwards
24. What is "fast"
reset
set title "Algorithms’ running time on input"
set xlabel "X"
set xrange [0:5]
set xtics 0,1,5
set ylabel "Y"
set yrange [-5:30]
set ytics -5,5,30
hanoi_tower(x) = 2**x
sort_numbers(x) = x*log(x)
mult_numbers(x) = x**2
binary_search(x) = log(x)
25. What is "hard" to compute
As a rule of thumb, problems that can be computed in polynomial time are considered "easy" to compute
. . . and those whose computation time can’t be bounded by a polynomial (e.g. exponential) are considered "hard"
Sometimes computational "hardness" is a good thing, if you don’t want something to be easily computed (e.g. cryptography)
A lot of "hard" problems are considered "hard" because no faster solution is known (this includes the P = NP problem)
26. Examples of algorithm’s speed (current stand)
Sorting numbers - O(n log(n))
Binary search - O(log(n))
Satisfiability (whether some logical statement can be made true) - exponential (best known)
Integer number factorization (important in cryptography) - no polynomial-time algorithm known
27. Non-standard examples of complexity . . .
xkcd.com
28. Algorithmic topics in finance
Cases in practice of quantitative finance
Theoretical insights
Blockchain
Auctions
Non-monetary mechanisms
29. Case: generate normal r.v.
We can generate a normally distributed r.v. in a few ways; not all of them are computationally equal
(Bad solution)
Generate 1000 uniformly distributed r.v. and use the CLT
Their sum (centered by mean and normalized by standard deviation) is
(approximately) normally distributed
However, it’s computationally expensive to do it this way
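A minimal sketch of this CLT approach, summing uniforms and standardizing (the count of 1000 follows the slide; 12 is the classic cheap choice):

```python
import random

# CLT-based normal generator: sum many uniforms, then standardize.
def clt_normal(k=1000, rng=random.random):
    s = sum(rng() for _ in range(k))
    mean = k * 0.5                 # mean of the sum of k U(0,1) variables
    std = (k / 12) ** 0.5          # std dev of the sum (Var(U(0,1)) = 1/12)
    return (s - mean) / std

random.seed(42)
sample = [clt_normal() for _ in range(2000)]
m = sum(sample) / len(sample)
v = sum((x - m) ** 2 for x in sample) / len(sample)
print(round(m, 2), round(v, 2))    # roughly 0 and 1
```

Note the cost: each draw consumes 1000 uniform draws, which is exactly the expense the slide warns about.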
30. Case: generate normal r.v.
Another solution:
use inverse c.d.f.
ξ = Φ^(−1)(η), where η ∼ U(0, 1)
Next question: how expensive is it to compute inverse c.d.f.?
Subproblem: computation of inverse c.d.f.
Option: calculate using the root-finding procedure (on average 5-6
iterations, however, in this case the c.d.f. itself might be costly to
compute)
Option: use the Taylor expansion (well explored for normal
distribution, might not be the case for other distributions)
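A minimal sketch of the root-finding option, using bisection on the standard normal c.d.f. Φ(x) = (1 + erf(x/√2))/2 (the bracket [−10, 10] and the tolerance are illustrative assumptions):

```python
import math
import random

# Standard normal c.d.f. via the error function.
def phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Invert phi by bisection: find x with phi(x) = u.
def phi_inv(u, lo=-10.0, hi=10.0, tol=1e-9):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Inverse-c.d.f. sampling: push a uniform draw through phi_inv.
random.seed(1)
xi = phi_inv(random.random())
print(round(phi_inv(0.975), 2))    # ~1.96, the familiar quantile
```

Bisection here costs ~35 evaluations of Φ per draw, which is why the slide asks how costly the c.d.f. itself is to compute.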
31. Case: generate normal r.v.
Other methods are known (the Box-Muller transform etc.)
However, the very 1st question: how costly is it to compute
pseudo-random uniformly distributed r.v.?
(guess) r.v. generators with better properties might be harder to
compute
32. Case: generate normal r.v.
Fast, but not so efficient generator . . . (xkcd.com)
33. Case: generate normal r.v.
Example: middle square method
Simple, but has different flaws
Was enough for the simulations for the 1st nuclear bomb
middle square method
Start with some n-digit number
Square it
Take the middle n digits as the next number in the sequence
Example: 6234² = 38862756, i.e. the next number is 8627
Problem: 7600² = 57760000, i.e. the generator gets stuck at the same number
For 4 digits the maximal period is 2^12 (one of the modern generators, the Mersenne Twister, has period 2^19937 − 1)
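The steps above can be sketched directly (4-digit variant):

```python
# Middle-square PRNG: square the state, keep the middle 4 digits.
def middle_square(state, digits=4):
    squared = str(state ** 2).zfill(2 * digits)  # pad to 2n digits
    mid = digits // 2
    return int(squared[mid:mid + digits])

print(middle_square(6234))  # 8627
print(middle_square(7600))  # 7600 -- stuck forever
```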
34. Case: credit pricing
A big bank re-calculated its whole portfolio of credits (10^6 credits) each month for controlling purposes
For a lot of credits you need to do the same calculation (e.g. credits
with the same conditions, but different principals)
Credit was priced (simplification) based on the following parameters:
loan-to-value (LTV)
r (interest rate)
T (time to maturity)
R (rating of the client)
K (quality category of the collateral)
35. Case: credit pricing
Idea
why not split all parameters into intervals (10 on average)
pre-calculate all the possible situations (10^5)
Then assign to each credit its price based on its parameters
Rationale: searching the DB is faster than calculating
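A minimal sketch of the pre-calculation idea (the toy pricing formula and the bucket grids are illustrative assumptions, not the bank’s actual model):

```python
from itertools import product

# Hypothetical toy price: depends only on the bucketed parameters.
def price(ltv, r, t):
    return round(100 * ltv + 50 * r + 2 * t, 4)

# Bucket grids for each parameter (illustrative: 10 intervals each).
LTV = [i / 10 for i in range(1, 11)]     # 0.1 .. 1.0
R = [i / 100 for i in range(1, 11)]      # 1% .. 10%
T = list(range(1, 11))                   # 1 .. 10 years

# Pre-calculate every combination once: 10^3 entries here, 10^5 on the slide.
table = {combo: price(*combo) for combo in product(LTV, R, T)}

# Pricing a credit is now a dictionary lookup, not a calculation.
print(table[(0.8, 0.03, 5)])
```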
36. Case: credit pricing
Some time later, a few improvements to the price engine were introduced:
instead of the collateral category, the mean and standard deviation of its value were introduced
an option for extra payments (e.g. maximally 5% p.a. of the collateral)
type of credit (most of the credits were annuities, but some were bullet loans etc.)
now there were 10^8 entries in the database
with a few other proposals to extend the number of parameters down the road . . .
37. Case: credit pricing
time complexity (linear in the number of credits, polynomial in the number of parameters) was exchanged for space complexity (exponential in the number of parameters)
at some point the exponential growth outran the polynomial . . .
38. Case: search for the best regression
The frequent problem: given a number of factors, pick some of them s.t. the dependent variable is explained in the "best" way
The criterion can be e.g. an information criterion (e.g. Akaike)
. . . or some combined criteria (R² is high enough, p-values for coefficients are at least x%, the p-value of the normality test on residuals at least y%, etc.)
39. Case: search for the best regression
Problem: the number of all combinations, sum_{k=0}^{n} C(n, k) = 2^n, grows exponentially in the number of factors
e.g. for 50 factors, we would need to consider 2^50 ≈ 10^15 regressions - too much
40. Case: search for the best regression
Attempted solution:
constrain the number of possible factors (who needs a regression with 100 factors?)
e.g. for 50 possible factors, consider regressions with at most 4 factors; we’d need ca. 250 thousand regressions - doable . . .
. . . doable, but not scalable. Suppose we’d also like to consider the 1st lags (doubling the number of factors); then for 100 factors we’d need to go through ca. 4 million regressions
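The counts above can be checked in a couple of lines with the standard-library `math.comb`:

```python
import math

# Number of regressions with at most k_max factors out of n candidates.
def n_regressions(n, k_max):
    return sum(math.comb(n, k) for k in range(k_max + 1))

print(n_regressions(50, 4))    # 251,176  (~250 thousand)
print(n_regressions(100, 4))   # 4,087,976 (~4 million)
```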
41. Case: search for the best regression
The flaw of the attempted solution: it’s faster, but it still grows exponentially
p.s. O(sum_{k=0}^{k_max} C(n, k)) = O(2^{n·H(k_max/n)}), where H() is the binary entropy function
42. Case: search for the best regression
Solutions in practice
A criterion (e.g. AIC) is chosen that assigns a number to each regression
Deploy a search method for a (near-)optimal solution, e.g. greedy stepwise selection
It usually has polynomial complexity (however, it is not guaranteed to find the global optimum)
43. Algorithms in practice of quantitative finance
Different tasks require different speeds
A capital calculation can easily take a few hours (or even days)
With a client sitting in front of you, you want the program to calculate the conditions in 10-20-30 seconds
In high-frequency trading you need to deploy the fastest algorithm, in the most efficient language, on the fastest infrastructure
44. Theoretical insights
The usual question in economic theory: does the equilibrium (e.g. a price) exist?
The algorithmic approach asks: how fast can it be computed?
Nash equilibrium is "hard" to compute
One idea: markets are efficient if (and only if) P = NP
45. Blockchain
Blockchain is a distributed database of transactions
The "hard to find, easy to check" principle is used
Incentives are used to ensure fairness
Market capitalization as of 22.04.17:
Bitcoin $20b
Ethereum $4.5b
46. Blockchain
Some blockchains allow smart contracts (e.g. Ethereum)
Would they reduce some risks (e.g. counterparty risk) and create new ones? How to price them?
Still an emerging field
Cryptocurrencies are not exactly money
Properties of money:
medium of exchange
measure of value
store of value
47. Auctions
2nd-price auction principle: the highest bid wins, but the 2nd-highest bid is paid
Every time you see an ad from Google, an auction is run (i.e. millions of auctions take place every day)
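A minimal sketch of the second-price (Vickrey) rule (the bidders and bids are illustrative):

```python
# Second-price auction: highest bidder wins, pays the second-highest bid.
def second_price_auction(bids):
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]           # winner pays the runner-up's bid
    return winner, price

winner, price = second_price_auction({"alice": 10, "bob": 7, "carol": 9})
print(winner, price)  # alice 9
```

The design choice matters: because the price doesn’t depend on the winner’s own bid, bidding one’s true value is a dominant strategy.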
48. Non-monetary mechanisms
not all tasks are solved with money, examples:
matchings
allocations
49. Matchings
How to match different parties with different preferences?
e.g. the stable marriage problem (match couples s.t. no two people would both prefer each other over their assigned partners)
Examples
In the USA, medical students are assigned to hospitals in a similar way
In France, teachers are assigned to public schools
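A minimal sketch of the deferred-acceptance (Gale-Shapley) algorithm that solves the stable marriage problem (the preference lists are illustrative):

```python
# Gale-Shapley deferred acceptance: produces a stable matching.
def gale_shapley(proposer_prefs, reviewer_prefs):
    free = list(proposer_prefs)                    # unmatched proposers
    next_pick = {p: 0 for p in proposer_prefs}     # next index to propose to
    match = {}                                     # reviewer -> proposer
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_pick[p]]
        next_pick[p] += 1
        if r not in match:
            match[r] = p                           # accepted tentatively
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])                  # reviewer trades up
            match[r] = p
        else:
            free.append(p)                         # rejected, tries next choice
    return {p: r for r, p in match.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
matching = gale_shapley(men, women)
print(matching)
```

This simple version runs in polynomial time; it is the couples constraint mentioned on the next slide that makes the hospitals/residents variant "hard".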
51. Non-monetary mechanisms
Simple matching/allocation algorithms are "easy" to compute
Some are not, e.g. the hospitals/residents problem with couples
A hospital may admit multiple students; the constraint to assign a married couple to the same hospital makes the problem "hard"
52. Thank you for your attention
You can find the slides here:
https://github.com/maxlit/lectures/
Questions?
53. Material sources
xkcd.com
S. Dasgupta, C. Papadimitriou, U. Vazirani, "Algorithms"
M. L. Hetland, "Python Algorithms: Mastering Basic Algorithms in the Python Language"