An introductory-to-intermediate presentation on complex network analysis: network metrics, analysis of online social networks, approximation algorithms, memory issues, and storage.
Kernel-based models for geo- and environmental sciences – Alexei Pozdnoukhov, National Centre for Geocomputation, National University of Ireland, Maynooth (Ireland)
Intelligent Analysis of Environmental Data (S4 ENVISA Workshop 2009)
Nelly Litvak – Asymptotic behaviour of ranking algorithms in directed random ... (Yandex)
There is a vast body of empirical research on the behaviour of ranking algorithms, e.g. Google PageRank, in scale-free networks. In this talk, we address this problem with analytical probabilistic methods. In particular, it is well known that PageRank in scale-free networks follows a power law with the same exponent as the in-degree. Recent probabilistic analysis has explained this phenomenon by deriving a natural approximation for PageRank based on stochastic fixed-point equations. For these equations, explicit solutions can be constructed on weighted branching trees, and their tail behavior can be described in great detail.
In this talk we present a model for generating directed random graphs with prescribed degree distributions where we can prove that the PageRank of a randomly chosen node does indeed converge to the solution of the corresponding fixed-point equation as the number of nodes in the graph grows to infinity. The proof of this result is based on classical random graph coupling techniques combined with the now extensive literature on the behavior of branching recursions on trees.
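As background for the fixed-point analysis above, classical PageRank itself is straightforward to compute by power iteration. The following sketch (a hypothetical four-node graph, conventional damping factor 0.85) illustrates the quantity whose tail behaviour the talk studies; it is not the authors' model.

```python
# Standard PageRank by power iteration on a small directed graph.
# The graph and damping factor here are illustrative choices.

def pagerank(edges, n, d=0.85, iters=100):
    """edges: list of (src, dst) pairs; n: number of nodes."""
    out_deg = [0] * n
    for s, _ in edges:
        out_deg[s] += 1
    r = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1.0 - d) / n] * n          # teleportation mass
        for s, t in edges:
            nxt[t] += d * r[s] / out_deg[s]  # mass flowing along edges
        # dangling nodes (no out-links) spread their mass uniformly
        dangling = sum(r[i] for i in range(n) if out_deg[i] == 0)
        r = [x + d * dangling / n for x in nxt]
    return r

ranks = pagerank([(0, 1), (1, 2), (2, 0), (3, 0)], 4)
```

Ranks form a probability distribution, and node 3, which no node links to, receives only the teleportation mass.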
Animashree Anandkumar, Electrical Engineering and CS Dept, UC Irvine at MLcon... (MLconf)
Anima Anandkumar has been a faculty member in the EECS Dept. at UC Irvine since August 2010. Her research interests are in the areas of large-scale machine learning and high-dimensional statistics. She received her B.Tech in Electrical Engineering from IIT Madras in 2004 and her PhD from Cornell University in 2009. She was a visiting faculty member at Microsoft Research New England in 2012 and a postdoctoral researcher in the Stochastic Systems Group at MIT between 2009 and 2010. She is the recipient of the Microsoft Faculty Fellowship, the ARO Young Investigator Award, the NSF CAREER Award, and the IBM Fran Allen PhD Fellowship.
Two further methods for obtaining post-quantum security are discussed, namely code-based and isogeny-based cryptography.
Topic 1: Revocable Identity-based Encryption from Codes with Rank Metric (presented by Dr. Reza Azarderakhsh). Authors: Donghoon Chang, Amit Kumar Chauhan, Sandeep Kumar, Somitra Kumar Sanadhya.
Topic 2: An Exposure Model for Supersingular Isogeny Diffie-Hellman Key Exchange. Authors: Brian Koziel, Reza Azarderakhsh, David Jao.
(Source: RSA Conference USA 2018)
Efficient end-to-end learning for quantizable representations – NAVER Engineering
Presenter: Yeonwoo Jeong (PhD student, Seoul National University)
Date: July 2018
For similar-image retrieval, a neural network is used to learn image embeddings. Prior work speeds up retrieval by using the Hamming distance between binary codes, but it still has to scan the entire dataset and loses accuracy. This paper instead learns sparse binary codes to build a hash table that speeds up retrieval without degrading accuracy. It also shows that the optimal sparse binary codes within a mini-batch can be found by solving a minimum-cost-flow problem. Our method achieves the best retrieval accuracy in precision@k and NMI on CIFAR-100 and ImageNet, with retrieval speedups of 98× and 478×, respectively.
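The hash-table retrieval that the paper accelerates can be illustrated with a minimal sketch (this is generic Hamming-code bucketing, not the paper's learned sparse codes): items are bucketed by their binary code, and a query probes its own bucket plus every code at Hamming distance 1. All item names and codes here are hypothetical.

```python
from collections import defaultdict

def build_table(codes):
    """codes: dict of item_id -> binary code (as an int). Bucket by exact code."""
    table = defaultdict(list)
    for item, code in codes.items():
        table[code].append(item)
    return table

def query(table, code, nbits):
    """Return candidates within Hamming distance 1 by probing single-bit flips."""
    hits = list(table.get(code, []))
    for b in range(nbits):
        hits.extend(table.get(code ^ (1 << b), []))
    return hits

table = build_table({"a": 0b1010, "b": 0b1011, "c": 0b0101})
```

A query for code `0b1010` retrieves "a" (exact match) and "b" (one bit away) without scanning the whole collection, which is the speedup mechanism the abstract refers to.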
Animashree Anandkumar, Electrical Engineering and CS Dept, UC Irvine at MLcon... (MLconf)
Tensor Methods: A New Paradigm for Training Probabilistic Models and Feature Learning: Tensors are rich structures for modeling complex higher-order relationships in data-rich domains such as social networks, computer vision, the internet of things, and so on. Tensor decomposition methods are embarrassingly parallel and scalable to enormous datasets. They are guaranteed to converge to the global optimum and yield consistent estimates of parameters for many probabilistic models such as topic models, community models, hidden Markov models, and so on. I will show the results of these methods for learning topics from text data, communities in social networks, disease hierarchies from healthcare records, cell types from mouse brain data, etc. I will also demonstrate how tensor methods can yield rich discriminative features for classification tasks and can serve as an alternative method for training neural networks.
A Novel Methodology for Designing Linear Phase IIR Filters – IDES Editor
This paper presents a novel technique for designing an Infinite Impulse Response (IIR) filter with a linear phase response. IIR filter design is always a challenging task because an exactly linear phase response is not realizable in this filter class. Conventional techniques require a large number of samples and a higher-order filter for a good approximation, resulting in complex hardware, and they demand extensive computational resources for inverting large matrices. We propose a technique that combines frequency-domain sampling with linear programming to achieve a filter design that best approximates a linear phase response. The proposed method gives the closest response with few samples (only 10) and is computationally simple. We present the filter design along with its formulation and solution methodology, and numerical results are used to substantiate the efficiency of the proposed method.
Information-theoretic clustering with applications – Frank Nielsen
Abstract: Clustering is a fundamental primitive for discovering structural groups of homogeneous data, called clusters, in data sets. The most famous clustering technique is the celebrated k-means clustering, which seeks to minimize the sum of intra-cluster variances. k-Means is NP-hard as soon as the dimension and the number of clusters are both greater than 1. In the first part of the talk, we present a generic dynamic programming method to compute the optimal clustering of n scalar elements into k pairwise disjoint intervals. This case includes 1D Euclidean k-means but also other kinds of clustering algorithms such as k-medoids, k-medians, k-centers, etc.
We extend the method to incorporate cluster size constraints and show how to choose the appropriate number of clusters using model selection. We then illustrate and refine the method on two case studies: 1D Bregman clustering and univariate statistical mixture learning maximizing the complete likelihood. In the second part of the talk, we introduce a generalization of k-means to cluster sets of histograms, which has become an important ingredient of modern information processing due to the success of the bag-of-words modelling paradigm.
Clustering histograms can be performed using the celebrated k-means centroid-based algorithm. We consider the Jeffreys divergence that symmetrizes the Kullback-Leibler divergence, and investigate the computation of Jeffreys centroids. We prove that the Jeffreys centroid can be expressed analytically using the Lambert W function for positive histograms. We then show how to obtain a fast guaranteed approximation when dealing with frequency histograms and conclude with some remarks on the k-means histogram clustering.
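A sketch of the closed form described above, assuming the per-bin expression c_i = a_i / W(e·a_i/g_i) reported in the referenced paper, where a_i and g_i are the arithmetic and geometric means of bin i across the histograms and W is the principal branch of the Lambert W function. To keep the sketch self-contained, W is computed with a simple Newton iteration rather than a library call.

```python
import math

def lambert_w(x, iters=50):
    """Principal-branch Lambert W (solves w*e^w = x) via Newton's method.
    By the AM-GM inequality, x = e*a/g >= e here, so the root satisfies w >= 1."""
    w = math.log(x)
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1.0))
    return w

def jeffreys_centroid(hists):
    """Per-bin closed form c_i = a_i / W(e * a_i / g_i) for positive histograms
    (assumed form; a_i, g_i are the arithmetic and geometric means of bin i)."""
    n, d = len(hists), len(hists[0])
    centroid = []
    for i in range(d):
        a = sum(h[i] for h in hists) / n
        g = math.exp(sum(math.log(h[i]) for h in hists) / n)
        centroid.append(a / lambert_w(math.e * a / g))
    return centroid
```

As a sanity check, identical histograms give a = g, so W(e) = 1 and the centroid equals the common histogram; for differing histograms each centroid bin lies between the geometric and arithmetic means.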
References:
- Optimal interval clustering: Application to Bregman clustering and statistical mixture learning. IEEE ISIT 2014 (recent result poster). http://arxiv.org/abs/1403.2485
- Jeffreys Centroids: A Closed-Form Expression for Positive Histograms and a Guaranteed Tight Approximation for Frequency Histograms. IEEE Signal Process. Lett. 20(7): 657-660 (2013). http://arxiv.org/abs/1303.7286
http://www.i.kyoto-u.ac.jp/informatics-seminar/
https://telecombcn-dl.github.io/2018-dlai/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks that were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks, and Q-nets for reinforcement learning have shaped a brand-new scenario in signal processing. This course covers the basic principles of deep learning from both algorithmic and computational perspectives.
https://telecombcn-dl.github.io/2017-dlsl/
Winter School on Deep Learning for Speech and Language. UPC BarcelonaTech ETSETB TelecomBCN.
The aim of this course is to train students in methods of deep learning for speech and language. Recurrent Neural Networks (RNN) will be presented and analyzed in detail to understand the potential of these state of the art tools for time series processing. Engineering tips and scalability issues will be addressed to solve tasks such as machine translation, speech recognition, speech synthesis or question answering. Hands-on sessions will provide development skills so that attendees can become competent in contemporary data analytics tools.
Optimal interval clustering: Application to Bregman clustering and statistica... – Frank Nielsen
We present a generic dynamic programming method to compute the optimal clustering of n scalar elements into k pairwise disjoint intervals. This case includes 1D Euclidean k-means, k-medoids, k-medians, k-centers, etc. We extend the method to incorporate cluster size constraints and show how to choose the appropriate k by model selection. Finally, we illustrate and refine the method on two case studies: Bregman clustering and statistical mixture learning maximizing the complete likelihood.
http://arxiv.org/abs/1403.2485
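The dynamic program for optimal 1D clustering described above can be sketched as follows: with prefix sums, the sum of squared errors of any interval is computed in O(1), and dp[c][j] holds the best cost of grouping the first j sorted points into c intervals. This is an illustrative O(k·n²) version for the Euclidean (k-means) case, not the authors' implementation.

```python
def optimal_1d_kmeans(xs, k):
    """Optimal k-means clustering of scalars into k contiguous intervals by DP."""
    xs = sorted(xs)
    n = len(xs)
    p1 = [0.0] * (n + 1)   # prefix sums
    p2 = [0.0] * (n + 1)   # prefix sums of squares
    for i, x in enumerate(xs):
        p1[i + 1] = p1[i] + x
        p2[i + 1] = p2[i] + x * x

    def sse(i, j):
        # Sum of squared deviations of xs[i:j] from its mean, in O(1).
        s, q, m = p1[j] - p1[i], p2[j] - p2[i], j - i
        return q - s * s / m

    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for c in range(1, k + 1):
        for j in range(c, n + 1):
            for i in range(c - 1, j):
                v = dp[c - 1][i] + sse(i, j)
                if v < dp[c][j]:
                    dp[c][j], cut[c][j] = v, i

    # Backtrack the interval boundaries.
    bounds, j = [], n
    for c in range(k, 0, -1):
        i = cut[c][j]
        bounds.append((i, j))
        j = i
    clusters = [xs[i:j] for i, j in reversed(bounds)]
    return clusters, dp[k][n]
```

Swapping `sse` for another interval cost (e.g. a Bregman divergence, or max radius for k-centers) yields the other 1D clustering variants the abstract mentions, since only the per-interval cost changes.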
Digital Signal Processing [ECEG-3171] – Ch1_L02 – Rediet Moges
This Digital Signal Processing lecture material is the property of the author (Rediet M.). It is not for publication, nor is it to be sold or reproduced.
#Africa #Ethiopia
Continuum Modeling and Control of Large Nonuniform Networks – Yang Zhang
Presented at The 49th Annual Allerton Conference on Communication, Control, and Computing, 2011
Abstract—Recent research has shown that some Markov chains modeling networks converge to continuum limits, which are solutions of partial differential equations (PDEs), as the number of the network nodes approaches infinity. Hence we can approximate such large networks by PDEs. However, the previous results were limited to uniform immobile networks with a fixed transmission rule. In this paper we first extend the analysis to uniform networks with more general transmission rules. Then through location transformations we derive the continuum limits of nonuniform and possibly mobile networks. Finally, by comparing the continuum limits of corresponding nonuniform and uniform networks, we develop a method to control the transmissions in nonuniform and mobile networks so that the continuum limit is invariant under node locations, and hence mobility. This enables nonuniform and mobile networks to maintain stable global characteristics in the presence of varying node locations.
I am Arcady N. I am a Computer Network Assignments Expert at computernetworkassignmenthelp.com. I hold a Master's in Computer Science from City University, London. I have been helping students with their assignments for the past 10 years, solving assignments related to computer networks.
Visit computernetworkassignmenthelp.com or email support@computernetworkassignmenthelp.com.
You can also call +1 678 648 4277 for any assistance with computer network assignments.
The amount of digital data has grown exponentially in recent years and, with the development of new technologies, is growing more rapidly than ever before.
Nevertheless, while it is easy to see that all these data are out there, utilizing them to turn a profit is not trivial.
The need for data mining techniques able to extract profitable insights is the next frontier of innovation, competition and profit.
To scale well and grow its profit exponentially, a data analytics services provider has to deal with scalability, multi-tenancy and self-adaptability.
In big data applications, machine learning is a very powerful instrument, but a bad choice of algorithm and configuration parameters can easily lead to poor results. The key problem is automating the tuning process without a priori knowledge of the data and without human intervention.
In this research project we implemented and analysed TunUp: a distributed cloud-based genetic evolutionary tuning system for data clustering.
The proposed solution automatically evaluates and tunes data clustering algorithms, so that big data services can self-adapt and scale in a cost-efficient manner.
For our experiments, we considered k-means as the clustering algorithm: a simple but popular algorithm, widely used in many data mining applications.
Clustering outputs are evaluated using four internal techniques (AIC, Dunn, Davies-Bouldin and Silhouette) and one external evaluation (AdjustedRand).
We then performed a correlation t-test to validate and benchmark our internal techniques against AdjustedRand.
Having defined the best evaluation criterion, the main challenge of k-means is setting the right value of k, the number of clusters, and the distance measure used to compute the distance between each pair of points in the data space.
To address this problem we propose an implementation of a genetic evolutionary algorithm that heuristically finds an optimal configuration of our clustering algorithm.
To improve performance, we implemented a parallel version of the genetic algorithm, developing a REST API and deploying several instances on the Amazon EC2 cloud computing infrastructure.
In conclusion, with this research we contributed the construction and analysis of TunUp, an open solution for the evaluation, validation and tuning of data clustering algorithms, with a particular focus on cloud services.
Our experiments show the quality and efficiency of tuning k-means on a set of public datasets.
The research also provides a roadmap indicating how the current system could be extended and utilized for future clustering applications, such as tuning existing clustering algorithms, supporting new algorithm design, and evaluating and comparing different algorithms.
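The genetic tuning loop described above can be sketched as follows. This is a toy single-parameter version that searches an integer parameter (such as k for k-means) against a stand-in fitness function; TunUp's actual implementation evaluates real clustering runs with criteria such as Silhouette or AdjustedRand, distributed over EC2 instances.

```python
import random

def genetic_tune(fitness, low, high, pop_size=12, generations=30, seed=0):
    """Minimal genetic search over one integer hyperparameter.
    `fitness` scores a candidate (higher is better); it stands in for an
    actual clustering-evaluation criterion."""
    rng = random.Random(seed)
    pop = [rng.randint(low, high) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]           # selection: keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = (a + b) // 2                  # crossover: average two parents
            if rng.random() < 0.3:                # mutation: small jitter
                child = min(high, max(low, child + rng.choice([-1, 1])))
            children.append(child)
        pop = elite + children                    # elitism preserves the best-so-far
    return max(pop, key=fitness)

# Hypothetical fitness peaking at k = 5 (a stand-in for a clustering score).
best_k = genetic_tune(lambda k: -(k - 5) ** 2, low=2, high=20)
```

Because the elite is carried over unchanged each generation, the best candidate found so far is never lost, so the search converges to the neighbourhood of the fitness peak.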
MVPA with SpaceNet: sparse structured priors – Elvis DOHMATOB
The GraphNet (aka S-Lasso), as well as other "sparsity + structure" priors like TV (Total Variation), TV-L1, etc., are not easily applicable to brain data because of technical problems relating to the selection of the regularization parameters. Also, in their own right, such models lead to challenging high-dimensional optimization problems. In this manuscript, we present some heuristics for speeding up the overall optimization process: (a) early stopping, whereby one halts the optimization process when the test score (performance on left-out data) for the internal cross-validation for model selection stops improving, and (b) univariate feature screening, whereby irrelevant (non-predictive) voxels are detected and eliminated before the optimization problem is entered, thus reducing the size of the problem. Empirical results with GraphNet on real MRI (Magnetic Resonance Imaging) datasets indicate that these heuristics are a win-win strategy, as they add speed without sacrificing the quality of the predictions. We expect the proposed heuristics to work on other models like TV-L1, etc.
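Heuristic (b), univariate feature screening, can be sketched as follows: score each voxel (column of the design matrix) by its absolute Pearson correlation with the target and keep only the top-scoring columns before entering the optimization problem. This is an illustrative stand-in, not the manuscript's exact screening statistic.

```python
def screen_features(X, y, keep):
    """Univariate feature screening: rank the columns of X by their absolute
    Pearson correlation with y and return the indices of the `keep` best."""
    n, d = len(X), len(X[0])
    ym = sum(y) / n
    yc = [v - ym for v in y]
    yn = sum(v * v for v in yc) ** 0.5
    scores = []
    for j in range(d):
        col = [row[j] for row in X]
        cm = sum(col) / n
        cc = [v - cm for v in col]
        cn = sum(v * v for v in cc) ** 0.5
        dot = sum(a * b for a, b in zip(cc, yc))
        r = 0.0 if cn == 0 or yn == 0 else dot / (cn * yn)
        scores.append((abs(r), j))
    return sorted(j for _, j in sorted(scores, reverse=True)[:keep])
```

The model is then fit only on the retained columns, shrinking the optimization problem while keeping predictive voxels.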
We study the communication cost of computing functions when inputs are distributed among k processors, each of which is located at one vertex of a network/graph called a terminal. Every other node of the network also has a processor, with no input. The communication is point-to-point and the cost is the total number of bits exchanged by the protocol, in the worst case, over all edges. Our results show the effect of the topology of the network on the total communication cost. We prove tight bounds for simple functions like Element-Distinctness (ED), which depend on the 1-median of the graph. On the other hand, we show that for a large class of natural functions like Set-Disjointness the communication cost is essentially n times the cost of the optimal Steiner tree connecting the terminals. Further, we show that for natural composed functions like ED∘XOR and XOR∘ED, the naive protocols suggested by their definitions are optimal for general networks. Interestingly, the bounds for these functions depend on more involved topological parameters that are a combination of Steiner tree and 1-median costs. To obtain our results, we use tools like metric embeddings and linear programming whose use in the context of communication complexity is novel as far as we know. (Based on joint works with Jaikumar Radhakrishnan and Atri Rudra)
I am Stacy W. I am a Statistical Physics Assignment Expert at statisticsassignmenthelp.com. I hold a Master's in Statistics from McGill University, Canada.
I have been helping students with their homework for the past 7 years, solving assignments related to statistics.
Visit statisticsassignmenthelp.com or email info@statisticsassignmenthelp.com.
You can also call +1 678 648 4277 for any assistance with statistical physics assignments.
New Mathematical Tools for the Financial Sector – SSA KPI
AACIMP 2010 Summer School lecture by Gerhard Wilhelm Weber. "Applied Mathematics" stream. "Modern Operational Research and Its Mathematical Methods with a Focus on Financial Mathematics" course. Part 5.
More info at http://summerschool.ssa.org.ua
Python is a high level language focused on readability. The Python community developed the concept of "Pythonic Code", requiring not only semantic correctness, but also conformity to universally acknowledged stylistic criteria.
A pre-requisite to write pythonic code is to write idiomatic code. Using the right idioms is a matter of acquired taste and experience, however, some idioms are quite easy to learn.
This presentation focuses on some of these idioms and other stylistic criteria:
* for vs. while
* iterators, itertools
* code conventions (space invaders)
* avoid default values bugs
* first order functions
* internal/external iterators
* substituting the switch statement
* properties, attributes, read only objects
* named tuples
* duck typing
* bits of metaprogramming
* exception management: LBYL vs. EAFP
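Two of the idioms listed above, exception management (EAFP rather than LBYL) and named tuples, in a short sketch; the `get_port` helper and `Server` type are made-up examples.

```python
from collections import namedtuple

# EAFP ("Easier to Ask Forgiveness than Permission"): attempt the operation
# and handle the exception, instead of checking preconditions first (LBYL).
def get_port(config, default=8080):
    try:
        return int(config["port"])        # EAFP: just try it
    except (KeyError, ValueError):        # missing key or non-numeric value
        return default

# Named tuples give readable field access without writing a full class.
Server = namedtuple("Server", ["host", "port"])

server = Server(host="localhost", port=get_port({"port": "9000"}))
```

The EAFP version handles both a missing key and a malformed value in one place, where the LBYL equivalent would need two separate checks before the conversion.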
Simple presentation on Twisted fundamentals.
Originally part 4 of a 4 lectures seminar for the Networking class of the Computer Science course at the University of Parma
Object Oriented programming in Python.
Originally part 3 of a 4 lectures seminar for the Networking class of the Computer Science course at the University of Parma
A simple introduction to the Python programming language. In Italian. OLD: superseded by Pycrashcourse 3.1.
Originally part 1 of a 4 lectures seminar for the Networking class of the Computer Science course at the University of Parma
A simple introduction to the Python programming language. In Italian. OLD: superseded by Pycrashcourse 3.1.
Originally presented during the Networking class of the Computer Science course at the University of Parma
Object Oriented programming in Python.
Originally part 2 of a 4 lectures seminar for the Networking class of the Computer Science course at the University of Parma
Essentials of Automation: Optimizing FME Workflows with Parameters – Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Search and Society: Reimagining Information Access for Radical Futures – Bhaskar Mitra
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
2. Outline
SNA = Complex Network Analysis on Social Networks
• Notation & Metrics: Degree Distribution, Path Lengths, Transitivity
• Models: Random Graphs, Small-Worlds, Preferential Attachment
• Models Discussion
• Conclusion
3. Network
Directed Network: G = (V, E), E ⊂ V², with no self-loops: {(x, x) : x ∈ V} ∩ E = ∅
Out-degree: k_i^out = Σ_j A_ij    In-degree: k_i^in = Σ_j A_ji    k_i = k_i^in + k_i^out
Undirected Network: the adjacency matrix A is symmetric, with
A_ij = 1 if (i, j) ∈ E, 0 otherwise, and k_i = Σ_j A_ji = Σ_j A_ij
Degree Distribution: p_x = (1/n) · #{i : k_i = x}
Average Degree: ⟨k⟩ = n⁻¹ Σ_{x∈V} k_x
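The definitions above can be exercised with a short pure-Python sketch; the 3-node directed network here is a hypothetical example, not one from the slides:

```python
# Toy directed network on V = {0, 1, 2} with edges 0->1, 0->2, 1->2
# (no self-loops, as required by the definition above)
A = [[0, 1, 1],
     [0, 0, 1],
     [0, 0, 0]]
n = len(A)

k_out = [sum(A[i][j] for j in range(n)) for i in range(n)]  # k_i^out = sum_j A_ij
k_in  = [sum(A[j][i] for j in range(n)) for i in range(n)]  # k_i^in  = sum_j A_ji
k     = [k_in[i] + k_out[i] for i in range(n)]              # k_i = k_i^in + k_i^out

print(k_out)  # [2, 1, 0]
print(k_in)   # [0, 1, 2]
print(k)      # [2, 2, 2]
```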
4. Measure of Transitivity
Local Clustering Coefficient: C_i = C(k_i, 2)⁻¹ T(i) = 2T(i) / (k_i(k_i − 1))
where T(i) is the number of distinct triangles with i as a vertex.
Clustering Coefficient: C = (1/n) Σ_{i∈V} C_i
Alternatively:
C = (number of closed paths of length 2) / (number of paths of length 2)
  = (number of triangles × 3) / (number of connected triples)
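Both definitions can be checked on a tiny example; the 4-node network (a triangle with a pendant edge) is my choice, not from the slides:

```python
from math import comb

# Undirected network: triangle 0-1-2 plus the pendant edge 2-3
A = [[0, 1, 1, 0],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 0]]
n = len(A)
k = [sum(row) for row in A]

def T(i):
    """Number of distinct triangles with i as a vertex."""
    return sum(A[i][j] and A[j][l] and A[l][i]
               for j in range(n) for l in range(j + 1, n))

# Local clustering coefficient C_i = T(i) / C(k_i, 2)
C_local = [T(i) / comb(k[i], 2) if k[i] >= 2 else 0.0 for i in range(n)]
C = sum(C_local) / n

# Transitivity: 3 * (#triangles) / (#connected triples)
triangles = sum(T(i) for i in range(n)) / 3   # each triangle counted at 3 vertices
triples = sum(comb(d, 2) for d in k)
transitivity = 3 * triangles / triples

print(round(C, 4))   # 0.5833 -- the two measures need not coincide
print(transitivity)  # 0.6
```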
5. Shortest Path Length and Diameter
The matrix product [AB]_ij = Σ_k A_ik · B_kj depends on the scalar operations of the semiring (A, +, ·), where A is the set of adjacency matrices.
Other matrix products therefore make sense, e.g. (A, +, min) or (A, min, +).
We consider the (min, +) product, iterated by repeated squaring:
S_k(M_k) = min(M, M_k min.+ M_k), with M_1 = M
Shortest path lengths matrix: L = (S_n ∘ … ∘ S_1)(M)
Diameter: d = max_ij L_ij    Average shortest path length: ℓ = ⟨L_ij⟩
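A minimal sketch of the (min, +) approach, using plain repeated squaring rather than the S_k composition; the path graph and edge lengths are an illustrative assumption:

```python
INF = float('inf')

def minplus(A, B):
    """(min, +) matrix product: C_ij = min_k (A_ik + B_kj)."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Edge-length matrix of the path graph 0-1-2-3:
# 0 on the diagonal, 1 for an edge, inf otherwise.
M = [[0, 1, INF, INF],
     [1, 0, 1, INF],
     [INF, 1, 0, 1],
     [INF, INF, 1, 0]]

# After ceil(log2(n - 1)) (min, +) squarings, L_ij is the
# shortest path length between i and j.
L = M
for _ in range(2):  # 2 squarings suffice for n = 4
    L = minplus(L, L)

d = max(max(row) for row in L)    # diameter
avg = sum(map(sum, L)) / (4 * 3)  # average over ordered pairs (diagonal is 0)
print(d)    # 3
print(avg)  # 1.666...
```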
6. Computational Complexity of ASPL
• All-pairs shortest paths, matrix based (parallelizable): O(n^(3+α)), α ≈ 3/4
• All-pairs shortest paths, Bellman-Ford: O(n³)
• All-pairs shortest paths, Dijkstra with Fibonacci heaps: O(n² log n + nm)
Computing the CPL (characteristic path length):
x = M_q(S): q·#S elements of S are ≤ x and (1−q)·#S are > x
x = L_q^δ(S): q·#S·(1−δ) elements are ≤ x and (1−q)·#S·(1−δ) are > x
Huber's Algorithm: let R be a random sample of S such that #R = s, with
s = (2 / (q δ² (1 − δ)²)) ln(2/ε);
then L_q^δ(S) = M_q(R) with probability p = 1 − ε.
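A toy illustration of the sampling idea behind the algorithm (the data set, sample size, and threshold are arbitrary choices, not Huber's bound):

```python
import random

def sample_quantile(values, q, s, seed=0):
    """Estimate M_q(S) as the q-quantile of a random sample R with #R = s."""
    rng = random.Random(seed)
    sample = sorted(rng.sample(values, s))
    return sample[int(q * s)]

# S: 10001 hypothetical path lengths; the exact median M_0.5(S) is 5000.
S = list(range(10001))
est = sample_quantile(S, q=0.5, s=1001)

# With high probability the sample quantile is a relaxed quantile of S.
print(abs(est - 5000) < 1000)  # True
```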
8. Facebook Hugs Degree Distribution
Nodes: 1322631    Edges: 1555597
m/n: 1.17    CPL: 11.74
Clustering Coefficient: 0.0527
Number of Components: 18987
Isles: 0
Largest Component Size: 1169456
[Figure: log-log plot of the degree distribution, k from 1 to 1000, counts from 1 to 10^7]
For large k we have statistical fluctuations.
For small k power-laws do not hold.
9. Many networks have a power-law degree distribution: p_k ∝ k^(−γ), with γ > 1.
⟨k^r⟩ = ?
• Citation networks
• Biological networks
• WWW graph
• Internet graph
• Social Networks
[Figure: log-log plot of a power-law with γ = 3]
10. Erdős–Rényi Random Graphs
Ensembles of graphs: G(n, p) and G(n, m). In G(n, p) every edge is present independently with probability p: Pr(A_ij = 1) = p, so that
Pr(G) = p^m (1 − p)^(C(n,2) − m)
When we describe values of properties, we actually mean the expected value of the property over the ensemble, e.g. ⟨d⟩ = Σ_G Pr(G) · d(G).
⟨m⟩ = C(n, 2) p    ⟨k⟩ = (n − 1)p    C = ⟨k⟩(n − 1)⁻¹ = p
Degree distribution: p_k = C(n − 1, k) p^k (1 − p)^(n−1−k) → (n → ∞) p_k = e^(−⟨k⟩) ⟨k⟩^k / k!
Connectedness threshold: p = log n / n
Diameter: d ∝ log n / log ⟨k⟩
11. Watts-Strogatz Model
In the modified model, we only add the edges (shortcuts), without rewiring.
Each node has κ edges in the lattice plus s_i added shortcuts: k_i = κ + s_i, with
p_s = e^(−κp) (κp)^s / s!
so that
p_k = e^(−κp) (κp)^(k−κ) / (k − κ)!
Clustering coefficient: C = 3(κ − 2) / (4(κ − 1) + 8κp + 4κp²)
Average shortest path: ℓ ≈ log(npκ) / (κ²p)
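A sketch of the shortcut-only (no rewiring) variant described above, checking that per-node shortcut counts are approximately Poisson(κp); the parameters are illustrative assumptions:

```python
import random
from math import exp, factorial

def shortcut_counts(n, kappa, p, seed=0):
    """For each of the n*kappa/2 lattice edges, with probability p add a
    shortcut between two uniformly random nodes; return per-node counts."""
    rng = random.Random(seed)
    s = [0] * n
    for _ in range(n * kappa // 2):
        if rng.random() < p:
            u, v = rng.randrange(n), rng.randrange(n)
            s[u] += 1
            s[v] += 1
    return s

n, kappa, p = 20000, 4, 0.5
s = shortcut_counts(n, kappa, p)
mean_s = sum(s) / n
print(abs(mean_s - kappa * p) < 0.1)  # mean shortcut count ~= kappa * p

# p_s should be close to the Poisson(kappa * p) prediction, e.g. at s = 2
ps_emp = s.count(2) / n
ps_th = exp(-kappa * p) * (kappa * p) ** 2 / factorial(2)
print(abs(ps_emp - ps_th) < 0.02)
```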
12. Watts-Strogatz Model — 10000 nodes, k = 4
[Figure: CPL(p)/CPL(0) and C(p)/C(0) as functions of p ∈ [0, 1]: the normalized CPL drops quickly for small p, while the normalized clustering coefficient decays much more slowly.]
There is a wide range of p with both a short CPL and a large clustering coefficient.
14. Barabási-Albert Model
Connectedness threshold: log n / log log n

BARABASI-ALBERT-MODEL(G, M0, M, STEPS)
  FOR K FROM 1 TO STEPS
    N0 ← NEW-NODE(G)
    ADD-NODE(G, N0)
    A ← MAKE-ARRAY()
    FOR N IN NODES(G)
      PUSH(A, N)
      FOR J FROM 1 TO DEGREE(N)
        PUSH(A, N)
    FOR J FROM 1 TO M
      N ← RANDOM-CHOICE(A)
      ADD-LINK(N0, N)

The probability of attaching the new edge to node x is
Pr(V = x) = Σ_{e ∈ N(x)} Pr(E = e) = k_x / m = 2k_x / Σ_x k_x
Degree distribution: p_k ∝ k^(−3)
Average shortest path: ℓ ≈ log n / log log n
Clustering coefficient: C ≈ n^(−3/4)
Scale-free entails short CPL.
Transitivity disappears with network size.
No analytical proof available.
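The pseudocode above can be made runnable; this Python sketch uses the same repeated-node array trick, with a seed clique of M0 nodes and all parameter values being my own choices:

```python
import random

def barabasi_albert(m0, m, steps, seed=0):
    """BA growth via the array trick: every node is pushed once, plus once
    per unit of degree, so a uniform draw from the bag is preferential."""
    rng = random.Random(seed)
    # hypothetical seed network: a clique on m0 nodes
    edges = [(i, j) for i in range(m0) for j in range(i + 1, m0)]
    degree = {i: m0 - 1 for i in range(m0)}
    for step in range(steps):
        n0 = m0 + step            # the new node
        degree[n0] = 0
        bag = []                  # the array A of the pseudocode
        for node, d in degree.items():
            bag.extend([node] * (d + 1))
        targets = set()
        while len(targets) < m:   # m distinct preferential targets
            t = rng.choice(bag)
            if t != n0:
                targets.add(t)
        for t in targets:
            edges.append((n0, t))
            degree[n0] += 1
            degree[t] += 1
    return degree, edges

degree, edges = barabasi_albert(m0=5, m=3, steps=500)
print(len(degree))           # 505 nodes
print(len(edges))            # 10 + 3 * 500 = 1510 edges
print(max(degree.values()))  # hubs emerge, far above the average degree ~6
```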
15. Measurements of Online Social Networks

| OSN         | Refs.           | Users | Links  | ⟨k⟩  | C    | CPL  | d    | γ    | r    |
|-------------|-----------------|-------|--------|------|------|------|------|------|------|
| Club Nexus  | Adamic et al.   | 2.5 K | 10 K   | 8.2  | 0.2  | 4    | 13   | n.a. | n.a. |
| Cyworld     | Ahn et al.      | 12 M  | 191 M  | 31.6 | 0.2  | 3.2  | 16   | n.a. | -0.1 |
| Cyworld T   | Ahn et al.      | 92 K  | 0.7 M  | 15.3 | 0.3  | 7.2  | n.a. | n.a. | 0.4  |
| LiveJournal | Mislove et al.  | 5 M   | 77 M   | 17   | 0.3  | 5.9  | 20   | n.a. | 0.2  |
| Flickr      | Mislove et al.  | 1.8 M | 22 M   | 12.2 | 0.3  | 5.7  | 27   | n.a. | 0.2  |
| Twitter     | Kwak et al.     | 41 M  | 1700 M | n.a. | n.a. | 4    | 4.1  | n.a. | n.a. |
| Orkut       | Mislove et al.  | 3 M   | 223 M  | 106  | 0.2  | 4.3  | 9    | 1.5  | 0.1  |
| Orkut       | Ahn et al.      | 100 K | 1.5 M  | 30.2 | 0.3  | 3.8  | n.a. | 3.7  | 0.3  |
| Youtube     | Mislove et al.  | 1.1 M | 5 M    | 4.29 | 0.1  | 5.1  | 21   | n.a. | -0.0 |
| Facebook    | Gjoka et al.    | 1 M   | n.a.   | n.a. | 0.2  | n.a. | n.a. | n.a. | 0.23 |
| FB H        | Nazir et al.    | 51 K  | 116 K  | n.a. | 0.4  | n.a. | 29   | n.a. | n.a. |
| FB GL       | Nazir et al.    | 277 K | 600 K  | n.a. | 0.3  | n.a. | 45   | n.a. | n.a. |
| BrightKite  | Scellato et al. | 54 K  | 213 K  | 7.88 | 0.2  | 4.7  | n.a. | n.a. | n.a. |
| FourSquare  | Scellato et al. | 58 K  | 351 K  | 12   | 0.3  | 4.6  | n.a. | n.a. | n.a. |
| LiveJournal | Scellato et al. | 993 K | 29.6 M | 29.9 | 0.2  | 4.9  | n.a. | n.a. | n.a. |
| Twitter     | Java et al.     | 87 K  | 829 K  | 18.9 | 0.1  | n.a. | 6    | n.a. | 0.59 |
| Twitter     | Scellato et al. | 409 K | 183 M  | 447  | 0.2  | 2.8  | n.a. | n.a. | n.a. |
16. Models Discussion

| Model | Static | Deg. dist. | C       | Rigid |
|-------|--------|------------|---------|-------|
| ER    | Yes    | Poisson    | Low     | -     |
| WS    | Yes    | Poisson    | Ok      | Yes   |
| BA    | No     | PL γ=3     | Fixable | Yes   |

• Moreover:
• Mostly no navigability
• Uniformity assumption
• Sometimes too complex for analytic study
• Few features studied
• Power-law?
17. Alternative Models for Degree Distributions
Power-laws are difficult to fit, and even when they do fit, there are often distributions that fit better.
A power-law with cutoff almost always fits better than a plain power-law:
f(x; γ, β) = x^(−γ) e^(−βx)
Sometimes the log-normal distribution is more appropriate:
f(x; σ, m) = (1 / (xσ(2π)^(1/2))) exp(−(log(x/m))² / (2σ²))
Most of the time, random and preferential attachment processes concur:
F(x; r) = 1 − (rm)^(1+r) (x + rm)^(−(1+r))
For r → 0 this tends to a scale-free distribution; for r → ∞ to a negative exponential distribution.
18. Milgram's Experiment
Random people from Omaha (Nebraska) and Wichita (Kansas) were asked to send a postcard to a target person in Boston (Massachusetts):
• Write their name on the postcard
• Forward the message only to personally known people who were more likely to know the target
1st run: 64/296 arrived, most delivered to the target by 2 men
2nd run: 24/160 arrived, 2/3 delivered by “Mr. Jacobs”
2 ≤ hops ≤ 10, µ = 5.x: “6 Degrees” of separation
CPL, hubs, ... and Kleinberg’s intuition
19. Biased Preferential Attachment
At each step:
• a new node is added to the network and is assigned to one of the sets P, I and L according to a probability distribution h;
• e₀ ∈ ℕ⁺ edges are added to the network;
• for each edge (u, v), u is chosen with distribution D^β and:
  • if u ∈ I, v is a new node and is assigned to P;
  • if u ∈ L, v is chosen according to D^γ.

D^β(u) ∝ (β + 1)(k_u + 1)  if u ∈ L
         k_u + 1           if u ∈ I
         0                 if u ∈ P

No analytic results available.
20. Transitive Linking Model [Davidsen 02]
At each step:
• TL: a random node is chosen, and it introduces two other nodes that are linked to it; if the node does not have 2 edges, it introduces itself to a random node.
• RM: with probability p a node is chosen, removed along with its edges, and replaced with a new node with one random edge.
When p ≪ 1 the TL step dominates the process:
• the degree distribution is a power-law with cutoff;
• C ≈ 1 − p(⟨k⟩ − 1), i.e., quite large in practice.
For larger values of p the two different processes concur to form an exponential degree distribution; for p ≈ 1 the degree distribution is essentially a Poisson distribution.
Instead of a single p, it would make sense to have distinct p and r parameters for nodes leaving and entering the network.
Few analytic results available.
(Slide from: Bergenti, Franchi, Poggi (Univ. Parma), Models for Agent-based Simulation of SN, SNAMAS ’11)
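The TL/RM dynamics described above can be sketched in a few lines of Python; the network size, p, and step count are arbitrary illustrative choices:

```python
import random

def transitive_linking(n, p, steps, seed=1):
    """Sketch of the transitive linking dynamics on a fixed set of n nodes."""
    rng = random.Random(seed)
    neigh = {i: set() for i in range(n)}
    for _ in range(steps):
        # TL: a random node introduces two of its neighbours to each other;
        # with fewer than 2 neighbours, it introduces itself to a random node
        i = rng.randrange(n)
        if len(neigh[i]) >= 2:
            a, b = rng.sample(sorted(neigh[i]), 2)
            neigh[a].add(b)
            neigh[b].add(a)
        else:
            j = rng.randrange(n)
            if j != i:
                neigh[i].add(j)
                neigh[j].add(i)
        # RM: with probability p a random node is removed along with its
        # edges and replaced by a new node with one random edge
        if rng.random() < p:
            v = rng.randrange(n)
            for u in neigh[v]:
                neigh[u].discard(v)
            neigh[v] = set()
            w = rng.randrange(n)
            if w != v:
                neigh[v].add(w)
                neigh[w].add(v)
    return neigh

neigh = transitive_linking(n=300, p=0.04, steps=20000)
mean_k = sum(len(s) for s in neigh.values()) / 300
print(mean_k > 2)  # True: the network densifies while TL dominates
```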
21. References
[1] Dorogovtsev, S. N. and Mendes, J. F. F. 2003. Evolution of Networks: From Biological Nets to the Internet and WWW. Oxford University Press.
[2] Watts, D. J. 2003. Small Worlds: The Dynamics of Networks between Order and Randomness. Princeton University Press.
[3] Jackson, M. O. 2010. Social and Economic Networks. Princeton University Press.
[4] Newman, M. 2010. Networks: An Introduction. Oxford University Press.
[5] Wasserman, S. and Faust, K. 1994. Social Network Analysis: Methods and Applications. Cambridge University Press.
[6] Scott, J. P. 2000. Social Network Analysis: A Handbook. Sage Publications.
[7] Kepner, J. and Gilbert, J. 2011. Graph Algorithms in the Language of Linear Algebra. Society for Industrial & Applied Mathematics.
[8] Cormen, T. H., Leiserson, C. E., Rivest, R. L., and Stein, C. 2009. Introduction to Algorithms. The MIT Press.
[9] Skiena, S. S. 2010. The Algorithm Design Manual. Springer.
[10] Bollobás, B. 1998. Modern Graph Theory. Springer.
[11] Watts, D. J. and Strogatz, S. H. 1998. Collective dynamics of ‘small-world’ networks. Nature 393, 6684, 440–442.
[12] Barabási, A. L. and Albert, R. 1999. Emergence of scaling in random networks. Science 286, 5439, 509.
[13] Kleinberg, J. 2000. The small-world phenomenon: an algorithmic perspective. Proceedings of the thirty-second annual ACM symposium on Theory of computing, 163–170.
[14] Milgram, S. 1967. The small world problem. Psychology Today 2, 1, 60–67.
22. Thanks for your kind attention.
Enrico Franchi (efranchi@ce.unipr.it)
AOTLAB, Dipartimento Ingegneria dell’Informazione,
Università di Parma