Computational Intelligence for Time Series Prediction (Gianluca Bontempi)
This document provides an overview of computational intelligence methods for time series prediction. It begins with introductions to time series analysis and machine learning approaches for prediction. Specific models discussed include autoregressive (AR), moving average (MA), and autoregressive moving average (ARMA) processes. Parameter estimation techniques for AR models are also covered. The document outlines applications in areas like forecasting, wireless sensors, and biomedicine and concludes with perspectives on future directions.
A Monte Carlo strategy for structured multiple-step-ahead time series prediction (Gianluca Bontempi)
The document proposes a Monte Carlo approach called SMC (Structured Monte Carlo) for multiple-step-ahead time series forecasting that takes into account the structural dependencies between predictions. It generates samples using a direct forecasting approach and weights them based on how well they satisfy dependencies identified by an iterated approach. Experiments on three benchmark datasets show the SMC approach achieves more accurate forecasts as measured by SMAPE than iterated, direct, or other comparison methods for most prediction horizons tested.
Presentation in the section "Applications of Probabilistic and Statistical Methods" of the XVI computer specialists' conference "Kompiuterininkų dienos – 2013", Šiauliai, 2013-09-21
Presentation in the section "Information Technologies in the Study and Teaching/Learning Process" of the XVI computer specialists' conference "Kompiuterininkų dienos – 2013", Šiauliai, 2013-09-21
Image segmentation is a fundamental operation in image processing which consists of dividing an image into homogeneous regions, helping a human to analyse the image, diagnose a disease and make a decision. In this work, we present a comparative study of two iterative estimation algorithms, EM (Expectation-Maximization) and ICE (Iterative Conditional Estimation), with respect to complexity, the PSNR index, the SSIM index, the error rate and convergence. These algorithms are used to segment brain tumor Magnetic Resonance Imaging (MRI) images under a Hidden Markov Chain with Independent Noise (HMC-IN) model. We apply a final Bayesian decision criterion, MPM (Marginal Posterior Mode), to estimate a final configuration of the resulting image X. The experimental results show that ICE and EM give the same results in terms of the PSNR index, SSIM index and error rate, but ICE converges to a solution faster than EM. However, ICE is more complex than EM.
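As an illustration of the EM side of the comparison, here is a minimal sketch of EM for a two-class Gaussian mixture on 1-D "intensity" values; the synthetic data and all parameter values are hypothetical and not taken from the paper:

```python
import math
import random

random.seed(3)

# Synthetic 1-D "pixel intensities": two classes drawn from Gaussians
# with unknown means; purely illustrative values.
data = ([random.gauss(1.0, 0.5) for _ in range(300)] +
        [random.gauss(4.0, 0.5) for _ in range(300)])

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# EM for a two-component mixture with fixed sigma, estimating the
# class means and the mixing weight
mu = [0.0, 1.0]   # deliberately poor initial means
w = 0.5
sigma = 0.5
for _ in range(50):
    # E-step: posterior probability that each point belongs to class 1
    r = []
    for x in data:
        p1 = w * gauss_pdf(x, mu[1], sigma)
        p0 = (1 - w) * gauss_pdf(x, mu[0], sigma)
        r.append(p1 / (p0 + p1))
    # M-step: re-estimate parameters from the posteriors
    s1 = sum(r)
    mu[1] = sum(ri * x for ri, x in zip(r, data)) / s1
    mu[0] = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - s1)
    w = s1 / len(data)

print(f"estimated means: {sorted(mu)}")
```

The recovered means approach the true class centers even from a poor initialization, which is the behavior EM and ICE are compared on in the paper.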
This 10-hour class is intended to give students the basics for empirically solving statistical problems. Talk 1 serves as an introduction to the statistical software R and presents how to calculate basic measures such as the mean, variance, correlation and Gini index. Talk 2 shows how the central limit theorem and the law of large numbers work empirically. Talk 3 presents the point estimate, the confidence interval and the hypothesis test for the most important parameters. Talk 4 introduces the linear regression model and Talk 5 the bootstrap. Talk 5 also presents a simple example of a Markov chain.
All the talks are supported by scripts written in the R language.
International Journal of Computational Engineering Research (IJCER) is an international monthly online journal published in English. The journal publishes original research that contributes significantly to scientific knowledge in engineering and technology.
Markov chain and SIR epidemic model (Greenwood model) (Writwik Mandal)
This document discusses Markov chains and the SIR epidemic model. It begins by defining random processes and Markov processes, noting that a Markov process is one where the future is independent of the past given the present state. It then introduces the basics of Markov chains, including the transition probability matrix. The document also explains the SIR epidemic model, which categorizes a population into susceptible, infected, and recovered groups. It provides the differential equations that model changes between these groups over time. Finally, it demonstrates how to model an SIR epidemic using a Markov chain with examples.
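The differential equations mentioned above can be integrated numerically with a simple Euler scheme; the parameter values below (beta, gamma, and the initial state) are illustrative choices, not values from the document:

```python
# Euler integration of the standard SIR equations:
#   dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I
N = 1000.0
beta, gamma = 0.3, 0.1   # hypothetical contact and recovery rates
S, I, R = 999.0, 1.0, 0.0
dt = 0.1

for _ in range(int(160 / dt)):  # simulate 160 time units
    new_inf = beta * S * I / N  # flow from susceptible to infected
    rec = gamma * I             # flow from infected to recovered
    S += -new_inf * dt
    I += (new_inf - rec) * dt
    R += rec * dt

print(f"Final: S={S:.1f}, I={I:.1f}, R={R:.1f}")
```

With these rates (basic reproduction number beta/gamma = 3) the epidemic burns through most of the population before dying out, which is the qualitative behavior the SIR model is meant to capture.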
2009 PRE - Simulation of the time evolution of the Wigner function with a fir... (Guilherme Tosi)
This document describes a Monte Carlo method for simulating the time evolution of the Wigner function. It presents a formalism using a hidden variable representation that allows the full quantum dynamics to be calculated classically. Specifically:
1) It defines a two-state classical system that can describe the time evolution of quasiprobability densities like the Wigner function.
2) It extends this to define a hidden variable stochastic field and associated probability vector that can capture the dynamics of the Wigner function through a renormalization procedure.
3) This allows the use of classical Monte Carlo techniques to simulate the quantum dynamics and obtain phase space information like the Wigner function over time for systems under arbitrary time-dependent potentials.
Robust Fuzzy Data Clustering In An Ordinal Scale Based On A Similarity Measure (IJRES Journal)
This paper is devoted to processing data given in an ordinal scale. A new objective function of a special type is introduced, along with a group of robust fuzzy clustering algorithms based on the similarity measure.
- Point estimation involves using sample data to calculate a single number (point estimate) that estimates an unknown population parameter.
- A point estimator is a statistic used to calculate the point estimate. For example, when estimating an unknown population mean μ, the sample mean x̅ is a point estimator for μ.
- An unbiased estimator has an expected value equal to the true population parameter value. A biased estimator has an expected value that is not equal to the true parameter value.
- Common methods for finding estimators include maximum likelihood estimation and the method of moments. Maximum likelihood estimation identifies the value of the parameter that maximizes the likelihood function based on the sample data. The method of moments equates sample moments to population moments to estimate parameters.
- Point estimation involves using sample data to calculate a single number (point estimate) that estimates an unknown population parameter.
- A point estimator is a statistic used to calculate the point estimate. The sample mean is a common point estimator used to estimate the population mean.
- An unbiased estimator has an expected value equal to the true population parameter. A biased estimator has an expected value that is not equal to the true parameter.
- Maximum likelihood estimation and the method of moments are two common approaches for finding estimators. Maximum likelihood estimation selects the value of the parameter that maximizes the likelihood function. The method of moments equates sample moments to population moments to estimate parameters.
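The contrast between the two approaches can be seen on a Uniform(0, θ) sample, where the two estimators genuinely differ; the true θ and sample size below are arbitrary illustrative choices:

```python
import random

random.seed(42)
theta = 10.0  # true parameter of Uniform(0, theta); hypothetical value
n = 1000
sample = [random.uniform(0, theta) for _ in range(n)]

# Maximum likelihood: the likelihood is (1/theta)^n for theta >= max(x),
# so it is maximized at the sample maximum.
theta_mle = max(sample)

# Method of moments: E[X] = theta/2, so equating the first sample moment
# to the population moment gives theta_hat = 2 * sample mean.
theta_mom = 2 * sum(sample) / n

print(f"MLE estimate: {theta_mle:.3f}")
print(f"MoM estimate: {theta_mom:.3f}")
```

Note that the MLE here is slightly biased downward (it can never exceed the largest observation), illustrating the biased/unbiased distinction in the bullets above.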
Probability and random processes project based learning template.pdf (Vedant Srivastava)
To understand the concept of the Monte Carlo method and its various applications; it relies on repeated random sampling to obtain numerical results.
To develop computational algorithms that solve problems involving random sampling.
The objectives also include simulating a specific problem in Matlab software.
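As a minimal instance of the repeated-random-sampling idea (sketched here in Python rather than Matlab), the classic estimate of pi from random points in the unit square:

```python
import random

random.seed(5)

# Monte Carlo estimate of pi: the fraction of uniform random points in
# the unit square that land inside the quarter circle tends to pi/4.
n = 200_000
inside = sum(1 for _ in range(n)
             if random.random() ** 2 + random.random() ** 2 <= 1.0)
pi_est = 4.0 * inside / n
print(f"pi estimate: {pi_est:.4f}")
```

The error shrinks like 1/sqrt(n), which is the characteristic convergence rate of Monte Carlo methods.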
This document presents a comparison of dimension reduction techniques for survival analysis, including principal component analysis (PCA), partial least squares (PLS), and random matrix approaches. Simulation data with 100 observations and 1000 covariates was generated to test the ability of each method to minimize bias and mean squared error in estimating survival functions. PCA and PLS were able to capture 50% of the variance by reducing the dimensions to 37. The estimated survival functions were compared to the true function over 5000 iterations. PLS had the lowest bias and mean squared error, followed by PCA, with the random matrix approaches performing worse.
Random Walks in Statistical Theory of Communication (IRJET Journal)
1. The document discusses random walks, which are a type of random process that can model phenomena like diffusion and stock price variations. Random walks involve successive random steps and can take place in discrete or continuous time and space.
2. The document provides details on modeling random walks mathematically using probabilities and binomial distributions. It also discusses calculating the probability of a random walk returning to its origin.
3. The document shows examples of using random walks to model particle motion in 2D and 3D spaces. It also discusses how continuous random variables can produce Brownian motion, a type of random walk. Random walks have various applications in fields like computer science, image processing, genetics and neuroscience.
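The return-to-origin probability mentioned in point 2 can be approximated empirically with a short simulation; the step count and number of trials below are arbitrary illustrative choices:

```python
import random

random.seed(0)

def returns_to_origin(n_steps: int) -> bool:
    """Simulate a 1-D symmetric random walk; report whether it revisits 0."""
    pos = 0
    for _ in range(n_steps):
        pos += random.choice((-1, 1))
        if pos == 0:
            return True
    return False

trials = 5000
n_steps = 100
p_return = sum(returns_to_origin(n_steps) for _ in range(trials)) / trials
print(f"Estimated P(return within {n_steps} steps) = {p_return:.3f}")
```

For the symmetric 1-D walk the eventual return probability is 1, and within 100 steps the simulated estimate already sits above 0.9.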
This document provides an introduction to bootstrap methods and Markov chains. It discusses how bootstrap can be used to estimate properties of a statistic like mean or variance when the sample is small and assumptions of the central limit theorem may not apply. The basic bootstrap approach resamples the original sample with replacement to create new bootstrap samples and estimates the statistic for each. Markov chains are defined as stochastic processes where the next state only depends on the current state. An example of a 2-state Markov chain is provided along with notation for transition probabilities and computing unconditional probabilities. The document also discusses stationary distributions for Markov chains.
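The resampling loop described above can be sketched in a few lines; the sample values, replicate count, and percentile interval are illustrative choices, not from the document:

```python
import random

random.seed(1)

# Hypothetical small sample; the bootstrap resamples it with replacement
# to approximate the sampling distribution of the mean.
sample = [2.1, 3.4, 1.9, 5.6, 4.2, 3.3, 2.8, 4.9]
n = len(sample)
B = 2000  # number of bootstrap replicates

boot_means = []
for _ in range(B):
    resample = [random.choice(sample) for _ in range(n)]
    boot_means.append(sum(resample) / n)

boot_means.sort()
# Percentile 95% confidence interval for the mean
lo, hi = boot_means[int(0.025 * B)], boot_means[int(0.975 * B)]
print(f"Bootstrap 95% CI for the mean: ({lo:.2f}, {hi:.2f})")
```

This is exactly the setting the summary mentions: a sample too small to lean on the central limit theorem, where resampling gives an empirical picture of the estimator's variability.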
This document provides information about a computational stochastic processes course, including lecture details, prerequisites, syllabus, and examples. The key points are:
- Lectures will cover Monte Carlo simulation, stochastic differential equations, Markov chain Monte Carlo methods, and inference for stochastic processes.
- Prerequisites include probability, stochastic processes, and programming.
- Assessments will include coursework and an exam. The coursework will involve computational problems in Python, Julia, R, or similar languages.
- Motivating examples discussed include using Monte Carlo methods to evaluate high-dimensional integrals and simulating Langevin dynamics in statistical physics.
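A minimal sketch of the high-dimensional-integral example, using an arbitrary integrand f(x) = Σ x_i² over the unit hypercube (chosen only because its exact value, d/3, is known):

```python
import random

random.seed(7)
d = 10          # dimension of the hypercube
n = 100_000     # number of Monte Carlo samples

# Estimate the integral of f(x) = sum(x_i^2) over [0,1]^d by averaging
# f at uniform random points; the exact value is d/3.
total = 0.0
for _ in range(n):
    x = [random.random() for _ in range(d)]
    total += sum(xi * xi for xi in x)

estimate = total / n
print(f"MC estimate: {estimate:.4f}  (exact: {d/3:.4f})")
```

The appeal for high dimensions is that the 1/sqrt(n) error rate does not depend on d, whereas grid-based quadrature costs grow exponentially in d.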
The document discusses audio quantization and transmission. It covers:
1) Quantization converts continuous audio signals into discrete digital signals by sampling and assigning numeric codes, which are then transmitted or stored.
2) Compression uses linear or non-linear quantization, with non-linear providing better protection of quiet passages.
3) Common compression techniques include pulse code modulation (PCM), differential PCM (DPCM), and adaptive DPCM (ADPCM), which adapts the quantizer and predictor to the audio signal.
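The difference between plain PCM and DPCM can be sketched with a uniform quantizer on a synthetic sine wave; the sample rate, bit depth, and signal are illustrative, not from the document:

```python
import math

def quantize(x: float, step: float) -> float:
    """Uniform (linear) quantizer: round the value to the nearest level."""
    return round(x / step) * step

# Hypothetical 440 Hz tone sampled at 8 kHz for a few milliseconds
fs, f = 8000, 440
signal = [math.sin(2 * math.pi * f * t / fs) for t in range(64)]

step = 2.0 / 256  # 8-bit uniform quantization over [-1, 1]

# PCM: quantize each sample directly
pcm = [quantize(s, step) for s in signal]

# DPCM: quantize the difference from the previous *reconstructed* sample,
# so only prediction errors (usually small) need to be coded
dpcm = []
prev = 0.0
for s in signal:
    diff = quantize(s - prev, step)
    prev += diff
    dpcm.append(prev)

err_pcm = max(abs(a - b) for a, b in zip(signal, pcm))
err_dpcm = max(abs(a - b) for a, b in zip(signal, dpcm))
print(f"max PCM error:  {err_pcm:.4f}")
print(f"max DPCM error: {err_dpcm:.4f}")
```

Because the DPCM predictor works from the reconstructed value, the quantization error stays bounded by half a step rather than accumulating; the coding gain comes from the differences having a much smaller range than the raw samples.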
Data Driven Choice of Threshold in Cepstrum Based Spectrum Estimates (ipij)
Cepstrum thresholding is shown to be an effective yet simple way of obtaining a smoothed nonparametric spectrum estimate of a stationary signal. The major problem of this method is the choice of the threshold value for variance reduction of spectrum estimates. This paper proposes a new threshold selection method based on cross-validation schemes such as Leave-One-Out, Leave-Two-Out and Leave-Half-Out. These new methods are easy to describe, simple to implement, and do not impose severe conditions on the unknown spectrum. Numerical results suggest that the new methods agree with those obtained when the spectrum is fully known.
Self-sampling Strategies for Multimemetic Algorithms in Unstable Computationa... (Rafael Nogueras)
This document discusses self-sampling strategies for multimemetic algorithms (MMAs) in unstable computational environments subject to churn. It proposes using probabilistic models to sample new individuals when populations need to be enlarged due to node failures. Experimental results show the bivariate model is superior for high churn, maintaining diversity and convergence better than random strategies. Future work aims to extend these self-sampling strategies to dynamic network topologies and more complex probabilistic models.
The document describes a study that uses an optimized k-means clustering algorithm to analyze crime data and predict crime patterns in India. The study collects crime data from 2001-2010, performs data preprocessing, and uses the elbow method to determine that the optimal number of clusters is 8. The k-means algorithm is then applied with k=8 to cluster the data. The results show that males commit more crimes in Madhya Pradesh and females in Maharashtra, and those under 18 commit more crimes in Chhattisgarh. UP has the highest total number of criminals. The optimized k-means approach improves accuracy and efficiency in crime analysis and prediction.
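The elbow method described above can be sketched with a basic Lloyd's k-means on synthetic data standing in for the crime records; the cluster locations and counts are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the crime data: three well-separated 2-D clusters
data = np.vstack([
    rng.normal(loc=c, scale=0.5, size=(100, 2))
    for c in ([0.0, 0.0], [5.0, 5.0], [10.0, 0.0])
])

def kmeans_inertia(X, k, n_iter=50):
    """Run basic Lloyd's k-means and return the final within-cluster
    sum of squared distances (the 'inertia' used by the elbow method)."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each point to its nearest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).sum()

# Elbow method: inertia drops sharply until k reaches the true cluster
# count, then levels off; the "elbow" suggests the k to use.
inertias = {k: kmeans_inertia(data, k) for k in range(1, 7)}
for k, v in inertias.items():
    print(f"k={k}: inertia={v:.1f}")
```

Plotting inertia against k and picking the bend is the same procedure the study uses to arrive at k=8 on its crime dataset.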
Predicting electricity consumption using hidden parameters (IJLT EMAS)
This paper applies a data mining technique to forecast the power demand of a geographical region based on meteorological conditions. The forecasting technique is implemented with a Hidden Markov Model (HMM). The data comprise the values of factors such as temperature, humidity and public holidays, on which consumption depends, together with the daily consumption values. Data mining operations are performed on this historical data to build a forecasting model capable of predicting daily consumption given the meteorological parameters. The steps of the knowledge discovery process are implemented: the data is preprocessed and fed to the HMM for training, and the trained HMM is then used to predict electricity demand for the given meteorological conditions.
This document summarizes a novel algorithm for fast sparse image reconstruction from compressed sensing measurements. The algorithm uses adaptive nonlinear filtering strategies in an iterative framework. It formulates the image reconstruction problem using total variation minimization and solves it using a two-step iterative scheme. Numerical experiments show that the algorithm is efficient, stable, and fast compared to state-of-the-art methods, as it can reconstruct images from highly incomplete samples in just a few seconds with competitive performance.
The document discusses object detection using YOLOv5 models of varying sizes on different hardware platforms. It evaluates the mAP, inference time, parameters, and GFLOPS of YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x models on a reduced COCO dataset. It also measures the average inference time of the optimized Int8 versions of these models on an iPhone 12's Neural Engine, GPU, and CPU. The results show that optimized YOLOv5 models can run real-time object detection at up to 100 images per second on the iPhone 12's Neural Engine.
This document summarizes research on supervised environmental data classification using spatial auto-beta models. The data consists of random fields with attribute values and class labels. A training set is used to classify new observations using generative classification methods. Specifically, attribute values fall within an interval and class labels take one of two values. Transformations are applied to make the data distribution normal. The best fitting distribution is selected to best describe the data. Classification accuracy is evaluated using actual error rates estimated from the data.
This document summarizes a presentation on analyzing Lombard speech and its acoustic properties. It discusses an experiment where 8 speakers recorded words in two rooms, one with acoustic treatment and one without, both with and without noise. Acoustic features were extracted from the speech samples and analyzed based on noise type, room type, and speaker gender. Key findings included identifying features that distinguish Lombard speech from normal speech and vary based on noise level. Future work will use these findings to automatically monitor and improve speech quality and intelligibility in noise.
This document discusses the history and development of hypertext and markup languages. It begins with early methods of calculating and writing before discussing the development of the printing press and moveable type in the 15th century. It then outlines important developments in hypertext standards and systems from 1945 to the present, including XML, HTML, CSS and the creation of the World Wide Web in 1990. It also discusses early limitations and issues with HTML and predictions for the future of hypertext.
5th LF Energy Power Grid Model Meet-up Slides (DanBrown980551)
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Introduction of Cybersecurity with OSS at Code Europe 2024 (Hiroshi SHIBATA)
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
TrustArc Webinar - 2024 Global Privacy Survey (TrustArc)
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Ocean lotus Threat actors project by John Sitima 2024 (1).pptx (SitimaJohn)
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Project Management Semester Long Project - Acuityjpupo2018
Acuity is an innovative learning app designed to transform the way you engage with knowledge. Powered by AI technology, Acuity takes complex topics and distills them into concise, interactive summaries that are easy to read & understand. Whether you're exploring the depths of quantum mechanics or seeking insight into historical events, Acuity provides the key information you need without the burden of lengthy texts.
2. Introduction
This work describes the empirical Bayesian approach applied in the estimation of multi-dimensional frequencies. It also introduces the Monte-Carlo Markov Chain (MCMC) procedure, which is designed for Bayesian computation. To model the discrete variable, the number of occurrences of a rare event, two statistical models are used: a normal distribution with unknown mean and covariance parameters (for the logits) and a Poisson distribution (for the counts).
COMPUTER DAYS – 2013
Šiauliai
3. Introduction
Let us consider a set \Pi_1, \Pi_2, \dots, \Pi_K of K populations, where each population \Pi_j consists of N_j individuals, j = 1, \dots, K. Assume that some event (e.g., death due to some disease, an insured event) can occur in the populations under observation.
4. The aim
Our aim is to estimate the unknown probabilities P_j^m of events in the populations when the numbers Y_j^m of events are observed, j = 1, \dots, K; m = 1, \dots, M. Since the simple relative-risk estimate Y_j^m / N_j cannot be used in many cases, due to the great differences in the population sizes N_j, the empirical Bayesian approach is applied.
5. Poisson-Gaussian model
An assumption is often justified that the numbers of cases Y_j^m follow the Poisson distribution with the parameters \lambda_j^m = N_j P_j^m, whose density is as follows:

    f(Y_j^m, \lambda_j^m) = e^{-\lambda_j^m} (\lambda_j^m)^{Y_j^m} / Y_j^m!,    j = 1, \dots, K.
6. Poisson-Gaussian model
The empirical Bayesian method is a two-stage procedure, depending on the prior distribution introduced in the second stage. It is of interest to consider a model in which the logits

    \theta^m = \ln \frac{P^m}{1 - P^m}

are normally distributed with the parameters \mu, \Sigma.
7. Poisson-Gaussian model
Thus the density of the logit vector \theta = (\theta^1, \dots, \theta^M)^T is

    g(\theta, \mu, \Sigma) = \frac{1}{(2\pi)^{M/2} |\Sigma|^{1/2}} \exp\left( -\frac{1}{2} (\theta - \mu)^T \Sigma^{-1} (\theta - \mu) \right).

Then the rates P_j^m are evaluated as a posteriori means for given \mu, \Sigma:

    P_j^m = \frac{1}{D_j} \int \frac{1}{1 + e^{-\theta^m}} \prod_{m'=1}^{M} f\left( Y_j^{m'}, \frac{N_j}{1 + e^{-\theta^{m'}}} \right) g(\theta, \mu, \Sigma) \, d\theta,

where

    D_j = \int \prod_{m=1}^{M} f\left( Y_j^m, \frac{N_j}{1 + e^{-\theta^m}} \right) g(\theta, \mu, \Sigma) \, d\theta,    j = 1, \dots, K; m = 1, \dots, M.
8. Maximum likelihood method
In statistics, the Bayesian analysis is often related to the optimization of a certain function, expressed as the integral of an a posteriori density. Thus, in the empirical Bayesian approach, the unknown parameters \mu, \Sigma are estimated by the maximum likelihood method. After some manipulation we get the logarithmic likelihood function

    L(\mu, \Sigma) = \sum_{j=1}^{K} \ln \int \prod_{m=1}^{M} f\left( Y_j^m, \frac{N_j}{1 + e^{-\theta^m}} \right) g(\theta, \mu, \Sigma) \, d\theta = \sum_{j=1}^{K} \ln D_j(\mu, \Sigma),

which has to be maximized to get the estimates of the parameters.
9. Derivatives of the maximum likelihood function
The likelihood function is differentiable many times with respect to the parameters \mu, \Sigma, and its first derivatives are as follows:

    \frac{\partial L(\mu, \Sigma)}{\partial \mu} = \sum_{j=1}^{K} \frac{1}{D_j} \int \Sigma^{-1} (\theta - \mu) \prod_{m=1}^{M} f\left( Y_j^m, \frac{N_j}{1 + e^{-\theta^m}} \right) g(\theta, \mu, \Sigma) \, d\theta,

    \frac{\partial L(\mu, \Sigma)}{\partial \Sigma} = \frac{1}{2} \sum_{j=1}^{K} \frac{1}{D_j} \int \left( \Sigma^{-1} (\theta - \mu)(\theta - \mu)^T \Sigma^{-1} - \Sigma^{-1} \right) \prod_{m=1}^{M} f\left( Y_j^m, \frac{N_j}{1 + e^{-\theta^m}} \right) g(\theta, \mu, \Sigma) \, d\theta.
10. Poisson-Gaussian model estimates
The maximum likelihood estimates of the parameters \mu, \Sigma of the Poisson-Gaussian model are found by solving the equations in which the first derivatives are set equal to zero:

    \mu = \frac{1}{K} \sum_{j=1}^{K} \frac{1}{D_j} \int \theta \prod_{m=1}^{M} f\left( Y_j^m, \frac{N_j}{1 + e^{-\theta^m}} \right) g(\theta, \mu, \Sigma) \, d\theta,

    \Sigma = \frac{1}{K} \sum_{j=1}^{K} \frac{1}{D_j} \int (\theta - \mu)(\theta - \mu)^T \prod_{m=1}^{M} f\left( Y_j^m, \frac{N_j}{1 + e^{-\theta^m}} \right) g(\theta, \mu, \Sigma) \, d\theta.
11. Poisson-Gaussian model estimates
For instance, the “fixed point iteration” method is useful for solving these equations in order to get the maximum likelihood estimates of \mu, \Sigma:

    \mu^{t+1} = \frac{1}{K} \sum_{j=1}^{K} \frac{1}{D_j(\mu^t, \Sigma^t)} \int \theta \prod_{m=1}^{M} f\left( Y_j^m, \frac{N_j}{1 + e^{-\theta^m}} \right) g(\theta, \mu^t, \Sigma^t) \, d\theta,

    \Sigma^{t+1} = \frac{1}{K} \sum_{j=1}^{K} \frac{1}{D_j(\mu^t, \Sigma^t)} \int (\theta - \mu^t)(\theta - \mu^t)^T \prod_{m=1}^{M} f\left( Y_j^m, \frac{N_j}{1 + e^{-\theta^m}} \right) g(\theta, \mu^t, \Sigma^t) \, d\theta.
12. MCMC algorithm
The “fixed point iteration” method can be realized by the Monte-Carlo Markov chain approach. A chain of t steps is generated, and at each step we generate multivariate Gaussian vectors

    \theta_{j,k} \sim N(\mu^t, \Sigma^t),    k = 1, \dots, N^t,

where N^t is the Monte-Carlo sample size at the t-th step.
13. MCMC algorithm
In order to avoid computational problems when the intermediate results are very small, we have introduced the auxiliary function

    r_j(\theta) = \prod_{m=1}^{M} f\left( Y_j^m, \frac{N_j}{1 + e^{-\theta^m}} \right) \Big/ \prod_{m=1}^{M} f\left( Y_j^m, \frac{N_j}{1 + e^{-\mu^m}} \right),

or, in the logarithmic form,

    \ln r_j(\theta) = \sum_{m=1}^{M} \left[ N_j \left( \frac{e^{-\theta^m}}{1 + e^{-\theta^m}} - \frac{e^{-\mu^m}}{1 + e^{-\mu^m}} \right) + Y_j^m \ln \frac{1 + e^{-\mu^m}}{1 + e^{-\theta^m}} \right].
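A minimal sketch of the logarithmic form above (not part of the original slides; the counts, population size, and logit values are made up): the sum over m is computed directly and agrees with the log-ratio of the Poisson likelihoods evaluated term by term.

```python
import math

def log_r_j(theta, mu, Y_j, N_j):
    """ln r_j(theta): log-ratio of Poisson likelihoods at logits theta vs. mu."""
    s = 0.0
    for th_m, mu_m, y_m in zip(theta, mu, Y_j):
        s += N_j * (math.exp(-th_m) / (1 + math.exp(-th_m))
                    - math.exp(-mu_m) / (1 + math.exp(-mu_m)))
        s += y_m * math.log((1 + math.exp(-mu_m)) / (1 + math.exp(-th_m)))
    return s

def log_poisson(y, lam):
    # ln f(y, lambda) = -lambda + y ln(lambda) - ln(y!)
    return -lam + y * math.log(lam) - math.lgamma(y + 1)

mu = [-3.0, -4.0, -5.0]        # prior means of the logits
theta = [-2.5, -4.2, -5.5]     # one sampled logit vector (hypothetical)
Y_j, N_j = [12, 7, 3], 1000    # hypothetical observed counts and population size

# Direct definition: difference of log Poisson likelihoods at theta and at mu
direct = sum(log_poisson(y, N_j / (1 + math.exp(-t))) -
             log_poisson(y, N_j / (1 + math.exp(-m)))
             for t, m, y in zip(theta, mu, Y_j))
print(abs(log_r_j(theta, mu, Y_j, N_j) - direct))  # ≈ 0 (floating-point error only)
```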
14. MCMC algorithm
And then we get the estimates of the parameters

    \mu^{t+1} = \frac{1}{K} \sum_{j=1}^{K} \tilde m_j^t / \tilde D_j^t,    \Sigma^{t+1} = \frac{1}{K} \sum_{j=1}^{K} \tilde S_j^t / \tilde D_j^t,

where the Monte-Carlo estimators are as follows:

    \tilde D_j^t = \sum_{k=1}^{N^t} r_j(\theta_{j,k}),    \tilde{D2}_j^t = \sum_{k=1}^{N^t} r_j(\theta_{j,k})^2,

    \tilde m_j^t = \sum_{k=1}^{N^t} \theta_{j,k} \, r_j(\theta_{j,k}),    p_{j,m}^t = \sum_{k=1}^{N^t} \frac{1}{1 + e^{-\theta_{j,k}^m}} \, r_j(\theta_{j,k}),

    \tilde S_j^t = \sum_{k=1}^{N^t} \left( \theta_{j,k} - \tilde m_j^t / \tilde D_j^t \right) \left( \theta_{j,k} - \tilde m_j^t / \tilde D_j^t \right)^T r_j(\theta_{j,k}).
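The update above can be sketched as one importance-weighted step (an illustration, not the authors' code; the data, population sizes, and sample size are hypothetical, and the weights are taken as r_j = exp(ln r_j)):

```python
import numpy as np

rng = np.random.default_rng(0)

def weights(thetas, mu, Y_j, N_j):
    """r_j(theta) for each sampled logit vector (rows of thetas)."""
    lam_t = N_j / (1 + np.exp(-thetas))   # Poisson rates at the sampled logits
    lam_m = N_j / (1 + np.exp(-mu))       # Poisson rates at the prior mean
    log_r = ((lam_m - lam_t).sum(axis=1)
             + (Y_j * (np.log(lam_t) - np.log(lam_m))).sum(axis=1))
    return np.exp(log_r)

def mcmc_step(mu, Sigma, Y, N, n_samples):
    """One fixed-point update of (mu, Sigma) using the Monte-Carlo estimators."""
    K, M = Y.shape
    mu_num, Sig_num = np.zeros(M), np.zeros((M, M))
    for j in range(K):
        thetas = rng.multivariate_normal(mu, Sigma, size=n_samples)
        r = weights(thetas, mu, Y[j], N[j])
        D_j = r.sum()                               # \tilde D_j^t
        m_j = (thetas * r[:, None]).sum(axis=0)     # \tilde m_j^t
        c = thetas - m_j / D_j                      # centred samples
        S_j = (c[:, :, None] * c[:, None, :] * r[:, None, None]).sum(axis=0)
        mu_num += m_j / D_j
        Sig_num += S_j / D_j
    return mu_num / K, Sig_num / K

# Hypothetical data: K = 4 populations, M = 2 event types
Y = np.array([[8, 3], [12, 5], [6, 2], [10, 4]])
N = np.array([1000, 1500, 800, 1200])
mu, Sigma = np.array([-4.0, -5.0]), 0.25 * np.eye(2)
mu, Sigma = mcmc_step(mu, Sigma, Y, N, n_samples=2000)
print(mu)   # updated estimate of the prior mean of the logits
```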
15. MCMC algorithm
Next, the estimate of the log-likelihood function is obtained using the Monte-Carlo estimators:

    L^t = \sum_{j=1}^{K} \ln \tilde D_j^t,

its sample variance estimate:

    d^t = \sum_{j=1}^{K} \left( \tilde{D2}_j^t N^t / (\tilde D_j^t)^2 - 1 \right),

and the estimates of the event probabilities in the populations:

    \tilde P_{j,m}^t = p_{j,m}^t / \tilde D_j^t.
16. MCMC algorithm
The Monte-Carlo chain can be terminated at the t-th step if the estimates of two successive steps differ insignificantly. The hypothesis on the termination condition is rejected if the test statistic exceeds Fisher's quantile:

    H^t > F_{\gamma, v},

where H^t is built from the differences between the successive estimates (\mu^t, \Sigma^t) and (\mu^{t-1}, \Sigma^{t-1}), normalized by the Monte-Carlo variance estimates \tilde{D2}_j^t / (\tilde D_j^t)^2.
17. MCMC algorithm
The following rule of sample size regulation is implemented, so that large samples are taken only at the moment of making the decision on the termination of the Monte-Carlo Markov chain:

    N^{t+1} = N^t F_{\gamma, v} / H^t,

where F_{\gamma, v} is Fisher's quantile and \gamma is the significance level.
18. MCMC algorithm
Application of this rule allows a rational selection of the sample sizes in the Monte-Carlo Markov chain that ensures the convergence of the maximum likelihood function.
19. Computer simulation
Next, we used simulated data to construct and estimate this statistical model. A random sample \Pi_1, \Pi_2, \dots, \Pi_K of K = 10 populations, in which M = 3 events can occur, has been simulated to explore the approach developed. The logits of the probabilities are normally distributed with the parameters

    \mu = (-3, -4, -5)^T,    \Sigma = diag(0.25, 0.25, 0.25).
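The simulation setup above can be reproduced as follows (a sketch only; the random seed, the population sizes, and the sampling details are my own choices, not taken from the slides):

```python
import numpy as np

rng = np.random.default_rng(2013)

K, M = 10, 3                                  # populations and event types
mu = np.array([-3.0, -4.0, -5.0])             # prior means of the logits
Sigma = 0.25 * np.eye(M)                      # prior covariance of the logits

N = rng.integers(500, 5000, size=K)           # population sizes (made up)
theta = rng.multivariate_normal(mu, Sigma, size=K)  # true logits per population
P = 1 / (1 + np.exp(-theta))                  # true event probabilities
Y = rng.poisson(N[:, None] * P)               # observed event counts Y_j^m

print(P.round(4))
print(Y)
```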
20. Computer simulation
Next, we have computed the Monte-Carlo Markov chain of t = 100 estimators. To avoid very small or very large sample sizes, the following limits were applied:

    500 \le N^t \le 17000.

The termination conditions started to be valid after t = 6 iterations, and the following means of the parameters were obtained:

21. [Table of the estimated means of the parameters \mu and \Sigma]
22. Conclusions
The empirical Bayesian approach applied in the estimation of multi-dimensional frequencies has been described in this work.
In this paper we:
• presented the iterative “fixed point iteration” method to compute the estimates;
• introduced the Monte-Carlo Markov Chain procedure with adaptive regulation of the sample size and a statistical treatment of the simulation error;
• computed the empirical Bayesian estimates of the unknown parameters and of the probabilities of the events.
The approach developed can be applied in the analysis of social and medical data.