The document summarises a study analysing high-frequency trading data for 29 Russian stocks over six months in 2010. In total, 369 shock events were identified in which the price changed significantly within a short period, roughly 13 per stock on average. The study applies various filters to high-frequency tick data to identify abrupt price changes at time scales from microseconds to minutes. The goal is to better understand the dynamics and inefficiencies of short-term price formation.
Accelerated life testing (ALT) is widely used to expedite failures of a product within a short time period in order to predict the product's reliability under normal operating conditions. The resulting ALT data are often characterized by a probability distribution, such as the Weibull, lognormal, or gamma distribution, along with a life-stress relationship. However, if the selected failure time distribution does not adequately describe the ALT data, the resulting reliability prediction will be misleading. In this talk, we provide a generic method for modeling ALT data that assists engineers in dealing with a variety of failure time distributions. The method uses Erlang-Coxian (EC) distributions, which belong to a particular subset of phase-type (PH) distributions, to approximate the underlying failure time distributions arbitrarily closely. To estimate the parameters of such an EC-based ALT model, two statistical inference approaches are proposed. First, a mathematical programming approach is formulated to simultaneously match the moments of the EC-based ALT model to the ALT data collected at all test stress levels; this approach resolves the feasibility issue of the method of moments. In addition, a maximum likelihood estimation (MLE) approach is proposed to handle ALT data with type-I censoring. Numerical examples illustrate the capability of the generic method in modeling ALT data.
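The moment-matching idea can be illustrated with a much simpler stand-in. The sketch below is a hedged example, not the talk's EC/phase-type model: it fits a lognormal distribution to failure times by matching its first two moments, and the data values are hypothetical.

```python
import math

# Illustrative method-of-moments fit: match the first two sample moments of
# failure-time data to a lognormal distribution. (The EC-based ALT model in
# the talk matches moments of a phase-type distribution instead.)
def lognormal_moment_fit(data):
    n = len(data)
    m1 = sum(data) / n                     # first sample moment
    m2 = sum(x * x for x in data) / n      # second sample moment
    sigma2 = math.log(m2 / (m1 * m1))      # from E[X^2]/E[X]^2 = exp(sigma^2)
    mu = math.log(m1) - sigma2 / 2.0       # from E[X] = exp(mu + sigma^2/2)
    return mu, sigma2

# Hypothetical failure times (hours) at one stress level
mu, sigma2 = lognormal_moment_fit([120.0, 150.0, 90.0, 200.0, 170.0])
```

By construction, the fitted distribution reproduces the sample mean and second moment exactly, which is the defining property of a method-of-moments estimator.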
This document is the handout material for the "Code-Saturne beginner seminar" (November 1, 2014, "OpenCAE Study Meeting @ Kansai").
http://ofbkansai.sakura.ne.jp/
Optimising Autonomous Robot Swarm Parameters for Stable Formation Design (Daniel H. Stolfi)
Autonomous robot swarm systems make it possible to address many inherent limitations of single-robot systems, such as scalability and reliability. As a consequence, they have found their way into numerous applications, including space and aerospace domains such as swarm-based asteroid observation and counter-drone systems. However, achieving stable formations around a point of interest with different numbers of robots and diverse initial conditions can be challenging. In this article we propose a novel method for the self-organisation of autonomous robot swarms relying solely on their relative positions (angle and distance). This work focuses on an evolutionary optimisation approach to calculate the parameters of the swarm, e.g. inter-robot distance, to achieve a reliable formation under different initial conditions. Experiments are conducted using realistic simulations and four case studies. The results observed after testing the optimal configurations on 72 unseen scenarios per case study show the high robustness of our proposal, since the desired formation was always achieved. The ability to self-organise around a point of interest while maintaining a predefined fixed distance was also validated using real robots.
https://doi.org/10.1145/3512290.3528709
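The evolutionary parameter-tuning idea in the abstract above can be sketched in a few lines. This is an illustrative (1+1)-style loop, not the paper's method: the quadratic fitness function, target distance, and mutation scale are all invented stand-ins for the realistic swarm simulation.

```python
import random

# Toy (1+1) evolutionary loop tuning a single swarm parameter
# (inter-robot distance). The fitness function is a placeholder for
# a proper swarm simulation scoring formation stability.
def fitness(distance, target=1.5):
    return -(distance - target) ** 2         # peak at the (assumed) ideal distance

random.seed(42)
best = 0.5                                   # initial guess for inter-robot distance
for _ in range(200):
    candidate = best + random.gauss(0, 0.1)  # mutate the current best
    if fitness(candidate) >= fitness(best):  # keep the mutant if it is no worse
        best = candidate
```

A real evolutionary optimiser would use a population and evaluate each candidate across many initial conditions, as the paper's 72-scenario validation suggests; the accept-if-no-worse loop is only the smallest recognisable instance of the idea.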
Introduction to behavior based recommendation system (Kimikazu Kato)
Material presented at Tokyo Web Mining Meetup, March 26, 2016.
The source code is here:
https://github.com/hamukazu/tokyo.webmining.2016-03-26
These are the slides presented at Tokyo Web Mining (March 27, 2016). All slides are in English.
Window functions used in digital filter design serve to suppress oscillations in FIR (Finite Impulse Response) filters. In this work, the Particle Swarm Optimization (PSO) algorithm is applied to the design of the cosh window function, which has been widely used in the literature and has useful spectral parameters. The cosh window is derived from the Kaiser window and is more advantageous because its time-domain representation requires no power series expansion. The designed window function shows better ripple ratio characteristics than other window functions commonly used in the literature. The results are presented in tables and figures, and successful outcomes were obtained.
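As a hedged illustration of the window itself (not the PSO design procedure from the abstract), one common form of the cosh window found in the literature can be computed with elementary functions only; the length and alpha value below are arbitrary choices.

```python
import math

# One common definition of the cosh window. Only elementary functions are
# needed, unlike the Kaiser window, whose modified Bessel function I0 is
# usually evaluated via a power series expansion.
def cosh_window(N, alpha):
    w = []
    for n in range(N):
        x = 2.0 * n / (N - 1) - 1.0                # map index to [-1, 1]
        w.append(math.cosh(alpha * math.sqrt(1.0 - x * x)) / math.cosh(alpha))
    return w

win = cosh_window(9, alpha=2.5)   # symmetric, peaks at 1.0 in the centre
```

As with the Kaiser window, alpha trades main-lobe width against side-lobe level, which is why it is a natural parameter for PSO to tune.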
The Comprehensive Product Platform Planning (CP3) framework presents a flexible mathematical model of the platform planning process, which allows (i) the formation of sub-families of products, and (ii) the simultaneous identification and quantification of platform/scaling design variables. The CP3 model is founded on a generalized commonality matrix that represents the product platform plan, and yields a mixed binary-integer non-linear programming problem. In this paper, we develop a methodology to reduce the high-dimensional binary integer problem to a more tractable integer problem, where the commonality matrix is represented by a set of integer variables. Subsequently, we determine the feasible set of values for the integer variables in the case of families with 3–7 kinds of products. The cardinality of the feasible set is found to be orders of magnitude smaller than the total number of unique combinations of the commonality variables. In addition, we present the development of a generalized approach to Mixed-Discrete Non-Linear Optimization (MDNLO) that can be implemented through standard non-gradient-based optimization algorithms. This MDNLO technique is expected to provide a robust and computationally inexpensive optimization framework for the reduced CP3 model. The generalized approach to MDNLO uses continuous optimization as the primary search strategy; however, it evaluates the system model only at the feasible locations in the discrete variable space.
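The "evaluate only at feasible discrete locations" idea can be sketched as follows. This is a hypothetical toy, with the objective function and feasible sets invented for illustration; it is not the CP3 model itself.

```python
# Toy sketch of the MDNLO strategy: let a continuous optimizer propose
# iterates, but snap each one to the nearest feasible discrete point
# before evaluating the (expensive) system model.
def snap_to_feasible(x, feasible_values):
    return [min(vals, key=lambda v: abs(v - xi))
            for xi, vals in zip(x, feasible_values)]

def system_model(x):                  # stand-in for the real product-family model
    return sum((xi - 2.3) ** 2 for xi in x)

feasible = [[0, 1, 2, 3], [0, 2, 4]]  # per-variable discrete feasible sets
candidate = [2.4, 1.9]                # a continuous optimizer's current iterate
snapped = snap_to_feasible(candidate, feasible)
value = system_model(snapped)
```

Because the model is only ever evaluated at feasible discrete points, infeasible combinations never incur a model evaluation, which is the computational saving the abstract describes.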
ZK Study Club: Sumcheck Arguments and Their Applications (Alex Pruden)
Talk given at the ZK Study Club by Jonathan Bootle and Katerina Sotiraki about the universality of sumcheck arguments and their importance in zero-knowledge cryptography.
Fall 2016 Insurance Case Study – Finance 360: Loss Control (lmelaine)
Fall 2016 Insurance Case Study – Finance 360
Loss Control
Loss control activities of a business focus on finding and implementing solutions to reduce the probability of loss (loss prevention) and/or reduce the actual amount of loss (loss reduction), and therefore reduce the total cost of risk to maximize firm profitability.
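The arithmetic behind that distinction can be made concrete. In this hypothetical example (all figures invented), loss prevention lowers the probability of a loss while loss reduction lowers its severity; either lowers the expected loss and hence the total cost of risk.

```python
# Hypothetical figures only: expected loss = probability x severity.
def expected_loss(probability, severity):
    return probability * severity

baseline   = expected_loss(0.10, 1_000_000)  # 10% chance of a $1M loss
prevention = expected_loss(0.04, 1_000_000)  # loss prevention: lower probability
reduction  = expected_loss(0.10, 400_000)    # loss reduction: lower severity
```

In practice a firm weighs the cost of each loss control activity against the expected-loss reduction it buys, which is how loss control ties back to maximizing profitability.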
Loss control techniques have been widely used in environmental loss prevention, catastrophic loss prevention, and employee-related risk management. Many firms face loss exposures caused by using, storing, and transporting hazardous materials, caustic substances, gases, acids, etc., and may have unique issues posed by the deployment of "greener" vehicle fleets using CNG, LNG, and bio-fuel solutions. Catastrophic risks, such as earthquakes, tornadoes, hurricanes, or major fires, also pose significant threats to property safety and business continuity for firms. Employee behavior-related risks and product safety are also important concerns of corporate risk management.
A lack of effective loss control (such as inadequate systems, inadequate standards, and inadequate compliance with safety standards) may cause significant damage to a firm, such as injury costs, property damage, liability damage, bad press, lower sales, and loss of employee morale, as British Petroleum (BP) and Toyota have suffered in the past.
In this project, select an S&P 500 company and analyze its loss control policies, focusing on environmental loss prevention, catastrophic loss prevention, or employee-related risk management.
Your analysis should address, at a minimum, the following questions:
· How likely is the firm to be subject to catastrophic losses?
· Has the business suffered losses of this kind in the past?
· What losses could be caused to the firm if a catastrophic event occurs?
A. Direct Property Loss
B. Indirect (or consequential) Property Loss
C. Liability Loss
D. Personnel Loss
E. Crime
F. Other Loss Exposures
· What loss control activities has the firm implemented to reduce the loss?
· E.g., for Property loss control, comment on Facility design and construction, Automatic Sprinkler Protection, Preventative maintenance, Equipment and Process controls and safeguards, Human Element programs, Pre-incident planning, and Business continuity planning
· Proactive Safety procedures vs. Reactive Safety & Recovery policies
Requirements
1. Paper length: 8 page minimum, 12 page maximum; 12 point font—double-spaced
2. Paper sections
A. Title Page, including: (1) paper title, (2) course number and name, (3) instructor, (4) your name, and (5) date submitted
B. Executive Summary: This is a 1-2 paragraph overall summary of your paper.
C. Discussion and analysis: Cover all the individual topic areas set out above, each of which should be labeled with an appropriate subject heading.
D. Works Cited: List all secondary sources consulted in preparing this paper.
E. Attachments (if any). You may append any relevant attachment to ...
Development of a family of products that satisfies different sectors of the market introduces significant challenges to today's manufacturing industries, from development time to aftermarket services. A product family with a common platform paradigm offers a powerful solution to these daunting challenges. The Comprehensive Product Platform Planning (CP3) framework formulates a flexible product family model that (i) seeks to eliminate traditional boundaries between modular and scalable families, (ii) allows the formation of sub-families of products, and (iii) yields the optimal depth and number of platforms. In this paper, the CP3 framework introduces a solution strategy that obviates common assumptions, namely that (i) the identification of platform/non-platform design variables and the determination of variable values are separate processes, and (ii) the cost reduction of creating product platforms is independent of the total number of each product manufactured. A new Cost Decay Function (CDF) is developed to approximate the reduction in cost with increasing commonalities among products, for a specified capacity of production. The Mixed Integer Non-Linear Programming (MINLP) problem presented by the CP3 model is solved using a novel Platform Segregating Mapping Function (PSMF). The proposed CP3 framework is implemented on a family of universal electric motors.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an early stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Observability Concepts EVERY Developer Should Know - DeveloperWeek Europe (Paige Cruz)
Monitoring and observability aren't traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company's observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis at the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps means. We concluded with a lovely workshop in which participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori (Peter Spielvogel)
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Enhancing Performance with Globus and the Science DMZ (Globus)
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Welcome to the first live UiPath Community Day Dubai! Join us for this unique occasion to meet our local and global UiPath Community and leaders. You will get a full view of the MEA region's automation landscape and the AI Powered automation technology capabilities of UiPath. Also, hosted by our local partners Marc Ellis, you will enjoy a half-day packed with industry insights and automation peers networking.
📕 Curious on our agenda? Wait no more!
10:00 Welcome note - UiPath Community in Dubai
Lovely Sinha, UiPath Community Chapter Leader, UiPath MVPx3, Hyper-automation Consultant, First Abu Dhabi Bank
10:20 A UiPath cross-region MEA overview
Ashraf El Zarka, VP and Managing Director MEA, UiPath
10:35: Customer Success Journey
Deepthi Deepak, Head of Intelligent Automation CoE, First Abu Dhabi Bank
11:15 The UiPath approach to GenAI with our three principles: improve accuracy, supercharge productivity, and automate more
Boris Krumrey, Global VP, Automation Innovation, UiPath
12:15 To discover how Marc Ellis leverages tech-driven solutions in recruitment and managed services.
Brendan Lingam, Director of Sales and Business Development, Marc Ellis
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
2. About Prognoz
Leading Russian developer of Business Intelligence and Performance Management systems
• International company working in the IT market since 1991
• Joint team of over 1,200 skilled economists, programmers, and analysts
• 50% of the BI market in Russia
• Prognoz Platform, the first Russian platform in Gartner's Magic Quadrant
3. CONTENTS
Technical architecture:
• About MMP cluster
• MMP cluster architecture
• Financial bubbles
• Historical bubbles
• Definition of financial bubbles
• Theory of crashes
• LPPL model
• Fitting of the model
• Models selection
Practical approach:
• Evolution of bubble and risk management
• Monitoring of financial bubbles
• The system of bubble recognition
• Science and experiment
• Financial bubble experiment
• Market microstructure approach
5. Technical info:
• Installation site: Perm State University
• Supercomputer type: cluster
• Number of nodes: 3
• Number of cores per node: 12
• CPU type: Intel Xeon 5650 (2.66 GHz)
• RAM per node: 64 GB
• OS: Windows Server 2003
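The per-node figures above determine the aggregate capacity of the cluster. A quick sketch (Python, illustrative arithmetic only):

```python
# Aggregate capacity of the cluster from the per-node figures above.
nodes = 3
cores_per_node = 12
ram_per_node_gb = 64

total_cores = nodes * cores_per_node    # 3 nodes x 12 cores = 36 cores
total_ram_gb = nodes * ram_per_node_gb  # 3 nodes x 64 GB = 192 GB

print(f"{total_cores} cores, {total_ram_gb} GB RAM")  # → 36 cores, 192 GB RAM
```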
7. R is a statistical and graphical programming environment
• Appeared in 1993; designed by Ross Ihaka and Robert Gentleman
• R is a GNU project, a free implementation of the S language
• It runs on a variety of platforms, including Windows, Unix, and MacOS
• It contains advanced statistical routines not yet available in other packages
• There are more than 4,300 packages providing specialized statistical techniques, graphical devices, import/export capabilities, reporting tools, etc.
8. Commands
[Diagram: a Task is handed to the Runner, which executes batch files; each batch file launches an R file, and the R scripts read from and write to the database]
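The command chain on this slide (Runner executes batch files, each launching an R file) can be sketched as a minimal dispatcher. This is a Python sketch only; the talk's actual runner is not shown, and the `Rscript` executable and script names here are assumptions:

```python
import subprocess

def build_commands(task_scripts):
    """One Rscript invocation per task, mirroring the slide's
    'Runner -> batch file -> R file' chain.  (Rscript path assumed.)"""
    return [["Rscript", script] for script in task_scripts]

def run_tasks(task_scripts, dry_run=True):
    """Dispatch all tasks; dry_run lets us inspect the commands
    without requiring R to be installed."""
    commands = build_commands(task_scripts)
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)  # each step launches an R file
    return commands

print(run_tasks(["task1.R", "task2.R"]))
```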
13. Definitions of a financial bubble cited from:
• Mr. Greenspan
• TheFreeDictionary.com
• Charles Kindleberger, MIT
• Professor J. Barkley Rosser, James Madison University
14. Authors
A. Johansen, O. Ledoit, D. Sornette (JLS)
First publication: "Large financial crashes" (1997)
Famous book: Didier Sornette, "Why Stock Markets Crash" (2004)
t_c: the critical time when the bubble crashes or changes to another regime
21. For each log-periodic curve we fixed:
• t_0: start time of the bubble
• t_c: critical time when the bubble crashes or changes to another regime
First model and second model are fitted over a sample of candidate t_c values (t_c1, t_c2)
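The log-periodic curves being fitted follow the published JLS log-periodic power-law (LPPL) form, ln p(t) = A + B(t_c - t)^m + C(t_c - t)^m cos(ω ln(t_c - t) - φ). A minimal sketch in Python; the parameter values below are illustrative, not taken from the talk:

```python
import math

def lppl_log_price(t, tc, A, B, C, m, omega, phi):
    """JLS log-periodic power law: expected log-price before the
    critical time tc.  Valid only for t < tc."""
    dt = tc - t
    return A + B * dt**m + C * dt**m * math.cos(omega * math.log(dt) - phi)

# Illustrative parameters: a bubble accelerating toward tc = 100 (trading days)
params = dict(tc=100.0, A=4.0, B=-0.5, C=0.05, m=0.5, omega=7.0, phi=0.0)
for t in (10.0, 50.0, 90.0):
    print(t, lppl_log_price(t, **params))
```

Fitting then proceeds as on the slide: fix t_0, scan candidate values of t_c, and estimate the remaining parameters for each candidate.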
27. Detected events are classified along three dimensions, plus the fitted LPPL parameters:
• Type: bubble / anti-bubble
• Timeframe: long / short
• Size: large / small
• LPPL parameters
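The taxonomy above (type, timeframe, size, plus LPPL parameters) maps naturally onto a small record type. A Python sketch; the field names and schema are illustrative, since the talk gives no data model:

```python
from dataclasses import dataclass, field

@dataclass
class ShockLabel:
    """Classification of a detected shock, following the slide's taxonomy.
    Field names are assumptions, not taken from the talk."""
    kind: str        # "bubble" or "anti-bubble"
    timeframe: str   # "long" or "short"
    size: str        # "large" or "small"
    lppl_params: dict = field(default_factory=dict)  # fitted LPPL parameters, if any

shock = ShockLabel(kind="bubble", timeframe="short", size="large",
                   lppl_params={"tc": 100.0, "m": 0.5, "omega": 7.0})
print(shock.kind, shock.timeframe, shock.size)
```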
28. The Financial Crisis Observatory (FCO) is a scientific platform aimed at rigorously testing and quantifying, in a systematic way and on a large scale, the hypothesis that financial markets exhibit a degree of inefficiency and a potential for predictability, especially during regimes when bubbles develop. (http://www.er.ethz.ch/fco/index)
Testing two hypotheses:
• Hypothesis H1: financial (and other) bubbles can be diagnosed in real time before they end.
• Hypothesis H2: the termination of financial (and other) bubbles can be bracketed using probabilistic forecasts, with a reliability better than chance (which remains to be quantified).
D. Sornette, R. Woodard, M. Fedorovsky, S. Reimann, H. Woodard, W.-X. Zhou
"The Financial Bubble Experiment: First Results" (2 November 2009 - 1 May 2010)
29. 2 November 2009 - 1 May 2010 [http://www.er.ethz.ch/fco/FBE_report_May_2010]
• 2 of 4 bubbles detected by the model were real bubbles
• All of them changed their regimes
12 May 2010 - 1 November 2010 [http://www.er.ethz.ch/fco/fbe_Report_1Nov10_2]
• 5 of 7 bubbles detected by the model were real bubbles
• 4 of 5 changed their regimes
12 November 2010 - 2 May 2011 [http://www.er.ethz.ch/fco/fbe_20110502_assets_3.pdf]
• 24 of 27 bubbles detected by the model were real bubbles
• 17 of 24 changed their regimes
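Aggregating the three reporting periods above gives an overall hit rate for the experiment. A back-of-envelope check in Python (the aggregate percentage is my arithmetic, not a figure quoted in the talk):

```python
# Detected-vs-confirmed bubble counts from the three FBE reports above
periods = [
    ("Nov 2009 - May 2010", 4, 2),    # (label, detected, confirmed real)
    ("May 2010 - Nov 2010", 7, 5),
    ("Nov 2010 - May 2011", 27, 24),
]
detected = sum(d for _, d, _ in periods)
confirmed = sum(c for _, _, c in periods)
print(f"{confirmed}/{detected} detected bubbles confirmed "
      f"({confirmed / detected:.0%})")  # → 31/38 detected bubbles confirmed (82%)
```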
33. Statistics
Data:
• Stocks analyzed: 29 blue chips
• Periods: 01.04.2010-30.06.2010 and 01.09.2010-12.10.2010
• Trading days: 82
• Sample analyzed: 20.2 mln ticks
• Trading time: 11:30-18:40
• Shocks found: 369
• We use the tick dynamics of prices for filtering (source: MICEX)
• Total: 369 events (13 per stock)
• On average, 1 shock per 7 trading days per stock
Shock counts per stock (IDENT | UP | DOWN | ALL):
PMTL 15 36 51
MAGN 31 6 37
NOTK 18 18 36
OGKC 13 23 36
AFLT 9 25 34
RTKM 14 19 33
MGNT 4 16 20
NLMK 8 12 20
URKA 7 11 18
SIBN 6 10 16
RASP 7 8 15
MRKH 3 9 12
MSNG 5 7 12
CHMF 3 4 7
RU14TATN3006 3 3 6
HYDR 3 2 5
TRNFP 3 0 3
IUES 0 2 2
MTSI 1 1 2
SNGSP 2 0 2
ROSN 1 0 1
SNGS 1 0 1
FEES 0 0 0
GAZP 0 0 0
GMKN 0 0 0
LKOH 0 0 0
SBER03 0 0 0
SBERP03 0 0 0
VTBR 0 0 0
Average 5 7 13
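The headline numbers on this slide can be recomputed from the per-stock counts. A quick consistency check in Python (counts transcribed from the table above):

```python
# (up, down) shock counts per stock from the slide's table
counts = {
    "PMTL": (15, 36), "MAGN": (31, 6), "NOTK": (18, 18), "OGKC": (13, 23),
    "AFLT": (9, 25), "RTKM": (14, 19), "MGNT": (4, 16), "NLMK": (8, 12),
    "URKA": (7, 11), "SIBN": (6, 10), "RASP": (7, 8), "MRKH": (3, 9),
    "MSNG": (5, 7), "CHMF": (3, 4), "RU14TATN3006": (3, 3), "HYDR": (3, 2),
    "TRNFP": (3, 0), "IUES": (0, 2), "MTSI": (1, 1), "SNGSP": (2, 0),
    "ROSN": (1, 0), "SNGS": (1, 0), "FEES": (0, 0), "GAZP": (0, 0),
    "GMKN": (0, 0), "LKOH": (0, 0), "SBER03": (0, 0), "SBERP03": (0, 0),
    "VTBR": (0, 0),
}
total = sum(u + d for u, d in counts.values())
print(total, round(total / len(counts)))  # → 369 13  (369 shocks, ~13 per stock)
print(82 / (total / len(counts)))         # ~6.4 trading days between shocks per stock
```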
34. Science
• Laboratory of financial modeling and risk management: Prognoz Risk Lab
• Master's program in Finance & IT at Perm State National Research University (mifit.ru)
• Perm Winter School, an annual conference on modeling of financial markets and risk management (permwinterschool.ru)