Operation-based formal verification (OFV) uses operation properties to formally verify VHDL designs at the register-transfer level. It constructs an abstract VHDL model from the code by identifying the start and end states of operations and the properties connecting them. Property checking and completeness tests then prove the equivalence of the VHDL code and the abstract model, uncovering any errors along the way. OFV supports the full verification process, from pre-proof through proof to theory formation, and has been applied successfully to an industrial processor with over 100,000 lines of code. Its adoption, however, is limited by design practices that focus on integration rather than module construction, and by verification methods that rely on simulation rather than formal analysis.
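An operation property of this kind relates a start state, the inputs applied during the operation, and the resulting end state some fixed number of cycles later. The following Python sketch illustrates the idea on a toy design; the 2-bit counter model, the property, and all function names are invented for this example — real OFV proves such properties on the RTL with a property checker rather than by running a model.

```python
# Toy illustration of an "operation property": starting from a given
# start state and applying the operation's input sequence, the design
# must reach the expected end state. The counter model and property
# below are invented for this sketch.

def counter_step(state, enable):
    """Cycle-accurate model of a 2-bit up-counter with enable."""
    return (state + 1) % 4 if enable else state

def check_operation(step, start, inputs, expected_end):
    """Apply the operation's input sequence and compare the end state."""
    state = start
    for i in inputs:
        state = step(state, i)
    return state == expected_end

# Property: from any start state, two enabled cycles advance the
# counter by two (mod 4). Checked exhaustively over all start states.
ok = all(check_operation(counter_step, s, [True, True], (s + 2) % 4)
         for s in range(4))
print(ok)  # True: the property holds in every start state
```

Exhaustive enumeration of start states is feasible here only because the toy state space is tiny; on real RTL this role is played by the property checker's symbolic proof.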
Ast2Cfg - A Framework for CFG-Based Analysis and Visualisation of Ada ProgramsGneuromante canalada.org
Georg Kienesberger - Vienna University of Technology
FOSDEM’09
Free and Open Source Software Developers’ European Meeting
7-8 February 2009 - Brussels, Belgium
These slides are licensed under a Creative Commons Attribution-Share Alike 3.0 Austria License. http://creativecommons.org
Accelerated Mac OS X Core Dump Analysis training public slidesDmitry Vostokov
The slides from Software Diagnostics Services Mac OS X core dump analysis training. The training description: "Learn how to analyse app crashes and freezes, navigate through process core memory dump space and diagnose corruption, memory leaks, CPU spikes, blocked threads, deadlocks, wait chains, and much more. We use a unique and innovative pattern-driven analysis approach to speed up the learning curve. The training consists of practical step-by-step exercises using GDB and LLDB debuggers highlighting more than 30 memory analysis patterns diagnosed in 64-bit process core memory dumps. The training also includes source code of modelling applications written in Xcode environment, a catalogue of relevant patterns from Software Diagnostics Institute, and an overview of relevant similarities and differences between Windows and Mac OS X user space memory dump analysis useful for engineers with Wintel background. Audience: software technical support and escalation engineers, system administrators, software developers, security professionals and quality assurance engineers."
Ast2Cfg - A Framework for CFG-Based Analysis and Visualisation of Ada ProgramsGneuromante canalada.org
Georg Kienesberger - Vienna University of Technology
FOSDEM’09
Free and Open Source Software Developers’ European Meeting
7-8 February 2009 - Brussels, Belgium
These slides are licensed under a Creative Commons Attribution-Share Alike 3.0 Austria License. http://creativecommons.org
Accelerated Mac OS X Core Dump Analysis training public slidesDmitry Vostokov
The slides from Software Diagnostics Services Mac OS X core dump analysis training. The training description: "Learn how to analyse app crashes and freezes, navigate through process core memory dump space and diagnose corruption, memory leaks, CPU spikes, blocked threads, deadlocks, wait chains, and much more. We use a unique and innovative pattern-driven analysis approach to speed up the learning curve. The training consists of practical step-by-step exercises using GDB and LLDB debuggers highlighting more than 30 memory analysis patterns diagnosed in 64-bit process core memory dumps. The training also includes source code of modelling applications written in Xcode environment, a catalogue of relevant patterns from Software Diagnostics Institute, and an overview of relevant similarities and differences between Windows and Mac OS X user space memory dump analysis useful for engineers with Wintel background. Audience: software technical support and escalation engineers, system administrators, software developers, security professionals and quality assurance engineers."
OLD VERSION - Understanding the V8 Runtime to Maximize Application PerformanceDaniel Fields
This is an outdated version. Please see: https://www.slideshare.net/DanielFields9/understanding-the-v8-runtime-to-maximize-application-performance-97759045
Deep dive into the V8 JavaScript engine, and learn how the runtime treats your code, and how small syntactical changes can have dramatic impacts on performance. Topic cover hidden classes, inline caching, and the compiler optimizer.
Introduction to developing or migrating models to be compliant to the OpenMI Standard. OpenMI is an open standard which allows dynamic linking of numerical models, such as river models rainfall-runoff models and so on. See also:
http://www.lictek.com
Introduction to developing or migrating models to be compliant to the OpenMI Standard. OpenMI is an open standard which allows dynamic linking of numerical models, such as river models rainfall-runoff models and so on. See also:
http://www.lictek.com
Presentation on native interfaces for the R programming language given as part of a course in advanced R programming at FHCRC:
https://secure.bioconductor.org/SeattleMay10/
Apache Spark - Basics of RDD | Big Data Hadoop Spark Tutorial | CloudxLabCloudxLab
Big Data with Hadoop & Spark Training: http://bit.ly/2L4rPmM
This CloudxLab Basics of RDD tutorial helps you to understand Basics of RDD in detail. Below are the topics covered in this tutorial:
1) What is RDD - Resilient Distributed Datasets
2) Creating RDD in Scala
3) RDD Operations - Transformations & Actions
4) RDD Transformations - map() & filter()
5) RDD Actions - take() & saveAsTextFile()
6) Lazy Evaluation & Instant Evaluation
7) Lineage Graph
8) flatMap and Union
9) Scala Transformations - Union
10) Scala Actions - saveAsTextFile(), collect(), take() and count()
11) More Actions - reduce()
12) Can We Use reduce() for Computing Average?
13) Solving Problems with Spark
14) Compute Average and Standard Deviation with Spark
15) Pick Random Samples From a Dataset using Spark
OLD VERSION - Understanding the V8 Runtime to Maximize Application PerformanceDaniel Fields
This is an outdated version. Please see: https://www.slideshare.net/DanielFields9/understanding-the-v8-runtime-to-maximize-application-performance-97759045
Deep dive into the V8 JavaScript engine, and learn how the runtime treats your code, and how small syntactical changes can have dramatic impacts on performance. Topic cover hidden classes, inline caching, and the compiler optimizer.
Introduction to developing or migrating models to be compliant to the OpenMI Standard. OpenMI is an open standard which allows dynamic linking of numerical models, such as river models rainfall-runoff models and so on. See also:
http://www.lictek.com
Introduction to developing or migrating models to be compliant to the OpenMI Standard. OpenMI is an open standard which allows dynamic linking of numerical models, such as river models rainfall-runoff models and so on. See also:
http://www.lictek.com
Presentation on native interfaces for the R programming language given as part of a course in advanced R programming at FHCRC:
https://secure.bioconductor.org/SeattleMay10/
Apache Spark - Basics of RDD | Big Data Hadoop Spark Tutorial | CloudxLabCloudxLab
Big Data with Hadoop & Spark Training: http://bit.ly/2L4rPmM
This CloudxLab Basics of RDD tutorial helps you to understand Basics of RDD in detail. Below are the topics covered in this tutorial:
1) What is RDD - Resilient Distributed Datasets
2) Creating RDD in Scala
3) RDD Operations - Transformations & Actions
4) RDD Transformations - map() & filter()
5) RDD Actions - take() & saveAsTextFile()
6) Lazy Evaluation & Instant Evaluation
7) Lineage Graph
8) flatMap and Union
9) Scala Transformations - Union
10) Scala Actions - saveAsTextFile(), collect(), take() and count()
11) More Actions - reduce()
12) Can We Use reduce() for Computing Average?
13) Solving Problems with Spark
14) Compute Average and Standard Deviation with Spark
15) Pick Random Samples From a Dataset using Spark
Apache Spark - Basics of RDD & RDD Operations | Big Data Hadoop Spark Tutoria...CloudxLab
Big Data with Hadoop & Spark Training: http://bit.ly/2JgbT3E
This CloudxLab Basics of RDD & RDD Operations tutorial helps you to understand basics of RDD and RDD Operations in detail. Below are the topics covered in this tutorial:
1) Pick Random Samples From a Dataset using Spark
2) Spark Transformations - mapPartitions() & sortBy()
3) Spark Pseudo set operations - distinct(), union(), subtract(), intersection() & cartesian()
4) Spark Actions - fold(), aggregate(), countByValue(), top(), takeOrdered(), foreach() & foreachPartition()
Many experts believe that ageing can be delayed, this is one of the main goals of the the Institute of Healthy Ageing at University College London. I will present the results of my lifespan-extension research where we integrated publicly available genes databases in order to identify ageing related genes. I will show what challenges we met and what we have learned about the process of ageing.
Ageing is one of the fundamental mysteries in biology and many scientists are starting to study this fascinating process. I am part of the research group led by Dr Eugene Schuster at UCL Institute of Healthy Ageing. We experiment with Drosophila and Caenorhabditis elegans by modifying their genes in order to create long-lived mutants. The results of our experiments are quantified using high-throughput microarray analysis. Finally we apply information technology in order to understand how the ageing process works. I will show how we mine microarrays data in order to find the connections between thousands of genes and how we identify candidates for ageing genes.
We are interested in building a better understanding of genes functions by harnessing the large quantity of experimental microarray data in the public databases. Our hope is that after understanding the ageing process in simpler organisms we will be able to apply this knowledge in humans.
Cross-referencing expressions levels in thousands of genes and hundreds of experiments turned out to be a computationally challenging problem but Hadoop and Amazon cloud came to our rescue. In this talk I will present a case study based on our use of R with Amazon Elastic MapReduce and will give background on our bioinformatics challenges.
These slides were presented at ApacheCon Europe 2012:
http://www.apachecon.eu/schedule/presentation/3/
In den letzten Jahren hat sich die Software RRDtool zu einem unverzichtbaren Werkzeug in den Bereichen Monitoring und System Management entwickelt.
In dieser Präsentation erhalten Sie eine kurze Einführung in die Funktionsweise von RRDtool. Vor allem wird auf die neuen Funktionen in RRDtool 1.5 eingegangen und ein Ausblick auf die weitere Entwicklung der Software gegeben.
A recent direction in Business Process Management studied methodologies to control the execution of Business Processes under several sources of uncertainty in order to always get to the end by satisfying all constraints. Current approaches encode business processes into temporal constraint networks or timed game automata in order to exploit their related strategy synthesis algorithms. However, the proposed encodings can only synthesize single-strategies and fail to handle loops. To overcome these limits I will discuss a recent approach based on supervisory control. The approach considers structured business processes with resources, parallel and mutually exclusive branches, loops, and uncertainty. I will discuss an encoding into finite state automata and prove that their concurrent behavior models exactly all possible executions of the process. After that, I will introduce tentative commitment constraints as a new class of constraints restricting the executions of a process. Finally, I will discuss a tree decomposition of the process that plays a central role in modular supervisory control.
In his ignite talk „The Digital Transformation of Education: A Hyper-Disruptive Era through Blockchain and Generative AI,“ Dr. Alexander Pfeiffer delves into the intricate challenges and potential benefits associated with integrating blockchain technologies and generative AI into the educational landscape. He scrutinizes consensus algorithms and explores sustainable methods of operating blockchain systems, while also examining how smart contracts and transactions can be tailored to meet the specific needs of the educational sector. Alexander underscores the importance of establishing secure digital identities and ensuring robust data protection, while simultaneously casting a critical eye on potential risks and vulnerabilities. The topic of digital identities, facilitated through tokenization, forms a bridge between storing data using blockchain-based databases and the increasingly urgent need for content verification of AI-generated material.
Alexander explores the profound alterations occurring in teaching methodologies, assignment creation, and evaluation processes, shedding light on the hyper-disruptive impact these changes are having on both research and practical applications in education. The production of textual content by educators and students is analyzed with a focus on ensuring clear traceability of content sources and editors, and its proper citation, a critical aspect in the responsible use of AI. In addition to generative text and graphics, AI plays a crucial role in future learning and assignment practices, particularly through adaptive game-based learning and assessment. Alexander will provide a brief glimpse into his game „Gallery-Defender,“ a prototype demonstrating how AI and blockchain can be effectively implemented in serious gaming scenarios.
Furthermore, he emphasizes the imperative for ongoing education and professional development for educational personnel, advocating for a proactive stance in addressing the (legal) challenges associated with AI-generated images and text. This ignite talk aims to provide a balanced and critically reflective perspective on hyper-disruptive technologies, setting the stage for further discourse and exploration in the subsequent discussion.
The simulation of melee combat is central to many contemporary and traditional strategic games and simulations. In order to elevate this element of play from mere exercises of stats-comparison and dice rolling to a meaningful experience of play, strategy games rely on a rich plethora of cultural motives as deciding factors of their mechanic design. On the example of Samurai-themed skirmishing games, my talk elaborates on the impact that (popular) culture and other inspirations have on gaming experiences. It provides concrete examples from Japanese history, its traditional cinema, and postmodern Western reflections of Japanese cultural practices. Based on these insights, it compares four tabletop strategy games, muses on which phenomena they have adapted in their mechanics, and asks why or why not they may succeed in capturing a cultural essence via their rules.
Ultimately, this comparative approach shall serve to decipher the interplay of dice mechanics and aesthetic properties as the longing for a dramatic ideal in tabletop gaming and encourage participants to reflect on the idea in a subsequent, shared gaming experience.
How does a development team expand on an already existing game?
We will look at the two community driven and committee led expansions to the abandoned Tabletop game 'GuildBall' and explore the stages of development that the game went through. The art and lore driven approach employed will show us how rough sketches and concept ideas become a fully fledged ruleset and ultimately miniatures that can be put on the table. We will also explore pitfalls in rules design like over complicating abilities, the lack of streamlining across the game or simply creating expansions who break the game instead of the mold.
Exploring the development and production pipelines for miniatures in the tabletop wargaming industry. Including a look at the career route taken by the speaker, a case study on developing anatomical archetypes for consistent design outcomes, and a brief look at the various production methods available to the industry.
In recent years, we have experienced an exponential growth in the amount of data generated by IoT devices. Data have to be processed strict low latency constraints, that cannot be addressed by conventional computing paradigm and architectures. On top of this, if we consider that we recently hit the limit codified by the Moore’s law, satisfying low-latency requirements of modern applications will become even more challenging in the future. In this talk, we discuss challenges and possibilities of heterogeneous distributed systems in the Post-Moore era.
In the modern world, we are permanently using, leveraging, interacting with, and relying upon systems of ever higher sophistication, ranging from our cars, recommender systems in eCommerce, and networks when we go online, to integrated circuits when using our PCs and smartphones, security-critical software when accessing our bank accounts, and spreadsheets for financial planning and decision making. The complexity of these systems coupled with our high dependency on them implies both a non-negligible likelihood of system failures, and a high potential that such failures have significant negative effects on our everyday life. For that reason, it is a vital requirement to keep the harm of emerging failures to a minimum, which means minimizing the system downtime as well as the cost of system repair. This is where model-based diagnosis comes into play.
Model-based diagnosis is a principled, domain-independent approach that can be generally applied to troubleshoot systems of a wide variety of types, including all the ones mentioned above. It exploits and orchestrates techniques for knowledge representation, automated reasoning, heuristic problem solving, intelligent search, learning, stochastics, statistics, decision making under uncertainty, as well as combinatorics and set theory to detect, localize, and fix faults in abnormally behaving systems.
In this talk, we will give an introduction to the topic of model-based diagnosis, point out the major challenges in the field, and discuss a selection of approaches from our research addressing these challenges. For instance, we will present methods for the optimization of the time and memory performance of diagnosis systems, show efficient techniques for a semi-automatic debugging by interacting with a user or expert, and demonstrate how our algorithms can be effectively leveraged in important application domains such as scheduling or the Semantic Web.
Function-as-a-Service (FaaS) is the latest paradigm of cloud computing in which developers deploy their codes as serverless functions, while the entire underlying platform and infrastructure is completely managed by cloud providers. Each cloud provider offers a huge set of cloud services and many libraries to simplify development and deployment, but only inside their clouds, often in a single cloud region. With such „help“ of cloud providers, users are locked to use resources and services of the selected cloud provider, which are often limited. Moreover, such heterogeneous and distributed environment of multiple cloud regions and providers challenge scientists to engineer cloud applications, often in a form of serverless workflows. In this talk, I will present our design principle „code once, run everywhere, with everything“. In particular, I will present challenges and our approaches and techniques how to program, model, orchestrate, and run distributed serverless workflow applications in federated FaaS.
As the network softwarization trend started by SDN and NFV keeps evolving, the hardware/software continuum becomes more relevant than ever, offering new offloading/acceleration opportunities at node and network-wide scales. This talk will review evolving transformations behind network softwarization with a special focus on network refactoring and offloading trends leading to “fluid networks planes”, characterized by multiple candidate options for the specific HW/SW embodiment and the location of chained network functions, from the edge to core, from one administrative provider to another, from programmable silicon to portable lightweight virtualized containers. The talk will overview concrete examples from the literature with a special focus on the role of Machine Learning to assist key (automated) decision-making steps. Lastly, the talk will conclude with a glimpse on ongoing ML work applied to Youtube video QoE prediction in live 5G networks.
The dynamics of networks enables the function of a variety of systems we rely on every day, from gene regulation and metabolism in the cell to the distribution of electric power and communication of information. Understanding, steering and predicting the function of interacting nonlinear dynamical systems, in particular if they are externally driven out of equilibrium, relies on obtaining and evaluating suitable models, posing at least two major challenges. First, how can we extract key structural system features of networks if only time series data provide information about the dynamics of (some) units? Second, how can we characterize nonlinear responses of nonlinear multi-dimensional systems externally driven by fluctuations, and consequently, predict tipping points at which normal operational states may be lost? Here we report recent progress on nonlinear response theory extended to predict tipping points and on model-free inference of network structural features from observed dynamics.
When it comes to integrating digital technologies into the classroom in higher education, many teachers face similar challenges. Nevertheless, it is difficult for teachers to share experiences because it is usually not possible to transfer successful teaching scenarios directly from one area to another, as subject-specific characteristics make it difficult to reuse them. To address this problem, instructional scenarios can be described as patterns that have been used previously in educational contexts. Patterns can capture proven teaching strategies and describe instructional scenarios in a consistent structure that can be reused. Because priorities for content, methods, and tools are different in each domain, a consensus-tested taxonomy was first developed with the goal of modeling a domain-independent database to collect digital instructional practices. In addition, this presentation will present preliminary insights into a data-driven approach to identifying effective instructional practices from interdisciplinary data as patterns. A web-based application will be developed for this that can both collect teaching/learning scenarios and individually extract scenarios from patterns for a learning platform.
The advent of fog and edge computing has prompted predictions that they will take over the traditional cloud for information processing and knowledge extraction in Internet of Things (IoT) systems. Notwithstanding the fact that fog and edge computing have undoubtedly large potential, these predictions are probably oversimplified and wrongly portray the relations between cloud, fog and edge computing.
Concretely, fog and edge computing have been introduced as an extension of the cloud services towards the data sources, thus forming the computing continuum. The computing continuum enables the creation of a new type of services, spanning across distributed infrastructures, supporting various IoT applications. These applications have a large spectrum of requirements, burdensome to meet with "distant'' cloud data centers. However, the introduction of the computing continuum raises multiple challenges for management, deployment and orchestration of complex distributed applications, such as: increased network heterogeneity, limited resource capacity of edge devices, fragmented storage management, high mobility of edge devices and limited support of native monolithic applications. These challenges primarily concern the complexity and the large diversity of the devices, managed by different entities (cloud providers, universities, private institutions), which range from single-board computers such as Raspberry Pis to powerful multi-processor servers.
Therefore, in this talk, we will discuss novel algorithms for low latency, scalable, and sustainable computing over heterogeneous resources for information processing and reasoning, thus enabling transparent integration of IoT applications. We will tackle the heterogeneity challenge of dynamically changing topologies of the computing infrastructure and present a novel concept for sustainable processing at scale.
East-west oriented photovoltaic power system is a new trend in orienting photovoltaic system. This lecture presents an evaluation of east–west oriented photovoltaic power system. A comparison between east–west oriented photovoltaic system and south oriented photovoltaic system in terms of cost of energy and technical requirement is conducted is presented in this lecture. In addition to that, the benefits of using east–west oriented photovoltaic system are discussed in this paper.
Randomized Signature or random feature selection are two instances of machine learning, where randomly chosen structures appear to be highly expressive. We analyze several aspects of the theory behind it, show that these structures have several theoretically attractive properties and introduce two classes of examples from finance (joint works with Christa Cuchiero, Lukas Gonon, Lyudmila Grigoryeva, Martin Larsson, and Juan-Pablo Ortega).
We live in a “digital” world, the separation between physical and virtual makes (almost) no sense anymore. Here, the Corona pandemic has also acted as an accelerator/magnifier demonstrating that the future of our digital society is here with all its possibilities, but also shortcomings.
In his talk, Hannes Werthner will briefly reflect on the history of computer science, and then discuss the need for an interdisciplinary response to these shortcomings. Such an answer is the Digital Humanism, which looks at this interplay of technology and humankind, it analyzes, and, most importantly, tries to influence the complex interplay of technology and humankind, for a better society and life. In the second part he will discuss this approach, and show what was achieved since its first workshop in 2019, and what lies ahead.
In the latest years, we have witnessed a growing number of media transmitted and stored on computers and mobile devices. For this reason, there is an actual need to employ smart compression algorithms to reduce the size of our media files. However, such techniques are often responsible for severe reduction of user perceived quality. In this talk we present several approaches we have developed to restore degraded images and videos to match their original quality, making use of Generative Adversarial Networks. The aim of the talk is to highlight the main features of our research work, including the advantages of our solution, the current challenges and the possible directions for future improvements.
Recommendation systems today are widely used across many applications such as in multimedia content platforms, social networks, and ecommerce, to provide suggestions to users that are most likely to fulfill their needs, thereby improving the user experience. Academic research, to date, largely focuses on the performance of recommendation models in terms of ranking quality or accuracy measures, which often don’t directly translate into improvements in the real-world. In this talk, we present some of the most interesting challenges that we face in the personalization efforts at Netflix. The goal of this talk is to sunshine challenging research problems in industrial recommendation systems and start a conversation about exciting areas of future research.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
UiPath Test Automation using UiPath Test Suite series, part 3 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Theory, Practice and Perspectives of Operation-Based Formal Circuit Verification
1. Theory, Practice and Perspectives of
Operation-Based Formal Circuit Verification
Wolfram Büttner
wolfram-buettner@aon.at
December 2012
2. Principles of Mathematical Work
Overall objective
- Construct mathematical object
- Document understanding of object in terms of theorems
Process of gaining understanding
- Pre-proof: Set up hypothesis, constraints, assertions
- Proof: Prove hypothesis or adjust hypothesis, constraints, assertions until proof succeeds
- Theory formation: Develop hierarchy of theorems to achieve good understanding of object
Formal verification
- Analyze mathematical models capturing key functionality of technical systems – the most important models are FSMs describing discrete control
- Emphasis is on finding errors – proof as termination criterion for successful verification
- Automated proof is essential for acceptance in Engineering
- Automated proof is necessary, but is it sufficient for a good verification solution?
December 2012
Page 2
3. Model Checking: Automated Debugging/Proof
Temporal Logic as Property Description Language for FSMs
AGp - p holds for all states of all traces
EGp - p holds for all states of some trace
AFp - p holds for some state in every trace
EFp - p holds for some state in some trace
More complex properties, e.g. AG(p → AFq), AGAFp, AGEFp
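The operator semantics above can be made concrete as fixed-point computations over an explicit FSM. The following Python sketch (a hypothetical four-state machine and proposition, not from the slides) computes EX, EF and EG directly and derives the universal operators by duality:

```python
# Four basic CTL operators by fixed-point iteration over an explicit FSM.
# The FSM and the proposition p below are hypothetical examples.

succ = {0: {1}, 1: {2}, 2: {0, 3}, 3: {3}}   # successor relation
states = set(succ)

def EX(S):                    # states with some successor in S
    return {z for z in states if succ[z] & S}

def EF(S):                    # least fixpoint: some trace reaches S
    R = set(S)
    while True:
        R2 = R | EX(R)
        if R2 == R:
            return R
        R = R2

def EG(S):                    # greatest fixpoint: some trace stays in S
    R = set(S)
    while True:
        R2 = R & EX(R)
        if R2 == R:
            return R
        R = R2

def AG(S):                    # p holds for all states of all traces
    return states - EF(states - S)

def AF(S):                    # p holds for some state in every trace
    return states - EG(states - S)

p = {0, 1, 2}                 # states labelled with p
assert EF({3}) == {0, 1, 2, 3}   # state 3 is reachable from everywhere
assert AG(p) == set()            # so no state satisfies AGp
assert AF(p) == {0, 1, 2}        # the trace looping in 3 never meets p
```

Symbolic model checking computes the same fixpoints on characteristic Boolean functions (BDDs) instead of explicit sets.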
4. Model Checking: Automated Debugging/Proof
Does the temporal logic formula hold for the FSM?
AGp - p holds for all states of all traces

Basic Model Checking (z0 = reset state, Z0 = {z0}, Zi+1 = Zi plus the new states reachable from states in Zi in one step):
if p does not hold for z0, then the reset activation defines a counterexample
else for i ≥ 0 {
• calculate Zi+1
• if Zi+1 = Zi, the proof holds; stop
• else examine all new z that can be reached from Zi in one step:
if p does not hold for z, then calculate a trace to z and stop
}

Symbolic Model Checking:
• Identify the sets Zi with their characteristic (Boolean) functions
• For Boolean f: f(x1, …, xn) = ite(x1 = 1, f(1, …, xn), f(0, …, xn))
• Iterated decomposition represents f as a directed acyclic graph (BDD)
• The graph is often compact; it permits efficient build-up of Zi, comparison of Zi and Zi+1, and intersection of Zi+1 with the set of states violating p
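The basic iteration can be sketched directly. The Python function below is an illustrative reimplementation (not the slides' tool) of the explicit-state loop: breadth-first build-up of the reachable sets Zi, checking p on every newly reached state and returning a counterexample trace on failure. The 3-bit counter and its property are hypothetical.

```python
def check_AGp(z0, succ, p):
    """Check AGp from reset state z0; succ maps a state to its successors,
    p is the property. Returns (True, None) or (False, counterexample)."""
    if not p(z0):
        return False, [z0]            # reset activation is the counterexample
    parent = {z0: None}
    Z = {z0}                          # current reachable set Z_i
    frontier = [z0]
    while frontier:                   # Z_{i+1} == Z_i once frontier is empty
        new = []
        for z in frontier:
            for z2 in succ(z):
                if z2 in Z:
                    continue
                parent[z2] = z
                if not p(z2):         # calculate the trace to z2 and stop
                    trace = [z2]
                    while parent[trace[-1]] is not None:
                        trace.append(parent[trace[-1]])
                    return False, trace[::-1]
                Z.add(z2)
                new.append(z2)
        frontier = new
    return True, None                 # fixed point reached: proof holds

# Hypothetical example: a 3-bit counter that must never reach the value 6
ok, trace = check_AGp(0, lambda z: [(z + 1) % 8], lambda z: z != 6)
assert not ok and trace == [0, 1, 2, 3, 4, 5, 6]
```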
5. Model Checking: Automated Debugging/Proof
Assessment
Status of approach
• Best-known automated formal verification paradigm
• Bound to be an add-on to conventional simulation-based testing
• Applied in various domains by experts verifying critical functionality – no generally accepted engineering practice
• Often faces state explosion, requiring problem-specific abstractions
• Finding safe abstractions requires deep knowledge of tool and application
Conclusions
• A push-button verification solution based on MC works only for simple properties
• Additional support of the "process of gaining understanding" is essential for broad acceptance of formal verification in industry
• In the early 1990s a new circuit verification approach emerged supporting pre-proof, proof and theory formation – OFV (operation-based formal circuit verification)
7. OFV: Operation Properties/Abstract VHDL
[Figure: VHDL code of an SDRAM controller next to its abstract VHDL. The concrete state machine (driven by reset, req, rw and address, issuing sd_ctrl commands such as precharge, activate, read, write, nop) is abstracted to two states, IDLE and ROW_ACT. The connecting operations pnop, pread(R,C) and pwrite(R,C,D) map to the memory operations mnop, mread(C) and mwrite(C,D), with precharge and activate(R) inserted whenever R ≠ actrow.]
8. OFV: Formal Verification of Single
Operation Property
Verification of a single operation property is reduced to a SAT problem:
• A = A(z0, Z, I, O, R(z0, Z, I, O)) (Mealy automaton of the VHDL program);
R defines the transition equations zj+1 = zj+1(zj, ij), oj = oj(zj, ij) (polynomials in zj, ij)
• P = P(it, it+1, …, it+n, zt, zt+1, …, zt+n, ot, ot+1, …, ot+n) ∈ {True, False};
the property describes the behaviour of an operation over n cycles (usually n ≤ 50)
• Inserting the transition equations of A into P yields a property P' of A with
P' = P'(it, it+1, …, it+n, zt)
• Application of a SAT solver:
P holds for A iff P' = True; otherwise the solver computes a trace T (counterexample),
triggered by it', it+1', …, it+n', such that T starts at zt' and P fails for T
• Complexity is shifted from BDD representation to SAT search; heuristics deal with
many thousands of variables; few properties run longer than 5 minutes
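The reduction can be illustrated with a toy example. In the sketch below (all names and the property are hypothetical, and exhaustive search stands in for the SAT solver), the transition equations are unrolled over n cycles so the property depends only on the start state zt and the input trace:

```python
from itertools import product

def next_state(z, i):   # transition equation z_{j+1}(z_j, i_j)
    return (z + i) % 4  # hypothetical 2-bit accumulator

def output(z, i):       # output equation o_j(z_j, i_j)
    return z

def prop(inputs, states, outputs):
    # Hypothetical operation property over n = 3 cycles: started in state 0,
    # the third output equals the sum of the first two inputs modulo 4.
    if states[0] != 0:
        return True     # start-state condition not met: vacuously true
    return outputs[2] == (inputs[0] + inputs[1]) % 4

def check_property(n=3):
    for z0 in range(4):                             # all start states z_t
        for inputs in product(range(4), repeat=n):  # all input traces
            states, outputs = [z0], []
            for i in inputs:                        # insert transition equations
                outputs.append(output(states[-1], i))
                states.append(next_state(states[-1], i))
            if not prop(inputs, states, outputs):
                return z0, inputs                   # counterexample trace T
    return None                                     # P' == True: P holds

assert check_property() is None   # the property holds for this automaton
```

A real SAT solver explores the same search space symbolically instead of enumerating it, which is what makes traces over thousands of variables tractable.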
9. OFV: Methodology to Systematically Find
Operation Properties
Review VHDL/spec and automatically verify identified behavior
• The verification engineer searches the VHDL for start and ending states of operations of the abstract VHDL
• Incremental build-up of these states and connecting operations by first inspecting the state machine(s) of the code and then taking the data path into account:
– A suspected (stage of an) operation is formalized by a – possibly partial – operation property
– Property checking reveals errors or ensures correct behavior of code fragments
• In this way the engineer walks through the code, operation by operation, and covers the behaviour of the VHDL by operation properties
• The review stops once the automated completeness check confirms coverage of the full functionality of the code by the properties
• Productivity: 2000-4000 lines of fully verified VHDL per person month
10. OFV: Completeness of Set of Operation
Properties
A set of operation properties of an automaton A describing a VHDL program is complete iff for every input trace of A a chain of properties exists which uniquely determines A's output trace – i.e. A and its Abstract VHDL have the same I/O behavior.
To chain operation properties gap-free, the ending and starting states of any such property P must comprise conditions which permit tests ensuring the completeness of a property set:
For every property P:
1. for every input stimulus there exist successor properties Qi such that the ending state condition of P fulfills the starting state condition of Qi (successor test)
2. for every input stimulus any successor Qi of P uniquely determines the output trace in the considered interval (determination test)
3. the input conditions of the successors Qi of P cover all possible inputs (case split test)
As with property checking, the completeness tests amount to solving SAT problems
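The three tests can be illustrated on a toy property set. In the hypothetical sketch below (names invented for illustration; exhaustive enumeration stands in for the SAT checks, and the determination test is simplified to one unique output per input), a property fixes a start state, an input condition, an ending state and the output in between:

```python
INPUTS = {"req", "nop"}   # hypothetical input alphabet

# Hypothetical property set for a two-state controller
PROPS = [
    {"start": "IDLE",    "input": "req", "end": "ROW_ACT", "out": "activate"},
    {"start": "IDLE",    "input": "nop", "end": "IDLE",    "out": "idle"},
    {"start": "ROW_ACT", "input": "req", "end": "ROW_ACT", "out": "access"},
    {"start": "ROW_ACT", "input": "nop", "end": "IDLE",    "out": "precharge"},
]

def complete(props):
    for p in props:
        succs = [q for q in props if q["start"] == p["end"]]
        if not succs:                               # successor test
            return False
        if {q["input"] for q in succs} != INPUTS:   # case split test
            return False
        for i in INPUTS:                            # determination test
            outs = [q["out"] for q in succs if q["input"] == i]
            if len(outs) != 1:
                return False
    return True

assert complete(PROPS)          # every input trace has a unique property chain
assert not complete(PROPS[:3])  # dropping a property leaves a case-split gap
```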
11. OFV: Success Story
Operation-Based Formal Verification of Large Industrial Processor
• Verisoft project funded by the German Ministry for Education and Research to challenge formal techniques
• Test case due to Verisoft partner Infineon:
– TriCore 1.3, a new superscalar 32-bit microcontroller-DSP: 3 pipelines, 850 instructions
– Around 100k lines of VHDL / 1000 pages of spec
– Widely used in automotive applications
• Effort: 4 PY vs. the significantly higher effort needed for simulation
• Critical bugs found by OFV in spec and RTL
• 1532 properties; 5 processes; 30k lines of formally verified property code
• Correctness proven on a single WS in 5 days
[Block diagram of the TriCore 1.3: core with MMU and FPU, program and data caches, program and data scratch RAMs, program and data interfaces, bus interface unit, interrupt and debug unit, other IP on a 64-bit crossbar, and a bridge to the system bus. Source: Infineon; Verisoft project 2007]
12. Chip Development and Main Hurdle for OFV
Early phase
• set up/assess functional prototypes

Architecture
• explore architectural choices
• specify modules and communication for target architecture

Design
• development and verification, or re-use, of modules (e.g. VHDL programs)
• verification engineers are used to black-box verification (random test generation)
• system integration, communication structures

Lower-Level Activities
• automated implementation of logic, first by gates, then by transistors
• generation of production data and tests
13. Further Perspectives of Abstract VHDL
Operation-Based Design; Optimization w.r.t. Area, Speed, Power; Functional Safety Analysis
[Figure: the SDRAM controller VHDL and its abstract VHDL from slide 7, shown again as the basis for operation-based design and optimization; a few control assignments differ from slide 7 (e.g. sd_ctrl <= row_act, an earlier ready <= '1', sd_ctrl <= stop), illustrating design alternatives visible at the operation level.]
14. Summary
• Modules are built to implement operations – often computing results within a few cycles.
• The functional essence of an operation is captured by the concept of an operation property.
• Start/end states of operations and operation properties define an abstract automaton –
tool-supported code review extracts this Abstract VHDL from the VHDL and spec.
• SAT-based property checking and completeness tests guarantee functional equivalence
between VHDL and Abstract VHDL, or reveal errors in code or spec – the respective tools
are supported and marketed by OneSpin Solutions GmbH.
• OFV is a full verification solution supporting pre-proof, proof and theory formation –
it reliably yields top quality at reasonable effort.
• Two barriers prevent OFV from entering mainstream engineering:
– Chip manufacturers now focus on system construction – most modules exist as re-use blocks
– Verification engineers have got used to black-box verification – automated random test simulation
• Way forward: operation-based design, exploiting the full potential of Abstract VHDL
Reference: J. Bormann: "Vollständige funktionale Verifikation", Dissertation, TU Kaiserslautern, 2009