This document discusses incremental software engineering using knowledge engineering techniques. It notes that requirements are never complete and domain knowledge is constantly evolving. Ripple-down rules is presented as an incremental knowledge engineering technique that allows efficient addition of new rules to address unanticipated situations. The document suggests this approach could enable incremental software engineering by continuously refining programs with additional rules over time based on new situations encountered.
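To make the mechanism concrete, here is a minimal sketch of single-classification ripple-down rules, with hypothetical predicates and labels; real RDR systems differ in detail. Each rule carries an exception branch consulted when it fires, so a misclassified case is handled by attaching a new rule at the point of failure rather than editing existing rules:

```python
# Minimal sketch of single-classification ripple-down rules (illustrative,
# not the document's implementation): each rule has an "if_true" child tried
# when the rule fires (an exception that refines it) and an "if_false" child
# tried when it does not.
class RDRNode:
    def __init__(self, condition, conclusion):
        self.condition = condition      # predicate over a case (dict)
        self.conclusion = conclusion    # label returned if this node wins
        self.if_true = None             # exception branch
        self.if_false = None            # alternative branch

    def classify(self, case, default=None):
        if self.condition(case):
            # Rule fires: a more specific exception may override it.
            refined = self.if_true.classify(case) if self.if_true else None
            return refined if refined is not None else self.conclusion
        return self.if_false.classify(case, default) if self.if_false else default

# Incremental repair: when a case is misclassified, attach a new rule at the
# node that produced the wrong conclusion, conditioned on the failing case.
root = RDRNode(lambda c: c["temp"] > 38.0, "fever")
root.if_true = RDRNode(lambda c: c["post_exercise"], "normal")  # exception added later
print(root.classify({"temp": 39.1, "post_exercise": True}))    # -> "normal"
```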
Automated Discovery of Performance Regressions in Enterprise Applications (SAIL_QU)
This document summarizes the author's research on automated discovery of performance regressions in enterprise applications. It discusses challenges with current performance verification practices, and proposes approaches at the design and implementation levels. At the design level, it suggests using layered simulation models to evaluate design changes early. At the implementation level, it presents techniques to analyze large performance datasets, detect regressions while limiting subjectivity, and deal with tests in heterogeneous environments. Case studies show the approaches achieve 75-100% precision and 52-80% recall. The research aims to help analysts efficiently identify performance regressions.
This document discusses anomaly detection using the Cortical Learning Algorithm (CLA). It defines anomalies and describes how NuPIC/CLA computes anomaly scores for streaming data to detect spatial, temporal, and other types of anomalies. Sample code is provided to demonstrate anomaly detection on CPU usage, heater temperature, and randomness change examples. The document also discusses how anomaly likelihood is computed in Grok and presents several use cases. It concludes by discussing future work including a benchmark for streaming anomaly detection.
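NuPIC's actual score is derived from its temporal-memory predictions; as a generic stand-in, the sketch below scores each point of a stream by its deviation from a running baseline, which is enough to reproduce the CPU-usage style of example:

```python
# Illustrative sketch (not NuPIC/CLA itself): score each new point in a
# stream by how far it falls from a simple running forecast, in units of
# the running spread over a sliding window.
from collections import deque

def stream_anomaly_scores(stream, window=50):
    history = deque(maxlen=window)
    for x in stream:
        if len(history) >= 2:
            mean = sum(history) / len(history)
            var = sum((v - mean) ** 2 for v in history) / (len(history) - 1)
            std = max(var ** 0.5, 1e-9)
            yield abs(x - mean) / std   # large -> unusual for this stream
        else:
            yield 0.0                   # not enough history yet
        history.append(x)

cpu = [12, 13, 11, 12, 14, 13, 12, 95, 13, 12]   # hypothetical CPU-usage trace
print([round(s, 1) for s in stream_anomaly_scores(cpu, window=5)])
```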
Simulink is an environment for modeling and simulation that allows users to model complex systems using block diagrams. It is used for various applications like control system design, signal processing, communication systems, and more. Key benefits of Simulink include its user-friendly block diagram interface, flexibility to model different system types, and integration with MATLAB. A demo of modeling a clapping sensor system that turns a light on and off is presented to illustrate Simulink capabilities. The presentation also discusses how Simulink supports the model-based design process through executable specification, design simulation, automatic code generation, and continuous testing.
Complexity Reduction for Cyber-Physical-Human Medical Systems (Po-Liang Wu)
The document proposes a hierarchical organ-based architecture and consistency protocol (CVGC protocol) to reduce complexity in medical cyber-physical systems. The architecture divides the system into layers representing organ systems and clinical specialties. This fits better with human physiology and allows localizing event handling. The CVGC protocol ensures consistency by having controllers generate a consistent view of their subsystems before coordinating actions. It addresses race conditions to prevent unsafe interactions despite distributed and asynchronous operation. Evaluation shows the approach reduces state space explosion compared to centralized and asynchronous designs.
Complexity Reduction for Cyber-Physical-Human Medical Systems (MDPnP_UIUC)
This document proposes methods to reduce complexity in medical cyber-physical systems through system design approaches. It discusses several major sources of complexity, including verification complexity due to asynchronous communication, treatment complexity from validating preconditions and monitoring side effects, and mental workload complexity for medical practitioners. To address concurrency-related complexity, the document presents an Interruptible Remote Procedure Call pattern that limits possible message interleaving while maintaining flexibility. This pattern is evaluated using a model checking tool and shown to significantly reduce state space complexity compared to asynchronous and synchronous approaches. Overall, the document advocates for protocols and architectural patterns to control interactions and reduce unnecessary complexity in medical cyber-physical systems.
Dealing with the Three Horrible Problems in Verification (DVClub)
1) There are three major problems in verification: specifying the properties to check, specifying the environment, and computational complexity of achieving high coverage.
2) The author proposes using "perspectives" to address these problems by focusing verification on specific aspects or classes of properties of a design using minimal formalization, rather than trying to tackle all issues at once.
3) This approach reduces complexity by omitting irrelevant details, targeting properties designers care about, and allowing verification to keep pace with frequent design changes.
NeuraLint and TheDeepChecker are tools for finding bugs in deep learning programs.
NeuraLint works by statically analyzing the code structure and checking for potential issues based on a taxonomy of common deep learning faults. It can find bugs quickly and less expensively than testing the full program.
TheDeepChecker monitors programs as they train and checks for issues based on defined verification rules related to parameters, activations, and optimization. It can find 30% more bugs than AWS SageMaker and captures defects during training rather than just on the final model. Both tools aim to improve quality and catch bugs earlier in the development process.
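As a flavor of such training-time verification rules, the sketch below checks a single monitored step for a few well-known symptoms (NaN loss, dead layers, vanishing or exploding gradients); the rule names and thresholds are illustrative assumptions, not TheDeepChecker's actual rule set:

```python
import numpy as np

# Illustrative training-time checks in the spirit of such verification rules
# (hypothetical thresholds; not TheDeepChecker's actual implementation).
def check_training_step(loss, weights, activations, grads):
    issues = []
    if not np.isfinite(loss):
        issues.append("loss is NaN/Inf (diverging optimization)")
    for name, w in weights.items():
        if np.allclose(w, w.flat[0]):
            issues.append(f"{name}: weights are constant (dead/unupdated layer)")
    for name, a in activations.items():
        if np.mean(a <= 0) > 0.95:
            issues.append(f"{name}: >95% inactive units (dying ReLU)")
    for name, g in grads.items():
        norm = np.linalg.norm(g)
        if norm < 1e-8:
            issues.append(f"{name}: vanishing gradient (norm={norm:.1e})")
        elif norm > 1e3:
            issues.append(f"{name}: exploding gradient (norm={norm:.1e})")
    return issues

# Example: one monitored step with obviously pathological values.
print(check_training_step(
    loss=float("nan"),
    weights={"dense1": np.zeros((4, 4))},
    activations={"relu1": np.full((32, 8), -1.0)},
    grads={"dense1": np.zeros((4, 4))},
))
```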
Basics of process fault detection and diagnostics (Rahul Dey)
This document provides an overview of process fault detection and diagnostics. It discusses key topics such as fault detection vs diagnosis, abnormal event management, components of a fault diagnosis framework, classes of failures, and desirable characteristics of a fault diagnostics system. Quantitative model-based methods are also introduced, including the use of redundancy, Kalman filters, and residual generation in dynamic systems.
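A minimal sketch of the residual-generation idea, assuming a scalar linear process with made-up noise parameters: a Kalman filter predicts each measurement, and normalized innovations that persistently exceed a threshold indicate a fault.

```python
import numpy as np

# Residual-based fault detection with a scalar Kalman filter (illustrative
# model and noise parameters): the innovation (measurement minus prediction),
# normalized by its variance, should stay small for a healthy process.
def kalman_residuals(measurements, a=1.0, c=1.0, q=1e-4, r=1e-2):
    x, p = 0.0, 1.0                      # state estimate and its variance
    for z in measurements:
        x_pred, p_pred = a * x, a * p * a + q
        residual = z - c * x_pred        # innovation
        s = c * p_pred * c + r           # innovation variance
        k = p_pred * c / s               # Kalman gain
        x, p = x_pred + k * residual, (1.0 - k * c) * p_pred
        yield residual / np.sqrt(s)      # normalized residual

rng = np.random.default_rng(0)
z = rng.normal(0.0, 0.1, 100)
z[60:] += 1.5                            # inject a sensor-bias fault at t=60
flagged = [i for i, res in enumerate(kalman_residuals(z)) if abs(res) > 3.0]
print("residual threshold exceeded at samples:", flagged[:5])
```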
Cheminfo Stories APAC 2020 - Chemical Descriptors & Standardizers for Machine... (ChemAxon)
This document discusses chemical descriptors and standardizers that are useful for machine learning models. It introduces standardizers, which canonicalize chemical structures for comparability, and structure checkers, which detect errors. Extended connectivity fingerprints (ECFP) are described as circular fingerprints that encode molecular structure. Case studies demonstrate using ECFP descriptors and standardizers to train deep neural networks for hERG activity prediction, achieving over 80% accuracy. Combining ECFP with topological descriptors led to slightly better performance than ECFP alone in a random forest model. In summary, customizable fingerprints and standardizers allow application in different tasks, and combining fingerprints with other descriptors can increase model performance.
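The talk uses ChemAxon's tools; as an open-source illustration of the same pipeline (a swapped-in library, not the one described), RDKit's Morgan fingerprint is an ECFP-style circular fingerprint, and its standardizer can canonicalize structures first:

```python
# Illustration with the open-source RDKit rather than the ChemAxon tools the
# talk describes: Morgan fingerprints are RDKit's ECFP-style circular
# fingerprint, and a standardizer step canonicalizes structures before hashing.
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem.MolStandardize import rdMolStandardize
import numpy as np

def ecfp_features(smiles, radius=2, n_bits=2048):
    mol = Chem.MolFromSmiles(smiles)
    mol = rdMolStandardize.Cleanup(mol)          # standardize first
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return np.array(fp)                          # 0/1 vector for an ML model

X = np.stack([ecfp_features(s) for s in ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O"]])
print(X.shape)   # (3, 2048): ready for a neural network or random forest
```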
David Parnas - Documentation Based Software Testing - SoftTest Ireland (David O'Dowd)
This document discusses documentation-based software testing and testing approaches. It advocates planning testing early and basing tests on documentation prepared throughout the design process. This allows test plans and evaluation to be determined in advance so high quality standards can be enforced on a project. The document also discusses different types of testing like black box, clear box, and grey box testing and notes that while black box testing tests against specifications, knowledge of internal structure can provide better test coverage.
Webinar Slides - How KeyBank Liberated its IT Ops from Rules-Based Event Mana... (Moogsoft)
Managing IT Operations is a challenging job that’s only getting harder. Humans can no longer effectively process the volumes of event data intended to help identify and remediate IT issues. So what’s an enterprise to do?
This fundamental question leads to another: is your legacy event management system still up to the job? For most enterprises, their legacy tool is based on technology that still relies on RULES.
KeyBank and Moogsoft describe the technical limitations of rules-based solutions, and how AIOps solutions represent the intelligent automation of the future. They also cover:
* How to move your monitoring regime from Reactive to Proactive to Predictive
* How AIOps can support the delivery of a great Customer Experience (Cx)
* The KeyBank story of AIOps adoption.
Challenges in Practicing High Frequency Releases in Cloud Environments (Liming Zhu)
Talk at RELENG 2014
Full paper: http://www.nicta.com.au/pub?doc=7925
The continuous delivery trend is dramatically shortening release cycles from months into hours. Applications with high frequency releases often rely heavily on automated deployment tools using cloud infrastructure APIs. We report some results from experiments on reliability issues of cloud infrastructure and trade-offs between using heavily-baked and lightly-baked images. Our experiments were based on Amazon Web Service (AWS) OpsWorks APIs and configuration management tool Chef. As a result of our experiments, we then propose error handling practices that can be included in tailor-made continuous deployment facilities.
More related info at our DevOps book http://www.ssrg.nicta.com.au/projects/devops_book/
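One concrete shape such error-handling practices can take (a sketch with a hypothetical `create_instance` step, not the paper's actual facilities) is bounded retries with exponential backoff and jitter around flaky infrastructure API calls:

```python
import random
import time

# A sketch of one error-handling practice for deployment automation (the
# paper's actual facilities may differ): wrap flaky cloud-infrastructure
# API calls in bounded retries with exponential backoff and jitter.
class TransientCloudError(Exception):
    """Timeouts, throttling, or eventually-consistent API reads."""

def with_retries(operation, max_attempts=5, base_delay=1.0):
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientCloudError:
            if attempt == max_attempts:
                raise                    # give up and surface the failure
            delay = base_delay * 2 ** (attempt - 1) * (0.5 + random.random())
            time.sleep(delay)            # back off before the next attempt

def create_instance():
    """Hypothetical stand-in for an OpsWorks/Chef provisioning step."""
    ...

# Usage in a deployment step: instance = with_retries(create_instance)
```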
In the age of Big Data, what role for Software Engineers? (CS, NcState)
This document discusses the role of software engineers in the age of big data. It begins by outlining two perspectives on whether data analysis is a "systems" task that can be fully automated or a "human" task requiring human analysts. The document then introduces several concepts, including the "CPU crisis" caused by exponentially growing data and models, search-based software engineering, which applies optimization techniques to software engineering problems, and goal-oriented requirements engineering, which uses goals to structure requirements. It presents two case studies on how goal-oriented reasoning and search-based techniques can help tackle challenges related to big data and the CPU crisis: one on optimizing feature maps for product line engineering, and another proposing a new active-learning-based search technique called GALE.
Modern business drivers are continually pushing to reduce the time it takes to get a product or service to market, reduce the risk and cost associated with that, and to improve quality.
In laboratories, delivering an analytical result that’s ‘right first time’ (RFT) is the answer. There is no reprocessing data or re-running injections and no out of specification (OOS) results or reporting/calculation errors.
Using chromatography data system tools for RFT analysis automatically gives high quality of results and confidence in results, lower cost of analysis, improved lab efficiency, and faster release to market and return on investment (ROI).
This document discusses search-based testing and its applications in software testing. It outlines some key strengths of search-based software testing (SBST) such as being scalable, parallelizable, versatile, and flexible. It also discusses some limitations of search-based approaches for problems that require formal verification to establish properties for all possible usages. The document compares classical optimization approaches, which build solutions incrementally, to stochastic optimization approaches used in SBST, which sample solutions in a randomized way. It notes that while testing can find bugs, it cannot prove their absence. Finally, it discusses how SBST can be combined with other techniques like constraint solving and machine learning.
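To illustrate the stochastic-optimization style of SBST on a toy problem, the sketch below hill-climbs a classic branch-distance fitness toward an input that covers a hard-to-hit branch; the function under test is hypothetical:

```python
import random

# Minimal sketch of search-based test generation: a stochastic hill climber
# minimizes a "branch distance" fitness measuring how far an input is from
# taking the branch under test (0 means the branch is taken).
def under_test(x, y):
    if x * x - y == 42:          # hard-to-hit branch we want covered
        return "target"
    return "other"

def branch_distance(x, y):
    return abs(x * x - y - 42)

def hill_climb(steps=10000):
    x, y = random.randint(-100, 100), random.randint(-100, 100)
    best = branch_distance(x, y)
    for _ in range(steps):
        nx, ny = x + random.randint(-5, 5), y + random.randint(-5, 5)
        d = branch_distance(nx, ny)
        if d <= best:            # accept improvements (and plateaus)
            x, y, best = nx, ny, d
        if best == 0:
            return x, y
    return None                  # budget exhausted: no guarantee of coverage

case = hill_climb()
print(case, under_test(*case) if case else "branch not covered")
```

Note the limitation the document raises: even when the search finds a covering input quickly, a failed search proves nothing about the absence of one.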
This document provides an overview of the MCT 3325 Control Systems Design course. It outlines the teaching team, lecture hours, required textbook, course outline, objectives, learning outcomes, class attendance policy, evaluation method, tips for success, and expectations. It also directs students to the learning management system website for additional course materials. The lecture introduces digital control systems, comparing them to analog systems. It defines the key elements of each, including converters, and discusses their relative advantages and disadvantages. The lecture concludes that while digital controls present some challenges, their benefits generally outweigh limitations.
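As a small illustration of what a digital controller computes between ADC reads and DAC writes, here is a discrete-time PI loop against a crude first-order plant; the gains, sample time, and plant model are made-up values, not from the course:

```python
# Illustrative discrete-time PI controller, the kind of loop a digital
# control system runs each sample period (all parameters are assumptions).
def make_pi_controller(kp=2.0, ki=0.5, dt=0.01):
    integral = 0.0
    def control(setpoint, measurement):
        nonlocal integral
        error = setpoint - measurement
        integral += error * dt           # rectangular (Euler) integration
        return kp * error + ki * integral
    return control

pi = make_pi_controller()
plant = 0.0
for _ in range(5000):                     # crude first-order plant simulation
    u = pi(setpoint=1.0, measurement=plant)
    plant += 0.01 * (u - plant)           # plant dynamics: dx/dt = u - x
print(round(plant, 3))                    # settles near the setpoint 1.0
```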
Finding interesting patterns in data can lead to uncovering new knowledge. New patterns that haven’t occurred before can signify events of interest. Depending on context, these can be called novelties, anomalies, outliers or events. Whatever they are called, they are interesting because they tell a story different from the norm. In this talk, we will call them anomalies. Two diverse applications of anomaly detection are detecting fraudulent credit card transactions and identifying astronomical anomalies such as solar flares.
However, there are many challenges in anomaly detection including high false positive rates and low predictive accuracy. Ensemble learning is a way of combining many algorithms or models to obtain better predictive performance. Anomaly detection is generally an unsupervised task, that is, we do not train models using labelled data. Constructing an unsupervised anomaly detection ensemble is challenging because we do not know the labels. In this talk we discuss two topics in anomaly detection. First, we introduce an anomaly detection ensemble using Item Response Theory (IRT) – a class of models used in educational psychometrics. Using IRT we construct an ensemble that can downplay noisy, non-discriminatory methods and accentuate sharper methods.
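The IRT construction itself is too involved for a short example, but the generic ensemble step it improves on can be sketched (a simple rank-averaging stand-in, not the IRT method): normalize each detector's scores to ranks so a badly scaled or noisy detector cannot dominate, then average.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

# Toy data: 200 normal points plus 5 planted outliers (indices 200-204).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(6, 1, (5, 2))])

# Two base detectors; scores are flipped so that higher = more anomalous.
scores = np.column_stack([
    -IsolationForest(random_state=0).fit(X).score_samples(X),
    -LocalOutlierFactor().fit(X).negative_outlier_factor_,
])
ranks = scores.argsort(axis=0).argsort(axis=0) / (len(X) - 1)  # per-detector rank in [0, 1]
ensemble = ranks.mean(axis=1)
print(np.argsort(ensemble)[-5:])   # the planted outliers should surface here
```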
Then we explore anomaly detection in computer network security. With cyber incidents and data breaches becoming increasingly common, we have seen a massive increase in computer network attacks over the years. Anomaly detection methods, even though used to detect suspicious behaviour, are criticized for high false positive rates. In addition, computer networks produce a large amount of complex data. We go through the end-to-end process of detecting anomalies in this scenario and show how we can minimize false positives and visualise anomalies developing over time.
The document discusses various software failures caused by bugs in software systems and the importance of software testing. Some key points:
- A rocket launch failed after 37 seconds due to an undetected bug in the control software that caused an exception. The failure cost over $1 billion.
- Medical radiation equipment killed patients in the 1980s due to race conditions in the software that allowed high-energy radiation to operate unsafely.
- A Mars lander crashed in 1999 because the descent engines shut down prematurely due to a single line of bad code that caused sensors to falsely indicate the craft had landed.
Testing Safety Critical Systems (10-02-2014, VU Amsterdam) (Jaap van Ekris)
A presentation about the steps required for verifying and validating safety-critical systems, as well as the test approach used. It goes beyond the simple processes and also talks about the safety culture and people required. The presentation contains examples of real-life IEC 61508 SIL 4 systems used on storm surge barriers...
WHAT IS AN OPERATING SYSTEM?
• An interface between users and hardware: an environment "architecture"
• Allows convenient usage; hides the tedious stuff
• Allows efficient usage; parallel activity, avoids wasted cycles
• Provides information protection
• Gives each user a slice of the resources
• Acts as a control program
Testing Dynamic Behavior in Executable Software Models - Making Cyber-physica... (Lionel Briand)
This document discusses testing dynamic behavior in executable software models for cyber-physical systems. It presents challenges for model-in-the-loop (MiL) testing due to large input spaces, expensive simulations, and lack of simple oracles. The document proposes using search-based testing to generate critical test cases by formulating it as a multi-objective optimization problem. It demonstrates the approach on an advanced driver assistance system and discusses improving performance with surrogate modeling.
The document discusses black-box behavioral model inference for automated systems like autopilots. It proposes using a hybrid deep neural network to infer internal states and detect state changes of black-box systems from input/output data. Experiments on two datasets show the approach outperforms baselines at detecting state changes and predicting internal states, with improvements of 88-102% and up to 19% respectively. The approach was replicated successfully on another system and autopilot data, confirming its feasibility.
This MS Word-generated PowerPoint presentation covers the major details of the micronucleus test: its significance and the assays used to conduct it. The test is used to detect micronucleus formation inside the cells of nearly every multicellular organism. Micronuclei form during chromosome separation at anaphase, when lagging chromosomes or fragments fail to join a daughter nucleus.
This presentation briefly explores the structural and functional attributes of nucleotides and the structure and function of genetic material, along with the impact of UV rays and pH upon them.
Nucleophilic Addition of carbonyl compounds.pptx (SSR02)
Nucleophilic addition is the most important reaction of carbonyl compounds: not just aldehydes and ketones, but carboxylic acid derivatives in general.
Carbonyls undergo addition reactions with a large range of nucleophiles.
Comparing the relative basicity of the nucleophile and the product is extremely helpful in determining how reversible the addition reaction is. Reactions with Grignards and hydrides are irreversible. Reactions with weak bases like halides and carboxylates generally don’t happen.
Electronic effects (inductive effects, electron donation) have a large impact on reactivity.
Large groups adjacent to the carbonyl will slow the rate of reaction.
Neutral nucleophiles can also add to carbonyls, although their additions are generally slower and more reversible. Acid catalysis is sometimes employed to increase the rate of addition.
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati... (AbdullaAlAsif1)
The pygmy halfbeak Dermogenys colletei is known for its viviparous nature, and it presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study examines fecundity and the gonadosomatic index (GSI) in the pygmy halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that D. colletei may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study using 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and they call for further ecological research to elucidate these mechanisms. This study contributes to a better understanding of viviparous fish in Borneo and to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
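For reference, the gonadosomatic index reported above is conventionally computed as gonad mass as a percentage of body mass (some studies use somatic mass, i.e. body mass minus gonad mass, in the denominator):

$$\mathrm{GSI} = \frac{W_{\text{gonad}}}{W_{\text{body}}} \times 100$$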
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep... (University of Maribor)
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
Phenomics assisted breeding in crop improvement (IshaGoswami9)
The global population is increasing and will reach about 9 billion by 2050; combined with climate change, this makes it difficult to meet the food requirements of such a large population. Facing the challenges presented by resource shortages, climate change, and a growing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progress of functional genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding the complex characteristics governed by multiple genes, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data, linkable to genomic information, at all growth stages have become as important as genotyping. Thus, high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology, and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz), I decided not to walk through the details of the many methodologies in order of use. Instead, I chose to employ a long-standing, and ongoing, scientific development as an exemplar. And so, I chose the ever-evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of more than 200 years, Thermodynamics R&D and application benefited from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at both micro and macro levels.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science, engineering, and technology, spanning micro-tech to aerospace and cosmology. I can think of no better story to illustrate the breadth of scientific methodologies and applications at their best.
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr... (Travis Hills MN)
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige... (University of Maribor)
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
ANAMOLOUS SECONDARY GROWTH IN DICOT ROOTS.pptx (RASHMI M G)
This presentation covers abnormal (anomalous) secondary growth in plants. It defines secondary growth as an increase in plant girth due to the activity of the vascular cambium or cork cambium. Anomalous secondary growth does not follow the normal pattern of a single vascular cambium producing xylem internally and phloem externally.
1. How Do Deep Learning Faults Affect AI-Enabled Cyber-Physical Systems in Operation? A Preliminary Study Based on DeepCrime Mutation Operators
Aitor Arrieta, Pablo Valle, Asier Iriarte, Miren Illarramendi
7. Mutation Testing of DL systems
• Based on real faults (e.g., low-quality data, faults in the DNN architecture)
• Considers the randomized nature of DL systems

$$\mathit{isKilled}(P, M, \mathit{TestD}) = \begin{cases} \text{true} & \text{if } \mathit{effectSize} \geq \beta \text{ and } p\_value < \alpha \\ \text{false} & \text{otherwise} \end{cases}$$
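As a rough illustration of this killing criterion, the following sketch (a hypothetical helper, not the authors' code) combines an effect-size estimate (Cohen's d) with a significance test over repeated training runs:

```python
import numpy as np
from scipy import stats

# Hypothetical sketch of the killing criterion above: compare accuracy/MSE
# samples from repeated runs of the original program P and mutant M on test
# set TestD, requiring both statistical significance and a minimum effect size.
def is_killed(orig_runs, mutant_runs, alpha=0.05, beta=0.5):
    _, p_value = stats.ttest_ind(orig_runs, mutant_runs)
    pooled_std = np.sqrt((np.var(orig_runs, ddof=1) + np.var(mutant_runs, ddof=1)) / 2)
    effect_size = abs(np.mean(orig_runs) - np.mean(mutant_runs)) / pooled_std  # Cohen's d
    return effect_size >= beta and p_value < alpha

# e.g., MSE over 10 runs each (made-up numbers):
print(is_killed(np.array([0.021, 0.019, 0.020] * 3 + [0.020]),
                np.array([0.035, 0.031, 0.036] * 3 + [0.034])))  # -> True
```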
8. Research Questions
RQ1 — How do DL faults affect AI-enabled CPSs in operation?
RQ2 — How do DL faults differ when deployed in an AI-enabled CPS as compared to when executed in an off-line fashion?
RQ3 — Are there differences in terms of killability between the types of DL faults when deployed in operation?
9. Experimental setup
• Case study system and the circuit used
10. Experimental setup
• Deep learning faults: 4 DL mutation operators selected from DeepCrime
  • New Learning Rates (HLR)
  • New Number of Epochs (HNE)
  • Add Noise to Training Data (TAN)
  • Change Labels of Training Data (TCL)
• 5 configurations each
• 10 runs to account for stochasticity
→ 200 DNN models in total + 10 DNN models for the original study
11. Experimental setup
• Evaluation metrics
  • Mean Squared Error (MSE) for off-line testing
  • For operational testing: the time required by the robot to complete two entire laps, and whether the robot went out of the circuit or not
• Other considerations
  • Controlled lighting of the environment
  • Manual timing was accounted for by recording the time twice
13. RQ1 – Faults affecting the physical rover
35% of the mutants were detected in the circuit
14. RQ2 – Off-line vs physical testing
95% of the mutants were detected off-line
These results contrast with other studies
15. RQ3 – Type of mutation operator
All mutants from TCL were detected
Two mutants from HLR were detected
TAN and HNE mutants were not detected
16. Conclusion
• Faults do not manifest that easily during operation
• Off-line testing seems to find more faults than physical testing
• Preliminary study: more faults and other case studies are required to generalize our findings
17. Future work
• More CPS case study systems
• More faults
• Other angles of research:
  • What about simulation?
  • What about systems with multiple DNNs?
  • Control levels: low-level controlling functions vs high-level controlling functions