Paper: Estimating Software Maintenance Effort from Use Cases: an Industrial Case Study
Authors: Yan Ku, Jing Du, Ye Yang, Qing Wang
Session: Industry Track Session 5: Metrics and Estimation
An Efficient Design Approach of ROI Based DWT Using Vedic and Wallace Tree Mu...IJECEIAES
In digital image processing, compression is used to improve visual perception and reduce storage cost. Reconstructing medical images, especially the region of interest (ROI), from lossy compression using hardware architectures is a challenging task. In this paper, ROI-based discrete wavelet transform (DWT) designs are presented using two separate multipliers: a Wallace-tree multiplier (WM) and a modified Vedic multiplier (VM). A lifting-based DWT is used for ROI compression and reconstruction, and the 9/7 filter coefficients are multiplied in the DWT using either the WM or the VM. The Wallace-tree multiplier operates in parallel with a pipelined architecture, yielding optimized hardware resource usage, while the 8x8 Vedic multiplier improves ROI reconstruction quality and computation speed. To compare ROI-based DWT-WM and DWT-VM on an FPGA platform, PSNR and MSE are computed for different brain MRI images, and hardware metrics including area, delay, maximum operating frequency, and power are tabulated. The proposed model is designed in Verilog-HDL on the Xilinx platform, simulated with ModelSim, and implemented on an Artix-7 FPGA device.
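The PSNR and MSE comparison described in the abstract is straightforward to reproduce in software. A minimal sketch (in Python rather than the paper's Verilog-HDL, purely to illustrate the metrics) might look like:

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error between two images."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means better reconstruction."""
    m = mse(original, reconstructed)
    if m == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / m)

# Toy 8x8 "ROI" and a slightly perturbed reconstruction (not real MRI data)
rng = np.random.default_rng(0)
roi = rng.integers(0, 256, size=(8, 8))
recon = np.clip(roi + rng.integers(-2, 3, size=(8, 8)), 0, 255)
print(psnr(roi, recon))
```

In the paper the two designs (DWT-WM and DWT-VM) would each produce a `recon` image whose PSNR/MSE against the original ROI is then tabulated.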
Presentation of my senior project, "A Real-Time Automatic Eye Tracking System for Ophthalmology"
The presentation briefly explains the conventional object-tracking method, template matching based on the sum of squared differences. We then present a more robust matching technique, Gradient Orientation Pattern Matching (GOPM), proposed by T. Kondo, and propose an improved version, time-varying GOPM, to address illumination and noise problems.
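The sum-of-squared-differences template matching that the presentation starts from can be sketched as below; the image and template here are toy arrays, not ophthalmic data:

```python
import numpy as np

def ssd_match(image, template):
    """Exhaustive template matching: return the top-left (row, col) of the
    window that minimizes the sum of squared differences (SSD)."""
    H, W = image.shape
    h, w = template.shape
    best, best_pos = None, None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            window = image[r:r + h, c:c + w].astype(np.float64)
            score = np.sum((window - template) ** 2)
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos

# Embed a template at a known location and recover it
img = np.zeros((20, 20))
tpl = np.arange(9, dtype=np.float64).reshape(3, 3) + 1.0
img[5:8, 7:10] = tpl
print(ssd_match(img, tpl))  # (5, 7)
```

GOPM replaces the raw intensities above with gradient orientations, which is what makes it more robust to the illumination changes the presentation discusses.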
Program Comprehension - An Evaluation of the Strategies of Sorting, Filtering...ICSM 2011
Paper: An Evaluation of the Strategies of Sorting, Filtering, and Grouping API Methods for Code Completion
Authors: Daqing Hou and Dave Pletcher
Session: Research Track Session 8 - Program Comprehension
Industry - The Evolution of Information Systems. A Case Study on Document Man...ICSM 2011
Paper : The Evolution of Information Systems. A Case Study on Document Management
Authors : Paolo Salvaneschi
Session: Industry Track Session 3: Evolution and migration
Paper: Tracking Technical Debt - An Exploratory Case Study
Authors: Yuepu Guo, Carolyn Seaman, Rebeka Gomes, Antonio Cavalcanti, Graziela Tonin, Fabio Q. B. Da Silva, André L. M. Santos, Clauirton Siebra
Session: Early Research Achievement Track Session 3
Components - Crossing the Boundaries while Analyzing Heterogeneous Component-...ICSM 2011
Paper: "Crossing the Boundaries while Analyzing Heterogeneous Component-Based Software Systems"
Authors: Amir Reza Yazdanshenas, Leon Moonen
Session: Research Track Session 7: Components
Faults and Regression testing - Localizing Failure-Inducing Program Edits Bas...ICSM 2011
Paper: Localizing Failure-Inducing Program Edits Based on Spectrum Information.
Authors: Lingming Zhang, Miryung Kim, Sarfraz Khurshid.
Session: Research Track Session 1: Faults and Regression Testing
Industry - Testing & Quality Assurance in Data Migration Projects ICSM 2011
Paper: Testing & Quality Assurance in Data Migration Projects
Authors: Klaus Haller, Florian Matthes, Christopher Schulz
Session: Industry Track Session 3: Evolution and migration
Natural Language Analysis - Mining Java Class Naming Conventions ICSM 2011
Paper: Mining Java Class Naming Conventions
Authors: Simon Butler, Michel Wermelinger, Yijun Yu and Helen Sharp
Session: Research Track 4 - Natural Language Analysis
Industry - Evolution and migration - Incremental and Iterative Reengineering ...ICSM 2011
Paper: Incremental and Iterative Reengineering towards Software Product Line: An Industrial Case Study
Authors: Gang Zhang, Liwei Shen, Xin Peng, Zhenchang Xing and Wenyun Zhao
Session: Industry Track Session 3: Evolution and migration
Abstract:
Botnets, which are networks of malware-infected machines that are controlled by an adversary, are the root cause of a large number of security threats on the Internet. A particularly sophisticated and insidious type of bot is Torpig, which is a malware program that is designed to harvest sensitive information (such as bank account and credit card data) from its victims. In this talk, I will report on our efforts to take control of the Torpig botnet for ten days. Over this period, we observed more than 180 thousand infections and recorded more than 70 GB of data that the bots collected.
While botnets have been hijacked before, the Torpig botnet exhibits certain properties that make the analysis of the data particularly interesting. First, it is possible (with reasonable accuracy) to identify unique bot infections and relate that number to the more than 1.2 million IP addresses that contacted our command and control server during the ten day period. This shows that botnet estimates that are based on IP addresses are likely to report inflated numbers. Second, the Torpig botnet is large, targets a variety of applications, and gathers a rich and diverse set of information from the infected victims. This allowed us to perform interesting data analysis that goes well beyond simply counting the number of stolen credit cards. In this talk I will discuss the analysis that we performed on the data collected and the lessons learned from the analysis, as well as from the process of obtaining (and losing) the botnet.
Bio:
Richard A. Kemmerer is the Computer Science Leadership Professor and a past Department Chair of the Department of Computer Science at the University of California, Santa Barbara. Dr. Kemmerer received the B.S. degree in Mathematics from the Pennsylvania State University in 1966, and the M.S. and Ph.D. degrees in Computer Science from the University of California, Los Angeles, in 1976 and 1979, respectively. His research interests include formal specification and verification of systems, computer system security and reliability, programming and specification language design, and software engineering.
Dr. Kemmerer is a Fellow of the IEEE Computer Society, a Fellow of the Association for Computing Machinery, and he is the 2007 recipient of The Applied Security Associates Distinguished Practitioner Award. He is a member of the IFIP Working Group 11.3 on Database Security, and a member of the International Association for Cryptologic Research. He is a past Editor-in-Chief of IEEE Transactions on Software Engineering, and he has served on the editorial boards of the ACM Computing Surveys and IEEE Security and Privacy and on the Board of Governors of the IEEE Computer Society. He served on Microsoft’s Trustworthy Computing Academic Advisory Board (2002-2010) and on the National Science Foundations/CISE Advisory Board (2002-2004).
Abstract:
Though in essence an engineering discipline, software engineering research has always been struggling to demonstrate impact. This is reflected in part by the funding challenges that the discipline faces in many countries, the difficulties we have to attract industrial participants to our conferences, and the scarcity of papers reporting industrial case studies.
There are clear historical reasons for this but we nevertheless need, as a community, to question our research paradigms and peer evaluation processes in order to improve the situation. From a personal standpoint, relevance and impact are concerns that I have been struggling with for a long time, which eventually led me to leave a comfortable academic position and a research chair to work in industry-driven research.
I will use some concrete research project examples to argue why we need more inductive research, that is, research working from specific observations in real settings to broader generalizations and theories. Among other things, the examples will show how a more thorough understanding of practice and closer interactions with practitioners can profoundly influence the definition of research problems, and the development and evaluation of solutions to these problems. Furthermore, these examples will illustrate why, to a large extent, useful research is necessarily multidisciplinary. I will also address issues regarding the implementation of such a research paradigm and show how our own bias as a research community worsens the situation and undermines our very own interests.
On a more humorous note, the title hints at the fact that being a scientist in software engineering and aiming to have an impact on practice often entails leading two parallel careers and impersonating different roles to different peers and partners.
Bio:
Lionel Briand is heading the Certus center on software verification and validation at Simula Research Laboratory, where he is leading research projects with industrial partners. He is also a professor at the University of Oslo (Norway). Before that, he was on the faculty of the department of Systems and Computer Engineering, Carleton University, Ottawa, Canada, where he was full professor and held the Canada Research Chair (Tier I) in Software Quality Engineering. He is the coeditor-in-chief of Empirical Software Engineering (Springer) and is a member of the editorial boards of Systems and Software Modeling (Springer) and Software Testing, Verification, and Reliability (Wiley). He was on the board of IEEE Transactions on Software Engineering from 2000 to 2004. Lionel was elevated to the grade of IEEE Fellow for his work on the testing of object-oriented systems. His research interests include: model-driven development, testing and verification, search-based software engineering, and empirical software engineering.
ERA - Measuring Maintainability of Spreadsheets in the Wild ICSM 2011
Paper: Measuring Maintainability of Spreadsheets in the Wild
Authors: José Pedro Correia and Miguel Alexandre Ferreira
Session: Early Research Achievements Track Session 2: Software Changes and Maintainability
Faults and Regression Testing - Fault Interaction and its Repercussions ICSM 2011
Paper: Fault Interaction and its Repercussions
Authors: Nicholas DiGiuseppe and James A. Jones
Session: Research Track 1: Faults and Regression Testing
Natural Language Analysis - Expanding Identifiers to Normalize Source Code Vo...ICSM 2011
Paper: Expanding Identifiers to Normalize Source Code Vocabulary
Authors: Dave Binkley and Dawn Lawrie
Session: Research Track 4: Natural Language Analysis
Industry - Precise Detection of Un-Initialized Variables in Large, Real-life ...ICSM 2011
Paper: "Precise Detection of Un-Initialized Variables in Large, Real-life COBOL Programs in Presence of Un-realizable Paths"
Authors: Rahul Jiresal, Adnan Contractor and Ravindra Naik
Session: Industry Track Session 4: Program analysis and Verification
Components - Graph-Based Detection of Library API Imitations ICSM 2011
Paper: Graph-based Detection of Library API Imitations
Authors: Chengnian Sun, Siau-Cheng Khoo, Shao Jie Zhang (All from National University of Singapore)
Session: Research Track Session 7: Components
Industry - Relating Developers' Concepts and Artefact Vocabulary in a Financ...ICSM 2011
Paper: Relating Developers' Concepts and Artefact Vocabulary in a Financial Software Module
Authors: Tezcan Dilshener and Michel Wermelinger
Session: Industry Track 2 - Reverse Engineering
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2021/01/imaging-systems-for-applied-reinforcement-learning-control-a-presentation-from-nanotronics/
Damas Limoge, Senior R&D Engineer at Nanotronics, presents the “Imaging Systems for Applied Reinforcement Learning Control” tutorial at the September 2020 Embedded Vision Summit.
Reinforcement learning has generated human-level decision-making strategies in highly complex game scenarios. But most industries, such as manufacturing, have not seen impressive results from the application of these algorithms, belying the utility hoped for by their creators. The limitations of reinforcement learning in real use cases intuitively manifest from the number of exploration examples needed to train the underlying models, but also from incomplete state representations for an artificial agent to act on.
In an effort to improve automated inspection for factory control through reinforcement learning, Nanotronics’ research is focused on improving the state representation of a manufacturing process using optical inspection as a basis for agent optimization. In this presentation, Limoge focuses on the imaging system: its design, implementation and utilization, in the context of a reinforcement agent.
Reconfigurable CORDIC Low-Power Implementation of Complex Signal Processing f...Editor IJMTER
In recent years, CORDIC algorithms have been used extensively in various image processing and biomedical applications. The CORDIC algorithm reduces the number of iterations required to process an image in the system. Low-power design is a challenging aspect of system operation, and previous approaches aim to minimize power consumption without considering image quality. In this paper, a CORDIC-based low-power DCT distributes its iterations according to image quality. A hardware implementation of the ROM and control logic circuit requires a large hardware area in such a system, so a look-ahead CORDIC approach is used to complete the iterations in one pass, reducing both hardware area and iteration count to maximize battery lifetime. This idea is used to achieve a low-power design for image and video compression applications.
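For readers unfamiliar with CORDIC, the classic rotation-mode iteration that such low-power designs build on can be sketched as follows (floating-point Python for clarity; a hardware version would use fixed-point shifts and adds):

```python
import math

def cordic_rotate(x, y, angle, n_iters=24):
    """Rotate (x, y) by `angle` radians using the CORDIC rotation mode:
    each step rotates by +/- atan(2^-i) using only shift-and-add terms."""
    K = 1.0  # accumulated gain, removed at the end
    z = angle
    for i in range(n_iters):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return x * K, y * K

# Rotating (1, 0) by 30 degrees should give (cos 30°, sin 30°)
cx, cy = cordic_rotate(1.0, 0.0, math.radians(30))
print(round(cx, 4), round(cy, 4))  # 0.866 0.5
```

The "look-ahead" approach mentioned in the abstract restructures this per-iteration dependency so multiple iterations can be resolved at once; the loop above shows only the baseline iteration it accelerates.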
ASSESSMENT OF INTRICATE DG PLANNING WITH PRACTICAL LOAD MODELS BY USING PSO ecij
This paper presents the optimal sizing and placement of distributed generation (DG) under practical load models. The particle swarm optimization (PSO) technique is used to minimize a multi-objective fitness function (MOFF) that combines performance indices for voltage difference, active power loss, and reactive power loss. Most studies assume a constant load for distribution system planning, which can mislead the assessment of system performance; the voltage dependency of load models is therefore a pressing issue in current research. With precise and flawless distribution system planning in mind, the effect of different load models on total load, voltage profile, and active and reactive power losses is evaluated and presented in this paper. The efficacy of the proposed method is demonstrated by implementing it on the 33-bus radial test system.
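The PSO minimization the abstract describes can be sketched as below; the two-term objective `moff` is a hypothetical stand-in for the paper's MOFF, not its actual voltage and loss indices:

```python
import random

def pso_minimize(fitness, bounds, n_particles=20, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimization over box-bounded variables."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # per-particle best positions
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]       # swarm-wide best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Hypothetical weighted-sum stand-in for the MOFF (e.g. DG size and location)
moff = lambda v: (v[0] - 2.0) ** 2 + 0.5 * (v[1] - 1.0) ** 2
best, best_f = pso_minimize(moff, bounds=[(0.0, 5.0), (0.0, 5.0)])
print(best_f)
```

In the paper the decision variables would be DG sizes and bus locations, and evaluating the fitness would require a load-flow solution of the 33-bus system rather than a closed-form quadratic.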
Product defect detection based on convolutional autoencoder and one-class cla...IAESIJAI
To meet customer expectations and remain competitive, manufacturers constantly try to improve their quality control systems, so there is increasing demand for automatic defect detection solutions. However, the biggest issue in building such systems is the imbalanced nature of industrial datasets: defect-free samples far outnumber defective ones, owing to the continuous improvement approaches adopted by manufacturing companies. We therefore propose an automatic defect detection system based on one-class classification (OCC), which involves only normal samples during training. It consists of three sub-models: first, a convolutional autoencoder serves as a latent feature extractor; the extracted feature vectors are then reduced in dimensionality by principal component analysis (PCA); finally, the reduced-dimensional data are used to train the one-class classifier support vector data description (SVDD). During the test phase, both normal and defective images are used: the first two stages of the trained model generate a low-dimensional feature vector, and the SVDD classifies the new input as defect-free or defective. This approach is evaluated on the carpet images from the industrial inspection dataset MVTec anomaly detection (MVTec AD), with only normal images used during training. The results show that the proposed method outperforms state-of-the-art methods.
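The train-on-normal-only pipeline (features, then PCA, then a one-class boundary) can be sketched as below. The hypersphere here is a crude stand-in for SVDD (center plus quantile radius instead of the smallest enclosing sphere), and random vectors stand in for the autoencoder's latent features:

```python
import numpy as np

def fit_pca(X, k):
    """PCA via SVD; returns (mean, components) for projecting to k dims."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def project(X, mu, comps):
    return (X - mu) @ comps.T

def fit_sphere(Z, quantile=0.95):
    """Crude SVDD-like model: a hypersphere around the normal data,
    with radius set to a quantile of the training distances."""
    center = Z.mean(axis=0)
    dists = np.linalg.norm(Z - center, axis=1)
    return center, np.quantile(dists, quantile)

def is_defect(z, center, radius):
    return np.linalg.norm(z - center) > radius

# Stand-in features: pretend these came from the autoencoder's latent space
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 16))   # defect-free training set
mu, comps = fit_pca(normal, k=4)
center, radius = fit_sphere(project(normal, mu, comps))

defect = rng.normal(20.0, 1.0, size=(1, 16))    # clearly anomalous sample
print(is_defect(project(defect, mu, comps)[0], center, radius))  # True
```

A real SVDD additionally allows slack variables and kernels so the boundary need not be a sphere in the input space; the sketch only shows the train-on-normal, threshold-at-test structure.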
Marc Stein, Underwrite.ai - Driverless AI Use Cases in Finance and Cancer Gen...Sri Ambati
This session was recorded in San Francisco on February 9th, 2019 and can be viewed here: https://youtu.be/6KY4CSA1AzU
Marc Stein is the founder and CEO of Underwrite.ai. Underwrite.ai applies advances in artificial intelligence derived from genomics and particle physics to provide lenders with non-linear, dynamic models of credit risk which radically outperform traditional approaches. Marc’s career has always revolved around deep interests in artificial intelligence, quantum physics, genomics, sugar cream pie, and all ice cream flavors found at Berthillon and the challenge of how to combine all these in practical applications.
We present a system to support generalized SQL workload analysis and management for multi-tenant and multi-database platforms. Workload analysis applications are becoming more sophisticated to support database administration, model user behavior, audit security, and route queries, but the methods rely on specialized feature engineering, and therefore must be carefully implemented and reimplemented for each SQL dialect, database system, and application. Meanwhile, the size and complexity of workloads are increasing as systems centralize in the cloud. We model workload analysis and management tasks as variations on query labeling, and propose a system design that can support general query labeling routines across multiple applications and database backends. The design relies on the use of learned vector embeddings for SQL queries as a replacement for application-specific syntactic features, reducing custom code and allowing the use of off-the-shelf machine learning algorithms for labeling. The key hypothesis, for which we provide evidence in this paper, is that these learned features can outperform conventional feature engineering on representative machine learning tasks. We present the design of a database-agnostic workload management and analytics service, describe potential applications, and show that separating workload representation from labeling tasks affords new capabilities and can outperform existing solutions for representative tasks, including workload sampling for index recommendation and user labeling for security audits.
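The core idea above, replacing hand-engineered syntactic features with query embeddings that downstream labeling tasks consume, can be sketched as below. The hashed bag-of-tokens `embed` is a toy stand-in for the learned embeddings the paper proposes, and the labels and example queries are hypothetical:

```python
import zlib
import numpy as np

def embed(query, dim=64):
    """Toy stand-in for a learned SQL query embedding: a normalized,
    hashed bag of tokens (deterministic via crc32)."""
    v = np.zeros(dim)
    for tok in query.lower().replace(",", " ").split():
        v[zlib.crc32(tok.encode()) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def label(query, centroids):
    """Assign the query the label of the most similar centroid."""
    v = embed(query)
    return max(centroids, key=lambda lbl: float(v @ centroids[lbl]))

# Tiny labeled "workload" used to build per-label centroids
reads = ["select name from users", "select id, name from orders"]
writes = ["insert into users values (1)", "update orders set total = 5"]
centroids = {
    "read": np.mean([embed(q) for q in reads], axis=0),
    "write": np.mean([embed(q) for q in writes], axis=0),
}
print(label("select name from users where id = 1", centroids))  # read
```

Because the labeler only sees vectors, swapping the toy `embed` for a learned one (or for a different SQL dialect) would not change the labeling code, which is the separation of representation from task the paper argues for.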
Implementation on Quality of Control for Image Based Control Systems using Al...YogeshIJTSRD
Picture Processing IP applications have gotten renowned with the presence of capable computations and negligible exertion CMOS cameras with significant standard. In any case, IP applications are register concentrated, eat up a huge load of energy and have long taking care of times. Picture assessment has been proposed by late works for an energy capable arrangement of these applications. It also diminishes the impact of long getting ready occasions. The test here is that the IP applications oftentimes work as a piece of more prominent shut circle control systems, for instance advanced driver help structure ADAS . We propose a construction for execution appraisal of picture surmise on a shut circle auto IBC structure. Our construction is written in C and uses V REP as the propagation environment. For the generation, V REP runs as a laborer and the C module as a client in concurrent mode. We show the electiveness of our framework using a fantasy based equal control model. Miss. Badde Suma | Mr. Parasurama N | Kirla Jyothsna "Implementation on Quality-of-Control for Image-Based Control Systems using Algorithmic Approximation" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-4 , June 2021, URL: https://www.ijtsrd.compapers/ijtsrd40028.pdf Paper URL: https://www.ijtsrd.comengineering/electronics-and-communication-engineering/40028/implementation-on-qualityofcontrol-for-imagebased-control-systems-using-algorithmic-approximation/miss-badde-suma
Reengineering framework for open source software using decision tree approachIJECEIAES
A Software engineering is an approach to software development. Once software gets developed and delivered, it needs maintenance. Changes in software incur due to new requirements of the end-user, identification of bug in software or failure to achieve system objective. It has been observed that successive maintenance in the developed software reduces software quality and degrades the performance of software system. Reengineering is an approach of retaining the software quality and improving maintainability of the software system. But the question arises “when to reengineer the software”. The paper proposed a framework for software reengineering process using decision tree approach which helps decision makers to decide whether to maintain or reengineer the software systems.
Paper presented at the 6th International Work-Conference on Ambient Assisted Living.
Abstract: Due to the increasing demand of multi-camera setup and long-term monitoring in vision applications, real-time multi-view action recognition has gain a great interest in recent years. In this paper, we propose a multiple kernel learning based fusion framework that employs a motion-based person detector for finding regions of interest and local descriptors with bag-of-words quantisation for feature representation. The experimental results on a multi-view action dataset suggest that the proposed framework significantly outperforms simple fusion techniques and state-of-the-art methods.
Partial half fine-tuning for object detection with unmanned aerial vehiclesIAESIJAI
Deep learning has shown outstanding performance in object detection tasks with unmanned aerial vehicles (UAVs), which involve the fine-tuning technique to improve performance by transferring features from pre-trained models to specific tasks. However, despite the immense popularity of fine-tuning, no works focused on to study of the precise fine-tuning effects of object detection tasks with UAVs. In this research, we conduct an experimental analysis of each existing fine-tuning strategy to answer which is the best procedure for transferring features with fine-tuning techniques. We also proposed a partial half fine-tuning strategy which we divided into two techniques: first half fine-tuning (First half F-T) and final half fine-tuning (Final half F-T). We use the VisDrone dataset for the training and validation process. Here we show that the partial half fine-tuning: Final half F-T can outperform other fine-tuning techniques and are also better than one of the state-of-the-art methods by a difference of 19.7% from the best results of previous studies.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Industry - Estimating software maintenance effort from use cases an industrial case study
1. Estimating Software Maintenance Effort from Use Cases: an Industrial Case Study
Yan Ku, Jing Du, Ye Yang, Qing Wang
Institute of Software, Chinese Academy of Sciences
2011-09-29
*This work is supported by the National Natural Science Foundation of China under Grant Nos. 90718042, 60873072, 60903050 and 61073044; the National Hi-Tech Research and Development Plan of China under Grant Nos. 2007AA010303, 2007AA01Z186 and 2007AA01Z179.
3. Motivation
An anecdote in the software engineering domain*:
Elephant: Discipline
Monkey: Agile
Elephant & Monkey: Practical
*Barry W. Boehm, Richard Turner, Balancing Agility and Discipline: A Guide for the Perplexed, Aug. 2003
*Picture Source: www.image.baidu.com
4. Motivation
"Speaking of the problems we are facing during the software lifecycle, hmmm, there is no way it is a short list. … I hate to, but I have to say that effort estimation, especially for maintenance projects, if not totally impossible, is really a big challenge."
----- A Project Manager
Formal estimation models: time-consuming, requiring enough information and data ---- the elephant
Expert judgment: expert-dependent, easily leads to rule-of-thumb estimates ---- the monkey
*Picture Source: http://www.zcool.com.cn
5. Industrial Setting
The industrial research is inspired by the estimation dilemma mentioned before.
The problem occurred in developing the leading product of a medium-size software enterprise with CMMI ML4 in China.
The product, named QONE, is a commercial software process management tool.
It has contributed to process improvement for more than 300 small and medium-sized software companies and organizations in China.
6. Industrial Setting (Cont)
Since 2004, QONE has released several major versions as well as branches for specially customized ones in succession. Several of the evolving versions are maintenance projects.
Expert estimates were mainly used in the past effort estimation of QONE. The estimation results are not so stable due to subjectivity and other issues.
The actual effort and other data, including use case documents, have been accumulated by QONE itself.

Versions | Begin Date | End Date
v1 | 2004-10-8 | 2004-11-15
v2 | 2005-7-11 | 2005-11-30
v3 | 2006-1-16 | 2007-3-30
v4 | 2007-5-28 | 2007-10-31
v5 | 2007-12-10 | 2008-7-31
v6 | 2008-3-20 | 2008-8-21
v7 | 2008-9-1 | 2009-3-20
7. Methodology
Goal: Achieve a balance of simplicity, early estimating, and accuracy in one effort estimate.
Methodology Principles:
- Apply use cases as the size metric and introduce requirement elaboration factors to make the estimate in advance. (Maintenance task type is not distinguished, due to the difficulty of effort classification.)
- Introduce as few adjustment factors as possible in order to reduce complexity.
- Take advantage of the historical data to help improve estimation accuracy.
8. Modeling Process
Get the lowest level Use the same
requirements data unit
8
9. Count Data
Use case: number of use cases
newUC: new-added
modUC: modified
reuUC: reused without modified
delUC: delete
9
11. Count Data & Construct Model
Weights
Wmod / Wreu / Wdel: effort ratio of a modified/reused/deleted use case to a new-added one
Sizeadjusted = newUC + Wmod * modUC + Wreu * reuUC + Wdel * delUC
Effort = A * (Sizeadjusted)^B
where
Effort is the maintenance effort;
Sizeadjusted is the adjusted product size;
A is the multiplicative calibration constant;
B is the exponential calibration constant.
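As a minimal sketch of the model above, the two formulas translate directly into code. Note that the weight defaults and the constants A and B below are illustrative placeholders, not the calibrated values from the study:

```python
def adjusted_size(new_uc, mod_uc, reu_uc, del_uc,
                  w_mod=0.5, w_reu=0.1, w_del=0.2):
    """Weighted use-case count. Each weight is the effort ratio of a
    modified/reused/deleted use case to a new-added one; the default
    values here are hypothetical, not the paper's calibrated weights."""
    return new_uc + w_mod * mod_uc + w_reu * reu_uc + w_del * del_uc

def estimated_effort(size_adjusted, a=100.0, b=1.1):
    """Power-law effort model: Effort = A * Size_adjusted ** B.
    A and B are calibration constants fitted from historical data;
    the defaults are placeholders for illustration only."""
    return a * size_adjusted ** b

# Example with v1-style counts (3 new, 10 modified, 216 reused, 0 deleted):
size = adjusted_size(3, 10, 216, 0)
effort = estimated_effort(size)
```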
13. Validation
Metric Definition
MRE = |predicted effort - actual effort| / actual effort
Referred Measures
MMRE: mean magnitude of relative error
MdMRE: median magnitude of relative error
PRED25: the % of data points with MRE <= 0.25
PRED30: the % of data points with MRE <= 0.30
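The four accuracy measures above can be computed with a short helper (a straightforward sketch of the standard definitions):

```python
import statistics

def mre(predicted, actual):
    """Magnitude of relative error (MRE) for one data point."""
    return abs(predicted - actual) / actual

def summarize(predicted, actual):
    """MMRE, MdMRE, PRED25, and PRED30 over paired effort estimates."""
    mres = [mre(p, a) for p, a in zip(predicted, actual)]
    return {
        "MMRE": statistics.mean(mres),
        "MdMRE": statistics.median(mres),
        # PRED(x): fraction of data points whose MRE is at most x
        "PRED25": sum(m <= 0.25 for m in mres) / len(mres),
        "PRED30": sum(m <= 0.30 for m in mres) / len(mres),
    }
```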
14. Prediction Process
Data collection
Prediction
To apply the model in different phases of the lifecycle, elaboration factors are used to estimate the size input.
Sizere-adjusted = EF * Sizeadjusted, where EF is the elaboration factor between higher- and lower-level requirements.
EF = (1/n) * Σ (NUCi / NRi), summed over i = 1..n, where NRi is the number of higher-level requirements and NUCi is the number of lower-level requirements in the ith data point.
*Picture Source: Alistair Cockburn, Writing Effective Use Cases
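The elaboration-factor formula above amounts to averaging the lower-to-higher requirement count ratios over the n historical data points, as in this sketch:

```python
def elaboration_factor(nuc, nr):
    """EF = (1/n) * sum(NUC_i / NR_i): average ratio of lower-level
    (more detailed) requirement counts to higher-level counts across
    the n historical data points."""
    assert len(nuc) == len(nr) and len(nuc) > 0
    return sum(u / r for u, r in zip(nuc, nr)) / len(nuc)

def readjusted_size(size_adjusted, ef):
    """Scale a size counted at a higher requirement level to the
    use-case level: Size_re-adjusted = EF * Size_adjusted."""
    return ef * size_adjusted
```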
15. Case Study
Historical data are the projects mentioned in slide 6.
Requirements are described at three levels:
capability goals (CG, least detailed)
capability requirements (CR)
use cases (UC, most detailed)
16. Requirement Counts per Version
Versions | Requirements Levels | New | Modified | Reused | Deleted | Actual Effort (person-hours, per version)
v1 | Capability goals (CG) | 0 | 5 | 13 | 0 | 2284.5
v1 | Capability requirements (CR) | 0 | 7 | 35 | 0 |
v1 | Use cases (UC) | 3 | 10 | 216 | 0 |
v2 | Capability goals (CG) | 0 | 11 | 7 | 0 | 3941
v2 | Capability requirements (CR) | 0 | 16 | 26 | 0 |
v2 | Use cases (UC) | 7 | 22 | 207 | 0 |
v3 | Capability goals (CG) | 4 | 15 | 3 | 0 | 30945
v3 | Capability requirements (CR) | 12 | 30 | 12 | 0 |
v3 | Use cases (UC) | 86 | 94 | 134 | 8 |
v4 | Capability goals (CG) | 1 | 13 | 6 | 3 | 10340.1
v4 | Capability requirements (CR) | 3 | 33 | 17 | 4 |
v4 | Use cases (UC) | 50 | 61 | 229 | 17 |
v5 | Capability goals (CG) | 1 | 12 | 7 | 1 | 7477.5
v5 | Capability requirements (CR) | 1 | 18 | 33 | 2 |
v5 | Use cases (UC) | 12 | 31 | 301 | 15 |
v6 | Capability goals (CG) | 1 | 8 | 12 | 0 | 14903.6
v6 | Capability requirements (CR) | 2 | 20 | 32 | 0 |
v6 | Use cases (UC) | 37 | 30 | 311 | 3 |
v7 | Capability goals (CG) | 0 | 3 | 18 | 0 | 7166
v7 | Capability requirements (CR) | 3 | 3 | 51 | 0 |
v7 | Use cases (UC) | 15 | 8 | 366 | 4 |
18. Validation Result
Leave-one-out cross validation is applied.
Elaboration factors are taken from A. A. Malik's research*, since the data used there is a subset of our dataset.

Metrics | CG | CR | UC
MMRE | 36.87% | 26.18% | 26.94%
MdMRE | 27.09% | 24.13% | 20.01%
PRED25 | 0.4286 | 0.5714 | 0.7143
PRED30 | 0.5714 | 0.7143 | 0.8571

UC yields the best result.
*A. A. Malik, B. W. Boehm, Y. Ku, and Y. Yang, "Comparative Analysis of Requirements Elaboration of an Industrial Product," Proceedings of the 2nd International Conference on Software Technology and Engineering (ICSTE 2010), Oct. 2010, pp. 46-50.
19. Validation Result (Cont)
Versions | Adjusted Size | Actual Effort | Predictive Effort | MRE (%)
v1 | 15.8 | 2284.5 | 2916.8925 | 27.68
v2 | 21.75 | 3941 | 3767.9859 | 4.39
v3 | 111.5 | 30945 | 23276.7727 | 24.78
v4 | 73.65 | 10340.1 | 19263.9780 | 86.30
v5 | 33.25 | 7477.5 | 6136.3455 | 17.94
v6 | 58.55 | 14903.6 | 11921.1786 | 20.01
v7 | 34.9 | 7166 | 6628.5125 | 7.50

Notes:
1. COTS has been used
2. Increased productivity through requirement management
20. Method Comparison
Analogy: the database used for analogy is from the China Software Benchmarking Standard Group (CSBSG): 999 software project data points from 140 organizations distributed across 15 regions of China.
COCOMO2000

Methods | MMRE | MdMRE | PRED25 | PRED30
Analogy | 33.09% | 31.59% | 0.2857 | 0.2857
COCOMO | 32.47% | 15.47% | 0.5714 | 0.5714
Use case | 26.94% | 20.01% | 0.7143 | 0.8571
21. Discussion
Lessons Learned:
- Use case metrics
- Requirement elaboration factors
- Advantages of use-case based estimation
- Linear vs. exponential relationship between effort and use cases
Threats to Validity:
- Internal threats: weak outlier tolerance; complexity of use cases
- External threats: use case weight