Testing of artificial intelligence; AI quality engineering skills - an introdu... (Rik Marselis)
Testing of AI will require a new skillset related to interpreting a system's boundaries or tolerances. Indeed, as our paper points out, the complex functioning of an AI system means, amongst other things, that the focus of testing shifts from output to input to verify a robust solution. We also introduce the six angles of quality for Artificial Intelligence and Robotics.
This paper was written by Humayun Shaukat, Toni Gansel and Rik Marselis.
Cross-project Defect Prediction Using A Connectivity-based Unsupervised Class... (Feng Zhang)
Defect prediction on projects with limited historical data has attracted great interest from both researchers and practitioners. Cross-project defect prediction, which reuses classifiers trained on other projects, has been the main area of progress. However, existing approaches require some degree of homogeneity (e.g., a similar distribution of metric values) between the training projects and the target project, and satisfying this requirement often takes significant effort (currently a very active area of research).
An unsupervised classifier does not require any training data, so the heterogeneity challenge is no longer an issue. In this paper, we examine two types of unsupervised classifiers: a) distance-based classifiers (e.g., k-means); and b) connectivity-based classifiers. While distance-based unsupervised classifiers have previously been used in the defect prediction literature with disappointing performance, connectivity-based classifiers have never been explored in our community.
We compare the performance of unsupervised classifiers versus supervised classifiers using data from 26 projects from three publicly available datasets (i.e., AEEEM, NASA, and PROMISE). In the cross-project setting, our proposed connectivity-based classifier (via spectral clustering) ranks as one of the top classifiers among five widely-used supervised classifiers (i.e., random forest, naive Bayes, logistic regression, decision tree, and logistic model tree) and five unsupervised classifiers (i.e., k-means, partition around medoids, fuzzy C-means, neural-gas, and spectral clustering). In the within-project setting (i.e., models are built and applied on the same project), our spectral classifier ranks in the second tier, while only random forest ranks in the first tier. Hence, connectivity-based unsupervised classifiers offer a viable solution for cross and within project defect predictions.
Defect Prediction Over Software Life Cycle in Automotive Domain (Rakesh Rana)
Presented at:
9th International Joint Conference on Software Technologies (ICSOFT-EA), Vienna, Austria
Get full text of publication at:
http://rakeshrana.website/index.php/work/publications/
Data collection for software defect prediction (Ammar Mobark)
Data collection is one of the important stages that software companies need: after the program is produced and published, it captures the reactions and impressions of users so the program can be developed and improved.
The Contents
* BACKGROUND AND RELATED WORK
* EXPERIMENTAL PLANNING
- Research Goal
- Research Questions
- Experimental Subjects
- Experimental Material
- Tasks and Methods
- Experimental Design
Ammar Abdualkareem Sahib Mobark
It Does What You Say, Not What You Mean: Lessons From A Decade of Program Repair (Claire Le Goues)
In this talk we present lessons learned, good ideas, and thoughts on the future, with an eye toward informing junior researchers about the realities and opportunities of a long-running project. We highlight some notions from the original paper that stood the test of time, some that were not as prescient, and some that became more relevant as industrial practice advanced. We place the work in context, highlighting perceptions from software engineering and evolutionary computing, then and now, of how program repair could possibly work. We discuss the importance of measurable benchmarks and reproducible research in bringing scientists together and advancing the area. We give our thoughts on the role of quality requirements and properties in program repair. From testing to metrics to scalability to human factors to technology transfer, software repair touches many aspects of software engineering, and we hope a behind-the-scenes exploration of some of our struggles and successes may benefit researchers pursuing new projects.
In the age of Big Data, what role for Software Engineers? (CS, NC State)
ABSTRACT:
Consider the premise of Big Data:
better conclusions = same algorithms + more data + more cpu
If this were always true, then there would be no role for human analysts who reflect on the domain to offer insights that produce better solutions (since all such insight would be generated automatically by the CPUs).
This talk proposes a marriage of sorts between Big Data and software engineering. It reviews over a decade of work by the author in exploring user goals using CPU-intensive methods. It will be shown that analyst insight was useful for building "better" tools (where "better" means generating more succinct recommendations, running faster, and scaling to much larger problems). The conclusion will be that in the age of big data, human analysis is still useful and necessary, but a new kind of software engineering analyst is required: one who knows how to take full advantage of the power of Big Data.
ABOUT THE AUTHOR:
Tim Menzies (Ph.D., UNSW) is a Professor in CS at WVU; the author of over 230 refereed publications; and one of the 50 most cited authors in software engineering (out of 50,000+ researchers, see http://goo.gl/wqpQl). At WVU, he has been a lead researcher on projects for NSF, NIJ, DoD, NASA, and USDA, as well as joint research work with private companies. He teaches data mining, artificial intelligence, and programming languages.
Prof. Menzies is the co-founder of the PROMISE conference series devoted to reproducible experiments in software engineering (see http://promisedata.googlecode.com). He is an associate editor of IEEE Transactions on Software Engineering, Empirical Software Engineering, and the Automated Software Engineering Journal. In 2012, he served as co-chair of the program committee for the IEEE Automated Software Engineering conference. In 2015, he will serve as co-chair for the ICSE'15 NIER track. For more information, see his web site http://menzies.us, his vita at http://goo.gl/8eNhY, or his list of pubs at http://goo.gl/0SWJ2p.
STATISTICAL ANALYSIS FOR PERFORMANCE COMPARISON (ijseajournal)
Performance responsiveness and scalability are make-or-break qualities for software, and nearly everyone runs into performance problems at one time or another. This paper discusses the performance issues faced during the Pre Examination Process Automation System (PEPAS), implemented in Java technology, the challenges faced during the life cycle of the project, and the mitigation actions performed. It compares three Java technologies and shows how improvements in the application's response time were made through statistical analysis. The paper concludes with a result analysis.
TLC2018 Thomas Haver: The Automation Firehose - Be Strategic and Tactical (Anna Royzman)
Thomas Haver teaches how to automate both strategically and tactically to maximize the benefits of automation - at Test Leadership Congress 2018.
http://testleadershipcongress-ny.com
Search-based testing of procedural programs: iterative single-target or multi-... (Vrije Universiteit Brussel)
In the context of testing Object-Oriented (OO) software systems, researchers have recently proposed search-based approaches to automatically generate whole test suites by simultaneously considering all targets (e.g., branches) defined by the coverage criterion (the multi-target approach). The goal of whole-suite approaches is to overcome the problem of wasted search budget that iterative single-target approaches (which generate test cases for each target in turn) can encounter in the case of infeasible targets. However, whole-suite approaches had not been implemented and evaluated in the context of procedural programs. In this paper we present OCELOT (Optimal Coverage sEarch-based tooL for sOftware Testing), a test data generation tool for C programs which implements both a state-of-the-art whole-suite approach and an iterative single-target approach designed for parsimonious use of the search budget. We also present an empirical study conducted on 35 open-source C programs to compare the two approaches implemented in OCELOT. The results indicate that the iterative single-target approach provides higher efficiency while achieving the same or an even higher level of coverage than the whole-suite approach.
Advanced Optimization for the Enterprise Webinar (SigOpt)
Building on the TWIML eBook, TWIMLcon event and TWIML podcast series that explore Machine Learning Platforms in great detail, this webinar examines the machine learning platforms that power enterprise leaders in AI. SigOpt CEO Scott Clark will provide an overview of critical technical capabilities that our customers have prioritized in their ML platforms.
Review these slides to learn about:
- Critical capabilities for data, experiment and model management
- Tradeoffs between building and buying these capabilities
- Lessons from the implementation of these platforms by AI leaders
Why focus on these platforms and the capabilities that power them? Nearly every company is investing in machine learning that differentiates products or generates revenue. These so-called "differentiated models" represent the biggest opportunity for AI to transform the business. Most of these teams find success hiring expert data scientists and machine learning engineers who can build these models. But most of these teams also struggle to create a more sustainable, scalable and reproducible process for model development, and have begun building ML platforms to tackle this challenge.
Domain-specific modeling languages and generators have been shown to significantly improve the productivity and quality of system and software development. These benefits are typically reported without explaining the size of the initial investment in creating the languages, generators and related tooling. We compare the investment needed across ten cases, in two different ways, focusing on the effort to develop a complete modeling solution for a particular domain with the MetaEdit+ tool. Firstly, we use a case study research method to obtain detailed data on the development effort of implementing two realistically-sized domain-specific modeling solutions. Secondly, we review eight publicly available cases from various companies to obtain data from industry experiences with the same tool, and compare them with the results from our case studies. Both the case studies and the industry reports indicate that, for this tool, the investment required to create domain-specific modeling support is modest: ranging from 3 to 15 man-days with an average of 10 days.
In this video from SC17 in Denver, Dan Reed moderates a panel discussion on HPC Software for Energy Efficiency.
"We have already achieved major gains in energy-efficiency for both the datacenter and HPC equipment. For example, the PUE of the Swiss Supercomputer (CSCS) datacenter prior to 2012 was 1.8, but the current PUE is about 1.25; a factor of ~1.5 improvement. HPC system improvements have also been very strong, as evidenced by FLOPS/Watt performance on the Green500 List. While we have seen gains from data center and HPC system efficiency, there are also energy-efficiency gains to be had from software- application performance improvements, for example. This panel will explore what HPC software capabilities were most helpful over the past years in improving HPC system energy efficiency? It will then look forward; asking in what layers of the software stack should a priority be put on introducing energy-awareness; e.g., runtime, scheduling, applications? What is needed moving forward? Who is responsible for that forward momentum?"
Watch the video: https://wp.me/p3RLHQ-hHQ
Learn more: https://sc17.supercomputing.org/presentation/?id=pan103&sess=sess245
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Navigate, Understand, Communicate: How Developers Locate Performance Bugs (Sebastian Baltes)
Background: Performance bugs can lead to severe issues regarding computation efficiency, power consumption, and user experience. Locating these bugs is a difficult task because developers have to judge for every costly operation whether runtime is consumed necessarily or unnecessarily. Objective: We wanted to investigate how developers, when locating performance bugs, navigate through the code, understand the program, and communicate the detected issues.
Method: We performed a qualitative user study observing twelve developers trying to fix documented performance bugs in two open source projects. The developers worked with a profiling and analysis tool that visually depicts runtime information in a list representation and embedded into the source code view.
Results: We identified typical navigation strategies developers used for pinpointing the bug, for instance, following method calls based on runtime consumption. The integration of visualization and code helped developers to understand the bug. Sketches visualizing data structures and algorithms turned out to be valuable for externalizing and communicating the comprehension process for complex bugs.
Conclusion: Fixing a performance bug is a code comprehension and navigation problem. Flexible navigation features based on executed methods and a close integration of source code and performance information support the process.
Strategies for Optimization of an OLED Device (David Lee)
Every experiment yields multiple data types, each requiring unique analyses and controls due to the sub-micron nature of an innovative organic light-emitting diode (OLED). Three specific data methods will be discussed. First, the premise of the study centers on a six-factor definitive screening design that was built utilizing new features incorporated in JMP 13 for improved power and signal detection. Multiple responses were modeled with a defect model generated via use of the Profiler and Simulation studies. Second, devices are continually monitored for radiance loss in an accelerated fade test. Frequently, devices are removed from the test prior to reaching their failure point. Predicted failure times can be estimated by utilizing a custom nonlinear model in either the Reliability Degradation or Nonlinear Model platforms. Estimated failure times were then incorporated into traditional parametric survival techniques, as well as new features in the Generalized Regression platform. Lastly, radiance data is collected across the visual spectrum, resulting in approximately 100 correlated responses.
Computer Engineering Ethics course @ Birzeit University
Slides set 1: what is professionalism. Importance of Engineering. Course outcomes. Codes of Ethics. Challenger shuttle disaster. Ford Pinto case.
Pareto-Optimal Search-Based Software Engineering (POSBSE): A Literature Survey (Abdel Salam Sayyad)
Paper presented at the 2nd International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE’13), San Francisco, USA. May 2013.
How to Add Chatter in the Odoo 17 ERP Module (Celine George)
In Odoo, the chatter is like a chat tool that helps you work together on records. You can leave notes and track things, making it easier to talk with your team and partners. Inside chatter, all communication history, activity, and changes will be displayed.
The simplified electron and muon model, Oscillating Spacetime: The Foundation... (RitikBhardwaj56)
The Simplified Electron and Muon Model: A New Wave-Based Approach to Understanding Particles delves into a groundbreaking theory that presents electrons and muons as rotating soliton waves within oscillating spacetime. Geared towards students, researchers, and science buffs, this book breaks down complex ideas into simple explanations. It covers topics such as electron waves, temporal dynamics, and the implications of this model for particle physics. With clear illustrations and easy-to-follow explanations, readers will gain a new outlook on the universe's fundamental nature.
Introduction to AI for Nonprofits with Tapp Network (TechSoup)
Dive into the world of AI! Experts Jon Hill and Tareq Monaur will guide you through AI's role in enhancing nonprofit websites and basic marketing strategies, making it easy to understand and apply.
It describes the bony anatomy, including the femoral head, acetabulum, and labrum, and also discusses the capsule and ligaments. The muscles that act on the hip joint and the range of motion are outlined. Factors affecting hip joint stability and weight transmission through the joint are summarized.
Thinking of getting a dog? Be aware that breeds like Pit Bulls, Rottweilers, and German Shepherds can be loyal and dangerous. Proper training and socialization are crucial to preventing aggressive behaviors. Ensure safety by understanding their needs and always supervising interactions. Stay safe, and enjoy your furry friends!
2024.06.01 Introducing a competency framework for language learning materials ... (Sandy Millin)
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor... (Levi Shapiro)
Letter from the Congress of the United States regarding Anti-Semitism sent June 3rd to MIT President Sally Kornbluth, MIT Corp Chair, Mark Gorenberg
Dear Dr. Kornbluth and Mr. Gorenberg,
The US House of Representatives is deeply concerned by ongoing and pervasive acts of antisemitic harassment and intimidation at the Massachusetts Institute of Technology (MIT). Failing to act decisively to ensure a safe learning environment for all students would be a grave dereliction of your responsibilities as President of MIT and Chair of the MIT Corporation.
This Congress will not stand idly by and allow an environment hostile to Jewish students to persist. The House believes that your institution is in violation of Title VI of the Civil Rights Act, and the inability or unwillingness to rectify this violation through action requires accountability.
Postsecondary education is a unique opportunity for students to learn and have their ideas and beliefs challenged. However, universities receiving hundreds of millions of federal funds annually have denied students that opportunity and have been hijacked to become venues for the promotion of terrorism, antisemitic harassment and intimidation, unlawful encampments, and in some cases, assaults and riots.
The House of Representatives will not countenance the use of federal funds to indoctrinate students into hateful, antisemitic, anti-American supporters of terrorism. Investigations into campus antisemitism by the Committee on Education and the Workforce and the Committee on Ways and Means have been expanded into a Congress-wide probe across all relevant jurisdictions to address this national crisis. The undersigned Committees will conduct oversight into the use of federal funds at MIT and its learning environment under authorities granted to each Committee.
• The Committee on Education and the Workforce has been investigating your institution since December 7, 2023. The Committee has broad jurisdiction over postsecondary education, including its compliance with Title VI of the Civil Rights Act, campus safety concerns over disruptions to the learning environment, and the awarding of federal student aid under the Higher Education Act.
• The Committee on Oversight and Accountability is investigating the sources of funding and other support flowing to groups espousing pro-Hamas propaganda and engaged in antisemitic harassment and intimidation of students. The Committee on Oversight and Accountability is the principal oversight committee of the US House of Representatives and has broad authority to investigate “any matter” at “any time” under House Rule X.
• The Committee on Ways and Means has been investigating several universities since November 15, 2023, when the Committee held a hearing entitled From Ivory Towers to Dark Corners: Investigating the Nexus Between Antisemitism, Tax-Exempt Universities, and Terror Financing. The Committee followed the hearing with letters to those institutions on January 10, 202
How to Build a Module in Odoo 17 Using the Scaffold Method (Celine George)
Odoo provides an option for creating a module with a single command. Using this command, the user can generate the whole structure of a module, so it is very easy for a beginner to make one: there is no need to create each file manually. These slides show how to create a module using the scaffold method.
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
Unit 8 - Information and Communication Technology (Paper I).pdf (Thiyagu K)
These slides describe the basic concepts of ICT, the basics of email, emerging technologies, and digital initiatives in education. The presentation aligns with the UGC Paper I syllabus.
On the Value of User Preferences in Search-Based Software Engineering
1. On the Value of User Preferences in Search-Based Software Engineering: A Case Study in Software Product Lines
Abdel Salam Sayyad
Tim Menzies
Hany Ammar
West Virginia University, USA
International Conference on Software Engineering
May 23rd, 2013
2. Sound bites
Feature-based SE: representation and validation are important; multi-objective configuration is interesting.
Stop tinkering with small stuff! Minor improvements are welcome… but not insightful!
Enough with the usual suspects: NSGA-II, SPEA2, etc.
If user preferences matter, then the best optimizer understands preferences the best.
5. Welcome to the new world!
Old world: product-based SE, e.g. Microsoft Office.
New world: app-based SE, e.g. the Apple App Store.
6. The times, they are a changin'
Old world: product-based SE, e.g. Microsoft Office.
• Vendors tried to retain their user base via complete ecologies.
• One software solution for all user needs (e.g. Microsoft Office).
• Large, complex software platforms, very slow to change.
7. The times, they are a changin'
New world: app-based SE, e.g. the Apple App Store.
• Smartphones and tablet-based software.
• Users choose many small apps from different vendors, each performing a specific small task.
• Vendors must quickly and continually reconfigure apps to retain and extend their customer base.
8. Feature-oriented domain analysis [Kang 1990]
• Feature models = a lightweight method for defining a space of options.
• De facto standard for modeling variability.
[Figure: an example feature model tree with cross-tree constraints]
9. What are the user preferences?
• Multi-objective optimization = navigating competing concerns.
– Success criteria = choose features that achieve the user preferences!
• Suppose each feature had the following metrics:
1. Boolean USED_BEFORE?
2. Integer DEFECTS
3. Real COST
• Show me the space of "best options" according to the objectives (evaluated as sketched below):
1. That satisfies most domain constraints (0 ≤ #violations ≤ 100%)
2. That offers most features
3. That we have used most before
4. Using features with least known defects
5. Using features with least cost
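To make the objective model concrete, here is a minimal sketch of scoring one candidate product on these five objectives, assuming (as slide 19 states) that a product is encoded as a bit string over the features. The metric arrays and the constraint checker are hypothetical placeholders, not the authors' code.

```java
import java.util.BitSet;

// A sketch: evaluate one candidate product on the five objectives.
class ProductEvaluator {
    boolean[] usedBefore; // per-feature USED_BEFORE? (hypothetical data)
    int[] defects;        // per-feature DEFECTS
    double[] cost;        // per-feature COST

    ProductEvaluator(boolean[] usedBefore, int[] defects, double[] cost) {
        this.usedBefore = usedBefore;
        this.defects = defects;
        this.cost = cost;
    }

    // Returns {violations, -features, -usedBefore, defects, cost};
    // the "maximize" objectives are negated so everything is minimized.
    double[] evaluate(BitSet product, int numFeatures) {
        int selected = 0, reused = 0, totalDefects = 0;
        double totalCost = 0.0;
        for (int i = 0; i < numFeatures; i++) {
            if (!product.get(i)) continue;
            selected++;
            if (usedBefore[i]) reused++;
            totalDefects += defects[i];
            totalCost += cost[i];
        }
        return new double[] {countViolations(product), -selected, -reused,
                             totalDefects, totalCost};
    }

    // Placeholder: count unsatisfied tree and cross-tree constraints here.
    double countViolations(BitSet product) {
        return 0.0;
    }
}
```

Negating the maximization objectives lets a single minimizing optimizer handle all five at once, which is how multi-objective frameworks typically expect objectives to be expressed.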
10. Diving deeper
• Much prior work in SBSE (*) explored tiny objective spaces (2 or 3 objectives), used NSGA-II, and didn't state why!
(*) Sayyad and Ammar, RAISE'13
13. Survival of the fittest (according to NSGA-II [Deb et al. 2002])
Boolean dominance (x dominates y, or does not):
- In no objective is x worse than y
- In at least one objective, x is better than y
Plus crowd pruning to keep the survivors diverse.
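As a sketch, the Boolean dominance test above can be written directly for objectives in minimization form (an illustration, not the jMetal routine used in the experiments):

```java
// Boolean Pareto dominance for minimization objectives:
// x dominates y iff x is nowhere worse and strictly better somewhere.
class Dominance {
    static boolean dominates(double[] x, double[] y) {
        boolean strictlyBetter = false;
        for (int i = 0; i < x.length; i++) {
            if (x[i] > y[i]) return false;    // worse in some objective
            if (x[i] < y[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }
}
```

Note the all-or-nothing answer: two solutions either dominate one another or they don't. IBEA, introduced on slide 15, replaces this yes/no test with a graded indicator.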
14. Tinkering with small stuff…
Other algorithms and their ranking criteria (all five rank by Boolean Dominance):
• ssNSGA-II: steady state [Durillo '09]: same ranking as NSGA-II.
• SPEA2 [Zitzler '01]: more focus on diversity, with a new diversity measure.
• FastPGA [Eskandari '07]: borrowed ranking criteria from NSGA-II and the diversity measure from SPEA2.
• MOCell (cellular GA) [Nebro '09]: same ranking as NSGA-II.
• MOCHC (re-designed selection, crossover, and mutation) [Nebro '07]: same ranking as NSGA-II.
15. Indicator-Based Evolutionary Algorithm (IBEA) [Zitzler and Kunzli '04]
1) For {old generation + new generation} do:
– Add up every individual's amount of dominance with respect to everyone else.
– Sort all instances by F.
– Delete worst, recalculate, delete worst, recalculate, …
2) Then run a standard GA (cross-over, mutation) on the survivors, create a new generation, and go back to 1.
Uses continuous dominance.
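A minimal sketch of that continuous dominance measure, using the additive epsilon indicator from the IBEA paper; kappa is a scaling constant (its value is an assumption here), and this is an illustration rather than the jMetal implementation run in the experiments:

```java
// IBEA-style fitness via the additive epsilon indicator.
class IbeaFitness {
    // Smallest eps by which a must be shifted to weakly dominate b
    // (minimization objectives): a graded, not Boolean, comparison.
    static double epsIndicator(double[] a, double[] b) {
        double eps = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < a.length; i++)
            eps = Math.max(eps, a[i] - b[i]);
        return eps;
    }

    // F(x) = sum over rivals y of -exp(-I(y, x) / kappa); heavily
    // dominated individuals end up with very low (bad) fitness.
    static double[] fitness(double[][] population, double kappa) {
        double[] f = new double[population.length];
        for (int x = 0; x < population.length; x++)
            for (int y = 0; y < population.length; y++)
                if (x != y)
                    f[x] -= Math.exp(-epsIndicator(population[y],
                                                   population[x]) / kappa);
        return f;
    }
}
```

Environmental selection then repeatedly deletes the individual with the lowest F and recalculates, which is exactly the loop in step 1 above.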
16. An important property of IBEA
• Allows solutions to crowd close to the zero point.
• In our problem: allows many solutions that achieve violations equal or close to zero… then allows optimization in the other 4 dimensions…
• Crowd pruning can hurt!
• Using real-valued benchmark functions, Wagner et al. (EMO 2007) confirm that the performance of NSGA-II and SPEA2 rapidly deteriorates with increasing dimension, whereas indicator-based algorithms (e.g. IBEA) cope very well with high-dimensional objective spaces.
19. Algorithms
• 7 algorithms from jMetal (http://jmetal.sourceforge.net/): IBEA, NSGA-II, ssNSGA-II, SPEA2, FastPGA, MOCell, MOCHC.
• Features are bits in the "product" string.
• Same settings for all algorithms.
20. 4 studies: two-, three-, four-, and five-objective
The five objectives:
1. satisfy most domain constraints (0 ≤ violations ≤ 100%)
2. offer most features
3. most used before
4. least known defects
5. least cost
Two objectives = 1, 2
Three objectives = 1, 2, 5
Four objectives = 1, 2, 3, 5
Five objectives = 1, 2, 3, 4, 5
21. [Results charts comparing the algorithms; the legend below applies to slides 21-27]
HV = hypervolume of dominated region
Spread = coverage of frontier
% correct = % of solutions that fully satisfy the constraints
22. Comment 1: all about the same for the 2-objective problem.
23. Comment 2: IBEA is better on HV.
24. Comment 3: IBEA has no spread operators, but gets best spread.
25. Comment 4: All the non-IBEA algorithms are very similar.
26. Comment 5: IBEA does much, much better on constraints.
27. Comment 6: IBEA's advantage is more striking with E-Shop.
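For readers unfamiliar with the HV metric in that legend, here is an illustrative sketch for the simplest case: two minimization objectives and a fixed reference point. The sample front in main is hypothetical.

```java
import java.util.Arrays;
import java.util.Comparator;

// Hypervolume of the region dominated by a 2-D front (minimization),
// bounded by a reference point; a larger HV means a better front.
class Hypervolume2D {
    static double hv(double[][] front, double[] ref) {
        double[][] pts = front.clone();
        Arrays.sort(pts, Comparator.comparingDouble((double[] p) -> p[0]));
        double area = 0.0, prevX = ref[0];
        // Sweep from the reference corner, adding one dominated slab per point.
        for (int i = pts.length - 1; i >= 0; i--) {
            area += (prevX - pts[i][0]) * (ref[1] - pts[i][1]);
            prevX = pts[i][0];
        }
        return area;
    }

    public static void main(String[] args) {
        double[][] front = {{1, 4}, {2, 2}, {4, 1}};        // hypothetical front
        System.out.println(hv(front, new double[] {5, 5})); // prints 11.0
    }
}
```

For the 4- and 5-objective studies reported here the computation is more involved; multi-objective frameworks such as jMetal ship hypervolume implementations for those cases.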
28. Why is this interesting?
Other algorithms:
• The usual suspects are widely, uncritically used in many applications (especially NSGA-II and SPEA2).
• Focused on internal algorithmic tricks: techniques for improving spread, improving HV, avoiding overlaps in cross-over of dominated space, etc.
• And the net effect of all those differences? Not much.
IBEA:
• Rather stupid on those internal tricks: just does the same old crossover-mutate GA.
• Plus: aggressive exploration of the preference space.
• And the net effect of all those differences? Better spreads, better HV, fewer constraint violations.
Conclusion: preference is power.
30. Current Achievements and Future Work
• ICSE'13: initial results proving IBEA's advantage.
• CMSBSE'13: IBEA runtime improvement.
• (Under review):
– a mutation operator tailored to the feature tree: 2-3 orders-of-magnitude improvement in runtime.
– scalability to a 6000+-feature real feature model using an innovative "population seeding" method.
• Future work:
– further scalability.
– interactive configuration.
31. Sound bites
The new age of the app: in this new world, use FODA (feature-oriented domain analysis).
Stop tinkering with small stuff: many MOEAs have strikingly similar performance.
Enough with the usual suspects (NSGA-II, SPEA2, etc.): too much uncritical application of these algorithms.
If user preferences matter, then the best optimizer understands preferences the best. IBEA: aggressive preference exploration.
Acknowledgment
This research work was funded by the Qatar National Research Fund (QNRF) under the National Priorities Research Program (NPRP), Grant No. 09-1205-2-470.