Physical and Conceptual Identifier Dispersion: Measures and Relation to Fault... - ICSM 2010
This document describes a study on the relationship between term dispersion in source code identifiers and fault proneness. It introduces measures of physical dispersion (entropy) and conceptual dispersion (context coverage) to quantify how terms are scattered across identifiers. An aggregated metric (numHEHCC) counts the number of terms with high entropy and high context coverage. The study aims to determine whether numHEHCC captures characteristics different from size alone, and whether higher dispersion is related to higher fault risk. It presents the dispersion measures, analyzes their relevance relative to size, and relates them to faults using two Java projects as case studies.
This document discusses measuring and relating identifier dispersion to fault proneness. It introduces physical and conceptual dispersion measures, including entropy and context coverage. An aggregated metric called numHEHCC is presented that counts the number of terms with high entropy and different contexts within an entity. The study aims to examine if numHEHCC captures characteristics beyond size alone and to determine if term entropy and context coverage help explain faults in an entity.
The document presents an exploratory study on identifier renamings in software projects. It introduces a taxonomy to classify identifier renamings based on entity, semantics, string distance, and grammar. An empirical study of two projects, Eclipse and Tomcat, examines when renamings occur, who performs them, and which types of changes occur according to the taxonomy. The results show that renamings are concentrated in specific time frames and performed by a subset of developers, with most changes made to classes and interfaces and some renamings moving towards opposite semantics. Future work is needed to better understand why developers rename identifiers.
The document summarizes a study on identifier renaming in software projects. The study aimed to understand when and how identifier renamings occur by analyzing changes to identifiers in the Eclipse and Tomcat projects. Key findings include:
- Renamings are concentrated in specific time frames and performed by a subset of developers.
- Most renamings change the semantic meaning of identifiers, such as adding or removing meaning. Some renamings correct wrong semantics.
- Small string changes to identifiers are often due to typos or abbreviations, while larger changes usually change the semantic meaning.
The document summarizes a study on identifier renaming in software projects. The study aimed to understand when and why identifier renamings occur. It analyzed renamings in the Eclipse and Tomcat projects. Key findings include:
- Renamings are concentrated in specific time frames and performed by a subset of developers.
- Most renamings change class and interface names, and often add to or change the meaning of the terms involved.
- Small string changes to identifiers are often due to typos or abbreviations, while unrelated renamings have high string distance.
The document proposes a hybrid approach to estimating biophysical parameters from remote sensing data that combines a theoretical forward model with available reference samples. It aims to improve both accuracy and robustness of estimates. The approach formulates the estimation problem and characterizes the deviation between model outputs and observations using reference samples. An experimental analysis applies the approach to soil moisture estimation using microwave data, demonstrating improved performance over solely using the theoretical model.
A Two Stage Estimator of Instrumental Variable Quantile Regression for Panel ... - ijtsrd
This paper proposes a two-stage instrumental variable quantile regression (2S-IVQR) estimator for time-invariant effects in panel data models. In the first stage, dummy variables represent the time-invariant effects, and quantile regression estimates the effects of individual covariates; this stage reduces both the amount of computation and the number of parameters to estimate. In the second stage, the instrumental variables approach and the 2SLS method are applied. The paper also proves the large-sample properties of the 2S-IVQR estimator. A Monte Carlo simulation study shows that the bias and RMSE of the estimator decrease as the sample size increases, and that both are lower than those of two competing estimators. Tao Li, "A Two-Stage Estimator of Instrumental Variable Quantile Regression for Panel Data with Time-Invariant Effects", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5, Issue-6, October 2021. URL: https://www.ijtsrd.com/papers/ijtsrd47716.pdf Paper URL: https://www.ijtsrd.com/other-scientific-research-area/other/47716/a-twostage-estimator-of-instrumental-variable-quantile-regression-for-panel-data-with-timeinvariant-effects/tao-li
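The instrumental-variables idea underlying the second stage can be illustrated with a minimal simulation. This is not the paper's 2S-IVQR estimator (which combines dummy variables, quantile regression, and 2SLS for panel data); it is only a sketch of the basic scalar IV estimator, with a data-generating process of my own choosing:

```python
import random

# Hypothetical simulation (not from the paper): a scalar model
# y = beta*x + u where x is endogenous (correlated with u) and
# z is a valid instrument (correlated with x, independent of u).
random.seed(42)
n = 20000
beta = 2.0  # true coefficient, chosen for this illustration

z = [random.gauss(0, 1) for _ in range(n)]  # instrument
u = [random.gauss(0, 1) for _ in range(n)]  # structural error
e = [random.gauss(0, 1) for _ in range(n)]  # first-stage noise
x = [z[i] + 0.5 * u[i] + e[i] for i in range(n)]  # endogenous regressor
y = [beta * x[i] + u[i] for i in range(n)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / len(a)

beta_ols = cov(x, y) / cov(x, x)  # inconsistent: ignores corr(x, u)
beta_iv = cov(z, y) / cov(z, x)   # consistent IV (equivalent to 2SLS here)

print(f"OLS: {beta_ols:.2f}, IV: {beta_iv:.2f}")
```

With a large sample, the IV estimate lands near the true coefficient while OLS stays biased upward, which is the motivation for the instrumented second stage.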
The document describes an empirical study on the impact of two antipatterns - Blob and Spaghetti Code - on program comprehension. It outlines three experiments conducted with 24 subjects that measured subjects' performance on comprehension tasks with code containing antipatterns versus code without. The results showed that subjects had statistically significantly higher effort, longer time, and lower accuracy on tasks involving code with antipatterns compared to code without, indicating that antipatterns negatively impact program comprehension.
The document summarizes research on daily living activity recognition using efficient combination of high and low level cues. The researchers propose an approach that fuses body pose estimation and low-level cues like optical flow to produce an enriched descriptor. A Fisher kernel representation is then used to model the temporal variation in video sequences for recognizing activities. The approach achieves state-of-the-art results on the ADL Rochester dataset.
A PROCEDURE FOR IDENTIFYING PRECURSORS TO PROBLEM BEHAVIOR.docx - bartholomeocoombs
A PROCEDURE FOR IDENTIFYING PRECURSORS TO PROBLEM BEHAVIOR
BRANDON HERSCOVITCH, EILEEN M. ROSCOE, MYRNA E. LIBBY, JASON C. BOURRET, AND WILLIAM H. AHEARN
NEW ENGLAND CENTER FOR CHILDREN
NORTHEASTERN UNIVERSITY

We describe a procedure for differentiating among potential precursor responses for use in a functional analysis. Conditional probability analysis of descriptive assessment data identified three potential precursors. Results from the indirect assessment corresponded with those obtained from the descriptive assessment. The top-ranked response identified as a precursor according to the indirect assessment had the strongest relation according to the probability analysis. When contingencies were arranged for the precursor in a functional analysis, the same function was identified as for target behavior, supporting the utility of indirect and descriptive methods to identify precursor behavior empirically.

DESCRIPTORS: descriptive assessment, functional analysis, precursors, problem behavior, response-class hierarchies
_______________________________________________________________________________

Functional analysis (Iwata, Dorsey, Slifer, Bauman, & Richman, 1982/1994) involves manipulating antecedents and consequences for the target behavior of interest. Because a functional analysis requires the repeated occurrence of a target response, it may not be appropriate for response topographies that pose risk of harm to others (e.g., severe aggression) or the client (e.g., self-injury). One modification that has addressed this concern involves a functional analysis of precursor behavior (i.e., arranging contingencies for responses that reliably precede the target behavior), based on previous research showing that response topographies that occur in close temporal proximity are often members of the same response class, and that providing differential reinforcement for earlier responses in the response-class hierarchy makes later, more severe responses occur less often (Harding et al., 2001; Lalli, Mace, Wohn, & Livezey, 1995; Richman, Wacker, Asmus, Casey, & Andelman, 1999).

Smith and Churchill (2002) conducted a functional analysis of precursor behavior and found similar outcomes from a functional analysis of the target behavior and a functional analysis of the hypothesized precursor behavior. A study by Najdowski, Wallace, Ellsworth, MacAleese, and Cleveland (2008) extended this work by demonstrating that an intervention based on a functional analysis of precursor behavior was effective in eliminating participants' precursor behavior. The implication of these findings is that outcomes from functional analyses of precursor responses may be used to infer the function of more severe topographies that occur later in the response-class hierarchy. A potential limitation associated with both of these studies is that indirect assessments alone were used to identify precursor responses. Such assessments have sometimes been found to have poor reliability.
Differential equations, essential in the mathematical modeling of phenomena across disciplines such as physics, biology, economics, and many others, are tools that describe how certain quantities vary in relation to others. A differential equation is a mathematical relation involving an unknown function and its derivatives. These equations are classified as ordinary differential equations (ODEs) or partial differential equations (PDEs), according to whether the unknown function depends on a single independent variable or on several. ODEs are subdivided into first-order, second-order, and higher-order equations, depending on the highest derivative appearing in the equation. PDEs, in turn, are fundamental for describing phenomena in which the variables depend on multiple spatial and temporal dimensions, as in the heat equation, the wave equation, and Laplace's equation. Various analytical and numerical methods are used to solve these equations. Among the analytical methods for first-order ODEs, separation of variables applies when the equation can be written in the form g(y)dy = f(x)dx, allowing direct integration of both sides. The integrating-factor method is another valuable technique for first-order linear equations, written as dy/dx + P(x)y = Q(x); it multiplies the equation by a suitable integrating factor that makes it directly integrable. Second- and higher-order equations also have specific methods, such as variation of parameters and undetermined coefficients, both useful for solving linear homogeneous and non-homogeneous equations. Series methods, such as power series and Fourier series, are powerful techniques for finding solutions as infinite series, especially useful when solutions cannot be expressed in terms of elementary functions. Transforms, such as the Laplace and Fourier transforms, are key tools for converting differential equations in the time domain into algebraic equations in the frequency domain, simplifying their solution and allowing problems with complex initial and boundary conditions to be treated. Partial differential equations, which describe phenomena such as heat diffusion, wave propagation, and fluid dynamics, are solved with techniques such as separation of variables, the Fourier transform, and the Laplace transform, as well as numerical methods such as finite differences, finite elements, and finite volumes. These numerical methods are especially useful for problems where exact solutions are not possible due to the complexity.
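As an illustration of separation of variables, consider the separable equation dy/dx = x*y with y(0) = 1 (an example of my own choosing, not from the text): separating gives dy/y = x dx, so y(x) = exp(x**2/2). A minimal sketch checks this closed-form solution against a standard fourth-order Runge-Kutta integration:

```python
import math

# Illustrative separable ODE (my own example, not from the text):
# dy/dx = x*y with y(0) = 1. Separating variables, dy/y = x dx,
# so ln(y) = x**2/2 and y(x) = exp(x**2/2).
def f(x, y):
    return x * y

def rk4(f, x0, y0, x_end, h):
    """Classic fourth-order Runge-Kutta integration from x0 to x_end."""
    x, y = x0, y0
    while x < x_end - 1e-12:
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

numeric = rk4(f, 0.0, 1.0, 1.0, 0.01)
exact = math.exp(0.5)  # closed form from separation of variables at x = 1
print(abs(numeric - exact))
```

The numerical and analytical solutions agree to many decimal places, which is the kind of cross-check the numerical methods mentioned above (finite differences and their relatives) make possible when no closed form exists.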
Automatic eye fixations identification based on analysis of variance and cova... - Giuseppe Fineschi
Eye movement is the simplest and most repetitive movement that enables humans to interact with the environment. Common daily activities, such as reading a book or watching television, involve this natural activity, which consists of rapidly shifting our gaze from one region to another. In clinical applications, identifying the main components of eye movement during visual exploration, such as fixations and saccades, is the objective of eye-movement analysis; however, in patients affected by motor control disorders, identifying fixations is not trivial. This work presents a new fixation identification algorithm based on the analysis of variance and covariance: the main idea is to use bivariate statistical analysis to compare variance over x and y to identify fixations. We describe the new algorithm and compare it with the common fixation algorithm based on dispersion. To demonstrate the performance of our approach, we tested the algorithm on a group of healthy subjects and patients affected by motor control disorders.
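The "common fixation algorithm based on dispersion" used as the comparison baseline is typically an I-DT-style detector. The following is a minimal sketch of that baseline only, not of the paper's variance-covariance algorithm; the thresholds and the synthetic gaze samples are illustrative choices of mine:

```python
# Minimal dispersion-threshold (I-DT style) fixation detector: a sketch
# of the common dispersion-based baseline, NOT the paper's
# variance-covariance algorithm. Thresholds and data are illustrative.

def dispersion(window):
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(points, max_dispersion, min_samples):
    """points: (x, y) gaze samples at a fixed sampling rate.
    Returns (centroid_x, centroid_y, start_idx, end_idx) per fixation."""
    fixations = []
    i, n = 0, len(points)
    while i + min_samples <= n:
        j = i + min_samples
        if dispersion(points[i:j]) <= max_dispersion:
            # grow the window while dispersion stays under the threshold
            while j < n and dispersion(points[i:j + 1]) <= max_dispersion:
                j += 1
            window = points[i:j]
            cx = sum(p[0] for p in window) / len(window)
            cy = sum(p[1] for p in window) / len(window)
            fixations.append((cx, cy, i, j))
            i = j
        else:
            i += 1  # saccade sample: slide the window forward
    return fixations

# Synthetic gaze: a fixation near (100, 100), a saccade, one near (200, 150).
points = ([(100 + k % 3, 100 + k % 2) for k in range(10)]
          + [(150, 125)]
          + [(200 + k % 3, 150 + k % 2) for k in range(10)])
fix = idt_fixations(points, max_dispersion=5.0, min_samples=5)
print(len(fix))
```

On this synthetic trace the detector finds the two clusters and skips the saccade sample between them; the tremor-like noise in motor-control-disorder patients is exactly what makes such a fixed dispersion threshold fragile, motivating statistical alternatives.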
Digital images of oral cavities were analyzed by 7 raters to test inter-rater reliability of measurements. Raters measured areas of the total oral cavity, tongue, teeth, and empty spaces. Low average standard deviations and variances between raters demonstrated high inter-rater reliability for the digital imaging analysis method. This objective analysis technique aims to improve predictions of difficult intubation by eliminating subjectivity compared to prior subjective tests.
The document describes an empirical study on the impact of two antipatterns - Blob and Spaghetti Code - on program comprehension. It presents three experiments where subjects performed comprehension tasks on code with and without the antipatterns. The experiments measured subjects' performance in terms of effort, time taken, and percentage of correct answers. The results were analyzed to test hypotheses about whether the antipatterns negatively or positively impacted comprehension. The goal was to provide quantitative evidence on the relationship between antipatterns and program comprehension.
Wilberth Herrera has over 5 years of research experience in acoustic wave propagation and its engineering applications. He has particular expertise in computational modeling, processing, and inversion of acoustic signals in petroleum exploration. His research focuses on simulating and interpreting acoustic well logging data. He has extensive experience with scientific programming and parallel computing.
Design of Field Experiments in Biodiversity Impact Assessment - Dr Stephen Ambrose
This document discusses the importance of using scientifically rigorous experimental designs in ecological field projects that assess impacts on biodiversity. It outlines different approaches to ecological field monitoring and emphasizes that mensurative and manipulative experiments with appropriate replication are needed. An example is provided to illustrate key components of developing hypotheses based on observations and models. The document stresses that null hypotheses must be formulated and experiments designed to eliminate incorrect models, with treatments, controls and replication used to reduce sources of variability. Rigorous experimental design is said to be important for impact assessment projects conducted by ecological consultants.
This study evaluated the reproducibility of projective mapping, a sensory characterization method using consumers, across six different studies with varying product types and sample differences. The studies compared individual consumer responses and overall consensus maps from two separate sessions. While individual reproducibility was generally low, the consensus maps showed high reproducibility (RV coefficients above 0.75), suggesting projective mapping provides relatively stable results at the aggregate level even without replicates. However, some differences in perceived sample similarities between sessions were found, indicating care is needed when relying on single-session results, especially for similar samples. Stability indices of the consensus maps correlated with reproducibility and could help decide if replication is necessary.
C3.04: Assessing the impact of observations on ocean forecasts and reanalyses... - Blue Planet Symposium
Under GODAE OceanView the operational ocean modelling community has developed a suite of global ocean forecast, reanalysis and analysis systems. Each system has a critical dependence on ocean observations, routinely assimilating observations of in-situ temperature and salinity, and satellite sea-level anomaly and sea surface temperature. Under GODAE OceanView (GOV), the Observing System Evaluation Task Team (https://www.godae-oceanview.org/science/task-teams/observing-system-evaluation-tt-oseval-tt/) regularly coordinates analyses from the GOV community to demonstrate the value and impact of ocean observations on different global and regional data-assimilating forecast and reanalysis systems. Highlights of the latest suite of demonstrations will be presented here. Results show that Argo data are critically important: the most critical for seasonal prediction, and as critical as satellite altimetry for eddy-resolving applications. Most systems show that TAO data are as important as Argo in the tropical Pacific, and that XBT data have an impact that is comparable to other data types in the vicinity of XBT transects. It is clear that no currently available data type is redundant. On the contrary, the components of the global ocean observing system complement each other remarkably well, providing sufficient information to monitor and forecast the global ocean.
Case-control Study on 2nd Hammertoe Deformity Correction Techniques - Wenjay Sung
This is my case-control study on second hammertoe deformity correction techniques: arthroplasty, arthrodesis, and interpositional implant arthroplasty.
Hammer Toe Correction Comparative Study - Wenjay Sung
This study compared outcomes of 3 surgical treatments for hammertoe deformities: arthroplasty, arthrodesis, and interpositional implant arthroplasty. 114 patients underwent one of the procedures and were followed for at least 12 months. All treatments significantly improved pain and sagittal plane correction, but only implant arthroplasty provided significant transverse plane correction and had the lowest revision rate at 10.4%. The study demonstrates implant arthroplasty may have advantages over the other procedures for hammertoe correction.
The document discusses research design and experimentation. It defines research design as the blueprint that guides the research process. An effective research design maximizes systematic variance between variables, minimizes error variance, and controls for confounding variables. This is known as the MAXMINCON principle. Experimental designs have higher internal validity while non-experimental correlational designs have higher external validity but lower internal validity since they cannot control for confounding variables.
Paolo Giacometti received his PhD in biomedical engineering from Dartmouth College in 2014. His research focused on developing multimodal brain imaging technologies combining EEG, NIRS, and fMRI. He has published journal articles and book chapters on these topics and holds a patent. Currently he is a postdoctoral researcher at Dartmouth transferring his brain imaging technology from prototypes to medical devices for studying patients with MS.
This document discusses the development of models for predicting toxicity after radiotherapy for prostate cancer. Early models focused only on dosimetric variables but were limited. Later models incorporated clinical variables and improved predictions. Current research aims to include genetic and biomolecular factors to account for variability in individual radiosensitivity. While some models exist for acute and late rectal toxicity, validation and inclusion of additional variables is still needed. Future multifactorial models integrating dosimetric, clinical, and genetic data may enable more individualized risk assessments and isotoxic treatment planning.
Description and Composition of Bio-Inspired Design Patterns: The Gradient Case - Fernandez-Marquez
3rd Workshop on Bio-Inspired and Self-* Algorithms for Distributed Systems. Slides of the presentation: Description and Composition of Bio-Inspired Design Patterns: The Gradient Case
In this presentation I show a set of important topics about empirical studies in software engineering that can be useful for increasing the quality of your thesis and monographs in general. Reading it should help you think about how to do good experimentation: stating objectives, choosing validation methods, posing questions and expected answers, and defining and collecting metrics. I also show how the researchers selected their data to avoid choosing case studies in a biased way, using the GQM methodology to organize the study in a simpler view.
This document provides an overview of LOINC (Logical Observation Identifiers Names and Codes), presented by Daniel Vreeman. LOINC is a universal standard for identifying health measurements and observations that enables data exchange between systems. It has over 60,000 codes covering laboratory and clinical observations. The LOINC community is open-source and has over 14,000 members from 145 countries contributing to its ongoing development and adoption worldwide.
Limiting Logical Violations in Ontology Alignment Through Negotiation - Ernesto Jimenez Ruiz
Slides presented in the KR conference 2016, Cape Town, South Africa.
Authors: Ernesto Jiménez-Ruiz, Terry R. Payne, Alessandro Solimando, Valentina A. M. Tamma
Prediction of Plantar Plate Injury using MRI - Wenjay Sung
Magnetic resonance imaging (MRI) is useful for diagnosing plantar plate tears but may not reliably rule them out. In a prospective study, 41 patients underwent MRI of the foot before surgery for suspected plantar plate pathology. MRI correctly identified 39 of 41 tears but missed 2, giving a sensitivity of 95% and a specificity of 100%. MRI appears good for confirming a tear but may miss some, with a negative predictive value of 67%. MRI can help clinicians diagnose plantar plate tears, but ultrasound may also be useful for evaluation.
Some Pitfalls with Python and Their Possible Solutions v1.0 - Yann-Gaël Guéhéneuc
Python is a very popular programming language that comes with many pitfalls. This presentation describes some of these pitfalls, especially when they could trick unsuspecting object-oriented developers. It proposes solutions to these pitfalls, in particular regarding inheritance, which is easily broken because of the implementation choice of Python for explicit delegation, its method resolution order, and its use of the C3 algorithm. It discusses some advantages of using Python, especially regarding meta-classes.
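The method resolution order and C3 algorithm mentioned above can be seen directly in a diamond hierarchy. A minimal sketch (the class names are illustrative, not from the presentation):

```python
# Diamond inheritance: which who() does D inherit? Python's C3
# linearization answers deterministically. Class names are illustrative.
class A:
    def who(self):
        return "A"

class B(A):
    def who(self):
        return "B"

class C(A):
    def who(self):
        return "C"

class D(B, C):
    pass

# C3 linearizes the diamond as D, B, C, A, object.
mro_names = [cls.__name__ for cls in D.__mro__]
print(mro_names)   # ['D', 'B', 'C', 'A', 'object']
print(D().who())   # 'B': resolved left-to-right along the MRO
```

Inspecting `__mro__` this way is often the quickest cure for the inheritance surprises the presentation warns about: the lookup order is fixed by C3, not by intuition about "closest" base classes.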
Advice for writing a NSERC Discovery grant application v0.5 - Yann-Gaël Guéhéneuc
NSERC Discovery grant applications are judged according to four criteria: (1) Excellence of the researcher, (2) Merit of the proposal, (3) Contribution to the training of HQP, and (4) Cost of research. Each criterion has six possible merit indicators: Exceptional, Outstanding, Very strong, Strong, Moderate, and Insufficient. This presentation describes the process from a candidate's point of view and a reviewer's point of view. It discusses funding decisions, including bins and ER vs. ECR. It gives some advice, including graduating PhD students, having a story, and limiting the number of main objectives.
The document summarizes research on daily living activity recognition using efficient combination of high and low level cues. The researchers propose an approach that fuses body pose estimation and low-level cues like optical flow to produce an enriched descriptor. A Fisher kernel representation is then used to model the temporal variation in video sequences for recognizing activities. The approach achieves state-of-the-art results on the ADL Rochester dataset.
A PROCEDURE FOR IDENTIFYING PRECURSORS TOPROBLEM BEHAVIOR.docxbartholomeocoombs
A PROCEDURE FOR IDENTIFYING PRECURSORS TO
PROBLEM BEHAVIOR
BRANDON HERSCOVITCH, EILEEN M. ROSCOE, MYRNA E. LIBBY,
JASON C. BOURRET, AND WILLIAM H. AHEARN
NEW ENGLAND CENTER FOR CHILDREN
NORTHEASTERN UNIVERSITY
We describe a procedure for differentiating among potential precursor responses for use in a
functional analysis. Conditional probability analysis of descriptive assessment data identified
three potential precursors. Results from the indirect assessment corresponded with those
obtained from the descriptive assessment. The top-ranked response identified as a precursor
according to the indirect assessment had the strongest relation according to the probability
analysis. When contingencies were arranged for the precursor in a functional analysis, the same
function was identified as for target behavior, supporting the utility of indirect and descriptive
methods to identify precursor behavior empirically.
DESCRIPTORS: descriptive assessment, functional analysis, precursors, problem behavior,
response-class hierarchies
_______________________________________________________________________________
Functional analysis (Iwata, Dorsey, Slifer, Bauman, & Richman, 1982/1994) involves manipulating antecedents and consequences for the target behavior of interest. Because a functional analysis requires the repeated occurrence of a target response, it may not be appropriate for response topographies that pose risk of harm to others (e.g., severe aggression) or the client (e.g., self-injury). One modification that has addressed this concern involves a functional analysis of precursor behavior (i.e., arranging contingencies for responses that reliably precede the target behavior) based on previous research showing that response topographies that occur in close temporal proximity are often members of the same response class, and by providing differential reinforcement for earlier responses in the response-class hierarchy, later more severe responses occur less often (Harding et al., 2001; Lalli, Mace, Wohn, & Livezey, 1995; Richman, Wacker, Asmus, Casey, & Andelman, 1999).
Smith and Churchill (2002) conducted a functional analysis of precursor behavior and found similar outcomes from a functional analysis of the target behavior and a functional analysis of the hypothesized precursor behavior. A study by Najdowski, Wallace, Ellsworth, MacAleese, and Cleveland (2008) extended this work by demonstrating that an intervention based on a functional analysis of precursor behavior was effective in eliminating participants' precursor behavior. The implication of these findings is that outcomes from functional analyses of precursor responses may be used to infer the function of more severe topographies that occur later in the response-class hierarchy. A potential limitation associated with both of these studies is that indirect assessments alone were used to identify precursor responses. Such assessments have sometimes been found to have poor reliability.
Differential equations, essential in the mathematical modeling of phenomena across disciplines such as physics, biology, and economics, are tools for describing how certain quantities vary in relation to others. A differential equation is a mathematical relation involving an unknown function and its derivatives. These equations are classified as ordinary differential equations (ODEs) or partial differential equations (PDEs), according to whether the unknown function depends on a single independent variable or on several. ODEs are subdivided into first-order, second-order, and higher-order equations, depending on the highest derivative involved. PDEs, in turn, are fundamental for describing phenomena in which variables depend on multiple spatial and temporal dimensions, as in the heat equation, the wave equation, and the Laplace equation. Various analytical and numerical methods are used to solve these equations. Among the analytical methods for first-order ODEs, separation of variables applies when the equation can be written in the form g(y)dy = f(x)dx, allowing direct integration of both sides. The integrating-factor method is another valuable technique for first-order linear equations, written as dy/dx + P(x)y = Q(x); it multiplies the equation by a suitable integrating factor that simplifies its integration.
Second- and higher-order equations also have specific methods, such as variation of parameters and undetermined coefficients, both useful for solving homogeneous and non-homogeneous linear equations. Series methods, such as power series and Fourier series, are powerful techniques for finding solutions as infinite series, especially useful when solutions cannot be expressed in terms of elementary functions. Transforms, such as the Laplace transform and the Fourier transform, are key tools for converting differential equations in the time domain into algebraic equations in the frequency domain, facilitating their solution and enabling the treatment of problems with complex initial and boundary conditions. Partial differential equations, which describe phenomena such as heat diffusion, wave propagation, and fluid dynamics, are solved with techniques such as separation of variables, the Fourier transform, and the Laplace transform, as well as numerical methods such as finite differences, finite elements, and finite volumes. These numerical methods are especially useful for problems where exact solutions are not possible due to the complexity
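The integrating-factor method described above can be checked numerically; here is a minimal sketch where the choices P(x) = 1, Q(x) = x, and y(0) = 0 are illustrative assumptions, not from the text. The integrating factor e^x gives the exact solution y(x) = x - 1 + e^(-x), which we compare against a simple Runge-Kutta integration.

```python
# Numerical check of the first-order linear ODE dy/dx + P(x)y = Q(x)
# with the assumed choices P(x) = 1, Q(x) = x, and y(0) = 0.
import math

def rk4(f, x0, y0, x_end, n=1000):
    """Classic fourth-order Runge-Kutta for y' = f(x, y)."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# Rewrite as y' = Q(x) - P(x)*y, i.e., y' = x - y
f = lambda x, y: x - y

numeric = rk4(f, 0.0, 0.0, 2.0)
exact = 2.0 - 1.0 + math.exp(-2.0)  # from the integrating factor e^x
print(abs(numeric - exact))          # agreement to roughly machine precision
```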
Automatic eye fixations identification based on analysis of variance and cova... (Giuseppe Fineschi)
Eye movement is the simplest and most repetitive movement that enables humans to interact with the environment. Common daily activities, such as reading a book or watching television, involve this natural activity, which consists of rapidly shifting our gaze from one region to another. In clinical applications, the analysis of eye movements aims to identify the main components of eye movement during visual exploration, such as fixations and saccades; however, in patients affected by motor control disorders, identifying fixations is not trivial. This work presents a new fixation identification algorithm based on the analysis of variance and covariance: the main idea is to use bivariate statistical analysis to compare the variance over x and y to identify fixations. We describe the new algorithm and compare it with the common dispersion-based fixation algorithm. To demonstrate the performance of our approach, we tested the algorithm on a group of healthy subjects and patients affected by motor control disorders.
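For orientation, here is a hedged sketch of the common dispersion-based baseline (I-DT style) that the abstract compares against, not the paper's variance/covariance method; the thresholds and the (x, y) sample format are assumptions.

```python
# Dispersion-based fixation identification sketch (I-DT style baseline).
def dispersion(window):
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def fixations_idt(points, max_dispersion=1.0, min_length=5):
    """Return (start, end) index pairs of detected fixations."""
    fixes, i, n = [], 0, len(points)
    while i + min_length <= n:
        j = i + min_length
        if dispersion(points[i:j]) <= max_dispersion:
            # grow the window while the gaze stays within the threshold
            while j < n and dispersion(points[i:j + 1]) <= max_dispersion:
                j += 1
            fixes.append((i, j))
            i = j
        else:
            i += 1
    return fixes

# five near-identical samples (a fixation) followed by a saccade
pts = [(0, 0), (0.1, 0.1), (0.05, 0), (0.1, 0.05), (0, 0.1), (5, 5), (9, 9)]
print(fixations_idt(pts))  # [(0, 5)]
```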
Digital images of oral cavities were analyzed by 7 raters to test inter-rater reliability of measurements. Raters measured areas of the total oral cavity, tongue, teeth, and empty spaces. Low average standard deviations and variances between raters demonstrated high inter-rater reliability for the digital imaging analysis method. This objective analysis technique aims to improve predictions of difficult intubation by eliminating subjectivity compared to prior subjective tests.
The document describes an empirical study on the impact of two antipatterns - Blob and Spaghetti Code - on program comprehension. It presents three experiments where subjects performed comprehension tasks on code with and without the antipatterns. The experiments measured subjects' performance in terms of effort, time taken, and percentage of correct answers. The results were analyzed to test hypotheses about whether the antipatterns negatively or positively impacted comprehension. The goal was to provide quantitative evidence on the relationship between antipatterns and program comprehension.
Wilberth Herrera has over 5 years of research experience in acoustic wave propagation and its engineering applications. He has particular expertise in computational modeling, processing, and inversion of acoustic signals in petroleum exploration. His research focuses on simulating and interpreting acoustic well logging data. He has extensive experience with scientific programming and parallel computing.
Design of Field Experiments in Biodiversity Impact Assessment Dr Stephen Ambrose
This document discusses the importance of using scientifically rigorous experimental designs in ecological field projects that assess impacts on biodiversity. It outlines different approaches to ecological field monitoring and emphasizes that mensurative and manipulative experiments with appropriate replication are needed. An example is provided to illustrate key components of developing hypotheses based on observations and models. The document stresses that null hypotheses must be formulated and experiments designed to eliminate incorrect models, with treatments, controls and replication used to reduce sources of variability. Rigorous experimental design is said to be important for impact assessment projects conducted by ecological consultants.
This study evaluated the reproducibility of projective mapping, a sensory characterization method using consumers, across six different studies with varying product types and sample differences. The studies compared individual consumer responses and overall consensus maps from two separate sessions. While individual reproducibility was generally low, the consensus maps showed high reproducibility (RV coefficients above 0.75), suggesting projective mapping provides relatively stable results at the aggregate level even without replicates. However, some differences in perceived sample similarities between sessions were found, indicating care is needed when relying on single-session results, especially for similar samples. Stability indices of the consensus maps correlated with reproducibility and could help decide if replication is necessary.
C3.04: Assessing the impact of observations on ocean forecasts and reanalyses... (Blue Planet Symposium)
Under GODAE OceanView the operational ocean modelling community has developed a suite of global ocean forecast, reanalysis and analysis systems. Each system has a critical dependence on ocean observations – routinely assimilating observations of in-situ temperature and salinity, and satellite sea-level anomaly and sea surface temperature. Under GODAE OceanView (GOV), the Observing System Evaluation Task Team (https://www.godae-oceanview.org/science/task-teams/observing-system-evaluation-tt-oseval-tt/) regularly coordinates analyses from the GOV community to demonstrate the value and impact of ocean observations on different global and regional data-assimilating forecast and reanalysis systems. Highlights of the latest suite of demonstrations will be presented here. Results show that Argo data are critically important – the most critical for seasonal prediction, and as critical as satellite altimetry for eddy-resolving applications. Most systems show that TAO data are as important as Argo in the tropical Pacific, and that XBT data have an impact that is comparable to other data types in the vicinity of XBT transects. It is clear that no currently available data type is redundant. On the contrary, the components of the global ocean observing system complement each other remarkably well, providing sufficient information to monitor and forecast the global ocean.
Case-control Study on 2nd Hammertoe Deformity Correction Techniques (Wenjay Sung)
This is my case-control study on second hammertoe deformity correction techniques: arthroplasty, arthrodesis, and interpositional implant arthroplasty.
Hammer Toe Correction Comparative Study (Wenjay Sung)
This study compared outcomes of 3 surgical treatments for hammertoe deformities: arthroplasty, arthrodesis, and interpositional implant arthroplasty. 114 patients underwent one of the procedures and were followed for at least 12 months. All treatments significantly improved pain and sagittal plane correction, but only implant arthroplasty provided significant transverse plane correction and had the lowest revision rate at 10.4%. The study demonstrates implant arthroplasty may have advantages over the other procedures for hammertoe correction.
The document discusses research design and experimentation. It defines research design as the blueprint that guides the research process. An effective research design maximizes systematic variance between variables, minimizes error variance, and controls for confounding variables. This is known as the MAXMINCON principle. Experimental designs have higher internal validity while non-experimental correlational designs have higher external validity but lower internal validity since they cannot control for confounding variables.
Paolo Giacometti received his PhD in biomedical engineering from Dartmouth College in 2014. His research focused on developing multimodal brain imaging technologies combining EEG, NIRS, and fMRI. He has published journal articles and book chapters on these topics and holds a patent. Currently he is a postdoctoral researcher at Dartmouth transferring his brain imaging technology from prototypes to medical devices for studying patients with MS.
This document discusses the development of models for predicting toxicity after radiotherapy for prostate cancer. Early models focused only on dosimetric variables but were limited. Later models incorporated clinical variables and improved predictions. Current research aims to include genetic and biomolecular factors to account for variability in individual radiosensitivity. While some models exist for acute and late rectal toxicity, validation and inclusion of additional variables is still needed. Future multifactorial models integrating dosimetric, clinical, and genetic data may enable more individualized risk assessments and isotoxic treatment planning.
Description and Composition of Bio-Inspired Design Patterns: The Gradient Case (Fernandez-Marquez)
3rd Workshop on Bio-Inspired and Self-* Algorithms for Distributed Systems. Slides of the presentation: Description and Composition of Bio-Inspired Design Patterns: The Gradient Case
This presentation covers a set of important topics in empirical software engineering studies that can help increase the quality of theses and monographs in general. It discusses how to design a good experiment by defining its objectives, validation methods, questions, expected answers, and metrics, and how to measure them. It also shows how researchers can select data so as to avoid biased case studies, using the GQM methodology to organize the study in a simpler view.
This document provides an overview of LOINC (Logical Observation Identifiers Names and Codes), presented by Daniel Vreeman. LOINC is a universal standard for identifying health measurements and observations that allows for data exchange between systems. It has over 60,000 codes covering laboratory and clinical observations. The LOINC community is open-source and has over 14,000 members from 145 countries contributing to its ongoing development and adoption worldwide.
Limiting Logical Violations in Ontology Alignment Through Negotiation (Ernesto Jimenez Ruiz)
Slides presented in the KR conference 2016, Cape Town, South Africa.
Authors: Ernesto Jiménez-Ruiz, Terry R. Payne, Alessandro Solimando, Valentina A. M. Tamma
Prediction of Plantar Plate Injury using MRI (Wenjay Sung)
Magnetic resonance imaging (MRI) is useful for diagnosing plantar plate tears but may not reliably rule them out. In a prospective study, 41 patients underwent MRI of the foot before surgery for suspected plantar plate pathology. MRI correctly identified 39 of 41 tears but missed 2, giving it a sensitivity of 95% and a specificity of 100%. MRI appears good for confirming a tear but may miss some, with a negative predictive value of 67%. MRI can help clinicians diagnose plantar plate tears, but ultrasound may also be useful for evaluation.
Some Pitfalls with Python and Their Possible Solutions v1.0 (Yann-Gaël Guéhéneuc)
Python is a very popular programming language that comes with many pitfalls. This presentation describes some of these pitfalls, especially when they could trick unsuspecting object-oriented developers. It proposes solutions to these pitfalls, in particular regarding inheritance, which is easily broken because of the implementation choice of Python for explicit delegation, its method resolution order, and its use of the C3 algorithm. It discusses some advantages of using Python, especially regarding meta-classes.
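As a minimal illustration of the method resolution order and C3 linearization mentioned above (my own sketch, not taken from the slides; the class names are invented):

```python
# Python's C3 method resolution order in a diamond-shaped hierarchy.
class A:
    def who(self):
        return "A"

class B(A):
    def who(self):
        return "B"

class C(A):
    pass

class D(B, C):
    pass

# C3 linearizes the diamond as D -> B -> C -> A -> object
print([cls.__name__ for cls in D.__mro__])  # ['D', 'B', 'C', 'A', 'object']
print(D().who())                            # 'B': B is found before C and A
```

An unsuspecting developer coming from C++ (where the same diamond is ambiguous without virtual inheritance) may be surprised that Python resolves it deterministically via C3.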
Advice for writing a NSERC Discovery grant application v0.5 (Yann-Gaël Guéhéneuc)
NSERC Discovery grant applications are judged according to four criteria: (1) Excellence of the researcher, (2) Merit of the proposal, (3) Contribution to the training of HQP, and (4) Cost of research. Each criterion has six possible merit indicators: Exceptional, Outstanding, Very strong, Strong, Moderate, and Insufficient. This presentation describes the process from a candidate's point of view and a reviewer's point of view. It discusses funding decisions, including bins and ER vs. ECR. It gives some advice, including graduating PhD students, having a story, and limiting the number of main objectives.
Ptidej Architecture, Design, and Implementation in Action v2.1 (Yann-Gaël Guéhéneuc)
A set of process, architecture, design, and implementation patterns from a real, large program, the Ptidej Tool Suite. This set shows concrete problems and their solutions in Java. It includes: Be A Profiler, Tests as Documentation, Multi-layered Architecture, Proxy Console, Proxy Disk, Hidden Language, Internal Observer, Run-time Deprecation, String Parsimony, Object Identity, Object Address, Final Construction, StringBuffer as Positioning Element.
Examples of (bad) consequences of a lack of software quality and some solutions. This presentation shows some examples of (bad) consequences of a lack of software quality, in particular how poor software quality led directly to the deaths of 89 people. It then provides some background on software quality, especially the concept of Quality Without a Name. It then discusses many principles, their usefulness, and their positive consequences on software quality. Some of these principles are well known in object-oriented programming, while many others are taken from the book 97 Things Every Programmer Should Know. They include: abstraction, encapsulation, inheritance, types, polymorphism, SOLID, GRASP, YAGNI, KISS, DRY, Do Not Reinvent the Wheel, Law of Demeter, Beware of Assumptions, Deletable Code, coding with reason, and functional programming. They pertain to dependencies, domains, and tools.
(In details: Beautify is Simplicity, The Boy Scout Rule, You Gotta Care About the Code, The Longevity of Interim Solutions, Beware the Share, Encapsulate Behaviour not Just State, Single Responsibility Principle, WET Dilutes Performance Bottlenecks, Convenience Is Not an -ility, Code in the Language of the Domain, Comment Only What the Code Cannot Say, Distinguish Business Exception from Technical, Prefer Domain-specific Types to Primitive Types, Automate Your Coding Standards, Code Layout Matters, Before You Refactor, Improve Code by Removing It, Put the Mouse Down and Step Away from the Keyboard)
Some Pitfalls with Python and Their Possible Solutions v0.9 (Yann-Gaël Guéhéneuc)
Python is a very popular programming language that comes with many pitfalls. This presentation describes some of these pitfalls, especially when they could trick unsuspecting object-oriented developers. It proposes solutions to these pitfalls, in particular regarding inheritance, which is easily broken because of the implementation choice of Python for explicit delegation, its method resolution order, and its use of the C3 algorithm. It discusses some advantages of using Python, especially regarding meta-classes.
An Explanation of the Unicode, the Text Encoding Standard, Its Usages and Imp... (Yann-Gaël Guéhéneuc)
Unicode is currently the world standard for encoding text. It supports all of the world's major writing systems. With its version 15.1 of 2023/09/12, it defines 149,813 characters and 161 scripts. This presentation starts with the seemingly simple example of the polar bear emoji. It then defines the key terms of any such standard. It then asks how a software system can render orthographic characters into (combined) glyphs. It introduces the concept of abstract characters and describes a brief history of encoding standards, from ASCII to Unicode. It shows how, by adding one level of indirection, the Unicode standard answers this question. It then presents code examples to display text written in Unicode: HarfBuzz (for shaping) and FreeType (for rendering).
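The polar bear emoji mentioned above is a good example of the indirection involved: it is not a single code point but a ZWJ sequence. A small sketch of how such a sequence looks at the code-point and byte level (my illustration, not from the slides):

```python
# Polar bear emoji = BEAR FACE + ZERO WIDTH JOINER + SNOWFLAKE + VS-16,
# a four-code-point ZWJ sequence that fonts may render as one glyph.
polar_bear = "\U0001F43B\u200D\u2744\uFE0F"

print(polar_bear)                        # one glyph on capable fonts
print(len(polar_bear))                   # 4 code points
print([hex(ord(c)) for c in polar_bear]) # ['0x1f43b', '0x200d', '0x2744', '0xfe0f']
print(len(polar_bear.encode("utf-8")))   # 13 bytes in UTF-8 (4 + 3 + 3 + 3)
```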
An Explanation of the Halting Problem and Its Consequences (Yann-Gaël Guéhéneuc)
The halting problem is an important, famous, and consequential problem in computer science. It is about writing a program that decides whether another program will stop. There is no general solution to this problem, which shows that the problem is undecidable, with important consequences: for example, it is not possible to write tests that would exhaustively test an arbitrary program. This presentation was written in collaboration with <a href="https://www.iro.umontreal.ca/~hahn/">Gena Hahn</a>.
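The diagonal argument behind this undecidability can be sketched as runnable code: given any candidate decider, one can build a program that does the opposite of the prediction. This is my own sketch (function names invented), not material from the presentation.

```python
# For ANY candidate halting decider halts(p), build a program that defeats it.
def make_paradox(halts):
    def paradox():
        if halts(paradox):
            while True:   # decider said "halts", so loop forever
                pass
        # decider said "loops forever", so halt immediately
    return paradox

def always_no(prog):
    return False          # a candidate decider that predicts "never halts"

p = make_paradox(always_no)
p()  # returns immediately, refuting the prediction "never halts"
print("paradox halted, contradicting the candidate decider")
```

The same construction defeats a decider that answers True (the paradox then loops forever), so no total, always-correct decider can exist.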
A presentation summarising FPGAs, their history, their benefits, and showing how to program them. It provides some historical background on the development of computers, from the Difference Engine to the Intel 4004 to the AMD Ryzen Threadripper PRO 3995WX. It shows how the number of transistors increased dramatically but also how this increase led to more complexity and more bugs. It then introduces Field-programmable gate arrays (FPGA) as an alternative. It then presents how to program such FPGA using data-flow graphs. It discusses some tools (Yosys, NextPnR, and IceStorm) and illustrates them with a typical "Hello World" (i.e., blinking an LED) using Cygwin on Windows 10.
A set of brief presentations of some of the women and men who made the history of computer science and software engineering.
- 1936: Alan Turing
- 1948: Claude Elwood Shannon
- 1950: Grace Murray Hopper
- 1960: John McCarthy
- 1966: Frances E. Allen
- 1967: Ole-Johan Dahl
- 1967: Kristen Nygaard
- 1969: Charles A. R. Hoare
- 1970: Edgar F. Codd
- 1972: Dave Parnas
- 1974: Manny Lehman
- 1975: Frederick Brooks
- 1986: Edward Yourdon
- 1987: Barbara Liskov
- 1994: Erich Gamma
- 1997: Grady Booch
- 2001: Butler Lampson
A tutorial on the history, use, and caveats of Java generics. Using the simple example of an interface for sort algorithms, the tutorial presents the history of generics and describes the problems being solved by generics. It also provides definitions, and examples in Java and C++, and discusses Duck Typing. It then describes two scenarios: (1) Scenario 1: you want to enforce type safety for containers and remove the need for typecasts when using these containers and (2) Scenario 2: you want to build generic algorithms that work on several types of (possibly unrelated) things. It also summarises caveats with generics, in particular type erasure.
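Scenario 2 from the tutorial (generic algorithms over possibly unrelated types) is not specific to Java; as a hedged side illustration, here is the same idea sketched with Python's typing module (names and example values are mine, not from the tutorial):

```python
# A generic algorithm parameterized by a type variable, usable uniformly
# for sequences of any element type.
from typing import Sequence, TypeVar

T = TypeVar("T")

def first_or_default(items: Sequence[T], default: T) -> T:
    """Return the first element, or the default if the sequence is empty."""
    return items[0] if items else default

print(first_or_default([3, 1, 2], 0))    # 3
print(first_or_default([], "empty"))     # 'empty'
```

Unlike Java, Python's generics are hints checked by external tools rather than the runtime, which is a different trade-off from Java's type erasure.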
A tutorial on reflection, with a particular emphasis on Java, with a comparison with C++, Python, and Smalltalk. It describes different scenarios in which reflection is useful, a brief history of reflection and MOPs, a comparison with C++, Python, and Smalltalk, and some particulars about Java. The source code of the examples in Java (Eclipse project), Smalltalk (Squeak image v3.10.6), Python (Eclipse project), and C++ (Eclipse projects and Visual Studio solution) are available. (C++ Eclipse projects require Mirror.) Big thanks to Matúš Chochlík and Marcus Denker for their kind and precious help with C++ and Smalltalk.
The tutorial focuses on four common problems:
- Avoid using instanceof when code must bypass the compiler and virtual machine’s choice of the method to call.
- Create external, user-defined pieces of code loaded, used, and unloaded at run-time.
- Translate data structures or object states into a format that can be stored (file, network...).
- Monitor the execution of a program to understand its behaviour, and measure its space and time complexity.
It shows working examples of Java, Smalltalk, Python, and C++ code solving the four common problems through four scenarios:
- Scenario 1: invoke an arbitrary method on an object (see the problems with instanceof and plugins).
- Scenario 2: access the complete (including private) state of an object (see the problem with serialisation).
- Scenario 3: count the number of instances of a class created at runtime (see the problem with debugging/profiling).
- Scenario 4: patch the method of a class to change its behaviour (see the problem with patching).
It also discusses the different kinds of interconnections among objects that are available in common programming languages (linking, forking, subclassing, inter-process communication, and dynamic loading/invoking), a bit of theory about reflection, and specifically the class-loading mechanism of Java.
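Since reflection is built into Python, the first three scenarios above fit in a few lines; this is my own minimal sketch (class and method names invented), not code from the tutorial:

```python
# Scenarios 1-3 sketched with Python's built-in reflection.
class Account:
    created = 0  # Scenario 3: count instances created at runtime

    def __init__(self, owner):
        self.__balance = 0           # name-mangled "private" state
        self.owner = owner
        Account.created += 1

    def deposit(self, amount):
        self.__balance += amount

acc = Account("alice")

# Scenario 1: invoke an arbitrary method by name, no instanceof needed
getattr(acc, "deposit")(100)

# Scenario 2: access the complete state of the object, including "private" fields
print(vars(acc))        # {'_Account__balance': 100, 'owner': 'alice'}
print(Account.created)  # 1
```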
REST APIs are nowadays the de-facto standard for Web applications. However, as more systems and services adopt the REST architectural style, many problems arise regularly. To avoid these repetitive problems, developers should follow good practices and avoid bad practices. Thus, research on good and bad practices and how to design a simple but effective REST API are essential. Yet, to the best of our knowledge, there are only a few concrete solutions to recurring REST API practices, like “API Versioning”. There are works on defining or detecting some practices, but not on solutions to the practices. We present the most up-to-date list of REST API practices and formalize them in the form of REST API (anti)patterns. We validate our design (anti)patterns with a survey and interviews of 55 developers.
Analyzing and Visualizing Projects and their Relations in Software Ecosystems presents an approach to help developers understand and navigate between projects in related software ecosystems. The approach generates word clouds from project documentation to summarize projects. It then maps relationships between word clouds to identify related projects. This allows developers to better understand the scope and connections between projects within software ecosystems.
This document presents an approach for automatically identifying antipatterns in microservice-based systems. It defines a meta-model with 13 components to capture necessary information about a system and its microservices. It also identifies 15 common microservice antipatterns. Detection rules are defined for each antipattern based on analyzing the system's source code, dependencies, configuration and other artifacts. The goal is to develop a tool based on this approach to help developers minimize antipatterns in microservice systems and improve their maintenance and evolution.
The document presents a preliminary study comparing several open-source IoT development frameworks: Eclipse Vorto, ThingML, Node-red, and OpenHab. The researchers designed the study to evaluate the frameworks' ability to support basic IoT application requirements. They implemented examples from three common IoT application categories using each framework and analyzed the results. Overall, Node-red required the least effort while the other frameworks had more limitations. Future work could study how the frameworks complement each other and implement more complex examples.
This document describes a dataset of software engineering problems in video game development extracted from over 200 postmortems published between 1997 and 2019. The dataset contains 1,035 problems across 20 problem types and is intended to summarize developers' experiences and difficulties during game development. It is available on GitHub at the listed URL.
This document summarizes research into software engineering patterns for designing machine learning systems. A survey found that ML developers have little knowledge of applicable architecture and design patterns. A literature review identified 19 scholarly papers and 19 gray documents discussing practices. The research aims to classify ML patterns according to the typical ML pipeline process and software development lifecycle. It identifies 12 architecture patterns, 13 design patterns, and 8 anti-patterns for ML systems. Future work includes documenting the patterns fully and analyzing their impact on ML system quality attributes.
This document describes a type-sensitive service identification approach called ServiceMiner to support the migration of legacy systems to service-oriented architectures. ServiceMiner uses static analysis and detection rules tailored to specific service types to identify services. It was validated on an open-source ERP system, Compiere, where it achieved higher identification accuracy than other approaches and reduced the effort required to identify architecturally significant services. The approach automates service identification, allows prioritizing types, and is extensible to new technologies.
Takashi Kobayashi and Hironori Washizaki, "SWEBOK Guide and Future of SE Education," First International Symposium on the Future of Software Engineering (FUSE), June 3-6, 2024, Okinawa, Japan
UI5con 2024 - Boost Your Development Experience with UI5 Tooling Extensions (Peter Muessig)
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can easily be extended to your needs. This session showcases various tooling extensions that can greatly boost your development experience: working fully offline, transpiling the code in your project to use even newer versions of EcmaScript (than ES2022, which is supported right now by the UI5 tooling), consuming any npm package of your choice in your project, using different kinds of proxies, and even stitching UI5 projects together during development to mimic your target environment.
Graspan: A Big Data System for Big Code Analysis (Aftab Hussain)
We built a disk-based parallel graph system, Graspan, that uses a novel edge-pair centric computation model to compute dynamic transitive closures on very large program graphs.
We implement context-sensitive pointer/alias and dataflow analyses on Graspan. An evaluation of these analyses on large codebases such as Linux shows that their Graspan implementations scale to millions of lines of code and are much simpler than their original implementations.
These analyses were used to augment the existing checkers; these augmented checkers found 132 new NULL pointer bugs and 1308 unnecessary NULL tests in Linux 4.4.0-rc5, PostgreSQL 8.3.9, and Apache httpd 2.2.18.
- Accepted in ASPLOS ‘17, Xi’an, China.
- Featured in the tutorial, Systemized Program Analyses: A Big Data Perspective on Static Analysis Scalability, ASPLOS ‘17.
- Invited for presentation at SoCal PLS ‘16.
- Invited for poster presentation at PLDI SRC ‘16.
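Graspan's edge-pair centric computation of dynamic transitive closures can be illustrated in miniature; this toy worklist sketch (not Graspan itself, which runs on disk over grammar-annotated graphs) shows the core idea of repeatedly combining edge pairs until no new edges appear:

```python
# Toy dynamic transitive closure via a worklist of edge pairs.
def transitive_closure(edges):
    """edges: set of (src, dst) pairs; returns the transitive closure."""
    closure = set(edges)
    worklist = list(edges)
    succ = {}
    for a, b in edges:
        succ.setdefault(a, set()).add(b)
    while worklist:
        a, b = worklist.pop()
        # combine edge (a, b) with every edge (b, c) to derive (a, c)
        for c in list(succ.get(b, ())):
            if (a, c) not in closure:
                closure.add((a, c))
                succ.setdefault(a, set()).add(c)
                worklist.append((a, c))
    return closure

print(transitive_closure({(1, 2), (2, 3), (3, 4)}))
# adds the derived edges (1, 3), (2, 4), and (1, 4)
```

In Graspan the "combine" step is driven by grammar productions (e.g., for pointer/alias rules) rather than plain reachability, but the fixpoint-over-edge-pairs structure is the same.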
A Study of Variable-Role-based Feature Enrichment in Neural Models of Code (Aftab Hussain)
Understanding variable roles in code has been found to be helpful by students
in learning programming -- could variable roles help deep neural models in
performing coding tasks? We do an exploratory study.
- These are slides of the talk given at InteNSE'23: The 1st International Workshop on Interpretability and Robustness in Neural Software Engineering, co-located with the 45th International Conference on Software Engineering, ICSE 2023, Melbourne Australia
AI Fusion Buddy Review: Brand New, Groundbreaking Gemini-Powered AI AppGoogle
AI Fusion Buddy Review: Brand New, Groundbreaking Gemini-Powered AI App
👉👉 Click Here To Get More Info 👇👇
https://sumonreview.com/ai-fusion-buddy-review
AI Fusion Buddy Review: Key Features
✅Create Stunning AI App Suite Fully Powered By Google's Latest AI technology, Gemini
✅Use Gemini to Build high-converting Converting Sales Video Scripts, ad copies, Trending Articles, blogs, etc.100% unique!
✅Create Ultra-HD graphics with a single keyword or phrase that commands 10x eyeballs!
✅Fully automated AI articles bulk generation!
✅Auto-post or schedule stunning AI content across all your accounts at once—WordPress, Facebook, LinkedIn, Blogger, and more.
✅With one keyword or URL, generate complete websites, landing pages, and more…
✅Automatically create & sell AI content, graphics, websites, landing pages, & all that gets you paid non-stop 24*7.
✅Pre-built High-Converting 100+ website Templates and 2000+ graphic templates logos, banners, and thumbnail images in Trending Niches.
✅Say goodbye to wasting time logging into multiple Chat GPT & AI Apps once & for all!
✅Save over $5000 per year and kick out dependency on third parties completely!
✅Brand New App: Not available anywhere else!
✅ Beginner-friendly!
✅ZERO upfront cost or any extra expenses
✅Risk-Free: 30-Day Money-Back Guarantee!
✅Commercial License included!
See My Other Reviews Article:
(1) AI Genie Review: https://sumonreview.com/ai-genie-review
(2) SocioWave Review: https://sumonreview.com/sociowave-review
(3) AI Partner & Profit Review: https://sumonreview.com/ai-partner-profit-review
(4) AI Ebook Suite Review: https://sumonreview.com/ai-ebook-suite-review
#AIFusionBuddyReview,
#AIFusionBuddyFeatures,
#AIFusionBuddyPricing,
#AIFusionBuddyProsandCons,
#AIFusionBuddyTutorial,
#AIFusionBuddyUserExperience
#AIFusionBuddyforBeginners,
#AIFusionBuddyBenefits,
#AIFusionBuddyComparison,
#AIFusionBuddyInstallation,
#AIFusionBuddyRefundPolicy,
#AIFusionBuddyDemo,
#AIFusionBuddyMaintenanceFees,
#AIFusionBuddyNewbieFriendly,
#WhatIsAIFusionBuddy?,
#HowDoesAIFusionBuddyWorks
E-commerce Development Services- Hornet DynamicsHornet Dynamics
For any business hoping to succeed in the digital age, having a strong online presence is crucial. We offer Ecommerce Development Services that are customized according to your business requirements and client preferences, enabling you to create a dynamic, safe, and user-friendly online store.
Why Mobile App Regression Testing is Critical for Sustained Success_ A Detail...kalichargn70th171
A dynamic process unfolds in the intricate realm of software development, dedicated to crafting and sustaining products that effortlessly address user needs. Amidst vital stages like market analysis and requirement assessments, the heart of software development lies in the meticulous creation and upkeep of source code. Code alterations are inherent, challenging code quality, particularly under stringent deadlines.
SOCRadar's Aviation Industry Q1 Incident Report is out now!
The aviation industry has always been a prime target for cybercriminals due to its critical infrastructure and high stakes. In the first quarter of 2024, the sector faced an alarming surge in cybersecurity threats, revealing its vulnerabilities and the relentless sophistication of cyber attackers.
SOCRadar’s Aviation Industry, Quarterly Incident Report, provides an in-depth analysis of these threats, detected and examined through our extensive monitoring of hacker forums, Telegram channels, and dark web platforms.
Hand Rolled Applicative User ValidationCode KataPhilip Schwarz
Could you use a simple piece of Scala validation code (granted, a very simplistic one too!) that you can rewrite, now and again, to refresh your basic understanding of Applicative operators <*>, <*, *>?
The goal is not to write perfect code showcasing validation, but rather, to provide a small, rough-and ready exercise to reinforce your muscle-memory.
Despite its grandiose-sounding title, this deck consists of just three slides showing the Scala 3 code to be rewritten whenever the details of the operators begin to fade away.
The code is my rough and ready translation of a Haskell user-validation program found in a book called Finding Success (and Failure) in Haskell - Fall in love with applicative functors.
Enterprise Resource Planning System includes various modules that reduce any business's workload. Additionally, it organizes the workflows, which drives towards enhancing productivity. Here are a detailed explanation of the ERP modules. Going through the points will help you understand how the software is changing the work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
Microservice Teams - How the cloud changes the way we workSven Peters
A lot of technical challenges and complexity come with building a cloud-native and distributed architecture. The way we develop backend software has fundamentally changed in the last ten years. Managing a microservices architecture demands a lot of us to ensure observability and operational resiliency. But did you also change the way you run your development teams?
Sven will talk about Atlassian’s journey from a monolith to a multi-tenanted architecture and how it affected the way the engineering teams work. You will learn how we shifted to service ownership, moved to more autonomous teams (and its challenges), and established platform and enablement teams.
Utilocate offers a comprehensive solution for locate ticket management by automating and streamlining the entire process. By integrating with Geospatial Information Systems (GIS), it provides accurate mapping and visualization of utility locations, enhancing decision-making and reducing the risk of errors. The system's advanced data analytics tools help identify trends, predict potential issues, and optimize resource allocation, making the locate ticket management process smarter and more efficient. Additionally, automated ticket management ensures consistency and reduces human error, while real-time notifications keep all relevant personnel informed and ready to respond promptly.
The system's ability to streamline workflows and automate ticket routing significantly reduces the time taken to process each ticket, making the process faster and more efficient. Mobile access allows field technicians to update ticket information on the go, ensuring that the latest information is always available and accelerating the locate process. Overall, Utilocate not only enhances the efficiency and accuracy of locate ticket management but also improves safety by minimizing the risk of utility damage through precise and timely locates.
Flutter is a popular open source, cross-platform framework developed by Google. In this webinar we'll explore Flutter and its architecture, delve into the Flutter Embedder and Flutter’s Dart language, discover how to leverage Flutter for embedded device development, learn about Automotive Grade Linux (AGL) and its consortium and understand the rationale behind AGL's choice of Flutter for next-gen IVI systems. Don’t miss this opportunity to discover whether Flutter is right for your project.
1. Physical and Conceptual Identifier Dispersion:
Measures and Relation to Fault Proneness

Venera Arnaoudova, Laleh Eshkevari, Rocco Oliveto,
Yann-Gaël Guéhéneuc, Giuliano Antoniol

SOCCER Lab. – DGIGL, École Polytechnique de Montréal, Qc, Canada
SE@SA Lab – DMI, University of Salerno, Salerno, Italy
Ptidej Team – DGIGL, École Polytechnique de Montréal, Qc, Canada

September 15, 2010

SOCCER: SOftware Cost-effective Change and Evolution Research Lab
SE@SA: Software Engineering @ SAlerno
Ptidej: Pattern Trace Identification, Detection, and Enhancement in Java
2. Outline

Introduction
Our study
Dispersion measures
Our study - refined
Case study
RQ1 – Metric Relevance
RQ2 – Relation to Faults
Conclusions and future work
3. Introduction

Fault identification:
- size (e.g., [Gyimóthy et al., 2005])
- cohesion (e.g., [Liu et al., 2009])
- coupling (e.g., [Marcus et al., 2008])
- number of changes (e.g., [Zimmermann et al., 2007])

Importance of linguistic information:
- program comprehension (e.g., [Takang et al., 1996, Deissenboeck and Pizka, 2006, Haiduc and Marcus, 2008, Binkley et al., 2009])
- code quality (e.g., [Marcus et al., 2008, Poshyvanyk and Marcus, 2006, Butler et al., 2009])
4. Our study

Term dispersion
We are interested in studying the relation between term dispersion and the quality of the source code.

- term: the basic component of identifiers
- dispersion: the way terms are scattered among different entities (attributes and methods)
- quality: absence of faults

Example: What is the impact of using getRelativePath, returnAbsolutePath, and setPath as method names on the fault proneness of those methods?
5. Dispersion measures (1/3)

Physical dispersion – Entropy

[Figure: occurrences of the terms fee, foo, and bar across entities E1–E5.]

A circle indicates the occurrences of a term in an entity; the larger the circle, the higher the number of occurrences.
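The physical-dispersion measure can be sketched in Python as the Shannon entropy of a term's occurrence distribution across entities. This is a minimal illustration of the idea only (the paper's exact normalization may differ); the terms and counts below are toy values mirroring the figure:

```python
import math

def term_entropy(occurrences):
    """Shannon entropy of one term's occurrence counts across entities.

    occurrences: mapping entity -> number of times the term appears in it.
    A term concentrated in one entity has entropy 0; a term spread evenly
    over many entities has high entropy (high physical dispersion).
    """
    total = sum(occurrences.values())
    entropy = 0.0
    for count in occurrences.values():
        if count > 0:
            p = count / total
            entropy -= p * math.log2(p)
    return entropy

# Toy data in the spirit of the slide: terms fee, foo, bar over E1..E5.
counts = {
    "fee": {"E1": 4},                                       # concentrated
    "foo": {"E1": 1, "E2": 1, "E3": 1, "E4": 1, "E5": 1},   # evenly spread
    "bar": {"E2": 2, "E4": 2},
}
for term, occ in counts.items():
    print(term, round(term_entropy(occ), 3))  # fee → 0.0, foo → 2.322, bar → 1.0
```

A term used in a single entity scores 0, while a term spread evenly over five entities scores log2(5) ≈ 2.32, matching the intuition that larger scatter means higher entropy.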
6. Dispersion measures (2/3)

Conceptual dispersion – Context Coverage

[Figure: entities E1–E5 grouped into contexts C1–C4; stars mark the contexts in which the terms fee, foo, and bar appear.]

Entity contexts are identified taking into account the terms contained in the entities. A star indicates that the term appears in the particular context.
7. Dispersion measures (3/3)

Aggregated metric – numHEHCC

[Figure: entropy (H) vs. context coverage (CC) plane, split by the thresholds th_H and th_CC into four quadrants.]

- Low H, low CC: the term is used in few identifiers and in similar contexts.
- High H, low CC: the term is used in many identifiers but in similar contexts.
- Low H, high CC: the term is used in few identifiers but in different contexts.
- High H, high CC: the term is used in many identifiers and in different contexts.

The last quadrant is the one of interest: for each entity, numHEHCC counts the number of such terms.
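Putting the two measures together, the counting step behind numHEHCC can be sketched as follows. This is an illustrative simplification, not the paper's exact formulation: context coverage is approximated here as the fraction of distinct contexts a term's entities fall into, and the thresholds th_h and th_cc are hypothetical example values.

```python
import math

def shannon_entropy(counts):
    """Shannon entropy of a list of occurrence counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def num_hehcc(term_entity_counts, entity_context, th_h, th_cc):
    """Count, per entity, the terms with entropy above th_h and context
    coverage above th_cc.

    term_entity_counts: term -> {entity: occurrence count}
    entity_context:     entity -> context label (e.g., a cluster id)
    Context coverage is approximated as the fraction of all contexts a
    term's entities fall into -- an illustrative proxy only.
    """
    all_contexts = set(entity_context.values())
    high_terms = set()
    for term, occ in term_entity_counts.items():
        h = shannon_entropy(list(occ.values()))
        cc = len({entity_context[e] for e in occ}) / len(all_contexts)
        if h > th_h and cc > th_cc:
            high_terms.add(term)
    # numHEHCC of an entity = number of such terms it contains.
    return {e: sum(1 for t in high_terms if e in term_entity_counts[t])
            for e in entity_context}

# Toy data: "path" is spread over many entities and contexts, "fee" is not.
counts = {
    "path": {"E1": 2, "E2": 1, "E3": 1, "E4": 1},
    "fee":  {"E1": 3},
}
contexts = {"E1": "C1", "E2": "C1", "E3": "C2", "E4": "C3"}
print(num_hehcc(counts, contexts, th_h=1.0, th_cc=0.5))
```

Here only "path" lands in the high-entropy, high-coverage quadrant, so every entity containing it gets numHEHCC incremented by one.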
18. Our study - refined (1/2)

Research question 1
RQ1 – Metric Relevance: Does numHEHCC capture characteristics different from size?
Our belief: Yes, it does, although we expect some overlap.

To this end, we verify the following:
1. To what extent do numHEHCC and size vary together?
2. Can size explain numHEHCC?
3. Does numHEHCC bring additional information to size for fault explanation?
19. Our study - refined (2/2)

Research question 2
RQ2 – Relation to Faults: Do term entropy and context coverage help to explain the presence of faults in an entity?
Our belief: Yes, they do!

How?
1. Estimate the risk of being faulty when entities contain terms with high entropy and high context coverage.
20. Objects

- ArgoUML v0.16 – a UML modeling CASE tool.
- Rhino v1.4R3 – a JavaScript/ECMAScript interpreter and compiler.

Program  | LOC    | # Entities | # Terms
ArgoUML  | 97,946 | 12,423     | 2,517
Rhino    | 18,163 | 1,624      | 949

We consider as entities both methods and attributes.
21. Case study – RQ1 Metric Relevance (1/3)

To what extent do numHEHCC and size vary together?
- ArgoUML: 40%
- Rhino: 43%

[Figure: scatter plot of numHEHCC against LOC, showing the correlation between the two metrics.]
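The percentages above quantify how much the two metrics vary together. As a sketch of what such a correlation computes (the slide does not say which correlation statistic was used, and the data pairs below are made up for illustration):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical (LOC, numHEHCC) pairs for a handful of entities.
loc   = [10, 25, 40, 80, 120, 200]
hehcc = [0,  1,  1,  3,  2,   5]
print(round(pearson(loc, hehcc), 2))
```

A moderate value, as reported on the slide, means size and numHEHCC overlap only partially, which motivates the next two checks.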
22. Case study – RQ1 Metric Relevance (2/3)

Can size explain numHEHCC?
- ArgoUML: 17%
- Rhino: 19%

[Figure: composition of numHEHCC.]
23. Case study – RQ1 Metric Relevance (3/3)

Does numHEHCC bring additional information to size for fault explanation?

M_ArgoUML
Variable     | Coefficient | p-value
Intercept    | -1.688e+00  | < 2e-16
LOC          | 7.703e-03   | 8.34e-10
numHEHCC     | 7.490e-02   | 1.42e-05
LOC:numHEHCC | -2.819e-04  | 0.000211

M_Rhino
Variable     | Coefficient | p-value
Intercept    | -4.9625130  | < 2e-16
LOC          | 0.0041486   | 0.17100
numHEHCC     | 0.2446853   | 0.00310
LOC:numHEHCC | -0.0004976  | 0.29788
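The table reports logistic-regression coefficients with an interaction term (LOC:numHEHCC). Assuming the standard logit link, the fitted ArgoUML model can be read back as log-odds; this sketch plugs in the coefficients from the table to show how the predicted fault probability responds to numHEHCC at a fixed size:

```python
import math

def fault_probability(loc, num_hehcc):
    """Predicted fault probability from the ArgoUML model in the table:
    logit(p) = intercept + b_loc*LOC + b_hehcc*numHEHCC + b_int*LOC*numHEHCC.
    """
    logit = (-1.688
             + 7.703e-3 * loc
             + 7.490e-2 * num_hehcc
             - 2.819e-4 * loc * num_hehcc)
    return 1.0 / (1.0 + math.exp(-logit))  # inverse logit (sigmoid)

# At a fixed size of 50 LOC, more high-dispersion terms -> higher risk.
for k in (0, 5, 10):
    print(k, round(fault_probability(50, k), 3))
```

For small entities the numHEHCC coefficient dominates the negative interaction term, so the predicted risk rises with the number of high-dispersion terms, which is the "additional information" claim of RQ1.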
24. Case study – RQ2 Relation to Faults

The risk of being faulty when entities contain terms with high entropy and high context coverage.

[Figure: from all entities, numHEHCC singles out 10% of the entities, whose risk of being faulty is then assessed.]

Risk of being faulty?
- ArgoUML: 2x higher
- Rhino: 6x higher
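The "2x/6x higher" figures are relative risks: the fault rate of the entities selected by numHEHCC over the fault rate of the rest. A minimal sketch of that ratio (the counts below are made up for illustration, not the study's data):

```python
def relative_risk(faulty_sel, total_sel, faulty_rest, total_rest):
    """Risk ratio: fault rate in the selected group over the rate in the rest."""
    return (faulty_sel / total_sel) / (faulty_rest / total_rest)

# Hypothetical counts: 10% of entities selected by numHEHCC vs. the other 90%.
print(round(relative_risk(30, 100, 135, 900), 2))  # → 2.0
```

A ratio of 2 reads as "selected entities are twice as likely to be faulty", the same interpretation as the ArgoUML result on the slide.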
29. Conclusions and future work

Conclusions
- Entropy and context coverage, together, capture characteristics different from size!
- Entropy and context coverage, together, help to explain the presence of faults in entities!

Future directions
- Replicate the study on other systems.
- Use entropy and context coverage to suggest refactorings.
- Study the impact of lexicon evolution on entropy and context coverage.
31. References

Binkley, D., Davis, M., Lawrie, D., and Morrell, C. (2009). To CamelCase or Under_score. In Proceedings of the 17th IEEE International Conference on Program Comprehension. IEEE CS Press.

Butler, S., Wermelinger, M., Yu, Y., and Sharp, H. (2009). Relating identifier naming flaws and code quality: An empirical study. In Proceedings of the 16th Working Conference on Reverse Engineering, pages 31–35. IEEE CS Press.

Deissenboeck, F. and Pizka, M. (2006). Concise and consistent naming. Software Quality Journal, 14(3):261–282.

Gyimóthy, T., Ferenc, R., and Siket, I. (2005). Empirical validation of object-oriented metrics on open source software for fault prediction. IEEE Transactions on Software Engineering, 31(10):897–910.

Haiduc, S. and Marcus, A. (2008). On the use of domain terms in source code. In Proceedings of the 16th IEEE International Conference on Program Comprehension, pages 113–122. IEEE CS Press.

Liu, Y., Poshyvanyk, D., Ferenc, R., Gyimóthy, T., and Chrisochoides, N. (2009). Modelling class cohesion as mixtures of latent topics. In Proceedings of the 25th IEEE International Conference on Software Maintenance, pages 233–242, Edmonton, Canada. IEEE CS Press.

Marcus, A., Poshyvanyk, D., and Ferenc, R. (2008). Using the conceptual cohesion of classes for fault prediction in object-oriented systems. IEEE Transactions on Software Engineering, 34(2):287–300.

Poshyvanyk, D. and Marcus, A. (2006). The conceptual coupling metrics for object-oriented systems. In Proceedings of the 22nd IEEE International Conference on Software Maintenance, pages 469–478. IEEE CS Press.

Takang, A., Grubb, P., and Macredie, R. (1996). The effects of comments and identifier names on program comprehensibility: an experimental study. Journal of Programming Languages, 4(3):143–167.

Zimmermann, T., Premraj, R., and Zeller, A. (2007). Predicting defects for Eclipse. In Proceedings of the Third International Workshop on Predictor Models in Software Engineering.