This paper studies a new, quantitative approach that uses fractal geometry to analyze basic tenets of good programming style. Experiments on the C source of the GNU/Linux Core Utilities, a collection of 114 programs comprising approximately 70,000 lines of code, show that systematic changes in style are correlated with statistically significant changes in fractal dimension (P≤0.0009). The data further show a positive but weak correlation between lines of code and fractal dimension (r=0.0878). These results suggest that the fractal dimension is a reliable metric of changes that affect good style, knowledge of which may be useful for maintaining a code base.
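The listing does not spell out how a fractal dimension is assigned to source code; a common general technique is a box-counting estimate over the code's visual layout. The sketch below is a hedged illustration of that technique only; the indentation-grid representation, box sizes, and all names are our assumptions, not the paper's procedure:

```python
import numpy as np

def code_to_grid(source: str, width: int = 80) -> np.ndarray:
    """Render source text as a binary grid: True where a non-space character sits."""
    lines = source.splitlines()
    grid = np.zeros((max(len(lines), 1), width), dtype=bool)
    for i, line in enumerate(lines):
        for j, ch in enumerate(line[:width]):
            if not ch.isspace():
                grid[i, j] = True
    return grid

def box_counting_dimension(grid: np.ndarray, sizes=(1, 2, 4, 8, 16)) -> float:
    """Estimate fractal dimension as the slope of log N(s) versus log(1/s)."""
    counts = []
    for s in sizes:
        n = 0
        for r in range(0, grid.shape[0], s):
            for c in range(0, grid.shape[1], s):
                if grid[r:r + s, c:c + s].any():   # does this box contain code?
                    n += 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

sample = "int main(void)\n{\n    if (x) {\n        f();\n    }\n    return 0;\n}\n"
print(f"box-counting dimension: {box_counting_dimension(code_to_grid(sample)):.2f}")
```

A style change such as re-indenting or inlining blocks changes the filled-cell pattern of the grid, and hence the estimated dimension, which is the kind of effect the paper measures.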
A NOVEL FEATURE SET FOR RECOGNITION OF SIMILAR SHAPED HANDWRITTEN HINDI CHARACTERS (cscpconf)
The growing need for handwritten Hindi character recognition in Indian offices such as passport and railway offices has made it a vital area of research. Similar shaped characters are particularly prone to misclassification. In this paper, four Machine Learning (ML) algorithms, namely Bayesian Network, Radial Basis Function Network (RBFN), Multilayer Perceptron (MLP), and C4.5 Decision Tree, are used for recognition of Similar Shaped Handwritten Hindi Characters (SSHHC), and their performance is compared. A novel feature set of 85 features is generated on the basis of character geometry. Because of the high dimensionality of the feature vector, the classifiers can be computationally complex, so the dimensionality is reduced to 11 and 4 features using Correlation-Based (CFS) and Consistency-Based (CON) feature selection techniques, respectively. Experimental results show that Bayesian Network is the better choice when used with CFS features, while C4.5 performs better with CON features.
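Neither CFS nor CON feature selection ships with the common Python ML stack, so the sketch below only mirrors the shape of the reported experiment: reduce a high-dimensional feature set to 11 features and compare two of the named classifier families on the reduced data. The scikit-learn stand-ins and synthetic data are our assumptions, not the paper's pipeline:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the 85 geometric features of similar-shaped characters.
X, y = make_classification(n_samples=1000, n_features=85, n_informative=15,
                           n_classes=4, random_state=0)

for name, clf in [("NaiveBayes (Bayesian stand-in)", GaussianNB()),
                  ("C4.5-like tree", DecisionTreeClassifier(criterion="entropy"))]:
    # SelectKBest(k=11) plays the role of CFS's reduction to 11 features.
    pipe = make_pipeline(SelectKBest(mutual_info_classif, k=11), clf)
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")
```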
COQUEL: A CONCEPTUAL QUERY LANGUAGE BASED ON THE ENTITY-RELATIONSHIP MODEL (csandit)
As more and more collections of data become available on the Internet, end users who are not experts in Computer Science demand easy solutions for retrieving data from these collections. A good solution for these users is conceptual query languages, which facilitate the composition of queries by means of a graphical interface. In this paper, we present (1) CoQueL, a conceptual query language specified on E/R models, and (2) a translation architecture for translating CoQueL queries into languages such as XQuery or SQL.
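The translation architecture itself is not reproduced in this listing. As a toy illustration of the general idea of translating a conceptual (E/R-level) query into SQL, the sketch below maps a hypothetical entity/attributes/condition triple onto a SELECT statement; every name in it is a made-up stand-in:

```python
# Hypothetical conceptual query: entity, requested attributes, and a condition
# phrased against the E/R model rather than against physical tables.
conceptual_query = {
    "entity": "Student",
    "attributes": ["name", "email"],
    "condition": ("enrolled_in", "=", "Databases"),
}

# Mapping from E/R concepts to the physical schema (the translation layer).
er_to_table = {"Student": "students", "enrolled_in": "course_name"}

def to_sql(q: dict) -> str:
    """Translate a minimal conceptual query into a SQL SELECT statement."""
    table = er_to_table[q["entity"]]
    cols = ", ".join(q["attributes"])
    attr, op, value = q["condition"]
    return f"SELECT {cols} FROM {table} WHERE {er_to_table[attr]} {op} '{value}'"

print(to_sql(conceptual_query))
# SELECT name, email FROM students WHERE course_name = 'Databases'
```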
Towards a semantics for UML activity diagrams based on institution theory for i... (csandit)
In this article, we define an approach for model transformation, using UML Activity Diagrams (UML AD) and Event-B as the source and target formalisms. Before performing the transformation, a formal semantics is given to the source formalism. We use institution theory to define the intended semantics; with this theory, we obtain an algebraic specification for the formalism. Thus, the source formalism is defined in its own natural semantics, without any intermediate semantics. Model transformation is performed by a set of transformation schemas that preserve, throughout the transformation process, the semantics expressed in the source model. The generated model, expressed in the Event-B language, is then used for the formal verification of the source model. As a result, for a model expressed in a precise formalism, the verification of this model can be seen as the verification of the Event-B model that is semantically equivalent to it. In the present work we therefore combine institution theory, the Event-B method and graph grammars to develop an approach supporting the specification, transformation and verification of UML AD.
Using Meta-modeling, Graph Grammars and R-Maude to Process and Simulate LRN Models (Waqas Tariq)
Nowadays, code mobility technology is one of the most attractive research domains. Numerous domains are concerned, many platforms have been developed, and interesting applications have been realized. However, the weakness of modeling languages in dealing with code mobility at the requirements phase has prompted the proposal of new formalisms. Among these we find Labeled Reconfigurable Nets (LRN) [9]. This new formalism allows explicit modeling of computational environments and of process mobility between them; it allows modeling mobile code paradigms (mobile agent, code on demand, remote evaluation) in a simple and intuitive way. In this paper, we propose an approach based on the combined use of meta-modeling and graph grammars to automatically generate a visual modeling tool for LRN for analysis and simulation purposes. In our approach, the UML class diagram formalism is used to define a meta-model of LRN. The meta-modeling tool AToM3 is used to generate a visual modeling tool according to the proposed LRN meta-model. We also propose a graph grammar to generate R-Maude [22] specifications of the graphically specified LRN models. The reconfigurable rewriting logic language R-Maude is then used to simulate the resulting R-Maude specification. Our approach is illustrated through examples.
Functional Verification of Large-integers Circuits using a Cosimulation-base... (IJECEIAES)
Cryptography and computational algebra designs are complex systems based on modular arithmetic and built from multi-level modules where the bit width is generally larger than 64 bits. Because of this particularity, such designs pose a real challenge for verification, in part because large-integer functions are not supported in current hardware description languages (HDLs), which limits the utility of HDL testbenches. On the other hand, high-level verification approaches have proved their efficiency over HDL testbench techniques in the last decade by raising the latter to a higher abstraction level. In this work, we propose a high-level platform to verify such designs, leveraging the capabilities of a popular tool (Matlab/Simulink) to meet the requirements of cycle-accurate verification without bit-size restrictions and at multiple levels inside the design architecture. The proposed high-level platform is augmented by assertion-based verification to complete the verification coverage. Experimental results on a test case provide good evidence of the platform's performance and reusability.
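Part of the appeal of a high-level reference model is that Python (like Matlab) handles arbitrary-precision integers natively, while HDL testbenches are stuck at fixed bit widths. The sketch below shows that general golden-model pattern, checking a hardware-style limb-by-limb multiplication against Python's big integers; it illustrates the verification idea only and is not the paper's Matlab/Simulink platform:

```python
import random

WORD = 32                     # limb width, as a DUT datapath might use
NWORDS = 8                    # 8 x 32 = 256-bit operands

def to_limbs(x: int) -> list[int]:
    """Split a big integer into NWORDS little-endian 32-bit limbs."""
    return [(x >> (WORD * i)) & 0xFFFFFFFF for i in range(NWORDS)]

def limb_mul(a: list[int], b: list[int]) -> int:
    """Schoolbook multiply on limbs, the way a multi-word RTL datapath would."""
    acc = [0] * (2 * NWORDS)
    for i, ai in enumerate(a):
        carry = 0
        for j, bj in enumerate(b):
            t = acc[i + j] + ai * bj + carry
            acc[i + j] = t & 0xFFFFFFFF   # keep the low word
            carry = t >> WORD             # propagate the high word
        acc[i + NWORDS] = carry
    return sum(w << (WORD * k) for k, w in enumerate(acc))

# Golden-model check: Python's unbounded ints act as the reference model.
for _ in range(1000):
    x, y = (random.getrandbits(WORD * NWORDS) for _ in range(2))
    assert limb_mul(to_limbs(x), to_limbs(y)) == x * y, "mismatch vs reference"
print("1000 random 256-bit multiplications match the golden model")
```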
Image-Based Literal Node Matching for Linked Data Integration (IJwest)
This paper proposes a method of identifying and aggregating literal nodes that have the same meaning in Linked Open Data (LOD) in order to facilitate cross-domain search. LOD has a graph structure in which most nodes are represented by Uniform Resource Identifiers (URIs), and thus LOD sets are connected and searched across different domains. However, even in DBpedia, a de facto hub of LOD, 5% of the values are literal values (strings without a URI). In SPARQL Protocol and RDF Query Language (SPARQL) queries, we must rely on regular expressions to match and trace these literal nodes. We therefore propose a novel method in which part of the LOD graph structure is regarded as a block image, and matching is computed from image features of the LOD. In experiments, we created about 30,000 literal pairs from a Japanese music category of DBpedia Japanese and Freebase, and confirmed that the proposed method determines literal identity with an F-measure of 76.1-85.0%.
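The paper's actual image features are not described in this listing. As a generic illustration of the "graph neighborhood as block image" idea, the sketch below rasterizes each literal's local adjacency into a small binary image and compares the images by cosine similarity; the graph encoding and all data are simplified assumptions:

```python
import numpy as np

def neighborhood_image(graph: dict, node: str, size: int = 8) -> np.ndarray:
    """Rasterize a node's neighborhood into a fixed-size binary 'block image'."""
    nbrs = sorted(graph.get(node, []))[:size]
    img = np.zeros((size, size))
    for i, a in enumerate(nbrs):
        img[i, i] = 1.0                      # neighbor i is present
        for j, b in enumerate(nbrs):
            if b in graph.get(a, set()):
                img[i, j] = 1.0              # neighbors i and j are also linked
    return img

def image_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between flattened block images."""
    va, vb = a.ravel(), b.ravel()
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return float(va @ vb / denom) if denom else 0.0

# Two literal nodes from different LOD sets, with overlapping neighborhoods.
g = {"'Tokyo'@ja": {"Japan", "Kanto"}, "'Tokyo'@en": {"Japan", "Kanto", "City"},
     "Japan": {"Kanto"}, "Kanto": set(), "City": set()}
sim = image_similarity(neighborhood_image(g, "'Tokyo'@ja"),
                       neighborhood_image(g, "'Tokyo'@en"))
print(f"block-image similarity: {sim:.2f}")
```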
Cmaps as intellectual prosthesis (GERAS 34, Paris) (Lawrie Hunter)
At the present time, 'increasing accessibility of technology' is readily read as 'increasing accessibility of electronic information technology', but this is to ignore a history of pre-electronic technologies which have generally been conflated with the original media of education, first speech and rather later the writing of continuous text.
The insertion of spaces between words in text was a technology for accessibility of encoding. The paragraph was a technology for the signaling of rhetorical shifts. The bullet list is used for the representation of clusters of notions, either atomic (listing) or aggregates (classification). More substantial technological innovations include the data table and the graph.
One revolutionary technology that has not become mainstream in instructional communication is the Novakian concept map (i.e. the map whose links have text labels to specify the relation between two nodes). This technology has been substantially migrated to electronic information technology, and is arguably more prevalent there than in the traditional sphere, though it is still largely regarded as a novelty or non-essential element of instructional discourse.
This paper reports a case study of a fruitful application of Novakian mapping, wherein EAP learners of academic writing for management discover intellectual leverage in mapping, and develop their own use of the technique, in an iterative manner, in counterpoint with text analysis work. It tracks the cycling between moves analysis and concept mapping as these members of a graduate seminar work to unpack a paper that they have identified as a 'good model', but which they have realized is not a well-written paper.
The observations made here suggest that concept mapping is a pre-electronic technology that deserves a place amongst the essential tools for instructional discourse, particularly in settings such as EAP where the identification of rhetorical orchestration is difficult and where argument is often masked by other rhetorical devices.
Modeling and Evaluation of Performance and Reliability of Component-based Software... (Editor IJCATR)
Validation of software systems is very useful at the early stages of their development cycle. Evaluation of functional requirements is supported by clear and appropriate approaches, but there is no similar strategy for the evaluation of non-functional requirements (such as performance and reliability). Since satisfying the non-functional requirements has a significant effect on the success of software systems, considerable effort is needed for their evaluation. Moreover, if software performance has been specified in terms of performance models, it can be evaluated at the early stages of the software development cycle. Therefore, modeling and evaluating non-functional requirements at the software architecture level, which is designed at the early stages of the development cycle and prior to implementation, is very effective.
We propose an approach for evaluating the performance and reliability of software systems at the software architecture level, based on formal models (hierarchical timed colored Petri nets). In this approach, the software architecture is described by UML use case, activity and component diagrams; the UML model is then transformed into an executable model based on hierarchical timed colored Petri nets (HTCPN) by a proposed algorithm. Consequently, by executing the executable model and analyzing its results, non-functional requirements including performance (such as response time) and reliability can be evaluated at the software architecture level.
The Download: Tech Talks by the HPCC Systems Community, Episode 16 (HPCC Systems)
This episode will feature our 2018 HPCC Systems summer interns:
Shah Muhammad Hamdi, PhD student, CS at Georgia State University - Dimensionality Reduction and Feature Selection in ECL-ML
Hamdi will discuss the parallel implementation of Principal Component Analysis (PCA) using the Parallel Block Basic Linear Algebra Subsystem (PBblas) library and ECL implementations of feature selection algorithms for the HPCC Systems platform.
Robert Kennedy, PhD student in Computer Science at Florida Atlantic University - Parallel Distributed Deep Learning on HPCC Systems
Robert will cover what he implemented during his summer internship. Combining HPCC Systems and Google’s TensorFlow, Robert created a parallel stochastic gradient descent algorithm to provide a basis for future deep neural network research and to enhance HPCC Systems’ distributed neural network training capabilities.
Aramis Tanelus, programmer and senior at American Heritage High School where he is the lead programmer for the Advanced Robotics Team - Developing HPCC Systems Data Ingestion APIs for Common Robotic Sensors.
Aramis’s project will make it easy for anyone in robotics around the world to ingest data from common robotic sensors into an HPCC Systems platform for use in data analysis. Aramis will be speaking about his work on an autonomous agricultural robot and about implementing new packages for the Robot Operating System (ROS) to interface with HPCC Systems for big data analysis.
Saminda Wijeratne, Masters student, Computational Science and Engineering at Georgia Institute of Technology, Atlanta - MPI Proof of Concept
The built-in "Message Passing" library in HPCC Systems is designed to handle communications among dissimilar components and to perform non-trivial communication patterns among them. Saminda will explore how this library currently operates and how a different implementation could be introduced, based on the existing popular MPI library.
A design pattern is a general solution to a commonly occurring problem in software design. It is a template for solving a problem that can be used in many different situations. Patterns formalize best practices that the programmer can use to solve common problems when designing an application or system. In this article we focus on how the proposed UML diagrams can be implemented in the C# language, and on whether it is possible to implement the diagrams in program code with the greatest possible precision.
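As a hedged example of moving from a UML-style pattern description to code (the article targets C#; Python is used here for consistency with the other sketches in this listing), here is the classic Observer pattern implemented directly from its class-diagram roles of Subject and Observer:

```python
from abc import ABC, abstractmethod

class Observer(ABC):
    """UML role 'Observer': the update() operation from the class diagram."""
    @abstractmethod
    def update(self, state: int) -> None: ...

class Subject:
    """UML role 'Subject': holds state and the 1..* observer association."""
    def __init__(self) -> None:
        self._observers: list[Observer] = []
        self._state = 0

    def attach(self, obs: Observer) -> None:
        self._observers.append(obs)

    def set_state(self, state: int) -> None:
        self._state = state
        for obs in self._observers:      # notify() in the diagram
            obs.update(state)

class ConsoleObserver(Observer):
    def update(self, state: int) -> None:
        print(f"state changed to {state}")

subject = Subject()
subject.attach(ConsoleObserver())
subject.set_state(42)   # prints: state changed to 42
```

The question the article raises, how precisely the code mirrors the diagram, shows up here in the one-to-one mapping between diagram roles and classes.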
Tracing Requirements as a Problem of Machine Learning (ijseajournal)
Software requirements engineering and evolution are essential to the software development process, defining and elaborating what is to be built in a project. Requirements are mostly written in text and later evolve into fine-grained, actionable artifacts with details about system configurations, technology stacks, etc. Tracing the evolution of requirements enables stakeholders to determine the origin of each requirement and to understand how well the software's design reflects its requirements. Because reckoning requirements traceability is not a trivial task, a machine learning approach is used to classify traceability between various associated requirements. In particular, we investigate a 2-learner, ontology-based, pseudo-instance-enhanced approach for this task, in which two classifiers are trained to separately exploit two types of features: lexical features and features derived from a hand-built ontology. The hand-built ontology is also leveraged to generate pseudo training instances to improve machine learning results. In comparison to a supervised baseline system that uses only lexical features, our approach yields a relative error reduction of 56.0%. Most interestingly, results do not deteriorate when the hand-built ontology is replaced with its automatically constructed counterpart.
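A minimal sketch of the 2-learner idea, assuming scikit-learn and toy data: one classifier sees lexical features, a second sees ontology-derived features, and their probability estimates are averaged. The feature construction and the averaging rule are our simplifications, not necessarily the paper's method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
X_lex = rng.normal(size=(n, 20))     # stand-in lexical features of requirement pairs
X_ont = rng.normal(size=(n, 5))      # stand-in ontology-derived features
y = (X_lex[:, 0] + X_ont[:, 0] > 0).astype(int)   # 1 = traceability link exists

# Two learners, each trained on its own feature view.
clf_lex = LogisticRegression().fit(X_lex[:150], y[:150])
clf_ont = LogisticRegression().fit(X_ont[:150], y[:150])

# Combine by averaging predicted probabilities over the two views.
p = (clf_lex.predict_proba(X_lex[150:])[:, 1] +
     clf_ont.predict_proba(X_ont[150:])[:, 1]) / 2
acc = ((p > 0.5).astype(int) == y[150:]).mean()
print(f"combined 2-learner accuracy on held-out pairs: {acc:.2f}")
```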
A Case Elaboration Methodology for a Semantic Web Service Discovery System Ba... (IJERA Editor)
Case Based Reasoning (CBR) is a paradigm of intelligent reasoning that consists in reusing the results of previously solved problems (Source Cases) to solve new problems (Target Cases). It has been formalized as a five-step process: "Elaboration", "Retrieve", "Reuse", "Revise" and "Retain". In this paper we focus on the first phase of the CBR cycle, with all of the modeling required to formalize a Case in our CBR-based system for semantic Web service discovery (CBR4WSD). This phase consists in formalizing and structuring the problem description before launching the "Retrieve" phase, which selects the most appropriate Source Cases from the Case Base. We identify a set of basic descriptors to formalize the Cases handled in our CBR4WSD system and, in accordance with CBR policies, put forward our Case representation model.
FEATURES MATCHING USING NATURAL LANGUAGE PROCESSING (IJCI JOURNAL)
Feature matching is a basic step in matching different datasets. This article proposes a new hybrid model in which a pretrained Natural Language Processing (NLP) model, BERT, is used in parallel with a statistical model based on Jaccard similarity to measure the similarity between the feature lists of two different datasets. This reduces the time required to search for correlations or to manually match each feature from one dataset to another.
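A minimal sketch of the hybrid measure, assuming the sentence-transformers package for the BERT side; the model choice and the 50/50 weighting are our assumptions, not the article's:

```python
from sentence_transformers import SentenceTransformer, util

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two feature names."""
    sa, sb = set(a.lower().split("_")), set(b.lower().split("_"))
    return len(sa & sb) / len(sa | sb)

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed model choice

def hybrid_similarity(f1: str, f2: str, w: float = 0.5) -> float:
    """Blend BERT embedding cosine similarity with Jaccard token overlap."""
    emb = model.encode([f1, f2])
    bert_sim = float(util.cos_sim(emb[0], emb[1]))
    return w * bert_sim + (1 - w) * jaccard(f1, f2)

# The embedding catches the semantic match even when token overlap is partial.
print(hybrid_similarity("customer_birth_date", "date_of_birth"))
```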
The large-scale cyberinformatics method to replication is defined not only by the analysis of local-area networks, but also by the structured need for the Internet. Here, we confirm the refinement of superpages, which embodies the unfortunate principles of operating systems. SHODE, our new methodology for secure methodologies, is the solution to all of these obstacles.
Surrogate modeling for industrial design (Shinwoo Jang)
We describe GTApprox, a new tool for medium-scale surrogate modeling in industrial design. Compared to existing software, GTApprox brings several innovations: a few novel approximation algorithms, several advanced methods of automated model selection, and novel options in the form of hints. We demonstrate the efficiency of GTApprox on a large collection of test problems. In addition, we describe several applications of GTApprox to real engineering problems.
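GTApprox itself is commercial and its algorithms are not reproduced here. As a generic illustration of what any surrogate model does, replacing an expensive simulation with a cheap approximation fitted to a few samples, here is a Gaussian-process surrogate in scikit-learn; everything in it is a stand-in:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    """Stand-in for a costly engineering solver."""
    return np.sin(3 * x) + 0.5 * x

# Fit the surrogate on a handful of design points...
X_train = np.linspace(0, 3, 8).reshape(-1, 1)
y_train = expensive_simulation(X_train).ravel()
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X_train, y_train)

# ...then evaluate it cheaply anywhere in the design space, with uncertainty.
X_new = np.array([[1.234]])
mean, std = surrogate.predict(X_new, return_std=True)
print(f"surrogate: {mean[0]:.3f} +/- {std[0]:.3f}, "
      f"truth: {expensive_simulation(X_new)[0, 0]:.3f}")
```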
Clone group mapping is of great significance in the study of code clone evolution. We apply topic modeling techniques to code clones for the first time and propose a new clone group mapping method. By using topic modeling to transform the mapping problem in the high-dimensional code space into one in a low-dimensional topic space, the goal of clone group mapping is reached indirectly by mapping clone group topics. Experiments on four open source software systems show that recall and precision are up to 0.99, so the method can effectively and accurately map clone groups.
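A minimal sketch of the topic-space idea, assuming scikit-learn's LDA and a toy corpus: vectorize each clone group's code text, project both program versions into a shared topic space, and map each group to its nearest counterpart. The corpus and the distance choice are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy clone groups from two versions of a program (tokens of the cloned code).
version1 = ["open file read buffer close", "sort list compare swap"]
version2 = ["sort array compare swap items", "open path read bytes close file"]

vec = CountVectorizer()
X = vec.fit_transform(version1 + version2)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(X)              # each clone group -> topic distribution

t1, t2 = topics[:len(version1)], topics[len(version1):]
for i, row in enumerate(t1):
    # Map each old clone group to the new group with the closest topic mix.
    j = int(np.argmin(np.linalg.norm(t2 - row, axis=1)))
    print(f"version1 group {i} -> version2 group {j}")
```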
ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR (cscpconf)
The progressive development of Synthetic Aperture Radar (SAR) systems has diversified the exploitation of the images generated by these systems in different geoscience applications. Detection and monitoring of surface deformations caused by various phenomena have benefited from this evolution and have been realized by interferometry (InSAR) and differential interferometry (DInSAR) techniques. Nevertheless, spatial and temporal decorrelation of the interferometric pairs used strongly limits the precision of the results of these techniques. In this context, we propose a methodological approach for detecting and analyzing surface deformation with differential interferograms, in order to show the limits of this technique as a function of noise quality and level. The detectability model is generated from deformation signatures by simulating a linear fault merged into image pairs from the ERS1/ERS2 sensors acquired over a region of the Algerian south.
4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFICATION (cscpconf)
A novel trajectory-guided, concatenative approach for synthesizing high-quality video from real image samples is proposed. The automated lip-reading model searches a library of real image sample sequences for the sequence closest, given the video data, to the trajectory predicted by an HMM. The object trajectory is obtained by projecting the face patterns into a KDA feature space. Speaker face identification is approached by synthesizing the identity surface of a subject's face from a small sample of patterns that sparsely cover the view sphere. A KDA algorithm is used to discriminate the lip-reading images; the dimensionality of the fundamental lip feature vector is then reduced using the 2D-DCT, and the dimensionality of the mouth area is further reduced by PCA to obtain the eigenlips approach proposed by [33]. Subjective performance results of the cost function under the automatic lip-reading model illustrate the performance of the method.
MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJECTS (cscpconf)
Universities offer a software engineering capstone course to simulate a real-world working environment in which students can work in a team for a fixed period to deliver a quality product. The objective of this paper is to report on our experience in moving from a Waterfall process to an Agile process in conducting the software engineering capstone project. We present the capstone course designs for both the Waterfall-driven and Agile-driven methodologies, highlighting the structure, deliverables and assessment plans. To evaluate the improvement, we conducted a survey in two different sections taught by two different instructors to evaluate students' experience in moving from the traditional Waterfall model to an Agile-like process. Twenty-eight students filled in the survey, which consisted of eight multiple-choice questions and an open-ended question to collect feedback from students. The survey results show that students were able to attain hands-on experience simulating a real-world working environment. The results also show that the Agile approach helped students to produce an overall better design and to avoid the mistakes they had made in the initial design completed in the first phase of the capstone project. In addition, they were able to assess their team capabilities and training needs, and thus learn the required technologies earlier, which is reflected in the final product quality.
PROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIES (cscpconf)
Using social media in education provides learners with an informal way of communicating. Informal communication tends to remove barriers and hence promotes student engagement. This paper presents our experience in using three different social media technologies in teaching a software project management course. We conducted surveys at the end of every semester to evaluate students' satisfaction and engagement. Results show that using social media enhances students' engagement and satisfaction. However, familiarity with the tool is an important factor in student satisfaction.
A SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGIC (cscpconf)
Using a computer to answer questions has been a human dream since the beginning of the digital era. Question-answering (QA) systems are referred to as intelligent systems that can provide responses to a user's questions based on facts or rules stored in a knowledge base, and they can generate answers to questions asked in natural language. Indeed, one of the first motivations of fuzzy logic was the problem of computer understanding of natural language. This survey paper therefore provides an overview of what question answering is, its system architecture, and its possible relationship with and differences from fuzzy logic, as well as the previous related research and the approaches it followed. At the end, the survey provides an analytical discussion of the proposed QA models, alone or combined with fuzzy logic, and of their main contributions and limitations.
DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS (cscpconf)
Human beings generate different speech waveforms when speaking the same word at different times. Also, different human beings have different accents and generate significantly varying speech waveforms for the same word. There is a need to measure the distances between various words, which facilitates the preparation of pronunciation dictionaries. A new algorithm called Dynamic Phone Warping (DPW) is presented in this paper. It uses dynamic programming techniques for global alignment and shortest-distance measurement. The DPW algorithm can be used to enhance the pronunciation dictionaries of well-known languages like English or to build pronunciation dictionaries for less-known sparse languages. Precision measurement experiments show 88.9% accuracy.
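The abstract describes DPW as dynamic programming for global alignment and distance measurement between pronunciations, which is the classic edit-distance recurrence over phone sequences. A minimal sketch under that reading (the unit costs are our assumption):

```python
def dynamic_phone_warping(p: list[str], q: list[str]) -> int:
    """Global-alignment (edit) distance between two phoneme sequences."""
    m, n = len(p), len(q)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # deletions
    for j in range(n + 1):
        d[0][j] = j                      # insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if p[i - 1] == q[j - 1] else 1   # substitution cost
            d[i][j] = min(d[i - 1][j] + 1,            # delete a phone
                          d[i][j - 1] + 1,            # insert a phone
                          d[i - 1][j - 1] + cost)     # match / substitute
    return d[m][n]

# Two pronunciations of "tomato" in an ARPAbet-like notation.
print(dynamic_phone_warping("T AH M EY T OW".split(), "T AH M AA T OW".split()))
# -> 1 (one vowel substitution)
```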
INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS (cscpconf)
In education, the use of electronic (E) examination systems is not a novel idea, as E-examination systems have been used to conduct objective assessments for the last few years. This research deals with randomly designed E-examinations and proposes an E-assessment system that can be used for subjective questions. The system assesses answers to subjective questions by finding a matching ratio between the keywords in the instructor's and the student's answers. The matching ratio is computed based on semantic and document similarity. The assessment system is composed of four modules: preprocessing, keyword expansion, matching, and grading. A survey and a case study were used in the research design to validate the proposed system. The examination assessment system will help instructors save time, costs, and resources, while increasing efficiency and improving the productivity of exam setting and assessment.
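A minimal sketch of the matching-ratio step, with a hand-supplied synonym table standing in for the paper's keyword-expansion and semantic-similarity modules; all words and weights are made up:

```python
# Hypothetical synonym table standing in for the keyword-expansion module.
SYNONYMS = {"speedy": "fast", "rapid": "fast", "big": "large"}

def expand(words: set[str]) -> set[str]:
    """Normalize keywords through the synonym table (keyword expansion)."""
    return {SYNONYMS.get(w, w) for w in words}

def matching_ratio(instructor: str, student: str) -> float:
    """Share of instructor keywords found in the student's answer."""
    key = expand(set(instructor.lower().split()))
    ans = expand(set(student.lower().split()))
    return len(key & ans) / len(key)

def grade(ratio: float, max_marks: float = 10) -> float:
    return round(ratio * max_marks, 1)

r = matching_ratio("fast indexing uses large memory", "rapid indexing needs big memory")
print(grade(r))   # 8.0: most keywords match once synonyms are expanded
```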
TWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTIC (cscpconf)
African Buffalo Optimization (ABO) is one of the most recent swarm intelligence based metaheuristics. The ABO algorithm is inspired by the buffalo's behavior and lifestyle. Unfortunately, the standard ABO algorithm is formulated only for continuous optimization problems. In this paper, the authors propose two discrete binary ABO algorithms to deal with binary optimization problems. The first version (called SBABO) uses the sigmoid function and a probability model to generate binary solutions. The second version (called LBABO) uses logical operators to manipulate the binary solutions. Computational results on two knapsack problem (KP and MKP) instances show the effectiveness of the proposed algorithms and their ability to achieve good and promising solutions.
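The listing gives the gist of SBABO: keep ABO's continuous position update and then pass each coordinate through a sigmoid to obtain the probability of a 1-bit. The sketch below shows only that binarization step; the surrounding ABO update rule is deliberately omitted:

```python
import math
import random

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position: list[float]) -> list[int]:
    """SBABO-style step: treat sigmoid(x_i) as the probability that bit i is 1."""
    return [1 if random.random() < sigmoid(x) else 0 for x in position]

# A buffalo's continuous position vector after an ABO move.
continuous_position = [2.1, -0.7, 0.0, 4.2, -3.3]
solution = binarize(continuous_position)
print(solution)   # e.g. [1, 0, 1, 1, 0]; strongly positive coords are usually 1

# In a knapsack setting, bit i would mean "item i is packed", and the fitness
# of `solution` would feed back into the next ABO position update.
```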
DETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAIN (cscpconf)
In recent years, many malware writers have relied on Dynamic Domain Name Services (DDNS) to maintain their Command and Control (C&C) network infrastructure and to ensure a persistent presence on compromised hosts. Among the various DDNS techniques, Domain Generation Algorithms (DGA) are often perceived as the most difficult to detect using traditional methods. This paper presents an approach for detecting DGA using frequency analysis of the character distribution and weighted scores of the domain names. The approach's feasibility is demonstrated using a range of legitimate domains and a number of malicious, algorithmically generated domain names. Findings from this study show that domain names made up of English characters "a-z" achieving a weighted score of < 45 are often associated with DGA. When a weighted score of < 45 is applied to the Alexa one million list of domain names, only 15% of the domain names were treated as non-human generated.
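The paper's exact weighting scheme is not reproduced in this listing. The sketch below illustrates the general frequency-analysis idea, weighting each character by its relative frequency in English text and flagging names whose score falls below the stated threshold of 45; the per-letter weights and the scaling are our stand-ins:

```python
# Relative frequency (%) of letters in English text, used as per-character weights.
FREQ = {"e": 12.7, "t": 9.1, "a": 8.2, "o": 7.5, "i": 7.0, "n": 6.7, "s": 6.3,
        "h": 6.1, "r": 6.0, "d": 4.3, "l": 4.0, "c": 2.8, "u": 2.8, "m": 2.4,
        "w": 2.4, "f": 2.2, "g": 2.0, "y": 2.0, "p": 1.9, "b": 1.5, "v": 1.0,
        "k": 0.8, "j": 0.15, "x": 0.15, "q": 0.1, "z": 0.07}

def weighted_score(domain: str) -> float:
    """Average English-frequency weight per character, scaled to roughly 0-100."""
    name = domain.split(".")[0].lower()
    letters = [c for c in name if c in FREQ]
    if not letters:
        return 0.0
    return 10 * sum(FREQ[c] for c in letters) / len(letters)

for d in ["google.com", "xjwqzkvgpbyh.net"]:
    s = weighted_score(d)
    verdict = "likely DGA" if s < 45 else "looks human-generated"
    print(f"{d}: score {s:.1f} -> {verdict}")
```

Random character strings average far lower per-letter frequencies than real words, which is why a single threshold separates the two populations reasonably well.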
GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C... (cscpconf)
The amount of piracy in streaming digital content in general, and in the music industry in particular, poses a real challenge to digital content owners. This paper presents a DRM solution for monetizing, tracking and controlling online streaming content across platforms for IP-enabled devices. The paper benefits from current advances in blockchain and cryptocurrencies. Specifically, it presents the Global Music Asset Assurance (GoMAA) digital currency and the iMediaStreams blockchain, which enable the secure dissemination and tracking of streamed content. The proposed solution gives the data owner the ability to control the flow of information even after it has been released, by creating a secure, self-installed, cross-platform reader located in the digital content file header. The proposed system gives content owners options to manage their digital information (audio, video, speech, etc.), including tracking of the most consumed segments, once it is released. The system benefits from token distribution between the content owner (music bands), the content distributor (online radio stations) and the content consumers (fans) on the system blockchain.
IMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEM (cscpconf)
This paper discusses the importance of verb suffix mapping in a discourse translation system. In discourse translation, the crucial step is anaphora resolution and generation. In anaphora resolution, cohesion links such as pronouns are identified between portions of text. These binders make the text cohesive by referring to nouns appearing in previous sentences or in the sentences after them. In machine translation systems, to convert the source language sentences into meaningful target language sentences, the verb suffixes should be changed according to the cohesion links identified. This step of the translation process is emphasized in the present paper. Specifically, the discussion covers how the verbs change according to the subjects and anaphors. To explain the concept, English is used as the source language (SL) and the Indian language Telugu is used as the target language (TL).
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T... (cscpconf)
In this paper, based on the definition of the conformable fractional derivative, the functional variable method (FVM) is proposed to seek the exact traveling wave solutions of two higher-dimensional space-time fractional KdV-type equations in mathematical physics, namely the (3+1)-dimensional space-time fractional Zakharov-Kuznetsov (ZK) equation and the (2+1)-dimensional space-time fractional Generalized Zakharov-Kuznetsov-Benjamin-Bona-Mahony (GZK-BBM) equation. Some new solutions are procured and depicted. These solutions, which contain kink-shaped, singular kink, bell-shaped soliton, singular soliton and periodic wave solutions, have many potential applications in mathematical physics and engineering. The simplicity and reliability of the proposed method are verified.
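For reference, the conformable fractional derivative the method builds on is the standard definition of Khalil et al.: for $f:(0,\infty)\to\mathbb{R}$ and $\alpha\in(0,1]$,

$$T_\alpha(f)(t) = \lim_{\varepsilon \to 0} \frac{f\left(t + \varepsilon\, t^{\,1-\alpha}\right) - f(t)}{\varepsilon},$$

which reduces to the ordinary derivative at $\alpha = 1$ and satisfies $T_\alpha(t^k) = k\, t^{k-\alpha}$. These derivative-like properties are what allow the functional variable method to reduce a fractional PDE to an ordinary equation in a traveling-wave variable.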
AUTOMATED PENETRATION TESTING: AN OVERVIEW (cscpconf)
The use of information technology resources is rapidly increasing in organizations, businesses, and even governments, which has given rise to various attacks and vulnerabilities in the field. All of these resources make it necessary to frequently run a penetration test (PT) on the environment to see what an attacker could gain and what the environment's current vulnerabilities are. This paper reviews some of the automated penetration testing techniques and presents their improvements over the traditional manual approaches. To the best of our knowledge, it is the first study that considers the concept of penetration testing together with the standards in the area. This research covers the comparison between manual and automated penetration testing and the main tools used in penetration testing. Additionally, it compares some methodologies used to build an automated penetration testing platform.
CLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORK (cscpconf)
Since the mid-1990s, functional connectivity study using fMRI (fcMRI) has drawn increasing attention from neuroscientists and computer scientists, since it opens a new window to explore the functional network of the human brain with relatively high resolution. The BOLD technique provides an almost accurate picture of brain state. Past research has shown that neurological diseases damage brain network interactions, protein-protein interactions and gene-gene interactions. A number of neurological research papers also analyze the relationships among the damaged parts. With computational methods, especially machine learning techniques, we can build such classifications. In this paper we used the OASIS fMRI dataset of patients affected by Alzheimer's disease together with a dataset of normal patients. After properly processing the fMRI data, we use the processed data to build classifier models using SVM (Support Vector Machine), KNN (K-nearest neighbour) and Naïve Bayes. We also compare the accuracy of our proposed method with existing methods. In future work, we will try other combinations of methods for better accuracy.
VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT AN... (cscpconf)
Fuzzy association rules have been proposed for treating and analyzing real datasets, and several algorithms have been introduced to extract these rules. However, these algorithms suffer from problems of utility, redundancy and the large number of extracted fuzzy association rules. The expert is then confronted with this huge amount of rules, and the task of validation becomes tedious. To solve these problems, we propose a new validation method based on three steps: (i) we extract a generic base of non-redundant fuzzy association rules by applying the EFAR-PN algorithm, which is based on fuzzy formal concept analysis; (ii) we categorize the extracted rules into groups; and (iii) we evaluate the relevance of these rules using a structural equation model.
PROBABILITY BASED CLUSTER EXPANSION OVERSAMPLING TECHNIQUE FOR IMBALANCED DATA (cscpconf)
In many data mining applications, class imbalance is observed when examples of one class are overrepresented. Traditional classifiers result in poor accuracy on the minority class due to this imbalance. Furthermore, within-class imbalance, where classes are composed of multiple sub-concepts with different numbers of examples, also affects classifier performance. In this paper, we propose an oversampling technique that handles between-class and within-class imbalance simultaneously and also takes into consideration the generalization ability in the data space. The proposed method is based on two steps: performing model-based clustering with respect to the classes to identify the sub-concepts, and then computing the separating hyperplane based on equal posterior probability between the classes. The method is tested on 10 publicly available datasets, and the results show that it is statistically superior to other existing oversampling methods.
CHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCH (cscpconf)
Data collection is an essential but manpower-intensive procedure in ecological research. The author developed an algorithm that incorporates two important computer vision techniques to automate data cataloging for butterfly measurements: Optical Character Recognition is used for character recognition and contour detection is used for image processing. Proper pre-processing is first done on the images to improve accuracy. Although there are limitations to Tesseract's detection of certain fonts, overall it can successfully identify words in basic fonts. Contour detection is an advanced technique that can be utilized to measure an image. Shapes and mathematical calculations are crucial in determining the precise location of the points on which to draw the body and forewing lines of the butterfly. Overall, 92% accuracy was achieved by the program for the set of butterflies measured.
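A minimal sketch of the two techniques named, assuming the pytesseract and OpenCV packages and a hypothetical specimen photo butterfly.jpg; the measurement step is a placeholder:

```python
import cv2
import pytesseract

img = cv2.imread("butterfly.jpg")                      # hypothetical specimen photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Pre-processing (Otsu threshold) improves Tesseract's accuracy on labels.
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 1) Optical Character Recognition on the specimen label.
label_text = pytesseract.image_to_string(thresh)
print("label:", label_text.strip())

# 2) Contour detection to locate the specimen outline for measurement.
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
body = max(contours, key=cv2.contourArea)              # largest contour = butterfly
x, y, w, h = cv2.boundingRect(body)
print(f"bounding box: {w}x{h} px")                     # basis for body/forewing lines
```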
SOCIAL MEDIA ANALYTICS FOR SENTIMENT ANALYSIS AND EVENT DETECTION IN SMART CITIES (cscpconf)
Smart cities utilize Internet of Things (IoT) devices and sensors to enhance the quality of city services including energy, transportation, health, and much more, generating massive volumes of structured and unstructured data on a daily basis. Social networks, such as Twitter, Facebook, and Google+, are also becoming a new source of real-time information in smart cities, with social network users acting as social sensors. Datasets this large and complex are difficult to manage with conventional data management tools and methods. To become valuable, this massive amount of data, known as 'big data,' needs to be processed and comprehended to hold the promise of supporting a broad range of urban and smart city functions, including among others transportation, water and energy consumption, pollution surveillance, and smart city governance. In this work, we investigate how social media analytics helps to analyze smart city data collected from various social media sources, such as Twitter and Facebook, to detect various events taking place in a smart city, and to identify the importance of those events and the concerns of citizens regarding them. A case scenario analyses the opinions of users concerning traffic in the three largest cities in the UAE.
SOCIAL NETWORK HATE SPEECH DETECTION FOR AMHARIC LANGUAGE (cscpconf)
The anonymity of social networks makes them attractive to hate speakers as a way to mask their criminal activities online, posing a challenge to the world and in particular to Ethiopia. With the ever-increasing volume of social media data, hate speech identification becomes a challenge, as such speech aggravates conflict between citizens of nations. Because of the high rate of production, it has become difficult to collect, store and analyze such big data using traditional detection methods. This paper proposes the application of Apache Spark to hate speech detection to reduce these challenges. The authors developed an Apache Spark based model to classify Amharic Facebook posts and comments into hate and not hate, employing Random Forest and Naïve Bayes for learning and Word2Vec and TF-IDF for feature selection. Tested by 10-fold cross-validation, the model based on Word2Vec embeddings performed best, with 79.83% accuracy. The proposed method achieves a promising result thanks to the unique features of Spark for big data.
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXT (cscpconf)
This article presents Part-of-Speech tagging for Nepali text using a General Regression Neural Network (GRNN). The corpus is divided into two parts, training and testing, and the network is trained and validated on both. It is observed that 96.13% of words are tagged correctly on the training set, whereas 74.38% of words are tagged correctly on the testing set using the GRNN. The result is compared with the traditional Viterbi algorithm based on a Hidden Markov Model; the Viterbi algorithm yields 97.2% and 40% classification accuracy on the training and testing sets respectively. The GRNN-based POS tagger is thus more consistent than the traditional Viterbi decoding technique.
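A GRNN is essentially Nadaraya-Watson kernel regression: its output is a Gaussian-weighted average of the training targets. A minimal sketch of that computation (the two-dimensional word features and the smoothing value are illustrative assumptions for the POS setting):

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, sigma=0.5):
    """GRNN output: Gaussian-kernel weighted average of training targets."""
    d2 = np.sum((X_train - x) ** 2, axis=1)           # squared distances
    w = np.exp(-d2 / (2 * sigma ** 2))                # pattern-layer activations
    return (w @ Y_train) / w.sum()                    # summation / output layers

# Toy setup: 2-D feature vectors for words, one-hot tag targets (NOUN, VERB).
X_train = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]])
Y_train = np.array([[1, 0], [1, 0], [0, 1]])          # rows: P(NOUN), P(VERB)

tag_scores = grnn_predict(X_train, Y_train, np.array([0.85, 0.15]))
print(["NOUN", "VERB"][int(np.argmax(tag_scores))])   # -> NOUN
```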
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, the aspects they consider when buying a new TV, and their TV buying preferences.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply doing machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains will only be realized when the symbolic structures have an actual semantics. I give an operational definition of semantics as "predictable inference".
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I found myself wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and give you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need in order to apply it to our own infrastructure and get it working from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
anecdotal arguments rather than metrics to reason about aesthetic outcomes, and no research
effort heretofore has investigated the problem or its opportunities.
In this paper, we study a new, quantitative way to analyse basic tenets of good programming style
using fractal geometry [7]. Fractals are often associated with beauty in nature and human designs
[8]. Furthermore, since fractals are self-similar and scale-invariant, we hypothesized that a fractal
approach might be inherently robust to the distribution of source sizes.
Experiments with the C source code of the GNU/Linux Core Utilities [9], 114 commands of the
Linux shell or about 70,000 lines of code (LOC), show that systematic changes in programming style
are correlated with statistically significant changes (P≤0.0009) in fractal dimension [10]. The data
further show that while the baseline sizes of C source files vary widely, there is a positive but
weak correlation with fractal dimension (r=0.0878). These data suggest the fractal dimension is a
reliable metric of changes in source that affect good style, the knowledge of which may be useful
for maintaining a code base.
2. RELATED WORK
Aesthetic value in source is not the same as readability [11] [12], although the two are related.
The latter is about comprehending code whereas the former is about appreciating it, l'art pour l'art.
Beauty in source is also not the same as functional complexity [13]. Complexity relates to design
and efficiency in algorithms and data structures, which may have appeal in a conceptual, though
not necessarily a visual sense, although here again there is overlap. Beautiful Code [14] explores
just this sort of conceptual aesthetic, not only in source but also in debugging and testing which
are not subjects we consider. Gabriel [15] argues against clarity and conceptual beauty as primary
goals of software in favour of what the author calls “habitability.” Yet comfort with the code is
independent of style since programmers might forgo style best practices as long as they can live
with it, whereas our starting point is good style. The fractal dimension has been applied to a wide
range of disciplines, though not software development [16]. Our code depends on Fractop [17], a
Java library originally developed to categorize neural tissue. We have reused this library to
analyze source code. Some researchers have employed the fractal dimension to study paintings of
artists [18]; others working in a similar vein have used the fractal dimension to authenticate
Jackson Pollock's "action paintings" [19] [20]. Still others have used the fractal dimension to
examine aesthetic appeal in artificially intelligent path finding in videogames [21] [22] [23]. An
investigation of Scala repositories on GitHub.com found sources are organized according to
power-law distributions [24] [25], but that effort did not consider style. Kokol et al. [26] [27] [28]
reported evidence of fractal structure and long-range correlations in source; however, they were
investigating not style but fine details (character, operator, and string patterns) in a small sample
of randomly generated Pascal programs. We study style in a moderate-size sample of highly
functional C programs.
3. METHODS
We use a multi-phase operation to process a single source file: 1) beautify or de-beautify the
source style, if necessary; 2) convert the result to an in-memory representation called an artefact;
3) calculate the fractal dimension of the artefact.
To beautify the source in phase 1, we use a combination of the GNU/Linux indent command and
a kit we developed called Mango [29] (see below). The indent manual page [6] gives input
options for beautifying the source according to four distinct C styles: GNU, K&R (Kernighan and
Ritchie), Berkeley, and Linux (kernel). These styles affect indentation, spacing, and comments;
the differences are detailed in the manual page. The indent command does not, however, change
mnemonics.
Mango is a kit written in Scala and C, with Korn shell scripts to drive the experiments. During the
first phase of processing, Mango mostly does the reverse of indent: it "mangles" or de-beautifies
C source and outputs new source, as we discuss below.
3.1. Baselining measurements
To get baseline measurements of the source, Mango skips phase 1 and sends the unmodified
source directly to phases 2 and 3 to generate the artefact and calculate the fractal dimension,
respectively.
3.2. De-beautifying source
When de-beautifying source in phase 1, Mango does one of the following: remove indentation,
randomize indentation, remove comments, or make the names of variables, functions, macros,
and labels less mnemonic. To remove indentation, Mango trims the leading spaces from each
line. To randomize the indentation, Mango inserts a random number of spaces at the beginning of
each line. To remove comments, Mango strips the file of both block (/* … */) and line (//)
comments. Finally, to make names less mnemonic, Mango shortens them according to the
algorithm below.
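Before turning to that algorithm, the following is a minimal Scala sketch of the first three
treatments, under stated assumptions: these are hypothetical helpers, not the Mango source, and
the regex-based comment stripping is a simplification that ignores comment markers inside string
literals.

import scala.util.Random

object DeBeautify {
  // Remove indentation: trim the leading spaces from every line.
  def removeIndents(src: String): String =
    src.linesIterator.map(_.dropWhile(_ == ' ')).mkString("\n")

  // Randomize indentation: prepend 0 to maxSpaces spaces to every line.
  def randomizeIndents(src: String, maxSpaces: Int): String =
    src.linesIterator
       .map(line => " " * Random.nextInt(maxSpaces + 1) + line.dropWhile(_ == ' '))
       .mkString("\n")

  // Remove block (/* ... */) and line (//) comments.
  def removeComments(src: String): String =
    src.replaceAll("(?s)/\\*.*?\\*/", "").replaceAll("//.*", "")
}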
3.3. Non-mnemonic algorithm
The algorithm to shorten names requires two passes over the source. During the first pass, Mango
filters out keywords, compiler directives, library references, names shorter than a minimum
length (l=3), and names appearing fewer than a minimum number of times (n=3). For names that
pass these filters, Mango calculates new, non-mnemonic names as follows. If a name contains at
least one underscore ("_"), Mango splits the name at the underscores and recombines the whole
first sub-name, followed by an underscore, with the first letter of each subsequent sub-name. If a
name is all uppercase, Mango uses every other letter to re-form the name, effectively cutting the
name in half. If a name is neither of these, Mango shortens the name by half. Mango puts the old
name and the new name in a database for lookup and substitution back into the source during the
second pass. The table below gives some examples of how the algorithm works.
Table 1. Example changes by non-mnemonic algorithm
Old name New name
i i
T_FATE_INIT T_FI
NOUPDATE NUDT
linkname link
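The following is a minimal Scala sketch of the shortening rules, reconstructed from the
description above and consistent with Table 1; the filtering of keywords, compiler directives, and
library references is omitted for brevity, and shorten is a hypothetical helper rather than the
Mango source.

def shorten(name: String): String =
  if (name.length < 3) name                        // "i" is below the minimum length
  else if (name.contains("_")) {                   // "T_FATE_INIT" -> "T_FI"
    val parts = name.split("_")
    parts.head + "_" + parts.tail.map(_.head).mkString
  }
  else if (name.forall(_.isUpper))                 // "NOUPDATE" -> "NUDT"
    name.grouped(2).map(_.head).mkString
  else                                             // "linkname" -> "link"
    name.take(name.length / 2)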
3.4. Mnemonic algorithm
Mango also has a beautify mode of phase 1 to make names more mnemonic. Mango does not, of
course, know the intention of programmers or semantics of names. However, it can simulate
these by lengthening names. The algorithm to lengthen names is similar to the one to shorten
them. During the first pass Mango collects appropriately filtered candidate names of a maximum
length (l=3) and with a minimum frequency (n=3). Mango makes these names a maximum of
length of four by repeating the letters in the name or adding an under bar after the name. The
table below gives some examples of how the algorithm works.
Table 2. Example changes by the mnemonic algorithm
Old name New name
loop loop
foo foo_
go gogo
i iiii
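The following is a Scala sketch of the lengthening rules, consistent with Table 2. The choice
between repeating letters and appending an underscore is our reading of the description, namely,
repeat when the name length divides four evenly and pad with an underscore otherwise; lengthen
is a hypothetical helper.

def lengthen(name: String): String =
  if (name.length > 3) name          // "loop" is above the maximum candidate length
  else if (4 % name.length == 0)     // "i" -> "iiii", "go" -> "gogo"
    name * (4 / name.length)
  else                               // "foo" -> "foo_"
    (name + "____").take(4)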
3.5. Artefact generation
Phase 2 of Mango converts an input source file it to an artefact, which has one of two types of
encodings: literal and block.
With literal encoding, the flat text of the source is written to a buffered image using a graphics
context. The text is Courier New, ten-point, plain style, and black foreground over a white
background with ten-point line height. In this case, the artefact looks identical the flat text except
it’s in bitmap form.
With block encoding, each character in the input is written to the graphics context as a "block",
an 8×10-pixel black filled rectangle on a white background, with two pixels between each
rectangle. Spaces are 10×10-pixel white cells. A block artefact resembles the source but in
digital outline. Block encoding has two advantages. It makes the artefact more robust and more
language-independent. Similarly, it makes the mnemonic and non-mnemonic algorithms more
robust: with block encoding, only the length of a name is relevant to these algorithms, not the
name itself.
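The following is a minimal Scala sketch of block encoding under the stated geometry, in which
each non-space character becomes an 8×10-pixel black rectangle in a 10×10-pixel cell and spaces
stay white; blockEncode is a hypothetical helper, not the Mango source, and assumes a non-empty
sequence of lines.

import java.awt.Color
import java.awt.image.BufferedImage

def blockEncode(lines: Seq[String]): BufferedImage = {
  val cell = 10  // 8-pixel block plus a 2-pixel gap; also the line height
  val img = new BufferedImage(lines.map(_.length).max * cell max 1,
                              lines.length * cell max 1,
                              BufferedImage.TYPE_INT_RGB)
  val g = img.createGraphics()
  g.setColor(Color.WHITE)
  g.fillRect(0, 0, img.getWidth, img.getHeight)    // white background
  g.setColor(Color.BLACK)
  for ((line, row) <- lines.zipWithIndex; (ch, col) <- line.zipWithIndex)
    if (!ch.isWhitespace)
      g.fillRect(col * cell, row * cell, 8, 10)    // 8x10 black block per character
  g.dispose()
  img
}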
The figure below is an example of a simple C program.

#include <stdio.h>

int main(int argc, char** argv) {
    printf("Hello, world!");
    return 0;
}

Figure 1. Simple C file, which is identical to its literal artefact encoding except in bitmap form

A literal artefact looks identical to the figure above except that it is a bitmap. The figure below
shows the same C program as a block artefact.
Figure 2. Same C file as an artefact with block encoding

As the reader can see from the figure above, all the language details have been "blocked". Only
the digital outline persists.

3.6. Fractal dimension calculation

The third and final phase of Mango measures the fractal dimension of the artefact. Mandelbrot
[10] described fractals as geometric objects which are nowhere differentiable, that is, textured,
and self-similar at different scales. We use the geometric interpretation based on reticular cell
counting, or the box counting dimension. We choose this method for two reasons. Firstly, the box
counting dimension is conceptually and computationally straightforward. Secondly, Fractop [17]
provides a tested, high-quality implementation.

Mandelbrot also said fractal objects have fractional dimension, D, namely, a non-whole number
called the fractional dimension. Mathematically, D is given by the Hausdorff dimension [16]:

D(S) = \lim_{\varepsilon \to 0} \frac{\log N_{\varepsilon}(S)}{\log(1/\varepsilon)}    (1)

where S represents a set of points on a surface (e.g., coastlines, brush strokes, source lines of
code, etc.), ε is the size of the measuring tool or ruler, and N_ε(S) is the number of self-similar
objects or subcomponents covered by the measuring tool. For fractal objects, log N_ε(S) will be
greater than log(1/ε) by a fractional amount. If the tool is a uniform grid of square cells, then a
straight line passes through twice as many cells if the cell length is reduced by a factor of two. A
fractal object passes through more than twice as many cells.

The artefact is S from Equation 1. Mango uses the Fractop default grid sizes of 2, 3, 4, 6, 8, 12,
16, 32, 64, and 128, measured in pixels, for ε. For any given input artefact, Mango returns D,
which is the slope of the line describing how the log of the number of cells intersected by the
surface increases as the log of the cell size decreases.
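To make the calculation concrete, the following is a minimal Scala sketch of reticular cell
counting over a bitmap artefact. It is not the Fractop implementation: boxCountDimension is a
hypothetical helper that counts the grid cells containing any non-white pixel at each cell size and
estimates D as the least-squares slope of log N_ε(S) against log(1/ε), assuming the artefact
contains at least one marked pixel.

import java.awt.image.BufferedImage

def boxCountDimension(img: BufferedImage,
                      sizes: Seq[Int] = Seq(2, 3, 4, 6, 8, 12, 16, 32, 64, 128)): Double = {
  // True if any pixel in the eps-sized cell at (x0, y0) is not white.
  def marked(x0: Int, y0: Int, eps: Int): Boolean =
    (y0 until math.min(y0 + eps, img.getHeight)).exists(y =>
      (x0 until math.min(x0 + eps, img.getWidth)).exists(x =>
        (img.getRGB(x, y) & 0xffffff) != 0xffffff))
  // Number of occupied cells for a grid of cell size eps.
  def count(eps: Int): Int =
    (for (y <- 0 until img.getHeight by eps;
          x <- 0 until img.getWidth by eps
          if marked(x, y, eps)) yield 1).size
  val xs = sizes.map(e => math.log(1.0 / e))            // log(1/eps)
  val ys = sizes.map(e => math.log(count(e).toDouble))  // log N_eps(S)
  val (mx, my) = (xs.sum / xs.size, ys.sum / ys.size)
  xs.zip(ys).map { case (x, y) => (x - mx) * (y - my) }.sum /
    xs.map(x => (x - mx) * (x - mx)).sum                // least-squares slope = D
}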
4. EXPERIMENT DESIGN

The GNU/Linux Core Utilities version 8.10 [9] comprise 114 dot-C source files. First, we
generated descriptive statistics for this test bed, namely the number of files and LOC.
We then ran three experiments as follows:
1. Established baseline D using the original, unmodified C files with literal and block
artefact encodings.
2. Treated the source with de-beautifying regimes using Mango to i) remove indentation, ii)
randomize indentation by 0-20 spaces, iii) randomize indentation by 0-40 spaces, iv)
make names non-mnemonic, and v) remove comments.
3. Treated the source with beautifying regimes, using Mango to i) make names more
mnemonic and using GNU/Linux indent to refactor the source with ii) GNU, iii) K&R,
iv) Berkeley, and v) Linux style settings.
We observed the frequency and direction in which D changes relative to the baseline. We
computed the percentage change and the one-tailed P-value using the Binomial test [30]. We also
measured the rank correlation coefficient, Spearman’s rho [30], between the baseline D and lines
of code over all source files.
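For concreteness, the following is a minimal Scala sketch of a one-tailed binomial (sign) test of
the kind described: under the null hypothesis, each file's D is equally likely to move up or down,
so the P-value is the probability that at least k of n files move in the observed direction by
chance. binomialTailP is a hypothetical helper, not the authors' statistical code.

def binomialTailP(n: Int, k: Int): Double = {
  // log C(n, r), computed as a sum to avoid overflow for n = 114
  def logChoose(n: Int, r: Int): Double =
    (1 to r).map(i => math.log(n - r + i) - math.log(i)).sum
  (k to n).map(i => math.exp(logChoose(n, i) + n * math.log(0.5))).sum
}

For example, 112 of 114 files moving one way (Table 5) yields a P-value far below 0.0001, while
74 of 114 (Table 7, Berkeley style) yields roughly the reported 0.0009.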
5. RESULTS
The table below gives the test bed summary statistics. The range of LOC is fairly wide, from files
with just two lines to several thousand lines.
Table 3. Test bed summary statistics
Files 114
Total LOC 69,722
Median LOC 356
Maximum LOC 4,733
Minimum LOC 2
The table below gives the baseline fractal dimension values for literal and block encodings.
Table 4. Baseline analysis
Literal Block
Median D 1.4592 1.6500
Maximum D 1.5448 1.7176
Minimum D 0.9836 1.4011
r (LOC v. D) 0.0878 0.0878
5.1. De-beautifying treatments

The tables below give the direction and frequency of changes in D relative to the baseline. As the
reader can see, the fractal dimension decreases under each treatment except indentation removal,
with little difference between literal and block encoded artefacts. Removing indentation is also
statistically significant, but as a contrarian indicator: rather than decreasing D, it increases it
relative to the baseline. We explore this matter further below.
Table 5. Changes in D in relation to the baseline with literal encoding
Treatment Dir. Freq. Rate P
Random indents 0-20 down 112 98% <0.0001
Random indents 0-40 down 109 96% <0.0001
Remove indents up 107 94% <0.0001
Remove comments down 82 72% <0.0001
Non-mnemonic down 104 91% <0.0001
Table 6. Changes in D in relation to the baseline with block encoding
Treatment Dir. Freq. Rate P
Random indents 0-20 down 113 99% <0.0001
Random indents 0-40 down 113 99% <0.0001
Remove indents up 107 94% <0.0001
Remove comments down 112 98% <0.0001
Non-mnemonic down 106 93% <0.0001
5.2. Beautifying treatments

The tables below give the direction and frequency of changes in D relative to the baseline.
Table 7. Changes in D in relation to the baseline with literal encoding
Treatment Dir. Freq. Rate P
GNU style up 100 88% <0.0001
K&R style up 105 92% <0.0001
Berkeley style up 74 65% 0.0009
Linux style up 106 93% <0.0001
Mnemonic up 97 85% <0.0001
Table 8. Changes in D in relation to the baseline with block encoding
Treatment Dir. Freq. Rate P
GNU style up 112 98% <0.0001
K&R style up 104 91% <0.0001
Berkeley style up 78 68% <0.0001
Linux style up 105 92% <0.0001
Mnemonic up 99 87% <0.0001
5.3. No indentation as a contrarian indicator

The experiments in section 5.1, "De-beautifying treatments," removed indentation from all the
source lines, and we found D increased. We hypothesized that if removing indentation were a
contrarian indicator, we would expect D to rise from the baseline (0% removal rate) to complete
indentation removal (100% rate). The null hypothesis is that D does not change with the removal
rate. To test the null hypothesis, we examined several files and found we could reject the null, at
least on a subset of typical-size files. For instance, mktemp.c has 358 LOC, which is very close to
the median file size. We removed the
indentation on randomly selected lines at 75%, 50%, and 25% rates and measured D in ten trials
using literal encoding. The data for mktemp.c in the table below are typical of the other programs
we examined.
Table 9. D for different random removal rates over ten trials for mktemp.c
Indentation removal rate
Trial 25% 50% 75%
1 1.468205428 1.470438295 1.476648907
2 1.46463698 1.472219091 1.47721244
3 1.465692458 1.470056954 1.475848552
4 1.465102815 1.47256331 1.479550183
5 1.464691894 1.469024252 1.477846232
6 1.464413407 1.470376845 1.480434004
7 1.465313286 1.474732486 1.481568639
8 1.466252928 1.470800863 1.480060737
9 1.469609632 1.470203698 1.474179211
10 1.467231153 1.468487205 1.480865379
Median 1.465502872 1.47040757 1.478698207
The chart below shows the plot with the median values for 25%, 50%, and 75% removal rates,
the baseline (0%), and complete removal (100%).
Figure 3. Indentation removal rate vs. D for mktemp.c, where 0% is the baseline and 100% is
removal of all indentation.
6. DISCUSSION
The first observation we make is that generally D_literal < D_block. This makes sense since the
block encoding covers more surface area, S, in the artefact than the literal encoding. Our
preference is for block encoding because of the robustness we mentioned earlier. Nevertheless,
the pattern of
results is consistent between literal and block encoding. When we de-beautify the source, D
decreases; when we beautify the source, D increases.
The exception, we noted, is the removal of all indentation. Yet Figure 3 suggests that removing
indentation is a contrarian indicator of style. We believe the contrariness is a peculiar property of
the fractal dimension. That is, keeping in mind that D=2 means there is no texture and we have a
completely covered surface of a solid colour, the larger D for removing indentation implies
greater surface area. Thus, having all the text aligned on the left gives a more compact, and thus
more complete, surface.
All the beautifying treatments increase D. The indent command programmed with the Linux style
is the most effective at raising D, and the Berkeley style the least effective.
What is most interesting is that, although the GNU/Linux Core Utilities were presumably written
to the GNU style guide, the GNU style-beautifying regime nonetheless increases D. If changes in
D represent changes in style, as the data suggest, then there may yet be room for style
improvements in the Core Utilities.
This observation offers insight into how to formulate a relative aesthetic value. Consider, for
instance, the conflict between regimes that beautify code and increase D and the contrarian effect
of removing all indentation, which de-beautifies the code but also increases D. One way to
resolve this is to randomly sample the removal of indentation at different rates, measure D for
each rate as we did above, and test the slope of the line. If the slope is near zero, we assume the
indentation must be poor. In fact, the slope might serve as the aesthetic value of the indentation.
A similar process could be developed for documentation and mnemonics.
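As a sketch of this proposal, the aesthetic value of indentation could be computed as the
least-squares slope of D against the removal rate; here measureD is a hypothetical stand-in for
running all three Mango phases at a given rate, and a near-zero slope would suggest the
indentation was already poor.

def indentationValue(measureD: Double => Double): Double = {
  val rates = Seq(0.0, 0.25, 0.50, 0.75, 1.0)    // sampled removal rates
  val ds = rates.map(measureD)                   // median D at each rate
  val (mr, md) = (rates.sum / rates.size, ds.sum / ds.size)
  rates.zip(ds).map { case (r, d) => (r - mr) * (d - md) }.sum /
    rates.map(r => (r - mr) * (r - mr)).sum      // slope of D vs. rate
}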
7. CONCLUSIONS
We have seen how systematic changes in the style of C programs affect the fractal dimension in a
statistically significant manner. Future research may consider the nature of these changes, i.e.,
how much beauty was added or removed by a change in style as suggested in the discussion.
Another useful avenue is confirming these results for programming languages other than C.
REFERENCES
[1] Vermeulen, Allan & Ambler, Scott W., (2000) The Elements of Java Style, Cambridge
[2] Oualline, S., (1992) C Elements of Style: The Programmer’s Style Manual for Elegant C and C++
Programs, M&T
[3] Google, Inc., (2015) “google-styleguide”, http://code.google.com/p/google-styleguide/, accessed 11-
May-2015
[4] NOAA National Weather Service, National Weather Service Office of Hydrologic Development,
(2007) “General Software Development Standards and Guidelines Version 3.5”
[5] Kant, Immanuel, (1978) The Critique of Judgment (1790), translation by J. C. Meredith, Oxford
University Press
[6] Free Software Foundation, (2015) http://linux.die.net/man/1/indent, accessed 13-May-2015
[7] Mandelbrot, Benoit, (1967) “How long is the coast of Britain? Statistical self-similarity and fractional
dimension,” Science, vol. 156 (3775), p. 636-638
[8] Peitgen, Heinz-Otto & Richter, P.H., (1986) The Beauty of Fractals, Springer
[9] Free Software Foundation (2015) http://www.gnu.org/software/coreutils/coreutils.html, accessed 11-
May-2015
[10] Mandelbrot, Benoit, (1982) The Fractal Geometry of Nature, Freeman
[11] Posnett, Daryl, Hindle, Abram & Devanbu, Prem, (2011) “A Simpler Model of Software
Readability”, MSR ’11 Proceedings of the 8th Working Conference on Mining Software Repositories
[12] Buse, Raymond P.L., & Weimer, Westley R., (2008) “A metric for software readability,” ISSTA '08
Proceedings of the 2008 international symposium on Software testing and analysis
[13] Tran-Cao, De, Lévesque, Ghislain, & Meunier, Jean-Guy, (2004) "A Field Study of Software
Functional Complexity Measurement," Proceedings of the 14th International Workshop on Software
Measurement
[14] Oram, Andy & Wilson, Greg, eds. (2007) Beautiful Code, O’Reilly
[15] Gabriel, Richard, (1996) Patterns of Software, Oxford
[16] Schroeder, M., (2009) Fractals, Chaos, and Power Laws, Dover
[17] Cornforth, David, Jelinek, Herbert, Peichl, Leo, (2002) “Fractop: A Tool for Automated Biological
Image Classification,” Proceedings of the Sixth Australia-Japan Joint Workshop on Intelligent and
Evolutionary Systems, p. 1-8
[18] Gerl, Peter, Schönlieb, Carola, Wang, Kung Cheih, (2004) “The Use of Fractal Dimension in Arts
Analysis,” Harmonic and Fractal Image Analysis, 2004, p. 70-73
[19] Coddington, Jim, Elton, John, & Rockmore, Daniel, Wang, Yang, (2008) “Multifractal analysis and
authentication of Jackson Pollock paintings” Proc. SPIE 6810, Computer Image Analysis in the Study
of Art, 68100F; doi: 10.1117/12.765015
[20] Taylor, R.P, Micolich, A.P., Jonas, D., (1999) “Fractal analysis of Pollock’s drip paintings,” Nature,
vol. 399, June 1999
[21] Coleman, R, (2009) “Long-Memory of Pathfinding Aesthetics,” International Journal of Computer
Games Technology, Volume 2009, Article ID 318505
[22] Coleman, R., (2009) “Fractal Analysis of Stealthy Pathfinding,” International Journal of Computer
Games Technology, Special Issue on Artificial Intelligence for Computer Games, Volume 2009,
Article ID 670459
[23] Coleman, R., (2008) “Fractal Analysis of Pathfinding Aesthetics,” International Journal of Simulation
Modeling, Vol. 7, No. 2
[24] Coleman, Ron & Johnson, Matthew, (2014) “A Study of Scala Repositories on Github”, International
Journal of Advanced Computer Science and Applications, vol. 5, issue 7, August 2014
[25] Coleman, Ron, Johnson, Matthew, (2014) “Power-Laws and Structure in Functional Programs,”
Proceedings of 2014 International Conference on Computational Science & Computational
Intelligence, Las Vegas, NV, IEEE Computer Society
[26] Kokol, P., Brest, J. & Zumer, V., (1997) “Long-range correlations in computer programs,”
Cybernetics and Systems, vol. 28, no. 1, p. 43-57
[27] Kokol, P. & Brest, J., (1998) “Fractal structure of random programs,” SIGPLAN Notices, vol. 33,
no. 6, p. 33-38
[28] Kokol, P., (1994) “Searching for fractal structure in computer programs,” SIGPLAN Notices,
vol. 29, no. 1
[29] Coleman, R., Pretty project, (2015) http://github.com/roncoleman125/Pretty, accessed 11-May-2015
[30] Conover, W.J., (1999) Practical Non-Parametric Statistics, Wiley