Increasing abstraction in software development and the advent of multi-core computers have shifted the burden of distributed software performance from network and chip designers to software architects and developers. We need development strategies that integrate code parallelization, concurrency, multithreading, distributed resource allocation and distributed processing. In this paper, a new software development strategy that integrates these factors is experimented on further with respect to parallelism. The strategy is multidimensional and aligns distributed conceptualization along a single path. It requires application developers to reason about usability, simplicity, resource distribution, parallelization of code where necessary, processing time and cost trade-offs, as well as security and concurrency issues, along a balanced path from the origin of the network application to its retirement.
This paper briefly discusses the components and connectors of the Aesop system, then examines the architectural connectors of Apache ActiveMQ; graphs are used to support the readability and modularity of the document.
Solid Principles, for better cohesion and lower coupling (Mohammad Shawahneh)
This paper discusses the SOLID principles and how they help satisfy the design goals of low coupling and high cohesion; with examples, we show how the SOLID principles can guide designers toward more cohesive, less coupled designs.
Evolving role of Software, Legacy software, CASE tools, Process Models, CMMI (nimmik4u)
The Evolving Role of Software – The Changing Nature of Software – Legacy Software – Introduction to CASE Tools – A Generic View of Process – A Layered Technology – A Process Framework – The Capability Maturity Model Integration (CMMI) – Process Assessment – Personal and Team Process Models – Product and Process. Process Models – The Waterfall Model – Incremental Process Models – The Incremental Model – The RAD Model – Evolutionary Process Models – Prototyping – The Spiral Model – The Concurrent Development Model – Specialized Process Models – The Unified Process.
Software Architecture by Reuse, Composition and Customization (Ivano Malavolta)
Ivano Malavolta.
Research Fellow at the Computer Science Department of the University of L'Aquila (Italy).
PhD thesis presentation, University of L'Aquila, March 2012.
The full PhD thesis is available here:
http://www.di.univaq.it/malavolta/files/IvanoMalavoltaPhDThesis.pdf
Software engineering, its characteristics, changing nature of software, evolving nature of software, legacy software, generic view of software, process flow, umbrella activities, CMMI, process assessment, team and personal software process
THE UNIFIED APPROACH FOR ORGANIZATIONAL NETWORK VULNERABILITY ASSESSMENT (ijseajournal)
Today's business network infrastructure changes rapidly, with new servers, services, connections and ports added frequently, at times daily, and with an uncontrolled influx of laptops, storage media and wireless networks. With the growing number of vulnerabilities and exploits, coupled with the continual evolution of IT infrastructure, organizations now require more frequent vulnerability assessments. This paper proposes a new approach to network vulnerability assessment, the Unified Process for Network Vulnerability Assessment (hereafter "unified NVA"), derived from the Unified Software Development Process (Unified Process), a popular iterative and incremental software development process framework.
Integrating Profiling into MDE Compilers (ijseajournal)
Scientific computation demands ever more performance from its algorithms. New massively parallel architectures suit these algorithms well and are known for high performance and power efficiency. Unfortunately, because parallel programming for these architectures requires a complex distribution of tasks and data, developers find it difficult to implement their applications effectively. Although source-to-source approaches aim to provide a low learning curve for parallel programming and to exploit architectural features to create optimized applications, programming remains difficult for newcomers. This work aims to improve performance by feeding back to the high-level models execution-specific data from a profiling tool, enhanced with advice computed by an analysis engine. To keep the link between execution and model, the process is based on a traceability mechanism. Once the model is automatically annotated, it can be refactored to achieve better performance in the regenerated code. This work thus keeps model and code coherent while still harnessing the power of parallel architectures. To illustrate and clarify the key points of this approach, we provide an experimental example in the GPU context, using a transformation chain from UML-MARTE models to OpenCL code.
This presentation covers the software design practices followed in the software engineering industry. Readers can study the software design concepts in detail (abstraction, architecture, patterns, modularity, information hiding, refinement, functional independence, cohesion, coupling and refactoring) along with the design process.
Design principles are then presented together with architectural design and architectural styles: data-centered architecture, data-flow architecture, call-and-return architecture, object-oriented architecture, layered architecture, and other architectures named at the end.
Component-level design is discussed next, followed by UI design and its rules, UI design models, web application design, and WebApp interface design.
Comment back if you have any queries about it.
STATISTICAL ANALYSIS FOR PERFORMANCE COMPARISON (ijseajournal)
Performance, responsiveness and scalability are make-or-break qualities for software, and nearly everyone runs into performance problems at one time or another. This paper discusses the performance issues faced during the Pre Examination Process Automation System (PEPAS) project, implemented in Java technology, the challenges faced during the project's life cycle, and the mitigation actions performed. It compares three Java technologies and shows how improvements in the application's response time were achieved through statistical analysis. The paper concludes with an analysis of the results.
Introduction to Software Engineering & Information Technology (Gaditek)
This slide deck introduces Software Engineering & Information Technology and guides you through many aspects of the subject.
A Review of Data Access Optimization Techniques in a Distributed Database Man... (Editor IJCATR)
In today's computing world, accessing and managing data has become one of the most significant tasks. Applications as varied as weather-satellite feeds and military operations employ huge databases that store graphics, images, text and other forms of data, and the main concern in maintaining this information is accessing it efficiently. Database optimization techniques have been developed to address this issue, which may otherwise limit the performance of a database to the point of vulnerability. We therefore discuss aspects of performance optimization related to data access in distributed databases, and further examine the effect of these optimization techniques.
An article that was placed in the company newsletter reviewing the efforts to develop and grow the electronics and semiconductor business at Air Products.
Assistive System Using Eye Gaze Estimation for Amyotrophic Lateral Sclerosis ... (Editor IJCATR)
Amyotrophic lateral sclerosis (ALS) patients lose control of all muscles except the eyes in the later stages of the disease. This paper aims to develop an eye-based assistive system, controlled by eye gaze, to help ALS patients improve their quality of life. Two main functions are proposed. The first, called HelpCall, detects the user's eye gaze to activate corresponding events; ALS patients can "talk" with other people more easily by looking at specific buttons in the HelpCall system. The second is an eye-controlled browser that allows users to browse web pages on the Internet. We design an interface that embeds the IE browser behind several buttons controlled by the user's eye gaze, so that ALS patients can visit the Internet using only their eyes. This paper discusses our ideas for the assistive system and then describes the design and implementation of the proposed system in detail.
Review of the Introduction and Use of RFID (Editor IJCATR)
We live in an age where technology is an integral part of daily human life. Technology aims to be secure, accurate and time-saving for mankind and nature, but it may also bring disadvantages and challenges. In this paper we describe RFID technology, which today is used in hospitals, shops, airports and bird tracking. It can be used in hospital intensive care units for monitoring and remote patient care, so that patients and physicians need not be in close physical proximity, with resulting safety benefits for both. Implementing the technology securely, however, remains a challenge. This paper introduces the applications of RFID and the challenges this technology faces.
Plant Introductions & Evolution: Hybrid Speciation and Gene Transfer (University of Adelaide)
Professor Richard Abbott presents a seminar entitled "Gene transfer and plant evolution: What we have learnt from Senecio." Richard has been at St Andrews University since October 1971 and currently holds a Chair in Plant Evolution. He is also an Editor of New Phytologist, and Associate Editor of Molecular Ecology, and Plant Ecology & Diversity. Richard’s main research focus is on the evolutionary consequences of hybridization in plants using the genus Senecio (Asteraceae) as a system for study.
A Comparative Study of Forward and Reverse Engineering (ijsrd.com)
With software development booming compared with 20 years ago, software developed in the past may or may not have well-supported documentation maintained during its evolution. This can widen the specification gap between the documentation and the legacy code when making further evolutions and updates. Understanding the legacy code and the underlying decisions made during development is the prime goal, and it is well supported by reverse engineering. In this paper, we compare transformational forward engineering, where a stepwise abstraction is obtained, with the transformational reverse methodology. Because the forward transformation process produces overlapping decisions, performance is affected; hence the transformational method of reverse engineering, which is essentially forward engineering run backwards, is suitable. Moreover, the design knowledge recovered constitutes domain knowledge that forward engineers can reuse in the future.
Introduction to software engineering
Software products
Why Software is Important?
Software costs
Features of Software?
Software Applications
Software—New Categories
Software Engineering
Importance of Software Engineering
Essential attributes / Characteristics of good software
Software Components
Software Process
Five Activities of a Generic Process framework
Relative Costs of Fixing Software Faults
Software Qualities
Software crisis
Software Development Stages/SDLC
What is Software Verification
Advantages of Software Verification
Advantages of Validation
A SURVEY ON ACCURACY OF REQUIREMENT TRACEABILITY LINKS DURING SOFTWARE DEVELO... (ijiert bestjournal)
A number of routing protocols have been proposed for data transmission in WSNs. Initially, single-path routing schemes with a number of variations were proposed, but single-path routing still had drawbacks: it could not provide reliability and high throughput, and security was not considered during routing. Recently, to remove the drawbacks of single-path routing, a new technique called multipath routing has been proposed. In this paper we discuss different multipath routing protocols and their variants. Multipath routing was initially proposed to guarantee packet delivery to the sink in case of link or node failure; other protocols have since been proposed for reliability, energy saving, security and high throughput, and some multipath routing protocols also address load balancing and security during packet transmission.
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science and technology, covering new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance and readability. The articles published in our journal can be accessed online.
Abstract
Researchers in software engineering, business process improvement and information engineering all want to drastically modernize software life-cycle processes and technologies to correct existing problems and improve software quality. Research goals have included ancillary issues such as improving user services through conversion to new platforms and facilitating software processes by adopting automated tools. Automated tools for software development, understanding, maintenance and documentation add to process maturity, leading to better quality and reliability of computer services and greater customer satisfaction. This paper focuses on the critical issues of legacy program improvement, which requires estimating the program from various perspectives. The paper highlights various elements of legacy program complexity that can then be taken into account in further program development.
Keywords: Legacy, Program, Software complexity, Code, Integration
Similar to Improved Strategy for Distributed Processing and Network Application Development
Text Mining in Digital Libraries using OKAPI BM25 Model (Editor IJCATR)
The emergence of the Internet has made vast amounts of information available and easily accessible online. As a result, most libraries have digitized their content to remain relevant to their users and to keep pace with the Internet's advancement. However, these digital libraries have been criticized for using inefficient information retrieval models that do not apply relevance ranking to retrieved results. This paper proposes the use of the Okapi BM25 model in text mining as a means of improving relevance ranking in digital libraries; Okapi BM25 was selected because it is a probability-based relevance ranking algorithm. A case study was conducted, with the model design based on information retrieval processes. The performance of the Boolean, vector space and Okapi BM25 models was compared for data retrieval, and relevant ranked documents were retrieved and displayed on the OPAC framework search page. The results revealed that Okapi BM25 outperformed the Boolean and vector space models. This paper therefore proposes using the Okapi BM25 model to reward terms according to their relative frequencies in a document, so as to improve the performance of text mining in digital libraries.
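The Okapi BM25 model mentioned above is a standard probabilistic ranking function. As a hedged illustration (a sketch of the textbook formula with its usual free parameters k1 and b, not the paper's actual implementation):

```python
import math

def bm25_score(query_terms, doc_terms, doc_freqs, num_docs, avg_doc_len,
               k1=1.5, b=0.75):
    """Okapi BM25 score of one document for a query (textbook formulation)."""
    score = 0.0
    doc_len = len(doc_terms)
    for term in query_terms:
        df = doc_freqs.get(term, 0)   # number of documents containing the term
        if df == 0:
            continue
        # smoothed inverse document frequency
        idf = math.log((num_docs - df + 0.5) / (df + 0.5) + 1)
        tf = doc_terms.count(term)    # term frequency in this document
        # saturating tf component, normalized by document length
        score += idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return score
```

Documents are then sorted by this score to produce the relevance ranking; a document containing none of the query terms scores zero.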
Green Computing, Eco Trends, Climate Change, E-waste and Eco-friendly (Editor IJCATR)
This study focused on the practice of using computing resources more efficiently while maintaining or increasing overall performance. Sustainable IT services require the integration of green computing practices such as power management, virtualization, improving cooling technology, recycling, electronic waste disposal, and optimization of the IT infrastructure to meet sustainability requirements. Studies have shown that costs of power utilized by IT departments can approach 50% of the overall energy costs for an organization. While there is an expectation that green IT should lower costs and the firm’s impact on the environment, there has been far less attention directed at understanding the strategic benefits of sustainable IT services in terms of the creation of customer value, business value and societal value. This paper provides a review of the literature on sustainable IT, key areas of focus, and identifies a core set of principles to guide sustainable IT service design.
Policies for Green Computing and E-Waste in Nigeria (Editor IJCATR)
Computers today are an integral part of individuals’ lives all around the world, but unfortunately these devices are toxic to the environment given the materials used, their limited battery life and technological obsolescence. Individuals are concerned about the hazardous materials ever present in computers, even if the importance of various attributes differs, and that a more environment -friendly attitude can be obtained through exposure to educational materials. In this paper, we aim to delineate the problem of e-waste in Nigeria and highlight a series of measures and the advantage they herald for our country and propose a series of action steps to develop in these areas further. It is possible for Nigeria to have an immediate economic stimulus and job creation while moving quickly to abide by the requirements of climate change legislation and energy efficiency directives. The costs of implementing energy efficiency and renewable energy measures are minimal as they are not cash expenditures but rather investments paid back by future, continuous energy savings.
Performance Evaluation of VANETs for Evaluating Node Stability in Dynamic Sce... (Editor IJCATR)
Vehicular ad hoc networks (VANETs) are a promising area of research that enables interconnection among moving vehicles and between mobile units (vehicles) and road-side units (RSUs). In VANETs, mobile vehicles can be organized into clusters to promote interconnection links, and the cluster arrangement, according to size and geographical extent, has a serious influence on communication quality. VANETs are a subclass of mobile ad hoc networks involving more complex mobility patterns; because of this mobility, the topology changes very frequently, which raises a number of technical challenges, including network stability. There is therefore a need for cluster configurations leading to a more stable, realistic network. This paper investigates various simulation scenarios in which clusters are generated using the k-means algorithm and their number is varied to find the most stable configuration in a realistic road scenario.
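The k-means clustering used above to group vehicles can be sketched generically; this is plain textbook k-means over 2-D positions with a naive first-k initialization, not the paper's simulation setup:

```python
def kmeans(points, k, iters=50):
    """Plain k-means over 2-D positions (e.g. vehicle coordinates).

    Naive initialization: the first k points serve as initial centroids.
    Returns (centroids, labels) where labels[i] is the cluster of points[i].
    """
    centroids = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        labels = [min(range(k),
                      key=lambda j: (p[0] - centroids[j][0]) ** 2
                                  + (p[1] - centroids[j][1]) ** 2)
                  for p in points]
        # update step: each centroid moves to the mean of its members
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centroids[j] = [sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members)]
    return centroids, labels
```

Varying `k` and re-running, as the paper describes, lets one compare the stability of the resulting cluster configurations.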
Optimum Location of DG Units Considering Operation Conditions (Editor IJCATR)
The optimal sizing and placement of distributed generation (DG) units is becoming very attractive to researchers. In this paper, a two-stage approach is used for the allocation and sizing of DGs in a distribution system with a time-varying load model. Strategic placement of DGs can help reduce energy losses and improve the voltage profile. The proposed work considers time-varying loads, which can be useful for selecting locations and optimizing DG operation, and the method has the potential to integrate available DGs by identifying the best locations in a power system. The proposed method is demonstrated on a 9-bus test system.
Analysis of Comparison of Fuzzy Knn, C4.5 Algorithm, and Naïve Bayes Classifi... (Editor IJCATR)
Early detection of diabetes mellitus (DM) can prevent or inhibit complications. Several laboratory tests must be done to detect DM, and the results of these tests are converted into training data. The training data used in this study were generated from the UCI Pima database, with six attributes used to classify diabetes as positive or negative. Of the various commonly used classification methods, three were compared in this study on one identical case: fuzzy KNN, the C4.5 algorithm, and the Naïve Bayes classifier (NBC). The objective was to create software to classify DM using the tested methods and to compare the three methods on accuracy, precision and recall. The results showed that the best method was fuzzy KNN, with average and maximum accuracy reaching 96% and 98%, respectively. In second place, the NBC method had average and maximum accuracy of 87.5% and 90%, while the C4.5 algorithm had average and maximum accuracy of 79.5% and 86%.
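The accuracy, precision and recall used above to compare the three classifiers are the standard metrics; a minimal sketch of how they are computed from true and predicted labels (illustrative only, not the paper's code):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision and recall for binary label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted positives, how many are real
    recall = tp / (tp + fn) if tp + fn else 0.0     # of real positives, how many were found
    return accuracy, precision, recall
```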
Web Scraping for Estimating new Record from Source Site (Editor IJCATR)
Research in the field of competitive intelligence and research in the field of web scraping have a mutually symbiotic relationship. In today's information age, websites serve as a main data source. This research focuses on how to get data from websites and how to slow the intensity of downloads. One problem is that the source websites are autonomous, so the structure of their content is liable to change at any time; another is that the Snort intrusion detection system installed on the server detects crawler bots. The researchers therefore propose using the Mining Data Records (MDR) method together with exponential smoothing, so that the scraper adapts to changes in content structure and browses or fetches automatically, following the pattern of news occurrences. In tests with a threshold of 0.3 for MDR and a similarity threshold score of 0.65 for STM, recall and precision yield an average f-measure of 92.6%, while estimation by exponential smoothing with α = 0.5 produces an MAE of 18.2 duplicate data records, slowing the fixed download/fetch schedule from 21.8 to 3.6 data records in the average time between news occurrences.
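The exponential smoothing estimator mentioned above (with α = 0.5) follows the standard single-smoothing recurrence s_t = α·x_t + (1−α)·s_{t−1}. A minimal sketch of it, together with the MAE used to evaluate the forecasts (generic formulas, not the paper's implementation):

```python
def exponential_smoothing(series, alpha=0.5):
    """Single exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1}.

    The last smoothed value serves as the forecast for the next period.
    """
    s = series[0]
    smoothed = [s]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
        smoothed.append(s)
    return smoothed

def mean_absolute_error(actual, predicted):
    """MAE between observed values and their forecasts."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
```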
Evaluating Semantic Similarity between Biomedical Concepts/Classes through S... (Editor IJCATR)
Most existing semantic similarity measures that use ontology structure as their primary source can measure the similarity between concepts/classes within a single ontology. Ontology-based semantic similarity techniques, including structure-based techniques (the path length measure, Wu and Palmer's measure, and Leacock and Chodorow's measure), information content-based techniques (Resnik's measure and Lin's measure), and biomedical domain ontology techniques (Al-Mubaid and Nguyen's measure, SimDist), were evaluated against human experts' ratings and compared on sets of concepts using the ICD-10 "V1.0" terminology within the UMLS. The experimental results validate the efficiency of the SemDist technique in a single ontology and demonstrate that, compared with existing techniques, SemDist gives the best overall correlation with experts' ratings.
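As a hedged illustration of the structure-based family named above, its simplest member, a path-length measure, can be sketched over a toy is-a hierarchy given as a child → parent map (a generic formulation, not SemDist or the paper's evaluation code):

```python
def path_similarity(taxonomy, a, b):
    """Path-length similarity over an is-a tree: sim = 1 / (edges on shortest path + 1).

    `taxonomy` maps each concept to its parent; roots are absent as keys.
    """
    def ancestors(node):
        chain = [node]
        while node in taxonomy:
            node = taxonomy[node]
            chain.append(node)
        return chain

    pa, pb = ancestors(a), ancestors(b)
    # lowest common subsumer: first ancestor of a that is also an ancestor of b
    for depth_a, node in enumerate(pa):
        if node in pb:
            return 1.0 / (depth_a + pb.index(node) + 1)
    return 0.0  # no common ancestor
```

Identical concepts score 1.0, and similarity decays as the path through the common ancestor lengthens; Wu-Palmer and Leacock-Chodorow refine this with depth and scaling terms.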
Semantic Similarity Measures between Terms in the Biomedical Domain within f... (Editor IJCATR)
Techniques and tests are the tools used to measure the goodness of an ontology or its resources. Measuring the similarity between biomedical classes/concepts is an important task for biomedical information extraction and knowledge discovery, and most semantic similarity techniques can be adapted for use in the biomedical domain (UMLS). Many experiments have been conducted to check the applicability of these measures. In this paper, we measure the semantic similarity between two terms within a single ontology or across multiple ontologies, using ICD-10 "V1.0" as the primary source, and compare our results to human experts' scores via the correlation coefficient.
A Strategy for Improving the Performance of Small Files in Openstack Swift (Editor IJCATR)
Adding an aggregate storage module is an effective way to improve the storage access performance of small files in OpenStack Swift. Because Swift incurs heavy disk operations when querying metadata, its transfer performance for large numbers of small files is low. In this paper, we propose an aggregated storage strategy (ASS) and implement it in Swift. ASS comprises two parts: merge storage and index storage. In the first stage, ASS arranges the write request queue in chronological order and stores objects in volumes; these volumes are the large files actually stored in Swift. In the second stage, the object-to-volume mapping information is stored in a key-value store. The experimental results show that ASS can effectively improve Swift's small-file transfer performance.
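The merge-storage plus index-storage idea can be illustrated with a toy in-memory version: small objects are appended to one large "volume" and located through a key → (offset, length) index, mirroring the two parts of ASS (a sketch under those assumptions, not Swift's actual code):

```python
class VolumeStore:
    """Toy aggregated store: small objects are appended to a single volume
    blob and located via an in-memory name -> (offset, length) index."""

    def __init__(self):
        self.volume = bytearray()   # stands in for the large volume file
        self.index = {}             # stands in for the key-value index store

    def put(self, name, data):
        # record where this object lands, then append it (merge storage)
        self.index[name] = (len(self.volume), len(data))
        self.volume += data

    def get(self, name):
        # one index lookup replaces per-object metadata disk operations
        offset, length = self.index[name]
        return bytes(self.volume[offset:offset + length])
```

The win is that many small writes become sequential appends to one large file, and reads cost a single index lookup plus a ranged read.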
Integrated System for Vehicle Clearance and Registration (Editor IJCATR)
Efficient management and control of a government's cash resources rely on government banking arrangements. Nigeria, like many low-income countries, employed fragmented systems to handle government receipts and payments. In 2016, Nigeria implemented a unified structure, as recommended by the IMF, in which all government funds are collected in one account, reducing borrowing costs, extending credit and improving the government's fiscal policy, among other benefits. This situation motivated us to design and implement an integrated system for vehicle clearance and registration. The system complies with the new Treasury Single Account policy to enable proper interaction and collaboration among the five agencies (NCS, FRSC, SBIR, VIO and NPF) saddled with vehicular administration and activities in Nigeria. Since the system is web-based, the Object Oriented Hypermedia Design Methodology (OOHDM) is used, with tools such as PHP, JavaScript, CSS, HTML, AJAX and other web development technologies. The result is a web-based system that gives proper information about a vehicle, from the exact date of importation to registration and license renewal. Vehicle owner information, customs duty information, plate number registration details and so on can also be efficiently retrieved from the system by any of the agencies without contacting another agency. Moreover, the number plate will no longer be the only means of vehicle identification, as is presently the case in Nigeria: on payment of duty, the unified system automatically generates and assigns a Unique Vehicle Identification Pin Number (UVIPN) to the vehicle, and the UVIPN is linked to the various agencies in the management information system.
Assessment of the Efficiency of Customer Order Management System: A Case Stu...Editor IJCATR
The Supermarket Management System deals with the automation of buying and selling of good and services. It includes both sales and purchase of items. The project Supermarket Management System is to be developed with the objective of making the system reliable, easier, fast, and more informative.
Energy-Aware Routing in Wireless Sensor Network Using Modified Bi-Directional A*Editor IJCATR
Energy is a key component in the Wireless Sensor Network (WSN)[1]. The system will not be able to run according to its function without the availability of adequate power units. One of the characteristics of wireless sensor network is Limitation energy[2]. A lot of research has been done to develop strategies to overcome this problem. One of them is clustering technique. The popular clustering technique is Low Energy Adaptive Clustering Hierarchy (LEACH)[3]. In LEACH, clustering techniques are used to determine Cluster Head (CH), which will then be assigned to forward packets to Base Station (BS). In this research, we propose other clustering techniques, which utilize the Social Network Analysis approach theory of Betweeness Centrality (BC) which will then be implemented in the Setup phase. While in the Steady-State phase, one of the heuristic searching algorithms, Modified Bi-Directional A* (MBDA *) is implemented. The experiment was performed deploy 100 nodes statically in the 100x100 area, with one Base Station at coordinates (50,50). To find out the reliability of the system, the experiment to do in 5000 rounds. The performance of the designed routing protocol strategy will be tested based on network lifetime, throughput, and residual energy. The results show that BC-MBDA * is better than LEACH. This is influenced by the ways of working LEACH in determining the CH that is dynamic, which is always changing in every data transmission process. This will result in the use of energy, because they always doing any computation to determine CH in every transmission process. In contrast to BC-MBDA *, CH is statically determined, so it can decrease energy usage.
Security in Software Defined Networks (SDN): Challenges and Research Opportun...Editor IJCATR
In networks, the rapidly changing traffic patterns of search engines, Internet of Things (IoT) devices, Big Data and data centers has thrown up new challenges for legacy; existing networks; and prompted the need for a more intelligent and innovative way to dynamically manage traffic and allocate limited network resources. Software Defined Network (SDN) which decouples the control plane from the data plane through network vitalizations aims to address these challenges. This paper has explored the SDN architecture and its implementation with the OpenFlow protocol. It has also assessed some of its benefits over traditional network architectures, security concerns and how it can be addressed in future research and related works in emerging economies such as Nigeria.
Measure the Similarity of Complaint Document Using Cosine Similarity Based on...Editor IJCATR
Report handling on "LAPOR!" (Laporan, Aspirasi dan Pengaduan Online Rakyat) system depending on the system administrator who manually reads every incoming report [3]. Read manually can lead to errors in handling complaints [4] if the data flow is huge and grows rapidly, it needs at least three days to prepare a confirmation and it sensitive to inconsistencies [3]. In this study, the authors propose a model that can measure the identities of the Query (Incoming) with Document (Archive). The authors employed Class-Based Indexing term weighting scheme, and Cosine Similarities to analyse document similarities. CoSimTFIDF, CoSimTFICF and CoSimTFIDFICF values used in classification as feature for K-Nearest Neighbour (K-NN) classifier. The optimum result evaluation is pre-processing employ 75% of training data ratio and 25% of test data with CoSimTFIDF feature. It deliver a high accuracy 84%. The k = 5 value obtain high accuracy 84.12%
Hangul Recognition Using Support Vector MachineEditor IJCATR
The recognition of Hangul Image is more difficult compared with that of Latin. It could be recognized from the structural arrangement. Hangul is arranged from two dimensions while Latin is only from the left to the right. The current research creates a system to convert Hangul image into Latin text in order to use it as a learning material on reading Hangul. In general, image recognition system is divided into three steps. The first step is preprocessing, which includes binarization, segmentation through connected component-labeling method, and thinning with Zhang Suen to decrease some pattern information. The second is receiving the feature from every single image, whose identification process is done through chain code method. The third is recognizing the process using Support Vector Machine (SVM) with some kernels. It works through letter image and Hangul word recognition. It consists of 34 letters, each of which has 15 different patterns. The whole patterns are 510, divided into 3 data scenarios. The highest result achieved is 94,7% using SVM kernel polynomial and radial basis function. The level of recognition result is influenced by many trained data. Whilst the recognition process of Hangul word applies to the type 2 Hangul word with 6 different patterns. The difference of these patterns appears from the change of the font type. The chosen fonts for data training are such as Batang, Dotum, Gaeul, Gulim, Malgun Gothic. Arial Unicode MS is used to test the data. The lowest accuracy is achieved through the use of SVM kernel radial basis function, which is 69%. The same result, 72 %, is given by the SVM kernel linear and polynomial.
Application of 3D Printing in EducationEditor IJCATR
This paper provides a review of literature concerning the application of 3D printing in the education system. The review identifies that 3D Printing is being applied across the Educational levels [1] as well as in Libraries, Laboratories, and Distance education systems. The review also finds that 3D Printing is being used to teach both students and trainers about 3D Printing and to develop 3D Printing skills.
Survey on Energy-Efficient Routing Algorithms for Underwater Wireless Sensor ...Editor IJCATR
In underwater environment, for retrieval of information the routing mechanism is used. In routing mechanism there are three to four types of nodes are used, one is sink node which is deployed on the water surface and can collect the information, courier/super/AUV or dolphin powerful nodes are deployed in the middle of the water for forwarding the packets, ordinary nodes are also forwarder nodes which can be deployed from bottom to surface of the water and source nodes are deployed at the seabed which can extract the valuable information from the bottom of the sea. In underwater environment the battery power of the nodes is limited and that power can be enhanced through better selection of the routing algorithm. This paper focuses the energy-efficient routing algorithms for their routing mechanisms to prolong the battery power of the nodes. This paper also focuses the performance analysis of the energy-efficient algorithms under which we can examine the better performance of the route selection mechanism which can prolong the battery power of the node
Comparative analysis on Void Node Removal Routing algorithms for Underwater W...Editor IJCATR
The designing of routing algorithms faces many challenges in underwater environment like: propagation delay, acoustic channel behaviour, limited bandwidth, high bit error rate, limited battery power, underwater pressure, node mobility, localization 3D deployment, and underwater obstacles (voids). This paper focuses the underwater voids which affects the overall performance of the entire network. The majority of the researchers have used the better approaches for removal of voids through alternate path selection mechanism but still research needs improvement. This paper also focuses the architecture and its operation through merits and demerits of the existing algorithms. This research article further focuses the analytical method of the performance analysis of existing algorithms through which we found the better approach for removal of voids
Decay Property for Solutions to Plate Type Equations with Variable CoefficientsEditor IJCATR
In this paper we consider the initial value problem for a plate type equation with variable coefficients and memory in
1 n R n ), which is of regularity-loss property. By using spectrally resolution, we study the pointwise estimates in the spectral
space of the fundamental solution to the corresponding linear problem. Appealing to this pointwise estimates, we obtain the global
existence and the decay estimates of solutions to the semilinear problem by employing the fixed point theorem
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
When stars align: studies in data quality, knowledge graphs, and machine lear...
International Journal of Computer Applications Technology and Research
Volume 4– Issue 9, 668 - 672, 2015, ISSN: 2319–8656
www.ijcat.com 668
Improved Strategy for Distributed Processing and
Network Application Development
Eke B. O.
Department of Computer Science,
University of Port Harcourt
Port Harcourt, Nigeria
Onuodu F. E.
Department of Computer Science,
University of Port Harcourt
Port Harcourt, Nigeria
Abstract: The complexity of software development abstraction and the advent of multi-core computers have shifted the burden of distributed software performance from network and chip designers to software architects and developers. We need to look at software development strategies that integrate parallelization of code, concurrency factors, multithreading, distributed resource allocation and distributed processing. In this paper, a new software development strategy that integrates these factors is further evaluated with respect to parallelism. The strategy is multidimensional and aligns distributed conceptualization along a path. This development strategy mandates application developers to reason about usability, simplicity, resource distribution, parallelization of code where necessary, processing time and cost factor realignment, as well as security and concurrency issues, along a balanced path from the originating point of the network application to its retirement.
Keywords: Parallelization, EE-Path, Distribution, Usability, Concurrency
1. INTRODUCTION
The software strategy referred to in this work proffers a solution to distributed software development by using the abstraction of user requirements and design-time distribution of processes across multi-core computing power, in multidimensional visualization, network development, parallelism and alignment of conceptualization along a path known as the EE-Path [1]. The technique uses ideas from computational geometry in trying to resolve a given network, parallelism and distributed software engineering problem. It is common to see software specified from the viewpoint of the owners and from the ideas of similar existing applications. It can also be seen from the standpoint of cost and benefit, as well as processing time and computer resources, in a combined or peered manner.

Multi-core computers have shifted the burden of software performance from chip designers to software architects and developers. In order to gain the full benefits of this new hardware, we need to parallelize our code [2]. Parallelization, therefore, needs a design strategy that can guide the software development process in a distributed system from the inception to the deployment of the software.

The goal of this paper is to use a development strategy (EE-Path) that aligns parallelization, usability, distribution and user requirement abstraction along a balanced path during software development. Since we must overcome the software complexity paradox to achieve the level of simplicity demanded by users, we must reason not just along the specified requirements of the user, as classical strategies demand, but also along the unspecified requirements and machine commitments which belong to the other dimensions of the EE-Path. The weakness of the other strategies is their inability to distribute the software design and development load across processes and processors, and to incorporate the unspecified requirements into the software building plan. They often ignore distribution and parallelism, or leave the responsibility to programmers. Problems then arise where programmers depend on the software plan in developing the system.
2. PARALLELIZATION
Parallelism is a form of computation in which many
calculations are carried out simultaneously, operating on the
principle that large problems can often be divided into smaller
ones, which are then solved concurrently, or ‘in parallel’.
Parallelism is all about decomposing a single task into smaller ones to enable concurrent execution [2]. Traditionally, processors executed instructions sequentially, which meant that the vast majority of software was written for serial computation. While we were able to improve the speed of our processors by increasing the frequency and transistor count, it was only when computer scientists realized that they had reached the processor frequency limitation that they started to explore new methods for improving processor performance [3]. They explored the use of germanium in place of silicon, co-locating many low-frequency, low-power cores together, adding specialized cores, 3D transistors, and more. In this era of multi-core processors, exploiting large-scale parallel hardware will be essential for improving application performance and execution speed.
Multithreading can be on a single-processor machine, but
parallelism can only occur on a multi-processor machine.
Multiple running threads can be referred to as being concurrent
but not parallel. Concurrency is often used in servers that
operate multiple threads to process requests. However,
parallelism is about decomposing a single task into smaller
ones to enable execution on multiple processors in a
collaborative manner to complete one task. Distributed systems
are a form of parallel computing; however, in distributed
computing, a program is split up into parts that run
simultaneously on multiple computers communicating and
sharing data over a network. By their very nature, distributed
systems must deal with heterogeneous environments, network
links of varying latencies, and unpredictable failures in the
network and the computers.
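To make the decomposition idea above concrete, the following is a minimal sketch in C++ (an illustration added here, not part of the original strategy): a large summation is split into contiguous chunks, each summed by its own worker thread, and the partial results are combined at the end.

```cpp
#include <cassert>
#include <cstddef>
#include <numeric>
#include <thread>
#include <vector>

// Decompose one large summation into smaller sub-sums, one per worker
// thread; each worker handles a contiguous chunk of the input.
long long parallel_sum(const std::vector<int>& data, unsigned workers) {
    std::vector<long long> partial(workers, 0);
    std::vector<std::thread> pool;
    const std::size_t chunk = data.size() / workers;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        // The last worker also takes the remainder of the input.
        std::size_t end = (w == workers - 1) ? data.size() : begin + chunk;
        pool.emplace_back([&, w, begin, end] {
            partial[w] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0LL);
        });
    }
    for (auto& t : pool) t.join();  // wait for every sub-task to finish
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}
```

Each worker writes only its own slot of `partial`, so no locking is needed; the join acts as the barrier that completes the single overall task.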
3. THE SOFTWARE DEVELOPMENT
STRATEGY
The EE-Path software development technique aims at resolving the software development complexity that results from improper provision, or a lack of provision, for unknown network, parallelism and user requirements at the time of the software specification. Understanding an architectural pattern is complex in terms of three quality attributes: modifiability, performance, and availability. Software architects, on the other hand, think in terms of architectural patterns [4]. However, what the architecture needs is a global characterization of architectural patterns in terms of the factors that affect quality attribute behaviour, so that a software design can be understood in terms of those quality attributes. Software engineers must not shy away from the complexities of seemingly intractable parallelism concerns in software requirements analysis and design.
The quality attributes of architectural patterns are the design primitives of the software, and they are system independent. In designing a software architecture for a product line, the long life and the flexibility of the software must be of paramount importance. The full set of requirements of the system is sparsely known. When the actual products are created there still remain the Unknowns, or better still the unknowables, of the product line. New users may emerge, newer needs may arise, and working environments may change: operating systems, databases, servers, machine speeds and multi-core processors. These changes will drive the entire system to reflect the realities of the trend, creating the need for a rapid response to such changes. Our software development strategy provides a solution to this need by making architectural provision for the Unknown, and also room for the Unknown, across the entire life cycle of the software. In areas where parallelism is envisaged, dummy checks can be deployed in advance to recover from drawbacks such as system slowdown, thread race conditions and unforeseen dependencies.
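One way such a dummy check could look in practice is sketched below in C++ (a hypothetical illustration, not prescribed by the EE-Path strategy; `should_parallelize` and its threshold are assumptions made here): a runtime guard that falls back to serial execution when parallelism would not pay off or might slow the system down.

```cpp
#include <cstddef>
#include <numeric>
#include <thread>
#include <vector>

// Hypothetical "dummy check": run the parallel path only when the input
// is large enough to amortize thread overhead and more than one core is
// available; otherwise recover gracefully by taking the serial path.
bool should_parallelize(std::size_t n) {
    const std::size_t kThreshold = 10000;  // illustrative tuning constant
    return n >= kThreshold && std::thread::hardware_concurrency() > 1;
}

long long guarded_sum(const std::vector<int>& v) {
    if (!should_parallelize(v.size()))
        return std::accumulate(v.begin(), v.end(), 0LL);  // serial fallback
    // Parallel path: the two halves are summed on separate threads.
    long long left = 0;
    auto mid = v.begin() + v.size() / 2;
    std::thread t([&] { left = std::accumulate(v.begin(), mid, 0LL); });
    long long right = std::accumulate(mid, v.end(), 0LL);
    t.join();
    return left + right;
}
```

Because the guard is decided at run time, the same code serves both the envisaged parallel case and the recovery case without restructuring.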
The Unknown implies all the unspecified requirements of the software at inception. It also includes all the unforeseen user needs that could give rise to the deployment of parallelism, such as the inclusion of video and heavy images in software. Irrespective of the software development method used, it is the user or software client that specifies what the software is to do. And irrespective of the way it was specified, or the way the information was collected, the target of the software will depend largely on what the users or software clients actually want, whether they know what they want or not. It is also true that in most cases the users do not know how to specify the details of what they want, even when they are well consulted. Some of their specifications are capable of compromising speed, multi-core processor efficiency and concurrency. Clients may not be software gurus and may not specify the software requirements to the extent that all requirements are covered. Even where all requirements are covered, external environmental factors such as operating system changes, network expansions or upgrades, database upgrades, the introduction of new data for processing, new formulas, as well as security loopholes, may make the software vulnerable, and the need to update the software based on the new requirements may arise. These unforeseen requirements we generally refer to as the Unknown.
The capturing of the Unknown involves software abstraction embedded in the conceptual architecture of the system. The conceptual architecture is one of four different architectures identified by Hofmeister, Nord and Soni [5]. It describes the system(s) being designed in terms of the major design elements and the relationships among them. The EE-Path strategy determines the balance of the known architectural drivers, the known environmental factors, the known parallelism conditions, the known user simplicity factors, as well as all other hidden factors (the Unknown). The software is then built along this path, at least conceptually. A model of the EE-Path strategy is illustrated in Figure 1.
The architectural drivers are the combination of business, quality and functional requirements that “shape” the architecture. The known architectural drivers are represented on the y-axis, while the unknown architectural drivers are represented in their shadow as Architectural Drivers 2. Similarly, other well-known parallelism requirement specifications are represented on the z-axis, while their unknown is also represented using its envisaged shadow as Others RS 2. In analysis, design and construction, the EE-Path takes all the axes into consideration, providing the necessary balance it requires to remain on its path as the software tends toward retirement. The EE-Path will terminate at the point when the software peters out, but the issue of when this will take place also throws up another unknown.
Figure 1: The EE-Path Software Strategy Model
Newer upgrades are more likely to surface with additions of parallelism requirements that were hitherto unknown in the earlier versions, when the software first hit the market. In integrating parallelism, two types of data parallelism are considered:
Explicit Data Parallelism
Implicit Data Parallelism
In explicit data parallelism, one plans a loop that executes in parallel. This can be done by adding OpenMP
pragmas around the loop code, or by using a parallel algorithm from Intel® Threading Building Blocks (TBB), from another source, or by developing one.
In implicit data parallelism, one simply calls a method that manipulates the data, and the infrastructure (i.e. a compiler, a framework, or the runtime) is responsible for parallelizing the work. For instance, the .NET platform provides LINQ (Language Integrated Query), which allows the use of extension methods and lambda expressions to manipulate data as dynamic languages do. The following example demonstrates implicit data manipulation and parallelism:

C# implicit data manipulation using LINQ

string[] students = { "Bartho", "Yuntho", "Barry", "Friday" };
var student = students.Where(p => p.StartsWith("B"));

C# parallel implicit data manipulation using LINQ (note the AsParallel method)

string[] students = { "Bartho", "Yuntho", "Barry", "Friday" };
var student = students.AsParallel().Where(p => p.StartsWith("B"));
In language- and compiler-based parallelism, the compiler understands special keywords to parallelize part of the code; for example, in OpenMP you can write the following to parallelize a loop in C++ [6]:

#pragma omp parallel for
for (int j = 0; j < max; j++)
{
    Num[j] = 1.0;
}
Language- and compiler-based parallelism is easy to use because the majority of the work falls on the compiler. In library-based parallelism, the programmer calls the exposed parallel APIs. For example, to parallelize a for loop in .NET 4 (C# or VB), a call to the For method of the System.Threading.Tasks.Parallel class will suffice in C#:

Parallel.For(0, max, j =>
{
    Num[j] = 1.0;
});

This method accepts two integers (from, to) and a delegate for the loop body.
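The same library-based pattern can also be sketched in standard C++ without OpenMP or .NET. The `parallel_for` helper below is an illustrative assumption made here, not an existing library API: it splits the index range into one contiguous slice per hardware thread and applies the body to every index.

```cpp
#include <algorithm>
#include <functional>
#include <thread>
#include <vector>

// Illustrative parallel_for: split the index range [from, to) into one
// contiguous slice per hardware thread and apply `body` to every index.
void parallel_for(int from, int to, const std::function<void(int)>& body) {
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    long long span = to - from;
    for (unsigned w = 0; w < workers; ++w) {
        int lo = from + static_cast<int>(span * w / workers);
        int hi = from + static_cast<int>(span * (w + 1) / workers);
        pool.emplace_back([lo, hi, &body] {
            for (int j = lo; j < hi; ++j) body(j);  // this worker's slice
        });
    }
    for (auto& t : pool) t.join();  // barrier: all slices complete
}
```

With `std::vector<double> num(max);`, the call `parallel_for(0, max, [&](int j) { num[j] = 1.0; });` mirrors the Parallel.For example above: the caller supplies only the loop bounds and the body, and the library hides the thread management.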
4. EE-PATH IN THE APPLICATION
LIFE CYCLE
The EE-Path is a very flexible strategy. It is suited for complex, highly interactive applications where very high integration is required, providing good utilization of the underlying hardware within the network and within the multi-core machine. The strategy promotes reusability of application components, and possibly performance, since design components planned as unknowns are reused when the requirements become clear. Software requirements comprise functional requirements, both abstract and concrete, as well as quality requirements and business constraints. The abstract requirements are used to generate the software design, while the concrete requirements are used to validate the decisions made as a result of the abstract requirements [7].
The EE-Path remains a guiding path that the requirements need to follow during specification. The path does not introduce any requirement; rather, it provides a structure and a reference point for the specification of the software requirements. A use case captures functionality in the system that gives a user a result of value, and thereby captures the functional requirements [7]. The use case therefore needs to be projected along the EE-Path to be able to reflect both the known and the unknown requirements.
Klein [9] believes that the choice of architectural style is based on the architectural drivers for the design elements that fit the need at hand. We, however, believe that architectural style should be based not just on the need at hand but also on envisaged needs and the unknown future needs. These unknown needs should be represented, using any appropriate representation, in the architecture. Design consideration must also take the specified unknown into account, so that the unknown can be well specified at least at the abstract component design level, where commitment is yet to be made to actual software components. The unknown is therefore well represented in the modular design and aggregated in the object-oriented class abstraction, even if the abstraction is at worst a dummy. The class abstraction has the inherent effect of exerting some force to keep the software development effort on the EE-Path. On the path, the logical, process, implementation and deployment views are realigned with the parallelism views even when the software is targeted at a standalone machine. The standalone machine can be multi-core and can equally migrate easily to multi-user use when new requirements surface. This path alignment boosts the modifiability of the software even after it has been deployed.
5. DISCUSSION OF EE-PATH BENEFITS AND PARALLELISM
The EE-Path strategy increases the usability of software, since aggregation is encouraged by patterning one or more actions on more than one object, even when the object is unknown. It also makes the system, rather than the user, responsible for iteration. Furthermore, it is very easy to recover from failure, since the unknown is taken into consideration right from the architectural stage of the software. Recovery could easily be based on the unknown functionality of the environment, such as OS failures, machine failures and even unknown dependency conditions in a parallelized system implementation.
In order to take advantage of the EE-Path in software
development specification for multi-core machines, programs
must be parallelized. Multiple paths of execution have to work
together to complete the tasks the program has to perform,
and that needs to happen concurrently, wherever possible and
in an integrated manner with other requirements. Only then is
it possible to speed up the program. Amdahl's law expresses this as [10]:

S = 1 / ((1 - P) + P / N)

where S is the speed-up of the program (as a factor of its original sequential runtime), P is the fraction of the program that is parallelizable, and N is the number of processors; as N grows, S approaches 1 / (1 - P).

Figure 2: Illustrating speedup and number of processors
International Journal of Computer Applications Technology and Research, Volume 4, Issue 9, 668-672, 2015, ISSN: 2319-8656, www.ijcat.com
Determining when and where in the software to inject parallelism is a challenge, and a wrong decision could have retrogressive consequences; hence much of the decision could be provided as unknown at a certain stage of the system. The good thing about the EE-Path is that strong provision is made for its implementation, at worst as a dummy implementation. This helps developers to plan ahead even when it is not feasible to implement parallelism in the earlier releases of the software. One probably does not need to parallelize if the application is really simple and the code already runs fast enough. But we know that a simple application today may turn into a complex application with time, and fast code could slow down when new users use it over a network or when newer features are added; hence the need to plan for the unknown via the EE-Path. Network applications use shared data, so dependency issues can be planned using the EE-Path, taking other factors into consideration to avoid the pitfalls of delays caused by dependencies and thread locks. Decisions made at the planning stage guide developers in the choice of parallel frameworks and APIs to use during application implementation. These help in leveraging the power of all the extra cores on developers' and users' machines. The EE-Path strategy encourages developers to leverage their knowledge and also to develop systems in relatively unfamiliar parallelized contexts as offered by distributed application environments. The unknown is not fixed, but it remains the unknown as long as it has not been unravelled; and since no human can have full and final insight into any matter at any given time, progressive development is encouraged by our strategy.
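The shared-data pitfall mentioned above can be made concrete with a short sketch (our own illustration, in C++): several threads update one shared counter, and an atomic is used so the parallel updates do not race.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Illustration of a shared-data dependency in parallel code: several
// worker threads all increment one shared total. Using std::atomic
// (rather than an unsynchronized long) keeps the updates race-free
// without holding a lock for every increment.
long parallel_count(int workers, int increments_per_worker) {
    std::atomic<long> total{0};
    std::vector<std::thread> pool;
    for (int w = 0; w < workers; ++w) {
        pool.emplace_back([&total, increments_per_worker] {
            for (int i = 0; i < increments_per_worker; ++i)
                total.fetch_add(1, std::memory_order_relaxed);
        });
    }
    for (auto& t : pool) t.join();
    return total.load();
}
```

With a plain long and ++ the final count would be unpredictable under contention; with the atomic, every increment is preserved.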
6. OUR CONTRIBUTIONS
In this paper, we incorporate a new software strategy that is able to implement a multidimensional requirement visualization of three or more lines of simultaneous requirement alignment. It inculcates the unspecified requirement, which we see as forming the core of modern system design, and allows parallelism requirement analysis and design to varying levels of implementation. When a requirement is not needed at the moment, we postulate that it can be implemented as an abstract class in the system, without any derivation or with a dummy derivation. Some of the requirement issues to be considered include security, concurrency, interoperability, reusability and the Unknown.
The Unknown class can be specified with all possible abstractions and modified in the future when the need for the Unknown requirement arises. This design technique takes care of the Unknown, making the system extendible without the need to redesign it. It also takes care of the light-speed changes in requirements that result in the development of newer versions of software within very short periods of time, ranging from a few days to a few months. It breaks down parallelism conditions in the software requirements to determine where parallelism can be implemented to maximize speed. It also articulates pitfalls, so as to avoid deploying parallelism in those areas where it could lead to processing slow-down or incorrect results. There are many parallelism frameworks and debugging tools aimed at simplifying the task of parallel programming, such as: Intel Parallel Studio; Microsoft CCR and DSS; MS PPL, the Microsoft Parallel Pattern Library (released in 2009 Q4); MS .NET 4, the Microsoft .NET Framework 4, and Java 7 (both forthcoming at the time of [2]); and PRL, the Parallel Runtime Library (Beta 1 released in June 2009) [2]. Software engineers need to integrate all these requirements into system development early in the system life-cycle, while making provision for the unknown.
7. CONCLUSION
The EE-Path software development strategy provides a means of guiding developers and software architects in qualitative measures of marginal building blocks, in choosing and developing architectural styles, and in the conceptualization of the system at hand from inception to conclusion. Based on the evaluation of software complexity and other models, it can be seen that if the EE-Path is followed, a better preparation for the unknown is made. Furthermore, it can be seen that for the parallelization of network applications the EE-Path model offers variables for the consideration and integration of other factors and requirements in the development of software on hitherto different network platforms. These provide improved standardization of development, even at the architectural level. It can therefore be concluded that developing network software using the EE-Path concept results in building software today with provision made for change, which itself appears to be the one constant in the world of software engineering.
8. REFERENCES
[1] Eke, B. O. and Nwachukwu, E. O. (2011) Software Engineering Process: Yaam Deployment in E-Bookshop Use Case Scenario, Journal of Theoretical and Applied Information Technology, Vol. 30, No. 2, August 2011, JATIT & LLS, Islamabad, Pakistan, E-ISSN: 1817-3195, ISSN: 1992-8645, http://www.jatit.org/volumes/Vol30No2/2Vol30No2.pdf
[2] Fadeel, H. (2009) Getting Started with Parallel Programming, http://library.dzone.com/category/ Accessed June 2015.
[3] Reinders, J. (2009) Getting Started with Intel Threading Building Blocks, http://library.dzone.com Accessed July 2009.
[4] Kazman, R.; Barbacci, M.; Carriere, S. J.; and Woods, S. J. (1999) Experience with Performing Architecture Tradeoff Analysis, Proceedings of ICSE 99, 54-63, Los Angeles, CA, May 1999.
[5] Hofmeister, C.; Nord, R.; and Soni, D. (2000) Applied Software Architecture, Addison-Wesley, Reading, MA.
[6] Cohoon, J. P. and Davidson, J. W., C++ Program Design: An Introduction to Programming and Object-Oriented Design, WCB McGraw-Hill, USA.
[7] Bass, L.; Klein, M.; and Bachmann, F. (2000) Quality Attribute Design Primitives (CMU/SEI-2000-TN-017), Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA.
[8] Jacobson, I. (1992) Object-Oriented Software Engineering, Addison-Wesley, USA.
[9] Klein, M.; Kazman, R.; Barbacci, M.; Carriere, S.; and Lipson, H. (1999) Attribute-Based Architectural Styles, Software Architecture, 225-243, Proceedings of the First Working IFIP Conference on Software Architecture (WICSA1), San Antonio, TX, February 1999.
[10] Wikipedia, Parallel computing, https://en.wikipedia.org/wiki/Parallel_computing Accessed July 2014.
9. ACKNOWLEDGMENTS
Our thanks go to Oyol Computer Consult Inc. and the Fonglo Research Center for their contributions to the typesetting and development of this work.
10. ABOUT THE AUTHORS
Eke Bartholomew, PhD, MCPN, MACM, FIPMD, is a Software Engineering / Computer Science Lecturer at the University of Port Harcourt and a Mobile Application Developer at Oyol Computer Consult Inc. His research interests are in SE methodologies and mobile IT deployment.
Dr. Onuodu, Friday E. is a Lecturer at the University of Port Harcourt. His research interests are in data mining and data extraction using both mobile devices and desktops. He also has an interest in IT deployment analysis. He has many publications in learned journals.