Today, the importance of information, and of access to information, is increasing rapidly. With the advance of technology, the computer, one of the greatest means of acquiring knowledge, has entered many areas of our lives, and communication is among the most important of them. This study is intended as a practical guide to assembling and analyzing the various parameters involved in network performance evaluation, and to what a network designer should look for in order to remove the causes of degraded performance. It asks what can be done in a network performance evaluation using simulation tools such as a network simulator or Packet Tracer, and how the various parameters can be brought together successfully. Configurations at the CCNA, CCNP, HCNA, and HCNP training levels were used, and the important settings were simulated one by one; the result is a useful guide for a local or wide area network. Finally, precautions against performance issues are described. Considering the necessary parameters, imaginary networks were designed and evaluated in both Cisco Packet Tracer and Huawei's eNSP simulation program. It should be noted, however, that the networks were designed and evaluated in free virtual environments, not in a real laboratory; an actual performance appraisal is therefore not possible, since no real measurement data are available.
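The parameters such an evaluation collects can be made concrete with a small sketch. The Python snippet below is illustrative only; the sample values are hypothetical, not measurements from the simulated networks. It computes the usual figures of merit (packet loss, average latency, jitter, and throughput) from ping-style samples such as a Packet Tracer or eNSP exercise would produce:

```python
import statistics

def summarize_ping(rtts_ms, sent, received):
    """Latency, jitter (std dev of RTT), and loss from ping-style samples."""
    loss_pct = 100.0 * (sent - received) / sent
    avg_ms = statistics.mean(rtts_ms)
    jitter_ms = statistics.stdev(rtts_ms) if len(rtts_ms) > 1 else 0.0
    return {"avg_ms": avg_ms, "jitter_ms": jitter_ms, "loss_pct": loss_pct}

def throughput_mbps(bytes_transferred, seconds):
    """Application-level throughput in megabits per second."""
    return bytes_transferred * 8 / seconds / 1e6

# Hypothetical samples: 5 probes sent, 4 replies received with these RTTs.
stats = summarize_ping([20.0, 22.0, 21.0, 25.0], sent=5, received=4)
rate = throughput_mbps(1_250_000, seconds=10)  # 1.25 MB in 10 s
```

In a real exercise the RTTs would come from `ping` output captured in the simulator, and the byte count from a file-transfer test between two hosts.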
#ATAGTR2021 Presentation: "Unlocking the Power of Machine Learning in the Mo..." (Agile Testing Alliance)
Interactive session on "Unlocking the Power of Machine Learning in the Mobile NFT world" by Niruphan Rajendran, Senior Manager, Qualitest, and Karthikeyan Lakshminarayanan, Non-Functional Test Consultant, Qualitest, at #ATAGTR2021.
#ATAGTR2021 was the 6th Edition of Global Testing Retreat.
The video recording of the session is now available on the following link: https://www.youtube.com/watch?v=DIDZjUEnfyw
To know more about #ATAGTR2021, please visit:https://gtr.agiletestingalliance.org/
Benchmark methods to analyze embedded processors and systems (XMOS)
xCORE multicore microcontrollers are 100x more responsive than traditional micros. The unparalleled responsiveness of the xCORE I/O ports is rooted in some fundamental features:
- Single cycle instruction execution
- No interrupts
- No cache
- Multiple cores allow concurrent independent task execution
- Hardware scheduler performs 'RTOS-like' functions
Performance testing interview questions and answers (Garuda Trainings)
In software engineering, performance testing is, in general, testing performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate, or verify other quality attributes of the system, such as scalability, reliability, and resource usage.
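As a minimal illustration of "responsiveness under a particular workload", the sketch below (with made-up latency samples) reduces a set of response times to the median, 95th percentile, and maximum, the figures most load-test reports quote:

```python
import statistics

def workload_summary(response_ms):
    """Responsiveness summary for one workload: median, p95, max."""
    q = statistics.quantiles(response_ms, n=100)  # q[94] is the 95th percentile
    return {"p50": statistics.median(response_ms),
            "p95": q[94],
            "max": max(response_ms)}

# Simulated latencies in ms; one slow outlier value (80) recurs under load.
samples = [12, 15, 11, 14, 80, 13, 16, 12, 15, 14] * 10
s = workload_summary(samples)
```

The gap between median and p95 is usually the first sign of instability under load: a healthy median can hide a long tail.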
Software Testing and Quality Assurance Assignment 2 (Gurpreet Singh)
Short questions:
Q1: What is stress testing?
Q2: What is cyclomatic complexity?
Q3: Define object-oriented testing.
Q4: What is regression testing? When is it done?
Q5: How is loop testing different from path testing?
Q6: What is a client-server environment?
Q7: What is graph-based testing?
Q8: How is security testing useful in real applications?
Q9: What are the main characteristics of a real-time system?
Q10: What are the benefits of data flow testing?
Long questions:
Q1: Design test cases for an ERP system, a traffic controller, and a university management system.
Q2: Assuming a real-time system of your choice, discuss and elaborate its concepts and its analysis and design factors.
Q3: How is testing performed in a multi-platform environment?
Q4: Explain graph-based testing in detail.
Q5: Differentiate between equivalence partitioning and boundary value analysis.
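Long question Q5 in the list above is easiest to answer with a concrete case. For a hypothetical input field that accepts marks in the range 0 to 100, equivalence partitioning picks one representative per partition, while boundary value analysis tests the edges themselves:

```python
def equivalence_values(lo, hi):
    """One representative per partition: below, inside, and above the range."""
    return [lo - 10, (lo + hi) // 2, hi + 10]

def boundary_values(lo, hi):
    """Boundary value analysis: values at and adjacent to each edge."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# Hypothetical input field: marks valid in the range 0..100.
ep = equivalence_values(0, 100)
bva = boundary_values(0, 100)
```

Equivalence partitioning yields three cases here, boundary value analysis six; the two techniques are complementary, since off-by-one defects live exactly where EP does not look.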
Using Semi-supervised Classifier to Forecast Extreme CPU Utilization (gerogepatton)
A semi-supervised classifier is used in this paper to investigate a model for forecasting unpredictable load on IT systems and to predict extreme CPU utilization in a complex enterprise environment with a large number of applications running concurrently. The proposed model forecasts the likelihood of a scenario in which an extreme load of web traffic impacts the IT systems, and predicts CPU utilization under extreme stress conditions. The enterprise IT environment consists of a large number of applications running in a real-time system. Load features are extracted by analysing an envelope of the workload traffic patterns hidden in the transactional data of these applications. The method simulates and generates synthetic workload demand patterns, runs high-priority use-case scenarios in a test environment, and uses the model to predict excessive CPU utilization under peak load conditions for validation. An Expectation-Maximization classifier with forced learning attempts to extract and analyse the parameters that maximize the likelihood of the model after marginalizing the unknown labels. As a result of this model, the likelihood of excessive CPU utilization can be predicted in a short time, compared to a few days, in a complex enterprise environment. Workload demand prediction and profiling have enormous potential for optimizing the use of IT resources with minimal risk.
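The Expectation-Maximization step described above can be sketched in miniature. The toy below is not the paper's model: it is a one-dimensional, two-component EM with unit variances and equal priors, where labeled points keep fixed ("forced") responsibilities and only the component means are re-estimated; it shows the semi-supervised mechanic only.

```python
import math

def em_semi_supervised(xs, labels, iters=50):
    """labels: 0, 1, or None (unlabeled). Returns the two estimated means."""
    # Initialize each mean from the labeled points of its class.
    m0 = sum(x for x, l in zip(xs, labels) if l == 0) / labels.count(0)
    m1 = sum(x for x, l in zip(xs, labels) if l == 1) / labels.count(1)
    for _ in range(iters):
        # E-step: responsibility of component 1; labeled points are forced.
        r = [float(l) if l is not None
             else math.exp(-0.5 * (x - m1) ** 2) /
                  (math.exp(-0.5 * (x - m0) ** 2) + math.exp(-0.5 * (x - m1) ** 2))
             for x, l in zip(xs, labels)]
        # M-step: re-estimate means as responsibility-weighted averages.
        m1 = sum(ri * x for ri, x in zip(r, xs)) / sum(r)
        m0 = sum((1 - ri) * x for ri, x in zip(r, xs)) / sum(1 - ri for ri in r)
    return m0, m1

# Two well-separated load regimes; only four points carry labels.
xs = [0.1, 0.3, -0.2, 0.2, 5.0, 5.2, 4.8, 5.1]
labels = [0, 0, None, None, 1, 1, None, None]
m0, m1 = em_semi_supervised(xs, labels)
```

The unlabeled points are pulled toward whichever component explains them better, which is exactly how a handful of labeled "extreme load" windows can anchor a classifier over mostly unlabeled telemetry.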
There are many ways to ruin a performance testing project, but only a handful of ways to do it right. This publication analyses the most widespread performance testing blunders. It is impossible to expose every variety of testing wrongdoing in one article; as such, this publication is deliberately open-ended.
Harnessing the Cloud for Performance Testing: Impetus White Paper (Impetus Technologies)
For Impetus’ White Papers archive, visit- http://www.impetus.com/whitepaper
The paper provides insights on the various benefits of using the Cloud for Performance Testing as well as how to address the various challenges associated with this approach.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Managing the performance of enterprise applications is hard. Managing and optimizing the performance of enterprise applications on shared virtualized infrastructure (i.e. cloud computing) is even harder. This article outlines the specifics of capacity planning and performance management of EAs deployed in the cloud.
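A first-order capacity-planning calculation of the kind the article discusses can be sketched as follows; the throughput figures are hypothetical:

```python
import math

def instances_needed(peak_rps, per_instance_rps, headroom=0.3):
    """Instances to serve peak load while keeping each node below 70% busy."""
    usable = per_instance_rps * (1 - headroom)
    return math.ceil(peak_rps / usable)

# Hypothetical figures: 900 req/s at peak, 200 req/s per instance.
n = instances_needed(peak_rps=900, per_instance_rps=200)
```

On shared virtualized infrastructure the per-instance capacity itself fluctuates with noisy neighbours, which is why the headroom term matters more in the cloud than on dedicated hardware.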
Land cover maps supply information about the physical material at the surface of the Earth (grass, trees, bare ground, asphalt, water, etc.). They are usually 2D representations, presenting the variability of land cover with latitude and longitude or other earth coordinates. The possibility of linking this variability to terrain elevation is very useful, because it permits investigating probable correlations between the type of physical material at the surface and the relief. This paper describes the approach to be followed to obtain 3D visualizations of land cover maps in a GIS (Geographic Information System) environment. In particular, Corine Land Cover vector files for the Campania Region (Italy) are considered: the transformed raster files are overlaid on a DEM (Digital Elevation Model) of adequate resolution, and 3D visualizations are obtained using GIS tools. The resulting models are discussed in terms of their possible use in supporting scientific studies of Campania land cover.
Comparison between Cisco ACI and VMware NSX (IOSRjournaljce)
Software-Defined Networking (SDN) allows you to have a logical image of the components in the data center; you can arrange the components logically and use them according to the needs of the software application. This paper gives an overview of the architectural features of Cisco's Application Centric Infrastructure (ACI) and VMware's NSX, compares the two architectures, and discusses their benefits.
More Related Content
Similar to Performance Evaluation of a Network Using Simulation Tools or Packet Tracer
Student's Skills Evaluation Techniques using Data Mining (IOSRjournaljce)
Technological advancement is a mainstay of the Indian economy, and software development is a primary sector offering many sources of employment across the state. A major responsibility of a software company is to deliver quality products through managerial decisions. To improve software quality and maintenance, organizations assess programmers' abilities and filter placements through various screening rounds such as written tests, personal interviews, technical rounds, and group discussions; programmers are selected accordingly. Higher-education institutes therefore have to develop students' performance to meet these expectations. Students' programming skill is analysed with the help of decision-making and knowledge-discovery techniques from data mining. In this example, the actual data were collected from a private college that holds records of pupils' academic performance. This report suggests a method to judge their accomplishments through the use of data mining methods, specifies the means to identify their skills, and assists them in improving their knowledge by predicting suitable training programs.
Data mining is a process for extracting information from a huge amount of data and transforming it into an understandable structure. It provides a number of tasks for extracting data from large databases, such as classification, clustering, regression, and association rule mining. This paper focuses on classification, an important data mining technique based on machine learning that is used to classify each item according to its features with respect to a predefined set of classes or groups. The paper summarises various techniques implemented for classification, such as k-NN, decision trees, Naïve Bayes, SVM, ANN, and RF; the techniques are analysed and compared on the basis of their advantages and disadvantages.
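Of the techniques listed, k-NN is the simplest to show end to end. A minimal sketch on toy 2-D data (not data from the paper):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (point, label) pairs; majority vote over k nearest."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy 2-D data: two obvious clusters.
train = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
         ((8, 8), "B"), ((9, 8), "B"), ((8, 9), "B")]
label = knn_predict(train, (2, 2))
```

The sketch also makes the classic trade-offs visible: there is no training phase at all, but every prediction scans the whole training set, which is one of the disadvantages such comparison papers weigh against decision trees or Naïve Bayes.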
Analyzing the Difference of Cluster, Grid, Utility & Cloud Computing (IOSRjournaljce)
Virtualization and cloud computing are creating a fundamental change in computer architecture, in software and tool development, and in the way we store, distribute, and consume information. The recent era of autonomic computing brings the importance of, and need for, the basic concepts of sharing various hardware, software, and other resources and applications that can manage themselves with a high level of human guidance. Virtualization and autonomic computing are not new to the world, but they have developed rapidly with cloud computing. This paper gives an overview of various types of computing, discussing cluster, grid, utility, and cloud computing, and analysing their architectures, the differences between them, their characteristics, how they work, and their advantages and disadvantages.
Cloud computing is becoming an increasingly popular enterprise model in which computing resources are made available on demand to the user as needed. The unique value proposition of cloud computing creates new opportunities to align IT and business goals. Cloud computing uses internet technologies for the delivery of IT-enabled capabilities "as a service" to any users who need them: through cloud computing we can access anything we want, from anywhere, on any computer, without worrying about storage, cost, or administration. In this paper, I give a comprehensive survey of the motivating factors for adopting cloud computing and review the several cloud deployment and service models. The paper also examines certain advantages of cloud computing over the traditional IT service environment: scalability, flexibility, reduced capital expenditure, and higher resource utilization are considered reasons for adopting a cloud computing environment. I also discuss security, privacy, internet dependence, and availability as reasons for avoidance; the latter includes vertical scalability as a technical challenge in the cloud environment.
An Experimental Study of Diabetes Disease Prediction System Using Classificat... (IOSRjournaljce)
Data mining refers to the process of collecting, searching through, and analyzing a large amount of data in a database. Classification is one of the well-known data mining techniques; here it is used to analyze the performance of the Naive Bayes, Random Forest, and Naïve Bayes tree (NB-Tree) classifiers in terms of precision, recall, F-measure, and accuracy. These three algorithms, which are useful and efficient, have been tested on a medical dataset for diabetes disease to solve a classification problem in data mining. In this paper we compare the three algorithms, and the results indicate that Naive Bayes achieves a high accuracy rate along with a minimum error rate when compared to the other algorithms.
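The metrics named above follow directly from confusion-matrix counts. A small sketch with hypothetical counts, not the paper's results:

```python
def precision_recall_f1(tp, fp, fn):
    """Standard classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical classifier output on a diabetes test set.
p, r, f = precision_recall_f1(tp=80, fp=20, fn=20)
```

Reporting all three (plus accuracy) matters on medical data, where class imbalance can make accuracy alone misleading.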
Candidate Ranking and Evaluation System based on Digital Footprints (IOSRjournaljce)
Digital resumes provide insights about a candidate to the organization. This paper proposes a system in which digital resumes of candidates are generated by extracting data from social networking sites such as Facebook, Twitter, and LinkedIn. Data relevant to recruitment is obtained from unstructured data using data mining algorithms. Candidates are evaluated based on their digital resumes and ranked accordingly; ranking is done against the requirements specified by an organization for a key position. The key aspects of this paper are a) the specification and design of the system, b) the generation of the digital resume, and c) the ranking of candidates. Using the ranking provided by this system, recruiters can shortlist candidates for interviews, revolutionizing the traditional recruitment process.
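Ranking against an organization's stated requirements can be sketched as a weighted score; the criteria and weights below are hypothetical, not the paper's:

```python
def rank_candidates(candidates, weights):
    """Sort candidates by a weighted sum of per-criterion scores (0..1)."""
    return sorted(candidates,
                  key=lambda c: sum(weights[k] * c[k] for k in weights),
                  reverse=True)

# Hypothetical criteria extracted from digital resumes.
weights = {"skills": 0.5, "experience": 0.3, "activity": 0.2}
candidates = [
    {"name": "A", "skills": 0.9, "experience": 0.4, "activity": 0.6},
    {"name": "B", "skills": 0.6, "experience": 0.9, "activity": 0.8},
]
ranking = rank_candidates(candidates, weights)
```

The hard part such a system solves is upstream of this step: turning unstructured social-media data into reliable per-criterion scores.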
Multi Class Cervical Cancer Classification by using ERSTCM, EMSD & CFE method... (IOSRjournaljce)
Cervical cancer has the highest incidence rate after breast, gastric, colorectal, and thyroid cancer among all malignancies occurring in females, and it is the most prevalent of the female genital cancers. Manual cervical cancer diagnosis methods are costly and sometimes produce inaccurate diagnoses caused by human error, whereas a machine-assisted classification system can reduce financial costs and increase screening accuracy. In this research article, we developed a multi-class cervical classification system using Pap smear images, following the WHO descriptive classification of cervical histology. The system classifies the cell in a Pap smear image into one of five classes (normal cell, mild dysplasia, moderate dysplasia, severe dysplasia, and carcinoma in situ (CIS)) using individual and combined feature extraction methods with the classification technique. Three feature extraction methods were used: two individual methods, the Enriched Rough Set Texton Co-Occurrence Matrix (ERSTCM) and the Enriched Micro Structure Descriptor (EMSD), and one combined method, the Concatenated Feature Extraction (CFE) method, in which the ERSTCM and EMSD features are combined into a single feature to assess their joint performance. These three feature extraction methods were then tested with a Fuzzy Logic based Hybrid Kernel Support Vector Machine (FL-HKSVM) classifier. The examination was conducted on a set of single-cell Pap smear images; the dataset contains five classes of images, with a total of 952 images, and the distribution of images per class is not uniform.
Performance was evaluated for both the individual and the combined feature extraction methods with the classification technique, using the statistical parameters of sensitivity, specificity, and accuracy. Among the individual methods, the proposed EMSD+FL-HKSVM classifier gave better results than the ERSTCM+FL-HKSVM classifier; among the combined methods, the proposed CFE+FL-HKSVM classifier gave better results than both the EMSD+FL-HKSVM and ERSTCM+FL-HKSVM classifiers.
The Systematic Methodology for Accurate Test Packet Generation and Fault Loca... (IOSRjournaljce)
As we know, networks today are widely distributed, so operators depend on various tools, for example ping and traceroute, to troubleshoot problems in the network. We therefore propose an automated and systematic approach for testing and debugging networks called Automatic Test Packet Generation (ATPG). ATPG first reads the router configurations and generates a device-independent model. The model is used to generate a minimum number of test packets that cover every link and every rule in the network. ATPG is capable of investigating both functional and performance problems. Test packets are sent at regular intervals, and a separate mechanism is used to localize faults. The workings of a few offline tools that automatically generate test packets are also described, but ATPG goes beyond the earlier work in static checking (verifying liveness) and fault localization.
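Choosing a minimum number of test packets that together cover every link is a set-cover problem. ATPG's real algorithm works on router configurations; the greedy heuristic at the core of such minimization can be sketched on toy data:

```python
def select_test_packets(packet_links):
    """Greedy set cover: pick a small set of packets covering every link."""
    uncovered = set().union(*packet_links.values())
    chosen = []
    while uncovered:
        # Repeatedly take the packet covering the most still-uncovered links.
        best = max(packet_links, key=lambda p: len(packet_links[p] & uncovered))
        chosen.append(best)
        uncovered -= packet_links[best]
    return chosen

# Hypothetical candidate packets and the links their paths traverse.
paths = {"p1": {"l1", "l2"}, "p2": {"l2", "l3"}, "p3": {"l1", "l3", "l4"}}
tests = select_test_packets(paths)
```

Exact minimum set cover is NP-hard, so a greedy approximation like this is the standard practical choice.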
The Use of K-NN and Bees Algorithm for Big Data Intrusion Detection System (IOSRjournaljce)
The big data problem in intrusion detection systems is mainly due to the large volume of the data. The dimensionality of the original data is 41, and some of the original features are unnecessary. In this setting, the volume of data has expanded into hundreds and thousands of gigabytes (GB) of information. The dimensionality and volume of the data can be reduced, and the system enhanced, by using k-NN and the Bees Algorithm. Because the reduction ratio of the KDD datasets makes processing very slow, the data were reduced by extracting features with the Bees Algorithm (BA) and classified with k-nearest neighbors (KNN). The KDD99 datasets were then applied in the experiments with the significant features. The results gave higher detection and accuracy rates as well as a reduced false-positive rate.
Study and analysis of E-Governance Information Security (InfoSec) in Indian C...IOSRjournaljce
The purpose of the study is to explore and find a research gap in E-Governance Information Security (InfoSec) domain in Indian Context. The study identifies the research gap in E-Governance InfoSec domain and substantiates given research gap with relevant literature review. The study outcomes clearly depict the requirement of research in the field of InfoSec in e-governance domain in a country like India.
The purpose of the study is to explore web security status of the E-Governance websites and web applications. The central theme of the paper is to study and analyze the security vulnerabilities in the technologies utilized for E-Governance website and web application development. The study was conducted in the State of Gujarat,India. The data related to web development technologies, vulnerabilities affecting those technologies, vulnerability severity, and vulnerability type were gathered from 26 E-Governance website/Web application for detail analysis. The outcome of the study depicts the relationship between technology vis-a-vis vulnerability type and vulnerability severity.
Exploring 3D-Virtual Learning Environments with Adaptive RepetitionsIOSRjournaljce
: In spatial tasks, the use of cognitive aids reduce mental load and therefore being appealing to trainers and trainees. However, these aids can act as shortcuts and prevents the trainees from active exploration which is necessary to perform the task independently in non-supervised environment. In this paper we used adaptive repetition as control strategy to explore the 3D- Virtual Learning environments. The proposed approach enables the trainee to get the benefits of cognitive support while at the same time he is actively involved in the learning process. Experimental results show the effectiveness of the proposed approach
Human Face Detection Systemin ANew AlgorithmIOSRjournaljce
Trying to detecting a face from any photo is big problem and got these days a focusing because of it importance, in face recognition system,face detection in one of the basic components. A lot of troubles are there to be solved in order to create a successful face detection algorithm. The skin of face has its properties in color domain and also a texture which may help in the algorithm for detecting faces because of its ability to find skins from photo. Here we are going to create a new algorithm for human face detection depending on skin color tone specially YCbCr color tone as an approach to slice the photo into parts. In addition, Gray level has been used to detect the area which contains a skin, after that anotherlevel used to erase the area that does not contain skin. The system which proposed applied on many photos and passed with great accuracy of detecting faces and it has a good efficient especially to separate the area that does not contain skin or face from the area which contain face and skin. It has been agreed and approved that the accuracy of the proposed system is 98% in human face detection.
Value Based Decision Control: Preferences Portfolio Allocation, Winer and Col...IOSRjournaljce
The paper presents an innovative approach to mathematical modeling of complex systems „humandynamical process”. The approach is based on the theory of measurement and utility theory and permits inclusion of human preferences in the objective function. The objective utility function is constructed by recurrent stochastic procedure which represents machine learning based on the human preferences. The approach is demonstrated by two case studies, portfolio allocation with Wiener process and portfolio allocation in the case of financial process with colored noise. The presented formulations could serve as foundation of development of decision support tools for design of management/control. This value-oriented modeling leads to the development of preferences-based decision support in machine learning environment and control/management value based design.
Assessment of the Approaches Used in Indigenous Software Products Development...IOSRjournaljce
Acceptability of indigenous software is always low in most of the African countries especially in Nigeria. This paper then study and presents the results on the assessment of the approaches used in indigenous software development products. The study involved ICT firms that specialized in software development and educational institutions who were part of major stakeholders as well as users of software packages. The primary tool for data collection was questionnaire, which was used to elicit information from software developers on the various approaches adopted in their operations. This was also complemented with information from secondary sources. The identified approaches were measured on a five-point Likert scale rating of 5 to 1 to determine their relative strength index (RSI) in the factors. The result revealed the various approaches adopted for software development had significant difference of chi (45)1699.06 at p≤ 0.001 with spiral (6.02), agile (5.86), prototyping (5.67), object oriented (5.48), rotational unified process (5.32), computer and incremental case (4.50), waterfall (3.66) and integrated (1.98) were the commonly adopted approaches used for software development. Similarly, the approaches adopted by software development firms were correlated and returned a significant difference of (Z = 1699.06, p≤ 0.001). The result implies that these approaches had a great impact on the domestic use of software products and perhaps is the most important driver of software industry growth for emerging technologies.
Secure Data Sharing and Search in Cloud Based Data Using Authoritywise Dynami...IOSRjournaljce
The Data sharing is an important functionality in cloud storage. We describe new public key crypto systems which produce constant-size cipher texts such that efficient delegation of decryption rights for any set of cipher texts are possible. The novelty is that one can aggregate any set of secret keys and make them as compact as a single key, but encompassing the power of all the keys being aggregated. Ensuring the security of cloud computing is second major factor and dealing with because of service availability failure the single cloud providers demonstrated less famous failure and possibility malicious insiders in the single cloud. A movement towards Multi-Clouds, In other words ”Inter-Clouds” or ”Cloud-Of-Clouds” as emerged recently. This works aim to reduce security risk and better flexibility and efficiency to the user. Multi-cloud environment has ability to reduce the security risks as well as it can ensure the security and reliability.
Panorama Technique for 3D Animation movie, Design and EvaluatingIOSRjournaljce
This paper presents an applied approach for Panorama 3D movies enhanced with visual sound effects. The case study that considered is IIPS@UOIT. Many selected S/W have been used to introduce the 3D Movie. 3D Animation is a modern technology in the field of the world of filmmaking and is considered the core of multimedia, where the vast majority of movies such as Hollywood movies that we see today, it was using 3D technology. Where this technique is used in all the magazines, such as medical experiments, engineering, astronomy, planets and stars, to prove scientific theories, history, geography, etc., where they are building models or scenes or characters simulates reality and the movement of the viewer to the heart of the event. A three-dimensional film was made to (IIPS @ UOITC) to give it a future vision and published in the global sites such as YouTube, Facebook and Google earth. By using many specialized 3d software and cinematic tricks, with a focusing on movement, characters, lighting, cameras and final render.
Density Driven Image Coding for Tumor Detection in mri ImageIOSRjournaljce
The significant of multi spectral band resolution is explored towards selection of feature coefficients based on its energy density. Toward the feature representiaon in transformed domain, multi wavelet transformations were used for finer spectral representation. However, due to a large feature count these features are not optimal under low resource computing system. In the recognition units, running with low resources a new coding approach of feature selection, considering the band spectral density is developed. The effective selection of feature element, based on its spectral density achieve two objective of pattern recognition, the feature coefficient representiaon is minimized, hence leading to lower resource requirement, and dominant feature representation, resulting in higher retrieval performance.
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner. If your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of Technologies must be studied and understood. These include multi-tiered architecture, server and client-side scripting techniques, implementation technologies, programming language (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart website and also to know about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
Forklift Classes Overview by Intella PartsIntella Parts
Discover the different forklift classes and their specific applications. Learn how to choose the right forklift for your needs to ensure safety, efficiency, and compliance in your operations.
For more technical information, visit our website https://intellaparts.com
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The Calculation HTML Code included.
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news to celebrate the 13 years since the group was created we have articles including
A case study of the used of Advanced Process Control at the Wastewater Treatment works at Lleida in Spain
A look back on an article on smart wastewater networks in order to see how the industry has measured up in the interim around the adoption of Digital Transformation in the Water Industry.
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two type of water scarcity. One is physical. The other is economic water scarcity.
Student information management system project report ii.pdfKamal Acharya
Our project explains about the student management. This project mainly explains the various actions related to student details. This project shows some ease in adding, editing and deleting the student details. It also provides a less time consuming process for viewing, adding, editing and deleting the marks of the students.
TECHNICAL TRAINING MANUAL GENERAL FAMILIARIZATION COURSEDuvanRamosGarzon1
AIRCRAFT GENERAL
The Single Aisle is the most advanced family aircraft in service today, with fly-by-wire flight controls.
The A318, A319, A320 and A321 are twin-engine subsonic medium range aircraft.
The family offers a choice of engines
Democratizing Fuzzing at Scale by Abhishek Aryaabh.arya
Presented at NUS: Fuzzing and Software Security Summer School 2024
This keynote talks about the democratization of fuzzing at scale, highlighting the collaboration between open source communities, academia, and industry to advance the field of fuzzing. It delves into the history of fuzzing, the development of scalable fuzzing platforms, and the empowerment of community-driven research. The talk will further discuss recent advancements leveraging AI/ML and offer insights into the future evolution of the fuzzing landscape.
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
Quality defects in TMT Bars, Possible causes and Potential Solutions.PrashantGoswami42
Maintaining high-quality standards in the production of TMT bars is crucial for ensuring structural integrity in construction. Addressing common defects through careful monitoring, standardized processes, and advanced technology can significantly improve the quality of TMT bars. Continuous training and adherence to quality control measures will also play a pivotal role in minimizing these defects.
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Technical Specifications
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
Key Features
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface
• Compatible with MAFI CCR system
• Copatiable with IDM8000 CCR
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
Application
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxR&R Consult
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 19, Issue 1, Ver. I (Jan.-Feb. 2017), PP 01-05
www.iosrjournals.org
DOI: 10.9790/0661-1901010105
Performance Evaluation of a Network Using Simulation Tools or Packet Tracer
Sayed Mansoor Hashimi, Ali Güneş
Abstract: Today, the importance of information and of access to information is increasing rapidly. With the advancement of technology, computers, one of the greatest means of acquiring knowledge, have entered many areas of our lives, and the most important of these areas is communication. This study is a practical guide to assembling and analyzing the various parameters used in network performance evaluation, and to what must be considered when designing a network in order to remove the causes of degraded performance. It shows what can be done in a network performance evaluation using simulation tools such as a network simulator or Packet Tracer, and how the various parameters can be brought together successfully. Material at the CCNA, CCNP, HCNA and HCNP training levels has been used, and the important settings have been simulated one by one. The result is a practical guide for a local or wide area network; finally, precautions against performance issues are described. Taking the necessary parameters into account, imaginary networks were designed and evaluated in both Cisco Packet Tracer and Huawei's eNSP simulation program. It should be noted, however, that the networks were designed and evaluated in free virtual environments rather than in a real laboratory; an actual performance appraisal is therefore impossible, as no real measurement data are available.
Keywords: Router, Network Performance Evaluation, eNSP, Packet Tracer
I. Introduction
Network performance evaluation with simulation programs has been the subject of much research over the past years. However, the impact of performance on a network's users is much less understood from a scientific standpoint. This study is a practical guide to assembling and analyzing the various parameters used in the evaluation of network performance, and to what must be considered when designing a network in order to remove the causes of reduced performance. A network configuration built virtually in a simulation program can then be reproduced in a real network deployment. Those who complete this training can design and configure LAN, WAN, WLAN and WWAN networks, configure active network devices such as routers and switches, apply network optimization and performance settings, and manage those settings when necessary. Good network configuration brings greater profit to every level of the enterprise that uses the network service; in turn, the increased profit justifies current and future investment in improving the performance of the network services.
Large distributed network services are at the core of a wide range of specialist activities in the field of networking. Billions of clients and users routinely interact with networks of various kinds for commerce, news, entertainment, and social networking. Computer networks are usually designed with performance goals in mind [1].
II. Difference between Performance Testing, Load Testing and Stress Testing
Performance Testing
When a device is connected to the network to send or receive data between a specific source and a specific destination, the significant matter is the speed of the network: how fast the data travels from one node to another. Performance testing, a subset of performance engineering, identifies and measures the components of system performance: how the network performs in a particular situation, its resource usage, and the reliability and validity of the product under test. It focuses on each node, addressing performance issues in the design and architecture of software products.
Performance testing goal: The ultimate goal of performance testing is to set a standard benchmark for network behavior; many industries define benchmarks that must be met during performance testing. Performance testing does not aim to find defects in the application itself; rather, its primary goals are the more critical tasks of testing the application against benchmarks and standards, checking validity and accuracy, and monitoring performance.
Example: Consider testing network performance on a connection speed versus latency chart, where latency is the time it takes data to travel from source to destination. A 65 KB page would take roughly 15 seconds or more to load over a worst-case 28.8 kbps modem connection; the same page would appear within about 5 seconds over an average 256 kbps DSL connection (latency = 100 milliseconds), while a 1.5 Mbps T1 line would have its performance benchmark set at within 1 second. The time difference between the generation of a request and the acknowledgement of the response should lie in the range of x ms to y ms, where x and y are standard figures. A successful performance test should surface most performance issues, whether related to the database, network, software, hardware, etc. [2].
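As a rough sketch of this calculation, page load time can be approximated as one-way latency plus serialization delay (page size divided by link rate). The figures below are the illustrative values from the example, not measured results:

```python
def transfer_time(page_kb, link_kbps, latency_ms=0.0):
    """Rough page load time in seconds: one-way latency plus
    serialization delay. Size in kilobytes, rate in kilobits/s."""
    serialization = (page_kb * 8) / link_kbps   # seconds to push the bits
    return latency_ms / 1000.0 + serialization

# A 65 KB page over the three illustrative links from the example:
for name, kbps, lat in [("28.8 kbps modem", 28.8, 0),
                        ("256 kbps DSL", 256, 100),
                        ("1.5 Mbps T1", 1500, 100)]:
    print(f"{name}: {transfer_time(65, kbps, lat):.1f} s")
```

Note that this simple model ignores TCP handshakes, retransmissions and server think time, which is why real-world figures (such as the ~15 s modem estimate above) come out somewhat higher than the raw serialization delay.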
Load Testing
Load testing gradually and continuously adds load to the system until it reaches its breaking point. It is the testing method most easily driven by automation tools, for instance LoadRunner or any other suitable tool already at hand. Load testing is also known as volume testing or endurance testing, which are broader names for the same idea. Its critical aim is to give the system its most demanding jobs, keep the test under control, and capture the outcome through endurance monitoring. Notably, the result of a run is sometimes zero: the system is occasionally fed an empty task precisely in order to observe its behavior and reaction.
Load testing goal: The targets of load testing are to expose weaknesses and faults in the application, such as memory problems, memory leaks, and memory mismanagement. Another aim is to find the upper limit of each component of the application (network, database, hardware, etc.), so that the estimated load in upcoming activity can be controlled and managed. Load balancing problems, bandwidth problems, the capacity of the current system, and so on are the issues that load testing ultimately reveals [3].
Example: Suppose more than 1000 people are busy with their email at the same time. To check the email functionality of the application, transactions are exercised in various ways: some users reply, some send, others delete, forward, or read. Assuming a single transaction per person per hour gives 1000 transactions in one hour; assuming 10 transactions per user, we can load test the email server by subjecting it to 10,000 transactions per hour.
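The arithmetic in this example is easy to check with a short script; the user count and per-user transaction rate are the hypothetical figures from the text:

```python
users = 1000
transactions_per_user_per_hour = 10

# Total offered load per hour, and the average arrival rate a load
# generator would have to sustain to reproduce it.
hourly_load = users * transactions_per_user_per_hour
per_second = hourly_load / 3600

print(f"{hourly_load} transactions/hour ≈ {per_second:.2f} per second")
```

An average rate of about 2.8 transactions per second looks modest, but real traffic is bursty, so a load generator is usually also configured with a peak multiplier on top of this average.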
Stress Testing
Stress testing overloads the system's current resources with additional jobs in an attempt to bring the system down. Negative testing, which removes one aspect or component from the system, is also considered a form of stress testing. Stress testing is further known as fatigue testing: just as fatigue normally arises from exercise, here the test must establish how well the application holds up when pushed beyond its bandwidth capacity. The aim of stress testing is to provoke system errors and to observe how the system fails and how well it recovers. Before launching such a test, the challenge is to set up a monitored environment that can reproduce the system's behavior again and again, since stress testing mostly exercises otherwise untried situations.
Stress testing goal: The target of stress testing is to examine post-crash reports and characterize the application's behavior after a failure. The biggest concern is to make sure that the system does not compromise the safety of important data after a failure. In a good stress test, the system returns to its initial condition, together with all its parts, even after the worst breakdown [4].
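The ramp-up idea behind stress testing can be sketched as a loop that doubles the offered load until the system fails. The capacity limit and the stub `system_handles` function below are hypothetical stand-ins for the real system under test:

```python
def system_handles(load, capacity=5000):
    """Stub for the system under test: succeeds while the offered
    load fits within a (hypothetical) capacity limit."""
    return load <= capacity

def stress_ramp(start=100, factor=2, capacity=5000):
    """Multiply the offered load until the stub system fails.
    Returns (last load that succeeded, first load that failed)."""
    load = start
    last_ok = None
    while system_handles(load, capacity):
        last_ok = load
        load *= factor
    return last_ok, load

ok, failed = stress_ramp()
print(f"last sustained load: {ok}, first failing load: {failed}")
```

In a real campaign the interesting artifact is not the numbers themselves but the post-crash state captured between `last_ok` and the failing step, which is where the recovery behavior described above is examined.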
III. Cisco Packet Tracer
Almost all network specialists rely on network simulators, because it is not always possible to work in a laboratory. Simulation programs are very good for practicing network configuration, after which a network can easily be configured in a real environment; examples include NS (Network Simulator), Cisco Packet Tracer, and eNSP. Practicing network configuration in a real network laboratory requires a great deal of equipment and time. This study has focused mainly on network performance evaluation and on network configuration in the eNSP and Cisco simulation programs. Cisco Packet Tracer and eNSP are simulation programs that provide a network lab environment in which users can perform Cisco operations or applications without requiring any physical machine or device. With this software, developed by Cisco, networks can be modeled virtually and processed very easily. These programs run on virtually any platform and are user-friendly, making them simple to set up and use. Cisco Packet Tracer has a simple interface (Figure 1): topologies are created by simply dragging and dropping devices and entering the packet metrics, interfaces can be referenced on devices as desired, and all operations can be performed on the network [5].
Figure 1 Interface of Cisco Packet Tracer
Nowadays, the Cisco Packet Tracer interface is so simple that you do not even need to type a ping command to check whether a network device is working: the envelope tool lets you ping between devices directly, so the status of a device can be seen at a glance.
IV. Simulation
In this study, network performance measurement and network laboratory operations are carried out in Packet Tracer: the topology is drawn in the simulator and the Cisco configuration is made through the simulation program. The basic network is drawn in detail in Figure 5. The scheme has three parts: an ISP with routers, a modem, and computers. The cables connected to the routers are fiber optic, while twisted pair is preferred elsewhere. The serial ports and router IP addresses are entered manually, and everything else is configured statically. Fast Ethernet is used for ingress, and the IP addresses and MAC addresses are matched as given [6]. In addition, many scenarios have been simulated in this study. The devices used in the network provide information about the cabling and the local area network topology included in this simulation. Data transfer methods fall into three kinds: unicast (a single destination address), multicast (delivery to a group of addresses), and broadcast (transmission to all nodes in the network); in each case the data is sent as packets over the network. Figure 2 represents the network measurement, which has three main parts: office, home, and central office. The cable from the modem is shared with the wireless modem. Simulation experiments have shown that the network topology works and performs correctly.
Figure 2 Sample of 3 network (office, home and central office) measurement
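The data transfer methods described above correspond to different kinds of destination address. As a quick illustration, Python's standard `ipaddress` module can classify a destination; the subnet and the example addresses are arbitrary choices, not taken from the paper's topology:

```python
import ipaddress

def transfer_kind(addr, network="192.168.1.0/24"):
    """Classify a destination as unicast, multicast, or the broadcast
    address of the given (assumed) subnet."""
    ip = ipaddress.ip_address(addr)
    net = ipaddress.ip_network(network)
    if ip == net.broadcast_address:
        return "broadcast"   # delivered to every node on the subnet
    if ip.is_multicast:
        return "multicast"   # delivered to subscribed group members
    return "unicast"         # delivered to a single destination

print(transfer_kind("192.168.1.10"))    # unicast
print(transfer_kind("224.0.0.5"))       # multicast
print(transfer_kind("192.168.1.255"))   # broadcast
```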
When designing network topologies and configuring networks, we need software that can test what we will do when working with network systems, and there are many simulator programs that meet this need. In this work, eNSP is used as the simulator program for configuring our network. eNSP (Enterprise Network Simulation Platform) is an extensible, free, graphical network simulation platform developed by Huawei. By simulating Huawei enterprise switches and routers, it demonstrates network device deployment scenarios. eNSP can simulate both small and large networks, so users and clients can run trial tests of a network's performance and learn network technologies without using real devices. The free eNSP simulation program can be downloaded from the Internet; at setup time the firewall must be configured so that the Router, Switch, Access Point, Access Controller, Firewall, and Cloud Engine devices are each allowed to operate individually. When this program is installed, the Wireshark, WinPcap, and Oracle VM VirtualBox software is installed along with it [7].
Network Design and Configuration
The network designed in the eNSP simulation program consists of 5 routers, 9 switches, 2 access controllers, 12 wired computers, 6 laptop computers, 2 mobile phones, and DHCP (Dynamic Host Configuration Protocol) and DNS (Domain Name System) servers.
Figure 3
In the first part of the design of the network, all the devices placed and their names were given as
shown (Figure 3) in the network map [8].
Next, STP (Spanning Tree Protocol) is applied; when STP is implemented, measures are taken against its impact on performance, and it is also needed because the access controllers are configured on the central switches and the design is mixed. In the second part of the design, VLANs (Virtual Local Area Networks) were created and IP addresses were assigned as shown in the network map. The main purpose of creating a separate VLAN for each office is to reduce communication time and guard against performance degradation. At the same time, if any single VLAN is breached, only that VLAN can be harmed, which also benefits security.
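The per-office VLANs described above can be sketched with Huawei VRP commands such as the following (the VLAN IDs, interface names, and STP mode are illustrative assumptions, not values taken from the paper):

```
# Illustrative Huawei VRP commands; VLAN IDs and ports are assumed
system-view
 vlan batch 10 20 30            # one VLAN per office (A, B, C)
 stp mode rstp                  # rapid STP to limit convergence delay
 interface GigabitEthernet0/0/1
  port link-type access
  port default vlan 10          # attach an office-A access port to VLAN 10
 quit
```

On eNSP, entering these on a simulated switch isolates each office's broadcast domain, which is the mechanism behind both the performance and the security benefits mentioned above.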
In the third part of the design, one AC (Access Controller) is connected to the central switches and one AP is added to each edge switch. First, the central switches (BBS1, BBS2, BBS3, and BBS4) are configured for AC1 and AC2, and the configurations of the ports connecting the edge switches to the APs are entered. Later, the broadcast (SSID) configurations are made on the AC1 and AC2 devices.
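A minimal sketch of how an SSID broadcast might be configured on one of the ACs, assuming a recent Huawei VRP WLAN syntax (all profile names, the SSID, the password, and the service VLAN are illustrative assumptions):

```
# Illustrative Huawei AC WLAN configuration; all names and values are assumed
system-view
 capwap source interface vlanif 100     # management VLAN toward the APs
 wlan
  security-profile name sec1
   security wpa2 psk pass-phrase Office@123 aes
  ssid-profile name ssid1
   ssid OfficeWLAN                      # the broadcast clients will see
  vap-profile name vap1
   ssid-profile ssid1
   security-profile sec1
   service-vlan vlan-id 100             # VLAN wireless clients get IPs from
  ap-group name group1
   vap-profile vap1 wlan 1 radio all
```

The exact command tree differs between VRP versions, so this should be read as a sketch of the configuration flow rather than a verbatim transcript of the paper's setup.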
In the fourth part of the design, the local DNS servers, admin PCs, and wired and wireless devices are connected. After all devices are powered on, the wireless part of the design is connected by double-clicking any wireless device, selecting the desired broadcast in the pop-up window, and clicking the Connect button. If the network has a password, it must be entered correctly before clicking the arrow button to connect. If the broadcast is unencrypted, the device connects as soon as the broadcast is selected. For the WLAN, the VLAN from which connecting devices should receive IP addresses is set as the default VLAN, and
the wired devices are connected to the edge switches so that they obtain the correct IPs from the DHCP server. Then, double-click any wired device, select the DHCP option in the pop-up window, and click the Apply button [9].
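A DHCP pool serving one of the office VLANs could be sketched on a Huawei device as follows (the pool name, subnet, gateway, and DNS address are all illustrative assumptions):

```
# Illustrative Huawei VRP DHCP configuration; addresses are assumed
system-view
 dhcp enable
 ip pool office-a                       # hypothetical pool name
  network 192.168.10.0 mask 255.255.255.0
  gateway-list 192.168.10.1
  dns-list 192.168.1.53                 # assumed local DNS server address
 quit
 interface Vlanif 10
  ip address 192.168.10.1 24
  dhcp select global                    # serve this VLAN from the global pool
```

With such a pool in place, selecting the DHCP option on a wired or wireless client in eNSP should make it lease an address from the matching subnet.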
In the fifth part of the design, the offices that use only wired broadcasts were designed: routers are connected to switches and switches to PCs, the same as in offices A, B, and C. In the sixth part of the design, the OSPF protocol is implemented so that offices A, B, C, D, and E can interact with each other efficiently, without loss of data.
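Enabling OSPF on one of the office routers might look like the following sketch (the process ID, area, and advertised subnets are illustrative assumptions):

```
# Illustrative Huawei VRP OSPF configuration; addresses are assumed
system-view
 ospf 1                                 # OSPF process 1
  area 0                                # backbone area
   network 192.168.10.0 0.0.0.255      # advertise the office LAN subnet
   network 10.0.0.0 0.0.0.3            # advertise the inter-office link
 quit
```

Repeating the equivalent commands on each office router lets OSPF compute loop-free shortest paths between the five offices.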
Once the OSPF (Open Shortest Path First) protocol is configured, all devices can communicate with each other. After the offices are connected, an access list can be used to permit or deny any PC or connection, because if every client can reach every other client, performance suffers; for example, if an ordinary user can reach the office-A administrator, he can change any configuration, and security is weakened as well. To remove these problems, access lists are used in the network design configuration. As noted above, it must not be forgotten that this design runs in free virtual labs rather than a real laboratory environment, so the configurations that can be made and the devices that can be operated are limited. For remote access to the network, the necessary configuration is made to enable TELNET connections to the devices [10].
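An ACL shielding the administrator PC, together with Telnet access for remote management, could be sketched as follows (all addresses, the interface, and the password are illustrative assumptions):

```
# Illustrative Huawei VRP ACL and Telnet configuration; values are assumed
system-view
 acl 3000                               # advanced (IP-based) ACL
  rule 5 deny ip source 192.168.20.0 0.0.0.255 destination 192.168.10.10 0
  rule 10 permit ip                     # allow all other traffic
 interface GigabitEthernet0/0/2
  traffic-filter inbound acl 3000       # filter incoming traffic on this port
 quit
 telnet server enable
 user-interface vty 0 4
  authentication-mode password
  set authentication password cipher Admin@123   # assumed password
  protocol inbound telnet
```

Rule 5 blocks the assumed office-B subnet from reaching the assumed admin PC, while the final permit keeps ordinary inter-office traffic flowing.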
V. Results
In this study, LAN, WAN, and WLAN networks, including routers, switches, hubs, the necessary servers, personal computers, phones, a DHCP server, and a DNS server, were designed and simulated. Packet-monitoring software was used in the simulation, and the simulated switches were configured in the same way as the actual devices in the network topology.
Instead of physical devices, the Cisco Packet Tracer and eNSP emulation programs were used, which allow working with multiple network devices in a virtual environment. This is also more advantageous because no time is wasted making cable connections between laboratory equipment and no laboratory is needed, so far less time is spent while more scenarios are run on more topologies, at no cost or budget.
In this study, many applicable scenarios were demonstrated. By extending these scenarios, many improvements can be made for networking, advanced networking, network security, and performance. Both static and dynamic IP configurations were also simulated on a working basis. The simulation shows that a packet can travel between a source and its destination on the network without any problems. With the Cisco Packet Tracer software, many topologies were accomplished by performing realistic simulation; the simulated devices have the same configuration as actual routers, switches, and hubs.
Wide area networks were defined and configured on the required devices. For the computers, DHCP (Dynamic Host Configuration Protocol) pools were set up, and it was verified that the computers accepted IP addresses from them. VoIP operations were carried out for the IP phones, which received IP addresses and exchanged data with each other. EIGRP (Enhanced Interior Gateway Routing Protocol), static, and RIP routing were specified on the routers, and whether the frame relay switch performed the necessary operations toward the router was checked. A Network Time Protocol master server and clients were defined on the data-center network. The DNS server address is detected and distributed to the computers through the DHCP pools.
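The NTP master/client arrangement mentioned above can be sketched with Huawei VRP commands (the stratum value and server address are illustrative assumptions):

```
# Illustrative Huawei VRP NTP configuration; values are assumed
# On the master (data-center router):
system-view
 ntp-service refclock-master 2          # act as stratum-2 reference clock

# On each client device:
system-view
 ntp-service unicast-server 192.168.1.10   # assumed master's address
```

Once synchronized, all devices timestamp logs consistently, which simplifies correlating events when evaluating network performance.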
References
[1]. Olivier Bonaventure (2015). Computer Networking: Principles, Protocols and Practice. http://www.computerhope.com/jargon/n/network.htm.pdf
[2]. JD Meier (2015). https://www.quora.com/What-is-the-difference-between-performance-testing-load-test
[3]. İTÜ BİDB (2015). Captive Portal (Restriction Portal). İstanbul. http://bidb.itu.edu.tr/seyirdefteri/blog/2013/09/07/captive-portal-(k%C4%B1s%C4%B1tlama-portal%C4%B1)
[4]. MEGEP (2015). Wireless Network Systems. http://www.megep.meb.gov.tr/mte_program_modul/moduller_pdf/Kablosuz%20A%C4%9F%20Siste
[5]. Cisco (2015). VLAN Trunk Protocol. Retrieved from http://www.cisco.com/c/en/us/support/docs/lan-switching/vtp/10558-21.html#req
[6]. Cisco Press (2015). Fiber Distributed Data Interface. High-speed computer network. http://www.cisco.com/cpress/cc/td/cpress/fund/ith2nd/it2408.htm
[7]. Huawei (2015). Feature Description - Security. http://support.huawei.com/enterprise/docinforeader.action?contentId=DOC0100534396
[8]. Savaşal, S. (2015). Huawei eNSP. Retrieved 6/10/2016 from http://www.selcuk.savasal.com/?p=255
[9]. Linksys (2015). Linksys configuration. http://www.linksys.com/ca/support-article?articleNum=142912
[10]. Baydar, S. (2015). Everything About OSPF. Retrieved 6/12/2016 from http://www.btyardim.com/ospf-hakkinda-hersey.php