This document summarizes a research paper that proposes a dynamic value engineering method for optimizing risk on real-time operating systems. Value engineering aims to maximize functionality at minimal cost, and risk assessment is an important part of both value engineering and risk management. The paper proposes a dynamic value engineering method that applies systems engineering principles to better understand complex systems and to optimize risk, cost, time and throughput on real-time operating systems. As part of the proposed methodology, it discusses collecting baseline system data and defining preventative controls to secure the system.
This PPT is prepared for students and other professionals as lecture notes for the Management Information System (MIS) subject. It aims to help students recognize, specify and communicate effectively with data processing personnel about information systems. It also helps students interpret new developments in information technology and fit them into an overall framework. Other topics are discussed in the consecutive PPTs.
What is a Software or System?
How to develop a good Software or System?
What are the attributes of a good Software or System?
Which methodology should be used to design a good Software or System?
What is SDLC?
How many phases are there in SDLC?
Comparison of Dynamic Scheduling Techniques in Flexible Manufacturing System (IJERA Editor)
Scheduling is an important tool in manufacturing, since productivity is inherently linked to how well resources are used to increase efficiency and reduce waste. This article analyzes and compares modern techniques for solving the dynamic scheduling problem in flexible manufacturing systems. Traditional scheduling techniques are often impractical in dynamic real-world environments with complex constraints and a variety of unexpected disruptions. The paper defines the modern techniques of dynamic scheduling and provides a literature survey of scheduling research presented in recent years. The principles of several dynamic scheduling techniques, namely dispatching rules, heuristics, genetic algorithms and artificial intelligence techniques, are described in detail and their potential is compared.
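To make the surveyed techniques concrete, here is a minimal sketch (in Python, not from the paper) of one of the simplest dispatching rules, shortest processing time (SPT), compared against first-in-first-out dispatch on a single machine; the job processing times are invented for illustration.

```python
# Illustrative sketch (not from the paper): the shortest-processing-time (SPT)
# dispatching rule versus FIFO on a single machine. Job processing times below
# are invented for demonstration.

def mean_flow_time(processing_times):
    """Mean completion time when jobs run in the given order on one machine."""
    clock, total = 0, 0
    for p in processing_times:
        clock += p        # job finishes at the current clock plus its own time
        total += clock
    return total / len(processing_times)

jobs = [7, 2, 5, 1, 4]                   # hypothetical processing times
fifo = mean_flow_time(jobs)              # dispatch in arrival order
spt = mean_flow_time(sorted(jobs))       # SPT: always dispatch the shortest job

print(f"FIFO mean flow time: {fifo}")    # 12.8
print(f"SPT  mean flow time: {spt}")     # 8.4
```

On a single machine, SPT provably minimizes mean flow time, which is why it remains a common baseline among the dispatching rules such surveys compare.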
Modeling SYN Flooding DoS Attacks using Attack Countermeasure Trees and Findi... (idescitation)
In this paper, a greedy algorithm is proposed to find an optimal set of countermeasures that minimizes the total cost of security investment, subject to the constraint that it covers the entire set of attack events. The algorithm makes use of Birnbaum's Structural Importance Measure to compute the criticality of basic attack events in achieving the goal, which helps in prioritizing the countermeasures covering the attack events.
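The cost-minimizing cover described above is an instance of weighted set cover, for which a greedy heuristic is the standard approach. The sketch below is illustrative only: the countermeasure names, costs, and covered attack events are invented, and the paper's Birnbaum-based criticality weighting is not reproduced.

```python
# Hedged sketch: a generic greedy weighted set-cover heuristic of the kind the
# paper applies to countermeasure selection. All countermeasure names, costs,
# and covered attack events are invented; the paper's Birnbaum-based
# criticality weighting is not reproduced here.

def greedy_cover(universe, candidates):
    """candidates maps name -> (cost, set of attack events it covers).
    Repeatedly pick the candidate with the lowest cost per newly covered
    attack event until every event in the universe is covered."""
    uncovered = set(universe)
    chosen, total = [], 0
    while uncovered:
        name, (cost, covered) = min(
            ((n, c) for n, c in candidates.items() if c[1] & uncovered),
            key=lambda item: item[1][0] / len(item[1][1] & uncovered),
        )
        chosen.append(name)
        total += cost
        uncovered -= covered
    return chosen, total

events = {"syn_flood", "ip_spoof", "half_open"}           # hypothetical events
measures = {
    "syn_cookies":    (3, {"syn_flood", "half_open"}),
    "ingress_filter": (2, {"ip_spoof"}),
    "rate_limit":     (5, {"syn_flood"}),
}
chosen, total = greedy_cover(events, measures)
print(chosen, total)
```

Here the greedy ratio favors "syn_cookies" (cost 3 over two new events) before "ingress_filter", skipping the redundant "rate_limit" entirely.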
Management Information System (MIS) unit-1 (Manoj Kumar)
Unit 1
Introduction to software engineering; software as a product and as a process.
Software process models: waterfall model, incremental development, reuse-oriented software engineering; introduction to agile.
Systems approach vs engineering approach.
Case studies to explain 1) the importance of information systems, 2) availability and reliability of information systems, 3) flexibility of information systems.
Unit 2
A. Software Development Process: SDLC
B. Requirements Engineering: characteristics of requirements, requirement elicitation and analysis, validation and verification
C. Identification of attributes.
D. Feasibility Analysis: technical and economic
Unit 3
3.1 Data Flow Diagrams: Symbols, describing a good system with DFD
3.2 DFD: leveling of DFD, logical and physical DFD
3.3 Process Specification, Decision Tables.
3.4 Introduction to ER Diagrams and Data Dictionary.
Unit 4
4.1 Data Input Methods: data input, coding techniques.
4.2 Designing outputs: objectives of output design, design of output reports.
4.3 Software development: introduction to project and modules, coupling and cohesion
4.4 Case studies on DFD, ERD
Unit 5
5.1 Introduction and importance of software testing
5.2 Software Security concept and software maintenance
5.3 Control of information system
5.4 Audit of information system
Unit 6
6.1 Introduction to software development and deployment environment
6.2 Introduction to component based software engineering
6.3 Introduction to distributed software engineering
6.4 Introduction to service oriented architecture
Seminarie Computernetwerken 2012-2013: Lecture I, 26-02-2013 (Vincenzo De Florio)
Seminarie Computernetwerken is a course given at Universiteit Antwerpen, Belgium: a series of seminars focusing on themes that change from year to year.
This year's themes are resilience, behaviour and evolvability in systems, networks, and organizations.
In what follows we describe:
the themes of the course
a view of the seminars
the rules of the game
Dynamic RWX ACM Model Optimizing the Risk on Real Time Unix File System (Radita Apriana)
Preventive control is one of the well-advanced controls in modern security for protecting data and services from uncertainty, because the increasing importance of business and communication technologies and growing external risk are very common phenomena nowadays. System security risks push management to focus on the IT infrastructure (OS). Top management has to decide whether to accept expected losses or to invest in technical security mechanisms in order to minimize the frequency of attacks and thefts as well as uncertainty. This work contributes to the development of an optimization model that aims to determine the optimal cost to be invested in security mechanisms, deciding on the measured component of the UFS attribute. The model is designed in such a way that Read, Write & Execute are automatically Protected, Detected and Corrected on the RTOS. We optimize system attacks and downtime by implementing the RWX ACM mechanism based on a semigroup structure, meanwhile improving the throughput of Business, Resources & Technology.
EReeRisk - EFFICIENT RISK IMPACT MEASUREMENT TOOL FOR REENGINEERING PROCESS OF... (ijpla)
EReeRisk (Efficient Reengineering Risk) is a risk impact measurement tool which automatically identifies and measures the impact of the various risk components involved in the reengineering process of a legacy software system. EReeRisk takes data directly from users of the legacy system and establishes various risk measurement metrics according to the different risk measurement schemes of the ReeRisk framework [1]. Furthermore, EReeRisk presents a variety of statistical quantities for project management to decide when evolution of a legacy system through reengineering is successful. Its enhanced user interface greatly simplifies the risk assessment procedures and the remaining usage time. The tool can perform the following tasks to support decisions concerning the selection of reengineering as a system evolution strategy.
Systematic Review Automation in Cyber Security (Yogesh, IJTSRD)
Many aspects of cyber security are carried out by automation systems and service applications. The initial steps of the cyber chain mainly focus on different automation tools with almost the same task objective. Automation operations are carried out only after a detailed study, in the pre-engagement phase, of the particular task the tool is going to perform and of how the tool's produced output and datasets are handled. The algorithm to be used is chosen after comparing existing tools' efficiency, throughput time, output format for reusable input and, mainly, resource consumption. In this paper we study the existing methodology in application and system pen testing, automation tools' efficiency amid growing technology, and their behaviour on unintended platform assignment. Nitin | Dr. Lakshmi J. V. N "Systematic Review: Automation in Cyber Security" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-4, June 2021, URL: https://www.ijtsrd.com/papers/ijtsrd41315.pdf Paper URL: https://www.ijtsrd.com/computer-science/computer-security/41315/systematic-review-automation-in-cyber-security/nitin
Software plays a critical role in businesses, governments, and societies. Improving the performance and quality of software is an important goal of software engineering. Mining software data has recently emerged as a promising means to meet this goal, due to two main trends: the increasing abundance of such data and its demonstrated helpfulness in solving numerous real-world problems. Poor performance costs the software industry millions annually in the form of lost revenue, hardware costs, damaged customer relations and decreased productivity. Performance analysis and evaluation through data mining techniques will yield performance improvement suggestions for software developers.
User Centric Machine Learning Framework for Cyber Security Operations Center (Venkat Projects)
In order to ensure a company's Internet security, a SIEM (Security Information and Event Management) system is put in place to consolidate the various preventive technologies and flag alerts for security events. SOC analysts investigate the alerts to determine whether they are true or not. However, the number of alerts is in general overwhelming, with the majority being false positives, and exceeds the SOC's capacity to handle all of them. Because of this, malicious attacks and compromised hosts may be missed. Machine learning is a viable approach to reducing the false positive rate and improving the productivity of SOC analysts. In this article, we develop a user-centric machine learning framework for the cyber security operations center in a real organizational context. We discuss the typical data sources in a SOC, their work flow, and how to process this data to create an effective machine learning system. The article is aimed at two groups of readers. The first group is researchers and data scientists who have no knowledge of the cyber security field but should develop machine learning systems for cyber security. The second group is cyber security practitioners who have deep knowledge and expertise in cyber security but no machine learning experience and would like to build such a system by themselves. At the end of the paper, we use the account example to demonstrate the full steps from data collection, label creation and feature engineering to machine learning algorithm selection and performance evaluation, using a system built in the SOC production of Seyondike.
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering & Science are selected through rigorous peer review to ensure originality, timeliness, relevance, and readability.
Conference: 43rd Annual Conference of the IEEE Industrial Electronics Society (IECON 2017), 29 October – 1 November 2017, China National Convention Center, Beijing, China
Title of the paper: Principles and risk assessment of managing distributed ontologies hosted by embedded devices for controlling industrial systems
Authors: Borja Ramis Ferrer, Samuel Olaiya Afolaranmi, Jose Luis Martinez Lastra
Security has always been a great concern for software systems, due to the increased incursion of wireless devices in recent years. Software engineering processes generally try to impose security measures during the various design phases, which results in inefficient measures. This calls for a new software engineering process that gives a proper framework for integrating security requirements into the SDLC: requirements engineers must discover all the security requirements related to a particular system, so that security requirements can be analyzed and prioritized in one go. In this paper we present a new technique for prioritizing these requirements based on risk measurement techniques. The true security requirements should be identified as early as possible, so that they can be systematically analyzed and every architecture team can choose the most appropriate mechanism to implement them.
Similar to Dynamic Value Engineering Method Optimizing the Risk on Real Time Operating System
An Heterogeneous Population-Based Genetic Algorithm for Data Clustering (ijeei-iaes)
As a primary data mining method for knowledge discovery, clustering is a technique for classifying a dataset into groups of similar objects. The most popular data clustering method, K-means, suffers from the drawback of requiring the number of clusters and their initial centers, which must be provided by the user. In the literature, several methods have been proposed, in the form of K-means variants, genetic algorithms, or combinations of the two, for calculating the number of clusters and finding proper cluster centers. However, none of these solutions has provided satisfactory results, and determining the number of clusters and the initial centers is still the main challenge in clustering processes. In this paper we present an approach that automatically generates such parameters to achieve optimal clusters, using a modified genetic algorithm operating on varied individual structures and using a new crossover operator. Experimental results show that our modified genetic algorithm is a more efficient alternative to the existing approaches.
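As a hedged illustration of the drawback the abstract starts from (plain K-means needs k supplied by the user), the sketch below runs K-means on synthetic 1-D data for several candidate k and reports the within-cluster SSE, which keeps shrinking as k grows and so cannot by itself select k; all data and parameters are made up, and this is not the paper's genetic algorithm.

```python
# Illustrative only: why plain K-means needs k from the user. We run K-means
# for several candidate k on synthetic 1-D data (three clusters around 0, 10,
# 20) and report within-cluster SSE, which decreases as k grows, so SSE alone
# cannot pick the "right" k. Everything here is made up for demonstration.
import random

random.seed(0)
data = [random.gauss(center, 1.0) for center in (0, 10, 20) for _ in range(30)]

def kmeans_sse(points, k, iters=20):
    """Lloyd's algorithm; returns the within-cluster sum of squared errors."""
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: (p - centers[i]) ** 2)].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sum(min((p - c) ** 2 for c in centers) for p in points)

sse_by_k = {k: kmeans_sse(data, k) for k in (1, 2, 3, 4)}
print({k: round(v, 1) for k, v in sse_by_k.items()})
```

Methods like the one the abstract proposes address exactly this gap by evolving the number of clusters and the centers instead of asking the user for them.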
Development of a Wireless Sensors Network for Greenhouse Monitoring and Control (ijeei-iaes)
Wireless sensor networks (WSN) can be used to monitor and control many environmental parameters such as temperature, humidity, and radiation leakage. In a greenhouse, the weather and soil should be independent of natural agents. To achieve this, wireless sensor nodes can be deployed that communicate with a central base station to measure and transmit the required environmental factors. In this paper a WSN was implemented by deploying wireless sensor nodes in a greenhouse with temperature, humidity, moisture, light, and CO2 sensors. The proposed model was built and tested, and the results show an excellent improvement in the sensed parameters. To control the environmental factors, the microcontroller is programmed to regulate the parameters according to preset values, or manually through a user interface panel.
Analysis of Genetic Algorithm for Effective Power Delivery and with Best Upsurge (ijeei-iaes)
A wireless sensor network may comprise hundreds or thousands of nodes, where each node is connected to one or sometimes more sensors. WSNs integrate circuits, embedded systems, networks, modems, wireless communication and the dissemination of information. Recent developments are under way toward miniaturization and low power consumption. Nodes act as gateways and clients that deliver data to the WSN server; separate routing components, called routers, compute and distribute routing tables. The paper discusses energy-balanced routing in wireless networks. As an optimization solution, a genetic algorithm is created, with cluster heads selected before the proposed algorithm is run. In this study, the proposed model is simulated and results are reported for parameters including dead nodes, the number of bits transmitted to the base station, and the energy consumption of the cluster heads, showing the relative performance of the proposed algorithm across the network.
Design for Postplacement Mousing based on GSM in Long-Distance (ijeei-iaes)
This mousing design is made up of a power control module, an infrared sensor module, a signal processing module, distance information transport based on GSM, and a power grid device. The design consists of two sets of conductors, separately linked by the live wire and the null line and distributed alternately. The major innovation is the infrared sensor module with a Fresnel lens, whose infrared detecting area should be spread in at least one direction. When the mouse gets into the infrared detecting area, the sensor signal of the infrared detecting device is sent to the power control module through the signal element, which then starts the power grid device to power up and shock or kill the mouse. A GSM module is adopted to report that the mouse has been caught successfully. This design can be placed in any position the mouse frequents, with no need for baits.
Investigation of TTMC-SVPWM Strategies for Diode Clamped and Cascaded H-bridg... (ijeei-iaes)
This paper presents two types of multilevel inverters, diode clamped and cascaded H-bridge, for harmonic reduction in high power applications. Multilevel inverters can be used to reduce harmonic problems in electrical distribution systems. The paper focuses on the performance and analysis of a three phase seven level inverter, both diode clamped and cascaded H-bridge, based on a new trapezoidal triangular space vector PWM approach: a TTMC-based modified space vector pulse width modulation technique, the so-called trapezoidal triangular space vector pulse width modulation (TTMC-SVPWM) technique. The reference sine wave is generated as in the conventional offset-injected SVPWM technique. It is observed that TTMC space vector pulse width modulation ensures excellent, close to optimized pulse distribution, and THD is compared for the seven level diode clamped and cascaded multilevel inverters. Theoretical investigations were confirmed by digital simulations using MATLAB/SIMULINK software.
Optimal Power Flow with Reactive Power Compensation for Cost And Loss Minimiz... (ijeei-iaes)
One of the concerns of power system planners is the problem of optimal generation cost as well as loss minimization on the grid system. This issue can be addressed in a number of ways; one of them is the use of reactive power support (shunt capacitor compensation). This paper used shunt capacitor placement for cost and transmission loss minimization on the Nigerian power grid system, a 24-bus, 330kV network interconnecting four thermal generating stations (Sapele, Delta, Afam and Egbin) and three hydro stations to various load points. Simulation in MATLAB was performed on the Nigerian 330kV transmission grid system. The technique employed was based on optimal power flow formulations using the Newton-Raphson iterative method for load flow analysis of the grid system. The results show that when shunt capacitors were employed as inequality constraints on the power system, there was a reduction in the total cost of generation accompanied by a reduction in total system losses, with a significant improvement in the system voltage profile.
Mitigation of Power Quality Problems Using Custom Power Devices: A Review (ijeei-iaes)
Electrical power quality (EPQ) in distribution systems is a critical issue for commercial, industrial and residential applications. The new concept of advanced power electronics based Custom Power Devices (CPDs), mainly the distributed static synchronous compensator (D-STATCOM), dynamic voltage restorer (DVR) and unified power quality conditioner (UPQC), has been developed because traditional compensating devices lack the performance to minimize power quality disturbances. This paper presents a comprehensive review of the D-STATCOM, DVR and UPQC for solving the electrical power quality problems of distribution networks. It is intended to give a broad overview of the various possible D-STATCOM, DVR and UPQC configurations for single-phase (two-wire) and three-phase (three-wire and four-wire) networks, and of control strategies for the compensation of various power quality disturbances. Apart from this, comprehensive explanation, comparison, and discussion of the D-STATCOM, DVR, and UPQC are presented. The paper aims to offer a broad perspective on the status of D-STATCOMs, DVRs, and UPQCs to researchers, engineers and the community dealing with power quality enhancement. A classified list of some of the latest research publications on the topic is also appended for quick reference.
Comparison of Dynamic Stability Response of A SMIB with PI and Fuzzy Controll... (ijeei-iaes)
Consumer utilities are non-linear in nature. This injects increased current flow and reduced voltage with distortions, which adversely affect the stability of consumer utilities. To overcome this problem we use a modern Flexible Alternating Current Transmission System controller, the distributed power flow controller (DPFC). This controller is similar to the UPFC and can be installed in a transmission line between two electrical areas. In the DPFC, instead of the common DC link capacitor, three single phase converters are used. In this paper we concentrate on system stability (oscillation damping). For analyzing the stability of a single machine infinite bus (SMIB) system we have used a PI-controlled Distributed Power Flow Controller (DPFC) and a fuzzy-controlled DPFC. All these models are simulated using MATLAB/SIMULINK. Simulation results show that the fuzzy-controlled DPFC is better than the PI-controlled DPFC. The significance of the results is better stability and constant power supply.
Embellished Particle Swarm Optimization Algorithm for Solving Reactive Power ... (ijeei-iaes)
This paper proposes the Embellished Particle Swarm Optimization (EPSO) algorithm for solving the reactive power problem. The main concept of Embellished Particle Swarm Optimization is to extend single-population PSO to an interacting multi-swarm model. Through this multi-swarm cooperative approach, diversity in the whole swarm community can be upheld. Concurrently, the swarm-to-swarm mechanism drastically speeds up the swarm community's convergence to the global near-optimum. In order to evaluate the performance of the proposed algorithm, it has been tested on the standard IEEE 57 and 118 bus systems, and results show that Embellished Particle Swarm Optimization (EPSO) is more efficient in reducing real power losses when compared to other standard reported algorithms.
Intelligent Management on the Home Consumers with Zero Energy Consumption (ijeei-iaes)
The energy and environment crisis has forced modern humans to think about new and clean energy sources, in particular renewable energy sources. With the development of the home network, residents have the opportunity to plan home electricity usage with the goal of reducing the cost of electricity. In this regard, to improve energy consumption efficiency in residential buildings, smart buildings with zero energy consumption were considered a proper option. A zero-energy building is a building with smart equipment whose integral of generated and consumed power within a year is zero. In this article, smart devices submit their power consumption, together with the requested activity and the user's time settings for run times and end times, to the energy management unit, which ultimately determines the time to start work. The problem's target function is reducing the energy cost for the consumer while taking into account the applicable limitations.
Analysing Transportation Data with Open Source Big Data Analytic Tools (ijeei-iaes)
Big data analytics allows a vast amount of structured and unstructured data to be effectively processed so that correlations, hidden patterns, and other useful information can be mined from the data. Several open source big data analytic tools that can perform tasks such as dimensionality reduction, feature extraction, transformation, and optimization are now available. One interesting area where such tools can provide effective solutions is transportation. Big data analytics can be used to efficiently manage transport infrastructure assets such as roads, airports, bus stations or ports. In this paper an overview of two open source big data analytic tools is first provided, followed by a simple demonstration of applying these tools to a transport dataset.
A Pattern Classification Based approach for Blur Classification (ijeei-iaes)
Blur type identification is one of the most crucial steps of image restoration. In blind restoration, it is generally assumed that the blur type is known prior to restoration, but this is not practical in real applications. So blur type identification is highly desirable before applying a blind restoration technique to a blurred image. This paper presents an approach to categorize blur into three classes, namely motion, defocus, and combined blur. Curvelet transform based energy features are utilized as features of blur patterns, and a neural network is designed for classification. The simulation results show the preciseness of the proposed approach.
Computing Some Degree-Based Topological Indices of Grapheneijeei-iaes
Graphene is one of the most promising nanomaterial because of its unique combination of superb properties, which opens a way for its exploitation in a wide spectrum of applications ranging from electronics to optics, sensors, and bio devices. Inspired by recent work on Graphene of computing topological indices, here we compute new topological indices viz. Arithmetic-Geometric index (AG2 index), SK3 index and Sanskruti index of a molecular graph G and obtain the explicit formulae of these indices for Graphene.
A Lyapunov Based Approach to Enchance Wind Turbine Stabilityijeei-iaes
This paper introduces a nonlinear control of a wind turbine based on a Double Feed Induction Generator. The Rotor Side converter is controlled by using field oriented control and Backstepping strategy to enhance the dynamic stability response. The Grid Side converter is controlled by a sliding mode. These methods aim to increase dynamic system stability for variable wind speed. Hence, The Doubly Fed Induction Generator (DFIG) is studied in order to illustrate its behavior in case of severe disturbance, and its dynamic response in grid connected mode for variable speed wind operation. The model is presented and simulated under Matlab/ Simulink.
Fuzzy Control of a Large Crane Structureijeei-iaes
The usage of tower cranes, one type of rotary cranes, is common in many industrial structures, e.g., shipyards, factories, etc. With the size of these cranes becoming larger and the motion expected to be faster and has no prescribed path, their manual operation becomes difficult and hence, automatic closed-loop control schemes are very important in the operation of rotary crane. In this paper, the plant of concern is a tower crane consists of a rotatable jib that carries a trolley which is capable of traveling over the length of the jib. There is a pendulum-like end line attached to the trolley through a cable of variable length. A fuzzy logic controller with various types of membership functions is implemented for controlling the position of the trolley and damping the load oscillations. It consists of two main types of controllers radial and rotational each of two fuzzy inference engines (FIEs). The radial controller is used to control the trolley position and the rotational is used for damping the load oscillations. Computer simulations are used to verify the performance of the controller. The results from the simulations show the effectiveness of the method in the control of tower crane keeping load swings small at the end of motion.
Site Diversity Technique Application on Rain Attenuation for Lagosijeei-iaes
This paper studied the impact of site diversity (SD) as a fade mitigation technique on rain attenuation at 12 GHz for Lagos. SD is one of the most effective methods to overcome such large fades due to rain attenuation that takes advantage of the usually localized nature of intense rainfall by receiving the satellite downlink signal at two or more earth stations to minimize the prospect of potential diversity stations being simultaneously subjected to significant rain attenuation. One year (January to December 2011) hourly rain gauge data was sourced from the Nigerian Meteorological Agency (NIMET) for three sites (Ikeja, Ikorodu and Marina) in Lagos, Nigeria. Significant improvement in both performance and availability was observed with the application of SD technique; again, separation distance was seen to be responsible for this observed performance improvements.
Impact of Next Generation Cognitive Radio Network on the Wireless Green Eco s...ijeei-iaes
Land mobile communication is burdened with typical propagation constraints due to the channel characteristics in radio systems.Also,the propagation characteristics vary form place to place and also as the mobile unit moves,from time to time.Hence,the tramsmission path between transmitter and receiver varies from simple direct LOS to the one which is severely obstructed by buildings, foliage and terrain. Multipath propagation and shadow fading effects affect the signal strength of an arbitrary Transmitter-Receiver due to the rapid fluctuations in the phase and amplitude of signal which also determines the average power over an area of tens or hundreds of meters. Shadowing introduces additional fluctuations, so the received local mean power varies around the area –mean. The present paper deals with the performance analysis of impact of next generation wireless cognitive radio network on wireless green eco system through signal and interference level based k coverage probability under the shadow fading effects.
Music Recommendation System with User-based and Item-based Collaborative Filt...ijeei-iaes
Internet and E-commerce are the generators of abundant of data, causing information Overloading. The problem of information overloading is addressed by Recommendation Systems (RS). RS can provide suggestions about a new product, movie or music etc. This paper is about Music Recommendation System, which will recommend songs to users based on their past history i.e. taste. In this paper we proposed a collaborative filtering technique based on users and items. First user-item rating matrix is used to form user clusters and item clusters. Next these clusters are used to find the most similar user cluster or most similar item cluster to a target user. Finally songs are recommended from the most similar user and item clusters. The proposed algorithm is implemented on the benchmark dataset Last.fm. Results show that the performance of proposed method is better than the most popular baseline method.
A Real-Time Implementation of Moving Object Action Recognition System Based o...ijeei-iaes
This paper proposes a PixelStreams-based FPGA implementation of a real-time system that can detect and recognize human activity using Handel-C. In the first part of our work, we propose a GUI programmed using Visual C++ to facilitate the implementation for novice users. Using this GUI, the user can program/erase the FPGA or change the parameters of different algorithms and filters. The second part of this work details the hardware implementation of a real-time video surveillance system on an FPGA, including all the stages, i.e., capture, processing, and display, using DK IDE. The targeted circuit is an XC2V1000 FPGA embedded on Agility’s RC200E board. The PixelStreams-based implementation was successfully realized and validated for real-time motion detection and recognition.
Wireless Sensor Network for Radiation Detectionijeei-iaes
n this paper a wireless sensor network (WSN) is designed from a group of radiation detector stations with different types of sensors. These stations are located in different areas and each sensor transmits its data through GSM network to the main monitoring and control station. The design includes GPS module to determine the location of mobile and fixed station. The data is transmitted with GSM/GPRS modem. Instead of using traditional SMS data string or word messages a digital data frame is constructed and transmitted as SMS data. In the main monitoring station graphical user interface (GUI) software is designed to shows information and statues of the all stations in the network. It reports any radiation leaks, in addition to the data; the GUI contains a geographical map to display the location of the leakage station and can control the stations power consumption by sending a special command to it.
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news to celebrate the 13 years since the group was created we have articles including
A case study of the used of Advanced Process Control at the Wastewater Treatment works at Lleida in Spain
A look back on an article on smart wastewater networks in order to see how the industry has measured up in the interim around the adoption of Digital Transformation in the Water Industry.
Immunizing Image Classifiers Against Localized Adversary Attacksgerogepatton
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks
(CNN)s, to adversarial attacks and presents a proactive training technique designed to counter them. We
introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations.
When combined with 3D convolution and deep curriculum learning optimization (CLO), itsignificantly improves
the immunity of models against localized universal attacks by up to 40%. We evaluate our proposed approach
using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10
and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing
accuracy improvements over previous techniques. The results indicate that the combination of the volumetric
input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating
adversary training.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL). A Govt. owned Company of Bangladesh Chemical Industries Corporation under Ministry of Industries.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams from the hydrologist’s survey of the valley before construction, all aspects and involved disciplines, fluid dynamics, structural engineering, generation and mains frequency regulation to the very transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
Dynamic Value Engineering Method Optimizing the Risk on Real Time Operating System
Indonesian Journal of Electrical Engineering and Informatics (IJEEI)
Vol. 2, No. 2, June 2014, pp. 101-110
ISSN: 2089-3272
Received February 24, 2014; Revised April 29, 2014; Accepted May 16, 2014

Dr Prashant Kumar Patra (1), Padma Lochan Pradhan (2)
(1) Dept. of CSE, College of Engineering & Technology, BPUT, Bhubaneswar-751003, Orissa, India.
(2) Dept. of CSE, Central Institute of Technology, Raipur, CG, India.
e-mail: citrprcs@rediffmail.com
Abstract
Value engineering is the umbrella covering many subsystems, such as quality assurance, quality control, quality function design, and design for manufacturability. System engineering and value engineering are two sides of the same coin. Value engineering is high-level technology management applicable to every engineering field: it seeks the highest utilization of system products (i.e., processor, memory, and encryption key), services, business, and resources at minimal cost. A high-end operating system should provide the highest level of service at optimal cost and time. Value engineering maximizes the performance, accountability, reliability, integrity, and availability of the processor, memory, encryption key, and the other interdependent subcomponents. Value engineering (VE) is the ratio of the maximum functionality of the individual components to the optimal cost: it is directly proportional to the performance of the individual components, inversely proportional to the cost, and directly proportional to the quality of the risk assessment. VE maximizes business throughput and supports the decision process while minimizing risk and downtime. In this paper we develop a dynamic value engineering method for risk optimization over a complex real-time operating system.
Keywords: Value Engineering, Product Specification, Business Specification, Encryption, Processor,
Memory, Value Method, High Availability
1. Introduction
In the present age, technology management can be defined as the integrated planning, design, organization, operation, and control of products and services at minimal cost. Technology management improves value engineering in a systematic and continuous manner: when individual components perform well and are highly utilized at low cost, productivity of services improves and the risk management system is satisfied. It is a systematic, continuous process required for high productivity at low cost [1], [6-7].
Figure 1. PBR (Product (P, M, E, C, A), Business, Resources) optimizes the PME
Value engineering (VE) is a systematic approach to improving the value of goods or products and services by real-time experimental methods. Value can be increased by improving functionality, by reducing cost, or by both simultaneously. Value engineering specifically establishes and maintains cost-effective value engineering procedures and processes in a systematic way. It is based on system engineering: the value engineering concept can be developed by combining system engineering (SE), reliability engineering (RE), and security engineering (SC) [3], [12].
The value methodology (VM) is a dynamic, systematic, and structured approach that improves projects, products, and processes. VM is used to analyze manufacturing products and processes and to define, design, and develop projects, business, and administrative processes. VM helps to achieve a balance between the required functions, performance, quality, reliability, scalability, high availability, safety, and scope on one hand, and the cost and other resources necessary to accomplish those requirements on the other. The proper balance results in the maximum value for the project [3], [12].
1.1 Value Engineering: Value = Function / Cost
Value is the reliable performance of functions to meet customer needs (requirement analysis) at the lowest overall cost.
Function is the natural or characteristic action (the behaviors and characteristics of components) performed by a product or service.
Cost is the expenditure necessary to produce a project, goods, a service, a process, or a structure.
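The ratio above can be made concrete with a small sketch. The component scores and costs below are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch of Value = Function / Cost (illustrative numbers only).

def value(function_score: float, cost: float) -> float:
    """Value engineering ratio: functionality delivered per unit cost."""
    if cost <= 0:
        raise ValueError("cost must be positive")
    return function_score / cost

# Two candidate configurations for the same requirement (assumed scores):
baseline = value(function_score=80.0, cost=40.0)   # 80 / 40 = 2.0
optimized = value(function_score=90.0, cost=30.0)  # 90 / 30 = 3.0

# Raising functionality while lowering cost raises value.
assert optimized > baseline
```

The point of the sketch is only that value rises when functionality improves, cost falls, or both, exactly as stated above.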
When system engineering and value engineering meet, the risk assessment can be optimized at the lowest cost. Risk assessment is the first process of the risk management methodology. Organizations use risk management to determine the extent of the potential threats and the risk associated with an IT system and its subsystems throughout the life cycle. The output of this process helps to identify appropriate controls (preventive, detective, and corrective) for reducing or eliminating risk during the risk mitigation process, as discussed in the proposed VE method. Risk is a function of the likelihood of a given threat source exercising a particular potential vulnerability, and of the resulting impact of that adverse event on the organization. To determine the likelihood of a future adverse event, threats to an IT system must be analyzed in conjunction with the potential vulnerabilities and the controls in place for the IT system. Impact refers to the magnitude of harm that could be caused by a threat's exercise of a vulnerability. The level of impact is governed by the potential mission impacts, which in turn produce a relative value for the IT assets and resources affected (e.g., the criticality and sensitivity of the IT system components, devices, and data) [13], [18].
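The risk definition above (likelihood of a threat exercising a vulnerability, times the impact of the adverse event) can be sketched as a simple qualitative rating; the numeric thresholds below are illustrative assumptions, not taken from the paper:

```python
# Qualitative risk rating sketch: Risk = Likelihood x Impact.
# Thresholds are illustrative assumptions.

def risk_level(likelihood: float, impact: float) -> str:
    """Rate risk High/Medium/Low from likelihood (0-1) and impact (0-100)."""
    score = likelihood * impact
    if score >= 50:
        return "High"
    if score >= 10:
        return "Medium"
    return "Low"

# A likely threat against a critical asset is High risk:
assert risk_level(0.9, 80) == "High"     # score 72
# An unlikely threat against a minor asset is Low risk:
assert risk_level(0.1, 20) == "Low"      # score 2
```

Real risk methodologies use calibrated scales rather than a bare product, but the monotonic relationship (higher likelihood or impact, higher risk) is the same.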
1.2 Operating System
The operating system is the collection of hardware, software, and applications that manages system resources and provides common services for resources, programs, applications, and users. The operating system is an essential component of the system software (processor, memory, encryption, shell, file, and kernel) in a computer system; high-level language application programs usually require an operating system to function. A real-time operating system is a multitasking, time-sharing, and distributed operating system that aims at executing real-time applications. Real-time operating systems often use specialized scheduling algorithms to achieve deterministic behavior; their main objective is a quick and predictable response to events. They have an event-driven design, a time-sharing design, or aspects of both: an event-driven system switches between tasks based on their priorities or on external (resource) events, while a time-sharing operating system switches tasks based on clock interrupts [9-10], [15-16].

Operating system control is a step-by-step process of securely configuring a system to protect it against unauthorized access, while also taking steps to make the system more reliable and available; broadly, it covers anything done in the name of securing the system. Preventive control ensures the system is secure, reliable, and highly available for a mature IT culture.
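The two switching policies described above can be contrasted in a minimal sketch. The task set is hypothetical and this is not the paper's implementation: an event-driven system always dispatches the highest-priority ready task, while a time-sharing system rotates tasks on clock ticks.

```python
import heapq
from collections import deque

def event_driven_order(tasks):
    """Event-driven dispatch: always run the highest-priority ready task.
    tasks: list of (priority, name); lower number = higher priority."""
    heap = list(tasks)
    heapq.heapify(heap)
    return [name for _, name in (heapq.heappop(heap) for _ in range(len(heap)))]

def time_sharing_order(tasks, ticks):
    """Time-sharing dispatch: rotate tasks round-robin on clock interrupts."""
    queue = deque(name for _, name in tasks)
    order = []
    for _ in range(ticks):
        task = queue.popleft()
        order.append(task)
        queue.append(task)  # preempted by the clock interrupt, requeued
    return order

tasks = [(2, "logger"), (0, "sensor_isr"), (1, "control_loop")]
# Priorities decide the order under event-driven dispatch:
assert event_driven_order(tasks) == ["sensor_isr", "control_loop", "logger"]
# The clock decides the order under time sharing:
assert time_sharing_order(tasks, 4) == ["logger", "sensor_isr", "control_loop", "logger"]
```

The deterministic, priority-first behavior of the first function is what gives a real-time system its predictable response to events.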
The role of VE in regard to risk management:
- Communicate risk to top management with the help of mobile messages (mobicharts).
- Organize and understand the variables affecting risk (High, Medium, and Low).
- Maintain traditional cascading risk charts, the risk matrix, and the risk register (physical/logical).
- Implement quantitative risk analysis, design, and implementation.
- Assess the current design state and the existing methods and approach (/var/adm/message); VE products provide a decision-making system.
- Remove or disable emotional elements and objects from the system.
- Enable fact-based decision making and acquisition by top management with the help of mobile computing.
- Decision analysis: build consensus, define alternatives, and assign priorities in a systematic manner.
- Define, design, develop, deploy, and decide (D^4) in a systematic manner.
2. Literature Survey [2], [9-10], [15-16]
A technical literature survey in the IS security area is risky, critical, and tedious work: collecting the actual data, analysis, investigation, and evidence in a real-life system is an ongoing, continuous process, and it is very time consuming to investigate and judge the information.

Many textbooks and reference books help us to identify the real issues. Reference books such as Applied Cryptography by Bruce Schneier and Cryptography and Network Security by William Stallings are very helpful in expanding our ideas, and the proposed model and method is very applicable to cryptographic key management issues. The Sun Microsystems UNIX Sun Solaris System Administration Guide, Vol. 1 and Vol. 2, and O'Reilly's Essential System Administration are very helpful for collecting the basic data [17], [18].

In our past experience with operating systems as well as network servers, we observed that many system parameters are left at their defaults and lag behind larger applications with multiple resources and businesses on heterogeneous platforms. Many issues then arise: memory, performance, bandwidth, network, and packet handling slow down. This issue is highlighted in our problem statement, action plan, and proposed method.

We have to find methods to make the high-end operating system more efficient, secure, highly available, and robust. Nowhere have detailed methods been developed, graphically as well as mathematically, for the risk management of the OS. Several issues remain undeveloped, such as risk identification, risk analysis, risk mitigation, and risk results at the operating system level; we have to develop risk identification, risk analysis, and risk mitigation in both analytical and graphical form. Many documents are available on risk identification, risk analysis, and risk mitigation in the general sense, but at the operating system level the classification and categorization of risk is not available today. We have to focus on system specifics such as the OS and system software for VE, RM, and the decision process. We can develop optimization models, methods, and mechanisms for risk mitigation based on a technology survey, considering the functionality of the individual components of the operating system: Product, Business, and Resources.
2.1 Data Collection Based on Existing System Engineering (Basic Data)
A number of system engineering preventive control methods have been developed, as required by secure computing, to achieve the highest level of business objectives. The UNIX file system has to be configured as per business requirements [2], [9-10], [15-16].
Table 1. Basic Data

SN | System File | Input (Owner) | Action Plan | Remarks / Output
1 | /etc/system | [PS] Product | Implement the kernel & n-bit processor | Can improve the system performance
2 | /etc/hosts | | Develop allow/disallow scripts as per policy (chmod 000 = disallow) | Preventative control; access control mechanism
3 | /etc/services | [BS] Services | Disable third-party services; remove ftp, http, telnet, port numbers, printer and IP services that are not required | Preventative control
4 | /usr/bin/rsh, /etc/pam.conf | | Disable all remote services: chmod 000 /usr/bin/rsh, rksh, rcp, ruser, rlogin, uptime | Preventative control
5 | /var/adm/message | | Date & time stamp (event mgmt) for internal audit purposes | Detective control
6 | /etc/rc.conf | Run-level script | Develop run-level scripts as per requirement (/etc/init.conf, rc2.d; example: httpd_flags="NO") | Preventative control
7 | /etc/init.d | OS services, run level | | Preventative control
8 | /etc/ssh/sshd_config | [PS] Automated control | Enable cryptography through SSH (AES 256-bit cipher: blowfish-cbc, aes256-cbc, aes256-ctr); ssh-keygen -b 1024 -f /etc/ssh_host_key -n ''; chmod /etc/ssh/ssh_config | Preventative control; n = 1024, 2048, 4096; chmod r w x (i.e. 4 2 1), blank is nothing
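The preventive controls in Table 1 amount to a file-permission policy that can be audited mechanically. The sketch below is a hypothetical helper, not part of the paper's method: it checks a recorded set of file modes against a policy dictionary (the two policy entries are assumptions modeled on rows 4 and 8 of the table).

```python
import stat

# Hypothetical policy derived from Table 1 (assumed target modes).
POLICY = {
    "/usr/bin/rsh": 0o000,          # remote shell disabled (chmod 000)
    "/etc/ssh/sshd_config": 0o600,  # assumed: readable/writable by root only
}

def audit(observed_modes: dict) -> list:
    """Return (path, observed, expected) for each file violating POLICY.

    observed_modes maps path -> raw st_mode value; files absent from the
    snapshot are skipped rather than flagged."""
    violations = []
    for path, expected in POLICY.items():
        observed = observed_modes.get(path)
        if observed is not None and stat.S_IMODE(observed) != expected:
            violations.append((path, oct(stat.S_IMODE(observed)), oct(expected)))
    return violations

# A world-executable rsh is flagged; a locked-down one passes.
assert audit({"/usr/bin/rsh": 0o755}) == [("/usr/bin/rsh", "0o755", "0o0")]
assert audit({"/usr/bin/rsh": 0o000}) == []
```

Such a check corresponds to the detective side of the controls: the table's chmod actions set the policy, and the audit verifies it still holds.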
3. Existing Risk Assessment Method
Many preventive controls have been designed and developed, past and present, for risk assessment on information systems. The existing preventive controls are available for securing and improving the IT standard. There are six major phases of the value engineering method, as follows.
3.1. Existing System
The systematic and structured approach comes from the VM job plan, which consists of six phases [3], [9], [12]:
1. Information Phase: gather information to better understand the project definition (initial stage).
2. Function Analysis Phase: analyze the project to understand and clarify the required functions (RA).
3. Creative Phase: generate ideas on all the possible ways to accomplish the required functions (new product).
4. Evaluation Phase: synthesize and analyze the ideas and concepts to select the feasible ideas for development into specific value improvements.
5. Development Phase: select and prepare the best suggestions and alternative method(s) for improving the value of goods and services.
6. Presentation Phase: present the value recommendations to the project stakeholders/vendor and customer (service level).
The VM process produces the best results when applied by a multi-disciplined team with experience and expertise relevant to the type of project to be studied, spanning system engineering, reliability engineering, and security engineering.
3.2. Existing Problems in Value Engineering (Technology, Engineering, Business)
System engineering is a process that is not easy to accept under normal conditions; the problem is cumulative when the system runs several jobs and applications simultaneously on a complex IT infrastructure where millions of users access the same piece of data and information around the clock (24 x 7 x 52).

When too many packets are present in the subnet, performance degrades and data/packet congestion occurs, which in turn causes transmission errors on the network (LAN-WAN). Under high-end traffic, performance collapses completely and almost no packets are delivered. If there is insufficient memory to hold all of them, packets will be lost; a slow processor can also cause congestion, and similarly, low bandwidth can cause congestion. The OS therefore becomes starved: CPU time is heavily consumed, system throughput slows down, network resources slow down, and communication is lost.

There is no automatic protection, detection, and correction of the system components, and no balanced ratio among the kernel, processor, memory, file system [encryption key], and time slots of the high-end OS. A high-level decision process is required to provision resources such as the kernel, processor, instruction-level parallelism (SISD, SIMD, MISD, MIMD), large memory, and large encryption-key sizes for high-end business. The high-end technology should match the high quality of the business and its decisions.
3.3 Research Questions
Nowadays, the growth of third-party multiple users and of business, computer, and communication applications in the IT industry has increased the risk of theft of proprietary data and services. Operating system control and audit is a primary method of protecting, detecting, correcting, operating, and servicing complex system resources.
- Millions of multiple and multipurpose users are increasing.
- Performance, throughput, operation, and services over a complex infrastructure are decreasing.
- Multiple layering and distributed object-oriented technology (SOA) are increasing to resolve the multiple requirements of customers and clients, but meanwhile hackers, risk, uncertainty, theft, and insecurity are increasing.
- Hardware and software capabilities are increasing (n-bit processors, number of CPUs, memory).
- Business and technology are increasing, but technology and business fully depend on the political and economic conditions around the globe.
4. Methodology
Proposed Risk Assessment Method:
Many preventive methods have to be defined, designed, developed, and deployed on a complex heterogeneous platform. The proposed preventive value engineering methods make the system secure, reliable, and highly available, for the betterment of the IT standard. Seven points are presented in the development sections as follows.
4.1. Proposed SE Verification and Validation to Achieve Our Objective of Value Engineering, Optimizing Risk, Cost, and Time and Maximizing Throughput
The development of dynamic control algorithms, microprocessor (hardware) design systems, and the analysis of environmental systems also come within the purview of systems and value engineering. Systems engineering encourages the use of tools and methods to better comprehend and manage complexity in systems. Examples of these tools include system modeling, system definition, behavior, architecture, optimization, reliability, and decision analysis.

Philosophically, applying the multi-disciplinary system engineering approach to a distributed object-oriented system is inherently complex, since the characteristics and behaviors of the interactions among system components (objects) are not always immediately clear, defined, and understood. The SE method defines, designs, and develops the behavior and characteristics of such systems, subsystems (objects), and resources; capturing the interactions among them is one of the goals of systems engineering. In this way, the gap that exists among the informal requirements of clients, customers, users, and operators, the business requirements, the resource (object) requirements, and the technical specifications is successfully bridged.

We have to maintain a risk-free environment at the hardware, software, and application levels on the basis of the following data. We can update the SE parameters dynamically, anywhere and at any time, as per business and technology requirements; that is why we call this a dynamic value engineering method.
4.2. Define
We have to define, design, develop, and deploy the various methods, models, mechanisms, and services, and fix the major automated system configurations, to maintain the residual risk. Meanwhile, we have to maintain system control by applying automated methods, models, and mechanisms (M^3) and tools at the operating system level to optimize risk and maximize decision management, so as to achieve the highest business objective. We have to define and initialize the Product, Business, and Resources to measure the security domain for risk optimization and assessment.
SE: DECISION FACTOR OF THE PRODUCT SPECIFICATION
(INITIAL STAGE: PREVENTION MATRIX)

Table 2. Proposed PME Data (dynamic & derived data)

E | 128 | 256  | 512  |      |      | A = 2^n | AES (Encryption) | HA
S | 512 | 1024 | 2048 | 4096 | 8192 | S = 2^n | SSH              | HA
P | 32  | 64   | 128  | 256  | 512  | P = 2^n | Processor        | HA
M | 16  | 32   | 64   | 128  | 256  | M = 2^n | Memory (GB)      | HA
C | H   | H    | M    | M    | L    | K = 2^n | Control          | HA

(L = low risk, M = medium risk, H = high risk) (PC + DC + CC = C) [AES = k.1/R] FUZZ'S LAW

FRAMEWORK:
PRODUCT SPECIFICATION (INPUT) OF THE OPERATING SYSTEM:
The hardware designers have to decide the product specification to mitigate the risk (i.e., the target specification).
Let us consider:
PS = Product Specification, TS = Target Specification, BS = Business Specification, RS = Resource Specification.
4.3. Algorithm [Dynamic Product Development]
a. Decide the product specification (PS) as per the business requirement (BS). (INPUT)
b. Analyze the business and resource specifications.
c. Select metric elements that are dependable (variables P, M & E).
d. Select metric elements that are practicable (variables P, M & E).
e. Refine the product specification as per the target specification (TS) (i.e., RM).
f. Establish the target specification (TS) (i.e., RM) as per the business requirement. (OUTPUT)
g. Reflect on the results and process.
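Steps a-g above can be sketched as one pass of a refinement function. The function, its input fields, and the sizing rules below are illustrative assumptions keyed to the powers-of-two scales of Table 2, not the paper's actual procedure:

```python
# Illustrative sketch of the dynamic product development loop (steps a-g).

def refine_product_spec(business_spec: dict) -> dict:
    """Map a business specification (BS) to a target specification (TS)."""
    # (a, b) Decide/analyze: read the demanded throughput and security level.
    throughput = business_spec["throughput"]  # requests/sec (assumed unit)
    security = business_spec["security"]      # "low" | "medium" | "high"

    # (c, d) Select dependable, practicable metric elements P, M, E
    # on the 2^n scales of Table 2 (thresholds are assumptions).
    processor_bits = 64 if throughput < 10_000 else 128
    memory_gb = 32 if throughput < 10_000 else 128
    aes_bits = {"low": 128, "medium": 256, "high": 512}[security]

    # (e, f) Refine and establish the target specification (TS = RM).
    return {"P": processor_bits, "M": memory_gb, "E": aes_bits}

# (g) Reflect: a high-security, high-throughput business demands the top tier.
assert refine_product_spec({"throughput": 50_000, "security": "high"}) == \
    {"P": 128, "M": 128, "E": 512}
```

Because the parameters are re-read from the business specification on every call, the spec can be refreshed at any time, which is the "dynamic" aspect the method emphasizes.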
BLACK BOX BLOCK DIAGRAM:
The encryption key (E) should be defined at the initial design stage, as per the top
management decision, for better system engineering. In this way, we can improve the value
engineering.

INPUT: PS(P, M, E) [(BS, RS)] (Producer Unit) → PROCESSING (Processing Unit) → OUTPUT: TS = RM (Consumer Unit)
Figure 2. Black Box Diagram
4.4. Design Stage
We have to design a highly reliable, scalable, secure, and highly available architecture
to run complex business on a complex heterogeneous IT infrastructure, meeting the required
Relation, Function, Operation, and Service levels [6-7].
We can plan, analyse, and design the following two directed graphs based on the product
specification (PS):
IJEEI ISSN: 2089-3272
Dynamic Value Engineering Method Optimizing the Risk on Real Time … (Prashant KP)
Figure 3(a). Directed graph over PRODUCT, BUSINESS, and RESOURCES
Associative Law: (P ∪ B) ∪ R = P ∪ (B ∪ R), (P ∩ B) ∩ R = P ∩ (B ∩ R)
Distributive Law: P ∪ (B ∩ R) = (P ∪ B) ∩ (P ∪ R), P ∩ (B ∪ R) = (P ∩ B) ∪ (P ∩ R)
Figure 3(b). Directed graph over ENCRYPTION, PROCESSOR, and MEMORY
Associative Law: (E ∪ P) ∪ M = E ∪ (P ∪ M), (E ∩ P) ∩ M = E ∩ (P ∩ M)
Distributive Law: E ∪ (P ∩ M) = (E ∪ P) ∩ (E ∪ M), E ∩ (P ∪ M) = (E ∩ P) ∪ (E ∩ M)
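The associative and distributive laws stated for Figures 3(a) and 3(b) can be verified directly with Python's set operations; the element values below are arbitrary illustrations.

```python
# Verify the associative and distributive laws from Figure 3 on
# arbitrary example sets for Encryption (E), Processor (P), and
# Memory (M); the element values are illustrative only.
E, P, M = {128, 256}, {32, 64, 128}, {16, 32}

assert (E | P) | M == E | (P | M)          # associative law (union)
assert (E & P) & M == E & (P & M)          # associative law (intersection)
assert E | (P & M) == (E | P) & (E | M)    # distributive law
assert E & (P | M) == (E & P) | (E & M)    # distributive law
```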
4.5. Development
We have to go forward in finding alternative optimization processes over the specification,
process, engineering, and services for risk optimization, based on the operating system components
(P, M & E). This scalable, complex composition model will resolve our risk and
security issues on a complex real-time system serving multiple client applications, businesses,
and resources across multiple locations, vendors, and customers at any time, around the clock.

PROPOSED METHOD OF OPTIMIZATION OF OPERATING SYSTEM RISK (GREEDY METHOD)
Our abstract is satisfied theoretically, practically, analytically, and graphically by the
assumption-based data and attributes defined in this method. Value engineering is the ratio of
the functionalities of the individual components (objects) to the optimal cost. The mathematical
deduction is as follows:
Table 3. Proposed Data

01. VE = F(P, M, E, C, A) / Optimal Cost
    The equation is satisfied by the operating system; it indicates the characteristics and
    behaviour of the system. (PRIMARY DECISION)
    System Engineering: improving the performance of the individual elements Processor, Memory,
    Encryption key, Control, and Availability (P, M, E, C, A) with respect to optimal cost.
    (PRIMARY RISK ASSESSMENT)

02. VE = F(PS, BS, RS) / Minimal Cost
    As per the algorithm, where PS: Product Specification, BS: Business Specification,
    RS: Resource Specification, and TS: Target Specification. (PRIMARY DECISION)
    It helps at the define, design, development, and implementation stages. When the encryption
    key is added in the design phase, cost and time are optimized. Dynamic product development
    satisfies the automated control. (PRIMARY RISK ASSESSMENT)

03. VE = F(P, S, O, M) / Optimal Cost
    Where P: Product, S: Services, O: Operation, M: Maintenance. (SECONDARY DECISION)
    Improving the product, services, operation, and maintenance at minimum cost.
    (SECONDARY RISK ASSESSMENT)

04. R = k·(1/C), C = k·S
    Where R: Risk, S: Standard, C: Control, k: proportionality constant. (SECONDARY DECISION)
    Improve the quality, value, reliability, and standard of system engineering.
    (SECONDARY RISK ASSESSMENT)

05. C = PC + DC + CC; VE = F(P, D, C) / Minimal Cost; VE = F(TC) / MC
    Where P: Prevention, D: Detection, C: Correction; TC: Total Control, MC: Minimal Cost.
    (SECONDARY DECISION)
    Maximize prevention, detection, and correction at low cost. (SECONDARY RISK ASSESSMENT)

06. VE = k·RA
    Where RA: Risk Assessment, k: proportionality constant. Optimize the risk.
    (SECONDARY DECISION / SECONDARY RISK ASSESSMENT)

07. ∑RM = ∑(VE + SE + RE + SC); (VE ∪ SE ∪ RE ∪ SC) = ∑RM
    Composition of all four: the union of all, satisfying SC, RE, SE, and VE.
    (TOTAL DECISION / TOTAL RISK ASSESSMENT)
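As a hedged sketch, the greedy flavour of the proposed optimization can be illustrated by repeatedly choosing the candidate component upgrade with the highest value-engineering ratio (equation 01) and tracking residual risk via R = k·(1/C) (equation 04). Every figure and the scoring rule below are assumptions for illustration only.

```python
# Greedy sketch of the proposed risk optimization: among candidate
# component upgrades, repeatedly pick the one with the highest
# value-engineering ratio VE = functionality / cost (eqn 01),
# then track residual risk R = k / C (eqn 04). All figures are
# illustrative assumptions, not data from the paper.

def ve_ratio(functionality: float, cost: float) -> float:
    return functionality / cost

# (component name, functionality gained, cost)
candidates = [("processor",  8.0, 4.0),   # VE = 2.0
              ("memory",     6.0, 2.0),   # VE = 3.0
              ("encryption", 5.0, 5.0)]   # VE = 1.0

budget, control, k = 6.0, 1.0, 1.0
chosen = []
for name, f, c in sorted(candidates, key=lambda x: -ve_ratio(x[1], x[2])):
    if c <= budget:                       # greedy: best VE ratio first
        budget -= c
        control += f                      # stronger components add control
        chosen.append(name)

residual_risk = k / control               # eqn 04: R = k * (1/C)
```

With these numbers the memory and processor upgrades are taken (highest VE ratios within budget) and the encryption upgrade is deferred, leaving a lower residual risk than the baseline.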
4.6. Dynamic Composition Model: Maximize Value Engineering and Minimize Risk
MODEL & MECHANISM

TECHNICAL MODEL OF SPECIFICATION (INPUT), COMPOSITION LAW A:
  x  | P   M   E   C   RM
  P  | P   M   E   -   RM
  M  | M   P   E   -   RM
  E  | E   C   M   -   RM
  C  | C   P   M   P   RM
  RM | RM  RM  RM  RM  RM
The diagonal objects do not meet our objective.

BUSINESS MODEL OF SPECIFICATION (PROCESS), COMPOSITION LAW B:
  x  | PS  BS  RS  HA  RM
  PS | PS  BS  RS  HA  RM
  BS | PS  RS  -   -   RM
  RS | BS  HA  S   BS  RM
  HA | HA  RS  BS  -   RM
  RM | RM  RM  RM  RM  RM
The diagonal objects do not meet our purpose.

ENGINEERING MODEL OF SPECIFICATION (PROCESS), COMPOSITION LAW C:
  x  | VE  SE  RE  SC  RM
  VE | SE  RE  -   -   RM
  SE | VE  RE  -   -   RM
  RE | RE  SC  SE  -   RM
  SC | SC  RE  SE  VE  RM
  RM | RM  RM  RM  RM  RM
The diagonal objects do not meet our engineering.

SERVICE MODEL OF SPECIFICATION (OUTPUT), COMPOSITION LAW D:
  x  | P   S   O   M   RM
  P  | P   S   O   M   RM
  S  | S   M   O   -   RM
  O  | M   S   -   -   RM
  M  | M   O   S   -   RM
  RM | RM  RM  RM  RM  RM
The diagonal objects do not give any output (TS).

Where C: Control, HA: High Availability.
The row and column totals meet our target specification, output, objective, and purpose
across multiple products, businesses, resources, and applications over a heterogeneous complex
platform.
5. Results & Discussion (Services)
We gain the maximum objective through a mixed culture of theoretical as well as
practical services over a complex real-time operating system:
Maximize protection, detection, correction, operation, and services at optimal cost and
time.
Maximize the (functional) performance, integration, availability, and reliability at optimal cost.
Maximize the utilization of product, business, and resources at minimal cost, at the right time,
in the right way.
The symmetrical objects do not meet the risk mitigation; only the anti-symmetrical
objects resolve our purpose.
In this way, we can improve the business and optimize the resource and technology cost,
and meanwhile improve the performance of products and services. Therefore, value engineering
ultimately and automatically optimizes the risk management system and helps top management in
decision making. It satisfies not only value engineering but also reliability engineering as well
as security engineering.
6. Conclusion
In the final value analysis of the product, value engineering is not only beneficial but
also essential, because:
The functionality of the project (PM) is often improved while producing tremendous
savings in both initial and life-cycle cost. (Block diagram)
A second look at the design produced by the designer, re-engineering architect,
and engineers (Table 3) gives the assurance that all reasonable alternatives have been explored.
[PS(P, M, E)]
The cost estimates, reductions, and scope statements are checked thoroughly, assuring that
nothing has been omitted or underestimated. (Cost optimization)
It assures that the best value will be obtained over the life of the building. (Define, design,
development, deployment & decision (TS))
An automated system engineering approach is developed on the decision criteria when it is
important to secure as much as possible of what is wanted from each component (object) or
unit of resource used. The resource may be money, space, time, manpower, machines, materials,
energy, market, method, and so on. The system is unique in that it effectively uses both
knowledge and creativity, and provides step-by-step techniques for maximizing the benefits from
each component. It promotes the development of alternatives suitable for the future as well as
the present. This is accomplished by identifying and studying each function that is wanted by the
customers, clients, or users, and then applying knowledge and creativity to achieve the desired
functions. Resources are converted into costs to achieve direct, meaningful comparisons. By
using this value engineering method, a 30% to 40% reduction in the required resources often
results.
The dynamic value engineering methodology helps any organization operate more
effectively in local, national, and international markets, at any time and any place around the
clock, as follows:
Minimizing cost and risk.
Maximizing profit, ROI, and TCO.
Improving functionality, quality, and decisions (TQM).
Maximizing production, sales, market share, and capital generation.
Optimizing time and maximizing the utilization of Man, Machine, Material, Market, Money &
Method (M^5).
Solving multiple problems at the right time with optimal cost.
Utilizing overall resources more effectively at the right time and right place.
Limitations of VE:
Changes can be executed at the initial stage only (PS), as already defined in the block
diagram.
It requires specific technical knowledge. [SE] (Tables 2 & 3)
Value engineering provides accountability for the individual functionality of each
component of the real-time operating system across applications, system software, servers, and
networks. This accountability is accomplished through optimization models and mechanisms that
require accountability, availability, reliability, and integrity of the automated optimization
control functions, which is called Security Engineering, Reliability Engineering, and System
Engineering. That is why this value engineering works, practically and theoretically, as a
process of risk optimization and a decision-making criterion for technology management systems
around the globe.
Recommendations
Future advancement of this work:
We have to keep balancing and managing the workload ratio among the Product, Business,
and Resources over multiple applications, networks, and infrastructure by applying the value
engineering model, method, and mechanism.
We have to develop distributed, object-oriented value engineering methods for multiple
businesses, products, and resources over multiple applications and heterogeneous platforms, as
per customer requirements.
References
[1] Bernard Kolman. Discrete Mathematical Structures. New Delhi, India: Pearson Education India (PHI), 2007.
[2] Bruce Schneier. Applied Cryptography. New Delhi, India: Wiley Publishing Inc., 1996.
[3] B Mahadevan. Operations Management. New Delhi, India: Pearson Education India (PHI), 2008.
[4] CISA Review Manual. ISACA, USA, 2003.
[5] CISSP Exam Cram. Coriolis Group Books. New Delhi, India: Dreamtech, 2002.
[6] Edgar G. Discrete Mathematics with Graph Theory. New Delhi, India: Pearson India (PHI), 2007.
[7] Joe L Mott. Discrete Mathematics for Scientists and Mathematicians. New Delhi, India: Pearson Education India (PHI), 2008.
[8] John B Kramer. The CISA Prep Guide. New Delhi, India: Wiley Publishing Inc., 2003.
[9] Kai Hwang. Advanced Computer Architecture. New Delhi, India: Tata McGraw Hill, 2008.
[10] O'Reilly. Essential System Administration. O'Reilly Media, USA, 1995.
[11] Pressman. Software Engineering. New Delhi, India: Tata McGraw Hill, 2001.
[12] Richard B Chase, Robert, Nicholas, Nitin. Operations Management. New Delhi, India: Tata McGraw Hill, 2006.
[13] Shon Harris. CISSP Exam Study Guide. New Delhi, India: Dreamtech, 2002.
[14] Shon Harris. Security Management Practices. New Delhi, India: Wiley Publishing Inc., 2002.
[15] Sumitabha Das. UNIX System V: UNIX Concepts & Applications. Delhi, India: Tata McGraw Hill, 2009.
[16] Sun Microsystems. UNIX Sun Solaris System Administration. USA, 2002.
[17] William Stallings. Cryptography and Network Security. New Delhi, India: Pearson India, 2006.
[18] Ron Weber. Information Systems Control & Audit. New Delhi, India: Pearson Education India (PHI), 2002.