Information flow control (IFC) is useful for preventing information leakage during software execution. Our
survey reveals that no existing IFC model is applied across the entire software development process. Applying an IFC
model across the entire process offers the following benefits: (1) the viewpoints of all
stakeholders (e.g., customers and analysts) can be included, and (2) the IFC model helps correct
statements that may leak information in every development phase. Besides the lack of a model
covering the entire software development process, we also failed to identify an IFC model that can reduce
runtime overhead. We therefore designed a new IFC model named PrcIFC
(process IFC). PrcIFC is applied across the entire software development process and is
disabled after software testing to reduce runtime overhead.
APPLICATION WHITELISTING: APPROACHES AND CHALLENGES (IJCSEIT Journal)
Malware is a continuously evolving problem for enterprise networks and home computers. Even security-aware
users running up-to-date security solutions fall into the trap of zero-day attacks. Moreover, blacklisting-based
solutions suffer from false positives and false negatives. From this, the idea of application
whitelisting was coined among security vendors, and various solutions evolved from the same underlying
technology idea. This paper details design and implementation approaches and
discusses the challenges in developing an effective whitelisting solution.
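The core idea the abstract describes, allowing only known-good binaries to run, can be sketched with a hash-based allow list. This is an illustrative minimal sketch, not any vendor's implementation; real products also verify code signatures and handle software updates.

```python
import hashlib

# Minimal sketch of hash-based application whitelisting (illustrative;
# names and the in-memory set are assumptions for the example).
ALLOWED_SHA256 = set()

def register(executable_bytes: bytes) -> str:
    """Add a known-good binary's hash to the whitelist."""
    digest = hashlib.sha256(executable_bytes).hexdigest()
    ALLOWED_SHA256.add(digest)
    return digest

def may_execute(executable_bytes: bytes) -> bool:
    """Allow execution only if the binary's hash is on the whitelist."""
    return hashlib.sha256(executable_bytes).hexdigest() in ALLOWED_SHA256

register(b"trusted-binary-contents")
assert may_execute(b"trusted-binary-contents")
assert not may_execute(b"unknown-or-tampered-binary")
```

Unlike a blacklist, an unknown binary defaults to "deny", which is why whitelisting avoids the false-negative problem at the cost of maintaining the allow list.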
Industrial perspective on static analysis (Chirag Thumar)
by B.A. Wichmann, A.A. Canning, D.L. Clutterbuck, L.A. Winsborrow, N.J. Ward and D.W.R. Marsh
Static analysis within industrial applications provides a means of gaining higher assurance for critical software. This survey notes several problems, such as the lack of adequate standards, difficulty in assessing benefits, validation of the model used, and acceptance by regulatory bodies. It concludes by outlining potential solutions and future directions.
SOURCE CODE ANALYSIS TO REMOVE SECURITY VULNERABILITIES IN JAVA SOCKET PROGR... (IJNSA Journal)
This paper presents the source code analysis of a file reader server socket program (connection-oriented
sockets) developed in Java, to illustrate the identification, impact analysis, and solutions to remove five
important software security vulnerabilities, which if left unattended could severely impact the server
running the software and the network hosting the server. The five vulnerabilities we study in this
paper are: (1) Resource Injection, (2) Path Manipulation, (3) System Information Leak, (4) Denial of
Service, and (5) Unreleased Resource. We analyze why each of these
vulnerabilities occurs in the file reader server socket program, discuss the impact of leaving them
unattended in the program, and propose solutions to remove each of these vulnerabilities from the
program. We also analyze potential performance tradeoffs (such as an increase in code size and loss of
features) that could arise while incorporating the proposed solutions in the server program. The
proposed solutions are generic in nature and can be suitably modified to correct similar
vulnerabilities in software developed in any other programming language. We use the Fortify Source
Code Analyzer to conduct the source code analysis of the file reader server program, implemented on a
Windows XP virtual machine with the standard J2SE v.7 development kit.
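The Unreleased Resource class generalizes beyond Java. The paper's fixes are in Java; as a hedged sketch of the same pattern in Python, a context manager guarantees deterministic release on every code path, analogous to Java's try-with-resources. The stream and helper names here are illustrative, not from the paper.

```python
from contextlib import closing
import io

def read_first_line(stream_factory):
    # closing() ensures the resource is released on every path,
    # including when readline() raises, avoiding the
    # "Unreleased Resource" defect discussed above.
    with closing(stream_factory()) as stream:
        return stream.readline()

# Usage with an in-memory stream standing in for a socket/file:
line = read_first_line(lambda: io.StringIO("hello\nworld\n"))
assert line == "hello\n"
```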
MANCAFChat - An Application to Evaluate MANCAF Framework (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
Testing desktop application police station information management system (Salam Shah)
Police stations are important to society in maintaining the country's law and order. In Pakistan, police stations manage criminal records and information manually. We previously developed and improved a desktop application for keeping the records of the different registers of police stations. Police station data is sensitive and needs to be handled within secure, fully functional software to prevent any unauthorized access. For the proper utilization of the newly developed software, it is necessary to test and analyze the system before deployment into the real environment. In this paper, we perform the testing of the application. For this purpose, we use Ranorex, an automated testing tool, for functional and performance testing, and report the results of the test cases as pass or fail.
EVALUATION OF SOFTWARE DEGRADATION AND FORECASTING FUTURE DEVELOPMENT NEEDS I... (ijseajournal)
This article is an extended version of a previously published conference paper. In this research, JHotDraw (JHD), a well-tested and widely used open-source Java-based graphics framework developed with best software engineering practices, was selected as the test suite. Six versions of this software were profiled and data collected dynamically, from which four metrics, namely (1) entropy, (2) software maturity index, (3) COCOMO effort, and (4) COCOMO duration, were used to analyze software degradation and maturity level, and the obtained results were used as input to time series analysis in order to predict the effort and duration that may be needed for the development of future versions. The novel idea is that historical evolution data is used to project, predict, and forecast resource requirements for future development. The technique presented in this paper gives software development decision makers a viable tool for planning and decision making.
Taint analysis is a widely used approach for analysing software for security purposes. In taint analysis, taint tags are attached to data entering the application from sensitive sources, and the propagation of the tainted data is then monitored carefully. Taint analysis can be done in two ways: static taint analysis, where the analysis is conducted without executing the program, and dynamic taint analysis, where the tainted data is monitored during program execution. This paper reviews the taint analysis technique, with a focus on dynamic taint analysis. In addition, some existing taint analysis tools and their application areas are reviewed. Finally, the paper summarises the shortcomings associated with each of the tools.
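The tag-and-propagate idea behind dynamic taint analysis can be sketched in a few lines. This is a toy illustration, not how real taint engines (which work at the instruction or bytecode level) are built; the class and function names are assumptions for the example.

```python
# Toy dynamic taint tracking: values from untrusted sources carry a
# taint mark that propagates through operations; sinks reject tainted input.

class Tainted(str):
    """A string subclass marking data from an untrusted source."""

def source(user_input: str) -> Tainted:
    # Tag data at the sensitive source.
    return Tainted(user_input)

def concat(a: str, b: str) -> str:
    out = a + b
    # Propagation rule: the result is tainted if any operand was.
    if isinstance(a, Tainted) or isinstance(b, Tainted):
        return Tainted(out)
    return out

def sink(query: str) -> None:
    # A sensitive sink (e.g. a database query) refuses tainted data.
    if isinstance(query, Tainted):
        raise ValueError("tainted data reached a sensitive sink")

q = concat("SELECT * FROM t WHERE id=", source("1 OR 1=1"))
assert isinstance(q, Tainted)
try:
    sink(q)
    reached = True
except ValueError:
    reached = False
assert not reached
```

Static taint analysis applies the same source/propagation/sink reasoning, but over the program text rather than at runtime.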
Defect Management Practices and Problems in Free/Open Source Software Projects (Waqas Tariq)
With the advent of the Free/Open Source Software (F/OSS) paradigm, a large number of projects are evolving that use F/OSS infrastructure and development practices. The Defect Management System is an important component of F/OSS infrastructure, maintaining defect records and tracking their status. Defect data comprising more than 60,000 defect reports from 20 F/OSS projects is analyzed from various perspectives, with special focus on evaluating efficiency and effectiveness in resolving defects and determining responsiveness towards users. Major problems and inefficiencies in defect management among F/OSS projects are identified. A process is proposed to distribute roles and responsibilities among F/OSS participants, which can help F/OSS projects improve the effectiveness and efficiency of defect management and hence assure better quality of F/OSS projects.
Software Defect Prediction Techniques in the Automotive Domain: Evaluation, Selection and Adoption (RAKESH RANA)
Software Defect Prediction Techniques in the Automotive Domain: Evaluation, Selection and Adoption
PhD Defense, Göteborg, Sweden
Feb, 2015
Get full text of publication at:
http://rakeshrana.website/index.php/work/publications/
Proposed an Integrated Model to Detect The Defect in Software Development Pro... (Waqas Tariq)
Currently, software quality assurance is not fully applied in the software industry, which leads to challenges, especially concerning cost and time consumption. The success of a software development project depends on applying quality assurance standards, starting before the project and continuing through it until the product reaches the user. The aim of this model is to produce not only a software development project without defects but also customer acceptance of and satisfaction with the software. Thus, to prevent software development project failure, defects should be detected at an early stage. This study deals with quality assurance inspection and testing across all stages of software development projects, and proposes a quality assurance inspection and testing model for the software development process, known as the "Integrated Model".
2016 state of industrial internet application development.
Study Highlights
This study, carried out in collaboration with GE Digital, surveyed the existing industrial developer landscape to better understand who industrial developers are, how they allocate their time and resources when developing applications, the challenges they face in the development process, and the technological opportunities available to them. The study, a survey of over 1,200 industrial developers, concludes that there is a need within the industrial developer community for focused tools, and that these developers would benefit significantly from using PaaS and infrastructures such as Predix. Relevant findings include the following:
Trusted Computing intends to make the PC platform trustworthy so that a user can have a level of trust when
working with it. To build this trust, the Trusted Computing Group (TCG) specified the Trusted Platform Module (TPM), an integral part of the trusted computing base (TCB), to provide root(s) of trust. The TCG further defined Dynamic Root of Trust for Measurement (DRTM) in its Trusted Computing specifications as a technology for measured platform initialization while the system is in a running state. The DRTM approach contrasts with Static Root of Trust for Measurement, where measurements are taken during the boot process. In this study, we list and discuss the publicly available open-source solutions that, since the technology was first introduced, either implement DRTM or are applications of DRTM-based solutions. Further, the challenges faced by DRTM technology, along with the authors' observations, are listed.
AN EFFICIENT ROUTING PROTOCOL FOR MOBILE AD HOC NETWORK FOR SECURED COMMUNICA... (pijans)
Secure and reliable communication is a challenging task in mobile ad hoc networks: through node mobility, network devices can be compromised by attacks and data can be lost. To prevent attacks and ensure reliable communication, various authors have proposed secured routing protocols such as SAODV and SBRP (secured backup routing protocol). These methods work through route discovery and route maintenance, and both phases require considerable power; device power decreases during these processes and the network lifetime expires. In this paper, we modify a secured stateless protocol for secure routing and minimize power utilization during path discovery and establishment. Group nodes are authenticated with a group signature technique, and a sleep-mode threshold concept is used for power minimization. Our proposed technique is simulated in ns-2 and, compared to other routing protocols, gives better performance in terms of energy consumption and network throughput.
USING LATTICE TO DYNAMICALLY PREVENT INFORMATION LEAKAGE FOR WEB SERVICES (ijsptm)
The use of web services is becoming important. Since web services may be provided by unknown providers, many researchers have designed mechanisms to ensure secure access to web services. Existing mechanisms generally decide statically whether a requester can invoke a web service, but ignore the importance of dynamically preventing information leakage during web service execution. This paper proposes a lattice-based information flow control model, WSIFC (web service information flow control), to dynamically prevent information leakage. It uses simple rules to monitor the flows of sensitive information during web service execution.
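The abstract does not give WSIFC's actual rules; as a generic illustration of lattice-based flow control, information is permitted to flow only upward in a security lattice. A minimal sketch with an assumed two-level lattice:

```python
# Illustrative lattice-based flow check (not the paper's WSIFC rules):
# a flow from label `src` to label `dst` is allowed only if
# src <= dst in the security lattice.
LEVELS = {"public": 0, "secret": 1}  # assumed two-level lattice

def flow_allowed(src: str, dst: str) -> bool:
    return LEVELS[src] <= LEVELS[dst]

assert flow_allowed("public", "secret")      # upgrading is fine
assert not flow_allowed("secret", "public")  # would leak information
```

A dynamic monitor applies a check like this at each data transfer during service execution, rather than deciding everything once at invocation time.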
AN EFFECTIVE SEMANTIC ENCRYPTED RELATIONAL DATA USING K-NN MODEL (ijsptm)
Data exchange and data publishing are becoming an important part of business and academic practice. Data owners need to maintain rights over the datasets they share. A right-protection mechanism can establish ownership of shared data without impairing its use in a wide range of machine learning and mining tasks. The approach provides two algorithms: one based on the nearest neighbors (NN) and one that preserves the minimum spanning tree (MST). The k-NN protocol guarantees that relations between objects remain unaltered. The algorithms provide both right protection and utility preservation. The right-protection scheme is based on watermarking, and the watermarking methodology preserves distance relationships.
A mobile ad hoc network is a collection of wireless mobile nodes that communicate with one another without any fixed networking infrastructure. Since the nodes in this network are mobile, power management and energy conservation become very critical. The nodes have limited battery power and limited computational power with a small amount of memory. Such nodes must conserve energy during routing to prolong their usefulness and increase network lifetime. This paper proposes a scheme that takes power awareness into consideration during route selection. The scheme observes the power status of each node in the topology and further ensures the fast selection of routes with minimal effort and faster recovery. The scheme is incorporated into the AODV protocol, and the performance has been studied through simulation over NS-2.
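The abstract does not specify the exact selection rule, but a common power-aware heuristic is to prefer the candidate route whose weakest node has the most residual energy, so no single node is drained prematurely. A hedged sketch of that idea (node names and energy values are illustrative):

```python
# Power-aware route selection sketch (illustrative heuristic, not
# necessarily the paper's rule): maximize the bottleneck residual energy.
def select_route(routes, residual_energy):
    """routes: list of node-id lists; residual_energy: node-id -> joules."""
    return max(routes, key=lambda r: min(residual_energy[n] for n in r))

energy = {"A": 9.0, "B": 2.0, "C": 7.0, "D": 6.0}
routes = [["A", "B"],       # bottleneck is B at 2.0 J
          ["A", "C", "D"]]  # bottleneck is D at 6.0 J
assert select_route(routes, energy) == ["A", "C", "D"]
```

The longer route wins here because hop count matters less than avoiding a nearly drained node, which is the trade-off power-aware routing makes.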
NETWORK INTRUSION DETECTION AND COUNTERMEASURE SELECTION IN VIRTUAL NETWORK (... (ijsptm)
Intrusion into a network or a system is a problem today as the trend of successful network attacks continues to rise. Intruders can exploit vulnerabilities of a network system to gain access and deploy viruses or malware, or mount attacks such as Denial of Service (DoS). In this work, a frequency-based Intrusion Detection System (IDS) is proposed to detect DoS attacks. Frequency data is extracted from the time-series data of the traffic flow using the Discrete Fourier Transform (DFT). An algorithm is developed for anomaly-based intrusion detection with fewer false alarms, which further detects known and unknown attack signatures in a network. The frequency content of virus or malware traffic is inconsistent with the frequency content of legitimate traffic. A Centralized Traffic Analyzer Intrusion Detection System, called CTA-IDS, is introduced to further detect inside attackers in a network. The strategy is effective in detecting abnormal content in traffic data as information passes from one node to another, and also detects known attack signatures and unknown attacks. The approach is tested by running artificial network intrusion data in simulated networks using the Network Simulator 2 (NS2) software.
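The frequency-based idea can be illustrated with a naive DFT over a traffic time series: legitimate traffic with periodic structure has energy at nonzero frequencies, while a flat flood concentrates its energy in the DC bin. The traffic values and threshold below are assumptions for the example, not the paper's data.

```python
import cmath

# Naive O(n^2) DFT, enough for a short illustrative series.
def dft_magnitudes(x):
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

# Legitimate traffic: a slow periodic pattern. Attack: a constant flood.
normal = [5, 7, 5, 3, 5, 7, 5, 3]   # packets per interval (assumed)
flood  = [50] * 8

def nonzero_energy(x):
    mags = dft_magnitudes(x)
    return sum(m * m for m in mags[1:])  # energy outside the DC bin

# The flood is flat, so virtually all its energy sits in the DC bin,
# while the periodic traffic has energy at nonzero frequencies.
assert nonzero_energy(normal) > 1.0
assert nonzero_energy(flood) < 1e-6
```

A detector built on this contrast flags windows whose spectral shape deviates from the learned baseline, which is the anomaly-based detection the abstract describes.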
Verification of the protection services in antivirus systems by using nusmv m... (ijfcstjournal)
In this paper, a model of the protection services in an antivirus system is proposed. The antivirus system's behavior is separated into preventive and control behaviors. We extract the properties expected of the antivirus system model from the control behavior, in the form of CTL and LTL temporal logic formulas. To implement the behavior models of the antivirus system, the ArgoUML tool and the NuSMV model checker are employed. The results show that the approach can check fairness, reachability, and deadlock freedom, and that some properties of the proposed model are verified using the NuSMV model checker.
IT 8003 Cloud Computing - For this activi.docx (vrickens)
IT 8003 Cloud Computing
For this activity you need to divide your class into groups
Group Activity 1 “SuperTAX Software”
SuperTax Overview
Did you know President Abraham Lincoln, one of America's most beloved leaders, also instituted one of its least liked obligations - the income tax? In this brief history of taxes, see the historical events which shaped income taxes in the United States today.
SuperTax is an American tax preparation software package developed in the mid-1980s.
SuperTax Corporation is headquartered in Mountain View, California.
Group Activity 1 “SuperTAX Software”
SuperTax Information
Desktop software.
Supports MS Windows and Mac OS.
Distribution method: CD/DVD media format.
Different versions:
SuperTAX Basic, Deluxe, Premier, and Home & Business.
Used by millions of users and organizations.
Group Activity 1 “SuperTAX Software”
SuperTAX Project
SuperTAX has hired your group as consultants to move its desktop software to traditional IT-hosted software, available online.
Group Activity 1 “SuperTAX Software”
For Discussion:
Identify the challenges your team will encounter when moving SuperTAX Software to the new platform.
Prepare a presentation for the class.
In your group you will need to define positions.
For example:
Project Manager, Senior Project Network, Senior Project Engineer, etc.
Group Activity 1 “SuperTAX Software”
Infrastructure
Software Development
Software Testing
Marketing & Business Model
Project Management
CHALLENGES
Group Activity 1 “SuperTAX Software”
Infrastructure
No more testing on a single machine (the CD/DVD format model).
Testing in a production cluster (20-30 users?).
A larger cluster can bring problems (1000s of users).
Testing must be done for different clients (mobile, desktop, OS).
A small performance bottleneck can mean slow performance.
CHALLENGES
Group Activity 1 “SuperTAX Software”
Marketing & Business Model
One-time fixed cost vs. subscription model:
before, a CD was sold; now, a subscription model.
Maintenance and replacement of cooling, power, and servers is required.
CHALLENGES
Group Activity 1 “SuperTAX Software”
Project Management
A project can take many months to years for the software development cycle.
Which model is appropriate for a hosted application? (Agile vs. waterfall)
Ability to try new features faster.
CHALLENGES
RUNNING HEAD: INTERSESSION 5 FINAL PROJECT PROJECTION
INTERSESSION 5 FINAL PROJECT PROJECTION
Shalini Kantamneni
Ottawa University
Intersession 5 Final Project Projection
The Design Process
This process involves the formulation of a model to be used in deriving a comprehensive cloud application. In this case, the model-view-controller (MVC) design pattern will be used. This design pattern partitions the logic of the application into three distinct domains that are interconnected to provide a working cloud application (Jailia et al., 2016). ...
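The three-domain partition described above can be sketched minimally. This is a generic MVC illustration under assumed names, not the project's actual design:

```python
# Minimal model-view-controller sketch: each class owns one concern.
class Model:                      # domain state and rules
    def __init__(self):
        self.items = []
    def add(self, item):
        self.items.append(item)

class View:                       # presentation only
    @staticmethod
    def render(items):
        return ", ".join(items) if items else "(empty)"

class Controller:                 # mediates between input, model, and view
    def __init__(self, model, view):
        self.model, self.view = model, view
    def handle_add(self, item):
        self.model.add(item)
        return self.view.render(self.model.items)

app = Controller(Model(), View())
assert app.handle_add("invoice") == "invoice"
assert app.handle_add("report") == "invoice, report"
```

In a cloud application the same partition typically maps to a persistence layer (model), templates or a client UI (view), and request handlers (controller).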
SECURITY FOR DEVOPS DEPLOYMENT PROCESSES: DEFENSES, RISKS, RESEARCH DIRECTIONS (ijseajournal)
DevOps is an emerging collection of software management practices intended to shorten time to market for
new software features and to reduce the risk of costly deployment errors. In this paper we examine the security implications of two of the key DevOps practices: automation of the deployment pipeline using a deployment toolchain, and infrastructure-as-code to specify the environment of the deployed software. We focus on identifying what changes when an organization moves from manual deployments to DevOps automated deployment processes. We reviewed the literature and conducted three case studies using simple configurations of common DevOps tools. This allowed us to identify:
• Positive influences on security where automation enhances defenses.
• Negative influences, where automation enables different kinds of attacks and increases the attack
surface.
• Research directions that look promising to support this new approach to software management.
• Recommendations for DevOps adopters.
Here is a paper on the PLC Checker software by Itris Automation Square, published in the French journal "Mesures": "La qualité des programmes vérifiée par leurs concepteurs" ("Program quality verified by their designers").
Enjoy the reading!
Find us at http://www.itris-automation.com/
Contact us at commercial@itris-automation.com for more information.
A UML Profile for Security and Code Generation IJECEIAES
Recently, many research studies have suggested the integration of safety engineering at an early stage of modeling and system development using Model-Driven Architecture (MDA). This concept consists in deploying the UML (Unified Modeling Language) standard as aprincipal metamodel for the abstractions of different systems. To our knowledge, most of this work has focused on integrating security requirements after the implementation phase without taking them into account when designing systems. In this work, we focused our efforts on non-functional aspects such as the business logic layer, data flow monitoring, and high-quality service delivery. Practically, we have proposed a new UML profile for security integration and code generation for the Java platform. Therefore, the security properties will be described by a UML profile and the OCL language to verify the requirements of confidentiality, authorization, availability, data integrity, and data encryption. Finally, the source code such as the application security configuration, the method signatures and their bodies, the persistent entities and the security controllers generated from sequence diagram of system’s internal behavior after its extension with this profile and applying a set of transformations.
A VNF modeling approach for verification purposesIJECEIAES
Network Function Virtualization (NFV) architectures are emerging to increase networks flexibility. However, this renewed scenario poses new challenges, because virtualized networks, need to be carefully verified before being actually deployed in production environments in order to preserve network coherency (e.g., absence of forwarding loops, preservation of security on network traffic, etc.). Nowadays, model checking tools, SAT solvers, and Theorem Provers are available for formal verification of such properties in virtualized networks. Unfortunately, most of those verification tools accept input descriptions written in specification languages that are difficult to use for people not experienced in formal methods. Also, in order to enable the use of formal verification tools in real scenarios, vendors of Virtual Network Functions (VNFs) should provide abstract mathematical models of their functions, coded in the specific input languages of the verification tools. This process is error-prone, time-consuming, and often outside the VNF developers’ expertise. This paper presents a framework that we designed for automatically extracting verification models starting from a Java-based representation of a given VNF. It comprises a Java library of classes to define VNFs in a more developer-friendly way, and a tool to translate VNF definitions into formal verification models of different verification tools.
Abstract - Various aspects of three proposed architectures for distributed software are examined. A Crucial need to
create an ideal model for optimal architecture which meets the needs of the organization for flexibility, extensibility
and integration, to fulfill exhaustive performance for potential talents processes and opportunities in the corporations
a permanent and ongoing need. The excellence of the proposed architecture is demonstrated by presenting a rigor scenario based proof of adaptively and compatibility of the architecture in cases of merging and varying organizations, where the whole structure of hierarchies is revised.
Keywords: ERP, Data-centric architecture, architecture Component-based, Plug in architecture, distributed systems
Automated server-side model for recognition of security vulnerabilities in sc...IJECEIAES
With the increase of global accessibility of web applications, maintaining a reasonable security level for both user data and server resources has become an extremely challenging issue. Therefore, static code analysis systems can help web developers to reduce time and cost. In this paper, a new static analysis model is proposed. This model is designed to discover the security problems in scripting languages. The proposed model is implemented in a prototype SCAT, which is a static code analysis tool. SCAT applies the phases of the proposed model to catch security vulnerabilities in PHP 5.3. Empirical results attest that the proposed prototype is feasible and is able to contribute to the security of real-world web applications. SCAT managed to detect 94% of security vulnerabilities found in the testing benchmarks; this clearly indicates that the proposed model is able to provide an effective solution to complicated web systems by offering benefits of securing private data for users and maintaining web application stability for web applications providers.
A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE...IJNSA Journal
This study examines the traditional approach to software development within the United Kingdom Government and the accreditation process. Initially we look at the Waterfall methodology that has been used for several years. We discuss the pros and cons of Waterfall before moving onto the Agile Scrum methodology. Agile has been adopted by the majority of Government digital departments including the Government Digital Services. Agile, despite its ability to achieve high rates of productivity organized in short, flexible, iterations, has faced security professionals’ disbelief when working within the U.K. Government. One of the major issues is that we develop in Agile but the accreditation process is conducted using Waterfall resulting in delays to go live dates. Taking a brief look into the accreditation process that is used within Government for I.T. systems and applications, we focus on giving the accreditor the assurance they need when developing new applications and systems. A framework has been produced by utilising the Open Web Application Security Project’s (OWASP) Application Security Verification Standard (ASVS). This framework will allow security and Agile to work side by side and produce secure code.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
CONTROLLING INFORMATION FLOWS DURING SOFTWARE DEVELOPMENT
International Journal of Security, Privacy and Trust Management (IJSPTM) Vol 5, No 3, August 2016
DOI : 10.5121/ijsptm.2016.5301
CONTROLLING INFORMATION FLOWS DURING SOFTWARE DEVELOPMENT
Shih-Chien Chou
Department of Computer Science and Information Engineering, National Dong Hwa
University, Taiwan
ABSTRACT
Information flow control (IFC) is useful in preventing information leakage during software execution. Our
survey reveals that no existing IFC model is applied to the entire software development process. Applying
an IFC model to the entire software development process offers the following features: (1) the viewpoints
of all stakeholders (e.g., customers and analysts) can be included, and (2) the IFC model helps correct
statements that may leak information during every development phase. Besides the absence of an IFC
model applied to the entire software development process, we also failed to identify an IFC model that
reduces runtime overhead. We therefore designed a new IFC model named PrcIFC (process IFC). PrcIFC
is applied to the entire software development process and is disabled after software testing to reduce
runtime overhead.
KEYWORDS
Software security, information flow control, software development process, runtime overhead reduction
1. INTRODUCTION
Information security covers many issues such as cryptography, authentication, and access control.
Information flow control (IFC) is a branch of access control that prevents information leakage during
software execution. Information leakage occurs when persons or other software illegally obtain sensitive
information from a software system. In general, persons can obtain information from files or output
devices, and software can obtain information from files. Information leakage may thus happen when
sensitive information is output.
In the early days, IFC was applied within single systems. In recent years, it has been applied to
operating systems [1-3], web services [4-7], and even cloud applications [8-9]. In other words,
IFC has been applied across the entire Internet, and it is thus essential to information security. We have
been involved in IFC research for years and identified that no IFC model is applied to the entire
software development process. Applying IFC to the entire software development process offers
the following features: (1) the viewpoints of all stakeholders (e.g., customers and analysts) can be
included, and (2) the IFC model helps correct statements that may leak information during
every development phase. We also identified that no IFC model takes runtime overhead reduction
into consideration, where the overhead is induced by embedding IFC models in software. We
thus designed a new IFC model to overcome the problems mentioned above. The design is based
on the following considerations:
a. Embedding the IFC model into software specifications and design documents. Traditional
models are embedded in software only, which may exclude the viewpoints of some
stakeholders.
b. Executing the specifications and design documents in which the IFC model is embedded.
This execution allows the IFC correctness of specifications and design documents to be
checked.
c. Embedding the IFC model in the software during testing. The testing thus covers the
correctness of both functions and information flows. After testing, the model is disabled to
remove the runtime overhead induced by the IFC model.
As described in item c above, the IFC model proposed in this paper is disabled after testing.
Nevertheless, software that passes testing is not guaranteed to be correct. Therefore, the operating
system should intercept software output to ensure security (the OS intercepts only output
information, because only output information may be leaked to persons or other software). In our
research, we design the IFC model and leave OS interception as future work. Since the model
is applied to the entire software development process, we name it PrcIFC (process IFC). The core
of PrcIFC is the same as that of NetIFC [10], which we designed previously. When designing PrcIFC,
we ignored the effects of viruses and Trojan horses because they belong to other research areas.
During the development of PrcIFC, we identified the following facts.
a. Since only output information may be leaked (to persons or other programs), only output
statements need to be controlled. For other statements, the join operation [11] should be
used to adjust variable sensitivity after execution. This adjustment prevents possible
information leakage when the variable is output later.
b. Information should be separated into different groups for compatibility. For example, prices
in USD and prices in EUR are incomparable.
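Fact b can be illustrated with a small compatibility check. The representation (a set of group tags attached to each value) is our assumption for the sketch, not the paper's notation:

```python
# Sketch of fact b: information carries a set of groups, and two pieces
# of information are compatible only if their group sets intersect.
# The dict-based representation is illustrative only.

def compatible(info_a, info_b):
    """True if the two pieces of information share at least one group."""
    return bool(info_a["gp"] & info_b["gp"])

price_usd = {"slv": 2, "gp": {"USD"}}
price_eur = {"slv": 2, "gp": {"EUR"}}
budget    = {"slv": 1, "gp": {"USD", "EUR"}}

compatible(price_usd, price_eur)  # False: USD and EUR prices are incomparable
compatible(price_usd, budget)     # True: both carry the USD group
```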
According to these facts, PrcIFC controls output statements and uses join operations to adjust
information sensitivity for other statements. The PrcIFC model presented in this paper offers the
following major features:
a. PrcIFC is applied to the entire software development process. Therefore, it does not exclude
the viewpoints of any stakeholder and helps correct statements that may leak information
during every phase of development.
b. PrcIFC is removed when the software goes online, which removes its runtime overhead.
2. RELATED WORK
The traditional lattice-based model [12-13] is a MAC (mandatory access control) model, which is
criticized as too restrictive. The decentralized label model [11, 14] uses labels to mark the security
levels of variables; a label is composed of policies to be simultaneously obeyed. The model in [15]
uses the ACLs of objects to compute the ACLs of executions.

Trust-Serv [16] uses state machines to dynamically choose web services at run time. It
uses trust negotiation [17] to select web services that can be invoked; the kernel component for
determining whether a web service can be invoked is the credential. The model proposed in [18] uses
digital credentials for negotiation and defines strategies for negotiation policies. SCIFC [6] uses
various mechanisms and algorithms to determine whether a service chain can be successfully
invoked.
Flume [1] is a decentralized information flow control (DIFC) model for operating systems. It
tracks information flows in a system using tags and labels, with control granularity down to the
process level. Secrecy tags prevent information leakage and integrity tags prevent information
corruption. Flume also prevents information from being leaked to untrusted channels. The function
of Laminar [2] is similar to that of Flume.
The model proposed in [19] uses X-GTRBAC [20] to control access to web services. X-
GTRBAC can be used in heterogeneous and distributed sites. Moreover, it applies TRBAC [21]
to control time-related factors. The model in [22] uses the RBAC (role-based access control)
concept [23-24] to define policies for accessing a web service. It is a two-level mechanism: the
first level checks the roles assigned to requesters and web services, and the second level uses
parameters as attributes and assigns permissions to the attributes. An authorized requester can
invoke a web service only when it possesses the permissions to access the attributes.
The model in [25] identifies dependencies among the I/O parameters of services to decide whether
service invocations will leak information. The model in [26] offers functions to check intra-component
information flows through the JIF language [27]. The model in [8] applies the Chinese Wall model [28]
to control information flows at the IaaS level; it cannot control the information flows of services at the
SaaS level. In the recent survey [9], the authors especially emphasize the importance of data privacy
in cloud computing environments. They believe that information flow control is a good solution to the
problem, but the article does not propose a concrete information flow control model.

According to our survey, no research applies IFC to the entire software development process, and
almost none discusses runtime overhead.
Figure 1. Design philosophy of PrcIFC
3. PRCIFC
Figure 1 depicts the design philosophy of PrcIFC. Before describing Figure 1, we assume that
software development tools, such as the UML development platform, are trusted. Therefore,
PrcIFC need not handle problems possibly induced by software development tools.

Figure 1 shows that we embed PrcIFC in specifications and then execute them. To execute a
specification, we write it in our rapid prototyping language [29]; that is, we embed PrcIFC in our
prototyping language. If a specification passes the tests of both functional and IFC requirements, it
can proceed to design; otherwise, the requirements should be re-analyzed. During design, we write
the design documents in the target language (such as C) and embed PrcIFC in them. The documents
are then executed and tested. If the design documents pass the tests of both functional and IFC
requirements, they can proceed to implementation; otherwise, the design activity should be repeated.
During implementation, PrcIFC is embedded in the programs, which are then executed. If the
programs pass the tests of both functional and IFC requirements, they are released to the customers
as a software system. Before the software system goes online, PrcIFC is disabled to completely
remove the runtime overhead. Once the software is online, the OS intercepts its output to check for
possible information leakage; the interception, however, is outside the scope of this paper.
According to the description in section 1, information should be associated with a mechanism that
represents its sensitivity. The sensitivity should include a security level number and a group.
Moreover, we need a join operation to adjust the sensitivity of variables in non-output statements.
In this regard, we let a variable's sensitivity be "(Slv, Gp)", in which Slv is the security level
number and Gp is the group. We call this sensitivity a security level. Therefore, every piece of
information that should be protected is associated with a security level. The definition of security
levels results in a lattice. According to the lattice, only output statements are controlled strictly,
and other statements apply the join operation to adjust the security levels of variables. Suppose a
piece of information derived from the variable set "{bi | bi is a variable associated with the
security level (Slv_bi, Gp_bi)}" is output to port a (here a port is a device or a file), which is
associated with the security level (Slv_a, Gp_a). Then, PrcIFC checks the security of the output
statement using the following rule:

Rule 1. ((∩i Gp_bi) ∩ Gp_a ≠ φ) ∧ (MAX_i(Slv_bi) ≤ Slv_a) (1)

The former part of the rule requires that the output port a and the variables bi in the variable set
be compatible; otherwise, the output is illegal. The latter part requires that the security level
number of the output port be at least that of every variable in the variable set. This prevents the
information carried by the bi from leaking through an output port. As to non-output statements,
suppose a piece of information a, which is associated with the security level (Slv_a, Gp_a), is derived
from the variable set "{bi | bi is a variable associated with the security level (Slv_bi, Gp_bi)}". Then,
PrcIFC requires the condition "(∩i Gp_bi) ∩ Gp_a ≠ φ" to be true, which means that the
information a and the variables bi in the variable set are compatible. If the condition is true, the
following rule is used to adjust the security level of the information a:

Rule 2. Slv_a = MAX_i(Slv_bi) (2)
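Rules 1 and 2 can be turned into a short executable sketch. The data model (dictionaries with `slv` and `gp` fields) is our assumption; only the two formulas come from the paper:

```python
# Sketch of Rules 1 and 2. A security level is a pair (Slv, Gp): a
# security level number and a set of groups. The representation is
# illustrative; the two functions follow the formulas above.

def rule1_output_ok(sources, port):
    """Rule 1: output is legal iff (i) the intersection of all source
    groups with the port's groups is non-empty (compatibility) and
    (ii) the maximum source level number does not exceed the port's."""
    common = set(port["gp"])
    for b in sources:
        common &= b["gp"]
    return bool(common) and max(b["slv"] for b in sources) <= port["slv"]

def rule2_join(sources):
    """Rule 2 (join): information derived from several variables takes
    the maximum of their security level numbers (group compatibility
    must already have been checked)."""
    return max(b["slv"] for b in sources)

salary  = {"slv": 3, "gp": {"finance"}}
counter = {"slv": 1, "gp": {"finance"}}
log_dev = {"slv": 2, "gp": {"finance"}}   # an output port (device or file)

rule1_output_ok([salary, counter], log_dev)  # False: MAX(3, 1) > 2
rule2_join([salary, counter])                # 3: derived data is as
                                             # sensitive as its sources
```

Note how the join makes derived information at least as sensitive as every source, which is what keeps later output statements honest.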
4. PROOF OF CORRECTNESS
Information leakage may occur only during output, whether to files or to devices.
Case 1. Outputting information to a device will not leak information under the control of PrcIFC.
Proof. Information output to a device Dv may only leak to persons. Suppose a person P possesses
the security level number Slv_P and Slv_P < Slv_Dv. If Dv and all information sources are in the
same group, Rule 1 allows the output only when MAX_i(Slv_bi) ≤ Slv_Dv. If the output is allowed, P
cannot access the information output to Dv because Slv_P < Slv_Dv. Since the bi derive the
information output to Dv, no bi will be leaked to P.
Case 2. Outputting information to a file will not leak information under the control of PrcIFC.
Proof. Information output to a file Fi may be leaked to persons or software. If Fi and all sources
are in the same group, Rule 1 allows the output only when MAX_i(Slv_bi) ≤ Slv_Fi. Case 1
above states that no bi will be leaked to a person P with a security level number Slv_P such
that Slv_P < Slv_Fi. On the other hand, if software Sfi reads the information of a variable V
from Fi into a variable V1, Rule 2 causes Slv_V1 to be the same as Slv_V. If Sfi outputs V1
to a device, Case 1 states that V1 will not be leaked to a person. If V1 is output to another file,
the argument returns to the beginning of this proof. We know that the information of a
sensitive variable may be: (a) operated on by a software system, (b) output to a device, or (c)
output to a file. Information operated on by a software system will not be leaked without
output. Information output to a device will not be leaked (see Case 1). As for information
output to a file, it will not be leaked to persons either. #
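The chained argument in Case 2, where the security level follows sensitive information through a file, can be made concrete with a tiny numeric sketch (the helper names and level values are ours):

```python
# Sketch of Case 2: the security level follows sensitive information
# through a file, so reading the file back cannot launder the level.
# Helper names and values are illustrative only.

def join_level(*slvs):            # Rule 2: result takes MAX of sources
    return max(slvs)

def output_ok(src_slv, dst_slv):  # Rule 1, same group assumed
    return src_slv <= dst_slv

slv_v    = 3                      # sensitive variable V
slv_file = 3                      # Rule 1: writing V is legal only to a
                                  # file with Slv >= 3
slv_v1   = join_level(slv_file)   # software Sfi reads the file into V1:
                                  # Rule 2 gives V1 the file's level, 3

output_ok(slv_v1, 2)              # False: V1 still cannot reach a
                                  # level-2 device, so nothing leaks
```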
5. EXPERIMENT RESULTS
We required students to develop a simple library system using UML as the development model;
the target language was C. We organized two groups of ten students each for the experiment. The
first group developed the system without PrcIFC; PrcIFC was embedded in their systems only
during implementation. That is, these students did not need to use the prototyping language to
write specifications or the target language to write design documents. On the other hand, we
required students in the second group to follow the process in Figure 1 to develop their software.
During development, we asked the students to record the time they spent developing documents.
During testing, we asked them to record the number of statements that violated PrcIFC. The
collected data were then averaged. Table 1 shows the averaged data, in which the development
time of the first group is normalized to one.
Table 1. Experiment results

Group | Requirement analysis time | System design time | Implementation time | Statements violating PrcIFC
------|---------------------------|--------------------|---------------------|----------------------------
1     | 1                         | 1                  | 1                   | 5.33
2     | 1.25                      | 1.32               | 0.69                | 1.98
Table 1 shows that the requirement analysis and design time of the second group exceeded that
of the first group. This is reasonable, because students in the second group had to write their
specifications and design documents in the prototyping language and the target language,
respectively, and PrcIFC had to be embedded in both. On the other hand, the implementation time
of the second group was much lower, because their design documents had already been written in
the target language; moreover, the students in the first group also had to embed PrcIFC in their
systems. What especially concerns us is the number of statements that violate PrcIFC. The
experiment result encourages us, and we believe that PrcIFC is indeed valuable, because many
statements that may leak information were removed during the development phases before
implementation.
6. CONCLUSION
Information flow control (IFC) is useful in preventing information leakage during software
execution. According to our survey, we cannot identify an IFC model that is applied to the entire
software development process. Applying an IFC model to the entire software development process
does not exclude the viewpoints of any stakeholder; moreover, statements that may leak
information can be identified during every phase of software development. The other problem of
traditional IFC models is their large runtime overhead, and we cannot identify a model that removes
this overhead. We therefore designed a new IFC model named PrcIFC (process IFC). It offers the
following major features:

1. PrcIFC is applied to the entire software development process. Therefore, it does not exclude
the viewpoints of any stakeholder and helps correct statements that may leak
information during every phase of development.
2. PrcIFC is disabled when the software goes online, which removes its runtime overhead.

In the future, we will require students to apply PrcIFC in their exercises to examine the
advantages and disadvantages of the model in depth. We hope to improve PrcIFC based on the
students' responses.
REFERENCES
[1] M. Krohn, A. Yip, M. Brodsky, N. Cliffer, M. F. Kaashoek, E. Kohler, and R. Morris, “Information
Flow Control for Standard OS Abstractions”, SOSP’07, 2007.
[2] I. Roy, D. E. Porter, M. D. Bond, K. S. McKinley, and E. Witchel, “Laminar: Practical Fine-Grained
Decentralized Information Flow Control”, PLDI’09, 2009.
[3] N. Zeldovich, S. Boyd-Wickizer, and D. Mazières, “Securing Distributed Systems with Information Flow
Control”, NSDI '08, pp. 293–308, 2008
[4] S. –C. Chou and C. –H. Huang, “An Extended XACML Model to Ensure Secure Information Access for
Web Services”, Journal of Systems and Software, vol. 83, no. 1, pp. 77-84, 2010.
[5] S. –C. Chou, “Dynamically Preventing Information Leakage for Web Services using Lattice”, to appear in
5’th International Conference on Computer Sciences and Convergence Information Technology (ICCIT),
2010.
[6] W. She, I. –L. Yen, B. Thuraisingham, and E. Bertino, “The SCIFC Model for Information Flow Control in
Web Service Composition”, 2009 IEEE International Conference on Web Services, 2009.
[7] W. She, I. –L. Yen, B. Thuraisingham, and E. Bertino, “Effective and Efficient Implementation of an
Information Flow Control Protocol for Service Composition”, IEEE International Conference on Service-
Oriented Computing and Applications, 2009.
[8] R. Wu, G. –J., Ahn, H. Hu, and M. Singhal, “Information Flow Control in Cloud Computing”, Proceedings
of the 6th International Conference on Collaborative Computing: Networking, Applications and
Worksharing, 2010
[9] J. Bacon, D. Eyers, T. F. J. –M. Pasquier, J. Singh, and P. Piezuch, “Information Flow Control for Secure
Cloud Computing”, IEEE Trans. Network and Service Management, 11(1), pp. 76-89, 2014.
[10] S. -C. Chou, “Controlling Information Flows in Net Services with Low Runtime Overhead”, International
Journal of Computer Network and Information Security, vol. 7, no. 3, pp. 1-9, 2015.
[11] Myers A., and Liskov, B., 1998. Complete, Safe Information Flow with Decentralized Labels. Proc. 14’th
IEEE Symp. Security and Privacy, 186-197.
[12] D. E. Denning, “A Lattice Model of Secure Information Flow”, Comm. ACM, vol. 19, no. 5, pp. 236-243, 1976.
[13] D. E. Denning and P. J. Denning, “Certification of Programs for Secure Information Flow”, Comm. ACM,
vol. 20, no. 7, pp. 504-513, 1977.
[14] Myers A., and Liskov, B., 2000. Protecting Privacy using the Decentralized Label Model. ACM Trans.
Software Eng. Methodology, vol. 9, no. 4, 410-442.
[15] Samarati, P., Bertino, E., Ciampichetti, A., and Jajodia, S., 1997. Information Flow Control in Object-
Oriented Systems. IEEE Trans. Knowledge Data Eng., vol. 9, no. 4, 524-538.
[16] Skogsrud, H., Benatallah, B., Casati, F., 2004. Model-Driven Trust Negotiation for Web Services.
IEEE Internet Computing, vol. 8, no. 6, 45-52.
[17] Yu, T., Winslett, M., Seamons, K., 2003. Supporting Structured Credentials and Sensitive Policies through
Interoperable Strategies for Automated Trust Negotiation. ACM Transactions on Information and System
Security, vol. 6, no. 1, 1-42.
[18] Koshutanski, H., Massacci, F., 2005. Interactive Credential Negotiation for Stateful Business Processes,
Lecture notes in computer science, 256-272.
[19] Bhatti, R., Bertino, E., Ghafoor, A., 2004. A Trust-based Context-aware Access Control Model for Web
Services. IEEE ICW’04, 184 – 191.
[20] Bhatti, R., Ghafoor, A., Bertino, E., Joshi, J. B. D., 2005. X-GTRBAC: An XML-Based Policy
Specification Framework and Architecture for Enterprise-Wide Access Control. ACM Transactions on
Information and System Security (TISSEC), Vol. 8, No. 2, 187 – 227.
[21] Bertino, E., Bonatti P. A., Ferrari, E., 2001. TRBAC: A Temporal Role-Based Access Control Model.
ACM Transactions on Information and System Security, Vol. 4, No. 3, 191 – 233.
[22] Wonohoesodo R., Tari, Z., 2004. A Role Based Access Control for Web Services. Proceedings of the 2004
IEEE International Conference on Service Computing, 49-56.
[23] Sandhu, R. S., Coyne, E. J., Feinstein, H. L., Youman, C. E., 1996. Role-Based Access Control Models.
IEEE Computer, vol. 29, no. 2, 38-47.
[24] R. Sandhu, D. F. Ferraiolo, and D. R. Kuhn, “The NIST Model for Role Based Access Control:
Toward a Unified Standard”, 5th ACM Workshop on Role-Based Access Control, pp. 47-63, 2000.
[25] W. She, I. -L. Yen, B. ThuraiSingham, and S. –Y. Huang, “Rule-Based Run-Time Information Flow
Control in Service Cloud”, 2011 IEEE International Conferences on Web Services, pp. 524-531, 2011.
[26] L. Sfaxi, T. Abdellatif, R. Robbana, and Y. Lakhnech, “ Information Flow Control of Component-Based
Distributed Systems”, Concurrency and Computation: Practice and Experience, available on
http://www.bbhedia.org/robbana/Wiley.pdf
[27] JIF website, “Jif: Java + information flow”, available on http://www.cs.cornell.edu/jif/
[28] Brewer, D.F.C., Nash, M.J., 1989. The Chinese Wall Security Policy. In: Proceedings of the 5’th IEEE
Symposium on Security and Privacy, 206-214.
[29] S. -C. Chou, “A Prototyping Technique to Verify Requirements”, ICS 2002, Hualien, Taiwan.
AUTHORS
Shih-Chien Chou is currently a professor in the Department of Computer Science and
Information Engineering, National Dong Hwa University, Hualien, Taiwan. His research
interests include software engineering, process environment, software reuse, and
information flow control.