This document summarizes several projects that have formally verified operating system kernels:
- The UCLA project in the 1980s formally specified and verified parts of the Unix kernel using multiple specification layers and consistency proofs. It found errors and demonstrated the need for formal verification.
- The KIT project in the 1990s was the first to verify an OS kernel at the assembly level. It proved isolation between processes in a small kernel written for a simple machine.
- Other projects discussed include PSOS, VFiasco, EROS, and seL4, which take different approaches to formally verifying properties of OS kernels. The document surveys the methodology and contributions of these verification projects.
OS VERIFICATION - A SURVEY AS A SOURCE OF FUTURE CHALLENGES
International Journal of Computer Science & Engineering Survey (IJCSES), Vol.6, No.4, August 2015
DOI: 10.5121/ijcses.2015.6401
Kushal Anjaria and Arun Mishra
Department of Computer Science & Engineering, DIAT, Pune, India
ABSTRACT
Formal verification of an operating system kernel demonstrates the absence of errors in the kernel and establishes trust in it. This paper evaluates various projects on operating system kernel verification and presents an in-depth survey of them. The methodologies and contributions of these verification projects are discussed in the present work. At the end, a few unattended and interesting future challenges in the area of operating system verification are discussed, and possible directions towards their solution are briefly described.
KEYWORDS
Formal verification, operating system, kernel.
1. INTRODUCTION
The security and reliability of a computer system depend on the underlying operating system kernel; the kernel is the core of the operating system. The kernel provides the mechanisms through which user-level applications access hardware, and it implements scheduling and inter-process communication. Therefore, if anything goes wrong in the kernel's design or implementation, it affects the operation of the entire system. To ensure the correct working of a kernel, testing and/or verification techniques are used. Testing reduces the frequency of failures, while verification detects errors and eliminates failures. Consider the fragment of C code shown in Figure 1.
Figure1. Fragment of C code
Here r being an integer, if testing has been performed with p=8, q=4 and p=16, q=32, full
coverage of code can be achieved. Full coverage implies each line of code is executed at least
once and every condition has been tested once. But here still two divide by zero errors are
remaining. To identify these errors, human tester will immediately suggest that p=0, q=-1 and p=-
1, q=0 should be tested. But for bigger and non-trivial program, it is difficult to find these cases
and tester can never be sure that all such cases are covered.
int div(int p, int q)
{
int r;
if (p<q)
r=p/q;
else
r=q/p;
return r;
}
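To make the coverage argument concrete, the small harness below (a hypothetical sketch; only the div function itself comes from figure-1) runs exactly the two test pairs from the text. Both branches execute, so line and branch coverage are complete, yet neither input reaches the divisions by zero at p=0, q=-1 or p=-1, q=0.

#include <assert.h>
#include <stdio.h>

/* Function under test, copied from figure-1. */
int div(int p, int q)
{
    int r;
    if (p < q)
        r = p / q;   /* divides by zero when q == 0 and p < q  */
    else
        r = q / p;   /* divides by zero when p == 0 and p >= q */
    return r;
}

int main(void)
{
    /* Both branches are exercised: full line and branch coverage. */
    assert(div(8, 4) == 0);    /* else branch: 4/8 == 0   */
    assert(div(16, 32) == 0);  /* then branch: 16/32 == 0 */
    printf("all tests passed with full coverage\n");

    /* The undetected errors: div(0, -1) and div(-1, 0) would both
       divide by zero, which is undefined behaviour in C. */
    return 0;
}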
From examples similar to the one above, G. Klein[19] has concluded that humans are good at creativity, but not so good at repetitive, complex and highly detailed tasks. Operating system verification, however, is complex and needs many repetitive tasks. That is why formal verification of an operating system is preferable to normal testing. Formal verification of an operating system kernel produces mathematical proofs of correctness of the operating system. What formal verification proves may differ from the user's view of correctness; this difference is called the semantic gap[51]. Bridging this semantic gap is called formalisation[52]. Figure-2 below shows the entire verification procedure of any system with formalisation. The specification block in figure-2 describes the collection of mathematical entities; these entities are in a form which can later be analysed by mathematical methods. The model block describes a mathematical representation of the system at some chosen level of abstraction. The verification tool takes the mathematical model and the mathematical entities as input and verifies whether or not the model is correct for all the entities. After that it generates the verification result.

Figure 2. Verification procedure [52]

After this brief overview of the verification procedure, some of the formal verification projects for operating systems are surveyed. Before moving on to the survey, some basic terminology related to OS verification is explained:

a. Model checking: Model checking is a formal verification method for finite state concurrent systems. Large systems like an OS can reside in many states; that means they have a large state space. Model checking can be applied only to abstract models of real systems. This diminution restricts the conclusions which can be drawn from model checking for an operating system[27, 55, 56].

b. Proof-carrying code: Proof-carrying code is an approach in which the kernel accepts only those extensions which are accompanied by a valid proof for a particular security policy. It tackles the problem of untrusted code execution in kernel mode[27, 57, 58].

c. Static source-code checking: Static source-code checking performs the analysis of source code statically. It analyzes the source code and gives guarantees about the absence of errors, while testing runs the system for code analysis; in this way it is different from testing[27, 59].

d. Functional correctness: Functional correctness in OS kernel verification means that the implementation always strictly follows the high-level abstract specification of kernel behaviour[43]. Functional correctness makes it feasible to prove security properties at the code level, though it does not necessarily imply security. Functional correctness reasons about an implementation and a specification. A functional correctness proof has been considered the right first step and the basis for proving higher-level properties[27, 60, 61].
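As a toy illustration of functional correctness (a hypothetical sketch, not taken from any of the surveyed projects), the C fragment below pairs an abstract specification of a "max" operation with a branch-based implementation and checks, over a small input domain, that the implementation always agrees with the specification. A real proof would establish this for all inputs in a theorem prover rather than by enumeration.

#include <assert.h>
#include <stdio.h>

/* Abstract specification: what the operation must compute. */
static int spec_max(int a, int b)
{
    return (a >= b) ? a : b;
}

/* Concrete implementation: how a kernel might compute it. */
static int impl_max(int a, int b)
{
    int r = a;
    if (b > a)
        r = b;
    return r;
}

int main(void)
{
    /* Exhaustive check over a small domain stands in for a
       machine-checked proof that the implementation follows
       the specification. */
    for (int a = -50; a <= 50; a++)
        for (int b = -50; b <= 50; b++)
            assert(impl_max(a, b) == spec_max(a, b));
    printf("implementation follows the specification on this domain\n");
    return 0;
}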
2. AN OVERVIEW OF OPERATING SYSTEM VERIFICATION PROJECTS
To capture the wide domain of operating system verification, the following projects have been surveyed: UCLA, KIT, PSOS, VFiasco, EROS and seL4. These are among the prominent OS verification projects, and it is believed in the present work that a survey of these projects is sufficient to find new challenges in the OS verification field.
2.1. UCLA Project
The verification of the UCLA system is based on the data security model provided by G. Popek and D. Farber[1]. The data security model can be used to verify many of those properties in an operating system which are necessary to ensure reliable security enforcement. The authors did not try to prove that an operating system is entirely correct; instead they centralized all the operations which affect security into a nucleus. This nucleus is called the security kernel. If the operations of this kernel are correct, this implies that the entire system is secure. Besides UCLA, this approach has been applied in other areas as well, e.g. to design a security model for military message systems[2] and to decide the dependability of trusted bases[3].
UCLA Secure Unix[4] and the Provably Secure Operating System (PSOS)[15, 16] are considered the first serious attempts to verify an OS kernel. These projects were attempted 35 years ago. UCLA Secure Unix was developed as an operating system for the DEC PDP-11/45 computer. The project attempted formal modelling and verification of a Unix kernel written in simplified Pascal.
Project Implementation
The project was divided into two parts: first, a four-level specification, ranging from Pascal code at the bottom to a top-level specification, was developed. Then, in verification, it had to be proved that the different levels of abstraction were consistent with each other. The UCLA project managed to finish 90% of its specification and 20% of its proofs in five person-years.
Figure-3 shows the details of the implementation of the UCLA Secure Unix project. The left-hand side of the figure shows the specification layers used in this project. Each specification in the specification layers is called a state machine. Instead of one specification layer, multiple specification layers were designed because proof consistency can be handled more easily across multiple layers. The Pascal code block is the actual Pascal code of the kernel. The low-level specification block in figure-3 describes the data structures of the implementation, with some details omitted. The abstract level contains specific objects like processes, pages and devices. The top-level specification in figure-3 contains the data security notion discussed by G. Popek and D. Farber[1]. The right-hand side of the figure shows the consistency proofs between the levels. These proofs show that the specifications are consistent with each other. The proofs define mapping functions; each function maps a program state of the concrete level to a program state of the abstract level. The figure on the right shows that for each operation in the concrete system, the corresponding operation in the abstract system transforms the mapped concrete state accordingly [8].

Figure 3. UCLA specification layers and consistency proofs [8]
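The consistency proofs can be pictured as a commuting diagram: applying a concrete operation and then abstracting must give the same abstract state as abstracting first and then applying the abstract operation. The C sketch below (a hypothetical illustration with made-up state types, not UCLA's actual formalism) checks this commutation for one operation.

#include <assert.h>

/* Concrete level: a counter stored in tick units (10 ticks = 1 unit). */
typedef struct { int ticks; } ConcreteState;

/* Abstract level: the same counter in whole units. */
typedef struct { int units; } AbstractState;

/* Mapping function from concrete state to abstract state. */
static AbstractState map(ConcreteState c)
{
    AbstractState a = { c.ticks / 10 };
    return a;
}

/* Corresponding operations at each level: add one unit. */
static ConcreteState concrete_op(ConcreteState c) { c.ticks += 10; return c; }
static AbstractState abstract_op(AbstractState a) { a.units += 1;  return a; }

int main(void)
{
    /* Consistency: map(concrete_op(s)) == abstract_op(map(s)). */
    for (int t = 0; t < 1000; t++) {
        ConcreteState s = { t };
        assert(map(concrete_op(s)).units == abstract_op(map(s)).units);
    }
    return 0;
}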
Now the points below show some important findings that can be derived from the UCLA project, which will connect us with the current formal verification scenario:
• Prior efforts to make operating systems secure merely found flaws in the system, so it became clear that piecemeal alterations were unlikely ever to succeed [6]. The UCLA project can therefore be seen as a systematic formal approach that controls the OS's design and implementation.
• In UCLA they first generated the nucleus using the approach described in [1]. This nucleus was very close to modern microkernels[27], although at that time microkernels hadn't been invented[5].
• The authors discuss an error whose discovery justifies the need for formal specification. The error was related to a boundary condition of a kernel call: the boundary condition of the kernel call that maps a page into user-address space hadn't been properly handled, so a mischievous process could read or modify the memory pages adjacent to its own.
• When the project took place, formal refinement hadn't yet come into existence, though the technique used in this project is formal refinement, as later defined by Morgan[7]. In fact, it is a data refinement technique[8].
• At the end of this project, it was observed that the performance of the system was poor; it was slower than standard Unix in some cases. This was because of the nature of the Pascal compiler and the high context-switch cost. Modern microkernels, for example the seL4 kernel, have overcome this limitation. The seL4 project is described later in this paper.
• The authors observed that the approach of program verification and proof development before or during software development is not practical. They also said that if a system needs to be verified, it should be developed with verification in mind. A similar conclusion has been drawn from other projects as well, for example the seL4[43,44,45] project.
2.2. KIT Project
KIT (Kernel for Isolated Tasks)[9] was the first operating system kernel verified at the assembly level. The main job of a multitasking operating system is to implement processes, and the purpose of the KIT project was to verify whether all these processes are isolated or not. The name KIT stands for 'Kernel for Isolated Tasks', meaning that tasks can communicate only in specified ways. KIT is written in the machine language of a uniprocessor von Neumann computer. The KIT kernel provides exception handling, access to asynchronous I/O devices and single-word message passing. The kernel was implemented in an artificial, yet realistic, assembly language. Its code size was 620 lines, which is very small; of these 620 lines, only 300 were actual instructions [5].
Project Implementation
The KIT kernel has been formalized in Boyer-Moore logic[10] and all the proofs have been checked mechanically by the Boyer-Moore theorem prover[10]. KIT verification was done by proving correspondence between the behaviour of two finite state machines: the abstract kernel finite state machine and the target machine finite state machine. To describe the finite state machines, a description of the set of machine states and a definition of each transition on a machine state are required.
Now the points below show some important findings that can be derived from the KIT project:
• There is a fiction in the KIT OS that each process owns the processor, and the saved processor state maintains this fiction. The verification of KIT proves that the processor state is saved correctly.
• An interpreter equivalence theorem establishes the implementation relation. This relation is similar to Milner's weak simulation relation [11].
• KIT established process isolation properties down to the object-code level, but it was simpler and had a less general abstraction than a modern microkernel [12]. In terms of verification, however, KIT had the same order of complexity as a modern microkernel [13].
• KIT differed from current microkernels in that it provided no dynamic process creation, no virtual memory in the modern sense, and no support for shared memory [5].
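To give a flavour of the isolation property KIT proves (a hypothetical miniature, not KIT's Boyer-Moore formalization), the C sketch below models two tasks with disjoint memory windows and checks that running one task's step never changes the other task's window.

#include <assert.h>
#include <string.h>

#define TASKS  2
#define REGION 16

/* Machine memory split into one fixed window per task. */
static int mem[TASKS * REGION];

/* A task step may only touch its own window. */
static void task_step(int task)
{
    int base = task * REGION;
    for (int i = 0; i < REGION; i++)
        mem[base + i] += task + 1;   /* arbitrary local work */
}

int main(void)
{
    int before[TASKS * REGION];

    /* Isolation check: stepping one task leaves every other
       task's window byte-for-byte unchanged. */
    for (int t = 0; t < TASKS; t++) {
        memcpy(before, mem, sizeof mem);
        task_step(t);
        for (int u = 0; u < TASKS; u++) {
            if (u == t) continue;
            for (int i = 0; i < REGION; i++)
                assert(mem[u * REGION + i] == before[u * REGION + i]);
        }
    }
    return 0;
}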
2.3. PSOS
The PSOS (Provably Secure Operating System)[15,16] project was based on the Robinson–Levitt paper[14], which introduced the concept of formal mappings between different levels of functional specifications, representing abstract implementations of each layer as a function of the lower layers. This layered system architecture can be seen in figure-4 below. The interface between each layer serves as a high-level functional specification for the lower layer and, at the same time, as a machine model for the higher layer.

Figure 4. PSOS layered system architecture [8]
Project implementation
As shown in figure-4, PSOS has 17 layers in its layered system architecture. Of these 17 layers, the bottom six are implemented in hardware and layers seven to seventeen are implemented in software. As shown in the figure, layer 0 is tagged. Tagged means it has hardware-enforced capabilities: the hardware supports a bit that indicates whether or not a word in memory stores a capability. This layered architecture is similar to the TCP/IP network stack, which is why the figure shows the applications in the top layer. As in the TCP/IP network stack, the top layer contains all the applications, which rely on the services of the layers below.
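A tag bit of this kind can be mimicked in software; the C sketch below (a hypothetical illustration, not PSOS's hardware design) pairs each memory word with a tag and refuses to use an untagged word as a capability.

#include <assert.h>
#include <stdio.h>

/* One memory word plus the hardware-style tag bit. */
typedef struct {
    unsigned value;
    int is_capability;   /* 1 if the word stores a capability */
} TaggedWord;

/* Using a word as a capability is allowed only if it is tagged. */
static unsigned load_capability(const TaggedWord *w)
{
    assert(w->is_capability && "word does not hold a capability");
    return w->value;
}

int main(void)
{
    TaggedWord cap  = { 0x42, 1 };  /* a genuine capability  */
    TaggedWord data = { 0x42, 0 };  /* plain data, same bits */

    printf("capability -> %#x\n", load_capability(&cap));
    /* load_capability(&data) would abort: the tag bit prevents
       forging a capability out of ordinary data. */
    (void)data;
    return 0;
}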
It was believed in the project that layering would make formal verification easier. PSOS was a capability-based system which incorporated hardware-implemented capabilities. Now the points below show some important findings that can be derived from the PSOS project:
• The hardware-implemented capability in PSOS had a weakened primitive similar in some respects to the diminish operator: existing access rights can be selectively retained. It is unclear from the literature whether the application of this operation was imposed by an access right or was discretionary [17].
• PSOS also implemented a concept named "store permissions". This mechanism could selectively control which capabilities can be stored into which capability segments. This feature can be used to enforce write-down permissions [17].
• The properties of application-specific object types could be enforced with the hardware assistance provided by the capability-based access control. The design allowed application layers to execute instructions efficiently, with object-oriented capability-based addressing going directly to the hardware, although it appeared at a much higher level of abstraction in the design specification [18].
2.4. VFiasco project
The main challenge in the VFiasco project [27] is to enable high-level reasoning in terms of typed objects during verification while assuming only low-level hardware properties. The VFiasco project aims at mechanical verification of security-relevant properties of the Fiasco microkernel, an L4-compatible microkernel [28]. The aim of the project is an OS kernel with verified security guarantees.
Project implementation
The Fiasco kernel is implemented in C++, and C++ is not a language with precise semantics. For this purpose, M. Hohmuth et al.[27] developed a language with precise semantics called "safe C++". After converting the code to safe C++, verification is carried out in the theorem prover Isabelle/HOL [29]. In this project, the conversion from safe C++ to HOL semantics is done automatically by a logic compiler. For verification, the authors use, as an abstraction level, a virtual machine that provides a type-safe object store: a memory that supports reading and writing of typed values.
In this project the existence of an object store layer with strong properties is a proof, not an assumption. The approach used in this project for a semantics of C++ is very close to the one used in the LOOP project[30] for Java. Figure-5 below gives an overview of the VFiasco verification process. First, the semantic compiler translates C++ code into semantics formulated in higher-order logic. Then the hardware model and the C++ semantics library provide functions to express program semantics. After that, the theorem prover verifies the semantic specification against the security properties. Finally, the verification results in a proof.

Figure 5. The VFiasco project verification overview [40]

Now the points below show some important findings that can be derived from the VFiasco project, which will connect us with the current formal verification scenario:
• The VFiasco project uses coalgebraic[31] methods and describes a coalgebraic class specification language called CCSL. Coalgebraic proof methods are not only a characterising formalism for non-terminating programs, but are also used for labelled transition systems[32].
• In the VFiasco project, source-code verification was applied directly to the unmodified source of the Fiasco microkernel, written in C++. In C++ it is not possible to jump across function boundaries, so the formal reconstruction of goto loops described in [33] needs to be applied. For complete C++ semantics one needs a semantics for data types that can deal with type casts of data and of pointers. During verification, a state-transformation approach was used to obtain relatively simple semantics for statements like break, continue and even goto[27]. The state transformation has been explained by M. Huisman and B. Jacobs [30].
• The VFiasco project contains a single layer, i.e. the object store layer. This layer provides functions for typed objects, in such a way that typed objects can be safely manipulated. Because of this single layer, the VFiasco project resides in the "single-layer approach" category (ShengWen Gong [34] has classified verification efforts/projects into four categories: (i) the component approach [35], (ii) the parallel approach[36], (iii) the single-layer approach[27], (iv) the pervasive approach[37]).
• The VFiasco project stopped at the source-code level; there were no attempts to map results down to lower system or language layers, and the project does not involve any compiler verification [38].
2.5. EROS project
EROS (Extremely Reliable Operating System)[47,48] is a capability-based operating system for commodity processors which uses a single-level storage model. J. Shapiro et al. formalized and analyzed the security model of EROS using a pen-and-paper approach. The security model was based on the take-grant model[42] of capability distribution, but it was not formally connected to the implementation.
Project implementation
The EROS project actually started by verifying the confinement mechanism. J. S. Shapiro and S. Weber [48] provided a formal definition of the confinement policy in their work. After that, they explained the operations and security requirements of a real operating system. In their work, a methodology and proof structures were developed for the confinement policy in a capability-based structure. They claimed that their methodology can be generalized to solve information flow problems in many capability-based architectures.
Now the points below show some important findings that can be derived from the EROS project:
• One notable thing about EROS is that, when the machine starts, it loads a fully pre-initialised state. This makes it possible to inspect the initialised state offline[49].
• S. Maffeis et al.[50] provide an object-capability-based model that offers an approach for isolating untrusted components in web applications. Their work is close to the EROS confinement mechanism. While there are some other similarities between their framework and the EROS setup, one substantial difference is that instead of defining authority as an over-approximation of the heap actions that can be performed by a single object, they define authority for the whole system[50].
• The Coyotos kernel[64, 65] was the successor to the EROS kernel. From the security model of the EROS kernel, Shapiro et al.[54] concluded that there should be a formal connection between the security model of the kernel and the implementation. They tried to establish this formal connection in the Coyotos kernel[8].
• The EROS system currently runs on Pentium hardware. Further details about the project can be obtained from its website[62].
• The CapROS (Capability-based Reliable Operating System)[63] project is the continuation of EROS. It uses the same EROS code base. The CapROS project is being led by Charles Landau.
2.6. seL4 verification project
The seL4 (Secure Embedded L4) kernel is an evolution of the L4 microkernel; it is a third-generation microkernel of L4 provenance and targets embedded devices. seL4 implements a capability-based protection system, and capabilities in seL4 are immutable. Like the L4 microkernel, seL4 provides address spaces, inter-process communication and threads. In seL4, all system calls are invocations of capabilities. seL4 comprises 8,700 lines of C code and 600 lines of assembler.
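The idea that every system call is a capability invocation can be pictured as follows (a hypothetical C miniature, not seL4's actual API): the kernel entry point accepts only a capability plus the rights the operation needs, and refuses the operation if the capability does not carry those rights.

#include <assert.h>
#include <stdio.h>

/* Rights a capability may carry. */
enum { CAP_READ = 1, CAP_WRITE = 2 };

/* A capability names an object and the rights held on it. */
typedef struct {
    int object;       /* index of the kernel object */
    unsigned rights;  /* bitmask of CAP_* rights    */
} Capability;

/* Every "system call" goes through a capability check. */
static int invoke(const Capability *cap, unsigned needed)
{
    if ((cap->rights & needed) != needed)
        return -1;                    /* access denied */
    printf("operation on object %d permitted\n", cap->object);
    return 0;
}

int main(void)
{
    Capability ro = { 7, CAP_READ };  /* read-only capability */

    assert(invoke(&ro, CAP_READ) == 0);
    assert(invoke(&ro, CAP_WRITE) == -1);  /* no write right  */
    return 0;
}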
Project implementation
The seL4 verification project[43,44,45] was the first to provide a proof of functional correctness of a complete general-purpose operating system kernel. In this project, formal machine-checked verification of the seL4 microkernel was performed from an abstract specification down to the C implementation.
In this project, first the access control model was verified, and then the actual functional verification of the kernel started. Before the project, correctness of the compiler, the assembly code, the boot code, the management of caches, and the hardware was assumed.
An access control model of seL4 was verified by D. Elkaduwe et al.[41] in the theorem prover Isabelle/HOL[29]. In their work the take-grant[42] model was used, but without the take rule and with a more realistic create rule that is explicitly authorized by a capability. In their formalization, the remove rule was also modified: in seL4, the remove rule removes the whole capability instead of removing a few parts. D. Elkaduwe et al.[41] proved that the kernel-provided mechanisms are sufficient to enforce mandatory isolation between subsystems. They showed that it is possible to build a fully spatially separated system on top of seL4. Spatial memory separation is thus guaranteed, but the authors note that with current stock hardware, preventing all covert timing channels is not possible.
The seL4 kernel design process is shown in figure-6 below. The square boxes show the formal artefacts; these artefacts have a direct role in the proof. The double arrows in the figure represent implementation effort, and the single arrows represent the design influence of artefacts on other artefacts. The central artefact is the actual Haskell prototype of the kernel. The prototype requires the design and implementation of the algorithms that manage the low-level hardware details. Figure-6 shows that the hardware and the Haskell prototype have design influence on the formal executable specification.

Figure 6. The seL4 design process [43]

After the design process, the verification process took place. In this project, the verification was an interactive, machine-assisted and machine-checked proof. Figure-7 shows the specification layers used in the verification of seL4. The abstract specification layer specifies the outer interface of the kernel. It does not describe in detail how the effects or interfaces are implemented in the kernel; in short, it describes what the system does without saying how it is done. The executable specification layer was generated from the Haskell prototype into the theorem prover. It contains all the data structures and implementation details expected in the final C implementation. The high-performance C implementation layer deals with the formally-defined semantics of the C language.
Figure 7. The refinement layers in the verification of seL4 [43]

Now the points below show some important findings that can be derived from the seL4 project, which will connect us with the current formal verification scenario:
• There are many techniques for formal verification, like model checking, static analysis or kernel implementation in a type-safe language. But in this project it was believed that functional correctness is a stronger and more precise property than those the techniques mentioned above can establish[43].
• In this project, a fusion of traditional operating system and formal methods techniques was used, i.e. rapid kernel design and implementation were used together. Because of this, the verification focus was improved and the design did not conflict with better performance[43].
• In UCLA it had been observed that simplifying the kernel to make verification feasible made the kernel a little slower. But this project has shown that with modern tools and techniques, this is no longer the case.

The summary of the aforementioned projects is shown in table-1 below. In the Year column, a question mark in parentheses shows that the year is not known, and a year in parentheses shows an estimated completion date. Further, many verification projects like bias variance tradeoffs[66], Xtratum[67], L4Android[68], ORIENTAIS[69], VTOS[70] and CHERI[71] have been studied. They evolved from the aforementioned projects and follow the same path for OS verification as described in the projects above.

Table 1. OS verification projects [8]

Project     | Highest level     | Lowest level | Specs          | Proofs | Prover      | Approach                     | Year
UCLA        | Security model    | Pascal       | 90%            | 20%    | XIVUS       | Alphard                      | (?)-1980
KIT         | Isolated tasks    | Assembly     | 100%           | 100%   | Boyer-Moore | Interpreter equivalence      | (?)-1987
PSOS        | Application level | Secure code  | 17 layers      | 0%     | SPECIAL     | HDM                          | 1973-1983
VFiasco     | Doesn't crash     | C++          | 70%            | 0%     | PVS         | Semantic compiler            | 2001-2008
EROS        | Security model    | BitC         | Security model | 0%     | ACL2(?)     | Language based               | 2004-(?)
L4.verified | Security model    | C/assembly   | 100%           | 70%    | Isabelle    | Performance, production code | 2005-(2008)
3. FUTURE CHALLENGES
This section discusses challenges arising out of the above projects and the projects that inherit from them. The present survey does not claim that the challenges described in this section have not been addressed at all; but to the best of our knowledge, the areas described here have not been covered in detail in the literature.
From all the above projects, especially from the seL4 project, it can be concluded that functional correctness is one of the strongest properties that can be proven about a system. Functional correctness makes it possible to give a precise formal prediction of how the kernel behaves in all possible situations for all possible inputs. G. Klein[19] has suggested that if any specific property needs to be checked, it can be expressed in Hoare logic, and it is then enough to work with this formal prediction, i.e. with the specification. But the functional correctness property of a system cannot prove that the system is secure; it just says that the system is functionally correct [19]. The word "secure" in "secure system" requires a formal definition, and this depends on what you want to use the kernel for. So verification of all the security properties of a secure system built on top of the OS kernel is one of the recent challenges in the formal verification field.
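For instance, freedom from division by zero in the div function of figure-1 could be phrased as a Hoare triple over its two branches (a hypothetical rendering of the idea, not an example taken from [19]):

\[
\{\, (p < q \Rightarrow q \neq 0) \land (p \geq q \Rightarrow p \neq 0) \,\} \quad
r := \mathrm{div}(p, q) \quad
\{\, (p < q \Rightarrow r = p/q) \land (p \geq q \Rightarrow r = q/p) \,\}
\]

The precondition rules out both divide-by-zero branches, and the postcondition pins down the result of each branch; this is exactly the kind of prediction a functional correctness proof provides.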
In the seL4 project, specific security properties were not proven, but G. Klein[19] believed that if the functional correctness property of an operating system can be proven, then the following assumptions can easily be made about that OS: (1) no code injection attacks; (2) no buffer overflow attacks; (3) no NULL pointer accesses; (4) no ill-typed pointer accesses; (5) no memory leaks; (6) no non-termination; (7) no arithmetic or other exceptions; (8) no unchecked user arguments. He has explained these assumptions with reasons. He has also explained that functional correctness can tell the following properties about the code: (1) aligned objects; (2) well-formed data structures; (3) algorithmic invariants; and (4) correct book-keeping. So the research challenges here are: can the functional correctness property be used to verify other security properties, and can a framework be made that covers verification of most of the security properties?
G. Heiser et al.[20] have presented some research challenges in a different way. They state that the users of a fully verified kernel have a trustworthy foundation for the entire system; for example, the seL4 kernel is fully verified and can be used as such a foundation. The challenge is how this trustworthy foundation can be used further. They have suggested three uses of the trustworthy foundation: (1) secure web browsing, (2) increasing the usefulness of the TPM, and (3) cost-reduced databases. But there are still many challenges related to the kernel used as a trustworthy foundation.
Let's start with the secure web browsing topic. One security policy of the browser is the Same Origin Policy (SOP). SOP means that web pages from different sources cannot observe or alter each other's state and behaviour, and a script running inside a web page must be denied unauthorized access to OS resources. Recent browsers like Chrome[22] address this problem by encapsulating the security policy in a separate module, the browser kernel. This approach is based on the OS and specifically depends on the browser's TCB. So one challenge here is whether the TCB of the browser can be reduced using a microkernel approach. The IBOS[21] project has actually shown that the TCB of a browser can be reduced by using the microkernel approach. The authors have proposed an architecture for secure browsing
which contains two trusted parts, the microkernel and a user-level security process, shown in figure-8.

Figure 8. Secure web browser architecture. Components that belong to the TCB of the system are highlighted using grey shading[20]

Figure-8 shows that the monitor is the only part of the TCB that needs to be verified. The monitor instantiates browser processes with permissions to directly communicate with their private network, the stack processes and the monitor itself[20]. Once this security monitor is verified, one can have a completely verified TCB for secure browsing. So the research challenge here is: can the aforementioned architecture be extended to include the complete OS stack, and that too running inside a VM (Virtual Machine)?
The next challenge is to increase the usefulness of the TPM. The Trusted Computing Group[23] has introduced the Trusted Platform Module (TPM). The TPM provides a remote attestation facility which gives evidence that the trusted software stack has been loaded by the remote system from boot time. Now let's take the classical example of a bank transaction. The bank can reject a transaction request from a remote system if the trusted software stack hasn't been loaded by the remote machine. Bank clients want to use a smart phone or computer for the transaction, and smart phones and computers have many apps and much software for the bank to manage. Many of these apps and software are not trusted. So, for successful remote attestation, it has become necessary to keep these out of the TCB and therefore out of the system. To solve this problem, the concept of Dynamic Root of Trust Measurement (DRTM)[24] has been introduced. DRTM allows the user to switch between trusted and untrusted environments. But to achieve this, the trusted code should be small and can run only for a fraction of a second, whereas a bank transaction needs more time than a few seconds. So the above approach needs the user to suspend the OS while doing the bank transaction, which is not a practical solution. To solve this issue, a TCB is needed that does not change, changes rarely, or changes in a controlled fashion. From this, G. Heiser et al. conclude that a formally verified kernel does not change, or changes very rarely, because it won't require any bug fixes; they give the example of the seL4 kernel. The challenge here is to minimize the TCB and isolate it from the rest of the general-purpose OS components using a trusted system which contains a fully verified kernel.
The final challenge that G. Heiser et al.[20] have mentioned concerns database systems. Databases can suffer disk failures, power failures and OS crashes[20]. These days RAID protects a database from disk failure and a UPS protects it from power failure. This leaves protection against OS crashes as one of the open research problems. Here research questions can be formed: can properties of the kernel be verified in such a way as to make the kernel crash-proof? Can a database be implemented directly on a verified kernel, and if so, what kind of changes are required
In particular, can the database be implemented on the verified kernel in such a way that the changes required in the lower layers of the DBMS remain minimal and practical to achieve?
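As a rough sketch of the idea behind these questions (our illustration only; RapiLog [73], listed in Table 2, realizes a related idea on seL4), the toy Python commit path below acknowledges a transaction without a synchronous disk write when the kernel is assumed crash-proof:

KERNEL_VERIFIED_CRASH_PROOF = True   # guarantee assumed to be supplied by verification

kernel_log_buffer = []               # log records handed over to the trusted kernel

def fsync_to_disk(buffer):
    # Stand-in for an expensive synchronous disk write.
    buffer.clear()

def commit(txn_log):
    kernel_log_buffer.append(txn_log)        # hand the record to the kernel
    if not KERNEL_VERIFIED_CRASH_PROOF:
        fsync_to_disk(kernel_log_buffer)     # classic path: pay for synchronous I/O
    # Otherwise the kernel is trusted, by proof, to stay alive long enough to
    # flush the buffer itself; disk and power failures still need RAID and a UPS.
    return "committed"

print(commit(b"UPDATE accounts SET ..."))

The sketch only shows where the verified-kernel assumption would enter a commit path; the open research question is whether real DBMS lower layers can be restructured this way.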
J. Andronick et al. [25] have tried to solve a research problem related to the scalability of formal verification. They deal with large, complex systems built on seL4 for which security guarantees must be given, and they propose a framework for building such systems. The authors' vision is that not all the software in a large system necessarily contributes to the security property of interest. From this vision a methodology has been developed: (1) isolate the software parts that are not critical to the targeted property and prove that, for that specific property, nothing more needs to be proven about them; (2) formally verify that the remaining parts satisfy the targeted security property. The paper takes the Secure Access Controller (SAC) as a case study, with an access-control security policy as the property of interest. The authors explain that verifying such a property for the whole of a large system is far beyond the abilities of current verification methods, so the large code base is divided into trusted and untrusted code. They use the already verified seL4 kernel [26] to obtain the code isolation in this system. The question that arises is: if a kernel that has not been verified is used in a large, complex system, can we still obtain security for such a system and achieve the code separation?
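To make the two-step methodology concrete, the following minimal Python sketch (our illustration, not the framework of [25]; the component names and channel table are invented to resemble the SAC case study) mechanically checks step (1) on a toy configuration:

# Channels the kernel is configured to grant (names are our invention).
channels = {
    "web_ui":         {"sac_controller"},             # untrusted
    "router":         {"network_a"},                  # untrusted
    "sac_controller": {"network_a", "network_b"},     # trusted: must be verified
}
trusted = {"sac_controller"}
protected = {"network_b"}

def reachable(start, graph):
    # Transitively follow channels from a component.
    seen, todo = set(), [start]
    while todo:
        node = todo.pop()
        if node not in seen:
            seen.add(node)
            todo.extend(graph.get(node, ()))
    return seen

# Step (1): with the trusted components cut out of the graph, no untrusted
# component may reach the protected resource; then nothing further needs to
# be proven about the untrusted code for this particular property.
cut = {c: chs - trusted for c, chs in channels.items() if c not in trusted}
for comp in set(channels) - trusted:
    assert not (reachable(comp, cut) & protected), comp + " bypasses trusted code"

# Step (2), not shown: formally verify that sac_controller itself enforces
# the access-control policy between network_a and network_b.
print("untrusted components cannot bypass the trusted controller")

The design point is that the kernel-enforced channel configuration, not the untrusted code, carries the proof burden for step (1).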
In 2001, during the VFiasco project [27], M. Hohmuth et al. observed that huge, bug-ridden monolithic kernels were outside the scope of the verification technology available at that time. Although microkernels are the smarter choice for constructing verified secure systems, is it possible today to have verification technology that can accommodate the verification of a monolithic kernel? Verifying a monolithic kernel would be costly in time and effort, but it can be taken up as a current research challenge.
The formal verification community has well-defined security properties, high-level formal models and ways of architecting secure systems, but still no implementation-level proofs of security. Even the recently verified seL4 microkernel lacks such implementation-level security proofs. G. Klein et al. [39] believe this needs to change. A large system can obviously be secured by reducing its Trusted Computing Base (TCB), and microkernels provide a good foundation for that; yet even with a reduced TCB, nobody has proved security of a large system down to the implementation level. Type-1 hypervisors are assumed to play the role of a separation kernel, but there are no implementation-level proofs for these hypervisors either [39]. Suppose that in the next few years kernels become available with fully verified functional correctness down to the implementation level. What next? Such kernels do not automatically imply security [19].
After the completion of the seL4 verification, G. Klein et al. [39] noted that mandatory access control can be implemented on OSes such as Linux, but provable or assurable security for such OSes will not be attainable for at least the next few years. Even if all security properties of a kernel were proved in the next few years (which is unlikely ever to happen), the formal verification community would still not be done. The ultimate goal should be a proof that whole systems enforce their security goals; that would give much stronger assurance for much larger systems than is thought feasible today. It should be remembered that a proof says the code follows its specification; it does not say that the specification enforces a specific security property.
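This last point can be made concrete with a small Python sketch (our illustration, not an example from [39]; the secret and both check functions are invented):

SECRET = "hunter2"   # hypothetical secret guarded by the specification

def spec_check(guess):
    # The *specification*: report whether a guess matches the secret.
    # The spec itself leaks one bit per query; no refinement proof flags that.
    return guess == SECRET

def impl_check(guess):
    # An "implementation" that refines spec_check (it computes the same
    # function), so functional correctness holds.
    return len(guess) == len(SECRET) and all(
        a == b for a, b in zip(guess, SECRET))

# Functional correctness: the implementation agrees with the specification.
assert all(impl_check(g) == spec_check(g)
           for g in ["a", "hunter2", "hunter3", ""])
# Yet an attacker interacting with this *specified* behaviour can still
# brute-force SECRET; the correctness proof is silent about that.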
From projects like VFiasco and seL4 it can be observed that there is a gap between idealized security properties and the properties that hold for real kernels. G. Klein et al. [39] mention in their work that there is no good formal handle on timing and time-based covert channels for practical implementations, which makes them a research challenge. The authors do not expect formal proofs at the level available for storage channels to be obtained for timing channels in the near future.
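A toy Python script can illustrate the kind of channel such proofs miss (our illustration; the sleep call deliberately exaggerates the per-character cost, and the real channels analysed in [78] are far subtler):

import time

SECRET = b"k3rn3l"

def slow_compare(guess):
    # Early-exit comparison: runtime grows with the matching prefix length.
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
        time.sleep(0.001)           # exaggerate the per-character cost
    return len(guess) == len(SECRET)

def measure(guess):
    t0 = time.perf_counter()
    slow_compare(guess)
    return time.perf_counter() - t0

# A longer matching prefix takes measurably longer: a timing channel that
# proofs over an untimed model of the code cannot see.
for g in (b"xxxxxx", b"k3xxxx", b"k3rn3x"):
    print(g, round(measure(g), 4))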
K. Elphinstone [46] has described the challenges that arise when a verified microkernel such as seL4 is used for Digital Rights Management (DRM), and based on this he has raised new issues that research communities can address. Solutions to this problem can be sought in any direction, not necessarily from a formal verification point of view. DRM is the concept of specifying, enforcing and limiting the rights associated with digital content. Before identifying the issues, the DRM architecture is explained; it is shown in figure 9 [46]. First, the user possesses the device upon which he wishes to view content. Then the user must be authenticated by the content provider.

Figure 9. General DRM architecture [46]

The content must be securely transferred to the user's device, and encryption is used for this. The user's player should be able to decrypt the content when the user wishes to view it. Here the content-use policy can be violated by end-users in many ways, ranging from reverse engineering and modifying players to running the player on a modified operating system. This is the research challenge. One solution is to provide assurance that a trusted player on a trusted operating system is the only software that has access to the content; other solutions can be thought of in the context of hardware or networking.
The next challenge concerns the gap between the formal model of an OS and the hardware implementation. A. Cohn [51] has described that physical hardware is a realisation of a model, and correct hardware operation is beyond the scope of formal verification; for example, manufacturers cannot prove the absence of manufacturing defects. So even if a verified processor or kernel is available, the gap between the formal model of that kernel or processor and its implementation will always exist [51]. The research challenge is how to minimize this gap.

Tuch H. et al. [52] examined UCLA, KIT and VFiasco closely. They describe that, in kernel verification, challenges related to performance, size and the level of abstraction are still open. They also mention that features like direct hardware access, pointer arithmetic and embedded assembly code have not been the subjects of mainstream verification research, so research scope is available in these areas.

KIT was a multitasking OS, which directed the present survey toward verification challenges in concurrent OSes. S. Rajamani et al. [53] have discussed the challenges of an OS that supports concurrent execution of programs. Suppose an OS that supports concurrent execution of programs has a normal page size and supports third-party plug-ins. In this type of OS there are many research challenges: (1) Is it possible to guarantee isolation in this OS when third-party plug-ins can be ill-behaved [53]? (2) Given an isolation guarantee, is it possible to maintain properties like safety and permissiveness? (3) Is it possible to achieve isolation without problems like deadlocks, livelocks [53], memory fragmentation and condition-variable handling? (4) Various operating systems provide various types of memory protection; how can these variations be handled if memory protection is used for isolation?
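As a toy illustration of challenge (1), the following Python sketch (our invention; the Isolator system of [53] targets exactly such protocol violations, though not in this form) shows an ill-behaved plug-in defeating lock-based isolation:

import threading

lock = threading.Lock()
shared = {"balance": 100}

def well_behaved():
    with lock:                       # follows the isolation protocol
        shared["balance"] -= 10

def ill_behaved_plugin():
    shared["balance"] = -999         # writes without taking the lock

threads = [threading.Thread(target=well_behaved),
           threading.Thread(target=ill_behaved_plugin)]
for t in threads: t.start()
for t in threads: t.join()

# Possible final values include -999, -1009, or 90 (the plug-in's write lost):
# a lock alone cannot *guarantee* isolation against code that ignores the
# protocol, which is why enforcement mechanisms such as memory protection
# are considered.
print(shared["balance"])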
The challenges are summarized in Table 2 below.

Table 2. Challenges in formal verification

Challenge: Functional correctness
Explanation: Although functional correctness is a strong property, it cannot prove that a system is secure. Verifying further security properties, together with or without functional correctness, is a challenge.
Source: seL4 project
Suggestion: R. Akella and Bruce M. [72] have verified security properties such as information flow and non-deducibility for a cyber-physical system. They describe that information-flow security cannot be checked using functional correctness, so they provide a separate verification framework based on process algebra. The same methodology can be applied to OS verification.

Challenge: Kernel as a trustworthy foundation
Explanation: Three uses of the trustworthy foundation are secure web browsing, increasing the usefulness of the TPM, and cost-reduced databases. The challenge is how to extend the use of a trustworthy foundation to further areas.
Source: seL4 project
Suggestion: G. Heiser et al. [73] have provided the prototype system RapiLog, which is based on the verified seL4 hypervisor. The system reduces system complexity by leveraging verification instead of special hardware. Similarly, more ways can be found in which a kernel can serve as a trustworthy foundation.

Challenge: Security in large systems
Explanation: Not all software in a large system contributes to the security of the system, so to verify a large system it should be divided into trusted and untrusted code. This division is itself a challenge.
Source: VFiasco, seL4
Suggestion: Stefan et al. [79] have verified a cyber-physical system. Their work provides a hint toward verifying larger and more complex systems.

Challenge: Implementation-level proof
Explanation: There are no implementation-level security proofs so far for any kernel verification. This is a challenge.
Source: Recent projects like seL4, VFiasco
Suggestion: Nana S. et al. [74] have provided a framework for hardware/software co-verification, with an IEEE 802.11ac WLAN system as the case study. Their work might provide a hint toward solving this research challenge.

Challenge: Time-based covert channel
Explanation: Creating a formal handle for time-based covert channels is a challenge.
Source: Recent projects like seL4, VFiasco
Suggestion: The problem with covert channels is that they are difficult to verify case by case, so a unified model is needed first, after which verification can be done. P. L. Shrestha et al. [78] have provided such a unified model, which can be used to verify time-based covert channels.

Challenge: Monolithic kernel
Explanation: Verification of huge, bug-affected monolithic kernels is a challenge.
Source: VFiasco project
Suggestion: M. Lange et al. [77] provide a hint toward solving this research challenge. Their work focuses on L4Android, which has a monolithic architecture. Although the project is not entirely about monolithic kernel verification, it can help in solving this research problem.

Challenge: Unexplored areas
Explanation: Pointer arithmetic and embedded assembly code have not been the subjects of mainstream verification research so far. These fields can be explored further.
Source: UCLA, PSOS, KIT, VFiasco, EROS
Suggestion: The work of Thomas S. et al. [75] gives a hint in the direction of pointer arithmetic, and J. Kobashi et al. [76] provide directions for verifying embedded assembly code. Their work can be explored further to solve these challenges.

Challenge: Concurrency
Explanation: (1) Achieve and verify isolation when third-party plug-ins are ill-behaved. (2) With isolation, maintain properties like safety and permissiveness. (3) Avoid problems like deadlocks, livelocks, memory fragmentation and condition-variable handling. (4) Handle variations in memory protection if it is used to achieve isolation.
Source: KIT is a multitasking kernel; it provides a base for exploring concurrency-related areas.
Suggestion: Solutions to some of these problems have been discussed by S. Rajamani et al. [53], who address them for concurrent programs. These solutions can be extended to large, concurrent operating system kernels, where the isolation properties can then be verified.
4. CONCLUSIONS
In the present paper, an overview of kernel verification projects has been provided; specifically, the UCLA, KIT, PSOS, VFiasco, EROS and seL4 projects have been surveyed. For each project, the survey describes the highest-level specification, the lowest-level specification, the model checker and the approach to verification. During the survey it has been observed that proof automation, memory models, proof libraries and program logics have developed significantly, which is why OS verification is not as hard as it was 35 years ago.

The present article also describes challenges in the kernel verification area. They relate to monolithic kernel verification, functional correctness, implementation-level proofs, time-based covert channels, direct hardware access, pointer arithmetic, concurrency, and the use of a kernel as a trustworthy foundation. Future efforts toward solving these challenges will make verification faster and more precise.
REFERENCES
[1] G. Popek and D. Farber, (1978) “A model for verification of data security in operating systems”, Communications of the ACM, vol. 21(9), pp. 737–749.
[2] Landwehr, C. E., Heitmeyer, C. L., and McLean, J., (1984) “A security model for military message systems”, ACM Trans. Comput. Syst., 2, 3, 198–222.
[3] E. Kang and D. Jackson, (2010) Dependability arguments with trusted bases. In RE, pages 262-271.
IEEE Computer Society.
[4] Walker, B. J., Kemmere, R. A., and Popek, G. J., (1979) “Specification and Verification of the UCLA
Unix Security Kernel”. In Proceedings of the 7th ACM Symposium on Operating Systems Principles
(SOSP), pp. 64–65.
[5] T. In der Rieden, (2009) “Verified Linking for Modular Kernel Verification”. PhD thesis, Saarland
University, Computer Science Department.
[6] Popek, G.J., (1974) Protection Structures. Computer, 7(6), 22-33.
[7] C. C.Morgan, (1990) Programming from specifications. Prentice-Hall.
[8] G. Klein, (2009) “Operating system verification — an overview”. Sadhana, 34(1):27–69.
[9] Bevier,W.R., (1989) “Kit: a study in operating system verification” IEEE Trans. Softw. Eng., 15(11),
1382–1396
[10] R. S. Boyer and J. S. Moore, (1988) A Computational Logic Handbook. New York: Academic.
[11] R. Milner, (1971) “An algebraic definition of simulation between programs”, Stanford AI Project, Tech. Rep. AIM-142.
[12] G. Heiser, K. Elphinstone, I. Kuz, G. Klein, and S. M. Petters, (2007) “Towards trustworthy
computing systems: Taking microkernels to the next level” ACM Operating Systems Review, 41(3).
[13] J. S. Shapiro, M. S. Doerrie, E. Northup, S. Sridhar, and M. Miller (2004) “Towards a Verified,
General-Purpose Operating System Kernel” In Proceedings of the 1st NICTA Workshop on Operating
System Verification, pages 1–18.
[14] L. Robinson and K.N. Levitt, (1977) “Proof techniques for hierarchically structured programs”,
Communications of the ACM, 20(4):271–283.
[15] P.G. Neumann, R.S. Boyer, R.J. Feiertag, K.N. Levitt, and L. Robinson, (1980) “A Provably Secure
Operating System: The system, its applications, and proofs” Technical report, Computer Science
Laboratory, SRI International, Menlo Park, California, 2nd edition, Report CSL-116.
[16] P.G. Neumann and R.J. Feiertag, (2003) “PSOS revisited”, In Proceedings of the 19th Annual
Computer Security Applications Conference, Classic Papers section, pages 208–216,
[17] Jonathan S. Shapiro, (2003) “The practical application of a decidable access model” Technical
Report SRL, November, Baltimore, MD 21218.
[18] R. N. Watson, P. G. Neumann, J. Woodruff, J. Anderson, D. Chisnall, B. Davis, B. Laurie, S. W.
Moore, S. J. Murdoch, and M. Roe, (2014) “Capability Hardware Enhanced RISC Instructions:
CHERI Instruction-set architecture”, Technical Report UCAM-CL-TR-850, University of Cambridge,
Computer Laboratory, Apr.
[19] G. Klein (2009) “Correct OS kernel? proof? done!”, USENIX, 34(6):28–34.
[20] G. Heiser, L. Ryzhyk, M. von Tessin, and A. Budzynowski, (2011) “What if you could actually Trust
your kernel?” In 13th HotOS, Napa, CA, USA,
[21] Tang, S., Mai, H., And King, S. T., (2010) “Trust and protection in the Illinois Browser Operating
System”, In 9th OSDI, Vancouver, Canada, pp. 1–15.
[22] Barth, A., Jackson, C., Reis, C., And The Google Chrome team, (2008) “The security architecture of
the Chromium browser”. Technical report, Stanford Security Laboratory.
[23] Trusted Computing Group. Trusted Platform Module.
http://www.trustedcomputinggroup.org/developers/trustedplatform_module.
[24] McCune, J. M., Parno, B., Perrig, A., Reiter, M. K., AND Isozaki, H. (2008) “Flicker: An execution
infrastructure for TCB minimization”, In 3rd EuroSys Conf.
[25] June Andronick, David Greenaway, and Kevin Elphinstone, (2010) “Towards proving security in the
presence of large untrusted components”, In Gerwin Klein, Ralf Huuck, and Bastian Schlich, editors,
Proceedings of the 5th Workshop on Systems Software Verification, Vancouver, Canada, USENIX.
[26] seL4Website. http://ertos.nicta.com.au/research/sel4/, Jun 2010.
[27] M. Hohmuth, H. Tews, and S. G. Stephens, (2002) “Applying source-code verification to a
microkernel — the VFiasco project” (extended abstract) In Proceedings of the Tenth ACM SIGOPS
European Workshop, September.
[28] M. Hohmuth and H. Härtig, (2001) “Pragmatic nonblocking synchronization for real-time systems” In
USENIX Annual Technical Conference, Boston, MA.
[29] L. C. Paulson, (1994) Isabelle: A Generic Theorem Prover. Number 828 in LNCS. Springer, Berlin.
[30] M. Huisman and B. Jacobs, (2000) “Java program verification via a Hoare logic with abrupt
termination”, In T.Maibaum, editor, Fundamental Approaches to Software Engineering, number 1783
in LNCS.
[31] Jan Rothe, Hendrik Tews, and Bart Jacobs. (2001) “The Coalgebraic Class Specification Language
CCSL”, J. Univ. Comp. Sc., 7(2):175–193.
[32] Glesner, S., Leitner, J., Blech, J.O. (2006) “Coinductive verification of program optimizations using
similarity relations”, In Knoop, J., Necula, G. C., Zimmermann, W. (Eds.): Proc. of 5th Int. Wksh. on
Compiler Optimization Meets Compiler Verification, COCV ’06 Vienna,. Electron. Notes in Theor.
Comput. Sci., Vol. 176(3), Elsevier pp. 61–77.
[33] Tews, Hendrik, (2004) “Verifying Duff’s device”.
[34] Gong, Sheng Wen, (2013) “Formal Model of Classic Operating System Kernel”, Advanced Materials
Research, pp. 1020–1023.
[35] William R. Bevier, Richard Cohen, and Jeff Turner, (2013) “A specification for the Synergy file
system”, Technical Report 120. Computational Logic Inc.
[36] Hermann Hartig, Michael Hohmuth, Norman Feske, Christian Helmuth, Adam Lackorzynski, Frank
Mehnert, and Michael Peter, (2005) “The Nizza secure-system architecture”, Proc. of the 1st
International Conference on Collaborative Computing: Networking, Applications and Worksharing.
[37] J. Strother Moore, (2002) “A grand challenge proposal for formal methods: A verified stack”, Proc. of
the 10th Anniversary Colloquium of UNU/IIST. pp. 161-172.
[38] Leinenbach, D., (2008) “Compiler Verification in the Context of Pervasive System Verification”, PhD thesis, Saarland University, Saarbrücken.
[39] G. Klein, T. Murray, P. Gammie, T. Sewell, and S. Winwood, (2011) “Provable security: How
feasible is it?” In 13th HotOS, Napa, CA, USA, pp. 28–32.
[40] Matthias Daum, (2003) “Development of a Semantics Compiler for C++”, Diploma thesis, TU Dresden.
[41] D. Elkaduwe, G. Klein, and K. Elphinstone, (2008) “Verified protection model of the seL4
microkernel”, In J. Woodcock and N. Shankar, editors, VSTTE 2008 — Verified Softw.: Theories,
Tools & Experiments, volume 5295 of LNCS, Springer , pp. 99–114.
[42] R. J. Lipton and L. Snyder, (1977) “A linear time algorithm for deciding subject security”, Journal of the ACM, 24(3):455–464.
[43] G. Klein, K. Elphinstone, G. Heiser, J. Andronick, D. Cock, P. Derrin, D. Elkaduwe, K. Engelhardt,
R. Kolanski, M. Norrish, T. Sewell, H. Tuch, and S. Winwood (2009) “seL4: Formal verification of
an OS kernel” In Proceedings of the 22nd ACM Symposium on Operating Systems Principles.
[44] Gerwin Klein, June Andronick, Kevin Elphinstone, Gernot Heiser, David Cock, Philip Derrin,
Dhammika Elkaduwe, Kai Engelhardt, Rafal Kolanski, Michael Norrish, Thomas Sewell, Harvey
Tuch, and Simon Winwood, (2010) “seL4: formal verification of an operating-system kernel”, Communications of the ACM, 53(6),
pp.107–115.
[45] G. Klein, P. Derrin, and K. Elphinstone, (2009) “Experience report: seL4 — formally verifying a
high-performance microkernel”. In 14th ICFP.
[46] Kevin Elphinstone, (2004) “Future directions in the evolution of the L4 microkernel”, In Proceedings
of the NICTA workshop on OS verification.
[47] Shapiro J S, Smith JM, Farber D J., (1999) “EROS: a fast capability system”, In: SOSP 99:
Proceedings of the seventeenth ACM symposium on Operating systems principles, New York, NY,
USA: ACM pp. 170–185.
[48] Shapiro J S, Weber S, (2000) “Verifying the EROS confinement mechanism”, In: Proceedings of the
IEEE Symposium on Security and Privacy, IEEE Computer Society, Washington, DC, USA pp. 166–
176.
[49] Andrew Boyton, June Andronick, Callum Bannister, Matthew Fernandez, Xin Gao, David
Greenaway, Gerwin Klein, Corey Lewis, and Thomas Sewell, (2013) “Formally verified system
initialisation”, In Lindsay Groves and Jing Sun, editors, Proceedings of the 15th International
Conference on Formal Engineering Methods, Queenstown, New Zealand, Springer pp. 70–85.
[50] S. Maffeis, J. C. Mitchell, and A. Taly, (2010) “Object Capabilities and Isolation of Untrusted Web
Applications”, In Proceedings of the IEEE Symposium on Security and Privacy, pp. 125–140.
[51] A. Cohn, (1989) “The notion of proof in hardware verification”, Journal of Automated Reasoning, 5(2),
pp. 127-139.
[52] Tuch H, Klein G, Heiser G, (2005) “OS verification—now!”, In: Proceedings of the 10th Workshop on Hot Topics in Operating Systems, USENIX, Santa Fe, NM, USA, pp. 7–12.
[53] Sriram K. Rajamani, G. Ramalingam, Venkatesh Prasad Ranganath, and Kapil Vaswani, (2009)
“Isolator: dynamically ensuring isolation in concurrent programs”, In ASPLOS 09: Architectural
Support for Programming Languages and Operating Systems, pp. 181–192.
[54] Shapiro J S, Doerrie M S, Northup E, Sridhar S, Miller M (2004) “Towards a verified, general-
purpose operating system kernel” In: G Klein, ed., Proceedings of the NICTA Formal Methods
Workshop on Operating Systems Verification, Technical Report 0401005T-1, NICTA, Sydney,
Australia
[55] McMillan, Kenneth L. (1993) “Symbolic model checking”, Springer US.
[56] Clarke, Edmund M., Orna Grumberg, and Doron Peled, (1999) “Model checking”, MIT press.
[57] Necula, George C. (2002) “Proof-carrying code design and implementation”, Springer, Netherlands.
[58] Appel, Andrew W., (2001) “Foundational proof-carrying code”, In Proceedings of the 16th Annual IEEE Symposium on Logic in Computer Science, IEEE.
[59] Holzmann, Gerard J., (2002) “Static source code checking for user-defined properties”, Proc. IDPT, Vol. 2.
[60] Leino, K. Rustan M., (2010) “Dafny: An automatic program verifier for functional correctness”, Logic for Programming, Artificial Intelligence, and Reasoning, Springer Berlin Heidelberg.
[61] Budd, Timothy A., et al., (1980) "Theoretical and empirical studies on using program mutation to test
the functional correctness of programs." Proceedings of the 7th ACM SIGPLAN SIGACT symposium
on Principles of programming languages. ACM.
[62] J. S. Shapiro. The EROS Web Site. http://www.eros-os.org. (Link visited March 2015)
[63] Charles Landau The CapROS Web Site. http://www.capros.org (Link visited March 2015)
[64] Shapiro J S, Coyotos web site, http://www.coyotos.org/ (Link visited March 2015)
[65] Shapiro, J. S., Northup, E., Doerrie, M. S., Sridhar, S., Walfield, N. H., & Brinkmann, M. Coyotos
(2007) “microkernel specification”, The EROS Group, LLC, 0.5 edition.
[66] Sharma, Rahul, Aditya V. Nori, and Alex Aiken, (2014) “Bias-variance tradeoffs in program
analysis”, ACM SIGPLAN Notices, Vol. 49(1). ACM.
[67] Sanán, David, Andrew Butterfield, and Mike Hinchey, (2014) “Separation Kernel Verification: The
Xtratum Case Study” Verified Software: Theories, Tools and Experiments. Springer International
Publishing, pp. 133-149.
[68] Lange, Matthias, et al., (2011) "L4Android: a generic operating system framework for secure
smartphones." Proceedings of the 1st ACM workshop on Security and privacy in smartphones and
mobile devices. ACM.
[69] Shi, Jianqi, et al., (2012) "ORIENTAIS: Formal verified OSEK/VDX real-time operating system."
Engineering of Complex Computer Systems (ICECCS), 17th International Conference on. IEEE.
[70] Qian, Zhenjiang, Hao Huang, and Fangmin Song, (2013) "VTOS: Research on Methodology of
“Light-Weight” Formal Design and Verification for Microkernel OS." Information and
Communications Security. Springer International Publishing, pp. 17-32.
[71] Woodruff, Jonathan D., (2014) “ CHERI: A RISC capability machine for practical memory safety”
University of Cambridge, Computer Laboratory, Technical Report, UCAM-CL-TR-858.
[72] Akella, Ravi, and Bruce M. McMillin, (2013) “Modeling and verification of security properties for
critical infrastructure protection”, Proceedings of the Eighth Annual Cyber Security and Information
Intelligence Research Workshop. ACM.
[73] Heiser, Gernot, et al., (2013) "RapiLog: reducing system complexity through
verification." Proceedings of the 8th ACM European Conference on Computer Systems, ACM.
[74] Sutisna Nana, et al., (2014) “Live demonstration: Hardware-software co-verification for very large
scale SoC using synopsys HAPS platform, Circuits and Systems” (APCCAS), IEEE Asia Pacific
Conference, IEEE.
[75] Ströder, Thomas, et al., (2014) “Proving termination and memory safety for programs with pointer arithmetic”, Automated Reasoning, Springer International Publishing, pp. 208–223.
[76] Kobashi, Jumpei, Satoshi Yamane, and Atsushi Takeshita, (2014) “Development of SMT-Based
Bounded Model Checker for embedded assembly program” Consumer Electronics (GCCE), IEEE 3rd
Global Conference.
[77] Lange, Matthias, et al. (2011) "L4Android: a generic operating system framework for secure
smartphones." Proceedings of the 1st ACM workshop on Security and privacy in smartphones and
mobile devices. ACM.
[78] Shrestha, Pradhumna Lal, Michael Hempel, and Hamid Sharif, (2014) “Towards a unified model for the analysis of timing-based covert channels”, Communications (ICC), 2014 IEEE International Conference, IEEE.
[79] Mitsch Stefan, Sarah M. Loos, and André Platzer, (2012) "Towards formal verification of freeway
traffic control." Cyber-Physical Systems (ICCPS), 2012 IEEE/ACM Third International Conference.
Authors
Kushal Anjaria is a PhD scholar at the Department of Computer Science and Engineering of the Defence Institute of Advanced Technology, Pune, India. He received his M.Tech in computer science and engineering from the Manipal Institute of Technology, Manipal, in 2012. His work currently focuses on operating system security and formal verification.
Arun Mishra is an assistant professor at the Department of Computer Science and Engineering of the Defence Institute of Advanced Technology, Pune, India. He received his PhD in computer science from the Motilal Nehru National Institute of Technology, Allahabad, India. His research covers automated systems, trusted computing, secure software engineering, formal modelling and component-based software engineering. More information is available at:
http://www.diat.ac.in/index.php?option=com_content&view=article&id=206&Itemid=359&Itemid=267