This document proposes a new notion of implementation that allows one specification to implement another, even when the specifications are parameterized or loose (having multiple models). It defines implementation based on a new concept of the simulation of a theory by an algebra. It proves that implementations can be composed both vertically (successive steps) and horizontally (separate implementations of a parameterized specification and its actual parameter).
IMPLEMENTATION OF PARAMETERISED SPECIFICATIONS
-- Extended Abstract* --
Donald Sannella
Department of Computer Science
University of Edinburgh
Martin Wirsing
Institut für Informatik
Technische Universität München
Abstract
A new notion is given for the implementation of one specification by another.
Unlike most previous notions, this generalises to handle parameterised specifications
as well as loose specifications (having an assortment of non-isomorphic models).
Examples are given to illustrate the notion. The definition of implementation is
based on a new notion of the simulation of a theory by an algebra. For the bulk of
the paper we employ a variant of the Clear specification language [BG 77] in which
the notion of a data constraint is replaced by the weaker notion of a hierarchy
constraint. All results hold for Clear with data constraints as well, but only under
more restrictive conditions.
We prove that implementations compose vertically (two successive implementation
steps compose to give one large step) and that they compose horizontally under
application of (well-behaved) parameterised specifications (separate implementations
of the parameterised specification and the actual parameter compose to give an
implementation of the application).
1. Introduction
Algebraic specifications can be viewed as abstract programs. Some specifications
are so completely abstract that they give no hint of a method for finding an answer.
For example, the function inv:matrix->matrix for inverting an nxn matrix can be
specified as follows:
inv(A) x A = I
A x inv(A) = I
(provided that matrix multiplication and the identity nxn matrix have already been
specified). Other specifications are so concrete that they resemble programs. For
example:
reverse(nil) = nil
reverse(cons(a,l)) = append(reverse(1),cons(a,nil))
(this specification of the reverse function on lists amounts to an executable program
in the HOPE functional programming language [BMS 80]).
It is usually easiest to specify a problem at a relatively abstract level. We can
then work gradually and systematically toward a low-level 'program' which satisfies
the specification. This will normally involve the introduction of auxiliary
functions, particular data representations and so on. This approach to program
development is related to the well-known programming discipline of stepwise
refinement advocated by Wirth [Wit 71] and Dijkstra [Dij 72].
*The full version of this paper is available as Report CSR-I03-82, Department of
Computer Science, University of Edinburgh.
A formalisation of this programming methodology depends on some precise notion of
the implementation of a specification by a lower-level specification. Previous
notions have been given for the implementation of non-parameterised ([GTW 78],
[Nou 79], [Hup 80], [EKP 80], [Ehr 82]) and parameterised ([Gan 81], [Hup 81])**
specifications, but none of these approaches deals fully with 'structured' algebraic
specifications (as in Clear [BG 77] or CIP-L [Bau 81]) which may be constructed in a
hierarchical fashion and may be loose (with an assortment of non-isomorphic models).
We present a definition of implementation which agrees with our intuitive notions
built upon programming experience and which handles such loose hierarchical
specifications, based on a new (and seemingly fundamental) concept of the simulation
of a theory by an algebra. We show how this definition extends to give a definition
of the implementation of parameterised specifications. An example of an
implementation is given and several other examples are sketched.
We work within the framework of the Clear specification language [BG 77] which
allows large specifications to be built from small easy-to-understand bits. For most
of the paper we employ a variant of Clear in which the notion of a data constraint is
replaced by the weaker notion of a hierarchy constraint. The result is still a
viable specification language, although specifications tend to be somewhat longer
than in ordinary Clear. We later show that all results hold for 'ordinary' Clear
(with data constraints), but only under more restrictive conditions.
The 'putting-together' theme of Clear and the ideas incorporated in CAT [GB 80] (a
proposed system for systematic program development using Clear) lead us to wonder if
implementations can be put together as well. We prove that if P is implemented by P'
(where P and P' are 'well-behaved' parameterised specifications) and A is implemented
by A', then P(A) is implemented by P'(A').
We prove that implementations compose in another dimension as well. If a high-
level specification A is implemented by a lower-level specification B which is in
turn implemented by a still lower-level specification C (and an extra compatibility
condition is satisfied), then A is implemented by C. These two results allow large
specifications to be refined in a gradual and modular fashion, a little bit at a
time.
2. Preliminaries -- Clear and data/hierarchy constraints
This section is devoted to a very brief review of the notions underlying Clear
along with a discussion of data and hierarchy constraints. See [BG 77] for an
informal description of Clear, [San 81] and [BG 80] for its formal semantics, and
[WB 81] for more about hierarchy constraints. For the usual notions of signature Σ,
signature morphism (inclusion) σ=<f,g> (f maps sorts, g maps operators), Σ-algebra
A=<A,α> (A_s is the carrier for sort s, α(ω) is the function associated with operator
ω), homomorphism, Σ-equation, and satisfaction of a set of Σ-equations by a Σ-algebra
see [BG 80]. The following less conventional definitions are taken (with minor
changes) from the same source.
Def: If σ=<f,g> is a signature morphism σ:Σ-->Σ' and A'=<A',α'> is a Σ'-algebra,
then the Σ-restriction of A' (along σ), written A'|σ, is the Σ-algebra <A,α> where
A_s = A'_f(s) and α(ω) = α'(g(ω)). Normally σ is obvious from context, in which case
the notation A'|Σ may be used.
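For an operational view, the Σ-restriction can be sketched for finite algebras as follows (our illustration, with hypothetical names): an algebra is a pair of dictionaries, and the reduct along σ=<f,g> simply relabels and forgets.

    # Sigma-restriction (reduct) of a finite algebra along sigma = <f,g>.
    # carriers: sort name -> set of values; ops: operator name -> function.
    def restrict_along(f, g, carriers, ops):
        # A_s = A'_f(s): each Sigma-sort borrows the carrier of its image.
        new_carriers = {s: carriers[f[s]] for s in f}
        # alpha(w) = alpha'(g(w)): each Sigma-operator borrows its image's function.
        new_ops = {w: ops[g[w]] for w in g}
        return new_carriers, new_ops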
Def: A simple Σ-theory presentation is a pair <Σ,E> where Σ is a signature and E
is a set of Σ-equations. This is simple because no constraints (see below) are
included.
Def: A Σ-algebra A satisfies a simple theory presentation <Σ,E> if A satisfies E.
Then A is called a model of <Σ,E>. A theory presentation T specifies a set of
algebras, namely the set of its models denoted Models(T). A theory is called
satisfiable if it has at least one model.
**Also [EK 82] which we discovered while writing the final version.
Def: If E is a set of Σ-equations, let E* be the set of all Σ-algebras which
satisfy E. If M is a set of Σ-algebras, let M* be the set of all Σ-equations which
are satisfied by each algebra in M. The closure of a set E of Σ-equations is the set
E**, written Ē. E is closed if E = Ē.
Def: A simple Σ-theory T is a simple theory presentation <Σ,E> where E is closed.
The simple Σ-theory presented by the presentation <Σ,E> is <Σ,Ē>.
Def: A simple theory morphism (inclusion) σ : <Σ,E> --> <Σ',E'>, where <Σ,E> and
<Σ',E'> are simple theories, is a signature morphism (inclusion) σ:Σ-->Σ' such that
σ(e) ∈ E' for each e ∈ E.
Note that any algebra satisfying a simple theory (presentation) is a model of that
simple theory. This is in contrast to the 'initial algebra approach' of ADJ [GTW 78]
in which only the initial algebra of a theory is accepted as a model. Clear permits
us to write loose specifications such as the following:
const ApproxSqrt = enrich Nat by
  opns sqrt : nat -> nat
  eqns x - (sqrt(x))² ≤ x/2 + 1 = true  enden
This specifies an approximate square-root function on the natural numbers. Any
function with at least the specified accuracy will do (for example, sqrt(100) may be
7, 8, 9 or 10). Under the initial algebra approach such a specification yields a
single model with extra values of sort nat.
Even in Clear, we often want to restrict the class of models. For instance, if no
restriction is present in the above example then trivial models (where all natural
numbers are equal to 0) and models with extra values of sort nat (other than succⁿ(0)
for n≥0) are allowed. This facility is provided by Clear's data operation, which may
be applied when enriching a theory by some new sorts, operators and equations. Each
application of the data operation contributes a data constraint to the theory which
results from the enrichment. Unfortunately, implementations (section 3) do not seem
to have nice properties in the presence of data constraints. Accordingly we use for
the bulk of this paper a variant of Clear in which data constraints are replaced by
hierarchy constraints, contributed by the 'data' operation (see [BDPPW 79] and
[WB 81]). Hierarchy constraints are weaker than data constraints so specifications
tend to be somewhat longer than in ordinary Clear (as in the terminal algebra
approach, it is sometimes necessary to add extra operators to avoid trivial models).
We show later that all results hold for Clear with data constraints, but only under
more restrictive conditions. We now give some definitions concerning data and
hierarchy constraints; note that in most respects the two kinds of constraints are
identical.
Def: A Σ-data (hierarchy) constraint c is a pair <i,σ> where i:T c--> T' is a simple
theory inclusion and σ:signature(T')-->Σ is a signature morphism.
A data (hierarchy) constraint is a description of an enrichment (the theory
inclusion goes from the theory to be enriched to the enriched theory) together with a
signature morphism 'translating' the constraint to the signature Σ.
A signature morphism from Σ to another signature Σ' can be applied to a
Σ-constraint, translating it to a Σ'-constraint, just as it can be applied to a
Σ-equation to give a Σ'-equation.
Def: If σ':Σ-->Σ' is a signature morphism and <i,σ> is a Σ-data (hierarchy)
constraint, then σ' applied to <i,σ> gives the Σ'-data (hierarchy) constraint
<i,σ.σ'>.
A data (hierarchy) constraint imposes a restriction on a set of Σ-algebras, just
as an equation does. The only difference between a data constraint and a hierarchy
constraint is in this restriction; compare the "no confusion" and "no crime"
conditions in the following definitions.
Def: A Σ-algebra A satisfies a Σ-data constraint <i:T c--> T', σ:sig(T')-->Σ> if
(letting A_target = A|sig(T') and A_source = A|sig(T)) A_target is a model of T' and:
- "No confusion": A_target does not satisfy any sig(T')-equation e with
variables only in sorts of T for any injective assignment of variables to
A_source values unless e is in eqns(T') ∪ A_source*.
- "No junk": Every element in A_target is the value of a T'-term which has
variables only in sorts of T, for some assignment of A_source values.
Without loss of generality we assume that every theory contains the theory Bool
(with sort bool and constants true and false) as a primitive subtheory.
Def: A Σ-algebra A satisfies a Σ-hierarchy constraint <i:T c--> T', σ:sig(T')-->Σ> if
(with A_target and A_source as above) A_target is a model of T' and:
- "No crime": A does not satisfy true = false.
- "No junk": As above.
Since data (hierarchy) constraints behave just like equations, they can be added
to the equation set in a simple theory presentation to give a data (hierarchical)
theory presentation.
Def: A data (hierarchical) Σ-theory presentation is a pair <Σ,EC> where Σ is a
signature and EC is a set of Σ-equations and Σ-data (hierarchy) constraints.
The notions of data (hierarchical) theory, satisfaction (of a data or hierarchical
theory), closure and data (hierarchical) theory morphism follow as in the 'simple'
case. The denotation of a (hierarchical) Clear specification is a data
(hierarchical) theory <Σ,EC>, specifying all Σ-algebras which satisfy the equations
and data (hierarchy) constraints in EC. For the remainder of the paper (except where
noted at the end of section 5) all discussion will concern only hierarchical Clear.
We will use terms like 'theory' in place of longer terms like 'hierarchical theory'.
Def: A sort or operator of a theory is called constrained if it is in
σ(sig(T')-sig(T)) for some constraint <i:T c--> T', σ:sig(T')-->Σ> in that theory.
Lack of space permits only a single brief example to illustrate data and hierarchy
constraints. Consider the following specification:
const Triv = enrich Bool by
  sorts element  enden
const T = enrich Triv by
  data sorts newelem
  opns f : element -> newelem  enden
T includes a data constraint <Triv c--> T, id>. Given a sig(T)-algebra, we can check if
it satisfies this constraint. For example:
A_element = {0,1,2}   A_newelem = {a,b,c}   f(0)=a   f(1)=b   f(2)=a
(with the usual interpretation of Bool). This fails to satisfy the "no confusion"
condition (consider the equation f(x)=f(y) under the injective assignment [x|->0,
y|->2]). It also violates the "no junk" condition (the element c is not the value of
any term). But if the function f is altered so that f(2)=c then the constraint is
satisfied. In general, any algebra satisfying this data constraint will have both
carriers of the same cardinality, with f 1-1 and onto.
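The two conditions can be checked mechanically on this finite example; the following Python fragment (our illustration, not part of the paper) does so for the algebra above.

    # Checking <Triv c--> T, id> on the algebra above (illustrative).
    A_element = {0, 1, 2}
    A_newelem = {"a", "b", "c"}
    f = {0: "a", 1: "b", 2: "a"}

    # "No junk": every newelem value must be the value of some term f(x).
    reachable = {f[x] for x in A_element}
    print(A_newelem - reachable)          # {'c'} -- "no junk" fails

    # "No confusion": f(x)=f(y) holds under the injective assignment
    # x|->0, y|->2, although it is not a consequence of T's equations.
    print(f[0] == f[2])                   # True -- "no confusion" fails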
Changing data above to 'data' changes the data constraint to a hierarchy
constraint. The following algebra is then a model of T, although it does not satisfy
the "no confusion" condition:
A_element = {0,1,2}   A_newelem = {a}   f(0) = f(1) = f(2) = a
(again with the usual interpretation of Bool). It is necessary to add some new
operators (e.g. ==:element,element->bool and ==:newelem,newelem->bool) and equations
(e.g. f(x)==f(y) = x==y) to retain the original class of models.
The data constraints described here are a special case of those discussed in
[BG 80], where the theory inclusion is replaced by an arbitrary theory morphism;
essentially the same concept is described by Reichel [Rei 80] (cf. [KR 71]). General
data constraints never actually arise in Clear. The definition of data constraint
satisfaction given above is an attempt to capture, in this special case, the
definition of [BG 80] using a different approach.
A consequence of the inclusion of data or hierarchy constraints in Clear theories
is that no complete proof system can exist for Clear (see [MS 82]). This means that
the model-theoretic closure of a set of equations and constraints (as defined above)
is not always the same as its proof-theoretic closure.
For later results we need a generalisation of Guttag's notion of sufficient
completeness [GH 78] and of the classical notion of conservativeness from logic:
Def: A theory T is sufficiently complete with respect to a set of operators Ω,
sorts S, a subset Ω' of Ω, and variables of sorts X (where S,X ⊆ sorts(T), Ω ⊆ opns(T))
if for every term t of an S sort containing operators of Ω and variables of X sorts,
there exists a term t' with variables of X sorts and operators of Ω' such that
T ⊢ t = t'.
Def: A theory T is conservative with respect to a theory T' ⊆ T if for all
equations e containing operators only of T', T ⊢ e implies T' ⊢ e.
Sufficient completeness means that T does not contain any new term of an old sort
which is not provably equal to an old term (where 'new' and 'old' depend on Ω, S, Ω'
and X). Conservativeness means that old terms (from T') are not newly identified in
T. Instances of these general notions guarantee that models of a theory possess
a convenient hierarchical structure.
3. A notion of implementation
A formal approach to stepwise refinement of specifications must begin with some
notion of the implementation of a specification by another (lower level)
specification. Armed with a precise definition of this notion, we can prove the
correctness of refinement steps, providing a basis for a methodology for the
systematic development of programs which are guaranteed to satisfy their
specifications. But first we must be certain that the definition itself is sound and
agrees with our intuitive notions built upon programming experience.
Suppose we are given two theories T=<Σ,EC> and T'=<Σ',EC'>. We want to implement
the theory T (the abstract specification) using the sorts and operators provided by
T' (the concrete specification). Previous formal approaches (see [GTW 78], [Nou 79],
[Hup 80], [EKP 80], [Gan 81], [Ehr 82]) agree that T' implements T if there is some
way of deriving sorts and operators like those of T from the sorts and operators of
T'. Each approach considers a different way of making the 'bridge' from T' to T. We
will require that there be a more or less direct correspondence between the sorts and
operators of T and those of T'. Each sort or operator in Σ must be implemented by a
sort or operator in Σ' -- this correspondence will be embodied by a signature
morphism σ:Σ-->Σ'. Note that two different sorts or operators in Σ may map to the
same Σ' sort or operator, and also that there may be some (auxiliary) sorts and
operators in Σ' which remain unused. This is a simplification over previous
approaches, which generally allow some kind of restricted enrichment of T' to T"
before matching T with T". But the power is the same; we would say that T"
implements T and leave the enrichment from T' to T" to the user. As a consequence of
a later theorem (see section 5) our results extend to more complex notions.
Given a signature morphism σ:Σ-->Σ', what relationship must hold between T and T'
before we can say that T' implements T? One might suspect that σ(EC) ⊆ EC' is
required -- i.e. that for any model A' of T', A'|σ ∈ Models(T) -- but this condition
is too strong. Roughly speaking, T' implements T if the σ-restriction of each model
of T' simulates T. A (σ-restricted) T' model need not be a model of T, but need only
have the appearance of a model of T. In particular, values of the T' model must
'represent' the values of the corresponding Σ-terms, but the carriers of the T' model
need not match the carriers of any T model. Some flexibility is necessary in order
to approximate our intuitive notion of an implementation:
- A subset of the values of a T' sort may be used to represent all the values
of a T sort. Example: implementing the natural numbers using the integers
-- the negative integers are not needed.
- More than one T' value may be used to represent the same T value. Example:
implementing sets by strings -- the order does not matter, so
"1.2.3" = "3.2.1" (as sets).
Now T' implements T if (and only if) the σ-restriction of any model of T' is a model
of T after these two considerations have been taken into account. This ensures that
corresponding operators will yield the same result (modulo data representation), which
is intuitively the decisive criterion for a correct implementation.
Our definition of implementation will proceed in two steps. First we define the
simulation of a theory by an algebra, making precise the vague and informal ideas
outlined above. The notion of simulation is then used to give a simple definition of
implementation. We chose to highlight the notion of simulation because it seems to
be a fundamental concept which may be useful outside the present context. The
following diagram shows how the definitions given below fit together to give notions
of simulation and implementation:
[Diagram: T (a Σ-theory) --σ--> T' (a Σ'-theory); a model A' of T', restricted
along σ, simulates T; together these give an implementation T ==σ=> T'.]
For the definition of simulation we need an auxiliary notion. As mentioned above,
a subset of the available 'concrete' values may be used to represent all 'abstract'
values. Restricting the carriers of the concrete algebra to the values which are
actually used yields an intermediate algebra which plays an important role in the
definition of simulation. We do not want to restrict the carrier for every sort, but
only for those sorts of Σ which are constrained by hierarchy constraints in T (for
unconstrained sorts we do not know which values are unused). This is where we depart
from the usual practice of restricting to 'reachable' values (see for example
[EKP 80]). We want the subalgebra which has been reduced just enough to satisfy the
"no junk" condition for each of these constraints.
Def: If Σ is a signature, A is a Σ-algebra and T is a Σ-theory, then restrict_T(A)
is the largest subalgebra A' of A satisfying the "no junk" condition (section 2) for
every hierarchy constraint <i:T' c--> T", σ:sig(T")-->Σ> in T.
Note that the subalgebra A' does not always exist. Consider the following
example:
const T = let Nat = enrich Bool by
             'data' sorts nat
             opns 0 : nat
                  succ : nat -> nat  enden
        in enrich Nat by
             opns neg : nat  enden
Let Σ be the signature of T. Suppose A is the Σ-algebra with carrier {-1,0,1,...},
the usual interpretation for the operators 0 and succ, and neg=-1. Now restrict_T(A)
does not exist because every subalgebra of A must contain -1 (the value of neg) and
hence fails to satisfy the "no junk" condition for the constraint of T.
A Σ-algebra A simulates a Σ-theory T if it satisfies the equations and constraints
of T after allowing for unused carrier elements and multiple representations.
Def: If Σ is a signature, A is a Σ-algebra and T=<Σ,EC> is a Σ-theory, then A
simulates T if restrict_T(A)/≡EC (call this RI(A)) exists and is a model of T.
[≡EC is the Σ-congruence generated by EC -- i.e. the least Σ-congruence on
restrict_T(A) containing the relation determined by the equations in EC.]
RI stands for restrict-identify, the composite operation which forms the heart of
this definition. To determine if a Σ-algebra A simulates a Σ-theory T, we restrict
A, removing those elements from the carrier which are not used to represent the value
of any Σ-term, for constrained sorts; the result of this satisfies the "no junk"
condition for each constraint in T. We then identify multiple concrete
representations of the same abstract value by quotienting the result by the
Σ-congruence generated by the equations of T, obtaining an algebra which (of course)
satisfies those equations and also continues to satisfy the "no junk" condition of
the constraints. If this is a model of T (i.e. it satisfies the "no crime" condition
for each constraint in T) then A simulates T. Note that any model of T simulates T.
It has been shown in [EKP 80] that the order restrict-identify gives greater
generality than identify-restrict.
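For intuition, restrict-identify can be sketched for finite single-sorted algebras as below. This is a toy illustration under simplifying assumptions, not the paper's general construction: restriction is computed as reachability from constants, and the congruence is approximated by the equivalence closure of the given equation instances.

    # restrict: keep only values reachable as values of terms.
    def restrict(consts, unary_ops):
        reached, frontier = set(consts), list(consts)
        while frontier:
            x = frontier.pop()
            for op in unary_ops:
                y = op(x)
                if y not in reached:
                    reached.add(y)
                    frontier.append(y)
        return reached

    # identify: quotient by the least equivalence containing the forced
    # pairs (instances of the theory's equations), via union-find.
    # A genuine congruence would also be closed under the operations;
    # that step is omitted in this sketch.
    def identify(elements, forced_pairs):
        parent = {x: x for x in elements}
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for a, b in forced_pairs:
            parent[find(a)] = find(b)
        classes = {}
        for x in elements:
            classes.setdefault(find(x), set()).add(x)
        return list(classes.values())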
Most work on algebraic specifications concentrates on the specification of
abstract data types, following the lead of ADJ [GTW 78] and Guttag et al [GH 78]. As
pointed out by ADJ, the initial model (in the category of all models of a theory)
best captures the meaning of "abstract" as used in the term "abstract data type", so
other models are generally ignored (there is some disagreement on this point -- other
authors prefer e.g. final models [Wan 79] -- but in any case some particular model is
singled out for special attention). This is not the case in Clear (the ordinary
version or our variant); although the 'data' operation may be used to restrict the
set of models as discussed in section 2, no particular model is singled out, so in
general a theory may have many non-isomorphic models (as in the Munich approach).
Such a loose theory need not be implemented by a theory with the same broad range of
models. A loose theory leaves certain details unspecified and an implementation may
choose among the possibilities or not as is convenient. That is:
- A loose theory may be implemented by a 'tighter' theory. Example:
implementing the operator choose:set->integer (choose an element from a
set) by an operator which chooses the smallest.
This is intuitively necessary because it would be silly to require that a program
(the final result of the refinement process) embody all the vagueness of its original
specification. This kind of flexibility is already taken into account by the
discussion above, and is an important feature of our notion of implementation.
Previous notions do not allow for it because they concentrate on implementation of
abstract data types and so consider only a single model for any specification.
Now we are finally prepared to define our notion of the implementation of one
theory by another. This definition is inspired by the notion of [EKP 80] but it is
not the same; they allow a more elaborate 'bridge' but otherwise their notion is more
restrictive than ours. Our notion is even closer to the one of Broy et al [BMPW 80]
but there the 'bridge' is less elaborate than ours. It also bears some resemblance
to a more programming-oriented notion due to Schoett [Sch 81].
Def: If T=<Σ,EC> and T'=<Σ',EC'> are satisfiable theories and σ:Σ-->Σ' is a
signature morphism, then T' implements T (via σ), written T ==σ=> T', if for any model
A' of T', A'|σ simulates T.
Note that any theory morphism σ:T-->T' where T' is satisfiable is an implementation
T ==σ=> T'. In particular, if T' is an enrichment of T (e.g. by equations which
'tighten' a loose theory) then T ==σ=> T'.
A simple example will show how this definition works (other implementation
examples are given in the next section). Consider the theory of the natural numbers
modulo 2, specified as follows:
const Natmod2 = enrich Bool by
  'data' sorts natmod2
  opns 0, 1 : natmod2
       succ : natmod2 -> natmod2
       iszero : natmod2 -> bool
  eqns succ(0) = 1     succ(1) = 0
       iszero(0) = true     iszero(1) = false  enden
Can this be implemented by the following theory?
const Fourvalues = enrich Bool by
  'data' sorts fourvals
  opns zero, one, zero', extra : fourvals
       succ : fourvals -> fourvals
       iszero : fourvals -> bool
       eq : fourvals,fourvals -> bool
  eqns succ(zero) = one       succ(one) = zero'
       succ(zero') = one      succ(extra) = zero
       iszero(zero) = true    iszero(one) = false
       iszero(zero') = true   iszero(extra) = false
       eq(zero,one) = false   eq(zero,zero') = false
       ...
       eq(p,q) = eq(q,p)      eq(p,p) = true  enden
The iszero operator of Natmod2 and the eq operator of Fourvalues are needed to avoid
trivial models.
All models of Fourvalues have a carrier containing 4 elements, and all models of
Natmod2 have a 2-element carrier. Now consider the signature morphism
σ:sig(Natmod2)-->sig(Fourvalues) given by [natmod2|->fourvals, 0|->zero, 1|->one,
succ|->succ, iszero|->iszero] (and everything in Bool maps to itself). Intuitively,
Natmod2 ==σ=> Fourvalues (zero and zero' both represent 0, one represents 1 and extra
is unused) but is this an implementation according to the definition? Consider any
model of Fourvalues (e.g. the term model -- all models are isomorphic). 'Forgetting'
to the signature sig(Natmod2) eliminates the operators zero', extra and eq. Now we
check if this algebra (call it A) simulates Natmod2.
- 'Restrict' removes the value of extra from the carrier.
- 'Identify' identifies the values of the terms "succ(1)" (=zero') and "0" (=zero).
The "no crime" condition of Natmod2's constraint requires that the values of true
and false remain separate; this condition is satisfied, so A simulates Natmod2 and
Natmod2 ==σ=> Fourvalues is an implementation.
Suppose that the equation succ(zero')=one in Fourvalues were replaced by
succ(zero')=zero. Forget (producing an algebra B) followed by restrict has the same
effect on any model of Fourvalues, but now identify collapses the carrier for sort
natmod2 to a single element (the closure of the equations in Natmod2 includes the
equation succ(succ(p))=p, so "succ(succ(0))" (=zero') is identified with "0" (=zero),
and "succ(succ(1))" (=zero) is identified with "1" (=one)). Furthermore, the carrier
for sort bool collapses; "iszero(succ(succ(1)))" (=true) is identified with
"iszero(1)" (=false). The result fails to satisfy the "no crime" condition of the
constraint, so B does not simulate Natmod2 and Natmod2 ==σ=> Fourvalues is no longer
an implementation.
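The calculation in this example is small enough to run in code; the following Python check (ours, illustrative only) traces the restrict step on the four-element model and the effect of the identify step.

    # The Fourvalues model after forgetting zero', extra and eq as operators.
    succ = {"zero": "one", "one": "zero'", "zero'": "one", "extra": "zero"}

    # Restrict: values of the terms 0, 1, succ(0), succ(1), ...
    reached, frontier = {"zero", "one"}, ["zero", "one"]
    while frontier:
        x = frontier.pop()
        if succ[x] not in reached:
            reached.add(succ[x])
            frontier.append(succ[x])
    print(reached)        # {'zero', 'one', "zero'"} -- extra is removed

    # Identify: succ(1) = 0 in Natmod2 forces zero' == zero, giving a
    # two-element carrier which is a model of Natmod2. With the altered
    # equation succ(zero') = zero, the forced pairs would also merge zero
    # with one (and true with false), violating "no crime".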
Implementation of parameterised theories
Parameterised theories in Clear are like functions in a programming language; they
take zero or more values as arguments and return another value as a result. In Clear
these values are theories. Here is an example of a parameterised theory (usually
called a theory procedure in Clear):
meta Ident = enrich Bool by
  sorts element
  opns eq : element,element -> bool
  eqns eq(a,a) = true
       eq(a,b) = eq(b,a)
       eq(a,b) and eq(b,c) --> eq(a,c) = true  enden
proc Set(X:Ident) =
  let Set0 = enrich X by
    'data' sorts set
    opns ∅ : set
         singleton : element -> set
         U : set,set -> set
         is_in : element,set -> bool
    eqns ∅ U S = S
         S U S = S
         S U T = T U S
         S U (T U V) = (S U T) U V
         a is_in ∅ = false
         a is_in singleton(b) = eq(a,b)
         a is_in S U T = a is_in S or a is_in T  enden
  in enrich Set0 by
       opns choose : set -> element
       eqns choose(singleton(a) U S) is_in (singleton(a) U S) = true  enden
Ident is a metatheory; it describes a class of theories rather than a class of
algebras. Ident describes those theories having at least one sort together with an
operator which satisfies the laws for an equivalence relation on that sort.
Ident is used to give the 'type' of the parameter for the procedure Set. The idea
is that Set can be applied to any theory which matches Ident. Ident is called the
metasort or requirement of Set. When Set is supplied with an appropriate actual
parameter theory, it gives the theory of sets over the sort which matches element in
Ident. For example
Set(Nat[element is nat, eq is ==])
gives the theory of sets of natural numbers (assuming that Nat includes an equality
operator ==). Notice that a theory morphism (called the fitting morphism) must be
provided to match Ident with the actual parameter. The result of an application is
defined using pushouts as in [Ehr 82] (see [San 81] and [BG 80] for this and other
aspects of Clear's semantics) but it is not necessary (for now) to know the details.
In this paper we will consider only the single-parameter case; the extension to
multiple parameters should pose no problems.
Note that parameterised theories in Clear are different from the parameterised
specifications discussed by ADJ [TWW 78]. An ADJ parameterised specification works
at the level of algebras, producing an algebra for every model of the parameter. A
Clear parameterised theory produces a theory for each parameter theory. The result
P(A) may have 'more' models than the theory A (this is the case when Set is applied
to Nat, for example). Since ADJ parameterised specifications are a special case of
Clear parameterised theories, all results given here hold for them as well.
Since a parameterised theory R c--> P (that is, a procedure with requirement theory R
and body P -- R will always be included in P) is a function taking a theory A as a
parameter and producing a theory P(A) as a result, an implementation R' c--> P' of
R c--> P is a function as well which takes any parameter theory A of P as argument and
produces a theory P'(A) which implements P(A) as result. But this does not specify
what relation (if any) must hold between the theories R and R'. Since every actual
parameter A of R c--> P (which must match R) should be an actual parameter of R' c--> P',
it must match R' as well. This requires a theory morphism ρ:R'-->R (then a fitting
morphism φ:R-->A gives a fitting morphism ρ.φ:R'-->A).
Def: If R c--> P and R' c--> P' are parameterised theories, ρ:R'-->R is a theory morphism
and σ:sig(P)-->sig(P') is a signature morphism, then R' c--> P' implements R c--> P (via ρ
and σ), written R c--> P ==σ=> R' c--> P', if for all theories A with fitting morphism
φ:R-->A, P(A[φ]) ==σ^=> P'(A[ρ.φ]) where σ^ is the extension of σ from P to P(A[φ]) by
the identity id (i.e. σ^|sig(P)-sig(R) = σ and σ^|sig(A) = id).
Ordinarily R and R' will be the same theory, or at least the same modulo a change
of signature. Otherwise R' must be weaker than R.
Sometimes it is natural to split the implementation of a parameterised theory into
two or more cases, implementing it for reasons of efficiency in different ways
depending on some additional conditions on the parameters. For example:
- Sets: A set can be represented as a binary sequence if the range of
possible values is small; otherwise it must be represented as a sequence
(or tree, etc) of values.
- Parsing: Different algorithms can be applied depending on the nature of
the grammar (operator precedence, LR, context sensitive, etc).
- Sorting: Distribution sort can be used if the range of values is small;
otherwise quicksort.
In each instance the cases must exhaust the domain of possibilities, but they need
not be mutually exclusive.
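In programming terms such a case split is just a dispatch on the side condition; a small Python sketch (ours, with made-up thresholds) for the sets example:

    # Two set representations, chosen by the range of possible values
    # (illustrative). The cases overlap but together exhaust all inputs.
    def make_set(values, max_value=None):
        if max_value is not None and max_value < 64:
            bits = 0                      # small range: binary sequence
            for v in values:
                bits |= 1 << v
            return ("bits", bits)
        seq = []                          # general case: sequence of values
        for v in values:
            if v not in seq:
                seq.append(v)
        return ("seq", seq)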
Our present notion of implementation does not treat such cases. We could extend
it to give a definition of the implementation of a parameterised theory R c--> P by a
collection of parameterised theories R'+R₁' c--> P₁', ..., R'+Rₙ' c--> Pₙ' (where for
every theory A with a theory morphism φ:R-->A there must exist some i≥1 such that
φ':R'+Rᵢ'-->A exists). But we force the case split to the abstract level, rather than
entangle it with the already complex transition from abstract to concrete:
R c--> P   becomes   R+R₁ c--> P₁ = P(R+R₁)
                     ...
                     R+Rₙ c--> Pₙ = P(R+Rₙ)
This collection of n parameterised theories is equivalent to the original R c--> P, in
the sense that every theory P(A[φ]) with φ:R-->A is the same as the theory Pᵢ(A[φ'])
with φ':R+Rᵢ-->A for some i≥1. (A theory of the transformation of Clear
specifications is needed to discuss this matter in a more precise fashion; no such
theory exists at present.) Now each case may be handled separately, using the normal
definition of parameterised implementation:
R+R₁ c--> P₁ ==> R'+R₁' c--> P₁'
...
R+Rₙ c--> Pₙ ==> R'+Rₙ' c--> Pₙ'
4. Examples
Sets (as defined in the last section) can be implemented using sequences. We must
define sequences as well as operators on sequences corresponding to all the operators
in Set. We begin by defining everything except the choose operator:
meta Triv = theory sorts element endth
proc Sequence(X:Triv) =
  enrich X + Bool by
    'data' sorts sequence
    opns empty : sequence
         unit : element -> sequence
         . : sequence,sequence -> sequence
         head : sequence -> element
         tail : sequence -> sequence
    eqns empty.s = s
         s.empty = s
         s.(t.v) = (s.t).v
         head(unit(a).s) = a
         tail(unit(a).s) = s  enden
proc SequenceOpns(X:Ident) =
  enrich Sequence(X) by
    opns is_in : element,sequence -> bool
         add : element,sequence -> sequence
         U : sequence,sequence -> sequence
    eqns a is_in empty = false
         a is_in unit(b) = eq(a,b)
         a is_in s.t = a is_in s or a is_in t
         add(a,s) = s if a is_in s
         add(a,s) = unit(a).s if not(a is_in s)
         empty U s = s
         unit(a).t U s = add(a,t U s)  enden
The head and tail operators of Sequence and their defining equations are needed to
avoid trivial models; they serve no other function in the specification.
Before dealing with the choose operator, we split Set into two cases:
meta TotalOrder = enrich Ident by
  opns ≤ : element,element -> bool
  eqns a≤a = true
       a≤b and b≤a --> eq(a,b) = true
       a≤b and b≤c --> a≤c = true
       a≤b or b≤a = true  enden

Ident c--> Set   becomes   Ident c--> Set
                           TotalOrder c--> Set' = Set(TotalOrder)
These two cases may be handled separately. The choose operator can select the
minimum element when the element type is totally ordered; otherwise we can leave the
precise choice unspecified as before.
proc SequenceAsSet(X:Ident) =
  enrich SequenceOpns(X) by
    opns choose : sequence -> element
    eqns choose(unit(a).t) is_in (unit(a).t) = true  enden
proc SequenceAsSet'(X:TotalOrder) =
  enrich SequenceOpns(X) by
    opns choose : sequence -> element
    eqns choose(unit(a)) = a
         choose(unit(a).unit(b).s) = choose(unit(a).s) if a≤b
                                     else choose(unit(b).s)  enden
Now Ident c--> Set ==σ=> Ident c--> SequenceAsSet and TotalOrder c--> Set' ==σ=>
TotalOrder c--> SequenceAsSet', where σ = [element|->element, eq|->eq, set|->sequence,
∅|->empty, singleton|->unit, U|->U, is_in|->is_in, choose|->choose] (and everything in
the signature of Bool maps to itself), and ρ and ρ' are the identity morphisms on
Ident and TotalOrder respectively. Note that choose is not specified for empty sets
and sequences -- although the same notion of implementation should work for error
theories and algebras, we prefer to avoid the issue of errors for now. Also note
that an incorrect implementation results if choose in SequenceAsSet is changed to
select the first element; Set contains an equation
choose(singleton(x) U singleton(y)) = choose(singleton(y) U singleton(x)), so the
identify step would collapse the parameter sort (and consequently bool).
This example illustrates all of the features of our notion of implementation. Not
all sequences are needed to represent sets -- sequences with repeated elements are
not used. Each set is represented by many sequences, since the sequence
representation of a set keeps track of the order in which elements were inserted.
Set is split into two theories before implementation, and finally SequenceAsSet' is
'tighter' than Set' because the choose operator (select an element) is implemented by
an operator which chooses the minimum element.
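An executable rendering of SequenceAsSet' (our sketch, using Python lists for sequences) shows why choose-as-minimum is order-independent while choose-as-first-element is not:

    def add(a, s):                 # add(a,s) from SequenceOpns
        return s if a in s else [a] + s

    def union(t, s):               # unit(a).t U s = add(a, t U s)
        for a in reversed(t):
            s = add(a, s)
        return s

    def choose(s):                 # SequenceAsSet': least element
        return min(s)

    # The same set built in two insertion orders has the same choose value;
    # taking s[0] instead would depend on insertion order and violate
    # choose(singleton(x) U singleton(y)) = choose(singleton(y) U singleton(x)).
    assert choose(union([1, 2, 3], [])) == choose(union([3, 2, 1], []))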
A nonparameterised example is obtained by applying Set or Set' and SequenceAsSet
or SequenceAsSet' to an argument, for example:
Set(Nat[element is nat, eq is ==]) ==σ^=> SequenceAsSet(Nat[element is nat, eq is ==])
where σ^ is the same as σ above except that element|->element is replaced by
nat|->nat.
Two additional examples:
- Lists can be implemented using arrays of (value,index) pairs, where the
index points to the next value in the list (and where some distinguished
index value denotes nil). There are many representations for the same list
(the relative positions of cells in the array are irrelevant, for example)
and circular structures are not needed to represent the value of any list.
- The specification of matrix inversion in the Introduction can be
implemented by a specification of matrix inversion using the Gauss-Seidel
method. Conversely, this specification can be implemented by the
specification in the Introduction (enriched by some auxiliary functions).
The matrix inversion example shows that the expectation that A ==σ=> B should imply
that
B is 'lower level' than A is not always justified. This is because the definition of
implementation is concerned with classes of models rather than with the equations
used to describe those classes. In this case both theories will have the same class
of models except that the Gauss-Seidel method will probably require auxiliary
operators.
5. Horizontal and vertical composition
Large specifications are needed to solve large problems. But a large monolithic
specification of a compiler (for example) would be impossible to understand because
of the sheer numbers of interacting operators and equations. The value of a
specification depends on the ease with which it was written and can be understood; a
large number of pages full of equations are not of much use to anybody.
The Clear [BG 77] and CIP-L [Bau 81] specification languages were invented to
combat just this problem. Clear and CIP-L are languages for writing structured
specifications; that is, they provide facilities for combining small theories in
various ways to make large theories. A large specification can thus be built from
small easy-to-understand bits. Following [GB 80] this shall be called horizontal
structure.
Likewise, the implementation of a large specification is not done all at once; it
is good programming practice to implement and test pieces of the specification
separately and then construct a final system from the finished components. If the
theories which make up a Clear or CIP-L specification are implemented separately, it
should be possible to put together (horizontally compose) the implementations in the
same way that the theories themselves are put together, yielding an implementation of
the entire specification.
Although the problem of developing a program from a specification is simplified by
dividing it into smaller units, the step from specification of a component to its
implementation as a program is still often uncomfortably large. A way to conquer
this is to break the development of a program into a series of consecutive refinement
steps. That is, the specification is refined to a lower level specification, which
is in turn refined to a still lower level specification, and so on until a program is
obtained. Again following [GB 80], this is called the vertical structure (of the
development process). If a specification A is implemented by another specification
B, and B is implemented by C, then these implementations should vertically compose to
give an implementation of A by C. Goguen and Burstall [GB 80] propose a system
called CAT for the structured development of programs from specifications by
composing implementations in both the horizontal and vertical dimensions.
The vertical composition of two implementations is not always an implementation.
For example, consider the following theories:
const T = enrich Bool by
  opns extra : bool  enden
const T' = enrich Bool by
  opns extra : bool
  eqns extra = true  enden
const T" = theory 'data' sorts threevals
  opns tt, ff, extra : threevals  endth
Now T ==> T' and T' ==> T" but not T ==> T" (consider the model of T" where tt, ff
and extra are all distinct). The theories must satisfy an extra condition.
Def: A theory T is reachably complete with respect to a parameterised theory R c--> P
with P ⊆ T if T is sufficiently complete with respect to opns(P),
constrained-sorts(P), constrained-opns(P), and variables of
sorts(R) ∪ unconstrained-sorts(P). A theory T is reachably complete with respect to a
nonparameterised theory A if it is reachably complete with respect to ∅ c--> A.
In the example above T" is not reachably complete with respect to T because extra
is not provably equal to either tt or ff.
Vertical composition theorem:
1. [Reflexivity] T ==id=> T.
2. [Transitivity] If T ==σ=> T' and T' ==σ'=> T" and T" is reachably complete with
respect to σ.σ'(T), then T ==σ.σ'=> T".
Corollary:
1. [Reflexivity of parameterised implementations] R c--> P ==id=> R c--> P.
2. [Transitivity of parameterised implementations] If R c--> P ==σ=> R' c--> P' and
R' c--> P' ==σ'=> R" c--> P" and P" is reachably complete with respect to
σ.σ'(R) c--> σ.σ'(P), then R c--> P ==σ.σ'=> R" c--> P".
In the absence of constraints (as in the initial algebra [GTW 78] and final
algebra [Wan 79] approaches), reachable completeness is guaranteed so this extra
condition is unnecessary.
To prove that implementations of large theories can be built by arbitrary
horizontal composition of small theories, it is necessary to prove that each of
Clear's theory-building operations (combine, enrich, derive and apply) preserves
implementations. We will concentrate here on the application of parameterised
theories and the enrich operation. Extension of these results to the remaining
operations should not be difficult.
For the apply operation our object is to prove the following property of
implementations:
Horizontal Composition Property: R c--> P ==> R' c--> P' and A ==> A' implies
P(A) ==> P'(A').
But this is not true in general; in fact, P'(A') is not even always defined.
Again, some extra conditions must be satisfied for the desired property to hold.
Def: Let R c--> P be a parameterised theory.
- R c--> P is called structurally complete if P is sufficiently complete with respect to
opns(P), sorts(R) ∪ constrained-sorts(P), opns(R) ∪ constrained-opns(P), and
variables of sorts(R) ∪ unconstrained-sorts(P). A nonparameterised theory A is
called structurally complete if ∅ c--> A is structurally complete.
- R c--> P is called parameter consistent if P is conservative with respect to R.
If R' c--> P' is structurally complete, parameter consistent and reachably complete,
and A' is structurally complete and a valid actual parameter of R' c--> P', then the
horizontal composition property holds.
Horizontal composition theorem: If R c--> P and R' c--> P' are parameterised theories
with R' c--> P' structurally complete and parameter consistent, P' is reachably
complete with respect to σ(R) c--> σ(P), R c--> P ==σ=> R' c--> P' and A ==θ=> A' are
implementations with A' structurally complete, and φ:R-->A and φ':R'-->A' are theory
morphisms where φ' = ρ.φ.θ, then P(A[φ]) ==σ^=> P'(A'[φ']), where
σ^|sig(P(A[φ]))-sig(A) = σ and σ^|sig(A) = θ.
Corollary (Horizontal composition for enrich): If A ==θ=> A' is an implementation,
B = enrich A by <stuff> and B' = enrich A' by θ(<stuff>), A' c--> B' is structurally
complete and parameter consistent, B' is reachably complete with respect to
θ(A) c--> θ(B), and A' is structurally complete, then B ==θ^=> B', where
θ^|sig(B)-sig(A) = id and θ^|sig(A) = θ.
A consequence of this corollary is that our vertical and horizontal composition
theorems extend to more elaborate notions of implementation such as the one discussed
in [EKP 80]. Again, reachable completeness is guaranteed in the absence of
constraints.
The vertical and horizontal composition theorems give us freedom to build the
implementation of a large specification from many small implementation steps. The
correctness of all the small steps guarantees the correctness of the entire
implementation, which in turn guarantees the correctness of the low-level 'program'
with respect to the high-level specification. This provides a formal foundation for
a methodology of programming by stepwise refinement. CAT's 'double law' [GB 80] is
an easy consequence of the vertical and horizontal composition theorems. This means
that the order in which parts of an implementation are carried out makes no
difference, and that our notion of implementation is appropriate for use in CAT.
Our notions of simulation and implementation extend without modification to
ordinary Clear (with data constraints rather than hierarchy constraints); all of the
results in this paper then remain valid except for the horizontal composition theorem
and its corollary. These results hold only under an additional condition.
Def: A data theory T is hierarchical submodel consistent if for every model M of T
and every hierarchical submodel M' of M (i.e. every submodel of M satisfying the
constraints of T when viewed as hierarchy constraints), M' satisfies the data
constraints of T.
Horizontal composition theorem (with data): In Clear with data, if R c--> P and
R' c--> P' are parameterised theories with R' c--> P' structurally complete and parameter
consistent, P' is hierarchical submodel consistent and reachably complete with
respect to σ(R) c--> σ(P), R c--> P ==σ=> R' c--> P' and A ==θ=> A' are implementations
with A' structurally complete, and φ:R-->A and φ':R'-->A' are theory morphisms where
φ' = ρ.φ.θ, then P(A[φ]) ==σ^=> P'(A'[φ']).
The horizontal composition theorem for enrich extends analogously.
This result is encouraging because ordinary Clear is easier to use than our
'hierarchical' variant. However, the extra condition on the horizontal composition
theorem is rather strong and it may be that it is too restrictive to be of practical
use.
Acknowledgements
We are grateful to Ehrig, Kreowski and Padawitz [EKP 80], whose work provided a start
in the right direction. Thanks: from DS to Rod Burstall for guidance, from MW to
Manfred Broy and Jacek Leszczylowski for interesting discussions, to Burstall and
Goguen for Clear, to Bernhard Möller for finding a mistake, and to Oliver Schoett for
helpful criticism. This work was supported by the University of Edinburgh, by the
Science and Engineering Research Council, and by the Sonderforschungsbereich 49,
Programmiertechnik, München.
REFERENCES
Note: LNCS n = Springer Lecture Notes in Computer Science, Volume n
[Bau 81] Bauer, F.L. et al (the CIP Language Group) Report on a wide spectrum
language for program specification and development (tentative version). Report
TUM-I8104, Technische Univ. München.
[BDPPW 79] Broy, M., Dosch, W., Partsch, H., Pepper, P. and Wirsing, M. Existential
quantifiers in abstract data types. Proc. 6th ICALP, Graz, Austria. LNCS 71,
pp. 73-87.
[BMPW 80] Broy, M., Möller, B., Pepper, P. and Wirsing, M. A model-independent
approach to implementations of abstract data types. Proc. of the Symp. on
Algorithmic Logic and the Programming Language LOGLAN, Poznan, Poland. LNCS (to
appear).
[BG 77] Burstall, R.M. and Goguen, J.A. Putting theories together to make
specifications. Proc. 5th IJCAI, Cambridge, Massachusetts, pp. 1045-1058.
[BG 80] Burstall, R.M. and Goguen, J.A. The semantics of Clear, a specification
language. Proc. of Advanced Course on Abstract Software Specifications, Copenhagen.
LNCS 86, pp. 292-332.
[BMS 80] Burstall, R.M., MacQueen, D.B. and Sannella, D.T. HOPE: an experimental
applicative language. Proc. 1980 LISP Conference, Stanford, California, pp. 136-143;
also Report CSR-62-80, Dept. of Computer Science, Univ. of Edinburgh.
[Dij 72] Dijkstra, E.W. Notes on structured programming. Notes on Structured
Programming (Dahl O.-J., Dijkstra, E.W. and Hoare, C.A.R.), Academic Press, pp. 1-82.
[Ehr 81] Ehrich, H.-D. On realization and implementation. Proc. 10th MFCS, Strbske
Pleso, Czechoslovakia. LNCS 118.
[Ehr 82] Ehrich, H.-D. On the theory of specification, implementation, and
parameterization of abstract data types. JACM 29, 1, pp. 206-227.
[EK 82] Ehrig, H. and Kreowski, H.-J. Parameter passing commutes with implementation
of parameterized data types. Proc. 9th ICALP, Aarhus, Denmark (this volume).
[EKP 80] Ehrig, H., Kreowski, H.-J. and Padawitz, P. Algebraic implementation of
abstract data types: concept, syntax, semantics and correctness. Proc. 7th ICALP,
Noordwijkerhout, Netherlands. LNCS 85, pp. 142-156.
[Gan 81] Ganzinger, H. Parameterized specifications: parameter passing and
implementation. TOPLAS (to appear).
[GB 80] Goguen, J.A. and Burstall, R.M. CAT, a system for the structured elaboration
of correct programs from structured specifications. Computer Science Dept., SRI
International.
[GTW 78] Goguen, J.A., Thatcher, J.W. and Wagner, E.G. An initial algebra approach
to the specification, correctness, and implementation of abstract data types. Current
Trends in Programming Methodology, Vol. 4: Data Structuring (R.T. Yeh, ed.),
Prentice-Hall, pp. 80-149.
[Grä 79] Grätzer, G. Universal Algebra (2nd edition), Springer.
[GH 78] Guttag, J.V. and Horning, J.J. The algebraic specification of abstract data
types. Acta Informatica 10 pp. 27-52.
[Hup 80] Hupbach, U.L. Abstract implementation of abstract data types. Proc. 9th
MFCS, Rydzyna, Poland. LNCS 88, pp. 291-304.
[Hup 81] Hupbach, U.L. Abstract implementation and parameter substitution. Proc.
3rd Hungarian Computer Science Conference, Budapest.
[KR 71] Kaphengst, H. and Reichel, H. Algebraische Algorithmentheorie. VEB
Robotron, Zentrum für Forschung und Technik, Dresden.
[MS 82] MacQueen, D.B. and Sannella, D.T. Completeness of proof systems for
equational specifications. In preparation.
[Nou 79] Nourani, F. Constructive extension and implementation of abstract data
types and algorithms. Ph.D. thesis, Dept. of Computer Science, UCLA.
[Rei 80] Reichel, H. Initially-restricting algebraic theories. Proc. 9th MFCS,
Rydzyna, Poland. LNCS 88, pp. 504-514.
[San 81] Sannella, D.T. A new semantics for Clear. Report CSR-79-81, Dept. of
Computer Science, Univ. of Edinburgh.
[Sch 81] Schoett, O. Ein Modulkonzept in der Theorie Abstrakter Datentypen. Report
IFI-HH-B-81/81, Fachbereich Informatik, Universität Hamburg.
[TWW 78] Thatcher, J.W., Wagner, E.G. and Wright, J.B. Data type specification:
parameterization and the power of specification techniques. SIGACT 10th Annual Symp.
on the Theory of Computing, San Diego, California.
[Wan 79] Wand, M. Final algebra semantics and data type extensions. JCSS 19
pp. 27-44.
[WB 81] Wirsing, M. and Broy, M. An analysis of semantic models for algebraic
specifications. International Summer School on Theoretical Foundations of
Programming Methodology, Marktoberdorf.
[Wit 71] Wirth, N. Program development by stepwise refinement. CACM 14, 4
pp. 221-227.