Multilevel Constraint-Based Configuration
The goal of MyCon, a prototype expert system for configuring computer systems,
is to relieve the support engineer of the tedious task of configuring a
customer order.
by Robert I. Marcus
A new algorithm for handling computer system configuration problems has been
implemented in a prototype called MyCon, developed at the Hewlett-Packard Knowledge
Systems Laboratory.
The original MyCon was written in Portable Standard Lisp using the HP AI Workstation
environment [1], consisting of an objects package, a frame-based package, and an EMACS-
type editor, NMODE [2]. This environment made it possible to construct an interface in
which all the knowledge about the devices, constraints, and configurations of a system is
easily accessible to the user. This first prototype focused on placing devices on I/O
channels.
This prototype has been converted into HP standard Pascal and is running on the HP
3000, configuring the HP 3000 Series 68. The latest version also performs slot, junction
panel, and cable configuration. The ultimate goal is to produce a program that can free the
support engineer from the tedious task of logical configuration of a customer order.
The fundamental idea of the MyCon algorithm is very simple, but the algorithm appears
to have a wide range of applications. The basic principle is that at every stage of the
configuration process there are explicit constraints that cannot be violated.
The basic problem solved by MyCon is to construct a system configuration that satisfies a
series of constraints supplied by an expert. The constraints fall into two different classes:
required and suggested. A configuration is feasible only if it satisfies all of the required
constraints. The suggested constraints are guides for selecting the best choice among the
feasible configurations.
The prototype performs the logical configuration of a computer system. This problem
consists of taking a set of devices and attaching them to I/O channels to maximize
performance at a reasonable cost. Typical required constraints are “The maximum number
of devices that can be attached to a channel is 6” and “High-speed devices must be
attached to high-speed channels”. Typical suggested constraints are “Spread the high-
speed devices evenly” and “Keep low-speed devices on low-speed channels”. The
difficult part of the problem is to use the knowledge embedded in the suggested
constraints to control and evaluate the system configuration. The method of multilevel
constraints was developed to handle this part of the problem.
Previous work on configuration problems, such as the R1 system developed by Digital
Equipment Corporation [3], has used forward chaining rules to guide the configuration. The
advantage of a rule-based or constraint-based system is the ease of maintenance and
updating. In this paper, multilevel constraints are discussed as an alternative or
supplement to forward chaining. The method can be extended to any problem in which
there are qualitative preferences among a range of feasible solutions.
Basic Method as Used by MyCon
The steps in the basic method are:
1. Choose an initial algorithm that can generate candidate partial configurations as steps
towards a final configuration.
2. Use the required constraints as a pruning mechanism to ensure that the initial algorithm
will produce only feasible configurations.
3. Add suggested constraints as necessary to maximize the quality of the final
configuration and to guide the configuration away from unsuccessful partial
configurations that would require backtracking.
4. If the initial algorithm generates a sufficient number of possibilities, and all the
required constraints are stated, and the suggested constraints are chosen well, then the
method provides an efficient algorithm for optimal configuration.
The idea of multilevel constraints is to assign each constraint to a level determining when
it is active and must be satisfied. In MyCon, the required constraints are assigned a level
of 100. Suggested constraints are assigned lower levels ranging from 0 to 90 in steps of
10. The program begins attaching devices to channels keeping all constraints active. If a
feasible configuration cannot be completed, then constraints at level 0 are dropped, the
level is incremented by 10, and an attempt is made to attach the remaining devices. This
process of dropping constraints is continued until the configuration is successfully
completed or the level reaches 100.
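The level mechanism just described can be sketched in a few lines of code. This is an illustrative reconstruction, not MyCon's actual implementation (which was written in Lisp and later Pascal); the `Channel` and `Constraint` structures, the `allows` predicate signature, and the placement order are simplifying assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Channel:
    high_speed: bool
    devices: List[str] = field(default_factory=list)

@dataclass
class Constraint:
    description: str
    level: int                              # 100 = required, 0-90 = suggested
    allows: Callable[[Channel, str], bool]  # may this device join this channel?

def place_all(devices, channels, constraints):
    """Attach each device to the first channel violating no active constraint;
    when some device cannot be placed, raise the level (dropping the weakest
    suggested constraints) and retry, until only required constraints remain."""
    level, remaining = 0, list(devices)
    while remaining and level <= 100:
        # A constraint is active while its level is at or above the current level.
        active = [c for c in constraints if c.level >= level]
        unplaced = []
        for dev in remaining:
            target = next((ch for ch in channels
                           if all(c.allows(ch, dev) for c in active)), None)
            if target is not None:
                target.devices.append(dev)
            else:
                unplaced.append(dev)
        remaining = unplaced
        level += 10        # drop constraints below the new level and retry
    return remaining       # non-empty: infeasible even with required constraints only
```

For example, with a required limit of six devices per channel at level 100 and a suggested limit of one per channel at level 10, three devices on two channels are first spread one per channel; the third is placed only after the suggested constraint is dropped.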
This method offers several advantages over a strictly forward chaining approach. The
number of constraints is smaller than would be necessary with forward chaining. It is not
necessary to introduce additional predicates to resolve conflicts and to prevent the
premature firing of rules. It is possible to evaluate the results of the configuration by
keeping track of the suggested constraints it violated. Finally, the user of the system can
suggest an alternate configuration for the program to evaluate and compare with the
original result. This last feature is very important, because it allows a user to determine
why the program has selected a particular configuration and which alternatives are
equally suitable.
The MyCon Prototype
The user interface to MyCon is an interactive version of the form used by salespersons
when taking a customer order. The form will only accept orders that satisfy all the basic
system requirements regarding maximum and minimum numbers of devices and
peripherals. MyCon can also determine if an order contains too many devices to be
configured legally and will advise the user on how to reduce the order.
When a correct order has been entered, the program chooses the number of high-speed
I/O channels and the system disc following guidelines suggested by an expert in
configuration. The user is informed of the reasons governing these choices and is allowed
to overrule them.
Next, MyCon attempts to place devices on the I/O channels using the active constraints to
determine if the placement can be done. The devices are arranged in an order based on
heuristics used by experts. In general, devices that are subject to more constraints, such as
the system disc, are placed first. The user is able to watch the devices being placed one at
a time with levels reported if desired.
A device is placed on the first I/O channel for which no active constraint is violated. If a
device can't be placed at a level, then no further attempt is made at that level to place
devices of the same type. When all possible devices have been attached at a given level,
MyCon checks to see if the configuration is completed. If any devices remain to be
placed, the level is raised and constraints below the current level are dropped. If the level
reaches 100 without a successful configuration, the user is informed that it is necessary to
reduce the number of high-speed I/O channels, which will lower the performance of the
system. If the user decides to continue, the number of high-speed I/O channels is reduced,
all devices are removed from the channels, the level is set at 0, and the configuration is
restarted.
This process is repeated until a successful configuration is produced or no further
reduction is possible. The last possibility has been eliminated in practice by using a
maximal packing estimator on the configurability of the original order before starting
configuration.
Finally, the user can reconfigure the system by adding, moving, and deleting devices.
MyCon will evaluate the user's configuration and compare it with the original result.
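The evaluation step described above amounts to a scoring pass over a finished layout: collect the suggested constraints each configuration violates, then compare. A minimal sketch, under assumed data shapes (each channel is a list of device names, each constraint a description/level/predicate triple; none of this is MyCon's actual representation):

```python
# Each constraint is (description, level, holds), where `holds` tests one
# channel's final contents; a level below 100 marks it as suggested.

def violations(channels, constraints):
    """Return (level, description) pairs for every suggested constraint that
    some channel's final contents violate, weakest level first."""
    return sorted((level, desc)
                  for desc, level, holds in constraints
                  for ch in channels
                  if level < 100 and not holds(ch))

def better_of(ours, theirs, constraints):
    """Crude comparison: the configuration with fewer suggested-constraint
    violations wins; ties go to the program's own result."""
    if len(violations(ours, constraints)) <= len(violations(theirs, constraints)):
        return "ours"
    return "theirs"
```

Reporting the violated-constraint list itself, rather than just the winner, is what lets a user see why the program preferred one configuration over another.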
Examples of Multilevel Constraints
Multilevel constraints serve a dual purpose in configuration. In one sense they are a
means of knowledge representation that is easily understood. The levels convey an
estimate of the importance attached to satisfying a constraint. However, the levels are
also a control mechanism which can be used to govern the dynamics of the configuration
process. For example, to ensure that high-speed devices are spread evenly on high-speed
channels, the following constraints and levels are used:
Maximum of one device on high-speed channels Level = 10
Maximum of two devices on high-speed channels Level = 20
Maximum of three devices on high-speed channels Level = 30
Maximum of four devices on high-speed channels Level = 40
Maximum of five devices on high-speed channels Level = 50
A fundamental constraint of the system is:
Maximum of six devices on high-speed channels Level = 100
The placement of a low-speed device on a high-speed channel at an early stage of the
configuration would not strongly affect the performance of a successful configuration.
However, it might stop the later placement of a high-speed device and prevent the
configuration process from succeeding. Therefore, it is important that no low-speed
devices be placed on high-speed channels until all high-speed devices have been attached.
Since the constraints above may prevent this until the level is greater than 50, the
following constraint is enforced:
No low-speed devices on high-speed channels Level = 60
It is possible to have MyCon maintain its knowledge base by insisting that certain
inequality relations between constraint levels be enforced. For example:
Level of (No low-speed devices on high-speed channels) > Level of (Maximum of five devices on high-speed channels).
The constraints can be chosen to avoid having devices placed on the same channel. For
example:
Minimum of (Number of system discs attached to channel)
AND (Number of magnetic tape drives attached to channel) = 0. Level = 90
Number of high-speed master discs attached to channel < = 1 Level = 30
It is also possible to insist that certain devices not be placed with those of another type.
For example:
(Number of slave discs attached to channel = 0)
OR (Number of master discs attached to channel = 0). Level = 100
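Constraints like these translate almost directly into level-tagged predicates over a channel's contents. The encoding below is an illustrative assumption (the paper does not show MyCon's internal representation, and the channel/device field names are invented here); it also demonstrates checking the level-ordering relation given earlier as a knowledge-base maintenance step.

```python
# `ch` is assumed to be a dict like:
# {"high_speed": True, "devices": [{"kind": "master disc", "speed": "high"}, ...]}

def count(ch, kind):
    return sum(1 for d in ch["devices"] if d["kind"] == kind)

CONSTRAINTS = {
    "max six devices on high-speed channel":
        (100, lambda ch: not ch["high_speed"] or len(ch["devices"]) <= 6),
    "max five devices on high-speed channel":
        (50, lambda ch: not ch["high_speed"] or len(ch["devices"]) <= 5),
    "no low-speed devices on high-speed channel":
        (60, lambda ch: not ch["high_speed"]
                        or all(d["speed"] == "high" for d in ch["devices"])),
    "no system disc with magnetic tape on one channel":
        (90, lambda ch: min(count(ch, "system disc"), count(ch, "tape")) == 0),
    "at most one master disc per channel":
        (30, lambda ch: count(ch, "master disc") <= 1),
    "no slave discs with master discs on one channel":
        (100, lambda ch: count(ch, "slave disc") == 0
                         or count(ch, "master disc") == 0),
}

def level_of(name):
    return CONSTRAINTS[name][0]

# Knowledge-base maintenance: enforce the inequality relation stated above.
assert level_of("no low-speed devices on high-speed channel") > \
       level_of("max five devices on high-speed channel")
```

Keeping level numbers and predicates side by side like this is what makes the knowledge base easy for a non-programmer expert to audit and adjust.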
Observations
It has been possible to translate all of the required and suggested constraints for device
placement into the form of multilevel constraints easily. The selection of levels for the
suggested constraints required some analysis and experimentation. The easy-to-
understand form of the constraints facilitates addition, deletion, modification, and testing
by a configuration expert untrained in programming.
The expansion of the prototype to handle placement into slots and junction panels greatly
increased the knowledge required by MyCon. This was accommodated by adding
constraints to the device placement phase of the program. Using these additional
constraints made possible a procedural implementation of the slot and junction panel
configuration.
The use of constraints also permitted an easy transformation of the program from Lisp to
Pascal. Using Lisp as a development language hastened development of the prototype by
providing flexibility in choosing data structures and an excellent debugging environment.
The conversion into Pascal was done for reasons of portability and maintainability.
Conclusion
The method of multilevel constraints has proven very effective for logical configuration
of computer systems. It combines simplicity and power in knowledge representation and
control. The method can be extended to many problems in which a system is constructed
systematically using knowledge of a domain. It can be used as a supplement or
alternative to procedural or forward chaining methods. The constraints complement these
methods by telling the program what actions not to take.
Constraints seem to be a natural way for experts to express their knowledge of a domain.
By tracing the constraints and the levels it is possible to monitor very closely the progress
of the configuration and detect gaps or errors in the knowledge.
The real power of the algorithm has not been fully tested by the logical configuration
problem. For example, expressing multilevel constraints in mathematical form allows the
introduction into expert systems of the tremendous amount of software constructed for
linear and mathematical programming. In the field of artificial intelligence, there has been
extensive work on constraint satisfaction and propagation; this could be used to extend
the present algorithm. In the future, it seems possible that the method of multilevel
constraints could become a standard tool of knowledge engineering.
Acknowledgments
Keith Harrison introduced me to the configuration problem and suggested many
improvements in the user interface.
References
1. M.R. Cagan, "An Introduction to Hewlett-Packard's AI Workstation," Hewlett-Packard Journal,
Vol. 37, no. 3, March 1986.
2. R.M. Stallman, EMACS: The Extensible, Customizable, Self-Documenting Display Editor, Memo
No. 519, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, June 1979.
3. J. McDermott, "R1: A Rule-Based Configurer of Computer Systems," Artificial Intelligence, Vol. 19, no. 1,
1982.