This document summarizes an application-level technique for farmer-worker parallel programs that allows workers to be dynamically added or removed without affecting the overall computation outcome. The key aspects are:
1) A dispatcher asynchronously distributes work units (blocks) to workers on demand and tracks each block's processing status.
2) Workers request new work units from the dispatcher after completing units. The dispatcher prioritizes distributing fresher, less processed blocks.
3) A collector receives processed blocks from workers and notifies the dispatcher when slots are filled, allowing the farmer to send new blocks. This decouples processes and enables dynamic scaling.
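The dispatcher/worker/collector scheme above can be sketched with Python threads and queues. The block payload and the squaring work function are hypothetical stand-ins for the real computation; the point is that workers can join mid-run because they simply pull from the shared task queue on demand:

```python
import queue
import threading

# Hypothetical per-block work: square each number in the block.
def process(block):
    return [x * x for x in block]

def worker(tasks, results):
    while True:
        block = tasks.get()
        if block is None:          # poison pill: worker leaves the farm
            tasks.task_done()
            return
        results.put(process(block))
        tasks.task_done()          # signals "done, ready for the next block"

tasks = queue.Queue()
results = queue.Queue()

# Start with two workers.
workers = [threading.Thread(target=worker, args=(tasks, results)) for _ in range(2)]
for w in workers:
    w.start()

# Dispatcher hands out ten blocks on demand.
for i in range(10):
    tasks.put(list(range(i, i + 5)))

# Dynamically add a third worker mid-computation; the outcome is unaffected.
extra = threading.Thread(target=worker, args=(tasks, results))
extra.start()
workers.append(extra)

tasks.join()                       # collector-side wait: every block processed
for w in workers:
    tasks.put(None)
for w in workers:
    w.join()

total = sum(sum(b) for b in [results.get() for _ in range(10)])
print(total)                       # 2625, regardless of how many workers ran
```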
Optimization of Collective Communication in MPICH, by Lino Possamai
A lecture about the paper "Optimization of Collective Communication in MPICH". Department of Computer Science, University Ca' Foscari of Venice, Italy
Simulation of BRKSS Architecture for Data Warehouse Employing Shared Nothing ..., by Dr. Amarjeet Singh
The BRKSS architecture is based on shared-nothing clustering that can scale up to a large number of computers, increase their speed, and balance the workload. The architecture comprises a console with a CPU that also acts as a buffer, storing information on the processing of transactions as each batch enters the system. The console is connected to a p-port switch, which in turn connects to c clusters through their respective hubs. The architecture can serve personal databases as well as online databases such as the cloud, via a router. It applies load balancing by moving transactions among the nodes within the clusters, so that the overhead on any particular node is minimised. In this paper we simulate the working of the BRKSS architecture using JDK 1.7 with NetBeans 8.0.2, and compare performance parameters such as turnaround time, throughput, and waiting time against an existing hierarchical clustering model.
Multistage Interconnection Networks (MINs) are designed to provide effective communication in switching. A MIN consists of stages of switches that route traffic along a path through the network. The major problem in this type of network occurs when a switch in a stage fails to route; in that case the traffic must be redirected along an alternative path to avoid system failure. Shuffle-exchange networks have been widely considered practical interconnection systems because of the small size of their switching elements and their uncomplicated configuration, which helps with fault tolerance and reduces latency.
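Routing through a shuffle-exchange (omega-style) MIN can be sketched with textbook destination-tag routing: each stage applies the perfect-shuffle wiring and then an exchange switch driven by one bit of the destination address. The node numbering and rotate-left shuffle below are the standard convention, not anything specific to the summarized work; rerouting around a failed switch would replace the exchange decision at the faulty stage:

```python
def shuffle_exchange_route(src, dst, n):
    """Destination-tag routing through log2(n) shuffle-exchange stages.

    At each stage the packet first traverses the perfect-shuffle wiring
    (rotate the d-bit address left by one) and then the exchange switch
    sets the lowest address bit to the next bit of the destination tag.
    """
    d = n.bit_length() - 1              # number of stages = log2(n)
    pos, path = src, [src]
    for stage in range(d):
        pos = ((pos << 1) | (pos >> (d - 1))) & (n - 1)   # perfect shuffle
        bit = (dst >> (d - 1 - stage)) & 1                # next destination bit
        pos = (pos & ~1) | bit                            # exchange switch
        path.append(pos)
    return path

# 8-node network: route from node 5 (101) to node 3 (011).
print(shuffle_exchange_route(5, 3, 8))   # [5, 2, 5, 3]
```

After log2(n) stages the packet arrives at the destination regardless of the source, which is why the routing needs only the destination tag.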
Manifold Blurring Mean Shift algorithms for manifold denoising, report, 2012, by Florent Renucci
(General) To retrieve a clean dataset by deleting outliers.
(Computer Vision) The recovery of a digital image that has been contaminated by additive white Gaussian noise.
Manifold Blurring Mean Shift algorithms for manifold denoising, presentation, by Florent Renucci
(General) To retrieve a clean dataset by deleting outliers.
(Computer Vision) The recovery of a digital image that has been contaminated by additive white Gaussian noise.
Basic communication operations - One-to-all Broadcast, by RashiJoshi11
A brief description of basic communication operations in parallel computing, covering one-to-all broadcast, its implementation on ring, mesh, and hypercube topologies, its cost, and how to improve its speed.
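The hypercube case can be simulated in a few lines using the standard recursive-doubling schedule (a sketch of the classic algorithm; the deck itself may present the operation differently): in each step every informed node forwards the message across one dimension, so the informed set doubles and the broadcast completes in log2(n) steps.

```python
def hypercube_broadcast(n, root=0):
    """One-to-all broadcast on an n-node hypercube (n a power of two).

    In step t, every node already holding the message forwards it to its
    neighbour across dimension d-1-t; the informed set doubles each step.
    Returns the informed set and the per-step send schedule.
    """
    d = n.bit_length() - 1
    informed = {root}
    steps = []
    for i in reversed(range(d)):
        sends = [(node, node ^ (1 << i)) for node in sorted(informed)]
        informed |= {dst for _, dst in sends}
        steps.append(sends)
    return informed, steps

informed, steps = hypercube_broadcast(8)
print(len(informed), len(steps))   # 8 3
```

With per-message time ts + tw*m, this schedule gives the familiar (ts + tw*m) * log2(n) broadcast cost.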
Approaches to online quantile estimation, by Data Con LA
Data Con LA 2020
Description
This talk will explore and compare several compact data structures for estimation of quantiles on streams, including a discussion of how they balance accuracy against computational resource efficiency. A new approach providing more flexibility in specifying how computational resources should be expended across the distribution will also be explained. Quantiles (e.g., median, 99th percentile) are fundamental summary statistics of one-dimensional distributions. They are particularly important for SLA-type calculations and characterizing latency distributions, but unlike their simpler counterparts such as the mean and standard deviation, their computation is somewhat more expensive. The increasing importance of stream processing (in observability and other domains) and the impossibility of exact online quantile calculation together motivate the construction of compact data structures for estimation of quantiles on streams. In this talk we will explore and compare several such data structures (e.g., moment-based, KLL sketch, t-digest) with an eye towards how they balance accuracy against resource efficiency, theoretical guarantees, and desirable properties such as mergeability. We will also discuss a recent variation of the t-digest which provides more flexibility in specifying how computational resources should be expended across the distribution. No prior knowledge of the subject is assumed. Some familiarity with the general problem area would be helpful but is not required.
Speaker
Joe Ross, Splunk, Principal Data Scientist
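As a rough baseline for the accuracy-versus-memory trade-off the talk surveys, here is a streaming quantile estimator built on a plain fixed-size reservoir sample. This is deliberately the simplest compact structure, not the KLL sketch or t-digest the abstract mentions; those give much stronger guarantees for the same memory:

```python
import random

class ReservoirQuantile:
    """Streaming quantile estimate from a fixed-size uniform sample.

    Memory is O(capacity) no matter how long the stream is; accuracy
    degrades as roughly 1/sqrt(capacity), independent of stream order.
    """
    def __init__(self, capacity, rng):
        self.capacity = capacity
        self.sample = []
        self.seen = 0
        self.rng = rng

    def add(self, x):
        self.seen += 1
        if len(self.sample) < self.capacity:
            self.sample.append(x)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:        # keep item with prob capacity/seen
                self.sample[j] = x

    def quantile(self, q):
        s = sorted(self.sample)
        return s[min(int(q * len(s)), len(s) - 1)]

rng = random.Random(42)
est = ReservoirQuantile(512, rng)
for x in range(100_000):
    est.add(x)
print(est.quantile(0.5))    # close to 50_000
```

Unlike the mean, the quantile of the full stream cannot be updated from a constant-size summary exactly, which is why all practical options are approximate.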
Sparse Random Network Coding for Reliable Multicast Services, by Andrea Tassi
Point-to-multipoint communications are expected to play a pivotal role in next-generation networks. This talk refers to a cellular system transmitting layered multicast services to a Multicast Group (MG) of users. Reliability of communications is ensured via different Random Linear Network Coding (RLNC) techniques. We deal with a fundamental problem: the computational complexity of the RLNC decoder. The higher the number of decoding operations, the greater the user's computational overhead and, consequently, the faster the batteries of mobile devices drain. By referring to several sparse RLNC techniques, and without any assumption on the implementation of the RLNC decoder in use, we provide an efficient way to characterize the performance of users targeted by ultra-reliable layered multicast services. The proposed modeling makes it possible to efficiently derive the average number of coded packet transmissions needed to recover one or more service layers. We design a convex resource allocation framework that minimizes the complexity of the RLNC decoder by jointly optimizing the transmission parameters and the sparsity of the code, while ensuring service guarantees to predetermined fractions of users. The performance of the proposed optimization framework is then investigated in an LTE-A eMBMS network multicasting H.264/SVC video.
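The sparsity/decoding-complexity lever can be illustrated with a toy binary RLNC codec (GF(2), coefficient vectors as bitmask integers). This is only a conceptual sketch under simplified assumptions; the talk's framework works over general finite fields and real protocol stacks:

```python
import random

def sparse_encode(packets, sparsity, rng):
    """Emit one coded packet: a sparse random GF(2) combination.

    Lower sparsity means fewer nonzero coefficients, hence fewer XORs
    at the decoder; the price is a higher chance of non-innovative
    (linearly dependent) packets.
    """
    k = len(packets)
    coeff = 0
    while coeff == 0:                         # skip the useless zero vector
        coeff = sum(1 << i for i in range(k) if rng.random() < sparsity)
    payload = 0
    for i in range(k):
        if coeff >> i & 1:
            payload ^= packets[i]
    return coeff, payload

class GF2Decoder:
    """Incremental Gaussian elimination over GF(2)."""
    def __init__(self, k):
        self.k = k
        self.rows = {}                        # pivot bit -> (coeff, payload)

    def add(self, coeff, payload):
        for pivot in sorted(self.rows, reverse=True):
            if coeff >> pivot & 1:            # eliminate known pivots
                c, p = self.rows[pivot]
                coeff ^= c
                payload ^= p
        if coeff:                             # innovative packet: new pivot
            self.rows[coeff.bit_length() - 1] = (coeff, payload)
        return len(self.rows) == self.k       # True once full rank

    def decode(self):
        rows = dict(self.rows)
        for pivot in sorted(rows, reverse=True):       # back substitution
            c, p = rows[pivot]
            for other in rows:
                if other != pivot and rows[other][0] >> pivot & 1:
                    oc, op = rows[other]
                    rows[other] = (oc ^ c, op ^ p)
        return [rows[i][1] for i in range(self.k)]     # source packets

rng = random.Random(7)
source = [rng.getrandbits(32) for _ in range(8)]
dec = GF2Decoder(8)
sent = 0
while True:
    sent += 1
    if dec.add(*sparse_encode(source, 0.3, rng)):
        break
print(sent, dec.decode() == source)
```

Counting `sent` for different sparsity values reproduces, in miniature, the trade-off the talk characterizes: sparser codes need more transmissions on average but cost less per decoded packet.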
A comparison of efficient algorithms for scheduling parallel data redistribution, by IJCNCJournal
Data redistribution in parallel is an often-addressed issue in modern computer networks. In this context, we study the case of data redistribution over a switching network. Data from the source stations need to be transferred to the destination stations in the minimum time possible. Unfortunately, the time required to complete the transfer is burdened by each switching, and producing an optimal schedule is proven to be computationally intractable. For the purposes of this paper we consider two algorithms which have been proved to be very efficient in the past. To improve on previous approaches, we propose splitting the data into two clusters depending on the size of the data to be transferred. To prove the efficiency of our approach we ran experiments on all three algorithms, comparing the time span of the schedules produced as well as the running times needed to produce those schedules. The test cases indicate that our newly proposed algorithm not only yields better schedules but runs faster as well.
Load Rebalancing for Distributed Hash Tables in Cloud Computing, by iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
In the last few years the energy efficiency of large-scale infrastructures has gained a lot of attention, as power consumption became one of the most significant factors in the operating costs of a data center and in its Total Cost of Ownership. Power consumption can be observed at different layers of the data center: from the overall power grid, down to each rack, and finally to each machine and system. Given the rise of application containers in cloud computing, it becomes more and more important to measure power consumption at the application level as well, where power-aware schedulers and orchestrators can optimize the execution of workloads not only from a performance perspective but also by considering performance/power trade-offs. DEEP-mon is a novel monitoring tool able to measure power consumption and attribute it to each thread and application container running in the system, without any prior knowledge of the characteristics of the application and without any kind of workload instrumentation. DEEP-mon aggregates data for threads, application containers, and hosts with a negligible impact on the monitored system and on the running workloads.
The information obtained with DEEP-mon opens the way to a wide set of applications exploiting the capabilities of the monitoring tool, from power (and hence cost) metering of new software components deployed in the data center to fine-grained power capping and power-aware scheduling and co-location.
Elementary Parallel Algorithms - Sum of n numbers on hypercube, shuffle-exchange, and mesh SIMD computers, UMA multiprocessors, and broadcasting and prefix sum on multicomputers.
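The hypercube sum mentioned above can be simulated sequentially with the standard recursive-halving reduction (a sketch of the classic algorithm, not the deck's own code): in step i, each active node whose bit i is set sends its partial sum across dimension i and drops out, so node 0 holds the total after log2(n) steps.

```python
def hypercube_sum(values):
    """Sum reduction on a d-dimensional hypercube, simulated in d steps.

    len(values) must be a power of two; values[p] is the operand held
    by processor p. Returns the global sum as it appears on node 0.
    """
    n = len(values)
    d = n.bit_length() - 1
    partial = list(values)
    for i in range(d):
        for node in range(n):
            active = node & ((1 << i) - 1) == 0   # still holds a partial sum
            if active and node >> i & 1:          # sender side of dimension i
                partial[node ^ (1 << i)] += partial[node]
    return partial[0]

print(hypercube_sum(list(range(8))))   # 28
```

The same communication pattern, run in reverse, yields the one-to-all broadcast; combining the two gives all-reduce.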
Seminarie Computernetwerken 2012-2013: Lecture I, 26-02-2013, by Vincenzo De Florio
Seminarie Computernetwerken is a course given at Universiteit Antwerpen, Belgium:
a series of seminars focusing on themes that change from year to year.
This year's themes are resilience, behaviour, and evolvability in systems, networks, and organizations.
In what follows we describe:
the themes of the course
an overview of the seminars
the rules of the game
Tapping Into the Wells of Social Energy: A Case Study Based on Falls Identifi..., by Vincenzo De Florio
Are purely technological solutions the best answer we can get to the shortcomings our organizations are often experiencing today? The results we gathered in this work lead us to give a negative answer to that question. Science and technology are powerful boosters, yet when they are applied to the “local, static organization of an obsolete yesterday” they fail to translate into the solutions we need for our problems. Our stance here is that those boosters should be applied to novel, distributed, and dynamic models that allow us to escape from the local minima our societies are currently locked in. One such model is simulated in this paper to demonstrate how it may be possible to tap into the vast basins of social energy of our human societies to realize ubiquitous-computing sociotechnical services for the identification of, and timely response to, falls.
Accompanying paper available at https://arxiv.org/abs/1508.06655
SAFETY ENHANCEMENT THROUGH SITUATION-AWARE USER INTERFACES, by Vincenzo De Florio
Due to their privileged position halfway between the physical and the cyber universe, user interfaces may play an important role in learning, preventing, and tolerating scenarios affecting the safety of the mission and the user's quality of experience. This vision is embodied here in the main ideas and a proof-of-concept implementation of user interfaces combining dynamic profiling with context- and situation-awareness and autonomic software adaptation.
The user interface (UI) may be considered the contact point between two "universes": the physical universe of the user (let us refer to this universe as U) and the cyber universe where the required computer services are executed (C). The UI is also the logical “place” where actions are selected and passed for execution in C. As is well known, U and C are very different from each other; in particular, they have quite different notions of time, behaviours, actions, and quality of service. Despite so huge a difference, the consequences of the actions in C often reverberate in U, to the point that when the computer service is safety-critical, failures or misinterpretations in C may induce catastrophic events in U, possibly involving the loss of goods, capital, and even lives. As a matter of fact, the human factor is known as one of the major causes of system failures [2,15], and the UI is often the indirect player behind most interaction faults at the root of computer failures.
Due to its central role in the emergence of the user's quality of experience (QoE), the UI has been the subject of extensive research. As a result, current interfaces are adaptive, anticipative, personalized, and to some degree “intelligent” [3].
We believe that much more can be done beyond this already noteworthy progress. Thanks to its privileged position halfway between the user and the computer, we argue that the UI is well suited to hosting several non-functional tasks, including:
● Gathering contextual information from both sides of the activity spectrum.
● Deriving situational information about the current interaction processes.
● Producing logs of the knowledge accrued and the situations unveiled.
● Executing corrective actions in U and C so as to mitigate the extent of the consequences of safety or security violations.
In this paper we propose an approach based on the above argument. This approach instruments a UI so as to produce a stream of atomic UI operations and their C-time of occurrence.
On codes, machines, and environments: reflections and experiences, by Vincenzo De Florio
Code explicitly refers to a reference machine and, implicitly, to a set of conditions often called the system model and the fault model.
If one wants to guarantee an agreed-upon quality of service, one needs to either make assumptions about those conditions or adapt to them.
In this lecture I present this problem and a number of solutions, both practical and theoretical, that I have devised in the course of my career.
Although the main accent is on programming languages, here I provide links and references to other approaches that operate at algorithmic- and system-level.
How Resilient Are Our Societies? Analyses, Models, Preliminary Results, by Vincenzo De Florio
Traditional social organizations, such as those for the management of healthcare and civil defense, are the result of designs and realizations that matched an operational context considerably different from the one we are experiencing today: a simpler world, characterized by a greater amount of resources serving fewer users who produced lower peaks of requests.
The new context reveals all the fragility of our societies: unmanageability is just around the corner unless we complement the “old recipes” with smarter forms of social organization.
Here we analyze this problem and propose a refinement of our fractal social organizations as a model for resilient cyber-physical societies. Evidence for our claims is provided by simulating our model in terms of multi-agent systems.
In this presentation we introduce a family of gossiping algorithms whose members share the same structure but vary in performance as a function of a combinatorial parameter. We show that this parameter may be considered a “knob” controlling the amount of communication parallelism in the algorithms. We then introduce procedures to operate the knob and choose parameter values matching the number of communication channels currently provided by the available communication system(s). In so doing we provide a robust mechanism to tune the production of communication requests to the current operational conditions of the consumers of those requests. This can be used to achieve high performance and programmatic avoidance of undesirable events such as message collisions.
Paper available at https://dl.dropboxusercontent.com/u/67040428/Articles/pdp12.pdf
Proposed pricing model for cloud computing, by Adeel Javaid
Cloud computing is an emerging technology for business computing and is becoming a development trend. The process of entering the cloud generally takes the form of a queue, so each user needs to wait until the current user has been served. In the system, each Cloud Computing User (CCU) requests resources from a Cloud Computing Service Provider (CCSP); if the CCU finds the server busy, it must wait until the current user completes the job, which leads to a longer queue and increased waiting time. To address this, it is the CCSP's job to serve users with less waiting time; otherwise there is a chance the user will leave the queue. CCSPs can use multiple servers to reduce queue length and waiting time. In this paper we show how multiple servers can reduce the mean queue length and waiting time. Our approach is to treat a multi-server system as an M/M/m queuing model, so that a profit maximization model can be worked out.
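The M/M/m model referred to above gives closed-form expressions for the mean waiting time and queue length via the Erlang C formula. A sketch follows; the arrival and service rates are illustrative, not taken from the paper:

```python
from math import factorial

def mmm_metrics(lam, mu, m):
    """Mean waiting time Wq and mean queue length Lq for an M/M/m queue.

    lam: arrival rate, mu: per-server service rate, m: number of servers.
    Requires lam < m * mu for stability.
    """
    a = lam / mu                     # offered load in Erlangs
    rho = a / m                      # per-server utilization
    # Erlang C: probability that an arriving user must wait
    p_wait = (a ** m / (factorial(m) * (1 - rho))) / (
        sum(a ** k / factorial(k) for k in range(m))
        + a ** m / (factorial(m) * (1 - rho))
    )
    wq = p_wait / (m * mu - lam)     # mean time spent waiting in queue
    lq = lam * wq                    # mean queue length, by Little's law
    return wq, lq

# One server at half load vs. three servers at the same per-server load:
print(mmm_metrics(0.5, 1.0, 1))   # (1.0, 0.5): the M/M/1 special case
print(mmm_metrics(1.5, 1.0, 3))   # pooled servers wait far less at equal rho
```

Comparing the two calls shows the paper's point quantitatively: at equal utilization, a pooled m-server queue yields a shorter mean wait than m separate single-server queues.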
Approaches to online quantile estimationData Con LA
Data Con LA 2020
Description
This talk will explore and compare several compact data structures for estimation of quantiles on streams, including a discussion of how they balance accuracy against computational resource efficiency. A new approach providing more flexibility in specifying how computational resources should be expended across the distribution will also be explained. Quantiles (e.g., median, 99th percentile) are fundamental summary statistics of one-dimensional distributions. They are particularly important for SLA-type calculations and characterizing latency distributions, but unlike their simpler counterparts such as the mean and standard deviation, their computation is somewhat more expensive. The increasing importance of stream processing (in observability and other domains) and the impossibility of exact online quantile calculation together motivate the construction of compact data structures for estimation of quantiles on streams. In this talk we will explore and compare several such data structures (e.g., moment-based, KLL sketch, t-digest) with an eye towards how they balance accuracy against resource efficiency, theoretical guarantees, and desirable properties such as mergeability. We will also discuss a recent variation of the t-digest which provides more flexibility in specifying how computational resources should be expended across the distribution. No prior knowledge of the subject is assumed. Some familiarity with the general problem area would be helpful but is not required.
Speaker
Joe Ross, Splunk, Principal Data Scientist
Sparse Random Network Coding for Reliable Multicast ServicesAndrea Tassi
Point-to-Multipoint communications are expected to play a pivotal role in next-generation networks. This talk refers to a cellular system transmitting layered multicast services to a Multicast Group (MG) of users. Reliability of communications is ensured via different Random Linear Network Coding (RLNC) techniques. We deal with a fundamental problem: the computational complexity of the RLNC decoder. The higher the number of decoding operations is, the more the user's computational overhead grows and, consequently, the faster the batteries of mobile devices drain. By referring to several sparse RLNC techniques, and without any assumption on the implementation of the RLNC decoder in use, we provide an efficient way to characterize the performance of users targeted by ultra-reliable layered multicast services. The proposed modeling allows to efficiently derive the average number of coded packet transmissions needed to recover one or more service layers. We design a convex resource allocation framework that allows to minimize the complexity of the RLNC decoder by jointly optimizing the transmission parameters and the sparsity of the code. The designed optimization framework also ensures service guarantees to predetermined fractions of users. Performance of the proposed optimization framework is then investigated in a LTE-A eMBMS network multicasting H.264/SVC video.
I am Felix T. I am an Electrical Engineering Assignment Expert at eduassignmenthelp.com. I hold a Master’s. in Electrical Engineering, University of Greenwich, UK. I have been helping students with their Assignments for the past 7 years. I solve assignments related to Electrical Engineering.
Visit eduassignmenthelp.com or email info@eduassignmenthelp.com . You can also call on +1 678 648 4277 for any assistance with Electrical Engineering Assignments.
A comparison of efficient algorithms for scheduling parallel data redistributionIJCNCJournal
Data redistribution in parallel is an often-address
ed issue in modern computer networks. In this conte
xt, we
study the case of data redistribution over a switch
ing network. Data from the source stations need to
be
transferred to the destination stations in the mini
mum time possible. Unfortunately the time required
to
complete the transfer is burdened by each switching
and thus producing an optimal schedule is proven t
o
be computationally intractable. For the purposes of
this paper we consider two algorithms, which have
been proved to be very efficient in the past. To ge
t improved results in comparison to previous approa
ches,
we propose splitting the data in two clusters depen
ding on the size of the data to be transferred. To
prove
the efficiency of our approach we ran experiments o
n all three algorithms, comparing the time span of
the
schedules produced as well as the running times to
produce those schedules. The test cases we ran
indicate that not only our newly proposed algorithm
yields better results in terms of the schedule pro
duced
but runs faster as well.
Load Rebalancing for Distributed Hash Tables in Cloud Computingiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
In the last few years energy efficiency of large scale infrastructures gained a lot of attention, as power consumption became one of the most impacting factors of the operative costs of a data-center and of its Total Cost of Ownership. Power consumption can be observed at different layers of the data-center: from the overall power grid, moving to each rack and arriving to each machine and system. Given the rise of application containers in the cloud computing scenario, it becomes more and more important to measure power consumption also at the application level, where power-aware schedulers and orchestrators can optimize the execution of the workloads not only from a performance perspective, but also considering performance/power trade-offs. DEEP-mon is a novel monitoring tool able to measure power consumption and attribute it for each thread and application container running in the system, without any previous knowledge regarding the characteristics of the application and without any kind of workload instrumentation. DEEP-mon is able to aggregate data for threads, application containers and hosts with a negligible impact on the monitored system and on the running workloads.
Information obtained with DEEP-mon open the way for a wide set of applications exploiting the capabilities offered by the monitoring tool, from power (and hence cost) metering of new software components deployed in the data center, to fine grained power capping and power-aware scheduling and co-location.
Elementary Parallel Algorithm - Sum of n numbers on Hypercube, Shuffle Exchange and Mesh SIMD computers, UMA multiprocessors, Broadcasting and pre-fix sum on multicomputer.
I am Felix T. I am a Digital Signal Processing Assignment Expert at matlabassignmentexperts.com. I hold a Ph.D. in Matlab, University of Greenwich, UK. I have been helping students with their homework for the past 4 years. I solve assignments related to Digital Signal Processing.
Visit matlabassignmentexperts.com or email info@matlabassignmentexperts.com.
You can also call on +1 678 648 4277 for any assistance with Digital Signal Processing Assignments.
Seminarie Computernetwerken 2012-2013: Lecture I, 26-02-2013Vincenzo De Florio
Seminarie Computernetwerken is a course given at Universiteit Antwerpen, Belgium
A series of seminars focusing on various themes changing from year to year.
This year's themes are: resilience, behaviour, evolvability; in systems, networks, and organizations
In what follows we describe:
themes of the course
view to the seminars
rules of the game
Tapping Into the Wells of Social Energy: A Case Study Based on Falls Identifi...Vincenzo De Florio
Are purely technological solutions the best answer we can get to the shortcomings our organizations are often experiencing today? The results we gathered in this work lead us to giving a negative answer to such question. Science and technology are powerful boosters, though when they are applied to the “local, static organization of an obsolete yesterday” they fail to translate in the solutions we need to our problems. Our stance here is that those boosters should be applied to novel, distributed, and dynamic models able to allow us to escape from the local minima our societies are currently locked in. One such model is simulated in this paper to demonstrate how it may be possible to tap into the vast basins of social energy of our human societies to realize ubiquitous computing sociotechnical services for the identification and timely response to falls.
Accompanying paper available at https://arxiv.org/abs/1508.06655
SAFETY ENHANCEMENT THROUGH SITUATION-AWARE USER INTERFACESVincenzo De Florio
Due to their privileged position halfway the physical and the
cyber universe, user interfaces may play an important role in
learning, preventing, and tolerating scenarios affecting the
safety of the mission and the user's quality of experience. This
vision is embodied here in the main ideas and a proof-of-
concepts
implementation of user interfaces combining
dynamic profiling with context- and situation-awareness and
autonomic software adaptation.
The user interface (UI) may be considered as the contact point between two "universes": the physical universe of the user (let us refer to this universe as U) and the cyber universe where the required computer services are executed (C). The UI is also the logical “place” where actions are selected and passed for execution in C. As is well known, U and C are very different from each other; in particular, they have quite different notions of time, behaviours, actions, and quality of service. Despite so huge a difference, the consequences of the actions in C often reverberate in U, to the point that when the computer service is safety-critical, failures or misinterpretations in C may induce catastrophic events in U, possibly involving the loss of goods, capital, and even lives. As a matter of fact, the human factor is known as one of the major causes of system failures [2,15], and the UI is often the indirect player behind most interaction faults at the root of computer failures.
Due to its central role in the emergence of the user's quality of
experience (QoE), the UI has been the subject of extensive
research. As a result, current interfaces are adaptive,
anticipative, personalized, and to some degree “intelligent”
[3].
We believe that much more can be done beyond this already
noteworthy progress. Thanks to its privileged position
halfway between the user and the computer, we argue that the
UI is well suited for hosting several non-functional tasks,
including:
● Gathering contextual information from both sides of the activity spectrum.
● Deriving situational information about the current interaction processes.
● Producing logs of the knowledge accrued and situations unveiled.
● Executing corrective actions in U and C so as to mitigate the extent of the consequences of safety or security violations.
In this paper we propose an approach based on the above
argument. This approach instruments a UI so as to produce a
stream of atomic UI operations and their C-time of
occurrence.
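The instrumentation step described above can be sketched as follows; the class and operation names are hypothetical illustrations, not the paper's actual implementation:

```python
import time

class InstrumentedUI:
    """Hypothetical sketch: wrap atomic UI operations so the UI emits a
    stream of (operation, C-time of occurrence) records, as the approach
    proposes."""
    def __init__(self):
        self.stream = []  # the produced stream of atomic UI operations

    def record(self, operation):
        # Each atomic UI operation is logged with its time of occurrence in C.
        self.stream.append((operation, time.monotonic()))

ui = InstrumentedUI()
ui.record("button:submit:click")
ui.record("field:username:focus")
assert [op for op, _ in ui.stream] == ["button:submit:click",
                                       "field:username:focus"]
```

Such a stream can then feed the situation-derivation and logging tasks listed above.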
On codes, machines, and environments: reflections and experiences
Vincenzo De Florio
Code explicitly refers to a reference machine and, implicitly, to a set of conditions often called the system model and the fault model.
If one wants to guarantee an agreed-upon quality of service, one needs to either make assumptions about those conditions or adapt to them.
In this lecture I present this problem and a number of solutions, both practical and theoretical, that I have devised in the course of my career.
Although the main accent is on programming languages, here I provide links and references to other approaches that operate at algorithmic- and system-level.
How Resilient Are Our Societies? Analyses, Models, Preliminary Results
Vincenzo De Florio
Traditional social organizations such as those for the management of healthcare and civil defense are the result of designs and realizations that matched well with an operational
context considerably different from the one we are experiencing today: a simpler world, characterized by a greater amount of resources to match fewer users producing lower peaks of requests.
The new context reveals all the fragility of our societies: unmanageability is just around the corner unless we complement the “old recipes” with smarter forms of social organization.
Here we analyze this problem and propose a refinement to our fractal social organizations as a model for resilient cyber-physical societies. Evidence to our claims is provided by simulating our model in terms of multi-agent systems.
In this presentation we introduce a family of gossiping algorithms whose members share the same structure but vary in performance as a function of a combinatorial parameter. We show that this parameter may be considered as a “knob” controlling the amount of communication parallelism characterizing the algorithms. We then introduce procedures to operate the knob and choose parameter values matching the number of communication channels currently provided by the available communication system(s). In so doing we provide a robust mechanism to tune the production of requests for communication to the current operational conditions of the consumers of such requests. This can be used to achieve high performance and programmatic avoidance of undesirable events such as message collisions.
Paper available at https://dl.dropboxusercontent.com/u/67040428/Articles/pdp12.pdf
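The knob-selection procedure might look like the following minimal sketch; the feasibility rule here (largest knob value not exceeding the number of channels) is an assumption for illustration, not the paper's actual procedure:

```python
def choose_knob(available_channels, knob_values):
    """Pick the largest combinatorial parameter ("knob" value) whose
    implied communication parallelism does not exceed the channels the
    communication system currently provides; fall back to the most
    conservative value otherwise."""
    feasible = [k for k in knob_values if k <= available_channels]
    return max(feasible) if feasible else min(knob_values)

assert choose_knob(4, [1, 2, 4, 8]) == 4   # exact match to the channels
assert choose_knob(3, [1, 2, 4, 8]) == 2   # avoid oversubscribing channels
assert choose_knob(0, [1, 2, 4, 8]) == 1   # degraded network: minimal knob
```

Re-evaluating the knob as channel availability changes yields the adaptive tuning the abstract describes.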
Proposed pricing model for cloud computing
Adeel Javaid
Cloud computing is an emerging technology of business computing and is becoming a development trend. Entry into the cloud generally takes the form of a queue, so that each user must wait until the current user has been served. In the system, each Cloud Computing User (CCU) requests resources from a Cloud Computing Service Provider (CCSP); if the CCU finds the server busy, it has to wait until the current user completes its job, which leads to longer queue lengths and increased waiting times. It is therefore the CCSPs' task to serve users with little waiting time, otherwise users may leave the queue. CCSPs can use multiple servers to reduce queue length and waiting time. In this paper, we show how multiple servers can reduce the mean queue length and waiting time. Our approach is to treat a multiserver system as an M/M/m queuing model, so that a profit-maximization model can be worked out.
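The M/M/m analysis can be sketched with the standard Erlang-C formulas; the arrival and service rates below are illustrative values, and the paper's profit model is not reproduced:

```python
from math import factorial

def mmm_metrics(lam, mu, m):
    """M/M/m metrics: Erlang-C probability of waiting, mean waiting time
    Wq, and mean queue length Lq. Requires lam < m*mu (stability)."""
    a = lam / mu                      # offered load
    rho = a / m                       # per-server utilization
    assert rho < 1, "system must be stable"
    partial_sum = sum(a**k / factorial(k) for k in range(m))
    pm = a**m / factorial(m)
    c = pm / ((1 - rho) * partial_sum + pm)   # P(arriving job must wait)
    wq = c / (m * mu - lam)           # mean waiting time in queue
    lq = lam * wq                     # mean queue length (Little's law)
    return c, wq, lq

# Adding a server shrinks both mean queue length and mean waiting time:
_, wq1, lq1 = mmm_metrics(lam=3.0, mu=4.0, m=1)
_, wq2, lq2 = mmm_metrics(lam=3.0, mu=4.0, m=2)
assert wq2 < wq1 and lq2 < lq1
```

For the single-server case above, Wq = 0.75 and Lq = 2.25; with two servers they drop to roughly 0.041 and 0.123, the effect the paper quantifies.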
Implementation of the trinity of the control system based on OPC
IJRES Journal
The WinCC+PLC control system is a typical real-time control system. Many engineering colleges introduce corresponding control experiments in relevant courses to enhance students' understanding of this material. But equipping a laboratory with a variety of experimental subjects needs both venues and funds and introduces safety risks. This paper gives a very good solution to this problem by introducing a MATLAB virtual control object into the classic WinCC+PLC control system. What's more, it realizes a seamless connection between MATLAB and the WinCC+PLC control system after analysing how to build the PID controller in STEP 7.
Dynamic Load Calculation in A Distributed System using centralized approach
IJARIIT
The building of networks and the establishment of communication protocols have led to distributed systems, in which computers that are linked in a network cooperate on a task. The task is divided by the master node into small parts (subproblems) and given to the nodes of the distributed system to solve, which gives better time complexity than solving the problem on a single machine. Load balancing is the process of redistributing the work load among nodes of the distributed system to improve both resource utilization and job response time, while also avoiding a situation where some nodes are heavily loaded while others are idle or doing little work. So before sending these parts of the problem to the nodes, the master node should know the actual work load of all the nodes. We take a dynamic approach in which the master finds out the work load of each participating node in the distributed system before sending the parts of the problem to the nodes.
This paper describes an algorithm which runs in the master machine, collects information from the nodes of the distributed system (a client-server application), and calculates the current work load of the nodes. The algorithm is developed in such a way that it can calculate the loads of the nodes dynamically: the loads can be re-evaluated when nodes are added or deleted, or during the nodes' current operation. The whole system is implemented on Linux machines over a local area network.
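The master's load-collection step might be sketched as below; the `poll_node` callback and the normalized load metric are assumptions for illustration, not the paper's actual protocol:

```python
def collect_loads(poll_node, nodes):
    """Hypothetical sketch of the master's step: poll each node for
    (running_jobs, cpu_capacity) — e.g. over a TCP request in the real
    client-server application — and rank nodes by current load."""
    loads = {}
    for node in nodes:
        running, capacity = poll_node(node)
        loads[node] = running / capacity      # normalized work load
    return sorted(nodes, key=loads.get)       # least-loaded nodes first

# Simulated cluster state: node -> (running jobs, CPU capacity)
fake_cluster = {"n1": (8, 4), "n2": (2, 4), "n3": (4, 4)}
order = collect_loads(lambda n: fake_cluster[n], ["n1", "n2", "n3"])
assert order == ["n2", "n3", "n1"]
```

The master would then dispatch the next subproblems to the nodes at the front of this ordering.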
Parallel Patterns for Window-based Stateful Operators on Data Streams: an Alg...
Tiziano De Matteis
Talk given at HLPP 2015
For the version with transition please check: https://docs.google.com/presentation/d/1yhsSff97f434wR-VA1szlqKxx52YMYKkdw1GVkBDyF8/edit?usp=sharing
This paper presents a new simulator used to distribute and execute real-time simulations: RT-LAB, developed by Opal-RT Technologies (Montreal, Canada). One of its essential characteristics is its seamless integration with MATLAB/Simulink. RT-LAB allows the conversion of Simulink models to real time via Real-Time Workshop (RTW) and their execution on one or more processors. In this context, the paper focuses on RT-LAB real-time simulation as a complement to the MATLAB/Simulink environment, which has been used to simulate the flywheel energy storage system (FESS) / variable-speed wind generation (VSWG) assembly. The purpose of employing a fairly new real-time platform (RT-LAB OP-5600) is to reduce test and prototyping time. This application is executed on each element of our model, previously developed under MATLAB/Simulink. The real-time simulation results are observed on the workstation.
IMPLEMENTATION OF UNSIGNED MULTIPLIER USING MODIFIED CSLA
eeiej_journal
Multiplication and addition are the most widely and most frequently used arithmetic computations performed in all digital signal processing applications. Addition is the basic operation for many digital applications. The aim is to develop area-efficient, high-speed and low-power devices. Accurate operation of a digital system is mainly influenced by the performance of its adders. Multipliers are also a very important component in digital systems.
Efficient Resource Allocation to Virtual Machine in Cloud Computing Using an ...
ijceronline
The focus of this paper is an advanced algorithm for resource allocation and load balancing that can detect and avoid deadlock while allocating processes to virtual machines. When processes are allocated to a VM they execute in a queue: the first process gets the resources while the others remain in a waiting state, and the rest of the VMs remain idle. To utilize the resources better, we analyze the algorithm with the help of First-Come, First-Served (FCFS) scheduling, Shortest-Job-First (SJF) scheduling, Priority scheduling, Round Robin (RR) and the CloudSim simulator.
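The scheduling comparison can be illustrated with a small waiting-time calculation; the burst times are hypothetical, and the paper's CloudSim experiments are not reproduced here:

```python
def mean_waiting_time(bursts, policy="fcfs"):
    """Mean waiting time of processes under FCFS or non-preemptive SJF,
    all processes assumed to arrive at time 0 (textbook setting)."""
    order = sorted(bursts) if policy == "sjf" else list(bursts)
    t, total_wait = 0, 0
    for b in order:
        total_wait += t   # each process waits for all that ran before it
        t += b
    return total_wait / len(bursts)

bursts = [24, 3, 3]                               # CPU burst times
assert mean_waiting_time(bursts, "fcfs") == 17.0  # (0 + 24 + 27) / 3
assert mean_waiting_time(bursts, "sjf") == 3.0    # (0 + 3 + 6) / 3
```

The gap between the two policies is exactly the kind of effect a deadlock-aware allocator must weigh when ordering processes on a VM.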
These slides describe various techniques related to parallel processing (vector processing and array processors), the arithmetic pipeline, the instruction pipeline, SIMD processors, and attached array processors.
PID Tuning using Ziegler–Nichols - MATLAB Approach
Waleed El-Badry
This is an unreleased lab for undergraduate Mechatronics students to learn how to apply the Ziegler–Nichols method to find the PID factors using MATLAB.
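The classic closed-loop Ziegler–Nichols rules the lab practices can be written down directly; the coefficients below are the standard textbook values, and this Python sketch only mirrors what the lab does in MATLAB:

```python
def ziegler_nichols_pid(ku, tu):
    """Classic closed-loop Ziegler–Nichols tuning: given the ultimate
    gain Ku and the ultimate oscillation period Tu, return the PID
    factors (Kp, Ki, Kd) using Kp = 0.6*Ku, Ti = Tu/2, Td = Tu/8."""
    kp = 0.6 * ku
    ti, td = tu / 2.0, tu / 8.0
    return kp, kp / ti, kp * td   # Ki = Kp/Ti, Kd = Kp*Td

# Example: plant found to oscillate at Ku = 10 with period Tu = 2 s
kp, ki, kd = ziegler_nichols_pid(ku=10.0, tu=2.0)
assert (kp, ki, kd) == (6.0, 6.0, 1.5)
```

Ku and Tu are obtained experimentally by raising a proportional-only gain until sustained oscillation, which is the hands-on part of the lab.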
In the Fifties, Arnold Schönberg introduced a model for music composition that he called "Grundgestalt", the basic shape. In this seminar I show how I interpreted this concept as a generative music model that translates the orbits of dynamic systems into musical components. I also describe a family of experiments that led me to the creation of simple and not-so-simple musical compositions, which I call "my little Grundgestalten”. Excerpts from a selection of those compositions will be presented.
Models and Concepts for Socio-technical Complex Systems: Towards Fractal Soci...
Vincenzo De Florio
We introduce fractal social organizations—a novel class of socio-technical complex systems characterized
by a distributed, bio-inspired, hierarchical architecture. Based on the same building block that is recursively
applied at different layers, said systems provide a homogeneous way to model collective behaviors of
different complexity and scale. Key concepts and principles are enunciated by means of a case study and a
simple formalism. As preliminary evidence of the adequacy of the assumptions underlying our systems here
we define and study an algebraic model for a simple class of social organizations. We show how despite its
generic formulation, geometric representations of said model exhibit the spontaneous emergence of complex
hierarchical and modular patterns characterized by structured addition of complexity and fractal nature—
which closely correspond to the distinctive architectural traits of our fractal social organizations. Some
reflections on the significance of these results and a view to the next steps of our research conclude this
contribution.
On the Role of Perception and Apperception in Ubiquitous and Pervasive Enviro...
Vincenzo De Florio
Building on top of classic work on the perception of natural systems this paper addresses the role played by such quality in
environments where change is the rule rather than the exception. As in natural systems, perception in software systems takes two major forms: sensory perception and awareness (also known as apperception). For each of these forms we introduce semi-formal models that allow us to discuss and characterize perception and apperception failures in software systems evolving in environments subjected to rapid and sudden changes—such as those typical of ubiquitous and pervasive computing. Our models also provide us with two partial orders to compare such software systems with one another as well as with reference environments. When those
environments evolve or change, or when the software themselves evolve after their environments, the above partial orders may be used to compute new environmental fits and different strategic fits and gain insight on the degree of resilience achieved through the current adaptation steps.
Service-oriented Communities: A Novel Organizational Architecture for Smarter...
Vincenzo De Florio
The seminar I shall present at Masaryk University in Brno on May 19, 2016. A video of this presentation is available at https://www.youtube.com/edit?video_id=Fu5kv0sFWG4
This course teaches engineering students how to program in C. I gave this course for several years in the framework of the "Advanced Technology Higher Education Network" / SOCRATES program.
A framework for trustworthiness assessment based on fidelity in cyber and phy...
Vincenzo De Florio
We introduce a method for the assessment of trust for n-open systems based on a measurement of fidelity and present a prototypic implementation of a compliant architecture. We construct a MAPE loop which monitors the compliance between corresponding figures of interest in the cyber and physical domains; derive measures of the system’s trustworthiness; and use them to plan and execute actions aiming at guaranteeing system safety and resilience. We conclude with a view on our future work.
Presented at ANTIFRAGILE'15
Companion paper available at http://www.sciencedirect.com/science/article/pii/S1877050915008923
A behavioural model for the discussion of resilience, elasticity, and antifra...
Vincenzo De Florio
Resilience is one of those "general systems attributes" that appear to play a central role in several disciplines - including ecology, business, psychology, industrial safety, microeconomics, computer networks, security, management science, cybernetics, control theory, crisis and disaster management. Resilience thus seems to be "needed" everywhere; and yet, even in the framework of a same discipline, it is not easy to define it precisely and consensually. To add to the confusion, other terms such as elasticity, change tolerance, and antifragility, although clearly related to resilience, cannot be easily differentiated.
In this talk I tackle this problem by introducing a behavioural model of resilience. I interpret resilience as the property emerging from the interaction of the behaviours produced by two "players": a system and a hosting environment. The outcome of said interaction depends on both intrinsic and extrinsic factors, including the systemic "traits" of the system but also how the system's endowment matches the requirements expressed by the behaviours of the environment. I show how the behavioural approach provides a unifying framework within which it is possible to express coherent definitions for elasticity, change tolerance, and antifragility.
A Behavioral Interpretation of Resilience and Antifragility
Vincenzo De Florio
In this presentation I discuss resilience and antifragility as behaviors resulting from the coupling of a system and its environment(s). Depending on the interactions between these two "ends" and on the quality of the individual behaviors that they may exercise, different strategies may be chosen: elasticity (change masking); entelechism (change tolerance); and antifragility (adapting to & learning from change). When the environment is very simple and only capable of so-called "random behavior", often the only effective strategy towards resilience is off-line dimensioning of redundancy as a result of a worst-case assessment of disturbances and/or threats. Much more complex and variegated is the case when both systems and environments are "intelligent" -- or at least able to exercise complex teleological and extrapolatory behaviors. In this case both system and ambient may choose among a variety of strategies in what could be regarded as a complex evolutionary game theory setting.
Community Resilience: Challenges, Requirements, and Organizational Models
Vincenzo De Florio
An important challenge for human societies is that of mastering the complexity of Community Resilience, namely “the sustained ability of a community to utilize available resources to respond to, withstand, and recover from adverse situations”. The above concise definition puts the accent on an important requirement: a community’s ability to
make use in an intelligent way of the available resources, both institutional and spontaneous, in order to match the complex evolution of the “significant multi-hazard threats characterizing a crisis”. Failing to address such a requirement exposes a community to extensive failures that are known to exacerbate the consequences of natural and human-induced crises. As a consequence, we experience today an urgent need to respond to the challenges of community resilience engineering. This problem, some reflections, and preliminary prototypical contributions constitute
the topics of this presentation.
A companion article is available at https://dl.dropboxusercontent.com/u/67040428/Articles/serene14.pdf
On the Behavioral Interpretation of System-Environment Fit and Auto-Resilience
Vincenzo De Florio
Already 71 years ago Rosenblueth, Wiener, and Bigelow introduced the concept of the “behavioristic study of natural events” and proposed a classification of systems according to the quality of the behaviors they are able to exercise. In this presentation we consider the problem of the resilience of a system when deployed in a changing environment, which we tackle by considering the behaviors both the system organs and the environment mutually exercise. We then introduce a partial order and a metric space for those behaviors, and we use them to define a behavioral interpretation of the concept of system-environment fit. Moreover we suggest that behaviors based on the extrapolation of future environmental requirements would allow systems to proactively improve their own system-environment fit and optimally evolve their resilience. Finally we describe how we plan to express a complex optimization strategy in terms of the concepts introduced in this presentation.
The paper accompanying this presentation is available at https://dl.dropboxusercontent.com/u/67040428/Articles/DF14b_Wiener21stA.pdf
Antifragility = Elasticity + Resilience + Machine Learning. Models and Algori...
Vincenzo De Florio
Presentation for the ANTIFRAGILE 2014 workshop, https://sites.google.com/site/resilience2antifragile/
Abstract: We introduce a model of the fidelity of open systems—fidelity being interpreted here as the compliance between corresponding
figures of interest in two separate but communicating domains. A special case of fidelity is given by real-timeliness and synchrony,
in which the figure of interest is the physical and the system’s notion of time. Our model covers two orthogonal aspects of fidelity,
the first one focusing on a system’s steady state and the second one capturing that system’s dynamic and behavioral characteristics.
We discuss how the two aspects correspond respectively to elasticity and resilience and we highlight each aspect’s qualities and
limitations. We then sketch the elements of a new model coupling both of the first model’s aspects and complementing them
with machine learning. Finally, a conjecture is put forward that the new model may represent a first step towards compositional
criteria for antifragile systems.
Service-oriented Communities and Fractal Social Organizations - Models and co...
Vincenzo De Florio
Presentation given by Vincenzo De Florio at the Ceremony for the handing of the 2013 Faculty Awards.
Keywords: Fractal social organizations; service oriented communities; mutual assistance communities
TOWARDS PARSIMONIOUS RESOURCE ALLOCATION IN CONTEXT-AWARE N-VERSION PROGRAMMING
Vincenzo De Florio
Adopting classic redundancy-based fault-tolerant schemes in
highly dynamic distributed computing systems does not
necessarily result in the anticipated improvement in
dependability. This primarily stems from statically predefined
redundancy configurations employed within many classic
dependability strategies, which, as is well known, may negatively
impact the schemes' overall effectiveness. In this paper, a
novel dependability strategy is introduced encompassing
advanced redundancy management, aiming to autonomously
tune its internal configuration in function of disturbances
observed. Policies for parsimonious resource allocation are
presented thereafter, intent upon increasing the scheme's cost
effectiveness without breaching its availability objective. Our
experimentation suggests that the proposed solution can
achieve a substantial improvement in availability, compared
to traditional, static redundancy strategies, and that tuning the
adopted degree of redundancy to the actual observed
disturbances allows unnecessary resource expenditure to be
reduced, therefore enhancing cost-effectiveness.
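As an illustration of tuning the degree of redundancy to observed disturbances, here is a minimal sketch assuming independent replica failures; the availability model and the specific policy are assumptions, not the paper's actual strategy:

```python
def tune_redundancy(observed_failure_prob, target_availability,
                    max_replicas=10):
    """Parsimonious redundancy sketch: return the smallest number of
    replicas whose combined availability meets the objective, assuming
    replicas fail independently with the observed probability."""
    for n in range(1, max_replicas + 1):
        availability = 1 - observed_failure_prob ** n
        if availability >= target_availability:
            return n
    return max_replicas   # cap resource expenditure

# Higher observed disturbance levels demand more replicas:
assert tune_redundancy(0.1, 0.995) == 3
assert tune_redundancy(0.01, 0.999) == 2
```

Re-running the tuner as disturbance estimates change captures the idea of spending no more redundancy than the availability objective requires.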
A Formal Model and an Algorithm for Generating the Permutations of a Multiset
Vincenzo De Florio
This paper may be considered as a mathematical divertissement as well as a didactical tool for
undergraduate students in a university course on algorithms and computation. The well-known problem of
generating the permutations of a multiset of marks is considered. We define a formal model and an abstract
machine (an extended Turing machine). Then we write an algorithm to compute on that machine the successor
of a given permutation in the lexicographically ordered set of permutations of a multiset. Within the model we
analyze the algorithm, prove its correctness, and show that the algorithm solves the above problem. Then we
describe a slight modification of the algorithm and we analyze in which cases it may result in an improvement of
execution times.
This paper, the ideas in it, and its realization are the work of the first author only.
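The successor computation the paper studies can be sketched with the standard next-permutation method, which also handles repeated marks; this is an illustrative array formulation, not the paper's extended-Turing-machine algorithm:

```python
def successor(perm):
    """Return the next permutation of a multiset in lexicographic
    order, or None if perm is already the last one."""
    p = list(perm)
    i = len(p) - 2
    while i >= 0 and p[i] >= p[i + 1]:   # find the rightmost ascent
        i -= 1
    if i < 0:
        return None                      # perm was non-increasing: last
    j = len(p) - 1
    while p[j] <= p[i]:                  # rightmost mark exceeding p[i]
        j -= 1
    p[i], p[j] = p[j], p[i]
    p[i + 1:] = reversed(p[i + 1:])      # smallest possible suffix
    return p

assert successor([1, 2, 2]) == [2, 1, 2]   # next in {1, 2, 2}'s order
assert successor([2, 2, 1]) is None        # lexicographic maximum
```

Iterating `successor` from the sorted multiset enumerates every distinct permutation exactly once, which is the generation problem the paper formalizes.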
A FAULT-TOLERANCE LINGUISTIC STRUCTURE FOR DISTRIBUTED APPLICATIONS
Vincenzo De Florio
The structures for the expression of fault-tolerance provisions in the application software are the central topic of this dissertation.
Structuring techniques provide means to control complexity, the latter being a relevant factor for the introduction of design faults. This fact and the ever increasing complexity of today’s distributed software justify the need for simple, coherent, and effective structures for the expression of fault-tolerance in the application software. A first contribution of this dissertation is the definition of a base of structural attributes with which application-level fault-tolerance structures can be qualitatively assessed and compared with each other and with respect to the above mentioned need. This result is then used to provide an elaborated survey of the state of the art of software fault-tolerance structures.
The key contribution of this work is a novel structuring technique for the expression of the fault-tolerance design concerns in the application layer of those distributed software systems that are characterised by soft real-time requirements and a number of processing nodes known at compile time. The main thesis of this dissertation is that this new structuring technique is capable of exhibiting satisfactory values of the structural attributes in the domain of soft real-time, distributed, and parallel applications. Following this novel approach, besides the conventional programming language addressing the functional design concerns, a special-purpose linguistic structure (the so-called “recovery language”) is available to address error recovery and reconfiguration. This recovery language comes into play as soon as an error is detected by an underlying error-detection layer, or when some erroneous condition is signalled by the application processes. Error recovery and reconfiguration are specified as a set of guarded actions, i.e., actions that require a pre-condition to be fulfilled in order to be executed. Recovery actions deal with coarse-grained entities of the application, and pre-conditions query the current state of those entities.
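A guarded-action recovery step can be sketched as follows; the syntax, state representation, and rule are hypothetical illustrations, not the actual recovery language:

```python
def run_recovery(state, guarded_actions):
    """Evaluate each guarded action: the recovery action fires only when
    its pre-condition on the application's coarse-grained entities holds."""
    for guard, action in guarded_actions:
        if guard(state):      # pre-condition queries entity state
            action(state)     # error recovery / reconfiguration step

# Coarse-grained entities and one recovery rule (all names hypothetical):
state = {"node1": "faulty", "node2": "ok", "spare": "idle"}
rules = [
    (lambda s: s["node1"] == "faulty" and s["spare"] == "idle",
     lambda s: s.update(node1="isolated", spare="active")),
]
run_recovery(state, rules)
assert state == {"node1": "isolated", "node2": "ok", "spare": "active"}
```

Keeping such rules apart from the functional code is exactly the separation of design concerns the approach advocates.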
An important added value of this so-called “recovery language approach” is that the executable code is structured so that the portion addressing fault-tolerance is distinct and separated from the rest of the code. This allows for division of complexity into distinct blocks that can be tackled independently of each other.
Truly dependable software systems should be built with structuring techniques able to decompose the software complexity without
hiding important hypotheses and assumptions such as those regarding
their target execution environment and the expected fault- and system
models. A judicious assessment of what can be made transparent and
what should be translucent is necessary. This paper discusses a practical
example of a structuring technique built with these principles in mind:
Reflective and refractive variables. We show that our technique offers
an acceptable degree of separation of the design concerns, with limited
code intrusion; at the same time, by construction, it separates but does
not hide the complexity required for managing fault-tolerance. In particular, our technique offers access to collected system-wide information
and the knowledge extracted from that information. This can be used
to devise architectures that minimize the hazard of a mismatch between
dependable software and the target execution environments.
ARRL: A Criterion for Composable Safety and Systems Engineering
Vincenzo De Florio
While safety engineering standards define rigorous and controllable
processes for system development, safety standards’ differences in distinct
domains are non-negligible. This paper focuses in particular on the aviation,
automotive, and railway standards, all related to the transportation market.
The reasons for these differences are many, ranging from history, heuristic and established practices, and legal frameworks to the psychological perception of the safety risks. In particular we argue that the
Safety Integrity Levels are not sufficient to be used as a top level requirement
for developing a safety-critical system. We argue that Quality of Service is a
more generic criterion that takes the trustworthiness as perceived by users better
into account. In addition, safety engineering standards provide very little
guidance on how to compose safe systems from components, while this is the
established engineering practice. In this paper we develop a novel concept
called Assured Reliability and Resilience Level as a criterion that takes the
industrial practice into account and show how it complements the Safety
Integrity Level concept.
Implementing a Role Based Mutual Assistance Community with Semantic Service D...
Vincenzo De Florio
The population of elderly people is increasing rapidly, and this is becoming a predominant aspect of our society. For several reasons
so significant a share of the human society is simply regarded as
“retired” – a word condemning the elderly to a reduced
participation in all active life, regardless of their actual
conditions and abilities. In previous work, we discussed how
community resources can be organized in a better way. In
particular we introduced a so-called mutual assistance
community – a digital ecosystem that removes any predefined
and artificial distinction between care-givers and care-takers and
provides a service-oriented infrastructure for intelligent matching
of the supply and demand of services. According to this new
paradigm all people are potentially active participants in
activities defined by the people’s current needs, abilities,
locations, and availabilities. Moving from this conceptual view to
practical implementation calls for an architecture able to match
adequately demand and supply of services. This paper presents
an implementation of such an architecture based on semantic
service description and matching. In comparison with our
previous implementation, main added values include a greater
flexibility in service representation and service matching and
considerable improvements in performance.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Epistemic Interaction - tuning interfaces to provide information for AI support
Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
The Metaverse and AI: how can decision-makers harness the Metaverse for their...Jen Stirrup
The Metaverse is popularized in science fiction, and now it is becoming closer to being a part of our daily lives through the use of social media and shopping companies. How can businesses survive in a world where Artificial Intelligence is becoming the present as well as the future of technology, and how does the Metaverse fit into business strategy when futurist ideas are developing into reality at accelerated rates? How do we do this when our data isn't up to scratch? How can we move towards success with our data so we are set up for the Metaverse when it arrives?
How can you help your company evolve, adapt, and succeed using Artificial Intelligence and the Metaverse to stay ahead of the competition? What are the potential issues, complications, and benefits that these technologies could bring to us and our organizations? In this session, Jen Stirrup will explain how to start thinking about these technologies as an organisation.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Leading Change strategies and insights for effective change management pdf 1.pdf
Hpcn97
B. Hertzberger, P. Sloot (Eds.): Proc. Int. Conf. and Exhib. on
High-Performance Computing and Networking (HPCN Europe 1997),
Lecture Notes in Computer Science 1225 (Springer, Berlin, 1997): 644–653
An Application-Level Dependable Technique
for Farmer-Worker Parallel Programs
Vincenzo De Florio, Geert Deconinck, Rudy Lauwereins
Katholieke Universiteit Leuven
Electrical Engineering Dept - ACCA
Kard. Mercierlaan 94 – B-3001 Heverlee – Belgium
Abstract. An application-level technique is described for farmer-worker parallel applications which allows a worker to be added or removed from the computing farm at any moment of the run time without affecting the overall outcome of the computation. The technique is based on uncoupling the farmer from the workers by means of a separate module which asynchronously feeds the latter with new "units of work" on an on-demand basis, and on a special feeding strategy based on bookkeeping the status of each work-unit. An augmentation of the LINDA model is finally proposed to exploit the bookkeeping algorithm for tuple management.
1 Introduction
Parallel computing is nowadays the only technique that can be used in order to achieve the impressive computing power needed to solve a number of challenging problems; as such, it is being employed by an ever growing community of users in spite of what we feel are two main disadvantages, namely:
1. harder-to-use programming models, programming techniques and development tools (if any), which sometimes translate into programs that don't match as efficiently as expected with the underlying parallel hardware, and
2. the inherently lower level of dependability that characterizes any such parallel hardware, i.e., a higher probability for events like a node's permanent or temporary failure.
A real, effective exploitation of any given parallel computer asks for solutions which take into deep account the above outlined problems.
Let us consider for example the synchronous farmer-worker algorithm, i.e., a well-known model for structuring data-parallel applications: a master process, namely the farmer, feeds a pool of slave processes, called workers, with some units of work; it then polls them until they return their partial results, which are eventually recollected and saved. Though quite simple, this scheme may give good results, especially in homogeneous, dedicated environments.
But how does this model react to events like a failure of a worker, or more simply to a worker's performance degradation due, e.g., to the exhaustion of any vital resource? Without substantial modifications, this scheme is not able to cope with these events; they would seriously affect the whole application or its overall performance, regardless of the high degree of hardware redundancy implicitly available in any parallel system. The same inflexibility prevents a failed worker from re-entering the computing farm once it has regained the proper operational state.
As opposed to this synchronous structuring, it is possible for example to implement the farmer-worker model by de-coupling the farmer from the workers by means of an intermediate module, a dispatcher, which asynchronously feeds the latter and supplies them with new units of work on an on-demand basis. This strategy guarantees some sort of dynamic balancing of the workload even in heterogeneous, distributed environments, thus exhibiting a higher matching to the parallel hardware. The Live Data Structure computational paradigm, known from the LINDA context, makes this particularly easy to set up (see for example [1,2,4]).
With this approach it is also possible to add a new worker at run-time without any notification to either the farmer or the intermediate module; the newcomer will simply generate additional, non-distinguishable requests for work. But again, if a worker fails or its performance degrades, the whole application may fail, or its overall outcome may be affected or seriously delayed. This is particularly important when one considers the inherent loss in dependability of any parallel (i.e., replicated) hardware.
The next sections introduce and discuss a modification to the above sketched asynchronous scheme, which inherits the advantages of its parent and offers new ones, namely:
- it allows a non-solitary, temporarily slowed down worker to be left out of the processing farm as long as its performance degradation exists, and
- it allows a non-solitary worker which has been permanently affected by some fault to be definitively removed from the farm,
both of them without affecting the overall outcome of the computation, and dynamically spreading the workload among the active processors in a way that results in an excellent match to various different MIMD architectures.
2 The Technique
For the purpose of describing the technique we define the following scenario: a MIMD machine disposes of n+2 identical "nodes" (n > 0), or processing entities, connected by some communication line. On each node a number of independent sequential processes are executed on a time-sharing basis. A message passing library is available for sending and receiving messages across the communication line. A synchronous communication approach is used: a sender blocks until the intended receiver gets the message. A receiver blocks waiting for a message from a specific sender, or for a message from a number of senders. When a message arrives, the receiver is awakened and is able to receive that message and to know the identity of the sender. Nodes are numbered from 0 to n+1. Node 0 is connected to an input line and node n+1 is connected to an output line.
- Node 0 runs:
  a Farmer process, connected by the input line to an external producer device. From now on we consider a camera as the producer device. A control line wires again the Farmer to the camera, so that the latter can be commanded to produce new data and eventually send this data across the input line;
  a Dispatcher process, yet to be described.
- Node n+1 runs a Collector process, to be described later on, connected by the output line to an external storage device, e.g., a disk;
- Each of the nodes from 1 to n is purely devoted to the execution of one instance of the Worker process. Each Worker is connected to the Dispatcher and to the Collector processes.
[Figure: message flows among the processes. The Farmer sends NEW-RUN, STOP, and (k, b_k) messages to the Dispatcher; the Dispatcher sends SLEEP, RESUME, STOP, and (k, w) messages to the Workers; each Worker sends (k, o) messages to the Collector.]
Fig. 1. Summary of the interactions among the processes.
2.1 Interactions Between the Farmer and the Dispatcher
On demand of the Farmer process, the camera sends it an input image. Once it has received an image, the Farmer performs a predefined, static data decomposition, creating m equally sized sub-images, or blocks. Blocks are numbered from 1 to m, and are represented by variables b_i, 1 ≤ i ≤ m.
The Farmer process interacts exclusively with the camera and with the Dispatcher process.
- Three classes of messages can be sent from the Farmer process to the Dispatcher (see Fig. 1):
  1. a NEW RUN message, which means: "a new bunch of data is available";
  2. a STOP message, which means that no more input is available, so the whole process has to be terminated;
  3. a couple (k, b_k), 1 ≤ k ≤ m, i.e., an integer which identifies a particular block (it will be referred to from now on as a "block-id"), followed by the block itself.
- The only type of message that the Dispatcher process sends to the Farmer process is a block-id, i.e., a single integer in the range {1, …, m}, which expresses the information that a certain block has been fully processed by a Worker and recollected by the Collector (see §2.3).
At the other end of the communication line, the Dispatcher is ready to process a number of events triggered by message arrivals. For example, when a class-3 message comes in, the block is stored into a work buffer as follows:
    receive (k, b_k)
    s_k ← DISABLED
    w_k ← b_k
(Here, receive is the function for receiving an incoming message, s is a vector of m integers pre-initialized to DISABLED, which represents some status information that will be described later on, and w is a vector of "work buffers", i.e., bunches of memory able to store any block. DISABLED is an integer which is not in the set {1, …, m}. The "←" sign is the assignment operator.)
As the Farmer process sends a class-1 message, that is, a NEW RUN signal, the Dispatcher processes that event as follows:
    s ← 0
    broadcast RESUME
that is, it zeroes each element of s and then broadcasts the RESUME message to the whole farm.
When the first image arrives at the Farmer process, it produces a series (b_i), 1 ≤ i ≤ m, and then a sequence of messages (i, b_i), 1 ≤ i ≤ m. Finally, the Farmer sends a NEW RUN message.
Starting from the second image, and while there are images to process from the camera, the Farmer performs the image decomposition in advance, thus creating a complete set of (k, b_k) couples. These couples are then sent to the Dispatcher on an on-demand basis: as soon as block-id i comes in, couple (i, b_i) is sent out. This is done to anticipate the transmission of the couples belonging to the next run of the computation. When eventually the last block-id of a certain run has been received, a complete set of "brand-new" blocks is already in the hands of the Dispatcher; at that point, sending the one NEW RUN message will simultaneously enable all blocks.
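For illustration, the Dispatcher's bookkeeping of incoming blocks may be sketched in Python. This is a hypothetical rendition: the class and method names are ours, and shared objects stand in for the synchronous message passing assumed by the paper.

```python
# Hypothetical sketch of the Dispatcher's per-block bookkeeping.
DISABLED = -1  # any integer outside {1, ..., m} will do

class Dispatcher:
    def __init__(self, m):
        self.m = m
        # s[k]: status/freshness of block k; w[k]: the work buffer itself.
        self.s = {k: DISABLED for k in range(1, m + 1)}
        self.w = {k: None for k in range(1, m + 1)}

    def on_block(self, k, b_k):
        # Class-3 message (k, b_k): store the block, keep it disabled
        # until the next NEW RUN enables the whole set at once.
        self.s[k] = DISABLED
        self.w[k] = b_k

    def on_new_run(self):
        # Class-1 message NEW RUN: zero every status entry, which
        # simultaneously marks all stored blocks as "brand new".
        for k in self.s:
            self.s[k] = 0
        # (A real Dispatcher would also broadcast RESUME to the farm.)
```

Zeroing s in one step is what lets a single NEW RUN message enable the entire pre-transmitted set of blocks.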
2.2 Interactions Between the Dispatcher and the Workers
The Dispatcher interacts with every instance of the Worker process.
- Four classes of messages can be sent from the Dispatcher to the Workers (see Fig. 1):
  1. a SLEEP message, which sets the receiver into a wait condition;
  2. a RESUME message, to get the receiver out of the waiting state;
  3. a STOP message, which makes the Worker terminate;
  4. a (k, w) couple, where w represents the input data to be elaborated.
- Worker j, 1 ≤ j ≤ n, interacts with the Dispatcher by sending it its worker-id message, i.e., the integer j. This happens when Worker j has finished dealing with a previously sent w working buffer and is available for a new (k, w) couple to work with.
In substance, Worker j continuously repeats the following loop:
    send j to Dispatcher
    receive message from Dispatcher
    process message
Clearly, send transmits a message. The last instruction, depending on the class of the incoming message, results in a number of different operations:
- if the message is a SLEEP, the Worker waits until the arrival of a RESUME message, which makes it resume the loop, or the arrival of any other message, which means that an error has occurred;
- if it is a STOP message, the Worker breaks the loop and exits the farm;
- if it is a (k, w) couple, the Worker starts computing the value f(w), where f is some user-defined function, e.g., an edge detector. If a RESUME event is raised during the computation of f, that computation is immediately abandoned and the Worker restarts the loop. Otherwise, the output couple (k, f(w)) is sent to the Collector process.
When the Dispatcher gets a j integer from Worker j, its expected response is a new (k, w) couple, or a SLEEP. What rules in this context is the s vector: if all entries of s are DISABLED, then a SLEEP message is sent to Worker j. Otherwise, an entry is selected among those with the minimum non-negative value, say entry l, and an (l, b_l) message is then sent as a response. s_l is finally incremented by 1.
More formally, consider the set $S = \{ s_i \in s \mid s_i \neq \mathrm{DISABLED} \}$; if $S$ is non-empty it is possible to partition $S$ according to the equivalence relation $\mathcal{R}$ defined as follows:
$$\forall (a, b) \in S \times S : a \,\mathcal{R}\, b \Leftrightarrow s_a = s_b.$$
So the blocks of the partition are the equivalence classes:
$$[x] \stackrel{\mathrm{def}}{=} \{ s \in S \mid \exists y \in \{1, \ldots, m\} \text{ such that } (s = s_y) \wedge (s_y = x) \}.$$
Now, first we consider
$$a = \min \{ b \mid \exists b \geq 0 \text{ such that } [b] \in S/\mathcal{R} \};$$
then we choose $l \in [a]$ in any way, e.g., pseudo-randomly; finally, message $(l, b_l)$ is sent to Worker $j$, $s_l$ is incremented, and the partition is reconfigured accordingly. If $S$ is the empty set, a SLEEP message is generated.
In other words, entry s_i, when greater than or equal to 0, represents some sort of a priority identifier (the lower the value, the higher the priority for block b_i). The block to be sent to a requesting Worker process is always selected among those with the highest priority; after the selection, s_i is updated by incrementing its value by 1. In this way, the content of s_i represents the degree of "freshness" of block b_i: it substantially counts the number of times the block has been picked up by a Worker process; fresher blocks are always preferred.
As long as there are "brand-new" blocks, i.e., blocks with a freshness attribute of 0, these are the blocks which are selected and distributed. Note that this means that as long as the above condition is true, each Worker deals with a different unit of work; on the contrary, as soon as the last brand-new block is distributed, the model admits that a same block may be assigned to more than one Worker. This is tolerated up to a certain threshold value; if any s_i becomes greater than that value, an alarm event is raised: too many workers are dealing with the same input data, which might mean that they are all affected by the same problem, e.g., a software bug resulting in an error when b_i is being processed. We won't deal with this special case. Another possibility is that two or more Workers have finished their work almost at the same time, thus rapidly bringing a flag to the threshold. Waiting for the processing time of one block may supply the answer.
A value of DISABLED for any s_i means that its corresponding block is not available to be computed. It is simply not considered during the selection procedure.
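The selection rule may be sketched as follows. This is a hypothetical rendition: the function name and the threshold parameter are ours, and the alarm is reduced to a printed warning.

```python
import random

DISABLED = -1

def select_block(s, threshold=5):
    """Pick a block-id with the minimum non-negative (freshest) status,
    or None if every entry is DISABLED (the caller then sends SLEEP).
    Hypothetical rendition of the selection rule of Sect. 2.2."""
    candidates = [k for k, v in s.items() if v != DISABLED]
    if not candidates:
        return None                        # S is empty: SLEEP
    lowest = min(s[k] for k in candidates)
    # choose pseudo-randomly within the freshest equivalence class [lowest]
    l = random.choice([k for k in candidates if s[k] == lowest])
    s[l] += 1                              # block l just became less fresh
    if s[l] > threshold:
        print(f"alarm: block {l} assigned too many times")
    return l
```

Note how the increment of s[l] is what reconfigures the partition: block l migrates to the next equivalence class, so it will not be re-selected until all equally fresh blocks have been handed out.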
2.3 Interactions Between the Workers and the Collector
Any Worker may send one class of messages to the Collector; no message is sent from the latter to any Worker (see Fig. 1).
The only allowed message is the couple (k, o), in which o is the fully processed output of the Worker's activity on the kth block.
The Collector's task is to fill a number of "slots", namely p_i, i = 1, …, m, with the outputs coming from the Workers. As two or more Workers are allowed to process a same block, thus producing two or more (k, o) couples, the Collector runs a vector of status bits which records the status of each slot: if f_i is FREE then p_i is "empty", i.e., it has never been filled in by any output before; if it is BUSY, it already holds an output. f is firstly initialized to FREE.
For each incoming message from the Workers, the Collector repeats the following sequence of operations:
    receive (k, o) from Worker
    if f_k is equal to FREE
    then
        send k to Dispatcher
        p_k ← o
        f_k ← BUSY
        check-if-full
    else
        detect
    endif
where:
check-if-full checks if, due to the last arrival, all entries of f have become BUSY. In that case, a complete set of partial outputs has been recollected and, after some user-defined post-processing (for example, a polygonal approximation of the chains of edges produced by the Workers), a global output can be saved, and the flag vector re-initialized:
    if f is equal to BUSY
    then
        post-process p
        save p
        f ← FREE
    endif
detect is a user-defined functionality: the user may choose to compare the two o's so as to be able to detect any inconsistency and start some recovery action, or may simply ignore the whole message.
Note also that an acknowledgment message (the block-id) is sent from the Collector to the Dispatcher, to inform it that an output slot has been occupied, i.e., a partial output has been gathered. This also means that the Farmer can anticipate the transmission of a block which belongs to the next run, if any.
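The Collector's bookkeeping may be sketched as follows. This is a hypothetical rendition: the callback parameters (send, post_process, save, detect) stand in for the message-passing primitives and user-defined functionalities of the paper.

```python
# Hypothetical sketch of the Collector's slot bookkeeping (Sect. 2.3).
FREE, BUSY = 0, 1

class Collector:
    def __init__(self, m, send, post_process, save, detect):
        self.p = {k: None for k in range(1, m + 1)}   # output slots
        self.f = {k: FREE for k in range(1, m + 1)}   # status bits
        self.send, self.post_process = send, post_process
        self.save, self.detect = save, detect

    def on_output(self, k, o):
        if self.f[k] == FREE:
            self.send("dispatcher", k)   # acknowledge: slot k now filled
            self.p[k] = o
            self.f[k] = BUSY
            self.check_if_full()
        else:
            self.detect(self.p[k], o)    # duplicate output: user-defined

    def check_if_full(self):
        # All slots BUSY: post-process, save, reset the flag vector.
        if all(v == BUSY for v in self.f.values()):
            self.save(self.post_process(self.p))
            self.f = {k: FREE for k in self.f}
```

The first output for a slot wins; later duplicates only reach detect, which is exactly what allows multiple Workers to process the same block safely.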
2.4 Interactions Between the Collector and the Dispatcher
As just stated, upon acceptance of an output, the Collector sends a block-id, say integer k, to the Dispatcher; it is the only message that goes from the Collector to the Dispatcher.
The Dispatcher then simply acts as follows:
    s_k ← DISABLED
    send k to Farmer
that is, the Dispatcher "disables" the kth unit of work: set S as defined in §2.2 is reduced by one element and consequently partition S/R changes its shape; then the block-id is propagated to the Farmer (see Fig. 1).
In the opposite direction, there is only one message that may travel from the Dispatcher to the Collector: the STOP message, which means that no more input is available and so processing is over. Upon reception of this message, the Collector stops itself, as does any other receiver in the farm.
3 Discussions and Conclusions
The just proposed technique uses asynchronicity in order to efficiently match a huge class of parallel architectures. It also uses the redundancy which is inherent to parallelism to make an application able to cope with events like, e.g., a failure of a node, or a node being slowed down, temporarily or not.
- If a node fails while it is processing block k, then no output block will be transferred to the Collector. When no more "brand-new" blocks are available, block k will be assigned to one or more Worker processes, up to a certain limit. During this phase the replicated processing modules of the parallel machine may be thought of as part of a hardware redundancy fault tolerance mechanism. This phase is over when any Worker module delivers its output to the Collector; consequently all the others are possibly explicitly forced to resume their processing loop or, if too late, their output is discarded;
- if a node has been for some reason drastically slowed down, then its block will probably be assigned to other, possibly non-slowed, Workers. Again, the first who succeeds has its output collected; the others are stopped or ignored.
In any case, from the point of view of the Farmer process, all these events are completely masked. The mechanism may be provided to a user in the form of some set of basic functions, making all technicalities concerning both parallel programming and fault tolerance transparent to the programmer.
Of course, nothing prevents the concurrent use of other fault tolerance mechanisms in any of the involved processes, e.g., using watchdog timers to understand that a Worker has failed and consequently reset the proper entry of vector f. The ability to re-enter the farm may also be exploited by committing a reboot of a failed node and restarting the Worker process on that node.
3.1 Reliability Analysis
In order to compare the original, synchronous farmer-worker model with the one described in this paper, a first step is given by observing that the synchronous model depicts a series system [3], i.e., a system in which each element is required not to have failed for the whole system to operate. This is not the case for the model described in this paper, in which a subset of the elements, namely the Worker farm, is a parallel system [3]: if at least one Worker has not failed, so it is for the whole farm subsystem. Note how Fig. 1 may also be thought of as the reliability block diagram of this system.
Considering the sole farm subsystem, if we let C_i(t), 1 ≤ i ≤ n, be the event that the Worker on node i has not failed at time t, and we let R(t) be the reliability of any Worker at time t, then, under the assumption of mutual independence between the events, we can conclude that:
$$R_s(t) \stackrel{\mathrm{def}}{=} P\Big(\bigcap_{i=1}^{n} C_i(t)\Big) = \prod_{i=1}^{n} R(t) = (R(t))^n \qquad (1)$$
being R_s(t) the reliability of the farm as a series system, and
$$R_p(t) \stackrel{\mathrm{def}}{=} 1 - P\Big(\bigcap_{i=1}^{n} \overline{C_i(t)}\Big) = 1 - \prod_{i=1}^{n} (1 - R(t)) = 1 - (1 - R(t))^n \qquad (2)$$
where R_p(t) represents the reliability of the farm as a parallel system. Of course failures must be independent, so again data-induced errors are not considered. Figure 2 shows the reliability of the farm in a series and in a parallel system as a Worker's reliability goes from 0 to 1.
Fig. 2. For a fixed value t, a number of graphs of R_p(t) (the reliability of the parallel system) and R_s(t) (the reliability of the series system) are portrayed as functions of R(t), the reliability of a Worker at time t, and n, the number of components. Each graph is labeled with its value of n; those above the diagonal portray reliabilities of parallel systems, while those below the diagonal pertain to series systems. Note that for n = 1 the models coincide, while for any n > 1, R_p(t) is always above R_s(t) except when R(t) = 0 (no reliable Worker) and when R(t) = 1 (totally reliable, failure-free Worker).
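Equations (1) and (2) are easily evaluated numerically, e.g.:

```python
def series_reliability(r, n):
    """Eq. (1): a series farm works only if all n Workers are up."""
    return r ** n

def parallel_reliability(r, n):
    """Eq. (2): a parallel farm works if at least one Worker is up."""
    return 1 - (1 - r) ** n
```

For instance, with R(t) = 0.9 and n = 4, the series model gives 0.9^4 ≈ 0.656, while the parallel model gives 1 - 0.1^4 = 0.9999, illustrating how the farm-as-parallel-system structure exploits replication.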
3.2 An Augmented LINDA Model
The whole idea pictured in this paper may be implemented in a LINDA tuple space manager (see for example [1,2]). Apart from the standard functions to access "common" tuples, a new set of functions may be supplied which deal with "book-kept tuples", i.e., tuples that are distributed to requestors by means of the algorithm sketched in §2.2. As an example:
- fout (for fault-tolerant out) may create a book-kept tuple, i.e., a content-addressable object with book-kept accesses;
- frd (fault-tolerant rd) may get a copy of a matching book-kept tuple, chosen according to the algorithm in §2.2;
- fin (fault-tolerant in) may read-and-erase a matching book-kept tuple, chosen according to the algorithm in §2.2,
and so on. The ensuing augmented LINDA model results in an abstract, elegant, efficient, dependable, and transparent mechanism to exploit a parallel hardware.
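Such a book-kept tuple manager may be sketched as follows. This is a toy, hypothetical rendition: only the names fout, frd, and fin come from the text; matching by predicate, the internal ids, and every other detail are assumptions of ours.

```python
import random

DISABLED = -1

class BookKeptSpace:
    """Toy tuple space with book-kept accesses (algorithm of Sect. 2.2)."""
    def __init__(self):
        self.tuples = {}    # id -> tuple
        self.status = {}    # id -> freshness counter, or DISABLED
        self.next_id = 0

    def fout(self, tup):
        """Fault-tolerant out: deposit a brand-new book-kept tuple."""
        self.next_id += 1
        self.tuples[self.next_id] = tup
        self.status[self.next_id] = 0
        return self.next_id

    def _select(self, match):
        # Freshest matching tuple, ties broken pseudo-randomly.
        ids = [i for i, v in self.status.items()
               if v != DISABLED and match(self.tuples[i])]
        if not ids:
            return None
        lowest = min(self.status[i] for i in ids)
        return random.choice([i for i in ids if self.status[i] == lowest])

    def frd(self, match):
        """Fault-tolerant rd: copy of a freshest matching tuple."""
        i = self._select(match)
        if i is None:
            return None
        self.status[i] += 1       # the copy ages the tuple
        return self.tuples[i]

    def fin(self, match):
        """Fault-tolerant in: read-and-erase a matching tuple."""
        i = self._select(match)
        if i is None:
            return None
        self.status[i] = DISABLED  # erased: no longer selectable
        return self.tuples[i]
```

Here frd plays the role of handing a block to a Worker (incrementing freshness), while fin plays the role of the Collector's acknowledgment (disabling the unit of work).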
3.3 Future Directions
The described technique is currently being implemented on a Parsytec CC system with the EPX/AIX environment [5] using PowerPVM/EPX [6], a homogeneous version of the PVM message passing library; it will also be tested in heterogeneous, networked environments managed by PVM. Some work towards the definition and the development of an augmented LINDA model is currently being done.
Acknowledgments. This project is partly sponsored by the Belgian Interuniversity Pole of Attraction IUAP-50, by an NFWO Krediet aan Navorsers, and by the Esprit-IV project 21012 EFTOS. Vincenzo De Florio is on leave from Tecnopolis CSATA Novus Ortus. Geert Deconinck has a grant from the Flemish Institute for the Promotion of Scientific and Technological Research in Industry (IWT). Rudy Lauwereins is a Senior Research Associate of the Belgian National Fund for Scientific Research.
References
1. Carriero, N., Gelernter, D.: How to write parallel programs: a guide to the perplexed. ACM Comp. Surv. 21 (1989) 323–357
2. Carriero, N., Gelernter, D.: LINDA in context. CACM 32 (1989) vol. 4, 444–458
3. Johnson, B.W.: Design and analysis of fault-tolerant digital systems. (Addison-Wesley, New York, 1989)
4. De Florio, V., Murgolo, F.P., Spinelli, V.: PvmLinda: Integration of two different computation paradigms. Proc. First EuroMicro Conf. on Massively Parallel Computing Systems, Ischia, Italy, 2–6 May 1994
5. Anonymous: Embedded Parix Programmer's Guide. In Parsytec CC Series Hardware Documentation. (Parsytec GmbH, Aachen, 1996)
6. Anonymous: PowerPVM/EPX for Parsytec CC Systems. (Genias Software GmbH, Neutraubling, 1996)