Please discuss Superscalar Processors.
Please discuss embedded applications.
Please discuss typical design considerations.
Please discuss parallel programming.
Solution
Superscalar processor:
A superscalar processor is a CPU that implements a form of parallelism called instruction-
level parallelism within a single processor. It therefore allows for more throughput (the number
of instructions that can be executed in a unit of time) than would otherwise be possible at a given
clock rate. A superscalar processor can execute more than one instruction during a clock cycle by
simultaneously dispatching multiple instructions to different execution units on the processor.
Each execution unit is not a separate processor (or a core, if the processor is a multi-core
processor), but an execution resource within a single CPU, such as an arithmetic logic unit (ALU).
In Flynn's taxonomy, a single-core superscalar processor is classified as an SISD processor
(Single Instruction stream, Single Data stream), though one that supports short vector operations
can be classified as SIMD (Single Instruction stream, Multiple Data streams). A multi-core
superscalar processor is classified as an MIMD processor (Multiple Instruction streams,
Multiple Data streams).
Embedded Applications:
An embedded application is a software application that permanently resides in an industrial or
consumer device. It provides some type of control function and/or user interface, and the
software is typically stored in a non-volatile memory such as ROM or flash memory. This
contrasts with a general-purpose computer, which can be used to run all kinds of applications.
Design Considerations:
Design considerations vary for different elements. Some typical design considerations for a
software design are:
1.Compatibility - The software is able to operate with other products that are designed for
interoperability with it. For example, a piece of software may be backward-compatible with an
older version of itself.
2.Extensibility - New capabilities can be added to the software without major changes to the
underlying architecture.
3.Modularity - The resulting software comprises well-defined, independent components, which
leads to better maintainability. The components can then be implemented and tested in
isolation before being integrated to form the desired software system. This allows division of work
in a software development project.
4.Fault-tolerance - The software is resistant to and able to recover from component failure.
5.Maintainability - A measure of how easily bug fixes or functional modifications can be
accomplished. High maintainability can be the product of modularity and extensibility.
6.Reliability (Software durability) - The software is able to perform a required function under
stated conditions for a specified period of time.
7.Reusability - The ability to use some or all of the aspects of the preexisting software in other
projects with little to no modification.
8.Robustness - The software is able to operate under stress or tolerate unpredictable or invalid
input. For example, it can be designed with a resilience to low memory conditions.
9.Security - The software is able to withstand and resist hostile acts and influences.
10.Usability - The software user interface must be usable for its target user/audience. Default
values for the parameters must be chosen so that they are a good choice for the majority of the
users.
11.Performance - The software performs its tasks within a time-frame that is acceptable for the
user, and does not require too much memory.
12.Portability - The software should be usable across a number of different conditions and
environments.
13.Scalability - The software adapts well to increasing data or number of users.
Parallel Programming:
Parallel programming is the writing of programs for parallel computing, a type of computation
in which many calculations, or the execution of processes, are carried out simultaneously.
Large problems can often be divided into smaller ones, which can then be solved at the same
time. There are several
different forms of parallel computing: bit-level, instruction-level, data, and task parallelism.
Parallelism has been employed for many years, mainly in high-performance computing, but
interest in it has grown lately due to the physical constraints preventing frequency scaling. As
power consumption (and consequently heat generation) by computers has become a concern in
recent years, parallel computing has become the dominant paradigm in computer architecture,
mainly in the form of multi-core processors.
Parallel computing is closely related to concurrent computing—they are frequently used
together, and often conflated, though the two are distinct: it is possible to have parallelism
without concurrency (such as bit-level parallelism), and concurrency without parallelism (such as
multitasking by time-sharing on a single-core CPU). In parallel computing, a computational task
is typically broken down into several, often many, very similar subtasks that can be processed
independently and whose results are combined afterwards, upon completion. In contrast, in
concurrent computing, the various processes often do not address related tasks; when they do, as
is typical in distributed computing, the separate tasks may have a varied nature and often require
some inter-process communication during execution.
