DSP Processors Architecture, Data Handling, and Programming
2. DSP processors
◦ Architecture
◦ Data handling
◦ Program flow
◦ Programming
◦ Applications
4. A digital signal processor (DSP) is a specialized microprocessor with
an architecture optimized for the fast operational needs of
digital signal processing.
◦ Digital signal processing is the application of mathematical operations
to digitally represented signals.
The sources of these signals can be
◦ Audio
◦ Image
5. Digital signal processing enjoys several advantages
over analog signal processing:
◦ DSP systems are able to accomplish tasks inexpensively
that would be difficult or even impossible using analog
electronics (examples of such applications include speech
synthesis and speech recognition).
◦ Insensitivity to environment.
◦ Insensitivity to component tolerances.
◦ Repeatable behavior.
◦ Re-programmability.
◦ Size.
6. ◦ Arithmetic and Multiplication
Arithmetic (add, subtract, increment, decrement, negate, round,
absolute value) and multiplication.
With the exception of the Texas Instruments TMS320C1x, DSP
processors provide multiply-accumulate (MAC) instructions as well.
◦ Logic Operations
and, or, exclusive-or, and not.
◦ Shifting
Arithmetic (left and right).
Logical (left and right).
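The MAC and shift operations above can be sketched in software. This is a Python illustration only; on a real DSP each of these would be a single instruction:

```python
# Multiply-accumulate: the workhorse DSP operation.
# A dot product is just a repeated MAC into one accumulator.
def dot_product(samples, coeffs):
    acc = 0                      # accumulator register
    for x, b in zip(samples, coeffs):
        acc += b * x             # one MAC per tap
    return acc

# Arithmetic vs. logical right shift on a 16-bit value:
# arithmetic shift preserves the sign bit, logical shift fills with zeros.
def asr16(v, n):
    if v & 0x8000:               # negative in two's complement
        v -= 0x10000
    return (v >> n) & 0xFFFF     # Python's >> is arithmetic on signed ints

def lsr16(v, n):
    return (v & 0xFFFF) >> n     # shift in zeros from the left

print(dot_product([1, 2, 3], [4, 5, 6]))   # 1*4 + 2*5 + 3*6 = 32
print(hex(asr16(0x8000, 1)))               # 0xc000 (sign extended)
print(hex(lsr16(0x8000, 1)))               # 0x4000 (zero filled)
```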
7. ◦ Rotation
Left.
Right.
◦ Comparison
Most processors provide a set of status bits (e.g., a zero bit,
minus bit, and overflow bit) that carry information about the
results of arithmetic operations.
These bits are used in conditional branches or conditional-execution
instructions.
◦ Looping
◦ Subroutine Calls
On some processors these are called jump-to-subroutine instructions.
8. ◦ Branching
Called jump or goto instructions on some processors.
Branches can be conditional or unconditional.
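The status bits used by conditional branches can be mimicked in software. This is a hypothetical 16-bit model; actual register widths and flag names vary by processor:

```python
def add16_flags(a, b):
    """Add two 16-bit two's-complement values and return
    (result, zero, minus, overflow) like a DSP status register."""
    r = (a + b) & 0xFFFF
    zero = (r == 0)
    minus = bool(r & 0x8000)              # sign bit of the result
    # Signed overflow: operands share a sign, result's sign differs.
    sa, sb, sr = a & 0x8000, b & 0x8000, r & 0x8000
    overflow = (sa == sb) and (sa != sr)
    return r, zero, minus, overflow

# A conditional branch then just tests a flag:
r, z, m, v = add16_flags(0x7FFF, 1)       # 32767 + 1 overflows
if v:
    pass  # e.g., branch to a saturation or error-handling routine
```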
10. DSP processors
◦ Architecture
◦ Data handling
◦ Program flow
◦ Programming
◦ Applications
11. Instruction sets
◦ A basic DSP processor supports RISC (Reduced Instruction Set
Computer) and CISC (Complex Instruction Set Computer)
instructions.
◦ Single instruction, multiple data (SIMD)
◦ Instruction-level parallelism (ILP)
12. Single instruction, multiple data (SIMD)
Single instruction, multiple data describes computers
with multiple processing elements that perform the
same operation on multiple data simultaneously.
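As a sketch, one SIMD "instruction" applies the same operation across several data lanes at once (plain Python lists stand in for vector registers here):

```python
# Scalar: one add instruction per element.
def scalar_add(a, b):
    out = []
    for i in range(len(a)):
        out.append(a[i] + b[i])
    return out

# SIMD: conceptually ONE instruction adds all lanes at once;
# the hardware processes the lanes simultaneously.
def simd_add(a, b):
    return [x + y for x, y in zip(a, b)]

print(simd_add([1, 2, 3, 4], [10, 20, 30, 40]))   # [11, 22, 33, 44]
```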
13. Instruction-level parallelism (ILP)
◦ Instruction-level parallelism (ILP) is a measure of how many
of the operations in a computer program can be performed
simultaneously.
◦ Ex:
1. e = a + b
2. f = c + d (independent of 1)
3. g = e * f (depends on 1 and 2)
◦ ILP allows the compiler and the processor to overlap the
execution of multiple instructions, or even to change the
order of instructions.
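The example above can be checked directly: because lines 1 and 2 are independent, they can be evaluated in either order (or in parallel) without changing g:

```python
def in_order(a, b, c, d):
    e = a + b          # 1
    f = c + d          # 2 (independent of 1)
    return e * f       # 3 (depends on 1 and 2)

def reordered(a, b, c, d):
    f = c + d          # 2 first: legal, since it does not use e
    e = a + b          # 1
    return e * f

print(in_order(1, 2, 3, 4), reordered(1, 2, 3, 4))   # 21 21
```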
14. ◦ Transferring information to and from memory includes
data, such as samples from the input signal and the filter
coefficients, as well as program instructions, the binary
codes that go into the program sequencer.
◦ Ex.: fetching a sample a from memory and writing back the product b×a.
15. There are mainly three types of architectures
employed for the processors:
1. Von Neumann architecture
2. Harvard architecture
3. Super Harvard Architecture
16. Von Neumann architecture
Contains a single memory and a single bus for
transferring data into and out of the central processing
unit (CPU).
[Diagram: Memory (instructions and data) connected to the CPU by a single bus.]
17. • Advantages:
• This type of architecture is cheap, and
• Simple to use because the programmer can place
instructions or data anywhere throughout the available
memory.
• Disadvantages:
• Von Neumann computers spend a lot of time moving data
to and from the memory, which slows the computer down.
18. Harvard architecture
Separate memories for data and program instructions,
with separate buses for each.
[Diagram: Program Memory (instructions only) and Data Memory (data only), each connected to the CPU by its own bus.]
19. • Advantages:
• Since the buses operate independently, program
instructions and data can be fetched at the same time,
improving the speed over the single bus design.
• Disadvantages:
• The data memory bus is busier than the program memory
bus.
20. Super Harvard architecture
Improves upon the Harvard design by adding an instruction
cache and a dedicated I/O controller.
[Diagram: Program Memory (instructions and secondary data) and Data Memory (data only) connected to the CPU; an instruction cache inside the CPU; an I/O controller attached directly to data memory.]
21. • Advantages:
• The instruction cache improves the performance of the
Harvard architecture.
• The I/O controller is connected to data memory. This
dedicated hardware allows data streams to be
transferred directly into memory without having to pass
through the CPU's registers.
• Disadvantages:
• If the program executed random (non-repeating) instructions,
the cache would provide no benefit; it pays off only because DSP
code spends most of its time in small loops.
26. At the top of the diagram are two blocks labeled
Data Address Generator (DAG), one for each of the
two memories.
◦ These control the addresses sent to the program and data
memories, specifying where the information is to be read
from or written to.
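One common service such address generators provide is post-increment and circular (modulo) addressing, so a delay line can be walked with no explicit address arithmetic inside the loop. A sketch of the idea (a hypothetical software model, not actual SHARC syntax):

```python
class CircularDAG:
    """Models a data address generator doing modulo addressing
    over a buffer of a given length (as used for FIR delay lines)."""
    def __init__(self, base, length):
        self.base, self.length = base, length
        self.index = 0

    def post_increment(self):
        addr = self.base + self.index                 # address used THIS access
        self.index = (self.index + 1) % self.length   # wraps automatically
        return addr

dag = CircularDAG(base=0x1000, length=3)
print([hex(dag.post_increment()) for _ in range(5)])
# wraps around: ['0x1000', '0x1001', '0x1002', '0x1000', '0x1001']
```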
27. The data register section contains 16 general-purpose
registers of 40 bits each.
◦ These can hold intermediate calculations,
◦ prepare data for the math processor,
◦ serve as a buffer for data transfer,
◦ hold flags for program control.
28. The math processing is broken into three sections,
◦ a multiplier (MAC),
◦ an arithmetic logic unit (ALU), and
◦ a shifter.
29. DSP processors
◦ Architecture
◦ Data handling
◦ Program flow
◦ Programming
◦ Applications
30. DSP processors fall into two major categories
based on the way they represent numerical values
and implement numerical operations internally.
Fixed Point
Floating Point
31. Floating point
◦ Floating-point processors primarily represent numbers in
floating-point format.
◦ Advantages:
Easier to develop code.
The large dynamic range available means that dynamic-range
limitations can be practically ignored in a design.
◦ Disadvantages:
More expensive, because they implement more functionality
(complexity) in silicon and have wider buses (32-bit).
32. Fixed point
◦ Fixed-point processors represent and manipulate numbers as
integers.
◦ Advantages:
lower cost and
higher speed.
◦ Disadvantages:
Added design effort for algorithm implementation analysis, and
for data and coefficient scaling to avoid accumulator overflow
(16-, 20-, or 24-bit word widths).
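The scaling effort mentioned above can be illustrated with Q15 arithmetic, a common 16-bit fixed-point convention (the choice of Q15 here is an assumption for illustration, not something the slides specify):

```python
Q = 15                    # Q15: 1 sign bit, 15 fractional bits

def to_q15(x):
    """Quantize a real value in [-1, 1) to a 16-bit Q15 integer,
    saturating at the representable limits."""
    return max(-32768, min(32767, int(round(x * (1 << Q)))))

def q15_mul(a, b):
    """Multiply two Q15 values: the product has 30 fractional bits,
    so shift right by 15 and saturate back to 16 bits."""
    p = (a * b) >> Q
    return max(-32768, min(32767, p))

a, b = to_q15(0.5), to_q15(0.25)
print(q15_mul(a, b) / (1 << Q))    # 0.125
```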
33. Let’s take an example:
FIR filters (Finite Impulse Response)
y[n] = b0 x[n] + b1 x[n-1] + b2 x[n-2] + … + bN x[n-N]
Structurally, FIR filters consist of just two things:
◦ a sample delay line and
◦ a set of coefficients.
On a fixed-point processor, each result must be rounded or
truncated back to the native word width.
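The delay line and coefficient set map directly to code. A floating-point sketch of the FIR equation above (on a fixed-point device a rounding or truncation step would be added to each output):

```python
def fir_filter(x, b):
    """y[n] = b0*x[n] + b1*x[n-1] + ... + bN*x[n-N]
    x: input samples; b: the N+1 filter coefficients."""
    delay = [0.0] * len(b)              # the sample delay line
    y = []
    for sample in x:
        delay = [sample] + delay[:-1]   # shift in x[n]; oldest sample drops off
        y.append(sum(bi * xi for bi, xi in zip(b, delay)))
    return y

print(fir_filter([1.0, 0.0, 0.0, 0.0], [0.5, 0.25, 0.125]))
# an impulse in -> the coefficients come out: [0.5, 0.25, 0.125, 0.0]
```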
34. DSP processors
◦ Architecture
◦ Data handling
◦ Program flow
◦ Programming
◦ Applications
37. A pipeline is a set of data processing elements connected in series, so that
the output of one element is the input of the next one.
Instruction pipelines are used in processors to allow overlapping execution of
multiple instructions.
Stages: Fetch → Decode → Execute
• 1st CLK cycle: Fetch ‘A’
• 2nd CLK cycle: Fetch ‘B’ | Decode ‘A’
• 3rd CLK cycle: Fetch ‘C’ | Decode ‘B’ | Execute ‘A’
38. 1st Approach (no pipelining)
◦ Each clock cycle = 20 ns
◦ One instruction = 80 ns
◦ Each stage of instruction execution is idle 75% of the time.
39. 2nd Approach (pipelined)
◦ One instruction is now completed every clock cycle (every
20 ns).
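The two approaches can be compared numerically. This uses the figures from the slides (20 ns cycle, 80 ns = 4 cycles per unpipelined instruction):

```python
CYCLE_NS = 20          # one clock cycle
STAGES = 4             # cycles per instruction when not pipelined (80 ns / 20 ns)

def unpipelined_ns(n_instructions):
    # Each instruction runs start-to-finish before the next begins.
    return n_instructions * STAGES * CYCLE_NS

def pipelined_ns(n_instructions):
    # The first instruction takes STAGES cycles to fill the pipe,
    # then one instruction completes every cycle.
    return (STAGES + (n_instructions - 1)) * CYCLE_NS

n = 1000
print(unpipelined_ns(n), pipelined_ns(n))   # 80000 20060 -> ~4x speedup
```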
41. DSP algorithms frequently involve the repetitive execution of
a small number of instructions (ex: FIR and IIR filters, FFTs
and matrix multiplication)
DSP processors have evolved to include features to efficiently
handle this sort of repeated execution.
Conventional software loop:
      MOV #16, B
LOOP: MAC (R0)+, (R4)+, A
      DEC B
      JNE LOOP
Zero-overhead hardware loop:
      RPT #16
      MAC (R0)+, (R4)+, A
42. DSP processors
◦ Architecture
◦ Data handling
◦ Program flow
◦ Programming
◦ Applications
43. Most DSPs are programmed in special versions of C.
DSP vendors almost always provide support for C++
programming, but it is not very popular in the DSP
software industry.
Some DSP software programmers resort to assembly
programming for DSPs.
44. DSP processors
◦ Architecture
◦ Data handling
◦ Program flow
◦ Programming
◦ Applications