The document outlines the agenda for the Reconfigurable Computing Italian Meeting held on December 19, 2008 at Politecnico di Milano in Milan, Italy. The agenda included four sessions on trends in reconfigurable computing, the hArtes European project, applicative scenarios, and the High Level Reconfiguration project. Each session included 3-4 presentations on technical topics within the session theme, such as FPGA strategies, multi-core signal processing, evolvable hardware, and runtime core relocation management. The meeting concluded with wishes for a merry Christmas and a happy new year.
The document summarizes the plans and activities of DRESD, a research group on dynamic reconfigurability in embedded system design at Politecnico di Milano. It discusses DRESD's research objectives, collaboration with other universities, involvement in teaching courses, and plans to hold workshops and become an official association to support its research vision.
The document describes a methodology for designing dynamic reconfigurable multi-FPGA systems. It presents an intermediate representation for hierarchical circuits and a design flow with three main phases: design extraction from VHDL, static global layout partitioning and placement, and reuse through dynamic reconfiguration to minimize delays. Experimental results validate the partitioning, placement, and block-reuse approaches. Future work includes improving clustering metrics and time estimation, and adding routing algorithms.
1. The document discusses Diopsis940, a microcontroller product from Atmel that features an ARM9 processor and floating point DSP for consumer applications.
2. It provides details on target applications including hands-free phones, high-end car audio, and sound processors. The microcontroller supports complex audio processing algorithms.
3. hArtes, the European project in which Atmel participates, aims to reduce application development time through tools that streamline the process from conceptual design to implementation on Atmel's microcontroller products.
The document proposes a coarse-grain reconfigurable array (CGRA) for accelerating digital signal processing. The CGRA aims to provide an intermediate tradeoff between flexibility and performance compared to FPGAs and ASICs. It consists of an array of processing elements and distributed memory interconnected via programmable switches. Evaluation shows the CGRA achieves 4.8-8X speedup, 24-58% improved energy efficiency, and up to 40% reduced area compared to a Xilinx Virtex-4 FPGA for applications like color space conversion, FIR filtering, and DCT.
This document discusses Altera's FPGA strategy for reconfigurable hardware in industry applications. It defines reconfigurable hardware as an architecture that does not require on-the-fly timing analysis because product qualification is extensively done through temperature and cycle testing without hardware architecture changes. It then shows how programmable solutions have evolved from single CPU and DSP cores to multi-core processors and coarse-grained arrays with FPGAs moving to fine-grained, massively parallel arrays with embedded hard IP blocks. Future trends include challenges of scaling CPUs due to physical limits and the benefits of parallelism through hardware reconfiguration.
The document describes processes in VHDL. It defines a process as a concurrent statement that contains sequential logic. Processes run in parallel and can be conditioned by a sensitivity list or wait statement. Local variables retain their values between executions. It provides an example of a process with a sensitivity list and one with a wait statement. It also summarizes the general structure of a VHDL program and describes different types of process control including if-then-else, case statements, and decoders. Additional topics covered include flip-flops, counters, and finite state machines.
The document discusses requirements for enabling self-adaptivity at both the software and hardware levels. It proposes a layered model with controllers at the application, run-time environment, and hardware levels. A component-based approach is suggested to allow adaptations such as replacing or modifying components. Simulation results demonstrate how controllers at each level can coordinate to meet goals like high throughput while minimizing power usage. Reconfigurable computing platforms need to allow hardware components to be instantiated and interconnected to enable self-adaptation across software and hardware.
The document summarizes research on task scheduling techniques for dynamically reconfigurable systems. It presents (1) an integer linear programming model to formally define the scheduling problem, (2) the Napoleon heuristic scheduler to solve the problem in reasonable time based on the ILP model, and (3) experimental results validating that Napoleon obtains an average 18.6% better schedule length than other algorithms. Future work is outlined to integrate Napoleon into a general design framework and scheduling-aware partitioning flow.
The document summarizes key topics in reconfigurable computing, including motivations for reconfigurable systems, types of flexibility they provide, and challenges in reconfiguration. It discusses design flows to reduce complexity, maximizing reuse of reconfigurable modules to reduce latency, hiding reconfiguration times, and using relocation to further optimize schedules. Areas of reconfiguration and possible implementation scenarios involving relocation are illustrated.
The document discusses an approach for identifying cores for reconfigurable systems driven by specification self-similarity. It involves partitioning a specification graph into subsets of operations that can be mapped to reusable configurable modules. The approach identifies recurrent subgraphs in the specification that are good candidates for these cores. It works in two phases: first identifying isomorphic subgraph templates, and then selecting templates for implementation as reconfigurable modules based on metrics like largest size, most frequent usage, or minimizing communication. Experimental results on encryption benchmarks show the approach can cover a large portion of the specification with a small set of identified templates.
This document summarizes techniques for core allocation and relocation management in self-dynamically reconfigurable architectures. It introduces basic concepts like cores, IP cores, and reconfigurable regions. It then describes proposed 1D and 2D relocation solutions like BiRF and BiRF Square that allow runtime relocation with low overhead. A core allocation manager is introduced to choose core placements optimizing criteria like rejection rate and completion time with low management costs. Evaluation shows the techniques improve metrics like rejection rate and routing costs compared to other approaches.
The document discusses a hardware application platform developed for the hArtes project. It provides heterogeneous computing resources such as DSPs, CPUs, and FPGAs. Demonstrator applications focus on advanced audio processing for car infotainment and teleconferencing. The platform supports these applications by integrating different components, scaling computational power, and accommodating future additions. It also provides adequate I/O channels for audio signal processing.
The document describes the Janus system, an FPGA-based approach for simulating spin glass systems using Monte Carlo algorithms. The key aspects are:
1) Spin glass systems are computationally challenging to simulate due to the huge number of possible configurations.
2) The Janus system uses FPGAs to implement a large number of parallel update engines that can flip spins and accept/reject changes according to a Metropolis algorithm.
3) Each FPGA contains a 4x4 grid of processors that communicate with their neighbors, allowing simulations to be massively parallelized across the FPGA network.
This document provides an overview of architectural description languages (ADLs). It discusses that ADLs capture the structure and behavior of processor architectures to enable high-level modeling, analysis, and automatic prototype generation. ADLs can be classified as structural, behavioral, or mixed. Structural ADLs focus on low-level hardware details while behavioral ADLs model instruction sets for compiler generation. The document outlines different ADL types and their applications.
The document discusses design flows for partially reconfigurable systems on FPGAs. It provides an overview of Xilinx FPGA technology and configuration memory organization. It then summarizes several of Xilinx's design flows for partial reconfiguration (difference-based, module-based, EAPR). It outlines challenges with existing design flows and introduces the DRESD methodology and tools (INCA, Caronte) which aim to address these challenges by providing a more comprehensive framework for implementing dynamic reconfigurable embedded systems.
The document discusses some real needs for and limits of reconfigurable computing systems. It describes how partial dynamic reconfiguration can provide flexibility and enhance performance but introduces drawbacks. Simulation and verification tools are needed to design such systems. Reconfiguration times significantly impact latency so tasks should be reused and reconfiguration hidden when possible through techniques like relocation.
The document discusses concepts related to partial dynamic reconfiguration. It defines key terms like reconfigurable computing, object code, reconfiguration controller, and reconfiguration manager. It also discusses the 5 Ws of reconfiguration - who controls it, where the controller is located, when configurations are generated, which is the granularity, and in what dimension it operates. Examples of reconfiguration in everyday life like sports are provided. Reconfigurable architectures are characterized based on factors like embedded vs external, complete vs partial, and dynamic vs static. Finally, more definitions related to cores, IP cores, and reconfigurable functional units and regions are given.
This document provides an overview of reconfigurable computing systems and field programmable gate arrays (FPGAs). It discusses the basic idea and history of reconfigurable computing from the 1960s to present. It also outlines some of the academic efforts in this area and drivers for choosing FPGAs, including time, area, costs and power considerations. The document notes trends like programmable systems on a chip that integrate FPGAs with other components like DSPs and processors.
The Blanket project aims to advance reconfigurable architectures and runtime reconfiguration through several subprojects. The goals are to exploit dynamic reconfigurability for different architectures, design applicative solutions for real world needs, and explore novel architectural paradigms like bio-inspired systems. The subprojects include YaRA for reconfigurable SoCs, HARPE for multicore systems, ReCPU for regular expression matching, and SCAR to develop application-specific reconfigurable architectures. IPs are also being designed to enhance reconfiguration capabilities.
1. POLITECNICO DI MILANO
Behavioral synthesis of synchronous sequential networks
Ant Brain
DRESD How To (DHow2) – L6
DRESD Team
info@dresd.org
2. Synthesis
Synthesis proceeds through the following steps:
1. Construction of the state diagram from the informal problem specification
2. Construction of the state table
3. Reduction of the number of states: optimization
4. Construction of the transition table
   – State assignment: code & encoding
5. Construction of the excitation table
   – Choice of the memory elements
6. Synthesis of both the combinational network implementing the next-state function and the combinational network implementing the output function
4. Ant Brain: Specification
Sensors:
L and R antennae;
they take value 1 when they touch the wall.
Actuators:
F: go straight;
TL: turn slowly to the left;
TR: turn slowly to the right.
Goal: find the way out of the maze
Strategy: keep the wall on the right
5. Ant Behavior
A: Follow the wall, touching it. Go straight, turning slightly to the right.
B: Follow the wall without touching it. Go straight, turning slightly to the left.
C: Break in the wall. Go straight, turning slightly to the right.
D: Touched the wall again. Go back to state A.
E: Wall ahead. Turn left until…
F: …we are here, the same situation as state B.
G: Turn left until…
LOST: Go straight until you touch something.
6. Synthesis
Synthesis proceeds through the following steps:
1. Construction of the state diagram from the informal problem specification (15 min)
2. Construction of the state table
3. Reduction of the number of states: optimization
4. Construction of the transition table
   – State assignment: code & encoding
5. Construction of the excitation table
   – Choice of the memory elements
6. Synthesis of both the combinational network implementing the next-state function and the combinational network implementing the output function
7. Drawing the Ant Brain: State Diagram
[State diagram: states LOST (F), E/G (TL), A (TL, F), B (TR, F), and C (TR, F), with transitions labeled by the antenna conditions L, R and their complements L', R']
8. Synthesis
Synthesis proceeds through the following steps:
1. Construction of the state diagram from the informal problem specification
2. Construction of the state table (10 min)
3. Reduction of the number of states: optimization
4. Construction of the transition table
   – State assignment: code & encoding
5. Construction of the excitation table
   – Choice of the memory elements
6. Synthesis of both the combinational network implementing the next-state function and the combinational network implementing the output function
9. State table
The behavior of an FSM can be described by the state table:
the column indices are the input symbols iα ∈ I;
the row indices are the state symbols sj ∈ S, denoting the present state.

Mealy machines
Each table entry is the pair {uβ, sk}, where
uβ = λ(iα, sj) is the output symbol
sk = δ(iα, sj) is the next-state symbol

          i1            i2            ...
s1(t)     sj(t+1)/uj    sk(t+1)/uk    ...
s2(t)     sm(t+1)/um    sl(t+1)/ul    ...
...       ...           ...           ...

Moore machines
Each table entry is the next-state symbol sk = δ(iα, sj);
the output symbols are associated with the present state.

          i1          i2          ...
s1(t)     sj(t+1)     sk(t+1)     ...    u1
s2(t)     sm(t+1)     sl(t+1)     ...    u2
...       ...         ...         ...    ...
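The two table conventions above can be encoded directly as lookup maps. The following Python sketch is an illustrative assumption (the state, input, and output names are made up, not from the slides); it runs a small machine of each kind on an input sequence:

```python
# Hypothetical sketch: Mealy vs Moore state tables as Python dicts.
# A Mealy entry maps (present state, input) -> (next state, output);
# a Moore machine splits this into a transition map and a per-state output map.

# Mealy: the output depends on the present state AND the input
mealy = {
    ("s1", "i1"): ("s2", "u0"),
    ("s1", "i2"): ("s1", "u1"),
    ("s2", "i1"): ("s1", "u1"),
    ("s2", "i2"): ("s2", "u0"),
}

# Moore: the output depends on the present state only
moore_delta = {
    ("s1", "i1"): "s2", ("s1", "i2"): "s1",
    ("s2", "i1"): "s1", ("s2", "i2"): "s2",
}
moore_lambda = {"s1": "u0", "s2": "u1"}

def run_mealy(state, inputs):
    """Collect the outputs read while the transitions occur."""
    outs = []
    for i in inputs:
        state, u = mealy[(state, i)]
        outs.append(u)
    return outs

def run_moore(state, inputs):
    """Collect the outputs read in each state reached."""
    outs = []
    for i in inputs:
        state = moore_delta[(state, i)]
        outs.append(moore_lambda[state])
    return outs
```

Running both on the same input sequence makes the difference concrete: the Mealy outputs are attached to the transitions, the Moore outputs to the states reached.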
10. Mealy machines and Moore machines
Mealy machines
the output function is the response of the machine when, being in a given present state, it receives an input symbol
in Mealy machines, the output is "read" while the machine undergoes a state transition
Moore machines
the output function is the response of the machine associated with the state it is in
in Moore machines, the output is read while the machine sits in a given state
A Mealy machine can be transformed into an equivalent Moore machine, and vice versa
11. State table: Moore
[State diagram repeated: LOST (F), E/G (TL), A (TL, F), B (TR, F), C (TR, F)]

State     L'R'    L'R    LR     LR'    Outputs
LOST      LOST    E/G    E/G    E/G    F   TR'  TL'
E/G       B       E/G    E/G    E/G    F'  TR'  TL
A         B       A      E/G    E/G    F   TR'  TL
B         C       A      A      C      F   TR   TL'
C         C       A      A      C      F   TR   TL'
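The Moore state table above is directly executable. Here is a minimal Python sketch (the dictionary encoding is an illustrative assumption; the state names, antenna inputs, and actuators are taken from the table):

```python
# Sketch of the ant-brain Moore machine from the state table above.
# The input is the pair (L, R) of antenna bits; the output is the set
# of actuators asserted in the state reached.

DELTA = {  # state -> {(L, R): next state}
    "LOST": {(0, 0): "LOST", (0, 1): "E/G", (1, 1): "E/G", (1, 0): "E/G"},
    "E/G":  {(0, 0): "B",    (0, 1): "E/G", (1, 1): "E/G", (1, 0): "E/G"},
    "A":    {(0, 0): "B",    (0, 1): "A",   (1, 1): "E/G", (1, 0): "E/G"},
    "B":    {(0, 0): "C",    (0, 1): "A",   (1, 1): "A",   (1, 0): "C"},
    "C":    {(0, 0): "C",    (0, 1): "A",   (1, 1): "A",   (1, 0): "C"},
}
OUTPUT = {  # Moore outputs: actuators asserted per state
    "LOST": {"F"}, "E/G": {"TL"}, "A": {"TL", "F"},
    "B": {"TR", "F"}, "C": {"TR", "F"},
}

def step(state, l, r):
    """One clock tick: return (next state, actuators of the next state)."""
    nxt = DELTA[state][(l, r)]
    return nxt, OUTPUT[nxt]

# A lost ant touches a wall with its left antenna and starts turning left.
s, acts = step("LOST", 1, 0)
```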
13. Synthesis
Synthesis proceeds through the following steps:
1. Construction of the state diagram from the informal problem specification
2. Construction of the state table
3. Reduction of the number of states: optimization (15 min)
4. Construction of the transition table
   – State assignment: code & encoding
5. Construction of the excitation table
   – Choice of the memory elements
6. Synthesis of both the combinational network implementing the next-state function and the combinational network implementing the output function
14. State reduction: equivalent machines
Two completely specified machines M1 and M2 are equivalent if and only if:
for every state si of M1, there exists a state sj of M2 such that,
placing machine M1 in si and machine M2 in sj
and applying any input sequence I to both machines,
the two output sequences are identical.
And vice versa for M2 with respect to M1.
Note: the definition of equivalence considers only the input-output relation, so the two machines may have different state sets, in particular of different cardinality.
15. Revisiting the Ant Brain
Are there no equivalent states?
[State diagram repeated: LOST (F), E/G (TL), A (TL, F), B (TR, F), C (TR, F)]
16. State reduction: indistinguishable states of the same machine
Given a completely specified machine, let:
Iα – a generic input sequence ij, …, ik
Uα – the output sequence associated with it, obtained through λ
si, sj – two generic states
The two states si and sj belonging to S are indistinguishable if:
Uα,i = λ(si, Iα) = λ(sj, Iα) = Uα,j   ∀ Iα
that is, placing the machine in si or in sj and applying any input sequence, the outputs are identical.
Indistinguishability between si and sj is denoted si ~ sj
17. State reduction: identifying equivalent states
The definition of indistinguishability between states is hard to apply directly, since it would require considering all input sequences (a priori, infinitely many).
We therefore resort to a rule introduced by Paull and Unger:
two states si and sj belonging to S are indistinguishable if and only if, for every input symbol ia:
λ(si, ia) = λ(sj, ia)   (the outputs are equal for every input symbol)
δ(si, ia) ~ δ(sj, ia)   (the next states are indistinguishable)
The Paull–Unger rule is iterative.
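The iterative Paull–Unger rule amounts to partition refinement: start by grouping states with identical outputs, then repeatedly split groups whose members disagree on the group of some next state, until nothing changes. The following Python sketch is an illustrative implementation (not the slides' own code); applied to the 5-state ant brain, it finds that B and C are indistinguishable:

```python
# Partition-refinement sketch of the iterative Paull-Unger rule.

def minimize(states, inputs, delta, lam):
    # Initial partition: states with the same output for every input
    def out_sig(s):
        return tuple(lam[(s, i)] for i in inputs)
    block_of = {s: out_sig(s) for s in states}
    while True:
        def sig(s):  # outputs plus the blocks of the next states
            return (out_sig(s),
                    tuple(block_of[delta[(s, i)]] for i in inputs))
        new = {s: sig(s) for s in states}
        if len(set(new.values())) == len(set(block_of.values())):
            return new  # fixed point: equal signature means s_i ~ s_j
        block_of = new

# Ant brain: Moore outputs lifted to a lambda that ignores the input
states = ["LOST", "E/G", "A", "B", "C"]
inputs = [(0, 0), (0, 1), (1, 1), (1, 0)]
delta = {("LOST", i): n for i, n in zip(inputs, ["LOST", "E/G", "E/G", "E/G"])}
delta.update({("E/G", i): n for i, n in zip(inputs, ["B", "E/G", "E/G", "E/G"])})
delta.update({("A", i): n for i, n in zip(inputs, ["B", "A", "E/G", "E/G"])})
delta.update({("B", i): n for i, n in zip(inputs, ["C", "A", "A", "C"])})
delta.update({("C", i): n for i, n in zip(inputs, ["C", "A", "A", "C"])})
out = {"LOST": "F", "E/G": "TL", "A": "TL,F", "B": "TR,F", "C": "TR,F"}
lam = {(s, i): out[s] for s in states for i in inputs}

blocks = minimize(states, inputs, delta, lam)
# B and C fall into the same block: they are indistinguishable.
```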
20. New, Improved Brain
States B and C are merged;
the behavior is exactly that of the 5-state brain;
we now need only 2 state variables instead of 3.
[State diagram: LOST (F), E/G (TL), A (TL, F), B/C (TR, F)]
21. Synthesis
Synthesis proceeds through the following steps:
1. Construction of the state diagram from the informal problem specification
2. Construction of the state table
3. Reduction of the number of states: optimization
4. Construction of the transition table (10 min)
   – State assignment: code & encoding
5. Construction of the excitation table
   – Choice of the memory elements
6. Synthesis of both the combinational network implementing the next-state function and the combinational network implementing the output function
22. Synthesis: choosing the code
State encoding aims to identify, for each symbolic representation of the state, a corresponding binary representation.
Two parallel problems:
Choice of the code:
  Minimum number of bits
  – number of memory elements = ⌈log2 |S|⌉ (dense encoding)
  One-hot
  – number of memory elements = |S| (sparse encoding)
  Minimum distance
  – states involved in the most frequent transitions are placed at the smallest possible Hamming distance, under the constraint of the smallest possible number of bits
  …
Assignment of an encoding to each state.
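As a quick sanity check of the two costs above, here is a hypothetical Python helper (not from the slides) comparing the dense and one-hot flip-flop counts for the ant brain:

```python
# Flip-flop counts for dense vs one-hot state encodings.
import math

def dense_bits(n_states):    # minimum-bit ("dense") encoding
    return math.ceil(math.log2(n_states))

def one_hot_bits(n_states):  # one flip-flop per state ("sparse")
    return n_states

# Reduced 4-state ant brain: 2 flip-flops dense vs 4 one-hot;
# unreduced 5-state brain: 3 dense vs 5 one-hot.
costs = (dense_bits(4), one_hot_bits(4), dense_bits(5), one_hot_bits(5))
```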
23. State assignment
Four states:
at least 2 state variables are required: X, Y;
at least 2 flip-flops.
LOST – 00
E/G  – 01
A    – 10
B/C  – 11
27. Synthesis
Synthesis proceeds through the following steps:
1. Construction of the state diagram from the informal problem specification
2. Construction of the state table
3. Reduction of the number of states: optimization
4. Construction of the transition table
   – State assignment: code & encoding
5. Construction of the excitation table (10 min)
   – Choice of the memory elements
6. Synthesis of both the combinational network implementing the next-state function and the combinational network implementing the output function
28. Implementation: optimized case
The outputs are functions of the current state only: Moore machine.
[Block diagram: the inputs L and R and the current state X, Y feed the next-state logic, which produces X+, Y+; the output logic, driven by the current state, produces F, TR, and TL]
29. Excitation table
Excitation table of a D flip-flop:
Q  Q'  D
0  0   0
0  1   1
1  0   0
1  1   1

Transition table                 Excitation table (with D)
XY \ LR  00  01  11  10         XY \ LR  00  01  11  10
00       00  01  01  01         00       00  01  01  01
01       11  01  01  01         01       11  01  01  01
11       11  10  10  11         11       11  10  10  11
10       11  10  10  01         10       11  10  10  01
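Since a D flip-flop simply stores its input (D = Q+), the excitation table coincides with the transition table, as the two tables above show. A tiny Python sketch (illustrative, not from the slides) makes this explicit:

```python
# With D flip-flops, the excitation required for each state bit is just
# the desired next-state bit: D = Q+. The excitation table is therefore
# a copy of the transition table.

def d_excitation(transition):
    """Map each (state, input) entry to the D inputs (identity: D = Q+)."""
    return {key: nxt for key, nxt in transition.items()}

# Transition table from the slide: rows XY, columns LR in Gray order
transition = {
    ("00", "00"): "00", ("00", "01"): "01", ("00", "11"): "01", ("00", "10"): "01",
    ("01", "00"): "11", ("01", "01"): "01", ("01", "11"): "01", ("01", "10"): "01",
    ("11", "00"): "11", ("11", "01"): "10", ("11", "11"): "10", ("11", "10"): "11",
    ("10", "00"): "11", ("10", "01"): "10", ("10", "11"): "10", ("10", "10"): "01",
}
exc = d_excitation(transition)
```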
30. Excitation table
Excitation table of an SR flip-flop:
Q  Q'  S R
0  0   0 -
0  1   1 0
1  0   0 1
1  1   - 0

Transition table                 Excitation table (with SR), entries SxRx SyRy
XY \ LR  00  01  11  10         XY \ LR  00      01      11      10
00       00  01  01  01         00       0- 0-   0- 10   0- 10   0- 10
01       11  01  01  01         01       10 -0   0- -0   0- -0   0- -0
11       11  10  10  11         11       -0 -0   -0 01   -0 01   -0 -0
10       11  10  10  01         10       -0 10   -0 0-   -0 0-   01 10
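For SR flip-flops, each state bit's excitation follows the (Q, Q') → SR table above. A small Python sketch (illustrative names; '-' marks a don't-care) derives the SR entries bit by bit:

```python
# Deriving the SR excitation for each flip-flop from the pair
# (present bit Q, desired next bit Q'), with '-' for don't-care.

SR = {  # (Q, Q') -> (S, R), as in the excitation table above
    ("0", "0"): ("0", "-"),
    ("0", "1"): ("1", "0"),
    ("1", "0"): ("0", "1"),
    ("1", "1"): ("-", "0"),
}

def sr_excitation(state, next_state):
    """Per-bit SR inputs needed to take `state` to `next_state`."""
    return [SR[(q, qn)] for q, qn in zip(state, next_state)]

# Example from the table: XY = 01 under input L'R' goes to 11,
# so X needs S=1, R=0 and Y needs S=-, R=0.
pairs = sr_excitation("01", "11")
```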
32. Implementation: non-optimized case
The outputs are functions of the current state only: Moore machine.
[Block diagram: the inputs L and R and the current state X, Y, Z feed the next-state logic, which produces X+, Y+, Z+; the output logic, driven by the current state, produces F, TR, and TL]
33. Synthesis
Synthesis proceeds through the following steps:
1. Construction of the state diagram from the informal problem specification
2. Construction of the state table
3. Reduction of the number of states: optimization
4. Construction of the transition table
   – State assignment: code & encoding
5. Construction of the excitation table
   – Choice of the memory elements
6. Synthesis of both the combinational network implementing the next-state function and the combinational network implementing the output function (homework)