This document discusses the history of CISC and RISC architecture designs. In the 1980s, CISC and RISC architectures differed in their instruction complexity and design constraints for desktop and server computing. CISC used complex instruction sets while RISC used reduced instruction sets. Over time, both architectures evolved with improvements in compiler technology, memory costs, and chip design. Now, CISC is commonly used in desktops and servers while RISC is used in applications requiring high performance like real-time systems. The key differences between CISC and RISC relate to performance, pricing strategies, and design approaches to instructions and addressing modes.
Architectures and operating systems
1. 20051426
Abstract
The RISC versus CISC debate raged in the 1980s, when chip and processor design complexity were the principal constraints on desktop and server computing. The principal designs of computing have since grown significantly across the desktop and mobile industries, with ARM systems on one side and laptops running x86 CISC technology on the other. In this report the history of the CISC and RISC architectures is discussed together with their suitability in use. The report further elaborates on their design patterns, how compilers translate high-level language (HLL) statements, and functional and non-functional requirements. The industry situation of the period is discussed, with its effects on pricing strategy and the evolution of system architecture over time.
O.M. Hiran Kanishka Chandrasena Page 1 of 15
Table of Contents
Abstract ............................................................................ 1
1 CISC & RISC Architecture Processors ............................................... 3
1.1 Complex Instruction Set Computer ................................................ 3
1.2 Reduced Instruction Set Computer ................................................ 3
1.3 Debate of Design Patterns ....................................................... 4
1.4 Impact of Historical Framework Design, Architecture Process and Technology on Complex Instruction Set Computer ... 5
1.5 Impact of Historical Framework Design, Architecture Process and Technology on Reduced Instruction Set Computer ... 7
2 The Comparison of CISC and RISC Architecture Designs .............................. 10
3 Conclusion ........................................................................ 13
4 References ........................................................................ 15
1 CISC & RISC Architecture Processors
1.1 Complex Instruction Set Computer
The Complex Instruction Set Computer (CISC) architecture provides a large set of instructions, which in the early stages were used at assembly-language level. Data transaction times were very slow because programming was done in assembly language and memory capacity was low. Main memory in a CISC system is comparatively slow, but the control store from which instruction sequences are re-executed is roughly ten times faster than main memory. High-level languages allow the programmer to express algorithms more concisely and support detailed object-oriented design patterns. Characteristically, CISC provides general-purpose registers and many addressing modes. For example, Intel x86 processors and IBM Z series mainframe computers follow the CISC architecture.
Figure 1 CISC Processor
1.2 Reduced Instruction Set Computer
The Reduced Instruction Set Computer architecture makes supporting an HLL simpler. A RISC microprocessor provides fewer instructions, giving faster instruction decoding and faster execution times. Measurements of earlier computers showed that only about 20% of the instruction set was used for the bulk of all tasks. RISC circuits need few transistors, which makes reduced instruction set computers cost-effective, cooler-running and simple in design. [1] The PowerPC, Motorola 88000 and Sun SPARC can be taken as examples of processors created by the RISC approach.
Figure 2 RISC Sun Sparc
1.3 Debate of Design Patterns
Two design patterns need to be taken into consideration when developing an instruction set, each judged at its point in time: each design approach as applied to the RISC and CISC architectures, together with its limitations, will be considered with this key understanding. The key features, specifications and benchmarks require historical background. CISC and RISC were developed in the early 1960s and 1980s respectively, shaped by the state of the art in memory, very-large-scale integration and the compilers used to build faster machines.
In the early stages, computers used core memory, with magnetic tapes for program storage. This incurred high costs and brought slow performance. With the introduction of Random Access Memory, performance increased rapidly in comparison to tapes, but the price of RAM was still high, and secondary storage took more time and was an obstruction in many ways. The high cost of main memory and the slowness of secondary storage made expanding code a challenging issue. [2] The best approach was to fit a small amount of RAM, balanced against the overall cost of the system. In the 90s RAM accounted for 36% of total system cost; after this, RAM became cheaper and more affordable to the market.
Figure 3 High Level Language Structured
The compiler translates statements in an HLL such as C or Pascal into assembly language. The assembly language is then converted into machine code, and this took considerable time to produce output. There was a mismatch between the operations provided at the high level and those of the computer architecture: this gap produced compiler complexity and execution inefficiency.
“Something to keep in mind while reading the paper was how lousy the compilers were of that
generation. C programmers had to write the word "register" next to variables to try to get
compilers to use registers.” [3]
The scope of Very Large Scale Integration (VLSI) was transforming the industry environment. In 1981, Patterson and Séquin proposed the first RISC architecture. VLSI made it possible to put on the order of a million transistors on one chip, whereas CISC machines had spread their functional units across numerous chips. The drawbacks of multi-chip designs were power consumption, delay and limited performance in the data-transfer process between chips, high cost, and the heat generated.
1.4 Impact of Historical Framework Design, Architecture Process and Technology on Complex Instruction Set Computer
Early in the 60s and 70s the hardware market became less attractive economically; on the contrary, the software market was predicted to grow. The idea of moving complexity from the software to the hardware domain resulted in the creation of CISC. Some authors suggested that implementations and compilers should work to close the semantic gap between high-level languages and assembly language, so that programmers could write their code in C and Pascal while the assembler did the rest. [4]
Programmers started promoting HLLs on CISC for the following reasons:
- reduced total system maintenance cost;
- reduced software implementation cost, saving time for software developers;
- a decreased semantic gap between programming and assembly language;
- accuracy and efficiency;
- easy debugging and compilers that are easy to write.
The methodology for increasing performance brought about the need to move complexity from software to hardware, delivering high performance at an affordable price. An increase in performance reduces the time taken to run a program. When a CISC machine tries to decrease that time, the number of instructions per program needed to perform the task varies, and the cycles consumed per complex instruction can increase the real execution time.
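The trade-off described above is conventionally summarised by the processor performance equation; it is standard in the architecture literature, and the report implies it without stating it:

```latex
T_{\mathrm{CPU}} = N_{\mathrm{instr}} \times \mathrm{CPI} \times T_{\mathrm{cycle}}
```

where \(N_{\mathrm{instr}}\) is the instruction count per program, CPI the average cycles per instruction, and \(T_{\mathrm{cycle}}\) the clock period. CISC reduces \(N_{\mathrm{instr}}\) at the cost of a higher CPI; RISC accepts a larger \(N_{\mathrm{instr}}\) in exchange for a CPI near one and a shorter cycle time.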
An example of what drove the increasing complexity of machine instruction sets in CISC architectures is discussed in the following. Take the cube of 30 and store it as a variable. For this purpose, code written in a hypothetical high-level language (H) is used; the H code is translated into assembly for a hypothetical machine, ARS. There, MOVE copies a value into a destination register: MOVE [E, 7] places the number 7 in register E. “people would accept any piece of junk you give them, as long as the code worked; part of the reason was simply the speed of processors and the size of memory” [David A. Patterson]
Figure 4 Addressing Modes
MOVE [E, F] takes the number stored in F and places it in E. MUL takes its operands from the destination register and a source register, multiplies them, and places the product in the destination register: MUL [A, 50] multiplies the value of A by 50 and leaves the result in A, and MUL [A, C] multiplies A by the value of C and places the result in A.
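To make the hypothetical ARS sequence concrete, here is a minimal sketch in Python that models registers as named slots and implements the two instructions described above. The `move`/`mul` helpers and the register names are illustrative stand-ins for the report's hypothetical ISA, not a real instruction set:

```python
# Toy model of the report's hypothetical ARS machine:
# MOVE places an immediate value in a register; MUL multiplies the
# destination register by a source register and stores the product back.

registers = {}

def move(dest, value):
    """MOVE [dest, value]: place an immediate value in register dest."""
    registers[dest] = value

def mul(dest, src):
    """MUL [dest, src]: multiply dest by the value in register src."""
    registers[dest] = registers[dest] * registers[src]

# Cube 30 and keep the result in register A.
move("A", 30)   # A = 30
move("B", 30)   # B = 30
mul("A", "B")   # A = 30 * 30 = 900
mul("A", "B")   # A = 900 * 30 = 27000

print(registers["A"])  # 27000
```

A CISC machine would aim to collapse such a sequence into fewer, more complex instructions; the RISC sequence above instead uses several simple register-to-register steps.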
On the technology side, microprogramming was the significant innovation in system architecture for implementing instruction execution. The machine fetches instructions from memory locations into the control unit (CU); the CU's input instructions are carried to the machine's floating-point unit, where execution performs the adding, shifting and normalisation steps. Every instruction executed needs space, so if the instruction set grows large and complex, execution takes a great deal of work, and direct hardwired execution of the instruction process has a limited resource capacity. A microcode control store in ROM helps to control execution and is much faster than main memory. The technology improved with the addition of more functions, since implementing them in microcode was faster and cheaper than in hardware.
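A microprogrammed control unit can be caricatured as a lookup from each machine opcode to a stored sequence of micro-operations. The sketch below is purely illustrative; the opcode names and micro-ops are invented for this example, not taken from the report:

```python
# Illustrative microcode engine: each machine instruction (opcode)
# expands to a fixed sequence of micro-operations held in a fast
# control store (here, a plain dictionary standing in for the ROM).

CONTROL_STORE = {
    "ADD":  ["fetch_operands", "alu_add", "writeback"],
    "MUL":  ["fetch_operands", "alu_mul", "alu_shift", "normalize", "writeback"],
    "MOVE": ["fetch_operands", "writeback"],
}

def execute(program):
    """Expand a list of opcodes into the micro-ops the CU steps through."""
    trace = []
    for opcode in program:
        trace.extend(CONTROL_STORE[opcode])
    return trace

trace = execute(["MOVE", "MUL"])
print(trace)
```

Adding a new machine instruction means adding a microcode routine to the control store rather than new hardwired logic, which is why microprogramming made it cheap to grow CISC instruction sets.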
1.5 Impact of Historical Framework Design, Architecture Process and Technology on Reduced Instruction Set Computer
By 1981 the technology had changed, but the system architecture concept of moving complexity from software to hardware still remained. A CISC implementation is complex, and spreading it across multiple circuits is not ideal; as a result there was a need to combine everything into one chip, and the single-chip CPU came into the picture. To overcome the increased processing time, a system optimised to do each task in a shorter period was required. As it turned out, compiler technology had matured and memory had become cheap, which undercut the original motivation for designing complex instruction sets: high-level languages could now be better supported by software. [5]
Figure 5 RISC Architecture Structure
When the RISC approach came on board, the first observation was that the microcode engine's elaborate instructions were rarely generated by compilers from programming-language code; removing them made the compiler's job easier. Reducing the instruction set to the most essential operations, and designing a simpler system around it, is what gives the term Reduced Instruction Set Computer its meaning. The introduction of RISC gave small chips with low cost, faster data transactions and more reliable, direct control of the process. The number of instructions was reduced, and the size of each instruction was also reduced, so that an assembly-level instruction completes in a single cycle. These decisions were based on research into how microcoded instruction sets were actually used: many of the instructions stored in memory were rarely emitted, while a small core of RISC-style instructions did most of the work of a CISC machine. The average number of cycles per instruction falls, which helps execute machine instructions very effectively when running program code.
Pipelining takes effect on CPI performance, bringing it towards one and increasing throughput considerably. With a pipeline added to the other features, the number of instructions in a given program rises, but the cycles per instruction drop. In addition, two key elements were used in designing the RISC pipeline: a low CPI, and the avoidance of code bloat through a generous quantity of registers. The Reduced Instruction Set Computer operates on registers and accesses memory only through LOAD and STORE. A LOAD instruction brings an operand from memory into a register; register-to-register instructions such as MUL operate on registers, and STORE writes the result back to memory. As a result the instruction count increases, but memory usage and overall performance are significantly improved.
Figure 6 RISC Pipeline Structure
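The pipeline's effect on CPI can be quantified with the standard effective-CPI formula: the ideal CPI of one plus the stall cycles contributed by hazards. The base CPI, load fraction and stall penalty below are invented illustrative numbers, not measurements from the report:

```python
# Effective CPI of a toy RISC pipeline: ideal CPI plus stall cycles
# contributed by load-delay hazards.

def effective_cpi(base_cpi, load_fraction, load_delay_cycles):
    """base_cpi: cycles per instruction with no hazards;
    load_fraction: share of instructions that are loads causing a stall;
    load_delay_cycles: stall cycles charged per delayed load."""
    return base_cpi + load_fraction * load_delay_cycles

# Illustrative numbers: 25% of instructions are loads, each costing 1 stall cycle.
cpi = effective_cpi(1.0, 0.25, 1)
print(cpi)  # 1.25
```

Even with the stall penalty, an effective CPI of 1.25 is far below the several cycles a microcoded CISC instruction typically consumed, which is the quantitative heart of the RISC argument.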
Profiling HLL application code shows which operands are used most frequently in a program; these are loaded into registers for the subroutine that needs them. Hypothetical memory-to-memory machines, by contrast, carry out the loads and stores inside each memory-to-memory operation. This means that when the ARS encounters a MUL of the values at addresses [2:4] and [6:8], the microcode translates it into the following instruction sequence: [6]
I. LOAD the value at address 2:4 into a register
II. LOAD the value at address 6:8 into a register
III. MUL the two registers
IV. STORE the result back to 2:4
The LOAD and STORE steps take multiple cycles in a RISC machine's addressing patterns. There is a difference between the two models: in one, the MUL instruction and the write-back of its result are separate, program-visible register operations; in the other, the LOAD and STORE are sealed inside the MUL instruction itself, so the compiler cannot rearrange them, which limits efficiency. Because the RISC architecture has separate LOAD and STORE instructions for these operations, the compiler can schedule around them, although at the same time this introduces a delay of some cycles while the data is loaded into the registers.
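The four-step LOAD/LOAD/MUL/STORE sequence above can be sketched as a toy load/store machine in Python. The memory addresses `2:4` and `6:8` follow the report's notation; the register names and operand values are illustrative:

```python
# Toy load/store machine: memory is addressed by string keys, and
# arithmetic happens only between registers (the RISC discipline).

memory = {"2:4": 6, "6:8": 7}   # illustrative operand values
regs = {}

def load(reg, addr):
    regs[reg] = memory[addr]          # steps I/II: bring operands into registers

def mul(dst, src):
    regs[dst] = regs[dst] * regs[src] # step III: register-to-register multiply

def store(addr, reg):
    memory[addr] = regs[reg]          # step IV: write the result back to memory

load("r1", "2:4")
load("r2", "6:8")
mul("r1", "r2")
store("2:4", "r1")

print(memory["2:4"])  # 42
```

Because the loads are separate instructions, a compiler is free to hoist them earlier to hide the load delay; a memory-to-memory MUL offers no such scheduling freedom.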
A key innovation in CISC system architecture and its hardware implementations was microprogramming. The control unit fetches each machine instruction from memory and executes it by stepping through a routine of micro-operations, so that even a complex operation such as a floating-point ADD is carried out directly as a sequence of microsteps; care must be taken that the adding, shifting and normalization steps are all completed. The main advantages are fast direct execution and no data-transaction obstacles; the drawback is that the bit length of instructions is high, which lengthens the time needed to execute a program. Inside a microprogrammed machine the controller is a microcode engine that executes the fetched instructions. The CPU designers decide how to write the microcode routines and where to store them in memory, and subroutines within the microcode implement the functionality of a steadily growing instruction set. High-level performance and the falling cost of memory made these microprogrammed chips very attractive. The disadvantage was that the microcode was hard to debug, because the programs elaborated inside the control unit grew to a very large scale.
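The idea can be modelled as a loop that expands each machine instruction into a stored sequence of micro-operations. This is a rough sketch only: the opcodes, micro-op names and two-register machine below are invented for illustration and are not taken from any real control unit.

```python
# Toy microprogrammed control unit: every machine instruction is executed by
# stepping through a microcode routine held in "control memory".

regs = {"A": 0, "B": 0}

def u_load_a(arg): regs["A"] = arg                  # micro-op: immediate into A
def u_load_b(arg): regs["B"] = arg                  # micro-op: immediate into B
def u_add(arg):    regs["A"] = regs["A"] + regs["B"]  # micro-op: ALU add

# Control memory: machine opcode -> list of micro-operations to run in order.
control_memory = {
    "LDA": [u_load_a],
    "LDB": [u_load_b],
    "ADD": [u_add],
}

def execute(program):
    for opcode, arg in program:              # machine-level fetch/decode loop
        for micro_op in control_memory[opcode]:
            micro_op(arg)                    # microcode engine runs each step

execute([("LDA", 30), ("LDB", 12), ("ADD", None)])
print(regs["A"])                             # → 42
```

In a real CISC design a single complex instruction such as a floating-point ADD would expand into many more microsteps (align, add, normalize), which is exactly why debugging microcode at scale became so difficult.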
Figure 7 Addressing Modes
Memory access is managed differently in the RISC and CISC machine models above. A RISC compiler tries to keep operands in registers for register-to-register operations, while a CISC compiler determines the memory addressing mode and emits instructions that work on memory locations directly. In general terms, RISC design prefers register-to-register execution, with memory reached only through LOAD and STORE fetches; this is the load/store, register-to-register style introduced by Patterson.
The success of the RISC machines depended on system intelligence: responsibility moved into the compiler, which extracts high performance from simple hardware. The hardware is simple while the software involved is more complex, and both rely on an expanded register count. RISC chips could therefore be built with fewer transistors, less heat and lower cost, advantages that made them appealing to the modern market.
2 The Comparison of CISC and RISC Architecture Designs
Initially both CISC and RISC were simple models, but they have since evolved into far more complex designs under the requirements of modern IT infrastructure. The following is a summary of RISC and CISC architectural features and the design decisions behind each CPU. The side-by-side comparison brings out these two architectural aspects with a focus on performance, price and design strategy. [7]
Performance strategy
  CISC: Performance is gained by interpreting program structure in hardware; compilers have more room for optimization, but the hardware is more complex, making the chip harder to design and to understand.
  RISC: A reduced set of instructions, so fewer transistors per chip are needed to run a program. Execution time is fast and, hypothetically, speed increases; the small instruction set makes it efficient to write software with fewer resources.

Addressing modes
  CISC: More addressing modes, which means lengthy instruction encodings to implement.
  RISC: Simple addressing, with fewer than four modes to implement.

Pricing strategy
  CISC: Complexity, and therefore cost, is shifted from software to hardware.
  RISC: Complexity, and therefore cost, is shifted from hardware to software.

Design strategy
  CISC: Instruction sets work memory-to-memory, e.g. adding the data in two memory locations in a single instruction. The large instruction set performs complex tasks as many-cycle microcode routines supporting the High Level Language.
  RISC: LOAD/STORE instructions reach memory; all other operations are register-to-register. Single-cycle instructions perform basic tasks through direct execution in the control unit.

Typical use
  CISC: Mainly desktops, servers and workstations.
  RISC: Real-time application processing.

Optimization
  CISC: Reduces the total number of instructions per program.
  RISC: Reduces the number of clock cycles per instruction and the cycle time.

Examples
  CISC: Intel x86, IBM Z-series mainframe computers.
  RISC: PowerPC, Sun SPARC.
The list above covers many features of the RISC and CISC architectures: register structure, software support for HLLs, and LOAD/STORE addressing. The CISC ISA gives programmers rich instructions for implementing program code. Branch prediction was not yet in operation in 1981; the branch-execution features added later increased the complexity of the hardware chip, which in turn yielded high performance. Once again it should be noted that high performance as such was not the founding principle of RISC.
Current examples of RISC architecture include MIPS, SPARC and the G3, sometimes described as Fast Instruction Set Computing (FISC), because what is really kept down is the cycle time: the new RISC designs reduce the cycle time of each machine instruction. The number of instructions was never reduced; rather, individual instructions reduced their cycle count, along with the complexity of the machine structure. Mac users, for example, ran the G3 instruction set on RISC circuits. "A new computer design evolved: optimizing compilers could be used to compile normal programming languages down to instructions that were as creative in a large virtual address space, making the instruction cycle time as fast as technology would allow. The machine would have a smaller, reduced instruction set, and the remaining instructions would generally execute one per clock cycle in reduced instruction set computers." [8]
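The cycle-time argument above is the classic CPU performance equation: execution time = instruction count × CPI × clock cycle time. A small sketch with hypothetical numbers (the instruction counts, CPI values and cycle times below are invented to illustrate the trade-off, not measured data):

```python
# Classic CPU performance equation: time = instructions * CPI * cycle_time.
def exec_time(instructions, cpi, cycle_time_ns):
    """Total execution time in nanoseconds."""
    return instructions * cpi * cycle_time_ns

# Hypothetical program: CISC runs fewer, slower instructions;
# RISC runs more instructions but near one cycle each at a shorter cycle time.
cisc = exec_time(instructions=1_000_000, cpi=4.0, cycle_time_ns=2.0)
risc = exec_time(instructions=1_500_000, cpi=1.25, cycle_time_ns=1.0)

print(cisc)  # → 8000000.0
print(risc)  # → 1875000.0
```

Even though the RISC program executes 50% more instructions in this sketch, the lower CPI and shorter cycle time dominate, which is exactly the trade-off the text describes.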
3 Conclusion
Modern memory and storage devices make data transfer faster and pricing more economical. When installing programs, many companies consider the code-bloat question: code size grows as instructions pass through CISC platforms. At the same time, a RISC processor fetches instructions from an extraordinary variety of memory, which has been used to advance the design of the platform. The transistor count keeps climbing at a high rate; this problem is overcome by integrating all the transistors onto a single silicon chip.
Figure 8 MIPS and ARM Instruction Addressing
Compiler and memory-access architecture are central to modern design strategy, and designers look for further possibilities to integrate transistors. On the RISC side, cost reduction and real-time performance have been pursued as a direct route to raising the level of work the transistors in a design can do. RISC is represented by the MIPS, UltraSPARC and ARM architectures, whereas CISC is represented by Intel x86. The chips share many features, but the RISC processors differ in the details; on the CISC side, the x86 ISA amounts to hardware translation, with complex instructions converted internally into simpler operations. The key elements of the RISC approach are LOAD/STORE memory access, the pipeline structure, the reduced instruction set, and register-to-register data transfer.
To conclude, CISC and RISC have been discussed from their historical development through to their current application in MIPS, UltraSPARC, ARM and Intel x86, together with the LOAD/STORE model. Modern technology now sits at the top of the market segment, and each type offers different solutions. A groundbreaking modern architecture is Explicitly Parallel Instruction Computing (EPIC): the Itanium 9500-series processor is an advanced design supporting pipelines, multiple cores and threads, and (DRAM) memory instruction performance at high clock frequencies. The EPIC architecture suggested developing a new, combined implementation of hardware and software.
4 References
1. John L. Hennessy and David A. Patterson, Computer Architecture: A Quantitative Approach, Second Edition. Morgan Kaufmann Publishers, Inc., San Francisco, CA, 1996. Page 10.
2. David A. Patterson and Carlo H. Séquin, RISC I: A Reduced Instruction Set VLSI Computer. Computer Architecture (selected papers), 2001, Pages 216-230.
3. John L. Hennessy and David A. Patterson, Computer Organization and Design, Third Edition: The Hardware/Software Interface. Morgan Kaufmann, 2005, Pages 491-493.
4. John L. Hennessy and David A. Patterson, Computer Organization and Design, Third Edition: The Hardware/Software Interface. Morgan Kaufmann, 2005, Pages 491-493.
5. David A. Patterson and D. R. Ditzel, The Case for the Reduced Instruction Set Computer. Computer Architecture (selected papers), 1980.
6. John L. Hennessy and David A. Patterson, Computer Organization and Design, Third Edition: The Hardware/Software Interface. Morgan Kaufmann, 2005, Page 588.
7. Gerritsen, Armin: CISC vs. RISC. http://cpusite.examedia.nl/docs/cisc_vs_risc.html
8. "A new computer design evolved: optimizing compilers could be used to compile normal programming languages down to instructions that were as creative in a large virtual address space, making the instruction cycle time as fast as technology would allow. The machine would have a smaller, reduced instruction set, and the remaining instructions would generally execute one per clock cycle in reduced instruction set computers."