This document discusses computer architecture and trends in computer performance. It covers computer architecture definitions, the design goals that influence architecture choices, factors that affect performance such as instructions per cycle and clock speed, different types of memory and their speeds, trends in processor architecture over time (including increasing transistor counts and multicore processors), and how latency and bandwidth impact network and system performance. It also provides some interesting historical facts about the competition between Intel and AMD in the CPU market.
Trends in computer architecture
1. PREPARED BY MAHAMMADSALEH ABBAS, AVAZ
QARAYEV, KANAN CHALABI
SUBJECT: COMPUTER ORGANIZATION
AND ARCHITECTURE
SPECIALITY: IT
TRENDS IN COMPUTER
ARCHITECTURE
2. COMPUTER ARCHITECTURE
In computer engineering, computer architecture is a set of rules and methods that
describe the functionality, organization, and implementation of computer systems. Some
definitions of architecture define it as describing the capabilities and programming
model of a computer but not a particular implementation. In other definitions computer
architecture involves instruction set architecture design, microarchitecture design, logic
design, and implementation.
3. DESIGN GOALS
The exact form of a computer system depends on the constraints and goals.
Computer architectures usually trade off standards, power versus performance,
cost, memory capacity, latency (the amount of time it takes for
information to travel from one node to another) and throughput. Sometimes
other considerations, such as features, size, weight, reliability, and expandability are
also factors. The most common scheme does an in-depth power analysis and figures
out how to keep power consumption low while maintaining adequate performance.
4. PERFORMANCE
Modern computer performance is often described in instructions per cycle (IPC), which
measures the efficiency of the architecture at any clock frequency; a higher IPC
means the computer is faster. Older computers had IPC counts as low as 0.1 while
modern processors easily reach near 1. Superscalar processors may reach three to five
IPC by executing several instructions per clock cycle.
Counting machine-language instructions would be misleading because they can do
varying amounts of work in different ISAs. The "instruction" in the standard
measurements is not a count of the ISA's machine-language instructions, but a unit of
measurement, usually based on the speed of the VAX computer architecture.
Many people used to measure a computer's speed by the clock rate (usually in MHz or
GHz). This refers to the cycles per second of the main clock of the CPU. However, this
metric is somewhat misleading, as a machine with a higher clock rate may not
necessarily have greater performance. As a result, manufacturers have moved away
from clock speed as a measure of performance.
Other factors influence speed, such as the mix of functional units, bus speeds, available
memory, and the type and order of instructions in the programs.
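To make the relationship between IPC and clock rate concrete, here is a minimal sketch of the classic CPU-time identity. The machines and numbers are hypothetical, chosen only to illustrate why a higher clock rate alone does not guarantee a faster machine:

```python
# Rough CPU performance model: time = instructions / (IPC * clock_rate).
# All numbers below are made up for illustration.

def execution_time(instructions: float, ipc: float, clock_hz: float) -> float:
    """Seconds needed to retire `instructions` at `ipc` instructions/cycle."""
    cycles = instructions / ipc
    return cycles / clock_hz

program = 1e9  # one billion machine instructions

# Machine A: high clock, low IPC. Machine B: lower clock, higher IPC.
t_a = execution_time(program, ipc=0.8, clock_hz=4.0e9)   # ~0.31 s
t_b = execution_time(program, ipc=2.0, clock_hz=2.5e9)   # ~0.20 s

print(f"A: {t_a:.3f} s, B: {t_b:.3f} s")  # B wins despite the slower clock
```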
5. ► There are two main types of speed: latency and throughput. Latency is the time between the start
of a process and its completion. Throughput is the amount of work done per unit time. Interrupt
latency is the guaranteed maximum response time of the system to an electronic event (like when
the disk drive finishes moving some data).
► Performance is affected by a very wide range of design choices — for example, pipelining a
processor usually makes latency worse, but makes throughput better. Computers that control
machinery usually need low interrupt latencies. These computers operate in a real-time
environment and fail if an operation is not completed in a specified amount of time. For
example, computer-controlled anti-lock brakes must begin braking within a predictable and limited
time period after the brake pedal is sensed or else failure of the brake will occur.
► Benchmarking takes all these factors into account by measuring the time a computer takes to run
through a series of test programs. Although benchmarking shows strengths, it shouldn't be how
you choose a computer. Often the measured machines split on different measures. For example,
one system might handle scientific applications quickly, while another might render video games
more smoothly. Furthermore, designers may target and add special features to their products,
through hardware or software, that permit a specific benchmark to execute quickly but don't offer
similar advantages to general tasks.
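The pipelining trade-off mentioned above can be sketched numerically; the stage counts and delays below are made-up illustrative values, not measurements:

```python
# Pipelining trade-off: per-instruction latency gets slightly worse
# (pipeline-register overhead), but throughput improves because a new
# instruction can start every stage time. All delays are hypothetical.

logic_delay_ns = 10.0       # total combinational work per instruction
register_overhead_ns = 0.5  # latch/register delay added per pipeline stage

def pipeline(stages: int) -> tuple[float, float]:
    """Return (latency_ns, throughput_instr_per_ns) for an ideal pipeline."""
    stage_time = logic_delay_ns / stages + register_overhead_ns
    latency = stage_time * stages   # one instruction, start to finish
    throughput = 1.0 / stage_time   # instructions per ns in steady state
    return latency, throughput

for s in (1, 5, 10):
    lat, thr = pipeline(s)
    print(f"{s:2d} stages: latency {lat:5.2f} ns, throughput {thr:4.2f} instr/ns")
# Deeper pipelines raise total latency (10.5 -> 12.5 -> 15.0 ns) while
# raising throughput (~0.10 -> 0.40 -> 0.67 instructions per ns).
```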
6. Past Trends
Processors have undergone a tremendous evolution throughout their history. A key milestone in this
evolution was the introduction of the microprocessor, a term that refers to a processor implemented
in a single chip. The first microprocessor was introduced by Intel under the name Intel 4004 in 1971. It
contained about 2,300 transistors, was clocked at 740 kHz and delivered 92,000 instructions per second
while dissipating around 0.5 watts. Since then, practically every year we have witnessed the launch of a
new microprocessor, delivering significant performance improvements over previous ones. Some studies
have estimated this growth to be exponential, in the order of about 50% per year, which results in a
cumulative growth of over three orders of magnitude in a time span of two decades. These improvements
have been fueled by advances in the manufacturing process and innovations in processor architecture.
According to several studies, both aspects contributed in a similar amount to the global gains.
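A quick sanity check of that growth claim, using the 50% per year figure from the text:

```python
# 50% performance growth per year, compounded over two decades:
growth = 1.5 ** 20
print(f"{growth:,.0f}x")  # ~3,325x, i.e. a bit over three orders of magnitude
```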
7. CURRENT MICROPROCESSORS
Figure 3 shows a high-level block diagram of a typical
contemporary microprocessor. The main components
are a number of general-purpose cores, a graphics
processing unit, a shared last-level cache, a memory
and I/O interface, and an on-chip fabric to
interconnect all these components. Below, we briefly
describe the architecture of these modules.
8. Multicore Processors
The vast majority of current microprocessors have multiple general-purpose
cores based on the architecture described in the previous section. Figure
5 depicts the main components of a multicore processor. There are a
number of cores, each one with private L1 caches (separate caches for
instructions and data) and a private second level cache (some
processors do not have a private L2 cache), a shared last level cache
and an interconnection network that allows all the cores to
communicate through the memory hierarchy. The lower levels of the
memory hierarchy are normally a main memory and a disk storage, and
they are located off-chip. A multicore processor can run multiple
threads simultaneously in different cores with no resource contention
among them except for the shared resources, which are basically the
memory hierarchy and the interconnection network. The architecture of
these two components is key for the performance of multicore
processors, and they are described in more detail below. The architecture of
each one of the individual cores is basically the same as in a single-core
processor, and has been described in the previous section.
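As a rough illustration of running multiple threads simultaneously on different cores, here is a minimal sketch using Python's standard multiprocessing module. The burn workload is an arbitrary CPU-bound function invented for the example:

```python
# Minimal multicore sketch: the same CPU-bound work is run serially and
# then split across one worker process per core. The speedup (up to the
# number of physical cores) comes from cores executing truly in parallel.
import multiprocessing as mp
import time

def burn(n: int) -> int:
    """Arbitrary CPU-bound work: sum of squares below n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    chunks = [2_000_000] * mp.cpu_count()

    t0 = time.perf_counter()
    serial = [burn(n) for n in chunks]   # one core does everything
    t1 = time.perf_counter()
    with mp.Pool() as pool:              # one worker process per core
        parallel = pool.map(burn, chunks)
    t2 = time.perf_counter()

    assert serial == parallel
    print(f"serial {t1 - t0:.2f}s, parallel {t2 - t1:.2f}s "
          f"on {mp.cpu_count()} cores")
```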
9. I'll talk about this on the
next page.
In the past, CPU manufacturers thought that increasing MHz
would increase CPU performance, but later realized that
higher MHz means higher heat and energy consumption. That's
why the increase in MHz stagnated after 2010, and then
declined in the following years.
[Chart: CPU clock frequency over time, showing the increase and later
decrease; caption: "5 things affect CPU performance"]
As the number of cores in the CPU increased, the growth of
single-core performance slowed over time. In the past,
manufacturers focused more on single-core performance, but
now that has changed. A rough example for you: imagine
that your team is in a fight and your team, with
more people, beats only one man.
If there are more people on the team, your team will
be stronger, right? The number of cores works
like that.
Increasing the number of cores instead
of increasing the MHz has been a good
solution to get better performance, and
therefore the number of cores seems to
increase from year to year.
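The heat argument above can be made concrete with the standard dynamic-power approximation (a textbook rule of thumb, not from the slides): switching power grows linearly with frequency but quadratically with supply voltage, and since voltage must scale up roughly with frequency, power ends up growing roughly with the cube of the frequency:

```latex
P_{\text{dyn}} \approx C\,V^{2}\,f, \qquad V \propto f \;\Longrightarrow\; P_{\text{dyn}} \propto f^{3}
```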
10. TRANSISTOR PERFORMANCE
More transistors can be used to increase the processor throughput. Theoretically, doubling the number
of transistors in a chip provides it with the capability of performing twice the number of functions in the
same time, and increasing its storage by a factor of two. In practice, however, performance gains are
significantly lower. Fred Pollack observed a long time ago that processor performance was
approximately proportional to the square root of its area, which is normally referred to as Pollack's rule
of thumb. The increased transistor density allows architects to include more compute and storage units
and/or more complex units in the microprocessors, which used to provide an increase in performance of
about 40% per process generation. The 30% reduction in delay can provide an additional improvement,
as high as 40%, but it is normally lower due to the impact of wire delays.
Finally, additional performance improvements come from microarchitecture innovation. These innovations include deeper pipelines,
more effective cache memory organizations, new instruction set architecture (ISA) features, larger instruction windows and multicore
architectures just to name some of the most relevant.
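Pollack's rule of thumb lines up with the ~40% figure above: doubling the transistor budget should buy roughly sqrt(2) ≈ 1.41x performance. A quick numeric check:

```python
import math
# Pollack's rule: performance ~ sqrt(area). Doubling the area/transistors:
print(f"{math.sqrt(2):.2f}x")  # ~1.41x, i.e. about a 40% gain per doubling
```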
11.
• Moore's Law has basically been taken to mean that
the standard computer's performance
improves, with a doubling time of 18
months.
And I want to say another thing:
nowadays Moore's Law is
breaking down!
The end of Moore's Law as we know
it was always inevitable, because
there is a physical limit to what can
fit on a silicon chip once we start
working at the nanometer scale.
12. Latency and Bandwidth
The emergence and the fast growth of the web performance optimization industry within the past few years is a sign of
the growing importance of and demand for faster user experiences by consumers. And this is not simply a
psychological need for speed in our ever-developing and connected world. For example:
Faster sites lead to better user engagement.
Faster sites lead to better user retention.
Faster sites lead to higher conversions.
Simply put, speed is a feature. And to deliver it, we need to understand the many factors and fundamental limitations.
In this part, we will focus on the two critical components that dictate the performance of all network traffic: latency
and bandwidth.
Latency
The time from the source sending a packet to the destination receiving it.
For example: 5G is becoming widespread nowadays and as far as we
know 5G has a very low latency (around 1ms) and it has many
benefits for us. Now doctors will be able to perform operations
remotely, cars can be managed via the internet (which could
decrease the number of accidents), etc.
Bandwidth
Maximum throughput of a logical or physical communication path, i.e. the
maximum rate of data transfer across a given path.
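One hands-on way to observe latency is to time a TCP connection setup, which costs roughly one network round trip. This is a minimal sketch; example.com and port 80 are arbitrary choices for the demo:

```python
# Rough latency probe: time how long a TCP handshake takes. One connect()
# costs about one round trip, so this approximates network latency to the host.
import socket
import time

host, port = "example.com", 80  # arbitrary reachable host for the demo

t0 = time.perf_counter()
with socket.create_connection((host, port), timeout=5):
    pass  # connection established; we only care about the setup time
t1 = time.perf_counter()

print(f"approx. round-trip latency to {host}: {(t1 - t0) * 1000:.1f} ms")
```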
13. NETWORK
Let’s take a closer look at some common contributing components for a typical router on the Internet, which is responsible for
relaying a message between the client and the server:
Propagation delay
Amount of time required for a message to travel from the sender to receiver, which is a function of distance over speed with
which the signal propagates.
Transmission delay
Amount of time required to push all the packet’s bits into the link, which is a function of the packet’s length and data rate of the
link.
For example, to put a 10 megabyte (MB) file "on the wire" over a 1 Mbps link, we will need 80 seconds: 10 MB is equal to 80 megabits,
because there are 8 bits in every byte! In other words, with a speed of 1 Mbps you can download 1 megabit of data per second, which is
only 128 kilobytes (KB).
Processing delay
Amount of time required to process the packet header, check for bit-level errors, and determine the packet’s destination.
Queuing delay
Amount of time the packet is waiting in the queue until it can be processed.
The total latency between the client and the server is the sum of all the delays just listed.
In some countries today, internet speed has reached 1,000 megabits per second.
In fact, NASA's internet speed is 91 gigabits per second.
8 gigabits = 1 gigabyte,
so
91 gigabits = 11.375 gigabytes per second.
So you could download GTA 5 in just 5 seconds.
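The four delay components above can also be sketched as a small calculator; the link and packet parameters are hypothetical:

```python
# End-to-end latency as the sum of the four per-hop delay components.
# All inputs are illustrative; real values depend on the link and traffic.

SPEED_IN_FIBER = 2e8  # meters/second, roughly 2/3 the speed of light

def one_hop_latency(distance_m, packet_bits, link_bps,
                    processing_s=50e-6, queuing_s=0.0):
    propagation = distance_m / SPEED_IN_FIBER   # distance / signal speed
    transmission = packet_bits / link_bps       # packet length / link rate
    return propagation + transmission + processing_s + queuing_s

# A 1,500-byte packet over a 100 km, 1 Gbps fiber link:
latency = one_hop_latency(distance_m=100e3, packet_bits=1500 * 8, link_bps=1e9)
print(f"{latency * 1000:.3f} ms")  # ~0.562 ms, dominated by propagation
```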
14. SSD vs. HDD Speed and Performance (SECONDARY MEMORY)
► Solid state drives (SSDs) are faster than conventional hard disk drives (HDDs) and they are also more
reliable and use less power. That means that when it comes to choosing between SSD or HDD storage,
SSDs would be preferable to HDDs in all cases if it weren't for one fact: SSDs are more expensive than
HDDs when measured by cost per Gigabyte of storage.
► To understand why there is a big difference between SSD v HDD speed, it's necessary to consider
the difference between SSD and HDD technology.
SSDs have dramatically faster read and write
speeds when compared with hard disk
drives.
Nowadays SSDs are more popular than HDDs
(trend).
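To feel the read/write difference yourself, a rough benchmark like the sketch below can be run once on an SSD-backed path and again on an HDD-backed path. The file name and size are arbitrary, and OS caching makes the read figure only a ballpark:

```python
# Rough sequential-throughput probe: write then read a test file and time it.
import os
import time

PATH = "throughput_test.bin"     # arbitrary file name for the demo
SIZE_MB = 256
CHUNK = os.urandom(1024 * 1024)  # 1 MB of incompressible data

t0 = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())         # force data to the device, not just cache
t1 = time.perf_counter()

with open(PATH, "rb") as f:      # note: may be served from the page cache
    while f.read(1024 * 1024):
        pass
t2 = time.perf_counter()

os.remove(PATH)
print(f"write: {SIZE_MB / (t1 - t0):.0f} MB/s, "
      f"read: {SIZE_MB / (t2 - t1):.0f} MB/s")
```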
15.
[Chart: bandwidth and latency year by year]
You can see how bandwidth and latency improve year by year.
But not in Azerbaijan :(
16. INTERESTING FACTS
I prepared this interesting subject with quotations from Mesut
Çevik's YouTube channel; I hope you like it.
If AMD did not exist, we would probably see only Intel in the
processor world and only Nvidia as the GPU manufacturer in the
graphics card world. And in a world where there is no competition,
users always pay more and buy less.
IBM chooses Intel's x86 architecture as its processor technology
while setting PC standards. As the operating system, it chooses
Microsoft's DOS operating system, which we all know very well
today.
Of course, DOS is far from the Windows we currently use, but this is
a milestone for both companies. But to avoid any trouble with processor
supply, IBM imposes a condition on Intel: the x86 license must be shared
with a second manufacturer. AMD is chosen as that manufacturer, receives
an x86 license, and starts producing the first
processors for IBM.
17.
Everything goes very well in the first years. Intel starts causing the first annoyances in 1982. It
refuses to fully share the 80286 architecture with AMD, and the source files and updates it
shares are of a nature that will not work for AMD. That's why AMD lags behind Intel in
processor production for a long time. For Intel, an AMD is always a must, but a weak AMD is
what it wants. Intel tries its best to make the company appear as a mere backup supplier, each time
with different ways and different strategies. But the process is not limited to this, because
Intel is constantly making agreements with companies in a monopolistic move against AMD.
Despite this, the world's first desktop processor to exceed a 1 GHz clock
frequency comes from AMD. These challenges did not stop AMD, and the company continued to
improve its technology, introducing end users to dual-core desktop processors in 2005. But no
matter what AMD did, whatever technological advances it showed, however far it got ahead of its
competitors, major manufacturers were not willing to sell AMD products. AMD suspected that
something else might be behind this, and because it could support this with some evidence, it
filed a sizable monopoly suit against Intel in 2005.
18.
The first claim in AMD's indictment is that Intel threatens big
manufacturers with losing their discounts if they buy from AMD. Another
claim is that Intel pays Sony millions of dollars just to do business
only with it. As a result, AMD's market share in Sony products drops from 23
percent in 2002 to 8 percent in 2003 and to zero percent "today" -
that is, in 2005, when the suit is filed. You can no longer see any AMD
processors in Sony products, and for these reasons AMD wins the case.
Things like this continue until today, and AMD is now one of
the best graphics card and CPU manufacturers. If you liked this and
want to know more about the subject, you can check out the
YouTube channel of "Mesut Çevik".