The MISD classification is not practical to implement.
In fact, no significant MISD computers have ever been built.
It is included only for completeness.
From the beginning of time, computer scientists have been challenging computers with larger and larger problems. Eventually, processors were combined to work in parallel on the same task. This is parallel processing.

Types of Parallel Processing
SISD – Single Instruction stream, Single Data stream
MISD – Multiple Instruction stream, Single Data stream
SIMD – Single Instruction stream, Multiple Data stream
MIMD – Multiple Instruction stream, Multiple Data stream
SISD – One piece of data is sent to one processor. Ex: To multiply one hundred numbers by three, each number would be sent to the processor and multiplied in turn until all one hundred results were calculated.
(Diagram: a single data stream feeding one CPU executing Multiply.)
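A minimal sketch of the SISD idea in Python, assuming a simple list of one hundred numbers; the variable names are illustrative, not from the slides:

```python
# SISD: a single instruction stream works through a single data stream,
# one element at a time.
numbers = list(range(100))   # the one hundred numbers to be multiplied
results = []
for n in numbers:            # one datum per step
    results.append(n * 3)    # the single "multiply by three" instruction
print(results[:5])           # first few results: [0, 3, 6, 9, 12]
```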
MISD – One piece of data is broken up and sent to many processors. Ex: A database is broken up into sections of records and sent to several different processors, each of which searches its section for a specific key.
(Diagram: one data stream feeding four CPUs executing Search.)
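A hedged sketch of the database-search example above, using Python's standard process pool; the record set, the key, and the function names are illustrative assumptions:

```python
# Each worker searches its own section of the records for the same key.
from concurrent.futures import ProcessPoolExecutor

def search_section(section, key):
    """Return the records in this section that match the key."""
    return [record for record in section if record == key]

if __name__ == "__main__":
    records = ["alpha", "beta", "gamma", "beta", "delta"] * 1000
    key = "beta"
    n_workers = 4
    # Split the records into one section per worker.
    sections = [records[i::n_workers] for i in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        hits = pool.map(search_section, sections, [key] * n_workers)
    print(sum(len(h) for h in hits), "matches found")
```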
SIMD – Multiple processors execute the same instruction on separate data. Ex: A SIMD machine with 100 processors could multiply 100 numbers, each by three, at the same time.
(Diagram: four CPUs executing Multiply, each on its own data stream.)
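A SIMD-style sketch, assuming a pool of worker processes stands in for the array of processing elements; true SIMD hardware executes in lock-step, which an OS process pool only approximates, and the names below are illustrative:

```python
# SIMD flavour: every worker applies the *same* instruction
# (multiply by three) to its own element of the data.
from multiprocessing import Pool

def multiply_by_three(x):
    return x * 3

if __name__ == "__main__":
    data = list(range(100))          # 100 numbers, one per logical processor
    with Pool(processes=4) as pool:  # 4 OS processes stand in for 100 PEs
        results = pool.map(multiply_by_three, data)
    print(results[:5])
```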
MIMD – Multiple processors execute different instructions on separate data. This is the most complex form of parallel processing. It is used for complex simulations such as modeling the growth of cities.
(Diagram: four CPUs, each with its own data stream, executing Multiply, Search, Add, and Subtract.)
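A hedged MIMD sketch: four processes run four different functions on four different data sets, mirroring the Multiply/Search/Add/Subtract diagram; all function names and data values are illustrative:

```python
# MIMD: each process runs a *different* instruction stream on its own data.
from multiprocessing import Process, Queue

def multiply(data, out):
    out.put(("multiply", [x * 3 for x in data]))

def search(data, out):
    out.put(("search", [x for x in data if x == 7]))

def add(data, out):
    out.put(("add", sum(data)))

def subtract(data, out):
    out.put(("subtract", data[0] - sum(data[1:])))

if __name__ == "__main__":
    out = Queue()
    tasks = [
        (multiply, [1, 2, 3]),
        (search,   [5, 7, 9, 7]),
        (add,      [10, 20, 30]),
        (subtract, [100, 1, 2, 3]),
    ]
    procs = [Process(target=fn, args=(data, out)) for fn, data in tasks]
    for p in procs:
        p.start()
    results = [out.get() for _ in procs]   # drain the queue before joining
    for p in procs:
        p.join()
    for name, value in results:
        print(name, value)
```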
MIMD computers usually have a different program running on every processor. This makes for a very complex programming environment: which processor is doing which task at what time? What's doing what, when?
Memory latency – The time between issuing a memory fetch and receiving the response. Simply put, if execution proceeds before the memory request completes, unexpected results occur: the values being used are not the ones requested.
A similar problem can occur with the ordering of instruction executions themselves. Synchronization – The need to enforce the ordering of instruction executions according to their data dependencies: if instruction B uses a result produced by instruction A, then A must complete before B executes.
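A small sketch of enforcing such an ordering between two threads, using Python's threading.Event; the value 42 and the names produce/consume are illustrative:

```python
# The "consume" step depends on the value written by "produce", so an Event
# guarantees that produce completes before consume reads the value.
import threading

shared = {"value": None}
ready = threading.Event()

def produce():
    shared["value"] = 42               # instruction A: write the value
    ready.set()                        # signal that the dependency is satisfied

def consume():
    ready.wait()                       # block until A has completed
    print("consumed", shared["value"]) # instruction B: safe to read now

t1 = threading.Thread(target=consume)
t2 = threading.Thread(target=produce)
t1.start()
t2.start()
t1.join()
t2.join()
```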
Despite these potential problems, MIMD can prove larger than life.

MIMD Successes
IBM Deep Blue – a computer beats a professional chess player. Some may not consider this a fair example, because Deep Blue was built to beat Kasparov alone: it "knew" his play style so it could counter his projected moves. Still, Deep Blue's win marked a major victory for computing.
IBM's latest: a supercomputer that models nuclear explosions. IBM Poughkeepsie built the world's fastest supercomputer for the U.S. Department of Energy. Its job was to model nuclear explosions.
MIMD is the most complex, fastest, and most flexible parallel paradigm. It has beaten a world-class chess player at his own game. It models things that few people understand. It is parallel processing at its finest.
World's simplest computer: a processor and memory (P, M). Standard computer: add a cache and a disk (P, M, C, D). Connect several standard computers together with a network.
(Diagram: a processor/memory node, a processor/memory/cache/disk node, and four such nodes joined by a network.)
A Supercomputer at $5.2 million – Virginia Tech's 1,100-node Mac G5 supercomputer.
The Virginia Polytechnic Institute and State University has built a supercomputer consisting of a cluster of 1,100 dual-processor Macintosh G5 computers. Based on preliminary benchmarks, Big Mac is capable of 8.1 teraflops. The Mac supercomputer is still being fine-tuned, and the full extent of its computing power will not be known until November, but the 8.1-teraflop figure would make Big Mac the world's fourth-fastest supercomputer.
Big Mac's cost relative to similar machines is as noteworthy as its performance. The Apple supercomputer was constructed for just over US$5 million, and the cluster was assembled in about four weeks. In contrast, the world's leading supercomputers cost well over $100 million to build and require several years to construct. The Earth Simulator, which clocked in at 38.5 teraflops in 2002, reportedly cost up to $250 million.
Srinidhi Varadarajan, Ph.D. – Dr. Srinidhi Varadarajan is an Assistant Professor of Computer Science at Virginia Tech. He was honored with the NSF CAREER Award in 2002 for "Weaving a Code Tapestry: A Compiler Directed Framework for Scalable Network Emulation." He has focused his research on building a distributed network emulation system that can scale to emulate hundreds of thousands of virtual nodes. (Talk: October 28, 2003, 7:30pm–9:00pm, Santa Clara Ballroom.)
Clusters on the Rise – Using clusters of small machines to build a supercomputer is not a new concept. Another of the world's top machines, housed at the Lawrence Livermore National Laboratory, was constructed from 2,304 Xeon processors. The machine was built by Utah-based Linux Networx. Clustering technology has meant that traditional big-iron leaders like Cray (Nasdaq: CRAY) and IBM have new competition from makers of smaller machines. Dell (Nasdaq: DELL), among other companies, has sold high-powered computing clusters to research institutions.
Typically used where one computer does not have enough capacity to do the expected work
Cheaper than building one GIANT computer
Although not new, supercomputing clustering technology is still impressive. It works by farming out chunks of data to individual machines, and it works better for some types of computing problems than others. For example, a cluster would not be ideal to compete against IBM's Deep Blue supercomputer in a chess match; in that case, all the data must be available to one processor at the same moment, and the machine operates much the way the human brain handles tasks. However, a cluster would be ideal for processing seismic data for oil exploration, because that computing job can be divided into many smaller tasks, as sketched below.
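A hedged sketch of that divide-into-smaller-tasks pattern, with a trivial per-block computation standing in for real seismic processing; the block size and data are illustrative assumptions:

```python
# A large, divisible job is split into blocks, each block is handled by a
# separate worker, and the partial results are combined at the end.
from concurrent.futures import ProcessPoolExecutor

def process_block(block):
    # Stand-in for per-block seismic processing: just sum the samples.
    return sum(block)

if __name__ == "__main__":
    samples = list(range(1_000_000))   # the full data set
    block_size = 100_000
    blocks = [samples[i:i + block_size]
              for i in range(0, len(samples), block_size)]
    with ProcessPoolExecutor() as pool:
        partials = list(pool.map(process_block, blocks))
    print("total =", sum(partials))    # combine the partial results
```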
Need to break up work among the computers in the cluster
Example: Microsoft.com Search Engine
6 computers running SQL Server
Each has a copy of the MS Knowledge Base
Search requests come to one computer
Sends each request to one of the 6
Attempts to keep all 6 busy (see the dispatch sketch below)
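A minimal sketch of the round-robin dispatch described in this example; the server names and the dispatch function are illustrative assumptions, not details of Microsoft's actual system:

```python
# One front-end hands each incoming search request to the next of six
# back-end servers in round-robin order, trying to keep all six busy.
import itertools

servers = [f"sql-server-{i}" for i in range(1, 7)]  # 6 replicas of the knowledge base
next_server = itertools.cycle(servers)              # round-robin dispatcher

def dispatch(query):
    server = next(next_server)
    # A real system would forward the query over the network;
    # here we just report which replica would handle it.
    return f"{server} handles {query!r}"

for q in ["blue screen", "printer driver", "license key"]:
    print(dispatch(q))
```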
The Virginia Tech Mac supercomputer should be fully functional and in use by January 2004. It will be used for research into nanoscale electronics, quantum chemistry, computational chemistry, aerodynamics, molecular statics, computational acoustics and the molecular modeling of proteins.
According to the article, the supercomputer, powered by 2,200 IBM G5 processors, has been initially rated at 7.41 trillion operations per second. The final number could be much higher, according to school officials, but even at that rating it would rank as the #4 fastest supercomputing cluster in the world, alongside machines such as:
Japan's US$250M Earth Simulator, currently the world's fastest computer.
Lawrence Livermore's US$10-15M cluster system, made up of 2,304 Intel Xeon processors.
IBM's "Blue Pacific," installed at the Lawrence Livermore laboratories for $94 million.
"We are demonstrating that you can build a very high performance machine for a fifth to a tenth of the cost of what supercomputers now cost," said Hassan Aref, the dean of the School of Engineering at Virginia Tech in Blacksburg 1998 a group called distributed.net linked thousands of computers of all kinds around the world via the Internet, and cracked a 56-bit DES-II code in 40 days. It had previously been thought that such heavyweight ciphers would take hundreds of years to crack even on fast computers. One version of the Distributed.net program ran as a screen saver that kicked in, and began cracking code, whenever the machine was idle for more than a few minutes. Distributed.net bills itself as the "Fastest Computer on Earth", even though their hardware bill is effectively zero.
The idea is straightforward. You set up an arbitrary number of PCs, network them (typically using Fast Ethernet), and then send them problems that can be divided up among the machines' processors. One machine acts as a server that coordinates all the rest, called clients. Beowulf specifies software such as the Message Passing Interface (MPI), running under the Linux operating system, that allows the machines to communicate while working on the problem. And since Linux, the brainchild of computer science student Linus Torvalds, is free, it keeps the cost down.
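A minimal MPI sketch in Python, assuming the mpi4py package and an MPI runtime are available (run with, e.g., `mpiexec -n 4 python scatter_demo.py`); the file name and the multiply-by-three workload are illustrative:

```python
# The root rank divides the problem and scatters one chunk to each rank;
# every rank works on its chunk; the results are gathered back on the root.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    data = list(range(100))
    # Split the data into one chunk per rank.
    chunks = [data[i::size] for i in range(size)]
else:
    chunks = None

chunk = comm.scatter(chunks, root=0)   # each rank receives its chunk
partial = sum(x * 3 for x in chunk)    # local work on the chunk
totals = comm.gather(partial, root=0)  # collect the partial results

if rank == 0:
    print("grand total =", sum(totals))
```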
Modeling the trajectories of tens of millions of charged particles, each interacting with the others through electromagnetic forces, requires heavy-duty number crunching. To harness supercomputing power at a desktop price, UCLA's Dr. Viktor K. Decyk and his colleagues have created their own super-fast, parallel-processing "supercomputer" using a cluster of Power Macintosh computers.
SYDNEY – 22 January 2001: Apple's G4 Cubes used for cell mutation detection and genotyping analysis.
World's fastest" Macintosh cluster Tuesday, May 15, 2001 @ 8:45am Researchers at the Grupo de Lasers e Plasmas (GoLP) in Portugal have created what they bill as the world's fastest Macintosh-based cluster . Consisting of 16 dual-processor Power Mac G4/450s, the cluster delivers more than 50 GigaFlops of peak power and took just one day to set up.
Apple Computer purchased a big Cray supercomputer in the mid-1980s. In fact, Steve Jobs was Cray's first and only walk-in customer. He arrived unannounced (so the story goes) at Cray headquarters in Mendota Heights, Minnesota and asked to speak to someone about buying a Cray. They nearly threw him out. It's only slightly less eccentric than someone walking into NASA Johnson Space Center and inquiring how to purchase a shuttle orbiter. Later, Cray president John Rollwagen phoned Seymour and told him that Apple had just purchased a Cray that would be used in designing the next Macintosh. Seymour thought for a bit, and replied that that seemed reasonable, since he was using a Macintosh to design the next Cray!