This document provides an overview of the history of computing and how computers store data. It discusses:
- Gottfried Leibniz inventing binary arithmetic in the 17th century, which became the basis for how computers represent numbers.
- How early computers used mechanical switches to represent 1s and 0s, with switches in the on position representing 1 and off representing 0.
- Each byte in a computer's memory being divided into eight bits, with each bit representing a digit in the binary number system.
- Larger numbers being stored across multiple bytes, with the maximum value storable in a single byte being 255 and across two bytes being 65,535.
- A brief history of computing.
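The byte limits quoted above (255 for one byte, 65,535 for two) follow directly from powers of two; a minimal Python check, added here purely as an illustration:

```python
# An n-bit storage location has 2**n distinct patterns, so the largest
# unsigned value it can hold is 2**n - 1.
print(2**8 - 1)    # one byte  -> 255
print(2**16 - 1)   # two bytes -> 65535
```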
This document provides an overview of data representation in computer systems. It discusses how computers use binary numeric codes to represent different types of data like text, numbers, graphics and audio. These codes allow computers to interpret raw sequences of 0s and 1s as meaningful information. The document then explains binary number systems in more detail, how decimal numbers can be converted to and from binary, and how bytes and bits are used to store data in computer memory and represent characters. Specific examples are given of how binary representations are used in applications like robotics to control devices.
Mathematical concepts and their applications: Number system, by Jesstern Rays
The document discusses various number systems including binary and hexadecimal used in computing. It explains how binary represents numbers as 1s and 0s and is used in electronics like transistors and to represent text, images, and more. Hexadecimal is also introduced which uses 16 symbols to efficiently represent more characters using fewer bits than binary. Color codes in computing are represented using hexadecimal values for red, green, and blue components.
The binary number system uses only 1s and 0s to represent all data that computers process. It was first described in ancient India and introduced by Gottfried Leibniz in the 17th century. Claude Shannon later used binary code in his thesis which helped establish its practical use in computers and electronics. Braille is a common example of binary code used in the real world. Binary represents numbers in base 2 rather than base 10, with each binary digit called a bit and 8 bits making a byte of data.
The document discusses different number systems including binary, octal, decimal, and hexadecimal. It provides details on each system such as the base, digits used, applications, and how to convert between them. Binary uses only 0s and 1s and is the most fundamental system used in computing. Octal uses digits 0-7, with applications including older computer architectures. Decimal uses 0-9 and is the most common. Hexadecimal uses 0-9 and A-F, with each digit representing 4 bits, making it convenient for displaying colors and memory addresses.
This is a presentation on number systems in computer science, aimed at class 11 CBSE students. It can help students get through the number system topic quickly and revise it when needed.
The document provides a history of computers from early counting aids to modern personal computers. It describes the development of binary numbering systems and components of early computers like the Analytical Engine, ENIAC, and UNIVAC. It also discusses basic computer concepts like data storage, processing units, algorithms, programming, and hardware/software components.
Bits are the basic units of information in computing, representing values of 0 or 1. Bytes consist of 8 bits bundled together, allowing 256 possible values. Computer components like memory and storage are measured using multiples of bytes like kilobytes and megabytes. Binary numbers use bits like decimal numbers use digits, with each place value representing increasing powers of two rather than ten. Bytes are commonly used to represent text characters through encoding schemes like ASCII.
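The place-value idea above (each binary position is a power of two, just as each decimal position is a power of ten) can be sketched in a few lines of Python; the specific example number is my own:

```python
# The binary number 1101 expanded by place value:
# 1*2**3 + 1*2**2 + 0*2**1 + 1*2**0 = 8 + 4 + 0 + 1 = 13
value = 1*2**3 + 1*2**2 + 0*2**1 + 1*2**0
print(value)              # 13
print(int("1101", 2))     # 13, Python's built-in base-2 conversion agrees
```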
Computer data representation (integers, floating-point numbers, text, images, ...), by ArtemKovera
This document discusses how computers represent different types of data at a low level. It covers binary, octal, and hexadecimal number systems. It also discusses how integers, floating point numbers, text, images, and sound are represented in computer memory in binary format using bits and bytes. Understanding how data is represented is important for programming efficiently and writing secure code.
This lesson is for students taking the Cambridge School Certificate Computer Science exam (2210). I hope it will be of help to students in this period of crisis. Send feedback or suggestions to buxooa72@gmail.com.
This document discusses how computers represent and store data using binary sequences. It explains that all digital data is ultimately stored as sequences of zeros and ones at the lowest level of abstraction. Higher-level abstractions include integers, floating point numbers, text strings, and other data types which are interpreted based on how many bits are used. The document also discusses limitations of fixed bit representations and different units for measuring data transmission capabilities.
This document discusses how computers represent different types of data using binary numbers. It explains that all data inside a computer is stored as binary digits (bits) that represent ON and OFF switches. Various data types like characters, pictures, sound, programs and integers are represented by grouping bits into bytes. The context determines how a computer interprets each byte. Standards like ASCII, JPEG and WAV define how different data is encoded into binary format and bytes. The document also covers number systems like binary, decimal, hexadecimal and their properties.
In this ppt, you will learn about the evolution of number systems (decimal, binary, and hexadecimal) and why hexadecimal is the most important number system when working with microcontroller programming.
Digital logic design deals with digital circuits and how to design digital hardware using logic gates. It involves working with binary and other number systems. Binary represents information using two states (0 and 1) which can be represented electrically using voltage levels. Converting between number systems like binary, decimal, and octal allows digital components to interface. Basic logic operations like addition, subtraction and multiplication can then be performed on binary numbers.
This document provides an overview of topics related to algorithms, pseudo code, binary number systems, and Morse code. It includes objectives, examples, and activities for each topic. Students will learn about defining pseudo code, writing algorithms, binary number representation, addition and subtraction in binary, and Morse code encryption/decryption. Practice problems are provided to convert between binary and decimal numbers, perform binary operations, and write pseudo code.
Number Systems — Decimal, Binary, Octal, and Hexadecimal
Base 10 (Decimal) — Represent any number using 10 digits [0–9]
Base 2 (Binary) — Represent any number using 2 digits [0–1]
Base 8 (Octal) — Represent any number using 8 digits [0–7]
Base 16 (Hexadecimal) — Represent any number using 10 digits and 6 letters [0–9, A, B, C, D, E, F]
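To illustrate the four bases listed above, here is one value rendered in each; the sample number is arbitrary:

```python
# Python's built-ins render an integer in each of the four bases.
n = 2024
print(bin(n))   # base 2:  0b11111101000
print(oct(n))   # base 8:  0o3750
print(n)        # base 10: 2024
print(hex(n))   # base 16: 0x7e8
```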
SSC-ICT 7_History of Computer_031810.pptx, by HaruHaru68
The document provides information about the history of computers and different generations of computers. It discusses:
- The first generation of computers from 1946-1959 which used vacuum tubes and had limitations like being unreliable, costly, slow, and generating a lot of heat. Examples included ENIAC, EDVAC, UNIVAC, IBM 701, and IBM 650.
- The second generation from 1959-1965 started using transistors, making computers cheaper, more compact, reliable, and faster than the first generation. Magnetic cores, tapes, and disks were used for storage. Languages included FORTRAN and COBOL.
- Further generations saw the introduction of integrated circuits, microprocessors, personal computers, and newer technologies
This book's author is Zafar Ali Khan. It covers all the AS Level Computer Science topics that are required. All credit goes to Zafar Ali Khan.
The document discusses different methods of representing data in computers, including:
1. Binary representation of numbers using 0s and 1s. This allows integers and floating point numbers to be stored.
2. Text representation using character encoding standards like ASCII and Unicode which assign binary codes to letters, numbers and symbols.
3. Graphic representations including bitmapped images and vector graphics. Bitmaps store color values for each pixel while vectors store mathematical descriptions of shapes.
This document discusses new technologies and computing basics. It covers:
- The evolution of computers from mainframes to personal computers to wireless devices.
- How computers work based on the von Neumann model of input, operation, and output.
- How digital information is represented using binary digits (1s and 0s) and Boolean logic.
- Basic logic gates and how more complex circuits are built up from these.
- Storage media technologies over time from tape to hard drives to USB drives.
- The importance of evaluating online resources for credibility.
The document discusses the different number systems used in computers. It introduces the four main types of number systems: binary, decimal, octal, and hexadecimal. It explains that computers understand numbers in the form of binary digits. The other number systems are introduced to make working with binary numbers easier for humans. The hexadecimal system in particular is used as a shorthand because each hexadecimal digit can represent a group of 4 binary digits. Understanding number systems is important for understanding how computers work with numeric data and instructions.
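The "one hexadecimal digit per group of 4 binary digits" shorthand described above can be checked in a few lines of Python; the example byte is my own:

```python
# Split a byte into two 4-bit groups; each group maps to one hex digit.
byte = "11110001"
left, right = byte[:4], byte[4:]
print(format(int(left, 2), "x"))    # f
print(format(int(right, 2), "x"))   # 1
print(hex(int(byte, 2)))            # 0xf1 - the two digits side by side
```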
The document discusses various applications of computers in different fields such as business, medicine, banking, education, services, defense, engineering, and entertainment. It provides examples of how computers are used in each field, such as for keeping records, online services, design, research, and more. Computers apply binary numbering systems using bits and bytes to store and process all digital data and information.
This document discusses various topics related to digital representation of data including:
1. The differences between FAT32 and NTFS file systems and their advantages and limitations.
2. How data is represented digitally using coding schemes like ASCII and converted between binary and other number systems.
3. An overview of different numbering systems including binary, decimal, octal and hexadecimal; and how to convert between them.
The document provides an introduction to computational thinking concepts including converting information to data, data types and encoding, and logic. It discusses how information is converted to continuous and discrete data, and how data is encoded through binary representations and bit strings. Different data types like numbers, text, colors, pictures and sound are also explained in terms of their encoding. The document then covers logic and computational thinking concepts like inductive and deductive logic, and how Boolean logic uses true/false propositions and logical operators.
4. LESSON 1: BINARY CODE
Binary is a system of representing numbers using a pattern of ones and zeroes. First formalized by Gottfried Wilhelm Leibniz in the 17th century, the binary number system became widely used once computers required a way to represent numbers using mechanical switches.
5. Gottfried Wilhelm Leibniz
Gottfried Wilhelm Leibniz was a German polymath active as a mathematician, philosopher, scientist, and diplomat. He is a prominent figure in both the history of philosophy and the history of mathematics. He wrote works on philosophy, theology, ethics, politics, law, history, and philology.
6. What Is Binary Code?
Early computer systems had mechanical switches that turned on to represent 1 and turned off to represent 0. By using switches in series, computers could represent numbers using binary code. Modern computers still use binary code in the form of digital ones and zeroes inside the CPU and RAM.
7. BINARY CODE OF BIG (CAPITAL) LETTERS IN OUR ALPHABET
(each character followed by its binary code)
A 0100 0001   I 0100 1001   Q 0101 0001   Y 0101 1001
B 0100 0010   J 0100 1010   R 0101 0010   Z 0101 1010
C 0100 0011   K 0100 1011   S 0101 0011
D 0100 0100   L 0100 1100   T 0101 0100
E 0100 0101   M 0100 1101   U 0101 0101
F 0100 0110   N 0100 1110   V 0101 0110
G 0100 0111   O 0100 1111   W 0101 0111
H 0100 1000   P 0101 0000   X 0101 1000
8. BINARY CODE OF small letters IN OUR ALPHABET
(each character followed by its binary code)
a 0110 0001   i 0110 1001   q 0111 0001   y 0111 1001
b 0110 0010   j 0110 1010   r 0111 0010   z 0111 1010
c 0110 0011   k 0110 1011   s 0111 0011
d 0110 0100   l 0110 1100   t 0111 0100
e 0110 0101   m 0110 1101   u 0111 0101
f 0110 0110   n 0110 1110   v 0111 0110
g 0110 0111   o 0110 1111   w 0111 0111
h 0110 1000   p 0111 0000   x 0111 1000
11. All data that is stored in a computer is converted to sequences of 0s and 1s.
A computer’s memory is divided into tiny storage locations known as bytes. One byte is only enough memory to store a letter of the alphabet or a small number. In order to do anything meaningful, a computer has to have lots of bytes. Most computers today have millions, or even billions, of bytes of memory.
LESSON 2: How Computers Store Data
CONCEPT:
12. Each byte is divided into eight smaller storage locations known as
bits. The term bit stands for binary digit. Computer scientists
usually think of bits as tiny switches that can be either on or off.
Bits aren’t actual “switches,” however, at least not in the
conventional sense. In most computer systems, bits are tiny
electrical components that can hold either a positive or a negative
charge. Computer scientists think of a positive charge as a switch in
the on position, and a negative charge as a switch in the off
position.
How Computers Store Data
CONCEPT:(CONTINUATION)
13. 1.3 How Computers Store Data
When a piece of data is stored in a byte, the computer sets the eight bits to an on/off pattern that represents the data.
[Figure: a byte pictured as eight switches, each in the ON or OFF position]
14. Bit patterns for 77 and the letter A
[Figure: two bytes shown as switches; left, 0100 1101 (the number 77); right, 0100 0001 (the letter A)]
For example, the pattern shown on the left shows how the number 77 would be stored in a byte, and the pattern on the right shows how the letter A would be stored in a byte. We explain below how these patterns are determined.
15. A bit can be used in a very limited way to represent numbers. Depending on whether the bit is turned on or off, it can represent one of two different values. In computer systems, a bit that is turned off represents the number 0 and a bit that is turned on represents the number 1. This corresponds perfectly to the binary numbering system. In the binary numbering system (or binary, as it is usually called) all numeric values are written as sequences of 0s and 1s. Here is an example of a number that is written in binary:
10011101
The position of each digit in a binary number has a value assigned to it. Starting with the rightmost digit and moving left, the position values are 2^0, 2^1, 2^2, 2^3, and so forth. Calculated out, again starting with the rightmost digit and moving left, the position values are 1, 2, 4, 8, and so forth.
Storing Numbers
16. To determine the value of a
binary number you simply add up
the position values of all the 1s.
For example, in the binary
number 10011101, the position
values of the 1s are 1, 4, 8, 16,
and 128. This is shown on the left.
The sum of all of these position
values is 157. So, the value of the
binary number 10011101 is 157.
The values of binary digits as powers of 2
[Figure: the bits of 1001 1101 labeled with the position values 2^0 through 2^7, rightmost to leftmost]
17. This picture shows the number 157 stored in a byte of memory. Each 1 is represented by a bit in the on position, and each 0 is represented by a bit in the off position.
Determining the value of 10011101
1 0 0 1 1 1 0 1
1 + 4 + 8 + 16 + 128 = 157
19. When all of the bits in a byte are set to 0 (turned off), then the value of the byte is 0. When all of the bits in a byte are set to 1 (turned on), then the byte holds the largest value that can be stored in it. The largest value that can be stored in a byte is 1 + 2 + 4 + 8 + 16 + 32 + 64 + 128 = 255. This limit exists because there are only eight bits in a byte.
What if you need to store a number larger than 255? The answer is simple: use more than one byte. For example, suppose we put two bytes together. That gives us 16 bits. The position values of those 16 bits would be 2^0, 2^1, 2^2, 2^3, and so forth, up through 2^15. As shown in the picture below, the maximum value that can be stored in two bytes is 65,535. If you need to store a number larger than this, then more bytes are necessary.
How Computers Store Data
How Computers Store Data
20. 32768 + 16384 + 8192 + 4096 + 2048 + 1024 + 512 + 256 + 128 + 64 + 32 + 16 + 8 +
4 + 2 + 1 = 65535
Two bytes used for a large number
21. TIP: In case you’re feeling overwhelmed by all this,
relax! You will not have to actually convert
numbers to binary while programming. Knowing
that this process is taking place inside the computer
will help you as you learn, and in the long term this
knowledge will make you a better programmer.
31. Ancient Times
LESSON 2: HISTORY OF COMPUTERS
Early Man relied on counting on his fingers and toes
(which by the way, is the basis for our base 10
numbering system). He also used sticks and stones as
markers. Later notched sticks and knotted cords were
used for counting. Finally came symbols written on
hides, parchment, and later paper. Man invents the
concept of number, then invents devices to help keep up
with the numbers of his possessions.
32. The ancient Romans developed
an Abacus, the first "machine"
for calculating. While it predates
the Chinese abacus we do not
know if it was the ancestor of
that Abacus. Counters in the
lower groove are 1 × 10^n, those
in the upper groove are 5 × 10^n.
ROMAN EMPIRE
33. John Napier, a Scottish nobleman
and politician devoted much of his
leisure time to the study of
mathematics. He was especially
interested in devising ways to aid
computations. His greatest
contribution was the invention of
logarithms. He inscribed
logarithmic measurements on a set
of 10 wooden rods and thus was
able to do multiplication and
division by matching up numbers
on the rods. These became known
as Napier’s Bones.
INDUSTRIAL AGE 1600
34. Napier invented logarithms, Edmund
Gunter invented the logarithmic scales
(lines etched on metal or wood), but it was
William Oughtred, in England who
invented the slide rule. Using the concept of
Napier’s bones, he inscribed logarithms on
strips of wood and invented the calculating
"machine" which was used up until the
mid-1970s when the first hand-held
calculators and microcomputers appeared.
1620 THE SLIDE RULE
35. Blaise Pascal, a French mathematical genius, at
the age of 19 invented a machine, which he
called the Pascaline that could do addition and
subtraction to help his father, who was also a
mathematician. Pascal’s machine consisted of a
series of gears with 10 teeth each, representing
the numbers 0 to 9. As each gear made one turn
it would trip the next gear up to make 1/10 of a
revolution. This principle remained the
foundation of all mechanical adding machines
for centuries after his death. The Pascal
programming language was named in his honor.
1642- Blaise Pascal (1623-1662)
36. 1673 - Gottfried Wilhelm von Leibniz (1646-1716)
Gottfried Wilhelm von Leibniz invented differential and integral
calculus independently of Sir Isaac Newton, who is usually given sole
credit. He invented a calculating machine known as Leibniz’s
Wheel or the Step Reckoner. It could add and subtract, like Pascal’s
machine, but it could also multiply and divide. It did this by repeated
additions or subtractions, the way mechanical adding machines of the
mid to late 20th century did. Leibniz also invented something essential
to modern computers — binary arithmetic.
37.
Basile Bouchon, the son of an organ maker, worked in the textile
industry. At this time fabrics with very intricate patterns woven into
them were very much in vogue. To weave a complex pattern,
however, involved somewhat complicated manipulations of the
threads in a loom, which frequently became tangled, broken, or out
of place. Bouchon observed the paper rolls with punched holes that
his father made to program his player organs and adapted the idea
as a way of "programming" a loom. The paper passed over a section
of the loom and where the holes appeared certain threads were
lifted. As a result, the pattern could be woven repeatedly. This was
the first punched-paper stored program. Unfortunately the paper
tore and was hard to advance. So, Bouchon’s loom never really
caught on and eventually ended up in the back room collecting dust.
1725 - The Bouchon Loom
38.
In 1728 Jean-Baptiste Falcon substituted a deck of punched cardboard cards for the paper roll of Bouchon’s loom. This was much more durable, but the deck of cards tended to get shuffled and it was tedious to continuously switch cards. So, Falcon’s loom ended up collecting dust next to Bouchon’s loom.
1728 - The Falcon Loom
39. It took inventor Joseph M. Jacquard to bring together Bouchon’s idea of a continuous punched roll and Falcon’s idea of durable punched cards to produce a really workable programmable loom. Weaving operations were controlled by punched cards tied together to form a long loop. And, you could add as many cards as you wanted. Each time a thread was woven in, the roll was clicked forward by one card. The results revolutionized the weaving industry and made a lot of money for Jacquard. This idea of punched data storage was later adapted for computer data input.
1804 - Joseph Marie Jacquard (1752-1834)
40. Charles Babbage is known as the Father of the
modern computer (even though none of his
computers worked or were even constructed in
their entirety). He first designed plans to build,
what he called the Automatic Difference
Engine. It was designed to help in the
construction of mathematical tables for
navigation. Unfortunately, engineering
limitations of his time made it impossible for
the computer to be built. His next project was
much more ambitious.
1822 – Charles Babbage (1791-1871)
and Ada Augusta, The Countess of
Lovelace
41. While a professor of mathematics at Cambridge
University (where Stephen Hawking is now), a position
he never actually occupied, he proposed the construction
of a machine he called the Analytic Engine. It was to
have a punched card input, a memory unit (called
the store), an arithmetic unit (called the mill), automatic
printout, sequential program control, and 20-place
decimal accuracy. He had actually worked out a plan for
a computer 100 years ahead of its time. Unfortunately it
was never completed. It had to wait for manufacturing
technology to catch up to his ideas.
During a nine-month period in 1842-1843, Ada Lovelace
translated Italian mathematician Luigi Menabrea's
memoir on Charles Babbage's Analytic Engine. With her
translation she appended a set of notes which specified
in complete detail a method for calculating Bernoulli
numbers with the Engine. Historians now recognize this
as the world's first computer program and honor her as
the first programmer. Too bad she has such an ill-received programming language named after her.
42.
The computer trail next takes us to, of all places, the U.S.
Bureau of Census. In 1880 taking the U.S. census proved
to be a monumental task. By the time it was completed it
was almost time to start over for the 1890 census. To try
to overcome this problem the Census Bureau hired Dr.
Herman Hollerith. In 1887, using Jacquard’s idea of the
punched card data storage, Hollerith developed a punched
card tabulating system, which allowed the census takers to
record all the information needed on punched cards which
were then placed in a special tabulating machine with a
series of counters. When a lever was pulled a number of
pins came down on the card. Where there was a hole the
pin went through the card and made contact with a tiny
pool of mercury below and tripped one of the counters by
one. With Hollerith’s machine the 1890 census tabulation
was completed in 1/8 the time. And they checked the
count twice.
1880s – Herman Hollerith (1860-
1929)
46. Think about some of the different ways that people use computers. In school,
students use computers for tasks such as writing papers, searching for articles,
sending email, and participating in online classes.
At work, people use computers to analyze data, make presentations, conduct
business transactions, communicate with customers and coworkers, control
machines in manufacturing facilities, and do many other things.
At home, people use computers for tasks such as paying bills, shopping online,
communicating with friends and family, and playing computer games.
And don’t forget that cell phones, iPods®, BlackBerries®, car navigation systems,
and many other devices are computers too.
The uses of computers are almost unlimited in our everyday lives.
A computer is a general-purpose electronic device.
Introduction
What is a Computer?
47. Computers can do a wide variety of things because they can be
programmed. This means that computers are not designed to do just one
job, but to do any job that their programs tell them to do.
Programming is the art of writing computer programs
At its core, computer programming is solving problems
To solve a problem using a computer, you must express the solution to the
problem in terms of the instructions of the particular computer.
A computer program is just a collection of the instructions necessary to
solve a specific problem.
The approach or method that is used to solve the problem is known as an
algorithm.
In general a computer program is a set of instructions that a computer
follows to perform a specific task.
Introduction
What is Programming?
48. Computer Programs are commonly referred to as software. Software
is essential to a computer because it controls everything the
computer does.
All of the software that we use to make our computers useful is
created by individuals working as programmers or software
developers.
A programmer, or software developer, is a person with the training
and skills necessary to design, create, and test computer programs.
Computer programming is an exciting and rewarding career. Today,
you will find programmers’ work used in business, medicine,
government, law enforcement, agriculture, academics,
entertainment, and many other fields…
Introduction
What is Software?
49. The physical devices that a computer is made of are
referred to as the computer’s hardware. The programs
that run on a computer are referred to as software.
The term hardware refers to all of the physical devices, or
components, that a computer is made of. A computer is
not one single device, but a system of devices that all
work together. Like the different instruments in a
symphony orchestra, each device in a computer plays its
own part.
Introduction
Hardware and Software
50. Basic Working Principle of a Computer
[Diagram: DATA (input) → DATA PROCESSING → INFORMATION (output)]
Data is the raw material for data processing. Data
consists of numbers, letters and symbols and relates
to facts, events and transactions.
Information is data that has been processed in such a
way as to be meaningful to the person who receives
it.
Introduction
51. A typical computer system consists of the following
major components:
The central processing unit (CPU)
Main memory
Secondary storage devices
Input devices
Output devices
Hardware
Typical components of a computer system
53. When a computer is performing the tasks that a
program tells it to do, we say that the computer is
running or executing the program. The central
processing unit, or CPU, is the part of a computer that
actually runs programs.
The CPU is the most important component in a
computer because without it, the computer could not
run software
Hardware
Functions of the CPU
54. You can think of main memory as the computer’s work
area. This is where the computer stores a program while
the program is running, as well as the data that the
program is working with.
For example, suppose you are using a word processing
program to write an assignment for one of your classes.
While you do this, both the word processing program and
the assignment are stored in main memory.
Main memory is commonly known as random-access
memory, or RAM. It is called this because the CPU is able
to quickly access data stored at any random location in
RAM
Hardware
Functions of the Main Memory
55. Secondary storage is a type of memory that can hold data for long
periods of time, even when there is no power to the computer.
Programs are normally stored in secondary memory and loaded into
main memory as needed. Important data, such as word processing
documents, payroll data, and inventory records, is saved to
secondary storage as well.
The most common type of secondary storage device is the disk drive.
A disk drive stores data by magnetically encoding it onto a circular
disk.
Most computers have a disk drive mounted inside their case. External
disk drives, which connect to one of the computer’s communication
ports, are also available. External disk drives can be used to create
backup copies of important data or to move data to another
computer.
Hardware
Functions of the Secondary Storage Devices
56. Input is any data the computer collects from people
and from other devices. The component that collects
the data and sends it to the computer is called an input
device.
Common input devices are the keyboard, mouse,
scanner, microphone, and digital camera.
Disk drives and optical drives can also be considered
input devices because programs and data are
retrieved from them and loaded into the computer’s
memory.
Functions of Input Devices
Hardware
57. Output is any data the computer produces for people
or for other devices. It might be a sales report, a list of
names, or a graphic image.
The data is sent to an output device, which formats
and presents it. Common output devices are video
displays and printers.
Disk drives and CD recorders can also be considered
output devices because the system sends data to
them in order to be saved.
Hardware
Functions of Output Devices
60. The programs that control and manage the basic
operations of a computer are generally referred to as
system software.
System software typically includes the following types of
programs:
Operating System: Windows XP/7/8/10/server 2012, Mac OS X, and Linux.
Utility Programs: virus scanners, file compression programs, and data
backup programs.
Software Development Tools are the programs that programmers use to
create, modify, and test software. Assemblers, compilers, and
interpreters are examples of programs that fall into this category.
System Software
Software
61. Programs that make a computer useful for everyday
tasks are known as application software. These are
the programs that people normally spend most of
their time running on their computers.
Commonly used applications: Microsoft Word, a word
processing program, and Adobe Photoshop, an image editing
program.
Some other examples of application software are spreadsheet
programs, email programs, web browsers, and game
programs.
Software
Application Software
62. All data that is stored in a computer is converted to sequences of 0s
and 1s.
A computer’s memory is divided into tiny storage locations known as
bytes. One byte is only enough memory to store a letter of the
alphabet or a small number. In order to do anything meaningful, a
computer has to have lots of bytes. Most computers today have
millions, or even billions, of bytes of memory.
Each byte is divided into eight smaller storage locations known as
bits. The term bit stands for binary digit. Computer scientists usually
think of bits as tiny switches that can be either on or off. Bits aren’t
actual “switches,” however, at least not in the conventional sense
How Computers Store Data
Concepts
63. To write a program for a computer, we must use a
computer language. Over the years computer languages
have evolved from machine language to natural
languages.
Computer languages evolved from low level to high level.
Computer Languages
Computer Language Evolution
70. A compiler is a program that translates a high-level language program into a separate machine language program.
Computer Languages
Note
71. In this section, we explain the procedure for turning a
program written in C into machine language. The
process is presented in a straightforward, linear fashion,
but you should recognize that these steps are repeated
many times during development to correct errors and
make improvements to the code.
Steps
Writing and Editing a C Program
Compiling and Linking a Program
Executing Program
C Programming Language overview
Creating and Running a C Program
72. Building a C Program
C Programming Language overview