1. The ASCII code used by most computers uses the last seven positions of an eight-bit byte to
represent all the characters on a standard keyboard. How many different orderings of 0's and 1's
(or how many different characters) can be made by using the last seven positions of an eight-bit
byte?
Solution
There are 2^7 = 128 possible orderings of 0's and 1's (or 128 different characters) using the last seven positions of an eight-bit byte. (Using all 8 bits would give 2^8 = 256.)
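The count follows from the multiplication rule: each of the seven positions independently holds a 0 or a 1, so the possibilities multiply. A quick sketch in Python:

```python
# Each of the 7 bit positions can independently be 0 or 1,
# so the number of distinct orderings is 2 * 2 * ... * 2 (seven times).
combinations = 2 ** 7
print(combinations)  # 128

# Equivalently, enumerate every 7-bit pattern and count them.
patterns = [format(n, "07b") for n in range(2 ** 7)]
print(len(patterns))              # 128
print(patterns[0], patterns[-1])  # 0000000 1111111
```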
If you're somewhat familiar with computers, then you know that all modern computers are
"digital", i.e. internally they represent all data as numbers. In the very early days of computing
(1940's), it became clear that computers could be used for more than just number crunching.
They could be used to store and manipulate text. This could be done by simply representing
different alphabetic letters by specific numbers. For example, the number 65 to represent the
letter "A", 66 to represent "B", and so on. At first, there was no standard, and different ways
of representing text as numbers developed, e.g. EBCDIC (ref. 2).
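This letter-to-number mapping is easy to inspect in most programming languages; for example, Python's built-in ord() and chr() expose the character codes directly:

```python
# ord() returns the numeric code for a character; chr() is the inverse.
print(ord("A"))  # 65
print(ord("B"))  # 66
print(chr(65))   # A
# Lower-case letters sit 32 positions later in the table.
print(ord("a"))  # 97
```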
By the late 1950's computers were getting more common, and starting to communicate with
each other. There was a pressing need for a standard way to represent text so it could be
understood by different models and brands of computers. This was the impetus for the
development of the ASCII table, first published in 1963 but based on earlier similar tables used
by teleprinters. After several revisions, the modern version of the 7-bit ASCII table was adopted
as a standard by the American National Standards Institute (ANSI) during the 1960's. The
current version is from 1986, published as ANSI X3.4-1986 (ref. 1). ASCII stands for
"American Standard Code for Information Interchange".
If you've read this far then you probably know that around then (1960's), an 8-bit byte was
becoming the standard way that computer hardware was built, and that you can store 128
different numbers in a 7-bit number. When you counted all the needed characters (A
to Z in both upper and lower case, the digits 0 to 9, special characters like "% * / ?", etc.) you
ended up with a value of 90-something. It was therefore decided to use 7 bits to store the new ASCII
code, with the eighth bit being used as a parity bit to detect transmission errors.
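As an illustration of how that eighth bit could be used, here is a sketch of an even-parity scheme (one common convention, not any particular hardware's implementation), where the parity bit is chosen so the whole byte contains an even number of 1's:

```python
def add_even_parity(code7: int) -> int:
    """Place an even-parity bit in the high (eighth) bit of a byte.

    code7 is a 7-bit ASCII code (0-127). The parity bit is chosen so
    the whole byte contains an even number of 1 bits; a receiver that
    counts an odd number of 1 bits knows a transmission error occurred.
    """
    ones = bin(code7).count("1")
    parity = ones % 2          # 1 only if the 7-bit code has an odd number of 1s
    return (parity << 7) | code7

byte = add_even_parity(ord("A"))  # ord("A") == 65 == 0b1000001 (two 1 bits)
print(format(byte, "08b"))        # 01000001 -> parity bit stays 0
```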
Over time, the table's limitations were overcome in different ways. First, there were
"extended" or "8-bit" variations, primarily to accommodate European languages or
mathematical symbols. These are not true standards, but were used by different computers,
programming languages, manufacturers, and printers at different times. Thus there are many
variations of the 8-bit or extended
"ASCII table". None of them is reproduced here, but you can read about them in the references
below (ref. 5).
By the 1990's there was a need to include non-English languages, including those that use
other writing systems, e.g. Chinese, Hindi, Persian, etc. The Unicode representation uses 16 bits to
store each alphanumeric character, which allows for many tens of thousands of different
characters to be stored or displayed (ref. 3).
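Python strings use Unicode code points, so the same ord() call works for non-Latin scripts; a quick check shows that ASCII characters keep their old codes, and that the 16-bit figure above corresponds to 2^16 = 65,536 distinct values:

```python
# ASCII characters keep their original codes in Unicode...
print(ord("A"))   # 65
# ...while other scripts get higher code points.
print(ord("中"))  # 20013 (a Chinese character, U+4E2D)
print(2 ** 16)    # 65536 distinct values in 16 bits
```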
Even as these new standards are phased in, the 7-bit ASCII table continues to be the backbone of
modern computing and data storage. It is one of the few real standards that all computers
understand, and everything from e-mail to web browsing to document editing would not be
possible without it. It is so ubiquitous that the terms "text file" and "ASCII file" have come to
mean the same thing for most computer users.