Western Science History
for Zoey and Kaya
Created by GrandBob
Muse
Grandma
This is an initial draft. More
details will be added later.
17th Century Science
Rise of Science (1600-1700)
Galileo
Copernicus Newton
Kepler
Up until 1600, most people believed that the Earth was the center of the Universe, with the sun, moon, and stars revolving around it. Religious groups like the Catholic Church thought that the Bible supported this belief. Copernicus showed that it made more sense if the Earth went around the sun. Out of fear of reprisal, he only published his result at the very end of his life. Galileo discovered basic principles of physics, built a greatly improved telescope, and believed that the Earth went around the sun. However, he was arrested and silenced by the Catholic Church. Later, Kepler discovered that all of the planets, including the Earth, go around the sun in elliptical orbits. Isaac Newton invented the Theory of Gravity and created the mathematics needed (Calculus) to show why Kepler's Laws worked. He was then able to predict the future paths of heavenly bodies. This was a major blow to religions. However, Newton also calculated the size of the Ark because he believed in parts of the Bible.
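Kepler's third law says that a planet's period T (in Earth years) and its average distance a from the sun (in astronomical units) obey T^2 = a^3, which Newton later derived from his law of gravity. Here is a small Python sketch of that rule; the distance used for Mars (about 1.524 AU) is just an illustrative value.

# Kepler's third law: T^2 = a^3, with T in Earth years and a in astronomical units (AU).
# The semi-major axis used for Mars (~1.524 AU) is an assumed illustrative value.
def orbital_period_years(semi_major_axis_au):
    """Return the orbital period, in Earth years, of a planet orbiting the sun."""
    return semi_major_axis_au ** 1.5

print(orbital_period_years(1.0))    # Earth: 1.0 year
print(orbital_period_years(1.524))  # Mars: about 1.88 years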
17th Century Mathematics
Descartes
Pascal
Fermat
Johann Bernoulli
Jacob Bernoulli De Moivre
18th Century Science
18th Century Science
Lavoisier Halley Herschel
Priestley Cavendish Black
Oxygen Chlorine Carbon Dioxide
Combustion
Comet
Uranus
Franklin
Lightning Rod
Watt
Steam Engine
18th Century Mathematics
Gauss Euler Laplace
Fourier Lagrange Cauchy
19th Century Science
19th Century Scientists and Inventors
Edison Marconi
Morse Tesla
Darwin Mendel Agassiz Malthus Pasteur
Daimler
Some 19th Century Physicists and Mathematicians
Maxwell
Faraday
Helmholtz Boltzmann
Cantor Klein Riemann
Kelvin
20th Century Science
Physics in the 20th Century
In the 19th Century, James Maxwell discovered the equations of electromagnetism. Many physicists thought that there would be no more new physics. However, the 20th Century produced amazing new science that revolutionized the world. In 1905, Einstein discovered special relativity, which showed that nothing could travel faster than light and that E = MC^2. In 1915, he discovered general relativity, a theory of gravity based on curved space-time. In 1900, Max Planck showed that radiation is emitted in discrete chunks (quanta). Rutherford discovered the atomic nucleus. Bohr, Heisenberg (uncertainty principle), Dirac, and Schrodinger (wave equation) used quantum mechanics to explain the hydrogen atom and revolutionize physics. Richard Feynman showed how to use diagrams to calculate accurate predictions for particle measurements in a “Standard Model”. Murray Gell-Mann discovered particles called quarks that are key to the Standard Model. Stephen Hawking showed that black holes radiate.
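Planck's idea means the energy of a single quantum of light is E = h x f, where h is Planck's constant and f is the light's frequency. Here is a tiny Python sketch, assuming green light at roughly 5.5 x 10^14 Hz (an illustrative value).

# Energy of one quantum (photon) of light: E = h * f.
# The chosen frequency (green light, ~5.5e14 Hz) is an illustrative assumption.
PLANCK_CONSTANT = 6.626e-34  # joule-seconds

def photon_energy(frequency_hz):
    """Return the energy, in joules, of one quantum of light at the given frequency."""
    return PLANCK_CONSTANT * frequency_hz

print(photon_energy(5.5e14))  # about 3.6e-19 joules: tiny, which is why light looks continuous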
Planck
Einstein Bohr Heisenberg Schrodinger
Dirac Feynman Gell-Mann
Hawking
Rutherford
Curie
Dyson
Nobel Prize Winners in Chemistry
From https://en.wikipedia.org/wiki/List_of_Nobel_laureates_in_Chemistry
List of all winners
The Nobel Prize in Chemistry (Swedish: Nobelpriset i kemi) is awarded annually by the Royal Swedish Academy of Sciences to scientists in the
various fields of chemistry. It is one of the five Nobel Prizes established by the 1895 will of Alfred Nobel, who died in 1896. These prizes are
awarded for outstanding contributions in chemistry, physics, literature, peace, and physiology or medicine.[1] As dictated by Nobel's will, the
award is administered by the Nobel Foundation and awarded by the Royal Swedish Academy of Sciences.[2] The first Nobel Prize in Chemistry
was awarded in 1901 to Jacobus Henricus van 't Hoff, of the Netherlands. Each recipient receives a medal, a diploma and a monetary award prize
that has varied throughout the years.[3] In 1901, van 't Hoff received 150,782 SEK, which is equal to 7,731,004 SEK in December 2007. The
award is presented in Stockholm at an annual ceremony on 10 December, the anniversary of Nobel's death.[4]
At least 25 laureates have received the Nobel Prize for contributions in the field of organic chemistry, more than any other field of chemistry.[5]
Two Nobel Prize laureates in Chemistry, Germans Richard Kuhn (1938) and Adolf Butenandt (1939), were not allowed by their government to
accept the prize. They would later receive a medal and diploma, but not the money. Frederick Sanger is one out of two laureates to be awarded
the Nobel prize twice in the same subject, in 1958 and 1980. John Bardeen is the other and was awarded the Nobel Prize in physics in 1956 and
1972. Two others have won Nobel Prizes twice, one in chemistry and one in another subject: Maria Skłodowska-Curie (physics in 1903,
chemistry in 1911) and Linus Pauling (chemistry in 1954, peace in 1962).[6] As of 2020, the prize has been awarded to 185 individuals, including
seven women: Maria Skłodowska-Curie, Irène Joliot-Curie (1935), Dorothy Hodgkin (1964), Ada Yonath (2009), Frances Arnold (2018),
Emmanuelle Charpentier (2020), and Jennifer Doudna (2020).[7][8]
Nobel
Nobel Prize Winners in Physics
From https://en.wikipedia.org/wiki/List_of_Nobel_laureates_in_Physics
List of all winners
The Nobel Prize in Physics (Swedish: Nobelpriset i fysik) is awarded annually by the Royal Swedish Academy of Sciences to
scientists in the various fields of physics. It is one of the five Nobel Prizes established by the 1895 will of Alfred Nobel (who died
in 1896), awarded for outstanding contributions in physics.[1] As dictated by Nobel's will, the award is administered by the Nobel
Foundation and awarded by the Royal Swedish Academy of Sciences.[2] The award is presented in Stockholm at an annual
ceremony on 10 December, the anniversary of Nobel's death.[3] Each recipient receives a medal, a diploma and a monetary award
prize that has varied throughout the years.[4]
The first Nobel Prize in Physics was awarded in 1901 to Wilhelm Conrad Röntgen, of Germany, who received 150,782 SEK,
which is equal to 7,731,004 SEK in December 2007. John Bardeen is the only laureate to win the prize twice—in 1956 and 1972.
Marie Skłodowska-Curie also won two Nobel Prizes, for physics in 1903 and chemistry in 1911. William Lawrence Bragg was,
until October 2014, the youngest ever Nobel laureate; he won the prize in 1915 at the age of 25. He remains the youngest
recipient of the Physics Prize.[5] Four women have won the prize: Curie, Maria Goeppert-Mayer (1963), Donna Strickland (2018),
and Andrea Ghez (2020).[6] As of 2021, the prize has been awarded to 218 individuals.[7]
Nobel Prize Winners in Physiology or Medicine
From https://en.wikipedia.org/wiki/Nobel_Prize_in_Physiology_or_Medicine
List of all winners
The Nobel Prize in Physiology or Medicine is awarded yearly by the Nobel Assembly at the Karolinska Institute for outstanding discoveries in
physiology or medicine. The Nobel Prize is not a single prize, but five separate prizes that, according to Alfred Nobel's 1895 will, are awarded "to
those who, during the preceding year, have conferred the greatest benefit to humankind". Nobel Prizes are awarded in the fields of Physics,
Chemistry, Physiology or Medicine, Literature, and Peace.
The Nobel Prize is presented annually on the anniversary of Alfred Nobel's death, 10 December. As of 2021, 112 Nobel Prizes in Physiology or
Medicine have been awarded to 224 laureates, 212 men and 12 women. The first one was awarded in 1901 to the German physiologist Emil von
Behring, for his work on serum therapy and the development of a vaccine against diphtheria. The first woman to receive the Nobel Prize in
Physiology or Medicine, Gerty Cori, received it in 1947 for her role in elucidating the metabolism of glucose, important in many aspects of
medicine, including treatment of diabetes. The most recent Nobel prize was announced by the Karolinska Institute on 4 October 2021, and has
been awarded to American David Julius and Lebanese-American Ardem Patapoutian, for the discovery of receptors for temperature and touch.[2]
The prize consists of a medal along with a diploma and a certificate for the monetary award. The front side of the medal displays the same profile
of Alfred Nobel depicted on the medals for Physics, Chemistry, and Literature; the reverse side is unique to this medal.
Nobel Prize Winners in Economics
From https://en.wikipedia.org/wiki/Nobel_Memorial_Prize_in_Economic_Sciences

The Nobel Memorial Prize in Economic Sciences, officially the Sveriges Riksbank Prize in Economic Sciences in Memory of
Alfred Nobel[2][3][4] (Swedish: Sveriges riksbanks pris i ekonomisk vetenskap till Alfred Nobels minne), is an economics award
administered by the Nobel Foundation.
Although not one of the five Nobel Prizes which were established by Alfred Nobel's will in 1895,[5] it is commonly referred to as
the Nobel Prize in Economics.[6] The winners of the Nobel Memorial Prize in Economic Sciences are chosen in a similar way, are
announced along with the Nobel Prize recipients, and the prize is presented at the Nobel Prize Award Ceremony.[7]
The award was established in 1968 by an endowment "in perpetuity" from Sweden's central bank, Sveriges Riksbank, to
commemorate the bank's 300th anniversary.[8][9][10][11] It is administered and referred to along with the Nobel Prizes by the Nobel
Foundation.[12] Laureates in the Memorial Prize in Economics are selected by the Royal Swedish Academy of Sciences.[13][14] It was
first awarded in 1969 to Dutch economist Jan Tinbergen and Norwegian economist Ragnar Frisch "for having developed and
applied dynamic models for the analysis of economic processes".[11][15][16]
List of all winners
Announcement of the Laureate of the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2008, Mats Persson,
Bertil Holmlund, Gunnar Öquist, Peter Englund
Mathematics in the Early 20th Century
Weil
Von Neumann
Poincaré Gödel
Grothendieck Weyl
Bourbaki
Hilbert Courant
Hardy
Wiener
Brouwer
Erdos
Eastern Europe
In My Life:
20th and 21st Century
In My Life: Science, Technology and Math Examples
Biology : (DNA - Birth Control - Human Genome - Gene Editing)
Medicine : (Polio Vaccine - Heart Transplants - AIDS Drugs - Immunotherapy - Covid)
Aerospace : (Jet Planes - Satellites - Man in Space - Moon Voyage - Space Exploration)
Physics : (Big Bang - Standard Model - Black Holes)
Math : (Four Color Theorem - Fermat’s Last Theorem - Langlands Program)
Computers : (Mainframes - Minicomputers - PCs - Smart Phones - Internet of Things - Cloud)
Computing Applications : (Numerical Analysis - Simulation - Bioinformatics - Artificial Intelligence)
Large Scale Computing : (Big Data - Deep Learning - Supercomputing - Computational Science)
Electronics : (Television - Transistors - Microprocessors)
Networking : (Internet - Web - Social Media - Internet of Things)
Unsolved Problems : (P vs NP - Quantum Gravity)
Future : (Robotics - Fusion Power - Alien Life)
Biology
Genetics
DNA
Non-sexual Genetic Variation
Birth Control
Human Genome
Gene Editing
Gaia Hypothesis
Biology: 20th Century Genetics
Charles Darwin published the most famous science book, “The Origin of Species”, in 1859, describing the theory of evolution through natural selection. He was influenced by a round-the-world voyage he had taken as a young man, his study of nature, the work of Malthus on overpopulation, and the geological research showing that the earth was millions of years old and that species had died out. His work was, and still is, scandalous to religious people who believe in the biblical story of creation. In 1854, the monk Gregor Mendel discovered the concept of genes by breeding peas in his garden. Francis Crick and James Watson, using the X-ray results of Rosalind Franklin, discovered the structure of DNA, which explained how genes work. Leroy Hood built machines for finding genes in DNA. Craig Venter sequenced the human genome and created an artificial microbe.
Watson DNA Structure
Craig Venter
Leroy Hood Human Genome part
History of DNA
Norman Borlaug
and Crick
Franklin
Biology: DNA
From https://www.genome.gov/genetics-glossary/Deoxyribonucleic-Acid
Deoxyribonucleic acid (abbreviated DNA) is the molecule that carries genetic information for the development and functioning of an organism. DNA is made of
two linked strands that wind around each other to resemble a twisted ladder — a shape known as a double helix. Each strand has a backbone made of alternating
sugar (deoxyribose) and phosphate groups. Attached to each sugar is one of four bases: adenine (A), cytosine (C), guanine (G) or thymine (T). The two strands
are connected by chemical bonds between the bases: adenine bonds with thymine, and cytosine bonds with guanine. The sequence of the bases along DNA’s
backbone encodes biological information, such as the instructions for making a protein or RNA molecule. 
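Because each base pairs with exactly one partner (A with T, C with G), one strand completely determines the other. Here is a small Python sketch of that pairing rule; the example sequence is made up.

# Base-pairing rule from the excerpt above: A pairs with T, C pairs with G.
PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(strand):
    """Return the complementary DNA strand under the A-T / C-G pairing rule."""
    return "".join(PAIRING[base] for base in strand)

print(complementary_strand("ATGCCGTA"))  # prints TACGGCAT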
Biology: Non-sexual Genetic Variation
Epigenetics
Endosymbiosis
Horizontal-Transfer
Biology: Birth Control
From https://www.nhsinform.scot/healthy-living/contraception/getting-started/the-different-types-of-contraception
1. Cap
2. Combined pill
3. Condoms
4. Contraceptive implant
5. Contraceptive injection
6. Contraceptive patch
7. Diaphragm
8. Female condoms
9. Female sterilisation
10. IUD (intrauterine device, coil)
11. IUS (intrauterine system)
12. Progestogen-only pill (POP, mini pill)
13. Vaginal ring
14. Vasectomy
15. Natural family planning (fertility awareness)
The different types of contraception
Biology: Human Genome
From https://en.wikipedia.org/wiki/Human_genome
The human genome is a complete set of nucleic acid sequences for humans, encoded as DNA within the 23
chromosome pairs in cell nuclei and in a small DNA molecule found within individual mitochondria. These are usually
treated separately as the nuclear genome and the mitochondrial genome.[2] Human genomes include both protein-
coding DNA genes and noncoding DNA. The term noncoding DNA is somewhat misleading because it includes not
only junk DNA but also DNA coding for ribosomal RNA, for transfer RNA and for ribozymes. Haploid human
genomes, which are contained in germ cells (the egg and sperm gamete cells created in the meiosis phase of sexual
reproduction before fertilization) consist of 3,054,815,472 DNA base pairs (if X chromosome is used),[3] while female
diploid genomes (found in somatic cells) have twice the DNA content.
Biology: CRISPR Gene Editing
From https://en.wikipedia.org/wiki/CRISPR_gene_editing
CRISPR gene editing (pronounced /ˈkrispər/ "crisper") is a genetic engineering technique in molecular biology by which the genomes of living organisms
may be modified. It is based on a simplified version of the bacterial CRISPR-Cas9 antiviral defense system. By delivering the Cas9 nuclease complexed
with a synthetic guide RNA (gRNA) into a cell, the cell's genome can be cut at a desired location, allowing existing genes to be removed and/or new ones
added in vivo.[1]
The technique is considered highly significant in biotechnology and medicine as it enables editing genomes in vivo very precisely, cheaply, and easily. It
can be used in the creation of new medicines, agricultural products, and genetically modified organisms, or as a means of controlling pathogens and pests.
It also has possibilities in the treatment of inherited genetic diseases as well as diseases arising from somatic mutations such as cancer. However, its use in
human germline genetic modification is highly controversial. The development of the technique earned Jennifer Doudna and Emmanuelle Charpentier the
Nobel Prize in Chemistry in 2020.[2][3] The third researcher group that shared the Kavli Prize for the same discovery,[4] led by Virginijus Šikšnys, was not
awarded the Nobel prize.[5][6][7]
Working like genetic scissors, the Cas9 nuclease opens both strands of the targeted sequence of DNA to introduce the modification by one of two methods.
Knock-in mutations, facilitated via homology directed repair (HDR), is the traditional pathway of targeted genomic editing approaches.[1] This allows for
the introduction of targeted DNA damage and repair. HDR employs the use of similar DNA sequences to drive the repair of the break via the incorporation
of exogenous DNA to function as the repair template.[1] This method relies on the periodic and isolated occurrence of DNA damage at the target site in
order for the repair to commence. Knock-out mutations caused by CRISPR-Cas9 result in the repair of the double-stranded break by means of non-
homologous end joining (NHEJ). NHEJ can often result in random deletions or insertions at the repair site, which may disrupt or alter gene functionality.
Therefore, genomic engineering by CRISPR-Cas9 gives researchers the ability to generate targeted random gene disruption. Because of this, the precision
of genome editing is a great concern. Genomic editing leads to irreversible changes to the genome.
Doudna Charpentier
From https://en.wikipedia.org/wiki/Gaia_hypothesis
Biology: Gaia Hypothesis
The Gaia hypothesis (/ˈɡaɪ.ə/), also known as the Gaia theory, Gaia paradigm, or the Gaia principle, proposes that living organisms interact with their inorganic
surroundings on Earth to form a synergistic and self-regulating, complex system that helps to maintain and perpetuate the conditions for life on the planet.
The hypothesis was formulated by the chemist James Lovelock[1] and co-developed by the microbiologist Lynn Margulis in the 1970s.[2] Lovelock named the idea after
Gaia, the primordial goddess who personified the Earth in Greek mythology. In 2006, the Geological Society of London awarded Lovelock the Wollaston Medal in part
for his work on the Gaia hypothesis.[3]
Topics related to the hypothesis include how the biosphere and the evolution of organisms affect the stability of global temperature, salinity of seawater, atmospheric
oxygen levels, the maintenance of a hydrosphere of liquid water and other environmental variables that affect the habitability of Earth.
The Gaia hypothesis was initially criticized for being teleological and against the principles of natural selection, but later refinements aligned the Gaia hypothesis with
ideas from fields such as Earth system science, biogeochemistry and systems ecology.[4][5][6] Even so, the Gaia hypothesis continues to attract criticism, and today many
scientists consider it to be only weakly supported by, or at odds with, the available evidence.[7][8][9]
Lovelock Margulis
Medicine
Polio Vaccine
Heart Transplants
AIDS drugs
Immunotherapy
Controlling Covid
Medicine: Polio Vaccine
From https://en.wikipedia.org/wiki/Polio_vaccine
Polio vaccines are vaccines used to prevent poliomyelitis (polio).[2] Two types are used: an inactivated poliovirus given by injection (IPV)
and a weakened poliovirus given by mouth (OPV).[2] The World Health Organization (WHO) recommends all children be fully vaccinated
against polio.[2] The two vaccines have eliminated polio from most of the world,[3][4] and reduced the number of cases reported each year
from an estimated 350,000 in 1988 to 33 in 2018.[5][6]
The inactivated polio vaccines are very safe.[2] Mild redness or pain may occur at the site of injection.[2] Oral polio vaccines cause about
three cases of vaccine-associated paralytic poliomyelitis per million doses given.[2] This compares with 5,000 cases per million who are
paralysed following a polio infection.[7] Both types of vaccine are generally safe to give during pregnancy and in those who have HIV/
AIDS but are otherwise well.[2] However, the emergence of circulating vaccine-derived poliovirus (cVDPV), a form of the vaccine virus
that has reverted to causing poliomyelitis, has led to the development of novel oral polio vaccine type 2 (nOPV2) which aims to make the
vaccine safer and thus stop further outbreaks of cVDPV2.[8]
The first successful demonstration of a polio vaccine was by Hilary Koprowski in 1950, with a live attenuated virus which people drank.[9]
The vaccine was not approved for use in the United States, but was used successfully elsewhere.[9] The success of an inactivated (killed)
polio vaccine, developed by Jonas Salk, was announced in 1955.[2][10] Another attenuated live oral polio vaccine was developed by Albert
Sabin and came into commercial use in 1961.[2][11]
Koprowski Salk Sabin
Medicine: Heart Transplants
From https://www.mayoclinic.org/tests-procedures/heart-transplant/about/pac-20384750
Medicine: AIDS drugs
From https://hivinfo.nih.gov/understanding-hiv/fact-sheets/fda-approved-hiv-medicines
Treatment with HIV medicines is called antiretroviral therapy (ART). ART is recommended for everyone with HIV, and people with
HIV should start ART as soon as possible. People on ART take a combination of HIV medicines (called an HIV treatment regimen)
every day. A person's initial HIV treatment regimen generally includes three HIV medicines from at least two different HIV drug
classes.



The following table lists HIV medicines recommended for the treatment of HIV infection in the United States, based on the U.S.
Department of Health and Human Services (HHS) HIV/AIDS medical practice guidelines. All of these drugs are approved by the U.S.
Food and Drug Administration (FDA). The HIV medicines are listed according to drug class and identified by generic and brand
names. Click on a drug name to view information on the drug from the Clinical Info Drug Database, or download the Clinical Info
mobile application to view the information on your Apple or Android devices.
To see a timeline of all FDA approval dates for HIV medicines, view the HIVinfo FDA Approval of HIV Medicines infographic.
Go to Website for large table
Medicine: Immunotherapy
From https://www.cancer.gov/about-cancer/treatment/types/immunotherapy
• How does immunotherapy work against cancer?

• What are the types of immunotherapy?

• Which cancers are treated with immunotherapy?

• What are the side effects of immunotherapy?

• How is immunotherapy given?

• Where do you go for immunotherapy?

• How often do you receive immunotherapy?

• How can you tell if immunotherapy is working?

• What is the current research in immunotherapy?

• How do you find clinical trials that are testing immunotherapy?

Immunotherapy is a type of cancer treatment that helps your immune system fight cancer. The immune system helps your
body fight infections and other diseases. It is made up of white blood cells and organs and tissues of the lymph system.
Immunotherapy is a type of biological therapy. Biological therapy is a type of treatment that uses substances made from
living organisms to treat cancer.
Medicine: Controlling Covid
From https://www.cdc.gov/coronavirus/2019-ncov/your-health/about-covid-19.html
• Basics
• Spread
• Prevention
• If You Have COVID-19
• If You Come into Close Contact with Someone with COVID-19
• Children
• Symptoms and Emergency Warning Signs
• Testing
• Contact Tracing
• Pets and Animals
Medicine: Stopping Covid-19 CoronaVirus (2020)?
The Covid-19 virus started in Wuhan, China, and spread
through the whole world because humans had no immunity to it.
Covid-19 Virus
World-Wide Spread
The World is wearing masks
Covid-19 Doctors and Nurses
Medicine: Covid Vaccines 2021
mRNA vaccines
Aerospace
Jet Planes
Satellites
Space Exploration
Man in Space
Aerospace: Jet Planes
From https://en.wikipedia.org/wiki/Jet_aircraft
A jet aircraft (or simply jet) is an aircraft (nearly always a fixed-wing aircraft) propelled by jet engines.
Whereas the engines in propeller-powered aircraft generally achieve their maximum efficiency at much lower speeds and altitudes, jet engines
achieve maximum efficiency at speeds close to or even well above the speed of sound. Jet aircraft generally cruise most efficiently at about
Mach 0.8 (981 km/h (610 mph)) and at altitudes around 10,000–15,000 m (33,000–49,000 ft) or more.
The idea of the jet engine was not new, but the technical problems involved could not begin to be solved until the 1930s. Frank Whittle, an
English inventor and RAF officer, began development of a viable jet engine in 1928,[1] and Hans von Ohain in Germany began work
independently in the early 1930s. In August 1939 the turbojet powered Heinkel He 178, the world's first jet aircraft, made its first flight. A
wide range of different types of jet aircraft exist, both for civilian and military purposes.
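Here is a quick Python check of the Mach 0.8 cruise figure quoted above; the speed of sound used (about 340 m/s) is an assumed sea-level value, and it is lower at cruise altitude.

# Rough conversion of a Mach number to km/h using an assumed sea-level speed of sound.
SPEED_OF_SOUND_M_PER_S = 340.0  # assumption: sea-level value; roughly 295 m/s at cruise altitude

def mach_to_kmh(mach_number):
    """Convert a Mach number to kilometers per hour."""
    return mach_number * SPEED_OF_SOUND_M_PER_S * 3.6

print(mach_to_kmh(0.8))  # about 979 km/h, close to the quoted 981 km/h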
Aerospace: Satellites
From https://en.wikipedia.org/wiki/Satellite
Two CubeSats orbiting around Earth after being deployed from the International Space Station
A satellite or artificial satellite is an object intentionally placed into orbit in outer space. Except for passive satellites, most satellites have an electricity
generation system for equipment on board, such as solar panels or radioisotope thermoelectric generators (RTGs). Most satellites also have a method of
communication to ground stations, called transponders. Many satellites use a standardized bus to save cost and work, the most popular of which is small
CubeSats. Similar satellites can work together as a group, forming constellations. Because of the high launch cost to space, satellites are designed to be
as lightweight and robust as possible.
Satellites are placed from the surface to orbit by launch vehicles, high enough to avoid orbital decay by the atmosphere. Satellites can then change or
maintain the orbit by propulsion, usually by chemical or ion thrusters. In 2018, about 90% of satellites orbiting Earth were in low Earth orbit or geostationary orbit; geostationary means the satellite appears to stay still in the sky. Some imaging satellites choose a Sun-synchronous orbit because they can scan the entire globe with similar lighting. As the number of satellites and space debris around Earth increases, the collision threat is becoming more severe. A small number of satellites orbit other bodies (such as the Moon, Mars, and the Sun) or many bodies at once (two for a halo orbit, three for a Lissajous
A small number of satellites orbit other bodies (such as the Moon, Mars, and the Sun) or many bodies at once (two for a halo orbit, three for a Lissajous
orbit).
Earth observation satellites gather information for reconnaissance, mapping, monitoring the weather, ocean, forest, etc. Space telescopes take advantage
of outer space's near perfect vacuum to observe objects with the entire electromagnetic spectrum. Because satellites can see a large portion of the Earth at
once, communications satellites can relay information to remote places. The signal delay from satellites and their orbit's predictability are used in
satellite navigation systems, such as GPS. Space probes are satellites designed for robotic space exploration outside of Earth, and space stations are in
essence crewed satellites.
Aerospace: Space Exploration
From https://en.wikipedia.org/wiki/Space_exploration
Space exploration is the use of astronomy and space technology to explore outer space.[1] While the exploration of space is carried out
mainly by astronomers with telescopes, its physical exploration though is conducted both by uncrewed robotic space probes and human
spaceflight. Space exploration, like its classical form astronomy, is one of the main sources for space science.
While the observation of objects in space, known as astronomy, predates reliable recorded history, it was the development of large and
relatively efficient rockets during the mid-twentieth century that allowed physical space exploration to become a reality. The world's first
large-scale experimental rocket program was Opel-RAK under the leadership of Fritz von Opel and Max Valier during the late 1920s
leading to the first crewed rocket cars and rocket planes,[2] [3] which paved the way for the Nazi era V2 program and US and Soviet
activities from 1950 onwards. The Opel-RAK program and the spectacular public demonstrations of ground and air vehicles drew large
crowds, as well as caused global public excitement as so-called "Rocket Rumble"[4] and had a large long-lasting impact on later
spaceflight pioneers like Wernher von Braun. Common rationales for exploring space include advancing scientific research, national
prestige, uniting different nations, ensuring the future survival of humanity, and developing military and strategic advantages against other
countries.[5]
Aerospace: Human in Space (1961-1969)
1961 First Man in Space
Russian Yuri Gagarin
1963 First Woman in Space
Russian Valentina Tereshkova
First Man on the moon
American Neil Armstrong
Neil Armstrong on
the moon 1969
Moon rocket takeoff 1969
Moon capsule landing 1969
Moon rocket path 1969
Aerospace: Human in Space
From https://en.wikipedia.org/wiki/Human_spaceflight
Human spaceflight (also referred to as manned spaceflight or crewed spaceflight) is spaceflight with a crew or passengers aboard a
spacecraft, often with the spacecraft being operated directly by the onboard human crew. Spacecraft can also be remotely operated from
ground stations on Earth, or autonomously, without any direct human involvement. People trained for spaceflight are called astronauts
(American or other), cosmonauts (Russian), or taikonauts (Chinese); and non-professionals are referred to as spaceflight participants or
spacefarers.[1]
The first human in space was Soviet cosmonaut Yuri Gagarin, who launched on 12 April 1961 as part of the Soviet Union's Vostok
program. This was towards the beginning of the Space Race. On 5 May 1961, Alan Shepard became the first American in space, as part of
Project Mercury. Humans traveled to the Moon nine times between 1968 and 1972 as part of the United States' Apollo program, and have
had a continuous presence in space for 21 years and 262 days on the International Space Station (ISS).[2] As of 2021, humans have not
traveled beyond low Earth orbit since the Apollo 17 lunar mission in December 1972.
Space Shuttle
Space Walk
Physics
Nuclear Weapons
Big Bang
Standard Model
Black Holes
Physics: Nuclear Weapons
Einstein’s equation E = MC^2 showed that a small amount of mass M could yield a large amount of energy, since the speed of light C is very large (300,000 kilometers per second). No one knew how to unlock this energy until 1938, when German scientists were able to use neutrons to split Uranium-235 nuclei into smaller nuclei, releasing energy and more neutrons. Scientists quickly realized that there could be a chain reaction, with many nuclei splitting rapidly and releasing energy (a bomb). When World War 2 began, scientists on both sides started research into making a nuclear weapon. In the US, Robert Oppenheimer led a large group of scientists in the secret “Manhattan Project” in New Mexico. In July 1945, they tested the first “atom bomb”. Germany had already surrendered, so the bomb was used against Japan, ending the war. Russia exploded an atom bomb in 1949. Scientists in the US, led by Edward Teller, realized that a much bigger bomb could be created by nuclear fusion of hydrogen into helium (fusion powers the sun). In 1952, the US tested an H-bomb in the Pacific, followed shortly afterwards by a Russian H-bomb. Many other countries now have atom bombs, which could kill millions of people if ever used in war.
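To see how much energy is locked in ordinary matter, here is a small Python sketch of E = MC^2 for one gram of mass; the one-gram amount and the TNT comparison (about 4.2 x 10^9 joules per ton of TNT) are illustrative assumptions.

# E = m * c^2: a tiny mass yields an enormous energy because c is so large.
SPEED_OF_LIGHT = 3.0e8        # meters per second
JOULES_PER_TON_TNT = 4.184e9  # assumed standard TNT equivalence

def mass_to_energy_joules(mass_kg):
    """Return the energy, in joules, equivalent to the given mass."""
    return mass_kg * SPEED_OF_LIGHT ** 2

energy = mass_to_energy_joules(0.001)  # one gram of mass, expressed in kilograms
print(energy)                          # 9.0e13 joules
print(energy / JOULES_PER_TON_TNT)     # roughly 21,500 tons of TNT from a single gram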
Nuclear Fission Nuclear Fusion
Oppenheimer
Now I am become Death, the destroyer of worlds
Teller
H-Bomb
Physics: Big Bang
From https://en.wikipedia.org/wiki/Big_Bang
The Big Bang theory describes how the universe expanded from an initial state of high density and temperature.[1] It is the prevailing cosmological
model explaining the evolution of the observable universe from the earliest known periods through its subsequent large-scale form.[2][3][4] The model
offers a comprehensive explanation for a broad range of observed phenomena, including the abundance of light elements, the cosmic microwave
background (CMB) radiation, and large-scale structure.
Crucially, the theory is compatible with Hubble–Lemaître law—the observation that the farther away a galaxy is, the faster it is moving away from
Earth. Extrapolating this cosmic expansion backwards in time using the known laws of physics, the theory describes an increasingly concentrated
cosmos preceded by a singularity in which space and time lose meaning (typically named "the Big Bang singularity").[5] Detailed measurements of the
expansion rate of the universe place the Big Bang singularity at around 13.8 billion years ago, which is thus considered the age of the universe.[6]
After its initial expansion, an event that is by itself often called "the Big Bang", the universe cooled sufficiently to allow the formation of subatomic
particles, and later atoms. Giant clouds of these primordial elements—mostly hydrogen, with some helium and lithium—later coalesced
through gravity, forming early stars and galaxies, the descendants of which are visible today. Besides these primordial building materials,
astronomers observe the gravitational effects of an unknown dark matter surrounding galaxies. Most of the gravitational potential in the universe
seems to be in this form, and the Big Bang theory and various observations indicate that this excess gravitational potential is not created by baryonic
matter, such as normal atoms. Measurements of the redshifts of supernovae indicate that the expansion of the universe is accelerating, an
observation attributed to dark energy's existence.[7]
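A back-of-the-envelope version of the age estimate quoted above: the inverse of the expansion rate (the Hubble constant) sets the timescale. The Python sketch below assumes a round value of 70 km/s per megaparsec; the published 13.8-billion-year figure comes from the full cosmological model, not this simple estimate.

# Rough age-of-universe estimate: t ~ 1 / H0.
KM_PER_MEGAPARSEC = 3.086e19   # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

hubble_constant_per_s = 70.0 / KM_PER_MEGAPARSEC     # assumed H0 = 70 km/s/Mpc, converted to 1/s
hubble_time_years = 1.0 / hubble_constant_per_s / SECONDS_PER_YEAR

print(hubble_time_years / 1e9)  # roughly 14 billion years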
Physics: Standard Model of Particle Physics (1970’s)
In the 1970’s, physicists constructed a “Standard Model” that explained all of the particle experiments to the highest accuracy ever achieved anywhere in science. It is a strange theory because, initially, the mathematics gives infinities for several measurements like the mass of the electron. However, when the infinities are replaced by the measured values and substituted into the equations, everything works out (renormalization). Protons and neutrons are made up of 3 smaller quark particles that are held together by gluons and can’t escape to be on their own. The column on the left shows the most common particles, but there are two other generations of heavier versions of each particle (like electron, muon, tau) for some unknown reason. The photon is the particle of light, and the Higgs particle supplies mass to the other particles. It was predicted in 1964 and only discovered in 2012.
Powers of ten: 10^-n = 1/(10^n). For example, 10^-2 = 1/100 and 10^-9 = 1/1,000,000,000.
m = meter = 39.4 inches
Physics: Black Holes
From https://en.wikipedia.org/wiki/Black_hole
A black hole is a region of spacetime where gravity is so strong that nothing – no particles or even electromagnetic radiation such as light – can
escape from it.[2] The theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole.[3]
[4] The boundary of no escape is called the event horizon. Although it has a great effect on the fate and circumstances of an object crossing it, it
has no locally detectable features according to general relativity.[5] In many ways, a black hole acts like an ideal black body, as it reflects no light.[6]
[7] Moreover, quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black
body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it
essentially impossible to observe directly.
Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon
Laplace.[8] In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterize a black hole. David Finkelstein,
in 1958, first published the interpretation of "black hole" as a region of space from which nothing can escape. Black holes were long considered a
mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The discovery
of neutron stars by Jocelyn Bell Burnell in 1967 sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality.
The first black hole known was Cygnus X-1, identified by several researchers independently in 1971.[9][10]
Black holes of stellar mass form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing
mass from its surroundings. Supermassive black holes of millions of solar masses (M☉) may form by absorbing other stars and merging with other
black holes. There is consensus that supermassive black holes exist in the centres of most galaxies.
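The claim that stellar black holes radiate at only billionths of a kelvin can be checked with the standard Hawking temperature formula T = hbar * c^3 / (8 * pi * G * M * k_B). Here is a Python sketch, assuming a 10-solar-mass black hole as the “stellar” example.

# Hawking temperature of a black hole: inversely proportional to its mass.
import math

HBAR = 1.055e-34        # reduced Planck constant, J*s
C = 3.0e8               # speed of light, m/s
G = 6.674e-11           # gravitational constant, m^3 / (kg * s^2)
K_B = 1.381e-23         # Boltzmann constant, J/K
SOLAR_MASS = 1.989e30   # kg

def hawking_temperature_kelvin(mass_kg):
    """Return the Hawking temperature, in kelvin, of a black hole of the given mass."""
    return HBAR * C ** 3 / (8 * math.pi * G * mass_kg * K_B)

print(hawking_temperature_kelvin(10 * SOLAR_MASS))  # about 6e-9 K: billionths of a kelvin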
Math
Four Color Theorem
Fermat’s Last Theorem
Kepler Packing
Poincaré Conjecture
Monstrous Moonshine
Langlands Program
Riemann Hypothesis
Math: Four Color Theorem
From https://en.wikipedia.org/wiki/Four_color_theorem
Haken
Appel
Math: Fermat’s Last Theorem
From https://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem
In number theory, Fermat's Last Theorem (sometimes called Fermat's conjecture, especially in older texts) states that no three positive integers a, b, and c satisfy the equation a^n + b^n = c^n for any integer value of n greater than 2. The cases n = 1 and n = 2 have been known since antiquity to have infinitely many solutions.[1]
The proposition was first stated as a theorem by Pierre de Fermat around 1637 in the margin of a copy of Arithmetica. Fermat added
that he had a proof that was too large to fit in the margin. Although other statements claimed by Fermat without proof were
subsequently proven by others and credited as theorems of Fermat (for example, Fermat's theorem on sums of two squares),
Fermat's Last Theorem resisted proof, leading to doubt that Fermat ever had a correct proof. Consequently the proposition became
known as a conjecture rather than a theorem. After 358 years of effort by mathematicians, the first successful proof was released in
1994 by Andrew Wiles and formally published in 1995. It was described as a "stunning advance" in the citation for Wiles's Abel
Prize award in 2016.[2] It also proved much of the Taniyama-Shimura conjecture, subsequently known as the modularity theorem, and
opened up entire new approaches to numerous other problems and mathematically powerful modularity lifting techniques.
The unsolved problem stimulated the development of algebraic number theory in the 19th and 20th centuries. It is among the most
notable theorems in the history of mathematics and prior to its proof was in the Guinness Book of World Records as the "most difficult
mathematical problem", in part because the theorem has the largest number of unsuccessful proofs.[3]
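A small brute-force search in Python makes the statement concrete: plenty of solutions turn up for n = 2 (the Pythagorean triples), and none for n = 3. The search bound of 50 is arbitrary, and of course this only illustrates the statement; it proves nothing.

# Look for a^n + b^n = c^n with a and b up to a small bound.
def solutions(n, bound=50):
    """Return all (a, b, c) with a <= b <= bound and a^n + b^n = c^n."""
    nth_powers = {c ** n: c for c in range(1, 2 * bound)}
    found = []
    for a in range(1, bound + 1):
        for b in range(a, bound + 1):
            total = a ** n + b ** n
            if total in nth_powers:
                found.append((a, b, nth_powers[total]))
    return found

print(solutions(2)[:3])  # [(3, 4, 5), (5, 12, 13), (6, 8, 10)]
print(solutions(3))      # []  (no solutions, as Fermat claimed and Wiles proved)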
Andrew Wiles
Math: Kepler Packing Conjecture
From https://en.wikipedia.org/wiki/Kepler_conjecture
The Kepler conjecture, named after the 17th-century mathematician and astronomer Johannes Kepler, is a mathematical
theorem about sphere packing in three-dimensional Euclidean space. It states that no arrangement of equally sized spheres
filling space has a greater average density than that of the cubic close packing (face-centered cubic) and hexagonal close
packing arrangements. The density of these arrangements is around 74.05%.
In 1998 Thomas Hales, following an approach suggested by Fejes Tóth (1953), announced that he had a proof of the
Kepler conjecture. Hales' proof is a proof by exhaustion involving the checking of many individual cases using complex
computer calculations. Referees said that they were "99% certain" of the correctness of Hales' proof, and the Kepler
conjecture was accepted as a theorem. In 2014, the Flyspeck project team, headed by Hales, announced the completion of
a formal proof of the Kepler conjecture using a combination of the Isabelle and HOL Light proof assistants. In 2017, the
formal proof was accepted by the journal Forum of Mathematics, Pi.[1]
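The 74.05% density quoted above has an exact closed form, pi / sqrt(18), which a one-line Python check confirms.

# Density of the face-centered cubic (and hexagonal close) packing.
import math

print(math.pi / math.sqrt(18))  # about 0.7405, i.e. 74.05%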
Hales
Toth
Math: Fields Medals
From https://en.wikipedia.org/wiki/Fields_Medal
The Fields Medal is a prize awarded to two, three, or four mathematicians under 40 years of age at the International Congress of the International
Mathematical Union (IMU), a meeting that takes place every four years. The name of the award honours the Canadian mathematician John Charles
Fields.[1]
The Fields Medal is regarded as one of the highest honors a mathematician can receive, and has been described as the Nobel Prize of Mathematics,[2][3][4]
although there are several major differences, including frequency of award, number of awards, age limits, monetary value, and award criteria.[5]
According to the annual Academic Excellence Survey by ARWU, the Fields Medal is consistently regarded as the top award in the field of mathematics
worldwide,[6] and in another reputation survey conducted by IREG in 2013–14, the Fields Medal came closely after the Abel Prize as the second most
prestigious international award in mathematics.[7][8]
The prize includes a monetary award which, since 2006, has been CA$15,000.[9][10] Fields was instrumental in establishing the award, designing the
medal himself, and funding the monetary component, though he died before it was established and his plan was overseen by John Lighton Synge.[1]
The medal was first awarded in 1936 to Finnish mathematician Lars Ahlfors and American mathematician Jesse Douglas, and it has been awarded every
four years since 1950. Its purpose is to give recognition and support to younger mathematical researchers who have made major contributions. In 2014,
the Iranian mathematician Maryam Mirzakhani became the first female Fields Medallist.[11][12][13] In all, 64 people have been awarded the Fields Medal.
List of Fields Medal Winners
Math: Abel Prize
From https://abelprize.no/winners
Math: Abel Prize (cont)
From https://abelprize.no/winners
Math: Monstrous Moonshine
From https://en.wikipedia.org/wiki/Monstrous_moonshine
In mathematics, monstrous moonshine, or moonshine theory, is the unexpected connection between the monster group M and modular functions, in
particular, the j function. The term was coined by John Conway and Simon P. Norton in 1979.
The monstrous moonshine is now known to be underlain by a vertex operator algebra called the moonshine module (or monster vertex algebra)
constructed by Igor Frenkel, James Lepowsky, and Arne Meurman in 1988, which has the monster group as its group of symmetries. This vertex
operator algebra is commonly interpreted as a structure underlying a two-dimensional conformal field theory, allowing physics to form a bridge
between two mathematical areas. The conjectures made by Conway and Norton were proven by Richard Borcherds for the moonshine module in 1992
using the no-ghost theorem from string theory and the theory of vertex operator algebras and generalized Kac–Moody algebras.
Conway Norton Borcherds
Math: Langlands Program
From https://en.wikipedia.org/wiki/Langlands_program
In representation theory and algebraic number theory, the Langlands program is a web of far-reaching and influential conjectures about
connections between number theory and geometry. Proposed by Robert Langlands (1967, 1970), it seeks to relate Galois groups in algebraic
number theory to automorphic forms and representation theory of algebraic groups over local fields and adeles. Widely seen as the single
biggest project in modern mathematical research, the Langlands program has been described by Edward Frenkel as "a kind of grand unified
theory of mathematics."[1]
The Langlands program consists of some very complicated theoretical abstractions, which can be difficult even for specialist
mathematicians to grasp. To oversimplify, the fundamental lemma of the project posits a direct connection between the generalized
fundamental representation of a finite field with its group extension to the automorphic forms under which it is invariant. This is
accomplished through abstraction to higher dimensional integration, by an equivalence to a certain analytical group as an absolute extension
of its algebra. Consequently, this allows an analytical functional construction of powerful invariance transformations for a number field to
its own algebraic structure.
The meaning of such a construction is nuanced, but its specific solutions and generalizations are very powerful. The consequence for proof
of existence to such theoretical objects implies an analytical method in constructing the categoric mapping of fundamental structures for
virtually any number field. As an analogue to the possible exact distribution of primes, the Langlands program allows a potential general
tool for the resolution of invariance at the level of generalized algebraic structures. This in turn permits a somewhat unified analysis of
arithmetic objects through their automorphic functions. Simply put, the Langlands philosophy allows a general analysis of structuring the
abstractions of numbers. Naturally, this description is at once a reduction and over-generalization of the program's proper theorems, but
these mathematical analogues provide the basis of its conceptualization.
Langlands
Math: Clay Millennium Problems
From https://www.claymath.org/millennium-problems
Yang–Mills and Mass Gap
Experiment and computer simulations suggest the existence of a "mass gap" in the solution to the quantum versions of the Yang-Mills equations. But no proof of this property is known.
Riemann Hypothesis
The prime number theorem determines the average distribution of the primes. The Riemann hypothesis tells us about the deviation from the average. Formulated in Riemann's 1859 paper, it
asserts that all the 'non-obvious' zeros of the zeta function are complex numbers with real part 1/2.
P vs NP Problem
If it is easy to check that a solution to a problem is correct, is it also easy to solve the problem? This is the essence of the P vs NP question. Typical of the NP problems is that of the Hamiltonian Path Problem: given N cities to visit, how can one do this without visiting a city twice? If you give me a solution, I can easily check that it is correct. But I cannot so easily find a solution. (A short code sketch of this check-versus-find gap follows this list of problems.)
Navier–Stokes Equation
This is the equation which governs the flow of fluids such as water and air. However, there is no proof for the most basic questions one can ask: do solutions exist, and are they unique? Why ask
for a proof? Because a proof gives not only certitude, but also understanding.
Hodge Conjecture
The answer to this conjecture determines how much of the topology of the solution set of a system of algebraic equations can be defined in terms of further algebraic equations. The Hodge
conjecture is known in certain special cases, e.g., when the solution set has dimension less than four. But in dimension four it is unknown.
Poincaré Conjecture
In 1904 the French mathematician Henri Poincaré asked if the three dimensional sphere is characterized as the unique simply connected three manifold. This question, the Poincaré conjecture,
was a special case of Thurston's geometrization conjecture. Perelman's proof tells us that every three manifold is built from a set of standard pieces, each with one of eight well-understood
geometries.
Birch and Swinnerton-Dyer Conjecture
Supported by much experimental evidence, this conjecture relates the number of points on an elliptic curve mod p to the rank of the group of rational points. Elliptic curves, defined by cubic
equations in two variables, are fundamental mathematical objects that arise in many areas: Wiles' proof of the Fermat Conjecture, factorization of numbers into primes, and cryptography, to
name three.
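Here is the promised sketch of the check-versus-find gap for the Hamiltonian Path Problem, in Python. The little road map is a made-up example: verifying a proposed route is quick, while finding one in a large map may require trying enormously many orderings.

# Checking a proposed Hamiltonian path is fast; no fast method is known for finding one.
ROADS = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B"},
    "D": {"B"},
}

def is_hamiltonian_path(path, roads):
    """Verify that the path visits every city exactly once using only existing roads."""
    if len(path) != len(roads) or set(path) != set(roads):
        return False
    return all(b in roads[a] for a, b in zip(path, path[1:]))

print(is_hamiltonian_path(["C", "A", "B", "D"], ROADS))  # True: easy to check
print(is_hamiltonian_path(["A", "D", "B", "C"], ROADS))  # False: no road from A to D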
Poincaré Conjecture
From https://en.wikipedia.org/wiki/Poincar%C3%A9_conjecture
In the mathematical field of geometric topology, the Poincaré conjecture (French: [pwɛ̃kaʁe]), or Perelman's theorem, is a theorem about
the characterization of the 3-sphere, which is the hypersphere that bounds the unit ball in four-dimensional space.
Originally conjectured by Henri Poincaré in 1904, the theorem concerns spaces that locally look like ordinary three-dimensional space but which are finite in extent. Poincaré
hypothesized that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere.
Attempts to resolve the conjecture drove much progress in the field of geometric topology during the 20th century.
The eventual proof built upon Richard S. Hamilton's program of using the Ricci flow to attempt to solve the problem. By developing a number of new techniques and results in the
theory of Ricci flow, Grigori Perelman was able to modify and complete Hamilton's program. In unpublished arXiv preprints released in 2002 and 2003, Perelman presented his
work proving the Poincaré conjecture, along with the more powerful geometrization conjecture of William Thurston. Over the next several years, several mathematicians studied
his papers and produced detailed formulations of his work.
Hamilton and Perelman's work on the conjecture is widely recognized as a milestone of mathematical research. Hamilton was recognized with the Shaw Prize and the Leroy P.
Steele Prize for Seminal Contribution to Research. The journal Science marked Perelman's proof of the Poincaré conjecture as the scientific Breakthrough of the Year in 2006.[5] The
Clay Mathematics Institute, having included the Poincaré conjecture in their well-known Millennium Prize Problem list, offered Perelman their prize of US$1 million for the
conjecture's resolution.[6] He declined the award, saying that Hamilton's contribution had been equal to his own.[7][8]
Hamilton Perelman
Simplified Explanation of the Monster group
Math: Simplified Convergences Diagram
There are many surprising connections in mathematics and even more surprising connections
with physics. The Standard Model for particles uses Lie Groups and Linear Operators. Einstein’s
Theory of Gravity is based on Riemannian Geometry.
[Diagram: areas of mathematics (Algebra, Number Theory, Calculus, Complex Numbers, Periodic Functions, Analytic Geometry, Geometry, Analysis, Group Theory, Lie Groups, Linear Algebra and Linear Operators, Riemannian Geometry and Manifolds) converging on the Langlands Program, Fermat's Last Theorem, the Riemann Hypothesis, and Monstrous Moonshine, with contributors Descartes, Euler, Riemann, Wiles, Newton, Borcherds, Hilbert, Galois, Gauss, and Fourier.]
Computers
Scientists
Hardware
Software
Smart Phones
Internet of Things
Computers: Famous Computer Scientists
The father of Computer Science was Alan Turing. He developed the simple Turing Machine and showed how it could compute anything computable, but also that some problems could never be solved by it. He also proposed the Turing test to see if computers could simulate human responses. Turing is also known for breaking German codes in World War 2. After World War 2, large mainframe computers were developed by IBM, and smaller minicomputers by HP and Digital in the 1960s and 1970s. In the late 1970s, Steve Jobs and Steve Wozniak developed the Apple personal computer, and Steve Jobs went on to manage the development of the iPhone. Bill Gates and Paul Allen started Microsoft to write software for the IBM PC and other personal computers. The most important theoretical advance, by Stephen Cook and Richard Karp, was the separation of problems into those solvable in polynomial time (P) versus those whose solutions are verifiable in polynomial time (NP). Many problems are in NP without known P solutions. No one knows whether P = NP. Tim Berners-Lee is the inventor of the World Wide Web.
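Here is a minimal Python sketch of a Turing machine: a tape, a read/write head, a current state, and a table of rules. The particular rule table (flip every bit of a binary string, then halt at the blank) is a made-up example, not one of Turing's.

# A tiny Turing machine driven by a (state, symbol) -> (write, move, next state) rule table.
RULES = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),   # "_" is the blank symbol past the end of the input
}

def run(tape, state="flip"):
    """Run the machine on the tape (a list of symbols) until it reaches the halt state."""
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = RULES[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        head += move
    return "".join(tape)

print(run(list("10110")))  # prints 01001: every bit flipped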
Turing
Wozniak Jobs: young and old
Allen Gates
Karp Cook
Tim Berners-Lee
Computers: Hardware
From https://medium.com/@magicsilicon/history-and-future-of-computing-in-one-chart-eadb25ce61fc
Mainframe
Personal Device
Personal Computer
Minicomputer
Internet of Things
Computers: Software
From https://en.wikipedia.org/wiki/Software
Application software uses the computer system to perform special functions beyond the basic operation of the computer itself. There are many different types of application software because the range of tasks that can be performed with a modern computer is so large (see list of software). System software manages hardware behaviour so as to provide basic functionalities that are required by users, or for other software to run properly, if at all. System software is also designed for providing a platform for running application software,[12] and it includes the following:
• Operating systems are essential collections of software that manage resources and provide common services for other software that runs "on
top" of them. Supervisory programs, boot loaders, shells and window systems are core parts of operating systems. In practice, an operating
system comes bundled with additional software (including application software) so that a user can potentially do some work with a computer
that only has one operating system.
• Device drivers operate or control a particular type of device that is attached to a computer. Each device needs at least one corresponding device driver; because a computer typically has at least one input device and at least one output device, it typically needs more than one device driver.
• Utilities are computer programs designed to assist users in the maintenance and care of their computers.
Software Programming Languages
Computer Hardware: Smart Phones
From https://en.wikipedia.org/wiki/Smartphone
Two smartphones: Samsung Galaxy S22 Ultra (top)
and iPhone 13 Pro (bottom)
A smartphone is a portable computer device that combines mobile telephone and computing functions into one
unit. They are distinguished from feature phones by their stronger hardware capabilities and extensive mobile
operating systems, which facilitate wider software, internet (including web browsing over mobile broadband), and
multimedia functionality (including music, video, cameras, and gaming), alongside core phone functions such as
voice calls and text messaging. Smartphones typically contain a number of metal–oxide–semiconductor (MOS)
integrated circuit (IC) chips, include various sensors that can be leveraged by pre-included and third-party software
(such as a magnetometer, proximity sensors, barometer, gyroscope, accelerometer and more), and support wireless
communications protocols (such as Bluetooth, Wi-Fi, or satellite navigation).
Early smartphones were marketed primarily towards the enterprise market, attempting to bridge the functionality of
standalone personal digital assistant (PDA) devices with support for cellular telephony, but were limited by their
bulky form, short battery life, slow analog cellular networks, and the immaturity of wireless data services. These
issues were eventually resolved with the exponential scaling and miniaturization of MOS transistors down to sub-
micron levels (Moore's law), the improved lithium-ion battery, faster digital mobile data networks (Edholm's law),
and more mature software platforms that allowed mobile device ecosystems to develop independently of data
providers.
In the 2000s, NTT DoCoMo's i-mode platform, BlackBerry, Nokia's Symbian platform, and Windows Mobile
began to gain market traction, with models often featuring QWERTY keyboards or resistive touchscreen input, and
emphasizing access to push email and wireless internet. Following the rising popularity of the iPhone in the late
2000s, the majority of smartphones have featured thin, slate-like form factors, with large, capacitive screens with
support for multi-touch gestures rather than physical keyboards, and offer the ability for users to download or
purchase additional applications from a centralized store, and use cloud storage and synchronization, virtual
assistants, as well as mobile payment services. Smartphones have largely replaced PDAs, handheld/palm-sized PCs,
portable media players (PMP)[1] and to a lesser extent, handheld video game consoles.
Computer Hardware: Internet of Things
From https://en.wikipedia.org/wiki/Internet_of_things
The Internet of things (IoT) describes physical objects (or groups of such objects) with sensors, processing ability, software, and other technologies that connect and exchange
data with other devices and systems over the Internet or other communications networks.[1][2][3][4] Internet of things has been considered a misnomer because devices do not need to be connected to the public internet; they only need to be connected to a network and be individually addressable.[5][6]
The field has evolved due to the convergence of multiple technologies, including ubiquitous computing, commodity sensors, increasingly powerful embedded systems, and
machine learning.[7] Traditional fields of embedded systems, wireless sensor networks, control systems, automation (including home and building automation), independently
and collectively enable the Internet of things.[8] In the consumer market, IoT technology is most synonymous with products pertaining to the concept of the "smart home",
including devices and appliances (such as lighting fixtures, thermostats, home security systems, cameras, and other home appliances) that support one or more common
ecosystems, and can be controlled via devices associated with that ecosystem, such as smartphones and smart speakers. IoT is also used in healthcare systems.[9]
There are a number of concerns about the risks in the growth of IoT technologies and products, especially in the areas of privacy and security, and consequently, industry and
governmental moves to address these concerns have begun, including the development of international and local standards, guidelines, and regulatory frameworks.[10]
Internet of Things: What It Is, How It Works, Examples and More
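As a concrete illustration of the idea above, the sketch below shows a "device" that is individually addressable on a network and hands its sensor reading to another program that asks for it. It uses only Python's standard library; the device name, port number, and JSON payload are hypothetical choices made for illustration, not part of any IoT standard.

```python
# A minimal, hypothetical sketch: a "thermostat" answers one network request
# with a simulated reading; a second program on the same network queries it.
import json
import random
import socket
import threading

HOST, PORT = "127.0.0.1", 50007            # assumed local address for the "device"

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, PORT))
srv.listen(1)

def serve_one_reading():
    """The device: accept one connection and send its current state as JSON."""
    conn, _ = srv.accept()
    with conn:
        reading = {"device": "thermostat-01", "temp_c": round(random.uniform(18.0, 24.0), 1)}
        conn.sendall(json.dumps(reading).encode())

device = threading.Thread(target=serve_one_reading)
device.start()

# Another system asks the device for its status over the network.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    print(json.loads(cli.recv(1024).decode()))

device.join()
srv.close()
```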
Computer Applications
Numerical Analysis
Artificial Intelligence
Simulation
Bioinformatics
Computational Science
Industrial Internet of Things
Robotics
Computer Applications: Numerical Analysis
From https://en.wikipedia.org/wiki/Numerical_analysis
Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical
analysis (as distinguished from discrete mathematics). It is the study of numerical methods that attempt to find approximate solutions to problems rather than
the exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also the life and social
sciences, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed
and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial
mechanics (predicting the motions of planets, stars and galaxies), numerical linear algebra in data analysis,[2][3][4] and stochastic differential equations and
Markov chains for simulating living cells in medicine and biology.
Before modern computers, numerical methods often relied on hand interpolation formulas, using data from large printed tables. Since the mid 20th century,
computers calculate the required functions instead, but many of the same formulas continue to be used in software algorithms.[5]
WolframAlpha.com provides many numerical methods. For example, solving cos x = x.
Numerical analysis requires numerical methods + error analysis
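A minimal sketch of the "numerical methods + error analysis" pairing above, using Newton's method to approximate the solution of cos x = x from the slide. The starting guess, tolerance, and iteration limit are arbitrary illustrative choices.

```python
# Newton's method for f(x) = cos(x) - x, with a simple error estimate per step.
import math

def newton_cos_fixed_point(x0=1.0, tol=1e-12, max_iter=50):
    """Find the root of cos(x) - x, reporting an error estimate at each step."""
    x = x0
    for i in range(max_iter):
        f = math.cos(x) - x
        fprime = -math.sin(x) - 1.0          # derivative of cos(x) - x
        x_new = x - f / fprime               # Newton update
        err = abs(x_new - x)                 # a-posteriori error estimate
        print(f"iter {i}: x = {x_new:.15f}, error ~ {err:.2e}")
        if err < tol:
            return x_new
        x = x_new
    return x

root = newton_cos_fixed_point()              # converges to about 0.739085...
```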
Computer Applications: Artificial Intelligence
From https://en.wikipedia.org/wiki/Artificial_intelligence
Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by animals including humans. AI research has been
defined as the field of study of intelligent agents, which refers to any system that perceives its environment and takes actions that maximize its chance of achieving its goals.[a]
The term "artificial intelligence" had previously been used to describe machines that mimic and display "human" cognitive skills that are associated with the human mind, such as
"learning" and "problem-solving". This definition has since been rejected by major AI researchers who now describe AI in terms of rationality and acting rationally, which does
not limit how intelligence can be articulated.[b]
AI applications include advanced web search engines (e.g., Google), recommendation systems (used by YouTube, Amazon and Netflix), understanding human speech (such as Siri
and Alexa), self-driving cars (e.g., Tesla), automated decision-making and competing at the highest level in strategic game systems (such as chess and Go).[2] As machines become
increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.[3] For instance, optical character
recognition is frequently excluded from things considered to be AI,[4] having become a routine technology.[5]
Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[6][7] followed by disappointment and the
loss of funding (known as an "AI winter"),[8][9] followed by new approaches, success and renewed funding.[7][10] AI research has tried and discarded many different approaches since
its founding, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge and imitating animal behavior. In the first decades of
the 21st century, highly mathematical-statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging
problems throughout industry and academia.[10][11]
The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge
representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects.[c] General intelligence (the ability to solve an arbitrary
problem) is among the field's long-term goals.[12] To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques—including
search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability and economics. AI also draws upon computer science,
psychology, linguistics, philosophy, and many other fields.
The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it".[d] This raised philosophical arguments
about the mind and the ethical consequences of creating artificial beings endowed with human-like intelligence; these issues have previously been explored by myth, fiction and
philosophy since antiquity.[14] Computer scientists and philosophers have since suggested that AI may become an existential risk to humanity if its rational capacities are not
steered towards beneficial goals.[e]
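The excerpt defines AI research around agents that take actions to achieve goals, and lists search and mathematical optimization among the field's basic tools. The toy sketch below illustrates just that one idea with a greedy hill-climbing agent on a made-up scoring landscape; it is not any specific algorithm named in the text, and the grid, scoring function, and move set are all hypothetical.

```python
# A greedy hill-climbing "agent": repeatedly move to whichever neighboring
# state best improves the goal score, and stop when no move helps.
def score(x, y):
    """Assumed goal landscape: higher is better, with a peak at (7, 3)."""
    return -((x - 7) ** 2 + (y - 3) ** 2)

def hill_climb(start=(0, 0), steps=100):
    x, y = start
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(steps):
        best = max(moves, key=lambda m: score(x + m[0], y + m[1]))
        if score(x + best[0], y + best[1]) <= score(x, y):
            break                        # no neighbor improves the goal: stop
        x, y = x + best[0], y + best[1]
    return (x, y), score(x, y)

print(hill_climb())                      # reaches ((7, 3), 0) on this toy landscape
```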
Computer Applications: Simulation
From https://en.wikipedia.org/wiki/Simulation
A simulation is the imitation of the operation of a real-world process or system over time.[1] Simulations require the use of models; the model
represents the key characteristics or behaviors of the selected system or process, whereas the simulation represents the evolution of the model over
time. Often, computers are used to execute the simulation.
Simulation is used in many contexts, such as simulation of technology for performance tuning or optimizing, safety engineering, testing, training,
education,[2] and video games. Simulation is also used with scientific modelling of natural systems[2] or human systems to gain insight into their
functioning,[3] as in economics. Simulation can be used to show the eventual real effects of alternative conditions and courses of action. Simulation is
also used when the real system cannot be engaged, because it may not be accessible, or it may be dangerous or unacceptable to engage, or it is being
designed but not yet built, or it may simply not exist.[4]
Key issues in modeling and simulation include the acquisition of valid sources of information about the relevant selection of key characteristics and
behaviors used to build the model, the use of simplifying approximations and assumptions within the model, and fidelity and validity of the simulation
outcomes. Procedures and protocols for model verification and validation are an ongoing field of academic study, refinement, research and development
in simulations technology or practice, particularly in the work of computer simulation.
Flight Simulator
Simulation is based on system behavior models
Emulation is based on system construction models
AI simulates natural intelligence
AGI emulates natural intelligence
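To make the model-versus-simulation distinction above concrete, here is a small sketch in which the model is a one-counter checkout line with random arrivals and a fixed service time, and the simulation steps that model forward minute by minute. All parameter values are invented for illustration.

```python
# Time-stepped simulation of a simple queueing model.
import random

def simulate_queue(minutes=60, arrival_prob=0.4, service_time=2, seed=1):
    random.seed(seed)
    queue, busy_left, served, max_queue = 0, 0, 0, 0
    for _ in range(minutes):                 # advance the model one minute at a time
        if random.random() < arrival_prob:   # model: customers arrive at random
            queue += 1
        if busy_left == 0 and queue > 0:     # model: counter takes the next customer
            queue -= 1
            busy_left = service_time
            served += 1
        busy_left = max(0, busy_left - 1)
        max_queue = max(max_queue, queue)
    return {"served": served, "longest_line": max_queue}

# Run the simulation to see how the modeled system evolves over an hour.
print(simulate_queue())
```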
Computer Applications: Computational Science
From https://en.wikipedia.org/wiki/Computational_science
Computational science, also known as scientific computing or scientific computation (SC), is a field in mathematics that uses advanced computing
capabilities to understand and solve complex problems. It is an area of science that spans many disciplines, but at its core, it involves the development of
models and simulations to understand natural systems.
• Algorithms (numerical and non-numerical): mathematical models, computational models, and computer simulations developed to solve science (e.g.,
biological, physical, and social), engineering, and humanities problems
• Computer hardware that develops and optimizes the advanced system hardware, firmware, networking, and data management components needed to
solve computationally demanding problems
• The computing infrastructure that supports both the science and engineering problem solving and the developmental computer and information science
In practical use, it is typically the application of computer simulation and other forms of computation from numerical analysis and theoretical computer
science to solve problems in various scientific disciplines. The field is different from theory and laboratory experiments, which are the traditional forms of
science and engineering. The scientific computing approach is to gain understanding through the analysis of mathematical models implemented on computers.
Scientists and engineers develop computer programs and application software that model systems being studied and run these programs with various sets of
input parameters. The essence of computational science is the application of numerical algorithms[1] and computational mathematics. In some cases, these
models require massive amounts of calculations (usually floating-point) and are often executed on supercomputers or distributed computing platforms.
Big Data Analytics
Big Data Collection
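A minimal sketch of the workflow described above: implement a mathematical model on a computer and run it with different input parameters. The model here is projectile motion with air drag; the drag coefficient, launch speed, and time step are arbitrary illustrative values.

```python
# Integrate a simple physical model (gravity + quadratic drag) in small time
# steps, then rerun it for several input parameters, as the excerpt describes.
import math

def flight_range(speed, angle_deg, drag=0.02, dt=0.001, g=9.81):
    """Distance travelled before landing, by stepping the equations of motion."""
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while y >= 0.0:
        v = math.hypot(vx, vy)
        ax, ay = -drag * v * vx, -g - drag * v * vy   # model: gravity plus drag
        x, y = x + vx * dt, y + vy * dt               # forward-Euler step
        vx, vy = vx + ax * dt, vy + ay * dt
    return x

for angle in (30, 40, 45, 50):                        # vary the input parameters
    print(angle, round(flight_range(30.0, angle), 1))
```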
Computer Applications: Bioinformatics
From https://en.wikipedia.org/wiki/Bioinformatics
Bioinformatics (/ˌbaɪ.oʊˌɪnfərˈmætɪks/ (listen)) is an interdisciplinary field that develops methods and software tools for understanding biological data, in
particular when the data sets are large and complex. As an interdisciplinary field of science, bioinformatics combines biology, chemistry, physics, computer
science, information engineering, mathematics and statistics to analyze and interpret the biological data. Bioinformatics has been used for in silico analyses of
biological queries using computational and statistical techniques.
Bioinformatics includes biological studies that use computer programming as part of their methodology, as well as specific analysis "pipelines" that are repeatedly
used, particularly in the field of genomics. Common uses of bioinformatics include the identification of candidate genes and single nucleotide polymorphisms
(SNPs). Often, such identification is made with the aim to better understand the genetic basis of disease, unique adaptations, desirable properties (esp. in
agricultural species), or differences between populations. In a less formal way, bioinformatics also tries to understand the organizational principles within nucleic
acid and protein sequences, called proteomics.[1]
Image and signal processing allow extraction of useful results from large amounts of raw data. In the field of genetics, it aids in sequencing and annotating
genomes and their observed mutations. It plays a role in the text mining of biological literature and the development of biological and gene ontologies to organize
and query biological data. It also plays a role in the analysis of gene and protein expression and regulation. Bioinformatics tools aid in comparing, analyzing and
interpreting genetic and genomic data and more generally in the understanding of evolutionary aspects of molecular biology. At a more integrative level, it helps
analyze and catalogue the biological pathways and networks that are an important part of systems biology. In structural biology, it aids in the simulation and
modeling of DNA,[2] RNA,[2][3] proteins[4] as well as biomolecular interactions.[5][6][7][8]
Example: Protein Folding from Deep Learning
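A tiny sketch of the routine sequence analysis mentioned above: comparing two aligned DNA sequences for single-nucleotide differences and computing GC content. The sequences are invented for illustration; real pipelines operate on far larger data sets with dedicated tools.

```python
# Invented example sequences (same length, one single-nucleotide difference).
ref = "ATGGCGTACGTTAGC"
alt = "ATGGCGTACATTAGC"

def gc_content(seq):
    """Fraction of bases that are G or C."""
    return sum(base in "GC" for base in seq) / len(seq)

def snp_positions(a, b):
    """Positions where two aligned sequences of equal length disagree."""
    return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

print(f"GC content of reference: {gc_content(ref):.2f}")
print(f"Single-nucleotide differences at positions: {snp_positions(ref, alt)}")
```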
Computer Applications: Industrial Internet of Things
From https://www.tibco.com/reference-center/what-is-iiot
Industrial IoT, or the Industrial Internet of Things (IIoT), is a vital element of Industry 4.0. IIoT harnesses the power of smart machines and
real-time analysis to make better use of the data that industrial machines have been churning out for years. The principal driver of IIoT is smart
machines, for two reasons. The first is that smart machines capture and analyze data in real-time, which humans cannot. The second is that
smart machines communicate their findings in a manner that is simple and fast, enabling faster and more accurate business decisions.
Computer Applications: Robotics Overview
Fundamentals of Robotics
History of Robotics
Drones
Čapek 1920: R.U.R.
Asimov 1950: Three Laws of Robotics
Devol 1954: First Robot
Cyborgs: First Cyborg
Computer Applications: Advanced Robotics
Boston Dynamics Dancing Robots
Carbon-based Life Forms
Bionic Xenobots from Frogs
Non-Bionic Frog Robot
Giant Gundam Robot
AI Mayflower
Humanoid Robots
Monkey Cyborg
Reproducing Xenobots
Large Scale Computing
Cloud
Big Data
Deep Learning
Supercomputing
From https://en.wikipedia.org/wiki/Cloud_computing
Cloud computing[1] is the on-demand availability of computer system resources, especially data storage (cloud storage) and computing
power, without direct active management by the user.[2] Large clouds often have functions distributed over multiple locations, each location
being a data center. Cloud computing relies on sharing of resources to achieve coherence and typically using a "pay-as-you-go" model which
can help in reducing capital expenses but may also lead to unexpected operating expenses for unaware users.[3]
According to IDC, global spending on cloud computing services has reached $706 billion and is expected to reach $1.3 trillion by 2025.[4]
Gartner estimated that global public cloud services end-user spending would reach $600 billion by 2023.[5] A McKinsey &
Company report on cloud cost-optimization levers and value-oriented business use cases foresees more than $1 trillion in run-rate EBITDA
across Fortune 500 companies as up for grabs in 2030.[6] In 2022, more than $1.3 trillion in enterprise IT spending is at stake from the shift to
cloud, growing to almost $1.8 trillion in 2025, according to Gartner.[7]
Cloud computing metaphor: the group of networked elements providing services need not be individually addressed
Large Scale Computing: Cloud
Large Scale Computing: Big Data
From https://en.wikipedia.org/wiki/Big_data
Big data refers to data sets that are too large or complex to be dealt with by traditional data-processing application software. Data with many entries (rows) offer greater statistical power, while data
with higher complexity (more attributes or columns) may lead to a higher false discovery rate.[2] Big data analysis challenges include capturing data, data storage, data analysis, search, sharing,
transfer, visualization, querying, updating, information privacy, and data sources. Big data was originally associated with three key concepts: volume, variety, and velocity.[3] Because analyzing big data
presents sampling challenges, earlier work allowed only for observations and sampling; a fourth concept, veracity, refers to the quality or insightfulness of the data. Without sufficient
investment in expertise for big data veracity, the volume and variety of data can produce costs and risks that exceed an organization's capacity to create and capture value from big data.[4]
Current usage of the term big data tends to refer to the use of predictive analytics, user behavior analytics, or certain other advanced data analytics methods that extract value from big data, and
seldom to a particular size of data set. "There is little doubt that the quantities of data now available are indeed large, but that's not the most relevant characteristic of this new data ecosystem."[5]
Analysis of data sets can find new correlations to "spot business trends, prevent diseases, combat crime and so on".[6] Scientists, business executives, medical practitioners, advertising and
governments alike regularly meet difficulties with large data-sets in areas including Internet searches, fintech, healthcare analytics, geographic information systems, urban informatics, and business
informatics. Scientists encounter limitations in e-Science work, including meteorology, genomics,[7] connectomics, complex physics simulations, biology, and environmental research.[8]
The size and number of available data sets have grown rapidly as data is collected by devices such as mobile devices, cheap and numerous information-sensing Internet of things devices, aerial
(remote sensing), software logs, cameras, microphones, radio-frequency identification (RFID) readers and wireless sensor networks.[9][10] The world's technological per-capita capacity to store
information has roughly doubled every 40 months since the 1980s;[11] as of 2012, every day 2.5 exabytes (2.5×10¹⁸ bytes) of data were generated.[12] An IDC report predicted that the global data
volume would grow exponentially from 4.4 zettabytes to 44 zettabytes between 2013 and 2020; by 2025, IDC predicts there will be 163 zettabytes of data.[13] According to IDC, global
spending on big data and business analytics (BDA) solutions was estimated to reach $215.7 billion in 2021,[14][15] while a Statista report forecast the global big data market to grow to $103 billion by
2027.[16] In 2011, McKinsey & Company reported that if US healthcare were to use big data creatively and effectively to drive efficiency and quality, the sector could create more than $300 billion in value
every year.[17] In the developed economies of Europe, government administrators could save more than €100 billion ($149 billion) in operational efficiency improvements alone by using big data.[17]
And users of services enabled by personal-location data could capture $600 billion in consumer surplus.[17] One question for large enterprises is determining who should own big-data initiatives that
affect the entire organization.[18]
Large Scale Computing: Deep Learning
From https://en.wikipedia.org/wiki/Deep_learning
Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning.
Learning can be supervised, semi-supervised or unsupervised.[2]
Deep-learning architectures such as deep neural networks, deep belief networks, deep reinforcement learning, recurrent neural networks, convolutional neural networks and
Transformers have been applied to fields including computer vision, speech recognition, natural language processing, machine translation, bioinformatics, drug design, medical
image analysis, climate science, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert
performance.[3][4][5]
Artificial neural networks (ANNs) were inspired by information processing and distributed communication nodes in biological systems. ANNs have various differences from
biological brains. Specifically, artificial neural networks tend to be static and symbolic, while the biological brain of most living organisms is dynamic (plastic) and analogue.[6][7]
The adjective "deep" in deep learning refers to the use of multiple layers in the network. Early work showed that a linear perceptron cannot be a universal classifier, but that a
network with a nonpolynomial activation function with one hidden layer of unbounded width can. Deep learning is a modern variation which is concerned with an unbounded
number of layers of bounded size, which permits practical application and optimized implementation, while retaining theoretical universality under mild conditions. In deep
learning the layers are also permitted to be heterogeneous and to deviate widely from biologically informed connectionist models, for the sake of efficiency, trainability and
understandability, whence the "structured" part.
10 Deep Learning Algorithms
Deep learning networks are ~1000 times larger and require many inputs for training.
1. Convolutional Neural Networks (CNNs)
2. Long Short Term Memory Networks (LSTMs)
3. Recurrent Neural Networks (RNNs)
4. Generative Adversarial Networks (GANs)
5. Radial Basis Function Networks (RBFNs)
6. Multilayer Perceptrons (MLPs)
7. Self Organizing Maps (SOMs)
8. Deep Belief Networks (DBNs)
9. Restricted Boltzmann Machines( RBMs)
10. Autoencoders
Transfer Learning
Reinforcement Learning
Transformer Learning
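To illustrate the "multiple layers" idea behind the list above, here is a minimal sketch (using NumPy, which is assumed to be installed) of the forward pass through a tiny multilayer perceptron (item 6 in the list). The weights are random, so this shows only how data flows through stacked layers, not training.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)          # nonlinear activation between layers

# Three stacked layers: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
layers = [(rng.standard_normal((4, 8)), np.zeros(8)),
          (rng.standard_normal((8, 8)), np.zeros(8)),
          (rng.standard_normal((8, 2)), np.zeros(2))]

def forward(x):
    """Pass an input vector through every layer ("deep" = several such layers)."""
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:        # no activation on the final output layer
            x = relu(x)
    return x

print(forward(np.array([0.5, -1.2, 3.0, 0.1])))
```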
Large Scale Computing: Supercomputing
From https://en.wikipedia.org/wiki/Supercomputer
A supercomputer is a computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in
floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, there have existed supercomputers which can perform over
10¹⁷ FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS).[3]
For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS to tens of teraFLOPS.[4][5]
Since November 2017, all of the world's fastest 500 supercomputers run Linux-based operating systems.[6] Additional research is being conducted in the United States, the European
Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.[7]
Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum
mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological
macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation
of nuclear weapons, and nuclear fusion). They have been essential in the field of cryptanalysis.[8]
The IBM Blue Gene/P supercomputer "Intrepid" at Argonne National Laboratory
runs 164,000 processor cores using normal data center air conditioning, grouped in
40 racks/cabinets connected by a high-speed 3D torus network
Exascale Computing
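A rough sketch of what a FLOPS figure means in practice: time a large matrix multiplication (about 2n³ floating-point operations) and convert that to operations per second. NumPy is assumed to be installed; the matrix size is an arbitrary choice, and a desktop result lands in the gigaFLOPS range, many orders of magnitude below the petaFLOPS machines described above.

```python
# Estimate this machine's floating-point throughput from one matrix multiply.
import time
import numpy as np

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
_ = a @ b                                   # roughly n^3 multiplies + n^3 adds
elapsed = time.perf_counter() - start

flops = 2 * n**3 / elapsed
print(f"~{flops / 1e9:.1f} GFLOPS on this machine")
```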
Electronics
Television
Transistors
Microprocessors
Electronics: Television
From https://en.wikipedia.org/wiki/Television
Farnsworth
Television, sometimes shortened to TV, is a telecommunication medium for transmitting moving images and sound. The term can refer to a television set, or the medium of television
transmission. Television is a mass medium for advertising, entertainment, news, and sports.
Television became available in crude experimental forms in the late 1920s, but only after several years of further development was the new technology marketed to consumers. After
World War II, an improved form of black-and-white television broadcasting became popular in the United Kingdom and the United States, and television sets became commonplace in
homes, businesses, and institutions. During the 1950s, television was the primary medium for influencing public opinion.[1] In the mid-1960s, color broadcasting was introduced in the
U.S. and most other developed countries.
The availability of various types of archival storage media such as Betamax and VHS tapes, high-capacity hard disk drives, DVDs, flash drives, high-definition Blu-ray Discs, and cloud
digital video recorders has enabled viewers to watch pre-recorded material—such as movies—at home on their own time schedule. For many reasons, especially the convenience of
remote retrieval, the storage of television and video programming now also occurs on the cloud (such as the video-on-demand service by Netflix). At the end of the first decade of the
2000s, digital television transmissions greatly increased in popularity. Another development was the move from standard-definition television (SDTV) (576i, with 576 interlaced lines of
resolution and 480i) to high-definition television (HDTV), which provides a resolution that is substantially higher. HDTV may be transmitted in different formats: 1080p, 1080i and
720p. Since 2010, with the invention of smart television, Internet television has increased the availability of television programs and movies via the Internet through streaming video
services such as Netflix, Amazon Prime Video, iPlayer and Hulu.
In 2013, 79% of the world's households owned a television set.[2] The replacement of earlier cathode-ray tube (CRT) screen displays with compact, energy-efficient, flat-panel alternative
technologies such as LCDs (both fluorescent-backlit and LED), OLED displays, and plasma displays was a hardware revolution that began with computer monitors in the late 1990s.
Most television sets sold in the 2000s were flat-panel, mainly LEDs. Major manufacturers announced the discontinuation of CRT, Digital Light Processing (DLP), plasma, and even
fluorescent-backlit LCDs by the mid-2010s.[3][4] In the near future, LEDs are expected to be gradually replaced by OLEDs.[5] Also, major manufacturers have announced that they will
increasingly produce smart TVs in the mid-2010s.[6][7][8] Smart TVs with integrated Internet and Web 2.0 functions became the dominant form of television by the late 2010s.[9]
Television signals were initially distributed only as terrestrial television using high-powered radio-frequency television transmitters to broadcast the signal to individual television
receivers. Alternatively television signals are distributed by coaxial cable or optical fiber, satellite systems and, since the 2000s via the Internet. Until the early 2000s, these were
transmitted as analog signals, but a transition to digital television was expected to be completed worldwide by the late 2010s. A standard television set consists of multiple internal
electronic circuits, including a tuner for receiving and decoding broadcast signals. A visual display device which lacks a tuner is correctly called a video monitor rather than a television.
Baird
Cathode Ray Tube
Electronics: Transistors
From https://en.wikipedia.org/wiki/Transistor
A transistor is a semiconductor device used to amplify or switch electrical signals and power. The transistor is one of the basic building blocks of modern electronics.[1] It is
composed of semiconductor material, usually with at least three terminals for connection to an electronic circuit. A voltage or current applied to one pair of the transistor's
terminals controls the current through another pair of terminals. Because the controlled (output) power can be higher than the controlling (input) power, a transistor can amplify
a signal. Some transistors are packaged individually, but many more are found embedded in integrated circuits.
Austro-Hungarian physicist Julius Edgar Lilienfeld proposed the concept of a field-effect transistor in 1926, but it was not possible to actually construct a working device at that
time.[2] The first working device to be built was a point-contact transistor invented in 1947 by American physicists John Bardeen and Walter Brattain while working under
William Shockley at Bell Labs. The three shared the 1956 Nobel Prize in Physics for their achievement.[3] The most widely used type of transistor is the metal–oxide–
semiconductor field-effect transistor (MOSFET), which was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959.[4][5][6] Transistors revolutionized the field of
electronics, and paved the way for smaller and cheaper radios, calculators, and computers, among other things.
Most transistors are made from very pure silicon, and some from germanium, but certain other semiconductor materials are sometimes used. A transistor may have only one kind
of charge carrier, in a field-effect transistor, or may have two kinds of charge carriers in bipolar junction transistor devices. Compared with the vacuum tube, transistors are
generally smaller and require less power to operate. Certain vacuum tubes have advantages over transistors at very high operating frequencies or high operating voltages. Many
types of transistors are made to standardized specifications by multiple manufacturers.
Metal–oxide–semiconductor field-effect transistor (MOSFET), showing gate (G), body (B), source (S) and drain (D) terminals.
The gate is separated from the body by an insulating layer (pink).
John Bardeen, William Shockley and Walter Brattain at Bell Labs in
1948. Bardeen and Brattain invented the point-contact transistor in
1947 and Shockley the bipolar junction transistor in 1948.
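As a back-of-the-envelope illustration of the control idea above (a small input quantity steering a larger output current), the sketch below uses the textbook long-channel "square-law" approximation of a MOSFET in saturation. The threshold voltage and gain factor are invented illustrative values, not data for any real device.

```python
# Simplified square-law model: gate-source voltage controls drain current.
def drain_current(v_gs, v_th=0.7, k=2e-3):
    """Approximate drain current (amps) for a given gate-source voltage (volts)."""
    if v_gs <= v_th:
        return 0.0                       # below threshold: the switch is "off"
    return 0.5 * k * (v_gs - v_th) ** 2  # above threshold: current rises steeply

for v in (0.5, 1.0, 1.5, 2.0, 2.5):
    print(f"V_GS = {v:.1f} V  ->  I_D = {drain_current(v) * 1000:.2f} mA")
```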
Electronics: Microprocessor
From https://en.wikipedia.org/wiki/Microprocessor
A microprocessor is a computer processor where the data processing logic and control is included on a single integrated circuit, or a small number of integrated circuits. The
microprocessor contains the arithmetic, logic, and control circuitry required to perform the functions of a computer's central processing unit. The integrated circuit is capable of
interpreting and executing program instructions and performing arithmetic operations.[1] The microprocessor is a multipurpose, clock-driven, register-based, digital integrated circuit
that accepts binary data as input, processes it according to instructions stored in its memory, and provides results (also in binary form) as output. Microprocessors contain both
combinational logic and sequential digital logic, and operate on numbers and symbols represented in the binary number system.
The integration of a whole CPU onto a single or a few integrated circuits using Very-Large-Scale Integration (VLSI) greatly reduced the cost of processing power. Integrated circuit
processors are produced in large numbers by highly automated metal-oxide-semiconductor (MOS) fabrication processes, resulting in a relatively low unit price. Single-chip
processors increase reliability because there are much fewer electrical connections that could fail. As microprocessor designs improve, the cost of manufacturing a chip (with smaller
components built on a semiconductor chip the same size) generally stays the same according to Rock's law.
Before microprocessors, small computers had been built using racks of circuit boards with many medium- and small-scale integrated circuits, typically of TTL type. Microprocessors
combined this into one or a few large-scale ICs. The first commercially available microprocessor was the Intel 4004 introduced in 1971.
Continued increases in microprocessor capacity have since rendered other forms of computers almost completely obsolete (see history of computing hardware), with one or more
microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers.
First microprocessor by Intel, the 4004
Noyce
Kilby Moore
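To illustrate how a processor "interprets and executes program instructions" as described above, here is a toy sketch of the fetch-decode-execute cycle. The three-instruction machine, its opcodes, and the sample program are all invented for illustration; real microprocessors are vastly more complex.

```python
# A toy processor with one accumulator register and three invented opcodes.
def run(program):
    acc, pc = 0, 0                       # accumulator register and program counter
    while pc < len(program):
        op, arg = program[pc]            # fetch the instruction from "memory"
        pc += 1
        if op == "LOAD":                 # decode and execute
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JNZ" and acc != 0:   # jump if the accumulator is not zero
            pc = arg
    return acc

# Sample program: start at 5 and count down to 0 by repeatedly adding -1.
program = [("LOAD", 5), ("ADD", -1), ("JNZ", 1)]
print(run(program))                      # prints 0
```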
Networking
Internet
Web
Social Media
Internet of Everything
Networking: Internet
From https://en.wikipedia.org/wiki/Internet
The Internet (or internet) is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks
that consists of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries a
vast range of information resources and services, such as the inter-linked hypertext documents and applications of the World Wide Web (WWW), electronic mail, telephony, and file sharing.
The origins of the Internet date back to the development of packet switching and research commissioned by the United States Department of Defense in the 1960s to enable time-sharing of computers.[2] The
primary precursor network, the ARPANET, initially served as a backbone for interconnection of regional academic and military networks in the 1970s. The funding of the National Science Foundation Network
as a new backbone in the 1980s, as well as private funding for other commercial extensions, led to worldwide participation in the development of new networking technologies, and the merger of many
networks.[3] The linking of commercial networks and enterprises by the early 1990s marked the beginning of the transition to the modern Internet,[4] and generated a sustained exponential growth as generations
of institutional, personal, and mobile computers were connected to the network. Although the Internet was widely used by academia in the 1980s, commercialization incorporated its services and technologies
into virtually every aspect of modern life.
Most traditional communication media, including telephone, radio, television, paper mail and newspapers are reshaped, redefined, or even bypassed by the Internet, giving birth to new services such as email,
Internet telephone, Internet television, online music, digital newspapers, and video streaming websites. Newspaper, book, and other print publishing are adapting to website technology, or are reshaped into
blogging, web feeds and online news aggregators. The Internet has enabled and accelerated new forms of personal interactions through instant messaging, Internet forums, and social networking services.
Online shopping has grown exponentially for major retailers, small businesses, and entrepreneurs, as it enables firms to extend their "brick and mortar" presence to serve a larger market or even sell goods and
services entirely online. Business-to-business and financial services on the Internet affect supply chains across entire industries.
The Internet has no single centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own policies.[5] The overarching definitions of the
two principal name spaces in the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned
Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated
international participants that anyone may associate with by contributing technical expertise.[6] In November 2006, the Internet was included on USA Today's list of New Seven Wonders.[7]
Licklider Kahn Cerf
Networking: Web
From https://en.wikipedia.org/wiki/World_Wide_Web
The World Wide Web (WWW), commonly known as the Web, is an information system enabling documents and other web resources to be accessed over the Internet.[1]
Documents and downloadable media are made available to the network through web servers and can be accessed by programs such as web browsers. Servers and
resources on the World Wide Web are identified and located through character strings called uniform resource locators (URLs). The original and still very common
document type is a web page formatted in Hypertext Markup Language (HTML). This markup language supports plain text, images, embedded video and audio contents,
and scripts (short programs) that implement complex user interaction. The HTML language also supports hyperlinks (embedded URLs) which provide immediate access
to other web resources. Web navigation, or web surfing, is the common practice of following such hyperlinks across multiple websites. Web applications are web pages
that function as application software. The information in the Web is transferred across the Internet using the Hypertext Transfer Protocol (HTTP).
Multiple web resources with a common theme and usually a common domain name make up a website. A single web server may provide multiple websites, while some
websites, especially the most popular ones, may be provided by multiple servers. Website content is provided by a myriad of companies, organizations, government
agencies, and individual users; and comprises an enormous mass of educational, entertainment, commercial, and government information.
The World Wide Web has become the world's dominant software platform.[2][3][4][5] It is the primary tool billions of people worldwide use to interact with the Internet.[6]
The Web was originally conceived as a document management system.[7] It was invented by Tim Berners-Lee at CERN in 1989 and opened to the public in 1991.
Graphic representation of a minute fraction
of the WWW, demonstrating hyperlinks
Tim Berners-Lee
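A minimal sketch of the mechanics described above: a URL identifies a web resource, and HTTP transfers it across the Internet. It uses only Python's standard library and fetches a well-known example page, so a live network connection is assumed.

```python
# Retrieve one web resource identified by a URL, using an HTTP GET request.
import urllib.request

url = "https://example.com/"
with urllib.request.urlopen(url) as response:        # send the HTTP GET request
    html = response.read().decode("utf-8")           # the HTML document itself
    print(response.status, response.headers.get("Content-Type"))
    print(html[:80], "...")                          # first few characters of the page
```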
Networking: Social Media
From https://en.wikipedia.org/wiki/Social_media
Social media are interactive technologies that facilitate the creation and sharing of information, ideas, interests, and other forms of expression through virtual communities and networks.[1][2]
While challenges to the definition of social media arise[3][4] due to the variety of stand-alone and built-in social media services currently available, there are some common features:[2]
1. Social media are interactive Web 2.0 Internet-based applications.[2][5]
2. User-generated content—such as text posts or comments, digital photos or videos, and data generated through all online interactions—is the lifeblood of social media.[2][5]
3. Users create service-specific profiles for the website or app that are designed and maintained by the social media organization.[2][6]
4. Social media helps the development of online social networks by connecting a user's profile with those of other individuals or groups.[2][6]
The term social in regard to media suggests that platforms are user-centric and enable communal activity. As such, social media can be viewed as online facilitators or enhancers of human
networks—webs of individuals who enhance social connectivity.[7]
Users usually access social media services through web-based apps on desktops or download services that offer social media functionality to their mobile devices (e.g., smartphones and
tablets). As users engage with these electronic services, they create highly interactive platforms through which individuals, communities, and organizations can share, co-create, discuss, participate in,
and modify user-generated or self-curated content posted online.[8][9][1] Additionally, social media are used to document memories, learn about and explore things, advertise oneself, and form
friendships along with the growth of ideas from the creation of blogs, podcasts, videos, and gaming sites.[10] This changing relationship between humans and technology is the focus of the
emerging field of technological self-studies.[11] Some of the most popular social media websites, with more than 100 million registered users, include Facebook (and its associated Facebook
Messenger), TikTok, WeChat, Instagram, QZone, Weibo, Twitter, Tumblr, Baidu Tieba, and LinkedIn. Depending on interpretation, other popular platforms that are sometimes referred to as
social media services include YouTube, QQ, Quora, Telegram, Meta, Signal, LINE, Snapchat, Pinterest, Viber, Reddit, Discord, VK, Microsoft Teams, and more. Wikis are examples of
collaborative content creation.
Zuckerberg
Networking: Internet of Everything
From https://www.bbvaopenmind.com/en/technology/digital-world/the-internet-of-everything-ioe/
The Internet of Everything (IoE) “is bringing together people, process, data, and things to make networked connections more
relevant and valuable than ever before, turning information into actions that create new capabilities, richer experiences, and
unprecedented economic opportunity for businesses, individuals, and countries” (Cisco, 2013).
In simple terms, IoE is the intelligent connection of people, process, data, and things. The Internet of Everything describes a world
where billions of objects have sensors to detect, measure, and assess their status, all connected over public or private networks using
standard and proprietary protocols.
Pillars of The Internet of Everything (IoE)
• People: Connecting people in more relevant, valuable ways.
• Data: Converting data into intelligence to make better decisions.
• Process: Delivering the right information to the right person (or machine) at the right time.
• Things: Physical devices and objects connected to the Internet and each other for intelligent decision making; often called Internet of
Things (IoT).
Unsolved Problems
P vs NP
Quantum Gravity
Western Science History.pdf
Western Science History.pdf
Western Science History.pdf
Western Science History.pdf
Western Science History.pdf
Western Science History.pdf
Western Science History.pdf
Western Science History.pdf
Western Science History.pdf
Western Science History.pdf
Western Science History.pdf

More Related Content

What's hot

Electromagnetic Spectrum PowerPoint Presentation for Teachers/Students
Electromagnetic Spectrum PowerPoint Presentation for Teachers/StudentsElectromagnetic Spectrum PowerPoint Presentation for Teachers/Students
Electromagnetic Spectrum PowerPoint Presentation for Teachers/StudentsRoma Balagtas
 
Branches of physics
Branches of physicsBranches of physics
Branches of physicsSRLive
 
Hans Christian Oersted
Hans Christian OerstedHans Christian Oersted
Hans Christian Oerstedvelocifossa
 
Factors affecting climate
Factors affecting climateFactors affecting climate
Factors affecting climateRhajTheWonder
 
Marie Curie by Aygiz Akhtyamov
Marie Curie by Aygiz AkhtyamovMarie Curie by Aygiz Akhtyamov
Marie Curie by Aygiz AkhtyamovAygul Gazizova
 
Physical Science Notes - Properties, Systems, Matter & Energy
Physical Science Notes - Properties, Systems, Matter & EnergyPhysical Science Notes - Properties, Systems, Matter & Energy
Physical Science Notes - Properties, Systems, Matter & Energyjschmied
 
Grade 10 Science Learner's Material Activity 1: Find The Center
Grade 10 Science Learner's Material Activity 1: Find The CenterGrade 10 Science Learner's Material Activity 1: Find The Center
Grade 10 Science Learner's Material Activity 1: Find The CenterJan Cecilio
 
Albert einstein jan2012 final
Albert einstein jan2012 finalAlbert einstein jan2012 final
Albert einstein jan2012 finalGTClub
 
Parts-of-the-Science-Investigatory-Project-PPT.pptx
Parts-of-the-Science-Investigatory-Project-PPT.pptxParts-of-the-Science-Investigatory-Project-PPT.pptx
Parts-of-the-Science-Investigatory-Project-PPT.pptxAldrinBalita1
 
Electromagnetic spectrum
Electromagnetic spectrumElectromagnetic spectrum
Electromagnetic spectrumSabrina Medel
 
Action research on lenses
Action research on lensesAction research on lenses
Action research on lensesangelbindusingh
 
CuRRENT-ADVANCEMENTS-AND-INFORMATIONS USHLIE_20230902_142432_0000 (1).pdf
CuRRENT-ADVANCEMENTS-AND-INFORMATIONS USHLIE_20230902_142432_0000 (1).pdfCuRRENT-ADVANCEMENTS-AND-INFORMATIONS USHLIE_20230902_142432_0000 (1).pdf
CuRRENT-ADVANCEMENTS-AND-INFORMATIONS USHLIE_20230902_142432_0000 (1).pdfCheyeneReliGlore
 
James chadwick
James chadwickJames chadwick
James chadwickjane1015
 

What's hot (20)

Earth subsystem
Earth subsystemEarth subsystem
Earth subsystem
 
Electromagnetic Spectrum PowerPoint Presentation for Teachers/Students
Electromagnetic Spectrum PowerPoint Presentation for Teachers/StudentsElectromagnetic Spectrum PowerPoint Presentation for Teachers/Students
Electromagnetic Spectrum PowerPoint Presentation for Teachers/Students
 
Branches of physics
Branches of physicsBranches of physics
Branches of physics
 
Hans Christian Oersted
Hans Christian OerstedHans Christian Oersted
Hans Christian Oersted
 
Factors affecting climate
Factors affecting climateFactors affecting climate
Factors affecting climate
 
Earth History ppt
Earth History pptEarth History ppt
Earth History ppt
 
Michael Faraday ppt
Michael Faraday pptMichael Faraday ppt
Michael Faraday ppt
 
Marie Curie by Aygiz Akhtyamov
Marie Curie by Aygiz AkhtyamovMarie Curie by Aygiz Akhtyamov
Marie Curie by Aygiz Akhtyamov
 
Physical Science Notes - Properties, Systems, Matter & Energy
Physical Science Notes - Properties, Systems, Matter & EnergyPhysical Science Notes - Properties, Systems, Matter & Energy
Physical Science Notes - Properties, Systems, Matter & Energy
 
Andre Marie Ampere
Andre Marie AmpereAndre Marie Ampere
Andre Marie Ampere
 
Grade 10 Science Learner's Material Activity 1: Find The Center
Grade 10 Science Learner's Material Activity 1: Find The CenterGrade 10 Science Learner's Material Activity 1: Find The Center
Grade 10 Science Learner's Material Activity 1: Find The Center
 
Albert einstein jan2012 final
Albert einstein jan2012 finalAlbert einstein jan2012 final
Albert einstein jan2012 final
 
Generator vs motor electromagnetism
Generator vs motor electromagnetismGenerator vs motor electromagnetism
Generator vs motor electromagnetism
 
Parts-of-the-Science-Investigatory-Project-PPT.pptx
Parts-of-the-Science-Investigatory-Project-PPT.pptxParts-of-the-Science-Investigatory-Project-PPT.pptx
Parts-of-the-Science-Investigatory-Project-PPT.pptx
 
Mendeleev
MendeleevMendeleev
Mendeleev
 
Electromagnetic spectrum
Electromagnetic spectrumElectromagnetic spectrum
Electromagnetic spectrum
 
Action research on lenses
Action research on lensesAction research on lenses
Action research on lenses
 
Constellations
ConstellationsConstellations
Constellations
 
CuRRENT-ADVANCEMENTS-AND-INFORMATIONS USHLIE_20230902_142432_0000 (1).pdf
CuRRENT-ADVANCEMENTS-AND-INFORMATIONS USHLIE_20230902_142432_0000 (1).pdfCuRRENT-ADVANCEMENTS-AND-INFORMATIONS USHLIE_20230902_142432_0000 (1).pdf
CuRRENT-ADVANCEMENTS-AND-INFORMATIONS USHLIE_20230902_142432_0000 (1).pdf
 
James chadwick
James chadwickJames chadwick
James chadwick
 

Similar to Western Science History.pdf

nobel prize winners of 2019,cosmology and exoplanet.ppt
nobel prize winners of 2019,cosmology and exoplanet.pptnobel prize winners of 2019,cosmology and exoplanet.ppt
nobel prize winners of 2019,cosmology and exoplanet.pptmaniiron02
 
ICE BREAKING - PGT Chemistry workshop
ICE BREAKING - PGT Chemistry workshopICE BREAKING - PGT Chemistry workshop
ICE BREAKING - PGT Chemistry workshopameetajee
 
Nobel prize history in physics
Nobel prize history in physicsNobel prize history in physics
Nobel prize history in physicsLabRoots
 
Nobel Prize
Nobel PrizeNobel Prize
Nobel Prizerajasv
 
Nobel Prize Winning Works in Chemistry and their Impact on Society
Nobel Prize Winning Works in Chemistry and their Impact on SocietyNobel Prize Winning Works in Chemistry and their Impact on Society
Nobel Prize Winning Works in Chemistry and their Impact on Societyijtsrd
 
Book on Einstein's Life is for Sale on The Centennial of his Nobel Prize Lect...
Book on Einstein's Life is for Sale on The Centennial of his Nobel Prize Lect...Book on Einstein's Life is for Sale on The Centennial of his Nobel Prize Lect...
Book on Einstein's Life is for Sale on The Centennial of his Nobel Prize Lect...associate14
 
Great scientists in the world
Great scientists in the worldGreat scientists in the world
Great scientists in the worldHaseena Shouk
 
Some great physcists and their contribution
Some great physcists and their contributionSome great physcists and their contribution
Some great physcists and their contributionkaushikbalaji2
 
Citizenship (Alfred Nobel)
Citizenship (Alfred Nobel)Citizenship (Alfred Nobel)
Citizenship (Alfred Nobel)eurekatestbed
 
famous scientist.pdf
famous scientist.pdffamous scientist.pdf
famous scientist.pdfSharan49198
 

Similar to Western Science History.pdf (20)

Nobel prize
Nobel prizeNobel prize
Nobel prize
 
nobel prize winners of 2019,cosmology and exoplanet.ppt
nobel prize winners of 2019,cosmology and exoplanet.pptnobel prize winners of 2019,cosmology and exoplanet.ppt
nobel prize winners of 2019,cosmology and exoplanet.ppt
 
ICE BREAKING - PGT Chemistry workshop
ICE BREAKING - PGT Chemistry workshopICE BREAKING - PGT Chemistry workshop
ICE BREAKING - PGT Chemistry workshop
 
K0526068
K0526068K0526068
K0526068
 
Nobel prize history in physics
Nobel prize history in physicsNobel prize history in physics
Nobel prize history in physics
 
Nobel Prize
Nobel PrizeNobel Prize
Nobel Prize
 
Nobel Prize Winning Works in Chemistry and their Impact on Society
Nobel Prize Winning Works in Chemistry and their Impact on SocietyNobel Prize Winning Works in Chemistry and their Impact on Society
Nobel Prize Winning Works in Chemistry and their Impact on Society
 
Book on Einstein's Life is for Sale on The Centennial of his Nobel Prize Lect...
Book on Einstein's Life is for Sale on The Centennial of his Nobel Prize Lect...Book on Einstein's Life is for Sale on The Centennial of his Nobel Prize Lect...
Book on Einstein's Life is for Sale on The Centennial of his Nobel Prize Lect...
 
Great scientists in the world
Great scientists in the worldGreat scientists in the world
Great scientists in the world
 
Some great physcists and their contribution
Some great physcists and their contributionSome great physcists and their contribution
Some great physcists and their contribution
 
Nobel prize
Nobel prizeNobel prize
Nobel prize
 
Scienceproj
ScienceprojScienceproj
Scienceproj
 
Great scientists
Great scientists Great scientists
Great scientists
 
Great scientists
Great scientists Great scientists
Great scientists
 
Albert Einstein, Physics Nobel Prize
Albert Einstein, Physics Nobel PrizeAlbert Einstein, Physics Nobel Prize
Albert Einstein, Physics Nobel Prize
 
Science in 18 th century
Science in 18 th centuryScience in 18 th century
Science in 18 th century
 
Nobel prize
Nobel prizeNobel prize
Nobel prize
 
Citizenship (Alfred Nobel)
Citizenship (Alfred Nobel)Citizenship (Alfred Nobel)
Citizenship (Alfred Nobel)
 
famous scientist.pdf
famous scientist.pdffamous scientist.pdf
famous scientist.pdf
 
Famous scientists
Famous scientistsFamous scientists
Famous scientists
 

Recently uploaded

What is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPWhat is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPCeline George
 
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdfFraming an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdfUjwalaBharambe
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdfssuser54595a
 
Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...Jisc
 
CELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxCELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxJiesonDelaCerna
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxNirmalaLoungPoorunde1
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxOH TEIK BIN
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTiammrhaywood
 
Blooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docxBlooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docxUnboundStockton
 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxAvyJaneVismanos
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfMr Bounab Samir
 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...JhezDiaz1
 
MICROBIOLOGY biochemical test detailed.pptx
MICROBIOLOGY biochemical test detailed.pptxMICROBIOLOGY biochemical test detailed.pptx
MICROBIOLOGY biochemical test detailed.pptxabhijeetpadhi001
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsanshu789521
 

Western Science History.pdf

  • 1. Western Science History for Zoey and Kaya Created by GrandBob Muse Grandma
  • 2. This is an initial draft. More details will be added later.
  • 4. Rise of Science (1600-1700) Galileo Copernicus Newton Kepler Up until 1600, most people believed that the Earth was the center of the Universe, with the sun, moon, and stars revolving around it. Religious groups like the Catholic Church thought that the Bible supported this belief. Copernicus showed that it made more sense if the Earth went around the sun. He only allowed his result to be published at the very end of his life, out of fear of reprisal. Galileo discovered the basic principles of physics, greatly improved the telescope, and believed that the Earth went around the sun. However, he was put on trial and silenced by the Catholic Church. Kepler discovered that all of the planets, including the Earth, went around the sun in elliptical orbits. Isaac Newton invented the Theory of Gravity and created the mathematics needed (Calculus) to show why Kepler’s Laws worked. He was then able to predict the future paths of heavenly bodies. This was a major blow to religions. However, Newton also calculated the size of the Ark because he believed in parts of the Bible.
  • 5. 17th Century Mathematics Descartes Pascal Fermat Johann Bernoulli Jacob Bernoulli De Moivre
  • 7. 18th Century Science Lavoisier Halley Herschel Priestley Cavendish Black Oxygen Chlorine Carbon Dioxide Combustion Comet Uranus Franklin Lightning Rod Watt Steam Engine
  • 8. 18th Century Mathematics Gauss Euler Laplace Fourier Lagrange Cauchy
  • 10. 19th Century Scientists and Inventors Edison Marconi Morse Tesla Darwin Mendel Agassiz Malthus Pasteur Daimler
  • 11. Some 19th Century Physicists and Mathematicians Maxwell Faraday Helmholtz Boltzmann Cantor Klein Riemann Kelvin
  • 13. Physics in the 20th Century In the 19th Century, James Maxwell discovered the equations of electromagnetism. Many physicists thought that there would be no more new physics. However, the 20th Century produced amazing new science that revolutionized the world. In 1905, Einstein discovered special relativity, which showed that nothing could travel faster than light and that E = mc^2. In 1915, he discovered general relativity, a theory of gravity based on curved space-time. In 1900, Max Planck showed that radiation was emitted in discrete chunks (quanta). Rutherford discovered the atomic nucleus. Bohr, Heisenberg (Uncertainty Principle), Dirac, and Schrödinger (equations) used quantum mechanics to explain the hydrogen atom and revolutionize physics. Richard Feynman showed how to use diagrams to calculate accurate predictions for particle measurements in the “Standard Model”. Murray Gell-Mann discovered particles called quarks that are key to the Standard Model. Stephen Hawking showed that black holes radiate. Planck Einstein Bohr Heisenberg Schrödinger Dirac Feynman Gell-Mann Hawking Rutherford Curie Dyson
  • 14. Nobel Prize Winners in Chemistry From https://en.wikipedia.org/wiki/List_of_Nobel_laureates_in_Chemistry List of all winners The Nobel Prize in Chemistry (Swedish: Nobelpriset i kemi) is awarded annually by the Royal Swedish Academy of Sciences to scientists in the various fields of chemistry. It is one of the five Nobel Prizes established by the 1895 will of Alfred Nobel, who died in 1896. These prizes are awarded for outstanding contributions in chemistry, physics, literature, peace, and physiology or medicine.[1] As dictated by Nobel's will, the award is administered by the Nobel Foundation and awarded by the Royal Swedish Academy of Sciences.[2] The first Nobel Prize in Chemistry was awarded in 1901 to Jacobus Henricus van 't Hoff, of the Netherlands. Each recipient receives a medal, a diploma and a monetary award prize that has varied throughout the years.[3] In 1901, van 't Hoff received 150,782 SEK, which is equal to 7,731,004 SEK in December 2007. The award is presented in Stockholm at an annual ceremony on 10 December, the anniversary of Nobel's death.[4] At least 25 laureates have received the Nobel Prize for contributions in the field of organic chemistry, more than any other field of chemistry.[5] Two Nobel Prize laureates in Chemistry, Germans Richard Kuhn (1938) and Adolf Butenandt (1939), were not allowed by their government to accept the prize. They would later receive a medal and diploma, but not the money. Frederick Sanger is one out of two laureates to be awarded the Nobel prize twice in the same subject, in 1958 and 1980. John Bardeen is the other and was awarded the Nobel Prize in physics in 1956 and 1972. Two others have won Nobel Prizes twice, one in chemistry and one in another subject: Maria Skłodowska-Curie (physics in 1903, chemistry in 1911) and Linus Pauling (chemistry in 1954, peace in 1962).[6] As of 2020, the prize has been awarded to 185 individuals, including seven women: Maria Skłodowska-Curie, Irène Joliot-Curie (1935), Dorothy Hodgkin (1964), Ada Yonath (2009), Frances Arnold (2018), Emmanuelle Charpentier (2020), and Jennifer Doudna (2020).[7][8] Nobel
  • 15. Nobel Prize Winners in Physics From https://en.wikipedia.org/wiki/List_of_Nobel_laureates_in_Physics List of all winners The Nobel Prize in Physics (Swedish: Nobelpriset i fysik) is awarded annually by the Royal Swedish Academy of Sciences to scientists in the various fields of physics. It is one of the five Nobel Prizes established by the 1895 will of Alfred Nobel (who died in 1896), awarded for outstanding contributions in physics.[1] As dictated by Nobel's will, the award is administered by the Nobel Foundation and awarded by the Royal Swedish Academy of Sciences.[2] The award is presented in Stockholm at an annual ceremony on 10 December, the anniversary of Nobel's death.[3] Each recipient receives a medal, a diploma and a monetary award prize that has varied throughout the years.[4] The first Nobel Prize in Physics was awarded in 1901 to Wilhelm Conrad Röntgen, of Germany, who received 150,782 SEK, which is equal to 7,731,004 SEK in December 2007. John Bardeen is the only laureate to win the prize twice—in 1956 and 1972. Marie Skłodowska-Curie also won two Nobel Prizes, for physics in 1903 and chemistry in 1911. William Lawrence Bragg was, until October 2014, the youngest ever Nobel laureate; he won the prize in 1915 at the age of 25. He remains the youngest recipient of the Physics Prize.[5] Four women have won the prize: Curie, Maria Goeppert-Mayer (1963), Donna Strickland (2018), and Andrea Ghez (2020).[6] As of 2021, the prize has been awarded to 218 individuals.[7]
  • 16. Nobel Prize Winners in Physiology or Medicine From https://en.wikipedia.org/wiki/Nobel_Prize_in_Physiology_or_Medicine List of all winners The Nobel Prize in Physiology or Medicine is awarded yearly by the Nobel Assembly at the Karolinska Institute for outstanding discoveries in physiology or medicine. The Nobel Prize is not a single prize, but five separate prizes that, according to Alfred Nobel's 1895 will, are awarded "to those who, during the preceding year, have conferred the greatest benefit to humankind". Nobel Prizes are awarded in the fields of Physics, Chemistry, Physiology or Medicine, Literature, and Peace. The Nobel Prize is presented annually on the anniversary of Alfred Nobel's death, 10 December. As of 2021, 112 Nobel Prizes in Physiology or Medicine have been awarded to 224 laureates, 212 men and 12 women. The first one was awarded in 1901 to the German physiologist Emil von Behring, for his work on serum therapy and the development of a vaccine against diphtheria. The first woman to receive the Nobel Prize in Physiology or Medicine, Gerty Cori, received it in 1947 for her role in elucidating the metabolism of glucose, important in many aspects of medicine, including treatment of diabetes. The most recent Nobel prize was announced by the Karolinska Institute on 4 October 2021, and has been awarded to American David Julius and Lebanese-American Ardem Patapoutian, for the discovery of receptors for temperature and touch.[2] The prize consists of a medal along with a diploma and a certificate for the monetary award. The front side of the medal displays the same profile of Alfred Nobel depicted on the medals for Physics, Chemistry, and Literature; the reverse side is unique to this medal.
  • 17. Nobel Prize Winners in Economics From https://en.wikipedia.org/wiki/Nobel_Memorial_Prize_in_Economic_Sciences The Nobel Memorial Prize in Economic Sciences, officially the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel[2][3][4] (Swedish: Sveriges riksbanks pris i ekonomisk vetenskap till Alfred Nobels minne), is an economics award administered by the Nobel Foundation. Although not one of the five Nobel Prizes which were established by Alfred Nobel's will in 1895,[5] it is commonly referred to as the Nobel Prize in Economics.[6] The winners of the Nobel Memorial Prize in Economic Sciences are chosen in a similar way, are announced along with the Nobel Prize recipients, and the prize is presented at the Nobel Prize Award Ceremony.[7] The award was established in 1968 by an endowment "in perpetuity" from Sweden's central bank, Sveriges Riksbank, to commemorate the bank's 300th anniversary.[8][9][10][11] It is administered and referred to along with the Nobel Prizes by the Nobel Foundation.[12] Laureates in the Memorial Prize in Economics are selected by the Royal Swedish Academy of Sciences.[13][14] It was first awarded in 1969 to Dutch economist Jan Tinbergen and Norwegian economist Ragnar Frisch "for having developed and applied dynamic models for the analysis of economic processes".[11][15][16] List of all winners Announcement of the Laureate of the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2008, Mats Persson, Bertil Holmlund, Gunnar Öquist, Peter Englund
  • 18. Mathematics in the Early 20th Century Weil Von Neumann Poincaré Gödel Grothendieck Weyl Bourbaki Hilbert Courant Hardy Wiener Brouwer Erdős Eastern Europe
  • 19. In My Life: 20th and 21st Century
  • 20. In My Life: Science, Technology and Math Examples Biology: (DNA - Birth Control - Human Genome - Gene Editing) Medicine: (Polio Vaccine - Heart Transplants - AIDS Drugs - Immunotherapy - Covid) Aerospace: (Jet Planes - Satellites - Man in Space - Moon Voyage - Space Exploration) Physics: (Big Bang - Standard Model - Black Holes) Math: (Four Color Theorem - Fermat’s Last Theorem - Langlands Program) Computers: (Mainframes - Minicomputers - PCs - Smart Phones - Internet of Things - Cloud) Computing Applications: (Numerical Analysis - Simulation - Bioinformatics - Artificial Intelligence) Large Scale Computing: (Big Data - Deep Learning - Supercomputing - Computational Science) Electronics: (Television - Transistors - Microprocessors) Networking: (Internet - Web - Social Media - Internet of Things) Unsolved Problems: (P vs NP - Quantum Gravity) Future: (Robotics, Fusion Power, Alien Life)
  • 21. Biology Genetics DNA Non-sexual Genetic Variation Birth Control Human Genome Gene Editing Gaia Hypothesis
  • 22. Biology: 20th Century Genetics Charles Darwin published the most famous science book, “The Origin of Species”, in 1859, describing the Theory of Evolution through natural selection. He was influenced by a round-the-world voyage he had taken as a young man, his study of nature, the works of Malthus on overpopulation, and the geological research showing that the earth was millions of years old and that species had died out. His work was and still is scandalous to religious people who believe in the biblical story of creation. In the 1850s and 1860s, the monk Gregor Mendel discovered the concept of genes by breeding peas in his garden. Francis Crick and James Watson, using results of Rosalind Franklin, discovered the structure of DNA, which explained how genes work. Leroy Hood built machines for sequencing DNA and finding genes. Craig Venter sequenced the human genome and created an artificial microbe. Watson DNA Structure Craig Venter Leroy Hood Human Genome part History of DNA Norman Borlaug and Crick Franklin
  • 23. Biology:DNA From https://www.genome.gov/genetics-glossary/Deoxyribonucleic-Acid Deoxyribonucleic acid (abbreviated DNA) is the molecule that carries genetic information for the development and functioning of an organism. DNA is made of two linked strands that wind around each other to resemble a twisted ladder — a shape known as a double helix. Each strand has a backbone made of alternating sugar (deoxyribose) and phosphate groups. Attached to each sugar is one of four bases: adenine (A), cytosine (C), guanine (G) or thymine (T). The two strands are connected by chemical bonds between the bases: adenine bonds with thymine, and cytosine bonds with guanine. The sequence of the bases along DNA’s backbone encodes biological information, such as the instructions for making a protein or RNA molecule. 
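The base-pairing rule described above (A with T, C with G) is easy to illustrate with a few lines of code. This is a small sketch added for illustration, not part of the original slides; the example sequence is made up.

```python
# Illustrative sketch of DNA base pairing: A pairs with T, C pairs with G.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement_strand(strand: str) -> str:
    """Return the complementary strand for a sequence of A, C, G, T bases."""
    return "".join(PAIRS[base] for base in strand)

if __name__ == "__main__":
    strand = "ATGCCGTA"                # made-up example sequence
    print(strand)                      # ATGCCGTA
    print(complement_strand(strand))   # TACGGCAT
```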
  • 25. Biology:Birth Control From https://www.nhsinform.scot/healthy-living/contraception/getting-started/the-different-types-of-contraception 1. Cap 2. Combined pill 3. Condoms 4. Contraceptive implant 5. Contraceptive injection 6. Contraceptive patch 7. Diaphragm 8. Female condoms 9. Female sterilisation 10. IUD (intrauterine device, coil) 11. IUS (intrauterine system) 12. Progestogen-only pill (POP, mini pill) 13. Vaginal ring 14. Vasectomy 15. Natural family planning (fertility awareness) The different types of contraception
  • 26. Biology: Human Genome From https://en.wikipedia.org/wiki/Human_genome The human genome is a complete set of nucleic acid sequences for humans, encoded as DNA within the 23 chromosome pairs in cell nuclei and in a small DNA molecule found within individual mitochondria. These are usually treated separately as the nuclear genome and the mitochondrial genome.[2] Human genomes include both protein-coding DNA genes and noncoding DNA. The term noncoding DNA is somewhat misleading because it includes not only junk DNA but also DNA coding for ribosomal RNA, for transfer RNA and for ribozymes. Haploid human genomes, which are contained in germ cells (the egg and sperm gamete cells created in the meiosis phase of sexual reproduction before fertilization) consist of 3,054,815,472 DNA base pairs (if X chromosome is used),[3] while female diploid genomes (found in somatic cells) have twice the DNA content.
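As a rough illustration of the numbers quoted above (my own back-of-the-envelope estimate, not from the slides): each base is one of four letters, so it can be stored in two bits, and one haploid genome fits in well under a gigabyte of raw data.

```python
# Rough storage estimate for one haploid human genome at 2 bits per base.
BASE_PAIRS = 3_054_815_472          # haploid base-pair count quoted above
bits = BASE_PAIRS * 2               # 4 possible bases -> 2 bits each
megabytes = bits / 8 / 1_000_000
print(f"about {megabytes:.0f} MB")  # roughly 764 MB
```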
  • 27. Biology: CRISPR Gene Editing From https://en.wikipedia.org/wiki/CRISPR_gene_editing CRISPR gene editing (pronounced /ˈkrispər/ "crisper") is a genetic engineering technique in molecular biology by which the genomes of living organisms may be modified. It is based on a simplified version of the bacterial CRISPR-Cas9 antiviral defense system. By delivering the Cas9 nuclease complexed with a synthetic guide RNA (gRNA) into a cell, the cell's genome can be cut at a desired location, allowing existing genes to be removed and/or new ones added in vivo.[1] The technique is considered highly significant in biotechnology and medicine as it enables editing genomes in vivo very precisely, cheaply, and easily. It can be used in the creation of new medicines, agricultural products, and genetically modified organisms, or as a means of controlling pathogens and pests. It also has possibilities in the treatment of inherited genetic diseases as well as diseases arising from somatic mutations such as cancer. However, its use in human germline genetic modification is highly controversial. The development of the technique earned Jennifer Doudna and Emmanuelle Charpentier the Nobel Prize in Chemistry in 2020.[2][3] The third researcher group that shared the Kavli Prize for the same discovery,[4] led by Virginijus Šikšnys, was not awarded the Nobel prize.[5][6][7] Working like genetic scissors, the Cas9 nuclease opens both strands of the targeted sequence of DNA to introduce the modification by one of two methods. Knock-in mutations, facilitated via homology directed repair (HDR), is the traditional pathway of targeted genomic editing approaches.[1] This allows for the introduction of targeted DNA damage and repair. HDR employs the use of similar DNA sequences to drive the repair of the break via the incorporation of exogenous DNA to function as the repair template.[1] This method relies on the periodic and isolated occurrence of DNA damage at the target site in order for the repair to commence. Knock-out mutations caused by CRISPR-Cas9 result in the repair of the double-stranded break by means of non- homologous end joining (NHEJ). NHEJ can often result in random deletions or insertions at the repair site, which may disrupt or alter gene functionality. Therefore, genomic engineering by CRISPR-Cas9 gives researchers the ability to generate targeted random gene disruption. Because of this, the precision of genome editing is a great concern. Genomic editing leads to irreversible changes to the genome. Doudna Charpentier
  • 28. From https://en.wikipedia.org/wiki/Gaia_hypothesis Biology: Gaia Hypothesis The Gaia hypothesis (/ˈɡaɪ.ə/), also known as the Gaia theory, Gaia paradigm, or the Gaia principle, proposes that living organisms interact with their inorganic surroundings on Earth to form a synergistic and self-regulating, complex system that helps to maintain and perpetuate the conditions for life on the planet. The hypothesis was formulated by the chemist James Lovelock[1] and co-developed by the microbiologist Lynn Margulis in the 1970s.[2] Lovelock named the idea after Gaia, the primordial goddess who personified the Earth in Greek mythology. In 2006, the Geological Society of London awarded Lovelock the Wollaston Medal in part for his work on the Gaia hypothesis.[3] Topics related to the hypothesis include how the biosphere and the evolution of organisms affect the stability of global temperature, salinity of seawater, atmospheric oxygen levels, the maintenance of a hydrosphere of liquid water and other environmental variables that affect the habitability of Earth. The Gaia hypothesis was initially criticized for being teleological and against the principles of natural selection, but later refinements aligned the Gaia hypothesis with ideas from fields such as Earth system science, biogeochemistry and systems ecology.[4][5][6] Even so, the Gaia hypothesis continues to attract criticism, and today many scientists consider it to be only weakly supported by, or at odds with, the available evidence.[7][8][9] Lovelock Margulis
  • 29. Medicine Polio Vaccine Heart Transplants AIDS drugs Immunotherapy Controlling Covid
  • 30. Medicine: Polio Vaccine From https://en.wikipedia.org/wiki/Polio_vaccine Polio vaccines are vaccines used to prevent poliomyelitis (polio).[2] Two types are used: an inactivated poliovirus given by injection (IPV) and a weakened poliovirus given by mouth (OPV).[2] The World Health Organization (WHO) recommends all children be fully vaccinated against polio.[2] The two vaccines have eliminated polio from most of the world,[3][4] and reduced the number of cases reported each year from an estimated 350,000 in 1988 to 33 in 2018.[5][6] The inactivated polio vaccines are very safe.[2] Mild redness or pain may occur at the site of injection.[2] Oral polio vaccines cause about three cases of vaccine-associated paralytic poliomyelitis per million doses given.[2] This compares with 5,000 cases per million who are paralysed following a polio infection.[7] Both types of vaccine are generally safe to give during pregnancy and in those who have HIV/ AIDS but are otherwise well.[2] However, the emergence of circulating vaccine-derived poliovirus (cVDPV), a form of the vaccine virus that has reverted to causing poliomyelitis, has led to the development of novel oral polio vaccine type 2 (nOPV2) which aims to make the vaccine safer and thus stop further outbreaks of cVDPV2.[8] The first successful demonstration of a polio vaccine was by Hilary Koprowski in 1950, with a live attenuated virus which people drank.[9] The vaccine was not approved for use in the United States, but was used successfully elsewhere.[9] The success of an inactivated (killed) polio vaccine, developed by Jonas Salk, was announced in 1955.[2][10] Another attenuated live oral polio vaccine was developed by Albert Sabin and came into commercial use in 1961.[2][11] Koprowski Salk Sabin
  • 31. Medicine: Heart Transplants From https://www.mayoclinic.org/tests-procedures/heart-transplant/about/pac-20384750
  • 32. Medicine: AIDS drugs From https://hivinfo.nih.gov/understanding-hiv/fact-sheets/fda-approved-hiv-medicines Treatment with HIV medicines is called antiretroviral therapy (ART). ART is recommended for everyone with HIV, and people with HIV should start ART as soon as possible. People on ART take a combination of HIV medicines (called an HIV treatment regimen) every day. A person's initial HIV treatment regimen generally includes three HIV medicines from at least two different HIV drug classes.
 
 The following table lists HIV medicines recommended for the treatment of HIV infection in the United States, based on the U.S. Department of Health and Human Services (HHS) HIV/AIDS medical practice guidelines. All of these drugs are approved by the U.S. Food and Drug Administration (FDA). The HIV medicines are listed according to drug class and identified by generic and brand names. Click on a drug name to view information on the drug from the Clinical Info Drug Database, or download the Clinical Info mobile application to view the information on your Apple or Android devices. To see a timeline of all FDA approval dates for HIV medicines, view the HIVinfo FDA Approval of HIV Medicines infographic. Go to Website for large table
  • 33. Medicine: Immunotherapy From https://www.cancer.gov/about-cancer/treatment/types/immunotherapy • How does immunotherapy work against cancer?
 • What are the types of immunotherapy?
 • Which cancers are treated with immunotherapy?
 • What are the side effects of immunotherapy?
 • How is immunotherapy given?
 • Where do you go for immunotherapy?
 • How often do you receive immunotherapy?
 • How can you tell if immunotherapy is working?
 • What is the current research in immunotherapy?
 • How do you find clinical trials that are testing immunotherapy?
 Immunotherapy is a type of cancer treatment that helps your immune system fight cancer. The immune system helps your body fight infections and other diseases. It is made up of white blood cells and organs and tissues of the lymph system. Immunotherapy is a type of biological therapy. Biological therapy is a type of treatment that uses substances made from living organisms to treat cancer.
  • 34. Medicine: Controlling Covid From https://www.cdc.gov/coronavirus/2019-ncov/your-health/about-covid-19.html • Basics • Spread • Prevention • If You Have COVID-19 • If You Come into Close Contact with Someone with COVID-19 • Children • Symptoms and Emergency Warning Signs • Testing • Contact Tracing • Pets and Animals
  • 35. Medicine: Stopping the Covid-19 Coronavirus (2020)? The Covid-19 virus started in Wuhan, China, and spread through the whole world because humans had no immunity. Covid-19 Virus World-Wide Spread The World is wearing masks Covid-19 Doctors and Nurses
  • 38. Aerospace: Jet Planes From https://en.wikipedia.org/wiki/Jet_aircraft A jet aircraft (or simply jet) is an aircraft (nearly always a fixed-wing aircraft) propelled by jet engines. Whereas the engines in propeller-powered aircraft generally achieve their maximum efficiency at much lower speeds and altitudes, jet engines achieve maximum efficiency at speeds close to or even well above the speed of sound. Jet aircraft generally cruise most efficiently at about Mach 0.8 (981 km/h (610 mph)) and at altitudes around 10,000–15,000 m (33,000–49,000 ft) or more. The idea of the jet engine was not new, but the technical problems involved could not begin to be solved until the 1930s. Frank Whittle, an English inventor and RAF officer, began development of a viable jet engine in 1928,[1] and Hans von Ohain in Germany began work independently in the early 1930s. In August 1939 the turbojet powered Heinkel He 178, the world's first jet aircraft, made its first flight. A wide range of different types of jet aircraft exist, both for civilian and military purposes.
  • 39. Aerospace: Satellites From https://en.wikipedia.org/wiki/Satellite Two CubeSats orbiting around Earth after being deployed from the International Space Station A satellite or artificial satellite is an object intentionally placed into orbit in outer space. Except for passive satellites, most satellites have an electricity generation system for equipment on board, such as solar panels or radioisotope thermoelectric generators (RTGs). Most satellites also have a method of communication to ground stations, called transponders. Many satellites use a standardized bus to save cost and work, the most popular of which is small CubeSats. Similar satellites can work together as a group, forming constellations. Because of the high launch cost to space, satellites are designed to be as lightweight and robust as possible. Satellites are placed from the surface to orbit by launch vehicles, high enough to avoid orbital decay by the atmosphere. Satellites can then change or maintain the orbit by propulsion, usually by chemical or ion thrusters. In 2018, about 90% of satellites orbiting Earth are in low Earth orbit or geostationary orbit; geostationary means the satellite appears to stay at a fixed point in the sky. Some imaging satellites choose a Sun-synchronous orbit because they can scan the entire globe with similar lighting. As the number of satellites and space debris around Earth increases, the collision threat is becoming more severe. A small number of satellites orbit other bodies (such as the Moon, Mars, and the Sun) or many bodies at once (two for a halo orbit, three for a Lissajous orbit). Earth observation satellites gather information for reconnaissance, mapping, monitoring the weather, ocean, forest, etc. Space telescopes take advantage of outer space's near perfect vacuum to observe objects with the entire electromagnetic spectrum. Because satellites can see a large portion of the Earth at once, communications satellites can relay information to remote places. The signal delay from satellites and their orbit's predictability are used in satellite navigation systems, such as GPS. Space probes are satellites designed for robotic space exploration outside of Earth, and space stations are in essence crewed satellites.
  • 40. Aerospace: Space Exploration From https://en.wikipedia.org/wiki/Space_exploration Space exploration is the use of astronomy and space technology to explore outer space.[1] While the exploration of space is carried out mainly by astronomers with telescopes, its physical exploration though is conducted both by uncrewed robotic space probes and human spaceflight. Space exploration, like its classical form astronomy, is one of the main sources for space science. While the observation of objects in space, known as astronomy, predates reliable recorded history, it was the development of large and relatively efficient rockets during the mid-twentieth century that allowed physical space exploration to become a reality. The world's first large-scale experimental rocket program was Opel-RAK under the leadership of Fritz von Opel and Max Valier during the late 1920s leading to the first crewed rocket cars and rocket planes,[2] [3] which paved the way for the Nazi era V2 program and US and Soviet activities from 1950 onwards. The Opel-RAK program and the spectacular public demonstrations of ground and air vehicles drew large crowds, as well as caused global public excitement as so-called "Rocket Rumble"[4] and had a large long-lasting impact on later spaceflight pioneers like Wernher von Braun. Common rationales for exploring space include advancing scientific research, national prestige, uniting different nations, ensuring the future survival of humanity, and developing military and strategic advantages against other countries.[5]
  • 41. Aerospace: Human in Space (1961- 1969) 1961 First Man in Space Russian Yuri Gagarin 1963 First Woman in Space Russian Valentina Tereshkova First Man on the moon American Neil Armstrong Neil Armstrong on the moon 1969 Moon rocket takeoff 1969 Moon capsule landing 1969 Moon rocket path 1969
  • 42. Aerospace: Human in Space From https://en.wikipedia.org/wiki/Human_spaceflight Human spaceflight (also referred to as manned spaceflight or crewed spaceflight) is spaceflight with a crew or passengers aboard a spacecraft, often with the spacecraft being operated directly by the onboard human crew. Spacecraft can also be remotely operated from ground stations on Earth, or autonomously, without any direct human involvement. People trained for spaceflight are called astronauts (American or other), cosmonauts (Russian), or taikonauts (Chinese); and non-professionals are referred to as spaceflight participants or spacefarers.[1] The first human in space was Soviet cosmonaut Yuri Gagarin, who launched on 12 April 1961 as part of the Soviet Union's Vostok program. This was towards the beginning of the Space Race. On 5 May 1961, Alan Shepard became the first American in space, as part of Project Mercury. Humans traveled to the Moon nine times between 1968 and 1972 as part of the United States' Apollo program, and have had a continuous presence in space for 21 years and 262 days on the International Space Station (ISS).[2] As of 2021, humans have not traveled beyond low Earth orbit since the Apollo 17 lunar mission in December 1972. Space Shuttle Space Walk
  • 44. Physics: Nuclear Weapons Einstein’s equation E = mc^2 showed that a small amount of mass m could yield a large amount of energy, since the speed of light c is very large (300,000 km/sec). No one knew how to unlock this energy until 1938. German scientists were able to use neutrons to split Uranium-235 nuclei into smaller nuclei, releasing energy and more neutrons. Scientists quickly realized that there could be a chain reaction with many nuclei splitting rapidly and releasing energy (a bomb). When World War 2 began, scientists on both sides started research into making a nuclear weapon. In the US, Robert Oppenheimer led a large group of scientists in the secret “Manhattan Project” in New Mexico. In July 1945, they tested the first “atom bomb”. Germany had surrendered, so the bomb was used against Japan, ending the war. Russia exploded an atom bomb in 1949. Scientists in the US led by Edward Teller realized that a much bigger bomb could be created by nuclear fusion of Hydrogen into Helium (fusion powers the sun). In 1952, the US tested an H-Bomb in the Pacific, followed shortly afterwards by a Russian H-Bomb test. Many other countries now have atom bombs which could kill millions of people if ever used in war. Nuclear Fission Nuclear Fusion Oppenheimer Now I am become Death, the destroyer of worlds Teller H-Bomb
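To get a feel for the numbers in E = mc^2, here is a small worked example of my own (not from the slide): converting a single gram of mass releases about as much energy as a twenty-kiloton bomb.

```python
# Energy released by converting 1 gram of mass, using E = m * c^2.
c = 3.0e8               # speed of light, m/s (about 300,000 km/s)
m = 0.001               # mass, kg (1 gram)
E = m * c**2            # energy, joules
kilotons = E / 4.184e12 # 1 kiloton of TNT is about 4.184e12 joules
print(f"E = {E:.1e} J, about {kilotons:.0f} kilotons of TNT")
```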
  • 45. Physics: Big Bang From https://en.wikipedia.org/wiki/Big_Bang The Big Bang theory describes how the universe expanded from an initial state of high density and temperature.[1] It is the prevailing cosmological model explaining the evolution of the observable universe from the earliest known periods through its subsequent large-scale form.[2][3][4] The model offers a comprehensive explanation for a broad range of observed phenomena, including the abundance of light elements, the cosmic microwave background (CMB) radiation, and large-scale structure. Crucially, the theory is compatible with Hubble–Lemaître law—the observation that the farther away a galaxy is, the faster it is moving away from Earth. Extrapolating this cosmic expansion backwards in time using the known laws of physics, the theory describes an increasingly concentrated cosmos preceded by a singularity in which space and time lose meaning (typically named "the Big Bang singularity").[5] Detailed measurements of the expansion rate of the universe place the Big Bang singularity at around 13.8 billion years ago, which is thus considered the age of the universe.[6] After its initial expansion, an event that is by itself often called "the Big Bang", the universe cooled sufficiently to allow the formation of subatomic particles, and later atoms. Giant clouds of these primordial elements—mostly hydrogen, with some helium and lithium—later coalesced through gravity, forming early stars and galaxies, the descendants of which are visible today. Besides these primordial building materials, astronomers observe the gravitational effects of an unknown dark matter surrounding galaxies. Most of the gravitational potential in the universe seems to be in this form, and the Big Bang theory and various observations indicate that this excess gravitational potential is not created by baryonic matter, such as normal atoms. Measurements of the redshifts of supernovae indicate that the expansion of the universe is accelerating, an observation attributed to dark energy's existence.[7
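A quick sketch of the Hubble–Lemaître law mentioned above (my own illustration, using a commonly quoted value of about 70 km/s per megaparsec for the Hubble constant, not a figure from the slide): recession speed grows in proportion to distance, and 1/H0 gives a rough age scale close to the 13.8 billion years quoted above.

```python
# Hubble-Lemaitre law: v = H0 * d, with an approximate H0 of 70 km/s per megaparsec.
H0 = 70.0                       # km/s per Mpc (commonly quoted approximate value)
for d_mpc in (10, 100, 1000):   # distances in megaparsecs
    print(f"galaxy at {d_mpc} Mpc recedes at about {H0 * d_mpc:.0f} km/s")

KM_PER_MPC = 3.086e19           # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16      # seconds in one billion years
age_gyr = (KM_PER_MPC / H0) / SECONDS_PER_GYR
print(f"1/H0 is roughly {age_gyr:.0f} billion years")
```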
  • 46. Physics: Standard Model of Particle Physics (1970’s) In the 1970’s, physicists constructed a “Standard Model” that explained all of the particle experiments to the highest accuracy ever achieved anywhere in science. It is a strange theory because initially the mathematics gives infinities for several measurements like the mass of the electron. However, when the infinities are replaced by the measured values and substituted into the equations, everything works out (renormalization). Protons and neutrons are made up of 3 smaller quark particles that are held together by gluons and can’t escape to be on their own. The column on the left shows the most common particles, but there are two other generations of heavier versions of each particle, like the electron, muon, and tau, for some unknown reason. The photon is the particle of light, and the Higgs particle supplies mass to other particles. It was predicted in 1964 and only discovered in 2012. 10^-n = 1/(10^n). Examples: 10^-2 = 1/100, 10^-9 = 1/1,000,000,000. m = meter = 39.4 inches
  • 47. Physics: Black Holes From https://en.wikipedia.org/wiki/Black_hole A black hole is a region of spacetime where gravity is so strong that nothing – no particles or even electromagnetic radiation such as light – can escape from it.[2] The theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole.[3] [4] The boundary of no escape is called the event horizon. Although it has a great effect on the fate and circumstances of an object crossing it, it has no locally detectable features according to general relativity.[5] In many ways, a black hole acts like an ideal black body, as it reflects no light.[6] [7] Moreover, quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace.[8] In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterize a black hole. David Finkelstein, in 1958, first published the interpretation of "black hole" as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The discovery of neutron stars by Jocelyn Bell Burnell in 1967 sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971.[9][10] Black holes of stellar mass form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses (M☉) may form by absorbing other stars and merging with other black holes. There is consensus that supermassive black holes exist in the centres of most galaxies.
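The Schwarzschild solution mentioned above fixes the size of the event horizon at r = 2GM/c^2. A small sketch of my own (not from the slide) shows how compact that is: the Sun would have to be squeezed into a ball about 3 km in radius, the Earth into about 9 mm.

```python
# Schwarzschild radius r = 2 * G * M / c^2 (event horizon of a non-rotating black hole).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # mass of the Sun, kg
M_EARTH = 5.972e24 # mass of the Earth, kg

def schwarzschild_radius(mass_kg: float) -> float:
    return 2 * G * mass_kg / c**2

print(f"Sun:   {schwarzschild_radius(M_SUN) / 1000:.1f} km")   # about 3.0 km
print(f"Earth: {schwarzschild_radius(M_EARTH) * 1000:.1f} mm") # about 8.9 mm
```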
  • 48. Math Four Color Theorem Fermat’s Last Theorem Kepler Packing Poincaré Conjecture Monstrous Moonshine Langlands Program Riemann Hypothesis
  • 49. Math: Four Color Theorem From https://en.wikipedia.org/wiki/Four_color_theorem Haken Appel
  • 50. Math: Fermat’s Last Theorem From https://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem In number theory, Fermat's Last Theorem (sometimes called Fermat's conjecture, especially in older texts) states that no three positive integers a, b, and c satisfy the equation a^n + b^n = c^n for any integer value of n greater than 2. The cases n = 1 and n = 2 have been known since antiquity to have infinitely many solutions.[1] The proposition was first stated as a theorem by Pierre de Fermat around 1637 in the margin of a copy of Arithmetica. Fermat added that he had a proof that was too large to fit in the margin. Although other statements claimed by Fermat without proof were subsequently proven by others and credited as theorems of Fermat (for example, Fermat's theorem on sums of two squares), Fermat's Last Theorem resisted proof, leading to doubt that Fermat ever had a correct proof. Consequently the proposition became known as a conjecture rather than a theorem. After 358 years of effort by mathematicians, the first successful proof was released in 1994 by Andrew Wiles and formally published in 1995. It was described as a "stunning advance" in the citation for Wiles's Abel Prize award in 2016.[2] It also proved much of the Taniyama-Shimura conjecture, subsequently known as the modularity theorem, and opened up entire new approaches to numerous other problems and mathematically powerful modularity lifting techniques. The unsolved problem stimulated the development of algebraic number theory in the 19th and 20th centuries. It is among the most notable theorems in the history of mathematics and prior to its proof was in the Guinness Book of World Records as the "most difficult mathematical problem", in part because the theorem has the largest number of unsuccessful proofs.[3] Andrew Wiles
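A tiny experiment (my own sketch, not part of the slide) makes the statement a^n + b^n = c^n concrete: a brute-force search over small integers turns up plenty of solutions for n = 2 (the Pythagorean triples) and, as the theorem guarantees, none at all for n = 3.

```python
# Brute-force search for a^n + b^n = c^n with small positive integers.
def solutions(n: int, limit: int = 50):
    found = []
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            c = round((a**n + b**n) ** (1 / n))  # candidate integer root
            if c**n == a**n + b**n:              # exact integer check
                found.append((a, b, c))
    return found

print("n=2:", solutions(2)[:5])  # (3, 4, 5), (5, 12, 13), ...
print("n=3:", solutions(3))      # empty, as Fermat's Last Theorem guarantees
```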
  • 51. Math:Kepler Packing Conjecture From https://en.wikipedia.org/wiki/Kepler_conjecture The Kepler conjecture, named after the 17th-century mathematician and astronomer Johannes Kepler, is a mathematical theorem about sphere packing in three-dimensional Euclidean space. It states that no arrangement of equally sized spheres filling space has a greater average density than that of the cubic close packing (face-centered cubic) and hexagonal close packing arrangements. The density of these arrangements is around 74.05%. In 1998 Thomas Hales, following an approach suggested by Fejes Tóth (1953), announced that he had a proof of the Kepler conjecture. Hales' proof is a proof by exhaustion involving the checking of many individual cases using complex computer calculations. Referees said that they were "99% certain" of the correctness of Hales' proof, and the Kepler conjecture was accepted as a theorem. In 2014, the Flyspeck project team, headed by Hales, announced the completion of a formal proof of the Kepler conjecture using a combination of the Isabelle and HOL Light proof assistants. In 2017, the formal proof was accepted by the journal Forum of Mathematics, Pi.[1] Hales Toth
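The 74.05% density quoted above is the exact value pi / (3 * sqrt(2)) for the face-centered cubic and hexagonal close packings; a one-line check of my own confirms the figure.

```python
# Density of face-centered cubic / hexagonal close packing: pi / (3 * sqrt(2)).
import math
print(f"{math.pi / (3 * math.sqrt(2)):.4%}")  # 74.0480...%, i.e. about 74.05%
```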
  • 52. Math:Fields Medals From https://en.wikipedia.org/wiki/Fields_Medal The Fields Medal is a prize awarded to two, three, or four mathematicians under 40 years of age at the International Congress of the International Mathematical Union (IMU), a meeting that takes place every four years. The name of the award honours the Canadian mathematician John Charles Fields.[1] The Fields Medal is regarded as one of the highest honors a mathematician can receive, and has been described as the Nobel Prize of Mathematics,[2][3][4] although there are several major differences, including frequency of award, number of awards, age limits, monetary value, and award criteria.[5] According to the annual Academic Excellence Survey by ARWU, the Fields Medal is consistently regarded as the top award in the field of mathematics worldwide,[6] and in another reputation survey conducted by IREG in 2013–14, the Fields Medal came closely after the Abel Prize as the second most prestigious international award in mathematics.[7][8] The prize includes a monetary award which, since 2006, has been CA$15,000.[9][10] Fields was instrumental in establishing the award, designing the medal himself, and funding the monetary component, though he died before it was established and his plan was overseen by John Lighton Synge.[1] The medal was first awarded in 1936 to Finnish mathematician Lars Ahlfors and American mathematician Jesse Douglas, and it has been awarded every four years since 1950. Its purpose is to give recognition and support to younger mathematical researchers who have made major contributions. In 2014, the Iranian mathematician Maryam Mirzakhani became the first female Fields Medallist.[11][12][13] In all, 64 people have been awarded the Fields Medal. List of Fields Medal Winners
  • 55. Math:Monstrous Moonshine From https://en.wikipedia.org/wiki/Monstrous_moonshine In mathematics, monstrous moonshine, or moonshine theory, is the unexpected connection between the monster group M and modular functions, in particular, the j function. The term was coined by John Conway and Simon P. Norton in 1979. The monstrous moonshine is now known to be underlain by a vertex operator algebra called the moonshine module (or monster vertex algebra) constructed by Igor Frenkel, James Lepowsky, and Arne Meurman in 1988, which has the monster group as its group of symmetries. This vertex operator algebra is commonly interpreted as a structure underlying a two-dimensional conformal field theory, allowing physics to form a bridge between two mathematical areas. The conjectures made by Conway and Norton were proven by Richard Borcherds for the moonshine module in 1992 using the no-ghost theorem from string theory and the theory of vertex operator algebras and generalized Kac–Moody algebras. Conway Norton Borcherds
  • 56. Math: Langlands Program From https://en.wikipedia.org/wiki/Langlands_program In representation theory and algebraic number theory, the Langlands program is a web of far-reaching and influential conjectures about connections between number theory and geometry. Proposed by Robert Langlands (1967, 1970), it seeks to relate Galois groups in algebraic number theory to automorphic forms and representation theory of algebraic groups over local fields and adeles. Widely seen as the single biggest project in modern mathematical research, the Langlands program has been described by Edward Frenkel as "a kind of grand unified theory of mathematics."[1] The Langlands program consists of some very complicated theoretical abstractions, which can be difficult even for specialist mathematicians to grasp. To oversimplify, the fundamental lemma of the project posits a direct connection between the generalized fundamental representation of a finite field with its group extension to the automorphic forms under which it is invariant. This is accomplished through abstraction to higher dimensional integration, by an equivalence to a certain analytical group as an absolute extension of its algebra. Consequently, this allows an analytical functional construction of powerful invariance transformations for a number field to its own algebraic structure. The meaning of such a construction is nuanced, but its specific solutions and generalizations are very powerful. The consequence for proof of existence to such theoretical objects implies an analytical method in constructing the categoric mapping of fundamental structures for virtually any number field. As an analogue to the possible exact distribution of primes, the Langlands program allows a potential general tool for the resolution of invariance at the level of generalized algebraic structures. This in turn permits a somewhat unified analysis of arithmetic objects through their automorphic functions. Simply put, the Langlands philosophy allows a general analysis of structuring the abstractions of numbers. Naturally, this description is at once a reduction and over-generalization of the program's proper theorems, but these mathematical analogues provide the basis of its conceptualization. Langlands
  • 57. Math:Clay Millenium Problems From https://www.claymath.org/millennium-problems Yang–Mills and Mass Gap Experiment and computer simulations suggest the existence of a "mass gap" in the solution to the quantum versions of the Yang-Mills equations. But no proof of this property is known. Riemann Hypothesis The prime number theorem determines the average distribution of the primes. The Riemann hypothesis tells us about the deviation from the average. Formulated in Riemann's 1859 paper, it asserts that all the 'non-obvious' zeros of the zeta function are complex numbers with real part 1/2. P vs NP Problem If it is easy to check that a solution to a problem is correct, is it also easy to solve the problem? This is the essence of the P vs NP question. Typical of the NP problems is that of the Hamiltonian Path Problem: given N cities to visit, how can one do this without visiting a city twice? If you give me a solution, I can easily check that it is correct. But I cannot so easily find a solution. Navier–Stokes Equation This is the equation which governs the flow of fluids such as water and air. However, there is no proof for the most basic questions one can ask: do solutions exist, and are they unique? Why ask for a proof? Because a proof gives not only certitude, but also understanding. Hodge Conjecture The answer to this conjecture determines how much of the topology of the solution set of a system of algebraic equations can be defined in terms of further algebraic equations. The Hodge conjecture is known in certain special cases, e.g., when the solution set has dimension less than four. But in dimension four it is unknown. Poincaré Conjecture In 1904 the French mathematician Henri Poincaré asked if the three dimensional sphere is characterized as the unique simply connected three manifold. This question, the Poincaré conjecture, was a special case of Thurston's geometrization conjecture. Perelman's proof tells us that every three manifold is built from a set of standard pieces, each with one of eight well-understood geometries. Birch and Swinnerton-Dyer Conjecture Supported by much experimental evidence, this conjecture relates the number of points on an elliptic curve mod p to the rank of the group of rational points. Elliptic curves, defined by cubic equations in two variables, are fundamental mathematical objects that arise in many areas: Wiles' proof of the Fermat Conjecture, factorization of numbers into primes, and cryptography, to name three.
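The P vs NP example above, the Hamiltonian Path Problem, can be made concrete with a small sketch of my own (the four-city graph below is made up): checking a proposed path takes one quick pass, while the obvious way to find a path tries up to N! orderings.

```python
from itertools import permutations

# A made-up graph of four cities: city -> set of neighbouring cities.
GRAPH = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B"},
    "D": {"B"},
}

def is_hamiltonian_path(path):
    """Verification is easy: a single pass over the path (polynomial time)."""
    return (len(path) == len(GRAPH) and len(set(path)) == len(GRAPH)
            and all(b in GRAPH[a] for a, b in zip(path, path[1:])))

def find_hamiltonian_path():
    """Finding a path this way can try up to N! orderings (exponential time)."""
    for candidate in permutations(GRAPH):
        if is_hamiltonian_path(candidate):
            return candidate
    return None

print(is_hamiltonian_path(("C", "A", "B", "D")))  # True: easy to check
print(find_hamiltonian_path())                    # found by brute-force search
```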
  • 58. Poincaré Conjecture From https://en.wikipedia.org/wiki/Poincar%C3%A9_conjecture In the mathematical field of geometric topology, the Poincaré conjecture (UK: /ˈpwæ ̃kæreɪ/,[2] US: /ˌpwæ ̃kɑːˈreɪ/,[3][4] French: [pwɛ̃kaʁe]), or Perelman's theorem, is a theorem about the characterization of the 3-sphere, which is the hypersphere that bounds the unit ball in four-dimensional space. Originally conjectured by Henri Poincaré in 1904, the theorem concerns spaces that locally look like ordinary three-dimensional space but which are finite in extent. Poincaré hypothesized that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere. Attempts to resolve the conjecture drove much progress in the field of geometric topology during the 20th century. The eventual proof built upon Richard S. Hamilton's program of using the Ricci flow to attempt to solve the problem. By developing a number of new techniques and results in the theory of Ricci flow, Grigori Perelman was able to modify and complete Hamilton's program. In unpublished arXiv preprints released in 2002 and 2003, Perelman presented his work proving the Poincaré conjecture, along with the more powerful geometrization conjecture of William Thurston. Over the next several years, several mathematicians studied his papers and produced detailed formulations of his work. Hamilton and Perelman's work on the conjecture is widely recognized as a milestone of mathematical research. Hamilton was recognized with the Shaw Prize and the Leroy P. Steele Prize for Seminal Contribution to Research. The journal Science marked Perelman's proof of the Poincaré conjecture as the scientific Breakthrough of the Year in 2006.[5] The Clay Mathematics Institute, having included the Poincaré conjecture in their well-known Millennium Prize Problem list, offered Perelman their prize of US$1 million for the conjecture's resolution.[6] He declined the award, saying that Hamilton's contribution had been equal to his own.[7][8] Hamilton Perelman Simplified Explanation of the Monster group
  • 59. Math: Simplified Convergences Diagram There are many surprising connections in mathematics and even more surprising connections with physics. The Standard Model for particles uses Lie Groups and Linear Operators. Einstein’s Theory of Gravity is based on Riemannian Geometry. Algebra Number Theory Calculus Complex Numbers Periodic Functions Analytic Geometry Geometry Analysis Langlands Program Fermat Last Theorem Riemann Hypothesis Group Theory Monstrous Moonshine Lie Groups Descartes Euler Riemann Wiles Newton Borcherds Linear Algebra Linear Operators Hilbert Galois Gauss Riemannian Geometry and Manifolds Riemann Fourier
  • 61. Computers: Famous Computer Scientists The father of Computer Science was Alan Turing. He developed the simple Turing Machine and showed how it could compute anything computable, but also that some problems can never be solved by any computer. He also proposed the Turing test to see if computers could simulate human responses. Turing is also known for breaking German codes in World War 2. After World War 2, large mainframe computers were developed by IBM, and smaller minicomputers by HP and Digital in the 1960s and 1970s. In the late 1970s, Steve Jobs and Steve Wozniak developed the Apple Personal Computer. Steve Jobs went on to manage the development of the iPhone. Bill Gates and Paul Allen started Microsoft to write software for personal computers, including the IBM PC. The most important theoretical advance was the separation of problems into those solvable in polynomial time (P) and those verifiable in polynomial time (NP), by Stephen Cook and Richard Karp. Many problems are in NP without known P solutions. No one knows if P = NP. Tim Berners-Lee is the inventor of the World Wide Web. Turing Wozniak Jobs: young and old Allen Gates Karp Cook Tim Berners-Lee
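Turing's idea, a machine that reads and writes symbols on a tape according to a small table of rules, can be sketched in a few lines. This toy example of my own (not from the slide) adds 1 to a binary number written on the tape.

```python
# A toy Turing machine that adds 1 to a binary number on its tape.
RULES = {
    # (state, symbol read) -> (next state, symbol to write, head move)
    ("right", "0"): ("right", "0", +1),  # scan right to the end of the number
    ("right", "1"): ("right", "1", +1),
    ("right", " "): ("carry", " ", -1),  # step back onto the last digit
    ("carry", "1"): ("carry", "0", -1),  # 1 + 1 = 0, carry to the left
    ("carry", "0"): ("done", "1", 0),    # absorb the carry and halt
    ("carry", " "): ("done", "1", 0),    # carry ran off the left edge
}

def run(tape_string: str) -> str:
    tape = dict(enumerate(tape_string))  # sparse tape: position -> symbol
    state, head = "right", 0
    while state != "done":
        state, write, move = RULES[(state, tape.get(head, " "))]
        tape[head] = write
        head += move
    return "".join(tape.get(i, " ") for i in range(min(tape), max(tape) + 1)).strip()

print(run("1011"))  # prints 1100, i.e. 11 + 1 = 12 in binary
```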
  • 63. Computers: Software From https://en.wikipedia.org/wiki/Software Application software uses the computer system to perform special functions beyond the basic operation of the computer itself. There are many different types of application software because the range of tasks that can be performed with a modern computer is so large—see list of software. System software manages hardware behaviour, so as to provide basic functionalities that are required by users, or for other software to run properly, if at all. System software is also designed for providing a platform for running application software,[12] and it includes the following: • Operating systems are essential collections of software that manage resources and provide common services for other software that runs "on top" of them. Supervisory programs, boot loaders, shells and window systems are core parts of operating systems. In practice, an operating system comes bundled with additional software (including application software) so that a user can potentially do some work with a computer that only has one operating system. • Device drivers operate or control a particular type of device that is attached to a computer. Each device needs at least one corresponding device driver; because a computer typically has at minimum at least one input device and at least one output device, a computer typically needs more than one device driver. • Utilities are computer programs designed to assist users in the maintenance and care of their computers. Software Programming Languages
  • 64. Computer Hardware: Smart Phones From https://en.wikipedia.org/wiki/Smartphone Two smartphones: Samsung Galaxy S22 Ultra (top) and iPhone 13 Pro (bottom) A smartphone is a portable computer device that combines mobile telephone and computing functions into one unit. They are distinguished from feature phones by their stronger hardware capabilities and extensive mobile operating systems, which facilitate wider software, internet (including web browsing over mobile broadband), and multimedia functionality (including music, video, cameras, and gaming), alongside core phone functions such as voice calls and text messaging. Smartphones typically contain a number of metal–oxide–semiconductor (MOS) integrated circuit (IC) chips, include various sensors that can be leveraged by pre-included and third-party software (such as a magnetometer, proximity sensors, barometer, gyroscope, accelerometer and more), and support wireless communications protocols (such as Bluetooth, Wi-Fi, or satellite navigation). Early smartphones were marketed primarily towards the enterprise market, attempting to bridge the functionality of standalone personal digital assistant (PDA) devices with support for cellular telephony, but were limited by their bulky form, short battery life, slow analog cellular networks, and the immaturity of wireless data services. These issues were eventually resolved with the exponential scaling and miniaturization of MOS transistors down to sub- micron levels (Moore's law), the improved lithium-ion battery, faster digital mobile data networks (Edholm's law), and more mature software platforms that allowed mobile device ecosystems to develop independently of data providers. In the 2000s, NTT DoCoMo's i-mode platform, BlackBerry, Nokia's Symbian platform, and Windows Mobile began to gain market traction, with models often featuring QWERTY keyboards or resistive touchscreen input, and emphasizing access to push email and wireless internet. Following the rising popularity of the iPhone in the late 2000s, the majority of smartphones have featured thin, slate-like form factors, with large, capacitive screens with support for multi-touch gestures rather than physical keyboards, and offer the ability for users to download or purchase additional applications from a centralized store, and use cloud storage and synchronization, virtual assistants, as well as mobile payment services. Smartphones have largely replaced PDAs, handheld/palm-sized PCs, portable media players (PMP)[1] and to a lesser extent, handheld video game consoles.
  • 65. Computer Hardware: Internet of Things From https://en.wikipedia.org/wiki/Internet_of_things The Internet of things (IoT) describes physical objects (or groups of such objects) with sensors, processing ability, software, and other technologies that connect and exchange data with other devices and systems over the Internet or other communications networks.[1][2][3][4] Internet of things has been considered a misnomer because devices do not need to be connected to the public internet, they only need to be connected to a network and be individually addressable.[5][6] The field has evolved due to the convergence of multiple technologies, including ubiquitous computing, commodity sensors, increasingly powerful embedded systems, and machine learning.[7] Traditional fields of embedded systems, wireless sensor networks, control systems, automation (including home and building automation), independently and collectively enable the Internet of things.[8] In the consumer market, IoT technology is most synonymous with products pertaining to the concept of the "smart home", including devices and appliances (such as lighting fixtures, thermostats, home security systems, cameras, and other home appliances) that support one or more common ecosystems, and can be controlled via devices associated with that ecosystem, such as smartphones and smart speakers. IoT is also used in healthcare systems.[9] There are a number of concerns about the risks in the growth of IoT technologies and products, especially in the areas of privacy and security, and consequently, industry and governmental moves to address these concerns have begun, including the development of international and local standards, guidelines, and regulatory frameworks.[10] Internet of Things: What It Is, How It Works, Examples and More
  • 66. Computer Applications Numerical Analysis Artificial Intelligence Simulation Bioinformatics Computational Science Industrial Internet of Things Robotics
  • 67. Computer Applications: Numerical Analysis From https://en.wikipedia.org/wiki/Numerical_analysis Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). It is the study of numerical methods that attempt to find approximate solutions to problems rather than exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also the life and social sciences, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics (predicting the motions of planets, stars and galaxies), numerical linear algebra in data analysis,[2][3][4] and stochastic differential equations and Markov chains for simulating living cells in medicine and biology. Before modern computers, numerical methods often relied on hand interpolation formulas, using data from large printed tables. Since the mid 20th century, computers calculate the required functions instead, but many of the same formulas continue to be used in software algorithms.[5] WolframAlpha.com provides many numerical methods. For example, solving cos x = x. Numerical analysis requires numerical methods + error analysis
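As a minimal illustration of the closing point above (numerical methods plus error analysis), the Python sketch below solves cos x = x by fixed-point iteration; the function name, starting guess, and tolerance are illustrative choices, not taken from the slide.

    # Solve cos(x) = x by fixed-point iteration: x_{n+1} = cos(x_n).
    # The stopping test on successive iterates is the (very simple) error analysis.
    import math

    def solve_cos_equals_x(x0=1.0, tol=1e-10, max_iter=1000):
        x = x0
        for _ in range(max_iter):
            x_next = math.cos(x)
            if abs(x_next - x) < tol:
                return x_next
            x = x_next
        raise RuntimeError("fixed-point iteration did not converge")

    root = solve_cos_equals_x()
    print(root)                        # approximately 0.7390851332 (the Dottie number)
    print(abs(math.cos(root) - root))  # residual error, close to zero

Typing the same equation into WolframAlpha returns the same root, which is a handy cross-check on the hand-rolled method.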
  • 68. Computer Applications: Artificial Intelligence From https://en.wikipedia.org/wiki/Artificial_intelligence Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by animals including humans. AI research has been defined as the field of study of intelligent agents, which refers to any system that perceives its environment and takes actions that maximize its chance of achieving its goals.[a] The term "artificial intelligence" had previously been used to describe machines that mimic and display "human" cognitive skills that are associated with the human mind, such as "learning" and "problem-solving". This definition has since been rejected by major AI researchers who now describe AI in terms of rationality and acting rationally, which does not limit how intelligence can be articulated.[b] AI applications include advanced web search engines (e.g., Google), recommendation systems (used by YouTube, Amazon and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Tesla), automated decision-making and competing at the highest level in strategic game systems (such as chess and Go).[2] As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.[3] For instance, optical character recognition is frequently excluded from things considered to be AI,[4] having become a routine technology.[5] Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[6][7] followed by disappointment and the loss of funding (known as an "AI winter"),[8][9] followed by new approaches, success and renewed funding.[7][10] AI research has tried and discarded many different approaches since its founding, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge and imitating animal behavior. In the first decades of the 21st century, highly mathematical-statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.[10][11] The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects.[c] General intelligence (the ability to solve an arbitrary problem) is among the field's long-term goals.[12] To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques—including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability and economics. AI also draws upon computer science, psychology, linguistics, philosophy, and many other fields. 
The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it".[d] This raised philosophical arguments about the mind and the ethical consequences of creating artificial beings endowed with human-like intelligence; these issues have previously been explored by myth, fiction and philosophy since antiquity.[14] Computer scientists and philosophers have since suggested that AI may become an existential risk to humanity if its rational capacities are not steered towards beneficial goals.[e]
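To make the "intelligent agent" definition in the slide above concrete, here is a deliberately tiny, hypothetical Python sketch: an agent that perceives its position in a one-dimensional world and greedily picks the action that brings it closest to a goal. The world, goal, and action set are invented for illustration and are not part of any standard AI library.

    # A toy perceive-decide-act loop: the agent chooses whichever action
    # maximizes its progress toward the goal (a greedy policy).
    GOAL = 7
    ACTIONS = {"left": -1, "right": +1, "stay": 0}

    def choose_action(position):
        # Prefer the action whose resulting position is closest to the goal.
        return max(ACTIONS, key=lambda a: -abs((position + ACTIONS[a]) - GOAL))

    position = 0
    for step in range(20):
        if position == GOAL:
            print(f"goal reached in {step} steps")
            break
        position += ACTIONS[choose_action(position)]

Real AI systems replace the hand-written rule with learned policies, but the perceive-decide-act structure is the same.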
  • 69. Computer Applications: Simulation From https://en.wikipedia.org/wiki/Simulation A simulation is the imitation of the operation of a real-world process or system over time.[1] Simulations require the use of models; the model represents the key characteristics or behaviors of the selected system or process, whereas the simulation represents the evolution of the model over time. Often, computers are used to execute the simulation. Simulation is used in many contexts, such as simulation of technology for performance tuning or optimizing, safety engineering, testing, training, education,[2] and video games. Simulation is also used with scientific modelling of natural systems[2] or human systems to gain insight into their functioning,[3] as in economics. Simulation can be used to show the eventual real effects of alternative conditions and courses of action. Simulation is also used when the real system cannot be engaged, because it may not be accessible, or it may be dangerous or unacceptable to engage, or it is being designed but not yet built, or it may simply not exist.[4] Key issues in modeling and simulation include the acquisition of valid sources of information about the relevant selection of key characteristics and behaviors used to build the model, the use of simplifying approximations and assumptions within the model, and fidelity and validity of the simulation outcomes. Procedures and protocols for model verification and validation are an ongoing field of academic study, refinement, research and development in simulations technology or practice, particularly in the work of computer simulation. Flight Simulator. Simulation is based on system behavior models. Emulation is based on system construction models. AI simulates natural intelligence. AGI emulates natural intelligence.
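The model-versus-simulation distinction above can be shown in a few lines of Python: the model is a single cooling rule (Newton's law of cooling), and the simulation is the repeated time-stepping of that rule. The coefficients below are assumed, round-number values chosen only for illustration.

    # Model: dT/dt = -k * (T - T_ambient)   (Newton's law of cooling)
    # Simulation: advance the model in fixed time steps and observe its evolution.
    k = 0.1           # cooling coefficient per minute (assumed)
    T_ambient = 20.0  # room temperature in degrees C (assumed)
    T = 90.0          # initial coffee temperature in degrees C (assumed)
    dt = 1.0          # time step in minutes

    for minute in range(31):
        if minute % 10 == 0:
            print(f"t = {minute:2d} min   T = {T:5.1f} C")
        T += -k * (T - T_ambient) * dt   # one simulation step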
  • 70. Computer Applications: Computational Science From https://en.wikipedia.org/wiki/Computational_science Computational science, also known as scientific computing or scientific computation (SC), is a field in mathematics that uses advanced computing capabilities to understand and solve complex problems. It is an area of science that spans many disciplines, but at its core, it involves the development of models and simulations to understand natural systems. • Algorithms (numerical and non-numerical): mathematical models, computational models, and computer simulations developed to solve science (e.g., biological, physical, and social), engineering, and humanities problems • Computer hardware that develops and optimizes the advanced system hardware, firmware, networking, and data management components needed to solve computationally demanding problems • The computing infrastructure that supports both the science and engineering problem solving and the developmental computer and information science In practical use, it is typically the application of computer simulation and other forms of computation from numerical analysis and theoretical computer science to solve problems in various scientific disciplines. The field is different from theory and laboratory experiments, which are the traditional forms of science and engineering. The scientific computing approach is to gain understanding through the analysis of mathematical models implemented on computers. Scientists and engineers develop computer programs and application software that model systems being studied and run these programs with various sets of input parameters. The essence of computational science is the application of numerical algorithms[1] and computational mathematics. In some cases, these models require massive amounts of calculations (usually floating-point) and are often executed on supercomputers or distributed computing platforms. Big Data Analytics Big Data Collection
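As a sketch of the "numerical algorithms applied to a mathematical model" idea above, the Python code below integrates the decay equation dy/dt = -λy with the forward Euler method and compares the result against the exact solution; the parameter values are arbitrary assumptions.

    # Forward Euler integration of dy/dt = -lam * y, compared with the exact
    # solution y(t) = y0 * exp(-lam * t) to expose the numerical error.
    import math

    lam, y0, dt, t_end = 0.5, 1.0, 0.01, 5.0
    y, t = y0, 0.0
    while t < t_end:
        y += -lam * y * dt   # one Euler step
        t += dt

    exact = y0 * math.exp(-lam * t)
    print(f"Euler: {y:.6f}   exact: {exact:.6f}   error: {abs(y - exact):.2e}")

Halving dt roughly halves the error, which is the kind of convergence check computational scientists run before trusting a model on a supercomputer.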
  • 71. Computer Applications: Bioinformatics From https://en.wikipedia.org/wiki/Bioinformatics Bioinformatics (/ˌbaɪ.oʊˌɪnfərˈmætɪks/) is an interdisciplinary field that develops methods and software tools for understanding biological data, in particular when the data sets are large and complex. As an interdisciplinary field of science, bioinformatics combines biology, chemistry, physics, computer science, information engineering, mathematics and statistics to analyze and interpret the biological data. Bioinformatics has been used for in silico analyses of biological queries using computational and statistical techniques. Bioinformatics includes biological studies that use computer programming as part of their methodology, as well as specific analysis "pipelines" that are repeatedly used, particularly in the field of genomics. Common uses of bioinformatics include the identification of candidate genes and single nucleotide polymorphisms (SNPs). Often, such identification is made with the aim to better understand the genetic basis of disease, unique adaptations, desirable properties (esp. in agricultural species), or differences between populations. In a less formal way, bioinformatics also tries to understand the organizational principles within nucleic acid and protein sequences, called proteomics.[1] Image and signal processing allow extraction of useful results from large amounts of raw data. In the field of genetics, it aids in sequencing and annotating genomes and their observed mutations. It plays a role in the text mining of biological literature and the development of biological and gene ontologies to organize and query biological data. It also plays a role in the analysis of gene and protein expression and regulation. Bioinformatics tools aid in comparing, analyzing and interpreting genetic and genomic data and more generally in the understanding of evolutionary aspects of molecular biology. At a more integrative level, it helps analyze and catalogue the biological pathways and networks that are an important part of systems biology. In structural biology, it aids in the simulation and modeling of DNA,[2] RNA,[2][3] proteins[4] as well as biomolecular interactions.[5][6][7][8] Example: Protein Folding from Deep Learning
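As a hedged, minimal illustration of the sequence-analysis tasks mentioned above, the Python sketch below computes GC content and lists single-base differences between two aligned sequences, a toy version of SNP identification. The sequences are invented, not real genomic data.

    # Two short, made-up DNA sequences of equal length (treated as already aligned).
    ref = "ATGCGTACGTTAGC"
    alt = "ATGCGTACGGTAGC"

    def gc_content(seq):
        # Fraction of bases that are G or C.
        return (seq.count("G") + seq.count("C")) / len(seq)

    def point_differences(a, b):
        # Positions where the aligned sequences differ (SNP-like sites).
        return [(i, x, y) for i, (x, y) in enumerate(zip(a, b)) if x != y]

    print(f"GC content of ref: {gc_content(ref):.2f}")
    print(f"Single-base differences: {point_differences(ref, alt)}")

Production pipelines do the same kind of counting and comparison, only at genome scale and with statistically careful alignment and quality filtering.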
  • 72. Computer Applications: Industrial Internet of Things From https://www.tibco.com/reference-center/what-is-iiot Industrial IoT, or the Industrial Internet of Things (IIoT), is a vital element of Industry 4.0. IIoT harnesses the power of smart machines and real-time analysis to make better use of the data that industrial machines have been churning out for years. The principal driver of IIoT is smart machines, for two reasons. The first is that smart machines capture and analyze data in real-time, which humans cannot. The second is that smart machines communicate their findings in a manner that is simple and fast, enabling faster and more accurate business decisions.
  • 73. Computer Applications: Robotics Overview Fundamentals of Robotics History of Robotics Drones Asimov 1950 Capek 1920 Devol 1954 3 Laws of Robotics R.U.R First Robot Cyborgs First Cyborg
  • 74. Computer Applications: Advanced Robotics Boston Dynamics Dancing Robots Carbon-based Life Forms Bionic Xenobots from Frogs Non-Bionic Frog Robot Giant Gundam Robot AI Mayflower x Humanoid Robots Monkey Cyborg Reproducing Xenobots
  • 75. Large Scale Computing Cloud Big Data Deep Learning Supercomputing
  • 76. From https://en.wikipedia.org/wiki/Cloud_computing Cloud computing[1] is the on-demand availability of computer system resources, especially data storage (cloud storage) and computing power, without direct active management by the user.[2] Large clouds often have functions distributed over multiple locations, each location being a data center. Cloud computing relies on sharing of resources to achieve coherence and typically uses a "pay-as-you-go" model, which can help in reducing capital expenses but may also lead to unexpected operating expenses for unaware users.[3] According to IDC, global spending on cloud computing services has reached $706 billion and is expected to reach $1.3 trillion by 2025.[4] Gartner estimated that global public cloud services end-user spending would reach $600 billion by 2023.[5] According to a McKinsey & Company report, cloud cost-optimization levers and value-oriented business use cases put more than $1 trillion in run-rate EBITDA across Fortune 500 companies up for grabs in 2030.[6] In 2022, more than $1.3 trillion in enterprise IT spending is at stake from the shift to cloud, growing to almost $1.8 trillion in 2025, according to Gartner.[7] Cloud computing metaphor: the group of networked elements providing services need not be individually addressed Large Scale Computing: Cloud
  • 77. Large Scale Computing: Big Data From https://en.wikipedia.org/wiki/Big_data Big data refers to data sets that are too large or complex to be dealt with by traditional data-processing application software. Data with many entries (rows) offer greater statistical power, while data with higher complexity (more attributes or columns) may lead to a higher false discovery rate.[2] Big data analysis challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating, information privacy, and data source. Big data was originally associated with three key concepts: volume, variety, and velocity.[3] The analysis of big data presents challenges in sampling, which previously allowed for only observations and sampling. Thus a fourth concept, veracity, refers to the quality or insightfulness of the data. Without sufficient investment in expertise for big data veracity, the volume and variety of data can produce costs and risks that exceed an organization's capacity to create and capture value from big data.[4] Current usage of the term big data tends to refer to the use of predictive analytics, user behavior analytics, or certain other advanced data analytics methods that extract value from big data, and seldom to a particular size of data set. "There is little doubt that the quantities of data now available are indeed large, but that's not the most relevant characteristic of this new data ecosystem."[5] Analysis of data sets can find new correlations to "spot business trends, prevent diseases, combat crime and so on".[6] Scientists, business executives, medical practitioners, advertising and governments alike regularly meet difficulties with large data-sets in areas including Internet searches, fintech, healthcare analytics, geographic information systems, urban informatics, and business informatics. Scientists encounter limitations in e-Science work, including meteorology, genomics,[7] connectomics, complex physics simulations, biology, and environmental research.[8] The size and number of available data sets have grown rapidly as data is collected by devices such as mobile devices, cheap and numerous information-sensing Internet of things devices, aerial (remote sensing), software logs, cameras, microphones, radio-frequency identification (RFID) readers and wireless sensor networks.[9][10] The world's technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s;[11] as of 2012, every day 2.5 exabytes (2.5×2^60 bytes) of data are generated.[12] An IDC report predicted that the global data volume would grow exponentially from 4.4 zettabytes to 44 zettabytes between 2013 and 2020.
By 2025, IDC predicts there will be 163 zettabytes of data.[13] According to IDC, global spending on big data and business analytics (BDA) solutions is estimated to reach $215.7 billion in 2021.[14][15] A Statista report forecasts that the global big data market will grow to $103 billion by 2027.[16] In 2011, McKinsey & Company reported that if US healthcare were to use big data creatively and effectively to drive efficiency and quality, the sector could create more than $300 billion in value every year.[17] In the developed economies of Europe, government administrators could save more than €100 billion ($149 billion) in operational efficiency improvements alone by using big data.[17] And users of services enabled by personal-location data could capture $600 billion in consumer surplus.[17] One question for large enterprises is determining who should own big-data initiatives that affect the entire organization.[18]
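The growth figures quoted in the slide above can be sanity-checked with a few lines of arithmetic; this is only back-of-the-envelope Python on the numbers already in the text (taking 1 exabyte as 10^18 bytes for simplicity), not additional data.

    # Doubling every 40 months implies this annual growth factor.
    annual_factor = 2 ** (12 / 40)
    print(f"storage capacity growth per year: x{annual_factor:.2f}")

    # Growing from 4.4 ZB (2013) to 44 ZB (2020) implies this compound annual rate.
    cagr = (44 / 4.4) ** (1 / 7) - 1
    print(f"implied data-volume growth 2013-2020: {cagr:.1%} per year")

    # 2.5 exabytes per day (the 2012 figure) expressed in terabytes per second.
    tb_per_second = 2.5e18 / 86400 / 1e12
    print(f"about {tb_per_second:.0f} TB of data generated per second in 2012")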
  • 78. Large Scale Computing: Deep Learning From https://en.wikipedia.org/wiki/Deep_learning Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised.[2] Deep-learning architectures such as deep neural networks, deep belief networks, deep reinforcement learning, recurrent neural networks, convolutional neural networks and Transformers have been applied to fields including computer vision, speech recognition, natural language processing, machine translation, bioinformatics, drug design, medical image analysis, climate science, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance.[3][4][5] Artificial neural networks (ANNs) were inspired by information processing and distributed communication nodes in biological systems. ANNs have various differences from biological brains. Specifically, artificial neural networks tend to be static and symbolic, while the biological brain of most living organisms is dynamic (plastic) and analogue.[6][7] The adjective "deep" in deep learning refers to the use of multiple layers in the network. Early work showed that a linear perceptron cannot be a universal classifier, but that a network with a nonpolynomial activation function with one hidden layer of unbounded width can. Deep learning is a modern variation which is concerned with an unbounded number of layers of bounded size, which permits practical application and optimized implementation, while retaining theoretical universality under mild conditions. In deep learning the layers are also permitted to be heterogeneous and to deviate widely from biologically informed connectionist models, for the sake of efficiency, trainability and understandability, whence the "structured" part. 10 Deep Learning Algorithms. Deep Learning Networks are 1000 times larger and require many inputs for training. 1. Convolutional Neural Networks (CNNs) 2. Long Short Term Memory Networks (LSTMs) 3. Recurrent Neural Networks (RNNs) 4. Generative Adversarial Networks (GANs) 5. Radial Basis Function Networks (RBFNs) 6. Multilayer Perceptrons (MLPs) 7. Self Organizing Maps (SOMs) 8. Deep Belief Networks (DBNs) 9. Restricted Boltzmann Machines (RBMs) 10. Autoencoders Transfer Learning Reinforcement Learning Transformer Learning
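To make the "multiple layers" idea above concrete, here is a minimal NumPy sketch of a forward pass through a small multilayer perceptron; the layer sizes and random weights are illustrative assumptions and the network is untrained.

    # Forward pass of a tiny MLP: input -> two hidden layers -> output.
    # Each layer is a linear map followed by a nonlinear activation (ReLU).
    import numpy as np

    rng = np.random.default_rng(0)
    sizes = [4, 8, 8, 2]   # layer widths (illustrative)
    weights = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
    biases = [np.zeros(n) for n in sizes[1:]]

    def relu(x):
        return np.maximum(x, 0.0)

    def forward(x):
        for W, b in zip(weights[:-1], biases[:-1]):
            x = relu(x @ W + b)                 # hidden layers
        return x @ weights[-1] + biases[-1]     # linear output layer

    print(forward(rng.normal(size=4)))          # two output values

Training would adjust the weights by gradient descent on a loss function; "deep" networks simply stack many more such layers.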
  • 79. Large Scale Computing: Supercomputing From https://en.wikipedia.org/wiki/Supercomputer A supercomputer is a computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, there have existed supercomputers which can perform over 10^17 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS).[3] For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS to tens of teraFLOPS.[4][5] Since November 2017, all of the world's fastest 500 supercomputers run Linux-based operating systems.[6] Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.[7] Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). They have been essential in the field of cryptanalysis.[8] The IBM Blue Gene/P supercomputer "Intrepid" at Argonne National Laboratory runs 164,000 processor cores using normal data center air conditioning, grouped in 40 racks/cabinets connected by a high-speed 3D torus network Exascale Computing
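The FLOPS metric described above can be estimated on an ordinary computer with a short NumPy benchmark; the matrix size is an arbitrary choice and the result is a rough single-operation measurement, not a formal LINPACK run.

    # Estimate achieved floating-point performance from one dense matrix multiply.
    # An n x n matrix product costs roughly 2 * n**3 floating-point operations.
    import time
    import numpy as np

    n = 2000
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    c = a @ b
    elapsed = time.perf_counter() - start

    gflops = 2 * n**3 / elapsed / 1e9
    print(f"{gflops:.1f} GFLOPS on this machine; 100 PFLOPS is 10^8 GFLOPS")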
  • 81. Electronics: Television From https://en.wikipedia.org/wiki/Television Farnsworth Television, sometimes shortened to TV, is a telecommunication medium for transmitting moving images and sound. The term can refer to a television set, or the medium of television transmission. Television is a mass medium for advertising, entertainment, news, and sports. Television became available in crude experimental forms in the late 1920s, but only after several years of further development was the new technology marketed to consumers. After World War II, an improved form of black-and-white television broadcasting became popular in the United Kingdom and the United States, and television sets became commonplace in homes, businesses, and institutions. During the 1950s, television was the primary medium for influencing public opinion.[1] In the mid-1960s, color broadcasting was introduced in the U.S. and most other developed countries. The availability of various types of archival storage media such as Betamax and VHS tapes, high-capacity hard disk drives, DVDs, flash drives, high-definition Blu-ray Discs, and cloud digital video recorders has enabled viewers to watch pre-recorded material—such as movies—at home on their own time schedule. For many reasons, especially the convenience of remote retrieval, the storage of television and video programming now also occurs on the cloud (such as the video-on-demand service by Netflix). At the end of the first decade of the 2000s, digital television transmissions greatly increased in popularity. Another development was the move from standard-definition television (SDTV) (576i, with 576 interlaced lines of resolution and 480i) to high-definition television (HDTV), which provides a resolution that is substantially higher. HDTV may be transmitted in different formats: 1080p, 1080i and 720p. Since 2010, with the invention of smart television, Internet television has increased the availability of television programs and movies via the Internet through streaming video services such as Netflix, Amazon Prime Video, iPlayer and Hulu. In 2013, 79% of the world's households owned a television set.[2] The replacement of earlier cathode-ray tube (CRT) screen displays with compact, energy-efficient, flat-panel alternative technologies such as LCDs (both fluorescent-backlit and LED), OLED displays, and plasma displays was a hardware revolution that began with computer monitors in the late 1990s. Most television sets sold in the 2000s were flat-panel, mainly LEDs. Major manufacturers announced the discontinuation of CRT, Digital Light Processing (DLP), plasma, and even fluorescent-backlit LCDs by the mid-2010s.[3][4] In the near future, LEDs are expected to be gradually replaced by OLEDs.[5] Also, in the mid-2010s major manufacturers announced that they would increasingly produce smart TVs.[6][7][8] Smart TVs with integrated Internet and Web 2.0 functions became the dominant form of television by the late 2010s.[9] Television signals were initially distributed only as terrestrial television using high-powered radio-frequency television transmitters to broadcast the signal to individual television receivers. Alternatively, television signals are distributed by coaxial cable or optical fiber, satellite systems and, since the 2000s, via the Internet. Until the early 2000s, these were transmitted as analog signals, but a transition to digital television was expected to be completed worldwide by the late 2010s.
A standard television set consists of multiple internal electronic circuits, including a tuner for receiving and decoding broadcast signals. A visual display device which lacks a tuner is correctly called a video monitor rather than a television. Baird Cathode Ray Tube
  • 82. Electronics: Transistors From https://en.wikipedia.org/wiki/Transistor A transistor is a semiconductor device used to amplify or switch electrical signals and power. The transistor is one of the basic building blocks of modern electronics.[1] It is composed of semiconductor material, usually with at least three terminals for connection to an electronic circuit. A voltage or current applied to one pair of the transistor's terminals controls the current through another pair of terminals. Because the controlled (output) power can be higher than the controlling (input) power, a transistor can amplify a signal. Some transistors are packaged individually, but many more are found embedded in integrated circuits. Austro-Hungarian physicist Julius Edgar Lilienfeld proposed the concept of a field-effect transistor in 1926, but it was not possible to actually construct a working device at that time.[2] The first working device to be built was a point-contact transistor invented in 1947 by American physicists John Bardeen and Walter Brattain while working under William Shockley at Bell Labs. The three shared the 1956 Nobel Prize in Physics for their achievement.[3] The most widely used type of transistor is the metal–oxide–semiconductor field-effect transistor (MOSFET), which was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959.[4][5][6] Transistors revolutionized the field of electronics, and paved the way for smaller and cheaper radios, calculators, and computers, among other things. Most transistors are made from very pure silicon, and some from germanium, but certain other semiconductor materials are sometimes used. A transistor may have only one kind of charge carrier, in a field-effect transistor, or may have two kinds of charge carriers in bipolar junction transistor devices. Compared with the vacuum tube, transistors are generally smaller and require less power to operate. Certain vacuum tubes have advantages over transistors at very high operating frequencies or high operating voltages. Many types of transistors are made to standardized specifications by multiple manufacturers. Metal-oxide-semiconductor field-effect transistor (MOSFET), showing gate (G), body (B), source (S) and drain (D) terminals. The gate is separated from the body by an insulating layer (pink). John Bardeen, William Shockley and Walter Brattain at Bell Labs in 1948. Bardeen and Brattain invented the point-contact transistor in 1947 and Shockley the bipolar junction transistor in 1948.
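The statement above, that a voltage on one pair of terminals controls the current through another pair, can be illustrated with the textbook square-law model of a MOSFET in saturation; the device parameters below are assumed round numbers, not data for any real part.

    # Square-law model in saturation: I_D = 0.5 * k * (W/L) * (V_GS - V_th)^2.
    # A small change in gate voltage V_GS produces a much larger relative change
    # in drain current I_D, which is the basis of amplification.
    k = 200e-6      # process transconductance parameter in A/V^2 (assumed)
    w_over_l = 10   # width-to-length ratio (assumed)
    v_th = 0.7      # threshold voltage in volts (assumed)

    def drain_current(v_gs):
        if v_gs <= v_th:
            return 0.0   # below threshold the device is off (switch behavior)
        return 0.5 * k * w_over_l * (v_gs - v_th) ** 2

    for v_gs in (0.5, 1.0, 1.5, 2.0):
        print(f"V_GS = {v_gs:.1f} V  ->  I_D = {drain_current(v_gs) * 1e3:.2f} mA")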
  • 83. Electronics: Microprocessor From https://en.wikipedia.org/wiki/Microprocessor A microprocessor is a computer processor where the data processing logic and control is included on a single integrated circuit, or a small number of integrated circuits. The microprocessor contains the arithmetic, logic, and control circuitry required to perform the functions of a computer's central processing unit. The integrated circuit is capable of interpreting and executing program instructions and performing arithmetic operations.[1] The microprocessor is a multipurpose, clock-driven, register-based, digital integrated circuit that accepts binary data as input, processes it according to instructions stored in its memory, and provides results (also in binary form) as output. Microprocessors contain both combinational logic and sequential digital logic, and operate on numbers and symbols represented in the binary number system. The integration of a whole CPU onto a single or a few integrated circuits using Very-Large-Scale Integration (VLSI) greatly reduced the cost of processing power. Integrated circuit processors are produced in large numbers by highly automated metal-oxide-semiconductor (MOS) fabrication processes, resulting in a relatively low unit price. Single-chip processors increase reliability because there are far fewer electrical connections that could fail. As microprocessor designs improve, the cost of manufacturing a chip (with smaller components built on a semiconductor chip the same size) generally stays the same according to Rock's law. Before microprocessors, small computers had been built using racks of circuit boards with many medium- and small-scale integrated circuits, typically of TTL type. Microprocessors combined this into one or a few large-scale ICs. The first commercially available microprocessor was the Intel 4004 introduced in 1971. Continued increases in microprocessor capacity have since rendered other forms of computers almost completely obsolete (see history of computing hardware), with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers. First microprocessor by Intel, the 4004 Noyce Kilby Moore
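The fetch, decode, and execute behavior described above can be sketched as a toy interpreter in Python; the two-register, four-instruction machine below is entirely hypothetical and vastly simpler than any real microprocessor.

    # A toy register machine: each instruction is a tuple (opcode, operands...).
    # The loop fetches the next instruction, decodes its opcode, executes it,
    # and advances the program counter, mirroring a (much simplified) CPU cycle.
    program = [
        ("LOAD", "r0", 5),     # r0 <- 5
        ("LOAD", "r1", 7),     # r1 <- 7
        ("ADD", "r0", "r1"),   # r0 <- r0 + r1
        ("PRINT", "r0"),       # output r0
        ("HALT",),
    ]

    registers = {"r0": 0, "r1": 0}
    pc = 0                          # program counter
    while True:
        instruction = program[pc]   # fetch
        opcode = instruction[0]     # decode
        if opcode == "LOAD":
            registers[instruction[1]] = instruction[2]
        elif opcode == "ADD":
            registers[instruction[1]] += registers[instruction[2]]
        elif opcode == "PRINT":
            print(registers[instruction[1]])   # prints 12
        elif opcode == "HALT":
            break
        pc += 1                     # move to the next instruction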
  • 85. Networking: Internet From https://en.wikipedia.org/wiki/Internet The Internet (or internet) is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that consists of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries a vast range of information resources and services, such as the inter-linked hypertext documents and applications of the World Wide Web (WWW), electronic mail, telephony, and file sharing. The origins of the Internet date back to the development of packet switching and research commissioned by the United States Department of Defense in the 1960s to enable time-sharing of computers.[2] The primary precursor network, the ARPANET, initially served as a backbone for interconnection of regional academic and military networks in the 1970s. The funding of the National Science Foundation Network as a new backbone in the 1980s, as well as private funding for other commercial extensions, led to worldwide participation in the development of new networking technologies, and the merger of many networks.[3] The linking of commercial networks and enterprises by the early 1990s marked the beginning of the transition to the modern Internet,[4] and generated a sustained exponential growth as generations of institutional, personal, and mobile computers were connected to the network. Although the Internet was widely used by academia in the 1980s, commercialization incorporated its services and technologies into virtually every aspect of modern life. Most traditional communication media, including telephone, radio, television, paper mail and newspapers are reshaped, redefined, or even bypassed by the Internet, giving birth to new services such as email, Internet telephone, Internet television, online music, digital newspapers, and video streaming websites. Newspaper, book, and other print publishing are adapting to website technology, or are reshaped into blogging, web feeds and online news aggregators. The Internet has enabled and accelerated new forms of personal interactions through instant messaging, Internet forums, and social networking services. Online shopping has grown exponentially for major retailers, small businesses, and entrepreneurs, as it enables firms to extend their "brick and mortar" presence to serve a larger market or even sell goods and services entirely online. Business-to-business and financial services on the Internet affect supply chains across entire industries. The Internet has no single centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own policies.[5] The overarching definitions of the two principal name spaces in the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.[6] In November 2006, the Internet was included on USA Today's list of New Seven Wonders.[7] Licklider Kahn Cerf
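As a minimal, hedged sketch of the TCP/IP communication described above, the Python code below runs a tiny echo server and client over the local loopback interface using only the standard socket module; the port number is an arbitrary choice and may need changing if it is already in use.

    # A loopback TCP echo demo: the server accepts one connection and returns
    # whatever bytes it receives; the client sends a message over TCP/IP.
    import socket
    import threading

    HOST, PORT = "127.0.0.1", 50007   # loopback address, arbitrary port

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((HOST, PORT))
    server.listen(1)

    def echo_once():
        conn, _ = server.accept()          # wait for one client connection
        with conn:
            conn.sendall(conn.recv(1024))  # echo the received bytes back

    threading.Thread(target=echo_once, daemon=True).start()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
        client.connect((HOST, PORT))
        client.sendall(b"hello over TCP/IP")
        print(client.recv(1024).decode())  # prints: hello over TCP/IP

    server.close()

On the real Internet the same socket calls run between machines on different networks, with routers forwarding the packets in between.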
  • 86. Networking: Web From https://en.wikipedia.org/wiki/World_Wide_Web The World Wide Web (WWW), commonly known as the Web, is an information system enabling documents and other web resources to be accessed over the Internet.[1] Documents and downloadable media are made available to the network through web servers and can be accessed by programs such as web browsers. Servers and resources on the World Wide Web are identified and located through character strings called uniform resource locators (URLs). The original and still very common document type is a web page formatted in Hypertext Markup Language (HTML). This markup language supports plain text, images, embedded video and audio contents, and scripts (short programs) that implement complex user interaction. The HTML language also supports hyperlinks (embedded URLs) which provide immediate access to other web resources. Web navigation, or web surfing, is the common practice of following such hyperlinks across multiple websites. Web applications are web pages that function as application software. The information in the Web is transferred across the Internet using the Hypertext Transfer Protocol (HTTP). Multiple web resources with a common theme and usually a common domain name make up a website. A single web server may provide multiple websites, while some websites, especially the most popular ones, may be provided by multiple servers. Website content is provided by a myriad of companies, organizations, government agencies, and individual users; and comprises an enormous mass of educational, entertainment, commercial, and government information. The World Wide Web has become the world's dominant software platform.[2][3][4][5] It is the primary tool billions of people worldwide use to interact with the Internet.[6] The Web was originally conceived as a document management system.[7] It was invented by Tim Berners-Lee at CERN in 1989 and opened to the public in 1991. Graphic representation of a minute fraction of the WWW, demonstrating hyperlinks Tim Berners-Lee
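The HTTP, HTML, and hyperlink mechanics described above can be demonstrated with the Python standard library alone; this sketch fetches one page and lists the hyperlinks (href attributes) it finds. It needs network access, and https://example.com/ is used purely as a placeholder URL.

    # Fetch a web page over HTTP(S) and extract the hyperlinks from its HTML.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":   # anchor tags carry the hyperlinks
                self.links.extend(value for name, value in attrs if name == "href")

    with urlopen("https://example.com/") as response:   # placeholder URL
        html = response.read().decode("utf-8", errors="replace")

    collector = LinkCollector()
    collector.feed(html)
    print(collector.links)

A web browser does the same fetch-and-parse work, then renders the HTML and follows whichever hyperlink the user clicks.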
  • 87. Networking: Social Media From https://en.wikipedia.org/wiki/Social_media Social media are interactive technologies that facilitate the creation and sharing of information, ideas, interests, and other forms of expression through virtual communities and networks.[1][2] While challenges to the definition of social media arise[3][4] due to the variety of stand-alone and built-in social media services currently available, there are some common features:[2] 1. Social media are interactive Web 2.0 Internet-based applications.[2][5] 2. User-generated content—such as text posts or comments, digital photos or videos, and data generated through all online interactions—is the lifeblood of social media.[2][5] 3. Users create service-specific profiles for the website or app that are designed and maintained by the social media organization.[2][6] 4. Social media helps the development of online social networks by connecting a user's profile with those of other individuals or groups.[2][6] The term social in regard to media suggests that platforms are user-centric and enable communal activity. As such, social media can be viewed as online facilitators or enhancers of human networks—webs of individuals who enhance social connectivity.[7] Users usually access social media services through web-based apps on desktops or download services that offer social media functionality to their mobile devices (e.g., smartphones and tablets). As users engage with these electronic services, they create highly interactive platforms through which individuals, communities, and organizations can share, co-create, discuss, participate, and modify user-generated or self-curated content posted online.[8][9][1] Additionally, social media are used to document memories, learn about and explore things, advertise oneself, and form friendships along with the growth of ideas from the creation of blogs, podcasts, videos, and gaming sites.[10] This changing relationship between humans and technology is the focus of the emerging field of technological self-studies.[11] Some of the most popular social media websites, with more than 100 million registered users, include Facebook (and its associated Facebook Messenger), TikTok, WeChat, Instagram, QZone, Weibo, Twitter, Tumblr, Baidu Tieba, and LinkedIn. Depending on interpretation, other popular platforms that are sometimes referred to as social media services include YouTube, QQ, Quora, Telegram, Meta, Signal, LINE, Snapchat, Pinterest, Viber, Reddit, Discord, VK, Microsoft Teams, and more. Wikis are examples of collaborative content creation. Zuckerberg
  • 88. Networking: Internet of Everything From https://www.bbvaopenmind.com/en/technology/digital-world/the-internet-of-everything-ioe/ The Internet of Everything (IoE) “is bringing together people, process, data, and things to make networked connections more relevant and valuable than ever before, turning information into actions that create new capabilities, richer experiences, and unprecedented economic opportunity for businesses, individuals, and countries” (Cisco, 2013). In simple terms: IoE is the intelligent connection of people, process, data and things. The Internet of Everything (IoE) describes a world where billions of objects have sensors to detect, measure, and assess their status, all connected over public or private networks using standard and proprietary protocols. Pillars of The Internet of Everything (IoE) • People: Connecting people in more relevant, valuable ways. • Data: Converting data into intelligence to make better decisions. • Process: Delivering the right information to the right person (or machine) at the right time. • Things: Physical devices and objects connected to the Internet and each other for intelligent decision making; often called Internet of Things (IoT).
  • 89. Unsolved Problems P vs NP Quantum Gravity