Note: The following texts are for master’s students who are preparing for the English state
exam. The texts will be followed by reading comprehension and vocabulary exercises; grammar
tasks will also be added in the exam.
Texts are for the following schools:
1. School of Food and Biotechnology (10)
2. School of Power Engineering (10)
3. School of Industrial Technology and Design (10)
4. School of Computer Science and Management (Computer 10, Management 10)
5. School of Geology and Petroleum Engineering (10)
6. School of Mechanical Engineering (10)
7. School of Mathematics (10)
8. School of Materials Science (10)
9. School of Telecommunication and Information Technology (10)
10. School of Civil Engineering and Architecture (10)
11. School of Social Technology (10)
12. School of Mining Engineering (10)
Total 130 texts
School of Food and Biotechnology
WHY WE NEED FOOD
All foods – from apples and pears to whole meal bread and ice cream – contain two main
categories of nutrients, the macronutrients and the micronutrients.
Macronutrients are required in large amounts for healthy growth and development;
they form the basis of every diet and they provide energy for all the body’s everyday functions
and activities. These nutrients are further categorized as being primarily fats, proteins,
carbohydrates, or fiber, although most foods contain all of them in varying proportions.
Vitamins and minerals make up the micronutrients, so called because they are found in tiny
amounts in foods. Unlike macronutrients, vitamins and minerals do not provide energy and are
needed in small amounts, but they play a critical role in the normal functioning of the body and
digestive processes, to ensure good health.
Take a look at what you eat in an average day: the chances are that your diet includes a
wide variety of foods from all the basic food groups, and that it provides a range of essential
nutrients. Your breakfast, for example, may be rich in carbohydrates and fiber from cereal or
wholemeal toast; you may have a mixed salad for your lunch, and grilled fish and vegetables for
dinner providing proteins and a variety of vitamins and minerals. Whatever you eat at individual
meals, your diet is made up of foods from the five basic food groups.
In addition to supplying nutrients, food provides your body with energy. Approximately half
to two-thirds of the energy we obtain from food goes to support the body’s basic, involuntary
functions, which are the activities that are performed without any conscious control, such as heart
rate, maintaining breathing, and body temperature. The minimum energy needed to carry out
these functions is determined by your basal metabolic rate which is your baseline rate of
metabolism measured when the body is at rest. You also expend energy through conscious,
voluntary activities, which range from the sedentary to the strenuous. All your body’s energy
needs are met by the food you eat and by your body’s energy stores.
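The text mentions the basal metabolic rate but gives no formula for it. One widely used estimate is the Mifflin-St Jeor equation; the equation and the example figures below are additions for illustration, not part of the original text.

```python
def bmr_mifflin_st_jeor(weight_kg, height_cm, age_years, sex):
    """Estimate basal metabolic rate (kcal/day) with the Mifflin-St Jeor equation."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age_years
    # Men add 5 kcal/day, women subtract 161 kcal/day in this equation.
    return base + (5 if sex == "male" else -161)

# A hypothetical 70 kg, 175 cm, 30-year-old man:
bmr = bmr_mifflin_st_jeor(70, 175, 30, "male")
# Involuntary functions account for roughly half to two-thirds of daily energy use,
# so total daily expenditure is larger than the BMR alone.
print(round(bmr))  # 1649
```

Because voluntary activity ranges from sedentary to strenuous, the BMR is only the floor of daily energy needs, not the total.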
Almost all foods are of plant or animal origin, although there are some exceptions. Foods not
coming from animal or plant sources include various edible fungi, including mushrooms. Fungi and
ambient bacteria are used in the preparation of fermented and pickled foods such as leavened
bread, wine, beer, cheese, pickles, and yogurt. Additionally, salt is often eaten as a flavoring or
preservative, and baking soda is used in food preparation. Both of these are inorganic substances,
as is water, an important part of the human diet.
Plants: Many plants or plant parts are eaten as food. There are around 2,000 plant
species which are cultivated for food, and many have several distinct cultivars. Plant-based foods
can be classified as seeds, fruits, and vegetables. Seeds contain the nutrients necessary for the
plant’s initial growth. Because of this, seeds are often packed with energy, and are good sources
of food for animals, including humans.
Fruits are the ripened ovaries of plants, including the seeds within. Fruits are made
attractive to animals so that animals will eat the fruits and excrete the seeds over long distances.
Fruits, therefore, make up a significant part of the diets of most cultures. Some fruits, such as
pumpkin and eggplant, are eaten as vegetables. Vegetables are a second type of plant matter
eaten as food. These include root vegetables (such as potatoes and carrots), leaf vegetables
(such as spinach and lettuce), stem vegetables (such as bamboo shoots and asparagus), and
inflorescence vegetables (such as globe artichokes and broccoli). Many herbs and spices are
highly-flavourful vegetables.
Animals: Animals can be used as food either directly or indirectly, by the products they produce.
Meat is an example of a direct product taken from an animal, which comes from either muscle
systems or from organs. Food products produced by animals include milk produced by mammals,
which in many cultures is drunk or processed into dairy products such as cheese or butter.
Preparation: While some food can be eaten raw, many foods undergo some form of
preparation for reasons of safety, palatability, or flavor. At the simplest level this may involve
washing, cutting, trimming or adding other foods or ingredients, such as spices. It may also
involve mixing, heating or cooling, pressure cooking, fermentation, or combination with other
food. In a home, most food preparation takes place in a kitchen. A meal is made up of food which
is prepared to be eaten at a specific time and place. The preparation of animal-based food will
usually involve slaughter, evisceration, hanging, portioning and rendering. In developed countries,
this is usually done outside the home in slaughterhouses, which are used to process animals en
masse for meat production. At the local level a butcher may break down larger animal meat
into smaller manageable cuts, pre-wrapped for commercial sale or wrapped to order in
butcher paper. In addition, fish and seafood may be fabricated into smaller cuts by a fishmonger
at the local level. However, fish butchery may be done on board a fishing vessel and quick-frozen
for preservation of quality.
VITAMINS
These are naturally occurring chemicals essential for health. For many of us, the word
“vitamin” conjures up the shelves of the local chemist, or perhaps the fortified cereals that we eat
for breakfast each morning. But these chemical substances occur naturally, in minute quantities,
in most of the foods that we eat, and we rely mainly on food sources to meet our vitamin needs.
Although vitamins contain no calories, they are essential for normal growth and
development, and for many chemical reactions in the body. Vitamins are necessary for the body
to use the calories provided by the food that we eat and to help process proteins, carbohydrates,
and fats. Vitamins are also involved in building cells, tissues, and organs: vitamin C, for example,
helps produce healthy skin. Vitamins are classified as fat-soluble or water-soluble, based on how
they are absorbed by the body. Vitamins A, D, E and K are fat-soluble; the water-soluble vitamins
include vitamin C and the B-complex vitamins.
Research has shown that foods rich in antioxidants are particularly beneficial for health.
Antioxidants include vitamins A, C, and E, and they are found in a wide range of vegetables and fruits.
For the most part, we rely on food sources or supplements to meet our vitamin and
mineral requirements. However, there are a few exceptions to this; for example, gut flora (the
micro-organisms in the intestinal tract) produce vitamin K. Vitamin D is also converted by the skin
into a form that the body can use with the help of ultraviolet light in sun light.
Because your body makes only a few vitamins itself, a balanced diet is very important – it
ensures that your body receives sufficient amounts of the vitamins, as well as the minerals, that it
requires each day.
The key to getting enough vitamins in your diet is to eat a variety of foods. This is because
while some nutrients tend to be found in substantial amounts in certain groups of foods, such as
vitamin C in fruits and vegetables, other nutrients, such as the B vitamins, are found in smaller
amounts in a wide range of foods. No one food contains an adequate amount of all the vitamins
that you require daily, but if you make healthy choices from a variety of foods, you are less likely
to miss out on any one particular nutrient.
Most people buy the same foods each week, which can result in a limited range of vitamins.
Vary your choices: for example, eat two apricots instead of one orange for a boost of vitamin A. Or choose salmon
on your bagel instead of your usual cream cheese, to boost your intake of vitamin D. Buying
vegetables and fruits in season also helps to vary your shopping choices.
FATS
Part of a group of compounds known as lipids, and composed of the elements carbon,
oxygen, and hydrogen, fats are found mainly in plants, fish, and meats. They form a major part of
all cell membranes in the body and play a vital role in the absorption of the fat-soluble vitamins
A, D, E, and K from foods.
Fat gives the body insulation, helping to maintain a constant temperature against extremes
of hot and cold. It also serves as an important source of energy.
Lipids and lipoproteins: In addition to fats, lipids include phospholipids, triglycerides, waxes,
and sterols. The best-known sterol is cholesterol, which circulates in the blood attached to
compounds known as lipoproteins. Low-density lipoproteins (LDL), which carry cholesterol to
tissues and organs, are often called “bad” cholesterol, since high levels in the blood are
associated with an increased risk of cardiovascular disease. High-density lipoproteins (HDL),
which carry cholesterol away from the tissues and back to the liver, are known as “good”
cholesterol, since high levels decrease the risk of cardiovascular disease.
Fats are also referred to as good or bad according to whether their chemical bonds are
“saturated” with hydrogen. Unsaturated fats are further classified into mono-and polyunsaturates,
which differ in their nutritional makeup.
Avoid saturated fats: With the exception of palm and coconut oils, most saturated fats are
derived from animal and dairy products. Red meat and meat products such as sausages are major
sources of saturated fat in the diet, along with whole milk and its products, such as cheese,
cream, and ice cream.
Excessive intake of saturated fats and trans fatty acids is now believed to increase the
risk of cardiovascular disease by raising unhealthy LDL cholesterol and triglycerides in the blood, without
lowering healthy HDL levels.
Polyunsaturated fats consist of two major types: omega-3 fatty acids, found in fish oils,
and omega-6 fatty acids, found in vegetable oils such as sunflower, rapeseed, and corn. Your
diet should include both types.
MINERALS
These are substances originating in rocks and metal ores. Many minerals are essential for
health. We obtain them by eating plants, which take up minerals from the soil, by eating animals
that have eaten plants and, to some extent, by drinking water that contains minerals.
Minerals are needed by the body in only tiny quantities and are termed macrominerals or
microminerals, according to the percentage of your total body weight they constitute and how
much you need in your daily diet.
Macrominerals make up more than 0.005 percent of the body’s weight, and you need more
than 100 mg of them daily. They include calcium, magnesium, phosphorus,
potassium, sodium, and sulphur. Microminerals, which are also known as trace elements, make up
less than 0.005 percent of the body’s weight, and you need less than 100 mg of them daily. Those
microminerals with identified roles in health include chromium, copper, fluoride, iodine, iron,
selenium, and zinc.
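The two thresholds above (more than 0.005 percent of body weight and more than 100 mg per day for macrominerals) can be expressed as a simple classifier. The example figures below are illustrative assumptions, not values taken from the text:

```python
def classify_mineral(percent_body_weight, daily_need_mg):
    """Classify a mineral using the text's thresholds: macrominerals exceed
    0.005% of body weight and are needed in amounts above 100 mg/day."""
    if percent_body_weight > 0.005 and daily_need_mg > 100:
        return "macromineral"
    return "micromineral (trace element)"

# Illustrative (hypothetical) profiles:
print(classify_mineral(1.5, 1000))    # a calcium-like profile -> macromineral
print(classify_mineral(0.00001, 15))  # a zinc-like profile -> micromineral (trace element)
```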
Minerals work together in making and breaking down body tissues and in regulating
metabolism, the chemical reactions constantly occurring in the body.
Bone, for example consists of a framework of the protein collagen in which most of the body’s
calcium, phosphorus, and magnesium are deposited. Minerals are stored in your bones so that in
the event of a dietary deficiency, some of the minerals can be released from the bones for the
body’s needs. The teeth also contain significant amounts of the minerals calcium and phosphorus.
Minerals are found in many key molecules in the body and are involved in essential chemical
reactions. For example, calcium activates a digestive enzyme that helps to break down fats; copper
is needed to incorporate iron into hemoglobin.
No single food is the best source of all of the minerals, but eating a variety of foods usually
ensures that you get enough of these important nutrients. In addition, the body can store
minerals for future use when intake might be low.
Animal foods are generally the best sources of minerals because they tend to contain
minerals in the proportions our bodies need. Fruits and vegetables can be useful sources of some
minerals such as potassium. Mineral water can be a source of minerals including magnesium.
Minerals are often lost when a food is processed. For example, potassium, iron, and
chromium are removed from whole grains during the refining process.
Minerals differ from vitamins in that they are not damaged by heat or light, but some
can be lost in the water used for cooking. To help preserve the mineral content of vegetables,
avoid boiling them. Instead, steam them if possible or use the microwave, and keep the cooking
time short. If you do boil, wait until the water is bubbling before you add the vegetables: if you
put them in cold water and then bring it to the boil, more nutrients will be lost.
FRUITS FOR HEALTH
Fruits – naturally sweet, colourful, high in vitamins and fibre, and low in calories and fat –
are the ideal snack. Scientific research has shown that a modest increase of one or two servings of
fruit per day can dramatically reduce your susceptibility to many diseases.
Rich in antioxidants: Vitamin C and phytochemicals, including antioxidants, abound in fruit.
Antioxidants destroy harmful substances in the body, called free radicals, which can build up and
cause cancer. Of particular interest are two types of phytochemicals – flavonoids and polyphenols
– which together have a powerful antioxidant quality. In addition, other phytochemicals in fruit
have been found to be anti-allergenic, anti- carcinogenic, anti- viral, and anti- inflammatory.
We truly do have a reason to say that an apple (or any fruit) a day keeps the doctor away.
Benefits of different fruits: Fruits are rich in vitamins and minerals, especially vitamin C
and potassium, and in fibre. Eat a variety to reap their individual nutritional benefits.
Apples: The skin of this refreshing fruit is an excellent source of fibre. A medium apple
has about 47 calories.
Apricots: Due to a short life span once picked, most apricots are dried or canned. A fresh
apricot has about 12 calories.
Bananas: Technically a herb and not a fruit, a medium banana (100g) contains 95 calories
and is loaded with vitamins and minerals.
Blueberries: These delicious fruits are rich in antioxidants and help prevent urinary tract
infections. There are about 50 calories in 80g blueberries.
Grapes: 80g contains 48 calories, with vitamins A and C and minerals.
Kiwi fruit: A medium kiwi fruit (60g) has 29 calories and offers a good range of vitamins.
Melon: This is rich in a form of carotene that is known to fight cancer. A slice of melon
(100g) has 24 calories.
Peaches: A medium peach (100g) has about 33 calories, and offers vitamins C and D.
Pears: A medium pear (100g) has about 33 calories, and offers vitamins C and D.
Pineapple: This fruit contains a potent enzyme, bromelain, that has been used to aid
digestion, reduce inflammation, and help cardiovascular disease. An 80g serving has 33 calories.
Plums: A medium plum (55g) has 20 calories. Plums are a good source of vitamin C and
offer potassium too.
Raisins and sultanas: Being so rich in sugar, these dried fruits are an excellent source of
energy: 1 tablespoon contains 82 calories.
Raspberries: There are nearly 1,000 varieties of raspberries. They provide 20 calories per 80g serving.
Watermelon: A slice (200g) of this refreshing melon contains 62 calories plus vitamin C.
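The per-serving figures in the list above use different portion sizes, so normalizing to calories per 100g makes the fruits easier to compare. A small sketch using only the entries whose portion weights the text actually gives:

```python
# (calories, portion weight in grams), taken from the list above
fruits = {
    "banana": (95, 100), "blueberries": (50, 80), "grapes": (48, 80),
    "kiwi": (29, 60), "melon": (24, 100), "plum": (20, 55),
    "pineapple": (33, 80), "watermelon": (62, 200),
}

# Scale each entry to a 100 g portion
per_100g = {name: round(kcal * 100 / grams) for name, (kcal, grams) in fruits.items()}

for name, kcal in sorted(per_100g.items(), key=lambda kv: kv[1]):
    print(f"{name}: ~{kcal} kcal per 100g")
```

On this basis melon and watermelon are the least calorie-dense of the listed fruits, and banana the most.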
THE BENEFITS OF DAIRY PRODUCTS
Milk and its products are excellent sources of protein, vitamins, and minerals – most
particularly of calcium, which is essential for healthy bones and teeth.
The varieties of milk: Although cow’s milk is the most common in the UK, sheep’s and
goat’s milk are available too, as are plant- based substitutes such as soya milk and rice milk.
Cow’s milk is processed in a variety of ways to create products that vary in nutritional content and
storage capability. Fat content is one of the most important distinctions, varying from whole or
full-fat milk (which contains 3.9 percent fat) through semi-skimmed (1.6 percent) to skimmed.
Special milks are available for people with specific dietary needs, such as lactose
intolerance. Milk is also available in UHT (ultra –heat-treated), dried, evaporated, and condensed
forms, which are useful for cooking.
Cheese is milk in concentrated form, which is why cheese is such a great source of the
important nutrients found in milk. It is also the reason why cheese has such a high saturated fat
content. As with milk, the solution is simply to opt for reduced-fat and low-fat varieties, which
contain the vital nutrients while limiting unhealthy saturated fat.
Yogurt is another milk product, made by treating milk with a bacterial culture. Yogurt is
rich in protein and vitamin B2, and contains living bacteria that are healthy for your digestive
system. It is available in many different types and, as with other milk products, the lower fat
varieties are the healthier choice.
Choosing the right milk: Most milk consumed in the world is cow’s milk. However, other
milks are available as healthy alternatives.
Cow’s milk: Whole or full-fat milk has 7.8 g of fat per 200ml serving and 132 calories.
Calcium content is slightly less than that in lower fat varieties.
Goat’s milk: With slightly less lactose than cow’s milk, goat’s milk contains more vitamin
A, vitamin B6, calcium, potassium, copper, and selenium. Full-fat goat’s milk has about the same
amount of fat as cow’s milk, but there are skimmed versions.
Sheep’s milk: Rich in protein, fat, and minerals, sheep’s milk is not widely available. It is
most often found made into cheese and yogurt.
Soya milk: This is good for people with lactose intolerance as it doesn’t contain any lactose
or casein. A 200ml glass contains almost 6.0g of protein, 4.8g of fat, no cholesterol, and 86
calories. Soya milk is not a good natural source of calcium or vitamin B12, so choose a fortified variety.
Rice milk: This is a good substitute for semi-skimmed cow’s milk for people who have
allergies or who are lactose-intolerant.
Oat milk: Lactose- and cholesterol-free, and low in fat. Choose varieties fortified with
calcium and vitamin D.
Almond milk: Lactose-free and low in saturated fat, almond milk is also very low in sugar.
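The cow's milk figures above are self-consistent: 3.9 percent fat in a 200ml serving works out to the stated 7.8g, assuming milk's density is roughly 1g per ml (that density figure is an assumption added here):

```python
fat_fraction = 0.039   # whole milk fat content, from the text
serving_ml = 200       # glass size used in the text

# Assuming milk's density is about 1 g/ml, millilitres ~ grams:
fat_grams = serving_ml * fat_fraction
print(round(fat_grams, 1))  # 7.8
```

The same arithmetic applied to semi-skimmed milk (1.6 percent) would give about 3.2g of fat per 200ml glass.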
FAST FOOD IN AMERICA
The modern history of fast-food in America began on July 7, 1912 with the opening of a
fast food restaurant called the Automat in New York. The Automat was a cafeteria with its
prepared foods behind small glass windows and coin-operated slots. Joseph Horn and Frank
Hardart had already opened an Automat in Philadelphia, but their “Automat” at Broadway and
13th Street, in New York City, created a sensation. Numerous Automat restaurants were quickly
built around the country to deal with the demand. Automats remained extremely popular
throughout the 1920s and 1930s. The company also popularized the notion of “take-out” food,
with their slogan “Less work for Mother”. The American company White Castle is generally
credited with opening the second fast-food outlet in Wichita, Kansas in 1921, selling hamburgers
for five cents apiece. Among its innovations, the company allowed customers to see the food
being prepared. White Castle later added five holes to each beef patty to increase its surface area
and speed cooking times. White Castle was successful from its inception and spawned numerous
McDonald’s, the largest fast-food chain in the world and the brand most associated with
the term “fast food,” was founded as a barbecue drive-in in 1940 by Dick and Mac McDonald.
After discovering that most of their profits came from hamburgers, the brothers closed their
restaurant for three months and reopened it in 1948 as a walk-up stand offering a simple menu of
hamburgers, French fries, shakes, coffee, and Coca-Cola, served in disposable paper wrapping. As
a result, they were able to produce hamburgers and fries constantly, without waiting for customer
orders, and could serve them immediately; hamburgers cost 15 cents, about half the price at a
typical diner. Their streamlined production method, which they named the “Speedee Service
System”, was influenced by the production line innovations of Henry Ford. The McDonalds’ stand
was the milkshake machine company’s biggest customer and a milkshake salesman named Ray
Kroc traveled to California to discover the secret to their high-volume burger-and-shake operation.
Kroc thought he could expand their concept, eventually buying the McDonalds’ operation outright
in 1961 with the goal of making cheap, ready-to-go hamburgers, French fries, and milkshakes a
nationwide business.
Kroc was the mastermind behind the rise of McDonald’s as a national chain. The first part
of his plan was to promote cleanliness in his restaurants. Kroc often pitched in at his own Des
Plaines, Illinois, outlet, hosing down the garbage cans and scraping gum off the cement. Kroc
also added great swaths of glass which enabled the customer to view the food preparation. This
was very important to the American public, which had become quite germ-conscious. A clean
atmosphere was only part of Kroc’s grander plan, which separated McDonald’s from the rest of
the competition and contributed to its great success. Kroc envisioned making his restaurants appeal
to suburban families. “Where White Tower (one of the original fast food restaurants) had tied
hamburgers to public transportation and the workingman...McDonald’s tied hamburgers to the
car, children, and the family.”
FOOD IN DIFFERENT CULTURES
Have you ever stopped to really think about what you and your family eat every day, and
why? Have you ever stopped to think what other people eat? In the movie Indiana Jones and the
Temple of Doom, there are two scenes in which the two lead characters are offered meals from a
different culture. One meal, meant to break the ice, consisted of insects. The second meal was a
lavish banquet that featured such delicacies as roasted beetles, live snake, eyeball soup, and
chilled monkey brains for dessert. Some cultures eat such things as vipers and rattlesnakes, bush
rats, dog meat, horsemeat, bats, animal hearts, livers, eyes, and insects of all sorts. Sound good?
The manner in which food is selected, prepared, presented and eaten often differs by
culture. One man’s pet is another person‘s delicacy– dog, anyone? Americans love beef, yet it is
forbidden to Hindus, while the forbidden food in Moslem and Jewish cultures is normally pork,
eaten extensively by the Chinese and others. In large cosmopolitan cities, restaurants often cater
to diverse diets and offer “national” dishes to meet varying cultural tastes. Feeding habits also
differ, and the range goes from hands and chopsticks to full sets of cutlery. Even when cultures
use a utensil such as a fork, one can distinguish a European from an American by which hand
holds the implement. Subcultures, too, can be analyzed from this perspective, such as the
executive dining room, the soldier’s mess, the ladies’ tea room, and the vegetarian’s restaurant.
Often the differences among cultures in the foods they eat are related to differences in
geography and local resources. People who live near water (the sea, lakes, and rivers) tend to eat
more fish and crustaceans. People who live in colder climates tend to eat heavier fatty foods.
However, with the development of a global economy, food boundaries and differences are
beginning to dissipate: McDonalds is now on every continent except Antarctica, and tofu and
yogurt are served all over the world.
FISH AND SHELLFISH
Eating fish twice a week reduces your risk of heart disease. Low in both total and saturated
fat content, fish and shellfish are excellent sources of protein and vitamins, so you should try to
include them in your diet at least twice a week. Fish and shellfish are high in important nutrients,
such as vitamins B1, B6, niacin, and D and some are rich in omega – 3 fatty acids.
Benefits of fish: Ever since it was discovered that people such as the Inuit, who eat a diet
based on fish, have a low incidence of cardiovascular disease, the link between eating fish and
reduced risk of heart attack has been a hot topic.
Shellfish is healthy: This food source has acquired a bad reputation because some
shellfish contain a high level of cholesterol. However, we now know that cholesterol levels in the
blood are related to the intake of saturated fat rather than to eating cholesterol-rich foods.
When handled properly, fish and shellfish are as safe to eat as any other source of protein.
Most harmful microbes found in fish are destroyed during cooking.
Choosing fish for omega-3 fatty acids: Oil-rich fish such as sardines, mackerel, and
salmon contain healthy fat called omega-3 fatty acids. This fat is believed to reduce the risk of
your developing cardiovascular disease by increasing the levels of “good” cholesterol in the body
and lowering the levels of “bad” cholesterol and triglycerides. All fish and shellfish contain some
omega -3 fatty acids, but the amount can vary. Generally, the fattier fish contain more than the
leaner fish, but the proportion of omega-3 fatty acids can vary considerably between fish species.
School of Power Engineering
The Main Components of Electric Circuits
The main components of any circuit are devices that produce and utilize electric energy.
They are: 1. Power sources 2. Utilizing loads 3. Connecting conductors.
The most common power sources are electric generators and primary cells. Electric
generators convert mechanical energy into electric energy, while primary cells convert chemical
energy into electric energy.
Loads include electric heaters, electric motors, incandescent lamps, etc. Motors convert
electric energy into mechanical energy; incandescent lamps and heaters convert electric energy
into light and heat. Utilizing devices or loads convert electric energy into thermal, mechanical or
chemical energy.
Electric power is delivered from power sources to loads by electric wires. According to
their material, wires can be aluminium, copper, steel, etc.
In addition to these three main components, electric circuits use different types of
switches, protection devices (relays and fuses), and meters (ammeters, voltmeters, wattmeters,
etc.). The types of electrical circuits associated with electrical power production or power
conversion systems are either resistive, inductive, or capacitive. Most systems have some
combination of each of these three circuit types. These circuit elements are also types of loads.
A load is a part of a circuit that converts one type of energy into another type. A resistive load
converts electrical energy into heat energy.
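The rate at which a resistive load converts electrical energy into heat follows the standard relations V = IR and P = VI (these formulas are standard circuit theory, not stated in the text). A small sketch with hypothetical figures:

```python
def resistive_load_power(voltage_v, resistance_ohm):
    """Power dissipated as heat in a purely resistive load: P = V^2 / R."""
    current_a = voltage_v / resistance_ohm  # Ohm's law: I = V / R
    return voltage_v * current_a            # P = V * I

# A hypothetical 230 V heater element with 26.45 ohm resistance
# dissipates about 2 kW as heat:
print(round(resistive_load_power(230, 26.45)))  # 2000
```

Inductive and capacitive loads behave differently under alternating current, which is one reason the text treats the three circuit types separately.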
In our discussion of electrical circuits, we will primarily consider alternating-current (ac)
systems as the vast majority of the electrical power which is produced is alternating current.
Direct-current (dc) systems must be studied in terms of electrical power conversion.
After electricity is produced at power plants it has to get to the customers that use the
electricity. Our cities, towns, states and the entire country are criss-crossed with power lines that
“carry” the electricity.
A power system is an interconnection of electric power stations by high voltage power
transmission lines. Nowadays the electricity is transmitted over long distances and the length of
transmitting power lines varies from area to area.
A wire system is termed a power line if it has no parallel branches and a power
network if it has parallel branches.
According to their functions, power lines and networks are subdivided into transmission
and distribution lines.
Transmission lines serve to deliver power from a station to distribution centres. Distribution
lines deliver power from distribution centres to the loads. Lines are also classed into: 1. overhead
2. indoor 3. cable (underground)
Overhead lines comprise line conductors, insulators, and supports. The conductors are
attached to the insulators, and these are attached to the supports. The greater the resistance
offered by the conductors, the higher the heating losses in the wires. These losses can be reduced
by using a step-up transformer to raise the transmission voltage, which lowers the current in the line.
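Heating loss in a line scales as I²R, and for a fixed delivered power P = VI, raising the voltage with a step-up transformer lowers the current and hence the loss. A sketch with hypothetical figures:

```python
def line_loss_w(power_w, voltage_v, line_resistance_ohm):
    """I^2 * R heating loss in a line delivering `power_w` at `voltage_v`."""
    current_a = power_w / voltage_v  # for a fixed power, I = P / V
    return current_a ** 2 * line_resistance_ohm

# Delivering 1 MW over a line with 5 ohm resistance (illustrative figures):
low_v_loss = line_loss_w(1_000_000, 10_000, 5)    # at 10 kV
high_v_loss = line_loss_w(1_000_000, 100_000, 5)  # at 100 kV, after step-up
print(low_v_loss, high_v_loss)  # 50000.0 500.0
```

Ten times the voltage means one tenth the current and one hundredth the I²R loss, which is why long-distance transmission uses high voltages.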
Indoor lines include conductors, cords, and buses. A conductor may comprise one wire or a
combination of bare wires not insulated from one another. They deliver electric current to the loads.
As to underground lines, they are suitable for urban areas. Accordingly, they are used in
cities and in the areas of industrial enterprises.
Electric Power Consumers and Power System
An electric power consumer is an enterprise utilizing electric power. Its operating characteristics
vary with the hours of the day and night, the days of the week, and the seasons. All electric power
consumers are divided into groups with common load characteristics. To the first group belong
municipal consumers with a predominant lighting load: dwelling houses, hospitals, theatres, street
lighting systems, etc. To the second group belong industrial consumers with a predominant
power load (electric motors): industrial plants, mines, etc. To the third group belongs transport,
for example, electrified railways. The fourth group consists of agricultural consumers, for example, farms.
The operating load conditions of each group are determined by the load graph. The load
graph shows the consumption of power during different periods of day, month and year. On the
load graph the time of the maximum loads and minimum loads is given. Large industrial areas
with cities are supplied from electric networks fed by electric power plants. These plants are
interconnected for operation in parallel and located in different parts of the given area. They may
include some large thermal and hydroelectric power plants.
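Reading the maximum and minimum loads off a load graph, as described above, can be sketched with a simple hourly series (the figures below are hypothetical):

```python
# Hypothetical hourly load readings for one day, in MW:
hourly_load_mw = [310, 300, 295, 290, 300, 340, 420, 500, 540, 530, 520, 510,
                  505, 500, 495, 500, 530, 560, 580, 570, 520, 450, 380, 330]

peak_mw = max(hourly_load_mw)
minimum_mw = min(hourly_load_mw)
peak_hour = hourly_load_mw.index(peak_mw)  # hour of day (0-23) of the maximum

print(f"maximum load: {peak_mw} MW at hour {peak_hour}")  # maximum load: 580 MW at hour 18
print(f"minimum load: {minimum_mw} MW")                   # minimum load: 290 MW
```

Knowing when the maximum and minimum occur is what lets the interconnected plants share the total load efficiently.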
The sum total of the electric power plants, the networks that interconnect them, and the
power utilizing devices of the consumers is called a power system. All the components of a power
system are interrelated by the common processes of production, distribution, and consumption of
both electric and heat power. In a power system, all the parallelly operating plants take part in
carrying the total load of all the consumers supplied by the given system.
Nuclear Energy: Solution to Global Climate Change
The issue of global climate change has been widely reported on, and was recently covered in the
PBS documentary “What’s Up With the Weather?”. Nuclear power plants do not produce carbon
dioxide emissions, which are a major contributor to the greenhouse effect and global climate
change. In fact, nuclear energy releases no emissions of any kind, so nuclear plants also do not
contribute to local air pollution problems. The US Representative to UN Organizations in Vienna,
Ambassador John B. Ritch III, has declared that “only nuclear energy can help meet the world’s
energy needs without threatening the environment”. Worldwide, reliance on nuclear power has
reduced greenhouse gas emissions by almost metric tons annually.
The red herring: the “problem” of nuclear waste. The entire nuclear power industry generates
approximately 2,000 tons of solid waste annually in the United States. All technical and safety
issues have been resolved for the creation of a high-level waste repository in the US; politics is
the only reason we do not have one. In comparison, coal-fired power produces 100,000,000 tons
of ash and sludge annually, and this ash is laced with poisons such as mercury and nitric oxide.
Industry generates 38,000,000 tons of hazardous waste, and the kind it makes will be with us
forever, not decaying away. Yet this waste does not receive nearly the care and attention in
disposal that radioactive waste does. This is not to say that radioactive waste is more dangerous;
it is not. We should probably be more careful with other industrial wastes.
Power engineering is a science which studies all kinds of energy. It is a very young science
and it is applied in every branch of industry. Our industrial progress is based on power: power for
our machines, industrial plants, heating and lighting systems, transport and communication.
Indeed, there is hardly a sphere of our life where power is not required. We may trace the rise of
civilization by man’s ability to generate power.
Power engineering comprises different sciences and branches of sciences such as
mathematics, machine details, strength of materials, electrical engineering, hydraulics, heat
transfer, electrical units, gas and steam turbines, atomic reactors, solar installations and many others.
Power supply is one of the major criteria of a country’s industrial might. Without an ample
power supply, no branch of the national economy can develop rapidly or at all effectively.
Industrial progress depends on power. For centuries coal, oil and water were its main
sources. In the 19th century they were used to produce steam; in the first half of the 20th
century, electricity. Our time is the age of atomic power: a new fuel and a new source of power has been put
to the service of man.
Today we obtain power from many sources. One of them is fuel – coal, oil, natural gas – burned to
produce the heat that operates internal and external combustion engines. Another source is
falling water in our hydroelectric power stations, where water turbines drive electric
generators; the next is the nuclear reactor, which produces heat by means of atomic fission. We also
use the energy of tides, subterranean heat and solar energy to produce electricity.
Power engineering includes such forms of energy as solar, atomic, thermal and electric energy
from the combustion of coal, oil, shale and gas, steam power, wind, etc.
Energy sources are the sources from which energy can be obtained to provide heat, light, and power. Energy
sources have progressed from human and animal power to fossil fuels and radioactive elements,
and water, wind, and solar power. Industrial society was based on the substitution of fossil fuels
for human and animal power. Future generations will have to increasingly use solar energy and
nuclear power as the finite reserves of fossil fuels become exhausted. The principal fossil fuels are
coal, lignite, petroleum, and natural gas – all of which were formed millions of years ago. Fossil
fuels which have potential for future use are oil shale and tar sands.
Oil shale deposits have been found in many areas of the United States, but the only deposits of
sufficient potential oil content considered as near-term potential resources are those of the Green
River Formation in Colorado, Wyoming, and Utah.
Tar sands represent the largest known world supply of liquid hydrocarbons. Extensive
resources are located throughout the world, but primarily in the Western Hemisphere. The best-
known deposit is the Athabasca tar sands in northeastern Alberta, Canada.
Nonfuel sources of energy include wastes, water, wind, geothermal deposits, biomass, and solar
heat. At the present time the nonfuel sources contribute very little energy, but as the fossil fuels
become depleted, the nonfuel sources and fission and fusion sources will become of greater
importance since they are renewable. Nuclear power based on the fission of uranium, thorium,
and plutonium, and fusion power based on the forcing together of the nuclei of two light atoms,
such as deuterium, tritium, or helium-3, could become principal sources of energy in the 21st century.
Greater than all these sources of energy put together is the energy locked up in the nuclei of the atoms of matter itself. It
has been known for at least a century. It is called nuclear energy.
Many atomic power plants for producing electric energy have been built in many countries of the world.
There are great possibilities for using nuclear energy throughout the world. A number of countries are working
on the development and construction of various kinds of locomotives, airplanes and other means of
transport. Many atomic-powered ships have already been built. Nuclear energy is and will be used
in medicine, and in many spheres of life where the atom may find useful application.
Electric power is generated at electric power plants. The main unit of an electric power
plant comprises a prime mover and the generator which it rotates.
In order to actuate the prime mover, energy is required. Many different sources of energy
are in use nowadays. To these sources belong heat obtained by burning fuels, the pressure of
flowing water, and the force of the wind. Depending on the energy source that drives the prime
mover, power plants are divided into groups: thermal, hydraulic (water-power) and wind plants. They are classed as:
• Steam turbine plants, where steam turbines serve as prime movers. The main generating units at
steam turbine plants belong to the modern, high-capacity class of power plants.
• Steam engine plants, in which the prime mover is a piston-type steam engine. Nowadays no large
generating plants of industrial importance are constructed with such prime movers. They are
used only for local power supply.
• Diesel-engine plants; in them diesel internal combustion engines are installed. These plants are
also of small capacity, they are employed for local power supply.
• Hydroelectric power plants employ water turbines as prime movers. Therefore they are called
hydroturbine plants. Their main generating unit is the hydrogenerator.
• Modern wind-electric power plants utilize wind turbines; these plants, as well as small-capacity
hydroelectric power plants, are widely used in agriculture.
Hydroelectric stations deliver power from great rivers, but still about 80 percent of the required
electric power is produced in thermal electric plants. These plants burn coal, gas, peat or shale
to make steam.
An Electric Motor
An electric motor is a device using electrical energy to produce mechanical energy.
Electric motors are everywhere! In your house, almost every mechanical movement is caused by
an AC (alternating current) or DC (direct current) electric motor.
In an electric motor an electric current and magnetic field produce a turning
movement. This can drive all sorts of machines, from wrist- watches to trains.
An electric current running through a wire produces a magnetic field around the wire.
If an electric current flows around a loop of wire with a bar of iron through it, the iron becomes a magnet.
If you put two magnets close together, like poles – for example, two north poles –
repel each other, and unlike poles attract each other.
In a simple electric motor, a piece of iron with loops of wire round it, called an
armature, is placed between the north and south poles of a stationary magnet, known as the field
magnet. When electricity flows around the armature wire, the iron becomes an electromagnet.
The attraction and repulsion between the poles of this armature magnet and the
poles of the field magnet make the armature turn. As a result, its north pole is close to the south
pole of the field magnet. Then the current is reversed so the north pole of the armature magnet
becomes the south pole. Once again, the attraction and repulsion between it and the field magnet
make it turn. The armature continues turning as long as the direction of the current, and
therefore its magnetic poles, keeps being reversed.
To reverse the direction of the current, the ends of the armature wire are connected
to different halves of a split ring called a commutator. Current flows to and from the commutator
through small carbon blocks called brushes. As the armature turns, first one half of the
commutator comes into contact with the brush delivering the current, and then the other, so the
direction of the current keeps being reversed.
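The effect of commutation described above can be sketched numerically. The model below is a deliberate simplification (a single current loop in a uniform field, with the lumped constant k chosen arbitrarily); it shows that reversing the current every half-turn keeps the torque from ever changing sign, which is what keeps the armature turning:

```python
import math

def torque(angle_rad, current):
    # Torque on a current loop in a uniform field: T = k * I * sin(angle).
    k = 1.0  # lumped constant (field strength, loop area, turns) - illustrative
    return k * current * math.sin(angle_rad)

def commutated_current(angle_rad):
    # The split-ring commutator reverses the current every half revolution,
    # so the current's sign follows the armature's half-turn position.
    return 1.0 if math.sin(angle_rad) >= 0 else -1.0

# Without commutation the torque would change sign each half-turn;
# with it, the torque never goes negative and rotation continues.
angles = [i * math.pi / 8 for i in range(16)]
torques = [torque(a, commutated_current(a)) for a in angles]
print(all(t >= 0 for t in torques))
```

Real motors use multiple armature windings so the torque is also smoother, but the sign-reversal principle is the same.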
A portable generator can provide electricity to power lights and other appliances no
matter how far you are from the mains. It works by turning the movement of a piston into electricity.
Although most electricity comes from power stations, power can also be generated by
far smaller means. Nowadays, electricity generators can be small enough to hold in the hand.
Portable generators are made up of two main parts: an engine, which powers the equipment, and
an alternator, which converts motion into electricity.
The engine, which runs on petrol, is started by pulling a cord. This creates a spark
inside, which ignites the fuel mixture.
In a typical four-stroke engine, when the piston descends, the air inlet valve opens and a mixture
of air and petrol is sucked in through a carburetor.
The valve closes, the piston rises on the compression stroke and a spark within the
upper chamber ignites the mixture. This mini- explosion pushes the piston back down, and as it
rises again the fumes formed by the ignition are forced out through the exhaust valve.
This cycle is repeated many times per second. The moving piston makes the crankshaft
rotate at great speed.
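The four-stroke cycle described above repeats in a fixed order, one stroke per half-revolution of the crankshaft. A minimal sketch (the function name and state list are ours, purely for illustration):

```python
# The four strokes of the cycle, in the order described in the text.
STROKES = ["intake", "compression", "power", "exhaust"]

def stroke_after(start, half_revolutions):
    # Return which stroke the engine is in after a given number of
    # crankshaft half-revolutions, starting from the named stroke.
    i = STROKES.index(start)
    return STROKES[(i + half_revolutions) % 4]

print(stroke_after("intake", 1))  # compression
print(stroke_after("intake", 4))  # intake: one full cycle = 2 crank revolutions
```

Note that a complete cycle takes two full crankshaft revolutions, which is why the cycle can repeat many times per second at engine speed.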
The crankshaft extends directly to an alternator, which consists of two main sets of
windings: coils of insulated copper wire wound closely around an iron core. One set, called the stator
windings, is in a fixed position and shaped like a broad ring. The other set, the armature windings, is
wound on the rotor, which is fixed to the rotating crankshaft. The rotor makes about 3,000
revolutions per minute.
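The rotor speed quoted above fits the standard alternator relation f = pole pairs x rpm / 60. A minimal sketch (the function name is ours, and a single pole pair is assumed):

```python
def output_frequency_hz(rpm, pole_pairs=1):
    # Electrical frequency of an alternator: f = pole_pairs * rpm / 60.
    return pole_pairs * rpm / 60

# A rotor turning at 3,000 rpm with a single pole pair gives 50 Hz,
# the European mains frequency; 3,600 rpm would give 60 Hz.
print(output_frequency_hz(3000))  # 50.0
```

Adding pole pairs lets a generator produce the same frequency at a lower shaft speed, which is why large hydrogenerators turn far more slowly than 3,000 rpm.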
Faraday’s experiments of August 29, 1831, gave us the principle of the electric
transformer, without which the later discoveries of that fateful year could have little real practical
application. For to convey electric current over long distances, say to supply a town, or feed an
electric railway, it is necessary to generate it at a very high voltage, or force. By means of
transformers based on Faraday’s induction coil discovery, it is simple for a current from the grid or
direct from a power station of, say, 132,000 volts to be stepped down for the electric train to 600
volts and for household use to 240 volts. Smaller transformers in individual pieces of electrical
equipment, say a shaver or radio, may step the current down still further for special purposes.
Similarly, currents may be stepped up in voltage, if required, by the same device. The
procedure is quite simple. The current is fed into the transformer across the primary, or input coil,
which corresponds to Faraday’s right-hand coil on his induction ring. The resultant induced
current is taken from the secondary, or output coil, which corresponds to Faraday’s left-hand coil.
If the output coil has fewer turns than the input coil, the voltage will be stepped down.
So the two related discoveries of 1831 provided not only the means of making electricity
easily and cheaply, on as large a scale as required, without any cumbersome batteries, but also
the way of using it in a safe and practical way.
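The step-down figures in the passage follow the ideal-transformer relation Vs/Vp = Ns/Np. A minimal sketch, with winding counts chosen purely for illustration:

```python
def secondary_voltage(v_primary, n_primary, n_secondary):
    # Ideal transformer relation: Vs / Vp = Ns / Np.
    return v_primary * n_secondary / n_primary

# Stepping the 132,000 V grid supply down to 240 V for household use
# needs a 550:1 turns ratio (the winding counts here are illustrative).
print(secondary_voltage(132_000, 550, 1))  # 240.0
print(secondary_voltage(132_000, 220, 1))  # 600.0 for the electric railway
```

A real transformer also loses a little energy to resistance and core heating, so actual output voltages fall slightly below these ideal values.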
School of Industrial Technology and Design
Fashion design is the applied art dedicated to clothing and lifestyle accessories created
within the cultural and social influences of a specific time. It is considered to have a built-in
obsolescence, usually of one to two seasons. A season is defined as either autumn/winter or
spring/summer. Nowadays, even though French, British, Japanese and American fashion are the
top in style, Italian fashion is considered the most important and elegant in design and it has led
the world of fashion since the 1970s and '80s.
Fashion designers can work in a number of ways. Fashion designers may work full-time for
one fashion company, known as in-house designers, which owns the designs. They may work
alone or as part of a team. Freelance designers work for themselves, and sell their designs to
fashion houses, directly to shops, or to clothing manufacturers. The garments bear the buyer's
label. Some fashion designers set up their own labels, under which their designs are marketed.
Some fashion designers are self-employed and design for individual clients. Other high-fashion
designers cater to specialty stores or high-fashion department stores. These designers create
original garments, as well as those that follow established fashion trends. Most fashion designers,
however, work for apparel manufacturers, creating designs of men’s, women’s, and children’s
fashions for the mass market. Large designer brands which have a 'name' as their brand such as
Calvin Klein, Gucci, Ralph Lauren, or Chanel are likely to be designed by a team of individual
designers under the direction of a designer director.
Designing a collection and garment: A fashion collection is something that designers put together
each season to show their idea of new trends in both their high end couture range as well as their
mass market range. Fashion designers must take numerous matters into account when designing
clothes for a collection, including consistency of theme and style. They will also take into account
views of existing customers, previous fashions and styles of competitors, and anticipated fashion
trends, as well as the season for the collection of fashion.
Fashion designers work in different ways. Some sketch their ideas on paper, while others
drape fabric on a dress form. When a designer is completely satisfied with the fit of the toile (or
muslin), he or she will consult a professional pattern maker who then makes the finished, working
version of the pattern out of card. The pattern maker's job is very precise and painstaking. The fit
of the finished garment depends on their accuracy. Finally, a sample garment is made up and
tested on a model.
Ready to Wear
Fashion design is generally considered to have started in the 19th century with Charles
Frederick Worth who was the first designer to have his label sewn into the garments that he
created. Before the former draper set up his maison couture (fashion house) in Paris, clothing
design and creation was handled by largely anonymous seamstresses, and high fashion
descended from that worn at royal courts. Worth's success was such that he was able to dictate
to his customers what they should wear, instead of following their lead as earlier dressmakers had
done. The term couturier was in fact first created in order to describe him. While all articles of
clothing from any time period are studied by academics as costume design, only clothing created
after 1858 could be considered as fashion design.
It was during this period that many design houses began to hire artists to sketch or paint
designs for garments. The images were shown to clients, which was much cheaper than
producing an actual sample garment in the workroom. If the client liked their design, they
ordered it and the resulting garment made money for the house. Thus, the tradition of designers
sketching out garment designs instead of presenting completed garments on models to customers
began as an economy.
At this time in fashion history the division between haute couture and ready-to-wear was
not sharply defined. The two separate modes of production were still far from being competitors,
and, indeed, they often co-existed in houses where the seamstresses moved freely between
made-to-measure and ready-made.
Around the start of the 20th century fashion magazines began to include photographs and
became even more influential than in the past. In cities throughout the world these magazines
were greatly sought-after and had a profound effect on public taste. Talented illustrators, among
them Paul Iribe, George Lepape and George Barbier, drew exquisite fashion plates for these
publications, which covered the most recent developments in fashion and beauty. Perhaps the
most famous of these magazines was La Gazette du Bon Ton, which was founded in 1912 by
Lucien Vogel and regularly published until 1925 (with the exception of the war years).
World War II brought about many radical changes to the fashion industry. After the war,
Paris's reputation as the global center of fashion began to crumble and off-the-peg and mass-
manufactured clothing became increasingly popular. A new youth style emerged in the 1950s,
changing the focus of fashion. As the installation of central heating became more widespread, the
age of minimum-care garments began, and lighter textiles and, eventually, synthetics, were introduced.
Leather in Modern Culture
Leather is a material created through the tanning of hides and skins of animals, primarily
cattle hide. The tanning process converts the putrescible skin into a durable and versatile
material. Together with wood, leather formed the basis of much ancient technology. The leather
industry and the fur industry are distinct industries that are differentiated by the importance of
their raw materials. In the leather industry the raw materials are by-products of the meat
industry, with the meat having higher value than the skin. The fur industry uses raw materials
that are higher in value than the meat and hence the meat is classified as a by-product.
Taxidermy also makes use of the skin of animals, but generally the head and part of the back are
used. Hides and skins are also used in the manufacture of glue and gelatin.
Due to its excellent resistance to abrasion and wind, leather found a use in rugged
occupations. The enduring image of a cowboy in leather chaps gave way to the leather-jacketed
and leather-helmeted aviator. When motorcycles were invented, some riders took to wearing
heavy leather jackets to protect from road rash and wind blast; some also wear chaps or full
leather pants to protect the lower body. Many sports still use leather to help in playing the game
or protecting players; its flexibility allows it to be formed and flexed.
The term leathering is sometimes used in the sense of a physical punishment (such as a
severe spanking) applied with a leather whip, martinet, etc.
Leather fetishism is the name popularly used to describe a fetishistic attraction to people
wearing leather, or in certain cases, to the garments themselves.
Many rock groups (particularly heavy metal and punk groups in the 1980s) are well-known
for wearing leather clothing. Leather clothing, particularly jackets, has almost become standard in the
heavy metal and punk subcultures. Extreme metal bands (especially black metal bands) and Goth
rock groups have extensive leather clothing, i.e. leather pants, accessories, etc.
Many cars and trucks come with optional or standard 'leather' seating. This can range from
cheap vinyl imitation leather, found on some low cost vehicles, to real Nappa leather, found on
luxury car brands like Mercedes-Benz, BMW, and Audi.
Computer-Aided Design (CAD)
Computer-aided design (CAD) is the use of computer technology for the design of objects,
real or virtual. CAD often involves more than just shapes. As in the manual drafting of technical
and engineering drawings, the output of CAD often must convey also symbolic information such
as materials, processes, dimensions, and tolerances, according to application-specific conventions.
CAD may be used to design curves and figures in two-dimensional ("2D") space, or curves,
surfaces, and solids in three-dimensional ("3D") space.
CAD is an important industrial art extensively used in many applications, including
automotive, shipbuilding, and aerospace industries, industrial and architectural design,
prosthetics, and many more. CAD is also widely used to produce computer animation for special
effects in movies, advertising and technical manuals. The modern ubiquity and power of
computers means that even perfume bottles and shampoo dispensers are designed using
techniques unheard of by shipbuilders of the 1960s. Because of its enormous economic
importance, CAD has been a major driving force for research in computational geometry,
computer graphics (both hardware and software), and discrete differential geometry.
The design of geometric models for object shapes, in particular, is often called computer-
aided geometric design (CAGD).
Using CAD: Computer-Aided Design is one of the many tools used by engineers and designers
and is used in many ways depending on the profession of the user and the type of software in
question. There are several different types of CAD. Each of these types of CAD system
requires the operator to think differently about how he or she will use it, and to
design virtual components in a different manner for each.
There are many producers of the lower-end 2D systems, including a number of free and
open source programs. These provide an approach to the drawing process without all the fuss
over scale and placement on the drawing sheet that accompanied hand drafting, since these can
be adjusted as required during the creation of the final draft.
3D wireframe is basically an extension of 2D drafting. Each line has to be manually inserted
into the drawing. The final product has no mass properties associated with it and cannot have
features directly added to it, such as holes. The operator approaches these in a similar fashion to
the 2D systems, although many 3D systems allow using the wireframe model to make the final
engineering drawing views.
3D "dumb" solids (programs incorporating this technology include AutoCAD and CadKey)
are created in a way analogous to manipulations of real world objects. Basic three-dimensional
geometric forms (prisms, cylinders, spheres, and so on) have solid volumes added or subtracted
from them, as if assembling or cutting real-world objects.
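The add-and-subtract modelling of "dumb" solids can be illustrated with a volume calculation; the part dimensions below are invented purely for illustration:

```python
import math

# "Dumb" solid modelling adds or subtracts basic volumes, much like
# assembling or cutting real-world objects.
def box_volume(length, width, height):
    return length * width * height

def cylinder_volume(radius, height):
    return math.pi * radius ** 2 * height

# A 100 x 60 x 20 mm plate with a 10 mm radius hole drilled through it:
plate = box_volume(100, 60, 20)
hole = cylinder_volume(10, 20)
part = plate - hole  # subtract the cylinder from the prism
print(round(part))   # remaining material, in cubic millimetres
```

Such solids are "dumb" because the result stores only the final geometry; unlike parametric modelling, the hole cannot later be resized as a feature.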
The Effects of CAD
Starting in the late 1980s, the development of readily affordable Computer-Aided Design
programs that could be run on personal computers began a trend of massive downsizing in
drafting departments in many small to mid-size companies. As a general rule, one CAD operator
could readily replace at least three to five drafters using traditional methods. Additionally, many
engineers began to do their own drafting work, further eliminating the need for traditional
drafting departments. This trend mirrored that of the elimination of many office jobs traditionally
performed by a secretary as word processors, spreadsheets, databases, etc. became standard
software packages that "everyone" was expected to learn.
Another consequence had been that since the latest advances were often quite expensive,
small and even mid-size firms often could not compete against large firms who could use their
computational edge for competitive purposes. Today, however, hardware and software costs have
come down. Even high-end packages work on less expensive platforms and some even support
multiple platforms. The costs associated with CAD implementation now are more heavily weighted
to the costs of training in the use of these high-level tools, the cost of integrating
CAD/CAM/CAE/PLM enterprise-wide across multi-CAD and multi-platform environments, and the costs of
modifying design workflows to exploit the full advantage of CAD tools.
CAD vendors have effectively lowered these training costs. These methods can be split into three categories:
1. Improved and simplified user interfaces. This includes the availability of “role” specific tailorable
user interfaces through which commands are presented to users in a form appropriate to their
function and expertise.
2. Enhancements to application software. One such example is improved design-in-context, through
the ability to model/edit a design component from within the context of a large, even multi-CAD,
active digital mockup.
3. User oriented modeling options. This includes the ability to free the user from the need to
understand the design intent history of a complex intelligent model.
The term graphic design can refer to a number of artistic and professional disciplines which
focus on visual communication and presentation. Various methods are used to create and
combine symbols, images and words to create a visual representation of ideas and messages. A
graphic designer may use typography, visual arts and page layout techniques to produce the final
result. Graphic design often refers to both the process (designing) by which the communication is
created and the products (designs) which are generated.
Common uses of graphic design include magazines, advertisements and product
packaging. For example, a product package might include a logo or other artwork, organized text
and pure design elements such as shapes and color which unify the piece. Composition is one of
the most important features of graphic design, especially when using pre-existing materials or diverse elements.
While Graphic Design as a discipline has a relatively recent history, graphic design-like
activities span the history of humankind: from the caves of Lascaux, to Rome's Trajan's Column to
the illuminated manuscripts of the Middle Ages, to the dazzling neons of Ginza. In both this
lengthy history and in the relatively recent explosion of visual communication in the 20th and 21st
centuries, there is sometimes a blurring of the distinction and an overlapping between advertising art, graphic
design and fine art. After all, they share many of the same elements, theories, principles,
practices and languages, and sometimes the same benefactor or client. In advertising art the
ultimate objective is the sale of goods and services. In graphic design, "the essence is to give
order to information, form to ideas, expression and feeling to artifacts that document human experience".
Design can also aid in selling a product or idea through effective visual communication. It is
applied to products and elements of company identity like logos, colors, packaging, and text.
Together these are defined as branding (see also advertising).
Branding has increasingly become important in the range of services offered by many
graphic designers, alongside corporate identity, and the terms are often used interchangeably.
Graphic design is also applied to layout and formatting of educational material to make the
information more accessible and more readily understandable.
Graphic design is applied in the entertainment industry in decoration, scenery, and visual
story telling. Other examples of design for entertainment purposes include novels, comic books,
opening credits and closing credits in film, and programs and props on stage. This could also
include artwork used for t-shirts and other items screen printed for sale.
Sewing or stitching or Tailoring is the fastening of cloth, leather, furs, bark, or other
flexible materials, using needle and thread. Its use is nearly universal among human populations
and dates back to Paleolithic times (30,000 BCE). Sewing predates the weaving of cloth.
Sewing is used primarily to produce clothing and household furnishings such as curtains,
bedclothes, upholstery, and table linens. It is also used for sails, bellows, skin boats, banners, and
other items shaped out of flexible materials such as canvas and leather.
Most sewing in the industrial world is done by machines. Pieces of a garment are often first
tacked together. The machine has a complex set of gears and arms that pierces thread through
the layers of the cloth and semi-securely interlocks the thread.
Some people sew clothes for themselves and their families. More often home sewers sew
to repair clothes, such as mending a torn seam or replacing a loose button. A person who sews
for a living is known as a seamstress (from seams-mistress) or seamster (from seams-master),
dressmaker, tailor, garment worker, machinist, or sweatshop worker.
"Plain" sewing is done for functional reasons: making or mending clothing or household
linens. "Fancy" sewing is primarily decorative, including techniques such as shirring, smocking,
embroidery, or quilting.
Sewing is the foundation for many needle arts and crafts, such as appliqué and canvas work.
While sewing is sometimes seen as a semi-skilled job, flat sheets of fabric with holes and slits
cut into the fabric can curve and fold in complex ways that require a high level of skill and
experience to manipulate into a smooth, ripple-free design. Aligning and orienting patterns
printed or woven into the fabric further complicates the design process. Once a clothing designer
with these skills has created the initial product, the fabric can then be cut using templates and
sewn by manual laborers or machines.
Industrial design is a combination of applied art and applied science, whereby the
aesthetics and usability of mass-produced products may be improved for marketability and
production. The role of an industrial designer is to create and execute design solutions for
problems of form, usability, user ergonomics, engineering, marketing, brand development and sales.
The term "industrial design" is often attributed to the designer Joseph Claude Sinel in 1919
(although he himself denied it in later interviews) but the discipline predates that by at least a
decade. Its origins lay in the industrialization of consumer products. For instance the Deutscher
Werkbund, founded in 1907 and a precursor to the Bauhaus, was a state-sponsored effort to
integrate traditional crafts and industrial mass-production techniques, to put Germany on a
competitive footing with England and the United States.
(Illustration: Western Electric model 302 telephone, found almost universally in the United States
from 1937 until the introduction of touch-tone dialing.)
General Industrial Designers are a cross between an engineer and an artist. They study
both function and form, and the connection between product and the user. They do not design
the gears or motors that make machines move, or the circuits that control the movement, but
they can affect technical aspects through usability design and form relationships. And usually,
they partner with engineers and marketers, to identify and fulfill needs, wants and expectations.
In Depth "Industrial Design (ID) is the professional service of creating and developing
concepts and specifications that optimize the function, value and appearance of products and
systems for the mutual benefit of both user and manufacturer" according to the IDSA (Industrial
Designers Society of America).
Design, itself, is often difficult to define to non-designers because the meaning accepted by
the design community is not one made of words. Instead, the definition is created as a result of
acquiring a critical framework for the analysis and creation of artifacts. One of the many accepted
(but intentionally unspecific) definitions of design originates from Carnegie Mellon's School of
Design, "Design is the process of taking something from its existing state and moving it to a
preferred state." This applies to new artifacts, whose existing state is undefined and previously
created artifacts, whose state stands to be improved.
The Development of Industrial Management
Industrial management is the term applied to highly organized modern methods of carrying on
industrial, especially manufacturing, operations. Before the Industrial Revolution people worked
with hand tools, manufacturing articles in their own homes or in small shops. In the third quarter
of the 18th century steam power was applied to machinery, and people and machines were brought
together under one roof in factories, where the manufacturing process could be supervised. This
was the beginning of shop management. In the next hundred years factories grew rapidly in size,
in degree of mechanization, and in complexity of operation. The growth, however, was
accompanied by much waste and inefficiency. In the United States many engineers, spurred by
the increased competition of the post-Civil War era, began to seek
ways of improving plant efficiency.
The first sustained effort in the direction of improved efficiency was made by Frederick
Winslow Taylor, an assistant foreman in the Midvale Steel Company, who in the 1880s undertook
a series of studies to determine whether workers used unnecessary motions and hence too much
time in performing operations at a machine. Each operation required to turn out an article or part
was analyzed and studied minutely, and superfluous motions were eliminated. Records were kept
of the performance of workers and standards were adopted for each operation. The early studies
resulted in a faster pace of work and the introduction of rest periods.
Industrial management also involves studying the performance of machines as well as
people. Specialists are employed to keep machines in good working condition and to ensure the
quality of their production. The flow of materials through the plant is supervised to ensure that
neither workers nor machines are idle. Constant inspection is made to keep output up to
standard. Charts are used for recording the accomplishment of both workers and machines and
for comparing them with established standards. Careful accounts are kept of the cost of each
operation. When a new article is to be manufactured it is given a design that will make it suitable
for machine production, and each step in its manufacture is planned,
including the machines and materials to be used.
Modern Trends of Management
The principles of scientific management have been gradually extended to every department
of industry, including office work, financing, and marketing. Soon after 1910 American firms
established the first personnel departments, and eventually some of the larger companies took
the lead in creating environments conducive to worker efficiency. Safety devices, better
sanitation, plant cafeterias, and facilities for rest and recreation were provided, thus adding to the
welfare of employees and enhancing morale. Many such improvements were made at the
insistence of employee groups, especially labor unions.
Over the years, workers and their unions also sought and often won higher wages and
increased benefits, including group health and life insurance and liberal retirement pensions.
During the 1980s and 1990s, however, cutbacks and downsizing in many American businesses
substantially reduced many of these benefits. Some corporations permit employees to buy stock;
others make provision for employee representation on the board of directors or on the shop
grievance committee. Many corporations provide special opportunities for training and promotion
for workers who desire advancement, and some have made efforts to solve such difficult
problems as job security and a guaranteed annual wage.
Modern technological devices, particularly in the areas of computers, electronics,
thermodynamics, and mechanics, have made automatic and semiautomatic machines a reality.
The development of such automation is bringing about a second industrial revolution and is
causing vast changes in commerce as well as the way work is organized.
Such technological changes and the need to improve productivity and quality of products in
traditional factory systems also changed industrial management practices. In the 1960s Swedish
automobile companies discovered that they could improve productivity with a system of group
assembly. In contrast to older manufacturing techniques, in which a worker was responsible for
assembling only one part of the car, group assembly gave a group of workers the responsibility
for assembling an entire car.
The system was also applied in Japan, where managers developed a number of other
innovative systems to lower costs and improve the quality of products. One Japanese innovation,
known as quality circles, allowed workers to offer management suggestions on how to make
production more efficient and to solve problems. Workers were also given the right to stop the
assembly line if something went wrong, a sharp departure from practice in U.S. factories.
Computer Science and Management School
How did the computer evolve and where did it all start?
The computer began with a machine that could calculate mathematical problems: what we would
call a calculator today, but what Charles Babbage in 1833 knew as the Difference Engine. The
machine's main purpose was to calculate astronomical tables, but it was never built because the
government ignored his ideas. Babbage then conceived the Analytical Engine, which he claimed
could perform any mathematical calculation. Portions of the machine were built, but they
contained too many defects for it to carry out much real mathematics.
The first computer cost over a million dollars, and the first personal computer cost around 10,000
dollars. The computer system used punch cards to record numerical data. The computer took up
a whole room: it contained some 19,000 vacuum tubes, weighed over 30 tons, and consumed over
200 kilowatts of power. By comparison, the first calculator had a 5-horsepower motor, measured
2 feet by 51 feet, weighed 5 tons, and contained hundreds of miles of wiring, yet it could do less
mathematics than a calculator we use today.
Today over 60% of metropolitan families in the U.S. have access to computers. Likewise,
General Motors has one computer for every two employees in its workplaces. The Microsoft
Corporation hit it big with its operating system, and 1992 brought three major milestones:
Microsoft's stock reached a record high of $113 a share; it shipped Windows 3.1, which sold a
record number of copies; and it separated from IBM to become an independent company. With
Microsoft in control, it sent out over 1,000 upgraded computer components from 1990 on.
In 2006 computer statistics were released that astounded the industry: "The annual search
revenue was around $4,000,000,000; three-quarters of Americans spent 12 hours a week on the
computer; spam on the Internet increased 60%; 70% of Americans said they would rather shop
online than go to a store; and Windows' newest release, XP, had 50 million lines of code, which
grows over 20% more each year." As soon as the business world noticed the drastic change in
computers in the 1990s, a revolution in the way business was done was soon under way.
Today we use computers every day without thinking about how it all started, or about the years
of hard work behind every program, operating system, and website that make a computer what
it is today.
What is a computer virus?
Computer viruses are small software programs that are designed to spread from one computer to
another and to interfere with computer operation. A virus might corrupt or delete data on your
computer, use your e-mail program to spread itself to other computers, or even erase everything
on your hard disk.
Viruses are often spread by attachments in e-mail or instant messages. That is why it is
essential that you never open an e-mail attachment unless you know who it is from and
you are expecting it. Viruses can be disguised as attachments of funny images, greeting cards, or
audio and video files.
Viruses also spread through downloads on the Internet. They can be hidden in illicit software or
other files or programs you might download. To help avoid viruses, it is essential that you keep
your computer current with the latest updates and antivirus tools, stay informed about recent
threats, and follow a few basic rules when you surf the Internet, download files, and open attachments.
Once a virus is on your computer, its type or the method it used to get there is not as important
as removing it and preventing further infection.
Viruses may take several forms. The two principal ones are boot-sector viruses and file viruses,
but there are others.
- Boot-sector virus: The boot sector is that part of the system software containing most of
the instructions for booting, or powering up, the system. The boot sector virus replaces
these boot instructions with some of its own. Once the system is turned on, the virus is
loaded into main memory before the operating system. From there it is in a position to
infect other files.
- File virus: File viruses attach themselves to executable files, those that actually begin a
program. (These files have extensions .com and .exe.) When the program is run, the virus
starts working, trying to get into main memory and infect other files.
Recent Trends in Supporting Your Computer with Modern Hardware
Recently computer hardware has become one of the most flourishing industries in the
world. As more and more people become familiar with computer technology, the
demand for hardware has grown enormously.
A number of companies handle computer hardware sales and meet the demands of a
growing body of computer-literate customers. Apart from hardware sales, these
companies also provide essential computer support, which is needed by almost all
computerized organizations irrespective of their location or size. These companies
usually offer expertise in all kinds of computer hardware and provide hardware
support services at highly competitive rates. Because competition in the market is
cutthroat, different companies offer various kinds of specialized service at rates
customers can easily afford.
Some big, reputable companies such as IBM, HP, Microsoft, and Apple have their own
websites, which offer wide-ranging support services and hardware sales to their
customers for the products they sell. All these brands have service centers, online PC
support, tutorials, tips, and FAQs all over the world, through which they offer the
corresponding hardware support. Their wide range of support covers topics such as
recovery and backup, brand components, battery-related issues, maintenance and
performance of the brand's hardware and software, the security features offered with
them, and many more.
In most metro cities there are a number of companies involved in hardware sales.
Most of these shops have hardware engineers and technicians who are trained to
solve all kinds of PC-related issues and can therefore provide the necessary PC
support whenever you need it. Depending on its terms, a computer support service
can also offer maintenance and repair for a certain period of time.
Moreover, a number of websites now offer online PC support tutorials, mostly created
by award-winning professionals, authors, and technology experts. These websites also
offer reliable computer support to customers. You may also visit these websites to
download support utilities by simply registering on them.
How to Clean Your Computer
By: Sarah Jones | 23/07/2009 | Security
If you are a computer user and want your computer to run smoothly for a long time, you need to
maintain it properly and clean all your programs, files, and applications regularly. If you wish,
you can send your PC for cleaning every six months.
Spyware Identification and Elimination with Technical Experts
By: Sarah Jones | 20/07/2009 | Security
Nowadays many people turn to the Internet to search for whatever they need. It saves a lot
of time, and the information is often quite reliable. However, the process of surfing the
internet and downloading information from different websites can invite viruses, spyware, and
malware onto your computer. Once viruses, spyware, or other malware are installed on your
PC, they can steal your personal online information by monitoring your keystrokes.
Spyware, Virus, Malware – Threat to the Online Identity
By: Sarah Jones | 20/07/2009 | Security
It can be quite disappointing if a computer that used to run very fast until a few days ago
suddenly starts running like a snail, frequently restarting and freezing. It can get more worrying
if you cannot remember what went wrong with the PC. All you can remember is that while
surfing the internet you clicked on some ads that suddenly appeared on the screen of your PC.
Speed up Windows XP and Vista while starting up
By: Sarah Jones | 18/07/2009 | Software
Does your computer take forever to start up? It is one of the most common problems a computer
user has to face, and it can get irritating if you have a deadline to meet at work.
Get Rid of Adware and Spyware
By: Sarah Jones | 18/07/2009 | Software
There are various kinds of “infections” or “intrusions” that can affect a modern-day computer,
including malware and viruses. All types of malicious software, which includes adware and spyware,
are known as malware.
Spyware and Adware removers to Boost up PC performance
By: Sarah Jones | 18/07/2009 | Software
It is a widely accepted truth that spyware and adware do a lot of harm to a computer, but most
computer users do not know just how harmful they actually are. At the very least, they kill the
speed of your PC.
Speed up PC without upgrading
By: Sarah Jones | 18/07/2009 | Software
The computer is one of the most popular devices in the world and sits at the top of the priority
list for many people. Most people cannot work without a computer, be it at home, at school, or
at work. But using a computer and maintaining it properly are two completely different things.
Robots Commanded by Human Thought
Honda has recently developed new interface technology that allows a person to control the
Asimo robot merely by thinking. The interface is called a BMI (brain-machine interface) and was
produced together with the Advanced Telecommunications Research Institute International
(ATR) and Shimadzu Corporation. It consists of a sensor-laden helmet that measures the user's
brain activity and a computer that analyzes the thought patterns and relays them as wireless
commands to the robot.
When the user thinks of moving his or her right hand, the pre-programmed Asimo responds
several moments later by raising its right arm. Similarly, Asimo raises its left arm when the
person imagines moving their left hand, it begins to walk when the person thinks of moving
their legs, and it brings its hand up in front of its mouth when the person thinks of moving
their tongue.
The high-precision BMI technology relies on three different kinds of brain-activity measurement:
- EEG (electroencephalography) sensors measure the slight variations in electrical potential on the
scalp that occur while imagining
- NIRS (near-infrared spectroscopy) sensors measure changes in cerebral blood flow
- Newly developed information-extraction technology is used to process the complex data from these
two kinds of sensors, resulting in a more accurate signal.
The BMI system has an accuracy rate of more than 90%.
Honda has been conducting BMI research and development with ATR since 2005. It is examining the
possibility of one day using this kind of interface technology with AI and robotics to
produce devices that users can operate without having to move at all.
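The pipeline described above, in which classified brain activity is relayed to the robot as a command, can be illustrated with a simple mapping. Honda's actual software is not public, so every function name and label below is invented for illustration:

```python
# Toy sketch of the BMI pipeline: a label produced by classifying EEG/NIRS
# readings is mapped to a robot command. All names here are invented.

ACTIONS = {
    "right_hand": "raise right arm",
    "left_hand": "raise left arm",
    "legs": "start walking",
    "tongue": "bring hand to mouth",
}

def dispatch(thought_label: str, confidence: float, threshold: float = 0.9) -> str:
    """Relay a classified thought to the robot only if confidence is high.

    The article reports better than 90% accuracy, so a 0.9 threshold is
    used here as an illustrative gate against misclassified thoughts.
    """
    if confidence < threshold or thought_label not in ACTIONS:
        return "no action"
    return ACTIONS[thought_label]

print(dispatch("right_hand", 0.95))  # raise right arm
print(dispatch("legs", 0.50))        # no action
```

The point of the confidence gate is that a brain-signal classifier is noisy: it is safer for the robot to do nothing than to act on an uncertain reading.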
Possible Coming Attractions: From Gallium Arsenide to Nanotechnology to Biochips
Future processing technologies may use optical processing, nanotechnology and biochips.
The old theological question of how many angels could fit on the head of a pin has today become
the technological question of how many circuits could fit there. Computer developers are
obsessed with speed, constantly seeking ways to promote faster processing. Some of the most
promising directions, already discussed, are RISC chips and parallel processing. Some other
research paths being explored are the following:
- Opto-electronic processing: Today’s computers are electronic; tomorrow’s might be opto-
electronic, using light, not electricity. With opto-electronic technology, a machine using
lasers, lenses, and mirrors would represent the on-and-off codes of data with pulses of light.
Light is much faster than electricity. Indeed, fiber-optic networks, which consist of hair-thin
glass fibers, can move information at speeds 3000 times faster than conventional networks.
However, the signals get bogged down when they have to be processed by silicon chips. Opto-
electronics chips would remove that bottleneck.
- Nanotechnology: Nanotechnology, nanoelectronics, nanostructures, and nanofabrication
all start with a measurement known as a nanometer. A nanometer is a billionth of a meter,
which means we are operating at the level of atoms and molecules. A human hair is
approximately 100,000 nanometers in diameter. (Nanotechnology is a science based on
using molecules to create tiny machines to hold data or perform tasks. Experts attempt to
do “nanofabrication” by building tiny “nanostructures” one atom or molecule at a time.
When applied to chips and other electronic devices, the field is called “nanoelectronics.”)
- Biotechnology: A final possibility is using biotechnology to grow cultures of bacteria, such
as one that, when exposed to light, emits a small electrical charge. The properties of this
“biochip” could be used to represent the on-off digital signals used in computing.
Imagine millions of nanomachines grown from microorganisms processing information at the
speed of light and sending it over far-reaching pathways. What kind of changes could we
expect with computers like these?
Human-Biology Input Devices
Human biology input devices include biometric systems, line-of-sight systems, cyber gloves and
body suits, and brainwave devices.
Characteristics and movements of the human body, when interpreted by sensors, optical
scanners, voice recognition, and other technologies, can become forms of input. Some examples
are as follows:
- Biometric systems: Biometric security devices identify a person through a fingerprint, voice
intonation, or other biological characteristic. For example, retinal-identification devices use
a ray of light to identify the distinctive network of blood vessels at the back of one’s
eyeball. Biometric systems are used in lieu of typed passwords to identify people
authorized to use a computer system.
- Line-of-sight systems: Line-of-sight systems enable a person to use his or her eyes to
“point” at the screen, a technology that allows physically handicapped users to direct a
computer. This is accomplished by a video camera mounted beneath the monitor in front
of the viewer. When the user looks at a certain place on the screen, the video camera and
computer translate the area being focused on into screen coordinates.
- Cyber gloves and body suits: Special gloves and body suits- often used in conjunction with
“virtual reality” games (described shortly) - use sensors to detect body movements. The
data for these movements is sent to a computer system. Similar technology is being used
for human-controlled robot hands, which are used in nuclear power plants and hazardous-waste sites.
- Brainwave devices: Perhaps the ultimate input device analyzes the electrical signals of the
brain and translates them into computer commands. Experiments have been successful in
getting users to move a cursor on the screen through sheer power of thought. Other
experiments have shown users able to type a letter by slowly spelling out the words in their
heads. Although there is a very long way to go before brainwave input technology becomes
practicable, the consequences could be tremendous, not only for handicapped people but for everyone.
Display screens are either CRT (cathode-ray tube) or flat-panel displays. CRTs use a vacuum tube
like that in a TV set. Flat-panel displays are thinner, weigh less, and consume less power but are
not as clear. Flat-panel displays are liquid-crystal display (LCD), electroluminescent (EL) display,
or gas-plasma display. Users must decide about screen clarity, monochrome versus color, and text
versus graphics (character-mapped versus bitmapped). Various video display adapters (such as
VGA, SVGA, and XGA) allow various kinds of resolution and colors.
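The adapters mentioned above are conventionally associated with standard resolutions: VGA with 640x480 pixels, SVGA with 800x600, and XGA with 1024x768 (these figures are standard values, not stated in the text). A bitmapped display must store every pixel, so a short calculation shows how memory needs grow with resolution:

```python
# Pixel counts and framebuffer sizes for the standard resolutions
# conventionally associated with each adapter. The resolutions are
# standard values; the 8-bit color depth (256 colors) is chosen here
# purely for illustration.

adapters = {
    "VGA": (640, 480),
    "SVGA": (800, 600),
    "XGA": (1024, 768),
}

bits_per_pixel = 8  # 256 colors

for name, (width, height) in adapters.items():
    pixels = width * height
    kilobytes = pixels * bits_per_pixel / 8 / 1024  # bytes -> KB
    print(f"{name}: {pixels} pixels, {kilobytes:.0f} KB framebuffer")
```

Doubling both dimensions quadruples the pixel count, which is why higher-resolution adapters demanded more video memory.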
Display screens – also variously called monitors, CRTs, or simply screens – are output devices that
show programming instructions and data as they are being input and information after it is
processed. Sometimes a display screen is also referred to as a VDT, for video display terminal,
although technically a VDT includes both screen and keyboard. The size of a screen is measured
diagonally from corner to corner in inches, just like television screens. For desktop
microcomputers, 14-inch screens are a common size. Portable computers of the notebook and
subnotebook size may have screens ranging from 7.4 inches to 10.4 inches. Pocket-size
computers may have even smaller screens. To give themselves a larger screen size, some
portable-computer users buy a larger desktop monitor (or a separate “docking station”) to which
the portable can be connected. Near the display screen are control knobs that, as on a television
set, allow you to adjust brightness and contrast.
Display screens are of two types: cathode-ray tubes and flat-panel displays.
Cathode-ray tubes (CRTs) – The most common form of display screen is the CRT. A CRT, for
cathode-ray tube, is a vacuum tube used as a display screen in a computer or video display
terminal. This same kind of technology is found not only in the screens of desktop computers but
also in television sets and in flight-information monitors in airports.
Flat-panel displays – If CRTs were the only existing technology for computer screens, we would
still be carrying around 25-pound “luggables” instead of lightweight notebooks, subnotebooks,
and pocket PCs. CRTs provide bright, clear images, but they consume space, weight, and power.
Compared to CRTs, flat-panel displays are much thinner, weigh less, and consume less power.
Thus, they are better for portable computers.
The first Robot Olympics was held in Toronto in November 1991. “Robots competed for honors in
15 events – jumping, rolling, fighting, climbing, walking, racing against each other, and solving
problems,” reported writer John Malyon. For instance, in the Micromouse race, robots had to
negotiate a standardized maze in the shortest possible time.
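The Micromouse task, finding the shortest route through a maze, is classically solved with a breadth-first search, which explores the maze outward from the start one step at a time and is guaranteed to find a shortest path. A minimal sketch on a hypothetical grid maze ("#" for walls, "." for open cells; the maze itself is invented for illustration):

```python
# Breadth-first search over a grid maze: finds the length of a shortest
# path, the same problem a Micromouse robot must solve.
from collections import deque

def shortest_path_length(maze, start, goal):
    """Return the number of steps on a shortest path from start to goal,
    or -1 if the goal cannot be reached."""
    rows, cols = len(maze), len(maze[0])
    queue = deque([(start, 0)])  # (cell, distance from start)
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == "." and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1

maze = [
    "..#.",
    ".#..",
    "....",
]
print(shortest_path_length(maze, (0, 0), (2, 3)))  # 5
```

A real Micromouse cannot see the whole maze at once, so it typically interleaves exploration with this kind of shortest-path computation, but the underlying search is the same.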
A robot is an automatic device that performs functions ordinarily ascribed to human beings or that
operates with what appears to be almost human intelligence. Actually, robots are of several kinds-
industrial robots, perception systems, and mobile robots, for example. All are the objects of study
of robotics, a field that attempts to develop machines that can perform work normally done by
people. Robotics in turn is a subset of artificial intelligence, a family of technologies that attempts
to develop computer systems that can mimic or simulate human thought processes and actions.
Robots are of interest to us as output devices because they can perform computer-driven
electromechanical functions that the other devices so far described cannot. For example, a robot
resembling a miniature tank was able to explore the inside of the Great Pyramid of Giza in Egypt.
Equipped with treads bottom and top, and carrying lights and television camera, the robot was
able to probe an 8-inch-square 63-yard-long shaft to a formerly hidden chamber in the pyramid. A
robot called ScrubMate – equipped with computerized controls, ultrasonic “eyes,” sensors, batteries,
three different cleaning and scrubbing tools, and a self-squeezing mop – can clean bathrooms.
Rosie the HelpMate delivers special-order meals from the kitchen to nursing stations in hospitals.
Robodoc is used in surgery to bore the thighbone so that a hip implant can be attached. Robots
are also used for dangerous jobs such as fighting oil-well fires, doing nuclear inspections and
cleanups, and checking for mines and booby traps. When equipped with video and two-way
audio, they can also be used to negotiate with terrorists.
Fuzzy-logic (a method of dealing with imprecise data and vagueness, with problems that have
many answers rather than one) principles are being applied in another area of AI, neural
networks. The word neural comes from neurons, or brain cells. Neural networks use physical
electronic devices or software to mimic the neurological structure of the human brain. The human
brain is made up of nerve cells (neurons) with a three-dimensional lattice of connections between
them (axons). Electrical connections between nerve cells are activated by synapses. In a
hardware neural network, the nerve cell is replaced by a transistor, which acts as a switch. Wires
connect the cells (transistors) with each other. The synapse is replaced by an electronic
component called a resistor, which determines whether a cell should activate the electricity to
other cells. A software neural network emulates a hardware neural network, although it doesn’t
work as fast.
The essential characteristics of neural networks are as follows:
- Learning: Like a small child, a neural network can be trained to learn by having its mistakes
corrected, just as the human brain learns by making changes in the links (synapses)
between nerve cells.
One writer gives this example: “If you’re teaching the neural network to speak, for
instance, you train it by giving it sample words and sentences, as well as desired
pronunciations. The connections between the electronic neurons gradually change,
allowing more or less current to pass.” The current is adjusted until the system is able to pronounce the words correctly.
How effective are neural networks? One such program learned to pronounce a 20,000-word
vocabulary overnight. Another helped a mutual-fund manager to outperform the stock market by
2.3-5.6 percentage points over three years. At a San Diego hospital emergency room in which
patients complained of chest pains, a neural network program was given the same information
doctors received. It correctly diagnosed patients with heart attacks 97% of the time, compared to
78% for the human physicians.
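The "learning by having its mistakes corrected" described above amounts, at its simplest, to adjusting connection strengths in proportion to the error. The sketch below trains a single artificial neuron, a perceptron, to compute the logical AND function this way; the toy data and learning rate are chosen for illustration, not drawn from the speech example in the text:

```python
# A single artificial neuron learning the logical AND function by
# error correction: each weight is nudged to reduce the mistake,
# loosely analogous to strengthening or weakening synapses.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = bias = 0.0
rate = 0.1  # learning rate: how strongly each mistake adjusts the weights

for _ in range(20):  # repeated passes over the training examples
    for (x1, x2), target in data:
        output = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        error = target - output   # the "corrected mistake"
        w1 += rate * error * x1   # adjust each connection's strength
        w2 += rate * error * x2
        bias += rate * error

print([1 if w1 * a + w2 * b + bias > 0 else 0 for (a, b), _ in data])  # [0, 0, 0, 1]
```

After a handful of passes the weights settle into values that classify all four inputs correctly; the networks mentioned in the text do the same thing with many thousands of neurons instead of one.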
3G refers to the third generation of mobile telephony (that is, cellular) technology. The third
generation, as the name suggests, follows two earlier generations.
The first generation (1G) began in the early 1980s with commercial deployment of Advanced Mobile
Phone Service (AMPS) cellular networks. Early AMPS networks used Frequency Division
Multiple Access (FDMA) to carry analog voice over channels in the 800 MHz frequency band.
The second generation (2G) emerged in the 1990s when mobile operators deployed two competing
digital voice standards. In North America, some operators adopted IS-95, which used Code
Division Multiple Access (CDMA) to multiplex up to 64 calls per channel in the 800 MHz band.
Across the world, many operators adopted the Global System for Mobile communication (GSM)
standard, which used Time Division Multiple Access (TDMA) to multiplex up to 8 calls per channel
in the 900 and 1800 MHz bands.
The International Telecommunications Union (ITU) defined the third generation (3G) of mobile
telephony standards – IMT-2000 – to facilitate growth, increase bandwidth, and support more
diverse applications. For example, GSM could deliver not only voice, but also circuit-switched data
at speeds up to 14.4 Kbps. But to support mobile multimedia applications, 3G had to deliver
packet-switched data with better spectral efficiency, at far greater speeds.
However, to get from 2G to 3G, mobile operators had to make "evolutionary" upgrades to existing
networks while simultaneously planning their "revolutionary" new mobile broadband networks.
This led to the establishment of two distinct 3G families: 3GPP and 3GPP2.
The 3rd Generation Partnership Project (3GPP) was formed in 1998 to foster deployment of 3G
networks that descended from GSM. 3GPP technologies evolved as follows.
• General Packet Radio Service (GPRS) offered speeds up to 114 Kbps.
• Enhanced Data Rates for Global Evolution (EDGE) reached up to 384 Kbps.
• UMTS Wideband CDMA (WCDMA) offered downlink speeds up to 1.92 Mbps.
• High Speed Downlink Packet Access (HSDPA) boosted the downlink to 14Mbps.
• LTE Evolved UMTS Terrestrial Radio Access (E-UTRA) is aiming for 100 Mbps.
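The practical meaning of these data rates is easy to check with arithmetic: divide a file's size in bits by the link rate in bits per second. The sketch below uses the peak rates quoted above; real-world throughput would be considerably lower:

```python
# Time to move a 5-megabyte file at the peak rates quoted above.
# Rates are in bits per second; 1 byte = 8 bits. These are theoretical
# peaks, so actual transfer times would be longer.

rates_bps = {
    "GPRS": 114_000,        # 114 Kbps
    "EDGE": 384_000,        # 384 Kbps
    "WCDMA": 1_920_000,     # 1.92 Mbps
    "HSDPA": 14_000_000,    # 14 Mbps
}

file_bits = 5 * 1_000_000 * 8  # a 5 MB file expressed in bits

for name, rate in rates_bps.items():
    print(f"{name}: {file_bits / rate:.1f} s")
```

The jump from minutes (GPRS) to a few seconds (HSDPA) is exactly the improvement in spectral efficiency and speed that the IMT-2000 standards were defined to deliver.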
Managers are charged with getting work done through people effectively and efficiently.
Effectiveness refers to the achievement of the desired objectives. Thus, if a business’s goal is for
customers to be pleased with its products, an engineering department is effective when it designs
products customers will like. Efficiency refers to minimal use of resources. An efficient engineering
department does its job without wasted time and materials. Note that getting a lot done at a low
cost (efficiency) is not desirable without effectiveness. For business organizations, the
fundamental indicator that they are operating effectively and efficiently is profit.
Managers seek effectiveness through the way they manage resources. Managers acquire and use
three broad categories of resources: physical, organizational, and human capital. Skillfully
managing any of these can improve performance; however, it is interesting to consider whether
one category of resources is most important in giving organizations a sustainable competitive advantage.
An organization’s physical resources include the technology it uses, its plant and equipment, its
geographic location, and its access to raw materials. The money an organization can raise or earn
may also be thought of as part of its physical resources. The management of physical resources
encompasses a variety of activities. The organization needs to acquire technology and equipment
that help it deliver greater value to its customers (in terms of better service, lower cost, or both);
pre-empt or block competitors; or lock in customers by giving them something they cannot get
elsewhere. The organization also needs to select sites, acquire parts and materials, finance its
activities, and dispose of any resources that cease providing enough benefits.
Effectively and efficiently managing physical resources certainly helps an organization’s
performance. Setting up a modern factory, getting financing at a low interest rate, or linking
employees to suppliers with current communications technology can keep costs down, improve
the organization’s goods and services, or both.
However, these resources do not provide a sustainable competitive advantage. Although they are
valuable, other organizations eventually can – and do – use the same tactics, leaving several
organizations on the same footing. Thus, when Whistler, a U.S. consumer electronics company,
was losing market share to Asian competitors, the company concluded it would have to cut
manufacturing costs to its competitors’ levels or move its operations offshore. Whistler chose the
cost-cutting option, but by the time costs were reduced to the desired levels, the same
competitors were gaining market share through product innovations, while Whistler’s management
had stopped focusing on product development. As at Whistler, organizations today need
managers who can manage more than physical resources.
The concept that managers should care about and encourage ethics – principles of morally
acceptable conduct – is not new. However, ethical issues merit special mention for two reasons.
One is that the behavior of managers is under greater scrutiny than in the past. Because people
have more access to information, misdeeds become widely known, greatly damaging the
organization’s reputation, and a good reputation, which can take years to build, can be destroyed
in minutes. In addition, today’s public has high standards for the behavior of managers. This has
resulted not only in customer demands for ethical behavior but also in increased government
regulation of organizational activities.
Given the challenge of ethics in the modern organization, what are managers to do? They should
be aware of situations that have the potential to cause harm. When such situations arise,
managers should identify alternative, less damaging courses of action.
Also, managers can create a climate that encourages ethical behavior by all employees. Creating
such a climate includes identifying situations in which ethical issues may arise, developing policies
governing behavior in such situations, and ensuring that the organization’s rewards (including pay
and praise) reinforce ethical behavior. Formal policies are important; however, if an organization
truly wants to encourage a high standard of behavior, its managers must also model and reward
ethical behavior, not just say they think it is important.
Views of ethics: When people confront issues related to ethics, they need some guides to choose
a course of action. The ultimate decision depends in part on the person’s view of ethics. The usual
views have been summarized as utilitarian, Golden Rule, Kantian, and enlightened self-interest. Of
course, views of ethics are only as important as the behaviors they lead to.
Golden rule: The Golden Rule is a name Christians have given to Jesus’s teaching “do to others as
you would have them do to you” – a principle that is found in most, if not all, world religions. This
view requires identifying various courses of action and choosing the one that treats others the
way you would want to be treated. The ‘others’ to consider are the organization’s stakeholders –
all those who are affected by the organization’s policies and practices. Stakeholders include the
organization’s investors, customers, and employees, among others.
While an organization’s managers are trying to learn what customers need and how to meet that
need, managers at competing organizations are doing the same. In general, competitors are the
organizations that seek to meet the same customer needs. For example, video rental stores and
cable and network television stations serving the same geographic area are all competitors.
Impact of Competitors. Competitors limit the organization’s access to resources. They often
compete for the same inputs (such as talented employees), and they compete for the revenues
from the customers they would like to serve.
The impact of competitors is strongest when barriers to entry are low and buyers are willing to
accept substitutes for the organization’s products.
Given the potential impact of competitors, organizations need information about what competitors
are doing. An organization can help maintain its competitive edge by giving employees at all levels
access to such information. Chef Allen’s, a restaurant located in North Miami Beach, Florida, gives
its servers and cooks an allowance of $50 apiece to dine in any comparable restaurant and report
what they learned. One cook told his colleagues that the elegant meal he ordered was ruined by
being served on cold plates. “He thought more about warming up plates after that,” says Allen
Susser, owner of Chef Allen’s.
Trends in the Competitive Environment: The global nature of modern business has made
competition more complex and challenging for U.S. companies. Competitors from other countries
have used higher quality and lower costs to erode the large market share once held by U.S. firms.
Thus, although U.S. firms used to dominate the worldwide automobile and computer industries,
their share has tumbled in recent decades. And among the dozen largest banks in the world
today, none are U.S. banks.
Managers are especially likely to be surprised by competitors when they have focused their
information gathering exclusively on well-known organizations or on existing technology. For
example, the long-standing broadcast networks – ABC, CBS, and NBC – considered cable TV a
fringe business, so they were unprepared for the success of Turner Broadcasting System and
other cable companies. Furthermore, readership of newspapers has declined as people turn to
cable TV for news and other information. After a decade of this trend, some major newspapers,
including the New York Times and Los Angeles Times, finally began to view their industry more
broadly to encompass all news media.
Planning and Forecasting
Although different organizations will choose different tactics for managing the environment,
managers should begin by trying to understand what is happening in the environment, what is
likely to happen, and what actions will be most beneficial to the organization. To do this,
managers rely on planning and forecasting.
Planning. An organization is most likely to benefit from the opportunities in its environment and
avoid environmental threats if its managers have thought through the possible courses of action
and set goals accordingly. This is the process of planning. To plan effectively, managers begin by
gathering and reviewing information. Gathering information about the dimensions of the
organization’s environment is environmental scanning. To scan the environment, managers and
others in the organization may review government statistics, conduct surveys, and read relevant
magazine, newspaper, and journal articles.
Forecasting. To plan for the future, managers need a sense of what members of the
organization will be doing. Of course, they cannot know for sure what will happen, but they have
to make predictions. Predicting future environmental needs and actions is called forecasting.
Generally, the approach is to use the data gathered in environmental scanning to look for trends
that may extend into the future.
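The approach described above, extending an observed trend into the future, can be sketched with a simple least-squares line fit. This is an illustrative example only; the sales figures and the one-period horizon are hypothetical assumptions, not data from the text.

```python
# A minimal sketch of trend-based forecasting: fit a straight line
# y = a + b*x to historical observations gathered through
# environmental scanning, then extrapolate it forward.
def linear_forecast(values, periods_ahead):
    """Fit a least-squares line to the series and project it ahead."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    # Slope: covariance of x and y divided by variance of x.
    b = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values)) / \
        sum((x - x_mean) ** 2 for x in xs)
    a = y_mean - b * x_mean
    return a + b * (n - 1 + periods_ahead)

# Hypothetical annual sales observations, steadily rising:
history = [100, 110, 120, 130]
print(linear_forecast(history, 1))  # projects the next period: 140.0
```

As the passage that follows cautions, such an extrapolation is only as good as the assumption that the trend continues; a change in the environment will make the forecast at least slightly incorrect.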
No one can be certain about the future, regardless of how sophisticated the forecasting model.
More often than not, changes in the environment will make a forecast at least slightly incorrect.
Today, computer models allow managers to see how changing their assumptions about
environmental conditions will affect forecasts. But not even these models can predict a major
shift in the environment.
Consider the growth of the computer industry. James Fallows, Washington editor of the Atlantic
Monthly, expects it will transform our lives in unexpected ways, some of them negative. For
instance, if only educated people possess computer skills, the growing importance of computers is
likely to widen the economic gap between the well educated and the remainder of society. Yet the
forecasting done by the computer industry has largely been limited to such issues as how many
new products will be purchased and what applications can be devised for them. Fallows describes
a recent speech by Microsoft CEO Bill Gates as “a vision … that boiled down to a picture of lotsa,
lotsa computers in our future lives.” Fallows himself forecasts that this limited vision will breed
suspicion and distrust among a general public that perceives the computer business as being
unwilling or unable to care about the more far-reaching social impact of computers.
Business Level Strategy
An organization also needs business-level strategy for each business unit or product line. In the
case of Disney, this would mean business-level strategies for its amusement parks, its sports
teams, its condos, and so on. At this level, strategy describes how the business unit or product
line will compete for customers. Possible strategies might include investing heavily in research and
development, aggressively adding new products to an existing line, or changing the products in
the line to attract new customers.
A business-level strategy at IBM was to make extensive use of patents to block competitors from
offering comparable mainframe computers. Because this strategy also prevented customers from
running their software on competitor’s machines, it also locked in customers for years until they
were ready to invest in new software. In contrast, when IBM launched its personal computers, it
chose a strategy of getting the products to market quickly. To do so, IBM used so-called open
architecture, which meant that other companies could write compatible software and build
machines that could run the software. Therefore, its strategy for PCs gave the company a less
durable hold on its customers than its mainframe strategy had.
The relationship between corporate- and business-level strategies depends on the size and
complexity of the organization. In small organizations with a single type of product, business-level
strategy may simply extend and elaborate on corporate-level strategy. But a big, diversified
organization like General Electric requires a corporate-level strategy that is broad enough to
encompass all the organization’s groups as well as a business-level strategy for each area of
business in which the organization is involved.
Functional: Each functional department – such as marketing, finance, manufacturing, and
engineering – may devise a strategy for supporting the higher-level strategies. Thus, a production
manager who thinks strategically looks beyond keeping costs down and making workers and
machines more efficient. This manager considers how to support the organization’s strategy by
contributing to the organization’s competitive advantage, say, by adapting processes to particular
customer needs or by offering additional services.
A number of years ago, McDonald’s Corporation (which had a strategy for growth that included
opening more stores) forecast that not enough U.S. teenagers would be available to work in its
stores. The human resource unit therefore supported the corporate-level growth strategy by
recruiting and hiring older, retired people (a growing segment of the population).
Forces for Change
External forces. When the organization’s general or task environment changes, the organization’s
success often rides on its ability and willingness to change as well. The general environment has
social, economic, legal and political, and technological dimensions. Any of these can introduce the
need for change. In recent years, far-reaching forces for change have included developments in
information technology, the globalization of competition, and demands that organizations take
greater responsibility for their impact on the environment.
For Hugo Boss, a German producer of menswear, recent forces for change have been economic
and social. Slow economic growth has forced the company to keep its costs as low as possible. As
a result, Hugo Boss decided to cut its German manufacturing base from 40 percent to 20 percent.
Like other German manufacturers, Hugo Boss moved much of its production to Eastern European
countries including Romania, Slovenia, and the Czech Republic, where labor costs are much lower.
In the social environment, a trend toward more casual dress at work has cut into sales of
traditional Boss suits. The company responded by creating its more casual and colorful Hugo line.
Because the task environment interacts directly with the organization, it is an especially important
source of forces for change. The task environment includes the organization’s customers,
competitors, regulators, and suppliers. A force for change at Russia’s Bolshoi Ballet Academy is
that the government can no longer provide the lavish support it provided in the past. To stay
afloat, the school must consider a broader range of funding sources, notably grants from local
businesspeople. The Bolshoi is also seeking special concessions from the Russian legislature,
including tax breaks for contributing to cultural institutions. In addition, the Bolshoi seeks to raise
hard currency by touring internationally.
For Tata Iron & Steel Company, foreign investors (suppliers of capital) are a new force for
change. In the past, Tata emphasized the creation of jobs in its community of Jamshedpur, a city
in eastern India. Tata’s 78,000 workers receive lifetime employment, along with free housing,
education, and medical care. The company, in turn, has benefitted from a complete lack of strikes
in 60 years. But investors interested in Tata have asked how the company might improve its profit
margin of only 3.7 percent. Notes Tata’s managing director, Jamshed Irani, “We will now be
forced to balance loyalty against productivity.”
Individual and Organizational Values
An organization upholds certain values as a part of its culture. At Levi’s and at Ben & Jerry’s
Homemade, the values include ethical management practices and employee empowerment. The
individuals in an organization also have sets of values. These individual values may vary in the
extent to which they resemble the organizational values, especially in an organization with a
diverse work force. Such differences may be minimized by the tendency of individuals to accept
jobs with organizations that demonstrate values matching their own.
Because values influence behavior, organizations need employees who share the values of the
organizational culture. Recognizing this, Jack Welch, CEO of General Electric, recently wrote:
Managers who hit their numbers [meet performance objectives] and live by GE’s values
expect to get promoted. Managers who don’t hit their numbers but live by GE’s values can expect
a second chance. Managers who don’t hit their numbers and don’t live by GE’s values will be fired.
As for managers who hit their numbers but don’t live by GE’s values? Well, they may be financial
geniuses welcome at a lot of other companies, but we no longer want those types of managers in
this company.
When an individual’s values differ from the prevailing values of the organizational culture, conflict
results. In some cases, the individual faces ethical dilemmas. For example, suppose a manager
values honesty above financial gain, but the organization places higher value on maintaining
profitability with whatever lawful tactics are necessary. This manager could at times be expected
to implement strategies that (even if legal) do not meet the manager’s standards for honesty.
Ideally, managers and other employees will find solutions that do not compromise either set of
values. When surgeon and biochemist P. Roy Vagelos left Washington University to head Merck’s
research labs, he faced what he calls “the challenge of a lifetime”: “I needed to hold on to the
values that were important to me as a physician and blend them with Merck’s need to remain
profitable.” By leading the development of numerous vaccines and medicines, Vagelos helped
Merck make a major difference in the health of millions of people and thrive as a company.
Job Satisfaction and Job Performance
Why should managers care whether their employees are satisfied with their jobs? For some
managers, this is a matter of ethics or of consideration for others. In terms of strategic
management, we need to know whether a satisfied employee will contribute more to achieving
objectives than a dissatisfied employee will. For this reason, researchers have investigated
whether there is a link between job satisfaction and job performance.
This research tested the once-widespread assumption that satisfaction is related to job
performance. Researchers looked for a correlation between the two; that is, they investigated
whether raising job satisfaction would lead to an increase in job performance. Overall, that
research failed to find such a link, and by the 1950s the consensus was that satisfaction and
performance are unrelated.
More recent investigations have considered whether satisfaction and performance may be related
in some other, less direct way, and they developed a model. A test of the model on 1,200 employees
in four organizations found overall support for the model. Although not all relationships in the
model were found to be highly significant, the data supported the general concept of a model in which
job satisfaction and job performance covary subject to motivational factors. Other research also
has identified covariances between the two. For example, a review of over 200 studies in which
psychologically-based interventions sought to raise productivity and performance found that 87
percent raised productivity by some measure and three-quarters also resulted in greater job
satisfaction. This evidence is also consistent with our model of individual differences, in which we
show the effect of attitudes on outcomes to be indirect, via motivation.
As a practical matter for managers, this means that job satisfaction is important, but that
managers must view it in the context of motivation. If the motivational factors in an organization
do not fit the pattern called for by the model, they can reduce, eliminate, or even reverse the
relationship between satisfaction and performance. For example, making goals more challenging
could increase performance, but if rewards do not rise as well, job satisfaction could decline
because employees believe the reward system is unfair.
Organizations seek to directly influence employee behavior through reward systems. An
organization’s reward system consists of its formal and informal mechanisms for defining the
kinds of behavior desired, evaluating performance, and rewarding good performance. Most
reward systems offer pay, benefits, and promotions when behavior meets or exceeds
performance standards. These rewards are called extrinsic because they are outside any
satisfaction obtained from the job itself.
A recent article in the Harvard Business Review ignited considerable debate by maintaining that
extrinsic rewards are not only ineffective but can actually undermine quality, job commitment,
and organizational citizenship. According to this view, incentives such as pay linked to output
encourage employees to focus on the reward, rather than on the needs of the organization. Real
commitment requires the kind of leadership and organizational culture that foster positive
attitudes toward the organization, its objectives, and its customers. Furthermore, reliance on
extrinsic rewards may undermine morale by causing employees to feel manipulated. When Emery
Air Freight instituted a system of management through positive reinforcement, productivity and
management skills improved, but managers initially faced resistance to the program. Later,
trainers at the company concluded that the resistance arose from employees’ belief they were
being manipulated. Consistent with this view, an analysis of 98 previously conducted studies
found training and goal setting had more impact on productivity than compensation linked to
output.
Given these criticisms, why do managers continue to use rewards such as financial incentives? If
used appropriately, extrinsic rewards can support other efforts to influence motivation. When
rewards are designed to enhance employees’ perception that they make a valued contribution,
they can build employee satisfaction. As at Nordstrom’s, such rewards are part of a culture and
management system in which contributing to the organization’s success is a source of pride and
accomplishment. Lantech, a manufacturer based in Louisville, Kentucky, uses a profit-sharing plan
not to induce certain behaviors but as fair payment for work performed. Lantech’s president
believes its employees are motivated by a culture in which they feel “empowered
and involved in continually improving our customer satisfaction.” With these limitations in mind,
managers wishing to devise an appropriate reward system should offer rewards that are valued,
clearly linked to desired behaviors, and perceived as equitable.
The Significance of Leadership
A leader with a vision focuses followers on something bigger than themselves yet presented in a
way they can understand and remember. This is especially important when organizational
objectives are complex and obscure or have uncertain consequences. Distilling a complicated
situation into a clear vision distinguished Ronald Reagan as president.
The distinct role of leadership makes it significant for solving one of the most fundamental
management problems: gaining employee commitment to fulfilling the organization’s mission and
achieving its objectives. Effective leadership can solve that problem when the leader has a
strategic vision – one that involves offering customers something unique and valuable, preparing
the organization to manage change, and drawing on the organization’s unique strengths.
How can the organization find leaders with a strategic vision? How can it create the conditions in
which its people can be such leaders? How (if at all) can managers become such leaders? Many
theories of leadership have attempted to answer questions such as these. Unfortunately, there is
no well-supported theory that broadly describes how leadership works. Most theories and
research have looked only at specific aspects of leadership, and the newest theories have not yet
undergone thorough testing. Until the field of leadership matures, managers are limited to
seeking clues from the various theories in their present state.
The leader’s role: If we view leadership as a meaning-making process, part of the process often
involves designating someone as the leader – perhaps even endowing that person with great power
to direct activities. The person chosen to lead is someone who can eloquently represent the
meaning of the group. An example is Russian political leader Vladimir Zhirinovsky, called by Time
magazine “a touchstone for ordinary Russians’ deepest yearnings and darkest fears.” The leader
also is someone centrally involved in the group – typically someone who has invested much time in
the group, occupies a high position, and is expert in what the group does. Thus, the leader’s
influence results from his or her work in the group rather than from the ability to get other people
to work in the group. This view is consistent with the idea that charismatic leadership depends in
part on the followers’ perceptions of the leader.
School of Geology and Petroleum Engineering
An Invitation to Geology
Geology is the study of the earth. Physical geology, in particular, is concerned with the materials
and physical features of the earth, changes in those features, and the processes that bring them
about. Intellectual curiosity about the way the earth works is one reason for the study of geology.
It is not an isolated discipline but draws on the principles of many other sciences. The earth is a
challenging subject, for it is old, complex in composition and structure, and large in scale. Physical
geology focuses particularly on the physical features of the earth and how they have formed.
Observations suggest that, for the most part, those features are the result of many individually
small, gradual changes continuing over long periods of time, punctuated by occasional, unusual,
cataclysmic events. Shortly after its formation, the earth underwent melting
and compositional differentiation into core, mantle, and crust.
There are also practical aspects to the study of geology. Certain geologic processes and events
can be hazardous, and a better understanding of such phenomena may help us to minimize the
risks they pose. The scientific method is a means of discovering basic scientific principles. The systematic study
of the earth that constitutes the science of geology has existed as an organized discipline for only
about 250 years. It was first formally developed in Europe. Two principal opposing schools of
thought emerged in the eighteenth and nineteenth centuries to explain geologic observation. One,
popularized by James Hutton and later named by Charles Lyell, was the concept of
uniformitarianism. Uniformitarianism comprises the ideas that the surface of the earth has been
continuously and gradually changed and modified over the immense span of geologic time and
that, by studying the geologic processes now active in shaping the earth, we can understand how
it was shaped in the past. It is not assumed that the rates of all processes have been the same
throughout time, but rather that the nature of the processes is similar and that the same physical
principles operate in the present as in the past. The second, contrasting theory was catastrophism.
The catastrophists, led by French scientist Georges Cuvier, believed that a series of immense,
worldwide upheavals were the agents of change and that, between
catastrophes, the earth was static. Violent volcanic eruptions followed by torrential rains and
floods were invoked to explain mountains and valleys and to bury animal populations that later
became fossilized. In between those episodic global devastations, the earth’s surface did not
change, according to catastrophist theory. Catastrophists also believed that entire plant and
animal populations were created anew after each such event, to be wholly destroyed by the next.
For all practical purposes, the earth is a closed system, meaning that the amount of matter in
and on the earth is fixed. No new elements are being added. There is, therefore, an ultimate limit
to how much of any metal we can exploit. There is also only so much land to live on.
The early atmosphere and oceans formed at the same time. Heat from within the earth and from
the sun together drive many of the internal and surface processes that have shaped and modified
the earth throughout its history and that continue to do so. The earliest life forms date back
several billion years. Organisms with hard parts became widespread about 600 million years ago.
Humans only appeared 3 to 4 million years ago, but their large and growing numbers and
technological advances have had significant impacts on natural systems, some of which may not
readily be erased by slower- paced geological processes.
Minerals and Rocks
It is difficult to talk at length about geology without talking about rocks or the minerals of which
they are composed. All natural and most synthetic substances on earth are made from the 90
naturally occurring chemical elements. An element is the simplest kind of chemical; it cannot be
broken down further by ordinary chemical or physical processes.
Chemical elements consist of atoms, which are composed of protons, neutrons, and electrons.
Isotopes are atoms of one element (having, therefore, the same number of protons) with
different numbers of neutrons; chemically, the isotopes of one element behave nearly
identically. Atoms of the same or different elements may bond together. The most
common kinds of bonding in minerals are ionic (resulting from the attraction between oppositely
charged ions) and covalent (involving sharing of electrons between atoms). When atoms of two
or more different elements bond together, they form a compound.
The nucleus, at the center of the atom, contains the protons and neutrons; the electrons
move around the nucleus. The number of protons in the nucleus is unique to each element and
determines what chemical element that atom is. Every atom of hydrogen contains one proton in
its nucleus; every carbon atom, six protons; every oxygen atom, eight protons; every uranium
atom, ninety-two protons. The characteristic number of protons is the atomic number of the
element.
In an electrically neutral atom, the number of protons equals the number of electrons. The
negative charge of one electron just equals the positive charge of the proton.
Most atoms, however, can gain or lose electrons. When this happens, the atom acquires a
positive or negative electrical charge and is termed an ion. If it loses electrons, it becomes a
positively charged cation, as the number of protons exceeds the number of electrons. If it gains
electrons, the resulting ion has a negative electrical charge and is termed an anion.
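The rule described above can be sketched in a few lines of code. This is an illustrative example only, not part of the original text; the element examples are hypothetical choices.

```python
# A small sketch of the rule above: an atom's net charge is
# (protons - electrons). A positive result makes the ion a cation,
# a negative result makes it an anion, and zero means a neutral atom.
def classify(protons, electrons):
    charge = protons - electrons
    if charge > 0:
        return "cation", charge
    if charge < 0:
        return "anion", charge
    return "neutral atom", charge

print(classify(11, 10))  # sodium that has lost one electron: ('cation', 1)
print(classify(17, 18))  # chlorine that has gained one electron: ('anion', -1)
print(classify(8, 8))    # ('neutral atom', 0)
```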
A mineral is a naturally occurring, inorganic, solid element or compound, with a definite
composition (or range in composition) and a regular internal crystal structure. When appropriate
instruments for determining composition and crystal structure are unavailable, minerals can be
identified from a set of physical properties, including color, crystal form, cleavage or fracture,
hardness, luster, specific gravity, and others. Minerals are broadly divided into silicates and
nonsilicates. The silicates are subdivided into structural types (for example, chain silicates, sheet
silicates, framework silicates) on the basis of how the silica tetrahedra are linked in each mineral.
Silicates may alternatively be grouped by compositional characteristics. The nonsilicates are
subdivided into several groups, each of which has some compositional characteristic in common.
Examples include the carbonates (each containing the CO3 group) and the sulfates (each
containing the SO4 group), among others.
Rocks are cohesive mineral aggregates. Certain of their physical properties are a consequence of
the ways in which their constituent mineral grains are assembled. All rocks are part of the rock
cycle, through which old rocks are continually being transformed into new ones. A consequence
of this is that no rocks have been preserved throughout earth’s history, and many early stages in
the development of one rock may have been erased by subsequent events.
A volcano is a vent through which magma, fragments of rock and ash, and gases erupt, or the
structure built around the vent by such eruption. Most volcanic activity is concentrated near plate
boundaries. Active volcanoes send out smoke and steam and occasionally erupt. An erupting
volcano gushes out ash, molten lava and smoke. Volcanoes form either at the edges of tectonic
plates, or at hot-spots in the earth’s crust. Volcanoes differ widely in eruptive style, and thus in
the kinds of dangers they present. Seafloor rift zones and hot spots are characterized by the more
fluid, basaltic lavas. Subduction-zone volcanoes typically produce much more viscous, silica-rich,
gas-charged andesitic magma, so, in addition to lava, they may emit large quantities of
pyroclastics and other deadly products like nuees ardentes. Lava is perhaps the least serious
hazard associated with volcanoes. It moves slowly, and its path can sometimes be predicted. The
results of explosive eruptions are less predictable and the eruptions themselves more sudden. One
secondary effect of volcanic eruptions, especially explosive ones, is a cooling of climate, which
occurs as a result of dust and gases being thrown into the atmosphere and blocking incoming
sunlight. Lava is not generally
life threatening. Most lava flows advance at speeds of at most a few kilometers an hour. So one
can evade the advancing lava readily even on foot. The lava will destroy or bury any property
over which it flows. Lava temperatures are typically over 500° C and may be over 1,400° C.
Pyroclastics are often more dangerous than lava flows. They may erupt more suddenly and
explosively, and spread faster and farther. Cinders and ash are examples of free-falling
pyroclastics. Another special kind of pyroclastic outburst is a deadly, denser-than-air flow of mixed
hot gases and fine ash known as a nuee ardente, from the French for “glowing cloud.” A nuee
ardente is very hot-temperatures can be over 1,000°C in the interior – and it can rush down the
slopes of the volcano at more than 100 kilometers per hour, charring everything in its path and
flattening trees and weak buildings.
Volcanoes are divided into the following types based on their eruptive patterns and characteristic
forms: shield volcano, stratovolcano, dormant volcano, extinct volcano, and active
volcano. When a volcano emits lava as well as pyroclastics, a concave-shaped composite volcano
or stratovolcano is built of alternating lava flows and beds of pyroclastics. A dormant volcano is a
volcano that is not now erupting but that has erupted within historic time and is considered likely
to do so in the future. A volcano that is erupting or is expected to erupt is an active volcano.
A shield volcano is a volcano in the shape of a flattened dome, broad and low, built by flows of very
fluid, basaltic lava. An extinct volcano is a volcano that is not presently erupting and that is not
considered likely to do so in the future.
Precursors are changes observed in or near a volcano that herald an impending eruption. There
are several types of advance warnings of volcanic activity. A common one is seismic activity. Early
signs of potential volcanic activity include bulging and warming of the ground surface and
increased seismic activity. Volcanologists cannot yet predict the exact timing or type of eruption
very precisely, except insofar as they can anticipate eruptive style on the basis of historic records,
the nature of the products of previous eruptions, and tectonic setting.
A single volcanic eruption can have a global impact on climate, although the effect may be only
brief. Intense explosive eruptions put large quantities of volcanic dust high into the atmosphere,
from which it may take years to settle. In the interim, it partially blocks out incoming sunlight,
thus causing measurable cooling. After Krakatoa’s 1883 eruption, worldwide temperatures
dropped nearly half a degree centigrade, and the cooling effects persisted for almost ten years.
The larger 1815 eruption of Tambora caused still more dramatic cooling. 1816 became known in
the Northern Hemisphere as the “year without a summer.” Such past experience forms the basis
for fears of a “nuclear winter” in the event of a nuclear war, for modern nuclear weapons are
powerful enough to cast volumes of fine dust into the air, and more dust and ash would be
generated by ensuing fires. The climatic impacts of volcanoes are not confined to the effects of
volcanic dust. The 1982 eruption of the Mexican volcano El Chichón did not produce a particularly
large quantity of dust, but it did shoot volumes of unusually sulfur-rich gases into the atmosphere.
These gases produced clouds of sulfuric acid droplets that spread around the earth. Not only do
the acid droplets block some sunlight, like dust, but in time they also settle back to the earth as
acid rain.
How Old is the Earth?
The ability to determine the numerical ages of rocks has changed the way we think about the
world. But how can we determine the age of the Earth itself? The earliest rocks preserved in the
Earth come from the great assemblage of metamorphic and igneous rocks formed during
Precambrian time. The oldest radiometric dates, about 4.4 billion years, have been obtained from
individual mineral grains in sedimentary rocks from Australia. These grains – of the mineral zircon –
show evidence of having experienced wet partial melting during the formation of a magma, which
then crystallized into an igneous rock, which in turn was weathered, eroded and redeposited,
eventually to be incorporated into a sedimentary rock. Dates almost as old – 4.0 billion years –
have been obtained from granitic igneous rocks from Canada. The existence of such ancient rocks
proves that continental crust was present 4.0 billion years ago, while the 4.4 billion year old
mineral grains prove that the cycle of weathering, erosion, deposition, and cementation was
operating then. Because we see wet melting and ancient sediment that was transported by water,
we know that there must have been water on the surface of the Earth at the time the sediments
were deposited.
These ancient rocks are all from the Archean eon. Recall that the Hadean eon predates the
Archean. No rocks that might provide radiometric dates are preserved from early Hadean time –
none that we have yet found, at any rate. How long did the Hadean eon last, and, therefore, how
much older might our planet be? Strong evidence from astronomy suggests that Earth formed at
the same time as the Moon, the other planets, and meteorites. Through radiometric dating, it has
been possible to determine the ages of meteorites and of Moon rocks brought back by astronauts.
The ages of the most primitive of these objects cluster closely around 4.56 billion years. By
inference, the time of formation of Earth, and indeed of all the other planets and meteorites in the
solar system, is believed to be 4.56 billion years ago.
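The radiometric dating mentioned above can be made concrete with a small sketch. This is an illustrative Python example, not part of the original text: the relation t = (t½/ln 2)·ln(1 + D/P) is the standard radiometric age equation from a measured daughter/parent isotope ratio, and the half-life used below is an assumed, U-238-like figure.

```python
import math

def radiometric_age(daughter_parent_ratio, half_life_years):
    """Age from the measured daughter/parent isotope ratio.

    Standard decay relation: t = (half-life / ln 2) * ln(1 + D/P).
    The half-life passed in is an assumption for illustration.
    """
    return half_life_years / math.log(2) * math.log(1 + daughter_parent_ratio)

# After exactly one half-life, D/P = 1, so the computed age equals the half-life.
print(radiometric_age(1.0, 4.47e9))  # 4.47e9 years
```

The same relation, applied to the most primitive meteorites and Moon rocks, is what yields the clustering of ages around 4.56 billion years described in the text.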
Metamorphism is change in rocks (short of complete melting) brought about by changes in
temperature, pressure, or chemical conditions. With progressive metamorphism, existing minerals
are commonly recrystallized, the crystals often growing larger in the process, and minerals stable
at low temperatures and pressures may break down, to be replaced by other minerals stable at
higher-grade conditions. Metamorphic rocks are subdivided into foliated and nonfoliated rocks.
Directed stress may also lead to formation of foliated rocks, in which elongated or platy minerals
assume a preferred orientation. Foliated rocks are named on the basis of their particular texture.
Nonfoliated rocks are most often named on the basis of mineralogic composition.
An amphibolite is a metamorphic rock rich in amphiboles. Amphibolites are not
necessarily nonfoliated rocks, for amphiboles commonly form elongated or needlelike crystals
that may take on a preferred orientation in the presence of directed stress. Most metamorphism is
either contact metamorphism, which occurs in country rock close to the contacts of invading
plutons and is characterized by relatively low-pressure mineral assemblages, or regional
metamorphism, which is caused principally by plate-tectonic activity and emplacement of large
batholiths, and is characterized by elevated pressure as well as elevated temperature.
Metamorphic rocks are assigned to a particular facies, corresponding to a specific range of
pressures and temperatures, on the basis of the mineral assemblages they contain. Index
minerals are useful in assessing the general metamorphic grade of a rock and in determining
regional trends in metamorphic grade.
A variety of geologic settings and events lead to metamorphism. Most metamorphic
processes can be subdivided into regional, contact, and fault-zone metamorphism.
Regional metamorphism is, as its name implies, metamorphism on a grand scale, a
regional event. Regional metamorphism is commonly associated with mountain building events,
when large areas are uplifted, downwarped or stressed severely and deformed, as during plate
collisions. Rocks pushed to greater depths are subjected to increased pressures and
temperatures, and are metamorphosed. Collision adds directed stress, producing abundant
lineated and foliated rocks. Regional metamorphism involving changes in both pressure and
temperature is also called dynamothermal metamorphism. The emplacement of large batholiths,
which may or may not be associated with mountain building, raises crustal temperatures over
broad areas as heat is released from cooling plutons, so batholith formation may likewise result in
metamorphism on a regional scale.
Contact metamorphism is named for the setting in which it occurs, near the contact of a
pluton. The pluton, emplaced from greater depths, is hotter than the country rock, and if it is
significantly hotter, the adjacent country rock is metamorphosed. The pluton then is surrounded
by a zone of metamorphic rock, also known as a contact aureole, or halo. (The term aureole
comes from the Latin for “golden,” as a crown or halo might be.) Higher-temperature
metamorphic minerals are found close to the contact, lower-temperature minerals farther away.
Metasomatism may also occur in the contact aureole. Contact metamorphism, by definition, is a
more localized phenomenon than regional metamorphism, since it is confined to the immediate
environs of the responsible pluton. Contact-metamorphic effects also tend to be most marked
around plutons emplaced at shallow depths in the crust.
A metamorphic facies is a set of physical conditions that give rise to characteristic mineral
assemblages. That is, rocks of a particular metamorphic facies contain one or more minerals
indicative of a particular, restricted range of pressures and temperatures. Contact-metamorphic
facies are the facies of low pressure and a range of temperatures. They consist of the sanidinite
facies, pyroxene-hornfels facies, hornblende-hornfels facies and zeolite facies. The zeolites are a
group of hydrous silicates, stable only at low pressure and temperature. Regional-metamorphic
facies are characterized by elevated pressure and temperature. The greenschist facies is so
named because greenschist-facies rocks commonly contain one or both of the greenish silicates
chlorite and epidote. The blueschist facies is characterized by high pressure but low temperature.
Its name derives from the bluish silicates, such as kyanite, an aluminum silicate, that form
under these conditions.
Rocks subjected to stress may behave elastically or plastically. Earthquakes result from
sudden rupture of brittle or elastic rocks, or sudden movement along fault zones in response to
stress. Most earthquakes are confined to the cold, rigid lithosphere. The pent-up energy of an
earthquake is released through seismic waves; earthquake hazards include ground rupture and
shaking, fire, liquefaction, landslides, and tsunamis.
While earthquakes cannot be stopped, their negative effects can be limited by: seeking ways
to cause locked faults to slip gradually and harmlessly, perhaps by using fluid injection to reduce
frictional resistance to shear; designing structures in active fault zones to be more resistant to
earthquake damage; identifying and, wherever possible, avoiding development in areas at
particular risk from earthquake – related hazards; increasing public awareness of and
preparedness for earthquakes in threatened areas; and learning enough about earthquake
precursor phenomena to make accurate and timely predictions of earthquakes.
Strain is deformation: a change in shape, or volume, or both, resulting from stress. It may be
either temporary or permanent, depending on the amount and type of stress and the ability of the
material to resist it. If the deformation is elastic, the amount of deformation is proportional to the
stress applied, and the material returns to its original size and shape when the stress is removed.
A gently stretched rubber band shows elastic behavior. Rocks, too, may behave elastically,
although much greater stress is needed to produce detectable strain. Once the elastic limit of a
material is reached, it may go through a phase of plastic deformation with increasing stress.
During this stage, relatively small added stresses yield large corresponding strains, and the
changes of shape are permanent. The material does not return to its original dimensions after
removal of the stress. A glassblower, an artist shaping clay, a carpenter fitting caulk into cracks
around a window, and a blacksmith shaping a bar of hot iron into a horseshoe are all making use
of plastic behavior of materials. Faults and fractures come in all sizes, from microscopically small
to hundreds of kilometers long. Likewise, earthquakes come in all sizes, from tremors so small that
even sensitive instruments can barely detect them, to massive shocks that can level cities.
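The proportionality between stress and elastic strain described above (Hooke's law) can be sketched as a small Python function. The modulus and elastic-limit figures below are assumed illustrative values, not numbers from the text.

```python
def elastic_strain(stress_pa, youngs_modulus_pa, elastic_limit_pa):
    """Elastic (recoverable) strain is proportional to applied stress.

    Beyond the elastic limit, deformation becomes plastic (permanent) and
    the proportionality no longer holds; this sketch only flags that case.
    """
    if stress_pa > elastic_limit_pa:
        return None  # plastic regime: strain is no longer proportional
    return stress_pa / youngs_modulus_pa

# e.g. a stiff rock with an assumed modulus of 50 GPa under 100 MPa of stress:
print(elastic_strain(100e6, 50e9, 200e6))  # 0.002
```

Removing the stress in the elastic regime returns the rock to its original shape, which is exactly why the function models only a reversible, proportional response below the limit.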
The point on a fault at which the first movement or break occurs during an earthquake is
called the earthquake’s focus, or hypocenter. In the case of a large earthquake, a section of fault
many kilometers long may slip, but there is always a point at which the first movement occurs,
and this is the focus. The point on the earth’s surface directly above the focus is the epicenter.
When an earthquake occurs, the stored-up energy is released in the form of seismic waves that
travel away from the focus. There are several types of seismic waves. Body waves (P waves and S
waves) travel through the interior of the earth. Surface waves, as their name suggests, travel
along the surface. The use of body waves to explore the earth’s internal structure is discussed
later. P waves are compressional waves. As P waves travel through matter, the matter is alternately
compressed and expanded. P waves travel through the earth, then, much as sound waves travel
through air.
S waves are shear waves, involving a side-to-side sliding motion of material. Ground shaking
may cause a further problem in areas where the ground is very wet – in filled land, near the coast,
or in places with a high water table. This problem is liquefaction. When wet soil is shaken by an
earthquake, the soil particles may be jarred apart, allowing water to seep in between them. This
greatly reduces the friction between soil particles that gives the soil strength, and it causes the
ground to become somewhat like quicksand. When this happens, buildings can just topple over or
partially sink into the liquefied soil; the soil has no strength to support them. The effects of
liquefaction were dramatically demonstrated after a major earthquake in Niigata, Japan. Apartment
buildings tipped over to settle at an angle of 30 degrees to the ground while the structures
remained intact. In some
areas prone to liquefaction, improved underground drainage systems may be installed to try to
keep the soil drier, but little else can be done about this hazard beyond avoiding areas at risk. Not
all areas with wet soils are subject to liquefaction. The nature of the soil or fill plays a large role in
the extent of the danger.
Coastal areas, especially around the Pacific Ocean basin where so many large earthquakes
occur, may also be vulnerable to tsunamis. These are seismic sea waves, sometimes improperly
called “tidal waves,” although they have nothing to do with tides. When an undersea or near-shore
earthquake occurs, sudden movement of the sea floor may set up waves travelling away
from that spot, like ripples on a pond caused by a dropped pebble. Tsunamis can also be
triggered by major submarine landslides or by violent explosions of volcanoes in ocean basins. In
the open sea, the tsunami is only an unusually broad swell or ripple on the water surface. Like all
waves, tsunamis only develop into breakers as they approach shore and the undulating waters
touch bottom. The breakers associated with tsunamis, however, can easily be over 15 meters
high and may reach up to 65 meters in the case of larger earthquakes. Several such breakers may
crash over the coast in succession: between waves, the water may be pulled swiftly seaward,
emptying a harbor or bay, and perhaps pulling unwary onlookers along. Tsunamis can travel very
quickly: speeds of 1,000 kilometers per hour are not uncommon, and a tsunami set off on one side
of the Pacific may still cause noticeable effects on the other side of the ocean.
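The speed figure in the text lets us estimate open-ocean crossing times with simple arithmetic. The Python sketch below assumes an illustrative trans-Pacific distance of 8,000 km; only the 1,000 km/h speed comes from the text.

```python
def travel_time_hours(distance_km, speed_kmh=1000.0):
    """Time for a tsunami to cross open ocean at the text's ~1,000 km/h."""
    return distance_km / speed_kmh

# Assumed trans-Pacific path of roughly 8,000 km (illustrative figure):
print(travel_time_hours(8000))  # 8.0 hours
```

A crossing measured in hours rather than days is why tsunami warning centers on one side of the Pacific can still issue useful alerts for coasts on the other side.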
STRESS AND STRAIN
In discussing rock deformation, geologists often use the word stress, which refers to the force
acting on a surface. The definition of pressure is exactly the same. However, the term stress often
refers to “differential stress,” that is, a situation in which the force acting on the surface of
a body is greater from one direction than from another. Pressure is commonly used to mean
“uniform stress,” in which the force on a body is equal in all directions. For example, the pressure
on a small body floating within a liquid is uniform: the same from all directions. Uniform stress is
also called confining pressure. A rock in the lithosphere is confined by the rocks all around it and
is uniformly stressed by those surrounding rocks. The related terms “lithostatic pressure” and
“hydrostatic pressure” also describe uniform stress on a rock, but they convey additional
information about how the pressure is transmitted to the rock: by overlying rocks (lithostatic,
from lithos, the Greek root that means “rock”), or by water (hydrostatic, from hydro, the Greek
root that means “water”).
In response to stress, a rock will change its shape or its volume, sometimes both. This change is
called strain. Uniform stress causes rocks to change their volume. For example, if a rock is
subjected to uniform stress by being buried deep in the Earth, its volume will decrease. If the
spaces (pores) between the grains become smaller, or if the minerals in the rock are transformed
into more compact crystal structures, the volume change may be relatively large. Differential
stress causes rocks to change their shape, and sometimes their volume as well.
There are several kinds of differential stress (Figure 8.1). Tension acts in a direction
perpendicular to and away from a surface; this kind of stress pulls or stretches rocks. Compression acts
in a direction perpendicular to and toward a surface; compressional stress squeezes rocks,
shortening or squashing them and decreasing their volume. Shear stress acts parallel to a surface.
It causes rocks to change shape by bending, or breaking. In response to shear stress, different
parts of the rock may slide past each other like cards in a deck.
THE ROCK-FORMING MINERALS
Geologists have identified approximately 3,500 minerals, but fewer than 30 of them are common
in the crust of the Earth. Why aren’t there more minerals in the Earth’s crust? The reason
becomes clear when we consider the relative abundances of the chemical elements in the Earth’s
crust. Only 12 elements – oxygen, silicon, aluminum, iron, calcium, magnesium, sodium,
potassium, titanium, hydrogen, manganese, and phosphorus – occur in amounts greater than
0.1 percent (by weight). Together, these 12 elements make up more than 99 percent of the mass
of the crust, so the crust consists of a limited number of minerals in which one or more of these
12 abundant elements is an essential ingredient. Minerals containing scarcer elements occur only in small amounts and only
under special circumstances.
Two elements – oxygen and silicon – make up more than 70 percent of the crust by
weight. Oxygen itself constitutes more than 60 percent of the crust in atomic proportion – that is,
the actual number of atoms of oxygen in the crust – and more than 90 percent by volume.
Oxygen is a large, lightweight atom; not only are there lots of oxygen atoms in the crust, they
also take up a lot of space. Oxygen forms a simple anion, O2- ; compounds that contain this anion
are called oxides. Oxygen and silicon together form an exceedingly strong anionic complex called
the silicate anion, (SiO4)4-; minerals that contain this anion are called silicates. Silicates are the
most abundant of all minerals; oxides are the second most abundant. Other mineral groups based
on different anions are less common.
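The distinction the text draws between abundance by weight and abundance in atomic proportion can be checked with a short calculation. The weight percentages below are commonly cited approximate values for the eight most abundant crustal elements; treat them as illustrative assumptions, since this text does not list them.

```python
# Approximate crustal abundances by weight (commonly cited values; assumed
# here for illustration) and standard atomic masses.
weight_percent = {"O": 46.6, "Si": 27.7, "Al": 8.1, "Fe": 5.0,
                  "Ca": 3.6, "Na": 2.8, "K": 2.6, "Mg": 2.1}
atomic_mass = {"O": 16.0, "Si": 28.09, "Al": 26.98, "Fe": 55.85,
               "Ca": 40.08, "Na": 22.99, "K": 39.10, "Mg": 24.31}

# Moles of each element per 100 g of crust, then normalize to atom fractions.
moles = {el: wt / atomic_mass[el] for el, wt in weight_percent.items()}
total = sum(moles.values())
atomic_percent = {el: 100 * n / total for el, n in moles.items()}

print(round(atomic_percent["O"], 1))  # 62.6
```

Because oxygen is light, its share of the atoms (over 60 percent) is much larger than its share of the mass, which is exactly the contrast the text describes.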
EROSION BY WATER UNDER THE GROUND
Water can also cause erosion underneath the ground. As soon as rainwater infiltrates the ground
to become groundwater, it begins to react with the minerals in the regolith and the bedrock,
causing chemical weathering. Among the minerals of the Earth’s crust, the carbonates are most
readily dissolved. Carbonate rocks such as limestone are almost insoluble in pure water, but are
easily dissolved by carbonic acid, a common constituent of rainwater. The attack occurs mainly
along joints and other openings in the rock. When limestone weathers, it may be dissolved and
carried away in slowly moving groundwater. In some carbonate terrains, the rate of dissolution is
even faster than the average rate of erosion of surface materials by streams and mass wasting.
When carbonate rock is dissolved by circulating groundwater, a cave may form. Caves are
dissolution cavities that are closed to the surface, or have only a small opening. Cave formation
begins with dissolution along interconnected fractures and bedding planes, where two different
sedimentary rock units meet. A passage eventually develops along the most favorable flow route.
The rate of cave formation is related to the rate of dissolution. As the passage grows and the flow
of groundwater becomes more rapid and turbulent, the rate of dissolution also tends to increase.
The development of a continuous passage by slowly moving groundwater may take up to 10,000
years, and enlargement of the passage to create a fully developed cave system may take as long
as a million years. Terrains that are underlain by extensive cave systems are called karst terrains.
Sinkholes are dissolution cavities, like caves, but open to the sky. Some sinkholes are
formed when the roof of a cave collapses. Others are formed at the surface, when rainwater is
freshly charged with carbon dioxide and hence is most effective as a solvent. Some sinkholes form
slowly; others form catastrophically. An example of the latter occurred in Winter Park, Florida, in
1981. In a period of just 10 hours, a sinkhole developed and consumed part of a house, six
commercial buildings, several automobiles, and a municipal swimming pool. The total cost of the
damage was over $2 million. Events as dramatic as the Winter Park sinkhole are rare, but sinkhole
collapse is a common occurrence in areas underlain by carbonate rocks.
EROSION BY WIND
Wind is an important agent of erosion, especially in arid and semiarid regions. Processes related
to wind are called eolian processes after Aeolus, the Greek god of wind. Because the density of
air is far less than that of water, air cannot move as large a particle as water flowing at the same
velocity. In most regions with moderate to strong winds, the largest particles that can be lifted by
the air are grains of sand. Only the finest dust particles remain aloft long enough to be moved by
the wind.
Flowing air erodes the land surface in two ways. The first, abrasion, results from the impact
of wind-driven grains of sand (Figure 6.4). Airborne particles act like tools, chipping small
fragments off rocks that stick up from the surface. When rocks are abraded in this way they
acquire distinctive, curved shapes and a surface polish. A bedrock surface or stone that has been
abraded and shaped by wind-blown sediment is called a ventifact (“wind artifact”). The second
wind erosional process, deflation (from the Latin word meaning “to blow away”), occurs when
the wind picks up and removes loose particles of sand and dust (Figure 6.5). Deflation on a large
scale takes place only where there is little or no vegetation and loose particles are fine enough
to be picked up by the wind. It is especially severe in deserts, but can occur elsewhere during
times of drought when no moisture is present to hold soil particles together.
School of Mechanical Engineering
Lubrication of Motor Vehicles
Friction causes heat and wear. In an engine, oil lubricates the moving parts and reduces the heat
and the wear. The oil also collects any small particles of dirt or metal and carries them to the oil
filter.
Some of the oil will leak out of the engine when it is used. The amount of oil in the engine will
need checking regularly. The dipstick is used for checking the amount of oil. If there isn’t enough
oil in the engine, friction between the moving parts will increase and the engine will quickly
overheat.
The oil in the engine will need changing about once every 5000 km. If it is not changed, it will
become thin and full of impurities and it will not lubricate efficiently. The oil filter will also need
changing regularly. If it is not changed, it will become blocked by particles of dirt and metal. If
the filter becomes blocked, the oil will not flow around the engine and heat and wear will
increase very rapidly.
Some parts of a car need greasing, usually about once every six months. Fifty years ago cars
needed greasing every week. Modern vehicles need much less greasing. They only need greasing
about twice a year. Cars in the future will probably need no greasing.
Gas Welding
In gas welding, it is necessary to use a mixture of two gases. To create a hot enough flame, a
combustible gas must be mixed with oxygen. Although acetylene (C2H2) is normally used, the
combustible gas need not be acetylene. Hydrogen or petroleum gases can also be used.
Oxygen can be stored at very high pressure. It is dangerous to compress gaseous acetylene in the
same way and so it is dissolved under pressure in liquid acetone but at a much lower pressure
than oxygen. To create a suitable flame, the gases must be supplied to the welding torch at low
pressure. Pressure regulators are therefore used to regulate the gas flow from the cylinders. They
are screwed into the top of each cylinder.
Gas welding is normally used to join steel to steel. To make a very strong joint, the work pieces
must be composed of the same metal. Welding rods are used to provide filler metal. In gas
welding, these rods are generally composed of steel. Bronze or brass rods may sometimes be
used. When bronze or brass filler metal is used the process is called brazing.
To light the welding torch, the combustible gas must be turned on first. The oxygen must not be
turned on before the flame is lit. The oxygen supply must be adjusted to give the correct flame.
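The light-up procedure above is strictly ordered: combustible gas on, flame lit, then oxygen turned on and adjusted. That ordering can be expressed as a tiny checklist validator; the step names here are our own labels, not standard welding terminology.

```python
def safe_lightup_order(steps):
    """Check a torch light-up sequence against the rules in the text:
    the combustible gas must be turned on and lit before the oxygen
    is turned on, and the flame adjusted last.
    """
    required = ["fuel_gas_on", "ignite", "oxygen_on", "adjust_flame"]
    return steps == required

print(safe_lightup_order(["fuel_gas_on", "ignite", "oxygen_on", "adjust_flame"]))  # True
print(safe_lightup_order(["oxygen_on", "fuel_gas_on", "ignite", "adjust_flame"]))  # False
```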
The Instruments in a Car
All vehicles require certain instruments to provide information for the driver. For instance, every
car has a speedometer to indicate its speed. It also has a fuel gauge to indicate the amount of
fuel in the petrol tank. Many cars also have a tachometer to indicate the engine speed. They may
also have an ammeter to indicate if the battery is charging or discharging.
The speedometer is indicating zero kph. The car is not moving. The engine is turning at minimum
speed (approximately 750 rpm). As the engine is only turning slowly, the alternator is also
turning slowly. It is not producing enough current for the engine. Therefore, the battery must
supply some of the necessary current. The battery is discharging and so the ammeter is indicating
a negative value.
If the car is moving at 60kph, the engine is turning at 2500 rpm and so the alternator is turning
quite fast. It is producing a strong current for the engine and so the battery is no longer needed
to supply current. The battery is now recharging from the alternator and so the ammeter is
indicating +10A. After a short time, the battery will be fully charged again.
If the car is moving at 90kph, the engine is turning at a speed of 4500 rpm. However, the
alternator is not producing any current. The ammeter is indicating -20A. In other words, the
battery is discharging rapidly although the engine is turning at high speed. Therefore, the
alternator is not producing any power and the battery is discharging at 20A. So, unless the fault is
put right or the engine stopped, the battery will soon become completely discharged. The
electrical items, such as the headlights, should be switched off as soon as possible. When they
are switched off and the engine is stopped, the ammeter will read zero and the needle will point
to the zero mark.
A car battery can easily become discharged if there is an electrical fault in the car. If the fan belt
is broken, for instance, the battery may become discharged in quite a short time. If the lights are
left on while the car is not in use, the battery will also become discharged.
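The charging and discharging behaviour described above amounts to a simple balance: the ammeter shows alternator output minus electrical load. A minimal sketch follows; the 30 A / 20 A split is assumed for illustration, since the text gives only the net ammeter readings (+10 A cruising, −20 A with a fault).

```python
def ammeter_reading(alternator_amps, load_amps):
    """Net current into the battery.

    Positive = battery charging from the alternator,
    negative = battery discharging to make up the shortfall.
    """
    return alternator_amps - load_amps

print(ammeter_reading(30, 20))  # +10: cruising, battery recharging
print(ammeter_reading(0, 20))   # -20: alternator dead (e.g. broken fan belt)
```

The second case shows why a broken fan belt flattens the battery in a short time: the whole electrical load is drawn from the battery alone.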
A battery (d.c.) cannot be recharged directly from the mains (a.c.). A battery charger is needed to
rectify the a.c. to d.c. and to reduce the voltage to 12V. Before charging the battery, remove all
the filler plugs. While the battery is charging, hydrogen will be produced. This gas cannot escape
easily from the battery if the filler plugs are not removed.
When connecting the crocodile clips to the battery, check the connections. The positive clip must
be connected to the positive terminal and the negative clip to the negative terminal. Make sure
the clips are connected before switching on the charger. After charging, switch off the charger
before disconnecting the clips.
Charging started eight hours ago. During the first hour, the ammeter needle was indicating 5A.
(the battery was being charged at the maximum rate). During the second and third hours, the
ammeter was indicating about 4.5A. During the next two hours, the charging rate was decreasing
more rapidly. After five hours, the rate was only 2A. After eight hours, the ammeter is now
indicating 0.5A. The battery is almost fully charged. It will be fully charged in about an hour from
now.
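The tapering charging current described above (5 A at the start, about 0.5 A after eight hours) is roughly exponential. Below is a sketch of such a model; i0 and tau are fitting parameters chosen here to match the text's two endpoint readings, not manufacturer data.

```python
import math

def charging_current(t_hours, i0=5.0, tau=3.47):
    """Exponential model of charger current tapering off as the
    battery approaches full charge: I(t) = i0 * exp(-t / tau)."""
    return i0 * math.exp(-t_hours / tau)

print(round(charging_current(0), 2))  # 5.0 A at the start
print(round(charging_current(8), 2))  # 0.5 A after eight hours
```

The intermediate readings in the text (about 4.5 A in the second and third hours, 2 A at five hours) fall reasonably close to this curve, which is why a simple exponential is a common first model for constant-voltage charging.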
Energy Conversion Process
Fuel and oxygen are converted to heat energy by combustion. The heat created during the
combustion process causes the gases trapped inside the cylinder to expand. This expansion
causes a pressure build-up, which is then converted to mechanical energy as the expanding gases
force a piston down the cylinder.
The up-and-down movement of the piston is converted to rotary motion by a connecting rod and
crank. This is like the rider’s legs and the chain wheel of a bicycle.
The energy of motion and of vehicle movement is called kinetic energy. To stop a vehicle this
energy must be converted to another form. The brakes do this by converting kinetic energy into
heat. When all the kinetic energy has been converted the vehicle is stationary.
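The kinetic energy that the brakes must convert to heat follows from the standard relation E = ½mv². A small Python sketch with assumed vehicle figures (the mass and speed below are illustrative, not from the text):

```python
def kinetic_energy_joules(mass_kg, speed_kmh):
    """Kinetic energy the brakes must convert to heat to stop the vehicle."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return 0.5 * mass_kg * v ** 2

# Assumed example: a 1,000 kg car travelling at 60 km/h.
print(round(kinetic_energy_joules(1000, 60)))  # 138889 J
```

When all of that energy has been dissipated as heat in the brakes, the vehicle is stationary, as the text states.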
The internal combustion engine is not a very efficient energy converter, as it cannot turn all the
fuel into mechanical energy. Surprisingly, much of the fuel is wasted in the form of heat, either
down the exhaust pipe with the waste gases, or absorbed by the cooling system and radiated to
the atmosphere. Only some 25 percent of the chemical energy fed into the engine is used to drive
the vehicle.
As the piston moves up and down the cylinder it must stop at the top and bottom every time it
changes direction. These points are known as top dead-centre and bottom dead-centre. The
distance travelled by the piston between top dead-centre and bottom dead-centre is called the stroke. The
diameter of the cylinder is called the bore.
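Bore and stroke together determine a cylinder's swept volume. The displacement formula below is standard engine practice rather than something stated in this text, and the text later notes that the piston travels twice the crank offset per stroke; the dimensions used are assumed examples.

```python
import math

def stroke_mm(crank_radius_mm):
    """Stroke: the piston travels two crank radii (twice the crank offset)."""
    return 2 * crank_radius_mm

def swept_volume_cc(bore_mm, stroke):
    """Swept volume of one cylinder from bore and stroke.
    Standard formula: V = (pi/4) * bore^2 * stroke (mm^3 -> cc)."""
    return math.pi / 4 * bore_mm ** 2 * stroke / 1000

# Assumed example dimensions: 40 mm crank radius, 85 mm bore.
s = stroke_mm(40)                     # 80 mm stroke
print(round(swept_volume_cc(85, s)))  # 454 cc per cylinder
```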
The crankshaft of an engine is similar to the cranks on a bicycle. The rider’s feet push on the
pedals, which turn the cranks mounted on the centre spindle, converting the up-and-down
movement of the legs into rotary motion. In that way a pushing effort is converted into a turning
force called torque.
The piston in an engine is connected to the crankshaft by the connecting rod. In much the same
way as the bicycle, the up-and-down movement of the piston is converted into rotary torque at
the crankshaft. Because far greater forces are being converted, all the components are much
stronger, and the crankshaft is supported in more bearings.
The crankshaft is normally a robust, one-piece alloy steel forging machined to very fine
tolerances. Some manufacturers use steel alloys or cast iron containing copper, chromium and
nickel. Cast iron crankshafts have proved to be very durable; they have good wearing properties
and are less prone to fatigue than forged steel shafts. The journals may be hardened by
processes such as nitriding or induction hardening.
The crankshaft has a number of identifiable parts:
1. Main bearing journal. Any part of the shaft that rotates in a bearing is called a journal.
The main bearing journals support the shaft in the cylinder block.
2. Crank pin journal. This is the part of the crankshaft to which the connecting rod is
attached, and is often called the big-end journal.
3. Crank radius. This is the term used to describe the offset from the main journals to the
crank pin; it is like the length of the pedal crank on a cycle. In the same way as the cycle
rider’s foot moves two crank lengths from highest to lowest position, so the piston moves
an amount that is twice the crank offset. This is the stroke of an engine.
4. The webs. The big-end journals and the main bearing journals are held together by the
webs, which may also incorporate counterbalance weights.
5. Fillet radius. A sharp corner in this position would create a weak spot, so a radius is
provided to avoid any problems.
6. Crank throw. A single-cylinder engine has a single-throw crankshaft, while a four-cylinder
engine would have a four-throw crankshaft. However, engines of ‘vee’ configuration often
share big-end journals between two opposing cylinders.
7. Crankshaft throw. This describes how far the centre of the big-end journal is offset from
the centre of the crankshaft main journal: the larger this measurement, the greater the
turning force applied to the crankshaft, while increasing the piston's effective stroke.
8. Internal oilways. To supply oil to the big-end journals the crankshaft has internal oilways
drilled from the adjacent main bearing journal. Oil flows into the main bearings from the oil
gallery, and from there it is fed along the crankshaft oilways to each of the big-end
bearings.
9. Other requirements. One end of the crankshaft forms a boss to which the flywheel is
attached. The other end usually has some form of keyway machined into it to provide a
positive drive for the timing gears, sprockets or pulleys. Pulleys for auxiliary drives may
also need to be mounted there.
At both ends of the shaft some form of oil sealing must be used to prevent leakage from the
revolving journals. The scroll or quick thread ‘screws’ the oil back towards the inside of the
engine. At the same time, the centrifugal force of the spinning crankshaft forces oil to climb the
thrower. When it reaches the edge it is thrown off into a channel and returns to the sump.
The cover at the timing gear end usually has a lipped seal, made from synthetic rubber stiffened
by a metal shell. It has an inner lip that rubs on the crankshaft to stop oil leakage. Light contact is
maintained by the use of a steel garter spring.
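The geometric relations above (the stroke is twice the crank throw) can be checked numerically. A minimal sketch, with bore and throw values assumed purely for illustration:

```python
# Illustrative sketch: stroke and swept volume from crank geometry.
# All numerical values are assumed examples, not taken from the text.
import math

crank_throw_mm = 45.0            # offset of big-end journal from main journal
stroke_mm = 2 * crank_throw_mm   # the piston travels twice the crank throw

bore_mm = 82.0
swept_volume_cc = math.pi * (bore_mm / 2) ** 2 * stroke_mm / 1000.0

print(stroke_mm)               # 90.0
print(round(swept_volume_cc))  # 475 (cc per cylinder)
```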
Most of the very earliest internal combustion engines of the 17th and 18th centuries can be
classified as atmospheric engines. These were large engines with a single piston and cylinder, the
cylinder being open on the end. Combustion was initiated in the open cylinder using any of the
various fuels which were available. Gunpowder was often used as the fuel. Immediately after
combustion, the cylinder would be full of hot exhaust gas at atmospheric pressure. At this time,
the cylinder end was closed and the trapped gas was allowed to cool. As the gas cooled, it
created a vacuum within the cylinder. This caused a pressure differential across the piston,
atmospheric pressure on one side and a vacuum on the other. As the piston moved because of
this pressure differential, it would do work by being connected to an external system, such as
raising a weight.
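The pressure-differential work described above can be estimated with a short sketch; all values (bore, stroke, residual pressure) are assumed for illustration:

```python
import math

# Sketch of the atmospheric-engine principle: net force comes from
# atmospheric pressure on one side of the piston and a partial vacuum
# on the other. All values below are assumed example figures.
p_atm = 101_325.0   # Pa, atmospheric pressure
p_cyl = 20_000.0    # Pa, partial vacuum left after the trapped gas cools
bore = 0.30         # m, cylinder bore
stroke = 0.50       # m, piston travel

area = math.pi * (bore / 2) ** 2
force = (p_atm - p_cyl) * area   # net force from the pressure differential
work = force * stroke            # work available, e.g. to raise a weight

print(round(force), "N")
print(round(work), "J")
```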
Some early steam engines also were atmospheric engines. Instead of combustion, the open
cylinder was filled with hot steam. The end was then closed and the steam was allowed to cool
and condense. This created the necessary vacuum.
In addition to a great amount of experimentation and development in Europe and the US during
the middle and latter half of the 1800s, two other technological occurrences during this time
stimulated the emergence of the internal combustion engine. In 1859, the discovery of crude oil
in Pennsylvania finally made possible the development of reliable fuels which could be used in
these newly developed engines. Up to this time, the lack of good, consistent fuels was a major
drawback in engine development. Fuels like whale oil, coal gas, mineral oils, coal, and
gunpowder, which were available before this time, were less than ideal for engine use and
development. It still took many years before products of the petroleum industry evolved from the
first crude oil to gasoline, the automobile fuel of the 20th century. However, improved hydrocarbon
products began to appear as early as the 1860s and gasoline, lubricating oils, and the internal
combustion engine evolved together.
The second technological invention that stimulated the development of the internal combustion
engine was the pneumatic rubber tire, which was first marketed by John B. Dunlop in 1888. This
invention made the automobile much more practical and desirable and thus generated a large
market for propulsion systems, including the internal combustion engine.
During the early years of the automobile, the internal combustion engine competed with
electricity and steam engines as the basic means of propulsion. The early 20th century became
the period of the internal combustion engine and of the automobile powered by it. Now, at the
end of the century, the internal combustion engine is again being challenged
by electricity and other forms of propulsion systems for automobiles and other applications. What
goes around comes around.
During the second half of the 19th century, many different styles of internal combustion engines
were built and tested. Engines operated with variable success and dependability using many
different mechanical systems and engine cycles.
The first fairly practical engine was invented by J.J.E. Lenoir (1822-1900) and appeared on the
scene about 1860. During the next decade, several hundred of these engines were built with
power up to about 4.5 kW (6 hp) and mechanical efficiency up to 5%. In 1867 the Otto-Langen
engine, with efficiency improved to about 11%, was first introduced, and several thousand of
these were produced during the next decade. This was a type of atmospheric engine with the
power stroke propelled by atmospheric pressure acting against a vacuum. Nicolaus A. Otto
(1832-1891) and Eugen Langen (1833-1895) were two of many engine inventors of this period.
During this time, engines operating on the same basic four-stroke cycle as the modern automobile
engine began to evolve as the best design. Although many people were working on four-stroke
cycle designs, Otto was given credit when his prototype engine was built in 1876.
In the 1880s the internal combustion engine first appeared in automobiles. Also, in this decade
the two-stroke cycle engine became practical and was manufactured in large numbers.
By 1892, Rudolf Diesel (1858-1913) had perfected his compression ignition engine into basically
the same diesel engine known today. This was after years of development work which included
the use of solid fuel in his early experimental engines. Early compression ignition engines were
noisy, large, slow, single-cylinder engines. They were, however, generally more efficient than
spark ignition engines. It wasn’t until the 1920s that multicylinder compression ignition engines
were made small enough to be used with automobiles and trucks.
Fundamental principles of generators
A generator is a machine that converts mechanical energy into electrical energy by the process of
electromagnetic induction. In both d.c. and a.c. types of generator, the voltage induced is
alternating; the major difference between them lies in the method by which the electrical
energy is collected and applied to the circuit externally connected to the generator. Plotting the
induced voltage throughout the full cycle produces the alternating or sine curve shown.
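The alternating (sine) voltage curve mentioned above can be sketched as follows; peak voltage and frequency are assumed example values, not figures from the text:

```python
import math

# Sketch of the alternating induced voltage of a generator.
# Peak voltage and frequency below are assumed example values.
V_peak = 28.0   # volts
freq = 400.0    # Hz

def induced_voltage(t):
    """Instantaneous induced voltage at time t (seconds)."""
    return V_peak * math.sin(2 * math.pi * freq * t)

# One full cycle takes 1/freq seconds; the voltage passes through
# zero, +V_peak, zero, -V_peak and back to zero.
period = 1 / freq
print(induced_voltage(period / 4))   # 28.0, the positive peak
```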
Generators are classified according to the method by which their magnetic circuits are energized,
and the following three classes are normally recognized:
1. Permanent magnet generators.
2. Separately-excited generators, in which electromagnets are excited by current obtained
from a separate source of d.c.
3. Self-excited generators, in which electromagnets are excited by current produced by the
machines themselves. These generators are further classified by the manner in which the
fixed windings, the electromagnetic field and armature windings, are interconnected.
In aircraft d.c. power supply systems, self-excited shunt-wound generators are employed and the
following details are therefore related only to this type.
The Random House College Dictionary defines propulsion as “the act of propelling, the state of
being propelled, a propelling force or impulse” and defines the verb propel as “to drive, or cause
to move, forward or onward.” From these definitions, we can conclude that the study of
propulsion includes the study of the propelling force, the motion caused, and the bodies involved.
Propulsion involves an object to be propelled plus one or more additional bodies, called propellants.
The study of propulsion is concerned with vehicles such as automobiles, trains, ships, aircraft and
spacecraft. Methods devised to produce a thrust force for the propulsion of a vehicle in flight are
based on the principles of jet propulsion. The fluid may be the gas used by the engine itself
(turbojet), it may be fluid available in the surrounding environment (air used by a propeller), or it
may be stored in the vehicle and carried by it during the flight (rocket).
Jet propulsion systems can be subdivided into two broad categories: air-breathing and non-air-
breathing. Air-breathing propulsion systems include the reciprocating, turbojet, turbofan, ramjet,
turboprop, and turboshaft engines. Non-air-breathing engines include rocket motors, nuclear
propulsion systems, and electric propulsion systems.
School of Mathematics
Basic geometric concepts
The practical value of geometry lies in the fact that we can abstract and illustrate physical objects
by drawings and models. For example, a drawing of a circle is not a circle; it suggests the idea of
a circle. In the study of geometry we separate all geometric figures into two groups: plane figures
whose points lie in one plane and space figures or solids. A point is a primary and starting concept
in geometry. Line segments, rays, triangles and circles are definite sets of points. A simple closed
curve with line segments as its boundaries is a polygon. The line segments are sides of the
polygon and the end points of the segments are vertices of the polygon. A polygon with four sides
is a quadrilateral. We can name some important quadrilaterals. A trapezoid is a quadrilateral with
one pair of parallel sides. A rectangle is a parallelogram with four right angles. A square is a
rectangle with all sides of the same length. Geometry is the science of the properties,
measurement and construction of lines, planes, surfaces and different geometric figures. We
measure segments in terms of other segments and angles in terms of other angles. Finding the area
from length and width is an indirect measurement, for we measure lengths to obtain the area. The
dimensions we take in the case of volume are the area and the length or the height. In other
words, even the very common formulae of geometry permit us to measure areas and volumes
indirectly, when we express these quantities as lengths.
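The idea of indirect measurement above (areas and volumes computed from measured lengths) can be illustrated with a minimal sketch using assumed dimensions:

```python
# Sketch of indirect measurement: only lengths are measured directly;
# area and volume are computed from them. Example dimensions assumed.
length = 6.0   # measured length
width = 4.0    # measured length
height = 3.0   # measured length

area = length * width    # area obtained indirectly from two lengths
volume = area * height   # volume from an area and a length

print(area)    # 24.0
print(volume)  # 72.0
```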
What is Mathematics?
“Mathematics” is a Greek word, by origin, it means “something that must be learnt or
understood”, perhaps “acquired knowledge” or “general knowledge”. The word “maths” is a
contraction of “mathematics”. What is maths in the modern sense of the term?
Maths as a science, viewed as a whole, is a collection of branches. The largest branch is the one
that builds on the ordinary whole numbers, fractions, and irrational numbers, which together are
called the real number system. Arithmetic, algebra, the study of functions, the calculus, differential equations,
and various other subjects which follow the calculus in logical order are all developments of the
real number system. This part of maths is termed the maths of number. A second branch is
geometry consisting of several geometries. Maths contains many more divisions. Each branch has
the same logical structure: it begins with certain concepts, such as the whole numbers or integers
in the maths of number, and such as point, line and triangle in geometry. These concepts must
satisfy explicitly stated axioms. Some of the axioms of the maths of number are the associative,
commutative, and distributive properties and the axioms about equalities. Some of the axioms of
geometry are that two points determine a line, all right angles are equal, etc. From the concepts
and axioms, theorems are deduced. Hence, from the standpoint of structure, the concepts,
axioms, and theorems are the essential components of any branch of maths.
The basic concepts of the main branches of maths are abstractions from experience, implied by
their obvious physical counterparts. Irrational numbers, negative numbers and so forth are not
wholly abstracted from the physical practice, for the man’s mind must create the notion of entirely
new types of numbers to which operations such as addition, multiplication, and the like can be
applied. The notion of a variable that represents the quantitative values of some changing
physical phenomena, such as temperature and time, is also at least one mental step beyond the
mere observation of change.
Mathematics – the Language of Science
One of the foremost reasons given for the study of maths is, to use a common phrase, that “maths
is the language of science”. This is not meant to imply that maths is useful only to those who
specialize in science. It implies that even a layman must know something about the scope and the
basic role played by maths in our scientific age.
The language of maths consists mostly of signs and symbols, and in a sense, is an unspoken
language. There can be no more universal or simpler language; it is the same throughout the
civilized world, though the people of each country translate it into their own particular spoken
language. For instance, the symbol 5 means the same to a person in England, Spain, Italy or any
other country; but in each it may be called by a different word. Some of the best known symbols
of maths are the numerals 1, 2, 3, 4, 5, 6, 7, 8, 9, 0 and the signs of addition (+), subtraction (–),
multiplication (×), division (:), equality (=) and the letters of the alphabets: Greek, Latin, Gothic.
Symbolic language is one of the basic characteristics of modern maths for it determines its true
aspect. With the aid of symbolism mathematicians can make transitions in reasoning almost
mechanically by the eye and leave their mind free to grasp the fundamental ideas of the subject
matter. Just as music uses symbolism for the representation and communication of sounds, so
maths expresses quantitatively relations and spatial forms symbolically. Unlike the common
language, which is the product of custom, as well as social and political movements, the language
of maths is carefully, purposefully and often ingeniously designed.
Math language is precise and concise, so that it is often confusing to people unaccustomed
to its forms. The symbolism used in math language is essential to distinguish meanings often
confused in common speech. In the study of maths much time must be devoted 1) to the
expressing of verbally stated facts in math language, that is, in the signs and symbols of maths;
2) to the translating of math expressions into common language. We use signs and symbols for
convenience. In some cases the symbols are abbreviations of words, but often they have no such
relations to the things they stand for.
Number System of Mathematics
Mathematicians study numbers and develop new number systems in a specific field of
maths – number theory – which is the oldest branch of maths. The originators of classical
number theory – the ancient Greek mathematicians – studied numbers with no immediate
applications in mind. They assigned all kinds of mysterious meanings and interpretations to
numbers: the number 2 for them stood for female, 3 stood for male, 4 – for justice, 5 – for
marriage. Although applications were not the main aim of the classical number theory, Greek
investigators discovered many curious and fascinating number properties and gave birth to
theoretical pure maths. They were the first to formulate the abstract notion of "number". The
positive integers or natural numbers were the foundation of all classical maths.
In maths there exist various ways to study numbers. One is the way of further extension,
generalization and synthesis, when mathematicians build up number concepts of great
complexity and generality. Another method is analysis, when mathematicians arrive at the
essence of numbers, when they break down the complexities and study the original primitive
positive integers and their properties.
Nowadays mathematicians separate the number systems of maths into five principal
stages: 1) the system of natural numbers or positive integers only; 2) the positive, negative
integers and zero; 3) the rational numbers which combine integers and fractions; 4) the real
numbers that include irrational numbers such as π; 5) the complex numbers that contain the
so-called "imaginary" number √−1. In modern maths there are several new number systems.
Some of them occupy a significant place in maths, among them quaternions and matrices.
It is interesting to mention that the number 0 (zero) originally had signified an empty place
only. Modern mathematicians treat zero like any other number. Zero is a meaningful math
object with the properties defined by a set of rules.
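The five stages of the number system described above map naturally onto Python's built-in numeric types; this is an illustrative sketch, not part of the original text:

```python
from fractions import Fraction

# Sketch of the five principal stages of the number system.
natural = 7                 # 1) natural numbers (positive integers)
integer = -3                # 2) positive and negative integers, and zero
rational = Fraction(2, 3)   # 3) rationals: integers and fractions
real = 2 ** 0.5             # 4) reals include irrationals such as sqrt(2)
imaginary = (-1) ** 0.5     # 5) complex numbers contain sqrt(-1)

print(rational + Fraction(1, 3))   # 1
print(imaginary)                   # approximately 1j
```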
A proof is a demonstration that some statement is true. The Greeks were the first to
apply the deductive procedures developed by the Greek philosophers in maths. They are
credited with the use of deductive methods of proof in geometry instead of intuition,
experiment and trial-and-error methods of the Egyptians. Deduction as a method of
obtaining conclusions has many advantages over reasoning by induction and analogy.
Math proof demands a specific kind of reasoning. In a formal math proof the
mathematician cannot rely on his intuition, insight and imagination. He must reason logically,
starting from 1) the definitions of basic concepts for the theory involved and 2) the axioms, and then
3) deduce a conclusion without making further assumptions. By analysis of the mechanism and structure of
proofs we can see that the main feature of formal math proofs is that every statement in the
proof must be justified by referring to a) definitions; b) axioms; c) a chain of substitutions; or
d) theorems already proved.
Math text can be expressed in a language containing only a small number of fixed
"words" combined according to a small number of unbreakable rules. Such a text is referred to as
"formalized". A formal system has some analogy with a natural language. Its symbols
correspond to letters of the alphabet, punctuation marks and numerals. There are of course
important differences between natural languages and formal systems but the analogy is close
enough so that when formal systems are interpreted, they are often called artificial math
languages. When transformation rules are applied to the axioms, the result is a theorem. The
exhibition of the application of the transformation rules is a proof. A proof is a finite sequence of
formalized sentences such that each sentence is an axiom or follows from an earlier formalized
sentence by the application of a transformation rule. The last line of the proof is a theorem.
Discrete mathematics is the common name for the fields of mathematics most generally
useful in theoretical computer science. This includes computability theory, computational
complexity theory, and information theory. Computability theory examines the limitations of
various theoretical models of the computer, including the most powerful known model – the
Turing machine. Complexity theory is the study of tractability by computer; some problems,
although theoretically solvable by computer, are so expensive in terms of time or space that
solving them is likely to remain practically unfeasible, even with the rapid advance of computer
hardware. Finally, information theory is concerned with the amount of data that can be stored on
a given medium, and hence concepts such as compression and entropy.
The originators of the basic concepts of Discrete Mathematics, the mathematics of finite
structures, were the Hindus, who knew the formulae for the number of permutations of a set of n
elements, and for the number of subsets of cardinality k in a set of n elements already in the sixth
century. Combinatorics proper started with the work of Pascal in the 17th century, and
continued in the 18th century with the seminal ideas of Euler in Graph theory and with his work on
partitions and their enumeration. These old results are among the roots of the study of formal
methods of enumeration, the development of configurations and designs, and the extensive work
on Graph Theory in the last two centuries. The tight connection between Discrete mathematics
and Theoretical computer science, and the rapid development of the latter in recent years, led to
an increased interest in Combinatorial techniques and to an impressive development of the
subject. Concepts and questions of Discrete Mathematics appear naturally in many branches
of mathematics, and the area has found applications in other disciplines as well. These include
applications in Information Theory and Electrical Engineering, in Statistical Physics, in Chemistry
and Molecular Biology, and, of course, in Computer Science. Combinatorial topics such as Ramsey
Theory, Combinatorial Set Theory, Matroid Theory and Extremal Graph Theory have grown into
substantial fields of study in their own right.
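The Hindu formulae mentioned above (n! permutations of a set of n elements, and C(n, k) subsets of cardinality k) can be evaluated directly; the values of n and k are arbitrary examples:

```python
import math

# The two classical counting formulae: the number of permutations of
# an n-element set is n!, and the number of k-element subsets is the
# binomial coefficient C(n, k). Example values of n and k assumed.
n, k = 6, 2
permutations = math.factorial(n)   # 6! = 720
subsets = math.comb(n, k)          # C(6, 2) = 15

print(permutations)   # 720
print(subsets)        # 15
```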
The contribution of the Greeks that most determines the character of present-day civilization
was their maths! Thales, Pythagoras, Euclid, Archimedes created an amazing amount of first-class
maths. The most outstanding contribution of the early Greeks to maths was the
formulation of the math method.
In 300 B.C. Euclid produced his epoch-making effort, the Elements, a single deductive chain
of 465 propositions comprising plane and solid geometry, number theory, and Greek geometric algebra.
The work of many schools and isolated individuals was unified by Euclid in this most famous
textbook on geometry. Euclid deduced all the most important results of the Greek masters of
the classical period and therefore the Elements constituted the math history of the age as well
as the logical presentation of geometry. Euclid's Elements begins with a list of
definitions of such notions as point and line. Next appear various statements some of which
are labeled axioms and others postulates.
The axioms chosen by Euclid state properties of points, lines and other geometric figures that
are possessed by their physical counterparts. The properties in question are so obviously true
of these physical objects that all mathematicians agreed on them as a basis for further
reasoning. Euclid chose a very limited number of axioms, twelve in all, and constructed the
whole system of geometry. His method of proof is strictly deductive, his theorems are proved
by several deductive arguments and each yields an unquestionable conclusion.
The most outstanding contribution of the early Greeks was the formulation of the pattern of
material axiomatics and the insistence that geometry should be systematized according to this
pattern. Euclid's Elements is the earliest extensively developed example of this use of the
pattern available to us. In recent years, this pattern has been significantly generalized to yield a more
abstract form of discourse known as "formal axiomatics". The necessity for accurate and exact
definitions, for clearly stated assumptions and for rigorous proof became evident in Euclid's day.
The man whose work best epitomizes the character of the Alexandrian age is Archimedes
whose fame was based for many centuries not upon the immortal achievements explained
in his own works, but upon the legends around his name. These legends had a core of truth:
he did invent machines, such as compound pulleys, burning mirrors, but these activities were
secondary, he was primarily a mathematician, the greatest of antiquity and one of the very
greatest of all times.
The most ingenious of these mechanical devices was Archimedes' huge mirror, which concentrated
the sun's rays on Roman ships besieging his native city of Syracuse. The most famous of
Archimedes' scientific discoveries is the hydrostatic principle now named after him. Archimedes
discovered that a body immersed in water is buoyed up by a force equal to the weight of the
water displaced. Since the weight of the displaced water as well as the weight of a body in air
can be measured, the ratio of the weights is known.
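Archimedes' principle as stated above can be put into numbers; the density, volume and gravitational constant below are assumed example values:

```python
# Sketch of Archimedes' principle: the buoyant force on an immersed
# body equals the weight of the water it displaces. Values assumed.
rho_water = 1000.0   # kg/m^3, density of water
g = 9.81             # m/s^2, gravitational acceleration

volume_displaced = 0.002   # m^3 (a 2-litre body fully immersed)
buoyant_force = rho_water * g * volume_displaced   # N

print(buoyant_force)   # 19.62 N
```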
The principle that Archimedes discovered is one of the first universal laws of science; he
incorporated it among others in his book On Floating Bodies. Two branches of mechanics –
statics and hydrostatics – were founded on math bases by Archimedes who must be called the
first rational scientist of mechanics. Two of his mechanical treatises begin with definitions and
postulates on the bases of which a number of propositions are geometrically proved.
In codifying our knowledge of nature in simple laws, scientists look first for constancy: the
mass of a body remains constant; total electric charge remains constant; momentum is
conserved; all electrons are the same, etc. Almost as simple and equally fruitful is direct
proportionality when two measured quantities increase together in the same proportion: stretch
of a spring with its load; force and acceleration; gas pressure and gas density, etc.
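Direct proportionality, such as the stretch of a spring with its load, can be sketched as follows; the spring constant is an assumed example value:

```python
# Sketch of direct proportionality: the stretch of a spring grows in
# the same proportion as its load. The spring constant is assumed.
k = 200.0   # N/m, example spring constant

def stretch(load_newtons):
    """Stretch in metres for a given load, proportional to the load."""
    return load_newtons / k

# Doubling the load doubles the stretch:
print(stretch(10.0))   # 0.05
print(stretch(20.0))   # 0.1
```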
The History of Algebra
The word "arithmetic" is derived from the Greek arithmos ("number"), algebra is a Latin variant
of the Arabic word al-jabr. Although originally "algebra" referred to equations, the word
today has a much broader meaning: 1. Early (elementary) algebra is the study of equations
and methods of solving them. 2. Modern (abstract) algebra is the study of math structures
such as groups, rings, and fields – to mention only a few.
Since algebra probably originated in Babylonia, it seems appropriate to credit the
country with the origin of the rhetorical style of algebra, illustrated by the problems found in
clay tablets dating back to 1700 B.C. The Babylonians also knew how to solve systems by
elimination but often preferred to use their parametric method.
Algebra in Egypt must have appeared almost as soon as in Babylonia; but Egyptian algebra
lacked the sophistication in method shown by Babylonian algebra, as well as its variety in types
of equations solved. For linear equations the Egyptians used a method of solution consisting of
an initial estimate followed by a final correction. The numeration system of the Egyptians,
relatively primitive in comparison with that of the Babylonians, helps to explain the lack of
sophistication in Egyptian algebra.
The algebra of the early Greeks was geometric because of their logical difficulties with
irrational and even fractional numbers and their practical difficulties with Greek numerals. The
Greeks of Euclid's day thought of the product ab as a rectangle of base b and height a and
they referred to it as "a rectangle contained by CD and DE". Some centuries later, another
Greek, Diophantus, made a start toward modern symbolism in his work Diophantine Equations
by introducing abbreviated words and avoiding the rather cumbersome style of geometric algebra.
The Hindus solved quadratic equations by "completing the square" and they accepted negative
and irrational roots; they also realized that a quadratic equation (with real roots) has two roots.
One of their most outstanding achievements was the system of Hindu (often called Arabic) numerals.
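Solving a quadratic by completing the square, as described above, can be sketched in a few lines; the coefficients are example values and the sketch assumes real roots, as the text does:

```python
import math

# Sketch of solving ax^2 + bx + c = 0 by completing the square,
# accepting both (real) roots. Example coefficients assumed.
def solve_quadratic(a, b, c):
    # x^2 + (b/a)x = -c/a  ->  (x + b/(2a))^2 = b^2/(4a^2) - c/a
    half = b / (2 * a)
    rhs = half * half - c / a
    root = math.sqrt(rhs)    # assumes real roots, as in the text
    return (-half + root, -half - root)

print(solve_quadratic(1, -1, -6))   # (3.0, -2.0)
```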
Probability theory is concerned with determining the relationship between the number of
times a certain event occurs and the number of times any event occurs. For example, the number
of times a head will appear when a coin is flipped 100 times. Determining probabilities can be
done in two ways: theoretically and empirically.
Probability theory was originally developed to help gamblers determine the best bet to
make in a given situation. Suppose a gambler had a choice between two bets: she could either
wager $4 on a coin toss in which she would make $8 if it came up heads or she could bet $4 on
the roll of a die and make $8 if it lands on a 6. By using the idea of mathematical expectation she
could determine which is the better bet. Mathematical expectation is defined as the average
outcome anticipated when an experiment, or bet, is repeated a large number of times. In its
simplest form, it is equal to the product of the amount a player stands to win and the probability
of the event. In our example, the gambler will expect to win $8 × 0.5 = $4 on the coin flip and
$8 × 0.17 = $1.33 on the roll of the die. Since the expectation is higher for the coin toss, this bet is the better choice.
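The gambler's comparison above can be computed directly, using the text's simple form of expectation (the amount to be won times the probability of the event):

```python
# The gambler's two bets from the text: win $8 on a coin coming up
# heads (probability 1/2) or on a die landing on 6 (probability 1/6).
win_amount = 8.0

coin_expectation = win_amount * (1 / 2)   # heads on a fair coin
die_expectation = win_amount * (1 / 6)    # rolling a 6 on a fair die

print(coin_expectation)            # 4.0
print(round(die_expectation, 2))   # 1.33
```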
When more than one winning combination is possible, the expectation is equal to the sum
of the individual expectations. Consider the situation in which a person can purchase one of 500
lottery tickets where first prize is $1000 and second prize is $500. In this case, his or her
expectation is $1000 × (1/500) + $500 × (1/500) = $3. This means that if the same lottery were
repeated many times, one would expect to win an average of $3 on every ticket purchased.
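The lottery example above, where the expectation is the sum of the individual expectations, works out as follows:

```python
# The lottery from the text: 500 tickets, first prize $1000,
# second prize $500; each ticket has probability 1/500 of each prize.
tickets = 500
expectation = 1000 * (1 / tickets) + 500 * (1 / tickets)

print(expectation)   # 3.0 dollars per ticket on average
```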
Ex. 2. Translate the following sentences into Mongolian.
1. Probability theory is concerned with determining the relationship between the number of times
a certain event occurs and the number of times any event occurs.
2. Determining probabilities can be done in two ways: theoretically and empirically.
3. Probability theory was originally developed to help gamblers determine the best bet to make in a given situation.
School of Material Science
Materials technology impacts on the quality of our lives. It engineers the components of
construction that shape our world: our buildings and infrastructure.
Materials technology enables us to design durable construction components that serve several
generations by understanding ageing processes in different operating conditions. The ability to
design and adopt better-performing, energy-saving, cost-efficient materials with known durability
characteristics is key to our construction industry's future.
The world we live in has finite resources. Materials technology presents a platform for
sustainable construction. It permits optimal use of resources through life cycle cost analysis. It
encourages the use of valuable waste products in the construction process. These factors
enhance cost-effectiveness and promote “green” technology and environmental awareness.
Our vision is to provide the best materials engineering expertise available. As a customer-
orientated solution provider, we help clients benefit from advances in materials technology.
As an integral unit, we provide a range of specialist services:
- Structural Investigation, Maintenance and Repair (Infrastructure and Buildings).
- Corrosion Engineering: Cathodic Protection and Corrosion Monitoring.
- New Construction - Technical Services.
- Durability Assurance Services for Engineering Infrastructure.
- Façade technology.
- Construction Materials Technology in general, research and specialist testing.
- Litigation Support: provision of expertise.
- Life cycle assessment and sustainable construction
Plastics are the most common synthetic materials. They were first used in the 1860s. In 1869
John Hyatt developed Celluloid, the first commercially successful plastic. Products made from
Celluloid included combs, dentures, and photographic film.
Hundreds of plastics are available today. The uses of plastics range from simple products
such as hot drink cups to automobile bodies. Automobile manufacturers are using more and more
plastic in their new models each year. Plastics that are cheaper, lighter, and stronger than metals
are causing rapid changes within the automotive industry. Plastics have also been introduced into
the construction industry. Waste pipes, prefabricated showers and bathtubs, and skylights are all
products in which plastics have replaced more traditional materials.
All plastics are either thermoplastic or thermosetting. Thermoplastics can be softened with
heat and molded to a desired shape. Later they can be reheated, softened, and remolded. An
advantage of thermoplastics is that scrap materials can be recycled. Thermosetting plastics ‘set’
after being heated and cannot be easily reshaped.
The two groups of plastics have different characteristics because of the structure of their
molecules. The chains of molecules (called polymers) in thermoplastics are very flexible. The
molecules in thermosetting plastics are linked together and are more rigid. This rigidity is why
thermosetting plastics are difficult to change. Most plastics are made from petroleum, but
scientists are studying another source: bacteria. Some kinds of bacteria store energy in the form
of tiny plastic pellets. The plastic made by the bacteria is too brittle for use by industry.
Through genetic engineering, scientists hope to make bacteria able to produce many other
kinds of plastics. This knowledge may then be applied to plants.
The elements boron, silicon, germanium, antimony, and tellurium separate the metals from
the nonmetals in the periodic table. These elements, called semimetals or sometimes metalloids,
exhibit properties characteristic of both metals and nonmetals. Their electronegativities are less
than that of hydrogen, but they do not form positive ions. The structures of these elements are
similar to those of nonmetals, but the elements are electrical semiconductors.
Boron exhibits some metallic characteristics, but the great majority of its chemical behavior is
that of a nonmetal that exhibits an oxidation state of +3 in its compounds (although other
oxidation states are known). Its valence shell configuration is 2s²2p¹, so it forms trigonal planar
compounds with three single covalent bonds. The resulting compounds are Lewis acids, because
the unhybridized p orbital does not contain an electron pair. The most stable boron compounds
are those containing oxygen and fluorine. Diborane, the simplest stable boron hydride, contains
three-center two-electron bonds, as does elemental boron.
Silicon is a semimetal with a valence shell configuration of 3s²3p²3d⁰. It commonly forms
tetrahedral compounds in which silicon exhibits an oxidation state of +4. Although the d orbital is
unfilled in four-coordinate silicon compounds, its presence makes silicon compounds much more
reactive than the corresponding carbon compounds. Silicon forms strong single bonds with
carbon, giving rise to the stability of silicon carbide and silicones. Silicon also forms strong bonds
to oxygen and fluorine. Silicates contain oxyanions of silicon and are important components of
minerals and glass.
In the investigation, copper and zinc were mixed together to form the alloy brass. You may be
familiar with this alloy because it is used to make candlesticks, fireplace tools, keys and other
items you may see in your home. Bronze, another alloy, is a mixture of copper and tin. Bronze is
stronger than brass and is more durable because it resists corrosion. Corrosion is a chemical
reaction that may slowly weaken metal and cause it to crumble or break.
Most metals combine easily with oxygen. You may have seen flaky, orange material on some
steel surfaces. This material, called rust, is the result of the attraction between iron and oxygen in
the air. Why does oxygen combine so easily with metals? You learned that metal atoms have a
weak attraction to their outer layer of electrons. Oxygen, on the other hand, has a strong
attraction for electrons. When the proper conditions are present, iron will lose its electrons to
oxygen. The loss of negative electrons causes iron to become positively charged. When oxygen
gains the extra electrons it takes on an overall charge of 2-. The attraction of the opposite
charges brings iron and oxygen together to form a new compound. A common form of iron oxide,
which is rust, has the formula Fe2O3.
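The charge balance behind the Fe2O3 formula can be checked with a short calculation. This is an illustrative sketch: the 3+ charge on each iron ion is implied by the formula rather than stated in the passage, which says only that iron becomes positively charged while each oxygen takes on a 2− charge.

```python
# Sketch: verify that the ion charges in rust (Fe2O3) balance to zero.
# Charges assumed: Fe 3+ (iron loses electrons), O 2- (oxygen gains two).
def total_charge(formula):
    """Sum ion charges over a formula given as {(symbol, charge): count}."""
    return sum(charge * count for (_, charge), count in formula.items())

rust = {("Fe", +3): 2, ("O", -2): 3}  # Fe2O3
print(total_charge(rust))  # 2*(+3) + 3*(-2) = 0: electrically neutral
```

The zero total confirms that two Fe³⁺ ions exactly offset three O²⁻ ions, which is why the compound takes the 2:3 ratio.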
Several methods are used to prevent or slow the corrosion of metals. Stainless steel, also an
alloy, is often used in plumbing fixtures because it resists corrosion, just as bronze does. Paint,
enamel, or plastic also may be applied to a metal’s surface. Some metal oxides do not weaken a
structure. The surface of copper turns to green-colored copper carbonate and several other
copper compounds. The Statue of Liberty is covered by this naturally occurring protective coating.
Steel-From Cans to Cars
Each time you open a can of chicken noodle soup or ride in a car, you are probably using
recycled steel. Steel is an alloy made of iron and a small amount of carbon. An alloy is a mixture
of two or more elements in which at least one of the elements is a metal. Properties of an alloy
can be controlled by the addition of other elements.
Sixty-six percent of steel is recovered, making it the most recycled material in the United
States. And because steel is the major component of cars, the automobile recycling industry is a
major contributor of scrap steel. What makes steel so easy to collect and recycle? For one thing,
it’s easy to separate from other materials. Most recycling programs accept steel and aluminum
cans together, for example, although they are recycled separately. It’s easy to separate steel and
aluminum because iron, the main element in steel, is magnetic, while aluminum is not. Therefore,
big magnets are used to separate steel from other materials.
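The magnetic sorting step can be pictured as a simple partition on a "magnetic" property. This is only an illustrative sketch; the item names and the `magnetic` flag are invented for the example.

```python
# Sketch: partition mixed scrap into a steel stream and a non-magnetic
# stream, mimicking the big magnets used at recycling plants.
def separate(items):
    steel = [i for i in items if i["magnetic"]]
    other = [i for i in items if not i["magnetic"]]
    return steel, other

scrap = [
    {"name": "soup can", "magnetic": True},    # steel: iron is magnetic
    {"name": "drink can", "magnetic": False},  # aluminium is not
]
steel, other = separate(scrap)
print([i["name"] for i in steel])  # ['soup can']
```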
The steel headed for recycling is then sorted based on alloy content and sent to a steel scrap-
processing plant. Used steel can be reprocessed by melting or by chemical methods. Chemical
processing removes unwanted impurities such as other metals. This is useful for steel cans, which
are often coated with tin to prevent corrosion. Steel scrap from automobiles is usually remelted.
When properly collected and processed to remove impurities, recycled steel can be reprocessed to
make any steel product. The soup can you recycle today might show up in a new car. Steel,
plastic, and glass aren’t the only recyclable materials in cars. Motor oil, batteries, and tires are
also recyclable. The United States could save more than 1 million barrels of oil each day if all used
motor oil were recycled.
Soft as Silk, Strong as Steel
Spider silk, like bird wings and oyster pearls, is one of nature’s wonders. Smooth, shiny, and
flexible, it is also extremely light. Thousands of metres of spun silk typically weigh no more than a
gram. And the wonders do not end there.
Dragline silk, used by spiders to create the structural frames of their webs, is nearly as strong
as Kevlar, yet significantly more elastic. Minor ampullate silk, another of the half-dozen varieties,
is a little weaker but has virtually no elasticity. When stretched, it remains at its new length, like
pulled toffee. Then there is flagelliform silk, which spiders use to catch flying insects. This type of
silk reverts to its original shape, even after being expanded to more than twice its initial length.
For more than a century, researchers have dreamed about exploiting the properties of spider silk,
which are more varied than those of silkworm silk. The problem is that spiders are not like
silkworms, which can survive peacefully in close quarters as long as they are fed a steady supply
of mulberry leaves. By contrast, spiders are too aggressive and territorial to domesticate.
Over the past decade, Randy Lewis and his colleagues have managed to sequence and clone
the genes that code for the key proteins in four different types of spider silk. Once the genes had
been determined and replicated, the Wyoming group inserted them into bacteria, which
proceeded to make the right proteins. Unfortunately, the quantities produced were too small to be
commercially useful. In the meantime Dr Lewis had licensed his technology to Nexia, which has
taken a different approach. Nexia intends to focus on low-volume, high-value applications in
medicine. Spider silk might also be used as synthetic tendons or ligaments. It would also be good
for parachute cords and may appeal to fashion designers. Nexia expects to see commercial
products based on spider silk on the market within a few years.
Plastic-Don’t Just Bag It
Can you quickly find three examples of plastic around you right now? We use plastic polymers
in the clothes we wear and in the containers that store our food. Plastics are also widely used in
automobiles. Most of the plastics in cars are shredded and incinerated or hauled to the landfill.
Many different kinds of plastic (50 or more) are used in a single car. Different plastics can have
nearly identical color and density. There’s no easy, mechanical way to separate all the different
kinds of plastic.
Most plastics are polymers composed largely of carbon, hydrogen, and oxygen. Polymers with
different structures are used to create plastics with unique properties. The most common
household plastics recycled are clear, 2-L soft drink bottles (made of polyethylene terephthalate,
or PET) and translucent milk jugs (made of high-density polyethylene, or HDPE). When a type of
plastic is collected and sorted, other materials are removed. For example, opaque lids are usually
made of a different plastic and are removed. After plastics are sorted, the recycling process is
fairly simple. The plastic bottles are chopped into small pieces and washed. After drying, the
material is melted. It is pushed through a screen filter and formed into pellets. The pellets are
stored until the material is used to make a new product. PET from 2-L soft drink bottles is often
used to make fiberfill for sleeping bags and coats.
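The sorting step described above can be sketched as grouping collected items by resin type. This is a hedged illustration: the PET and HDPE assignments follow the passage, but the lid entry and its polypropylene (PP) resin are assumptions added to show why mixed items are separated before processing.

```python
# Sketch of the household-plastics sorting step: items are grouped by
# resin so each batch can be chopped, washed, melted and pelletized
# together. The lid -> PP mapping is an assumption for illustration.
RESINS = {
    "soft drink bottle": "PET",   # clear 2-L bottles
    "milk jug": "HDPE",           # translucent jugs
    "bottle lid": "PP",           # assumed: lids are often a different plastic
}

def sort_by_resin(items):
    batches = {}
    for item in items:
        batches.setdefault(RESINS[item], []).append(item)
    return batches

batches = sort_by_resin(["soft drink bottle", "milk jug", "soft drink bottle"])
print(len(batches["PET"]))  # 2: both bottles land in the same PET batch
```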
The plastics-recycling industry promises to grow in the future.
Materials science or materials engineering is an interdisciplinary field involving the properties of
matter and its applications to various areas of science and engineering. This science investigates
the relationship between the structure of materials at atomic or molecular scales and their
macroscopic properties. It includes elements of applied physics and chemistry. With significant
media attention focused on nanoscience and nanotechnology in recent years, materials science
has been propelled to the forefront at many universities. It is also an important part of forensic
engineering and failure analysis. Materials science also deals with the fundamental properties and
characteristics of materials.
History of materials science
The material of choice of a given era is often its defining point; the Stone Age, Bronze Age, and
Steel Age are examples of this. Materials science is one of the oldest forms of engineering and
applied science, deriving from the manufacture of ceramics. Modern materials science evolved
directly from metallurgy, which itself evolved from mining. A major breakthrough in the
understanding of materials occurred in the late 19th century, when the American scientist Josiah
Willard Gibbs demonstrated that the thermodynamic properties related to atomic structure in
various phases are related to the physical properties of a material. Important elements of modern
materials science are a product of the space race: the understanding and engineering of the
metallic alloys, and silica and carbon materials, used in the construction of space vehicles enabling
the exploration of space. Materials science has driven, and been driven by, the development of
revolutionary technologies such as plastics, semiconductors, and biomaterials.
Before the 1960s (and in some cases decades after), many materials science departments were
named metallurgy departments, from a 19th and early 20th century emphasis on metals. The field
has since broadened to include every class of materials, including: ceramics, polymers,
semiconductors, magnetic materials, medical implant materials and biological materials.
In materials science, rather than haphazardly looking for and discovering materials and exploiting
their properties, the aim is instead to understand materials so that new materials with the desired
properties can be created.
The basis of materials science involves relating the desired properties and relative performance of
a material in a certain application to the structure of the atoms and phases in that material
through characterization. The major determinants of the structure of a material and thus of its
properties are its constituent chemical elements and the way in which it has been processed into
its final form. These characteristics, taken together and related through the laws of
thermodynamics, govern a material’s microstructure, and thus its properties.
The manufacture of a perfect crystal of a material is currently physically impossible. Instead
materials scientists manipulate the defects in crystalline materials such as precipitates, grain
boundaries (Hall-Petch relationship), interstitial atoms, vacancies or substitutional atoms, to
create materials with the desired properties.
Not all materials have a regular crystal structure. Polymers display varying degrees of crystallinity,
and many are completely non-crystalline. Glasses, some ceramics, and many natural materials are
amorphous, not possessing any long-range order in their atomic arrangements. The study of
polymers combines elements of chemical and statistical thermodynamics to give thermodynamic,
as well as mechanical, descriptions of physical properties.
In addition to industrial interest, materials science has gradually developed into a field which
provides tests for condensed matter or solid state theories. New physics emerge because of the
diverse new material properties which need to be explained.
Materials in Industry
Radical materials advances can drive the creation of new products or even new industries, but
stable industries also employ materials scientists to make incremental improvements and
troubleshoot issues with currently used materials. Industrial applications of materials science
include materials design, cost-benefit tradeoffs in industrial production of materials, processing
techniques (casting, rolling, welding, ion implantation, crystal growth, thin-film deposition,
sintering, glassblowing, etc.), and analytical techniques (characterization techniques such as
electron microscopy, x-ray diffraction, calorimetry, nuclear microscopy (HEFIB), Rutherford
backscattering, neutron diffraction, small-angle X-ray scattering (SAXS), etc.).
Besides material characterisation, the material scientist/engineer also deals with the extraction of
materials and their conversion into useful forms. Thus ingot casting, foundry techniques, blast
furnace extraction, and electrolytic extraction are all part of the required knowledge of a
metallurgist/engineer. Often the presence, absence or variation of minute quantities of secondary
elements and compounds in a bulk material will have a great impact on the final properties of the
materials produced; for instance, steels are classified based on 1/10 and 1/100 weight
percentages of the carbon and other alloying elements they contain. Thus, the extraction and
purification techniques employed in the extraction of iron in the blast furnace will have an impact
on the quality of steel that may be produced.
The overlap between physics and materials science has led to the offshoot field of materials
physics, which is concerned with the physical properties of materials. The approach is generally
more macroscopic and applied than in condensed matter physics. See important publications in
materials physics for more details on this field of study.
The study of metal alloys is a significant part of materials science. Of all the metallic alloys in use
today, the alloys of iron (steel, stainless steel, cast iron, tool steel, alloy steels) make up the
largest proportion both by quantity and commercial value. Iron alloyed with various proportions of
carbon gives low, mid and high carbon steels. For the steels, the hardness and tensile strength of
the steel is directly related to the amount of carbon present, with increasing carbon levels also
leading to lower ductility and toughness. The addition of silicon and graphitization will produce
cast irons (although some cast irons are made precisely with no graphitization). The addition of
chromium, nickel and molybdenum to carbon steels (more than 10%) gives us stainless steels.
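The carbon-content classification mentioned above can be sketched as a simple threshold function. The cutoff values (roughly 0.3% and 0.6% carbon by weight) are commonly quoted approximations, not figures from the passage, so treat them as assumptions.

```python
# Sketch: classify a carbon steel by its carbon weight percentage.
# Thresholds (~0.3% and ~0.6%) are commonly quoted approximations,
# not exact standards.
def classify_steel(carbon_wt_pct):
    if carbon_wt_pct < 0.3:
        return "low carbon"
    elif carbon_wt_pct < 0.6:
        return "mid carbon"
    return "high carbon"

print(classify_steel(0.15))  # low carbon: softer, more ductile
print(classify_steel(0.95))  # high carbon: harder, but less tough
```

The monotonic mapping mirrors the text: more carbon raises hardness and tensile strength while lowering ductility and toughness.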
Other significant metallic alloys are those of aluminium, titanium, copper and magnesium. Copper
alloys have been known for a long time (since the Bronze Age), while the alloys of the other three
metals have been relatively recently developed. Due to the chemical reactivity of these metals,
the electrolytic extraction processes required were only developed relatively recently. The alloys of
aluminium, titanium and magnesium are also known and valued for their high strength-to-weight
ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. These
materials are ideal for situations where high strength-to-weight ratios are more important than
bulk cost, such as in the aerospace industry and certain automotive engineering applications.
Other than metals, polymers and ceramics are also an important part of materials science.
Polymers are the raw materials (the resins) used to make what we commonly call plastics. Plastics
are really the final product, created after one or more polymers or additives have been added to a
resin during processing, which is then shaped into a final form. Polymers which have been
around, and which are in current widespread use, include polyethylene, polypropylene, PVC,
polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates. Plastics are generally
classified as "commodity", "specialty" and "engineering" plastics.
PVC (polyvinyl chloride) is widely used, inexpensive, and annual production quantities are large. It
lends itself to an incredible array of applications, from artificial leather to electrical insulation and
cabling, packaging and containers. Its fabrication and processing are simple and well-established.
The versatility of PVC is due to the wide range of plasticisers and other additives that it accepts.
The term "additives" in polymer science refers to the chemicals and compounds added to the
polymer base to modify its material properties.
Polycarbonate would be normally considered an engineering plastic (other examples include PEEK,
ABS). Engineering plastics are valued for their superior strengths and other special material
properties. They are usually not used for disposable applications, unlike commodity plastics.
Specialty plastics are materials with unique characteristics, such as ultra-high strength, electrical
conductivity, electro-fluorescence, high thermal stability, etc.
The dividing line between the various types of plastics is based not on the underlying
material but on their properties and applications. For instance, polyethylene (PE) is a
cheap, low-friction polymer commonly used to make disposable shopping bags and trash bags,
and is considered a commodity plastic, whereas medium-density polyethylene (MDPE) is used for
underground gas and water pipes, and another variety called ultra-high-molecular-weight
polyethylene (UHMWPE) is an engineering plastic which is used extensively as the glide rails for
industrial equipment and the low-friction socket in implanted hip joints.
Another application of material science in industry is the making of composite materials.
Composite materials are structured materials composed of two or more macroscopic phases. An
example would be steel-reinforced concrete; another can be seen in the "plastic" casings of
television sets, cell-phones and so on. These plastic casings are usually a composite material
made up of a thermoplastic matrix such as acrylonitrile-butadiene-styrene (ABS) in which calcium
carbonate chalk, talc, glass fibres or carbon fibres have been added for added strength, bulk, or
electro-static dispersion. These additions may be referred to as reinforcing fibres, or dispersants,
depending on their purpose.
School of Telecommunications and Information Technology
Telecommunication is transmission over a distance for the purpose of communication. In earlier
times, this may have involved the use of smoke signals, drums, semaphore, flags or heliograph.
In modern times, telecommunication typically involves the use of electronic devices such as
the telephone, television, radio or computer. Early inventors in the field of telecommunication
include Alexander Graham Bell, Guglielmo Marconi and John Logie Baird. Telecommunication is an
important part of the world economy and the telecommunication industry's revenue was
estimated to be $1.2 trillion in 2006.
In the Middle Ages, chains of beacons were commonly used on hilltops as a means of relaying
a signal. Beacon chains suffered the drawback that they could only pass a single bit of
information, so the meaning of the message such as "the enemy has been sighted" had to be
agreed upon in advance. One notable instance of their use was during the Spanish Armada, when
a beacon chain relayed a signal from Plymouth to London signalling the arrival of Spanish ships.
In 1792, Claude Chappe, a French engineer, built the first fixed visual telegraphy system
(or semaphore line) between Lille and Paris. However semaphore suffered from the need for
skilled operators and expensive towers at intervals of ten to thirty km (six to nineteen miles). As a
result of competition from the electrical telegraph, the last commercial line was eventually abandoned.
Some of the earliest forms of telecommunication were so simplistic that they are rarely considered
in the ranks of today's modern technology. And yet, these were great achievements for the
people of the time. Some of the earliest forms included smoke signals, which were not only used
by indigenous people, but in signal towers across the world. In 1875, Jean-Maurice-Émile Baudot
created the first printing telegraph, using a code that was a forerunner of the 1s and 0s used
in today's standard computers. In 1876, Alexander Graham Bell filed for a patent on his invention
- and the rest is well-known telecommunications history.
Telegraph, Telephone, Radio and Television
The first commercial electrical telegraph was constructed by Sir Charles Wheatstone and
Sir William Fothergill Cooke and opened on 9 April 1839. Both Wheatstone and Cooke viewed
their device as "an improvement to the [existing] electromagnetic telegraph" not as a new device.
Samuel Morse independently developed a version of the electrical telegraph that he
unsuccessfully demonstrated on 2 September 1837. His code was an important advance over
Wheatstone's signaling method. The first transatlantic telegraph cable was successfully completed
on 27 July 1866, allowing transatlantic telecommunication for the first time.
The conventional telephone was invented independently by Alexander Bell and Elisha Gray in
1876. Antonio Meucci invented the first device that allowed the electrical transmission of voice
over a line in 1849. However Meucci's device was of little practical value because it relied upon
the electrophonic effect and thus required users to place the receiver in their mouth to “hear”
what was being said. The first commercial telephone services were set up in 1878 and 1879 on
both sides of the Atlantic in the cities of New Haven and London.
In 1832, James Lindsay gave a classroom demonstration of wireless telegraphy to his students.
By 1854, he was able to demonstrate a transmission across the Firth of Tay from Dundee,
Scotland to Woodhaven, a distance of two miles (3 km), using water as the transmission
medium. In December 1901, Guglielmo Marconi established wireless communication between St.
John's, Newfoundland (Canada) and Poldhu, Cornwall (England), earning him the 1909 Nobel
Prize in physics (which he shared with Karl Braun). However, small-scale radio communication had
already been demonstrated in 1893 by Nikola Tesla in a presentation to the National Electric Light Association.
On 25 March 1925, John Logie Baird was able to demonstrate the transmission of moving pictures
at the London department store Selfridges. Baird's device relied upon the Nipkow disk and thus
became known as the mechanical television. It formed the basis of experimental broadcasts done
by the BBC beginning 30 September 1929. However, for most of the twentieth century televisions
depended upon the cathode ray tube invented by Karl Braun. The first version of such a television
to show promise was produced by Philo Farnsworth and demonstrated to his family on 7 September 1927.
Computer networks and the Internet
On 11 September 1940, George Stibitz was able to transmit problems using teletype to his
Complex Number Calculator in New York and receive the computed results back at Dartmouth
College in New Hampshire. This configuration of a centralized computer or mainframe with
remote dumb terminals remained popular throughout the 1950s. However, it was not until the
1960s that researchers started to investigate packet switching — a technology that would allow
chunks of data to be sent to different computers without first passing through a centralized
mainframe. A four-node network emerged on 5 December 1969; this network would
become ARPANET, which by 1981 would consist of 213 nodes.
ARPANET's development centred around the Request for Comment process and on 7 April
1969, RFC 1 was published. This process is important because ARPANET would eventually merge
with other networks to form the Internet and many of the protocols the Internet relies upon today
were specified through the Request for Comment process. In September 1981, RFC
791 introduced the Internet Protocol v4 (IPv4) and RFC 793 introduced the Transmission Control
Protocol (TCP) — thus creating the TCP/IP protocol that much of the Internet relies upon today.
However, not all important developments were made through the Request for Comment process.
Two popular link protocols for local area networks (LANs) also appeared in the 1970s. A patent
for the token ring protocol was filed by Olof Soderblom on 29 October 1974 and a paper on
the Ethernet protocol was published by Robert Metcalfe and David Boggs in the July 1976 issue
of Communications of the ACM.
A basic telecommunication system consists of three elements:
a transmitter that takes information and converts it to a signal;
a transmission medium that carries the signal; and,
a receiver that receives the signal and converts it back into usable information.
For example, in a radio broadcast the broadcast tower is the transmitter, free space is the
transmission medium and the radio is the receiver. Often telecommunication systems are two-way
with a single device acting as both a transmitter and receiver or transceiver. For example,
a mobile phone is a transceiver. Telecommunication over a telephone line is called point-to-point
communication because it is between one transmitter and one receiver. Telecommunication
through radio broadcasts is called broadcast communication because it is between one powerful
transmitter and numerous receivers.
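The three-element model above can be sketched as a chain of three functions. This is a minimal illustration with an ideal, noiseless medium; the function names are chosen to match the text, not any real library.

```python
# Minimal sketch of a telecommunication system: a transmitter converts
# information to a signal, the medium carries it, and a receiver
# converts it back into usable information.
def transmitter(message):
    return message.encode("utf-8")   # information -> signal (bytes)

def medium(signal):
    return signal                    # assumed ideal, noiseless channel

def receiver(signal):
    return signal.decode("utf-8")    # signal -> usable information

received = receiver(medium(transmitter("hello")))
print(received)  # 'hello'
```

A transceiver, such as a mobile phone, would simply implement both the `transmitter` and `receiver` roles in one device.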
Analogue or digital
Signals can be either analogue or digital. In an analogue signal, the signal is varied continuously
with respect to the information. In a digital signal, the information is encoded as a set of discrete
values (for example ones and zeros). During transmission the information contained in analogue
signals will be degraded by noise. Conversely, unless the noise exceeds a certain threshold, the
information contained in digital signals will remain intact. Noise resistance represents a key
advantage of digital signals over analogue signals.
A network is a collection of transmitters, receivers and transceivers that communicate with each
other. Digital networks consist of one or more routers that work together to transmit information
to the correct user. An analogue network consists of one or more switches that establish a
connection between two or more users. For both types of network, repeaters may be necessary to
amplify or recreate the signal when it is being transmitted over long distances.
Channels and Modulation
A channel is a division in a transmission medium so that it can be used to send multiple streams
of information. For example, a radio station may broadcast at 96.1 MHz while another radio
station may broadcast at 94.5 MHz. In this case, the medium has been divided by frequency and
each channel has received a separate frequency to broadcast on. Alternatively, one could allocate
each channel a recurring segment of time over which to broadcast—this is known as time-division
multiplexing and is sometimes used in digital communication.
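Time-division multiplexing can be sketched as round-robin interleaving: each channel transmits in its recurring slot on the shared medium. The sketch below assumes equal-length streams for simplicity.

```python
# Sketch of time-division multiplexing: each channel gets a recurring
# time slot, so the shared medium carries the streams interleaved.
def tdm_multiplex(channels):
    """Interleave equal-length channel streams into one slot sequence."""
    return [ch[i] for i in range(len(channels[0])) for ch in channels]

a = ["a0", "a1", "a2"]  # stream from channel A
b = ["b0", "b1", "b2"]  # stream from channel B
print(tdm_multiplex([a, b]))  # ['a0', 'b0', 'a1', 'b1', 'a2', 'b2']
```

Frequency-division multiplexing, by contrast, would give each stream its own carrier frequency rather than its own time slot.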
The shaping of a signal to convey information is known as modulation. Modulation can be
used to represent a digital message as an analogue waveform. This is known as keying and
several keying techniques exist (these include phase-shift keying, frequency-shift
keying and amplitude-shift keying). Bluetooth, for example, uses phase-shift keying to exchange
information between devices.
Modulation can also be used to transmit the information of analogue signals at higher frequencies.
This is helpful because low-frequency analogue signals cannot be effectively transmitted over free
space. Hence the information from a low-frequency analogue signal must be superimposed on a
higher-frequency signal (known as the carrier wave) before transmission. There are several
different modulation schemes available to achieve this (two of the most basic being amplitude
modulation and frequency modulation). An example of this process is a DJ's voice being
superimposed on a 96 MHz carrier wave using frequency modulation (the voice would then be
received on a radio as the channel “96 FM”).
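The superposition idea can be sketched numerically for amplitude modulation, the simpler of the two schemes named above: the low-frequency message scales the amplitude of the high-frequency carrier. The frequencies and modulation depth below are illustrative values, not parameters from the text.

```python
import math

# Sketch of amplitude modulation: a low-frequency message shapes the
# amplitude (envelope) of a high-frequency carrier wave.
def am_sample(t, f_msg=1.0, f_carrier=100.0, depth=0.5):
    message = math.sin(2 * math.pi * f_msg * t)
    carrier = math.sin(2 * math.pi * f_carrier * t)
    return (1.0 + depth * message) * carrier  # envelope times carrier

samples = [am_sample(t / 1000.0) for t in range(1000)]
print(max(samples) > 1.0)  # peaks exceed 1 where the message is positive
```

An FM station like the "96 FM" example would instead vary the carrier's frequency, leaving its amplitude constant.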
In an analogue telephone network, the caller is connected to the person he wants to talk to by
switches at various telephone exchanges. The switches form an electrical connection between the
two users and the setting of these switches is determined electronically when the caller dials the
number. Once the connection is made, the caller's voice is transformed to an electrical signal
using a small microphone in the caller's handset. This electrical signal is then sent through the
network to the user at the other end where it is transformed back into sound by a
small speaker in that person's handset. There is a separate electrical connection that works in
reverse, allowing the users to converse.
The fixed-line telephones in most residential homes are analogue — that is, the speaker's voice
directly determines the signal's voltage. Although short-distance calls may be handled from end-
to-end as analogue signals, increasingly telephone service providers are transparently converting
the signals to digital for transmission before converting them back to analogue for reception. The
advantage of this is that digitized voice data can travel side-by-side with data from the Internet
and can be perfectly reproduced in long distance communication (as opposed to analogue signals
that are inevitably impacted by noise).
Mobile phones have had a significant impact on telephone networks. Mobile phone subscriptions
now outnumber fixed-line subscriptions in many markets. Sales of mobile phones in 2005 totaled
816.6 million with that figure being almost equally shared amongst the markets of Asia/Pacific
(204 m), Western Europe (164 m), CEMEA (Central Europe, the Middle East and Africa) (153.5
m), North America (148 m) and Latin America (102 m). In terms of new subscriptions over the
five years from 1999, Africa has outpaced other markets with 58.2% growth. Increasingly these
phones are being serviced by systems where the voice content is transmitted digitally such
as GSM or W-CDMA, with many markets choosing to deprecate analogue systems such as AMPS.
There have been dramatic changes in telephone communication behind the scenes. Starting with
the operation of TAT-8 in 1988, the 1990s saw the widespread adoption of systems based
on optic fibres. The benefit of communicating with optic fibres is that they offer a drastic increase
in data capacity. TAT-8 itself was able to carry 10 times as many telephone calls as the last
copper cable laid at that time and today's optic fibre cables are able to carry 25 times as many
telephone calls as TAT-8. This increase in data capacity is due to several factors. First, optic fibres
are physically much smaller than competing technologies. Second, they do not suffer
from crosstalk which means several hundred of them can be easily bundled together in a single
cable. Lastly, improvements in multiplexing have led to an exponential growth in the data capacity
of a single fibre.
Assisting communication across many modern optic fibre networks is a protocol known
as Asynchronous Transfer Mode (ATM). The ATM protocol allows for the side-by-side data
transmission mentioned earlier. It is suitable for public telephone networks
because it establishes a pathway for data through the network and associates a traffic
contract with that pathway. The traffic contract is essentially an agreement between the client and
the network about how the network is to handle the data; if the network cannot meet the
conditions of the traffic contract it does not accept the connection. This is important because
telephone calls can negotiate a contract so as to guarantee themselves a constant bit rate,
something that will ensure a caller's voice is not delayed in parts or cut-off completely. There are
competitors to ATM, such as Multiprotocol Label Switching (MPLS), that perform a similar task and
are expected to supplant ATM in the future.
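The traffic-contract idea reduces to a simple admission-control rule: accept a connection only if the requested constant bit rate can still be guaranteed. The sketch below is illustrative; the capacity and bit-rate figures are invented, and real ATM contracts involve more parameters than a single rate.

```python
# Sketch of admission control under a traffic contract: the network
# rejects any connection whose guaranteed bit rate it cannot meet.
class Link:
    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps
        self.reserved = 0

    def request(self, bit_rate_kbps):
        if self.reserved + bit_rate_kbps > self.capacity:
            return False            # contract cannot be met: reject
        self.reserved += bit_rate_kbps
        return True                 # bandwidth reserved for the call

link = Link(capacity_kbps=128)
print(link.request(64))  # True: first call admitted
print(link.request(64))  # True: exactly fills the link
print(link.request(64))  # False: would exceed capacity, rejected
```

Rejecting the third call up front is what lets the first two keep their constant bit rate, so a caller's voice is never delayed or cut off.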
Radio and Television
In a broadcast system, the central high-powered broadcast tower transmits a high-
frequency electromagnetic wave to numerous low-powered receivers. The high-frequency wave
sent by the tower is modulated with a signal containing visual or audio information.
The receiver is then tuned so as to pick up the high-frequency wave and a demodulator is used to
retrieve the signal containing the visual or audio information. The broadcast signal can be either
analogue (signal is varied continuously with respect to the information) or digital (information is
encoded as a set of discrete values).
The broadcast media industry is at a critical turning point in its development, with many countries
moving from analogue to digital broadcasts. This move is made possible by the production of
cheaper, faster and more capable integrated circuits. The chief advantage of digital broadcasts is
that they prevent a number of complaints with traditional analogue broadcasts. For television, this
includes the elimination of problems such as snowy pictures, ghosting and other distortion. These
occur because of the nature of analogue transmission, which means that perturbations due
to noise will be evident in the final output. Digital transmission overcomes this problem because
digital signals are reduced to discrete values upon reception and hence small perturbations do not
affect the final output. In a simplified example, if a binary message 1011 was transmitted with
signal amplitudes [1.0 0.0 1.0 1.0] and received with signal amplitudes [0.9 0.2 1.1 0.9] it would
still decode to the binary message 1011 — a perfect reproduction of what was sent. From this
example, a problem with digital transmissions can also be seen in that if the noise is great enough
it can significantly alter the decoded message. Using forward error correction a receiver can
correct a handful of bit errors in the resulting message but too much noise will lead to
incomprehensible output and hence a breakdown of the transmission.
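The worked example with the binary message 1011 can be reproduced in a few lines of Python, using a simple threshold as the "reduction to discrete values":

```python
def decode(amplitudes, threshold=0.5):
    # Reduce each received amplitude to a discrete value (0 or 1).
    return [1 if a >= threshold else 0 for a in amplitudes]

received = [0.9, 0.2, 1.1, 0.9]    # noisy version of the message 1011
print(decode(received))            # [1, 0, 1, 1], a perfect reproduction
too_noisy = [0.9, 0.6, 1.1, 0.9]   # heavier noise on the second symbol
print(decode(too_noisy))           # [1, 1, 1, 1], a decoding error
```

The second call shows the failure mode described above: once the noise pushes an amplitude past the decision threshold, the decoded message is altered, and only forward error correction could recover it.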
In digital television broadcasting, there are three competing standards that are likely to be
adopted worldwide. These are the ATSC, DVB and ISDB standards; the adoption of these
standards thus far is presented in the captioned map. All three standards use MPEG-2 for video
compression.
The Internet
The Internet is a worldwide network of computers and computer networks that can communicate
with each other using the Internet Protocol. Any computer on the Internet has a unique IP
address that can be used by other computers to route information to it. Hence, any computer on
the Internet can send a message to any other computer using its IP address. These messages
carry with them the originating computer's IP address allowing for two-way communication. The
Internet is thus an exchange of messages between computers.
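The two-way exchange of messages described here can be demonstrated with two UDP sockets on the loopback interface standing in for two Internet hosts; this is a sketch of the addressing principle, not of the Internet at large, and the port numbers are assigned by the operating system:

```python
import socket

# Two sockets on the loopback interface stand in for two computers; each
# datagram carries the sender's address, which makes the reply possible.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # OS picks a free port
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))

client.sendto(b"hello", server.getsockname())
data, sender = server.recvfrom(1024)     # sender holds the client's address
server.sendto(b"hi back", sender)        # route the reply using that address
reply, _ = client.recvfrom(1024)
print(data, reply)                       # b'hello' b'hi back'
server.close()
client.close()
```

The key point mirrored from the text: the server never needed to know the client's address in advance, because the incoming message carried it.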
As of 2008, an estimated 21.9% of the world population has access to the Internet with the
highest access rates (measured as a percentage of the population) in North America (73.6%),
Oceania/Australia (59.5%) and Europe (48.1%). In terms of broadband access, Iceland (26.7%),
South Korea (25.4%) and the Netherlands (25.3%) led the world.
The Internet works in part because of protocols that govern how the computers and routers
communicate with each other. The nature of computer network communication lends itself to a
layered approach where individual protocols in the protocol stack run more-or-less independently
of other protocols. This allows lower-level protocols to be customized for the network situation
while not changing the way higher-level protocols operate. A practical example of why this is
important is because it allows an Internet browser to run the same code regardless of whether
the computer it is running on is connected to the Internet through an Ethernet or Wi-
Fi connection. Protocols are often talked about in terms of their place in the OSI reference
model (pictured on the right), which emerged in 1983 as the first step in an unsuccessful attempt
to build a universally adopted networking protocol suite.
For the Internet, the physical medium and data link protocol can vary several times as packets
traverse the globe. This is because the Internet places no constraints on what physical medium or
data link protocol is used.
Local Area Networks
Despite the growth of the Internet, the characteristics of local area networks (computer networks
that extend over at most a few kilometres) remain distinct. This is because networks on this scale do not
require all the features associated with larger networks and are often more cost-effective and
efficient without them.
In the mid-1980s, several protocol suites emerged to fill the gap between the data link and
applications layer of the OSI reference model. These were Appletalk, IPX and NetBIOS with the
dominant protocol suite during the early 1990s being IPX due to its popularity with MS-
DOS users. TCP/IP existed at this point but was typically only used by large government and
research facilities. As the Internet grew in popularity and a larger percentage of traffic became
Internet-related, local area networks gradually moved towards TCP/IP and today networks mostly
dedicated to TCP/IP traffic are common. The move to TCP/IP was helped by technologies such
as DHCP that allowed TCP/IP clients to discover their own network address — a functionality that
came standard with the AppleTalk/IPX/NetBIOS protocol suites.
It is at the data link layer though that most modern local area networks diverge from the Internet.
Whereas Asynchronous Transfer Mode (ATM) or Multiprotocol Label Switching (MPLS) are typical
data link protocols for larger networks, Ethernet and Token Ring are typical data link protocols for
local area networks. These protocols differ from the former protocols in that they are simpler (e.g.
they omit features such as Quality of Service guarantees) and offer collision prevention. Both of
these differences allow for more economic set-ups.
Despite the modest popularity of Token Ring in the 80's and 90's, virtually all local area networks
now use wired or wireless Ethernet. At the physical layer, most wired Ethernet implementations
use copper twisted-pair cables (including the common 10BASE-T networks). However, some early
implementations used coaxial cables and some recent implementations (especially high-speed
ones) use optic fibres. Where optic fibre is used, the distinction must be made between multi-
mode fibre and single-mode fibre. Multi-mode fibre can be thought of as thicker optical fibre that
is cheaper to manufacture devices for but that suffers from less usable bandwidth and greater
attenuation (i.e. poor long-distance performance).
School of Civil Engineering
Architectural space is a powerful shaper of behavior. Winston Churchill understood this well when,
in 1943, before the House of Commons, he said, “We shape our buildings, and afterwards our
buildings shape us.” The chamber in which the Commons had been meeting for nearly a century
had been gutted by a German bomb in 1941, and Parliament was beginning to consider
alternative ways of reconstructing the chamber. When Parliament had first begun to meet, in the
thirteenth century, it had been given the use of rooms in medieval Westminster Palace and had
moved into the palace chapel. A typical Gothic chapel, it was narrow and tall, with parallel rows of
choir stalls on either side of the aisle down the center. The members of Parliament sat in the
stalls, dividing themselves into two groups, one the government in power and the other the loyal
opposition. Seldom did members take the brave step of crossing the aisle to change political
allegiance. When the Houses of Parliament had to be rebuilt after a fire in 1834, the Gothic form
was followed, and Churchill argued that this ought to be done again in 1943. There were those
who advocated rebuilding the House with a fan of seats in a broad semicircle, as used in
legislative chambers in the United States and France. But Churchill convincingly argued that the
form of English parliamentary government had been shaped by the physical environment in which
it had first been housed; to change that environment, to give it a different behavioral space,
would change the very nature of parliamentary operation. The English had first shaped their
architecture, and then that architecture had shaped English government and history. Through
Churchill’s persuasion, the Houses of Parliament were rebuilt with the medieval arrangement of
facing rows of parallel seats looking across a central aisle.
Mathematical systems of proportion originate from the Pythagorean concept of ‘all is number’ and
the belief that certain numerical relationships manifest the harmonic structure of the universe.
One of these relationships that has been in use ever since the days of antiquity is the proportion
known as the Golden Section. The Greeks recognized the dominating role the Golden Section
played in the proportions of the human body. Believing that both humanity and the shrines
housing their deities should belong to a higher universal order, they utilized these same
proportions in their temple structures. Renaissance architects also explored the Golden Section in
their work. In more recent times, Le Corbusier based his Modulor system on the Golden Section.
Its use in architecture endures even today. The Golden Section can be defined as the ratio
between two sections of a line, or the two dimensions of a plane figure, in which the lesser of the
two is to the greater as the greater is to the sum of both. It can be expressed algebraically by the
equality of two ratios: a/b = b/(a+b). The Golden
Section has some remarkable algebraic and geometric properties that account for its existence in
architecture as well as the structures of many living organisms. Any progression based on the
Golden Section is at once additive and geometrical.
Another progression that closely approximates the Golden Section in
whole numbers is the Fibonacci Series: 1,1,2,3,5,8,13… Each term again is the sum of the two
preceding ones, and the ratio between two consecutive terms tends to approximate the Golden
Section as the series progresses to infinity.
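Both the defining proportion a/b = b/(a+b) and the convergence of Fibonacci ratios can be checked numerically; a small Python sketch:

```python
import math

golden = (1 + math.sqrt(5)) / 2        # the Golden Section, about 1.618

# The defining proportion: the lesser (1) is to the greater (golden)
# as the greater is to the sum of both.
print(abs(1 / golden - golden / (1 + golden)) < 1e-12)   # True

# Ratios of consecutive Fibonacci terms approach the Golden Section.
seq = [1, 1]
while len(seq) < 25:
    seq.append(seq[-1] + seq[-2])      # each term is the sum of the two before
ratios = [b / a for a, b in zip(seq, seq[1:])]
print(ratios[:4])                      # [1.0, 2.0, 1.5, 1.6666666666666667]
print(abs(ratios[-1] - golden) < 1e-9) # True: the series has converged
```

The early ratios oscillate above and below the Golden Section, and the oscillation dies away as the series progresses, exactly as the passage describes.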
There are many types of rhythm which are of special importance in building. First, there is the
repetition of shapes: windows, doors, columns, wall areas, and so on. Second, there is the
repetition of dimensions, such as the dimensions between supports or those of bay spacing. In
the first case, the repetition of shapes, the spacing can vary without destroying the rhythmical
character. Conversely, where dimensions are equal, the units may vary in size or shape and
rhythm will still remain. It is this rhythmical quality of dimension repetition which accounts for
much of the beauty of well-designed lettering – a quality that is especially marked in carved
inscriptions. A third and more
complex type of rhythm is based on the repetition of differences. Thus, if we have pairs of lines,
parallel to each other, in which the distance between the second pair is greater than that between
the first pair and the distance between the third pair greater than that between the second pair, we
inevitably establish an irregular, progressive rhythm. And so with lines of varying length, placed
continuously: we may start from a dot, then a dash, then one longer still; the effect will be
definitely rhythmical and will, moreover, imply a strong sense of motion, either from the small to
the large or from the large to the small. We can even combine ascending and descending
progressions in the same rhythmical series, building up from small to large and then gradually
returning to small again, or, conversely, working from large to small to large. In the latter case,
however, the relationship may be felt as constricted. More useful is the combination in which the
large is in the center, with a sense of swelling to an important element and diminishing to a small
one- progressing from a quiet beginning to a climax and then relaxing again.
Highway engineering is both an art and a science. A well-designed highway should possess
internal harmony: motorists should be able to see smooth lines ahead and have a clear vision of
the landscape at the sides. The highway also should have external harmony: to the eye of an
onlooker, the highway should fit in well with its surroundings. These requirements demand
something akin to the vision and imagination of an artist, one who can visualize the three-
dimensional aspects of the various combinations of horizontal and vertical curves, of cuts merging
smoothly with fills, of side slopes blending with the terrain. The
highway, however, is primarily a transportation medium. It should be built to endure and to
provide adequately for safe passage of vehicles. To achieve this objective, the design must adopt
certain criteria for strength, safety, and uniformity. Most of these criteria have been developed
over many years in the hard school of experience; some have evolved through research and
testing. Thus, certain standard formulas have been established. But these always are subject to
modifications since roads are intimately associated with the earth’s surface, which seldom
conforms to mathematical concepts.
People started building skyscrapers not only because of new technological discoveries, but also
because they were needed to utilize expensive land effectively and to keep office workers close to
one another. The steel frame developed through several buildings in New York and Chicago that
advanced the technology, allowing the frame to carry a building on its own.
Suddenly, it was possible to live and work in colossal towers, hundreds of feet above the ground.
People didn’t construct many buildings made of bricks and mortar more than 10 stories tall until
the late 1800s. The main technological advancement that made skyscrapers possible was the
development of mass iron and steel production. Skyscrapers were then erected in the growing
American metropolitan centers, most notably Chicago. Steel, which is even lighter and stronger
than iron, made it possible to build even taller buildings. Many skyscrapers are built almost
entirely of steel and glass, giving the occupants a spectacular view of their city.
The skyscraper race is far from over. There are more than 5
proposed buildings that would break the current record. According to some engineering experts,
the real limitation is money, not technology. Experts are
divided about how high we can really go in the near future. Some say we could build a mile-high
(5,280 ft, or 1,609m) building with existing technology, while others say we would need to
develop lighter, stronger materials, faster elevators and advanced sway dampers before these
buildings were feasible. Speaking only hypothetically, most engineers will not impose an upper
limit. Future technology advances could lead to sky-high cities, many experts say, housing a
million people or more.
Characteristics of Concrete
Portland cement concrete is a simple material in appearance with a very complex internal nature.
In contrast to its internal complexity, concrete’s versatility, durability, and economy have made it
the world’s most used construction material. This can be seen in the variety of structures it is
used in, from highways, bridges, buildings, and dams to floors, sidewalks, and even works of art.
The use of concrete is unlimited and not even earthbound, as recent interest in its use beyond Earth indicates.
As with most rocklike substances, concrete has a high compressive strength and a very low tensile
strength. Reinforced concrete is a combination of concrete and steel wherein the steel
reinforcement provides the tensile strength lacking in the concrete.
Concrete is strong in compression, but weak in
tension: its tensile strength varies from 8 to 14 percent of its compressive strength. Due to such a
low tensile capacity, flexural cracks develop at early stages of loading. In order to reduce or
prevent such cracks from developing, a concentric force is imposed in the longitudinal direction of
the structural elements. This force prevents the cracks from developing by eliminating or
considerably reducing the tensile stresses at the critical mid-span and support sections at service
load, thereby raising the bending, shear, and torsional capacities of the sections.
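The 8 to 14 percent figure quoted above translates directly into numbers; a minimal sketch, assuming a hypothetical concrete mix with a compressive strength of 30 MPa:

```python
def tensile_strength_range(compressive_mpa):
    # Per the passage, tensile strength is 8 to 14 percent of
    # compressive strength.
    return 0.08 * compressive_mpa, 0.14 * compressive_mpa

low, high = tensile_strength_range(30.0)   # hypothetical 30 MPa mix
print(round(low, 2), round(high, 2))       # 2.4 4.2
```

Such a low tensile capacity (2.4 to 4.2 MPa here) is why flexural cracks appear at early load stages unless prestressing or reinforcement is provided.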
The moment at various points in a structure necessary for plotting a bending moment diagram
may be obtained algebraically by taking moments at those points, but the procedure is quite
tedious if there are more than two or three loads applied to the structure.
The change in moment between those points on
a structure has been shown to equal the shear between those points times the distance between
them; therefore, the change in moment equals the area of the shear diagram between the points.
The relationship between shear and moment greatly simplifies the drawing of
moment diagrams. To determine the moment at a particular section, it is only necessary to
compute the total area beneath the shear curve, either to the left or to the right of the section,
taking into account the algebraic signs of the various segments of the shear curve. Shear and
moment diagrams are self-checking. If they are initiated at one end of a structure, usually the
left, and check out to the proper value on the other end, the work is probably correct.
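The area rule and the self-checking property can be sketched for a piecewise-constant shear diagram; the beam geometry and loads below are invented for illustration:

```python
def moment_from_shear(shear_segments):
    """Integrate a piecewise-constant shear diagram to obtain moments.
    shear_segments: list of (length, shear_value) pairs, left to right."""
    x, m = 0.0, 0.0
    points = [(x, m)]
    for length, shear in shear_segments:
        x += length
        m += shear * length   # change in moment = area of the shear diagram
        points.append((x, m))
    return points

# Simply supported 10 m beam with a 10 kN load at mid-span: reactions of
# 5 kN each, so shear is +5 kN on the left half and -5 kN on the right.
print(moment_from_shear([(5.0, 5.0), (5.0, -5.0)]))
# [(0.0, 0.0), (5.0, 25.0), (10.0, 0.0)]
```

The diagram is self-checking in exactly the sense described: starting from zero at the left end, the computed moment returns to zero at the right end, and the mid-span value 25 kN·m matches the familiar PL/4 result.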
Deep beams are structural elements loaded as beams but having a large depth/thickness ratio
and a shear span/depth ratio not exceeding 2 to 2.5, where the shear span is the clear span of
the beam for distributed load. Floor slabs under horizontal loads, wall slabs under vertical loads,
short-span beams carrying heavy loads, and some shear walls are examples of this type of
structural elements. Because of the geometry of deep beams, they behave as two-
dimensional rather than one-dimensional members and are subjected to a two-dimensional state of
stress. As a result, plane sections before bending do not necessarily remain plane after bending.
The resulting strain distribution is no longer considered linear, and shear deformations that are
neglected in normal beams become significant compared to pure flexure. Consequently, the stress
block becomes nonlinear even at the elastic stage. At the limit state of ultimate load, the
compressive stress distribution in the concrete would no longer follow the same parabolic shape
or intensity as that shown in the figure for a normal beam.
Sewers are classified into three categories: sanitary, storm, and combined. Sanitary sewers are
designed to carry municipal wastewater from homes and commercial establishments. With proper
pretreatment, industrial wastes may also be discharged into these sewers. Storm sewers are
designed to handle excess rainwater to prevent flooding of low areas. While sanitary sewers
convey wastewater to treatment facilities, storm sewers generally discharge into rivers and
streams. Combined sewers are expected to accommodate both municipal wastewater and
stormwater. These systems are designed so that during dry periods the wastewater is carried to a
treatment facility. During rain storms, the excess water is discharged directly into a river, stream,
or lake without treatment. Modern design practice discourages the building of combined sewers,
and the continued improvement of our natural water bodies will probably require extensive
replacement of combined sewers with separate systems for sanitary and storm flow.
When an area is urbanized, trees and vegetation are removed, the drainage pattern is altered,
conveyance is accelerated, and the imperviousness of the area is increased because of the
construction of residential or commercial structures and roads. Increased imperviousness
decreases infiltration with a consequent increase in the volume of runoff. Improvements in a
drainage system cause runoff to leave the urbanized area faster than from a similar
undeveloped area. Consequently, the time for runoff to reach its peak is shorter for an urban
watershed than for an undeveloped watershed. The peak runoff from urbanized watersheds, on
the other hand, is larger than from similar undeveloped watersheds.
Urban stormwater drainage collection and conveyance systems are designed to remove runoff
from urbanized areas so that flooding is avoided and transportation is not adversely affected. The
cost of this and similar systems is directly dependent on the recurrence interval of rainfall used in
the design. Rainfall with5 to 10 years recurrence intervals is most often used in the sizing and
design of the urban drainage system.
School of Social Technology
The Impact of Tourism on the Environment
The environment in which tourism takes place is important to the quality of the tourist’s
experience. Both the natural environment in the form of land, water, plants and animals, and the
man-made environment, which includes buildings and streets, form the basis of the tourism
industry. In the absence of an attractive environment, tourism rarely succeeds, because this is one of
the vital things which tourists look for in a destination.
Since the environment is so important to tourism, it might be reasonable to expect that tourism
developers (those responsible for building tourist accommodation and attractions) would take care
to ensure that the environment was properly cared for and preserved. This, however, is not
always the case, as tourism can have two different types of impact on the environment where it
takes place:
1. Tourism and environment can exist together in harmony, when tourism benefits the
environment in some way.
2. Tourism and the environment can exist together in conflict, when tourism damages the
environment in some way.
Tourism and the environment in harmony:
When tourism and the environment exist together in harmony, the environment benefits from
tourism (and, of course, tourism benefits from the environment).
There are many examples of this relationship, most of which fall into one of two types of benefits
to the environment: conservation and rehabilitation.
Conservation is the preservation and sensible use of the natural and manmade environment.
Conservation and tourism often go hand in hand.
Many historic monuments and archeological sites have been saved from destruction because of
the great interest in these from tourists.
Rehabilitation describes what happens when a building or an area is given a new life and is
”reborn”, as something quite different from its original state. While conservation involves
preserving the environment in a form as close as possible to its original or natural state,
rehabilitation involves a major change of use of the environment. Many buildings and areas have
been saved by tourism through their rehabilitation as tourist attractions or as tourism
accommodation.
Customer Relations Skill
This is the name given to a person’s ability to make a visitor feel welcome, properly looked after,
and confident that they are receiving the standard of service they expect in the aircraft, hotel, or
tourist attraction, for example. The need for tourism employees to have good customer relations
skills is related to visitors' expectations of enjoyment and comfort, despite being away from
home. They expect the tourism staff who serve them to be cheerful, polite and helpful at all times.
Customer relations skills are at the heart of what managers and owners of tourist facilities expect
and wish their staff to display in their dealings with customers. Staff are expected to behave in
a welcoming and pleasant manner to tourists, particularly if they are the first members of staff
with whom the visitors come into contact, such as hotel receptionists, or staff selling tickets at
entry points to tourist attractions, for example. Cultural differences are important here because
people from different cultures express courtesy in different ways. As tourists travel further
afield to new destinations, they come into contact with cultures that differ from their own, and
misunderstanding over standards of behavior towards tourists inevitably arise. For example,
European visitors to some Asian destinations have mistakenly formed the impression that the
people working in the tourism industry there are reserved and curt in their dealings with them.
By the same token, the courtesies and greetings which come naturally to American tourism staff –
“Have a nice day”, “enjoy your meal” and “you’re welcome” appear odd and extravagant to
some European visitors. Yet both of these are examples of tourism staff from different cultures
behaving in their own perfectly courteous and polite manner towards visitors.
The final way in which customer relations skills show themselves is in dealing with complaints. Even
the best-run tourism businesses receive complaints from time to time, when visitors feel that
something is not as it should be. Much of the training in customer relations skills worldwide is
devoted to the effective handling of complaints. This involves convincing the visitors that their complaint is
being acknowledged and properly dealt with, and preventing it from escalating into a loud
unpleasant argument which affects other people’s enjoyment.
Tourists need information on a variety of topics from travel directions to explanations of
unfamiliar items on menus, where places of interest are and what there is to see and do locally, and information
on the history and traditions of the places they are visiting. Tourists tend to regard all those
working in the industry as the source of answers to their questions, whether the person is a hotel
doorman, a gardener working at a historic house, a waitress, the ticket collector on a train, or the
manager of a holiday park. For this reason, the ability to understand what is being asked and to
provide information and answers is regarded as an important communication skill for tourism staff
everywhere. Most people working with tourists come to build up a range of knowledge about the
place where they work and the surrounding area. In France the accumulation of such useful
background information is considered so important that students studying tourism at college there
attend lectures on the culture, traditions and history of their own country as part of their course.
Great emphasis is also placed by tourism training staff on giving information as clearly and as
accurately as possible. To illustrate this, here is an extract from a tourism training manual for
employees dealing with overseas visitors, entitled “Welcome to Britain”:
A bad set of directions: Go out here, follow the road called Bright Street until you get to the
humpbacked bridge. Turn right, no left, no sorry, right, and walk as far as the gasworks. The
pub is at the back, tucked away in a corner. You can’t miss it.
A better version: As you leave this building, turn left into Bright Street. Walk about 200 metres
down to the bridge. The canal runs underneath and you’ll see a number of boats by the bridge.
Take the road to the right and follow it straight down to Ashley gasworks. There’s a big sign
stretching over the road on your left. The pub is timber-framed with a thatched roof. You’ll see the car
park first and there are usually plenty of people around. If you do get lost, ask one of the locals.
They’re very friendly. This might take extra time but not only will you enable the person to find
the pub; you’ll also show them you’ve done as much as you can to assist.
Social Ecological Model
The Social Ecological Model, also called Social Ecological Perspective, is a framework to examine
the multiple effects and interrelatedness of social elements in an environment. SEM can provide a
theoretical framework to analyze various contexts in multiple types of research and in conflict
communication (Oetzel, Ting-Toomey, & Rinderle, 2006). Social ecology is the study of people in
an environment and the influences on one another (Hawley, 1950). This model allows for the
integration (Oetzel, Ting-Toomey, & Rinderle, 2006) of multiple levels and contexts to establish
the big picture in conflict communication.
There are several adaptations of the Social Ecological Model; however, the initial and most utilized
version is Urie Bronfenbrenner’s (1977, 1979) Ecological Systems Theory which divides factors
into four levels: macro-, exo-, meso-, and micro-, which describe influences as intercultural,
community, organizational, and interpersonal or individual. Traditionally many research theorists
have considered only a dichotomy of perspectives, either micro (individual behavior) or macro
(media or cultural influences). Bronfenbrenner’s perspective (1979) was founded on the person,
the environment, and the continuous interaction of the two. This interaction constantly evolved
and developed both components. However, Bronfenbrenner realized it was not only the
environment directly affecting the person, but that there were layers in between, which all had
resulting impacts on the next level. His research began with the primary purpose of understanding
human development and behavior. Bronfenbrenner’s work was an extension from Kurt Lewin’s
(1935) classic equation showing that behavior is a function of the person and the environment.
Bronfenbrenner (1979) considered the individual, organization, community, and culture to be
nested factors, like Russian dolls. Each echelon operates fully within the next larger sphere.
Although Bronfenbrenner first coined the phrase Ecological Systems Theory, it is necessary to
mention that Amos H. Hawley (1950) conducted a significant amount of research in this field as
well, along with many other philosophers, including his colleague, R. D. McKenzie. Hawley’s work
on the “interrelatedness of life”, presented in his book Human Ecology (1950), was grounded in
Charles Darwin’s writings on the “web of life”.
Social Learning Theory
What is Social Learning Theory?
The social learning theory proposed by Albert Bandura has become perhaps the most influential
theory of learning and development. While rooted in many of the basic concepts of traditional
learning theory, Bandura believed that direct reinforcement could not account for all types of
learning. His theory added a social element, arguing that people can learn
new information and behaviors by watching other people. Known as observational learning (or
modeling), this type of learning can be used to explain a wide variety of behaviors.
Basic Social Learning Concepts: Observational Learning
In his famous "Bobo doll" studies, Bandura demonstrated that children learn and imitate
behaviors they have observed in other people. The children in Bandura’s studies observed an
adult acting violently toward a Bobo doll. When the children were later allowed to play in a room
with the Bobo doll, they began to imitate the aggressive actions they had previously observed.
Bandura identified three basic models of observational learning:
1. A live model, which involves an actual individual demonstrating or acting out a behavior.
2. A verbal instructional model, which involves descriptions and explanations of a behavior.
3. A symbolic model, which involves real or fictional characters displaying behaviors in books,
films, television programs, or online media.
The Modeling Process
Not all observed behaviors are effectively learned. Factors involving both the model and the
learner can play a role in whether social learning is successful. Certain requirements and steps
must also be followed. The following steps are involved in the observational learning and
modeling process:
In order to learn, you need to be paying attention. Anything that distracts your attention is
going to have a negative effect on observational learning. If the model is interesting or there is a
novel aspect to the situation, you are far more likely to dedicate your full attention to learning.
The ability to store information is also an important part of the learning process. Retention can
be affected by a number of factors, but the ability to pull up information later and act on it is
vital to observational learning.
Once you have paid attention to the model and retained the information, it is time to actually
perform the behavior you observed. Further practice of the learned behavior leads to
improvement and skill advancement.
Finally, in order for observational learning to be successful, you have to be motivated to imitate
the behavior that has been modeled. Reinforcement and punishment play an important role in
motivation. While experiencing these motivators can be highly effective, so can observing others
experience some type of reinforcement or punishment. For example, if you see another student
rewarded with extra credit for being on time to class, you might start to show up a few minutes
early each day.
Social Cognitive Theory
Social Cognitive Theory, used in psychology, education, and communication, posits that portions of
an individual's knowledge acquisition can be directly related to observing others within the context
of social interactions, experiences, and outside media influences.
Social Cognitive Theory stemmed out of work in the area of social learning theory proposed by
N.E. Miller and J. Dollard in 1941. Their proposition posits that if one were motivated to learn a
particular behavior, then that particular behavior would be learned through clear observations. By
imitating these observed actions the individual observer would solidify that learned action and
would be rewarded with positive reinforcement. The proposition of social learning was expanded
upon and theorized by American psychologist Albert Bandura from 1962 to the present.
The theorist most commonly associated with social cognitive theory is Albert Bandura.
Social cognitive theory is a learning theory based on the ideas that people learn by watching
what others do and that human thought processes are central to understanding personality. While
social cognitivists agree that there is a fair amount of influence on development generated by
learned behavior displayed in the environment in which one grows up, they believe that the
individual person (and therefore cognition) is just as important in determining moral development.
People learn by observing others, with the environment, behavior, and cognition all as the chief
factors in influencing development. These three factors are not static or independent; rather, they
are all reciprocal. For example, each behavior witnessed can change a person's way of thinking
(cognition). Similarly, the environment one is raised in may influence later behaviors, just as a
father's mindset (also cognition) will determine the environment in which his children are raised.
COMMUNITY BASED SOCIAL WORK
What is Social Work?
The profession of Social Work is an odd mixture of many things. It is usually practised by
government civil servants in the west (Europe and North America) while many international NGOs
have social workers on their staff.
The clientele of social work are often called the vulnerable, i.e. people whose special conditions or
circumstances put them in positions of weakness or vulnerability in comparison with the
mainstream of a society. Generally they include members of society who need some help.
Typically, these include those with physical or mental disabilities, persons who are not able to
work for a living or not able to care for themselves. In special cases, these may include battered
women (those who have been physically or emotionally assaulted, e.g. by their spouses, and who
cannot escape dangerous situations on their own), frail elderly persons, and children without
parents to support them or who are being mistreated.
The tasks of a social worker mainly include administration and counselling, along with a little bit
of medical (usually psychological) intervention and advocacy. The social worker provides her or
his clients with little bits of wisdom, advice, information, counselling, as needed. Every case is different.
The word "social" is a bit misleading because, in the west, where it is mainly practised, the social
worker does not work with a whole society, or even with a community or a group in a social
context. The social worker usually handles "cases," and a case is usually about an individual or
lately increasingly, a family.
This is even more ironical because where social work is taught, usually in a university in a
department or a school of social administration or social work, often (where they are small) they
are attached to sociology departments. Such schools or departments, in turn, are then usually
also where community development (like much of the material on this web site) is also taught.
Community development, in contrast, is an activity aimed at social institutions, such as
communities or groups, rather than at individuals.
One of the many motivating facts pushing the development of this web site is that the
empowerment of communities is important and highly needed in low income countries. Limiting
the training of community workers to those who are studying in universities limits the available
number of potentially capable community workers; this should be taught to middle school level
students (after they have been out working in the real world and have some life experience).
This document will not teach you how to become a social worker (any more than the water
module will teach you how to become a civil engineer), but will help you in initiating and
developing a community based social work (CBSW) program. The training on this web site is
aimed at community workers who do not have to be educated to university level.
Traditional Theories of Popular Culture
The theory of mass society
Mass society formed during the 19th-century industrialization process through the division of
labor, the large-scale industrial organization, the concentration of urban populations, the growing
centralization of decision making, the development of a complex and international communication
system and the growth of mass political movements. The term "mass society", therefore, was
introduced by anticapitalist, aristocratic ideologists and used against the values and practices of
industrialized society.
As Alan Swingewood points out in The Myth of Mass Culture (1977:5-8), the aristocratic theory of
mass society is to be linked to the moral crisis caused by the weakening of traditional centers of
authority such as family and religion. The society predicted by José Ortega y Gasset, T. S. Eliot
and others would be dominated by philistine masses, without centers or hierarchies of moral or
cultural authority. In such a society, art can only survive by cutting its links with the masses, by
withdrawing as an asylum for threatened values. Throughout the 20th century, this type of theory
has modulated on the opposition between disinterested, pure autonomous art and commercialized
mass culture.
Contemporary popular culture studies
If we forget precursors such as Umberto Eco and Roland Barthes for a moment, popular culture
studies as we know them today were developed in the late seventies and the eighties. The first
influential works were generally politically left-wing and rejected the "aristocratic" view. However,
they also criticized the pessimism of the Frankfurt School: contemporary studies on mass culture
accept that, apparently, popular culture forms do respond to widespread needs of the public.
They also emphasized the capacity of the consumers to resist indoctrination and passive
reception. Finally, they avoided any monolithic concept of mass culture. Instead they tried to
describe culture as a whole as a complex formation of discourses which indeed correspond to
particular interests, and which indeed can be dominated by specific groups, but which also always
are dialectically related to their producers and consumers.
A nice example of this tendency is Andrew Ross's No Respect. Intellectuals and Popular Culture
(1989). His chapter on the history of jazz, blues and rock does not present a linear narrative
opposing the authentic popular music to the commercial record industry, but shows how popular
music in the U.S., from the twenties until today, evolved out of complex interactions between
popular, avant-garde and commercial circuits, between lower- and middle-class kids, between
blacks and whites.
Gender Studies
Gender studies is a field of interdisciplinary study which analyzes the phenomenon of gender.
Gender Studies is sometimes related to studies of class, race, ethnicity, sexuality and location.
The philosopher Simone de Beauvoir said: “One is not born a woman, one becomes one.” In
Gender Studies the term "gender" is used to refer to the social and cultural constructions of
masculinities and femininities, not to the state of being male or female in its entirety. The field
emerged from a number of different areas: the sociology of the 1950s and later (see Sociology of
gender); the theories of the psychoanalyst Jacques Lacan; and the work of feminists such as Judith
Butler. Each field came to regard "gender" as a practice, sometimes referred to as something that
is performative. Feminist theory of psychoanalysis, articulated mainly by Julia Kristeva (the
"semiotic" and "abjection") and Bracha Ettinger (the "matrixial trans-subjectivity" and the
"primal mother-phantasies"), and informed both by Freud, Lacan and the Object relations theory,
is very influential in Gender studies.
Gender is an important area of study in many disciplines, such as literary theory, drama studies,
film theory, performance theory, contemporary art history, anthropology, sociology, psychology
and psychoanalysis. These disciplines sometimes differ in their approaches to how and why they
study gender. For instance in anthropology, sociology and psychology, gender is often studied as
a practice, whereas in cultural studies representations of gender are more often examined.
Gender Studies is also a discipline in itself: an interdisciplinary area of study that incorporates
methods and approaches from a wide range of disciplines.
Cultural History
The term cultural history refers both to an academic discipline and to its subject matter.
Cultural history, as a discipline, at least in its common definition since the 1970s, often combines
the approaches of anthropology and history to look at popular cultural traditions and cultural
interpretations of historical experience. It examines the records and narrative descriptions of past
knowledge, customs, and arts of a group of people. Its subject matter encompasses the
continuum of events occurring in succession leading from the past to the present and even into
the future pertaining to a culture.
Cultural history records and interprets past events involving human beings through the social,
cultural, and political milieu of or relating to the arts and manners that a group favors. Jacob
Burckhardt helped found cultural history as a discipline. Cultural history studies and interprets the
record of human societies by denoting the various distinctive ways of living built up by a group of
people under consideration. Cultural history involves the aggregate of past cultural activity, such
as ceremony, class in practices, and the interaction with locales.
Cultural history overlaps in its approaches with the French movements of histoire des mentalités
(Philippe Poirier, 2004) and the so-called new history, and in the U.S. it is closely associated with
the field of American studies. As originally conceived and practiced by 19th Century Swiss
historian Jakob Burckhardt with regard to the Italian Renaissance, cultural history was oriented to
the study of a particular historical period in its entirety, with regard not only for its painting,
sculpture and architecture, but for the economic basis underpinning society, and the social
institutions of its daily life as well.
Cultural Studies
Cultural studies is an academic discipline popular among a diverse group of scholars. It combines
political economy, communication, sociology, social theory, literary theory, media theory,
film/video studies, cultural anthropology, philosophy, museum studies and art history/criticism to
study cultural phenomena in various societies. Cultural studies researchers often concentrate on
how a particular phenomenon relates to matters of ideology, nationality, ethnicity, social class,
and/or gender. The term was coined by Richard Hoggart in 1964 when he founded the
Birmingham Centre for Contemporary Cultural Studies. It has since become strongly associated
with Stuart Hall, who succeeded Hoggart as Director.
School of Mining Engineering
General Information on Mining
As has been said, mining refers to actual ore extraction. Broadly speaking, mining is the
industrial process of removing a mineral-bearing substance from the place of its natural
occurrence in the Earth’s crust. The term “mining” includes the recovery of oil and gas from wells;
metal, non-metallic minerals, coal, peat, oil shale and other hydrocarbons from the earth. In other
words, the work done to extract mineral, or to prepare for its extraction is called mining.
The tendency in mining has been toward the increased use of mining machinery so that
modern mines are characterized by tremendous capacities. This has contributed to:
1. Improving working conditions and raising labor productivity;
2. The exploitation of lower-grade metal-bearing substances and;
3. The building of mines of great dimensions.
Mining can be done either as a surface operation (quarries, opencasts or open-pits) or it
can be done by an underground method. The mode of occurrence of the sought-for metallic
substance governs to a large degree the type of mining that is practiced.
The problem of depth also affects the mining method. If the rock containing the metallic
substance is at a shallow site and is massive, it may be economically excavated by a pit or quarry
like opening on the surface. If the metal-bearing mass is tabular, as a bed or vein, and goes to a
great distance beneath the surface, then it will be worked by some method of underground mining.
Working or exploiting the deposit means the extraction of mineral. With this point in view a
number of underground workings is driven in barren (waste) rock and in mineral. Mine workings
vary in shape, dimensions, location and function.
Depending on their function mine workings are described as exploratory, if they are driven
with a view to finding or proving mineral and as productive if they are used for the immediate
extraction of useful mineral.
Productive mining can be divided into capital investment work, development work, and
face or production work. Investment work aims at ensuring access to the deposit from the
surface. Development work prepares for the face work, and mineral is extracted (or produced) in
The rock surfaces at the sides of workings are called the sides, or in coal, the ribs. The
surface above the workings is the roof in coal mining while in metal mining it is called the back.
The surface below is called the floor.
Factors such as function, direct access to the surface, and driving in mineral or in barren
rock can be used for classifying mine workings.
Harmony with Environment
Minerals at shallow depths are extracted by open-cast mining which is cheaper than
underground mining. Open-cast mining consists in removing the overburden and other strata that
lie above mineral or fuel deposits to recover them.
All the surface excavations, waste heaps and equipment needed for extracting mineral in
the open form an independent mining unit. An opencast is a long, wide and comparatively shallow
working though it can reach 200m or even more in depth.
In opencasts the excavation is by horizontal slices corresponding to the type of mineral or
overburden in each slice. A bench is a thickness of rock or mineral which is separately broken or
excavated. Other open workings are called trenches, which are long, narrow, shallow exploratory workings.
The whole production process in opencasts can be divided into the following basic stages:
1) preparing the site to be worked;
2) de-watering it and preventing inflows of water to the site;
3) providing access (entry) to the deposit by the necessary permanent investment;
4) removal of overburden (stripping);
5) mineral excavation.
Stripping the overburden and mineral production include breaking rock or mineral,
transporting it and loading it.
Minerals can often be dug directly by earth-moving equipment while to break hard rocks it
is necessary to use explosives.
Modern methods of working opencasts involve the use of mechanical plants or hydraulic
mining. The basic units of a mechanical plant are excavators, car drills or other mounted drills, and
various handling mechanical equipment, whereas the basic units of hydraulic mining are monitors and
pumps such as sludge pumps or gravel pumps. Hydraulic mining can be used in soft or friable rocks.
Transport operations involve the removal of waste rock or mineral, the latter being
transported to coal washeries, ore concentration plants, to power stations, or to a railway station.
Waste rock is removed to a spoil heap or dump (tip) either outside the deposit or in an extracted
area; these being called external or internal dumps, respectively.
The transports used in opencasts are rail cars, large lorries, and conveyers. Sometimes the
overburden is stripped and dumped by excavators without other transport, in overcasting or side-casting.
Mineral is usually unloaded at specially equipped permanent stations. Waste rock is
dumped at various points which are moved as the work develops.
Summing up, mention should be made of the fact that the last decades have seen a marked
trend towards open-cast operations. Large near-surface (though usually low-grade) deposits offer
the possibility of achieving greater outputs. There can be little doubt that the cost per ton of ore
mined by underground methods is generally higher than that for open-cast mining. At the same
time it is necessary to say that although efforts are made to develop mine sites in harmony with
the environment, extraction methods produce some disturbances on the Earth’s surface which
reduce its economic value.
As has already been said, mining is a branch of industry which deals with the recovery of
valuable minerals from the interior of the Earth.
When minerals occur so that they can be worked at a profit, they are called ore deposits.
Economic minerals are those which are of economic importance and include both metallic (ore
minerals) and non-metallic minerals such as building materials (sand, stone, etc.).
In choosing the methods of working ore deposits one must take into consideration the
following main factors:
1. The shape of the deposit;
2. The dimensions of the deposit in thickness, along the strike and down the dip;
3. The type of ore and the distribution of metal in the ore body.
The shape of the ore deposit affects the mining method. Besides, the contact of the
deposit with the country rock is of importance.
According to their angle of dip the deposits are divided into gently sloping (up to 25°),
inclined (25-45°) and steep deposits (45-90°). The thickness of ore deposits also varies. They may
be very thin (from 0.7-0.8m to 20m) or extremely thick (more than 20m).
One must say that a rational method of mining ensures the following:
1. minimum cost of production;
2. minimum losses of ore;
3. a high rate of extraction.
In metal mining as well as in mining bedded deposits preliminary activities (before mining)
involve prospecting and exploration required to locate, characterize and prove a potential ore body.
After exploration has provided information on the shape and size of a deposit and its
general geological characteristics, site development for mining begins. Mine development depends
largely upon the kind of ore body and the mining method to be applied.
As a rule mine development work involves development drilling, access road construction;
clearing and grubbing; slope or shaft development; overburden removal, construction of facilities
such as concentration (dressing, processing) plants, etc. The different types of equipment required
range from small, simple units such as backhoes and dump trucks to earth-movers, draglines and power shovels.
Mining operations begin with excavation work (blasting or separating portions of rock from
the solid mass), loading, hauling and hoisting of the rock to the surface, and supporting mine workings.
Generally speaking, the working of an ore deposit involves opening up, development,
blocking out and stoping operations, the basic stoping methods in use now being open
stoping, room-and-pillar mining, shrinkage stoping, block caving and others. After ores are
mined or dredged, they are usually processed (crushed, concentrated or dried).
Extraction processes can be done by underground or open-cast mining. The main trend has
been toward low-cost open-cast mining.
A great deal of attention is given now to the improving of labor conditions and ensuring the
safety of miners. Russian scientists and engineers are working out highly mechanized, remotely
operated and automated mining enterprises. The only personnel employed at such enterprises will
be operators, dispatchers and specialists to control the machinery and equipment.
Surface Mining, its Nature and Significance
As is known, in the USA there are large mineral reserves suitable for open-pit mining.
These reserves are concentrated mostly in the eastern areas, with only a small percentage being
found in the western part of the country, including the Great Lakes region.
Surface mining consists of removing the overburden that lies above mineral or fuel deposits
to recover them. When compared with underground methods, surface mining offers distinct
advantages. It makes possible the recovery of deposits which for physical reasons cannot be
mined underground; provides safer working conditions; usually results in a more complete
recovery of the deposit; and, most significantly, it is generally cheaper in terms of cost per unit of ore mined.
The procedure for surface mining usually consists of the following steps: prospecting and
exploration (to discover and prove the ore body) and the actual mining or recovery phase.
Topography and the configuration of the deposit itself strongly influence both processes.
Exploration techniques generally employed consist of either drilling to intersect deeper-lying ore
bodies, or excavating shallow trenches or pits to expose the ore.
Rotary drilling is widely used for blasting holes for explosives. The type and quantity of
explosive are governed by the resistance of rock to breaking. Dynamite and ammonium nitrate
find wide application in open-pit mining.
Drills have tended to increase in size and there has been a movement toward larger
diameter holes and wider spacing. A crawler-mounted rotary blast-hole drill, for example, is capable
of producing holes up to 12 in. in diameter. Faster penetration is an obvious way to lower costs per foot drilled
and this may be obtained by automation to give optimum rotation speeds and pressure.
Regardless of the equipment used, the surface mining cycle usually consists of four stages:
1) site preparation, clearing vegetation and other obstructions from the area to be mined, and
constructing access roads and auxiliary installations including areas to be used for the disposal of
spoil or waste; 2) removal and disposal of overburden; 3) excavation and loading of ore; and 4)
transportation of the ore to a concentrator processing plant, a storage area, or directly to the customer.
It should be noted that excavators are the main types of machines used for stripping
overburden and excavating minerals. Two main types of excavators are in use: the single-bucket
type and the multi-bucket type. Multi-bucket excavators, which include the chain type (bucket-ladder
excavators) and the wheel type (bucket-wheel or rotary excavators), are widely used in open-cast mining.
Purpose and Meaning of Mine Surveying
A mining engineer should be well versed in the use of surveyor’s maps and other graphic
material, which is only possible if he or she properly understands the surveyor’s methods of
measurements, calculations and mapping. He should also be familiar with the methods of tackling
the problems arising in the course of construction or mining. Thus, when supervising the drivage
by approaching headings, the mining engineer should understand the method by which the
surveyor had set the direction of headings, and put the surveyor’s instructions into practice.
Mine surveying is an important subject for mining students, which has a direct bearing on
their future work. Mine surveying is closely linked with other subjects taken up by the mining
student, i.e. mathematics, geodesy, geology, descriptive geometry and mining. Mine surveying is
a branch of the mining science and engineering dealing essentially with linear and dimensional
measurements. This operation, known as mine surveying, is carried out for the purpose of:
1. graphic representation (plans or sections) of underground workings, the mode of
occurrence and geometric distribution of mineral properties, the surface above mineral bodies, and
existing structures and natural features on the surface;
2. solution of various problems in geometry brought about by the exploration, construction and operation of mines.
The study of processes involved in the strata and surface movement caused by mining
operations is likewise included in surveying. Measures for protection of structures are also the
responsibility of the mine surveyor. Surveys cover all phases of the mine development.
Coal and its Classification
Coal is the product of vegetable matter that has been formed by the action of decay,
weathering and the effects of pressure, temperature and time millions of years ago. Although coal
is not a true mineral, its formation processes are similar to those of sedimentary rocks.
Structurally coal beds are geological strata characterized by the same irregularities in
thickness, uniformity and continuity as other strata of sedimentary origin. Coal beds may consist
of essentially uniform continuous strata or like other sedimentary deposits may be made up of
different bands or benches of varying thickness.
The benches may be separated by thin layers of clay, shale, pyrite or other mineral matter,
commonly called partings.
Like other sedimentary rocks coal beds may be structurally disturbed by folding and
faulting. According to the amount of carbon coals are classified into: brown coals, bituminous
coals and anthracite. Brown coals are in their turn subdivided into lignite and common brown coals.
Although carbon is the most important element in coal, as many as 72 elements have been
found in some coal deposits, including lithium, chromium, cobalt, copper, nickel, tungsten and others.
Lignite is intermediate in properties between peat and bituminous coal, containing when
dry about 60 to 75 per cent of carbon and a variable proportion of ash. Lignite is a low-rank
brown-black coal containing 30 to 40 per cent moisture and is liable to spontaneous combustion.
It has been estimated that about 50 per cent of the world’s total coal reserves are lignite.
Brown coal is harder than lignite, containing from 60 to 65 per cent of carbon and
developing greater heat than lignite (4,000-7,000 calories). It is very combustible and gives a brown powder.
Bituminous coal is the most abundant variety, varying from medium to high rank. It is a
soft, black, usually banded coal. It gives a black powder and contains 75 to 90 per cent of carbon.
It weathers only slightly and may be kept in open piles with little danger of spontaneous
combustion if properly stored. Medium-to-low volatile bituminous coals may be of coking quality.
Coal is used intensively in blast furnaces for smelting iron ore. There are non-coking varieties of
coal. As for the thickness, the beds of this kind of coal are not very thick (1-1.5 meters).
Great quantities of bituminous coal are found in Russia.
Anthracite or “hard” coal has a brilliant luster, containing more than 90 per cent of carbon
and a low percentage of volatile matter. It is used primarily as a domestic fuel, although it can
sometimes be blended with bituminous grades of coal to produce a mixture with improved coking
qualities. The largest beds of anthracite are found in Russia, the USA and Great Britain.
Coal is still of great importance for the development of modern industry. It may be used for
domestic and industrial purposes. Being the main source of coke, coal is widely used in the iron
and steel industry. Lignite, for example, either in the raw state or in briquette form, is a source of
industrial carbon and industrial gases.
The Role of Coal in the National Economy
It is well-known that the growth of the country’s economic potential depends on its raw
material resources. Our country is rich in mineral resources. It has large deposits of oil, gas, coal,
ferrous and non-ferrous metals. The rational use of raw materials, fuel, energy and mineral
resources is of great importance for the whole of the economy.
One of the main conditions for solving major economic problems is the development of heavy
industry and of its basic branches, the fuel and power industries.
Coal continues to play an important role in the country’s national economy. Mongolian coal
reserves are large; most of them lie in the eastern areas, above all in Eastern Mongolia,
where there are fuel and power complexes.
It is important to note that complex development of the national economy is the characteristic
feature of Mongolian economic strategy. The Mongolian economy is relying more and more on Eastern
Mongolia. Nearly three-quarters of the mineral, fuel and energy resources, over half of the hydro
resources, a great part of the non-ferrous metals and about half of all timber resources are located in the east.
The Baganuur complex is also one of the country’s important fuel and power bases. It
helps the country meet its fuel and energy needs. Baganuur is really unique. Its deposits lie near
the surface and are thus available for surface (opencast) mining. Today its coal goes to electric
power plants in Ulaanbaatar, Darkhan and Erdenet. Baganuur provides enough coal to operate all
the thermal power stations in Mongolia for many years ahead.
The Tavan Tolgoi basin possesses excellent quality coking coals lying close to the surface. But to
start work there it is necessary to build machines specially adapted for the local conditions. This is
a basin of the future. Our present-day main (and oldest) coal-mining area remains the Nalaikh
Basin. As is known, its history began more than a hundred years ago but still much coal remains
underground. Many mines are more than one kilometer deep.
The World’s First Open–pit Copper Mine
Kennecott’s Bingham Canyon Mine is the world’s first open-pit copper mine, and today it
produces approximately 310,000 tons of refined copper annually plus significant quantities of
molybdenum (a metal used to strengthen steel), gold and silver. When Daniel Jackling’s Utah
Copper Company began hauling ore that contained only 2% copper from Bingham Canyon to
concentrators near the Great Salt Lake, few believed that a profit could be made mining such low-grade ore.
The companies founded by Daniel Jackling eventually became Kennecott Utah Copper. Kennecott’s
$650 million mining and concentrating modernization project, completed in 1992, includes in-pit
ore crushing and new grinding and flotation facilities north of the town of Copperton.
Transportation improvements include a five–mile ore conveyor system and the installation of
three pipelines to replace some of the existing rail haulage system. The project incorporates some
of the largest state–of–the–art crushing, conveying, grinding, flotation, and filtration equipment
available in the industry.
In addition, an $880 million smelting and refining modernization project was completed in
1995. The new smelter includes state-of-the-art flash smelting and flash converting, a double-
contact sulfuric acid plant, a hydrometallurgical plant and a cogeneration power plant. The Utah
Smelter is the cleanest in the world, recovering 99.9% of all sulfur dioxide emissions. The
Refinery has expanded and modernized electrolytic refining cells and features a new precious
metals plant. The Refinery produces 99.9% pure copper.
A $500 million tailings impoundment modernization was completed in 2000 that added
about 3000 acres on the north side of the existing site.
This will provide tailings storage capacity for the estimated future life of the Bingham Canyon Mine.
Modernizing has reduced the cost of producing copper, allowing Kennecott to compete as
one of the world’s lowest-cost copper producers.
Mining Engineering
Mining engineering is an engineering discipline that involves the practice, the theory, the science,
the technology, and application of extracting and processing minerals from a naturally occurring
environment. Mining engineering also includes processing minerals for additional value.
Mineral extraction and production is an essential activity of modern society. Mining
activities by their nature disturb the environment in and around which the
minerals are located. Modern mining engineers must therefore be concerned not only with the
production and processing of mineral commodities, but also with the mitigation of damage to
the environment as a result of that production and processing.
Mining engineers are consulted for virtually every stage of a mining operation. The first role of
engineering in mines is the discovery of a mineral deposit and the determination of the
profitability of a mine.
Mining engineers are involved in the mineral discovery stage by working with geologists to identify
a mineral reserve. The first step in discovering an ore body is to determine what minerals to test
for. The geologists and engineers drill core samples and conduct surface surveys searching for
specific compounds and ores.
The discovery can be made from research of mineral maps, academic geological reports or local,
state, and national geological reports. Other sources of information include property assays, well
drilling logs, and local word of mouth. Mineral research may also include satellite and airborne
photographs. Unless the mineral exploration is done on public property, the owners of the
property may play a significant role in the exploration process, and may be the original discoverer
of the mineral deposit.
After a prospective mineral is located, the mining engineer then determines the ore properties.
This may involve chemical analysis of the ore to determine the composition of the sample. Once
the mineral properties are identified, the next step is determining the quantity of the ore. This
involves determining the extent of the deposit as well as the purity of the ore. The engineer
drills additional core samples to find the limits of the deposit or seam and calculates the quantity
of valuable material present in the deposit.
Once the mineral identification and reserve amount is reasonably determined, the next step is to
determine the feasibility of recovering the mineral deposit. A preliminary study shortly after the
discovery of the deposit examines the market conditions such as the supply and demand of the
mineral, the amount of ore needed to be moved to recover a certain quantity of that mineral as
well as analysis of the cost associated with the operation.
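The feasibility question in the preceding paragraph ultimately compares revenue per tonne of ore against the cost of moving and processing that tonne. A minimal sketch, with every price and cost a hypothetical assumption:

```python
# Minimal break-even check for a prospective deposit.
# All figures are hypothetical assumptions, not market data.
grade = 0.015              # copper fraction in the ore (assumed)
recovery = 0.85            # fraction of copper actually recovered (assumed)
metal_price = 9000         # $ per tonne of refined copper (assumed)
cost_per_tonne_ore = 60    # $ to mine and process one tonne of ore (assumed)

revenue_per_tonne_ore = grade * recovery * metal_price
margin = revenue_per_tonne_ore - cost_per_tonne_ore
print(f"Revenue per tonne of ore: ${revenue_per_tonne_ore:.2f}, "
      f"margin: ${margin:.2f}")
print("Feasible" if margin > 0 else "Not feasible at these assumptions")
```

A real preliminary study would also account for capital costs, price volatility, and the tonnage that must be moved over the life of the mine, but the sign of this margin is the starting point.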
Mining engineers working in an established mine may work as an engineer for operations
improvement, further mineral exploration, and operation capitalization by determining where in
the mine to add equipment and personnel. The engineer may also work in supervision and
management, or as an equipment and mineral salesperson. In addition to engineering and
operations, the mining engineer may work as an environmental, health and safety manager.
Mining requires different methods of extraction depending on the mineralogy,
geology, and location of the resources. Characteristics such as mineral hardness, the mineral
stratification, and access to that mineral will determine the method of extraction.
Generally, mining is either done from the surface or underground. Mining can also occur
with both surface and underground operations taking place on the same reserve. Mining activity
varies as to what method is employed to remove the mineral.
Surface mining accounts for 90% of the world's mineral tonnage output. Also called open-pit
mining, surface mining removes minerals in formations that are at or near the surface. Ore
retrieval is done by material removal from the land in its natural state. Surface mining often alters
the land characteristics, shape, topography, and geological make-up.
Surface mining involves quarrying, which is excavating minerals by cutting, cleaving, and
breaking, often with the aid of machinery. Explosives are usually used to facilitate breakage. Hard
minerals such as limestone, sand, gravel, and slate are generally quarried into a series of benches.
In strip mining, softer minerals such as clays and phosphate are removed through the
use of mechanical shovels, track dozers, and front-end loaders. Softer coal seams can also be
extracted this way.
With placer mining, minerals can also be removed from the bottoms of lakes, rivers,
streams, and even the ocean by dredge mining. In addition, in-situ mining can be done from the
surface using dissolving agents on the ore body and retrieving the ore via pumping. The pumped
material is then set to leach for further processing. Hydraulic mining, or "hydraulicking", uses
high-pressure water jets to wash away either the overburden or the ore itself.
Explosives are used to break up a rock formation and aid in the collection of ore in
a process called blasting. There are two types of explosives that can be used in mining: high
velocity and low velocity. High velocity blasting uses high explosives while low velocity blasting is
done with low explosives. Engineers determine the placement of the explosive charges and the
blast sequence to loosen the maximum amount of ore efficiently and safely. They are also
responsible for the safety of the miners by determining how best to support the rock ceiling in the