SEMINAR IN COMPUTER EDUCATION
EDT 510 MAIE – EDUCATIONAL TECHNOLOGY
Technological University of the Philippines
OVERVIEW OF COMPUTER / INFORMATION TECHNOLOGY
In moving information onto a computer, people generally assume that the format and presentation have little effect on the information itself. That assumption is questionable, however, and every online educator should consider the effect that the electronic presentation of information has on students. Although many writers have investigated the computer as a learning medium, few have addressed the computer screen as a presentation medium in its own right.
WE LEARN WHAT WE SEE
Web pages created to teach something can and should be evaluated on their ability to teach. That ability
depends on how well several underlying and previously unassociated elements of educational
presentation are woven into page design.
Many authors have contributed to different areas of this discussion. Below, these concepts are grouped into four categories that together provide a balanced method for evaluating a learning-oriented web page:
1. Creation of a learning model of the subject
2. Communication of that model
3. Web readability
4. Usability
Each category addresses an aspect of how a web page organizes and presents content to learners. The
different aspects interact with each other in providing a balanced and usable presentation.
CREATION OF LEARNING MODELS considers how well the web page promotes the art and science of learning and thinking, and how people react to the content and presentation of information on the page.
COMMUNICATION OF THAT MODEL is not something that educators should take for granted. Effective communication involves general principles of good design and their application to the illustrations used on the page.
WEB READABILITY acknowledges that the computer presents us with the problem of how to define
literacy when traditional definitions no longer serve. The reader of a web page must be able to
understand the text, text links, the use of graphic hot spots, and other non‐traditional methods of
encoding information on the page.
USABILITY is often expressed in terms of software or hardware, but it also covers both general and
educationally specific usability of the learning system interface. How easily can a student perform
common learning‐related tasks? Included in this concept is LEARNABILITY, the ability to intuitively
comprehend the tools presented on the screen for using the system.
To date, these elements have been isolated in separate and unrelated discussions that would benefit
from a more holistic view of the learning environment. The goal of this article is to knit those four
previously independent categories into a single unified and usable knowledge base that exceeds the
sum of its parts. This effort is viewed as the first step in a conversation that can improve the way we
view, evaluate, and use computerized learning.
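To make the framework concrete, the short sketch below (written in Python for this handout; it is not from any of the cited authors) treats the four categories as a simple weighted rubric. The category names come from the list above; the weights, scores, and function names are invented placeholders.

# A toy rubric for scoring a learning-oriented web page on the four
# categories discussed above. Weights and scores are illustrative only.

CATEGORIES = [
    "Creation of a learning model",
    "Communication of that model",
    "Web readability",
    "Usability",
]

def evaluate_page(scores, weights=None):
    """Combine per-category scores (0-10) into one overall rating."""
    weights = weights or {c: 1.0 for c in CATEGORIES}
    total = sum(weights[c] for c in CATEGORIES)
    return sum(scores[c] * weights[c] for c in CATEGORIES) / total

# Hypothetical scores a reviewer might assign to one course page.
scores = {
    "Creation of a learning model": 8,
    "Communication of that model": 6,
    "Web readability": 9,
    "Usability": 7,
}
print(f"Overall rating: {evaluate_page(scores):.1f} / 10")  # 7.5 / 10

A reviewer could, for example, weight Usability more heavily for novice audiences simply by raising its weight; the point is only that the four categories can be scored and combined, not that these particular numbers mean anything.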
It is possible to group or categorise information in many ways. Think about the course that you are
currently studying and the different types of course material that you have come across so far.
These may include one or more of the following:
• books (e.g. recommended readings, textbooks)
• online information (e.g. course web pages)
All of these are different types of information. We have chosen to group or categorise them by their
physical form (sometimes called format).
Generic formats are useful for all subject areas. The list below shows some commonly used generic
information formats found in the Library.
• Books (print and electronic)
Publications containing well‐established information.
• Journals (print and online)
Publications issued at regular intervals, usually monthly or quarterly.
• Newspapers (print and electronic)
Publications usually published daily or weekly, containing news, articles, images, etc.
• Online databases
Searchable collections of references. There are two main types: bibliographic databases (references only) and full-text databases.
• Government/official publications
Publications issued by the government and its departments.
• Conference proceedings
Papers presented at a conference, written up and put together as a single work.
(Display Material – LCD – DLP)
Evan Powell, July 28, 2009
If you are new to the world of digital projectors, you won't have to shop around long before discovering
that the terms LCD and DLP refer to two different kinds of projectors. They are in fact two different
kinds of microdisplay imaging technology. You might not even know what LCD and DLP are before you find yourself asking the obvious question: "which one is better?"
The answer is simple: neither one is better than the other. Each has advantages over the other, and each has limitations. Both technologies are much better than they used to be. The purpose of
this article is to discuss how they differ today, so you can determine whether the imaging technology
itself is a relevant factor in your choice of a projector.
It is important to note there is a third significant light engine technology called LCoS (liquid crystal on
silicon). It is developed and marketed by several vendors, most notably Canon, JVC, and Sony. Many
excellent projectors have been made with LCoS technology, including several outstanding home theater
projectors that can, in the opinion of many observers, surpass the value proposition of both LCD and
DLP offerings. The discussion of LCoS technology is beyond the scope of this article, and will be
addressed separately in an upcoming article.
The Technical Differences between 3LCD and DLP
LCD (liquid crystal display) projectors contain three separate LCD glass panels, one each for the red,
green, and blue components of the video signal. Each LCD panel contains thousands (or millions) of
liquid crystals that can be aligned in either open, closed, or partially closed positions to allow light to
pass through. Each liquid crystal behaves in essence like a shutter or blind, and each represents a single
pixel ("picture element"). As red, green, and blue light passes through the respective LCD panels, the
liquid crystals open and close based on how much of each color is needed for that pixel at that moment
in time. This activity modulates the light and produces the image that is projected onto the screen.
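As a rough illustration of the modulation just described, the short Python sketch below (invented for this handout, not vendor code) models one pixel: each of the three panels passes a fraction of its colored beam in proportion to the signal for that channel.

# Toy model of 3LCD light modulation for a single pixel. Each panel acts
# like a shutter, transmitting a fraction of its colored beam that is
# proportional to the 0-255 signal for that channel.

def lcd_pixel(rgb_signal, lamp_intensity=1.0):
    """Return the light output (r, g, b) for one pixel on the screen."""
    return tuple(lamp_intensity * channel / 255.0 for channel in rgb_signal)

# A bright orange pixel: red panel fully open, green half open, blue
# almost closed.
print(lcd_pixel((255, 128, 10)))  # approximately (1.0, 0.50, 0.04)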
DLP ("Digital Light Processing") is a proprietary technology developed by Texas Instruments. It works
quite differently than LCD. Instead of having glass panels through which light is passed, the DLP chip is a
reflective surface made up of thousands (or millions) of tiny mirrors. Each mirror represents a single pixel.
In a DLP projector, light from the projector's lamp is directed onto the surface of the DLP chip. The
mirrors tilt back and forth, directing light either into the lens path to turn the pixel on, or away from the
lens path to turn it off.
In the most expensive DLP projectors, there are three separate DLP chips, one each for the red, green,
and blue channels. However, in most DLP projectors under $10,000 there is only one chip. To define
color, a color wheel is used that contains (at minimum) a red, green, and blue filter. This wheel spins in
the light path between the lamp and the DLP chip and alternates the color of the light hitting the chip
from red to green to blue. The mirrors tilt away from or into the lens path based upon how much of
each color is required for each pixel at any given moment in time. This activity modulates the light and
produces the image that is projected onto the screen.
(Note: In addition to red, green, and blue filters, most color wheels contain other segments as well. A
"white" or clear filter used to boost brightness is common in business/commercial projectors, and many
color wheels have filters for colors other than the primaries, such as dark green, cyan, magenta, or yellow.)
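The single-chip arrangement can also be illustrated with a short sketch. The Python below (again invented for this handout, not Texas Instruments code) models how one mirror builds a color over one wheel revolution: during each color field the mirror spends some fraction of the time tilted into the lens path, and the viewer's eye integrates the rapid sequence of colored flashes into a single perceived color.

# Toy model of single-chip DLP color for one pixel over one wheel
# revolution. The wheel presents red, green, and blue in sequence; the
# mirror's "duty cycle" is the fraction of each field it spends
# reflecting light into the lens rather than away from it.

WHEEL = ("red", "green", "blue")  # the minimum set of filters per the text

def dlp_pixel(duty_cycles, lamp_intensity=1.0):
    """Return the perceived (time-averaged) output for each color."""
    return {color: lamp_intensity * duty_cycles[color] for color in WHEEL}

# The same orange pixel as in the LCD sketch, built by time-multiplexing
# a single mirror instead of using three panels.
print(dlp_pixel({"red": 1.0, "green": 0.5, "blue": 0.04}))

Comparing the two sketches shows the essential difference: 3LCD modulates all three colors simultaneously in space, while single-chip DLP modulates them sequentially in time.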
The Advantages of DLP
We will look at the advantages and limitations of both DLP and LCD in turn. The most important
advantages of DLP technology include the following:
• Sealed imaging chip
• Filter free
• No convergence problems
• Contrast advantages
• No image persistence
• No degradation of image quality over time
• Somewhat less pixelation/screen-door effect on low-resolution products
• DLP leads in miniaturization
Weaknesses and limitations of DLP
• Color wheels can produce rainbow artifacts
• Color saturation/color brightness
• Dithering artifacts
• Restricted compatibility with zoom lenses and lens shift
The Advantages of 3LCD
• Better price/performance in HT (home theatre) products
• Higher contrast in HT products
• Fewer artifacts/greater image stability
• Sharper image with data display
• Greater installation flexibility in HT products
• Better light efficiency, less power usage
Weaknesses and Limitations of LCD
• Unknown lifespan of LCD panels
• Lower contrast ratings in business products
• Susceptible to dust spots
The fight for market share between 3LCD and DLP continues at a fever pitch. It is a fascinating thing to
watch as vendors of both technologies continue to innovate to stay a step ahead of the competition.
Picture quality in digital projectors has improved dramatically over the past decade with significant
increases in contrast, resolution, and color performance. Prices have dropped like a rock, and high-quality projection systems that were once within the financial reach of only wealthy consumers or businesses that really needed them are now within the budget of the mass market. Thus the consumer is the ultimate
beneficiary of the intense competitive struggle between the DLP and 3LCD technologies.
As we’ve tried to make clear in this article, both DLP and LCD have key advantages over the other. They
also both have limitations that the buyer should be aware of. But in the end, we see better image quality
performance today from both LCD and DLP than we’ve ever seen in the past. And it just keeps getting better.
The history of computers is often described in terms of generations. Each generation is marked by a distinctive technological advance that brought lower costs, smaller size, increased reliability, and greater ease of use.
First Generation ‐ 1940‐1956: Vacuum Tubes
First generation computers used vacuum tubes for circuitry and magnetic drums for memory, and were
often enormous, taking up entire rooms. They relied on machine language to perform operations, and
they could only solve one problem at a time. Input was based on punched cards and paper tape.
Modern computing can probably be traced back to 1943 and the creation of the 'Harvard Mk I' and
Colossus electronic computers. Colossus was built in Britain at the end of 1943 and was designed to
crack German military codes. The 'Harvard Mk I' was a more general machine built at Harvard University
with backing from IBM.
The 'ENIAC' (Electronic Numerical Integrator and Computer), completed in 1946, is an example of a first-generation computer from this period. It weighed in at a staggering 30 tonnes, contained 18,000 vacuum tubes and 1,500 relays, and consumed around 150 kilowatts of power. It was, however, capable of an amazing 5,000 additions a second.
Second Generation ‐ 1956‐1963: The Transistor Revolution
Transistors replaced vacuum tubes and ushered in an exciting new era of computer development. Even
though the transistor was invented in 1947 it really did not see widespread use in computers until the
late 1950s. The transistor was far superior to the vacuum tube, allowing computers to become smaller,
faster, cheaper, more energy-efficient, and more reliable than first-generation models. A great deal of heat was still generated, which subjected systems to failures, but the transistor was a vast improvement over the tube. Second-generation computers still relied on punched cards for input and printouts for output.
Some of the first computers of this generation were developed for the atomic energy industry.
Second‐generation computers moved from cryptic binary machine language to symbolic, or assembly,
languages, which allowed programmers to specify instructions in words. High‐level programming
languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These
were also the first computers that stored their instructions in memory, which moved from a magnetic
drum to magnetic core technology.
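The shift from machine language to assembly and on to high-level languages can be seen in a small example. The sketch below (in Python, using an instruction set invented for this handout, not any real hardware) assembles symbolic instructions into numeric opcodes and runs them on a one-register accumulator machine; the final line computes the same sum the way a high-level language would.

# Toy illustration of the abstraction levels described above. The opcodes
# and instruction set are invented for this example.

OPCODES = {"LOAD": 1, "ADD": 2}  # hypothetical two-instruction machine

def assemble(program):
    """Translate symbolic assembly into numeric machine code."""
    return [(OPCODES[op], arg) for op, arg in program]

def run(machine_code):
    """Execute machine code on a one-register (accumulator) machine."""
    acc = 0
    for opcode, arg in machine_code:
        if opcode == OPCODES["LOAD"]:
            acc = arg
        elif opcode == OPCODES["ADD"]:
            acc += arg
    return acc

assembly = [("LOAD", 40), ("ADD", 2)]  # symbolic: readable by a programmer
machine = assemble(assembly)           # numeric: [(1, 40), (2, 2)]
print(run(machine))                    # 42
print(40 + 2)                          # the same sum in a high-level language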
Third Generation ‐ 1964‐1971: Integrated Circuits
Development of the IC (integrated circuit) was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, known as semiconductors, which drastically increased the speed and efficiency of computers.
Instead of punched cards and printouts, users interacted through peripherals such as keyboards and
monitors and interfaced with an operating system, which allowed the device to run many different
applications at one time. Computers for the first time became accessible to a larger audience because
they were smaller and lower cost than their predecessors.
Fourth Generation ‐ 1971‐Present: Microprocessor based
The microprocessor is the king of fourth generation computers, as thousands of integrated circuits were
built onto a single silicon chip. The Intel 4004 chip, developed in 1971, located all the components of the
computer ‐ from the central processing unit and memory to input/output controls ‐ on a single chip. It
was this new microprocessor that led the way for modern-day computer technology. Thirty years on,
processing power and storage capacities have increased beyond all recognition and microchips appear in
everything from telephones to toasters.
In 1981 IBM introduced its first personal computer for the home user, and in 1984 Apple introduced the
Macintosh. As microprocessor-based technology developed, computers became fully integrated into our lifestyles. Fourth-generation computers also saw the development of GUIs, the mouse, and handheld devices.
Fifth Generation ‐ Present and Beyond: Artificial Intelligence
Fifth generation computing devices, based on AI (Artificial Intelligence), are still in development, though
there are some applications, such as voice recognition, that are being used today. The use of parallel
processing and superconductors is helping to make artificial intelligence a reality. Quantum
computation, molecular and nanotechnology will radically change the face of computers in years to
come. The aspiration of fifth‐generation computing is to develop devices that respond to natural
language input and are capable of learning and self-organization. For the most part, the story of the fifth generation of computers is yet to be written. We are fortunate to be living through a very exciting time in the
midst of a technological revolution.
Artificial Intelligence (AI) is the area of computer science focusing on creating machines that
can engage in behaviors that humans consider intelligent. The ability to create intelligent
machines has intrigued humans since ancient times and today with the advent of the computer
and 50 years of research into AI programming techniques, the dream of smart machines is
becoming a reality. Researchers are creating systems that can mimic human thought, understand speech, beat the best human chess player, and perform countless other feats never before possible. Find out how the military is applying AI logic to its high-tech systems, and how in the
near future Artificial Intelligence may impact our lives.
Prepared and reported by:
Ms. Joey Bangayan, Mr. Eugene Agulto and Mr. Aris Santos