All product and company names mentioned herein are for identification and educational purposes only and are the property of, and may be trademarks of, their respective owners.
Up-to-date course material covering the UNIX and Linux operating systems, intended for graduate students who aspire to gain meaningful employment in the corporate world.
Table of Contents
History of Core Technology
Magnetic Core Memory
Who Invented Core Memory?
What Is Core-i Technology?
Single Core, Dual Core, Quad Core, and Octa Core
Advantages
Current Lineup of Core Processors
Q2: Computer Transformation
Q3: Vendors of Technology Hardware
Q4: Counter Argument
Will This Be a Hard Sell? Why or Why Not?
Q5: Specialized Organizations in Creating Customized Software Applications for Clients
Over the years, Linux has been adopted as the operating system in many smart electronic devices, from mobile phones and refrigerators to cow-milking machines, in addition to desktop and server computers. Although Linux is open source, it is often difficult for a beginner to get started. This presentation introduces embedded systems and embedded Linux through examples, animations, and short explanations. It is aimed at absolute beginners, at embedded software engineers who want to join the exciting world of Linux, and at enthusiasts or managers who simply want an overview of embedded Linux.
Case 9 Bulimia Nervosa in Gorenstein and Comer (2014)Rita was a.docxannandleola
Case 9: Bulimia Nervosa in Gorenstein and Comer (2014)
Rita was a 26-year-old manager of a local Italian restaurant and lived in the same city as her parents. Her childhood was not a happy one. Her parents divorced when she was about 5 years of age. She and her three older brothers remained with their mother, who often seemed overwhelmed with her situation and unable to run the household effectively. Rita would often refer to her childhood as utterly chaotic, as if no one were in charge. Within a 12-month period, 1 percent to 1.5 percent of individuals will meet the diagnostic criteria for bulimia nervosa; at least 90 percent of cases occur in females (APA, 2013). She nevertheless muddled through. When her brothers were finally all off to college or beyond, Rita entered high school, and the household seemed more manageable. Ultimately, she developed a close relationship with her mother, indeed too close, Rita suspected. Her mother seemed like her closest friend, at times the entire focus of her social life. They were both women alone, so to speak, and relied heavily on one another for comfort and support, preventing Rita from developing serious friendships. The two often went shopping together. Rita would give her mother an update on the most recent fashion trends, and her mother would talk to Rita about “how important it is to look good and be put together in this day and age.” Rita didn’t mind the advice, but sometimes she did wonder if her mother kept saying that as a way of telling her that she didn’t think she looked good. Rita later attended a local public college, majoring in business. However, she quit after 3 years to take a job at the restaurant. She had begun working in the restaurant part-time while a sophomore and after 2 years was offered the position of daytime manager. It was a well-paying job, and since her interest was business anyway, Rita figured it made sense to seize an attractive business opportunity. 
Her mother was not very supportive of her decision to leave college, but Rita reassured her that she intended to go back and finish up after she had worked for a while and saved some money. Just before leaving college, Rita began a serious relationship with a man whom she met at school. Their interest in each other grew, and they eventually got engaged. Everything seemed to be going well when out of the blue, her fiancé’s mental state began to deteriorate. Ultimately he manifested a pattern of schizophrenia and had to be hospitalized. As his impairment extended from days to months and then to more than a year, Rita finally had to end the engagement; she had to pick up the pieces and go on without him. She felt as if he had died. A period of psychotherapy helped ease her grief and her adjustment following this tragedy, and eventually she was able to move on with her life and to resume dating again. However, serious relationships eluded her. Rita knew that she was a moody person—she judged people harshly and displayed irrita.
Case 8.1 Pros and Cons of Balkan Intervention59Must the a.docxannandleola
Case 8.1 Pros and Cons of Balkan Intervention59
“Must the agony of Bosnia-Herzegovina be regarded, with whatever regrets, as somebody else’s trouble?
We don’t think so, but the arguments on behalf of that view deserve an answer. Among them are the
following:
The Balkan conflict is a civil war and unlikely to spread beyond the borders of the former
Yugoslavia. Wrong. Belgrade has missiles trained on Vienna. Tito’s Yugoslavia claimed, by way of
Macedonia, that northern Greece as far south as Thessaloniki belonged under its sovereignty. Those
claims may return. ‘Civil’ war pitting non-Slavic Albanians against Serbs could spread to Albania,
Turkey, Bulgaria, and Greece.
The United States has no strategic interest in the Balkans. Wrong. No peace, no peace dividend.
Unless the West can impose the view that ethnic purity can no longer be the basis for national
sovereignty, then endless national wars will replace the Cold War. This threat has appeared in
genocidal form in Bosnia. If it cannot be contained here, it will erupt elsewhere, and the Clinton
administration’s domestic agenda will be an early casualty.
If the West intervenes on behalf of the Bosnians, the Russians will do so on behalf of the Serbs, and
the Cold War will be reborn. Wrong. The Russians have more to fear from ‘ethnic cleansing’ than
any people on Earth. Nothing would reassure them better than a new, post-Cold War Western
policy of massive, early response against the persecution of national minorities, including the
Russian minorities found in every post-Soviet republic. The Russian right may favor the Serbs, but
Russian self-interest lies elsewhere.
The Serbs also have their grievances. Wrong. They do, but their way of responding to these
grievances, according to the State Department’s annual human rights report, issued this past week,
‘dwarfs anything seen in Europe since Nazi times.’ Via the Genocide Convention, armed
intervention is legal as well as justified.
The UN peace plan is the only alternative. Wrong. Incredibly, the plan proposes the reorganization
of Bosnia-Herzegovina followed by a cease-fire. A better first step would be a UN declaration that
any nation or ethnic group proceeding to statehood on the principle of ethnic purity is an outlaw
state and will be treated as such. As now drafted, the UN peace plan, with a map of provinces that
not one party to the conflict accepts, is really a plan for continued ‘ethnic cleansing.’”
Case 8.2 Images, Arguments, and the Second Persian Gulf Crisis, 1990–
1991
The analysis of policy arguments can be employed to investigate the ways that policymakers represent or
structure problems (Chapter 3). We can thereby identify the images, or problem representations, that
shape processes of making and justifying decisions. For example, during times of crisis, the images which
United States policymakers have of another country affect deliberations about the use of peacekeeping
and negotiation, the imposition of economic sanctions, o.
Case 6-2 Not Getting Face Time at Facebook—and Getting the Last La.docxannandleola
Case 6-2 Not Getting Face Time at Facebook—and Getting the Last Laugh!
In August 2009, Facebook turned down job applicant Brian Acton, an experienced engineer who had previously worked at Yahoo and Apple. More than 4 years later, Facebook paid him $3 billion to acquire his 20% stake of WhatsApp, a start-up he had cofounded immediately after Facebook rejected his job application.(1) WhatsApp Messenger is a proprietary, cross-platform, instant-messaging subscription service for smartphones and selected feature phones that use the Internet for communication. In addition to text messaging, users can send each other images, video, and audio media messages, as well as their location using integrated mapping features.(2) How could Facebook, a highly successful firm, have made such a drastic mistake?
Back in 2009, Brian Acton was a software engineer who was out of work for what seemed like a very long time. He believed he had what it took to make a difference in the industry, but his career did not work out as planned. Even though he spent years at Apple and Yahoo, he got rejected many times by Twitter and Facebook.(3) Acton described the details of the interview process that he failed to do well in as follows:
First of all, interviewing a person for a job that requires technical skills is difficult for both the interviewer and the interviewee. Facebook is a highly desirable firm to work for and requires the best skills and talents from all of their potential employees. It is therefore not surprising that the selection process rivals, if not tops, any company in the industry. The process starts with an email or a phone call from a recruiter in response to an online application or [to] a recommendation from a friend who may work for Facebook. Sometimes, in the initial chat online, timed software coding challenges are set to find the best performers. If this chat goes well, an applicant will go on to the next level—an initial in-person interview or phone screening.(4)
In this next hurdle, the applicant will have a 45-minute chat with a fellow engineer/potential coworker, [with] whom he or she shares the same area of expertise. They will tell you about their job and what their role is in Facebook; then they ask about the applicant’s résumé, motivation, and interests. Additionally, the applicant will be tested about his or her technical skills, coding exercises, and programming abilities.(5)
If successful, the applicant will be invited for back-to-back interviews. This part of the process is very grueling and stressful since all the interviews take place throughout a single day. The candidate will also be asked to manually write a program on a whiteboard to make sure that the applicant is knowledgeable about program writing. The goal in this final step is to see how one approaches a problem and comes up with a solution [that] is simple enough to solve in 10–30 minutes and can be easily explained.(6)
As a potential coworker, the applicant will be te.
Case 6.4 The Case of the Poorly Performing SalespersonEd Markham.docxannandleola
Case 6.4 The Case of the Poorly Performing Salesperson
Ed Markham, the African American sales manager at WCTV, is considering how to handle a problem with one of his salespersons, Jane Folsom, who is White. Ed was promoted to sales manager three months ago after working at WCTV for 2 years. He earned his promotion by exceeding sales goals every month after his first on the job. He developed a research report using secondary data like MRI and the Lifestyle Market Analyst to analyze the market. His former boss praised the report, gave a copy to all salespersons, and included a summary of it in the rate card. When his former boss left for a new job in a larger market, he recommended Ed as his replacement.
Jane has been a salesperson at WCTV for 2 years. For most of that time, she has exceeded sales quotas about as much as Ed had. For the past 3 months, she has not met sales quotas. After his second month as sales manager, Ed talked to Jane about her performance. She attributed her below-average performance to the closing of a major advertiser, Anthony’s Fashions. This local clothing store closed because several major retailers, including JC Penney and Dillard’s, had opened at the local mall.
Ed listened to Jane’s explanation and then suggested ways to obtain new clients. He asked Jane whether she had set personal sales goals, set up a prospect file of new and inactive advertisers as well as existing businesses that were potential clients, come up with research and data on the market to use in presentations and reports to clients, come up with new ideas or opportunities to advertise for clients, or asked her clients about their needs and goals (Shaver, 1995). Jane said no, she simply telephoned or visited her clients regularly to see if they wanted to run ads.
Ed also asked Jane why several of her clients had not paid their bills. He explained that a salesperson must check out a client’s ability to pay before running a schedule. Jane replied that she was not aware of that fact and that no one had ever trained her to sell. She had sold time for a radio station before, but that was all the training she had. Ed’s predecessor had just hired her and cut her loose.
Ed gave Jane a memo after their first meeting a month ago asking her to focus on sales training for the next month. First, she should read Shaver’s (1995) Making the Sale! How to Sell Media With Marketing. He gave her a copy, told her to read it, and asked her to contact him if she had any questions. After reading the book, he told her that she should establish written personal sales goals, begin to develop a prospect file (with two new and two inactive clients), and develop three ideas for new advertising opportunities for existing clients. In the memo, Ed told Jane that he would not hold her to sales performance standards that month. He wanted Jane to focus on doing the background work he assigned to help her improve her future sales performance.
At the meeting a month later, Ed discovered.
Case 5.6Kelo v City of New London545 U.S. 469 (2005)Ye.docxannandleola
Case 5.6
Kelo v City of New London
545 U.S. 469 (2005)
Yes, Actually, They Can Take That Away From You
Facts
In 1978, the city of New London, Connecticut, undertook a redevelopment plan for purposes of creating a redeveloped area in and around the existing park at Fort Trumbull. The plan sought to develop the related ambience a state park should have, including the absence of pink cottages and other architecturally eclectic homes. Part of the redevelopment plan was the city’s deal with Pfizer Corporation for the location of its research facility in the area. The preface to the city’s development plan included the following statement of goals and purpose:
To create a development that would complement the facility that Pfizer was planning to build, create jobs, increase tax and other revenues, encourage public access to and use of the city’s waterfront, and eventually “build momentum” for the revitalization of the rest of the city, including its downtown area.
The affected property owners, including Susette Kelo, live in homes and cottages (15 total) located in and around other existing structures that would be permitted to stay in the area designated for the proposed new structures (under the city’s economic development plan) that would be placed there primarily by private land developers and corporations. The city was assisted by a private, nonprofit corporation, the New London Development Corporation (NLDC), in the development of the economic plan and piloting it through the various governmental processes, including that of city council approval. The central focus of the plan was getting Pfizer to the Fort Trumbull area (where the homeowners and their properties were located) with the hope of a resulting economic boost that such a major corporate employer can bring to an area.
Kelo and the other landowners whose homes would be razed to make room for Pfizer and the accompanying and resulting economic development plan filed suit challenging New London’s legal authority to take their homes. The trial court issued an injunction preventing New London from taking certain of the properties but allowing others to be taken. The appellate court found for New London on all the claims, and the landowners (petitioners) appealed.
Judicial Opinion
STEVENS, Justice Two polar propositions are perfectly clear. On the one hand, it has long been accepted that the sovereign may not take the property of A for the sole purpose of transferring it to another private party B, even though A is paid just compensation. On the other hand, it is equally clear that a State may transfer property from one private party to another if future “use by the public” is the purpose of the taking; the condemnation of land for a railroad with common-carrier duties is a familiar example. Neither of these propositions, however, determines the disposition of this case.
The disposition of this case therefore turns on the question whether the City’s development plan serves a “public purpos.
CASE 5.10 FIBREBOARD PAPER PRODUCTS CORP. V. NLRB SUPREME COURT OF.docxannandleola
CASE 5.10 FIBREBOARD PAPER PRODUCTS CORP. V. NLRB SUPREME COURT OF THE UNITED STATES, 379 U.S. 203 (1964).
[After receiving union proposals for contract revisions for the benefit of the maintenance workers at the company’s Emeryville, California, plant, the company advised the union that negotiations for a new contract would be pointless because it had definitely decided to contract out the work performed by the employees covered by the agreement upon the expiration of the agreement. The company planned to replace these employees with an independent contractor’s employees and expected that substantial savings would be effected by this contracting-out of the work. The Board ordered the company to reinstate the maintenance operation with the union employees, reinstate the employees with back pay, and fulfill its statutory bargaining obligation. The court of appeals granted the Board’s enforcement petition, and the Supreme Court agreed to hear the case.]
WARREN, C. J.... I. Section 8(a)(5) of the National Labor Relations Act provides that it shall be an unfair labor practice for an employer “to refuse to bargain collectively with the representatives of his employees.” Collective bar- gaining is defined in Section 8(d)
as the performance of the mutual obligation of the employer and the representative of the employees to meet at reasonable times and confer in good faith with respect to wages, hours, and other terms and conditions of employment.
“Read together, these provisions establish the obligation of the employer and the representative of its employees to bargain with each other in good faith with respect to ‘wages, hours, and other terms and conditions of employment....’ The duty is limited to those subjects, and within that area neither is legally obligated to yield. Labor Board v. American Ins. Co., 343 U.S. 395. As to other matters, however, each party is free to bargain or not to bargain....” Labor Board v. Wooster Div. of Borg-Warner Corp., 356 U.S. 342, 349. Because of the limited grant of certiorari, we are concerned here only with whether the subject upon which the employer allegedly refused to bargain— contracting out of plant maintenance work previously performed by employees in the bargaining unit, which the employees were capable of continuing to perform—is covered by the phrase “terms and conditions of employment” within the meaning of Section 8(d).
The subject matter of the present dispute is well within the literal meaning of the phrase “terms and conditions of employment.”
As the Court of Appeals pointed out, it is not necessary that it be likely or probable that the union will yield or supply a feasible solution but rather that the union be afforded an opportunity to meet management’s legitimate complaints that its maintenance was unduly costly.
We are thus not expanding the scope of mandatory bargaining to hold, as we do now, that the type of “contracting out” involved in this case—the replacement of employees in the exi.
Case 4:
The McDonald’s China Food Supplier Scandal
1. What were the root causes for Husi’s misbehavior?
2. What are the major challenges faced by the multinationals such as McDonald’s in supply chain management in China?
3. Should McDonald’s be held responsible for the scandal? How could McDonald’s avoid similar situations from happening again?
4. Should OSI be held responsible for the scandal? What should OSI do to prevent similar situations from happening again?
Case 3 Neesha Wilson Phoenix Rising Risks, Protective Factors, and Psychological Well-Being
Neesha Wilson, a 10-year-old African American girl, was referred for assessment to the school psychologist as a result of a child study team meeting held at the school in May. Presenting problems included poor school progress and escalating behavioral concerns. It was the school’s impression that Neesha might qualify for special education assistance as a child with an emotional disorder. Neesha has an older brother, Tyrone, who attends an alternate school program for children and youth with severe emotional disturbance.
Developmental History/Family Background
The school social worker completed Neesha’s initial work-up just prior to the end of the academic term; intake information is summarized as follows:
Neesha lives with her 15-year-old brother, Tyrone, and her mother in a two-bedroom apartment. The social worker described the apartment as tiny but very well kept. Neesha has her own bedroom, and Tyrone sleeps on the couch, which folds out into a bed. The social worker noted that it was difficult to book an appointment with Mrs. Wilson, who was reportedly working two jobs: cleaning offices and working as a hairstylist. Mrs. Wilson graduated from hairstylist classes last year. Although her career as a hairstylist has a lot of potential, she is only beginning to develop clientele. She also works part time cleaning offices. Despite the lack of financial resources, the children were clean, well dressed, and did not miss any meals. The children were on the free-lunch program at the schools. According to Mrs. Wilson, Neesha’s early history was unremarkable and motor and language milestones developed on schedule.
An immediate concern of the social worker’s centered on who cared for the children when their mother, Tanya, had to work evenings cleaning offices. Tanya stated that it was not a problem for her because she would either send the children to her sister’s apartment a few blocks away, or have a cousin who lived in the building check in on the kids. Also, Tyrone was 15, so he was capable of watching his sister, although she preferred to have an adult nearby, given Tyrone’s behavior problems.
Neesha’s mother described her as an easy baby and said that she never really had any problems with her. She added that it was Tyrone who was giving her all the problems, not Neesha. The family had struggled since her husband, Walt, left the family about 3 years ago, when Neesha was in Grade 1. In the past two years, Walt has had virtually no contact with the children. He moved in with his girlfriend and their one-year-old baby and recently moved to another state. Neesha was very upset with the marriage breakdown and misses her father very much. Neesha visited with her dad and his new family, initially, but was very disappointed that the visits were neither consistent nor more frequent. Neesha did not like Walt’s girlfriend and felt that her father wa.
Case 48: Sun Microsystems
Done by: Nour Abdulaziz
Maryam Barifah
Shrouq Al-Jaadi
Balqees Mekhalfi
Yara El-Feki
Introduction
•In 2009, Oracle was planning to acquire Sun Microsystems.
•This acquisition would allow Oracle:
•to further diversify its brand and customer base and to add new platforms, such as MySQL, Solaris, and Java, to its portfolio.
•Oracle originally offered $9.50 per share, considerably higher than Sun Microsystems’ share price of $6.69.
•This would cut production costs and make the company more efficient across the entire value chain.
•Oracle aimed to capitalize on Sun Microsystem’s decline by getting particular assets or the whole company at the deflated price.
Is Sun Microsystems a good strategic fit for Oracle? Should Oracle acquire Sun Microsystems?
- It will allow Oracle to achieve its vision of becoming the Apple of the software industry.
- It will allow the company to deliver high-quality customer products by combining hardware and software components, thereby simplifying the consumer setup process.
Continued
- It will provide Oracle with the needed expansion.
- This acquisition fits Oracle’s overall strategy, which is to improve through acquiring and effectively integrating other companies.
Worth of Sun Microsystems and Valuation Approaches
To determine how much Sun Microsystems is worth, we must first find the company’s Stand-Alone Value.
The Stand-Alone Value represents the present value of Sun Microsystems on its own, before factoring in the synergy that would be created when Oracle acquires Sun.
The second method is to value Sun Microsystems with synergies, that is, after being acquired by Oracle. This is done to see whether the acquisition was a sound strategic decision.
A third method of valuing Sun Microsystems is comparable company analysis (CCA), a thorough assessment of rival and peer businesses of similar size in the same industry.
Finally, the acquisition price (the price paid for the target when it is first acquired) can also be used as a separate method of valuation. The acquisition price typically falls between the stand-alone value and the value with synergies.
USING THE DCF
To find both the Stand-Alone Value and the value with synergies, we calculated the discounted cash flow (DCF) using both the multiples method and the perpetuity growth method, then averaged the two.
DCF Using Multiples Method:
- It does not consider the long-term growth rate or the economics of the business.
- It is considered a challenging method to use, as it is very difficult to identify truly comparable companies.
DCF Using Perpetuity Growth Method:
- This method seems inaccurate, as it assumes a certain growth rate will remain the same from 2014 onward (forever), which is unrealistic.
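The averaging of the two DCF variants described above can be sketched numerically. This is an illustrative sketch only; the cash flows, discount rate, growth rate, and exit multiple below are placeholder assumptions, not figures from the case:

```python
# Hedged sketch of the two DCF terminal-value methods described above.
# All inputs are illustrative placeholders, not Sun Microsystems' actual figures.

def dcf_value(fcfs, wacc, terminal_value):
    """Present value of annual free cash flows plus a terminal value
    received at the end of the final forecast year."""
    pv_fcfs = sum(fcf / (1 + wacc) ** t for t, fcf in enumerate(fcfs, start=1))
    pv_terminal = terminal_value / (1 + wacc) ** len(fcfs)
    return pv_fcfs + pv_terminal

fcfs = [400.0, 450.0, 500.0, 550.0, 600.0]  # forecast FCF, $ millions (assumed)
wacc = 0.10          # discount rate (assumed)
g = 0.03             # perpetuity growth rate (assumed)
exit_multiple = 8.0  # exit multiple applied to final-year FCF (assumed)

# Perpetuity growth method: TV = FCF_n * (1 + g) / (WACC - g)
tv_perpetuity = fcfs[-1] * (1 + g) / (wacc - g)

# Multiples method: TV = final-year FCF * exit multiple
tv_multiple = fcfs[-1] * exit_multiple

value_perpetuity = dcf_value(fcfs, wacc, tv_perpetuity)
value_multiple = dcf_value(fcfs, wacc, tv_multiple)
average_value = (value_perpetuity + value_multiple) / 2
```

The perpetuity growth method capitalizes the final-year cash flow at (WACC − g), while the multiples method applies an exit multiple to it; averaging the two tempers the weaknesses of each method described above.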
USING THE WACC
The weig.
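The passage breaks off here, but the weighted average cost of capital it introduces is conventionally the after-tax blend of the costs of equity and debt, weighted by capital structure. A minimal sketch, with purely illustrative inputs rather than case figures:

```python
# Standard WACC formula: E/V * Re + D/V * Rd * (1 - Tc).
# All inputs below are illustrative assumptions, not case data.

def wacc(equity, debt, cost_of_equity, cost_of_debt, tax_rate):
    """After-tax weighted average cost of capital."""
    v = equity + debt  # total capital
    return (equity / v) * cost_of_equity + (debt / v) * cost_of_debt * (1 - tax_rate)

rate = wacc(equity=8000.0, debt=2000.0,
            cost_of_equity=0.12, cost_of_debt=0.06, tax_rate=0.35)
# With these assumed inputs: 0.8 * 0.12 + 0.2 * 0.06 * 0.65 = 0.1038
```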
CASE 42 Myasthenia Gravis
The immune response turns against the host.
The specific adaptive immune response can, in rare instances, be mounted against self antigens and cause autoimmune disease. Injury to body tissues can result from antibodies directed against cell-surface or extracellular-matrix molecules, from antibodies bound to circulating molecules that deposit as immune complexes, or from clones of T cells that react with self antigens. A special class of autoimmune disease is caused by autoantibodies against cell-surface receptors (Fig. 42.1). Graves' disease and myasthenia gravis are two well-studied examples. Graves' disease is caused by autoantibodies against the receptor on thyroid cells for thyroid-stimulating hormone (TSH), secreted by the pituitary gland. In this disease, autoantibody binds to the TSH receptor; like TSH, it stimulates the thyroid gland to produce thyroid hormones. In myasthenia gravis, the opposite effect is observed: antibodies against the acetylcholine receptor at the neuromuscular junction impede the binding of acetylcholine and stimulate internalization of the receptor, thereby blocking the transmission of nerve impulses by acetylcholine (Fig. 42.2). In addition, the presence of autoantibodies at the neuromuscular junction initiates complement-mediated lysis of the muscle endplate and damages the muscle membrane.
Myasthenia gravis means severe (gravis) muscle (my) weakness (asthenia). This disease was first identified as an autoimmune disease when an immunologist immunized rabbits with purified acetylcholine receptors to obtain antibodies against this receptor. He noticed that the rabbits developed floppy ears, like the droopy eyelids (ptosis) that are the most characteristic symptom of myasthenia gravis in humans. Subsequently, patients with this disease were found to have antibodies against the acetylcholine receptor. In addition, pregnant women with myasthenia gravis transfer the disease to their newborn infants. As IgG is the only maternal serum protein that crosses the placenta from mother to fetus, neonatal myasthenia gravis is clear evidence that myasthenia gravis is caused by an IgG antibody. More recently, patients with myasthenia gravis have been identified who have autoantibodies against muscle-specific kinase (MUSK) rather than the acetylcholine receptor. MUSK is a tyrosine kinase receptor involved in clustering acetylcholine receptors; therefore, these autoantibodies also inhibit signaling through the neuromuscular junction.
Topics bearing on this case:
Humoral autoimmunity
Transfer of maternal antibodies
Mechanisms for breaking tolerance
This case was prepared by Raif Geha, MD, in collaboration with Janet Chou, MD.
Fig. 42.1 Autoimmune diseases caused by antibody against surface or matrix antigens. These are known as type II autoimmune diseases. Damage by IgE-mediated responses (type I) does no.
Case 4 JetBlue: Delighting Customers Through Happy Jetting
In the early years, JetBlue was a thriving young airline with a strong reputation for outstanding service. In fact, the low-fare airline referred to itself as a customer service company that just happened to fly planes. But on a Valentine’s Day, JetBlue was hit by the perfect storm, literally, of events that led to an operational meltdown. One of the most severe storms of the decade covered JetBlue’s main hub at New York’s John F. Kennedy International Airport with a thick layer of snow and ice. JetBlue did not have the infrastructure to deal with such a crisis. The severity of the storm, coupled with a series of poor management decisions, left JetBlue passengers stranded in planes on the runway for up to 11 hours. Worse still, the ripple effect of the storm created major JetBlue flight disruptions for six more days. Understandably, customers were livid. JetBlue’s efforts to clean up the mess following the six-day Valentine’s Day nightmare cost over $30 million in overtime, flight refunds, vouchers for future travel, and other expenses. But the blow to the company’s previously stellar customer-service reputation stung far more than the financial fallout. JetBlue became the butt of jokes by late night talk show hosts. Some industry observers even predicted that this would be the end
of JetBlue. But just three years later, the company is not only still flying, it is growing, profitable, and hotter than ever. During a serious economic downturn, competing airlines were cutting routes, retiring aircraft, laying off employees, and losing money. JetBlue added planes, expanded into new cities, hired thousands of new employees, and turned profits.
Truly Customer Focused What’s the secret to JetBlue’s success? Quite simply, it’s an obsession with making sure that every customer experience lives up to the company slogan, “Happy Jetting.” Lots of companies say they focus on customers. But at JetBlue, customer well-being is ingrained in the culture. From the beginning, JetBlue set out to provide features that would delight customers. For example, most air travelers expect to be squashed when flying coach. But JetBlue has configured its seats with three more inches of legroom than the average airline seat. That may not sound like much. But those three inches allow six-foot three-inch Arianne Cohen, author of The Tall Book: A Celebration of Life from on High, to stretch out and even cross her legs. If that’s not enough, for as little as $10 per flight, travelers can reserve one of JetBlue’s “Even More Legroom” seats, which offer even more space and a flatter recline position. Add the fact that every JetBlue seat is well padded and covered in leather, and you already have an air travel experience that rivals first-class accommodations (something JetBlue doesn’t offer). Food and beverage is another perk that JetBlue customers enjoy. The airline doesn’t serve meals, but it offers the best selection of free.
Case 4-2 Hardee Transportation
The Assignment: Answer the four (4) questions at the end of Case 4-2
Resources: Course Textbook, Appendix 4B, Table 4B-1, Attached worksheet (Word or Excel format)
Acceptable Length:
- Show your work for solutions to questions 1 and 2.
- Provide well-written responses to questions 3 and 4.
Formatting Requirements:
- Enter your name and date.
- Provide well-structured solutions/answers; incomplete answers will receive partial credit.
- Show your work.
- Answer the case questions using the attached Word template or Excel document.
- Complete the assignment and submit it as an attachment using the assignment link when finished.
Case 3-8 Accountant takes on Halliburton and Wins!
1. Describe the inadequacies in the corporate governance system at Halliburton.
2. Consider the role of KPMG in the case with respect to the accounting and auditing issues. How did the firms’ actions relate to the ethical and professional expectations for CPAs by the accounting profession?
3. The Halliburton case took place before the Dodd-Frank Financial Reform Act was adopted by Congress. Assume Dodd-Frank had been in effect and Menendez decided to inform the SEC under Dodd-Frank rather than SOX because it had been more than 180 days since the accounting violation had occurred. Given the facts of the case would Menendez have qualified for whistleblower protection? Explain.
4. Some critics claim that while Menendez’s actions may have been courageous, he harmed others along the way. His family was in limbo for many years and had to deal with the agony of being labeled a whistleblower and disloyal to Halliburton. The company’s overall revenue did not change; a small amount was merely shifted to an earlier period. Halliburton didn't steal any money, they didn't cheat the IRS, they didn't cheat their customers or their employees. In fact, they lessened their cash flows by paying out taxes earlier than they should have under the rules. How do you respond to these criticisms?
Case 3
Ford’s Pinto Fires: The Retrospective View of Ford’s Field Recall Coordinator
Brief Overview of the Ford Pinto Fires
Determined to compete with fuel- efficient Volkswagen and Japanese imports, the Ford Motor Company introduced the subcompact Pinto in the 1971 model year. Lee Iacocca, Ford’s president at the time, insisted that the Pinto weigh no more than 2,000 pounds and cost no more than $2,000. Even with these restrictions, the Pinto met federal safety standards, although some people have argued that strict adherence to the restrictions led Ford engineers to compromise safety. Some 2 million units were sold during the 10- year life of the Pinto.
The Pinto’s major design flaw— a fuel tank prone to rupturing with moderate speed rear- end collisions— surfaced not too long after the Pinto’s entrance to the market. In April 1974, the Center for Auto Safety petitioned the National Highway Traffic Safety Administration (NHTSA) to recall Ford Pintos due to the fuel tank design defect. The Center for Auto Safety’s petition was based on reports from attorneys of three deaths and four serious injuries in moderate- speed rear- end collisions involving Pintos. The NHTSA did not act on this petition until 1977. As a result of tests performed for the NHTSA, as well as the extraordinary amount of publicity generated by the problem, Ford agreed, on June 9, 1978, to recall 1.5 million 1971– 1976 Ford Pintos and 30,000 1975– 1976 Mercury Bobcat sedan and hatchback models for modifications to the fuel tank. Recall notices were mailed to the affected Pinto and Bobcat owners in September 1978. Repair parts were to be delivered to all dealers by September 15, 1978.
Unfortunately, the recall was initiated too late for six people. Between June 9 and September 15, 1978, six people died in Pinto fires after a rear impact. Three of these people were teenage girls killed in Indiana in August 1978 when their 1973 Pinto burst into flames after being rear- ended by a van. The fiery deaths of the Indiana teenagers led to criminal prosecution of the Ford Motor Company on charges of reckless homicide, marking the first time that an American corporation
was prosecuted on criminal charges. In the trial, which commenced on January 15, 1980, “Indiana state prosecutors alleged that Ford knew Pinto gasoline tanks were prone to catch fire during rear- end collisions but failed to warn the public or fix the problem out of concern for profits.” On March 13, 1980, a jury found Ford innocent of the charges. Production of the Pinto was discontinued in the fall of 1980.
Enter Ford’s Field Recall Coordinator
Dennis A. Gioia, currently a professor in the Department of Management and Organization at Pennsylvania State University, was the field recall coordinator at Ford Motor Company as the Pinto fuel tank defect began unfolding. Gioia’s responsibilities included the operational coordination of all the current recall
campaigns, tracking incoming information.
Case 3: Competition in the Craft Brewing Industry in 2017
John D. Varlaro
Johnson & Wales University
John E. Gamble
Texas A&M University–Corpus Christi
Locally produced or regional craft beers caused a seismic shift in the U.S. beer industry during the early 2010s with the gains of the small, regional newcomers coming at the expense of such well-known brands as Budweiser, Miller, Coors, and Bud Light. Craft breweries, which by definition sold fewer than 6 million barrels (bbls) per year, expanded rapidly with the deregulation of intrastate alcohol distribution and retail laws and a change in consumer preferences toward unique and high-quality beers. The growing popularity of craft beers allowed the total beer industry in the United States to increase by 6.7 percent annually between 2011 and 2016 to reach $39.5 billion. The production of U.S. craft breweries more than doubled from 11.5 million bbls per year to about 24.6 million bbls per year during that time. In addition, production by microbreweries and brewpubs accounted for 90 percent of craft brewer growth in 2016.1
The industry had begun to show signs of a slowdown going into 2017, with Boston Beer Company, the second largest craft brewery in the United States and known for its Samuel Adams brand, experiencing a 4 percent sales decline in 2016 that erased two years of growth. The annual revenues of Anheuser-Busch InBev SA, whose portfolio included global brands Budweiser, Corona, and Stella Artois and numerous international and local brands, remained relatively consistent from 2014 to 2016. However, the sales volume of Anheuser-Busch’s flagship brands and its newly acquired and international brands such as Corona, Goose Island, Shock Top, Beck’s, and St. Pauli Girl allowed it to control 45.8 percent of the U.S. market for beer in 2016.2
Industry competition was increasing as grain price fluctuations affected cost structures and growing consolidation within the beer industry—led most notably by AB InBev’s acquisition of several craft breweries, Grupo Modelo, and its pending $104 billion acquisition of SABMiller—created a battle for market share. While the market for specialty beer was expected to gradually plateau by 2020, it appeared that the slowing growth had arrived by 2017. Nevertheless, craft breweries and microbreweries were expected to expand in number and in terms of market share as consumers sought out new pale ales, stouts, wheat beers, pilsners, and lagers with regional or local flairs.
The Beer Market
The total economic impact of the beer market was estimated to be 2.0 percent of the total U.S. GDP in 2016 when variables such as jobs within beer production, sales, and distribution were included.3 Exhibit 1 presents annual beer production statistics for the United States between 2006 and 2016.
Year    Barrels Produced (in millions)*
2006    198
2007    200
2008    200
2009    197
2010    195
2011    193
2012    196
2013    192
2014    193
2015    191
2016    189
*Rounded to the nearest million.
CASE 3.2 "Ethics, Schmethics"-Enron's Code of Ethics
In July 2000, Enron Corporation published an internal code of ethics document that ran 64 pages in length (see Appendix 1). Page 12 of the document proudly announced the company's position on business ethics:

Employees of Enron Corp., its subsidiaries, and its affiliated companies (collectively the "Company") are charged with conducting their business affairs in accordance with the highest ethical standards. An employee shall not conduct himself or herself in a manner which directly or indirectly would be detrimental to the best interests of the Company or in a manner which would bring to the employee financial gain separately derived as a direct consequence of his or her employment with the Company. Moral as well as legal obligations will be fulfilled openly, promptly, and in a manner which will reflect pride on the Company's name.

Products and services of the Company will be of the highest quality and as represented. Advertising and promotion will be truthful, not exaggerated or misleading.

Agreements, whether contractual or verbal, will be honored. No bribes, bonuses, kickbacks, lavish entertainment, or gifts will be given or received in exchange for special position, price or privilege ... Relations with the Company's many publics (customers, stockholders, governments, employees, suppliers, press, and bankers) will be conducted in honesty, candor, and fairness."
Subsequent investigations into the inner workings of Enron Corp. revealed that the only time this code of ethics received formal attention (other than, presumably, when it was created and formally accepted) was when the board of directors voted to waive key provisions of the code in order to allow the off-balance-sheet partnerships that Chief Financial Officer Andy Fastow ultimately used to hide over half a billion dollars of debt from analysts and investors.

A more realistic picture of the apparent flexibility of Enron's ethical culture can be found in the extreme conflict of interest represented in its relationship with Arthur Andersen. Andersen provided both consulting and auditing services for fees running into millions of dollars, money that became so critical to Andersen's continued growth that its employees were encouraged to sign off on off-balance-sheet transactions (transactions that were not shown on Enron's publicly reported balance sheet) that stretched the limits of generally accepted accounting principles (GAAP) to their furthest edges. In addition, Enron hired former Andersen employees to manage the affairs of their former colleagues, which further strengthened the conflict of interest in a relationship that was supposed, at the very least, to be at arm's length, and, at best, above reproach.
1. What is the purpose of a code of ethics?
2. Do you think the employees of Enron Corp. were told about the vote to put aside key elements of the code of ethics? If not, why not? If they had .
MODERN
OPERATING SYSTEMS
FOURTH EDITION
Trademarks
AMD, the AMD logo, and combinations thereof are trademarks of Advanced Micro Devices, Inc.
Android and Google Web Search are trademarks of Google Inc.
Apple and Apple Macintosh are registered trademarks of Apple Inc.
ASM, DESPOOL, DDT, LINK-80, MAC, MP/M, PL/1-80 and SID are trademarks of Digital Research.
BlackBerry®, RIM®, Research In Motion® and related trademarks, names and logos are the property of Research In Motion Limited and are registered and/or used in the U.S. and countries around the world.
Blu-ray Disc™ is a trademark owned by Blu-ray Disc Association.
CD Compact Disk is a trademark of Phillips.
CDC 6600 is a trademark of Control Data Corporation.
CP/M and CP/NET are registered trademarks of Digital Research.
DEC and PDP are registered trademarks of Digital Equipment Corporation.
eCosCentric is the owner of the eCos Trademark and eCos Logo, in the US and other countries. The marks were acquired from the Free Software Foundation on 26th February 2007. The Trademark and Logo were previously owned by Red Hat.
The GNOME logo and GNOME name are registered trademarks or trademarks of GNOME Foundation in the United States or other countries.
Firefox® and Firefox® OS are registered trademarks of the Mozilla Foundation.
Fortran is a trademark of IBM Corp.
FreeBSD is a registered trademark of the FreeBSD Foundation.
GE 645 is a trademark of General Electric Corporation.
Intel Core is a trademark of Intel Corporation in the U.S. and/or other countries.
Java is a trademark of Sun Microsystems, Inc., and refers to Sun’s Java programming language.
Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.
MS-DOS and Windows are registered trademarks of Microsoft Corporation in the United States and/or other countries.
TI Silent 700 is a trademark of Texas Instruments Incorporated.
UNIX is a registered trademark of The Open Group.
Zilog and Z80 are registered trademarks of Zilog, Inc.
MODERN
OPERATING SYSTEMS
FOURTH EDITION
ANDREW S. TANENBAUM
HERBERT BOS
Vrije Universiteit
Amsterdam, The Netherlands
Vice President and Editorial Director, ECS: Marcia Horton
Executive Editor: Tracy Johnson
The list keeps growing. (AST)
To Marieke, Duko, Jip, and Spot. Fearsome Jedi, all. (HB)
CONTENTS
PREFACE xxiii
1 INTRODUCTION 1
1.1 WHAT IS AN OPERATING SYSTEM? 3
1.1.1 The Operating System as an Extended Machine 4
1.1.2 The Operating System as a Resource Manager 5
1.2 HISTORY OF OPERATING SYSTEMS 6
1.2.1 The First Generation (1945–55): Vacuum Tubes 7
1.2.2 The Second Generation (1955–65): Transistors and Batch Systems 8
1.2.3 The Third Generation (1965–1980): ICs and Multiprogramming 9
1.2.4 The Fourth Generation (1980–Present): Personal Computers 14
1.2.5 The Fifth Generation (1990–Present): Mobile Computers 19
1.3 COMPUTER HARDWARE REVIEW 20
1.3.1 Processors 21
1.3.2 Memory 24
1.3.3 Disks 27
1.3.4 I/O Devices 28
1.3.5 Buses 31
1.3.6 Booting the Computer 34
1.4 THE OPERATING SYSTEM ZOO 35
1.4.1 Mainframe Operating Systems 35
1.4.2 Server Operating Systems 35
1.4.3 Multiprocessor Operating Systems 36
1.4.4 Personal Computer Operating Systems 36
1.4.5 Handheld Computer Operating Systems 36
1.4.6 Embedded Operating Systems 36
1.4.7 Sensor-Node Operating Systems 37
1.4.8 Real-Time Operating Systems 37
1.4.9 Smart Card Operating Systems 38
1.5 OPERATING SYSTEM CONCEPTS 38
1.5.1 Processes 39
1.5.2 Address Spaces 41
1.5.3 Files 41
1.5.4 Input/Output 45
1.5.5 Protection 45
1.5.6 The Shell 45
1.5.7 Ontogeny Recapitulates Phylogeny 46
1.6 SYSTEM CALLS 50
1.6.1 System Calls for Process Management 53
1.6.2 System Calls for File Management 56
1.6.3 System Calls for Directory Management 57
1.6.4 Miscellaneous System Calls 59
1.6.5 The Windows Win32 API 60
1.7 OPERATING SYSTEM STRUCTURE 62
1.7.1 Monolithic Systems 62
1.7.2 Layered Systems 63
1.7.3 Microkernels 65
1.7.4 Client-Server Model 68
1.7.5 Virtual Machines 68
1.7.6 Exokernels 72
1.8 THE WORLD ACCORDING TO C 73
1.8.1 The C Language 73
1.8.2 Header Files 74
1.8.3 Large Programming Projects 75
1.8.4 The Model of Run Time 76
1.9 RESEARCH ON OPERATING SYSTEMS 77
1.10 OUTLINE OF THE REST OF THIS BOOK 78
1.11 METRIC UNITS 79
1.12 SUMMARY 80
2 PROCESSES AND THREADS 85
2.1 PROCESSES 85
2.1.1 The Process Model 86
2.1.2 Process Creation 88
2.1.3 Process Termination 90
2.1.4 Process Hierarchies 91
2.1.5 Process States 92
2.1.6 Implementation of Processes 94
2.1.7 Modeling Multiprogramming 95
2.2 THREADS 97
2.2.1 Thread Usage 97
2.2.2 The Classical Thread Model 102
2.2.3 POSIX Threads 106
2.2.4 Implementing Threads in User Space 108
2.2.5 Implementing Threads in the Kernel 111
2.2.6 Hybrid Implementations 112
2.2.7 Scheduler Activations 113
2.2.8 Pop-Up Threads 114
2.2.9 Making Single-Threaded Code Multithreaded 115
2.3 INTERPROCESS COMMUNICATION 119
2.3.1 Race Conditions 119
2.3.2 Critical Regions 121
2.3.3 Mutual Exclusion with Busy Waiting 121
2.3.4 Sleep and Wakeup 127
2.3.5 Semaphores 130
2.3.6 Mutexes 132
2.3.7 Monitors 137
2.3.8 Message Passing 144
2.3.9 Barriers 146
2.3.10 Avoiding Locks: Read-Copy-Update 148
2.4 SCHEDULING 148
2.4.1 Introduction to Scheduling 149
2.4.2 Scheduling in Batch Systems 156
2.4.3 Scheduling in Interactive Systems 158
2.4.4 Scheduling in Real-Time Systems 164
2.4.5 Policy Versus Mechanism 165
2.4.6 Thread Scheduling 165
2.5 CLASSICAL IPC PROBLEMS 167
2.5.1 The Dining Philosophers Problem 167
2.5.2 The Readers and Writers Problem 169
2.6 RESEARCH ON PROCESSES AND THREADS 172
2.7 SUMMARY 173
3 MEMORY MANAGEMENT 181
3.1 NO MEMORY ABSTRACTION 182
3.2 A MEMORY ABSTRACTION: ADDRESS SPACES 185
3.2.1 The Notion of an Address Space 185
3.2.2 Swapping 187
3.2.3 Managing Free Memory 190
3.3 VIRTUAL MEMORY 194
3.3.1 Paging 195
3.3.2 Page Tables 198
3.3.3 Speeding Up Paging 201
3.3.4 Page Tables for Large Memories 205
3.4 PAGE REPLACEMENT ALGORITHMS 209
3.4.1 The Optimal Page Replacement Algorithm 209
3.4.2 The Not Recently Used Page Replacement Algorithm 210
3.4.3 The First-In, First-Out (FIFO) Page Replacement Algorithm 211
3.4.4 The Second-Chance Page Replacement Algorithm 211
3.4.5 The Clock Page Replacement Algorithm 212
3.4.6 The Least Recently Used (LRU) Page Replacement Algorithm 213
3.4.7 Simulating LRU in Software 214
3.4.8 The Working Set Page Replacement Algorithm 215
3.4.9 The WSClock Page Replacement Algorithm 219
3.4.10 Summary of Page Replacement Algorithms 221
3.5 DESIGN ISSUES FOR PAGING SYSTEMS 222
3.5.1 Local versus Global Allocation Policies 222
3.5.2 Load Control 225
3.5.3 Page Size 225
3.5.4 Separate Instruction and Data Spaces 227
3.5.5 Shared Pages 228
3.5.6 Shared Libraries 229
3.5.7 Mapped Files 231
3.5.8 Cleaning Policy 232
3.5.9 Virtual Memory Interface 232
3.6 IMPLEMENTATION ISSUES 233
3.6.1 Operating System Involvement with Paging 233
3.6.2 Page Fault Handling 234
3.6.3 Instruction Backup 235
3.6.4 Locking Pages in Memory 236
3.6.5 Backing Store 237
3.6.6 Separation of Policy and Mechanism 239
3.7 SEGMENTATION 240
3.7.1 Implementation of Pure Segmentation 243
3.7.2 Segmentation with Paging: MULTICS 243
3.7.3 Segmentation with Paging: The Intel x86 247
3.8 RESEARCH ON MEMORY MANAGEMENT 252
3.9 SUMMARY 253
4 FILE SYSTEMS 263
4.1 FILES 265
4.1.1 File Naming 265
4.1.2 File Structure 267
4.1.3 File Types 268
4.1.4 File Access 269
4.1.5 File Attributes 271
4.1.6 File Operations 271
4.1.7 An Example Program Using File-System Calls 273
4.2 DIRECTORIES 276
4.2.1 Single-Level Directory Systems 276
4.2.2 Hierarchical Directory Systems 276
4.2.3 Path Names 277
4.2.4 Directory Operations 280
4.3 FILE-SYSTEM IMPLEMENTATION 281
4.3.1 File-System Layout 281
4.3.2 Implementing Files 282
4.3.3 Implementing Directories 287
4.3.4 Shared Files 290
4.3.5 Log-Structured File Systems 293
4.3.6 Journaling File Systems 294
4.3.7 Virtual File Systems 296
4.4 FILE-SYSTEM MANAGEMENT AND OPTIMIZATION 299
4.4.1 Disk-Space Management 299
4.4.2 File-System Backups 306
4.4.3 File-System Consistency 312
4.4.4 File-System Performance 314
4.4.5 Defragmenting Disks 319
4.5 EXAMPLE FILE SYSTEMS 320
4.5.1 The MS-DOS File System 320
4.5.2 The UNIX V7 File System 323
4.5.3 CD-ROM File Systems 325
4.6 RESEARCH ON FILE SYSTEMS 331
4.7 SUMMARY 332
5 INPUT/OUTPUT 337
5.1 PRINCIPLES OF I/O HARDWARE 337
5.1.1 I/O Devices 338
5.1.2 Device Controllers 339
5.1.3 Memory-Mapped I/O 340
5.1.4 Direct Memory Access 344
5.1.5 Interrupts Revisited 347
5.2 PRINCIPLES OF I/O SOFTWARE 351
5.2.1 Goals of the I/O Software 351
5.2.2 Programmed I/O 352
5.2.3 Interrupt-Driven I/O 354
5.2.4 I/O Using DMA 355
5.3 I/O SOFTWARE LAYERS 356
5.3.1 Interrupt Handlers 356
5.3.2 Device Drivers 357
5.3.3 Device-Independent I/O Software 361
5.3.4 User-Space I/O Software 367
5.4 DISKS 369
5.4.1 Disk Hardware 369
5.4.2 Disk Formatting 375
5.4.3 Disk Arm Scheduling Algorithms 379
5.4.4 Error Handling 382
5.4.5 Stable Storage 385
5.5 CLOCKS 388
5.5.1 Clock Hardware 388
5.5.2 Clock Software 389
5.5.3 Soft Timers 392
5.6 USER INTERFACES: KEYBOARD, MOUSE, MONITOR 394
5.6.1 Input Software 394
5.6.2 Output Software 399
5.7 THIN CLIENTS 416
5.8 POWER MANAGEMENT 417
5.8.1 Hardware Issues 418
5.8.2 Operating System Issues 419
5.8.3 Application Program Issues 425
5.9 RESEARCH ON INPUT/OUTPUT 426
5.10 SUMMARY 428
6 DEADLOCKS 435
6.1 RESOURCES 436
6.1.1 Preemptable and Nonpreemptable Resources 436
6.1.2 Resource Acquisition 437
6.2 INTRODUCTION TO DEADLOCKS 438
6.2.1 Conditions for Resource Deadlocks 439
6.2.2 Deadlock Modeling 440
6.3 THE OSTRICH ALGORITHM 443
6.4 DEADLOCK DETECTION AND RECOVERY 443
6.4.1 Deadlock Detection with One Resource of Each Type 444
6.4.2 Deadlock Detection with Multiple Resources of Each Type 446
6.4.3 Recovery from Deadlock 448
6.5 DEADLOCK AVOIDANCE 450
6.5.1 Resource Trajectories 450
6.5.2 Safe and Unsafe States 452
6.5.3 The Banker’s Algorithm for a Single Resource 453
6.5.4 The Banker’s Algorithm for Multiple Resources 454
6.6 DEADLOCK PREVENTION 456
6.6.1 Attacking the Mutual-Exclusion Condition 456
6.6.2 Attacking the Hold-and-Wait Condition 456
6.6.3 Attacking the No-Preemption Condition 457
6.6.4 Attacking the Circular Wait Condition 457
6.7 OTHER ISSUES 458
6.7.1 Two-Phase Locking 458
6.7.2 Communication Deadlocks 459
6.7.3 Livelock 461
6.7.4 Starvation 463
6.8 RESEARCH ON DEADLOCKS 464
6.9 SUMMARY 464
7 VIRTUALIZATION AND THE CLOUD 471
7.1 HISTORY 473
7.2 REQUIREMENTS FOR VIRTUALIZATION 474
7.3 TYPE 1 AND TYPE 2 HYPERVISORS 477
7.4 TECHNIQUES FOR EFFICIENT VIRTUALIZATION 478
7.4.1 Virtualizing the Unvirtualizable 479
7.4.2 The Cost of Virtualization 482
7.5 ARE HYPERVISORS MICROKERNELS DONE RIGHT? 483
7.6 MEMORY VIRTUALIZATION 486
7.7 I/O VIRTUALIZATION 490
7.8 VIRTUAL APPLIANCES 493
7.9 VIRTUAL MACHINES ON MULTICORE CPUS 494
7.10 LICENSING ISSUES 494
7.11 CLOUDS 495
7.11.1 Clouds as a Service 496
7.11.2 Virtual Machine Migration 496
7.11.3 Checkpointing 497
7.12 CASE STUDY: VMWARE 498
7.12.1 The Early History of VMware 498
7.12.2 VMware Workstation 499
7.12.3 Challenges in Bringing Virtualization to the x86 500
7.12.4 VMware Workstation: Solution Overview 502
7.12.5 The Evolution of VMware Workstation 511
7.12.6 ESX Server: VMware’s type 1 Hypervisor 512
7.13 RESEARCH ON VIRTUALIZATION AND THE CLOUD 514
8 MULTIPLE PROCESSOR SYSTEMS 517
8.1 MULTIPROCESSORS 520
8.1.1 Multiprocessor Hardware 520
8.1.2 Multiprocessor Operating System Types 530
8.1.3 Multiprocessor Synchronization 534
8.1.4 Multiprocessor Scheduling 539
8.2 MULTICOMPUTERS 544
8.2.1 Multicomputer Hardware 545
8.2.2 Low-Level Communication Software 550
8.2.3 User-Level Communication Software 552
8.2.4 Remote Procedure Call 556
8.2.5 Distributed Shared Memory 558
8.2.6 Multicomputer Scheduling 563
8.2.7 Load Balancing 563
8.3 DISTRIBUTED SYSTEMS 566
8.3.1 Network Hardware 568
8.3.2 Network Services and Protocols 571
8.3.3 Document-Based Middleware 576
8.3.4 File-System-Based Middleware 577
8.3.5 Object-Based Middleware 582
8.3.6 Coordination-Based Middleware 584
8.4 RESEARCH ON MULTIPLE PROCESSOR SYSTEMS 587
8.5 SUMMARY 588
9 SECURITY 593
9.1 THE SECURITY ENVIRONMENT 595
9.1.1 Threats 596
9.1.2 Attackers 598
9.2 OPERATING SYSTEMS SECURITY 599
9.2.1 Can We Build Secure Systems? 600
9.2.2 Trusted Computing Base 601
9.3 CONTROLLING ACCESS TO RESOURCES 602
9.3.1 Protection Domains 602
9.3.2 Access Control Lists 605
9.3.3 Capabilities 608
9.4 FORMAL MODELS OF SECURE SYSTEMS 611
9.4.1 Multilevel Security 612
9.4.2 Covert Channels 615
9.5 BASICS OF CRYPTOGRAPHY 619
9.5.1 Secret-Key Cryptography 620
9.5.2 Public-Key Cryptography 621
9.5.3 One-Way Functions 622
9.5.4 Digital Signatures 622
9.5.5 Trusted Platform Modules 624
9.6 AUTHENTICATION 626
9.6.1 Authentication Using a Physical Object 633
9.6.2 Authentication Using Biometrics 636
9.7 EXPLOITING SOFTWARE 639
9.7.1 Buffer Overflow Attacks 640
9.7.2 Format String Attacks 649
9.7.3 Dangling Pointers 652
9.7.4 Null Pointer Dereference Attacks 653
9.7.5 Integer Overflow Attacks 654
9.7.6 Command Injection Attacks 655
9.7.7 Time of Check to Time of Use Attacks 656
9.8 INSIDER ATTACKS 657
9.8.1 Logic Bombs 657
9.8.2 Back Doors 658
9.8.3 Login Spoofing 659
9.9 MALWARE 660
9.9.1 Trojan Horses 662
9.9.2 Viruses 664
9.9.3 Worms 674
9.9.4 Spyware 676
9.9.5 Rootkits 680
9.10 DEFENSES 684
9.10.1 Firewalls 685
9.10.2 Antivirus and Anti-Antivirus Techniques 687
9.10.3 Code Signing 693
9.10.4 Jailing 694
9.10.5 Model-Based Intrusion Detection 695
9.10.6 Encapsulating Mobile Code 697
9.10.7 Java Security 701
9.11 RESEARCH ON SECURITY 703
9.12 SUMMARY 704
10 CASE STUDY 1: UNIX, LINUX, AND ANDROID 713
10.1 HISTORY OF UNIX AND LINUX 714
10.1.1 UNICS 714
10.1.2 PDP-11 UNIX 715
10.1.3 Portable UNIX 716
10.1.4 Berkeley UNIX 717
10.1.5 Standard UNIX 718
10.1.6 MINIX 719
10.1.7 Linux 720
10.2 OVERVIEW OF LINUX 723
10.2.1 Linux Goals 723
10.2.2 Interfaces to Linux 724
10.2.3 The Shell 725
10.2.4 Linux Utility Programs 728
10.2.5 Kernel Structure 730
10.3 PROCESSES IN LINUX 733
10.3.1 Fundamental Concepts 733
10.3.2 Process-Management System Calls in Linux 735
10.3.3 Implementation of Processes and Threads in Linux 739
10.3.4 Scheduling in Linux 746
10.3.5 Booting Linux 751
10.4 MEMORY MANAGEMENT IN LINUX 753
10.4.1 Fundamental Concepts 753
10.4.2 Memory Management System Calls in Linux 756
10.4.3 Implementation of Memory Management in Linux 758
10.4.4 Paging in Linux 764
10.5 INPUT/OUTPUT IN LINUX 767
10.5.1 Fundamental Concepts 767
10.5.2 Networking 769
10.5.3 Input/Output System Calls in Linux 770
10.5.4 Implementation of Input/Output in Linux 771
10.5.5 Modules in Linux 774
10.6 THE LINUX FILE SYSTEM 775
10.6.1 Fundamental Concepts 775
10.6.2 File-System Calls in Linux 780
10.6.3 Implementation of the Linux File System 783
10.6.4 NFS: The Network File System 792
10.7 SECURITY IN LINUX 798
10.7.1 Fundamental Concepts 798
10.7.2 Security System Calls in Linux 800
10.7.3 Implementation of Security in Linux 801
10.8 ANDROID 802
10.8.1 Android and Google 803
10.8.2 History of Android 803
10.8.3 Design Goals 807
10.8.4 Android Architecture 809
10.8.5 Linux Extensions 810
10.8.6 Dalvik 814
10.8.7 Binder IPC 815
10.8.8 Android Applications 824
10.8.9 Intents 836
10.8.10 Application Sandboxes 837
10.8.11 Security 838
10.8.12 Process Model 844
10.9 SUMMARY 848
11 CASE STUDY 2: WINDOWS 8 857
11.1 HISTORY OF WINDOWS THROUGH WINDOWS 8.1 857
11.1.1 1980s: MS-DOS 857
11.1.2 1990s: MS-DOS-based Windows 859
11.1.3 2000s: NT-based Windows 859
11.1.4 Windows Vista 862
11.1.5 2010s: Modern Windows 863
11.2 PROGRAMMING WINDOWS 864
11.2.1 The Native NT Application Programming Interface 867
11.2.2 The Win32 Application Programming Interface 871
11.2.3 The Windows Registry 875
11.3 SYSTEM STRUCTURE 877
11.3.1 Operating System Structure 877
11.3.2 Booting Windows 893
11.3.3 Implementation of the Object Manager 894
11.3.4 Subsystems, DLLs, and User-Mode Services 905
11.4 PROCESSES AND THREADS IN WINDOWS 908
11.4.1 Fundamental Concepts 908
11.4.2 Job, Process, Thread, and Fiber Management API Calls 914
11.4.3 Implementation of Processes and Threads 919
11.5 MEMORY MANAGEMENT 927
11.5.1 Fundamental Concepts 927
11.5.2 Memory-Management System Calls 931
11.5.3 Implementation of Memory Management 932
11.6 CACHING IN WINDOWS 942
11.7 INPUT/OUTPUT IN WINDOWS 943
11.7.1 Fundamental Concepts 944
11.7.2 Input/Output API Calls 945
11.7.3 Implementation of I/O 948
11.8 THE WINDOWS NT FILE SYSTEM 952
11.8.1 Fundamental Concepts 953
11.8.2 Implementation of the NT File System 954
11.9 WINDOWS POWER MANAGEMENT 964
11.10 SECURITY IN WINDOWS 8 966
11.10.1 Fundamental Concepts 967
11.10.2 Security API Calls 969
11.10.3 Implementation of Security 970
11.10.4 Security Mitigations 972
11.11 SUMMARY 975
12 OPERATING SYSTEM DESIGN 981
12.1 THE NATURE OF THE DESIGN PROBLEM 982
12.1.1 Goals 982
12.1.2 Why Is It Hard to Design an Operating System? 983
12.2 INTERFACE DESIGN 985
12.2.1 Guiding Principles 985
12.2.2 Paradigms 987
12.2.3 The System-Call Interface 991
12.3 IMPLEMENTATION 993
12.3.1 System Structure 993
12.3.2 Mechanism vs. Policy 997
12.3.3 Orthogonality 998
12.3.4 Naming 999
12.3.5 Binding Time 1001
12.3.6 Static vs. Dynamic Structures 1001
12.3.7 Top-Down vs. Bottom-Up Implementation 1003
12.3.8 Synchronous vs. Asynchronous Communication 1004
12.3.9 Useful Techniques 1005
12.4 PERFORMANCE 1010
12.4.1 Why Are Operating Systems Slow? 1010
12.4.2 What Should Be Optimized? 1011
12.4.3 Space-Time Trade-offs 1012
12.4.4 Caching 1015
12.4.5 Hints 1016
12.4.6 Exploiting Locality 1016
12.4.7 Optimize the Common Case 1017
12.5 PROJECT MANAGEMENT 1018
12.5.1 The Mythical Man Month 1018
12.5.2 Team Structure 1019
12.5.3 The Role of Experience 1021
12.5.4 No Silver Bullet 1021
12.6 TRENDS IN OPERATING SYSTEM DESIGN 1022
12.6.1 Virtualization and the Cloud 1023
12.6.2 Manycore Chips 1023
12.6.3 Large-Address-Space Operating Systems 1024
12.6.4 Seamless Data Access 1025
12.6.5 Battery-Powered Computers 1025
12.6.6 Embedded Systems 1026
12.7 SUMMARY 1027
13 READING LIST AND BIBLIOGRAPHY 1031
13.1 SUGGESTIONS FOR FURTHER READING 1031
13.1.1 Introduction 1031
13.1.2 Processes and Threads 1032
13.1.3 Memory Management 1033
13.1.4 File Systems 1033
13.1.5 Input/Output 1034
13.1.6 Deadlocks 1035
13.1.7 Virtualization and the Cloud 1035
13.1.8 Multiple Processor Systems 1036
13.1.9 Security 1037
13.1.10 Case Study 1: UNIX, Linux, and Android 1039
13.1.11 Case Study 2: Windows 8 1040
13.1.12 Operating System Design 1040
13.2 ALPHABETICAL BIBLIOGRAPHY 1041
INDEX 1071
PREFACE
The fourth edition of this book differs from the third edition in numerous ways. There are large numbers of small changes everywhere to bring the material up to date, as operating systems are not standing still. The chapter on Multimedia Operating Systems has been moved to the Web, primarily to make room for new material and keep the book from growing to a completely unmanageable size. The chapter on Windows Vista has been removed completely, as Vista has not been the success Microsoft hoped for. The chapter on Symbian has also been removed, as Symbian is no longer widely available. However, the Vista material has been replaced by Windows 8 and Symbian has been replaced by Android. Also, a completely new chapter, on virtualization and the cloud, has been added. Here is a chapter-by-chapter rundown of the changes.
• Chapter 1 has been heavily modified and updated in many places but, with the exception of a new section on mobile computers, no major sections have been added or deleted.
• Chapter 2 has been updated, with older material removed and some new material added. For example, we added the futex synchronization primitive, and a section about how to avoid locking altogether with Read-Copy-Update.
• Chapter 3 now has more focus on modern hardware and less emphasis on segmentation and MULTICS.
• In Chapter 4 we removed CD-ROMs, as they are no longer very common, and replaced them with more modern solutions (like flash drives). Also, we added RAID level 6 to the section on RAID.
• Chapter 5 has seen a lot of changes. Older devices like CRTs and CD-ROMs have been removed, while new technology, such as touch screens, has been added.
• Chapter 6 is pretty much unchanged. The topic of deadlocks is fairly stable, with few new results.
• Chapter 7 is completely new. It covers the important topics of virtualization and the cloud. As a case study, a section on VMware has been added.
• Chapter 8 is an updated version of the previous material on multiprocessor systems. There is more emphasis on multicore and manycore systems now, which have become increasingly important in the past few years. Cache consistency has become a bigger issue recently and is covered here, now.
• Chapter 9 has been heavily revised and reorganized, with considerable new material on exploiting code bugs, malware, and defenses against them. Attacks such as null pointer dereferences and buffer overflows are treated in more detail. Defense mechanisms, including canaries, the NX bit, and address-space randomization, are covered in detail now, as are the ways attackers try to defeat them.
• Chapter 10 has undergone a major change. The material on UNIX and Linux has been updated, but the major addition here is a new and lengthy section on the Android operating system, which is very common on smartphones and tablets.
• Chapter 11 in the third edition was on Windows Vista. That has been replaced by a chapter on Windows 8, specifically Windows 8.1. It brings the treatment of Windows completely up to date.
• Chapter 12 is a revised version of Chap. 13 from the previous edition.
• Chapter 13 is a thoroughly updated list of suggested readings. In addition, the list of references has been updated, with entries to 223 new works published after the third edition of this book came out.
• Chapter 7 from the previous edition has been moved to the book’s Website to keep the size somewhat manageable.
• In addition, the sections on research throughout the book have all been redone from scratch to reflect the latest research in operating systems. Furthermore, new problems have been added to all the chapters.
Numerous teaching aids for this book are available. Instructor supplements can be found at www.pearsonhighered.com/tanenbaum. They include PowerPoint sheets, software tools for studying operating systems, lab experiments for students, simulators, and more material for use in operating systems courses. Instructors using this book in a course should definitely take a look. The Companion Website for this book is also located at www.pearsonhighered.com/tanenbaum. The specific site for this book is password protected. To use the site, click on the picture of the cover and then follow the instructions on the student access card that came with your text to create a user account and log in. Student resources include:
• An online chapter on Multimedia Operating Systems
• Lab Experiments
• Online Exercises
• Simulation Exercises
A number of people have been involved in the fourth edition. First and foremost, Prof. Herbert Bos of the Vrije Universiteit in Amsterdam has been added as a coauthor. He is a security, UNIX, and all-around systems expert and it is great to have him on board. He wrote much of the new material except as noted below.

Our editor, Tracy Johnson, has done a wonderful job, as usual, of herding all the cats, putting all the pieces together, putting out fires, and keeping the project on schedule. We were also fortunate to get our long-time production editor, Camille Trentacoste, back. Her skills in so many areas have saved the day on more than a few occasions. We are glad to have her again after an absence of several years. Carole Snyder did a fine job coordinating the various people involved in the book.

The material in Chap. 7 on VMware (in Sec. 7.12) was written by Edouard Bugnion of EPFL in Lausanne, Switzerland. Ed was one of the founders of the VMware company and knows this material as well as anyone in the world. We thank him greatly for supplying it to us.

Ada Gavrilovska of Georgia Tech, who is an expert on Linux internals, updated Chap. 10 from the Third Edition, which she also wrote. The Android material in Chap. 10 was written by Dianne Hackborn of Google, one of the key developers of the Android system. Android is the leading operating system on smartphones, so we are very grateful to have Dianne help us. Chap. 10 is now quite long and detailed, but UNIX, Linux, and Android fans can learn a lot from it. It is perhaps worth noting that the longest and most technical chapter in the book was written by two women. We just did the easy stuff.

We haven’t neglected Windows, however. Dave Probert of Microsoft updated Chap. 11 from the previous edition of the book. This time the chapter covers Windows 8.1 in detail. Dave has a great deal of knowledge of Windows and enough vision to tell the difference between places where Microsoft got it right and where it got it wrong. Windows fans are certain to enjoy this chapter.

The book is much better as a result of the work of all these expert contributors. Again, we would like to thank them for their invaluable help.
We were also fortunate to have several reviewers who read the manuscript and also suggested new end-of-chapter problems. These were Trudy Levine, Shivakant Mishra, Krishna Sivalingam, and Ken Wong. Steve Armstrong did the PowerPoint sheets for instructors teaching a course using the book.

Normally copyeditors and proofreaders don’t get acknowledgements, but Bob Lentz (copyeditor) and Joe Ruddick (proofreader) did exceptionally thorough jobs. Joe, in particular, can spot the difference between a roman period and an italics period from 20 meters. Nevertheless, the authors take full responsibility for any residual errors in the book. Readers noticing any errors are requested to contact one of the authors.

Finally, last but not least, Barbara and Marvin are still wonderful, as usual, each in a unique and special way. Daniel and Matilde are great additions to our family. Aron and Nathan are wonderful little guys and Olivia is a treasure. And of course, I would like to thank Suzanne for her love and patience, not to mention all the druiven, kersen, and sinaasappelsap, as well as other agricultural products. (AST)

Most importantly, I would like to thank Marieke, Duko, and Jip. Marieke for her love and for bearing with me all the nights I was working on this book, and Duko and Jip for tearing me away from it and showing me there are more important things in life. Like Minecraft. (HB)
Andrew S. Tanenbaum
Herbert Bos
ABOUT THE AUTHORS
Andrew S. Tanenbaum has an S.B. degree from M.I.T. and a Ph.D. from the University of California at Berkeley. He is currently a Professor of Computer Science at the Vrije Universiteit in Amsterdam, The Netherlands. He was formerly Dean of the Advanced School for Computing and Imaging, an interuniversity graduate school doing research on advanced parallel, distributed, and imaging systems. He was also an Academy Professor of the Royal Netherlands Academy of Arts and Sciences, which has saved him from turning into a bureaucrat. He also won a prestigious European Research Council Advanced Grant.

In the past, he has done research on compilers, operating systems, networking, and distributed systems. His main research focus now is reliable and secure operating systems. These research projects have led to over 175 refereed papers in journals and conferences. Prof. Tanenbaum has also authored or co-authored five books, which have been translated into 20 languages, ranging from Basque to Thai. They are used at universities all over the world. In all, there are 163 versions (language + edition combinations) of his books.

Prof. Tanenbaum has also produced a considerable volume of software, notably MINIX, a small UNIX clone. It was the direct inspiration for Linux and the platform on which Linux was initially developed. The current version of MINIX, called MINIX 3, is now focused on being an extremely reliable and secure operating system. Prof. Tanenbaum will consider his work done when no user has any idea what an operating system crash is. MINIX 3 is an ongoing open-source project to which you are invited to contribute. Go to www.minix3.org to download a free copy of MINIX 3 and give it a try. Both x86 and ARM versions are available.

Prof. Tanenbaum’s Ph.D. students have gone on to greater glory after graduating. He is very proud of them. In this respect, he resembles a mother hen.

Prof. Tanenbaum is a Fellow of the ACM, a Fellow of the IEEE, and a member of the Royal Netherlands Academy of Arts and Sciences. He has also won numerous scientific prizes from ACM, IEEE, and USENIX. If you are unbearably curious about them, see his page on Wikipedia. He also has two honorary doctorates.

Herbert Bos obtained his Masters degree from Twente University and his Ph.D. from Cambridge University Computer Laboratory in the U.K. Since then, he has worked extensively on dependable and efficient I/O architectures for operating systems like Linux, but also research systems based on MINIX 3. He is currently a professor in Systems and Network Security in the Dept. of Computer Science at the Vrije Universiteit in Amsterdam, The Netherlands. His main research field is system security. With his students, he works on novel ways to detect and stop attacks, to analyze and reverse engineer malware, and to take down botnets (malicious infrastructures that may span millions of computers). In 2011, he obtained an ERC Starting Grant for his research on reverse engineering. Three of his students have won the Roger Needham Award for best European Ph.D. thesis in systems.
1
INTRODUCTION
A modern computer consists of one or more processors, some main memory, disks, printers, a keyboard, a mouse, a display, network interfaces, and various other input/output devices. All in all, a complex system. If every application programmer had to understand how all these things work in detail, no code would ever get written. Furthermore, managing all these components and using them optimally is an exceedingly challenging job. For this reason, computers are equipped with a layer of software called the operating system, whose job is to provide user programs with a better, simpler, cleaner model of the computer and to handle managing all the resources just mentioned. Operating systems are the subject of this book.
Most readers will have had some experience with an operating system such as Windows, Linux, FreeBSD, or OS X, but appearances can be deceiving. The program that users interact with, usually called the shell when it is text based and the GUI (Graphical User Interface, pronounced ‘‘gooey’’) when it uses icons, is actually not part of the operating system, although it uses the operating system to get its work done.
A simple overview of the main components under discussion here is given in Fig. 1-1. Here we see the hardware at the bottom. The hardware consists of chips, boards, disks, a keyboard, a monitor, and similar physical objects. On top of the hardware is the software. Most computers have two modes of operation: kernel mode and user mode. The operating system, the most fundamental piece of software, runs in kernel mode (also called supervisor mode). In this mode it has complete access to all the hardware and can execute any instruction the machine is capable of executing. The rest of the software runs in user mode, in which only a subset of the machine instructions is available. In particular, those instructions that affect control of the machine or do I/O (Input/Output) are forbidden to user-mode programs. We will come back to the difference between kernel mode and user mode repeatedly throughout this book. It plays a crucial role in how operating systems work.
Figure 1-1. Where the operating system fits in: the hardware at the bottom; above it, running in kernel mode, the operating system; above that, in user mode, the user interface program (shell or GUI) and applications such as a Web browser, e-mail reader, and music player.
The user interface program, shell or GUI, is the lowest level of user-mode software, and allows the user to start other programs, such as a Web browser, email reader, or music player. These programs, too, make heavy use of the operating system.
The placement of the operating system is shown in Fig. 1-1. It runs on the bare hardware and provides the base for all the other software.
An important distinction between the operating system and
normal (user-
mode) software is that if a user does not like a particular email
reader, he† is free to
get a different one or write his own if he so chooses; he is not
free to write his own
clock interrupt handler, which is part of the operating system
and is protected by
hardware against attempts by users to modify it.
This distinction, however, is sometimes blurred in embedded
systems (which
may not have kernel mode) or interpreted systems (such as Java-
based systems that
use interpretation, not hardware, to separate the components).
Also, in many systems there are programs that run in user mode
but help the
operating system or perform privileged functions. For example,
there is often a
program that allows users to change their passwords. It is not
part of the operating
system and does not run in kernel mode, but it clearly carries
out a sensitive func-
tion and has to be protected in a special way. In some systems,
this idea is carried
to an extreme, and pieces of what is traditionally considered to be the
operating system (such as the file system) run in user space. In such
systems, it is difficult to draw a clear boundary. Everything running in
kernel mode is clearly part of the operating system, but some programs
running outside it are arguably also part of it, or at least closely
associated with it.
† ‘‘He’’ should be read as ‘‘he or she’’ throughout the book.
Operating systems differ from user (i.e., application) programs
in ways other
than where they reside. In particular, they are huge, complex,
and long-lived. The
source code of the heart of an operating system like Linux or
Windows is on the
order of five million lines of code or more. To conceive of what this means,
think of printing out five million lines in book form, with 50 lines
per page and 1000
pages per volume (larger than this book). It would take 100
volumes to list an op-
erating system of this size—essentially an entire bookcase. Can
you imagine get-
ting a job maintaining an operating system and on the first day
having your boss
bring you to a bookcase with the code and say: ‘‘Go learn
that.’’ And this is only
for the part that runs in the kernel. When essential shared
libraries are included,
Windows is well over 70 million lines of code or 10 to 20
bookcases. And this
excludes basic application software (things like Windows
Explorer, Windows
Media Player, and so on).
It should be clear now why operating systems live a long time—
they are very
hard to write, and having written one, the owner is loath to
throw it out and start
again. Instead, such systems evolve over long periods of time.
Windows 95/98/Me
was basically one operating system and Windows
NT/2000/XP/Vista/Windows 7 is
a different one. They look similar to the users because
Microsoft made very sure
that the user interface of Windows 2000/XP/Vista/Windows 7
was quite similar to
that of the system it was replacing, mostly Windows 98.
Nevertheless, there were
very good reasons why Microsoft got rid of Windows 98. We
will come to these
when we study Windows in detail in Chap. 11.
Besides Windows, the other main example we will use
throughout this book is
UNIX and its variants and clones. It, too, has evolved over the
years, with versions
like System V, Solaris, and FreeBSD being derived from the
original system,
whereas Linux is a fresh code base, although very closely
modeled on UNIX and
highly compatible with it. We will use examples from UNIX
throughout this book
and look at Linux in detail in Chap. 10.
In this chapter we will briefly touch on a number of key aspects
of operating
systems, including what they are, their history, what kinds are
around, some of the
basic concepts, and their structure. We will come back to many
of these important
topics in later chapters in more detail.
1.1 WHAT IS AN OPERATING SYSTEM?
It is hard to pin down what an operating system is other than
saying it is the
software that runs in kernel mode—and even that is not always
true. Part of the
problem is that operating systems perform two essentially
unrelated functions:
providing application programmers (and application programs,
naturally) a clean
abstract set of resources instead of the messy hardware ones and
managing these
hardware resources. Depending on who is doing the talking, you
might hear mostly
about one function or the other. Let us now look at both.
1.1.1 The Operating System as an Extended Machine
The architecture (instruction set, memory organization, I/O, and
bus struc-
ture) of most computers at the machine-language level is
primitive and awkward to
program, especially for input/output. To make this point more
concrete, consider
modern SATA (Serial ATA) hard disks used on most computers.
A book (Ander-
son, 2007) describing an early version of the interface to the
disk—what a programmer would have to know to use the disk—ran over 450
pages. Since then, the
interface has been revised multiple times and is more
complicated than it was in
2007. Clearly, no sane programmer would want to deal with this
disk at the hard-
ware level. Instead, a piece of software, called a disk driver,
deals with the hard-
ware and provides an interface to read and write disk blocks,
without getting into
the details. Operating systems contain many drivers for
controlling I/O devices.
But even this level is much too low for most applications. For
this reason, all
operating systems provide yet another layer of abstraction for
using disks: files.
Using this abstraction, programs can create, write, and read
files, without having to
deal with the messy details of how the hardware actually works.
This abstraction is the key to managing all this complexity.
Good abstractions
turn a nearly impossible task into two manageable ones. The
first is defining and
implementing the abstractions. The second is using these
abstractions to solve the
problem at hand. One abstraction that almost every computer
user understands is
the file, as mentioned above. It is a useful piece of information,
such as a digital
photo, saved email message, song, or Web page. It is much
easier to deal with pho-
tos, emails, songs, and Web pages than with the details of
SATA (or other) disks.
The job of the operating system is to create good abstractions
and then implement
and manage the abstract objects thus created. In this book, we
will talk a lot about
abstractions. They are one of the keys to understanding
operating systems.
This point is so important that it is worth repeating in different
words. With all
due respect to the industrial engineers who so carefully
designed the Macintosh,
hardware is ugly. Real processors, memories, disks, and other
devices are very
complicated and present difficult, awkward, idiosyncratic, and
inconsistent interfaces to the people who have to write software to use them.
Sometimes this is due
to the need for backward compatibility with older hardware.
Other times it is an
attempt to save money. Often, however, the hardware designers
do not realize (or
care) how much trouble they are causing for the software. One
of the major tasks
of the operating system is to hide the hardware and present
programs (and their
programmers) with nice, clean, elegant, consistent abstractions to work with
instead. Operating systems turn the ugly into the beautiful, as
shown in Fig. 1-2.
Figure 1-2. Operating systems turn ugly hardware into beautiful abstractions.
(The figure shows application programs using the operating system’s beautiful
interface, while the operating system itself deals with the hardware’s ugly
interface.)
It should be noted that the operating system’s real customers are
the applica-
tion programs (via the application programmers, of course).
They are the ones
who deal directly with the operating system and its abstractions.
In contrast, end
users deal with the abstractions provided by the user interface,
either a com-
mand-line shell or a graphical interface. While the abstractions
at the user interface
may be similar to the ones provided by the operating system,
this is not always the
case. To make this point clearer, consider the normal Windows
desktop and the
line-oriented command prompt. Both are programs running on
the Windows oper-
ating system and use the abstractions Windows provides, but
they offer very different user interfaces. Similarly, a Linux user running Gnome or
KDE sees a very
different interface than a Linux user working directly on top of
the underlying X
Window System, but the underlying operating system
abstractions are the same in
both cases.
In this book, we will study the abstractions provided to
application programs in
great detail, but say rather little about user interfaces. That is a
large and important
subject, but one only peripherally related to operating systems.
1.1.2 The Operating System as a Resource Manager
The concept of an operating system as primarily providing
abstractions to ap-
plication programs is a top-down view. An alternative, bottom-
up, view holds that
the operating system is there to manage all the pieces of a
complex system. Mod-
ern computers consist of processors, memories, timers, disks,
mice, network inter-
faces, printers, and a wide variety of other devices. In the
bottom-up view, the job
of the operating system is to provide for an orderly and
controlled allocation of the
processors, memories, and I/O devices among the various
programs wanting them.
Modern operating systems allow multiple programs to be in
memory and run
at the same time. Imagine what would happen if three programs
running on some
computer all tried to print their output simultaneously on the
same printer. The first
few lines of printout might be from program 1, the next few
from program 2, then
some from program 3, and so forth. The result would be utter
chaos. The operating
system can bring order to the potential chaos by buffering all
the output destined
for the printer on the disk. When one program is finished, the
operating system can
then copy its output from the disk file where it has been stored
for the printer,
while at the same time the other program can continue
generating more output,
oblivious to the fact that the output is not really going to the
printer (yet).
When a computer (or network) has more than one user, the need
for managing
and protecting the memory, I/O devices, and other resources is
even greater, since the
users might otherwise interfere with one another. In addition,
users often need to
share not only hardware, but information (files, databases, etc.)
as well. In short,
this view of the operating system holds that its primary task is
to keep track of
which programs are using which resource, to grant resource
requests, to account
for usage, and to mediate conflicting requests from different
programs and users.
Resource management includes multiplexing (sharing) resources
in two dif-
ferent ways: in time and in space. When a resource is time
multiplexed, different
programs or users take turns using it. First one of them gets to
use the resource,
then another, and so on. For example, with only one CPU and
multiple programs
that want to run on it, the operating system first allocates the
CPU to one program,
then, after it has run long enough, another program gets to use
the CPU, then an-
other, and then eventually the first one again. Determining how
the resource is time
multiplexed—who goes next and for how long—is the task of
the operating sys-
tem. Another example of time multiplexing is sharing the
printer. When multiple
print jobs are queued up for printing on a single printer, a
decision has to be made
about which one is to be printed next.
The other kind of multiplexing is space multiplexing. Instead of
the customers
taking turns, each one gets part of the resource. For example,
main memory is nor-
mally divided up among several running programs, so each one
can be resident at
the same time (for example, in order to take turns using the
CPU). Assuming there
is enough memory to hold multiple programs, it is more
efficient to hold several
programs in memory at once rather than give one of them all of
it, especially if it
only needs a small fraction of the total. Of course, this raises
issues of fairness,
protection, and so on, and it is up to the operating system to
solve them. Another
resource that is space multiplexed is the disk. In many systems
a single disk can
hold files from many users at the same time. Allocating disk
space and keeping
track of who is using which disk blocks is a typical operating
system task.
1.2 HISTORY OF OPERATING SYSTEMS
Operating systems have been evolving through the years. In the
following sec-
tions we will briefly look at a few of the highlights. Since
operating systems have
historically been closely tied to the architecture of the
computers on which they
run, we will look at successive generations of computers to see
what their operat-
ing systems were like. This mapping of operating system
generations to computer
generations is crude, but it does provide some structure where
there would other-
wise be none.
The progression given below is largely chronological, but it has
been a bumpy
ride. Each development did not wait until the previous one
nicely finished before
getting started. There was a lot of overlap, not to mention many
false starts and
dead ends. Take this as a guide, not as the last word.
The first true digital computer was designed by the English
mathematician
Charles Babbage (1792–1871). Although Babbage spent most of
his life and fortune trying to build his ‘‘analytical engine,’’ he never got it
working properly be-
cause it was purely mechanical, and the technology of his day
could not produce
the required wheels, gears, and cogs to the high precision that
he needed. Needless
to say, the analytical engine did not have an operating system.
As an interesting historical aside, Babbage realized that he
would need soft-
ware for his analytical engine, so he hired a young woman
named Ada Lovelace,
who was the daughter of the famed British poet Lord Byron, as
the world’s first
programmer. The programming language Ada® is named after
her.
1.2.1 The First Generation (1945–55): Vacuum Tubes
After Babbage’s unsuccessful efforts, little progress was made
in constructing
digital computers until the World War II period, which
stimulated an explosion of
activity. Professor John Atanasoff and his graduate student
Clifford Berry built
what is now regarded as the first functioning digital computer
at Iowa State Univer-
sity. It used 300 vacuum tubes. At roughly the same time,
Konrad Zuse in Berlin
built the Z3 computer out of electromechanical relays. In 1944,
the Colossus was
built and programmed by a group of scientists (including Alan
Turing) at Bletchley
Park, England, the Mark I was built by Howard Aiken at
Harvard, and the ENIAC
was built by John Mauchly and his graduate student J.
Presper Eckert at the
University of Pennsylvania. Some were binary, some used
vacuum tubes, some
were programmable, but all were very primitive and took
seconds to perform even
the simplest calculation.
In these early days, a single group of people (usually engineers)
designed,
built, programmed, operated, and maintained each machine. All
programming was
done in absolute machine language, or even worse yet, by
wiring up electrical cir-
cuits by connecting thousands of cables to plugboards to control
the machine’s
basic functions. Programming languages were unknown (even
assembly language
was unknown). Operating systems were unheard of. The usual
mode of operation
was for the programmer to sign up for a block of time using the
signup sheet on the
wall, then come down to the machine room, insert his or her
plugboard into the
computer, and spend the next few hours hoping that none of the
20,000 or so vac-
uum tubes would burn out during the run. Virtually all the
problems were simple
straightforward mathematical and numerical calculations, such
as grinding out
tables of sines, cosines, and logarithms, or computing artillery
trajectories.
By the early 1950s, the routine had improved somewhat with the
introduction
of punched cards. It was now possible to write programs on
cards and read them in
instead of using plugboards; otherwise, the procedure was the
same.
1.2.2 The Second Generation (1955–65): Transistors and Batch
Systems
The introduction of the transistor in the mid-1950s changed the
picture radi-
cally. Computers became reliable enough that they could be
manufactured and sold
to paying customers with the expectation that they would
continue to function long
enough to get some useful work done. For the first time, there
was a clear separa-
tion between designers, builders, operators, programmers, and
maintenance per-
sonnel.
These machines, now called mainframes, were locked away in
large, specially
air-conditioned computer rooms, with staffs of professional
operators to run them.
Only large corporations or major government agencies or
universities could afford
the multimillion-dollar price tag. To run a job (i.e., a program
or set of programs),
a programmer would first write the program on paper (in
FORTRAN or assem-
bler), then punch it on cards. He would then bring the card deck
down to the input
room and hand it to one of the operators and go drink coffee
until the output was
ready.
When the computer finished whatever job it was currently
running, an operator
would go over to the printer and tear off the output and carry it
over to the output
room, so that the programmer could collect it later. Then he
would take one of the
card decks that had been brought from the input room and read
it in. If the FOR-
TRAN compiler was needed, the operator would have to get it
from a file cabinet
and read it in. Much computer time was wasted while operators
were walking
around the machine room.
Given the high cost of the equipment, it is not surprising that
people quickly
looked for ways to reduce the wasted time. The solution
generally adopted was the
batch system. The idea behind it was to collect a tray full of
jobs in the input
room and then read them onto a magnetic tape using a small
(relatively) inexpen-
sive computer, such as the IBM 1401, which was quite good at
reading cards,
copying tapes, and printing output, but not at all good at
numerical calculations.
Other, much more expensive machines, such as the IBM 7094,
were used for the
real computing. This situation is shown in Fig. 1-3.
After about an hour of collecting a batch of jobs, the cards were
read onto a
magnetic tape, which was carried into the machine room, where
it was mounted on
a tape drive. The operator then loaded a special program (the
ancestor of today’s
operating system), which read the first job from tape and ran it.
The output was
written onto a second tape, instead of being printed. After each
job finished, the
operating system automatically read the next job from the tape
and began running
Figure 1-3. An early batch system. (a) Programmers bring cards to 1401.
(b) 1401 reads batch of jobs onto tape. (c) Operator carries input tape to
7094. (d) 7094 does computing. (e) Operator carries output tape to 1401.
(f) 1401 prints output.
it. When the whole batch was done, the operator removed the
input and output
tapes, replaced the input tape with the next batch, and brought
the output tape to a
1401 for printing off line (i.e., not connected to the main
computer).
The structure of a typical input job is shown in Fig. 1-4. It
started out with a
$JOB card, specifying the maximum run time in minutes, the
account number to be
charged, and the programmer’s name. Then came a $FORTRAN
card, telling the
operating system to load the FORTRAN compiler from the
system tape. It was directly followed by the program to be compiled, and then a
$LOAD card, directing
the operating system to load the object program just compiled.
(Compiled pro-
grams were often written on scratch tapes and had to be loaded
explicitly.) Next
came the $RUN card, telling the operating system to run the
program with the data
following it. Finally, the $END card marked the end of the job.
These primitive
control cards were the forerunners of modern shells and
command-line inter-
preters.
Large second-generation computers were used mostly for
scientific and engin-
eering calculations, such as solving the partial differential
equations that often oc-
cur in physics and engineering. They were largely programmed
in FORTRAN and
assembly language. Typical operating systems were FMS (the
Fortran Monitor
System) and IBSYS, IBM’s operating system for the 7094.
1.2.3 The Third Generation (1965–1980): ICs and
Multiprogramming
By the early 1960s, most computer manufacturers had two
distinct, incompati-
ble, product lines. On the one hand, there were the word-
oriented, large-scale sci-
entific computers, such as the 7094, which were used for
industrial-strength nu-
merical calculations in science and engineering. On the other
hand, there were the
$JOB, 10,7710802, MARVIN TANENBAUM
$FORTRAN
(FORTRAN program)
$LOAD
$RUN
(Data for program)
$END
Figure 1-4. Structure of a typical FMS job.
character-oriented, commercial computers, such as the 1401,
which were widely
used for tape sorting and printing by banks and insurance
companies.
Developing and maintaining two completely different product
lines was an ex-
pensive proposition for the manufacturers. In addition, many
new computer cus-
tomers initially needed a small machine but later outgrew it and
wanted a bigger
machine that would run all their old programs, but faster.
IBM attempted to solve both of these problems at a single
stroke by introduc-
ing the System/360. The 360 was a series of software-
compatible machines rang-
ing from 1401-sized models to much larger ones, more powerful
than the mighty
7094. The machines differed only in price and performance
(maximum memory,
processor speed, number of I/O devices permitted, and so forth).
Since they all had
the same architecture and instruction set, programs written for
one machine could
run on all the others—at least in theory. (But as Yogi Berra
reputedly said: ‘‘In
theory, theory and practice are the same; in practice, they are
not.’’) Since the 360
was designed to handle both scientific (i.e., numerical) and
commercial computing,
a single family of machines could satisfy the needs of all
customers. In subsequent
years, IBM came out with backward compatible successors to
the 360 line, using
more modern technology, known as the 370, 4300, 3080, and
3090. The zSeries is
the most recent descendant of this line, although it has diverged
considerably from
the original.
The IBM 360 was the first major computer line to use (small-
scale) ICs (Inte-
grated Circuits), thus providing a major price/performance
advantage over the
second-generation machines, which were built up from
individual transistors. It
was an immediate success, and the idea of a family of
compatible computers was
soon adopted by all the other major manufacturers. The
descendants of these ma-
chines are still in use at computer centers today. Now adays
they are often used for
managing huge databases (e.g., for airline reservation systems)
or as servers for
World Wide Web sites that must process thousands of requests
per second.
The greatest strength of the ‘‘single-family’’ idea was
simultaneously its great-
est weakness. The original intention was that all software,
including the operating
system, OS/360, had to work on all models. It had to run on
small systems, which
often just replaced 1401s for copying cards to tape, and on very
large systems,
which often replaced 7094s for doing weather forecasting and
other heavy comput-
ing. It had to be good on systems with few peripherals and on
systems with many
peripherals. It had to work in commercial environments and in
scientific environ-
ments. Above all, it had to be efficient for all of these different
uses.
There was no way that IBM (or anybody else for that matter)
could write a
piece of software to meet all those conflicting requirements.
The result was an
enormous and extraordinarily complex operating system,
probably two to three
orders of magnitude larger than FMS. It consisted of millions of
lines of assembly
language written by thousands of programmers, and contained
thousands upon
thousands of bugs, which necessitated a continuous stream of
new releases in an
attempt to correct them. Each new release fixed some bugs and
introduced new
ones, so the number of bugs probably remained constant over
time.
One of the designers of OS/360, Fred Brooks, subsequently
wrote a witty and
incisive book (Brooks, 1995) describing his experiences with
OS/360. While it
would be impossible to summarize the book here, suffice it to
say that the cover
shows a herd of prehistoric beasts stuck in a tar pit. The cover
of Silberschatz et al.
(2012) makes a similar point about operating systems being
dinosaurs.
Despite its enormous size and problems, OS/360 and the similar
third-genera-
tion operating systems produced by other computer
manufacturers actually satis-
fied most of their customers reasonably well. They also
popularized several key
techniques absent in second-generation operating systems.
Probably the most im-
portant of these was multiprogramming. On the 7094, when the
current job
paused to wait for a tape or other I/O operation to complete, the
CPU simply sat
idle until the I/O finished. With heavily CPU-bound scientific
calculations, I/O is
infrequent, so this wasted time is not significant. With
commercial data processing,
the I/O wait time can often be 80 or 90% of the total time, so
something had to be
done to avoid having the (expensive) CPU be idle so much.
The solution that evolved was to partition memory into several
pieces, with a
different job in each partition, as shown in Fig. 1-5. While one
job was waiting for
I/O to complete, another job could be using the CPU. If enough
jobs could be held
in main memory at once, the CPU could be kept busy nearly
100% of the time.
Having multiple jobs safely in memory at once requires special
hardware to protect
each job against snooping and mischief by the other ones, but
the 360 and other
third-generation systems were equipped with this hardware.
Figure 1-5. A multiprogramming system with three jobs in memory. (The figure
shows memory divided into partitions holding the operating system and jobs 1,
2, and 3.)
Another major feature present in third-generation operating
systems was the
ability to read jobs from cards onto the disk as soon as they
were brought to the
computer room. Then, whenever a running job finished, the
operating system could
load a new job from the disk into the now-empty partition and
run it. This technique is called spooling (from Simultaneous Peripheral Operation
On Line) and
was also used for output. With spooling, the 1401s were no
longer needed, and
much carrying of tapes disappeared.
Although third-generation operating systems were well suited
for big scientific
calculations and massive commercial data-processing runs, they
were still basically
batch systems. Many programmers pined for the first-generation
days when they
had the machine all to themselves for a few hours, so they could
debug their pro-
grams quickly. With third-generation systems, the time between
submitting a job
and getting back the output was often several hours, so a single
misplaced comma
could cause a compilation to fail, and the programmer to waste
half a day. Pro-
grammers did not like that very much.
This desire for quick response time paved the way for
timesharing, a variant
of multiprogramming, in which each user has an online
terminal. In a timesharing
system, if 20 users are logged in and 17 of them are thinking or
talking or drinking
coffee, the CPU can be allocated in turn to the three jobs that
want service. Since
people debugging programs usually issue short commands (e.g.,
compile a five-page procedure†) rather than long ones (e.g., sort a million-
record file), the com-
puter can provide fast, interactive service to a number of users
and perhaps also
work on big batch jobs in the background when the CPU is
otherwise idle. The
first general-purpose timesharing system, CTSS (Compatible
Time Sharing Sys-
tem), was developed at M.I.T. on a specially modified 7094
(Corbató et al., 1962).
However, timesharing did not really become popular until the
necessary protection
hardware became widespread during the third generation.
After the success of the CTSS system, M.I.T., Bell Labs, and
General Electric
(at that time a major computer manufacturer) decided to embark
on the development of a ‘‘computer utility,’’ that is, a machine that would
support some hundreds
of simultaneous timesharing users. Their model was the electricity
system—when you need electric power, you just stick a plug in the wall, and
within reason, as much power as you need will be there. The designers of this
system, known as MULTICS (MULTiplexed Information and Computing Service),
envisioned one huge machine providing computing power for everyone in
the Boston area. The idea that machines 10,000 times faster than their GE-645
mainframe would be sold (for well under $1000) by the millions only 40 years
later was pure science fiction. Sort of like the idea of supersonic
trans-Atlantic undersea trains now.
†We will use the terms ‘‘procedure,’’ ‘‘subroutine,’’ and ‘‘function’’ interchangeably in this book.
MULTICS was a mixed success. It was designed to support
hundreds of users
on a machine only slightly more powerful than an Intel 386-
based PC, although it
had much more I/O capacity. This is not quite as crazy as it
sounds, since in those
days people knew how to write small, efficient programs, a skill
that has subse-
quently been completely lost. There were many reasons that
MULTICS did not
take over the world, not the least of which is that it was written
in the PL/I pro-
gramming language, and the PL/I compiler was years late and
barely worked at all
when it finally arrived. In addition, MULTICS was enormously
ambitious for its
time, much like Charles Babbage’s analytical engine in the
nineteenth century.
To make a long story short, MULTICS introduced many seminal
ideas into the
computer literature, but turning it into a serious product and a
major commercial
success was a lot harder than anyone had expected. Bell Labs
dropped out of the
project, and General Electric quit the computer business altogether. However,
M.I.T. persisted and eventually got MULTICS working. It was
ultimately sold as a
commercial product by the company (Honeywell) that bought
GE’s computer busi-
ness and was installed by about 80 major companies and
universities worldwide.
While their numbers were small, MULTICS users were fiercely
loyal. General
Motors, Ford, and the U.S. National Security Agency, for
example, shut down their
MULTICS systems only in the late 1990s, 30 years after
MULTICS was released,
after years of trying to get Honeywell to update the hardware.
By the end of the 20th century, the concept of a computer utility
had fizzled
out, but it may well come back in the form of cloud computing,
in which rel-
atively small computers (including smartphones, tablets, and the
like) are con-
nected to servers in vast and distant data centers where all the
computing is done,
with the local computer just handling the user interface. The
motivation here is
that most people do not want to administrate an increasingly
complex and finicky
computer system and would prefer to have that work done by a
team of profession-
als, for example, people working for the company running the
data center. E-com-
merce is already evolving in this direction, with various
companies running email
on multiprocessor servers to which simple client machines
connect, very much in
the spirit of the MULTICS design.
Despite its lack of commercial success, MULTICS had a huge
influence on
subsequent operating systems (especially UNIX and its
derivatives, FreeBSD,
Linux, iOS, and Android). It is described in several papers and
a book (Corbató et
al., 1972; Corbató and Vyssotsky, 1965; Daley and Dennis,
1968; Organick, 1972;
14 INTRODUCTION CHAP. 1
and Saltzer, 1974). It also has an active Website, located at
www.multicians.org,
with much information about the system, its designers, and its
users.
Another major development during the third generation was the
phenomenal
growth of minicomputers, starting with the DEC PDP-1 in 1961.
The PDP-1 had
only 4K of 18-bit words, but at $120,000 per machine (less than
5% of the price of
a 7094), it sold like hotcakes. For certain kinds of nonnumerical
work, it was al-
most as fast as the 7094 and gave birth to a whole new
industry. It was quickly fol-
lowed by a series of other PDPs (unlike IBM’s family, all
incompatible) culminat-
ing in the PDP-11.
One of the computer scientists at Bell Labs who had worked on
the MULTICS
project, Ken Thompson, subsequently found a small PDP-7
minicomputer that no
one was using and set out to write a stripped-down, one-user
version of MULTICS.
This work later developed into the UNIX operating system,
which became popular
in the academic world, with government agencies, and with
many companies.
The history of UNIX has been told elsewhere (e.g., Salus,
1994). Part of that
story will be given in Chap. 10. For now, suffice it to say that
because the source
code was widely available, various organizations developed
their own (incompati-
ble) versions, which led to chaos. Two major versions
developed, System V, from
AT&T, and BSD (Berkeley Software Distribution) from the
University of Cali-
fornia at Berkeley. These had minor variants as well. To make it
possible to write
programs that could run on any UNIX system, IEEE developed a
standard for
UNIX, called POSIX, that most versions of UNIX now support.
POSIX defines a
minimal system-call interface that conformant UNIX systems
must support. In
fact, some other operating systems now also support the POSIX
interface.
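The POSIX interface is easiest to see from code. As a sketch (using Python's os module, whose low-level functions are thin wrappers over the POSIX calls of the same names), the following opens, writes, reads, and closes a file in exactly the open/write/read/close pattern a conformant UNIX exposes to C programs; the file name is arbitrary.

```python
import os
import tempfile

# Talk to a scratch file through raw POSIX-style calls,
# bypassing Python's buffered file objects.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)  # like POSIX open(2)
os.write(fd, b"hello, POSIX\n")                      # like write(2)
os.close(fd)                                         # like close(2)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)                              # like read(2)
os.close(fd)
print(data)   # b'hello, POSIX\n'
```

The same four calls, with the same semantics, are what POSIX guarantees in C on any conformant UNIX system.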
As an aside, it is worth mentioning that in 1987, the author
released a small
clone of UNIX, called MINIX, for educational purposes.
Functionally, MINIX is
very similar to UNIX, including POSIX support. Since that
time, the original ver-
sion has evolved into MINIX 3, which is highly modular and
focused on very high
reliability. It has the ability to detect and replace faulty or even
crashed modules
(such as I/O device drivers) on the fly without a reboot and
without disturbing run-
ning programs. Its focus is on providing very high dependability
and availability.
A book describing its internal operation and listing the source
code in an appendix
is also available (Tanenbaum and Woodhull, 2006). The MINIX
3 system is avail-
able for free (including all the source code) over the Internet at
www.minix3.org.
The desire for a free production (as opposed to educational)
version of MINIX
led a Finnish student, Linus Torvalds, to write Linux. This
system was directly
inspired by and developed on MINIX and originally supported
various MINIX fea-
tures (e.g., the MINIX file system). It has since been extended
in many ways by
many people but still retains some underlying structure common
to MINIX and to
UNIX. Readers interested in a detailed history of Linux and the
open source
movement might want to read Glyn Moody’s (2001) book. Most
of what will be
said about UNIX in this book thus applies to System V, MINIX,
Linux, and other
versions and clones of UNIX as well.
SEC. 1.2 HISTORY OF OPERATING SYSTEMS 15
1.2.4 The Fourth Generation (1980–Present): Personal
Computers
With the development of LSI (Large Scale Integration)
circuits—chips con-
taining thousands of transistors on a square centimeter of
silicon—the age of the
personal computer dawned. In terms of architecture, personal
computers (initially
called microcomputers) were not all that different from
minicomputers of the
PDP-11 class, but in terms of price they certainly were
different. Where the
minicomputer made it possible for a department in a company or
university to have
its own computer, the microprocessor chip made it possible for
a single individual
to have his or her own personal computer.
In 1974, when Intel came out with the 8080, the first general-
purpose 8-bit
CPU, it wanted an operating system for the 8080, in part to be
able to test it. Intel
asked one of its consultants, Gary Kildall, to write one. Kildall
and a friend first
built a controller for the newly released Shugart Associates 8-
inch floppy disk and
hooked the floppy disk up to the 8080, thus producing the first
microcomputer with
a disk. Kildall then wrote a disk-based operating system called
CP/M (Control
Program for Microcomputers) for it. Since Intel did not think
that disk-based
microcomputers had much of a future, when Kildall asked for
the rights to CP/M,
Intel granted his request. Kildall then formed a company,
Digital Research, to fur-
ther develop and sell CP/M.
In 1977, Digital Research rewrote CP/M to make it suitable for
running on the
many microcomputers using the 8080, Zilog Z80, and other CPU
chips. Many ap-
plication programs were written to run on CP/M, allowing it to
completely domi-
nate the world of microcomputing for about 5 years.
In the early 1980s, IBM designed the IBM PC and looked
around for software
to run on it. People from IBM contacted Bill Gates to license
his BASIC inter-
preter. They also asked him if he knew of an operating system
to run on the PC.
Gates suggested that IBM contact Digital Research, then the
world’s dominant op-
erating systems company. Making what was surely the worst
business decision in
recorded history, Kildall refused to meet with IBM, sending a
subordinate instead.
To make matters even worse, his lawyer even refused to sign
IBM’s nondisclosure
agreement covering the not-yet-announced PC. Consequently,
IBM went back to
Gates asking if he could provide them with an operating system.
When IBM came back, Gates realized that a local computer
manufacturer,
Seattle Computer Products, had a suitable operating system,
DOS (Disk Operat-
ing System). He approached them and asked to buy it (allegedly
for $75,000),
which they readily accepted. Gates then offered IBM a
DOS/BASIC package,
which IBM accepted. IBM wanted certain modifications, so
Gates hired the per-
son who wrote DOS, Tim Paterson, as an employee of Gates’
fledgling company,
Microsoft, to make them. The revised system was renamed MS-
DOS (MicroSoft
Disk Operating System) and quickly came to dominate the IBM
PC market. A
key factor here was Gates’ (in retrospect, extremely wise)
decision to sell MS-DOS
to computer companies for bundling with their hardware,
compared to Kildall’s
attempt to sell CP/M to end users one at a time (at least
initially). After all this
transpired, Kildall died suddenly and unexpectedly from causes
that have not been
fully disclosed.
By the time the successor to the IBM PC, the IBM PC/AT, came
out in 1983
with the Intel 80286 CPU, MS-DOS was firmly entrenched and
CP/M was on its
last legs. MS-DOS was later widely used on the 80386 and
80486. Although the
initial version of MS-DOS was fairly primitive, subsequent
versions included more
advanced features, including many taken from UNIX.
(Microsoft was well aware
of UNIX, even selling a microcomputer version of it called
XENIX during the
company’s early years.)
CP/M, MS-DOS, and other operating systems for early
microcomputers were
all based on users typing in commands from the keyboard. That
eventually chang-
ed due to research done by Doug Engelbart at Stanford Research
Institute in the
1960s. Engelbart invented the Graphical User Interface,
complete with windows,
icons, menus, and mouse. These ideas were adopted by
researchers at Xerox PARC
and incorporated into machines they built.
One day, Steve Jobs, who co-invented the Apple computer in his
garage, vis-
ited PARC, saw a GUI, and instantly realized its potential
value, something Xerox
management famously did not. This strategic blunder of
gargantuan proportions
led to a book entitled Fumbling the Future (Smith and
Alexander, 1988). Jobs then
embarked on building an Apple with a GUI. This project led to
the Lisa, which
was too expensive and failed commercially. Jobs’ second
attempt, the Apple Mac-
intosh, was a huge success, not only because it was much
cheaper than the Lisa,
but also because it was user friendly, meaning that it was
intended for users who
not only knew nothing about computers but furthermore had
absolutely no inten-
tion whatsoever of learning. In the creative world of graphic
design, professional
digital photography, and professional digital video production,
Macintoshes are
very widely used and their users are very enthusiastic about
them. In 1999, Apple
adopted a kernel derived from Carnegie Mellon University’s
Mach microkernel
which was originally developed to replace the kernel of BSD
UNIX. Thus, Mac
OS X is a UNIX-based operating system, albeit with a very
distinctive interface.
When Microsoft decided to build a successor to MS-DOS, it was
strongly
influenced by the success of the Macintosh. It produced a GUI-
based system call-
ed Windows, which originally ran on top of MS-DOS (i.e., it
was more like a shell
than a true operating system). For about 10 years, from 1985 to
1995, Windows
was just a graphical environment on top of MS-DOS. However,
starting in 1995 a
freestanding version, Windows 95, was released that
incorporated many operating
system features into it, using the underlying MS-DOS system
only for booting and
running old MS-DOS programs. In 1998, a slightly modified
version of this sys-
tem, called Windows 98, was released. Nevertheless, both
Windows 95 and Win-
dows 98 still contained a large amount of 16-bit Intel assembly
language.
Another Microsoft operating system was Windows NT (where the
NT stands for
New Technology), which was compatible with Windows 95 at a
certain level but was a
complete rewrite from scratch internally. It was a full 32-bit
system. The lead de-
signer for Windows NT was David Cutler, who was also one of
the designers of the
VAX VMS operating system, so some ideas from VMS are
present in NT. In fact,
so many ideas from VMS were present in it that the owner of
VMS, DEC, sued
Microsoft. The case was settled out of court for an amount of
money requiring
many digits to express. Microsoft expected that the first version
of NT would kill
off MS-DOS and all other versions of Windows since it was a
vastly superior sys-
tem, but it fizzled. Only with Windows NT 4.0 did it finally
catch on in a big way,
especially on corporate networks. Version 5 of Windows NT
was renamed Win-
dows 2000 in early 1999. It was intended to be the successor to
both Windows 98
and Windows NT 4.0.
That did not quite work out either, so Microsoft came out with
yet another ver-
sion of Windows 98 called Windows Me (Millennium Edition).
In 2001, a
slightly upgraded version of Windows 2000, called Windows XP,
was released.
That version had a much longer run (6 years), basically
replacing all previous ver-
sions of Windows.
Still the spawning of versions continued unabated. After
Windows 2000,
Microsoft broke up the Windows family into a client and a
server line. The client
line was based on XP and its successors, while the server line
included Windows
Server 2003 and Windows 2008. A third line, for the embedded
world, appeared a
little later. All of these versions of Windows forked off their
variations in the form
of service packs. It was enough to drive some administrators
(and writers of oper-
ating systems textbooks) balmy.
Then in January 2007, Microsoft finally released the successor
to Windows
XP, called Vista. It came with a new graphical interface,
improved security, and
many new or upgraded user programs. Microsoft hoped it would
replace Windows
XP completely, but it never did. Instead, it received much
criticism and a bad press,
mostly due to the high system requirements, restrictive
licensing terms, and sup-
port for Digital Rights Management, techniques that made it
harder for users to
copy protected material.
With the arrival of Windows 7, a new and much less resource
hungry version
of the operating system, many people decided to skip Vista
altogether. Windows 7
did not introduce too many new features, but it was relatively
small and quite sta-
ble. In less than three weeks, Windows 7 had obtained more
market share than
Vista in seven months. In 2012, Microsoft launched its
successor, Windows 8, an
operating system with a completely new look and feel, geared
for touch screens.
The company hopes that the new design will become the
dominant operating sys-
tem on a much wider variety of devices: desktops, laptops,
notebooks, tablets,
phones, and home theater PCs. So far, however, the market
penetration is slow
compared to Windows 7.
The other major contender in the personal computer world is
UNIX (and its
various derivatives). UNIX is strongest on network and
enterprise servers but is
also often present on desktop computers, notebooks, tablets, and
smartphones. On
x86-based computers, Linux is becoming a popular alternative
to Windows for stu-
dents and increasingly many corporate users.
As an aside, throughout this book we will use the term x86 to
refer to all mod-
ern processors based on the family of instruction-set
architectures that started with
the 8086 in the 1970s. There are many such processors,
manufactured by com-
panies like AMD and Intel, and under the hood they often differ
considerably:
processors may be 32 bits or 64 bits with few or many cores and
pipelines that may
be deep or shallow, and so on. Nevertheless, to the programmer,
they all look quite
similar and they can all still run 8086 code that was written 35
years ago. Where
the difference is important, we will refer to explicit models
instead—and use
x86-32 and x86-64 to indicate 32-bit and 64-bit variants.
FreeBSD is also a popular UNIX derivative, originating from
the BSD project
at Berkeley. All modern Macintosh computers run a modified
version of FreeBSD
(OS X). UNIX is also standard on workstations powered by
high-performance
RISC chips. Its derivatives are widely used on mobile devices,
such as those run-
ning iOS 7 or Android.
Many UNIX users, especially experienced programmers, prefer
a command-
based interface to a GUI, so nearly all UNIX systems support a
windowing system
called the X Window System (also known as X11) produced at
M.I.T. This sys-
tem handles the basic window management, allowing users to
create, delete, move,
and resize windows using a mouse. Often a complete GUI, such
as Gnome or
KDE, is available to run on top of X11, giving UNIX a look and
feel something
like the Macintosh or Microsoft Windows, for those UNIX users
who want such a
thing.
An interesting development that began taking place during the
mid-1980s is
the growth of networks of personal computers running network
operating sys-
tems and distributed operating systems (Tanenbaum and Van
Steen, 2007). In a
network operating system, the users are aware of the existence
of multiple com-
puters and can log in to remote machines and copy files from
one machine to an-
other. Each machine runs its own local operating system and has
its own local user
(or users).
Network operating systems are not fundamentally different from
single-proc-
essor operating systems. They obviously need a network
interface controller and
some low-level software to drive it, as well as programs to
achieve remote login
and remote file access, but these additions do not change the
essential structure of
the operating system.
A distributed operating system, in contrast, is one that appears
to its users as a
traditional uniprocessor system, even though it is actually
composed of multiple
processors. The users should not be aware of where their
programs are being run or
where their files are located; that should all be handled
automatically and ef-
ficiently by the operating system.
True distributed operating systems require more than just
adding a little code
to a uniprocessor operating system, because distributed and
centralized systems
differ in certain critical ways. Distributed systems, for example,
often allow appli-
cations to run on several processors at the same time, thus
requiring more complex
processor scheduling algorithms in order to optimize the amount
of parallelism.
Communication delays within the network often mean that these
(and other)
algorithms must run with incomplete, outdated, or even
incorrect information. This
situation differs radically from that in a single-processor system
in which the oper-
ating system has complete information about the system state.
1.2.5 The Fifth Generation (1990–Present): Mobile Computers
Ever since detective Dick Tracy started talking to his ‘‘two-way
radio wrist
watch’’ in the 1940s comic strip, people have craved a
communication device they
could carry around wherever they went. The first real mobile
phone appeared in
1946 and weighed some 40 kilos. You could take it wherever
you went as long as
you had a car in which to carry it.
The first true handheld phone appeared in the 1970s and, at
roughly one kilo-
gram, was positively featherweight. It was affectionately known
as ‘‘the brick.’’
Pretty soon everybody wanted one. Today, mobile phone
penetration is close to
90% of the global population. We can make calls not just with
our portable phones
and wrist watches, but soon with eyeglasses and other wearable
items. Moreover,
the phone part is no longer that interesting. We receive email,
surf the Web, text
our friends, play games, navigate around heavy traffic—and do
not even think
twice about it.
While the idea of combining telephony and computing in a
phone-like device
has been around since the 1970s also, the first real smartphone
did not appear until
the mid-1990s when Nokia released the N9000, which literally
combined two,
mostly separate devices: a phone and a PDA (Personal Digital
Assistant). In 1997,
Ericsson coined the term smartphone for its GS88 ‘‘Penelope.’’
Now that smartphones have become ubiquitous, the competition
between the
various operating systems is fierce and the outcome is even less
clear than in the
PC world. At the time of writing, Google’s Android is the
dominant operating sys-
tem with Apple’s iOS a clear second, but this was not always
the case and all may
be different again in just a few years. If anything is clear in the
world of smart-
phones, it is that it is not easy to stay king of the mountain for
long.
After all, most smartphones in the first decade after their
inception were run-
ning Symbian OS. It was the operating system of choice for
popular brands like
Samsung, Sony Ericsson, Motorola, and especially Nokia.
However, other operat-
ing systems like RIM’s Blackberry OS (introduced for
smartphones in 2002) and
Apple’s iOS (released for the first iPhone in 2007) started
eating into Symbian’s
market share. Many expected that RIM would dominate the
business market, while
iOS would be the king of the consumer devices. Symbian’s
market share plum-
meted. In 2011, Nokia ditched Symbian and announced it would
focus on Win-
dows Phone as its primary platform. For some time, Apple and
RIM were the toast
of the town (although not nearly as dominant as Symbian had
been), but it did not
take very long for Android, a Linux-based operating system
released by Google in
2008, to overtake all its rivals.
For phone manufacturers, Android had the advantage that it was
open source
and available under a permissive license. As a result, they could
tinker with it and
adapt it to their own hardware with ease. Also, it has a huge
community of devel-
opers writing apps, mostly in the familiar Java programming
language. Even so,
the past years have shown that the dominance may not last, and
Android’s competi-
tors are eager to claw back some of its market share. We will
look at Android in
detail in Sec. 10.8.
1.3 COMPUTER HARDWARE REVIEW
An operating system is intimately tied to the hardware of the
computer it runs
on. It extends the computer’s instruction set and manages its
resources. To work,
it must know a great deal about the hardware, at least about how
the hardware ap-
pears to the programmer. For this reason, let us briefly review
computer hardware
as found in modern personal computers. After that, we can start
getting into the de-
tails of what operating systems do and how they work.
Conceptually, a simple personal computer can be abstracted to a
model resem-
bling that of Fig. 1-6. The CPU, memory, and I/O devices are
all connected by a
system bus and communicate with one another over it. Modern
personal computers
have a more complicated structure, involving multiple buses,
which we will look at
later. For the time being, this model will be sufficient. In the
following sections,
we will briefly review these components and examine some of
the hardware issues
that are of concern to operating system designers. Needless to
say, this will be a
very compact summary. Many books have been written on the
subject of computer
hardware and computer organization. Two well-known ones are
by Tanenbaum
and Austin (2012) and Patterson and Hennessy (2013).
[Figure 1-6 (diagram): a CPU with MMU, memory, and video, keyboard, USB, and hard disk controllers, all connected by a bus; a monitor, keyboard, USB printer, and hard disk drive attach to their controllers.]
Figure 1-6. Some of the components of a simple personal computer.
SEC. 1.3 COMPUTER HARDWARE REVIEW 21
1.3.1 Processors
The ‘‘brain’’ of the computer is the CPU. It fetches instructions
from memory
and executes them. The basic cycle of every CPU is to fetch the
first instruction
from memory, decode it to determine its type and operands,
execute it, and then
fetch, decode, and execute subsequent instructions. The cycle is
repeated until the
program finishes. In this way, programs are carried out.
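That cycle can be sketched in a few lines of code. The three-instruction machine below (LOAD, ADD, HALT, with a single accumulator) is invented purely for illustration and matches no real instruction set, but the loop has the same fetch, decode, execute structure just described.

```python
# A toy fetch-decode-execute loop. The three-instruction ISA
# (LOAD, ADD, HALT) is invented purely for illustration.
def run(program, memory):
    pc = 0            # program counter: index of the next instruction
    acc = 0           # a single accumulator register
    while True:
        op, arg = program[pc]      # fetch
        pc += 1                    # point at the successor
        if op == "LOAD":           # decode, then execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "HALT":
            return acc
        else:
            raise ValueError(f"illegal instruction {op!r}")

memory = [5, 7]
program = [("LOAD", 0), ("ADD", 1), ("HALT", None)]
print(run(program, memory))   # 12
```

A hardware CPU does exactly this loop, only in circuitry rather than in an interpreter, and with the program counter as a real register.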
Each CPU has a specific set of instructions that it can execute.
Thus an x86
processor cannot execute ARM programs and an ARM processor
cannot execute
x86 programs. Because accessing memory to get an instruction
or data word takes
much longer than executing an instruction, all CPUs contain
some registers inside
to hold key variables and temporary results. Thus the
instruction set generally con-
tains instructions to load a word from memory into a register,
and store a word
from a register into memory. Other instructions combine two
operands from regis-
ters, memory, or both into a result, such as adding two words
and storing the result
in a register or in memory.
In addition to the general registers used to hold variables and
temporary re-
sults, most computers have several special registers that are
visible to the pro-
grammer. One of these is the program counter, which contains
the memory ad-
dress of the next instruction to be fetched. After that instruction
has been fetched,
the program counter is updated to point to its successor.
Another register is the stack pointer, which points to the top of
the current
stack in memory. The stack contains one frame for each
procedure that has been
entered but not yet exited. A procedure’s stack frame holds
those input parameters,
local variables, and temporary variables that are not kept in
registers.
Yet another register is the PSW (Program Status Word). This
register con-
tains the condition code bits, which are set by comparison
instructions, the CPU
priority, the mode (user or kernel), and various other control
bits. User programs
may normally read the entire PSW but typically may write only
some of its fields.
The PSW plays an important role in system calls and I/O.
The operating system must be fully aware of all the registers.
When time mul-
tiplexing the CPU, the operating system will often stop the
running program to
(re)start another one. Every time it stops a running program, the
operating system
must save all the registers so they can be restored when the
program runs later.
To improve performance, CPU designers have long abandoned
the simple
model of fetching, decoding, and executing one instruction at a
time. Many modern
CPUs have facilities for executing more than one instruction at
the same time. For
example, a CPU might have separate fetch, decode, and execute
units, so that while
it is executing instruction n, it could also be decoding
instruction n + 1 and fetch-
ing instruction n + 2. Such an organization is called a pipeline
and is illustrated in
Fig. 1-7(a) for a pipeline with three stages. Longer pipelines are
common. In most
pipeline designs, once an instruction has been fetched into the
pipeline, it must be
executed, even if the preceding instruction was a conditional
branch that was taken.
Pipelines cause compiler writers and operating system writers
great headaches be-
cause they expose the complexities of the underlying machine to
them and they
have to deal with them.
[Figure 1-7: (a) a three-stage pipeline with fetch, decode, and execute units; (b) a superscalar CPU with multiple execution units.]
Even more advanced than a pipeline is a superscalar CPU, shown in
Fig. 1-7(b). In this design, multiple execution units are present,
for example, one
for integer arithmetic, one for floating-point arithmetic, and one
for Boolean opera-
tions. Two or more instructions are fetched at once, decoded,
and dumped into a
holding buffer until they can be executed. As soon as an
execution unit becomes
available, it looks in the holding buffer to see if there is an
instruction it can hand-
le, and if so, it removes the instruction from the buffer and
executes it. An implica-
tion of this design is that program instructions are often
executed out of order. For
the most part, it is up to the hardware to make sure the result
produced is the same
one a sequential implementation would have produced, but an
annoying amount of
the complexity is foisted onto the operating system, as we shall
see.
Most CPUs, except very simple ones used in embedded systems,
have two
modes, kernel mode and user mode, as mentioned earlier.
Usually, a bit in the PSW
controls the mode. When running in kernel mode, the CPU can
execute every in-
struction in its instruction set and use every feature of the
hardware. On desktop
and server machines, the operating system normally runs in
kernel mode, giving it
access to the complete hardware. On most embedded systems, a
small piece runs
in kernel mode, with the rest of the operating system running in
user mode.
User programs always run in user mode, which permits only a
subset of the in-
structions to be executed and a subset of the features to be
accessed. Generally, all
instructions involving I/O and memory protection are
disallowed in user mode.
Setting the PSW mode bit to enter kernel mode is also
forbidden, of course.
To obtain services from the operating system, a user program
must make a sys-
tem call, which traps into the kernel and invokes the operating
system. The TRAP
instruction switches from user mode to kernel mode and starts
the operating sys-
tem. When the work has been completed, control is returned to
the user program at
the instruction following the system call. We will explain the
details of the system
call mechanism later in this chapter. For the time being, think
of it as a special kind
of procedure call that has the additional property of switching
from user mode to
kernel mode. As a note on typography, we will use the lower-
case Helvetica font
to indicate system calls in running text, like this: read.
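As a sketch of what this looks like from a user program, each call into Python's os module below ends in exactly such a trap: the library wrapper switches to kernel mode, the kernel does the work, and control returns at the instruction after the call.

```python
import os

# Each os.* call below crosses the user/kernel boundary once:
# the wrapper executes a trap instruction, the kernel does the
# work, and control returns at the instruction after the call.
r, w = os.pipe()            # system call: create a kernel pipe object
os.write(w, b"ping")        # trap into the kernel's write path
os.close(w)
msg = os.read(r, 4)         # trap into the kernel's read path
os.close(r)
print(msg)   # b'ping'
```

The data itself never leaves the kernel's pipe buffer between the write and the read; only the traps make it visible to user code.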
It is worth noting that computers have traps other than the
instruction for ex-
ecuting a system call. Most of the other traps are caused by the
hardware to warn
of an exceptional situation such as an attempt to divide by 0 or
a floating-point
underflow. In all cases the operating system gets control and
must decide what to
do. Sometimes the program must be terminated with an error.
Other times the
error can be ignored (an underflowed number can be set to 0).
Finally, when the
program has announced in advance that it wants to handle
certain kinds of condi-
tions, control can be passed back to the program to let it deal
with the problem.
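On UNIX systems the kernel typically reflects such hardware traps back to the program as signals (SIGFPE for arithmetic faults, for instance). A minimal sketch of the "let the program deal with it" case follows; since provoking a real hardware fault is platform-dependent, the trap is simulated here by sending the signal directly.

```python
import os
import signal

# The program announces in advance that it wants to handle a
# particular condition by installing a signal handler; the kernel
# then passes control back to the handler instead of terminating it.
caught = []

def on_fpe(signum, frame):
    caught.append(signum)

signal.signal(signal.SIGFPE, on_fpe)   # announce interest in the condition
os.kill(os.getpid(), signal.SIGFPE)    # stand-in for a real arithmetic trap
print(caught == [signal.SIGFPE])       # True: the handler ran, not the default
```

Without the handler installed, the default action for SIGFPE would be the first option described above: the program is terminated.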
Multithreaded and Multicore Chips
Moore’s law states that the number of transistors on a chip
doubles every 18
months. This ‘‘law’’ is not some kind of law of physics, like
conservation of mo-
mentum, but is an observation by Intel cofounder Gordon Moore
of how fast proc-
ess engineers at the semiconductor companies are able to shrink
their transistors.
Moore’s law has held for over three decades now and is
expected to hold for at
least one more. After that, the number of atoms per transistor
will become too
small and quantum mechanics will start to play a big role,
preventing further
shrinkage of transistor sizes.
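The growth rate is easy to work out: a doubling every 18 months compounds as 2 raised to (months/18). The helper below (the function name is mine, purely for illustration) turns that into numbers.

```python
# Moore's law as arithmetic: one doubling per 18 months means
# growth by a factor of 2 ** (months / 18).
def moore_factor(years):
    return 2 ** (years * 12 / 18)

print(round(moore_factor(3)))    # 4: two doublings in three years
print(round(moore_factor(15)))   # 1024: ten doublings in fifteen years
```

Three decades at this rate is twenty doublings, a factor of about a million, which is why the question of what to do with all those transistors arises at all.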
The abundance of transistors is leading to a problem: what to do
with all of
them? We saw one approach above: superscalar architectures,
with multiple func-
tional units. But as the number of transistors increases, even
more is possible. One
obvious thing to do is put bigger caches on the CPU chip. That
is definitely hap-
pening, but eventually the point of diminishing returns will be
reached.
The obvious next step is to replicate not only the functional
units, but also
some of the control logic. The Intel Pentium 4 introduced this
property, called
multithreading or hyperthreading (Intel’s name for it), to the
x86 processor, and
several other CPU chips also have it—including the SPARC, the
Power5, the Intel
Xeon, and the Intel Core family. To a first approximation, what
it does is allow the
CPU to hold the state of two different threads and then switch
back and forth on a
nanosecond time scale. (A thread is a kind of lightweight
process, which, in turn,
is a running program; we will get into the details in Chap. 2.)
For example, if one
of the processes needs to read a word from memory (which
takes many clock
cycles), a multithreaded CPU can just switch to another thread.
Multithreading
does not offer true parallelism. Only one process at a time is
running, but
thread-switching time is reduced to the order of a nanosecond.
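The benefit is easiest to see when one thread stalls. In the sketch below (pure illustration: a sleep stands in for a long memory or I/O stall), the second thread finishes its work while the first is still blocked.

```python
import threading
import time

# While one thread blocks (a sleep standing in for a slow memory
# or I/O operation), the other thread keeps making progress.
results = []

def blocker():
    time.sleep(0.1)                    # stand-in for a long stall
    results.append("blocker done")

def worker():
    results.append(sum(range(1000)))   # useful work done meanwhile

t1 = threading.Thread(target=blocker)
t2 = threading.Thread(target=worker)
t1.start(); t2.start()
t1.join(); t2.join()
print(results[0])   # 499500: the worker finished while the blocker slept
```

A multithreaded CPU does the analogous switch in hardware, in nanoseconds, without the operating system being involved at all.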
Multithreading has implications for the operating system
because each thread
appears to the operating system as a separate CPU. Consider a
system with two
actual CPUs, each with two threads. The operating system will
see this as four
CPUs. If there is only enough work to keep two CPUs busy at a
certain point in