Slides of a course given to teach embedded Linux to engineers. The full course takes two days; this is the first time a 'light' version lasting a single day was given.
Focus is on:
. What is Linux
. How do I compile
. How do I flash
Embedded linux barco-20121001
Internal Barco Training
September 24th / October 1st, 2012
Kubrick training room
Noordlaan 5, 8520 Kuurne | Belgium
Introduction to Embedded Linux for Engineering
Marc Leeman, VNG
Peter Korsgaard, DnA
2011
7.2 WinSCP, a drag-and-drop interface to your embedded target
10.1 Running ddd with a remote target
10.2 Lab setup with a workstation on a LAN (10.x); public servers (150.158.231.x) and embedded targets on the LAN. The gateway (niobe) is not directly accessible but provides an ssh tunnel on port 22 to gemini on the LAN
10.3 After putting the ssh tunnel in place, the connections on 150.158.231.13, port 4000 are forwarded over TCP to the target 10.2.4.10 on port 2200
A.1 Richard Stallman, founder of the GNU project for a free operating system
A.2 Linus Torvalds, creator of the Linux kernel
A.3 A GNOME Desktop
C.1 Drawing a graphical dependency between patches with quilt
PREFACE
As an engineer, there is nothing more fun than poking in the internals of a system, to see how it reacts and how it all works. It is very hard to cope with black boxes: we want to know what causes output c when inputs a and b are provided. Even more, we want to know that a system is designed well, and if there is an error, we want to fix it.
An open source operating system allows us to do exactly that. First of all, it gives us the freedom to design the hardware platform we want, only limited by time and money; to add the peripherals we care about and to configure it. Obviously, we'll encounter some bumps along the road and we'll need to dig in to our designs, adding debugging code in the kernel. But in the end, we can always get it to work.
One of the advantages, in our view, of Linux is that we can run the same software on all our systems: from the servers that compile and manage our environment, over our desktops, to our embedded targets. As the requirements shrink [1], so does our operating system. A typical server installation quickly surpasses a couple of GB, while we can shrink the root file system of an embedded Linux system to a couple of hundred kB.
In this text, we tried to bundle some of the experience and techniques we have gained building embedded Linux systems. We have tried to ensure everything is correct, but some errors are bound to have slipped through. If you feel something is not correct or missing, you are invited to inform us about it, so we can correct the text for future trainings.
Though a lot of the text is original, some sections have been added or integrated that were accessible from public sources. Wherever possible, you should be able to obtain the original text from the references section at the end of each chapter.
We hope you have as much fun and as good a learning experience as we had while drafting this text.
Flanders, August 2006, May 2008, December 2008, June 2009, August 2010, January 2011, September 2012.
Marc Leeman, Peter Korsgaard

[1] Taking an embedded processor does not mean that it has fewer capabilities than a desktop or server processor, quite the contrary. A lot of functionality that is otherwise reserved for external peripherals is on the processor die itself. As a rule, embedded processors are clocked slower than their desktop and server counterparts and are designed to consume less power during operation.
Chapter 1
Introduction
1.1 Preconditions and Goals
Embedded Linux is a huge topic that cannot realistically be covered in such a short session (even if we knew it all). The idea of this training is therefore not to cover everything related to building embedded Linux systems, but rather to provide an introduction to the subject and get you up to speed as fast as possible. We will try to share the experience we have and to show which solutions we have found to work. This is not to say that these are the only workable solutions, though!
To limit the scope a bit and provide real-life examples, we focus on and base the examples on the existing embedded Linux systems within Barco and the Marvell SheevaPlug. We also assume that the reader is familiar with Linux on PC hardware. If not, have a look at appendix A.
1.2 System Overview
Like other embedded systems, the detailed architecture of embedded Linux systems varies a lot, but certain basic components are common to all systems.
The basic hardware consists of a CPU, RAM, some kind of storage and a number of peripherals for I/O.
Linux supports a wide range of CPUs, but ARM and PowerPC processors are typically used within Barco. Storage can also vary a lot: disks, network, NOR/NAND/managed flash, where flash is the most commonly used solution. I/O peripherals probably have the most variation of them all, but the most interesting from a Linux system design perspective are UARTs and Ethernet MACs.
The software consists of a boot loader, a Linux kernel and one or more file systems containing the applications and libraries. Boot loaders are further described in chapter 4. Boot loaders are important for bringing up a system, but once the kernel is loaded the boot loader is no longer active. The generic architecture of a running Linux system can be seen in figure 1.1.
At the bottom we have the hardware. Right above it the kernel is located. The kernel is the core part of the operating system, and its purpose is to manage hardware and provide high-level abstractions to the user-level software. The kernel is (normally) the only software which talks directly to the hardware. The kernel is further described in chapter 5. Above the kernel, the user space applications and statically or dynamically linked libraries are located. Libraries provide higher-level abstractions for applications than what is provided by the kernel. Libraries exist for just about everything, but all Linux systems at least contain a C library [1]. User space applications are further described in chapter 7.
Notice that this generic architecture is the same for all Linux systems, no matter if they are server, desktop or embedded systems.
1.3 Some Hackable Examples
1.3.1 Marvell SheevaPlug
The Marvell SheevaPlug is a cheap, powerful device in a small form factor. It contains a 1.2 GHz Marvell Sheeva processor (ARMv5), 512 MB DDR2 and 512 MB NAND flash, USB, Gigabit Ethernet and an SDIO interface.
Figure 1.2: Marvell SheevaPlug
For development, it is also very interesting that the device comes with serial and JTAG access (through USB) out of the box, making it very easy to get started with.
root@debian:~# cat /proc/cpuinfo
Processor : ARM926EJ-S rev 1 (v5l)
BogoMIPS : 1192.75
Features : swp half thumb fastmult edsp
CPU implementer : 0x56
CPU architecture: 5TE
CPU variant : 0x2
CPU part : 0x131
CPU revision : 1
Cache type : write-back
Cache clean : cp15 c7 ops
Cache lockdown : format C
Cache format : Harvard
I size : 16384
I assoc : 4
I line length : 32
I sets : 128
D size : 16384
D assoc : 4
D line length : 32
D sets : 128
Hardware : Feroceon-KW
Revision : 0000
Serial : 0000000000000000
[1] You could imagine a setup without it, but it wouldn't be very useful.
The price of a SheevaPlug is around 75 Euros.
1.3.2 Dreambox
The Dreambox devices (see Figure 1.3) are very popular DVB (S/T/C) decoders that all run Linux. Since the code is open, a lot of alternative firmwares are available on the internet, offering more flexibility and functionality than the original firmware.
root@dm7025:~ cat /proc/cpuinfo
system type : ATI XILLEON HDTV SUPERTOLL
processor : 0
cpu model : MIPS 4KEc V4.8
BogoMIPS : 297.98
wait instruction : yes
microsecond timers : yes
tlb_entries : 16
extra interrupt vector : yes
hardware watchpoint : yes
VCED exceptions : not available
VCEI exceptions : not available
Unfortunately, the Flemish DVB-C provider has chosen a closed-box approach (generating revenue based on trivial functionality like recording, pause, delayed playback, ...) and getting a Dreambox to run for cable TV is not that trivial in Flanders. There are reports that programming the default box number (read with a JTAG probe) should work.
The most popular use is receiving DVB-S. Even though the satellite provider does not support a Dreambox, it is fully functional with the default firmware, or with an alternative version that provides a software CAM (Conditional Access Module).
Figure 1.3: Dreambox 7025 S
Depending on the model, a Dreambox can be obtained from 300 Euro onwards.
1.3.3 Linksys NSLU2
Another device that was extremely popular up until recently is the Linksys NSLU2 (see Figure 1.4) (Network Storage Link for USB 2.0). It offers out of the box an ARMv5 CPU, running from flash. Via a web interface, the user can configure the hard disks, which can be accessed via a number of network protocols (e.g. NFS, Samba, ...).
The really interesting part of this device is that the hacker does not need to stick with the on-board flash to build the system on. If a USB disk (or memory stick) is connected, the root filesystem can be stored on the external device, while the kernel boots from flash. With this modification, the NSLU2 can serve as a full-fledged Linux server, keeping in mind the hardware limitations of e.g. 32 MB memory.
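The switch from on-board flash to an external root device happens at boot time, in the bootloader. A minimal sketch of what this could look like at a U-Boot prompt (the device name, console settings and delay are illustrative, not taken from an actual NSLU2 setup):

```shell
# At the U-Boot prompt: keep loading the kernel from flash, but point it
# at a root filesystem on the first partition of a USB disk.
# rootdelay gives the USB stack time to enumerate the disk before mounting.
setenv bootargs console=ttyS0,115200 root=/dev/sda1 rootdelay=10 rw
saveenv   # persist the environment to flash
boot
```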
[marc@chiana ~]$ cat /proc/cpuinfo
Processor : XScale-IXP42x Family rev 1 (v5l)
BogoMIPS : 266.24
Features : swp half fastmult edsp
CPU implementer : 0x69
CPU architecture: 5TE
CPU variant : 0x0
CPU part : 0x41f
CPU revision : 1
Cache type : undefined 5
Cache clean : undefined 5
Cache lockdown : undefined 5
Cache format : Harvard
I size : 32768
I assoc : 32
I line length : 32
I sets : 32
D size : 32768
D assoc : 32
D line length : 32
D sets : 32
Hardware : Linksys NSLU2
Revision : 0000
Serial : 0000000000000000
When the external HDD is replaced by a flash memory pen, the full power of the NSLU2 is unleashed: a running Linux system can be used with as little as 4 Watt power consumption. Some people use it for e.g. domotics control (EIB), network access points for all kinds of USB devices, an ssh tunnel server, a bittorrent downloader, ...
The price of a NSLU2 used to be around 70 Euros.
1.3.4 Buffalo Linkstation Live
Unfortunately, the NSLU2 was made obsolete in the course of 2008, but a good candidate to fill the niche left by the NSLU2 is the Buffalo Linkstation Live (see Figure 1.5).
Two of the drawbacks of the NSLU2 were the limited CPU clock (133 or 266 MHz for newer devices) and only 32 MB of memory. In contrast, the Linkstation Live pictured here has an ARM9 CPU core clocked at 400 MHz and 128 MB of memory. Especially for running a home server, the additional memory comes in handy for multiple concurrent processes.
Again, the stock firmware can be replaced with GNU/Debian, and support for the Feroceon processor is included from kernel 2.6.27 onwards.
Figure 1.4: Linksys NSLU2
Processor : Feroceon rev 0 (v5l)
BogoMIPS : 266.24
Features : swp half thumb fastmult edsp
CPU implementer : 0x41
CPU architecture: 5TEJ
CPU variant : 0x0
CPU part : 0x926
CPU revision : 0
Cache type : write-back
Cache clean : cp15 c7 ops
Cache lockdown : format C
Cache format : Harvard
I size : 32768
I assoc : 1
I line length : 32
I sets : 1024
D size : 32768
D assoc : 1
D line length : 32
D sets : 1024
Hardware : Buffalo Linkstation Pro/Live
Revision : 0000
Serial : 0000000000000000
As a real nice hacker feature, the case designers left a hole to connect a serial level converter to, giving direct access to the U-Boot bootloader. It is enough to solder a 90 degree header on the motherboard to get serial access on the device.
Depending on the size of the disk, the price of a Linkstation Live is anywhere between 100 and 200 Euros. Note that it can be cheaper to buy a device with a small HDD and replace the HDD with a larger one than to buy the Linkstation with the large disk in the first place.
Figure 1.5: Buffalo Linkstation Live
1.3.5 Neo FreeRunner
The Neo FreeRunner (see Figure 1.6) (made by FIC) is a smartphone developed by the Openmoko project. It is the successor to the first development-phase smartphone, the Neo 1973, and is intended for hackers, since it gives the user great customizability.
Processor : ARM920T rev 0 (v4l)
BogoMIPS : 199.47
Features : swp half thumb
CPU implementer : 0x41
CPU architecture: 4T
CPU variant : 0x1
CPU part : 0x920
CPU revision : 0
Cache type : write-back
Cache clean : cp15 c7 ops
Cache lockdown : format A
Cache format : Harvard
I size : 16384
I assoc : 64
I line length : 32
I sets : 8
D size : 16384
D assoc : 64
D line length : 32
D sets : 8
Hardware : GTA02
Revision : 0360
Serial : 0000000000000000
The default OpenMoko distribution can be replaced by Debian.
The Freerunner costs about 350 Euro.
Figure 1.6: Neo Freerunner
1.3.6 AzBox HD
Just like the Dreambox devices, the AzBox is a DVB decoder based on Linux. It has full hardware decoding of MPEG4, which allows you to decode almost any current video, audio or image format on your box.
It allows you to add disk space via Samba, eSATA, USB, ...
Figure 1.7: The AzBox HD decoder, and much more...
system type: Sigma Designs TangoX
processor: 0
cpu model: MIPS 4KEc V6.9
Initial bogomips: 296.96
wait instruction: yes
microsecond timers: yes
tlb_entries: 32
extra interrupt vector: yes
Hardware watchpoint: yes
ASES implemented: mips16
VCED exceptions: not available
VCEI exceptions: not available
System bus frequency: 200250000 Hz
CPU frequency: 300375000 Hz
DSP frequency: 300375000 Hz
At around 350 Euro, it is a lot cheaper than its Dreambox HD counterpart (DM 8000). As with all Dreambox devices, the custom firmwares use a software CAM (Conditional Access Module) to keep track of the key negotiation for image decoding. As such, keys can be shared over the network.
Chapter 2
Cross Compilation Toolchain
2.1 Introduction
Before we can get started with developing embedded Linux systems, we need a toolchain suitable for generating code for our embedded platform. Development can be done natively (e.g. on the embedded system itself once it is bootstrapped), but by far the most common setup is to use a cross compiler.
A cross compiler allows the developer to run the compilation on a much more powerful platform (a multiuser server or a powerful desktop machine) instead of the slower and more resource constrained embedded system.
This chapter describes how to configure and compile such a cross toolchain from sources. It is possible to download pre-compiled cross toolchains like the ones included in ELDK, but even if you are not going to compile the toolchain yourself, it can be very useful to know how it is done.
Just like on a desktop Linux system, the toolchain of choice for an embedded Linux system is the GNU toolchain.
Compiling a program takes place by running a compiler on the build platform. The compiled program will run on the host platform. Usually these two are the same; if they are different, the process is called cross-compilation.
Typically the hardware architecture differs, for example when compiling a program destined for the PowerPC architecture on an x86-64 computer; but cross-compilation is also applicable when only the operating system environment differs, as when compiling a FreeBSD program under Linux; or even just the system library, as when compiling programs with uClibc on a glibc host.
The GNU/Autotools packages (i.e. autoconf, automake, and libtool) use the notion of a build platform, a host platform, and a target platform.
The build platform is where the code is actually compiled.
The host platform is where the compiled code will execute.
The target platform usually only applies to compilers: it represents what type of object code the package itself will produce (such as when cross-compiling a cross-compiler); otherwise the target platform setting is irrelevant.
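With Autotools, the build/host distinction shows up directly on the configure command line. A hedged sketch (the tuples are placeholders, assuming a cross compiler named powerpc-linux-uclibc-gcc is on the PATH):

```shell
# build = the machine we compile on; host = the machine the result runs on.
# Autotools derives the cross tools from the --host prefix (<host>-gcc etc.).
./configure --build=x86_64-pc-linux-gnu --host=powerpc-linux-uclibc
make
```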
Since we will be compiling for a target that fits in under 1 MB, we cannot perform the compilation on the target itself (limited in flash). Even if we could, it would still be better and faster to do this on a server-class machine. Even when compiling for a target architecture that is similar to the server/development environment, there are valid arguments for using a cross-compiler, especially when the product is relatively long-lived and there are no plans to upgrade the operating system's libc version [1].
[1] This is not a good idea in any case, but it beats having to keep around that single version of the obsolete RH7.0, merely for building the ...
In this chapter, the building blocks will be laid out for a cross compiler specifically targeted at small embedded systems.
First, gcc will be introduced, followed by glibc. gcc and glibc are the typical compiler combination used in most desktop systems. The following section will cover a smaller alternative to glibc: uClibc. Finally, gdb (and gdbserver) is introduced.
These are the building blocks for the cross compilation toolchain that we need for our previously introduced target. Manually hacking up a compiler can be a challenging task, but luckily there is an easier way: Buildroot, which is a set of Makefiles doing exactly this [2].
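As a taste of what is coming in chapter 7, the whole process boils down to a few commands with Buildroot (the release version is illustrative; images end up under output/images/ in current Buildroot releases):

```shell
tar xf buildroot-2012.08.tar.bz2   # unpack a Buildroot release
cd buildroot-2012.08
make menuconfig                    # pick target architecture, C library, packages
make                               # builds the cross toolchain, kernel and rootfs
ls output/images/                  # the resulting images end up here
```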
2.2 GNU Toolchain
A minimal GNU toolchain consists of binutils, the GNU Compiler Collection (GCC), and a C library.
Binutils are the binary utilities of the toolchain, i.e. the programs that work with the binary and object files. This includes the assembler, linker, archiver and a number of smaller, more-or-less obscure utilities.
GCC is the compiler itself. GCC contains front-ends for a lot of languages (C, C++, Java, Ada, Objective C, Fortran, ...), but here we will only focus on the C compiler.
Last, but not least, a C library is needed. The C library is part of the toolchain, and GCC's configuration depends on the chosen C library. Due to this, GCC has to be compiled in two steps. First a bootstrap compiler is compiled, which is then used to compile the C library, which in turn is used to compile the final compiler. The configuration depends on 3 high-level choices:
- Build and target CPU type
- Build and target operating system
- C library to use
A configuration could for example be: a cross compiler running on an x86 Linux PC which creates executables for an embedded Linux system with a PowerPC processor using the uClibc C library (see below). To keep track of all these configuration parameters, the following naming convention is normally used for the binaries:
target-cpu-target-os-target-c-library-toolname
e.g. the C compiler for the above would be called:
powerpc-linux-uclibc-gcc
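The convention is purely a naming scheme, which makes it easy to take apart with ordinary shell parameter expansion. A small runnable sketch:

```shell
#!/bin/sh
# Split a toolchain binary name of the form cpu-os-libc-tool into its parts.
tuple="powerpc-linux-uclibc-gcc"
cpu=${tuple%%-*};  rest=${tuple#*-}   # strip everything after the first dash
os=${rest%%-*};    rest=${rest#*-}
libc=${rest%%-*};  tool=${rest#*-}
echo "cpu=$cpu os=$os libc=$libc tool=$tool"
```

Running it prints `cpu=powerpc os=linux libc=uclibc tool=gcc`.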
Next to these major configuration choices, some more subtle tweaking is still available. One of the most important of these is floating point mode. The compiler can either be configured to generate hardware floating point instructions or to use a software floating point emulation. Hardware floating point instructions can be used even if the CPU doesn't have an FPU, but then the kernel has to emulate them, which is a lot slower than soft float (10-100x).
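The mode can also be overridden per compilation unit with GCC's PowerPC flags. A hedged sketch (assumes the cross compiler named above is installed; the source file is a placeholder):

```shell
# Generate library calls instead of FPU instructions:
powerpc-linux-uclibc-gcc -msoft-float -c dsp.c -o dsp-soft.o
# Generate real FPU instructions (the kernel traps and emulates
# them if the CPU has no FPU, which is much slower):
powerpc-linux-uclibc-gcc -mhard-float -c dsp.c -o dsp-hard.o
```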
2.3 C Library
What C library to use? Several options exist, the most popular being the GNU C library (Glibc) and uClibc.
[2] There are a number of alternatives that will not be covered here.
2.3.1 GNU C Library
Glibc is the GNU project's C standard library. It is free software and is available under the GNU
Lesser General Public License. The lead contributor and maintainer is Ulrich Drepper.
Glibc is what is used for practically all desktop and server Linux distributions. It is very
featureful and supports a lot of different hardware platforms and operating systems. Unfortunately,
it is also very big (several MBs), which makes it less suitable for building small embedded Linux
systems.
2.3.2 uClibc
uClibc is a small C library intended for embedded Linux systems.
uClibc was created to support uClinux, a version of Linux not requiring a memory management
unit and thus suited for microcontrollers (hence the uC in the name), but now also runs on real
Linux.
uClibc is much smaller than Glibc, but still very much compatible. For most applications no
change to the source code is needed to use uClibc.
While Glibc is intended to fully support all relevant C standards across a wide range of
platforms, uClibc is specifically focused on embedded Linux. Features can be enabled or disabled
according to space requirements.
uClibc doesn't support other operating systems than Linux. It supports, amongst others, i386,
ARM, AVR32, Blackfin, h8300, m68k, Microblaze, MIPS, Nios/Nios2, PowerPC, SuperH, SPARC,
and x86-64 processors.
2.4 Compilation
As described above, the GNU toolchain is a big system consisting of several independent packages,
and not every version of each package is compatible with the others without extra patches. Finding
a working combination of all these packages and compiling the toolchain by hand is not a simple
job.
Luckily, there now exist scripts to automate it: Crosstool-NG and Buildroot.
2.4.1 Crosstool-NG
Crosstool-NG is a tool by Yann E. Morin which makes it easy to create cross toolchains using
uClibc/Glibc/EGlibc. Crosstool-NG is nice, but it only creates toolchains, so here we will instead
focus on Buildroot (see chapter 7).
Notice that Crosstool-ng toolchains can be used with Buildroot through its external toolchain
support.
2.4.2 Buildroot
Buildroot is a set of Makefiles and patches that makes it easy to generate cross toolchains using
uClibc. Actually it is more than that, as it can also be used to build the complete userspace for a
system, but more about that in chapter 7.
2.5 Hands On - Toolchain with Buildroot
2.5.1 Getting the Source
While the hardware platform for the duration of the course will be the Marvell SheevaPlug, most of
the Barco designs use Buildroot to create a toolchain and/or the target root filesystem.
Figure 2.1: http://buildroot.net
Buildroot until recently didn't have releases on a regular basis, but that has luckily changed.
As for getting the source, we take the latest version available (or you can check out the sources
with git).
[mleeman@cypher code]$ wget http://www.buildroot.net/downloads/buildroot-2012.08.tar.bz2
[mleeman@cypher code]$ tar jxf buildroot-2012.08.tar.bz2
[mleeman@cypher code]$ cd buildroot-2012.08
2.5.2 Configuration
Buildroot is configured via make menuconfig, in the same way as the Linux kernel. It can be
configured in a fine-grained fashion, configuring each and every component, or in a more coarse
fashion. Since most of the developers focus on small and fast, it can be assumed that the defaults
are reasonable (this has been verified by experience).
At this point, only a toolchain is created; that is the compiler, the binutils, optionally gdb,
and the (uC)libc version that is heavily intertwined with the compiler. When gdb is enabled for
the host, gdbserver for the target needs to be enabled too (one without the other does not make
much sense). When browsing through the options, disable all the target packages.
Figure 2.2: Configuration
Figure 2.3: Selecting a system wide path with a date-string avoids confusion and overwriting
existing toolchains
Compilers and libc libraries improve and evolve over time. On the other hand, installing a new
toolchain changes the entire engine of your embedded development and needs to be done with
care. Therefore, a date string is added to the system wide path (where the toolchain will be
placed) to avoid confusion and overwriting. This way, users can play with different compilers by
just changing the date string in their $PATH environment variable (see Figure 2.3).
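Such a setup can be sketched in ~/.bashrc as follows. The path is hypothetical; adapt it to wherever your toolchains are actually installed:

```shell
# Pick a toolchain by date; change this one variable to switch compilers
TOOLCHAIN_DATE=20120911
PATH=/opt/barco/arm/$TOOLCHAIN_DATE/toolchain_uclibc_arm/usr/bin:$PATH
export PATH
```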
Exit and save the configuration. The final list of changed options is rather short:
[mleeman@cypher buildroot-2012.08]$ make savedefconfig
[mleeman@cypher buildroot-2012.08]$ cat defconfig
BR2_arm=y
BR2_arm926t=y
BR2_PACKAGE_GDB_SERVER=y
BR2_PACKAGE_GDB_HOST=y
make savedefconfig creates a defconfig file from the full .config, with only the settings
that are changed from the default.
Run make, sit back and enjoy3:
[mleeman@cypher buildroot-2012.08]$ make
Buildroot will now download and compile all the packages. If a question is asked for input, just
opt for the default values.
Depending on the speed of your machine, this will take from about an hour to several hours to
compile (after all, the GCC compiler is compiled 3 times).
The result for the target is a number of filesystem images. Typical targets are archive, ubifs,
ext2, jffs2, . . .
In order to use it, add these lines to the bottom of your ~/.bashrc
3You will need to configure wget to either use a proxy that does not require authentication and that uses the
Barco proxy as a parent; or configure the .wgetrc to use the proxy-user and proxy-password options.
PATH=/users/firmware/mleeman/Development/buildroot-2012.08/buildroot-2012.08/output/host/usr/bin:$PATH
export PATH
and re-source your .bashrc
[mleeman@neo buildroot]$ . ~/.bashrc
A final check of our toolchain should result in:
[mleeman@cypher bin]$ ./arm-unknown-linux-uclibcgnueabi-gcc -v
Using built-in specs.
COLLECT_GCC=./arm-unknown-linux-uclibcgnueabi-gcc
COLLECT_LTO_WRAPPER=/users/firmware/mleeman/Development/buildroot-2012.08/buildroot-2012.08/output/host/usr/libexec/gcc/arm-unk...
Target: arm-unknown-linux-uclibcgnueabi
...
Thread model: posix
gcc version 4.7.1 (Buildroot 2012.08)
2.5.3 Finishing up
After creating the toolchain, we want to distribute it in a clean fashion to other machines of similar
architecture (e.g. colleagues debugging in the field with laptops).
In order to do that, select a location more suitable than a home directory
(/opt/barco/arm/20120911/toolchain_uclibc_arm/) and build the toolchain there.
Assuming you've built the toolchain on a comparable machine, use the following command to
package the toolchain in a Debian package:
[mleeman@neo buildroot-20120911]$ tar cvfz toolchain_arm_uclibc_20120911.tar.gz /opt/barco/arm/20120911/
[mleeman@neo buildroot-20120911]$ fakeroot alien --fixperms toolchain_arm_uclibc_20120911.tar.gz
toolchain-arm-uclibc-20120911_1-2_all.deb generated
Note that we put the time stamp in the package name, instead of in the version, since we want
to allow different versions to exist next to each other after installation. If not, installing a
package with a more recent version will replace (and remove) the other package.
2.6 References
Cross Compile: http://en.wikipedia.org/wiki/Cross-compile
Remote Debugging: http://www.cucy.net/lacp/archives/000024.html
GCC: http://gcc.gnu.org
Glibc: http://www.gnu.org/software/libc/
uClibc: http://www.uclibc.org
Crosstool-NG: http://ymorin.is-a-geek.org/projects/crosstool
Buildroot: http://buildroot.net
Embedded Linux Development Kit (ELDK): http://www.denx.de/wiki/DULG/ELDK
Chapter 3
The Linux Boot Process
In the beginning, there was GRUB (or maybe LILO) and GRUB loaded the kernel,
and kernel begat init, and init begat rc, and rc begat network and httpd and getty,
and getty begat login, and login begat shell and so on.
3.1 Introduction
This section will cover the boot process of most Linux distributions. Even though there are some
dierences between the distributions, the process is alike.
The process of booting a Linux system consists of a number of stages, but whether a x86,
x86-64 desktop, server or a deeply embedded processor is booted, the
ow is similar. In this
chapter, we will explore the Linux boot process from the initial bootstrap to the start of the
107. rst
user-space application. Along the way; several boot-related topics such as the bootloaders, kernel
decompression and RAM disks and other element of the Linux boot process will be introduced.
As an example, a GNU/Debian 6.0 (Squeeze) on a x86-64 will be used to explain the process;
but booting on x86, PowerPC, Sparc, . . . are more or less the same.
In modern computers the bootstrapping process begins with the CPU executing software contained
in ROM (for example, the BIOS of an IBM PC) at a predefined address (the CPU is
designed to execute this software after reset without outside help). This software contains rudimentary
functionality to search for devices eligible to participate in booting, and to load a small
program from a special section (most commonly the boot sector) of the most promising device.
Boot loaders may face peculiar constraints, especially in size; for instance, on the IBM PC and
compatibles, the boot loader must fit into the first 446 bytes of the Master Boot
Record, in order to leave room for the 64-byte partition table and the 2-byte AA55h 'signature',
which the BIOS requires for a proper boot loader.
Today's computers are equipped with facilities to simplify the boot process, but that doesn't
necessarily make it simple.
Figure 3.1 shows a high level view of the Linux boot process. In the next sections, each step
will be elaborated.
When a system is first booted, or is reset, the processor executes code at a well-known location.
In a personal computer (PC), this location is in the basic input/output system (BIOS), which is
stored in flash memory on the motherboard. The central processing unit (CPU) in an embedded
system invokes the reset vector to start a program at a known address in flash/ROM. On a lot of
Linux-based embedded processors, the device boots at a well-known address (e.g. 0x00000100 on
Chip Select 0 (CS0)). Placing the bootloader (e.g. U-Boot) at that location will start it.
In either case, the result is the same. Because PCs offer so much flexibility, the BIOS must
determine which devices are candidates for boot. We'll look at this in more detail later.
When a boot device is found, the first-stage boot loader is loaded into RAM and executed. This
boot loader is less than 512 bytes in length (a single sector), and its job is to load the second-stage
boot loader.
Figure 3.1: A high level view of the Linux boot process
When the second-stage boot loader is in RAM and executing, a splash screen is commonly
displayed, and Linux and an optional initial RAM disk (temporary root file system) are loaded
into memory. When the images are loaded, the second-stage boot loader passes control to the
kernel image and the kernel is decompressed and initialised. At this stage, the kernel checks and
initialises the system hardware, enumerates the attached hardware devices, mounts the root device,
and then loads the necessary kernel modules. When complete, the first user-space program (init)
starts, and high-level system initialisation is performed.
That's Linux boot in a nutshell. Now let's dig in a little further and explore some of the details
of the Linux boot process.
3.2 Step 1: The Boot Manager
The boot manager is a small program that resides mostly in the MBR1 and presents a menu
for choosing the Operating System (if more than one is present) and kernel or boot options.
In the regular, plain-old-booting-Linux business, all the boot loader does is:
Load the kernel into memory
Optionally load a ramdisk called initrd containing stuff like disk drivers
Pass the kernel arguments, of which we are only interested in runlevel and init
Start execution of the kernel.
3.2.1 System startup
The system startup stage depends on the hardware that Linux is being booted on. On an embedded
platform, a bootstrap environment is used when the system is powered on, or reset. Examples
include U-Boot, RedBoot, and MicroMonitor from Lucent. Embedded platforms are commonly
shipped with a boot monitor. These programs reside in a special region of flash memory on the
target hardware and provide the means to download a Linux kernel image into flash memory and
subsequently execute it. In addition to having the ability to store and boot a Linux image, these
1Master Boot Record.
boot monitors perform some level of system test and hardware initialisation. In an embedded
target, these boot monitors commonly cover both the first- and second-stage boot loaders.
In a PC, booting Linux begins in the BIOS at address 0xFFFF0. The first step of the BIOS is
the power-on self test (POST). The job of the POST is to perform a check of the hardware. The
second step of the BIOS is local device enumeration and initialisation.
Given the different uses of BIOS functions, the BIOS is made up of two parts: the POST
code and runtime services. After the POST is complete, it is flushed from memory, but the BIOS
runtime services remain and are available to the target operating system.
To boot an operating system, the BIOS runtime searches for devices that are both active
and bootable in the order of preference defined by the complementary metal oxide semiconductor
(CMOS) settings. A boot device can be a floppy disk, a CD-ROM, a partition on a hard disk, a
device on the network, or even a USB flash memory stick.
Commonly, Linux is booted from a hard disk, where the Master Boot Record (MBR) contains
the primary boot loader. The MBR is a 512-byte sector, located in the first sector on the disk
(sector 1 of cylinder 0, head 0). After the MBR is loaded into RAM, the BIOS yields control to it.
3.2.2 Extracting the MBR
As an exercise, the MBR can be inspected. Use these commands:
$ sudo dd if=/dev/sda of=mbr.bin bs=512 count=1
$ od -xa mbr.bin
The dd command needs to be run as root. Since it is a bad habit to log into your
system as root, we use the sudo command, which temporarily gives the user root permissions. dd
copies the first sector (512 bytes) of the disk to the file mbr.bin; od prints the file in hex and
ASCII formats.
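To see the 0xAA55 boot signature discussed in the next section without touching a real disk, you can build a dummy 512-byte sector and inspect its last two bytes; a real mbr.bin extracted with the dd command above can be inspected the same way:

```shell
# Build a dummy 512-byte sector with the 0xAA55 signature at offset 510,
# standing in for a real mbr.bin extracted with dd from /dev/sda
dd if=/dev/zero of=mbr.bin bs=512 count=1 2>/dev/null
printf '\125\252' | dd of=mbr.bin bs=1 seek=510 conv=notrunc 2>/dev/null
# Dump only the last two bytes; x86 stores the 16-bit 0xAA55 little-endian
od -A d -t x1 -j 510 mbr.bin
```

The dump ends with `55 aa`: the little-endian encoding of the magic number 0xAA55.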
3.2.3 Stage 1 boot loader
The primary boot loader that resides in the MBR is a 512-byte image containing both program
code and a small partition table (see Figure 3.2). The first 446 bytes are the primary boot loader,
which contains both executable code and error message text. The next sixty-four bytes are the
partition table, which contains a record for each of four partitions (sixteen bytes each). The MBR
ends with two bytes that are defined as the magic number (0xAA55). The magic number serves
as a validation check of the MBR.
The job of the primary boot loader is to find and load the secondary boot loader (stage 2). It
does this by looking through the partition table for an active partition. When it finds an active
partition, it scans the remaining partitions in the table to ensure that they're all inactive. When
this is verified, the active partition's boot record is read from the device into RAM and executed.
3.2.4 Stage 2 boot loader
The secondary, or second-stage, boot loader could be more aptly called the kernel loader. The task
at this stage is to load the Linux kernel and optional initial RAM disk.
The first- and second-stage boot loaders combined are called Linux Loader (LILO) or GRand
Unified Bootloader (GRUB) in the x86 PC environment. Both alternatives are pretty well documented,
so elaborating on the options serves little purpose here. Most of the options are set in a
configuration file, with a lot of the options explained in commentary (e.g. /boot/grub/menu.lst
for GRUB and /etc/lilo.conf for LILO). Some distributions have patched versions for including
graphical themes instead of the default minimalistic text or curses-like approach. A difference
that should be mentioned is that LILO requires running the lilo command after modifying the
configuration file, while current GRUB versions do not: the changes in /boot/grub/menu.lst are
instantaneous.
Figure 3.2: Anatomy of the MBR
Because LILO has some disadvantages that were corrected in GRUB, let's look into GRUB.
The great thing about GRUB is that it includes knowledge of Linux file systems. Instead of
using raw sectors on the disk, as LILO does, GRUB can load a Linux kernel from an ext2 or ext3
file system. It does this by making the two-stage boot loader into a three-stage boot loader. Stage
1 (MBR) boots a stage 1.5 boot loader that understands the particular file system containing
the Linux kernel image. Examples include reiserfs_stage1_5 (to load from a Reiser journaling file
system) or e2fs_stage1_5 (to load from an ext2 or ext3 file system). When the stage 1.5 boot loader
is loaded and running, the stage 2 boot loader can be loaded.
With stage 2 loaded, GRUB can, upon request, display a list of available kernels (defined
in /boot/grub/menu.lst). You can select a kernel and even amend it with additional kernel
parameters. Optionally, you can use a command-line shell for greater manual control over the
boot process.
With the second-stage boot loader in memory, the file system is consulted, and the default
kernel image and initrd image are loaded into memory. With the images ready, the stage 2 boot
loader invokes the kernel image.
3.2.4.1 GRUB stage boot loaders
The /boot/grub directory contains the stage1, stage1.5, and stage2 boot loaders, as well as a
number of alternate loaders (for example, CD-ROMs use iso9660_stage1_5).
3.2.5 Kernel
With the kernel image in memory and control given from the stage 2 boot loader, the kernel stage
begins. The kernel image isn't so much an executable kernel, but a compressed kernel image. On
Linux systems, vmlinux is a statically linked executable file that contains the kernel. This
file might be required for kernel debugging, generating a symbol table or other operations, but must
be made bootable before being used as an operating system kernel by adding a multiboot header,
bootsector and setup routines.
Typically this is a zImage (compressed image, less than 512KB) or a bzImage (big compressed
image, greater than 512KB), that has been previously compressed with zlib. As the Linux kernel
matured, the size of the kernels generated by users grew beyond the limits imposed by some
architectures, where the space available to store the compressed kernel code is limited. The bzImage
(big zImage) format was developed to overcome this limitation by cleverly splitting the kernel over
discontiguous memory regions (see Figure 3.3). The bzImage format is still compressed using the
zlib algorithm2.
Figure 3.3: Anatomy of bzImage
At the head of this kernel image is a routine that does some minimal amount of hardware
setup and then decompresses the kernel contained within the kernel image and places it into high
memory. If an initial RAM disk image is present, this routine moves it into memory and notes it
for later use. The routine then calls the kernel and the kernel boot begins.
When the bzImage (for an x86 image) is invoked, you begin at ./arch/x86/boot/header.S in
the start assembly routine (see Figure 3.4 for the major flow). This routine does some basic hardware
setup and invokes the startup_32 routine in ./arch/x86/boot/compressed/header.S. This
routine sets up a basic environment (stack, etc.) and clears the Block Started by Symbol (BSS) section.
The kernel is then decompressed through a call to a C function called decompress_kernel (located
in ./arch/x86/boot/compressed/misc.c). When the kernel is decompressed into memory, it is
called. This is yet another startup_32 function, but this function is in ./arch/x86/kernel/header.S.
In the new startup_32 function (also called the swapper or process 0), the page tables are
initialised and memory paging is enabled. The type of CPU is detected along with any optional
floating-point unit (FPU) and stored away for later use. The start_kernel function is then invoked
(init/main.c), which takes you to the non-architecture specific Linux kernel. This is, in essence,
the main function for the Linux kernel.
2Although there is the popular misconception that the bz- prefix means that bzip2 compression is used (the
bzip2 package is often distributed with tools prefixed with bz-, such as bzless, bzcat, etc.), this is not the case.
Figure 3.4: Major function flow for the Linux kernel x86 boot
With the call to start_kernel, a long list of initialisation functions are called to set up interrupts,
perform further memory configuration, and load the initial RAM disk. In the end, a call is
made to kernel_thread (in ./arch/x86/kernel/process.c) to start the init function, which is
the first user-space process. Finally, the idle task is started and the scheduler can now take control
(after the call to cpu_idle). With interrupts enabled, the pre-emptive scheduler periodically takes
control to provide multitasking.
During the boot of the kernel, the initial RAM disk (initrd) that was loaded into memory
by the stage 2 boot loader is copied into RAM and mounted. This initrd serves as a temporary
root file system in RAM and allows the kernel to fully boot without having to mount any physical
disks. Since the necessary modules needed to interface with peripherals can be part of the initrd,
the kernel can be very small, but still support a large number of possible hardware configurations.
After the kernel is booted, the initrd is released once the real root file system is mounted.
The initrd allows you to create a small Linux kernel with drivers compiled as loadable
modules. These loadable modules give the kernel the means to access disks and the file systems
on those disks, as well as drivers for other hardware assets. Alternatively, the root file system can
be mounted via the Network File System (NFS).
3.2.5.1 Manual boot in GRUB
From the GRUB command-line, you can boot a specific kernel with a named initrd image as
follows:
grub> kernel /bzImage-2.6.22.6
[Linux-bzImage, setup=0x1400, size=0x29672e]
grub> initrd /initrd-2.6.22.6.img
[Linux-initrd @ 0x5f13000, 0xcc199 bytes]
grub> boot
Uncompressing Linux... Ok, booting the kernel.
If you don't know the name of the kernel to boot, just type a forward slash (/) and press the
Tab key. GRUB will display the list of kernels and initrd images.
3.2.5.2 decompress kernel output
The decompress kernel function is where you see the usual decompression messages emitted to
the display:
Uncompressing Linux... Ok, booting the kernel.
3.3 Step 2: init
After the kernel is booted and initialised, the kernel starts init: the first user-space program
invoked, and the first program compiled with the standard C library. Prior to this point in the
process, no standard C applications have been executed.
The init argument the boot loader can pass to the kernel is the name of a program. Usually,
none is given, and the default, /sbin/init, is used. But it need not be. Rarely do embedded systems
require the extensive initialisation provided by init (as configured through /etc/inittab). In
many cases, you can invoke a simple shell script that starts the necessary embedded applications.
A good example where /sbin/init is replaced by a script is in embedded systems where a
read-only filesystem is overlaid with another FS that is writable; the changes are written to a
flash filesystem. In this case, e.g. init=/etc/preinit is passed to the kernel as an argument.
#!/bin/sh
# script to do pivot root and allow the entire root filesystem to be
# written to
/sbin/insmod /lib/modules/$(uname -r)/kernel/fs/mini_fo/mini_fo.ko
/sbin/insmod /lib/modules/$(uname -r)/kernel/lib/zlib_deflate/zlib_deflate.ko
/sbin/insmod /lib/modules/$(uname -r)/kernel/fs/jffs2/jffs2.ko
if ! /bin/mount -t jffs2 -w -o noatime,nodiratime /dev/mtdblock7 /mnt/mtdblock7
then
    /usr/bin/eraseall /dev/mtd7
    /bin/mount -t jffs2 -w -o noatime,nodiratime /dev/mtdblock7 /mnt/mtdblock7
fi
mount -t mini_fo -o base=/,sto=/mnt/mtdblock7 / /mnt/mini_fo
cd /mnt/mini_fo
[ -e old_rootfs ] || mkdir -p old_rootfs
pivot_root . old_rootfs
exec /usr/sbin/chroot . /sbin/init
echo "Oops, exec chroot didn't work! :( :( :("
exit 1
When we pass the parameter init=/bin/sh to the kernel, a plain shell is used instead of init.
What does the kernel do with init? It starts it. It's the only program the kernel itself starts,
everything else is started by init.
The regular Linux init will then read its configuration file, /etc/inittab. The format of this
file is somewhat involved and archaic, but it's not too complex3.
In order to understand the process of init, the concept of a runlevel needs to be introduced. A
runlevel is a state or mode that is defined by the services that run in that mode. The runlevels
are derived from Unix historical roots. Here, services means things like sshd, network, ftpd,
crond, . . .
Runlevels are needed because different systems can be used in different ways. Some services
are not available until the system is in a particular state or mode. Only when some lower-level
services are available can other, higher-level services be started or used.
Consider that your system disk, on, say, a LAN server, is corrupted and you want to
repair it. In such a situation, you do not expect other users to log in to the system. You can
switch to runlevel 1 and perform the maintenance tasks on your disk. Since runlevel 1 doesn't
support network/multiuser login, other users cannot log in to the system while it is under maintenance
(i.e. when a low-level service such as the filesystem is not available, other high-level services such as
multiuser/network login cannot be started or used).
Linux has the following runlevels:
0 : Halt (Shutdown)
1 : Single User Mode
2 : Basic Multi-User mode without NFS
3 : Full Multi-User mode
4 : Not Used (User Definable)
5 : Full Multi User Mode with X11 Login
6 : Reboot
Each runlevel runs a particular set of services. The list of all services in the system is in
the /etc/init.d directory. There is a directory that corresponds to each runlevel.
For runlevel 0: /etc/rc0.d
For runlevel 1: /etc/rc1.d
For runlevel 2: /etc/rc2.d
For runlevel 3: /etc/rc3.d
For runlevel 4: /etc/rc4.d
For runlevel 5: /etc/rc5.d
For runlevel 6: /etc/rc6.d
For runlevel S: /etc/rcS.d
3A lot of embedded systems do not use the Sys-V init, but the busybox init. The configuration
file is slightly different. Another option is initng. While classic init executes processes in sequence, and a
lot of these tasks are hardware dependent, the processor is idle while waiting for the reply from the hardware. initng
tackles this by starting independent tasks in parallel, resulting in a faster boot-up, but it is a lot harder to configure.
Each of these directories contains many symbolic links. These links point to the services
in the /etc/init.d directory. All these links start with either an S or a K. Each link is named
with a prefix of K or S according to whether that particular service needs to be killed or started
in that runlevel.
E.g. consider the following entries (symbolic links) in the directory /etc/rc0.d:
[mleeman@seraph ~]$ ls -1 /etc/rc0.d/
K11anacron
K11cron
K20autofs
K20courier-authdaemon
K20courier-mta
...
S50mdadm-raid
S60umountroot
S90halt
This directory corresponds to runlevel 0, which is shutdown. Here only the last few services
(such as halt) are started; all other services are killed. This can be seen since only those entries
start with S and all other entries start with K. You may wonder what happens if the start services
run before the kill of all the other services. That doesn't happen: first all the kill
services in the directory are executed, followed by the start services. If you need further info,
peek into the /etc/init.d/rc file, which manages the start and stop of services when switching
runlevels.
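The core of what /etc/init.d/rc does can be sketched in a few lines of shell. This is only a simplified sketch (the real script handles priorities across runlevel switches and more), demonstrated here on a throwaway directory instead of a real /etc/rcN.d:

```shell
# Populate a fake runlevel directory with one kill and one start link
rcdir=$(mktemp -d)
printf '#!/bin/sh\necho "${0##*/} $1"\n' > "$rcdir/K20demo"
printf '#!/bin/sh\necho "${0##*/} $1"\n' > "$rcdir/S10demo"
chmod +x "$rcdir"/K* "$rcdir"/S*
# Kill scripts run first, then start scripts; the glob sorts them numerically
for script in "$rcdir"/K*; do "$script" stop;  done
for script in "$rcdir"/S*; do "$script" start; done
```

This prints `K20demo stop` followed by `S10demo start`, mirroring the kill-then-start order described above.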
The system starts when init loads in an undefined state (sometimes called N), and then
switches to one runlevel or another depending on what the runlevel argument from the bootloader
to the kernel was, and the contents of /etc/inittab.
For example, if the bootloader passed runlevel 5, init will try to switch to that state. If no
runlevel argument was passed, it will use its default, which is in /etc/inittab.
The default runlevel is defined in the /etc/inittab file:
# The default runlevel.
id:3:initdefault:
By default it is set to runlevel 3 or 5 (when X11 is installed). It can be customised to your
needs4.
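The initdefault line is just colon-separated text, so the default runlevel can be read with awk. The sketch below runs on a sample file; point it at /etc/inittab on a real sysvinit system:

```shell
# Create a two-line sample mirroring the snippet above
cat > inittab.sample <<'EOF'
# The default runlevel.
id:3:initdefault:
EOF
# Print field 2 of the line whose action (field 3) is "initdefault"
awk -F: '$3 == "initdefault" { print $2 }' inittab.sample
```

This prints `3`, the default runlevel.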
Some distributions (like Debian) define a runlevel S that is executed first (/etc/rcS.d),
starting as few processes as possible5.
Normally the only reason for the bootloader to pass an argument is if you want it to boot in an
unusual state, for example, a single-user mode for maintenance (runlevel 1), or with a replacement
init because of disk corruption (init=/bin/sh).
So, let's look at that file in more detail.
3.3.1 Step 2.1: /etc/inittab
All lines starting with # are comments. The other lines are like this:
1:2345:respawn:/sbin/getty 38400 tty1
They have 4 fields, separated with colons, which mean (taken from the inittab(5) man page):
id : is a unique sequence of 1-4 characters which identifies an entry in inittab (for versions of
sysvinit compiled with the old libc5 (< 5.2.18) or a.out libraries the limit is 2 characters).
runlevels : lists the runlevels for which the specified action should be taken.
action : describes which action should be taken.
process : specifies the process to be executed. If the process field starts with a `+' character, init
will not do utmp and wtmp accounting for that process. This is needed for gettys that insist
on doing their own utmp/wtmp housekeeping.
4Alert: Be sure not to set the default to 0 or 6.
5In fact, Debian, as well as most of the distributions based on it, like Ubuntu, does not make any difference
between runlevels 2 to 5; they are all there for the local admin to configure.
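Since the fields are colon-separated, a single entry can be split with the shell itself; a small sketch using the getty line from above:

```shell
# Split an inittab entry into id, runlevels, action and process;
# everything after the third colon (including spaces) lands in $process
line='1:2345:respawn:/sbin/getty 38400 tty1'
IFS=: read -r id runlevels action process <<EOF
$line
EOF
echo "id=$id runlevels=$runlevels action=$action process=$process"
```

This prints `id=1 runlevels=2345 action=respawn process=/sbin/getty 38400 tty1`.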
When it's booting, to decide the desired runlevel (again, if it's not passed as an argument),
init will look for a line with the initdefault action.
id:5:initdefault:
That means: go to runlevel 5. So, if you wanted to change the default runlevel, that's what
you change.
But what does it mean to go to one runlevel? Well, each runlevel runs a different configuration
of software. One runlevel may have a webserver running, and another not have it. One runlevel
may show you a graphical login screen, or not, or give you 6 text terminals, or one.
So, for example, if you switched to runlevel 6, it would reboot. You can switch runlevels at any
moment using the telinit command, but for the purposes of booting and this chapter, you switch
only once, to the default runlevel, and you're done.
So, what happens after you know you are going to runlevel 5?
If you are booting, you check all lines with actions sysinit, boot and bootwait, in that order,
and run what the command field says.
For GNU/Debian, this is
si::sysinit:/etc/init.d/rcS
So, it will run a script called /etc/init.d/rcS, which does stuff like loading a terminal font,
checking disks, mounting stuff... basic system habitability drudge work.
Then it will get all lines with action once and wait that have the desired runlevel in the
runlevels field, run their commands, and wait until the wait lines' commands are finished.
In GNU/Debian, for runlevel 5:
l5:5:wait:/etc/init.d/rc 5
What this particular script does is start all the services configured for that runlevel. Now,
let's see the details...
3.4 Step 3: Services
When you install a decently packaged software that needs to run without being manually started
by a user (think webserver), it should have provided you with a control script for itself, and placed
it in the standard place: /etc/init.d/.
There you will find many scripts. For example, there is one called /etc/init.d/networking
which, amazingly enough, controls the network.
For example, when /etc/init.d/networking stop is executed, it brings down the network,
and /etc/init.d/networking start brings the networking back up.
Some services support more or fewer commands, but all support stop, start and restart. To see
what is supported, call the script without arguments:
[mleeman@seraph ~]$ /etc/init.d/networking
Usage: /etc/init.d/networking {start|stop|restart|force-reload}
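If you are curious what such a control script typically looks like inside, here is a minimal sketch of the usual skeleton. It is wrapped in a function so it can be tried out directly; mydaemon is a made-up service name, and a real script would actually start and stop a daemon instead of echoing:

```shell
#!/bin/sh
# Minimal sketch of an /etc/init.d-style control script.
# "mydaemon" is hypothetical; replace the echo lines with real
# daemon start/stop logic in an actual script.
mydaemon_ctl() {
    case "$1" in
        start)   echo "Starting mydaemon" ;;
        stop)    echo "Stopping mydaemon" ;;
        restart) mydaemon_ctl stop; mydaemon_ctl start ;;
        *)       echo "Usage: $0 {start|stop|restart}"; return 1 ;;
    esac
}

mydaemon_ctl start
```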
CHAPTER 3. THE LINUX BOOT PROCESS
For each runlevel, there's a list of services that should be started, and a list of services that
should be stopped. On entering runlevel 5, for example, you may want to stop service httpd but
start service smb, or whatever. I strongly recommend you use a system management tool, like
Debian's rcconf, to handle this; they are simple and work just fine. But if you want to do it by
hand, or just want to know how that configuration is stored, read on :-)
For each runlevel N, there is a folder called /etc/rcN.d (on some systems /etc/rc.d/rcN.d).
Here is part of runlevel 5:
[mleeman@seraph ~]$ ls -al /etc/rc5.d/ | cut -c 52-
...
S10sysklogd -> ../init.d/sysklogd
S11klogd -> ../init.d/klogd
S14ppp -> ../init.d/ppp
S19slapd -> ../init.d/slapd
S20autofs -> ../init.d/autofs
...
S91apache2 -> ../init.d/apache2
S99rmnologin -> ../init.d/rmnologin
S99stop-bootlogd -> ../init.d/stop-bootlogd
As mentioned before, the links that start with K are to be stopped and those that start with
S are to be started. The numbers give them an order in which to be killed or started.
The stopping or starting is simply done by calling, for example
/etc/rc5.d/S20autofs start
Since S20autofs is a symbolic link to /etc/init.d/autofs, it's just the same as what we
used before to start the network service.
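The whole K/S mechanism can be sketched in a few lines of shell. This builds a throwaway fake runlevel directory rather than touching the real /etc/rc5.d (the link names below are just examples), but the two loops are essentially what the rc script does:

```shell
#!/bin/sh
# Sketch of what the rc script does for one runlevel directory:
# run K* links with "stop", then S* links with "start", in numeric order.
# A throwaway fake directory stands in for /etc/rc5.d here.
rcdir=$(mktemp -d)
for name in K20apache2 S10sysklogd S20autofs; do
    printf '#!/bin/sh\necho "%s $1"\n' "$name" > "$rcdir/$name"
    chmod +x "$rcdir/$name"
done

out=$(
    for script in "$rcdir"/K*; do "$script" stop; done
    for script in "$rcdir"/S*; do "$script" start; done
)
printf '%s\n' "$out"
rm -r "$rcdir"
```

Because the shell expands the globs in lexicographic order, the two-digit prefixes give exactly the kill/start ordering described above.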
After all that is done and all services are started, we get back to inittab.
3.5 Step 4: More inittab fun
Now init will get all lines with action respawn for the desired runlevel and start their processes.
respawn commands are restarted when they end, so they will be running pretty much all the time
as long as you are in this runlevel.
For GNU/Debian in runlevel 5:
1:2345:respawn:/sbin/getty 38400 tty1
2:23:respawn:/sbin/getty 38400 tty2
3:23:respawn:/sbin/getty 38400 tty3
4:23:respawn:/sbin/getty 38400 tty4
5:23:respawn:/sbin/getty 38400 tty5
6:23:respawn:/sbin/getty 38400 tty6
co:2345:respawn:/sbin/getty -L console 57600 vt220
The lines with id 1 through 6 run a program on the terminals you reach using ALT-F1 through
ALT-F6, which asks for your username. Yes, those are what you use to log in in text mode.
The line with co spawns a serial console on the serial port.
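The selection init performs here, picking all respawn lines whose runlevel field contains the target runlevel, can be sketched with awk over inittab-style lines (a trimmed sample is embedded below rather than read from a real /etc/inittab):

```shell
#!/bin/sh
# Sketch of init's respawn selection: keep entries whose action is
# "respawn" and whose runlevel field contains the target runlevel.
target=5
inittab='1:2345:respawn:/sbin/getty 38400 tty1
2:23:respawn:/sbin/getty 38400 tty2
co:2345:respawn:/sbin/getty -L console 57600 vt220'

matches=$(printf '%s\n' "$inittab" |
    awk -F: -v rl="$target" '$3 == "respawn" && index($2, rl) { print $4 }')
printf '%s\n' "$matches"
```

For target runlevel 5 this keeps the tty1 and serial-console lines but drops tty2, whose runlevel field "23" does not contain a 5.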
And voilà, you are booted and ready to log in.
3.6 Hands On
By now, the SheevaPlug device should be up and running. Log in to your device (user: root,
password: nosoup4u) and examine the /etc/inittab file.
3.7 References
http://www-128.ibm.com/developerworks/linux/library/l-linuxboot/
http://en.wikipedia.org/wiki/BzImage
http://sourceforge.net/projects/u-boot
http://www.faqs.org/docs/Linux-HOWTO/Kernel-HOWTO.html
Chapter 4
Boot Loaders
4.1 Introduction
The boot loader is the very first thing running after power on. Its task is to initialise (some of)
the hardware and provide a means for loading the kernel from some kind of storage and executing
it. Bootloaders also often have monitor functionality to read/write memory, program flash and so
on.
A lot of Linux compatible bootloaders exist. For Linux systems running on PCs the most
popular are LILO and GRUB, but they are not interesting for most embedded Linux systems as
they are x86 specific and only support booting from disks.
Most boot loaders are by their nature very platform/board specific, but three strive to be
portable: RedBoot, Das U-Boot and Barebox. A portable boot loader is very interesting as
you don't need to write or get familiar with a new boot loader every time you change hardware
platform.
4.2 RedBoot
RedBoot (Red Hat Embedded Debug and Bootstrap), is an advanced bootloader by Red Hat
written on top of the eCos embedded operating system. Features of special interest are:
Portable (ARM, CalmRISC16/32, Coldfire, FR-V, H8/300, x86, M68K, MIPS, OpenRISC, PPC,
SH, SPARC, V85x)
Interactive command line interface over serial and telnet
Boot scripting
TCP/IP stack with BOOTP and DHCP support
Image download from file, X/Y-modem, TFTP and HTTP
ELF, SREC and binary image formats, optionally GZIP compressed
Flash interface (NOR/NAND) with image system (FIS)
Read-only file system access (JFFS2, FAT, EXT2)
Integrated GDB stubs for easy debugging (serial and TCP/IP)
Boot support for eCos and Linux
The Flash Image System (FIS) is especially interesting as Linux can also parse it (see
CONFIG_MTD_REDBOOT_PARTS), so no special effort is needed to keep the bootloader's and kernel's
idea of the flash layout in sync.
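If you want the kernel to parse the FIS directory, the kernel configuration fragment would look something like this (option names as found in the kernel's MTD Kconfig; your board will need further MTD map/driver options on top of these):

```
CONFIG_MTD=y
CONFIG_MTD_REDBOOT_PARTS=y
```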
As Red Hat is no longer working on eCos, development activity around RedBoot has
unfortunately slowed down quite a bit.
RedBoot's eCos heritage also means that it has a fairly large source base (~100MB), where most
of the source code isn't relevant for RedBoot. This might lead to a steeper learning curve than
other dedicated boot loaders. The memory map is furthermore not optimised for loading Linux
kernels (e.g. on PPC, RedBoot is normally located at the bottom of the memory map, so a raw
Linux kernel cannot be loaded directly; the zImage target, with its small loader that moves
the kernel in place after loading, must be used instead).
RedBoot is licensed under the eCos License, which is for all intents the same as GPL.
4.3 Das U-Boot
U-Boot, the universal boot loader, is probably the most feature-rich, flexible and most actively
developed open source boot loader available. It is maintained by Wolfgang Denk of DENX Software
Engineering.
It started as a PowerPC specific boot loader (PPCBoot), but now also runs on ARM, Blackfin,
x86, M68K, MicroBlaze, MIPS and Nios (I and II) boards.
It is very much focused on booting Linux systems, and the development approach is also clearly
inspired by it (Git version control, coding style, reuse of Linux drivers, ...).
Features of special interest are:
Flash support (NOR, NAND, DataFlash)
Compression (GZIP, BZIP2)
Interactive command line interface and boot scripting
TCP/IP stack with BOOTP, DHCP, TFTP and NFS support
Lots of drivers (IDE, SCSI, MMC, PCMCIA, USB, LCD, I2C, SPI, ...)
x86 emulation for graphics card POST on non-x86
File systems (JFFS2, Cramfs, EXT2, FAT, ReiserFS, ...)
Boot splash images
FPGA configuration
U-Boot is licensed under the GPL.
4.4 Barebox
Barebox is a relatively new bootloader (2009). It started its life under the code name U-Boot v2
by Sascha Hauer from Pengutronix as a technology study to see if it was possible to merge the nice
user features of U-Boot with infrastructure concepts inspired by the Linux kernel (driver model,
POSIX, ..).
Barebox has many of the same features as U-Boot, a cleaner code base and command set, but
not as broad hardware support or popularity. It follows a relatively aggressive development flow
with monthly releases.
Barebox is also licensed under the GPL.
4.5 Conclusion
Both RedBoot and U-Boot are or have been in use within Barco and both are valid options, but
because of its active development and strong Linux focus we recommend going with U-Boot for
new development. Barebox has, to our knowledge, not been used within Barco yet, but it is also
a very interesting project to consider.
4.6 Hands On - Explore U-Boot
The SheevaPlug device runs U-Boot; log into the device over serial and poke around to discover
the hardware, using U-Boot commands. Inspect the environment that is stored in flash.
4.7 Hands On - Replace Bootloader
4.7.1 Introduction
The SheevaPlug uses U-Boot as a bootloader. If any custom hardware is made, it is always wise
to start from a well known reference design.
First of all, these kinds of evaluation boards are designed to allow customers to evaluate the
on-board functionality of the processor. As such, a lot of the hardware peripherals will be
accessible, or at the very least defined in the code (e.g. the IMMR registers) of both the
bootloader (Das U-Boot) and the Linux kernel.
Secondly, whenever there is a design error, a lot of users will have the same error, possibly
saving you time if someone else already encountered the problem (by providing a patch and/or
workaround). As we all know, this is often the case in the first revisions of new devices. It is not
unusual to see patches for these devices appearing on mailing lists for the bootloader (the most
likely place to tackle silicon bugs), or possibly even in the Linux kernel.
Deviation from the reference design should be taken with great care, and always in cooperation
with the person(s) doing the U-Boot and Linux kernel port. What can be a simple twist of the
pen for a hardware designer (re-connecting chip selects) can cause a lot of work in locating and
adjusting the relevant code snippets, especially in a start-up phase where we are not yet certain
what causes a particular problem.
A general rule of thumb should be: don't change anything if there is no paramount reason. While
some changes can be fixed rather rapidly, others can cause serious headaches (changing interrupt
lines) and maintenance problems for the remainder of the product life cycle. Changes are more
easily incorporated once the base platform is understood and ported: e.g. configuring memory
from a SO-DIMM device is easy since the settings can be read out via I2C from a small EEPROM
on the SO-DIMM; soldering the memory down is cheaper and more compact, but requires setting
the timings manually.
In the remainder of this chapter, we will have a look at the U-Boot configuration. Just as a
working JTAG probe is important, the same goes for a serial line, especially for early debugging
and U-Boot access. Again, we assume that this has been taken care of during early system design.
4.7.1.1 Getting the Source
U-Boot has a regular three-month release interval, not unlike the kernel. We use the 2010.06
release tarball. Later releases can be used as well, but the board support configuration has
changed somewhat. Download from ftp://ftp.denx.de/pub/u-boot/u-boot-2010.06.tar.bz2.
Configuration OPTIONS: These are selectable by the user and have names beginning with
CONFIG_.
Configuration SETTINGS: These depend on the hardware etc. and should not be meddled
with if you don't know what you're doing; they have names beginning with CFG_.
The options themselves are documented in the README.
Since we want to create a derived configuration from the reference board, we copy
include/configs/sheevaplug.h to include/configs/myplug.h. In order to build our variant,
we add the following lines to the Makefile (in the top source directory):
myplug_config: unconfig
@$(MKCONFIG) $(@:_config=) arm arm926ejs $(@:_config=) barco kirkwood
This target specifies that we will be building for the ARM architecture, processor type
arm926ejs, and a Marvell Kirkwood based board configuration1. The second to last option
indicates that we will be placing our board port under board/barco/ instead of board/
(Barco is using the board/barco/ directory, as was agreed upon within Barco2).
Since our port is based on board/Marvell/sheevaplug/, we copy the directory to provide
a base to work with.
[mleeman@neo u-boot-2010.06]$ cp -a board/Marvell/sheevaplug/ board/barco/myplug/
Since we opted to keep changes to a minimum, trying to leverage as much as possible from
the U-Boot functionality while keeping maintenance low, we chose not to do this. We carefully
inspect, validate and adjust the settings where needed; if your design is close to the reference
design, you will not need to make any other code changes.
While this is not strictly needed, U-Boot gives us the possibility to store the environment
in flash. Since this is a very powerful tool, we enable it by making certain that the following
defines are correct:
#define CFG_ENV_IS_IN_FLASH 1
#define CFG_ENV_ADDR (CFG_MONITOR_BASE + 0x40000)
#define CFG_ENV_SECT_SIZE 0x20000 /* 128K (one sector) for env */
#define CFG_ENV_SIZE 0x20000
/* Address and size of Redundant Environment Sector */
#define CFG_ENV_ADDR_REDUND (CFG_ENV_ADDR + CFG_ENV_SECT_SIZE)
#define CFG_ENV_SIZE_REDUND (CFG_ENV_SIZE)
This instructs U-Boot that an environment will be stored in flash at location CFG_MONITOR_BASE
+ 0x40000, of size 0x20000 (one sector). We also add a redundant environment sector.
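The resulting layout can be double-checked with a bit of shell arithmetic; CFG_MONITOR_BASE is board specific, and the 0x0 below is only a placeholder, not the SheevaPlug's real value:

```shell
#!/bin/sh
# Recompute the environment layout implied by the defines above.
MONITOR_BASE=$((0x0))                 # placeholder; board specific in reality
ENV_ADDR=$((MONITOR_BASE + 0x40000))
ENV_SECT_SIZE=$((0x20000))            # 128K, one flash sector
ENV_ADDR_REDUND=$((ENV_ADDR + ENV_SECT_SIZE))
printf 'env at %#x, redundant copy at %#x\n' "$ENV_ADDR" "$ENV_ADDR_REDUND"
```

The redundant copy simply sits in the sector directly after the primary one, which is why CFG_ENV_ADDR_REDUND is defined as CFG_ENV_ADDR plus CFG_ENV_SECT_SIZE.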
1 If your design is significantly different from the reference design, or if you require extensive and specific
functionality you need to add, it might be wise to completely branch the original port into a specific one.
2 Since Barco is creating a lot of U-Boot based boards, it is a good idea to provide a vendor (barco) directory to
place the boards in.
4.7.1.3 Building and booting
At this point you can try to build the bootloader. We specify the usual CROSS_COMPILE and
ARCH parameters (see Chapter 2). First, we configure the board; then we build; and finally, we
build the Marvell image.
make ARCH=arm CROSS_COMPILE=arm-linux- myplug_config
make ARCH=arm CROSS_COMPILE=arm-linux-
make ARCH=arm CROSS_COMPILE=arm-linux- u-boot.kwb
If all goes well, you should end up with a binary and a Marvell (kwb) image (u-boot.bin and
u-boot.kwb). We will use the Marvell image to load over the network and burn it to flash. For the
initial loading of the bootloader into flash, we copy the image to our tftpboot directory and burn
it with OpenOCD.
The first step is to get OpenOCD working (see Chapter 9 for more information about OpenOCD).
With this information, connect the OpenOCD JTAG emulator. OpenOCD is part of GNU/Debian and the
most recent versions (testing/unstable) have been tested and are known to work with the SheevaPlug.
Another option is to use the precompiled version available from http://www.openplug.org.
Start OpenOCD:
[marc@staleek Sheeva]$ openocd -f /usr/share/openocd/scripts/board/sheevaplug.cfg
If OpenOCD complains with a message similar to:
[marc@staleek Sheeva]$ openocd -f /usr/share/openocd/scripts/board/sheevaplug.cfg
Open On-Chip Debugger 0.3.0-in-development (2009-08-13-23:22) svn:r2529
$URL: http://svn.berlios.de/svnroot/repos/openocd/trunk/src/openocd.c $
For bug reports, read http://svn.berlios.de/svnroot/repos/openocd/trunk/BUGS
2000 kHz
jtag_nsrst_delay: 200
jtag_ntrst_delay: 200
dcc downloads are enabled
Error: unable to open ftdi device: device not found
Runtime error, file command.c, line 469:
[marc@staleek Sheeva]$
you might need to change the