Breaking the Kubernetes Kill Chain: Host Path Mount
Linux principles and philosophy
2. Linux is a Unix-like
computer operating system (OS)
assembled under the model of free and
open-source software.
3. The defining component of Linux is
the Linux kernel, an operating system
kernel first released on 5 October 1991
by Linus Torvalds.
4. The development of Linux is one
of the most prominent examples
of free and open-source
software collaboration.
5. Everything is a file (including hardware).
Small, single-purpose programs.
Ability to chain programs together to
perform complex tasks.
Avoid captive user interfaces.
Configuration data stored in text.
6. UNIX systems have many powerful utilities
designed to create and manipulate files. The
UNIX security model is based around the
security of files.
By treating everything as a file, a
consistency emerges. You can secure access
to hardware in the same way as you secure
access to a document.
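Because device nodes appear as files, the familiar permission tools apply to them directly. A minimal sketch, using `/dev/null` (a character-device file present on every Linux system) and `/etc/hosts` (an ordinary text file); `/dev/sda` below is an example disk name, and changing its permissions requires root:

```shell
# Device nodes are files, so the tools that secure documents secure hardware too.
ls -l /dev/null      # owner, group, and permission bits, like any document
ls -l /etc/hosts     # the same permission model on an ordinary text file
# Restricting access to a disk works identically (requires root; sda is an example):
#   chmod 660 /dev/sda
```

The leading `c` in the first listing marks a character device; everything after it reads exactly as it does for a regular file.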
7. UNIX provides many small utilities that
perform one task very well.
When new functionality is required, the
general philosophy is to create a separate
program – rather than to extend an existing
utility with new features.
8. A core design feature of UNIX is that the
output of one program can be the input for
another. This gives the user the flexibility to
combine many small programs together to
perform a larger, more complex task.
9. Interactive commands are rare in UNIX. Most
commands expect their options and
arguments to be typed on the command line
when the command is launched.
10. The command completes normally, possibly
producing output, or generates an error
message and quits. Interactivity is reserved
for programs where it makes sense, for
example text editors (of course, there are
non-interactive text editors too).
11. Text is a universal interface, and many UNIX
utilities exist to manipulate text. Storing
configuration in text allows an administrator
to move a configuration from one machine to
another easily.
12. There are several revision control applications
that enable an administrator to track which
change was made on a particular day, and
provide the ability to roll back a system
configuration to a particular date and time.
13. Each of the commands that make up this
command-line program is a filter.
That is, each command takes an input,
usually from Standard Input, and “filters”
the data stream by making some change to
it, then sends the resulting data stream to
Standard Output.
14. Standard Input and Standard Output are
known collectively as STDIO.
The who command generates an initial
stream of data.
Each following command changes that data
stream in some manner, taking the Standard
Input and sending the modified data to
Standard Output for the next command to
manipulate.
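Slides 13 and 14 describe a pipeline built on `who`; the exact pipeline from the original slide is not shown here, so the following is a representative reconstruction. Because `who` may print nothing on a non-interactive system, `printf` stands in for its output:

```shell
# who lists logged-in sessions; each following filter reshapes the stream.
# printf simulates who's output so the pipeline runs anywhere.
printf 'alice tty1\nbob   pts/0\nalice pts/1\n' |
  awk '{ print $1 }' |   # keep only the user-name column
  sort |                 # order the names so duplicates are adjacent
  uniq -c                # count how many sessions each user has
```

Each stage takes Standard Input, changes the stream, and hands the result to the next stage on Standard Output; here the output shows two sessions for alice and one for bob.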
15. Each of the commands in this program is
fairly small, and each performs a specific
task.
The sort command, for example, does only
one thing: it sorts the data stream sent to it
via Standard Input and sends the results to
Standard Output.
16. It can perform numeric, alphabetic, and
alphanumeric sorts, in forward and reverse
order.
But it does nothing else: it only sorts, and it
is very, very good at that. Because it is very
small, having only 2,614 lines of code as
shown in the table below, it is also very fast.
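sort's single-mindedness is easy to see from the command line; a few representative invocations covering the modes the slide mentions:

```shell
# sort reads lines on Standard Input and writes ordered lines to Standard Output.
printf '10\n2\n33\n' | sort       # alphabetic: compares characters, so 10 < 2 < 33
printf '10\n2\n33\n' | sort -n    # numeric: 2, 10, 33
printf 'b\na\nc\n'   | sort -r    # reverse alphabetic: c, b, a
```

The first invocation is a useful reminder that the default comparison is textual: "10" sorts before "2" because the character "1" precedes "2".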
17. In the long run, the portability of shell
scripts can be more efficient than the
perceived efficiency of writing a program in
a compiled language, because shell scripts
run on many otherwise incompatible
systems. That is before even considering the
time required to compile and test such a
program.
18. It means that by using four command-line
commands, we are leveraging the work of
the programmers who created those
commands: over 7,000 lines of C code.
19. That is code that we do not have to create.
We are leveraging the efforts of those other,
under-appreciated programmers to
accomplish the task we have set for
ourselves.
20. Another aspect of software leverage is that
good programmers write good code and great
programmers borrow good code. Never
rewrite code that has already been written.