Abstract
Application sandboxes allow developers to take an unusual stance: not that our systems will be bug-free, with bugs the corner-case; but that in fact there will be bugs, bugs as the rule, bugs that will be exploited in the messiest, ugliest way.
(I won't mention current events. But we'll know what they are...)
For this talk, I propose speaking about the design of a CGI framework that assumes exactly that: that its network-touching components will be exploited.
After all, CGI frameworks have a celestially vast attack surface: URL query strings; cookies and HTTP headers; and beneath and beyond it all, form parsing. Combine these attack vectors with validation: at best validation of simple types, and more terrifyingly (and more usually) via external libraries such as libpng.
In reviewing CGI frameworks in C for some recent work, I noticed less a lack of security focus than a parade committee for exploits. Even given my own small demands for CGI security, I was led to ask myself: can I do better than this?
The topic would necessarily focus on available sandbox techniques (e.g., systrace, Capsicum) and their practical pros and cons (portability, ease of implementation, documentation, etc.). After all, if we make mistakes in deploying our sandbox, it's just more ticker-tape for the parade.
The CGI framework in question, kcgi, is one I use for my own small purposes. Obviously it's ISC-licensed, well-documented C code, and will be mentioned as little as possible beyond as an exemplar of how easy (or hard!) it can be to write portable sandboxes. In short, this isn't about kcgi, but about systrace, Capsicum, Darwin's sandbox, and so on.
Speaker bio
Most of my open-source work focusses on UNIX documentation, e.g., the mandoc suite (now captained by schwarze@) and its constellation of related tools, such as pod2mdoc, docbook2mdoc, etc. Earlier work focussed more on security, from the experimental mult kernel container on OpenBSD and NetBSD to sysjail. In general, I dislike computers and enjoy the sea.
Bugs Ex Ante by Kristaps Dzonsons
1. Thema
In brief, the situation.
You develop software.
People use your software.
2. Coda
The situation, in brief.
You develop software that is certainly broken.
People are going to exploit it.
3. Bugs Ex Ante
Kristaps Dzonsons
September 27, 2014
Or: how to protect system resources from the weakest link of the
system: the developer.
5. Coda
write defensive code, use a team of auditors, QA
use up-to-date, audited libraries with a history of attention to
security
use a language with formal underpinnings and construct
proofs of correctness
run on systems supporting your defensive strategy
ride your unicorn to work every day.
6. Adjustment: Expense
write “defensive” code, use a team of auditors, QA
use up-to-date, audited libraries with a history of attention to
security
use a language with formal underpinnings and construct
proofs of correctness
run on systems supporting your defensive strategy
7. Adjustment: time
write “defensive” code, use a team of auditors, QA
use up-to-date, audited libraries with a history of attention to
security
use a language with formal underpinnings and construct
proofs of correctness C
run on systems supporting your defensive strategy
9. Don’t Bring a Dog to a Cat Show
Invariants:
economics
buggy software
Variables:
valuable resources
10. Minimising the Goal Function
How do we do this?
resources ← software → users
resources → software ← users
Resource constraint:
resources ← software → users
resources → software ← users
Consider how it’s been done to date. In the beginning. . .
13. File Permissions (CTSS)
Each U.F.D. contains information about the location
and contents of the various files which the user has
created. The U.F.D. is associated with a problem
number and a programmer number. Also associated
with certain problem numbers are “common files” – file
directories which contain files of common interest and are
directly accessible to all users on the problem number.
Certain of the common files associated with the
system programmers’ problem number (M1416) contain
information of general utility and are accessible to all
users.
–CTSS Programmer’s Guide (AA.2 12/69): October 1965, MIT.
15. Segment Positions (Multics)
Storage is logically organized in separately named
data storage segments. Associated with each segment is
an access control list, an open-ended list of names of
users who are permitted to reference the segment.
. . . Whenever the process attempts to access a segment
or other object cataloged by the storage system, the
principal identifier of the process is compared with those
appearing in the access control list of the object; if no
match is found access is not granted.
–Jerry Saltzer, Protection and the Control of Information Sharing
in Multics: 1974, MIT.
18. Capabilities (Hydra)
First, we must assume that any user-level program
contains bugs and may even be malevolent. We therefore
cannot allow any single user or application to
“commandeer” the system to the detriment of others. By
implication, we must prevent programs which define
policies direct access to hardware or data which could be
(mis)used to destroy another program. That is–such
programs must execute in a protected environment.
–R. Levin et al., Policy/Mechanism Separation in Hydra: CMU,
1975.
19. Unfortunately...
Figure: Paul Karger and Roger Schell, 1984. Credit: Tom Van Vleck
. . . They tried to break Multics security on the MIT
GE-645 that we all used as our timesharing utility and
development build & exposure site.
And break it they did. . . (1972–1974)
20. What Went Wrong? (Multics)
The large number of programs, as well as the very
high internal intricacy level, frustrates line-by-line
auditing for errors, misimplementation, or intentionally
planted trapdoors.
Economics. . . a function could be implemented more
cheaply [than] in the most protected region.
Rush to get on the air.
Lack of understanding.
–Jerry Saltzer, Protection and the Control of Information Sharing
in Multics: 1974, MIT.
21. What Went Wrong? (Hydra)
Subsystem construction still suffers from being ad
hoc, there being inadequate software support for
managing the programs, data structures, and
documentation which comprise the subsystem.
–W. Wulf and S. Harbison, Reflections in a pool of processors–An
experience report on C.mmp/Hydra: CMU, 1978.
22. What about UNIX?
In many ways, UNIX is a very conservative system.
Only a handful of its ideas are genuinely new. In fact, a
good case can be made that it is in essence a modern
implementation of M.I.T.’s CTSS system.
The UNIX system kernel and much of the software
were written in a rather open environment, so the
continuous, careful effort required to maintain a fully
secure system has not always been expended; as a result,
there are several security problems.
–D. M. Ritchie, The UNIX Time-Sharing System: A Retrospective:
1976, AT&T Bell Labs.
24. UNIX: “As Time Goes By”
1971 chmod(2), setuid(2) (V1 UNIX)
1979 chroot(2) (V7 UNIX)
1982 setrlimit(2) (4.1cBSD)
2000 jail(8) (FreeBSD 4.0)
2002 systrace(4) (OpenBSD 3.2)
2003 POSIX.1e (FreeBSD 5.0, Mac OS X 10.4)
2007 kauth(9) (NetBSD 4.0)
2007 sandbox_init(3) (Mac OS X 10.5)
2012 Capsicum (FreeBSD 9.0)
25. Concepts
Labelling: limit access only to labelled environment.
POSIX.1e
chmod(2), setuid(2)
Containers: limit the environment.
chroot(2)
setrlimit(2)
jail(8)
Capabilities: limit access to the environment.
systrace(4)
Capsicum
There’s lots of overlap.
26. chmod(2), setuid(2)
part of the original UNIX V1
forms the basis of privsep/privdrop (along with fork(2) et
al.)
On privilege separation...
Doable in simple programs
Requires complicated and very detailed programming
–Theo de Raadt: OpenCON, 2005
28. setrlimit(2)
Establish limits on resources, originally:
CPU time
data segment size
stack segment size
core file size
physical memory size
Now also (OpenBSD-current):
largest file size
maximum open files
maximum size of locked memory
maximum simultaneous processes for user
These can be set and unset at will, so an attacker with arbitrary
power can simply change the resource limits himself.
29. jail(8)
System interface by Poul-Henning Kamp in FreeBSD 4.0, further
extended in FreeBSD 5.1 (jail_attach(2)) and FreeBSD 8.0
(jail_set(2)).
Extends the chroot(2) concept into a constrained view of users,
network, files, etc.
Must be root. Cannot be recursive.
30. systrace(4)
Developed by Niels Provos (Improving Host Security with System
Call Policies, USENIX Security 2003), now only in OpenBSD.
Found vulnerable by Robert Watson, Exploiting Concurrency
Vulnerabilities in System Call Wrappers, WOOT 2007.
Uses the /dev/systrace device (open(2), ioctl(2), read(2))
to set a resource limitation policy (white-list, black-list) on process
and process children.
Must be root; can only be applied to children or other processes.
(See privsep.)
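For flavour, a hand-written policy in the shape systrace(1) produces; the program path and filenames here are illustrative only:

```
Policy: /usr/local/bin/mycgi, Emulation: native
	native-fsread: filename eq "/etc/resolv.conf" then permit
	native-fsread: filename match "/usr/lib/libc.so.*" then permit
	native-mmap: permit
	native-exit: permit
	native-connect: deny[eperm]
```

White-list everything the program legitimately does; everything else is denied (here, network connects fail with EPERM).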
31. POSIX.1e: ACL
Organised under TrustedBSD, April 2000, merged into FreeBSD
5.0 and Darwin 10.6.
acl(3) File-system (VFS) access control. FreeBSD, Darwin.
(Slightly differing.)
setfacl(3) POSIX-compliant (not yet?) version.
32. POSIX.1e: MAC
Derived from TrustedBSD, April 2000, merged into FreeBSD 5.0
and Darwin 10.6 (and indirectly into NetBSD’s kauth(9) and
secmodel(9)).
mac(3), mac(4) FreeBSD interface. Full policy framework for
MLS, Biba, etc. Manuals inspired by the Voynich
manuscript.
sandbox_init(3) Darwin 10.6 one-function entry into MAC
sandbox. Manuals inspired by haiku.
33. kauth(9)
In-kernel authorisation framework (like FreeBSD’s mac(9), etc.).
Inspired by Mac 10.4. Not exposed to user-land, but would enable
a user-land equivalent to mac(4), sandbox_init(3), etc.
34. sandbox_init(3)
Mandatory access control interface introduced in Darwin 10.6.
Based on FreeBSD and POSIX.1e.
Almost no documentation. See OpenSSH’s sandbox-darwin.c
for some significant “gotchas”.
Limitation profiles available: kSBXProfileNoInternet,
kSBXProfileNoNetwork, kSBXProfileNoWrite,
kSBXProfileNoWriteExceptTemporary,
kSBXProfilePureComputation.
35. Capsicum
Recent capabilities (see rights(4)) innovation on FreeBSD,
inherited from TrustedBSD.
Sits between systrace(4) and sandbox_init(3) in terms of
complexity: requires consideration of each resource, but (in theory)
doesn’t need a fork(2) for processing children (except for process
limitations, pdfork(2)).
Used in bspatch(1), bsdiff(1), tcpdump(1), fetch(1),
bzip2(1), syslogd(8), . . .
Requires a significant addition of functions: pdfork(2), . . .
36. Case Study: OpenSSH
Designing software resistant to developer bugs is hard. Too hard.
Consider one of the canonical sandboxed applications, OpenSSH
(5.8).
Introduce sandboxing of the pre-auth privsep child
using an optional sshd_config(5)
“UsePrivilegeSeparation=sandbox” mode that enables
mandatory restrictions on the syscalls the privsep child
can perform. This intention is to prevent a compromised
privsep child from being used to attack other hosts (by
opening sockets and proxying) or probing local kernel
attack surface.
37. One Step Ahead, Two Steps Behind
% wc -l *sandbox*
122 sandbox-capsicum.c
98 sandbox-darwin.c
72 sandbox-null.c
97 sandbox-rlimit.c
240 sandbox-seccomp-filter.c
200 sandbox-systrace.c
24 ssh-sandbox.h
853 total
This is ridiculously complicated!
38. Methodology
Sandbox method for systrace(4):
1. parent fork(2)s a child and waits for it to SIGSTOP
2. child emits a SIGSTOP
3. parent prepares the systrace(4) policy for the child
4. parent sends SIGCONT to the child
5. child continues in the sandbox
Sandbox method on other platforms:
1. parent fork(2)s a child
2. child prepares its own sandbox environment
3. child continues in the sandbox
39. Case Study: kcgi
% wc -l sandbox*
72 sandbox-darwin.c
249 sandbox-systrace.c
366 sandbox.c
687 total
40. Methodology
Inherit structure of OpenSSH.
Work around the CGI environment: not a root process, already in a
chroot(2), with an unknown number of file descriptors already open.
41. Questionable Morals
The moral is obvious. You can’t trust code that you
did not totally create yourself. (Especially code from
companies that employ people like me.) No amount of
source-level verification or scrutiny will protect you from
using untrusted code.
–Ken Thompson, Reflections on Trusting Trust, 1984.