We'd all like to use computers to their fullest capability. However, we'd also like to restrict their capability to do things we don't intend. These constraints pose a problem: how should our computers determine our intentions when the software we use is written by others? Most programming languages ignore this problem, running untrusted programs with completely open doors. Browsers run programs under a single policy that is mostly hard-coded (no file system access) but also complex, so it's often unclear what is actually protected. In this talk I'll describe my attempts to come up with a model that is both flexible and easy to understand.
This talk will:
- elaborate on what makes sandboxing difficult (code is data),
- outline past approaches to sandboxing (web browsers), and
- summarize the problems with past approaches (who watches the watchers?).
It will then describe a new approach that replaces the hard-coded, coarse-grained protections of browsers with declarative, fine-grained protections organized around the real-world effects of computers (syscalls). In outline, the approach separates untrusted software in apps from a tiny set of programmable policies. Each policy is advice that applies to a single syscall and decides whether or not to permit it.
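As a concrete sketch, a policy for the open() syscall might look like the Lua below. The hook name policy_open, its signature, and the scratch/ directory are hypothetical, chosen for illustration rather than taken from the actual interface:

```lua
-- Hypothetical policy attached to the 'open' syscall.
-- It receives the arguments the app passed in and returns
-- true to permit the call or false to deny it.
function policy_open(filename, mode)
  if mode == 'r' then
    return true                               -- reads are always fine
  end
  if string.find(filename, '^scratch/') then
    return true                               -- writes only under scratch/ (assumed layout)
  end
  return false                                -- deny everything else
end
```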
This approach is implemented in a fork of Lua for purely text-mode apps. The browser provides default policies, but tries to gradually empower each person to take ownership of the policies on their own browser without any mediation from others. In the process, it hopes to educate people on some basic aspects of programming.
The talk will describe the new challenges posed by this approach, including:
- educating people never to paste code into policies without understanding it
- educating people on the value of minimalism in policy code (and indeed all code)
- educating people on the need for policy code itself to be side-effect-free (see the sketch after this list)
- coaching people on good and poor changes to policy code when intended uses are disallowed by policies
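To illustrate the third point in this list: a policy should only inspect its arguments and return a decision, never perform an operation of its own. A hypothetical sketch of the contrast, again using an illustrative hook name:

```lua
-- Side-effect-free: only reads its arguments and returns a decision.
function policy_open(filename, mode)
  return mode == 'r'
end

-- Not side-effect-free: merely evaluating the policy writes to the
-- file system, which is exactly what policies are meant to guard.
function bad_policy_open(filename, mode)
  local log = io.open('policy.log', 'a')   -- a side effect
  log:write(filename, '\n')
  log:close()
  return mode == 'r'
end
```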