
Machine-Checked Mathematical Proofs in Everyday Software and Hardware Development
34th Chaos Communication Congress

Most working engineers view machine-checked mathematical proofs as an academic curiosity, if they have ever heard of the concept at all. In contrast, activities like testing, debugging, and code review are accepted as essential. They are woven into the lives of nearly all developers. In this talk, I will explain how I see machine-checked proofs enabling new everyday activities for developers of computer software and hardware. These activities have the potential to lower development effort dramatically while increasing our assurance that systems behave correctly and securely. I will give a cosmological overview of this field, answering the FAQs that seem to stand in the way of practicality; and I will illustrate the principles with examples from projects that you can clone from GitHub today, covering the computing stack from digital hardware design to cryptographic software and applications.

Today's developers of computer software and hardware are tremendously effective compared to their predecessors. We have found powerful ways of modularizing and validating our work. This talk is about ammunition for those two activities from a perhaps-unexpected source.

Modularity involves breaking a complex system into a hierarchy of simpler pieces, which may be written and understood separately. Structured programming (e.g., using loops and conditionals instead of gotos) helps us read and understand parts of a single function in isolation, and data abstraction lets us encapsulate important functionality in objects, with guarantees that other code can only access the private data by calling public methods. That way, we can convince ourselves that the encapsulated code upholds certain essential properties, regardless of which other code it is linked with. Systematic unit testing also helps enforce contracts for units of modularity. Each of these techniques can be rerun automatically to catch regressions in evolving systems, in a way that accurately points the finger of responsibility at particular modules.
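
To foreshadow where this talk is headed, here is a toy sketch of that data-abstraction discipline, written in the Coq proof assistant (the tool behind the case-study projects mentioned later). The COUNTER interface and its little implementation are my own illustration, not part of any project discussed here: the interface hides the representation type, and the two proved lemmas play the role of unit tests, except that they cover every possible input and are rechecked on every build.

    (* Clients see only the COUNTER interface, so any property proved
       against it holds no matter which other code we link with. *)
    Module Type COUNTER.
      Parameter t : Type.        (* the representation stays private *)
      Parameter zero : t.
      Parameter incr : t -> t.
      Parameter value : t -> nat.
      Axiom value_zero : value zero = 0.
      Axiom value_incr : forall c, value (incr c) = S (value c).
    End COUNTER.

    (* One implementation satisfying the contract; the checker reverifies
       the two lemmas every time this file is rebuilt. *)
    Module NatCounter <: COUNTER.
      Definition t := nat.
      Definition zero := 0.
      Definition incr := S.
      Definition value (c : t) := c.
      Lemma value_zero : value zero = 0.
      Proof. reflexivity. Qed.
      Lemma value_incr : forall c, value (incr c) = S (value c).
      Proof. reflexivity. Qed.
    End NatCounter.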

Validation is an important part of development that encompasses testing, debugging, code review, and anything else that we do to raise our confidence that the system behaves as intended. Experienced engineers know that validation tends to take up the majority of engineering effort. Often that effort involves mentally taxing activities that would not otherwise come up in coding. One example is thinking about test-case coverage, and another is including instrumentation that produces traces to consult during debugging.

It is not hard for working developers to imagine great productivity gains from better ways to break systems into pieces or raise our confidence in those pieces. The claim I will make in this talk is that a key source of such insights has been neglected: machine-checked mathematical proofs. Here the basic functionality is an ASCII language for defining mathematical objects, stating theorems about them, and giving proofs of those theorems. Crucially, an algorithm checks that purported proofs really do establish the theorems. By going about these activities in the style of programming, we inherit the usual supporting tools like IDEs, version control, continuous integration, and automated build processes. But how could so esoteric an activity as mathematical proof call for that kind of tooling, and what does it have to do with building real computer systems?
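
As a concrete taste, here is the whole workflow in a few lines of Coq; the function and lemma are my own toy example, not drawn from any of the projects below. We define a mathematical object (here, an accumulator-based list reversal), state a theorem relating it to a simpler specification, and give a proof script that the checker verifies step by step.

    Require Import Coq.Lists.List.
    Import ListNotations.

    (* Define an object: a tail-recursive list reversal. *)
    Fixpoint rev_tailrec {A} (l acc : list A) : list A :=
      match l with
      | [] => acc
      | x :: l' => rev_tailrec l' (x :: acc)
      end.

    (* State a theorem: it agrees with the standard library's rev. *)
    Lemma rev_tailrec_correct : forall (A : Type) (l acc : list A),
      rev_tailrec l acc = rev l ++ acc.
    Proof.
      induction l; intros; simpl.
      - reflexivity.
      - rewrite IHl, <- app_assoc. reflexivity.
    Qed.

Checking this file (for instance with coqc) either accepts the proof or pinpoints the first step that does not follow, and that check can run on every commit alongside the rest of the build.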

I will explain a shared vision to that end, developed along with many other members of my research community. Let me try to convince you that all of the following goals are attainable in the next 10 years.

  • We will have complete computer systems implementing moderately complex network servers for popular protocols, proved to implement those protocols correctly, from the level of digital circuits on up. We will remove all deployed code (hardware or software) from the trusted computing base, shifting our trust to much smaller specifications and proof checkers.
  • Hobbyists will be able to design new embedded computing platforms by mixing and matching open-source hardware and software components, also mixing and matching the proofs of these components, guaranteeing no bugs at the digital-abstraction level or higher, with no need for debugging.
  • New styles of library design will be enabled by the chance to attach a formal behavioral specification to each library. For instance, rank-and-file programmers will be able to assemble their own code for cryptographic protocols, writing code that looks like a reference implementation in Python but getting performance comparable to what experts handcraft in assembly today. Yet that benefit would come with no need to trust that library authors have avoided bugs or intentional backdoors, and it might even include automatic proofs of cryptographic security properties. (A tiny sketch of this style of specification appears right after this list.)
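
To give a flavor of that last point, here is a deliberately tiny, hedged sketch in Coq of what attaching a behavioral specification to a library operation can look like. The modulus, names, and "optimization" are all made up for illustration; none of this is drawn from a real cryptographic library. The point is only that clients can rely on the stated theorem rather than on trust in the author.

    Require Import ZArith Lia.
    Open Scope Z_scope.

    (* The specification: addition modulo a prime, written the obvious way. *)
    Definition p : Z := 2^127 - 1.
    Definition add_spec (x y : Z) : Z := (x + y) mod p.

    (* A mildly optimized implementation that avoids the general mod. *)
    Definition add_impl (x y : Z) : Z :=
      if x + y <? p then x + y else x + y - p.

    (* The behavioral contract, proved once and rechecked on every build. *)
    Lemma add_impl_correct : forall x y,
      0 <= x < p -> 0 <= y < p -> add_impl x y = add_spec x y.
    Proof.
      intros x y Hx Hy; unfold add_impl, add_spec.
      destruct (x + y <? p) eqn:E.
      - apply Z.ltb_lt in E.
        rewrite Z.mod_small by lia. reflexivity.
      - apply Z.ltb_ge in E.
        rewrite <- (Z_mod_plus_full (x + y) (-1) p).
        rewrite Z.mod_small by lia. lia.
    Qed.

The same shape, a short mathematical specification, an efficient implementation, and a machine-checked theorem connecting them, is what the projects mentioned below scale up to realistic cryptographic code.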

The main technical topics I will cover to explain my optimism:

  • The basic functionality of proof assistants and why we should trust their conclusions
  • How to think about system decomposition with specifications and proofs, including why, for most components, we do not need to worry about specification mistakes
  • The different modes of applying proof technology to check or generate components
  • The engineering techniques behind cost-effective proof authoring for realistic systems
  • A hardware case study: Kami, supporting component-based digital hardware authoring with proofs
  • A software case study: Fiat Cryptography, supporting correct-by-construction auto-generation of fast code for elliptic-curve cryptography
  • Pointers to where to look next, if you would like to learn more about this technology

Speaker: Adam Chlipala