Cryptographic hash functions are everywhere, yet modeling the characteristics of their real-world occurrences is surprisingly complicated when trying to prove security. We argue that seemingly convenient features of classical mathematics can actually make it harder to model them correctly.
Did you ever wonder why programmers use hash functions without keys, while cryptographers only proved the implemented protocol secure for a keyed hash function? Did you ever want to have your passwords hashed using a random oracle for maximum security? If you are unhappy because a microkernel implementation can be proven to do what it is supposed to do, but your favourite cryptographic protocol cannot, then this talk may be for you.
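To make the keyed/unkeyed gap concrete, here is a minimal sketch in Python. The unkeyed call is how hash functions are typically deployed in practice; the keyed call stands in for the keyed hash *family* that security proofs usually assume (using BLAKE2b's `key` parameter purely as an illustration; the key value itself is made up).

```python
import hashlib

msg = b"attack at dawn"

# Unkeyed hash, as programmers typically deploy it:
# one single, fixed, public function.
unkeyed = hashlib.sha256(msg).hexdigest()

# Keyed hash, as many security proofs assume: the function is
# drawn from a family indexed by a key. BLAKE2b's `key` parameter
# stands in for that index here (key value is illustrative only).
key = b"family-index-0001"
keyed = hashlib.blake2b(msg, key=key).hexdigest()

# Collision resistance is only cleanly defined for the keyed
# family: for any single fixed function, some collision exists
# mathematically -- we just don't know one.
print(unkeyed != keyed)  # prints True
```

The point of the sketch: properties such as collision resistance are stated over the random choice of the key, which has no direct counterpart when software hard-codes one fixed function.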
We explore how the way we do classical math leads to deviations between cryptographic hash functions as implemented and how they are modeled in proofs, and what could be done about that.
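One such deviation can be sketched directly. Proofs often idealize a hash function as a random oracle, which can be simulated by lazy sampling: every fresh query gets an independent uniformly random answer, recorded in a table for consistency. A real hash function, by contrast, is a fixed public algorithm anyone can run. This is a minimal sketch of the lazy-sampling idea, not any particular proof technique.

```python
import os

class RandomOracle:
    """Lazily-sampled random oracle: fresh queries get independent
    uniformly random digests; repeats are answered from the table."""

    def __init__(self, digest_size: int = 32):
        self.table: dict[bytes, bytes] = {}
        self.digest_size = digest_size

    def query(self, message: bytes) -> bytes:
        if message not in self.table:
            self.table[message] = os.urandom(self.digest_size)
        return self.table[message]

ro = RandomOracle()
assert ro.query(b"hello") == ro.query(b"hello")  # consistent replies
```

Unlike SHA-256, this object has no short description an adversary could exploit, which is exactly what makes it convenient in proofs and impossible to implement.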
We focus on questions like:
- How we can be forced to worry about collisions that no one actually knows
- How it can be that proofs in an exact science like math need to be interpreted
- Why modern cryptographers cannot prove something under the assumption that "hash function X is secure", while programmers have to build their software on exactly that assumption
- Whether "proving A" is always the same as "proving A"
- How to reasonably measure precomputation complexity in cryptographic attacks
- And finally: Why we may need different mathematical foundations to formalize cryptography
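The precomputation question above can be illustrated with a toy dictionary attack, a minimal sketch with made-up example data: an expensive offline phase builds a lookup table, after which the online inversion of any covered digest is a single lookup. How to charge for the offline phase is precisely the measurement question.

```python
import hashlib

# Offline phase: hash an entire candidate space up front.
# Whether (and how) this cost counts toward the attack's
# complexity is the contested modeling question.
candidates = [f"pin-{i:04d}".encode() for i in range(10_000)]
table = {hashlib.sha256(c).hexdigest(): c for c in candidates}

# Online phase: inverting any digest from the space is now a
# constant-time lookup, independent of the offline cost.
target = hashlib.sha256(b"pin-1234").hexdigest()
print(table[target])  # prints b'pin-1234'
```

Counting only the online phase makes the attack look absurdly cheap; counting the full offline phase can make generic bounds meaninglessly pessimistic.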