I’m super excited about this blogpost. The approach is so counterintuitive, and yet the results are so much better than anything else that we’ve tried for memory safety. We finally understand why.
https://security.googleblog.com/2024/09/eliminating-memory-safety-vulnerabilities-Android.html
Why? The pattern is consistent across all projects that the cited "large scale" study analyzed. It's also consistent with what we saw when we looked at Android, which was not part of the study. And when we changed development practices within Android, the results matched what we would expect based on the half-life metric.
Studies that approach this from the opposite angle ("how much does it cost to find the next vulnerability in the same codebase?") show a similar result, e.g. "On the Exponential Cost of Vulnerability Discovery": https://mboehme.github.io/paper/FSE20.EmpiricalLaw.pdf
There's a finite number of vulnerabilities in a codebase. As their density drops, the cost of finding the next one rises.
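To make that intuition concrete, here's a rough toy model (my own sketch, not from the blog post or the paper): it assumes a made-up starting count of 1000 vulnerabilities and a hypothetical 2.5-year half-life, and shows how the expected effort to find the next vulnerability grows as the remaining count shrinks.

```python
def remaining_vulns(initial: float, years: float, half_life_years: float) -> float:
    """Exponential decay: after each half-life, half of the remaining vulns are gone."""
    return initial * 0.5 ** (years / half_life_years)

initial_vulns = 1000.0   # hypothetical starting count
half_life = 2.5          # hypothetical half-life in years

for year in range(0, 11, 2):
    n = remaining_vulns(initial_vulns, year, half_life)
    # If the chance of stumbling on a vuln is proportional to its density,
    # the expected effort per new find scales like 1 / n, i.e. it doubles
    # every half-life relative to the starting point.
    relative_cost = initial_vulns / n
    print(f"year {year:2d}: ~{n:6.1f} vulns remaining, "
          f"cost per new find ~{relative_cost:4.1f}x the year-0 cost")
```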
@jeffvanderstoep Thanks for your reply! I don't doubt the validity of your measurements, but I'd push back on two things: