Information Leakage Tools

This is a list of academic publications related to the theory and implementation of our information leakage tools.

Author ordering is alphabetical for most publications.

Time Protection: The Missing OS Abstraction [GYCH19]

Qian Ge, Yuval Yarom, Tom Chothia and Gernot Heiser

In Proceedings of the Fourteenth EuroSys Conference (EuroSys 2019), pages 340–356, ACM. 2019.
Abstract: Timing channels enable data leakage that threatens the security of computer systems, from cloud platforms to smartphones and browsers executing untrusted third-party code. Preventing unauthorised information flow is a core duty of the operating system; however, present OSes are unable to prevent timing channels. We argue that OSes must provide time protection, the temporal equivalent of the established memory protection, for isolating security domains. We examine the requirements of time protection, present a design and its implementation in the seL4 microkernel, and evaluate efficacy and cost on x86 and Arm processors.

Compositionality Results for Quantitative Information Flow [KCP14]

Yusuke Kawamoto, Konstantinos Chatzikokolakis and Catuscia Palamidessi

In Proceedings of the 11th International Conference on Quantitative Evaluation of Systems (QEST 2014), volume 8657 of Lecture Notes in Computer Science, pages 368–383, Springer. September 2014.
Abstract: In the min-entropy approach to quantitative information flow, the leakage is defined in terms of a minimization problem, which, in the case of large systems, can be computationally rather heavy. The same happens for the recently proposed generalization called g-vulnerability. In this paper we study the case in which the channel associated to the system can be decomposed into simpler channels, which typically happens when the observables consist of several components. Our main contribution is the derivation of bounds on the g-leakage of the whole system in terms of the g-leakages of its components.
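
For a channel small enough to enumerate, the min-entropy leakage underlying these bounds can be computed directly as the log-ratio of posterior to prior vulnerability. A minimal sketch (the prior and channel matrix below are invented examples, not taken from the paper):

```python
import math

def min_entropy_leakage(prior, channel):
    """Min-entropy leakage in bits of channel[x][y] = p(y | x) under prior
    p(x): log2 of the ratio of posterior to prior vulnerability."""
    v_prior = max(prior)
    v_post = sum(max(prior[x] * channel[x][y] for x in range(len(prior)))
                 for y in range(len(channel[0])))
    return math.log2(v_post / v_prior)

# Invented 2-input, 2-output channel: row x gives the distribution p(y | x).
prior = [0.5, 0.5]
channel = [[0.9, 0.1],
           [0.2, 0.8]]
print(min_entropy_leakage(prior, channel))  # log2(0.85 / 0.5) ≈ 0.77 bits
```

The paper's contribution is bounding this quantity for a composed channel without enumerating the full joint matrix, which the direct computation above requires.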

LeakWatch: Estimating Information Leakage from Java Programs [CKN14]

Tom Chothia, Yusuke Kawamoto and Chris Novakovic

In Proceedings of the 19th European Symposium on Research in Computer Security (ESORICS 2014) Part II, volume 8713 of Lecture Notes in Computer Science, pages 219–236, Springer. September 2014.
Abstract: Programs that process secret data may inadvertently reveal information about those secrets in their publicly-observable output. This paper presents LeakWatch, a quantitative information leakage analysis tool for the Java programming language; it is based on a flexible "point-to-point" information leakage model, where secret and publicly-observable data may occur at any time during a program's execution. LeakWatch repeatedly executes a Java program containing both secret and publicly-observable data and uses robust statistical techniques to provide estimates, with confidence intervals, for min-entropy leakage (using a new theoretical result presented in this paper) and mutual information. We demonstrate how LeakWatch can be used to estimate the size of information leaks in a range of real-world Java programs.

A Tool for Estimating Information Leakage [CKN13a]

Tom Chothia, Yusuke Kawamoto and Chris Novakovic

In Proceedings of the 25th International Conference on Computer Aided Verification (CAV 2013), volume 8044 of Lecture Notes in Computer Science, pages 690–695, Springer. July 2013.
Abstract: We present leakiEst, a tool that estimates how much information leaks from systems. To use leakiEst, an analyst must run a system with a range of secret values and record the outputs that may be exposed to an attacker. Our tool then estimates the amount of information leaked from the secret values to the observable outputs of the system. Importantly, our tool calculates the confidence intervals for these estimates, and tests whether they represent real evidence of an information leak in the system. leakiEst is freely available and has been used to verify the security of a range of real-world systems, including e-passports and Tor.
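
The core quantity estimated from trial runs can be illustrated with a plug-in mutual information estimate over (secret, observation) pairs; this sketch shows only the point estimate and omits the bias corrections, confidence intervals, and zero-leakage tests that leakiEst computes (the sample data is invented):

```python
import math
from collections import Counter

def empirical_mutual_information(pairs):
    """Plug-in estimate, in bits, of the mutual information between the
    secret and the observation from a list of (secret, observation) pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    ps = Counter(s for s, _ in pairs)
    po = Counter(o for _, o in pairs)
    return sum(c / n * math.log2(c * n / (ps[s] * po[o]))
               for (s, o), c in joint.items())

# Invented runs of a system whose output equals its one-bit secret.
runs = [(0, 0), (0, 0), (1, 1), (1, 1)]
print(empirical_mutual_information(runs))  # 1.0 bit: the secret leaks entirely
```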

Probabilistic Point-to-Point Information Leakage [CKNP13]

Tom Chothia, Yusuke Kawamoto, Chris Novakovic and David Parker

In Proceedings of the 26th IEEE Computer Security Foundations Symposium (CSF 2013), pages 193–205, IEEE Computer Society. June 2013.
Abstract: The outputs of a program that processes secret data may reveal information about the values of these secrets. This paper develops an information leakage model that can measure the leakage between arbitrary points in a probabilistic program. Our aim is to create a model of information leakage that makes it convenient to measure specific leaks, and provide a tool that may be used to investigate a program's information security. To make our leakage model precise, we base our work on a simple probabilistic, imperative language in which secret values may be specified at any point in the program; other points in the program may then be marked as potential sites of information leakage. We extend our leakage model to address both non-terminating programs (with potentially infinite numbers of secret and observable values) and user input. Finally, we show how statistical approximation techniques can be used to estimate our leakage measure in real-world Java programs.
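
The point-to-point idea can be illustrated by instrumenting a small probabilistic program with markers at the secret and observable points, then estimating the leakage between those two points from repeated runs. The `secret`/`observe` helpers below are invented for illustration and are not the paper's syntax:

```python
import math
import random
from collections import Counter

trace = []  # (marker, value) pairs recorded during one program run

def secret(v):
    """Mark v as a secret value at this point in the program."""
    trace.append(("secret", v))
    return v

def observe(v):
    """Mark v as observable by the attacker at this point."""
    trace.append(("observe", v))
    return v

def program(rng):
    """A tiny probabilistic program: the attacker sees the secret's parity."""
    trace.clear()
    s = secret(rng.randrange(4))
    observe(s % 2)

rng = random.Random(0)
pairs = []
for _ in range(4000):
    program(rng)
    secrets = tuple(v for m, v in trace if m == "secret")
    observed = tuple(v for m, v in trace if m == "observe")
    pairs.append((secrets, observed))

# Plug-in mutual information between the two marked points, in bits.
n = len(pairs)
joint, ps, po = Counter(pairs), Counter(s for s, _ in pairs), Counter(o for _, o in pairs)
mi = sum(c / n * math.log2(c * n / (ps[s] * po[o])) for (s, o), c in joint.items())
print(mi)  # ≈ 1 bit: the parity reveals one of the secret's two bits
```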

A Statistical Test for Information Leaks Using Continuous Mutual Information [CG11]

Tom Chothia and Apratim Guha

In Proceedings of the 24th IEEE Computer Security Foundations Symposium (CSF 2011), pages 177–190, IEEE Computer Society. June 2011.
Abstract: We present a statistical test for detecting information leaks in systems with continuous outputs. We use continuous mutual information to detect the information leakage from trial runs of a probabilistic system. It has been shown that there is no universal rate of convergence for sampled mutual information; however, when the leakage is zero, and under some reasonable conditions, we establish a rate for the sampled estimate, and show that it can converge to zero very quickly. We use this result to develop a statistical test for information leakage, and we use our new test to analyse a number of possible fixes for a time-based information leak in e-passports. We compare our new test with existing statistical methods, and we find that our test outperforms these other tests in almost all cases, and in one case in particular, ours is the only statistical test that can detect an information leak.
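
The general idea of testing an estimated leakage against the zero-leakage null hypothesis can be sketched with a generic permutation test; note that the paper instead derives an analytic convergence rate for sampled continuous mutual information, and the discrete data below is invented:

```python
import math
import random
from collections import Counter

def mi_bits(pairs):
    """Plug-in mutual information estimate in bits."""
    n = len(pairs)
    joint = Counter(pairs)
    ps = Counter(s for s, _ in pairs)
    po = Counter(o for _, o in pairs)
    return sum(c / n * math.log2(c * n / (ps[s] * po[o]))
               for (s, o), c in joint.items())

def leak_p_value(secrets, outputs, trials=500, seed=1):
    """Shuffle the secrets to simulate the zero-leakage null hypothesis and
    report how often a shuffled estimate reaches the observed one."""
    rng = random.Random(seed)
    observed = mi_bits(list(zip(secrets, outputs)))
    shuffled = list(secrets)
    hits = 0
    for _ in range(trials):
        rng.shuffle(shuffled)
        hits += mi_bits(list(zip(shuffled, outputs))) >= observed
    return hits / trials

# Invented data: the output agrees with the secret bit 80% of the time.
secrets = [i % 2 for i in range(100)]
outputs = [s if i % 5 else 1 - s for i, s in enumerate(secrets)]
print(leak_p_value(secrets, outputs))  # small p-value: evidence of a leak
```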

Statistical Measurement of Information Leakage [CCG10]

Konstantinos Chatzikokolakis, Tom Chothia and Apratim Guha

In Proceedings of the 16th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2010), volume 6015 of Lecture Notes in Computer Science, pages 390–404, Springer. March 2010.
Abstract: Information theory provides a range of useful methods to analyse probability distributions and these techniques have been successfully applied to measure information flow and the loss of anonymity in secure systems. However, previous work has tended to assume that the exact probabilities of every action are known, or that the system is non-deterministic. In this paper, we show that measures of information leakage based on mutual information and capacity can be calculated automatically from trial runs of a system alone. We find a confidence interval for this estimate based on the number of possible inputs, observations and samples. We have developed a tool to automatically perform this analysis and we demonstrate our method by analysing a Mixminion anonymous remailer node.
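
Measuring leakage from trial runs alone can be sketched in two steps: estimate the channel matrix p(o | x) by sampling the system on each input, then compute mutual information under a chosen prior. The noisy system below is an invented example, and this sketch omits the confidence interval that the paper derives:

```python
import math
import random
from collections import Counter

def estimate_channel(system, inputs, runs=2000, seed=1):
    """Estimate p(o | x) for each input x from repeated trial runs."""
    rng = random.Random(seed)
    return {x: {o: c / runs
                for o, c in Counter(system(x, rng) for _ in range(runs)).items()}
            for x in inputs}

def mutual_information(prior, channel):
    """I(X; O) in bits for a prior p(x) and a channel p(o | x)."""
    p_o = Counter()
    for x, px in prior.items():
        for o, pox in channel[x].items():
            p_o[o] += px * pox
    return sum(px * pox * math.log2(pox / p_o[o])
               for x, px in prior.items()
               for o, pox in channel[x].items())

# Invented noisy system: the output equals the input 90% of the time.
def noisy(x, rng):
    return x if rng.random() < 0.9 else 1 - x

channel = estimate_channel(noisy, [0, 1])
print(mutual_information({0: 0.5, 1: 0.5}, channel))  # close to 1 - H(0.1) ≈ 0.53 bits
```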

Calculating Probabilistic Anonymity from Sampled Data [CCG09]

Konstantinos Chatzikokolakis, Tom Chothia and Apratim Guha

Technical report CSR-09-10, School of Computer Science, University of Birmingham. May 2009.
Abstract: This paper addresses the problem of calculating the anonymity of a system statistically from a number of trial runs. We show that measures of anonymity based on capacity can be estimated, by showing that the Blahut-Arimoto algorithm converges for sampled data. We obtain bounds on the error of the estimated value by calculating the distribution of mutual information when one distribution is known and one unknown. This leads to finding the variance of the estimation of anonymity in terms of the numbers of samples, inputs and possible observations, which in turn tells us what kinds of systems can and cannot be accurately analysed using a statistical approach. We demonstrate our method on an implementation of the Dining Cryptographers protocol and on a Mixminion anonymous remailer node.
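
The Blahut-Arimoto iteration, whose convergence on sampled data the report studies, can be sketched for an exactly-known channel matrix (here a binary symmetric channel, an invented example):

```python
import math

def blahut_arimoto(channel, iters=200):
    """Capacity (in bits) of a discrete memoryless channel given as a
    row-stochastic matrix channel[x][y] = p(y | x), by Blahut-Arimoto."""
    nx, ny = len(channel), len(channel[0])
    p = [1.0 / nx] * nx  # input distribution, refined each iteration
    for _ in range(iters):
        q = [sum(p[x] * channel[x][y] for x in range(nx)) for y in range(ny)]
        # exp of each row's KL divergence (in nats) to the output marginal
        w = [math.exp(sum(c * math.log(c / q[y])
                          for y, c in enumerate(channel[x]) if c > 0))
             for x in range(nx)]
        z = sum(p[x] * w[x] for x in range(nx))
        p = [p[x] * w[x] / z for x in range(nx)]
    return math.log2(z)

# Binary symmetric channel with crossover 0.1: capacity is 1 - H(0.1) ≈ 0.531.
bsc = [[0.9, 0.1], [0.1, 0.9]]
print(blahut_arimoto(bsc))  # ≈ 0.531 bits
```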