Computer Security lecture notes
Copyright © 2006 Tien Tuan Anh Dinh and  Mark Dermot Ryan
Any usage complying with GPL is allowed

Trusted Computing: TCG proposals

Trusted computing concepts

Trusted Computing aims to provide a level of security which is beyond the control of the PC user, and is therefore resistant to attacks which the user may deliberately or accidentally allow. It does this by employing a chip called a trusted platform module which securely stores cryptographic keys and other data. In particular, it is manufactured with a public/private key pair, known as the endorsement key (EK). The private part of that key cannot be extracted from the TPM, and records of it at manufacture time should be destroyed. Trusted computing wrests control from the PC's owner/user, and potentially places it in the hands of content providers or other parties. The uniqueness of the TPM EK threatens the privacy of the PC user.

The Trusted Computing Group

The Trusted Computing Group (TCG) is an industry consortium led by HP, IBM, Microsoft and others, which coordinates actual implementations of trusted computing concepts. It aims to strike a balance between the two opposing needs described above: security that is beyond the user's control, and the user's own control and privacy.
TCG's main output is the TPM v1.2 specification, which proposes protocols and standards for attestation, key migration, and secure storage.

Trusted Platform Module (TPM)

The main components of a TPM are illustrated in the figure below.
[Figure: TPM chip]

PCR "extend" operation

A measurement is stored by extending a particular PCR. The extend operation works like this:
PCR := SHA-1(PCR || measurement)

A new measurement value is concatenated with the current PCR value and then hashed with SHA-1. The result is stored as the new value of the PCR. The extend operation has several benefits: (a) it is computationally infeasible to find two different measurement values that extend a PCR to the same value; (b) it preserves the order in which entities' measurements were extended (for example, extending A then B results in a different value than extending B then A); (c) it allows an unlimited number of measurements to be stored in a PCR, because the result is always a 160-bit value.
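The extend operation can be sketched in Python. The `extend` function and the example measurements below are illustrative only, not the actual TPM command interface:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """Extend a PCR: hash the concatenation of the current PCR value
    and the new measurement; the result is always 20 bytes (160 bits)."""
    return hashlib.sha1(pcr + measurement).digest()

# PCRs start out as all zeros after a platform reset.
pcr = b"\x00" * 20
a = hashlib.sha1(b"entity A").digest()   # hypothetical measurement of A
b = hashlib.sha1(b"entity B").digest()   # hypothetical measurement of B

pcr_ab = extend(extend(pcr, a), b)   # extend A, then B
pcr_ba = extend(extend(pcr, b), a)   # extend B, then A
assert pcr_ab != pcr_ba              # (b): order is preserved
assert len(pcr_ab) == 20             # (c): always 160 bits, however many extends
```

Property (b) follows directly from the construction: because the hash input includes the current PCR value, the same measurements applied in a different order produce different results.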

Integrity Measurement with TPM

Measuring is done by hashing the entity with a hash function such as SHA-1. The result is the measured value of that entity. An entity on a PC platform could be an application executable, a configuration file or a data file. Consider two entities A and B:
  1. A measures entity B (which could be an executable or other file ...). The result is B's "fingerprint".
  2. This fingerprint is stored in a Stored Measurement Log (SML), which resides on the hard drive (outside, and not protected by, the TPM).
  3. A then inserts B's fingerprint into a PCR (via the PCR's extend operation).
  4. Control is passed to B.
Note that A stores B's fingerprint in a PCR before passing control to it. The benefit of this order is that B cannot hide its existence (the fact that it has been loaded and run). Suppose B is a malicious program that tries to avoid detection by removing its fingerprint from the SML. B cannot remove its fingerprint from the PCR, because the PCR is protected at the hardware level: no part of the system can write directly to the PCR, and it is computationally infeasible to find another program whose hash value is the same as B's. This integrity measurement mechanism does not prevent an entity from misbehaving or being malicious. But because its presence is logged in the SML, and this is guaranteed by the TPM, one has an unforgeable record of all the entities that have been loaded, and can choose whether to trust the system based on this record.
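The four steps above can be sketched as follows. The `launch` helper and the example images are hypothetical; a real system would invoke the TPM's extend command rather than computing the hash in software:

```python
import hashlib

def extend(pcr: bytes, fingerprint: bytes) -> bytes:
    return hashlib.sha1(pcr + fingerprint).digest()

def launch(pcr: bytes, sml: list, name: str, image: bytes) -> bytes:
    fingerprint = hashlib.sha1(image).digest()   # 1. A measures B
    sml.append((name, fingerprint))              # 2. record it in the SML (on disk)
    pcr = extend(pcr, fingerprint)               # 3. extend the PCR *before* ...
    # 4. ... passing control to B (elided here)
    return pcr

sml: list = []
pcr = b"\x00" * 20
pcr = launch(pcr, sml, "bootloader", b"(bootloader image)")
pcr = launch(pcr, sml, "kernel", b"(kernel image)")
```

Even if a malicious program later deletes its own SML entry, the PCR still reflects it: replaying the truncated SML no longer reproduces the PCR value.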

Note that in most systems, the PCR and the SML keep only a single measurement of each loaded program; they do not take into account subsequent loads of the same program, nor the number of times it is loaded at run-time. Taking a measurement every time a program is loaded would better reflect the platform's "live" configuration, but it would also slow things down and affect the system's scalability.

Chain of Trust

Consider a chain in which entity A launches entity B, and B then launches C. For example, A is an operating system, B is the Java Virtual Machine (JVM) and C is a Java application. In order to trust C, one must trust A and B. In this example, to trust the Java application to behave correctly, one must trust the JVM to behave correctly, which in turn requires trusting the operating system to behave correctly. To establish this chain of trust:
  1. A measures B then passes control to B
  2. B measures C and passes control to C
The question now becomes "who measures A?".

Root of Trust

A Root of Trust is an entity that must be trusted implicitly, because there is no way to measure it. In the example above, A is the Root of Trust for Measurement (RTM), because it is trusted to measure other entities without faults or errors. The other Roots of Trust associated with the TPM are the Root of Trust for Reporting (RTR) and the Root of Trust for Storage (RTS); they are discussed later.

Root of trust for measurement

The Core Root of Trust for Measurement (CRTM) is the BIOS boot block code. This piece of code is considered trustworthy: it reliably measures the integrity values of other entities, and stays unchanged during the lifetime of the platform. The CRTM is an extension of the normal BIOS; it runs first and measures the other parts of the BIOS block before passing control to them. The BIOS then measures the hardware and the bootloader, and passes control to the bootloader. The bootloader measures the OS kernel and passes control to the OS.
After the OS is loaded (or during the boot process), one can check the PCR values to see whether the platform is running in a good (expected) configuration. Any change will result in a previously unseen PCR value, and the user can decide whether to trust this new configuration.


Attestation

Attestation is the means by which a trusted computer assures a remote computer of its trustworthy status. The TPM is manufactured with a public/private key pair built into the hardware, called the endorsement key (EK). The public part of the EK is certified by an appropriate CA as being the EK of a particular TPM. Each individual TPM has a unique EK. Using the private part of its EK, the TPM can sign assertions about the trusted computer's state. A remote computer can verify that those assertions have been signed by a trusted TPM.

An Attestation Identity Key (AIK) is a key pair created during attestation, for use by a particular application. At creation time, its security is bootstrapped from the TPM's EK. Using an AIK instead of the EK directly has several benefits: (a) it reduces the load on the TPM, since only the TPM can use the EK but the CPU can use the AIK; (b) it helps prevent cryptanalysis of the EK; (c) it partially addresses the privacy issue, since the AIK is not directly associated with the hardware.

Remote Attestation and Root of Trust for Reporting (RTR)

As discussed above, Remote Attestation (RA) is a method of proving to a remote party that the local PC is a trusted (TPM-enabled) platform, and of showing its current configuration. The remote party needs to trust the attestor to reliably measure and report its configuration.

Attestation and privacy/anonymity

The attestation protocol described in the previous lecture necessarily reveals the unique hardware key (EK), and therefore the identity of the platform. This enables a remote computer to link different sessions to the same trusted computer. For some applications of TC, this lack of anonymity is undesirable. (Look back at the list of applications of TC above, and determine which ones.) Therefore the TCG has adopted two different approaches to enabling anonymous attestation: attestation using a privacy CA, and Direct Anonymous Attestation (DAA).

Attestation using a privacy CA

As illustrated in the figure, on receipt of a request for attestation, the attestor generates a public/private key pair, called the attestation identity key (AIK), and sends the public part to a trusted third party (TTP in the figure) called a Privacy CA. The TTP generates an AIK certificate after validating the attestor's EK. The certificate is signed by the TTP and sent back to the attestor. The attestor can now send its PCR values (signed with the AIK), its Stored Measurement Log (SML) and the received AIK certificate to the challenger.
The verification process on the challenger's side is as follows:
  1. Verify the AIK certificate using the TTP's public key.
  2. Use the AIK to verify the signature on the PCR values.
  3. Recalculate the PCR value from the fingerprint list in the SML (by replaying the PCR extend operation on these fingerprints).
  4. Compare the calculated value with the reported PCR value. If they do not match, the SML has been tampered with, and the verifier should not trust the attestor.
  5. If they do match, the verifier goes through the fingerprint list in the SML looking for any unapproved entity, using a whitelist or a blacklist.
If no bad entity is found, the challenger can conclude that the attestor is trustworthy.
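Steps 3-5 can be sketched in Python. The certificate and signature checks (steps 1 and 2) are elided, and the whitelist contents are hypothetical:

```python
import hashlib

def replay_sml(sml):
    """Step 3: recompute the PCR by replaying the extend operation
    over every fingerprint in the SML, in order."""
    pcr = b"\x00" * 20
    for _name, fingerprint in sml:
        pcr = hashlib.sha1(pcr + fingerprint).digest()
    return pcr

def verify(sml, quoted_pcr, whitelist):
    # Step 4: a mismatch means the SML has been tampered with.
    if replay_sml(sml) != quoted_pcr:
        return False
    # Step 5: every logged entity must be on the whitelist.
    return all(fp in whitelist for _name, fp in sml)

# Hypothetical attestor data: one approved program was measured.
good = hashlib.sha1(b"approved program").digest()
sml = [("prog", good)]
assert verify(sml, replay_sml(sml), whitelist={good})
```

A blacklist works the same way, with the final check inverted: reject if any logged fingerprint appears on the list.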

The security of the attestation report relies on the AIK, which is certified by the TTP on the basis of the EK. Therefore, the Root of Trust for Reporting (RTR) can be said to be the EK.

Protected storage and Root of Trust for Storage (RTS)

Users' data can be encrypted by TPM-generated and TPM-protected keys. There could be a very large number of keys, potentially too many to be stored together in the TPM's small memory. Many keys (AIKs, for example) are stored on disk, but they are encrypted by keys stored in the TPM. Ultimately, every external key is secured by the TPM's Storage Root Key (SRK), which is the Root of Trust for Storage (RTS). The SRK resides in the TPM's non-volatile memory.

All keys are encrypted by their parent keys, according to the key hierarchy shown in the figure. At the root of the tree is the Storage Root Key (SRK), which permanently resides in the TPM's NV memory. This key is generated for each new owner, whenever he calls the "take ownership" operation of the TPM. Other keys are generated by the TPM, encrypted by their parent keys and then stored on disk. To use a key, it must be loaded into the TPM (into a key slot) together with its parent keys. The decryption process is done entirely within the TPM. Of course, one must provide some form of authorization (such as a password or passphrase) when creating and using a key.

Essentially, there are two types of keys: a key can be migratable (i.e. transferred to and used in another TPM), or non-migratable (i.e. permanently bound to a specific TPM). Binding keys should be migratable, so that a user's data encrypted under one TPM can be decrypted with another TPM when he is travelling. AIKs must be non-migratable, otherwise one TPM could masquerade as another.

There are two ways to protect data with TPM and SRK:
  1. Data binding:   a (migratable) binding key is generated and used to encrypt the data.
  2. Data sealing:   data is encrypted and bound to a specific TPM platform and a particular configuration. The sealing process takes data, a non-migratable key and the requested PCR values as input, and outputs a sealed data package. To decrypt this package, one must be running on the same TPM, have the key, and the current PCR values must match the values used in the sealing process. For example, one seals a Word document with a TPM-generated non-migratable key, and PCR values indicating that Microsoft Word and Symantec antivirus must have been loaded. To read that document, other users must have access to the key, and be running Microsoft Word and the Symantec antivirus software on the same TPM platform. Otherwise, the data remains sealed.
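The sealing idea can be sketched as follows. This is a toy illustration: the key derivation and XOR stream cipher are invented for the example, and a real TPM seals with its own storage keys and performs the decryption internally.

```python
import hashlib

def _xor(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher for illustration only; a real TPM uses RSA/AES.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def seal(data: bytes, tpm_key: bytes, pcr: bytes):
    # The effective key depends on both the non-migratable TPM key
    # and the PCR value required at unseal time.
    return (_xor(data, tpm_key + pcr), pcr)

def unseal(package, tpm_key: bytes, current_pcr: bytes) -> bytes:
    ciphertext, sealed_pcr = package
    if current_pcr != sealed_pcr:
        raise PermissionError("configuration changed: data stays sealed")
    return _xor(ciphertext, tpm_key + sealed_pcr)
```

Because the decryption key incorporates the PCR value, data sealed under one configuration cannot be recovered after the configuration (and hence the PCR value) changes, even on the same TPM.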

Other topics

  1. Disaster recovery
  2. BitLocker
  3. Virtualisation
  4. Privacy safeguards

TC and open-source software

Open-source software can help keep TC protocols open. This will raise awareness of how TC is being used, and will avoid excessive distrust and paranoia about TC. As mentioned above, many systems - laptops in particular - are currently equipped with TPM chips, so this is a technology which Linux users can play with today.

The Linux kernel has had driver support for TPM chips since 2.6.12; a couple of chips are supported now, with drivers for others in the works. Major distros including RHEL, Fedora, SUSE and Gentoo support it. Grub and LILO both support secure booting into a trusted mode, and there are open-source BIOS implementations such as FreeBIOS and OpenBIOS. Once the kernel is booted, the TPM driver takes over, with user space being handled by the Trusted Software Stack known as TrouSerS.

TrouSerS makes a number of TPM capabilities available to the system. If the TPM has RSA capabilities, TrouSerS can perform RSA key pair generation, along with encryption and decryption. There is support for remote attestation functionality. The TSS can be used to "seal" data; such data will be encrypted in such a way that it can only be decrypted if certain PCRs contain the same values. This capability can also be used to bind data to a specific system; move an encrypted file to another host, and that host's TPM will simply lack the keys it needs to decrypt that file. Needless to say, if you make use of these features, you need to give some real thought to recovery plans; there are various sorts of key escrow schemes and such which can be used to get your data back should your motherboard (with its TPM chip) go up in flames.

The TrouSerS package also provides a set of tools for TPM configuration tasks. However, a number of BIOS implementations will lock down the TPM before invoking the boot loader, so TPM configuration is often best done by working directly with the BIOS. There is also a PKCS#11 library; PKCS#11 is a standard API for working with cryptographic hardware.

At the next level is the integrity measurement architecture (IMA) code. IMA uses a PCR to accumulate checksums of every application and library run on the system since boot; this checksum, when signed by the TPM, can be provided to another system to prove that the measured system is running a specific list of software, that the programs have not been modified, and that nothing which is not on the list has been run.


References

  1. E. Brickell, J. Camenisch, and L. Chen: Direct Anonymous Attestation. In Proceedings of the 11th ACM Conference on Computer and Communications Security, ACM Press, 2004.
  2. Trusted Computing: How to Make Your Systems and Data Truly Secure. Trusted Computing Group and other vendors.
  3. Integrity Measurement Architecture (IMA) - from IBM.


Updated 4 November 2006