4th March 2019, version of 17 October 2019, 17:38.
Martín Hötzel Escardó, School of Computer Science, University of Birmingham, UK.
Abstract. We introduce Voevodsky’s univalent foundations and univalent mathematics, and explain how to develop them with the computer system Agda, which is based on Martin-Löf type theory. Agda allows us to write mathematical definitions, constructions, theorems and proofs, for example in number theory, analysis, group theory, topology, category theory or programming language theory, checking them for logical and mathematical correctness.
Agda is a constructive mathematical system by default, which amounts to saying that it can also be considered as a programming language for manipulating mathematical objects. But we can assume the axiom of choice or the principle of excluded middle for pieces of mathematics that require them, at the cost of losing the implicit programming-language character of the system. For a fully constructive development of univalent mathematics in Agda, we would need to use its new cubical flavour, and we hope these notes provide a base for researchers interested in learning cubical type theory and cubical Agda as the next step.
Compared to most expositions of the subject, we work with explicit universe levels.
We also fully discuss and emphasize that classical axioms can be assumed consistently in univalent mathematics.
Keywords. Univalent mathematics. Univalent foundations. Univalent type theory. Univalence axiom. ∞-Groupoid. Homotopy type. Type theory. Homotopy type theory. Intensional Martin-Löf type theory. Dependent type theory. Identity type. Type universe. Constructive mathematics. Agda. Cubical type theory. Cubical Agda. Computer-verified mathematics.
About this document. This is a set of so-called literate Agda files, with the formal, verified, mathematical development within code environments, and the usual mathematical discussion outside them. Most of this file is not Agda code, and is in markdown format, and the html web page is generated automatically from it using Agda and other tools. Github issues or pull requests by students to fix typos or mistakes and clarify ambiguities are welcome. There is also a pdf version with internal links to sections and Agda definitions, which is automatically generated from the html version. These notes were originally developed for the Midlands Graduate School 2019. They will evolve for a while.
A univalent type theory is the underlying formal system for a foundation of univalent mathematics as conceived by Voevodsky.
In the same way as there isn’t just one set theory (we have e.g. ZFC and NBG among others), there isn’t just one univalent type theory (we have e.g. the underlying type theory used in UniMath, HoTT-book type theory, and cubical type theory, among others, and more are expected to come in the foreseeable future before the foundations of univalent mathematics stabilize).
The salient differences between univalent mathematics and traditional, set-based mathematics may be shocking at first sight:

1. The kinds of objects we take as basic.

   - Certain types are sets.
   - The type ℕ of natural numbers is a set, and this is a theorem, not a definition.
   - Certain types are 1-groupoids, automatically.
   - E.g. the type of monoids is a 1-groupoid, automatically.
   - Similarly, the type of categories is a 2-groupoid, again automatically.

2. The treatment of logic.

   - Mathematical statements are types rather than truth values.
   - Truth values are particular types, called -1-groupoids, with at most one element.

3. The treatment of equality.

   - The statement x ≡ y is a type, called the identity type, which is not necessarily a truth value.
   - Instead, it collects the ways in which x and y are identified.
   - E.g. it is a truth value for elements of ℕ, as there is at most one way for two natural numbers to be equal.
   - But for the type of categories, it is a 1-groupoid, amounting to the type of equivalences of categories, again automatically.

The important word in the above description of univalent foundations is automatic. For example, we don’t define equality of monoids to be isomorphism. Instead, we define the collection of monoids as the large type of small types that are sets, equipped with a binary multiplication operation and a unit satisfying associativity of multiplication and neutrality of the unit in the usual way, and then we prove that the native notion of equality that comes with univalent type theory (inherited from Martin-Löf type theory) happens to coincide with the notion of monoid isomorphism. Largeness and smallness are taken as relative concepts, with type universes incorporated in the theory to account for the size distinction.
In particular, properties of monoids are automatically invariant under isomorphism, properties of categories are automatically invariant under equivalence, and so on.
Voevodsky’s way to achieve this is to start with a Martin-Löf type theory (MLTT), including identity types and type universes, and postulate a single axiom, named univalence. This axiom stipulates a canonical bijection between type equivalences (in a suitable sense defined by Voevodsky in type theory) and type identifications (in the original sense of Martin-Löf’s identity type). Voevodsky’s notion of type equivalence, formulated in MLTT, is a refinement of the notion of isomorphism, which works uniformly for all higher groupoids, i.e. types.
In particular, Voevodsky didn’t design a new type theory, but instead gave an axiom for an existing type theory (or any of a family of possible type theories, to be more precise).
The main technical contributions in type theory by Voevodsky are:

4. The definition of hlevels (or n-groupoid levels) of types in MLTT.
5. The definition of type equivalence, working uniformly for all hlevels, in MLTT.
6. The formulation of the univalence axiom in MLTT.
Univalent mathematics begins within MLTT with (4) and (5) before we postulate univalence. In fact, as the reader will see, we will do a fair amount of univalent mathematics before we formulate or assume the univalence axiom.
All of (4)-(6) crucially rely on Martin-Löf’s identity type. Initially, Voevodsky thought that a new concept would be needed in the type theory to achieve (4)-(6) and hence (1) and (3) above. But he eventually discovered that Martin-Löf’s identity type is precisely what he needed.
It may be considered somewhat miraculous that the addition of the univalence axiom alone to MLTT can achieve (1) and (3). Martin-Löf type theory was designed to achieve (2), and, regarding (1), types were imagined/conceived as sets (and even named sets in some of the original expositions by Martin-Löf), and, regarding (3), the identity type was imagined/conceived as having at most one element, even if MLTT cannot prove or disprove this statement, as was eventually shown by Hofmann and Streicher with their groupoid model of types in the early 1990’s.
Another important aspect of univalent mathematics is the presence of explicit mechanisms for distinguishing

7. property (e.g. an unspecified thing exists), and
8. data or structure (e.g. a designated thing exists),

which are commonplace in current mathematical practice (e.g. cartesian closedness of a category is a property in some sense (up to isomorphism), whereas monoidal closedness is given structure).
In summary, univalent mathematics is characterized by (1)-(8) and not by the univalence axiom alone. In fact, half of these notes begin without the univalence axiom.
Lastly, univalent type theories don’t assume the axiom of choice or the principle of excluded middle, and so in some sense they are constructive by default. But we emphasize that these two axioms are consistent and hence can be safely used as assumptions. However, virtually all theorems of univalent mathematics, e.g. in UniMath, have been proved without assuming them, with natural mathematical arguments. The formulations of these principles in univalent mathematics differ from their traditional formulations in MLTT, and hence we sometimes refer to them as the univalent principle of excluded middle and the univalent axiom of choice.
In these notes we will explore the above ideas, using Agda to write MLTT definitions, constructions, theorems and proofs, with univalence as an explicit assumption each time it is needed. We will have a further assumption, the existence of certain subsingleton (or propositional, or truth-value) truncations in order to be able to deal with the distinction between property and data, and in particular with the distinction between designated and unspecified existence (for example to be able to define the notions of image of a function and of surjective function).
We will not assume univalence and truncation globally, so that the students can see clearly when they are or are not needed. In fact, the foundational definitions, constructions, theorems and proofs of univalent mathematics don’t require univalence or propositional truncation, and so can be developed in a version of the original Martin-Löf type theories, and this is what happens in these notes, and what Voevodsky did in his brilliant original development in the computer system Coq. Our use of Agda, rather than Coq, is a personal matter of taste only, and the students are encouraged to learn Coq, too.
Univalent type theory is often called homotopy type theory. Here we are following Voevodsky, who coined the phrases univalent foundations and univalent mathematics. We regard the terminology homotopy type theory as probably more appropriate for referring to the synthetic development of homotopy theory within univalent mathematics, for which we refer the reader to the HoTT book.
However, the terminology homotopy type theory is also used as a synonym for univalent type theory, not only because univalent type theory has a model in homotopy types (as defined in homotopy theory), but also because, without considering models, types do behave like homotopy types, automatically. We will not discuss how to do homotopy theory using univalent type theory in these notes. We refer the reader to the HoTT book as a starting point.
A common compromise is to refer to the subject as HoTT/UF.
Additional background can be found in the HoTT book and the ncatlab references. In particular, it is recommended to read the concluding notes for each chapter in the HoTT book for discussion of original sources. Moreover, the whole HoTT book is a recommended complementary reading for this course.
And, after the reader has gained enough experience:
Regarding the computer language Agda, we recommend the following as starting points:
Regarding the genesis of the subject:
Voevodsky says that he was influenced by Makkai’s thinking:
An important foundational reference, by Steve Awodey and Michael A. Warren, is
Additional expository material:
More references as clickable links are given in the course of the notes.
We also have an Agda development of univalent foundations which is applied to work on injective types, compact (or searchable) types, compact ordinals and more.
This is intended as an introductory graduate course. We include what we regard as the essence of univalent foundations and univalent mathematics, but we are certainly omitting important material that is needed to do univalent mathematics in practice, and the readers who wish to practice univalent mathematics should consult the above references.
There are two views:

1. Agda is a dependently-typed functional programming language.

2. Agda is a language for defining mathematical notions (e.g. group or topological space), formulating constructions to be performed (e.g. a type of real numbers, a group structure on the integers, a topology on the reals), formulating theorems (e.g. a certain construction is indeed a group structure, there are infinitely many primes), and proving theorems (e.g. the infinitude of the collection of primes with Euclid’s argument).

This doesn’t mean that Agda has two sets of features, one for (1) and the other for (2). The same set of features accounts simultaneously for (1) and (2). Programs are mathematical constructions that happen not to use non-constructive principles such as excluded middle.
In these notes we study a minimal univalent type theory and do mathematics with it using a minimal subset of the computer language Agda as a vehicle.
Agda allows one to construct proofs interactively, but we will not discuss how to do this in these notes. Agda is not an automatic theorem prover. We have to come up with our own proofs, which Agda checks for correctness. We do get some form of interactive help to input our proofs and render them as formal objects.
Before embarking into a full definition of our Martin-Löf type theory in Agda, we summarize the particular Martin-Löf type theory that we will consider, by naming the concepts that we will include. We will have:
- An empty type 𝟘.
- A one-element type 𝟙.
- A type ℕ of natural numbers.
- Type formers + (binary sum), Π (product), Σ (sum), Id (identity type).
- Universes (types of types), ranged over by 𝓤,𝓥,𝓦.
This is enough to do number theory, analysis, group theory, topology, category theory and more.
spartan /ˈspɑːt(ə)n/ adjective: showing or characterized by austerity or a lack of comfort or luxury.
We will also be rather spartan with the subset of Agda that we choose to discuss. Many things we do here can be written in more concise ways using more advanced features. Here we introduce a minimal subset of Agda where everything in our spartan MLTT can be expressed.
We don’t use any Agda library. For pedagogical purposes, we start from scratch, and here are our first two lines of code:
```agda
{-# OPTIONS --without-K --exact-split --safe #-}

module HoTT-UF-Agda where
```
- The option --without-K disables Streicher’s K axiom, which we don’t want for univalent mathematics.
- The option --exact-split makes Agda accept only definitions with the equality sign “=” that behave like so-called judgmental or definitional equalities.
- The option --safe disables features that may make Agda inconsistent, such as --type-in-type, postulates and more.
Every Agda file is a module. These lecture notes are a set of Agda files, which are converted to html by Agda after it successfully checks the mathematical development for correctness.
The Agda code in these notes has syntax highlighting and links (in the html and pdf versions), so that we can navigate to the definition of a name or symbol by clicking at it.
A universe 𝓤 is a type of types. One use of universes is to define families of types indexed by a type X as functions X → 𝓤. Such a function is sometimes seen as a property of elements of X.
Another use of universes, as we shall see, is to define types of mathematical structures, such as monoids, groups, topological spaces, categories etc.
Sometimes we need more than one universe. For example, the type of groups in a universe lives in a bigger universe, and given a category in one universe, its presheaf category also lives in a larger universe.
We will work with a tower of type universes

   𝓤₀, 𝓤₁, 𝓤₂, 𝓤₃, ...

These are actually universe names (also called levels, not to be confused with hlevels). We reference the universes themselves by a deliberately almost-invisible superscript dot. For example, we will have

   𝟙 : 𝓤₀ ̇

where 𝟙 is the one-point type to be defined shortly. We will sometimes omit this superscript in our discussions, but are forced to write it in Agda code. We have that the universe 𝓤₀ is a type in the universe 𝓤₁, which is a type in the universe 𝓤₂, and so on.

   𝓤₀ ̇ : 𝓤₁ ̇
   𝓤₁ ̇ : 𝓤₂ ̇
   𝓤₂ ̇ : 𝓤₃ ̇
   ⋮
The assumption that 𝓤₀ : 𝓤₀, or that any universe is in itself or a smaller universe, gives rise to a contradiction, similar to Russell’s Paradox.
Given a universe 𝓤, we denote by 𝓤 ⁺ its successor universe. For example, if 𝓤 is 𝓤₀ then 𝓤 ⁺ is 𝓤₁. According to the above discussion, we have

   𝓤 ̇ : 𝓤 ⁺ ̇
The least upper bound of two universes 𝓤 and 𝓥 is written 𝓤 ⊔ 𝓥. For example, if 𝓤 is 𝓤₀ and 𝓥 is 𝓤₁, then 𝓤 ⊔ 𝓥 is 𝓤₁.
We now bring in our notation for universes by importing our Agda file Universes. The Agda keyword open asks to make all definitions in the file Universes visible in our file here.

```agda
open import Universes public
```
The keyword public makes the contents of the file Universes available to importers of our module HoTT-UF-Agda.

We will refer to universes by the letters 𝓤,𝓥,𝓦,𝓣:

```agda
variable
 𝓤 𝓥 𝓦 𝓣 : Universe
```
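As a small sketch of what this declaration buys us (the name id-sketch is ours and not part of the development), the universe 𝓤 below is automatically universally quantified as an implicit argument, precisely because it was declared with the variable keyword:

```agda
-- Hypothetical illustration: 𝓤 is generalized to an implicit
-- argument, so this is the polymorphic identity function.
id-sketch : {X : 𝓤 ̇ } → X → X
id-sketch x = x
```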
In some type theories, the universes are cumulative “on the nose”, in the sense that from X : 𝓤 we derive that X : 𝓤 ⊔ 𝓥. We will instead have an embedding 𝓤 → 𝓤 ⊔ 𝓥 of universes into larger universes.
The one-element type 𝟙

We place it in the first universe, and we name its unique element “⋆”. We use the data declaration in Agda to introduce it:

```agda
data 𝟙 : 𝓤₀ ̇ where
 ⋆ : 𝟙
```
It is important that the point ⋆ lives in the type 𝟙 and in no other type. There isn’t dual citizenship in our type theory. When we create a type, we also create freshly new elements for it, in this case “⋆”. (However, Agda has a limited form of overloading, which allows us to sometimes use the same name for different things.)
Next we want to give a mechanism to prove that all points of the type 𝟙 satisfy a given property A. The property is a function A : 𝟙 → 𝓤 for some universe 𝓤. The type A(x), which we will write simply A x, doesn’t need to be a truth value. It can be any type. We will meet examples shortly.
In MLTT, mathematical statements are types, such as

   Π (A : 𝟙 → 𝓤), A ⋆ → Π (x : 𝟙), A x.

We read this in natural language as “for any given property A of elements of the type 𝟙, if A ⋆ holds, then it follows that A x holds for all x : 𝟙”.

In Agda, the above type is written as

   (A : 𝟙 → 𝓤 ̇ ) → A ⋆ → (x : 𝟙) → A x.

This is the type of functions with three arguments A : 𝟙 → 𝓤 ̇ and a : A ⋆ and x : 𝟙, with values in the type A x.
A proof of a mathematical statement rendered as a type is a construction of an element of the type. In our example, we have to construct a function, which we will name 𝟙-induction. We do this as follows in Agda, where we first declare the type of the function 𝟙-induction with “:” and then define the function by an equation:

```agda
𝟙-induction : (A : 𝟙 → 𝓤 ̇ ) → A ⋆ → (x : 𝟙) → A x
𝟙-induction A a ⋆ = a
```
The universe 𝓤 is arbitrary, and Agda knows 𝓤 is a universe variable because we said so above.

Notice that we supply A and a as arbitrary arguments, but instead of an arbitrary x : 𝟙 we have written “⋆”. Agda accepts this because it knows from the definition of 𝟙 that “⋆” is the only element of the type 𝟙. This mechanism is called pattern matching.
A particular case of 𝟙-induction occurs when the family A is constant with value B, which can be written variously as A x = B, or A = λ (x : 𝟙) → B, or A = λ x → B if we want Agda to figure out the type of x by itself, or A = λ _ → B if we don’t want to name the argument of A because it is not used. In usual mathematical practice, such a lambda expression is often written x ↦ B (“x is mapped to B”), so that the above amounts to A = (x ↦ B).
Given a type B and a point b : B, we construct the function 𝟙 → B that maps any given x : 𝟙 to b.

```agda
𝟙-recursion : (B : 𝓤 ̇ ) → B → (𝟙 → B)
𝟙-recursion B b x = 𝟙-induction (λ _ → B) b x
```
The above expression B → (𝟙 → B) can be written as B → 𝟙 → B, omitting the brackets, as the function-type formation symbol → is taken to be right associative.
Not all types have to be seen as mathematical statements (for example, the type ℕ of natural numbers defined below). But the above definition has a dual interpretation, as a mathematical function and as the statement “B implies (true implies B)”, where 𝟙 is the type encoding the truth value true.
The unique function to 𝟙 will be named !𝟙. We define two versions to illustrate implicit arguments, which correspond in mathematics to “subscripts that are omitted when the reader can safely infer them”, as for example for the identity function of a set or the identity arrow of an object of a category.

```agda
!𝟙' : (X : 𝓤 ̇ ) → X → 𝟙
!𝟙' X x = ⋆

!𝟙 : {X : 𝓤 ̇ } → X → 𝟙
!𝟙 x = ⋆
```
This means that when we write !𝟙 x we have to recover the (uniquely determined) missing type X with x : X “from the context”. When Agda can’t figure it out, we need to supply it and write !𝟙 {𝓤} {X} x.
This is because 𝓤 is also an implicit argument (all things declared with the Agda keyword variable as above are implicit arguments). There are other, non-positional, ways to indicate X without having to indicate 𝓤 too. Occasionally, people define variants of a function with different choices of “implicitness”, as above.
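As a sketch of supplying the implicit arguments positionally (the name !𝟙-explicit-example is ours, not in the development), we can write the universe and the type in curly braces, in the order in which they were generalized:

```agda
-- Hypothetical example: both implicit arguments of !𝟙 given
-- explicitly, here with X taken to be 𝟙 itself.
!𝟙-explicit-example : 𝟙
!𝟙-explicit-example = !𝟙 {𝓤₀} {𝟙} ⋆
```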
The empty type 𝟘

It is defined like 𝟙, except that no elements are listed for it:

```agda
data 𝟘 : 𝓤₀ ̇ where
```
That’s the complete definition. This has a dual interpretation, mathematically as the empty set (we can actually prove that this type is a set, once we know the definition of set), and logically as the truth value false. To prove that a property of elements of the empty type holds for all elements of the empty type, we have to do nothing.
```agda
𝟘-induction : (A : 𝟘 → 𝓤 ̇ ) → (x : 𝟘) → A x
𝟘-induction A ()
```
When we write the pattern (), Agda checks if there is any case we missed. If there is none, our definition is accepted. The expression () corresponds to the mathematical phrase vacuously true. The unique function from 𝟘 to any type is a particular case of 𝟘-induction.
```agda
𝟘-recursion : (A : 𝓤 ̇ ) → 𝟘 → A
𝟘-recursion A a = 𝟘-induction (λ _ → A) a
```
We will use the following categorical notation for 𝟘-recursion:

```agda
!𝟘 : (A : 𝓤 ̇ ) → 𝟘 → A
!𝟘 = 𝟘-recursion
```
We give the two names is-empty and ¬ to the same function now:

```agda
is-empty : 𝓤 ̇ → 𝓤 ̇
is-empty X = X → 𝟘

¬ : 𝓤 ̇ → 𝓤 ̇
¬ X = X → 𝟘
```
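As a sketch of how these definitions are used (the names below are ours, not part of the development), the empty type is empty, witnessed by the identity function, and the one-element type is not empty, because any hypothetical function 𝟙 → 𝟘 can be applied to ⋆:

```agda
-- Hypothetical examples: is-empty 𝟘 unfolds to 𝟘 → 𝟘, and
-- ¬ (is-empty 𝟙) unfolds to (𝟙 → 𝟘) → 𝟘.
𝟘-is-empty-example : is-empty 𝟘
𝟘-is-empty-example = λ z → z

𝟙-is-not-empty-example : ¬ (is-empty 𝟙)
𝟙-is-not-empty-example = λ f → f ⋆
```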
This says that a type is empty precisely when we have a function to the empty type. Assuming univalence, once we have defined the identity type former _≡_, we will be able to prove that (is-empty X) ≡ (X ≃ 𝟘), where X ≃ 𝟘 is the type of bijections, or equivalences, from X to 𝟘. We will also be able to prove things like (2 + 2 ≡ 5) ≡ 𝟘 and (2 + 2 ≡ 4) ≡ 𝟙.
This is for numbers. If we define types 𝟚 = 𝟙 + 𝟙 and 𝟜 = 𝟚 + 𝟚 with two and four elements respectively, where we are anticipating the definition of _+_ for types, then we will instead have that 𝟚 + 𝟚 ≡ 𝟜 is a type with 4! elements, which is the number of permutations of a set with four elements, rather than a truth value 𝟘 or 𝟙, as a consequence of the univalence axiom. That is, we will have (𝟚 + 𝟚 ≡ 𝟜) ≃ (𝟜 + 𝟜 + 𝟜 + 𝟜 + 𝟜 + 𝟜), so that the type identity 𝟚 + 𝟚 ≡ 𝟜 holds in many more ways than the numerical equation 2 + 2 ≡ 4.
The above is possible only because universes are genuine types and hence their elements (that is, types) have identity types themselves, so that writing X ≡ Y for types X and Y (inhabiting the same universe) is allowed.
When we view 𝟘 as false, we can read the definition of the negation ¬ X as saying that “X implies false”. With univalence we will be able to show that “(false → true) ≡ true”, which amounts to (𝟘 → 𝟙) ≡ 𝟙, which in turn says that there is precisely one function 𝟘 → 𝟙, namely the vacuous function.
The type ℕ of natural numbers

The definition is similar to, but not quite the same as, the one via the Peano axioms. We stipulate an element zero : ℕ and a successor function succ : ℕ → ℕ, and then define induction. Once we have defined the identity type former _≡_, we will prove the other Peano axioms.
```agda
data ℕ : 𝓤₀ ̇ where
 zero : ℕ
 succ : ℕ → ℕ
```
In general, declarations with data are inductive definitions. To write the number 5, we have to write

   succ (succ (succ (succ (succ zero))))

We can use the following Agda built-in to be able to just write 5 as a shorthand:

```agda
{-# BUILTIN NATURAL ℕ #-}
```
Apart from this notational effect, the above declaration doesn’t play any role in the Agda development of these lecture notes.
In the following, the type family A can be seen as playing the role of a property of elements of ℕ, except that it doesn’t necessarily need to be subsingleton valued. When it is, the type of the function gives the familiar principle of mathematical induction for natural numbers, whereas, in general, its definition says how to compute with induction.
```agda
ℕ-induction : (A : ℕ → 𝓤 ̇ )
            → A 0
            → ((n : ℕ) → A n → A (succ n))
            → (n : ℕ) → A n
ℕ-induction A a₀ f = h
 where
  h : (n : ℕ) → A n
  h 0        = a₀
  h (succ n) = f n (h n)
```
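As a sketch of ℕ-induction at work over a constant family (the name double is ours, not part of the development), here is a doubling function, where the induction step ignores n and applies succ twice to the result for the predecessor:

```agda
-- Hypothetical example: doubling via ℕ-induction over the
-- constant family λ _ → ℕ.
double : ℕ → ℕ
double = ℕ-induction (λ _ → ℕ) 0 (λ _ d → succ (succ d))
```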
The definition of ℕ-induction is itself by induction. It says that, given a point a₀ : A 0 and a function f : (n : ℕ) → A n → A (succ n), in order to calculate an element of A n for a given n : ℕ, we just calculate h n, that is,

   f n (f (n-1) (f (n-2) (... (f 0 a₀)...))).
So the principle of induction is a construction that, given a base case a₀ : A 0, an induction step f : (n : ℕ) → A n → A (succ n) and a number n, says how to get an element of the type A n by primitive recursion.
Notice also that ℕ-induction is the dependently typed version of primitive recursion, where the non-dependently typed version is

```agda
ℕ-recursion : (X : 𝓤 ̇ ) → X → (ℕ → X → X) → ℕ → X
ℕ-recursion X = ℕ-induction (λ _ → X)
```
The following special case occurs often (and is related to the fact that ℕ is the initial algebra of the functor 𝟙 + (-)):

```agda
ℕ-iteration : (X : 𝓤 ̇ ) → X → (X → X) → ℕ → X
ℕ-iteration X x f = ℕ-recursion X x (λ _ x → f x)
```
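As a sketch of iteration (the name plus-three is ours, not part of the development), applying succ n times to the starting point 3 yields n + 3:

```agda
-- Hypothetical example: ℕ-iteration applies its function
-- argument n times to its starting point, here succ to 3.
plus-three : ℕ → ℕ
plus-three = ℕ-iteration ℕ 3 succ
```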
Agda checks that any recursive definition as above is well founded, with recursive invocations with structurally smaller arguments only. If it isn’t, the definition is not accepted.
In official Martin-Löf type theories, we have to use the ℕ-induction functional to define everything else with the natural numbers. But Agda allows us to define functions by structural recursion, as we defined ℕ-induction.
We now define addition and multiplication for the sake of illustration.
We first do it in Peano style. We will create a local module
so that the
definitions are not globally visible, as we want to have the symbols
+
and ×
free for type operations of MLTT to be defined soon. The
things in the module are indented and are visible outside the module
only if we open
the module or if we write them as
e.g. Arithmetic._+_
in the following example.
```agda
module Arithmetic where

  _+_ _×_ : ℕ → ℕ → ℕ

  x + 0      = x
  x + succ y = succ (x + y)

  x × 0      = 0
  x × succ y = x + x × y

  infixl 20 _+_
  infixl 21 _×_
```
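As a sketch of the qualified access mentioned above (the name ten-sketch is ours, not part of the development), a definition inside the module can be used without opening it:

```agda
-- Hypothetical example: using _+_ from the local module via a
-- qualified name, in prefix form.
ten-sketch : ℕ
ten-sketch = Arithmetic._+_ 4 6
```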
The above “fixity” declarations allow us to indicate the precedences (multiplication has higher precedence than addition) and their associativity (here we take left-associativity as the convention, so that e.g. x + y + z parses as (x + y) + z).
Equivalent definitions use ℕ-induction on the second argument y, via ℕ-iteration:
```agda
module Arithmetic' where

  _+_ _×_ : ℕ → ℕ → ℕ

  x + y = h y
   where
    h : ℕ → ℕ
    h = ℕ-iteration ℕ x succ

  x × y = h y
   where
    h : ℕ → ℕ
    h = ℕ-iteration ℕ 0 (x +_)

  infixl 20 _+_
  infixl 21 _×_
```
Here the expression “x +_” stands for the function ℕ → ℕ that adds x to its argument. So to multiply x by y, we apply y times the function “x +_” to 0.
As another example, we define the less-than-or-equal relation by nested induction, on the first argument and then the second, but we use pattern matching for the sake of readability.
Exercise. Write it using ℕ-induction, recursion or iteration, as appropriate.
```agda
module ℕ-order where

  _≤_ _≥_ : ℕ → ℕ → 𝓤₀ ̇

  0      ≤ y      = 𝟙
  succ x ≤ 0      = 𝟘
  succ x ≤ succ y = x ≤ y

  x ≥ y = y ≤ x
```
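As a sketch of how these defined relations compute (the module and names below are ours, not part of the development), the type 1 ≤ 2 unfolds to 0 ≤ 1 and then to 𝟙, so ⋆ proves it:

```agda
module ℕ-order-example where

 open ℕ-order

 -- Hypothetical example: the proof is ⋆ because 1 ≤ 2 reduces
 -- to the type 𝟙 by the defining equations of _≤_.
 ≤-example : 1 ≤ 2
 ≤-example = ⋆
```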
Exercise. After learning Σ and _≡_ explained below, prove that x ≤ y if and only if Σ \(z : ℕ) → x + z ≡ y. Later, after learning univalence, prove that in this case this implies (x ≤ y) ≡ Σ \(z : ℕ) → x + z ≡ y.
The fact that a bi-implication can be turned into an equality holds only for types that are subsingletons (and this is called propositional extensionality).
If we are doing applied mathematics and want to actually compute, we can define a type for binary notation for the sake of efficiency, and of course people have done that. Here we are not concerned with efficiency but only with understanding how to codify mathematics in (univalent) type theory and in Agda.
The binary sum type constructor _+_

We now define the disjoint sum of two types X and Y. The elements of the type X + Y are stipulated to be of the forms inl x and inr y with x : X and y : Y. If X : 𝓤 and Y : 𝓥, we stipulate that X + Y : 𝓤 ⊔ 𝓥, where 𝓤 ⊔ 𝓥 is the least upper bound of the two universes 𝓤 and 𝓥. In Agda we can define this as follows.
```agda
data _+_ {𝓤 𝓥} (X : 𝓤 ̇ ) (Y : 𝓥 ̇ ) : 𝓤 ⊔ 𝓥 ̇ where
 inl : X → X + Y
 inr : Y → X + Y
```
To prove that a property A of the sum holds for all z : X + Y, it is enough to prove that A (inl x) holds for all x : X and that A (inr y) holds for all y : Y. This amounts to definition by cases:
```agda
+-induction : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (A : X + Y → 𝓦 ̇ )
            → ((x : X) → A (inl x))
            → ((y : Y) → A (inr y))
            → (z : X + Y) → A z
+-induction A f g (inl x) = f x
+-induction A f g (inr y) = g y

+-recursion : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } {A : 𝓦 ̇ } → (X → A) → (Y → A) → X + Y → A
+-recursion {𝓤} {𝓥} {𝓦} {X} {Y} {A} = +-induction (λ _ → A)
```
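As a sketch of definition by cases with +-recursion (the name +-swap-sketch is ours, not part of the development), we can swap the two summands by sending the left inclusion to the right one and vice versa:

```agda
-- Hypothetical example: the inl case is handled by inr and the
-- inr case by inl, giving the symmetry of binary sums.
+-swap-sketch : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → X + Y → Y + X
+-swap-sketch = +-recursion inr inl
```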
When the types A and B are understood as mathematical statements, the type A + B is understood as the statement “A or B”, because to prove “A or B” we have to prove one of A and B. When A and B are simultaneously possible, we have two proofs, but sometimes we want to deliberately ignore which one we get, when we want to get a truth value rather than a possibly more general type, and in this case we use the truncation ∥ A + B ∥.
But _+_ is also used to construct mathematical objects. For example, we can define a two-point type:
```agda
𝟚 : 𝓤₀ ̇
𝟚 = 𝟙 + 𝟙
```
We can name the left and right points as follows, using patterns, so that they can be used in pattern matching:
```agda
pattern ₀ = inl ⋆
pattern ₁ = inr ⋆
```
We can define induction on 𝟚 directly by pattern matching:
```agda
𝟚-induction : (A : 𝟚 → 𝓤 ̇ ) → A ₀ → A ₁ → (n : 𝟚) → A n
𝟚-induction A a₀ a₁ ₀ = a₀
𝟚-induction A a₀ a₁ ₁ = a₁
```
Or we can prove it by induction on _+_ and 𝟙:

```agda
𝟚-induction' : (A : 𝟚 → 𝓤 ̇ ) → A ₀ → A ₁ → (n : 𝟚) → A n
𝟚-induction' A a₀ a₁ = +-induction A
                         (𝟙-induction (λ (x : 𝟙) → A (inl x)) a₀)
                         (𝟙-induction (λ (y : 𝟙) → A (inr y)) a₁)
```
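As a sketch of 𝟚-induction over a constant family (the name 𝟚-flip-sketch is ours, not part of the development), here is the function swapping the two points, using the pattern synonyms ₀ and ₁ as expressions:

```agda
-- Hypothetical example: negation on 𝟚, sending ₀ to ₁ and ₁ to ₀,
-- via 𝟚-induction over the constant family λ _ → 𝟚.
𝟚-flip-sketch : 𝟚 → 𝟚
𝟚-flip-sketch = 𝟚-induction (λ _ → 𝟚) ₁ ₀
```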
Σ types

Given universes 𝓤 and 𝓥, a type X : 𝓤 and a type family Y : X → 𝓥, we want to construct its sum, which is a type whose elements are of the form (x , y) with x : X and y : Y x. This sum type will live in the least upper bound 𝓤 ⊔ 𝓥 of the universes 𝓤 and 𝓥. We will write this sum Σ Y, with X, as well as the universes, implicit. Often Agda, and people, can figure out what the unwritten type X is, from the definition of Y. But sometimes there may be either lack of enough information, or of enough concentration power by people, or of sufficiently powerful inference algorithms in the implementation of Agda. In such cases we can write Σ λ (x : X) → Y x, because Y = λ (x : X) → Y x by a so-called η-rule. However, we will often use the synonym \ of λ for Σ, as if considering it as part of the Σ syntax: Σ \(x : X) → Y x. In MLTT we would write this as Σ (x : X), Y x or similar, for example with the indexing x : X written as a subscript of Σ or under it.
Or it may be that the name Y is not defined, and we work with a nameless family defined on the fly, as in the exercise proposed above: Σ \(z : ℕ) → x + z ≡ y, where Y z = (x + z ≡ y) in this case, and where we haven’t defined the identity type former _≡_ yet.
We can construct the Σ type former as follows in Agda:

```agda
record Σ {𝓤 𝓥} {X : 𝓤 ̇ } (Y : X → 𝓥 ̇ ) : 𝓤 ⊔ 𝓥 ̇ where
  constructor _,_
  field
   x : X
   y : Y x
```
This says that we are defining a binary operator _,_ to construct the elements of this type as x , y. The brackets are not needed, but we will often write them to get the more familiar notation (x , y). The definition says that an element of Σ Y has two fields, giving the types for them.
Remark. Beginners may safely ignore this remark: Normally people will call these two fields something like pr₁ and pr₂, or fst and snd, for first and second projection, rather than x and y, and then do open Σ public and have the projections available as functions automatically. But we will deliberately not do that, and instead define the projections ourselves, because this is confusing for beginners, no matter how mathematically or computationally versed they may be, in particular because it will not be immediately clear that the projections have the following types.
```agda
pr₁ : {X : 𝓤 ̇ } {Y : X → 𝓥 ̇ } → Σ Y → X
pr₁ (x , y) = x

pr₂ : {X : 𝓤 ̇ } {Y : X → 𝓥 ̇ } → (z : Σ Y) → Y (pr₁ z)
pr₂ (x , y) = y
```
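As a sketch (the names below are ours, not part of the development), here is a point of a Σ type over a constant family, built with the constructor _,_, and its first projection:

```agda
-- Hypothetical example: a point of a Σ type over the constant
-- family λ n → 𝟚, and its first projection.
pair-sketch : Σ \(n : ℕ) → 𝟚
pair-sketch = (2 , ₀)

pair-sketch-fst : ℕ
pair-sketch-fst = pr₁ pair-sketch
```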
To prove that A z holds for all z : Σ Y, for a given property A, we just prove that we have A (x , y) for all x : X and y : Y x. This is called Σ induction or Σ elimination, or uncurry, after Haskell Curry.
```agda
Σ-induction : {X : 𝓤 ̇ } {Y : X → 𝓥 ̇ } {A : Σ Y → 𝓦 ̇ }
            → ((x : X) (y : Y x) → A (x , y))
            → (z : Σ Y) → A z
Σ-induction g (x , y) = g x y
```
This function has an inverse:

```agda
curry : {X : 𝓤 ̇ } {Y : X → 𝓥 ̇ } {A : Σ Y → 𝓦 ̇ }
      → ((z : Σ Y) → A z)
      → ((x : X) (y : Y x) → A (x , y))
curry f x y = f (x , y)
```
An important particular case of the sum type is the binary cartesian product, when the type family doesn’t depend on the indexing type:

```agda
_×_ : 𝓤 ̇ → 𝓥 ̇ → 𝓤 ⊔ 𝓥 ̇
X × Y = Σ \(x : X) → Y
```
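As a simple illustration of the product type just defined, we can swap the two components of a pair. The name `×-swap` is ours, chosen for illustration, and is not part of the formal development:

```agda
-- Illustration only (not part of the development):
-- swapping the two components of a pair.
×-swap : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → X × Y → Y × X
×-swap (x , y) = (y , x)
```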
We have seen by way of examples that the function type symbol `→` represents logical implication, and that a dependent function type `(x : X) → A x` represents a universal quantification. We have the following uses of `Σ`.
The binary cartesian product represents conjunction “and”. If the types `A` and `B` stand for mathematical statements, then the mathematical statement “`A` and `B`” is codified as `A × B`, because to establish “`A` and `B`”, we have to provide a pair `(a , b)` of proofs `a : A` and `b : B`.
So notice that in type theory proofs are mathematical objects, rather than meta-mathematical entities like in set theory. They are just elements of types.
The more general type `Σ (x : X), A x`, if the type `X` stands for a mathematical object and `A` stands for a mathematical statement, represents designated existence: “there is a designated `x : X` with `A x`”. To establish this, we have to provide a specific element `x : X` and a proof `a : A x`, together in a pair `(x , a)`.
Later we will discuss unspecified existence `∃ (x : X), A x`, which will be obtained by a sort of quotient of `Σ (x : X), A x`, written `∥ Σ (x : X), A x ∥`, that identifies all the elements of the type `Σ (x : X), A x` in a single equivalence class, called its subsingleton (or truth-value or propositional) truncation.
Another reading of `Σ (x : X), A x` is as “the type of `x : X` with `A x`”, similar to the set-theoretical notation `{ x ∈ X | A x }`. But we have to be careful, because if there is more than one element in the type `A x`, then `x` will occur more than once in this type. More precisely, for `a₀ a₁ : A x` we have inhabitants `(x , a₀)` and `(x , a₁)` of the type `Σ (x : X), A x`.
In such situations, if we don’t want that, we have to either ensure that the type `A x` has at most one element for every `x : X`, or instead consider the truncated type `∥ A x ∥` and write `Σ (x : X), ∥ A x ∥`. An example is the image of a function `f : X → Y`, which will be defined to be `Σ (y : Y), ∥ Σ (x : X), f x ≡ y ∥`. This is the type of `y : Y` for which there is an unspecified `x : X` with `f x ≡ y`. (For constructively minded readers, we emphasize that this doesn’t erase the witness `x : X`.)
Π types

`Π` types are built in with a different notation in Agda, as discussed above, but we can introduce the notation `Π` for them, similar to that for `Σ`:

```agda
Π : {X : 𝓤 ̇ } (A : X → 𝓥 ̇ ) → 𝓤 ⊔ 𝓥 ̇
Π {𝓤} {𝓥} {X} A = (x : X) → A x
```
Notice that the function type `X → Y` is the particular case of the `Π` type when the family `A` is constant with value `Y`.
We take the opportunity to define the identity function (in two versions with different implicit arguments) and function composition:

```agda
id : {X : 𝓤 ̇ } → X → X
id x = x

𝑖𝑑 : (X : 𝓤 ̇ ) → X → X
𝑖𝑑 X = id
```
Usually the type of function composition `_∘_` is given simply as `(Y → Z) → (X → Y) → (X → Z)`. With dependent functions, we can give it a more general type:

```agda
_∘_ : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } {Z : Y → 𝓦 ̇ }
    → ((y : Y) → Z y)
    → (f : X → Y)
    → (x : X) → Z (f x)
g ∘ f = λ x → g (f x)
```
Notice that this type for the composition function can be read as a mathematical statement: if `Z y` holds for all `y : Y`, then for any given `f : X → Y` we have that `Z (f x)` holds for all `x : X`. And the non-dependent one is just the transitivity of implication.
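For instance, composing the successor function with itself gives the function that adds two to a natural number. The name `add-two` is ours, for illustration only:

```agda
-- Illustration only: succ ∘ succ uses the dependent _∘_
-- with the constant family Z = λ _ → ℕ.
add-two : ℕ → ℕ
add-two = succ ∘ succ
```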
The following functions are sometimes useful when we are using implicit arguments, in order to recover them explicitly without having to list them as given arguments:

```agda
domain : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → (X → Y) → 𝓤 ̇
domain {𝓤} {𝓥} {X} {Y} f = X

codomain : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → (X → Y) → 𝓥 ̇
codomain {𝓤} {𝓥} {X} {Y} f = Y

type-of : {X : 𝓤 ̇ } → X → 𝓤 ̇
type-of {𝓤} {X} x = X
```
`Id`, also written `_≡_`

We now introduce the central type constructor of MLTT from the point of view of univalent mathematics. In Agda we can define Martin-Löf’s identity type as follows:

```agda
data Id {𝓤} (X : 𝓤 ̇ ) : X → X → 𝓤 ̇  where
  refl : (x : X) → Id X x x
```
Intuitively, the above definition would say that the only element of the type `Id X x x` is something called `refl x` (for reflexivity). But, as we shall see in a moment, this intuition turns out to be incorrect.
Notice a crucial difference with the previous definitions using data or induction: in the previous cases, we defined types, namely `𝟘`, `𝟙`, `ℕ`, or a type depending on type parameters, namely `_+_`, with `𝓤` and `𝓥` fixed:

```agda
_+_ : 𝓤 ̇ → 𝓥 ̇ → 𝓤 ⊔ 𝓥 ̇
```
But here we are defining a type family indexed by the elements of a given type, rather than a new type from old types. Given a type `X` in a universe `𝓤`, we define a function `Id X : X → X → 𝓤` by some mysterious sort of induction. It is this that prevents us from being able to prove that the only element of the type `Id X x x` would be `refl x`, or that the type `Id X x y` would have at most one element no matter what `y : X` is.
There is, however, one interesting, and crucial, thing we can prove, namely that for any fixed element `x : X`, the type `Σ \(y : X) → Id X x y` is always a singleton.
We will use the following alternative notation for the identity type former `Id`, where the symbol “`_`” in the right-hand side of the definition indicates that we ask Agda to infer which type we are talking about (which is `X`, but this name is not available in the scope of the defining equation of the type former `_≡_`):

```agda
_≡_ : {X : 𝓤 ̇ } → X → X → 𝓤 ̇
x ≡ y = Id _ x y
```
Another intuition for this type family `_≡_ : X → X → 𝓤` is that it gives the least reflexive relation on the type `X`, as suggested by Martin-Löf’s induction principle `J` discussed below.
Whereas we can make the intuition that `x ≡ x` has precisely one element good by postulating a certain `K` axiom due to Thomas Streicher, which comes with Agda by default but which we have disabled above, we cannot prove that `refl x` is the only element of `x ≡ x` for an arbitrary type `X`. This non-provability result was established by Hofmann and Streicher, by giving a model of type theory in which types are interpreted as 1-groupoids. This is in spirit similar to the non-provability of the parallel postulate in Euclidean geometry, which also considers models, which in turn are interesting in their own right.
However, for the elements of some types, such as the type `ℕ` of natural numbers, it is possible to prove that any identity type `x ≡ y` has at most one element. Such types are called sets in univalent mathematics.
If instead of the axiom `K` we adopt Voevodsky’s univalence axiom, we get specific examples of objects `x` and `y` such that the type `x ≡ y` has multiple elements, within the type theory. It follows that the identity type `x ≡ y` is fairly under-specified in general, in that we can’t prove or disprove that it has at most one element.
There are two opposing ways to resolve the ambiguity or under-specification of the identity types: (1) We can consider the `K` axiom, which postulates that all types are sets, or (2) we can consider the univalence axiom, arriving at univalent mathematics, which gives rise to types that are more general than sets, the `n`-groupoids and `∞`-groupoids. In fact, the univalence axiom will say, in particular, that for some types `X` and elements `x y : X`, the identity type `x ≡ y` does have more than one element.
A possible way to understand the element `refl x` of the type `x ≡ x` is as the “generic identification” between the point `x` and itself, which is by no means necessarily the only identification in univalent foundations. It is generic in the sense that to explain what happens with all identifications `p : x ≡ y` between any two points `x` and `y` of a type `X`, it suffices to explain what happens with the identification `refl x : x ≡ x` for all points `x : X`. This is what the induction principle for identity given by Martin-Löf says, which he called J (we could have called it `≡-induction`, but we prefer to honour MLTT tradition):

```agda
𝕁 : (X : 𝓤 ̇ ) (A : (x y : X) → x ≡ y → 𝓥 ̇ )
  → ((x : X) → A x x (refl x))
  → (x y : X) (p : x ≡ y) → A x y p
𝕁 X A f x x (refl x) = f x
```
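To illustrate how `𝕁` is used, here is a derivation of the symmetry of identifications from `𝕁` alone, without pattern matching on `refl`. The name `≡-sym` is ours, chosen for illustration:

```agda
-- Illustration only: symmetry via 𝕁, with motive A x y p = (y ≡ x),
-- whose base case A x x (refl x) = (x ≡ x) is inhabited by refl x.
≡-sym : (X : 𝓤 ̇ ) (x y : X) → x ≡ y → y ≡ x
≡-sym X = 𝕁 X (λ x y p → y ≡ x) refl
```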
This is related to the Yoneda Lemma in category theory (for readers familiar with the subject), which says that certain natural transformations are uniquely determined by their action on the identity arrows, even if they are defined for all arrows. Similarly here, `𝕁` is uniquely determined by its action on reflexive identifications, but is defined for all identifications between any two points, not just reflexivities.
In summary, Martin-Löf’s identity type is given by the data `Id`, `refl` and `𝕁`, with the above equation for `𝕁`. However, we will not always use this induction principle, because we can instead work with the instances we need by pattern matching on `refl` (which is just what we did to define the principle itself), and there is a theorem by Jesper Cockx which says that with the Agda option `without-K`, pattern matching on `refl` can define/prove precisely what `𝕁` can.
Exercise. Define

```agda
ℍ : {X : 𝓤 ̇ } (x : X) (B : (y : X) → x ≡ y → 𝓥 ̇ )
  → B x (refl x) → (y : X) (p : x ≡ y) → B y p
ℍ x B b x (refl x) = b
```
Then we can define `𝕁` from `ℍ` as follows:

```agda
𝕁' : (X : 𝓤 ̇ ) (A : (x y : X) → x ≡ y → 𝓥 ̇ )
   → ((x : X) → A x x (refl x))
   → (x y : X) (p : x ≡ y) → A x y p
𝕁' X A f x = ℍ x (A x) (f x)

𝕁s-agreement : (X : 𝓤 ̇ ) (A : (x y : X) → x ≡ y → 𝓥 ̇ )
               (f : (x : X) → A x x (refl x)) (x y : X) (p : x ≡ y)
             → 𝕁 X A f x y p ≡ 𝕁' X A f x y p
𝕁s-agreement X A f x x (refl x) = refl (f x)
```
Similarly define `ℍ'` from `𝕁` without using pattern matching on `refl`, and show that it coincides with `ℍ` (possibly using pattern matching on `refl`). This is harder.
Notational remark. The symbols “`=`” and “`≡`” are swapped with respect to the HoTT book convention for definitional/judgemental equality and type-valued equality, and there is nothing we can do about that, because “`=`” is a reserved Agda symbol for definitional equality. Irrespectively of this, it does make sense to use “`≡`” with a triple bar, if we understand this as indicating that there are multiple ways of identifying two things in general.
With this, we have concluded the rendering of our spartan MLTT in Agda notation. Before embarking on the development of univalent mathematics within our spartan MLTT, we pause to discuss some basic examples of mathematics in Martin-Löf type theory.
Transport along an identification.

```agda
transport : {X : 𝓤 ̇ } (A : X → 𝓥 ̇ ) {x y : X}
          → x ≡ y → A x → A y
transport A (refl x) = 𝑖𝑑 (A x)
```
We can equivalently define transport using `𝕁` as follows:

```agda
transport𝕁 : {X : 𝓤 ̇ } (A : X → 𝓥 ̇ ) {x y : X}
           → x ≡ y → A x → A y
transport𝕁 {𝓤} {𝓥} {X} A {x} {y} = 𝕁 X (λ x y _ → A x → A y) (λ x → 𝑖𝑑 (A x)) x y
```
In the same way `ℕ`-recursion can be seen as the non-dependent special case of `ℕ`-induction, the following transport function can be seen as the non-dependent special case of the `≡`-induction principle `ℍ` with some of the arguments permuted and made implicit:

```agda
nondep-ℍ : {X : 𝓤 ̇ } (x : X) (A : X → 𝓥 ̇ )
         → A x → (y : X) → x ≡ y → A y
nondep-ℍ x A = ℍ x (λ y _ → A y)

transportℍ : {X : 𝓤 ̇ } (A : X → 𝓥 ̇ ) {x y : X}
           → x ≡ y → A x → A y
transportℍ A {x} {y} p a = nondep-ℍ x A a y p
```
All the above transports coincide:

```agda
transports-agreement : {X : 𝓤 ̇ } (A : X → 𝓥 ̇ ) {x y : X} (p : x ≡ y)
                     → (transportℍ A p ≡ transport A p)
                     × (transport𝕁 A p ≡ transport A p)
transports-agreement A (refl x) = refl (transport A (refl x)) ,
                                  refl (transport A (refl x))
```
The following is for use when we want to recover implicit arguments without mentioning them.

```agda
lhs : {X : 𝓤 ̇ } {x y : X} → x ≡ y → X
lhs {𝓤} {X} {x} {y} p = x

rhs : {X : 𝓤 ̇ } {x y : X} → x ≡ y → X
rhs {𝓤} {X} {x} {y} p = y
```
Composition of identifications. Given two identifications `p : x ≡ y` and `q : y ≡ z`, we can compose them to get an identification `p ∙ q : x ≡ z`. This can also be seen as transitivity of equality. Because the type of composition doesn’t mention `p` and `q`, we can use the non-dependent version of `≡`-induction:

```agda
_∙_ : {X : 𝓤 ̇ } {x y z : X} → x ≡ y → y ≡ z → x ≡ z
p ∙ q = transport (lhs p ≡_) q p
```
Here we are considering the family `A t = (x ≡ t)`, and using the identification `q : y ≡ z` to transport `A y` to `A z`, that is, `x ≡ y` to `x ≡ z`.
Exercise. Can you define an alternative version that uses `p` to transport? Do the two versions give equal results?
When writing `p ∙ q`, we lose information on the lhs and the rhs of the identifications `p : x ≡ y` and `q : y ≡ z`, which makes some definitions hard to read. We now introduce notation to be able to write e.g.

    x ≡⟨ p ⟩
    y ≡⟨ q ⟩
    z ∎

as a synonym of the expression `p ∙ q`, with some of the implicit arguments of `_∙_` made explicit. We have one ternary mixfix operator `_≡⟨_⟩_` and one unary postfix operator `_∎`.
```agda
_≡⟨_⟩_ : {X : 𝓤 ̇ } (x : X) {y z : X} → x ≡ y → y ≡ z → x ≡ z
x ≡⟨ p ⟩ q = p ∙ q

_∎ : {X : 𝓤 ̇ } (x : X) → x ≡ x
x ∎ = refl x
```
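As a small usage example of this notation, transitivity of `_≡_` can be written as a two-step chain. The name `≡-trans` is ours, for illustration only:

```agda
-- Illustration only: a two-step chain, which elaborates to
-- p ∙ (q ∙ refl z).
≡-trans : {X : 𝓤 ̇ } {x y z : X} → x ≡ y → y ≡ z → x ≡ z
≡-trans {𝓤} {X} {x} {y} {z} p q = x ≡⟨ p ⟩
                                  y ≡⟨ q ⟩
                                  z ∎
```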
Inversion of identifications. Given an identification, we get an identification in the opposite direction:

```agda
_⁻¹ : {X : 𝓤 ̇ } → {x y : X} → x ≡ y → y ≡ x
p ⁻¹ = transport (_≡ lhs p) p (refl (lhs p))
```
We can define an alternative version of identification composition with this:

```agda
_∙'_ : {X : 𝓤 ̇ } {x y z : X} → x ≡ y → y ≡ z → x ≡ z
p ∙' q = transport (_≡ rhs q) (p ⁻¹) q
```
This agrees with the previous one:

```agda
∙agreement : {X : 𝓤 ̇ } {x y z : X} (p : x ≡ y) (q : y ≡ z)
           → p ∙' q ≡ p ∙ q
∙agreement (refl x) (refl x) = refl (refl x)
```
But `refl y` is a definitional neutral element for one of them on the right and for the other one on the left,

    p ∙ refl y = p
    refl y ∙' q = q

which can be checked as follows:

```agda
rdnel : {X : 𝓤 ̇ } {x y : X} (p : x ≡ y)
      → p ∙ refl y ≡ p
rdnel p = refl p

rdner : {X : 𝓤 ̇ } {y z : X} (q : y ≡ z)
      → refl y ∙' q ≡ q
rdner q = refl q
```
Exercise. The identification `refl y` is neutral on both sides of each of the two operations `_∙_` and `_∙'_`, although not definitionally. This has to be proved by induction on identifications, as in `∙agreement`.
Application of a function to an identification. Given an identification `p : x ≡ x'`, we get an identification `ap f p : f x ≡ f x'` for any `f : X → Y`:

```agda
ap : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y) {x x' : X} → x ≡ x' → f x ≡ f x'
ap f {x} {x'} p = transport (λ - → f x ≡ f -) p (refl (f x))
```
Here the symbol “`-`”, which is not to be confused with the symbol “`_`”, is a variable. We will adopt the convention in these notes of using this variable name “`-`” to make clear which part of an expression we are replacing with `transport`.
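For example, applying the successor function to an identification of natural numbers gives an identification of the successors, a step used later to show that `succ` is left cancellable. The name `ap-succ` is ours, for illustration only:

```agda
-- Illustration only: a special case of ap for the successor function.
ap-succ : {m n : ℕ} → m ≡ n → succ m ≡ succ n
ap-succ = ap succ
```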
Notice that we have so far used the recursion principle `transport` only. To reason about `transport`, `_∙_`, `_⁻¹` and `ap`, we will need to use the full induction principle `𝕁` (or equivalently pattern matching on `refl`).
Pointwise equality of functions. We will work with pointwise equality of functions, defined as follows, which, using univalence, will be equivalent to equality of functions.

```agda
_∼_ : {X : 𝓤 ̇ } {A : X → 𝓥 ̇ } → Π A → Π A → 𝓤 ⊔ 𝓥 ̇
f ∼ g = ∀ x → f x ≡ g x
```
The symbol `∀` is a built-in notation for `Π`. We could equivalently write the definiens as `(x : _) → f x ≡ g x`, or, with our `Π` notation, `Π \x → f x ≡ g x`, or, with our `domain` notation, `(x : domain f) → f x ≡ g x`.
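For instance, the relation `_∼_` is reflexive: every function is pointwise equal to itself. The name `∼-refl` is ours, for illustration only:

```agda
-- Illustration only: reflexivity of pointwise equality,
-- witnessed by refl at each argument.
∼-refl : {X : 𝓤 ̇ } {A : X → 𝓥 ̇ } (f : Π A) → f ∼ f
∼-refl f x = refl (f x)
```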
We first introduce notation for double and triple negation, to avoid the use of brackets.

```agda
¬¬ ¬¬¬ : 𝓤 ̇ → 𝓤 ̇
¬¬  A = ¬(¬ A)
¬¬¬ A = ¬(¬¬ A)
```
To prove that `A → ¬¬ A`, that is, `A → ((A → 𝟘) → 𝟘)`, we start with a hypothetical element `a : A` and a hypothetical function `u : A → 𝟘`, and the goal is to get an element of `𝟘`. All we need to do is to apply the function `u` to `a`. This gives double-negation introduction:

```agda
dni : (A : 𝓤 ̇ ) → A → ¬¬ A
dni A a u = u a
```
Mathematically, this says that if we have a point of `A` (we say that `A` is pointed) then `A` is nonempty. There is no general procedure to implement the converse, that is, to get a point of `A` from a function `(A → 𝟘) → 𝟘`. For truth values `A`, we can assume this as an axiom if we wish, because it is equivalent to the principle of excluded middle. For arbitrary types `A`, this would be a form of global choice for type theory. However, global choice is inconsistent with univalence [HoTT book, Theorem 3.2.2], because there is no way to choose an element of every non-empty type in a way that is invariant under automorphisms. On the other hand, the axiom of choice is consistent with univalent type theory, as stated in the introduction.
In the proof of the following, we are given hypothetical functions `f : A → B` and `v : B → 𝟘`, and a hypothetical element `a : A`, and our goal is to get an element of `𝟘`. But this is easy, because `f a : B` and hence `v (f a) : 𝟘`.

```agda
contrapositive : {A : 𝓤 ̇ } {B : 𝓥 ̇ } → (A → B) → (¬ B → ¬ A)
contrapositive f v a = v (f a)
```
We have given a logical name to this function. Mathematically, this says that if we have a function `A → B` and `B` is empty, then `A` must be empty, too. The proof is by assuming that `A` would have an inhabitant `a`, to get a contradiction, namely that `B` would have an inhabitant, too, showing that there can’t be any such inhabitant `a` of `A` after all. See Bauer for a general discussion.
And from this we get that three negations imply one:

```agda
tno : (A : 𝓤 ̇ ) → ¬¬¬ A → ¬ A
tno A = contrapositive (dni A)
```
Hence, using `dni` once again, we get that `¬¬¬ A` if and only if `¬ A`. It is entertaining to see how Brouwer formulated and proved this fact in his Cambridge Lectures on Intuitionism:
Theorem. Absurdity of absurdity of absurdity is equivalent to absurdity.
Proof. Firstly, since implication of the assertion 𝑦 by the assertion 𝑥 implies implication of absurdity of 𝑥 by absurdity of 𝑦, the implication of absurdity of absurdity by truth (which is an established fact) implies the implication of absurdity of truth, that is to say of absurdity, by absurdity of absurdity of absurdity. Secondly, since truth of an assertion implies absurdity of its absurdity, in particular truth of absurdity implies absurdity of absurdity of absurdity.
If we define logical equivalence by

```agda
_⇔_ : 𝓤 ̇ → 𝓥 ̇ → 𝓤 ⊔ 𝓥 ̇
X ⇔ Y = (X → Y) × (Y → X)

lr-implication : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → (X ⇔ Y) → (X → Y)
lr-implication = pr₁

rl-implication : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → (X ⇔ Y) → (Y → X)
rl-implication = pr₂
```
then we can render Brouwer’s argument in Agda as follows, where the “established fact” is `dni`:

```agda
absurdity³-is-absurdity : {A : 𝓤 ̇ } → ¬¬¬ A ⇔ ¬ A
absurdity³-is-absurdity {𝓤} {A} = firstly , secondly
 where
  firstly : ¬¬¬ A → ¬ A
  firstly = contrapositive (dni A)

  secondly : ¬ A → ¬¬¬ A
  secondly = dni (¬ A)
```
But of course Brouwer, as is well known, was averse to formalism, and hence wouldn’t approve of such a sacrilege.
We now define a symbol for the negation of equality.

```agda
_≢_ : {X : 𝓤 ̇ } → X → X → 𝓤 ̇
x ≢ y = ¬(x ≡ y)
```
In the following proof, we have `u : x ≡ y → 𝟘` and need to define a function `y ≡ x → 𝟘`. So all we need to do is to compose the function that inverts identifications with `u`:

```agda
≢-sym : {X : 𝓤 ̇ } {x y : X} → x ≢ y → y ≢ x
≢-sym {𝓤} {X} {x} {y} u = λ (p : y ≡ x) → u (p ⁻¹)
```
To show that the type `𝟙` is not equal to the type `𝟘`, we use that `transport id` gives `𝟙 ≡ 𝟘 → id 𝟙 → id 𝟘`, where `id` is the identity function of the universe `𝓤₀`. More generally, we have the following conversion of type identifications into functions:

```agda
Id→Fun : {X Y : 𝓤 ̇ } → X ≡ Y → X → Y
Id→Fun {𝓤} = transport (𝑖𝑑 (𝓤 ̇ ))
```
Here the identity function is that of the universe `𝓤` where the types `X` and `Y` live. An equivalent definition is the following, where this time the identity function is that of the type `X`:

```agda
Id→Fun' : {X Y : 𝓤 ̇ } → X ≡ Y → X → Y
Id→Fun' (refl X) = 𝑖𝑑 X

Id→Funs-agree : {X Y : 𝓤 ̇ } (p : X ≡ Y)
              → Id→Fun p ≡ Id→Fun' p
Id→Funs-agree (refl X) = refl (𝑖𝑑 X)
```
So if we have a hypothetical identification `p : 𝟙 ≡ 𝟘`, then we get a function `𝟙 → 𝟘`. We apply this function to `⋆ : 𝟙` to conclude the proof.

```agda
𝟙-is-not-𝟘 : 𝟙 ≢ 𝟘
𝟙-is-not-𝟘 p = Id→Fun p ⋆
```
To show that the elements `₁` and `₀` of the two-point type `𝟚` are not equal, we reduce to the above case. We start with a hypothetical identification `p : ₁ ≡ ₀`.

```agda
₁-is-not-₀ : ₁ ≢ ₀
₁-is-not-₀ p = 𝟙-is-not-𝟘 q
 where
  f : 𝟚 → 𝓤₀ ̇
  f ₀ = 𝟘
  f ₁ = 𝟙

  q : 𝟙 ≡ 𝟘
  q = ap f p
```
Remark. Agda allows us to use a pattern `()` to get the following quick proof. However, this method of proof doesn’t belong to the realm of MLTT. Hence we will use the pattern `()` only in the above definition of `𝟘-induction` and nowhere else in these notes.

```agda
₁-is-not-₀[not-an-MLTT-proof] : ¬(₁ ≡ ₀)
₁-is-not-₀[not-an-MLTT-proof] ()
```
Perhaps the following is sufficiently self-explanatory given the above:

```agda
decidable : 𝓤 ̇ → 𝓤 ̇
decidable A = A + ¬ A

has-decidable-equality : (X : 𝓤 ̇ ) → 𝓤 ̇
has-decidable-equality X = (x y : X) → decidable (x ≡ y)

𝟚-has-decidable-equality : has-decidable-equality 𝟚
𝟚-has-decidable-equality ₀ ₀ = inl (refl ₀)
𝟚-has-decidable-equality ₀ ₁ = inr (≢-sym ₁-is-not-₀)
𝟚-has-decidable-equality ₁ ₀ = inr ₁-is-not-₀
𝟚-has-decidable-equality ₁ ₁ = inl (refl ₁)
```
So we consider four cases. In the first and the last, we have equal things, and so we give an answer in the left-hand side of the sum. In the middle two, we give an answer in the right-hand side, where we need functions `₀ ≡ ₁ → 𝟘` and `₁ ≡ ₀ → 𝟘`, which we can take to be `≢-sym ₁-is-not-₀` and `₁-is-not-₀` respectively.
The following is more interesting. We consider the two possible cases for `n`. The first one assumes a hypothetical function `f : ₀ ≡ ₀ → 𝟘`, from which we get `f (refl ₀) : 𝟘`, and then, using `!𝟘`, we get an element of any type we like, which we choose to be `₀ ≡ ₁`, and we are done. Of course, we will never be able to use the function `not-zero-is-one` with such outrageous arguments. The other case, `n = ₁`, doesn’t need to use the hypothesis `f : ₁ ≡ ₀ → 𝟘`, because the desired conclusion holds right away, as it is `₁ ≡ ₁`, which is proved by `refl ₁`. But notice that there is nothing wrong with the hypothesis `f : ₁ ≡ ₀ → 𝟘`. For example, we can use `not-zero-is-one` taking `n` to be `₁` and `f` to be `₁-is-not-₀`, so that the hypotheses can be fulfilled in the second equation.

```agda
not-zero-is-one : (n : 𝟚) → n ≢ ₀ → n ≡ ₁
not-zero-is-one ₀ f = !𝟘 (₀ ≡ ₁) (f (refl ₀))
not-zero-is-one ₁ f = refl ₁
```
The following generalizes `₁-is-not-₀`, both in its statement and its proof (so we could have formulated it first and then used it to deduce `₁-is-not-₀`):

```agda
inl-inr-disjoint-images : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } {x : X} {y : Y} → inl x ≢ inr y
inl-inr-disjoint-images {𝓤} {𝓥} {X} {Y} p = 𝟙-is-not-𝟘 q
 where
  f : X + Y → 𝓤₀ ̇
  f (inl x) = 𝟙
  f (inr y) = 𝟘

  q : 𝟙 ≡ 𝟘
  q = ap f p
```
If `P` or `Q` holds and `Q` fails, then `P` holds:

```agda
right-fails-gives-left-holds : {P : 𝓤 ̇ } {Q : 𝓥 ̇ } → P + Q → ¬ Q → P
right-fails-gives-left-holds (inl p) u = p
right-fails-gives-left-holds (inr q) u = !𝟘 _ (u q)
```
We illustrate the above constructs of MLTT to formulate this conjecture.

```agda
module twin-primes where

 open Arithmetic renaming (_×_ to _*_ ; _+_ to _∔_)
 open ℕ-order

 is-prime : ℕ → 𝓤₀ ̇
 is-prime n = (n ≥ 2) × ((x y : ℕ) → x * y ≡ n → (x ≡ 1) + (x ≡ n))

 twin-prime-conjecture : 𝓤₀ ̇
 twin-prime-conjecture = (n : ℕ) → Σ \(p : ℕ) → (p ≥ n)
                                              × is-prime p
                                              × is-prime (p ∔ 2)
```
Thus, not only can we write down definitions, constructions, theorems and proofs, but also conjectures. They are just definitions of types. Likewise, the univalence axiom, to be formulated in due course, is a type.
We first prove the remaining Peano axioms.

```agda
positive-not-zero : (x : ℕ) → succ x ≢ 0
positive-not-zero x p = 𝟙-is-not-𝟘 (g p)
 where
  f : ℕ → 𝓤₀ ̇
  f 0        = 𝟘
  f (succ x) = 𝟙

  g : succ x ≡ 0 → 𝟙 ≡ 𝟘
  g = ap f
```
To show that the successor function is left cancellable, we can use the following predecessor function.

```agda
pred : ℕ → ℕ
pred 0        = 0
pred (succ n) = n

succ-lc : {x y : ℕ} → succ x ≡ succ y → x ≡ y
succ-lc = ap pred
```
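As a quick sanity check, `pred` computes as expected on numerals, definitionally. The name `pred-example` is ours, for illustration only:

```agda
-- Illustration only: pred 3 normalizes to 2, so refl suffices.
pred-example : pred 3 ≡ 2
pred-example = refl 2
```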
With this we have proved all the Peano axioms.
Without assuming the principle of excluded middle, we can prove that `ℕ` has decidable equality:

```agda
ℕ-has-decidable-equality : has-decidable-equality ℕ
ℕ-has-decidable-equality 0 0               = inl (refl 0)
ℕ-has-decidable-equality 0 (succ y)        = inr (≢-sym (positive-not-zero y))
ℕ-has-decidable-equality (succ x) 0        = inr (positive-not-zero x)
ℕ-has-decidable-equality (succ x) (succ y) = f (ℕ-has-decidable-equality x y)
 where
  f : decidable (x ≡ y) → decidable (succ x ≡ succ y)
  f (inl p) = inl (ap succ p)
  f (inr k) = inr (λ (s : succ x ≡ succ y) → k (succ-lc s))
```
Exercise. Students should do this kind of thing at least once in their academic life: rewrite the above proof of the decidability of equality of `ℕ` to use the `ℕ-induction` principle instead of pattern matching and recursion, to see for themselves that this can be done.
We now move to basic arithmetic, and we use a module for that.

```agda
module basic-arithmetic-and-order where

 open ℕ-order public
 open Arithmetic renaming (_+_ to _∔_) hiding (_×_)
```
We can show that addition is associative as follows, by induction on `z`, where `IH` stands for “induction hypothesis”:

```agda
 +-assoc : (x y z : ℕ) → (x ∔ y) ∔ z ≡ x ∔ (y ∔ z)
 +-assoc x y zero     = (x ∔ y) ∔ 0 ≡⟨ refl _ ⟩
                        x ∔ (y ∔ 0) ∎
 +-assoc x y (succ z) = (x ∔ y) ∔ succ z   ≡⟨ refl _ ⟩
                        succ ((x ∔ y) ∔ z) ≡⟨ ap succ IH ⟩
                        succ (x ∔ (y ∔ z)) ≡⟨ refl _ ⟩
                        x ∔ (y ∔ succ z)   ∎
  where
   IH : (x ∔ y) ∔ z ≡ x ∔ (y ∔ z)
   IH = +-assoc x y z
```
Notice that the proofs `refl _` should be read as “by definition” or “by construction”. They are not necessary, because Agda knows the definitions and silently expands them when necessary, but we are writing them here for the sake of clarity. Elsewhere in these notes, we do occasionally rely on silent expansions of definitions. Here is the version with the silent expansion of definitions, for the sake of illustration (the author of these notes can write, but not read, it in the absence of the above verbose version):

```agda
 +-assoc' : (x y z : ℕ) → (x ∔ y) ∔ z ≡ x ∔ (y ∔ z)
 +-assoc' x y zero     = refl _
 +-assoc' x y (succ z) = ap succ (+-assoc' x y z)
```
We defined addition by induction on the second argument. Next we show that the base case and induction step of a definition by induction on the first argument hold (but of course not definitionally). We do this by induction on the second argument.

```agda
 +-base-on-first : (x : ℕ) → 0 ∔ x ≡ x
 +-base-on-first 0        = refl 0
 +-base-on-first (succ x) = 0 ∔ succ x   ≡⟨ refl _ ⟩
                            succ (0 ∔ x) ≡⟨ ap succ IH ⟩
                            succ x       ∎
  where
   IH : 0 ∔ x ≡ x
   IH = +-base-on-first x

 +-step-on-first : (x y : ℕ) → succ x ∔ y ≡ succ (x ∔ y)
 +-step-on-first x zero     = refl (succ x)
 +-step-on-first x (succ y) = succ x ∔ succ y   ≡⟨ refl _ ⟩
                              succ (succ x ∔ y) ≡⟨ ap succ IH ⟩
                              succ (x ∔ succ y) ∎
  where
   IH : succ x ∔ y ≡ succ (x ∔ y)
   IH = +-step-on-first x y
```
Using this, the commutativity of addition can be proved by induction on the first argument.

```agda
 +-comm : (x y : ℕ) → x ∔ y ≡ y ∔ x
 +-comm 0 y        = 0 ∔ y ≡⟨ +-base-on-first y ⟩
                     y     ≡⟨ refl _ ⟩
                     y ∔ 0 ∎
 +-comm (succ x) y = succ x ∔ y  ≡⟨ +-step-on-first x y ⟩
                     succ(x ∔ y) ≡⟨ ap succ IH ⟩
                     succ(y ∔ x) ≡⟨ refl _ ⟩
                     y ∔ succ x  ∎
  where
   IH : x ∔ y ≡ y ∔ x
   IH = +-comm x y
```
We now show that addition is cancellable in its left argument, by induction on the left argument:

```agda
 +-lc : (x y z : ℕ) → x ∔ y ≡ x ∔ z → y ≡ z
 +-lc 0        y z p = y     ≡⟨ (+-base-on-first y)⁻¹ ⟩
                       0 ∔ y ≡⟨ p ⟩
                       0 ∔ z ≡⟨ +-base-on-first z ⟩
                       z     ∎
 +-lc (succ x) y z p = IH
  where
   q = succ (x ∔ y) ≡⟨ (+-step-on-first x y)⁻¹ ⟩
       succ x ∔ y   ≡⟨ p ⟩
       succ x ∔ z   ≡⟨ +-step-on-first x z ⟩
       succ (x ∔ z) ∎

   IH : y ≡ z
   IH = +-lc x y z (succ-lc q)
```
Now we solve part of an exercise given above, namely that `(x ≤ y) ⇔ Σ \(z : ℕ) → x + z ≡ y`. First we name the alternative definition of `≤`:

```agda
 _≼_ : ℕ → ℕ → 𝓤₀ ̇
 x ≼ y = Σ \(z : ℕ) → x ∔ z ≡ y
```
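For example, here is a witness that `4 ≼ 7`, where the required equation `4 ∔ 3 ≡ 7` holds definitionally, so that `refl` suffices. The name `≼-example` is ours, for illustration only:

```agda
 -- Illustration only: 4 ∔ 3 normalizes to 7.
 ≼-example : 4 ≼ 7
 ≼-example = 3 , refl 7
```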
Next we show that the two relations `≤` and `≼` imply each other. In both cases, we proceed by induction on both arguments.

```agda
 ≤-gives-≼ : (x y : ℕ) → x ≤ y → x ≼ y
 ≤-gives-≼ 0 0               l = 0 , refl 0
 ≤-gives-≼ 0 (succ y)        l = succ y , +-base-on-first (succ y)
 ≤-gives-≼ (succ x) 0        l = !𝟘 (succ x ≼ zero) l
 ≤-gives-≼ (succ x) (succ y) l = γ
  where
   IH : x ≼ y
   IH = ≤-gives-≼ x y l

   z : ℕ
   z = pr₁ IH

   p : x ∔ z ≡ y
   p = pr₂ IH

   γ : succ x ≼ succ y
   γ = z , (succ x ∔ z   ≡⟨ +-step-on-first x z ⟩
            succ (x ∔ z) ≡⟨ ap succ p ⟩
            succ y       ∎)

 ≼-gives-≤ : (x y : ℕ) → x ≼ y → x ≤ y
 ≼-gives-≤ 0 0               (z , p) = ⋆
 ≼-gives-≤ 0 (succ y)        (z , p) = ⋆
 ≼-gives-≤ (succ x) 0        (z , p) = positive-not-zero (x ∔ z) q
  where
   q = succ (x ∔ z) ≡⟨ (+-step-on-first x z)⁻¹ ⟩
       succ x ∔ z   ≡⟨ p ⟩
       zero         ∎
 ≼-gives-≤ (succ x) (succ y) (z , p) = IH
  where
   q = succ (x ∔ z) ≡⟨ (+-step-on-first x z)⁻¹ ⟩
       succ x ∔ z   ≡⟨ p ⟩
       succ y       ∎

   IH : x ≤ y
   IH = ≼-gives-≤ x y (z , succ-lc q)
```
Later we will show that `(x ≤ y) ≡ Σ \(z : ℕ) → x + z ≡ y`, using univalence.
We now develop some generally useful material regarding the order `≤` on natural numbers. First, it is reflexive, transitive and antisymmetric:

```agda
 ≤-refl : (n : ℕ) → n ≤ n
 ≤-refl zero     = ⋆
 ≤-refl (succ n) = ≤-refl n

 ≤-trans : (l m n : ℕ) → l ≤ m → m ≤ n → l ≤ n
 ≤-trans zero     m        n        p q = ⋆
 ≤-trans (succ l) zero     n        p q = !𝟘 (succ l ≤ n) p
 ≤-trans (succ l) (succ m) zero     p q = q
 ≤-trans (succ l) (succ m) (succ n) p q = ≤-trans l m n p q

 ≤-anti : (m n : ℕ) → m ≤ n → n ≤ m → m ≡ n
 ≤-anti zero     zero     p q = refl zero
 ≤-anti zero     (succ n) p q = !𝟘 (zero ≡ succ n) q
 ≤-anti (succ m) zero     p q = !𝟘 (succ m ≡ zero) p
 ≤-anti (succ m) (succ n) p q = ap succ (≤-anti m n p q)

 ≤-succ : (n : ℕ) → n ≤ succ n
 ≤-succ zero     = ⋆
 ≤-succ (succ n) = ≤-succ n

 zero-minimal : (n : ℕ) → zero ≤ n
 zero-minimal n = ⋆

 unique-minimal : (n : ℕ) → n ≤ zero → n ≡ zero
 unique-minimal zero     p = refl zero
 unique-minimal (succ n) p = !𝟘 (succ n ≡ zero) p

 ≤-split : (m n : ℕ) → m ≤ succ n → (m ≤ n) + (m ≡ succ n)
 ≤-split zero     n        l = inl l
 ≤-split (succ m) zero     l = inr (ap succ (unique-minimal m l))
 ≤-split (succ m) (succ n) l = +-recursion inl (inr ∘ ap succ) (≤-split m n l)

 _<_ : ℕ → ℕ → 𝓤₀ ̇
 x < y = succ x ≤ y

 not-<-gives-≥ : (m n : ℕ) → ¬(n < m) → m ≤ n
 not-<-gives-≥ zero     n        u = zero-minimal n
 not-<-gives-≥ (succ m) zero       = dni (zero < succ m) (zero-minimal m)
 not-<-gives-≥ (succ m) (succ n)   = not-<-gives-≥ m n

 bounded-∀-next : (A : ℕ → 𝓤 ̇ ) (k : ℕ)
                → A k
                → ((n : ℕ) → n < k → A n)
                → (n : ℕ) → n < succ k → A n
 bounded-∀-next A k a φ n l = +-recursion f g s
  where
   s : (n < k) + (succ n ≡ succ k)
   s = ≤-split (succ n) k l

   f : n < k → A n
   f = φ n

   g : succ n ≡ succ k → A n
   g p = transport A ((succ-lc p)⁻¹) a
```
The type of roots of a function:

```agda
 root : (ℕ → ℕ) → 𝓤₀ ̇
 root f = Σ \(n : ℕ) → f n ≡ 0

 _has-no-root<_ : (ℕ → ℕ) → ℕ → 𝓤₀ ̇
 f has-no-root< k = (n : ℕ) → n < k → f n ≢ 0

 is-minimal-root : (ℕ → ℕ) → ℕ → 𝓤₀ ̇
 is-minimal-root f m = (f m ≡ 0) × (f has-no-root< m)

 at-most-one-minimal-root : (f : ℕ → ℕ) (m n : ℕ)
                          → is-minimal-root f m → is-minimal-root f n → m ≡ n
 at-most-one-minimal-root f m n (p , φ) (q , ψ) = c m n a b
  where
   a : ¬(m < n)
   a u = ψ m u p

   b : ¬(n < m)
   b v = φ n v q

   c : (m n : ℕ) → ¬(m < n) → ¬(n < m) → m ≡ n
   c m n u v = ≤-anti m n (not-<-gives-≥ m n v) (not-<-gives-≥ n m u)
```
The type of minimal roots of a function:

```agda
 minimal-root : (ℕ → ℕ) → 𝓤₀ ̇
 minimal-root f = Σ \(m : ℕ) → is-minimal-root f m

 minimal-root-is-root : ∀ f → minimal-root f → root f
 minimal-root-is-root f (m , p , _) = m , p

 bounded-ℕ-search : ∀ k f → (minimal-root f) + (f has-no-root< k)
 bounded-ℕ-search zero     f = inr (λ n → !𝟘 (f n ≢ 0))
 bounded-ℕ-search (succ k) f = +-recursion φ γ (bounded-ℕ-search k f)
  where
   A : ℕ → (ℕ → ℕ) → 𝓤₀ ̇
   A k f = (minimal-root f) + (f has-no-root< k)

   φ : minimal-root f → A (succ k) f
   φ = inl

   γ : f has-no-root< k → A (succ k) f
   γ u = +-recursion γ₀ γ₁ (ℕ-has-decidable-equality (f k) 0)
    where
     γ₀ : f k ≡ 0 → A (succ k) f
     γ₀ p = inl (k , p , u)

     γ₁ : f k ≢ 0 → A (succ k) f
     γ₁ v = inr (bounded-∀-next (λ n → f n ≢ 0) k v u)
```
Given any root, we can find a minimal root.

```agda
 root-gives-minimal-root : ∀ f → root f → minimal-root f
 root-gives-minimal-root f (n , p) = γ
  where
   g : ¬(f has-no-root< (succ n))
   g φ = φ n (≤-refl n) p

   γ : minimal-root f
   γ = right-fails-gives-left-holds (bounded-ℕ-search (succ n) f) g
```
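For example, the function `λ n → pred (pred n)` has `2` as a root, and hence, by the above, a minimal root (which in this case is `0`, since `pred (pred 0)` is also `0`). The name `two-is-a-root` is ours, for illustration only:

```agda
 -- Illustration only: pred (pred 2) normalizes to 0,
 -- so refl is enough to exhibit a root.
 two-is-a-root : root (λ n → pred (pred n))
 two-is-a-root = 2 , refl 0
```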
But, as discussed above, rather than postulating univalence and truncation, we will use them as explicit assumptions each time they are needed.
We emphasize that there are univalent type theories in which univalence and existence of truncations are theorems, for example cubical type theory, which has a version available in Agda, called cubical Agda.
Voevodsky defined a notion of contractible type, which we refer to here as singleton type.
```agda
is-singleton : 𝓤 ̇ → 𝓤 ̇
is-singleton X = Σ \(c : X) → (x : X) → c ≡ x
```
Such an element c
is called a center of contraction of X
.
```agda
𝟙-is-singleton : is-singleton 𝟙
𝟙-is-singleton = ⋆ , 𝟙-induction (λ x → ⋆ ≡ x) (refl ⋆)
```
Once we have defined the notion of type
equivalence, we will have
that a type is a singleton if and only if it is equivalent to 𝟙
.
When working with singleton types, it will be convenient to have distinguished names for the two projections:
```agda
center : (X : 𝓤 ̇ ) → is-singleton X → X
center X (c , φ) = c

centrality : (X : 𝓤 ̇ ) (i : is-singleton X) (x : X) → center X i ≡ x
centrality X (c , φ) = φ
```
A type is a subsingleton if it has at most one element, that is, any two of its elements are equal, or identified.
```agda
is-subsingleton : 𝓤 ̇ → 𝓤 ̇
is-subsingleton X = (x y : X) → x ≡ y

𝟘-is-subsingleton : is-subsingleton 𝟘
𝟘-is-subsingleton x y = !𝟘 (x ≡ y) x

singletons-are-subsingletons : (X : 𝓤 ̇ ) → is-singleton X → is-subsingleton X
singletons-are-subsingletons X (c , φ) x y = x ≡⟨ (φ x)⁻¹ ⟩
                                             c ≡⟨ φ y ⟩
                                             y ∎

𝟙-is-subsingleton : is-subsingleton 𝟙
𝟙-is-subsingleton = singletons-are-subsingletons 𝟙 𝟙-is-singleton

pointed-subsingletons-are-singletons : (X : 𝓤 ̇ )
                                     → X → is-subsingleton X → is-singleton X
pointed-subsingletons-are-singletons X x s = (x , s x)

singleton-iff-pointed-and-subsingleton : {X : 𝓤 ̇ }
                                       → is-singleton X ⇔ (X × is-subsingleton X)
singleton-iff-pointed-and-subsingleton {𝓤} {X} = (a , b)
 where
  a : is-singleton X → X × is-subsingleton X
  a s = center X s , singletons-are-subsingletons X s
  b : X × is-subsingleton X → is-singleton X
  b (x , t) = pointed-subsingletons-are-singletons X x t
```
The terminology stands for subtype
of a singleton and is
justified
by the fact that a type X
is a subsingleton according to the above
definition if and only if the map X → 𝟙
is an
embedding, if and only if X
is
embedded into a singleton.
Under univalent excluded
middle, the empty type 𝟘
and the singleton
type 𝟙
are the only subsingletons, up to equivalence, or up to
identity if we further assume univalence.
Subsingletons are also called propositions or truth values:
```agda
is-prop is-truth-value : 𝓤 ̇ → 𝓤 ̇
is-prop        = is-subsingleton
is-truth-value = is-subsingleton
```
The terminology is-subsingleton
is more mathematical and avoids the
clash with the slogan propositions as
types,
which is based on the interpretation of mathematical statements as
arbitrary types, rather than just subsingletons. In these notes we
prefer the terminology subsingleton, but we occasionally use the
terminologies proposition and truth value. When we wish to
emphasize this notion of proposition adopted in univalent mathematics,
as opposed to “propositions as (arbitrary) types”, we may say
univalent proposition.
In some formal systems, for example set theory based on first-order logic, one can tell that something is a proposition by looking at its syntactical form, e.g. “there are infinitely many prime numbers” is a proposition, by construction, in such a theory.
In univalent mathematics, however, propositions are mathematical,
rather than meta-mathematical, objects, and the fact that a type turns
out to be a proposition requires mathematical proof. Moreover, such a
proof may be subtle and not immediate and require significant
preparation. An example is
the proof that the univalence axiom is a proposition, which relies on
the fact that univalence implies
function extensionality, which in turn
implies that the
statement that a map is an equivalence is a proposition. Another
non-trivial example, which again relies on univalence or at least
function extensionality, is the proof that the statement that a type
X
is a proposition is itself a
proposition,
which can be written as is-prop (is-prop X)
.
A type is defined to be a set if there is at most one way for any two of its elements to be equal:
```agda
is-set : 𝓤 ̇ → 𝓤 ̇
is-set X = (x y : X) → is-subsingleton (x ≡ y)
```
At this point, with the definition of these notions, we are entering the realm of univalent mathematics, but not yet needing the univalence axiom.
As mentioned above, under excluded middle, the only two subsingletons,
up to equivalence, are 𝟘
and 𝟙
. In fact, excluded middle in
univalent mathematics says precisely that.
```agda
EM EM' : ∀ 𝓤 → 𝓤 ⁺ ̇
EM  𝓤 = (X : 𝓤 ̇ ) → is-subsingleton X → X + ¬ X
EM' 𝓤 = (X : 𝓤 ̇ ) → is-subsingleton X → is-singleton X + is-empty X
```
Notice that the above don’t assert excluded middle, but instead say what excluded middle is (like when we said what the twin-prime conjecture is), in two logically equivalent versions:
```agda
EM-gives-EM' : EM 𝓤 → EM' 𝓤
EM-gives-EM' em X s = γ (em X s)
 where
  γ : X + ¬ X → is-singleton X + is-empty X
  γ (inl x) = inl (pointed-subsingletons-are-singletons X x s)
  γ (inr x) = inr x

EM'-gives-EM : EM' 𝓤 → EM 𝓤
EM'-gives-EM em' X s = γ (em' X s)
 where
  γ : is-singleton X + is-empty X → X + ¬ X
  γ (inl i) = inl (center X i)
  γ (inr x) = inr x
```
We will not assume or deny excluded middle, which is an independent statement (it can’t be proved or disproved). We will deliberately keep it undecided, adopting a neutral approach to the constructive vs. non-constructive dichotomy. We will however prove a couple of consequences of excluded middle in discussions of foundational issues such as size and existence of subsingleton truncations. We will also prove that excluded middle is a consequence of the axiom of choice.
It should be emphasized that the potential failure of excluded middle doesn’t say that there may be mysterious subsingletons that fail to be singletons and also fail to be empty. No such things occur in mathematical nature:
```agda
no-unicorns : ¬(Σ \(X : 𝓤 ̇ ) → is-subsingleton X × ¬(is-singleton X) × ¬(is-empty X))
no-unicorns (X , i , f , g) = c
 where
  e : is-empty X
  e x = f (pointed-subsingletons-are-singletons X x i)
  c : 𝟘
  c = g e
```
Given this, what does the potential failure of excluded middle mean?
That there is no general way to determine which of the two cases
is-singleton X
and is-empty X
applies for a given subsingleton
X
. This kind of phenomenon should be familiar even in classical,
non-constructive mathematics: although we are entitled to believe that
the Goldbach conjecture either holds or fails, we still don’t know
which one is the case, despite efforts by the sharpest mathematical
minds. A hypothetical element of the type EM
would, in particular,
be able to solve the Goldbach conjecture. There is nothing wrong or
contradictory with assuming the existence of such a magic black box. There
is only loss of the implicit algorithmic character of our type theory,
which most mathematicians will be perfectly happy to live with. In
these notes we don’t advocate any particular philosophy for or against
excluded middle and other non-constructive principles. We confine
ourselves to discussing mathematical facts.
Exercise. We also have that it is impossible for is-singleton X +
is-empty X
to fail for a given subsingleton X
, which amounts to
saying that
¬¬(is-singleton X + is-empty X)
always holds.
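A solution sketch for this exercise (one possible proof, not part of the original development):

```agda
¬¬-EM-instance : (X : 𝓤 ̇ ) → is-subsingleton X → ¬¬(is-singleton X + is-empty X)
¬¬-EM-instance X s φ = φ (inr (λ x → φ (inl (pointed-subsingletons-are-singletons X x s))))
```

The hypothetical refutation φ is fed a proof of emptiness built from φ itself, mirroring the usual proof of the double negation of excluded middle for a fixed proposition.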
Occasionally one hears people asserting that this says that the double negation of excluded middle holds. But this is incorrect. The above statement, when written in full, is
(X : 𝓤 ̇ ) → is-subsingleton X → ¬¬(is-singleton X + is-empty X)
.
This is a theorem, which is quite different from the double negation of excluded middle, which is not a theorem and is
¬¬((X : 𝓤 ̇ ) → is-subsingleton X → is-singleton X + is-empty X)
.
Just as excluded middle, this is an independent statement.
Exercise. Continued from the previous exercise. Also for any type
R
replacing the empty type, there is a function ((X + (X → R)) → R)
→ R
, so that the kind of phenomenon illustrated in the previous
exercise has little to do with the emptiness of the empty type.
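The generalization of the previous exercise can be sketched as follows (the name is ours, not from the original development):

```agda
+-dns : {X : 𝓤 ̇ } {R : 𝓥 ̇ } → ((X + (X → R)) → R) → R
+-dns k = k (inr (λ x → k (inl x)))
```

Instantiating R := 𝟘 gives ¬¬(X + ¬ X), which confirms that the emptiness of the empty type plays no role in the argument.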
A magma is a set equipped with a binary operation subject to no laws
[Bourbaki]. We can define the type of magmas in a universe 𝓤
as follows:
```agda
module magmas where

 Magma : (𝓤 : Universe) → 𝓤 ⁺ ̇
 Magma 𝓤 = Σ \(X : 𝓤 ̇ ) → is-set X × (X → X → X)
```
The type Magma 𝓤
collects all magmas in a universe 𝓤
, and lives in
the successor universe 𝓤 ⁺
. Thus, this doesn’t define what a magma is as
a property. It defines the type of magmas. A magma is an element of
this type, that is, a triple (X , i , _·_)
with X : 𝓤
and i :
is-set X
and _·_ : X → X → X
.
Given a magma M = (X , i , _·_)
we denote by ⟨ M ⟩
its underlying
set X
and by magma-operation M
its multiplication _·_
:
```agda
 ⟨_⟩ : Magma 𝓤 → 𝓤 ̇
 ⟨ X , i , _·_ ⟩ = X

 magma-is-set : (M : Magma 𝓤) → is-set ⟨ M ⟩
 magma-is-set (X , i , _·_) = i

 magma-operation : (M : Magma 𝓤) → ⟨ M ⟩ → ⟨ M ⟩ → ⟨ M ⟩
 magma-operation (X , i , _·_) = _·_
```
The following syntax declaration
allows us to write x ·⟨ M ⟩ y
as an abbreviation of magma-operation M x y
:
```agda
 syntax magma-operation M x y = x ·⟨ M ⟩ y
```
For some reason, Agda has this kind of definition backwards: the
definiendum and the definiens are swapped with respect to the
normal convention of writing what is defined on the left-hand side of
the equality sign. In any case, the point is that this time we need
such a mechanism because in order to be able to mention arbitrary x
and y
, we first need to know their types, which is ⟨ M ⟩
and hence
M
has to occur before x
and y
in the definition of
magma-operation
. The syntax declaration circumvents this.
A function between the underlying sets of two magmas is called a homomorphism when it commutes with the magma operations:
```agda
 is-magma-hom : (M N : Magma 𝓤) → (⟨ M ⟩ → ⟨ N ⟩) → 𝓤 ̇
 is-magma-hom M N f = (x y : ⟨ M ⟩) → f (x ·⟨ M ⟩ y) ≡ f x ·⟨ N ⟩ f y

 id-is-magma-hom : (M : Magma 𝓤) → is-magma-hom M M (𝑖𝑑 ⟨ M ⟩)
 id-is-magma-hom M = λ (x y : ⟨ M ⟩) → refl (x ·⟨ M ⟩ y)

 is-magma-iso : (M N : Magma 𝓤) → (⟨ M ⟩ → ⟨ N ⟩) → 𝓤 ̇
 is-magma-iso M N f = is-magma-hom M N f
                    × Σ \(g : ⟨ N ⟩ → ⟨ M ⟩) → is-magma-hom N M g
                                             × (g ∘ f ∼ 𝑖𝑑 ⟨ M ⟩)
                                             × (f ∘ g ∼ 𝑖𝑑 ⟨ N ⟩)

 id-is-magma-iso : (M : Magma 𝓤) → is-magma-iso M M (𝑖𝑑 ⟨ M ⟩)
 id-is-magma-iso M = id-is-magma-hom M ,
                     𝑖𝑑 ⟨ M ⟩ ,
                     id-is-magma-hom M ,
                     refl , refl
```
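As a sanity check of these definitions, we can verify that homomorphisms compose. This is a sketch we add here, inside the same module; it is not needed in the sequel:

```agda
 ∘-is-magma-hom : (M N P : Magma 𝓤) (f : ⟨ M ⟩ → ⟨ N ⟩) (g : ⟨ N ⟩ → ⟨ P ⟩)
                → is-magma-hom M N f → is-magma-hom N P g → is-magma-hom M P (g ∘ f)
 ∘-is-magma-hom M N P f g hf hg x y =
  g (f (x ·⟨ M ⟩ y))     ≡⟨ ap g (hf x y) ⟩
  g (f x ·⟨ N ⟩ f y)     ≡⟨ hg (f x) (f y) ⟩
  g (f x) ·⟨ P ⟩ g (f y) ∎
```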
Any identification of magmas gives rise to a magma isomorphism by transport:
```agda
 ⌜_⌝ : {M N : Magma 𝓤} → M ≡ N → ⟨ M ⟩ → ⟨ N ⟩
 ⌜ p ⌝ = transport ⟨_⟩ p

 ⌜⌝-is-iso : {M N : Magma 𝓤} (p : M ≡ N) → is-magma-iso M N (⌜ p ⌝)
 ⌜⌝-is-iso (refl M) = id-is-magma-iso M
```
The isomorphisms can be collected in a type:
```agda
 _≅ₘ_ : Magma 𝓤 → Magma 𝓤 → 𝓤 ̇
 M ≅ₘ N = Σ \(f : ⟨ M ⟩ → ⟨ N ⟩) → is-magma-iso M N f
```
The following function will be a bijection in the presence of univalence, so that the identifications of magmas are in one-to-one correspondence with the magma isomorphisms:
```agda
 magma-Id→iso : {M N : Magma 𝓤} → M ≡ N → M ≅ₘ N
 magma-Id→iso p = (⌜ p ⌝ , ⌜⌝-is-iso p)
```
If we omit the sethood condition in the definition of the type of
magmas, we get the type of what we could call ∞
-magmas (then the
type of magmas could be called 0-Magma
).
```agda
 ∞-Magma : (𝓤 : Universe) → 𝓤 ⁺ ̇
 ∞-Magma 𝓤 = Σ \(X : 𝓤 ̇ ) → X → X → X
```
A monoid is a set equipped with an associative binary operation and with a two-sided neutral element, and so we get the type of monoids as follows.
We first define the three laws:
```agda
module monoids where

 left-neutral : {X : 𝓤 ̇ } → X → (X → X → X) → 𝓤 ̇
 left-neutral e _·_ = ∀ x → e · x ≡ x

 right-neutral : {X : 𝓤 ̇ } → X → (X → X → X) → 𝓤 ̇
 right-neutral e _·_ = ∀ x → x · e ≡ x

 associative : {X : 𝓤 ̇ } → (X → X → X) → 𝓤 ̇
 associative _·_ = ∀ x y z → (x · y) · z ≡ x · (y · z)
```
Then a monoid is a set equipped with such e
and _·_
satisfying these
three laws:
```agda
 Monoid : (𝓤 : Universe) → 𝓤 ⁺ ̇
 Monoid 𝓤 = Σ \(X : 𝓤 ̇ ) → is-set X
                          × Σ \(_·_ : X → X → X)
                          → Σ \(e : X)
                          → left-neutral e _·_
                          × right-neutral e _·_
                          × associative _·_
```
Remark. People are more likely to use
records
in Agda rather than iterated Σ
s as above (recall that we defined
Σ
using a record). This is fine, because records amount to iterated
Σ
types (recall that also _×_
is a Σ
type, by
definition). Here, however, we are being deliberately spartan. Once we
have defined our Agda notation for MLTT, we want to stick to
it. This is for teaching purposes (of MLTT, encoded in Agda, not of
Agda itself in its full glory).
We could drop the is-set X
condition, but then we wouldn’t get
∞
-monoids in any reasonable sense. We would instead get “wild
∞
-monoids” or “incoherent ∞
-monoids”. The reason is that in
monoids (with sets as carriers) the neutrality and associativity
equations can hold in at most one way, by definition of set. But if we
drop the sethood requirement, then the equations can hold in multiple
ways. And then one is forced to consider equations between the
identifications (all the way up in the ∞-case), which is
what “coherence data”
means. The wildness or incoherence amounts to the absence of such
data.
Similarly to the situation with magmas, identifications of monoids are in bijection with monoid isomorphisms, assuming univalence. For this to be the case, it is absolutely necessary that the carrier of a monoid is a set rather than an arbitrary type, for otherwise the monoid equations can hold in many possible ways, and we would need to consider a notion of monoid isomorphism that in addition to preserving the neutral element and the multiplication, preserves the associativity identifications.
Exercise. Define the type of groups (with sets as carriers).
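A sketch of one possible answer to the groups exercise, placed inside the above module so that the monoid laws can be reused (the names are ours, not from the original development):

```agda
 Group : (𝓤 : Universe) → 𝓤 ⁺ ̇
 Group 𝓤 = Σ \(X : 𝓤 ̇ ) → is-set X
                         × Σ \(_·_ : X → X → X)
                         → Σ \(e : X)
                         → left-neutral e _·_
                         × right-neutral e _·_
                         × associative _·_
                         × Σ \(inv : X → X)
                         → (∀ x → inv x · x ≡ e)
                         × (∀ x → x · inv x ≡ e)
```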
Exercise. Write down the various types of categories defined in the HoTT book in Agda.
Exercise. Try to define a type of topological spaces.
We can view a type X
as a sort of category with hom-types rather than
hom-sets, with the identifications between points as the arrows.
We have that refl
provides a neutral element for composition of
identifications:
```agda
refl-left : {X : 𝓤 ̇ } {x y : X} {p : x ≡ y} → refl x ∙ p ≡ p
refl-left {𝓤} {X} {x} {x} {refl x} = refl (refl x)

refl-right : {X : 𝓤 ̇ } {x y : X} {p : x ≡ y} → p ∙ refl y ≡ p
refl-right {𝓤} {X} {x} {y} {p} = refl p
```
And composition is associative:
```agda
∙assoc : {X : 𝓤 ̇ } {x y z t : X} (p : x ≡ y) (q : y ≡ z) (r : z ≡ t)
       → (p ∙ q) ∙ r ≡ p ∙ (q ∙ r)
∙assoc p q (refl z) = refl (p ∙ q)
```
If we wanted to prove the above without pattern matching, this time we
would need the dependent version 𝕁
of induction on _≡_
.
Exercise. Try to do this with 𝕁
and with ℍ
.
But all arrows, the identifications, are invertible:
```agda
⁻¹-left∙ : {X : 𝓤 ̇ } {x y : X} (p : x ≡ y) → p ⁻¹ ∙ p ≡ refl y
⁻¹-left∙ (refl x) = refl (refl x)

⁻¹-right∙ : {X : 𝓤 ̇ } {x y : X} (p : x ≡ y) → p ∙ p ⁻¹ ≡ refl x
⁻¹-right∙ (refl x) = refl (refl x)
```
A category in which all arrows are invertible is called a groupoid. The above is the basis for the Hofmann–Streicher groupoid model of type theory.
But we actually get higher groupoids, because given identifications
p q : x ≡ y
we can consider the identity type p ≡ q
, and given
u v : p ≡ q
we can consider the type u ≡ v
, and so on.
See [van den Berg and Garner] and
[Lumsdaine].
For some types, such as the natural numbers, we can
prove that this process trivializes
after the first step, because the type x ≡ y
has at most one
element. Such types are the sets as defined above.
Voevodsky defined the notion of hlevel to measure how long it takes for the process to trivialize.
Here are some more constructions with identifications:
```agda
⁻¹-involutive : {X : 𝓤 ̇ } {x y : X} (p : x ≡ y) → (p ⁻¹)⁻¹ ≡ p
⁻¹-involutive (refl x) = refl (refl x)
```
The application operation on identifications is functorial, in the sense that it preserves the neutral element and commutes with composition:
```agda
ap-refl : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y) (x : X) → ap f (refl x) ≡ refl (f x)
ap-refl f x = refl (refl (f x))

ap-∙ : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y) {x y z : X} (p : x ≡ y) (q : y ≡ z)
     → ap f (p ∙ q) ≡ ap f p ∙ ap f q
ap-∙ f p (refl y) = refl (ap f p)
```
Notice that we also have
```agda
ap⁻¹ : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y) {x y : X} (p : x ≡ y)
     → (ap f p)⁻¹ ≡ ap f (p ⁻¹)
ap⁻¹ f (refl x) = refl (refl (f x))
```
The above functions ap-refl
and ap-∙
constitute functoriality in
the second argument. We also have functoriality in the first argument,
in the following sense:
```agda
ap-id : {X : 𝓤 ̇ } {x y : X} (p : x ≡ y) → ap id p ≡ p
ap-id (refl x) = refl (refl x)

ap-∘ : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } {Z : 𝓦 ̇ } (f : X → Y) (g : Y → Z)
       {x y : X} (p : x ≡ y)
     → ap (g ∘ f) p ≡ (ap g ∘ ap f) p
ap-∘ f g (refl x) = refl (refl (g (f x)))
```
Transport is also functorial with respect to identification composition and function composition. By construction, it maps the neutral element to the identity function. The apparent contravariance takes place because we have defined function composition in the usual order, but identification composition in the diagrammatic order (as is customary in each case).
```agda
transport∙ : {X : 𝓤 ̇ } (A : X → 𝓥 ̇ ) {x y z : X} (p : x ≡ y) (q : y ≡ z)
           → transport A (p ∙ q) ≡ transport A q ∘ transport A p
transport∙ A p (refl y) = refl (transport A p)
```
Functions of a type into a universe can be considered as generalized
presheaves, which we could perhaps call ∞
-presheaves. Their morphisms
are natural transformations:
```agda
Nat : {X : 𝓤 ̇ } → (X → 𝓥 ̇ ) → (X → 𝓦 ̇ ) → 𝓤 ⊔ 𝓥 ⊔ 𝓦 ̇
Nat A B = (x : domain A) → A x → B x
```
We don’t need to specify the naturality condition, because it is automatic:
```agda
Nats-are-natural : {X : 𝓤 ̇ } (A : X → 𝓥 ̇ ) (B : X → 𝓦 ̇ ) (τ : Nat A B)
                 → {x y : X} (p : x ≡ y)
                 → τ y ∘ transport A p ≡ transport B p ∘ τ x
Nats-are-natural A B τ (refl x) = refl (τ x)
```
We will use the following constructions a number of times:
```agda
NatΣ : {X : 𝓤 ̇ } {A : X → 𝓥 ̇ } {B : X → 𝓦 ̇ } → Nat A B → Σ A → Σ B
NatΣ τ (x , a) = (x , τ x a)

transport-ap : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (A : Y → 𝓦 ̇ ) (f : X → Y)
               {x x' : X} (p : x ≡ x') (a : A (f x))
             → transport (A ∘ f) p a ≡ transport A (ap f p) a
transport-ap A f (refl x) a = refl a
```
If we have an identification p : A ≡ B
of two types A
and B
, and
elements a : A
and b : B
, we cannot ask directly whether a ≡ b
,
because although the types are identified by p
, they are not
necessarily the same, in the sense of definitional equality. This is
not merely a syntactical restriction of our formal system, but instead
a fundamental fact that reflects the philosophy of univalent
mathematics. For instance, consider the type
```agda
data Color : 𝓤₀ ̇ where
 Black White : Color
```
With univalence, we will have that Color ≡ 𝟚
where 𝟚
is the
two-point type 𝟙 + 𝟙
with elements ₀
and
₁
. But there will be two identifications p₀ p₁ : Color ≡ 𝟚
, one
that identifies Black
with ₀
and White
with ₁
, and another one
that identifies Black
with ₁
and White
with ₀
. There is no
preferred coding of binary colors as bits. And, precisely because of
that, even if univalence does give inhabitants of the type Color ≡
𝟚
, it doesn’t make sense to ask whether Black ≡ ₀
holds without
specifying one of the possible inhabitants p₀
and p₁
.
What we will have is that the functions transport id p₀
and
transport id p₁
are the two possible bijections Color → 𝟚
that
identify colors with bits. So, it is not enough to have Color ≡ 𝟚
to
be able to compare a color c : Color
with a bit b : 𝟚
. We need to
specify which identification p : Color ≡ 𝟚
we want to consider for
the comparison. The same considerations
apply when we consider identifications p : 𝟚 ≡ 𝟚
.
So the meaningful comparison in the more general situation is
transport id p a ≡ b
for a specific
p : A ≡ B
,
where id
is the identity function of the universe where the types A
and B
live, and hence
transport id : A ≡ B → (A → B)
is the function that transforms identifications into functions, which has already occurred above.
More generally, we want to consider the situation in which we replace
the identity function id
of the universe where A
and B
live by
an arbitrary type family, which is what we do now.
If we have a type
X : 𝓤 ̇
,
and a type family
A : X → 𝓥 ̇
and points
x y : X
and an identification
p : x ≡ y
,
then we get the identification
ap A p : A x ≡ A y
.
However, if we have
a : A x
,
b : A y
,
we again cannot write down the identity type
a ≡ b.
This is again a non-sensical mathematical statement, because the types
A x
and A y
are not the same, but only identified, and in general
there can be many identifications, not just ap A p
, and so any
identification between elements of A x
and A y
has to be with
respect to a specific identification, as in the above particular case.
This time, the meaningful comparison, given p : x ≡ y
, is
transport A p a ≡ b.
For example, this idea applies when comparing the values of a dependent function:
```agda
apd : {X : 𝓤 ̇ } {A : X → 𝓥 ̇ } (f : (x : X) → A x) {x y : X} (p : x ≡ y)
    → transport A p (f x) ≡ f y
apd f (refl x) = refl (f x)
```
With the above notion of dependent equality, we can characterize
equality in Σ
types as follows.
```agda
to-Σ-≡ : {X : 𝓤 ̇ } {A : X → 𝓥 ̇ } {σ τ : Σ A}
       → (Σ \(p : pr₁ σ ≡ pr₁ τ) → transport A p (pr₂ σ) ≡ pr₂ τ)
       → σ ≡ τ
to-Σ-≡ (refl x , refl a) = refl (x , a)

from-Σ-≡ : {X : 𝓤 ̇ } {A : X → 𝓥 ̇ } {σ τ : Σ A}
         → σ ≡ τ
         → Σ \(p : pr₁ σ ≡ pr₁ τ) → transport A p (pr₂ σ) ≡ pr₂ τ
from-Σ-≡ (refl (x , a)) = (refl x , refl a)
```
The above gives
(σ ≡ τ) ⇔ Σ \(p : pr₁ σ ≡ pr₁ τ) → transport A p (pr₂ σ) ≡ pr₂ τ
.
But this is a very weak statement when the left- and right-hand identity types may have multiple elements, which is precisely the point of univalent mathematics.
What we want is the lhs and the rhs to be isomorphic, or more precisely, equivalent in the sense of Voevodsky.
Once we have defined this notion _≃_
of type equivalence, this
characterization will become an equivalence
(σ ≡ τ) ≃ Σ \(p : pr₁ σ ≡ pr₁ τ) → transport A p (pr₂ σ) ≡ pr₂ τ
.
But even this is not sufficiently precise, because in general there are many equivalences between two types. For example, there are precisely two equivalences
𝟙 + 𝟙 ≃ 𝟙 + 𝟙
,
namely the identity function and the function that flips left and
right. What we want to say is that a specific map is an
equivalence. In our case, it is the function from-Σ-≡
defined
above.
Voevodsky came up with a definition of a type “f
is an equivalence”
which is always a subsingleton: a given function f
can be an
equivalence in at most one way. In other words, being an equivalence
is a property of f
, rather than data.
The following special case of to-Σ-≡
is often useful:
```agda
to-Σ-≡' : {X : 𝓤 ̇ } {A : X → 𝓥 ̇ } {x : X} {a a' : A x}
        → a ≡ a' → Id (Σ A) (x , a) (x , a')
to-Σ-≡' {𝓤} {𝓥} {X} {A} {x} = ap (λ - → (x , -))
```
We take the opportunity to establish more equations for transport and to define a dependent version of transport:
```agda
transport-× : {X : 𝓤 ̇ } (A : X → 𝓥 ̇ ) (B : X → 𝓦 ̇ )
              {x y : X} (p : x ≡ y) {c : A x × B x}
            → transport (λ x → A x × B x) p c
            ≡ (transport A p (pr₁ c) , transport B p (pr₂ c))
transport-× A B (refl _) = refl _

transportd : {X : 𝓤 ̇ } (A : X → 𝓥 ̇ ) (B : (x : X) → A x → 𝓦 ̇ )
             {x : X} (a : A x) (σ : Σ \(a : A x) → B x a) {y : X} (p : x ≡ y)
           → B x (pr₁ σ) → B y (transport A p (pr₁ σ))
transportd A B a σ (refl y) = id

transport-Σ : {X : 𝓤 ̇ } (A : X → 𝓥 ̇ ) (B : (x : X) → A x → 𝓦 ̇ )
              {x : X} (y : X) (p : x ≡ y) (a : A x) {σ : Σ \(a : A x) → B x a}
            → transport (λ x → Σ \(y : A x) → B x y) p σ
            ≡ transport A p (pr₁ σ) , transportd A B a σ p (pr₂ σ)
transport-Σ A B {x} x (refl x) a {σ} = refl σ
```
Voevodsky’s hlevels 0,1,2,3,...
are shifted by 2
with respect to
the n
-groupoid numbering convention, and correspond to -2
-groupoids
(singletons), -1
-groupoids (subsingletons), 0
-groupoids (sets),…
The hlevel
relation is defined by induction on ℕ
, with the
induction step working with the identity types of the elements of the
type in question:
```agda
_is-of-hlevel_ : 𝓤 ̇ → ℕ → 𝓤 ̇
X is-of-hlevel 0        = is-singleton X
X is-of-hlevel (succ n) = (x x' : X) → ((x ≡ x') is-of-hlevel n)
```
It is often convenient in practice to have equivalent formulations of
the type of hlevel 1
(as subsingletons) and 2
(as sets), which we will
develop soon.
To characterize sets as the types of hlevel 2, we first need to show that subsingletons are sets, and this is not easy. We use an argument due to Hedberg. This argument also shows that Voevodsky’s hlevels are upper closed.
We choose to present an alternative formulation of Hedberg’s Theorem, but we stress that the method of proof is essentially the same.
We first define a notion of constant map:
```agda
wconstant : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → (X → Y) → 𝓤 ⊔ 𝓥 ̇
wconstant f = (x x' : domain f) → f x ≡ f x'
```
The prefix “w
” officially stands for “weakly”. Perhaps
incoherently constant or wildly constant would be better
terminologies, with coherence understood in the ∞
-categorical
sense. We prefer to stick to wildly rather than weakly, and luckily
both start with the letter “w
”.
We first define the type of constant endomaps of a given type:
```agda
wconstant-endomap : 𝓤 ̇ → 𝓤 ̇
wconstant-endomap X = Σ \(f : X → X) → wconstant f

wcmap : (X : 𝓤 ̇ ) → wconstant-endomap X → (X → X)
wcmap X (f , w) = f

wcmap-constancy : (X : 𝓤 ̇ ) (c : wconstant-endomap X) → wconstant (wcmap X c)
wcmap-constancy X (f , w) = w
```
The point is that a type is a set if and only if its identity types
all have designated wconstant
endomaps:
```agda
Hedberg : {X : 𝓤 ̇ } (x : X)
        → ((y : X) → wconstant-endomap (x ≡ y))
        → (y : X) → is-subsingleton (x ≡ y)
Hedberg {𝓤} {X} x c y p q =
 p                        ≡⟨ a y p ⟩
 (f x (refl x))⁻¹ ∙ f y p ≡⟨ ap (λ - → (f x (refl x))⁻¹ ∙ -) (κ y p q) ⟩
 (f x (refl x))⁻¹ ∙ f y q ≡⟨ (a y q)⁻¹ ⟩
 q                        ∎
 where
  f : (y : X) → x ≡ y → x ≡ y
  f y = wcmap (x ≡ y) (c y)
  κ : (y : X) (p q : x ≡ y) → f y p ≡ f y q
  κ y = wcmap-constancy (x ≡ y) (c y)
  a : (y : X) (p : x ≡ y) → p ≡ (f x (refl x))⁻¹ ∙ f y p
  a x (refl x) = (⁻¹-left∙ (f x (refl x)))⁻¹
```
We consider types whose identity types all have designated constant endomaps:
```agda
wconstant-≡-endomaps : 𝓤 ̇ → 𝓤 ̇
wconstant-≡-endomaps X = (x y : X) → wconstant-endomap (x ≡ y)
```
The following is immediate from the definitions.
```agda
sets-have-wconstant-≡-endomaps : (X : 𝓤 ̇ ) → is-set X → wconstant-≡-endomaps X
sets-have-wconstant-≡-endomaps X s x y = (f , κ)
 where
  f : x ≡ y → x ≡ y
  f p = p
  κ : (p q : x ≡ y) → f p ≡ f q
  κ p q = s x y p q
```
And the converse is the content of Hedberg’s Theorem.
```agda
types-with-wconstant-≡-endomaps-are-sets : (X : 𝓤 ̇ )
                                         → wconstant-≡-endomaps X → is-set X
types-with-wconstant-≡-endomaps-are-sets X c x =
 Hedberg x (λ y → wcmap (x ≡ y) (c x y) , wcmap-constancy (x ≡ y) (c x y))
```
In the following definition of the auxiliary function f
, we ignore
the argument p
, using the fact that X
is a subsingleton instead,
to get a wconstant
function:
```agda
subsingletons-have-wconstant-≡-endomaps : (X : 𝓤 ̇ )
                                        → is-subsingleton X → wconstant-≡-endomaps X
subsingletons-have-wconstant-≡-endomaps X s x y = (f , κ)
 where
  f : x ≡ y → x ≡ y
  f p = s x y
  κ : (p q : x ≡ y) → f p ≡ f q
  κ p q = refl (s x y)
```
And the corollary is that subsingleton types are sets.
```agda
subsingletons-are-sets : (X : 𝓤 ̇ ) → is-subsingleton X → is-set X
subsingletons-are-sets X s =
 types-with-wconstant-≡-endomaps-are-sets X
  (subsingletons-have-wconstant-≡-endomaps X s)
```
In particular, the types 𝟘
and 𝟙
are sets.
```agda
𝟘-is-set : is-set 𝟘
𝟘-is-set = subsingletons-are-sets 𝟘 𝟘-is-subsingleton

𝟙-is-set : is-set 𝟙
𝟙-is-set = subsingletons-are-sets 𝟙 𝟙-is-subsingleton
```
Then with the above we get our desired characterization of the types of
hlevel 1
as an immediate consequence:
```agda
subsingletons-are-of-hlevel-1 : (X : 𝓤 ̇ ) → is-subsingleton X → X is-of-hlevel 1
subsingletons-are-of-hlevel-1 X = g
 where
  g : ((x y : X) → x ≡ y) → (x y : X) → is-singleton (x ≡ y)
  g t x y = t x y , subsingletons-are-sets X t x y (t x y)

types-of-hlevel-1-are-subsingletons : (X : 𝓤 ̇ ) → X is-of-hlevel 1 → is-subsingleton X
types-of-hlevel-1-are-subsingletons X = f
 where
  f : ((x y : X) → is-singleton (x ≡ y)) → (x y : X) → x ≡ y
  f s x y = center (x ≡ y) (s x y)
```
This is an “if and only if” characterization, but, under univalence, it becomes an equality because the types under consideration are subsingletons.
The same comments as for the previous section apply.
```agda
sets-are-of-hlevel-2 : (X : 𝓤 ̇ ) → is-set X → X is-of-hlevel 2
sets-are-of-hlevel-2 X = g
 where
  g : ((x y : X) → is-subsingleton (x ≡ y)) → (x y : X) → (x ≡ y) is-of-hlevel 1
  g t x y = subsingletons-are-of-hlevel-1 (x ≡ y) (t x y)

types-of-hlevel-2-are-sets : (X : 𝓤 ̇ ) → X is-of-hlevel 2 → is-set X
types-of-hlevel-2-are-sets X = f
 where
  f : ((x y : X) → (x ≡ y) is-of-hlevel 1) → (x y : X) → is-subsingleton (x ≡ y)
  f s x y = types-of-hlevel-1-are-subsingletons (x ≡ y) (s x y)
```
A singleton is a subsingleton, a subsingleton is a set, … , a type
of hlevel n
is of hlevel n+1
too, …
Again we can conclude this almost immediately from the above development:
```agda
hlevel-upper : (X : 𝓤 ̇ ) (n : ℕ) → X is-of-hlevel n → X is-of-hlevel (succ n)
hlevel-upper X zero = γ
 where
  γ : is-singleton X → (x y : X) → is-singleton (x ≡ y)
  γ h x y = p , subsingletons-are-sets X k x y p
   where
    k : is-subsingleton X
    k = singletons-are-subsingletons X h
    p : x ≡ y
    p = k x y
hlevel-upper X (succ n) = λ h x y → hlevel-upper (x ≡ y) n (h x y)
```
To be consistent with the above terminology, we have to stipulate that
all types have hlevel ∞
. We don’t need a definition for this
notion. But what may happen (and it does with univalence) is that
there are types which don’t have any finite hlevel. We can say that
such types then have minimal hlevel ∞
.
Exercise. Formulate and prove the following. The type 𝟙
has
minimal hlevel 0
.
```agda
_has-minimal-hlevel_ : 𝓤 ̇ → ℕ → 𝓤 ̇
X has-minimal-hlevel 0        = X is-of-hlevel 0
X has-minimal-hlevel (succ n) = (X is-of-hlevel (succ n)) × ¬(X is-of-hlevel n)

_has-minimal-hlevel-∞ : 𝓤 ̇ → 𝓤 ̇
X has-minimal-hlevel-∞ = (n : ℕ) → ¬(X is-of-hlevel n)
```
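A solution sketch for the exercise about 𝟙: the positive half is immediate from 𝟙-is-singleton, and at level 0 there is no minimality condition to check, since minimal hlevel 0 just unfolds to hlevel 0:

```agda
𝟙-has-minimal-hlevel-0 : 𝟙 has-minimal-hlevel 0
𝟙-has-minimal-hlevel-0 = 𝟙-is-singleton
```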
The type 𝟘
has minimal hlevel 1
, the type ℕ
has minimal hlevel
2
. The solution to the fact that ℕ
has hlevel 2 is given in the
next section. More ambitiously, after
univalence is available, show that the
type of monoids has minimal hlevel 3
.
ℕ and 𝟚 are sets

If a type has decidable equality we can define a wconstant
function x ≡ y → x ≡ y
and hence conclude that it is a set. This
argument is due to Hedberg.
```agda
pointed-types-have-wconstant-endomap : {X : 𝓤 ̇ } → X → wconstant-endomap X
pointed-types-have-wconstant-endomap x = ((λ y → x) , (λ y y' → refl x))

empty-types-have-wconstant-endomap : {X : 𝓤 ̇ } → is-empty X → wconstant-endomap X
empty-types-have-wconstant-endomap e = (id , (λ x x' → !𝟘 (x ≡ x') (e x)))

decidable-has-wconstant-endomap : {X : 𝓤 ̇ } → decidable X → wconstant-endomap X
decidable-has-wconstant-endomap (inl x) = pointed-types-have-wconstant-endomap x
decidable-has-wconstant-endomap (inr e) = empty-types-have-wconstant-endomap e

hedberg-lemma : {X : 𝓤 ̇ } → has-decidable-equality X → wconstant-≡-endomaps X
hedberg-lemma {𝓤} {X} d x y = decidable-has-wconstant-endomap (d x y)

hedberg : {X : 𝓤 ̇ } → has-decidable-equality X → is-set X
hedberg {𝓤} {X} d = types-with-wconstant-≡-endomaps-are-sets X (hedberg-lemma d)

ℕ-is-set : is-set ℕ
ℕ-is-set = hedberg ℕ-has-decidable-equality

𝟚-is-set : is-set 𝟚
𝟚-is-set = hedberg 𝟚-has-decidable-equality
```
Notice that excluded middle implies directly that all sets have decidable equality, so that in its presence a type is a set if and only if it has decidable equality.
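The first claim can be written down directly, because the identity types of a set are subsingletons. This is a small sketch added here, with a name of our choosing:

```agda
em-gives-sets-have-decidable-equality : EM 𝓤
                                      → (X : 𝓤 ̇ ) → is-set X → has-decidable-equality X
em-gives-sets-have-decidable-equality em X s x y = em (x ≡ y) (s x y)
```

Together with hedberg above, which needs no excluded middle, this gives the stated characterization of sets in the presence of excluded middle.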
We use retracts as a mathematical technique to transfer properties
between types. For instance, retracts of singletons are
singletons. Showing that a particular type X
is a singleton may be
rather difficult to do directly by applying the definition of
singleton and the definition of the particular type, but it may be
easy to show that X
is a retract of Y
for a type Y
that is
already known to be a singleton. In these notes, a major application
will be to get a simple proof of the known fact that invertible maps
are equivalences in the sense of Voevodsky.
A section of a function is simply a right inverse, by definition:
```agda
has-section : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → (X → Y) → 𝓤 ⊔ 𝓥 ̇
has-section r = Σ \(s : codomain r → domain r) → r ∘ s ∼ id
```
Notice that has-section r
is the type of all sections (s , η)
of
r
, which may well be empty. So a point of this type is a designated
section s
of r
, together with the datum η
. Unless the domain of
r
is a set, this datum is not a property, and we may well have an
element (s , η')
of the type has-section r
with η'
distinct from
η
for the same s
.
We say that X
is a retract of Y
, written X ◁ Y
, if we
have a function Y → X
which has a section:
```agda
_◁_ : 𝓤 ̇ → 𝓥 ̇ → 𝓤 ⊔ 𝓥 ̇
X ◁ Y = Σ \(r : Y → X) → has-section r
```
This type actually collects all the ways in which the type `X` can be a retract of the type `Y`, and so is data or structure on `X` and `Y`, rather than a property of them.
A function that has a section is called a retraction. We use this terminology, ambiguously, also for the function that projects out the retraction:
```agda
retraction : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → X ◁ Y → Y → X
retraction (r , s , η) = r

section : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → X ◁ Y → X → Y
section (r , s , η) = s

retract-equation : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (ρ : X ◁ Y)
                 → retraction ρ ∘ section ρ ∼ 𝑖𝑑 X
retract-equation (r , s , η) = η

retraction-has-section : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (ρ : X ◁ Y)
                       → has-section (retraction ρ)
retraction-has-section (r , h) = h
```
We have an identity retraction:
```agda
id-◁ : (X : 𝓤 ̇ ) → X ◁ X
id-◁ X = 𝑖𝑑 X , 𝑖𝑑 X , refl
```
*Exercise.* The identity retraction is by no means the only retraction of a type onto itself in general, of course. Prove that we have (that is, produce an element of the type) `ℕ ◁ ℕ` with the function `pred : ℕ → ℕ` defined above as the retraction. Try to produce more inhabitants of this type.
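One possible solution sketch (spoiler; the name `pred-retraction` is ours) exploits the fact that `pred (succ n)` computes to `n` definitionally, so that the retract equation is witnessed by `refl`, with `succ` as the section:

```agda
-- A sketch of one solution, with succ as the designated section of pred.
-- The retract equation pred ∘ succ ∼ id holds definitionally:
pred-retraction : ℕ ◁ ℕ
pred-retraction = pred , succ , refl
```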
We can define the composition of two retractions as follows:
```agda
_◁∘_ : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } {Z : 𝓦 ̇ } → X ◁ Y → Y ◁ Z → X ◁ Z
(r , s , η) ◁∘ (r' , s' , η') = (r ∘ r' , s' ∘ s , η'')
 where
  η'' = λ x → r (r' (s' (s x))) ≡⟨ ap r (η' (s x)) ⟩
              r (s x)           ≡⟨ η x ⟩
              x                 ∎
```
We also define composition with an implicit argument made explicit:
```agda
_◁⟨_⟩_ : (X : 𝓤 ̇ ) {Y : 𝓥 ̇ } {Z : 𝓦 ̇ } → X ◁ Y → Y ◁ Z → X ◁ Z
X ◁⟨ ρ ⟩ σ = ρ ◁∘ σ
```
And we introduce the following postfix notation for the identity retraction:
```agda
_◀ : (X : 𝓤 ̇ ) → X ◁ X
X ◀ = id-◁ X
```
These last two definitions are for notational convenience. See below for examples of their use.
We conclude this section with some facts about retracts of `Σ` types. The following are technical tools for dealing with equivalences in the sense of Voevodsky, in comparison with invertible maps.
A pointwise retraction gives a retraction of the total spaces:
```agda
Σ-retract : {X : 𝓤 ̇ } {A : X → 𝓥 ̇ } {B : X → 𝓦 ̇ }
          → ((x : X) → (A x) ◁ (B x)) → Σ A ◁ Σ B
Σ-retract {𝓤} {𝓥} {𝓦} {X} {A} {B} ρ = NatΣ r , NatΣ s , η'
 where
  r : (x : X) → B x → A x
  r x = retraction (ρ x)
  s : (x : X) → A x → B x
  s x = section (ρ x)
  η : (x : X) (a : A x) → r x (s x a) ≡ a
  η x = retract-equation (ρ x)
  η' : (σ : Σ A) → NatΣ r (NatΣ s σ) ≡ σ
  η' (x , a) = x , r x (s x a) ≡⟨ to-Σ-≡' (η x a) ⟩
               x , a           ∎
```
We have that `transport A (p ⁻¹)` is a two-sided inverse of `transport A p`, using the functoriality of `transport A`, or directly by induction on `p`:
```agda
transport-is-retraction : {X : 𝓤 ̇ } (A : X → 𝓥 ̇ ) {x y : X} (p : x ≡ y)
                        → transport A p ∘ transport A (p ⁻¹) ∼ 𝑖𝑑 (A y)
transport-is-retraction A (refl x) = refl

transport-is-section : {X : 𝓤 ̇ } (A : X → 𝓥 ̇ ) {x y : X} (p : x ≡ y)
                     → transport A (p ⁻¹) ∘ transport A p ∼ 𝑖𝑑 (A x)
transport-is-section A (refl x) = refl
```
Using this, we have the following reindexing retraction of `Σ` types:
```agda
Σ-reindexing-retract : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } {A : X → 𝓦 ̇ } (r : Y → X)
                     → has-section r
                     → (Σ \(x : X) → A x) ◁ (Σ \(y : Y) → A (r y))
Σ-reindexing-retract {𝓤} {𝓥} {𝓦} {X} {Y} {A} r (s , η) = γ , φ , γφ
 where
  γ : Σ (A ∘ r) → Σ A
  γ (y , a) = (r y , a)
  φ : Σ A → Σ (A ∘ r)
  φ (x , a) = (s x , transport A ((η x)⁻¹) a)
  γφ : (σ : Σ A) → γ (φ σ) ≡ σ
  γφ (x , a) = to-Σ-≡ (η x , transport-is-retraction A (η x) a)
```
We have defined the property of a type being a singleton. The singleton type `Σ \(y : X) → y ≡ x` induced by a point `x : X` of a type `X` is denoted by `singleton-type x`. The terminology is justified by the fact that it is indeed a singleton in the sense defined above.
```agda
singleton-type : {X : 𝓤 ̇ } → X → 𝓤 ̇
singleton-type x = Σ \y → y ≡ x

singleton-type-center : {X : 𝓤 ̇ } (x : X) → singleton-type x
singleton-type-center x = (x , refl x)

singleton-type-centered : {X : 𝓤 ̇ } (x : X) (σ : singleton-type x)
                        → singleton-type-center x ≡ σ
singleton-type-centered x (x , refl x) = refl (x , refl x)

singleton-types-are-singletons : (X : 𝓤 ̇ ) (x : X)
                               → is-singleton (singleton-type x)
singleton-types-are-singletons X x = singleton-type-center x ,
                                     singleton-type-centered x
```
The following gives a technique for showing that some types are singletons:
```agda
retract-of-singleton : {X : 𝓤 ̇ } {Y : 𝓥 ̇ }
                     → Y ◁ X → is-singleton X → is-singleton Y
retract-of-singleton (r , s , η) (c , φ) = r c , γ
 where
  γ = λ y → r c     ≡⟨ ap r (φ (s y)) ⟩
            r (s y) ≡⟨ η y ⟩
            y       ∎
```
Sometimes we need the following symmetric versions of the above:
```agda
singleton-type' : {X : 𝓤 ̇ } → X → 𝓤 ̇
singleton-type' x = Σ \y → x ≡ y

singleton-type'-center : {X : 𝓤 ̇ } (x : X) → singleton-type' x
singleton-type'-center x = (x , refl x)

singleton-type'-centered : {X : 𝓤 ̇ } (x : X) (σ : singleton-type' x)
                         → singleton-type'-center x ≡ σ
singleton-type'-centered x (x , refl x) = refl (x , refl x)

singleton-types'-are-singletons : (X : 𝓤 ̇ ) (x : X)
                                → is-singleton (singleton-type' x)
singleton-types'-are-singletons X x = singleton-type'-center x ,
                                      singleton-type'-centered x
```
The main notions of univalent mathematics conceived by Voevodsky, with formulations in MLTT, are those of singleton type (or contractible type), hlevel (including the notions of subsingleton and set), and of type equivalence, which we define now.
We begin with a discussion of the notion of invertible function, whose only difference with the notion of equivalence is that it is data rather than property:
```agda
invertible : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → (X → Y) → 𝓤 ⊔ 𝓥 ̇
invertible f = Σ \g → (g ∘ f ∼ id) × (f ∘ g ∼ id)
```
The situation is that we will have a logical equivalence between data establishing invertibility of a given function, and the property of the function being an equivalence. Mathematically, what happens is that the type "`f` is an equivalence" is a retract of the type "`f` is invertible". This retraction property is not easy to show, and there are many approaches. We discuss an approach we came up with while developing these lecture notes, which is intended to be relatively simple and direct, but the reader should consult other approaches, such as that of the HoTT book, which has a well-established categorical pedigree.
The problem with the notion of invertibility of `f` is that, while we have that the inverse `g` is unique when it exists, we cannot in general prove that the identification data `g ∘ f ∼ id` and `f ∘ g ∼ id` are also unique, and, indeed, they are not in general.
The following is Voevodsky’s proposed formulation of the notion of equivalence in MLTT, which relies on the concept of *fiber*:
```agda
fiber : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y) → Y → 𝓤 ⊔ 𝓥 ̇
fiber f y = Σ \(x : domain f) → f x ≡ y

fiber-point : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } {f : X → Y} {y : Y}
            → fiber f y → X
fiber-point (x , p) = x

fiber-identification : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } {f : X → Y} {y : Y}
                     → (w : fiber f y) → f (fiber-point w) ≡ y
fiber-identification (x , p) = p
```
So the type `fiber f y` collects the points `x : X` which are mapped to a point identified with `y`, including the identification datum. Voevodsky’s insight is that a general notion of equivalence can be formulated in MLTT by requiring the fibers to be singletons. It is important here that not only the `x : X` with `f x ≡ y` is unique, but also that the identification datum `p : f x ≡ y` is unique. This is similar to, or even a generalization of, the categorical notion of "uniqueness up to a unique isomorphism".
```agda
is-equiv : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → (X → Y) → 𝓤 ⊔ 𝓥 ̇
is-equiv f = (y : codomain f) → is-singleton (fiber f y)
```
We can read this as saying that for every `y : Y` there is a unique `x : X` with `f x ≡ y`, where the uniqueness refers not only to `x : X` but also to the identification datum `p : f x ≡ y`. It is easy to see that equivalences are invertible:
```agda
inverse : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y) → is-equiv f → (Y → X)
inverse f e y = fiber-point (center (fiber f y) (e y))

inverse-is-section : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y) (e : is-equiv f)
                   → f ∘ inverse f e ∼ id
inverse-is-section f e y = fiber-identification (center (fiber f y) (e y))

inverse-centrality : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y) (e : is-equiv f)
                     (y : Y) (t : fiber f y)
                   → (inverse f e y , inverse-is-section f e y) ≡ t
inverse-centrality f e y = centrality (fiber f y) (e y)

inverse-is-retraction : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y) (e : is-equiv f)
                      → inverse f e ∘ f ∼ id
inverse-is-retraction f e x = ap fiber-point p
 where
  p : (inverse f e (f x) , inverse-is-section f e (f x)) ≡ (x , refl (f x))
  p = inverse-centrality f e (f x) (x , (refl (f x)))

equivs-are-invertible : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y)
                      → is-equiv f → invertible f
equivs-are-invertible f e = inverse f e ,
                            inverse-is-retraction f e ,
                            inverse-is-section f e
```
The non-trivial direction derives the equivalence property from invertibility data, for which we use the retraction techniques explained above.
Suppose that invertibility data `g : Y → X`, `η : (x : X) → g (f x) ≡ x` and `ε : (y : Y) → f (g y) ≡ y` for a map `f : X → Y` is given, and that a point `y₀` in the codomain of `f` is given. We need to show that the fiber `Σ \(x : X) → f x ≡ y₀` of `y₀` is a singleton.
We first use the assumption `ε` to show that the type `f (g y) ≡ y₀` is a retract of the type `y ≡ y₀` for any given `y : Y`.
To get the section `s : f (g y) ≡ y₀ → y ≡ y₀`, we transport along the identification `ε y : f (g y) ≡ y` over the family `A - = (- ≡ y₀)`, which can be abbreviated as `_≡ y₀`.
To get the retraction `r` in the opposite direction, we transport along the inverse of the identification `ε y` over the same family.
We already know that this gives a section-retraction pair, by `transport-is-section`.
Next we have that the type `Σ \(x : X) → f x ≡ y₀` is a retract of the type `Σ \(y : Y) → f (g y) ≡ y₀`, by `Σ-reindexing-retract`, using the assumption that `η` exhibits `f` as a section of `g`, which in turn is a retract of the type `Σ \(y : Y) → y ≡ y₀`, by applying `Σ` to both sides of the retraction `(f (g y) ≡ y₀) ◁ (y ≡ y₀)` of the previous step.
This amounts to saying that the type `fiber f y₀` is a retract of `singleton-type y₀`.
But then we are done, because singleton types are singletons and retracts of singletons are singletons.
```agda
invertibles-are-equivs : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y)
                       → invertible f → is-equiv f
invertibles-are-equivs {𝓤} {𝓥} {X} {Y} f (g , η , ε) y₀ = iii
 where
  i : (y : Y) → (f (g y) ≡ y₀) ◁ (y ≡ y₀)
  i y = r , s , transport-is-section (_≡ y₀) (ε y)
   where
    s : f (g y) ≡ y₀ → y ≡ y₀
    s = transport (_≡ y₀) (ε y)
    r : y ≡ y₀ → f (g y) ≡ y₀
    r = transport (_≡ y₀) ((ε y)⁻¹)
  ii : fiber f y₀ ◁ singleton-type y₀
  ii = (Σ \(x : X) → f x ≡ y₀)     ◁⟨ Σ-reindexing-retract g (f , η) ⟩
       (Σ \(y : Y) → f (g y) ≡ y₀) ◁⟨ Σ-retract i ⟩
       (Σ \(y : Y) → y ≡ y₀)       ◀
  iii : is-singleton (fiber f y₀)
  iii = retract-of-singleton ii (singleton-types-are-singletons Y y₀)

inverse-is-equiv : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y) (e : is-equiv f)
                 → is-equiv (inverse f e)
inverse-is-equiv f e = invertibles-are-equivs
                        (inverse f e)
                        (f , inverse-is-section f e , inverse-is-retraction f e)
```
Notice that inversion is involutive on the nose:
```agda
inversion-involutive : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y) (e : is-equiv f)
                     → inverse (inverse f e) (inverse-is-equiv f e) ≡ f
inversion-involutive f e = refl f
```
To see that the above procedures do exhibit the type "`f` is an equivalence" as a retract of the type "`f` is invertible", it suffices to show that it is a subsingleton.
The identity function is invertible:
```agda
id-invertible : (X : 𝓤 ̇ ) → invertible (𝑖𝑑 X)
id-invertible X = 𝑖𝑑 X , refl , refl
```
We can compose invertible maps:
```agda
∘-invertible : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } {Z : 𝓦 ̇ } {f : X → Y} {f' : Y → Z}
             → invertible f' → invertible f → invertible (f' ∘ f)
∘-invertible {𝓤} {𝓥} {𝓦} {X} {Y} {Z} {f} {f'} (g' , gf' , fg') (g , gf , fg) =
  g ∘ g' , η , ε
 where
  η = λ x → g (g' (f' (f x))) ≡⟨ ap g (gf' (f x)) ⟩
            g (f x)           ≡⟨ gf x ⟩
            x                 ∎
  ε = λ z → f' (f (g (g' z))) ≡⟨ ap f' (fg (g' z)) ⟩
            f' (g' z)         ≡⟨ fg' z ⟩
            z                 ∎
```
There is an identity equivalence, and we get composition of equivalences by reduction to invertible maps:
```agda
id-is-equiv : (X : 𝓤 ̇ ) → is-equiv (𝑖𝑑 X)
id-is-equiv = singleton-types-are-singletons
```
An `abstract` definition is not expanded during type checking. One possible use of this is efficiency. In our case, it saves about half a minute in the checking of this file for correctness in the uses of `∘-is-equiv`:
```agda
∘-is-equiv : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } {Z : 𝓦 ̇ } {f : X → Y} {g : Y → Z}
           → is-equiv g → is-equiv f → is-equiv (g ∘ f)
∘-is-equiv {𝓤} {𝓥} {𝓦} {X} {Y} {Z} {f} {g} i j = γ
 where
  abstract
   γ : is-equiv (g ∘ f)
   γ = invertibles-are-equivs (g ∘ f)
        (∘-invertible (equivs-are-invertible g i)
                      (equivs-are-invertible f j))
```
Because we have made the above definition abstract, we don’t have access to the given construction when proving things involving `∘-is-equiv`, such as the contravariance of inversion:
```agda
inverse-of-∘ : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } {Z : 𝓦 ̇ }
               (f : X → Y) (g : Y → Z)
               (i : is-equiv f) (j : is-equiv g)
             → inverse f i ∘ inverse g j ∼ inverse (g ∘ f) (∘-is-equiv j i)
inverse-of-∘ f g i j z =
  f' (g' z)             ≡⟨ (ap (f' ∘ g') (s z))⁻¹ ⟩
  f' (g' (g (f (h z)))) ≡⟨ ap f' (inverse-is-retraction g j (f (h z))) ⟩
  f' (f (h z))          ≡⟨ inverse-is-retraction f i (h z) ⟩
  h z                   ∎
 where
  f' = inverse f i
  g' = inverse g j
  h  = inverse (g ∘ f) (∘-is-equiv j i)
  s : g ∘ f ∘ h ∼ id
  s = inverse-is-section (g ∘ f) (∘-is-equiv j i)
```
The type of equivalences is defined as follows:
```agda
_≃_ : 𝓤 ̇ → 𝓥 ̇ → 𝓤 ⊔ 𝓥 ̇
X ≃ Y = Σ \(f : X → Y) → is-equiv f
```
Notice that this doesn’t just say that `X` and `Y` are equivalent: the type `X ≃ Y` collects all the ways in which the types `X` and `Y` are equivalent. For example, the two-point type `𝟚` is equivalent to itself in two ways, by the identity map, and by the map that interchanges its two points, and hence the type `𝟚 ≃ 𝟚` has two elements.
Again it is convenient to have special names for its first and second projections:
```agda
Eq→fun : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → X ≃ Y → X → Y
Eq→fun (f , i) = f

Eq→fun-is-equiv : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (e : X ≃ Y) → is-equiv (Eq→fun e)
Eq→fun-is-equiv (f , i) = i

invertibility-gives-≃ : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y)
                      → invertible f → X ≃ Y
invertibility-gives-≃ f i = f , invertibles-are-equivs f i
```
Example:
```agda
Σ-induction-≃ : {X : 𝓤 ̇ } {Y : X → 𝓥 ̇ } {A : Σ Y → 𝓦 ̇ }
              → ((x : X) (y : Y x) → A (x , y)) ≃ ((z : Σ Y) → A z)
Σ-induction-≃ = invertibility-gives-≃ Σ-induction (curry , refl , refl)
```
The identity equivalence and the composition of two equivalences:
```agda
id-≃ : (X : 𝓤 ̇ ) → X ≃ X
id-≃ X = 𝑖𝑑 X , id-is-equiv X

_●_ : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } {Z : 𝓦 ̇ } → X ≃ Y → Y ≃ Z → X ≃ Z
(f , d) ● (f' , e) = f' ∘ f , ∘-is-equiv e d

≃-sym : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → X ≃ Y → Y ≃ X
≃-sym (f , e) = inverse f e , inverse-is-equiv f e
```
We can use the following notation for equational reasoning with equivalences:
```agda
_≃⟨_⟩_ : (X : 𝓤 ̇ ) {Y : 𝓥 ̇ } {Z : 𝓦 ̇ } → X ≃ Y → Y ≃ Z → X ≃ Z
_ ≃⟨ d ⟩ e = d ● e

_■ : (X : 𝓤 ̇ ) → X ≃ X
_■ = id-≃
```
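As a minimal illustration of this notation (the name `≃-roundtrip` is ours and not part of the development), any equivalence composed with its inverse gives an equivalence of a type with itself:

```agda
-- A sketch illustrating equational reasoning with equivalences:
≃-roundtrip : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → X ≃ Y → X ≃ X
≃-roundtrip {𝓤} {𝓥} {X} {Y} e = X ≃⟨ e ⟩
                                Y ≃⟨ ≃-sym e ⟩
                                X ■
```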
We conclude this section with some important examples.
The function `transport A p` is an equivalence:
```agda
transport-is-equiv : {X : 𝓤 ̇ } (A : X → 𝓥 ̇ ) {x y : X} (p : x ≡ y)
                   → is-equiv (transport A p)
transport-is-equiv A (refl x) = id-is-equiv (A x)
```
Alternatively, we could have used the fact that `transport A (p ⁻¹)` is an inverse of `transport A p`.
Here is the promised characterization of equality in `Σ` types:
```agda
Σ-≡-≃ : {X : 𝓤 ̇ } {A : X → 𝓥 ̇ } (σ τ : Σ A)
      → (σ ≡ τ) ≃ (Σ \(p : pr₁ σ ≡ pr₁ τ) → transport A p (pr₂ σ) ≡ pr₂ τ)
Σ-≡-≃ {𝓤} {𝓥} {X} {A} σ τ = invertibility-gives-≃ from-Σ-≡ (to-Σ-≡ , η , ε)
 where
  η : (q : σ ≡ τ) → to-Σ-≡ (from-Σ-≡ q) ≡ q
  η (refl σ) = refl (refl σ)
  ε : (w : Σ \(p : pr₁ σ ≡ pr₁ τ) → transport A p (pr₂ σ) ≡ pr₂ τ)
    → from-Σ-≡ (to-Σ-≡ w) ≡ w
  ε (refl p , refl q) = refl (refl p , refl q)
```
Similarly we have:
```agda
to-×-≡ : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } {z t : X × Y}
       → (pr₁ z ≡ pr₁ t) × (pr₂ z ≡ pr₂ t) → z ≡ t
to-×-≡ (refl x , refl y) = refl (x , y)

from-×-≡ : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } {z t : X × Y}
         → z ≡ t → (pr₁ z ≡ pr₁ t) × (pr₂ z ≡ pr₂ t)
from-×-≡ {𝓤} {𝓥} {X} {Y} (refl (x , y)) = (refl x , refl y)

×-≡-≃ : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (z t : X × Y)
      → (z ≡ t) ≃ (pr₁ z ≡ pr₁ t) × (pr₂ z ≡ pr₂ t)
×-≡-≃ {𝓤} {𝓥} {X} {Y} z t = invertibility-gives-≃ from-×-≡ (to-×-≡ , η , ε)
 where
  η : (p : z ≡ t) → to-×-≡ (from-×-≡ p) ≡ p
  η (refl z) = refl (refl z)
  ε : (q : (pr₁ z ≡ pr₁ t) × (pr₂ z ≡ pr₂ t)) → from-×-≡ (to-×-≡ q) ≡ q
  ε (refl x , refl y) = refl (refl x , refl y)
```
The following are often useful:
```agda
ap-pr₁-to-×-≡ : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } {z t : X × Y}
              → (p₁ : pr₁ z ≡ pr₁ t)
              → (p₂ : pr₂ z ≡ pr₂ t)
              → ap pr₁ (to-×-≡ (p₁ , p₂)) ≡ p₁
ap-pr₁-to-×-≡ (refl x) (refl y) = refl (refl x)

ap-pr₂-to-×-≡ : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } {z t : X × Y}
              → (p₁ : pr₁ z ≡ pr₁ t)
              → (p₂ : pr₂ z ≡ pr₂ t)
              → ap pr₂ (to-×-≡ (p₁ , p₂)) ≡ p₂
ap-pr₂-to-×-≡ (refl x) (refl y) = refl (refl y)

Σ-cong : {X : 𝓤 ̇ } {A : X → 𝓥 ̇ } {B : X → 𝓦 ̇ }
       → ((x : X) → A x ≃ B x) → Σ A ≃ Σ B
Σ-cong {𝓤} {𝓥} {𝓦} {X} {A} {B} φ =
  invertibility-gives-≃ (NatΣ f) (NatΣ g , NatΣ-η , NatΣ-ε)
 where
  f : (x : X) → A x → B x
  f x = Eq→fun (φ x)
  g : (x : X) → B x → A x
  g x = inverse (f x) (Eq→fun-is-equiv (φ x))
  η : (x : X) (a : A x) → g x (f x a) ≡ a
  η x = inverse-is-retraction (f x) (Eq→fun-is-equiv (φ x))
  ε : (x : X) (b : B x) → f x (g x b) ≡ b
  ε x = inverse-is-section (f x) (Eq→fun-is-equiv (φ x))
  NatΣ-η : (w : Σ A) → NatΣ g (NatΣ f w) ≡ w
  NatΣ-η (x , a) = x , g x (f x a) ≡⟨ to-Σ-≡' (η x a) ⟩
                   x , a           ∎
  NatΣ-ε : (t : Σ B) → NatΣ f (NatΣ g t) ≡ t
  NatΣ-ε (x , b) = x , f x (g x b) ≡⟨ to-Σ-≡' (ε x b) ⟩
                   x , b           ∎

≃-gives-◁ : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → X ≃ Y → X ◁ Y
≃-gives-◁ (f , e) = (inverse f e , f , inverse-is-retraction f e)

≃-gives-▷ : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → X ≃ Y → Y ◁ X
≃-gives-▷ (f , e) = (f , inverse f e , inverse-is-section f e)

equiv-to-singleton : {X : 𝓤 ̇ } {Y : 𝓥 ̇ }
                   → X ≃ Y → is-singleton Y → is-singleton X
equiv-to-singleton e = retract-of-singleton (≃-gives-◁ e)
```
There is a canonical transformation `(X Y : 𝓤 ̇ ) → X ≡ Y → X ≃ Y` that sends the identity identification `refl X : X ≡ X` to the identity equivalence `id-≃ X : X ≃ X`. The univalence axiom, for the universe `𝓤`, says that this canonical map is itself an equivalence.
```agda
Id→Eq : (X Y : 𝓤 ̇ ) → X ≡ Y → X ≃ Y
Id→Eq X X (refl X) = id-≃ X

is-univalent : (𝓤 : Universe) → 𝓤 ⁺ ̇
is-univalent 𝓤 = (X Y : 𝓤 ̇ ) → is-equiv (Id→Eq X Y)
```
Thus, the univalence of the universe `𝓤` says that identifications `X ≡ Y` of types in `𝓤` are in canonical bijection with equivalences `X ≃ Y`, if by bijection we mean equivalence, where the canonical bijection is `Id→Eq`.
We emphasize that this doesn’t posit that univalence holds. It says what univalence is (like the type that says what the twin-prime conjecture is).
```agda
univalence-≃ : is-univalent 𝓤 → (X Y : 𝓤 ̇ ) → (X ≡ Y) ≃ (X ≃ Y)
univalence-≃ ua X Y = Id→Eq X Y , ua X Y

Eq→Id : is-univalent 𝓤 → (X Y : 𝓤 ̇ ) → X ≃ Y → X ≡ Y
Eq→Id ua X Y = inverse (Id→Eq X Y) (ua X Y)
```
Here is a third way to convert a type identification into a function:
```agda
Id→fun : {X Y : 𝓤 ̇ } → X ≡ Y → X → Y
Id→fun {𝓤} {X} {Y} p = Eq→fun (Id→Eq X Y p)

Id→funs-agree : {X Y : 𝓤 ̇ } (p : X ≡ Y) → Id→fun p ≡ Id→Fun p
Id→funs-agree (refl X) = refl (𝑖𝑑 X)
```
What characterizes univalent mathematics is not the univalence axiom, but rather the concepts of hlevel, including singleton, subsingleton and set, and the notion of equivalence, all of which we have defined and studied in a pure, spartan MLTT. Univalence is a fundamental ingredient, but first we need the correct notion of equivalence to be able to formulate it.
Remark. If we formulate univalence with invertible maps instead of equivalences, we get a statement that is provably false in MLTT, and this is one of the reasons why Voevodsky’s notion of equivalence is important. This is Exercise 4.6 of the HoTT book. There is a solution in Coq by Mike Shulman.
We have two automorphisms of `𝟚`, namely the identity function and the map that swaps `₀` and `₁`:
```agda
swap₂ : 𝟚 → 𝟚
swap₂ ₀ = ₁
swap₂ ₁ = ₀

swap₂-involutive : (n : 𝟚) → swap₂ (swap₂ n) ≡ n
swap₂-involutive ₀ = refl ₀
swap₂-involutive ₁ = refl ₁
```
That is, `swap₂` is its own inverse, and hence it is an equivalence:
```agda
swap₂-is-equiv : is-equiv swap₂
swap₂-is-equiv = invertibles-are-equivs swap₂
                  (swap₂ , swap₂-involutive , swap₂-involutive)
```
We now use a local module to assume univalence of the first universe in the construction of our example:
```agda
module example-of-a-nonset (ua : is-univalent 𝓤₀) where
```
The above gives two distinct equivalences:
```agda
 e₀ e₁ : 𝟚 ≃ 𝟚
 e₀ = id-≃ 𝟚
 e₁ = swap₂ , swap₂-is-equiv

 e₀-is-not-e₁ : e₀ ≢ e₁
 e₀-is-not-e₁ p = ₁-is-not-₀ r
  where
   q : id ≡ swap₂
   q = ap Eq→fun p
   r : ₁ ≡ ₀
   r = ap (λ - → - ₁) q
```
Using univalence, we get two different identifications of the type `𝟚` with itself:
```agda
 p₀ p₁ : 𝟚 ≡ 𝟚
 p₀ = Eq→Id ua 𝟚 𝟚 e₀
 p₁ = Eq→Id ua 𝟚 𝟚 e₁

 p₀-is-not-p₁ : p₀ ≢ p₁
 p₀-is-not-p₁ q = e₀-is-not-e₁ r
  where
   r = e₀           ≡⟨ (inverse-is-section (Id→Eq 𝟚 𝟚) (ua 𝟚 𝟚) e₀)⁻¹ ⟩
       Id→Eq 𝟚 𝟚 p₀ ≡⟨ ap (Id→Eq 𝟚 𝟚) q ⟩
       Id→Eq 𝟚 𝟚 p₁ ≡⟨ inverse-is-section (Id→Eq 𝟚 𝟚) (ua 𝟚 𝟚) e₁ ⟩
       e₁           ∎
```
If the universe `𝓤₀` were a set, then the identifications `p₀` and `p₁` defined above would be equal, and therefore it is not a set:
```agda
 𝓤₀-is-not-a-set : ¬(is-set (𝓤₀ ̇ ))
 𝓤₀-is-not-a-set s = p₀-is-not-p₁ q
  where
   q : p₀ ≡ p₁
   q = s 𝟚 𝟚 p₀ p₁
```
For more examples, see [Kraus and Sattler].
Here are some facts whose proofs are left to the reader but that we will need from the next section onwards. Sample solutions are given below.
Define functions for the following type declarations. As a matter of procedure, we suggest to import this file in a solutions file and add another declaration with the same type and a new name, e.g. `sections-are-lc-solution`, because we already have solutions in this file. It is important not to forget to include the option `--without-K` in the solutions file that imports (the Agda version of) this file.
```agda
subsingleton-criterion : {X : 𝓤 ̇ }
                       → (X → is-singleton X) → is-subsingleton X

retract-of-subsingleton : {X : 𝓤 ̇ } {Y : 𝓥 ̇ }
                        → Y ◁ X → is-subsingleton X → is-subsingleton Y

left-cancellable : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → (X → Y) → 𝓤 ⊔ 𝓥 ̇
left-cancellable f = {x x' : domain f} → f x ≡ f x' → x ≡ x'

lc-maps-reflect-subsingletons : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y)
                              → left-cancellable f
                              → is-subsingleton Y → is-subsingleton X

has-retraction : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → (X → Y) → 𝓤 ⊔ 𝓥 ̇
has-retraction s = Σ \(r : codomain s → domain s) → r ∘ s ∼ id

sections-are-lc : {X : 𝓤 ̇ } {A : 𝓥 ̇ } (s : X → A)
                → has-retraction s → left-cancellable s

equivs-have-retractions : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y)
                        → is-equiv f → has-retraction f

equivs-have-sections : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y)
                     → is-equiv f → has-section f

equivs-are-lc : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y)
              → is-equiv f → left-cancellable f

equiv-to-subsingleton : {X : 𝓤 ̇ } {Y : 𝓥 ̇ }
                      → X ≃ Y → is-subsingleton Y → is-subsingleton X

comp-inverses : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } {Z : 𝓦 ̇ }
                (f : X → Y) (g : Y → Z)
                (i : is-equiv f) (j : is-equiv g)
                (f' : Y → X) (g' : Z → Y)
              → f' ∼ inverse f i
              → g' ∼ inverse g j
              → f' ∘ g' ∼ inverse (g ∘ f) (∘-is-equiv j i)

equiv-to-set : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → X ≃ Y → is-set Y → is-set X

sections-closed-under-∼ : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f g : X → Y)
                        → has-retraction f → g ∼ f → has-retraction g

retractions-closed-under-∼ : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f g : X → Y)
                           → has-section f → g ∼ f → has-section g
```
An alternative notion of equivalence, equivalent to Voevodsky’s, has been given by André Joyal:
```agda
is-joyal-equiv : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } → (X → Y) → 𝓤 ⊔ 𝓥 ̇
is-joyal-equiv f = has-section f × has-retraction f
```
Provide definitions for the following type declarations:
```agda
one-inverse : (X : 𝓤 ̇ ) (Y : 𝓥 ̇ )
              (f : X → Y) (r s : Y → X)
            → (r ∘ f ∼ id) → (f ∘ s ∼ id) → r ∼ s

joyal-equivs-are-invertible : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y)
                            → is-joyal-equiv f → invertible f

joyal-equivs-are-equivs : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y)
                        → is-joyal-equiv f → is-equiv f

invertibles-are-joyal-equivs : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y)
                             → invertible f → is-joyal-equiv f

equivs-are-joyal-equivs : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y)
                        → is-equiv f → is-joyal-equiv f

equivs-closed-under-∼ : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } {f g : X → Y}
                      → is-equiv f → g ∼ f → is-equiv g

equiv-to-singleton' : {X : 𝓤 ̇ } {Y : 𝓥 ̇ }
                    → X ≃ Y → is-singleton X → is-singleton Y

subtypes-of-sets-are-sets : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (m : X → Y)
                          → left-cancellable m → is-set Y → is-set X

pr₁-lc : {X : 𝓤 ̇ } {A : X → 𝓥 ̇ }
       → ((x : X) → is-subsingleton (A x))
       → left-cancellable (λ (t : Σ A) → pr₁ t)

subsets-of-sets-are-sets : (X : 𝓤 ̇ ) (A : X → 𝓥 ̇ )
                         → is-set X
                         → ((x : X) → is-subsingleton (A x))
                         → is-set (Σ \(x : X) → A x)

to-subtype-≡ : {X : 𝓦 ̇ } {A : X → 𝓥 ̇ } {x y : X} {a : A x} {b : A y}
             → ((x : X) → is-subsingleton (A x))
             → x ≡ y → (x , a) ≡ (y , b)

pr₁-equiv : {X : 𝓤 ̇ } {A : X → 𝓥 ̇ }
          → ((x : X) → is-singleton (A x))
          → is-equiv (λ (t : Σ A) → pr₁ t)

pr₁-≃ : {X : 𝓤 ̇ } {A : X → 𝓥 ̇ }
      → ((x : X) → is-singleton (A x)) → Σ A ≃ X
pr₁-≃ i = pr₁ , pr₁-equiv i

ΠΣ-distr-≃ : {X : 𝓤 ̇ } {A : X → 𝓥 ̇ } {P : (x : X) → A x → 𝓦 ̇ }
           → (Π \(x : X) → Σ \(a : A x) → P x a)
           ≃ (Σ \(f : Π A) → Π \(x : X) → P x (f x))

Σ-assoc : {X : 𝓤 ̇ } {Y : X → 𝓥 ̇ } {Z : Σ Y → 𝓦 ̇ }
        → Σ Z ≃ (Σ \(x : X) → Σ \(y : Y x) → Z (x , y))

⁻¹-≃ : {X : 𝓤 ̇ } (x y : X) → (x ≡ y) ≃ (y ≡ x)

singleton-types-≃ : {X : 𝓤 ̇ } (x : X) → singleton-type' x ≃ singleton-type x

singletons-≃ : {X : 𝓤 ̇ } {Y : 𝓥 ̇ }
             → is-singleton X → is-singleton Y → X ≃ Y

maps-of-singletons-are-equivs : {X : 𝓤 ̇ } {Y : 𝓥 ̇ } (f : X → Y)
                              → is-singleton X → is-singleton Y → is-equiv f

logically-equivalent-subsingletons-are-equivalent : (X : 𝓤 ̇ ) (Y : 𝓥 ̇ )
                                                  → is-subsingleton X
                                                  → is-subsingleton Y
                                                  → X ⇔ Y → X ≃ Y

singletons-are-equivalent : (X : 𝓤 ̇ ) (Y : 𝓥 ̇ )
                          → is-singleton X → is-singleton Y → X ≃ Y

NatΣ-fiber-equiv : {X : 𝓤 ̇ } (A : X → 𝓥 ̇ ) (B : X → 𝓦 ̇ ) (φ : Nat A B)
                 → (x : X) (b : B x)
                 → fiber (φ x) b ≃ fiber (NatΣ φ) (x , b)

NatΣ-equiv-gives-fiberwise-equiv : {X : 𝓤 ̇ } {A : X → 𝓥 ̇ } {B : X → 𝓦 ̇ }
                                   (φ : Nat A B)
                                 → is-equiv (NatΣ φ)
                                 → ((x : X) → is-equiv (φ x))

Σ-is-subsingleton : {X : 𝓤 ̇ } {A : X → 𝓥 ̇ }
                  → is-subsingleton X
                  → ((x : X) → is-subsingleton (A x))
                  → is-subsingleton (Σ A)

×-is-singleton : {X : 𝓤 ̇ } {Y : 𝓥 ̇ }
               → is-singleton X → is-singleton Y → is-singleton (X × Y)

×-is-subsingleton : {X : 𝓤 ̇ } {Y : 𝓥 ̇ }
                  → is-subsingleton X → is-subsingleton Y
                  → is-subsingleton (X × Y)

×-is-subsingleton' : {X : 𝓤 ̇ } {Y : 𝓥 ̇ }
                   → ((Y → is-subsingleton X) × (X → is-subsingleton Y))
                   → is-subsingleton (X × Y)

×-is-subsingleton'-back : {X : 𝓤 ̇ } {Y : 𝓥 ̇ }
                        → is-subsingleton (X × Y) → (
```