CS 345H: Programming Languages (Honors) Spring 2024

Lecture 10: Types Potpourri: Products and Sums

Let's wrap up our discussion of the simply typed lambda calculus (STLC) by visiting a few extensions to the type system. None of these extensions is especially surprising, but together they demonstrate that STLC is a close analog of real programming languages. We're going to do a whirlwind tour of two extensions to STLC: product types (which you might know as tuples) and sum types (which you might know as disjoint unions or enums).

Pairs

One language feature we've already seen in Coq, Racket, Dafny, and Rust, and indeed in most languages, is pairs or tuples. Pairs allow us to create ad-hoc data structures composed of multiple base terms; while not strictly necessary, they're a very convenient feature.

There's not much to the formalization of pairs. As always, we start by adding syntax. We'll add three new constructors to the lambda calculus syntax:

t := ...
   | (t, t)
   | t.1
   | t.2

The first new constructor is how we create a new pair, and the two other constructors are how we access the first and second elements of the pair. These accessor constructs are often called projections.

Now we need new semantics. We're going to stick with the call-by-value behavior we've been using for the rest of our semantics. Let's start with two rules for projection, which let us keep evaluating the term under a projection until it becomes a pair of values: $$ \frac{t_p \rightarrow t_p'} {t_p.1 \rightarrow t_p'.1} \: \textrm{SProj1} $$ $$ \frac{t_p \rightarrow t_p'} {t_p.2 \rightarrow t_p'.2} \: \textrm{SProj2} $$ But how does the pair itself take a step? We'll need two rules that specify the call-by-value evaluation order: first evaluate the first element, then evaluate the second. $$ \frac{t_1 \rightarrow t_1'} {(t_1, t_2) \rightarrow (t_1', t_2)} \: \textrm{SPair1} $$ $$ \frac{t_2 \rightarrow t_2'} {(v_1, t_2) \rightarrow (v_1, t_2')} \: \textrm{SPair2} $$ Notice we're using the same convention as before, where a metavariable $v$ means "value", so the rule $\textrm{SPair2}$ only applies once the first element has finished evaluating. Finally, once both elements of the pair are values, we can apply the projection operation to extract whichever side of the pair we were looking for: $$ \frac{} {(v_1, v_2).1 \rightarrow v_1} \: \textrm{SPairExt1} $$ $$ \frac{} {(v_1, v_2).2 \rightarrow v_2} \: \textrm{SPairExt2} $$ Nothing too surprising there.
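To make these rules concrete, here's a minimal small-step evaluator for the pair fragment, sketched in Python. The tuple encoding of terms (`("pair", ...)`, `("proj1", ...)`, `("val", ...)`) is our own illustration, not part of the formal calculus.

```python
# Terms (our own encoding, for illustration only):
#   ("val", v)        -- an already-evaluated base value
#   ("pair", t1, t2)  -- the pair (t1, t2)
#   ("proj1", t)      -- t.1
#   ("proj2", t)      -- t.2

def is_value(t):
    # A pair is a value only once both components are values.
    if t[0] == "val":
        return True
    if t[0] == "pair":
        return is_value(t[1]) and is_value(t[2])
    return False

def step(t):
    """Take one small step, following SPair1/SPair2, SProj1/SProj2, SPairExt1/SPairExt2."""
    tag = t[0]
    if tag == "pair":
        t1, t2 = t[1], t[2]
        if not is_value(t1):          # SPair1: evaluate the first element first
            return ("pair", step(t1), t2)
        if not is_value(t2):          # SPair2: only then the second
            return ("pair", t1, step(t2))
        raise ValueError("pair is already a value")
    if tag in ("proj1", "proj2"):
        tp = t[1]
        if not is_value(tp):          # SProj1 / SProj2: evaluate under the projection
            return (tag, step(tp))
        # SPairExt1 / SPairExt2: project out of a fully evaluated pair
        return tp[1] if tag == "proj1" else tp[2]
    raise ValueError("stuck term")

def eval_term(t):
    while not is_value(t):
        t = step(t)
    return t
```

Notice the evaluator mirrors the rules directly: the guards in the `pair` case enforce the left-to-right call-by-value order that SPair1 and SPair2 specify.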

Product types

Now we need to give types for pairs to fit them into STLC. The type of a pair is called a product type, because the space of values is the Cartesian product of the two base types. We need to add one new constructor to the syntax of types:

T := bool
   | T -> T
   | T × T

To extend the typing judgment, we need one new rule for each new syntactic constructor in the calculus itself. The rule for pair creation is the introduction rule for product types, and essentially just delegates to the types of the two sides: $$ \frac{\Gamma \vdash t_1 : T_1 \quad\quad \Gamma \vdash t_2 : T_2} {\Gamma \vdash (t_1, t_2) : T_1 \times T_2} \: \textrm{TPairIntro} $$ Now we need two elimination rules for product types, one for each of the two projections: $$ \frac{\Gamma \vdash t : T_1 \times T_2} {\Gamma \vdash t.1 : T_1 } \: \textrm{TPairElim1} $$ $$ \frac{\Gamma \vdash t : T_1 \times T_2} {\Gamma \vdash t.2 : T_2 } \: \textrm{TPairElim2} $$ That's it!
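The three typing rules translate almost line-for-line into a recursive checker. Below is a sketch in Python; the encodings of types as `"bool"` and `("prod", T1, T2)`, and of contexts as dicts, are our own assumptions for illustration.

```python
# Types (our encoding): "bool", ("arrow", T1, T2), ("prod", T1, T2)
# Contexts: a dict mapping variable names to types.

def typecheck(gamma, t):
    tag = t[0]
    if tag in ("true", "false"):
        return "bool"
    if tag == "var":
        return gamma[t[1]]
    if tag == "pair":                  # TPairIntro: delegate to both sides
        return ("prod", typecheck(gamma, t[1]), typecheck(gamma, t[2]))
    if tag in ("proj1", "proj2"):      # TPairElim1 / TPairElim2
        ty = typecheck(gamma, t[1])
        if not (isinstance(ty, tuple) and ty[0] == "prod"):
            raise TypeError("projection from a non-product")
        return ty[1] if tag == "proj1" else ty[2]
    raise TypeError("unknown term")
```

Note how the elimination cases first check that the subterm has a product type at all; this is exactly the premise $\Gamma \vdash t : T_1 \times T_2$ of the elimination rules.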

It's not too hard to extend pairs and product types to a more general notion of tuples if we wanted to, but it's a bit notationally inconvenient.

Disjoint unions

The essence of pairs is the ability to combine two values into one. Product types are, in some hand-waving sense, the "and" of two values—a pair $T_1 \times T_2$ has both a value of type $T_1$ and a value of type $T_2$. The dual of a product type would therefore be some notion of an "or" of two values, where the value could be either type $T_1$ or type $T_2$. Indeed, this is a language feature you've probably seen before, where a value could be a disjoint union of two types; for example, "either a number or a boolean". Disjoint unions are more commonly bottled up into an enum or variant type, which can be one of several disjoint types.

Generalized variants are a bit notationally annoying, just like generalized pairs, so we'll study the simpler version where we just have exactly two types. The values here are called sums and the types are called sum types.

Let's start with the syntax again. We'll add three new constructors:

t := ...
   | inl t
   | inr t
   | case t of inl x => t | inr x => t

The first two new constructors are the ways we create a sum. Their names can be read as "inject left" and "inject right"—we can think of inl and inr as functions that map from a base type to the sum type. The third constructor is how we use sums, and might look familiar—it's a simplistic version of pattern matching. It lets us evaluate differently based on which of the two variants the sum value is. In each of the two branches of the case, we'll bind x to the value inside the sum and then evaluate the body t.

Now we need semantics. Let's start with the administrative rules that let us evaluate a sum. First, the ability to step inside the inl and inr constructors: $$ \frac{t_1 \rightarrow t_1'} {\texttt{inl } t_1 \rightarrow \texttt{inl } t_1'} \: \textrm{SInl} $$ $$ \frac{t_1 \rightarrow t_1'} {\texttt{inr } t_1 \rightarrow \texttt{inr } t_1'} \: \textrm{SInr} $$ Now the administrative rule that lets us keep evaluating the $t_1$ parameter inside a case term: $$ \frac{t_1 \rightarrow t_1'} {\texttt{case } t_1 \texttt{ of inl } x_2 \texttt{ => } t_2 \texttt{ | inr } x_3 \texttt{ => } t_3 \rightarrow\\ \texttt{case } t_1' \texttt{ of inl } x_2 \texttt{ => } t_2 \texttt{ | inr } x_3 \texttt{ => } t_3} \: \textrm{SCase} $$ Finally, once the parameter has reduced to a value, we can decide which side of the case to run. In both cases, we evaluate by substituting the value inside the sum for the variable x in the body: $$ \frac{} {\texttt{case } \texttt{(inl }v_1\texttt{) of inl } x_2 \texttt{ => } t_2 \texttt{ | inr } x_3 \texttt{ => } t_3 \rightarrow t_2[v_1/x_2]} \: \textrm{SCaseLeft} $$ $$ \frac{} {\texttt{case } \texttt{(inr }v_1\texttt{) of inl } x_2 \texttt{ => } t_2 \texttt{ | inr } x_3 \texttt{ => } t_3 \rightarrow t_3[v_1/x_3]} \: \textrm{SCaseRight} $$ Again, not a lot that's surprising. Our call-by-value semantics often follow the same structure: evaluate the inner part of something as far as we can, and then if we reached a value, evaluate the outer part.
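These rules can also be sketched as a small evaluator. As before, the tuple encoding of terms is our own illustration; substitution here is the literal textual substitution $t[v/x]$ from the rules, restricted to this small fragment.

```python
# Terms (our encoding): ("val", v), ("var", x), ("inl", t), ("inr", t),
#   ("case", t1, x2, t2, x3, t3)  -- case t1 of inl x2 => t2 | inr x3 => t3

def is_value(t):
    return t[0] == "val" or (t[0] in ("inl", "inr") and is_value(t[1]))

def subst(t, x, v):
    """t[v/x] for this fragment; case binders shadow x in their branch."""
    tag = t[0]
    if tag == "var":
        return v if t[1] == x else t
    if tag in ("inl", "inr"):
        return (tag, subst(t[1], x, v))
    if tag == "case":
        _, t1, x2, t2, x3, t3 = t
        return ("case", subst(t1, x, v),
                x2, t2 if x2 == x else subst(t2, x, v),
                x3, t3 if x3 == x else subst(t3, x, v))
    return t  # values have no free variables to substitute

def step(t):
    tag = t[0]
    if tag in ("inl", "inr"):             # SInl / SInr: step inside the injection
        return (tag, step(t[1]))
    if tag == "case":
        _, t1, x2, t2, x3, t3 = t
        if not is_value(t1):              # SCase: evaluate the scrutinee first
            return ("case", step(t1), x2, t2, x3, t3)
        if t1[0] == "inl":                # SCaseLeft: t2[v1/x2]
            return subst(t2, x2, t1[1])
        if t1[0] == "inr":                # SCaseRight: t3[v1/x3]
            return subst(t3, x3, t1[1])
    raise ValueError("stuck term")

def eval_term(t):
    while not is_value(t):
        t = step(t)
    return t
```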

Sum types

Just like with product types, now we need to introduce sum types to give types to the new sum constructors. First, let's add the new syntax for sum types:

T := bool
   | T -> T
   | T × T
   | T + T

A sum type $T_1 + T_2$ is either a $T_1$ or a $T_2$.

Now let's extend the typing judgment. Just like before, we'll have introduction and elimination rules. There are two ways to introduce a sum type, one for each of the two sum constructors: $$ \frac{\Gamma \vdash t_1 : T_1} {\Gamma \vdash \texttt{inl } t_1 : T_1 + T_2} \: \textrm{TInl} $$ $$ \frac{\Gamma \vdash t_1 : T_2} {\Gamma \vdash \texttt{inr } t_1 : T_1 + T_2} \: \textrm{TInr} $$ And there's one way to eliminate sum types, via the case constructor: $$ \frac{\Gamma \vdash t_1 : T_2 + T_3 \quad\quad \Gamma,x_2:T_2 \vdash t_2 : T \quad\quad \Gamma,x_3:T_3 \vdash t_3 : T} {\Gamma \vdash \texttt{case } t_1\texttt{ of inl } x_2 \texttt{ => } t_2 \texttt{ | inr } x_3 \texttt{ => } t_3 : T} \: \textrm{TCase} $$ This is probably the most complex of the rules we've added so far. It operates in vaguely the same way as the introduction rule $\textrm{TAbs}$ for abstractions from Lecture 7. First, the term $t_1$ that we're matching on needs to be a sum type. Then, each of the two cases needs to evaluate to the same type $T$, but like with abstraction, each case gets an additional assumption that $x_2$ and $x_3$ have the type of their corresponding variant of the sum type ($T_2$ or $T_3$ respectively). If all those assumptions are true, then the whole case term has the type $T$ that both branches evaluate to.
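The TCase rule can be sketched as a checking function. The encodings below (sum types as `("sum", T1, T2)`, contexts as dicts) are our own illustration; note that checking works smoothly here when the scrutinee's sum type can be computed, e.g. when it's a variable whose type is in $\Gamma$.

```python
# Types (our encoding): "bool", ("sum", T1, T2). Contexts: dicts.

def typecheck(gamma, t):
    tag = t[0]
    if tag in ("true", "false"):
        return "bool"
    if tag == "var":
        return gamma[t[1]]
    if tag == "case":                 # TCase
        _, t1, x2, t2, x3, t3 = t
        scrut_ty = typecheck(gamma, t1)
        if not (isinstance(scrut_ty, tuple) and scrut_ty[0] == "sum"):
            raise TypeError("case scrutinee must have a sum type")
        _, ty2, ty3 = scrut_ty
        # Each branch is checked under a context extended with its binder...
        left = typecheck({**gamma, x2: ty2}, t2)
        right = typecheck({**gamma, x3: ty3}, t3)
        if left != right:             # ...and both branches must agree on T
            raise TypeError("case branches have different types")
        return left
    raise TypeError("unknown term")
```

The shape is exactly the rule's three premises: compute the scrutinee's sum type, check each branch in an extended context, and require the two branch types to coincide.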

A uniqueness problem

Product types were pretty easy, and both product and sum types preserve soundness. But there's an interesting property of STLC that is no longer maintained once we add sum types. We never really discussed it, but the base STLC type system has a uniqueness property:

Theorem (uniqueness): Any closed term $t$ has at most one type.

This property is no longer true once we add sum types. The problem is that, once we have shown that a term $t$ has type $T_1$, we can use rule $\textrm{TInl}$ (or $\textrm{TInr}$, it doesn't matter which) to prove that $\texttt{inl } t$ (or $\texttt{inr } t$) has type $T_1 + T_2$ (or $T_2 + T_1$) for any type $T_2$. For example, inl True has type Bool + Bool, but also type Bool + (Bool -> Bool), Bool + Nat, etc.

Why do we care about uniqueness? First, it makes some proofs easier. But more importantly, it makes syntactic type checking easier: with uniqueness, we can check the type of a term by just applying the typing rules "bottom up" (starting from the entire term and recursing on its subterms). But with sum types, to apply the TInl and TInr rules we need to know in advance what type $T_2$ we're going to use for the "other" side of the sum.

This is unfortunate! There are a few different solutions. One is type inference—we can try to "guess" or "infer" what the type should be. A simpler mechanism is just to annotate the inl and inr constructors with the other type; these are sometimes called tags. Note that the tag is about the other type of the sum, not the one we're actually using, since we can already typecheck that side.
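The tagging idea is easy to sketch: each injection carries the type of the side it is *not* providing, and the checker reads that type off instead of guessing. The encoding `("inl", t, T2)` is our own illustration of the annotated constructors.

```python
# Annotated injections (our encoding):
#   ("inl", t1, T2) -- inl t1, tagged with the right-hand type T2
#   ("inr", t1, T1) -- inr t1, tagged with the left-hand type T1

def typecheck(gamma, t):
    tag = t[0]
    if tag in ("true", "false"):
        return "bool"
    if tag == "var":
        return gamma[t[1]]
    if tag == "inl":                  # annotated TInl: the tag supplies T2
        _, t1, right_ty = t
        return ("sum", typecheck(gamma, t1), right_ty)
    if tag == "inr":                  # annotated TInr: the tag supplies T1
        _, t1, left_ty = t
        return ("sum", left_ty, typecheck(gamma, t1))
    raise TypeError("unknown term")
```

With the tag in hand, every term again has exactly one derivable type, and bottom-up checking goes through as before.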