nits before release (including Martin’s proofreading)
MevenBertrand committed Apr 15, 2022
1 parent 74c2367 commit bb51fcf
Showing 13 changed files with 130 additions and 96 deletions.
2 changes: 1 addition & 1 deletion Manuscript/biblio.bib
@@ -3457,7 +3457,7 @@ @InProceedings{Nguyen2019
}

@Online{CoqManual,
author = {{The Coq Development Team}},
author = {{Coq Development Team}, The},
date = {2022-01},
title = {The Coq proof assistant reference manual},
url = {https://coq.inria.fr/refman/},
8 changes: 4 additions & 4 deletions Manuscript/bidir-ccw.tex
@@ -236,7 +236,7 @@ \subsection{Constrained inference in disguise}
and the other judgements are inlined.
Once again, reduction is not enough to perform constrained inference, this time
because type constructors can be hidden in subsets:
a type such as $\{f : \Nat \to \Nat \mid f\ 0 = 0 \}$
an inhabitant of a type such as $\{f : \Nat \to \Nat \mid f\ 0 = 0 \}$
should be usable as a function of type $\Nat \to \Nat$.
An erasure procedure is therefore required on top of reduction to remove subsets in the places where we use constrained inference.

@@ -270,8 +270,8 @@ \section{Properties of the Bidirectional System}
The first two relate it to the
undirected one: it is both \kl(bidir){correct} – terms typeable in the bidirectional system are typeable in the undirected system – and \kl(bidir){complete} – all terms typeable in the undirected system are also typeable in the bidirectional system.
Next, we investigate \kl{uniqueness of types}, and its relation to the choice of a strategy for reduction.
Finally, we show how \kl{strengthening} can be shown for undirected \kl{CCω} by proving it on the
directed side.
Finally, we expose how \kl{strengthening} can be shown for undirected \kl{CCω}
by proving it on the bidirectional side.

\subsection{Correctness}

@@ -480,7 +480,7 @@ \subsection{Uniqueness}
\end{theorem}

\begin{proof}
Since $\Gamma \vdash t \ty T'$, then by correctness
Since $\Gamma \vdash t \ty T$, by correctness
there exists some $T'$ such that $\inferty{\Gamma}{t}{T'}$ and moreover $T' \conv T$.
Similarly, there exists some $S'$ such that $\inferty{\Gamma}{t}{S'}$
and moreover $S' \conv S$.
2 changes: 1 addition & 1 deletion Manuscript/bidir-intro.tex
@@ -47,7 +47,7 @@
In the context of \kl{Church-style} abstraction, the closest there is to a description of
bidirectional typing for \kl{CIC} is probably the one given by the
\kl{Matita} team \sidecite{Asperti2012},
who however again concentrate on the challenges posed by the
which however concentrates again on the challenges posed by the
elaboration and unification algorithms.
They also do not consider the problem of completeness with respect to a given undirected system, as it would fail in their setting due to the undecidability of higher order unification.

4 changes: 2 additions & 2 deletions Manuscript/bidir-pcuic.tex
@@ -233,5 +233,5 @@ \subsection{Polymorphic Inductive Types}
in theory and in practice. Can we give a presentation of polymorphic inductive types
that is as lightweight as pair types in \cref{fig:bidir-pair}?
The bidirectional presentation is valuable there, because now
it is clear what the specification of an alternative syntax:
it should stay be complete, in the sense of \cref{thm:comp-cumul}.
it is clear what the specification of an alternative syntax is:
it should remain complete, in the sense of \cref{thm:comp-cumul}.
85 changes: 47 additions & 38 deletions Manuscript/gradual-dependent.tex
@@ -187,7 +187,7 @@ \subsection{Fundamental Trade-Offs}
and revisiting them in the context of a dependent type theory (\cref{sec:graduality}).
This finally leads us to establish a fundamental impossibility in the gradualization
of \kl{CIC}, which means that at least one of the desired properties has to be sacrificed (\cref{sec:fire-triangle}).
With all set up, we can finally present our \kl{gradual},
With all set up, we can finally present our \kl(typ){gradual},
\kl{dependently} typed system, \kl{GCIC}, and its main characteristics
(\cref{sec:gcic-overview}).

@@ -250,7 +250,7 @@ \section{Safety and Normalization, Endangered}[Safety and Normalization]
term $\Omega$, defined as $\delta~\delta$
where $\delta$ is $\l x.\ x\ x$.
In the \intro{simply-typed lambda calculus}%
\sidenote[][2em]{Hereafter abbreviated as \intro{STLC}.},
\sidenote{Hereafter abbreviated as \intro{STLC}.},
as in \kl{CIC}, \emph{self-applications} like $\delta\ \delta$ and $x\ x$ are ill-typed.
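As a reminder of why $\Omega$ diverges, the loop can be spelled out in a single $\beta$-step (a standard calculation, with $\rhd$ standing for one step of reduction; the notation is assumed, not taken from the manuscript):

```latex
\[
\Omega
\;=\; \delta\ \delta
\;=\; (\l x.\ x\ x)\ (\l x.\ x\ x)
\;\rhd\; (\l x.\ x\ x)\ (\l x.\ x\ x)
\;=\; \Omega
\]
```

Since the redex reproduces itself exactly, reduction never terminates.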
However, when introducing gradual types, one usually expects to accommodate such idioms,
and therefore in a standard gradually-typed calculus such as
@@ -274,7 +274,7 @@ \subsection{Axioms}
why would one want to gradualize \kl{CIC} instead of simply postulating
an axiom for any term (be it a program or a proof) that one does not feel like providing (yet)?

Indeed, we can augment \kl{CIC} with a wildcard axiom $\axiom \ty \P A : \uni.\ A$.
\AP Indeed, we can augment \kl{CIC} with a wildcard axiom $\axiom \ty \P A : \uni.\ A$.
The resulting system, called \intro{CICax}, has an obvious practical benefit: we can use
$\axiom A$%
%\sidenote{Hereafter written $\axiom[A]$.}
@@ -329,7 +329,7 @@ \subsection{Axioms}
\subsection{Exceptions}
\label{sec:extt}

\sidetextcite[0em]{Pedrot2018} present the exceptional type theory \intro{ExTT},
\AP \sidetextcite{Pedrot2018} present the exceptional type theory \intro{ExTT},
demonstrating that it is possible to extend a
type theory with a wildcard term while enjoying a satisfying notion of \kl{safety},
which coincides with that of programming languages with exceptions.
@@ -343,7 +343,7 @@ \subsection{Exceptions}
For instance, in \kl{ExTT} the following conversion holds:
\[\ind{\Bool}{\rai[\Bool]}{\Nat}{0,1} \conv \rai[\Nat]\]

Notably, such exceptions are \intro{call-by-name} exceptions, so one can only
\AP Notably, such exceptions are \intro{call-by-name} exceptions, so one can only
discriminate exceptions on positive types – \ie inductive types –, not on negative
types – \ie function types. In particular, in \kl{ExTT}, $\rai[A \to B]$ and
$\l x : A.\ \rai[B]$ are \kl{convertible}.
@@ -405,7 +405,8 @@ \section{Gradual Simple Types}

\subsection{Static semantics}

\intro[gradual]{Gradually typed} languages introduce the \reintro{unknown type}, written $\?$,
\AP \intro[gradual typing]{Gradually typed} languages introduce the
\reintro{unknown type}, written $\?$,
which is used to indicate the lack of static typing information \sidecite{Siek2006}.
One can understand such an unknown type as an abstraction of the
set of possible types that it stands for \sidecite{Garcia2016}.
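Under this reading, two gradual types are \kl(grad){consistent} when the sets of static types they stand for overlap. The following examples are standard in the gradual-typing literature (the symbol $\cong$ for consistency is an assumed notation, not necessarily the manuscript's):

```latex
\[
\? \cong \Nat,
\qquad
\? \to \Nat \;\cong\; \Bool \to \?,
\qquad
\Nat \not\cong \Bool
\]
```

The middle pair is consistent because both sides can denote, say, $\Nat \to \Nat$, whereas no static type witnesses the last pair.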
@@ -438,7 +439,7 @@ \subsection{Dynamic semantics}
a gradual language must detect inconsistencies at runtime if it is to satisfy \kl{safety},
which therefore has to be formulated in a way that encompasses runtime errors.

For instance, if the function $f$ above returns $\false$,
\AP For instance, if the function $f$ above returns $\false$,
then an \kl{error} must be raised to avoid reducing to $\false + 1$ – a closed stuck term,
corresponding to a violation of safety.
The traditional approach to do so is to avoid giving a direct reduction semantics to gradual
@@ -460,7 +461,8 @@ \subsection{Dynamic semantics}
There are many ways to realize each of these approaches,
which vary in terms of efficiency and eagerness of checking \sidecite{Herman2010,TobinHochstadt2008,Siek2010,Siek2009,Toro2020,BanadosSchwerter2021}.

\subsection{Conservativity} A first important property of a gradual language is that it is a
\subsection{Conservativity}
\AP A first important property of a gradual language is that it is a
\reintro{conservative extension} of a related static typing discipline:
the gradual and static systems should coincide on static terms.
This property is hereafter called \reintro{Conservativity},
@@ -483,7 +485,7 @@ \subsection{Conservativity} A first important property of a gradual language is
\kl{CIC} \emph{as a logic} – which it clearly is not!

\subsection{Gradual guarantees}
The early accounts of gradual typing emphasized \kl(grad){consistency} as the central idea.
\AP The early accounts of gradual typing emphasized \kl(grad){consistency} as the central idea.
However, \sidetextcite{Siek2015} observed that this characterization left too many
possibilities for the impact of type information on program behaviour,
compared to what was originally intended \sidecite{Siek2006}.
@@ -507,10 +509,14 @@ \subsection{Gradual guarantees}

\AP The \kl{static gradual guarantee} (\intro{SGG})
ensures that imprecision does not break typeability:

\begin{minipage}{\textwidth}
\begin{property}[\intro{Static Gradual Guarantee}]
If $t \pre u$ and $\vdash t \ty T$, then $\vdash u \ty U$
for some $U$ such that $T \pre U$.
\end{property}
\end{property}
\end{minipage}

%
This \kl{SGG} captures the intuition that “sprinkling $\?$ over a term”
maintains its typeability. As such, the notion of \kl{precision} $\pre$ used to
@@ -579,8 +585,8 @@ \subsection{Graduality}
characterize the good dynamic behaviour of a gradual language:
the runtime checking mechanism used to define it, such as casting,
should only perform type-checking, and not otherwise affect behaviour.
%
Specifically, they mandate that precision gives rise

\AP Specifically, they mandate that precision gives rise
to \intro{embedding-projection pairs} (\reintro{ep-pairs}):
the cast induced by two types related by precision forms an adjunction,
which induces a retraction.
@@ -611,7 +617,7 @@ \subsection{Graduality}
equivalent} to the original one.

% A couple of additional observations need to be made here, as they will play a major role in the development that follows.
\AP These two approaches to characterizing gradual typing highlight
These two approaches to characterizing gradual typing highlight
the need to distinguish
\emph{syntactic} from \emph{semantic} notions of precision.
Indeed, with the usual syntactic \kl{precision} from \sidetextcite{Siek2015},
@@ -640,7 +646,7 @@ \subsection{Graduality}
But as we uncover later on, it turns out that in certain settings – and in particular dependent types – the \kl{embedding-projection} property imposes \emph{more}
desirable constraints on the behaviour of casts than the \kl{DGG} alone.

In regard of these two remarks, in what follows we use the term \intro{graduality}
\AP In regard of these two remarks, in what follows we use the term \intro{graduality}
for the \kl{DGG} established with respect to a notion of \kl{precision} which also
induces \kl{embedding-projection pairs}.

@@ -718,7 +724,7 @@ \subsection{Relaxing conversion}
The proper notion to relax in the gradual dependently-typed setting is therefore \kl{conversion},
not syntactic equality.

\sidetextcite{Garcia2016} give a general framework for gradual typing that explains how to relax any type predicate to account for imprecision:
\AP \sidetextcite{Garcia2016} give a general framework for gradual typing that explains how to relax any type predicate to account for imprecision:
for a binary type predicate $P$, its \intro{consistent lifting} $\mathop{\tilde{P}}(A,B)$ holds
if there exist static types $A'$ and $B'$ in the denotation%
\sidenote{Concretization, in abstract interpretation parlance.}
@@ -805,8 +811,8 @@ \subsection{Observational refinement}
The situation in the dependent setting is however more
complicated if the theory admits divergence.%
\sidenote{
There exist non-gradual dependently-typed programming languages that admit divergence, \eg \kl{Dependent Haskell} \cite{Eisenberg2016},
\kl{Idris} \sidecite{Brady2013}.
There exist non-gradual dependently-typed programming languages that admit divergence, \eg \kl{Dependent Haskell} \cite{Eisenberg2016} or
\kl{Idris} \cite{Brady2013}.
We will also present one such theory in this article.
}%
\margincite{Eisenberg2016}%
@@ -891,22 +897,23 @@ \subsection{Gradualizing \kl(tit){STLC}}
We show that $\Omega$ is \emph{necessarily} a well-typed, diverging term in any
gradualization of \kl{STLC} that satisfies the other properties.

\begin{minipage}{\textwidth}
\pagebreak

\begin{marginfigure}
\includegraphics{Fire_triangle.pdf}
\caption{The Fire Triangle of Graduality}
\end{marginfigure}

\begin{theorem}[\reintro{Fire Triangle of Graduality} for \kl{STLC}]
\label{thm:triangle-STLC}

Suppose a gradual type theory that satisfies both \kl{conservativity} with respect to
\kl{STLC} and \kl{graduality}. Then it cannot be \kl{normalizing}.

\end{theorem}
\end{minipage}

\begin{marginfigure}[-4em]
\includegraphics{Fire_triangle.pdf}
\caption{The Fire Triangle of Graduality}
\end{marginfigure}

\begin{proof}

We pose $\Omega \coloneqq \delta\ (\asc{\delta}{\?})$ with
$\delta \coloneqq \l x : \?.\ (\asc{x}{\? \to \?})~x$
and show that it must necessarily be a well-typed, diverging term.
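Sketching why this $\Omega$ loops (with $\rightsquigarrow$ standing for reduction — an assumed notation — and using the fact, mandated by \kl{graduality}, that casting $\delta$ up to $\?$ and back down to $\? \to \?$ acts as a retraction and thus recovers $\delta$):

```latex
\[
\Omega
\;=\; \delta\ (\asc{\delta}{\?})
\;\rightsquigarrow\;
(\asc{(\asc{\delta}{\?})}{\? \to \?})\ (\asc{\delta}{\?})
\;\rightsquigarrow\;
\delta\ (\asc{\delta}{\?})
\;=\; \Omega
\]
```

The first step is plain $\beta$-reduction; the second is the retraction, after which the original term reappears.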
@@ -969,8 +976,8 @@ \subsection{Gradualizing \kl(tit){CIC}}

Additionally, \kl{graduality} can be specialized to the simply-typed fragment of the theory,
by setting the unknown type $\?$ to be $\?[\uni[0]]$.
Therefore, we can apply \cref{thm:triangle-STLC},
and we get a well-typed term that diverges, finishing the proof.
We can then apply \cref{thm:triangle-STLC},
and get a diverging well-typed term, finishing the proof.
\end{proof}

\subsection{The Fire Triangle in practice}
@@ -990,7 +997,8 @@ \subsection{The Fire Triangle in practice}
there are effectively-terminating programs with imprecise variants that yield termination
contract errors.

To date, the only related work that considers the gradualization of full dependent types with
\AP To date, the only related work that considers
the gradualization of full dependent types with
$\?$ as both a term and a type, is the work on \intro{GDTL} \sidecite{Eremondi2019}.
\kl{GDTL} is a programming language with a clear separation between the typing and execution
phases, like \kl{Idris} \sidecite{Brady2013}.
@@ -1034,7 +1042,7 @@ \subsection{Three in One}
\kl{GCIC}, with two parameters%
\sidenote{This system is precisely detailed in \cref{fig:ccic-ty}}.

The first parameter characterizes how the universe level of a Π-type is determined
\AP The first parameter characterizes how the universe level of a Π-type is determined
in typing rules: either as taking the \emph{maximum} of the levels of the involved
types – as in standard \kl{CIC} – or as the \emph{successor} of that maximum.
The latter option yields a variant of \kl{CIC} that we call \intro{CICs} – read “\kl{CIC}-shift”.
@@ -1057,7 +1065,7 @@ \subsection{Three in One}
\paragraph{… and three meaningful theories.}

Based on these parameters, we develop the following three variants of \kl{GCIC},
whose properties are summarized in \cref{tab:gcic}
whose properties are summarized in \cref{fig:gcic-summary}
% with pointers to the respective theorems
– because \kl{GCIC} is one common parametrized framework,
we are able to establish most properties for all variants at once.
@@ -1070,21 +1078,21 @@ \subsection{Three in One}
with \kl{errors} and \kl{unknown terms}, and replacing \kl{conversion} with
\kl(grad){consistency}. This results in a theory that is not normalizing.

Next, \intro{GCICs} satisfies both \kl{normalization} and \kl{graduality},
\AP Next, \intro{GCICs} satisfies both \kl{normalization} and \kl{graduality},
and supports \kl{conservativity}, but only with respect to \kl{CICs}.
This theory uses the universe hierarchy at the \emph{typing level} to detect and forbid
the potential non-termination induced by the use of \kl(grad){consistency}
instead of \kl{conversion}.

Finally, \intro{GCICT} satisfies both \kl{conservativity} with respect to \kl{CIC}
\AP Finally, \intro{GCICT} satisfies both \kl{conservativity} with respect to \kl{CIC}
and \kl{normalization}, but does not fully validate \kl{graduality}.
This theory uses the universe hierarchy at the \emph{computational level} to detect
potential divergence, eagerly raising errors.
Such runtime failures invalidate the \kl{DGG} for some terms,
and hence \kl{graduality}, as well as the \kl{SGG}, since in our dependent setting it depends
on the \kl{DGG}.

\begin{table*}[h]
\begin{figure*}[h]
\begin{tabular}{ccccccc}
& \kl{Safety} & \kl{Normalization} & \kl{Conservativity} wrt. & \kl{Graduality} & \kl{SGG} & \kl{DGG} \\
\kl{GCICP} \rule{0pt}{4ex}
@@ -1113,8 +1121,8 @@ \subsection{Three in One}
\end{tabular}\\

\caption{\kl{GCIC} variants and their properties}
\label{tab:gcic}
\end{table*}
\label{fig:gcic-summary}
\end{figure*}

\paragraph{Practical implications of \kl{GCIC} variants.}
Regarding our introductory examples, all three variants of \kl{GCIC}
@@ -1227,7 +1235,7 @@ \subsection{Typing, Conversion and Bidirectional Elaboration}
and the downcast (gain of precision) $\castrev{v}{\?}{\Nat}$ reduces successfully
if $v$ is such a tagged natural number, or to an error otherwise.

We follow a similar approach for \kl{GCIC}, which is
\AP We follow a similar approach for \kl{GCIC}, which is
elaborated in a type-directed manner to a second calculus,
named \reintro{CastCIC} (\cref{sec:cast-calculus}).
The interplay between typing and cast insertion is however more subtle in the
@@ -1240,7 +1248,8 @@ \subsection{Typing, Conversion and Bidirectional Elaboration}
\cite[Core language]{CoqManual}, where terms input by the user in the \kl{Gallina} language
are first elaborated in order to add implicit arguments, coercions, etc.
The computation steps required by conversion are
performed on the elaborated terms, never on the raw input syntax.}
performed on the elaborated terms, never on the raw input syntax.}%
\margincite{CoqManual}%

In order to satisfy \kl{conservativity} with respect to \kl{CIC}, ascriptions in \kl{GCIC}
are required to satisfy \kl(grad){consistency}. For instance, $\asc{\asc{\true}{\?}}{\Nat}$ is well-typed by \kl(grad){consistency} – used twice –, but $\asc{\true}{\Nat}$ is ill-typed.
@@ -1273,7 +1282,7 @@ \subsection{Precisions and Properties}
As explained earlier (\cref{sec:graduality}), we need three different notions of
precision to deal with \kl{SGG} and \kl{graduality}.

At the source level – \kl{GCIC} –,
\AP At the source level – \kl{GCIC} –,
we introduce a notion of \reintro{syntactic precision}, that captures the
intuition of a more imprecise term as "the same term with sub-terms and/or
type annotations replaced by $\?$", and is defined without any assumption of typing.
@@ -1296,7 +1305,7 @@ \subsection{Precisions and Properties}
for \kl{CastCIC} – in its variants \kl{CCICP} and \kl{CCICs} – on \kl{structural
precision}.

However, as explained in \cref{sec:grad-simple}, we cannot expect to prove \kl{graduality}
\AP However, as explained in \cref{sec:grad-simple}, we cannot expect to prove \kl{graduality}
for these \kl{CastCIC} variants with respect to \kl{structural precision} directly.
In order to overcome this problem, and to justify the design of \kl{CastCIC},
we build two kinds of models for \kl{CastCIC}. The first%