In the philosophical approach proposed by Dalla Pozza, Tarski's semantics for classical propositional logic is compatible with Heyting's interpretation of the intuitionistic connectives, provided the former applies to the logic of propositions and the latter is regarded as an informal model for the logic of judgement and of obligation. Namely, a language LP is proposed in which Heyting's interpretation holds for formulas with assertive pragmatic force, deontic pragmatic force is interpreted as a modal operator, and Tarski's semantics applies to radical formulas, i.e. those without a sign of pragmatic force. We present here a sequent calculus that formalizes reasoning with mixed assertive and deontic contexts: these are essential for formalizing fundamental schemes of legal reasoning such as conditional obligations ("if A is forbidden and it is assertible that A, then sanction B is obligatory"). The guiding principle of our formalization is Hume's law ("we cannot derive an `ought' from an `is', and vice versa"). Taking this principle seriously forces the use of a mixed relevant and ordinary intuitionistic consequence relation. Finally, we conclude that linear logic is the logic of pragmatic schemes in which the sign of pragmatic force is unspecified, such that the scheme remains valid under the widest range of substitutions of assertive or deontic signs of force for the unspecified ones.
We present a survey of results on graphs associated to formal proofs. From graph-theoretical properties we derive consequences that explain the expansion of proofs under cut-elimination, both for the propositional calculus and the predicate calculus. Linear, polynomial, exponential and super-exponential complexities arise from the topology of the graphs.
We start by briefly reviewing the `glue language' approach to the semantic interpretation of natural language. This uses an (implicational) fragment of linear logic to assemble the meanings of words and phrases in a syntactically analysed sentence. This raises an important, but neglected, proof search problem of relevance to computational linguistics.
From a given set of premises (representing lexical items), and with given conclusion in mind (representing an analysis of the full sentence), there is typically a multitude of distinct normal form derivations, each assigning a distinct proof/meaning term to the sentence. Many applications in computational linguistics require exhaustive proof search, where all alternative derivations are found, and as efficiently as possible.
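To fix ideas, here is a schematic glue-style type assignment of our own devising (the lexical items and types are illustrative, not taken from the talk): an intransitive verb phrase receives the implicational type $e \multimap t$, and an adverbial modifier receives the identity type for that complex formula, which is why it can be inserted at several points of a derivation, multiplying the normal forms:

```latex
\begin{align*}
  \textit{barks}  &: e \multimap t\\
  \textit{loudly} &: (e \multimap t) \multimap (e \multimap t)
    && \text{a modifier: an identity for } e \multimap t
\end{align*}
```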
In the case of glue language derivations, it turns out that all alternative derivations arise through different ways of introducing modifiers into a single skeleton derivation. In logical terms, modifiers correspond to identities for atomic or complex formulas. Given this feature of glue language derivations, the following three questions are addressed:
Embedding problems within a logic is an attractive way of formalizing them: the ground is known to be sound, there are usually many nice results available, etc. Unfortunately, in general, while the coding is sound (positive results for the initial problem are translated as theorems), it is far from being reciprocal (the logic gives false positives). A typical example of this phenomenon is the translation of lcc into linear logic by Paul Ruet: while in intuitionistic linear logic successes are observable (Saraswat and Lincoln), failures are not.
The same problem arises in other works, while other domains need more structural policies (typically linguistics). Each time, one realizes that a way out is to move to a system with a larger number of connectives, linked together via domain-specific rules. There is therefore a need for a generic study of hybrid systems which allows one to design a logical system suitable for a given problem.
Linear logic proves to be an excellent ground for such constructions. Guided by works of M. Abrusci, U. Reddy, Ph. de Groote and P. Ruet, the key point in superimposing several copies of linear logic (commutative or cyclic) in a single system is to design structured contexts, or blocks.
Then one may add any number of commutative and cyclic multiplicative families and still obtain a system with a natural phase semantics, together with soundness, completeness and cut-elimination results in both the classical and intuitionistic settings.
These families may communicate via several linkage rules, such as entropy or relax. Both orientations are valid, and suit different readings.
Intuitionistic systems prove to be much more robust to the addition of linkage rules. The only ``risk'' is to obtain equivalent families in the case of circular dependencies. Classical systems do not support linkage rules finer than \emph{entropy}: any finer rule, such as relax, makes the system collapse to linear logic.
Modalities are also accommodated nicely in this framework.
The structure of the objects employed in the study of natural language semantics has become increasingly complicated, in order to represent the items of information conveyed by utterances. This complexity becomes a source of trouble when we employ those theories in building linguistic applications such as speech translation systems. To understand better how programs operating on semantic representations work, we adopt a logical approach and present a monadic, multiplicative fragment of linear logic. In designing the fragment, we refine the multiplicative conjunction into both a commutative and a non-commutative connective. The commutative connective is used to glue together the set of formulae that jointly represent a semantic object. The non-commutative connective is used to glue together the list of formulae representing the argument structure of an object, where order matters. We also introduce into our fragment a Lambek slash, the directional implication, to concatenate the formula representing the predicative part of the object with the list of formulae representing the argument part. The monadic formulae encode each element of the argument part, representing its sort by the predicate and the element itself by the place-holder. The fragment enjoys the nice property of being decidable. To encode the contextual information involved in utterances, however, we extend the fragment with the exponential operator. Context is regarded as a resource available as many times as required, but not infinitely often. We encode items of context with the exponential operator, but ensure that the operator appears only in the antecedent. The extension keeps the fragment decidable, because proof search cannot fall into an endless search caused by the coupling of unlimited supply and unlimited consumption. We show that the fragment is rich enough to encode and transform the semantic objects employed in contemporary linguistic theories.
The result guarantees that the theories on natural language semantics can be implemented reasonably and safely on computers.
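As a rough illustration (ours, with invented notation: $\otimes$ for the commutative conjunction, $\bullet$ for the non-commutative one, and $/$ for the Lambek slash), a two-place semantic object such as like(x1, x2) might be encoded as a predicative part concatenated with its ordered argument list, glued to the monadic sort formulae of the arguments:

```latex
\bigl(\mathit{like} \,/\, (x_1 \bullet x_2)\bigr)
\;\otimes\; \mathit{man}(x_1) \;\otimes\; \mathit{woman}(x_2)
```

Here the Lambek slash fixes the predicate-argument concatenation, $\bullet$ keeps the arguments in order, and $\otimes$ collects the sort information, whose relative order is irrelevant.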
In earlier work we gave a characterisation of theoremhood in terms of the Connection Method for the multiplicative fragments of Affine (Contraction-Free) and Linear Logic. Such matrix characterisations constitute a basis for the adaptation of classical proof search procedures to the logics just mentioned.
Based on this work, computationally advantageous variants of the previously developed matrix characterisations are established. Instead of the traditional complementarity of (horizontal) paths, they hinge on the total connectedness of a sufficient number of virtual columns (vertical paths) in a matrix (non-purity). This yields a new view of the proof search process: instead of having to examine all possible extension steps for a not-yet-complementary partial path, we only have to `complete' not-yet-totally-connected virtual columns. This means fewer possibilities, and thus prunes away parts of the search space.
Thus we derive a pruning technique for proof search in acyclic (and linear) matrix graphs, which can be used for multiplicative Affine Logic. Extending proof search to multiplicative Linear Logic, we show that the additionally required minimality of spanning sets is automatically assured by depth-first search, and that the necessary connectedness of all literals can be tested incrementally.
We review the notions of Lambek Categorial Grammars, of Montague Grammars, and of Type Logical Grammars [3]. Then we propose a general framework, inspired by these different formalisms. In this new framework, an abstract lexicon is simply a higher-order linear signature, and a concrete view of such a lexicon is a realization of the signature by means of linear lambda-terms. This scheme allows several views of a same lexicon to be defined: typically, a syntactic view and a semantic view. We argue that the resulting formalism is well-suited for an implementation.
This work has the purpose of describing the multiplicative quantifiers that arise from infinitary versions of the tensor and the par connectives. Blass has made the observation that an infinitary version of the par connective would need a generalized form of sequent and its proof theory would be quite hard. In this talk we present:
This talk draws parallels between a number of logical and grammatical formalisms, with a view to adapting ideas and methods available for one approach to some of the others. The approach of categorial grammar is firstly introduced, as well as a specific categorial system, the (product-free) Lambek calculus. Then, the relationship between the Lambek calculus and the implicational fragment of linear logic is discussed, and it is shown how the latter can be used as a basis for implementing the former by use of the labelled deduction methodology of Gabbay. We then discuss the relationship between the Lambek calculus and grammatical representations which employ tree descriptions. This leads to a demonstration that Lambek deductions can be compiled to a variant of the formalism Multiset-valued Linear Indexed Grammar, in which categories are augmented with labels that capture the Lambek system's ordering constraints. This compilation allows for an efficient deduction method which is related to Earley's top-down, predictive, chart parsing algorithm. This approach is readily adapted to provide an efficient tabular deduction method for implicational linear logic.
We show how linear typing can be used to write functions which modify their heap allocated arguments in a functional way without explicit ``pointer surgery.''
We present this both as a ``design pattern'' for writing C-code in a functional style and as a compilation process from linearly typed first-order functional programs into malloc-free C code.
The main technical result is the correctness of this compilation.
We discuss various extensions, in particular an extension with FIFO queues admitting constant time catenation and enqueuing, and an extension of the type system to fully-fledged intuitionistic linear logic.
While several toy examples, such as those in the ``blocks world'' or concerning cigarettes and chocolates, have been used to demonstrate the applicability of Linear Logic to capturing the concept of resources in the real world, linear and other substructural logics have not yet been successfully used to model temporal properties, which are necessary for the verification of computing systems (software and hardware). Traditionally, such systems are specified using variants of temporal logics, which capture many of the notions of behaviour over a possibly infinite computation; what they are not ideal for, however, is describing and reasoning about each step of the computation.
Linear logic can be seen to model single computation steps directly: it is often claimed that Linear Logic captures the notion of state, and a single step in a computation is a change of state. If a computation step {\em consumes} certain properties of the state and generates new ones, it can be represented by a linear formula P1(s) -o P2(s), where s is the representation of the state and P1 and P2 are predicates characterizing the state before and after the step. Programs consist of actions, which can be either single commands represented as above or structuring constructs. We can represent a choice between two actions (a generalized version of if-then-else) by the linear par.
Loops play an important role in computations. The simplest model of a loop is an action that can be repeated (potentially infinitely) many times until some condition is met. Using linear constructs, this can be written simply with an exponential around an action; of course the condition needs to be coded in.
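In this reading, and with illustrative predicate names of our own (Inv for a loop invariant, Done for the exit condition, Post for the resulting state property), a single step, a repeatable loop body and a loop exit might be rendered as:

```latex
\begin{align*}
  \text{single step} &:\quad P_1(s) \multimap P_2(s)\\
  \text{loop body}   &:\quad \mathord{!}\bigl(\mathit{Inv}(s) \multimap \mathit{Inv}(s')\bigr)\\
  \text{loop exit}   &:\quad \mathit{Inv}(s) \otimes \mathit{Done}(s) \multimap \mathit{Post}(s)
\end{align*}
```

The exponential makes the loop body available arbitrarily many times, while the exit formula encodes the condition under which the repetition stops.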
There are still many issues to be addressed - such as how to represent concurrency, or data scoping, or non-determinism, or which variant of linear/substructural logic is ideal for this kind of use, from the semantical point of view. It is hoped that this presentation will fuel further discussion and result in collaboration between logic developers and an applications community keen on exploring new avenues to facilitate formal verification.
This talk consists of three parts. The first part reviews the use of substructural logics in formal approaches to natural language grammar and semantics. In particular, labelled deductive systems, with pairs < label, formula >, are considered as a means to model substructural logics. By employing different algebras over the labels that guide the inference of the formulas they are attached to, a whole landscape of substructural logics arises, spanning from the non-associative Lambek calculus NL (weakest) to the Lambek-van Benthem calculus (strongest). The latter corresponds to multiplicative linear logic (MLL).
In the second part of the talk a tabular deduction method is presented. Its aim is to provide an efficient means of processing with grammars that are modelled as LDS-style substructural logics. The method is based on one proposed by Hepple (see also the talks by Hepple and Crouch), and extends it by including means for handling directional implications, structural rules and multiple modalities. Finally, in the third part we raise several questions regarding various formal characteristics we would like to establish for complex systems in which substructural logics of various strengths coexist and interact.
This talk is the first announcement of a research program.
There are many areas of computer science that need a ``theory of spaces whose points are little spaces themselves". For example, in formal language theory (both as a discipline by itself and as an aid to linguistics), the individual words/sentences generated by a formal language are sets-with-structure. In this case the structure is very simple, being only a finite total ordering whose elements are marked by atomic symbols. But this structure, in addition to being variable from word to word (the vectors vary in length), is naturally endowed with a strong spatial character (words are definitely one-dimensional spaces). At the same time, the set of all such words/sentences generated by a formal language has a strong intuitive, though hard to formalize, notion of "neighborhood" attached to it. Two sentences may be related by a simple substitution of words, or an active-passive transformation, which shows they are more closely related than two random samples. For another example, replace the space of all sentences by the space of all their parsing trees. Here, the main difference is that the spatial character of the "little spaces" is not simply linear anymore, but tree-like.
So we have identified two levels of "spaceness", "big" and "small", the former serving as domain of variation, in the sense of Lawvere, for the latter. It turns out that once it is recognized, this situation appears in many areas of computer science and applied mathematics: concurrency theory (a process is a big space and its little spaces are its states), statistical learning theory, knowledge representation, rewriting theory, population biology...
We will present a general theory of such spaces. It is informed by two paradigms, which have to be adapted to fit together. One is the Grothendieck-Lawvere theory of toposes, with its connection both to geometry and to model theory. The second one is linear logic: the operations that generate and split little spaces will be seen as generalized multiplicative connectors of linear logic, while the structure that unites all the little spaces into a big one proceeds from the additive fragment of linear logic. The natural "cement" between these two paradigms will be seen to be a class of theories in linear universal algebra, which can be seen as a "general theory of little spaces", and at the same time, as a theory of contexts for a multi-modal multiplicative fragment. A common slogan in proof theory is that "a logic is generated by its theory of contexts". The extension of a theory of contexts to a full logic can be seen roughly as the addition of the negative connectors (par and with) to the purely positive fragment (tensor and plus). This adds the important property of *functionality* to the theory of spaces, which can be already observed in the simplest possible example, that of MALL.
Type Logical Grammar is a theory of categorial grammar which defines the relation of syntactic formation logically, by an interpretation of categorial types, rather than by stipulation of a formal system. The logic is generally some variety of non-commutative, probably intuitionistic, linear logic, paradigmatically Lambek's Syntactic Calculus. Since derivations are proofs, computational concerns raise the question: what is the essential structure of proofs? To which a powerful answer is: proof nets, leading to the slogan `proof nets as syntactic structures of language'.
We have introduced methods based on proof nets addressing four aspects of categorial processing: tabularization of Lambek proof nets, type logical generation, preevaluation of lexico-syntactic interaction, and incremental, best-first resolution of local and global ambiguity. The first and second are not presented here. The third observes that if syntactic derivation and lexical semantics are represented uniformly as proof nets, the substitution of lexical semantics into derivational semantics, usually postevaluated by on-line lambda conversion, can be preevaluated by off-line lexical Cut-elimination, `getting rid of lambda conversion once and for all'. The fourth observes that a wide range of psycholinguistic processing phenomena are explained on the plausible assumption that incremental processing resolves axiom link dependencies as soon as possible.
We consider the following decision problems:
I describe a ``spatial" reading of the logic BI of bunched implications, which derives from Reynolds's possible world semantics of imperative programming. In this view, BI's implications and conjunctions get the following readings:
$A -* B |- A -> B$ is not derivable
$A |- (A -> A -> B) -> B$ is derivable
The number-of-uses interpretation would answer the opposite for these judgements; for example, in the second the assumption $A$ is used twice. We conclude that BI does not support, in an evident way, a uses interpretation. But the spatial reading provides a view of the connectives that explains why these judgements are the way they are.
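A sketch of why the second judgement is derivable, assuming the usual bunch rules of BI (the additive bunch-former ``;'' admits weakening and contraction, while the multiplicative ``,'' does not):

```latex
\begin{align*}
  &A\,;\,A\,;\,(A \to A \to B) \;\vdash\; B
    &&\text{two applications of } {\to}L \text{ with } A \vdash A\\
  &A\,;\,(A \to A \to B) \;\vdash\; B
    &&\text{contraction on the additive bunch}\\
  &A \;\vdash\; (A \to A \to B) \to B
    &&{\to}R
\end{align*}
```

The contraction step is exactly where the assumption $A$ gets used twice, which is why the additive implication $\to$ cannot carry a number-of-uses reading.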
The aim of this paper is to give a brief presentation of some connections that can be established between translations and normalization procedures. Special attention will be given to translations from classical linear logic into intuitionistic linear logic.
The double-negation family of translations (Gentzen, Goedel, Kolmogorov) can be easily justified, from the proof-theoretical point of view, by the normalization strategy used by Prawitz. We can also prove that Seldin's normalization procedure gives rise to a new translation from classical logic into intuitionistic logic. When we consider classical linear logic, we can prove that:
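For reference, the Gödel-Gentzen negative translation $(\cdot)^g$ of classical into intuitionistic logic is defined by:

```latex
\begin{align*}
  P^g &= \lnot\lnot P \quad (P \text{ atomic}) &
  (A \land B)^g &= A^g \land B^g\\
  (A \to B)^g &= A^g \to B^g &
  (A \lor B)^g &= \lnot(\lnot A^g \land \lnot B^g)\\
  (\forall x\, A)^g &= \forall x\, A^g &
  (\exists x\, A)^g &= \lnot \forall x\, \lnot A^g
\end{align*}
```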
We trace the evolution of two systems: Syntactic Control of Interference (SCI), due to Reynolds, and Intuitionistic Linear Logic (ILL), due to Girard. SCI has a clear resource interpretation, where functions take arguments with which they share no resources. ILL, on the other hand, has a clear process interpretation, where tensor means independent processes and linear function space means processes that involve communication between the argument and the result. The BCI fragment of ILL does admit an emergent interpretation in terms of resources, but this interpretation does not seem to extend to ``!''. The author's Linear Logic Model of State (LLMS) is an extension of ILL with additional connectives to represent processes evolving in time, and serves to model imperative programs with an SCI type regimen. Such processes inhabit structures called ``object spaces,'' which form a symmetric monoidal closed category. This category can be Yoneda-embedded into a functor category (Kripke model). This gives rise to a model of Bunched Implications (BI) with two closed structures: one, symmetric monoidal, to model SCI functions, and the other, cartesian, to model general imperative programming functions. All said and done, this evolution represents an excellent study in the interplay of resource and process interpretations of substructural logics.
This talk is meant to explain the growing interest in the connection between resource-logical theories of grammar and the minimalist grammars of the transformational tradition in syntax. Although there are substantial differences that can be debated, the prospects also look good for identifying a valuable common ground. In particular, the rich descriptive tradition of transformational theory may become more accessible to resource-logical frameworks, and the frameworks may stimulate a more sophisticated understanding of the mechanisms of minimalist grammar, in particular from a computational viewpoint (parsing and learning algorithms). Linear logic is an appealing framework for a logical treatment of minimalist grammars; indeed, linear logic is a neat and well-studied logic from the proof-theoretical perspective, able to handle both resource logics for syntax (like the Lambek calculus) and logics for semantics (like the intuitionistic proofs, depicting formulae of the usual predicate calculus, used in Montague semantics).
I will firstly survey the generative perspective on language; Pollock presents various stages of the Chomskyan theories. I will insist on Chomsky's minimalist program for linguistics, which comes closer to Lambek's categorial grammars:
Resource control in functional programming was an early application of linear logic. The idea was that as linear variables are used exactly once, objects of a linear type should have exactly one pointer to them. Hence update-in-place for linear objects should be possible. This turned out to be problematic, as was first observed by Gunter et al. I present in this talk a different approach. I describe a calculus which separates language constructs for resource control from language constructs for manipulating data. In this way we obtain a calculus which describes update-in-place. Linearity plays only a limited role in this calculus: if all the operations on a given type are destructive, it follows that variables of this type are linear. If a type also admits non-destructive operations (e.g. arrays), linearity is not a necessary condition for update-in-place. In general, with higher-order functions, linearity is not a sufficient condition for update-in-place either.
This is joint work with Valeria de Paiva.
We draw attention to a number of constructions which lie behind many concrete models for linear logic; we develop an abstract context for these and describe their general theory. Using these constructions we give a model of classical linear logic based on an abstract notion of game. We derive this not from a category with built-in computational content but from the simple category of sets and relations. To demonstrate the computational content of the resulting model we make comparisons at each stage of the construction with a standard very simple notion of game. Our model provides motivation for a less familiar category of games (played on directed graphs) which is closely reflected by our notion of abstract game. We briefly indicate a number of variations on this theme and sketch how the abstract concept of game may be refined further.
Communications protocols are significant elements of today's electronic communications infrastructure. The ability to formally specify these protocols, and to verify their properties, is essential. Many formalisms have been used: some are more suited to specification, others to verification. This talk advances Linear Logic as a formalism suited to both the specification and the verification of protocols. The TCP/IP protocol is taken as an example. A Linear Logic specification of the interface to the IP and TCP layers is presented, as well as a specification of the data transfer portion of the protocol. Properties of the IP layer are proved. The composition of the specifications of the IP layer interface and the data transfer protocol is shown to have the same properties as the TCP layer interface, namely that data is transferred as a stream of octets. The approach leads to relatively simple specifications that enable verification. However, this work has indicated that the addition of temporal reasoning to Linear Logic would be a great advantage.
Page maintained by E.Ritter@cs.bham.ac.uk