Do-Form: Submissions to Stage 1

The following papers have been submitted to stage 1 (problem & tool = “nail & hammer” descriptions) and accepted. The paragraphs below – one per paper – summarise each paper's nail/hammer aspects and advise authors planning to submit to stage 2 on how to match them.

Revised versions of these papers have been published in the symposium proceedings.


Submission 1 (nail)

SAsSy – Scrutable Autonomous Systems (Nava Tintarev, Nir Oren, Roman Kutlak, Matt Green, Judith Masthoff, Kees van Deemter and Wamberto Vasconcelos): stage 1, stage 2

Failure to understand complex (e.g. distributed) autonomous systems may lead to unrealistic expectations or a lack of trust (e.g. in dangerous environments), and thus limits their adoption. The authors discuss the importance of making such systems scrutable, i.e. adding a mechanism that can explain the available actions and the decisions taken, so that the system can communicate with humans understandably, at a level that lets users cope with potentially large amounts of data. Making autonomous systems scrutable rests on four foundations:

  1. a representation for planning
  2. human-understandable reasoning mechanisms
  3. translating logical statements into natural language
  4. techniques that enable the user to cope with a deluge of information

Each of them requires formalisation and has its own unique challenges, as does the integration of all of these into a single system.

The SAsSy project aims at developing a hammer for this nail, but is at an early stage and could therefore (re)use existing hammers. Respondents to this work can contribute at two different ends: non-trivial applications, or contributions to the individual parts (1–4). The authors have so far identified the following possible hammers and studied the relevant literature (a toy sketch of how some of these might fit together follows the list):

  • for (1): knowledge acquisition
  • for (1) and (2): argumentation theory
  • for (2) and (3): natural language generation
  • for (4): diagrams, user modelling, user evaluation
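
To give a flavour of how some of these hammers might fit together, here is a minimal, self-contained Python sketch (not part of SAsSy, and with all argument names invented): a toy abstract argumentation framework is evaluated under grounded semantics, and the accepted arguments are then verbalised in crude natural language, touching on the argumentation (1 and 2) and natural language generation (2 and 3) hammers.

    # Minimal sketch (not part of SAsSy): an abstract argumentation framework with a
    # grounded-semantics evaluation, plus a naive natural-language rendering of the result.
    # The argument names and attack relation below are invented for illustration.

    def grounded_extension(arguments, attacks):
        """Compute the grounded extension: iteratively accept every argument
        whose attackers have all been defeated by already-accepted arguments."""
        accepted, defeated = set(), set()
        changed = True
        while changed:
            changed = False
            for a in arguments:
                if a in accepted or a in defeated:
                    continue
                attackers = {x for (x, y) in attacks if y == a}
                if attackers <= defeated:          # all attackers are out, so accept a
                    accepted.add(a)
                    defeated |= {y for (x, y) in attacks if x == a}
                    changed = True
        return accepted

    def verbalise(accepted, attacks):
        """Very crude natural language generation: one sentence per accepted argument."""
        lines = []
        for a in sorted(accepted):
            countered = sorted(y for (x, y) in attacks if x == a)
            if countered:
                lines.append(f"'{a}' is justified; it overrules {', '.join(repr(c) for c in countered)}.")
            else:
                lines.append(f"'{a}' is justified and unchallenged.")
        return "\n".join(lines)

    # Toy example: plan A is attacked by a safety concern, which is itself rebutted.
    arguments = {"use plan A", "plan A is unsafe", "risk is mitigated"}
    attacks = {("plan A is unsafe", "use plan A"), ("risk is mitigated", "plan A is unsafe")}
    print(verbalise(grounded_extension(arguments, attacks), attacks))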

Submission 2 (nail and hammer)

Transparency of Environmental Computer Models (Martine de Vos): stage 1, stage 2

The environment is hard to model: long-term processes, complex mechanisms, high variability, incompletely understood phenomena, and a lack of empirical data. Current computer models of the environment, as well as the process of their development, lack transparency: what is incorporated in the model and what is not, and how trustworthy are the results? This restricts their applicability and reusability by outsiders/non-developers. The following is needed to make them more transparent:

  1. make their underlying conceptual model explicit
  2. encourage model developers (i.e. domain experts) to do (1), using formalisation
  3. provide incentives that encourage them to do so

The following previously existing solutions are not yet completely sufficient:

  • teaching best practice to modellers (who often do not see an urgent need for transparency themselves, even though society does)
  • high-level annotation of scientific models and workflows, and provenance annotation of scientific data

The author presents the approach of explicitly describing the conceptual models themselves, i.e. their concepts and relations, as ontologies (a small illustration follows the list below), and using these ontologies to

  • facilitate reviews of the model
  • communicate the model and its underlying scientific knowledge
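
As a concrete (and entirely invented) illustration of this approach, the following Python sketch encodes a handful of concepts and relations of a hypothetical catchment model as a small OWL ontology, assuming the rdflib library is available; the class and property names are made up for this example and are not taken from the author's models.

    # A minimal sketch (not the author's actual models): encoding a few concepts and
    # relations of an environmental conceptual model as an OWL ontology with rdflib,
    # so that the model's assumptions can be reviewed and communicated explicitly.
    # All class and property names below are invented for illustration.
    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/catchment-model#")
    g = Graph()
    g.bind("ex", EX)
    g.bind("owl", OWL)

    # Concepts of the conceptual model
    for cls in ("Catchment", "RainfallEvent", "SoilLayer", "Runoff"):
        g.add((EX[cls], RDF.type, OWL.Class))

    # Relations between concepts
    g.add((EX.receives, RDF.type, OWL.ObjectProperty))
    g.add((EX.receives, RDFS.domain, EX.Catchment))
    g.add((EX.receives, RDFS.range, EX.RainfallEvent))

    g.add((EX.produces, RDF.type, OWL.ObjectProperty))
    g.add((EX.produces, RDFS.domain, EX.Catchment))
    g.add((EX.produces, RDFS.range, EX.Runoff))

    print(g.serialize(format="turtle"))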

Respondents to this work can help to address requirements (1–3), or apply ontologies as a means of review and communication in other application settings.

Submission 3 (hammer)

A Vision of Collaborative Verification-Driven Engineering of Hybrid Systems (Stefan Mitsch, Grant Olney Passmore and Andre Platzer): stage 1, stage 2

Hybrid systems, with both discrete and continuous dynamics, are an important model for real-world physical systems. The key challenge is how to ensure their correct functioning w.r.t. safety requirements. Promising techniques to ensure safety seem to be model-driven engineering, to develop hybrid systems in a well-defined and traceable manner, and formal verification, to prove their correctness; together these form the vision of verification-driven engineering. Despite the remarkable progress in automating formal verification of hybrid systems, the construction of proofs for complex systems often requires significant human guidance, since hybrid systems verification tools work over an undecidable theory and cover different fields of mathematics. It is thus not uncommon for verification teams to consist of many players with diverse expertise (e.g. in several fields of mathematics, verification, machine-supported theorem proving, etc.). This paper presents a verification-driven toolset for collaboratively engineering large-scale hybrid systems, which supports

  • modeling hybrid systems
  • exchanging and comparing models and proofs, and
  • managing verification tasks

Particular strengths include

  • decomposition of hybrid systems (so that they can be verified by verifying properties of their subsystems),
  • efficiently solving multivariate polynomial inequalities (by integrating existing provers; work in progress; see the sketch after this list),
  • collaborative development of models and proofs (in a generic language and in domain-specific ones)
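
To illustrate the second strength, here is a small example (not the authors' toolset) of the kind of multivariate polynomial inequality that arises in hybrid-systems proofs, discharged by handing it to an off-the-shelf prover, in this case the Z3 SMT solver via its Python bindings (the z3-solver package); the inequality itself is a textbook one chosen purely for illustration.

    # A small illustration (not the authors' toolset) of the kind of multivariate
    # polynomial question that arises in hybrid-systems proofs, discharged here with
    # an off-the-shelf prover: the Z3 SMT solver via its Python bindings (z3-solver).
    from z3 import Reals, Solver, unsat

    x, y = Reals("x y")

    # Claim: for all reals x, y:  x**2 + y**2 >= 2*x*y   (equivalent to (x - y)**2 >= 0).
    # We prove it by showing that its negation has no real solution.
    s = Solver()
    s.add(x * x + y * y < 2 * x * y)

    if s.check() == unsat:
        print("Inequality holds for all real x, y.")
    else:
        print("Counterexample:", s.model())

In a collaborative setting, such side conditions could be farmed out to whichever integrated prover handles them best.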

Respondents to this work can offer new application domains.

Submission 4 (hammer)

Interacting with Ontologies and Linked Data through Controlled Natural Languages and Dialogues (Ronald Denaux, Vania Dimitrova and Anthony Cohn): stage 1, stage 2

Ontologies play an important role in many different application areas. Although the languages in which they can be specified look superficially simple, domain experts have problems applying them appropriately. This paper presents the following steps to enable domain experts to build ontologies:

  1. Rabbit, a controlled natural language frontend for OWL, which has been successfully tested with domain experts in cartography
  2. ROO, an editor that helps domain experts to avoid modelling mistakes by guiding them through the ontology authoring process
  3. a facility that enriches this editor with interactive feedback on the logical consequences of a new fact to be added to an ontology, i.e. whether the fact:
    • is already in the ontology,
    • can be derived from it,
    • contradicts it, or
    • is new (and, if so, whether anything follows from it).
    So far this makes authors aware of problems, but does not yet really help to resolve them (a toy illustration of these feedback categories follows the list)
  4. a dialogue-based user interface for giving more differentiated feedback while the author is building an ontology (but useful beyond ontology authoring)
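
As a toy illustration of those feedback categories (and emphatically not the actual implementation of ROO or its feedback facility), the following Python sketch classifies a candidate fact as already asserted, derivable, contradictory, or new; plain strings and hand-written rules stand in for OWL axioms and a description-logic reasoner, and all facts are invented.

    # A toy illustration (not ROO's actual implementation) of the four kinds of feedback
    # an ontology editor can give about a candidate fact: already asserted, derivable,
    # contradictory, or genuinely new. Facts and rules here are simple strings/pairs,
    # standing in for OWL axioms and a description-logic reasoner.

    def closure(facts, rules):
        """Forward-chain simple rules of the form (premise, conclusion)."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in rules:
                if premise in derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    def feedback(fact, asserted, rules, disjoint):
        derived = closure(asserted, rules)
        if fact in asserted:
            return "already asserted in the ontology"
        if fact in derived:
            return "follows from the ontology"
        if any((fact == a and b in derived) or (fact == b and a in derived)
               for a, b in disjoint):
            return "contradicts the ontology"
        return "new (consider what should follow from it)"

    asserted = {"River is a Waterbody"}
    rules = [("River is a Waterbody", "River is a GeographicFeature")]
    disjoint = [("River is a Road", "River is a Waterbody")]  # cannot be both

    for fact in ["River is a Waterbody", "River is a GeographicFeature",
                 "River is a Road", "River has a Name"]:
        print(f"{fact!r}: {feedback(fact, asserted, rules, disjoint)}")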

Respondents to this work could evaluate whether the tools presented here are really independent of the application domain, or try to scale the techniques to more expressive logics.

Submission 5 (hammer)

Explaining the Outcome of Knowledge-Based Systems; a discussion-based approach (Martin Caminada and Mikołaj Podlaszewski): stage 1, stage 2

Nonmonotonic reasoning is a framework for analysing the construction of arguments based on rules of thumb that are subject to exceptions. This submission proposes a discussion-based approach for explaining the outcome of knowledge-based systems that use nonmonotonic (defeasible) inference. An interactive Python-based frontend is available, which aids in the process of constructing arguments. A key feature is the ability not only to establish an argument, but also to give an explanation of its justification that is (one hopes) suitable for human users to digest.
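
To make the flavour of such explanations concrete, here is a minimal Python sketch (not the authors' system) of defeasible inference over the classic birds-and-penguins example, where a general rule is defeated by an exception and the justification is printed as a short question-and-answer dialogue; all rules and facts are invented for illustration.

    # A minimal sketch (not the authors' system) of defeasible inference with a
    # dialogue-style explanation: facts, defeasible rules, and exceptions that
    # defeat them, printed as a short question-and-answer justification.

    # Defeasible rules: (name, premise, conclusion); exceptions list what defeats a rule.
    rules = [
        ("r1", "bird", "flies"),
        ("r2", "penguin", "bird"),
    ]
    exceptions = {
        "r1": ["penguin"],   # r1 is defeated when the subject is a penguin
    }

    def explain(facts, query):
        """Answer whether `query` defeasibly follows from `facts`, as a mini-dialogue."""
        dialogue = [f"Q: does '{query}' hold?"]
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for name, premise, conclusion in rules:
                blocked = any(e in derived for e in exceptions.get(name, []))
                if premise in derived and not blocked and conclusion not in derived:
                    dialogue.append(f"A: '{conclusion}' because '{premise}' (rule {name}).")
                    derived.add(conclusion)
                    changed = True
        # Report rules that would have yielded the query but were defeated.
        for name, premise, conclusion in rules:
            if conclusion == query and premise in derived:
                for e in exceptions.get(name, []):
                    if e in derived:
                        dialogue.append(f"A: rule {name} would give '{conclusion}', "
                                        f"but it is defeated by '{e}'.")
        dialogue.append(f"Verdict: '{query}' {'holds' if query in derived else 'does not hold'}.")
        return "\n".join(dialogue)

    print(explain({"penguin"}, "flies"))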

Respondents could apply this technique in their respective domains; it seems particularly promising in the social sciences.

Submission 6 (hammer and nail)

Developing an Auction Theory Toolbox (Manfred Kerber, Christoph Lange, Colin Rowat and Wolfgang Windsteiger): stage 1, stage 2

Auctions allocate trillions of dollars in goods and services every year. Auction design can have significant consequences, but its practice outstrips theory. The authors aim at advancing auction theory with help from mechanised reasoning. To that end they are developing a toolbox of formalised representations of key facts of auction theory, which will allow auction designers to have relevant properties of their auctions machine-checked. In the starting phase of this effort, they are investigating the suitability of different mechanised reasoning systems (Isabelle, Theorema, and TPTP) for reproducing a key result of auction theory: Vickrey's celebrated 1961 theorem on the properties of second price auctions – the foundational result in modern auction theory. Based on their formalisation experience, they give tentative recommendations on what system to use for what purpose in auction theory, and outline further steps towards a complete auction theory toolbox.
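
As an informal sanity check of the property behind Vickrey's theorem (and not a substitute for the authors' mechanised formalisation), the following Python sketch brute-forces small integer bid profiles in a second-price auction and confirms that truthful bidding is weakly dominant on every sampled profile; the grid size and tie-breaking rule are assumptions of this toy check.

    # An informal sanity check (not the authors' formalisation) of the property behind
    # Vickrey's theorem: in a second-price auction, bidding one's true valuation is
    # weakly dominant. We brute-force small bid profiles and confirm that no deviation
    # ever increases bidder 0's utility.
    from itertools import product

    def second_price(bids):
        """Return (winner index, price = highest losing bid); ties go to the lowest index."""
        winner = max(range(len(bids)), key=lambda i: (bids[i], -i))
        price = max(b for i, b in enumerate(bids) if i != winner)
        return winner, price

    def utility(i, valuation, bids):
        winner, price = second_price(bids)
        return valuation - price if winner == i else 0

    grid = range(5)                                   # small grid of integer valuations/bids
    for v0 in grid:                                   # bidder 0's true valuation
        for others in product(grid, repeat=2):        # the other two bidders' bids
            u_truth = utility(0, v0, [v0, *others])   # utility from bidding truthfully
            for deviation in grid:                    # every alternative bid for bidder 0
                assert utility(0, v0, [deviation, *others]) <= u_truth
    print("Truthful bidding was weakly dominant on every sampled profile.")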

Respondents could apply a similar toolbox building methodology in other domains, or could contribute additional mechanised reasoning know-how to the authors' auction theory toolbox building effort.

Submission 7 (nail)

Model Validation and Test Portfolios in Financial Regulation (Neels Vosloo): stage 1, stage 2

Modern finance relies on software, whether to price assets ('valuation models') or to compute portfolios' riskiness ('capital models'). Given the quantities of money involved, errors in financial software can lead to tens or hundreds of millions of dollars of losses. At the same time, model validation and checking is typically performed using traditional, manual methods. This constellation of observations has led to high-level concern about whether the modern financial system is overly reliant on computational models, whose possibilities for failure are not well understood or controlled (see the 2012 Beddington/Foresight report).

In the midst of this clearly very large domain, Vosloo outlines a tractable starting point for the consideration of formal methods/mechanised reasoning: how can regulators develop minimal 'test portfolios' capable of efficiently ensuring that capital models incorporate relevant risk factors, benchmarking those capital models against each other, and stress testing them under a variety of market conditions?
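
As a toy illustration of the test-portfolio idea (not a regulatory method, and with all numbers and model names invented), the following Python sketch revalues a portfolio that is deliberately dominated by one risk factor under two stylised capital models, one of which ignores that factor, and compares their historical-simulation value-at-risk; a model that barely responds to the portfolio's dominant factor would stand out immediately.

    # A toy illustration (not a regulatory method) of the 'test portfolio' idea:
    # a portfolio constructed to be sensitive to one risk factor is revalued under
    # two stylised capital models, and a model that ignores that factor is exposed.
    # All numbers and model names below are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    n_days = 1000
    equity_returns = rng.normal(0.0, 0.01, n_days)   # simulated daily risk-factor moves
    fx_returns = rng.normal(0.0, 0.008, n_days)

    # Test portfolio: deliberately dominated by FX exposure.
    exposure = {"equity": 1.0, "fx": 10.0}           # in millions

    def var_99(pnl):
        """Historical-simulation 99% value-at-risk (reported as a positive number)."""
        return -np.percentile(pnl, 1)

    models = {
        "model A (equity + fx)": exposure["equity"] * equity_returns + exposure["fx"] * fx_returns,
        "model B (equity only)": exposure["equity"] * equity_returns,  # ignores the fx factor
    }

    for name, pnl in models.items():
        print(f"{name}: 99% VaR = {var_99(pnl):.3f}m")

    # A regulator comparing these numbers on the same test portfolio would see that
    # model B barely responds to the portfolio's dominant risk factor.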

Respondents could outline how to apply verification techniques to this problem.
