Arrow Research search

Author name cluster

Stephen H. Muggleton

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

10 papers
2 author rows

Possible papers


IJCAI Conference 2020 · Conference Paper

Complete Bottom-Up Predicate Invention in Meta-Interpretive Learning

  • Céline Hocquette
  • Stephen H. Muggleton

Predicate Invention in Meta-Interpretive Learning (MIL) is generally based on a top-down approach, and the search for a consistent hypothesis is carried out starting from the positive examples as goals. We consider augmenting top-down MIL systems with a bottom-up step during which the background knowledge is generalised with an extension of the immediate consequence operator for second-order logic programs. This new method provides a way to perform extensive predicate invention useful for feature discovery. We demonstrate this method is complete with respect to a fragment of dyadic datalog. We theoretically prove this method reduces the number of clauses to be learned for the top-down learner, which in turn can reduce the sample complexity. We formalise an equivalence relation for predicates which is used to eliminate redundant predicates. Our experimental results suggest pairing the state-of-the-art MIL system Metagol with an initial bottom-up step can significantly improve learning performance.
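The bottom-up step described above extends the immediate consequence operator to second-order programs. As a toy illustration only (not the paper's extension), the standard first-order operator T_P can be sketched as a fixpoint computation over ground facts; all predicate and variable names below are hypothetical:

```python
def substitute(atom, theta):
    """Apply substitution theta to an atom's arguments."""
    pred, *args = atom
    return (pred, *[theta.get(a, a) for a in args])

def match(atom, fact, theta):
    """Extend theta so a body atom matches a ground fact.
    Variables are uppercase strings; return None on mismatch."""
    if atom[0] != fact[0] or len(atom) != len(fact):
        return None
    theta = dict(theta)
    for a, f in zip(atom[1:], fact[1:]):
        if a.isupper():
            if theta.setdefault(a, f) != f:
                return None
        elif a != f:
            return None
    return theta

def t_p(rules, facts):
    """One application of T_P: add every head whose body is satisfied."""
    out = set(facts)
    for head, body in rules:
        thetas = [{}]
        for atom in body:
            thetas = [t2 for t in thetas for f in facts
                      if (t2 := match(atom, f, t)) is not None]
        for theta in thetas:
            out.add(substitute(head, theta))
    return out

def fixpoint(rules, facts):
    """Iterate T_P until no new facts are derived (least fixpoint)."""
    prev, cur = None, frozenset(facts)
    while cur != prev:
        prev, cur = cur, frozenset(t_p(rules, cur))
    return cur

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}
rules = [(("anc", "X", "Y"), [("parent", "X", "Y")]),
         (("anc", "X", "Z"), [("anc", "X", "Y"), ("parent", "Y", "Z")])]
closure = fixpoint(rules, facts)
assert ("anc", "alice", "carol") in closure
```

The derived atoms (here, the ancestor closure) play the role that invented predicates play in the paper: facts generalised from the background knowledge that the top-down learner can then reuse.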

AAAI Conference 2020 · Conference Paper

Learning Higher-Order Programs through Predicate Invention

  • Andrew Cropper
  • Rolf Morel
  • Stephen H. Muggleton

A key feature of inductive logic programming (ILP) is its ability to learn first-order programs, which are intrinsically more expressive than propositional programs. In this paper, we introduce ILP techniques to learn higher-order programs. We implement our idea in Metagol_ho, an ILP system which can learn higher-order programs with higher-order predicate invention. Our experiments show that, compared to first-order programs, learning higher-order programs can significantly improve predictive accuracies and reduce learning times.

IJCAI Conference 2020 · Conference Paper

Turning 30: New Ideas in Inductive Logic Programming

  • Andrew Cropper
  • Sebastijan Dumančić
  • Stephen H. Muggleton

Common criticisms of state-of-the-art machine learning include poor generalisation, a lack of interpretability, and a need for large amounts of training data. We survey recent work in inductive logic programming (ILP), a form of machine learning that induces logic programs from data, which has shown promise at addressing these limitations. We focus on new methods for learning recursive programs that generalise from few examples, a shift from using hand-crafted background knowledge to learning background knowledge, and the use of different technologies, notably answer set programming and neural networks. As ILP approaches 30, we also discuss directions for future research.

IJCAI Conference 2016 · Conference Paper

Learning Higher-Order Logic Programs through Abstraction and Invention

  • Andrew Cropper
  • Stephen H. Muggleton

Many tasks in AI require the design of complex programs and representations, whether for programming robots, designing game-playing programs, or conducting textual or visual transformations. This paper explores a novel inductive logic programming approach to learn such programs from examples. To reduce the complexity of the learned programs, and thus the search for such a program, we introduce higher-order operations involving an alternation of Abstraction and Invention. Abstractions are described using logic program definitions containing higher-order predicate variables. Inventions involve the construction of definitions for the predicate variables used in the Abstractions. The use of Abstractions extends the Meta-Interpretive Learning framework and is supported by the use of a user-extendable set of higher-order operators, such as map, until, and ifthenelse. Using these operators reduces the textual complexity required to express target classes of programs. We provide sample complexity results which indicate that the approach leads to reductions in the numbers of examples required to reach high predictive accuracy, as well as significant reductions in overall learning time. Our experiments demonstrate increased accuracy and reduced learning times in all cases. We believe that this paper is the first in the literature to demonstrate the efficiency and accuracy advantages involved in the use of higher-order abstractions.
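The higher-order operators named in the abstract (map, until, ifthenelse) are logic-program abstractions; a rough functional sketch, with a placeholder standing in for an invented predicate (this is illustrative, not the MIL encoding), shows why they compress programs:

```python
def hmap(f, xs):
    """map(F, Xs, Ys): relate each X to F(X)."""
    return [f(x) for x in xs]

def until(cond, step, x):
    """until(Cond, Step, X, Y): apply Step to X until Cond holds."""
    while not cond(x):
        x = step(x)
    return x

# Stands in for an invented predicate: the learner constructs the
# definition bound to the predicate variable F in the abstraction.
invented_double = lambda x: 2 * x

assert hmap(invented_double, [1, 2, 3]) == [2, 4, 6]
assert until(lambda x: x >= 10, invented_double, 1) == 16
```

With the abstraction fixed, the learner only has to invent the small predicate passed to it, rather than re-derive the recursion over the list each time, which is the textual-complexity reduction the abstract describes.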

IJCAI Conference 2015 · Conference Paper

Learning Efficient Logical Robot Strategies Involving Composable Objects

  • Andrew Cropper
  • Stephen H. Muggleton

Most logic-based machine learning algorithms rely on an Occamist bias where textual complexity of hypotheses is minimised. Within Inductive Logic Programming (ILP), this approach fails to distinguish between the efficiencies of hypothesised programs, such as quick sort (O(n log n)) and bubble sort (O(n^2)). This paper addresses this issue by considering techniques to minimise both the textual complexity and resource complexity of hypothesised robot strategies. We develop a general framework for the problem of minimising resource complexity and show that on two robot strategy problems, 1) Postman 2) Sorter (recursively sort letters for delivery), the theoretical resource complexities of optimal strategies vary depending on whether objects can be composed within a strategy. The approach considered is an extension of Meta-Interpretive Learning (MIL), a recently developed paradigm in ILP which supports predicate invention and the learning of recursive logic programs. We introduce a new MIL implementation, Metagol_O, and prove its convergence, with increasing numbers of randomly chosen examples, to optimal strategies of this kind. Our experiments show that Metagol_O learns theoretically optimal robot sorting strategies, which is in agreement with the theoretical predictions showing a clear divergence in resource requirements as the number of objects grows. To the authors’ knowledge this paper is the first demonstration of a learning algorithm able to learn optimal resource complexity robot strategies and algorithms for sorting lists.
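The distinction the abstract draws between textual and resource complexity can be made concrete by counting the basic operations of two sorting strategies (an illustrative comparison, not Metagol_O itself):

```python
def bubble_sort(xs):
    """Sort a list, counting comparisons: O(n^2) in the worst case."""
    xs, ops = list(xs), 0
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            ops += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs, ops

def merge_sort(xs):
    """Sort a list, counting comparisons: O(n log n)."""
    if len(xs) <= 1:
        return list(xs), 0
    mid = len(xs) // 2
    left, lo = merge_sort(xs[:mid])
    right, ro = merge_sort(xs[mid:])
    merged, ops = [], lo + ro
    while left and right:
        ops += 1
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right, ops

data = list(range(100, 0, -1))          # worst case for bubble sort
_, bubble_ops = bubble_sort(data)
_, merge_ops = merge_sort(data)
assert bubble_ops > merge_ops           # the gap widens as n grows
```

Both programs are textually small, so a purely Occamist bias cannot separate them; only a resource-complexity measure like the one the paper minimises prefers the O(n log n) strategy.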

ECAI Conference 2014 · Conference Paper

Bias reformulation for one-shot function induction

  • Dianhuan Lin
  • Eyal Dechter
  • Kevin Ellis
  • Joshua B. Tenenbaum
  • Stephen H. Muggleton

In recent years predicate invention has been underexplored as a bias reformulation mechanism within Inductive Logic Programming due to difficulties in formulating efficient search mechanisms. However, recent papers on a new approach called Meta-Interpretive Learning have demonstrated that both predicate invention and learning recursive predicates can be efficiently implemented for various fragments of definite clause logic using a form of abduction within a meta-interpreter. This paper explores the effect of bias reformulation produced by Meta-Interpretive Learning on a series of Program Induction tasks involving string transformations. These tasks have real-world applications in the use of spreadsheet technology. The existing implementation of program induction in Microsoft's FlashFill (part of Excel 2013) already has strong performance on this problem, and performs one-shot learning, in which a simple transformation program is generated from a single example instance and applied to the remainder of the column in a spreadsheet. However, no existing technique has been demonstrated to improve learning performance over a series of tasks in the way humans do. In this paper we show how a functional variant of the recently developed Metagol_D system can be applied to this task. In experiments we study a regime of layered bias reformulation in which size-bounds of hypotheses are successively relaxed in each layer and learned programs re-use invented predicates from previous layers. Results indicate that this approach leads to consistent speed increases in learning, more compact definitions and consistently higher predictive accuracy over successive layers. Comparison to both FlashFill and human performance indicates that the new system, Metagol_DF, has performance approaching the skill level of both an existing commercial system and that of humans on one-shot learning over the same tasks. The induced programs are relatively easily read and understood by a human programmer.
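One-shot induction over strings can be sketched in miniature (a toy in the spirit of the task, not FlashFill's algorithm or the paper's system; the candidate programs are hypothetical): search a small program space for one consistent with the single example, then apply it to the rest of the column.

```python
# A tiny, hand-enumerated space of string-transformation programs.
CANDIDATES = [
    ("upper",      lambda s: s.upper()),
    ("first word", lambda s: s.split()[0]),
    ("last word",  lambda s: s.split()[-1]),
    ("initials",   lambda s: ".".join(w[0] for w in s.split()) + "."),
]

def induce(example_in, example_out):
    """Return the first candidate consistent with the one example."""
    for name, prog in CANDIDATES:
        if prog(example_in) == example_out:
            return name, prog
    return None

# One example instance, then application to the remainder of the column.
name, prog = induce("Stephen Muggleton", "S.M.")
column = ["Andrew Cropper", "Celine Hocquette"]
assert [prog(s) for s in column] == ["A.C.", "C.H."]
```

The layered regime the abstract describes would grow this candidate space across layers, letting later programs reuse inventions from earlier ones; here the space is fixed purely for illustration.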

AAAI Conference 2006 · Conference Paper

Towards Chemical Universal Turing Machines

  • Stephen H. Muggleton

Present developments in the natural sciences are providing enormous and challenging opportunities for various AI technologies to have an unprecedented impact in the broader scientific world. If taken up, such applications would not only stretch present AI technology to the limit, but if successful could also have a radical impact on the way natural science is conducted. We review our experience with the Robot Scientist and other Machine Learning applications as examples of such AI-inspired developments. We also consider potential future extensions of such work based on the use of Uncertainty Logics. As a generalisation of the robot scientist we introduce the notion of a Chemical Universal Turing machine. Such a machine would not only be capable of complex cell simulations, but could also be the basis for programmable chemical and biological experimentation robots.