Showing 1–13 of 13 results
Filtered by author: Timothy Lillicrap
  • A general reinforcement-learning algorithm, called Dreamer, outperforms specialized expert algorithms across diverse tasks by learning a model of the environment and improving its behaviour by imagining future scenarios (a toy sketch of this imagination step follows this entry).

    • Danijar Hafner
    • Jurgis Pasukonis
    • Timothy Lillicrap
    Research | Open Access
    Nature
    Volume: 640, Pages: 647–653
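
The imagination step that the Dreamer summary describes can be illustrated with a toy sketch. This is not the paper's algorithm (Dreamer learns a latent world model and trains an actor-critic by backpropagating through imagined latent trajectories); it only shows the core idea of improving behaviour by rolling candidate actions forward in a learned model. The dynamics, reward, horizon and candidate count below are all invented for illustration.

```python
# Toy sketch: choose actions by "imagining" rollouts in a stand-in
# world model, rather than by trial and error in the real environment.
import numpy as np

rng = np.random.default_rng(0)

def model_step(state, action):
    """Stand-in for a *learned* dynamics model: next state and reward."""
    next_state = 0.9 * state + action          # toy linear dynamics
    reward = -abs(next_state)                  # reward for staying near 0
    return next_state, reward

def imagined_return(state, actions, horizon, gamma=0.95):
    """Roll an action sequence forward in imagination, summing rewards."""
    total = 0.0
    for t in range(horizon):
        state, r = model_step(state, actions[t])
        total += gamma**t * r
    return total

def plan(state, horizon=10, n_candidates=256):
    """Pick the first action of the best imagined action sequence."""
    candidates = rng.uniform(-1, 1, size=(n_candidates, horizon))
    scores = [imagined_return(state, a, horizon) for a in candidates]
    return candidates[int(np.argmax(scores))][0]

state = 5.0
for _ in range(20):
    action = plan(state)
    state, _ = model_step(state, action)       # act in the "real" env
print(f"state after imagination-based control: {state:.3f}")
```

In the actual algorithm, the rollout happens in a learned latent space and gradients of the imagined return train the policy directly, rather than a search over candidate action sequences as in this toy.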
  • Multi-layered neural architectures that implement learning require elaborate mechanisms for symmetric backpropagation of errors, which are biologically implausible. Here the authors propose a simple resolution to this blame-assignment problem that works even when feedback is carried by random synaptic weights (a minimal sketch follows this entry).

    • Timothy P. Lillicrap
    • Daniel Cownden
    • Colin J. Akerman
    Research | Open Access
    Nature Communications
    Volume: 7, Pages: 1–10
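
A minimal sketch of the mechanism this summary describes, often called feedback alignment: the hidden layer's error signal is delivered through a fixed random matrix B rather than the transpose of the forward weights, removing the need for symmetric feedback pathways. The network sizes, learning rate and toy regression task are arbitrary choices for illustration, not the paper's setup.

```python
# Two-layer network trained with random feedback weights B in place of
# the transposed forward weights W2.T that backprop would require.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 4, 16, 2, 0.05

W1 = rng.normal(0, 0.5, (n_hid, n_in))    # forward weights, layer 1
W2 = rng.normal(0, 0.5, (n_out, n_hid))   # forward weights, layer 2
B  = rng.normal(0, 0.5, (n_hid, n_out))   # fixed random feedback weights

def step(x, y_target):
    global W1, W2
    h = np.tanh(W1 @ x)                    # hidden activity
    y = W2 @ h                             # linear output
    e = y - y_target                       # output error
    # Backprop would use W2.T @ e; feedback alignment uses B @ e instead.
    delta_h = (B @ e) * (1 - h**2)         # tanh derivative
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
    return 0.5 * float(e @ e)

# Toy task: regress onto a fixed random linear mapping.
M = rng.normal(size=(n_out, n_in))
for _ in range(2000):
    x = rng.normal(size=n_in)
    loss = step(x, M @ x)
print(f"final sample loss: {loss:.4f}")
```

The only change relative to ordinary backprop is the single line that uses B in place of W2.T; the paper's finding is that learning still succeeds because the forward weights come to align with the fixed feedback.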
  • People are able to mentally time travel to distant memories and reflect on the consequences of those past events. Here, the authors show how a mechanism that connects learning from delayed rewards with memory retrieval can enable AI agents to discover links between past events and so choose better courses of action in the future.

    • Chia-Chun Hung
    • Timothy Lillicrap
    • Greg Wayne
    Research | Open Access
    Nature Communications
    Volume: 10, Pages: 1–12
  • The backpropagation of error (backprop) algorithm is frequently used to train deep neural networks in machine learning, but it has not been viewed as being implemented by the brain. In this Perspective, however, Lillicrap and colleagues argue that the key principles underlying backprop may indeed have a role in brain function.

    • Timothy P. Lillicrap
    • Adam Santoro
    • Geoffrey Hinton
    Reviews
    Nature Reviews Neuroscience
    Volume: 21, Pages: 335–346
  • A reinforcement-learning algorithm that combines a tree-based search with a learned model achieves superhuman performance in demanding planning tasks and visually complex domains, without any knowledge of their underlying dynamics (a toy sketch of tree search over a learned model follows this entry).

    • Julian Schrittwieser
    • Ioannis Antonoglou
    • David Silver
    Research
    Nature
    Volume: 588, Pages: 604–609
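
The combination this summary describes, tree search driven purely by a learned model, can be sketched in miniature. The published method (MuZero) uses Monte Carlo tree search with learned representation, dynamics, policy and value networks; this toy replaces all of that with hand-stubbed functions and exhaustive depth-limited lookahead, just to show that planning never consults the real environment's rules. Every function and constant below is invented.

```python
# Depth-limited tree search where transitions, rewards and leaf values
# come only from stand-ins for *learned* functions, never a simulator.
ACTIONS = (0, 1)          # toy action space
GAMMA = 0.97

def dynamics(state, action):
    """Stand-in for a learned dynamics model g(s, a) -> (s', r)."""
    next_state = (2 * state + action) % 7      # abstract transition
    reward = 1.0 if next_state == 0 else 0.0
    return next_state, reward

def value(state):
    """Stand-in for a learned value network v(s)."""
    return 0.5 if state in (0, 3, 5) else 0.0

def search(state, depth):
    """Best achievable return from state within the search horizon."""
    if depth == 0:
        return value(state)                    # bootstrap with value net
    best = float("-inf")
    for a in ACTIONS:
        s2, r = dynamics(state, a)
        best = max(best, r + GAMMA * search(s2, depth - 1))
    return best

def act(state, depth=6):
    """Choose the action whose subtree promises the highest return."""
    returns = {}
    for a in ACTIONS:
        s2, r = dynamics(state, a)
        returns[a] = r + GAMMA * search(s2, depth - 1)
    return max(returns, key=returns.get)

print("chosen action from state 3:", act(3))
```

A real implementation would expand the tree selectively, guided by the policy network's priors and visit counts, instead of enumerating every action at every depth.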
  • Starting from zero knowledge and without human data, AlphaGo Zero was able to teach itself to play Go and to develop novel strategies that provide new insights into the oldest of games.

    • David Silver
    • Julian Schrittwieser
    • Demis Hassabis
    Research
    Nature
    Volume: 550, Pages: 354–359
  • A deep network is best understood in terms of the components used to design it (objective functions, architecture and learning rules) rather than its unit-by-unit computation. Richards et al. argue that this view inspires fruitful approaches to systems neuroscience.

    • Blake A. Richards
    • Timothy P. Lillicrap
    • Konrad P. Kording
    Reviews
    Nature Neuroscience
    Volume: 22, Pages: 1761–1770
  • One ambition of computational neuroscience is that advances in artificial intelligence will continue to be informed by a deepening understanding of how the brains of various species evolved to process information. To that end, the authors propose an expanded version of the Turing test, one involving embodied sensorimotor interaction with the world, as a framework for accelerating progress in artificial intelligence.

    • Anthony Zador
    • Sean Escola
    • Doris Tsao
    Reviews | Open Access
    Nature Communications
    Volume: 14, Pages: 1–7