
Neurocomputation: From Brains to Machines - Report on workshop

On 25th November 2015, over 70 researchers from across the University of Cambridge gathered for an interdisciplinary workshop at Corpus Christi College on Neurocomputation: from brains to machines, chaired by Professor Zoe Kourtzi and organised by Cambridge Neuroscience with support from Cambridge Big Data. The workshop aimed to advance our understanding of how biological and artificial systems solve sensory and motor challenges, and brought together speakers from a range of disciplines, from cognitive neuroscience and brain imaging to engineering, computer vision and robotics. The goal was to encourage dialogue using a common language of computational techniques that allow us to extract informative signals from rich biological data and design artificial systems with practical applications.

A recurring theme of the workshop was the progress that has been achieved in developing computational models of the brain that are also biologically plausible, highlighting the importance of a continuing dialogue between biological scientists and engineers to uncover, and take inspiration from, the mechanisms underlying brain function and cognition.

Dr Andrew Welchman (Department of Psychology) opened the proceedings with a presentation on Seeing in depth: computations and cortical networks. Using modern brain imaging methods, Dr Welchman’s group is studying how our brains deal with the complicated inference problem of 3-D perception, allowing us to understand our three-dimensional world from the inputs captured by a 2-D retina.

To understand how the brain deciphers its inputs, they use an approach based on a convolutional neural network, which can be trained to estimate depth. The structures that emerge in the trained network resemble those seen in visual processing in the brain’s real circuitry, as revealed by high-resolution fMRI imaging, and can explain several prior observations (e.g. a 1995 Nature study: http://www.nature.com/nature/journal/v374/n6525/abs/374808a0.html).
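
For readers less familiar with this class of model, the following is a minimal sketch (not the Welchman group’s actual network) of a convolutional network that maps a two-channel stereo pair to a coarse disparity classification; the layer sizes and number of disparity classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a small convolutional network that takes a stereo
# pair (left/right images stacked as two channels) and predicts a coarse
# disparity class. The architecture is an assumption for illustration, not the
# model used by the Welchman group.
class StereoDepthNet(nn.Module):
    def __init__(self, n_disparity_classes: int = 11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=5, padding=2),  # 2 channels = left + right eye
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_disparity_classes)

    def forward(self, x):
        h = self.features(x)              # binocular filters emerge here during training
        return self.classifier(h.flatten(1))

# A random 32x32 stereo patch, batch of one.
net = StereoDepthNet()
patch = torch.randn(1, 2, 32, 32)
print(net(patch).shape)  # torch.Size([1, 11]) - logits over disparity classes
```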

Traditionally, psychologists and engineers have approached the problem of understanding neural computations (that is, how the brain converts inputs to outputs) using very different approaches. In recent years, the increasingly successful application of machine learning to these problems has begun to lead to a convergence of approaches. The studies described by Dr Welchman reveal the importance of bridging computational techniques with large volumes of experimental data – the underlying motivation for this interdisciplinary workshop.

Dr Andrew Fitzgibbon (Microsoft Research, Cambridge) presented research into what it means, from a mathematical point of view, to describe three-dimensional shape. Recovering stable representations of object shapes from images, using methods that are computationally inexpensive to run, is important in applications such as computer vision, object recognition, animation and computer graphics. In his talk, Learning about Shape, he reviewed some of the previous attempts to represent images from linear combinations of shape bases, pointing out that these bases are computationally (and often practically) expensive to derive and are unstable on account of parameter conflation. Instead, using spline-like representations (by fitting splines to the images and simultaneously performing principal components analysis, PCA) provides a better way to learn 3D shapes from images. The question remains as to how this learning process resembles the methods that the brain uses to learn shapes from a series of visual inputs.
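
As a rough illustration of the “linear combination of shape bases” idea, the toy sketch below runs PCA over a set of aligned 2D contour points; the method described in the talk fits splines and performs the decomposition jointly, which this simplified example does not attempt, and all data here are synthetic.

```python
import numpy as np

# Toy sketch: learn a low-dimensional linear shape basis from 2D contours via
# PCA over pre-aligned contour points. Synthetic data for illustration only.
rng = np.random.default_rng(0)
n_shapes, n_points = 200, 40
t = np.linspace(0, 2 * np.pi, n_points, endpoint=False)

shapes = []
for _ in range(n_shapes):
    # Ellipse-like contours with a random wobble deformation.
    a, b = 1.0 + 0.3 * rng.standard_normal(2)
    wobble = 0.1 * rng.standard_normal() * np.sin(3 * t)
    x = (a + wobble) * np.cos(t)
    y = (b + wobble) * np.sin(t)
    shapes.append(np.concatenate([x, y]))      # flatten (x, y) contour points
X = np.array(shapes)

# PCA via SVD on mean-centred shape vectors.
mean_shape = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
explained = (S ** 2) / (S ** 2).sum()
print("variance explained by first 3 components:", explained[:3].round(3))

# Any shape is approximated as: mean_shape + sum_k coeff_k * basis_k
basis = Vt[:3]
coeffs = (X[0] - mean_shape) @ basis.T
reconstruction = mean_shape + coeffs @ basis
print("reconstruction error:", np.abs(reconstruction - X[0]).max().round(3))
```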

Dr Fumiya Iida (Department of Engineering) presented his group’s work Towards a robot that can develop body and mind together. Dr Iida leads the Biologically Inspired Robotics Laboratory. He reviewed work from earlier in his career when he studied honeybees. Despite their small brains, they are able to forage over large distances and find their way home using only their vision. It turns out that the key to this process is the mechanical design of the bee’s visual system, where the ‘sensor morphology’ (the arrangement of the visual receptors on the body) is heterogeneous, leading to greatly improved depth perception via the additional insight from motion parallax. This simplifies the computational cost of processing the inputs and explains the bee’s navigational ability at low cognitive cost. Translating these insights into artificial robotics led to the development of robots with a similarly optimal distribution of sensors, to reduce the computational cost of navigation.

The overall theme of this work is therefore to build robots that can adapt their shape, and even robots that can design themselves, from a robot that can construct and operate its own tools to carry out particular tasks to robots that can ‘design’ their own morphology. Evolutionary robotics has been used to mimic evolution by natural selection, with a robot programmed to design and construct ‘child’ robots and select those that perform best at a given task (for example locomotion in a straight line). Dr Iida is keen to develop the robotics community at Cambridge and further afield and highlighted the Soft Robotics online community (softrobotics.org) and a forthcoming conference.
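
The selection loop at the heart of evolutionary robotics can be illustrated with a toy simulation: generate candidate designs, score each on a task, and breed the best to form the next generation. The fitness function and parameters below are invented stand-ins, not Dr Iida’s actual setup.

```python
import numpy as np

# Toy evolutionary loop: propose "child" designs, evaluate each, keep the best
# as parents. The fitness function is a made-up proxy, not a physical trial.
rng = np.random.default_rng(5)

def fitness(design):
    # Hypothetical stand-in for scoring a design on a locomotion task.
    target = np.linspace(0, 1, design.size)
    return -np.sum((design - target) ** 2)

population = rng.standard_normal((20, 8))      # 20 designs, 8 parameters each
for generation in range(50):
    scores = np.array([fitness(d) for d in population])
    parents = population[np.argsort(scores)[-5:]]                 # keep the 5 best
    children = parents[rng.integers(0, 5, size=20)] + 0.1 * rng.standard_normal((20, 8))
    population = children

print("best fitness after evolution:", max(fitness(d) for d in population).round(3))
```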

Dr Ben Seymour (Department of Engineering) presented work on The Human Pain System: from Biological Models to Robots. Using an engineering approach to pain as a control problem (a computational problem aimed at minimising pain through learning and experience), the Seymour Lab have developed models to guide our understanding of the fundamental architecture of the pain system. Their research reveals a three-level structure of the human pain and reward system consisting of a Pavlovian (passive associative) system, a habit system and a goal-directed system. Brain imaging using fMRI has shown good correlation with the predictions of the model based on instrumental (reinforcement) learning, giving confidence in their sophisticated engineering model.

Dr Seymour then asked whether the nature of human pain learning can inspire new control algorithms for autonomous agents. His research finds that, indeed, borrowing insights from biology leads to better algorithms, with the three-system model (Pavlovian, instrumental, model-based) outperforming one- or two-system models on a range of simulated tasks. This has applications in the clinical context; for example, the model can be used to create robots with psychiatric disorders such as OCD, whose behaviour can then be studied.
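
As a rough illustration of why combining systems can help, the sketch below pairs a Pavlovian value predictor, which biases behaviour towards withdrawal whenever pain is expected, with an instrumental Q-learner on a toy two-action task. It is a simplified stand-in with invented parameters, not the Seymour Lab’s three-level architecture.

```python
import numpy as np

# Minimal illustration (an assumption, not the Seymour Lab model): a Pavlovian
# predictor learns the expected outcome of the situation and biases choice
# towards a fixed "withdraw" response when pain is expected, while an
# instrumental Q-learner learns which action actually minimises pain.
rng = np.random.default_rng(1)
actions = ["approach", "withdraw"]
pain_prob = {"approach": 0.7, "withdraw": 0.1}   # probability of a painful outcome

alpha, beta, pav_weight = 0.1, 3.0, 1.0
V = 0.0                      # Pavlovian state value (expected outcome)
Q = np.zeros(2)              # instrumental action values

for trial in range(2000):
    # Pavlovian bias: the more pain is expected (V negative), the more
    # "withdraw" is favoured regardless of learned action values.
    bias = np.array([0.0, -pav_weight * V])
    logits = beta * (Q + bias)
    p = np.exp(logits - logits.max()); p /= p.sum()
    a = rng.choice(2, p=p)

    pain = -1.0 if rng.random() < pain_prob[actions[a]] else 0.0
    V += alpha * (pain - V)                      # Pavlovian (outcome-only) update
    Q[a] += alpha * (pain - Q[a])                # instrumental update

print("Pavlovian expected outcome V:", round(V, 2))
print("Q values:", dict(zip(actions, Q.round(2))))
```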

Dr Scott Yang (Department of Engineering) presented his work on Active sensing in the categorization of visual patterns, which aims to understand how humans interpret visual scenes, a task that typically requires us to accumulate information from multiple locations. Using an experimental approach in which participants classified pattern types in an image (for example, horizontal or vertical stripes), aspects of the participants’ strategy were elucidated. In one experiment, locations within a masked image were progressively revealed to the participant, either chosen by them or by an ‘ideal’ Bayesian active sensing algorithm. This algorithm chooses which areas are revealed based on a score of how much uncertainty in the classification is reduced when the image is probed at that location.
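
The scoring idea can be illustrated with a toy two-category example: for each hidden location, compute the expected drop in posterior entropy over the categories if that location were revealed, then reveal the highest-scoring one. Everything below (grid size, noise level, stripe templates) is an illustrative assumption rather than the experimental algorithm itself.

```python
import numpy as np

# Toy Bayesian active sensing for a two-category task (horizontal vs vertical
# stripes). Each hidden pixel is scored by the expected reduction in posterior
# entropy over categories if that pixel were revealed.
rng = np.random.default_rng(2)
size, noise = 8, 0.2

def template(category):
    # Probability that each pixel is "bright" under each stripe category.
    stripes = (np.arange(size) % 2).astype(float)
    img = np.tile(stripes, (size, 1)) if category == "vertical" else np.tile(stripes[:, None], (1, size))
    return np.clip(img * (1 - 2 * noise) + noise, 0, 1)

templates = {c: template(c) for c in ["horizontal", "vertical"]}
true_image = (rng.random((size, size)) < templates["vertical"]).astype(int)

def entropy(p):
    p = np.clip(p, 1e-12, 1)
    return -(p * np.log(p)).sum()

log_post = np.log(np.array([0.5, 0.5]))          # posterior over (horizontal, vertical)
revealed = np.zeros((size, size), dtype=bool)

for step in range(10):
    post = np.exp(log_post - log_post.max()); post /= post.sum()
    best_score, best_loc = -np.inf, None
    for i in range(size):
        for j in range(size):
            if revealed[i, j]:
                continue
            # Expected posterior entropy after observing pixel (i, j).
            p_bright = sum(post[k] * templates[c][i, j] for k, c in enumerate(templates))
            exp_H = 0.0
            for outcome, p_out in [(1, p_bright), (0, 1 - p_bright)]:
                lik = np.array([templates[c][i, j] if outcome else 1 - templates[c][i, j] for c in templates])
                new_post = post * lik
                new_post /= new_post.sum()
                exp_H += p_out * entropy(new_post)
            score = entropy(post) - exp_H        # expected information gain
            if score > best_score:
                best_score, best_loc = score, (i, j)
    i, j = best_loc
    revealed[i, j] = True
    obs = true_image[i, j]
    lik = np.array([templates[c][i, j] if obs else 1 - templates[c][i, j] for c in templates])
    log_post += np.log(np.clip(lik, 1e-12, 1))

post = np.exp(log_post - log_post.max()); post /= post.sum()
print("posterior over (horizontal, vertical):", post.round(3))
```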

The research finds that the sensing algorithm improved participants’ ability to classify the images, compared with when they chose their own locations. This suboptimal performance by the human subjects arises from prior biases, perceptual noise and inaccuracies in eye movements.

Dr Andy Thwaites (Department of Psychology / MRC Cognition and Brain Sciences Unit) presented The Kymata Atlas: Mapping the information processing pathways of the cortex. The Atlas (https://kymata-atlas.org) is an online tool for visualising the processing steps in the brain, from input to output, including aspects such as their position in the brain, their latency (the time at which each pathway operates) and the type of activity involved in each pathway (for example visual, auditory or tactile input).

Pathways are identified experimentally using advanced imaging techniques such as combined electro- and magnetoencephalography (EMEG) or functional magnetic resonance imaging (fMRI). Building up a map of the pathways and processing steps in the brain can bring researchers a step closer to understanding the brain’s ‘source code’, drawing a parallel with the processing pathways implemented by a computer program.

The methods used to identify these transforms, as well as their latency of expression in different brain regions, have been reported in a range of sensory contexts, for example in colour perception or auditory perception. An example, including a description of the methods, may be found at: http://journal.frontiersin.org/article/10.3389/fncom.2015.00005/abstract

Dr Nikolaus Kriegeskorte (MRC Cognition and Brain Sciences Unit) addressed Deep neural networks: a new framework for understanding how the brain works. As an example of the usefulness of neural networks, he presented a set of data on inferior temporal cortical representations from an experiment on how humans and monkeys respectively identify representational similarity in visual stimuli (identifying whether two images represent similar objects, and the differences in processing when an image represents an animate or inanimate object). Prior to the use of deep neural nets, there were no suitable computational models that could explain these data. Although some models from computer vision were able to perform object recognition, they contained several hand-engineered features which were not plausible as representations of a biological system, and of 27 models tested, all failed to explain the experimental data.
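
The comparison between model and brain representations is typically carried out with representational similarity analysis: build a representational dissimilarity matrix (RDM) for each and correlate them. The sketch below uses random stand-in data purely to show the mechanics; real analyses use stimulus-evoked activity patterns and model-layer activations.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Sketch of representational similarity analysis (RSA) with random stand-in
# data: build an RDM for a model layer and for measured brain responses, then
# correlate their pairwise dissimilarities.
rng = np.random.default_rng(3)
n_stimuli = 40

model_features = rng.standard_normal((n_stimuli, 512))   # e.g. a deep-net layer
brain_patterns = rng.standard_normal((n_stimuli, 100))   # e.g. IT response patterns

model_rdm = pdist(model_features, metric="correlation")  # pairwise dissimilarities
brain_rdm = pdist(brain_patterns, metric="correlation")

rho, p = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RDM correlation: rho={rho:.3f}, p={p:.3f}")
```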

Since 2012, however, an emerging body of work has used deep neural nets to explain biological vision, allowing us to capture the computations involved and gain a deeper understanding of vision by studying the features of these computations. Not only are deep neural nets effective in explaining the empirical results, they are also biologically plausible, and hence extremely promising in providing a link between brains and machines. They are also making progress in explaining more complex processes such as human language, as a later speaker discussed.

Dr Barry Devereux (Department of Psychology) continued the discussion on neural networks with a talk entitled Using neural network models of conceptual representation to understand visual object processing. Dr Devereux’s research investigates how sensory processing interacts with the activation of distributed semantic representations. For example, while a basic object recognition program can apply the labels ‘orange’ and ‘banana’ to two images, a semantic layer of understanding is required to identify what the objects have in common.

By combining a deep convolutional network model of vision with feature-based representations (based on the CSLB Property Norms), visual information and semantic features can be linked. The properties of the computational model (for example, the latency with which visual information is processed and distinctive semantic features are identified) show good correlation with real fMRI data from participants undertaking visual identification and classification tasks, indicating that the model is biologically plausible and can explain the information processing pathways involved in recognition and semantic classification.
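
A much-simplified sketch of linking visual features to semantic feature vectors is shown below: a ridge-regularised linear map from stand-in deep-net activations to binary property-norm-style features. All data, dimensions and feature names are invented for illustration; the actual model combines a deep convolutional network with the CSLB Property Norms.

```python
import numpy as np

# Toy sketch: learn a linear map from visual (deep-net-style) features to
# binary semantic feature vectors of the kind listed in property norms
# (e.g. "is_a_fruit", "has_a_peel"). All data here are synthetic.
rng = np.random.default_rng(4)
n_concepts, n_visual, n_semantic = 100, 256, 20

visual = rng.standard_normal((n_concepts, n_visual))          # stand-in CNN activations
true_map = rng.standard_normal((n_visual, n_semantic)) * 0.1
semantic = (visual @ true_map + 0.1 * rng.standard_normal((n_concepts, n_semantic))) > 0

# Ridge-regularised least squares from visual features to semantic features.
lam = 1.0
W = np.linalg.solve(visual.T @ visual + lam * np.eye(n_visual),
                    visual.T @ semantic.astype(float))

predicted = (visual @ W) > 0.5
accuracy = (predicted == semantic).mean()
print(f"semantic feature prediction accuracy: {accuracy:.2f}")
```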

The final speaker of the day, Dr Cai Wingfield (Department of Psychology), presented work on a different kind of recognition – that of speech. His talk, Understanding human speech recognition: Reverse-engineering the engineering solution using MEG and RSA, addressed the mechanisms of speech comprehension in humans and an approach that is currently being taken to mimic this process in silico.

Unlike visual objects, speech stimuli are time-sensitive, adding significant complexity to their processing in the brain. Although effective speech recognition systems have been designed, there is no standard computational model of speech comprehension – to date, only humans have this faculty. Moreover, the most effective artificial systems are based on designs that are not biologically plausible. Dr Wingfield’s presentation highlighted the HTK (Hidden Markov Model Toolkit), developed in Cambridge, as an effective system for transcribing speech. Deep neural networks play a role here too, and have been shown to help disambiguate speech. In combination with brain imaging studies, these models can give insight into how sounds are processed in the brain. However, significant future work is needed to understand the processes by which the brain converts sound to meaning.
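
The core computation in an HMM-based recogniser such as HTK is scoring an observation sequence against a word model with the forward algorithm. The toy sketch below uses discrete symbols and made-up parameters; real systems model continuous acoustic feature vectors with Gaussian mixtures or deep networks.

```python
import numpy as np

# Minimal forward-algorithm sketch: score an observation sequence against a
# small left-to-right word model. All parameters are toy values.
A = np.array([[0.6, 0.4, 0.0],       # transition probabilities (3 phone-like states)
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])
B = np.array([[0.7, 0.2, 0.1],       # emission probabilities over 3 discrete symbols
              [0.1, 0.7, 0.2],
              [0.1, 0.2, 0.7]])
pi = np.array([1.0, 0.0, 0.0])       # always start in the first state

observations = [0, 0, 1, 1, 2, 2]    # a quantised "acoustic" sequence

alpha = pi * B[:, observations[0]]
for obs in observations[1:]:
    alpha = (alpha @ A) * B[:, obs]  # forward recursion

print("likelihood of sequence under this word model:", alpha.sum())
```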

The workshop was followed by a networking reception where further discussion ensued. The broad range of subjects covered and disciplines represented demonstrated the importance of interdisciplinary working to bring together new perspectives and to guide future research directions for experimental and computational scientists alike.

Posted on 11/01/2016
