Cambridge Neuroscience Event

This event profile is in the events archive.



Neurocomputation: from brains to machines

When

November 25th 2015: 13:30-17:30

Where

McCrum Lecture theatre, Corpus Christi College

Description

The aim of the workshop is to advance our understanding of how biological and artificial systems solve sensory motor challenges. We will bring together speakers who work on the problems of recognition and action from diverse perspectives: cognitive neuroscience and brain imaging, computer vision and robotics. The goal is to encourage dialogue using a common language of computational techniques that allow us to extract informative signals from rich biological data and design artificial systems with practical applications.

Back to top

Plenaries

N/A

Back to top

Programme at a Glance

Chaired by Professor Zoe Kourtzi

14:00-14:20            Dr Andrew Welchman, Department of Psychology, University of Cambridge

  Seeing in depth: computations and cortical networks.

14:20-14:40            Dr Andrew Fitzgibbon, Microsoft Research, Cambridge

  Learning about Shape

14:40-15:00            Dr Fumiya Iida, Department of Engineering, University of Cambridge

  Towards a robot that can develop body and mind together

15:00-15:20            Dr Ben Seymour, Department of Engineering, University of Cambridge

  The Human Pain System: from Biological Models to Robots

15:20-15:50            Break

 

15:50-16:10            Dr Scott Yang, Department of Engineering, University of Cambridge

  Active sensing in the categorization of visual patterns

16:10-16:30            Dr Andy Thwaites, Department of Psychology, University of Cambridge/MRC Cognition and Brain Sciences Unit

 The Kymata Atlas: Mapping the information processing pathways of the cortex

16:30-16:50            Dr Nikolaus Kriegeskorte, MRC Cognition and Brain Sciences Unit

  Deep neural networks: a new framework for understanding how the brain works

16:50-17:10            Dr Barry Devereux, Department of Psychology, University of Cambridge

  Using neural network models of conceptual representation to understand visual object processing

17:10-17:30            Dr Cai Wingfield, Department of Psychology, University of Cambridge

  Understanding human speech recognition: Reverse-engineering the engineering solution using MEG and RSA

17:30-18:30            Drinks

18:30-                      Delegates are invited to join us at the Eagle pub

 

Dr Andrew Welchman, Department of Psychology, University of Cambridge

Seeing in depth: computations and cortical networks.

Human perception is remarkably flexible: we experience vivid 3-D structure under diverse conditions, from the seemingly random dots of a ‘magic eye’ stereogram to the aesthetically beautiful, but obviously flat, canvases of the Old Masters. How does the brain achieve this apparently effortless robustness? Using modern brain imaging methods, we are beginning to unpick how different parts of the visual cortex support 3-D perception, tracing different computations in the dorsal and ventral pathways. In this talk I will describe work that uses functional brain imaging (fMRI) in combination with computational analysis techniques to increase our insight into the functions of the visual cortex.

 

Dr Andrew Fitzgibbon, Microsoft Research, Cambridge

Learning about Shape

Vision is naturally concerned with shape. If we could recover a stable and compact representation of object shape from images, we would hope it might aid with numerous vision tasks. Just the silhouette of an object is a strong cue to its identity, and the silhouette is generated by its 3D shape. In computer vision, many representations have been explored: collections of points, "simple" shapes like ellipsoids or polyhedra, and various algebraic surfaces. For years, people have avoided spline-like representations because their recovery from data is considered unstable. I will show how a certain spline-like representation, subdivision surfaces, can be used to recover shape from images. I show how we can address the previously difficult problem of recovering 3D shape from multiple silhouettes, and the considerably harder problem which arises when the silhouettes are not from the same object instance, but from members of an object class, for example 30 images of different dolphins each in different poses. This requires that we simultaneously learn the shape space and the projections from each instance into its image. Although the emphasis of this work is practical and mathematical, I hope the implications for neural representations are something we can discover and discuss.
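
For readers who want a concrete feel for the "learn the shape space and the projections together" idea, the following is a deliberately simplified Python sketch: a linear shape space fitted to 2D observations by alternating least squares. It is not the subdivision-surface method described in the talk, and all names, dimensions and data are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    n_instances, n_points, n_basis = 30, 50, 3      # e.g. 30 dolphins, 50 contour points each

    # Observed 2D silhouette points per instance, flattened into one vector per instance.
    observations = rng.normal(size=(n_instances, 2 * n_points))

    # Model: each instance = mean shape + linear combination of a few basis shapes.
    mean_shape = observations.mean(axis=0)
    basis = rng.normal(scale=0.1, size=(n_basis, 2 * n_points))

    for _ in range(100):
        residual = observations - mean_shape
        # Fix the basis, solve for each instance's coefficients (least squares).
        coeffs = residual @ np.linalg.pinv(basis)
        # Fix the coefficients, solve for the basis shapes.
        basis = np.linalg.pinv(coeffs) @ residual

    reconstruction = mean_shape + coeffs @ basis
    print("RMS reconstruction error:", np.sqrt(np.mean((reconstruction - observations) ** 2)))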


Dr Fumiya Iida, Department of Engineering, University of Cambridge

Towards a robot that can develop body and mind together

Computers are extremely powerful when dealing with virtual worlds, but how can we bring them out of the box? Compared with their computational 'brain power', the physical bodies of machines are still primitive in terms of adaptation and flexibility. From this viewpoint, we have been working on technological means that enable physical robots to vary their own mechanical bodies, so as to improve autonomy and adaptability in uncertain and unstructured task environments. I will introduce state-of-the-art enabling technologies towards a robot that can develop body and mind together.


Dr Ben Seymour, Department of Engineering, University of Cambridge

The Human Pain System: from Biological Models to Robots

Although pain is a ubiquitous part of our lives, it is almost completely unknown how the brain turns input from pain fibres into the emotional and sensory feeling of pain: i.e. how the pain system is actually built. Our research aims to discover the fundamental 'design architecture' of the pain system, by piecing together the individual underlying component processes that yield pain-related behaviour. To do this, we adopt an explicitly engineering-based approach that draws on the primary function of the pain system: to recognise and subsequently minimise harm through sensing pain. Using this approach, it is possible to build 'systems-level' models of the pain system, and to test and refine these models through human behavioural and neuroimaging experiments. To date, this approach has been very successful, and sufficient to allow us to build a basic skeleton model of the human pain system. The engineering-level approach then allows us to simulate and implement this system to show that it works in artificial systems, such as robots. Indeed, in some domains it is possible to improve on existing control systems for autonomous robots using neurally-inspired designs.
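
As a purely hypothetical illustration of the engineering-based framing (and not the speaker's model), the sketch below treats pain as a negative reward that a simple tabular Q-learning agent uses to learn harm-avoiding behaviour; every name and value is invented.

    import numpy as np

    n_states, n_actions = 5, 2              # positions 0..4; actions: 0 = left, 1 = right
    pain = np.array([0., 0., 0., 0., -10.]) # entering state 4 is "painful"
    goal_reward = 5.0                       # entering state 3 is rewarded

    q = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)
    alpha, gamma, eps = 0.5, 0.9, 0.1

    for _ in range(200):                    # episodes
        s = 0
        for _ in range(20):                 # steps per episode
            a = rng.integers(n_actions) if rng.random() < eps else int(q[s].argmax())
            s_next = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            r = pain[s_next] + (goal_reward if s_next == 3 else 0.0)
            q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
            s = s_next

    print(q.round(2))   # learned values steer the agent towards state 3 and away from state 4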


Dr Scott Yang, Department of Engineering, University of Cambridge

Active sensing in the categorization of visual patterns

Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations.
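
The flavour of a Bayesian active-sensing strategy can be conveyed with a small sketch. The code below greedily picks the next 'fixation' to minimise the expected posterior entropy over two candidate categories; the generative model, locations and numbers are invented, and this is not the study's actual algorithm or stimuli.

    import numpy as np

    def entropy(p):
        """Binary entropy (in bits) of a probability p."""
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

    # Probability that each of 6 candidate locations looks "bright" under each category.
    p_bright = {"A": np.array([0.9, 0.8, 0.2, 0.1, 0.5, 0.5]),
                "B": np.array([0.1, 0.2, 0.8, 0.9, 0.5, 0.5])}

    rng = np.random.default_rng(1)
    true_category = "A"
    p_a = 0.5                               # prior belief that the category is A

    for step in range(4):
        # Expected posterior entropy if we were to fixate each candidate location.
        expected_h = []
        for loc in range(6):
            h = 0.0
            for obs in (1, 0):              # bright / dark
                like_a = p_bright["A"][loc] if obs else 1 - p_bright["A"][loc]
                like_b = p_bright["B"][loc] if obs else 1 - p_bright["B"][loc]
                p_obs = p_a * like_a + (1 - p_a) * like_b
                h += p_obs * entropy(p_a * like_a / p_obs)
            expected_h.append(h)

        loc = int(np.argmin(expected_h))                    # most informative location
        obs = rng.random() < p_bright[true_category][loc]   # simulate what is seen there
        like_a = p_bright["A"][loc] if obs else 1 - p_bright["A"][loc]
        like_b = p_bright["B"][loc] if obs else 1 - p_bright["B"][loc]
        p_a = p_a * like_a / (p_a * like_a + (1 - p_a) * like_b)
        print(f"fixation {step + 1}: location {loc}, observed {int(obs)}, P(A) = {p_a:.2f}")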


Dr Andy Thwaites, Department of Psychology, University of Cambridge/MRC Cognition and Brain Sciences Unit

The Kymata Atlas: Mapping the information processing pathways of the cortex

Every millisecond, different types of sensory receptors send information about the environment to the brain. Different regions of the brain then carry out specific processing steps on this information (for example, computing ‘pitch’ or ‘loudness’, or detecting ‘colour’).

Identifying these processing pathways is an important goal of neuroscience. Models of processing pathways are constantly being hypothesised by researchers, and with recent advances in neuroimaging techniques, many of these pathways can be tested against data from electro- and magnetoencephalography (EMEG) or functional magnetic resonance imaging (fMRI).

Pathways for which strong evidence is established can be collated into maps, and such maps allow users to trace the transformation of information as it moves through the brain. During this talk, I will present some of the pathways that can be found in the Kymata Atlas (https://kymata-atlas.org), one of the largest maps available to the public, and discuss how engineering applications can make use of them.
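
As a toy illustration of how a hypothesised transform can be tested against recordings (not the Kymata pipeline itself), the sketch below correlates a predicted 'loudness' time course with a synthetic neural signal across a range of latencies and reports the best-fitting lag; all data and parameters are invented.

    import numpy as np

    rng = np.random.default_rng(2)
    fs = 100                                    # samples per second
    t = np.arange(0, 10, 1 / fs)                # 10 s of "stimulus"

    stimulus = rng.normal(size=t.size)
    loudness = np.abs(stimulus)                 # the hypothesised transform of the input
    true_lag = 15                               # pretend the brain expresses it 150 ms later
    neural = np.roll(loudness, true_lag) + rng.normal(scale=1.0, size=t.size)

    lags_ms, correlations = [], []
    for lag in range(40):                       # test latencies from 0 to 390 ms
        r = np.corrcoef(loudness[:t.size - lag], neural[lag:])[0, 1]
        lags_ms.append(lag * 1000 // fs)
        correlations.append(r)

    best = int(np.argmax(correlations))
    print(f"best-fitting latency: {lags_ms[best]} ms (r = {correlations[best]:.2f})")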


Dr Nikolaus Kriegeskorte, MRC Cognition and Brain Sciences Unit

Deep neural networks: a new framework for understanding how the brain works

Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Artificial neural networks are inspired by the brain and their computations could be implemented in biological neurons. Although designed with engineering goals, this technology provides the basis for tomorrow’s computational neuroscience. In order to test such models with massively multivariate brain-activity data, we can characterise the representational spaces in brains and models by matrices of representational dissimilarities among stimuli. Deep convolutional neural nets trained for visual object recognition have internal representational spaces remarkably similar to those of the human and monkey ventral visual pathway. Modern neural net technology puts an expanding array of complex cognitive tasks within our computational reach. We are entering an exciting new era, in which we will be able to build neurobiologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence.
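
A minimal sketch of the representational-dissimilarity comparison described above follows: it builds one RDM for a model layer and one for a brain region, then compares the two. The data are random placeholders, and the Spearman correlation of the upper triangles is one common choice of comparison statistic rather than necessarily the speaker's.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(3)
    n_stimuli = 40

    model_activations = rng.normal(size=(n_stimuli, 256))   # e.g. one layer of a deep net
    brain_patterns = rng.normal(size=(n_stimuli, 120))      # e.g. voxel responses in a region

    # Stimulus-by-stimulus dissimilarities (correlation distance), upper triangle only.
    model_rdm = pdist(model_activations, metric="correlation")
    brain_rdm = pdist(brain_patterns, metric="correlation")

    rho, p = spearmanr(model_rdm, brain_rdm)
    print(f"model-brain RDM correlation: rho = {rho:.2f} (p = {p:.3f})")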

 

Dr Barry Devereux, Department of Psychology, University of Cambridge

Using neural network models of conceptual representation to understand visual object processing

According to most neurocognitive accounts of semantic processing, concepts’ meanings are made up of distributed, overlapping, feature-based representations (Haxby et al. 2001; Cree & McRae 2003; Taylor et al. 2011; Tyler & Moss 2001). However, the manner in which sensory processing interacts with the activation of distributed semantic representations remains unclear. We addressed this issue in a study using familiar visual objects. We combined a deep convolutional network model of vision (Krizhevsky et al. 2012) with an attractor network model of concept semantics (Cree et al. 2006) where information about object meaning is represented as a pattern of activation across semantic feature units. This integrated visuo-semantic model maps high-level visual representations onto feature representations, encoding statistical information about semantic features (e.g. feature frequency or sharedness) and their relationship to high-level visual information. We found that early stages of the semantic model showed stronger activation for features shared by many objects and features which are visual in nature (e.g. “is long”), compared with more distinctive and non-visual features.

Using representational similarity analysis (Kriegeskorte et al. 2008) and searchlight analysis (Kriegeskorte et al. 2006), we tested the ability of the model to explain patterns of activation in fMRI data where 16 participants named pictures of 131 objects (Clarke & Tyler 2014). We calculated 8 dissimilarity matrices (DMs) corresponding to the 8 layers of the deep convolutional network and 20 DMs corresponding to the 20 processing stages of the attractor network. The 28 DMs delineate a trajectory through a space of representational geometries, from pixels to detailed object semantics. Layers of the deep convolutional network explained pattern similarities in early visual cortex, consistent with previous results (Khaligh-Razavi & Kriegeskorte 2014; Güçlü & van Gerven 2015). However, DMs corresponding to the early stages of the attractor network, where activation of shared and visual semantic features is strong relative to non-visual and distinctive features, better explained pattern similarity in the posterior fusiform. The final stage of the semantic attractor network model, where detailed semantic representations, including both shared and distinctive features, are maximally activated, best explained pattern similarity in bilateral perirhinal cortex (see also Clarke et al. 2014; Tyler et al. 2013).

Taken together, the results show how models integrating visual and distributed semantic representations can account for fMRI pattern-information throughout the ventral temporal object processing stream.
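
As a loose illustration of the attractor-network component (not the authors' implementation), the sketch below uses Hopfield-style dynamics over a handful of binary semantic feature units: a partial, 'visual' cue settles towards a stored concept's full feature pattern. Concepts, features and weights are invented.

    import numpy as np

    # Two stored "concepts" as patterns over 8 semantic feature units (+1 = active, -1 = inactive).
    concepts = np.array([
        [ 1,  1,  1, -1, -1,  1, -1, -1],   # hypothetical "dog"
        [-1,  1, -1,  1,  1, -1,  1, -1],   # hypothetical "hammer"
    ])

    # Hebbian weight matrix that stores both concept patterns.
    weights = (concepts.T @ concepts) / concepts.shape[1]
    np.fill_diagonal(weights, 0)

    # Partial cue: the first three ("visual") features of "dog"; the rest are unknown.
    state = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=float)

    for _ in range(5):
        state = np.sign(weights @ state)    # synchronous update towards an attractor
        state[state == 0] = 1               # break ties deterministically

    print("settled pattern:", state.astype(int))
    print("matches the stored 'dog' pattern:", bool((state == concepts[0]).all()))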

 

Dr Cai Wingfield, Department of Psychology, University of Cambridge

Understanding human speech recognition: Reverse-engineering the engineering solution using MEG and RSA

There is widespread interest in the relationship between the neurobiological systems supporting human cognition and emerging computational systems capable of emulating these capacities. Human speech comprehension, poorly understood as a neurobiological process, is an important example. Automatic Speech Recognition (ASR) systems are now emerging, constructed solely on engineering and computational principles that provide a fully computationally specified model for speech recognition. This research bridges the gap between human and machine solutions to the speech recognition problem, using multivariate pattern analysis techniques to compare incremental 'machine states', generated as the ASR analysis progresses over time, to the incremental 'brain states' generated as the same inputs are heard by human listeners. Pilot results show that ASR-based models can illuminate the mechanisms of human speech comprehension.
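
A minimal sketch of the kind of comparison described (not the project's pipeline): frame-by-frame representational dissimilarity matrices are computed from incremental ASR 'machine states' and from simulated brain responses to the same items, then correlated. All arrays and dimensions are invented placeholders.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(4)
    n_items, n_frames = 20, 30              # e.g. 20 spoken words, 30 time frames each

    asr_states = rng.normal(size=(n_items, n_frames, 64))    # ASR hidden-state vectors
    meg_states = rng.normal(size=(n_items, n_frames, 100))   # sensor patterns per frame

    # For selected frames, compare the item-by-item geometry of machine and brain states.
    for frame in range(0, n_frames, 10):
        machine_rdm = pdist(asr_states[:, frame, :], metric="correlation")
        brain_rdm = pdist(meg_states[:, frame, :], metric="correlation")
        rho, _ = spearmanr(machine_rdm, brain_rdm)
        print(f"frame {frame}: machine-brain similarity rho = {rho:.2f}")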

 

Back to top

Directions

The entrance to the McCrum Lecture Theatre is off Bene't Street, through a passage to the right of the Eagle pub.

Back to top

Registration

Registration for this event is now closed.

Back to top

Sponsors

This workshop is sponsored by the Cambridge Neuroscience and Big Data Strategic Research Initiatives.

Back to top

Exhibiting

N/A

Back to top

Contact

Please contact Dr Dervila Glynn for further information.



Further events

Go to the events index page.