Dr André Grüning

AKA Andre Gruning, André Gruning, Andre Grüning, Andre Gruening, André Gruening

Coordinates

Dr André Grüning
Lecturer in Computing
Office: 14BB02
Student Office Hours: Monday 1400 — 1500 and Tuesday 1000 — 1100.
Research Day: Thursday.

Department of Computing
University of Surrey
Guildford, Surrey
GU2 7XH
United Kingdom

Phone: +44-1483-68-2648
Fax: +44-1483-68-6051
Electronic:
PGP Public Key: download here

Research

I am a member of the Nature-Inspired Computing and Engineering group within the Dept. of Computing.

My research interests include:

Neuroscience and Cognitive Science

My interests lie mainly in the theory and application of neural networks, cognitive modelling, (formal and natural) languages, complexity, learning algorithms and neuroscience. Generally, I believe that we must better understand how theoretical results, neural processing, and the environment interact to form intelligent and successful artificial and natural agents. To me it is also very important to bridge the huge gap between neuroscience on the one hand and cognitive science on the other: in the end we must strive to explain how cognitive processes run on the neural hardware the brain supplies.

Language comes in discrete chunks (words); however, it is ultimately processed on the neural hardware of the brain, which is analog in nature (rates and spike times take on continuous values). It is thus a priori unclear whether complexity concepts from symbolic/digital computation theory apply to analog computation in neural hardware or in dynamical systems generally. I am therefore interested in the interplay of formal languages of different complexities and the nature of their dynamical-systems representation in recurrent neural networks (Grüning 2006).

Since the hand-coded dynamic representations used in the now classical proofs of Turing equivalence of recurrent neural networks are quite different from those that emerge when a recurrent neural network is actually trained on a particular formal language, I also became interested in learning algorithms for artificial neural networks that are both efficient and neurally or cognitively realistic.
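
As a minimal illustration of what such a hand-coded representation looks like (a Python sketch of my own, not code from any of the cited papers), the entire contents of a binary stack can be packed into a single analog value in the unit interval, with push and pop realised as affine maps of the kind a neural unit could implement:

    # Sketch: a binary stack encoded as one analog value in [0, 1),
    # in the spirit of the fractal encodings used in hand-coded
    # Turing-equivalence constructions. Illustrative only.

    def push(s: float, bit: int) -> float:
        """Shrink the current stack into a quarter of [0, 1)."""
        return s / 4 + (2 * bit + 1) / 4  # bit 0 -> [1/4, 1/2), bit 1 -> [3/4, 1)

    def top(s: float) -> int:
        """The top bit is visible from which half the value lies in."""
        return 1 if s >= 0.5 else 0

    def pop(s: float) -> float:
        """Invert the push map."""
        return 4 * s - (2 * top(s) + 1)

    s = 0.0                                # the empty stack
    for b in (1, 0, 0):
        s = push(s, b)
    while s > 0:
        print(top(s)); s = pop(s)          # prints 0, 0, 1: last in, first out

Trained networks, by contrast, rarely discover such crisp encodings, which is precisely what makes the comparison interesting.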

Gradient descent algorithms, for example, are quite powerful; however, they are not biologically or cognitively realistic for two main reasons: they are fully supervised, and they use synapses antidromically. For reinforcement learning algorithms it is the opposite: they are cognitively more plausible, and even neurally plausible implementations exist; however, many problems that can be learnt with gradient descent cannot be learnt with reinforcement learning (Grüning 2007).

I am also interested in cognitive science, especially in modelling language processing. Collaborators and I are exploring the domain (in)dependence of statistical learning in human subjects, using visual stimulus sequences whose correlational structure is otherwise typical of language processing.

Biological Modelling

I am also interested in modelling biological systems at the systems level, especially population dynamics. Populations can be studied as dynamical systems in mathematics and modelled, for example, with agent-based computer simulations.
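
As a toy example of the dynamical-systems view (a Python sketch of my own; the choice of model and all parameter values are merely illustrative), the classic Lotka-Volterra predator-prey equations can be simulated in a few lines:

    # Sketch: Lotka-Volterra predator-prey dynamics with a simple
    # Euler integration step. Parameter values are arbitrary.
    alpha, beta, delta, gamma = 1.0, 0.1, 0.075, 1.5
    x, y = 10.0, 5.0                       # prey and predator populations
    dt = 0.001
    for _ in range(50_000):
        dx = alpha * x - beta * x * y      # prey grow and get eaten
        dy = delta * x * y - gamma * y     # predators feed and die off
        x, y = x + dt * dx, y + dt * dy
    print(f"prey = {x:.1f}, predators = {y:.1f}")

An agent-based simulation would instead track each individual explicitly and recover such equations only on average, which makes it the more natural tool when populations are small or spatially structured.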

In the Spring Semester 2013 I will spend 0.6 FTE on a MILES mini-sabbatical to do research on robust microbial communities.


Programming Interests


Teaching

Currently I am teaching the following modules. Teaching material can be accessed via SurreyLearn:
AS 2009 — AS 2012
COM2033 Advanced Object-Oriented Programming
SS 2013
COM2027 (formerly COM2015) Software Engineering Project
COMM031 Collective Intelligence
I have also taught the following modules:
AS 2011
COM1031 Computer Logic together with Dr Jonathan Clark.
COM2003 Object-Oriented Software Engineering
SS 2008 — SS 2012
COM2027 (formerly COM2015) Software Engineering Project
SS 2008 — SS2009
CS187/COM1007 Cognitive Processes



Dissertation Projects

List of student projects

Learning algorithms for neural networks that are efficient and biologically realistic at the same time

How can we find training algorithms for artificial neural networks that are both efficient and biologically realistic?

PhD, MSc

Artificial neural networks are both biologically inspired models of the nervous system and trainable computational devices. They can be trained with gradient-descent learning algorithms such as back-propagation. However, back-propagation is not biologically realistic because it requires extensive circuitry to calculate error gradients. So-called weight-perturbation methods have been suggested that do without this circuitry: they estimate only an approximation to the true error gradient by making clever use of "noise". "Noise", i.e. random perturbation, is also used in genetic training algorithms, which do not follow a gradient at all but use trial and error in an evolutionary way.

One aim of this project is to compare the performance of weight perturbation and genetic algorithms and develop some ideas for their improvement.
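
The following Python sketch (my own illustration, loosely in the spirit of the weight-perturbation rule discussed in reference 2 below; the task and all parameters are made up) shows the core idea: correlate random "synaptic noise" with the resulting change in a single scalar error, so that no gradient circuitry is needed.

    import numpy as np

    # Sketch: weight perturbation on a toy linear regression task.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 3))              # toy inputs
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true                            # toy targets

    def error(w):
        return np.mean((X @ w - y) ** 2)

    w = np.zeros(3)
    eta, sigma = 0.01, 1e-3
    for _ in range(5000):
        xi = rng.normal(scale=sigma, size=3)  # random perturbation ("noise")
        dE = error(w + xi) - error(w)         # one scalar feedback signal
        w -= eta * (dE / sigma**2) * xi       # correlate noise with outcome
    print(w)                                  # approaches w_true, no gradient computed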

References

  1. André Grüning. Elman backpropagation as reinforcement for simple recurrent networks. Neural Computation, 19(11), 2007.
  2. Benjamin A. Rowland, Anthony S. Maida, and Istvan S. N. Berkeley. Synaptic noise as a means of implementing weight-perturbation learning. Connection Science, 18(1):69--79, 2006.
  3. Genetic algorithms in Wikipedia and references therein.

Computational properties of simple recurrent networks

The aim is to understand what types of computations artificial neural networks are capable of when they are trained. What is the computational power of an artificial neural network? Can it process complicated formal languages? And is there a relation to the grammar of natural languages?

Depends a lot on where one wants to put the focus: MSc, PhD.

It is well established that simple recurrent networks can learn so-called regular languages, i.e. formal languages without recursive structure that require only finite memory. In this project we want to explore whether and how artificial neural networks can also process more complicated recursive structures that are interesting from both a computational and a linguistic viewpoint.
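
To make the counting idea concrete (a hand-written Python sketch in the spirit of reference 3 below, not a trained network): a single analog counting unit already lifts a finite-state device to the context-free language a^n b^n.

    # Sketch: one counter suffices to decide a^n b^n, which no
    # finite-memory (regular) device can do. Illustrative only.
    def accepts(s: str) -> bool:
        count, seen_b = 0, False
        for ch in s:
            if ch == 'a':
                if seen_b:
                    return False   # no a may follow a b
                count += 1         # count up on a's
            elif ch == 'b':
                seen_b = True
                count -= 1         # count down on b's
                if count < 0:
                    return False   # more b's than a's so far
            else:
                return False
        return count == 0          # all a's matched by b's

    print(accepts("aaabbb"), accepts("aabbb"))   # True False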

References

  1. Jeffrey L. Elman, Elizabeth A. Bates, Mark H. Johnson, Annette Karmiloff-Smith, Domenico Parisi, and Kim Plunkett. Rethinking Innateness: A Connectionist Perspective on Development. MIT Press, Cambridge, Mass., 1996.
  2. Mikael Bodén and Alan Blair. Learning the dynamics of embedded clauses. Applied Intelligence, 19(1/2):51--63, 2003.
  3. Paul Rodriguez. Simple recurrent networks learn context-free and context-sensitive languages by counting. Neural Computation, 13:2093--2118, 2001.
  4. André Grüning. Stack- and queue-like dynamics in recurrent neural networks. Connection Science, 18(1):23--42, 2006.

Computational modelling of natural language

Natural languages are a phenomenon that is hard to grasp in computational terms because so many different ingredients enter into them: phonology, morphology, syntax, semantics, world knowledge, information theory ...

Depending on the particular problem chosen in this domain: PhD, MSc, BSc.

In this project, the student will pick an interesting problem from the vast area of natural language and build a computational model for it. Two examples of more concrete project ideas could be:

  1. modelling the pattern of distributions of full nouns or proper names (the dog, the cat, Alice, Bob, the tree ...) and referential expressions (I, she, him, its ...) in a text. Think about the slightly different meanings of the following sentences: 1. After she had left the party, Alice went home. 2. After Alice had left the party, she went home. 3. After she had left the party, she went home. 4. After Alice had left the party, Alice went home.
  2. modelling the distribution of mass and count nouns across different languages. A count noun is one that can easily be counted: one tree, two trees, three trees, many trees. A mass noun is one that cannot easily be counted without changing the basic meaning of the word: salt, much salt, but "two salts"? Different languages seem to treat different nouns differently in this respect.

References

  1. André Grüning and Andrej A. Kibrik. Modeling referential choice in discourse: A cognitive calculative approach and a neural network approach. In António Branco, Tony McEnery, and Ruslan Mitkov, editors, Anaphora Processing: Linguistic, Cognitive and Computational Modelling. John Benjamins, Amsterdam, 2004.


Admin Roles

I currently have the following admin roles. Former admin roles include the following:

Academic CV Sketch

since 09/2007
Lecturer, Department of Computing, University of Surrey, UK.
10/2004 — 09/2007
Research fellow in Alessandro Treves' Computational Neuroscience Group at SISSA, Trieste, Italy.
04/2004 — 09/2004:
Research fellow in Nick Chater's Cognitive Psychology Group at the Department of Psychology, University of Warwick, UK.
02/2004:
Ph.D. (Dr. rer. nat.) in Computer Science, University of Leipzig.
03/2000 — 03/2004:
Member of the Neural Networks and Cognitive Systems group (head: Jürgen Jost and Thomas Wennekers) at the Max-Planck-Institute for Mathematics in the Sciences, Leipzig, Germany.
July 1999:
Diploma (equiv. M.Sc.) in Theoretical Physics, University of Göttingen.
10/1993 — 02/2000:
UG and PG studies of physics and mathematics at the University of Göttingen, Germany, and, for the academic year 1995/6 (Erasmus Exchange Programme), at the University of Uppsala, Sweden.


Publications

Preprints

A. Grüning and A. Vinayak PG. The accumulation theory of ageing. Preprint.

2012

I. Sporea and A. Grüning. Supervised learning in multilayer spiking neural networks. Neural Computation, 2012. In Press, Preprint.

A. Grüning and I. Sporea. Supervised learning of logical operations in layered spiking neural networks with spike train encoding. Neural Processing Letters, 36(2), 117--134, 2012, Preprint. Ca. 22 pages.

S. Notley and A. Grüning. Improved spike-timed mappings using a tri-phasic spike timing-dependent plasticity rule. In Proceedings of the International Joint Conference on Neural Networks. 2012. Preprint.

J. Chrol-Cannon, A. Grüning and Y. Jin. The emergence of polychronous groups under varying input patterns, plasticity rules and network connectivities. In Proceedings of the International Joint Conference on Neural Networks. 2012. Preprint.

I. Sporea and A. Grüning. Classification of distorted patterns by feed-forward spiking neural networks. In Proceedings of the International Conference on Artificial Neural Networks, Lecture Notes in Computer Science. Springer, 2012. Preprint.

N. Yusoff, A. Grüning and S. Notley. Pair-associate learning with modulated spike-time dependent plasticity. In Proceedings of the International Conference on Artificial Neural Networks, Lecture Notes in Computer Science. Springer, 2012. Preprint.

P. Ioannou, M. Casey and A. Grüning. Evaluating the effect of spiking network parameters on polychronization. In Proceedings of the International Conference on Artificial Neural Networks, Lecture Notes in Computer Science. Springer, 2012. Preprint.

N. Yusoff and A. Grüning. Biologically inspired sequence learning. In International Symposium on Robotics and Intelligent Sensors (IRIS), Procedia Engineering. Elsevier, 2012. Preprint.

N. Yusoff and A. Grüning. Learning anticipation through priming in spatio-temporal neural networks. In Proceedings of ICONIP 2012, Part I, vol. 7663 of Lecture Notes in Computer Science, pp. 168 sqq. 2012. Preprint.

2011

I. Sporea and A. Grüning. Reference time in SpikeProp. In Proceedings of the International Joint Conference on Neural Networks (IJCNN). IEEE, San Jose, CA, August 2011. Preprint.

N. Yusoff, I. Sporea and A. Grüning. Neural networks in cognitive science -- an introduction. In P. Lio and D. Verma (eds.), Biologically Inspired Networking and Sensing: Algorithms and Architectures. IGI Global, Hershey, PA, 2011.

2010

N. Yusoff and A. Grüning. Supervised associative learning in spiking neural network. In K. Diamantaras, W. Duch and L. Iliadis (eds.), ICANN (1), vol. 6352 of Lecture Notes in Computer Science, pp. 224--229. Springer, 2010.

I. Sporea and A. Grüning. Modelling the McGurk effect. In ESANN 2010 proceedings, European Symposium on Artificial Neural Networks - Computational Intelligence and Machine Learning. Brugge, 2010. Preprint.

I. Sporea and A. Grüning. A distributed model of memory for the McGurk effect. In Proceedings of the International Joint Conference on Neural Networks (IJCNN). IEEE, Barcelona, 2010. Preprint.

2009

N. Yusoff, A. Grüning and A. Browne. Modelling the Stroop Effect: Dynamics in inhibition of automatic stimuli processing. In Proceedings of the 2nd International Conference in Cognitive Neurodynamics (ICCN 2009), Lecture Notes in Computer Science. Springer, 2009. Preprint.

I. Sporea and A. Grüning. Modelling of the McGurk effect. In Frontiers in Behavioral Neuroscience. Conference Abstract: 41st European Brain and Behaviour Society Meeting. 2009.

N. Yusoff, A. Grüning and A. Browne. Competition and cooperation in colour-word Stroop Effect: An association approach. In Frontiers in Behavioral Neuroscience. Conference Abstract: 41st European Brain and Behaviour Society Meeting. 2009.

2007

A. Grüning. Elman backpropagation as reinforcement for simple recurrent networks. Neural Computation, 19(11), 3108--3131, 2007, Preprint.

2006

A. Grüning. Stack- and queue-like dynamics in recurrent neural networks. Connection Science, 18(1), 23--42, 2006, Preprint.

A. Grüning and A. Treves. Distributed neural blackboards could be more attractive. Behavioral and Brain Sciences, 29(1), 79--80, 2006, Preprint.

2005

A. Grüning. Back-propagation as reinforcement in prediction tasks. In W. Duch, J. Kacprzyk, E. Oja and S. Zadrozny (eds.), Proceedings of the International Conference on Artificial Neural Networks (ICANN'05), vol. 3697 of LNCS, pp. 547--552. Springer, Berlin, Heidelberg, 2005. Preprint.

A. Grüning. Dynamic representations of stack- and queue-like syntactic structures. In A. Cangelosi, G. Bugmann and R. Borisyuk (eds.), Proceedings of the Ninth Neural Computation and Psychology Workshop Modelling Language, Cognition, and Action (NCPW9). World Scientific, New Jersey, 2005.

2004

A. Grüning and A. A. Kibrik. Modeling referential choice in discourse: A cognitive calculative approach and a neural network approach. In A. Branco, T. McEnery and R. Mitkov (eds.), Anaphora Processing: Linguistic, Cognitive and Computational Modelling. John Benjamins, Amsterdam, 2004. Preprint.

2003

A. Grüning and A. A. Kibrik. A neural network approach to referential choice. In I. Kobozeva, N. Laufer and V. Selegey (eds.), Computational Linguistics and Intellectual Technologies -- Proceedings of the Dialogue 2003 International Conference, Protvino, pp. 260--266. Nauka, Moscow, 2003.

2002

A. Grüning and A. A. Kibrik. Referential choice and activation factors: A neural network approach. In A. Branco, T. McEnery and R. Mitkov (eds.), Proceedings of the 4th Discourse Anaphora and Anaphor Resolution Colloquium (DAARC 2002). Edições Colibri, Lisbon, 2002. Preprint.

Theses

A. Grüning. Neural Networks and the Complexity of Languages. Doctoral dissertation, School of Mathematics and Computer Science, University of Leipzig, 2004. Abstract.

A. Grüning. Ladungssektoren positiver Energie im Hochenergielimes des Schwinger-Modells [Charge sectors of positive energy in the high energy limit of the Schwinger model]. Diploma (Master's) thesis, Institute for Theoretical Physics, University of Göttingen, 1999.


Private Interests

These include

Links

