1950-1 – Maze Runner – Ian P. Howard (England)


[Photo: Maze Runner – Ian P. Howard (England)]

Ian P. Howard with his Maze Runner (Photo supplied by Ian P. Howard – April 2008)

[Photo: Maze Runner – Ian P. Howard (England)]

 Note the novel use of hexagonal tiles to give the branched track.

[Photo: Maze Runner – Ian P. Howard (England)]


Prof. Dr. Ian P. Howard
Distinguished Research Professor of Psychology and Biology
Founder of the Centre for Vision Research
York University
Biography (by Prof. Dr. Wim van de Grind)
Ian Howard's first publication appeared in 1953, one year after he obtained a BSc degree from the University of Manchester, UK. Fourteen publications and a book later he obtained his PhD in experimental psychology from Durham University. These early publications range from an electromechanical maze runner, through colour vision, to interocular transfer (in Nature). Ian Howard has been, and remains, a towering figure in the fields of human spatial orientation (the title of his first book in 1966, with W. B. Templeton), human visual orientation (his second book, 1982), binocular vision and stereopsis (third book, 1995 and 2002, with B. J. Rogers) and seeing in depth (fourth book, 2002). His innovative research has been widely cited and he has been an invited and highly inspiring speaker at innumerable conferences. His books have become standard works in their fields: one reason is their encyclopaedic character, another is the order they have created in large parts of perception research. Ian Howard is also famous for his amazingly innovative experimental contrivances and set-ups, which have attracted scientists from all over the world to his lab. If you want to hang upside down in a tilted room or tickle your acceleration sensors, visit Ian. These wonderful devices have (among other things) enabled the precise study of conflicts between the vestibular and visual systems, and made it possible to study and explain several disorientation syndromes and illusions experienced by pilots and astronauts in their demanding working environments. He also designed experiments for Spacelab aboard the Space Shuttle, and wrote recommendations to improve spatial orientation during space flights. Ian Howard is a highly original scientist, who found several new and fruitful ways to unravel the interplay between the partly independent control systems of our spatial orientation and navigation.
In one approach he showed that optokinetic eye-movements (OKN) induced by illusory self-motion (vection) have separable head-centric, world-centric and oculocentric components. He also reported that OKN is only evoked by stimuli moving in the plane of binocular convergence, which means that the (cortical) stereoscopic system also influences the (subcortical) OKN centres. These insights made OKN pivotal in studies of multisensory spatial orientation. It is only one example of original work by Ian Howard and coworkers, illustrating very nicely how studies at the behavioural level can give deep insights into the inner workings of the brain. Interestingly, Ian has produced more scientific publications since his retirement (1993) than before it, even though he has always been the highly productive scientist with wide-ranging interests and capabilities that he is today.


Howard, I. (1953). A note on the design of an electro-mechanical maze runner. Technical Report 3, University of Durham, Durham – see PDF here.

In April of 2008 I was in contact with Ian, now based in Canada. Ian recalls seeing Grey Walter's Tortoise well before the Festival of Britain in 1951. It inspired him "to make a mobile machine that would solve a problem and do more than avoid obstacles." It is interesting to note here that the full richness of M. Speculatrix, let alone the short-lived M. Docilis (CORA in tortoise form), is difficult to observe under normal conditions: light intensity and battery charge level are variables that change the behaviour, so to a casual observer the tortoise would appear less interesting.

It ran over a maze with a probe that felt its way along grooves between hexagonal plates. On its first run it turned right at every junction. Dead ends, consisting of vertical pegs, were inserted at selected points, so that the maze pattern could be changed. When a switch on one or other end of the runner hit a dead-end peg, the machine reversed. When the machine reached the end of the maze it was put back at the start. On the second run it ran to the end and avoided all dead ends. A series of switches on a circular disc, which can be seen on the right of the machine, recorded the sequence of correct turns.
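The two-run procedure just described can be sketched in a few lines of code. This is a hypothetical reconstruction, not Howard's relay circuitry: a maze is reduced to the correct turn at each junction, the first run tries right everywhere and records whichever turn actually worked, and the second run simply replays the record.

```python
# Hypothetical sketch of the two-run procedure described above;
# names and data layout are illustrative, not Howard's design.

def first_run(maze):
    """maze: the correct turn ('L' or 'R') at each junction, in order."""
    memory = []                    # stands in for the circular memory disc
    for correct in maze:
        attempt = 'R'              # the runner turns right at every junction
        if attempt != correct:     # a switch hits the dead-end peg...
            attempt = 'L'          # ...so the machine reverses and takes the other branch
        memory.append(attempt)     # the disc records the turn that worked
    return memory

def second_run(memory):
    """Replayed from memory: the runner avoids every dead end."""
    return list(memory)

maze = ['L', 'R', 'R', 'L', 'R']
memory = first_run(maze)
print(second_run(memory) == maze)  # True: an error-free second run
```

The essential point the sketch captures is that a single pass of systematic exploration suffices, because every dead end encountered on the first run identifies the correct turn by elimination.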

Ian was an undergraduate at Manchester University in 1951, the same year in which Ferranti installed the Mark 1 computer there. Alan Turing was at Manchester University at the time. The maze runner was shown on TV in the North of England in about 1954. Ferranti considered making a version for the Festival of Britain in 1951, but decided instead on an electronic version of NIM. Ian went to the Festival of Britain and played the game on the machine; he knew the winning strategy. He was a close friend of Dietrich Prinz, who worked for Ferranti; both were members of the Manchester International Club. Prinz showed Ian his end-game chess programme running on the Ferranti computer at the university in 1951.

Ian believed it to be the first maze runner, unaware of the US-based Smith/Ross 'rat' of 1935. As can be seen from the photographs above, Ian's maze runner and its maze still exist.

Ian went on to study human visual perception; a record of this can be found under Howard IP in PubMed. Readers may remember the Great International Egg Race of 1972; Ian's machine came in second.

extract from
Cybernetics and Mental Functioning
R. Thomson; W. Sluckin
The British Journal for the Philosophy of Science, Vol. 4, No. 14. (Aug., 1953), pp. 130-146.

2 Learning
By learning we mean acquiring skills, habits, and ideas. Learning goes on as we solve problems or even as we fail to solve them.
Clearly, much learning takes place by trial and error. Solving a problem makes it more likely that, in future, success in a similar situation will be achieved more easily and speedily.
In learning a manual skill we try persistently until all the incorrect moves have been eliminated. In acquiring a habit less and less effort is wasted on eliminating incipient wrong moves until the habit becomes ingrained. Obtaining knowledge, too, consists essentially in developing correct reactions to mental clues. It may be said that negative feed-back ensures that unless success is complete we go on trying.
At the turn of the century E. L. Thorndike formulated the Law of Effect. Briefly, this states that successful features of behaviour are stamped in, and the unsuccessful ones eliminated. In other words, each success modifies subsequent activity of the learner in the direction of increasing the probability of selecting such a step again, and each failure leads to a lesser probability of subsequently attempting a similar step. In the simplest case, a failure will make it certain that the wrong move will not be repeated, while a success will make it certain that the learner will make the correct move again on a future occasion.
It is theoretically not difficult to design a machine that will learn in this sense of the word. Mr I. P. Howard of the Department of Psychology, University of Durham, has actually constructed a 'mechanical rat' which will run any maze (provided the width of the lanes is within certain limits). It will, of course, make mistakes during the first trial, though it will always successfully complete the run. On subsequent attempts it will make no mistakes. It will have 'learned' not to do so by 'experience'.

To make mechanical learning more like the trial-and-error learning of living creatures, a mechanism could be made capable of initially responding to a situation in a number of ways; and which response it will make can be made a matter of chance. Once, however, the response labelled correct has been made, the chances of it being made again on a subsequent occasion will be a little greater than before, and so on, until after some time the probability of a correct response becomes virtually a certainty. Such a mechanism will 'learn by experience' just as an organism learns by experience.
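The stochastic mechanism imagined in this paragraph can be illustrated with a toy simulation. The two-response setup, the step size and the update rule here are assumptions for illustration, not from the source:

```python
import random

def learn_by_effect(correct, responses=('L', 'R'), trials=200, step=0.1, seed=0):
    """Toy Law-of-Effect learner: a success raises the probability of
    repeating that response; a failure lowers it (illustrative only)."""
    rng = random.Random(seed)
    weight = {r: 1.0 / len(responses) for r in responses}  # start at chance
    for _ in range(trials):
        choice = rng.choices(list(weight), weights=list(weight.values()))[0]
        if choice == correct:
            weight[choice] = min(1.0, weight[choice] + step)  # stamped in
        else:
            weight[choice] = max(0.0, weight[choice] - step)  # eliminated
    return weight

w = learn_by_effect('L')
# after training, weight['L'] has grown toward 1.0 and weight['R'] toward 0.0
```

Because the correct response is only strengthened and the wrong one only weakened, the correct response eventually becomes, in the text's phrase, "virtually a certainty".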
Overt trial-and-error is, of course, not the only kind of learning.
We have not so far mentioned conditioning, or learning by association, or learning by insight. In recent years it has become increasingly clear to psychologists that the differences between the various kinds of learning are not of a fundamental nature. 'The "kind" of learning which the experimenter finds depends on the nature of the problem which he sets, on what he is looking for, on what aspects of the subject's activity he chooses to observe.' (Kellogg, 1948)
It may be plausibly maintained that in every kind of learning incorrect responses, whether overt or incipient, are eliminated, and correct ones are stamped in. And this makes learning a process which, basically, can be imitated by a machine.
The most primitive view of learning is associated with what Popper calls 'the bucket theory of mind', a primitive mechanical model. More advanced views of learning are associated with the theory that knowledge manifests itself as modifications of reaction to the external environment. This involves a more elaborate mechanical (or electronic) model.
Learning is associated with intelligence. This has been described by reference to behaviour only, as 'the property of reacting on a basis of probability as determined by the individual organism's incomplete sensory samples of the environment' (Coburn, 1951). It is interesting that it has been possible to design an electrical circuit, or, to put it more picturesquely, a 'hypothetical nervous system', in conformity with data from behaviour experiments, which will exhibit intelligent behaviour in this sense.



First published 1954
Made and printed in Great Britain

An interesting development has been the construction of electro-mechanical maze-runners which learn to solve maze problems by trial and error. This opens the possibility of the imitation by machine of other forms of trial-and-error learning.
Several quite 'sophisticated' maze learners have recently been described (see Chapter 2). At the present time, 'mechanical rats' solve mazes by trial and error, following the procedure of systematic exploration. The 'rat' runs the maze for the first time, follows some systematic method of search, registers all the errors it has made, and, when placed at the entrance to the maze for the second time, runs straight through to the goal without making any mistakes.

The process consists of two stages: (a) the solving of the problem, and (b) the remembering of the solution. The solving of the problem need not necessarily be done by systematic exploration. Theoretically, random exploration, wasteful and protracted as this method would be, could also ultimately lead to the solving of a maze. Consider the simplest situation where all lane junctions are T-junctions. The probability that the machine will go the right way at the first junction to which it comes is 0.5 (one chance in two). The probability that it will go the right way at the second junction is also 0.5. Therefore the probability that the machine will negotiate the two junctions correctly is 0.5 x 0.5, that is 0.25. The probability that a machine will negotiate a series of such junctions correctly, that is, that it will solve a maze by sheer chance, is quite small. Nevertheless, chance behaviour or random exploration does not preclude a solution, though probably only after very many unsuccessful trials.
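The arithmetic above generalizes directly: with n successive T-junctions, each an independent fifty-fifty choice, the chance of a flawless random run is 0.5 raised to the power n. A one-line sketch makes the shrinkage vivid:

```python
# Chance that random exploration negotiates n successive T-junctions
# without a single wrong turn: each junction is an independent coin flip.

def chance_of_perfect_run(n_junctions):
    return 0.5 ** n_junctions

print(chance_of_perfect_run(2))   # 0.25, as in the text
print(chance_of_perfect_run(10))  # 0.0009765625: roughly 1 in 1000
```

Even a modest maze of ten junctions defeats pure chance about 999 times in 1000, which is why the text concludes that random search succeeds only "after very many unsuccessful trials".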
What sort of procedure does a living rat follow when it has to run a maze the layout of which is not open to inspection? Close observation shows that a live rat running a maze is somewhere half-way between the two theoretically extreme methods of search for a solution; the rat's exploration is neither fully systematic nor entirely random. Small wonder that the 'natural' approach is of this kind. The systematic behaviour is economical but too rigid to meet all possible situations which the animal might encounter in life. The chance behaviour is theoretically good in any situation but much too uneconomical to be practicable. The behaviour of a real animal avoids the worst, if it does not quite get the best, of the two worlds.
There are also two extreme possibilities in the remembering of the achieved solution: the rigidly correct repetition or the chance behaviour. The purely chance behaviour allows for no profiting whatsoever from experience. But the rigidly correct repetition leaves no room for further improvement when conditions have somewhat altered, for instance when a short-cut has been created in the maze.
A 'mechanical rat' learns a maze perfectly during the first or trial run. On subsequent runs it will make no errors. A real rat learns a maze gradually; it has to run a maze a number of times before it succeeds in eliminating all errors. Now, theoretically, though perhaps not easily in practice, a 'probability device' could be built into a mechanical maze runner. An error made would then not entirely ensure a correct move on a subsequent occasion. It would merely increase by a given amount the probability of a correct move on a subsequent occasion. Fewer and fewer mistakes would be made during each run, until errors would become very unlikely, and correct, straight-through runs assured.
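The 'probability device' the paragraph imagines can be simulated in a few lines. Everything here is an illustrative assumption (the step size, run count and per-junction encoding); no such device was built into the actual machine. The point is the rat-like error curve: errors dwindle gradually from run to run rather than vanishing after one trial.

```python
import random

def gradual_maze_learning(n_junctions=5, runs=30, step=0.15, seed=1):
    """Hypothetical gradual learner: one probability of a correct turn
    per junction; each error nudges that probability upward, so error
    counts fall run by run (illustrative sketch only)."""
    rng = random.Random(seed)
    p_correct = [0.5] * n_junctions        # every junction starts at chance
    errors_per_run = []
    for _ in range(runs):
        errors = 0
        for i in range(n_junctions):
            if rng.random() >= p_correct[i]:                  # wrong turn
                errors += 1
                p_correct[i] = min(1.0, p_correct[i] + step)  # strengthen
        errors_per_run.append(errors)
    return errors_per_run

errors = gradual_maze_learning()
# early runs contain several errors; late runs approach zero
```

Plotting `errors` against run number would give the familiar negatively accelerated learning curve of a live rat, in contrast with the one-trial, all-or-nothing learning of the 'mechanical rat'.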
Finally arises the problem of the imitation of insightful behaviour. The real question here is what constitutes a reasonable degree of imitation of such behaviour. Obviously, an artefact which simply reacts correctly to a signal will not do. There must be the possibility and the avoidance of error.
To satisfy our basic requirements such an artefact would have to solve problems by trial-and-error in some situations and by 'insight' in others. Preferably it would have to solve the same problem by both means: if given some clues only, it would proceed by trial and error; if given more and sufficient clues, it would immediately act in the correct manner.

Learning to Run Mazes
For over a quarter of a century animal psychology has used rats, and sometimes other animals, to run mazes. Stylus mazes, which blindfolded human subjects learn to solve with their fingers, are also customarily found in psychological laboratories. Animals and human beings appear to tackle the solution of mazes by trial and error. Can this process be imitated by machine?
In 1938, T. Ross, in America, described a device capable of running and learning a simple maze. The maze runner moved on a network of toy train tracks. Another such device, rather more elaborate, was built in 1952 (also in America) by R. A. Wallace. It, too, ran on rails. Once it had solved a maze, it would 'remember' the solution and, on a subsequent occasion, run straight through without making errors.
The versatile Shannon, too, interested himself in the construction of a mechanical maze solver. He demonstrated in 1951 a maze-solving machine of somewhat different construction. A panel of 25 squares may be made into a maze by fixing a set of movable partitions upon it. The maze may be altered at will by a re-arrangement of the partitions. The maze is explored by a 'sensing finger' which can 'feel' the walls of the maze as it comes against them. The machine has to guide the finger through the maze to the goal. The goal, in the form of a pin, can also be moved into any of the 25 squares. Shannon's maze solver runs the maze for the first time following the 'exploration strategy'. Its errors are registered by its 'memory'. In the second run, the maze solver follows the 'goal strategy' and makes no further errors.
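The two-phase behaviour attributed to Shannon's machine (an 'exploration strategy' whose results are registered in memory, then an error-free 'goal strategy') can be sketched in miniature. The grid, wall layout and search order below are invented for illustration; the real machine worked with relay circuits and a stored direction per square, not Python dictionaries.

```python
# Simplified sketch of exploration-then-goal strategy in the style
# described for Shannon's maze solver: a depth-first search records,
# for each visited square, the move that led onward toward the goal;
# a second run just follows those records.

def explore(adjacency, start, goal):
    """First run: systematic (depth-first) exploration.
    Returns {square: next_square} along the discovered route."""
    memory = {}
    def dfs(cell, visited):
        if cell == goal:
            return True
        visited.add(cell)
        for nxt in adjacency[cell]:          # try exits in a fixed order
            if nxt not in visited and dfs(nxt, visited):
                memory[cell] = nxt           # remember the move that worked
                return True
        return False                         # blind alley: back out
    dfs(start, set())
    return memory

def goal_run(memory, start, goal):
    """Second run: no exploration, just replay the memory."""
    path, cell = [start], start
    while cell != goal:
        cell = memory[cell]
        path.append(cell)
    return path

# A tiny 4-square "maze": 0 - 1 - 3 is the route, 2 is a blind alley.
adjacency = {0: [2, 1], 1: [0, 3], 2: [0], 3: [1]}
memory = explore(adjacency, start=0, goal=3)
print(goal_run(memory, 0, 3))  # [0, 1, 3]
```

Note how the blind alley (square 2) is entered during exploration but leaves no trace in memory, so the goal run skips it entirely, just as the text describes.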
In 1950, entirely independently and unaware of any previous such attempts, I. P. Howard of the University of Durham began constructing a model 'rat', or an electro-mechanical maze-runner, as it has been called; he described it fully in 1953. Howard's 'rat' will run any maze provided the width of the lanes is within certain limits. It runs on three wheels and has three spring-loaded 'feelers', one on each side and one in front. Its 'body' consists essentially of small motors, a set of electromagnetic relays and a 'memory' wheel. Like Shannon's maze solver, the 'rat', when placed at the entrance of the maze, begins a systematic exploration; it has been arranged that it should do this by always turning to the left in the first place. The 'rat' eliminates all the blind alleys, and in doing so registers its 'errors' on the 'memory' wheel, which controls the settings of the various relays. On second and subsequent runs, the 'rat' makes no further mistakes. Howard's maze runner can learn any maze. The mazes used can be the same as those used by live rats. Such mazes are conveniently re-arranged by shifting partitions which have legs plugging into holes in a large metal panel.
More recently, J. A. Deutsch of the University of Oxford has constructed a 'machine with insight'. This, the most advanced of its kind, can not only learn simple mazes, but takes advantage of short-cuts when such are introduced. It can also learn two mazes, and when the two share a common point, the machine can find its way 'to whichever of the two goal boxes the demonstrator makes it seek' without any further trial and error.
It might be wondered whether devices of this description are more than mere toys. What is the value and significance of the 'mechanical animals', conditioned reflex models and maze runners? Do they in any way help in the study of learning processes? The answer to these and similar questions must be deferred until Chapter 7, in which learning and problem solving are discussed at some length.
Berkeley, E. C. Giant Brains. Wiley, New York, 1949.
Booth, A. D., and Booth, K. H. V. Automatic Digital Calculators. Butterworth, London, 1953.
Bowden, B. V. (Ed.). Faster than Thought. Pitman, London, 1953.
Hartree, D. R. Calculating Instruments and Machines. C.U.P., Cambridge, 1950.
Murray, F. J. The Theory of Mathematical Machines. King's Crown Press, New York, 1948.
Walter, W. Grey. The Living Brain. Duckworth, London, 1953.
Ashby, W. R. 'Design for a Brain', Electronic Engineering, Vol. 20, 1948.
Ashby, W. R. 'Mechanical Chess Player', Cybernetics, Trans. Ninth Conf. (Ed. Foerster, H. von), Macy Foundation, New York, 1953.
Deutsch, J. A. 'A Machine with Insight', Quarterly Journal of Experimental Psychology, Vol. 6, 1954.
Howard, I. P. 'A Note on the Design of an Electro-mechanical Maze Runner', Durham Research Review, No. 3, 1953.
Maginniss, F. J. 'Differential Analyzer Applications', General Electric Review, Vol. 48, 1945.
Mays, W., and Prinz, D. G. 'A Relay Machine for the Demonstration of Symbolic Logic', Nature, Vol. 165, 1950.
McCallum, D. M., and Smith, J. B. 'Feed-back Logical Computers', Electronic Engineering, Vol. 23, 1951.
McCallum, D. M., and Smith, J. B. 'Mechanical Reasoning: Logical Computers and their Design', Electronic Engineering, Vol. 23, 1951.
Shannon, C. E. 'Programming a Computer for Playing Chess', Philosophical Magazine, Vol. 41, 1950.
Shannon, C. E. 'Presentation of a Maze-Solving Machine', Cybernetics, Trans. Eighth Conf. (Ed. Foerster, H. von), Macy Foundation, New York, 1951.
Slater, E. 'Statistics for the Chess Computer and the Factor of Mobility', Symposium on Information Theory, Ministry of Supply, London, 1950 (reprinted 1953).


This entry was posted on Friday, November 13th, 2009 at 3:02 pm and is filed under Maze Solvers.
