Some pointers to press and other media interest in the work on the Cognition and Affect project and the CoSy (Cognitive Systems for Cognitive Assistants) project at the University of Birmingham.
The use of that title, which had absolutely nothing to do with the content of my work, and of the word 'robonerd' in the printed version, is typical of the junk produced by modern journalists and presenters who don't realise that patronising the public is no way to inform and educate them.
The article makes a number of completely unjustified assumptions about what I expect robots with the abilities of human toddlers to be able to do (e.g. I nowhere suggested that "we're talking about a machine that could spawn a race of cyber-nerds capable of creating entirely new forms of mathematics".)
The actual aim of building a working system (still a long way off) would be to test a theory by showing that it is capable of explaining how toddlers learn what they do and how some of them can develop into mathematicians. Of course, that sort of test would have to be complemented by empirical research.
Anyone wanting to get more detailed information about what is being proposed can look at these slide presentations:
a related presentation on www.slideshare.net
My work is more concerned with a search for scientific and philosophical explanations and implications for mathematical education than designing 'robogeeks', but publishers have their own constraints on reporting!
Others commenting on this: http://gadgets.softpedia.com/news/Mathematician-Robot-Heading-our-Way-1643-01.html
Mathematician Robot Heading Our Way Artificial intelligence expert Aaron Sloman wants to break the boundaries of both robotics and mathematics
By Georgiana Bobolicu, Gadgets Editor 3rd of March 2009, 13:22 GMT
This does, however, relate to one of my long-standing concerns about human mathematical capabilities, e.g.
The New Scientist article manages to sum up my comments quite well in a few lines:
Aaron Sloman, another artificial intelligence researcher, at Birmingham University in the UK, says interaction with the environment is vital to intelligence. But he also points out that the human brain is capable of working with concepts not grounded in the physical world.
"That is why you can think about a person in Birmingham whom you have never met," he says. "How does an architect design a new skyscraper, long before there is anything to see, feel, touch, or manipulate?"
There's a link to this on the AITOPICS web site:
A report in the Education Guardian by Chris Arnot (4th Jan 2005) summarised in ACM Technews.
Artificial intelligence We ask whether computers can think in a human fashion
An online version is at http://www.telegraph.co.uk/connected/main.jhtml?xml=%2Fconnected%2F2001%2F07%2F05%2Fecfai05.xml
The article starts
The contribution is included in this article:
AI's greatest trends and controversies, edited by Haym Hirsh and Marti A. Hearst, in IEEE Intelligent Systems and their Applications, Jan/Feb 2000, pp. 8--17. Available online in plain text format at http://www.computer.org/intelligent/articles/AI_controversies.htm and in PDF at http://www.computer.org/intelligent/ex2000/pdf/x1008.pdf
The article starts thus:
The transition to the next millennium gives us an opportunity to
reflect on the past and project the future. In this spirit, we
have asked a set of distinguished scholars and practitioners who
were involved in AI's formative stages to describe, in just a few
paragraphs, the most notable trend or controversy (or nontrend or
noncontroversy) during AI's development.
* 28 March 1998: I have some notes on pride that I wrote as a result of being interviewed for this article, here: http://www.cs.bham.ac.uk/research/cogaff/pride.html
* Magazine issue 2127
"DO birds consider whether the nest they have built is better than the one built in the next tree?" asks Aaron Sloman, professor of artificial intelligence and cognitive science at the University of Birmingham. "I doubt it." If his assessment is correct, it's a safe bet that you won't find a bird that feels pride.

Sloman's point is that being proud isn't easy. At the very least you need a sense of "self" and a way to compare yourself with others. So feeling superior takes a higher level of mental complexity than most animals can muster.

That doesn't apply to all emotions. The fear that keeps animals out of the path of a predator or a speeding car, for instance, is almost universal across species and needs little or no thought. But it's the complex emotions like pride that fascinate Sloman. He believes that they arise naturally from the information-processing ...
The URL is: http://www.newscientist.com/nsplus/insight/emotions/reason4.html
This is actually the fourth page of an article by David Concar in New Scientist Planet Science: You're wrong, Mr. Spock.
(The report says that I don't do any system building. That's not so. See, for instance, the overview of the SIM_AGENT Toolkit. My own summary of my talk at that conference is here.)
An interview with Rikke Magnussen was broadcast on Danish Radio in 1998. Title: ``Computers with feelings (Computer med følelser)''. Online audio version at http://www.dr.dk/harddisk/realaudi/ark98.htm, Real Audio week 11 (REALAUDIO FRA UGE 11), 1998. The audio file is http://www.dr.dk/harddisk/realaudi/sloman.ram (I have not tried listening to it!)
There was a short interview about our work broadcast in the SoundBytes programme on the BBC world service on 8th June 1997. This is online at the BBC world service site: http://www.bbc.co.uk/worldservice. In this interview Violet Berlin asked how we might enable computers to feel and express emotions. E.g. in future should we be able to see computers not just win at chess but also enjoy winning? The audio file is online. (I have not tried listening to it!)
Design for a table of the mind
8 March 1996
By Kam Patel
The article starts:
Researchers working on consciousness could benefit greatly from incorporating notions of design employed by engineers and software creators into their work, according to Aaron Sloman of Birmingham University's school of computer science.
In a lecture last week at the Royal Society of Arts, Professor Sloman, a philosopher, advocated a systems approach to consciousness. He said that much research wrongly uses such words as "conscious", "aware" and "experience" as if they had clear fixed meanings and there were clear distinctions between things they apply to and things they do not apply to.
This generates the illusion that consciousness is something that is either present or absent in an object, and tempts researchers to ask "pseudo-questions" such as which animals have consciousness, how it evolved, whether a robot could have it, and whether consciousness is reducible to physics.
Another tempting mistake is to present consciousness as a matter of degree, as in differences between states of consciousness and differences between animals.
Following a press release from Digital after they sold us some Digital Unix Alphastations late in 1996, there was a flurry of enquiries about our research from reporters and broadcasters, and several articles appeared in computer magazines and newspapers over the following months. E.g. Computer Weekly for May 29th 1997 had an article about our work. Apparently not available online.
See also ONLINE INTERVIEWS ABOUT AI, ALIFE, EMOTIONS, ETC.
An article 'Behind the Brain' by Geoff Watts, apparently from a BBC Radio4 broadcast in 2000 is available here http://www.fortunecity.com/emachines/e11/86/behind3.html
More information on the Cognition and Affect project.
The SIM_AGENT toolkit.
Talks and presentations since about 2001 (PDF)