This is just a brief **thank you** to all who took the trouble to read and respond to my post. I have been swamped with other things, but hope soon to respond.

I'll post here, but will append an edited version of my replies, using only first names and brief summaries of what I am responding to, on the publicly accessible page:

If you do not want your comments to be summarised, please send me a 'NotMine' message, either here or direct to

Likewise, if you don't want even your first name to be used in a summary outside this forum, send a 'NotMe' message. For both, make it 'NotMine+NotMe'. For various reasons I can't provide an externally commentable page.

Some of the issues raised here were partly addressed in a note I posted in January 2012, after there had been quite a lot of ICT-bashing by computer scientists and engineers. At that time I felt babies were being thrown out with the bathwater, and I suggested that we need to broaden education in **ICT** to cover **ACT** (Applying Computing Technology), i.e. using computational thinking and computational tools other than programming languages to solve interesting or hard practical problems. That could include finding the right parameters in a configurable model so that it accurately emulates some natural process or controls some complex machine, or generating and testing instances of a mathematical conjecture, etc.
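As a toy illustration of the "generate and test" activity mentioned above (my own example, not from the original note), one might check a famous conjecture over a range of cases. The sketch below, assuming Goldbach's conjecture as the example, generates even numbers and searches for a witnessing pair of primes:

```python
# Toy "generate and test" exercise: Goldbach's conjecture says every even
# number greater than 2 is the sum of two primes. We search for witnesses.

def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_witness(n):
    """Return a pair of primes summing to even n, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Test the conjecture on every even number from 4 to 1000.
failures = [n for n in range(4, 1001, 2) if goldbach_witness(n) is None]
print("counterexamples found:", failures)  # empty list for this range
```

The point of such an exercise is not the programming itself but the computational thinking: formulating the conjecture precisely enough to test, and recognising that a finite search can refute but never prove it.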

An intermediate case could be using a compiler-compiler to produce a new high level programming language.

Richard: in 1992 I wrote the longest review of Penrose's first book (*The Emperor's New Mind*), but in his second book he ignored all of those arguments. I've talked with him at some length, and even represented his point of view at an AISB debate around 20 years ago, when he was too ill to attend; he tutored me by phone on his position.

He is a brilliant physicist and mathematician, but not really a philosopher.

I am aware of the concerns you point at, which I think involve deep philosophical controversies on which I too have written a lot, but I don't think pursuing them here would be fruitful. If you are correct and the question 'What computational mechanisms account for various human capabilities?' has a false presupposition, that falsity will be made clear most successfully by taking the 'what' question very seriously and trying as hard as we can to find or create suitable mechanisms. If sophisticated, well-informed attempts fail, we can hope to learn a lot from those failures.

Two of my challenges to AI along those lines are here: one on requirements for modelling and explaining animal vision, and one on requirements for modelling and explaining the human mathematical reasoning that led to Euclid's *Elements*. In both cases AI isn't even close.

But that enterprise requires deep engagement with many other disciplines to find out what the requirements are (in the engineer's sense) for adequate models, whereas many AI researchers work from seriously deficient requirements analyses and have to keep making lame excuses for their failures -- or worse, describing them as successes because they meet shallow requirements.