A partial index of discussion notes is in
It may be hard to believe, but each table top has one pair of edges equal in length to the radius of the circle and one pair twice the length of the radius.
The fact that adding the two circles, together with the rulers showing the radius and twice the radius, does not remove the illusion helps to show how much there is to be explained.
I think this is one of very many fragments of evidence that common theories about the functions of human and animal vision systems are mistaken. As a result, AI vision systems based on, or inspired by, those theories are at worst also mistaken, and at best useful pieces of engineering with little overlap with biological vision.
See also the Meta-Morphogenesis project, which attempts to unravel what the very many information-processing products of biological evolution actually are, as opposed to what many people think they "obviously" are.
In particular, instead of trying to interpret the optical evidence using (possibly coordinate-based) global metrics for length and angle, biological vision makes use of collections of partial orderings. In general it does not automatically detect or eliminate the anomalies that result from the inference mechanisms used, because most of the time those mechanisms work very well -- for biological purposes, which are not necessarily the purposes engineers attempt to give their robots!
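The contrast between a global metric and a collection of partial orderings can be sketched in a few lines of code. This is purely an illustrative toy (the function names, the threshold, and the comparison scheme are all my assumptions, not anything proposed in the text): a metric comparator decides every pair, whereas a partial-ordering comparator leaves near-equal pairs undecided, which is usually harmless for biological purposes but leaves room for systematic anomalies.

```python
def metric_longer(a, b):
    """Global metric: every pair of lengths is comparable."""
    return a > b

def partial_longer(a, b, threshold=0.15):
    """Hypothetical partial ordering: pairs whose lengths differ by less
    than a discrimination threshold are left incomparable (None)."""
    if abs(a - b) <= threshold * max(a, b):
        return None          # incomparable: no reliable judgement made
    return a > b

# A global metric decides 1.05 vs 1.0; the partial ordering does not.
print(metric_longer(1.05, 1.0))    # True
print(partial_longer(1.05, 1.0))   # None -- difference below threshold
print(partial_longer(2.0, 1.0))    # True -- difference is unmistakable
```

The point of the sketch is only that a system built from such pairwise, threshold-limited comparisons need never construct a single consistent global length scale, so inconsistencies between locally plausible judgements can persist undetected.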
As this example shows, there can be more than one system of "measurement" at work
simultaneously, in this case the 2-D measures of optical relationships (in what Gibson
calls "The optic array") and the system of measurement, or spatial relationships, used in
the 3-D interpretation derived from the 2-D information.
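The coexistence of the two systems of measurement can be illustrated with a toy pinhole projection (an illustrative sketch under my own assumptions, not anything from Gibson or the text): two edges of identical 3-D length can have very different lengths in the 2-D optic array, depending on their orientation and depth relative to the viewer.

```python
import math

def project(p, f=1.0):
    """Pinhole projection of a 3-D point (x, y, z) onto an image plane
    at focal length f, for a camera looking along the +z axis."""
    x, y, z = p
    return (f * x / z, f * y / z)

def length2d(a, b):
    """2-D distance between the projections of two 3-D points."""
    (ax, ay), (bx, by) = project(a), project(b)
    return math.hypot(bx - ax, by - ay)

# Two edges, each of 3-D length exactly 2.0:
frontal  = ((-1.0, 0.0, 4.0), (1.0, 0.0, 4.0))   # lies across the view
receding = ((0.5, 0.0, 4.0), (0.5, 0.0, 6.0))    # points away from viewer

print(length2d(*frontal))    # 0.5
print(length2d(*receding))   # about 0.042 -- roughly 12 times shorter
```

Equal measures in the 3-D interpretation correspond to grossly unequal measures in the optic array, and vice versa; a visual system must somehow relate the two without conflating them.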
A similar point is made in Arnold Trehub's work (e.g. Chapter 14).
The Müller-Lyer illusion is partly similar, partly different. See
A more general theory will have to account for vision as producing interpretations that are not purely geometrical but include causal and functional relationships, information about the materials perceived (not just their shapes, etc.), information about biological and other classifications of objects and object parts, and in some cases information about states of mind, intentions, purposes, and affective states of other individuals.
This example, with two faces, is in part like the Shepard rotating table because for many
human viewers (I have not tried a wide range of ages and cultures) the two faces not only
appear to express different emotional states, but also differ in how the eyes look, though
in this case the difference in appearance is not geometrical. (The eyes in the two images
are identical, but here the context changes something more subtle than the geometric
interpretation of image contents.)
For more on multi-layer interpretation of visual contents see chapter 9 of
The Computer Revolution in Philosophy (1978).
Identifying evolutionary transitions leading to such types of visual functionality is part
of the task of the Meta-Morphogenesis project.
The image was drawn using William Chia-Wei Cheng's (Bill Cheng's) ancient, but still very
useful drawing program, Tgif. Tgif is an interactive 2-D drawing tool under X11, available
for Linux and most UNIX platforms. (I use it on various versions of Fedora.)
This web site will be absorbed into the Meta-Morphogenesis project, whose goal is to trace changes in information processing across biological evolution, development and learning.
(On the importance of abilities to see what's possible and not possible.)