Chris asked, "I am curious why the effort was put into studying individual letter recognition. There must be some basis that would be strong enough to commit funds to do it? Letter-to-letter interaction is always going to happen if we are to create words. Comprehension requires it."
This project was the first of a series that Sheedy has undertaken with the goal of gaining a greater understanding of legibility. Testing letters both in isolation and in the context of words was a natural outcome of believing that both are important. We were surprised by the letter superiority effect when it showed up in the first study. I even asked if the numbers had been accidentally switched, since the finding is counterintuitive: it runs against the well-known word superiority effect. It wasn't until the effect was replicated in further studies that Sheedy realized this was an important finding.
Peter wrote “It could be argued that the task (to identify at thresholds) forces the subject to revert to a letter by letter processing routine, thus bringing into play the inhibitory force of contour interference.”
Sheedy does use a threshold-size methodology to demonstrate that lowercase letter legibility is greater than lowercase word legibility. The crowding effect is certainly more pronounced at smaller sizes, but it has also been demonstrated at larger sizes (e.g., typical eye charts). Peter, your critique is sound, but can you propose a way of testing for the existence of an unparsimonious secondary processing routine? Your model would be more compelling if it accounted for more of the data in the literature.
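For readers unfamiliar with threshold-size testing, here is a minimal, hypothetical sketch of the general kind of procedure involved: a 1-up/1-down staircase that converges on the stimulus size yielding roughly 50% correct identification. The `observer` function, its `midpoint`, and its `slope` are invented for illustration; this is not Sheedy's actual protocol, just a common psychophysical technique of the same family.

```python
import math
import random

def staircase_threshold(p_correct_at, start_size=10.0, step=0.5,
                        reversals_needed=8):
    """1-up/1-down staircase: shrink the stimulus after each correct
    identification, enlarge it after each error, and average the sizes
    at which the direction of change reverses. This converges on the
    size yielding ~50% correct -- the 'threshold size'."""
    size = start_size
    direction = -1                # start by shrinking
    reversal_sizes = []
    while len(reversal_sizes) < reversals_needed:
        correct = random.random() < p_correct_at(size)
        new_direction = -1 if correct else +1
        if new_direction != direction:
            reversal_sizes.append(size)
            direction = new_direction
        size = max(step, size + new_direction * step)
    return sum(reversal_sizes) / len(reversal_sizes)

def observer(size, midpoint=5.0, slope=1.5):
    """Hypothetical observer: a logistic psychometric function whose
    probability of correct identification rises with stimulus size,
    crossing 50% at `midpoint`. Both parameters are invented."""
    return 1.0 / (1.0 + math.exp(-(size - midpoint) / slope))

random.seed(0)
estimate = staircase_threshold(observer)
print(round(estimate, 2))   # estimate lands near the observer's midpoint
```

Running the same staircase once with a letter-identification observer and once with a word-identification observer would let one compare threshold sizes directly, which is the kind of comparison behind the letter superiority finding.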
Kevin, the 'more parsimonious' 'primary' route requires an ad hoc post-perceptual-processing work-around with a fairly high computational load (especially if Wickel-coding is correct), and some highly controversial assumptions--like that recognition is mediated by 'abstract' letter codes--to explain an effect that might be more economically explained in perceptual-processing terms, using the notions of (1) the word being a vector over a distinctive pattern of distinctive features, and (2) rhythmic spacing inhibiting 'componential abstraction.' (1) is an assumption that has not been fully implemented in neural net models, and there is some evidence for (2). And 'crowding' (the term needs further refinement) is part of the story.
The 'if it accounted for more of the data in the literature' is a bit of a red herring, since I believe my model accounts for, or is consistent with, at least as much of the data in the literature. Only, I have not explicitly specified precisely how in relation to each data set.
So this needs to be specified more systematically, but one and the same data set can support divergent conclusions if it doesn't distinguish sufficiently between the relevant options, or keeps them all in play or under consideration. This is, I think, what happens. And each account encounters areas of dark matter that it has trouble sorting out. Mine is the results elicited by cross-case masked priming.
Indeed, I or we need to devise a series of conclusive tests. Bill and others call for this as well, and maybe we can come up with some.
" one of the problems of scientific research is that funding sources tend to prejudice goals and results."
No way! I'm not even gonna go there. The place I'm stuck is here: How can the research into screen readability be correct if the fonts are not readable? An interesting problem.
A new thread: "Why will Johnny no longer want to read?"