Monitor on Psychology interview with Kevin Larson

enne_son's picture

From the Microsoft Typography website:

Redmond - 2 November 2010
Monitor on Psychology, the magazine of the American Psychological Association (APA), has published an interview with Kevin Larson, of Microsoft’s Advanced Reading Technologies team, in its November 2010 issue. The interview focuses on the interactions of typographers and psychologists in trying to improve onscreen reading. Monitor on Psychology is a general psychology magazine, sent to every member of the APA.

Here is the url:
http://www.apa.org/monitor/2010/11/read-onscreen.aspx

Stephen Coles's picture

"We’ve also done some tests to see how quickly someone can read a “good” page layout versus a “bad” page layout. The good layout used indentation to mark new paragraphs and larger text to identify a headline, while the bad one used underlining for headlines, which is an aesthetically poorer method of marking text as important, and no indentations. We were disappointed to find no differences in reading speed or comprehension."

This is disappointing indeed. Though, in his presentations, Kevin has added that readers indicate they "preferred" the good layout. I'd like to hear more about that.

Si_Daniels's picture

I think this was the study that Kevin presented at ATypI Helsinki at which Professor Spiekermann commented that both layouts were bad. :-)

david h's picture

"We were disappointed to find no differences in reading speed or comprehension."

Reading on paper? Reading on computer screens?

Kevin Larson's picture

The good layout versus bad layout studies were presented at ATypI Helsinki. There are also papers published in Typo Magazine and in HCI conference proceedings. Hopefully these links to my SkyDrive work; I haven’t used it before.

The Beier legibility studies mentioned in the article are published in the current issue of Information Design Journal, and deserve much attention.

Cheers, Kevin

Nick Shinn's picture

Kevin underestimates his influence.
To say that typographers believe people read word shapes rather than letters may have been true in the past, and may still be true at large, but it's nonetheless a bit of a generalization.
I for one see no reason to doubt this recent scientific discovery, and I'm sure most Typophiles who have followed the readability debates here feel the same way.

However, just because people read letters doesn't mean the shape of a word is unimportant, or that it is only important as mere aesthetics.
People "read" the aesthetic qualities of typography, and this reading is quite important, even though such readability cannot be boiled down to a simple measurement of speed.

joeclark's picture

The citation in Information Design Journal to which kevlar refers appears to be:

Sofie Beier & Kevin Larson, “Design improvements for frequently misrecognized letters.” Information Design Journal 18:2 (2010). DOI: 10.1075/idj.18.2.03bei

John Hudson's picture

"We were disappointed to find no differences in reading speed or comprehension."

I wouldn't be disappointed by such a finding, but then I don't reckon speed and comprehension to be the only measures of a good reading experience. In some circumstances, they're not even the primary measures.

enne_son's picture

“Reading psychologists care about what happens cognitively when people see fonts and words.”

What about what happens perceptually?

Kevin Larson's picture

> “What about what happens perceptually?”

That is one of many generalizations in the article, but I think it is generally accurate. There are 27 chapters in Snowling & Hulme’s 2005 book The Science of Reading. None of the chapters are devoted to visual perception. Hopefully that will change in the future.

Mel N. Collie's picture

Another round of Grand Larsony: Word Theft;)

In order to believe in a letter-only reading theory, one must accept that the part of humanity reading scripts without letters isn't reading, and that the other part of humanity, reading scripts with letters, invented words in cahoots with the space between words for no reason, no?

Is this about screen reading? someone asks. We should all, of course, be praying for the higher resolution that'll end these studies. :)

John Hudson's picture

David: In order to believe in a letter-only reading theory, one must accept that the part of humanity reading scripts without letters isn't reading...

Evidence of letterwise word recognition -- which is what we're talking about, not 'letter-only reading' -- has no implications at all for word recognition in scripts that do not use letters, any more than a word-shape theory has implications for scripts, e.g. Thai, that do not visually demarcate word boundaries.

Would it really surprise you to discover that people read different writing systems in different ways?

...and that the other part of humanity, reading scripts with letters, invented words in cahoots with the space between words for no reason, no?

You're begging the question by assuming that the only possible reasons for spacing words are linked to word shape recognition and not compatible with letterwise recognition.

Visual demarcation of word boundaries has testable benefits in cueing fixations, as seen in variation of saccade length relative to word shapes perceived in the parafovea, i.e. word spaces enable us to anticipate the length and position of words long before we're able to recognise what the words are. Visual demarcation of words also has testable benefits in letterwise word recognition, because knowing where a word ends and how many letters it contains enables the brain to exclude possible readings involving letters from adjacent words (and hence miss out on those humorous misreadings encountered in some URLs, e.g. therapistfinder, penisland, expertsexchange, whorepresents, etc.). So that's two benefits to word spacing that don't imply a role for word shape in word recognition and that are perfectly compatible with letterwise word recognition.
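To make that second benefit concrete, here is a toy sketch in Python -- an invented mini-lexicon, not a model from the reading literature -- showing how a letter string with no boundary cue supports several word segmentations that a space would rule out:

```python
# Toy sketch, not a model from the reading literature: a letter string
# with no word-boundary cue can parse into several word sequences.
# The mini-lexicon is invented for the example.

LEXICON = {"the", "rapist", "therapist", "finder", "pen", "island",
           "penis", "land", "expert", "experts", "sex", "exchange", "change"}

def segmentations(s, lexicon=LEXICON):
    """Return every way to split s into a sequence of lexicon words."""
    if not s:
        return [[]]
    splits = []
    for i in range(1, len(s) + 1):
        head = s[:i]
        if head in lexicon:
            for tail in segmentations(s[i:], lexicon):
                splits.append([head] + tail)
    return splits

for run_on in ("therapistfinder", "penisland", "expertsexchange"):
    print(run_on, "->", segmentations(run_on))
```

Each run-on string yields more than one reading; the word space is what collapses the set to a single one, entirely within a letterwise account.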

Nick Shinn's picture

Is this about screen reading?

No readability research on re-faxed real estate documents just yet.

Nick Shinn's picture

From a legibility perspective, the purpose of kerning is not the irrelevancy of making attractive word shapes.
Kerning supports the parallel letter recognition model by evenly spacing letters so that the status and distinction of each is maximally asserted, disconnected from adjacent letters. Uneven spacing is noise.

William Berkson's picture

I think the focus on letters in isolation is a mistake. Here, I discuss the process I used to try to make my Williams Caslon Text as readable as possible.

As you can see from the first graphic, there is more to a readable typeface than individual letters. This graphic I think shows that the eye-brain combination is putting some kind of grid over the words, and interpreting based on relationship of the marks on paper (or screen) to that. Much of readability is in how the letters in a style relate to one another, and hence to this mental grid. And this involves weight density and distribution, size of counters and relation to spacing and so on.

This is aside from the question that Peter and I have discussed with Kevin: whether we first identify letters and then do a look-up, or whether we identify words and phonemes as whole patterns.

In other words, whether or not Peter is right about whole patterns—and I am inclined to believe there is some truth in his view—I think even the one graphic shows that how the letters relate to the mental grid is a key issue. Because even on the letter-by-letter view, "interactive activation", if I remember the term correctly, allows us to identify words when we haven't identified all the letters in them. This means that one or two ambiguous characters—such as I and l—don't damage readability as much as you might think.
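To sketch that last point in code, here is a minimal toy illustration in Python -- my own construction, not McClelland and Rumelhart's actual interactive-activation model; the tiny lexicon and the choice of I/l as the only confusable pair are invented for the example:

```python
# Toy sketch of lexical disambiguation, not the actual interactive-
# activation model. Assume 'I' and 'l' cannot be told apart in
# isolation; word-level constraints still pick out a unique word.

CONFUSABLE = {"I": {"I", "l"}, "l": {"I", "l"}}

def compatible(percept, word):
    """True if every perceived letter could be the corresponding letter of word."""
    return len(percept) == len(word) and all(
        w in CONFUSABLE.get(p, {p}) for p, w in zip(percept, word))

def candidates(percept, lexicon):
    """All lexicon words consistent with the (partially ambiguous) percept."""
    return [w for w in lexicon if compatible(percept, w)]

LEXICON = {"blind", "bland", "brand", "Island"}
print(candidates("bIind", LEXICON))  # -> ['blind']: the ambiguous letter resolves
```

The point is only that an undecidable letter need not make the word undecidable, which is why one or two I/l confusions cost less than you might expect.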

The "matrix" view puts a lot of readability in the color and rhythm of the text. And by the way this was Matthew Carter's view on Verdana, where he has said that the main advance was on the spacing, rather than simply the letter shapes themselves.

John Hudson's picture

Bill: As you can see from the first graphic, there is more to a readable typeface than individual letters. This graphic I think shows that the eye-brain combination is putting some kind of grid over the words, and interpreting based on relationship of the marks on paper (or screen) to that.

While I'm not unsympathetic to your idea of gridded feature recognition, I think you are reading too much into your own graphic. What the graphic primarily demonstrates to me is that it is hard to read unconventional use of letters that do not correspond to the norms of the writing system. If you took out the uppercase letters and varied only the weight and style of the letters, then I think you would have something more interesting, but with weight differences you would be affecting the spatial frequency as well as a presumed perceptual grid, so perhaps the better test of the latter would be similarly weighted combinations of roman and italic letters.

Kevin Larson's picture

William, Williams Caslon is lovely and your paper about its creation was enjoyable to read. Congratulations.

The components that you identified as important for good readability all seem quite sensible and should be accounted for in a complete model of word recognition: good rhythm, even color, stroke ratio, and avoiding the picket fence. I think these could be compatible with parallel letter recognition models. Good rhythm and even color in particular seem to relate to vision psychologists' concept of spatial frequency. Pelli has shown that we tune to particular spatial frequencies while reading, and that tuning changes with the font we are reading. If every letter is from a different font or has irregular letter spacing, then it won't be possible to tune to a spatial frequency.
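A crude numerical analogy for that last point -- my own toy 1-D signal, not Pelli's stimuli or analysis -- treats a line of type as an 'ink profile' of strokes: even stroke spacing concentrates spectral energy at a single frequency, while jittered spacing smears it, leaving no one frequency to tune to.

```python
# Toy analogy, not Pelli's experiment: spectral concentration for a row
# of strokes with even versus jittered spacing. All numbers are invented
# for illustration.
import numpy as np

rng = np.random.default_rng(0)
N = 1024
pos = np.arange(16, N - 32, 16)          # nominal stroke positions

def stroke_row(positions, width=4):
    """1-D ink profile: 1.0 where a stroke covers the pixel, else 0.0."""
    row = np.zeros(N)
    for p in positions:
        row[p:p + width] = 1.0
    return row

def peakiness(row):
    """Largest spectral component over the mean: higher means energy is
    concentrated at one spatial frequency."""
    spec = np.abs(np.fft.rfft(row - row.mean()))
    return spec.max() / spec.mean()

even = stroke_row(pos)
jittered = stroke_row(pos + rng.integers(-6, 7, size=pos.size))

print(f"even spacing:     peak/mean = {peakiness(even):.1f}")
print(f"jittered spacing: peak/mean = {peakiness(jittered):.1f}")
```

On this toy signal the even row shows a much higher peak-to-mean ratio than the jittered one, which is the sense in which irregular spacing leaves nothing to tune to.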

William Berkson's picture

>hard to read unconventional use of letters that do not correspond to the norms of the writing system.

John, why should unconventional use of letters be such a problem, if all of the letters make easily readable words when they are with their stylistic mates? Why should any training at all be needed? Because we have familiarity with all the letter forms, there must be *more* involved in "training" the mind to read words than simply familiarity with individual letters and with which letters go to make up a word with a specific meaning.

I would also guess that it would be harder to train readers of another script, who didn't know roman script, to read this kind of text than text which keeps to its stylistic mates only. That would show that not only is there "more" than what is involved in isolated letter recognition, but that some mixtures are inherently more difficult for the brain to handle. I also doubt that even with training we could ever read such text with the ease of standard text faces. I am guessing that something fundamental in our visual decoding apparatus is being stressed.

I mixed all of the variations—regular and bold, caps and lower case, roman and italic—to maximize the disruption of word recognition. That makes the graphic a refutation of the idea that the ability to quickly identify individual letters, together with quickly accessible knowledge of spelling, is the whole story. That's all that I'm claiming is demonstrated.

My theory of a "matrix" is a vague model, and not a well worked out theory of the "more." I suspect that this portion of reading, the portion that is not in recognition of individual shapes by themselves, is a bigger area for making type more readable than the area of making Latin letters more highly differentiated for isolated quick recognition. Of course there are other issues such as resolution, figure-ground contrast, back-lighting of screens, etc. that are also important.

Kevin, thanks for the kind words.

I agree with you that my graphics and analysis are all compatible with a parallel letter recognition model. What I don't think they are compatible with is holding that legibility of individual letters in isolation—ability to differentiate them quickly and reliably—is the same thing as readability—comfort in reading words. The importance of this other dimension of readability is also why I am skeptical that there is much to be gained in increasing differentiation of letters in the Latin alphabet, which are highly differentiated already, especially in stressed, serifed faces.

In the Hebrew alphabet, there are much more serious problems of differentiation. The scripts of Eliyahu Koren, recently discussed in the "Hebrew" section here, do increase differentiation of Hebrew letters in a way that remains aesthetically attractive, and are felt by many native readers to increase readability.

William Berkson's picture

Chris, that saying is very clever, but it can be used in defense of complete BS!

Great moral and religious ideas are not so testable, but great *scientific* ideas are. That's what makes them scientific, in the view of my old teacher Popper at any rate. —A little off the track but I was very happy to see that Popper's ideas on science were used to defeat in court the bogus claims of the "intelligent design" people.

dezcom's picture

So-called scientific tests "can be used in defense of complete BS!" just as well. The NRA has a bunch of "data" on guns stopping crimes they would like you to read :-)
Science is not always "Good Science", so we have to scrutinize it completely and not put scientists on a level with the deity, as is too often done.
I have also seen marketing people work their magic with "focus" group testing (I might change the spelling a bit, replacing an "o" with a "u" on that one). Lois was an advertising guy, and I am sure he was referring to marketing shell games when he talked about testing and made that quote.
Yes, Bill, there are 'bogus claims of the "intelligent design" people' and bogus claims of every other kind of people, including scientists. All I am saying is that nobody should get a free pass just because they hang a shingle on their door which says "scientist" or "clergy" or "design professional". Yes, we know there is peer review, and tests are repeated and results overturned after a few years. The problem is, nobody erases the errant conclusions people have drawn from the data. Keep a healthy amount of doubt when you listen to any expert. The real scientists will welcome the skepticism and only the charlatans will cry foul.

Kevin Larson's picture

> Keep a healthy amount of doubt when you listen to any expert.

In 6 years as a member of typophile, I don’t think I’ve ever made a statement that someone didn’t call BS. There appears to be plenty of healthy skepticism of scientists here. On the other hand, I wonder how often statements from Bringhurst or the other great typographers are challenged?

dezcom's picture

I am happy to challenge Bringhurst and all exacters of rules and methods, and I know others do as well. It is like AP style or Strunk and White. You can say it fits the style according to Bringhurst or according to Carson, and say "it is in the book", therefore you have done a proper job by some other person's standards. It might still be very pedestrian typography. An insecure person might always do it according to "the book" so they have some authority to point to when questioned. You don't always need a manual. I would rather someone try a bunch of things and make mistakes, because they will learn from that. Following a cookbook never made a chef.

I admire Kevin for at least coming back for more and listening to our collective flak. He takes what he can, tries again, and comes back for more. A lesser person might talk only to his colleagues and ignore any questioning of their work from the unwashed proletariat. ;-)

Mel N. Collie's picture

JH>Would it really surprise you to discover that people read different writing systems in different ways?

In a way, yes. We are all trying to write so the letters go away. Would it surprise you to discover that people read one writing system in different ways?

The common thread here, though, is that for some reason Kevlar "pits" the "typographers" against the "psychologists." But before anyone forgets, it is the 'minor' destruction of writing on PC screens by the software behind them that has brought on this "intensity" of questions about reading.

KL> In 6 years as a member of typophile, I don’t think I’ve ever made a statement that someone didn’t call BS.

That's BS! ;)

The other thing to note is that until this destructive phase of writing is past, Bringhurst and all such gospel SHOULD be substantially rewritten to be relevant to typography on the web.

William Berkson's picture

>The problem is, nobody erases the errant conclusions people have drawn from the data.

That's not true within science. You won't read any articles defending phlogiston in current science journals, or for that matter questioning the reality of evolution. Politicians, on the other hand, will get up and blather and contradict well-tested solutions—with no contrary evidence or argument.

The important thing is that within science people do feel the need to address experimental evidence. I'm not saying that there is no poor science; there is plenty of that. But the scientific standard of respect for observation and experiment creates a mechanism of self-correction, of reducing errors of science that other fields unfortunately lack.

I have often disagreed with Kevin here, but I've never seen him spouting BS, because he does respect the results of observations and experiment. The reality is that it is very hard to get decisive answers from nature, but when they do happen, they stick.

enne_son's picture

What Denis Pelli found was that the spatial frequency channel that is used is centered at the frequency of stroke-level information — not letters. An explicit presupposition of parallel letter recognition models is the channeling of feature information into independent slots. This is slot-processing. In well-set pages of type we find narrow phase alignment. On the basis of gestalt principles it seems likely that narrow phase alignment is antagonistic to slot-processing. What adherents of parallel letter recognition models might then need to explain, but haven’t, is how, under conditions of narrow phase alignment, slot-processing ever gets off the ground.

The success of the interactive-activation account of how perceptual processing in reading works is really what solidified the case for the parallel letter recognition model of word recognition. When faced with a spacing effect, qualified scientists resorted to ad hocery. In recent years other types of networks have been explored in some domains, including speech perception. These networks rely on notions like non-independence, interfacilitation, and supercapacity. Supercapacity and interfacilitation could illuminate the word-superiority effect, and why, in reading, there is an “inhibition of incipient recognition for letters,” to quote Edmund Burke Huey.

When I explained my ‘single-tiered, intrinsic integration, matrix resonance’* alternative recently to Jay McClelland, one of the architects of the interactive-activation scheme, his response was: If I had but world enough and time, I might join you in your quest. I don’t think of this as an endorsement, but I do think of it as recognition that there is a promising and attractive alternative that ought to be pursued.

In the Monitor on Psychology interview Kevin says “research from reading psychologists suggests...” Wouldn't scientific candor dictate some stronger recognition that the jury is still out?

* I used this phrase in my TypeCon Atlanta presentation. (The term matrix resonance is a favourite of Bill’s, but it also occurs in the cognitive-science literature on neural network coding. My source for the other two terms is the cognitive-science literature as well.)

William Berkson's picture

I agree with Peter that there are promising alternatives to the "parallel processing with interactive activation" views of reading, the current "orthodoxy". And I think they are worth pursuing scientifically.

John Hudson's picture

Bill: I mixed all of the variations—regular and bold, caps and lower case, roman and italic, to maximize the disruption of word recognition. That makes the graphic a refutation of the idea that ability to quickly identify individual letters, and a quickly accessible knowledge of spelling is the whole story. That's all that I'm claiming is demonstrated.

But has anyone ever claimed that ability to quickly identify individual letters independent of their conventional use within the writing system is the whole story?

Writing systems are, well, systems, and reading is an activity that takes place within the normative contexts provided by those systems (including the sub-system norms of particular styles). If you change the normative contexts, e.g. by mixing letters from different styles or breaking normal rules about the use of bicameral forms, then it doesn't surprise me at all that word recognition is slowed.

Further, you wrote above: This graphic I think shows that the eye-brain combination is putting some kind of grid over the words, and interpreting based on relationship of the marks on paper (or screen) to that.

This is very clearly claiming that more is demonstrated than a refutation that quick letter recognition is the whole story: you claimed that the graphic showed that a particular perceptual and cognitive mechanism is in use during reading, and I don't think it does. You disrupted normative reading context in at least three different ways in the illustration -- case, style, slant -- making it impossible to isolate factors in a way that would imply inhibition of a particular mechanism.

William Berkson's picture

John, I said "some kind of grid". That is not very specific. What I mean is that some kind of mental template that uses the spatial relation of marks to one another within the word is operating. That I do think is a fair conclusion. If you have an alternative theory, I'd be happy to hear it.

I strongly suspect that there is a mental grid, e.g. in the case of Chinese. That's because you learn to write Chinese characters on a grid! I'm also not saying that the grid is all inborn. I'm sure a lot of it is learned, or created in the learning process.

>But has anyone ever claimed that ability to quickly identify individual letters independent of their conventional use within the writing system is the whole story?

Well, denying the validity of any distinction between legibility and readability has the consequence that the distinguishability of individual letters is the whole story of readability. And I may not be remembering correctly, but I thought you and Kevin did deny the usefulness of the distinction. The unimportance of these other factors seems to me a consequence of the denial. And Kevin refers to a study on the distinguishability of individual letters, which I don't think is much of an issue with Latin script.

I'm not saying you were consistent :)

Nick Shinn's picture

The results of social science experiments are always open to vastly different "readings", i.e. interpretations.

Reading is a social phenomenon.
Yet it seems to me that reading researchers are under the illusion they are investigating a basic physiological process, believing that letters are not cultural artefacts but merely natural shapes, the grammatical content of which can be deciphered at different speeds, and that's all there is.

Reading science can reveal how we read, but as soon as it gets into passing typographic judgement, it is nothing but market research, or at worst banal scientism.

William Berkson's picture

Nick, reading involves physiology of the eye, psychology—how the brain works in perception and cognition—AND is a social phenomenon.

Are you denying the value of studying the first two?

Nick Shinn's picture

Whatever gives you that idea?

dezcom's picture

"That's not true within science."
It may be less true in science, but that is not the point. The problem is that errant reactions to interpretations by scientists are acted upon by the doers and gatekeepers of the world, not the scientists. Scientists enquire, theorize, test, and publish what they think is in evidence. They may be best intentioned and even accurate in their data collection, but that does not make a dent in what the errant doers do with that information. The doers are people who make stuff, decide stuff, and disparage stuff: politicians and government officials, legal-system bureaucrats, school boards, teachers, students, marketing people, editors, graphic designers, clients, etc. They don't always "get the memo" that a certain study has been found misleading or that new information about it has come to light. They just carry on, either naive and unaware, or simply choosing the study which best props up their own bias. They go on armed with the words "Scientists say" or "I read that" without question. It is not what scientists SAY, it is what they are REPORTED to have said that causes the problem. Doers don't talk to scientists; they talk to themselves and their neighbors, and listen to their media (be it Fox News or MSNBC, Rush Limbaugh, The Group, or Type Radio). Years ago the psychiatric wisdom was that autism was caused by unloving mothers. Psychiatric theorists and most laymen know now that that is not true. That does not help the mothers of autistic children who ever heard the old mistaken tale even once from "Aunt Flo" or their neighbor's pet sitter.
All I am saying is: have a healthy skepticism of every source. We still read articles about how well Helvetica does in readability.

"That's BS! ;)"

I am proud to say I have now joined the elite company of Raymond Lois, Kevin Larson, and David Berlow in being told online: "That's BS!" At least people are skeptical of what I say -- now they just need to be as skeptical of what they think scientists perhaps might possibly be saying (by way of Dr. Phil or the Washington Times).

Chris

(That's BS!)

Nick Shinn's picture

This is brilliant:

http://www.amazon.com/Bullshit-Harry-G-Frankfurt/dp/0691122946

Basically, he argues that BS is neither true nor false.
So it's BS to say that Arial is more readable than Times, because, yes, it may be in some instances, but not in others.
And that is why scientific pronouncements on the general readability of fonts vis-à-vis one another are BS.

enne_son's picture

Bullshit makes good fertilizer. It provides food for thought and empirical investigation and hence the growth of a discipline.

William Berkson's picture

Chris, I'm afraid that what impacts public consciousness is arguments over statistical tests—which are very hard to get anything conclusive on—or sciences that are weak (clinical psychology) or barely scientific at all (economics). But what is taken for granted is the greatest advance in the history of humanity, which is due to science. I am typing on a computer, whose results can be seen within seconds all over the world, and looking through reading glasses. All this is based on really solid science, which we all take for granted. In traditional societies one in ten women died in childbirth; today death in childbirth is very rare in wealthy countries. And in our lifetime, life expectancy has gone up considerably because of hard science, not debates over statistics.

Your complaints about weak and bad science are fully justified, but the glorious advances of science are nonetheless true. And they are based on a method that is fundamentally honest in respecting real observation and experiment, while drawing on the heights of human imagination and ingenuity in creating new theories. The dark side is there, but so is the bright.

John Hudson's picture

Bill: And I may not be remembering correctly, but I thought you and Kevin did deny the usefulness of the distinction [between legibility and readability].

I've never denied the usefulness of the distinction, and I've even tried to convince Kevin that the terminological distinction is useful in discussing possible functional distinctions. But I also think that the distinction is largely instinctive -- i.e. we have tended to proceed from usefulness of the terminological distinction to the assumption that there is a functional distinction -- and a lot of question begging takes place as a result of that.

Re. the mental grid, I am not disagreeing with your theory; I am disagreeing with the claim that your graphic provides evidence that supports the theory. In the first place because your graphic illustrates how word recognition can be retarded, not how it is normally achieved, and in the second place because you disrupted word recognition in multiple ways from which it is impossible to isolate a particular cause for the retardation. Yes, disruption of a grid might account for the retardation, but so might any one of the disruptions of the sub-system norms of the writing system, or disruption of spatial frequency, or simply the fact that the brain is too busy thinking 'That looks weird' to devote sufficient attention to word recognition.

dezcom's picture

"...but the glorious advances of science are none the less true."

I have never said they were not true. Just because science has had many successes, like the computer, does not mean we should not question everything it says. Science also made nuclear, biological, and chemical weapons, but that does not mean we reject everything it has said either. Good things have even come of nuclear science, like medicine and alternative energy. "It is an ill wind..."
Also, it is much easier to verify physical science than social or behavioral science. Yes, we can count eye regressions, but what we get from the data is fuzzier than a Cheshire Cat and just might possibly sorta kinda almost be Bullshit.

We should neither condemn scientists nor genuflect at their feet in awe and admiration. We should question and verify before we elevate them to deity.

enne_son's picture

John, as long as there is a difference between letter identification (which is a form of categorical perception) and visual word-form resolution (aka word recognition), there is going to be warrant for a distinction. It does not matter to what extent ease of categorical perception of the graphemic components of the word in a given font turns out to be a determinant of ease of visual word-form resolution in the perceptual processing of actual texts.

Mel N. Collie's picture

Well okay then! I'll axe the question this way: is reading research a hot topic these daze because:
A. Readers don't read very well, suddenly, for no apparent reason;
B. Computers don't write very well, because OS vendors are cheapskates; or
C. "Typographers" and psychologists don't agree on word vs. letter recognition?

Take your time on this one: getting the wrong answer can make you lots of friends, while actually facing up to the right answer, however...

William Berkson's picture

A little more on scientific method.

>I think you are reading too much into your own graphic.

John, any theory "reads too much" into results of observation and experiment. That is because the data is finite, and theories universal. As Popper said, scientific theories are always conjectures that go beyond the data. The goal is to have imaginative explanations for the data, and then test them.

The reason for my conclusion is that some kind of "matrix" is the only way I see of *explaining* the data—in this case, the difficulty of reading a word composed of many different letters which are legible on their own. Alternative explanations, or more refined versions of my vague model, would help to test and to enlighten, so I am all in favor of that.

I agree with you that any good theory would have to account for how reading changes with each variation alone: mixed case, mixed style (italic and roman), and mixed weight. The point of the graphic is to refute the idea that the clarity and unambiguity of individual letters is the only important factor in readability. That's a starting point.

Chris, genuflecting is not my thing, as you know :)

I am all in favor of looking at scientific claims critically, and there is a lot of weak and bad science out there, and misleading claims of "scientific authority" for whatever view a person is a partisan of. But I am also in favor of recognizing the power and even the moral goodness of scientific method. The cure for bad science is good science, not no science. That is the main point I am trying to make.

Good science is having the courage to make theories testable—so specific that they are vulnerable to possible refutation by observation and experiment. And then it is being honest about the relations of the theory and the data. There is a courage and honesty in good scientific method, and it is a force for progress of humanity. I hear very often criticism of weak and bad science, but I don't hear much public understanding and appreciation for good and great science. Depreciation of scientific method is to me a blow against honesty and progress for humanity.

dezcom's picture

"Depreciation of scientific method"

This will be my last post on this subject, since the circle has now been orbited the requisite 10,000 times. I am not now, nor have I ever been, deprecating the scientific method. My brother, my brother-in-law, and both my kids are scientists. (Julian is in Padua today giving a paper on seismology.) I am complaining about the misuse of conclusions based on untested theory. "As Popper said, scientific theories are always conjectures that go beyond the data." Question the conjectures until they are proven or disproven.

I am happy to have science! Some of my best friends are scientists. Scientists are people; they make mistakes and are misquoted too.

Happy Chanukah, Bill!

William Berkson's picture

Thanks, Chris!

The underlying issue here is whether scientific research on reading holds any promise of giving insights that are valuable for readers and type designers. I feel that it does hold promise, and that feeling is based not so much on success so far in reading research as on the general promise and success of scientific research in other fields.

Richard Fink's picture

"I wonder how often statements from Bringhurst or the other great typographers are challenged?"

To summarize unscientifically, Bringhurst can kiss my egalitarian ass.

Regards,

rich

dezcom's picture

"Bringhurst can kiss my egalitarian ass."

;-)

John Hudson's picture

Bill, I didn't object to you reading too much into data; I objected to you reading too much into a picture that you had made. I pointed out that the picture didn't demonstrate what you claimed it demonstrated, because there are at least three different explanations for retardation of word recognition potentially at play in the picture, and hence you can't reliably ascribe the cause of retardation to disruption of a mental grid.

Further, the result of a simple experiment that I conducted -- mixing together only roman and sloped roman forms within a word, i.e. varying only the slant angle to disrupt the presumed mental grid (rather than varying style, case, weight, and other factors as in your illustration) -- was that word recognition was much less retarded than in your example. This leads me to conclude that while there may be a mental grid on which the features of letters are perceived, simply shifting some features off that grid by varying the letter slant does not retard word recognition to the same degree as a combination of other factors that more grievously departs from the learned, conventional sub-system norms of the writing system, e.g. mixing letters of different cases.

[With regard to the latter, I think it is important to remember that the case distinction is used in English and other bicameral writing systems as a grammatical cue, i.e. it provides information about the structure of text and the kind of words we are looking at. Mixing uppercase letters unconventionally into a word doesn't only disrupt the visual pattern and spatial frequency of the word as it would conventionally be read, but also disrupts our expectation of the normal role of uppercase letters.]

enne_son's picture

Bill, putting a grid over the word and matrix resonance might profitably be thought of as two different things. The grid put over words comes from how the permutations inherent in the script system implement Cartesian space: roman versus italic; capitals versus lower case. Lower case uses the x-height axis in a way capitals don’t understand. Italics use the y-dimension differently than roman by giving it a slant. When these different ways are intermixed in a single word (and compounded with differences in weight — which translates into salience and equals cue-value), matrix resonance becomes difficult to achieve, because the clumps of information are out of synch with each other — they’re on different registers, or in different keys as it were. A kind of stressed parallel letter recognition becomes the only perceptual processing option.

What I think this knocks out of the word-recognitional equation is trans-graphemic interfacilitation at the role-unit level. This is a kind of neurophysical cross-talk across letter boundaries while processing letter-parts across the entire word — Edmund Burke Huey’s simultaneous coactivation of letter parts. This is essential for speed, comfort, and especially transparency.

This, by the way, is one of the reasons why, though I find Denis Pelli and Katharine Tillman’s data interesting, I don't accept the conclusions of their triple dissociation study. I don't buy the analysis of what the various manipulations knock out. Part of the reason for this is that we have different functional ontologies of reading.

William Berkson's picture

Well, I probably shouldn't have used the word "grid", because I think that it exists but is only part of the story. The term "matrix", which I used in my TypeCon talk, and which Peter says is also in the psychological literature, is more general and better. "Grid" seems to imply that the whole story is whether something falls on or off the grid. And I think a lot more is involved, such as weights, angles, etc.

My slide, I think, shows that a mental "matrix" is operating, and that a grid, or several grids, are only part of it.

One comparison is enlightening here: I have seen it claimed in several places that there are more connections in the human brain than stars in the universe. Even though vision is only one part of the brain, it is clear that we are dealing with a hugely complex and highly adaptable system for pattern recognition.

Partly because of the brain's pattern recognition power, I doubt that we fail to use pattern information from across more than one letter when recognizing a word, as Peter has argued.

enne_son's picture

Bill, parallel letter recognition can be thought of as relying on matrix resonance. Except it hypothesizes two levels of resonance, one keyed to letters and the other to orthographic shape.
