Reading Research: What is Needed?

paul d hunt's picture

Eben's thread on ligatures & readability got me thinking. It seems (correct me if I'm wrong) that there is little reliable research on the mechanics of reading, and hardly anything exhaustive. My question is: what is needed to produce such research (besides money)? I'm thinking a custom-built suite of typefaces that allows researchers to isolate particular variables as cleanly as possible. And of course reading environment is a factor. In what environments should reading be tested? What are the questions that need to be addressed? But to avoid putting the cart before the horse, I guess I should go back to: what are the proposed goals of reading research? What are the pressing issues that need to be covered? Why do we need research on reading? A lot of questions; hopefully we can get some answers here (and, I'm sure, a whole other set of questions...). In short, I want to hear your proposals about the aims and methods of a reading research regimen.

Alessandro Segalini's picture

Sure, we have schools, places where the best commodity should be dialogue; ultimately, those are the places to meet such a need.

alexfjelldal's picture

I think Luc de Groot has been involved in some research on typography and dyslexia, maybe he knows something.

dezcom's picture

My first question would be: Is there a problem with reading? Maybe there isn't?

The difficulty is separating the reading part from the comprehension part. Kevin Larson said in a recent post that he thinks the longest delay in reading is the comprehension part. My question is: when saccades are duplicated or regression occurs, how do we know whether this is due to an inability to decipher the letterforms and words (reading) or an inability to understand what the author is trying to say? I hope Kevin can clarify what he said, because I may not have gotten it right.

ChrisL

William Berkson's picture

I don't think the problem is so much what to test as having good measures to test with. Stuff that seems obvious to typographers and type designers is, it turns out, difficult to test for. So what is needed are measures in addition to reading speed, measures that will more readily and sharply discriminate better and worse readability. I have proposed a measure of reading comfort: how fast comprehension decreases with time. Kevin says he is proposing another new measure--I guess in the new Typo article. What is it?

Another way of improving testing besides new measures is to make better use of marginal readers, such as those learning to read, and those with reading difficulties, or 'dyslexia', and compare these to expert readers. Here what is important is to have good theories of the learning-to-read process and of kinds of reading difficulties. That will help define good tests. The variations in good and bad type may be magnified for these marginal readers, though the relationships may be complex and not straightforward.

As to what to test, some factors that come to mind are:

1. Type size, as defined by x-height.
2. Weight: percentage of black to white.
3. Serifs: presence or absence.
4. Extenders: effect of various heights.
5. Leading: amount at a given x-height (and influence of serifs, extenders).
6. Letter spacing: amount, evenness.
7. Measure, or line length.
8. Counter size, and its ratio to spacing.
9. Word spacing.
10. How these factors vary with optical size.

These all interact, so you would also have to test for combinations.
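A quick way to see how fast the combinations multiply is to enumerate a full factorial design. This is just an illustrative sketch in Python; the factor names and levels are hypothetical, not values anyone in the thread proposed.

```python
from itertools import product

# Hypothetical levels for a few of the factors listed above;
# the names and values are illustrative, not a proposed standard.
factors = {
    "x_height_pt": [1.4, 1.6, 1.8],   # type size via x-height
    "serifs": ["present", "absent"],
    "leading_pt": [11, 12, 14],
    "measure_chars": [45, 66, 90],    # line length in characters
}

# Every combination of factor levels (a full factorial design).
conditions = list(product(*factors.values()))
print(len(conditions))  # 3 * 2 * 3 * 3 = 54 test conditions from just 4 factors
```

Even four factors at two or three levels each yield 54 distinct test conditions, before adding readers, texts, and environments, which is why isolating variables is such a big project.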

Norbert Florendo's picture

During the Type & Design Education Forum at TypeCon 2006/Boston, Audrey Bennett made a presentation regarding letterforms as a potential tool for aiding readability for children with learning disabilities (among other pertinent topics she covered).

She is trying to gain some momentum with the type design industry, educational institutions and funding sources (grants, etc.).

You might try contacting her directly, as she is very sincere about her efforts.

Audrey Bennett
Associate Professor of Graphics
Department of Language, Literature, and Communication
Rensselaer Polytechnic Institute
110 8th Street
Troy, New York 12180-3590
O 518.276.8129
F 518.276.4092
bennett@rpi.edu

FYI- here is a link to an earlier version of her Powerpoint presentation at TypeCon.

> P.S. -- me bad! After TypeCon I had promised to expand the Typographic Education special interest section of Typophile with presentations, summaries and contact info of the presenters (who gave permission) during Type & Design Education Forum. I do have the files to organize and upload and will keep my promise to make them available.

No excuse, but right after TypeCon I started a new full-time position and have been very busy... scarcely time to post as often as I used to.

Erik Fleischer's picture

Oh boy. Be careful what you wish for. This is Pandora's box all over again.

Having said that...

why do we need research on reading? what are the proposed goals of reading research?

To find out whether and how design can improve the efficiency and efficacy with which readers process printed text. From a more philosophical or ideological point of view, the ultimate goal is to promote the dissemination of knowledge.

Alessandro Segalini's picture

This is quite readable in my opinion, with a soft touch of clouds:
http://www.filosofico.net/filos52.htm

hrant's picture

Kevin with a gentle mood adjustment.

hhp

hrant's picture

> Is there a problem with reading?

Caveman #1: Grunt, I wonder if I can do something
to get to my girlfriend’s cave quicker in the mornings.
Caveman #2: Grunt, is there a problem with walking?

Caveman #1 invented shoes. Caveman #2 got removed from the gene pool.

Text fonts are like walking shoes: you don’t notice any flaws unless they’re really bad, but no matter how good they are they still end up hurting you if you walk long enough. There will always be ways to make fonts/shoes more comfortable for a longer duration.

hhp

crossgrove's picture

Chris,

Not a problem with reading; a problem with our understanding of what makes reading happen. We want to know so we can influence the meaningful variables.

As Bill points out, the many variables interact, so it's a very big project to try to separate them. I do think a new suite of fonts should be developed, but let's all chant together: it should be type designers (experienced, thorough, open-minded ones), not students, brain scientists or linguists who develop them. The complexity of assembling a team of appropriate experts, asking the right questions, testing thoroughly, and finding meaningful results is a tall order. So, Paul, I would say, no, there isn't anything even approaching exhaustive or even complete research out there. It hasn't even been tried.

enne_son's picture

What I'd like to see is a Research Institute for the Scientific Study of Perceptual Processing in Reading, existing under the institutional umbrella of a School of Typography or Design.

The idea here would be to go beyond developing new measures of readability, or new proofs of readerability, or comparative tests that show a more sophisticated understanding of the stimulus material used in these tests (i.e., type), though this last concern is certainly a priority. The idea would be to develop an understanding of the perceptual and neuro-physical mechanics of reading that is detailed enough to make plain why the things that typographers and type designers know are critical or worth attending to, like spacing, are indeed critical and worth attending to.

Currently the perceptual mechanics of reading are massively underexplored by students of reading and reading problems, and pivotal dimensions of processing (such as lateral interference) are not well integrated into the picture. A good deal is known about perceptual mechanics, neural behaviour in the multi-layered visual cortex, perceptual learning and object recognition that hasn't been effectively applied and tested in the domain of what I call 'visual wordform resolution'. The Design / Typography School umbrella would guarantee that the data currently available is (re)viewed with eyes sensitized by typographical craft knowledge, and that tests are designed in a way that reflects expert knowledge of the stimulus material and trenchant intuitions about its functioning in conditions of use. Eyes sensitized by typographical craft wisdom might predispose the researcher to introduce relevant but overlooked perspectives that integrate a greater range of observations and, in doing so, redirect the field.

So there is a benefit to the typographical community and to the field of reading research.

ebensorkin's picture

decipher the letterforms and words (reading) or an inability to understand what the author is trying to say

Chris, that's a really interesting idea. William, nice list!

Here are some more possible variables

- Paper & ink combinations ( how is that for exponential factors? Actually there ought to be a way of contrasting things without going too crazy )
- Serif shape
- Contrast with ground/paper
- Reverse color ( white on black )

Nick had some solid ideas about this in a thread I cannot seem to dig up now. Anybody else know where it is?

Ultimately, though, I do think there is some purpose ( efficiency, if nothing else ) in doing tests designed to see what kind of meta-theory might be best at describing the mechanisms at work in reading. It seems to me that David Berlow said something to the effect that whatever theory is used to describe what happens in reading Latin characters had better have some way of dealing with Arabic & Chinese too, or it was going to be sub-standard. I tend to agree. ( ...If that's an acceptable paraphrasing and my memory isn't fooling me utterly, that is. ) I would, for instance, like it very much if some of Peter's theories could be tested.

Ultimately, what I am personally curious about is whether small, subtle contextual variations in lettershape and spacing might be helpful. Erik van Blokland made a typeface that varied randomly, called Kosmik. Of course, Kosmik is pseudorandom, or even super-random, in that it avoids the sameness that true randomization can produce. So... it *is* contextual. But the contextual variation I am interested in is based on attempting to get better Notan & letter relationships, rather than variation for its own sake. Sometime soon I will start a thread about this again; I am assembling lots of bits & pieces at the moment! All of this is to say I would like this idea tested one day too.

But what has intrigued me most about this thread is not the pure theory or the atomization of type factors - it's the ideas people have had about alternatives to pure speed for measuring type success. In my view, that question is probably the richest vein to be mined of all.

Luc(as), where are you?

muzzer's picture

If all you blokes put your heads together, I reckon you'd make a font that's heaps better than all these cruddy ones these days. Are you up for it???

Muzz

paul d hunt's picture

But the contextual I am interested in is variation to be sure but based on attempting to get better Notan & letter relationship rather than variation for its own sake. Sometime soon I will start a thread about this again.

Or you can revive this one, if it suits your needs.

enne_son's picture

[Eben] "…measuring type success…"

It's great to measure the success of a drug protocol.
But we need to understand, in biological terms, why a specific drug works and how drugs work in combination.

To guide intelligent action in our manipulation of typographical micro-variables, we need to know how things achieve what they achieve.

This should motivate research into perceptual processing and perceptual learning.

William Berkson's picture

>Is there a problem with reading? Maybe there isn’t?

That is a really good question, Chris. I think the answer is that sometimes we know there is a problem.

For example a friend of mine was talking about the requirements for grant proposals for NSF. They are required to be a single column of 10 point Times New Roman, single spaced on letter sized paper. This makes the measure way too long, and my friend was saying that it really makes ploughing through piles of grant applications a chore for the reviewers.

Another example, I think, is screen fonts, which in that low-resolution environment are fatiguing enough that most people don't want to read extended text--only snippets.

So we know there is a range of more and less readable text. Now, what you can reasonably question is whether it is possible to improve on the best typefaces--such as Baskerville, Garamond, etc.--set at a size, measure and spacing we know are excellent. Maybe the limits are difficulties of comprehension, as you say, and not of perceptual processing.

Personally, I think only small improvements in readability may be possible in typeface design, though I do think they are possible. The main value for text design of knowing scientifically what is better and worse is that it would give guidance, so a new design would be at least as good as the best out there now. And maybe slightly better.

But the benefits could be great in other areas. For example, if scientific testing showed that long measures were bad, the NSF wouldn't specify them. Forms would not have type too small or crowded. And screen manufacturers and font designers would know what realistically to shoot for.

But the fruits of research will be most marked in layout: if we knew how type size (based on x-height) and leading relate to ideal line length, it would help avoid blunders in newsletters, magazines, newspapers, the internet, etc. And similarly for many other variables.

In addition, there may be benefits for teaching reading and helping dyslexics--we don't know.

Overall, the justification for research into perceptual processing in reading is what Benjamin Franklin said when asked of what use his electrical researches were--which at the time yielded only amusing gadgets: "Of what use is a new-born baby?" Of course he later invented the lightning rod, and then our whole modern economy became based on electricity. Reading research is still in the 'newborn baby' stage, I think. But there is plenty of reason to hope for helpful results.

paul d hunt's picture

For example a friend of mine was talking about the requirements for grant proposals for NSF. They are required to be a single column of 10 point Times New Roman, single spaced on letter sized paper.
Is there any guideline for margins? >^P

Maybe the limits are difficulties of comprehension, as you say, and not of perceptual processing.
So for research, there should be different levels of text complexity for testing purposes as well? Perhaps one text that incorporates only the commonest words and information, and then sets of texts that progress in complexity of vocabulary and concept? This may be the hardest variable to isolate.
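One crude but common way to stratify test passages by difficulty is a standard readability formula such as Flesch Reading Ease. The sketch below is my illustration, not something proposed in the thread, and it uses a naive vowel-group syllable counter, so the scores are only rough estimates:

```python
import re

def count_syllables(word):
    # Naive syllable estimate: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores mean easier text.
    score = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)"""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

simple = "The cat sat on the mat. The dog ran."
complex_text = "Comprehension methodologies necessitate multidimensional experimentation."
print(flesch_reading_ease(simple) > flesch_reading_ease(complex_text))  # True
```

Higher scores mean easier text, so passages could be binned into difficulty levels before testing. Of course, surface readability is not the same as conceptual difficulty, which would still have to be controlled by hand.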

hrant's picture

> there may be only small improvements in readability possible in typeface design

I agree, in terms of percentage. But over the length of a book, or with inadequate lighting, etc., that small amount can make or break things, or at the very least improve one's life a bit. In fact it's not really about finishing that long book an hour or two earlier, it's about quality of life. And this is the sort of thing that tends to escape formal science, often leading its practitioners to become detached from people at large, and thus to be dismissed by them. To me, a good scientist is first of all a wise person.

hhp

dezcom's picture

"In fact it’s not really about finishing that long book an hour or two earlier, it’s about quality of life."

That is exactly it, in a nutshell. I just wonder if we truly know what we are measuring. We know there are eye movements; we know the quantity and type of movement can vary. I have yet to see how this is extrapolated into a better or worse reading experience, though, and if so, how much better or worse.

Line length (based on character count) has long been known to affect reading. This is not something the type designer can control, only the typesetter/user--in William's scenario, perhaps the czar who decrees the oversized measure :-) Tinker's testing, the testing from the Royal College, and every other one I know has shown the long line length to be a killer of reader comfort. Strange that the NSF Grants Czar does not believe in research.

I think we need 2 sets of variables to test. One set would be aimed at the design of type. The other set would be aimed at the configuration of type on a page or screen. The target in one case is a type designer and the other is a graphic designer.

ChrisL

hrant's picture

Well we only need the former. Hey, it's already hard
enough to get their attention - don't give them choices! :-)

hhp

Nick Shinn's picture

The research that I am aware of generally seems to be predicated on new typeface development, whether it is for fast reading on computer screens, or a dyslexia project. These seem able to get the funding.

But surely it would be better to test existing types, and to have independent researchers tackle the same problem, so that we can get some proper scientific peer review.

**

Having said that (which is to say, I am all for learning more about how things work and why), I don't believe that a physiological understanding of reading has more than a very oblique theoretical relevance to type design. Numerical method is most useful in product development and testing, i.e. technology, not in providing general scientific principles that will improve or validate the process of everyday typography or type design.

hrant's picture

Where "everyday" means mainstream, commercial, etc. I agree. You don't try to optimize type for money, that's for sure (heck, it's hard enough to just make type for money). And you don't do it to please the masses (consciously). You do it because you respect the craft, and care about the user. Like Peter said (elsewhere) it's not for everybody (and I don't mind that one bit).

hhp

enne_son's picture

Perhaps we need to distinguish between 'brute' and 'fine' issues. Brute issues are issues of size, leading, line length, consistency of spacing and choice of type that measurably affect the sustainability (with good comprehension and enjoyment) of the perceptual-attentional connection with the text. We could call these perceptual text-navigational processing issues and develop measures for them.

Fine issues relate to adjustments or manipulations of contrast, construction, spacing that may be minor or hard to detect in perceptual text-navigational processing (base-level readability) terms but show an advantage in visual word-form resolutional terms, because they reduce superfluous spikes in the retinal and neural receptive field.
____________

For dyslexia, we need an idea of what the problem is before we can do product development and testing. Is the problem rooted in a perceptual processing deficit, a phonological processing deficit, both, or one as a consequence of the other? And if the deficit is a perceptual processing one, is it because of the structure of the script, an error in pedagogy, or a neurological-anatomical deficit?

William Berkson's picture

The discussion of perceptual processing vs cognitive understanding of text here gave me a new idea. Maybe the best way to discriminate better and worse readability is with material that is cognitively very difficult for the reader. Then changes to a worse type design or graphic design would be the 'straw that broke the camel's back' so to speak.

In other words, when the material is cognitively easy, the taxing character of the type or layout will not be so easy to measure; but near the borderline, a breakdown into very slow reading or abandonment will be more evident.

What do you think Kevin?

hrant's picture

On the contrary, we need easy material - but we also need very good testing.

Unnatural stimuli cannot yield good insights into natural processes such as immersion. When the reader is reading very slowly because the material is hard, the high-speed stuff we're trying to understand doesn't come into play. This is in fact the core problem with existing testing. But Kevin might in fact agree with you, because from what I gather he doesn't see the immersion; he sees reading as one "linear" black box, so to speak.

hhp

ebensorkin's picture

But we need to understand why in biological terms a specific drug works and how they work in combination. - Peter

I agree 100%. Which is why I am so interested in your work. In fact, I think it was my relative satisfaction with your ideas about this question which made additional parameters of success so compelling. It seemed ( perhaps falsely ) that the one thing lacking for your theories to be tested properly was a broader, richer, more detailed, and maybe more commonsensical view of what typographic success would be composed of. What would this increased quality of life, as Hrant would have it, look like? I don't mean that this lack was yours, but rather ours. Chris's & William's posts clearly got me.

I may have been overly severe in my post by divorcing pure theory ( which is what it seems you have emphasized ) from hypotheses about how we might measure success in type. Maybe I am wrong in thinking that bringing in the variable of comprehension clarifies things. Maybe it just adds a new variable. A new chore. What do you think?

Does your theory about the perceptual and neuro-physical mechanics of reading, as well as the salient feature set ( how am I doing here? ) available to be processed, suggest a particular theory of measurement as well? In other words, what besides speed would you like to see measured? Perhaps none of the meta-measurements is relevant. Perhaps what you feel we need instead is just micro-measurements to seek & confirm salient features.

I really do want to know.

BTW - Nice post Carl!

Nick, what do you mean by 'numerical method'?

Nick Shinn's picture

Numerical method: quantitative analysis.

brampitoyo's picture

By the way, do you know that elementary schools are open to such testing and education?

I heard about this problem some time ago, firsthand, when a teacher complained about how the existing typefaces ("a Times or an Arial", she said) hampered the readability and comprehension of her students. I suppose it has to do with size rather than letterform, but this is pure speculation.

She then mused about how almost every teacher in her school felt that way, and that they would benefit from having a "font advisor" come in and train them to use type effectively. Not necessarily to achieve the most aesthetically pleasing setting, but ultimately to improve the kids' reading comprehension level.

About the ease of getting through the school board bureaucracy, she said, "some of the board members are already aware of such concern". I didn't know if this meant anything or not.

Is there any school teacher who cares to enlighten us? Otherwise, we'll just go straight to the school board and go from there :)

TBiddy's picture

So, I've got a personal mystery (which might add another unnecessary dimension to this) that I'm trying to solve.

I get motion sickness pretty bad. I can't read on any moving vehicle— I'm talking "barfy." In the last few months I've realized that I can read the Washington Post Express on a train on my way to work with little discomfort.

I saw this as a victory for myself and thought that maybe I was able to read on a moving subway car if I was standing up. Yesterday I tried to read a paperback book... and, um, I got "barfy." However, I seemed to read at my normal speed, perhaps even faster than usual, but I still got sick.

Why would I be able to read the Washington Post Express, without feeling sick, but not a paperback?

This had me thinking: could a typeface be made for people with motion sickness, to eliminate reading discomfort? Anyway... a personal puzzle that I thought might be of use here.

hrant's picture

> Why would I be able to read the Washington Post Express,
> without feeling sick, but not a paperback?

My guess is the larger newspaper blocked more of the (moving) background from your view. Sort of an extension of the benefit of wide margins.

> could a typeface be made for people with motion sickness

Interesting!

hhp

Erik Fleischer's picture

The idea here would be to develop an understanding of the perceptual and neuro-physical mechanics of reading that is detailed enough to make plain why the things that typographers and type designers know are critical or worth attending to, are indeed critical and worth attending to, like spacing.

That makes a lot of sense, but perhaps the question should be framed differently: one should be completely open to the possibility that "the things that typographers and type designers know are critical" may not be critical after all. It's difficult enough to conduct good research without biases; it's almost impossible if the researcher is bent on proving his beliefs to be true.

But what has intrigued me most about this thread is not the pure theory or the atomization of type factors - it’s the ideas people have had about alternatives to pure speed for measuring type success.

Efficiency and effectiveness (or efficacy) are different things, but I've been wondering how effectiveness can be measured. Surely measures of comprehension and retention have a lot more to do with the reader's familiarity with the subject and all sorts of cognitive processes unrelated to visual processing?

I don’t believe that a physiological understanding of reading has more than a very oblique theoretical relevance to type design. Numerical method is most useful in product development and testing, ie technology, not in providing general scientific principles that will improve or validate the process of everyday typography or type design.

What method, then, would be appropriate for such an investigation? Ethnography? Wouldn't the goal be to arrive at universal conclusions? In the end, if the goal is to produce generalizable conclusions, one cannot run from statistics and numbers.

it should be type designers (experienced, thorough, open-minded ones), not students, brain scientists or linguists who develop them.

But perhaps brain scientists and/or linguists should be allowed to test them? Type designers would certainly be better equipped to choose what factors should be isolated and tested. But scientists from other fields would perhaps be in a better position to analyse how those factors actually affect reading.

Nick Shinn's picture

What method, then, would be appropriate for such an investigation?

I don't believe any method would be appropriate. Some things are not susceptible to quantitative analysis. This concerns the principle of demarcation between science and pseudoscience.

ebensorkin's picture

Nick, I agree with your point ( if I have it ) that testing factors in reading, type, comprehension etc. is more likely to benefit a technology company well before it helps a type design company. And even graphic designers before type designers. And ideally school kids before us too. But I am still not sure I completely get what you are saying. Are you saying that such research might have oblique relevance to type design, or no relevance to type design whatsoever? The phrase 'susceptible to quantitative analysis' suggests a kind of binary model that seems too simple to me. It's not a question, I think, of 'either' & 'or' so much as of 'also' & 'and'.

dezcom's picture

Scientists have been studying the Stradivarius violin for decades, trying to figure out why it sounds so good and to see if modern technology can replicate the instrument design using all the modern marvels we have available. So far, they have failed.
The technology used by Stradivari was knocking on wood--literally. He would carve a basic shape with simple hand tools and knock on the wood in various places, listening for that certain sound his ears could recognize as "correct". He would carve and knock, carve and knock at his slab of wood until it was right. Scans and X-ray instruments reveal that no two instruments are alike in their modulation of thickness. The assumption is that each piece of wood needed its own variation in thickness to resonate properly. Stradivari used his ears and knuckles in the same way that punchcutters used their eyes and smoke proofs: tinker a bit, then look, then tinker some more. The human mind and senses are just incredible. Sometimes we might just have to say, in this hi-tech world, that humans have abilities which are not reproducible by machine, and this is "OK"!
Should scientists close up shop? Absolutely not! But once in a great while, it is OK to just cry "uncle".

ChrisL

William Berkson's picture

>But once in a great while, it is OK to just cry "uncle".

No.

Choz Cunningham's picture

All the inquiry into fonts would do is create substantiated, predictable facets to use when designing fonts. No harm in having that in my toolbox: when I glue something together around the house, I appreciate that science has led to consistent, predictable drying times for a given environment.

So many of these proposed scientific inquiries are based on legibility = speed = success. This suggests a hunt for "the one true font". Yuck; that explains some of the designer resistance. There are a lot of other factors we all might be interested in. Do certain existing fonts lend themselves to increased comprehension of exact data? Of overall comprehension and recall? People often visualize words to recall spellings, dates, etc. What fonts does the mind use? Are there fonts that purposely slow down reading speed, and does that have a use? Is kerning simply aesthetic, or can alternate kerning profiles predictably change the meaning of a message?

Do people learn to write foreign languages better in certain typefaces? Can they? Some faces are inappropriate for body text, while similar ones are great. Is there an exact, calculable figure predicting where the change lies?

Then there is the idea of letterforms themselves. Would a font where each letter contextually joins to the "starting spot" of the next letter help? Arrowed serifs? Non-baseline-oriented fonts? These all sound like wacky ideas, but without numbers to confirm or refute them, we run on gut.

There are dozens more ideas, and new ones will spawn from research. Sounds like something for a university to have fun with. For a long time. Remember, we still don't even know much about a cat's purr.

Choz Cunningham
!Exclamachine Type Foundry
The Snark

ebensorkin's picture

“the one true font”

Fear of a 'one true font' where the industry is concerned is, of course, poppycock. Fear of an editorial board at a magazine believing in it is likewise. Fear of government offices or the odd small client getting religion about a 'one true font' & becoming a pain to work with might possibly be justified. But even there... designers have a rough time getting their clients to stick to agreed plans already! I can't seem to worry about that 'issue' at all. Fear of people bleating on about stuff which is just silly is something we will all just have to deal with. That's never going to stop.

dezcom's picture

We already went through the "one true font" phase; it was called Helvetica. I don't think that will be revisited any time soon.

ChrisL

Nick Shinn's picture

The phrase 'susceptible to quantitative analysis' suggests a kind of binary model that seems too simple to me.

I said some things are not susceptible, which doesn't mean that there aren't many things that may be susceptible to varying degrees.

Erik Fleischer's picture

Chris (& everyone else),

You make the point that Stradivari was a master craftsman who achieved near-perfection not by mathematical calculations or by reading scientific treatises on violin engineering. He just "played it by ear", quite literally.

First of all, I would remark that his method (according to you -- I don't really know anything about Stradivari) is used even in computer science. Some problems, particularly complex systems (http://typophile.com/node/28777/164470), have so many variables that they cannot be solved by means of equations -- or at least not without ridiculous amounts of time and processing power. Or the variables may be known, but exactly how they interact (i.e. the equations) is not known.

In such cases, solutions may be worked out by trial and error: knock, carve, knock. For instance, when the equations used by a certain encryption method are very complex and too many variables (such as the original characters) are unknown, a computer may try to crack a password by attempting different combinations sequentially.
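That sequential trial-and-error search can be sketched in a few lines. This is a toy illustration only: the tiny character set, the length cap, and the function names are all invented here, and SHA-256 stands in for whatever the real system would use.

```python
import hashlib
from itertools import product

def brute_force(target_hash, charset="abc", max_len=4):
    """Try every combination of charset, shortest first, until the hash matches."""
    for length in range(1, max_len + 1):
        for combo in product(charset, repeat=length):
            guess = "".join(combo)
            if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
                return guess
    return None  # search space exhausted without a match

# Recover the string behind a known hash.
secret_hash = hashlib.sha256(b"cab").hexdigest()
print(brute_force(secret_hash))  # cab
```

The point of the sketch is the shape of the method, not its efficiency: the search space grows exponentially with length, which is exactly why such problems resist a closed-form "equation" solution.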

Another approach in computer science is to use swarms of "intelligent" agents: each agent, which in a way represents a variable in a complex system, is programmed to behave in a certain way; the agents are left to fend for themselves, and the results are recorded.
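A toy version of that agent idea: each agent follows one simple local rule, nothing is solved analytically, and we just record what the swarm does. The rule here (drift toward the group mean plus noise) and all the parameter values are invented for illustration.

```python
import random

def run_swarm(n_agents=20, steps=100, seed=1):
    """Each agent nudges toward the current swarm mean plus noise; return the final spread."""
    rng = random.Random(seed)
    positions = [rng.uniform(-10, 10) for _ in range(n_agents)]
    for _ in range(steps):
        mean = sum(positions) / n_agents
        positions = [p + 0.1 * (mean - p) + rng.gauss(0, 0.05) for p in positions]
    return max(positions) - min(positions)

# The aggregate behavior (clustering) emerges from the local rule;
# it was observed from the run, not derived from equations.
print(run_swarm())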

The reason, then, that almost nothing human -- such as reading -- can be solved or controlled through "pure science" is that there are too many variables and equations, many of them unknown. What the master craftsman does is use his experience -- those missing variables and equations -- and the trial-and-error methodology to create solutions. This is a task that can probably never be accomplished through "pure science", simply because all the variables and equations known by all master craftsmen (and women) will probably never be recorded and codified in a usable way.

However, the knowledge and experience of the master craftsman has limitations, in the sense that those variables and equations in his brain are specific to his culture, habits etc. While scientific research is no substitute for the master craftsman and woman, it can certainly help them make better decisions and judgements, and even try different solutions that go against their personal beliefs.

Is this more or less what you were saying?

hrant's picture

You guys don't understand. It's Nick what's not susceptible
to quantitative analysis. Or really just analysis, period.

hhp

enne_son's picture

[There are too many things to address in the previous posts, and I like a lot of what is said in many of them, including those of the 'new' voices]

[Chris] 2 sets of variables to test

Erik, I am suggesting approaching the field with my type-craft-generated attunements and biases, and elaborating them in perceptual-processing terms to the point where I can test whether they need to be rejected.

Nick, the important part of science is not the numerical analysis, but what conclusions are drawn from it. As you insist, there are many things that can't be effectively quantified (comprehension; gestural atmospheric import; readability), but you can gauge--sometimes numerically--some of their manifestations, or some of their determinants. The manifestations have to be indicative of something, so we abstract and study possible determinants.

Eben, I think plain old reading speed is too crude a metric for gauging the effects of manipulating typographical micro-variables, even if, over the space of a large text read in one sitting, my manipulation reduces the time it takes to get through the passage. I have no way of telling whether all the other possible things that contribute to reading speed have been controlled. The types of tests I'm trying to get a bead on involve Fourier transforms, gauging the speed-of-recognition advantage of various kinds of 'priming', assessing the impact of various manipulations on the magnitude of the Word Superiority Effect, and ideal-observer modeling of neuromechanical flow-through where lateral interference is represented and slot processing is allowed or prohibited.

The best measures of typographical success are probably anecdotal. Other than that, all we can do is try to pinpoint which part of the map of reading a given variation addresses.
____________________

Comments about my graphic:
1) The competent reader has abilities at every level
2) Some levels are more impacted by culture, education, familiarity and sensitivity than others
3) Variations at every level affect all the others to some degree and extent, and they affect the sustainability and satisfactoriness of our engagement with the text over time.

When we say x improves readability I want to know what is meant in functional anatomical or perceptual-mechanical or gestural atmospheric terms.

William Berkson's picture

I guess my link above was too subtle. The point is that Prof. Nagyvary has been able, more successfully than anyone before, to create violins that sound like Stradivari's. And he has done it by combining his discoveries in chemistry with traditional craft skills. Similarly, scientific research in reading might help inform and strengthen the artistry of those who draw type, so that the results are better.

Erik Fleischer's picture

Peter (Enneson),

If you've published any articles on subjects relevant to this thread, could you please tell me where they can be found? I feel that I need to study some more to be able to exchange ideas with you (and so many other knowledgeable people).

Thanks,

Erik

Kevin Larson's picture

Q: Is it better to conduct studies with easy or challenging content?

Theoretically, improved type design and paragraph layout should have the same impact on easy or difficult text. Improved type design and paragraph layout work to improve the word recognition process, and easy word recognition is beneficial no matter what the content. If you were interested in organizational factors like providing good headings, then this is certainly more relevant to the more challenging text.

In practice it’s difficult to find content of appropriate difficulty. If you use content that is too challenging (e.g. having an expert in literature read a physics textbook), then you’ll find that the time to read increases dramatically, and reading-speed differences will be determined more by the number of times a person needed to reread long passages than by any typographic differences. If the content is too easy (e.g. having a professor of physics read a high school physics textbook), then the reader is likely to scan the text rather than truly read it.

Another challenge is that reading performance differs from person to person, so I prefer to have the same person read text from each of the typographic conditions being examined. This means that more than one sample of content is necessary, and it’s desirable to have them of equal difficulty so that content doesn’t impact the results (though counterbalancing the passages reduces this problem).
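The counterbalancing Kevin mentions is often done with a Latin-square rotation, so that across participants every passage appears in every typographic condition equally often. A minimal sketch, with invented passage and condition names:

```python
def latin_square_assignments(passages, conditions):
    """Rotate passages across conditions so each pairing occurs once per cycle of participants."""
    n = len(conditions)
    assert len(passages) == n, "need one passage per condition"
    schedule = []
    for participant in range(n):
        # Shift the passage list by one position for each successive participant.
        row = {conditions[c]: passages[(participant + c) % n] for c in range(n)}
        schedule.append(row)
    return schedule

for i, row in enumerate(latin_square_assignments(["A", "B", "C"], ["serif", "sans", "slab"])):
    print(f"participant {i}: {row}")
```

Averaged over a full cycle of participants, any difficulty difference between passages is spread evenly across the typographic conditions, so it cancels out of the between-condition comparison.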

Every study is open to criticism, so I think it’s best to do something that’s good enough and document what you did.

Cheers, Kevin

dezcom's picture

Erik,
I think you have said pretty much what I was trying to say. Thank you for your elaboration. Like you and William, I feel the need for collaboration between science and craft (and whatever you would call the human ability to interpret, solve, and create). I do feel there is a danger from both arenas, though. Humans interpret data; humans set up testing procedures; humans define criteria and devise theories to prove or disprove. All of these things can involve human error and provide a sense of being "correct" just because it is science. The other side is human hubris and the self-assurance that "it is my art, therefore it must be great." Neither art nor science should be accepted without question. We always need to be humble enough to listen to other voices even if what we hear does not fit our own theory or our own ego.

Peter,
I much like your model! You at least seem to have included the meat of the issues. The dilemma is sorting out the interaction of effects. If I say reading is a function of all of your variables, I still have the hard job of measuring how they interact. I might look at one variable sometimes and the cumulative effect of all of them at other times. Making the quantum leap to what to extrapolate from it is the devil of it all. I still don't see how we can remove deriving meaning from the act of perceiving forms and decoding signs.
ChrisL

Kevin Larson's picture

Chris asked:
“My question is, how do we know when saccades are duplicated or regression occurs if this is due to inability to decipher the letterforms and words (reading) or an inability to understand what the author is trying to say. I hope Kevin can clarify what he said because I may not have gotten it right.”

There is a lot of ongoing research investigating the cause of saccades, including developing models trying to predict when they are going to occur. Some regressive saccades are caused by content problems. For example, if I read “The horse raced past the barn fell”, I can guarantee a regressive saccade after reading the word fell. It’s less likely to happen if I write “The horse that was raced past the barn fell” and even less likely if I precede it with a story about two horses that were raced down different paths, one near a barn. Other regressive saccades are likely caused by word level problems. In some of our eye tracking studies looking at ClearType versus b/w rendering we have seen reductions in regressive saccades indicating that letter quality can improve the mechanics of reading.

Cheers, Kevin

William Berkson's picture

>Theoretically, improved type design and paragraph layout should have the same impact on easy or difficult text.

Yes, this seems reasonable to me. However, what I am thinking is that a threshold kind of effect may be operating. I had suggested that one might measure reading comfort (actually discomfort) by the decline in comprehension over time. My suspicion is that what John Hudson calls 'readerability' can compensate for a lot of less-than-optimal conditions through an increased investment of mental effort.

Thus less-than-optimal type design or layout would barely show up, or not at all, in a short test -- it would be wiped out by the subject's increased mental effort. But if the person were pushed by reading relatively difficult content for a longer period, then you would see a sharp decline in comprehension -- and perhaps speed as well -- at markedly different times for the better and worse conditions, as mental fatigue set in.

Do you see what I am getting at Kevin? Does it make sense?

enne_son's picture

Erik: See my "Thesseloniki" text printed in Typo#13 along with one of Kevin's and one of Hrant's.

Chris: I don't "remove meaning from the act of perception of forms". The graphic merely isolates relevant areas of functioning. I claim that encapsulated with the neural code for the visual wordform "x" is a meaning, or 'node' in our personal construct system, as well as a spelling and a phonetic signature. At each visual wordform resolutional event a node or meaning is directly accessed and placed (as it were) within the sense-following stream.

Linda Cunningham's picture

Ah, well, this constituted a majority slice of my Master's thesis. Colours, fonts, size, icons, blahblahblah.

Conclusion: yes, there are optimal combinations that work for most people. Are they readable to everyone? No, because there's always a trade-off in what works best, and no enforced standards in non-language communication methods (icons!).

And as has been pointed out, then there's "comprehension" and "cultural standards"....

Erik used the term "Pandora's box" and that pretty much sums it up.

Linda
