Archive through August 13, 2004

John Hudson's picture

I've just got a commission to design a display face, which will be my first engagement with something not intended primarily for extended text (excepting work on Zapfino) in almost ten years. Should be interesting.

Miguel Hernandez's picture

>Better would be to think in terms of the degree of legibility those typefaces have. For example, Swift has 96% legibility, Times 87%, and Not Caslon 10%.

Juan Pablo, where did you get those percentages?

mh

Nick Shinn's picture

>the three things you mention are the tip of the iceberg

Hrant, those three things I mention are based on the experience of many years designing typefaces in as many genres as I could try. I'm prepared to back up my statement with proof -- give me any display face and I'll quickly and easily turn it into a text face using those three principles (licensing issues notwithstanding). And I also guarantee not to increase the ugliness quotient.

>the three things you mention are the tip of the iceberg

And the rest of the iceberg is artistry, aka "drawing" -- an intuitive activity of hand, eye, and a part of the brain opposed to analytical thinking.

>and you'll graduate from mimicry

Thanks. I have 21 retail typefaces for sale. 18 are original designs, 3 might be considered revivals. I'd hardly call that mimicry.

>self-serving artistry.

Hrant, I have used the word "craft" to describe "making" (not designing) fonts, and have carefully avoided the Art word. However, I'm surprised that a type designer, you, would be so vehemently opposed to artistry, and so eager to sing the praises of ugliness.

It's true that a text type will have bolder features than its display version (as per my 3 principles), but these bold features should be attractively rendered, because there are always situations where a "text" face will be used at display size.

What's your problem with artistry? Why do you think that self-expression is an indulgence rather than a gift? Surely any type designer with a strong personal style (Goudy, Frutiger, etc.) is self-expressive, and yet such people have produced many perfectly serviceable typefaces. Frutiger's eponymous face has become a benchmark of our era, mimicked by Myriad. And yet, when one looks at his oeuvre, it belongs.


hrant's picture

> using those three principles

Then let me just throw out one extra "principle" which you'd have to incorporate from now on, if you want to raise the level of your craft*: overshoots have to increase in inverse proportion to intended point size. But there are many more "principles" to add, if you want to work with prefab lists of tasks to perform a conversion. Having bunches of principles is better than having nothing, and your list of three is a great start, but it's better to understand the reasons behind it all. Otherwise you'll have to fish around forever, or hope that master designers (none of whom know everything) are generous with their knowledge (good luck). For example, the overshoot thing is mentioned in only one place that I know of: a little footnote in H Carter's translation of Fournier. But I figured it out before running into that, and it was just a great piece of validation at that point. I didn't figure it out because I'm very smart, but simply because I had taken the time to think about the root causes, instead of working with prefab lists.

* Not the same thing as the quantity of fonts one produces, or how much money one makes from them.
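[Editorial aside: the overshoot "principle" above can be put into toy numbers. The sketch below assumes a simple inverse-proportion model; the 12 pt reference size, the 15-unit reference overshoot, and the function name are all invented for illustration, not taken from any actual font.]

```python
# Toy model: overshoot (in font units per 1000 em) grows in inverse
# proportion to the intended point size, relative to a reference design
# size. Both constants are invented for illustration.
REF_SIZE_PT = 12.0     # assumed reference optical size
REF_OVERSHOOT = 15.0   # assumed overshoot at the reference size

def overshoot_for_size(point_size: float) -> float:
    """Overshoot for a given intended point size under the
    inverse-proportion assumption."""
    return REF_OVERSHOOT * (REF_SIZE_PT / point_size)

for size in (6, 9, 12, 18, 36, 72):
    # smaller intended sizes get proportionally larger overshoots
    print(f"{size:>2} pt -> {overshoot_for_size(size):4.1f} units")
```

Under this hypothetical model a 6 pt cut would get twice the overshoot of the 12 pt reference and a 72 pt cut only a sixth of it; a real optical-size family would presumably cap the growth at small sizes and adjust many other parameters at the same time.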

Display and text are different animals. Canned, blind routines will only produce monkeys dressed as humans.

Expression:
As I've said before, nothing a human makes can be devoid of personal expression, and nothing made for human consumption should be devoid of it. But the difference is where the expression comes from and who it's meant to serve. The expression of an artist making pretty, harmonious formal shapes is selfish and largely wasteful; the expression of a craftsman who, almost in spite of himself, makes fonts that serve the reader is better, in my mind. Prettiness I'm really sick of - it's a succubus.

--

Nick, I don't want to bash your work. Your display work is indeed original, and your text font work is solid. What riles me is your blindness to the higher reaches of text font design, that's all. Not something that will make anybody a fortune or reduce world hunger, but still.

hhp

markm's picture

Hrant should spend less time repeating what he's read about fonts and more time doing what real designers do. If he's not sure what that is, I'll clarify: you can either choose fonts and set them properly, knowing what looks good, what sizes they will be viewed at, and how they will print, or you can't. Blibber-blabbering about setting words such as "paradoxal-schmoxal" doesn't make you a good designer or letterform expert. You either have it or you don't.

hrant's picture

I guess we have different ideas of what Design is. To me groping in the dark is not a key ingredient, and "knowing what looks good" is simply not good enough - knowing why things work is the only way to mark real progress. Otherwise you're just a peon, expressing emotions that nobody else really needs to care about, and regurgitating drivel from your predecessors.

BTW, I haven't picked this stuff up from reading, but thinking. And yes, I know that's something most designers are trained to avoid as much as possible.

hhp

Nick Shinn's picture

>overshoots have to increase in inverse proportion to intended point size

I wouldn't say that's a generally advisable principle, and I don't consider that ignoring it has any bearing on readability/legibility.

If anything, it's an example of something which seems like a clever idea in theory, but in practice provides a meaningless aesthetic gloss.

If, for text type, one increases the heft of hairlines and serifs, then a size-specific compensation is not necessary for curved overshoots, although it does improve characters with sharp apexes, in particular v and w, in low-contrast typefaces.

I came across this phenomenon, noticing a "shortness" of v/w in one of my faces (Beaufort) at small size, and at first thought it was a problem with hinting: I noticed it because it didn't look quite right, not because of an optical theory.

If anything, if one intends to follow such an overshoot principle, it would be of most use in the lighter weights of sans serif types, where it would make sense to actually decrease the overshoot of baseline curves, owing to the visual erosion of the corners of vertical stems. But I'm not aware of any such optically scaled sans faces.

>Prettiness, I'm really sick of

I repeat my challenge: name a (pretty) display face, and I will make a perfectly good text version of it (which you might even find satisfyingly ugly), using the 3 principles I mentioned earlier in the thread.

hrant's picture

> meaningless aesthetic gloss.

Even though the whole point is to make/keep the overshoot invisible? Even though we're talking about sizes where the layman (the whole point, remember?) can't even [consciously] tell apart Times and Garamond? Hmmm, and overshoots in general are "meaningless aesthetic gloss", then. Whatever.

Anyway, the overshoot factor is just one of the many things* that fall outside your Big Three. But I'm sure you've thought of every single difference between text and display cuts (and of course they're all cosmetic -pardon me- aesthetic differences), and have a very good justification for not worrying about any of them. Those three things are enough. As you said, text face design is simple... Yeah, that sure is highly believable.

* More than a dozen of them, as per John Fiscella.

> I noticed it because it didn't look quite right, not because of an optical theory.

But the point of course is that if you knew the theory you wouldn't have made that mistake in the first place. Knowing the Why isn't just satisfying, it's useful - it saves time and improves the quality of the end result. Not to mention all the mistakes you'll fail to catch... But at least you won't lose sleep over them, since you won't know they're there. :-/

> I will make a perfectly good text version of it

Who gets to decide if it's "perfectly" good? The same gang of clods that "decides" most things in type design? Graphic designers who think Mrs Eaves is a text font? No game.

hhp

dezcom's picture

>. . .but thinking. And yes, I know that's something most designers are trained to avoid as much as possible.

Since your methodology is so science-based, I hope you will share with us the way you have proven this hypothesis. Perhaps you just have a far less rigorous definition of graphic designer than I do. A graphic designer is not just a person who owns a Mac, a copy of InDesign or Quark, and a fistful of faddy fonts.

I don't know where these graphic designers you speak of are trained, but not at any of the design schools that I or any of my colleagues have attended, lectured, or taught at (Carnegie Mellon University, Ohio State University, Rhode Island School of Design, University of Cincinnati, Yale, HFG Ulm, and others). Our training was as much science and communication as studio courses. We had to prove that everything we did was valid message development and well fitted to information processing. Granted, this was in the old days, when any designer or typographer could recite the names of every typeface in existence over lunch.

ChrisL

Nick Shinn's picture

>As you said, text face design is simple... Yeah, that sure is highly believable.

No, that's not what I said. What I said was that it's easy to take a display face and turn it into a text face, following three basic principles.

>No game.

Rather than continue this debate in the abstract, I offered a practical demo, but you seem to be concerned that the court of public opinion, as represented by Typophile participants, is a "gang of clods". Shame on you for being so snooty.

hrant's picture

Of course I said no such thing about Typophilers in general (although any group of people will have some clods in it). The clods are aesthetes who cast themselves as expert craftsmen even though all they do is churn out safe, predictable crap - people who make fonts only to make money or to promote a stale ideology - people who have very low (and often highly misguided) standards of what a text font should be. In my experience, Typophilers are generally too humble and too doubtful of the establishment to be clods.

In terms of Typophilers, the only thing I can do is hope that my posts in this thread serve to raise a big red flag against your reductionist, progress-free stance.

hhp

pablohoney77's picture

personally, i wanna see hrant name a face and see nick's text version of it. i mean what can be lost here? could be a good learning experience for all of us following this thread. so who else is with me?!

hrant's picture

That's not fair - I just name a font, and Nick has to do all the work? Well, OK.
I'll choose something from his own library, to reduce any obstacles.

OK, checking...

Worldwide
:->
Kidding! That's already a text font, although part of my point is that -predictably- it's too mainstream.

OK, what about Eunoia?
I think it's a good choice because Nick might see it as an easy candidate, while I see things in it that clearly can't survive the sort of blind conversion he's promoting.

And even if the results convince neither him nor me to change position, it might not be a total waste: at least he might end up with a font that many people will enjoy using!

hhp

William Berkson's picture

Hrant, your insulting remarks about Nick's work are absurd and foolish.

I'm sure Nick can make Eunoia into a text type, but I agree with you that it won't be a good text type if it looks like Eunoia. As Nick explained in another thread, he designed Eunoia to have a lot of 'dazzle', which can be great for display, but will be awful in text.

As I said above, I suspect you are both mistaken about display vs text. I think you can as a rule take a text type, and keep its look while making it a good display type - but not the other way around.


Nick Shinn's picture

OK Hrant, Eunoia it is.

>1. Sturdier details (eg thicker hairlines and serifs)
This won't be necessary, as the thinner strokes are sufficiently thick at 8 or 9 pt.

>2. Avoid small counters
Because Eunoia is a condensed face, I will increase the horizontal scaling to make the counters larger.

>3. Wider sidebearings and serifs
I will open up the fit -- would it also be acceptable to adjust some of the sidebearings selectively?
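[Editorial aside: the three adjustments above amount to a crude parametric pass over glyph metrics. A sketch under invented numbers -- the gain factors, the metric names, and the sample values are all hypothetical, and no real font data or tool API is involved.]

```python
# Crude display-to-text conversion following the three principles:
#   1. sturdier details (thicken hairlines/serifs)
#   2. avoid small counters (expand horizontally)
#   3. wider sidebearings (open up the fit)
# All numbers below are invented for illustration.
HAIRLINE_GAIN = 1.4  # assumed thickening factor for thin strokes
H_SCALE = 1.10       # assumed horizontal expansion factor
FIT_GAIN = 1.25      # assumed sidebearing expansion factor

def to_text_metrics(glyph: dict) -> dict:
    """Derive text-size metrics from display-size metrics.
    Values are in units per 1000 em; the keys are illustrative only."""
    return {
        "hairline": glyph["hairline"] * HAIRLINE_GAIN,  # principle 1
        "width": glyph["width"] * H_SCALE,              # principle 2
        "lsb": glyph["lsb"] * FIT_GAIN,                 # principle 3
        "rsb": glyph["rsb"] * FIT_GAIN,                 # principle 3
    }

# A made-up display 'o':
display_o = {"hairline": 30, "width": 480, "lsb": 20, "rsb": 20}
print(to_text_metrics(display_o))
```

In practice each move would be done by eye, glyph by glyph (as the question about selective sidebearings suggests); the sketch only shows that the three principles are mechanical enough to be written down as parameters.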

Also:

Shall I use 9/12 pt Helvetica (75 or 85) as a "target" to compare "Eunoia Text" with?

Am I allowed to select characters from the three versions of Eunoia, to build one text version?

as8's picture

What is Eunoia? A font? Where can I see it?
"Eu" could be "Europa" & "noia" means "boredom" to me.
AS

A. Scott Britton's picture

Hrant and Nick, you guys really know how to entertain!

When did you design Eunoia, Nick?

William Berkson's picture

Eunoia is at Shinntype and means 'beautiful thinking' according to Nick.

Nick, I know this is Hrant's challenge, but here are two points. 1. A lot of the distinctive look of Eunoia is in the narrow counters. Significantly broadening them to give less sparkle will give the face a very different look. 2. Helvetica is in my opinion not a satisfactory text face, and so not a standard for arguing your case.

Nick Shinn's picture

Alessandro, Eunoia is shown (and explained a bit) at my website, www.shinntype.com.

William Berkson's picture

My bet: Nick will come up with a good face, and then the argument will break out on whether it is the same as Eunoia, and whether it is a text face.

Nick Shinn's picture

>1. A lot of the distinctive look of Eunoia is in the narrow counters. Significantly broadening them to give less sparkle will make the face have a very different look.

Let's see what happens.

>2. Helvetica is in my opinion not a satisfactory text face, and so not a standard for arguing your case.

OK, but someone has to come up with an acceptable bold sans text face for Eunoia Text to go up against, or else the exercise is pointless. Hrant?

We're talking print here, BTW, so I will be providing PDFs.

William Berkson's picture

I am uneasy with sans faces being text anyway, but a fair standard would be Meta.

as8's picture

Oh, grazie.

"Eunoia is the shortest word in English
to contain all five vowels,"

Yeah, that is nice!

"and the word quite literally means 'beautiful thinking'."

To me it is cool like a thought-out lipstick.

A. Scott Britton's picture

>it is cool like a thought-out lipstick.

Alessandro, I believe you just made it onto my list of potentially the greatest sounding phrases ever uttered.

Grazie to you, sir.

hrant's picture

(William, I will ignore your insults.)

Based on what I've just read, I was going to say "call the whole thing off" (I'll explain why below), but also based on what I've just read, I think some good might come out of it, and I'm glad you're willing to make the effort. So I'll try to play along, but a huge caveat has just hit me:

This seems like some kind of technical challenge, but you can't base a technical comparison on fundamentally different ideologies - so there's no way this can really be resolved like this. Like when you ask if Helvetica would be a good comparison, to me that's the end of the conversation... You think it's a text face, but to me no sans is a real text face (I mean for book setting, the real test), and Helvetica is one of the worst of the sans. To me this means it will probably break down into a "yes it is", "no it isn't" argument*... So I'm not sure how productive this will be. But at least some people seem to be enjoying it!

*
Like William said: "Nick will come up with a good face, and then the argument will break out on whether it is the same as Eunoia, and whether it is a text face." A very astute prediction. And yes, it will be a good face. But not as good as one you could design from scratch, especially if you were armed with a non-relativist view of text face design, one based on the Why more than the How. BTW, how a WRBWWRM disciple can even distinguish between text and display at all is beyond me to begin with.

--

>>1. Sturdier details (eg thicker hairlines and serifs)
> This won't be necessary, as the thinner strokes are sufficiently thick at 8 or 9 pt.

You're probably trying to make your point more to third parties than me, and I can't speak for them, but maybe you should know that from my perspective Eunoia's stroke contrast is too high for a text face.

>>2. Avoid small counters
> Because Eunoia is a condensed face, I will increase the horizontal scaling to make the counters larger.

Well, the main thing is balancing black and white, although that balance is different for display and text. But Eunoia seems to have texty spacing to begin with. So maybe just its narrowness would have to be reduced (maybe to the point of Stone's Print, to me the narrowest font that remains highly readable). And yes, William is right that it would change the character of the face, but I don't think it would destroy it, not least because it has so much character to begin with.

>>3. Wider sidebearings and serifs
> I will open up the fit -- would it also be acceptable to adjust some of the sidebearings selectively?

Sure - that seems to fit into your rule.
I never thought you meant the conversion would be totally mechanical.

> Am I allowed to select characters from the three versions of Eunoia, to build one text version?

If you think the three versions are part of a harmonious whole (I think they are), then sure.

--

BTW, if you want the most texty sans to compare against, you might go with Legato. Or if you wanted something more accessible, maybe Syntax. Do those make sense to you? I guess it depends on what you (and others) think of those fonts in terms of readability. This is hard... We're coming up against the issue of choosing a balance between functionality and character, and that I'm pretty sure no two of us will ever totally agree on...

hhp

degregorio2's picture

if you say:
"Helvetica is in my opinion not a satisfactory text face", then you do not understand typography.

I

Thomas Phinney's picture

I guess I don't understand typography, either, because I agree that Helvetica is not a satisfactory face for large amounts of body text.

Legibility is not the same as readability. Many people have studied both. Readability is what counts for extensive body text.

Regards,

T

degregorio2's picture

Legibility and readability are on the verge of being synonymous. When people ask about legibility, you answer in terms of both legibility and readability, making the distinction...

...and when people ask about readability, you answer in terms of both legibility and readability, making the distinction too.

When I speak of legibility, I am speaking of studies of what happens when you read, and I believe it is not interesting to restate the differences between legibility and readability all the time.

I hope you understand me. English is still very hard for me. Excuse me.

JP

Nick Shinn's picture

Sorry Hrant, I don't have Syntax or Legato.

I'll compare it with a few established bold sans -- Bell Gothic, Folio, Franklin, Trade Gothic, and Helvetica.

I'll post the "Eunoia Text" PDF tomorrow.

Nick Shinn's picture

Alessandro, JP,

"Oiseau" (bird) is the shortest word in French to contain all five vowels -- how about in your languages?

degregorio2's picture

Simply fascinating.

hrant's picture

Franklin is good.

Legibility and readability exist distinctly because our consciousness and subconscious exist distinctly. Ignoring this will confine you to the creation of display fonts.

I don't know about Armenian, but in Arabic there's "waawee", which not only contains all three vowels, but is the only Arabic word with no consonants. It means wild dog.

hhp

John Hudson's picture

>Legibility is not the same as readability. Many people have studied both. Readability is what counts for extensive body text.

When planning the legibility/readability panel discussion at the recent conference in Thessaloniki, we found that the cognitive/perceptual psychologists (Kevin Larson and Mary Dyson) generally made no distinction between legibility and readability. They thought the apparently common typographer's distinction between decipherability of individual glyphs (legibility) and reading of text (readability) prejudiced discussion of the latter by assuming that two different cognitive phenomena were at work, for which they say there is no empirical evidence. Generally, they used the term legibility more often than readability. Mary noted that if she were to make a distinction at all it would be in terms of discussing readability at a macro-typographic level, e.g. page layout, document structure.

In my closing comments of the panel, I suggested a parallel between reading and playing chess. Our understanding of chess, as a function, differs from our experience of chess, as a game. Our ability to experience chess as a game is a result of the complexity of the function: when we play chess we are not actively conscious of the function, only of the current state of the board, the immediate move and the moves ahead, as far as we can anticipate them. Similarly, when we read, we have an holistic experience of reading that does not necessarily correspond to what empirical studies tell us about the actual mechanics of reading. So, for example, the fact that we have an 'immersive' experience of reading does not necessarily mean that there is such a thing as immersive reading, distinct from what Hrant calls deliberative reading, except experientially. That is, there seems to be no empirical evidence to support the view that different cognitive phenomena are happening between what we experience as reading immersively -- getting lost in the text -- and reading deliberatively, which I think may describe a range of experiences from studying text, reading carefully, reading things that are functionally hard to read, etc.

When we talk about legibility and readability as separate things, or about immersive vs. deliberative reading, we should be careful not to beg the question. We shouldn't assume that experiential and anecdotal evidence corresponds to the actual mechanics of reading.

Now, this seems to me to have interesting implications for type design and typography, because in a sense both the experience of reading (holistic; pleasurable; elegantly engaging the mind) and the mechanics of reading (atomistic; mechanical; based on the fairly crude, brute processing power of the brain) are valid targets of design. We can design to enrich the experience of reading, including the aesthetic experience, and we can design to assist the cognitive function of reading. We know a lot about the former because we're all readers. We still know relatively little about the latter, and the only way to know more is empirical study. Anecdotal evidence simply is not reliable, because it relies too much upon the experience of reading, which we know from existing studies does not necessarily correspond to what our eyes and brains are actually doing while we read. For example, who would have thought we read in saccades before empirical study determined it?

[I'm going to alert Kevin and Mary to this thread, in case they want to contribute or to clarify or correct anything I have written here.]

hrant's picture

> there seems to be no empirical evidence

1) I think there is some. For example Tinker/Patterson's and Watts's findings that there seems to be a cut-off around age 10 where children can start to read immersively; and Taylor & Taylor's findings that there are about 60 words in English that are often skipped over by most adults; this is a very big clue.
2) There's not enough empirical evidence because the tests have been faulty. Nobody is going to read at full speed in a lab. Kevin asked me: Then how do you measure it? I replied: Just because you can't measure it* doesn't mean it doesn't exist.

* Or better: haven't measured it yet.

We need much, much, better tests.

> Anecdotal evidence simply is not reliable

That's too much of a blanket statement. Empirical data is nominally totally reliable, but no actual study is ever 100% objective, neither in terms of subjects (underperforming undergrads are the norm) nor in terms of stimuli (Courier?! Puhleez). It's very easy to misinterpret empirical data (which is almost always the case), because it always requires assumptions. Where do those assumptions come from? The best place is typographic anecdotal evidence, not the background of a cognitive psychologist. Anecdotal evidence is not totally reliable, no, but it's real. The ideal relationship between empirical data and anecdotal evidence is an iterative one, where they guide each other, often backtracking, always doubting, and refining.

If you remember, Kevin was indicating that loose letterspacing is fine, serifs don't help, even that all-caps isn't so bad. Those are all a result of taking the limited empirical evidence too seriously. During my lunchtime conversation with him, I think the one thing I convinced him of is that "it's not so simple". His reply was that we need more testing. I agree. But I'm not going to stop believing something that makes so much sense in the meantime.

Deliberation versus Immersion: The qualitative line is tricky to find. It might very well be at the 10-year-old threshold mentioned above. So even though graduate students read faster than undergrads (and hopefully both groups know the alphabet equally well...), I might agree that there's no qualitative difference between them. However what I would state is that most empirical data is collected in such a way as to not measure deep immersion: the point where boumas are used, not just individual letters. Another big clue is the reading speed measurements: Kevin's 400 wpm is way too slow - it's not true immersion. You can see this yourself just by taking some casual measurements, both through RSVP systems and through some extended reading sessions.
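[Editorial aside: the speeds being argued about translate directly into per-word exposure times, which is the quantity an RSVP setup actually controls. A throwaway conversion -- the 400 wpm figure is the one from the discussion; the 100 ms value is just an arbitrary faster rate for comparison.]

```python
# Reading speed <-> average per-word exposure time.
MS_PER_MINUTE = 60_000

def wpm_to_ms_per_word(wpm: float) -> float:
    """Average milliseconds per word at a given words-per-minute rate."""
    return MS_PER_MINUTE / wpm

def ms_per_word_to_wpm(ms_per_word: float) -> float:
    """Words per minute implied by an average per-word exposure time."""
    return MS_PER_MINUTE / ms_per_word

print(wpm_to_ms_per_word(400))   # 150.0 ms/word at 400 wpm
print(ms_per_word_to_wpm(100))   # 600.0 wpm at a 100 ms/word exposure
```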

BTW, to me, reading is less like chess and more like drinking coffee: you can do it for pleasure (no performance issues), or you can do it to be ready for a meeting in 5 minutes (major performance issues). Another nice parallel is driving in moderate traffic: you can just go along casually, or you can push the limits and try to get to your destination faster; regressions are fender benders, except nobody pulls over. :-)

> who would have thought we read in saccades before empirical study determined it?

Some people. Marius Audin, for example, wrote that he was not surprised in the least by Javal's findings. In fact, all you need to do is observe somebody's eyes while they're reading. You can't really believe nobody did that until the late 19th century.

A lot of times a "formal discovery" is given too much credit, and some people who have become of aware of something "unexpected" simply haven't published papers on it, so the world at large never finds out.

hhp

degregorio2's picture

John, nobody can explain it better than you.

In Spanish, the literal translation of "legibility" is considered the part of typography that studies graphic signs and how they work in reading.

The translation of "readability", "lecturabilidad", is not in the dictionary.

Francisco Galvez, in his book on typographic education, explains "lecturabilidad" as a kind of visual comfort.

Definitely, in Spanish-speaking countries there are many schools that explain the concept.

I prefer to explain the whole concept with a single word, "legibilidad" (the same as Mr. Galvez).

dezcom's picture

There have been studies of legibility for signage, CRT, and long text: studies comparing serif to sans serif, where one wins for long text and the other for screen, or for reversed type, or for signage. I am curious whether anyone is aware of the parameters of these studies and can speak to where one kind of reading begins and the other ends. Clearly reading "War and Peace" is in the long-text arena. Clearly reading a Stop sign is not. But where is the break point? Is it "The Old Man and the Sea," a simple brochure, a paragraph, an entry in the phone book, or any word longer than 8 characters? Also, how much of a difference is there? If I read "War and Peace," will it take me a day longer if it is set in a sans instead of a serif, or only an hour? What if I only read 20 pages a night? If it all boils down to a 10-minute difference in reading "The Old Man and the Sea," then what is the big deal?
I have heard a few people (and only type-savvy people) say reading a sans really impairs their reading a great deal. Frankly, I don't find any difference in my own reading. The average lay person just reads, without making much of a conscious judgement.
I wonder how much of it is a self-fulfilling prophecy, or even something as simple as liking one over the other for purely aesthetic reasons, so that our self-perceived reading ability increases.
My real question, or actually my theory, is that years of legibility/readability studies surely measure something, but what? I have never seen a conclusion in a study sufficiently explain the "why" aspect. There are just too many leaps of faith from "serifs help us read faster" to how they do it. There was a time when the greatest scientific minds on Earth were convinced that the world was flat, or that the Sun traveled around the Earth. Later someone figured it out and proved them wrong. Is that where we are now with readability/legibility? Will someone prove us all wrong in a few years?
This is getting to be a new thread. I will post it as a new thread so I don't interfere with the excellent one John Hudson started going here.

ChrisL

hrant's picture

In English, "legibility" has a pretty clear meaning: a "clinical" one concerning the decipherability of letters. Some people also use it to mean the decipherability of continuous text (but only because to them it's the same thing). The word "readability" however is heavily overloaded - it has three meanings, depending on context: some (in fact most) people use it to mean the ease of reading the content (unrelated to the type or typography); most designers use it to mean the mechanical ease of reading a composition (irrespective, or more often inclusive, of the typeface); but we type designers need to use the term in our context of individual fonts - this is because some fonts (like sans fonts with large x-heights) are highly legible but not very readable, and other fonts (like serif fonts with generous extenders) are pretty legible (except for things like signage or really small type) as well as highly readable for the duration of an entire book for example. Another distinction here is the "g" for example: the monocular form is highly legible (because it's "simple", obvious), but the bino helps readability more (because it's more distinctive).

Spanish? It needs a good word for readability. :-)
I think Jorge has found one, but I forget what it was.

BTW, John, would you care to mention Peter Enneson's stance as well? He was a panelist in Greece, and has had some of the deepest thoughts about readability I've seen anywhere.

hhp

William Berkson's picture

Kevin Larson's paper shows that the hypothesis of word recognition primarily by word shape has been refuted. But what role spacing, caps, serifs, and a lot of other factors play seems to remain an open question.

For example, he dismisses the readability advantage of lower case over caps as simply a matter of familiarity, but I don't see any reference to a paper testing this issue. You can only draw conclusions when you have independent tests. The fact that you can learn to read quickly backwards does not test the caps vs. lower case issue; maybe you are slowed just as much backwards with caps. Also, for example, I have seen examples where irregular spacing slows down reading. Has this been tested?

The more refined your hypotheses are, the better your tests can be and the more they can tell you. There has been a lot of interesting work, but I by no means see from the paper that type design does not affect readability. Further, I would think that testing the theories of successful typographers - those whose faces people like to read - would likely be very informative.

dezcom's picture

Hrant! Did you get a face lift or take a trip at the speed of light? You look MUCH younger now :-)

John Hudson's picture

The ideal relationship between empirical data and anecdotal evidence is an iterative one, where they guide each other, often backtracking, always doubting, and refining.

But that ideal relies upon an actual iterative relationship of the empirical data and anecdotal evidence. That is, they actually have to support each other. You can't just decide that they are going to support each other and build a model of reading based on that premise. That would be like the Ptolemaic model of the universe: it works within the confines set by the presupposition of the position of the Earth, but it isn't actually true.

My point is that the anecdotal evidence about reading is evidence about the experience of reading, which may differ considerably from the mechanics of reading, of which we are not consciously aware when reading. This is not to say that such anecdotal evidence is useless; on the contrary, the observation defines exactly how that evidence is useful: in understanding the experience of reading which, as I noted, is a legitimate target for design. But we need to be aware that designing to improve the experience of reading is not necessarily the same thing as designing to improve the function of reading.

We now know a certain amount about how the brain works, and can describe the activity of thinking in measurable ways. The most remarkable thing about this knowledge, I find, is how different this activity is from our experience of thinking. We do not experience thought as the firing of neurons, we experience it as something that, if we were to try to describe it, would be a philosophical concept of the mind, not a psychological understanding of the brain.

This is what I mean by my comparison of reading to playing chess, which has nothing to do with the respective acts of drinking coffee slowly or quickly or driving in different kinds of traffic. In terms of game theory, chess is not really a game: it is a determined system in which, if both players always make the best possible move, the outcome is fixed in advance (a forced win for one side, or a forced draw - we don't actually know which). So this is what we know about chess, what we can understand in empirical terms from studying the rules; this is akin to what legibility studies are telling us about how we read in cognitive terms. But because of the complexity of the system, we can experience chess as if it were a game, and even the world's most powerful computers cannot predict at every stage what the perfect move is. This is our experience of chess, as a game in which both players have an opportunity to win; this is akin to our experience of reading and the anecdotal evidence to which it gives rise.

Now, it may well be that some of the anecdotal evidence we derive from our experience of reading may in fact line up with the empirical data or, as you point out, with data that we don't have yet because we have not figured out the tests that might produce that data. But my point is that we can't actually know whether the anecdotal evidence is valuable, except insofar as it provides an understanding of the experience of reading (which may well turn out to be more important in typographic terms than a cognitive understanding).

hrant's picture

1) The word-shape model is lousy (although it's not totally off - it is a step in the right direction). The bouma model is the right one. A bouma can be a word. It can also be a letter. Essentially, it is a cluster of letters that is recognized as a whole, due to its frequency (hence familiarity to the reader) and its distinctiveness (especially its pattern of extenders). The reason words do become important is because the blank space is such a great delimiter of boumas. And short & frequent words make great boumas on their own*. When I explained all this to Kevin, he was happy that I didn't believe in the word-shape model. He said that the model that I -and Peter- were proposing was much more sophisticated (and I think by extension more worthy of consideration by him) than the word-shape model. You see, Kevin hadn't thought of the bouma model simply because the bulk of his empirical data didn't reveal it (but neither does it counter it). It comes from some empirical data, a bunch of anecdotal evidence, and plain old logic. It's motivated by the desire and need to reconcile empiricism and 500 years of experience.

* But the space isn't everything. Which is why the word "readjust" is so hard to read.

2) Almost everything Kevin -and others in his position- believe about type comes from empirical data. As a result his position is consistent, "convergent", but also does not penetrate into true immersion, simply because that just hasn't been measured yet - not properly.

3) Kevin has made no formal statements about serifs, letterspacing, etc., but theories always lead to conclusions, or at least ideas. When I pressed him about letterspacing, he admitted that his model (parallel-letterwise) would prefer pretty loose letterspacing. Letterspacing that any serious practicing typographer would reject. That doesn't mean he's wrong, but I think it's highly probable that he is. Serifs, same thing: the letterwise model is anti-serif. And then there's x-heights, and a bunch of other stuff. There are too many anecdotal clues that the letterwise model is only half the story. And there's some empirical evidence too (like Taylor's 60).

Kevin is very open-minded. And while I learned some very interesting things from him about deliberative reading, I could tell he also took away at least one thing: a glimmering new doubt.

> I by no means see from the paper that type design does not affect readability.

Well, of course he doesn't say that. No empiricist can be a WRBWWRM-er.

hhp

hrant's picture

> they actually have to support each other.

Yes, but never absolutely. The model that I build may not be True, but maybe nothing can be. All I can really aim for is... Making Sense!

> designing to improve the experience of reading is not necessarily
> the same thing as designing to improve the function of reading.

That's very true.
I'm the one always saying: readers don't necessarily know what's "good for them".

> can't actually know whether the anecdotal evidence is valuable

True. That's where Faith comes in - no human decisions are possible without it.

hhp

John Hudson's picture

Kevin has made no formal statements about serifs, letterspacing, etc., but theories always lead to conclusions, or at least ideas. When I pressed him about letterspacing, he admitted that his model (parallel-letterwise) would prefer pretty loose letterspacing.

I've also discussed this with Kevin, and I find it hard to believe that he would say anything so definite as the parallel letter recognition model preferring loose fitting. He's a cautious guy, and tends not to make statements that he can't back up with evidence. I do know that he is very interested in the spacing question now, and I believe he is designing some tests, which is great news. One of the best things that came out of the panel in Thessaloniki was the idea that typographers can help cognitive scientists figure out what sort of reading tests actually lead to useful insights.

I also think Kevin is familiar enough with the non-intuitive results of tests involving lateral masking -- certainly that was a key focus of Peter Enneson's comments in Thessaloniki -- not to assume that loose fitting would be better for parallel letter recognition.

The scary thing, of course, is the possibility that a well-designed test of the effect of letterspacing on reading speed and/or comprehension might just tell us what we don't want to hear: that loose fitting produces better results. As Mary said in Thessaloniki, if the empirical data went against what we intuitively believe based on anecdotal evidence, would we change our practice? At least with my take on all this, one would still be able to make the claim that tighter spacing benefits the experience of reading. :-)

John Hudson's picture

I'm the one always saying: readers don't necessarily know what's "good for them".

In the sense that reading is not only a cognitive process but also an intellectual and cultural one, readers may well know what is good for them. This is what I mean when I say that the experience of reading is a valid target for design. Design that improves the experience of reading isn't necessarily design that makes reading faster or more accurate, because the experience of reading is multifaceted. It is also, to a very large degree, influenced by the text. There are texts -- what might be called informational texts -- in which speed and accuracy are the most important aspects of the reading experience: we want to get the information from the text quickly and to understand it correctly. But such texts are the least interesting from an experiential point of view, precisely because the experience is so narrow and shallow. Reading a literary text is a very different experience even though -- and this is the kicker -- the cognitive process involved may be exactly the same.

William Berkson's picture

> theories always lead to conclusions

Yes and then good scientific method is to test whether the logical consequences are borne out in reality. You learn the most when the theory is refuted, and ideally the experiment is designed so it has the most possibility of revealing error.

>that typographers can help cognitive scientists figure out what sort of reading tests actually lead to useful insights.

That's what I was also saying above. My old teacher Karl Popper, the philosopher of science, used to say that your testing is only as good as your theorizing. The richer the alternative theories - and many will be wrong - the more informative the testing you can do.

By the way, I do think that some of the letter spacing these days is too tight, so I'm not worried about the tests coming to this conclusion!

hrant's picture

> I find it hard to believe that he would say anything so definite

But that is what he said in Thessaloniki - you don't remember? Sure, it's not like he declared "loose fitting is good"*, but he did say that the letterwise model should prefer looser fitting than anecdotal evidence (he calls it something else though) would indicate. My challenge to that is what has sparked his interest in the testing of spacing. The problem though is this: if he still hovers under 400 wpm (due to poor stimuli and/or subjects) all he's going to find is more empirical evidence to back up the letterwise model, still missing the existence of boumas entirely, because boumas only kick in when it matters, not when you're cruising. I hope I managed to convey that to him, otherwise it's just wasted effort.

* For one thing, what's "loose"?

> might just tell us what we don't want to hear

I personally want to hear anything that can be true - I don't care about sacred cows, except if they taste good over mesquite.

Also, I would suggest that most type designers would love to hear that boumas don't exist - because that makes their job so much easier: you don't have to worry about an infinite number of possible combinations*, just a finite (and usually pretty small) set of nice, pretty little objects. And the only font that would really be left out in the cold, out of all 100,000 of them, is Legato.

* Or if you already don't worry about that, you won't have to feel guilty about it, plus you can finally shut Hrant up to boot!

> such texts are the least interesting from an experiential point of view

And that -coupled with the fact that much of the reading we [have to] do is a chore and not a pleasure- is exactly why such reading needs the most help, the most attention to readability. Things that we enjoy reading can have much more liberty to sacrifice readability for "atmosphere".

Furthermore, not all of reading experience can be... experienced directly in the consciousness. Over the span of many pages of immersive reading, the "oblique", subconscious experience seeps through, even though we probably won't even realize what's happening! When I made Maral for an Armenian magazine, everybody loved it. For about three months. But eventually the same people who had praised it started complaining about readability. Even though nobody -not even me, at that stage of my understanding- had any idea what was really going on. All we had was the forceful -but unprovoked- anecdotal evidence that Maral has low readability (at normal reading sizes), even though one of the main initial points of praise for it was in fact that it was easy to read! If I had known about boumas back then, that mistake would have been avoided. BTW, if you want to replicate this phenomenon in Latin, set an entire magazine with long articles for 2-3 months in Quadraat Headliner.

hhp

Kevin Larson's picture

Yes, I did say that the parallel letter recognition model predicts that each letter needs enough letterspace so that it can be recognized on its own. This goes against the idea that very tight spacing is necessary in order to create good word shapes. It does raise the question of how much space is necessary - obviously too much space is harmful, as all the letters within a word could fall outside the area of the fovea. This is an area of great interest for me right now.

I do believe the interaction between generally accepted wisdom and empiricism can be a fruitful exchange. In order to further this I would like to learn more about immersive reading as I do not understand this concept. One hallmark of immersive reading appears to be a reading rate of greater than 400 wpm. I know of no evidence that there are situations where people will read (where reading is different from skimming) at a rate of 400 wpm or greater. While it is true that the lack of evidence for something is different from evidence against this possibility, I think the assumption has to be that it is Not possible until proven possible. A second hallmark appears to be the skipping of a handful of short, high frequency words. But this effect can be seen at normal (sub 400 wpm) reading speeds and even at the slowest of reading rates, including situations when people are reading highly challenging text. And finally a third hallmark is that you need to be of a certain age to read immersively. If I were interested in conducting a test that could differentiate between immersive and non-immersive reading, what should I measure?

Cheers, Kevin

p.s. I won't be able to continue this conversation tomorrow because I'll be away, not because of lack of interest.

William Berkson's picture

Kevin, my hypothesis is that appropriate letter space is a function not only of absolute distance, but also of the ratio of the width of the counters to the letter space. Furthermore, it becomes much more of an issue at text sizes, where the counters and letter spaces are more near the limits of vision. If you are testing spacing, I think considering the variables of the ratio of counter to letter spacing, as well as absolute size would be worthwhile.

I would further hypothesize that these ratios are the reason a lot of sans serifs are less easy to read. In printed Helvetica (not a condensed version) the interletter spaces are too small compared to the counters. The problem is that if you space a circular sans more widely then the words can 'fall apart'. Narrower sans are, I believe, more readable.

My guess would be that the combination of the right counter/letter-space ratio and having the eye find and hold the line easily and/or knit letters together as words is accomplished better at text sizes with serifed faces.
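(Editorially speaking, the counter/letter-space hypothesis above can be sketched numerically. This is only an illustration: the function name and all the measurements below are hypothetical, not taken from any real font or study.)

```python
# A toy sketch of the hypothesis that readability at text sizes depends
# on the RATIO of counter width to interletter space, not just on
# absolute spacing. All numbers are hypothetical font-unit measurements.

def counter_to_space_ratio(counter_width: float, letter_space: float) -> float:
    """Ratio of average counter width to average interletter space.

    Under the hypothesis, a very large ratio (wide counters, tight
    spacing) should hurt readability at text sizes, where both values
    approach the limits of vision.
    """
    if letter_space <= 0:
        raise ValueError("letter space must be positive")
    return counter_width / letter_space

# Hypothetical measurements: a circular sans with wide counters and
# tight fitting, vs. a narrower serif face with roomier fitting.
circular_sans = counter_to_space_ratio(counter_width=420, letter_space=60)
serif_face = counter_to_space_ratio(counter_width=340, letter_space=85)

print(f"circular sans ratio: {circular_sans:.1f}")  # higher: counters dominate
print(f"serif face ratio:    {serif_face:.1f}")     # lower: more balanced
```

The point of the sketch is just that two faces with the same absolute letterspacing can sit at very different points on this ratio, which is the variable William suggests testing alongside absolute size.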

Another important factor to consider is not just reading speed and comprehension but also fatigue. I can read the Typophile postings in on-screen Arial/Helvetica, but it is more fatiguing than reading a book at a similar size in a good serif font, properly set as regards leading and measure. It may be that fatigue is closely correlated with the ability to catch misspellings, which you report in your paper, but it may also be different.

Finally, I don't know exactly what Hrant means by immersive vs deliberative reading, but here's what I mean: whether you are aware of the shapes of the letters or not. For example, in looking at a logo such as Coca-Cola's, you are aware of the shapes and their emotional impact. When you are immersed in reading long text such awareness vanishes, and you are aware only of meaning. Hence my argument above about the importance of aesthetics in text type - not an issue of readability.

hrant's picture

Hi Kevin.
I think letterspacing might in fact be the best way to test the letterwise/bouma theories, because it doesn't affect the letterforms - you can use a single font for testing*. BTW, I'd like to say that we're lucky to have you: a well-funded :-) field researcher who actually listens to type people. Our best chance yet to get to -or at least near- the bottom of this.

* Although choosing the font carefully is very important - more below.

> One hallmark of immersive reading appears to be a reading rate of greater than 400 wpm.

Well, I think working with a firm number like that is dangerous. Especially coupled with a term ("immersive") that we don't fully grasp yet (I know I don't). Maybe the best way to look at this is to say that:
1) People can read at rates much higher than what the bulk of field research has indicated as some kind of ceiling (around 400). I say this based on some pretty firm anecdotal evidence.
2) We'd like to help as many people as possible read as fast as possible (I mean through type design), even if it just means going from 250 to 300 wpm.

In terms of deliberation versus immersion, I'm realizing that maybe the only useful distinction is at the very low end of reading speeds*. So the reading measurements in the studies Kevin relies on are not deliberative, they are immersive, but the use of boumas is only faintly hinted at** - I believe because of faulty stimuli and subjects (as I wrote).

* Equivalent to what a child does before he's around 10 (nominally).

** Kevin, think again of that skipped "and" in your example.

So immersion isn't anti-letterwise (and the fovea -coupled to the brain's parallelism- provides enough data not to require boumas), but letterwise is only the surface, and the degree to which immersion is bouma-based has yet to be fully revealed (I mean empirically). I think it's basically totally correlated to a reader's reliance on the parafovea.

> I think the assumption has to be that it is Not possible until proven possible.

And by extension you're saying we can't assume boumas exist until we get empirical proof.

If you're presenting a paper at a psychology conference, sure you shouldn't make such "anecdotal" claims. But what's wrong with a typographer making an educated assumption, in order to improve his work? Lacking solid empirical proof (not as easy to come by as it might seem, as far as I'm concerned), why not use less formal means of decision-making? That's exactly what humans do 100 times a day.

> the skipping of a handful of short, high frequency words ... can be seen at
> normal (sub 400 wpm) reading speeds and even at the slowest of reading
> rates, including situations when people are reading highly challenging text.

Exactly.
How does one explain that in the letterwise model? And why would there be a qualitative difference between, let's say, a 3-letter word that's extremely frequent (like "and") versus a 4-letter word that's moderately frequent (like "read")? My belief is that the degree of immersion correlates to how often we skip boumas (not necessarily words): the more experienced the reader*, the more often, the deeper into the parafovea, and with the greater confidence** he can assume that the fuzzy bouma is a certain word. Experience with the context of the semantic content helps too.

* Think of the undergrad versus grad student performance difference you yourself mentioned.

** This is a key thing here, to me. The difference between the fault-tolerant, heuristic nature of the brain, versus the too-algorithmic letterwise model.

> you need to be of a certain age to read immersively.

You know a lot more about the actual changes that the human brain goes through as it's growing, but I think part of it must be training: you could teach a child to read boumas sooner if you help him move beyond deciphering the letters and compiling them. The age of 10 seems to be a typical threshold, but to some extent it must be malleable, no?

> If I were interested in conducting a test that could differentiate between
> immersive and non-immersive reading, what should I measure?

If I'm right that anything above that nominal age 10 is indeed immersive, just to different degrees, then I think what you might look for instead is the "hidden world" of boumas. I'm not proposing you devise experiments that "push" a model that might be false; what I'm saying is that you might devise experiments to find something only faintly hinted at so far, if it's there.

So you would choose a book font (small x-height, serifs, tight spacing, etc.), set it in a highly "comfortable" way, and somehow try to be as unobtrusive as possible: maybe don't tell people you're actually measuring reading, tell them you're measuring something else; or maybe get them to sign a consent form saying that any time during the following year you can secretly measure their reading. :-) Also, you need to choose test subjects that have a chance at getting deeply immersed, like older people who read a lot of books. Not easy, all this, I know.

BTW, in terms of what to measure, you could stick with the overall reading speed measure, but here's an idea that might be better: measure saccade length; and observe particularly long saccades; try to see why/how a reader can skip so much.

I'm personally convinced of what you will find: people skipping entire words (or deep into the latter half of long words, like in German), much more than that limited Taylor set, where the fovea has zero hope of seeing the individual letters. That's how -with smart guesswork- you get to really high reading speeds. That's "deep immersion".
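(Editorially speaking, the saccade-length measurement proposed above can be sketched in a few lines. The fixation positions, the threshold, and both function names below are hypothetical illustrations, not data from any real eye-tracking study.)

```python
# A rough sketch of the saccade-length idea: given successive fixation
# positions (character offsets along a line of text), compute saccade
# lengths and flag unusually long forward jumps - the kind that would
# suggest whole words or boumas being skipped.

def saccade_lengths(fixations):
    """Forward differences between successive fixation positions."""
    return [b - a for a, b in zip(fixations, fixations[1:])]

def long_saccades(fixations, threshold=12):
    """Indices of saccades longer than `threshold` characters.

    The 12-character cutoff is an arbitrary illustrative choice;
    typical reading saccades are usually reported at around 7-9
    characters, so jumps well beyond that are the interesting ones.
    """
    return [i for i, s in enumerate(saccade_lengths(fixations)) if s > threshold]

# Hypothetical fixation positions for one line of text:
fixations = [0, 8, 15, 31, 38, 46, 62]

print(saccade_lengths(fixations))  # [8, 7, 16, 7, 8, 16]
print(long_saccades(fixations))    # [2, 5] - two suspiciously long jumps
```

In a real study one would of course also have to handle regressions (negative saccades) and return sweeps between lines; the sketch only shows the core measurement Hrant is proposing.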

hhp

Giampa's picture

Why would an author, or a poet, care about "reading speed"? What about shorter movies, shorter symphonies?

Speed is not the message.
