bouma slam

enne_son's picture

I was born Gereformeerd. Gereformeerd is, or was in my youth, a denomination within the Dutch Protestant orbit stemming from the Calvinistic response to 16th-century Catholicism. While keeping our distance from denominational expressions of Christianity, my immediate family and I still keep to some of the observances ensconced in the wider Christian tradition. For Christmas I asked for and received copies of Stanislas Dehaene’s recent book Reading in the Brain: The Science and Evolution of a Human Invention, and Oliver Sacks’s The Mind's Eye. The Mind's Eye describes, among other things, the case of Howard Engel, the Canadian detective story writer who lost his ability to read due to a lesion, caused by a stroke, in what Dehaene, the author of the other book, and his research associates have dubbed the Visual Word Form Area of the brain (VWFA). In Reading in the Brain the VWFA is called “The Brain's Letterbox”. Reading in the Brain describes: how we read; the VWFA; the neuronal basis for reading; the evolution of writing; learning to read; the dyslexic brain; reading and symmetry; the author’s own pet theory of neuronal recycling; and it speculates about neuronal recycling as the basis of culture.

Along the way Dehaene forcefully (his word) blasts (my word) the Whole Language approach to reading instruction, mostly on account of the apparently well-documented damage its "Whole-Word" instructional bias has caused to literacy. The bugaboo is the approach’s sidelining of the systematic inculcation of phonemic awareness. "Cognitive psychology directly refutes any notion of teaching via a 'global' or 'whole language' method." Dehaene incidentally keeps his description of this method vague.

I haven’t finished reading the entire book, but noticed in passing that, as part of this critique, Dehaene mentions that the emphasis on the global shape of words “also invaded the world of typography, where the term ‘bouma’ (named after the Dutch psychologist Herman Bouma) was coined to refer to the contours of words.” Dehaene adds, “In the hope of improving readability, typographers intentionally designed fonts that created the most distinctive visual ‘boumas.’” His source for this observation is — wouldn't you know it — Kevin Larson's 2004 “The Science of Word Recognition.”

I of course find this somewhat disappointing, because the slight involves a misreading of what Bouma Shape was actually coined to refer to — something more than the “contour” or “global shape” of words. Perhaps the slight or slam is excusable, but digging deeper into the psychological literature where the term originated (Insup and Martin M. Taylor’s The Psychology of Reading), into Paul Saenger’s Space Between Words, or into the peregrinations on typophile surrounding the term might have induced Dehaene to explore the matter a bit more fully.

The book is wonderfully provocative though, and it has its uses. In some places the language positively sparkles. And it has an endorsement from Oliver Sacks. Moreover, it’s a fairly manageable and, for the most part, internally cohesive compendium of a host of empirical rummagings in various domains. It’s also a bit of a patchwork in places, in that there are disparities between several of the ways of seeing reading that Dehaene recruits to build his case. For example, Dehaene passes along the interactive activation model, but his own approach, based on local-combination detection, doesn’t fully mesh with the interactive activation account’s basic premises.

But somewhere in the section on “The Brain’s Letterbox” I had to take a break. The reasons were: 1) my ever-mounting frustration with the — to me — too rudimentary notion of how words are encoded in the brain (i.e., according to their spellings) became insurmountable, as did 2) an intense feeling that I had to make a more significant concession to the parallel letter recognition scheme than I had felt able to make before. In other words, I felt I needed to rethink the “inhibition of incipient recognitions for letters” idea I had found in Edmund Burke Huey. I felt the need to try to plot a “convergence of perspectives.”

The long and short of this is that my thinking is entering a bit of a new phase. Basically the idea is that, on the new iteration of my scheme, categorical perception at the letter level can and does occur, but it doesn’t normally or necessarily lead to an independent downstream labeling in the higher regions of the brain. This is a kind of paradox that might require further elaboration. A simulacrum of parallel letter recognition happens, but parallel letter recognition as currently schematized isn’t the central or foundational mechanism. The central mechanism remains for me the variety of “feature analytic processing” or “simultaneous co-activation of letter parts” I tried to describe in the recent “Monitor on Psychology” thread, and prior to that at TypeCon Atlanta, and the central event for me remains Bill’s notion of matrix resonance.

I’m thinking of starting a typophile blog page to describe 1) how I now see the structure of the underlying matrix — the inner bouma, if you will, 2) how I think it becomes established, and 3) how the connection with it is made in normal reading contexts.

I think I might call my scheme “distal ring plus proximate ring coding” or some such thing. DRAPE-coding for short.

I'll do that when I get a chance, and if my intuitions about this stick.

Nick Shinn's picture

“In the hope of improving readability, typographers intentionally designed fonts that created the most distinctive visual “boumas.”

1. Type designers, not typographers.
2. Typefaces, not fonts.
3. It never happened, pace Larson.

It's hard to take scientists' typographic ruminations seriously when they don't understand the most basic terminological distinctions.

Jack B. Nimblest Jr.'s picture

"" Deheane adds, “In the hope of improving readability, typographers intentionally designed fonts that created the most distinctive visual “boumas.” His source for this observation is — wouldn't you know it — Kevin Larson's 2004 “The Science of Word Recognition.” "

...I wonder what anecdotal experience turned this Maltoid into a fact. What "typographers" did this?

dezcom's picture

Sounds like grand Larsony to me.

William Berkson's picture

We need a new marker to collect best typophile puns :)

Té Rowan's picture

I'm sensing a Bubblegum Crisis in here.

Nick Shinn's picture

...typographers [sic: type designers] intentionally designed fonts that created the most distinctive visual “boumas.”

I don't believe the major concern of type designers is either letter shapes or word shapes -- this formulation of our readability concern is simplistic, if not banal. We place a great deal of importance on "fit" -- of which letter spacing (sidebearings and kerning) is as important a component as glyph shape.

I design typefaces as combinatory glyph reading systems with the goal of making individual letters look good as well as syllables, words, sentences and paragraphs -- at sizes from small body text to large display, in various media and with various text content. My readability concerns transcend the simple distinction that scientists have so far made between reading letters and reading words.

I will not have my work reduced to the level of their understanding.
It is a gross misrepresentation.

quadibloc's picture

Of course, one can go back to "Typographical Printing-Surfaces" to look at comparisons of letter outlines for differences. But the original point was about the "bouma", which is supposed to be the look of an entire word.

I suppose that if research is to transcend the idea that words are only perceived as individual letters, according to their spelling... one should study people who read Hebrew or Arabic, where the individual letters are particularly challenging to read, and so being able to use context speeds things up considerably - too much to be ignored.

enne_son's picture

Nick, Stanislas Dehaene puts forward a combinatorially-based hierarchical coding scheme that moves from oriented bars through letter fragments, case-specific letter shapes, and local bigrams to frequent substrings, morphemes, small words and beyond. He might applaud your concern to make all these things look good, and recruit your list as evidence for the aptness of his scheme.
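
To make the flavor of such a combinatorial code concrete, here is a toy sketch — my own, not Dehaene's code. The "open bigram" formalization with a small gap limit is one common way a local-bigram level is specified in this literature; the rest of the level inventory just follows his list above.

```python
from itertools import combinations

def open_bigrams(word, max_gap=2):
    """Ordered letter pairs with at most max_gap letters between them --
    one common formalization of a 'local bigram' level."""
    return {(word[i], word[j])
            for i, j in combinations(range(len(word)), 2)
            if j - i <= max_gap + 1}

def code_word(word):
    """Stack the levels, from single letters up to the whole word."""
    return {
        "letters": set(word),
        "bigrams": open_bigrams(word),
        "substrings": {word[i:i + 3] for i in range(len(word) - 2)},
        "word": word,
    }

print(code_word("reading"))
```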

In relation to your ‘fit’ comment, I’d say the type designer and typographer are oriented to strengthening our immediate or everyday, ordinary, lived experience of words as 1) unitary rather than broken-up or fractured; 2) internally differentiated but not chaotic; 3) uncrowded; 4) familiar rather than foreign or strange; and 5) sense-invested rather than cryptic or meaningless. As you say, “letter spacing (sidebearings and kerning) is as important a component as glyph shape” in ‘fit’. Perhaps scientists understand this, perhaps they don’t. But a hierarchical scheme such as Dehaene presents, with local combination detection as its foundational routine and parallel letter recognition as its defining mechanism, has difficulty addressing how it is that we experience words in this way.

So that's in part where my discomfort lies, and one of the things that motivates my search for a less hierarchical and less letter-based coding scheme. But of course, “global shape” or “outer contour” don’t do any better at addressing our lived experience of words. I want a coding scheme that is both richly evidence-based and addresses the lived experience of skilled readers.

enne_son's picture

John [Savard], do you think the Word Superiority Effect might be larger in Hebrew or Arabic?

Nick Shinn's picture

I want a coding scheme that is both richly evidence-based and addresses the lived experience of skilled readers.

I don't believe that's possible.
Consider music, where coding has been split into categories such as tempo, pitch, timbre, harmony, etc.
Listeners can decode, concentrating on these different categories as they see fit.
The map is not the territory.

quadibloc's picture

I believe there is indeed a good chance that, because both Hebrew and Arabic are normally written so that individual letters are considerably less legible (or, to be specific, certain pairs of letters are much harder to distinguish) than in the Latin alphabet, it may well be easier to experimentally detect the contribution that recognizing the shape of a word as a whole makes to reading those languages.

The skepticism of other posters about being able to determine what is really going on when people read through the kind of clumsy experimental techniques that are available is justified. On the other hand, that is not an excuse for not doing what we can to at least begin to assemble hard evidence and quantitative measures that can be a solid foundation for some forms of progress.

Not everything can be made a science instead of an art. But those things that have been changed from craft to engineering have yielded great progress. So if even a sliver of the type designer's art can be aided by science, it could well be worthwhile.

Chris Dean's picture

@enne_son: I think a blog is a great idea. Any chance of a slightly more plain-language version so you can share your ideas with those who don't have a background in psychology?

enne_son's picture

Nick, I know the map is not the territory, but I think I’ve happened upon a description that can address your skepticism about the powers of coding schemes. John, I’m not sure I can produce hard evidence at the outset, or introduce quantitative measures without assistance, but there are tantalizing convergences in existing data sets. Christopher, I’ll do my best.

I’m going to wait with my description until I’ve finished reading Dehaene.

As a side note to my comment about Dehaene’s dismissal of the Whole Language approach to reading instruction, let me say that, on my view, the testing and systematic inculcation of phonemic awareness is vitally important. It's just not all there is. Systematic inculcation of phonemic awareness forces the mind to create vectors over bigrams, syllables, morphemes and small words. This is good. It also tends to erect a hierarchical and dendritically complex coding architecture. (Dendritic means tree-like.) This is the kind of architecture Dehaene's book describes. Such hierarchies can be difficult for a processing routine to navigate. Their basis is a decomposition into letter-wholes. A less coarse-grained, more elemental recoding has to happen alongside vector-formation to make word recognition rapid, automatic and effortless.

Nick Shinn's picture

Re. phonemic awareness: what are the results of reading testing on those who have been born completely deaf?

dezcom's picture

When we first learn to read, we sound out syllables and check memory maps for similar phonic or written matches. We also do this when we come across a foreign word, or words new to us that are in our own language. After enough readings, we learn the word as well as the glyphs and then read it more quickly, without needing so much time for analysis. This process never stops as long as you come across an unfamiliar pattern of glyphs. Read any foreign text, or technical jargon you are not familiar with, and you slow down, decode, compare, and interpret, or find a source to translate what is still a mystery. I don't see how we are talking about an exclusive either-or situation. The brain is much smarter than we give it credit for. Reductionism only works some of the time; the brain works all of the time, even if the answer is "I don't know that word."

Chris Dean's picture

Miller, P. (2010). Phonological, orthographic, and syntactic awareness and their relation to reading comprehension in prelingually deaf individuals: What can we learn from skilled readers? Journal of Developmental and Physical Disabilities, 22(6), 549–580.

Abstract
This study seeks to provide new insight into the phonemic, orthographic, and syntactic awareness of individuals with prelingual deafness and the way those contribute to reading. Two tests were used: one designed for the assessment of phonemic/orthographic awareness (PO/OA) and another examining reading comprehension (RC) in contexts where prior knowledge was either helpful or not. Participants were 83 prelingually deaf individuals (DIs): 21 primary school, 36 high school, and 26 university students. The control group consisted of 85 hearing individuals (HIs) from parallel education levels (29 primary school, 29 high school, 27 university). Contrary to predictions made by current reading theories, findings imply that the failure of DIs to develop sensitivity to the phonological properties of words may not underlie their reading difficulties. Rather, this weakness seems to reflect a processing deficit at the supra-lexical (sentence) level where the final meaning of single words is elaborated by its integration based upon syntactic (structural) knowledge.

I have not read the paper. This is the abstract from the Web of Science database.

enne_son's picture

Christopher, I'll have to look into that some time. It’s hard to imagine why their difficulty should be in the realm of sense-following. I wonder if that’s because syntax in sign is different from syntax in written texts based on the English model of how sentences are formed.

Chris, you're absolutely right; I shouldn’t give the impression that it must be either/or. More likely it’s both/and. At some point the coding schemes must intersect. The underlying architecture should allow for at least two routes.

dezcom's picture

"Whole Language" meets "Hooked on Phonics" = "Now how can we still mess this up for kids learning to read?"

eliason's picture

the brain works all of the time

Speak for yourself!

dezcom's picture

Well, maybe not when you are drinking heavily and curling ;-P

Kevin Larson's picture

I developed the word recognition talk and paper because the people on my team at Microsoft (as well as other typographers and type designers) told me that they were designing for word shape. Here is a recent quote from p.23 of Jost Hochuli’s 2008 edition of Detail in typography:

“Given that an adult reader’s eye registers not individual letters but whole words, or parts of them, it is not surprising that words play a particularly important role in the reading process. With an easily readable typeface the individual letters are always designed with regard to their impact as parts of a word. While being clearly differentiated, they must be capable of fitting together as harmoniously as possible into whole words.”

Kevin Larson's picture

I gave a talk at ATypI Dublin about dyslexia and what psychologists have learned about reading acquisition. I didn’t actually tackle the whole language debate because it’s not a debate anymore. The whole language idea for teaching reading was to provide a positive, reading-rich environment, and children will just naturally learn to read on their own. While it is certainly a good idea to read to your kids, it is false that kids will learn to read without direct instruction (some will, most won’t). Kids need to learn certain precursors to reading, particularly letter recognition and phonemic awareness. Kids who have learned these skills will go on to become proficient readers, and kids that are struggling to read in the 2nd and 3rd grades are still poor at these skills. I hope to turn this talk into a paper this year.

Nick, the profoundly deaf struggle to learn to read. Conrad (1979) found that only 5 of 205 teenagers were reading at grade level. The 5 who were reading at grade level were the only ones able to detect words that were phonologically related.

William Berkson's picture

I certainly designed Williams Caslon Text with the goal of having the letters make easy-to-read words. In my view—and that of many type designers—how the letters relate to one another is much of the challenge in type design.

Further, making letters relate to one another well is, I think, demonstrably important whether Peter is right about our reading in patterns of sub-letter features across whole words, or whether, as Kevin has held, we first resolve the word into identified letters.

Or so I argued in the second part of my account of work on the revival.

dezcom's picture

When I first drive to a location I am not familiar with, I consult a map, or one of the online directions sites, for detailed directions. After I have made the journey enough times, I not only don't need the directions but have probably found a shortcut and a way to avoid traffic. That does not mean someone else might not already know the route like the back of his hand, or might never be able to find it unassisted. We have to allow for the various pathways to reading and not assume all readers possess the exact same knowledge or way of achieving it. My wife MUCH prefers a GPS that spouts out "Turn left in one tenth of a mile" to my preference for a map (or at least street names mentioned instead of discrete distances). Luckily, I can still find maps and she can use her Garmin ;-)

enne_son's picture

Right, Kevin, I think I can empathize with your motivation: it seems wrong to suggest that the adult reader’s eye doesn’t — in some form or other — register letters. But when Jost Hochuli says ‘whole words’, do you think that what this contains for him can be adequately circumscribed by terms like “envelope”, “outline”, or “global shape”, or by locutions like “the raw pattern of x-height-neutral, ascending and descending shapes”? Is this in effect what the typographers you consulted and Jost Hochuli were anxious to convey?

And if it isn’t, or can’t be, can a marshaling of evidence showing that “envelope”, “outline”, “global shape”, or “the raw pattern of x-height-neutral, ascending and descending shapes” are in fact not used actually falsify Hochuli’s claim?

I’m not sure how to properly operationalize ‘whole words’, but it seems wrong to me to do this with the terms I've placed inside double quotation marks.

The writings of James Townsend and Michael Wenger lead us to ask: “Is it possible to translate [within the realm of perceptual processes in reading] an inchoate notion of holistic or gestalt processing into a set of more precisely specified possibilities based on the characteristics of information processing?” (The words are theirs; the bracketed insertion is mine.) Townsend and Wenger try to do this (within the neural-network domain) in terms of process architectures, stopping rules, degrees of process independence and process capacity. We might be well served by taking a close look at how they try to do and test this.
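
To give a concrete handle on one of their tools: their workload-capacity measure compares integrated hazard functions of response times across single- and double-target conditions. A minimal sketch, with invented response-time data standing in for any real experiment:

```python
import numpy as np

def integrated_hazard(rts, t):
    """H(t) = -ln S(t), with the survivor function S(t) estimated as
    the fraction of response times longer than t."""
    s = np.mean(np.asarray(rts) > t)
    return -np.log(s) if s > 0 else np.inf

def capacity_or(rt_double, rt_a, rt_b, t):
    """Townsend & Wenger's C_OR(t): values near 1 are consistent with
    unlimited-capacity parallel processing, below 1 with limited capacity."""
    return integrated_hazard(rt_double, t) / (
        integrated_hazard(rt_a, t) + integrated_hazard(rt_b, t))

# invented placeholder data: two single-channel conditions, and a
# double-target condition built as an independent parallel race
rng = np.random.default_rng(1)
rt_a = rng.gamma(shape=4.0, scale=0.1, size=2000)
rt_b = rng.gamma(shape=4.0, scale=0.1, size=2000)
rt_double = np.minimum(rng.gamma(4.0, 0.1, 2000), rng.gamma(4.0, 0.1, 2000))

print(capacity_or(rt_double, rt_a, rt_b, t=0.35))  # ~1.0 for this toy race
```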

enne_son's picture

More precisely, Bill, reading — or coding — for the pattern of sub-letter features across whole words connects us with whole words as lettered things. This is so because of the structure of the matrix representation of the words.

As I said above, I’m going to wait with my description of that until I’ve finished reading Dehaene.

Jack B. Nimblest Jr.'s picture

Kevin> quoting... "With an easily readable typeface the individual letters are always designed with regard to their impact as parts of a word."

Look. One of the fundamental, no... foundational principles of readability design, in the realm of optical trickery, is the modest to excessive exaggeration of the lowercase ht. relative to all other hts. Now, the accompanying ascenders and descenders are shortened in this process. That's not an option, it's a requirement, and it most assuredly has a clear impact on diminishing the uniqueness of word shape. Thus, anyone who says what's quoted above is not saying it right, or worse, not doing it right, or, even worse, is simply not a designer of readable faces and just blabbing for gold.

Maybe this Hochuli fellow was thinking of headline fonts, where indeed designing at larger sizes provides spatial freedom to wander more in word shape via letter forming, but not in design for readability. On a much larger scale, I still wonder how those hundreds of millions of readers of letterless Chinese get any reading done at all without word-shape recognition.

quadibloc's picture

Legibility and readability are two different things, and legibility is easier to measure than readability - as others have noted in this forum in other discussions.

Typefaces like the Linotype newspaper face Corona, or the 4 3/4 size of Times Roman designed for what the British call "Newspaper Smalls", known as Claritas, most definitely exaggerate the x-height. This allows letters like a, c, e, and o to be distinguished from each other.

But once letters have a large enough size so that legibility is not the issue, instead of going from 11 point Times Roman to 14 point Times Roman, going to 11 point Times Roman with three points of leading is a more reasonable thing to do. (Admittedly, three points of leading is a bit much, unless the lines are quite long.)

Better yet, though, would be to use that empty space, and make the ascenders and descenders longer.

Most English words can't be distinguished from each other purely on the basis of their pattern of ascenders and descenders. So if a lowercase "e" looks like a dot in the point size in use, reading will not take place. So "word shape" will have to involve delving a bit further down into the word, and it will only be a help to reading, not the main engine of reading.
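
The claim is easy to check against a word list. The sketch below — the letter classes and the tiny word list are mine, for illustration only — collapses each word to its ascender/descender pattern and shows how many collide:

```python
from collections import defaultdict

ASCENDERS, DESCENDERS = set("bdfhklt"), set("gjpqy")

def asc_desc_pattern(word):
    """Collapse a word to a = ascender, d = descender, x = x-height."""
    return "".join("a" if c in ASCENDERS else
                   "d" if c in DESCENDERS else "x"
                   for c in word)

words = ["and", "ant", "art", "ask", "cat", "eat", "oat", "ear", "can", "run"]
groups = defaultdict(list)
for w in words:
    groups[asc_desc_pattern(w)].append(w)

for pattern, ws in sorted(groups.items()):
    print(pattern, ws)  # most of these words collide on the pattern 'xxa'
```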

William Berkson's picture

Thanks, David H., for the summary of the paper on Hebrew vs English reading. It makes sense that Hebrew readers process via a common root path, since roots are so much of an organizing principle in Hebrew. In English we have such a mixture of sources that we are most often unaware of roots. I don't know whether this implies anything with regard to the issue of letter recognition coming first or not, because in Hebrew root and meaning usually, but not always, go together.

David B., Peter's theory does not privilege extenders over other letter features, and their length is not the main thing. This idea that whole word pattern is just a matter of ascenders and descenders was an oversimplification of the idea of word pattern. In Peter's view it was a straw man used to dismiss a promising line of research, which he has taken up.

enne_son's picture

Still, Bill, reading David Berlow and John Savard’s latest posts, I think Kevin might have been right in thinking there was something important to respond to. We users of the English language, including those of us who deal with type, quite readily take terms used to evoke ideas of the whole-word pattern (as you say) as referring to what's contained in the simplifications. No doubt the ambiguity of the word “shape” plays a role in this.

Nick Shinn's picture

One of the fundamental, no... foundational principles of readability design, in the realm of optical trickery, is the modest to excessive exaggeration of the lowercase ht. relative to all other hts.

IMO this is a "legibility" requirement.
Consider this model of the reading process:

1. Sentence structure (purely textual) suggests a set of options for the upcoming word.
2. Word shape narrows the field.
3. However, at small sizes, legibility is more important to fine-tune the exact coding, as more acuity is needed to distinguish letters precisely, for instance differentiating between "e", "a" and "s". Therefore the x-height is made larger for small text.
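
A toy sketch of that three-stage funnel, to make the model concrete (the lexicon, the syntax stub, and the letter classes below are invented placeholders, not a claim about mechanism):

```python
ASC, DESC = set("bdfhklt"), set("gjpqy")

def envelope(word):
    """Crude 'word shape': ascender / descender / x-height pattern."""
    return "".join("a" if c in ASC else "d" if c in DESC else "x"
                   for c in word)

def predict_by_syntax(context):
    # stage 1 stub: options a parser might propose for the next word
    return ["cat", "can", "colt", "carp"]

def read_next(context, seen_envelope, seen_letters):
    options = predict_by_syntax(context)                            # stage 1
    options = [w for w in options if envelope(w) == seen_envelope]  # stage 2
    return [w for w in options if w == seen_letters]                # stage 3

print(read_next("the ___ sat", "xxa", "cat"))  # -> ['cat']
```

Stage 2 only narrows the field; stage 3 settles identity, which is where the extra acuity earns its keep.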

enne_son's picture

Nick, for your #2, I'd say the compilation of ensemble or summary statistics in parafoveal vision narrows the field. Both terms occur in the perceptual-psychophysical literature on object perception and crowding, and in some cases reading. Summary statistics capture more than information about the outer contour of the word, but not as much, or as accurately, as the information made available by ‘sharp-rendering’ in foveal vision. This is shown by juxtaposing the tests that Kevin recruits to argue that “letters are used” with tests that show words in parafoveal vision are susceptible to “silent substitution” [Denis Pelli]. But in the case of words, summary statistics render precise identity indeterminate.

In relation to your #3, I’d drop the word ‘legibility’ and say access to role-unit level information is important to fine tune the exact coding. This is as true at normal sizes, where acuity is not a problem, as at small sizes. At small sizes, it's not that more acuity is needed, but that there are thresholds to discrimination affordance, which can to some degree be addressed by fiddling with x-height.

Role-unit level information is letter-part level information, that is information about the shapes of counters and individual strokes.

John Hudson's picture

Peter: Dehaene adds, “In the hope of improving readability, typographers intentionally designed fonts that created the most distinctive visual ‘boumas.’” His source for this observation is — wouldn't you know it — Kevin Larson's 2004 “The Science of Word Recognition.”

Kevin: I developed the word recognition talk and paper because the people on my team at Microsoft (as well as other typographers and type designers) told me that they were designing for word shape.

Type designers have historically made a lot of more or less voodoo claims about how they work -- e.g. 'designing the white, not the black' -- especially when they are to be quoted in print. The disconnect between how people claim to work, or how they think they work, and how they are actually observed to work can be considerable.

Words are the primary context of letters, so of course when we design letters we look at them in words, and we adjust them to produce words that look good. We also look at them in sentences, and paragraphs, and pages of text, because these are also contexts of letters, and we want the typeface to look good in all these contexts, and different contexts reveal different things about the design. For instance, the slightly-too-heavy left diagonal on a letter v might not reveal itself when looking at some individual words, but will reveal itself in page tests.

I can't honestly recall any colleagues claiming to design for word shape, in the sense of word envelope. Rather, I reckon we might reasonably claim to design for words, in Peter's excellent phrase, as ‘lettered things’. And by the same token we design for sentences, for paragraphs, for pages....

dezcom's picture

"I can't honestly recall any colleagues claiming to design for word shape, in the sense of word envelope."

I quite agree. I know I tend to design for letterfit with left and right neighbor. Typically, the same kinds of groupings into classes that we use for kerning tend to yield what I will call "shape groups" for fitting and sidebearings. I, and others, make necklaces of glyphs that put each glyph in position with all of its potential neighbors, as well as control characters, to check fit. After coming to some reasonable satisfaction with this process, long texts in various languages are set and printed at various sizes, looking for fit, color, readability, and even aesthetics. This is not just a word outline; it is permutations of all players, to seek problems out and solve them.
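
For what it's worth, a trivial sketch of how such a necklace can be generated; the control glyphs here are just common choices, not any standard:

```python
def necklace(test_glyphs, controls=("n", "o", "H", "O")):
    """One proofing line per test glyph: the glyph sandwiched between
    doubled control characters from each spacing class."""
    return "\n".join(
        " ".join(f"{c}{c}{g}{c}{c}" for c in controls)
        for g in test_glyphs)

print(necklace("abc"))
# first line, for 'a': nnann ooaoo HHaHH OOaOO
```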

Kevin Larson's picture

"I can't honestly recall any colleagues claiming to design for word shape, in the sense of word envelope."

That is fantastic. If most typographers and type designers don't actually believe in simple word shape, then we’re in agreement. I know I come across arguments suggesting this isn’t always the case. The page after the Hochuli quote above shows a word (Alphabet) with an envelope drawn around it.

James Felici’s 2003 book The Complete Manual of Typography shows three images of the bottom half of words, the top half of words, and unoccluded words: “Our comprehension of the text we read is based largely on the tops of lowercase letters, the bottom halves consisting mostly of “legs and feet.” The top sample here is impossible to decipher, but the middle sample is comprehensible. Rendering those shapes into patterns enables us to recognize whole words and phrases at a glance.”

dezcom's picture

"“Our comprehension of the text we read is based largely on the tops of lowercase letters, the bottom halves consisting mostly of 'legs and feet.' "

'...and only believe some of what you read'

Rolf Rehe's 1974 book "Typography: How to Make it Most Legible" [the one with the crashing, over-tracked Helvetica cover] became the mantra of many design educators. Page 35 shows a word-shape example to describe how lowercase reads better than uppercase. There were other studies in that period professing the "top half of words" theory. This is the period that spawned most of today's design professors, and therefore the spreaders of the true WORD on what is good typography to today's young designers. Sadly, some of these theories, which were tested to some degree, were quickly elevated to the level of "GOSPEL TRUTH" in the eyes of adjunct professors seeking tenure on the grounds that they were actually using Science to teach design. I guess some folks, including Mr Felici, still give these theories more ink than they deserve.

When social scientists measured the density of information in the upper vs lower halves of words, they found an interesting tidbit which may help other scientists explain reading in the future. It was the design-education community of that era that went a bit overboard with it, not the scientists. In practical use, cutting off the bottom half does less damage than cutting off the top half, but it still Does Damage. It is like saying most human activity and sustaining organs are in the top half of the body. Cut off the head or thorax and death swiftly follows. Cut off the lower half and you will still live to be a happy paraplegic with a colostomy bag and a hampered sex life. Design professors used this kind of thing as a parlor trick to show they read the research.

John Hudson's picture

I can't speak for typographers, Kevin: they may believe that the little men behind our eyes read for us and whisper the text to our inner ear.

Nick Shinn's picture

I wouldn't equate any general understanding amongst typographers that people read word shapes with the way that type designers work.

Do type designers ignore the idea that people read word shapes, even if we happen to believe it's true? I would say so. There is a gulf between theory and practice. I mean, how would a type designer put the "word shape" theory into practice?

H&FJ give some indication of not having done so in their description of the nonetheless very word-shapey Mercury: they never mention word-shape readability benefits, which leads me to believe word shape was not an explicit criterion in the type's design, whereas these other criteria, which led to short capitals with much higher ascenders, are cited: "…[allowing ascenders] to help define the lower case", "…more compact capitals that help economize on space", "…a useful advantage for older typesetting systems that aren’t equipped to handle true small caps", and "…look[ing] less conspicuous on the printed page." --

Maximal Lowercase
Every typeface designed for small sizes begins with a large x-height, which helps a font feel bigger than it really is. Where Mercury differs from most newspaper faces is in its ascenders and descenders: many fonts truncate these forms, rather than allowing their shapes to help define the lowercase.

Minimal Caps
Mercury uncouples the height of the ascenders from the size of the caps, and introduces more compact capitals that help economize on space. As an added benefit, abbreviations set in ALL CAPS merge more seamlessly with the text, a useful advantage for older typesetting systems that aren’t equipped to handle true small caps.

Efficient Widths
Even many classic newspaper faces have sprawling capitals (Times Roman, despite its popularity, has some of the worst.) Mercury’s capitals have been made as discreet as possible in order to look less conspicuous on the printed page — an especially important feature when setting news stories that are filled with proper names.

I would suspect that if word-shape had been the principle upon which a type was designed, then it would have been mentioned in promotional material and type histories -- but I have never come across any such thing.

If we accept that prioritizing word shape leads to a small x-height, then ironically, or perhaps paradoxically, the claim usually made, if any readability claim bearing on word shape is made at all, is "large x-height for superior readability". I have never seen a face described as having a "small x-height for superior readability".

quadibloc's picture

@kevlar:
"I can't honestly recall any colleagues claiming to design for word shape, in the sense of word envelope."

That is fantastic. If most typographers and type designers don't actually believe in simple word shape, then we’re in agreement.

As three experts have already noted above, there's a distinction between typographers and type designers with respect to this issue. Typographers do believe in word shape to some extent, which is why they prefer using normal upper and lower case to ALL CAPS. But there isn't much that type designers can do about word shape; after designing each individual letter to be beautiful, and then ensuring that the letters look like they belong together, and then adjusting for the right side-to-side fit, there's not much they can do for more distant concerns.

However, there is one respect in which this isn't entirely true.

Type designers - at least if they're trying to revive Jenson or Aldus - do tend to try for generous ascenders and descenders. It's commercial considerations, not aesthetic ones, that led to the stubby descenders of Caslon 540, Goudy Old Style, and even Times New Roman.

Of course, there are also typefaces in which the x-height is very small for a deliberate effect in a display face.

But the ascender and descender height of a classic face (Cloister Lightface, Bembo, Poliphilus, Caslon Old Style, Garamond) is, I would think, generally regarded as optimal for readability provided a suitable point size of type is used.

For applications where smaller point sizes are used, well, you can't have readability if you don't have legibility: the two are not the same, but legibility is a prerequisite for readability. Definitely at 8 point and below, and even at, say, 10 point (and the default settings on computers are such that "12 point" is really often 10 1/2 point with 1 1/2 points of leading - I was quoting foundry type sizes, not what you get these days), you do need a large x-height. But that's a situation where one is struggling for legibility in the face of severe constraints, not one in which optimal readability is possible to achieve.

Jack B. Nimblest Jr.'s picture

JH> Type designers have historically made a lot of more or less voodoo claims about how they work...

An interesting take, and that may be so, but I think it's a different thing to publicize a flavor of voodoo, as if it were normal, when it is in fact contrary to good practice.

The bottom line on letter vs. word for me, and perhaps for type design aimed at any sort of automated composition, is that I can give you letters that might look rather uneven and contrary-looking to each other (see Goudy, Brody or Excoffon), but if you allow an expert composer control over the word making, readability emerges from a seeming mishmash of letters.

And, I can give you even and compatible-looking letters to each other with perfect spacing,(see a lot of stuff), but if you arbitrarily miscompose them (even by thin spaces), there are situations where words will just become a mishmash of letters.

So, I think it's just simply much more important not to fool around with word shape than pretty much anything else.

John Hudson's picture

David: I think it's just simply much more important not to fool around with word shape than pretty much anything else.

What you imply by ‘word shape’, though, based on your comments, is something very different from word envelope as illustrated by Rolf, Hochuli et al. You are talking about the relationship of letters and how they succeed or fail to cohere as what I'll call wordforms, since the meaning of ‘word shape’ is ambiguous. This is a cognate of letterforms and, like letterforms, wordforms are composed of the relationship of positive and negative shapes, and it has to do, as you say, with how type is used (composed). The obvious conclusion to draw is that any typeface optimally produces wordforms at a specific size, track, resolution, etc., and may only acceptably produce wordforms at some variations of those factors, and not at all at others.

[By the way, speaking about type for specific sizes and resolutions, when a font is identified at webtype.com as being intended for a particular px size range, is that CSS pixels or actual device pixels? That is, is it an optical size target or a ppem target?]

Jack B. Nimblest Jr.'s picture

John, I did not use the words envelope or form. "Letterforms" are, as you say, a combination of positive and negative space... Word "forms", if you insist, are not like letterforms, being combinations of letters, not combinations of positive and negative spaces. It's too late by word-forming time to think of letters as anything but letters.
[And, would you ever use "CSS pixels" to describe anything to the public without saying so?]

enne_son's picture

David [B]: below is a riff on what you wrote.

A letter is a combination of 1) positive shapes built up from black strokes, and 2) white or negative space. The white space is the space around and inside the letter; the negative space is the space between the strokes. The space around the letter is background. When the negative space is the space between adjacent or opposing strokes — that is, the space inside the letter where the background shows through — it can have a shape, or be given form. Negative or white space becomes evoked form with a more or less tractable shape. Designers of letters attend to both: for the designer a letter is two shapes. Both the black and the white are information for vision. Perceptual psychologists and cognitive scientists tend to forget about the white.

A word is exponentially more complex. On the one hand it is — at least in alphabetic scripts — an ordered combination of letters. On another level, it is an aggregation of the shapes, the more elemental whites, and the simple-to-complex stroke-constituted blacks. Within the word the combination of letters makes a second set of whites — between-letter whites. These whites appear to be less shaped and less informative. Yet conventional craft wisdom wants all the whites to be ‘in synch’ and it wants the blacks to be well-coordinated according to their proportions (the implementation of Cartesian space), weights, compositional logic (construction) and ‘swept-object’ logic (contrast). The total wordform is the totality of all this: the totality of in-synch whites and coordinated blacks.

Notan-think wants the black and white to be completely equipotent. Designers should at least think of how they relate. (This has to do with ‘colour.’)

All of these things interface with sight. A functional ontology of reading has to go hand in hand with a well-developed theory of writing. Currently they don’t. For example, arranging the whites so they are in synch, and tuning the blacks so they are well-coordinated affects ‘phase’ alignment. Arguably this will have an impact on processing routines. It will bias the system toward taking a more holistic or gestalt approach. But what that means in neural architecture and information processing terms is unclear. So the question becomes again, “Is it possible to translate an inchoate notion of holistic or gestalt processing into a set of more precisely specified possibilities based on the characteristics of information processing?”

I said both the black and the white are information for vision. By Stanislas Dehaene’s logic, the decomposition of words into their component letters and the increase in phonemic awareness have to have something to do with the adaptation of certain parts of the lateral geniculate nucleus to reading. I’ll bet ‘reading’ the whites, and the neural entrenchment of a decomposition of the blacks into role-units (through writing!), do too.

enne_son's picture

I’ll bet ‘reading’ the whites, and the neural entrenchment of a decomposition of the blacks into role-units (through writing!), do too.

They're probably the key!

Jack B. Nimblest Jr.'s picture

It's your thread Peter, but I'd re-riff like this: A letter is a combination of 1) positive shapes built up from black strokes, and 2) two kinds of white or negative space: a) the white space "inside" the letter, completely enclosed by black and within the reach of the black's extrema, and b) the white space "outside" the letter, by definition space subject to modification horizontally and vertically at the behest of the employer of a font in the act of composition...

John Hudson's picture

David: I did not use the words envelope or form.

Of course you didn't, but you used the term ‘word shape’ in the context of a discussion in which, among other things, this has been associated with word envelope. My point was that it would be best to avoid this ambiguous phrase and be quite specific about what we mean, individually, when we talk about letters and words. I agreed with your comments, but thought you risked confusion by referring to word shape.

[And, would you ever use "CSS pixels" to describe anything to the public without saying so.?]

I wouldn't mention ‘px’ at all in relationship to the Web without specifying exactly what I meant. px is a CSS unit of measurement, and since @font-face implies the use of CSS, I think it is more than likely that ‘px’ is going to be interpreted in terms of CSS code. I wanted to confirm whether or not that was your intention.
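
For the record, the arithmetic involved is simple (the device-pixel ratio of 2.0 below is an assumed example, not a universal value):

```python
# CSS defines 1 px as 1/96 inch and a point as 1/72 inch,
# so 1 CSS px = 0.75 pt before any device scaling.

def css_px_to_pt(px):
    return px * 72.0 / 96.0

def css_px_to_device_px(px, device_pixel_ratio=2.0):
    # device pixels differ from CSS px by the display's scaling factor
    return px * device_pixel_ratio

for px in (9, 14):  # a typical 'Paragraphs' px range
    print(f"{px}px = {css_px_to_pt(px)}pt, "
          f"{css_px_to_device_px(px):g} device px")
```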

Jack B. Nimblest Jr.'s picture

John, I used word shape in the context of a discussion where Larson was being quoted as claiming word-shape exaggeration and readability improvement are on the same path. They are not. My apologies to you if I was unclear.

>I wouldn't mention ‘px’ at all in relationship to the Web without specifying exactly what I meant...

Lol, well go on ahead then... But first, let me fill you in, since you don't type direct any web font services, and maybe ya don't know any normal users.
There is no "actual size" for specifying type on the web. (Ahem.)
Most users have no clue about how pixels become px to become different from pixels.
Our larger RE fonts read well down to 4.5 point (where a pixel is at least .5 point).
WT says "Paragraphs, 9px–14px", so that's how much space you have to specify "exactly what you mean".
If you don't say exactly and precisely the right thing, the word shapes can become unclear through misunderstanding of size. :)

HaVe aT It!

enne_son's picture

“[…] white space “outside” the letter, by definition space subject to modification horizontally and vertically at the behest of the employer of a font in the act of composition.”

I guess the space is subject to modification and the shape it begins to make subject to greater completion horizontally and vertically at the behest […]

dezcom's picture

White space not enclosed by the glyph takes on great importance when glyphs have neighbors. The resulting shape of the interlocking white spaces created when glyphs are set together is every bit as descriptive of the letter-word conglomeration as the black that it brings to life. You cannot make a form without its being affected by the shape it makes in its surroundings. To call this a "word shape", for want of a better term, is okay, but to then redefine "word shape" as something loosely encompassing the blurred outer edge of the word is a different story. If we need a term for the lassoed shape of the word, without consideration of all the internal and external white space, then let's create two separate terms. Granted, some negative shapes do less to activate the figure than others, but this, perhaps, aids in reading and helps us decode the forms. I don't know this to be true, but I would hate to throw out avenues of investigation just to be neat and tidy with terminology. We have a tendency to spend more time arguing about the exact classification of pigeonholes than we do looking at the migrations of birds.
