NEGATIVE SPACE - Letter Spacing.

nhallam

I am currently writing a paper discussing Massimo Vignelli's quote:

"We think typography is black and white. Typography is really white, it's not even black. It's the space between the blacks that really makes it."

I'm looking for any previous writings or information about the topic that I could reference in my paper.

I would have thought that someone would have written about the topic, but I have yet to find any serious writings.

If you know of anything, please post it up!

N

Blank

What have you read so far? I’m confused as to how you aren’t finding anything, because the concepts of counterforms and spacing come up in just about every typography textbook, most calligraphy manuals, most significant books on type design, and in interviews with type designers.

nhallam

J,

Sorry, I should have been clearer. I have of course read a lot in type textbooks etc., but I am looking for research that explores the importance of good letter spacing versus poor letter spacing. I am putting forward the idea that the figure-ground relationship between the positive and negative space in type design and in book/poster design is a vital aspect of the success and clarity of the intended communication.

As you know, a lot has been written about "how to" do this correctly, but I'm trying to determine "why" it is necessary.

Why is it, for example, that Helvetica is often judged as a more aesthetically appealing typeface than Arial, even by people who know nothing about type? Since the two are so similar, I believe there must be a subconscious psychological reason for this.

Ricardo Cordoba

I think that the figure/ground relationship of type is one thing, and the quality of Helvetica vs. Arial is another. The former does not necessarily affect the latter. See "The Scourge of Arial" for more info on Helvetica vs. Arial, and the reasons (not necessarily aesthetic or related to negative space).

For general discussions of the white space between black type, see:

Counterpunch, by Fred Smeijers

Letters of Credit, by Walter Tracy

...and also these online resources:

http://briem.ismennt.is/

http://www.typeworkshop.com/index.php?id1=type-basics

Manlio Napoli

See also Detail In Typography by Jost Hochuli.

Eben Sorkin

It sounds like you are suggesting that the "white*" space has a similar role in illustration(?), poster design, text, and micro-typography. I think this is mistaken. The thing about the white space is that, as far as I can see, there are many phenomena occurring at once: you have overshoots, which are a kind of optical illusion, and other similar illusions which I won't enumerate here, scale-specific illusions**, and then interactions with the specific qualities of the design, such as regularity (or not) of forms, horizontality, and so on. It is a rich mix.

And you might also want to look at the phenomenon of "crowding". Do a search for Pelli and crowding. The article "The uncrowded window of object recognition" by Denis G. Pelli and Katharine A. Tillman in Nature Neuroscience might be the best place to begin.

But back to the why: I think the reason you don't find much about why is that it is, on some level, a scientific / optical / neurological question about perception. As designers we try to become sensitive to what is happening and what works, and thereby develop an intuitive sense of what's happening. We can observe, name and compare common typographic phenomena - this is probably the case with Tim's book, for instance, and is the case when we look at a different phenomenon like overshoots. However, without insulting that ability, which helps us a great deal (quite the reverse in fact!), I think it is safe to say that this kind of activity isn't so much explaining why as what. To get at why you would need to do scientific experiments. And again, this is not to say that you can't get a great deal done by analyzing what. Indeed, you can start to develop some theories about why from this and even test them in a soft sense in your own designs.

In terms of Helvetica and Arial, I think that given enough time and attention you will eventually see why one is, in general, better drawn than the other. But you will also discover that there is not "one" Helvetica, and by this I don't mean Neue Helvetica or different weights; I mean that there are actually different versions. You will also discover that Arial is good at things that Helvetica isn't - some specific screen applications, and some specific rendering environments. As far as its relevance to your stated topic, I have to agree with Ricardo.

*since it may simply be the contrasting color

** If you look at what works best for type design at various sizes (and you might want to get Tim Ahrens' new book http://typophile.com/node/56793 ) you will find that the designs for various sizes differ. For instance, the counters for letters that will be set very small need to get larger, and the spaces between letters do too. And the features need to be more beefy/crude/large/obvious. Also, by experimenting a little you can find that a badly drawn font which is well spaced will outperform one which is better drawn but worse spaced. To me this fact alone explains the Massimo Vignelli quote. Ironically, it is the size-specific quality of the spacing (look at how close the i and l are set) that makes Helvetica nice when it is set large and not so nice when set small. It is that lack of robustness in the design of its spacing, among other things, that makes Helvetica a very poor choice for signage systems, for instance.

Blank

Try reading the section about spacing in Gerrit Noordzij’s The Stroke. I’m pretty sure Cyrus Highsmith talks about it a little in this interview. IIRC Mike Parker discusses it some more in the extra features of the Helvetica DVD; if this is a big paper, it might be worth trying to interview him for his thoughts on the topic.

Christopher Timothy Dean

@ nhallam:

Sorkin made some good points. In order to answer the "why" you will need to conduct controlled experiments with a directed research question, a clearly articulated hypothesis, and independent and dependent variables. To conduct research at this level, you will probably need about two years of undergraduate psychology specifically focused on cognition. Psych 2000 (or the equivalent of the primary 2nd-year psychology course at your institution), a few labs, and an intro stats course should be a good start. And your supervisor will more than likely need to be a research scientist, not a designer.

Sorkin mentioned:

Pelli, D. G., & Tillman, K. A. (2008). The uncrowded window of object recognition. Nature Neuroscience, 11(10), 1129–1136.

This is a well-respected journal with a high impact factor. Check out the references. There is a lot of good stuff there, but it's pretty narrow-cast science. Pelli and Tillman (2008) is a review article and is a little more digestible. You may find the reference somewhat obscure, and you'll probably need to get someone who knows stats to help you interpret the results. I usually do, as I still haven't got as much math under my belt as I would like. I believe I had to go through the database and journals at my school to get this, which usually requires paid access. My school pays about $3,000,000 a year to have access to all of this knowledge (peer-reviewed journals &c), so you may need to call in a favour. However, I often have great success simply contacting authors directly and asking if they wouldn't mind sharing their research.

Larson also has a nice piece of work online:

The science of word recognition.

One of the earlier players in this game:

Bouma, H. (1973). Visual interference in the parafoveal recognition of initial and final letters of words. Vision Research, 13, 762–782.

A lot of research in this area is still in dispute, however. It's fairly hot at the moment. To date, the answers to our "why" questions are primarily supported by convention, aesthetics and intuition, not by objective measures of human performance and empirical data. There are two pretty strong camps in this area, so be mindful of both. Often, threads of this nature end up in an "art vs. science" debate. Myself, I favour science.

J.P. Knox

Gerrit Noordzij's LetterLetter contains some articles about positive-negative space relations, for example "The dimensions of the mental image and its origins" and "The puzzle".

Peter Enneson

To my mind, in reading, the entire word is figure. The ground is the surrounding space the word stands out against. One of the great oversights of perceptual psychology, it seems to me, is the failure to understand that within the word as bounded map, both the black and the white are information contributing to visual word-form resolution. Psychology sees the black but has nothing to say about the white.

Underlying this is the principle that the rods and cones in the retina are responsive to reflected light.

Christopher Timothy Dean

Foveal and parafoveal physiology may also be worthy of investigation. I'll see if I can dig up a study on Monday.

James Montalbano

Why would anyone take anything Massimo Vignelli says seriously! Have you ever looked at his line of men's clothes? The clothes that he claimed would revolutionize the way men dressed for business. As far as anyone can tell Massimo is the only one to ever wear those clothes!

John Hudson

Whenever people start talking about ‘designing the white’ or about how the white is more important than the black, my bogosity meter starts to twitch. It sounds really cool to say one ‘designs the white not the black’, but what does it mean? What is the process? How does this process differ from designing the black?

Peter: both the black and the white are information contributing to visual word-form resolution

I'm not convinced of that or, at least, not convinced that the kind of information contributed by the white is either similar to or as important as that contributed by the black. Letters are positive shapes occupying negative space. The white is space more or less occupied by a given letter shape. Yes, the letter shape creates internal and inter-letter space-shapes, but those shapes a) vary much less than the letter shapes, so provide relatively little cue value, and b) are not independently featural: their size and shape are defined by the letter weight and style. Letter shapes of different weights and styles may still retain obvious structural and featural similarities, but the white spaces within and around them will not.

Underlying this is the principle that the rods and cones in the retina are responsive to reflected light.

And yet we do not obtain more information from more bright things: we obtain information from contrast which enables us to discern and recognise shapes. When I see a raven fly across a bright sky, the shape of the sky around the raven isn't where I'm getting the information from to recognise the bird: it's the bird that is the shape, and the sky is only providing a field of sufficient contrast to see the shape.

Peter Enneson

John, I think designing the white means being attuned to the white as shape in the process of designing, and it means designing so the whites are all in the same key, to use a musical metaphor.

In some words, like ‘look’ the wide open counters are the most salient feature. This is even true for ultra bolds. I can't believe it isn't used. Some recent tests appear to confirm this. Considered from the point of view of role-units, counters vary at least as much and are as rich in cue-value as stems and curves.

What I'm still wondering about is if the white uses the same spatial frequency scale as the black. It could be that it uses a coarser scale. And if it does, what does this indicate in process terms? Is information at the density or granularity of the salient whites resolved first?

It’s convenient that the raven is black. If it were an osprey, you'd be using the white as information.

Ricardo Cordoba

During the 1990s, I went to Alberto Breccia's comics workshop for a brief three months, and he had a great exercise for beginners: to train you to "see" and appreciate the white space, he would have you pick a black and white photo from a collection he had. Then, you would have to draw the picture on a colored piece of paper -- red, green, blue, etc. -- and fill in both the white and black areas. You would end up with a high-contrast drawing, and having to paint in the white areas made you appreciate that the white space was indeed part of a dynamic between it and the black. The white was a shape, just as much as the black was.

William Berkson

I think it was Frutiger who first said that he designs the whites, rather than the blacks. I think that's a huge exaggeration to make the point that whites are important, which is true.

I think there are a lot of different effects at play with whites. One of the important ones is the phenomenon which, as I learned from Kevin Larson, is called "Mach bands".

Here from the Wikipedia article of that name is an example:

The white Mach band is the whiter band you see to the left of the black. (There is also a blacker one to the right.) It is so obvious that you will be surprised that it disappears when you cover the black on the screen with a piece of white paper.
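A minimal sketch, assuming Python with numpy and matplotlib, that regenerates the kind of demonstration described above: a flat dark field, a linear ramp, and a flat light field. The bright and dark bands perceived at the ends of the ramp are not present in the pixel values.

# Mach band demonstration: uniform dark field, linear ramp, uniform light field.
import numpy as np
import matplotlib.pyplot as plt

flat_width, ramp_width, height = 200, 100, 150
dark, light = 0.25, 0.75

row = np.concatenate([
    np.full(flat_width, dark),               # uniform dark field
    np.linspace(dark, light, ramp_width),    # linear luminance ramp
    np.full(flat_width, light),              # uniform light field
])
image = np.tile(row, (height, 1))            # repeat the row vertically

plt.imshow(image, cmap="gray", vmin=0.0, vmax=1.0)
plt.axis("off")
plt.show()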

It is also noted by G. Noordzij, though he mistook the cause. The cause, as the article says, lies not in optics, but in the way our brain processes contrasts.

This is one of the things that Helvetica and Frutiger in different ways are designed to take advantage of. The closed shapes of the Helvetica bold e I think result in two Mach Bands enhancing each other in the 'gap', causing the letter to look bright and scintillating. The smaller eye of the e in bold weights also makes the Mach bands more vibrant. Helvetica really does take advantage of this.

Frutiger does it in a different way. If you compare the e's in Frutiger and Myriad, you will see that the lower terminal in Frutiger swells lightly, producing a sharp angle in the black and a more vibrant Mach Band effect than in Myriad, which tapers and is much less vibrant and assertive, even though the difference in the outlines is very small.

[Sorry I don't have the time to copy and post the pictures]

The way Mach Bands are handled is an important feature of whites, but only one. Also it is not necessarily desirable. In a text face, you may well want less shimmer, and more quietness.

There are a lot of other issues, and I think it's important to recognize that there isn't just one effect involved.

Christopher Timothy Dean

@ enne_son,

"Some recent tests appear to confirm this."

Citation(s)?

Eben Sorkin

I would like to again raise a distinction between John's model and a notanic model, in which the forms are not opposed but linked. Perhaps rather than linked I should say inseparable. Take the example of the bowl. A bowl is one because of the empty space made inside it. It would be silly to say the most important part of a bowl is the empty part of the bowl or the body of the bowl. Together they are the bowl; they are not opposed - quite the opposite. I do want to say that, despite my preference for thinking about B&W in terms of notan, I don't think this means that black and white have to be the same, equal, or equivalent. Far from it. In fact I would be surprised if that were so. Inseparability does not preclude difference. But at the same time, as far as I know we don't have substantial research-based reasons to characterize either one's role. If I am mistaken I would love to be corrected. Obviously type is not a bowl, is not physical in the same way, and isn't one shape. But despite all these differences, it seems to me that the point that opposition is nonsense carries through. I realize that this analogy may sound like Taoist propaganda to some, and that is okay. I accept that.

But rather than dwell on white vs black, or notan or not notan, I would suggest that the interesting way of looking at this is to think in terms of features and their clarity or ability to be detected.

The fact that they are in B&W just happens to make things less complicated. When you detect an animal in the forest - perhaps a deer - your image may be incomplete because of bushes, grass, or brush, or because of tree trunks or branches, but if you see enough you mentally assemble the parts into a whole deer. This might be in full and possibly very flat, low-contrast color, or it may be in near silhouette. But in all cases we are assembling recognized features, if given enough of them, into recognition of a 'deer'.

Deer in silhouette: http://www.flickr.com/photos/harve64/2271532149/

The process of recognizing letters is a similar process. We are recognizing features (Peter would say salient features). And it is the job of the black and white (however characterized) not just to make the features themselves easy to detect, but also to group, or as Pelli would put it, to integrate them into a whole 'a' or 'g'.

See: Crowding is unlike ordinary masking: Distinguishing feature integration from detection http://journalofvision.org/4/12/12/

So "white's" role, inseparable from "black's" or not, is to facilitate this process.

Bill, thanks for the Mach band.

Peter: In terms of tuning the white not simply as a necessary aspect of the black but as a feature, in the way that we look at thins and thicks in black strokes, stem widths, and so on - what aspects are you thinking of? I assume you mean things like word spaces, letter spaces and counter shapes. Perhaps line spaces get a look in as well. One other kind occurs to me as well: proximal spaces, such as with punctuation, but also within glyphs, such as between the dot in an ! and the stroke above, as well as in the openings or spaces of a, e, c, 3 & 5, U & H, f & t, and so on. Do you have others you would like to bring up? Is this what you mean?

John Hudson

Eben: So “white’s” role, inseparable from “black’s” or not, is to facilitate this process.

Yes. By providing a contrasting ground. :)

William Berkson

One of the other issues in dealing with white is figure/ground ambiguity or lack of it. In the West, 'notan' seems to most often refer to illustrations in which the white is used as foreground, such as snow on a mountain in Chinese and Japanese brush painting with ink.

In Helvetica Bold or Black, and other bold or black type faces, I think the counters tend to pop out partly because as the black dominates more, so the eye may want to read the white as 'figure' rather than background. I don't know how much the effect in Helvetica Bold is due to Mach bands, and how much to figure/ground ambiguity, but I suspect both are at work.

In fonts for extended text, it seems that figure/ground ambiguity is actually a bad thing. You want the blacks to be more salient, to be 'read' and the whites not to read. One of the most interesting things to me in the testing of Clearview is that medium weights are more legible at a distance than bold ones. In text fonts, I think you have something like 1/3 black 2/3 white on average between the base line and x-height. That keeps the white as ground and black as figure, and avoids any ambiguity.
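A rough sketch, assuming Python with Pillow and numpy and a stand-in bitmap font, of how one might put a number on that black-to-white ratio; a real measurement would use an actual text face at text sizes and restrict the band to the baseline and x-height.

# Render a line of lowercase text and measure the fraction of dark pixels
# inside the inked bounding box (a crude proxy for the black/white ratio).
import numpy as np
from PIL import Image, ImageDraw, ImageFont

img = Image.new("L", (400, 60), color=255)           # white canvas
draw = ImageDraw.Draw(img)
font = ImageFont.load_default()                      # stand-in font
draw.text((10, 20), "the space between the blacks", font=font, fill=0)

pixels = np.array(img)
ink_rows = np.where((pixels < 128).any(axis=1))[0]   # rows containing ink
ink_cols = np.where((pixels < 128).any(axis=0))[0]   # columns containing ink
crop = pixels[ink_rows.min():ink_rows.max() + 1,
              ink_cols.min():ink_cols.max() + 1]

black_fraction = (crop < 128).mean()
print(f"black coverage over the inked box: {black_fraction:.0%}")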

That's where I am with John, and where my Bullshit Detector--that's what we called it in the Midwest when I was growing up--starts ringing at the phrase "designing the whites".

The whites are critical, but I am doubtful that their actual shape is as salient a factor as that of the blacks. For example as well as 'dark-light' (notan) being one balance in Chinese theory of ink brush painting, 'thick - thin' is another. When you alter the thick-thin balance of the black in a typeface it is very noticeable. But you are only altering the white in a counter by a few percent, where you might be doubling or tripling the thickness of a thin. And the eye responds to small absolute but big relative change of the black in a way that shows to me it's more salient, i.e. has more impact.

This is not to deny that we 'read' closed counters, etc., as Peter argues. That may be. And having the right letter spacing, and counters in proportion to the right letter spacing is no doubt important. But I am skeptical that the details of the shapes of the whites are as important as those of the blacks.

John Hudson

Peter: I think designing the white means being attuned to the white as shape in the process of designing, and it means designing so the whites are all in the same key, to use a musical metaphor.

In my experience, the ‘white as shape’ is a by-product of the shape of the letters. As for getting the whites in the same key, I think this happens as a result of getting the letters in the same key (which increasingly I think of in terms of tuning to a consistent spatial frequency channel).

In some words, like ‘look’ the wide open counters are the most salient feature.

I think that's begging the question. Certainly when looking at the word ‘look’ the counters of the double o are the largest features, but does that make them the most salient when reading? I'd argue that the most salient features from a word recognition perspective are those that distinguish ‘look’ from ‘lock’, the closest confusable word in our language. It is not the counters that are salient, but the rings enclosing the counters.

Considered from the point of view of role-units, counters vary at least as much and are as rich in cue-value as stems and curves.

I don't think that is true, at least not for the Latin script. The number of counters is far less than the number of letters, and almost all those counters are limited to the x-height range. Of these, most are of similar size and related shape (making possible the historical use of the counterpunch). Indeed, most of them differ from each other only by virtue of the difference in the arrangement of their bounding stems and curves. Again, it is the shape and weight of the letter that wholly determines the size and shape of the revealed ground.

Fred Smeijers gives the example of ‘not’ and ‘hot’ -- two words with identical internal white space --, and notes that the preponderance of letters with one or more vertical sides greatly reduces the variety of inter-letter white shapes.

What I’m still wondering about is if the white uses the same spatial frequency scale as the black.

? As I understand it, spatial frequency is a measure of transition between contrasted objects. I'm not sure that it makes sense to ask whether ‘white uses the same spatial frequency scale as the black’, because spatial frequency refers to the interaction of the white and the black.

John Hudson

re. Mach bands...

Bill, I fail to see the relevance of Mach bands to type, except in terms of antialiased text on screen. The Mach bands phenomenon appears to rely on a gradient transition between dark and light. I also note that the Wikipedia entry discusses and illustrates this in terms of dark and light, not black and white. Finally, the Mach bands phenomenon is scale-dependent, as can be demonstrated by putting your illustration onscreen and then looking at it from different distances. So I don't think it is possible to say that a given typeface design exploits this phenomenon, other than in particular rendering modes at particular sizes.

Christopher Timothy Dean

A simple experiment to run regarding counters would be a reading comprehension test using the counters as an independent variable. Condition 1 would have a normal typeface; condition 2 would have the same typeface with the counters filled in. You'd have to outline the entire page and manually remove the counters, or customize a typeface, which would make materials preparation a bit time-consuming. I'm pretty confident that no one has conducted a study of this nature (but I tend to forget what I read, unless I've read it three times). A quick database search (PsycINFO and Web of Science) didn't turn up anything at a glance. Counterbalancing a 2x2 within-subjects design isn't that hard. Data entry and analysis would be a breeze.

Ss 1 = A1, B2
Ss 2 = A2, B1
Ss 3 = B1, A2
Ss 4 = B2, A1

A = counter
B = passage

A simple t-test should do for data analysis (keeping in mind I'm not as advanced as I would like to be in stats-land. I prefer to spend my time on experimental design. But for fun, I have taken to developing three dimensional Latin squares. I'll get there someday).
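A rough sketch of that analysis, assuming Python with numpy and scipy; the comprehension scores are invented, and the two columns correspond to condition 1 (normal counters) and condition 2 (counters filled in) described above.

# Paired t-test: each subject contributes one comprehension score per condition.
import numpy as np
from scipy import stats

# Hypothetical scores out of 20, one value per subject, order counterbalanced.
normal_counters = np.array([17, 15, 18, 16, 19, 14, 17, 16])
filled_counters = np.array([15, 14, 16, 15, 17, 13, 16, 14])

result = stats.ttest_rel(normal_counters, filled_counters)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")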

@ Sorkin, thanks for the reference to Pelli et al. (2004). I wandered around the paper and ended up at a great link to the Sloan font (scroll down about three screens and there's a little blue link to download. It's free). Strangely enough, the same day I read this I had a doctor's appointment. They are moving offices, so they were throwing stuff out. I noticed an old eye chart in the pile *yoink!* Check out that capital G!

(I appear to be having problem inserting images. It keeps saying "Could not copy image. Error. " Anyone?).

Craig Eliason

(I appear to be having problem inserting images. It keeps saying “Could not copy image. Error. ” Anyone?).
Is there a space in the filename?

Christopher Timothy Dean

Yay!

John Hudson

Christopher: A simple experiment to run regarding counters would be a reading comprehension test using the counters as an independent variable. Condition 1 would have a normal typeface; condition 2 would have the same typeface with the counters filled in.

What's the purpose of such an experiment? No one is denying that the existence of counters is important, and removing them doesn't affect the debate about the way in which they are important. I maintain that they are by-products of the shape of the letters: by removing them you are changing the shape of the letters. Also, letters include both open and closed counters and, arguably, any internal white space constitutes a kind of counter. Where would you draw the line on what to remove? Is the space between the arms of a k a counter?

I'm not sure if there is a reliable test for the independent significance of counters in reading, because there is no such thing as an independent counter. You can't obscure the counter without obscuring the letter shape, and we already have good research on obscuring letter shapes (such that we know what parts of letters are essential to recognition).

Peter Enneson

Christopher, I was referring to a graphic in a draft of a study by Caroline Blais of the University of Montreal and Daniel Fiset. It used to be online, but seems to have been pulled, perhaps for reworking and publishing. It was called “Skilled Readers Process Words Letter by Letter in Nearly Optimal Sequence.” It used the ‘bubbles’ technique of Gosselin and Schyns to uncover the features involved at different moments in image discrimination. A ‘classification image’ for a representative test word, javel, shows space-time voxels reaching statistical significance at the counter of the e, the right arm of the v, and the interletter shape between the j and a. The results are interpreted using a serial reader modeling paradigm, but I think a role-unit based modeling paradigm should be explored as well.

I think the bubbles / classification image technique has great potential for identifying what features or units are being used in perceptual processing.

John, we are at a stalemate. My a priori is that the feature-manipulative attunement that characterizes type design leads to the deliberate gauging of counters according to the integrity of their shape. And this becomes a functional operator in the evolution of type. The Noordzijian idea that a letter is two shapes, and a word a complex of these leads to the idea that all of this is used. Dean is right in suggesting tests for this, but I think it opens up fertile perspectives on what perceptual processing in reading entails, that is, on what really happens at a neural transmission level in the visual cortex.

Christopher Timothy Dean

Hudson: To make this feasible, I would operationalize "counters" as closed counters. To look at open counters and inter-letter counters (is there a term for this?) could, depending on the design, create far too many combinations to successfully counterbalance. I think we're looking at a trade-off between internal and external validity.

"and we already have good research on obscuring letter shapes"

I particularly like Arditi, A., & Cho, J. (2007). Letter case and text legibility in normal and low vision. Vision Research, 47(10), 2499–2505,

as it refutes the Bhouma model. Do you recommend any other readings?

Christopher Timothy Dean

And before I'm busted, it's spelled Bouma.

John Hudson

Peter: My a priori is that the feature-manipulative attunement that characterizes type design leads to the deliberate gauging of counters according to the integrity of their shape.

Yes, I think this does happen, but largely as a by-product of what we're doing with the black structure and features. But just because something happens in design doesn't mean that it is significant in reading, and it certainly doesn't mean that this kind of design developed because of significance in reading. There's a lot of stuff going on in type design, and not all of it is significant in its contribution to reading. Readability, as I apparently never tire of saying, is a prerequisite, not an achievement. The sheer variety of typefaces strongly indicates that much of the distinctive design in any given typeface is something built on top of readability, not something contributing directly to it.

The Noordzijian idea that a letter is two shapes, and a word a complex of these leads to the idea that all of this is used.

No, it doesn't lead to that idea. This is what I am saying: the Noordzijan description of what is happening in a letter -- which I think is one of the most useful contributions to type design, as I explained in my lecture at St Bride's a couple of weeks ago -- does not lead anywhere in terms of what we use during reading. Further, I don't think the 'two shapes' idea is necessary to the description. I ask, what is the sufficient role of the white? -- and it is to provide a contrastive ground for the letter shape. Without experimental evidence to the contrary, I'm not inclined to assign shape significance to the white.

John Hudson

Christopher, there's a Pelli paper that Kevin Larson showed me that identifies, if I recall correctly, the features of individual letters that are indispensable to recognition. I'll direct Kevin's attention to this discussion -- he may have some interesting input, in any case -- and hopefully he can provide a citation for the paper.

William Berkson

John, Noordzij saw this as a general phenomenon, as I remember reading. Also, Mach bands are only one illustration of 'lateral inhibition', in which edges are enhanced in perception. In Christopher's G above, I see a whiter band just outside the black, but I think that's an artifact of perception. And the black near its edge looks blacker than the center. Don't you see these as well? And why wouldn't they be relevant to design, particularly of display type, if the eye sees them?

Christopher Timothy Dean

nhallam,

I didn't think to ask: what school, program, or course is this paper for? Undergraduate? Graduate? Who teaches this class?

Peter Enneson

John, to the best of my knowledge, Pelli nowhere identifies the features that are indispensable to recognition. He does conclude in several places that items more primitive than letters — parts at the granularity of strokes — are indispensable. He does this in his 2006 paper “Feature Detection and Letter Identification,” and in his 2002 paper with Majaj, “The role of spatial frequency channels in letter identification.”

All his papers are at http://www.psych.nyu.edu/pelli/papers.html

You say: “[…] just because something happens in design doesn’t mean that it is significant in reading, and it certainly doesn’t mean that this kind of design developed because of significance in reading.” […] “[T]he Noordzijan description of what is happening in a letter does not lead anywhere in terms of what we use during reading.”

I think your second statement is far too categorical. The Noordzijian statement and central attunements emergent in the design process can certainly lead to potentially important hypotheses. They did in my case. My claim is that your idea that (inside the bounded map created by the blacks, interletter and intraletter whites) the role of the white is “to provide a contrastive ground for the letter shape” is a widely held and ostensibly common-sense assumption that needs to be explored, when it comes to what the visual cortex uses. I know of no evidence that has a bearing on this, either for or against. Certainly the white provides a contrastive field, but inside the bounded map or bouma it also provides criterial information toward rapid automatic visual wordform resolution in immersive reading, or so I think. I know I'm just repeating my mantra, but it seems to me to provide a powerful ground for understanding the importance of the white inside design.

Did you say you gave a presentation relevant to this? Is it available?

To get back to the original context of this thread: Vignelli claims that “[t]ypography is really white, it's not even black. It's the space between the blacks that really makes it.” This sounds very much like Noordzij's comment that “[t]he white of the word is my only holdfast.” I think this is true of typography as a practice in the sense that, in defining interletter spacing as well as word spacing and line spacing, it is getting a handle on the white that makes it.

Christopher Timothy Dean

enne_son,

Thank you very much for that great link to Pelli's published papers. It's refreshing to see work around typography that is supported by more than convention.

John Hudson

Bill: In Christopher’s G above, I see a whiter band just outside the black, but I think that’s an artifact of perception. And the black near its edge looks blacker than the center. Don’t you see these as well? And why wouldn’t they be relevant to design, particularly of display type, if the eye sees them?

The image Christopher posted is a) antialiased and b) huge. As I said: Mach bands seem to occur when there is a gradient shift between dark and light and, critically in terms of type design, they are size-dependent. Most text, in print, is too small for this kind of perceptual artefact to be present. Which is a good thing, considering that the primary effect of such artefacts is shimmer.

[Note also that the closely fitted square around Christopher's G probably contributes to some of the perceptual artefacts.]

John Hudson

Peter: The Noordzijian statement and central attunements emergent in the design process can certainly lead to potentially important hypotheses. They did in my case.

When I say that they don't lead anywhere, I mean that they don't lead deductively to a particular conclusion. Of course they might prompt or inspire hypotheses, but that's you taking the ideas somewhere, not them leading.

My claim is that your idea that (inside the bounded map created by the blacks, interletter and intraletter whites) the role of the white is “to provide a contrastive ground for the letter shape” is a widely held and ostensibly common-sense assumption that needs to be explored, when it comes to what the visual cortex uses.

I entirely agree. I'm being tenaciously Occamist, not budging from what seems to me the minimally sufficient role of the white, precisely because so much woolly fluff has been said by designers about the importance of the white. I'm not sure that my stated view is actually a widely held assumption: it seems to me that the widely held assumption, at least as represented in what designers say, is that the white is almost mystically important, and indeed what is said about it is largely mystification. I'm pushing for something more rigorous.

Certainly the white provides a contrastive field, but inside the bounded map or bouma it also provides criterial information toward rapid automatic visual wordform resolution in immersive reading, or so I think. I know I’m just repeating my mantra, but it seems to me to provide a powerful ground for understanding the importance of the white inside design.

As a hypothesis, I'm sympathetic to this. But I think it is begging the question to justify the hypothesis on the grounds that it provides understanding of the importance of the white within design. This is what is at question: how important is the white within design?

I think white is critical to rhythm and, hence, to spatial frequency. Which is to say that the size of the white areas relative to each other is important, and this provides the impetus to modulate the black shapes and their spacing in ways that maintain appropriately sized white counters and inter-letter areas. All very important. What I am not convinced about is that the shape of the white is important; indeed, the shape of the white is necessarily subservient to the style of the individual typeface, as expressed in the shapes of the black. There are a few examples of true ‘designing the white’, in which the shapes of white areas are the starting point and dictate something of the shape of the black and hence the style of the typeface; Legato is the obvious example. But the vast majority of type designers throughout history, regardless of what they might say about the white, have been in the business of designing black shapes.

John Hudson

Did you say you gave a presentation relevant to this. Is it available?

I'd been told that the lecture would be recorded, but in the event they didn't record it. I may give it again at TypeCon, but this is not confirmed yet.

To get back to the original context of this thread: Vignelli claims that “[t]ypography is really white, it's not even black. It's the space between the blacks that really makes it.” This sounds very much like Noordzij’s comment that “[t]he white of the word is my only holdfast.” I think this is true of typography as a practice in the sense that, in defining interletter spacing as well as word spacing and line spacing, it is getting a handle on the white that makes it.

I can agree with all of that, because none of it implies or requires a significance for the shape of the white in reading. It is critical to get a handle on the amount of white, but the shape of the white, whether inside the letters, between the letters, between the lines of text, or anywhere else on a page including text is wholly derived from the shape of the letters, the spacing of the letters, and the arrangement of the text on the page.

Now for Noordzij, who insists on total control of typography, tuning typefaces for specific uses, it makes more sense to talk in terms of black and white working together, because he is treating the black as mutable as the white. But for most typographers the black is a given.

Peter Enneson

[John] “I’m not sure that my stated view is actually a widely held assumption”

I was referring to psychologists of reading here.

[John] “[…] I think it is begging the question to justify the hypothesis on the grounds that it provides understanding of the importance of the white within design.”

I was trying to justify entertaining the hypothesis.

Nick Shinn

Vignelli was being provocative, showboating.
His rhetoric, hyperbole.
But of course, positive and negative, black and white, are two sides of the same coin.
What he meant is that superior designers (such as himself) don't forget the verso.

William Berkson

John, I take it from your reply that you do see the effect, though for some reason you don't admit it. And it doesn't depend on the outer frame in the G above.

As I said, this is but one example of the more general effect of 'lateral inhibition'. I don't know the literature on this, or what conditions are labeled 'Mach bands' and what 'lateral inhibition', but illusions of brighter and darker patches in black and white designs are quite common, and don't depend on the screen or gradations of gray.

Here is another example of 'shimmer', and here, in the Hermann Grid Illusion, only black and white are involved:
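A minimal sketch, assuming Python with numpy and matplotlib, that draws the classic Hermann grid for anyone who wants to see the effect; the grey smudges perceived at the intersections are not in the image, since every pixel is pure black or white.

# Hermann grid: black squares separated by white "streets".
import numpy as np
import matplotlib.pyplot as plt

square, gap, n = 40, 10, 6                 # square size, street width, grid count
side = n * square + (n + 1) * gap
grid = np.ones((side, side))               # all-white field

for i in range(n):
    for j in range(n):
        y = gap + i * (square + gap)
        x = gap + j * (square + gap)
        grid[y:y + square, x:x + square] = 0.0   # one black square

plt.imshow(grid, cmap="gray", vmin=0.0, vmax=1.0)
plt.axis("off")
plt.show()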

Further, it works pretty small. And of course type fits into various grids, so it is relevant to type design.

Now there is debate about what exactly causes these various illusions, but the important point is simply that they are at work in type design.

I think it is absurd to deny that such 'pop' or 'shimmer' has played an important role in the popularity of Helvetica Bold for display.

Slight scintillation as I said helps get attention in display type, and so is something a designer might want to take advantage of. Nick Shinn said he deliberately did so in his Eunoia design.

In smaller sizes, such effects still happen, but more with bolder type, and probably more with sans serifs than with serifs. In general, bold type--more black--seems to give rise to these effects more.

That generally such effects are less evident in text faces is not a reason to dismiss the importance of these optical effects. Rather it is evidence that traditional text faces in general avoid them, and that it is a good idea to do so.

That faces used for extended text are generally of medium weight, and not very light or bold may be partly because these conditions avoid such disturbing illusions. And it may be that serifs help defeat such effects. For example, one of the articles shows that non-square shapes in the Hermann Grid can kill the scintillation.

I really think you are making a mistake in dismissing these effects as irrelevant.

They are by no means the only reason for the importance of attending to whites in type design, but they are one reason.

John Hudson

Bill: I take it from your reply that you do see the effect, though for some reason you don’t admit it.

Yes, I see the effect. I thought I had acknowledged that. But my point was that the G is antialiased, so there is a gradient shift between the black and white, and, more importantly, it is huge. The further you get away from the image, the smaller it becomes, and the less obvious the Mach band effect. I don't doubt that if you were able to get back far enough, the effect would disappear, as it does when I look at the grey image on the Wikipedia Mach bands page from about 15 feet away. This is what leads me to conclude that such effects are size-dependent.

I think it is absurd to deny that such ’pop’ or ’shimmer’ has played an important role in the popularity of Helvetica Bold for display. Slight scintillation as I said helps get attention in display type, and so is something a designer might want to take advantage of.

I don't deny any of this. But I didn't think we were talking about display type. Sure, at display sizes these kinds of effects are significant, and are significant at the letter level. I don't think they are significant at text sizes, and I think it is a good thing that they are not. I also think that the absence of such effects at text sizes is one of the factors that has contributed to those sizes being preferred for continuous text.

They are by no means the only reason for the importance of attending to whites in type design, but they are one reason.

Okay, this is good. This is what I'm looking for. I think one can take the observation further, though, and say that this is a good reason for approaching design of text and display types in different ways with regard to the importance of the white.

John Hudson

Peter, I've been through the Pelli papers and have not found the illustration that Kevin Larson showed me last time I was at Microsoft, which showed the parts of individual letters necessary for recognition. I thought this was Pelli, but perhaps it was another researcher. Hopefully Kevin can provide the information.

Kevin Larson

Peter says “Psychology sees the black but has nothing to say about the white….Underlying this is the principle that the rods and cones in the retina are responsive to reflected light.”

The rods and cones (photoreceptors) respond to both white and black. There is a base firing rate for both. When looking at something that is the perfect average of the room luminance (i.e. a medium grey), the photoreceptors will fire at a baseline rate. If the photoreceptors then switch to looking at something black, they will fire at a rate lower than the baseline (thus seeing black). If the photoreceptors switch to looking at something white, they will fire at a rate higher than baseline (thus seeing white). White and black are both “seeable”, thus the same photoreceptors and recognition processes can be used for black text on a white background and for white text on a black background.

White (or background) is necessary for letter recognition because without sufficient interior and exterior letterspace, the letter is unrecognizable. But I would argue that we recognize the shapes of the letters and not the shapes of the spaces because that is an easier comparison task. If we look at a single letter, we need to decide if it’s one of 26 possible letters (in English). If we’re looking at the spaces, there are 26-squared (or 676) possible spaces. It’s more efficient to search among 26 possibilities, and our brains like efficiency.

This has been a very interesting discussion, and a pleasure to read!

Kevin Larson

John, the article I showed you with the key features important for letter recognition wasn’t Pelli. It was an article from the folks that Peter mentioned (Fiset and Gosselin are two of them) using the bubbles methodology. In the bubbles methodology they take letters filtered in different spatial frequency channels then different portions of the letter are further occluded. Recognition can only happen if the most important frequencies and letter components remain. Unfortunately I don’t have a citation with me.

Kevin Larson

Christopher, on counterbalancing S3 should be A1, B1 and S4 should be A2, B2.

If you found statistical differences with a t-test analysis, those differences would hold true, but it’s not the perfect test because it treats your 4 conditions as independent from each other. A 2x2 ANOVA would be a better analysis because it lets you compare your two A1 conditions against your two A2 conditions, and your two B1 conditions against your two B2 conditions. This reduces the probability of dismissing real differences as a chance event.

As John points out, interpreting the results is definitely challenging.
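A minimal sketch of that analysis, assuming Python with numpy, pandas and statsmodels, and invented scores; counters (normal vs. filled) and passage are the two within-subject factors described above.

# 2x2 within-subjects ANOVA on a long-format table: one score per subject per cell.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subject in range(1, 9):
    for counters in ("normal", "filled"):
        for passage in ("p1", "p2"):
            # Invented scores: a small advantage for normal counters plus noise.
            score = 15 + (2 if counters == "normal" else 0) + rng.normal(0, 1)
            rows.append({"subject": subject, "counters": counters,
                         "passage": passage, "score": score})

df = pd.DataFrame(rows)
result = AnovaRM(df, depvar="score", subject="subject",
                 within=["counters", "passage"]).fit()
print(result)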

Peter Enneson

Here are some links.
Two of the bubbles authors:
http://www.mapageweb.umontreal.ca/gosselif/cv.html
http://viscog.psy.umontreal.ca/~fisetdaniel/cvDan.htm
Three of the bubbles papers:
http://www.mapageweb.umontreal.ca/gosselif/Fisetetal_CogNeuro.pdf
http://www.mapageweb.umontreal.ca/gosselif/FISET_COGNEURO_2008.pdf
http://www.mapageweb.umontreal.ca/gosselif/FISET_PSYCHSCIENCE_2008.pdf
See the review of this work and the Pelli work here:
http://www.mapageweb.umontreal.ca/gosselif/Grainger_et_al.pdf

[Kevin] “If we look at a single letter, we need to decide if it’s one of 26 possible letters (in English). If we’re looking at the spaces, there are 26-squared (or 676) possible spaces.“

In the review, feature-based letter perception is discussed. First the visual cortex needs to resolve stimulus-derived information into the features. I think current guesses place the number of basic features at less than half of 26. If the intra-letter whites are used, this figure can be elevated. (Let’s leave the perceptual processing role of the inter-letter white moot for now.)

This might suggest the most efficient route to word recognition is to resolve identity directly at the feature level. My model is an attempt to outline the basics of such a route. The review opts for a hierarchical — instead of direct — route. Hierarchical processing is less efficient, from a neural computation point of view.

Peter Enneson

It could also be instructive to calculate redundancy in words at the orthographic versus the more primitive feature-unit level, comparing as well redundancy with just the blacks, and then redundancy using also the internal whites. Redundancy contributes to efficiency as well.
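A minimal sketch, assuming Python and an invented word list, of the orthographic half of that calculation: unigram letter entropy compared against the log2(26) maximum. A feature-level version would substitute a feature inventory for the alphabet, which is exactly what is still in dispute.

# Letter-level (orthographic) redundancy: 1 - H_observed / H_max.
import math
from collections import Counter

words = ["look", "lock", "not", "hot", "javel", "typography", "white", "black"]

letters = Counter(ch for w in words for ch in w.lower() if ch.isalpha())
total = sum(letters.values())

h_observed = -sum((n / total) * math.log2(n / total) for n in letters.values())
h_max = math.log2(26)
redundancy = 1 - h_observed / h_max

print(f"letter entropy = {h_observed:.2f} bits; redundancy = {redundancy:.0%}")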

Peter Enneson

[Kevin] “When looking at something that is the perfect average of the room luminance (i.e. a medium grey), the photoreceptors will fire at a baseline rate. If the photoreceptors then switch to looking at something black, they will fire at a rate lower than the baseline (thus seeing black).”

Might the necessity for a baseline firing rate be the reason it is important to be concerned about the even ‘colour’ of the text-block?

Chris Lozos

Massimo's quote indicates a closeness to Swiss modernists, not only in typography, but in graphic design. See both Emil Ruder's "Typographie" and Armin Hofmann's "Graphic Design Manual". The Swiss of that time quite often built the black by subtraction using white. It was important to make the space "active" in the letterform design as well as the typography and page layout.

ChrisL

Chris Lozos

You can also search threads here referring to "Bouma" and find elaborate and heated discussions about the subject.

ChrisL