David Kindersley on Spacing

William Berkson

I recently got hold of the monograph 'Optical letter spacing for new printing systems' (1976) by David Kindersley. I was looking for enlightenment on his ideas, but unfortunately I find the presentation rather difficult to understand.

His basic idea seems to be that the amount of space a letter should have--advance width times vertical extension--should be equal to the total white space within the letter, as defined by the black extremes left, right, top and bottom, plus a fixed amount.

One determines the left-right placement of the letter within this space in the following way (a rough sketch of the procedure follows below):
1. If one slides a vertical bar left and right over the letter, the position where the total white space to its left equals the total white space to its right is the correct 'optical center'.
2. The letter should then be placed with its optical center equidistant from the two extremes of the advance width.
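
To make my reading concrete, here is a rough sketch of that procedure as I currently understand it--my own paraphrase, not anything Kindersley gives--assuming the glyph is available as a binary bitmap cropped to the black extremes (True = black):

```python
import numpy as np

def optical_centre_x(glyph):
    """Column where the white area to the left of a sliding vertical bar
    equals the white area to the right -- my reading of the 'optical center'.
    `glyph` is a 2D boolean array (True = black), cropped to the black extremes."""
    white_per_column = (~glyph).sum(axis=0).astype(float)
    half = white_per_column.sum() / 2.0
    return int(np.searchsorted(np.cumsum(white_per_column), half))

def left_side_bearing(glyph, advance_width):
    """Offset of the black's left extreme that places the optical center
    midway between the extremes of the advance width."""
    return advance_width / 2.0 - optical_centre_x(glyph)
```

If it turns out I have the black and white roles backwards, the same sketch would apply with ~glyph replaced by glyph.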

I'm not even sure whether I've got these basics right. Does anyone know more about Kindersley's ideas, and what happened to them? Did anyone take them up? Were they rejected? How do they relate to Tracy's principles in 'Letters of Credit'?

dezcom

William,
I admit to knowing nothing about Kindersley, but I wonder how feasible such a solution is. It seems to me that most type designers (except those who are fond of calculation) can arrive at reasonable letterspacing by eye and experience more quickly than by making all the measurements and calculations a system like his might require. From reading your post above, it also seems to me that the concept of proximity is not allowed for sufficiently in the method described.

Proximity may be my own dream as a concept, since I have not seen it mentioned elsewhere, but I can't see it not having a great effect on spacing. I define proximity as the distance between the two closest points of adjacent glyphs in a word. A double v will have very close proximity; a double i or l will not. My theory is that proximity mitigates the area of white space in adjoining letters. An example is the word "wavy" vs. the word "tilt"; here the proximity of the vy in wavy counteracts the large space created by the adjacent opposing diagonals.
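
Just to pin down what I mean, here is a rough sketch of how proximity might be measured--my own illustration only, nothing standard--assuming each glyph is reduced to a list of (x, y) points sampled along its outline:

```python
import math

def proximity(left_outline, right_outline, advance):
    """Proximity = distance between the two closest points of adjacent glyphs.
    `advance` is how far the right glyph's origin sits to the right of the
    left glyph's origin; each outline is a list of (x, y) points."""
    return min(
        math.hypot((rx + advance) - lx, ry - ly)
        for lx, ly in left_outline
        for rx, ry in right_outline
    )
```

A 'vv' pair should come out with a much smaller value than an 'll' pair set at the same advance.
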
Kindersley's theory is intriguing to me in the same way that statistical measures like IQ are. It is interesting to compare a person's IQ number to how well he functions in the real world, but it is not always a good predictor. Is Kindersley's system always a good predictor of spacing, and how would you judge that anyway except by eye?

Just an observation, not an absolute.

ChrisL

William Berkson

>type designers (except those who are fond of calculation) can arrive at reasonable letterspacing by eye and experience

Well, the reason for my interest in Kindersley's work is that what the eye should be looking for is not totally obvious to me. As has been discussed in other threads, there is tension between different goals in spacing. One is not to have the extremes of letters touching. Another is to have the white area between letters approximately equal (the 'pouring sand between letters' analogy); another is to have an even rhythm of the counters moving across the page, and so on.

What should the trade-offs between these different goals be? The thing that worries me, and has me looking at different theories, is that what looks to the eye like good spacing for display might not be best from the point of view of readability in continuous text.

For example, I think that taking proximity into account is necessary but dangerous. It is necessary because otherwise legibility is hurt when, e.g., 'rn' is not clearly separated. It is dangerous because if you give it too much emphasis you can hurt other, more important goals such as good rhythm and color. In this thread Raph argues that the influence of serifs—and hence of proximity—should be greater at large sizes and smaller at small sizes.

I don't know that formulas such as Kindersley was seeking--and such as Adobe's 'optical spacing' actually uses--will ever be as good as or better than the eye of a designer, though I don't see how you can rule it out. My purpose, though, is not to find a formula, but just to understand the aspect of spacing he was exploring.

Maybe it would help to put up his basic diagram.

Here is what Kindersley says about it:

"In this 'C' the optical centre is very near the third moment centre and the mathematical centre. The mathematic centre is equidistant between the left and right projections of a letter (including serifs).

"The other centres are from the left: (1) area; (2) first moment (gravity); (3) second moment (interia); (4) third moment (optical?); 5 mathematical centre or the vertical halfway between the left and right projections of the letter.

"The optical centre must coincide with the mathematical centre of the space. These two factors, the correct space and the correct optical centre, alone provide the key to the interchangeability of characters. At this time I could see no way of finding the optical center except by eye ..."

I understand the center of area, but I'm not totally sure what area he is talking about--I am guessing it is the white area in the box bounded by the extremes of the black, but I'm not sure. The 'first moment (gravity)' I assume is the center of gravity of the black figure, if it were solid. The 'second moment' I don't understand, as the moment of inertia is, according to what I read, defined with respect to an axis of rotation, and he doesn't define an axis that I can see.

Later in the book, Kindersley explains the idea of finding the optical center by sliding a vertical bar left and right until the white space is equal on both sides. The bar is then at the optical center. He actually used a light behind the letter and a black wedge for the vertical bar. And he measured the total light projected.

At any rate, if someone can enlighten me on these issues, and the status of Kindersley's effort, I would be interested.

dezcom

It seems to work like physics, where an uneven distribution of mass (compare a hammer to a bowling ball) causes the balance point to shift. You might also add magnetism, to allude to my proximity discussion above. It all seems very exciting to me, not as a type designer, but just as an inquisitive human being.

I quite agree that display spacing is different from text spacing. This is why multiple-axis designs are needed--not only for the change in glyph outlines but for the metrics as well.
I think there is a point in display type where type designers must hand over the final spacing adjustments to the graphic designer/typographer setting the job. The individual characters at the chosen size can cause interactions requiring individual adjustment to suit the words, context, and intended meaning. Turning over your typeface to a graphic designer is like handing your child, now grown to adulthood, over to the world, having done the best you could as a parent in the time you had to raise them :-) I guess I must be suffering from empty nest syndrome :-)

ChrisL

enne_son

Bill, are you aware of the Kindersley article "Space Craft" in Visible Language VII 4 (Autumn 1973)?

I'll (re)read it later today to see if it clarifies anything. I think it probably won't, because it is earlier by three years and probably shorter (it is 13 pages long).

His guiding principle seems to be:
"The advantages of optical spacing, with as little compromise as possible, are quite considerable. Letters form more cohesively into words and thus increase our pattern recognition." (page 314)

Here he sounds a bit Ferlinghettiish:
"I am hoping for nothing less than a new typographical attitude to text faces--where alphabets are designed to fit together rather than to fit into rectangles. This means an appreciation of a letter's optical centre and the need to design a letter into the mathematical centre of it's space." "In my opinion the next evolutionary step for our alphabet lies in spacing, conjunction, ligaturing, and kerning. Words will take the place of letters. [...] Research with the help of the opthalmologist is long overdue. The opthalmologist is aware that in 1973 no research has yet really revealed how the eye reads. Could anything be more important?" (page 323f)

An abstract is here

hrant

DK was d man.

William, understanding this stuff requires a grasp of the concept of moments.

hhp

crossgrove

Wasn't one of Kindersley's goals to build a system that could be used mechanically to get the gross spacing of a typeface out of the way, saving time? I could be thinking of someone else. I thought it was intended to automate spacing.

As with Gill's big, geometric drawings to guide signmakers in the Underground, it could be a useful system in a specific application, but terribly misleading in another situation.

William Berkson

>a grasp of the concept of moments.

In what field is this theory? Where do I read about it?

Doing a search on the internet, I see there seems to be a concept of moment in structural engineering different from the usual 'moment of inertia' in physics. But I don't see anything in these discussions about a 'second moment', even in structural engineering, having to do with a center. The rotation of an object normally seems to be around the center of gravity, so that would not be different from what he calls the 'first moment'.

In any case, it is his 'optical center' which is his key concept, and not this other stuff, so perhaps it doesn't matter.

>Kindersley article “Space Craft”

No, I didn't know about this. Please let me know if it's any clearer.

My feeling is that he was on to something about the appropriate total space and centering being related to overall white space. But I'm not sure he really had a handle on it.

His idea of 'designing to the optical center', which you quote, means that he thought spacing could be helped or hindered by the design of the letters themselves. This seems correct, but did he discuss how to 'design to the optical center', giving examples of good and bad design? I don't see anything like this in the book, which would indeed be interesting.

raph

We had an awesome thread on these issues a year ago entitled the bouma of space craft. See also Theory of Spacing. [edit: I see William linked the first thread already.]

Bottom line, Kindersley's ideas are intriguing but seem unlikely to me to result in excellent spacing. Purely geometrical concepts like "moment" seem less useful as the basis for spacing than, say, spatial frequencies, which at least have some relation to the way the human visual system actually works.

I don't understand the relevance of second and third moments. The standard definitions don't yield a position (as does the first moment), but rather a measure of how far away the "mass" is from the center. I'd probably get a better understanding of what Kindersley meant if I had his paper handy to read.

Did you see this interview with David Holgate, who worked with Kindersley on the spacing research? There's also a review by our own hrant. I'm pretty sure one of the threads mentioned above linked this comp.fonts thread, with fascinating input by Chuck Bigelow, John Hudson, and others, as well.

If you're doing the spacing work by hand, as opposed to trying to construct an automated system, then I think Tracy's approach is basically sound.

Lastly, you might be interested in trying iKern. I haven't heard much from Igino lately, but I think his ideas deserve wider exposure.

John Hudson

I'll contribute more to this thread in the next few days, as time permits. I'll just say for now that I have, over the years, read everything that Kindersley wrote about spacing, and I have found it impossible to derive a clear picture of his system from anything that he wrote, whether in individual articles and pamphlets or taken as a whole.

William Berkson

Thanks to Raph for the links and to Peter for a copy of 'Space Craft'. And thanks, John, I am relieved to hear it is not just me who finds Kindersley difficult to decipher.

Reading 'Space Craft' and the interesting links of Raph, I am starting to get a grip on what Kindersley was up to.

First, it seems that I got it wrong in my first post. Kindersley was measuring and working on only the black of the letter, not the counters. Here, from 'Space Craft', is a diagram of his light apparatus.

The letter itself is transparent, and the background black, so his measurements are based only on the area within the outlines of the letter, not the counters.

Given that he is only looking at the 'black' (in reverse), I now understand why he kept trying different filters to weight the different parts. The filter he gives in the diagram has gradually less black and more transparency toward the outside. This gives more weight to the right and left extremes of the letter in determining the 'optical center'.

But it seems that he never found a filter that would automatically find the same 'optical center' that he could identify by his expert eye.

I'll post further my tentative conclusions on Kindersley in a little while.

William Berkson

Based on the additional information, especially John Hudson’s interesting comments in the thread Raph linked, here are my current hypotheses about Kindersley’s work.

1. It seems that the computer program based on Kindersley’s researches was never taken up. In the interesting thread linked by Raph, one person includes an e-mail from 1992 by the company that owned the rights, saying that the program was “hard to implement in terms of getting the maths right every time, and choosing the correct variables to control the calculations.” It sounds like it wasn’t practical, whereas the work done by Zapf and URW was; that work has been incorporated into Adobe’s ‘optical spacing.’

2. Kindersley was a master letterer and had a great eye for letter design and spacing. I suspect that his ideas of a natural ‘optical centre’ and an ideal minimum advance width are sound, but that he never got them formalized into a practical algorithm, which was his goal. My guess is that he failed to formalize his intuition because his efforts at formalization used only the black of the letters and didn’t include the effect of the white on the black. His approach, shown in the above diagram of the light box, was thus fundamentally flawed. I suspect that to formalize the ‘optical center’ properly one would need to include both white and black—that’s maybe why it’s different from the center of gravity of the black.

3. Leaving aside the issue of making an algorithm to do spacing automatically, which I myself am not interested in pursuing, I suspect that the ‘optical center’ idea—following intuition that includes the counters rather than a program--leads to somewhat different and probably better results than the ‘equal area between the letters’ idea.

4. The idea of designing around the optical center is fascinating, and may in the end be one of the most useful ideas he has. John, does he discuss this only in the Sassoon book, or is it elsewhere also? Can you describe the basic idea?

5. Finally, the ideal ‘advance width’ of a character design also raises the question of how to design the characters so that their widths fit into a rhythm, and how best to do this. Perhaps the Fourier information can help here, but maybe also Kindersley's ideas.

hrant

> used only the black of the letters

Not really, if you think about it.
His system essentially processes notan.

hhp

William Berkson

>His system essentially processes notan.

How can that be when his light meter only receives information about light going within the outline of the letter?

edit: I mean his efforts at formalization, not his intuition, which I'm sure did process both black and white.

enne_son

"My guess is that he failed formalize his intuition because his efforts at formalization used only the black of the letters."

Bill, how could Kindersley have believed "[...] my system differs from any other in that it takes into account the space* inside a letter, as does the eye."

* "space as a tangible factor, equal in importance to the letter itself."

John Hudson

William, I'll need to dig out my Kindersley notes and publications to refresh my memory. I suspect your analysis is correct: Kindersley's ideas never got as far as an implementable algorithm that would give reliable results. I think this is why it is so hard to make sense of what he wrote: he was writing around the fact that it didn't really work.

I've been meaning to discuss this stuff with Eiichi Kono, who knew Kindersley (and his wife) quite well, and may be more familiar with the unpublished side of things.

In my own experiments, I found the optical centre spacing based on the uppercase R as explained in the Sassoon book to be quite good for spacing caps-to-caps. I met Sassoon in 2002, but didn't think to ask her about Kindersley's spacing. I wonder if she understood it any better than we do. Actually, I wonder if anyone ever understood it, or whether it was briefly a popular topic for publication because it sounds impressive.

I do appreciate very much Kindersley's point in the Visible Language article about consistent spacing being more important than letter outline integrity, i.e. that letters should be allowed to collide and overlap if this maintains consistent spacing.

John Hudson

Peter: how could Kindersley have believed “[…] my system differs from any other in that it takes into account the space* inside a letter, as does the eye.”

By wanting to believe it? As I've remarked before, almost every type designer and lettering artist will wax lyrical about the importance of the white, of the equal status of the white relative to the black, and about 'designing the white', but if you look at actual design processes and, in this case, mechanical processes, it is hard to find evidence of practical application of such ideas. [This is why something like Legato stands out so much: Evert actually did design the white.]

hrant

> almost every type designer and lettering artist will wax lyrical ...

That is SO true, it's sad.

And foremost in this lip service are the chirographers.

hhp

William Berkson

>Bill, how could Kindersley have believed “[…] my system differs from any other in that it takes into account the space* inside a letter, as does the eye.”

>* “space as a tangible factor, equal in importance to the letter itself.”

Here is my reading of these. In the first quote (p. 320) he is referring to the space *within* the outlines of the letter--what is normally black, but is reversed within his light box set-up. He is in particular not referring to the counters, which his system does not directly take into account.

The second quote comes from earlier in the essay, and there he is indeed talking about space, but primarily about spacing between letters, which is supposed to be indicated by his system. He does mention also designing letters with too much space within, so he clearly thought his system did take into account the internal spacing also, and you and Hrant are right that he *wanted* his system to.

But does he actually take space into account effectively?

The only element that takes counter space into account is his 'inverse square law' for his 'wedge' filters. As I understand it--and that is not so much!--it works like this: the wedge has a density of vertical lines or dots that is concentrated at the center and then falls off to the right and left (but not vertically) according to an inverse-square law.

This means that much more light is transmitted by the parts of the letters that extend farther to the left and right of the center of the filter. So, for example, sliding a letter left and right in front of the wedge, when one puts the vertical of the L near the center of the wedge it is shaded quite a bit, whereas the right end of the leg is quite bright. Thus the 'optical centre'--the position at which the light to the right and left of the light box is equal--is further to the right than the center of gravity would be.

I haven't tried doing the calculations, but I think that is the only way space within the counters is taken into account.
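
If I were to try the calculation, I imagine it would look roughly like this: slide a trial centre across the letter and weight each column of black by its distance from that centre raised to some power, stopping where the two sides transmit equal 'light'. (A sketch of my own, not anything Kindersley gives; the squared-distance weighting is only my guess at what the inverse-square wedge amounts to.)

```python
import numpy as np

def wedge_centre(glyph, power=2.0):
    """Simulate sliding the letter in front of the wedge: return the column at
    which the weighted light to the left and right of the trial centre balances.
    `glyph` is a 2D boolean array (True = black, i.e. clear in his light box);
    each column's light is weighted by (distance from the trial centre) ** power."""
    light = glyph.sum(axis=0).astype(float)      # transmitted light per column
    xs = np.arange(len(light), dtype=float)
    best_c, best_diff = 0, float("inf")
    for c in xs:
        w = np.abs(xs - c) ** power              # wedge weighting about the trial centre
        diff = abs((light * w)[xs < c].sum() - (light * w)[xs > c].sum())
        if diff < best_diff:
            best_c, best_diff = c, diff
    return int(best_c)
```

Note that the counters contribute nothing directly here: only the black carries any light, which is just the point above.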

And this doesn't, I think, deal effectively with the real optical centre as he would get it by eye, or with the adjustments that he admitted you had to make after applying his system. John points out that the algorithm, at least as explained in Sassoon, doesn't take into account that the thin sides of the N would normally get less space than the thick stems of the H. This involves the 'ideal minimum advance width' I noted. But his later system doesn't even seem to have attempted to capture this; instead it relies on the system using the uppercase R, which only uses the 'optical center' and the actual extension of the leg.

I thought his earlier idea was that he could even tell you how long the leg should be, which would be really informative.

So, Hrant and Peter, I do think he wanted to take into account space within counters, but did so only indirectly and unsuccessfully.

I think that his approach was heavily influenced by the fact that he was a letterer and not a type designer. He thought that all type was spaced badly and that photo type, by doing away with the metal body, finally allowed proper spacing as a letterer would do it. So far so good. But type designers, even into the pantographic punch-cutting days, thought of letters as made by a process of subtraction from a metal block, rather than by addition of ink to paper. Thus they naturally focused strongly on counter space. Kindersley took an 'ink' approach, and I think the problem is basically that 'you can't get there from here' with that approach.

enne_son

Hrant: And foremost in this lip service are the chirographers.

Gerrit Noordzij (Letterletter [H&M: page 4]): "David Kindersley has invented the first system that simulates the calligrapher's perception."

hrant

And I'm Queen Nefertiti.

hhp

William Berkson

>I’m Queen Nefertiti

Well, now I finally understand why you're a monarchist :)

dezcom

LOL!!!

ChrisL

William Berkson

Ok, so the penny just dropped on how you can use the 'second moment' or 'moment of inertia' in defining an optical center.

The first moment is the mass at a point times the lever arm. The second moment is the mass times the length of the lever arm, squared.

Now think of the letter as solid, with a uniform depth and density, and fixed to the baseline. If you slide a fulcrum left and right along the baseline, then at some point the sum of the first moments (mass at a point times length of lever arm) to the right and to the left will be the same. That point is under the center of gravity.

If instead you use the second moment (mass times length of lever arm squared), then for asymmetrical letters the point at which the fulcrum balances the sums of the moments of inertia to its left and right will differ from the center of gravity. The point of equality will be different because you are weighting the parts of the letter more distant from the fulcrum much more heavily.

So that is what his light box was doing--it was an analogue method of doing the calculus to find out the point at which the sum of the second moments would balance on two sides of a line through the letter.

And you could do the same for a third moment, using the cube of the distance from the optical center. With the help of a scientist, Kindersley was able to print out a 'wedge' or filter that would do the analogue calculation for that, as well, using his light box. This would weight the more distant parts of the letter from the center even more heavily.
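
In other words--a sketch of my own, just to check that I have the concept right, with the letter reduced to a per-column 'mass' profile--the p-th moment centre is the fulcrum position at which the p-th moments balance:

```python
import numpy as np

def moment_centre(mass, p=2, tol=1e-6):
    """Fulcrum position where the summed p-th moments of the per-column masses
    balance left against right: p=1 gives the centre of gravity, p=2 the
    'second moment' centre, p=3 the third. Bisection works because the
    imbalance grows steadily as the fulcrum moves to the right."""
    mass = np.asarray(mass, dtype=float)
    x = np.arange(len(mass), dtype=float)

    def imbalance(c):
        left = (mass[x < c] * (c - x[x < c]) ** p).sum()
        right = (mass[x > c] * (x[x > c] - c) ** p).sum()
        return left - right

    lo, hi = 0.0, float(len(mass) - 1)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if imbalance(mid) < 0 else (lo, mid)
    return (lo + hi) / 2.0
```

For an asymmetrical mass profile the three centres come out in different places, which I take to be what his diagram of the 'C' is showing.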

So it all makes sense now. The only problem is that in the end it doesn't seem to have worked in capturing the more accurate intuition of Kindersley in identifying optical centers and spacing letters. It needed too many qualifications and exceptions.

I'm going to re-read the monograph now that I understand the basic concept.

Right now my working hypothesis is that the optical center is one factor in good spacing, and there are others such as roughly equal area between letters. What I think may be worth pursuing here is the idea of 'designing around the optical center'.

It can happen with a design of a letter that when the side bearings give approximately equal area with other letters, the optical center of a letter is not in the middle of the advance space. This would mean that you haven't 'designed to the optical center', and the letter should be redrawn.

Kindersley's ideas about ideal advance width, and how these relate to rhythm in type, also raise an interesting question.

Overall, my feeling is now more positive that there are valuable things to be learned from Kindersley's effort, even if in the end it didn't work.

William Berkson

P.S. I should also change my answer to Peter, above. Now I think Kindersley was talking about counter space more than I acknowledged there. I just suspect he was some distance from capturing its true impact.

enne_son

William, thanks for the clarifications.

"Now think of the letter as solid [...]"
The problem with this is how to determine the 'extent' of the white in partially enclosed forms like the 'c': where does the space inside the letter end and the interletter space between the 'c' and its neighbour begin?

Another issue is deciding the dimension of the advance. Is it a pure measure from the optical centre, or does it vary from letter to letter, which it must, I would think, in a proportional font? The Kindersley method seems to hold out the promise of a consistent spacing mechanism, but does it also contain a method for determining how wide letter spaces should be?

Gerrit Noordzij wants to use the space inside the letter as the benchmark. The space inside the letterforms and the space between them must be in synchronicity.

William Berkson

>The problem with this is how to determine the ‘extent’ of the white in partially enclosed forms like the ‘c’

Peter, as I understand it, all the calculations are based on the black. The white is only taken into account indirectly by the use of the 'moments' in the calculation of the optical center of the black. That he was able to get so far without directly looking at the white is what is intriguing about his approach.

If I am getting it right, it starts from an opposite approach from Gerrit Noordzij. Now the black and white are defined by the same stroke, so perhaps it is not a total surprise that you can start with one or the other and get similar results. But I am still surprised that Kindersley's approach was able to get anywhere at all.

My guess is that both black and white do make contributions, and that Kindersley and Noordzij both have a part of the truth, but how much I don't know.

Incidentally, in re-reading Kindersley's 1976 edition of 'Optical letterspacing', which I am now mostly understanding, I see that he does discuss designing around the optical center. More later.

hrant

> The space inside the letterforms and the space between them must be in synchronicity.

No, the notans of potential boumas must be
in "synchronicity" with our reading firmware.

The relationship between adjacent whites is kid's play, just like the
general misguided practice of making the blacks formally harmonious.

hhp

John Hudson

No, the notans of potential boumas must be in “synchronicity” with our reading firmware.

Practically, what does that mean?

I think the statement, accepting your usage of notans and boumas, is true, but it seems to be true in the same way as 'the amount of oxygen in the atmosphere must be in "synchronicity" with our breathing firmware'; i.e. it doesn't tell us anything about how to achieve this synchronicity. Such a statement also doesn't tell us anything about our breathing firmware's tolerance for greater or lesser proportions of oxygen.

hrant

At the very least, it tells us that forming the black (even while minding the white) is arbitrary. Practically, it tells us for example that the Letterror "shading" technique (which you've said you sometimes use in actual design, if only at a conceptual stage) has no bearing on what [I believe] is the main function of text type: to be read. So practically it tells you to stop doing that. It would in fact be a lot less bad if you "fleshed out" your black with circular motions. Even random motion would be better than simulating a broad-nib pen! As one of my favorite riddles goes: "What's the difference between a headless chicken and FILL-IN-THE-BLANK?" Answer: "A headless chicken has a random chance of going in the right direction."

But really, most practically:
For starters let's just work on grasping -and admitting- this original truth, shall we?

> ‘the amount of oxygen in the atmosphere must be
> in “synchronicity” with our breathing firmware’

That's not such a bad parallel.
And also parallel: the oxygen in the atmosphere doesn't
need to be in synchronicity with the oxygen on Mars.
The problem in type design is we still imagine that
our Mars is teeming with life.

hhp

hrant

Less practically, but [as a result] more fundamentally:
We have been told by the chirographers that the insistence of contemporary type design technologies on having us draw outlines is contrary to how type should be designed. I say to you: we are blessed because we can now draw outlines - that is the direct access to the backbone of notan.

hhp

John Hudson

We have been told by the chirographers that the insistence of contemporary type design technologies on having us draw outlines is contrary to how type should be designed.

Who? Where?

I say to you: we are blessed because we can now draw outlines - that is the direct access to the backbone of notan.

As is cutting metal punches. The making of type has almost never been a chirographic process, regardless of the designer's thinking about letterforms. In fact, only stroke-based font technologies are not firmly outline-based.

raph

William wrote: Ok, so the penny just dropped about this "moment" concept. Yes, that makes a tremendous amount of sense, and it's also clear how the photographic filter wedge + light sensor technology could measure these quantities. That said, my intuition is telling me that it's far too crude a measure to be the primary basis for good spacing, as it does not take proximity into account at all, among other things.

hrant: I must confess that I find some of your recurring themes to be frustrating because, to me, they're too vague and slippery. The use of the word "notan" to describe dark/light relationship sounds nice, but what does it actually mean? As to whether Kindersley's machine is measuring the black or white, I think you'll find that it doesn't actually matter. If you measured, say, a letter and its inverse image, you'd find that the sum of the two light flux quantities would be exactly the same as all-white. So, saying that it measures notan, black, or white brings no real insight to the table.

Similarly for your insistence that stroke is secondary to outline. What does that mean in practice? That good fonts are not slavishly chirographic (in that the character shape should not just be a convolution between a static pen outline and a zero-width path)? We already knew that - all non-strawman presentations of chirography include the principle that the pen shape is not fixed, but varies.

One reason I've come to believe that chirography is important is that my visual system seems to be very good at doing a chirographic analysis or decomposition of an outline, let's say for example the lower right serif of a 60pt metal Centaur capital K. Even though the modulation of outline is extremely subtle, my eye seems to have no difficulty seeing that, at the very right side, the stroke is a continuation of the main diagonal stroke of the K, but the rest of that part of the outline belongs to the horizontal stroke comprising the bottom-right serif. Further, it makes intuitive sense to me that the human visual system is going to be very good at this kind of analysis, in parallel, and with extremely high bandwidth. Quick, which branch belongs to which tree? You'll care if you need to climb a tree suddenly because a tiger is chasing you.

To convince me that "outline is more important than chirography" is more than a platitude will take a more detailed argument, with at least the possibility of empirical verification, than I've seen you advance. And I believe that one of the best tests for theories of readability, or type perception in general, is an algorithm that accurately predicts the perception of spacing.

William Berkson

Re-reading ‘Optical spacing’ by Kindersley, I now have a better grip on two important points I didn’t get earlier.

1. How he determined the advance width of characters.
2. The idea of designing a character to the optical center.

Advance width.

He used the light box to try to set the advance width of characters, but in the end it didn’t fully work. His method was to measure the total light coming through a letter (in reverse, clear on black) after going through his ‘wedge’ filter--with the optical center of the letter at the center of the wedge, if I understand it rightly.

He then put clear rectangles surrounded by black in front of the wedge. These were the same height as the letters in the case of the caps, but of varying widths. The width of the rectangle that gave the same light as the letter was the basis for the advance width. To it was added a fixed amount, determined by checking the minimum space that the most asymmetrical letter, the L (later the R, John reports), would need in order to be centered on its optical center without kerning.
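
In code terms I picture the procedure something like this--my own reconstruction, with the wedge reduced to a weighting function; none of the names or values here are Kindersley's:

```python
import numpy as np

def weighted_light(image, centre, power=2.0):
    """Total 'light' through a clear-on-black image behind the wedge: clear
    area per column, weighted by (distance from the filter centre) ** power."""
    clear = image.sum(axis=0).astype(float)
    x = np.arange(len(clear), dtype=float)
    return (clear * np.abs(x - centre) ** power).sum()

def advance_width(glyph, optical_centre, cap_height, fixed_addition, power=2.0):
    """Width of the clear rectangle (cap height tall) that transmits the same
    weighted light as the letter, plus a fixed amount -- my reading of how the
    advance width was derived. `fixed_addition` stands in for the minimum space
    he took from the L (later the R); I don't know what value he actually used."""
    target = weighted_light(glyph, optical_centre, power)
    for w in range(1, 20 * glyph.shape[1]):
        rect = np.ones((cap_height, w), dtype=bool)
        if weighted_light(rect, w / 2.0, power) >= target:
            return w + fixed_addition
    return None
```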

There had to be changes for the lower case, but even for the caps he reports that he had problems with the O and the C. The O was consistently spaced slightly too widely using the ‘second moment’ wedge filter. The C with serifs was also placed slightly off center, as the optical center determined by eye was to the right of where the filter indicated.

In order to compensate for his filter giving wrong results, he stuck with the inverse square filter, as it seemed to him “intuitively right”, but started changing its shape to various rings and frames. At this point the thing seemed so ad hoc to me that I lost interest. In my view stubbornly sticking to a refuted theory instead of looking elsewhere was a fundamental mistake.

Later I will post on ‘designing to the optical center’, which may in the end be Kindersley's most important idea, and my overall conclusions so far.

dezcom

William,
Thank you for your diligent pursuit of this. This sounds like a great paper to present at Typecon or elsewhere.

ChrisL

Nick Shinn


Helvetica has greater divergence in its "round" and "straight" side-bearing widths than News Gothic. This is shown in the top example.

In the bottom example I reversed the situation.
(Note: I eyeballed this, so it may not be totally accurate, but it does show the difference.)

William Berkson

Interesting, Nick. That does follow what I quote below from Kindersley, as Helvetica has wider round letters than Trade Gothic, if I am seeing right.

Here's my next-to-last installment of 'my book report':

The first edition of the ‘Optical Spacing’ monograph was in 1966, and the second, 1976, which I now have a photocopy of, has a final section ‘after 10 years.’ There is a final 2001 edition, edited by his widow, which must have additional material, but I can’t find any copy around here (Washington DC area).

In the ‘after 10 years’ section Kindersley starts sounding much more like Walter Tracy and Gerrit Noordzij. First of all he notes that ‘hand spacing’ will give more letter space to the combination ii, and then progressively a hair less to the nn, mm and the oo, the last of which has the least space of all. He comments:

“Obviously the letter form with the largest internal space to stroke ratio needs the least space outside of itself, like the ‘O’. Whereas a letter such as an ‘I’ needs all of its space externally. …The human eye sees the whole letter and requires a space outside that letter consistent with the dispersed black and white in the letter. …The bolder and more condensed the letter form—take ‘black’ letter—the more consistent the spaces. The thinner or lighter the stroke and the more extended the ‘letter form’ the greater the variation of spaces. The effect of paying attention to this natural optical characteristic is to give the page a remarkably consistent colour and at the same time avoid the ‘picket fence’ look of many type faces.”

This is interesting and seems sound, but as far as I can tell none of this derives from his optical light box measurements, but rather from his practiced eye. So my general conclusion is that Kindersley’s efforts at an algorithm were a bust, but that he has a lot of interesting insights. So I think the thing is to look past the mess of the failed algorithm, and at the ideas.

I’ll post next on what I have found most interesting, the idea of ‘designing to the optical centre.’

Nick Shinn

Concerning the balance of positive and negative space, it's not possible to speak of the spacing of letters, as if the glyphs are designed first and then the spaces added after. The spacing and letterforms are conceived of as a related whole, and the relationships between them are a matter of judgement on the part of the designer.

As you say Bill, Kindersley appears to be looking for a general principle which explains the consensus of judgements exhibited in the canon of best-practice faces. But I'm not sure there is a consensus: does Chalkboard play Helvetica to Comic Sans' News Gothic?

hrant

> it’s not possible to speak of the spacing of letters, as if the
> glyphs are designed first and then the spaces added after.

Except that's how almost everybody does it. :-/
You're talking about what should be the case, not what is.

And what about kerning? That affects everything too, but you'd be very hard-pressed to
find a designer who doesn't follow the tired old "space first, kern later/sort-of" routine.

hhp

dezcom

"...Helvetica has wider round letters than Trade Gothic..."

Not only that, but Trade has flatter sides on the O, which means there are more points in proximity than with Helvetica's more rounded O. A more drastic example is comparing a very round Futura O to the flat-sided DIN O. It is not just a matter of a more open and larger counter space but of the actual shape of the space. The flatter the vertical sides of the O, the larger the sidebearings should be, and conversely. This is true even if it is not a condensed face.

ChrisL

Nick Shinn

you’d be very hard-pressed to find a designer who doesn’t follow the tired old “space first, kern later/sort-of” routine.

That's the physical sequence, but I suspect that most experienced type designers are aware, when they're setting sidebearings, of how kerning may subsequently be used to help smooth things out, and work accordingly. Also, when doing kerning, it sometimes becomes apparent that kerning won't fix a situation, and one has to go back and change the sidebearings, or even the glyph shape. That's something I often have to do when designing an f that has to work as part of an f_i ligature, as a non-ligated f + i, and in a combination like ffi. A lot of back-and-forthing.

William Berkson

[my longest post ever--hopefully worth it!]

Nick, I think Kindersley is trying to capture best practice in lettering, not type, and bring that to type. He thought type was badly spaced. I don’t think he knew the tradition in type spacing, at least at first. As Tracy notes, his own type founder’s approach to spacing goes back hundreds of years, at least to Fournier. The metal type problem of a single set of side-bearings for a letter is a non-problem for the letterer, who is hand spacing each letter and can also vary the shape of the letter. But because so many compromises do not need to be made, the letterer can try to get much closer to perfection in spacing. And this means that maybe lettering masters like Kindersley had a better eye for what the ideal is. Then when photo type and then digital type came along, the forced compromises of metal were relaxed a bit. So there was an opening for Kindersley to make a contribution. And I think he did, though the ‘system’ didn’t work.

Is the idea of ‘optical centre’ valid? I do think there is something to it, at least for roman type, which sits on a baseline. We have an instinctive sense of center of gravity—where to pick up an object so that it will balance. And I think that this affects what we see as balanced in type. However, I suspect that the reason the optical center is sometimes a bit off from the center of gravity is the opening of the counters. I think this somehow affects what we see as ‘the letter’, and so its balance. So an algorithm that captures Kindersley’s idea would need to include white space in its calculations. It might even work with calculations on white only—which was my initial misreading of Kindersley.

Given that the idea of ‘optical centre’ has something to it, I think Kindersley’s most interesting stuff is on ‘designing to the centre’.

His basic principle of spacing is “each letter should appear to be exactly in the centre between its two neighbors. …To me this is the only criterion.” As we will see, he also brings in even color for considering the correct advance width, but what is notable here is that he leaves out proximity or equality of white space between letters.

‘Designing to the centre’: “The secret of a good fit and an economical one lies precisely in moving the optical centre towards the mathematical center.” By ‘moving’ he means changing the design of the letter to shift the optical center. By the ‘mathematical center’ I believe he means the point at an equal distance from the sides of the letter (not the side bearings). Thus designing to the center means bringing the optical center of the letter, in so far as possible, to the center of the letter. ‘Designing to the center’ thus involves designing a *balanced asymmetry* into the glyph.

The following is his example, a discussion of the ‘L’.

The interesting thing here is that adding a serif to the leg of the L *decreases* the space needed to the left of the L, by moving the optical center more to the right. One of the interesting features of this analysis is that it focuses on the black of the letter, and shows how focusing on the distribution of the black is critical to good design.
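
You can see the effect with even a crude stick-figure ‘L’--a toy example of my own, not his diagram: adding mass at the right end of the leg, where a serif would go, pulls the centre of gravity (and any higher-moment centre) to the right, so less space is needed on the left.

```python
import numpy as np

def centre_of_gravity_x(glyph):
    """x position of the centre of gravity of the black (True = black)."""
    mass = glyph.sum(axis=0).astype(float)
    x = np.arange(len(mass), dtype=float)
    return (mass * x).sum() / mass.sum()

# A crude sans 'L': a stem two units wide, a leg two units deep.
L = np.zeros((10, 8), dtype=bool)
L[:, :2] = True                       # stem on the left
L[-2:, :] = True                      # leg along the bottom
print(centre_of_gravity_x(L))         # 2.0

# The same 'L' with a block at the right end of the leg, standing in for a serif.
L_serif = L.copy()
L_serif[-4:, -2:] = True
print(centre_of_gravity_x(L_serif))   # 2.5 -- the centre has moved to the right
```

With the centre further to the right, centring the letter on its optical centre leaves less dead space on its left side.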

This analysis also opens the way to understanding an important function in the design of serifs and of varying weight. For example, what is the point of the tail of the ‘a’? Easy to answer if you follow Kindersley, but who else has an answer? The tail of the ‘a’ moves the optical center of the ‘a’ nearer to the center of the glyph. The same goes for making the top arm of the ‘a’ shorter than the bowl.

As John notes above, Kindersley's strictness on optical centers leads him to downgrade adjusting spaces because of proximity. He first designs an upper case wide enough so that no kerning is needed. –It is interesting in this context that Tschichold regarded tight spacing of caps as a flat blunder. If you want to track closer, then Kindersley says it is better to keep the optical centers in the middle of the side bearings, and overlap the blacks of photo or digital characters, rather than re-space to avoid contact.

So take for example the letter ‘r’. Does it threaten to contact letters to the right? Well then shorten the arm and thicken the end of it. Done together, these will give you better asymmetrical balance—bringing the optical center more to the middle of the letter—and it will space better. Pretty powerful concepts, I think.

A second feature of ‘designing to the centre’ involves even color. Kindersley comments on the demand in photo type for a limited number of advance widths:

“The space allotted to any character must ideally be an expression of that character. Therefore it would also be true that, if the space is predetermined, the letter must be an expression of the space. In other words, the intensity of two characters that are to occupy similar spaces must be equal for the retina. This does not mean, of course, that the designs must be similar, but that their weight expressed in moment terms around a centre must be equal. If therefore it is desirable to produce an alphabet with only six widths, it is equally desirable that the letters have only six different weights, if spacing and color are considered important.”

I think it also follows from his thinking that the different width characters will also have proportionally different weights, so the whole alphabet will have even color, even while it will have characters that are quite divergent from one another and often quite asymmetrical in shape.
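
In moment terms I take that to mean something like a single number per letter--my own paraphrase in code, not a formula he gives: the black summed around the optical centre, weighted by distance. Letters assigned to the same width class would then be drawn until these numbers roughly agree.

```python
import numpy as np

def moment_weight(glyph, optical_centre, p=2):
    """A letter's 'weight expressed in moment terms around a centre': black
    area per column times (distance from the optical centre) ** p, summed."""
    mass = glyph.sum(axis=0).astype(float)
    x = np.arange(len(mass), dtype=float)
    return (mass * np.abs(x - optical_centre) ** p).sum()
```

So a six-width alphabet would, on this reading, also aim at six target values of this weight, not just six advance widths.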

I first was drawn to look up Kindersley’s work when I saw the ‘C’ diagram above--I'm not sure where--and was intrigued that he had a different approach to spacing than the traditional type one, and especially that he didn’t focus on avoiding contact and uniformity of inter-letter spacing. While these are valid concerns, I think a lot may still be learned and creatively extended from Kindersley’s approach.

dezcom

Nick is right. It is not a linear process. Some glyphs are apt to space quickly while others need numerous revisions to either outline, sidebearings, or kerning (or all). The ones with holes and single protrusions like f, r, T, L, P, 7, etc., are more likely to need multiple surgeries than H or n. I find that the first drawing of the glyph outlines is the quickest part of the process. The endless tinkering afterwards eats up the hours.

ChrisL

William Berkson

>multiple surgeries

Yeah, the reason I'm glad I took a day and a half to figure out Kindersley is that I think he helps us think about how to do those surgeries well.

dezcom

So I guess you are ready for your scrubs now William? :-)

ChrisL

hrant

> A lot of back-and-forthing.

Yes, iteration. That's much more accurate than your first claim
in terms of what really happens, but not really the same thing
(and still not as good) as thinking about things all at once.

hhp

Nick Shinn

but not really the same thing (and still not as good) as thinking about things all at once.

One part of the design process, the overall conception of how positive and negative space function in a particular typeface, can't be "better" than another part, the practical business of refining and real-izing how that conception works, letter by letter. They involve different kinds of thought, interrelated, indispensable.

enne_son

Bill, Thanks for following this through. I'm intrigued by several things, but wonder above all if Kindersley brings this back around to the comment in "Space Craft" I quoted above: “The advantages of optical spacing, with as little compromise as possible, are quite considerable. Letters form more cohesively into words and thus increase our pattern recognition.” (page 314)

William Berkson

Yes, Peter, I do think that his analysis of evenness of color and good spacing has a lot to do with success in 'letters forming cohesively into words'. How to test its truth is another story :)

But in particular I think his analysis may help us understand the advantages in readability of traditional stressed, seriffed type over sans serif. His theory gives an important function to serifs and varied weighting in spacing, which I haven't seen elsewhere.

hrant

> One part of the design process ...

It's not a different part, it's a different way of designing a
font. And one way can certainly be better (at least in some
aspect, for example readability, or style) than another.

hhp
