To The Bone
Originally published in Lettres de Toulouse, Expérimentations pédagogiques dans le dessin de lettres, Paris, B42, 2018.
Between the late 1960s and the late 1970s, several publications would upset deep-rooted conceptions about the design of typefaces. They were the work of people with very different backgrounds, who nevertheless shared one approach: they grasped seemingly static forms (the engraved and printed letter) from a dynamic viewpoint, revealing the scriptural gestures at work within them.
In 1968, Father Edward Catich published The Origin of the Serif, a work devoted to Roman epigraphy in antiquity, in which he came out against the theories then predominant in that discipline. He asserted that the action of the stone engraver’s chisel has nothing to do with the appearance of the characteristic endings of the vertical stems (the serifs). The engraver merely emphasizes an element formed at a stage prior to the engraving, when the inscription is drawn on the stone with a flat brush. Catich, an accomplished epigraphist himself, based his observations on a personal experience that was singular, to say the least. Before becoming a priest and then a teacher, he had worked as a teenager as a sign painter in Chicago. This craft calls for great mastery of the flat brush, the tool favoured by sign painters. With enough practice and skill, a sign painter can execute remarkably regular letters in just a few strokes, varying the angle of the tool as required. The extremely supple brush lends itself to all manner of inscription: unlike the quill pen, it can be used vertically, on hard surfaces (such as stone), and at large or monumental sizes.
What Catich demonstrated was that it is possible to execute majestic Roman capitals, like those on the famous Trajan’s Column, with a series of strokes made with a flat brush, the “spatula”. According to him, it is the nature of this tool, flat at its tip, together with the angle at which it is held, that gives letters their contrast between thick and thin, their slightly diagonal axis, and the appearance of the serif, all elements absent from archaic Greek and Latin inscriptions. Beyond the debate among specialists over whether or not this tool was actually used, Catich’s book introduced another way of understanding inscribed forms: no longer through the outlines delimited by engraving, but through the underlying stroke. This trace is the outcome of a gesture combined with a scriptural tool, an implicit writing gesture that precedes the engraving of the inscription. As such, this new and somewhat iconoclastic approach challenged the boundaries between palaeography and epigraphy, which deal respectively with the history of writing (mainly on vellum or paper) and of inscriptions (on hard surfaces, such as stone).
A few years later, the Dutch calligrapher and type designer Gerrit Noordzij, a teacher at the Royal Academy of Art in The Hague (KABK), in his turn set forth bold theories about the form of typographic characters. His writings, published in various forms from 1973 on (Dossier A-Z 73, LetterLetter, The Stroke of the Pen), question existing historical concepts and classifications of typefaces and propose a new viewpoint. First and foremost, he asserts that “the white of the word” is the primary element, not “the black of the letter”: it is this inner space which creates the rhythm of writing, and the “black” of letters cannot be dissociated from this counter-form. Next, the outlines of letters are produced by an underlying stroke combined with a writing tool whose shape determines the look of the typeface: depending on whether the pen’s nib is flat or pointed, the quality of the contrast differs. He singles out three ways of distributing weight: by translation (a flat-nibbed tool whose angle remains fixed), by expansion (a flexible pointed pen, the weight being applied by pressure on the vertical stems), and by rotation (a flat-nibbed tool whose angle varies along the stroke). Lastly, he distinguishes two types of construction, continuous (“cursive”) and discontinuous (“interrupted”), and two types of slope, vertical and slanted.
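Noordzij’s three modes can be made concrete with a minimal sketch: if a stroke is reduced, at each point of its skeleton, to its direction of travel, each mode becomes a different way of deriving thickness from that direction. The Python below is only a caricature of the model, with invented function names and parameter values, not Noordzij’s own formalization.

```python
import math

def stroke_thickness(direction, mode, pen_width=10.0,
                     pen_angle=math.radians(30), pressure=6.0):
    """Apparent thickness of a stroke travelling in `direction` (radians),
    under three caricatured contrast models (Noordzij's terminology).

    translation: flat nib of width `pen_width` held at a fixed `pen_angle`.
    rotation:    flat nib whose angle turns with the path, here kept
                 perpendicular to it, so the weight stays constant.
    expansion:   pointed flexible nib; weight comes from pressure applied
                 on the (near-vertical) downstrokes.
    """
    if mode == "translation":
        return pen_width * abs(math.sin(direction - pen_angle))
    if mode == "rotation":
        return pen_width
    if mode == "expansion":
        hairline = 1.0
        return hairline + pressure * abs(math.sin(direction))
    raise ValueError(mode)

# Sample the thickness around a circular skeleton (a schematic 'o').
for mode in ("translation", "expansion", "rotation"):
    samples = [stroke_thickness(math.radians(a), mode) for a in range(0, 360, 45)]
    print(mode, [round(t, 1) for t in samples])
```

Sampled around a circular skeleton, translation yields a diagonal axis of contrast, expansion concentrates the weight on the vertical portions, and rotation keeps it constant: three distributions of black produced by the same underlying stroke.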
Like Catich, Noordzij comes up with a new set of keys for describing typographic forms: an objective description based on observation of structure rather than on historical origins. There is no more talk of roman and italic, classical and modern, Garalde and Didone, but of types and quantities of contrast, of axes, angles, and so on. By arranging these parameters along axes of variation, Noordzij produces a cubic representation which tends to encompass all possible forms. His theories, which form the basis of his teaching at the KABK, have had a considerable influence on several generations of type designers trained at this school. In a way, this programmatic vision foreshadows the advent of computer-assisted type design and the general spread of interpolation tools, of concepts such as axes and “design space”, meaning the space of drawing and variation between matrices at the extremes. A marked tropism links Erik van Blokland and Just van Rossum, students of Noordzij and founders of LettError, to Guido van Rossum, inventor of the Python language. Frederik Berlaen, for his part, was a student of Erik van Blokland at Type & Media. Type design, for them, is closely bound up with the design of computer tools.
Noordzij’s works are at once iconoclastic historically, visionary in their objectivization of the formal features of typefaces (based on overall structural characteristics rather than on details of drawing), and essentially conservative formally: in them, typographic forms remain determined by an underlying calligraphic logic, which acts as nothing less than a drawing matrix.
At the same time, in the late 1970s at Stanford University in the United States, the mathematician Donald Knuth was at work on the TeX and Metafont software. The story is famous: dissatisfied with the loss of typographic quality in his scientific publications after the shift from Monotype composition to photocomposition, Knuth put his research, and the publication of The Art of Computer Programming, on hold and devoted himself to the problem for six months. It would sorely exercise him, and in fact kept him busy for several years. He had set himself the challenge, on the one hand, of reproducing by mathematical and computational means the design quality of Monotype Modern 8A, of which he was particularly fond (this would become Computer Modern, the most highly developed metafont), and, on the other, of creating typesetting software capable of composing complex formulae, TeX. Here again, Knuth’s wager was to describe not the exterior outlines of typefaces but their interior skeleton, to which different parameters are applied, acting like virtual pens. The idea was seductive, but its application complex. Many aspects of a typeface can easily be generalized (vertical proportions such as the values of ascenders and descenders, the x-height, the contrast between thick and thin, the axis along which it is distributed, the tension of the curves), but numerous details of the drawing do not follow the rule. Knuth thus needed no fewer than 62 different parameters for the Computer Modern metafont to come close to the look of Monotype Modern 8A. But this metafont may then take on all manner of forms as one or another of its parameters is altered. Without encompassing all potential forms in a single font, metafonts usher in a new multi-faceted or shape-shifting typography. “The form of the letters is no longer engraved, nor drawn, but described. [Knuth] in this way brings to typography an abstract system for designing what constitutes a typeface […], going beyond the question of the output medium, historical typographic classifications, and the fixed nature of a font. The code here becomes a method, a model of thinking for design”.
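The principle can be caricatured in a few lines of Python (this is not Metafont syntax; the parameter names and values are invented): a skeleton is described once, and a handful of parameters, a faint echo of Computer Modern’s 62, are applied to it like a virtual pen.

```python
from dataclasses import dataclass

# A toy "meta-letter": the skeleton is described once, and parameters are
# applied to it as a virtual pen. Illustrative only; not Metafont.
@dataclass
class Params:
    ascender: float = 700.0   # height of the stem, in font units
    stem: float = 90.0        # width of the virtual pen

def letter_l(p: Params):
    """Outline of a straight lowercase 'l': a vertical skeleton from the
    baseline to the ascender, expanded left and right by half the pen width."""
    half = p.stem / 2
    (x0, y0), (x1, y1) = (0.0, 0.0), (0.0, p.ascender)   # the skeleton
    return [(x0 - half, y0), (x1 - half, y1), (x1 + half, y1), (x0 + half, y0)]

# The same skeleton, two different virtual pens: a light and a bold 'l'.
print(letter_l(Params(stem=60)))
print(letter_l(Params(stem=140)))
```

The outline is only a by-product of the skeleton and the parameters, which is precisely what makes the drawing reusable, and precisely why the details that do not follow the rule demand so many additional parameters.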
Catich, Noordzij and Knuth have in common that they approach the design of letters not through their exterior outlines but through their inner structure; the tool (physical or virtual) is thereby granted a decisive influence, acting as an interface between the skeleton and the final silhouette. The look of the typefaces is the outcome of these two factors combined, a dialectic which permits a novel plasticity. At first glance, reducing a letter to its skeleton might seem simplistic. In the systems described above, the operation on the contrary allows a considerable expansion of the potential forms through a simple variation of the tool’s parameters (thickness, contrast, angle…).
If typographic forms undeniably issue from handwritten forms, if only historically, is it nevertheless possible to reproduce all historical typographic forms on the basis of this inner logic? Nothing is less certain. Over time, the form of typefaces shed its handwritten sources, and it did so through the gesture of the punchcutter, whose work is more akin to sculpture, the counter-shapes being carved away from the tip of the metal rod. Unlike the inscriptions studied by Catich, where a painted gesture underlies the carving, nothing here attests to an underlying handwritten trace: the influence of the pen is just a memory, an essentially cultural and contingent leftover.
The programmes developed by Knuth and Noordzij cannot encompass all typographic forms, any more than CMYK processes can reconstitute the whole of the colour spectrum. Flexible as they may appear, they remain idealized systems, capable of simulating a large number of forms but falling short precisely because of what constitutes their strength, their systematic logic. Applying a tool-based parameter to the whole skeleton yields a thoroughly coherent result, but one that cannot, for example, take arbitrary variations into account (such as those of Granjon’s italics or of Dutch baroque typefaces), variations which are the nub of numerous designs.
For centuries the outlines of typefaces were set from the outside. Bézier curves, which are used to define the outlines of PostScript fonts, perpetuate this approach to form: if digital typography has considerably simplified the design process, it has not fundamentally changed the logic by which character shapes are described. We still have the description of the black shape, frozen in its counter-form. Expressing surprise at the immobility of typographic signs within a digital paradigm where textual matter is more mobile than ever, Nick Sherman describes digital fonts as “ice cubes” floating on the surface of liquid layouts. The immobility is paradoxical, given that the design and development of digital fonts frequently rely on dynamic methods such as the interpolation of shapes. In that text, published in January 2015, Sherman makes a case for fonts that can adapt to the “responsive” interfaces of digital media.
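Concretely, these outlines are successions of cubic Bézier segments, each fixed by four control points; the short sketch below, with invented coordinates, shows how one such segment is evaluated, a description of the black shape from the outside, point by point.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Point at parameter t (0..1) on a cubic Bézier segment, the primitive
    from which PostScript (and CFF) outlines are assembled."""
    u = 1.0 - t
    return tuple(u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# One segment of an invented bowl: two on-curve ends, two off-curve handles.
segment = ((0, 250), (0, 388), (112, 500), (250, 500))
print([cubic_bezier(*segment, t / 4) for t in range(5)])
```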
On 14 September 2016, at the Association Typographique Internationale conference held in Warsaw, version 1.8 of the OpenType format was presented: jointly developed by the major players of the digital industry (Apple, Adobe, Google, Microsoft, Monotype, W3C, etc.), it introduces the possibility of designing and deploying variable fonts. A single file may contain several axes of variation, which radiate outwards from a central matrix. The technology is akin to the Multiple Masters developed by Adobe or, even more so, to Apple’s TrueType GX, both of which met with mixed success in the mid-1990s. The present moment seems more favourable to such a change, owing in particular to the general spread of on-screen text and webfonts: in this respect, loading a single font file containing all of a typeface’s variants is a major stake for web players, which contributes to the general consensus on the subject.
This variable-font technology is without doubt the most significant development in the typographic industry in the past twenty years. It involves far-reaching changes in the development and commercialization of typefaces, but the upheaval will be greatest, above all, for users. The interpolation of shapes has in fact long been at the heart of the creation of digital fonts, but hitherto the results were “frozen” before distribution. When a designer conceives a family of typefaces, he organizes the envisaged styles along one or more axes (width, weight, optical size, etc.). As in the Multiple Masters technology, he designs the extremes of each of these axes (for example, the Light and the Black) and then uses the computer’s capacity for calculation to interpolate the intermediate designs (the Regular, between the Light and the Black). The philosophy behind variable fonts is slightly different: rather than starting from the extremes (the corners of a square or a cube, depending on the number of axes), the drawing starts from the centre. From this central drawing, deltas are developed, axes of variation that may head in any direction (unlike the Multiple Masters, where the design is developed along the x and y axes).
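The difference between the two approaches can be reduced to a few lines of Python applied to a single point of a glyph outline (the coordinates are invented; “wght” and “wdth” stand in for variation axes): interpolation computes the intermediate from two extremes, while deltas push a default drawing along each axis.

```python
def interpolate(light_pt, black_pt, t):
    """Multiple-Masters style: the designer draws the extremes, and the
    intermediate weight is computed between them (t=0 -> Light, t=1 -> Black)."""
    return tuple(a + t * (b - a) for a, b in zip(light_pt, black_pt))

def apply_deltas(default_pt, deltas, coords):
    """Variable-fonts style: start from the central (default) drawing and add
    scaled deltas, one per axis, each free to push in any direction."""
    x, y = default_pt
    for axis, (dx, dy) in deltas.items():
        s = coords.get(axis, 0.0)          # normalized position on that axis
        x, y = x + s * dx, y + s * dy
    return (x, y)

# The Regular, computed halfway between a Light and a Black master:
print(interpolate((100, 0), (160, 0), 0.5))                      # (130.0, 0.0)

# The same point derived from a default master plus two axis deltas:
print(apply_deltas((130, 0), {"wght": (30, 0), "wdth": (15, 0)},
                   {"wght": 1.0, "wdth": 0.4}))                   # (166.0, 0.0)
```

In both cases the computation is trivial; what changes is where the designer’s drawing sits, at the corners of the design space or at its centre.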
Whether variable, flexible, dynamic or liquid, the time seems ripe for typography to adopt outlines that are more mobile than ever. Some of the authorship of this way of creating fonts can be credited to Catich, Noordzij and Knuth, who all managed to read a potential dynamism into the fixed outlines of typefaces by separating their physical envelope from their skeleton. Skeletons which are forever on the move.
-
David Vallance, “Décrire les modèles”, in .txt2 (Esad Grenoble-Valence & Editions B42, 2015), p. 48.
-
Nick Sherman, “Variable Fonts for Responsive Design”, in A List Apart, 23 January 2015, http://alistapart.com/blog/post/variable-fonts-for-responsive-design.