The Controversial Comic Sans

I figured I would pass along Emily Steel's Wall Street Journal article: Typeface Inspired by Comic Books Has Become a Font of Ill Will.

Comments (2)

METATYPE1 and meta-fonts

I've been spending time lately learning more about working with METATYPE1, mostly for my own projects, but with the eventual hope of writing some tutorials.  While working on one of my running examples, I was encountering some difficulty expressing what I wanted in a reasonably declarative fashion. So I decided to see how it was done in Latin Modern.

I was dismayed to learn that Latin Modern is not a meta-font like Computer Modern. Instead, the Type 1 versions of Computer Modern (which were developed by either Blue Sky or Y&Y) were decompiled into MetaPost code as raw path outlines. So at that point all of the useful abstractions in Knuth's original code and specifications were lost.

The only other major typeface developed in METATYPE1 that I know about, Antykwa Toruńska, has no source available, and from the description I strongly suspect that it was developed by creating raw paths to match the scanned specimens. This got me thinking about whether any meta-fonts have been developed in METATYPE1 at all, or even whether Computer Modern might be the only full meta-font family in existence. I just skimmed through the METAFONT sources included in TeX Live, but didn't see anything particularly promising yet.

In any event, going back to the original issue, I am starting to think that the limitations of METATYPE1 may not be worth the ability to directly generate Type 1 fonts. It may well be that working in METAFONT and using something like mftrace to generate outline fonts from high-resolution bitmaps produces results of sufficient quality. I'm hoping to run some tests comparing the two approaches this weekend.

(It is worth noting that the comment about METATYPE1 on the mftrace page is slightly incorrect, or at least out of date. METATYPE1 can handle overlaps; there are just complicated restrictions on how overlapping may occur. Finding clean ways to work around these restrictions is why I became interested in looking at the Latin Modern code in the first place.)

Comments (2)

Quality fonts

The other day on Digg I saw a link to 30 high-quality free fonts for professional designs. Many of the samples seem decent, but it got me wondering just what constitutes a "high-quality font".

I suppose when I think of a quality font, I tend to expect a consistent design along with some of the following:

  • composing characters or glyphs for most diacritical marks, ideally Greek and Cyrillic glyphs as well
  • proper kerning
  • appropriate ligatures
  • old-style numbers
  • optical sizes

Given these criteria, the best high-quality free fonts that come to mind are probably the TeX Gyre fonts and the Latin Modern family. I would be curious to hear other recommendations.

Comments (2)

The Fifth Element

The other day, a colleague of mine pointed out that Aarhus University recently rolled out a new, somewhat controversial, visual identity that includes a novel geometric alphabet the university calls its "fifth element".

Comments

Fonts in LaTeX, Errata

About seven months ago, Vasile Gaburici alerted me to the fact that otftotfm has had experimental support for OpenType fonts with TrueType outlines for quite some time. Furthermore, it will use kerning tables that ttf2tfm ignores. I am now finally getting around to writing a post to highlight this fact.

It seems likely that otftotfm will also work on pre-OpenType TrueType fonts, because the OpenType font format is essentially the same as the TrueType format with potentially some additional tables. At least, when I did a cursory search on my computer I could not find any TrueType fonts that proved to be incompatible with otftotfm.

Therefore, if you want to use a TrueType font with pdfLaTeX, you should ignore the instructions I give in Fonts in LaTeX, Part Three: pdfTeX and TrueType and instead follow the instructions I gave for OpenType fonts in Fonts in LaTeX, Part Two: pdfTeX and OpenType. For your convenience, I have also created an updated zip file for the example that uses otftotfm instead of ttf2tfm.

Comments

Resumption

That was a longer hiatus than I had intended, partly because not everything went according to plan.

My original plan was that, upon returning from my vacation, I would spend my remaining time at EPFL writing a technical report explaining everything I knew about the problems with Scala Classic. Instead, on my first day back in the office, Martin came by with a draft of a new formal system, DOT (for Dependent Object Types), that he came up with while he was on vacation. After about four weeks I managed to root out pretty much all of the obvious problems with DOT, but another four weeks was not enough to get the proofs and the metatheory into a state that we were happy with. I am not really sure what will happen with DOT in the long term.

There are a few clever things about DOT that avoid some of the problems I encountered with Scala Classic. For example, Scala Classic only had intersection types, while DOT has both intersection and union types, which solves some problems with computing the members of types. However, with Scala Classic I was trying to model a minimal subset of the Scala language as it exists, without adding any new features that may never be supported by the full language. I have no idea whether we will see union types in the actual Scala implementation anytime soon.

The other thing DOT does is give up on the goal of being a minimal subset of Scala, throwing out quite a few things that are important from a user perspective. For example, there are no methods (only λ-abstractions), no existentials, no built-in inheritance or mix-in composition (though you can encode it by hand), and object fields must be syntactically values.

This last restriction is particularly important because it solves by fiat the problems that arose in Scala Classic from using paths that have an empty type in dependent type projections. However, it also means that you may not be able to directly encode some Scala programs into DOT without an effect or termination analysis. Therefore, while DOT has the potential to be a useful intermediary step, there will still be more work to be done to provide a semantics for a more realistic subset of the Scala language.

I have been in Copenhagen for a little over a month now, but I am not really ready to comment on the research I have been doing here yet.  So in the meantime I'll probably focus more on fonts and typography.

Comments (1)

Out with the cheese, in with the pastry

I have unfortunately been quite busy the past several months, and have not had as much time to write about what I have been doing as I would have liked.  For the most part, I have been splitting my time between teaching and research.  I will hopefully go into more detail about the latter in the near future.

However, I figured I should take some time now, before I leave for my first proper vacation in a year, to announce that I have accepted a postdoc position at ITU in Copenhagen with Carsten Schürmann.  This has been in the works for a while now, but this week I finally was able to make it official.  My plan is to start at ITU at the beginning of March, where I will be working on things relating to the LF family of logical frameworks, Twelf, and Delphin.

There are probably a vanishingly small number of you who care about what this means for Scala Classic, but I hope to write something much more detailed about it when I get back from my vacation. I won't keep you in suspense, though; the short answer is that despite all my efforts, it simply is not possible to prove it sound without making it a very different language. Which is rather unfortunate.

Comments (4)

Algorithmic puzzle

I was assigned the task of fixing a bug in the Scala standard library involving the indexOf method on sequences, which, given a receiver object that is a sequence and another sequence of the correct element type, checks whether the latter is contained within the former and returns the index. The current version does not correctly handle the case where a suffix of the receiver matches a strict prefix of the argument (for example, List(1,2,3).indexOf(List(3,4)) will report a match at index 2). This should be fixed for the upcoming 2.7.2-RC2 release.

As soon as I started rewriting the code, I wondered why the original author hadn't just used an off-the-shelf searching algorithm. However, a quick search reminded me why: algorithms like Knuth-Morris-Pratt and Boyer-Moore construct a table based upon the sequence to search for. Scala sequences may be infinite, so it is not possible to blindly go ahead and construct such a table, because doing so may diverge.

Furthermore, there is no way to test whether a sequence is finite without potentially diverging. So it is not possible to first test whether the argument is finite: if the argument is infinite but the receiver object is finite, indexOf should simply report that there is no match, yet the finiteness test itself would never terminate. Alternatively, testing whether the receiver object is finite would be just as problematic, because even when the receiver is infinite the argument may be finite and could potentially match.
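To make the constraint concrete, here is a minimal sketch (in current Scala; indexOfSlice is just a name I picked for illustration, not the library method) of the straightforward O(nm) search that respects it. It never asks either sequence for its length, so it behaves correctly even when one of the sequences is an infinite Stream, and it only diverges in the cases where divergence is unavoidable (an infinite receiver with no match):

  def indexOfSlice[A](haystack: Seq[A], needle: Seq[A]): Int = {
    // Does `needle` occur at the very start of `h`? Only inspects as many
    // elements as it needs to decide, so it never computes a length.
    def matchesHere(h: Seq[A], n: Seq[A]): Boolean =
      n.isEmpty || (h.nonEmpty && h.head == n.head && matchesHere(h.tail, n.tail))

    // Walk the suffixes of the haystack, testing each starting position.
    def loop(h: Seq[A], i: Int): Int =
      if (matchesHere(h, needle)) i
      else if (h.isEmpty) -1
      else loop(h.tail, i + 1)

    loop(haystack, 0)
  }

With this definition, indexOfSlice(List(1, 2, 3), List(3, 4)) returns -1 rather than reporting a spurious match at index 2, and indexOfSlice(Stream.from(1), List(3, 4)) returns 2 without forcing the infinite receiver beyond the elements it actually compares.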

However, it seems like it should still be possible to do better than O(nm), where n is the length of the receiver and m the length of the argument. For example, if you start out with the sequence 1, 2, 3, 1, … and the pattern 1, 3, 4, …, it seems like it should be possible to exploit the fact that you have already looked ahead and know that there is no point in comparing 2 with 1. Alternatively, it might be possible to lazily build a table from the argument, but I would need to think longer about whether it is always possible, in Knuth-Morris-Pratt for example, to fill in a complete prefix of the table without having processed the entire pattern.

In any event, searching with combinations of keywords like "string", "searching", "lazy", and "infinite" did not really turn anything up. One possible direction might be to look at the "incremental" search algorithms used in text editors. However, I expect that because they are geared toward interactive use, where the pattern is usually quite small, not much thought has been put into optimizing them.

Comments (6)

Literally dependent types

Given that the formalization of Scala Classic has ground to a halt, for reasons I may go into later, I spent part of today hacking on the Scala compiler itself to add support for singleton literals. Currently, Scala allows singleton types for stable identifiers. My modification allows literals to be used in singleton types. I can't promise that it will be in the forthcoming Scala 2.7.2 release, but I would like it to be.

Overall it was far less work than I was expecting.  Scala already internally supports what it calls "constant types", there is just no way to write them in Scala source code presently.  Consequently, most of the effort was in extending the parser.

Given my modifications, it is now possible to write code like the following:

  scala> val x : "foo".type = "foo"
  x: "foo".type = foo

What I was not expecting was that out-of-the-box things like the following would work:

  scala> val x : "foobar".type = "foo" + "bar"
  x: "foobar".type = foobar
  scala> val y : 10.type = 2 * 5
  y: 10.type = 10
  scala> def frob(arg : 10.type) : 6.type = arg - 4
  frob: (10.type)6.type

Unfortunately the excitement soon passes when you realize all the things you can't do with singleton literals (yet). Even if we turn on the experimental dependent method support, you can't write things like

  def add(arg : Int) : (arg + 5).type = arg + 5

because these are exactly what they are called, singleton literals, not full-blown dependent types.

One cute example, based on a use suggested by Sean McDirmid, involves implicits. Some people might write something like the following:

  implicit def stringToColor(arg : String) : java.awt.Color = java.awt.Color.getColor(arg)

However, with singleton literals you can tighten it up by ensuring that for some strings it will never fail:

  implicit def redToColor(arg : "red".type) : java.awt.Color = java.awt.Color.RED
  implicit def blueToColor(arg : "blue".type) : java.awt.Color = java.awt.Color.BLUE

Happily, the type inferencer already chooses the most specific implicit conversions.
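For example, assuming both sets of conversions are in scope (paint below is just a stand-in method I made up for illustration), a call with the literal "red" should be resolved through redToColor, which can never fail, while other strings would still fall back to the general stringToColor conversion:

  // Hypothetical client code; `paint` is only here to drive the conversions.
  def paint(c : java.awt.Color) : Unit = println(c)

  paint("red")    // resolved via redToColor, so the conversion cannot fail
  paint("teal")   // falls back to stringToColor, which offers no such guarantee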

In any event, they will hopefully become more useful as the type system continues to grow. I am also sure someone will probably come up with a clever use for them that hasn't occurred to me yet. If so, let me know.

Comments (5)

Fonts in LaTeX, Part Three: pdfTeX and TrueType

Update: The information in this post is out of date: otftotfm now has support for TrueType outlines. See my errata post for more information.

In the previous part of this tutorial, I explained how to put together the minimal infrastructure needed to use an OpenType font with pdfLaTeX. There I used the tool otftotfm to generate the font metrics TeX needs to lay out text. However, otftotfm only supports OpenType fonts that use PostScript font outlines, as opposed to TrueType outlines. So in this part of the tutorial I will explain how to put together the necessary infrastructure for TrueType fonts. In preparation for that, we will first make a few changes to what we had done earlier.

For those that would find it useful, I've put together a  zip file containing all the files from the tutorials (except the fonts, which I don't want to deal with distributing).

Firstly, we are going to move the uses of \DeclareUnicodeCharacter out of UPagella.fd and into uenc.def:

  \ProvidesFile{uenc.def}
  % We are declaring an encoding named "U"
  \DeclareFontEncoding{U}{}{}

  % Technically these are not "allowed" in .def files,
  % but this is really the logical place to put the
  % declarations.

  % τ (0x03C4) maps to 0xF8 in the encoding
  \DeclareUnicodeCharacter{03C4}{\char"F8}
  % ε (0x03B5) maps to 0xF9 in the encoding
  \DeclareUnicodeCharacter{03B5}{\char"F9}
  % χ (0x03C7) maps to 0xFA in the encoding
  \DeclareUnicodeCharacter{03C7}{\char"FA}

As I mention in the comments, the documentation on font encoding definition files does not list \DeclareUnicodeCharacter as one of the allowed declarations in such a file, but it works, and it seems like a more logical place for this configuration than the font definition file.

Now that we have removed the uses of \DeclareUnicodeCharacter from UPagella.fd, it looks like:

  \ProvidesFile{UPagella.fd}

  % Declaring a font family called "Pagella" for the encoding "U"
  \DeclareFontFamily{U}{Pagella}{}

  % Declare that font family "Pagella", for encoding "U", has a shape
  % with weight medium (m) and normal (n) slant (in other words, upright)
  \DeclareFontShape{U}{Pagella}{m}{n}{
  % For all sizes...
  <->
  % ... use the font named
  TeXGyrePagella-Regular--custom--base
  }{}

I am going to use Deja Vu Sans as the example TrueType font. Fortunately, if you followed everything from the second part of the tutorial, there is not much that needs to be done.

First, we need to generate metrics for Deja Vu Sans. As before, if you are using TeX Live, you'll have the necessary program:

% ttf2tfm DejaVuSans.ttf -q -T custom
ttf2tfm: WARNING: Cannot find character `compwordmark'
         specified in input encoding.
...
...
ttf2tfm: WARNING: Cannot find character `zdotaccent'
         specified in input encoding.
DejaVuSans   DejaVuSans.ttf Encoding=custom.enc

The program ttf2tfm is somewhat unusual in that it takes the filename argument first and then the options. So we have passed it the TrueType font we want to generate metrics for, DejaVuSans.ttf, the option -q to tell it not to print quite so much information, and the option -T custom, which tells it to use the encoding defined in the file custom.enc that we created in the previous part.

Unlike otftotfm, ttf2tfm does not generate an entry that we can use in our map file, custom.map, so we need to write one ourselves. Start with the map file that otftotfm generated for TeX Gyre Pagella, and add the line:

DejaVuSans <custom.enc <DejaVuSans.ttf

This says to map the TeX font name DejaVuSans to the file DejaVuSans.ttf using the encoding custom.enc. To learn more about the format of map files, there is a section on them in the pdfTeX manual.

Now we just need to create a font definition file for Deja Vu Sans. However, it is essentially the same as the one we created for TeX Gyre Pagella:

  \ProvidesFile{UDejaVuSans.fd}

  % Declaring a font family called "DejaVuSans" for the encoding "U"
  \DeclareFontFamily{U}{DejaVuSans}{}

  % Declare that font family "DejaVuSans", for encoding "U", has a shape
  % with weight medium (m) and normal (n) slant (in other words, upright)
  \DeclareFontShape{U}{DejaVuSans}{m}{n}{
  % For all sizes...
  <->
  % ... use the font named
  DejaVuSans
  }{}

We have just replaced all occurrences of Pagella with DejaVuSans.

Finally, we just need to update our example document to use Deja Vu Sans:

  \documentclass{article}
  \usepackage[utf8]{inputenc}
  \usepackage[U]{fontenc}
  \pdfmapfile{+custom.map}
  \renewcommand{\rmdefault}{Pagella}
  \renewcommand{\sfdefault}{DejaVuSans}

  \begin{document}
  Testing pdfLaTeX!

  Greek: τεχ.

  \begin{sffamily}
  Testing pdfLaTeX!

  Greek: τεχ.
  \end{sffamily}
  \end{document}

Here we have used \renewcommand to set the default sans serif font, \sfdefault, to be DejaVuSans. In the body of the document, we've copied the text and surrounded it with the sffamily environment to have it typeset in sans serif.

Now we have everything we need to run pdflatex:

% pdflatex test-pdflatex.tex
This is pdfTeXk, Version 3.141592-1.40.3 (Web2C 7.5.6)
 %&-line parsing enabled.
...
...
(./test-pdflatex.aux) (./upagella.fd) (./udejavusans.fd) [1]
(./test-pdflatex.aux) ){custom.enc}{a_qnnnfc.enc}<./TeXGyrePage
lla-Regular.pfb>
Output written on test-pdflatex.pdf (1 page, 34857 bytes).
Transcript written on test-pdflatex.log.

And we have the desired output:

[Screenshot: Testing pdfLaTeX with both OpenType and TrueType fonts]

And that's everything you need to get started with TrueType fonts and pdfLaTeX. Again, if you encounter any problems or notice any omissions, let me know. I'll do some investigation, and there will possibly be a fourth part on using fontinst.

Comments (3)
