We should treat legibility as a historical process rather than a generic quality. On this view, something illegible is simply something you have not yet become acquainted with. Some things are easy to get used to; others take more time and effort.
Legibility is the ease with which a reader can decode symbols. In addition to written language, it can also refer to behaviour[1] or architecture,[2] for example. From the perspective of communication research, it can be described as a measure of the permeability of a communication channel. A large number of known factors can affect legibility.
In everyday language, legibility is commonly used as a synonym for readability. In graphic design, however, legibility is often distinguished from readability. Readability is the ease with which a reader can follow and understand words, sentences and paragraphs. While legibility usually refers to the visual clarity of individual symbols, readability is more about their arrangement or even the choice of words.[3][4] Legibility is a component of readability.[citation needed]
Not all writing benefits from optimizing for legibility. Texts that are meant to be eye-catching, or whose appearance is meant to carry certain connotations, may deliberately deviate from easy legibility for these purposes. Typefaces designed with this in mind are called display typefaces.[6]
It has been shown that threshold legibility performance correlates inversely with the age of the readers. Older readers are disproportionately affected by other adverse factors in visual design, such as small text size.[9]
Despite contrary opinions, serifs have little observable influence on reading speed. At low resolution, the additional spacing between letters required for the serifs seems to improve legibility, whereas otherwise they have a slightly adverse effect.[15] For special groups, the picture may look different: the dyslexics community[clarification needed] seems to be convinced that serifs are unnecessary visual clutter, which makes the text less accessible and makes the letter shapes deviate more from the simpler forms known from school.
Eye tracker studies support the theory that increasing complexity of shapes reduces legibility.[16] The addition of vowel marks in Arabic script has contradictory effects, but appears to be detrimental to legibility overall.[16] Freestanding letters are easier to recognize than ones with adjacent elements; this is known as the crowding effect.[5]
Common measures to improve legibility at the lowest resolutions include the use of wide apertures/large open counters, large x-height, low stroke variability, big features, etc., while some improvements like ink traps[clarification needed] are specific to particular presentation media.[17] The positive effect of more open apertures could be experimentally confirmed for the opening of the lowercase e, but not for the larger opening of the lowercase c. Narrow letter shapes such as f, j, l and i usually benefit from larger tails that widen their shape, except for the lowercase f.[10]
While a large x-height is generally considered helpful for legibility at low resolutions, the dyslexics community holds the theory that short ascenders/descenders tend to cause confusion. Dyslexics and learners also seem to prefer less regularity between individual letterforms, especially further differentiating features in glyphs that are often just mirrored versions of other letters, as in the group b, d, p and q, since the human brain seems to have evolved to recognize (symmetrical) three-dimensional objects regardless of their orientation in space.[18][19] This is the basis for some of the most devout endorsements of the otherwise much hated Comic Sans typeface.[20] Other important aspects seem to be the familiarity of the glyph shapes, the absence of serifs and looser spacing.[21][22] While textbook versions perform better with inexperienced readers/learners, most experienced readers seem to be more comfortable with the traditional two-story print forms for a and g.[23][24]
Subjects and methods: Handwritten prescriptions were received from clinical units of the Medicine Outpatient Department (MOPD), Primary Care Clinic (PCC) and Surgery Outpatient Department (SOPD), whereas electronic prescriptions were collected from the pediatric ward. Handwritten prescriptions were assessed for completeness using a checklist designed according to the hospital's prescription format, and evaluated for legibility by two pharmacists. The comparison between handwritten and electronic prescription errors was evaluated using a validated checklist adopted from previous studies.
Conclusions: This study revealed a high incidence of prescribing errors in handwritten prescriptions. The use of an e-prescription system showed a significant decline in the incidence of errors. The legibility of handwritten prescriptions was relatively good, whereas the level of completeness was very low.
Definitely provide benefits to users. Why else have a website? But you also need to reduce the barriers to using it (i.e., lower the cost). For online copy, the barriers to use fall into 3 categories: legibility, readability, and comprehension, each of which is defined and discussed below.
The main way to test legibility is to measure reading speed in words per minute for a sample of users, as they read some simple text. Because people read at drastically varying speeds, this is best done as a within-subjects test, where the same test participants test multiple systems. If users are, on average, say, 20% slower when reading from your design than when reading from a reference design, then you know that your site has poor legibility. (See our course on Measuring User Experience for more on within-subjects vs. between-subjects study designs.)
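The within-subjects comparison described above can be sketched in a few lines. This is a hypothetical illustration, not part of the original text: the function name, the sample data, and the exact form of the slowdown metric are my own assumptions; only the paired-measurement design and the 20% figure come from the passage.

```python
def relative_slowdown(test_wpm, reference_wpm):
    """Mean per-participant slowdown of the test design vs. the reference.

    Each participant reads comparable text in both designs (within-subjects),
    so speeds are compared pairwise. Returns a fraction: 0.20 means users
    read 20% slower, on average, with the test design.
    """
    slowdowns = [1 - t / r for t, r in zip(test_wpm, reference_wpm)]
    return sum(slowdowns) / len(slowdowns)

# Invented paired measurements (words per minute) for five participants.
reference_wpm = [250, 180, 300, 220, 270]
test_wpm = [200, 150, 235, 170, 215]

slowdown = relative_slowdown(test_wpm, reference_wpm)
if slowdown >= 0.20:
    print(f"Poor legibility: users read {slowdown:.0%} slower on average")
else:
    print(f"Slowdown of {slowdown:.0%} is within tolerance")
```

Pairing each participant's two readings, rather than comparing group averages, is what controls for the drastic between-person variation in reading speed that the text mentions.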
Cruxiness is a key part of epistemic legibility that perhaps could use some extra emphasis here. Don't just be clear about the evidence on which your beliefs are based - prioritize and weigh that evidence. Being explicit about some of the justifications for your beliefs is helpful. Far better, though, to be explicit about which of these justifications, if falsified, you would consider most damaging to your argument.
Today, dynomight raised an interesting nuance in Observations about writing and commenting on the internet. It seems that optimizing purely for epistemic legibility may cause people to fail to listen altogether:
I also appreciate that you made the costs of legibility explicit. Duncan's essay Ruling Out Everything Else touches on a related theme, but I only realized some time after reading that essay that trying to be so precise in one's speech can result in written discussions getting much longer. Which imposes yet other costs.
You can spot the existence of a hidden meta level by noticing self-contradictory arguments, lack of legibility, loose and context-dependent or even non-existent definitions of crucially important terms... At which point it doesn't pay to go through the object level anymore, and instead focus on the meta.
Rereading your comment, I think you're saying that legibility will arise by itself well enough so long as someone is on Simulacrum level 1, caring only about the truth, and if their writing is not legible, they probably have an agenda and you'd better focus on finding out what that is, or just ignore what they said.
Something you allude to, but don't make very explicit, is that legibility occurs inside the reader. In this way, it is much like beauty. What we term beautiful things are things that either we personally find beautiful, or things that most viewers will believe are beautiful (out of what subset of observers?). That said, I expect the tips in "How to be Legible" will tend to improve legibility for nearly all readers, so I definitely think it's coherent to talk about a work as legible in and of itself.
To be clear, I do believe that the work itself is a significant contributor to the epistemic legibility. My point was merely that the work can only ever be legible with respect to a particular reader or audience (or even expected audience). In this way, I believe it is similar to inferential distance. An idea is not inferentially distant on its own, it is only inferentially distant from someone's perspective[1]. Where inferential distance deals with the difficulty of understanding a work, epistemic legibility deals with the difficulty of verifying a work. Perhaps an example in which the inferential distance is low but epistemic legibility is low or high depending on the audience will be illuminating:
Rajiv writes a well-researched book about encryption algorithms, in which he cites various journal articles published on the subject. Carol is a bright computer scientist with poor judgment who phished people for their bank credentials and got caught. As a result, she is imprisoned and, due to the nature of her crime, forbidden from accessing the internet while serving her time. She receives a copy of Rajiv's book through the prison library system and enjoys it quite a bit. She understands all of the ideas, but wants to check out some of the journal articles, both to see whether Rajiv relayed the ideas correctly and to further her understanding. The prison librarian checks the price of getting access to these articles, her eyes briefly widen when Carol informs her that she intends to get about a dozen of them, and then the librarian tells her there is no way she is getting any of these articles. Carol is completely unable to verify any of the claims made in Rajiv's book. A year later, Carol is released from prison, a couple of quick searches reveal the journal articles that Rajiv references, and she is able to verify Rajiv's claims to her heart's content. In this example, the inferential distance is low throughout, but the epistemic legibility of Rajiv's book depends in part on whether or not Carol is imprisoned (unimprisoned Carol could be replaced with some other bright unimprisoned computer scientist).