Earcon Meaning

Germain Aguilera

Aug 4, 2024, 7:03:13 PM
to loyfilroze
Earcons are structured sounds: brief audio messages that represent a specific event or convey other information and feedback to the user. However, earcons differ from auditory icons in that earcons are generally synthesized tones or sound patterns and have no direct relationship to the event; they are purely conventional representations of it.

Earcons and auditory icons allow for quick information transmission and give us tangible confirmation of the action performed. In a world saturated with visual stimuli, earcons make it possible to supplement our visual sense with the aural, or even replace it where visual confirmation is not possible.


An earcon is a brief, distinctive sound that represents a specific event or conveys other information. Earcons are a common feature of computer operating systems and applications, ranging from a simple beep to indicate an error, to the customizable sound schemes of modern operating systems that indicate startup, shutdown, and other events.[1]


The name is a pun on the more familiar term icon in computer interfaces. Icon sounds like "eye-con" and is visual, which inspired D.A. Sumikawa to coin "earcon" as the auditory equivalent in a 1985 article, 'Guidelines for the integration of audio cues into computer user interfaces.'[2]


Earcons provide an enhancement to screen reader usage due to their brevity and subtleness, which is an improvement over using much longer spoken cues to provide context: using a short, distinctive beep when an interface's button is selected can be much faster and therefore more convenient to hear than using speech synthesis to say the word "button".[5]
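Such a button earcon can be synthesized in a few lines. Below is a minimal sketch, using only the Python standard library, that writes a short sine-wave beep to a WAV file; the frequency, duration, and fade length are illustrative choices, not values prescribed by any screen reader.

```python
import math
import struct
import wave

def write_beep(path, freq_hz=880.0, duration_s=0.08, rate=44100):
    """Write a short sine-wave 'button' earcon to a 16-bit mono WAV file."""
    n = int(rate * duration_s)
    frames = bytearray()
    for i in range(n):
        # Fade in/out over 5 ms so the beep starts and ends without clicks.
        env = min(1.0, i / (rate * 0.005), (n - i) / (rate * 0.005))
        sample = int(32767 * 0.5 * env * math.sin(2 * math.pi * freq_hz * i / rate))
        frames += struct.pack("<h", sample)  # little-endian 16-bit signed
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))

write_beep("button_earcon.wav")
```

At 80 milliseconds, the resulting sound is far shorter than a synthesized utterance of the word "button", which is exactly the advantage the paragraph above describes.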


Because earcons are non-spoken sounds, users must learn to associate them with their meanings before they can fully benefit from them. To help with learning such associations, some screen readers also speak the meaning of each earcon, albeit at the end of their full description of an interface element. It is recommended that earcons be introduced early when learning to use a screen reader, so that through habitual usage the associations become instinctive and eventually subconscious.[4]




Conceptually, sonifying statistical data is pretty simple. A single data point translates to an auditory signal. Present several data points sequentially and in parallel, and you may end up with data-driven music.
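The simplest version of that translation is a linear map from data values to pitch. Here is a small sketch of that idea; the frequency range (220-880 Hz, two octaves) is an illustrative assumption, not a standard.

```python
def to_pitches(values, low_hz=220.0, high_hz=880.0):
    """Linearly map each data point to a frequency in [low_hz, high_hz]."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    return [low_hz + (v - lo) / span * (high_hz - low_hz) for v in values]

pitches = to_pitches([10, 20, 30])  # lowest value -> 220 Hz, highest -> 880 Hz
```

Playing the resulting frequencies one after another gives a sequential sonification; playing several mapped series at once gives the parallel, "musical" case.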


Justin Joque, a librarian at the University of Michigan, has sonified stock trade data. A tone represents the daily trading volume and the change in the closing value of the Dow from 1928 to 2011. As stock market activity increases over time, the tone rises in pitch and complexity.


We meet more data sonifiers in Chau Tu's article 'How to Listen to Data' (February 2017). Lauren Oakes and Nik Sawe investigated the decades-long decline of yellow cedar trees in Alaska and translated the data points into different keys, pitches, and instruments.


Computer programmer Brian Foo has digitized data for the New York Public Library. On his website Data-Driven DJ, you can listen to data-driven music. For example, there is a musical sequence based on Beijing's air quality data over the years 2012-2015. Another song is a sonification of income inequality along selected subway tracks in NYC, where the number and dynamics of the instruments correspond to the median incomes along the track.


Indices rely on associations, without showing the meaning directly. A picture of an artifact of your favourite hobby could be an index for you. For "wind", it is hard to construct a visual icon. Thus we use an index: a picture of a windsock.


If you enter a community of graphic designers, you might find people who use the above terms in slightly different ways. The terms icon and symbol are not always used in their semiotic meanings. Similarly, when speaking to audio designers, the commonly established meanings of the same words change again. Here's the basic vocabulary to discuss auditory user interface design:


Direct relations use the sound made by the target event itself, whereas indirect relations substitute a surrogate for the target and require an additional learning process to develop the relationship between the sound and its meaning. Either way, as long as a sound evokes the associated sound of an object or action, it is classified as an auditory icon (Dingler, Lindsay, Walker, 2008).


Earcons are different. They are abstract sounds: brief and distinctive, generally synthesized tones or sound patterns. By definition, earcons are always nonverbal, and they have only indirect relations to their meanings. Doorbells and low-battery indicators are earcons.


In addition to auditory icons and earcons, there is an even more common form of auditory signals: speech. Using semiotic terms, almost all spoken words are symbols - only the onomatopoetic words are conceptually icons.


And finally, there are also spearcons. Spearcons are iconised speech. To create a spearcon, you take a spoken phrase and speed it up until it can no longer be recognized as speech. Spearcons can be created automatically: first text-to-speech, then a time-compression algorithm. Each spearcon is unique because of the specific underlying speech phrase. This makes spearcons distinct, while at the same time similar phrases form families of related sounds, like earcons.
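The time-compression step can be sketched as plain resampling. The function below assumes the spoken phrase has already been rendered by a TTS engine into a list of samples; note that production spearcon pipelines typically use pitch-preserving compression (e.g. SOLA or a phase vocoder), whereas naive resampling like this also raises the pitch. It is shown only to make the "speed it up" step concrete.

```python
def compress(samples, factor=3.0):
    """Naively time-compress a waveform by a given factor using linear
    interpolation. `samples` is assumed to come from a TTS engine."""
    n_out = max(1, int(len(samples) / factor))
    out = []
    for j in range(n_out):
        # Map each output index back onto the original sample positions.
        pos = j * (len(samples) - 1) / (n_out - 1) if n_out > 1 else 0.0
        i = int(pos)
        frac = pos - i
        nxt = samples[min(i + 1, len(samples) - 1)]
        out.append(samples[i] * (1 - frac) + nxt * frac)
    return out
```

A factor around 2-3 is typical before speech stops being intelligible; pushing further yields the fully iconised spearcon.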


To fulfil the promise I gave in the beginning, there's one more term to explain: what is a sound icon? A sound icon is an instrument invented in 1965 in Romania by Horațiu Rădulescu, a Romanian-French composer. More concretely, a sound icon is a grand piano standing on its side, played by bowing the strings.


Thank you for still reading - you passed the test. Textual information often feels boring or hard to understand. If sonified, the same information might sound attractive and clear. Turning dull statistical information into music certainly sounds like a good idea.


In game design, good user interface design self-evidently includes good auditory design. With professional tools, we live in a less developed design culture. To communicate something serious, designers often resort to visual solutions: "To communicate an event, place a visual icon on the screen. To communicate a change in the interaction mode, change the dominant color of the whole screen." In our contemporary culture we believe that serious tools should look, feel, and sound boring. I believe this will change. Well-designed sound worlds will form the new frontier of professional tool UIs as well.


Consider an operation & maintenance room where users monitor an industrial process or a telecom network day in, day out. Users are expected to stare at the screens to notice deviations. However, users might be looking at another screen when something critical happens. Communicating status changes by auditory signals improves the situation.


US patent 9,325,589 B1 describes an audible network traffic notification system. The system analyses data and associates network activity with various sounds that correspond to different levels of suspicious or non-suspicious activity. For example, incoming HTTP requests may be presented as raindrops: the heavier the rain, the more traffic there is.
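The core of the raindrop metaphor is a mapping from traffic rate to drop density. The sketch below is an illustrative interpretation of that idea, not code from the patent: it schedules one raindrop onset per request in a one-second window, with random jitter so the rain sounds natural rather than metronomic.

```python
import random

def raindrop_schedule(requests_per_sec, window_s=1.0, seed=None):
    """Return sorted onset times (seconds) for raindrop sounds: one drop
    per request in the window, so heavier traffic yields denser rain."""
    rng = random.Random(seed)
    return sorted(rng.uniform(0.0, window_s) for _ in range(requests_per_sec))

drops = raindrop_schedule(50)  # 50 req/s -> 50 drop onsets within one second
```

A playback layer would then trigger a raindrop sample at each onset time; at low traffic the listener hears scattered drops, at high traffic a downpour.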


In this system, positive or neutral events have gentle, natural, pleasant sounds. In addition to raindrops, the data may sound like running water, infant cooing, birds chirping, crickets, or any other tranquil sound. Harmful or potentially threatening activity has more alarming sounds: dogs barking, crows cawing, lions roaring, thunder, explosions, alarms, sirens, alerts, buzzers, gunshots, and so on.
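In code, such a scheme reduces to a severity-to-palette lookup. The mapping below is a hypothetical sketch in the spirit of the patent's description; the severity names and groupings are my own assumptions, not taken from the patent text.

```python
# Hypothetical severity levels mapped to sound palettes (illustrative only).
SOUND_PALETTE = {
    "normal":     ["raindrops", "running water", "birds chirping", "crickets"],
    "suspicious": ["crows cawing", "dogs barking"],
    "critical":   ["sirens", "alarms", "explosions"],
}

def sounds_for(severity):
    """Return the sound palette for a severity level, defaulting to 'normal'."""
    return SOUND_PALETTE.get(severity, SOUND_PALETTE["normal"])
```

Defaulting unknown severities to the tranquil palette keeps the soundscape calm unless the analyser explicitly flags activity as suspicious or critical.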


In this example, the soundscape of rain, birds, and dogs forms a holistic data presentation in which users can follow many types of information subconsciously even when their primary focus is on something else - for example, on visual information.


Within a single use case, different senses can serve different needs. The message continues naturally from one sense to another: you first hear a knock on a door, then you look through the door's window to see who is there. You hear the phone play a sound, then you open it to see the message.


When presenting data, replacing a visual message with a sonified message might not add much value. Using visual and sonified messages together will increase value. Two senses are more powerful than one.


When we speak of information presentation, we also face ethics. We know that a chart may simultaneously look good and twist the truth. Over time, readers of charts have learned to question them: "Why does that axis not start from zero? Why is this data shown in red, which feels dangerous, while the other one is blue, associated with safety and calmness?" It will take some time until audiences learn to question data-driven music in the same way. Music has a strong ability to trigger emotions. We will face two long learning curves: first how to sonify data, and then how to listen to it critically.
