Anyone who has spent any time in a critical care area of a hospital is aware how noisy it can be. There is noise from telephones, pagers, and equipment, as well as conversations. In the operating room, there is also noise from electrical surgical instruments such as scalpels and drills, suctioning, ventilation machines, and so on. There is so much noise that one group of researchers went so far as to suggest there should be a department of sound in every hospital to monitor and control noise pollution!
However, incidental noise is not the only issue when considering sound in hospitals. Clinicians and healthcare workers need an information environment that makes the most of auditory as well as visual information. Unfortunately, most studies of user-centered design in healthcare have focused on the visual environment rather than the auditory environment. Investigations of auditory aspects of medical devices have tended to focus on auditory alarms, examining how difficult alarms are to discriminate and how uninformative and annoying they can be.
Healthcare workers are usually on their feet when caring for patients, moving from place to place. Visual displays require a worker to be oriented towards a display or to remember to look at it, two requirements that are often challenged in busy healthcare environments. Auditory displays, on the other hand, do not require a worker to be oriented in any direction and will usually be heard (even if not listened to) whether or not a worker remembers to attend to them. Studies show that healthcare workers respond much faster to auditory than to visual alarms. For this reason, auditory displays are used for alerts and alarms. However, researchers are trying to make auditory displays more informative.
Earcons and Auditory Icons
Most auditory displays in healthcare environments are earcons. Earcons are non-verbal audio messages that are used in the computer/user interface to provide information to the user about some computer object, operation, or interaction. Earcons are built from motifs, which are short sequences of notes that can be transformed in different ways. Compound earcons add information rather like stringing words together. Hierarchical earcons add information by transforming the original motif with changes in rhythm, pitch, timbre, or register. Earcons have a syntactic relationship with the events or states they indicate. A very simple example is the ISO 9703-2 recommendation for alarms, with three slow notes for medium-priority alarms, and three fast notes followed by two fast notes for high-priority alarms.
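To make these ideas concrete, a motif can be modeled as a short note sequence and earcons as transformations or concatenations of it. The following is a minimal sketch only; the frequencies, durations, and transformation values are invented for illustration and are not taken from ISO 9703-2 or any real device.

```python
# A minimal sketch of earcon construction, for illustration only; the
# frequencies, durations, and transformations are invented, not taken
# from ISO 9703-2 or any real device.

# A motif is a short sequence of (frequency_hz, duration_s) notes.
MOTIF = [(440.0, 0.25), (523.3, 0.25), (659.3, 0.25)]

def transpose(motif, ratio):
    """Hierarchical transformation: shift the register by a frequency ratio."""
    return [(f * ratio, d) for f, d in motif]

def change_tempo(motif, factor):
    """Hierarchical transformation: compress or stretch the rhythm."""
    return [(f, d / factor) for f, d in motif]

def compound(*parts):
    """Compound earcon: concatenate motifs like words in a sentence."""
    return [note for part in parts for note in part]

# Priority rhythms in the spirit of ISO 9703-2: three slow notes for
# medium priority; three fast notes, a brief rest, then two fast notes
# for high priority (the rest is modeled as a zero-frequency note).
REST = [(0.0, 0.15)]
medium_priority = MOTIF
fast = change_tempo(MOTIF, 2.0)
high_priority = compound(fast, REST, fast[:2])
low_register = transpose(MOTIF, 0.5)  # the same motif, an octave down

print("medium:", medium_priority)
print("high:  ", high_priority)
print("low:   ", low_register)
```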
In contrast, auditory icons are everyday sounds mapped to computer events by analogy with everyday sound-producing events. Auditory icons have a semantic relationship with the events or states they indicate. Auditory icons are rare in healthcare, possibly because it is hard to make sure that the sounds of everyday events will not be misunderstood by users in a safety-critical context such as healthcare. However, good semantic mapping can be important in auditory display design.
Melodic Alarms
Some of the difficulties of designing auditory displays for medical equipment are apparent in the IEC 60601-1-8 standard for medical electrical equipment alarms. IEC 60601-1-8 contains many valuable suggestions that will improve auditory alarms. One suggestion, however, has proved problematic: that melodies be used to distinguish different alarm sources, rather than continuing to rely on the undifferentiated single-tone sounds recommended in ISO 9703-2.
In an attempt to improve learning and memorability, melodies were chosen that are semantically related to the alarms while still conforming to the standard's prescribed rhythm. For example, a series of rising notes was used for a temperature alarm, and a series of notes going up and down was used for a breathing alarm. However, all the melodic alarms are played with the same instrument, in the same register, at the same tempo, and with the same rhythm.
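A hypothetical sketch of such contour mappings follows; the note sequences are invented for illustration and are not the actual IEC 60601-1-8 melodies.

```python
# Hypothetical melodic contours in the spirit described above; these
# note sequences are invented, not the actual IEC 60601-1-8 melodies.
C4, D4, E4 = 261.6, 293.7, 329.6  # note frequencies in Hz

MELODIES = {
    "temperature": [C4, D4, E4],  # rising contour: "heating up"
    "breathing":   [C4, E4, C4],  # up-and-down contour: in and out
}

# Every alarm shares the same timbre, register, tempo, and rhythm;
# only the melodic contour distinguishes one alarm from another.
print(MELODIES)
```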
Studies appearing since the alarm standard was published report that people find it difficult to learn the melodic alarms. We have found that even after two solid sessions of practice, hardly any test users achieve perfect performance. Telling users about the semantic mapping between each melody and the alarm it signals appears not to help, and many users persistently confuse pairs of alarms that sound particularly similar. This is potentially dangerous. If there is one alarm sound for all alarms, users turn around to see which alarm it is. If there are different alarm sounds, users may believe they have identified an alarm correctly, not turn around to check, and sometimes be wrong.
Moreover, user tests suggest that the people who are best at identifying the IEC 60601-1-8 alarms are not the people the alarms are really designed for. Users with musical training learn faster than those without musical training. Worst of all, nursing staff seem to learn the IEC 60601-1-8 alarms more slowly and confuse them more often than university students do, even when the semantic mapping between the alarms and the alarm melodies is made clear.
There are two areas where the development process probably could have been improved. First, there may not have been enough attention to what is known about earcon design. For example, researchers have found that timbre (tone quality) is more important than pitch for grouping and discrimination. A different number of notes in each earcon also helps discrimination, as does a change in rhythm. Even a simple timbre change (a piano note vs. a trumpet note, for example) might have made it easier for people to learn the IEC 60601-1-8 alarms. In the technical language of earcon design, the melodic alarms place too many syntactic constraints on the underlying motif for the resulting earcons to be discriminated. Brewster has found that earcons built from a richer combination of sound parameters can be recalled as well by non-musicians as by musicians, and that relatively little training is needed, even with large sets of earcons, if the sounds are well designed and structured.
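One way such guidelines might be applied is to require every pair of alarms to differ on more than one sound dimension. The sketch below is illustrative only; the timbres, note counts, and rhythms are invented, and the two-dimension rule is a simplification of the guidelines, not a published design rule.

```python
from itertools import combinations

# Hypothetical alarm set applying the earcon guidelines above: alarms
# differ in timbre and note count, not melody alone. All timbres,
# counts, and rhythms here are invented for illustration.
ALARMS = {
    # alarm source: (timbre, note_count, rhythm as note durations in s)
    "oxygen":      ("piano",   3, (0.25, 0.25, 0.25)),
    "ventilation": ("trumpet", 4, (0.125, 0.125, 0.25, 0.25)),
    "temperature": ("marimba", 2, (0.5, 0.25)),
}

def differences(a, b):
    """Count the sound dimensions on which two alarms differ."""
    return sum(x != y for x, y in zip(ALARMS[a], ALARMS[b]))

# Require every pair of alarms to differ on at least two dimensions,
# so that no single cue (such as melody) carries the whole burden.
for a, b in combinations(ALARMS, 2):
    assert differences(a, b) >= 2, f"{a} and {b} may be confused"
print("every pair differs on at least two sound dimensions")
```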
Second, the IEC 60601-1-8 earcons were not tested with representative users before being put in the standard. For the developers, the melodies seemed obviously linked to what the alarms mean. For many users, however, the melodies by themselves do not discriminate between the alarms and are not obviously linked to what the alarms mean. Simple user tests would have uncovered the slow rate of learning and some of the more persistent confusions. Overall, there is cause for concern with the melodic alarms, but further user tests are needed to see if they fare better or worse in tests closer to clinical conditions.
Blood Pressure Earcons
The blood pressure (BP) earcons developed by Marcus Watson at the University of Queensland, St Lucia, Australia, are an example of earcons that were informed by earcon design principles and tested before potential clinical use. The BP earcons signal a patient's systolic and diastolic BP after a reading is taken from a non-invasive BP cuff. Two initial tones serve as a pitch reference for the systolic and diastolic tones that follow. If the systolic reading is higher than normal, the third tone is higher and shorter than the reference tone; if lower than normal, it is lower and longer. The fourth tone is mapped similarly for the diastolic BP.
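The mapping just described can be sketched as follows. The reference frequency, pitch step, duration scaling, and the normal-range thresholds are assumptions made for illustration, not Watson's actual design parameters.

```python
# Sketch of the BP earcon mapping described above. The reference
# frequency, pitch step, duration scaling, and "normal" thresholds
# are illustrative assumptions, not the published design parameters.

REF_FREQ = 440.0  # reference tone frequency in Hz (assumed)
REF_DUR = 0.3     # reference tone duration in seconds (assumed)
STEP = 1.12       # pitch ratio per deviation from normal (assumed)

def bp_tone(reading, normal_low, normal_high):
    """Map one BP reading to a (frequency, duration) tone: higher and
    shorter than the reference when above normal, lower and longer
    when below, identical to the reference when within range."""
    if reading > normal_high:
        return (REF_FREQ * STEP, REF_DUR / 1.5)
    if reading < normal_low:
        return (REF_FREQ / STEP, REF_DUR * 1.5)
    return (REF_FREQ, REF_DUR)

def bp_earcon(systolic, diastolic):
    """Two reference tones, then the systolic and diastolic tones."""
    reference = [(REF_FREQ, REF_DUR), (REF_FREQ, REF_DUR)]
    return reference + [bp_tone(systolic, 100, 140),   # assumed normal range
                        bp_tone(diastolic, 60, 90)]    # assumed normal range

print(bp_earcon(systolic=155, diastolic=70))
```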
Initial laboratory studies show that after less than an hour of training, non-healthcare participants can discriminate systolic and diastolic BP on a 9-point auditory scale with an average error of 0.5 point on the scale. Preliminary analysis of results for a full-scale patient simulator test suggests that anesthetists quickly learn to use the BP earcons. Anesthetists’ subjective reports indicate that they recognize the value of being informed about BP without having to remember to check the monitor, but that further design refinements may be needed prior to full clinical use of the concept.
Parameter Mapping
Parameter mapping is the representation of data parameters in sound dimensions. The result tends to be a continuous or intermittent sound, rather than discrete sounds like earcons and auditory icons.
A well-known example of parameter mapping is the Geiger counter. The rate at which a series of clicks plays depends on the level of radiation detected. More complex cases map further data parameters onto sound dimensions such as frequency, "piggybacking" them onto a base sound. With parameter mapping, the relationship between states of the world and the sounds is said to be lexical. (Some authors use the term sonification to describe parameter mapping, distinguishing it from auditory icons and earcons, whereas other authors use the term sonification to describe all non-speech auditory displays.)
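The Geiger counter can be read as a one-parameter mapping from radiation level to click rate. Here is a toy sketch of that idea; the Poisson-process model and all constants are assumptions for illustration, not a description of any real instrument's circuitry.

```python
import random

def click_times(counts_per_second, duration_s=2.0, seed=0):
    """Toy Geiger-counter mapping: the detected radiation level sets
    the click rate. Clicks are modeled as a Poisson process, so the
    intervals between clicks are exponentially distributed with mean
    1/rate."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(counts_per_second)
        if t >= duration_s:
            return times
        times.append(t)

print(len(click_times(5.0)), "clicks at a low level")
print(len(click_times(200.0)), "clicks at a high level")
```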
For over two decades, parameter mapping has been used very successfully in healthcare in the variable-tone pulse oximetry display, which informs healthcare workers about a patient's heart rate and oxygen saturation. Researchers are exploring whether the concept can be extended to other vital signs, so that clinicians can get a more complete representation of a patient's status without having to continually look at visual monitors.
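The pulse oximeter piggybacks oxygen saturation onto the heart-rate beep: one beep sounds per heartbeat, and the beep's pitch falls as saturation falls. A sketch follows; the linear frequency mapping is an invented stand-in for the device-specific mappings real monitors use.

```python
# Sketch of the variable-tone pulse oximetry mapping: one beep per
# heartbeat, with beep pitch tracking oxygen saturation. The linear
# frequency mapping is an invented stand-in for the device-specific
# mappings real monitors use.

def beep(heart_rate_bpm, spo2_percent):
    """Return (interval between beeps in s, beep frequency in Hz)."""
    interval = 60.0 / heart_rate_bpm  # one beep per heartbeat
    # assumed mapping: 100% SpO2 -> 880 Hz, minus 10 Hz per point lost
    frequency = 880.0 - 10.0 * (100.0 - spo2_percent)
    return interval, frequency

print(beep(heart_rate_bpm=72, spo2_percent=99))   # healthy patient
print(beep(heart_rate_bpm=110, spo2_percent=88))  # desaturating patient
```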
Conclusion
It is probably better to think in terms of developing multimodal information environments than developing auditory displays alone. Auditory displays must work alongside existing visual displays, tangible instruments, and so on, all of which give feedback about a patient's status. An integrated picture of healthcare workers' responsibilities, work contexts, and activities is needed to decide whether an auditory display is the most effective support. While it can be helpful to provide information in redundant modalities, so that users can develop strategies best fitted to dynamically changing needs, redundancy can be confusing when the location and timing of information differ and quick responses are needed. The best solution will depend on good application of theoretical knowledge, a thorough understanding of the user's work, and effective collaboration with users to achieve valid tests of prototype concepts.
Auditory displays also affect the social and communicative context of work much more than visual displays because they are perceptually "obligatory": unless there is auditory masking or the ears are blocked, sounds reach the ears even if they are not attended to. Part of the design challenge with auditory displays, much more than with visual displays, is designing how auditory displays might control users' attention. How might an auditory display provide background confirmation when all is well, but attract attention when there is an important change? Researchers have developed principles for making auditory displays sound urgent, but determining when auditory displays should sound urgent involves understanding the context, and we may not have adequate sensing or data-fusion algorithms to achieve this reliably.
It is encouraging that medical equipment manufacturers are increasingly hiring human-computer interaction and human factors practitioners to join product development teams. It is also encouraging that editors of healthcare journals such as Journal of the American Medical Informatics Association, Quality and Safety in Healthcare, Journal of Clinical Monitoring and Computing, Anesthesia and Analgesia, and others are increasingly publishing papers that promote a user-centered design process. Despite this, medical electrical equipment manufacturers and members of standards committees could benefit from much greater awareness of the value that human factors and usability professionals can bring to equipment design, and the dangers of not taking such considerations into account.