The goal in applying auditory cues to computing is to collapse the boundary between sight and sound, synthesizing a more total UI experience. Auditory icons function as an aural syntagma: their purpose is to direct. Their materials do not operate independently of reception and cannot be excised from the signified/signifier relation. UI sonic environments are necessarily signifying, and material intensive quantities, the attributes of a sound that can be represented numerically, serve a semiotic purpose. As Cabral and Remijn write, "Physical features of the sound(s) used in auditory icons, such as pitch, reverberation, volume, and ramping, can be manipulated to convey, for example, the perceived location, distance, and size of the referent."

The sounds of our digital lives, and particularly the aforementioned processes of aural discretization, complicate sonic materialism as a content-blind approach. The mythology of sound as pure expression without any underlying meaning cannot hold for digital noises. Within the confines of a UI, even in the case of ostensibly pure information (the sequencer's role within a synthesizer), content and intention are everything. Conversion from the continuous to the discrete does not discard the continuous simply because the formal qualities of the information have changed; a coherence of sense persists across different means of expression, a residual meaning structure. Numbers, in this instance, are still qualic, because the prior phenomenal is communicated through sequences of numbers. Like my reliance on the aural reassurance of my iPhone's functionality, I needed sound to make sense of my digital surroundings, to align my experience with some recognizable mode of participation, ultimately enabled by the ways in which sound is configured materially.
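To make the quoted claim concrete, here is a minimal sketch, not drawn from Cabral and Remijn's article, of how pitch, volume, and onset ramp might be mapped onto a referent's perceived distance and size. The function name, the particular parameter values, and the near/far mappings are all illustrative assumptions rather than established conventions.

```python
import numpy as np

SR = 44_100  # sample rate in Hz (an assumption; CD-quality audio)

def auditory_icon(pitch_hz, volume, ramp_s, duration_s=0.4, sr=SR):
    """Render a simple sine-based cue. Pitch, volume, and attack ramp
    are the numerically representable "intensive quantities" that
    carry the semiotic load described in the text."""
    t = np.arange(int(sr * duration_s)) / sr
    tone = np.sin(2 * np.pi * pitch_hz * t)
    # Linear attack ramp: a slow onset can read as distant,
    # a sharp onset as close and immediate.
    env = np.minimum(1.0, t / ramp_s) if ramp_s > 0 else np.ones_like(t)
    # Exponential decay so the cue tails off naturally.
    env = env * np.exp(-3 * t / duration_s)
    return volume * env * tone

# Hypothetical semiotic mappings: low pitch, high volume, and a sharp
# attack for a "near, large" referent; the inverse for "far, small".
near_large = auditory_icon(pitch_hz=220, volume=0.9, ramp_s=0.005)
far_small = auditory_icon(pitch_hz=880, volume=0.2, ramp_s=0.080)
```

Each rendered cue is, in the end, only a sequence of numbers, yet the perceptual contrast between them survives the discretization, which is the sense in which the numbers remain qualic.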