DIASS - Digital Instrument for Additive Sound Synthesis

DIASS - a Digital Instrument for Additive Sound Synthesis - is the "Rolls Royce bulldozer" of sound synthesis. It provides a precision tool for the sonification of scientific data and is the most flexible instrument currently available to composers of experimental music. DIASS gives the user complete control over the details of a sound. It is written in C and assumes access to high-performance computing equipment.

Table of Contents

  1. General Description
  2. The Instrument
  3. The Editor
  4. Loudness Scaling
  5. M4C
  6. M4CAVE
  7. Works Produced with DIASS
  8. Chronology
  9. User's Manual
  10. Publications
  11. Credits
  12. Contacts

  1. General Description

    DIASS creates complex sounds through a summation of simple sine waves (partials), whose amplitudes and frequencies evolve independently in time. Control parameters specify the evolution of individual components as well as the evolution of the complex sound.

    DIASS consists essentially of two parts: the instrument proper, which computes the samples, and an editor, through which the user enters and modifies the instructions for the instrument. The DIASS instrument functions as part of the M4C synthesis language.

    DIASS can produce sounds composed of up to 65 partials, where each partial is controlled by up to 12 static and 13 dynamic parameters. A continuous sound wave is approximated by 16-bit samples, and DIASS takes 22,050 or more samples for each second of sound. Furthermore, DIASS contains a unique algorithm to achieve the perception of equal loudness across the frequency spectrum, regardless of timbre.
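
    The sketch below illustrates the basic principle in C (an illustration only, not the DIASS code; the names are invented for this example): each partial contributes a sine wave whose amplitude and frequency can be updated at every sample, the contributions are summed, and the sum is quantized to a 16-bit sample.

        /* Minimal additive-synthesis sketch; illustration only, not DIASS code. */
        #include <math.h>
        #include <stdint.h>

        #ifndef M_PI
        #define M_PI 3.14159265358979323846
        #endif

        #define SRATE        22050   /* samples per second, as used by DIASS */
        #define MAX_PARTIALS 65      /* DIASS limit on partials per sound    */

        typedef struct {
            double amp;              /* amplitude (may be varied in time)    */
            double freq;             /* frequency in Hz (may be varied)      */
            double phase;            /* running phase in radians             */
        } Partial;

        /* Compute one 16-bit sample from the current state of all partials.
           Amplitudes are assumed to be normalized so that the sum stays in
           [-1, 1]; in DIASS this is ensured by the loudness and anticlip
           routines (Section 4). */
        static int16_t next_sample(Partial *p, int npartials)
        {
            double sum = 0.0;
            for (int i = 0; i < npartials; i++) {
                sum += p[i].amp * sin(p[i].phase);
                p[i].phase += 2.0 * M_PI * p[i].freq / SRATE;
            }
            return (int16_t)(sum * 32767.0);
        }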

    DIASS is written in C. It has been designed for a distributed-memory environment; the parallel implementation uses the standard MPI message-passing library. A C++ version is under development.

    The performance of the code depends greatly on the number of partials in the sound and the number of active controls for each partial. A typical piece lasting 2'26" and comprising 400 to 500 sounds of medium to great complexity can be computed on 30 nodes of an IBM SP in slightly less than 11 minutes. The score file for this piece (two-channel output) requires 12.9 MB of disk space.
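
    For comparison, the two-channel, 16-bit sound file for a piece of this length amounts to about 146 s x 22,050 samples/s x 2 channels x 2 bytes/sample, or roughly 12.9 MB.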

    You are invited to listen to some demonstrations of sounds produced with DIASS on the IBM SP.

  2. The Instrument

    The DIASS instrument reads a score file, which is a sequence of I(nstrument)-cards containing the data needed to synthesize a sound from its partials and combine the sounds into a piece. Click here to learn more about I-cards. The score file is processed within the framework of the synthesis language M4C. The result is a sound file, which contains the discrete samples of the sound wave for the entire piece. The usual sampling rate for DIASS is 22,050 samples per second.

    DIASS can handle an arbitrary number of consecutive and/or simultaneous sounds. Currently, the number of partials in a sound is limited to 65. Every partial can be controlled independently through 12 static parameters (which do not vary for the duration of the sound) and 13 dynamic parameters (which may vary during the life of the partial). The static parameters include

    The dynamic parameters include

    A number of macros apply simultaneously to all partials in a sound. They include

    Click here to learn more about controls and input.

  3. The Editor

    The editor in DIASS prepares a raw score file, which contains all the information needed for the synthesis of a piece by the instrument. It comes in a slow and a fast version. The slow version (DIASSIN) is most appropriate for brief musical examples and for a systematic exploration of sound space. By answering a series of questions, either through a menu or through a graphical user interface (GUI), the user can create new sounds. The process is slow because of the large number of options available. The fast version of the editor is appropriate for scientific sonification. It is also recommended for music composition in production mode or when sounds are synthesized from the output of a computer-assisted composition program. A script provides the answers to the questions posed by the menu of the slow version.

  4. Loudness Scaling

    A unique feature of DIASS is the scaling of amplitudes to achieve a desired perceived loudness at the level of each sound and to prevent the occurrence of clipping at the level of an entire piece.

    The loudness routines incorporate various results of psychoacoustic research. The software uses the Fletcher-Munson curves of equal loudness [H. Fletcher and W. A. Munson, Loudness, its definition, measurement, and calculation, J. Acoust. Soc. Am. 5 (1933), 82] and the concept of critical bands as formalized by Stevens [S. S. Stevens, Measurement of loudness, J. Acoust. Soc. Am. 27 (1955), 815] and Zwicker, Flottorp, and Stevens [E. Zwicker, G. Flottorp, and S. S. Stevens, Critical bandwidth in loudness summation, J. Acoust. Soc. Am. 29 (1957), 548].
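
    As a much simplified illustration of the underlying idea (not the actual DIASS routines): to make a partial of frequency f sound as loud as a 1-kHz reference tone at a given loudness level, its amplitude must be multiplied by a frequency-dependent factor derived from an equal-loudness contour. The function contour_db below is a placeholder standing in for an interpolation of the tabulated equal-loudness data.

        #include <math.h>

        /* Placeholder for an interpolation of tabulated equal-loudness data:
           returns the sound pressure level (dB SPL) needed at frequency
           'freq' to reach a loudness level of 'phon' phons.  A flat contour
           is used here only to keep the sketch self-contained. */
        static double contour_db(double freq, double phon)
        {
            (void)freq;
            return phon;
        }

        /* Linear amplitude factor that makes a partial at 'freq' match the
           perceived loudness of a 1-kHz tone at 'phon' phons (whose SPL is
           'phon' dB by definition of the phon scale). */
        static double loudness_scale(double freq, double phon)
        {
            double needed_spl = contour_db(freq, phon);
            return pow(10.0, (needed_spl - phon) / 20.0);
        }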

    The anticlip option guards against overflow when more than one sound is played simultaneously. Overflow, the occurrence of sample values in the sound file exceeding the available register size, causes "clipping" when the sound file is played. The anticlip routines guarantee that all computed sample values fit in 16-bit registers, while maintaining the desired perceived loudness ratios of all the sounds in the piece. Practically, this feature implies that DIASS can produce works with a wide dynamic range in a single run of the program. There is no need to resort to "post-production" digital or analog mixing.
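
    The following sketch shows the idea in its crudest form (it is not the DIASS code, which works with perceived loudness rather than raw amplitudes and treats whole sounds rather than sample buffers): if the mixed samples would overflow the 16-bit range, the entire buffer is rescaled by a single factor, so that relative levels are preserved.

        #include <stddef.h>
        #include <stdint.h>

        /* Rescale a buffer of mixed samples (in 16-bit units) by one common
           factor so that every value fits in a 16-bit register. */
        static void anticlip(const double *mix, size_t n, int16_t *out)
        {
            double peak = 0.0;
            for (size_t i = 0; i < n; i++) {
                double a = mix[i] < 0.0 ? -mix[i] : mix[i];
                if (a > peak)
                    peak = a;
            }
            double scale = (peak > 32767.0) ? 32767.0 / peak : 1.0;
            for (size_t i = 0; i < n; i++)
                out[i] = (int16_t)(mix[i] * scale);
        }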

    After the loudness and anticlip routines have been applied to the raw score file, the latter becomes the final score file, which is then passed on to the instrument.

  5. M4C

    DIASS functions as part of the M4C synthesis language developed by Beauchamp and co-workers at the UIUC. The sequential version of DIASS requires only one processor. A parallel version, DIASS_M4C, designed for a distributed-memory multiprocessor environment, uses the standard MPI message-passing library. A "master" node distributes the computation of the sounds among the "slave" nodes, which compute the sounds, and a "mixing" node integrates the sounds into the piece as they are delivered by the slave nodes.
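
    The process layout can be sketched as follows (a schematic only, assuming at least three MPI ranks; compute_sound and add_to_mix are hypothetical placeholders, not functions from M4C): rank 0 acts as the master and hands out sound indices, the highest rank acts as the mixer, and the remaining ranks synthesize the sounds.

        #include <mpi.h>
        #include <string.h>

        #define BUFLEN   4096    /* fixed-length sample buffer, for illustration */
        #define WORK_TAG 1
        #define STOP_TAG 2

        static void compute_sound(int id, double *buf)   /* placeholder synthesis */
        {
            (void)id;
            memset(buf, 0, BUFLEN * sizeof(double));
        }

        static void add_to_mix(const double *buf)        /* placeholder mixing */
        {
            (void)buf;
        }

        int main(int argc, char **argv)
        {
            int rank, size;
            const int nsounds = 8;
            static double buf[BUFLEN];

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            const int master = 0, mixer = size - 1, nworkers = size - 2;

            if (rank == master) {
                /* Master: hand out sound indices round-robin, then stop the workers. */
                int stop = -1;
                for (int i = 0; i < nsounds; i++)
                    MPI_Send(&i, 1, MPI_INT, 1 + i % nworkers, WORK_TAG, MPI_COMM_WORLD);
                for (int w = 1; w < mixer; w++)
                    MPI_Send(&stop, 1, MPI_INT, w, STOP_TAG, MPI_COMM_WORLD);
            } else if (rank == mixer) {
                /* Mixer: integrate every sound into the piece as it arrives. */
                for (int i = 0; i < nsounds; i++) {
                    MPI_Recv(buf, BUFLEN, MPI_DOUBLE, MPI_ANY_SOURCE, WORK_TAG,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                    add_to_mix(buf);
                }
            } else {
                /* Worker ("slave") node: synthesize each assigned sound. */
                for (;;) {
                    int id;
                    MPI_Status st;
                    MPI_Recv(&id, 1, MPI_INT, master, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
                    if (st.MPI_TAG == STOP_TAG)
                        break;
                    compute_sound(id, buf);
                    MPI_Send(buf, BUFLEN, MPI_DOUBLE, mixer, WORK_TAG, MPI_COMM_WORLD);
                }
            }

            MPI_Finalize();
            return 0;
        }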

  6. M4CAVE

    To assist in the perception of sound features, we have developed M4CAVE, a software program that represents complex sounds as graphical objects in an immersive virtual-reality (VR) environment. The environment can be a CAVE, a room-sized three-dimensional VR environment, or an ImmersaDesk, a two-dimensional representation of a three-dimensional environment.

    Images are computed on the fly from score files. These score files are the same as the files from which the sound files are generated, and a one-to-one relationship between visual attributes and sound qualities guarantees that the visual images are exact representations of the sounds.
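
    The mapping itself is not described here; the following fragment merely illustrates what a one-to-one correspondence of this kind might look like (the attributes actually used by M4CAVE may differ).

        #include <math.h>

        typedef struct {
            double start, dur;   /* start time and duration of a partial (s) */
            double freq, amp;    /* frequency (Hz) and amplitude             */
        } PartialParams;

        typedef struct {
            double x, y;         /* position of the graphical object         */
            double size;         /* size of the object                       */
        } VisualObject;

        /* Hypothetical mapping: time runs along x, frequency (on a log scale)
           along y, and amplitude determines the size of the object. */
        static VisualObject map_partial(PartialParams p)
        {
            VisualObject v;
            v.x    = p.start;
            v.y    = log(p.freq);
            v.size = p.amp;
            return v;
        }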

  7. Works Produced with DIASS

    The following compositions by Sever Tipei have been produced with DIASS:

    1. AGA MATTER for piano and computer-generated tape, 1992
    2. RICE MATTERS for computer-generated tape, 1993
    3. CURSES for solo male voice, backup group, and computer-generated tape, 1996 (partially produced with DIASS)
    4. A.N.L.-folds, manifold compositions for computer-generated tape, 1996
    5. Sonic Residues for computer-generated tape (50 variants, performed December 21, 1997, at the Linden Gallery, Melbourne, Australia)
    6. BERLIN-folds #1, #2 for computer-generated tape, 1998
    7. A.A.-folds for computer-generated tape, installation at the Int'l Computer Music Conference 1998, Ann Arbor, Michigan

  8. Chronology

    DIASS was designed by Sever Tipei and Christopher Kriese. The initial code was written by Christopher Kriese in 1991-92 and first implemented on the CRAY Y-MP at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UIUC).

    Between 1994 and 1996, David Ralley, working under the supervision of Sever Tipei, expanded the capabilities of the instrument. With Cheryl Herndon, he added the loudness routines. Arun Chandra implemented various improvements in 1996 and 1997.

    A team including Dave Blumenthal, Mario Lauria, Thomas Lawrence, and Scott Pakin modified M4C and wrote the parallel version of M4C (DIASS_M4C). Their work was done at the UIUC as a class project for Tipei's seminar Musical Applications on Supercomputers in the 1995 Spring semester. DIASS_M4C was subsequently implemented on the IBM Scalable POWERparallel System (SP) at Argonne National Laboratory (ANL) by Dave Blumenthal.

    Since 1996, DIASS has been the core element of a joint project between the UIUC and ANL. The project is currently co-directed by Sever Tipei (UIUC) and Hans Kaper (ANL).

    During the 1996 Spring semester, Dave Blumenthal and Max Levchin developed M4CAVE. M4CAVE was implemented at Argonne by Elizabeth Wiebel, an undergraduate student from St. Norbert College in De Pere, Wisconsin, who was sponsored by Argonne's Student Research Participation Program, and by Morris Chukhman, a graduate student at the UIUC.

    Presently, DIASS is being rewritten in C++. Preliminary code analysis was done by Mike Piacenza, Bill Whitehouse, and a group from the Musical Applications on Supercomputers seminar. Their ideas were further worked out at Argonne by Régine Migieu, a student from the Université Claude Bernard - Lyon I (France), and implemented by Ming Zhu at the UIUC.

  9. User's Manual

    A user's manual for DIASS is in preparation; it will be posted here when it becomes available.

  10. Publications

    DIASS was demonstrated at

  11. Credits

    DIASS was developed with funds provided by the UIUC Research Board. The work at Argonne is supported by the MCS Division. Until 1994, the NCSA provided time on its CRAY Y-MP. Since 1994, all computations have been done in Argonne's Center for Computational Science and Technology.

  12. Contacts

    Hans G. Kaper, MCS Division
    Argonne National Laboratory
    Argonne, IL 60439
    E-mail: [email protected]
    (630) 252-7160
    Sever Tipei, School of Music
    University of Illinois at Urbana-Champaign
    Urbana, IL 61801
    E-mail: [email protected]
    (217) 333-6689


    Last updated: October 21, 1998 (HGK)