DIASS (Digital Instrument for Additive Sound Synthesis)
creates complex sounds through
a summation of simple sine waves
(partials), whose amplitudes and frequencies
evolve independently in time.
Control parameters specify the evolution of
individual components as well as the evolution
of the complex sound.
DIASS consists essentially of two parts:
the instrument proper,
which computes the samples,
and an editor,
through which the user enters
and modifies the instructions for the instrument.
The DIASS instrument functions as part of the
M4C synthesis language.
DIASS can produce sounds composed of up to 65 partials,
where each partial is controlled by up to 12 static
and 13 dynamic parameters. A continuous sound wave
is approximated by 16-bit samples, and DIASS takes 22,050
or more samples for each second of sound. Furthermore,
DIASS contains a unique algorithm to achieve the perception of
equal loudness across the energy spectrum,
regardless of timbre.
DIASS is written in C. It has been designed for
parallel execution; the parallel implementation uses the standard
MPI message-passing library.
A C++ version is under development.
The performance of the code depends greatly on
the number of partials in the sound and the number
of active controls for each partial.
A typical piece lasting 2'26"
and comprising 400-500 sounds
of medium to great complexity can be computed
on 30 nodes of an IBM SP
in slightly less than 11 minutes.
The score file for this piece (two-channel output)
requires 12.9 MB of storage.
You are invited to listen to some
of the sounds produced with DIASS on the IBM SP.
The DIASS instrument reads a score file,
which is a sequence of I(nstrument)-cards
containing the data needed to synthesize a sound
from its partials and combine the sounds into a piece.
The score file is processed within the framework of
the synthesis language M4C.
The result is a sound file,
which contains the discrete samples of
the sound wave for the entire piece.
The usual sampling rate for DIASS is
22,050 samples per second.
DIASS can handle an arbitrary number
of consecutive and/or simultaneous sounds.
Currently, the number of partials in a sound
is limited to 65.
Every partial can be controlled independently
through 12 static parameters
(which do not vary for the duration of the sound)
and 13 dynamic parameters
(which may vary during the life of the partial).
The static and dynamic parameters include
- start time
- reverberation (hall size, decay rate)
- tremolo (amplitude modulation; magnitude, rate)
- vibrato (frequency modulation; magnitude, rate)
- amplitude transient (magnitude, rate)
- frequency transient (magnitude, rate)
- panning (stereo effect)
- mix between direct and reverberated sound
A number of macros apply simultaneously
to all partials in a sound.
- crescendo/decrescendo (amplitude change)
- glissando (frequency change)
- tuning/detuning (change of the frequency ratios)
- change of the amplitude ratios
- enhancing/dampening selected frequency bands
- loudness scaling (to achieve a prescribed perceived loudness level)
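A macro such as the glissando can be pictured as one scale factor applied to every partial at once, so the frequency ratios are preserved while the whole sound slides in pitch. This is a hypothetical sketch with a linear slide, not the DIASS routine.

```c
#include <stddef.h>

/* Glissando sketch (illustrative): scale all partial frequencies by
   a common factor that moves linearly from 1 at t = 0 to
   target_ratio at t = duration.  `freq` holds the partials' start
   frequencies and is overwritten with their values at time t. */
void glissando(double *freq, size_t n, double target_ratio,
               double t, double duration)
{
    double u = (duration > 0.0) ? t / duration : 1.0;
    if (u > 1.0) u = 1.0;
    if (u < 0.0) u = 0.0;
    double factor = 1.0 + (target_ratio - 1.0) * u;
    for (size_t k = 0; k < n; k++)
        freq[k] *= factor;
}
```

Because every partial is multiplied by the same factor, the timbre of the sound is carried along unchanged as the pitch moves.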
The editor in DIASS prepares a raw score file,
which contains all the information needed
for the synthesis of a piece by the instrument.
It comes in a slow and a fast version.
The slow version (DIASSIN)
is most appropriate for brief musical examples and
for a systematic exploration of sound space.
By answering a series of questions,
either through a menu or through a
Graphic User Interface (GUI),
the user can create new sounds.
The process is slow because of the large number
of options available.
The fast version of the editor is appropriate
for scientific sonification.
It is also recommended for music composition
in production mode or when sounds are synthesized
following the output of a computer-assisted
composition program.
A script provides the answers to the
questions posed by the menu of the slow version.
A unique feature of DIASS is the scaling of amplitudes
to achieve a desired perceived loudness
at the level of each sound and to prevent the occurrence
of clipping at the level of an entire piece.
The loudness routines incorporate
various results of psychoacoustic research.
The software uses the Fletcher-Munson curves of equal loudness
[H. Fletcher and W. A. Munson,
Loudness, its definition, measurement and calculation,
J. Acoust. Soc. Am. 5 (1933), 82]
and the concept of critical bands
as formalized by Stevens
[S. S. Stevens,
The measurement of loudness,
J. Acoust. Soc. Am. 27 (1955), 815]
and Zwicker, Flottorp, and Stevens
[E. Zwicker, G. Flottorp, and S. S. Stevens,
Critical bandwidth in loudness summation,
J. Acoust. Soc. Am. 29 (1957), 548].
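The core of such loudness scaling can be reduced to a simple idea: weight each partial's amplitude by an equal-loudness factor for its frequency, sum the contributions, and rescale all amplitudes by one common factor to hit the prescribed level. The sketch below is a drastic simplification with assumed names; the actual DIASS routines interpolate the Fletcher-Munson curves and account for critical bands.

```c
#include <stddef.h>

/* Simplified loudness-scaling sketch (not the DIASS implementation):
   weight[k] stands in for an equal-loudness weight of partial k.
   All amplitudes are scaled by one factor so that the weighted sum
   equals the prescribed target loudness; the factor is returned. */
double scale_to_loudness(double *amp, const double *weight,
                         size_t n, double target)
{
    double loud = 0.0;
    for (size_t k = 0; k < n; k++)
        loud += weight[k] * amp[k];
    if (loud <= 0.0) return 0.0;   /* silent sound: nothing to scale */
    double factor = target / loud;
    for (size_t k = 0; k < n; k++)
        amp[k] *= factor;
    return factor;
}
```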
The anticlip option guards against overflow
when more than one sound is played simultaneously.
Overflow, the occurrence of sample values in the sound file
exceeding the available register size,
causes "clipping" when the sound file is played.
The anticlip routines guarantee that all computed
sample values fit in 16-bit registers, while maintaining
the desired perceived loudness ratios of all the sounds
in the piece.
Practically, this feature implies that DIASS can produce
works with a wide dynamic range in a single run of the program.
There is no need to resort to "post-production" digital or
analog processing.
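The global-rescaling idea behind anticlip can be sketched as follows. The code is illustrative, with assumed names, not the DIASS routines: find the peak sample of the mixed piece and, if it exceeds the 16-bit range, scale every sample by the same factor.

```c
#include <math.h>
#include <stddef.h>

/* Anticlip sketch (hypothetical, not DIASS source): if the peak
   sample magnitude exceeds what a signed 16-bit register can hold,
   rescale ALL samples by one global factor.  Returns the factor
   applied (1.0 when no rescaling was needed). */
double anticlip(double *samples, size_t n)
{
    const double LIMIT = 32767.0;      /* max 16-bit magnitude */
    double peak = 0.0;
    for (size_t i = 0; i < n; i++)
        if (fabs(samples[i]) > peak) peak = fabs(samples[i]);
    if (peak <= LIMIT) return 1.0;     /* no overflow: unchanged */
    double factor = LIMIT / peak;
    for (size_t i = 0; i < n; i++)
        samples[i] *= factor;
    return factor;
}
```

Because one factor is applied to the entire piece, the loudness ratios established by the loudness routines survive the rescaling.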
After the loudness and anticlip routines
have been applied to the raw score file,
the latter becomes the final score file,
which is then passed on to the instrument.
DIASS functions as part of the
M4C synthesis language
developed by Beauchamp and co-workers at the UIUC.
Its sequential version requires only one processor.
A parallel version, DIASS_M4C,
designed for a distributed-memory
multiprocessor environment, uses the standard
MPI message-passing library.
A "master" node distributes
the computation of the sounds
among the "slave" nodes,
the slaves compute the sounds,
and a "mixing" node integrates
the sounds into the piece
as they are delivered
by the slave nodes.
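The division of labor can be illustrated by the master's assignment step alone. The real DIASS_M4C exchanges MPI messages between the nodes; `assign_slave` below is a hypothetical helper showing one simple (round-robin) way a master could map sounds to slave nodes.

```c
#include <stddef.h>

/* Hypothetical round-robin assignment (not DIASS_M4C source):
   sound number `sound` goes to one of the slaves, numbered
   1..num_slaves (node 0 being the master). */
int assign_slave(size_t sound, int num_slaves)
{
    return (int)(sound % (size_t)num_slaves) + 1;
}
```

A scheme like this keeps all slaves busy when sounds have similar cost; for sounds of very different complexity, a master that hands out work on demand balances the load better.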
To assist in the perception of sound features,
we have developed
a software program to represent complex sounds
as graphical objects in an immersive
virtual-reality (VR) environment.
The environment can be
a room-size three-dimensional VR environment or
a two-dimensional representation
of a three-dimensional environment.
Images are computed on the fly from score files.
These score files are the same as the files
from which the sound files are generated,
and a one-to-one relationship between
visual attributes and sound qualities
guarantees that the visual images are
exact representations of the sounds.
The following compositions by Sever Tipei
have been produced with DIASS:
- AGA MATTER for piano and computer-generated tape, 1992
- RICE MATTERS for computer-generated tape, 1993
- CURSES for solo male voice, backup group, and computer-generated
tape, 1996 (partially produced with DIASS)
- ... for computer-generated tape, 1996
- Sonic Residues for computer-generated tape
(50 variants, performed December 21, 1997, at the
Linden Gallery, Melbourne, Australia)
- BERLIN-folds #1, #2 for computer-generated tape, 1998
- A.A.-folds for computer-generated tape, installation
at the Int'l Computer Music Conference 1998, Ann Arbor, Michigan
DIASS was designed by Sever Tipei and Christopher Kriese.
The initial code was written by Christopher Kriese in 1991-92
and first implemented on the CRAY Y-MP at the National
Center for Supercomputing Applications (NCSA),
University of Illinois at Urbana-Champaign (UIUC).
Between 1994 and 1996, David Ralley, working under the supervision
of Sever Tipei, expanded the capabilities of the instrument.
With Cheryl Herndon, he added the loudness routines.
Arun Chandra implemented various improvements in 1996 and 1997.
A team including Dave Blumenthal, Mario Lauria, Thomas Lawrence,
and Scott Pakin modified M4C and wrote the parallel version of M4C.
Their work was done at the UIUC as a class project for Tipei's seminar
Musical Applications on Supercomputers
in the 1995 Spring semester.
DIASS_M4C was subsequently implemented on the
IBM Scalable POWERparallel System (SP)
at Argonne National Laboratory (ANL)
by Dave Blumenthal.
Since 1996, DIASS has been the core element
of a joint project between the UIUC and ANL.
The project is currently co-directed by
Sever Tipei (UIUC) and Hans Kaper (ANL).
During the 1996 Spring semester, Dave Blumenthal and Max Levchin ...
M4CAVE was implemented at Argonne by Elizabeth Wiebel,
an undergraduate student from St. Norbert College
in De Pere, Wisconsin, who was sponsored by
Argonne's Student Research Participation Program,
and Morris Chukhman, graduate student at the UIUC.
Presently, DIASS is being rewritten in C++.
Preliminary code analysis was done by
Mike Piacenza, Bill Whitehouse, and a group from the
Musical Applications on Supercomputers seminar.
Their ideas were further worked out at Argonne by
Régine Migieu, a student from the
Université Claude Bernard - Lyon I (France)
and implemented by Ming Zhu at the UIUC.
A user's manual for DIASS is in preparation.
H. G. Kaper and S. Tipei,
Manifold Compositions, Music Visualization,
and Scientific Sonification in an Immersive Virtual-Reality Environment,
Proc. Int'l Computer Music Conference '98,
Ann Arbor, Michigan (October 1998), pp. 399-405
H. G. Kaper, S. Tipei, and E. Wiebel,
Computing, Music Composition, and the Sonification of ...,
submitted for publication.
H. G. Kaper, D. Ralley, and S. Tipei,
Perceived Equal Loudness of Complex Tones:
A Software Implementation for Computer Music Composition,
Proc. 4th Int'l Conf. in Music Perception and Cognition
(Montreal, Canada, August 1996), pp.127-132
H. G. Kaper, D. Ralley, J. Restrepo, and S. Tipei,
Additive Synthesis with DIASS_M4C on
Argonne National Laboratory's IBM POWERparallel System (SP),
Proc. 1995 Int'l Computer Music Conference
(Banff, Canada, September 1995),
International Computer Music Association,
San Francisco, California, 1995, pp. 351-352
C. Kriese and S. Tipei,
A Compositional Approach to Additive Synthesis on Supercomputers,
Proc. 1992 Int'l Computer Music Conf.
(San Jose, California, September 1992),
International Computer Music Association,
San Francisco, California, 1992, pp. 394-395
DIASS was demonstrated at
- SuperComputing'95, San Diego, California (December 1995)
- SuperComputing'96, Pittsburgh, Pennsylvania (November 1996)
- SuperComputing'97, San Jose, California (November 1997)
- ICAD '97, Palo Alto, California (November 1997)
- Int'l Computer Music Conference '98, Ann Arbor, Michigan
- SuperComputing'98, Orlando, Florida (November 1998)
DIASS was developed with funds provided by
the UIUC Research Board.
The work at Argonne is supported by the MCS Division.
Until 1994, the NCSA provided time on their CRAY Y-MP.
Since 1994, all computations have been done at Argonne's
Center for Computational Science and Technology.
Hans G. Kaper, MCS Division
Argonne National Laboratory
Argonne, IL 60439
Sever Tipei, School of Music
University of Illinois at Urbana-Champaign
Urbana, IL 61801
Last updated: October 21, 1998 (HGK)