From Bioelectricity to Brain Computer Interfaces

The untold story of brainwave sonification!

Early studies on bioelectrical phenomena

The history of brainwave research, and more generally of the study of bioelectrical phenomena in animals and plants, starts around 1780 with Luigi Galvani. Galvani discovered that he could cause contraction in a frog’s leg muscles by applying an electrical current to exposed nerves. This work was followed by that of Emil Heinrich Du Bois-Reymond, considered the founder of modern electrophysiology, who in the 1840s began to measure biological currents in electric fish and later in humans, via electrodes embedded directly in his own arm. Toward the end of the 19th century, the British neurophysiologist Richard Caton investigated the spontaneous currents of brain tissue. In 1875 Caton reported the successful measurement of brain electrical activity using electrodes implanted directly in the exposed cerebral hemispheres of rabbits and monkeys. This is the first recognized demonstration of the brain’s electrical activity. At the time, however, it was not believed possible to extract meaningful measurements by less invasive means, such as placing electrodes on the scalp. In 1912 Vladimir Vladimirovich Pravdich-Neminsky, a Ukrainian physiologist, photographically recorded the EEG of a dog. He also coined the term electrocerebrogram.

In 1920 Alexander Forbes replaced the string galvanometer with a vacuum tube to amplify the EEG’s small electrical signal; this became the standard for EEG amplification. Over fifty-six years of active research, Forbes contributed more than 100 publications to the development of neurophysiology. Human brainwaves were first measured four years later, in 1924, by Hans Berger, at the time an unknown German psychiatrist. Five years after that, in 1929, Berger published his brainwave results under the title Über das Elektrenkephalogramm des Menschen (On the Electroencephalogram of Man). He reported the first single-channel scalp recording of a human EEG, taken from the occipital region. He also published reports on intercortical activity and partial complex seizures. The English translation of this work, however, did not appear until 1969.

The 30s: The diffusion of EEG

In 1932, Jan Friedrich Tönnies developed the very first modern EEG machine for Hans Berger, whose discovery of the “Electroencephalogram” was recognized worldwide two years later. It was a multichannel ink-writing EEG machine, with moving paper and vibrating pens. Together with Bryan Matthews, Tönnies also designed one of the first differential amplifiers that same year, before spending three years in the physiology lab of H. S. Gasser, then director of the Rockefeller Institute for Medical Research. Donald B. Lindsley, co-founder of UCLA’s Brain Research Institute, was a pioneer in the study of human brain waves. Lindsley was one of the first scientists to use the newly discovered technique of electroencephalography to record electrical brain activity.

During his postdoctoral studies with Alexander Forbes and Hallowell Davis at Harvard University (1933-35), Lindsley himself served as the subject for the first public demonstration of EEG to the American medical community. Initially, Berger’s work was largely ignored. It was not until five years after his first paper was published, when his results were verified by the pioneering physiologists E.D. Adrian and B.H.C. Matthews, that his discovery began to draw attention. In their 1934 article in the journal Brain, Adrian and Matthews also reported a successful attempt to sonify the brainwave signals they had recorded according to Berger’s methods. While listening to his own alpha presented through a loudspeaker, Adrian tried to correlate his subjective impression of hearing the alpha come and go with the activity of looking or not looking with his eyes. This was the first example of the sonification of human brainwaves for auditory display.

The same year (1934), Fisher and Lowenback demonstrated the first spikes on an EEG due to epileptic events, Ralph Gerard and Franklin Offner developed concentric needle electrodes and EEG equipment, and Dr. Frederic Gibbs won a small grant to develop instrumentation to process electroencephalographic data. His goal was to apply the knowledge gained by Hans Berger, and confirmed by Lord Adrian, to clinical applications. In 1935 Gibbs approached Albert Grass, a recent graduate of MIT, to design three devices to amplify human EEG potentials. Grass designed moving-coil galvanometers, which enabled the embryonic EEG instrumentation to accurately and reliably record EEG frequencies on chart paper. The addition of these new galvanometers to his early amplifiers became the Grass Model I (fig. 1), used by Gibbs, Lennox, Davis and others. In 1936 the first EEG laboratory was founded at Massachusetts General Hospital.

Fig. 1: The Grass Model I, developed by Albert Grass, was the first commercially successful electroencephalograph.

EEG in the post-war period

In 1945, Albert Grass, who had first built his 3-channel EEG machine at the Harvard Medical School, formed the Grass Instrument Company. This was for many years the leading supplier of EEG machines around the world, and its provision of reliable instruments allowed a rapid advance in the clinical application of EEG. Two years later, in 1947, the American EEG Society was founded and the first International EEG Congress was held. In most early EEG studies, scientists such as H.W. Shipton (1949), V.J. Walter and W.G. Walter (1949), J. Corriol and H. Gastaut (1950), M.G.T. Hewlett (1951) and E.C. Turton (1952) reported various methods of generating stimuli in the auditory domain that in some way followed brain-rhythm frequencies. In 1949 H.W. Shipton designed an electronic trigger circuit suitable for investigating how feedback plays an important part in the analysis of human behaviour, through the use of external stimuli as a means of evoking waveforms of clinical importance. However, in these works only the amplitude of the alpha waves, or other simple and direct characteristics of the EEG signal, was used as the driving source of the musical sound.

Up to that time, signal processing was done with analog systems, often using electronic circuits or mechanical devices, but from the 1950s digital computers began to be used to simulate signal-processing systems before they were implemented in analog hardware. This was a cheap way to vary parameters and test system output. In 1954, Robert Francès attempted to measure physiological concomitants of formal musical perception through polygraph readings, including EEG, GSR, heart rate and respiration rate, while subjects listened to carefully selected musical examples. Francès was able to use variations in the polygraph readings, associated with alpha-brainwave blocking, to differentiate among subjects with more or less musical training, subjects instructed to listen actively versus passively, and subjects instructed to listen with an analytical versus a spontaneous attitude. In the late 1950s, Joe Kamiya studied the phenomenon of internal perception, or the awareness of private internal experiencing. He accidentally discovered that a subject could learn through feedback to reliably discriminate between alpha- and beta-dominant cortical states, and then further demonstrated that a subject could learn to produce such alpha or beta brain states on demand. In 1958 Jasper led a committee to standardize the 10/20 system of placing EEG electrodes on the scalp.

First EEG Sonifications in the 60s and the 70s

In 1961, Neal Miller of Rockefeller University in New York first suggested that the autonomic nervous system could be as susceptible to training as the voluntary nervous system, and that people might learn to control their heart rate and bowel contractions just as they learned to walk or play tennis [18]; the biofeedback paradigm began to be clearly articulated and became widely known throughout the 60s. Digital signal processing also started to become a discipline in its own right, although its roots go much further back. This made it possible to predict that traditional analog processing devices such as filters and spectrum analyzers would become digital, with big improvements for many applications. In 1965 Cooley and Tukey published their efficient algorithm for the Fast Fourier Transform (FFT). It made real-time signal processing feasible, along with algorithms previously thought impossible to implement on digital computers. The following year, 1966, Goldensohn assembled the first closed-circuit television EEG recording system. Automatic data processing finally became prevalent within EEG signal processing.
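The FFT matters for brainwave work because it turns a raw EEG trace into per-band power (alpha, beta, and so on), which is what later sonification and biofeedback systems actually respond to. As a minimal illustrative sketch (the synthetic signal, sampling rate and band edges are invented for the example), the following computes alpha-band power with a plain DFT, which gives the same result an FFT produces faster:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Sum the spectral power of the DFT bins falling inside [f_lo, f_hi] Hz."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):          # skip DC, stop below Nyquist
        freq = k * fs / n               # centre frequency of bin k
        if f_lo <= freq <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(-x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            power += (re * re + im * im) / (n * n)
    return power

# Synthetic "EEG": a dominant 10 Hz alpha component plus a weaker 20 Hz component.
fs = 128                                # sampling rate in Hz (one second of data)
sig = [math.sin(2 * math.pi * 10 * t / fs) + 0.3 * math.sin(2 * math.pi * 20 * t / fs)
       for t in range(fs)]

alpha = band_power(sig, fs, 8, 12)      # alpha band, 8-12 Hz
beta = band_power(sig, fs, 18, 22)      # band around the weaker component
```

A real system would use an FFT library over a sliding window, but the band-power idea is the same: here `alpha` comes out larger than `beta` because the 10 Hz component dominates the synthetic signal.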

The 1965-1975 decade was definitive and formative for digital art, precisely because the technology was new and in rapid transition, as were the imaginations and aspirations of the artists. The use of brainwaves to generate music did not occur until 1965. Alvin Lucier had begun working with the physicist Edmond Dewan in 1964, performing experiments that used brainwaves to create sound. The next year, he was inspired to compose a piece of music using brainwaves as the sole generative source. Music for Solo Performer achieved a direct mapping of a soloist’s alpha: amplified alpha signals were used to activate, either acoustically or mechanically, an array of percussion instruments. It was presented, with encouragement from John Cage, at the Rose Art Museum of Brandeis University in 1965. In 1967, Edmond Dewan described experiments using subjects wired to an EEG device, which records and graphs the electrical activity of the brain. With practice, the subjects were able to control the amplitude of their brains’ alpha rhythms well enough to transmit Morse code to a teleprinter.
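The mapping in Music for Solo Performer can be caricatured in a few lines: whenever the amplified alpha envelope rises above a threshold, a percussion strike is triggered. This is only a sketch of the idea, with an invented toy envelope and threshold, not Lucier’s actual circuitry:

```python
def alpha_gate(envelope, threshold):
    """Return the indices where the alpha envelope rises above the threshold,
    i.e. the moments a burst of alpha would strike an instrument."""
    triggers = []
    above = False
    for i, level in enumerate(envelope):
        if level >= threshold and not above:
            triggers.append(i)          # burst onset: fire the percussion
        above = level >= threshold
    return triggers

# Toy alpha-amplitude envelope: two bursts separated by a quiet stretch.
env = [0.1, 0.2, 0.9, 1.0, 0.8, 0.2, 0.1, 0.7, 0.9, 0.3]
print(alpha_gate(env, 0.5))             # -> [2, 7]
```

Only the onset of each burst fires a trigger, so a sustained stretch of high alpha strikes the instrument once rather than on every sample.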

In the late 1960s, Richard Teitelbaum was a member of the innovative Rome-based live electronic music group Musica Elettronica Viva (MEV). In performances of Spacecraft (1967) he used various biological signals, including EEG and ECG, as control sources for electronic synthesizers. Organ Music and In Tune, both realized in 1968, added heartbeat and breath sounds, sensed with contact microphones, to EEG signals in the creation of an electronic music texture [20]. Over the next few years, Teitelbaum continued to use EEG and other biological signals in his compositions and experiments as triggers for the nascent Moog electronic synthesizers.

Simultaneously, Manfred Eaton, another early experimenter, carried out experiments in music and bioelectric phenomena at the ORCUS Research Center in Kansas City. Eaton described the application of various biosignals to artistic projects in order to study aesthetic responses to the stimuli. He first published an article titled Bio-Potentials as Control Data for Spontaneous Music (ORCUS) in 1968 [21]. That summer, Erkki Kurenniemi visited an electroacoustic music conference organized by the Teatro Comunale in Florence, Italy. During the conference Kurenniemi was introduced to Eaton’s ideas of biofeedback as a source of musical or compositional material. Two of Kurenniemi’s instruments, the Dimi-S and the Dimi-T, are based on these ideas. The Dimi-T, where T stands for thinking, was developed in 1973; it used the alpha waves produced by the human brain while sleeping to control the pitch of an oscillator.

In October 1969 the Biofeedback Research Society was formed and held its first meeting in Santa Monica, and the phenomenon of biofeedback officially received its name. One year later, David Rosenboom started to experiment with musical production using alpha rhythms and to explore the relation of alpha-wave production to music perception. In Ecology of the Skin (1970), an environmental demonstration-participation performance, Rosenboom used brainwaves and heart signals from performers and audience members, translated into a musical texture, to explore the various states of awareness and consciousness associated with music performance. In 1971, the French scientist Roger Lafosse and the music pioneer Pierre Henry proposed a sophisticated live performance system known as Corticalart (art from the cerebral cortex). In a series of free performances in 1971, Henry, wearing dark sunglasses with electrodes hanging from his head, had the content of his brainwaves projected as an image whose colour changed according to his brainwave variations. The same year, Eaton first published his manifesto Bio-Music: Biological Feedback Experiential Music Systems.

Early BCI research also began in the 70s, when Professor Jacques J. Vidal first introduced the idea of direct brain-computer communication. In his BCI Laboratory at the University of California, Los Angeles (UCLA), under a grant from the National Science Foundation, a successful project demonstrated that computer-generated visual stimulation can evoke responses that provide a communication channel between a subject and a computer. In 1973, he published Toward Direct Brain-Computer Communication; this research also marks the first appearance of the expression brain-computer interface in the scientific literature. The Advanced Research Projects Agency (ARPA) also attempted to develop similar communication systems driven by brainwaves for use in military applications. The aim was to improve the performance of military personnel, especially in tasks involving heavy mental loads; the investigation produced valuable insights but, due to the technological limitations of the time, it made minimal progress toward its goals. Around the same time, Rosenboom founded the Laboratory of Experimental Aesthetics at York University in Toronto, where pioneering collaborations between scientists and artists were encouraged. The laboratory undertook experimentation into the artistic possibilities of brainwaves and other biological signals in cybernetic biofeedback artistic systems. Many artists and musicians visited and worked at the facility during this time, including John Cage, David Behrman, La Monte Young and Marian Zazeela. Some of the results of the work at this lab were published in the book Biofeedback and the Arts (Aesthetic Research Centre of Canada, 1976). Parallel to the research in Toronto, the Montréal group SONDE, along with Charles de Mestral, gave brainwave performances. At Logos in Ghent, Belgium, real-time brainwave-triggered concerts were presented in 1972 and 1973. In Baltimore the Peabody Electronic Music Consort gave performances, and Rosenboom and others continued their work at Mills College.

Toward the end of the 1970s, biofeedback and brainwave research fell into a period of quiescence due to many factors: primarily a lack of funding and of sufficiently powerful computers, and secondarily the growing scepticism of the scientific community. Ancoli and Kamiya (1978) critiqued the methodological weaknesses and inconsistencies of many of the early studies on alpha feedback training. They found the quality and length of training inadequate in many studies, and criticized researchers for neglecting to monitor such critical variables as the social interaction between experimenter and subject, and instructional set. In 1979 Basmajian declared that “Alpha feedback is still a mystery but it is not an acceptable treatment method”. For about ten years, almost nothing happened in the field.

Brainwave Music from the 90s to the Present

During the 1980s, numerous advances in IC technology, such as fixed-point and floating-point microprocessors for digital signal processing, and more generally the diffusion of computing capability into nearly every electrophysiology laboratory, were sufficient to allow advanced signal processing. Digital EEG became prevalent, and topographic mapping made EEG popular in a variety of clinical fields. Kamiya’s work on the voluntary production of alpha states coincided with the dawning counter-cultural interest in altered states of consciousness and the emergence of a new interest in Eastern religions, the psychology of consciousness, and transpersonal psychology (Moss & Keen, 1981; deSilva, 1981). Cleve Barry Moler, a mathematician and computer programmer, invented Matlab, a numerical computing package. Recognizing its commercial potential, in 1984 he co-founded MathWorks with Jack Little to commercialize it and continue its development. Matlab was first adopted by researchers and practitioners in control engineering, but quickly spread to many other domains. It is now popular among scientists involved in signal processing, and numerous toolboxes have been developed for EEG analysis. In that decade, pioneering researchers also tried to apply to EEG data analysis techniques developed in electrical engineering and information theory, including time/frequency analysis and Independent Component Analysis (ICA). These techniques revealed EEG processes whose dynamic characteristics correlate with behavioural changes, events that could not be seen before because of the technical limitations that had constrained EEG data analysis.

By 1990, real-time digital EEG monitoring was standard. The availability of fast and low-cost digital computers and the improvements in signal-processing algorithms allowed EEG to be used as a research tool in the cognitive neurosciences. The same year, U.S. President George Bush declared the decade starting in 1990 the “Decade of the Brain”. Benjamin Knapp and Hugh Lusted began working on a computer interface called the BioMuse, which permitted a human to control certain computer functions via bioelectric signals, including EEG and EMG. In 1992, Atau Tanaka was commissioned by Knapp and Lusted to compose and perform music using the BioMuse as a controller. Tanaka continued to use the BioMuse, primarily as an EMG controller, in live performances throughout the 1990s. In 1996, Knapp and Lusted wrote an article for Scientific American about the BioMuse called Controlling Computers with Neural Signals.

In the early 2000s there was a renewed interest in brainwave music and a resurgence in its performance. In 2001, Neam Cathode, at Montreal’s Oboro Gallery, showed Cyber Mondrian, a work that combined Mondrian-like generated images with synthesized sound controlled using the Interactive Brainwave Visual Analyzer (IBVA) system. The same year, Andrew Brouse created his InterHarmonium, an internet-enabled brainwave performance system based on Max/MSP and OpenSoundControl. In 2002 David First created OPERATION: KRACPOT, using “brainwave entrainment” and the phenomenon of the Schumann resonances to create haunting music. In 2003, James Fung and Steve Mann prepared an improvised collective musical piece created interactively from the brainwaves of audience participants. Regenerative Brainwave Music (REGEN3) was orchestrated by feeding tiny micro-voltages gathered from forty wired performers into a responsive EEG network: a “cyborg collective” comprising the cybernetic interactions between performers, musicians, electronics and computing machines. Adam Overton, a student of David Rosenboom at CalArts, performed his series of works entitled Sitting. Breathing. Series and Other Biometric Work.

Many other artists have utilized brainwaves for musical purposes, among them a Montreal group made up of Pierre Droste, Andrew Culver and Charles de Mestral, and Janez Janša, who developed Brainscore and led the project Brainloop. An underwater brain concert was even performed during ICMC 2007 using an electroencephalophone. Much of this new work was naive in the sense that the musicians were not fully cognisant of the rich history of brainwave music and research that had preceded them. There has also been something of a bifurcation between those using hobbyist “biofeedback” equipment or techniques and those preferring a more rigorous “scientific” approach.

Over the past decade, BCI research has also grown rapidly and become a very popular research topic around the world. There are several active BCI research groups, including the University of British Columbia, the Wadsworth Center in Albany (US), the Berlin BCI group (a joint venture of several German research organisations), the University of Tübingen, also in Germany, and the Graz BCI group in Austria. At the Neuromusic lab at the University of Plymouth, Eduardo Miranda runs numerous research projects on brainwave music using brain-computer interfaces.


Andrew Brouse (2004), A Young Person’s Guide to Brainwave Music: Forty Years of Audio from the Human EEG.
Larry R. Squire (1996), The history of neuroscience in autobiography, Volume 1. Society for Neuroscience, 1996.
Berger, H. (1969), “On the Electroencephalogram of Man”, The Fourteen Original Reports on the Human Electroencephalogram, Electroencephalography and Clinical Neurophysiology, Supplement No. 28. Amsterdam: Elsevier.
Adrian, E.D., and Matthews, B.H.C. (1934), “The Berger Rhythm: Potential Changes from the Occipital Lobes in Man”, Brain, 57(4): 355-8.
H.W. Shipton, “An Electronic Trigger Circuit as an Aid to Physiological Research”, Journal of the British Institute of Radio Engineers 4 (1949) pp. 374-383.
R. Francès, The Perception of Music, trans. from the 1958 edition by W.J. Dowling (Hillsdale, NJ: Lawrence Erlbaum Associates, 1988).
Kamiya, 1969, 1994; Gaarder & Montgomery, 1977, p. 4
Swartz BE, Goldensohn ES “Timeline of the history of EEG and associated fields” Electroencephalogr Clin Neurophysiol. 1998 Feb;106(2):173-6.
Lucier, “Statement On: Music for Solo Performer” (1971), in D. Rosenboom, ed., Biofeedback and the Arts, Results of Early Experiments, 2nd Ed. (Vancouver: Aesthetic Research Centre of Canada Publications, 1976) pp. 60-61.
R. Teitelbaum, “In Tune: Some Early Experiments in Biofeedback Music (1966-74)” (1974)
M.L. Eaton, “Bio-Potentials as Control Data for Spontaneous Music”, ORCUS Research Technical Publication, TP-3001 (Kansas City, MO: 1968)
Rosenboom, D., “Extended Musical Interface with the Human Nervous System”, Leonardo Monograph Series, International Society for the Arts, Sciences and Technology (ISAST), 1997.
Vidal, J., “Toward Direct Brain-Computer Communication”, Annual Review of Biophysics and Bioengineering, L.J. Mullins (Ed.) (1973) pp. 157-180.
H. Moravec, “When Will Computer Hardware Match the Human Brain?”, Journal of Evolution and Technology, Vol. 1, 1998.
Delorme, A., & Makeig, S. (2004), “EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis”, Journal of Neuroscience Methods, 134(1), 9-21. doi: 10.1016/j.jneumeth.2003.10.009.
Hugh S. Lusted and R. Benjamin Knapp, “Controlling Computers with Neural Signals,” Scientific American. (October 1996): 82-87.
E. Miranda and A. Brouse, “Toward Direct Brain-Computer Musical Interfaces”, in Proc. of the Conf. on New Interfaces for Musical Expression (NIME), 2005, pp. 216-219.