Music creation has changed dramatically over the past 50-odd years as music technology has shifted from an analog reality to a digital one. Where performers on acoustic or electric instruments most often read sheet music, guitar tabs, or simpler notation, MIDI offers a similar language for digital devices, letting them play back music without the need for a live performer. This article covers the story of MIDI, from its 1.0 origins to its forward-thinking 2.0 innovations, highlighting the ways this revolutionary technology has changed music along the way.

MIDI, short for Musical Instrument Digital Interface and sometimes written as “midi”, is a communications standard originally created to give digital music technologies a common way of communicating with each other. Last year, after more than thirty years of MIDI dominating digital music production, MIDI 2.0 was announced to the excitement of artists worldwide. In anticipation of another MIDI revolution, let’s step back to the reason it was created in the first place, and review the impact it has had on the world of music since then.

The problem: gear was becoming increasingly difficult to interconnect

Before MIDI was invented in the early 1980s, digital synthesizers transferred information using proprietary, company-specific standards. This made commercial sense: it allowed companies like Oberheim, Yamaha, and Roland to box out competitors by making their products mutually incompatible. But it left music creators, who usually bought gear from a host of different manufacturers, on their own when dealing with the increasingly complex issue of compatibility.

The solution: a common language to improve communication between devices

In response, people like Ikutaro Kakehashi of Roland and Dave Smith of Sequential Circuits pushed for an industry-wide communications standard. There was initially heavy backlash against the concept of MIDI. Designers were skeptical of its limitations, companies were protective of their closed product ecosystems, and musicians feared for their livelihoods as the musical applications of MIDI became clearer. In the end, hardware manufacturers agreed that MIDI would benefit their shared market. At the winter 1983 National Association of Music Merchants (NAMM) trade show, a Roland Jupiter-6 synthesizer communicated with a Sequential Circuits Prophet-600, bringing MIDI to the public eye.

So, what exactly is MIDI?

It might be easier to start with what MIDI is not: audio. MIDI transmits performance information rather than generating or recording audio itself. In this way, MIDI data is more like sheet music: a computer or other device reading the MIDI data can then play it. One of the powerful benefits of MIDI is that this interpretation can be changed; instruments, pitch, and arrangements can all be easily adjusted.

MIDI involves three main components: MIDI messages (the MIDI protocol), physical transports, and file formats. The MIDI protocol describes the information to be shared using standardized messages like Note On and Note Off. MIDI messages break down into two high-level classifications, channel messages and system messages, depending on whether they address a specific channel or the system as a whole. Physical transports carry these messages; they include 5-pin DIN MIDI, USB, FireWire, Ethernet, Wi-Fi (LAN), and Bluetooth. Standard MIDI Files (SMF), which usually use a *.mid extension, store sequences of MIDI messages; loading one into a DAW lets the user adjust the interpretation of the file. Because they don’t have to store any actual audio, MIDI files tend to be far smaller than audio formats like *.mp3 or *.wav.
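To make the message side concrete, here is a minimal Python sketch of how the three bytes of a Note On or Note Off channel message are laid out. The byte layout follows the MIDI 1.0 specification; the helper functions themselves are just for illustration.

```python
# Minimal sketch of raw MIDI 1.0 channel messages.
# Status byte: message type in the high nibble, channel (0-15) in the low nibble.
# Data bytes are 7-bit (0-127), which is why MIDI 1.0 values top out at 127.

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a three-byte Note On message (status 0x9n)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel: int, note: int, velocity: int = 0) -> bytes:
    """Build a three-byte Note Off message (status 0x8n)."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

# Middle C (note 60) at a moderate velocity on channel 1 (index 0):
print(note_on(0, 60, 96).hex())   # -> 903c60
print(note_off(0, 60).hex())      # -> 803c00
```

Whatever sits at the other end of the cable is free to interpret those bytes however it likes, which is exactly the sheet-music quality described above.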

The impact of MIDI on music

1983 – Communication between instruments

The first demonstrations of MIDI’s capability included a person playing notes on one synth while the MIDI information was sent to and played back by another synth. The video below features MIDI co-creator Dave Smith discussing the conception and early days of MIDI.

1984 – Workflow innovation

MIDI-supported instruments were soon joined by MIDI-supported computers. This allowed for a revolutionary music production workflow innovation: MIDI mockups. Music scores could now be translated into MIDI, sent through a sequencer and played back by different synths using approximate replicas of the desired sounds. Using MIDI mockups, composers could now cheaply demo and tweak their scores, delaying the need to hire professional players until the score had been perfected. Watch legendary composer Hans Zimmer talk about his composition workflow based on MIDI mockups using Cubase.

1985 – Instrument innovation

The digitization of traditional instruments had begun back in the 1970s, but the MIDI standard allowed new instruments, both virtual and physical, to be invented that interpret and play MIDI instructions in new ways. The last 15 years have seen an explosion of MIDI controller development, from established manufacturers to new startups and even DIY enthusiasts repurposing video game controllers into MIDI-based instruments.

1986 – Digitization of music production

Just as MIDI was being introduced, personal computers were growing more powerful and popular. Personal computers of the 1980s like the Apple IIe, Atari ST, and Commodore 64 all supported MIDI. Software editors complementing the MIDI workflow, called Digital Audio Workstations (DAWs), evolved through the 1990s and beyond, with notable early products including Sound Tools (1989), Studio Vision (1990), Pro Tools (1991), and Cubase Audio Mac (1992). MIDI is also versatile enough to control far more than instruments; lights and smoke machines, among other stage gear, can respond to the same messages. Many DAWs also use highly visual interfaces, meaning MIDI is an easy way to start making music arrangements with little-to-no formal training in music theory.
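Because MIDI messages are just small numbers, the same message type that sweeps a synth’s filter can dim a stage light. A quick sketch, assuming a hypothetical lighting controller that listens for Control Change 7 on channel 1 (both choices are invented for this example):

```python
# Illustrative only: a Control Change message (status 0xBn) carrying a 7-bit value.
# What it controls depends entirely on the receiver; the controller number (7)
# and the channel are assumptions made up for this sketch.

def control_change(channel: int, controller: int, value: int) -> bytes:
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

# Fade a hypothetical dimmer from dark to full brightness in 128 steps:
fade = [control_change(0, 7, level) for level in range(128)]
```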

Take a look at this 1986 demonstration of then-cutting-edge “MIDI Music”, which used a MIDI-compatible computer running an early DAW to send information to a synth, which would then manipulate and output the audio. This was before virtual instruments allowed the entire music creation and audio manipulation process to take place entirely in software.

MIDI 2.0: What to expect

The original MIDI protocol of the 1980s has been only slightly adjusted over the years, but the 2019 announcement of MIDI 2.0 represents a genuinely new and exciting phase in MIDI’s history. Google, imitone, Native Instruments, Roland, and more have all been contributing to 2.0’s ongoing development. Upcoming developments to watch for include:

New devices

New MIDI 2.0 instruments like Roland’s A-88MKII keyboard will communicate bidirectionally rather than only transmitting information one way. They’ll also fall back to MIDI 1.0 when necessary, so older devices are not made obsolete.

Message and timing improvements

MIDI 1.0 data values are typically 7-bit. In MIDI 2.0, velocity is 16-bit, while the 128 control change messages, the 16,384 registered and assignable controllers, poly and channel pressure, and pitch bend are all 32-bit. MIDI 2.0 will also address known issues with precision and sync by introducing ‘jitter reduction timestamps’ for improved timing accuracy. These changes will help permit more sophisticated input device design.
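To see what the wider values buy, here is a short Python sketch comparing the number of discrete steps at each bit depth, along with a naive bit-repeat upscaling from 7 to 16 bits. MIDI 2.0’s published translation rules are more detailed than this (they take care over minimum, center, and maximum values), so treat the conversion as an illustration rather than the official algorithm.

```python
# Resolution comparison: how many discrete steps each bit depth allows.
for name, bits in [("MIDI 1.0 data byte", 7),
                   ("MIDI 2.0 velocity", 16),
                   ("MIDI 2.0 controllers / pitch bend", 32)]:
    print(f"{name}: {bits}-bit = {2**bits:,} steps")

def upscale_7_to_16(value: int) -> int:
    """Naive upscaling: place the 7-bit value in the top bits and repeat it
    below, so 0 maps to 0 and 127 maps to 65535. The official MIDI 2.0
    translation rules differ in detail."""
    v = value & 0x7F
    return (v << 9) | (v << 2) | (v >> 5)

print(upscale_7_to_16(0), upscale_7_to_16(127))  # -> 0 65535
```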

Forward-thinking design

Easy-to-transfer ‘profiles’ that store device configurations could shorten the time it takes to map a unique MIDI device, improving workflow efficiency. Profiles can also be paired with ‘property exchange’, which lets the MIDI user quickly learn about their device. Both profiles and property exchange reflect the developers’ priority that MIDI 2.0 be a forward-thinking extension of MIDI 1.0.
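Property exchange data is carried as JSON inside MIDI-CI System Exclusive messages. As a purely hypothetical sketch of the kind of self-description a DAW might receive from a device (every resource and field name below is invented for illustration):

```python
# Hypothetical only: the sort of device description property exchange could
# deliver. Real resource and field names come from the MIDI-CI specification
# and from each manufacturer; these are invented for illustration.
device_description = {
    "manufacturer": "ExampleCo",
    "model": "ExampleSynth 9",
    "midiVersion": "2.0",
    "profiles": ["General MIDI 2"],          # profiles the device can enable
    "controllers": {
        "74": {"name": "Filter Cutoff", "bits": 32},
    },
}

# A DAW could use this to auto-map knobs instead of asking the user to do it:
for number, info in device_description["controllers"].items():
    print(f"CC {number}: {info['name']} ({info['bits']}-bit)")
```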