MIDI, or the Musical Instrument Digital Interface, is a means by which computers and musical instruments can communicate. It's a language that allows you to give instructions to a computer that it will then send to the synthesizer on your sound card, or to any other MIDI devices that you may have available. MIDI is a great way to work with music and has very powerful capabilities that will appeal to users of all levels. There are lots of unfamiliar terms and concepts in the MIDI language, though, and it's easy to get frustrated if you don't have a grasp of some basic ideas. The first section of this guide will help you understand what MIDI is and teach you what it can do for you.

History
MIDI was born in the early 1980s when electronic instrument makers, primarily in the US and Japan, recognized that their instruments must be able to talk to one another. After the details were worked out, manufacturers soon began to include electronic circuitry in their equipment that allowed it to understand the instructions MIDI used. Before long, nearly every instrument maker in the world had adopted the standard, and though there have been refinements and modifications to MIDI along the way, even the earliest MIDI instruments are still capable enough to be used today. Since its adoption, MIDI has dramatically changed the way music is created, performed and recorded.

What is MIDI
MIDI is a universally accepted standard for communicating information about a musical performance by digital means. It encompasses both hardware and software components, and though it could be used for sending information about many other things, such as the control of lighting in a theater, or even to control your coffee maker, it was developed to transmit instructions about music. Like a music score, on which notes and other symbols are placed, a MIDI transmission carries instructions that must be acted on by some device that can make sound. While a clarinet or guitar player will interpret a written music score and produce the sound required, it is most likely a synthesizer or drum machine that will react to MIDI information. Fortunately for us, a complete set of these instructions can be captured and stored by a computer, and several types of music software can be used to edit and alter them. If the information is sent to several different MIDI devices, an entire electronic orchestra can be at the musician's disposal. MIDI does not (except in rare cases) actually transmit sound electronically; you couldn't connect a MIDI cable to a loudspeaker and expect to hear anything (you'd probably damage both your speakers and your ears if you tried!). Instead, it is the sound-producing capabilities of the synthesizer, whether it's on a sound card in your computer or a stand-alone device, that will create the sound you hear.

How Does it Work
A MIDI transmission consists of a series of signals, called bits for binary digits, that pass through a MIDI cable. These signals are electrical impulses, some strong, some weak, that represent the 1s and 0s that make up the language of computers (any device that wants to send or receive MIDI data must be equipped with a microprocessor, the "brains" of every computer). When the impulses reach their destination, for example a synthesizer, the operating system of the synthesizer interprets them as a series of instructions that usually result in the production of a sound. This sound must be amplified, so the synthesizer will typically be connected to an amplifier or mixer.
The bits in a MIDI transmission move at a fairly high rate of speed, 31,250 per second to be exact, and are transmitted in a serial manner, meaning one after another. (A parallel transmission contains a number of signals that pass at the same time.) Not every bit represents a different note or event, however. Bits are grouped into bytes, which in turn are grouped into MIDI messages, each of which conveys important information about some musical event (Fig. 1). (Each MIDI byte is eight bits long, though when bytes are being transmitted, two extra framing bits, called the start and stop bits, are added to the beginning and end of the byte, hence the 10-bit length of each transmitted byte.)
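Since each transmitted byte occupies 10 bits on the wire, the maximum throughput is easy to work out. A quick sketch in Python, using only the figures given above:

```python
# MIDI runs at 31,250 bits per second; each byte is framed
# by a start bit and a stop bit, so 10 bits travel per byte.
BAUD_RATE = 31_250
BITS_PER_FRAMED_BYTE = 10

bytes_per_second = BAUD_RATE // BITS_PER_FRAMED_BYTE

# A Note On message is three bytes (status + note + velocity),
# so at most this many complete Note On messages fit in one second:
note_ons_per_second = bytes_per_second // 3

print(bytes_per_second)     # 3125
print(note_ons_per_second)  # 1041
```

That ceiling of roughly a thousand three-byte messages per second is ample for a single performer, though dense multi-channel sequences can approach it.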
Fig 1. -MIDI data is transmitted using a 10-bit packet that includes a start and stop bit.-
Some MIDI messages detail specific aspects of a musical performance: what notes should be heard; how loud they should be; what type of sound (trumpet, drum, flute) should play the notes, etc.; while others are more general in nature. Together, MIDI messages represent an entire language of musical actions, and can be used to convey all the details of a complete symphony or a simple hymn.

What it Takes
In order to communicate in the language of MIDI, a device should be able to send and receive MIDI information, though many common devices are created to do primarily one or the other. A sound card in a computer, for example, must be given instructions that are generated by some other source; it cannot create any MIDI messages on its own. Similarly, certain electronic instruments, known as tone or sound modules, are also only able to respond to messages generated from the "outside." By contrast, a class of instruments called keyboard controllers are intended for transmitting MIDI data only, and have no way to make sound. Whatever their capabilities, all MIDI devices must contain a microprocessor, which is a computer chip that deciphers and acts upon MIDI messages, as well as physical connections called Ports, for sending and receiving data.

MIDI Channels
One of the great capabilities of MIDI is its ability to transmit messages to different electronic musical instruments at the same time. Each instrument can distinguish which messages are for it because the messages contain channel information, which acts like an address or shipping label for the message. These MIDI channels are not physically separated, i.e., they are not transmitted on separate strands of wire. Rather, the different channel numbers (1-16) are contained in the beginning of the MIDI message, and determine whether an instrument or device will respond to that message. In this way, messages can be directed to certain devices, while other devices, which might also be receiving the information, will ignore them. Most newer instruments can be programmed to respond to any one or even all MIDI channels. Because of this, the user has extensive control over how different instruments react to the information that they receive.
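The channel "address" lives in the low four bits of each channel message's status byte, which is how sixteen channels share a single cable. A minimal sketch (0x90, the Note On status value, is a standard MIDI assignment):

```python
def status_byte(message_type: int, channel: int) -> int:
    """Combine a message type (high nibble) with a channel (low nibble).

    Channels are numbered 1-16 for humans but 0-15 on the wire.
    """
    assert 1 <= channel <= 16
    return message_type | (channel - 1)

NOTE_ON = 0x90  # high nibble identifying a Note On message

# The same Note On message addressed to channel 1 and to channel 10:
print(hex(status_byte(NOTE_ON, 1)))   # 0x90
print(hex(status_byte(NOTE_ON, 10)))  # 0x99

def channel_of(status: int) -> int:
    """Recover the human-readable channel (1-16) from a status byte."""
    return (status & 0x0F) + 1

print(channel_of(0x99))  # 10
```

A receiving device simply compares the low nibble of each status byte against the channel it has been told to listen on, and ignores everything else.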
There are certain classes of messages called system messages that don't use a channel, since they are intended for all devices connected to the MIDI chain. Messages that deal with tuning or timing information are in this category. There are also other cases where individual messages do not need their own channel label, for example when all the notes of a melody are to be played by a certain instrument on the same channel. In this case, a channel designation can be set at the beginning of the melodic sequence and used for all messages in that series (a technique known as running status).

MIDI Messages
MIDI messages are the language of MIDI; they are the words MIDI uses in a transmission to communicate the information that must pass from a source to a destination. There are many types of MIDI messages, though they all fall into two categories: channel messages and system messages. Channel messages are those that carry specific channel information, such as those described above. These include messages such as what note an instrument should play (called a Note Message), and Program Change messages, which tell the instrument what sound it should make while playing the note. System messages, as described above, are either intended for all the instruments currently connected to the transmitting device, or are meant to convey information to a specific instrument that is general in nature and doesn't represent specific details of a performance.
Most messages consist of at least two bytes. The first byte is called the status byte, which tells the receiving device what type of message it is. Basically, it identifies the message and prepares the device for a response. MIDI uses the numbers between 128 and 255 for this part of the message. What follows is the actual data the device needs; these bytes are called data bytes. They represent the details of the message; the values the instrument will use to perform its task. MIDI uses the numbers 0 to 127 for data bytes. Some messages use only one data byte, others need two, while some need none at all. We'll look at a few common messages to see what type of information they contain.

Note On and Note Off Messages
Perhaps the most basic of all messages is the pair called Note On and Note Off. A Note On message is transmitted when a key is pressed on a keyboard, and a Note Off is transmitted when it is released. When a synthesizer receives a Note On message, it looks immediately for additional information, specifically, a data byte that details what note it should play and another that specifies how loud it should play it. MIDI has only 128 different numbers for designating pitch and loudness (or velocity) levels, so immediately after the Note On message is sent, a data byte representing a number between 0 and 127 will appear for the Note Number, followed by another that specifies the velocity level for that note. The note will continue to play until the Note Off message is received, and it too must contain note and velocity numbers. Note and velocity details must be included with the Note Off message because it is possible that a synthesizer will be playing several notes when the Note Off is received. If it received the Note Off without a specific Note number, it wouldn't know which note to stop playing. The Velocity number that appears with the Note Off is not quite as important; in fact, some synthesizers simply ignore it. Nevertheless, it will be sent as part of the data. The Note On / Note Off combination constitutes the most common pair of messages in any MIDI transmission, though there are many other parts of the transmission that we need to explore (Figure 3).
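A note struck and released on channel 1 therefore comes out as six bytes. A sketch of that stream (the status values 0x90 for Note On and 0x80 for Note Off on channel 1 are standard MIDI assignments; the note and velocity numbers are just example values):

```python
NOTE_ON, NOTE_OFF = 0x90, 0x80  # status bytes for channel 1

def note_on(note: int, velocity: int) -> list[int]:
    return [NOTE_ON, note, velocity]

def note_off(note: int, velocity: int = 64) -> list[int]:
    # Note Off carries a velocity byte too, though many synths ignore it.
    return [NOTE_OFF, note, velocity]

# Middle C (note 60) struck fairly hard, then released:
stream = note_on(60, 100) + note_off(60)
print(stream)  # [144, 60, 100, 128, 60, 64]

# Status bytes are 128 or above; data bytes are 0-127:
print([b >= 128 for b in stream])  # [True, False, False, True, False, False]
```

Notice that the Note Off message repeats the note number, so the synthesizer knows exactly which of its sounding notes to silence.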
Fig 3. -The MIDI message Note On is followed by two data bytes, as is the Note Off message.-

Program Change Messages
When a synthesizer is first turned on, it will load one of its sounds into its RAM (random access memory) and prepare itself to receive note messages. These sounds are permanently stored in the synthesizer's ROM (read only memory) and are, in essence, individual computer programs that tell the device how to create the required sound. When the computer is directed to load a new sound, it must change the program it is currently running so it will be ready to play notes using the new tone. Hence, the MIDI message that tells the device what sound to make is called a Program Change message. Program Changes are followed by a single data byte.
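A Program Change message is therefore just a status byte (0xC0 for channel 1, per the MIDI spec) followed by its single data byte. One wrinkle, which the next paragraph discusses: the data byte on the wire is always 0-127, while many devices display program numbers as 1-128, so a displayed number must be decremented before transmission. A sketch:

```python
PROGRAM_CHANGE = 0xC0  # Program Change status byte for channel 1

def program_change(display_number: int) -> list[int]:
    """Build a Program Change message from a 1-128 display number.

    Devices that number their sounds 1-128 are off by one
    from the raw 0-127 data byte actually transmitted.
    """
    assert 1 <= display_number <= 128
    return [PROGRAM_CHANGE, display_number - 1]

# Selecting General MIDI program 1 (Acoustic Grand Piano)
# actually transmits a data byte of 0:
print(program_change(1))    # [192, 0]
print(program_change(128))  # [192, 127]
```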
MIDI devices use two different numbering schemes to catalog their programs, either 0-127 or 1-128, and it is important to know which scheme each of the devices you will be using employs. A recent standardization of this numbering scheme, called the General MIDI specification, states that the numbers will run from 1-128, and also specifies which sounds will have what numbers. We'll take a closer look at General MIDI at the end of this section.

Control Change Messages
Control Change messages are used to represent some change in the status of a physical control on a device. These controls are the foot pedals, volume sliders, modulation wheels, and similar peripherals found on most electronic instruments. Some control messages act like simple on and off switches; for example, the sustain pedal on a synthesizer can only be down or up, so a single data value is enough to specify which state the pedal is in (values in the lower half of the 0-127 range mean up, values in the upper half mean down). Other controls are continuously changing and need to be represented by more detailed data called continuous controller data. For example, if you move the pitch wheel on a synthesizer very slowly from its resting position to one extreme up or down, MIDI transmits data representing the wheel's position at numerous points along its path. (Strictly speaking, pitch bend is its own message type rather than a Control Change, but it behaves like a continuous controller.) In this case, the data must be very high in resolution, so 14-bit (2-byte) messages are used. This provides a total of 16,384 values to track the movement of the wheel.
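Because a data byte can never exceed 127, the 14-bit pitch-bend value is carried in two 7-bit data bytes. A sketch of how the value is split for transmission and reassembled on arrival:

```python
def encode_14bit(value: int) -> tuple[int, int]:
    """Split a 0-16383 value into (LSB, MSB) 7-bit data bytes."""
    assert 0 <= value <= 16383
    return value & 0x7F, (value >> 7) & 0x7F

def decode_14bit(lsb: int, msb: int) -> int:
    """Reassemble the 14-bit value from its two data bytes."""
    return (msb << 7) | lsb

CENTER = 8192  # the wheel's resting position, the middle of the range
lsb, msb = encode_14bit(CENTER)
print(lsb, msb)               # 0 64
print(decode_14bit(lsb, msb)) # 8192
```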
Controller data can be used for many different functions in MIDI, even multiple functions at the same time. For this reason, the different controller data streams are numbered from 0 to 127. Some of these controller numbers have become standardized to control certain tasks, for example, controller 10 (often abbreviated cc 10) is most often used for panning between left and right speakers, while controller 7 (cc 7) is typically used for volume changes. Many synthesizers allow the user to change the effect controller data will have. When this is possible, any controller could theoretically be used to control any aspect of a sound that changes over time.

System Messages
One final category of MIDI messages is called system messages. There are several types of system messages, but they all share the characteristic of transmitting information without a channel assignment. As a result, all instruments that receive messages of this type would act upon them, though one particular type of system message, called system exclusive, is intended for communicating only with a device or devices made by a specific manufacturer. System exclusive is often used when a musician wants to transmit large amounts of data to a specific synthesizer or receive data from the device. Because all major instrument makers have an ID number (#7 for Kurzweil devices, #67 for Yamaha, etc.), a message can be "addressed" to one device and all other receiving instruments will see it, but ignore it. For example, all the instructions specifying how a synthesizer makes its sounds could be "dumped" from the device and stored on a computer. Users could then trade custom libraries of sounds, or reload all the original factory settings if their equipment's memory were wiped out. Moreover, a whole new setup of sounds could be sent by a computer just before actual note data was transmitted, thereby getting the instrument properly configured before the music starts.
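A system exclusive message is bracketed by fixed start and end bytes, with the manufacturer's ID coming right after the start. A sketch of that framing (the start byte 0xF0 and end byte 0xF7 are standard MIDI values, and the Yamaha ID 67 comes from the text above; the payload here is entirely made up for illustration):

```python
SYSEX_START, SYSEX_END = 0xF0, 0xF7

def sysex(manufacturer_id: int, payload: list[int]) -> list[int]:
    # Everything between the start and end bytes must be a data
    # byte (0-127), including the manufacturer ID.
    assert all(0 <= b <= 127 for b in [manufacturer_id, *payload])
    return [SYSEX_START, manufacturer_id, *payload, SYSEX_END]

# An illustrative message addressed to a Yamaha device (ID 67):
msg = sysex(67, [1, 2, 3])
print(msg)  # [240, 67, 1, 2, 3, 247]
```

A device from another manufacturer sees the foreign ID immediately after 0xF0 and simply discards everything up to the closing 0xF7.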
Other system messages include timing messages, which provide information about the tempo of the music; and Song Position messages, which indicate where a recorded MIDI sequence should begin playback. These last messages are particularly useful with synthesizers that contain built-in sequencing capabilities.

General MIDI
Before General MIDI (GM) was popularized, there was no consistency in the way manufacturers numbered the sounds in their instruments, so that on one device program #1 might be a piano, while on another, it might be a flute. Because MIDI data files (or sequences) often contain program change instructions, i.e., the actual specifications for which sound an instrument should use to perform each layer of the music, it was unlikely that music created for one synthesizer would sound correct when performed by another. With the adoption of General MIDI, files that use its numbering scheme are now "portable," meaning they will sound identical, or nearly so, when played by different instruments. This assumes, of course, that the instruments conform to the GM specification (Table 1).
The General MIDI Program Change Specification
In addition to a standardized assignment of program change numbers, General MIDI includes several other guidelines, the most important of which is the use of Channel 10 for drum sounds. It also provides a Drum Map, which is the fixed assignment of certain drum sounds to specific MIDI note numbers (Table 2). For example, sending middle C, MIDI note #60, will trigger a high bongo sound on the receiving General MIDI instrument. A "C" one octave below, note #48, will produce a Hi-Mid tom, and so on. This mapping scheme provides yet another layer of standardization, thereby ensuring that MIDI sequences can be transported among different studios and desktop systems around the world.
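The drum map amounts to a fixed lookup from note number to drum sound. A small excerpt as a Python dictionary (the entries for notes 60 and 48 come from the text above; the remaining entries are standard General MIDI percussion assignments):

```python
# Excerpt of the General MIDI percussion map (Channel 10).
GM_DRUM_MAP = {
    35: "Acoustic Bass Drum",
    38: "Acoustic Snare",
    42: "Closed Hi-Hat",
    48: "Hi-Mid Tom",
    60: "Hi Bongo",
}

# On channel 10, sending middle C (note 60) triggers:
print(GM_DRUM_MAP[60])  # Hi Bongo
print(GM_DRUM_MAP[48])  # Hi-Mid Tom
```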
The General MIDI Drum Map

MIDI Hardware
Different MIDI devices have different capabilities and functions. We'll look closely at the various options on a traditional synthesizer first, then explore some of the other options that are found on different types of instruments.

Synthesizers
When you first look at a synthesizer, you are likely to see a piano-style keyboard, one or more rows of buttons and perhaps a few "sliders" or "wheels" (Figure 4).
Fig 4. -A MIDI synthesizer (with integrated keyboard controller).-
Inside the synthesizer are the sound-producing components, the actual brains of the unit, that respond to messages received when a key is pressed on the keyboard or when a MIDI message is sent from some other source. The keyboard part of the unit is called a controller, which is a term used for any MIDI device that can initiate an action. There are other types of controllers, including those in the form of a guitar (guitar controllers), drum machines (drum controllers), and even those that look and work like wind instruments (wind controllers). It's possible to buy a controller that does not include the capability to make any sound, and it's just as easy to buy the sound producing components alone, which are devices commonly known as tone or sound modules. In essence, the devices we commonly refer to as "synthesizers" are actually tone modules with integrated keyboard controllers attached.
Keyboard synthesizers are by far the most common MIDI devices, although the tone modules included with nearly all sound cards for the PC are also extremely common. Like any device that wants to join in a MIDI conversation, synthesizers are equipped with the proper connectors that allow MIDI information to pass in, and sometimes out. These connectors, called MIDI ports, are usually grouped in threes: MIDI In, MIDI Out and MIDI Thru. Figure 5 below shows a standard arrangement of the ports on the back of a synthesizer, and also shows the end of a MIDI cable, which connects the sending and receiving devices. Unlike single-ended audio plugs (guitar cords and stereo RCA plugs), MIDI cables and ports use a 5-pin DIN connection. MIDI communication does not have to be two-way; for example, the MIDI In of device one can be connected to the MIDI Out of device two, but not vice versa. The MIDI Thru port is used to relay the information that is sent to a device on to yet another unit without altering it in any way. By using this port, many MIDI instruments can be chained together, allowing a single controller to transmit to numerous different sound-producing devices simultaneously.
Fig 5-Three standard MIDI ports and a MIDI cable.-
To connect a MIDI synthesizer to a computer, the computer must have a MIDI interface, which typically contains the same three MIDI ports described above. Like the synthesizer, the MIDI interface converts the electrical signals it receives to the proper format needed by the computer. The MIDI interface might be a separate card that installs into a free PC expansion card slot; it could be a stand-alone, external unit that attaches to the PC's parallel or serial port; or it might be an integrated part of a sound card. Some sound cards use proprietary connectors for their MIDI hookup and require an optional MIDI adapter for connections to external MIDI units. On the Macintosh, the interface is almost always external, and typically connects to either the modem or printer port.

General Features of MIDI Hardware
Keyboard and other MIDI controllers share many common features. Most have the ability to detect how hard a key was pressed. This feature, called Velocity Sensitivity, is used to determine a note's loudness, or amplitude. Like other controllers, a keyboard controller typically works by constantly watching the position of every key on the keyboard. A sensor (often a pair of electrical contacts, sometimes an optical sensor) is used to determine whether a key is up in its at-rest position, or down. Then, whenever a key is pressed, the instrument knows exactly how long it takes for the key to go down, and it assigns a value to that note by measuring the time it took to travel from its starting point to the bottom of its stroke. This value is called velocity, meaning "speed," but actually determines how loud the note will be played. An instrument that has the ability to measure this speed is said to be velocity sensitive.
Synthesizers and tone modules have many other features, including the ability to play many notes at once. This capability, called polyphony (for "many sounds"), usually ranges from eight notes, up to a maximum of 32, or in rare cases, 64. (Musicians usually use the term Voices when describing the polyphonic capabilities of an instrument, so "8-voice polyphonic" means the device can play eight notes at once.) When a device receives a new message after it has already reached its maximum, it must decide how to allocate its resources. For example, it might choose to drop the oldest note it is playing, or maybe it would drop the lowest or softest note. Some instruments will just ignore the new note that puts it over the top. In a professional synthesizer, this allocation might be programmable by the user, though in many cases it is fixed by the manufacturer.
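The note-allocation decision described above can be sketched as a simple "steal the oldest note" policy, which is just one of the several strategies a manufacturer might choose:

```python
def play_note(active_notes: list[int], new_note: int, max_voices: int) -> list[int]:
    """Add new_note, stealing the oldest note if the device is out of voices.

    active_notes is kept in oldest-first order.
    """
    if len(active_notes) >= max_voices:
        active_notes = active_notes[1:]  # drop the oldest sounding note
    return active_notes + [new_note]

# An 8-voice device already playing eight notes receives a ninth;
# the oldest note (60) is dropped to make room:
voices = [60, 62, 64, 65, 67, 69, 71, 72]
voices = play_note(voices, 74, max_voices=8)
print(voices)  # [62, 64, 65, 67, 69, 71, 72, 74]
```

Dropping the lowest or the softest note instead would only change the key used to pick the victim; ignoring the new note would skip the steal entirely.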
It's important to keep in mind that certain sounds on a tone module might use up more than one voice. For example, even a simple flute sound could require two notes (or voices) of the available polyphony, while a complex, evolving sound, such as those often intended for use as movie soundtrack backgrounds, might require four or more voices. Playing a four-note chord using a sound that requires four voices could, in theory, use the entire polyphonic capability of a 16-voice synthesizer. Other sounds, such as drum sounds, typically use only a single note of polyphony, and are not likely to be needed for playing chords!
When a synthesizer can make more than one type of sound at the same time, it is called multitimbral. This term comes from the French word timbre (pronounced "tam-ber"), which means tone or sound color. If a synthesizer can make the sound of a trumpet, flute, clarinet and oboe simultaneously, it is clearly multitimbral. How many different timbres can be used at once is a significant factor in determining the usefulness of a tone module for one's music; for example if you plan to write your next symphony using a single synthesizer, you should be sure it is at least 16-part multitimbral and has 24 or more voices of polyphony. For choral music, four-part multitimbral and 8-voice polyphony might be adequate, but obviously the more the merrier.
One final basic feature of a MIDI device is its ability to respond to instructions on several different MIDI channels at once. This subject was mentioned earlier, but to review, MIDI can keep all the different layers of a musical performance separate from one another by transmitting each layer on its own channel, so an instrument should be able to handle the full "bandwidth" of a MIDI transmission, which is 16 different channels. Most instruments allow the user to set the Reception Mode of a MIDI device, which determines how it will respond to the messages it receives. The most common (and useful!) Reception Mode is called OMNI OFF / POLY, which tells the device to distinguish what channel messages are on (OMNI OFF), and play back several notes at once if requested to do so (POLY, from polyphonic). Many older synths were limited to other reception modes, which kept them from distinguishing the different channels of a transmission. For example, if OMNI were ON, the device might play all messages without regard for their channel status. In nearly all recent devices, Reception Mode is selectable, though OMNI OFF / POLY is by far the most common Mode in use today.
Most synthesizers have the ability to assign one sound to play over part of the keyboard, and another sound to play over the rest. This is called keyboard splitting or zoning, and would allow you, for example, to play a bass guitar sound with the left hand on the low notes, and a piano sound with the right hand on the high notes (Figure 6). Synthesizers, by the way, typically offer keyboards that range from as few as four octaves, or forty-nine notes, to full, traditional piano lengths of just over seven octaves, or eighty-eight notes.
Fig 6. -A MIDI keyboard split into two zones.-

Programmability
There is a wide range of programming options available on synthesizers today, but most have capabilities that allow the user to design sounds with great precision. Normally, you can layer different sounds, combining a flute with a cymbal for example, or merging the beginning portion of a trumpet with the sustaining segment of a cello. Numerous filters are also typically available, which, like the tone controls on a stereo system, let you raise or lower a sound's treble or bass response. Another useful programming feature is an envelope generator. Because natural sounds do not remain static throughout their duration (the piano, for example, begins to fade away, or decay, immediately after a note is struck), these generators allow the user to change the way a sound evolves over time. Normally, the characteristic that changes most is the sound's amplitude (loudness), but envelopes might also be applied to the sound's pitch or even timbre. The shape of the envelope is usually alterable, which allows the user to determine how long it takes for the sound to move through each of its "segments." In the example below, the sound will move gradually to its peak level during the attack segment, get a bit softer during the decay, maintain a steady level over the sustain segment, then slowly fade during the release (Figure 7).
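The four envelope segments just described can be sketched as a simple piecewise function of time. This is an illustrative linear version with made-up segment lengths; real envelope generators often use curved segments:

```python
def adsr(t: float, attack=0.1, decay=0.2, sustain_level=0.7, release=0.5,
         note_off_time=1.0) -> float:
    """Amplitude (0.0-1.0) of a linear ADSR envelope at time t (seconds)."""
    if t < attack:                        # attack: rise to the peak level
        return t / attack
    if t < attack + decay:                # decay: fall to the sustain level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain_level)
    if t < note_off_time:                 # sustain: hold while the key is down
        return sustain_level
    frac = (t - note_off_time) / release  # release: fade out after the key is lifted
    return max(0.0, sustain_level * (1.0 - frac))

print(adsr(0.05))  # 0.5  (halfway up the attack)
print(adsr(0.5))   # 0.7  (holding at the sustain level)
print(adsr(2.0))   # 0.0  (fully released)
```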
Fig 7. -The four segments of an amplitude envelope.-

Samplers
Samplers are electronic devices that allow you to record audio, manipulate it, and play it back using MIDI commands. In effect, they allow the entire range of acoustic sounds to be employed in a musical composition. Under the control of MIDI messages, dog barks, train whistles, car horns and more can be integrated alongside violins and guitars, but samplers can be used for a lot more than just sound effects. Because of their extensive capabilities, samplers are used to create entire original compositions, using exacting reproductions of traditional instruments. Composers can preview their orchestral works and arrangers can listen to elaborate horn arrangements before committing the music to notation. In addition to these tasks, an entire musical style has evolved that uses samplers to store short phrases from existing recordings that are then used as the basis for entirely new musical compositions. While some of these capabilities are possible using traditional synthesizers, samplers expand the musician's palette of sounds enormously.
All samplers contain sample RAM that is used to hold digital recordings while the sampler processes them and plays them back. The amount of RAM determines the total length of recording time available to the unit. For example, if a sampler were to record sound using the quality of a commercial CD, it would require just over 10 megabytes (10,000,000 bytes) of RAM to hold just one minute of stereo or two minutes of monophonic sound. Many professional samplers contain hard disk drives for more permanent storage of recordings, while some also include floppy drives for moving sounds into and out of the unit. Besides the standard audio outputs used to record and play back, some samplers provide direct digital connections so sound can be moved back and forth to a digital tape recorder (DAT) or computer.
Among the many features of most samplers, one particular favorite is looping. This function allows the sampler to repeatedly play a short segment of sound. Using looping, small recordings can be played back for long periods of time, saving RAM and storage resources. When a sound loops, it merely plays through to the end, then restarts at the beginning or at some other point while the key is being held down. Looping works particularly well with string and wind sounds, but some sounds cannot be sustained; drum hits and other short sounds with sharp attacks, for example, simply do not loop well.
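Looping boils down to simple index arithmetic: play straight through to the end of the loop region, then cycle back to the loop start for as long as the key is held. A sketch with made-up frame numbers:

```python
def looped_index(n: int, loop_start: int, loop_end: int) -> int:
    """Map the nth playback position into a sample with a sustain loop.

    Positions before loop_end play straight through; after that,
    playback cycles through the loop_start..loop_end-1 region.
    """
    if n < loop_end:
        return n
    loop_len = loop_end - loop_start
    return loop_start + (n - loop_start) % loop_len

# A sample that loops frames 4 through 7 while the key is held:
print([looped_index(n, 4, 8) for n in range(12)])
# [0, 1, 2, 3, 4, 5, 6, 7, 4, 5, 6, 7]
```

This is why a one-second recording can sustain indefinitely while consuming only one second's worth of RAM.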
Among the other techniques samplers provide are filtering; crossfading, in which one sound fades out while another fades in; and pitch shifting, where the original pitch of a sampled sound is raised or lowered. Pitch shifting is very useful when you need to change or transpose the pitch of a sound, perhaps to change the key of your music. Unfortunately, after a certain amount of shifting in either direction, the sound will no longer resemble the original. It is very common for a sampler to use a technique known as multisampling, in which the original sound is recorded at numerous different pitch levels, and each individual sample is assigned to playback over a different range of the keyboard. In this way, no single sample has to be shifted beyond a small amount.
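Multisampling can be sketched as picking, for each incoming note, the recorded sample whose root pitch is nearest, so that no sample is ever shifted very far from its original pitch. The sample root notes below are made-up values for illustration:

```python
def nearest_sample(note: int, sample_roots: list[int]) -> tuple[int, int]:
    """Return (chosen sample root, semitones of pitch shift) for a note."""
    root = min(sample_roots, key=lambda r: abs(r - note))
    return root, note - root

# With samples recorded at every octave C, the required shift
# never exceeds six semitones in either direction:
roots = [36, 48, 60, 72, 84]
print(nearest_sample(62, roots))  # (60, 2)
print(nearest_sample(67, roots))  # (72, -5)
```

Recording samples at smaller intervals (every third or fourth semitone, say) shrinks the maximum shift further, at the cost of more sample RAM.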
Samplers provide numerous other manipulation techniques, some of which will be mentioned in the section on digital audio. These include time compression/expansion, which is the ability to stretch or shrink sounds without changing their pitch; amplitude modulation, a technique used to change the sample's amplitude (loudness) at a variable rate; and playing back a sound in reverse. In all, samplers offer versatile options to the musician for shaping and crafting their music.

Drum Machines
One final MIDI device is the drum machine and a related instrument, the drum controller (Figure 8). The drum machine, one of the most common of all MIDI peripherals, typically contains buttons or "pads" for playing drum sounds "live," and internal software to generate or store MIDI data. The sounds in the drum machine are most often sampled drums, i.e., digital recordings of actual acoustic drums. Unlike samplers, such devices rarely let you add your own sounds; instead you are limited to playback of the internal sounds, perhaps with some minor alterations.
While the buttons on a typical drum machine can be used to play the instrument in "real-time," you can also record any pattern of button presses right into the device. When requested, the drum machine will then play back the patterns you've created. In this way, one can create elaborate drum parts "note by note," then play them back repeatedly and at any tempo required. Drum machines also typically include preset patterns, providing very realistic drum parts that musicians who don't play the instrument can use in their own productions. Unfortunately, many of these patterns sound "canned," and their overuse has created somewhat of a backlash against this type of device. Creative drum programming by capable musicians can, however, produce excellent results.

Guitar and Wind Controllers
While the vast majority of MIDI music emanates from keyboard controllers and synthesizers, instrument makers have come to realize that many other instrumentalists would like to share in the joy of MIDI. For this reason, various types of guitar and wind controllers have been created to provide a familiar performance interface for players of these instruments. While they typically produce no sound on their own, these instruments can be connected to tone modules or samplers, which then generate sound under their control.
Shaped and played like traditional six-string guitars, MIDI guitars contain small sensors that detect the player's finger position on the strings, as well as the amount of pressure applied to the string by the pick. Most can also track movements of the string and convert this bending into continuous controller data. Some guitar controllers even allow the user to assign a different MIDI channel to every string, thereby offering the ability to play up to six different sounds on the receiving device simultaneously. Wind controllers can easily detect which keys have been closed by the player, but must make far more difficult measurements of the amount of air pressure passing through the device's mouthpiece. Typically, a sensor in the mouthpiece is used for such measurements, and over the years, the accuracy of wind controllers has improved dramatically. Because a single MIDI note can be used to generate an entire chord (if the receiving synthesizer is so programmed), musicians who have spent most of their lives playing monophonic (single-note) instruments now have the ability to play elaborate, chordal textures.
One final controlling device is the pitch-to-MIDI converter. This somewhat uncommon device is attached to a traditional acoustic instrument such as a saxophone or trumpet and converts the acoustic tones the instrument generates into MIDI notes. The pitch-to-MIDI converter offers perhaps the best of both worlds, in that a musician can use his or her favorite instrument to create a performance that combines "natural" and synthetic sounds. Unfortunately, the conversion is not always accurate, and these devices must still undergo some refinement before they are completely reliable. Nevertheless, converters are becoming more common, and they offer musicians, including singers, tremendous expressive possibilities in a MIDI performance.

MIDI Software
There are many categories of MIDI software available. Perhaps the most common is the MIDI sequencer, a type of program that can record, edit, and play back MIDI data. Sequencers, which originally were often stand-alone hardware devices, have very powerful capabilities for transforming MIDI information, and today represent a very complex and mature category of software. Sequencers share many basic features, and they allow the user to put the strength of a personal computer to the task of making music.
Like a multi-track tape recorder, sequencers most often arrange multiple layers of MIDI information into tracks. Each track represents an independent melody or part of the music. The number of tracks in a sequencer can range from as few as sixteen in an entry-level program, to hundreds, or even thousands in others. Each track can be used to hold any type of MIDI data, and there is no single standard for how this information should be arranged. Rather, the best sequencers give the user a high degree of flexibility in organizing the various types of information their music requires.
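The track concept can be sketched as a simple data structure. This is a hypothetical, much-simplified model of how a sequencer might organize its data; the field names are invented for illustration, and real programs use far richer, program-specific structures.

```python
# A minimal, hypothetical sketch of a sequencer's track layout:
# each track is an independent layer holding a list of MIDI events.
from dataclasses import dataclass, field

@dataclass
class Event:
    tick: int        # position in the song, in timing ticks
    channel: int     # MIDI channel the event is sent on (0-15)
    kind: str        # "note_on", "note_off", "controller", ...
    data: tuple      # e.g. (note_number, velocity)

@dataclass
class Track:
    name: str
    channel: int                         # default destination channel
    events: list = field(default_factory=list)

# Two tracks may share the same channel setting, meaning their
# events all go to the same destination on the receiving device.
piano = Track("Piano", channel=0)
strings = Track("Strings", channel=0)
piano.events.append(Event(0, 0, "note_on", (60, 100)))   # middle C
```

Nothing limits what a track may hold: notes, controllers, and program changes can all live side by side, which is exactly the flexibility the better sequencers expose to the user.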
Figure 9 below shows the main screen of a popular Windows-based sequencer, Cakewalk Pro Audio(TM). Along the left side of the figure you can see the various tracks; the first sixteen tracks are shown here, but different screen resolutions would allow you to see more or fewer at once on your own monitor. Each track is assigned to a specific MIDI channel, though you can see that several of the tracks have the same setting. This indicates that the events on all of these tracks will go to the same destination. Most sequencers allow you to put information for several channels on the same track, though this could make editing the information somewhat more difficult. The right half of the screen represents the actual data, which is organized into segments called clips in Cakewalk.
Fig 9. -The Track View of Cakewalk Pro Audio.-
Sequencers typically provide different ways to view and edit your data, and it's important to understand the function of each of a program's work areas. Usually, one will find a Piano Roll view, where individual notes or small groups of notes can be altered; a Track Overview, where entire measures or even whole tracks can be manipulated; a Notation or Staff view, where the music is represented using standard music notation; and an Event View, which is a text-based list of all the events in one or more tracks. The editing options that such programs provide are numerous and vary greatly among programs, but typically one can cut, copy and paste data, as well as apply extensive modifications to the music, such as raising or lowering its pitch and volume characteristics, and expanding or compressing the amount of time a section takes to play back.
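Two of the edits just mentioned, transposing pitch and stretching or compressing time, can be sketched in a few lines. For simplicity this hypothetical example treats events as plain (tick, note) pairs; a real sequencer would apply the same arithmetic to much richer event records.

```python
# Sketch of two typical sequencer edits applied to (tick, note) pairs.

def transpose(events, semitones):
    """Shift every note up or down, clamped to the MIDI range 0-127."""
    return [(tick, max(0, min(127, note + semitones)))
            for tick, note in events]

def time_scale(events, factor):
    """Expand (factor > 1) or compress (factor < 1) a passage's timing."""
    return [(round(tick * factor), note) for tick, note in events]

# Three notes (C4, E4, G4) a quarter note apart at 480 ticks/quarter.
phrase = [(0, 60), (480, 64), (960, 67)]
up_a_fifth = transpose(phrase, 7)     # raise the pitch 7 semitones
half_speed = time_scale(phrase, 2)    # the passage now takes twice as long
```

Because MIDI stores instructions rather than sound, edits like these change the performance without any loss of audio quality, which is one of the format's great strengths.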
Some programs also provide features that assist the user with the operation of their MIDI hardware. It is not uncommon to find sequencers that will list all the different sounds in your synthesizer, allowing you to work with specific names rather than the less familiar patch numbers. Some will also import or export system exclusive (Sysx) data to a synthesizer, meaning you can load an entire setup of sounds before the first note is played. While they don't offer all the editing capabilities of full-blown patch editors (discussed later), these patch librarian features are very useful, especially in settings where there are two or more MIDI devices.
Overall, sequencers are the most common of all MIDI software programs, and provide tremendous power that can be applied to the production of music.

Notation Programs
Another category of MIDI software is the notation, or transcription program (Figure 10). Because standard notation remains the most common way to represent music, an entire market has been established for programs that let musicians work "the old fashioned way." Typically, these programs provide huge libraries of musical symbols that can be entered onto the page to produce professional looking scores. Some even allow the user to create new symbols. Sophisticated page layout features, often comparable to high-end desktop publishing programs, are also included in the more advanced notation software, and all programs of this type offer printing options.
Fig 10. -A view of standard music notation.-
Most programs allow "point and click" entry as well as real-time transcription from a MIDI keyboard. With real-time entry, musicians can play their music directly into the program and see it appear instantly on screen as notation. Once the notes are recorded, numerous editing capabilities, such as the cut, copy and paste features of a word processor, are available. Other editing functions needed by musicians, such as the ability to shift or "transpose" the music up or down, are also commonly found.

Patch Editor/Librarians
Because of the complexity of many of today's synthesizers, an entire software niche has developed to facilitate the control of such devices from a computer. Patch editors typically display all of a synthesizer's programming controls on one or two computer screens, allowing the user to "see into" the synthesizer and control it directly from the computer keyboard (Figure 11). Rather than spend many minutes pushing buttons, trying to locate a particular screen within the synthesizer's own display, the patch editor lays all the device's parameters before the user, and allows him or her to make extensive changes with the sweep of the mouse or press of a few keys. Changes made on the computer screen are typically sent immediately to the device, making it possible to preview them before any permanent changes are made.
Stand-alone librarian programs, or those included with a patch editor, simply store all of a device's sounds and make them available for quick searching and sorting. Typically, a librarian will request a "dump" from the device via Sysx, then show the user the sounds currently available on the instrument. This listing can then be stored on a computer and reloaded into the device when needed. Not only are the names of the patches stored, but also the specifications for how the sounds are created. In other words, if the internal memory of a synthesizer were wiped out, the librarian could send the original factory programs back to the synthesizer and restore it to its original state.
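The Sysx framing a librarian works with is simple at its outer layer: a dump begins with the byte F0, is followed by a manufacturer ID and the data bytes, and ends with F7. The sketch below shows that framing only; the manufacturer ID and patch bytes are made up, since every real device documents its own dump format.

```python
# Illustrative sketch of System Exclusive (Sysx) framing. The payload
# and manufacturer ID here are invented; real dump formats are
# device-specific and defined by each manufacturer.

SYSEX_START, SYSEX_END = 0xF0, 0xF7

def wrap_sysex(manufacturer_id, payload):
    """Frame a patch dump: F0, manufacturer ID, data bytes, F7."""
    if any(b > 0x7F for b in payload):
        raise ValueError("Sysx data bytes must be 7-bit (0-127)")
    return bytes([SYSEX_START, manufacturer_id, *payload, SYSEX_END])

def unwrap_sysex(message):
    """Recover the data bytes a librarian would store on disk."""
    assert message[0] == SYSEX_START and message[-1] == SYSEX_END
    return message[2:-1]

patch_data = [0x01, 0x40, 0x22, 0x7F]   # pretend patch parameters
dump = wrap_sysex(0x41, patch_data)     # 0x41: an illustrative maker ID
stored = unwrap_sysex(dump)             # the bytes saved to a file
```

Saving `stored` to disk and later wrapping and re-sending it is, in essence, the backup-and-restore cycle described above.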
Librarians are also commonly employed when users owning the same equipment wish to share programs they have created. Simply load the sounds into the librarian and save them on a floppy disk, then transport them to another computer anywhere in the world.

Integrated Programs
An interesting trend in MIDI software today is the appearance of integrated programs that combine many of the features of the programs listed above. Like their counterpart in the business world, the "desktop suite," these integrated programs offer professional sequencing, notation, patch librarian, and in some cases, digital audio functions in an all-in-one environment. This trend shows tremendous promise, and has far-reaching implications for the user. It will be exciting to see how far it develops.