
The Evolution of Sound Recording

Posted in audio engineering, audio recording, history by commorancy on February 14, 2023

Beginning in the 1920s and up to the present, sound recordings have changed and improved dramatically. This article spans 100 years of audio technology improvements, though audio recording itself stretches all the way back to the phonautograph in 1860. What drove these changes was primarily the quality of the recording media available at the time. This is a history-based article and runs roughly 20,000 words due to the material and ten decades covered. Grab a cup of your favorite hot beverage, sit back and let’s explore the many highlights of what sound recording has achieved since 1920.

Before We Get Started

Some caveats to begin. This article isn’t intended to offer a comprehensive history of all possible sound devices or technologies produced since the 1920s. Instead, this article’s goal is to provide a glimpse of what has led to our current technologies by calling out the most important technological breakthroughs in each decade, according to this Randocity author. Some of these breakthroughs may not have been designed for the field of audio, but have impacted audio recording processes nonetheless. If you’re really interested in learning about every audio technology ever invented, sold or used within a given decade, Google is the best place to start for that level of exploration and research. Wikipedia is also another good source of information. A comprehensive history is not the intent of this article.

Know then that this article will not discuss some technologies. If you think that a missing technology was important enough to have been included, please leave a comment below for consideration.

This article is broken down by decades, punctuated with deeper dives into specific topics. This sectioning is intended to make the article easier to read over multiple sittings if you’re short on time. There is intentionally no TL;DR section in this article. If you’re wanting a quick synopsis you can read in 5 minutes, this is not that article.

Finally, because of the length of this article, there may still be unintended typos, misspellings and other errors present. Randocity is continuing to comb through this article to shake out any loose grammatical problems. Please bear with us while we continue to clean it up. During this cleanup process, more information may also be added to improve clarity or as the article requires.

With that in mind, onto the roaring…


1920s

Ah, the flapper era. Let’s all reminisce over the quaint cloche hats covering tightly waved hair, the Charleston headbands adorned with a feather, the fringe dresses and the flapper era music in general, which wouldn’t be complete without including the Charleston itself.


For males, it was all about a pinstripe or grey suit with a Malone cap, straw hat or possibly a fedora. While women were burning their hair with hot irons to create that signature 1920s wave hairstyle and slipping into fringe flapper evening dresses, musicians were recording their music using, at least by today’s standards, antiquated audio equipment. At the time in the 1920s, though, that recording equipment was considered top end professional!

In the 1920s, recordings produced in a recording studio were recorded or ‘cut’ by a record cutting lathe, hence the use of the term “record cut”. This style of lathe recorder used a stylus which cut a continuous groove into a “master” record, usually made of lacquer. The speed? 78 RPM (revolutions per minute). Prior to the mid-1920s, records were made acoustically via a recording horn (no electricity involved). The 1920s then ushered in electrical recording: microphones and electrical amplifiers, which improved the sound quality, allowed for better microphone placement and numbers, and made recordings sound louder and more natural on the master recording. With an electrical microphone, the setup required an immense amplifier to feed the sound into the lathe recorder. Effectively, a studio recorded the music straight onto a “cut” record… and that record would then be used as a master to mass produce shellac 78 RPM records, which were sold en masse to consumers in stores.

This also meant that there was no such thing as overdubbing. Musicians had to get the entire song down in one single take. Multiple takes could be made to capture the best possible version, but that wasted a master disc for each attempt until the best take was captured.

Though audio recording processes would improve only a little from 1920 to 1929, the recording process would show much more improvement going into the 30s. We would have to wait until 1948, when the 33⅓ RPM record was introduced, to see a decidedly marked improvement in sound quality on records. Until then, 78 RPM shellac records would remain the steadfast, but average quality, standard for buying music.

The non-electrical (acoustic) recordings of the early 1920s used only a single recording horn to capture the entire band, including the singer. It wouldn’t be until the mid to late 20s, with electrical recording processes using amplifiers, that a two channel mixing board became available, allowing for the placement of two or more microphones connected via wire: one or more mics for the band and one for the singer.

Shellac Records

Before we jump into the 1930s, let’s take a moment to discuss the mass produced shellac 78 records, which remained popular well into the 1950s. Shellac is very brittle. Thus, dropping one of these records would result in the record shattering into small pieces. Of course, one of these records can also be intentionally broken by whacking it against something hard, like so…

[Video clip: Donna Reed breaking a 78 RPM record]

Shellac records fell out of vogue primarily because shellac became a scarce commodity due to World War II wartime efforts, but also because of shellac’s lack of quality when compared with vinyl records. The scarcity of shellac, combined with the rise of vinyl records, led to the demise of the shellac format by 1959.

This wartime scarcity of shellac also led to another problem: the loss of some audio recordings. People performed their civic duty by turning in their shellac 78 RPM records to help ease the shortage. Also around this time, some 78 records were made of vinyl due to this shellac shortage. While a noble goal, turning in shellac 78s to help the war effort also contributed to the loss of many recordings; this wartime effort may have caused the loss of audio recordings that may never be heard again.

1920s Continued

As for cinema sound, it would be the 1920s that introduced moviegoers to what would be affectionately dubbed “talkies“. Cinema sound as we know it, using an optical strip alongside the film, was not yet available. Synchronization of sound to motion pictures was immensely difficult and ofttimes impractical to achieve with the technologies available at the time. One system used was the sound-on-disc process, which required synchronizing a separate large phonograph disc with the projected motion picture. Unfortunately, this synchronization was often impossible to achieve reliably. The first commercial success of this sound-on-disc process, albeit with limited sound throughout, was The Jazz Singer in 1927.

Even though the sound-on-film (optical strip) process (aka Fox Film Corporation’s Movietone) would be invented during the 1920s, it wouldn’t be widely used until the 1930s, when this audio film process became fully viable for commercial film use. The first movies released using Fox’s Movietone optical audio system were Sunrise: A Song of Two Humans (1927) and, a year later, Mother Knows Best (1928). Until optical sound became much more widely and easily accessible to filmmakers, most filmmakers in the 1920s utilized sound-on-disc (phonographs) to synchronize their sound separately with the film. Only the Fox Film Corporation, at that time, had access to the Movietone film process (due to William Fox having purchased the patents in 1926). Even then, the two films above sported only limited voice acting on film; most of the audio in those two pictures consisted of music and sound effects.

Regardless of the clumsy, low quality and usually unworkable processes for providing motion picture sound in the 1920s, this decade was immensely important in ushering sound into the motion picture industry. If anything, the 1920s (and William Fox) proved that sound would become fundamental to the motion picture experience.

Commercial radio broadcasting also began during this era. In November of 1920, KDKA began broadcasting its first radio programs. This began the era of commercial radio broadcasting that we know today. With broadcasting came a new need: ways to control audio levels so the signal stayed within the broadcast frequency bandwidth requirements of the transmitter.

Thus, additional technologies would be both needed and created during the 1930s, such as audio compression and limiting. These compression technologies were designed for the sole purpose of keeping audio volumes strictly within radio broadcast specifications, to prevent overloading the radio transmitter, thus giving the listener a better audio experience when using their new radio equipment. On a related note, RCA apparently held most of the patents for AM radio broadcasting during this time.
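To illustrate the limiting concept in modern terms, here’s a minimal sketch in Python using made-up sample values. It simply caps any sample that exceeds a chosen ceiling; the actual broadcast limiters of that era were analog circuits applying smooth gain reduction, so treat this only as a conceptual illustration, not a description of any real 1930s hardware.

```python
# Minimal conceptual sketch of a peak limiter: keep audio within a ceiling so
# a transmitter (or cutting lathe) is never over-driven. Illustrative only;
# real limiters apply smooth gain reduction with attack and release times
# rather than hard-capping individual samples.

def limit(samples, ceiling=0.8):
    """Cap any sample whose magnitude exceeds the ceiling, preserving its sign."""
    limited = []
    for s in samples:
        if abs(s) > ceiling:
            s = ceiling if s > 0 else -ceiling
        limited.append(s)
    return limited

# Hypothetical audio samples normalized to the -1.0 .. 1.0 range.
audio = [0.1, 0.5, 0.95, -1.2, 0.3]
print(limit(audio))  # [0.1, 0.5, 0.8, -0.8, 0.3]
```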


1930s

By the 1930s, women’s fashion had become more sensible, less ostentatious, less flamboyant and more down-to-earth. Gone are the cloche hats and heavy eye makeup. This decade shares a lot of its women’s fashion sensibilities with 1950s dress.

Continuing with the sound-on-film discussion from the 1920s, William Fox’s investment in Movietone would only prove useful until 1931, when Western Electric introduced a light-valve optical recorder which superseded Fox’s Movietone process. This Western Electric process would become the optical film process of choice, utilized by filmmakers throughout 1930s film features and into the future, even though Western Electric’s recording equipment design proved to be bulky and heavy. Fox’s Movietone optical process would continue to remain in use for producing Movietone newsreels until 1939 due to the better portability of its sound equipment, thanks in part to Western Electric’s over-engineering of its light-valve’s unnecessarily heavy case designs.

As for commercial audio recording throughout the 1930s, recording processes hadn’t changed drastically from the 20s, except that new equipment was introduced to aid in recording music better, including the use of better microphones. While the amplifiers got a little smaller, microphone quality improved along with the use of multi-channel mixing boards. These boards were introduced so that, instead of recording from only one microphone, many microphones (as many as six) could capture an orchestra and a singer, mixed down into one monophonic input for the lathe recorder. This change allowed for better, more accurate, more controlled sound recording and reproduction. However, the lathe stylus recorder was still the main way to record, even though audio tape recording equipment was beginning to appear, such as the AEG/Telefunken Magnetophon K1 (1935).

At this time, RCA produced its first uni-directional ribbon microphone, the large 77A (1932), and this mic became a workhorse in many studios. There is some discrepancy on the exact date the 77A was introduced. It was big and bulky, but it became an instant favorite. In 1933, RCA introduced its smaller successor to the 77A, the RCA 44A, a bi-directional microphone. The model 77 would go on to also see the release of the 77B, C, D and DX. However, the two latter 77 series microphones wouldn’t see release until the mid-40s, after having been redesigned to about the size of the 44.

There would be three 44 series models released: the 44A (1933), 44B (1936) and the 44BX (1938). These figure-8 pattern bi-directional ribbon microphones also became the workhorse mics used in most of the recording and broadcast industries in the United States, ultimately replacing the 77A throughout the 30s. These microphones were, in fact, so popular that some are still in use today and can be found on eBay. There’s even an RCA 44A replica being produced today by AEA. Unfortunately, RCA discontinued manufacture of the 44 microphone series in 1955. RCA would discontinue producing microphones altogether in 1977; ironically, RCA’s last model released was the model 77 in 1977. The 44A sported an audio pickup range from 50 Hz to 15,000 Hz… an impressive frequency range, even though the lathe recording system could not record or reproduce that full range.

A mixing board, when combined with several new workhorse 44A mics, allowed a sound engineer to bring certain channel volumes up and other volumes down. Using a mixing board allowed vocalists to be brought front and center in the recording, not drowned out by the band… with sound leveled on the fly during recording by the engineer’s hand and a pair of monitor headphones or speakers.

One microphone problem during the 20s was that microphones were primarily omni-directional. This meant that noise would be picked up from anywhere around the microphone, so in recording situations everything had to remain entirely silent during the recording process except for the sound being recorded. By 1935, Siemens and RCA had introduced various cardioid (uni-directional) microphones to solve for extraneous noise. These microphones primarily picked up audio directly in front of the capsule and rejected sounds outside of the cardioid pattern. This improvement was important when recording audio for film on location; you can’t exactly stop car honking, tires squealing and general city noises during a take.
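As a point of reference, the classic pickup patterns can be described with simple first-order polar equations. Here’s a short Python sketch (illustrative only, not tied to any specific microphone model) showing relative sensitivity versus the angle of the sound source for omni, figure-8 and cardioid patterns:

```python
import math

# Relative sensitivity of the classic first-order microphone pickup patterns
# as a function of the angle (degrees) between the sound source and the
# front of the microphone. Textbook equations, shown purely for illustration.

def omni(angle_deg):
    return 1.0  # equal pickup from every direction

def figure_eight(angle_deg):
    # Bi-directional pattern, typical of ribbon mics like the RCA 44 series.
    return abs(math.cos(math.radians(angle_deg)))

def cardioid(angle_deg):
    # Uni-directional pattern: full pickup in front, none from the rear.
    return 0.5 * (1.0 + math.cos(math.radians(angle_deg)))

for angle in (0, 90, 180):
    print(angle, round(omni(angle), 2), round(figure_eight(angle), 2), round(cardioid(angle), 2))
# 0:   1.0 1.0 1.0  -> every pattern hears the source directly in front
# 90:  1.0 0.0 0.5  -> side noise: figure-8 rejects it, cardioid halves it
# 180: 1.0 1.0 0.0  -> rear noise: the cardioid rejects it entirely
```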

Most recording studios at the time relied on heavy wall-mounted gear that wasn’t at all easy to transport. This meant recording had to be done in the confines of a studio using fixed equipment. The need for portability led to the release of the 1938 Western Electric 22D model mixer, which had 4 microphone inputs and a master gain output. It sported a lighted VU meter and could be mounted in a portable carrying case or possibly in a rack. This unit even sported a battery pack! In 1938, this unit was primarily used for radio broadcast recording, but this or similar portable models were also used when recording on-location audio for films or newsreels at the time.

[Photo: Western Electric 22-D portable mixer]

In the studio, larger channel-count versions were also utilized to allow for more microphone placement, while still mixing down into a single monophonic channel. Such studios typically used up to 6 microphones, though amplifiers sometimes added hiss and noise, which might be audibly detectable if too many were strung together. There was also the possibility of phase problems if too many microphones were utilized. The resulting output recording would be monophonic, whether for the mass produced shellac 78 RPM records, for radio broadcast or for movies shown in a theater.

Here are more details about this portable Western Electric 22-D mixing board…

[Schematic: Western Electric 22-D mixer]

Lathe Recording

Unfortunately, electric current during this time was still considered too unreliable and could cause audio “wobble” if electrical power was used to drive the turntable during recording. In some cases, lathe recorders used a heavy counterweight suspended from the ceiling, which would slowly drop to the floor at a specified rate and power the rotation of the lathe turntable, ensuring a continuous rotation speed. This weight system provided the lathe with a stable cut from the beginning to the end of the recording, unaffected by potentially unstable electrical power. Electrical power was used for amplification purposes, but not always for driving the turntable rotation while “cutting” the recording. Spring-based or wound mechanisms may have also been used.

1930s Continued

All things considered, this Western Electric portable 4 channel mixer was pretty far ahead of the curve. Innovations like this led us directly into the 60s and 70s era of recording. This portable mixing board alone, released in 1938, was definitely ahead of its time. Of course, this portability was likely being driven by both broadcasters, who wanted to record audio on location, and by the movie industry, which needed to record audio on location while filming. Though, the person tasked with using this equipment had to lug around 60 lbs of gear, 30 lbs on each shoulder.

Additionally, during the 1930s and specifically in 1934, Bell Labs began experimenting with stereo (binaural) recordings in their audio labs. Here’s an early stereo recording from 1934.

Note that even though this recording was made in stereo in 1934, the first commercially produced stereo / binaural record wouldn’t hit the stores until 1957. Until then, monophonic / monaural 78 RPM records remained the primary standard for purchasing music.

For home recording units (also used in some professional situations) in the 1930s, there were options. Presto created various models of home recording lathes. Some of Presto’s portable models include the 6D, D, M and J models, which were introduced between 1932 and 1937. The K8 model was later introduced around 1941. Some of these recorders can still be found on the secondary market today in working order. These units required specialty blank records in various 6″, 8″ and 10″ sizes that sported 4 holes in the center. This home recording lathe system recorded at either 78 or 33⅓ RPM. In 1934, these recording lathes cost around $400, equivalent to well over $2,000 today. By 1941, the price of the recorders had dropped to between $75 and $200. The blanks cost around $16 per disc during the 30s; that $16 then is equivalent to around $290 today. Just think about recording random noises onto a $290 blank disc. Expensive.

Finally, it is worth discussing Walt Disney’s contribution to audio recording during the late 30s and into 1940 with Fantasia, a contribution significant enough to deserve its own section below.

Disney’s Fantasia vs Wizard of Oz

Walt Disney and MGM both achieved notable technical feats in sound recording and reproduction during this period. Walt Disney contributed greatly to the advancement of theatrical film audio quality and stereo films. Fantasia (produced in 1939, released in 1940) was the first movie to sport a full stereo soundtrack in a theater. This was achieved through the use of a 9 track optical recorder when recording the original music soundtrack. These 9 optical tracks were then mixed down to 4 optical tracks for use when presenting the audio in a theater. According to Wikipedia, the Fantasia orchestra was outfitted with 36 microphones; these 36 mics were condensed down into the aforementioned 9 (fewer, actually) optical audio tracks when recorded. One of these 9 tracks was a click track for animators to use when producing their animations.

To explain optical audio a bit more: optical audio recording and playback is the method a sprocket film projector uses to play back audio through theater sound equipment. This optical audio system remained in use until the introduction of magnetic analog soundtracks and, later, digital audio in the 90s. Physically running alongside the 35mm or 70mm film imagery is an optical, but analog, audio track running the entire length of the film. There have been many formats for this optical track. The audio track is run through a separate analog audio decoder and amplifier at the same time the projector is flipping through images.

To present Fantasia in stereo in 1940, a theater would need two projectors running simultaneously along with appropriate amplifiers, left, right and center speakers hidden behind the screen and, in the case of Fantasound, speakers mounted in the back of the theater. The first projector presented the visual film image on the screen; that film reel also contained one mono optical audio track (used for backup purposes or for theaters running the film only in mono). The second “stereo” projector ran four (4) optical tracks consisting of the left, right and center audio tracks (likely the earliest 3.0 sound system). The fourth track managed an automated gain control to allow for fades as well as automated audio volume increases and decreases. This stereo system was dubbed Fantasound by Disney. Fantasound also apparently employed an RCA compression system to keep the audio volumes consistent (not too loud, not too low). At the time, when mono shellac recordings were common in the home, seeing a full color and stereo motion picture in the theater in November of 1940 would have been a technical marvel.

Let’s pause here to savor this incredible Disney cinema sound innovation. Consider that it’s 1940, just barely out of the 30s. Stereo isn’t even a glimmer in the eye of the record labels yet, and Walt Disney is outfitting theaters with what would basically be considered a modern multichannel theater sound system (as in 1970s or newer). Though Cinerama, a 7 channel audio standard, would land in theaters as early as 1952 featuring the documentary film This Is Cinerama, it wouldn’t be until 1962’s How The West Was Won that theatergoers actually got a full scripted feature film using Cinerama’s 7 channel sound system. In fact, Disney’s Fantasound basically morphed into what would become Cinerama, which also used synchronized projectors, though Cinerama used its multiple projectors for a different reason than multichannel sound.

Cinerama also pushed sound recording for film forward. It made filmmakers invest in more equipment and microphones to ensure that all 7 channels were recorded so that Cinerama could be used. Clearly, even though the technology was available for use in cinemas, filmmakers didn’t embrace this new audio technology as readily as theater owners were willing to install it. Basically, it wouldn’t be until the 60s and on into the 70s that Cinerama and the later THX and Dolby sound systems came into common use in cinemas. Disney ushered in the idea of stereo in theaters in the 40s, but it took nearly 30 years for the entire film industry to embrace it, including finding easier and cheaper ways to achieve it.

Disney’s optical automated volume gain control track foreshadows Disney’s use of animatronics in its own theme parks beginning in the 1960s. Even though Disney’s animatronics use a completely different mechanism of control, the use of an optical track to control automation of the soundtrack’s volume in 1939 was (and still is) completely ingenious. The entire optical stereo system, at a time when theaters were still running monophonic motion pictures, was likewise quite ingenious (and expensive).

Unfortunately, Fantasia’s stereo run in theaters would be short, with only 11 roadshow runs using the Fantasound optical stereo system. The installation of Fantasound required a huge amount of equipment, including installation of amplifiers, speakers behind the screen and speakers in the back of the theater. In short, it required the equipment that modern stereo theaters require today. See the link just above for more details on this.

Consider also that The Wizard of Oz, released in 1939 by MGM, was also considered a technical marvel for its Technicolor process, yet this musical film was released to theaters in mono. The film’s production did, however, record most, if not all, of the audio for The Wizard of Oz on a multitrack recorder during filming, which occurred between 1938 and 1939. It wouldn’t be until 1998 that The Wizard of Oz’s original 1939 multitrack audio was restored and remastered in stereo, finally giving the film a full stereo soundtrack from its original 1930s on-set multitrack recordings.

Here’s Judy Garland singing Over the Rainbow from the original multitrack masters recorded in 1939, even though this song wasn’t released in stereo until 1998 after this film’s theatrical re-release. Note, I would embed the YouTube video inside this article, but this YouTube channel owner doesn’t allow for embedding. You’ll need to click through to listen.

As a side note, it wouldn’t be until the 1950s that stereo became commonplace in theaters and until the late 50s that stereo records became available for home use. In 1939, we’re many years away from stereo audio standards. It’s amazing then that, between 1938 and 1939, MGM had the foresight to record this film’s audio using a multitrack recorder during filming sessions, in addition to MGM’s choice of employing those spectacular Technicolor sequences.


1940s

In addition to Disney’s Fantasia, the 1940s were punctuated by World War II (1939-1945), the Holocaust (1933-1945) and the atomic bomb (1945). The frugality that began with the Great Depression in 1929 and lasted through the late 1930s carried into the 1940s, in part because of leftover anxieties from the Depression, but also because of the wartime necessity to ration certain types of items including sugar, tires, gasoline, meat, coffee, butter, canned goods and shoes. This rationing led housewives to be much more frugal in other areas of life, including hairstyles and dress… also because the war surged the prices of some consumer goods.

Thus, this frugality influenced fashion and also impacted sound recording equipment manufacturing, most likely due to the early 1940s wartime efforts requiring manufacturers to convert to making wartime equipment instead of consumer goods. While RCA continued to manufacture microphones in the mid 40s (mostly after the war), a number of other manufacturers also jumped into the fray. Some microphone manufacturers targeted ham operators, while others created equipment targeted at “home recordists” (sic). These consumer microphones were fairly costly at the time, equivalent to hundreds of dollars today.

Some 1940s microphones sported a slider switch which allowed switching the microphone between uni-directional, bi-directional and omni-directional pickup. This meant that the microphone could be used in a wide array of applications. For example, both RCA’s MI-6203-A and MI-6204-A microphones (both released in 1945) offered a slider switch to move between the 3 different pickup types. Earlier microphones, like RCA’s 44A, required opening up the microphone down to the main board and moving a “jumper” to various positions, if this change could be performed at all. Performing this change was inconvenient and meant extra setup time. Thus, the slider in the MI-6203 and MI-6204 made this change much easier and quicker. See, it’s the small innovations!

During the 1940s, both ASCAP and, later, BMI (music royalty services, aka performing rights organizations or PROs) changed the face of music. In the 1930s, most music played on broadcast programs had been performed by a live studio orchestra, employing many musicians. During the 1940s, this began to change. As sound reproduction became better, these better quality recordings led broadcasters to use prerecorded music over live bands during broadcast segments.

This put a lot of musicians out of work, musicians who would have otherwise continued gainful employment with a radio program. ASCAP (established in 1914 as a PRO) tripled its royalties for broadcasters in January of 1941 to help out these musicians. In retaliation for these higher royalty costs to play ASCAP music, broadcasters dropped ASCAP music from their broadcasts, instead choosing public domain music and, at the time, unlicensed music (country, R&B and Latin). Disenchanted by ASCAP’s already doubled fees in 1939, broadcasters had created their own PRO, BMI (Broadcast Music Incorporated), in 1939. This meant that music placed under the BMI royalty catalog would either be free to broadcasters or supplied at a much lower cost than music licensed by ASCAP.

This tripling of fees in 1941 and the subsequent dropping of ASCAP’s catalog by broadcasters put a hefty dent in ASCAP’s (and its artists’) bottom line. By October of 1941, ASCAP had reversed its tripled royalty requirement. During this several month period in 1941, ASCAP’s higher fees helped popularize genres of music which were not only free to broadcasters, but were now being introduced to unfamiliar new listeners. Thus, these musical genres which typically did not get much air play prior, including country, R&B and Latin music, saw major growth in popularity during this time via radio broadcasters.

This genre popularity growth was partly responsible for the rise of Latin artists like Desi Arnaz and Carmen Miranda throughout the 1940s.

By 1945, many recording studios had converted away from using the lathe stylus recording turntables and begun using magnetic tape to prerecord music and other audio. The lathe turntables were still used to create the final 78 RPM disc master from the audio tape master for commercial record purposes. Broadcasters, however, could skip the disc step entirely and play back directly from reel-to-reel tape.

Reel-to-reel tape also allowed for better fidelity and better broadcast playback than those noisy 78 RPM shellac records of the time. It also cost less to use because a single reel of tape could be recorded over again and again. With tape, there is also less hiss and far less background noise, making for a more professional listening and playback experience in broadcast or film use. Listeners couldn’t tell the difference between the live radio segments and prerecorded musical segments.

Magnetic recording and playback would also give rise to better sounding commercials, though commercial jingle producers did still record commercials onto 78 RPM discs during that era. From 1945 until about 1982, recordings were produced almost exclusively using magnetic tape… a small preview of things to come.

While the very first vinyl record was released in 1930, this new vinyl format wouldn’t actually become viable as a prerecorded commercial product until 1948, when Columbia introduced its first 12″ 33⅓ RPM microgroove vinyl long playing (LP) record. CBS / Columbia was aided in developing this new format by the aforementioned Presto company. Considering Presto’s involvement with and innovation of its own line of lathe recorders, Columbia leaning on Presto was only natural. This Columbia LP format would replace the shellac 78 in short order.

At around 23 minutes per side, the vinyl LP afforded a musical artist about 46 minutes of recording time. This format quickly became the standard for releasing new music, not only because of the format’s ~46 minutes of running time, but also because it offered far less surface noise than shellac 78s. Vinyl records were also slightly less brittle than shellac records, giving them a bit more durability.

By 1949, RCA had introduced a 7″ 45 RPM microgroove vinyl format intended for use with individual (single) songs… holding around 4-6 minutes per side. These vinyl records at the time were still all monaural / monophonic. Stereo wouldn’t become available and popular until the next decade.

Note that the Presto Recording Corporation continued to release and sell portable, professional and home lathe recorders during the 1940s and on into the 50s and 60s. Unfortunately, the Presto Recording Corporation closed its doors in 1965.


1950s

By the 1950s, some big audio changes were in store; changes that Walt Disney helped usher in with Fantasia in 1940. Long past the World War II weary 1940s and the Great Depression ravaged 1930s, the 1950s had become a new prosperous era in American life. Along with this new prosperous time, fashion rebounded and so too did the recording industry and the movie theater industry. So too did musical artists, who now began focusing on a new type of music: rock and roll. Recording this new genre required some changes in the studio.

Because the late 1940s and early 1950s ushered in the filmed musical, many in Technicolor (and one in stereo in 1940), theaters saw real audio advancements. Stereo radio broadcasts wouldn’t be heard until the 60s and stereo TV broadcasts wouldn’t begin until the early 80s, but stereo would become commonplace in theaters during the 1950s, particularly due to these musical features and the pressures placed on cinema by television.

Musical films like Guys and Dolls (1955) were released in stereo along with earlier releases like Thunder Bay (1953) and House of Wax (1953). Though, it seems that some big musicals, like Marilyn Monroe’s Gentlemen Prefer Blondes (1953), were not released in stereo.

This means that stereo recording for films in the early 50s was haphazard and depended entirely on the film’s production. Apparently, not all film producers placed value in having stereo soundtracks for their respective films. Some blockbuster films, including several starring Marilyn Monroe, didn’t include stereo soundtracks. However, lower budget horror and suspense films did include them, probably to entice moviegoers in for the experience.

By 1957, the first stereo LP album was released, which ushered in the stereophonic LP era. Additionally, by the late 1950s, most film producers began to see the value in recording stereo soundtracks for their films. No longer was it in vogue to produce mono soundtracks for films. From this point on, producers choosing to employ mono soundtracks did so out of personal choice and artistic merit, like Woody Allen.

Here’s a vinyl monophonic version of Frank Sinatra’s Too Marvelous for Words, recorded for his 1956 album Songs for Swingin’ Lovers. Notice the telltale pops and clicks of vinyl. Even the newest untouched vinyl straight from the store still had a certain amount of surface noise and pops. Songs for Swingin’ Lovers was released one year before stereo made its vinyl debut. Though, Sinatra would still release a mono album in 1958, his last monophonic release, entitled Only the Lonely. Sinatra may have begun recording the Only the Lonely album in late 1956 using monophonic recording equipment and likely didn’t want to release portions of the album in mono and portions in stereo. Though, he could have done this by making side 1 mono and side 2 stereo. This gimmick would have made a great introduction to the stereo format for his fans and likely helped to sell even more copies.

[Audio: Frank Sinatra – Too Marvelous for Words (mono, 1956)]

This song is included to show the recording techniques being used during the 1950s and what vinyl typically sounded like.

Cinerama

In the 1950s, cinema had the most impact on audio reproduction and recording. Disney’s 1940 Fantasound process led to the design of Cinerama, a more simplified design by Fred Waller, modified from his previous, more ambitious multi-projector installations. Waller had been instrumental in creating training films for the military using multi-projector systems.

However, in addition to the 3 separate but synchronized images projected by Cinerama, the audio was also significantly changed. Like Disney’s Fantasound, Cinerama offered up multichannel audio, but in the form of 7 channels, not 4 like Fantasound. Cinerama’s audio system design is likely what led to the modern versions using DTS, Dolby Digital and SDDS sound. Cinerama, however, wasn’t intended to be primarily about the sound, but about the picture on the screen. Cinerama was intended to project 3 images across a curved screen and present that curved widescreen imagery seamlessly (a tall order, and it didn’t always work properly). Cinerama was only marginally about the 7 channel audio; the audio was important to the visual experience, but not as important as the 3 projectors driving the imagery across that curved screen.

Waller’s goal was to discard the old single projector ideology and replace it with a multi-projector system akin to having peripheral vision. The lenses used to capture the film images were intended to be nearly the same focal length as the human eye in an attempt to be as visually accurate as possible and give the viewer an experience as though they were actually there, though the images were still flat, not 3D.

While Waller’s intent was to create a ground breaking projection system, the audio system employed is what withstood the test of time and what drove change in the movie and cinema sound industries. Unlike Fantasound, which used two projectors, one for visuals and one for 4 channel sound using optical tracks, Cinerama’s sound system used a magnetic strip which ran the length of the film. This magnetic strip held 6 channels of audio, with the 7th channel provided by the mono optical strip. Because Cinerama had 3 simultaneous projectors running, the Cinerama system could have actually supported 21 channels of audio information.

However, Cinerama settled on 7 audio channels, likely provided by the center projector; information about exactly which of the three projectors provided the 7 channels of audio output is unclear. It’s also entirely possible that all 3 film reels held identical audio content for backup purposes: if one projector’s audio died, one of the other two projectors could be used. The speaker layout for the 7 channels was five speakers behind the screen (surround left, left, center, right, surround right), two speaker channels on the walls (left and right or whatever channels the engineer feeds) and two channels in the back of the theater (again, whatever the engineer feeds). There may have been more speakers than just two on the walls and in the rear, but two channels were fed to these speakers. The audio arrangement was managed manually by a sound engineer who would move the audio around the room live while the performance was running to enhance the effect and provide surround sound features. The 7 channels were likely as follows:

  • Left
  • Right
  • Center
  • Surround Left
  • Surround Right
  • Fill Left
  • Fill Right

Fill channels could be ambient audio like ocean noises, birds, trees rustling, etc. These ambient noises would be separately recorded and then mixed in at the appropriate time during the performance to bring more of a sense of realism to the visuals. Likely, the vast majority of the time, the speakers would provide the first 5 channels of audio. I don’t believe that this 7 channel audio system supported a subwoofer. Subwoofers would arrive in theaters later as part of the Sensurround system in the mid 1970s. Audio systems used in Cinerama would definitely influence later audio systems like Sensurround.

The real problem with Cinerama wasn’t its sound system. It was, in fact, its projector system. The 3 synchronized projectors projected separately filmed, but synchronized, visual sequences. As a result, the three projected images overlapped each other by a tiny bit, and the projectors played tricks to keep that line of overlap as unnoticeable as possible. While it mostly worked, the fact that the 3 cameras weren’t 100% perfectly aligned when filming led to problems with certain imagery on the screen. In short, Cinerama was a bear for a cinematographer to use. Very few film projects wanted to use the system due to the difficulty of filming scenes, and it was even more difficult to make sure the scene appeared properly when projected. Thus, Cinerama wasn’t widely adopted by filmmakers or theater owners. Though, the multichannel sound system was definitely something that filmmakers were interested in using.

Ramifications of Television on Cinema

As a result of the introduction of NTSC Television in 1941 and because of TV’s wide and rapid adoption by the early 1950s, the cinema industry tried all manner of gimmicky ideas to get people back into cinema seats. These gimmicks included, for example, Cinerama. Other in-cinema gimmicks included 3D glasses, smell-o-vision, mechanical skeletons, rumbling seats, multichannel stereo audio and even simple tricks like Cinemascope… which used anamorphic lenses to increase the width of the image instead of requiring multiple projectors to provide that width. The 50s were an era of endless trial and error cinema gimmicks in an effort to get people back into the cinema. None of these gimmicks furthered audio recording much, however.

Transition between Mono and Stereo LPs

During the 1960s, stereophonic sound would become an important asset to the recording industry. Many albums had the words “Stereo”, “Stereophonic” or “In Stereophonic Sound” plastered largely across parts of the album cover. Even the Beatles fell into this trap with a few of their albums. However, this marketing lingo was actually important at the time.

During the late 50s and into the early 60s, many albums were dual released, as a monophonic version and as a separate stereophonic release. The words across the front of the album were intended to tell the consumer which version they were buying. This marketing text was only needed while the industry kept releasing both versions to stores. And yes, even though the words appeared prominently on the cover, some people didn’t understand and still bought the wrong version.

Thankfully, this mono vs stereo ambiguity didn’t last very long in stores. By the mid-1960s nearly every album released had converted to stereo, with very few being released in mono. By the 70s, no more mono recordings were being produced, except when an artist chose to make the recording mono for artistic purposes.

No longer was the consumer left wondering if they had bought the correct version, that is until the quadraphonic releases of the 70s began… but that discussion is for the 70s. During the late 50s and early 60s, some artists were still recording in mono and some artists were recording in stereo. However, because many consumers didn’t yet own stereo players, record labels continued to release mono versions for consumers with monophonic equipment. It was assumed that stereo records wouldn’t play correctly on mono equipment, even though they played fine. Eventually, labels wised up and recorded the music in stereo, but mixed down to mono for some of the last monophonic releases… eventually abandoning monophonic releases altogether.


1960s

By the 1960s, big box recording studios were becoming the norm for recording bands like The Beatles, The Rolling Stones, The Beach Boys, The Who and vocalists like Barbra Streisand. These new studios were needed to produce professional and pristine stereo soundtracks on vinyl, which required heavy use of multitrack mixing boards. Here’s how RCA Studio B’s recording board looked when used in 1957. Most state of the art studios at the time would have used a mixing board similar to this one. The black and white picture shown on the wall behind this multitrack console depicts a 3 track mixing board, likely in use prior to the installation of this board.

[Photo: RCA Studio B mixing console, by cliff1066 under CC BY 2.0]

RCA Studio B became a sort of standard for studio recording and mixing throughout the early to mid 1960s and even into the late 1970s. While this board could accept many input channels, the resulting master recording might use as few as two or as many as eight tracks through the mid-60s. It wouldn’t be until the late 60s that magnetic tape technologies would improve to allow recording 16 channels, and later 24 channels by the 1970s.

Note that many modern mixing boards in use today resemble the layout and functionality of this 1957 RCA produced board, but these newer systems support more channels as well as effects.

Microphones also saw major improvements again in the 1960s. No longer were microphones simply utilitarian; now they were being sold for luxury sound purposes. For example, Barbra Streisand almost exclusively recorded with the Neumann M49 microphone (called the Cadillac of microphones, with a price tag to match) throughout the early 60s. In fact, this microphone became her staple. Whenever she recorded, she always requested a very specific serial number for her Neumann M49 from the rental service. She felt that this specific microphone made her voice sound great.

However, part of the recording process was not just the microphone that Barbra Streisand used. It was also the recording equipment that Columbia owned at the time. While RCA’s studios made great sounding records, Columbia’s recording system was well beyond that. Barbra’s recordings from the 60s sound like they could have been recorded today on digital equipment. To some extent, that’s partially true: Barbra’s original 1960s recordings have been cleaned up and restored digitally. However, you have to have an excellent product from which to start to make it sound even better.

Columbia’s recordings of Barbra in the 60s were truly exceptional. These recordings were always crystal clear. Yes, the clarity is attributable to the microphone, but also due to Columbia’s high quality recording equipment, which was leaps and bounds ahead of other studios at the time. Not all recording systems were as good as what Columbia used, as evidenced by the soundtrack to the film Hello Dolly (1969) which Barbra recorded for 20th Century Fox. This recording is more midrangy, less warm and not at all as clear as the recordings Barbra made for Columbia records.

There were obviously still pockets of less-than-stellar recording studios recording inferior material for film and television, even going into the 1970s.

Cassettes and 8-Tracks

During the early 1960s, and specifically in 1963, a new audio format was introduced: the Compact Cassette, otherwise known simply as the cassette tape. The cassette tape would go on to rival the vinyl record and have a commercial life of its own, which continues in diminished form to this day. Because the cassette didn’t rely on a moving stylus, there were far fewer constraints on the bass that could be laid down onto it. This meant that cassettes ultimately had better sonic capabilities than vinyl.

In 1965, the 8-track or Stereo 8 format was introduced, which became extremely popular initially for use inside vehicles. Eventually, though, the cassette tape and later the multi-changer CD would replace 8-track systems in car stereos. Today, CarPlay and similar Bluetooth systems are the norm.

The Stereo 8 Cartridge was created in 1964 by a consortium led by Bill Lear, of Lear Jet Corporation, along with Ampex, Ford Motor Company, General Motors, Motorola, and RCA Victor Records (RCA – Radio Corporation of America).

Quote Source: Wikipedia


1970s

The 1970s were punctuated by mod clothing, bell bottom jeans, Farrah Fawcett feathered hair, drive-in movies and leisure suits. Coming out of the psychedelic 1960s, these bright vibrant colors and polyester knits led to some rather colorful, but dated rock and roll music (and outfits) to go along.

Though drive-in theaters appeared as early as the 1940s, they would ultimately see their demise in the late 1970s, primarily due to urban sprawl and the rise of malls. Even still, drive-in theaters wouldn’t have lasted into the multitrack 7.1 sound era of rapidly improving cinema sound. There is no way to reproduce such incredible surround sound within the confines of automobiles of the era, let alone today. The best that drive-in theaters offered was either a mono speaker affixed to the window or tuning into a broadcast on the car radio, which might or might not offer stereo sound (usually not). The sound capabilities afforded by indoor theaters, coupled with year round air conditioning, led people indoors to watch films any time of the day and all year round, rather than watching movies in their cars only at night and when weather permitted; brutally cold winters don’t work well for drive-in theater viewing.

By the 1970s, sound recording engineers were also beginning to overcome the surface noise and sonic limitations of stereo vinyl records, making stereo records sound much better. During this era, audiophiles were born. Audiophiles are people who always want the best audio equipment to make their vinyl records sound their absolute best. To that end, audio engineers pushed vinyl’s capabilities to its limits. Because diamond needles must travel through a groove to play back audio, if the audio gained too much thumping bass or volume, it could jump the needle out of its groove and cause skipping.

To avoid this turntable skipping problem, audio engineers had to tune down the bass and volume when mastering for vinyl. While audio engineers could create two masters, one for vinyl and one for cassette, that almost never happened. Most sound engineers were tasked to create one single audio master for a musical artist and that master was strictly geared towards vinyl. This meant that a prerecorded cassette got the same audio master as the vinyl record, instead of a unique master created for the dynamic range available on a cassette.

Additionally, cassettes came in various formulations, from ferric oxide to metal (Type I to Type IV). There were eventually four different cassette tape formulations available to consumers, all of which producers could also use for commercial duplication. However, most commercial producers opted to use Type I or Type II cassettes (the least costly formulations available). These were available all the way through the 1970s. Type IV was metal and could produce the best sound available due to its tape formulation, but it didn’t arrive until late in the 1970s.

8-tracks could be recorded at home, but there was essentially only one tape formulation. These recorders began appearing in the 1970s for home use. It was difficult to record an 8-track tape and sometimes more difficult to find blanks. Because each tape program was limited in length, you had to make sure the audio didn’t spill over from one program to the next or else you’d have a jarring audio experience. With audio cassettes, this was a bit easier to avoid. Because 8-tracks had 4 stereo programs, each of the 4 stereo program segments was fairly short; with an 80 minute 8-track tape, that works out to 20 minutes per stereo program. It ends up more complicated for the home consumer to divide their music up into four 20 minute segments than to manage a 90 minute cassette with 45 minutes on each side.
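To illustrate that extra planning, here’s a small Python sketch with made-up track lengths. It greedily packs songs into fixed-length programs: four 20-minute 8-track programs leave far less slack per program than two 45-minute cassette sides, so songs are more likely to end up out of order or not fit at all. This is only an illustration of the arithmetic, not a description of how any recorder or duplicator actually worked.

```python
# Illustrative sketch: greedily pack songs (lengths in minutes, made up) into
# fixed-length programs, the planning a home recordist faced with an 8-track
# (4 x 20 min) versus a C90 cassette (2 x 45 min).

def pack(tracks, program_count, program_length):
    programs = [[] for _ in range(program_count)]
    remaining = [program_length] * program_count
    unplaced = []
    for title, length in tracks:
        for i in range(program_count):
            if length <= remaining[i]:
                programs[i].append(title)   # first program with enough room
                remaining[i] -= length
                break
        else:
            unplaced.append(title)          # song won't fit without splitting it
    return programs, unplaced

songs = [("Song A", 7), ("Song B", 6), ("Song C", 9), ("Song D", 8),
         ("Song E", 5), ("Song F", 11), ("Song G", 4), ("Song H", 10)]

print(pack(songs, 4, 20))  # 8-track: four tight 20-minute bins, order gets shuffled
print(pack(songs, 2, 45))  # cassette: two roomy 45-minute sides
```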

Because a vinyl record only holds about 46 minutes, that length became the standard for musical artists until the CD arrived. Even though cassettes could hold up to 90 minutes of content, commercially produced prerecorded tapes held only the amount of tape needed to match the 46 minutes of content available on vinyl. In other words, musical artists didn’t offer extended releases on cassettes during the 70s and 80s. It wouldn’t be until the CD arrived that musical artists were able to extend the amount of content they could produce.

As for studio recording during the 1960s and 1970s, most studios relied on Ampex or 3M (or similar professional quality) 1/2 inch or 1 inch multitrack tape for recording musical artists in the studio. Unfortunately, many of these Ampex and 3M branded tape formulations turned out not to be archival. This led to degradation (i.e., sticky-shed syndrome) in some of these audio masters 10-20 years later. The Tron soundtrack, recorded in 1982 on Ampex tape, degraded in the 1990s to the point that the tape needed to be carefully baked in an oven to temporarily re-stabilize the binder holding the oxide coating. After baking, there were effectively only a few limited shots at re-recording the original tape audio onto a new master. Some Ampex tape audio master recordings may have been completely lost from this lack of archivability. Wendy Carlos explained in 1999 what it took to recover the masters for the 1982 Tron soundtrack.

Thankfully, cassette tape binder formulations didn’t seem to suffer from sticky-shed syndrome like some formulations of Ampex and 3M professional tape masters did. It also seems that 8-track tapes may have been immune to this problem as well.

For cinematic films made during the 1970s, almost every single film was recorded and presented in stereo. In fact, George Lucas’s Star Wars in 1977 ushered in the absolute need for stereo soundtracks in summer blockbusters, with action sequences timed to orchestral music. Musical cues timed to each visual beat have become a staple in filmmaking since the first Star Wars in 1977. While the recording of the music itself was much the same as it was in the 60s, the use of orchestral music timed to visual beats became the breakthrough feature of filmmaking in the late 1970s and beyond. This technique is still very much in use today in every summer blockbuster.

As for vinyl records and tapes of the 70s, surface noise and hiss were always a problem. To counter this problem, cassettes employed Dolby noise reduction techniques almost from the start. Commercially prerecorded tapes were encoded with a specific type of noise reduction, and the player needed to be set to the same type to reduce the inherent noise. Setting a tape to the wrong noise reduction setting (or none at all) could cause the high end to sound exaggerated or dulled and, in some cases, the audio playback to distort. For tapes, the most commonly used noise reduction was Dolby B, with the occasional use of Dolby C, though tapes could be encoded with Dolby A, B, C or S. Dolby noise reduction began around 1965 (with professional Dolby A); the consumer-oriented Dolby B became the most commonly sold noise reduction for commercially prerecorded music cassettes and remained in use throughout the 70s and 80s.

For vinyl, most vinyl albums didn’t include noise reduction at all. However, starting around 1971, a relatively small number of vinyl releases were sold containing the DBX noise reduction encoding. The discs were signified with the DBX encoded disc notation. This system, like Dolby’s tape noise reduction system, requires a decoder to play back the vinyl properly. Unfortunately, no turntables or amplifiers sold, that Randocity is aware of, had a built-in DBX decoder. Instead, you had to buy a separate DBX decoder component and place it inline in your stereo hi-fi chain of devices, like the DBX model 21 decoder. DBX vinyl noise reduction was not just noise reduction, however; it also changed the audio dynamics of the recorded vinyl groove. DBX encoded discs thinned out and compressed the sonics and dynamics dramatically, making listening to a DBX encoded vinyl disc without a decoder nearly impossible. The DBX decoder would expand these compressed and thinned tracks back into their original, suitably dynamic audio range.

Playing a DBX encoded vinyl disc back properly required buying a DBX decoder component (around $150-$200 on top of the cost of an amplifier, speakers and a turntable). That extra cost covered only a handful of vinyl discs, so it wasn't really worth the investment. DBX is unlike Dolby B on tape, which still sounds relatively decent even when left undecoded; DBX encoded vinyl discs are almost impossible to listen to without a decoder. That's likely why so few vinyl discs were released with DBX encoding. However, if you were willing to invest in a DBX decoder component, the high and low ends were said to sound much better than on a standard vinyl disc with no noise reduction. The DBX system expanded and reproduced these dynamics better, though probably not with as full a sound as a CD can reproduce. A DBX encoded release also likely meant that a remastered, or at least better equalized, version of the vinyl master was produced specifically for it.
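
If you're curious what the companding idea behind DBX looks like in practice, here's a greatly simplified Python sketch. It only illustrates the 2:1 compress / 1:2 expand concept; a real DBX unit works on a smoothed signal envelope and adds frequency pre-emphasis, none of which is modeled here, and the numbers and function names are purely illustrative.

```python
# Simplified 2:1 companding sketch (not how any real DBX hardware is built).
import numpy as np

def db(x, floor=1e-9):
    """Convert a linear amplitude to decibels."""
    return 20.0 * np.log10(np.maximum(np.abs(x), floor))

def compress_2to1(signal):
    """Encode: halve the level in dB so quiet passages sit well above the
    medium's surface noise when cut to vinyl or printed to tape."""
    level_db = db(signal)
    gain = 10.0 ** ((level_db / 2.0 - level_db) / 20.0)
    return signal * gain

def expand_1to2(encoded):
    """Decode: double the level in dB, restoring the original dynamics and
    pushing the medium's noise back down along with the quiet passages."""
    level_db = db(encoded)
    gain = 10.0 ** ((level_db * 2.0 - level_db) / 20.0)
    return encoded * gain

# A quiet sample at -60 dB is encoded at roughly -30 dB, then restored on decode.
quiet = np.array([0.001])
print(db(compress_2to1(quiet)), db(expand_1to2(compress_2to1(quiet))))
```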

With that said, Dolby C and Dolby S are more like DBX in how they handle dynamics than Dolby A and B, which were strictly noise reduction and offered no dynamic enhancement. These noise reduction techniques are explained here under the 1970s because this is where they rose to their most prominent use, more so on cassettes than on vinyl. Of course, none of these techniques are needed on the CD format, which is yet to come in the 80s.

For professional audio recording, in 1978, 3M introduced the first digital multitrack recorder for professional studio use. This recorder used one inch tape and could record up to 32 tracks. However, it priced in at an astonishing $115,000 (32 tracks) and $32,000 (4 tracks), which only a professional recording studio could afford. Studios like Los Angeles's A&M Studios, The Record Plant and Warner Brothers' Amigo Studios all installed this 3M system.

Around 1971, IMAX was introduced. While this incredibly large screen format didn't specifically change audio recording or drastically improve audio playback in the cinema, it did provide a much bigger screen experience which has endured to today. It's included here for completeness in the 70s, not so much for improvements to audio recording, though it did raise the bar on film requirements for filmmakers.

For advancements in cinema sound, the 1970s saw the introduction of Sensurround. There weren't many features that supported this cinema sound system, and mostly for good reason. The gimmick primarily featured huge, theater-shaking subwoofers aimed directly at the audience from below the screen. Nevertheless, subwoofers have since become a constant in theaters ever since the introduction of Sensurround, just not to Sensurround's extreme degree. Like the 1950s' near endless gimmicks to drive people back into theaters, the 1970s tried a few of its own, such as Sensurround, to captivate audiences and drive people back into theaters.

Earthquake Sensurround

In case you’re really curious, a few film features supporting Sensurround were Earthquake (1974), the Towering Inferno (1974) and Battlestar Galactica (1978). The Sensurround experience was interesting, but the thundering, rattling subwoofer bass was, at times, more of a distraction than it added to the film’s experience. It’s no wonder why it only lasted through the 70s and why only a few filmmakers used it. Successor cinema sound systems include DTS, Dolby Digital and SDDS, while THX ensured proper sound reproduction to ensure those rumbling, thundering bass segments can be properly heard (and felt).

Digital Audio Workstations

Let’s pause here to discuss a new audio recording methodology introduced as a result of the advent of digital audio… more or less required for producing the CD. As a result of digital audio recorders becoming available in the late 70s and early 80s and with accessibility of easy to use computers now dawning, the DAW or digital audio workstation is born. While computers in the late 70s and early 80s were fairly primitive, the introduction of the Macintosh computer (1984) with its impressive and easy to use UI made building and using a DAW much easier. It’s probably a little early to discuss DAWs during the late 70s early 80s here, but because it factors into nearly every type of digital audio recording prominently during the late 80s, 90s, 00s and beyond, the discussion is placed here.

Moving into the late 80s and early 90s with even easier UI based computers and operating systems like the Macintosh (1984), Amiga (1985), Atari ST (1985), Windows 3 (1990) and later Windows 95 (1995), DAWs became even more available, accessible and usable by the general public. With the release of Windows 98 and newer Mac OS systems, DAW software became even more feature rich, commonplace and easy to use, ultimately targeting home musicians.

Free open source products like Audacity, which began development in 1999, also became available. By 2004, Apple would include its own DAW, GarageBand, with Mac OS X (and later with iOS). Acid Music Studio by Sonic Foundry, another home consumer DAW, was introduced in 1998 for Windows. This product and the rest of Sonic Foundry's audio software would subsequently be acquired by Sony, then later sold to Magix in 2016.

Let’s talk more specifically about DAWs for a moment. The Digital Audio Workstation was a ground breaking improvement over editing using analog recording technologies. This digital visual editing system allows for much easier digital audio recording and editing than any previous system before it. With tape recording technologies of the past, to move audio around required juggling tapes by re-recording and then overdubbing on top of existing recordings. If done incorrectly, it could ruin the original audio with no way back. Back in the 50s, the simplest of editing which could be done with analog recordings was playing games with tape speeds and, if possible on the tape recorder itself by overdubbing.

With digital audio clips in a DAW, you can now pull up individual audio clips and place them on as many tracks as are needed visually on the screen. This means you can place drums on one track, guitars on another, bass on another and vocals on another. You can easily add sound effects to individual tracks or layer them on top with simple drag, drop and mouse moves. If you don’t like the drums, you can easily swap them for an entirely new drum track or mute the drums altogether to create an acoustic type of effect. With a DAW, creative control is almost limitless when putting together audio materials. In addition, DAWs support plugins of all varying types including both digital instruments as well as digital effects. They can even be used to synchronize music to videos.

DAWs are intended to allow for mixing multiple tracks down into a stereo (2 track) mix in many different formats, including MP3, AAC and even uncompressed WAV files.
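
At its core, a stereo mixdown is just arithmetic on sample arrays: each track gets a gain and a pan, then everything is summed into two channels. Here's a minimal Python sketch of that idea; the track names, gains and the simple linear pan law are illustrative only, not how any particular DAW does it internally.

```python
# Minimal sketch of a DAW-style stereo mixdown using plain sample arrays.
import numpy as np

SAMPLE_RATE = 44100

def mixdown(tracks):
    """tracks: list of (mono_samples, gain, pan) where pan runs -1 (left) to +1 (right)."""
    length = max(len(t[0]) for t in tracks)
    left = np.zeros(length)
    right = np.zeros(length)
    for samples, gain, pan in tracks:
        s = np.asarray(samples, dtype=float) * gain
        left[:len(s)] += s * (1.0 - pan) / 2.0    # simple linear pan law
        right[:len(s)] += s * (1.0 + pan) / 2.0
    peak = max(np.abs(left).max(), np.abs(right).max(), 1e-9)
    return np.stack([left, right], axis=1) / peak  # normalize to avoid clipping

# Example: two seconds of a synthetic "bass" and "guitar" panned apart.
t = np.linspace(0, 2, 2 * SAMPLE_RATE, endpoint=False)
bass = np.sin(2 * np.pi * 110 * t)
guitar = np.sin(2 * np.pi * 440 * t)
stereo = mixdown([(bass, 0.8, -0.3), (guitar, 0.5, 0.4)])
print(stereo.shape)   # (88200, 2)
```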

DAWs can also control external music devices, like keyboards, drum machines or any other device that supports MIDI control. DAWs can also be used to record music or audio for films, allowing for easy placement using the industry standard SMPTE timecode. This allows complete synchronization of an audio track (or set of tracks) with a film's visuals easily and, more importantly, consistently. SMPTE can even control such devices as lighting controllers to allow for live control over lighting rig automation, though some lighting rigs also support MIDI. A DAW is a very flexible and extensible piece of software used by audio recording engineers to take the hassle out of mixing and mastering musical pieces and to speed up the musical creation process… it can even be used in live music situations.
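
To give a flavor of the "DAW drives an external MIDI device" idea, here's a tiny Python sketch using the third-party mido library (installed with python-rtmidi as its backend). This isn't how any commercial DAW is written; it simply shows that sending note messages to whatever device the system exposes as its default MIDI output is a few lines of code.

```python
# Sketch: send a short arpeggio to the default MIDI output (pip install mido python-rtmidi).
import time
import mido

port = mido.open_output()                 # opens the system's default MIDI output port
for note in (60, 64, 67):                 # a simple C major arpeggio
    port.send(mido.Message('note_on', note=note, velocity=90))
    time.sleep(0.25)
    port.send(mido.Message('note_off', note=note))
port.close()
```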

While DAWs came into existence in the early 1980s for professional use, it was the 1990s and into the 2000s which saw more home consumer musician use, especially with tools like Acid Music Studio, which based its entire DAW around managing creative loops… loops being short for looped audio clips. Sonic Foundry sold a lot of prerecorded royalty free loops which users could incorporate into their own musical works. Though, if you wanted to create your own loops in Acid Music Studio using your own musical instruments, that was (and still is) entirely possible.

The point is, once the DAW became commonplace, it changed the recording industry in very substantial ways. Unfortunately, with the good comes the bad. As DAW technology improved, so too did technologies to "improve" a singer's vocals… thus was born the dreaded and now overused Auto-Tune vocal effect. This effect is now used by many vocalists as a crutch to make their already great voice supposedly sound even better. On the flip side, it can also be used to make bad vocalists sound passable… which is how it seems to be used these days. I don't personally think Auto-Tune ever makes vocals sound better, but my opinion doesn't matter when it comes to such recordings. With DAWs out of the way, let's segue into another spurious 1970s audio technology topic…

Quadraphonic Vinyl Releases

In the early 1970s, just as stereo began to take hold, JVC and RCA devised quadraphonic vinyl albums. This format expected the home consumer to buy into an all new audio system: a quad decoder amplifier, a quad-capable turntable, two additional speakers for a total of four, plus albums that supported the quad format. This was a tall (and expensive) order. Just as people had begun investing in somewhat expensive amplifiers and speakers to support stereo, JVC and RCA expected the consumer to toss all of their existing (mostly new) equipment and invest in brand new equipment AGAIN. Let's just say that didn't happen. Though, to be fair, you didn't need to buy an entirely new turntable. Instead, you simply needed to buy a CD-4 cartridge for your existing turntable and have an amplifier that could decode the resulting CD-4 encoded signal.

For completeness, the CD-4 system offered 4 discrete audio channels: left front, left back, right front and right back. Quad was intended to be enjoyed with four speakers placed in a square around the listener.

This quad plan expected way too much of consumers. While many record labels did adopt the format and produced perhaps hundreds of releases in quad, it was not at all successful due to consumer reluctance. The equipment was simply too costly for most consumers to toss and replace their existing Hi-Fi gear. Stereo remained the dominant music format and has remained so since. Though, the special stylus cartridges and higher quality vinyl formulations needed to produce quad LPs did end up improving ordinary stereo records as well.

Note also that while quad vinyl LP releases made their way into record stores in the early 1970s, no cassette version of quad ever became available. However, Q8 or quad 8-track tapes arrived as early as 1970, two years before the first vinyl release. Of course, 8-track tapes at the time were primarily used in cars… which would have meant upgrading your car audio system with two more speakers and a new quad-capable player with four amplifiers, one for each speaker.

The primary thing the quad format succeeded at, at least for consumers, was muddying the waters at the record store and introducing multichannel audio playback, which wouldn't really become a home consumer "thing" until the DVD arrived in the 1990s. For a consumer shopping for albums in the 1970s, it would have been super easy to accidentally buy a quad album, take it home and then realize it doesn't play properly. The same problem existed for Q8 tapes, though Q8 tapes had a special quad notch that may have prevented them from playing in some players. And now, onto the …

blocker

1980s

In the 1980s, we see big hair and glam rock bands and hear new wave, synth pop and alternative music on the radio. Along with all of these, this era ushers us into the digital music era with the new Compact Disc (CD) and, of course, its players. The CD, however, would actually turn out to be a stopgap of a couple of decades for the music and film industries. While the CD is still very much in use and available today, its need is diminishing rapidly with the likes of music services such as Apple Music. But, that discussion is for the 2010s and into the 2020s.

Until 1983, vinyl, cassettes and, to a much lesser degree, 8-track tapes were the music formats available to buy at a record store. By late 1983 and into 1984, the newfangled CD hit store shelves, though not yet in a major way. At the same time, out went 8-track tapes. While the introduction of the CD was initially aimed at the classical music genre, where the CD's silence and dynamic range work exceedingly well for capturing orchestral arrangements, pop music releases would take a bit more time to ramp up. By late 1984 and into 1985, popular music eventually begins to dribble its way onto CD as record labels begin re-releasing back catalog in an effort to slowly and begrudgingly embrace the new format. Bands were also embracing the format, so new music began releasing onto CD faster than back catalog.

However, the introduction of the all digital CD upped the sound engineer's game once again. Just as vinyl took a while for sound engineers to grasp, so too did the CD format. Because the CD's frequency and dynamic range far exceed vinyl's, placing masters made for vinyl onto a CD made for a lower volume and a sonically unexciting, occasionally shrill music experience.

If you buy a CD made in the mid 1980s and listen to it, you can tell the master was originally crafted for a vinyl record. The sonics are usually tinny, harsh and flat with a very low volume. These vinyl masters were intended to prevent the needle from skipping and relied on some of the sonics being smoothed out and filled in by the turntable and amplifier themselves. A CD needs no such help. This meant that CD sound engineers needed to find their footing on how deep the bass could go, how high the treble could get and how loud it could be. Because vinyl (and the turntable itself) tended to attenuate the audio to a more manageable level, placing a vinyl master onto CD foisted all of these inherent vinyl mastering flaws onto the CD buying public. Especially considering the price tag of a CD was typically around $14.99 when vinyl records regularly sold for $5.99-$7.99, asking a consumer to fork over almost double the price for no real benefit in audio quality was a tall order.

Instead, sound engineers needed to remix and remaster the audio to fill out the dynamics and sonics the CD could offer. However, studios at the time were cheap and wanted to sell product fast. That meant existing vinyl masters instantly made their way onto CDs, only to sound thin, shrill and harsh. In effect, it introduced the buying public to a lateral, if not inferior, product that seemed to sound about the same as vinyl. The only masters genuinely tailored for CD at the time were mostly classical music releases. Pop artists' older catalog titles were simply rote copied straight onto the CD format… no changes. To the pop, rock and R&B buying consumer, the CD appeared to be an expensive transition format with no real benefit.

The pop music industry more or less screwed itself with the introduction of the CD format before it even got a foothold. By the late 80s and into the early 90s, consumers began to hear the immense difference in a CD as musical artists began recording their new material using the full dynamic range of the CD, sometimes on digital recorders. Eventually, consumers began to hear the much better sonics and dynamics the CD format is capable of. However, during the initial 2-4 years after the CD was introduced, many labels released previous vinyl catalog onto CD sounding way less than stellar… dare I say, most of those CD releases sounded bad. Even new releases were a mixed bag depending on the audio engineer's capabilities and equipment access.

Further, format wars always seem to ensue with new audio formats and the CD was no exception. Sony felt the need to introduce its smaller MiniDisc format (arriving in 1992), which used lossy compression. While the CD format offered straight up uncompressed 16 bit digital audio, the MiniDisc offered compressed audio akin to an MP3. The introduction of the MiniDisc (MD) meant that this was the first time a consumer was effectively introduced to an MP3-like device. While the compression on the MD wasn't the same as MP3, it effectively produced the same result. In effect, you might say the MiniDisc player was the first pseudo MP3 player, just one that used a small optical disc for its music storage.

The CD format was not dissuaded by the introduction of the MD format. If anything, many audiophile consumers didn’t like the MD for the fact that it used compressed audio, making it sometimes sound worse than a CD. Though, many vinyl audiophiles also didn’t embrace the CD format likening it to a very cold musical experience without warmth or expression. Many vinyl audiophiles preferred and even loved the warmth that a stylus brought to vinyl when dragged across a record’s groove. I was not one of these vinyl huggers, however. When a CD fades to silence, it’s 100% silent. When a vinyl record fades to silence, there’s still audible vinyl surface noise present. The silence and dynamics alone made the CD experience golden… especially when the deep bass and proper treble sonics are mixed correctly for the CD.

The MiniDisc did thrive to an extent, but only because recorders became available early in its life along with many, many players from a lot of different companies, thus ensuring price competition. That, and the MD sported an exceedingly small size when compared to carrying around a huge CD Walkman. This allowed people to record their own already purchased audio right to a MiniDisc and easily carry their music around with them in their pocket. The CD didn’t offer recordables until much, much later into the 90s, mostly after computers became commonplace and those computers needed to use CDs as data storage devices. And yes, there were also many prerecorded MiniDiscs available to buy.

During the late 70s and into the early 80s, bands began to experiment with digital recording units in studios, such as 3M’s. In 1982, Sony introduced its own 24 track PCM-3324 digital recorder in addition to 3M’s already existing 1978 32 track unit, thus widening studio options when looking for digital multitrack recorders. This expanded the ability for artists to record their music all digital at pretty much any studio. Onto the cinema scene…

In the early-to-mid 80s, a new theater sound standard emerged: THX, by Lucasfilm. This cinema acoustical standard is not a digital audio format and has nothing to do with recording; it has everything to do with audio playback and sound reproduction in a specific sized room space. At the time, theaters were coming out of the 1970s with short lived audio technologies like Sensurround. In the 1970s, theater acoustics were still fairly primitive and not at all optimized for the large theater room space. Thus, many theater sound systems were under-designed (read: installed on the cheap) and didn't correctly fill the room with audio, leaving the soundtrack and music, at times, hard to hear. When Star Wars: Return of the Jedi was on the cusp of being released in 1983, George Lucas took an interest in theater acoustics to ensure moviegoers could hear all of the nuanced audio as he had intended in the film. Thus, the THX certification was born.

THX is essentially a movie theater certification program that ensures all "certified theaters" provide an optimal acoustical experience for moviegoers. Like the meticulous setup of Walt Disney's Fantasound in 1940, George Lucas likewise wanted to ensure his theater patrons could correctly hear all of the nuances and music within Star Wars: Return of the Jedi in 1983. Thus, any theater that chose to certify itself via the THX standard had to outfit each of its auditoriums appropriately to fill the theater space correctly for all patrons.

However, THX is not a digital recording standard. Digital cinema sound formats like Dolby Digital, DTS and even SDDS are all capable of playing back in theaters certified for THX. THX certified theaters also play the Deep Note sound to signify that the room is acoustically certified to present the feature film to come. In fact, even a multichannel analog system such as Fantasound, were it still available, could benefit from an acoustically certified THX theater. Further, each cinema must outfit each individual auditorium in the building to uphold the THX standard. That means the manager of a given megaplex must work with THX to ensure each auditorium adheres to the THX acoustic standard before it can be certified. THX means having the appropriate volume levels to fill the space for each channel of audio no matter where the patron chooses to sit within the theater.

CD Nomenclature

When CDs were first introduced, it became difficult to determine whether a musician's music was recorded analog or digital. To combat this confusion, CD producers put a 3 letter code (known as the SPARS code) onto the cover to tell consumers how the music was recorded, mixed and mastered. For example, DDD meant that the music was recorded, mixed and mastered using only digital equipment. In the 80s, that typically meant digital tape machines and digital mixing; today, a DDD release would almost certainly mean a DAW was used end to end. Other labels you might see included:

DAD = Digital recording, Analog mixing, Digital mastering
ADD = Analog recording, Digital mixing, Digital mastering
AAD = Analog recording, Analog mixing, Digital mastering

The third letter on a CD would always be D because every CD had to be digitally mastered regardless of how it was recorded or mixed. This nomenclature has more or less dropped away today. I’m not even sure why it became that important during the 80s, but it did. It was probably included to placate audiophiles at the time. I honestly didn’t care about this nomenclature. For those who did, it was there.
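
Reading the code is as mechanical as it sounds: the three letters map onto the recording, mixing and mastering stages in order. Here's a trivial Python illustration of that mapping; the function name is made up for this example.

```python
# Trivial illustration of decoding the three-letter SPARS code described above.
STAGES = ("recorded", "mixed", "mastered")
MEANING = {"A": "analog", "D": "digital"}

def describe_spars(code):
    """e.g. describe_spars("ADD") -> 'recorded analog, mixed digital, mastered digital'"""
    return ", ".join(f"{stage} {MEANING[letter]}"
                     for stage, letter in zip(STAGES, code.upper()))

print(describe_spars("AAD"))   # recorded analog, mixed analog, mastered digital
```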

Almost all back catalog releases recorded during the 70s and even some into the 80s would likely have been AAD simply because digital equipment wasn’t yet available when most 70s music would have been recorded and mixed. However, some artists did spend the money to take their original analog multitrack recordings back to an audio engineer to convert them to digital for remixing and remastering, thus making them ADD releases. This also explains why certain CD releases of some artists had longer intros, shorter outros and sometimes extended or changed content from their vinyl release.

DAT vs CD

Sony further introduced its two-track DAT professional audio recording systems around 1987. It would be these units that would allow bands to mix down to stereo digital recordings more easily. However, Sony messed this audio format up for the home consumer market.

Fearing that consumers could create infinite perfect duplicates of DAT tapes, Sony introduced a system (SCMS, the Serial Copy Management System) that limited how many times a DAT tape could be duplicated. Each time a tape was duplicated, a marker was placed onto the copy. If a recorder detected that the marker had reached the allowed duplication limit, any recorder supporting the scheme would refuse to duplicate the tape again. This copy protection system all but sank Sony's DAT system as a viable consumer alternative. Consumers didn't understand the system, but more than this, they didn't want to be limited by Sony's stupidity. Thus, DAT was dead as a home consumer technology.
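
The scheme as described above boils down to a generation counter and a refusal rule, which a few lines of Python can illustrate. This is only a sketch of the behavior in spirit; the data structure is made up and real SCMS works with flag bits embedded in the digital audio subcode, not objects like these.

```python
# Sketch of generation-limited copying (SCMS-style), as described above.
from dataclasses import dataclass

MAX_GENERATIONS = 1   # copies of originals allowed, copies of copies refused

@dataclass
class Tape:
    title: str
    generation: int = 0   # 0 = original / prerecorded tape

def duplicate(source: Tape) -> Tape:
    """A compliant recorder refuses once the generation limit is reached."""
    if source.generation >= MAX_GENERATIONS:
        raise PermissionError(f"Recorder refuses: '{source.title}' is already "
                              f"generation {source.generation}")
    return Tape(source.title, source.generation + 1)

original = Tape("Live recording")
first_copy = duplicate(original)        # allowed
try:
    duplicate(first_copy)               # blocked by a compliant recorder
except PermissionError as e:
    print(e)
```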

This at a time when MiniDisc wasn't saddled with the same duplication stigma. Sony's DAT format silently died while MiniDisc continued to thrive throughout the 1990s. Though, to be fair, the MD's compression system would eventually turn duplicated music into unintelligible garbage after a fair number of recompression dupes. The DAT system used uncompressed audio where the MD didn't.

The stupidity of Sony's approach was that Sony and other manufacturers also sold semi-professional and professional DAT equipment, and the "professional" gear was not subject to this limited duplication system. Anyone who wanted an unrestricted DAT recorder could simply buy up to semi-professional gear from a manufacturer like Fostex, where no such copy protection scheme was enforced. By the time these other manufacturers' gear became widely available, consumers no longer cared about the format.

A secondary problem with the DAT format was that it used helical scanning head technology, similar to the heads used in a VHS or Betamax video system. These heads spin rapidly and can go out of alignment easily. As a result, a DAT car stereo system was likely not feasible long term. Meaning, if you hit a bump, the spinning head might shift alignment and require readjustment. Enough bumps and the whole unit might need to be fully realigned. Even the heat of scorching summer days might damage a DAT system.

Worse, helical scanning systems get dirty quickly, in addition to their alignment problems. This meant regularly cleaning these units with a specially designed cleaning tape. Many DAT recorders would stop working altogether until you ran a cleaning tape through the unit, which would reset the cleaning counter and allow the unit to function again until it needed another cleaning. Alignment problems also didn't help the format. A recording made on one DAT unit might not play back on another unit; head alignment is critical between two different machines. This might mean getting a tape from a friend, whose DAT machine is aligned differently from yours, that won't play. CDs and MDs didn't suffer from this alignment problem. In practice, while you could always play back DATs recorded on your own unit, a friend might not be able to play your DAT tapes in their unit at all, suffering digital noise, static, long dropouts or silence on playback.

DAT was not an optimal technology for sharing or when using outside of the home for audio. Though, some bootleggers did adopt the portable DAT recorder for bootlegging concerts. That’s pretty much no longer needed, with smartphones now taking the place of such digital recorders.

Though, Sony would more than make up for the lack of DAT being adopted as a home audio format after the computer industry adopted the DAT tape as an enterprise backup tape solution. Once DAT tape changers and libraries became common, DATs became a staple in many computer centers. All was not lost for Sony in this format. DAT simply didn’t get used for its original intended purpose, to be a home consumer digital audio format. Though, it did (and does) have a cult audiophile and bootleg following.

blocker

1990s

By the 1990s, the CD had quickly become the new staple of the music industry (over vinyl and cassettes). It was so successful that it caused the music industry to all but stop producing vinyl records, long before their recent resurgence in the 2010s for a completely different reason. Cassettes and 8-track tapes also went the way of the dinosaurs. Though 8-tracks had been more or less gone from stores by 1983, the prerecorded cassette continued to limp along through the 90s. Though even newer digital audio technologies and formats were on the horizon, they wouldn't make their way into consumers' hands until the late 1990s.

Throughout the 1990s, the CD remains the primary digital audio format of choice for commercial prerecorded music. By 1995, you could even record your own audio CDs using blanks, thanks to the burgeoning computer industry. This meant that you could now copy an audio CD, convert all of the audio tracks from a CD into MP3s (called ripping) and/or make an MP3 CD, which some later CD players could play. And yes, there were even MiniDisc car stereos available later in the decade. The later rise of the USB flash drive also gave life to MP3s, since you could easily carry far more music from place to place and from computer to computer than could be held on a single CD. The MP3's portability and downloadability, along with the Internet, gave rise to music downloading and sharing sites like Napster.

Though MP3 CDs could be played in some CD players, the format didn't really take off as a standard. This left the audio CD as the primary means of playing music in a car, and thus multi-CD car changers were born. Car stereo models that supported MP3 formatted CDs would have an "MP3" label printed on the front bezel near the disc slot; no label meant MP3s were not supported. The rise of separate MP3 players further gave rise to auxiliary input jacks from car manufacturers, which began as a response to clumsy cassette adapters. If the car stereo had only a cassette player, you would need a cassette adapter to plug in your 3.5mm jack equipped MP3 player. Eventually, car players would adopt the Bluetooth standard so that wireless playback could be achieved with smart phones, but the full usefulness of that technology wouldn't become common until many years after the 1990s. However, Chrysler took a chance and integrated its own Bluetooth UConnect system into one of its cars as early as 1999! Talk about jumping on board early!?!

Throughout the 1990s, record stores were also still very much common places to shop for and buy audio CDs. By the late 1990s, the DVD, with its multichannel audio, had also become common. Even big box electronics retailers tried to get into the DVD act, with Circuit City banking on its DIVX rental/purchase format, which mostly disappeared within a year of introduction. Big box record stores were still everywhere too, such as Blockbuster Music, Virgin Megastore, Tower Records, Sound Warehouse, Sam Goody, Suncoast, Peaches, Borders and so on. Blockbuster Video rental stores would eventually become defunct as VHS gave way to DVD, which in turn gave way to digital streaming around the time of Blu-ray. Some blame Netflix for Blockbuster's demise when it was, in fact, Redbox's $1 rental that did in Blockbuster Video stores, which were still charging $5-6 for a rental at the time of their demise.

In 1998, Diamond introduced the Rio MP3 player. Around that same time, Napster, a music sharing service, was born. The Diamond Rio was one of the first MP3 players placed onto the market, not counting Sony's MD players. It was a product that mirrored the demand for digital music downloads, which Napster fed. I won't get into the nitty gritty legal details, but a battle ensued between Napster and the music industry and again between Diamond (for its Rio player) and the music industry. These two lawsuits were more or less settled. Diamond prevailed, which left the Rio player on the market, allowed subsequent MP3 players to come to market and further led to Apple's very own iPod player being released a few years later. Unfortunately, Napster lost its battle, which left Napster mostly out of business and without much of a future until it chose to reinvent or perish.

Without Diamond paving the legal way for the MP3 player, Apple wouldn't have been able to benefit from that legal precedent with its first iPod, released in 2001. Napster's loss also paved the way for Apple to succeed by doing digital music sales right: getting permission from the music industry first… which Napster failed to do and was not willing to do initially. If only Napster had had the foresight to loop in the music industry instead of alienating it.

As for recordings made during the 90s, read the DAW section above for more details on what a DAW is and how most audio recording worked during the 90s. Going into the early 90s, traditional recording methods may have been employed, but that was quickly replaced by computer based DAW systems as Windows 98, Mac OS and other computer systems made a DAW quick and easy to install and operate. Not only is a DAW now used to record all commercial music, it is also used to prerecord audio for movies and TV productions. Live audio productions might even use a DAW to add live effects while performing live.

Though, some commercial DAW systems like Pro Tools sport the ability to control a physical mixing board. With Pro Tools, for example, the DAW shows a virtual mixing board mirroring the attached physical one; when the virtual controls are moved, the knobs and sliders of the attached (and quite expensive) physical mixing board move along with them. While the Pro Tools demo was quite impressive, it was very expensive to buy (both Pro Tools and the supported mixing board), so it was mostly a novelty. When recording a song with live musicians, such an automated system handling a physical board might be great if you want all of the musical parts performed and mixed in a professional sounding way without a sound engineer sitting there tweaking all of the controls manually. Still, watching sliders and knobs move under software automation is cool, but it was way overpriced and not very practical.

To be fair, though, Pro Tools was originally released at the end of 1989, but I’m still considering it a 1990s product as it would have taken until the mid-90s to mature into a useful DAW. Cubase, a rival DAW product, actually released earlier in 1989 than Pro Tools. Both products are mostly equivalent in features, with the exception of Pro Tools being able to control a physical mixing board where Cubase, at least at the time I tested it, could not.

As for cinema sound, 1990 ushered in a new digital format in Cinema Digital Sound (CDS). Unfortunately, CDS had a fatal flaw that left some movie theaters in the lurch during presentations. Because CDS replaced the film's analog optical audio track with a digital 5.1 track (left, center, right, left surround, right surround and low frequency effects), a damaged digital track left the feature (and the format) with no sound at all. As a result, Dolby Digital (1992) and Digital Theater Systems (DTS — 1993) quickly became the preferred formats for presenting films with digital sound to audiences. Dolby Digital and DTS used alternative placement for the film's digital data, leaving the analog optical track available as backup audio "just in case". For completeness, Sony's SDDS also uses alternative placement.

According to Wikipedia:

…unlike those formats [Dolby Digital and DTS], there was no analog optical backup in 35 mm and no magnetic backup in 70 mm, meaning that if the digital information were damaged in some way, there would be no sound at all.

CDS was quickly superseded by Digital Theatre Systems (DTS) and Dolby Digital formats.

Source: Wikipedia

However, Sony (aka Columbia Pictures) always prefers to create its own formats so that it doesn't have to rely on or license technology from third parties (see: Blu-ray). As a result, Sony created Sony Dynamic Digital Sound (SDDS), which saw its first film release with 1993's Last Action Hero. However, DTS and Dolby Digital, at the time, remained the digital track systems of choice when a film was not released by Sony. Sony typically charged a mint to license its technologies, so producers would opt for systems that cost less when the film wasn't being released by a Sony owned studio. Because Sony also owned rival film studios, many non-Sony studios didn't want to embrace Sony's inventions, choosing Dolby Digital or DTS over SDDS.

Wall of Sound and Loudness Wars

Sometime in the late 1990s, sound engineers using a DAW began to get a handle on properly remastering older 80s music. This is about the time that the Volume War (aka Loudness War) began. Sound engineers began using compression tools and add-ons, like iZotope's Ozone, to push audio volumes ever higher while remaining under the maximum threshold of the CD format to prevent noticeable clipping. To the remastered audio, these tools meant much louder sound output than before the compression was added.

Such remastering tools have been a tremendous boon to audio and artists, though Ozone itself didn't arrive until the 2000s, so we're jumping ahead a little. Prior to such dedicated tools, Cubase and Pro Tools already offered built-in compression plugins that afford similar audio compression to iZotope Ozone, but which required more manual tweaking and expertise. These built-in tools have likely existed in these products since the mid 1990s.

The Wall of Sound idea is basically pushing an audio track's volume to the point where nearly every moment in the track sits at the same level. It makes a track difficult to listen to, causes major ear fatigue and is generally an unpleasant sonic experience for the listener. Some engineers have pushed the compression way too far on some releases. CDs afford impressive volume differences, from the softest whisper to the loudest shout, and these dynamics can be put to tremendous artistic use. When heavy compression is used on pop music, all of those dynamics are lost… replaced instead by a Wall of Sound that never ends. Many rock and pop tracks fall into this category, only made worse by tin-eared, inexperienced sound engineers with no finesse over a track's dynamics. Sometimes it's the band requesting the remaster and giving explicit instructions; sometimes it's left up to the sound engineer to create what sounds best. Either way, a Wall of Sound is never a good idea.
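
For the curious, here's a bare-bones Python sketch of the "push it louder, then clamp the peaks" approach behind loudness-war mastering. Real mastering limiters (including tools like Ozone) are far more sophisticated, using lookahead and smooth gain envelopes rather than this brute-force clipping; the gain and ceiling values are purely illustrative.

```python
# Crude sketch of loudness-war style mastering: heavy makeup gain plus a hard ceiling.
import numpy as np

def loudness_war_master(samples, makeup_gain_db=9.0, ceiling=0.98):
    gain = 10.0 ** (makeup_gain_db / 20.0)
    louder = np.asarray(samples, dtype=float) * gain
    return np.clip(louder, -ceiling, ceiling)   # flatten every peak at the ceiling

# A tone that swells from quiet to loud loses most of its dynamics:
t = np.linspace(0, 1, 44100, endpoint=False)
music = np.sin(2 * np.pi * 220 * t) * np.linspace(0.1, 0.9, t.size)
mastered = loudness_war_master(music)
print(round(music.max(), 3), round(mastered.max(), 3))
```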

As a result of the improved sound quality these new mastering tools made possible, the process of remastering those old crappy-sounding, vinyl-mastered 1980s CD releases was reinvigorated… finally giving that music the sound quality treatment it should have had when those CDs originally released in the 1980s. That, and record labels needed yet more cash to continue to operate.

These remastering efforts, unfortunately, left a problem for consumers. Because the CD releases mostly look identical, you can’t tell if what you’re buying (particularly when buying used) is the original 1980s release or the updated and remastered new release. You’d need to read the dates printed on the CD case to know if it were pressed in the 1980s or in the late 1990s. Even then, this vinyl master CD pressing problem continued into the early 1990s. It wouldn’t be until around the late 1990s or into the 2000s when the remastering efforts really began in earnest. This meant that you couldn’t assume a 1993 CD release of a 1980s album was remastered.

The only ways to know if a CD is remastered are 1) buying it new and seeing a sticker making the remastering claim or 2) listening to it. Even then, some older CDs only got very minimal sound improvements (usually only volume) when remastered over their 1980s CD release. Many remasters didn't improve the bottom or top ends of the music's dynamics and only focused on volume… which only served to make that tinny vinyl master even louder. For example, The Cars' 1984 release, Heartbeat City, is a good example of this problem. The original CD release had thin, tinny audio, clearly indicative that the music was originally mastered to accommodate vinyl. The 1990s and 2000s remasters only served to improve the volume, but left the music's dynamics shallow, thin and tinny, with no bottom end at all… basically leaving the original vinyl master's sound almost wholly intact.

A sound engineer really needed to spend some quality time with the original material (preferably the original multitrack master), bringing out the bottom end of the drums, bass and keyboards while bringing the vocals front and center. If remastered correctly, that album (and many other 1980s albums) could sound like it was recorded on modern equipment from at least the 2000s, if not the 2010s or beyond. On the flip side, Barbra Streisand's 1960s albums were fully digitally remastered by John Arrias, who was able to reveal incredible sonics: Barbra's vocals are fully crisp and entirely clear alongside the backing tracks. The remixing and remastering of many rock and pop bands, meanwhile, was ofttimes handed to ham-fisted, so-called sound engineers with no digital mastering experience at all.

Where in the 1930s, it was about simply getting a recording down to a shellac 78 rpm record, in the 90s for new music, it was all about pumping up the sub-bass and making the CD as loud as possible. All of this in the later 90s was made possible by digital editing using a DAW.

MP3

Seeing as this is an article about The Evolution of Sound, it would be remiss not to discuss the MP3 format's contribution to audio evolution. The MP3 format, or more specifically its lossy compression, was invented by Karlheinz Brandenburg, a mathematician and electrical engineer working in conjunction with various people at Fraunhofer IIS. While Karlheinz has received numerous awards for this so-called audio technological improvement, one has to wonder whether the MP3 really was an improvement to audio. Let's dive deeper.

What exactly is lossy compression? Lossy compression is a technique by which a mathematical algorithm (the encoder) takes in a stream of uncompressed digital audio and discards or rearranges content the ear is unlikely to notice, eliminating extraneous, unnecessary or redundant detail. When the decoder plays back the resulting compressed file, it reconstructs the audio on-the-fly from that encoded data into a form intended to be indistinguishable from the original uncompressed audio. The idea is to produce audio so aurally similar to the uncompressed original that the ears cannot tell the difference. That's the theory, but unfortunately the format isn't 100% perfect.

Unfortunately, not all audio is amenable to being compressed this way. For example, MP3 struggles to reproduce low volume content without introducing noticeable audible artifacting. Instead of hearing only the audio as expected, you also hear a digital whine… the equivalent of analog static or white noise. Because pop, rock, R&B and country music rely on guitars, bass and drums, keeping the volume mostly consistent throughout the track, the MP3 format works perfectly fine for these genres. For orchestral music with low volume passages, the MP3 format isn't always the best choice.

Some of this digital whine can be overcome by increasing the bit rate of the resulting file. For example, many MP3s are compressed at 128 kilobits per second (kbps). This bit rate can be increased to 320 kbps, reducing digital whine and increasing the overall sound fidelity. The problem with increasing bit rates is that it also increases the size of the resulting file; a 320 kbps MP3 runs roughly a quarter the size of uncompressed CD audio, giving back a good chunk of the space savings. Why suffer possible audio artifacts using the MP3 format when you can simply store uncompressed audio and avoid them?
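
The arithmetic behind those bit rates is simple: file size is just bit rate times duration. A quick Python back-of-the-envelope calculation for a four-minute song (using CD audio's 1,411 kbps as the uncompressed reference) looks like this:

```python
# File size = bit rate x duration, for a four-minute song.
def size_mb(bitrate_kbps, minutes):
    return bitrate_kbps * 1000 / 8 * minutes * 60 / 1_000_000

for label, kbps in [("MP3 @ 128 kbps", 128),
                    ("MP3 @ 320 kbps", 320),
                    ("Uncompressed CD audio (1,411 kbps)", 1411)]:
    print(f"{label}: ~{size_mb(kbps, 4):.1f} MB for 4 minutes")
# Roughly 3.8 MB, 9.6 MB and 42 MB respectively.
```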

Let’s understand why MP3s were needed throughout the 1990s. Around 1989-1990, a 1 GB sized SCSI hard drive might you cost around $1000 or more. Considering that a CD holds around 700 megabytes, you could extract the contents of about 1.5 CDs onto a 1 GB sized hard drive. If you MP3 compressed those same CD tracks, that same 1GB hard drive might be able to hold 8-10 (or more) CDs worth of MP3s. As the 1990s progressed, hard drive sizes would increase and these prices would also decrease, eventually making both SCSI and IDE drives way more affordable. It wouldn’t be until 2007 when the first 1TB sized drive launched. From 1990 through to 2007, hard drive sizes were not amenable to storing tons of uncompressed audio wave files. To a lesser degree, we’re still affected by storage sizes even today, making compressed audio still necessary, particularly when storing audio on smart phones. We’re getting too far ahead.

Because of the small storage capacities of hard drives throughout the 1990s, much smaller audio files were a necessity, and thus the MP3 was born. You might be asking, "Well, what about the MiniDisc?" It is true that Sony's MiniDisc format also used a compressed format. Sony, however, as it always does, devised its own compression technique called ATRAC. This compression format is not unlike MP3 in its overall design, though as a proprietary format its internals were never as openly documented, so this article won't speculate on how Sony came about creating it. Suffice it to say that Sony's ATRAC arrived shortly after the MP3 format first emerged in the early 1990s. Read into that what you will.

As for the advancement of audio, the MP3 format's lossy compression really set audio quality back. While the CD format sought to improve audio and did so with tremendous strides, including its near total silence, the MP3 only sought to make audio "sound" the same as an uncompressed CD track, with "sound" being the key word. While MP3 did mostly achieve this goal with most musical genres, the format doesn't work for all music and all musical styles. Specifically, certain electronic music with sawtooth or square waveforms can suffer, as can very low volume passages. It's most definitely not a perfect solution, but MP3 solved one big problem: reducing file sizes down to fit on the small data storage products available at the time.

Data Compression vs Audio Compression

Note that the compression discussed above regarding the MP3 format is wholly different from the audio compression used to increase volume and reduce clipping when remastering a track. MP3 compression is strictly a form of data compression, designed specifically for audio. The volume compression used in remastering (see Loudness Wars) is not a form of data compression at all; it's a form of dynamic range compression and limiting. It seeks to raise the volume of most of a track while compressing down (lowering the volume of) only the peaks that would otherwise rise above the volume ceiling of the audio medium.

Remastering (music production) audio compression is intended to increase the overall volume of the audio without introducing audio clipping (the clicking and static heard if volumes rise above the ceiling). In other words, remastering compression is almost solely intended to increase volume without introducing unwanted noises. The MP3 compression described above is solely intended to reduce the storage size of audio files on disc while maintaining fidelity as a reasonably close facsimile of the original uncompressed audio. Note that while audio dynamics compression began in the 1930s to support radio broadcasts, the MP3 format was created in the 1990s. While both techniques improved during the 1990s, they are entirely separate inventions used in entirely separate ways.

For the reasons described in this section, I actually question the long term viability of the MP3 format once storage sizes become sufficiently large that uncompressed audio is the norm. MP3 wasn't designed to improve audio fidelity at all. It was solely designed to reduce the storage size of audio files.

blocker

2000s

In the 2000s, we then faced the turn of the millennium and all of the computer problems that went along with that. With the near fall of Napster and the rise of Apple (again), the 2000s are punctuated by smart phone devices, the iPod and various ever smaller and lighter laptops.

At this point, I’d also be remiss in not discussing the rise of the video game console which has now become a new form of storytelling, like interactive cinema in its own right. These games also require audio recordings, but because they’re computer programs, they rely entirely on digital audio to operate. Thus, the importance of using a DAW to create waveforms for video games.

Additionally, the rise of digital audio and video in cinemas further pushes the envelope for audio recording. Instead of needing a massive mixing board, audio can be recorded in smaller segments into audio files, and then those files are "mixed" together in a DAW by a sound engineer, who can play all of the waveforms back simultaneously in a mixed format. Because the sound files can be moved around on the fly, their timing can be changed; they can be added, removed, turned up or down, have effects applied, be run backwards, sped up, slowed down or even duplicated multiple times to create unusual echo effects and new sounds. With video games, this can all be done by the software while the game runs. Instead of baking music down into a single premade track, video games can live mix many audio clips, effects and sounds into a continuous composition at the time the game plays. For example, if you enter a cave environment in a game, the developers are likely to apply some form of reverb to the sound effects of walking and combat to mimic what you might hear inside an actual cave. Once you leave the cave, that reverb effect goes away.
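
A toy Python sketch of that cave example: the "engine" picks processing for a footstep sound based on the current environment. The tiny echo-stacking "reverb" here is only illustrative; real game engines use proper convolution or algorithmic reverbs, and the function names are made up for this example.

```python
# Toy sketch: apply a crude reverb to a footstep only while the player is in a cave.
import numpy as np

SAMPLE_RATE = 44100

def simple_reverb(dry, delay_ms=60, decay=0.5, taps=5):
    """Sum a few decaying echoes onto the dry signal (not a real room model)."""
    delay = int(SAMPLE_RATE * delay_ms / 1000)
    wet = np.zeros(len(dry) + delay * taps)
    wet[:len(dry)] += dry
    for i in range(1, taps + 1):
        wet[i * delay:i * delay + len(dry)] += dry * (decay ** i)
    return wet

def play_footstep(footstep, in_cave):
    # The engine swaps processing on the fly as the player moves between zones.
    return simple_reverb(footstep) if in_cave else footstep

footstep = np.random.randn(2000) * np.linspace(1, 0, 2000)   # a short noise burst
outdoors = play_footstep(footstep, in_cave=False)
cave = play_footstep(footstep, in_cave=True)
print(len(outdoors), len(cave))
```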

The flexibility of sound creation in a DAW is fairly astounding, particularly when a sound engineer is doing all of this on a small laptop on location or when connected to their desk system at an office. The flexibility of using a video game console to live mix tracks into the gameplay on the fly is even more astounding. The flexibility of using a laptop remotely on a movie set is further amazing when you need to hear the resulting recordings played back instantly with effects and processing applied.

In the 2000s, these easy to use and affordable DAW software systems opened the door to home musicians and professionals alike. This affordability put DAW systems within reach of small musicians wanting to create professional sounding tracks on a limited budget. As long as a home musician was studious in learning the DAW software, they could now produce tracks that rivaled material professionally recorded, mixed and mastered at an expensive studio.

While the 1930s wanted to give home users a simple way to record audio at home, this was actually achieved once DAWs like Acid Music Studio arrived and could be easily run on a laptop with minimal requirements.

Not only were DAWs incredibly important to the recording industry, but so too were small portable recording devices like the Zoom H1n. These handheld devices sport two built-in microphones, run on batteries and record onto an SD card in various digital audio formats, and they offer external inputs in addition to the built-in microphones (Zoom's larger siblings, like the H4n, add four-track recording). While these handheld units are not technically a DAW, they do offer a few minimal built-in DAW-like tools. Additionally, the resulting audio files produced by an H1n can be imported into a DAW and used in any audio mix.

These audio recorders are incredibly flexible and can be used in a myriad of environments to capture audio clips. For capturing ambient background effects on the go, such as sirens, running water, falling rain or honking cars, a handheld recorder like this is perfect. The audio files from the built-in microphones are incredibly crisp and clear, but you must remain perfectly silent to avoid distracting noises being picked up by the highly sensitive microphones.

There have been a number of Zoom handy recorder products including models going back to 2007. The H1n is one of its newest models, but each of these Zoom recorder products work almost identically in recording capabilities to the earlier models.

iPhone, iPod, iPad and Mac OS X

This article would be remiss if it failed to discuss the impact the iPod, iPad and iPhone have had on various industries. However, one industry it has had very little impact on is the sound recording industry. While the iPad and iPhone do sport microphones, these microphones are not high quality. Meaning, you wouldn’t want to use the microphones built into these devices for attempting to capture professional audio.

These included microphones work fine for talking on the phone, for FaceTime or for purposes where the quality of the audio coming through the microphone is unimportant. As a sound designer, you wouldn't want to use the built-in microphone to record professional audio. With that said, the iPad does sport the ability to input audio through its Lightning or USB-C port for recording into GarageBand (or possibly other DAWs available on iOS), but that requires hooking up an external device.

Thus, these devices, while useful for their apps and for games and other mostly fun uses, are not intended to be used for trying to record professional audio content. With that said, it is possible to record audio into GarageBand via separate audio input devices connected to an iPhone or iPad.

A MacBook is much more useful for audio recording because it typically has several ports that can host audio input or output devices, such as mixing boards with multiple inputs, a device like the Zoom H1n, or MIDI-controlled gear, possibly all at once. You can even attach extensive external storage to hold the resulting recorded audio files, unlike an iPad or iPhone, which don't really have large storage options available.

While the iPad and iPhone are groundbreaking devices in some areas, audio recording, mixing and mastering is not one of those areas… that's partly because of the limited storage space on these devices combined with their lack of high quality microphones. Apple has contributed very little to the improvement and ease of professional digital audio recording with its small handheld devices. The exception here is Apple's MacBooks and Mac OS X, when using software like GarageBand, Audacity or Cubase… software that's not easily used on an iPhone or iPad.

Let’s talk about the iPod here, but last. This device arrived in Apple’s inventory in 2001, long before the iPad or iPhone. This device was intended to be Apple’s claim to fame… and for a time, it was. This device set the tone for the future of the iPhone and iPad and even Apple Music. This device was small enough to carry, but had large enough storage capacity to hold a very large library of music while on the go. The iPod, however, didn’t really much change audio recording. It did slightly improve the quality of audio with its improvement of AAC. While AAC encoding did improve the audio quality and clarity over MP3 to a degree, the quality improvements were mostly negligible to the ears over a properly created MP3. What AAC did for Apple, more than anything, is offer a protection system to prevent users from pirating music easily when saved in Apple’s AAC format. MP3 didn’t (and still doesn’t) offer these copy protections.

Protected AAC ultimately became Apple's way of enticing the music industry to sign onto the Apple iTunes store, as it gave music producers peace of mind knowing that iPod users couldn't easily copy and pirate music stored in that format. For audio consumers, the perceived enhanced quality is what got some to buy into Apple's iTunes marketplace. Though, AAC was really more about placating music industry executives than about enticing consumers.

blocker

2010s

The 2010s are mostly more of the same coming out of the 2000s with one exception, digital streaming services. By the 2010s, content delivery is quickly moving from physical media towards sales of digital product over the Internet via downloads. In some cases for video games, you don’t even get a digital copy. Instead, the software runs remotely with the only pieces pumped to your system being video and audio. With streaming music and video services, that’s entirely how they work. You never own a copy of the work. You only get to view that content “on demand”.

By this point in the 2010s, DVDs, Blu-rays and other physical media formats are quickly becoming obsolete. This is primarily due to the conversion to streaming and digital download services. Even video games are not immune to this digital purchase conversion. This means that big box retailers of the past, housing shelves and shelves of physically packaged audio CDs, are quickly disappearing. These brick and mortar stores are being replaced by digital streaming services (Apple Music, Netflix, Redbox, Hulu, iTunes and Amazon Prime)... yes, even for video games, with services like Sony's PlayStation Now (2014) and Microsoft's Game Pass (2017). Though, it can be said that Valve's Steam began this video game digital evolution back in 2003. Sony also decided to invest even more in its own game streaming and download platform in 2022, primarily in competition with Game Pass, with its facelift of PlayStation Plus Extra.

As just stated above, we are now well underway in converting from physical media to digital downloads and digital streaming. Physical media is quickly becoming obsolete, along with the retailers who formerly sold those physical media products... thus many of these retailers have closed their doors (or are in the process), including Circuit City, Fry's Electronics, Federated, Borders / Waldenbooks and Incredible Universe. Other retailers, like Barnes and Noble and Best Buy, are still hanging on by a thread. Because Best Buy also sells appliances such as washers and dryers along with large screen TVs, it is somewhat diversified and not fully reliant on the conversion from physical media to digital purchases. It remains to be seen if Best Buy can survive once consumers switch entirely to digital goods and Blu-rays are no longer available to be sold. Were physical content goods to disappear tomorrow, Best Buy may or may not be able to hang on. Barnes and Noble is in a more questionable position because they don't have many tangible goods other than books; they must rely primarily on physical book sales to keep the company afloat. GameStop is in this same situation with physical video games, though it survives primarily by selling used consoles and used games.

Technological improvements in this decade include faster computers (though not necessarily better computers) and somewhat faster Internet, although networking speed is entirely relative to where you live. While CPUs improve in speed, the operating systems seem to get more bloated and buggier... including iOS, Mac OS X and even Windows. Thus, while the CPUs and GPUs get faster, all of that performance is soaked up almost instantly by the extra bloatware Apple, Microsoft and Google (Android) install onto their operating systems... making an investment in new hardware almost pointless.

Audio recording during this decade doesn't really grow as much as one would hope. That's mainly due to services like Apple Music, Amazon Music, Pandora, Tidal and even, yes, Napster. After Napster more or less lost its case with the music industry in 1999, it was forced to effectively change or die. Napster decided to reinvent itself as a subscription service. This allowed Napster to finally get the blessing of, and pay royalties to, the very industry with which it had lost its legal file sharing battle. Musical artists are now creating music that sells only because they have a fan base, not because the music actually has artistic merit.

As for Napster, it all gets more convoluted. From 1999 to 2009, Napster continued to exist and grow its music subscription service. In 2009, Best Buy wanted a music subscription service for its brand and bought Napster. A couple of years later, in 2011, due primarily to money problems within Best Buy, Best Buy was forced to sell the remnants of Napster, including its subscriber base and the Napster name, to the Rhapsody music service. In 2016, Rhapsody bizarrely renamed itself Napster... which is where we are today. The Napster that exists today isn't the Napster from 1999 or even the Napster from 2009.

The above information about Napster is included as a follow-on to the previous discussion of Napster's near demise. This information doesn't necessarily further the audio recording evolution, but it does tangentially relate to the health of the music and recording industry as a whole. Clearly, if Best Buy can't make a solid go of its own music subscription service, then maybe we have too many?

As for cinema sound, DTS and Dolby Digital (with its famous double D logo), alongside THX's acoustical room engineering, became the digital standards for theater sound. Since then, though, audio innovation in cinema has mostly halted. This decade has been more about using previously designed innovations than about improving the cinema experience. In fact, you would have thought that after COVID-19, cinemas would have wanted to reinvigorate the theater experience to get people back into the auditoriums. The only real innovation in theaters has been in seating, not in sound or picture.

This article has intentionally overlooked the transition from analog film cameras to digital cameras (aka digital cinematography), which began in the mid 1990s and has now become quite common in cinemas. Because this transition doesn't directly impact sound recording, it's mentioned only in passing. Know that this transition from film to digital cameras occurred. Likewise, this article has chosen not to discuss Douglas Trumbull's 60 FPS Showscan film projection process, as it likewise didn't impact the sound recording evolution. You can click through to any of the links for more details on these visual cinema technologies if you're so inclined.

Audio Streaming Services

While the recording industry is now firmly reliant on a DAW for producing new music, that new music must be consumed by someone, somewhere. That somewhere includes streaming services like Napster, Apple Music, Amazon Music and Pandora.

Why is this important? It's important because of the former usefulness of the CD format. As discussed earlier, the CD was more or less a stop-gap for the music industry, but at the same time it propelled the audio recording process in a new direction and offered up a whole new format for consumers to buy. Streaming services, like those named above, are now the place to go to listen to music. No longer do you need to buy and own thousands of CDs. Now you just need to pay for a subscription and you have instant access to perhaps millions of songs at your fingertips. That's like walking into a record store, opening every single CD in the store and listening to every one of them for a small monthly fee. This situation could only happen at global Internet scale, never at the scale of a single store.

For this reason, record stores like Virgin Megastore and Blockbuster Music (now out of business) no longer need to exist. When getting CDs was the only way to get music, CDs made sense. Now that you can buy MP3s from Amazon or, better, sign up for a music streaming service, you can listen to any song you want at any time you want just by asking your device’s virtual assistant or by browsing.

The paradigm of listening to commercial music shifted during the 2010s. Apple Music, for example, launched in 2015 and has since gained 88 million subscribers as of 2022... and counting. The need to buy prerecorded music, particularly CDs or vinyl, is almost nonexistent. The only people left buying CDs or vinyl are collectors, DJs or music diehards. You can get access to brand new albums the instant they drop simply by being a subscriber. With an iOS device and Apple Music, you can even download the music to your device for offline listening. You don't need internet access to listen; you only need it to download the tracks. As long as your device remains subscribed, all downloaded tracks remain valid.

It also means that if you buy a new device, you still have access to all of the tracks you formerly had. You would simply need to download them again.

As for music recording during this era, the DAW is firmly entrenched as the recording software of choice, whether in a studio or at home. Bands can even set up home studios and record their tracks right at home. There's no need to rent expensive studio space when you can record in your own. This has likely put a dent in the business of commercial studios that relied on bands showing up to record, but it was an inevitable outcome of the DAW, among other music equipment changes.

It also means that the movie industry has an easier time recording audio for films. You simply need a laptop or two and you can easily record audio for a movie production while on location. What was once cumbersome and required many people handling lots of equipment is now likely down to one or two people using portable gear.

As for cinema audio, effectively not much has changed since the 1970s other than perhaps better amplifiers and speakers to better support THX certification. By the 2010s, digital sound has become ubiquitous, even when using actual developed film prints, though digital cinematography is now becoming the de facto standard. While cinemas have moved toward megaplexes containing 10, 20 or 30 screens, the technology driving these theaters hasn't changed much this decade. Other competitors to THX have come into play, like Dolby Atmos (2012), which also prescribes optimal speaker placement and volume to ensure high quality spatial audio in the space allotted. While THX's certification system was intended for commercial theater use, Dolby Atmos can be used either in a commercial cinema or in a home cinema.

blocker

2020s

We're slightly over 2 years into the 2020s (100 years since the 1920s) and it's hard to say what this decade might hold for the audio recording evolution. So far, this decade is still riding out what was produced in the 2010s. When this decade is over, this section will be written. Until then, this article awaits the decade's completion. Because this article is intended as a 100 year history, no speculation will be offered as to what might happen this decade or beyond. Instead, we'll need to let this decade play out to see where audio recording goes from here.

Note: All images used within this article are strictly used under the United States fair use doctrine for historical context and research purposes, except where specifically noted.

Please Like, Follow and Comment

If you enjoy reading Randocity’s historical content, such as this, please like this article, share it and click the follow button on the screen. Please comment below if you’d like to participate in the discussion or if you’d like to add information that might be missing.

If you have worked in the audio recording industry during any of these decades, Randocity is interested in hearing from you to help improve this article using your own personal stories. Please leave a comment below or feedback in the Contact Us area.


Who is the worst President in U.S. History?

Posted in history, presidential administration by commorancy on January 14, 2021

There's a clear answer here: Donald J. Trump... who is yet again writing himself into the pages of the history books. Let's explore why.

The Big Lie

While this isn't the first lie Donald Trump has told, it is certainly one of the longest running lies, repeated over and over and over. What was this "big" lie? Donald Trump perpetuated, over and over, the claim that the election was stolen from him. In reality, the election has been proven time and time again to have been the most secure election ever.

While Trump was busy making up fake and fraudulent claims about how election centers "rigged" the election against him, the election centers diligently counted the votes 3, 4 and 5 different times... each time proving the vote was in favor of Joe Biden.

Worse, the election fraud argument was not only baseless, it was delusional. Claiming that an underdog had enough money, people and resources to enter election centers and modify voting equipment was not only statistically improbable, this claim was outright insane.

This continual lie, perpetuated to Trump's very own supporters, led to the very insurrection he incited on Capitol Hill... but I'm getting ahead of myself.

Other Lies and Conspiracies

When COVID-19 hit U.S. soil in January of 2020, Trump began perpetuating the lie that COVID-19 is not at all dangerous. That perpetuated lie, while only somewhat smaller than the lie told about the election, has led to many thousands of deaths in the United States. Instead of taking action and making a plan to combat the spread of COVID-19 throughout the United States, Donald Trump instead decided to visit Mar-A-Lago to golf, golf and yet more golf.

Because of his “COVID-19 isn’t dangerous” rhetoric, not only did Trump not do anything until mid-February (ensuring the virus had ample time to spread across the United States and in large metropolitan areas), he espoused fringe science and chose never to wear a mask and did not mandate mask requirements at all.

When Trump claimed to have gotten COVID-19, he was admitted to Walter Reed National Military Medical Center for treatment prior to the election. While many thousands of people were dying because of inadequate treatments due to Trump’s lack of enthusiasm to admit the dangers of COVID-19 and take action, Donald Trump received extraordinary special world class treatment using various experimental drugs. Even after this, he (and the people around him) STILL chose to not wear masks at rallies, still didn’t espouse the dangers and still has not provided any positive plan to help reduce the spread.

Vaccine

Trump loves to take credit for everything even when he had nothing to do with that thing’s success. The vaccines were developed in spite of Trump, not because of him. Both the CDC and the World Health Organization prompted the vaccine creation. Trump, by far, almost completely ignored COVID-19 unless it gave him a feather in his cap… which was almost never. Until the vaccines arrived, he didn’t do shit to manage the COVID-19 spread. During the summer, the spread began to level off, but not because Trump did anything. It was happenstance alone that tapered it off. As the holidays arrived, COVID-19 ramped up big time… and that’s where we are right now.

As COVID-19 began surging, the vaccine arrived. Yet, the rollout plan is not a plan. In fact, it’s just a suggestion. As a result of the lack of planning, the vaccine has only been given to around 4 million people when Trump promised 100 million would be vaccinated by the end of 2020. Yet another lie that Trump perpetuated.

Even while the virus was surging out of control, Trump all but ignored this fact and focused almost with blinders on his “Big Lie” (see above) to the exclusion of all else. His “rigged” election rhetoric not only got old, those of us who had seen through this blatant lie continually wonder what Trump is smoking.

Suffice it to say, the vaccine has been mostly a flop... not from a functional perspective, but from a rollout perspective. So now, Biden will be left with the task of finding a way to inoculate millions more in record time, since Trump failed to make this task a priority.

Insurrection

Trump tried very hard to convince us all of his "big lie". Most of us were not having it. However, he did manage to rope in a bunch of like-minded thinkers who, for whatever reason, allowed Trump to pull the wool over their eyes. These people, unfortunately, also tended to be gun toting, irrational extremists. As a result, when Trump held a rally in Washington DC at the Ellipse, he incited his crowd to "fight like hell". These specific words, and others of similar style out of his cabal, riled up the crowd into a frenzy.

As they marched their way down to Capitol Hill, they surged into the building, making their way into offices, damaging windows and doors, and even defecating and urinating along the way. This violent mob left 5 people dead, including a Capitol Police officer tasked with protecting the building.

Again, those of us not under the influence of Trump's lies realized that Trump now poses a significant threat to the United States. No sitting United States President has ever attacked the very buildings and Constitution that they have sworn to uphold. This left Congress in a quandary.

Congress is given limited options to remove a sitting president from office: invoking the 25th Amendment, impeachment and requesting resignation. Unfortunately, Donald Trump saw no benefit in resigning, so that option was flat off the table. The only remaining options were to ask Mike Pence to invoke the 25th Amendment and declare Donald Trump unfit to lead, or to impeach him.

Speaker of the House Nancy Pelosi gave Pence adequate time to invoke the 25th Amendment, which he declined to do... at which point Pelosi drafted a single Article of Impeachment and began the process of seeing it through to passage.

On January 13th, the Article of Impeachment passed with 232 votes including 10 Republican votes. Though, the vast majority of the Republican party voted against passage.

With the passage of this impeachment, that totals two (2) successful impeachments by the House of Representatives. Unfortunately, the first impeachment did not reach conviction in the Senate. Hopefully, this second one will. While conviction would be another first for Donald Trump, the most historic fact is that Donald Trump is the first president to have been impeached twice by the House of Representatives.

It’s not over yet

We're still living history in the making with Donald Trump. Not only does being impeached twice make him the obvious choice for worst president, but inciting an insurrection on Capitol Hill is entirely unprecedented. Worse, the insurrection's intended goal was to stop the electoral vote counting and to sway Pence to count the electoral votes in Donald Trump's favor... an act of sedition.

Unfortunately, Donald Trump's extremist cabal is now firmly unleashed. At this point, it's anyone's guess as to what they have planned. However, the current intelligence is that they plan to hold armed "protests" not only at all 50 state capitol buildings, but again in DC to block Joe Biden's inauguration. In response, there are now at least 15,000 troops occupying Washington DC until the inauguration is over. That's more troops than are deployed in parts of the Middle East... all to protect against Donald Trump's incited extremists. If those extremists attempt another incursion in DC, they're in for a rude awakening.

You just can’t make this shit up!

For now, we’re under wait and see for exactly how this article ends. However, one thing is absolutely certain. Regardless of what happens between now and inauguration day, Donald Trump is now officially the Worst President in the history of the United States.

Such a dubious distinction, Donald.



Reopening: The best laid plans

Posted in analysis, economy, history, virus by commorancy on April 29, 2020

[Image: motivational quote]

As the United States sees its COVID-19 infection count reach 1 million (a milestone, as some are calling it), that's not a milestone to cheer. In fact, it's a milestone to jeer. Why? Let's examine how our leaders have entirely failed us.

Donald Trump

Let's start with our President. While the President has taken the pulpit regularly to spout all sorts of seeming nonsense unrelated to the virus, some of which has incited people to take rash and sometimes suicidal actions, our leaders are lost amid this pandemic. Our political leaders have never been trained to handle the medical reality our society now faces.

It's clear our current national leadership was not, when the virus arrived, and still is not in any way prepared to handle this medical crisis. Even though those in the medical community and many outside of it have long predicted another pandemic, our political leadership took no action to prepare, stock up, or in any way plan for such an eventuality. Instead, the US government was so busy kowtowing to the constant rhetoric from large businesses and making sure they were happy (so as to keep large political contributions flowing) that it was blinded by these unnecessary, never ending and ultimately futile efforts.

In fact, the largest businesses of the world, including Wall Street, effectively drove us to the present dire and unprecedented medical situation that we all must now face. By diverting government attention and money away from a brewing medical crisis, these constant, unnecessary business demands kept the government perpetually distracted.

DPA

Sure, the economy is important, but not at the expense of what is now taking place. The President is negligent in his handling of this crisis, make no mistake. He took no immediate action in early January, after COVID became known to the world. He failed to invoke the Defense Production Act (DPA) in a timely manner, which could have diverted necessary industry resources toward producing more PPE for hospital workers and other critical industries (including supply lines). Instead, he did nothing. It took weeks before he actually invoked it, and then only in a limited fashion, after the PPE shortage was already at the breaking point. Even then, our hospitals are still critically short on supplies.

This means that the general public now has no access to these supplies to protect themselves, because literally all of it is being diverted to hospitals. Even with those diverted supplies, the hospitals are still short. Many are requiring their workers to reuse, for days, masks that were designed to be used for an hour or two. It's no wonder some of these critical care workers are dying from COVID.

Economic Disaster Looming

Attempting to avert other economic disasters (or at least perceived disasters) prior to the outbreak left a huge hole in our preparedness for a pandemic like COVID. Yet, we now see exactly how much of an economic disaster this level of unpreparedness means for the United States.

The President may claim to be "doing good things", but the reality is, he and past administrations did nothing to prepare for this eventuality, nor have they provided the leadership needed to drive this virus from US soil. Instead, they chose to ignore the warnings and focus on other unrelated tasks that vied for their immediate attention. Tasks that, while extremely diversionary, shouldn't necessarily be minimized; they were, of course, important at the time. However, they shouldn't have ignored the warnings. We'd already had two previous tastes of what was to come with COVID, including H1N1 (the 2009 pandemic) and SARS (the 2003 pandemic). These two weren't that long ago.

Those warning shots should have been the necessary wake up call to get the government to mobilize a pandemic plan. Between 2009 and the present, the government could have been granting hospitals the necessary money they’ve needed to stock up on supplies. They could have had the hospital industry stocked and ready with medical supplies should the need arise.

In fact, the government could have been building ready-made hospital structures out of FEMA buildings, ready and waiting for the time when such a pandemic might occur. I realize that in times of non-crisis it can be difficult to justify such preparedness dollars. However, it is crystal clear that United States leadership failed us on this one.

What that means is that, as of today, the United States is in an amazingly fragile position economically. We simply don't know where our economy may head in the next 6-12 months. There's no way to know if the upcoming Presidential election is even possible due to the outbreak. A second, larger and deadlier wave is still looming and will likely hit sometime around September or October, just prior to the election. It may hit even before then, based on the reopening plans of many cities and states.

1% Infection Rate of the US Population

Let’s dive right into this topic as there’s no way to sugarcoat this. To date, the United States has confirmed less than 1% of the population infected. This means that approximately 99% of the population is still susceptible to COVID. Considering that we’ve had 60,000+ deaths and counting for just over 1 million people infected, it’s not hard to extrapolate these numbers.

There are around 330 million people living in the United States. If 50% of those 330 million people become infected, that's 165 million people infected. Extrapolating up the death rate, let's do the math. 60,000 people dead for 1 million infected is a 6% mortality rate. Of course, some could argue that the 1 million figure is under reported and that the real infection count is much higher. This could be true. However, we have no way to confirm the number that have been infected. Let's assume that this 6% number is, in fact, accurate.

That 6% is much higher than the reported 1.25% death rate from this virus. Partly, this may be due to the overcrowding situation in New York City where the vast majority of these deaths have occurred. New York City isn’t, unfortunately, an anomaly. It is a reality. It is a reality that will be seen played out all over the nation should the nation reach a 50% infection rate at the same moment in time.

When hospital overcrowding becomes a major issue (and it will), hospitals will be forced to turn away patients. Hospitals will have no choice. This is where that 1.25% number goes up, perhaps even over the 6% that we’re currently seeing. For the sake of argument, I’ll present death numbers for both 6% and 1.25%. Keep in mind that 1.25% is a number that can only be achieved WITH the help of hospital care. Without hospital care, that number will soar much higher… just as it has with the present situation.

At 165 million people infected (50% of the US population) and at 1.25% mortality rate, that’s 2,062,500 people (2 million) dead. At a 6% death rate (more likely what we’ll observe if we reach 165 million people infected at once), that 6% number means 9.9 million people dead. Those 9.9 million dead bodies have to go somewhere, perhaps lining the streets in body bags in some locales. Even 2 million bodies have to go somewhere. There might not even be enough body bags to handle 2 million, let alone 9.9 million people dead. Crematoriums will be working overtime just to keep up with this dead body count.
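
For readers who want to check the arithmetic above, here's a minimal back-of-the-envelope sketch in Python. The population, infection and mortality figures are simply the ones quoted in this section; the code is illustrative only and is not an epidemiological projection.

```python
# Rough extrapolation sketch using the figures quoted above (assumptions, not projections).
US_POPULATION = 330_000_000          # approximate U.S. population
infected = 0.50 * US_POPULATION      # 50% infected at once = 165,000,000 people

reported_rate = 0.0125               # reported ~1.25% mortality rate (with hospital care)
observed_rate = 60_000 / 1_000_000   # ~6% rate observed so far (60,000 dead per 1 million infected)

print(f"Deaths at 1.25%: {infected * reported_rate:,.0f}")   # ~2,062,500
print(f"Deaths at 6%:    {infected * observed_rate:,.0f}")   # ~9,900,000
```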

This whole situation will turn into a literal nightmare scenario for leaders and citizens alike. Where people are chanting to open cities back up today, this pressure is what's driving leaders to consider reopening way too early. In fact, you can't reopen anything while the virus is still being spread. You simply can't do it. We already know this virus spreads exponentially over time. It takes ONE person to spread the virus to others, who will then spread their infection to still others. Reopening the economy will lead us down the road to a much, much higher infection rate than the 1% of the population we have today.

As a result, the somewhat overcrowded situation at hospitals will turn into hospitals turning away patients. There will be no place for the sick to go. They'll have to head home and take their chances there. Without access to immediate medical care, many will die. This raises the mortality rate tremendously, and it will go way beyond the 1.25% mortality rate.

Low Numbers and Infection

I see a lot of ignorant people downplaying the present COVID situation. I see a lot of people claiming that it's not that serious, that the numbers are low or that there's nothing to fear, or spouting other such nonsensical rhetoric. They're making these statements without explaining how they arrived at that rationale (other than that the numbers appear to be low). Make no mistake, the low infection numbers we are seeing have nothing to do with the virus's severity level or its ability to kill. The low numbers have everything to do with the present distancing and stay-at-home orders. Lift those and the virus will take hold in much, much larger numbers.

Should the world re-open as it was, infections and deaths will soar way beyond the 1 million infected so far. We will see a second wave of death and infection rates that far exceed anything the world has seen to date. Yes, it's coming.

Ignorant People

There are many people who feel that the government is lying and this is all a ruse by the government to erode human and constitutional rights. That's actually the irrational view. If you value your own life so little as to not understand the ramifications of this virus, then perhaps you need to become infected and take your chances. While my view might be considered rather callous, some people need to learn life's (or death's) lessons the hard way. It's Social Darwinism.

If you choose to venture out by defying stay-at-home orders, become infected and then become a death statistic, that's your choice. You made that choice, as stupid as it may have been, and you died by it. I won't feel any sympathy for anyone making such stupid choices.

As with Darwinian ideals, the point is to rid the world’s population of people who are simply unwilling to grasp certain realities. If you are unwilling or unable to grasp life and death concepts, then perhaps the world’s gene pool is a better place without you in it. If you want to venture out and become infected, the door’s right over there.

This is why I show no sympathy towards any anti stay-at-home protesters. If you’re out and about attempting to get infected, you may also be the ones spreading the infection. For these folks, I’ll let the virus run its course on them. If they succumb at home, then so be it. If they survive, and some will, then so be it. At least some of these folks with dumb ideas won’t survive. But, that’s on them and there’s no sympathy here. I’ll just point to the Darwin Awards, of which there will be many issued this year. From the Darwin Awards web site:

Darwin Award winners eliminate themselves in an extraordinarily idiotic manner, thereby improving our species’ chances of long-term survival.

While the Darwin Awards may not recognize those COVID-19 anti stay-at-home protesters as "extraordinarily idiotic", I personally think this situation qualifies. Of course, if the Darwin Awards were to qualify every COVIDiot, they'd be updating that web site for years to come. I can understand why this site might want to disqualify all the COVIDiots of the world to avoid this problem... allowing them to focus on those deaths that don't apply to COVID-19.

Personally, I think the Darwin Awards needs to create a special site devoted to the COVIDiots who off themselves by contracting the virus and die by thinking it doesn’t exist or that it is some conspiratorial government rights erosion scheme.

Likewise, for parents who want to send their kids out to the playground and who then contract COVID-19 from another child on the playground, I have no sympathy. You sent your child out there, that was your choice. If your family perishes, that was also your choice.

Population Control

While COVID-19 is a virus and it does have a fairly grim death statistic, perhaps it is also an equalizer of sorts. It could be used as a way to "thin the herd", so to speak. It doesn't matter the age, race, color or gender; this virus is non-judgmental. It will infect a 1 year old as easily as anyone of any other age. Whether it ends up killing is entirely based on that person's immune system and health. People with asthma or other lung diseases or conditions may be at higher risk.

The point is, COVID-19 is now quickly becoming a population equalizer. That means that while it started out as a viral problem, it is quickly beginning to thin the herd. For those COVIDiots who choose to run around claiming that it’s fake or that “God will protect them”, you’re living in a fantasy world. Nothing will protect you from COVID-19 except staying away from other people. Latching onto a fantasy that “God” or any other outside influence will protect you is delusional. The virus will just as easily invade your system as it will anyone else’s. Whether it will kill you, you’ll have to take your chances. You may survive, you may not. Once you’re infected, it’s yours until you win or lose.

If COVID-19 thins the herd, then so be it. That’s how the chips fall and that’s how they will remain. If the United States loses 10% of its population to COVID-19, then that’s where it ends up. I won’t say that a loss of 10% of the population won’t be devastating to the economy, but it will recover in time. Eventually, enough babies being born will make up for this loss. The population will again grow after COVID-19 is over and done. Until then, COVID-19 will become the most modern version of a great equalizer, just as the 1918 Pandemic was.

Doomed to Repeat

If the 1918 H1N1 pandemic taught us anything, it was that you can't let your guard down even for a moment. When everyone thought the 1918 pandemic was subsiding by late summer, restrictions were relaxed. That's when the second, even more deadly wave hit the country. This second wave was attributed to soldiers bringing the virus home and spreading an even deadlier version around.

Today, we have a very high possibility of this situation repeating itself. It won’t repeat because of soldiers, however. History will repeat due to worldwide travel, loosened social restrictions, people thinking it’s over and because businesses are itching to open back up.

Once movie theaters, concert venues, sporting events and any other large crowd gatherings are allowed to exist, this is where the second wave will begin. Worse, if a different mutated strain arrives on US soil, one that hasn't yet been passed around, it could reinfect survivors of the earlier strain. It might even kill those survivors due to the body's already weakened state from the last infection. In other words, a second wave is almost inevitable.

As some states are poised to reopen due to "dropping numbers of infection", this false sense of security will be their undoing. With states and leaders opening it all up, people WILL venture out thinking that the situation is over. It's nowhere near over. 1% of the US infected? No, we've barely scratched the surface. There's still 99% of the population left to infect... and infect many of these people the virus will. As restrictions relax, people will start doing the things they have been desperately craving while stuck inside. They'll be doing all of these things with even more careless abandon than usual, only to find that 14 days later they, their friends and their families are infected. Yet, there won't be hospital space for the throngs more who are now infected.

This is where and how the much more serious second wave begins.

Election Day

Depending on when this second wave begins, election day may or may not be possible. If enough people become infected before election day, the sick won’t be able to head to the polls. Those uninfected will also be scared to turn out. How will the government entice people out of their homes when, within 1 to 2 months, 100,000 or more people are dead? Yeah, that’s a major challenge.

The government needs to begin planning for an alternative election polling system now. If they don't begin this process today, come November 3rd, few may actually turn out to vote. If fewer than 10% of the population shows up to vote, is that fair representation of the populace?

Let’s consider if some of the current candidates end up succumbing to COVID-19 in this second wave and, for obvious reasons, are no longer on the ballot. How does that work? Yeah, some contingency planning is in order here. Perhaps they can plan for voting from your car (using NFC) or some other similar hands-off voting system with a phone app. For the candidate infections, that’s a whole different bag.

Phone apps have proven their safety, effectiveness and security if properly built. A vote-in-car system could work by holding your phone up to the window to be read by an external NFC device. It could even be done by connecting to a local WiFi service at the voting center that automatically forces a login page, used first to verify identity and then to collect votes right from the car. No need to leave the car. There are plenty of hands-off approaches that don't require leaving the confines of your vehicle, yet still allow you to vote in person. These are contingencies that the government needs to consider now. If they aren't considering such countermeasures, come November 3rd, we could see a huge problem with voter turnout.

The Second Wave IS Coming

A second, even larger wave is looming on the horizon. It's only a matter of time. Once state leaders realize they can't keep their states closed forever, they will begin to open without realizing that that action is a ticking time bomb. The second wave will be born from each governor's loosening of restrictions. It will be these very actions around the US that lead to a second larger and potentially even more deadly wave.

When the second wave arrives, this will spur an even bigger panic, not only in the economy, but by people across the globe.

Is there a better way? Personally, I don't see it. I think we have to let the leaders have their folly. It's the only way for them to wake up to the realities of this tragic situation. There's really no way around it. I'm not intending to be a "Debbie Downer" so much as a realist. Sometimes you have to look at a scenario and understand that people will do what they do, regardless of how smart that choice is. The pressure from business (and the looming economic failure) is tremendous. It's exactly what's driving the leaders to reopen. It is also this exact driving force that will lead us to a second wave of COVID-19. It's an inevitable outcome.

We’ve tied our society so heavily to the economy that one can’t survive without the other. When you have a virus invading this space, it’s a damned-if-you-do-damned-if-you-don’t scenario. There is no correct choice here. Stay closed, the economy crashes. Open up, the economy crashes because of the second wave.

The economy will crash either way. There’s no way around it. Leaders think that opening up is the road that leads us to recovery. For a short amount of time it will appear that way while we are lulled into a false sense of security. That is until the virus latches onto yet even more people and drags them down into death’s spiral. Then, the whole situation starts ALL over, but this time it will infect unchecked because it’ll be too late.

As realists, a lot of people see us as "wet blankets". We always bring up the one thing that people never want to hear. Everyone wants to see the rosy side of all situations. The problem is, the rosy side almost never exists. It's a fantasy in someone's mind. The realist sees the situation as it is (and can predict how it will play out). It's like truth. People don't always want to hear the truth. It's too brutal and too realistic. People want to be told lies so they can feel better about themselves.

In the case of that dress or those shoes, these are harmless lies that don't hurt people. Lying that the virus is subsiding will only serve to get people killed. Being a realist has its place and can help prevent the deaths of many. Yet, our leaders are so gung-ho to get the economy restarted, they're willing to sacrifice many people to that end. Yes, there are still more sacrifices yet to come.

The difficulty is that, as realism dictates, the downside of opening up into a second wave is an even bigger economic disaster. There’s no way to prepare for or prevent this event. We just have to wait it out. The economy will recover, eventually. Just not today. Not tomorrow. In fact, the economy won’t recover until COVID-19 is either eradicated or there’s a vaccine to protect the entire world’s population. After COVID is gone, the economy can begin to recover and those who survive can bring back the world into a new prosperous era. Perhaps we can even learn a thing or two about the value in pandemic preparedness. Considering that we really learned nothing from 1918’s pandemic, then again perhaps not.


Is loosening Social Distancing a good thing?

Posted in economy, Health, history by commorancy on April 26, 2020

[Image: an empty street under a cloudy sky]

I know a lot of people are going stir-crazy being stuck inside without much to do. Movie theaters are closed. Beaches are closed. Concerts are canceled. Work is performed at home. Kids are home schooled. All of the normal social things we do every day, like shopping and restaurants, are not really available (other than grocery shopping, of course). Let's explore what it means to loosen social distancing.

Viruses

Like the flu or colds, a virus is a virus. No, we don't yet have a universal inoculation for even the common cold or the flu. For the flu, we have the once-a-year flu shot. This shot is formulated to contain a very specific set of inactivated flu strains that "someone" deems the "most likely" to hit the population. When you get a flu shot, the body acts on these inactivated strains as it would live flu, which teaches the body how to fight off each specific strain.

Unfortunately, the flu mutates regularly and often. This means that it’s easy for the flu shot formulation to miss one or two or many strains that might hit during a given flu season. This is why taking a flu shot can be hit-or-miss. It means that even if you do take a flu shot, you can still get the flu. Why is that?

It’s because flu strains are not all alike. The body can only recognize specific flu strains to combat. If a new flu strain comes along, the body won’t recognize it as something it has fought before. This allows that flu strain to get a foothold and make the body sick before the immune system response learns and kicks in against this invader.

Enter COVID-19 / SARS-CoV-2

Two names, roughly, for the same thing: SARS-CoV-2 is the name of the virus itself, while COVID-19 is the disease it causes. The difficulty with SARS-CoV-2 is mutation. Like the flu, a mutated strain might not be recognized by the immune system as something it has fought before. Meaning, if you have had SARS-CoV-2 and a hypothetical SARS-CoV-3 comes along, the antibodies created for SARS-CoV-2 may not be recognized or used against this new virus. This means you could get COVID again. If you recovered the last time, this time it might result in death. Even the strain on the lungs from a previous infection might damage the lungs enough for a new infection to kill. This virus is difficult to handle, and it's even more difficult to know exactly how it might mutate.

Yes, it could mutate into an even more virulent and deadly strain. This is why a vaccine against SARS-CoV-2 might be an impossible task. What I mean is that it may be next to impossible to create a vaccine that covers not only SARS-CoV-2, but every possible strain that could follow. If the medical community hasn’t been able to create a flu vaccine that functions against ALL flu virus strains, how are they going to create a COVID vaccine that covers all current and future COVID virus strains?

The answer to this question is uncertain. What does this have to do with relaxing social distancing requirements? Everything.

Herd Immunity

Considering the above regarding the flu, there is no such thing as herd immunity against the flu or even the seasonal cold virus. We regularly get these viruses even after having had previous flu or colds in the past. It’s inevitable and we understand how this works. Some of us are more lucky than others and rarely get these. Some people get colds and flu frequently, like every single virus that rolls around. Logically, we must apply this same behavior to COVID.

Opening the World

Eventually, the world must reopen. That's a given. The question is, when is the best time to do that? Given the realities of how viruses operate, there's no "best" time to do it. This virus is here to stay. It will continue to infect the world. At least SARS-CoV-2 will. Unfortunately, herd immunity isn't likely to work with this virus. It might for a short time, but we all know that any immunity we may have to past colds and flu lasts, at most, one season. When the next season rolls around just a few months later, we're again susceptible, perhaps even to a strain we've previously had. We're never tested for the exact strains of colds and flu that we get, so we can't know for sure if we're being reinfected by the same strain.

With COVID following the same patterns as the cold and flu viruses, it’s inevitable that the world must reopen. Yes, perhaps to a new more cautious reality. Perhaps we can’t ever go back to the throngs of people crowding together into a mosh pit, club or similar body-to-body crowds. Even large sporting events which formerly drew large crowds, like football and the Olympics, may find it hard to operate in this new reality.

One thing to realize is that simply because the world reopens doesn’t mean people will venture out in it. Just because parks or beaches or concert halls or Broadway have reopened, doesn’t mean the crowds will come.

COVID is still dangerous

Simply because the world has reopened doesn't mean that COVID has magically disappeared. It is still very much being passed from person to person. Worse, not even 1% of the US population has been infected as of the numbers being released today in late April. The country would have to see at least 3.3 million infected before we've even reached 1% of the population. Consider that we must see at least 80-90% of the rest of the population infected before this virus may ever be considered "over".

Second Larger Wave is Coming

Considering these above grim statistics, relaxing social distancing requirements WILL lead to a second even larger wave of infection. It’s inevitable. If at least 90% of the population is still uninfected, that means this virus has a lot more work to do before this situation can be called “over”…. let alone consider relaxing shelter-at-home requirements.

The states which are relaxing social distancing are doing so at their own peril and without good reason. They're relaxing requirements because of social and economic pressure, not because it's prudent or in the interest of public safety.

This is where things get grim... very, very grim. As I said, since at least 90% of the United States population has not been infected, relaxing shelter-at-home is only likely to "stir the pot", causing an even larger second wave.

Depending on how much gets relaxed, it could get much worse much, much faster this second time around. Why? Because any relaxing of requirements signals to many people that the situation is over... that they're now safe... that the virus has been contained... and similar thought rationales. These are all false assumptions made based solely on irrational actions by local government leaders. Basically, these leaders are leading many to their deaths through these reckless actions.

Milestones

The only two ways we can ever be safe from COVID are to know that 99% of the world's population has had this strain or that it has been 100% eradicated from the population. Unfortunately, the former assumes there are no other strains out there, and the latter is almost impossible to achieve at this time. With any virus, we know there are other strains. In fact, with COVID, there were, at the time of the Wuhan outbreak, 2 strains: an earlier strain and a newer strain. It was this newer strain that jumped into humans and began its deadly trek around the world.

It will again be a new strain that jumps around the world. How many strains will there be? No one knows. Will those new strains be as deadly, more deadly or less deadly than the current strain? Again, we don’t know.

We also don’t know that someone who has survived one strain of COVID has any protection from any future strains… and this is the problem with relaxing any social distancing or, indeed, reopening the world.

How can we proceed?

This is the basic problem to solve. So, how exactly do we proceed? As much as it pains me to write this, we may have to open the world and let the chips fall where they may. Whoever dies, dies. Whoever doesn't, doesn't. The Herbert Spencer adage (usually attributed to Darwin) of "Survival of the Fittest" may have to win this situation in the end.

Whoever is left after COVID-19 does its dirty deed may be all the world has to work with. It's not an outcome without major ramifications, however. If we can't eradicate the virus from the world in another way, then letting it play out in the population as a whole is the only other way to handle it. There are two choices here:

  1. Find a reliable and quick testing methodology. Require everyone to be tested, then force isolate anyone who is found infected until either they die or they recover. Isolate any recovered persons for another 30 days to ensure they are no longer contagious. Rinse and repeat until no one else left in the world has it. Difficulty level: 10
  2. Allow the virus to run its course through the entire world's population, infecting everyone it can, and let the chips fall where they may. This is the "Survival of the Fittest" approach. Whoever lives, lives. Whoever dies, dies. Difficulty level: 1

While scenario 2 is the easiest, it's also the most costly to the world's population and, indeed, the economy. All told, if everyone in the world becomes infected and the 1.25% average death rate holds steady (hint: it won't), that means up to 96 million people dead across the globe, or up to 4.13 million dead in the United States alone.
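
The same back-of-the-envelope arithmetic can be sketched in a few lines of Python. Note that the world population figure of roughly 7.7 billion is an assumption added here for illustration; the 1.25% rate and the 330 million US population come from the discussion above.

```python
# Worst-case "everyone gets infected" arithmetic from the scenario above (illustrative only).
WORLD_POPULATION = 7_700_000_000     # rough 2020 world population (assumption)
US_POPULATION = 330_000_000          # approximate U.S. population
death_rate = 0.0125                  # 1.25% average death rate, assumed to hold steady

print(f"Worldwide deaths: {WORLD_POPULATION * death_rate:,.0f}")  # ~96,250,000
print(f"U.S. deaths:      {US_POPULATION * death_rate:,.0f}")     # ~4,125,000
```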

This assumes status quo and that the virus doesn’t mutate into a second deadly strain with an even higher death rate. If the virus mutates into a single deadlier strain, scenario 2 will lead to even more millions dead. If it mutates into multiple deadlier strains, then it could end up with a billion or more dead.

Yes, scenario 2 might be the least difficult, but it is the scenario that leads to an untold number of dead not only in the US, but around the globe.

Scenario 1, on the other hand, has a high difficulty factor. It will lead not only to a high economic toll, but it could change the world economy forever. Though, with scenario 1, we may be able to contain COVID-19. We may even put the genie back into the bottle (i.e., eradicate it from the population). Attempting this one could save many, but at a huge economic cost.

Economic Impact

Either scenario carries major economic impact across the board. Millions or even billions of dead mean a much lower tax base for all countries. The US has been relying on roughly 330 million people (the estimated population of the US) for tax revenue. If 10 million die, that's a new tax base of 320 million. If any of those 10 million who died were high contributors to the tax base, that revenue has dried up. That's a lot of money to lose and a lot of economic impact.

If, under scenario 2, multiple mutations sweep the world and kill 10x more than expected, that's 100 million dead in the US. The new reality could see the United States at 230 million... the same population the US had in 1981. If the population drops to 200 million, that's the number the US saw in 1968. The more who die, the worse the economic impact for the United States and the farther back in time we go. Millions dead means many empty houses, a huge mortgage crisis, and the list of economic problems goes on and on.

Flattening the Curve

This concept is important for one specific reason. What does it mean, though? Slowing the infection rate through stay-at-home measures keeps hospitals above water on patient load. Relaxing the stay-at-home orders means more people out and about and more people getting infected. More infections means more people sick at once.

This is the exact opposite of flattening the curve. Relaxing social distancing will have the inverse effect on an already overtaxed hospital system. What that means is that those who become infected during a period of higher hospital demand are more likely to die at home. Hospitals have limited numbers of beds, limited staff and limited means to treat a very limited number of people in a given area.
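
To make the "flattening" idea concrete, here's a minimal toy sketch in Python. The growth rates and the hospital-capacity figure are illustrative assumptions, not real epidemiological parameters; the point is only to show how slowing the spread buys time before a fixed capacity is exceeded.

```python
# Toy illustration of "flattening the curve": same virus, different spread rates,
# fixed hospital capacity. All numbers are illustrative assumptions only.
HOSPITAL_CAPACITY = 100_000   # hypothetical number of available beds

def days_until_overload(initial_cases: int, daily_growth: float) -> int:
    """Days of simple exponential growth until active cases exceed capacity."""
    cases, day = initial_cases, 0
    while cases <= HOSPITAL_CAPACITY:
        cases *= (1 + daily_growth)
        day += 1
    return day

print("No distancing   (25%/day growth):", days_until_overload(1_000, 0.25), "days")
print("With distancing  (5%/day growth):", days_until_overload(1_000, 0.05), "days")
```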

In densely populated urban areas, hospitals will become overloaded quicker. This means densely populated urban areas like Los Angeles, San Francisco, Houston, New York City, New Orleans, Atlanta, St. Louis, Detroit and so on will see significantly higher death rates under scenario 2. The death rate will climb and never stop if stay-at-home orders are lifted AND people venture out in the expected droves that they always have.

Ultimately, scenario 2 will likely lead to a much higher death rate than the currently estimated 1.25%, simply due to the saturation of patients with not enough hospitals to cover the load. This scenario playing out is inevitable with an early relaxing of distancing requirements through the reopening of social areas, shops and businesses.

What can I do?

You can say no. Basically, if the United States (and the world) adopts a "Survival of the Fittest" approach to handling this crisis, then your health is left up to you. If you want to believe that everything is safe and you can venture out into the world without a care, then that's your choice. If you get COVID-19, expect that you may end up trapped at home in your own bed without any means of or access to medical care. Hospitals will likely be over-saturated with patients. You'll be left to fend off the virus yourself. If your body can survive it, it will. If it can't, you'll die.

This also means you can end up bringing the virus home to your children, your parents, your friends and your partner. You could end up infecting them as well. They, like you, will take their chances with the virus… at home… and very likely not in hospital care.

“Survival of the Fittest”

This may end up being the approach that governments are forced to adopt in the end. The world economy can't survive without a population to operate it. Unfortunately, this catch-22 of opening up the population also means a much higher death rate once the dust settles. It's effectively a no-win scenario for any government leader. Scenario 1 is almost impossible to achieve without severe military measures enacted (see China's handling). Scenario 2 is the easiest to achieve as it takes little to enact. Scenario 1 likely leads to deaths from people starving and unable to make a living due to the economic impact. Scenario 2 leads to deaths from an overburdened hospital system while the economy flounders along at a snail's pace, along with exponential growth in infections.

Unfortunately, death is an inevitable outcome under either scenario. Unless government leaders step up and halt the concept of money and the transfer of money between businesses as a metric of success, and instead ask businesses to operate without quid-pro-quo for an extended period of time, this no-win situation will see to the deaths of millions of people in time, no matter which path is chosen. Money flow must halt while society heals and the virus is eradicated from the population. This is the only way scenario 1 works.

Money and its Continued Necessity

The root of this situation is money. In fact, it is the single thing driving our entire situation. If our economy were founded on something other than money, we might have had a chance to survive this situation with a minimal death toll.

Unfortunately, money is driving the need to reopen the economy, which is driving the "Survival of the Fittest" scenario. No one can predict how the world will look in 2 years. We simply can't foresee the number of deaths that might result. The higher the number of deaths, the worse the economies will fare. It's a vicious cycle being driven by the insatiable need for ever more money... a silly metric when world survival is at stake.

Instead, survival in this world should never have been about money. It should have been about the positive benefits that humans can offer to one another without the driving need for acquisition of a piece of paper.

We are put on this earth to learn, grow and understand our universe. That's why we are here. Knowledge is the currency. It's what keeps our society functioning. It's the scientists, architects, mathematicians, engineers and thinkers who keep our society flowing, growing, moving and functioning. It's not money. Money is a means to an end, but it is not the end itself. The end goal is the acquisition of knowledge, not money.

That’s where society needs to rethink money’s place in this world. Does money help acquire knowledge? No. It helps acquire sustenance and material possessions. Do we need jets or fast cars or million dollar houses? No. That’s unnecessary luxury. What helps humanity is the acquisition of knowledge and using that knowledge to progress society and humanity further. In that goal, computers are important, but only from the need for access to and for acquiring knowledge.

Money, on the other hand, doesn’t have anything to do with the acquisition of knowledge. Sure, higher learning institutions take money and, in quid-pro-quo form, teach you something. Though, technically, you could learn that something on your own. You don’t need to pay an institution to learn. You can read the books for yourself.

Sounds like Communism

I’m not advocating communism here. I’m actually advocating something beyond communism. I’m advocating that we need to learn to rebuild a society based on the currency of knowledge and the acquisition of knowledge rather than of money. The more “wise” you are, the more you contribute to the world’s betterment, the more you are afforded and the more you are revered. That’s what the world needs to achieve. This is the ideal a prosperous world needs to grow well into the future. Those who do and learn and give back are afforded the riches of the world. Those who choose not to learn are afforded much less.

Money, at this point, is an antiquated measure of success that COVID has clearly shown to be the world's Achilles heel. Success should not be measured by how much you have in the bank; it should be measured by how much you've contributed to the world in problem solving. Let's use the brains we have been given to solve societal problems and better our world condition, instead of trying to acquire and throw silly printed pieces of paper at it.

How would a new society work?

This is where this article must diverge. Such a new society would need a fully realized manifesto across all sectors describing how to accomplish such a transition away from money. That's way beyond the scope of a few paragraphs. Perhaps I could write that manifesto as a book entitled, "How to Transition Society Away from Money". I might even write such a manifesto one day. Unfortunately, it goes way beyond the scope of this article, so I'll leave it for another day. Suffice it to say that it is possible for society to exist in a new state without money as its primary motivation. Let's get back to the topic of relaxing social distancing.

The World’s Ills

Unfortunately, our leaders are very much constrained by the ills of our economy revolving around pieces of paper. As such, our leaders are now constrained to look for solutions based on this ill-conceived, narrow situation of our own making. None of these leaders are attempting to think outside of the box. They are firmly rigid in their thought processes regarding how to restart our economy "as it once was".

Our economy as it formerly existed is over. Recovery will take full eradication of this virus from every person in the world, coupled with about a decade to bring the world back to where we were just a few months ago. A decade. Yes, I said a decade… and that's a conservative estimate. It could take several decades.

Consider that if we lose 10% of the United States population, we've taken our economy back to the point where we were 38 years ago, in 1981. 20% of the population lost and we're back over 50 years ago, in 1968. 50% of the population lost and we're back to an economy that ended 64 years ago, in 1955. Make no mistake, losing even 10% of the population is enough to cause major widespread problems in the United States, let alone throughout the world.

Losing a vast number of people in a short period is enough to send ANY economy into a tailspin. Because this virus is not at all selective about whom it targets, it will kill indiscriminately across every age group and economic status: young or old, male or female, rich or poor. It may even kill animals. Granted, poor people may fare worse living in closer proximity to one another, but this virus doesn't care about age, race, gender, economic status or, indeed, anything else. It only seeks a host to survive, and that's exactly what it is doing.

Reopening

Planning to reopen the world, Wall Street, Main Street or any other street at a less than 1% infection rate is a guarantee of a second, even deadlier wave. It's a fool's errand. These reckless actions will trick many people into believing that they are safe, when in fact our leaders are setting themselves (and the population) up to become death statistics.

This article serves as both a cautionary tale and as a solemn warning to world leaders. Opening up the world at this point is effectively looking down the barrel of a gun while playing Russian Roulette.

When the second COVID wave hits, and it will, it will leave hospitals with zero space while the death toll catastrophically soars well beyond the statistically averaged 1.25%. Perhaps this hard lesson is the wake-up call the world leaders need? Unfortunately, this lesson will be learned on the backs of so many who died.

If you’re reading this article, don’t fall for this reopening trick. Stay at home and urge your workplace to remain closed. If you value your health and, indeed, your own survival and your family’s survival, stay at home even after reopening. We’re still only at the beginning of this… there is still a much, much longer and deadlier road ahead.


CanDo: An Amiga Programming Language

Posted in computers, history by commorancy on March 27, 2018

At one point in time, I owned a Commodore Amiga. This was back when I was in college. I first owned an Amiga 500, then later an Amiga 3000. I loved spending my time learning new things about the Amiga and I was particularly interested in programming it. While in college, I came across a programming language by the name of CanDo. Let’s explore.

HyperCard

Around the time that CanDo came to exist on the Amiga, Apple had already introduced HyperCard on the Mac. It was a 'card' based programming language. What that means is that each screen (i.e., card) had a set of objects such as fields for entering data, placement of visual images or animations, buttons and whatever other things you could jam onto that card. Behind each element on the card, you could attach programming functions and conditionals (if-then-else, do…while, etc.). For example, if you had an animation on the screen, you could add a play button. If you clicked the play button, a function would be called to run the animation just above the button. You could even add buttons like pause, slow motion, fast forward and so on.
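
To make the card concept more concrete, here's a minimal, hypothetical Python sketch of what "attaching a handler to a button on a card" looks like conceptually. This is not HyperCard or CanDo syntax; the Card, Button and Animation names are invented purely for illustration.

```python
# Hypothetical sketch of card-based programming: each card holds elements,
# and each element carries its own attached handler (its "script").

class Animation:
    def __init__(self, name, frames):
        self.name, self.frames = name, frames

    def play(self):
        for frame in range(self.frames):
            print(f"{self.name}: frame {frame + 1}/{self.frames}")

class Button:
    def __init__(self, label, on_click):
        self.label = label
        self.on_click = on_click          # the code "behind" the button

    def click(self):
        self.on_click()                   # run the attached handler

class Card:
    def __init__(self, name):
        self.name, self.elements = name, []

    def add(self, element):
        self.elements.append(element)

# Build a card with an animation and a "Play" button attached to it.
intro = Card("Intro")
anim = Animation("Logo spin", frames=3)
intro.add(anim)
intro.add(Button("Play", on_click=anim.play))

intro.elements[1].click()   # simulates the user pressing "Play"
```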

CanDo was an interpreted object oriented language written by a small software company named INOVATronics out of Dallas. I want to say it was released around 1989. Don't let the fact that it was an interpreted language fool you. CanDo was fast for an interpreted language (by comparison, I'd say it was proportionally faster than the first version of Java), even on the 68000-series CPUs of the day. The CanDo creators took the HyperCard idea, expanded it and created their own version on the Amiga. While it supported very similar ideas to HyperCard, it certainly wasn't identical. In fact, it was a whole lot more powerful than HyperCard ever dreamed of being. HyperCard was a small infant next to this product. My programmer friend and I would soon come to find out exactly how powerful the CanDo language could be.

CanDo

Amiga owners only saw what INOVATronics wanted them to see in this product: a simple, easy-to-use user interface consisting of a 'deck' (i.e., deck of cards) concept where you could add buttons, fields, images, sounds or animation to one of the cards in that deck. They were trying to keep this product as easy to use as possible. It was, for all intents and purposes, a drag-and-drop programming language, but it closely resembled HyperCard in functionality, not language syntax. For the most part, you didn't need to write a stitch of code to make most things work. It was all just there. You pulled a button over and a bunch of pre-programmed functions could be placed onto the button and attached to other objects already on the screen. As a language, it was about as simple as you could make it. I commend the INOVATronics guys on the user-friendly aspect of this system. This was, hands down, one of the easiest visual programming languages to learn on the Amiga. They absolutely nailed that part of the software.

However, if you wanted to write complex code, you most certainly could do that as well. The underlying language was completely full featured and easy to write. The syntax checker was amazing and would help you fix just about any problem in your code. The language had a full set of typical conditional constructs including for loops, if…then…else, do…while, while…until and even do…while…until (very unusual to see this one). It was a fully stocked, mostly free-form programming language, not unlike C, but easier. If you're interested in reading the manual for CanDo, it is linked at the end of this section below.
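
I don't have CanDo's exact syntax in front of me, so here's a rough Python equivalent of what a do…while…until loop amounts to: the body always runs at least once, the loop repeats while one condition holds, and it stops as soon as another becomes true. Purely illustrative; the helper name and example are made up.

```python
# Rough Python equivalent of do...while...until: run the body at least once,
# keep looping WHILE the first condition is true, and stop UNTIL (i.e., as
# soon as) the second condition becomes true.

def do_while_until(body, while_cond, until_cond):
    while True:
        body()
        if until_cond() or not while_cond():
            break

# Example: count up while below 10, but stop as soon as we hit a multiple of 4.
counter = {"n": 0}

def body():
    counter["n"] += 1
    print("n =", counter["n"])

do_while_until(
    body,
    while_cond=lambda: counter["n"] < 10,
    until_cond=lambda: counter["n"] % 4 == 0,
)
```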

As an object oriented language, internal functions were literally attached to objects (usually screen elements), for example, a button. The button would then have a string of code or functions that drove its internal functionality. You could even dip into that element's functions to get data out (from the outside). Like most OO languages, the object itself is opaque. You can't see its function names or use them directly; only the object that owns that code can. However, you could ask the object to run one of its functions and return data back to you. Of course, you had to know that function existed. In fact, this would be the first time I was introduced to the concept of object oriented programming in this way. There was no such thing as free floating code in this language. All code had to exist as an attachment to some kind of object. For example, it was directly attached to the deck itself, to one of the cards in the deck, to an element on one of the cards or to an action associated with that object (mouse, joystick button or movement, etc.).
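
Here's a rough Python analogy for that opacity (illustrative names only, not CanDo's actual API): the element's code lives inside the object, outside code can't reach the internals directly, but it can ask the object to run a named routine and hand back a result.

```python
# Illustrative analogy: code is attached to an object and hidden from callers,
# but callers can ask the object to run one of its routines and return data.

class Element:
    def __init__(self, name):
        self.name = name
        self._routines = {}               # "attached" code, opaque to outsiders

    def attach(self, routine_name, func):
        self._routines[routine_name] = func

    def request(self, routine_name, *args):
        # The only doorway in: ask the object to run its own routine for you.
        return self._routines[routine_name](*args)

score_field = Element("ScoreField")
score_field.attach("FormatScore", lambda points: f"Score: {points:05d}")

print(score_field.request("FormatScore", 42))   # -> "Score: 00042"
```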

CanDo also supported RPC calls. This was incredibly important for communication between two separately running CanDo deck windows. If you had one deck window with player controls and another window with a video you wanted to play, you could send a message from one window to the other to perform an action in that window, like play, pause, etc. There were many reasons to have multiple windows open, each communicating with the others.
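
A minimal sketch of that idea follows: one "controls" window sends messages to a "player" window, which acts on them. This is plain Python, not CanDo's actual RPC mechanism, and the window and message names are invented.

```python
# Toy sketch of message passing between two windows: a controls window sends
# commands to a player window, which acts on them. Not CanDo's real RPC API.

class PlayerWindow:
    def __init__(self):
        self.state = "stopped"

    def receive(self, message):
        if message == "play":
            self.state = "playing"
        elif message == "pause":
            self.state = "paused"
        print("PlayerWindow state:", self.state)

class ControlsWindow:
    def __init__(self, target):
        self.target = target              # the window we send messages to

    def press(self, button):
        self.target.receive(button)       # "RPC" to the other window

player = PlayerWindow()
controls = ControlsWindow(player)
controls.press("play")
controls.press("pause")
```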

The INOVATronics guys really took programming the Amiga to a whole new level… way beyond anything in HyperCard. It was so powerful, in fact, there was virtually nothing stock on the Amiga it couldn’t control. Unfortunately, it did have one downside. It didn’t have the ability to import system shared libraries on AmigaDOS. If you installed a new database engine on your Amiga with its own shared function library, there was no way to access those functions in CanDo by linking it in. This was probably CanDo’s single biggest flaw. It was not OS extensible. However, for what CanDo was designed to do and the amount of control that was built into it by default, it was an amazing product.

I’d also like to mention that TCP/IP support was only just arriving on the Amiga via modems at the time. I don’t recall how much CanDo supported network sockets or network programming. It did support com port communication, but I can’t recall if it supported TCP/IP programming. I have no doubt that had INOVATronics stayed in business and CanDo progressed beyond its few short years in existence, TCP/IP support would have been added.

CanDo also supported Amiga Rexx (ARexx) to add functionality that CanDo didn’t support directly. Though ARexx worked, it wasn’t as convenient as having a feature supported directly by CanDo.

Here are the CanDo manuals if you’re interested in checking out more about it:

Here’s a snippet from the CanDo main manual:

CanDo is a revolutionary, Amiga specific, interactive software authoring system. Its unique purpose is to let you create real Amiga software without any programming experience. CanDo is extremely friendly to you and your Amiga. Its elegant design lets you take advantage of the Amiga’s sophisticated operating system without any technical knowledge. CanDo makes it easy to use the things that other programs generate – pictures, sounds, animations, and text files. In a minimal amount of time, you can make programs that are specially suited to your needs. Equipped with CanDo, a paint program, a sound digitizer, and a little bit of imagination, you can produce standalone applications that rival commercial quality software. These applications may be given to friends or sold for profit without the need for licenses or fees.

As you can see from this snippet, INOVATronics thought of it as an ‘Authoring System’, not as a language. The CanDo front end might have been an authoring system, but the underlying language was most definitely a programming language.

CanDo Player

The way CanDo’s creation process worked was that you would create your CanDo deck and program it in the deck creator. Once your deck was completed, you only needed the CanDo Player to run your deck. The Player ran with much less memory than the entire CanDo editor system, and it was freely redistributable. However, you could run your decks from the CanDo editor if you preferred. The CanDo Player could also be appended to the deck to become a pseudo-executable, which allowed you to distribute your software to other people. Anything you created in CanDo was fully redistributable without any strings attached to CanDo. You couldn’t distribute CanDo itself, but you could distribute anything you created in it.

The save files for decks were simple byte-compiled packages. Instead of having to store human-readable words and phrases, each language keyword had a corresponding byte code. This made stored decks much smaller than keeping all of the human-readable code there. It also made the code a lot trickier to read if you opened the deck up in a text editor. It wasn’t totally secure, but it was better than having your code out there for all to see when you distributed a deck. You would actually have to own CanDo to decompile the code back into a human-readable format… which was entirely possible.
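
CanDo's actual byte codes aren't documented here, but the general idea is simple enough to sketch: map each keyword to a single byte on save, and map back on load. The keyword/byte assignments below are invented solely for illustration.

```python
# Toy illustration of byte-compiling a script: each keyword maps to one byte,
# so the saved deck is smaller and not trivially readable in a text editor.
# These keyword/byte assignments are made up, not CanDo's real encoding.

KEYWORDS = {"if": 0x01, "then": 0x02, "else": 0x03, "do": 0x04,
            "while": 0x05, "until": 0x06, "endif": 0x07}
REVERSE = {code: word for word, code in KEYWORDS.items()}

def compile_tokens(tokens):
    return bytes(KEYWORDS[t] for t in tokens)

def decompile_bytes(blob):
    return [REVERSE[b] for b in blob]

script = ["if", "then", "else", "endif"]
blob = compile_tokens(script)
print(blob)                    # b'\x01\x02\x03\x07' -- compact, unreadable
print(decompile_bytes(blob))   # back to ['if', 'then', 'else', 'endif']
```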

The CanDo language was way too young to support more advanced code security features, like password encryption before executing the deck, even though PGP was a thing at that time. INOVATronics had more to worry about than securing your created deck’s code from prying eyes, though they did improve this as they upgraded versions. I also think the INOVATronics team was just a little naïve about how easy it would be to crack open CanDo, let alone user decks.

TurboEditor — The product that never was

A programmer friend who was working towards his CompSci masters owned an Amiga, and he also owned CanDo. In fact, he introduced me to it. He had been poking around with CanDo and found that it supported three very interesting functions: it had the ability to decompile its own code into human-readable format to allow modification, syntactically check the changes and recompile it, all while still running. Yes, you read that right. It supported on-the-fly code modification. Remember this, it’s important.

Enter TurboEditor. Because of this one simple little thing (not so little, actually) that my friend found, we were able to decompile the entire CanDo program and figure out how it worked. Remember that important thing? Yes, that’s right: CanDo was actually written in itself, and it could actually modify pieces that were currently executing. Let me clarify this just a little. One card could modify another card, then pull that card into focus. The card being modified wasn’t currently executing, but the deck was. In fact, we came to find that CanDo was merely a facade. We also found that there was a very powerful object oriented, fully reentrant, self-modifying programming language under that facade of a UI. In fact, this is how CanDo’s UI worked. Internally, it could take an element, decompile it, modify it and then recompile it right away and make it go live, immediately adding the updated functionality to a button or slider.
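
For readers who have never seen this trick, here's a crude Python sketch of the decompile-edit-recompile-while-running idea: a handler's source is kept alongside the object, edited as text, recompiled, and swapped in without restarting the program. This is the general shape of the technique, not how CanDo/AV1's internals actually worked; all names here are made up.

```python
# Crude sketch of on-the-fly code modification: keep an element's handler as
# source text, edit the text, recompile it, and swap it in while the program
# keeps running. Illustration only; not CanDo/AV1 internals.

class LiveButton:
    def __init__(self, source):
        self.source = source
        self._recompile()

    def _recompile(self):
        namespace = {}
        exec(self.source, namespace)          # "recompile" the handler
        self.handler = namespace["on_click"]

    def edit(self, new_source):
        self.source = new_source              # the "decompiled" source is kept
        self._recompile()                     # make the change live immediately

    def click(self):
        self.handler()

button = LiveButton("def on_click():\n    print('old behavior')\n")
button.click()                                # -> old behavior
button.edit("def on_click():\n    print('new behavior, patched live')\n")
button.click()                                # -> new behavior, patched live
```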

While CanDo could modify itself, it never did. Instead, it utilized a parent-child relationship: it always created a child sandbox for user-created decks. This sandbox area is where the user built new CanDo decks, and it is how CanDo built and displayed a preview of your deck’s window. The sandbox would then be saved to a deck file and executed as necessary. In fact, it would be one of these sandbox areas that we would use to build TurboEditor, in TurboEditor.

Anyway, together, we took this find one step further and created an alternative CanDo editor that we called TurboEditor, so named because you could get into it and edit your buttons and functions much, much faster than with CanDo’s somewhat sluggish and clunky UI. In fact, we took a demo of our product to INOVATronics’s Dallas office and pitched the idea of this new editor to them. Let’s just say that team was lukewarm and not very receptive to the idea initially. While they were somewhat impressed with our tenacity in unraveling CanDo to its core, they were also a bit dismayed and a little perturbed by it, though they warmed to the idea a little. Still, we pressed on, hoping we could talk them into releasing TurboEditor as an alternative script editor… as an adjunct to CanDo.

Underlying Language

After meeting with and having several discussions with the INOVATronics team, we found that the underlying language actually had a different name: AV1. Even then, everyone called it ‘CanDo’ rather than by that name. Suffice it to say that I was deeply disappointed that INOVATronics never released the underlying fully opaque, object oriented, reentrant, self-modifying, on-the-fly AV1 language or its spec. If they had, it would have easily become the go-to deep programming language of choice for the Amiga. Most people at the time had been using C if they wanted to dive deep. However, INOVATronics had a product poised to take over for C on the Amiga in nearly every way (except for the shared library issue, which could have been resolved).

I asked the product manager, while at the INOVATronics headquarters, about releasing the underlying language, and he specifically said they had no plans to release it in that way. I always felt that was shortsighted; in hindsight for them, it probably was. If they had released it, it could have easily become CanDo Pro and they could have sold it for twice the price or more. They just didn’t want to get into that business for some reason.

I also spoke with several other folks while I was at INOVATronics. One of them was the programmer who actually built much of CanDo (or, I should say, the underlying language). He told me that he built the key pieces of CanDo (the compiler portions) in assembly, and the rest with just enough C to bootstrap the language into existence. The C was also needed to link in the necessary Amiga shared library functions to control audio, animation, graphics and so on. This new language was very clever and very useful for at least building CanDo itself.

It has been confirmed by Jim O’Flaherty, Jr. (formerly Technical Support for INOVATronics) via a comment that the underlying language name was, in fact, AV1, with the ‘AV’ standing for audio visual. This underlying object oriented Amiga programming language was an entirely new language at the time, designed specifically to control the Amiga computer.

Demise of INOVAtronics

After we got what seemed like a promising response from the INOVATronics team, we left their offices. We weren’t sure it would work out, but we kept hoping we would be able to bring TurboEditor to the market through INOVATronics.

Unfortunately, our hopes dwindled. As weeks turned into months waiting for the go-ahead for TurboEditor, we decided it wasn’t going to happen. We did call them periodically to get updates, but nothing came of that. We eventually gave up, not because we didn’t want to release TurboEditor, but because INOVATronics was apparently having trouble staying afloat. Apparently, their flagship product, CanDo, wasn’t able to keep the lights on for the company at the time. In fact, they were probably floundering when we visited them. I will also say their offices were a little bit of a dive. They weren’t in the best area of Dallas and were in an older office building. The office was clean enough, but the space just seemed a little well worn.

Within a year of meeting the INOVATronics guys, the entire company folded. No more CanDo. It was also a huge missed opportunity for me in more ways than one. I could have gone down to INOVATronics at the time and bought the rights to the software at a fire-sale price, then resurrected it as TurboEditor (or released the underlying language). Hindsight is 20/20.

We probably could have gone ahead and released TurboEditor after the demise of INOVATronics, but we had no way to support the CanDo product without having their code. We would have had to buy the rights to the software code for that.

So, there you have it. My quick history of CanDo on the Amiga.

If you were one of the programmers who happened to work on the CanDo project at INOVATronics, please leave a comment below with any information you may have. I’d like to expand this article with any information you’re willing to provide about the history of CanDo, this fascinating and lesser known Amiga programming language.

 
