Random Thoughts – Randocity!

The Evolution of Sound Recording

Posted in audio engineering, audio recording, history by commorancy on February 14, 2023

Beginning in the 1920s and up to the present, sound recordings have changed and improved dramatically. This article spans 100 years of audio technology improvements, though audio recording itself dates all the way back to the phonautograph in 1860. What drove these changes was primarily the quality of the recording media available at the time. This is a history-based article and runs about 20,000 words due to the material and ten decades covered. Grab a cup of your favorite hot beverage, sit back and let’s explore the many highlights of what sound recording has achieved since 1920.

Before We Get Started

Some caveats to begin. This article isn’t intended to offer a comprehensive history of all possible sound devices or technologies produced since the 1920s. Instead, its goal is to provide a glimpse of what has led to our current technologies by calling out the most important technological breakthroughs in each decade, according to this Randocity author. Some of these audio breakthroughs may not have been designed for the field of audio, but have impacted audio recording processes nonetheless. If you’re really interested in learning about every audio technology ever invented, sold or used within a given decade, Google is the best place to start for that level of exploration and research. Wikipedia is another good source of information.

Know then that this article will not discuss some technologies. If you think that a missing technology was important enough to have been included, please leave a comment below for consideration.

This article is broken down by decades, punctuated with deeper dives into specific topics. This sectioning is intended to make the article easier to read over multiple sittings if you’re short on time. There is intentionally no TL;DR section. If you want a quick synopsis you can read in 5 minutes, this is not that article.

Finally, because of the length of this article, some unintended typos, misspellings and other errors may still be present. Randocity is continuing to comb through this article to shake out any loose grammatical problems. Please bear with us while we continue this cleanup. During this cleanup process, more information may also be added to improve clarity or as the article requires.

With that in mind, onto the roaring…

The 1920s

Ah, the flapper era. Let’s all reminisce over the quaint cloche hats covering tightly waved hair, the Charleston headbands adorned with a feather, the fringe dresses and the flapper era music in general, which wouldn’t be complete without including the Charleston itself.



For males, it was all about a pinstripe or grey suit with a Malone cap, straw hat or possibly a fedora. While women were burning their hair with hot irons to create that signature 1920s wave hairstyle and slipping into fringe flapper evening dresses, musicians were recording their music using, at least by today’s standards, antiquated audio equipment. In the 1920s, though, that recording equipment was considered top end professional gear!

In the 1920s, recordings produced in a recording studio were recorded or ‘cut’ by a record cutting lathe; hence the term “record cut”. This style of lathe recorder used a stylus which cut a continuous groove into a “master” record, usually made of lacquer. The speed? 78 RPM (revolutions per minute). Prior to the 1920s, records were made acoustically, with no electricity involved, and a studio typically used just a single acoustic microphone. The 1920s ushered in electrical recording: electrical microphones feeding an immense amplifier connected to the lathe recorder. Electrical amplification improved sound quality, allowed for more microphones and better microphone placement, and made recordings sound louder and more natural on the master recording. Effectively, a studio recorded the music straight onto a “cut” record, which was then used as a master to mass produce shellac 78 RPM records sold to consumers in stores.

This also meant that there was no such thing as overdubbing. Musicians had to get the entire song down in one single take. Multiple takes could be used to capture the best possible version, but each take consumed another master disc until that best take was made.

Though audio recording processes would improve only a little from 1920 to 1929, going into the 30s the recording process would show much more improvement. We would have to wait until 1948, when the 33⅓ RPM record was introduced, to see a decidedly marked improvement in sound quality on records. Until then, 78 RPM shellac records would remain the steadfast, but average quality, standard for buying music.

Non-electrical recordings of the 1920s utilized only a single microphone to record the entire band, including the singer. It wouldn’t be until the mid to late 20s, with electrical recording processes using amplifiers, that a two channel mixing board became available, allowing for placement of two or more wired microphones: one or more for the band and one for the singer.

Shellac Records

Before we jump into the 1930s, let’s take a moment to discuss the mass produced shellac 78 records, which remained popular well into the 1950s. Shellac is very brittle, so dropping one of these records would result in it shattering into small pieces. Of course, one of these records can also be intentionally broken by whacking it against something hard, like so…




Shellac records fell out of vogue primarily because shellac became a scarce commodity due to World War II wartime efforts, but also because of the format’s lower quality when compared with vinyl records. The scarcity of shellac, combined with the rise of vinyl records, led to the demise of the shellac format by 1959.

This wartime scarcity of shellac also led to another problem: the loss of some audio recordings. Many people performed their civic duty by turning in their shellac 78 RPM records to help ease the shortage, and around this time some 78s were even pressed on vinyl instead. While a noble goal, turning in shellac 78s to help the war effort also contributed to the loss of many recordings. In essence, this wartime effort may have cost us audio recordings that may never be heard again.

1920s Continued

As for cinema sound, it would be the 1920s that introduced moviegoers to what would be affectionately dubbed “talkies“. Cinema sound as we know it, using an optical strip alongside the film, was not yet available. Synchronization of sound to motion pictures was immensely difficult and ofttimes impractical to achieve with the technologies of the time. One system used was the sound-on-disc process, which required synchronizing a separate large phonograph disc with the projected motion picture. Unfortunately, this synchronization was difficult to achieve reliably. The first commercial success of this sound-on-disc process, albeit with limited sound throughout, was The Jazz Singer in 1927.

Even though the sound-on-film (optical strip) process (aka Fox Film Corporation’s Movietone) would be invented during the 1920s, it wouldn’t be widely used until the 1930s, when the process became fully viable for commercial film use. The first movies released using Fox’s Movietone optical audio system were Sunrise: A Song of Two Humans (1927) and, a year later, Mother Knows Best (1928). Until optical sound became more widely and easily accessible to filmmakers, most filmmakers in the 1920s utilized sound-on-disc (phonographs) to synchronize their sound separately with the film. Only the Fox Film Corporation, at that time, had access to the Movietone film process, due to William Fox having purchased the patents in 1926. Even then, the two films above sported only limited voice acting on film; most of the audio in those two pictures consisted of music and sound effects.

Regardless of the clumsy, low quality and usually unworkable processes for providing motion picture sound in the 1920s, this decade was immensely important in ushering sound into the motion picture industry. If anything, the 1920s (and William Fox) proved that sound would become fundamental to the motion picture experience.

Commercial radio broadcasting also began during this era. In November of 1920, KDKA began broadcasting its first radio programs. This began the era of commercial radio broadcasting that we know today. With this broadcast, radio broadcasters needed ways to attenuate the signal to accommodate the broadcast frequency bandwidth requirements.

Thus, additional technologies such as audio compression and limiting would be both needed and created during the 1930s. These compression technologies were designed to keep audio levels strictly within radio broadcast specifications, preventing overloading of the radio transmitter and giving the listener a better audio experience on their new radio equipment. On a related note, RCA apparently held most of the patents for AM radio broadcasting during this time.
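To illustrate the idea behind compression and limiting, here is a minimal sketch in Python. The thresholds and ratio are hypothetical illustrative values, and this is a textbook-style model of the concept, not a recreation of any 1930s broadcast circuit.

```python
# Illustrative sketch of compression vs. limiting. A compressor reduces
# level above a threshold by a ratio; a limiter is the extreme case that
# never lets the level exceed a ceiling. Parameters are assumptions.

def compress(sample: float, threshold: float = 0.5, ratio: float = 4.0) -> float:
    """Scale down the portion of a sample's level above `threshold` by `ratio`:1."""
    sign = 1.0 if sample >= 0 else -1.0
    level = abs(sample)
    if level <= threshold:
        return sample  # quiet material passes through untouched
    return sign * (threshold + (level - threshold) / ratio)

def limit(sample: float, ceiling: float = 0.8) -> float:
    """A brick-wall limiter: clamp the sample to +/- `ceiling`."""
    return max(-ceiling, min(ceiling, sample))

# Quiet samples pass through; loud peaks are tamed before transmission.
signal = [0.1, 0.4, 0.9, -1.0, 0.6]
compressed = [compress(s) for s in signal]
```

The net effect is what broadcasters wanted: a narrower range between the quietest and loudest moments, so the transmitter is never overdriven.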


The 1930s

By the 1930s, women’s fashion had become more sensible, less ostentatious, less flamboyant and more down-to-earth. Gone are the cloche hats and heavy eye makeup. This decade shares a lot of its women’s fashion sensibilities with 1950s dress.

Continuing the sound-on-film discussion from the 1920s, William Fox’s investment in Movietone would only prove useful until 1931, when Western Electric introduced a light-valve optical recorder which superseded Fox’s Movietone process. The Western Electric process would become the optical film process of choice for feature filmmakers throughout the 1930s and beyond, even though Western Electric’s recording equipment proved bulky and heavy. Fox’s Movietone optical process would remain in use for producing Movietone newsreels until 1939 thanks to its better portability, owing in part to Western Electric’s over-engineered, unnecessarily heavy light-valve case designs.

As for commercial audio recording throughout the 1930s, recording processes hadn’t changed drastically from the 20s, except that new equipment was introduced to record music better, including better microphones. While amplifiers got a little smaller, microphone quality improved along with the use of multi-channel mixing boards. These boards were introduced so that, instead of recording only one microphone, many microphones (as many as six) could capture an orchestra and singer, mixed down into one monophonic input for the lathe recorder. This change allowed for better, more accurate, more controlled sound recording and reproduction. However, the lathe stylus recorder was still the main way to record, even though audio tape recording equipment was beginning to appear, such as the AEG/Telefunken Magnetophon K1 (1935).

At this time, RCA produced its first uni-directional ribbon microphone, the large 77A (1932), which became a workhorse in many studios; there is some discrepancy on the exact date the 77A was introduced. It was big and bulky, but became an instant favorite. In 1933, RCA introduced the smaller RCA 44A, a bi-directional successor to the 77A. The model 77 would go on to see the release of the 77B, C, D and DX, though the two latter 77 series microphones wouldn’t see release until the mid-40s, after having been redesigned to about the size of the 44.

There would be three 44 series models released: the 44A (1933), 44B (1936) and 44BX (1938). These figure 8 pattern bi-directional ribbon microphones became the workhorse mics of most of the recording and broadcast industries in the United States, ultimately replacing the 77A throughout the 30s. These microphones were, in fact, so popular that some are still in use today and can be found on eBay. There’s even an RCA 44A replica being produced today by AEA. Unfortunately, RCA discontinued manufacture of the 44 microphone series in 1955, and discontinued producing microphones altogether in 1977… ironically, RCA’s last model released was a model 77, in 1977. The 44A sported an audio pickup range of 50 Hz to 15,000 Hz… an impressive frequency range, even though the lathe recording system could not record or reproduce all of it.

A mixing board, when combined with several of the new workhorse 44A mics, allowed a sound engineer to bring certain channel volumes up and others down. Use of a mixing board allowed vocalists to be brought front and center in the recording rather than drowned out by the band… with sound leveled on the fly during recording by the engineer’s hand and a pair of monitor headphones or speakers.

One microphone problem during the 20s was that microphones were primarily omni-directional, meaning any noise would be picked up from anywhere around the microphone. In recording situations, everything had to remain entirely silent during the recording process except for the sound being recorded. By 1935, Siemens and RCA had introduced various cardioid and other directional microphones to reduce extraneous side noise. These microphones picked up audio mainly within their pickup pattern, rejecting sounds arriving from outside it. This improvement was especially important when recording audio for film on location: you can’t exactly stop car honking, tires squealing and general city noises during a take.
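The pickup patterns mentioned above can be sketched with the standard first-order textbook formula gain(θ) = α + (1 − α)·cos(θ), where α selects the pattern. This is a general model, not a specification of any particular RCA or Siemens microphone.

```python
import math

# First-order microphone pickup patterns (textbook model):
#   gain(theta) = alpha + (1 - alpha) * cos(theta)
# alpha = 1.0 -> omni-directional, 0.5 -> cardioid, 0.0 -> figure-8.

def pattern_gain(theta_deg: float, alpha: float) -> float:
    """Relative sensitivity for sound arriving at angle theta_deg off-axis."""
    return alpha + (1.0 - alpha) * math.cos(math.radians(theta_deg))

# A cardioid rejects sound arriving from directly behind (180 degrees)...
rear_cardioid = pattern_gain(180, alpha=0.5)   # ~0.0
# ...while an omni picks it up at full strength.
rear_omni = pattern_gain(180, alpha=1.0)       # 1.0
# A figure-8 hears front and back equally but rejects the sides (90 deg).
side_fig8 = pattern_gain(90, alpha=0.0)        # ~0.0
```

This is why a figure-8 ribbon like the 44 series suited a singer facing the mic, while a cardioid better suppressed noise coming from off to the sides and rear on a film set.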

Most recording studios at the time relied on heavy wall-mounted gear that wasn’t at all easy to transport, which meant recording had to be done in the confines of a studio using fixed equipment. This need for portability led to the release of the 1938 Western Electric 22D model mixer, which had 4 microphone inputs and a master gain output. It sported a lighted VU meter and could be mounted in a portable carrying case or possibly in a rack. This unit even sported a battery pack! In 1938, it was used primarily for radio broadcast recording, but this and similar portable models were also used when recording on-location audio for films or newsreels at the time.


In the studio, larger channel versions were also utilized to allow for more microphone placement, still mixing down into a single monophonic channel. Such studios typically used up to 6 microphones, though amplifiers sometimes added hiss and noise, which might be audibly detectable if too many were strung together. Phase problems could also arise if too many microphones were utilized. The resulting output recording would be monophonic, destined for mass produced shellac 78 RPM records, radio broadcast or movie theaters.

Here are more details about this portable Western Electric 22D mixing board…


Lathe Recording

Unfortunately, electric current during this time was still considered too unreliable and could cause audio “wobble” if used to power the turntable during recording. In some cases, lathe recorders used a heavy counterweight suspended from the ceiling, which would slowly drop to the floor at a specified rate and drive the rotation of the lathe turntable at a continuous speed. This weight system provided the lathe with a stable cut from the beginning to the end of the recording, unaffected by potentially unstable electrical power. Electrical power was used for amplification purposes, but not always for driving the turntable rotation while “cutting” the recording. Spring-wound mechanisms may have also been used.

1930s Continued

All things considered, this Western Electric portable 4 channel mixer was pretty far ahead of the curve. Audio innovations like this from the 1930s led us directly into the 60s and 70s era of recording; this portable mixing board alone, released in 1938, was definitely ahead of its time. Of course, this portability was likely driven both by broadcasters, who wanted to record audio on location, and by the movie industry, which needed to record on-location audio while filming. Though, the person tasked with using this equipment had to lug around 60 lbs of it, 30 lbs on each shoulder.

Additionally, during the 1930s and specifically in 1934, Bell Labs began experimenting with stereo (binaural) recordings in their audio labs. Here’s an early stereo recording from 1934.

Note that even though this recording was made in stereo in 1934, the first commercially produced stereo / binaural record wouldn’t hit stores until 1957. Until then, monophonic / monaural 78 RPM records remained the primary standard for purchasing music.

For home recording units (also used in professional situations) in the 1930s, there were options. Presto created various model home recording lathes. Some of Presto’s portable models include the 6D, D, M and J models, introduced between 1932 and 1937; the K8 model followed around 1941. Some of these recorders can still be found on the secondary market today in working order. These units required specialty blank records in various 6″, 8″ and 10″ sizes, sporting 4 holes in the center. This home recording lathe system recorded at either 78 or 33⅓ RPM. In 1934, these recorder lathes cost around $400, equivalent to well over $2,000 today. By 1941, the price of the recorders had dropped to between $75 and $200. The blanks cost around $16 per disc during the 30s, equivalent to around $290 today. Just think about recording random noises onto a $290 blank disc. Expensive!
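For the curious, inflation adjustments like the ones quoted above are typically computed by scaling a historical price by the ratio of consumer price indexes. The sketch below uses illustrative placeholder CPI values, not exact official figures, so its results are ballpark numbers that will vary with the index and year chosen.

```python
# Hedged sketch of the CPI-ratio method behind "equivalent to X today".
# The index values below are rough placeholder assumptions.

def adjust_for_inflation(price: float, cpi_then: float, cpi_now: float) -> float:
    """Scale a historical price by the ratio of consumer price indexes."""
    return price * (cpi_now / cpi_then)

# Roughly 1930s-era vs. modern CPI levels (assumed values):
CPI_1934, CPI_NOW = 13.4, 300.0

blank_disc = adjust_for_inflation(16.00, CPI_1934, CPI_NOW)   # a $16 blank
recorder = adjust_for_inflation(400.00, CPI_1934, CPI_NOW)    # a $400 lathe
# With these placeholder indexes, the blank disc lands in the low hundreds
# of today's dollars and the recorder well over $2,000, consistent in
# spirit with the article's estimates.
```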


Disney’s Fantasia vs Wizard of Oz

It is worth pausing here to discuss the technical achievements of both Walt Disney and MGM in sound recording and reproduction. Walt Disney contributed greatly to the advancement of theatrical film audio quality and stereo films. Fantasia (produced in 1939, released in 1940) was the first movie to sport a full stereo soundtrack in a theater. This was achieved through the use of a 9 track optical recorder when recording the original music soundtrack; these 9 optical tracks were then mixed down to 4 optical tracks for presenting the audio in a theater. According to Wikipedia, the Fantasia orchestra was outfitted with 36 microphones, condensed down into the aforementioned 9 optical audio tracks when recorded (fewer, actually, since one of the 9 tracks was a click track for animators to use when producing their animations).

To explain optical audio a bit more: optical audio recording and playback is the method a sprocket film projector uses to play back audio through theater sound equipment. This optical audio system remained in use until the introduction of digital audio in the 90s. Physically running alongside the 35mm or 70mm film imagery is an optical, but analog, audio track running vertically throughout the entire length of the film. There have been many formats for this optical track. The audio track is run through a separate analog audio decoder and amplifier at the same time the projector is flipping through images.

For a theater operator to run Fantasia in stereo in 1940, the theater needed two projectors running simultaneously, along with appropriate amplifiers, left, right and center speakers hidden behind the screen and, in the case of Fantasound, speakers mounted in the back of the theater. The first projector presented the visual film image on the screen; that film reel also contained one mono optical audio track (used for backup purposes or for theaters running the film only in mono). The second “stereo” projector ran four (4) optical tracks consisting of the left, right and center audio tracks (likely the earliest 3.0 sound system). The fourth track carried an automated gain control to allow for fades as well as automated volume increases and decreases. This stereo system was dubbed Fantasound by Disney. At a time when mono shellac recordings were common in the home, seeing a full color and stereo motion picture in the theater in November of 1940 would have been a technical marvel.

Let’s pause here to savor this incredible Disney cinema sound innovation. Consider that it’s 1940, just barely out of the 30s. Stereo isn’t even a glimmer in the eye of record labels as yet, and Walt Disney is outfitting theaters with what would essentially be considered a modern (as in 1970s or newer) multichannel theater sound system. Though Cinerama, a 7 channel audio standard, would land in theaters as early as 1952 with the documentary film This Is Cinerama, it wouldn’t be until 1962’s How The West Was Won that theater goers actually got a full scripted feature film using Cinerama’s 7 channel sound system. In fact, Disney’s Fantasound basically morphed into what would become Cinerama, which used three synchronized projectors, though Cinerama used its multiple projectors for a different reason than multichannel sound.

Cinerama also changed sound recording for film. It made filmmakers invest in more equipment and microphones to ensure that all 7 channels were recorded so that Cinerama could be used. Clearly, even though the technology was available for use in cinemas, filmmakers didn’t embrace this new audio technology as readily as theater owners were willing to install it. Basically, it wouldn’t be until the 60s and on into the 70s that Cinerama and the later THX and Dolby sound systems came into common use in cinemas. Disney ushered the idea of stereo into theaters in the 40s, but it took nearly 30 years for the entire film industry to embrace it, aided by easier and cheaper ways to achieve it.

Disney’s optical automated volume gain control track foreshadows Disney’s use of animatronics in its own theme parks beginning in the 1960s. Even though Disney’s animatronics use a completely different control mechanism, the use of an optical track to automate the soundtrack’s volume in 1939 was (and still is) completely ingenious. Indeed, this entire optical stereo system, at a time when theaters were still running monophonic motion pictures, was likewise quite ingenious (and expensive).

Unfortunately, Fantasia’s stereo run in theaters would be short, with only 11 roadshow runs using the Fantasound optical stereo system. Installing Fantasound required a huge amount of equipment, including amplifiers, speakers behind the screen and speakers in the back of the theater. In short, it required the equipment that modern stereo theaters require today.

Consider also that The Wizard of Oz, released in 1939 by MGM, was likewise considered a technical marvel for its Technicolor process, yet this musical was released to theaters in mono. This film’s production did, however, record most, if not all, of its audio on a multitrack recorder during filming, which occurred between 1938 and 1939. It wouldn’t be until 1998 that this original 1939 multitrack audio was restored and remastered in stereo, finally giving The Wizard of Oz a full stereo soundtrack from its original 1930s on-set multitrack recordings.

Here’s Judy Garland singing Over the Rainbow from the original multitrack masters recorded in 1939, a song that wasn’t released in stereo until the film’s 1998 restoration. Note, I would embed the YouTube video inside this article, but this YouTube channel owner doesn’t allow embedding. You’ll need to click through to listen.

As a side note, it wouldn’t be until the 1950s that stereo became commonplace in theaters, and the late 50s before stereo records became available for home use. In 1939, we’re many years away from stereo audio standards. It’s amazing, then, that between 1938 and 1939 MGM had the foresight to record this film’s audio using a multitrack recorder during filming, in addition to employing those spectacular Technicolor sequences.


The 1940s

In addition to Disney’s Fantasia, the 1940s were punctuated by World War II (1939-1945), the Holocaust (1933-1945) and the atomic bomb (1945). The frugality born of the Great Depression, beginning in 1929 and lasting through the late 1930s, carried into the 1940s, partly because of leftover anxieties from the Depression, but also because of the wartime necessity to ration certain items including sugar, tires, gasoline, meat, coffee, butter, canned goods and shoes. This rationing led housewives to be much more frugal in other areas of life, including hairstyles and dress… also because the war surged the prices of some consumer goods.

This frugality influenced fashion and also impacted sound recording equipment manufacturing, most likely because the early 1940s wartime effort required manufacturers to convert to making wartime equipment instead of consumer goods. While RCA continued to manufacture microphones in the mid 40s (mostly after the war), a number of other manufacturers also jumped into the fray. Some microphone manufacturers targeted ham operators, while others created equipment targeted at “home recordists” (sic). These consumer microphones were fairly costly at the time, equivalent to hundreds of dollars today.

Some 1940s microphones sported a slider switch for moving the microphone between uni-directional, bi-directional and omni-directional pickup, meaning the microphone could be used in a wide array of applications. For example, both RCA’s MI-6203-A and MI-6204-A microphones (both released in 1945) offered a slider switch to move between the 3 different pickup types. Earlier microphones, like RCA’s 44A, required opening up the microphone to its main board and moving a “jumper” to various positions, if this change could be performed at all. Performing this change was inconvenient and meant extra setup time. Thus, the slider on the MI-6203 and MI-6204 made this change much easier and quicker. See, it’s the small innovations!

During the 1940s, both ASCAP and, later, BMI (music royalty services, aka performing rights organizations or PROs) changed the face of music. In the 1930s, most music played on broadcast programs had been performed by a live studio orchestra, employing many musicians. During the 1940s, this began to change. As sound reproduction improved, better quality recordings led broadcasters to use prerecorded music instead of live bands during broadcast segments.

This put a lot of musicians out of work, musicians who would have otherwise continued gainful employment with a radio program. ASCAP (established in 1914 as a PRO) tripled its royalties for broadcasters in January of 1941 to help out these musicians. In retaliation for these higher royalty costs, broadcasters dropped ASCAP music from their broadcasts, instead choosing public domain music and, at the time, unlicensed music (country, R&B and Latin). Already disenchanted by ASCAP’s doubled fees in 1939, broadcasters had created their own PRO, BMI (Broadcast Music Incorporated), that same year. Music placed under the BMI royalty catalog would either be free to broadcasters or supplied at a much lower cost than music licensed by ASCAP.

This tripling of fees in 1941 and the subsequent dropping of ASCAP’s catalog by broadcasters put a hefty dent in ASCAP’s (and its artists’) bottom line. By October of 1941, ASCAP had reversed its tripled royalty requirement. During those several months in 1941, ASCAP’s higher fees helped popularize genres of music which were not only free to broadcasters, but were also being introduced to unfamiliar new listeners. Thus, musical genres which typically hadn’t gotten much air play, including country, R&B and Latin music, saw major growth in popularity during this time via radio broadcasters.

This genre popularity growth is partly responsible for the rise of Latin artists like Desi Arnaz and Carmen Miranda throughout the 1940s.

By 1945, many recording studios had moved away from lathe stylus recording turntables and begun using magnetic tape to prerecord music and other audio. The lathe turntables were still used to create the final 78 RPM disc master from the audio tape master for commercial records; broadcasters, however, could skip that step entirely by playing back reel to reel tape directly.

Reel to reel tape also allowed for better fidelity and better broadcast playback than the noisy 78 RPM shellac records of the time. It also cost less to use, because a single reel of tape could be recorded over again and again. With tape, there is also less hiss and far less background noise, making for a more professional listening and playback experience in broadcast or film use. Listeners couldn’t tell the difference between live radio segments and prerecorded musical segments.

Magnetic recording and playback would also give rise to better sounding commercials, though commercial jingle producers still recorded some commercials to 78 RPM discs during that era. From 1945 until about 1982, recordings were produced almost exclusively on magnetic tape… a small preview of things to come.

While the very first vinyl record was released in 1930, this new vinyl format wouldn’t actually become viable as a prerecorded commercial product until 1948, when Columbia introduced its first 12″ 33⅓ RPM microgroove vinyl long playing (LP) record. CBS / Columbia was aided in producing this new format by the aforementioned Presto company. Considering Presto’s involvement with and innovation of its own line of lathe recorders, Columbia leaning on Presto was only natural. This Columbia LP format would replace shellac 78 RPMs in short order.

At around 23 minutes per side, the vinyl LP afforded a musical artist about 46 minutes of recording time. This format quickly became the standard for releasing new music, not only because of the format’s ~46 minutes of running time, but also because it offered far less surface noise than shellac 78s. Vinyl records were also slightly less brittle than shellac records, giving them a bit more durability.

By 1949, RCA had introduced a 7″ 45 RPM microgroove vinyl format intended for use with individual (single) songs… holding around 4-6 minutes per side. These vinyl records were still all monaural / monophonic at the time. Stereo wouldn’t become available and popular until the next decade.

Note that the Presto Recording Corporation continued to release and sell portable, professional and home lathe recorders throughout the 1940s and on into the 50s and 60s. Unfortunately, the Presto Recording Corporation closed its doors in 1965.



By the 1950s, some big audio changes were in store; changes that Walt Disney helped usher in with Fantasia in 1940. Long past the World War II weary 1940s and the Great Depression ravaged 1930s, the 1950s became a new prosperous era in American life. Along with this new prosperity, fashion rebounded and so too did the musical recording industry and the movie theater industry. Musical artists now began focusing on a new type of music: rock and roll. This new genre, in turn, demanded some changes in how music was recorded.

Because the late 1940s and early 1950s ushered in the filmed musical, many in Technicolor (and one in stereo in 1940), this led to audio advancements in theaters. Stereo radio broadcasts wouldn’t be heard until the 60s and stereo TV broadcasts wouldn’t begin until the early 80s, but stereo would become commonplace in theaters during the 1950s, particularly due to these musical features and the pressures placed on cinema by television.

Musical films like Guys and Dolls (1955) were released in stereo along with earlier releases like Thunder Bay (1953) and House of Wax (1953). Though, it seems that some big musicals, like Marilyn Monroe’s Gentlemen Prefer Blondes (1953), were not released in stereo.

This means that stereo film recording in the early 50s was haphazard and depended entirely on the film’s production. Apparently, not all film producers placed value in having stereo soundtracks for their respective films. Some blockbuster films, including several starring Marilyn Monroe, didn’t include stereo soundtracks. However, lower budget horror and suspense films did include them, probably to entice moviegoers in for the experience.

In 1957, the first stereo LP album was released, ushering in the stereophonic LP era. Additionally, by the late 1950s, most film producers began to see the value in recording stereo soundtracks for their films. No longer was it in vogue to produce mono soundtracks for films. From this point on, producers choosing to employ mono soundtracks did so out of personal choice and artistic merit, like Woody Allen.

Here’s a vinyl monophonic version of Frank Sinatra’s Too Marvelous for Words, recorded for his 1956 album Songs for Swingin’ Lovers. Notice the telltale pops and clicks of vinyl. Even the newest untouched vinyl straight from the store still had a certain amount of surface noise and pops. Songs for Swingin’ Lovers was released one year before stereo made its vinyl debut. Sinatra would still release one more mono album in 1958, his last monophonic release, entitled Only the Lonely. Sinatra may have begun recording Only the Lonely in late 1956 on monophonic equipment and likely didn’t want to release portions of the album in mono and portions in stereo. Though, he could have done this by making side 1 mono and side 2 stereo; that gimmick might have made a great introduction to the stereo format for his fans and helped sell even more copies.



This song is included to show the recording techniques being used during the 1950s and what vinyl typically sounded like.


In the 1950s, cinema had the most impact on audio reproduction and recording. Disney’s 1940 Fantasound process helped inspire the design of Cinerama, a simplified design by Fred Waller adapted from his previous, more ambitious multi-projector installations. Waller had been instrumental in creating training films for the military using multi-projector systems.

However, in addition to the 3 separate but synchronized images projected by Cinerama, the audio was also significantly changed. Like Disney’s Fantasound, Cinerama offered up multichannel audio, but in the form of 7 channels, not 4 like Fantasound. Cinerama’s audio system design likely paved the way for the modern cinema sound formats DTS, Dolby Digital and SDDS. Cinerama, however, wasn’t primarily about the sound, but about the picture on the screen. Cinerama was intended to project 3 images seamlessly across a curved widescreen (a tall order, and it didn’t always work properly). The 7 channel audio was important to the visual experience, but not as important as the 3 projectors driving the imagery across that curved screen.

Waller’s goal was to discard the old single projector ideology and replace it with a multi-projector system akin to having peripheral vision. The lenses used to capture the film images were intended to be nearly the same focal length as the human eye in an attempt to be as visually accurate as possible and give the viewer an experience as though they were actually there, though the images were still flat, not 3D.

While Waller’s intent was to create a groundbreaking projection system, the audio system employed is what withstood the test of time and what drove change in the movie and cinema sound industries. Unlike Fantasound, which used two projectors, one for visuals and one for 4 channel sound using optical tracks, Cinerama’s sound system used a magnetic strip which ran the length of the film. This magnetic strip held 6 channels of audio, with the 7th channel provided by the mono optical strip. Because Cinerama had 3 simultaneous projectors running, the Cinerama system could theoretically have supported 21 channels of audio information.

However, Cinerama settled on 7 audio channels, likely provided by the center projector. Though, information about exactly which of the three projectors provided the 7 channels of audio output is unclear. It’s also entirely possible that all 3 film reels held identical audio content for backup purposes: if one projector’s audio died, one of the other two projectors could take over. The speaker layout for the 7 channels was five speakers behind the screen (surround left, left, center, right, surround right), two speaker channels on the walls (left and right, or whatever channels the engineer fed them) and two channels in the back of the theater (again, whatever the engineer fed them). There may have been more than two speakers on the walls and in the rear, but two channels were fed to these speakers. The audio was managed manually by a sound engineer who would move the audio around the room live during the performance to enhance the effect and provide surround sound features. The 7 channels were likely as follows:

  • Left
  • Right
  • Center
  • Surround Left
  • Surround Right
  • Fill Left
  • Fill Right

Fill channels could be ambient audio like ocean noises, birds, trees rustling, etc. These ambient noises would be separately recorded and then mixed in at the appropriate time during the performance to bring more of a sense of realism to the visuals. The vast majority of the time, the speakers likely carried only the first 5 channels of audio. I don’t believe that this 7 channel audio system supported a subwoofer; subwoofers would arrive in theaters later as part of the Sensurround system in the mid 1970s. Audio systems used in Cinerama would definitely influence later systems like Sensurround.

The real problem with Cinerama wasn’t its sound system. It was, in fact, its projector system. The 3 synchronized projectors projected separately filmed, but synchronized, visual sequences. The three projected images overlapped each other by a tiny bit, and the projectors played tricks to keep those lines of overlap as unnoticeable as possible. While it mostly worked, the fact that the 3 cameras weren’t 100% perfectly aligned when filming led to problems with certain imagery on the screen. In short, Cinerama was a bear to use as a cinematographer. Very few film projects wanted to use the system due to the difficulty of filming scenes, and it was even more difficult to make sure a scene appeared proper when projected. Thus, Cinerama wasn’t widely adopted by filmmakers or theater owners. Though, the multichannel sound system was definitely something that filmmakers were interested in using.

Ramifications of Television on Cinema

As a result of the introduction of NTSC television in 1941 and TV’s wide and rapid adoption by the early 1950s, the cinema industry tried all manner of gimmicky ideas to get people back into cinema seats. These gimmicks included Cinerama. Other in-cinema gimmicks included 3D glasses, Smell-O-Vision, mechanical skeletons, rumbling seats, multichannel stereo audio and even simpler tricks like CinemaScope… which used anamorphic lenses to increase the width of the image instead of requiring multiple projectors to provide that width. The 50s were an era of endless trial-and-error cinema gimmicks in an effort to get people back into the cinema. None of these gimmicks furthered audio recording much, however.

Transition between Mono and Stereo LPs

During the 1960s, stereophonic sound would become an important asset to the recording industry. Many albums plastered the words “Stereo”, “Stereophonic” or “In Stereophonic Sound” in large print across parts of the album cover. Even the Beatles fell into this trap with a few of their albums. However, this marketing lingo was actually important at the time.

During the late 50s and into the early 60s, many albums were dual released, as a monophonic and a separate stereophonic release. The words across the front of the album were intended to tell the consumer which version they were buying. This marketing text was only needed while the industry kept releasing both versions to stores. And yes, even though the words appeared prominently on the cover, some people didn’t understand and still bought the wrong version.

Thankfully, this mono vs stereo ambiguity didn’t last very long in stores. By the mid-1960s, nearly every album released had converted to stereo, with very few being released in mono. By the 70s, mono recordings were no longer being produced, except when an artist chose mono for artistic purposes.

No longer was the consumer left wondering if they had bought the correct version, that is until the 1970s’ quadraphonic releases began… but that discussion is for the 70s. During the late 50s and early 60s, some artists were still recording in mono and some were recording in stereo. However, because many consumers didn’t yet own stereo players, record labels continued to release mono versions for consumers with monophonic equipment. It was assumed that stereo records wouldn’t play correctly on mono equipment, even though they played fine. Eventually, labels wised up and recorded the music in stereo, but mixed down to mono for some of the last monophonic releases… eventually abandoning monophonic releases altogether.



By the 1960s, big box recording studios were becoming the norm for recording bands like The Beatles, The Rolling Stones, The Beach Boys and The Who, and vocalists like Barbra Streisand. These new studios were required to produce professional, pristine stereo recordings on vinyl, which required heavy use of multitrack mixing boards. Here’s how RCA Studio B’s recording board looked when used in 1957. Most state of the art studios at the time would have used a mixing board similar to this one. The black and white picture on the wall behind this multitrack console depicts a 3 track mixing board, likely in use prior to the installation of this board.


Photo by cliff1066 under CC BY 2.0


RCA Studio B became a sort of standard for studio recording and mixing throughout the early to mid 1960s and even into the late 1970s. While this board could accept many input channels, the resulting master recording might contain as few as two tracks or as many as eight tracks through the mid-60s. It wouldn’t be until the late 60s that magnetic tape technologies would improve to allow recording 16 channels, and later 24 channels by the 1970s.

Note that many modern mixing boards in use today resemble the layout and functionality of this 1957 RCA produced board, but these newer systems support more channels as well as effects.

Microphones of the 1960s also saw major improvements once again. No longer were microphones simply utilitarian; now they were being sold for luxury sound purposes. For example, Barbra Streisand almost exclusively recorded with the Neumann M49 microphone (called the Cadillac of microphones, with a price tag to match) throughout the early 60s. In fact, this microphone became her staple. Whenever she recorded, she always requested a very specific serial number for her Neumann M49 from the rental service. She felt that this specific microphone made her voice sound great.

However, part of the recording process was not just the microphone that Barbra Streisand used. It was also the recording equipment that Columbia owned at the time. While RCA’s studios made great sounding records, Columbia’s recording system was well beyond that. Barbra’s recordings from the 60s sound like they could have been recorded today on digital equipment. To some extent, that’s true: Barbra’s original 1960s recordings have been cleaned up and restored digitally. However, you have to have an excellent product from which to start to make it sound even better.

Columbia’s recordings of Barbra in the 60s were truly exceptional. These recordings were always crystal clear. Yes, the clarity is attributable to the microphone, but also to Columbia’s high quality recording equipment, which was leaps and bounds ahead of other studios at the time. Not all recording systems were as good as Columbia’s, as evidenced by the soundtrack to the film Hello, Dolly! (1969), which Barbra recorded for 20th Century Fox. That recording is more midrangy, less warm and not at all as clear as the recordings Barbra made for Columbia Records.

There were obviously still pockets of less-than-stellar recording studios recording inferior material for film and television, even going into the 1970s.

Cassettes and 8-Tracks

During the early 1960s, and specifically in 1963, a new audio format was introduced: the Compact Cassette, otherwise known simply as the cassette tape. The cassette tape would go on to rival the vinyl record and have a commercial life of its own, which continues in diminished form to this day. Because the cassette didn’t rely on a moving stylus, there were far fewer constraints on the bass that could be laid down onto it. This meant that cassettes ultimately had better sonic capabilities than vinyl.

In 1965, the 8-track or Stereo 8 format was introduced, which became extremely popular initially for use inside vehicles. Eventually, though, the cassette tape and then the multi-changer CD would replace 8-track systems in car stereos. Today, CarPlay and similar Bluetooth systems are the norm.

The Stereo 8 Cartridge was created in 1964 by a consortium led by Bill Lear, of Lear Jet Corporation, along with Ampex, Ford Motor Company, General Motors, Motorola, and RCA Victor Records (RCA – Radio Corporation of America).

Quote Source: Wikipedia



The 1970s were punctuated by mod clothing, bell bottom jeans, Farrah Fawcett feathered hair, drive-in movies and leisure suits. Coming out of the psychedelic 1960s, these bright vibrant colors and polyester knits led to some rather colorful, but dated rock and roll music (and outfits) to go along.

Though drive-in theaters appeared as early as the 1930s, they would ultimately see their demise in the late 1970s, primarily due to urban sprawl and the rise of malls. Even so, drive-in theaters wouldn’t have lasted into the multitrack 7.1 era of rapidly improving cinema sound. There is no way to reproduce such incredible surround sound within the confines of automobiles of the era, let alone today. The best that drive-in theaters offered was either a mono speaker affixed to the window or tuning in on the car radio, which might or might not offer stereo sound (usually not). The sound capabilities afforded by indoor theaters, coupled with year-round air conditioning, led people indoors to watch films any time of the day, all year round, rather than watching movies in their cars only at night and when weather permitted; brutally cold winters don’t work well for drive-in viewing.

By the 1970s, sound recording engineers were also beginning to overcome the surface noise and sonic limitations of stereo vinyl records, making stereo records sound much better. During this era, audiophiles were born: people who always want the best audio equipment to make their vinyl records sound their absolute best. To that end, audio engineers pushed vinyl’s capabilities to its limits. Because a diamond stylus must travel through a groove to play back audio, if the audio contained too much thumping bass or volume, it could jump the needle out of the groove and cause skipping.

To avoid this turntable skipping problem, audio engineers had to tune down the bass and volume when mastering for vinyl. While audio engineers could create two masters, one for vinyl and one for cassette, that almost never happened. Most sound engineers were tasked with creating one single audio master for a musical artist, and that master was strictly geared towards vinyl. This meant that a prerecorded cassette got the same audio master as the vinyl record, instead of a unique master created for the dynamic range available on a cassette.

Additionally, cassettes came in various formulations, from ferric oxide to metal (Type I to Type IV). There were eventually four different cassette tape formulations available to consumers, all of which commercial producers could also use for commercial duplication. However, most commercial producers opted for Type I or Type II cassettes (the least costly formulations), which were available all the way through the 1970s. Type IV was metal and could produce the best sound available due to its tape formulation, but it didn’t arrive until late in the 1970s.

8-tracks could be recorded at home, but there was essentially only one tape formulation. These recorders began appearing in the 1970s for home use. It was difficult to record an 8-track tape and sometimes more difficult to find blanks. Because each tape program was limited in length, you had to make sure the audio didn’t gap over from one program to the next, or else you’d have a jarring audio experience. With audio cassettes, this was a bit easier to avoid. Because an 8-track held 4 stereo programs, each of the 4 stereo program segments was fairly short: on an 80 minute 8-track tape, that’s 20 minutes per stereo program. It ends up more complicated for the home consumer to divide their music up into four 20 minute segments than to manage a 90 minute cassette with 45 minutes on each side.
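The program-length arithmetic above is simple division, but it’s the heart of why 8-track sequencing was such a pain. A trivial sketch (the function name is made up for illustration):

```python
# Hypothetical sketch: dividing a tape's total running time across its
# "programs". An 8-track splits its loop into 4 stereo programs; a
# cassette has 2 sides.

def program_length(total_minutes: float, programs: int) -> float:
    """Minutes available per program (or side), assuming equal division."""
    return total_minutes / programs

# An 80-minute 8-track cartridge, as in the example above:
eight_track = program_length(80, 4)   # 20.0 minutes per program
# A 90-minute cassette:
cassette = program_length(90, 2)      # 45.0 minutes per side
```

Four short 20-minute buckets force many more awkward split points across an album than two 45-minute sides do, which is exactly the sequencing headache described above.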

Because a vinyl record only holds about 46 minutes, that length became the standard for musical artists until the CD arrived. Even though cassettes could hold up to 90 minutes of content, commercially prerecorded tapes held only the amount of tape needed to match the 46 minutes of content available on vinyl. In other words, musical artists didn’t offer extended releases on cassettes during the 70s and 80s. It wouldn’t be until the CD arrived that musical artists were able to extend the amount of content they could produce.

As for studio recording during the 1960s and 1970s, most studios relied on Ampex or 3M (or similar professional quality) 1/2 inch or 1 inch multitrack tape for recording musical artists in the studio. Unfortunately, many of these Ampex and 3M branded tape formulations turned out not to be archival. This led to degradation (i.e., sticky-shed syndrome) in some of these audio masters 10-20 years later. The Tron soundtrack, recorded in 1982 on Ampex tape, degraded in the 1990s to the point that the tape needed to be carefully baked in an oven to reaffix and solidify the ferric coating. After baking, there were effectively only a few limited shots at re-recording the original tape audio onto one or more new masters. Some Ampex tape audio masters may have been completely lost due to this lack of archival stability. Wendy Carlos explains in 1999 what it took to recover the masters for the 1982 Tron soundtrack.

Thankfully, cassette tape binder formulations didn’t seem to suffer from sticky-shed syndrome like some formulations of Ampex and 3M professional tape masters did. It also seems that 8-track tapes may have been immune to this problem as well.

For cinematic films made during the 1970s, almost every single film was recorded and presented in stereo. In fact, George Lucas’s Star Wars in 1977 ushered in the absolute need for stereo soundtracks in summer blockbusters, with action sequences timed to orchestral music. Musical cues timed to each visual beat have since become a staple in filmmaking. While the recording of the music itself was much the same as in the 60s, the use of orchestral music timed to visual beats became the breakthrough feature of filmmaking in the late 1970s and beyond. This musical beat system is still very much in use today in every summer blockbuster.

As for vinyl records and tapes of the 70s, surface noise and hiss were a constant problem. To counter this problem, cassettes employed Dolby noise reduction techniques almost from the start. Commercially prerecorded tapes are encoded with a specific type of noise reduction, and the player would need to be set to the same type to reduce the inherent noise. Setting a tape to the wrong noise reduction setting (or none at all) could cause the high end to be lost or, in many cases, the audio playback to distort. For tapes, the most commonly used noise reduction was Dolby B, with occasional use of Dolby C. Though, tapes could be encoded with Dolby A, B, C or S. The most commonly sold noise reduction for commercially prerecorded music cassettes was Dolby B, introduced in 1968 (following the professional Dolby A in 1965), which remained in use throughout the 70s and 80s.

For vinyl, most albums didn’t include noise reduction at all. However, starting around 1971, a relatively small number of vinyl releases were sold containing the DBX noise reduction encoding system. These discs were marked with a DBX encoded disc notation. This system, like Dolby’s tape noise reduction, requires a decoder to play back the vinyl properly. Unfortunately, no turntables or amplifiers sold, that Randocity is aware of, had a built-in DBX decoder. Instead, you had to buy a separate DBX decoder component, like the DBX model 21, and insert it inline in your stereo Hi-Fi chain of devices. DBX vinyl noise reduction was not just noise reduction, however; it also changed the audio dynamics of the recorded vinyl groove. DBX encoded discs dramatically compressed and thinned the sonics and dynamics, making listening to a DBX encoded vinyl disc without a decoder nearly impossible. The DBX decoder would expand these compressed and thinned tracks back into their original, suitably dynamic audio range.

Playing a DBX encoded vinyl disc back properly required buying a DBX decoder component (around $150-$200 on top of the cost of an amplifier, speakers and a turntable). This extra cost covered only a handful of vinyl discs, though; not really worth the investment. DBX is unlike Dolby B on tape, which, if left undecoded, still sounded relatively decent sonically. DBX encoded vinyl discs are almost impossible to listen to without a decoder, which is likely why only very few vinyl discs were released with DBX encoding. However, if you were willing to invest in a DBX decoder component, the high and low ends were said to sound much better than on a standard vinyl disc with no noise reduction. The DBX system expanded and played these dynamics better, though probably not as fully as a CD can reproduce. A DBX encoded vinyl release likely meant that a fully remastered, or at least better equalized, version of the vinyl master was produced for that specific release.
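The compress-then-expand scheme described above is a compander: the DBX disc system squeezed the signal’s dynamic range roughly 2:1 in decibels when cutting the disc, and the decoder expanded it 1:2 on playback, pushing quiet passages well above the groove’s surface noise. Here’s a minimal numerical sketch of that idea; the function names are made up, and the real DBX circuitry also used pre-emphasis and RMS level detection, which are omitted:

```python
import math

def to_db(amplitude: float) -> float:
    """Amplitude (relative to a 1.0 reference) expressed in decibels."""
    return 20.0 * math.log10(amplitude)

def from_db(db: float) -> float:
    """Decibels back to linear amplitude."""
    return 10.0 ** (db / 20.0)

def dbx_encode(amplitude: float) -> float:
    # Compress the dynamic range 2:1 in dB before cutting the groove.
    return from_db(to_db(amplitude) / 2.0)

def dbx_decode(amplitude: float) -> float:
    # Expand 1:2 in dB on playback, restoring the original range.
    return from_db(to_db(amplitude) * 2.0)

quiet = 0.001                    # a -60 dB passage
encoded = dbx_encode(quiet)      # cut at only -30 dB, above groove noise
restored = dbx_decode(encoded)   # back to -60 dB through the decoder
```

This also shows why undecoded DBX vinyl sounds so squashed: without the expander, that -60 dB passage plays back 30 dB too loud relative to the peaks.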

With that said, Dolby C and Dolby S are more like DBX in reproducing dynamics than Dolby A and B, which were strictly noise reduction, offering no dynamic enhancement. These noise reduction techniques are explained here under the 1970s because this is where they rose to their most prominent use, more so on cassettes than on vinyl. Of course, these noise reduction techniques are not needed on the CD format, which is yet to come during the 80s.

For professional audio recording, in 1978, 3M introduced the first digital multitrack recorder for professional studio use. This recorder used one inch tape for recording up to 32 tracks. However, it was priced at an astonishing $115,000 (32 tracks) or $32,000 (4 tracks), which only a professional recording studio could afford. Studios like Los Angeles’s A&M Studios, The Record Plant and Warner Bros.’ Amigo Studios all installed this 3M system.

Around 1971, IMAX was introduced. While this incredibly large screen format didn’t specifically change audio recording or drastically improve audio playback in the cinema, it did provide a much bigger screen experience which has endured to today. It’s included here for completeness for the 70s, not so much for improvements to audio recording, though it did change film requirements for filmmakers.

For advancements in cinema sound, the 1970s saw the introduction of Sensurround. There weren’t many features that supported this cinema sound system, and mostly for good reason. The gimmick primarily featured a huge rumbling, theater shaking subwoofer (or several) aimed directly at the audience from below the screen. Nevertheless, subwoofers have since become common and have endured as a constant in theaters since the introduction of Sensurround, just not to the degree of Sensurround. Like the 50s and its near endless gimmicks to drive people back into the theaters, the 1970s tried a few such gimmicks, Sensurround among them, to captivate audiences and drive people back into theaters.

Earthquake Sensurround

In case you’re really curious, a few film features supporting Sensurround were Earthquake (1974), Midway (1976), Rollercoaster (1977) and Battlestar Galactica (1978). The Sensurround experience was interesting, but the thundering, rattling subwoofer bass was, at times, more of a distraction than an addition to the film’s experience. It’s no wonder it only lasted through the 70s and why only a few filmmakers used it. Successor cinema sound systems include DTS, Dolby Digital and SDDS, while THX ensured proper sound reproduction so those rumbling, thundering bass segments could be properly heard (and felt).

Digital Audio Workstations

Let’s pause here to discuss a new audio recording methodology introduced as a result of the advent of digital audio… more or less required for producing the CD. As digital audio recorders became available in the late 70s and early 80s, and with easy to use computers now dawning, the DAW or digital audio workstation was born. While computers in the late 70s and early 80s were fairly primitive, the introduction of the Macintosh computer (1984) with its impressive and easy to use UI made building and using a DAW much easier. It’s probably a little early to discuss DAWs in a 70s section, but because they factor prominently into nearly every type of digital audio recording during the late 80s, 90s, 00s and beyond, the discussion is placed here.

Moving into the late 80s and beyond, with even easier UI based computers like the Macintosh (1984), Amiga (1985), Atari ST (1985), Windows 3 (1990) and later Windows 95 (1995), DAWs became even more available, accessible and usable by the general public. With the release of Windows 98 and newer Mac OS systems, DAW software became even more feature rich, commonplace and easy to use, ultimately targeting home musicians.

Free open source products like Audacity, first released in 2000, also became available. By 2004, Apple would include its own DAW, GarageBand, with its Mac OS X (and later iOS) operating systems. Acid Music Studio by Sonic Foundry, another home consumer DAW, was introduced in 1998 for Windows. This product would subsequently be acquired by Sony, then later sold to Magix in 2016.

Let’s talk more specifically about DAWs for a moment. The digital audio workstation was a groundbreaking improvement over editing with analog recording technologies. This visual editing system allows for much easier audio recording and editing than any system before it. With the tape recording technologies of the past, moving audio around required juggling tapes: physically cutting and splicing, re-recording and overdubbing on top of existing recordings. If done incorrectly, it could ruin the original audio with no way back. Back in the 50s, the simplest editing that could be done with analog recordings was playing games with tape speeds, splicing tape and, if the tape recorder supported it, overdubbing.

With digital audio clips in a DAW, you can pull up individual audio clips and place them visually on screen across as many tracks as needed. This means you can place drums on one track, guitars on another, bass on another and vocals on another. You can easily add sound effects to individual tracks or layer them on top with simple drag and drop mouse moves. If you don’t like the drums, you can easily swap them for an entirely new drum track or mute the drums altogether to create an acoustic type of effect. With a DAW, creative control is almost limitless when putting together audio materials. In addition, DAWs support plugins of all varying types, including both digital instruments and digital effects. They can even be used to synchronize music to videos.

DAWs are intended to allow for mixing multiple tracks down into a stereo (2 track) mix in many different formats, including MP3, AAC and even uncompressed WAV files.
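As a rough illustration of what that mixdown step does under the hood, here’s a minimal Python sketch that sums mono tracks into a stereo pair with per-track gain and pan. The names and the simple linear pan law are illustrative assumptions, not how any particular DAW implements it:

```python
# Minimal mixdown sketch. Assumption: each track is an equal-length list
# of mono samples in the -1.0..1.0 range.

def mixdown(tracks, gains, pans):
    """Sum mono tracks into a stereo (left, right) pair.

    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    """
    n = len(tracks[0])
    left = [0.0] * n
    right = [0.0] * n
    for track, gain, pan in zip(tracks, gains, pans):
        l_amt = gain * (1.0 - pan) / 2.0  # simple linear pan law
        r_amt = gain * (1.0 + pan) / 2.0
        for i, sample in enumerate(track):
            left[i] += sample * l_amt
            right[i] += sample * r_amt
    return left, right

drums = [0.5, -0.5, 0.5, -0.5]   # toy sample data
bass  = [0.2,  0.2, 0.2,  0.2]
left, right = mixdown([drums, bass], gains=[1.0, 0.8], pans=[0.0, -1.0])
```

A real DAW does this per-sample math at the audio sample rate, with effects plugins inserted into each track’s signal path before the sum, but the core idea is the same weighted summation.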

DAWs can also control external music devices, like keyboards, drum machines or any other device that supports MIDI control. DAWs can also be used to record music or audio for films, allowing for easy placement using industry standard SMPTE timecode. This allows complete synchronization of an audio track (or set of tracks) with a film’s visuals easily and, more importantly, consistently. SMPTE can even control such devices as lighting controllers to allow for live control over lighting rig automation, though some lighting rigs also support MIDI. A DAW is a very flexible and extensible piece of software used by audio recording engineers to take the hassle out of mixing and mastering musical pieces and to speed up the music creation process… it can even be used in live music situations.
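To make the timecode idea concrete, here’s a tiny sketch (the function name is mine, not part of any DAW’s API) converting a non-drop SMPTE timecode of the form HH:MM:SS:FF into an absolute frame count, which is essentially the arithmetic a DAW performs to lock an audio event to a frame of picture. Drop-frame 29.97 fps timecode needs an extra correction that’s omitted here.

```python
def smpte_to_frames(tc: str, fps: int = 24) -> int:
    """Convert non-drop SMPTE timecode 'HH:MM:SS:FF' to an absolute frame
    count at the given frame rate (24 fps is standard for film)."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    if ff >= fps:
        raise ValueError("frame field exceeds the frame rate")
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

print(smpte_to_frames("01:00:00:00"))  # one hour of film: 86400 frames
print(smpte_to_frames("00:00:01:12"))  # 36 frames
```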

While DAWs came into existence in the early 1980s for professional use, it was the 1990s and into the 2000s that saw more home consumer musician use, especially with tools like Acid Music Studio, which based its entire DAW around managing creative loops… loops being short for looped audio clips. Sonic Foundry sold a lot of prerecorded royalty free loops which users could incorporate into their own musical works. Though, if you wanted to create your own loops in Acid Music Studio using your own musical instruments, that was (and still is) entirely possible.

The point is, once the DAW became commonplace, it changed the recording industry in very substantial ways. Unfortunately, with the good comes the bad. As DAW technology improved, so too did technologies to “improve” a singer’s vocals… thus was born the dreaded and now overused autotune vocal effect. This effect is now used by many vocalists as a crutch to make their already great voices supposedly sound even better. On the flip side, it can also be used to make bad vocalists sound passable… which, in my opinion, is how it’s mostly being used these days. I don’t think autotune ever makes vocals sound better, but my opinion doesn’t matter when it comes to such recordings. With DAWs out of the way, let’s segue into another spurious 1970s audio technology topic…

Quadraphonic Vinyl Releases

In the early 1970s, just as stereo began to take hold, the JVC and RCA corporations devised Quadraphonic vinyl albums. This format expected the home consumer to buy into an all new audio system: a quad decoder amplifier, two additional speakers for a total of four and, of course, albums released in the quad format. This was a tall (and expensive) order. As people had just begun investing in somewhat expensive amplifiers and speakers to support stereo, JVC and RCA expected consumers to toss all of their existing (mostly new) equipment and invest in brand new equipment AGAIN. Let’s just say that that didn’t happen. Though, to be fair, you didn’t need to buy a whole new quad turntable. Instead, you simply needed to fit a CD-4 cartridge to your existing turntable and have an amplifier that could decode the resulting CD-4 encoded signal.

For completeness, the CD-4 system offered 4 discrete audio channels: left front, left back, right front and right back. Quad was intended to be enjoyed with four speakers placed in a square around the listener.

This quad plan expected way too much of consumers. While many record labels did adopt the format and produced perhaps hundreds of releases in quad, the format was not at all successful due to consumer reticence. The equipment was simply too costly for most consumers to justify tossing and replacing their existing HiFi equipment. Stereo remained the dominant music format and has remained so since. Though, quad’s demands did indirectly improve stereo recordings: the special stylus cartridges and the higher quality vinyl formulations needed to produce quad LPs benefited ordinary stereo pressings too.

Note also that while quad vinyl LP releases made their way into record stores in the early 1970s, no cassette version of quad ever became available. However, Q8 or quad 8-track tapes arrived as early as 1970, two years before the first vinyl release. Of course, 8-track tapes at the time were primarily used in cars… which would have meant upgrading your car audio system with two more speakers and a new decoder car player with four amplifiers, one for each speaker.

The primary thing that the quad format succeeded at doing, at least for consumers, was muddying the waters at the record store and introducing multichannel audio playback, which wouldn’t really become a home consumer “thing” until the DVD arrived in the 1990s. For a consumer shopping for albums in the 1970s, it would have been super easy to accidentally buy a quad album, take it home and then realize it doesn’t play properly. The same problem existed for Q8 tapes; though Q8 tapes had a special quad notch that may have prevented them from playing in some players. And now, onto the 1980s…



In the 1980s, we see big hair and glam rock bands, and hear new wave, synth pop and alternative music on the radio. Along with all of these, this era ushers us into the digital music era with the new Compact Disc (CD) and, of course, its players. The CD, however, would actually turn out to be a stopgap of a couple of decades for the music and film industries. While the CD is still very much in use and available today, its need is diminishing rapidly thanks to streaming music services like Apple Music. But, that discussion is for the 2010s and into the 2020s.

Until 1983, vinyl, cassettes and, to a much lesser degree, 8-track tapes were the music formats available to buy at a record store. By late 1983 and into 1984, the newfangled CD hit store shelves, though not in a major way yet. At the same time, out went 8-track tapes. While the introduction of the CD was initially aimed at the classical music genre, where the CD’s silence and dynamic range work exceedingly well to capture orchestral arrangements, pop music releases would take a bit more time to ramp up. By late 1984 and into 1985, popular music eventually began to dribble its way onto CD as record labels began re-releasing back catalog in an effort to slowly and begrudgingly embrace this new format. Bands were also embracing the new format, so new music often arrived on CD faster than back catalog did.

However, the introduction of the all digital CD upped the sound engineer’s game once again. Just as vinyl took a while for sound engineers to grasp, so too did the CD format. Because the CD’s dynamic range and frequency response far exceed vinyl’s limits, placing masters made for vinyl onto a CD made for a lower volume and a sonically unexciting, occasionally shrill music experience.

If you buy a CD made in the mid 1980s and listen to it, you can tell the master was originally crafted for a vinyl record. The sonics are usually tinny, harsh and flat with a very low volume. These vinyl masters were intended to prevent the needle from skipping and relied on some of the sonics being smoothed out and filled in by the turntable and amplifier themselves. A CD needs no such help. This meant that CD sound engineers needed to find their footing on how deep the bass could go, how high the treble could reach and how loud the master could be. Because vinyl (and the turntable itself) tended to attenuate the audio to a more manageable level, placing a vinyl master onto CD foisted all of these inherent vinyl mastering flaws onto the CD buying public. This was especially galling considering a CD was typically priced around $14.99 when vinyl records regularly sold for $5.99-$7.99. Asking a consumer to fork over almost double the price for no real benefit in audio quality was a tall order.

Instead, sound engineers needed to remix and remaster the audio to fill out the dynamics and sonics the CD could offer. However, studios at the time were cheap and wanted to sell product fast. That meant existing vinyl masters instantly made their way onto CDs, only to sound thin, shrill and harsh. In effect, it introduced the buying public to a lateral, if not inferior, product that seemed to sound about the same as vinyl. About the only masters being properly tailored for CD belonged to classical music releases. Pop artists’ older catalog titles were simply rote copied straight onto the CD format… no changes. To the pop, rock and R&B buying consumer, the CD appeared to be an expensive transition format with no real benefit.

The pop music industry more or less screwed itself with the introduction of the CD format before it even got a foothold. By the late 80s and into the early 90s, consumers began to hear the immense difference a CD could make as musical artists began recording their new material using the full dynamic range of the CD, sometimes on digital recorders. Eventually, consumers began to hear the much better sonics and dynamics of which the CD format was capable. However, during the initial 2-4 years after the CD was introduced, many labels released previous vinyl catalog onto CD sounding way less than stellar… dare I say, most of those CD releases sounded bad. Even new releases were a mixed bag depending on the audio engineer’s capabilities and equipment access.

Further, format wars always seem to ensue with new audio formats and the CD was no exception. Sony felt the need to introduce its smaller MiniDisc format in 1992, a lossy compressed format. While the CD offered straight up uncompressed 16 bit digital audio, the MiniDisc offered audio compressed with Sony’s ATRAC codec, akin to an MP3. The introduction of the MiniDisc (MD) meant that this was the first time a consumer was effectively introduced to an MP3-like device. While the compression on the MD wasn’t the same as MP3, it effectively produced the same result. In effect, you might actually say a MiniDisc player was the first pseudo MP3 player, but one using a small optical disc for its music storage.

The CD format was not dissuaded by the introduction of the MD format. If anything, many audiophile consumers didn’t like the MD precisely because it used compressed audio, sometimes making it sound worse than a CD. Though, many vinyl audiophiles also didn’t embrace the CD format, likening it to a very cold musical experience without warmth or expression. Many vinyl audiophiles preferred and even loved the warmth that a stylus brought to vinyl when dragged across a record’s groove. I was not one of these vinyl huggers, however. When a CD fades to silence, it’s 100% silent. When a vinyl record fades to silence, there’s still audible surface noise present. The silence and dynamics alone made the CD experience golden… especially when the deep bass and proper treble sonics were mixed correctly for the CD.

The MiniDisc did thrive to an extent, but only because recorders became available early in its life along with many, many players from a lot of different companies, thus ensuring price competition. That, and the MD sported an exceedingly small size when compared to carrying around a huge CD Walkman. This allowed people to record their own already purchased audio right to a MiniDisc and easily carry their music around with them in their pocket. The CD didn’t offer recordables until much, much later into the 90s, mostly after computers became commonplace and those computers needed to use CDs as data storage devices. And yes, there were also many prerecorded MiniDiscs available to buy.

During the late 70s and into the early 80s, bands began to experiment with digital recording units in studios, such as 3M’s 32 track machine, introduced in 1978. In 1982, Sony introduced its own 24 track PCM-3324 digital recorder alongside 3M’s existing unit, thus widening studio options when looking for digital multitrack recorders. This expanded the ability for artists to record their music all digitally at pretty much any studio. Onto the cinema scene…

In the early-to-mid 80s, a new theater sound standard emerged: THX, by Lucasfilm. This cinema acoustical standard is not a digital audio format; it has nothing to do with recording and everything to do with audio playback and sound reproduction in a specific sized room space. At the time, theaters were coming out of the 1970s with short lived audio technologies like Sensurround. In the 1970s, theater acoustics were still fairly primitive and not at all optimized for the large theater room space. Thus, many theater sound systems were under-designed (read: installed on the cheap) and didn’t correctly fill the room with audio, leaving the soundtrack and music, at times, hard to hear. When Star Wars: Return of the Jedi was on the cusp of being released in 1983, George Lucas took an interest in theater acoustics to ensure moviegoers could hear all of the nuanced audio as he had intended in the film. Thus, the THX certification was born.

THX is essentially a movie theater certification program ensuring that all “certified theaters” provide an optimal acoustical experience for moviegoers. Like the meticulous setup of Walt Disney’s Fantasound in 1940, George Lucas likewise wanted to ensure his theater patrons could correctly hear all of the nuances and music within Star Wars: Return of the Jedi in 1983. Thus, any theater that chose to certify itself via the THX standard had to outfit each of its auditoriums to fill the theater space correctly, acoustically, for all patrons.

However, THX is not a digital recording standard. Digital sound formats like Dolby Digital, DTS and even SDDS are all capable of playing in theaters certified for THX. THX certified theaters also play the Deep Note sound to signify that the theater is acoustically certified to present the feature film to come. In fact, even a multichannel analog system such as Fantasound, were it still available, could benefit from an acoustically certified THX theater. Further, each cinema must individually outfit every auditorium in the building to uphold the THX standard. That means the manager of a megaplex must work with THX to ensure that each auditorium adheres to the THX acoustic standard before it can be certified. THX means having the appropriate volume levels to fill the space for each channel of audio no matter where the patron chooses to sit within the theater.

CD Nomenclature

When CDs were first introduced, it became difficult to determine whether a musician’s music was recorded analog or digital. To combat this confusion, CD producers put a 3 letter code (known as the SPARS code) onto the cover to tell consumers how the music was recorded, mixed and mastered. For example, DDD meant that the music was recorded, mixed and mastered using only digital equipment. In the 1980s, that typically meant digital multitrack tape machines and digital mixing consoles rather than a DAW. Other labels you might see included:

DAD = Digital recording, Analog mixing, Digital mastering
ADD = Analog recording, Digital mixing, Digital mastering
AAD = Analog recording, Analog mixing, Digital mastering

The third letter on a CD would always be D because every CD had to be digitally mastered regardless of how it was recorded or mixed. This nomenclature has more or less dropped away today. I’m not even sure why it became that important during the 80s, but it did. It was probably included to placate audiophiles at the time. I honestly didn’t care about this nomenclature. For those who did, it was there.

Almost all back catalog releases recorded during the 70s and even some into the 80s would likely have been AAD simply because digital equipment wasn’t yet available when most 70s music would have been recorded and mixed. However, some artists did spend the money to take their original analog multitrack recordings back to an audio engineer to convert them to digital for remixing and remastering, thus making them ADD releases. This also explains why certain CD releases of some artists had longer intros, shorter outros and sometimes extended or changed content from their vinyl release.


Sony further introduced its two-track DAT (Digital Audio Tape) professional recording systems around 1987. It would be these units that would allow bands to mix down to stereo digital recordings more easily. However, Sony messed this audio format up for the home consumer market.

Fearing that consumers could create infinite perfect duplicates of DAT tapes, Sony and other manufacturers implemented the Serial Copy Management System (SCMS), which limited how many generations of digital copies could be made. Each time a tape was digitally duplicated, a marker was placed onto the copy. If a recorder detected that marker on a source tape, all recorders supporting the scheme were supposed to refuse to duplicate the tape again, limiting duplication to a single digital generation. This copy protection system all but sank Sony’s DAT system as a viable consumer alternative. Consumers didn’t understand the system, but more than this, they didn’t want to be limited by Sony’s stupidity. Thus, DAT was dead as a home consumer technology.
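The generation-limiting logic can be sketched roughly like this. This is a hedged illustration only: the real SCMS encodes copy status as bits in the recording’s subcode, and the names below are mine, not anything from an actual DAT recorder’s firmware.

```python
from enum import Enum

class CopyStatus(Enum):
    UNRESTRICTED = 0   # e.g. your own recording: copy freely
    ORIGINAL = 1       # protected original: one generation of copies allowed
    COPY = 2           # already a first-generation copy: no further digital copies

def duplicate(source_status: CopyStatus) -> CopyStatus:
    """Return the status flag written onto the new tape, or raise if the
    recorder must refuse the digital dub."""
    if source_status is CopyStatus.UNRESTRICTED:
        return CopyStatus.UNRESTRICTED
    if source_status is CopyStatus.ORIGINAL:
        return CopyStatus.COPY  # the dub is allowed, but gets marked as a copy
    raise PermissionError("SCMS: source is already a copy; digital dub refused")
```

The key point is visible in the three states: a copy of an original is permitted, but a copy of a copy is not, which is exactly the one-generation limit consumers resented.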

This was at a time when the MiniDisc drew no comparable consumer backlash over duplication limits. Sony’s DAT format silently died while the MiniDisc continued to thrive throughout the 1990s. Though, to be fair, the MD’s lossy compression would eventually turn duplicated music into unintelligible garbage after a fair number of recompression dupes. The DAT system utilized uncompressed audio where the MD did not.

The stupidity of Sony’s approach was that it and other manufacturers also sold semi-professional and professional DAT equipment. The “professional” gear was not subject to this limited duplication system. Anyone who wanted an unrestricted DAT recorder could simply buy up to semi-professional gear from any manufacturer, like Fostex, where no such copy protection scheme was enforced. By the time these other manufacturers’ gear became available, consumers no longer cared about the format.

A secondary problem with the DAT format was that it used helical scanning head technology, similar to the heads used in a VHS or Betamax video system. These heads spin rapidly and can go out of alignment easily. As a result, a DAT car stereo system was likely not feasible long term. Meaning, if you hit a bump, the spinning head might shift alignment and then need readjusting. Enough bumps and the whole unit might need to be fully realigned. Even the heat of scorching summer days might damage a DAT system.

Worse, helical scanning systems get dirty quickly, on top of their alignment problems. This meant regularly cleaning these units with a specially designed cleaning tape. Many DAT recorders would stop working altogether until you ran a cleaning tape through the unit, which would reset the cleaning counter and allow the unit to function again until it needed another cleaning. Alignment problems also didn’t help the format. Head alignment is critical between two different units: a recording made on one DAT unit might not play back on another. While you could always play back DATs recorded in your own unit, a friend whose machine was aligned differently might not be able to play your tapes at all, suffering digital noise, static, long dropouts or silence on playback. CDs and MDs didn’t suffer from this alignment problem.

DAT was not an optimal technology for sharing audio or for use outside of the home. Though, some bootleggers did adopt the portable DAT recorder for bootlegging concerts. That’s pretty much no longer needed, with smartphones now taking the place of such digital recorders.

Though, Sony would more than make up for DAT’s failure as a home audio format after the computer industry adopted the DAT tape (as the DDS format) as an enterprise backup solution. Once DAT tape changers and libraries became common, DATs became a staple in many computer centers. All was not lost for Sony in this format. DAT simply didn’t get used for its original intended purpose as a home consumer digital audio format. Though, it did (and does) have a cult audiophile and bootleg following.



By the 1990s, the CD had quickly become the new staple of the music industry (over vinyl and cassettes). It was so successful that it caused the music industry to stop producing vinyl records almost entirely, long before vinyl’s recent resurgence in the 2010s for completely different reasons. Cassettes and 8-track tapes also went the way of the dinosaurs. Though 8-tracks had been more or less gone from stores by 1983, the prerecorded cassette continued to limp along into the early 90s. Even newer digital audio technologies and formats were on the horizon, but they wouldn’t make their way into consumers’ hands until the late 1990s.

Throughout the 1990s, the CD remained the primary digital audio format of choice for commercial prerecorded music. By 1995, you could even record your own audio CDs using blanks, thanks to the burgeoning computer industry. This meant that you could now copy an audio CD, convert all of the audio tracks from a CD into MP3s (called ripping) and/or make an MP3 CD, which some later CD players could play. And yes, there were even MiniDisc car stereos available later in the decade. The rise of the USB drive also gave life to MP3s: you could easily carry far more music from place to place and from computer to computer than could be held on a single CD. The MP3’s portability and downloadability, along with the Internet, gave rise to music downloading and sharing sites like Napster.

Though MP3 CDs could be played in some CD players, this format didn’t really take off as a standard. This left players primarily using the audio CD as the means of playing music while in a car, and thus multi-CD car changers were born. The car stereo models that supported MP3 formatted CDs would have an ‘MP3’ label printed on the front bezel near the slot where you insert a CD. No label meant MP3s were not supported. The rise of separate MP3 players pushed car manufacturers toward auxiliary input jacks, which began as a replacement for clumsy cassette adapters. If the car stereo had only a cassette player, you would need a cassette adapter to plug in your 3.5mm jack equipped MP3 player. Eventually, car players would adopt the Bluetooth standard so that wireless playback could be achieved with smartphones, but the full usefulness of that technology wouldn’t become common until many years after the 1990s. However, Chrysler took a chance and integrated its own Bluetooth UConnect system into one of its cars as early as 1999! Talk about jumping on board early!?!

Throughout the 1990s, record stores were also still very much common places to shop and buy audio CDs. By the late 1990s, the DVD with its multichannel audio had also become common. Even big box electronics retailers tried to get into the DVD act, with Circuit City banking on its new DIVX rental/purchase format, which mostly disappeared within a year of introduction. Big box record stores were still everywhere: Blockbuster Music, Virgin Megastore, Tower Records, Sound Warehouse, Sam Goody, Suncoast, Peaches, Borders and so on. Blockbuster’s video rental stores would eventually become defunct as VHS gave way to DVD, and rentals in turn gave way to digital streaming around the time of Blu-ray. Some blame Netflix for Blockbuster’s demise when it was, in fact, Redbox’s $1 rentals that did in Blockbuster Video stores, which were still charging $5-6 for a rental at the time of their demise.

By 1999, Diamond had introduced the Rio MP3 player. Around that same time, Napster, a music sharing service, was born. The Diamond Rio was the first actual MP3 player placed onto the market, not counting Sony’s MD players. It was a product that mirrored the consumer appetite for digital music downloads, which Napster afforded. I won’t get into the nitty gritty legal details, but a battle ensued between Napster and the music industry, and again between Diamond (for its Rio player) and the music industry. These two lawsuits were more or less settled. Diamond prevailed, which left the Rio player on the market and allowed subsequent MP3 players to come to market, further paving the way for Apple’s very own iPod player a few years later. Unfortunately, Napster lost its battle, which left Napster mostly out of business and without much of a future unless it chose to reinvent itself or perish.

Without Diamond paving the legal way for MP3 players in 1999, Apple wouldn’t have been able to benefit from this legal precedent with its first iPod, released in 2001. Napster’s loss also paved the way for Apple to succeed by doing digital music distribution right: getting permission from the music industry first… which Napster failed to do and was not willing to do initially. If only Napster had had the foresight to loop in the music industry instead of alienating it.

As for recordings made during the 90s, read the DAW section above for more details on what a DAW is and how most audio recording worked during the 90s. Going into the early 90s, traditional recording methods may still have been employed, but they were quickly replaced by computer based DAW systems as Windows 98, Mac OS and other operating systems made a DAW quick and easy to install and operate. Not only is a DAW now used to record all commercial music, it is also used to prerecord audio for movie and TV productions. Live audio productions might even use a DAW to add effects while performing live.

Some commercial DAW systems like Pro Tools sport the ability to control a physical mixing board. With Pro Tools, for example, the DAW shows a virtual mixing board identical to the physical board attached. When the virtual controls are moved, the knobs rotate and the sliders move on the attached (and quite expensive) physical mixing board. While the Pro Tools demo was quite impressive, both Pro Tools and the supported mixing board were very expensive to buy; it was mostly a novelty. When recording a specific song with live musicians, such an automated system handling a physical board might be great for making sure all of the musical parts come across professionally without a sound engineer sitting there tweaking all of the controls manually. Still, watching automation software move the sliders and knobs is cool, but it was way overpriced and not very practical.

To be fair, though, Pro Tools was originally released at the end of 1989, but I’m still considering it a 1990s product as it would take until the mid-90s to mature into a useful DAW. Cubase, a rival DAW product, actually released earlier in 1989 than Pro Tools did. Both products are mostly equivalent in features, with the exception that Pro Tools can control a physical mixing board where Cubase, at least at the time I tested it, could not.

As for cinema sound, 1990 ushered in a new digital format: Cinema Digital Sound (CDS). Unfortunately, CDS had a fatal flaw that left some movie theaters in the lurch when presenting. Because CDS replaced the analog optical audio track on the film with digital 5.1 sound data (left, right, center, surround left, surround right and low frequency effects), it left the feature (and the format) without any sound at all if that digital data were damaged. As a result, Dolby Digital (1992) and Digital Theater Systems (DTS, 1993) quickly became the preferred formats for presenting films with digital sound to audiences. Dolby Digital and DTS used alternative placement for the film’s digital tracks, leaving the analog optical track available as backup audio “just in case”. For completeness, Sony’s SDDS uses alternative placement as well.

According to Wikipedia:

…unlike those formats [Dolby Digital and DTS], there was no analog optical backup in 35 mm and no magnetic backup in 70 mm, meaning that if the digital information were damaged in some way, there would be no sound at all.

CDS was quickly superseded by Digital Theatre Systems (DTS) and Dolby Digital formats.

Source: Wikipedia

However, Sony (owner of Columbia Pictures) always prefers to create its own formats so that it doesn’t have to rely on or license technology from third parties (see: Blu-ray). As a result, Sony created Sony Dynamic Digital Sound (SDDS), which saw its first film release with 1993’s Last Action Hero. However, DTS and Dolby Digital remained the digital track systems of choice at the time when a film was not released by Sony. Sony also typically charged a mint to license its technologies, so producers would opt for systems that cost less whenever the film wasn’t being released by a Sony owned studio. And because Sony owned rival film studios, many non-Sony studios didn’t want to embrace Sony’s inventions, choosing Dolby Digital or DTS over Sony’s SDDS.

Wall of Sound and Loudness Wars

Sometime in the late 1990s, sound engineers using a DAW began to get a handle on properly remastering older 80s music. This is about the time the Volume War (aka Loudness War) began. Sound engineers began using dynamic range compression tools and add-ons, like iZotope’s Ozone, to push audio volumes ever higher while remaining under the CD’s maximum digital level to prevent noticeable clipping. These remastering tools meant, at least for the subsequent remastered audio, much louder sound output than before compression was added.

Such remastering tools have been a tremendous boon to audio and artists, though Ozone itself didn’t arrive until the early 2000s. Thus, we’re jumping ahead a little. Prior to such tools, Cubase and Pro Tools already offered built-in compressors affording similar audio compression to iZotope Ozone, but requiring more manual tweaking and complexity. These built-in tools have likely existed in those products since the mid 1990s.

The Wall of Sound idea, as used here, means pushing a track’s volume to the point where nearly every moment of the track sits at the same level. It makes a track difficult to listen to, produces major ear fatigue and is generally an unpleasant sonic experience for the listener. Some engineers have pushed compression way too far on some releases. CDs afford an impressive dynamic range, from the softest whisper to the loudest shout, and those dynamics can be put to tremendous artistic use. When heavy compression is used on pop music, all of those dynamics are lost… replaced instead by a wall of sound that never ends. Many rock and pop tracks fall into this category, only made worse by tin eared, inexperienced sound engineers with no finesse over a track’s dynamics. Sometimes it’s the band requesting the remaster and giving explicit instructions; sometimes it’s left up to the sound engineer to create what sounds best. Either way, a Wall of Sound is never a good idea.
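To see in numbers why “louder” destroys dynamics, here’s a toy sketch in pure Python. It’s the crude essence of loudness maximization, not any engineer’s actual mastering chain, and the names are invented for illustration. A quiet verse and a loud chorus start about 24 dB apart; after heavy gain into a hard limiter, barely 6 dB of that difference survives.

```python
import math

def hard_limit(samples, gain):
    """Apply makeup gain, then clamp every sample to full scale (+/-1.0).
    Peaks that once stood out get flattened against the digital ceiling."""
    return [max(-1.0, min(1.0, s * gain)) for s in samples]

def level_db(samples):
    """Average (RMS) level of a block of samples, in decibels."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

verse = [0.05] * 100   # quiet passage
chorus = [0.8] * 100   # loud passage

# Dynamic contrast before and after crude loudness maximization
before = level_db(chorus) - level_db(verse)
after = level_db(hard_limit(chorus, 10)) - level_db(hard_limit(verse, 10))
print(round(before, 1), round(after, 1))  # 24.1 6.0
```

A real mastering limiter shapes gain reduction with attack and release times rather than clamping samples outright, but the trade-off it makes is the same: overall loudness goes up, and the contrast between quiet and loud passages goes down.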

These new remastering tools reinvigorated the process of remastering those old crappy-sounding, vinyl-mastered 1980s CD releases… finally giving that music the sound quality treatment it should have had when those CDs originally released in the 1980s. That, and record labels needed yet more cash to continue operating.

These remastering efforts, unfortunately, left a problem for consumers. Because the CD releases mostly look identical, you can’t tell whether what you’re buying (particularly when buying used) is the original 1980s release or the updated, remastered new release. You’d need to read the dates printed on the CD case to know whether it was pressed in the 1980s or in the late 1990s. Even then, this vinyl-master CD pressing problem continued into the early 1990s; it wouldn’t be until the late 1990s or into the 2000s that remastering efforts really began in earnest. This meant you couldn’t assume a 1993 CD release of a 1980s album was remastered.

The only ways to know whether a CD is remastered are 1) buying it new and seeing a sticker making this remastering claim and 2) listening to it. Even then, some older CDs only got very minimal sound improvements (usually only volume) when remastered over their 1980s CD release. Many remasters didn’t improve the bottom or top ends of the dynamics of the music and only focused on volume… which only served to make that tinny vinyl master even louder. The Cars’ 1984 release, Heartbeat City, is a good example of this problem. The original release on CD had thin, tinny audio, clearly indicative that the music was originally mastered to accommodate vinyl. The 1990s and 2000s remasters only served to improve the volume, but left the music dynamics shallow, thin and tinny, with no bottom end at all… basically leaving the original vinyl master’s sound almost wholly intact.

A sound engineer really needed to spend some quality time with the original material (preferably the original multitrack master), bringing out the bottom end of the drums, bass and keyboards while bringing the vocals front and center. If remastered correctly, that album (and many other 1980s albums) could sound like it was recorded on modern equipment from at least the 2000s, if not the 2010s or beyond. On the flip side, Barbra Streisand’s 1960s albums were fully digitally remastered by John Arrias, who was able to reveal incredible sonics. Barbra’s vocals are fully crisp and entirely clear alongside the music backing tracks. The remixing and remastering of many rock and pop bands, by contrast, was ofttimes handed to ham-fisted, so-called sound engineers with no digital mastering experience at all.

Where in the 1930s, it was about simply getting a recording down to a shellac 78 rpm record, in the 90s for new music, it was all about pumping up the sub-bass and making the CD as loud as possible. All of this in the later 90s was made possible by digital editing using a DAW.


Seeing as this is an article about The Evolution of Sound, this article would be remiss if it didn’t discuss and describe the MP3 format’s contribution to audio evolution. The MP3 format, or more specifically, lossy compression, was invented by Karlheinz Brandenburg, a mathematician and electrical engineer working in conjunction with various people at Fraunhofer IIS. While Karlheinz has received numerous awards for this so-called audio technological improvement, one has to wonder whether the MP3 really was an improvement to audio. Let’s dive deeper.

What exactly is lossy compression? Lossy compression is an algorithmic technique by which a mathematical encoder takes in a stream of uncompressed digital audio and discards the portions it deems extraneous, unnecessary or perceptually redundant, keeping only a compact description of the rest. When the decoder plays back the resulting compressed audio file, it reconstructs the audio on-the-fly from that encoded data into a form intended to be indistinguishable from the original uncompressed audio. The idea here is to produce audio so aurally similar to the uncompressed audio that the ears cannot distinguish a difference from the original uncompressed audio content. That’s the theory, but unfortunately this format isn’t 100% perfect.
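To make the idea concrete, here is a toy transform coder in Python that mimics the principle (though none of the actual machinery) of lossy audio compression: transform the signal into the frequency domain, throw away the components too quiet to matter, and reconstruct from what remains. Real MP3 encoding uses a polyphase filter bank, an MDCT and a psychoacoustic model; the naive DFT and the arbitrary 10% threshold below are purely illustrative.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(N^2), fine for a demo)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning real samples."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def toy_lossy_encode(samples, keep_fraction=0.1):
    """Zero every frequency bin quieter than keep_fraction of the loudest bin."""
    spectrum = dft(samples)
    threshold = keep_fraction * max(abs(c) for c in spectrum)
    return [c if abs(c) >= threshold else 0j for c in spectrum]

# A loud fundamental plus a faint overtone roughly 26 dB down.
N = 64
signal = [math.sin(2 * math.pi * 3 * n / N) + 0.05 * math.sin(2 * math.pi * 17 * n / N)
          for n in range(N)]

compressed = toy_lossy_encode(signal)
reconstructed = idft(compressed)

kept = sum(1 for c in compressed if c != 0)
error = max(abs(a - b) for a, b in zip(signal, reconstructed))
print(f"kept {kept}/{N} bins; max reconstruction error = {error:.3f}")
```

In this run only the two bins carrying the loud fundamental survive; the faint overtone is discarded entirely, which is exactly the kind of quiet content an aggressive lossy encoder sacrifices.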

Unfortunately, not all audio is amenable to being compressed in such a way. For example, MP3 struggles to reproduce low-volume content without introducing noticeable audible artifacting. Instead of hearing only the audio as expected, the decoder also introduces a digital whine… the equivalent of analog static or white noise. Because pop, rock, R&B and country music rely on guitars, bass and drums, keeping the volumes mostly consistent throughout the track, the MP3 format works perfectly fine for these genres. For orchestral music with low-volume passages, the MP3 format isn’t always the best choice.

Some of this digital whine can be overcome by increasing the bit rate of the resulting file. For example, many MP3s are compressed at 128 kilobits per second (kbps). However, this bit rate can be increased to 320 kbps, thus reducing digital whine and increasing the overall sound fidelity. The problem with increasing bit rates is that it also increases the resulting size of the file. A 320 kbps MP3 is still roughly a quarter of the size of an uncompressed .WAV file (CD audio streams at about 1,411 kbps), but the size advantage shrinks considerably. At some point, why suffer possible audio artifacts using the MP3 format when you can simply store uncompressed audio and avoid this?
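The bit-rate-versus-size tradeoff is simple arithmetic, sketched below (assuming constant-bit-rate MP3s and CD-quality PCM at 44.1 kHz, 16-bit, stereo):

```python
def mp3_size_mb(seconds, kbps):
    """Size of a constant-bit-rate MP3: kilobits/sec * seconds -> megabytes."""
    return kbps * 1000 * seconds / 8 / 1_000_000

def wav_size_mb(seconds, rate=44_100, bits=16, channels=2):
    """Size of uncompressed PCM audio (CD quality by default)."""
    return rate * bits // 8 * channels * seconds / 1_000_000

track = 4 * 60  # a four-minute track
print(f"WAV:      {wav_size_mb(track):6.1f} MB")
print(f"MP3 320k: {mp3_size_mb(track, 320):6.1f} MB")
print(f"MP3 128k: {mp3_size_mb(track, 128):6.1f} MB")
```

For a four-minute track this works out to roughly 42 MB uncompressed versus about 9.6 MB at 320 kbps and 3.8 MB at 128 kbps, so even a high-bit-rate MP3 retains better than a 4:1 size advantage.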

Let’s understand why MP3s were needed throughout the 1990s. Around 1989-1990, a 1 GB SCSI hard drive might cost you around $1000 or more. Considering that a CD holds around 700 megabytes, you could extract the contents of about 1.5 CDs onto a 1 GB hard drive. If you MP3-compressed those same CD tracks, that same 1 GB hard drive might be able to hold 8-10 (or more) CDs’ worth of MP3s. As the 1990s progressed, hard drive sizes would increase and prices would decrease, eventually making both SCSI and IDE drives far more affordable. It wouldn’t be until 2007 that the first 1 TB drive launched. From 1990 through to 2007, hard drive sizes were not amenable to storing tons of uncompressed audio wave files. To a lesser degree, we’re still affected by storage sizes even today, making compressed audio still necessary, particularly when storing audio on smartphones. But we’re getting too far ahead.

Because of the small storage capacities of hard drives throughout the 1990s, much smaller audio files were necessary; thus the MP3 was born. You might be asking, “Well, what about the MiniDisc?” It is true that Sony’s MiniDisc format also used a compressed format. Sony, however, like it always does, devised its own compression technique called ATRAC. This compression format is not unlike MP3 in terms of its design. Exactly how Sony’s ATRAC algorithm works is unknown to this author because it is a proprietary format. Because of ATRAC’s proprietary nature, this article will not speculate on how Sony came about creating it. Suffice it to say that Sony’s ATRAC arrived two years after the MP3 format’s initial release in 1991. Read into that what you will.

As for the advancement of audio in the MP3 format, lossy compression has really set back audio quality. While the CD format sought to improve on audio and did so by making tremendous strides with its near-silent noise floor, the MP3 only sought to make audio “sound” the same as an uncompressed CD track, with the word “sound” being the key to MP3. While MP3 did mostly achieve this goal with most musical genres, the format doesn’t work for all music and all musical styles. Specifically, certain electronic music with sawtooth or square waveforms can suffer. Certain passages of very low volume can also suffer under MP3’s clutches. It’s most definitely not a perfect solution, but MP3 solved one big problem: reducing file sizes down to fit on the small data storage products available at the time.

Data Compression vs Audio Compression

Note that the compression discussed above regarding the MP3 format is wholly different from the audio compression used to increase volumes and reduce clipping when remastering a track. MP3 compression is strictly a form of data compression, albeit data compression designed specifically for audio. The volume compression used in remastering (see Loudness Wars) is not a form of data compression at all. It is a form of dynamic range compression and limiting: it seeks to raise the volume of most of a track, but compresses down (or lowers the volume of) only the peaks that would otherwise reach above the volume ceiling of the audio media.

Remastering (music production) audio compression is intended to increase the overall volume of the audio without introducing audio clipping (the clicking and static heard if audio volumes increase above the audio volume ceiling). In other words, remastering compression is almost solely intended to increase volumes without introducing unwanted noises. The MP3 compression described above is solely intended to reduce the storage sizes of audio files on disc while maintaining fidelity as a reasonably close facsimile of the original uncompressed audio. Note that while audio compression techniques began in the 1930s to support radio broadcasts, the MP3 format was created in the 1990s. While both of these techniques improved during the 1990s, they are entirely separate inventions used in entirely separate ways.
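The distinction can be seen in a few lines of Python: a bare-bones hard limiter of the remastering variety changes sample values but not the sample count, so the stored data size is untouched. (Real limiters use attack and release envelopes rather than this blunt clamp; the gain and ceiling values here are arbitrary.)

```python
def limit(samples, gain=2.0, ceiling=1.0):
    """Boost overall volume, then clamp (hard-limit) peaks at the ceiling.

    The sample count -- and thus the stored data size -- is unchanged;
    only the dynamics are altered.
    """
    out = []
    for s in samples:
        boosted = s * gain
        out.append(max(-ceiling, min(ceiling, boosted)))
    return out

quiet_and_loud = [0.1, -0.2, 0.4, -0.7, 0.9, -0.3]
louder = limit(quiet_and_loud)
print(louder)  # quiet samples doubled, peaks pinned at the ceiling
```

Push the gain high enough and every sample lands on the ceiling: that is the Wall of Sound from the earlier section, expressed in six lines.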

For the reasons described in this section, I actually question the long-term viability of the MP3 format once storage sizes become sufficiently large that uncompressed audio is the norm. MP3 wasn’t designed to improve audio fidelity at all. It was designed solely to reduce the storage sizes of audio files.



The 2000s opened with the turn of the millennium and all of the computer problems that went along with it. With the near fall of Napster and the rise of Apple (again), the 2000s are punctuated by smartphone devices, the iPod and various ever smaller and lighter laptops.

At this point, I’d also be remiss in not discussing the rise of the video game console which has now become a new form of storytelling, like interactive cinema in its own right. These games also require audio recordings, but because they’re computer programs, they rely entirely on digital audio to operate. Thus, the importance of using a DAW to create waveforms for video games.

Additionally, the rise of digital audio and video in cinemas further pushes the envelope for audio recording. Instead of needing a massive mixing board, audio can be recorded in smaller segments into audio files, and those files are then “mixed” together in a DAW by a sound engineer, who can play all of the waveforms back simultaneously in a mixed format. Because the sound files can be moved around on the fly, the timing can be changed; clips can be added, removed, turned up or down, have effects applied, run backwards, sped up, slowed down or even duplicated multiple times to create unusual echo effects and new sounds. With video games, this can be done by the software while running live. Instead of baking music down into a single premade track, video games can live-mix many audio clips, effects and sounds into a whole continuous composition at the time the game plays. For example, if you enter a cave environment in a game, the developers are likely to apply some form of reverb onto the sound effects of walking and combat to mimic the sound you might experience inside a cave. Once you leave the cave, that reverb effect goes away.
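A game engine’s live mixing can be sketched in miniature: sum the active clips sample-by-sample, then apply an environment-dependent effect. Everything here (the clip data, the one-tap echo standing in for reverb, the “cave” check) is a hypothetical simplification; real engines use DSP graphs and convolution reverbs.

```python
def echo(samples, delay=4, decay=0.5):
    """A crude one-tap echo: add a delayed, attenuated copy of the dry signal."""
    out = list(samples)
    for i in range(delay, len(out)):
        out[i] += decay * samples[i - delay]
    return out

def mix(clips):
    """Sum several clips sample-by-sample, padding shorter clips with silence."""
    length = max(len(c) for c in clips)
    return [sum(c[i] for c in clips if i < len(c)) for i in range(length)]

def render(clips, environment):
    """Mix live, applying the reverb stand-in only while the player is in a cave."""
    mixed = mix(clips)
    return echo(mixed) if environment == "cave" else mixed

footsteps = [0.5, 0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0]
sword     = [0.0, 0.8, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

outside = render([footsteps, sword], "field")
inside  = render([footsteps, sword], "cave")
```

The same clips produce different output depending on where the player is standing, which is the essence of mixing at play time rather than baking down a fixed track.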

The flexibility of sound creation in a DAW is fairly astounding, particularly when a sound engineer is doing all of this on a small laptop on location or when connected to their desk system at an office. Even more astounding is a video game console live-mixing tracks into the gameplay on the fly, or a laptop on a movie set playing back fresh recordings instantly with effects and processing applied.

In the 2000s, these easy-to-use and affordable DAW software systems opened the door to home musicians and professionals alike. This affordability put DAW systems within the reach of small musicians looking to create professional-sounding tracks on a limited budget. As long as home musicians were studious in learning the DAW software, they could now produce tracks that rivaled those professionally recorded, mixed and mastered at an expensive studio.

While the industry of the 1930s wanted to give home users a simple way to record audio at home, this was actually achieved once DAWs like Acid Music Studio arrived and could be easily run on a laptop with minimal requirements.

Not only were DAWs incredibly important to the recording industry, but so too were small portable recording devices like the Zoom H1n. These handheld devices sport two built-in microphones and are battery operated. The H1n records two tracks simultaneously onto an SD card in various digital audio formats, either from its built-in stereo microphones or from an external input (Zoom’s larger models, such as the H4n, add four-track recording). While these handheld units are not technically a DAW, they do offer a few minimal built-in DAW-like tools. Additionally, the resulting audio files produced by an H1n can be imported into a DAW and used in any audio mix.

These audio recorders are incredibly flexible and can be used in a myriad of environments to capture audio clips. For capturing ambient background effects on the go, such as sirens, running water, falling rain or honking cars, this handheld recorder is perfect. The resulting audio files from the built-in microphones are always incredibly crisp and clear, but you must remain perfectly silent to avoid having distracting noises picked up by the incredibly sensitive microphones.

There have been a number of Zoom handy recorder products, including models going back to 2007. The H1n is one of its newest models, but each of these Zoom recorders works almost identically to the earlier models in its recording capabilities.

iPhone, iPod, iPad and Mac OS X

This article would be remiss if it failed to discuss the impact the iPod, iPad and iPhone have had on various industries. However, one industry it has had very little impact on is the sound recording industry. While the iPad and iPhone do sport microphones, these microphones are not high quality. Meaning, you wouldn’t want to use the microphones built into these devices for attempting to capture professional audio.

These included microphones work fine for talking on the phone, using FaceTime or for purposes where the quality of the audio coming through the microphone is unimportant. As a sound designer, you wouldn’t want to use the built-in microphone for recording professional audio. With that said, the iPad does sport the ability to input audio channels via its Lightning or USB-C port for recording into GarageBand (or possibly other DAWs) available on iOS, but that requires hooking up an external device.

Thus, these devices, while useful for their apps, games and other mostly fun uses, are not intended for recording professional audio content unless external audio input devices are connected.

A MacBook is much more useful for audio recording because it typically has several ports which can host audio input or output devices: mixing boards supporting multiple audio inputs, a device like the Zoom H1n, MIDI-controlled equipment, or possibly all of the above. You can even attach extensive storage space for the resulting recorded audio files, unlike an iPad or iPhone, which don’t really have these large storage options available.

While the iPad and iPhone are groundbreaking devices in some areas, audio recording, mixing and mastering is not one of those areas… that’s also because of the limited storage space on these devices combined with their lack of high quality microphones. Apple has contributed very little to the improvement and ease of professional digital audio recording with its small handheld devices. The exception here is Apple’s MacBooks and Mac OS X, when using software like GarageBand, Audacity or Cubase… software that’s not easily used on an iPhone or iPad.

Let’s talk about the iPod here, but last. The iPod arrived in Apple’s inventory in 2001, long before the iPad or iPhone, and was intended to be Apple’s claim to fame… and for a time, it was. It set the tone for the future of the iPhone, the iPad and even Apple Music. It was small enough to carry, but had large enough storage capacity to hold a very large library of music while on the go. The iPod, however, didn’t really change audio recording much. It did slightly improve playback quality with its adoption of AAC. While AAC encoding did improve audio quality and clarity over MP3 to a degree, the improvements were mostly negligible to the ears over a properly created MP3. What AAC did for Apple, more than anything, was serve as the container for a copy protection system (FairPlay) that prevented users from easily pirating music purchased in Apple’s protected AAC format. MP3 didn’t (and still doesn’t) offer these copy protections.

AAC ultimately became Apple’s way of enticing the music industry to sign onto the Apple iTunes store as it gave music producers peace of mind knowing that iPod users couldn’t easily copy and pirate music stored in that format. For audio consumers, the perceived enhanced quality is what got some consumers to buy into Apple’s iTunes marketplace. Though, AAC was really more about placating music industry executives than about enticing consumers.



The 2010s are mostly more of the same coming out of the 2000s, with one exception: digital streaming services. By the 2010s, content delivery is quickly moving from physical media towards sales of digital products over the Internet via downloads. In some cases with video games, you don’t even get a digital copy; instead, the software runs remotely, with the only pieces pumped to your system being video and audio. With streaming music and video services, that’s entirely how they work. You never own a copy of the work. You only get to view that content “on demand”.

By this point in the 2010s, DVDs, Blu-rays and other physical media formats are quickly becoming obsolete, primarily due to the conversion to streaming and digital download services. Even video games are not immune to this digital purchase conversion. This means that big box retailers of the past, housing shelves and shelves of physically packaged audio CDs, are quickly disappearing. These brick and mortar stores are being replaced by digital services (Apple Music, Netflix, Redbox, Hulu, iTunes and Amazon Prime), yes, even for video games with services like Sony’s PlayStation Now (2014) and Microsoft’s Game Pass (2017). Though, it can be said that Valve’s Steam began this video game digital evolution back in 2003. Sony also decided to invest even more in its own game streaming and download platform in 2022, primarily in competition with Game Pass, with its facelift to PlayStation Plus Extra.

As just stated above, we are now well underway in converting from physical media to digital downloads and digital streaming. Physical media is quickly becoming obsolete, along with the retailers who formerly sold those physical media products… thus many of these retailers have closed their doors (or are in the process), including Circuit City, Fry’s Electronics, Federated, Borders / Waldenbooks and Incredible Universe. Other retailers, like Barnes and Noble and Best Buy, are still hanging on by a thread. Because Best Buy also sells appliances, such as washers and dryers, along with large screen TVs, it is somewhat diversified and not fully reliant on physical media sales. Still, it remains to be seen whether Best Buy can survive once consumers switch entirely to digital goods and Blu-rays are no longer sold. Barnes and Noble is in a more questionable position because it has few tangible goods other than books and must rely primarily on physical book sales to stay afloat. GameStop is in this same situation with physical video games, though it survives primarily by selling used consoles and used games.

Technological improvements in this decade include faster computers, but not necessarily better computers, as well as somewhat faster Internet, though networking speed is entirely relative to where you live. While CPUs improve in speed, the operating systems seem to get more bloated and buggier… including iOS, Mac OS X and even Windows. Thus, while the CPUs and GPUs get faster, all of that performance is soaked up almost instantly by the extra bloatware installed by Apple, Microsoft and Google’s Android… making an investment in new hardware almost pointless.

Audio recording during this decade doesn’t really grow as much as one would hope. That’s mainly due to services like Apple Music, Amazon Music, Pandora, Tidal and, yes, even Napster. After Napster more or less lost its case against the music industry in 1999, it was forced to effectively change or die. Napster reinvented itself as a subscription service, which finally allowed it to get the blessing of, and pay royalties to, the very industry to which it had lost its legal file sharing battle. Musical artists are now creating music that sells only because they have a fan base, not because the music actually has artistic merit.

As for Napster, it all gets more convoluted. From 1999 to 2009, Napster continued to exist and grow its music subscription service. In 2009, Best Buy wanted a music subscription service for its brand and bought Napster. A couple of years later, in 2011, due primarily to its own financial losses, Best Buy was forced to sell the remnants of Napster, including its subscriber base and the Napster name, to the Rhapsody music service. In 2016, Rhapsody bizarrely renamed itself Napster… which is where we are today. The Napster that exists today isn’t the Napster from 1999 or even the Napster from 2009.

The above information about Napster is more or less included as a follow-on to the previous discussion about Napster’s near demise. This information doesn’t necessarily further the audio recording evolution, but it does tangentially relate to the health of the music and recording industry as a whole. Clearly, if Best Buy can’t make a solid go of its own music subscription service, then maybe we have too many?

As for cinema sound, DTS and Dolby Digital (with its famous double-D logo) alongside THX’s acoustical room engineering became the digital standards for theater sound. Since then, though, audio innovation in cinema has mostly halted. This decade has been more about using the previously designed innovations than about improving the cinema experience. In fact, you would have thought that after COVID arrived in 2019, cinemas would have wanted to invigorate the theater experience to get people back into the auditoriums. The only real innovation in the theater has been in seating, not in sound or picture.

This article has intentionally overlooked the transition from analog film cameras to digital cameras (aka digital cinematography), which began in the mid 1990s and has now become quite common in cinemas. Because this transition doesn’t directly impact sound recording, it’s mentioned only in passing. Know that this transition from film to digital cameras occurred. Likewise, this article has chosen not to discuss Douglas Trumbull’s 60 FPS Showscan film projection process, as it likewise didn’t impact the sound recording evolution. You can click through to any of the links to get more details on these visual cinema technologies if you’re so inclined.

Audio Streaming Services

While the recording industry is now firmly reliant on a DAW for producing new music, that new music must be consumed by someone, somewhere. That somewhere includes streaming services like Napster, Apple Music, Amazon Music and Pandora.

Why is this important? It’s important because of the former usefulness of the CD format. As discussed earlier, the CD was more or less a stop-gap for the music industry, but at the same time it propelled the audio recording process in a new direction and offered up a whole new format for consumers to buy. Streaming services, like those named above, are now the place to go to listen to music. No longer do you need to buy and own thousands of CDs. Now you just pay for a subscription service and you have instant access to perhaps millions of songs at your fingertips. That’s like walking into a record store, opening every single CD in the store and listening to every single one of them for a small monthly fee. This situation could only happen at global Internet scale, never at the scale of a single store.

For this reason, record stores like Virgin Megastore and Blockbuster Music (now out of business) no longer need to exist. When getting CDs was the only way to get music, CDs made sense. Now that you can buy MP3s from Amazon or, better, sign up for a music streaming service, you can listen to any song you want at any time you want just by asking your device’s virtual assistant or by browsing.

The paradigm of listening to commercial music shifted during the 2010s. Apple Music, for example, launched in 2015 and, as of 2022, has gained 88 million subscribers and counting. The need to buy prerecorded music, particularly CDs or vinyl, is almost nonexistent. The only people left buying CDs or vinyl are collectors, DJs or music diehards. You can get access to brand new albums the instant they drop simply by being a subscriber. With an iOS device and Apple Music, you can even download the music to your device for offline listening. You don’t need to rely on having access to the Internet to listen. You simply need access to download the tracks, but not to listen to them. As long as your device remains subscribed, all downloaded tracks remain valid.

It also means that if you buy a new device, you still have access to all of the tracks you formerly had. You would simply need to download them again.

As for music recording during this era, the DAW is firmly entrenched as the recording software of choice, whether in a studio or at home. Bands can even set up home studios and record their tracks right in their own space. There’s no need to lease expensive studio space when you can record in your own studio. This has likely put a dent in commercial studios that relied on bands showing up to record, but it was an inevitable outcome of the DAW, among other music equipment changes.

It also means that the movie industry has an easier time recording audio for films. You simply need a laptop or two and you can easily record audio for a movie production while on location. What was once cumbersome and required many people handling lots of equipment is now likely down to one or two people using portable equipment.

As for cinema audio, effectively not much has changed since the 1970s other than perhaps better amplifiers and speakers to better support THX certification. By the 2010s, digital sound has become ubiquitous, even when using actual developed film prints, though digital cinematography is now becoming the de facto standard. While cinemas have moved towards megaplexes containing 10, 20 or 30 screens, the technology driving these theaters hasn’t changed much this decade. Other competitors to THX have come into play, like Dolby Atmos (2012), which also prescribes optimal speaker placement and volume to ensure high quality spatial audio in the space allotted. While THX’s certification system was intended for commercial theater use, Dolby Atmos can be used either in a commercial cinema setting or in a home cinema.



We’re slightly over two years into the 2020s (100 years since the 1920s) and it’s hard to say what this decade might hold for audio recording evolution. So far, this decade is still riding out what was produced in the 2010s. This section will be written once the decade is over. Because this article is intended as a 100 year history, no speculation will be offered as to what might happen this decade or farther out. Instead, we’ll need to let this decade play out to see where audio recording goes from here.

Note: All images used within this article are strictly used under the United States fair use doctrine for historical context and research purposes, except where specifically noted.

Please Like, Follow and Comment

If you enjoy reading Randocity’s historical content, such as this, please like this article, share it and click the follow button on the screen. Please comment below if you’d like to participate in the discussion or if you’d like to add information that might be missing.

If you have worked in the audio recording industry during any of these decades, Randocity is interested in hearing from you to help improve this article using your own personal stories. Please leave a comment below or feedback in the Contact Us area.


Is Tesla Innovative?

Posted in botch, business, technologies by commorancy on July 16, 2021

I’ve been confronted with this very question many times on social media, specifically Twitter. Many people who own Tesla vehicles vehemently insist that Elon Musk and Tesla’s products are innovative. But is Tesla really innovative? In short, no. Let’s explore why.


via Oxford Dictionary

As a first step, we need to define the word, innovation. As you can see from its definition from Oxford Dictionary, it is defined as ‘a new method, idea, product, etc’.

The difficulty with this definition is that it doesn’t go deep enough to explain what the word new actually means in this definition’s context. This definition assumes the reader will understand the subtle, but important distinction of using the word ‘new’ in this definition.

Many people will, unfortunately, conclude that ‘new’ means ‘brand new’ as in a ‘just manufactured’ new model car. Simply because something is brand spankin’ new doesn’t make it innovative. A ‘brand new’ car model might contain some innovative elements, but the technology behind a car’s functional design may not be innovative or new at all… contrary to Oxford’s complicated use of the word ‘new’. As an example, neither cars in general nor electric vehicles specifically are new. In fact, mass produced cars have been the norm since 1901 and electric cars have been prototyped since the 1830s. While those electric prototypes weren’t truly cars in the mass produced sense, they were functional prototypes which showed that electric vehicle technology is possible, functional and, most importantly, feasible.

You might then be thinking that Tesla was the first to create mass produced electric cars. Again, you’d be wrong. In fact, the first mass produced electric car was General Motors’ EV1, produced in 1996. The EV1 appeared 12 years before the first electric vehicle rolled off the assembly line at Tesla… and Tesla’s cars appeared 178 years after the first electric car prototype appeared. That’s a long time… definitely not ‘new’.

Electric vehicle technology was not at all new when Tesla decided to roll out its all electric vehicles. The only claim to fame that Tesla can profess is that they were able to sort-of Apple-ize their car in such a way that it warps the minds of buyers into believing it is ‘the best thing since sliced bread’. Ultimately, that defines an excellent sales strategy… what Elon Musk is actually known for.

To Tesla’s credit, it was the first viable luxury-class brand to also claim the electric vehicle moniker. That claim doesn’t necessarily make the vehicle innovative. It makes Tesla’s sales and marketing team innovative in that they can make electric vehicle technology ‘sexy’ for the well-to-do crowd. Before Tesla, luxury car brands mostly avoided making electric vehicles. Even then, being able to successfully market and sell a product doesn’t make that product innovative. It simply means you’re good at selling things.

For example, Steve Jobs was the master at selling Apple products. To be fair, Steve Jobs didn’t really have to do much in the way of hard sells. When Jobs was at the helm, many of Apple’s early products were indeed innovative. If you need an example of innovation, Steve Jobs’s products mostly epitomize it.

Tesla, on the other hand, borrowed several key things to produce its electric vehicles: 1) luxury car designs (which already existed), 2) electric vehicles (which already existed) and 3) standard off-the-shelf battery technology (which already existed). None of these three ideas were new in 2008. That Tesla successfully married these things together isn’t true innovation. It’s incremental innovation. Taking already existing pieces and putting them together to make a successful ‘new’ product is common in many industries. These things already existed; it was only a matter of time before someone put them together in a cohesive way. Is that innovation? No. Why? If Tesla hadn’t done it, Mercedes-Benz, Cadillac, Bentley or another luxury brand would have at some point. Though, Tesla’s early claim to fame wasn’t even luxury, it was sports cars. However, Tesla has since dropped the sports car idea in favor of being a luxury brand.

Product Innovation Types

I’ll circle back around to the above, but let’s take a break here to understand the two primary types of innovation.

The first type of innovation is breakthrough innovation. This rare type of innovation offers a concept the world has never seen and usually results in a paradigm shift. Example: the Wright Brothers’ first flight, which brought about the paradigm shift into commercial aviation… a whole new industry emerged as a result.

The second type of innovation is incremental innovation. This much more common type of innovation takes existing ideas and marries them into a single new product. Example: The iPhone.

Some might consider both the iPad and the iPhone to be breakthrough innovation. Instead, the first Apple computer would be considered breakthrough innovation and is ultimately what, in part, led to the iPhone and iPad. However, and to be fair, both the iPad and iPhone are technically incremental innovation. Prior to the iPad, there had been several tablet style computers (e.g., the GRiDPad and even the Apple Newton) that, for whatever reason, never really took off. Handheld PDAs were actually a form of tablet. Cell phones were very popular long before the iPhone arrived. The iPhone, like the Tesla, successfully married three concepts (the computer, the cell phone and the PDA) into what became the smartphone.

However, even though incremental, both the iPhone and the iPad were responsible for a technological computing paradigm shift. The primary innovation seen in these devices was not the marriage of existing technology, but the speed, size, weight, high res screen and functionality that the devices offered… particularly when combined with the App Store and a reasonable price tag. It’s much more convenient and fast to grab a tablet to quickly search the web than to sit down at a desk and power up a desktop computer. It is the internal functions, features and flexibility that set these devices well apart from their earlier computer brethren, which offered slower computing experiences at higher prices.

Steve Jobs was a master at miniaturizing computers into much smaller versions with reasonable price tags and which included high end features. This strategy is what set Apple, then NeXT, then Apple again… apart from the rest of their competitors. That was with Steve Jobs at the helm. Since Jobs’s passing, Apple is still attempting to ride Steve Jobs’s coattails, but those coattails are getting raggedy at this point. If Apple doesn’t come up with something truly breakthrough innovative in the next few years, they’re likely to begin losing sales in larger and larger quantities. Even more than this, another upstart company in similar Tesla form will step in front of Apple and usurp the industry. A business cannot keep selling the same devices over and over and expect success to continue. Apple needs another paradigm shift device to keep its success streak going. I digress.

Tesla’s Innovation

Circling back around to Tesla, we should now be able to better understand why what Tesla includes in its vehicles, while luxurious and technologically interesting, is nothing actually very new. It’s new in the sense of being recently manufactured, yes, but the technology itself is old in the innovation sense. In other words, Tesla had no hand in that technology’s development. For example, Tesla’s choice to place a large touch screen panel in the middle of the dashboard, while interesting, is simply luxury, as touch screen flat panels are not technologically new. What about the all electric car itself? It’s not new either. Remember the 1830s? Remember the EV1? Not new.

What about the battery that powers the car? That battery technology is not new either. Technologically, it’s simply a standard lithium-ion battery built large enough to support operating a motor vehicle. Tesla didn’t design that technology either. Tesla might have had a hand in specifying the battery’s size, weight and power requirements, but that’s not innovation… that’s a manufacturing requirement. Lithium-ion battery technology was created and produced much, much earlier, in the 1980s. In fact, Akira Yoshino built the first working lithium-ion prototype in 1983, and his patented design is effectively the lithium-ion battery technology still being produced today… yes, even what’s being used in the Tesla.

You may be asking, “So what is innovative about the Tesla?” That’s a good question. Not very much, to be honest. The car body’s design is at least proprietary, but functionally utilitarian, just as most car bodies produced today are. The pop out door handles might be considered somewhat innovative, but these are born out of luxury, not out of necessity. They look cool, but exist more for aesthetics than for any truly functional purpose. In this sense, while the handles might be considered innovative, they’re incremental. The same statement about aesthetics can be made of much of both the interior and exterior of the Tesla. Functionally, the Tesla vehicles are cars.

The Tesla cars are designed to give the owner a luxury driving experience both inside and out. The all electric drive train helps reinforce that luxury function due to its torque, performance and acceleration power. Even the charging stations were built out of sales necessity, not out of innovation. You can’t exactly sell many electric vehicles if you can’t charge them easily. The proliferation of the recharge stations was, as I just said, born of necessity. Yes, this infrastructure is important to all future electric vehicles. However, Tesla built them coast to coast to ensure that Tesla owners could at least make a trip cross country without running out of power.

All of what Tesla has built I actually consider ‘smoke and mirrors’ or the ‘Hollywood Effect’. These luxury inclusions are intended to make the buyer feel better about the high purchase price. That the car acts like a highly paid butler, helping do a lot for the driver while on the road, is what buyers see and feel. It’s that very luxury experience and those seemingly high-tech aesthetics that lure would-be buyers into the brand. Buying a car from Tesla is like buying a new iPhone. It gives the buyer that same endorphin rush of being able to say you have one. It also affords bragging rights, because it’s a car brand that is encountered relatively infrequently and, at least according to Tesla buyers, is highly enviable.

People tend to buy a Tesla for the same reason they buy and consume Cristal or Dom Perignon. They purchase these expensive brands not because they’re exceptional quality products, but because being able to afford them conveys a certain level of bragging rights. As a side note, Cristal and Dom Perignon are decent sparkling wines, but they are not worth the price tag based on taste alone. There are much less expensive Champagnes and sparkling wines that are equal or better in taste. I’ll let you make of that statement what you will when it comes to Tesla.

Driver Assist

This leads us into the assisted driving feature. This feature is not innovative either. Driving assistance has been available on cars as far back as 2003, with the IPAS feature on Toyota Prius and Lexus models. This feature automatically parallel and reverse parks the vehicle. While this is not true assisted driving while on the road, IPAS would definitely drive the vehicle into the parking space hands-free. IPAS was an important first step in proving that computer assisted driving could work.

Other driving systems which have contributed towards fully assisted driving include lane change detection, collision avoidance, traction control, distance detection, automatic braking and the backup camera.

Tesla has taken all of these prior computerized driving innovations and, yet again, combined them into computerized assisted driving. This technology is markedly different from fully autonomous self-driving. Assisted driving utilizes all of the above detection systems to allow the driver to remove hands from the wheel, but not to remove the driver from the driver’s seat. The driver must still watch the road, make sure the car’s detection systems do not go awry and be willing to reassert manual control. Because these limited detection systems aren’t fail proof, a driver is still required to take control of the vehicle should the system fail to detect a specific condition that a driver can see and avoid.

Self-Driving Vehicles

Tesla doesn’t presently offer a fully autonomous self-driving vehicle to its consumers. Only driver assist mode is available. Self-driving vehicles do not require a driver. Self-driving autonomous vehicles use an advanced computer system with radar and other sensors, often mounted on the roof, continually scanning for all manner of conditions. The computer is constantly able to correct for any conditions which arise, or at least any which have been programmed. Self-driving vehicles are substantially less prone to errors than assisted driving, thanks primarily to Google’s self-driving vehicle efforts. Unlike assisted driving vehicles, which still require a driver, self-driving vehicles do not need anyone sitting in the driver’s seat.

One might think that Google invented this technology. However, one would be wrong. The self-driving vehicle concept was introduced in 1939 at the New York World’s Fair, envisioning a system that required road modification to keep the vehicle situated.

Google was able, in 2009, to adapt this prior concept by using then-current computer, radar and detection technologies to allow the car to function autonomously without the need to modify the road itself. However, even though Google was able to create cars that function properly and autonomously, this technology has yet to make it into consumer grade vehicles… mostly out of fear that it will fail in unexplained ways. That, and driving laws (and insurance policies) have not yet caught up to the idea of autonomous driver-free vehicles. For example, if there’s no driver and an autonomous car injures or kills someone, who’s at fault? Laws are slowly catching up, but this question still remains.

Tesla and Driver Assist

Let’s circle back around. The reason Google’s autonomous driving technology, now called Waymo, is mentioned in this article is that it began in 2009, one year after Tesla shipped its first car and long before Tesla began including assisted driving in its vehicles. Tesla, once again, adopted an already existing technology into its vehicle designs, likely based in part on Google’s successful autonomous vehicles. Tesla didn’t design this mode. It simply adapted an already existing technology design to be useful in a more limited fashion. Again, this isn’t breakthrough innovation, it’s incremental innovation. There is no paradigm shift involved. It’s a utilitarian luxury inclusion, an attempt to prove how modern and luxurious Tesla’s vehicles are compared to other luxury brands. Basically, it’s yet another ‘feather in their cap’.

Innovation is Innovation

Is incremental innovation equal to breakthrough innovation? Unfortunately, no. It’s far, far easier to adapt existing technologies into a design than it is to build a new idea from scratch. For this reason, nothing of what Tesla has built is truly groundbreaking or ‘breakthrough’ in design. More than this, a Tesla is a car. A car is a car is a car.

The point of a car is to transport you from point A to point B and back. You can buy a car that’s $5,000 to perform this function or you can buy a car that’s over $1 million. Both perform this same basic task. The difference in price is the luxury. Do you want to do this task in a thinly walled, loud, tiny bucket of a car or do you want to do it with every creature comfort and top speed? Comfort and performance are the primary differences in price.

With Tesla, there’s nothing truly innovative included in their cars. Luxurious? Check. Performant? Check. Bells and Whistles? Check. Miles per gallon? Whoops.

Distance Driving

One of the great things about gasoline powered vehicles is the ability to travel great distances without stopping too frequently. When you do need to stop, the existing gas station infrastructure covers practically every place you might travel. Granted, there are some dead stretches of road where you might need to plan your car’s fillups accordingly, else you might be stranded. For the vast majority of roads in the United States, though, finding a gas station is quick and easy.

With a Tesla, finding recharge stations fares far worse. While the charging infrastructure is improving and growing around the country, it’s still much more limited than gas stations. That means that when distance driving in a Tesla, it’s even more important to plan your travel routes to ensure you can charge your vehicle all along the way.

You have a Model 3 and you say it charges to 100% in about an hour? Sure, but only if you happen to find a V3 Supercharger. Unfortunately, the vast majority of Superchargers available are V2 chargers or older. Even then, the number of kilowatts available to charge your Tesla may be artificially constrained. The V3 chargers offer up to 250 kW. The V2 chargers offer around 150 kW. Many random chargers you find (not Tesla branded) may only offer between 6 and 20 kW. Considering that 20 is only a small fraction of 250, you’ll spend a whole lot of time sitting at that charger waiting on your Model 3 to charge up. It’s great that Tesla has built the faster V3 charger, but you can’t bank on finding one when you need it most. With gas stations, you can at least get some kind of gas and fill up the tank in minutes. With a Tesla, you could be sitting at a charging station for hours waiting to get to 50%.

Around 60 minute charge times sound great for the Model 3, but only when the infrastructure is there to support it. Currently, the V3 chargers are still not the norm.
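To put those charger power levels into perspective, here’s a back-of-the-envelope sketch. The 75 kWh pack size is my own assumption (roughly a long-range Model 3), and the math ignores charge-rate tapering, which makes real sessions noticeably slower than these idealized numbers:

```python
# Idealized charge times at the charger power levels discussed above.
# Assumes a hypothetical 75 kWh pack charged from empty at a constant
# rate; real charging tapers as the battery fills.
BATTERY_KWH = 75

CHARGER_KW = {
    "V3 Supercharger": 250,
    "V2 Supercharger": 150,
    "Generic public charger": 20,
    "Slow public charger": 6,
}

def hours_to_full(battery_kwh: float, charger_kw: float) -> float:
    """Hours to fill an empty pack at a constant charge rate."""
    return battery_kwh / charger_kw

for name, kw in CHARGER_KW.items():
    print(f"{name} ({kw} kW): {hours_to_full(BATTERY_KWH, kw):.1f} h")
```

Even with these generous assumptions, the slow 6 kW charger works out to roughly half a day of waiting for what a V3 Supercharger does in under 20 minutes.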

What does all of the above mean for distance driving? It means that for long distance road trips in a Tesla, you’ll need to not only plan each charger stop carefully, you’ll also need to spend time locating the fastest chargers you can find. That lets you calculate how long it will take to charge your car to 100% at each stop. If you don’t properly plan where to stop and for how long, you could spend way more time along the way than you think.

Run out of charge in the middle of nowhere? With services like triple-A, you won’t find them coming to top up your charge. Oh, no. They’ll come grab your prized Tesla, place it on a flatbed and then you’ll be riding in that tow truck to the nearest charge station… which could be hundreds of miles and one very large tow cost away. Once you get there, you’ll be sitting waiting for the charge to complete… and/or attempting to find a motel. Costly. Even with Tesla’s included roadside assistance, don’t expect miracles and you may even be required to pay for that tow.

If you had been driving a hybrid, triple-A could have given you a few gallons to get you to the nearest station to fill up… and then you’d have been on your way quickly.

What are the charge costs?

Honestly, if you have to ask this question, then a Tesla is probably not the right car to buy. However, for the curious, it’s still worth a deeper dive. Unlike gasoline prices which are clearly and conspicuously visible with large price signs towering high above the gas station, neither Superchargers nor standard electric chargers give you this visibility.

In fact, to find out what it will cost to charge your vehicle, you’ll have to visit the recharger and begin poking your way through the touch screen. There are some apps and web sites where you can pick a charger location and review its current electric rate, but you might not want to bank on that if you’re planning a trip. Instead, because electric prices can vary dramatically with season and demand, you’ll need to check the pricing just before you reach the charger or, better, directly on the charger when you reach it.

Unlike gas stations which allow you to shop around for the best price, chargers don’t really offer that convenience. You pay what you pay.

For a Supercharger, prices are based on how the energy is doled out to your car. The two billing methods are per minute or per kilowatt-hour. Whichever rate system you choose, the energy should work out to roughly the same cost in the end.

If you’re billed per minute, it’s $0.26 per minute while drawing above 60 kW and only $0.13 per minute below 60 kW. If you’re billed by energy, it’s $0.28 per kWh drawn from the charger.
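As a sanity check on those rates, here’s a quick sketch. The 48 kWh session size and the steady 60 kW charge rate are my own illustrative assumptions; real sessions taper and cross the 60 kW billing tier, so treat this as ballpark only:

```python
# Compare the two Supercharger billing methods using the rates quoted
# above. Assumes a steady 60 kW draw for the entire session.
PER_MIN_HIGH_TIER = 0.26   # $/min while drawing above 60 kW
PER_KWH = 0.28             # $/kWh under energy-based billing

def per_minute_cost(kwh_added: float, charge_kw: float) -> float:
    """Session cost under per-minute billing at a constant charge rate."""
    minutes = kwh_added / charge_kw * 60
    return minutes * PER_MIN_HIGH_TIER

def per_kwh_cost(kwh_added: float) -> float:
    """Session cost under per-kWh billing."""
    return kwh_added * PER_KWH

session_kwh = 48
print(f"Per-minute billing: ${per_minute_cost(session_kwh, 60):.2f}")
print(f"Per-kWh billing:    ${per_kwh_cost(session_kwh):.2f}")
```

At a steady 60 kW the two methods land within about a dollar of each other ($12.48 versus $13.44 for this session). Charge much faster than 60 kW, though, and per-minute billing becomes the cheaper of the two.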


In case you’re wondering… no, it is not free to charge up your Tesla. However, Tesla does sometimes offer free limited-time charging incentives at Superchargers when attempting to boost quarterly sales. You’ll need to discuss these kinds of incentives with Tesla before you sign on the dotted line.

Superchargers and Battery Wear

Battery technologies are finicky. It’s well known that the faster you charge a battery, the faster it wears out. Yes, this goes for Tesla car batteries. What that means is that while visiting a V3 Supercharger is convenient for topping up your battery quickly, it’s not so great on the battery itself. The more you visit these fast charge ports, the quicker your car battery may need to be replaced. This means you should temper your exuberance for fast chargers and utilize much slower overnight charging whenever possible.

How much is a replacement battery pack? Well, let’s hope you bought the extended warranty, because here’s where things get really pricey. Obviously, under warranty, there will be no cost. If the warranty has expired or if you have bought a used Tesla without a warranty, you’re on your own. The cost to replace a battery pack can range from $3,000 to over $13,000, sans labor. If you’re considering buying a used Tesla, you should confirm whether any existing warranty is transferable to the new owner and how much of it is left. Don’t rely on the seller for this, as they can tell you anything. Instead, confirm this information with Tesla directly by calling and asking.

If no warranty is available, you should contact a third party warranty company (e.g., CarShield) and discuss whether the battery is a covered part under that warranty before you buy the car. Being required to spend $16,000 after buying a used Tesla (or any electric car) is not really a pleasant surprise. You’ll want to make sure you can acquire some kind of warranty that covers the battery as soon as you buy that Tesla.

Commuter Vehicle

Let’s discuss a situation where a Tesla does function decently enough. A Tesla is a reasonable, if somewhat costly, commuter vehicle. It’s great for getting around town, driving to work, running errands, picking up the kids and taking them to soccer practice. For long distance driving, owning a Tesla is unnecessarily complicated, particularly if you choose to tour remote areas of the country without access to charge stations. All of this complication can be easily avoided by choosing a gas vehicle or a gas hybrid. As a commuter vehicle, a Tesla is an okay choice. However, I’d suggest there are plenty of other vehicles, gas, hybrid and plug-in hybrid, that suffice for commuting. Many of these choices are not nearly as costly as a Tesla. But, of course, you won’t get all of the Tesla niceties with those other vehicles.

A Green Company?

With the recent trend toward companies seeking to be green and offering green technologies, it’s funny (odd) that Tesla chose not to be very green. There are a number of problems that prevent Tesla from being an all-around green company. But first, let me define ‘green’.

I know, you might be thinking, “How can an all-electric vehicle not be green?” Bear with me.

A green company is one that implements processes to reduce its own waste, offers more compostable materials in packaging and produces designs which help reduce carbon emissions and other environmental pollutants. Apple is a good example of this. Apple moved from plastic packaging materials to paper materials, which compost more fully. Though, even Apple isn’t all that green considering the eWaste afforded by Apple’s insistence on replacing iPhones every single year.

One might further think, “Well, isn’t Tesla green by using batteries instead of gasoline?” You would think that would be true, wouldn’t you? Let’s examine.

What about those lithium-ion batteries? The dirty secret is that manufacturing a single EV battery pack produces up to 74% more emissions than manufacturing a standard car. Once the battery is manufactured, greenhouse gas emissions drop to zero for that specific battery, but the manufacturing of each battery is very dirty. I guess Tesla car buyers don’t really care much about how much of a carbon footprint was required to build that luxury Tesla? It gets worse.

Power grids derive most of their energy from fossil fuel sources. At most about 20% of all grid energy generated comes from clean renewables such as solar, wind and water. Nuclear energy makes up roughly another 20%. The remaining ~60% is still generated from fossil fuel sources including coal, natural gas and petroleum products. That means that every time you plug your Tesla into a grid charger, at least 60% of the energy consumed is contributing to greenhouse gases.

Your Tesla doesn’t have a tailpipe, but it grows one while your Tesla is charging from the grid.

Tesla and the Power Grid

With both California and Texas now experiencing regular power problems for various politically motivated reasons, it is also becoming obvious that the aging United States power grid infrastructure is in need of a major overhaul. Every plug-in electric car sold (not just Tesla) puts another car onto the grid to suck even more energy. As more and more all electric cars are manufactured and sold, that only means even more added load on that aging power grid. Tesla is a heavy contributor to this problem due to the much denser power requirements of its fast charging.

At some tipping point, there will be too many cars charging for the grid to handle. The formerly off-peak hours in the wee morning will become the peak hours, because that’s when all of the cars will be charging. Eventually, all of these charging electric cars will be drawing more current than homes draw in the middle of the day. This will be compounded by Tesla’s ever more ravenous need to speed charging up. Right now, the V3 chargers pull 250 kW. The V4 chargers will likely want to pull 500 kW. V5 chargers, maybe 1,000 kW?

When will this need-for-speed end? This is the same problem that Internet services faced in the early 2000s. The infrastructure wasn’t designed to deliver 10 Gbps to every home. It still isn’t. That’s why broadband services still don’t offer 10 Gbps home speeds. They barely offer 1 Gbps… and even if you do buy such a link, they don’t guarantee those speeds (read the fine print).

The point is that the more data that can be pulled in an ever shorter amount of time, the more problems it causes for the ISP over that very short time. The same for energy generation. The more energy consumed over an ever shorter amount of time, the more energy that must be generated to keep up with that load. There is a tipping point where energy generators won’t be able to keep up.

Is Tesla working with the energy generation companies? Highly unlikely. Tesla is most likely designing in a bubble of their own making. Tesla’s engineers assume that energy generation is a problem that the electric companies need to solve. Yet, energy generation has finite limits. Limits that, once reached, cannot be exceeded without expensive additional capacity… capacity that the energy companies must pay to build, not Tesla. Capacity that takes time to build and won’t come online quickly (read years). Capacity costs that will be handed down to consumers in the form of even more rate increases. Yes, all of those Tesla vehicles consuming energy will end up being the source of higher energy rate increases. Thanks, Elon!

It’s highly unlikely that Tesla knows exactly where those energy generation limits are, and they probably don’t want to know. It’s also the reason many recharge stations limit charging draw to around 6 to 10 kW max. Those limits are intentional and are not likely to be lifted any time soon. If Tesla can manage to get even a handful of V3 Superchargers set up around the United States, I’d be surprised. Even then, these rechargers may be artificially limited to significantly less than the 250 kW required for that 60 minute rapid charge in a Model 3. Power companies may simply not be able to provide that charge rate for hundreds or thousands of rechargers.

Hope meets Reality

The difficulty is that Tesla intends to build these ever faster rechargers, but then may not be able to actually get them functional in the wild due to the sheer rate at which they consume energy. This is where reality meets design… all for Tesla to attempt to get close to the 5 to 8 minutes it takes to fill up a tank of gas. Yes, let’s completely stress our aging power grid infrastructure to the breaking point all for the sake of trying to charge a bunch of Teslas in 5 minutes? Smart. /s

Instead of producing ever faster and faster rechargers, Tesla should be researching and innovating better battery technologies to reduce power consumption and improve driving distance through those improved batteries. How about hiring battery engineers to solve this difficult problem rather than taking the easy route by simply sucking down ever more energy faster from an already overloaded power grid?

With better batteries, instead of Tesla contributing to the problem of global warming by forcing ever more energy generation faster, they could be innovating to reduce this dilemma by making more efficient and faster charging batteries using lower power consumption rates. Building better and more efficient batteries? That’s innovation. Faster recharging by overburdening infrastructure? That’s callous and reckless… all in the name of capitalism. I guess as long as Tesla can make its sales numbers and Wall Street remains happy, it doesn’t matter how non-green Tesla really is.


One thing I’ve not yet fully discussed is, you guessed it, pollution. This aspect is part of being a green company. Yet, instead of trying to make Teslas charge faster and drive farther by innovating improved battery technologies, Tesla picks the low-hanging fruit of faster 250 kW rechargers, improving the speed of battery charging by consuming ever more grid energy faster.

Let’s understand the ramifications of this. The faster the batteries charge, the more power must be generated at that point in time to handle the load. The more power generated, the higher the concentration of pollutants that go into the air to support that generation. That doesn’t say ‘green company’. It says callous, reckless, careless, dirty company in it for the money, not for helping the planet.

Overtaxing the power grid is a recipe for disaster, if only from a climate change perspective. There are plenty of other ways to look at this, but this one is the biggest problem against what Tesla is doing. It’s also, again not innovative. In fact, it’s just the opposite.


Energy sources like wind, solar and water are great generation alternatives. But they’re not always feasible. Texas is a very good example of how these renewables can fail. The massive arrays of wind turbines in North Texas and the panhandle were found to be easily damaged by both freezing temperatures and excessive winds. Clearly, these expensive turbines need to be weatherproofed and managed accordingly.

For example, to avoid the freezing conditions, the motors needed heaters to keep them from freezing up. It’s not like some of the energy generated from these turbines couldn’t be used and stored locally to keep heaters operating. Additionally, high wind detectors could move the blades into a neutral position so there’s less of a chance of high wind damage. Because Texas apparently didn’t implement either of these two mitigation strategies, that left a large amount of these wind turbines damaged and out of commission. This fact meant the Texas power grid was unable to serve the entire state enough energy… thus, blackouts.

Solar, on the other hand, requires a large amount of land to “farm.” What that means is that land needs to be allocated to set up large amounts of solar panel arrays. Last time I looked, land wasn’t cheap and neither are those solar panels. This means a high amount of expense to draw in solar energy.

Unlike wind, which can potentially blow 24/7, if you can get 5 to 6 hours of solid sunlight in a day, that’s the best you can hope for. This means a solar panel can only capture a fraction of the energy that a 24/7 wind turbine can continuously capture and provide.

Water energy can also be harnessed, but only using large dam systems. This means, once again, specific land and water requirements. For example, the Hoover Dam provides about 458,333 kW of continuous output, enough to operate around 1,558 V3 Superchargers concurrently after accounting for a 15% power loss due to transmission lines and transformers. This also assumes the dam’s power is dedicated to that purpose alone. Hint: it isn’t. Only a fraction of that power would be allowed for that purpose, which means far fewer Superchargers. That power is also combined with other generation types, which together make up the full power grid supply.
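The dam arithmetic above is easy to reproduce. The 458,333 kW average output and the 15% line-and-transformer loss are the figures assumed in this article, not official utility numbers:

```python
# Reproducing the Hoover Dam back-of-the-envelope math: average
# continuous output, minus an assumed 15% transmission/transformer
# loss, divided by the 250 kW draw of one V3 Supercharger.
DAM_AVG_KW = 458_333        # assumed average continuous output
LINE_LOSS = 0.15            # assumed transmission + transformer losses
V3_CHARGER_KW = 250

usable_kw = DAM_AVG_KW * (1 - LINE_LOSS)            # ~389,583 kW delivered
concurrent_chargers = int(usable_kw / V3_CHARGER_KW)

print(f"Usable power: {usable_kw:,.0f} kW")
print(f"Concurrent V3 Superchargers: {concurrent_chargers:,}")
```

Impressive as roughly 1,500 Superchargers sounds, that would consume the entire average output of one of the largest dams in the country, which is exactly the point being made here.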

The point here is that renewables, while great at capturing limited amounts of energy, are not yet ready to take over for fossil fuel energy generation. In fact, the lion’s share of energy generation is still produced by burning coal, natural gas and petroleum… all of which significantly impact and pollute the environment.


One thing I’ve not yet discussed is the dangers of owning an electric vehicle. One danger that might not seem apparent is the battery. Lithium-ion batteries can become severe fire hazards once breached. If a Tesla ends up in an accident and the battery ruptures, it’s almost assured to turn into a Car-B-Cue. If you’re pinned in the vehicle during that Car-B-Cue, it could turn out horrific. Lithium-ion fires are incredibly dangerous. Though gasoline is also highly flammable, a gas tank is much less likely to rupture and catch fire in an accident.

Innovation Circle

To come full circle, it’s now much easier to understand why Tesla is less an innovative car company and more of a sales and marketing gimmick. After all, you could buy plenty of other luxury car brands offering sometimes better bells and whistles. Luxury car brands have been around for years. Tesla is a relatively new car company, having begun selling cars in 2008. It’s just that Tesla has built its brand on having “sexy” technology that other brands didn’t have, but have since acquired.

Both gas and hybrid vehicles offer better distance and more readily accessible infrastructure to get you back onto the road when low on fuel. It is this feature that is still a primary motivator for most car buyers. Trying to finagle where and how to charge an electric vehicle can be a real challenge, particularly if you live in a condo or apartment and not a home. It’s worse if you choose to live in the boonies.

Where does Tesla stand?

The question remains, what does a Tesla vehicle do well? As a short distance commuter car, it’s perfectly fine for that purpose. It’s a bit pricey for that use case, but it functions fine. The convenience of being able to plug it in when you get home is appealing, assuming you have a recharge port installed at home. If you are forced to leave it in a random parking lot to charge overnight, that’s not so convenient. How do you get home from there? Walk? Uber? It kinda defeats the purpose of owning an expensive Tesla.

When purchasing a Tesla, you have to consider these dilemmas. What’s the problem with living in a condo or apartment? Many complexes have no intention of setting up rechargers, thus forcing you to leave your car at a parking lot charger perhaps blocks away. If the complex offers garages with 110v circuits, you can use these to charge, but extremely slowly. This means that certain things need to line up perfectly to make owning a Tesla convenient. Otherwise, it’s an expensive hassle.

Innovation isn’t just about the product itself, it’s how the product gets used in a wide array of use cases. If the product’s design fails to account for even basic ownership cases, the design wasn’t innovative enough. That’s where the Tesla sits today. That’s why Tesla is still considered a niche car and is not generally useful across-the-board.

Calling Tesla and, by extension, Elon, innovative gives that company and Elon way too much credit. Elon’s claim to fame is that he picked a business that happened to receive a lightning strike, mostly because he’s an excellent salesperson. Some people can sell pretty much anything they are handed. Elon is one of those people. But while he’s an excellent salesman, he’s not so much an excellent innovator. Slapping together a bunch of existing off-the-shelf technologies shouldn’t be considered innovative, particularly when you fail to account for the many ownership cases where the final product is inconvenient to own and operate.

Home Use

The kind of buyer who can afford a Tesla is typically affluent enough to afford a home. For these people, owning a Tesla affords more convenience. Not only can a homeowner install a home charging port that charges at whatever rate they can afford, they can also park and charge their vehicle at will. This is important to understand.

Homeowners with acreage can also choose to set up renewable energy sources such as wind turbines and solar panels. These energy generation systems can offset some of the power consumed while charging an electric vehicle.

About renewables: one residential wind turbine may produce a maximum of 10 kW of energy under optimal conditions. That’s about the same amount of energy provided by most third-party non-Tesla recharge ports found in parking lots. While it may take 60 minutes to charge a Model 3 using a V3 250 kW recharge port, at 10 kW, or 4% of that 250 kW charge rate, it would take many hours to charge. In fact, at that much slower recharge rate, it might take 8-16 hours to fully charge.
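As a sketch of that charge-time math, assuming a roughly 75 kWh Model 3 pack (my assumption; the article doesn't give a pack size) and an ideal, perfectly constant charge rate:

```python
# Ideal charge-time estimate. Assumptions: ~75 kWh usable pack
# (not stated in the article) and a perfectly steady charge rate.
# Real charging tapers and wind output is intermittent, so actual
# times run longer -- consistent with the 8-16 hours cited above.

BATTERY_KWH = 75  # assumed Model 3 pack capacity

def hours_to_charge(rate_kw, battery_kwh=BATTERY_KWH):
    """Hours to fill an empty pack at a constant rate."""
    return battery_kwh / rate_kw

print(hours_to_charge(250))  # 0.3 h ideal; real V3 sessions taper toward ~1 h
print(hours_to_charge(10))   # 7.5 h ideal from a single 10 kW turbine
```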

To offset that, you would need to buy and install multiple wind turbines to increase the energy generated. Wind turbines are not at all cheap to buy or erect, and having enough land to line them up may be even more of a problem. In other words, you’d probably spend more than the cost of your Tesla just to build enough infrastructure to charge that car in anything close to a timely manner. Is it worth it? Depends on the person.

To even approach the 250 kW level of charge rate, you have to rely on the power grid or install a diesel or natural gas generator. However, installing a fossil fuel generator is no better or cheaper than using the power grid.

As I said above, a Tesla grows a tailpipe the moment it begins recharging from fossil fuel sources.

Is a Tesla vehicle worth it?

As a car for car’s sake, it’s fine. It does its job well. It’ll get you from place to place. It has all of the standard amenities needed, such as heating and air conditioning and it keeps you out of the rain. It has luxury bells and whistles also, such as the touch screen panel and assisted driving.

Everyone must decide for themselves whether a product is “worth it”. Owning a specific car is mostly a subjective experience. Does it feel right when sitting in the driver’s seat? Is it comfortable? Can you see easily out of the windows? Do the mirrors offer safe views all around the vehicle? As a driver, only you can sit in a car and decide if the car is the correct fit for you and your family.

I’ve personally sat in cars that, while they appeared roomy from the outside, caused my knees to bang up against the dash, door frame or other areas upon entry, exit or while driving. It’s no fun exiting a vehicle with scraped knees or banged-up shins.

Car buying is an experience that can only be described as trying to find a glove that fits. Once you find the right glove, the deal is done. I would never buy a car based on brand alone. I buy cars that fit all manner of criteria, including comfort, budget, safety, warranty, reviews and cost for maintenance. Nothing’s worse than taking your car to the dealer only to be slapped with a $1000 fee each and every time.

I’m not saying that owning a Tesla isn’t “worth it”. It may well be “worth it” for specific reasons. It’s just that the one reason to own a Tesla should not be innovation; the car truly offers few innovative features. Another claimed reason is its alleged zero carbon footprint. Yes, it has a zero carbon footprint as long as you never charge it, but then you don’t have a functional car. As soon as it begins charging from the power grid, the car is no greener than a gas powered car. And because a Tesla must charge for hours at slower recharge rates, it draws that power for far longer than most 2-4 hour daily commutes to and from work take in a gas powered vehicle.

Simply because you don’t see the pollution going into the air out of your car doesn’t mean it’s not happening while that car sits in your garage charging.

Product Innovation

As I said above, you shouldn’t buy a Tesla because you think it’s innovative. It’s not. However, it goes beyond this. I don’t think I’ve ever purchased a car because it was “innovative.” I choose cars based on other more important criteria, such as gas mileage, comfort, warranty, performance, ease of maintenance and other functional criteria. This typically means I’m also not brand loyal. I find the car that fits what I need in the budget that I can afford. That could be a Ford, Chevy, Toyota or whatever car that works best. Every model year yields new cars that offer different features.

Tesla believes it can craft a brand like Apple’s, with brand-loyal fans to match. Apple is a unique beast; its brand loyalty goes very, very deep. These brand-loyal folks will buy whatever Apple releases, regardless of whether it’s the best value. Likewise, Tesla hopes to build its company on this same type of year-over-year brand loyalty. Except there’s one problem: who buys a new car every year?

However, Tesla has not proven itself to be an innovative car company. They can make cars, true enough. But are those cars truly innovative? Not really. Even Apple’s product innovation has come to a standstill. The latest iPad, for example, removed the TouchID home button in favor of FaceID simply to remove the home button from the bezel. Then along comes COVID-19, and a simple face mask thwarts FaceID. TouchID is a better COVID alternative because you don’t need to cover your fingertips. FaceID seems like a great idea until it isn’t.

Tesla needs to consider more breakthrough innovation and less incremental innovation. Hire people with the chops to build superior battery technology. Hire people who can design and build more efficient drive motors. Hire people to figure out how to embed solar panels into the paint so you can have both an aesthetically pleasing paint job and charge your car while sitting or driving in the sun.

There are plenty of ways to recapture small amounts of energy, such as wind, solar and regenerative braking to extend the driving distance. These don’t need to fully charge the battery, but instead are used to extend the charge of the battery and add distance. Heck, why not install a simple generator that uses gasoline, propane or even natural gas? This generator doesn’t need to charge the battery to 100%. Again, it is simply used to extend the range to get more miles from the car. These are just a few simple, but profound improvement ideas. There are plenty more ideas that can be explored to make the Tesla cars, not just technologically luxurious, but truly innovative.

These more breakthrough innovative designs are missing from the Tesla. These are ideas that would make a Tesla car much more functional in all areas of driving, not simply commute driving. In fact, I’d like to see Tesla build a gasoline powered vehicle. Stop relying on electric and take the dive into building cars based on all fuel types. Does Cadillac keep its car line artificially limited to one type of motor? No. How about Bentley? How about Porsche or Lamborghini? No. These car companies innovate by not artificially constraining themselves to a single type of technology. This gives those car companies an edge that allows them to install whatever technology is best for a specific model vehicle. That Tesla is artificially constraining itself to electric only is a questionable, self-limiting business decision.


Is Google running a Racket?

Posted in botch, business, california, corruption, Uncategorized by commorancy on March 16, 2020

In the 1930s, we had crime syndicates that would shake down small business owners for protection money. This became known as a “racket”. These mob bosses would use coercion and extortion to ensure that their syndicates got their money. It seems that Google is now performing similar actions with AMP. Let’s explore.


AMP is an acronym that stands for Accelerated Mobile Pages. To be honest, this technology is only “accelerated” because it strips out much of what makes HTML pages look good and function well. The HTML technologies that make a web page function are also what make it usable. When you strip out the majority of that usability, what you are left with is a stripped down protocol named AMP… which should stand for Antiquated Markup Protocol.

This “new” (ahem) technology was birthed by Google in 2016. It claims to be an open source project and also an “open standard”, but the vast majority of the developers creating this (ahem) “standard” are Google employees. Yeah… so what does this say about AMP?

AMP as a technology is fine if it were allowed to stand on its own merit. Unfortunately, Google is playing hardball to get AMP adopted.


Google seems to feel that everyone needs to adopt and support AMP. To that end, Google has created a racket. Yes, an old-fashioned mob racket.

To ensure that AMP becomes adopted, Google requires web site owners to create, design and manage “properly formatted” AMP pages or face having their entire web site’s rankings lost within Google Search.
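To give a feel for what “properly formatted” entails: AMP pages must carry a handful of mandatory markers, including the AMP runtime script and a canonical link back to the regular HTML page. The sketch below is a naive, illustrative check only; Google’s real AMP validator enforces far more than this.

```python
# Naive illustration of a few of AMP's mandatory page markers.
# This is NOT the real AMP validator -- just a toy string check.

REQUIRED_MARKERS = [
    "<!doctype html>",                   # standard HTML5 doctype
    "https://cdn.ampproject.org/v0.js",  # the AMP runtime script
    'rel="canonical"',                   # link back to the canonical page
]

def missing_amp_markers(page_html):
    """Return whichever required markers are absent from the page."""
    lowered = page_html.lower()
    return [m for m in REQUIRED_MARKERS if m.lower() not in lowered]

page = "<!doctype html><html amp><head></head><body></body></html>"
print(missing_amp_markers(page))
# ['https://cdn.ampproject.org/v0.js', 'rel="canonical"']
```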

In effect, Google is coercing web site owners into creating AMP versions of their web sites or effectively facing extortion by being delisted from Google Search. Yeah, that’s hardball, guys.

It may also be illegal under RICO laws. While no money is being transferred to Google (at least not explicitly), the action has the same effect. Basically, if, as a web site owner, you don’t keep up with your AMP pages, Google will remove your web site from its search engine, forcing you to comply with AMP to reinstate the listing.

Google Search as Leverage

If Google Search were, say, 15% or less of the search market, I might not even make a big deal out of this. However, because Google Search holds around 90% of the search market (an effective monopoly), it can make or break a business by reducing site traffic through low rankings. Complying to avoid having your search rankings reduced is much the same as handing Google protection money… and, yes, this is still very much a racket. While rackets have traditionally been about collecting money, Google’s currency isn’t money. Google’s currency is search rankings. Search rankings make or break companies, much the same as paying or not paying mobsters did back in the 1930s.

Basically, by coercing and extorting web site owners into creating AMP pages, Google has effectively joined the ranks of those 1930s mob boss racketeers. Google is now, basically, racketeering.

Technology for Technology’s Sake

I’m fine when a technology is created, then released and allowed to land where it may. If it’s adopted by people, great. If it isn’t, so be it. However, Google felt the need to force AMP’s adoption by playing the extortion game. Basically, Google is extorting web site owners to force them to support AMP or face consequences. This forces web site owners to create and maintain AMP versions of their web pages, not only to appease Google, but to prevent their entire site from being heavily reduced in search rankings and, by extension, visitors.


In October of 1970, Richard M. Nixon signed into law the Racketeer Influenced and Corrupt Organizations Act… or RICO for short. This Act makes it illegal for corrupt organizations to coerce and extort people or businesses for personal gain. Yet, here we are in 2020, and that’s exactly what Google is doing with AMP.

It’s not that AMP is a great technology. It may have merit at some point in the future. Unfortunately, we’ll never really know that. Instead of Google following the tried-and-true formula of letting technologies land where they may, someone at Google decided to force web site owners to support AMP … or else. The ‘else’ being the loss of that business’s income stream by being deranked from Google’s Search.

Google Search can make or break a business. By extorting businesses into using AMP under fear of lost search rankings, Google very much runs afoul of RICO. Google gains AMP adoption, yes, but that’s Google’s gain at the site owner’s loss. “What loss?”, you ask. Site owners are forced to hire staff to learn and understand AMP because the alternative is loss of business. Is Google paying business owners back for this extortion? No.

So, here we are. A business the size of Google wields a lot of power. In fact, it wields around 90% of the Internet’s search power. One might even consider that a monopoly power. Combining a monopoly and extortion together, that very much runs afoul of RICO.

Lawsuit City and Monopolies

Someone needs to bring Google up in front of congress for their actions here. It’s entirely one thing to create a standard and let people adopt it on their own. It’s entirely another matter when you force adoption of that standard on people who have no choice by using your monopoly power against them.

Google has already lost one legal battle with COPPA and YouTube. It certainly seems time that Google needs to lose another legal battle here. Businesses like Google shouldn’t be allowed to use their monopoly power to brute force business owners into complying with Google technology initiatives. In fact, I’d suggest that it may now be time for Google, just like the Bell companies back in the 80s, to be broken up into separate companies so that these monopoly problems can no longer exist at Google.


What is 35mm film resolution?

Posted in entertainment, film, movies, technologies by commorancy on December 26, 2018

I’ve seen a number of questions on Quora asking about this topic, likely related to 4K TV resolution. Let’s explore.

Film vs Digital

How many pixels are in a 35mm frame of film? There’s no exact number of pixels in a single frame of 35mm film stock. You know, that old plasticky stuff you had to develop with chemicals? Yeah, that stuff. However, the number of pixels can be estimated based on the ISO used.

Based on an ISO of 100-200, it is estimated that just shy of 20,000,000 (20 million) pixels make up a single 35mm frame after conversion to digital pixels. When the ISO is increased for greater light sensitivity, film noise or grain also increases. As grain increases, resolution decreases. At an ISO of 6400, for example, the effective resolution might drop below 10,000,000 (10 million) pixels due to the heavier film grain. It can be even lower than that depending on the type of scene, the brightness of the scene and various other film factors… including how the film was developed.

If we’re talking about 70mm film stock, then we’re talking about double the effective resolution. This means that a single frame of 70mm film stock would contain (again at ISO 100-200) about 40,000,000 (40 million) digital pixels.

Digital Cinematography

With the advent of digital cinematography, filmmakers can choose from the older Panavision film cameras or they can choose between Panavision‘s or RED‘s digital cameras (and, of course, others). For a filmmaker choosing a digital camera over a film camera, it’s important to understand the differences in your final film product.

As of this writing, RED and Panavision digital cinematography cameras produce resolutions up to 8K (7,680 × 4,320 = 33,177,600 total pixels). While 33 million pixels is greater than the 20 million pixels in 35mm film, it is still less resolution than can be had from 70mm film at 40 million pixels. This means that while digital cinematography might offer a smoother look than film, it doesn’t necessarily offer better ‘quality’ than film.

Though, using digital cameras to create content is somewhat cheaper because there’s no need to send the footage to a lab to be developed… only to find that the film was defective, scratched or otherwise problematic. Digital cinematography is a bit more foolproof: you can immediately preview the filmed product and determine within minutes whether it needs to be reshot. With film, you don’t know what you have until it’s developed, which could be a day or two later.

With that said, film’s resolution is based on its inherent film structure. Film resolution can also be higher than that of digital cameras. Film also looks different due to the way the film operates with sprockets and “flipping” in both the camera and projector. Film playback simply has a different look and feel than digital playback.

RED expects to increase its camera resolution to 10k (or higher) in the future. I’m unsure what exact resolution that will entail, but the current UW10k resolution features 10,240 × 4,320 = 44,236,800 pixels. This number of pixels is similar to 70mm film stock in total resolution, but the aspect ratio is not that of a film screen, which typically uses 2.35:1 (Cinemascope widescreen) or 16:9 (TV widescreen) formats. I’d expect that whatever resolution / aspect that RED chooses will still provide a 2.35:1 format and other formats, though it might even support that oddball UW10k aspect with its 10,240 pixels wide view. These new even wider screens are becoming popular, particularly with computers and gaming.

Film Distribution

Even though films created on RED cameras may offer up to 8K resolution, these films are always down-sampled for both theatrical exhibition and home purchase. For example, the highest resolution you can buy at home is the UltraHD 4K version, which offers 3,840 x 2,160 = 8,294,400 pixels. Converting an 8K film into 4K, you lose nearly 25 million pixels of resolution information from the original film source. The same applies when converting film stock to digital formats.
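The pixel arithmetic in this section can be tallied in one place; note that the film pixel counts are the rough estimates given above, not exact values:

```python
# Resolution comparison using the figures from this article.
# Film pixel counts are rough estimates (ISO 100-200), per the text.

def pixels(width, height):
    return width * height

res_8k     = pixels(7_680, 4_320)   # 33,177,600 (RED / Panavision 8K)
res_uw10k  = pixels(10_240, 4_320)  # 44,236,800 (RED UW10k)
res_uhd_4k = pixels(3_840, 2_160)   #  8,294,400 (UltraHD 4K)
film_35mm  = 20_000_000             # estimated 35mm frame
film_70mm  = 40_000_000             # estimated 70mm frame

# Pixels discarded when an 8K master is downsampled to a 4K release:
print(res_8k - res_uhd_4k)  # 24883200, i.e. nearly 25 million
```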

Digital films projected in theaters typically use theatrical 4K copies, much the same as you can buy on UltraHD 4K discs, just tied to a different licensing system that only theaters use.

Future TV formats

TV resolutions have been going up and up: from 480p to 1080p to 4K and next to 8K. Once we get to 8K in the home, this is the native resolution of most digitally captured films, though some early digital films were shot in 4K. Eventually, we will be able to see digital films in their native resolution. 8K TVs will finally allow home consumers to watch films at their filmed resolution, including both 35mm and 70mm film stock as well as many digital-only films.

For this reason, I’m anxious to finally see 8K TVs drop in price to what 4K TVs are today (sub $1000). By that time, of course, 4K TVs will be sub $200.

8K Film Distribution

To distribute 8K films to home consumers, we’re likely going to need a new format. UltraHD Blu-ray is likely not big enough to handle the size of the files of 8K films. We’ll either need digital download distribution or we’ll need a brand new, much larger Blu-ray disc. Or, the movie will need to be shipped on two discs in two parts… I always hated switching discs in the middle of a movie. Of course, streaming from services like Netflix is always an option, but even 4K isn’t widely adopted on these streaming platforms as yet.
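To see why a new format would be needed, here's a rough size estimate. The 200 Mbps figure is my assumption, not a published spec value: it simply scales a typical 4K disc bitrate by the 4x increase in pixels.

```python
# Rough 8K file-size estimate vs. UltraHD Blu-ray capacity.
# Assumptions: ~200 Mbps average 8K video bitrate (about 4x a typical
# 4K disc encode -- my estimate, not a published spec) and a 2-hour film.

BITRATE_MBPS = 200          # assumed average 8K bitrate
RUNTIME_SECONDS = 2 * 3600  # a two-hour film
DISC_GB = 100               # largest current UltraHD Blu-ray disc

size_gb = BITRATE_MBPS * RUNTIME_SECONDS / 8 / 1000  # megabits -> gigabytes
print(size_gb)            # 180.0 GB
print(size_gb > DISC_GB)  # True -- one 100 GB disc isn't enough
```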

Seeing in 8K?

Some people claim you can’t see the difference between 1080p and 4K. That’s untrue. With 1080p, particularly on a 55″ or larger TV, it’s easy to spot the pixels from a distance… well, not exactly the pixels themselves, but the rows and columns of pixels (the pixel grid) that make up the screen. With 4K, the pixels are so much smaller that it’s almost impossible to see this grid unless you are within inches of the screen. This makes viewing films in 4K much more enjoyable.

With 8K films, the filmed actors and environments will be so stunningly detailed as to be astounding. We’ll finally get to see all of that detail that gets lost when films are down-converted to 4K from 8K. We’ll also get to see pretty much what came out of the camera rather than being re-encoded.

Can humans see 8K? Sure, just like you can see the difference between 1080p and 4K, you will be able to see a difference in quality and detail between 4K and 8K. It might be a subtle difference, but it will be there and people will be able to see it. Perhaps not everyone will notice it or care enough to notice, but it will be there.

Film vs Digital Differences

The difference between film and digital photography is in how the light is captured and stored. For film, the camera exposes the film to light, which is then developed to show what was captured. With digital photography, CMOS (Complementary Metal Oxide Semiconductor) sensors or possibly CCDs (Charge Coupled Devices) are used to capture imagery. Most cameras today opt for CMOS sensors because they’re less expensive to buy and provide quality equivalent to CCD sensors. This is why RED has chosen CMOS as the sensor technology for their cameras. Though, RED cameras are in no way inexpensive, starting at around $20k and going up from there.


In concluding this article, I will say that 4K is definitely sufficient for most movie watching needs today. However, Internet speeds will need to improve substantially to offer the best 8K streaming experience. Even Netflix and Amazon don’t currently provide an amazing 4K experience. In fact, Netflix’s 4K offerings are few and far between. When you do find a film in 4K, it takes forever for Netflix to begin streaming that 4K content to the TV. Netflix starts out streaming at 480p (or less), then gradually increases the stream rate until the movie is finally running at 4K. It can take between 5-10 minutes before you actually get a 2160p picture. Even then, the resolution can drop back down mid-movie and take minutes before it resumes 4K.

Today, 4K streaming is still more or less haphazard and doesn’t work that well, partly due to Netflix and partly due to the Internet. The streaming rate that 4K content requires to achieve a consistent quality picture can really only be had from Blu-ray players or by downloading the content to your computer in advance and playing it from your hard drive. Streaming services offering 4K content still have many hurdles to overcome to produce a consistently high quality 4K viewing experience.

For this reason, 8K streaming content is still many, many years away. Considering that 4K barely works today, 8K isn’t likely to work at all without much faster Internet speeds to the home.



Randocity Tech Holiday Shopping Guide

Posted in giving, holiday, video game by commorancy on November 23, 2018

In the spirit of the upcoming holidays, I offer the Randocity Tech Holiday Shopping Guide, otherwise known as the How-to-avoid-technology-pitfalls Guide. Let’s explore.


The purpose of this guide is two-fold. First, it’s designed to help you choose various electronics and video game gifts. Second, it’s designed to keep you from falling into pitfalls with said gift purchases, to help minimize returns and exchanges caused by selecting an incompatible item and to help you avoid looking like you don’t know what you’re buying.

Let’s get started…

Xbox One Wired Microphone + Headset

Here’s one gift where you might think it would be easy to locate a functional item. Thanks to Microsoft, you would be incorrect.

🛑 Pitfall: Even though the Xbox One does have a 3.5mm jack on the controller, it only accepts certain compatible chat headphone accessories. If you’re planning on buying a chat headset for someone with an Xbox One, you should check the box for the words Universal, Xbox One and/or Samsung / Android compatibility. The problem… the Xbox One is only compatible with headsets wired for use on Samsung / Android devices or devices specifically labeled compatible with the Xbox One.

👎 This means you cannot buy any Apple compatible headphones with a 3.5mm jack and have the microphone work. The stereo output will work, but the microphone will not. If you’re unsure of the compatibility of the headset, ask the store, search the manufacturer’s web site or find another brand.

✅ Instead, look for and purchase wired headsets that list Samsung, Android, Xbox One or Universal on the box only.

🔥 Note that it is getting more difficult to find boxes labeled for Android or Samsung, as most Android device makers understand this incompatibility and have built their latest headsets to support either wiring type. Ironically, this has caused more confusion rather than solving the problem.

👍 Gaming headsets change yearly, and offering a specific recommendation means this advice will be out of date by this time next year. I will say Turtle Beach quality isn’t great, so steer clear of that brand. If you stick with Sony branded headsets for the PS4, you should be good there. Microsoft doesn’t make high quality headsets, so you’ll have to buy from third parties for the Xbox One. I personally have a Plantronics RIG 500 Pro HC and can recommend it as a good basic quality headset. The fidelity is decent, but not perfect. Some reviewers of this headset have complained of the microphone breaking quickly.

PS4 or Xbox One Wireless Chat Headsets

Here’s another gift idea like the above, but it too has a big pitfall. I’ll break it out by console version.

🛑PS4 Pitfall: While the PS4 does have Bluetooth capabilities, it doesn’t support the AVRCP or A2DP profiles. Instead, the PS4 only supports the HSP (HeadSet Profile). This profile is a lesser used profile throughout the industry and it doesn’t support the same quality stereo output as AVRCP and A2DP. For this reason, you can’t go and buy just any Bluetooth chat headset and assume it will work. For example, the Apple Airpods do not work on the PS4. Randocity recommends not even looking at Bluetooth headphones for the PS4 as greater than 97% of them won’t work.

✅ Instead, you’ll need to buy headphones specifically designed for the PS4, and these typically come with a dongle for Wireless. For example, Sony’s Gold Wireless headphones. There are other brands from which to choose, but be sure that the box is labeled with either PS4 or Universal console compatibility.

🛑Xbox One Pitfall: The Xbox One doesn’t support Bluetooth at all. This makes it a little bit easier when gift shopping in that you can entirely avoid looking at Bluetooth headphones at all.

✅ Instead, you’ll want to look for wireless chat headphone boxes that have either Universal and/or Xbox One printed on it. As long as you make sure to look for this printing on the box, then this headphone will work.

🔥 Many stores don’t allow you to listen to a gaming headset’s sound quality, so you’ll have to buy the headset untried. Whether any specific headphone sounds good is a personal preference, and you can’t fully account for your gift recipient’s personal taste in how they like their headphones to sound. However, if you avoid buying headphones priced below $40, the headphones should provide fair to good sound quality. Below the $100 price point, don’t expect deep, rich bass drivers. That said, headphone drivers have drastically improved in recent years, and the sub-$100 price point tends to be much better quality than what you would have found in the 90s and early 00s.

👍 Randocity recommends a visit to your local Best Buy or Gamestop, or even Amazon, to see which wireless gaming headphones are on sale. Better yet, a gift card avoids the situation entirely and lets the gamer pick their own brand.

Video Game Controller for iPad or iPhone

Here’s another area that would seem easy, but it isn’t. Apple requires a specific hardware certification for all game controllers called MFi. This makes it a little more tricky to find a controller that works.

🛑 Pitfall: There are many game controllers on the market including Microsoft’s Xbox One controller, PlayStation 4’s DualShock controller and even Nintendo’s Pro controller. Don’t be fooled into thinking you can get these to work. Even though all of the aforementioned controllers are Bluetooth, that doesn’t mean they’ll work on the iPad. None of them have the MFi certification. Avoid buying one of these “other” controllers as you cannot get it to work.

✅ Instead, look for and buy only MFi certified controllers, such as the SteelSeries Nimbus controller. Not only does this controller charge using a Lightning cable, it is fully compatible with all Apple devices including the iPhone, iPad, Apple TV and even MacOS.

👍 Randocity recommends the SteelSeries Nimbus controller for Apple devices as it feels the most like a PS4 or Xbox Controller.

Newest iPad and Headphones

With the introduction of the latest iPad using USB-C, this throws yet another dilemma into the works for gift purchasing. This problem also underscores why Apple should never have removed the headphone jack from its devices.

🛑 Pitfall: With the introduction of the current home-buttonless iPad, you’ll also find the unwelcome surprise of a USB-C charging port. This means that any Apple headphones (other than the Airpods) won’t work on this newest iPad. To use either a pair of Lightning or 3.5mm jack headphones, you’ll need an adapter.

✅ Instead, pick up a pair of Bluetooth headphones which will remain compatible with all Apple devices going forward.

🔥 Apple insists on changing its port standards regularly. As a result, you should not buy into any specialty jack wired Apple headphones. If you want to buy any wired headphones, buy the 3.5mm jack version and eventually Apple will create an adapter to its newest port. Since every other device on the planet still supports a 3.5mm jack, you can use these headphones on every other device. Buying Lightning or USB-C headphones means you’ll be extremely limited on where those can be used… and when Apple decides to change its port again, those USB-C or Lightning headphones will be useless.

👍 Randocity recommends gifting Apple AirPods for Apple devices. Not only do they sound great, they’re easy to use (mostly) and they’ll remain compatible with future Apple devices… unless, of course, Bluetooth is replaced with a wireless protocol of Apple’s own design. Being Bluetooth, the AirPods are also fully compatible with many other Bluetooth devices, including the Amazon Echo. Skip the wires, the hassle and the expensive dongles and go wireless with Apple devices.

DVD, Blu-ray and UltraHD 4K Blu-ray

I find it funny that we still have so many optical disc entertainment formats. DVD as a format was introduced in the late 90s and has survived for so many years. Yet, we also now have Blu-ray and UltraHD Blu-ray.

🛑 Pitfall: Read the disc case carefully. Even though DVDs are typically sold in a different sized case, packaging standards in movie entertainment are loose at best, so be sure you are getting the disc you think you are getting. For example, both UltraHD 4K Blu-ray and DVD use black plastic cases. If you’re judging the case strictly by color, you could accidentally pick up the UltraHD version of a movie when you wanted the DVD version.

✅ Choose the best format that can be played by your gift receiver’s equipment.

🔥 During the Holiday season, particularly on Black Friday weekend, you’ll find all sorts of content in Doorbusters. Take advantage, but read the packaging carefully. You don’t want your gift receiver to be surprised that you bought them a Blu-ray when they only have a DVD player, or an UltraHD 4K Blu-ray when they only have Blu-ray. Be a careful shopper: read the box and know what equipment your gift receiver has.

It’s likewise just as bad if you buy a DVD for someone who has an UltraHD 4K TV and Blu-ray player. They won’t want to watch your DVD and will return it for credit towards something else.

Additionally, if you give a DVD or Blu-ray, you may find that the recipient has access to Amazon Prime, Hulu or Netflix. They might already have access to the film or have already watched it. So, be cautious.

👍 I’d recommend a gift card intended for the purchase of a movie. This allows the recipient to buy whatever film they want in whatever format they can play. You’ll want to look up the film and determine its price, then give a gift card that covers that purchase price.

Video Games

Here’s another one you might think can be an easy gift. Unfortunately, it isn’t.

🛑 Pitfall: Video games are very much personal to the gamer. Because there are so many genres and types of games, it’s nearly impossible to choose a game the gamer doesn’t already own, let alone one they might actually like.

✅ Instead, because most new games are $60, a gift card in the amount of $60 will safely cover the purchase of the game.

🔥 If your recipient is an adult, the purchase of any game shouldn’t be a problem. However, if your recipient is a minor, give a gift card to avoid any ESRB rating or content issues with material a parent might not want in the game. Avoid becoming “that aunt” or “that uncle” by buying an inappropriate game for a minor. Because video games are a matter of personal taste, buying any game blind could end in a return. I do realize that gift cards are an impersonal gift, but in some situations, like video games, it is well worth it to play it safe.

👍 Randocity recommends buying gift cards over buying physical game copies, particularly for minors. If you happen to have a specific game request by the receiver and the parent has approved the game, then by all means buy it. If you’re simply shopping blind, then a gift card is Randocity’s recommendation to avoid this pitfall.

Giving the Gift of Music

Here’s another one that should be easy, but it isn’t. If you’re thinking of buying CDs for your tech savvy friend, you might want to ask some questions first.

🛑 Pitfall: Because of music services like Apple Music and Amazon Music Unlimited, where subscribers get access to nearly the full Apple or Amazon music catalog, subscribers no longer need to buy CDs. As long as they remain subscribed to these music services, they have instant access to the most recent music the day of its release.

✅ Instead, it might be wise to avoid this type of content purchase, particularly if you know the person is affluent and a music buff.

🔥 Be careful and ask questions if you’re thinking of gifting a CD. If the recipient has access to Apple Music, Spotify or Amazon Music Unlimited, buying them a CD may result in a return.

👍 Randocity recommends giving iTunes or Amazon gift cards instead of buying a specific CD. With a gift card, they can apply the amount towards their membership or whatever other merchandise or music they wish. This avoids the awkward look you might get when you find out they already subscribe to Apple Music.

Giving the Gift of an Apple Watch

Thinking of giving someone an Apple Watch for the holidays? You need to understand the pitfall here.

🛑 Pitfall: An Apple Watch is entirely dependent on an iPhone to function. Even to get the Apple Watch set up and working as a watch, it must be configured using an iPhone. Further, because the Apple Watch only pairs with an iPhone, don’t give it to someone who only has an iPad, iPod touch or an iPhone 4 or older; it won’t work. It also won’t work for someone who owns an Android phone.

✅ Instead, if you’re not sure whether your gift recipient has an iPhone that will work, I’d suggest getting them a different watch. If the person owns an Android phone, choose one of the Android-compatible watches instead. The Apple Watch doesn’t work at all with Android.

🔥 If you do decide to chance that they own an iPhone, be sure to give them a gift receipt as they may need to return it if they don’t have one.

👎 Randocity recommends avoiding the purchase of an Apple Watch as a gift, particularly if you know the person doesn’t have an iPhone or owns an Android phone. This is a particularly tricky gift item and is likely to end up returned if the recipient doesn’t have an iPhone. If you know the person doesn’t have an iPhone, you’d need to gift them both an iPhone and an Apple Watch… a whole lot more expensive a gift than you might have expected to give. For this reason, I thumbs down 👎 giving the Apple Watch as a blind gift. If you are absolutely 100% certain the person you are giving the Apple Watch to has an iPhone, then go for it.

Gift Receipts

👍 Randocity always recommends asking the store for a gift receipt. Then, include it with any gift you give. This allows the recipient to trade it in should they happen to get two copies of the same item.

Happy Holidays!


Rant Time: SmugMug and Flickr

Posted in botch, business, california by commorancy on November 12, 2018

If you’re a Flickr user, you should be aware of this: SmugMug has bought Flickr and is more than doubling the yearly Pro price. They’re also changing the free tier. Let’s explore.

Flickr Out

When Flickr came about under Yahoo, it was really the only photo sharing site out there. It had a vibrant community that cared about its users and it offered very good tools. It also offered a Pro service that was reasonably priced.

After Marissa Mayer took over Yahoo, she had the Flickr team redesign the interface, and not for the better. It took on a counter-intuitive look and feel that displayed photos in a jumbled mass, making the photos look bad and the interface look even worse.

The last time I paid for Pro service, it was $44.95 for 2 years; that’s $22.48 a year. Not a horrible price for what was being offered… a lackluster interface and a crappy display of my photos.

After SmugMug took over, it has done little to improve the interface. In fact, it is still very much the same as it was when it was redesigned and offers little in the way of improvements. We’re talking about a design of a product that started in 2004. In many ways, Flickr still feels like 2004 even with its current offerings.

Status Quo

While Flickr kept its pricing reasonable at about $23 a year, I was okay with that… particularly with the 2 year billing cycle. I had no incentive to do anything different with the photos I already had in Flickr. I’d let them sit and do whatever they want. In recent months, I hadn’t been adding photos to the site simply because viewership has gone way, way down. At one point, Flickr was THE go-to photo service on the Internet. Today, it’s just a shell of what it once was. With Instagram, Tumblr and Pinterest, there’s no real need to use Flickr any longer.

A true Pro photographer can take their work and make money off of it at sites like iStockphoto, Getty, Alamy and similar stock photo sites. You simply can’t sell your work on Flickr; it has never offered that feature for Pro users. Shit, for the money, Flickr was heavily remiss in not giving Pro users far more tools to at least make some money off of their work.

Price Increase

SmugMug now owns the Flickr property and has decided to more than double the yearly price. Instead of the once $44.95 every 2 years, now they want us to pay $50 a year for Pro service.


[RANT ON] So, what the hell SmugMug? What is it that you think you’re offering now that is worth more than double what Yahoo was charging Pro members before you took over Flickr? You’ve bought a 14 year old property. That’s no spring chicken. And you now expect us to shell out an extra $28 a year for an antiquated site? For what? Seriously, FOR WHAT?

We’re just graciously going to give you an extra $28 a year to pay for a 14 year old product? How stupid do you think we are? If you’re going to charge us $28 extra a year, you damned well better give us much better Pro tools and reasons to pay that premium. For example, offer tools that let us charge for and sell our photos as stock photos right through the Flickr interface. You need to provide Pro users with a hell of a lot more service for that extra $28 per year than what you currently offer.

Unlimited GB? Seriously? It already was unlimited. Photos are, in general, small enough not to even worry about size.

Advanced stats? They were already there. It’s not like the stats are useful or anything.

Ad-free browsing? What the hell? How is this even a selling point? It’s definitely not worth an extra $28 per year.

10 minutes worth of video? Who the hell uses Flickr for video? We can’t sell them as stock video! You can’t monetize the videos, so you can’t even make money that way! What other reason is there to use Flickr for video? YouTube still offers nearly unlimited length video sizes AND monetization (if applicable). Where is Flickr in this process? Nowhere.

Flickr is still firmly stuck in 2004 with 2004 ideals and 2004 mentality. There is no way Flickr is worth $50 a year. It’s barely worth $20 a year. [RANT MOSTLY OFF]

New Subscribers and Pro Features

Granted, my pricing was grandfathered from Yahoo. If you have recently joined Flickr as a Pro user, you’re likely already paying $50 a year… 50 US dollars per year which, I might add, is entirely not worth it.

Let’s understand what you (don’t) get from Flickr. As a Pro user, you’re likely purchasing into this tier level to get more space and storage. But, what does that do for you other than allowing you to add more photos? Nothing. In fact, you’re paying Flickr for the privilege of letting them advertise on the back of your photo content.

Yes, you read that right. Most people searching Flickr are free tier users. Free tier viewers get ads placed onto their screens, including on your pages of content. You can’t control the ads they see or that your page might appear to endorse a specific product, particularly if the ad is placed near one of your photos. Ads that you might actually be offended by. Ads that make Flickr money, but that Flickr doesn’t trickle back into its paying Pro users. Yes, they’re USING your content to make them money. Money that they wouldn’t have had without your content being there. Think about that for a moment!

Advertising on your Content

Yes, that’s right, you’re actually paying Flickr $50 for the privilege of allowing them to place ads onto your page of content. What do they give you in return? Well, not money to be sure. Yes, they do give you a larger storage limit, but that’s effectively useless. Even the biggest photos don’t take much space… not nearly as much space as a YouTube video. Flickr knows that. SmugMug now hopes the Pro users don’t see the wool being pulled over their eyes. Yet, do you see YouTube charging its channels for the privilege of uploading or storing content? No! In fact, if your channel is big enough, YouTube will even share ad revenue with you. Yahoo, now SmugMug, has never shared any of its ad revenue with its users, let alone Pro users. Bilking… that’s what it is.

On the heels of that problem, Flickr has never offered any method of selling or licensing your photos within Flickr. If ever there was a ‘Pro’ feature that needed to exist, it would be selling / licensing photos… like Getty, like iStockphoto, like Alamy… or even like DeviantArt (where you can sell your photos on canvas or mousepads or even coffee mugs). Instead, what has Flickr done in this area? NOTHING… other than the highly unpopular and horrible redesign released in 2013, which was entirely cosmetic (and ugly at that)… and which affected all users, not just Pro. Even further, what has SmugMug done for Flickr? Less than nothing… zip, zero, zilch, nada. Other than spending money to acquire Flickr, SmugMug has done nothing with it… and it shows.

Free Tier Accounts

For free tier users, SmugMug has decided to limit the maximum number of uploaded photos to 1000. This is simply a money making ploy. They assume that free tier users will upgrade to Pro simply to keep their more than 1000 photos in the account. Well, I can’t tell you what to do with your account, but I’ve already deleted many photos to reduce my photo count below 1000. I have no intention of paying $50 a year to SmugMug for the “privilege” of monetizing my photos. No, thanks.

If you are a free tier user, know that very soon they will be instituting the 1000 photo limit. This means you’ll either have to upgrade or delete photos to get below 1000.

Because the Flickr platform is now far too old to be considered modern (I might even say it’s on the verge of being obsolete), and because the last upgrade Marissa had Yahoo perform on Flickr made it look like a giant turd, I’m not willing to pay Flickr / SmugMug $50 a year for that turd any longer. I’ve decided to get off my butt, remove photos, clean up my account and move on. If SmugMug decides to change the free tier further, I’ll simply move many of my photos over to DeviantArt, where there are no such silly limits, and then delete my Flickr account entirely.

If enough people do this, it will hurt SmugMug badly enough to turn that once vibrant Flickr community into a useless wasteland… which, honestly, it already is. I believe that outcome will become reality anyway within about 2 years.


This company is aptly named, particularly after this Flickr stunt. They’re definitely smug about their ability to bilk users out of their money without delivering any kind of useful new product. It would be entirely one thing if SmugMug had spent 6-12 months and delivered a full-featured ad revenue sharing system, a stock photo licensing tool and a storefront to sell photos on shirts, mugs and canvas. With all of these additions, $50 a year might be worth it, particularly if SmugMug helped Flickr users promote and sell their photos.

Without these kinds of useful changes, that $50 is just cash handed over for nothing new. If all you want to do is park your images, you can do that at Google, Tumblr, Pinterest, Instagram and several other photo sharing sites just like Flickr. You can even park them at Alamy and similar sites and make money from your photographic efforts.

Why would you want to park them at Flickr / SmugMug when they only want to use your photos to make money from advertising on a page with your content? It just doesn’t make sense. DeviantArt is actually a better platform and lets you sell your photos on various types of media and in various sizes.

Email Sent to Support

Here’s an email I sent to Flickr’s support team. This email is in response to Margaret, who claims they gave us a “3 year grace period” of lower grandfathered pricing:

Hi Margaret,

Yes, and that means you’ve had more than ample time to make that $50 a year worth it for Pro subscribers. You haven’t and you’ve failed. It’s still the same Flickr it was when I was paying $22.48 a year. Why should I now pay over double the price for no added benefits? Now that SmugMug has bought it, here we are being forced to pay the $50 a year toll when there’s nothing new that’s worth paying $50 for. Pro users have been given ZERO tools to sell our photos on the platform as stock photos. Being given these tools is what ‘Pro’ means, Margaret. We additionally can’t in any way monetize our content to recoup the cost of our Pro membership fees. Worse, you’re displaying ads over the top of our photos and we’re not seeing a dime from that revenue.

Again, what have you given that makes $50 a year worth it? You’re really expecting us to PAY you $50 a year to show ads to free users over the top of our content? No! I was barely willing to do that with $22.48 a year. Of course, this will all fall on deaf ears because these words mean nothing to you. It’s your management team pushing stupid efforts that don’t make sense in a world where Flickr is practically obsolete. Well, I’m done with using a 14 year old decrepit platform that has degraded rather than improved. Sorry Margaret, I’ve removed over 2500 photos, cancelled my Pro membership and will move back to the free tier. If SmugMug ever comes to its senses and actually produces a Pro platform worth using (i.e., actually offers monetization tools or even a storefront), I might consider paying. As it is now, Flickr is an antiquated 14 year old platform firmly rooted in a 2004 world. Wake up, it’s 2018! The iStockphotos of the world are overtaking you and offering better Pro tools.


Reasons to Leave

With this latest stupid pricing effort and the lack of effort from SmugMug, I now firmly have a reason to leave Flickr Pro. As I said in my letter above, I have deleted over 2500 photos from Flickr, bringing my account below the 1000 photo free tier limit. After that, it will remain on the free tier unless SmugMug decides to get rid of that too. If that happens, I’ll simply delete the rest of the photos along with the account and move on.

I have no intention of paying a premium for a 14 year old site that feels 14 years old. It’s 2004 technology given a spit and polish shine using shoelaces and chewing gum. There’s also no community at Flickr, not anymore. There’s really no reason to even host your photos at Flickr. It’s antiquated by today’s technology standards. I also know that I can’t be alone in this. Seriously, paying a huge premium to use a site that was effectively designed in 2004? No, I don’t think so.

Oh, well, it was sort of fun while it lasted. My advice to SmugMug…

“Don’t let the door hit you on the way out!” Buh Bye. Oh and SmugMug… STOP SENDING ME EMAILS ABOUT THIS ‘CHANGE’.

If you’re a Flickr Pro subscriber, I think I’ve made my thoughts clear. Are you willing to pay this price for a 14 year old aging photo sharing site? Please leave a comment below.


How to iCloud unlock an iPad or iPhone?

Posted in botch, business, california by commorancy on October 21, 2018

A lot of people seem to be asking this question. So, let’s explore whether there are any solutions to the iCloud unlock problem.

Apple’s iCloud Lock: What is it?

Let’s examine what exactly an iCloud lock is. When you use an iPhone or iPad, a big part of that experience is iCloud, and you may not even know it. You may not realize how much iCloud you are actually using (which is how Apple likes it), as it is heavily integrated into every Apple device. The iCloud service uses your Apple ID for access. Your Apple ID consists of a username (an email address) and a password. You can enable extended security features like two factor authentication, but for simplicity, I will discuss devices using only a standard login ID and password… nothing fancy.

iCloud is Apple’s cloud services layer that synchronizes services between devices: calendars, email, contacts, phone data, iMessage, iCloud Drive, Apple Music, iTunes playlists, etc. As long as your Apple ID remains logged into these services, you will have access to the same data across all of your devices. Note that your devices don’t have to use iCloud at all; you can disable it and not use any of it. However, Apple makes it terribly convenient to use iCloud’s services, including features such as Find my iPhone, which allows you to lock or erase your iPhone if it’s ever lost or stolen.

One feature that automatically comes along for the ride when using iCloud services is an iCloud lock. If you have ever logged your iPhone or iPad into iCloud, your device is now locked to your Apple ID. This means that if it’s ever lost or stolen, no one can use your device because it is locked to your iCloud Apple ID and locked to Find my iPhone for that user (which I believe is now enabled by default upon logging into iCloud).

This also means that any recipient of such an iCloud locked device cannot use that device as their own without first disassociating that device from the previous Apple ID. This lock type is known as an iCloud lock. This type of Apple lock is separate from a phone carrier lock which limits with which carriers a phone can be used. Don’t confuse or conflate the two.

I should further qualify what “use your device” actually means after an iCloud lock is in place. A thief cannot clean off your device and then log it into their own Apple ID and use the phone for themselves. Because the phone is iCloud locked to your account, it’s locked to your account forever (or until you manually disassociate it). This means that unless you explicitly remove the association between your Apple ID and that specific device, no one can use that device again on Apple’s network. The best a would-be thief can do with your stolen phone is open it up and break it down for limited parts. Or, they can sell the iCloud locked device to an unsuspecting buyer before the buyer has a chance to notice that it’s iCloud locked.

Buying Used Devices

Because the iCloud lock is an implicit and automatic feature enabled simply by using iCloud services, if you’re thinking of buying a used iPhone from an individual or from any online business that is not Apple, you will always need to ask the seller whether the device is iCloud unlocked before you pay. More specifically, ask whether the previous owner has logged out and removed the device from Find my iPhone and from all other iCloud and Apple ID services. If this has not been done, the device will remain iCloud locked to that Apple ID. In that case, avoid the purchase and look for a reputable seller.

What this means to you as a would-be buyer of a used Apple product is that you need to check for this problem immediately, before you walk away from the seller. If the battery on the device is dead, walk away from the sale. If you’re buying a device sight unseen over the Internet, be extremely wary before clicking ‘Submit’. In fact, I’d recommend not buying used Apple equipment from eBay or Craigslist at all because of how easy it is to buy bricked equipment and lose your money. Anything you buy from Apple shouldn’t be a problem. Anything you buy from a random third party, particularly one in China, might be a scam.

Can iCloud Lock be Removed?

Technically yes, but none of the solutions are terribly easy or, in some cases, practical. Here is a list of possible solutions:

1) This one requires technical skills, equipment and repair of the device. With this solution, you must take the device apart, desolder a flash memory chip, reflash it with a new serial number, then reassemble the unit.

Pros: This will fix the iPad or iPhone and allow it to work.
Cons: May not work forever if Apple notices the faked serial number. If the soldering job is performed poorly, the device hardware could fail.


2) Ask the original owner of the device, if you know who they are, to disassociate the iDevice from their account. This will unlock it.

Pros: Makes the device 100% functional. No soldering.
Cons: Requires knowing the original owner and asking them to disassociate the device.

3) Contact Apple with your original purchase receipt and give Apple all of the necessary information from the device. Ask them to remove the iCloud lock. They can iCloud unlock the device if they so choose and if they deem your device purchase as valid.

Pros: Makes the device 100% functional.
Cons: Unlocking Apple devices through Apple Support can be difficult, if not impossible. Your mileage may vary.

4) Replace the logic board in the iPad / iPhone with one from another. Again, this one requires repair knowledge, tools, experience and necessary parts.

Pros: May restore most functionality to the device.
Cons: Certain features, like the Touch ID button and other internal systems, may not work 100% after a logic board replacement.

As you can see, none of these are particularly easy, but none are all that impossible either. If you’re not comfortable cracking open your gear, you might ask a repair center whether they can do any of this for you. However, requesting a reflashed serial number might raise eyebrows at some repair centers, with the assumption that your device is stolen. Be careful when asking a repair center to perform #1 above.

iCloud Locking

It seems that the reason the iCloud lock came into existence is to thwart thieves. Unfortunately, it doesn’t actually solve that problem. Instead, it creates a whole new set of consumer problems. Would-be thieves are still stealing iPads; now they’re also selling these devices iCloud locked to unsuspecting buyers and scamming them out of their money. The thieves don’t care. The only thing this feature does is screw used device consumers out of their money.


That Apple thought they could stop thievery by implementing the iCloud lock shows just how idealistically naïve Apple’s technical team really is. Instead, they created a whole new scamming market for iCloud locked Apple devices. In fact, the whole reason this article exists is to explain this problem.

For the former owner of an iPad which was stolen, there’s likely no hope of ever getting it back. The iCloud lock feature does nothing to identify the thief or return stolen property to its rightful owner. The lock is simply a tiny nuisance to the thief and would-be scammer. As long as they can get $100 or $200 for selling an iCloud locked iPad, they don’t care that it’s locked. Indeed, the existence of this feature makes no difference at all to a thief.

It may reduce the “value” of the stolen property somewhat, but not enough to matter. If it was five-finger discounted, any money had is money gained, even if it’s a smaller amount than anticipated. For thieves, the iCloud lock does absolutely nothing to stop thievery.


Here’s the place where the iCloud lock technology hurts the most. Instead of thwarting would-be thieves, it ends up placing the burden of the iCloud lock squarely on the consumer. If you are considering buying a used device, which should be a simple straightforward transaction, you now have to worry about whether the device is iCloud locked.

It also means that buying an iPhone or iPad used could scam you out of your money if you’re not careful. It’s very easy to buy these used devices sight unseen from online sellers. Yet, when you get the box open, you may find the device is iCloud locked to an existing Apple ID. At that point, unless you’re willing to jump through one of the four hoops listed above, you may have just been scammed.

If you can’t return the device, then you’re out money. The only organization that stands to benefit from the iCloud lock is Apple and that’s only because they’ll claim you should have bought your device new from them. If this is Apple’s attempt at thwarting or reducing used hardware sales, it doesn’t seem to be working. For the consumer, the iCloud lock seems intent on harming consumer satisfaction for device purchases of used Apple equipment… a market that Apple should want to exist because it helps them sell more software product (their highest grossing product).


For honest sellers, an iCloud lock makes selling used iPads and iPhones a small problem. For unscrupulous sellers, there is no problem at all. An honest seller must make sure the device has been disassociated from its former Apple ID before putting the item up for sale; if an honest seller doesn’t know the original owner and the device is locked, it should not be sold. For the unscrupulous seller, the situation becomes one of selling locked gear and potentially trafficking stolen goods.

It should be said that an iCloud locked device is naturally assumed to be stolen. It makes sense: if the owner had really wanted the item sold as used, they would have removed the device from iCloud services… except that Apple doesn’t make this process at all easy to understand.

Here’s where Apple fails would-be sellers. Apple doesn’t make it perfectly clear that selling the device requires removing the Apple ID information fully and completely from the device. Even wiping the device doesn’t always do this as there are many silent errors in the reset process. Many owners think that doing a wipe and reset of the device is enough to iCloud unlock the device. It isn’t.

As a would-be seller, before wiping the device you must go into your iPad or iPhone, manually remove the device from Find my iPhone and log the phone out of all Apple ID services. This includes not only logging out of iCloud, but also logging out of iTunes, email and every other place where Apple requires you to enter your Apple ID credentials. Because iOS requires logging into each of these services separately, you must also log out of each of them separately on the device. Then, wipe the device. Even after all of that, you should double check Find my iPhone from another device to make sure the old device no longer shows up there. In fact, you should walk through the setup process once, to the point where it asks for an Apple ID, to confirm the device is no longer locked to yours.
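Because it’s so easy to miss one of these logouts, it helps to treat the whole thing as an ordered checklist and tick items off. The sketch below is illustrative only; Apple exposes no public API for any of this, and the step names and helper function are my own invention, not anything from iOS:

```python
# Hypothetical pre-sale checklist. Every step here must be performed by
# hand on the device itself; there is no API that does this for you.
PRE_SALE_STEPS = [
    "Turn off Find my iPhone",
    "Sign out of iCloud",
    "Sign out of iTunes / App Store",
    "Sign out of email and any other Apple ID logins",
    "Erase all content and settings",
    "Verify the device is gone from Find my iPhone on another device",
    "Walk through setup to the Apple ID screen to confirm no lock",
]

def remaining_steps(completed):
    """Return the checklist items not yet done, in original order."""
    done = set(completed)
    return [step for step in PRE_SALE_STEPS if step not in done]
```

Only when `remaining_steps(...)` comes back empty is the device truly safe to hand to a buyer; skipping even one item can leave it iCloud locked.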

This is where it’s easy to sell a device thinking you’ve cleared it all out when you actually haven’t. It also means a device can be legitimately sold as used yet, because it wasn’t properly removed from iCloud, appear stolen. Instead, Apple needs to offer a ‘Prep for Resell’ setting in Settings. Such a setting would not only wipe the device, it would 100% ensure an iCloud unlock, logging the device out of all Apple ID services and truly wiping it clean as though it were an unregistered, brand new device. If it’s a phone, it should also carrier unlock it so it can accept a SIM card from any carrier.

Apple makes it very easy to set up brand new devices, but equally difficult to properly clear off a device for resale. Apple should make this part a whole lot easier for would-be sellers. If need be, maybe Apple should sell a reseller toolkit that not only ensures devices are iCloud unlocked, but also runs diagnostic checks to ensure they’re worthy of being sold.


If you like what you’ve read, please leave a comment below and give me your feedback.


Software Engineering and Architecture

Posted in botch, business, Employment by commorancy on October 21, 2018

Here’s a subject with which I’m all too familiar and one that needs commentary. Since my profession is technical in nature, I’ve definitely run into various issues regarding software engineering, systems architecture and operations. Let’s explore.

Software Engineering as a Profession

One thing that software engineers like is being able to develop their code on their local laptops and computers. That’s great for rapid development, but it causes many problems later, particularly when it comes to security, deployment, systems architecture and operations.

For a systems engineer / devops engineer, the problem arises when that code needs to be productionalized. This is fundamentally a problem with pretty much any newly designed software system.

Having come from a background of systems administration, systems engineering and devops, I know there is a lot to consider when deploying freshly designed code.

Designing in a Bubble

I’ve worked in many companies where development occurs offline on a notebook or desktop computer. The software engineer has built out a workable environment on their local system. The problem is, this local environment doesn’t take into account certain constraints which may be in place in a production environment, such as internal firewalls, ACLs, web caching systems, software version differences, lack of compilers and other security or software constraints.

What this means is that, far too many times, deploying the code for the first time is fraught with problems. Specifically, problems that were never encountered on the engineer’s notebook… and problems that sometimes fail extremely badly. In fact, some of these failures are silent (the worst kind), where everything looks like it’s functioning normally, but the code is sending its data into a black hole and nothing is actually working.
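To make the silent-failure point concrete, here’s a minimal sketch in Python. The function and transport names are my own illustration, not from any particular codebase, and the "firewall" is simulated; the point is only the difference between swallowing an error and surfacing it:

```python
import logging

def send_event_silent(transport, payload):
    """Anti-pattern: swallow every error. The caller never learns
    that the data just went into a black hole."""
    try:
        transport(payload)
    except Exception:
        pass  # the failure disappears right here

def send_event_checked(transport, payload):
    """Better: log and report the failure so a bad first deployment
    is visible immediately instead of weeks later."""
    try:
        transport(payload)
        return True
    except Exception:
        logging.exception("delivery failed for payload %r", payload)
        return False

def firewalled_transport(payload):
    """Stand-in for a production constraint the laptop never had,
    e.g. an internal firewall blocking the destination host."""
    raise ConnectionError("blocked by internal firewall")
```

On the engineer’s notebook, both versions appear to “work”; only the checked version tells anyone anything when a production firewall gets in the way.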

This is the fundamental problem with designing in a bubble without any constraints.

I understand that building something new is fun and challenging, but not taking into account the constraints the software will be under when finally deployed is naive at best and reckless at the very worst. It also makes life as a systems engineer / devops engineer a living hell for several months until all of these little failures are sewn shut.

It’s like receiving a garment that looks complete, but on inspection, you find a bunch of holes all over that all need to be fixed before it can be worn.

Engineering as a Team

To me, this situation means that the software engineer is not a team player. They might be playing on the engineering team, but they’re not playing on the company team. Part of software design is designing for the full use case of the software, including not only code authoring, but systems deployment.

If systems deployment isn’t your specialty as a software engineer, then bring in a systems engineer and/or devops engineer to help guide your code during the development phase. Designing without taking the full scope of that software release into consideration means you didn’t earn your salary and you’re not a very good software engineer.

Yet, Silicon Valley is willing to pay these “Principal Engineers” top dollar while they fail to do their jobs.

Building and Rebuilding

It’s entirely a waste of time to get to the end of a software development cycle and claim “code complete” when that code is nowhere near complete. I’ve seen so many situations where software engineers toss their code to us as complete and expect the systems engineer to magically make it all work.

It doesn’t work that way. Code works when it’s written in combination with understanding of the architecture where it will be deployed. Only then can the code be 100% complete because only then will it deploy and function without problems. Until that point is reached, it cannot be considered “code complete”.

Docker and Containers

More and more, systems engineers want out of the long, drawn out business of integrating square code into a round production hole. Eventually, after much time has passed, molding the code into that round hole is possible, but this usually takes months. Months that could have been avoided if the software engineer had designed the code in an environment where the production constraints exist.

That’s part of the reason for containers like Docker. When a container like Docker is used, the whole container can then be deployed without thought to square pegs in round holes. Instead, whatever flaws are in the Docker container are there for all to see because the developer put it there.
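As a rough illustration of that idea, a Dockerfile lets the developer declare the runtime environment explicitly, so the same constraints travel from laptop to production. This is a hypothetical sketch: the base image, file names, port and start command are assumptions for a generic Python service, not any specific project:

```dockerfile
# Pin the exact runtime the service was developed against,
# so the laptop and production run the same bits.
FROM python:3.11-slim

WORKDIR /app

# Install pinned dependencies first to take advantage of layer caching.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself.
COPY . .

# Run as a non-root user, matching a typical production security constraint.
RUN useradd --create-home appuser
USER appuser

EXPOSE 8080
CMD ["python", "service.py"]
```

Built and run with `docker build` and `docker run`, whatever works (or fails) inside this container on the developer’s laptop is the same thing operations will see, because the developer put it there.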

In other words, the middle folks who take code from engineering and mold it onto production gear don’t relish the thought of ironing out hundreds of glitchy problems until it all seamlessly works. Sure, it’s a job, but at some level it’s also a bit janitorial, wasteful and unnecessary.


Part of the reason for these problems is the delineation between the engineering teams and the production operations teams. Because many organizations separate these two functional teams, it forces the above problem. Instead, these two teams should be merged into one and work together from project and code inception.

When a new project needs code that will eventually be deployed, the production team should be there to move the software architecture onto the right path and to choose the correct path for that code throughout its design and build phases. In fact, every company should mandate that its software engineers be a client of the operations team. Meaning, they’re writing code for operations, not the customer (even though the features eventually benefit the customer).

The point here is that the code’s functionality is designed for the customer, but deploying and running that code is entirely for the operations team. Yet, so many software engineers don’t give a single thought to how much the operations team will be required to support that code going forward.

Operational Support

For every component needed to support a specific piece of software, there needs to be a likewise knowledgeable person on the operations team to support that component. Not only do they need to understand that it exists in the environment, they need to understand its failure states, its recovery strategies, its backup strategies, its monitoring strategies and everything in between.

This is also yet another problem that software engineers typically fail to address in their code design. Ultimately, your code isn’t just to run on your notebook for you. It must run on a set of equipment and systems that will serve perhaps millions of users. It must be written in ways that are fail safe, recoverable, redundant, scalable, monitorable, deployable and stable. These are the things that the operations team folks are concerned with and that’s what they are paid to do.
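Here’s a minimal sketch (my own illustration, not any team’s actual code) of one of those fail-safe qualities operations folks look for: retrying a flaky operation with exponential backoff, so transient production hiccups self-heal while persistent failures still surface loudly for monitoring:

```python
import time

def call_with_retries(op, attempts=3, base_delay=0.1):
    """Run op(), retrying with exponential backoff on failure.

    Transient production hiccups (network blips, rolling restarts)
    get absorbed; a persistent failure is re-raised after the final
    attempt instead of being swallowed silently."""
    for attempt in range(attempts):
        try:
            return op()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: fail loudly so monitoring sees it
            time.sleep(base_delay * (2 ** attempt))
```

Code written on a notebook rarely needs this; code serving millions of users behind load balancers and rolling restarts absolutely does, and it’s far cheaper to design it in than to bolt it on after the pages start.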

Each new code deployment makes the environment just that much more complex.

The Stacked Approach

This is an issue that happens over time. No software engineer wants to work on someone else’s code. Instead, it’s much easier to write something new from scratch. That’s easy for the software engineer, but difficult for the operations team. As these new pieces of code get written and deployed, they drastically increase the technical debt and burden on the operations staff. Meaning, the operations team ends up supporting more and more and more components if none ever get rewritten or retired.

In one organization where I worked, we had such an approach to new code deployment. It made for a spider’s web mess of an environment. We had so many environments and so few operations staff to support them that the on-call staff were overwhelmed by the amount of incessant pages from all of these components.

That’s partly because the environment was unstable, but that’s partly because it was a house of cards. You shift one card and the whole thing tumbles.

Software stacking might seem like a good strategy from an engineering perspective, partly because the software engineers don’t have to provide first-line support for it. Sometimes they don’t have to support it at all. Yes, stacking makes code writing and deployment much simpler.

How many times can an engineering team do this before the house of cards tumbles? Software stacking is not an ideal any software engineering team should endorse. In fact, it simply comes down to laziness. You’re a software engineer because writing code is hard, not because it is easy. You should always do the right thing even if it takes more time.

Burden Shifting

While this is related to software stacking, it is separate and must be discussed separately. We called this problem “throwing shit over the fence”. It happens a whole lot more often than one might like to admit. When designing in a bubble, it’s really easy to call “code complete” and throw it all over the fence as someone else’s problem.

While I understand this behavior, it has no place in any professionally run organization. Yet, I’ve seen so many engineering team managers endorse this practice. They simply want their team off of that project because “their job is done”, so they can move them onto the next project.

You can’t just throw shit over the fence and expect it all to magically work on the production side. Worse, I’ve had software engineers actually ask my input on the use of specific software components in their software designs. Then, when their project failed because that component didn’t work properly, they threw me under the bus for the choice. Nope, that’s not my issue. If your code doesn’t work, that’s a coding and architecture problem, not a component problem. If that open source component didn’t work in real life for other organizations, it wouldn’t be distributed around the world. If a software engineer can’t make that component work properly, that’s a coding and software design problem, not an integration or operational problem. Choosing software components is the software engineer’s choice: use whatever is necessary to make the software system work correctly.

Operations Team

The operations team is the lifeblood of any organization. If the operations team isn’t given the tools to get its job done properly, that’s a problem with the organization as a whole. The operations team is the third-hand recipient of someone else’s work. We step in and fix problems, many times without any knowledge of the component or the software. We do this sometimes by deductive logic and trial and error, sometimes by documentation (if it exists) and sometimes with the help of a software engineer on the phone.

We use all available avenues at our disposal to get that software functioning. In the middle of the night the flow of information can be limited. This means longer troubleshooting times, depending on the skill level of the person triaging the situation.

Many organizations treat their operations team as a bane, as a burden, as something that shouldn’t exist but does out of necessity. This degrading view typically comes top down from the management team. Instead of treating the operations team as second class citizens, treat this team with all of the importance it deserves. The operations team is not a burden, nor does it exist simply out of necessity. It exists to keep your organization operational and functioning. It keeps customer data accessible, reliable, redundant and available. It is responsible for long term backups, storage and retrieval. It’s responsible for the security of that data and for making sure spying eyes can’t get to it. It is ultimately responsible for keeping the customer experience at a high standard of excellence.

If you recognize this problem in your organization, it’s on you to try and make change here. Operations exists because the company needs that job role. Computers don’t run themselves. They run because of dedicated personnel who make it their job and passion to make sure those computers stay online, accessible and remain 100% available.

Your company’s uptime metrics are directly impacted by the quality of your operations team staff members. These are the folks using the digital equivalent of chewing gum and shoelaces to keep the system operating. They spend many a sleepless night keeping these systems online. And, they do so without much, if any, thanks. It’s all simply part of the job.

Software Engineers and Care

It’s on each and every software engineer to care about their fellow co-workers. Tossing code over the fence assuming there’s someone on the other side to catch it is insane. It’s an insanity that has run for far too long in many organizations. It’s an insanity that needs to be stopped and the trend needs to reverse.

In fact, merging the software engineering and operations teams into one will stop it. It will stop by merit of having the same bosses operating both teams. I’m not talking about the VP level only. I’m talking about software engineering managers taking on the operational burden of the components they design and build. They need to understand and handle day-to-day operations of these components. They need to wear pagers and understand just how much operational work their component creates.

Only then can engineering organizations change for the positive.

As always, if you can identify with what you’ve read, I encourage you to like and leave a comment below. Please share with your friends as well.


Rant Time: MagicJack – Scam or Legit?

Posted in botch, business, scam, scams by commorancy on September 11, 2018

The magicJack company offers a voice over IP phone service. You can use it with an app on your phone or with a device plugged into an actual landline-type phone. It does require Internet to function. Either way you go, it’s VoIP, and the company has very questionable and deceptive billing practices. Let’s explore.

Internet Phone Service Choices

If you’re in need of phone services on a device that only has access to WiFi, then a voice over IP (VoIP) service is what you need. There are many different VoIP services available on the Internet. You can even make audio and video calls via FaceTime on iOS, via Skype on pretty much any mobile or desktop computer, or even via Google Hangouts. For this reason, magicJack is just one more VoIP phone service in a sea of choices.

Why would you want to choose magicJack? Initially, they were one of the lowest priced VoIP phone services. They also offered a tiny computer dongle that made it easy to plug in a standard home phone. That was then. Today, mobile devices make this a different story. Lately, this company has raised their prices dramatically and engaged in some quite deceptive and questionable billing practices.

911 Service

As with any phone service that offers the ability to use 911, the municipality levies charges for that service. You’d think that the payments magicJack is already collecting for services would also cover those 911 charges. I certainly did. Instead, magicJack isn’t willing to part with any of its service revenue to actually cover services that, you know, it provides as part of your phone service… like any other phone company does.

MagicJack seems to think they can simply pass on said charges right to you in an email invoice and have you pay them separately. Here’s where magicJack gets firmly into scam and deceptive billing territory.

I’m sorry magicJack, but you’re forcing the 911 service when we don’t really need it or want it on that magicJack VoIP phone line. If you’re going to force this service as part of the overall service, then damned well you need to suck it up and pay the expenses from what we pay you. There is no way in hell I’m going to pay an ‘extra’ bill simply because you are unwilling to use the collected service fees to pay for those bills, like any other carrier on the planet. It’s not my problem that you choose not to do this.

You, magicJack, need to pay those bills to the 911 service. It’s your service, you forced 911 onto my line and now you must pay the piper. If you can’t do this, then you need to go out of business. This means, you need to collect the 911 service fees at the time you collect the payment for your services. And you know what, you already collected well enough money from me to cover those 911 service fees many times over. So, hop to it and pay that bill. This is not my bill to pay, it’s yours.

MagicJack Services

Should I consider magicJack services as an option when choosing a VoIP phone service? Not only no, but hell no. This service doesn’t deserve any business from anyone! This is especially true considering how many alternatives exist for making phone calls in apps today. Skip the stupidly deceptive billing hassles and choose a service that will bill you properly for ALL services rendered at the time of payment.

MagicJack is entirely misinformed if it thinks it can randomly send extra bills for whatever things it deems appropriate. Worse, magicJack is collecting payments for that 911 service, but you have no idea whether that money will actually make it to the 911 municipal services in your area. It might not, and you may still receive a bill. In fact, if the municipality does send you a bill, you should contact them and tell them to resend their bill to magicJack and collect the fees owed from magicJack, which has already collected more than enough in service payments to cover any and all phone services. If magicJack claims otherwise, they are lying. If you are currently using magicJack’s services, you should cancel now (even if you have credit remaining).

Is magicJack a scam? Yes, considering these types of unethical and dubious billing practices. Even though their VoIP service works, it’s not without many perils dealing with this company. As with any service you buy into, Caveat Emptor.

MagicJack Headquarters

Here is the absolute biggest red flag of this scam company. MagicJack claims their corporate headquarters address is located here:

PO BOX 6785
West Palm Beach, FL 33405

Uh, no. Your headquarters cannot be inside of a PO Box.

Yelp claims that magicJack’s US address is here:

5700 Georgia Ave
West Palm Beach, FL 33405

Better, but still not accurate. This is not their corporate headquarters. This is simply a US office address. Who knows how many people actually work there? We all should know by 2018 just how many scams originate from Florida.

When you visit magicJack’s web site, nowhere on any of its pages does it show the company’s actual physical headquarters address. This is a HUGE red flag. Where is magicJack’s actual headquarters?

magicJack VocalTec Ltd (opens Google Maps)
Ha-Omanut Street 12
Netanya, Israel

As a point of consumer caution, you should always be extra careful when purchasing utility and fundamental services from any Israeli (or other middle east) companies. Worse, when companies cannot even be honest about where their corporate headquarters are on their own web site, that says SCAM in big red letters.

Class Action Lawsuit

Here’s another situation where this company needs to be in a class action lawsuit. I’m quite certain there are a number of folks who have been tricked into this scammy outfit and are now paying the price for their unethical and scammy business practices. However, because they are located in Israel, setting up a class action lawsuit against this company may be practically impossible. Better, just avoid the company and buy your phone services from U.S. based (or other local) companies where they are required to follow all local laws.

Rating: 1 star out of 10
Phone Service: 5 out of 10 (too many restrictions, limits call length)
Customer Service: 1 star out of 10
Billing: 0 stars out of 10
Overall: Scam outfit, cannot recommend.

