Random Thoughts – Randocity!

Donald Trump indicted?

Posted in bankruptcy, politics by commorancy on April 2, 2023


Yes, and this is likely to be the first of many indictments to come. Should Donald Trump be indicted for anything? Clearly, the MAGA Republicans say, “No.” Let’s explore.

Note: I’ve typically avoided political and controversial topics like this one on Randocity, not because they aren’t worth writing about, but because writing about them is mostly fruitless. However, this topic is so urgent that it must be said.

MAGA Republicans

Let’s lead with the elephant in the room. Clearly, these MAGA Republicans are bad for the United States. They’re bad for United States citizens. They’re bad for the world. Yet here we are once again, with these MAGA cultists jumping to Donald Trump’s defense without even knowing whether the indictment has merit.

Donald Trump is a sleazy, off-kilter, man-child wannabe politician. There’s no doubting this. The man certainly appears to be successful by all measures, but that’s only because he uses his alleged victimhood to raise money every single time someone acts against him.

Worse, the MAGA Republican base always seems to rally around this man as if he’s some kind of cult leader messiah. Yes, MAGA is literally a cult; the cult of Donald Trump.

Liberal Democrat?

I can hear all of the MAGA Republicans groaning over this article at this point, claiming this author is a liberal Democrat. You don’t know me. You don’t know my views. As an author, it is a journalist’s prerogative and, indeed, obligation to call out that a duck is a duck. You and Donald Trump can’t waddle your way out of this. Be upset. Fine. But that doesn’t change the facts above. Donald Trump’s cult IS a cult. There is no way around that.

You don’t see liberal Democrats rallying around Joe Biden carrying flags with Biden’s face on them as if he were some kind of messianic figure. Just the opposite, in fact. A lot of liberal Democrats don’t like what Joe Biden is doing as President of the United States either and are more than willing to call out each and every fact or action they don’t like. Yet these liberal Democrats also don’t stand on street corners waving flags with Biden’s face when he’s being persecuted. On the other hand, the MAGA Republicans rarely ever speak ill of their messiah, Donald Trump, even when Donald Trump called for people to inject bleach into themselves to rid themselves of COVID-19, a truly medically dangerous suggestion. Yet they’re so willing to drag out the messianic flags and stand on street corners calling Donald Trump the best President ever. That’s a delusional cultist.

Intelligence vs Not

One of the things that much of Donald Trump’s base has proven time and time again is that they’re far less educated than the average liberal Democrat. This lack of education may be what’s feeding the gullible nature of MAGA people, leading them to become cultists for Trump. When you can’t exactly think for yourself, letting other people think for you seems like bliss. Sure, as a MAGA Republican, you may think you’re good at making arguments, but most of those arguments end up stupid and don’t work out for you. This is the reason Fox News is so bad at it. They hire moronic TV hosts (and lame piece writers) and then expect intelligence out of each and every one of them. You (don’t) get what you (don’t) pay for. You can’t pay dumb people to instantly become intelligent.

It’s quite clear that Laura Ingraham, Sean Hannity and Tucker Carlson check their education at Fox News’ door every day on the way in, even though all three claim to be college educated. If I worked at the colleges these three allegedly attended, I’d summarily revoke their degrees. Of course, these three aren’t required to have an education to work at Fox News. A comb, a suit and a dusting of makeup in the makeup artist’s chair and they’re good to go. The teleprompter and remedial reading skills do all of the rest. You don’t need a college education to read from a teleprompter.

It’s sad that Fox News has had to stoop to such (uneducated) lows to try to gain (and retain) its ratings… ratings, I might add, that come from the least educated 10-30% of the United States.

Once again, I can hear the MAGA Republicans groaning. Stop groaning and go get an education. That or stop reading and go listen to your (sym)pathetic “friends” over at Fox News or News Max so you get today’s fix of unreality.

Donald Trump Indicted in New York

Donald Trump formerly lived in New York and has operated businesses in and out of New York both while living there and since departing the state. It’s actually only relatively recently that Donald Trump moved out of New York state, first to the White House as President and then to Florida to live full time at Mar-a-Lago. Simply because Donald Trump was once President and now lives in Florida does not mean he didn’t perform illegal activities while residing and operating businesses in New York prior to his Presidency.

Clearly, since the Manhattan District Attorney, Alvin Bragg, has brought an indictment, Bragg (and a grand jury) believes that there is sufficient evidence of wrongdoing on the part of Donald Trump.

Donald Trump claims it’s all a political witchhunt and all politically motivated, but Trump knows what he has done both prior to and after leaving New York. Nearly every person indicted for a criminal offense initially pleads “not guilty,” but that doesn’t mean they are truly innocent. Only a trial will uncover whether that is true.

Considering that there are 30+ charges pending under Donald Trump’s New York indictment, that sheer volume doesn’t say “petty misdemeanor.” If there had been one or two charges, a petty misdemeanor might sound more reasonable, but 30 or more? No. 30+ charges definitely says something larger is at work… something that can only be decided via a judge, courtroom and jury.

Yes, it’s entirely possible that Donald Trump could be acquitted of all charges, but we won’t know that until or unless the charges and the evidence have been presented and a trial jury has decided.

Trial by Jury

One thing that Donald Trump and his cult following (including cult members Kevin McCarthy and Lindsey Graham) have continually put forth is his innocence. More than this, they have put forth that the DA has “weaponized” the judicial system against politicians, or more specifically, against Donald Trump as a politician. Again, I say, “No.” Instead, I’ll strongly counter that Trump has weaponized his cult following against the United States’ sovereignty.

A District Attorney’s job is to find wrongdoing and prosecute it. Clearly, Trump has at least left a trail of wrongdoing that has now been uncovered… in part with the help of Donald Trump’s former attorney and now convicted felon, Michael Cohen. Regardless of Cohen’s conviction and release, and regardless of Cohen’s disbarment as a lawyer, that doesn’t discredit or discount his statements about what he did for Donald Trump while in his employ… especially when those statements can be fully corroborated using physical evidence.

Trump entrusted many, many of his own personal and legal matters to Cohen as his attorney. Trump then threw Cohen under the bus to serve prison time over those very same matters. What did Donald Trump think would happen after? Did Trump think that Cohen would step out of prison and jump right back into Trump’s good graces? No. Cohen is doing what anyone who gets thrown under the bus does: whistleblow. Cohen just so happened to whistleblow right into the Manhattan DA’s ear… and clearly Bragg has listened intently. Cohen has likely managed and presided over many, many shady dealings in Donald Trump’s affairs, not just the Stormy Daniels “affair.”

You don’t see 30+ charges (and requests to see Michael Cohen 20+ times for testimony) over a single night’s affair requiring one hush money payment to porn star Stormy Daniels. There’s a whole lot more that’s been uncovered about Donald Trump’s New York affairs than a single hush money payment. Clearly, Michael Cohen has been spilling the beans about everything in Trump’s dealings while Cohen was Trump’s attorney.

This extensive situation must be resolved by using a trial with a jury. There is no other way to resolve this. However, time always seems to be on Trump’s side. Meaning, Trump can delay, delay, delay until a later day. Trump also hopes that that later day is much more inconvenient for Bragg, or better, a day when a Republican appointee has taken office and these charges can be dropped against Trump. At least, that’s what Donald Trump is hoping. This situation sucks for the United States, but it is our legal system (not) at work.

Were this any person other than Donald Trump, these legal shenanigans would not work against a DA. However, Trump seems to be able to slip out of these situations with all due ease (wielding time, loopholes and his MAGA cultists as weapons).

Further Indictments and Trump’s Future

Let’s hope that Donald Trump’s potential federal trials are not so easily skirted. I’m not fully writing off Bragg’s trial yet, but I’m also not holding out much hope that this trial will land in a court before 2024, the Presidential election year. If this trial lands at any point during 2024, Bragg will likely be forced to postpone the trial until the election is over, pending the election outcome. This means even longer waiting and possibly no trial at all.

Unless Alvin Bragg can bring this to trial before the end of 2023, his hopes of actually seeing the charges in the indictment stick to Trump are fading rapidly. Alongside Trump’s false, but very noisy “witchhunt” rhetoric, which is also being parroted by his MAGA cult followers, we could also end up waiting 2 years or longer for a trial to convene. This lengthy wait may even cause the charges to completely evaporate. I’m all for holding people legally accountable, but in Trump’s case, it must be performed as rapidly as possible; from indictment to trial in no more than 3 months. If that’s not feasible, then bringing an indictment might be fruitless because of Trump’s inevitable delaying tactics.

It’s nearly guaranteed that if Trump can delay this New York trial for long enough, the charges are likely to disappear. This is why any attorney seeking to bring charges against Trump must plan to execute not only the indictment and the charges carefully, but they must also weigh the delay games Trump is likely to play. If Trump can successfully delay 6 months, 12 months, 24 months or even longer, it might exceed the statute of limitations and it could allow Trump to retake Presidential office (or see another Republican become elected) to get out of the charges entirely. The longer a trial takes to convene, the less likely Trump is to even see a trial at all. Bragg needs to weigh all of this carefully.

Could Bragg continue to levy these charges if Trump is re-elected as President? Probably not. If Trump is re-elected President, these New York charges along with any federal charges are likely to evaporate under Trump’s change in cabinet. Bragg’s charges could technically remain, but Trump’s federal government could place extraordinary (retaliatory) pressure on the state of New York, forcing the charges to be dropped. Worse, how would it look for Alvin Bragg to bring state criminal charges against a newly elected President?

This is why it’s nearly impossible to run these indictments at this late date. In fact, any Federal indictments should have already been levied by the DOJ in 2021 or 2022 so that Trump could potentially be on trial by the end of 2023. It’s actually too late to levy any federal DOJ charges via indictment against Trump due to the upcoming 2024 election. If a federal indictment were to be levied at this point in 2023, there is likely no possible way a federal trial could occur before November 2024 (the Presidential election). At this point, the feds are going to be forced to wait until after the 2024 election to have any hope of indicting Trump, assuming he’s not re-elected. The prospect of the DOJ indicting Trump actually fades every day that goes by without an indictment at this point. Once 2024 arrives without a DOJ indictment, there very likely won’t be an indictment until after November of that year.

If ANY other Republican candidate takes Presidential office and replaces Joe Biden, all federal indictments (or investigations) against Trump are likely to be wiped away with a single pardon. State charges don’t get wiped with a pardon, but potential federal charges (and federal crimes) do. The Department of Justice likely fully understands this ticking-clock dilemma, particularly at this late date. At this point, it is likely far too late for the DOJ’s special counsel, Jack Smith, to hand down a Department of Justice indictment over the charges of insurrection or the classified documents case. The only way these cases would still be possible is if Jack Smith uses his ties to the ICC to levy charges under the ICC, which is entirely separate from and outside of the Department of Justice. This “dual counsel” aspect of Jack Smith may actually be the subtext for why Merrick Garland “hired” Jack Smith as special counsel to handle Trump’s case. Merrick Garland wouldn’t stand in the way were Jack Smith to levy an ICC indictment against Donald Trump and then hold an international trial over Trump’s espionage charges at The Hague.

Further, if the United States changes to a Republican-led Presidency, and even if that newly elected President (assuming it is not Trump) chooses to pardon Trump, any ICC indictments still remain. The ICC acts outside of the United States’ jurisdiction. The United States is not a current signatory to the ICC, but that situation acts entirely in Merrick Garland’s favor, even if Garland ends up ousted as the DOJ’s head. The jurisdictions that are signatories to the international court may collectively have enough leverage over the United States to force it to hand over Trump for an ICC indictment and international trial, regardless of whatever political office Trump holds (or doesn’t).

In other words, with Jack Smith as counsel, all may not be copacetic for Donald Trump even after the 2024 election ends. By allowing Jack Smith to deep dive into Donald Trump’s DOJ investigation, Garland may see the ICC weigh in on Trump’s legal situation where the DOJ is unable. Garland knew the stakes of Trump’s situation, and it appears that Garland may have used Jack Smith as an ICC insurance policy in case the DOJ is unable to bring a proper indictment in time.

Republican Readers

If you are a Republican and you’re still reading up to this point, I applaud you. It means that you want to better understand Donald Trump’s predicament. Donald Trump is most definitely skirting laws that the rest of us would never be able to skirt. What it ultimately says is that Donald Trump is above the laws of the nation… and that situation is not right. No man is above the law. Yet, Donald Trump firmly believes that he is… and so do his sycophant cult members.

MAGA Republicans continue to endorse a man who wishes to avoid legal consequences by becoming a dictator over the United States, solely to make sure no laws ever apply to HIM while laws happily apply to everyone else. Why would you as a Republican want that, not just for you and your family, but for the rest of the United States?

Democracy keeps the United States whole. The rule of law glues our democracy together. Dictatorship ensures neither remain and that the United States falls. Your very livelihood, family and indeed everything you hold dear is at stake. You may think that this man’s MAGA ideals are worthy, but Trump’s ideals are only worthy of collapsing the United States and turning the United States back into a third world nation.

If Donald Trump or any of his sycophant ilk, including Kevin McCarthy, Lindsey Graham, Mitch McConnell, Marjorie Taylor Greene, Ron DeSantis, Greg Abbott, Josh Hawley or any other doofuses, manage to become President, democracy is likely to end.

The End of Democracy?

What does it mean if democracy ends? It means you won’t have a job. It means millions won’t have a job. It means the loss of income. It means the loss of whatever money or savings you have. It means the loss of your home. It means no more public schools for your children. It means no more food. It means restaurants close. It means that the United States as you know it is over. It means those with guns survive a bit longer than those without. It means the medical system serves those who are favored by the dictator while leaving those who aren’t to die.

What comes in democracy’s place is no more elections. No more law enforcement, except against you if you speak out against the current regime. No more free press. No more constitution at all. It effectively means the situation that now exists in China and Russia, except it’ll be under Donald Trump in the United States. Puerto Rico will likely withdraw its involvement with the United States, as many other US territories are likely to do (e.g., Guam, US Virgin Islands, American Samoa, etc). Even Hawaii might try to become a territory of another nation since it’s a set of islands in the Pacific and not physically connected to the 48 contiguous states.

Further, it means the loss of allies and the loss of NATO status. The military will become, well, who really knows? There’s no way to know what a dictator might do with a military the size of the United States Military, nor the nukes under his control. It also means that those who live on or near a border, such as the border with Mexico, will see the very real possibility of Mexican military takeovers of that border land and property. 50 states? Not anymore, with Russia firmly moving to take back Alaska and Mexico likely taking over border states.

Let’s just say that the life that you have come to know and understand is over. You won’t live under the rule of law, you will live under the rule of a supreme leader… one who will just as quickly kill you as look at you. Death will likely become the means to solve all legal problems. If you break the law, you die… even for a simple matter of theft of food simply to feed yourself or your family… even if that food was grown by you in your backyard. Once democracy is lost, your land is no longer yours and no longer is anything that sits on it.

Once democracy ends and a dictator comes into power, what you knew of the United States is gone. You can’t vote someone out of office if you’re not allowed to vote. What is the first thing that Donald Trump does if he’s re-elected president? He halts all future elections indefinitely, state or federal. An elected person cannot be voted out of office when elections don’t take place. Don’t think that Trump won’t do this. Further, Democrat led states will be either disbanded or new state leaders will be appointed by Trump, pushing out any remaining Democrat control.

This is who and what you’re endorsing and wanting to vote back into office as a MAGA Republican. You’re not voting in freedom. You’re voting out the ownership of everything you own. You’re voting out your very family’s well being. You’re voting out America’s future.


Why didn’t this happen in 2016? It didn’t happen in 2016 because Donald Trump didn’t have a vendetta against the United States. Today, Donald Trump has a very strong vendetta; a vendetta so strong that he’ll do anything once he regains power to ensure that vendetta is fulfilled and that he remains in power indefinitely. That includes shutting down democracy.

Once that happens, the rest of the world will retaliate by cutting off the United States from, well, just about anything and everything (imports and exports alike). No more imported food, clothing, oil, energy, products and various other stuff that keeps America economically rolling. No more exports will be accepted. That will collapse both Main Street and Wall Street alike. America’s dollar will become so devalued and worthless as to be worth less than one Mexican peso. Mexico’s money will likely become worth more than what’s left of the USD. Trump will have to seek new allies and trading partners. Better buy up some gold and hoard it in your house. It’ll be the only thing you can buy stuff with… assuming Trump doesn’t force the military to go on raids and take everything you own away from you.

At this point, the United States will have to be renamed to something else… probably something like Trump Nation… or whatever naming whim Trump dreams up. What this also means is a major exodus of people flooding out of what’s left of the United States and into Canada or Mexico or other locations. Some will leave and head to Europe, but Europe (and the rest of the world) will suffer dearly from the collapse of the United States.

Staple products and big ticket items like cars will disappear. Manufacturing will cease because no one in the United States will have the money to buy these products. Grocery stores and most forms of commerce will cease. Barter will take hold, assuming Trump doesn’t annex the entirety of the United States land (and everything on it) into one big land mass owned by himself, leaving no borders, no ownership and no more need for state legislators or state capitols. Meaning, you won’t be left with anything other than perhaps the clothes on your back with which to barter. A post-Trump-dictatorship economy will make the Great Depression decade of 1929-1939 seem like a pin prick by comparison.

The only people who will serve Trump directly are those who continue to lick his boots. The rest will be thrown to the curb and/or arrested and possibly disappeared (read killed). As for jails and current inmates, who really knows? Trump will deal with these institutions at his own fancy. Perhaps moving some military to manage and keep certain penitentiaries open and closing down others by releasing inmates back into what’s left of society or possibly executing them all. There’s no way to know exactly what Trump as a full out dictator might do here. Trump will have a huge bodyguard detail, but the rest of “America” will be left to fend for itself. Those in Trump’s employ who backstab Trump will summarily disappear.

There will also be no more press or TV channels or radio, except for radio and TV channels devoted solely to the Trump propaganda, which will run 24/7 on big screens around big cities. Trump’s still gotta stroke his fragile ego. No more free press or freedom of speech. Try to protest? The military will see to that.

Make America Great Again? Yeah, this is not so great is it? If you think this is all just alarmist rhetoric, then you really don’t know what Trump is capable of doing. Of course, if you want to live under the thumb of a dictator in squalor and poverty worse than what we have today, then by all means vote for Donald Trump again.

Donald Trump isn’t a Republican. Donald Trump isn’t even a Christian. Donald Trump is a nihilist; probably the exact type of nihilist that the author Fyodor Mikhailovich Dostoevsky warned us about.

A nihilist, by believing that laws and morals are useless, is also a person who would place the ultimate importance of life on material things. To this person, family, religion, and laws would have no purpose. This would make them a materialist, someone who believes the only things which have any value are those which are physical.

Does this quote sound just a little too familiar?

Trump actually doesn’t care about you or your family or anyone, let alone laws, religion, family or any other thing he professes to care about. He doesn’t care about the United States, religion or anything else… other than money and power (which gets him the things he wants… materialism). Trump pretends to care about such social topics, but only to the point that he hopes to convince you just enough that you as a Republican will vote for him. Once Trump lands back in power, his vendetta sees to it that the United States ends, and so too with it do the Republicans, the Democrats, religion and the economy. After Donald Trump gets done with the United States, there will be nothing left united about it.

Let’s hope that before this happens, this indictment leads a court to find Donald Trump criminally guilty and put him away in prison for a very, very long time. The world is not ready for a petty nihilist dictator like Donald Trump practicing his petty dictatorship on American soil. Donald Trump cannot be allowed to be re-elected to any position. There are also plenty of non-MAGA Republican candidates who are actually willing to do the right thing for democracy, who can kick these MAGA extremists to the curb, who can put this morally bankrupt Republican period behind us and bring respect and sanity back to the Republican party. Republicans need to finally wake up and stop defending a man who is so clearly morally bankrupt and in no way worth defending. Let Donald Trump be indicted; it’s his problem, not yours. It’s long past time to shut up and leave Donald Trump to his own fate.

Donald Trump Indictment Text

For your reading pleasure and enjoyment, here is the full text of Donald Trump’s New York indictment.

If you’re on a phone or tablet, you may need to click the link for the document to open. Donald Trump’s team is still calling this a politically motivated prosecution from Alvin Bragg. There’s nothing in this indictment that seems politically motivated in nature. Though, the charges are also not surprising or unexpected considering Donald Trump’s activities as stated by Michael Cohen. All of the counts are based on falsifying business records. Whether these counts are valid and have merit is for the courts to decide, not for Donald Trump or his sycophants or CNN’s analysts. Donald Trump can file motions to dismiss in a legal format, but again it’s up to the courts to decide whether those motions are valid.


I hadn’t really wanted to include an analysis of this indictment, so I’ll keep it short. CNN’s legal pundits are heavily leaning in Donald Trump’s favor. It’s clear CNN has become much more right leaning in its views. Thus, CNN’s legal analysts have basically treated Bragg’s indictment as nothing more than a mere, but very weak nuisance. Donald Trump’s lawyers echoed this same sentiment immediately upon leaving the arraignment court building. These Donald Trump sycophants and CNN alike have decided that Trump will likely be able to dismiss the entire set of charges.

MSNBC, however, has taken a more balanced approach in its view; an approach with which I agree. There are 34 counts present in the indictment (this sheer number is important to realize). If the case had zero merit, a judge would never have accepted it to arraign Trump. Thus, the counts are most definitely strong enough to arrest and arraign Trump. What MSNBC’s analysts postulate is that while many of the counts may be dismissed as weak, not all of them are likely to be dismissed. That reduction in counts reduces Trump’s criminal exposure, to be sure, but it is not eliminated. If even 10 counts remain after Trump’s move to dismiss, that’s still 10 counts for which Donald Trump must be held to criminal account.

I do agree that Trump may be able to get some or even a larger number of the counts dismissed. I also agree that Trump may not be able to dismiss every single count. Trump understands this. Even just one count is enough to take to trial, though I believe well more than one will remain after whatever is dismissed is dismissed.

False Rhetoric

One thing I do not agree with is that Bragg’s indictment is in any way politically motivated. Bragg has every right as the Manhattan District Attorney to bring anyone and everyone to justice for perpetrating illegal activities. If Bragg has uncovered evidence that Donald Trump has, in fact, perpetrated illegal activities… and according to this indictment he has… then Bragg has a legal obligation under New York Law to seek prosecution against the accused. It does NOT say witchhunt. It does NOT say politically motivated. It DOES say that District Attorney Alvin Bragg is doing his job for the state of New York. Donald Trump and the Republicans will spin this as political, only because they think it will help Donald Trump skirt responsibility for the laws that he has allegedly broken. To that I counter, if you break the law and you are exposed, you take the consequences; ex-president or not. No man is above the law.


Should TikTok be banned in the US?

Posted in botch, business, government, legislation by commorancy on March 26, 2023


Clearly, TikTok’s executives would have you believe that there is no risk when using TikTok. Is there a national security risk, though? Yes. Let’s explore.


TikTok is presently owned by Bytedance. Bytedance’s company headquarters are located at Room 10A, Building 2, No. 48 Zhichun Road, Haidian District, Beijing, China. We also need to understand that businesses operating in Beijing operate under Chinese law (such as it is). What that means for TikTok is that in order for the company to operate within China, it must always abide by China’s rules and regulations, including arbitrary government requirements and mandates, both those already existing and those imposed on demand.

For example, if Xi Jinping decides that Bytedance must turn over all information it has acquired to the Chinese government, Bytedance must comply or face the possibility of China pulling its licenses to operate its business in mainland China.

On the one hand, you have the TikTok CEO Shou Chew claiming that TikTok’s user data is safe. On the other hand, you have China’s government which can instantly require (i.e., force) Bytedance (or any Chinese based company) to hand over its data or face the loss of operating a business in China. Because China is a communist government, whatever China wants, China gets. Meaning, TikTok can absolutely make no assurances that user data is truly safe while Bytedance remains under China’s overreaching communist government authority. The rule of law only applies in China when the Chinese government WANTS it to apply, a key takeaway here. Internationally, China’s government does whatever it wants under the guise of appearing to support the rule of law.

Oracle Cloud

TikTok’s CEO has assured Congress that it could move its data into the Oracle cloud environment. While moving TikTok’s data storage to a United States owned business might sound great on paper, in reality it means nothing. Data stored in the US can STILL be easily exported, backed up, copied and recovered to computer equipment which resides in China. In fact, it would be entirely surprising if TikTok didn’t keep live backup copies of all user data somewhere on Chinese servers.

In other words, the CEO’s statements about using data storage on US shores as a “protection scheme” rings hollow. It’s far too easy to create copies of data and put it anywhere you want. It’s also guaranteed that if the Chinese government were to mandate that Bytedance turn over all relevant data to the Chinese government, TikTok would be forced to comply with those orders or face China’s government retaliation. In this case, not only can Bytedance not protect user data, they would have to appear completely willing to hand it over to the government instantly. Why? Because of Bytedance’s allegiance to China and not the United States… and because if TikTok doesn’t, China will close them down.

Allegiance

This word denotes a whole lot of things at once. However, the most important thing it signifies is what happens if China requests something from Bytedance and Bytedance refuses. A US based company protects its users’ data under the laws of the United States. If law enforcement issued a subpoena for that data, a US based company would either have to comply or file an objection to quash the subpoena on specific grounds. In China, such avenues of refusal don’t necessarily work.

Because the United States is, at least thus far, based on the rule of law, the government would be required to allow an objection to work its way through the court process before requiring the company to turn over whatever data the subpoena demands. Even then, the company would only be required to comply if the court upheld the subpoena instead of siding with the objection.

On the flip side, because China is a communist operated government, businesses operate under the whims of the Chinese government, which is not always based on the rule of law. While China does put up appearances suggesting that rule of law exists, the realities within China don’t always match that “rule of law” narrative. Meaning, China’s rule of law facade is just that, a facade.

For this reason, Bytedance’s allegiance must remain with China and never with the United States. The only reason Bytedance can operate within the US borders is because the United States, at present, allows it. But, that may be changing…

Is My Data Safe with TikTok?

The short answer is, no. Why? Because Bytedance’s allegiance remains solely with China, where its business is incorporated. Regardless of what the executives of Bytedance may claim, that Chinese allegiance means that if Xi Jinping requires Bytedance to turn over all TikTok user data to China’s government, Bytedance must comply… no questions asked.

It wouldn’t work like this if Bytedance were a company owned and operated within the United States. Rule of law actually matters in the United States, whereas in China it only appears to matter; it stops mattering whenever the Chinese government wants what it wants.

What’s Wrong with China Knowing About Me?

If you don’t live in China or plan to visit, it might not matter that much. However, if you were ever to visit China, what you post on TikTok might be considered a legal offense in China and could see you legally apprehended, detained and/or jailed.

In other words, if you intend to post on TikTok and you have said or done anything that China takes offense to, you could become wanted in China. That’s a fairly extreme outcome, but China takes offense easily to many things and it takes those offenses seriously… so why poke that bull if you don’t have to?

Worse, because China is all about the money, having critical data from your phone device could allow would-be Chinese hackers to infiltrate your device, steal your identity and steal your money.

Should I use TikTok? — Should I allow my kids to use TikTok?

If you value your family’s privacy, no. YouTube and Facebook both offer similar enough video sharing features to more than make up for TikTok’s functionality. Both YouTube and Facebook are US based companies not under the Chinese government’s thumb. Why risk potentially losing your (or your child’s) personal data to China needlessly when you don’t have to?

This author definitely recommends avoiding the use of TikTok entirely. There’s really no reason to risk losing your family’s personal data to China over the use of a silly video sharing platform… a platform that already exists on YouTube and via other US operated companies.

Creator Arguments

The argument against banning TikTok seems to stem mainly from both TikTok’s executives (naturally) and TikTok’s creators. Ignoring TikTok’s weak executive arguments for the moment, let’s focus on TikTok creators. While I agree that many creators may not have understood the ramifications of investing their creative efforts and skills into a platform of questionable origin, unfortunately that’s exactly what they’ve done. What that means is that a ban on TikTok in the US means these creators must lose the audiences they have worked to gain. I get it, but that’s not reason enough.

For creators, this is a problem. However, it’s relatively simple for creators to ask their audience to move with them to a new platform. If a creator’s audience is truly committed to that creator’s content, most (if not all) of that audience should be willing to move to any other platform the creator may choose to use. A simple video requesting that fans sign up for and move to a new platform shouldn’t be a big deal.

If you’re a TikTok creator facing the possibility of losing your ability to create on the TikTok platform, you should definitely consider a migration plan to another platform. Whether that be YouTube, Instagram, Snapchat or any other short video sharing platform, moving away from TikTok is the key. You shouldn’t remain complacent and simply assume a ban won’t happen. Yes, complain if you like, but you should also prepare to move your fans and content to another platform. Don’t wait, take action now!

Creator arguments about engagement, loss of revenue or the like are simply not strong enough to sway regulators away from the China data sharing problem described above. There are too many other platforms owned and operated by US companies for such arguments to hold any weight. Simply, they don’t. This is why creators need to be proactive and plan to move both their fanbase and content to another platform now. Don’t sit on your hands and think it won’t happen. Plan ahead.

TikTok Audience versus TikTok CEO

While creators make up a relatively small portion of TikTok users, they are the ones responsible for bringing in the viewers. Still, having an audience is not an argument against banning TikTok. The question isn’t whether TikTok offers a valuable video sharing service; it’s that a Chinese based company manages TikTok’s data and forever remains at the whims of China.

The CEO has stated that TikTok is beholden to no country, but that’s simply not a true statement. That statement cannot possibly be true. Every company must go into business under some country. Every country has laws and requirements for businesses to remain in business within that country. Bytedance incorporated its business within China. That means that Bytedance is beholden to China’s laws and regulations, no matter how, when or why they might appear. Because China’s government only appears to abide by its written laws and regulations, it only does so when it is convenient to the Chinese government. When it’s not convenient, new laws instantly come into being to cover whatever “thing” China is trying to make happen.

Instant laws don’t occur in the United States. It takes time, effort and lots of congressional or state legislative bickering and months of wrangling before a new law can exist. Some laws even require ballot measures voted on by the population, something China doesn’t offer its citizens.

What this all means is that TikTok’s CEO can say whatever he wants, but the realities of the way China operates remains. If Mr. Chew is so willing to lie about Bytedance’s allegiance to China, what else is Mr. Chew lying about? Lying to congressional members really doesn’t say great things about Bytedance or TikTok.

Should TikTok be banned in the United States?

We’ve come full circle from the beginning of this article. After all the above arguments are considered, I’d say that it is most definitely worth banning TikTok (and any other Chinese based apps) from the app stores. This situation shouldn’t be limited to TikTok. TikTok is simply so visible because, in some demographics, it’s now used by more people than YouTube. The sheer audience sizes of some TikTok creators alone mean ever more people are signing up to use the service. Many of these new users are children (aged 17 and younger).

Children are unable to comprehend what sharing of personal data to China really means. They just see silly videos, but have no idea what information TikTok may be collecting while these children use TikTok.

Additionally, because Bytedance is a Chinese operated company, it doesn’t have to abide by federal regulations like COPPA. TikTok might choose to voluntarily comply (or simply put up a facade of doing so) as a measure of apparent goodwill. However, internally, it may not comply with COPPA at all because it doesn’t have to. Because the TikTok company exists and operates outside the US’s borders, United States federal laws don’t apply and cannot be enforced upon TikTok. This aspect right here is the single biggest elephant in the room and the single biggest reason why TikTok should be banned.

Without the federal regulations to help protect US citizens from nefarious or malicious use of data collected, Bytedance can literally do almost anything to non-Chinese citizens without any legal ramifications by the United States. Even if the United States were to try and bring suit, China wouldn’t allow it. This situation alone is why TikTok (and other Chinese operated services) should not be allowed to operate within the United States. TikTok is literally one Chinese company among many taking advantage of its Chinese locale to avoid being held accountable to United States laws.

The United States has every right to protect its citizens from unlawful interference by other countries. TikTok is one among many companies where this reality now exists, not just companies located in China. The United States legislators need to take a step back and really think long and hard about (the lack of) legislation around companies operating in countries which are mostly unfriendly to the United States.

China only tolerates the United States at this point because of the buying power the United States offers. Beyond that buying power, China’s civility with the US ends. China (and a Chinese operated company) doesn’t care how many people in the United States die, get maimed or get injured as a result of products made in China. The same can be said of services like TikTok. Anyone who genuinely believes that the TikTok CEO cares about United States citizens, other than for their wallets and the almighty dollar, is clearly deluded.

Yes, TikTok should be banned, along with every other app-based service operated out of unfriendly territories around the globe.

First Amendment?

Some have claimed that the First Amendment will be violated by banning TikTok. Let’s definitively state here and now that there is no First Amendment problem at play. Because TikTok is a Chinese company wholly operating out of China, Constitutional laws don’t apply to TikTok. The executives who operate TikTok aren’t United States citizens.

Even though United States users use the service as creators and viewers, the service itself is not bound by the United States Constitution. In effect, by choosing to invest your time and effort into putting your videos onto a wholly owned Chinese entity, you’ve effectively forfeited your First Amendment protections.

While some First Amendment advocates might disagree with the above stance, one thing is certain: the United States Constitution does not apply to non-US citizens… which would include any and all executives and staff who were hired and operate out of Beijing, China. While it is possible that Bytedance has hired some United States citizens to help operate its service globally, that doesn’t automatically make Bytedance as a company bound by the United States Constitution.


Movie Review: Smile Movie

Posted in movies, reviews by commorancy on March 25, 2023

Paying to see a film in the theater, particularly a film that I already know is bad, is out of the question. And so it goes with the movie Smile. Let’s explore this film’s problems.

Spoiler Warning: Don’t continue reading if you want to watch this film.


Another movie reviewer suggests that this film takes some of its cues from The Ring (2002). I didn’t get much of a “The Ring” vibe while watching Smile. However, I did get a heavy Final Destination (2000) vibe. While the movie doesn’t feel like a Final Destination film in its story, the predestined outcome, once revealed by Rose, feels very much like Final Destination.

What that means is that Smile is very much derivative of other films which have come before, like Final Destination. Thus, the film feels already way too familiar while watching it.


While I’m sure that the director and producer(s) had wanted to produce something that felt like a psychological thriller, in the vein of Alfred Hitchcock, unfortunately Smile never achieves that level of professional treatment. That’s mostly because the story fails the film. Instead of watching a solid thriller unfold, what we get is something that feels at once all too familiar, but with an overly problematic story. Unfortunately, as I said, that leaves the film never reaching the level of foreboding suspense of an Alfred Hitchcock production.

Story Problems

This story feels a lot like a very long episode of the Twilight Zone. Unfortunately, the writers more or less fail this film almost completely. The first act begins by unfolding Smile’s narrative of a crazed young woman named Laura Weaver who complains of being stalked by an unseen entity. Rose (an overworked therapist at a hospital) initially diagnoses Laura’s condition as a psychotic break. It isn’t until Rose witnesses Laura’s brutal suicide that Smile truly begins. Laura’s suicide is odd because she does it all with a smile on her face; a smile that remains even after she’s dead by her own hand.

This is where Rose’s life takes a huge turn and where the Smile movie’s story begins in earnest. During the first act, Rose doesn’t fully understand that something’s amiss for quite a long time. Too long, actually. She goes about her daily life believing that being overworked is the source of her anxieties and “seeing things”. It isn’t until her cat disappears and then turns up in a rather awkward place that she begins to understand something more than mere overwork is at play.

This is when Rose herself begins to see her former psychiatrist. More than this, Rose begins to understand that what Laura had said in her first encounter is actually unfolding in the same way for her.

Let’s stop right here…

Story and Plot So Far

To this point in the film, Smile is a relatively slow burn, but the plot is unfolding in a way that’s not overly far-fetched or out of line for its premise. However, most viewers probably have a strong inkling by this point of where this film is heading. That’s always a problem. The director and writer don’t do anything to counter this audience second-guessing problem; instead they double down on it… which ultimately doesn’t work.

What do the writers do? They begin Act II by waking Rose up and having her take her experience and knowledge and trying to determine if there’s truly something amiss… and more importantly, exactly how to resolve it. Of course, none of the characters around her believe any of her ramblings, partly because the entity is making her see people who aren’t actually there.

Act II

As we enter Act II of this film, Rose has decided that there’s a real problem that she needs to investigate. Rose dons her investigator cap and begins web surfing along with using an on-again-off-again boyfriend cop to get information related to her investigation of “the cycle.”

Suffice it to say that this investigatory process lasts way too long considering how it’s ultimately used in the film. Once again, let me stop right here.

Investigation and Horror

The only reason to launch a long drawn out investigation within a horror film is to try and break “the cycle.” Rose eventually determines that there is a cycle afoot; a cycle that could lead to her death if she can’t get out in front of it. This investigation leads her to one person who seemingly was able to break the cycle and lived to tell the tale.

Unfortunately, who and where this person is ultimately tells us exactly how the cycle was broken. Yet, the writers insist on taking us to talk to this person in person to confirm it. Once we get there and Rose confirms why this person was able to cheat the cycle, she instantly blurts out that she is unable to perform the act that got him out of it. She blurts it out so fast, in fact, that it was somewhat difficult to understand what she said.

At first, I thought this admission was just the Rose character jumping to a far-too-quick conclusion without truly thinking it through. If Rose’s life is on the line, her need for self-preservation should have eventually kicked in. After all, Smile is a suspense and horror film. For Rose to blow off the idea so readily and instantly tells me that the writers were hopelessly lost at this point. It’s a horror film, guys! Get with the program!

Second half of Act II

The second half of Act II was the time for a twist from the writers. Unfortunately, that twist just never materializes. Everything set up in the first act leads us in a very straightforward manner directly into the second act. Everything predicted by the cycle comes to pass exactly, with no deviation… even though Rose has had more than enough time to investigate the cycle and even break it if she had so chosen.

Instead, the second half of the second act is merely devoted to showing just how thoroughly the Smile entity can take over the mind of its victim, making that person believe whatever it wants. To that end, Rose decides to head out to an isolated house in the middle of the woods with no one there who can try to bring her back to reality… not that that would have been possible. Her rationale, though, was that with no one else there, the entity is trapped. Rose badly underestimated the entity’s prowess.

Demonic Possession

One topic that was never really outright said in the film, demonic possession, was clearly the hallmark of this situation. This entity turns out to be some kind of demon who delights in seeing its victims commit suicide in front of others in a very brutal way. If the writers had embraced the idea of demonic possession, either Laura or Rose could have attempted to stop the demon from possessing them. Unfortunately, by the time the possession takes full effect (a slow buildup taking several days), once possessed, that leaves minutes before the person commits suicide in front of another, beginning the cycle anew. I have to assume the writers were thinking that the entity’s possession takes days because it takes that long to wear down the psyche of the person being tormented.

Overall

The film is derivative and unimpressive. The demonic smile isn’t demonic; it’s just a smile. The smile needed some major CG enhancement to make it wider, weirder and more sinister than it appears on the faces of the victims… perhaps even tearing the face apart to achieve it.

Worse, the whole overly long investigatory aspect leaves the viewer wondering what the hell happened. Since Rose spent a large swath of screen time trying to make sense of the situation and, more importantly, trying to find a way out of it, you would think that Rose could have seriously contemplated what was needed to get herself out of it and survive. She didn’t. More than this, if the studio hopes to begin a franchise with this horror film, killing off your lead character doesn’t allow for that.

If the writers are thinking that they can make Smile 2 using the ex-boyfriend cop as the lead, they need to think again. Of course, they can simply use him as the setup, like Laura, to initiate another lead character to try their hand again at trying to unravel the mystery of this demon. The question is, can a second Smile film actually pull that off? I’m not so sure.

For this film, the story doesn’t really work. If Rose had actually contemplated and achieved breaking the cycle, that would have been the twist that Smile required to be satisfying. Yes, the cycle would have been broken for Rose, but someone else would have still inherited the demon to deal with later. Unfortunately, this all-too-predictable story delivered an all-too-predictable ending.

Rating: 2 out of 5 stars
Recommendation: Watch only on “free” streaming services


Fallout 76: Are Re-Rollers Gambling?

Posted in botch, business, video game, video game design by commorancy on March 16, 2023


As of Season 12, entitled Rip Daring and the Cryptid Hunt, Bethesda might as well have entitled Fallout 76’s newest season Welcome to Gambling. Let’s explore.

Challenge System

When Fallout 76 released in 2018, Bethesda included a “Challenge” system in the game. This challenge system allowed (and still allows in limited ways) players to obtain Atomic Shop currency called “Atom” in exchange for performing relatively basic challenges in the game world.

This Atom currency allows players to “buy” in-game items, such as CAMP add-ons, character clothing, hairstyles and face paint. The selection of the items in the Atomic Shop, at the time, was relatively limited in the early days, but has since expanded into consumables, Fallout 1st items, weapon and armor paints and even weapons.

Almost every video game released today offers a store with add-on items for players to purchase to enhance their gaming experience. The “Atom” currency has always been and remains the only currency in the game that can be purchased with real cash money in the form of USD (or other currencies around the world). For the purposes of this article, all dollar amounts shown are in USD.

For example, if you own a PlayStation, you can visit Sony’s PlayStation store and purchase Atom bundles, such as 500 Atom for $4.99 (or basically $5). The exchange rate is 100 Atoms to every $1 spent. An item in the Atomic Shop could then be said to cost $7 if it costs 700 Atoms. A small discount is applied the more Atom you buy.

If you pay $20, you’ll receive 2,400 Atoms. Bethesda’s marketing states there’s a bonus of 400 Atoms. In reality, this is simply considered a slight markdown on costs. Instead of costing $1 per 100 Atom, this changes the ratio to $1 per 120 Atom or discounted to 83¢ per 100 Atom. If you’re willing to spend $40, you’ll receive 5,000 Atoms. That further reduces the cost to $1 for every 125 Atom or 80¢ for every 100 Atom (a 20¢ discount or 20% discount for each 100 Atoms bought over the $5 purchase in the store).

That means that if you buy $5 worth of Atoms, a 1500 Atom item in the Atomic Shop store costs you $15.00. If you pay for $40 in Atoms, that same 1500 Atom item now costs you $12. That’s not a tremendous discount overall, but you will have saved $3 by buying $40 in Atom over buying three separate $5 Atom items (making $15 or 1500 Atom) from the PlayStation store. This conversion rate is only important if you’re wanting to equate how much something actually costs you in the Atomic Shop.
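To make the bundle math above concrete, here's a small Python sketch (bundle figures taken from this article, rounded to whole dollars as above) that computes the effective cost per 100 Atoms and what a 1500 Atom shop item works out to at each bundle's rate:

```python
# Atom bundles as described above: USD price -> Atoms granted.
# Prices are rounded to whole dollars, matching the article's math.
BUNDLES = {5.00: 500, 20.00: 2400, 40.00: 5000}

def cost_per_100_atoms(price_usd: float, atoms: int) -> float:
    """Effective USD cost of 100 Atoms at this bundle's exchange rate."""
    return price_usd / atoms * 100

def item_cost_usd(item_atoms: int, price_usd: float, atoms: int) -> float:
    """Effective USD cost of an Atomic Shop item at this bundle's rate."""
    return item_atoms * (price_usd / atoms)

for price, atoms in BUNDLES.items():
    print(f"${price:.0f} bundle: "
          f"{cost_per_100_atoms(price, atoms) * 100:.0f}¢ per 100 Atoms, "
          f"1500-Atom item = ${item_cost_usd(1500, price, atoms):.2f}")
```

This reproduces the figures above: $15.00 for a 1500 Atom item at the base rate, $12.50 at the $20 bundle rate, and $12.00 at the $40 bundle rate.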

Fallout 1st

With the introduction of Fallout 1st, this monthly subscription service muddies the Atom waters just a little. By subscribing, you’ll receive not only access to all of Fallout 1st’s features, including private worlds, custom worlds, the ammo and scrap boxes and a bunch of 1st exclusive Atomic Shop inclusions, you’ll also receive 1,650 Atoms each month. Fallout 1st costs $12.99/mo straight up, or effectively $8.25/mo if you pay $99 for a full year (12 months) up front.

Depending on how you choose to buy Fallout 1st, the value of the monthly allotment of Atoms changes. Buying Fallout 1st at the 1 year price is obviously the cheapest option, offering a 50% discount on those 1,650 Atoms (ignoring all of the rest of Fallout 1st’s features). 1,650 Atoms would normally cost close to $16.50 to purchase. Of course, the closest Atom bundle on the PS store is the 1,100 Atom bundle at $10; you’d have to jump to the 2,400 Atom bundle at $20 to cover 1,650 Atoms. The best Atom rate of all is the 1 year subscription to Fallout 1st, which discounts the cost of Atoms to 50¢ per 100, effectively half price. That means that same 1500 Atom item in the Atomic Shop would cost you $7.50… assuming all of your Atoms came solely from being a Fallout 1st member.

Buying Fallout 1st monthly at $13 is still a discount on Atom, but at a rate of (rounded up) 127 Atom per $1 spent, just slightly better than buying the 5,000 Atom bundle shown above.
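The Fallout 1st comparison can be sketched the same way (prices as quoted above; this counts only the monthly Atom allotment and ignores the subscription's other perks):

```python
# Fallout 1st figures as quoted above; the "value" here counts only the
# monthly 1,650 Atom allotment, not the subscription's other features.
ATOMS_PER_MONTH = 1650
MONTHLY_USD = 12.99
ANNUAL_USD = 99.00  # works out to $8.25/mo

def atoms_per_dollar(monthly_cost_usd: float) -> float:
    """Atoms received per dollar of subscription cost per month."""
    return ATOMS_PER_MONTH / monthly_cost_usd

print(f"Monthly plan: {atoms_per_dollar(MONTHLY_USD):.0f} Atoms per $1")
print(f"Annual plan:  {atoms_per_dollar(ANNUAL_USD / 12):.0f} Atoms per $1")
# The annual plan works out to 200 Atoms per $1, i.e. 50¢ per 100 Atoms.
```

The monthly plan lands at roughly 127 Atoms per dollar, just edging out the 5,000 Atom bundle's 125, while the annual plan doubles the base exchange rate.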

Challenges and Score

When the first Scoreboard season arrived in 2020, Bethesda changed what the daily and weekly challenges gave as rewards. Prior to the introduction of the Scoreboard seasons, all challenges awarded Atom. A player can spend these Atoms in any way chosen. Atom was originally awarded from both daily and weekly challenges as well as all of the other environmental challenges in the game.

With the introduction of the first Season and the Scoreboard, daily and weekly challenges changed to providing S.C.O.R.E. (another insipid Bethesda acronym). For the purposes of this article, this author will use the word ‘score’ for simplicity. Score is simply another, separate form of Experience Points (XP).

Score moves a blue progress bar across the Scoreboard. Once the progress bar reaches the end, the Scoreboard advances by one space. Each Scoreboard has 100 board spaces, and each space requires progressively more score to complete: the first space might require 1000 score, the last might require 3500, and every space in between falls somewhere within that range. Once the game board has been completed, additional board spaces appear so that awards can still be accrued by performing daily and weekly challenges.
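The ramp-up can be illustrated with a short sketch. Note the hedge here: the article only gives the rough endpoints (about 1000 score for the first space, 3500 for the last), so the linear curve below is purely an assumption for illustration, not Bethesda's actual progression:

```python
# Illustrative only: the endpoints come from the article; the actual
# per-space curve is Bethesda's, so this linear ramp is an assumption.
FIRST_SPACE, LAST_SPACE, NUM_SPACES = 1000, 3500, 100

def score_for_space(n: int) -> int:
    """Hypothetical score needed to clear board space n (1-based)."""
    step = (LAST_SPACE - FIRST_SPACE) / (NUM_SPACES - 1)
    return round(FIRST_SPACE + step * (n - 1))

print(score_for_space(1))    # 1000
print(score_for_space(100))  # 3500
```

Whatever the real curve looks like, the key point stands: later spaces cost meaningfully more score than early ones, stretching out the season.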

For the first few seasons, once you reached the Scoreboard’s end, the board was over. Challenges did nothing and were worthless. It wasn’t until a few seasons later that Bethesda realized the problem and added more board spaces after the 100th board space, the final space which awards the “big prize” (such that it is).

What exactly is a Challenge?

While there are many challenges available in the game, the ones that matter most to today’s players are those that produce score. These are the only ones that advance board spaces on the Scoreboard. All other remaining challenges still provide Atom, but in small and diminishing quantities. The only renewing challenges are the daily and weekly challenges. The environmental challenges are one-off challenges that, for the most part, do not renew. These environmental challenges are one-and-done… with the exception of pick-lock and hack-terminal challenges that have a progression system that eventually ends, but which provide a small amount of Atom so long as they remain uncompleted.

The daily and weekly score challenges help “move the needle” through the Scoreboard. Each game board space unlocks some kind of Atomic Shop item including the possibility of a space awarding Atom itself. The board spaces are not random chance. They are hard set by Bethesda and the “prize” can be easily seen by hovering over the board space.

When Fallout 76 was introduced in 2018 (and until 2020), daily and weekly challenges awarded exclusively Atom. The daily challenge board might, in total, award anywhere from 100 to 300 Atoms (maybe more) depending on that day’s included challenges. That meant you could gain 100-300 Atoms per day simply by doing the daily challenges; in a week, that could accrue to 1,000 or more Atoms. The weekly challenges might accrue another 1,000 to 2,000 Atoms (or more) depending on the included challenges. Between the daily and weekly challenges, players could accrue a couple of thousand Atoms or more per week. The problem for Bethesda was that all this freely available Atom meant that players didn’t need to buy Atom frequently, or sometimes at all. Bethesda wanted more income.

When the Scoreboard was introduced, the amount of Atom awarded was dramatically reduced to only those Scoreboard spaces which offer Atom, all but one granting a measly 150 Atom per space, and only a handful of these spaces exist on the board. In total, a single season Scoreboard typically awards around 2,000 Atoms, a sum players formerly earned in a single week of daily and weekly challenges. With the Scoreboard, it now takes many, many weeks of challenges to unlock the total Atom on the board. Less Atom given out means more Atom sold for real cash money.

In other words, the amount of Atom awarded by the Scoreboard has been drastically reduced… forcing players to actually pay real money for Atom to buy larger Atomic Shop items. Bethesda enforces this purchase behavior by putting shop bundles into the Atomic shop for 16 days or 3 days or similar limited time offers which see the item disappear from the store after the timer ticks down. It’s a ruse that tries to force gamers into buying Atom to avoid “losing out”.

The challenges themselves include all manner of fetch quests. Some are long tailed and some can be completed in just a few minutes. Many are convoluted and may require things that a low level player might not have or might not yet have access to. Not all challenges can be completed by every player, depending on where that player is in completing the game’s main quest lines.

Challenge Examples

Challenges come in all shapes and forms. Some require completion of the challenge once, but many require completion of the challenge multiple times. For example, “Scrap Junk to produce Black Titanium (0/10)”. The 0/10 means that the player must scrap junk 10 times to produce Black Titanium to complete the challenge.

Bethesda plays games with these counters, too. “Collect Pieces of Wood (0/200)” is a challenge that Bethesda has modified from its original to make it more difficult. When you collect wood, you might actually collect 4, 8 or even up to 20 pieces of wood as random chance. Yet, Bethesda only counts the collection action itself toward the counter, not the number of pieces of wood collected… a way to cheat the player out of getting the task done sooner. Yet, “Craft ammo on a Crafting Bench (0/50)” still counts each individual piece of ammo crafted toward the challenge, even if you’ve only pressed the crafting button once. This inconsistency between the challenges not only makes the system confusing, it makes the challenges a pain in the ass, since you never know which rule applies. These counters are also what put long tails on challenges and make them take a whole lot longer to complete than they should.
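The inconsistency between those two counting rules can be modeled in a few lines of Python (a hypothetical model for illustration, not Bethesda's actual code):

```python
# Hypothetical model of the two counting rules described above:
# wood collection counts *actions*, ammo crafting counts *items*.
def challenge_progress(event_quantities, count_items: bool) -> int:
    """event_quantities: number of items produced by each player action."""
    return sum(event_quantities) if count_items else len(event_quantities)

wood_pickups = [4, 8, 20]   # three pickups yielding 32 total pieces of wood
ammo_craft = [50]           # one press of the craft button, 50 rounds made

print(challenge_progress(wood_pickups, count_items=False))  # 3 of 200
print(challenge_progress(ammo_craft, count_items=True))     # 50 of 50: done
```

Under the action-counting rule, collecting 32 pieces of wood advances the 200-count challenge by only 3; under the item-counting rule, one craft completes the ammo challenge outright.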

Some challenges are based solely on the completion of other challenges, like the Gold Star Daily Challenge. The Gold Star challenge isn’t actually a challenge, it’s a counter. It counts a specific set of challenges that have been completed. Once all of the Daily Challenges have been fully completed, the Gold Star Daily Challenge also completes. The Gold Star Daily challenge is an incidental challenge that completes only because other challenges have completed.

Challenges might include the following:

  • Kill a Yao Guai with a Syringer
  • Collect 100 Pieces of Wood
  • Scrap junk to produce Black Titanium
  • Kill a Deathclaw
  • Eat a Meat based Meal
  • Eat Pre-War Food
  • Scrap Pre-War Money
  • Complete a Daily Ops

Each of these challenges is usually sub-qualified with the number of times the player must perform that task. For daily challenges, if not once only, it’s typically 3-5 times; relatively easy. For weekly challenges, it might be 20-100 times. With the introduction of the Re-Roller, daily challenges have increased the required repetitions from 3-5 to sometimes 20, 30 or more, making these tasks take much, much longer. Yet, you still only have 24 hours to complete the challenge. There’s a reason for this change; keep reading.

Atomic Shop Items

Atomic Shop items are actually worthless. Why? Because you can't craft them for others, sell them or even drop them. Any item purchased from the Atomic Shop or, by extension, received from the Scoreboard is exclusively locked to that player. If you purchase (for Atom) an outfit from the Atomic Shop, it is for your use alone. If another player wants that same item, they must also spend Atom to buy it from the Atomic Shop. This player lock makes the item, in fact, worthless in the game world. It's cosmetic, yes, but that's the extent of its value.

Some items can be used by other players, like Shelters, making these kinds of Atomic Shop items a bit more worthwhile than cosmetic armor or weapon skins. Shelters, for example, are probably among the most useful items in the game. These in-game rooms offer the player a way to decorate and build creatively, which can be shared with other players who visit that shelter. Shelters also afford a way to display found items that you value to other players. You can even display Atomic Shop items, but why bother? Only the rarest items found in the game world are worth displaying.

What is a Re-Roller?

Here we arrive at the heart of this article and why you're here reading. It's important to understand the system described above before examining Bethesda's most recent introduction: the Re-Roller (aka Re-Roll). Some challenges have been a problem for some players to complete, but only because the player might not be far enough along in the game to actually complete them. For example, they might not have access to a specific location that a challenge requires, or might not be a high enough level to use the required weapon. Bethesda includes many challenges like these on the challenge board.

Because of complaints over uncompletable challenges, Bethesda has introduced, in Season 12 (the current season as of this article), the concept of a Re-Roller. What is a Re-Roller? It simply allows you to "spin" for and hopefully 🤞 get a new and completely different challenge. And here's where the gambling arises and where Re-Rollers intentionally fail.

Before diving into all of that, let’s step back in time.

EA and Loot Boxes

Several years back, around 2019, EA introduced for-pay loot boxes into several of its games, most notably FIFA. These for-pay, random-chance loot boxes, once opened, provided the player with a common, rare or legendary item usable in the game. In the case of FIFA, the loot boxes provided trading cards of various rarities.

Many authorities jumped in claiming these loot boxes had become a form of gambling. These authorities are not wrong. They are a form of gambling. You spend real money and then the game spins and awards you with a “win” or “lose” situation. Because of typical house odds of offering up the worst rewards most frequently, it encourages players to do it again and again in hopes of getting “something better” or at least not a duplicate. Yes, duplicates are possible and extremely common.

As a result of the backlash over loot box gambling and other games of chance, EA and several other game developers have since stripped loot boxes from their games.

Enter Bethesda and Re-Rollers…

Re-Roller Gambling

While Fallout 76 has included a relatively realistic-looking slot machine (the Lumberjack Slot) in the game for many months now, it doesn't rely on real cash money to operate. The in-game slot machine uses "caps", an easily obtained in-game currency that has been in the game from the beginning. Today, caps are considered mostly worthless, other than for buying relatively rare in-game items from player vendors.

With this slot machine, there is effectively no way to lose. The cost to play is 10 caps, and you'll always get something for the caps spent. For example, spinning this slot machine always awards +2 to Luck for a limited time, a very useful player perk for as long as it lasts, and you can always renew it by spinning again. You get this perk whether you win caps back or not. Most times, you'll get back exactly the 10 caps you spent to play. Sometimes you'll get back 4 caps. Occasionally, you'll get 20 caps or more. If you don't win anything, the game still awards you a piece of junk to scrap or sell… junk likely worth 5-10 caps at a vendor. In essence, you almost never lose caps in this slot machine, and even when you do, the perk means you never really lose. It is also impossible to spend real cash money to play this machine, as caps cannot be purchased directly with USD.
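To illustrate why this machine is nearly lossless, here's a toy expected-value sketch. Bethesda doesn't publish the real odds, so every probability and payout weighting below is an invented assumption; only the 10-cap cost comes from the description above:

```python
# Toy model of the Lumberjack slot machine described above.
# Cost to play is 10 caps; the outcome probabilities and junk value
# are invented for illustration -- real odds are not published.
COST = 10
OUTCOMES = [
    ("break even", 10, 0.60),  # most common: get your 10 caps back
    ("small loss", 4, 0.15),   # sometimes only 4 caps back
    ("win", 20, 0.10),         # occasionally 20 caps or more
    ("junk", 7, 0.15),         # junk worth roughly 5-10 caps at a vendor
]

def expected_net() -> float:
    """Expected caps gained or lost per spin (ignoring the +2 Luck perk)."""
    return sum(prob * payout for _, payout, prob in OUTCOMES) - COST

# Under these made-up odds the expected loss is a fraction of a cap per
# spin, and the guaranteed +2 Luck perk offsets even that.
print(round(expected_net(), 2))  # -0.35
```

The exact number is meaningless (the odds are assumptions), but the shape of the result is the point: with payouts clustered around the cost to play, the machine is effectively free entertainment, which is exactly what makes it a benign contrast to the Re-Roller discussed next.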

Enter Re-Rollers

The name itself actually has connotations of spinning something, like a slot machine. Even the sound effects used when re-rolling are reminiscent of spinning a slot machine. How do Re-Rollers work?

Because some challenges may be uncompletable (for whatever reason), a Re-Roller allows the player to take a chance on a replacement challenge in the hopes it will be better than what was there. In effect, using a Re-Roller is tantamount to pulling the arm on a slot machine and waiting for the spinning to stop to see if you have "won". If not, you're encouraged to spin again. This encouragement turns the Re-Roller into a form of gambling, triggering the same effects as any other game of chance. While the game issues one free Re-Roller per day, additional Re-Rollers aren't free, making this situation far, far worse.

Re-Rollers cost Atom to buy from the Atomic Shop. As established earlier, Atom costs real USD, which means Re-Rollers ultimately cost the player real cash money. It is therefore possible to lose a large amount of real USD gambling with these Re-Rollers. This is also the first time Bethesda has tied real cash money to an in-game, random-chance gambling device in Fallout 76.

One could argue that Lunchboxes are a form of gambling, but there's really no gambling involved. You buy a Lunchbox, you open it and you get a reward in game. The only randomness is which perk you get, not whether you'll get one, as each perk is nearly equal in value to the others. More than this, you can buy Lunchboxes in the game world using earned in-game currency… something that can't be done with Re-Rollers.

Re-Rollers can only be obtained by spending Atom in the Atomic Shop or by collecting a very small number of them off of the Scoreboard (and even fewer from the ever-diminishing environmental challenges). Even then, the Scoreboard offers only 3 Re-Rollers from a single space, with just a few such spaces across the entire game board. Those 3 Re-Rollers are easily consumed in just a few minutes on ONE (1) challenge. There is no other way to get Re-Rollers in Fallout 76 as of this writing.

Gambling Triggers and Addictions

The problem with random-chance spin-and-win mechanisms is that they trigger the exact same gambling centers of the brain as any other form of gambling. Because real money is involved in obtaining Re-Rollers, this could cause real gambling problems for the children targeted by this new mechanism. Unlike the slot machine above, which always wins you something, the Re-Roller offers no guarantee you will get anything different from what you already have, which the player perceives as a loss. Yes, it IS entirely possible to get the same exact (or an even worse) challenge from a Re-Roller. Sometimes it happens multiple times in a row.

Let's consider that it costs 50 Atom to buy one Re-Roller. That means an average player could spend as much as $1 for every 2 Re-Rollers purchased. Because a player might need to use multiple Re-Rollers on multiple challenges, it would be easy to spend $5 or $10 attempting to get new daily challenges… every single day. That money adds up over a week, a month or a year.
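To make the arithmetic concrete, here's a quick sketch. The 50-Atom price is from the paragraph above; the 100-Atom-per-USD conversion is an assumption implied by the "$1 for every 2 Re-Rollers" figure (actual Atom bundle pricing varies):

```python
# Back-of-the-envelope Re-Roller spending, using the figures above.
# ATOM_PER_USD is an assumed conversion implied by "$1 per 2 Re-Rollers";
# real Atom bundle pricing varies, so treat dollar figures as rough.
ATOM_PER_REROLLER = 50
ATOM_PER_USD = 100

def reroller_cost_usd(rerolls: int) -> float:
    """Approximate USD spent buying `rerolls` Re-Rollers with Atom."""
    return rerolls * ATOM_PER_REROLLER / ATOM_PER_USD

print(reroller_cost_usd(2))        # 1.0 -- two Re-Rollers for a dollar
print(reroller_cost_usd(20))       # 10.0 -- one heavy re-rolling day
print(reroller_cost_usd(20) * 30)  # 300.0 -- that habit, daily, for a month
```

Even at the modest $5-$10-per-day pace described above, the habit compounds into hundreds of dollars a month, which is the entire point of the design.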

As stated above, this unusual move is the first time Bethesda has tied real-world fiat money to the purchase of a random-chance game mechanic driven entirely by the urge to gamble. While there's no way to win cash money back out of this system, using it always means losing money, AND its effect as a gambling device stands. Because the "win" is a subjectively "better" challenge, the loss is real money spent on wasted Re-Rollers. This loss has the very real effect of triggering gambling addiction.

As a result of this questionable game mechanic, it's easy to see children becoming addicted to this system, Re-Rolling without bounds just to see what they get… all in an effort to make Bethesda more money! It's not simply that the child wants to complete the challenges; it's that the addiction drives the child to want to see "better" or "rarer" challenges. Addictive mechanics lead to addictive behaviors… and this Re-Roller feature has a real chance of being abused by someone caught up in a gambling addiction. Worse, these games are targeted towards children and young adults who might not understand gambling addiction or the money problems that can result from it.

More than this, it's surprising that Bethesda didn't realize that these real-money-tied Re-Rollers are actually a form of gambling and put the brakes on this feature before its introduction.

Because Bethesda is now owned by Microsoft, that puts Microsoft on the hook for this gambling device. An enterprising lawyer may see very deep pockets in Microsoft and choose to pursue a lawsuit over perpetrating gambling on minors. In fact, under state law, gambling targeted towards minors is illegal. Bethesda is playing with legal fire here.

Convoluted and Epic Challenges

One thing that has made this entire Re-Roller system far worse and even more addictive is the inclusion of more complex and convoluted challenges. Most daily challenges in past Seasons required relatively simple quest objectives: go kill a single creature, fetch 5 purified water. Easy, relatively simple, and also yielding useful items for the player.

With this Season, Bethesda has abandoned these simplistic challenges for longer-tailed, more complex and more difficult ones. Where it might once have taken an hour to get through the daily challenge board, it might now take 3 hours.

Weekly challenges have always been long-tailed, meaning they might take several days to complete. That's the point of weekly challenges. Daily challenges, however, have always given players only 24 hours before they reset for the next day. Moving daily challenges from maybe 5-10 minutes each to 15-20 minutes each is an odd play… but not when you consider Re-Rollers.

A reasonably experienced player can instantly size up the amount of time a specific challenge might take. This reinforces the urge to Re-Roll long-tailed challenges in the hopes of "getting something better"… ensuring that players get hooked on this random-chance Re-Roller system. The inclusion of longer, obscure and more complex challenges thus ensures that players will buy into this gambling mechanism for a "chance" at something better, faster or easier.

In reality, the challenges waiting behind the one being Re-Rolled are equally complex and equally convoluted, with the exception of perhaps a tiny few which might be as easy or as useful as those in prior seasons. A player ultimately has no idea what might pop up when a challenge is Re-Rolled. Thus, the illusion of "getting something better."

However, it didn't stop with complexity. Bethesda felt the need to include one more incentive to reinforce gambling behaviors: Epic Challenges. Epic Challenges offer a higher Score reward and are, thus, rarer to see from a Re-Roller. Rarity is exactly the kind of concept that makes players want to gamble. Because an Epic Challenge is considered a low-chance "win", players are incentivized to consume Re-Rollers over and over until they get one.

Therefore, this Epic Challenge "rarity" system encourages players to Re-Roll every challenge on the daily (and weekly) boards simply to fish for Epic Challenges. Doing so could cost the player $20, $50 or even $100 in real cash money, given 6 daily challenges per day and up to 12-20 weekly challenges to Re-Roll. Expensive AND addictive.

Way Cheaper to Pay for Board Spaces

On the Scoreboard, Bethesda has included the ability to pay 150 Atom to advance one space. For example, if you’re at space 99 and you want to complete the board, you could simply pay 150 Atom and avoid the hassles of performing challenges to gain the required amount of score.

Let's equate this with Re-Roller costs. Three (3) Re-Rollers cost 150 Atom, the same amount as one board space. Yet three Re-Rollers applied to daily challenges cannot, by themselves, provide anywhere near enough score to complete a board space. The only exception is if, for example, a challenge awards 150 score and that happens to be exactly the amount needed to advance to the next space; but that means you had already completed many previous challenges to push the progress bar nearly to completion.

To put this in perspective, for only 150 Atom, the gameboard will advance an entire board space (adding somewhere between 1000 and 3500 score), OR you can pay that same 150 Atom for three Re-Rollers in the hopes of replacing one single daily challenge worth maybe 100-200 score. In other words, paying for three Re-Rollers at 50 × 3 = 150 Atom is between 10x and 35x more expensive per point of score than simply paying that same 150 Atom to advance a full board space. There is zero value in paying for Re-Rollers when you can pay for board spaces. Bethesda understands this.
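The 10x-35x figure can be checked with a short sketch. All the score numbers here are this article's own rough estimates, not published Bethesda figures (the 100-score re-roll value is the low end of the 100-200 range, matching the stated ratio):

```python
# Score-per-Atom comparison: buy a board space outright vs. buy Re-Rollers.
# All score figures are the article's rough estimates, not published numbers.
ATOM_PER_SPACE = 150                             # Atom to advance one space
SPACE_SCORE_LOW, SPACE_SCORE_HIGH = 1000, 3500   # score a space represents

ATOM_PER_REROLLER = 50
REROLL_SCORE = 100   # low end of the 100-200 score one replaced daily gives

# Three Re-Rollers cost the same 150 Atom as one full board space...
assert 3 * ATOM_PER_REROLLER == ATOM_PER_SPACE

# ...yet return far less score for that Atom:
ratio_low = SPACE_SCORE_LOW / REROLL_SCORE    # 1000 / 100 = 10x
ratio_high = SPACE_SCORE_HIGH / REROLL_SCORE  # 3500 / 100 = 35x
print(f"A board space yields {ratio_low:.0f}x-{ratio_high:.0f}x more score per Atom")
```

However you tune the estimates, the board space wins by an order of magnitude, which is why paid Re-Rollers only make sense as an impulse purchase, not a rational one.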

Gambling Targeted Towards Minors

There’s a reason why gambling establishments require people to be age 21 to play. First and foremost, it’s the law. More than this, children shouldn’t be gambling. Unfortunately, video games can’t age check before each use. There is no way to exclude a system like Re-Rollers from players under the age of 21. The only way to avoid such a situation is if such a gambling system is not included at all.

This is why so many game developers have since removed real cash money based random chance loot box systems from their games. It’s also questionable why Bethesda has now chosen to include one in Fallout 76. Bethesda has most definitely crossed a line here; a line that shouldn’t have been crossed.

Ultimately, this Re-Roller system is likely to be seen for what it is: a gambling system strongly encouraging children to gamble via Bethesda's "Epic" challenge strategy, which appeals directly to children's attraction to "rarity" and fear of missing out. Gambling systems should never be included in products used by or, more importantly, targeted towards minors… particularly gambling systems tied to real cash money.

Bethesda, if you’re reading, you might want to quickly retain legal counsel as this Re-Roller system is likely to blow up in your face once again. That, or you quickly need to consider its removal from Fallout 76.

Gambling Addictions / Get Help

If you're a player who is susceptible to gambling addiction, you should not play, nor fall prey to Bethesda's gambling encouragement. Instead, please get help. It is strongly recommended to steer clear of all such gambling mechanisms in video games; mechanisms which trigger gambling addictions and encourage you to spend real money on them.

If you're a parent reading, you should limit your child's play of Fallout 76. Better yet, stop their play altogether until Bethesda removes this gambling device and stops encouraging players to spend real cash money on random-chance gambling devices.

No video game should ever make money off of the backs of children encouraged to ostensibly gamble with real money. Gambling should always remain in places like Las Vegas or Atlantic City, where people who physically travel there have made the conscious decision to gamble and are of legal age to do so.

Can this be corrected / Solutions?

[Updated Mar 18th, 2023] When this article began, this author wrestled with the idea of adding a solutions section and decided against it. The simplest solution seemed to be for Bethesda to remove the feature from the game entirely and be done with it. However, I've since decided to include this section, if only because it may help other developers considering adding "games of chance" to their games… particularly when those games of chance are tied to fiat currency and when children (aged 17 and below) already make up a very large portion of the player demographic.

When you’re building an adventure game like Fallout 76, where the primary objectives are to explore and experience combat situations, adding games of chance (like the Re-Roller) as ways of raising additional money is problematic and possibly illegal in some parts of the United States. Basically, a game developer should never raise money by targeting children with games of chance when tied to fiat currencies.

The big mistake Bethesda made here was tying the purchase of a Re-Roller to fiat USD. To rectify this, Bethesda would need to untie the two entirely: remove Re-Rollers from the Atomic Shop and instead sell them through vendors in the game world using one of the many established in-game currencies, such as Scrip, Gold Bullion, Stamps or Caps. These in-game currencies are earned, not purchased; obtaining them requires only time, not real cash money. In fact, there is no in-game mechanism by which these currencies can be bought with real cash money. However, even that doesn't resolve the underlying problem: Re-Rollers remain a game of chance in a title targeted towards children.

Instead, Bethesda needs to reconsider the idea of swapping one challenge for another entirely. For example, get rid of the "game of chance" mechanic and replace it with a buyout option. Instead of being required to "roll and pray for something better", the player could simply buy out the challenge using in-game currency: pay some amount of caps or scrip or stamps or bullion and the challenge instantly completes. A buyout option sidesteps the problem of gambling and games of chance entirely.

The other option is to simply include more challenges… more than are actually needed to complete the game board. If the player misses a few challenges, no big deal. Though, the buyout option is the best solution as it gives the player the option to complete all challenges if they so choose simply by playing longer and earning more in-game currency.

Should a buyout option be tied to actual fiat currency? Perhaps. One thing is certain about the buyout option: it cannot in any way be considered a potentially illegal form of gambling targeted at children. Though bilking children out of money through such "buyout" mechanics is still dubious at best. Including games of chance in the fashion of a slot machine, on the other hand, is problematic all around when an excessively large demographic of players is aged 17 and below. The only way such games of chance should be included is if the developer can absolutely, 100%, exclude players 17 and younger from participating.

To date, I don’t know of any game developer that has found a way to do this reliably… or at all! The best way to handle the inclusion of “adult” gambling situations is simply not to include them. If you want to operate games of chance via fiat money gambling situations, then spend the money to construct and open a real casino in Las Vegas.


Retro Review: Earth Final Conflict

Posted in entertainment, reviews, TV Shows by commorancy on March 14, 2023

In the mid-to-late 1990s, after Gene Roddenberry's untimely passing, Majel Barrett Roddenberry pitched a new sci-fi series conceived by the late Gene Roddenberry and built entirely from his notes. That series? Earth: Final Conflict. Let's take a retro dive and review this series.

Premise of Earth Final Conflict

Aliens arrive on Earth bearing the promise of peace and gifts to humanity. It seems ideal. Yet, that’s where the series begins its subtext and subversion. After all, the series is called “Final Conflict”. Thus, there must be some level of “conflict” involved or the series isn’t correctly named.

Season 1

As the first season unfolds, it's clear that the concept and execution are rough in some areas and polished in others. Much of the first season seems to have been lifted almost directly from Gene's notes, which shaped many of the episodes. Most episodes were, for the most part, well written. The pacing and situations unfolded in a logical, progressive manner. Every story tended to build upon the last, laying the groundwork for what could have been a very profound series. In this first season, there was hope for something positive. Nothing in Season 1 gave away the endgame, but the series itself was, as I said, mostly well written.

With William Boon (Kevin Kilner), Lili Marquette (Lisa Howard), Ronald Sandoval (Von Flores), Augur (Richard Chevolleau) and to a lesser degree Jonathan Doors (David Hemblen) rounding out the “Human” cast of characters in this series, the stories worked reasonably well. The Taelon cast primarily consisted of Da’an (Leni Parker) and, later in the season, Zo’or (Anita La Selva). Majel Barrett Roddenberry also lent her acting hand as basically a “Doctor Chapel” to help implement the Taelon medical agenda on Earth. However, her character was effectively dropped before the end of season 1.

For the stories throughout season 1, the Taelons were mostly in agreement on how to move their agenda forward, with only differences in how the agenda would be implemented by specific companions.

That's not to say the series was perfect at this point. There was definitely some inane writing at times. The most prominent problem was the focus on Da'an as the "primary" Taelon. We came to understand that Da'an was the "North American Companion". So what about the South American Companion, the Asian Companion, the UK or even the Russian Companions? Where were they? Why did we never see them or hear their sides of this agenda? Why weren't their inputs as important as Da'an's? There's a big plot hole here which could have been rectified by season 2, but it wasn't. Instead, the series attempted to capitalize on this disparity rather than explain it.

Season 1 wasn’t perfect, but it was a very first and important step in explaining why the Taelons had come to Earth, why a growing faction of “Resistance” had formed and how both parties would interact with one another as the series progressed.

Cast Changes

Unfortunately, by Season 2, the producers had introduced their own agenda. They wanted a younger (and hopefully larger) audience and in so doing began one in a series of rather odd personnel changes. Let’s stop right here and discuss this aspect.

Personnel changes can be the kiss of death for a series. While some series have endured cast changes to some degree, the series is never the same afterward. The most prominent example was Charlie's Angels, which lost Farrah Fawcett after one season over her demand for more pay. Because the producers were unwilling to meet her demands, but she was also contractually locked into a certain number of episodes, a compromise was reached: Farrah would appear in fewer episodes spread over a longer period of time, receiving the same per-episode pay as when she worked every episode. Effectively, she got her raise, but worked less and made less money overall.

This change was the beginning of Charlie's Angels tanking. Farrah's departure opened the door to a revolving cast of wannabe actresses moving in and out of the series, from Cheryl Ladd to Shelley Hack to Tanya Roberts, each worse than the last… with Cheryl Ladd the one most able to hold her own, though she could be just as awkward at times. Even so, the amazing, iconic chemistry between Jaclyn Smith, Kate Jackson and Farrah Fawcett was undeniable and was entirely lost with Farrah's departure. The producers should have kept Farrah at all costs, yes, even at her higher salary. Yet they failed to do so.

Getting back to Earth Final Conflict, the same situation unfolded on this series. At the end of Season 1, the William Boone character was killed off. It's unknown why Kevin Kilner (who played Boone) left the series, but it has been theorized that the producers wanted a younger face as leading man. At the time, Kevin Kilner was around 40. He certainly didn't look it on small, lo-res TV screens, but the producers may have thought so. To that end, William Boone was killed off, leading to a completely preposterous scenario that literally births a new character, Liam Kincaid, for season 2.

In Season 4, probably due to cost-cutting, Richard Chevolleau's role as Augur was replaced by a new female lead, Juliette Street (played by Melinda Deines). Unfortunately, you get what you pay for. Deines seemed so overjoyed at landing the role that she smiled incessantly through the vast majority of her performances, as though she were some kind of EFC cheerleader. Her "better than Augur" technical persona was so overwhelmed by her scenery-chewing deliveries that her character just didn't work for the vast majority of her episodes.

I won't even get into the inane naming of this "Street" character as a replacement for Augur. While Richard Chevolleau's performances steadily improved throughout the first 3 seasons to the point where his deliveries became more or less acceptable, the producers forced viewers to start all over with yet another very green actress playing yet another Augur-type character. So what was the point of all this anyway?

These are by no means the only examples. Oh, no no no. Season 4 ushered in a nearly brand new cast of characters, replacing the Taelons and everything we had come to know about the series. Keep reading below for more details on that immense mistake.


The Acting

Before I get into season 2, let's discuss a hot topic that has been widely debated about this series. Yes, acting was a problem on this series, to be sure. Not so much with Leni Parker or Anita La Selva (the Taelons), but with the human actors. From Doors to Augur to Marquette to Boone, the acting was merely fair: utilitarian enough to carry the stories forward, but nothing award-winning. This is partly down to the directors; some directors can draw excellent performances from mediocre actors, but not on this series.

Kilner's William Boone, for example, had a delivery that sounded like a dress rehearsal, reading the lines for the first time. His delivery is so dry and unaffected that it's occasionally difficult to watch. Augur, on the other hand, delivered lines as if in a Saturday morning children's show. Between the clownish outfits and the near-constant scenery chewing, Richard Chevolleau's Augur made for a difficult watch at times.

Howard's delivery as Marquette ranged from decent to excellent across most of her episodes. Her delivery mostly worked well, and she held her own with the material. In fact, I'd say Lisa Howard's Marquette, one of the strongest actors in the series, was the glue that more or less held the "human" cast together.

Leni Parker and Anita La Selva worked well within their "alien" constraints as Da'an and Zo'or, respectively. From the hand gestures to the head bobs to the constraining costumes and prosthetic makeup, the acting was confined to specific "alien" mannerisms. These constraints, combined with the alien attitudes, yielded a more stilted performance… which is more or less what we got. Whether this was intentional or a product of the costumes and prosthetics remains unknown; for the sake of this article, I'll assume the mannerisms and stilted performances were intentional, as the roles demanded, to sell the Taelons. I haven't seen these two actresses in other parts, so I can't speak to their full range or abilities. As Taelons, though, their subtle, constrained delivery worked well… particularly Leni Parker, whose occasional smile said volumes about Da'an.

Speaking of smiles, well, smirks actually: Robert Leeshock's incessant smirk after delivering a line of dialog is not only annoying, it makes it impossible to gauge the seriousness of the delivery. In his own way, Leeshock replaced Kilner's dry, flat affect with his own oddly smirky, yet equally dry and flat, delivery.

Season 2

Here's where things start to get rough. This sophomore season didn't help the series much at all. The first thing to notice about Season 2 is the cast change from William Boone to Liam Kincaid. As discussed earlier, Kincaid's setup was entirely preposterous. At the end of Season 1, Ha'gel (a Kimera who looks like a Taelon) impregnates the Irish protector, Siobhan Beckett (played by Kari Matchett). The fetus then grows from baby to child to adult at an astonishing rate; from birth to adulthood takes no more than a couple of hours.

Because Ha'gel is Kimera, the baby (who becomes Liam Kincaid) is half human, half Kimera. Why this plot device was included is entirely unknown. The writers failed to explore Liam's Kimera abilities for the vast majority of seasons 2 and 3, eventually explaining it away as his Kimera side "disappearing". Yeah, right, whatever. Whenever Liam lands in a combat scenario, he reaches for a gun 90% of the time rather than using his innate Shaqarava, which serve primarily as palm-embedded energy-projectile weapons. Though, I believe the Shaqarava could be used for other purposes, like healing or as a telepathic instrument.

It gets worse. In the episode Hijacked, a reporter captures Liam's use of his Shaqarava (palm energy organs) on camera. When confronted, she acts all surprised, as if, while standing on the mothership surrounded by literally tons of Taelons, Taelon-implanted humans and Taelon technology, this is the most surprising thing she's ever seen in a human. It's such stupid, preposterous writing, I don't even know how the showrunners thought we would swallow this trash.

Yet, this surprise is presented in all sincerity. The only thing Liam needed to tell the reporter is that it's Taelon technology. It IS Taelon in origin, so he wouldn't even be lying. It's just not technology from these specific Taelons; it's inherited from the era when the Taelons and Jaridians were one race… which is what and who Ha'gel is (an "old style Taelon"). Worse, why does it matter anyway? Liam acts like being exposed for having Shaqarava is the worst thing ever. With literally thousands of Taelon-implanted and enhanced humans running around, many using Skrill as weapons, is Liam's Shaqarava really something to be concerned over?

However, that's not the worst of season 2. There were so many bad lines of dialog written, I can't document them all here. There were lines delivered by Zo'or that literally made no sense. Zo'or would ask questions of the humans to which all Taelons should already know the answer. So why is Zo'or asking? It would be one thing if the questions were sarcastic, but the writers had Zo'or ask them in all seriousness, as though Zo'or genuinely didn't know. It makes no sense. The Taelons are superior to humans in practically every way, with vastly superior knowledge and intelligence. Why is Zo'or asking humans such basic, degrading questions?

There were so many stupid lines of dialog in too many of the season 2 episodes, the writing felt cloddish. It’s clear the writers had no finesse for the scripts or the source material. They didn’t understand where to take the Taelons and, indeed, attempted to rely on silly and foolish action scenes to carry the weight of the episodes rather than using the story to drive the overall narrative. While the first season was rough, it wasn’t cloddish. Going into season 2, however, the show began with the absurd and got even more inane as the season progressed.

In fact, some action scenes are so out of place and come from nowhere, I’m at a loss to figure out what the writers were thinking. Is the only way out of a story dilemma to throw in a chase and gunfire scene? I don’t get it. There were so many unnecessary (and unproductive) shootouts, I eventually lost track. Worse, in all of those shootouts, Liam almost never uses his Shaqarava. There are so many times where he has no weapon on him, and yet he takes punches or gunfire as if he were truly defenseless. It’s like, come on writers, he has the Shaqarava. Let him use it to end the conflict. Yet, Liam almost always ends up face down on the floor unconscious without ever having used his Shaqarava.

The only time he ends up using his Shaqarava is when there’s a stupid plot device involved, like in Season 2, Episode 18’s Hijacked.

Let’s talk about Hijacked again in another way. This episode is so preposterous and absurd, I can’t even fathom why it was written. If the Taelon mothership is so vulnerable to takeover, why did it take this long for the human resistance to abscond with it? The dialog that “justifies” the episode is, paraphrased, that the Taelons didn’t design with vulnerabilities in mind. If that were the case, then every piece of Taelon technology is similarly vulnerable to takeover.

There is zero chance a dying race as advanced as the Taelons would be that naïve towards technology vulnerabilities. Claiming that and then writing an episode around the mothership being taken for a joy ride is jump-the-shark preposterous. It’s as preposterous as the first episode when Liam Kincaid comes into existence.

It’s like the showrunners for season 1 were fired and replaced. Then, new showrunners were allowed to retool the series to their own whims and desires. Season 2 felt like it was simply a playground for the “new” producers, not to tell a story, but to put up whatever all-action stories the writers could devise, stupid or not.


Before I get into season 3, let’s talk about one more season 2 episode, the Atavus episode. Its stupidity should be understood immediately, because this episode shaped seasons 4 and 5. The primary mistake the writers made with the Atavus episode is that the series had already established that Taelons don’t revert to an atavistic state upon severance from the Commonality (the psychic bond to other Taelons). That fact was established by Ma’el thousands of years earlier. Ma’el didn’t have access to humans to “suck” their minds to keep from reverting. Either Ma’el figured out a way that has never been known to Taelons, or Taelons simply don’t automatically revert. I’m betting on the latter.

When Ma’el came to Earth to learn about humans, he severed his ties to the Commonality. Yet, he didn’t have the luxury of near-death humans to sustain him in Taelon form. He also didn’t revert to an Atavus. Or, if he did, he quickly figured out a way to revert back to Taelon form. However, it was never established that Ma’el ever became an Atavus or remained in that state after severing his ties to the Commonality. That means Season 2, Episode 2 made no sense when Bel’lie comes to Earth from a different Taelon colony strictly to become an Atavus.

That either means that Ma’el was immune to becoming an Atavus, or that this whole “become an Atavus” thing was made up by the writers purely for the convenience of this silly Season 2 episode.

Season 2 also employed one entirely questionable and throwaway episode in S2E8’s Redemption. This episode consisted almost entirely of flashbacks combined with maybe 5-8 minutes of “new” content to carry the segments. It wasn’t even handled by a primary character. It was handled by the tertiary character Siobhan Beckett (Kari Matchett)… as if we actually cared about the fate of this character? This wouldn’t be the first or last throwaway character to whom the writers would devote an entire episode.

One other extremely bad episode to call out is the final episode of Season 2. The entire story is so dumb, I don’t even know why the idea was greenlit. The whole series up to this point had been grooming Jonathan Doors (played by David Hemblen) to become President of the United States. In fact, Zo’or was nearly ready to profess his loyalty to Doors. Yet, this episode sees Zo’or, with his right-hand man Sandoval, lead the charge to assassinate Daniel Thompson, the then President of the United States in the series. The assassination was to be carried out during the presidential debate. In fact, what we come to find later is that the plot intended to injure Thompson and pin it on Doors, with the intention of discrediting him. Why is it bad? Because while the Taelons are malevolent in a very hands-off way, this plot shows them to be overtly dangerous and fully willing to play with people’s lives to further their agenda; they had never acted this overtly before.

Honestly, the show would have been better had Doors won the election. This convoluted plot kept Thompson in office, an actor (Barry Flatman) whose bit parts were okay, but nothing to write home about. Now we get ever more involved plots built around this actor. David Hemblen’s acting was far superior to Barry Flatman’s. Why the producers chose to sideline David Hemblen’s character in favor of Barry Flatman’s is beyond me. Yet more stupid showrunner decisions.

Season 3

More personnel changes. At the opening of Season 3, we come to find that the Season 2 prophetic vision of Augur’s holographic computer interface being pregnant and having a baby actually served a real life purpose. Lisa Howard was apparently pregnant and showing towards the tail end of season 2. Showing Augur’s computer character as pregnant and then having a child would have allowed Lisa Howard to continue her dual role, playing both Lili Marquette and Augur’s holographic computer assistant in the shape of Lili Marquette.

Going into Season 3, we come to find Lili has been killed… or at least, so we’re told. Behind the scenes, Lisa Howard had apparently requested maternity leave from the producers for the first half of Season 3. The producers apparently obliged her request somewhat and reduced her role significantly. Not only did Augur delete his holographic interface, Lili Marquette also mostly disappeared from the series… appearing in only a few Season 3 episodes. This allowed Lisa Howard the time she wanted to be with her new family member.

Unfortunately, this personnel change left a hole in the series for a leading lady. To fill it, they hired actress Jayne Heitmeyer to play Jonathan Doors’s new right hand, Renee Palmer. Lisa Howard apparently claimed that Heitmeyer wasn’t intended to be a replacement for Lili Marquette. While that statement may have seemed true in some small way, that’s not really how the series turned out. In fact, by Seasons 4 and 5, Heitmeyer’s character, Renee Palmer, had become the primary lead who carried the entire series (alongside Sandoval). So, yes, Renee Palmer did eventually come to replace Lili Marquette and most of the rest of the “human” cast.

Season 3 is just as ludicrous, unfortunately, as season 2. This is mostly because of the newly introduced Renee Palmer character. When this character appears, there are no questions, no background checks, nothing. She’s so readily accepted into the fold, it’s as if she’s been there all along. This is part of the reason I despise personnel changes in TV series. The writers could have at least had the Taelons perform some kind of due diligence around Renee’s appearance. Yet, crickets.

As for Jayne Heitmeyer, her mostly flat affect left the series, once again, without a strong actor to carry it. The character was mostly utilitarian, just enough to fill the role. It’s the same problem as with William Boone and again with Liam Kincaid. The producers kept trying to inject fresh blood into the series, yet kept failing at it so amazingly. While the series started out with a strong enough cast, particularly the Taelons, the “human” cast kept changing as often as people change their socks.

Character changes don’t necessarily make a series bad all by themselves, but when coupled with amazingly bad scripts and no way to properly use these characters, they most certainly do. As I said above, character changes are ultimately the kiss of death for a series… and Earth Final Conflict proves this out in spades. It’s really not solely the cast changes that made EFC bad, but also the bad, bad writing.

As a prime example of this extremely bad writing, in Season 3 Episode 14, Da’an opens the episode by saying, “The Jaridians carry within them the demon of violence which we removed from our species.” This stupid line of dialog is uttered to Liam Kincaid by Da’an one episode after Zo’or kills Ram, the “Latin American Companion” along with a human using an interdimensional portal and nearly kills T’Than, the war minister and another human the same way.

Liam Kincaid is well aware of what happened to Ram, and nearly to T’Than, at Zo’or’s hand, having almost been killed himself. Yet, Kincaid’s response? “What you call the demon of violence is the willingness to fight your own battles.” Kincaid was nearly killed by Zo’or perpetrating violence against his own species, yet Kincaid makes no mention of this fact to Da’an? The Taelons have no more removed violence from their species than humans have. Yet, the writers aren’t willing to admit this.

In another episode, Season 3 Episode 9 “In Memory”, Lili wakes up and thinks she’s on Earth again. Yet, so many clues tell her strongly that she’s not where she thinks she is. Additionally, her hospital captors seem to forever press her to repair the interdimensional drive in her damaged shuttle, a very big clue in itself as to what’s going on. It’s not as if the series hadn’t told both the audience and the characters repeatedly, over many episodes, that if the Jaridians managed to get their hands on a working interdimensional drive, it would allow them to reach Earth much, much sooner.

You would have thought the Lili character was far smarter than this. Lili should have recognized these clues, especially the continual pressure to repair the drive system. It’s not as if the Taelons haven’t repeatedly warned not only her, but every human, about the Jaridians’ plan. Yet, there Lili sits, being asked to do the very thing that could help the Jaridians out majorly, all without a single suspicion? I figured out the ruse in under 5 minutes. Yet, Lili doesn’t figure it out until the very end? I don’t think so. Again, grossly bad tropes and horribly bad writing. Writers must treat characters as if they actually have brains in their heads, particularly when stories have previously demonstrated that they do. Barring that, the writers should have kept Lili drugged up and constantly in a brain fog so that she couldn’t put these pieces together. Yet, nothing in the episode unfolded in this way.

Seasons 4 and 5

I have chosen to lump both of these crap seasons together as one. The beginning of season 4 is effectively where Jayne Heitmeyer takes over as the lone character driving this series. Additionally, the Taelons are gone, replaced by the two Atavus: Juda (Guylaine St-Onge) and Howlyn (Alan Van Sprang). Gone are the characters we had come to know and (ahem) love, and in their place we now have a nearly brand new cast of characters we don’t know and ultimately don’t care about. No offense, Juda and Howlyn.

This is truly when the writers lost their way and had no idea where to take this series. When Earth Final Conflict could have ended with an uplifting story, it instead attempted to turn itself into a competitor to Buffy the Vampire Slayer (which was effectively in the #1 spot at the time). The producers and writers made a grave mistake with this series, one that ultimately turned it into trash.

Instead of embracing the roots and uniqueness of what was set up in season 1, they chose to embrace a concept foreign to the series. Effectively, the series turned the Taelons into energy-sucking creatures, similar to that campy and crappy 1985 movie “Lifeforce”, in an attempt to compete with Buffy the Vampire Slayer on another network. In essence, the series became Renee the Vampire Slayer, with Renee Palmer becoming the primary character who runs around slaying the Atavus, much like Buffy slays vampires.

After the questionable choice of having Taelons merge into fewer characters, who then revert to becoming Atavus, the whole series takes a 180 degree turn. No longer is EFC about Taelons being a friend to Earth. Now it’s about the Atavus let loose on Earth to suck energy from humans to survive.

I can’t even fathom what the producers were thinking. Here you have a show about Taelons who come to Earth and who understand their fate is inextricably linked to humans. Then, the producers derail that premise entirely and effectively kill off the Taelons, only to replace them with the Atavus: primitive creatures who look like vampires, act like vampires and kill like vampires. It’s surprising, then, that the Atavus storyline managed to last for two full seasons before the series was cancelled.

Assignment Earth’s ties to Earth Final Conflict

It’s worth noting that Gene Roddenberry attempted to create a spinoff series while Star Trek: The Original Series was in production in 1968. This new series was to be called Assignment Earth. It featured an alien, Gary Seven, who comes to Earth to study humans and human culture and to intervene as necessary to keep humanity safe. One episode was made, in the form of a Star Trek episode featuring Robert Lansing and Teri Garr, as what was intended to be the series pilot. Unfortunately, this spinoff failed to materialize and was not picked up.

How exactly is Earth Final Conflict related to Assignment Earth? Clearly, Gene Roddenberry wanted Assignment Earth to succeed. However, due to the feedback he received after Assignment Earth failed to get picked up, it appears Gene may have retooled the idea into what ultimately became Earth Final Conflict. In the 1960s, limited special effects were available. As Gene moved into the late 80s, computer effects were becoming available along with much better film effects by companies like Industrial Light and Magic (ILM). This allowed Gene to retool some of these previous ideas into concepts which could now be filmed properly with current effect technologies. Thus, the idea of using energy beings like the Taelons could now be realized on video or film.

Even the premises of Assignment Earth and Earth Final Conflict are similar enough to draw correlations between the two. For example, Gary Seven is assigned to Earth for as-yet-unknown reasons, but he is an alien. He is accompanied by his metamorph companion, Isis, who could transform between a house cat and a human form. Clearly, the human forms of Gary Seven and Isis were not native to these aliens. They simply use these forms to allow for better interaction with humanity. The same can be said of the Taelons. Their human form was only created to better interact with humans. When Taelons are in their energy form, they only somewhat resemble a human shape.

Additionally, the Taelons were effectively “assigned” to Earth to help humanity, the same goal as Gary Seven in Assignment Earth. The parallels between these two story ideas are more similar than they are dissimilar. I won’t dive into all of the parallels between these two series ideas. You simply need to watch both to understand them. These parallels have led this author to believe that Gene Roddenberry had ultimately retooled the original Assignment Earth ideas into what became Earth Final Conflict.

Additionally, Earth Final Conflict was originally to be titled Battleground: Earth, similar to Assignment: Earth. Around the time Earth Final Conflict was headed to the airwaves, L. Ron Hubbard’s Battlefield Earth was a well-known property. Because of the confusing naming similarity between Battlefield Earth and Battleground: Earth, it was decided to rename the TV series Earth Final Conflict to avoid that confusion. Thus, it is this author’s opinion that Gene Roddenberry’s original Assignment Earth idea morphed into Battleground: Earth, which ultimately became Earth Final Conflict. Even though Assignment Earth was never realized as a series during the 60s, it did finally become one, as Earth Final Conflict, in the mid-to-late 1990s.

I’ve seen many comments from fans who have wished that Assignment Earth had come to pass as a TV series. In effect, it did… in Earth Final Conflict.


It’s too bad that Majel Barrett Roddenberry didn’t retain enough control over the series to keep it from taking this direction. I would really have liked to see the Taelons return, and then have the humans help save the Taelons, at least in some small way. Earth Final Conflict should have been an uplifting series, not a dark, depressing survival series.

If you ignore seasons 4 and 5 as if they don’t exist and focus on seasons 1-3 as if these are the only seasons that matter, it’s not a bad series. The inclusion of seasons 4 and 5 simply turns Earth Final Conflict into a throwaway, worthless series in the annals of science fiction. I’m quite sure that if Gene Roddenberry had been alive to see his creation come to fruition, it would have turned out much differently. I’m also surprised that both Majel Roddenberry and Rod Roddenberry allowed this series to take such an unusual and completely distasteful turn at the end.


Did Elizabeth Holmes get the correct sentence?

Posted in botch, business, california, criminal, legal by commorancy on March 5, 2023

As we should already know, now-disgraced and convicted CEO Elizabeth Holmes operated Theranos. Theranos was to offer the world a fantastical new way of testing people’s health concerns, for all manner of blood diagnostics, all with the tiniest drop of blood. It’s fantastical because Elizabeth Holmes’s Theranos was never able to make this testing technology actually work (the entire basis for the fraud). Ms. Holmes has now been convicted of wire fraud and defrauding investors, a federal crime. More than this, Ms. Holmes has now been sentenced to serve 11 years in a federal prison.

NBC News Opinion

One NBC opinion piece, written by a former federal prosecutor and current attorney, Andrey Spektor, contends that Elizabeth Holmes’s 11 year sentence is too harsh. This author does not agree. Why? Because of the nature of and, more importantly, the real dangers posed by the device she failed to create.

Andrey’s contention is:

But that calculation was the least important component of determining Holmes’ sentence because the judge ultimately disagreed with the probation estimate and, anyway, no rational judge would have sentenced her to anything approaching life in prison. Among other things, she is a first-time, nonviolent offender whose crime did not lead to anyone’s death.

I contend that this highlighted statement is at best inaccurate and at worst false. There may actually have been illness and death as a result of Theranos’s deception, because the Theranos “Edison miniLab” machine (pictured) did not work as purported and likely impacted the medical treatments needed (or not needed, as the case may be) by medical patients. For Andrey to contend that no one died (or, by extension, that no one was injured or hurt) is incredibly wrong thinking.

Ms. Holmes’s deception impacted many people’s health; health which relied on accurate testing results from Theranos’s Edison miniLab machine. Without accurate testing results, the wrong medications could have been prescribed, the wrong treatment plans could be implemented, up to and including not prescribing medications which could save people’s lives… or indeed the opposite may have occurred; the wrong prescription may have been prescribed causing injury or potentially death. Claiming her fraudulent testing equipment couldn’t cause harm is fallacious and disingenuous. Worse, according to the whistleblowers, Ms. Holmes knew that her miniLab testing equipment didn’t work!

Dangers to Society

The fact that the Edison machine’s deception was “caught early” is of no consolation to those who received inaccurate test results from Theranos’s intentional equipment deception. In other words, you can’t just play “god” with people’s lives and health and expect to get away with it.

To claim that her intentional deception about her alleged testing methodology, which clearly did not work properly (or at all), didn’t impact people’s health and lives is insulting to those who could have lost their lives to Theranos’s medical fraud. Even now, some could still lose their lives early because of Theranos.

That the fraud was caught early because of two conscientious whistleblowers within Theranos’s employee ranks is more a testament to those two individuals’ forthright and upstanding consciences than to Elizabeth Holmes coming clean about the dangers the Theranos Edison ultimately posed to society. Elizabeth Holmes would likely have continued to play this dangerous game if those two whistleblowers hadn’t come forward. It wasn’t Elizabeth Holmes who “came clean” about the wrongness of her equipment. It was those two Theranos whistleblowers who put their careers in jeopardy to save the lives of others.

11 Year Sentence

For all of these reasons above, I vehemently disagree with Andrey Spektor’s opinion. Elizabeth Holmes’s 11 year sentence is not at all inappropriate or too long. In fact, I’d say her sentence was downright lenient considering the danger she, Theranos and her fraudulent testing equipment posed to society as a whole. If her equipment’s fraud had not been found early, we could have gone perhaps a year or two or longer without knowing how many people might have been misdiagnosed, given the wrong medical treatments or, indeed, given no treatments at all for preventable, but fatal illnesses if left untreated. In short, Elizabeth Holmes (and her fake testing equipment) was (and is) a danger to society.

I contend that 11 years is way too lenient for that level of danger and risk that she and Theranos posed to the world. She doesn’t deserve leniency for having committed this level of medical malfeasance against the public at large. While one can try and argue that the trial wasn’t about her medical malfeasance specifically, the fraud fully stemmed from that malfeasance. Thus, any malfeasance must be considered as part of the sentencing. It can’t be “distanced” or “separated” as though it didn’t exist. That malfeasance was the entire reason Elizabeth Holmes’s machine was found to have caused the defrauding of investors. Eye on the ball, people.

While a trial for the affected patients was not allowed to move forward, that doesn’t preclude the absolute sheer negligence and willful malfeasance Holmes performed against an unsuspecting public. Elizabeth Holmes knew her machine didn’t work. Yet she STILL went ahead with placing it into Walgreens knowing its problems. That’s not innocent happenstance; that’s willful malfeasance and, at worst, malevolence. Conscientious people don’t put other people in harm’s way intentionally. Elizabeth Holmes put people in harm’s way. One might want to call that blind ambition. Call it what you will. Blind ambition can still result in someone doing the wrong things for the wrong reasons, even knowing that the outcome might cause harm to others. That can’t be dismissed with an 18 month sentence (as Ms. Holmes has requested), a mere slap on the wrist.

No, the 11 year sentence handed down by federal sentencing Judge Edward Davila was definitely of sufficient length to give her pause AND send her a solid message about what she and Theranos had done to the public… even if not specifically stated by Judge Davila; this judge knew the stakes.

Babies as Shields?

One thing Elizabeth Holmes appears to also be shrewd at is trying to get out of her 11 year sentence. She’s now attempting this by getting pregnant. There’s absolutely nothing wrong with starting a family… but on the heels of beginning an 11 year federal criminal sentence? I get that her biological clock is ticking, but it primarily says she’s using an infant as a shield. That’s not a good look and it fully supports the above malfeasance. She’s putting her baby in harm’s way to protect herself from going to prison, or at least so she hopes. It’s a crude and crass way to begin prison… and it leaves her kids in the lurch without a parent for 11 years.

She knew she had been convicted, yet she chose to get pregnant anyway? A judge should have held her in contempt of court over that. When Holmes’s first child was born, her trial had not yet begun. Thus, there was no way to know which way her trial might go. Her second child, however, is simply being used as a pawn against incarceration. That’s both a nasty and very vile reason to have a child. It doesn’t show compassion for the child, it shows self-preservation by Holmes. It’s an incredibly uncaring and self-centered tactic, especially for a baby that’s now caught in her manipulative crossfire. As I said, distasteful.

She’s now delivered her second child, but it’s almost certain she’s working hard to conceive a third as yet another shield. Enough’s enough here. If she pops up pregnant again, cite her for contempt of court, let her carry that child to term in prison and give birth to that child behind bars. The sentence was issued and it must be carried out. Having a baby shouldn’t become a “get out of jail free” card… not for her, not for anyone. Worse, babies should never be used as incarceration blockers.

Judges should make it perfectly clear to any convicted felon that deciding to conceive a baby after conviction means possible contempt of court, and that neither the pregnancy nor the birth will stop the incarceration from occurring. Playing these games with the court should always mean contempt of court and possibly longer incarceration time.

Did Elizabeth Holmes get the correct sentence?

No, but not for the reasons Andrey Spektor proposed. In fact, Ms. Holmes got a far more lenient sentence than she should have been given considering the real medical dangers both she and her testing machine imposed on society. Ms. Holmes should count herself lucky at receiving only 11 years. Let’s hope that when she gets out of prison, she doesn’t try to start yet another dangerous “medical testing” company.

As for those 11 years Ms. Holmes faces? This amount of incarceration also sends a clear message to other would-be CEOs not to play with people’s lives using untested medical technologies in pursuit of personal fame, wealth or anything else.


The Evolution of Sound Recording

Posted in audio engineering, audio recording, history by commorancy on February 14, 2023

Beginning in the 1920s and up to the present, sound recordings have changed and improved dramatically. This article spans 100 years of audio technology improvements, though audio recording itself dates all the way back to the phonautograph in 1860. What drove these changes was primarily the quality of the recording media available at the time. This is a history-based article and runs about 20,000 words due to the material and ten decades covered. Grab a cup of your favorite hot beverage, sit back and let’s explore the many highlights of what sound recording has achieved since 1920.

Before We Get Started

Some caveats to begin. This article isn’t intended to offer a comprehensive history of all possible sound devices or technologies produced since the 1920s. Instead, this article’s goal is to provide a glimpse of what has led to our current technologies by calling out the most important technological breakthroughs in each decade, according to this Randocity author. Some of these audio technology breakthroughs may not have been designed for the field of audio, but have impacted audio recording processes nonetheless. If you’re really interested in learning about every audio technology ever invented, sold or used within a given decade, Google is the best place to start for that level of exploration and research. Wikipedia is another good source of information. A comprehensive history is not the intent of this article.

Know then that this article will not discuss some technologies. If you think that a missing technology was important enough to have been included, please leave a comment below for consideration.

This article is broken down by decades, punctuated with deeper dives into specific topics. This sectioning is intended to make the article easier to read over multiple sittings if you’re short on time. There is intentionally no TL;DR section in this article. If you want a quick synopsis you can read in 5 minutes, this is not that article.

Finally, because of the length of this article, there may still be unintended typos, misspellings and other errors present. Randocity is continuing to comb through this article to shake out any loose grammatical problems. Please bear with us while we continue this cleanup. As part of that process, more information may be added to improve clarity or as the article requires.

With that in mind, onto the roaring…



Ah, the flapper era. Let’s all reminisce over the quaint cloche hats covering tightly waved hair, the Charleston headbands adorned with a feather, the fringe dresses and the flapper era music in general, which wouldn’t be complete without including the Charleston itself.



For males, it was all about a pinstripe or grey suit with a Malone cap, straw hat or possibly a fedora. While women were burning their hair with hot irons to create that signature 1920s wave hairstyle and slipping into fringe flapper evening dresses, musicians were recording their music using, at least by today’s standards, antiquated audio equipment. At the time in the 1920s, though, that recording equipment was considered top end professional!

In the 1920s, recordings produced in a recording studio were recorded, or ‘cut’, by a record cutting lathe; hence the term “record cut”. This style of lathe recorder used a stylus which cut a continuous groove into a “master” record, usually made of lacquer. The speed? 78 RPM (revolutions per minute). Prior to the 1920s, records were made via acoustic recording (no electricity involved), and a studio typically had just one acoustic microphone. When electrical microphones appeared, the setup required an immense amplifier to feed the sound into the lathe recorder. The 1920s ushered in these electrical amplifiers, which improved sound quality, allowed better microphone placement (and more microphones) and made recordings sound more natural by improving the volume cut into the master. Effectively, a studio recorded the music straight onto a “cut” record, which was then used as a master to mass produce shellac 78 RPM records, which would then be sold en masse to consumers in stores.

This also meant that there was no such thing as overdubbing; musicians had to get the entire song down in a single take. Multiple takes could be made to capture the best possible version, but each attempt consumed another master disc.

Though audio recording processes would improve only a little from 1920 to 1929, the recording process would improve much more going into the 30s. We would have to wait until 1948, when the 33⅓ RPM record was introduced, to see a decidedly marked improvement in sound quality on records. Until then, the 78 RPM shellac record would remain the steadfast, but average quality, standard for buying music.

The non-electrical recordings of the early 1920s captured the entire band, including the singer, through a single pickup. It wouldn’t be until the mid-to-late 20s, with electrical recording processes using amplifiers, that a two channel mixing board became available, allowing placement of two or more wired microphones: one or more mics for the band and one for the singer.

Shellac Records

Before we jump into the 1930s, let’s take a moment to discuss the mass produced shellac 78 records, which remained popular well into the 1950s. Shellac is very brittle; dropping one of these records would shatter it into small pieces. Of course, one of these records could also be intentionally broken by whacking it against something hard, like so…




Shellac records fell out of vogue primarily because shellac became a scarce commodity due to World War II wartime efforts, but also because of shellac’s lower sound quality compared with vinyl. The scarcity of shellac, combined with the rise of vinyl records, led to the demise of the shellac format by 1959.

This wartime scarcity also led to another problem: the loss of some audio recordings. Many people performed their civic duty by turning in their shellac 78 RPM records so the material could be reclaimed, and around this time some 78 records were pressed on vinyl instead due to the shortage. While a noble goal, turning in shellac 78s for the war effort contributed to the loss of many shellac recordings. In essence, this wartime effort may have caused the loss of audio recordings that may never be heard again.

1920s Continued

As for cinema sound, it would be the 1920s that introduced moviegoers to what would be affectionately dubbed “talkies“. Cinema sound as we know it, using an optical strip alongside the film, was not yet available. Synchronization of sound to motion pictures was immensely difficult and ofttimes impractical to achieve with the technologies of the day. One system used was the sound-on-disc process, which required synchronizing a separate large phonograph disc with the projected motion picture. Unfortunately, this synchronization was difficult to achieve reliably. The first commercial success of the sound-on-disc process, albeit with limited sound throughout, was The Jazz Singer in 1927.

Even though the sound-on-film (optical strip) process (aka Fox Film Corporation’s Movietone) would be invented during the 1920s, it wouldn’t see wide use until the 1930s, when the process became fully viable for commercial film. The first movies released using Fox’s Movietone optical audio system were Sunrise: A Song of Two Humans (1927) and, a year later, Mother Knows Best (1928). Until optical sound became more widely and easily accessible, most 1920s filmmakers used sound-on-disc (phonographs) to synchronize their sound separately with the film. Only the Fox Film Corporation had access to the Movietone film process at the time, William Fox having purchased the patents in 1926. Even then, the two films above sported only limited voice acting on film; most of their audio consisted of music and sound effects.

Regardless of the clumsy, low quality and usually unworkable processes for providing motion picture sound in the 1920s, this decade was immensely important in ushering sound into the motion picture industry. If anything, the 1920s (and William Fox) proved that sound would become fundamental to the motion picture experience.

Commercial radio broadcasting also began during this era. In November of 1920, KDKA broadcast its first radio programs, beginning the era of commercial radio broadcasting that we know today. Broadcasters now needed ways to attenuate the audio signal to keep it within the bandwidth requirements of the broadcast frequency.

Thus, additional technologies would be both needed and created during the 1930s, such as audio compression and limiting. These compression technologies were designed for the sole purpose of keeping audio volumes strictly within radio broadcast specifications, to prevent overloading the radio transmitter, thus giving the listener a better audio experience when using their new radio equipment. On a related note, RCA apparently held most of the patents for AM radio broadcasting during this time.
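The compression and limiting described above are, at their core, simple amplitude math. As a rough conceptual sketch (the 1930s devices were analog electronics; this pure-Python version, with invented threshold and ratio values, only illustrates the idea of keeping levels within a fixed specification):

```python
# Minimal sketches of the limiting and compression ideas described above.
# The threshold, ceiling and ratio values are invented for illustration.

def hard_limit(samples, ceiling=0.8):
    """Clamp each sample so its absolute value never exceeds `ceiling`."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

def simple_compress(samples, threshold=0.5, ratio=4.0):
    """Above `threshold`, shrink the overshoot by `ratio` (a basic compressor)."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            # Keep the threshold portion; reduce only the overshoot.
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out
```

The limiter guarantees a fixed ceiling is never exceeded, while the gentler compressor preserves more of the original dynamics; in practice a compressor is often followed by a limiter as a safety net.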



By the 1930s, women’s fashion had become more sensible, less ostentatious, less flamboyant and more down-to-earth. Gone were the cloche hats and heavy eye makeup. This decade shares a lot of its women’s fashion sensibilities with 1950s dress.

Continuing the sound-on-film discussion from the 1920s, William Fox’s investment in Movietone would only prove useful until 1931, when Western Electric introduced a light-valve optical recorder that superseded Fox’s Movietone process. The Western Electric process became the optical film process of choice for feature filmmakers throughout the 1930s and beyond, even though Western Electric’s recording equipment proved bulky and heavy. Fox’s Movietone optical process remained in use for producing Movietone newsreels until 1939, thanks to the better portability of its sound equipment, due in part to Western Electric’s over-engineered and unnecessarily heavy light-valve case designs.

As for commercial audio recording throughout the 1930s, recording processes hadn’t changed drastically from the 20s, but new equipment was introduced to aid in recording music, including better microphones. The amplifiers got a little smaller, microphone quality improved and multi-channel mixing boards came into use. These boards meant that instead of recording from only one microphone, many microphones (as many as six) could capture an orchestra, including a singer, mixed down into one monophonic input for the lathe recorder. This change allowed for better, more accurate, more controlled sound recording and reproduction. However, the lathe stylus recorder was still the main way to record, even though audio tape recording equipment was beginning to appear, such as the AEG/Telefunken Magnetophon K1 (1935).

At this time, RCA produced its first uni-directional ribbon microphone, the large 77A (1932), and this mic became a workhorse in many studios (there is some discrepancy on the exact date the 77A was introduced). It was big and bulky, but it became an instant favorite. In 1933, RCA introduced its smaller successor to the 77A, the RCA 44A, a bi-directional microphone. The model 77 would go on to see the release of the 77B, C, D and DX, though the two latter 77-series microphones wouldn’t see release until the mid-40s, after being redesigned to about the size of the 44.

There would be three 44-series models: the 44A (1933), 44B (1936) and 44BX (1938). These figure-8 pattern bi-directional ribbon microphones became the workhorse mics of most of the recording and broadcast industries in the United States, ultimately replacing the 77A throughout the 30s. These microphones were, in fact, so popular that some are still in use today and can be found on eBay; there’s even an RCA 44A replica being produced today by AEA. Unfortunately, RCA discontinued manufacture of the 44 series in 1955 and stopped producing microphones altogether in 1977; fittingly, RCA’s last model released was the model 77, in 1977. The 44A sported an audio pickup range from 50 Hz to 15,000 Hz… an impressive frequency range, even though the lathe recording system could not capture or reproduce all of it.

A mixing board, combined with several of the new workhorse 44A mics, allowed a sound engineer to bring certain channel volumes up and others down. Using a mixing board, vocalists could be brought front and center in the recording rather than drowned out by the band, with levels adjusted on the fly during recording by the engineer’s hand and a pair of monitor headphones or speakers.
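Conceptually, that mono mixdown is just a per-sample weighted sum of the microphone channels. Here’s a minimal sketch (the channel data and gain values are invented for illustration):

```python
# A sketch of the mono mixdown described above: several microphone channels,
# each with its own fader gain, summed into a single monophonic signal.

def mix_to_mono(channels, gains):
    """Sum equal-length channels into one mono track, weighting each by its gain."""
    length = len(channels[0])
    return [sum(g * ch[i] for ch, g in zip(channels, gains)) for i in range(length)]

# Invented example data: a quiet vocal mic and a louder band mic.
vocal = [0.2, 0.4, 0.2]
band  = [0.5, 0.5, 0.5]

# Bring the vocalist "front and center" by giving the vocal mic full gain
# while pulling the band down to half.
mono = mix_to_mono([vocal, band], gains=[1.0, 0.5])
```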

One microphone problem during the 20s was that microphones were primarily omni-directional, meaning noise would be picked up from anywhere around the microphone. In recording situations, everything had to remain entirely silent during the recording process except for the sound being recorded. By 1935, Siemens and RCA had introduced various directional (cardioid) microphones to address extraneous noise. These microphones picked up audio within their pickup pattern, primarily in front of the capsule, while rejecting sounds outside the pattern. This improvement was especially important when recording film audio on location: you can’t exactly stop car honking, tires squealing and general city noises during a take.

Most recording studios at the time relied on heavy wall-mounted gear that wasn’t at all easy to transport, meaning recording had to be done in the confines of a studio using fixed equipment. This need for portability led to the 1938 Western Electric 22D mixer, which had 4 microphone inputs and a master gain output. It sported a lighted VU meter and could be mounted in a portable carrying case or possibly in a rack. This unit even sported a battery pack! In 1938, it was used primarily for radio broadcast recording, but this or similar portable models were also used when recording on-location audio for films or newsreels of the time.


In the studio, larger versions with more channels were utilized to allow for more microphone placements, still mixing down into a single monophonic channel. Such studios typically used up to 6 microphones, though the amplifiers sometimes added hiss and noise, which might become audible if too many were strung together; phase problems could also arise if too many microphones were utilized. The resulting monophonic output fed the mass produced shellac 78 RPM records, radio broadcasts or movies shown in a theater.

Here are more details about this portable Western Electric 22D mixing board…


Lathe Recording

Unfortunately, electric current during this time was still considered too unreliable; unstable power could cause audible “wobble” if electricity drove the turntable during recording. In some cases, lathe recorders used a heavy counterweight suspended from the ceiling, which dropped slowly to the floor at a specified rate, driving the turntable’s rotation at a continuous speed. This weight system gave the lathe a stable cut from the beginning to the end of the recording, unaffected by potentially unstable electrical power. Electrical power was used for amplification, but not always for driving the turntable rotation while “cutting” the recording. Spring-based or wound mechanisms may also have been used.

1930s Continued

All things considered, this Western Electric portable 4 channel mixer was pretty far ahead of the curve; these 1930s audio innovations led directly into the 60s and 70s era of recording. This portability was likely driven both by broadcasters, who wanted to record audio on location, and by the movie industry, which needed on-location sound while filming. Though, the person tasked with using this equipment had to lug around 60 lbs of gear, 30 lbs on each shoulder.

Additionally, during the 1930s and specifically in 1934, Bell Labs began experimenting with stereo (binaural) recordings in their audio labs. Here’s an early stereo recording from 1934.

Note that even though this recording was made in stereo in 1934, the first commercially produced stereo / binaural record wouldn’t hit the stores until 1957. Until then, monophonic / monaural 78 RPM records remained the primary standard for purchasing music.

For home recording (and even some professional situations) in the 1930s, there were options. Presto created various home recording lathes; some of its portable models include the 6D, D, M and J, introduced between 1932 and 1937, with the K8 following around 1941. Some of these recorders can still be found on the secondary market today in working order. These units required specialty blank records in 6″, 8″ and 10″ sizes, sporting 4 holes in the center, and recorded at either 78 or 33⅓ RPM. In 1934, these recorder lathes cost around $400, equivalent to well over $2,000 today; by 1941, prices had dropped to between $75 and $200. The blanks cost around $16 per disc during the 30s, equivalent to around $290 today. Just think about recording random noises onto a $290 blank disc. Expensive!
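As a quick check on that price conversion, the article’s own figures ($16 then, roughly $290 now) imply an inflation multiplier of about 18x; no official CPI data is used here, only the numbers quoted above:

```python
# Sanity-check the disc-price conversion using only the article's own figures.

def implied_multiplier(price_then, price_now):
    """Return the inflation multiplier implied by a then/now price pair."""
    return price_now / price_then

# $16 in the 1930s vs ~$290 today, per the text above.
disc_multiplier = implied_multiplier(16, 290)   # 18.125, i.e. roughly 18x
```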


Disney’s Fantasia vs Wizard of Oz

It is worth pausing here to discuss the technical achievements of both Walt Disney and MGM in sound recording and reproduction. Walt Disney contributed greatly to the advancement of theatrical film audio quality and stereo films. Fantasia (produced in 1939, released in 1940) was the first movie to sport a full stereo soundtrack in a theater, achieved through the use of a 9 track optical recorder when recording the original music soundtrack. These 9 optical tracks were then mixed down to 4 optical tracks for presenting the audio in a theater. According to Wikipedia, the Fantasia orchestra was outfitted with 36 microphones, condensed down into the aforementioned 9 (actually fewer) optical audio tracks when recorded. One of these 9 tracks was a click track for animators to use when producing their animations.

To explain optical audio a bit more: optical audio recording and playback is the method a sprocket film projector uses to play back audio through theater sound equipment. This optical audio system remained in use until the introduction of digital audio in the 90s. Physically running alongside the 35mm or 70mm imagery, an optical (but analog) audio track runs vertically along the entire length of the film; there have been many formats for this optical track. The audio track is run through a separate analog audio decoder and amplifier while the projector is flipping through images.

To run Fantasia in stereo in 1940, a theater needed two projectors running simultaneously, along with appropriate amplifiers, left, right and center speakers hidden behind the screen and, in the case of Fantasound, additional speakers mounted in the back of the theater. The first projector presented the visual film image on the screen; that reel also contained one mono optical audio track (used for backup purposes, or for theaters running the film only in mono). The second “stereo” projector ran four (4) optical tracks: left, right and center audio (likely the earliest 3.0 sound system), plus a fourth track carrying an automated gain control allowing fades and automated volume increases and decreases. Disney dubbed this stereo system Fantasound. At a time when mono shellac recordings were common in the home, seeing a full color, stereo motion picture in a theater in November of 1940 would have been a technical marvel.

Let’s pause here to savor this incredible Disney cinema sound innovation. Consider that it’s 1940, just barely out of the 30s; stereo isn’t even a glimmer in the eye of the record labels, and Walt Disney is outfitting theaters with what would essentially pass for a modern (1970s or newer) multichannel theater sound system. Though Cinerama, a 7 channel audio standard, would land in theaters as early as 1952 with the documentary This Is Cinerama, it wouldn’t be until 1962’s How The West Was Won that theatergoers got a full scripted feature film using Cinerama’s 7 channel sound system. In fact, Disney’s Fantasound basically morphed into what would become Cinerama, which used three synchronized projectors like Fantasound, though Cinerama used its multiple projectors for a different reason than multichannel sound.

Cinerama also changed sound recording for film. It made filmmakers invest in more equipment and microphones to ensure that all 7 channels were recorded so that Cinerama could be used. Clearly, even though the technology was available in cinemas, filmmakers didn’t embrace this new audio technology as readily as theater owners were willing to install it. It wouldn’t be until the 60s and on into the 70s that Cinerama and the later THX and Dolby sound systems came into common use in cinemas. Disney ushered in the idea of stereo in theaters in the 40s, but it took nearly 30 years, and easier and cheaper ways to achieve it, for the entire film industry to embrace it.

Disney’s optical automated volume gain control track foreshadows Disney’s use of animatronics in its theme parks beginning in the 1960s. Even though Disney’s animatronics use a completely different control mechanism, using an optical track to automate the soundtrack’s volume in 1939 was (and still is) completely ingenious. The entire optical stereo system, deployed at a time when theaters were still running monophonic motion pictures, was likewise quite ingenious (and expensive).

Unfortunately, Fantasia’s stereo run in theaters would be short, with only 11 roadshow runs using the Fantasound optical stereo system. The installation of Fantasound required a huge amount of equipment, including installation of amplifiers, speakers behind the screen and speakers in the back of the theater. In short, it required the equipment that modern stereo theaters require today. See the link just above for more details on this.

Consider also that the Wizard of Oz, released in 1939 by MGM, was likewise considered a technical marvel for its Technicolor process, yet this musical film was released to theaters in mono. The production did, however, record most, if not all, of the film’s audio on a multitrack recorder during filming, which occurred between 1938 and 1939. It wouldn’t be until 1998 that The Wizard of Oz’s original 1939 multitrack audio was restored and remastered in stereo, finally giving the film a full stereo soundtrack from its original 1930s on-set multitrack recordings.

Here’s Judy Garland singing Over the Rainbow from the original multitrack masters recorded in 1939, even though this song wasn’t released in stereo until 1998 after this film’s theatrical re-release. Note, I would embed the YouTube video inside this article, but this YouTube channel owner doesn’t allow for embedding. You’ll need to click through to listen.

As a side note, it wouldn’t be until the 1950s that stereo became commonplace in theaters, and the late 50s before stereo records became available for home use. In 1939, we’re many years away from stereo audio standards. It’s amazing, then, that between 1938 and 1939 MGM had the foresight to record this film’s audio with a multitrack recorder during filming, in addition to employing those spectacular Technicolor sequences.



In addition to Disney’s Fantasia, the 1940s were punctuated by World War II (1939-1945), the Holocaust (1933-1945) and the atomic bomb (1945). The frugality born of the Great Depression, beginning in 1929 and lasting through the late 1930s, carried into the 1940s, partly from leftover Depression anxieties and partly from the wartime necessity of rationing items such as sugar, tires, gasoline, meat, coffee, butter, canned goods and shoes. Rationing led housewives to be much more frugal in other areas of life, including hairstyles and dress, especially as the war surged the prices of some consumer goods.

This frugality influenced fashion and also impacted sound recording equipment manufacturing, most likely because early-1940s wartime efforts required manufacturers to convert to making wartime equipment instead of consumer goods. While RCA continued to manufacture microphones in the mid-40s (mostly after the war), a number of other manufacturers also jumped into the fray. Some microphone manufacturers targeted ham radio operators, while others created equipment for “home recordists” (sic). These consumer microphones were fairly costly at the time, equivalent to hundreds of dollars today.

Some 1940s microphones sported a slider switch for moving the microphone between uni-directional, bi-directional and omni-directional pickup, allowing one microphone to serve a wide array of applications. For example, both RCA’s MI-6203-A and MI-6204-A microphones (both released in 1945) offered a slider switch to move between the 3 pickup types. Earlier microphones, like RCA’s 44A, required opening the microphone up to the main board and moving a “jumper” between positions, if the change could be performed at all; this was inconvenient and meant extra setup time. The slider on the MI-6203 and MI-6204 made the change much easier and quicker. See, it’s the small innovations!

During the 1940s, both ASCAP and, later, BMI (music royalty services, aka performing rights organizations or PROs) changed the face of music. In the 1930s, most music played on broadcast programs had been performed by a live studio orchestra, employing many musicians. During the 1940s, this began to change: as sound reproduction improved, better quality recordings led broadcasters to use prerecorded music over live bands during broadcast segments.

This put a lot of musicians out of work, musicians who would otherwise have continued gainful employment with a radio program. ASCAP (established in 1914 as a PRO) tripled its royalties for broadcasters in January of 1941 to help these musicians. In retaliation for the higher cost of playing ASCAP music, broadcasters dropped ASCAP music from their broadcasts, instead choosing public domain music and, at the time, unlicensed genres (country, R&B and Latin). Already disenchanted by ASCAP’s doubling of fees in 1939, broadcasters had created their own PRO that year: BMI (Broadcast Music Incorporated). Music placed under the BMI royalty catalog would either be free to broadcasters or supplied at a much lower cost than music licensed by ASCAP.

The tripling of fees in 1941 and the subsequent dropping of ASCAP’s catalog by broadcasters put a hefty dent in ASCAP’s (and its artists’) bottom line; by October of 1941, ASCAP had reversed its tripled royalty requirement. During those several months in 1941, ASCAP’s higher fees helped popularize genres of music that were not only free to broadcasters but were now being introduced to unfamiliar new listeners. Genres which previously got little airplay, including country, R&B and Latin music, saw major growth in popularity during this time via radio broadcasters.

The genre popularity growth is partly responsible for the rise of Latin artists like Desi Arnaz and Carmen Miranda throughout the 1940s.

By 1945, many recording studios had moved away from lathe stylus recording turntables and begun using magnetic tape to prerecord music and other audio. The lathe turntables were still used to create the final 78 RPM disc master from the audio tape master for commercial records; broadcasters, however, didn’t need discs at all when playing back from reel-to-reel tape.

Reel-to-reel tape also allowed for better fidelity and better broadcast playback than the noisy 78 RPM shellac records of the time. It also cost less to use, because a single reel of tape could be recorded over again and again. Tape also has less hiss and far less background noise, making for a more professional listening and playback experience in broadcast or film use. Listeners couldn’t tell the difference between live radio segments and prerecorded musical segments.

Magnetic recording and playback also gave rise to better sounding commercials, though some commercial jingle producers still recorded onto 78 RPM discs during that era. From 1945 until about 1982, recordings were produced almost exclusively on magnetic tape… a small preview of things to come.

While the very first vinyl record was released in 1930, this new vinyl format wouldn’t become viable as a prerecorded commercial product until 1948, when Columbia introduced its first 12″ 33⅓ RPM microgroove vinyl long playing (LP) record. CBS / Columbia was aided in producing this new format by the aforementioned Presto company. Considering Presto’s involvement with and innovation of its own line of lathe recorders, Columbia leaning on Presto was only natural. The Columbia LP format would replace the shellac 78 in short order.

At around 23 minutes per side, the vinyl LP afforded a musical artist about 46 minutes of recording time. The format quickly became the standard for releasing new music, not only because of its ~46 minutes of running time, but also because it offered far less surface noise than shellac 78s. Vinyl records were also slightly less brittle than shellac, giving them a bit more durability.

By 1949, RCA had introduced a 7″ version of this 33⅓ microgroove vinyl format intended for use with individual (single) songs… holding around 4-6 minutes per side. These vinyl records at the time were still all monaural / monophonic. Stereo wouldn’t become available and popular until the next decade.

Note that the Presto Recording Corporation continued to release and sell portable, professional and home lathe recorders during the 1940s and on into the 50s and 60s. Unfortunately, Presto closed its doors in 1965.



By the 1950s, some big audio changes were in store; changes that Walt Disney helped usher in with Fantasia in 1940. Long past the war-weary 1940s and the Depression-ravaged 1930s, the 1950s became a new, prosperous era in American life. Along with this prosperity, fashion rebounded, and so did the musical recording industry, the movie theater industry and musical artists, who now began focusing on a new type of music: rock and roll. Recording this new genre called for some changes in recording itself.

Because the late 1940s and early 1950s ushered in the filmed musical, many in Technicolor (and one in stereo in 1940), audio in theaters advanced as well. Stereo radio broadcasts wouldn’t be heard until the 60s and stereo TV broadcasts wouldn’t begin until the early 80s, but stereo became commonplace in theaters during the 1950s, particularly due to these musical features and the competitive pressure television placed on cinema.

Musical films like Guys and Dolls (1955) were released in stereo, along with earlier releases like Thunder Bay (1953) and House of Wax (1953). Yet some big musicals, like Marilyn Monroe’s Gentlemen Prefer Blondes (1953), were not released in stereo.

This means that stereo recording for films in the early 50s was haphazard, depending entirely on each film’s production. Apparently, not all film producers placed value in stereo soundtracks. Some blockbuster films, including Marilyn Monroe vehicles, didn’t include stereo soundtracks, while lower-budget horror and suspense films did, probably to entice moviegoers in for the experience.

In 1957, the first stereo LP album was released, ushering in the stereophonic LP era. By the late 1950s, most film producers had also begun to see the value in recording stereo soundtracks for their films; it was no longer in vogue to produce mono soundtracks. From this point on, producers choosing mono, like Woody Allen, did so out of personal choice and artistic merit.

Here’s a monophonic vinyl version of Frank Sinatra’s Too Marvelous for Words, recorded for his 1956 album Songs for Swingin’ Lovers. Notice the telltale pops and clicks of vinyl; even the newest untouched vinyl straight from the store had a certain amount of surface noise. Songs for Swingin’ Lovers was released one year before stereo made its vinyl debut. Sinatra would still release a mono album in 1958, his last monophonic release, entitled Only the Lonely. Sinatra may have begun recording Only the Lonely in late 1956 using monophonic equipment and likely didn’t want to release portions of the album in mono and portions in stereo. Though, he could have done this by making side 1 mono and side 2 stereo; that gimmick would have made a great introduction to the stereo format for his fans and likely helped sell even more copies.



This song is included to show the recording techniques being used during the 1950s and what vinyl typically sounded like.


In the 1950s, cinema had the most impact on audio reproduction and recording. Disney's 1940 Fantasound process led to the design of Cinerama, a simplified design by Fred Waller, modified from his previous, more ambitious multi-projector installations. Waller had been instrumental in creating training films for the military using multi-projector systems.

However, in addition to the 3 separate but synchronized images projected by Cinerama, the audio was also significantly changed. Like Disney's Fantasound, Cinerama offered multichannel audio, but in the form of 7 channels, not 4 like Fantasound. Cinerama's audio system design likely led to the modern systems using DTS, Dolby Digital and SDDS sound. Cinerama, however, wasn't primarily about the sound, but about the picture on the screen. It was intended to project 3 images seamlessly across a curved widescreen (a tall order, and it didn't always work properly). The 7 channel audio was only a secondary goal: important to the visual experience, but not as important as the 3 projectors driving the imagery across that curved screen.

Waller’s goal was to discard the old single projector ideology and replace it with a multi-projector system akin to having peripheral vision. The lenses used to capture the film images were intended to be nearly the same focal length as the human eye in an attempt to be as visually accurate as possible and give the viewer an experience as though they were actually there, though the images were still flat, not 3D.

While Waller's intent was to create a groundbreaking projection system, the audio system employed is what withstood the test of time and drove change in the movie and cinema sound industries. Unlike Fantasound, which used two projectors, one for visuals and one for 4 channel sound on optical tracks, Cinerama's sound system used a magnetic strip which ran the length of the film. This magnetic strip held 6 channels of audio, with the 7th channel provided by the mono optical strip. Because Cinerama had 3 simultaneous projectors running, the system could theoretically have supported 21 channels of audio.

However, Cinerama settled on 7 audio channels, likely provided by the center projector, though information about exactly which of the three projectors provided the audio output is unclear. It's also entirely possible that all 3 film reels held identical audio content for backup purposes: if one projector's audio died, one of the other two projectors could take over. The speaker layout for the 7 channels was five speakers behind the screen (surround left, left, center, right, surround right), two channels on the side walls and two channels in the back of the theater (each fed whatever the engineer chose). There may have been more than two speakers on the walls and in the rear, but two channels were fed to them. The audio was managed manually by a sound engineer who would move the audio around the room live while the performance was running to enhance the effect and provide surround sound features. The 7 channels were likely as follows:

  • Left
  • Right
  • Center
  • Surround Left
  • Surround Right
  • Fill Left
  • Fill Right

Fill channels could carry ambient audio like ocean noises, birds, trees rustling, etc. These ambient noises would be separately recorded and then mixed in at the appropriate time during the performance to bring more of a sense of realism to the visuals. Likely, the vast majority of the time, the speakers would provide the first 5 channels of audio. I don't believe this 7 channel audio system supported a subwoofer. Subwoofers would arrive in theaters later as part of the Sensurround system in the mid 1970s. The audio systems used in Cinerama would definitely influence later systems like Sensurround.

The real problem with Cinerama wasn't its sound system; it was its projector system. The 3 synchronized projectors projected separately filmed, but synchronized, visual sequences, and the three projected images overlapped each other by a tiny bit. Because of this overlap, the projectors played tricks to keep the lines of overlap as unnoticeable as possible. While it mostly worked, the fact that the 3 cameras weren't 100% perfectly aligned when filming led to problems with certain imagery on the screen. In short, Cinerama was a bear for a cinematographer to use. Very few film projects wanted the system due to the difficulty of filming scenes, and it was even more difficult to make sure the scene appeared proper when projected. Thus, Cinerama wasn't widely adopted by filmmakers or theater owners. Though, the multichannel sound system was definitely something filmmakers were interested in using.

Ramifications of Television on Cinema

As a result of the introduction of NTSC television in 1941 and TV's wide and rapid adoption by the early 1950s, the cinema industry tried all manner of gimmicky ideas to get people back into cinema seats. These gimmicks included, for example, Cinerama. Other in-cinema gimmicks included 3D glasses, smell-o-vision, mechanical skeletons, rumbling seats, multichannel stereo audio and even simpler tricks like CinemaScope… which used anamorphic lenses to increase the width of the image instead of requiring multiple projectors to provide that width. The 50s were an era of endless trial-and-error cinema gimmicks in an effort to get people back into the cinema. None of these gimmicks furthered audio recording much, however.

Transition between Mono and Stereo LPs

During the 1960s, stereophonic sound would become an important asset to the recording industry. Many albums had the words "Stereo", "Stereophonic" or "In Stereophonic Sound" plastered largely across parts of the album cover. Even the Beatles fell into this trap with a few of their albums. However, this marketing lingo was actually important at the time.

During the late 50s and into the early 60s, many albums were dual released, as a monophonic and a separate stereophonic release. The words across the front of the album were intended to tell the consumer which version they were buying. This marketing text was only needed while the industry kept releasing both versions to stores. And yes, even though the words appeared prominently on the cover, some people didn't understand and still bought the wrong version.

Thankfully, this mono vs stereo ambiguity didn’t last very long in stores. By the mid-1960s nearly every album released had converted to stereo, with very few being released in mono. By the 70s, no more mono recordings were being produced, except when an artist chose to make the recording mono for artistic purposes.

No longer was the consumer left wondering if they had bought the correct version… that is, until quadrophonic releases began in the early 1970s, but that discussion is for the 70s. During the late 50s and early 60s, some artists were still recording in mono and some were recording in stereo. However, because many consumers didn't yet own stereo players, record labels continued to release mono versions for consumers with monophonic equipment. It was assumed that stereo records wouldn't play correctly on mono equipment, even though they played fine. Eventually, labels wised up and recorded the music in stereo, but mixed down to mono for some of the last monophonic releases… eventually abandoning monophonic releases altogether.



By the 1960s, big box recording studios were becoming the norm for recording bands like The Beatles, The Rolling Stones, The Beach Boys, The Who and vocalists like Barbra Streisand. These new studios were required to produce the professional, pristine stereo soundtracks heard on vinyl. This required heavy use of multitrack mixing boards. Here's how RCA Studio B's recording board looked in 1957. Most state-of-the-art studios at the time would have used a similar mixing board. The black and white picture on the wall behind this multitrack console depicts a 3 track mixing board, likely in use prior to this board's installation.


Photo by cliff1066 under CC BY 2.0


RCA Studio B became a sort of standard for studio recording and mixing throughout the early-to-mid 1960s and even into the late 1970s. While this board may accept many input channels, the resulting master recording might hold as few as two tracks or as many as eight through the mid-60s. It wouldn't be until the late 60s that magnetic tape technologies improved to allow recording 16 channels, and later 24 channels by the 1970s.

Note that many modern mixing boards in use today resemble the layout and functionality of this 1957 RCA produced board, but these newer systems support more channels as well as effects.

Microphones also saw major improvements again in the 1960s. No longer were microphones simply utilitarian; now they were being sold for luxury sound purposes. For example, Barbra Streisand almost exclusively recorded with the Neumann M49 microphone (called the Cadillac of microphones, with a price tag to match) throughout the early 60s. In fact, this microphone became her staple. Whenever she recorded, she always requested a very specific serial number for her Neumann M49 from the rental service. She felt that this specific microphone made her voice sound great.

However, part of the recording process was not just the microphone Barbra Streisand used; it was also the recording equipment Columbia owned at the time. While RCA's studios made great sounding records, Columbia's recording system was well beyond that. Barbra's recordings from the 60s sound like they could have been recorded today on digital equipment. To some extent, that's partially true: Barbra's original 1960s recordings have been cleaned up and restored digitally. However, you have to have an excellent product to start from to make it sound even better.

Columbia's recordings of Barbra in the 60s were truly exceptional. These recordings were always crystal clear. Yes, the clarity is attributable to the microphone, but also to Columbia's high quality recording equipment, which was leaps and bounds ahead of other studios at the time. Not all recording systems were as good as Columbia's, as evidenced by the soundtrack to the film Hello, Dolly! (1969), which Barbra recorded for 20th Century Fox. This recording is more midrangy, less warm and not at all as clear as the recordings Barbra made for Columbia Records.

There were obviously still pockets of less-than-stellar recording studios recording inferior material for film and television, even going into the 1970s.

Cassettes and 8-Tracks

During the early 1960s, specifically in 1963, a new audio format was introduced: the Compact Cassette, otherwise known simply as the cassette tape. The cassette would go on to rival the vinyl record and have a commercial life of its own, which continues in diminished form to this day. Because the cassette didn't rely on a moving stylus, there were far fewer constraints on the bass that could be laid down onto it. This meant that cassettes ultimately had better sonic capabilities than vinyl.

In 1965, the 8-track or Stereo 8 format was introduced, which initially became extremely popular for use inside vehicles. Eventually, the cassette tape, and later the multi-changer CD, would replace 8-track systems in car stereos. Today, CarPlay and similar Bluetooth systems are the norm.

The Stereo 8 Cartridge was created in 1964 by a consortium led by Bill Lear, of Lear Jet Corporation, along with Ampex, Ford Motor Company, General Motors, Motorola, and RCA Victor Records (RCA – Radio Corporation of America).

Quote Source: Wikipedia



The 1970s were punctuated by mod clothing, bell bottom jeans, Farrah Fawcett feathered hair, drive-in movies and leisure suits. Coming out of the psychedelic 1960s, these bright vibrant colors and polyester knits led to some rather colorful, but dated rock and roll music (and outfits) to go along.

Though drive-in theaters appeared as early as the 1940s, they would ultimately see their demise in the late 1970s, primarily due to urban sprawl and the rise of malls. Even so, drive-in theaters wouldn't have lasted into the multitrack 7.1 era of rapidly improving cinema sound. There is no way to reproduce such incredible surround sound within the confines of automobiles of the era, let alone today. The best that drive-in theaters offered was either a mono speaker affixed to the window or tuning into a radio station, which might or might not offer stereo sound (usually not). The sound capabilities afforded by indoor theaters, coupled with year-round air conditioning, led people indoors to watch films any time of day and all year round, rather than watching movies in their cars only at night and when weather permitted; brutally cold winters don't work well for drive-in theater viewing.

By the 1970s, sound recording engineers were also beginning to overcome the surface noise and sonic limitations of stereo vinyl records, making them sound much better. During this era, audiophiles were born: people who always want the best audio equipment to make their vinyl records sound their absolute best. To that end, audio engineers pushed vinyl's capabilities to its limits. Because a diamond needle must travel through a groove to play back audio, audio with too much thumping bass or volume could jump the needle out of its track and cause skipping.

To avoid this skipping problem, audio engineers had to tune down the bass and volume when mastering for vinyl. While audio engineers could create two masters, one for vinyl and one for cassette, that almost never happened. Most sound engineers were tasked with creating one single audio master for a musical artist, and that master was strictly geared towards vinyl. This meant that a prerecorded cassette got the same audio master as the vinyl record, instead of a unique master created for the dynamic range available on cassette.

Additionally, cassettes came in various formulations, from ferric oxide to metal (Type I to Type IV). There were eventually four different cassette tape formulations available to consumers, all of which were also available for commercial duplication. However, most commercial producers opted for Type I or Type II cassettes (the least costly formulations). These were available all the way through the 1970s. Type IV, metal, could produce the best sound available due to its tape formulation, but didn't arrive until late in the 1970s.

8-tracks could be recorded at home, but there was essentially only one tape formulation. These recorders began appearing in the 1970s for home use. It was difficult to record an 8-track tape and sometimes more difficult to find blanks. Because each tape program was limited in length, you had to make sure the audio didn't gap over from one program to the next, or else you'd have a jarring audio experience. With audio cassettes, this was a bit easier to avoid. Because 8-tracks had 4 stereo programs, each of the 4 segments was fairly short: with an 80 minute tape, that's 20 minutes per stereo program. It ends up more complicated for the home consumer to divide their music into four 20 minute segments than to manage a 90 minute cassette with 45 minutes on each side.
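The program split described above is simple arithmetic; here's a quick sketch using the nominal tape lengths mentioned in the text:

```python
# Sketch of the program-length arithmetic above (nominal lengths from the text).
TAPE_8TRACK_MIN = 80     # total minutes on an 8-track loop
STEREO_PROGRAMS = 4      # 8 physical tracks paired into 4 stereo programs

program_minutes = TAPE_8TRACK_MIN / STEREO_PROGRAMS  # 20 minutes per program

CASSETTE_MIN = 90        # a 90-minute cassette...
side_minutes = CASSETTE_MIN / 2                      # ...gives 45 minutes per side

print(program_minutes, side_minutes)  # 20.0 45.0
```

Four hard 20 minute boundaries versus two generous 45 minute sides is exactly why home recordists found cassettes so much easier to fill than 8-tracks.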

Because a vinyl record only holds about 46 minutes, that length became the standard for musical artists until the CD arrived. Even though cassettes could hold up to 90 minutes of content, commercially produced prerecorded tapes held only the amount of tape needed to match the 46 minutes available on vinyl. In other words, musical artists didn't offer extended releases on cassettes during the 70s and 80s. It wouldn't be until the CD arrived that musical artists could extend the amount of content they produced.

As for studio recording during the 1960s and 1970s, most studios relied on Ampex or 3M (or similar professional quality) 1/2 inch or 1 inch multitrack tape for recording musical artists in the studio. Unfortunately, many of these Ampex and 3M tape formulations turned out not to be archival. This led to degradation (i.e., sticky-shed syndrome) in some of these audio masters 10-20 years later. The Tron soundtrack, recorded in 1982 on Ampex tape, degraded in the 1990s to the point that the tape needed to be carefully baked in an oven to reaffix and solidify the ferric coating. After baking, there were effectively only a few limited shots at re-recording the original audio onto a new master, though a baked master might be played a few times onto several masters. Some Ampex tape audio masters may have been completely lost from this lack of archival stability. Wendy Carlos explains in 1999 what it took to recover the masters for the 1982 Tron soundtrack.

Thankfully, cassette tape gluing formulations didn't seem to suffer from sticky-shed syndrome like some formulations of Ampex and 3M professional master tape did. It also seems that 8-track tapes may have been immune to this problem as well.

For cinematic films made during the 1970s, almost every single film was recorded and presented in stereo. In fact, George Lucas's Star Wars (1977) ushered in the absolute need for stereo soundtracks in summer blockbusters, with action sequences cut and timed to orchestral music. Musical cues timed to each visual beat have become a staple of filmmaking since that first Star Wars. While the recording of the music itself was much the same as in the 60s, orchestral music timed to visual beats became the breakthrough feature of filmmaking in the late 1970s and beyond. This musical beat system is still very much in use today in every summer blockbuster.

As for vinyl records and tapes of the 70s, surface noise and hiss were always a problem. To counter this, cassettes employed Dolby noise reduction almost from the start. Commercially prerecorded tapes were encoded with a specific type of noise reduction, and the player needed to be set to the same type to reduce the inherent noise. Setting a tape to the wrong noise reduction setting (or none at all) could cause the high end to be lost or, in many cases, the audio playback to distort. For tapes, the most commonly used noise reduction was Dolby B, with occasional use of Dolby C, though tapes could be encoded with Dolby A, B, C or S. Dolby B, the most common system on commercially prerecorded music cassettes, debuted in the late 1960s and remained in use throughout the 70s and 80s.

For vinyl, most albums didn't include any noise reduction system at all. However, starting around 1971, a relatively small number of vinyl releases were sold containing the DBX noise reduction encoding system. These discs were signified with a DBX encoded disc notation. This system, like Dolby's tape noise reduction, requires a decoder to play the vinyl back properly. Unfortunately, no turntables or amplifiers sold, that Randocity is aware of, had a built-in DBX decoder. Instead, you had to buy a separate DBX decoder component, like the DBX Model 21, and place it inline in your stereo Hi-Fi chain of devices. DBX vinyl encoding was not just noise reduction, however; it also changed the audio dynamics of the recorded groove. DBX encoded discs were cut with dramatically compressed, thinned-out dynamics, making listening to a DBX encoded disc without a decoder nearly impossible. The DBX decoder would expand these compressed, thinned tracks back into their original, suitably dynamic audio range.

Playing a DBX encoded vinyl disc back properly required buying a DBX decoder component (around $150-$200, in addition to the cost of an amplifier, speakers and a turntable). This extra cost covered only a handful of vinyl discs, though; not really worth the investment. DBX is unlike Dolby B on tape, which still sounded relatively decent even when not decoded. DBX encoded vinyl discs are almost impossible to listen to without a decoder, which is likely why so few were released. However, if you were willing to invest in a DBX decoder component, the high and low ends were said to sound much better than on a standard vinyl disc with no noise reduction. The DBX system expanded and played these dynamics better, though probably not as fully as a CD can reproduce. DBX encoding likely also meant that a fully remastered, or at least better equalized, version of the vinyl master was produced for these specific releases.
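The compress-on-disc, expand-on-playback idea behind DBX can be sketched in a few lines. This is a deliberately simplified illustration of 2:1 companding, not the real DBX algorithm (which uses RMS level detection and pre-emphasis); the function names here are mine:

```python
import numpy as np

RATIO = 2.0  # 2:1 companding: halves the dynamic range, in dB terms

def dbx_style_encode(samples):
    """Compress dynamic range before cutting the groove: quiet amplitudes
    are pushed up toward full scale, riding above the vinyl surface noise."""
    return np.sign(samples) * np.abs(samples) ** (1.0 / RATIO)

def dbx_style_decode(samples):
    """Expand on playback, restoring the original dynamics (and pushing
    the groove's surface noise back down along with the quiet passages)."""
    return np.sign(samples) * np.abs(samples) ** RATIO

# A quiet note at -60 dB and a loud one at 0 dB: 60 dB of range...
quiet, loud = 0.001, 1.0
encoded = dbx_style_encode(np.array([quiet, loud]))
range_before = 20 * np.log10(loud / quiet)            # 60 dB in the master
range_after = 20 * np.log10(encoded[1] / encoded[0])  # ~30 dB in the groove
restored = dbx_style_decode(encoded)                  # back to 0.001 and 1.0
```

This also shows why an undecoded DBX disc sounded so wrong: without the expansion step, everything stays squashed into that narrow 30 dB window.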

With that said, Dolby C and Dolby S are more like DBX in reproducing dynamics than Dolby A and B, which were strictly noise reduction and offered no dynamic enhancement. These noise reduction techniques are explained here under the 1970s because this is when they rose to their most prominent use, more so on cassettes than on vinyl. Of course, these techniques are not needed on the CD format, which is yet to come in the 80s.

For professional audio recording, 3M introduced the first digital multitrack recorder for professional studio use in 1978. This recorder used one inch tape and recorded up to 32 tracks. However, it priced in at an astonishing $115,000 (32 tracks) or $32,000 (4 tracks), which only a professional recording studio could afford. Studios like Los Angeles's A&M Studios, The Record Plant and Warner Brothers' Amigo Studios all installed this 3M system.

Around 1971, IMAX was introduced. While this incredibly large screen format didn't specifically change audio recording or drastically improve audio playback in the cinema, it did provide a much bigger screen experience which has endured to today. It's included here for completeness for the 70s, not so much for improvements to audio recording, though it did raise film requirements for filmmakers.

For advancements in cinema sound, the 1970s saw the introduction of Sensurround. There weren't many features that supported this cinema sound system, and mostly for good reason. The gimmick primarily featured a huge rumbling, theater-shaking subwoofer (or several) aimed directly at the audience from below the screen. Nevertheless, subwoofers have since become common and have endured as a constant in theaters since Sensurround's introduction, just not to Sensurround's degree. Like the 50s' near-endless gimmicks to drive people back into theaters, the 1970s tried a few of its own, such as Sensurround, to captivate and drive people back into theaters.

Earthquake Sensurround

In case you're really curious, the few film features supporting Sensurround included Earthquake (1974), Midway (1976), Rollercoaster (1977) and Battlestar Galactica (1978). The Sensurround experience was interesting, but the thundering, rattling subwoofer bass was, at times, more of a distraction than an addition to the film's experience. It's no wonder it only lasted through the 70s and that only a few filmmakers used it. Successor cinema sound systems include DTS, Dolby Digital and SDDS, while THX certification ensured those rumbling, thundering bass segments could be properly heard (and felt).

Digital Audio Workstations

Let's pause here to discuss a new audio recording methodology introduced as a result of the advent of digital audio, which was more or less required for producing the CD. With digital audio recorders becoming available in the late 70s and early 80s and easy-to-use computers now dawning, the DAW, or digital audio workstation, was born. While computers in the late 70s and early 80s were fairly primitive, the introduction of the Macintosh (1984) with its impressive, easy-to-use UI made building and using a DAW much easier. It's probably a little early to discuss DAWs at the late 70s and early 80s mark, but because the DAW factors prominently into nearly every type of digital audio recording during the late 80s, 90s, 00s and beyond, the discussion is placed here.

Moving into the late 80s, with even easier UI-based computers like the Macintosh (1984), Amiga (1985), Atari ST (1985), Windows 3 (1990) and later Windows 95 (1995), DAWs became even more available, accessible and usable by the general public. With the release of Windows 98 and newer Mac OS systems, DAWs became even more feature-rich, commonplace and easy to use, ultimately targeting home musicians.

Free open source products like Audacity, first released in 1999, also became available. By 2004, Apple would include its own DAW, GarageBand, with its Mac OS X and, later, iOS operating systems. Acid Music Studio by Sonic Foundry, another home consumer DAW, was introduced in 1998 for Windows. This product would subsequently be acquired by Sony, then sold to Magix in 2016.

Let's talk more specifically about DAWs for a moment. The digital audio workstation was a groundbreaking improvement over editing with analog recording technologies. This visual editing system allows for much easier digital audio recording and editing than any system before it. With the tape recording technologies of the past, moving audio around required juggling tapes by re-recording and then overdubbing on top of existing recordings. If done incorrectly, it could ruin the original audio with no way back. Back in the 50s, the simplest editing available with analog recordings was playing games with tape speeds and, where the tape recorder supported it, overdubbing.

With digital audio clips in a DAW, you can now pull up individual audio clips and place them on as many tracks as are needed visually on the screen. This means you can place drums on one track, guitars on another, bass on another and vocals on another. You can easily add sound effects to individual tracks or layer them on top with simple drag, drop and mouse moves. If you don’t like the drums, you can easily swap them for an entirely new drum track or mute the drums altogether to create an acoustic type of effect. With a DAW, creative control is almost limitless when putting together audio materials. In addition, DAWs support plugins of all varying types including both digital instruments as well as digital effects. They can even be used to synchronize music to videos.

DAWs are intended to allow for mixing multiple tracks down into a stereo (2 track) mix in many different formats, including MP3, AAC and even uncompressed WAV files.
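As a rough sketch of what that mixdown does under the hood, here's a minimal multitrack-to-stereo mix in Python with NumPy. The track names, gains and constant-power pan law are illustrative assumptions, not any particular DAW's internals:

```python
import numpy as np

def mixdown_to_stereo(tracks, gains, pans):
    """Mix equal-length mono tracks into one stereo buffer.
    pans run from -1.0 (hard left) to +1.0 (hard right)."""
    stereo = np.zeros((len(tracks[0]), 2))
    for track, gain, pan in zip(tracks, gains, pans):
        # Constant-power pan law: perceived loudness stays steady
        # as a track moves across the stereo field.
        theta = (pan + 1.0) * np.pi / 4.0              # 0 .. pi/2
        stereo[:, 0] += track * gain * np.cos(theta)   # left channel
        stereo[:, 1] += track * gain * np.sin(theta)   # right channel
    peak = np.abs(stereo).max()
    return stereo / peak if peak > 1.0 else stereo     # avoid digital clipping

# Hypothetical 4-track session at 44.1 kHz: drums/bass center,
# guitar panned left, vocal panned right (sine stand-ins for real audio).
t = np.linspace(0, 1, 44100, endpoint=False)
drums  = 0.5 * np.sin(2 * np.pi * 110 * t)
bass   = 0.5 * np.sin(2 * np.pi * 55 * t)
guitar = 0.3 * np.sin(2 * np.pi * 440 * t)
vocal  = 0.4 * np.sin(2 * np.pi * 220 * t)
mix = mixdown_to_stereo([drums, bass, guitar, vocal],
                        gains=[1.0, 0.9, 0.8, 1.0],
                        pans=[0.0, 0.0, -0.7, 0.7])
```

From here, a real DAW would encode the resulting stereo buffer out to WAV, MP3 or AAC.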

DAWs can also control external music devices, like keyboards, drum machines or any other device that supports MIDI control. DAWs can also be used to record music or audio for films, allowing easy placement using industry standard SMPTE timecode. This allows complete synchronization of an audio track (or set of tracks) with a film's visuals easily and, more importantly, consistently. SMPTE can even control such devices as lighting controllers, allowing live control over lighting rig automation, though some lighting rigs also support MIDI. A DAW is a very flexible and extensible piece of software used by audio recording engineers to take the hassle out of mixing and mastering musical pieces and to speed up the musical creation process… it can even be used in live music situations.

While DAWs came into existence in the early 1980s for professional use, it was the 1990s and 2000s that saw more home consumer musician use, especially with tools like Acid Music Studio, which based its entire DAW around managing creative loops… loops being short looped audio clips. Sonic Foundry sold a lot of prerecorded royalty-free loops which users could incorporate into their own musical works. Though, if you wanted to create your own loops in Acid Music Studio using your own instruments, that was (and still is) entirely possible.

The point is, once the DAW became commonplace, it changed the recording industry in very substantial ways. Unfortunately, with the good comes the bad. As DAW technology improved, so too did technologies to "improve" a singer's vocals… thus was born the dreaded and now overused autotune vocal effect. This effect is now used by many vocalists as a crutch to make their already great voice supposedly sound even better. On the flip side, it can also be used to make bad vocalists sound passable… which is how it seems to be used these days. I don't personally think autotune ever makes vocals sound better, but my opinion doesn't matter when it comes to such recordings. With DAWs out of the way, let's segue into another spurious 1970s audio technology topic…

Quadrophonic Vinyl Releases

In the early 1970s, just as stereo had taken hold, the JVC and RCA corporations devised quadrophonic vinyl albums. This format expected the home consumer to buy into an all-new audio system: a quad decoder amplifier, a quad-capable turntable, two additional speakers for a total of four, plus albums that supported the quad format. This was a tall (and expensive) order. Just as people had begun investing in somewhat expensive amplifiers and speakers to support stereo, JVC and RCA expected consumers to toss all of their existing (mostly new) equipment and invest in brand new equipment AGAIN. Let's just say that didn't happen. Though, to be fair, you didn't need to buy an entirely new turntable; you simply needed a CD-4 cartridge for your existing turntable and an amplifier that could decode the resulting CD-4 encoded signal.

For completeness, the CD-4 system offered 4 discrete audio channels: left front, left back, right front and right back. Quad was intended to be enjoyed with four speakers placed in a square around the listener.

This hatched quad plan expected way too much of consumers. While many record labels did adopt the format and produced perhaps hundreds of quad releases, it was not at all successful due to consumer reticence; the equipment was simply too costly for most consumers to toss and replace their existing Hi-Fi gear. Stereo remained the dominant music format and has remained so since. Though, the advent of quad's special stylus cartridges did help improve stereo recordings, through better styluses and the higher quality vinyl formulations needed to produce quad LPs.

Note also that while quad vinyl LPs made their way into record stores in the early 1970s, no cassette version of quad ever became available. However, Q8, or quad 8-track, tapes arrived as early as 1970, two years before the first quad vinyl release. Of course, 8-track tapes at the time were primarily used in cars… which would have meant upgrading your car audio system with two more speakers and a new decoder player with four amplifiers, one for each speaker.

The primary thing the quad format succeeded at, at least for consumers, was muddying the waters at the record store while introducing multichannel audio playback, which wouldn't really become a home consumer "thing" until the DVD arrived in the 1990s. For a consumer shopping for albums in the 1970s, it would have been super easy to accidentally buy a quad album, take it home and then realize it doesn't play properly. The same problem existed for Q8 tapes, though Q8 tapes had a special quad notch that may have prevented them from playing in some players. And now, onto the …



In the 1980s, we see big hair and glam rock bands, and hear new wave, synth pop and alternative music on the radio. This era also ushers us into the digital music age with the new Compact Disc (CD) and, of course, its players. The CD, however, would actually turn out to be a couple-of-decades stopgap for the music and film industries. While the CD is still very much in use and available today, its need is diminishing rapidly thanks to music services like Apple Music. But, that discussion is for the 2010s and into the 2020s.

Until 1983, vinyl, cassettes and, to a much lesser degree, 8-track tapes were the music formats available at a record store. By late 1983 and into 1984, the newfangled CD hit store shelves, though not yet in a major way. At the same time, out went 8-track tapes. The introduction of the CD was initially aimed at the classical music genre, where the CD’s silence and dynamic range work exceedingly well to capture orchestral arrangements; pop music releases would take a bit more time to ramp up. By late 1984 and into 1985, popular music began to dribble its way onto CD as record labels slowly and begrudgingly re-released back catalog onto the new format. Bands were embracing the format too, so new music actually arrived on CD faster than back catalog did.

However, the introduction of the all-digital CD upped the sound engineer’s game once again. Just as vinyl took a while for sound engineers to grasp, so too did the CD format. Because the top and bottom sonic ends of the CD are effectively unlimited, placing masters made for vinyl onto a CD made for a low-volume, sonically unexciting and occasionally shrill music experience.

If you buy a CD made in the mid 1980s and listen to it, you can tell the master was originally crafted for a vinyl record. The sonics are usually tinny, harsh and flat, with very low volume. These vinyl masters were intended to prevent the needle from skipping and relied on the turntable and amplifier themselves to smooth out and fill in some of the sonics. A CD needs no such help. This meant that CD sound engineers needed to find their footing on how deep the bass could go, how high the treble could get and how loud the disc could be. Because vinyl (and the turntable itself) tended to attenuate the audio to a more manageable level, placing a vinyl master onto CD foisted all of these inherent vinyl mastering flaws onto the CD buying public. This especially stung considering a CD was typically priced around $14.99 when vinyl records regularly sold for $5.99-$7.99. Asking a consumer to fork over almost double the price for no real benefit in audio quality was a tall order.

Instead, sound engineers needed to remix and remaster the audio to fill out the dynamics and sonics a CD could offer. However, studios at the time were cheap and wanted to sell product fast. That meant existing vinyl masters instantly made their way onto CDs, only to sound thin, shrill and harsh. In effect, it introduced the buying public to a lateral, if not inferior, product that all but seemed to sound the same as vinyl. About the only audio masters actually tailored for CD were classical releases. Pop artists’ older catalog titles were simply rote-copied straight onto the CD format… no changes. To the pop, rock and R&B buying consumer, the CD appeared to be an expensive transition format with no real benefit.

The pop music industry more or less screwed itself with the CD format before it even got a foothold. By the late 80s and into the early 90s, consumers began to hear the immense difference as musical artists recorded their new material using the full dynamic range of the CD, sometimes on digital recorders. Eventually, consumers came to hear the much better sonics and dynamics the CD format was capable of. However, during the initial 2-4 years after the CD was introduced, many labels released previous vinyl catalog onto CD sounding way less than stellar… dare I say, most of those CD releases sounded bad. Even new releases were a mixed bag depending on the audio engineer’s capabilities and equipment access.

Further, format wars always seem to ensue with new audio formats and the CD was no exception. Sony felt the need to introduce its smaller MiniDisc (MD) format, which used lossy compressed audio. While the CD offered straight-up uncompressed 16-bit digital audio, the MiniDisc offered compressed audio akin to an MP3. This meant the MD was effectively the first time a consumer was introduced to an MP3-like device. While the compression on the MD wasn’t the same as MP3, it produced much the same result. In effect, you might say the MiniDisc player was the first pseudo MP3 player, just one that used a small optical disc for its music storage.
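The size of the gap between the two formats is easy to quantify. Here’s a quick sketch comparing the CD’s uncompressed PCM data rate with the MiniDisc’s compressed rate; the 292 kbps figure is the commonly cited rate for the original ATRAC codec, so treat it as approximate:

```python
# Comparing data rates: uncompressed Red Book CD PCM vs MiniDisc's ATRAC.
# CD audio is 44,100 samples/sec x 16 bits/sample x 2 channels.
CD_BITRATE_BPS = 44_100 * 16 * 2       # 1,411,200 bits per second
ATRAC_BITRATE_BPS = 292_000            # commonly cited rate for original ATRAC (approximate)

def minutes_of_audio_in_mb(megabytes: float, bitrate_bps: int) -> float:
    """How many minutes of audio fit in the given number of megabytes."""
    return megabytes * 1_000_000 * 8 / bitrate_bps / 60

print(f"CD data rate:      {CD_BITRATE_BPS / 1000:.1f} kbps")
print(f"ATRAC data rate:   {ATRAC_BITRATE_BPS / 1000:.0f} kbps")
print(f"compression ratio: {CD_BITRATE_BPS / ATRAC_BITRATE_BPS:.1f}:1")
# Note: a "700 MB" figure understates audio capacity a bit, since audio
# sectors carry less error-correction overhead than data sectors.
print(f"minutes of raw PCM in 700 MB: {minutes_of_audio_in_mb(700, CD_BITRATE_BPS):.0f}")
```

At roughly 5:1, a much smaller disc could hold a CD-length program, which is exactly the trade the MD made: size for fidelity.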

The CD format was not dissuaded by the introduction of the MD. If anything, many audiophile consumers disliked the MD precisely because it used compressed audio, sometimes making it sound worse than a CD. Though, many vinyl audiophiles also didn’t embrace the CD format, likening it to a very cold musical experience without warmth or expression. Many preferred, even loved, the warmth a stylus brought when dragged across a record’s groove. I was not one of these vinyl huggers, however. When a CD fades to silence, it’s 100% silent. When a vinyl record fades to silence, there’s still audible surface noise. The silence and dynamics alone made the CD experience golden… especially when the deep bass and treble sonics were mixed correctly for the CD.

The MiniDisc did thrive to an extent, but only because recorders became available early in its life along with many, many players from a lot of different companies, thus ensuring price competition. That, and the MD sported an exceedingly small size when compared to carrying around a huge CD Walkman. This allowed people to record their own already purchased audio right to a MiniDisc and easily carry their music around with them in their pocket. The CD didn’t offer recordables until much, much later into the 90s, mostly after computers became commonplace and those computers needed to use CDs as data storage devices. And yes, there were also many prerecorded MiniDiscs available to buy.

During the late 70s and into the early 80s, bands began to experiment with digital recording units in studios, such as 3M’s. In 1982, Sony introduced its own 24-track PCM-3324 digital recorder alongside 3M’s 32-track unit from 1978, widening studio options for digital multitrack recording. This expanded the ability for artists to record their music all-digital at pretty much any studio. Onto the cinema scene…

In the early-to-mid 80s, a new theater sound standard emerged: THX, by Lucasfilm. This cinema acoustical standard is not a digital audio format and has nothing to do with recording; it’s everything to do with audio playback and sound reproduction in a specific sized room. At the time, theaters were coming out of the 1970s with short-lived audio technologies like Sensurround. In the 1970s, theater acoustics were still fairly primitive and not at all optimized for large theater spaces. Many theater sound systems were under-designed (read: installed on the cheap) and didn’t correctly fill the room with audio, leaving the soundtrack and music, at times, hard to hear. When Star Wars: Return of the Jedi was on the cusp of release in 1983, George Lucas took an interest in theater acoustics to ensure moviegoers could hear all of the nuanced audio as he had intended. Thus, the THX certification was born.

THX is essentially a movie theater certification program ensuring that all “certified theaters” provide an optimal acoustical experience for moviegoers. Like the meticulous setup of Walt Disney’s Fantasound in 1940, George Lucas likewise wanted to ensure his theater patrons could correctly hear all of the nuances and music within Star Wars: Return of the Jedi in 1983. Thus, any theater that chose to certify itself via the THX standard had to outfit each of its auditoriums appropriately to fill the space correctly for all patrons.

However, THX is not a digital recording standard. Digital formats like Dolby Digital, DTS and even SDDS can all play in theaters certified for THX. THX-certified theaters also play the Deep Note sound to signify that the room is acoustically certified for the feature to come. In fact, even multichannel analog systems such as Fantasound, were it still available, could benefit from an acoustically certified THX theater. Further, each cinema must individually outfit each auditorium in the building to uphold the THX standard; the manager of a given megaplex must work with THX to certify every auditorium individually. THX means having the appropriate volume levels to fill the space for each channel of audio no matter where the patron chooses to sit.

CD Nomenclature

When CDs were first introduced, it became difficult to determine whether a musician’s music was recorded analog or digital. To combat this confusion, CD producers put a three-letter code (the SPARS code) onto the cover to tell consumers how the music was recorded, mixed and mastered. For example, DDD meant that the music was recorded, mixed and mastered using only digital equipment. Other labels you might see included:

DAD = Digital recording, Analog mixing, Digital mastering
ADD = Analog recording, Digital mixing, Digital mastering
AAD = Analog recording, Analog mixing, Digital mastering

The third letter on a CD was always D because every CD had to be digitally mastered regardless of how it was recorded or mixed. This nomenclature has more or less dropped away today. I’m not even sure why it became that important during the 80s, but it did. It was probably included to placate audiophiles at the time. I honestly didn’t care about this nomenclature. For those who did, it was there.
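The three-letter scheme is mechanical enough to decode in a few lines. A toy sketch (the function name and wording are my own, not any standard):

```python
# Decode the three-letter SPARS code printed on 1980s CD covers.
# Position 1 = recording, 2 = mixing, 3 = mastering; A = analog, D = digital.
STAGES = ("recording", "mixing", "mastering")
MEANING = {"A": "analog", "D": "digital"}

def decode_spars(code: str) -> str:
    """Expand a SPARS code like 'AAD' into plain English."""
    code = code.upper()
    if len(code) != 3 or any(c not in MEANING for c in code):
        raise ValueError(f"not a SPARS code: {code!r}")
    return ", ".join(f"{MEANING[c]} {stage}" for c, stage in zip(code, STAGES))

print(decode_spars("AAD"))  # analog recording, analog mixing, digital mastering
print(decode_spars("DDD"))  # digital recording, digital mixing, digital mastering
```

Since the third stage on a real CD was always digital, only the first two letters ever varied in practice.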

Almost all back catalog releases recorded during the 70s and even some into the 80s would likely have been AAD simply because digital equipment wasn’t yet available when most 70s music would have been recorded and mixed. However, some artists did spend the money to take their original analog multitrack recordings back to an audio engineer to convert them to digital for remixing and remastering, thus making them ADD releases. This also explains why certain CD releases of some artists had longer intros, shorter outros and sometimes extended or changed content from their vinyl release.


Sony further introduced its two-track DAT professional audio recording systems around 1987. These units would allow bands to mix down to stereo digital recordings more easily. However, Sony messed this audio format up for the home consumer market.

Fearing that consumers could create infinite perfect duplicates of DAT tapes, Sony adopted a copy protection scheme, the Serial Copy Management System (SCMS), to limit how many times a DAT tape could be digitally duplicated. Each time a tape was duplicated, a marker was placed onto the copy. If a recorder detected that marker at the allowed maximum duplication count, any recorder supporting the system would refuse to duplicate the tape again. This copy protection system all but sank Sony’s DAT as a viable consumer format. Consumers didn’t understand the system, but more than this, they didn’t want to be limited by Sony’s stupidity. Thus, DAT was dead as a home consumer technology.
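To make the scheme concrete, here’s a simplified model of the generation-limiting behavior described above. The real consumer system (the Serial Copy Management System, SCMS) encoded permissions as status bits in the recording’s subcode; the classes and constants below are purely illustrative:

```python
# A simplified model of generation-limited digital copying: each copy
# carries a generation marker, and a deck refuses to copy a tape whose
# marker has reached the allowed maximum. (Real SCMS used subcode status
# bits; these names and values are illustrative, not the actual encoding.)
MAX_GENERATION = 1  # an original may be copied once, but copies may not

class DatTape:
    def __init__(self, generation: int = 0):
        self.generation = generation  # 0 = original recording

class DatRecorder:
    def duplicate(self, source: DatTape) -> DatTape:
        if source.generation >= MAX_GENERATION:
            raise PermissionError("copy-protection: further duplication blocked")
        return DatTape(generation=source.generation + 1)

deck = DatRecorder()
original = DatTape()
first_copy = deck.duplicate(original)   # allowed: copying the original
try:
    deck.duplicate(first_copy)          # blocked: a copy of a copy
except PermissionError as e:
    print(e)
```

The consumer frustration is visible even in the toy version: the restriction lives in the recorder, so buying unrestricted “professional” gear sidestepped it entirely.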

This was at a time when the MiniDisc had no such stupid duplication requirements. Sony’s DAT format quietly died while MiniDisc continued to thrive throughout the 1990s. Though, to be fair, the MD’s lossy compression would eventually turn duplicated music into unintelligible garbage after enough generations of re-compression. The DAT system used uncompressed audio where the MD didn’t.

The greater stupidity was that Sony and other manufacturers also sold semi-professional and professional DAT equipment, and the “professional” gear was not subject to this duplication limit. Anyone who wanted an unrestricted DAT recorder could simply buy up to semi-professional gear from any manufacturer, like Fostex, where no such copy protection scheme was enforced. By the time these other manufacturers’ gear became available, though, consumers no longer cared about the format.

A secondary problem with the DAT format was its helical scanning head technology, similar to the heads used in VHS or Betamax video systems. These heads spin rapidly and can go out of alignment easily. As a result, a DAT car stereo was likely never feasible long term. Hit a bump and the spinning head might shift alignment, requiring readjustment; enough bumps and the whole unit might need a full realignment. Even the heat of scorching summer days might damage a DAT unit.

Worse, helical scanning systems get dirty quickly, in addition to their alignment problems. This meant regularly cleaning these units with a specially designed cleaning tape. Many DAT recorders would stop working altogether until you ran a cleaning tape, which would reset the cleaning counter and allow the unit to function until the next cleaning. Alignment problems also didn’t help the format: head alignment is critical between two different units, so a recording made on one DAT deck might not play on another. Get a tape from a friend whose machine is aligned differently from yours and it might not play at all, suffering digital noise, static, long dropouts or silence on playback. CDs and MDs didn’t suffer from this alignment problem.

DAT was not an optimal technology for sharing audio or for use outside the home. Though, some bootleggers did adopt portable DAT recorders for bootlegging concerts. That’s pretty much no longer needed, with smartphones now taking the place of such digital recorders.

Though, Sony would more than make up for DAT’s failure as a home audio format after the computer industry adopted the DAT tape as an enterprise backup solution. Once DAT tape changers and libraries became common, DATs became a staple in many computer centers. All was not lost for Sony with this format; DAT simply didn’t get used for its original purpose as a home consumer digital audio format. Though, it did (and does) have a cult audiophile and bootleg following.



By the 1990s, the CD had quickly become the new staple of the music industry (over vinyl and cassettes). It was so successful that it caused the music industry to stop producing vinyl records entirely, long before vinyl’s recent resurgence in the 2010s for completely different reasons. Cassettes and 8-track tapes also went the way of the dinosaurs. Though 8-tracks had been more or less gone from stores by 1983, the prerecorded cassette continued to limp along into the early 90s. Even newer digital audio technologies and formats were on the horizon, but they wouldn’t make their way into consumers’ hands until the late 1990s.

Throughout the 1990s, the CD remained the primary digital audio format of choice for commercial prerecorded music. By 1995, you could even record your own audio CDs using blanks, thanks to the burgeoning computer industry. This meant that you could now copy an audio CD, convert all of its tracks into MP3s (called ripping) and/or make an MP3 CD, which some later CD players could play. And yes, there were even MiniDisc car stereos available later in the decade. The rise of the USB drive gave further life to MP3s as well, letting you carry far more music from place to place and from computer to computer than fits on a single CD. The MP3’s portability and downloadability, along with the Internet, gave rise to music downloading and sharing sites like Napster.

Though MP3 CDs could be played in some CD players, the format never really took off as a standard. This left the audio CD as the primary means of playing music in a car, and thus multi-CD car changers were born. Car stereo models that supported MP3-formatted CDs had an ‘MP3’ label printed on the front bezel near the disc slot; no label meant MP3s weren’t supported. The rise of separate MP3 players further drove car manufacturers to add auxiliary input jacks, which began as a response to clumsy cassette adapters: if the car stereo had only a cassette player, you needed a cassette adapter to plug in your 3.5mm-jack-equipped MP3 player. Eventually, car players would adopt Bluetooth for wireless playback from smartphones, though the full usefulness of that technology wouldn’t become common until many years after the 1990s. However, Chrysler took a chance and integrated its own Bluetooth UConnect system into one of its cars as early as 1999! Talk about jumping on board early!?!

Throughout the 1990s, record stores were also still very much common places to shop and buy audio CDs. By the late 1990s, the rise of DVD with its multichannel audio had also become common. Even big box electronics retailers tried to get into the DVD act, with Circuit City banking on its new DIVX rental/purchase format, which mostly disappeared within a year of introduction. This also meant big box record stores were still around, such as Blockbuster Music, Virgin Megastore, Tower Records, Sound Warehouse, Sam Goody, Suncoast, Peaches, Borders and so on. The Blockbuster Video rental stores would eventually become defunct as VHS lost out to DVD, which then gave way to digital streaming around the time of Blu-ray. Some blame Netflix for Blockbuster’s demise when it was, in fact, Redbox’s $1 rentals that did in Blockbuster Video, which was still charging $5-6 for a rental at the time of its demise.

By 1999, Diamond had introduced the Rio MP3 player. Around that same time, Napster, a music sharing service, was born. The Diamond Rio was the first MP3 player to make a real splash on the market, not counting Sony’s MD players. It was a product that mirrored the growing appetite for digital music downloads, which Napster supplied. I won’t get into the nitty gritty legal details, but a battle ensued between Napster and the music industry, and again between Diamond (for its Rio player) and the music industry. Diamond prevailed, which left the Rio on the market and cleared the way for subsequent MP3 players, which further led to Apple’s very own iPod a few years later. Napster, unfortunately, lost its battle, which left it mostly out of business and without much of a future until it chose to reinvent or perish.

Without Diamond paving the legal way for MP3 players in 1999, Apple wouldn’t have been able to benefit from this legal precedent with its first iPod, released in 2001. Napster’s loss also paved the way for Apple to succeed by doing digital music distribution right: getting permission from the music industry first… which Napster failed, and initially refused, to do. If only Napster had had the foresight to loop in the music industry from the start instead of alienating it.

As for recordings made during the 90s, read the DAW section above for more details on what a DAW is and how most audio recording worked during the 90s. Going into the early 90s, traditional recording methods may have been employed, but that was quickly replaced by computer based DAW systems as Windows 98, Mac OS and other computer systems made a DAW quick and easy to install and operate. Not only is a DAW now used to record all commercial music, it is also used to prerecord audio for movies and TV productions. Live audio productions might even use a DAW to add live effects while performing live.

Though, some commercial DAW systems like Pro Tools sport the ability to control a physical mixing board. With Pro Tools, for example, the DAW shows a virtual mixing board identical to the attached physical board; move a virtual control and the physical board’s knobs rotate and sliders glide to match. While the Pro Tools demo was quite impressive, the setup (both Pro Tools and a supported board) was very expensive and mostly a novelty. When recording live musicians, such an automated system might be great for making sure all the musical parts come across professionally without a sound engineer sitting there tweaking every control manually. Still, watching sliders and knobs move under software automation is cool, but it was way overpriced and not very practical.

To be fair, though, Pro Tools was originally released at the end of 1989, but I’m still considering it a 1990s product as it would have taken until the mid-90s to mature into a useful DAW. Cubase, a rival DAW product, actually released earlier in 1989 than Pro Tools. Both products are mostly equivalent in features, with the exception of Pro Tools being able to control a physical mixing board where Cubase, at least at the time I tested it, could not.

As for cinema sound, 1990 ushered in a new digital format: Cinema Digital Sound (CDS). Unfortunately, CDS had a fatal flaw that left some movie theaters in the lurch during presentations. Because CDS replaced the film’s analog optical soundtrack with its digital 5.1 data (left, center, right, surround-left, surround-right and low frequency effects), a damaged stretch of the soundtrack area left the feature (and the format) with no sound at all; there was no analog fallback. As a result, Dolby Digital (1992) and Digital Theater Systems (DTS, 1993) quickly became the preferred formats for presenting films with digital sound. Dolby Digital and DTS placed their digital data elsewhere on (or off) the film, leaving the analog optical track available as backup audio “just in case”. For completeness, Sony’s SDDS uses alternative placement as well.

According to Wikipedia:

…unlike those formats [Dolby Digital and DTS], there was no analog optical backup in 35 mm and no magnetic backup in 70 mm, meaning that if the digital information were damaged in some way, there would be no sound at all.

CDS was quickly superseded by Digital Theatre Systems (DTS) and Dolby Digital formats.

Source: Wikipedia

However, Sony (aka Columbia Pictures) always prefers to create its own formats so that it doesn’t have to rely on or license technology from third parties (see: Blu-ray). As a result, Sony created Sony Dynamic Digital Sound (SDDS), which saw its first film release with 1993’s Last Action Hero. However, DTS and Dolby Digital, at the time, remained the digital track systems of choice when a film was not released by Sony. Likewise, Sony typically charged a mint to license its technologies, so producers would opt for systems that cost less whenever the film wasn’t coming from a Sony-owned studio. Because Sony also owned rival film studios, many non-Sony studios didn’t want to embrace Sony’s technological inventions, choosing Dolby Digital or DTS over SDDS.

Wall of Sound and Loudness Wars

Sometime in the late 1990s, sound engineers using a DAW began to get a handle on properly remastering older 80s music. This is about the time the Loudness War (aka Volume War) began. Sound engineers began using compression tools and add-ons, like iZotope’s Ozone, to push audio volumes ever higher while remaining under the CD’s maximum level to prevent noticeable clipping. At least for the remastered audio, these tools meant much louder output than before.

Such remastering tools have been a tremendous boon to audio and artists, though Ozone itself didn’t arrive until the 2000s; thus, we’re jumping ahead a little. Prior to such tools, Cubase and Pro Tools already offered built-in compression that afforded similar audio compression to iZotope Ozone, but required more manual tweaking and complexity. These built-in tools have likely existed in those products since the mid 1990s.

The Wall of Sound idea is basically pushing an audio track’s volume to the point where nearly every moment of the track sits at the same level. It makes a track difficult to listen to, causes major ear fatigue and is generally an unpleasant sonic experience for the listener. Some engineers have pushed the compression way too far on some releases. CDs afford impressive volume differences, from the softest whisper to the loudest shout, and these dynamics can be put to tremendous artistic use. When heavy compression is applied to pop music, all of those dynamics are lost… replaced instead by a Wall of Sound that never ends. Many rock and pop tracks fall into this category, only made worse by tin-eared, inexperienced sound engineers with no finesse over a track’s dynamics. Sometimes it’s the band requesting the remaster and giving explicit instructions; sometimes it’s left to the sound engineer to create what sounds best. Either way, a Wall of Sound is never a good idea.
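As a toy illustration of how this squashing works, here’s a sketch of a “loudness maximizer”: compress the peaks, then apply make-up gain so the whole track crowds full scale. The threshold and ratio are arbitrary round numbers, not anything from a real mastering chain:

```python
# A toy "loudness war" maximizer. Samples are floats in [-1.0, 1.0].
THRESHOLD = 0.5   # peaks above this get squashed
RATIO = 4.0       # 4:1 compression above the threshold

def compress(sample: float) -> float:
    """Reduce the portion of a sample's magnitude above THRESHOLD by RATIO."""
    mag = abs(sample)
    if mag <= THRESHOLD:
        return sample
    squashed = THRESHOLD + (mag - THRESHOLD) / RATIO
    return squashed if sample >= 0 else -squashed

def maximize(track: list[float]) -> list[float]:
    """Compress peaks, then apply make-up gain so the new peak hits full scale."""
    compressed = [compress(s) for s in track]
    gain = 1.0 / max(abs(s) for s in compressed)
    return [s * gain for s in compressed]

quiet_and_loud = [0.05, 0.2, 0.9, 0.1, 1.0, 0.3]
walled = maximize(quiet_and_loud)
# The peak-to-quietest ratio shrinks: the "wall of sound" effect.
print(f"before: {max(quiet_and_loud) / min(quiet_and_loud):.0f}:1 dynamic range")
print(f"after:  {max(walled) / min(walled):.1f}:1 dynamic range")
```

Everything gets louder, but the distance between the whisper and the shout shrinks, which is exactly the complaint about brickwalled masters.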

These new mastering tools reinvigorated the process of remastering those old, crappy-sounding, vinyl-mastered 1980s CD releases… finally giving that music the sound quality treatment it should have had when those CDs were originally released. That, and record labels needed yet more cash to continue to operate.

These remastering efforts, unfortunately, left a problem for consumers. Because CD releases mostly look identical, you can’t tell whether what you’re buying (particularly when buying used) is the original 1980s release or the updated, remastered new release. You’d need to read the dates printed on the CD case to know whether it was pressed in the 1980s or the late 1990s. Even then, vinyl-master CD pressings continued into the early 1990s, and remastering efforts didn’t really begin in earnest until the late 1990s or into the 2000s. That meant you couldn’t assume a 1993 CD release of a 1980s album was remastered.

The only ways to know whether a CD is remastered are 1) buying it new and seeing a sticker making the remastering claim and 2) listening to it. Even then, some older CDs got only minimal sound improvements (usually just volume) over their 1980s release. Many remasters didn’t improve the bottom or top ends of the music’s dynamics and focused only on volume… which merely made that tinny vinyl master louder. The Cars’ 1984 release, Heartbeat City, is a good example of this problem. The original CD release had thin, tinny audio, clearly indicative of music originally mastered for vinyl. The 1990s and 2000s remasters only improved the volume but left the dynamics shallow, thin and tinny, with no bottom end at all… basically leaving the original vinyl master’s sound almost wholly intact.

A sound engineer really needed to spend some quality time with the original material (preferably the original multitrack masters), bringing out the bottom end of the drums, bass and keyboards while bringing the vocals front and center. If remastered correctly, that album (and many other 1980s albums) could sound like it was recorded on modern equipment from at least the 2000s, if not the 2010s or beyond. On the flip side, Barbra Streisand’s 1960s albums were fully digitally remastered by John Arrias, who was able to reveal incredible sonics; Barbra’s vocals are fully crisp and entirely clear alongside the backing tracks. The remixing and remastering of many rock and pop bands, by contrast, was ofttimes handed to ham-fisted, so-called sound engineers with no digital mastering experience at all.

Where the 1930s were about simply getting a recording down onto a shellac 78 rpm record, the 90s, for new music, were all about pumping up the sub-bass and making the CD as loud as possible. All of this was made possible in the later 90s by digital editing in a DAW.


Seeing as this is an article about The Evolution of Sound, it would be remiss not to discuss the MP3 format’s contribution to audio’s evolution. The MP3 format, or more specifically its lossy compression, was invented by Karlheinz Brandenburg, a mathematician and electrical engineer, working with various people at Fraunhofer IIS. While Brandenburg has received numerous awards for this so-called audio technological improvement, one has to wonder whether the MP3 really was an improvement to audio. Let’s dive deeper.

What exactly is lossy compression? Lossy compression is an algorithmic technique in which an encoder takes a stream of uncompressed digital audio and discards the parts the ear is least likely to notice: extraneous, redundant or perceptually masked content. When the decoder plays back the resulting compressed file, it reconstructs the audio on the fly into a form intended to be indistinguishable from the original uncompressed audio. The idea is to produce audio so aurally similar to the original that the ears cannot tell the difference. That’s the theory; unfortunately, the format isn’t 100% perfect.

Unfortunately, not all audio is amenable to being compressed this way. For example, MP3 is not good at reproducing very low volume content without introducing audible artifacting. Instead of hearing only the audio as expected, the decoder also introduces a digital whine… the equivalent of analog static or white noise. Because pop, rock, R&B and country music rely on guitars, bass and drums, keeping volumes mostly consistent throughout a track, the MP3 format works perfectly fine for these genres. For orchestral music with low volume passages, MP3 isn’t always the best choice.

Some of this digital whine can be overcome by increasing the bit rate of the resulting file. For example, many MP3s are compressed at 128 kilobits per second (kbps). That rate can be increased to 320 kbps, reducing digital whine and increasing overall fidelity. The problem with increasing bit rates is that it also increases the size of the file, so 320 kbps MP3s begin to erode the format’s size advantage over an uncompressed .WAV file. Why suffer possible audio artifacts using the MP3 format when you can simply store uncompressed audio and avoid them?

Let’s understand why MP3s were needed throughout the 1990s. Around 1989-1990, a 1 GB SCSI hard drive might cost you around $1,000 or more. Considering that a CD holds around 700 megabytes, you could extract the contents of about 1.5 CDs onto a 1 GB hard drive. If you MP3-compressed those same CD tracks, that same 1 GB drive might hold 8-10 (or more) CDs’ worth of MP3s. As the 1990s progressed, hard drive sizes increased and prices decreased, eventually making both SCSI and IDE drives far more affordable. Still, it wouldn’t be until 2007 that the first 1 TB drive launched. From 1990 through 2007, hard drive sizes were not amenable to storing tons of uncompressed audio files. To a lesser degree, storage sizes still matter today, keeping compressed audio necessary, particularly on smartphones. But we’re getting too far ahead.
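Running the numbers bears this out. A sketch using the figures above (assuming full 74-minute discs at 128 kbps; higher bit rates land nearer the low end of the 8-10 estimate):

```python
# Back-of-the-envelope storage math for a circa-1990 1 GB drive,
# using ~700 MB per CD and MP3 encoding at 128 kbps.
DRIVE_MB = 1000
CD_MB = 700
ALBUM_MINUTES = 74   # a full Red Book disc; most albums run shorter

def mp3_album_mb(minutes: float, kbps: int = 128) -> float:
    """Approximate size in MB of an album encoded at the given bit rate."""
    return kbps * 1000 / 8 * minutes * 60 / 1_000_000

uncompressed_cds = DRIVE_MB / CD_MB                  # ~1.4 ripped CDs
mp3_cds = DRIVE_MB / mp3_album_mb(ALBUM_MINUTES)     # ~14 full-length albums
print(f"1 GB holds ~{uncompressed_cds:.1f} CDs raw, ~{mp3_cds:.0f} CDs as 128 kbps MP3")
```

Roughly a tenfold gain in capacity on a $1,000 drive is exactly the kind of pressure that makes a lossy format attractive.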

Because of the small storage capacities of hard drives throughout the 1990s, much smaller audio files were a necessity; thus the MP3 was born. You might be asking, “Well, what about the MiniDisc?” It is true that Sony’s MiniDisc format also used compressed audio. Sony, however, as it so often does, devised its own compression technique called ATRAC. This compression format is not unlike MP3 in its design, but exactly how Sony’s ATRAC algorithm works is not publicly documented because it is a proprietary format. Because of ATRAC’s proprietary nature, this article will not speculate on how Sony came to create it. Suffice it to say that Sony’s ATRAC arrived 2 years after the MP3 format’s initial release in 1991. Read into that what you will.

As for the advancement of audio, the MP3 format’s lossy compression has really set back audio quality. While the CD format sought to improve audio, and did so by making tremendous strides with its nearly silent noise floor, the MP3 only sought to make audio “sound” the same as an uncompressed CD track, with the word “sound” being the key. While MP3 did mostly achieve this goal with most musical genres, the format doesn’t work for all music and all musical styles. Specifically, certain electronic music built on sawtooth or square waveforms can suffer. Passages of very low volume can also suffer in MP3’s clutches. It’s most definitely not a perfect solution, but MP3 solved one big problem: reducing file sizes enough to fit on the small data storage products available at the time.

Data Compression vs Audio Compression

Note that the compression discussed above regarding the MP3 format is wholly different from the audio compression used to increase volumes and reduce clipping when remastering a track. MP3 compression is strictly a form of data compression, albeit data compression designed specifically for audio. Audio volume compression used in remastering (see Loudness Wars) is not a form of data compression at all. It is a form of dynamic range compression and limiting: it raises the volume of most of a track, but compresses down (or lowers the volume of) the peaks that would otherwise reach above the volume ceiling of the audio medium.

Remastering (music production) audio compression is intended to increase the overall volume of the audio without introducing clipping (the clicking and static heard when audio rises above the volume ceiling). In other words, remastering compression is almost solely intended to increase volume without introducing unwanted noises. MP3 compression, by contrast, is solely intended to reduce the storage size of audio files while maintaining fidelity as a reasonably close facsimile of the original uncompressed audio. Note that while audio volume compression techniques began in the 1930s to support radio broadcasts, the MP3 format was created in the 1990s. While both of these techniques improved during the 1990s, they are entirely separate inventions used in entirely separate ways.
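To make the distinction concrete, here is a minimal, idealized sketch of dynamic-range compression as used in mastering. This is not any real mastering tool's algorithm; real compressors also model attack and release times, which this instantaneous toy version omits.

```python
# Toy dynamic-range compressor: samples above a threshold are gain-reduced,
# so the whole track can later be raised in level without clipping.
# Sample values are normalized to the range -1.0..1.0.

def compress(samples, threshold=0.5, ratio=4.0):
    """Reduce the portion of each peak above `threshold` by `ratio`."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            # Only the part of the signal above the threshold is squashed.
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

track = [0.1, 0.4, 0.9, -1.0, 0.3]
squashed = compress(track)
# Quiet samples pass through untouched; peaks are tamed
# (0.9 -> ~0.6, -1.0 -> -0.625), leaving headroom for makeup gain.
print(squashed)
```

Notice that nothing here reduces the amount of data stored, which is exactly why this kind of compression is unrelated to MP3's data compression.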

For the reasons described in this section, I actually question the long term viability of the MP3 format once storage sizes become sufficiently large that uncompressed audio is the norm. MP3 wasn’t designed to improve audio fidelity at all. It was designed solely to reduce the storage size of audio files.



In the 2000s, we faced the turn of the millennium and all of the computer problems (the Y2K scare) that went along with it. With the near fall of Napster and the rise of Apple (again), the 2000s are punctuated by smartphones, the iPod and ever smaller and lighter laptops.

At this point, I’d also be remiss in not discussing the rise of the video game console, which has now become a new form of storytelling, a kind of interactive cinema in its own right. These games also require audio recordings, but because they’re computer programs, they rely entirely on digital audio to operate. Thus the importance of using a DAW to create waveforms for video games.

Additionally, the rise of digital audio and video in cinemas further pushes the envelope for audio recording. Instead of needing a massive mixing board, audio can be recorded in smaller segments into audio files, then those files are “mixed” together in a DAW by a sound engineer, who can play all of the waveforms back simultaneously as a mix. Because the sound files can be moved around on the fly, the timing can be changed; clips can be added, removed, raised or lowered in volume, given effects, run backwards, sped up, slowed down or even duplicated multiple times to create unusual echo effects and new sounds. With video games, this can be done by the software while running live. Instead of baking music down into a single premade track, a video game can live-mix many audio clips, effects and sounds into a continuous composition at the time the game plays. For example, if you enter a cave environment in a game, the developers are likely to apply some form of reverb to the sound effects of walking and combat to mimic what you might hear inside a cave. Once you leave the cave, that reverb effect goes away.
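As a rough illustration of that live mixing idea, the sketch below applies a toy feedback-delay "reverb" to a sound only while the player is inside a cave. The function names and the `in_cave` flag are invented for illustration; no real game engine's API is implied, and real engines use far more sophisticated DSP than a single delay line.

```python
# Toy example of conditional effect processing, as a game engine might do
# at playback time. The "reverb" is a single feedback delay, a crude
# stand-in for a real reverb effect.

def simple_reverb(samples, delay=3, decay=0.5):
    """Each sample picks up a decayed copy from `delay` samples earlier."""
    out = list(samples)
    for i in range(delay, len(out)):
        out[i] += out[i - delay] * decay
    return out

def mix_footstep(samples, in_cave):
    # The engine decides at playback time whether the effect applies.
    return simple_reverb(samples) if in_cave else list(samples)

dry = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
print(mix_footstep(dry, in_cave=False))  # passes through unchanged
print(mix_footstep(dry, in_cave=True))   # a decaying echo tail appears
```

The point is that the same dry recording serves both environments; the effect is computed live rather than baked into the audio file.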

The flexibility of sound creation in a DAW is fairly astounding, particularly when a sound engineer can do all of this on a small laptop on location or when connected to their desk system at an office. The flexibility of a video game console live-mixing tracks into gameplay on the fly is even more astounding, as is being able to hear recordings played back instantly on a movie set with effects and processing applied.

In the 2000s, easy to use and affordable DAW software opened the door to home musicians and professionals alike. This affordability put DAW systems within reach of small musicians wanting to create professional sounding tracks on a limited budget. As long as home musicians were studious in learning the DAW software, they could now produce tracks that rivaled those professionally recorded, mixed and mastered at an expensive studio.

While the engineers of the 1930s wanted to give home users a simple way to record audio at home, this goal was truly achieved once DAWs like Acid Music Studio arrived and could easily run on a laptop with minimal requirements.

Not only were DAWs incredibly important to the recording industry, but so too were small portable recording and mixing devices like the Zoom H1n. These handheld devices sport two microphones and are battery operated. The H1n can record multiple tracks simultaneously onto an SD card in various digital audio formats, and these recorders also sport additional input types beyond the built-in microphones. While these handheld units are not technically DAWs, they do offer a few minimal built-in DAW-like tools. Additionally, the audio files produced by an H1n can be imported into a DAW and used in any audio mix.

These audio recorders are incredibly flexible and can be used in a myriad of environments to capture audio clips. For capturing ambient background effects on the go, such as sirens, running water, falling rain or honking cars, a handheld recorder like this is perfect. The audio files from the built-in microphones are incredibly crisp and clear, but you must remain perfectly silent to avoid having distracting noises picked up by the highly sensitive microphones.

There have been a number of Zoom handy recorder products, with models going back to 2007. The H1n is one of its newest models, but each of these Zoom recorders works almost identically to the earlier models in recording capability.

iPhone, iPod, iPad and Mac OS X

This article would be remiss if it failed to discuss the impact the iPod, iPad and iPhone have had on various industries. However, one industry they have had very little impact on is the sound recording industry. While the iPad and iPhone do sport microphones, these microphones are not high quality, meaning you wouldn’t want to use them to attempt to capture professional audio.

These built-in microphones work fine for talking on the phone, using FaceTime or other purposes where the quality of the audio coming through the microphone is unimportant. As a sound designer, you wouldn’t want to use the built-in microphone for recording professional audio. With that said, the iPad can accept audio input through its Lightning or USB-C port for recording into GarageBand (or possibly other DAWs available on iOS), but that requires hooking up an external device.

Thus, these devices, while useful for their apps, games and other mostly fun uses, are not really intended for recording professional audio content on their own, only via separate audio input devices connected to the iPhone or iPad.

A MacBook is much more useful for audio recording because it typically has several ports that can host audio input or output devices: mixing boards supporting multiple audio inputs, a device like the Zoom H1n, MIDI controllers, or all of the above. You can even attach extensive storage to hold the resulting recorded audio files, unlike an iPad or iPhone, which don’t really offer large storage options.

While the iPad and iPhone are groundbreaking devices in some areas, audio recording, mixing and mastering is not one of them… partly because of the limited storage space on these devices combined with their lack of high quality microphones. Apple has contributed very little to the improvement and ease of professional digital audio recording with its small handheld devices. The exception here is Apple’s MacBooks and Mac OS X, when using software like GarageBand, Audacity or Cubase… software that isn’t easily used on an iPhone or iPad.

Let’s talk about the iPod here, but last. The iPod arrived in Apple’s inventory in 2001, long before the iPad or iPhone, and was intended to be Apple’s claim to fame… and for a time, it was. It set the tone for the future of the iPhone, the iPad and even Apple Music. It was small enough to carry, but had enough storage capacity to hold a very large music library on the go. The iPod, however, didn’t really change audio recording. It did slightly improve playback quality by adopting AAC, a successor format to MP3. While AAC encoding did improve audio quality and clarity over MP3 to a degree, the improvement was mostly negligible to the ear compared to a properly created MP3. What AAC did for Apple, more than anything, was serve as the container for FairPlay, Apple’s copy protection system, which prevented users from easily pirating music purchased in Apple’s protected AAC format. MP3 didn’t (and still doesn’t) offer any such copy protection.

AAC ultimately became Apple’s way of enticing the music industry to sign onto the Apple iTunes Store, as it gave music producers peace of mind knowing that iPod users couldn’t easily copy and pirate music stored in Apple’s protected format. For consumers, the perceived enhanced quality is what got some to buy into Apple’s iTunes marketplace. Though, AAC was really more about placating music industry executives than about enticing consumers.



The 2010s are mostly more of the same coming out of the 2000s, with one exception: digital streaming services. By the 2010s, content delivery was quickly moving from physical media towards digital products sold over the Internet via downloads. In some cases with video games, you don’t even get a digital copy; instead, the software runs remotely, with only video and audio pumped to your system. Streaming music and video services work entirely this way. You never own a copy of the work. You only get to view that content “on demand”.

By this point in the 2010s, DVDs, Blu-rays and other physical media formats were quickly becoming obsolete, primarily due to the conversion to streaming and digital download services. Even video games were not immune to this digital purchase conversion. This means the big box retailers of the past, housing shelves and shelves of physically packaged audio CDs, are quickly disappearing. These brick and mortar stores are being replaced by digital services (Apple Music, Netflix, Redbox, Hulu, iTunes and Amazon Prime), yes, even for video games, with services like Sony’s PlayStation Now (2014) and Microsoft’s Game Pass (2017). Though, it can be said that Valve’s Steam began this video game digital evolution back in 2003. Sony also decided to invest even more in its own game streaming and download platform in 2022, primarily in competition with Game Pass, with its facelift of PlayStation Plus Extra.

As just stated, we are now well underway in converting from physical media to digital downloads and streaming. Physical media is quickly becoming obsolete, along with the retailers who formerly sold it… thus many of these retailers have closed their doors (or are in the process), including Circuit City, Fry’s Electronics, Federated, Borders / Waldenbooks and Incredible Universe. Retailers like Barnes and Noble and Best Buy are still hanging on by a thread. Because Best Buy also sells appliances, such as washers and dryers, along with large screen TVs, it is diversified enough not to be fully reliant on physical media sales; still, it remains to be seen whether Best Buy can survive once consumers switch entirely to digital goods and Blu-rays are no longer sold. Barnes and Noble is in a more questionable position because it has few tangible goods other than books and must rely primarily on physical book sales to stay afloat. GameStop is in this same situation with physical video games, though it survives primarily by selling used consoles and used games.

Technological improvements in this decade include faster computers, but not necessarily better computers, as well as somewhat faster Internet, though networking speed is entirely relative to where you live. While CPUs improved in speed, the operating systems seemed to get more bloated and buggier… including iOS, Mac OS X and even Windows. Thus, while CPUs and GPUs got faster, all of that performance was soaked up almost instantly by the extra bloatware installed by Apple, Microsoft and Google, making an investment in new hardware almost pointless.

Audio recording during this decade doesn’t really grow as much as one would hope. That’s mainly due to services like Apple Music, Amazon Music, Pandora, Tidal and even, yes, Napster. After Napster more or less lost its case with the music industry in 1999, it was forced to change or die. Napster reinvented itself as a subscription service, which finally allowed it to get the blessing of, and pay royalties to, the industry against which it had lost its file sharing battle. Musical artists now create music that sells because they have a fan base, not necessarily because the music has artistic merit.

As for Napster, it gets more convoluted. From 1999 to 2009, Napster continued to exist and grow its music subscription service. In 2009, Best Buy wanted a music subscription service for its brand and bought Napster. A couple of years later, in 2011, due primarily to money problems within Best Buy, it was forced to sell the remnants of Napster, including its subscriber base and the Napster name, to the Rhapsody music service. In 2016, Rhapsody bizarrely renamed itself Napster… which is where we are today. The Napster that exists today isn’t the Napster of 1999 or even of 2009.

The above information about Napster is included mostly as a follow-on to the earlier discussion of Napster’s near demise. It doesn’t necessarily further the audio recording evolution, but it does relate tangentially to the health of the music and recording industry as a whole. Clearly, if Best Buy can’t make a solid go of its own music subscription service, then maybe we have too many?

As for cinema sound, DTS and Dolby Digital (with its famous double-D logo), alongside THX’s acoustical room engineering, became the digital standards for theater sound. Since then, audio innovation in cinema has mostly halted. This decade has been more about using previously designed innovations than about improving the cinema experience. In fact, you would have thought that after COVID-19, cinemas would have wanted to invigorate the theater experience to get people back into auditoriums. The only real innovation in theaters has been to seating, not to sound or picture.

This article has intentionally overlooked the transition from analog film cameras to digital cameras (aka digital cinematography), which began in the mid 1990s and has now become quite common in cinemas. Because this transition doesn’t directly impact sound recording, it’s mentioned only in passing. Know that this transition from film to digital cameras occurred. Likewise, this article has chosen not to discuss Douglas Trumbull’s 60 FPS Showscan film projection process, as it likewise didn’t impact the sound recording evolution. You can click through to any of the links for more details on these visual cinema technologies if you’re so inclined.

Audio Streaming Services

While the recording industry is now firmly reliant on a DAW for producing new music, that new music must be consumed by someone, somewhere. That somewhere includes streaming services like Napster, Apple Music, Amazon Music and Pandora.

Why is this important? It’s important because of the former usefulness of the CD format. As discussed earlier, the CD was more or less a stop-gap for the music industry, but at the same time it propelled the audio recording process in a new direction and offered up a whole new format for consumers to buy. Streaming services, like those named above, are now the place to go to listen to music. No longer do you need to buy and own thousands of CDs. Now you just pay for a subscription and have instant access to perhaps millions of songs at your fingertips. That’s like walking into a record store, opening every single CD in the store and listening to every one of them for a small monthly fee. This situation could only happen at global Internet scale, never at the scale of a single store.

For this reason, record stores like Virgin Megastore and Blockbuster Music (now out of business) no longer need to exist. When getting CDs was the only way to get music, CDs made sense. Now that you can buy MP3s from Amazon or, better, sign up for a music streaming service, you can listen to any song you want at any time you want just by asking your device’s virtual assistant or by browsing.

The paradigm of listening to commercial music shifted during the 2010s. Apple Music, for example, launched in 2015 and had gained 88 million subscribers as of 2022, and counting. The need to buy prerecorded music, particularly CDs or vinyl, is almost nonexistent. The only people left buying CDs or vinyl are collectors, DJs and music diehards. You can access brand new albums the instant they drop simply by being a subscriber. With an iOS device and Apple Music, you can even download the music for offline listening. You don’t need Internet access to listen, only to download the tracks. As long as your device remains subscribed, all downloaded tracks remain valid.

It also means that if you buy a new device, you still have access to all of the tracks you formerly had. You would simply need to download them again.

As for music recording during this era, the DAW is firmly entrenched as the recording software of choice, whether in a studio or at home. Bands can even set up home studios and record their tracks right at home. There’s no need to lease expensive studio space when you can record in your own. This has likely put a dent in the business of studios that relied on bands showing up to record, but it was an inevitable outcome of the DAW, among other music equipment changes.

It also means the movie industry has an easier time recording audio for films. With a laptop or two, you can easily record audio for a movie production on location. What was once cumbersome and required many people handling lots of equipment is now likely down to one or two people using portable equipment.

As for cinema audio, effectively not much has changed since the 1970s other than better amplifiers and speakers to support THX certification. By the 2010s, digital sound had become ubiquitous, even when using actual developed film prints, though digital cinematography is now becoming the de facto standard. While cinemas have moved towards megaplexes containing 10, 20 or 30 screens, the technology driving these theaters hasn’t much changed this decade. Competitors to THX have come into play, like Dolby Atmos (2012), which also prescribes optimal speaker placement and volume to ensure high quality spatial audio in the space allotted. While THX’s certification system was intended for commercial theater use, Dolby Atmos can be used in either a commercial cinema or a home cinema.



We’re slightly over two years into the 2020s (100 years since the 1920s) and it’s hard to say what this decade might hold for audio recording. So far, this decade is still riding out what was produced in the 2010s. When this decade is over, this section will be written. Because this article is intended as a 100 year history, no speculation will be offered as to what might happen this decade or beyond. Instead, we’ll need to let this decade play out to see where audio recording goes from here.

Note: All images used within this article are strictly used under the United States fair use doctrine for historical context and research purposes, except where specifically noted.

Please Like, Follow and Comment

If you enjoy reading Randocity’s historical content, such as this, please like this article, share it and click the follow button on the screen. Please comment below if you’d like to participate in the discussion or if you’d like to add information that might be missing.

If you have worked in the audio recording industry during any of these decades, Randocity is interested in hearing from you to help improve this article using your own personal stories. Please leave a comment below or feedback in the Contact Us area.


Fallout 76 Rant: The Impact of Legacy Removal

Posted in botch, business, video game, video game design by commorancy on January 25, 2023

Fallout 76_20230117225518

While Pipe might be life in Fallout 76, the Legacy removal might actually mean the death of Fallout 76. While some gamers are praising the removal of Legacy weapons from Fallout 76, those who are impacted by this change might actually have the power to sink the Fallout series, and possibly even Bethesda itself. Let’s explore.

Misguided Maneuver

It’s clear that Bethesda is horribly misguided internally. On the one hand, I get Bethesda’s rationale behind the removal of these “illegal” mods from Legacy weapons. On the other hand, Bethesda’s rationale is entirely misguided and fails to take into account the real damage that has now been inflicted on the game and, ultimately, the game’s player base. The real question now is not whether the game is better, but whether Fallout 76, ironically a survival game, can survive this change.

One thing is certain: some players are reeling from this change, and rightly so. Bethesda also doesn’t seem to fundamentally understand the player base that grew out of these legacy weapons having been in the game for literal years.

What is a Legacy Weapon? A Legacy weapon is any weapon that was formerly obtainable in the game through loot drops, but was removed from the loot drop list by Bethesda’s Fallout 76 devs in the game’s early years (around 2018-2019). This meant there was no way to obtain these weapons after the loot drops stopped… until Legendary Modules were introduced when Nuclear Winter began in 2019. Once these modules were added, for a short time it may have been possible to craft such weapons at a crafting bench, until that crafting was also patched out. Since then, these weapons have been unavailable.

Which Weapons were Removed?

The “Legacy” weapons to which this article refers are any legendary energy or plasma weapon with an explosive attachment. These explosive attachments have now been deemed “illegal” by Bethesda even though they were perfectly legal when they originally dropped. Such weapons could be obtained earlier in the game’s life legitimately, but today are no longer obtainable and are now marked as “illegal” by Bethesda’s Fallout 76 team. Weapons which have now been removed include:

  • Explosive Gatling Plasma
  • Explosive Laser Pistol
  • Explosive Laser Rifle
  • Explosive Gatling Laser
  • Explosive Flamer
  • Explosive Gauss Rifle
  • Explosive Gauss Shotgun
  • Explosive Gauss Minigun
  • Explosive Gauss Pistol
  • Explosive Tesla Rifle

All of the above weapons have had their explosive attachment removed by the Fallout 76 devs, turning many 3 star Legendary weapons into 2 star weapons.

Note, I won’t even get into the severe bugs introduced as a result of the removal of these Legacy weapons… bugs which have heavily impacted many rogues in addition to the Legacy removals. It’s not pretty for Bethesda or Fallout 76 right now.

Righting Wrongs

Once Bethesda knew these weapons shouldn’t have been included in the game back in 2018-2019, a patch should have been swiftly crafted and implemented then to remove these “illegal” weapons. That would have saved Bethesda this headache today. Instead, Bethesda let this situation fester for going on nearly 5 years. Not only did it fester, it actually birthed a whole new type of gamer in Fallout 76… a type of gamer willing to spend real cash money not only to obtain and own these “illegal” weapons, but also to pay Bethesda for Fallout 1st and for Atoms to buy the Atomic Shop’s literal valueless junk.

Yes, this new type of gamer is the one who is literally propping up Bethesda’s Fallout 76 game. These are the gamers who are paying Bethesda’s bills, keeping Bethesda’s lights on and ensuring their staff remain employed.

Removing these weapons is literally a situation of “biting the hand that feeds you!”

Fallout 76 Gamer Types

When Fallout 4 began, and also when Fallout 76 began, the primary type of gamer Bethesda hoped for was one interested in playing the game firmly on its “golden path”. In programming, a “golden path” is the path most users will take when using a piece of software; it’s the path the engineers design for users to find and use. I dub these types of users “golden” users. The vast majority of software users fall into this category. Video game users, however, take a different route.

Gamers diverge from this “golden path” approach for a number of reasons. The primary reason is that video games entice children to play. By the very nature of the product being a video game, children are naturally one of the video game industry’s primary demographics… regardless of a game’s rating.

Let’s define children. Children include ages 8-17, with the primary age of most children playing ranging from 12-14. Because children don’t have a lot of life experience, their minds aren’t constrained by “adult” thinking. Children play games in ways that suit their fancy, which means children do not always remain on the golden path. In fact, in most cases, children stray from the golden path frequently in video games. Children actively try to poke holes in, find problems with and generally do things that an adult gamer might never think to try.

Children aren’t the only players doing this, however. Many adults can maintain this childlike poke and prod thought process well into their 30s. This leads to the next type of gamer I dub the “rogue” gamer.

Rogues vs Golden

Rogue gamers don’t follow the golden path laid out by the developers. These gamers intentionally and actively seek to find bugs, exploit holes and obtain “rare” objects in a game, including weapons. Almost every “rogue” gamer seeks to one-up their fellow player by finding something that their friend doesn’t have, whether that be a way to build under the map, go out of bounds or obtain a weapon that few other players have.

Rogue players don’t play the game as intended and are unwilling to follow EULA rules. They’re so flippant in the way they play the game, they actually don’t really care if their account gets banned or if Sony shuts their PlayStation down by disabling their PSN account, for example. In the gaming world, Rogues don’t care about the rules or abiding by them. With that said, they do care about finding the latest rare thing to have in the game.

The thing is, many of these rogue gamers come from well-to-do, dare I say wealthy families. This means they are willing to pay and pay and pay. They will pay for Fallout 1st. They will pay for Atoms in the atomic shop. They will even pay other players real cash money on places like eBay to buy rare in-game items.

In short, many rogue gamers keep Bethesda’s (and by extension, Microsoft’s) bills paid and the lights on. That’s not to say that every rogue gamer is wealthy enough to do this, but many are. At this point, I think you might understand where this is heading.

One thing that rogues typically don’t care about is the game itself or even the game’s story. They’re not playing the game because it’s Fallout and they’re not playing it because it has interesting lore or interesting quest lines, they’re playing the game because it’s an MMO, because it has multiplayer, because it has combat and because they can find and exploit heavy guns that no one else has. Rogues will only follow down a quest line because it unlocks their character to have or use something unique or better than someone else, not because of interest in the RPG aspect or the story.

Golden players, on the other hand, play the game by the rules using weapons considered legal within the game. These are also players who typically respect the Fallout canon, who are genuinely interested in the story being told, who play by the rules, who choose to play using guns the game provides and who don’t stray outside of the bounds simply because they find a loophole. These are dedicated Fallout players who’ve likely played many previous Fallout games, if not all of them.

Mixing The Two

These player types are not hard-walled into two groups. Some players remain mostly golden, but occasionally go rogue when they deem it appropriate. For example, some of Bethesda’s rigid game rules go too far, and some players go rogue simply to bypass them, to save time, to cut weight down or for other reasons that help them play the game better.

Bethesda doesn’t get its player base

One thing is certain, Bethesda does NOT fundamentally understand who’s actually playing Fallout 76 and who is actually paying their bills. It goes even deeper than this.

Because a whole separate black market existed for these high powered “illegal” weapons, Bethesda completely overlooked this aspect of its game. Instead of taking advantage of these paying players and bilking them for money, Bethesda decided to remove the weapons from the game.

It’s clear, you can either benefit from these players by making real money off of them or you can alienate them… and alienation is exactly where we are now.

Black Tuesday

On Tuesday, January 24th, 2023, rogue players had to say goodbye to their “illegal” weapons. Bethesda removed weapon modules from the game which, during 2018 and 2019, were perfectly legal to own and use. This change not only sends a mixed message to players, it sends an exceedingly bad one.

It says that Bethesda really doesn’t give one crap about a huge segment of its very player base who are paying its bills, keeping its staff employed and keeping the game from going under.

This change is likely to be the beginning of the end for Fallout 76. Why?


Rogues are as perplexed and mystified by this late change as anyone. For years these weapons remained in the game. Why is it only now that Bethesda has decided to rid the game of them?

Because these rogue players comprise a substantial portion of the revenue given to Bethesda for Fallout 1st and other pay-for-play features, it’s surprising Bethesda was so willing to risk losing that revenue and possibly even the entire game over this silly change.

Rogue players must now make a choice. They can either stay and play a hobbled version of the game using no special weapons, or they can go find a new game where they can, once again, feel special and own special weapons. This is the real danger to Fallout 76. Rogues are fickle players. They only stay and play where they can feel special. If they can no longer feel special, then the game is done for them and they leave.

That’s exactly the crossroads at which Bethesda now finds itself. The question is, are there enough newbie players to keep the lights on and the staff employed? The answer to this question comes in how Bethesda chooses to respond.

High Levels and Endgame

After playing through any game, you’ve not only amassed levels for your character, you’ve unlocked perks and skills. The problem is, once the quests have ended, what do you do with those skills? That’s fundamentally the problem with most games. You spend your time playing through the quest lines leveling up your character only to find that, when you reach the end, all of that leveling and those perks were for nothing… as there’s no endgame content.

Many gamers find little to no endgame content in which to utilize those high level skills. That means you reach the end and go find a new game to play.

Fallout 76’s endgame differs only in that it offers Events (and Legacy weapons). After the quests are done and there are no more quest lines to follow, Events and Daily quests are what’s left. These are repetitive activities that offer a slight chance at rare loot rewards. They also offer the chance to try out a new overpowered weapon.

Leveling up in Fallout 76, unfortunately, is mostly worthless. Because guns cap out at level 45 or 50, that essentially means your player is capped out at level 45 or 50, regardless of the level number your player may actually achieve. The only benefit to leveling up is to max out the Legendary perk cards, an addition that gives higher level players a tiny bit of an incentive to stay with the game.

Once a player reaches level 650-700, that player can easily have maxed out the Legendary Perk cards. Maxing these cards yields a tiny bit more damage from weapons, if utilized correctly. So then, what’s left after this? Not much, other than going rogue and trying to find the now unobtainable, but overpowered, weapons which formerly existed in the game.

While these weapons were once in the game circa 2019, they stopped dropping as loot long ago. That means new players can’t easily obtain these overpowered weapons unless they buy them from another player with real money. Hence, a player economy is born.

Initially, caps were the answer to this economy. Unfortunately, caps became mostly pointless as a currency when Bethesda moved to bullion, scrip and stamps for the newest, rarest items. This is when players moved to selling these highly prized, overpowered weapons for real cash money, as in USD. Internet forums and trading boards came to exist to list and sell these weapons for real money.

In one fell swoop, Bethesda shut all of this down… the trading, the sales, the weapons, all of it. Without these weapons in the game, there are no more sales of them. You can’t sell what’s no longer in the game.

It goes way deeper than that. Not only did it kill third party sales of in-game weapons, it is poised to see a massive number of high level players abandon Fallout 76 and cancel their Fallout 1st subscriptions. Why play a game when there’s nothing special left?

Endgame content is firmly limited to Events. Unfortunately, in retaliation for these high powered weapons being in the game, Bethesda ramped up these events to be likewise overpowered. Without these weapons in the game, the events are STILL way overpowered… to the point where they are likely to FAIL the vast majority of the time when using standard weapons. Bethesda retaliated against the players by removing the weapons, but failed to scale the events back to a level where standard weapons can succeed. Right now, the “golden” level 45 and 50 weapons are no match for these highly overpowered event enemies.

It gets worse. As players dwindle from the game, both through natural attrition and now because Legacies have been removed, new players will be hard pressed to find enough higher level players on a server to take on the Scorchbeast Queen, the Titan or even Earle. Because Bethesda souped these events up to counter Legacies, it’s near impossible to win them with non-Legacy weapons, especially on a server with maybe 10 players.

Bethesda is definitely at a crossroads.


Now that Microsoft owns Bethesda, Bethesda is most definitely playing with fire. In fact, Bethesda’s choices surrounding Fallout 76 have always been questionable. Legacy removal is probably one of THE most questionable changes Bethesda has ever made for Fallout 76, considering when the problem actually started. Why does Microsoft matter? We’ll come to that answer in a bit.

For now, Fallout 76 is on the cusp. We don’t yet know the fallout (ha) from Bethesda’s meddling with Legacies. The point is, we cannot know how the rogue players will respond or how much financial damage the players who abandon the game will do to Bethesda.

It’s clear, without these Legacy weapons in the game, rogues who were playing Fallout 76 solely because these weapons existed will evaporate… and along with that, so will the income from Fallout 1st and all other income that keeps Fallout 76 afloat. Are the rogues a big enough population to make a dent in Bethesda’s income stream? My personal guess is, yes… at least for the longevity of Fallout 76. Without the rogues, Fallout 76 may be hard pressed to remain a viable entity, let alone Fallout as a franchise.

Does Fallout keep Bethesda afloat? Probably not; Fallout 76 most certainly isn’t the only game that Bethesda publishes. However, it is currently the only Fallout franchise title available.

Obsidian, another developer, was purchased by Microsoft in 2018, the same year Fallout 76 released. Obsidian contains remnants of Black Isle Studios, the original studio that developed the Fallout franchise. Because Microsoft now owns both Bethesda and Obsidian, someone at Microsoft could easily mandate transitioning the Fallout IP and franchise from Bethesda back over to Obsidian.

Bethesda is clearly out of its depth with Fallout, and it clearly doesn’t understand the franchise. Worse, it doesn’t even understand multiplayer systems in relation to Fallout. This first multiplayer Fallout game is probably the worst implementation that could have been imagined. Partly this is due to its design goals, but partly it’s due to an inept team that couldn’t actually build a workable product… and here we are today. Because the Fallout 76 team failed to build a workable product, it’s now forced to remove a feature that shouldn’t have been in the game in the first place. Yet, that feature remained for nearly 5 years, solidifying those weapons as legitimate parts of the game.

What Bethesda has done is tantamount to yanking a bottle from a baby after the baby has already begun to drink. If you didn’t want to give the baby the bottle, it’s simpler not to hand it over up front than to yank it away after the baby has started drinking. Heartless.

Can Fallout 76 tank Bethesda?

At this point, maybe not. What the loss of Fallout 76 will do is sour future gamers towards Bethesda games.

“Once bitten, twice shy.”

Few will step up to the plate again knowing the disaster that befell Fallout 76, especially once it disappears. Believe me, Fallout 76 WILL end. The question isn’t if, it’s when. After this Legacy removal, I believe Fallout 76’s end days are here. It’s just a matter of time before the remaining high level players (many of whom are now rogues) walk away and find a new game.

Gamers are fickle and these kinds of stupid maneuvers are ripe for rage quitting. Some die hard gamers will remain and play, but only for a short time until they become frustrated with the crappy standard weapons and find a new game to play. At a minimum, I’d certainly expect to see a rash of Fallout 1st subscriptions cancelled in the next 30 days.

The answer is that, alone, Fallout 76 likely can’t tank Bethesda. However, Fallout 76’s demise can most certainly make a big enough dent that someone at Microsoft (Phil Spencer?) retaliates against Bethesda through layoffs (Buh Bye Todd Howard), closures and by handing over various game IP to better equipped and better managed studios.

It’s clear, the current developers are ill equipped to understand what Fallout 76 should be. Let’s understand why…

Rogues, Games and Marketing

Rogues, whether a game studio likes them or not, are a market force. These are players who have money and are willing to spend it. A game studio can either embrace this fact, or go bankrupt trying to eliminate these gamers from the game. As they say, “Get woke, Go Broke!”

Bethesda is firmly in this latter camp. I don’t know what impetus is driving Bethesda’s management team and devs to take this “woke” approach, but clearly it’s not about trying to make money. Rogues represent real money sales. If a single player is willing to pay $20 or $50 or $150 in real cash money for a single overpowered weapon, then Bethesda clearly isn’t actually trying to make money. Who leaves money on the table?

Leaving an untapped market on the table is not only stupid, it’s probably one of the stupidest things I’ve seen Bethesda (or in general, a game developer) do.

Pay for Play

As much as gamers harp on the pay for play scheme, it’s a real thing, it exists and it needs to exist. Yes, buying an in-game weapon for real cash money is pay for play. You can’t deny that. Whether pay for play is a good or a bad thing is entirely debatable. One thing is certain: pay for play makes money… and that’s exactly why game developers are in business, to make money.

In fact, pay for play already exists in Fallout 76 with Fallout 1st and Scrap Kits and Repair Kits and the list goes on. Even foodstuffs like Perfect Bubblegum and Lunch Boxes are forms of pay for play. Selling overpowered rifles for real cash money is just the next logical step.

At this point, Fallout 76 is almost 5 years old. When a game is brand new, perhaps pay for play isn’t something that’s needed. However, 5 years later with 95% of players at endgame, pay for play is perfectly fine and, dare I say, necessary. It extends the life of a game, and anything that extends the life of a game I consider a good thing. It lets new players step in knowing their time won’t be wasted by the game closing down due to lack of players. It gives rogues and endgame players a means of keeping the game interesting and keeps them coming back for more play. Anything that keeps players playing is a good thing, and that alone continues to make money for Bethesda. I’d say that’s win-win-win all around.

High Level Players, Veterans and a New Map

One thing that Bethesda has failed to take into account, among Fallout 76’s many failures, is planning for high level players reaching the endgame. The Elder Scrolls Online’s devs seemed to plan properly for endgame high level players. In fact, ESO’s devs went so far as to convert level 100+ players into then new “Veteran” levels: for every 100 levels, you got 1 Veteran level, so a level 300 player converted into Veteran level 3. These Veteran levels were denoted by a Veteran symbol next to the player’s rank, just above their head, distinguishing Veteran players from low level players with a similar number.
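
As a sketch only (this is hypothetical illustration code, not ESO’s actual implementation or API), the 100-levels-per-Veteran-level conversion described above boils down to simple integer division:

```python
# Hypothetical sketch of the Veteran-level conversion described above:
# 1 Veteran level per 100 character levels. Illustrative only.
def veteran_level(character_level: int) -> int:
    """Return the Veteran level for a given character level."""
    return character_level // 100

# A level 300 player converts to Veteran level 3;
# anyone below level 100 has no Veteran level yet.
```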

In addition to being converted into Veteran levels, this change also unlocked the game to be played from the beginning using a new harder Veteran challenge level. Eventually, the devs even opened up a new Veteran level territory that required teaming up with other Veterans to handle this new difficult area. This area was so challenging, in fact, there was simply no way to solo it. The hordes were so difficult, you were forced to go in with a team even as a high Veteran level. While the lower level territories remained trivially easy for a Veteran, the Veteran territories were intensely challenging. Even group dungeons were incredibly challenging.

Try likening this to Fallout 76 and there’s simply no comparison. While Fallout 76’s devs are busy introducing silly, bugged out territories like Nuka World and slapping high level players on the wrist by removing Legacies, ESO’s devs (at about this same point in ESO’s lifecycle) were treating high level players like valued customers and giving them more challenges. Effectively, the Fallout 76 devs treat high level players like a nuisance when they should be celebrating players who’ve made it to level 600 or 800 or 1200 or 2000. That celebration should include rewarding these players, not chastising them.

If a player has given up a year or two of their life to play Bethesda’s Fallout 76 game and reached level 1000 (and who continues to actively play it), that’s a celebratory moment. Bethesda devs should be celebrating long standing players who continue to play the game instead of slapping these players on the wrist and saying, “Bad”.

ESO celebrated high level players the right way. Fallout 76 devs treat high level players like nothing more than a mere annoyance.

Here you have one team at Bethesda who fully understands and embraces their entire player base. On the other hand, you have an inept team who hasn’t the faintest clue of who their player base even is. I shake my head at this incredible disparity within the same corporation. It simply makes no sense.

Inept Developers

You’d think that, if anything, The Elder Scrolls Online would have taught the Fallout 76 team some valuable lessons. Unfortunately, you thought wrong. These two MMO teams seem not to communicate their valuable lessons to one another at all.

The reality, which has become incredibly apparent, is that the Fallout 76 development team is wholly and completely inept; not just from a development perspective, but from a money making perspective. They don’t seem to understand the value of keeping ALL of the players happy and, most importantly, paying.

A game studio makes money by keeping people playing the game WHILE spending money. You don’t make money when you chase away your paying players. It’s pretty simple. Removing legacies from the game is a seminal chase-away-players moment. It’s also quite clear that the Fallout 76 developers and even the management team don’t get the real danger here.

Instead of embracing the legacies and the whole real money economy that’s grown up around these weapons’ accidental existence, Bethesda turns its back on the players by removing the weapons from the game. Not only has this shut down that entire real world economic situation (which Bethesda could have tapped), players who wanted these items have no reason to stay, pay and play the game any longer.

This means some walk away from Fallout 76 immediately and others leave slowly over time as they lose interest, “because it’s boring”. Some players, specifically rogues, must make their own fun in a game. Legacies were the rogue’s way of making that fun and cutting the boredom. Without the legacies, there’s honestly no reason for these players to remain playing the game… let alone spend any more money on it.

Business Lessons

While I hadn’t intended this article to become a business lesson, it’s moving quickly in this direction. Let me take this section to discuss this aspect of business operations.

Every college student should be required to take at least one or two business classes. It’s vitally important for students learning software development to understand how their work impacts the company’s bottom line. Not all software features are good for business. There is no clearer illustration of that than the removal of the Legacy weapons from Fallout 76. Adding new features can help users. Removing features can easily cause people to walk away from your product.

This is where business classes come into play. Business classes give students the smarts to realize, “Hey, this feature that I’m being tasked to implement has a high chance of losing 70% of our PAYING clients!” Businesses must empower all employees to speak up when they see problems like this.

While software architects come up with ideas, they may not be privy to exactly how many people might actually be using a given feature. Before implementation of any feature that impacts the userbase, someone needs to put on the brakes and say, “Let’s pull the numbers of how many people are actually using this feature before rolling it out!” Sanity must always prevail in any software business. You can’t simply roll out a feature without understanding exactly how it might impact your existing bottom line.

This is why business classes, and more importantly, business intelligence and reporting is important. Blindly making changes without understanding the business impact can easily tank a business. Case in point, Musk’s incredibly poor handling of Twitter. Now we have yet another poor business case, Bethesda’s shitty handling of Legacy removals in Fallout 76.

Too Late

This article is written after the fact. Unfortunately, removing these weapons is more or less a done deal. Knowing the way Fallout 76’s code is written, there’s likely no way to undo this change. It’s far easier to stop a code rollout before it happens than to undo a change already made. In many cases, undoing a code change is actually impossible due to the way it was rolled out.

At this point, Bethesda is stuck with this change, for better or worse. At this point, unfortunately, we’re probably at the “or worse” point. As I said above, we’re nearly 5 years into this game’s lifecycle. Instead of Bethesda celebrating high level player achievements, these players are being chastised and chased off by removing weapons these players relied on.

The point in becoming a high level player is to take the benefits that go along with that high level, which includes high damage weapons. That’s an expected staple of any game that supports having high level players. If level 1000 players are reduced to using weapons at the same level as a level 50 player, what’s the point in playing Fallout 76? In fact, what’s the point in leveling up beyond level 50?

Not only does this Legacy removal impact high level players, it impacts low level players because they know they can’t get these weapons in the future. That means that players who might have hung around to level their character up to level 1000 for the chance of getting one of these weapons might now get to level 100, quit and go buy something else. That drastically reduces the income of Bethesda… and by extension Microsoft.

The Fallout 76 team could have embraced these weapons, retooled them to be legitimate and found legitimate ways to sell and use them, monetarily leveraging the external market. Instead, the team’s lack of business intelligence and foresight prevailed.

It’s anyone’s guess if Fallout 76 can recover from this change. My guess is that this Legacy removal will be the last major thing the Fallout 76 team does before the plug gets pulled on Fallout 76 by Microsoft. Bethesda, prove me wrong.

Compensating Controls

This final thought is yet another failure of business intelligence on the part of Bethesda management regarding the Legacy removals. One idea many game developers employ to soften the blow of a negative change is to introduce a compensating positive change. For example, when something gets removed from a player’s inventory because of a policy change, the developer will offer up some kind of freebie for all of the players impacted. This can include free currency, a free new weapon, a freebie in the game store or something similar. This freebie offsets the player’s loss in compensation.

Unfortunately, with this Legacy removal, Bethesda offered players no compensation of any kind for the loss of their weapons. Players still had their weapons, yes, but severely altered. Bethesda might as well have removed the weapons outright, as what remains is pretty much worthless. It’s surprising that Bethesda has offered up no compensation at all, but here we are.

For all of the above reasons, the rogues are likely to abandon this game entirely… perhaps even the franchise itself… said as if rogues even care about Fallout as a franchise. That leaves the golden players to carry the weight, but there are likely not enough golden players willing to shell out for Fallout 1st in the numbers needed to keep the game afloat. Thus, this change is likely to be Fallout 76’s death knell.

Way to go, Todd! Phil, if you’re reading this, you probably need to have a sit down with Todd to figure out what the hell is going on with the Fallout 76 development team.

Update: 1/29/2023 — Positive Changes vs Balance

While I didn’t discuss this above, there was really no need to spell out the positive side of removing Legacy weapons. We all know exactly what taking overpowered weapons out of the game means. For those who need it spelled out, it means only less powerful weapons now exist in the game. That means shooting more, making more ammo and grinding more to keep your guns working. It also means finding more ways to buff your weapons using Magazines, Bobbleheads and other consumables, and reworking perk cards to max out the damage done with these weapons.

In short, it means spending more time reworking your character to find the highest damage build based around the game’s crappy level 45 and 50 weapons. Ultimately, it’s an exercise in futility.

Does the game have balance after Legacies? No, it does not! Fallout 76 is actually quite unbalanced, entirely because Bethesda has given enemies many questionable buffs. Removing Legacies from the game does nothing to negate these problematic changes to enemies. Let’s list these enemy problems…

  • Enemies are allowed to instantly and silently teleport right behind you and instantly damage or kill you. Not balanced.
  • Enemies are still given perfect aim with every single shot, while players are given a VATS that misses more frequently than it hits. Not balanced.
  • Enemies have perfect accuracy with every single shot and are given 100% anti-armor per shot while players must live with weapons that afford drastically reduced accuracy and are given zero anti-armor per shot unless using perk cards and/or Anti-Armor legendary weapons. Even then, anti-armor afforded to the player is never 100% even though enemies are given 100% anti-armor shots. Not balanced.
  • Enemies have majorly enhanced perception, which can instantly negate Sneak cards. For example, if one enemy “sees” you, the horde around them all instantly see you. It’s not enemy by enemy, but by the horde. Not balanced.
  • Daily Ops is worthless due to the enhanced perception given to enemies. Players spend major amounts of time building their character’s method of combat. If a player has chosen a sneaky sniper build, for example, Daily Ops entirely negates it. Does Bethesda expect us to completely retool our builds strictly around Daily Ops? Not balanced.
  • Daily Ops, once again, is worthless due to stealth fields given to all enemies. Stealth invisibility fields negate using VATS. If you’ve built your character around VATS criticals, once again Bethesda has negated that. Not balanced.
  • HP bar above an enemy lies. If an enemy’s bar says level 50, yet it takes hundreds of shots to kill it, that’s not level 50. A level 50 enemy should take a similar number of shots to kill it no matter what type of enemy it is. Not balanced.
  • Weapons show a high level of accuracy in the UI, but do not provide that high level of accuracy when shooting. Not balanced.
  • Weapons show specific damage numbers, but never actually provide that level of damage when shooting. For example, an Instigating Fat Man purports around 1500 damage in sneak, but never actually shows more than about 100-200 damage when landing a direct hit while sneaking. Not balanced.

As you can see, the vast majority of Fallout 76 has no balance at all. Unless you consider enemy tactics and damage stacked against the player as balance, there is very little balance in the game. The Legacies were, in fact, the only way to negate Bethesda’s unbalanced design; they gave balance back to the player against Bethesda’s unfair, overpowered enemies.

Unfortunately, we’re now right back to a completely unbalanced and unfair game, where enemies can cheat against the player using tactics like teleportation, while the player has been given no such ability or defense.

Balance in Fallout 76? Hardly.


Fallout 76: Map locations of Wood Piles

Posted in advice, howto, video game by commorancy on January 18, 2023

Fallout 76_20230118115904

Many players are wondering where to find Wood Resource Piles (aka Wood Piles) throughout Fallout 76. While there are websites showing map locations leading the way to gold, concrete, steel, lead, waste oil and acid deposits, none yet show where to find wood resource piles for your camp. Yes, there are a number of these wood piles in Appalachia. Let’s explore.

Wood Pile Sites

Starter List
Wood Pile Update #1
Wood Pile Update #2
Wood Pile Update #3
Wood Pile Update #4
Legacy Removals and Wood Piles Update #5

The links above lead to the lists of wood piles that have been found. Unfortunately, as many others have discovered, there are no maps via Google showing where to find these resources easily. Well, now there are. The research for this article was time consuming, as it required searching all over the map for these wood piles. That’s where this article comes in. I’ve done this work for you, and the article is now complete.

For ease of writing, I will abbreviate C.A.M.P. to simply the word ‘camp’ except when I’m referring to the device itself. There are a number of workshops that have wood resource piles; this article does not include those. If you wish to know which workshops have wood piles, hover over each workshop icon on the Fallout 76 map and the game will display the resources available there.

Wood Resource Deposits

What are these deposits and why are they special? Like iron deposits that produce steel scrap and crystal deposits that produce crystal scrap, wood deposits produce wood scrap when equipped with a proper resource extractor in your camp. Some of these deposits exist inside of workshops, but you cannot build a camp in a workshop. Many players want to build their camp on top of a wood resource deposit so they can extract wood scraps at their camp.

Yes, these deposits do exist separately throughout the Forest and even into the Cranberry Bog. Wherever there are forests and wood, these wood pile deposits exist. I’ve also discovered while scouting that these wood deposits tend to sit near, but outside of, workshops and usually aren’t far off a road. Though, they don’t always appear near workshops and aren’t always near a road. These wood piles can be claimed using your C.A.M.P. device and used within your own camp with a wood resource extractor.

Wood resource piles have a look and shape distinct from the regular logs of wood in the landscape. Here’s an example:

Fallout 76_20230118003357

Note that the cones on this pile are only for this specific pile. All other wood piles I’ve found do not have these orange cones. Additionally, note the four posts that constrain the wood pile. These wood resource piles always look like this (with the exception of the cones).

Fallout 76_20230121213010

If you see one of these wood piles in the wild (and not located in a workshop), you can use it in your camp to extract wood by placing down a wood resource extractor. The extractor will produce wood scraps for you. The above is how a wood pile looks with a resource extractor placed. Resource extractors require power to operate, hence the two solar panels powering it. While you can use any generator of sufficient power, I prefer two solar panels, which can be placed right on top.

Settlers and Wood

Since the introduction of Wastelanders, the game offers random NPC settlers living in the wasteland. These settlers can sometimes point the way to wood pile resources. If you hear the sound of wood being chopped, there’s a high probability that locating the sound will lead you to a settler and a wood pile. Settlers don’t spawn all of the time, but they do the vast majority of the time; when they don’t, some wood piles must be found by stumbling onto them.

Again, these wood pile deposits are available to use as resources in your camp, just like ground copper, silver or iron deposits.

Wood Resource Deposit Locations

Let’s get right to the heart of this article and the reason why you are here. These are all of the wood pile resources I’ve found. Every single resource I’ve scouted is available for use in a camp, allowing for a wood extractor resource to be placed. The vast majority of these piles are located in The Forest. However, a few do exist in the Cranberry Bog and other locations around the map. As I find more, this article will be updated.

I’ve also tested all of these locations to ensure that a C.A.M.P. device can be placed which will include the wood pile. Though, placement on some may not work if placed directly in front of the pile. These piles may require you to walk around a little to find a suitable location where the device turns green and allows placement while still including the wood pile.

Let’s start with this first set of wood pile locations …


Location #1 — Gilman Lumber Mill (Forest)

Fallout 76_20230117222022

This wood resource deposit is located near the Gilman Lumber Mill just below Vault 76. The arrow marker points to the spot. This location actually contains two wood deposits. Unfortunately, the deposit closest to Gilman (next to a fence) cannot be claimed for use in a camp. The second one, which is farther away and not sitting against a fence (see the arrow marker), is usable in your camp. It looks like so…

Fallout 76_20230117220452

The wood pile below, unfortunately, is NOT usable in a camp. It is shown here to prevent confusion with the pile above, which can be used. This pile, placed next to the chain link fence, cannot be used as a camp resource, but it can still be harvested for wood by your character. Yes, it would be nice to have two wood extractors in the same camp, but alas it’s not to be. Thanks, Bethesda.

Fallout 76_20230117220259


Location #2 — Back end of Wixon Homestead (Forest)

Fallout 76_20230118115954

This wood deposit exists not far from the Wayward, but sits at the very back end of the Wixon Homestead’s farmland, not far off the nearby road leading to the Wayward.

Fallout 76_20230118115756


Location #3 — Hunter’s Ridge (Forest)

Fallout 76_20230118121251

This wood deposit is near Hunter’s Ridge along the road where the marker shows. This pile has two orange cones.

Fallout 76_20230118003357


Location #4 — Pylon V13 & Drop Site C2 (Cranberry Bog)

Fallout 76_20230117223713

This wood deposit is somewhat near the Abandoned Bog Town workshop, but is not part of it. It’s fairly far off of the road and you can hear a settler chopping wood all the way from the road if they are there.

Fallout 76_20230117223659


Location #5 — North of Twin Pines Cabin (Forest)

Fallout 76_20230118013227

Near where the road forks into two, you’ll find a wood deposit which is not far from Twin Pines Cabin (the Blood Eagle symbol on the map).

Fallout 76_20230118173436


Location #6 — Deathclaw Island (Forest)

Fallout 76_20230118005122

Slightly north and just up the road from the spawn point for Deathclaw Island (the pine tree marker on the map), here’s another wood resource which can be used at your camp. See the arrow marker. The wood pile looks like this.

Fallout 76_20230118004815

Note that the building seen in the background is the small red shed which is part of the Tyler County Dirt Track workshop. However, this wood pile is not part of that workshop and can be used in your camp.


Location #7 —  Point Pleasant (Forest)

Fallout 76_20230118151516

This wood pile is very close to Point Pleasant. However, the spawn point for Point Pleasant leads you to the south entrance. This pile is near the north entrance of the city with no fast travel point close. You’ll want to fast travel to Black Mountain Ordnance Works and walk down the road towards the north entrance of Point Pleasant. Immediately after you pass a large red hauler truck, it’ll be on your left. Be careful of a possible Blood Eagle camp nearby.

Fallout 76_20230118152024


Location #8 — Silva Homestead (Forest)

Fallout 76_20230118154041

This wood pile is located a little bit south of Silva Homestead, a little bit off of the road, near and somewhat behind a red tractor. This location is a little bit tricky for placing the C.A.M.P. device. However, if you are creative with your placement, you might be able to include both water and wood as resources in your camp. Note that you only need to cover about half of a ground resource with the green perimeter circle for that resource to be usable. Placement of water resources is a bit more finicky.

Fallout 76_20230118153928


Location #9 — Billings Homestead (Forest)

Fallout 76_20230118021809

This wood pile is located just up the road a little from the Silva Homestead pile. This pile is close to, but not part of the workshop. Like Silva Homestead, this one also requires creative placement of the C.A.M.P. device to be able to use this pile as a resource.

Fallout 76_20230118021743

See the placement of the C.A.M.P. device below. Any closer to the pile and it turns red due to the workshop proximity.

Fallout 76_20230118021908


Location #10 — Grafton Steel (Toxic Valley)

Fallout 76_20230118174949

This wood pile is near Grafton Steel, but is on the north side. Similar to Point Pleasant, the fast travel point for Grafton Steel is on the south side. You’ll need to trek up the road to the point where the marker is. It’s just a little bit off the road, but you can still see the pile from the road.

Fallout 76_20230118174911


Location #11 — Near Black Mountain Ordnance Works (Forest)

Fallout 76_20230118172146

From the fast travel point at Black Mountain, travel down the road to the point you see. There’s a small gravel road that leads into an unmarked junkyard which looks like this…

Fallout 76_20230118172246

At the back of this makeshift and unmarked junkyard resides a wood pile…

Fallout 76_20230118172111

I didn’t scout this area carefully to see if there were any other resources present, such as a junk resource. You’ll need to visit to determine this. It looks like there might be a junk resource available off to the right as shown in the wider picture of this makeshift junkyard.


Location #12 — Alpine River Cabins (Forest)

Fallout 76_20230118171534

Just above Alpine River Cabins there’s another wood pile. Seems like there’s a lot of these in the Forest area. Makes sense, though. It looks like this…

Fallout 76_20230118171452


New Piles Update #1

Location #13 — Mosstown (Mire)

Fallout 76_20230119132434

Very near Mosstown, but just outside of it is a wood resource pile. This resource pile is usable in your camp even though it’s very close to Mosstown.

Fallout 76_20230119132403


Location #14 — WV Lumber Co. (Forest)

Fallout 76_20230119101100

This wood pile is so close to the water line, you can likely claim both water and this wood pile in your camp.

Fallout 76_20230119101036


Location #15 — Groves Family Cabin (Forest)

Fallout 76_20230119102900

This one is closer to Groves Family Cabin than Darling Sister’s Lab. The trouble with this map image is that when the Groves marker is selected, the text obscures the arrow. I selected the Darling Sister’s Lab marker to give you a better idea of the map location.

Fallout 76_20230119102744


Location #16 — Miner’s Monument (Forest)

Fallout 76_20230119120615

This wood pile is just across from Miner’s Monument.

Fallout 76_20230119120546


Location #17 — Southern Belle Motel (Mire)

Fallout 76_20230120011217

Next to the Southern Belle Motel is, you guessed it, a wood pile ripe for claiming in a camp. This one is directly next to water. With this one, you can definitely claim both water and the wood pile in your camp. You can probably even put up a bunch of water purifiers along side your wood extractor.

Fallout 76_20230120011106


New Piles Update #2 — Cranberry Bog

There are apparently a number of wood piles strewn around the Cranberry Bog, but not close to roads. Some of these wood piles reside fairly close to various Firebase sites, but not always. Here are the four newest wood piles I’ve found. Thanks go to a game friend who knew about two of these piles and helped me locate them with markers near the sites. That also spurred me to traipse around the Cranberry Bog looking for the two others included.

This player also told me he was unable to place a wood extractor at the wood pile located near Drop Site V9 when he had his base there. I have tested this location by placing an extractor and have experienced no difficulties (see below). Please let me know in the comments if you experience any problems with any of these sites listed.


Location #18 — The Thorn (Cranberry Bog)

Fallout 76_20230120233620

Not far from The Thorn (see marker) is another wood pile. A settler may or may not be there.

Fallout 76_20230120233610


Location #19 — Bootlegger’s Shack (Cranberry Bog)

Fallout 76_20230120232224

And another near a grove of Sundew Trees.

Fallout 76_20230120232210


Location #20 — Drop Site V9 (Cranberry Bog)

Fallout 76_20230120225635

A small trek north of Drop Site V9, you’ll find another wood pile. This one is far enough north that it shouldn’t be inside of a blast zone very often, if ever… unless it’s a newbie player who doesn’t know exactly where to nuke.

Fallout 76_20230120225552

This is the wood pile where, according to the player who previously had a camp here, an extractor wouldn’t place. I have tested this site and was able to place a wood resource extractor without any difficulties, like so…

Fallout 76_20230121211241


Location #21 — Firebase LT (Cranberry Bog)

Fallout 76_20230120225217

Halfway between Firebase LT and that fissure site, you’ll find another wood pile. It looks like so…

Fallout 76_20230120225148


New Piles Update #3

Location #22 — Sunrise Field #1 (Cranberry Bog)

Fallout 76_20230122162448

This wood pile is the first of three wood piles located near Sunrise Field. You’ll need to take a close look at each marker location and correlate it with each image. Note that one location that would seem natural to have a wood pile instead has a uranium deposit (see below). The third wood pile is farther away and in a different area than one might expect.

Fallout 76_20230122162427


Location #23 — Sunrise Field #2 (Cranberry Bog)

Fallout 76_20230122162629

This is the second wood pile located near Sunrise Field. It exists next to this small red shack.

Fallout 76_20230122162609


Location #24 — Sunrise Field #3 (Cranberry Bog)

Fallout 76_20230122163821

This third wood pile is located halfway between Sunrise Field and Sparse Sundew Grove. I’m connecting this one to Sunrise Field due to its close proximity.

Fallout 76_20230122163757


Sunrise Field — Uranium Deposit (Cranberry Bog)

For completeness, I roamed to an area near Sunrise Field (to the right and below) where I thought another wood pile might be located. Instead of a wood pile, I found a uranium deposit. Here’s its location in case you’re curious.

Fallout 76_20230122163015

This uranium deposit is included here to keep you from wasting time roaming over to this area thinking there might be another wood pile. There isn’t. It’s a uranium deposit.



New Pile Update #4

For this update, I’ve finally found the location that I remember seeing months back, and it’s actually in the Savage Divide! I originally thought this wood pile site was much closer to The Whitespring than it is. It’s actually very close to the Garrahan area of the Savage Divide, right where it meets the Ash Heap. These wood piles, though, are considered to be in the Savage Divide.


Location #25 — Bastion Park (Savage Divide)

From Bastion Park, travel down the road and you’ll find this wood pile just off of the road near a semi truck with logs on the back.

Fallout 76_20230125024601

At this location, you’ll see this semi truck with logs on the back right next to the wood pile.

Fallout 76_20230125025004


Location #26 — Monongah Power SS MZ-03 (Savage Divide)

Fallout 76_20230125025344

From site #25, trek just a little ways down the road and you’ll find this wood pile which is close to the substation fast travel point. This wood pile is the pile I thought was very close to The Whitespring. Instead, it’s much farther down near Monongah Power Substation MZ-03 and not far from the Garrahan Mining HQ. I was also surprised to find that there are actually a number of wood piles in close proximity in this area.

Fallout 76_20230125025321


Location #27 — Garrahan Mining HQ (Savage Divide)

Fallout 76_20230125030307

Even further down this same road near the Garrahan Mining Headquarters, you’ll find another wood pile to use. It looks like so…

Fallout 76_20230125030250

This wood pile is located near a small unmarked village of houses.


Location #28 — Braxson’s Quality Medical Supplies (Mire)

Fallout 76_20230124193755

This one is in The Mire. From Braxson’s Quality Medical Supplies (the icon looks like a factory), trek to the below location and you’ll find this wood pile…

Fallout 76_20230124193651


Location #29 — Crimson Prospect (Cranberry Bog)

Fallout 76_20230124190232

We’re back in the Cranberry Bog for yet another wood pile. Halfway between Crimson Prospect and the Ranger District Office, you’ll find another wood pile ready and waiting for a camp to be placed.

Fallout 76_20230124190150


Legacy Removals and Wood Piles

As a result of Tuesday, January 24th’s removal of Legacy weapons (aka Black Tuesday), I have decided to stop searching for more wood piles. There are now 29 wood piles in this list, more than enough to satisfy anyone who wishes to base a camp around one. I found none in the Ash Heap, only one in Toxic Valley and only a few in the Savage Divide, so if you want your camp near a wood pile in a specific area not listed, you’ll need to search those regions yourself.

If you simply want a wood pile to base your camp around, there are plenty of sites listed in this article from which to choose. If you specifically want one in a region where none is listed above, I’ll leave it to you to find one nearby.

My reasoning for closing this article at this time is that Fallout 76’s player base is actively under siege by Bethesda, and I foresee many players leaving the game over the next 30-90 days as a result. Without Legacy weapons, many will find the game “boring” and “no longer fun to play.” That reduction leaves fewer and fewer players on all platforms, which in turn means information articles like this one will see fewer and fewer searches from a dwindling player base.

With the 29 sites already discovered above in combination with the likely significant reduction in player base, there’s no reason to continue searching. If Bethesda changes their stance on legacies or finds a way to entice a whole lot more players back into the game, then I may revive this article and continue to update it. As of now, this article is officially closed with the 29 sites listed. I am no longer actively searching out any more wood piles.

Ending Notes…

I haven’t tested placing extractors down on each of these wood piles in this list. Some of these locations may or may not allow for placement of an extractor. You’ll need to test this. The only test I have performed is to see whether it’s possible to place a C.A.M.P. device at or near the wood pile. Every wood pile in this list turns green when testing a C.A.M.P. device placement, but that test doesn’t guarantee that a wood extractor will place once a camp is established.

For map reference, the arrow marker not only denotes the location of the wood pile, the arrow’s direction also points directly towards that wood pile.

If you happen to locate a wood pile not listed, or find that one of these wood piles doesn’t work for extraction, please leave a comment below. Even though this article is closed for updates, I am still actively accepting comments from anyone who finds a wood pile missing from the list above, and you will be duly credited for the find.

Enjoy and happy camping!


Tips: Cooking with an Air Fryer

Posted in air fryer, baking, cooking, tips by commorancy on January 18, 2023

air-fryer-2

You recently got a new air fryer as a holiday gift and now you’re wondering, “What can I do with this?” Or, maybe you have one sitting on a shelf that’s been there for months? Wonder no more. Let’s explore various cooking tips for that air fryer.

What Exactly is an Air Fryer?

Simply put, it’s a forced air broiler. It’s like a convection oven, but the forced air is much, much stronger. Not all air fryers are necessarily the same. While many offer touch controls, some offer only simple timer knobs (see Bella air fryer just below). Some also heat from the top, while some heat from the side. All cook pretty much in the same way. How does it work?

These small cooking appliances are designed with a fan which forces high speed air through heated coils, typically vertically down onto the food; some force it across the food horizontally. That’s pretty much it in a nutshell. Even though the concept is simple, the cooking speed is fairly amazing. However, there are some cautions to go with that speed.

air-fryer-knobs

Cooking times are dramatically reduced as a result of this forced air cooking method. Because of the high speed air flow, many foods can be cooked in about the same time as using a microwave. Unlike a microwave, an air fryer makes and keeps foods crispy and brown rather than mushy or rubbery.

Here are some cautions. Because the air velocity is quite high, an air fryer is also quite drying for all food. Certain foods can dry out if precautions aren’t taken, such as wrapping the food in foil to keep the moisture in. Wrapping with foil, however, doesn’t allow the food to crisp. This means you’ll want to keep the food wrapped for most of a long cooking time, then unwrap it for the last 8-10 minutes to crisp it up.

Cooking Times

Air fryer cooking times are dramatically reduced from a standard oven. It’s way faster than even a convection oven. A pizza might cook in 18 minutes in a regular oven, but may be done in 8-10 minutes in an air fryer. Speaking of…


Pizza

Cooking pizza in an air fryer might seem natural, but it’s not. If you intend to cook pizza in an air fryer, you’ll need to know how to do it correctly or it’ll burn and get overly dry.

When cooking pizza in a conventional oven, 400ºF / 204ºC means exactly that. In an air fryer, however, that same setting cooks quite a bit hotter because of the forced air. This means you have to reduce the heat when cooking in an air fryer by at least 50ºF (roughly 28ºC) to compensate, maybe more. Otherwise, your food will come out blackened and hard.
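A quick arithmetic note on these offsets: a temperature *difference* converts between scales as ΔC = ΔF × 5/9 — the 32-degree offset only applies to absolute temperatures. A minimal sketch (the function name is my own, purely illustrative):

```python
def delta_f_to_c(delta_f):
    """Convert a temperature *difference* from Fahrenheit to Celsius.

    Unlike an absolute temperature, a difference has no 32-degree offset:
    a change of delta_f degrees F equals delta_f * 5/9 degrees C.
    """
    return delta_f * 5 / 9

# A 50 degree F reduction works out to roughly a 28 degree C reduction.
print(round(delta_f_to_c(50)))  # 28
```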

Pizza is no exception. When cooking pizza in an air fryer, cook no higher than 280-300ºF / 138-149ºC and monitor it closely. Cheese burns easily and quickly in an air fryer, within 6-8 minutes. Pizza can be tricky to cook this way; if you really want the best pizza, I always suggest using a real oven. For reheating pizza, though, an air fryer is perfect when set to 200-250ºF / 93-121ºC.

If you like and prefer a charred, blackened taste on pizza, then an air fryer is perfect for getting that result. I prefer my pizza cheese melted, a tiny bit crispy, but mostly still stringy and fresh. Getting the latter result in an air fryer requires careful lower temperature cooking.


Vegetables

Vegetables can be cooked in an air fryer, but I’d suggest wrapping them in foil with a tablespoon of water to keep them moist and steamy. If you want more of a grilled texture, steam them in foil for about 8-10 minutes, then unwrap and cook for the remaining 3-5 minutes at 380-400ºF / 193-204ºC to crisp them up.


Hamburgers

A hamburger patty is easily cooked in an air fryer. However, air fryers are messy beasts and need frequent cleaning. With foods that tend to produce spatter, like beef, poultry and pork, be sure to clean the interior of your air fryer after cooking.

Hamburger patties cook in about 8-10 minutes at 400ºF / 204ºC. Though, you’ll need to flip the food if you cook without foil. If you’re cooking in foil, there’s no need to flip as the steam will cook both sides evenly. I recommend steaming the hamburger patty for half of the cooking time, then unwrap and cook the remaining time open, being sure to flip it half way through the open cooking time.

Hamburger patties can be placed into the air fryer completely frozen and will still be cooked in that 8 to 10 minutes. Fresh, thawed hamburger patties will cook slightly faster, so check them more frequently.

Hot Dogs

You don’t really need to cook hot dogs in an air fryer. Instead, you’ll simply want to reheat them. Many air fryers offer a reheat setting. Use only the reheat setting for hot dogs. In about 5 minutes, you’ll have hot dogs cooked to perfection. For air fryers with knob settings, reheat is 6 minutes at 200-250ºF / 93-121ºC.

If you choose the air fry option, which typically runs at 400ºF / 204ºC for about 15 minutes, you’re sure to burn the hot dogs, along with most anything else except french fries and other potato side dishes. If you do choose it, check your food often and shake the basket every 3-5 minutes.

Hot dogs cooked at 400ºF / 204ºC will begin to blacken within about 2-3 minutes. If you like your hot dogs blackened, then this is the option to choose. If you prefer your hot dogs warmed with a slightly crispy bite, then reheat is the choice for cooking.

Cooking Side by Side

It’s easy to cook foods side by side in an air fryer basket. For example, you can place hot dogs and fries into the basket together and have a full meal ready to go at the end of the cooking time. However, note that fries take longer to cook than hot dogs. As soon as the hot dogs are warmed, remove them, then raise the temperature to cook the fries at 400ºF / 204ºC for the remainder of the time, around 6-8 minutes.

Alternatively, cook the fries until there’s about 3 minutes left, then lower the temperature to 200-250ºF / 93-121ºC to reheat the hot dogs for 5 minutes, which will also keep the fries hot.

Tortilla Chips / Dehydrating

Many air fryers offer a dehydrate setting. This cooking method cuts the fan speed down dramatically and runs at a temperature around 100-150ºF / 38-66ºC. This is perfect for drying foods, such as making your own baked corn tortilla chips.

For baked tortilla chips, cut fresh corn tortillas into quarters. Place the quarters flat into the basket. The chips can be overlapping without issues. I typically cut up about 4-5 tortillas and layer them on the bottom of the cooking tray. Then, set the cooking method to dehydrate for 1 hour and 30 minutes.

These chips come out quite crispy, but not the same as you can buy at the store. If you’re wanting to make nachos, it’s a reasonably quick way to make chips without running to the store.

One trick here to soften the chips a bit is to place the chips in a paper bag and let them sit overnight. The next day, the chips will have a softer crunch and be more like chips you can buy in the store.

However, don’t be fooled into thinking that air frying these tortillas at 400ºF / 204ºC will do you any favors. It won’t. The chips will blacken and taste burnt. You don’t want this for tortilla chips. If you want to heat the chips, use the dehydrate method described above to keep them crispy, yet looking and tasting like tortilla chips. You can use reheat on the chips for 1-3 minutes to warm them up, too.

Frozen Fried Foods

Here’s where the air fryer shines. For frozen foods like Totino’s Pizza Rolls, corn dogs, mini tacos, taquitos, fried chicken (fresh or frozen) or french fries, the air fry setting works perfectly to reheat and cook them.

All of the above foods cook using the air fry option. The air fry option typically runs 380-400ºF / 193-204ºC for about 12 minutes, shaking the basket several times while cooking to even out the cooking. No need to thaw, just place them straight into the basket frozen and 12 minutes later you’ll have piping hot and crispy foods. Some of the above foods may cook in around 8 minutes. Always begin checking your foods for doneness around 8 minutes while shaking the basket.

The air fry setting is perfect to cook fried chicken or other fried and battered foods to perfection. That’s why they call it an air fryer.

Cookies and Cakes

While it is possible to use an air fryer to bake such foods, I don’t recommend it. These baked food types don’t bake well in an air fryer. There are three reasons for this:

  1. The forced air ensures the top of the baked good is overcooked and dry
  2. The forced air flattens cookies out
  3. The forced heat overcooks the top of cookies while leaving the underside undercooked (same for cake)

Instead, for a more even bake, I recommend using a regular or toaster oven for baking cookies, cakes and brownies. If you want your cookies a little more crispy, you can throw them into the air fryer for 1-3 minutes at around 350ºF / 177ºC after they’ve been baked in a regular oven.

Baked Pasta

If you’ve ever bought a baked personal pan pasta from Pizza Hut or any other Italian restaurant, then you may be wondering how to get that crispy cheese texture on the top of hot steaming pasta. The air fryer is perfect for making this. However, you’ll need to invest in some cooking pans that fit into your air fryer.

Up until now, I’ve not discussed the size of the baskets on an air fryer. Here’s where you’ll need to get your tape measure out and determine the dimensions of your air fryer basket. Mine is about 8″ across. With that sizing in hand, head over to Amazon and search for air fryer accessories that will fit inside your basket. It’s possible your air fryer came with small pans that fit inside of your basket. Mine did not. You’ll want to obtain either a square or round baking pan that fits inside your air fryer basket.

For baked pasta:

  • Layer your cooked pasta on the bottom of a round or square pan (not in the basket directly)
  • Layer mozzarella on top of the pasta
  • Layer cooked pasta sauce on top of the cheese
  • Place various toppings like beef, pork, veggies and pepperoni
  • Top with a layer of cheese

Cook in the air fryer at 350ºF / 177ºC for about 6-8 minutes, checking for cheese browning at around the 5 minute mark.

Note that if you’re using a basket type fryer, be sure to buy a pan accessory kit which also includes a pan grabber. This grabber grips the lip of the pan, allowing you to lift the pan out of the basket without using your hands and without spilling. You can use oven mitts if you prefer, but a pan grabber is much better at preventing spills.

Cooking Other Foods?

There are plenty of other foods you can try cooking in an air fryer. For example, if you buy a frozen food from the store and there are no air fryer cooking instructions on the package, halving the regular-oven cooking time is usually about right for an air fryer. You may also want to reduce the cooking temperature by at least 30-50ºF (roughly 17-28ºC) to avoid overcooking or burning.
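This rule of thumb can be sketched as a small helper. The function name and the 40ºF midpoint offset are illustrative choices of mine, not anything official from an appliance maker:

```python
def oven_to_air_fryer(oven_temp_f, oven_minutes):
    """Estimate air fryer settings from regular-oven package instructions.

    Rule of thumb: halve the cooking time and reduce the temperature by
    roughly 30-50 degrees F; 40 F is used here as a midpoint.
    """
    return oven_temp_f - 40, oven_minutes / 2

# A package that says 425 F for 24 minutes in a regular oven:
temp, minutes = oven_to_air_fryer(425, 24)
print(temp, minutes)  # 385 12.0
```

As with any estimate, start checking the food a few minutes early rather than trusting the math alone.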

As I said above, if it’s cookies, cakes or other baked goods, you should opt for baking in a regular or toaster oven. Even such foods as pot pies or sweet pies may cook better in a conventional oven. Some foods are easy to over bake in an air fryer.

When cooking meats, many cooks want to save the drippings to make gravy. If so, don’t use an air fryer. Air fryers force-evaporate almost all liquids produced by foods, which means no gravy. If you want gravy, braise your chicken, pork or beef in a regular oven to retain those juices.

If you don’t care about gravy, then cooking your food in an air fryer is an option. Just don’t be fooled into thinking you can make gravy from cooking meats in an air fryer. It doesn’t work.

Pre-heating an Air Fryer

Some air fryers have a preheat setting, but it’s really unnecessary. You can put your food straight into the basket and begin cooking instantly. Skipping the preheat might add a minute to the cook time, but preheating itself takes a minute or two, so you rarely gain anything. The main exception is when you want the grill surface already hot so it adds grill marks to your food.

Basket vs Trays vs Cleaning

Some air fryers are more like toaster ovens with trays. If you have this kind of air fryer, you’ll need to use an oven mitt to pull out the trays to shake them. If you have the basket type fryer with a handle, these are more convenient because the handle stays completely cool on the basket. I recommend the basket variety because it’s much easier to clean and the handle remains cool.

If you have the basket variety of air fryer, there are lots of “keep it clean” options, including basket inserts made of paper, parchment and even silicone. These inserts allow for cooking and removal to keep your basket clean. There are also pans, as mentioned above, which can be used to help bake foods while keeping the interior clean.

Still, spatter from foods can get into the heating elements and surrounding areas. You’ll need to periodically wipe and clean the interior of the air fryer after it has completely cooled. You may need to use a little Easy Off oven cleaner to fully clean this spatter. Be sure to clean your basket every so often to make sure you’ve cleaned off food smells from previous uses.

ninja-air-fryer

Final note. Some air fryer images show a basket filled to capacity, like this Ninja to the left. Don’t do this. Fill your basket at most half to three-quarters full, never entirely to the top. The reason? The topmost food sits too close to the heating element and will burn. Such images are strictly for marketing purposes, not functionality, so don’t replicate them when cooking.

Happy Air Frying!

