Random Thoughts – Randocity!

Rant: Google Ethics Board?

Posted in botch, business, california by commorancy on March 28, 2019

Google has chosen to put together an “Ethics Board” to evaluate the “Morality” of Google’s uses of AI in its products. Will this be enough? Do we trust the people chosen for this task? Personally, I don’t. This one is short and sweet. Let’s explore.

Ethics Board

While it’s commendable that Google sees the need for such a board (particularly after its privacy-encroaching devices), the difficulty is in knowing whether this move is simply window dressing for Google or whether this board actually has teeth. My guess is that this board is simply there to take money from Google and place it into each Ethics Board Member’s pocket, while Google is still allowed to get away with its prying privacy-encroaching technologies, more now than ever. This is actually a typical sly corporate tactic regularly used in California to “look good” (specifically to regulators) rather than to actually perform.

The reality is, pulling random people from seemingly trustworthy positions and putting them on a board is completely questionable. I don’t know any of the people chosen, so how can I possibly trust any of them to make the right decision for Google, let alone the consumer? Additionally, are these people versed enough in Google’s technology initiatives to even have a practical say in the matter? Likely not. Will they even be given access to Google’s upcoming technologies? Likely not.

Window Dressing

Unfortunately, many companies do see the need for such oversight, but they set it all up in the wrong way and for all the wrong reasons. This is a prime example. Hiring random folks from colleges to “oversee” Google is akin to McDonald’s hiring random folks from non-food industries to oversee its food quality. Seriously, what are these people really going to do?

I can’t even imagine that this board will have any teeth to actually steer Google away from its privacy-encroaching unsavory uses of its always-on listening devices. Even Amazon has not put together such a “committee”. The only thing this board will likely end up being is a patsy for when Google is found to have violated its own business ethics. They can then look to this board and say, “Well, you approved it” and then point the finger at the board for failing to “foresee” a problem. It’s a way to make shit run downhill and land on these unsuspecting folks on this board.

If I were considered for this board, I’d be highly skeptical of taking that position. It’s simply going to be a shitstorm for that board after Google does something questionable… and believe me, Google will.

The road to hell is paved with good intentions

This saying is very apt in this situation. I can’t possibly see anything good coming from the decision to put together this board internally. The only way to possibly oversee a company like Google is from without, not within. There’s no way Google can watch itself ethically. If you’re paying people to watch your business ethics a**, there’s already an ethical dilemma. Because they’re on your payroll, they can’t exactly be ethically impartial. If some board member actually does try to “steer” Google away from some ethical problem, Google can simply replace the board member with someone more amenable to Google’s “new” strategy.

This is a no-win situation for Google on both ethics and privacy. The only way this works is if an oversight committee is created by the US Congress (and other governing bodies) to oversee Google, Amazon and other companies with AI offerings of this scale. Only a third-party government committee that is not on a company’s payroll can possibly (and legally) steer companies away from unethical consumer situations.

Unfortunately, the US is far too pro-business and far too anti-consumer privacy to offer up such an oversight committee. There is absolutely no way the government would put the brakes on Google or Amazon or any other company of this size even if what they are doing is ethically questionable.

Privacy Encroaching Devices

As a consumer, you need to consider long and hard about putting such devices into your home. Other than Google Chrome, I do not have or use Google devices in my home. I already know Google can’t be trusted with this data. Google is an advertising company. It is designed to advertise to you. It’s designed to take what it learns about you and then feed ads to you that “fit” with your needs. In short, it is designed to watch what you do (invade your privacy) and then tailor advertisements based on the data it learned when it eavesdropped. Google is the very opposite definition of privacy. They want to know everything about you so they can “better” target you with ads. Amazon is a much smaller-scale version of this. They only do this in relation to the Amazon.com web site.

Google has tentacles pretty much everywhere including within Chrome, Chromebooks, Google Home devices, ChromeCast and, yes, even in Android smart phones… especially in Android smart phones. The biggest problem is “Okay, Google” always on listening devices. There’s no way to know exactly what Google can listen to when it’s always listening… or exactly how that information might be used by Google.

The basic problem around this data collection is that Google stores that information about you on their servers. Servers which can be hacked. Data which can be leaked. Information that can be lost. It’s happened. It will happen again. Such an “Ethics Committee” put together by Google is, by its very design, strictly “window dressing”… and nothing more. They can’t stop leaks. They can’t stop data loss. They certainly can’t stop Google’s technology advancements.

Consumers Suffer the Consequences

Unfortunately, this means that consumers must suffer these insufferable consequences from companies like Google. The only way to steer a company like Google is through the courts, lawsuits and eventually the passing of laws. The only way to stop the likes of Google from breaching these unwritten ethical contracts is by holding Google, Amazon and others accountable to the courts of law when they break laws and/or when they go well beyond ethical boundaries. No board of ethics on Google’s dole is likely to stop that.

Having Google set up such an internal committee ultimately means, again, that this move is simply window dressing. These chosen board members, while they might have good intentions, are on the payroll of Google. This, by design, already means there’s an ethical dilemma. Taking Google’s money means you ultimately answer to Google. It also means that when something “bad” happens, that ethics board will end up being Google’s “fall guy”. So then, who watches the watcher?

There’s just no way that this situation ends well for either that ethics board or Google or ultimately, the consumer.


Rant Time: Adobe VoCo’s ethical dilemma

Posted in best practices, botch, business, california, ethics by commorancy on February 28, 2018

I have to wonder about Adobe’s business ethics at times. First, there’s Photoshop. While I can admit that photo editing has a legitimate purpose, such as correcting red eye or removing telephone lines or removing reflections of the camera man from a photo, there is the much seedier and ethically murky purpose for Photoshop. Now comes Adobe VoCo. It is a product idea that does for spoken audio what Photoshop does for images. Let’s explore this YouTube clip from 2016:

Skip to 3:18 for the meat of this video.

VoCo’s Use Cases and Ethics

Though, yes, I will concede that the demonstration above was funny and we all laughed, the demonstration has a deep-seated ethically murky undertone once the laughing stops. In fact, that’s what prompted this blog article.

Unlike Photoshop, which has actual real-world use cases (yes, other than making models thinner and glowier for the cover of Vogue), VoCo is one of those unnecessary tools that, while cool in theory, makes Adobe seem like it’s now in the business of causing world disruption instead of actually solving creative problems. After the ethical problems created by Photoshop, Adobe has to know the ethical quandary it introduces by bringing the VoCo audio editing tool to market. Adobe decided to go ahead and demo this tool anyway. So much for business ethics. Instead, Adobe should have patented and shelved this product idea and never shown it off.

There’s no effective real-world use case for this product other than making someone say things that they actually didn’t say. The only use case where this technology might even be somewhat useful, depending on output quality, is in the voice-over industry, where an actor might be unavailable at a time when a line needs to be changed to fit continuity better. The voice-over industry is the only industry where VoCo could have even the smallest glimmer of hope of a use case. This is far too tiny a niche market segment to justify introducing this tool with such public spectacle.

The only other use case would be to sample all of the audio from a particular dead actor or actress’s productions and then recreate lines of new spoken dialog based on that. Again, this is one of those entertainment areas that fits firmly into the uncanny valley, particularly if the spoken lines are attached to a CG actor. Again, this is not a substantial use case in my opinion and is most definitely creepy. It’s definitely not a big enough use case to warrant this public release spectacle. Do we really want to see Marilyn Monroe or Elvis brought back to life on the big screen using CG and VoCo dialog?

There is no other legitimate use case for this product. It’s like Adobe intentionally wants to flaunt its lack of ….

Business Ethics and Self-Editing

Businesses today have no ability to self-edit or to recognize ethical problems; that is, to stop ethically bad product ideas from making it to market. Just thinking about how this product could possibly be used, it has no legitimate use cases (other than the voice-over use case I mentioned above). However, there are perhaps thousands of illegitimate uses for this tool. Let’s list a few of them, shall we:

  • Falsifying a deposition to make the person being deposed say something they didn’t say
  • Falsifying a statement to make a person appear to confess to a crime when they didn’t actually confess
  • Falsifying a phone conversation
  • Changing any spoken words from non-incriminating to incriminating evidence

In legal circles, the use for this tool is ripe for abuse and has use cases as wide as the Grand Canyon and as deep as the Mariana Trench. In other words, while VoCo has no substantial legitimate use cases, it has thousands of illegitimate use cases. There is no way Adobe couldn’t see this. There is no way for Adobe to feign ignorance about this tool or the ethical problems it imposes if released.

Legal Evidence

Some have theorized that this tool’s output would come to be treated just as Photoshop’s is. Basically, because evidence can now be manufactured in products like VoCo, audio evidence would no longer be easily admissible. While that idea has some soundness to it, the legal system is not always technically savvy and can sometimes move at a snail’s pace. Eventually, the courts and lawyers will be on board with this ‘manufactured evidence’ sound-clip idea, but not before several someones are incriminated over manufactured evidence that isn’t caught in time.

Some have theorized that Adobe should watermark the sound clip. The difficulty with audio watermarking is that it ruins the audio. No one would buy a professional audio tool that intentionally makes the audio sound bad or introduces something that is audibly noticeable, strictly because Adobe wants to insert a watermark to legally cover their collective butts. No. No one would buy a tool that causes damage to the audio output. This means that only a silent kind of watermark could be introduced. Such a watermark would consist primarily of a tag within the saved audio clip file. Any tag introduced in a saved file can easily be stripped away by converting the audio clip to a new format or by playing the audio clip back and recording it on analog equipment. In fact, a whole industry and set of tools would likely appear to strip out any watermarks imposed by Adobe onto the saved files.
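To see why a tag-style watermark is so fragile, here is a minimal Python sketch using only the standard library. It embeds a hypothetical metadata chunk (the chunk name “meta” and its payload are purely illustrative, not anything Adobe actually uses) into a WAV file, then “converts” the file by reading back only the audio frames and writing them to a fresh WAV. The tag does not survive the round trip.

```python
import struct
import wave

# Create a short WAV file: one second of 8 kHz mono silence.
with wave.open("original.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(8000)
    w.writeframes(b"\x00\x00" * 8000)

# Embed a hypothetical watermark chunk ("meta" is an illustrative name)
# and patch the RIFF size field so parsers treat it as part of the file.
payload = b"edited-with-voco"
chunk = b"meta" + struct.pack("<I", len(payload)) + payload
with open("original.wav", "rb") as f:
    data = bytearray(f.read())
riff_size = struct.unpack_from("<I", data, 4)[0]
struct.pack_into("<I", data, 4, riff_size + len(chunk))
data += chunk
with open("tagged.wav", "wb") as f:
    f.write(bytes(data))

# "Convert" the file: read only the audio frames and write a fresh WAV.
# The wave module copies the fmt/data chunks and nothing else, so the
# watermark chunk simply disappears in the copy.
with wave.open("tagged.wav", "rb") as src, wave.open("clean.wav", "wb") as dst:
    dst.setparams(src.getparams())
    dst.writeframes(src.readframes(src.getnframes()))

with open("clean.wav", "rb") as f:
    print(b"meta" in f.read())  # False: the tag is gone
```

The same erasure happens with any container-level tag: re-encoding to MP3, trimming in another editor, or an analog re-record all discard it, which is why a metadata watermark offers no real forensic protection.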

Unless there is a substantial way to identify that the clip has been edited, and I don’t know how Adobe could even solve this problem fully, VoCo is a tool that would end up more abused than legitimately used.

Flawed Product Ideas

While this is somewhat of a cool technological advancement, it doesn’t need to exist. It doesn’t need to exist because it has basically one limited use case. I’d argue that as a production runner, you can just wait until the voice actor becomes available and ask them to re-record the lines you need instead of using a tool like this. A tool like VoCo might save you some time, but by demanding such a tool for your use, you force the rest of the world to endure the consequences of a world full of falsified evidence, evidence that could even be used against you, the audio editor. Is that the world you want to live in? No, thanks.

However, it’s clear that prototype code has been written based on the video above. This means that Adobe could release such a product into the wild in the future. Thankfully, as of this article in 2018, this product does not yet exist. Unfortunately, Adobe has already opened Pandora’s box. A working prototype means that any coder with leanings towards audio engineering could produce a similar tool and release it into the wild without Adobe’s help. Thanks, Adobe.

It is as yet unclear when or if this product could ever be released. Note that this video segment apparently showcases experimental product ideas (products that may never see the light of day) and not actual products. After all, such a legally murky product would have to clear Adobe’s legal team before release. Considering the many negative use cases for such an audio editing product and the legal liability that Adobe might endure as a result, I’d hope that Adobe’s legal team has shelved this product idea permanently.

Agree or disagree? Please leave a comment below. Also, don’t miss any new Randocity articles by subscribing to this blog via clicking the blue follow button at the top right.
