In an age when computer-generated imagery (CGI) and digital effects enable entire film genres to exist, such as Marvel's Avengers or Guardians of the Galaxy superhero franchises, audiences have no expectation that the movies they consume depict actual events or reflect reality. It is therefore reasonable to assume that the context and forum in which digital media and information are communicated, observed and consumed inform our default expectations of them. Digital media, augmented-reality social media filters, video games and salacious news stories are wildly successful and nearly omnipresent forms of entertainment. If we were to unwittingly rely on inauthentic digital media or false information in our daily lives or business operations, however, we would find it both highly problematic and devoid of entertainment value.
The authors propose a tangible perspective on AI, one in which the proliferation of deepfakes and the ability to leverage this technology may challenge standards for information dissemination, communication and our most basic assumptions about reality, where once seeing was believing. These inauthentic intrusions affect not only society at large, and our political system and its growing divisions in particular, but also spill into the workplace, forcing employers to grapple with effects that are often unavoidable. Employers will need to adjust to this new reality and understand how to minimize the potential negative impact, including by using data analytics to protect companies and their workforces from exploitative uses of false information.
Click here to read the full Littler Report.