SAN FRANCISCO (December 14, 2018) – Littler, the world’s largest labor and employment firm representing management, recently assembled 40 world-class thought leaders and experts in science, government, academia, law, ethics and business to discuss the formation and challenges of the future workforce. The roundtable was sponsored by Littler’s Robotics, Artificial Intelligence and Automation practice group, co-chaired by Shareholders Garry Mathiason and Natalie Pierce, and by the firm’s Workplace Policy Institute (WPI).
The session analyzed some of the most important issues raised by robotics and AI as they redefine the 21st century workplace, workforce and work itself. The participants covered key employment and labor law issues, proposed compliance solutions and the prospect of future legislation, regulations, court decisions and enforcement, followed by the impact of global competition and ethical challenges. This followed the morning session, led by Littler’s WPI, which focused on the displacement of workers by automation technologies, as well as the enormous demand for workers in new occupations that technological change will generate.
“To have representatives from some of the world’s leading companies and government agencies come together and have productive discussions on these issues is extraordinary. Employers, government and other organizations are struggling with privacy and security in the age of AI, which is being adopted at an accelerated pace and on an unprecedented scale,” said Mathiason. “Furthermore, current workforce laws and regulations were almost entirely enacted for governance of a workplace now being redefined, transformed and globalized by advanced technologies. It’s critical to address the disruptive effect these innovations are having on the future workforce.”
The full report summarizing the roundtable can be found here. Following is a summary of the roundtable participants’ perspectives and points of consensus.
Redefining and Maintaining Global Workforce Privacy and Cybersecurity
In 2018, data privacy and digital identification moved squarely to the center of employers’ regulatory radar. As employers find more uses for (and gain greater access to) employee data, regulators worldwide have, as predicted, focused growing attention on companies’ ability to collect, maintain and use that data. The EU’s General Data Protection Regulation is the most visible and expansive regulatory effort in this space, but legislatures and regulatory agencies in many U.S. states are actively considering enhanced data privacy laws.
Given the increasing attention that data privacy and cybersecurity are receiving from governments and the media, participants agreed that companies must continue to monitor regulatory developments closely to ensure their collection and retention of employee data remain compliant with the law, invest in classroom and online cybersecurity training and apprenticeships, and support focused vocational and community college cybersecurity programs.
The Challenge of Transparency and Explainability
AI applications that rely on complex machine learning algorithms, including applications built on multilayered neural networks whose inner workings are an indecipherable “black box,” have been proliferating rapidly. The development of such algorithmic systems is spurring growing calls for technology companies to build transparency, or explainability, into the systems they design. Employers that adopt such technologies will increasingly face similar pressures, including finding themselves in litigation defending their AI systems alongside the outside developers who built them.
Yet the very nature of deep learning and other powerful modern forms of AI makes true explainability, much less full transparency, difficult if not impossible to achieve. Several participants commented that a blanket transparency requirement would severely limit innovation and place U.S. and European AI development at a competitive disadvantage. Participants strongly agreed, however, that ethical standards must be applied to AI decision making and to algorithmic biases that violate legal requirements or core values.
Developing Needed Regulations and Minimizing Unintended Harm and Disincentives to Innovation
While the absence of regulatory activity is generally seen as a boon to innovation, that advantage dissipates when new technologies must comply with preexisting legal frameworks that were not designed to anticipate their complexities. Some of this regulatory inaction can be attributed to regulators simply not being conversant with the benefits and risk profiles of new technologies.
Participants expressed concern that if regulators do not take a more proactive and preventative approach, either legislatures or regulators themselves will act only after a catastrophic event. At that point, governmental institutions may feel political pressure to enact strict, innovation-stifling regulations rather than regulations that account for the benefits of AI and robotics as well as the risks. With America’s economic rivals increasingly prioritizing the development and rapid deployment of new AI applications, however, the need to leverage the potential of AI for the greater good of America’s workers and companies could represent a rare opportunity for bipartisan action.
Building a Practical and Ethical Roadmap for AI and Robotics Development
The Congressional Research Service identified China as a “leading competitor” in using AI to develop military applications, and Russian President Vladimir Putin has announced that the nation which leads in AI will “be the ruler of the world.” Most participants agreed that a titanic global competition has formed regarding AI and robotic research, development and deployment, both civilian and military.
Three key takeaways emerged from the discussion: 1) The U.S., the EU and their allies hold the advantage in developing AI and robotics, especially in terms of talent, but the U.S. lacks a clear awareness of the importance of this competition, has no national priority or spending initiative comparable to those of China and Russia, and its opposition to immigration and failure to produce adequate numbers of STEM graduates threaten to erode that talent advantage; 2) Western privacy concerns, especially around biometrics, have slowed development relative to China in particular; however, privacy and individualism are core U.S. values and would not prevent the U.S. from remaining competitive if investment in AI and disruptive technologies became a national priority; and 3) U.S. military leaders are aware of this challenge and are responding without compromising deeply held ethical mandates.
About Littler
Littler is the largest global employment and labor law practice, with more than 1,500 attorneys in over 80 offices worldwide. Littler represents management in all aspects of employment and labor law and serves as a single-source solution provider to the global employer community. Consistently recognized in the industry as a leading and innovative law practice, Littler has been litigating, mediating and negotiating some of the most influential employment law cases and labor contracts on record for over 75 years. Littler Global is the collective trade name for an international legal practice, the practicing member entities of which are separate and distinct professional firms.