The “Equity Act”: Navigating the Intersection of AI Innovation, Worker Welfare, and Corporate Responsibility

On July 21, 2023, the Biden-Harris administration announced new voluntary safety commitments from seven leading AI companies: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The commitments aim to ensure the safe, secure, and transparent development of AI technologies, resting on three pillars: safety, security, and trust. The companies commit to thorough security testing of their AI systems before release, to sharing information on managing AI risks (though how much, and with whom, remains unspecified), and to investing in cybersecurity and cyber resilience. These commitments include:

  • Ensuring the safety of products before their public release through internal and external security testing.
  • Building systems that prioritize security, including investing in cybersecurity and insider threat safeguards.
  • Earning the public’s trust by developing robust technical mechanisms that make clear when content is AI-generated, and by publicly reporting their AI systems’ capabilities and limitations.
  • Committing to research on the societal risks posed by AI systems, including avoiding harmful bias and discrimination and protecting privacy.
  • Developing and deploying advanced AI systems to help address society’s greatest challenges, from cancer prevention to mitigating climate change.

This initiative is part of the Biden-Harris administration’s broader commitment to ensuring the responsible development of AI and protecting Americans, and it parallels several other emerging regulatory frameworks around the world, including Canada’s AIDA (Bill C-27), the EU’s AI Act, and the UK’s proposed regulatory framework for AI.

The voluntary safeguards include testing products for security risks and using watermarks to identify AI-generated content (one way such a watermark can work is sketched below). However, these measures are only an initial step as governments around the world seek to establish legal and regulatory frameworks for AI. The Biden administration is also developing an executive order to control access to new AI programs, and to the components used to develop them, particularly by competitors such as China.
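
To make the watermarking idea concrete, one family of research proposals embeds a statistical signal in generated text: at each step the model is nudged toward a pseudo-random “green list” of tokens derived from the preceding token, and a detector later checks whether green tokens are over-represented. The sketch below is purely illustrative, assuming a toy hash-based scheme; the function names, parameters, and threshold are hypothetical, not any company’s actual implementation.

```python
import hashlib
import math

def is_green(prev_token: int, token: int, green_ratio: float = 0.5) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by its predecessor."""
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < green_ratio

def watermark_z_score(tokens: list, green_ratio: float = 0.5) -> float:
    """z-score of the observed green-token fraction against the unwatermarked
    expectation; a large positive score suggests the text carries the watermark."""
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    hits = sum(is_green(prev, tok, green_ratio) for prev, tok in pairs)
    observed = hits / len(pairs)
    return (observed - green_ratio) * math.sqrt(len(pairs)) / math.sqrt(
        green_ratio * (1 - green_ratio)
    )

# A watermarking generator would bias sampling toward each step's green list;
# a detector flags text whose z-score exceeds a chosen threshold (e.g. 4).
print(watermark_z_score([101, 7, 42, 42, 9, 300, 15]))  # near 0 for random ids
```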

Unmitigated GenAI risks could spill over into manipulation of public opinion and the distribution of disturbing, even illegal, content

The rise of generative artificial intelligence (GenAI), which is quite different in application from artificial general intelligence (AGI), is escalating concerns about the spread of misinformation and disinformation, adding a new level of complexity to the fight against disinformation because of the scale at which content can now be created. Not only has it become cheaper to create and distribute content; the University of Washington’s Kate Starbird suggests that generative AI can enhance misinformation campaigns by producing high-quality new content and imagery tailored to different audiences. The underlying goal of disinformation and propaganda is to deliberately mislead the public, offering new areas of contention in order to shape public information and public opinion.

Australia’s internet regulator has announced that search engines like Google and Bing will be required to take steps to prevent the sharing of child sexual abuse material created by artificial intelligence (AI). This comes as part of a new code drafted by the industry giants at the government’s request, which will ensure such content is not returned in search results. The “search generative experience” that companies like Google and Microsoft have been touting this year will have additional impacts on online content and on the jobs of the people who create and publish it.

Who is missing from these conversations?

The rise of artificial intelligence (AI) is triggering concerns across various sectors, prompting labour unions to prioritize AI-related issues in their negotiations. In Hollywood, both the Screen Actors Guild‐American Federation of Television and Radio Artists (SAG-AFTRA) and the Writers Guild of America (WGA) have taken significant steps to address the potential implications of AI.

The WGA went on strike in May 2023, demanding not only fair wages and better residuals but also assurances that AI would not replace screenwriters in the future. The union is not attempting to ban the technology outright; rather, it has proposed a framework that allows for the use of AI while guaranteeing that AI will not be used to create or rewrite scripts, or to lessen writers’ pay or credit for their work.

Similarly, SAG-AFTRA, representing actors and other media professionals, went on strike, citing the breakdown of negotiations around AI as a key issue. The union expressed concerns about studios scanning the likenesses of actors, including background actors, as they enter a set, and then retaining and using those scans in perpetuity. SAG-AFTRA is fighting to ensure that any use of AI happens only with consent and compensation.

Meanwhile, the United Auto Workers (UAW) union is also facing transformative change with the automobile industry’s shift toward electric vehicles and self-driving technology. This shift requires modern manufacturing platforms that fully utilize automation and different skill sets. However, in their negotiations with Detroit automakers, the UAW focused on demands for a pay increase and a shorter work week, without centrally addressing how to prepare workers for a changing workplace affected by AI and automation.

These cases highlight the growing importance of AI-related issues in labor negotiations. As AI continues to advance and reshape various industries, labour unions need to actively address the potential impacts on their members, ensuring that workers are adequately prepared for the changes and that their interests are considered in the development and implementation of AI technologies.

What exactly is going on?

How is it that the explosion of one of the most exciting new technologies, a wellspring of opportunity for new graduates and seasoned managers alike, is met with mass layoffs of tech workers, many of whom worked in “trust and safety”: the very jobs that will be tasked with ensuring the safe application of AI technologies, from monitoring harmful speech and content online to tracking influence operations and coordinated campaigns, ESG campaigns, general ecosystem funding, and so on?

The current corporate climate is an intriguing paradigm, one in which large tech corporations appear to be gravitating toward a model reminiscent of the Reagan era: Jack Welch worship and Milton Friedman’s economic theory. That theory emphasizes a staunch focus on core business activities and innovation, potentially at the cost of employee welfare (e.g., reduced perks, return-to-office mandates). Not that an employer has to wash your laundry for you to feel protected at work or to keep your job.

These corporations seem to be engaging with popular socially progressive issues, such as diversity, gender equality, environmental sustainability, and more. However, their commitment often appears to be more symbolic, without substantive alterations to their fundamental operational processes. This discrepancy may lead to a perceived lack of support for their employees, particularly in a landscape increasingly driven by technological advancements.

In the face of disruptive AI technology, corporations, labour unions, and governments alike must prioritize genuine, substantive action over symbolic gestures. This ensures the protection and upskilling of workers, fosters trust and safety in AI applications, and promotes ethical, responsible technological growth.
