Navigating Global AI Regulation Frameworks: Balancing Rights, Redress, and Responsibility

*This article was written with the assistance of an LLM.

The European Parliament and the Council of the European Union (the body representing member-state governments) have recently reached a political agreement on the prospective EU Artificial Intelligence Act, setting out the provisions and principles of a new regulatory framework for AI that considers ethical implications, user safety and risk, as well as the future of commercial general-purpose AI, emerging technologies such as quantum computing, and future innovation.

This new legislation joins a myriad of other regulatory frameworks for artificial intelligence, including Artificial General Intelligence (AGI): the US Executive Order on Safe, Secure, and Trustworthy AI, Canada's AIDA (Bill C-27), and initiatives in Australia. All have been lauded as efforts to address and mitigate the risks posed by the rapid advancement of AGI over just the past year, especially measures to protect the Union's citizens and residents from mass data-harvesting practices and other privacy-related vulnerabilities such as financial scams and manipulated harmful content. The bill will introduce a set of guidelines designed to mitigate risks associated with AI and standardize disclosure.

The conversation about AI regulation and the fervent applause for new regulatory initiatives have been encouraging. Yet, as AI systems and the data flows that feed them become currency for future trade treaties, there is a flagrant case of regulatory misalignment, in which each country's legislation focuses on one aspect of regulating AI, AGI or generative AI without addressing the others.

This article examines the need to align the different approaches to regulating AI, specifically on the subjects of AI safety and the ethical implications posed by the rapid rollout of new AGI products and systems.

The European Union’s proposed Artificial Intelligence Act is a detailed legislative framework that aims to regulate the development, deployment, and use of artificial intelligence across its member states. Under this Act, certain uses of AI would be prohibited, including real-time biometric identification in public spaces, with exceptions for national security and other significant objectives. High-risk AI applications, such as those integral to critical infrastructure, employment, education, and law enforcement, would be subject to stringent requirements before being put into operation. These requirements include rigorous testing for safety, transparency, data governance, respect for user privacy, and human oversight to prevent adverse decisions being made without human intervention. Unfortunately, these requirements focus largely on software performance. The new Act will also include a tiered system of penalties for non-compliance, with fines proportional to the severity of the infringement.

The key new provisions proposed in the EU AI Act, together with their definitions and potential weaknesses, are summarized below:

  • Ban on biometric categorisation systems: prohibits AI systems that categorize individuals based on sensitive characteristics like race or sexual orientation. Weakness/risk: enforcement may be challenging, and there could be workarounds that still infringe on privacy.
  • Ban on indiscriminate scraping for facial recognition: prohibits creating databases from online or CCTV footage for facial recognition without a targeted purpose. Weakness/risk: could be hard to monitor and prevent, especially with advancements in data-scraping technologies.
  • Ban on emotion recognition in workplaces and schools: prohibits AI systems from assessing emotions in workplaces and educational institutions. Weakness/risk: covert use, or misinterpretation of behaviours as emotional indicators.
  • Ban on social scoring and manipulation: prohibits AI that scores social behaviour for eligibility purposes or manipulates behaviour in ways that circumvent free will. Weakness/risk: defining and proving manipulation or exploitation could be legally complex.
  • Law enforcement exemptions for biometric identification: allows the use of biometric identification in public spaces with judicial authorization for serious crimes or specific threats. Weakness/risk: potential for misuse or overreach by law enforcement; could infringe upon civil liberties.
  • Obligations for high-risk systems: high-risk AI systems must undergo a fundamental rights impact assessment and comply with strict requirements. Weakness/risk: requires significant oversight; there could be biases in impact assessments and enforcement.
  • Right to launch complaints about AI systems: citizens can complain about and receive explanations for decisions made by high-risk AI systems. Weakness/risk: the process for complaints and explanations must be clear and accessible to be effective.
  • Transparency requirements for GPAI systems: general-purpose AI systems must follow transparency guidelines, including technical documentation and summaries of training content. Weakness/risk: the broad application of GPAI may make it difficult to ensure compliance across all uses.
  • Stricter obligations for high-impact GPAI models: models with systemic risk must conduct evaluations, mitigate risks, report incidents, and ensure cybersecurity and efficiency. Weakness/risk: defining “systemic risk” can be subjective; the criteria for evaluation may need constant updating.
  • Regulatory sandboxes and real-world testing for innovation: these environments allow businesses to develop and test AI before market placement, with the aim of fostering innovation. Weakness/risk: without proper regulation, sandboxes could lead to unintended consequences being overlooked.
  • Sanctions for non-compliance: fines range from 35 million euros or 7% of global turnover to 7.5 million euros or 1.5% of turnover, based on the infringement (see the illustrative calculation below). Weakness/risk: fines may not be a sufficient deterrent for larger companies, which might accept them as operating costs.
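To make the tiered penalty structure concrete, here is a minimal illustrative sketch in Python. It assumes the commonly reported reading that each tier imposes the higher of a fixed amount or a share of global annual turnover; the tier labels and the function are hypothetical simplifications, not the Act's legal text.

```python
# Illustrative sketch of the EU AI Act's tiered penalty structure.
# Assumption: each tier imposes the higher of a fixed amount or a share of
# global annual turnover; tier labels are hypothetical and simplified.

FINE_TIERS = {
    # infringement category: (fixed amount in euros, share of global turnover)
    "prohibited_practice":   (35_000_000, 0.07),   # most severe infringements
    "incorrect_information": (7_500_000, 0.015),   # least severe tier
}

def applicable_fine(category: str, global_turnover_eur: float) -> float:
    """Return the maximum fine for a category under the 'whichever is higher' reading."""
    fixed, share = FINE_TIERS[category]
    return max(fixed, share * global_turnover_eur)

# Example: a provider with 10 billion euros in global annual turnover.
print(applicable_fine("prohibited_practice", 10_000_000_000))  # 700000000.0
print(applicable_fine("incorrect_information", 100_000_000))   # 7500000.0
```

Under this reading, the percentage-based figure dominates for very large companies, which is precisely the "operating cost" concern noted in the last provision above.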

The Need For Careful Alignment in AI Regulation
While proposed legislation such as the EU AI Act or the Canadian AIDA focuses largely on user safety frameworks for AI systems or their ethical implications, there remain other major areas that have been impacted by the mass adoption of AGI and proprietary AI creative tools this year, notably disputes over rights, ownership and licensing (see the statement authored by SAG-AFTRA), voluntary disclosure, and the right to redress.

Right of Redress
The EU’s AI Act and its counterparts expect companies to create their own safety tests and share them with the government or a regulatory body, with no mandated processes for third-party audits yet. This places a great deal of good faith and trust in the hands of the AI superpowers to heed threats and be transparent about the inherent risks and biases of their technologies. That might be perfectly fine if the world’s economy weren’t built on incumbency advantage and shareholder profit at the expense of societal welfare (global warming, the opioid crisis), workers or users. In fact, in yet another false facial recognition case, a Detroit resident who was falsely identified as the perpetrator of a crime she did not commit is suing the Detroit Police Department over her unlawful arrest. Unfortunately, even if she prevails, it is the taxpayers of Detroit who will foot the bill for the legal battle against the police department and any sums subsequently awarded to the plaintiff. In another, perhaps more instructive case, the Federal Trade Commission in the US banned the drugstore chain Rite Aid from using AI facial recognition in its stores due to its failure to oversee its AI service providers.

On the other side of the coin, in the new 2023 TV/Theatrical contract ratified by SAG-AFTRA in December 2023, the union acknowledged its inability to resolve the legal ambiguity around generative AI, rights, licensing and the seeking of damages. There is a general lack of clarity in the proposed AI regulatory frameworks, which do not address rights (specifically labour rights), compensation and remedy. For the actors and artists represented by the union, some provisions in the new contract are unclear about the artists’ rights under the law. The contract instead pushes disputes about digital alterations and the use of digital replicas into an employer-mandated arbitration process. In the absence of a specific law, it does not even touch upon instances of harm or misuse by third-party actors with whom the labour union has no bargaining relationship, many of which are the new synthetic media companies that have grafted artists’ voices onto hugely successful covers, new music, new memes and animated GIFs.

A new whitepaper by Stanford’s HAI and RegLab on regulatory alignment
In a recent whitepaper by the Stanford Human-Centered Artificial Intelligence (HAI) institute and RegLab, researchers and directors of the centres argue that misalignment of AI regulations may lead to rushed legislation that lacks meaningful public consultation and democratic legitimacy, and may well backfire or miss the opportunity to create an effective, interoperable regulatory framework for AI.

Members of the Stanford research centres identify four commonly proposed AI regulatory regimes – disclosure, registration, licensing, and auditing – and conclude that each regulatory vertical suffers from its own “regulatory alignment problem”. The authors argue that rushed regulation may fail to address the problems it aims to solve, or even worsen them, due to technical and institutional constraints or unintended consequences. They recommend focusing first on enhancing the government’s understanding of AI risks through adverse event reporting and third-party audits with government oversight. Such exchanges are already familiar as the red-teaming exercises and research-alignment partnerships undertaken by the AI and data-training companies themselves.
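To illustrate what adverse event reporting could involve in practice, here is a minimal, purely hypothetical sketch of an incident record a regulator might collect. The field names and structure are illustrative assumptions and are not drawn from the whitepaper or any statute.

```python
# Hypothetical sketch of a minimal adverse-event report a regulator might
# collect under an incident-reporting regime; all field names are
# illustrative and not taken from the whitepaper or any law.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AdverseEventReport:
    system_name: str          # the AI system or model involved
    provider: str             # company placing the system on the market
    reported_at: datetime     # when the incident was reported
    description: str          # what happened and who was affected
    severity: str             # e.g. "low", "serious", "critical"
    affected_rights: list[str] = field(default_factory=list)  # e.g. privacy
    mitigation: str = ""      # remedial steps taken by the provider

report = AdverseEventReport(
    system_name="facial-recognition-v2",
    provider="ExampleCorp",
    reported_at=datetime(2023, 12, 1),
    description="False positive identification led to a wrongful detention.",
    severity="serious",
    affected_rights=["privacy", "due process"],
)
print(report.severity)  # serious
```

A standardized record like this is one way a government could build the baseline understanding of AI risks that the authors call for before committing to heavier regimes such as licensing.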

Other interesting arguments made in the whitepaper

  • The definition of “regulatory misalignment” – where regulation fails to address the intended harm (“regulatory mismatch”), or creates unacknowledged tradeoffs between objectives (“value conflict”).
  • Many governments lack a single unified body to regulate AI, and public consultation processes in some of these countries have already hinted that the risks from AI can be regulated by different existing federated bodies (privacy, antitrust, compensation, consumer protection, etc.).
  • The implementation of new AI regulation may require technical capabilities or government capacity that do not currently exist. A new AI super-regulator would likely hinder, not improve, interagency coordination.
  • It notes that regulations may be hollow, e.g. audits serving as rubber stamps. Some risks may be better addressed through conventional regulation rather than AI-specific policies, e.g. existing or new privacy and platform rules (the CCPA, the GDPR, antitrust law, the DSA, etc.).
  • It cautions against industry capture and notes the growing number of entities lobbying on AI issues. Some advocates of licensing may be its main beneficiaries.

Though there has been some push for a globalized framework on regulating AI, such as the G7 Leaders’ Hiroshima AI Process (and accompanying statement), the pace is disparate, especially among countries that are just as likely to be affected by the expansive use of generative AI products during elections.

The legal battle over reining in the proliferating use of AI systems and their impact on online discourse, social platforms, the media, and creators’ IP rights is still very much ongoing. A California court recently rejected the argument that an LLM (specifically the Llama model developed by Meta) infringes the copyrights of the works it was trained on or can itself be considered a derivative work. Meanwhile, the legal definition of high-risk AI systems, such as those capable of developing autonomous weaponry or infiltrating weapons systems, is yet to be fully settled or implemented.

The need for clear processes for redress and third-party audits, along with addressing legal ambiguities and ensuring labour rights and compensation, is essential. As we move forward, it is imperative to strike a balance between technological advancement and societal welfare, ensuring that AI regulation encompasses comprehensive measures to protect individuals and foster responsible AI development.
