
The window for decisive action on AI is closing

Congress should not prevent state and local governments from safeguarding Americans against potentially dangerous AI developments.


In the rapidly evolving world of artificial intelligence (AI), New York state is working to ensure the technology is developed and used responsibly. More than 60 bills related to automated decision-making systems are currently moving through the state legislature, signaling a commitment to AI regulation.

The New York AI Act (S1169) is a key bill aimed at ensuring AI is developed responsibly and applied fairly in consequential contexts. Recognizing the risks AI poses, particularly to vulnerable populations such as people with disabilities and people of color, the act seeks to address algorithmic bias and inaccuracy.

While AI-powered assistive technologies are improving accessibility for people with disabilities, concerns about potential harms remain. To address this, one New York proposal would prohibit chatbots from practicing licensed professions such as psychiatry, protecting the public from misuse of the technology.

Last year, New York enacted the FAIR Act to regulate the use of deepfakes in elections, banned the use of facial recognition technology by the MTA, and instituted guardrails on the state government's use of automated employment decision-making tools.

Recognizing the need for AI education, New York is also considering a competitive grant program for under-resourced schools and community organizations, aimed at equipping the next generation with the knowledge and skills to navigate an AI-driven future.

At the federal level, however, there is cause for concern. House Republicans are attempting to block state-level enforcement of AI rules for 10 years through the federal budget reconciliation process, a move that could stifle state-level progress like New York's.

In response, more than 50 New York state lawmakers have signed a letter calling on House Republicans to remove the proposed 10-year moratorium from the federal budget bill. State legislators, including the author, also launched an informational campaign on the dangers of the moratorium.

The author is urging congressional representatives to revoke the proposed 10-year moratorium and pass comprehensive AI laws at the federal level. This call to action is part of an ongoing effort to preserve key protections and advancements in AI policy.

The window for decisive action on AI is closing. AI is already being used in high-risk contexts such as evaluating job applications, determining insurance coverage, and approving mortgages. Consumers have limited recourse when these AI-powered tools make errors or hallucinate.

Crucially, Big Tech must not be allowed to dictate the safeguards imposed on artificial intelligence. Given that profit-driven tech companies have shifted away from safety, regulation is necessary to protect consumers and vulnerable populations.

Unfortunately, the Biden administration's 2023 executive order on "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" has been revoked, further underscoring the need for state-level action like New York's.

In the face of this uncertainty, OpenAI, the creator of ChatGPT, has removed from its mission statement its original commitment to ensuring that AI benefits all of humanity. This underscores the importance of regulatory oversight to ensure that AI is developed and used in ways that benefit society as a whole.

As New York leads the way in AI regulation, it serves as a model for other states to follow. The future of AI regulation lies in the hands of state and federal lawmakers, who must act decisively to protect consumers and ensure that AI is developed and used responsibly.
