
New York's groundbreaking AI law aims to harness the power of artificial intelligence in government while safeguarding against potential pitfalls, but can it strike the right balance between innovation and regulation?
In December 2024, New York Governor Kathy Hochul signed a law requiring NY state agencies to audit and regulate their use of AI for public work. The law follows numerous incidents in which AI technologies were leveraged for malicious purposes, as well as privacy concerns surrounding AI usage and the growing practice of replacing human workers with AI solutions. For example, criminals used deepfake technology to extract $25 million from a finance worker at a multinational firm.
The new AI usage law requires NY state agencies to conduct thorough assessments of any software solution that incorporates AI technology, including underlying machine learning (ML) models. These reviews must be submitted to the state governor and top legislative leaders and made available to the public online, to ensure transparency and accountability of government AI deployments.
The law addresses crucial ethical concerns surrounding the use of AI in critical government decision-making. For determinations on unemployment benefits and child-care assistance, the legislation establishes clear boundaries requiring human review of AI-generated decisions. This ensures that critical decisions affecting citizens' lives retain human oversight rather than relying solely on algorithmic judgment.
The law also confronts the growing trend of AI workforce displacement in government agencies. Through protective measures, it prevents state organizations from arbitrarily reducing employee work hours or job duties simply to implement AI automation. This balanced approach demonstrates how government agencies can harness AI's benefits while protecting their workforce from unnecessary displacement.
After AI became widely accessible, security experts consistently called for more stringent steps to regulate its usage across different industries. A major privacy concern is the exposure of personally identifiable information (PII) when AI solutions are used to support various tasks. For example, in healthcare institutions, ML models could have been trained on datasets containing patients' and employees' PII, and adversarial attacks could reveal such sensitive information.
The NY law is not the only US legislation governing the use of AI in critical situations. In May 2024, Colorado enacted a similar measure, the Colorado AI Act, which imposes strict rules on developers and deployers of high-risk AI systems. The Colorado state legislature defines high-risk AI systems as those capable of making a "consequential decision" - a decision with material legal or similarly significant effects on the provision, denial, cost, or terms of services to consumers in the following areas:
- Education enrollment or opportunity
- Employment or an employment opportunity
- A financial or lending service
- An essential government service
- Healthcare services
- Housing
- Insurance
- Legal services
How government agencies use AI technology in their work
Like privately owned corporations, government agencies have begun using AI technologies to speed up their work, cut costs, and streamline daily operations. They leverage AI capabilities in various sophisticated ways to enhance public services and operational efficiency.
Government agencies implement AI solutions for task automation to process official documents, handle permit applications, and manage license renewals. These implementations reduce processing time and minimize human error in repetitive tasks. AI also transforms decision-making through advanced risk assessment models and resource allocation systems, allowing government officials to make data-driven decisions quickly and accurately.
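To make this concrete, here is a minimal, hypothetical sketch of the kind of risk-scoring model an agency might use to triage permit applications, routing high-risk cases to a human reviewer. The feature names, data, and threshold are assumptions for illustration, not any agency's actual system.

```python
# Hypothetical sketch: score permit applications and route risky ones to a human.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical applications: [missing_pages, prior_violations, days_pending]
X_train = np.array([
    [0, 0, 3],
    [2, 1, 30],
    [0, 0, 5],
    [3, 2, 45],
    [1, 0, 10],
    [4, 3, 60],
])
# 1 = application later required manual rework, 0 = processed cleanly
y_train = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# Score a new application; high-risk cases go to a human reviewer,
# in line with the law's human-oversight requirement.
new_application = np.array([[2, 1, 20]])
risk = model.predict_proba(new_application)[0, 1]
print(f"Estimated rework risk: {risk:.2f}")
if risk > 0.5:  # 0.5 is an arbitrary illustrative threshold
    print("Route to human reviewer")
```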
For complex problem-solving, AI systems excel at traffic optimization and emergency response planning. In traffic management, for instance, AI algorithms can analyze data from sources such as traffic cameras, GPS devices, and even public social media posts to predict congestion and dynamically adjust traffic light signals, optimizing traffic flow and minimizing delays.
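The sketch below illustrates the general idea with hypothetical numbers: forecast near-term congestion at each intersection from recent vehicle counts and lengthen the green phase where congestion is expected. It is a simplified illustration, not a description of any deployed traffic system.

```python
# Illustrative sketch: naive congestion forecast and green-phase adjustment.
# All intersection names, counts, and thresholds are hypothetical.

# Vehicle counts from the last few 5-minute intervals, per intersection
recent_counts = {
    "5th_and_main": [42, 55, 61, 70],
    "oak_and_2nd": [12, 14, 11, 13],
}

BASE_GREEN_SECONDS = 30
CONGESTION_THRESHOLD = 50  # vehicles per interval (assumed)

for intersection, counts in recent_counts.items():
    # Naive forecast: extrapolate the recent trend one interval ahead
    trend = counts[-1] - counts[0]
    forecast = counts[-1] + trend / (len(counts) - 1)

    # Lengthen the green phase when congestion is forecast, capped at +30s
    if forecast > CONGESTION_THRESHOLD:
        green = BASE_GREEN_SECONDS + min(30, int(forecast - CONGESTION_THRESHOLD))
    else:
        green = BASE_GREEN_SECONDS
    print(f"{intersection}: forecast={forecast:.0f} vehicles, green phase={green}s")
```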
The healthcare sector benefits from AI through patient triage systems and medical image analysis, helping doctors make more accurate diagnoses. In infrastructure management, AI powers predictive maintenance systems and utility optimization, preventing costly breakdowns and improving resource utilization. In citizen support, AI-powered chatbots and virtual assistants deliver multilingual services 24/7.
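As a rough illustration of predictive maintenance, the following sketch flags unusual sensor readings from a piece of utility equipment with an off-the-shelf anomaly detector so crews can inspect it before a breakdown. The sensor names and values are hypothetical.

```python
# Hypothetical sketch: flag anomalous equipment readings for inspection.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical normal readings from a pump: [vibration_mm_s, temperature_c]
normal_readings = np.array([
    [1.1, 60], [1.0, 62], [1.2, 61], [0.9, 59],
    [1.1, 63], [1.0, 60], [1.2, 62], [1.1, 61],
])

detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_readings)

# New readings streamed from the same pump; the second one looks abnormal
new_readings = np.array([[1.1, 61], [4.8, 95]])
for reading, label in zip(new_readings, detector.predict(new_readings)):
    status = "inspect" if label == -1 else "ok"  # -1 marks an anomaly
    print(reading, status)
```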
How malicious actors can leverage AI
Attackers, too, leverage AI technology to carry out their operations. For example, threat actors can use generative AI to craft convincing phishing emails that closely resemble legitimate messages from trusted sources. These emails can be used to steal credentials from unsuspecting employees or to trick people into installing malware such as keyloggers or ransomware.
Attackers can also leverage AI to create malware that traditional antivirus and antimalware solutions cannot detect. Such malware has been gaining attention lately and has contributed to several major successful cyberattacks. AI-powered malware can adapt its behavior to evade detection and maximize its impact on targeted systems.
What’s more, threat actors may target government-used AI technology to undermine public trust in AI technologies. Advanced attackers execute sophisticated adversarial attacks where they manipulate the ML model's behavior to serve their objectives.
Protecting AI systems and ensuring trust
Organizations, especially those controlled by governments, must implement comprehensive security measures to safeguard AI systems against these threats. This includes robust model validation procedures to verify AI system integrity and regular security audits to identify vulnerabilities. Input sanitization mechanisms help prevent malicious data from affecting model behavior.
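A minimal sketch of what input sanitization in front of a model endpoint might look like is shown below: requests are validated and normalized before they ever reach the ML model. The request schema, field names, and limits are assumptions for illustration only.

```python
# Hypothetical sketch: sanitize a request before it is passed to a model.
import re

MAX_TEXT_LENGTH = 2000
ALLOWED_CATEGORIES = {"permit", "license", "benefits"}  # assumed schema

def sanitize_request(payload: dict) -> dict:
    """Validate and normalize a request before model inference."""
    category = str(payload.get("category", "")).lower()
    if category not in ALLOWED_CATEGORIES:
        raise ValueError(f"Unsupported category: {category!r}")

    text = str(payload.get("text", ""))
    if len(text) > MAX_TEXT_LENGTH:
        raise ValueError("Input exceeds maximum length")

    # Strip control characters that could be used to smuggle hidden content
    text = re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", text)
    return {"category": category, "text": text.strip()}

# Example usage
clean = sanitize_request({"category": "Permit", "text": "Renew business license\x00"})
print(clean)
```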
Adversarial training strengthens models against potential attacks by exposing them to simulated attacks during development. Strict access control mechanisms prevent unauthorized model manipulation, while continuous monitoring systems detect and respond to suspicious activities in real time.
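One common form of adversarial training uses fast gradient sign method (FGSM) perturbations. The sketch below shows the idea on a tiny synthetic dataset; the model, data, and perturbation budget are placeholders for illustration, not a recommended configuration.

```python
# Hypothetical sketch: adversarial training with FGSM perturbations in PyTorch.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic binary-classification data standing in for real records
X = torch.randn(256, 10)
y = (X.sum(dim=1) > 0).long()

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
epsilon = 0.1  # perturbation budget (assumed)

for epoch in range(20):
    # 1. Craft FGSM adversarial examples against the current model
    X_adv = X.clone().requires_grad_(True)
    loss = loss_fn(model(X_adv), y)
    loss.backward()
    X_adv = (X_adv + epsilon * X_adv.grad.sign()).detach()

    # 2. Train on both clean and adversarial examples
    optimizer.zero_grad()
    total_loss = loss_fn(model(X), y) + loss_fn(model(X_adv), y)
    total_loss.backward()
    optimizer.step()

print("final combined loss:", total_loss.item())
```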
As AI technology evolves, it is important to balance its benefits and risks, especially when leveraging it in the public sector. Success lies in implementing strong security measures while maintaining the efficiency and effectiveness of AI-powered public services.