Ethical Concerns Increase as AI Use and Innovations Grow
Real-world AI harm is on the rise, yet many companies are taking on AI risks without proper governance guardrails.
The AI Incident Database (AIID) indexes “harms” and “near harms” that happen in real life as a result of artificial intelligence systems. The database encourages anyone who has experienced incidents to report them. Predictably, the number of incidents grows each year.
The intent of the database is to “learn from experience so we can prevent or mitigate bad outcomes,” AIID reports. Despite such tools, for many organizations embracing AI, there may be a disconnect between the need for innovation and the necessity to reduce the likelihood of harm.
John Nurthen, executive director of Global Research on Staffing Industry Analysts’ (SIA) international research content team, agrees. In the report “Algorithm Auditing for Staffing Firms,” he writes, “Among companies, a gap persists between recognizing AI risks and taking meaningful action to mitigate them.”
Organizational oversight

Nurthen explained, “Initially many were swept up in the excitement and potential of AI.” This caused organizations across industries to become “less focused on what this might mean from an ethical perspective.”
But public concern has grown, even as excitement about AI’s potential and benefits has increased globally. Further, regulatory bodies, industry standards authorities, academics, and expert communities have voiced concern.
Organizations do have a pathway, even though best practices are still developing and evolving, Nurthen said. Any AI initiative within an organization, for example, needs to be “a cross-departmental project.” An IT department should not decide on AI ethics alone, he warned. “IT, like other colleagues, has an important role to play,” but it cannot be the sole decision-maker, Nurthen said.
External oversight

An April 2025 Pew Research Center survey showed that public concern about AI harm, including bias, misinformation, and job loss, has grown. About half of U.S. adults (51 percent), for example, say they are more “concerned” about AI than “excited.”
Many people want to see more government regulation and corporate responsibility regarding algorithmic fairness and safety, the SIA report “Staffing Trends 2026” concluded. AI experts, too, voice concern about misuses such as bias and impersonation.
“While businesses are gung-ho about the potential and benefits of AI, ethical concerns have increased among the general public, regulatory bodies, industry standards authorities, academics, and expert communities,” Nurthen said in the newest SIA report.
But there are governance challenges. Gartner reports that responsible AI governance, ethics, and compliance are fundamental to sustainable AI adoption and corporate risk management. Organizations need to focus on striking the “right balance,” providing “appropriate guardrails that do not stifle innovation,” Nurthen reported.
Regulations fragmented

At the end of 2025, the current administration signed an executive order, “Ensuring a National Policy Framework for Artificial Intelligence,” to promote a uniform federal AI policy framework. The order declared U.S. leadership in AI a national priority and asserted that burdensome or conflicting state laws could undermine competitiveness and innovation.
The order directs federal agencies to develop a “minimally burdensome” national standard. “Staffing Trends 2026” reported that the executive order suggested using federal funding to discourage states from passing restrictive measures. Still, “The U.S. lacks a single, comprehensive federal law governing artificial intelligence,” SIA concluded.
AI regulation at the federal level is “fragmented, relying mostly on agency enforcement under existing laws (such as consumer protection, civil rights, and competition law) and on voluntary frameworks like the NIST AI Risk Management Framework,” the report said.
Meanwhile, Congress remains divided. Some lawmakers prioritize innovation and global competitiveness, arguing against heavy regulation. Others are pushing for clearer federal rules to address safety, bias, and accountability, the report said.
“Federal action has been focused on narrow, issue-specific measures (such as deepfake and non-consensual imagery laws) rather than a unified AI statute, leaving significant regulatory uncertainty,” Nurthen wrote in the report.
Further, states are creating AI laws in a vacuum. U.S. states have moved aggressively, creating a “fast-growing patchwork of AI laws,” Nurthen said. States such as California, Colorado, and New York have enacted or proposed rules on transparency, “high-risk” AI systems, hiring bias, child safety, and impersonation.
These laws have triggered an intense debate over federal pre-emption versus state autonomy. “Industry groups generally favor a single national framework to reduce compliance complexity, while states argue they must act where Congress has stalled,” the report said.
Inevitable growth

“Total global AI investment continues to rise sharply, especially in infrastructure, services, and software,” Nurthen concluded in “Staffing Trends 2026.”
Even though many leaders have yet to see a strong return from adopting the technology, organizations are still moving from experimentation into production. Across the country, they continue to shift from isolated proofs of concept and pilot projects to widespread deployment of AI in core business workflows.
There is no shortage of AI innovation, Nurthen admitted. He suggested that investment in data centers has helped accelerate AI deployment.
The conclusion is clear: the large sums invested in infrastructure are “increasing available compute capacity for training and inference, lowering latency and cost barriers for computing at a global scale, and enabling rapid scaling of AI applications across industries.”
Cathy Cecere is membership content program manager.