Conflict ahead?
https://pogowasright.org/several-state-ai-laws-set-to-go-into-effect-in-2026-despite-federal-governments-push-to-eliminate-state-level-ai-regulations/
Several State AI Laws Set to Go into Effect in 2026, Despite Federal Government’s Push to Eliminate State-Level AI Regulations
Corey Bartkus of Barnes & Thornburg LLP writes:

Illinois, Texas, and Colorado are each set to implement laws governing the use of artificial intelligence (AI) in the workforce in 2026, all while the federal government has signaled its intent to eliminate state-level regulations on AI.
On Dec. 11, 2025, President Donald Trump signed an executive order, titled “Ensuring a National Policy Framework for Artificial Intelligence,” which directed the federal government to review state laws that are deemed “inconsistent” with its plans to implement a national policy framework for AI.
Meanwhile, new AI laws in Illinois and Texas went into effect on Jan. 1. Illinois’ new law, H.B. 3773, amends the state’s human rights act to make clear that the statute is triggered when discrimination emanates from an employer’s use of AI to make decisions on hiring, firing, discipline, tenure, and training. Under H.B. 3773, companies must notify workers when AI is integrated into any of the aforementioned workplace decisions. Furthermore, companies are barred from using ZIP codes in the AI model when evaluating candidates. Because these new protections were implemented as part of Illinois’ existing human rights code, they come with a private right of action.
Read more at The National Law Review.
A different kind of security risk. Not sure training is available to address this in most companies.
https://thehackernews.com/2026/01/who-approved-this-agent-rethinking.html
Who Approved This Agent? Rethinking Access, Accountability, and Risk in the Age of AI Agents
AI agents are accelerating how work gets done. They schedule meetings, access data, trigger workflows, write code, and take action in real time, pushing productivity beyond human speed across the enterprise.
Then comes the moment every security team eventually hits: “Wait… who approved this?”
Unlike users or applications, AI agents are often deployed quickly, shared broadly, and granted wide access permissions, making ownership, approval, and accountability difficult to trace. What was once a straightforward question is now surprisingly hard to answer.
AI Agents Break Traditional Access Models
AI agents are not just another type of user. They fundamentally differ from both humans and traditional service accounts, and those differences are what break existing access and approval models.
Human access is built around clear intent. Permissions are tied to a role, reviewed periodically, and constrained by time and context. Service accounts, while non-human, are typically purpose-built, narrowly scoped, and tied to a specific application or function.
AI agents are different. They operate with delegated authority and can act on behalf of multiple users or teams without requiring ongoing human involvement. Once authorized, they are autonomous, persistent, and often move across multiple systems and data sources to complete tasks end-to-end.
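To make the contrast concrete, here is a minimal sketch, assuming a hypothetical data model (no real IAM product or API): a service account carries one fixed, auditable scope, while an agent's effective permissions are the union of everything delegated to it by each user it serves.

```python
# Illustrative only: hypothetical names, not any real IAM system.
# Contrast: a purpose-built service account vs. an AI agent whose
# effective permissions accumulate from every user it acts for.
from dataclasses import dataclass, field

@dataclass
class ServiceAccount:
    # Narrowly scoped and tied to one application -- easy to audit.
    app: str
    scopes: frozenset

@dataclass
class Agent:
    # Delegated authority: each user grants scopes independently,
    # so no single approver sees the agent's full reach.
    name: str
    delegations: dict = field(default_factory=dict)  # user -> set of scopes

    def effective_scopes(self):
        combined = set()
        for scopes in self.delegations.values():
            combined |= scopes
        return combined

svc = ServiceAccount(app="payroll-sync", scopes=frozenset({"hr:read"}))

bot = Agent(name="scheduling-agent")
bot.delegations["alice"] = {"calendar:write", "mail:send"}
bot.delegations["bob"] = {"crm:read", "calendar:write"}

# The agent's combined access exceeds what any one delegator approved.
print(sorted(bot.effective_scopes()))
# -> ['calendar:write', 'crm:read', 'mail:send']
```

The design point this toy captures: the question "who approved this access?" has a clean answer for `svc` (whoever provisioned the app) but no single answer for `bot`, because its permission set is an emergent union of separate delegations.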
Perspective.
https://theconversation.com/is-ai-hurting-your-ability-to-think-how-to-reclaim-your-brain-272834
Is AI hurting your ability to think? How to reclaim your brain
The retirement of West Midlands police chief Craig Guildford is a wake-up call for those of us using artificial intelligence (AI) tools at work and in our personal lives. Guildford lost the confidence of the home secretary after it was revealed that the force used incorrect AI-generated evidence in their controversial decision to ban Israeli football fans from attending a match.
This is a particularly egregious example, but many people may be falling victim to the same phenomenon – outsourcing the “struggle” of thinking to AI.
As an expert on how new technology reshapes society and the human experience, I have observed a growing phenomenon which I and other researchers refer to as “cognitive atrophy”.
Essentially, AI is replacing tasks many people have grown reluctant to do themselves – thinking, writing, creating, analysing. But when we don’t use these skills, they can decline.
We also risk getting things very, very wrong. Generative AI works by predicting likely words from patterns trained on vast amounts of data. When you ask it to write an email or give advice, its responses sound logical. But it does not understand or know what is true.
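The "predicting likely words" point can be illustrated with a toy bigram model, a deliberate and drastic simplification of how large language models actually work (the corpus and words here are made up for the example):

```python
# Toy bigram "language model": predicts whichever next word followed the
# current word most often in its training text. This is nothing like a
# real LLM in scale, but it shows the core idea: output reflects pattern
# frequency in the data, not verified fact.
from collections import Counter, defaultdict

corpus = "the sky is blue the sky is blue the sky is green".split()

# Count how often each word follows each other word.
nexts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    nexts[current][following] += 1

def predict(word):
    # Return the statistically most common continuation -- fluent, not "true".
    return nexts[word].most_common(1)[0][0]

print(predict("is"))   # -> 'blue', simply because it appeared more often
print(predict("sky"))  # -> 'is'
```

Nothing in the model checks whether the sky is actually blue; "blue" wins only because it outnumbers "green" in the training text. Scaled up enormously, the same dynamic is why fluent, confident output can still be wrong.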