Saturday, December 13, 2025

Somebody up there hates me?

https://www.cpr.org/2025/12/12/trump-artificial-intelligence-executive-order/

Trump cites Colorado in new executive order banning states from creating ‘cumbersome’ AI laws

President Donald Trump’s executive order to ban states from creating robust regulations around artificial intelligence is another potential roadblock for Colorado’s first-in-the-nation AI law, which is set to go into effect next year.

Colorado Attorney General Phil Weiser said the state would challenge the order in court.

Trump’s order specifically singled out Colorado’s law as an example of a cumbersome AI regulation.  The state law, passed in 2024, seeks to prevent discrimination in the AI systems businesses and governments use in making key decisions, such as hiring, education and banking. 

In criticizing the law, Trump’s executive order said it “may even force AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.” 



Friday, December 12, 2025

Perspective.

https://www.bespacific.com/artificial-intelligence-and-the-future-of-work/

Artificial Intelligence and the Future of Work

National Academies of Sciences, Engineering, and Medicine. 2025.  Artificial Intelligence and the Future of Work. Washington, DC: The National Academies Press. Advances in artificial intelligence (AI) promise to improve productivity significantly, but there are many questions about how AI could affect jobs and workers. Recent technical innovations have driven the rapid development of generative AI systems, which produce text, images, or other content based on user requests – advances which have the potential to complement or replace human labor in specific tasks, and to reshape demand for certain types of expertise in the labor market.  Artificial Intelligence and the Future of Work evaluates recent advances in AI technology and their implications for economic productivity, the workforce, and education in the United States. The report notes that AI is a tool with the potential to enhance human labor and create new forms of valuable work – but this is not an inevitable outcome. Tracking progress in AI and its impacts on the workforce will be critical to helping inform and equip workers and policymakers to flexibly respond to AI developments.





Perhaps not so smart after all.

https://www.schneier.com/blog/archives/2025/12/ais-exploiting-smart-contracts.html

AIs Exploiting Smart Contracts

I have long maintained that smart contracts are a dumb idea: that a human process is actually a security feature.

Here’s some interesting research on training AIs to automatically exploit smart contracts:

AI models are increasingly good at cyber tasks, as we’ve written about before. But what is the economic impact of these capabilities? In a recent MATS and Anthropic Fellows project, our scholars investigated this question by evaluating AI agents’ ability to exploit smart contracts on the Smart CONtracts Exploitation benchmark (SCONE-bench), a new benchmark they built comprising 405 contracts that were actually exploited between 2020 and 2025. On contracts exploited after the latest knowledge cutoffs (June 2025 for Opus 4.5 and March 2025 for other models), Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 developed exploits collectively worth $4.6 million, establishing a concrete lower bound for the economic harm these capabilities could enable. Going beyond retrospective analysis, we evaluated both Sonnet 4.5 and GPT-5 in simulation against 2,849 recently deployed contracts without any known vulnerabilities. Both agents uncovered two novel zero-day vulnerabilities and produced exploits worth $3,694, with GPT-5 doing so at an API cost of $3,476. This demonstrates as a proof-of-concept that profitable, real-world autonomous exploitation is technically feasible, a finding that underscores the need for proactive adoption of AI for defense.
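For readers wondering why contracts are exploitable at all, the classic failure mode is a reentrancy bug: the contract pays out before updating its ledger, so the recipient's code can call back in and withdraw again. The sketch below simulates that pattern in plain Python; it is an illustrative example of the general class of flaw, not code or a vulnerability from the SCONE-bench research.

```python
# Toy simulation of a reentrancy-style bug, the classic class of smart
# contract flaw. Python stand-in, not Solidity and not from SCONE-bench.

class Vault:
    """Toy 'contract' that pays out before updating its ledger (the bug)."""
    def __init__(self, funds):
        self.funds = funds          # total value the vault holds
        self.balances = {}          # per-account ledger

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.funds += amount

    def withdraw(self, who, receive_hook):
        amount = self.balances.get(who, 0)
        if amount > 0:
            self.funds -= amount
            receive_hook()              # external call happens FIRST...
            self.balances[who] = 0      # ...ledger is updated LAST (the bug)

vault = Vault(funds=100)                # 100 units of honest users' money
vault.deposit("attacker", 10)

stolen = []
def reenter():
    # Re-enter withdraw while the ledger still shows a 10-unit balance.
    if vault.funds >= 10:
        stolen.append(10)
        vault.withdraw("attacker", reenter)

vault.withdraw("attacker", reenter)
print(vault.funds)  # 0: fully drained, despite only a 10-unit deposit
```

The fix is equally classic (checks-effects-interactions: update the ledger before making the external call), but as the research above suggests, finding the contracts that got this wrong is exactly the kind of task AI agents are getting good at.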





How to violate Trump’s Executive Order…

https://www.transformernews.ai/p/new-york-governor-hochul-raise-act-sb-53

New York’s governor is trying to turn the RAISE Act into an SB 53 copycat

New York Governor Kathy Hochul is proposing a dramatic rewrite of the RAISE Act, the AI transparency and safety bill that recently passed the state legislature, according to two sources who reviewed the governor’s redlines on the bill.

The governor’s proposal would strike the RAISE Act in its entirety and replace it with verbatim language from California’s recently enacted law, SB 53, with minimal changes. SB 53 is generally viewed as a lighter touch approach. One source who spoke with Transformer on the condition of anonymity said the proposal would effectively make SB 53, a law that “was always meant to be a floor” for AI regulation, “suddenly become the ceiling.”



Thursday, December 11, 2025

Push-back.

https://pogowasright.org/announcement-eff-launches-age-verification-hub-as-resource-against-misguided-laws/

ANNOUNCEMENT: EFF Launches Age Verification Hub as Resource Against Misguided Laws

EFF Also Will Host a Reddit AMA and a Livestreamed Panel Discussion

SAN FRANCISCO—With ill-advised and dangerous age verification laws proliferating across the United States and around the world, creating surveillance and censorship regimes that will be used to harm both youth and adults, the Electronic Frontier Foundation has launched a new resource hub that will sort through the mess and help people fight back.

To mark the hub’s launch, EFF will host a Reddit AMA (“Ask Me Anything”) next week and a free livestreamed panel discussion on January 15 highlighting the dangers of these misguided laws.

“These restrictive mandates strike at the foundation of the free and open internet,” said EFF Activist Molly Buckley. “While they are wrapped in the legitimate concern about children’s safety, they operate as tools of censorship, used to block people young and old from viewing or sharing information that the government deems ‘harmful’ or ‘offensive.’ They also create surveillance systems that critically undermine online privacy, and chill access to vital online communities and resources. Our new resource hub is a one-stop shop for information that people can use to fight back and redirect lawmakers to things that will actually help young people, like a comprehensive privacy law.”

Half of U.S. states have enacted some sort of online age verification law. At the federal level, a House Energy and Commerce subcommittee last week held a hearing on “Legislative Solutions to Protect Children and Teens Online.” While many of the 19 bills on that hearing’s agenda involve age verification, none would truly protect children and teens. Instead, they threaten to make it harder to access content that can be crucial, even lifesaving, for some kids.

It’s not just in the U.S. Effective this week, a new Australian law requires social media platforms to take reasonable steps to prevent Australians under the age of 16 from creating or keeping an account.

We all want young people to be safe online. However, age verification is not the panacea that regulators and corporations claim it to be; in fact, it could undermine the safety of many.

Age verification laws generally require online services to check, estimate, or verify all users’ ages—often through invasive tools like government ID checks, biometric scans, or other dubious “age estimation” methods—before granting them access to certain online content or services. These methods are often inaccurate and always privacy-invasive, demanding that users hand over sensitive and immutable personal information that links their offline identity to their online activity. Once that valuable data is collected, it can easily be leaked, hacked, or misused.

To truly protect everyone online, including children, EFF advocates for a comprehensive data privacy law.

EFF will host a Reddit AMA on r/privacy from Monday, Dec. 15 at 12 p.m. PT through Wednesday, Dec. 17 at 5 p.m. PT, with EFF attorneys, technologists, and activists answering questions about age verification on all three days.

EFF will host a free livestream panel discussion about age verification at 12 p.m. PT on Thursday, Jan. 15. Panelists will include Cynthia Conti-Cook, Director of Research and Policy at the Collaborative Research Center for Resilience; a representative of Gen Z for Change; EFF Director of Engineering Alexis Hancock; and EFF Associate Director of State Affairs Rindala Alajaji. RSVP at https://www.eff.org/livestream-age.

For the age verification resource hub: https://www.eff.org/age

For the Reddit AMA: https://www.reddit.com/r/privacy/

For the Jan. 15 livestream: https://www.eff.org/livestream-age

For this release: https://www.eff.org/press/releases/eff-launches-age-verification-hub-resource-against-misguided-laws






Tools & Techniques.

https://www.bespacific.com/h2o-casebook-collection/

H2O Casebook Collection

Search inside H2O’s collection of 452 casebooks, 10,765 legal documents, and 161 authors, or view our featured casebooks. “H2O is a free platform for making, sharing, and remixing open-licensed casebooks and other course materials. It is developed and maintained by the Library Innovation Lab at the Harvard Law School Library. With H2O, educators can drastically reduce textbook costs for their students, gain control over their course materials, and collaborate toward new approaches to law school curriculum.”



Wednesday, December 10, 2025

Security? Imagine a mass event at rush hour… (What other “security features” are hidden in our cars?)

https://www.theregister.com/2025/12/09/porsche_bricked_russia/

Porsche panic in Russia as pricey status symbols forget how to car

Hundreds of Porsches in Russia were rendered immobile last week, raising speculation of a hack, but the German carmaker tells The Register that its vehicles are secure.

According to reports, local dealership chain Rolf traced the problem to a loss of satellite connectivity to their Vehicle Tracking Systems (VTS). This meant the systems thought a theft attempt was in progress, triggering the vehicle's engine immobilizer.

Porsche HQ was unable to help or diagnose the nature of the problem. It's understood that systems like VTS are operated by local Porsche subsidiaries or dealer networks.

But following Russia's invasion of Ukraine and the imposition of sanctions, Porsche no longer exports to the country or provides after-sales service.
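The reported failure mode is a textbook fail-closed design: the anti-theft logic apparently treats a lost satellite link the same as a theft in progress, so a dead tracking service bricks every car at once. A minimal sketch of that decision logic, with entirely hypothetical names and behavior (nothing here comes from Porsche's actual VTS):

```python
# Hypothetical sketch of the failure mode described above. Function names
# and logic are illustrative assumptions, not Porsche's actual system.

def fail_closed_decision(satellite_link_ok, theft_alarm):
    # Fail-closed: any loss of connectivity is treated as a theft attempt,
    # so a fleet-wide satellite outage immobilizes every car simultaneously.
    if theft_alarm or not satellite_link_ok:
        return "IMMOBILIZE"
    return "ALLOW_START"

def safer_decision(satellite_link_ok, theft_alarm):
    # A safer design distinguishes the two conditions and fails open on
    # connectivity loss alone (perhaps with a grace period and an alert).
    if theft_alarm:
        return "IMMOBILIZE"
    if not satellite_link_ok:
        return "ALLOW_START_AND_ALERT"
    return "ALLOW_START"

print(fail_closed_decision(satellite_link_ok=False, theft_alarm=False))  # IMMOBILIZE
print(safer_decision(satellite_link_ok=False, theft_alarm=False))        # ALLOW_START_AND_ALERT
```

Fail-closed is the right choice when the protected asset is worth more than the inconvenience of a false alarm; for a car that someone may need in an emergency, that tradeoff is much less obvious.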





Perspective.

https://www.pewresearch.org/internet/2025/12/09/teens-social-media-and-ai-chatbots-2025/

Teens, Social Media and AI Chatbots 2025

Roughly 1 in 5 U.S. teens say they are on TikTok and YouTube almost constantly. At the same time, 64% of teens say they use chatbots, including about 3 in 10 who do so daily.



Tuesday, December 09, 2025

Lawyers on the endangered species list?

https://www.bespacific.com/ai-jurisprudence-toward-automated-justice/

AI Jurisprudence: Toward Automated Justice

Datzov, Nikola, AI Jurisprudence: Toward Automated Justice (September 9, 2024 – Revised December 4, 2025). Available at SSRN: https://ssrn.com/abstract=5178780 “The U.S. judiciary’s evolving role and digitization into a “modern” court system over the past several decades has brought it to a fundamentally altering moment: the adoption of automated justice. Recent developments in artificial intelligence (AI) have presented unprecedented capabilities for legal writing, legal analysis, and legal decision-making to be performed by automated technologies. The “AI Revolution,” among other things, has led to the first adoptions of automated adjudicators and judicial “employees” in numerous courts around the world. While the rest of the world has been grappling with the complex questions relating to AI’s arrival at the courthouse steps, surprisingly, courts in the United States have largely ignored their inevitable future until very recently. This unfortunate oversight has left the use of AI technology in judicial decision-making at the hands of individual judges—many of whom lack a sound understanding of the technology—without any formal guidance on its capabilities, limits, and risks.

This article is one of the first to closely examine several important concepts at the heart of U.S. courts’ path toward automated justice and comprehensively connect interdisciplinary scholarship from several diverse areas of law in the context of modern AI and judicial decision-making. Thus, it offers several significant contributions so far absent from existing literature regarding how the fundamental concept of justice will be preserved in U.S. courts in light of the AI Revolution. The article first tells the overlooked story of the courts’ unwitting transition toward an AI-ready judiciary and provides a descriptive account of the evolving capabilities of AI to perform judicial work. To help frame and structure future discussions of AI’s role in performing such judicial functions, the article introduces a novel taxonomy that categorizes the work of judges into different AI tiers and levels, namely AI-Assisted, AI-Led, and AI-Automated Judges. Such a taxonomy is critical because a more granular approach is necessary to determine the metes and bounds of where AI can replace certain judicial responsibilities for certain types of cases. It also examines whether any barriers exist to incorporating AI into the judiciary, including the constitutionality of delegating judicial work to modern AI, which to date has not been meaningfully considered in existing literature. Additionally, the article tackles the complex normative questions surrounding automated judicial decision-making (at different levels) through the lens of historical legal theory and further divides appropriate AI justiciability into the proposed taxonomy. Finally, it explores the complicated question of who can tell judges how to use AI in their judicial work and advocates for a desperately needed judge-led framework of “guided discretion” to transition U.S. courts into the age of AI based on seven fundamental principles.”





Would this 25% be a tariff, adding to the cost of the chips, or a tax?

https://www.cnbc.com/2025/12/08/trump-nvidia-h200-sales-china.html

Trump greenlights Nvidia H200 AI chip sales to China if U.S. gets 25% cut, says Xi responded positively

President Donald Trump on Monday said Nvidia will be allowed to ship its H200 artificial intelligence chips to “approved customers” in China and elsewhere, on the condition that the U.S. gets a 25% cut.





Surveillance is only justified against second class citizens?

https://www.npr.org/2025/12/08/nx-s1-5631826/iceblock-app-lawsuit-trump-bondi

ICEBlock app sues Trump administration for censorship and 'unlawful threats'

The developer of ICEBlock, an iPhone app that anonymously tracks the presence of Immigration and Customs Enforcement agents, has sued the Trump administration for free speech violations after Apple removed the service from its app store under demands from the White House.

The suit, filed on Monday in federal court in Washington, asks a judge to declare that the administration violated the First Amendment when it threatened to criminally prosecute the app's developer and pressured Apple to make the app unavailable for download, which the tech company did in October.



Tools & Techniques. Stacking for fun and profit?

https://pogowasright.org/privacy-concerns-raised-as-grok-ai-found-to-be-a-stalkers-best-friend/

Privacy concerns raised as Grok AI found to be a stalker’s best friend

Graham Cluley writes:

Grok, the AI chatbot developed by Elon Musk’s xAI, has been found to exhibit more alarming behaviour – this time revealing the home addresses of ordinary people upon request.
[…]
Reporters at Futurism fed the names of 33 non-public individuals into the free web version of Grok, with extremely minimal prompts such as “[name] address”.
According to their investigation, ten of Grok’s responses returned accurate, current home addresses.
A further seven of Grok’s responses produced out-of-date but previously correct addresses, and four returned workplace addresses.
In addition, Grok would frequently volunteer unrequested information such as phone numbers, email addresses, employment details, and even the names and addresses of family members, including children.
Only once did Grok refuse outright to provide information on an individual.

Read more at Bitdefender.


Monday, December 08, 2025

The new shout of “fire in a crowded theater?”

https://thedailyeconomy.org/article/hashtag-handcuffs-the-global-rise-of-online-speech-policing/

Hashtag Handcuffs: The Global Rise of Online Speech Policing

In November, South Korean President Lee Jae-myung launched a crackdown on so-called hate speech online, claiming that such speech “crosses the boundary of freedom of expression.” Punishments can include fines and up to seven years in prison. 

Unfortunately, South Korea’s not alone in its push to police what ordinary people can say on social media. Other countries that have recently passed laws to curtail citizens’ speech include Belarus, China, Turkey, Russia, Poland, Thailand, Brazil, Syria, and India. Like South Korea, these countries punish such speech harshly: in Turkey, citizens can face imprisonment of up to three years for a retweet, and in Poland prison sentences can run up to five years for an online insult.

Even countries that have historically respected freedom of speech and individual rights are backsliding: Germany recently cracked down on hate speech online, France has fined citizens for insulting its leaders, and the United Kingdom—once a bastion of Enlightenment ideals—now arrests 30 citizens per day for making offensive posts or comments online. In 2024, British subject Jordan Plain was sentenced to eight months in prison for filming himself making racist gestures and comments.

Even the United States is starting to backslide. When Larry Bushart posted a meme about a Charlie Kirk vigil in Perry County, Tennessee, local law enforcement arrested him. He spent 37 days in jail. 

As the Foundation for Individual Rights and Expression (FIRE)’s Matthew Harwood argues, we are entering a “global free speech recession.”



Sunday, December 07, 2025

All the news, and then some…

https://journals.4science.ge/index.php/GS/article/view/4331

Invisible Editors: Impact of AI on Media Content Quality and Trust

AI technologies such as machine learning, natural language processing, and automated journalism are rapidly transforming how media content is produced, distributed, and consumed. These tools promise greater efficiency, for example, automated news writing and personalized content recommendations, and enable real-time delivery of information. At the same time, their integration into media workflows has intensified concerns about the circulation of biased, low-quality, irrelevant information and disinformation. Because AI systems learn from existing data, they can reproduce and even amplify the biases embedded in that data, while accelerating the spread of disinformation and undermining trust in media institutions.

This study examines how AI becomes an invisible gatekeeper, contributes to biased and irrelevant media content and explores the consequences for public trust and democratic discourse. Using a mixed-methods design, including: content analysis of AI-generated and AI-curated media, an online survey, and secondary data, the research shows that media practitioners and audiences recognize both the transformative potential of AI and its ethical risks. A strong majority of survey participants perceive AI’s impact on media as significant or very significant and associate AI algorithms with the spread of biased or low-quality information across news and social platforms. Respondents also express substantial concern about the opacity and limited accountability of AI systems in shaping what information people see.

The findings point to an urgent need for strategies that reduce AI-induced bias and improve information quality, such as enhancing algorithmic transparency, diversifying training data, and developing clear regulatory and ethical frameworks. Drawing on media and communication theories, this article offers a critical analysis of AI’s role in contemporary media and outlines pathways for more responsible and accountable use of AI in the information ecosystem.



Saturday, December 06, 2025

Unreasonable?

https://pogowasright.org/privacy-s-d-cal-employee-did-not-waive-privacy-right-in-personal-email-data-on-company-provided-laptop-dec-5-2025/

PRIVACY—S.D. Cal.: Employee did not waive privacy right in personal email data on company-provided laptop, (Dec 5, 2025)

Kathleen Kapusta, J.D. writes:

The company’s request to review data spanning more than 15 years, nearly 10 times the length of her employment with the company, was “patently unreasonable and overbroad.”
A former analyst for a security operations provider who added her personal email account to her employer-provided laptop did not waive her constitutional right to privacy in her personal email data, a federal district court in California ruled, granting in part her motion for a protective order. While the company has a legitimate interest in obtaining relevant discovery in the employee’s lawsuit alleging race discrimination, retaliation, and wrongful discharge, its request for an unrestricted review of the data is “profoundly serious in its scope and potential impact,” the court stated, noting that there are less intrusive methods for obtaining the relevant information (Lim v. Expel, Inc., No. 3:24-cv-02284-W-AHG (S.D. Cal. Dec. 1, 2025)).
Personal email account. The employee worked as a Senior Governance Risk Compliance and Privacy Analyst for Expel, Inc., for approximately 18 months. During that time, she added her personal Gmail account to the mail account on her work-issued laptop. Company policy allowed employees to use company property “for incidental personal reasons” with certain restrictions.
The employee believed her personal email account was “password-protected and separate from any work platform.” Her personal email data spanned more than 15 years and included sensitive, private information such as attorney-client communications, medical and financial records, and home security images including images of her minor children.
When the company terminated the employee, it revoked her access to her laptop. It also denied her request to retrieve personal files, telling her that her laptop would be “wiped.” According to the company’s general counsel, it was standard practice to wipe a laptop for reuse but because the employee had made allegations of retaliation, Expel retained her laptop.

Read more at VitalLaw.



Friday, December 05, 2025

Some interesting words…

https://www.oregonlive.com/pacific-northwest-news/2025/12/very-grave-situation-oregon-court-slaps-attorney-with-2000-fine-for-ai-errors.html

‘Very grave situation’: Oregon court slaps attorney with $2,000 fine for AI errors

An Oregon attorney accused of relying on the totally plausible — and often totally erroneous — output of so-called artificial intelligence was slapped with a fine by the Oregon Court of Appeals on Wednesday.

The appellate court determined that Portland civil attorney Gabriel A. Watson filed briefs citing two made-up cases and used a fabricated quote that was attributed to a real piece of case law.

In a first for Oregon, the Court of Appeals ordered Watson to pay $2,000 to the state judicial department, charging him $500 for each baloney citation and $1,000 for the bogus quote.

Lagesen, the judge, said Watson hadn’t provided a “clear explanation” of how the error occurred and that each false brief created by AI costs the judicial system time and money to untangle.

Legal precedent is the backbone of the law, Lagesen said, but artificial intelligence is a machine built on the probable order of words, not the truth itself.

AI mistakes are sometimes dubbed “hallucinations.” But Lagesen rejected that term.

“Artificial intelligence is not perceiving nonexistent law as the result of a disorder,” she wrote. “Rather, it is generating nonexistent law in accordance with its design.”





Tools & Techniques. The first of many?

https://www.bespacific.com/university-of-chicago-law-school-ai-lab-launches-leasechat/

University of Chicago Law School AI Lab Launches LeaseChat

LawSites – “LeaseChat [in Beta] is a free AI tool designed to help renters across the United States analyze their leases and understand their legal rights. With more than 40 million rented properties nationwide, the tool aims to level the playing field for tenants navigating complex lease agreements and landlord-tenant law.

LeaseChat provides four core features, all designed to help renters understand both the law and their specific lease terms:

  • Lease Analyzer: Renters can upload their lease and receive an AI-powered analysis that identifies any red flags. For example, the system will flag unusually high security deposits or problematic clauses.

  • Lease Chat: Users can ask questions about their lease in plain language and receive answers with direct citations to specific lease provisions, complete with page references.

  • Legal Rights: Based on the lease location, the tool outlines applicable laws and tenant rights for that specific city and state, covering everything from repair timelines to notice requirements for landlord entry to rules around returns of security deposits.

  • Letter Drafter: The platform can help renters draft correspondence to landlords about repairs, late rent, deposit returns and other common issues.

All of LeaseChat’s features are also available in Spanish, via a toggle button at the top of the site’s homepage…”



Thursday, December 04, 2025

How to remain clueless without even trying?

https://www.bespacific.com/young-adults-and-the-future-of-news/

Young Adults and the Future of News

To better understand the U.S. media landscape, Pew Research Center has surveyed Americans over time about their news habits and attitudes. Time and time again, the youngest adults stand out from the crowd in their unique ways of consuming news and their views of the news media. This essay examines how the youngest group of adults – those ages 18 to 29 – consume news, interact with it and perceive its role in their daily lives. In doing so, it paints a picture of a generation of Americans that is both shaping and being shaped by the evolving news environment. As we look toward the future, understanding young adults’ news habits may be key to anticipating the coming shifts in the media landscape. Throughout this essay, we include quotes from young Americans gathered from several past Center studies to illustrate their experiences.  This is a Pew Research Center analysis from the Pew-Knight Initiative, a research program funded jointly by The Pew Charitable Trusts and the John S. and James L. Knight Foundation.

  • Young adults are less likely to follow the news. Attention to news in the U.S. – measured by the share of adults who say they follow news all or most of the time – has declined across all age groups since 2016. Young adults (ages 18 to 29) have consistently had the lowest levels.

  • As of 2025, 15% of young adults say they follow the news all or most of the time.  Comparatively, 62% of the oldest Americans say they do this – about four times as many. This holds true for different types of news. Young adults are less likely than all older age groups to say they closely follow national and local news.

  • Younger adults also differ in the news topics they follow. They tend to be less likely than older adults to say they often or extremely often get news about government and politics, science and technology, and business and finance. They are only slightly less likely to often get sports news – and more likely to get entertainment news. About a third (32%) of adults under 30 say they get entertainment news extremely often or often, compared with 13% of the oldest adults (those 65 and older).

  • Even though young adults are less likely to report following the news, news may still be finding them in other ways.  When asked how often they seek out the news, about one-in-five young adults (22%) say they do so often or extremely often. Older adults are much more likely to intentionally seek out news…”



Wednesday, December 03, 2025

Beware the hallucinating AI Judge?

https://www.bespacific.com/not-ready-for-the-bench-llm/

Not ready for the bench: LLM legal interpretation is unstable and out of step with human judgments

Not ready for the bench: LLM legal interpretation is unstable and out of step with human judgments (Purushothama, Waldon, Schneider, 2025): “Legal interpretation frequently involves assessing how a legal text, as understood by an ‘ordinary’ speaker of the language, applies to the set of facts characterizing a legal dispute in the U.S. judicial system. Recent scholarship has proposed that legal practitioners add large language models (LLMs) to their interpretive toolkit. This work offers an empirical argument against LLM interpretation as recently practiced by legal scholars and federal judges. Our investigation in English shows that models do not provide stable interpretive judgments: varying the question format can lead the model to wildly different conclusions. Moreover, the models show weak to moderate correlation with human judgment, with large variance across model and question variant, suggesting that it is dangerous to give much credence to the conclusions produced by generative AI.”





To protect the children, require adults to surrender privacy? How does this impact the first amendment?

https://www.404media.co/missouri-age-verification-law-porn-id-check-vpns/

Half of the US Now Requires You to Upload Your ID or Scan Your Face to Watch Porn

As of this week, half of the states in the U.S. are under restrictive age verification laws that require adults to hand over their biometric and personal identification to access legal porn.

Missouri became the 25th state to enact its own age verification law on Sunday. As it’s done in multiple other states, Pornhub and its network of sister sites—some of the largest adult content platforms in the world—pulled service in Missouri, replacing their homepages with a video of performer Cherie DeVille speaking about the privacy risks and chilling effects of age verification.





Military or terrorist actors?

https://www.theregister.com/2025/12/03/india_gps_spoofing/

Indian government reveals GPS spoofing at eight major airports

India’s Civil Aviation Minister has revealed that local authorities have detected GPS spoofing and jamming at eight major airports.

In a written answer presented to India’s parliament, Minister Ram Mohan Naidu Kinjarapu said his department is aware of “recent” spoofing incidents in Delhi and other incidents since 2023.

His response confirmed recent incidents at Delhi’s Indira Gandhi International Airport, plus “regular” reports of spoofing since 2023 at Kolkata, Amritsar, Mumbai, Hyderabad, Bangalore and Chennai airports.

As The Register has previously reported, attackers who wish to jam GPS broadcast a radio signal that can drown out the weak beams that come down from navigation satellites. Spoofing a signal sees attackers transmit inaccurate location information so receivers can’t calculate their actual position.

Either technique means pilots can’t rely on satellite navigation – doing so could be catastrophic – and must instead find their way using other means.
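One simple defense-in-depth check against spoofed fixes is to reject any position report that implies a physically impossible ground speed. The sketch below is illustrative only (real avionics cross-check against inertial navigation and other sensors); the threshold and coordinates are my own assumptions:

```python
# Illustrative plausibility gate for GPS fixes, not an aviation-grade
# algorithm: flag any fix implying an impossible jump since the last one.
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def plausible_fix(prev_fix, new_fix, max_speed_kmh=1100.0):
    """Each fix is (lat, lon, unix_seconds). Returns True if the implied
    ground speed is achievable for an airliner; False suggests spoofing."""
    dist = haversine_km(prev_fix[0], prev_fix[1], new_fix[0], new_fix[1])
    dt_hours = max(new_fix[2] - prev_fix[2], 1) / 3600.0
    return dist / dt_hours <= max_speed_kmh

# Cruising near Delhi, then a fix that "teleports" the aircraft ~1,100 km
# toward Mumbai within a minute:
honest = plausible_fix((28.56, 77.10, 0), (28.60, 77.15, 60))
spoofed = plausible_fix((28.56, 77.10, 0), (18.97, 72.82, 60))
print(honest, spoofed)  # True False
```

A check like this catches crude teleport-style spoofing; a slow, consistent drift is much harder to detect, which is one reason pilots fall back to ground-based navaids when spoofing is suspected.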