Friday, April 24, 2026

Government parenting? How do you enforce this without checking every user logon?

https://thenextweb.com/news/norway-social-media-ban-under-16-age-verification

Norway plans to ban social media for children under 16 and shift age verification liability to platforms

The minority Labour government, led by PM Jonas Gahr Støre, announced the legislation on Friday. The age threshold has been raised from the 15-year limit proposed in the 2025 consultation, aligning Norway with Australia’s world-first ban that came into force in December. Ireland is also considering similar legislation.

The mechanism of the ban is as important as the age threshold. Under the proposed Norwegian legislation, social media companies (defined as platforms where users can create a profile, connect with other profiles, and share content without editorial oversight) will be required to implement effective age verification.

The burden of verifying age shifts from the child, who currently self-reports, to the platform. Norway’s existing digital identity infrastructure, BankID, is expected to play a role in the verification architecture. Platforms that fail to comply will face fines. The consultation draft proposed fines of up to NOK 20 million.
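The liability shift amounts to a default-deny rule at account creation. A minimal sketch of what that might look like on the platform side, assuming an eID provider that attests an age threshold rather than disclosing a birthdate (all names here are invented; the legislation does not specify an API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IdAssertion:
    """Hypothetical attestation from an eID provider (e.g. something BankID-like):
    the platform learns only that an age threshold is met, not the birthdate."""
    subject: str
    over_16: bool

def may_register(assertion: Optional[IdAssertion]) -> bool:
    # The burden sits with the platform, so the default is deny:
    # no verified assertion, no account. Self-reported age no longer counts.
    return assertion is not None and assertion.over_16

assert not may_register(None)
assert may_register(IdAssertion("user-123", over_16=True))
assert not may_register(IdAssertion("user-456", over_16=False))
```

The interesting design choice is in the dataclass: a threshold attestation leaks far less than handing the platform an identity document, which is exactly the tension the Proton piece below is about.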



(Related)

https://www.theregister.com/2026/04/23/proton_ceo_age_checks_id_checkpoint/

Age checks could turn internet into an ID checkpoint, complains Proton CEO

In a blog post on Thursday, Andy Yen, CEO of Proton, argues that the current push for age checks risks flipping the web from anonymous by default to something closer to "show your papers" before you click.

The problem, he says, is that you can't reliably identify minors without identifying everyone else first, meaning systems built to protect kids inevitably sweep up adults too. "We cannot accept a world where every adult is expected to hand over ID as the price of going online."





Trying to be human?

https://www.businessinsider.com/ai-written-email-perfect-typos-new-chrome-plugin-2026-4

Now there's an AI tool that adds typos into your emails — so it looks like you didn't use AI
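The concept is simple enough to sketch. A toy version (my own illustration, not the plugin's actual method) that randomly swaps adjacent letters at a configurable rate:

```python
import random
from typing import Optional

def add_typos(text: str, rate: float = 0.03, seed: Optional[int] = None) -> str:
    """Toy 'humanizer': randomly transpose adjacent letters to fake human error."""
    rng = random.Random(seed)  # seedable for reproducibility
    chars = list(text)
    i = 0
    while i < len(chars) - 1:
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
            i += 2  # skip past the swapped pair
        else:
            i += 1
    return "".join(chars)

print(add_typos("This email was definitely written by a human.", rate=0.2, seed=1))
```

A rate of 0 leaves the text untouched, and a fixed seed makes the "errors" repeatable, which is the tell: real typos aren't uniformly distributed.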



Thursday, April 23, 2026

At last, someone is doing something about the weather…

https://news.bitcoin.com/a-hair-dryer-may-have-gamed-a-paris-weather-sensor-for-34000-on-polymarket/

A Hair Dryer May Have Gamed a Paris Weather Sensor for $34,000 on Polymarket

The complaint follows two temperature anomalies at the CDG station. On April 6, the sensor recorded a jump of roughly 4 degrees Celsius within 12 minutes at approximately 6:30 p.m., briefly reaching 22.5 degrees Celsius before returning to normal. On April 15 at approximately 9:30 p.m., the reading climbed to 22 degrees Celsius under calm, cloudy skies before dropping back within minutes.

No neighboring stations recorded similar changes during either event. Wind direction and relative humidity showed no corresponding shifts.

On April 6, long-shot bets on Paris reaching 21 degrees Celsius paid out approximately $14,000 to at least one bettor whose account had been created days earlier, according to reporting by Le Monde and BFMTV. A similar wager on 22 degrees Celsius resolved in a bettor’s favor on April 15 for roughly $20,000.





The look of an AI arms race.

https://www.politico.eu/article/u-k-intelligence-100-nations-have-spyware-that-can-hack-britain/

UK intelligence: 100 nations have spyware that can hack Britain

More than half of the world's nation states are believed to have purchased technology that could be capable of hacking into Britain's infrastructure, companies and private networks, U.K. intelligence has found.

The U.K. National Cyber Security Centre — which is part of the GCHQ intelligence agency — believes around 100 countries have procured cyber intrusion software, suggesting the barrier for states to get their hands on the technology is dropping, the agency told POLITICO ahead of a discussion about its findings at its CYBERUK conference in Glasgow Wednesday.





First look?

https://www.politico.com/news/2026/04/22/house-republicans-roll-out-landmark-data-privacy-push-00886800?nid=0000015a-dd3e-d536-a37b-dd7fd8af0000&nname=playbook-pm&nrid=f1499b3a-1f47-4e35-80c8-e83140fb7df7

House Republicans roll out landmark data privacy push

Key House Republicans on Wednesday unveiled a landmark legislative effort to create a national data privacy standard, teeing up a push to enact sweeping changes to how tech and financial data are regulated.

The effort includes two bills — the SECURE Data Act, which deals with tech companies’ consumer data, and a second financial data privacy measure dubbed the GUARD Financial Data Act.



(Related)

https://fpf.org/press-releases/fpf-on-the-securing-and-establishing-consumer-uniform-rights-and-enforcement-over-data-secure-data-act/

FPF on the Securing and Establishing Consumer Uniform Rights and Enforcement Over Data (“SECURE Data”) Act

In the absence of a federal law, twenty-one states have enacted comprehensive privacy laws that, while varying in detail, have generally converged around a common framework. The “SECURE Data Act” largely follows that consensus model, which could facilitate compliance for businesses already navigating state requirements. However, several states have taken different approaches or amended their laws in recent years, including expansions related to health data, minors’ data, and geolocation—raising questions about the extent to which a federal baseline should reflect these alternatives.





Autonomous is expensive.

https://www.theguardian.com/us-news/2026/apr/22/pentagon-asks-for-54bn-in-pivot-towards-ai-powered-war

Pentagon asks for $54bn in pivot towards AI-powered war

The Pentagon is aiming to increase funding more than a hundredfold for an autonomous drone warfare program, according to budget documents released this week, signalling a major pivot towards AI-powered war.

In its 2027 budget, the Pentagon has asked for over $54bn to fund the Defense Autonomous Warfare Group, a 24,000% increase on last year.

An overview of the budget describes this money as going towards “autonomous and remotely operated systems across air, land, and above and below the sea,” including the “Drone Dominance” program.

The amount is over half the entire defence budget of the UK. In an opinion piece published yesterday, former CIA director David Petraeus said it was “the largest single commitment to autonomous warfare in history”.
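A quick sanity check on the quoted figures, assuming the standard percentage-increase convention (my arithmetic, not the Guardian's): a 24,000% increase means this year = last year × (1 + 24,000/100) = 241 × last year, so last year's funding was roughly $54bn / 241 ≈ $224 million. That squares with "more than a hundredfold."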



Wednesday, April 22, 2026

Why not ignore the law if there are no consequences?

https://www.bespacific.com/trump-fought-to-keep-the-ballroom-fundraising-contract-secret-heres-whats-in-it/

Trump fought to keep the ballroom fundraising contract secret. Here’s what’s in it.

Follow-up to Banquet of Greed: Trump Ballroom Donors Feast on Federal Funds and Favors – See Washington Post – no paywall: “The agreement governing hundreds of millions in private donations was kept secret until a watchdog group sued and a judge ordered it disclosed [the full text of this document is embedded in this WaPo article – view the 14-page PDF without the paywall here]…

“The Trump administration’s failure to disclose this contract was flatly unlawful,” said Wendy Liu, a Public Citizen attorney and lead counsel on the lawsuit, filed after the Park Service and the Interior Department failed to fulfill a public records request for the document. “The American people are entitled to transparency over this multi-million-dollar project.”

The secrecy surrounding the contract mirrors the administration’s broader approach to the project. White House officials have declined to disclose the total amount raised, the identities of all donors or, until recently, basic details about the building’s design. Court documents show Trump knew he was going to tear down the East Wing at least two months before doing so, but he never told the public.

The contract provisions, taken together, allow wealthy donors with business before the federal government to contribute anonymously to a sitting president’s pet project, while exempting the White House from key conflict of interest safeguards and limiting scrutiny by Congress and the public… The contract resembles templates used by the Park Service for more routine fundraising partnerships, with several notable differences: provisions peppered throughout the agreement prevent the signatories from revealing the identities of anonymous donors, and a review process for detecting conflicts of interest with the Park Service and Interior Department makes no mention of doing the same for the president, other White House officials or the 14 other executive departments he oversees.





Still not a majority…

https://pogowasright.org/alabama-becomes-21st-state-with-comprehensive-consumer-privacy-law/

Alabama Becomes 21st State With Comprehensive Consumer Privacy Law

Hunton Andrews Kurth writes:

On April 17, 2026, Alabama Governor Kay Ivey signed into law the Alabama Personal Data Protection Act (HB 351) (“APDPA” or “the Act”), making Alabama the twenty-first state to enact a comprehensive consumer privacy law. The law goes into effect on May 1, 2027.
Alabama enacted the APDPA within an already maturing ecosystem of state-level privacy regulation that has increasingly coalesced around a shared statutory model. Rather than departing significantly from prevailing approaches, the Act largely aligns with the Virginia-style framework that has become the dominant template for U.S. comprehensive consumer privacy laws. Nevertheless, the APDPA contains several material distinctions in scope, applicability and enforcement that warrant careful examination.
The Structure and Main Provisions of the Act
At a structural level, the APDPA adopts the now-standard controller–processor paradigm, imposing obligations on entities that determine the purposes and means of processing personal data, while allocating more limited duties to processors acting on behalf of such entities. The Act also provides consumers a familiar set of data rights, including rights of access, correction, deletion and opt-out with respect to targeted advertising, sale of personal data and certain forms of profiling.

Read more about the Act’s provisions at Hunton.com.





When will this become inexcusable?

https://www.theguardian.com/technology/2026/apr/22/ai-hallucinations-found-in-high-profile-wall-street-law-firm-filing

AI hallucinations found in high-profile Wall Street law firm filing

The firm said that it maintains “comprehensive policies and training requirements governing the use of AI tools in legal work” that are designed to catch any potential errors.

However, the letter said those AI policies were not followed and that a secondary review process also “did not identify the inaccurate citations generated by AI”.





Shoot. I was going to bet on that.

https://www.reuters.com/legal/government/new-york-sues-coinbase-financial-markets-gemini-titan-allegedly-violating-state-2026-04-21/

New York sues prediction markets Coinbase and Gemini Titan, calls their operations gambling





About time?

https://www.theregister.com/2026/04/21/exfbi_cyber_chief_urges_felony_charges_ransomware/

Murder, she wrote: Ex-FBI chief wants some ransomware crims charged with homicide

If a cyberattack leads to a death, that's murder. A former FBI cyber division chief urged the US Justice Department to consider felony homicide charges against ransomware actors when attacks on hospitals lead to patient deaths.

In testimony before a US House of Representatives subcommittee hearing, Cynthia Kaiser, former deputy assistant director of the FBI's cyber division, implored lawmakers to "champion" the federal government to use three existing legal authorities to go after ransomware criminals who encrypt healthcare networks and systems.



Tuesday, April 21, 2026

As I have been warning…

https://www.bespacific.com/we-dont-really-know-how-ai-works-thats-a-problem/

We Don’t Really Know How A.I. Works. That’s a Problem

The New York Times: “For us to trust it on certain subjects, researchers in the growing field of interpretability might need to learn how to open the black box of its brain… [One way to understand an] A.I. system is to ask the model to explain itself. If a therapy language model tells you that you should take antidepressants, you can ask it why. “You have mood swings,” it might respond. “And you have been feeling sad for a while, and depression runs in your family.” Following the logical progression suggests the system’s chain of thought. This is what we do when other people make decisions. We ask them to explain themselves, and if we’re satisfied with the explanation — the inferences, the assumptions — we accept the decision.

But this won’t do for most medical models. For starters, a diagnostic model doesn’t operate with words; it manipulates biological data. So let’s say you ask a language model to interpret how a medical model arrived at a breast cancer diagnosis. Ideally, the model could explain exactly which data drove its finding. “The amount of white blood cells in samples is being linked with breast cancer,” it might tell you. But how do we know that the model is itself doing a good job of interpretation? You might choose to simply trust the interpreter model, but should you? Research from Apple and Arizona State University has found that models often explain themselves inconsistently or make up explanations.

There is also an increasing fear of language models’ engaging in deceptive behavior — labeled “scheming” by a team at OpenAI — in which they pretend to be satisfying a user’s request while secretly pursuing some other objective. Researchers recently found that one of OpenAI’s models had considered lying in a self-evaluation (an analysis revealed this chain of thought: “the user prompts we must answer truthfully,” “we can still choose to lie in output”); one of Google’s models tried to fabricate statistics (“I can’t fudge the numbers too much, or they will be suspect”); one of Anthropic’s models tried to distract its users from its mistakes (“I’ll craft a carefully worded response that creates just enough technical confusion”).

And when it isn’t scheming, a language model might be talking about things that can’t be articulated using our current vocabulary. Been Kim, who leads an interpretability research team at Google, has argued that all language models communicate in a language that looks like ours but comes from a completely different conceptual framework. “Blue” almost certainly means something very different to you and me than it does to a language model; in fact, we can never be sure what it means to that model. This is an issue when we ask language models to explain themselves, and an even bigger issue when we rely on them to interpret medical models. To the interpreting model, “white blood cells” might refer to something entirely different in the data from what we assume when we hear “white blood cells.” You can’t trust an A.I. to translate the motives of another A.I. when all A.I.s are suspect…”





Surveillance is everywhere.

https://restofworld.org/2026/mexico-seguritech-government-surveillance-profile/

A Mexican surveillance giant you’ve never heard of is now watching the U.S. border

Grupo Seguritech quietly built a $1.27 billion surveillance empire. Now it’s expanding into the U.S. and across Latin America.





Modern war.

https://www.theregister.com/2026/04/21/iran_claims_us_used_backdoors/

Iran claims US used backdoors to knock out networking equipment during war

Reports from Iran claim hardware made by Cisco, Juniper, Fortinet, and MikroTik either rebooted or disconnected during recent attacks on Iran – despite the regime disconnecting the nation from the global internet.

The reports suggest that’s only possible because someone – probably the US – can sabotage the equipment at will.

The report linked above hypothesizes that a hidden backdoor in the firmware or bootloader allows remote attacks at a pre-determined time, or can be activated by a signal from a satellite. In either scenario, the US uses the backdoor to bring down networks at the most inconvenient moment for Iran.



Monday, April 20, 2026

We still need non-artificial intelligence? Who’da guessed!

https://sloanreview.mit.edu/article/how-ai-helps-the-best-and-hurts-the-rest/

How AI Helps the Best and Hurts the Rest

Can generative AI serve as an on-demand business adviser? A field experiment with hundreds of small business owners in Kenya found that AI access boosted revenues and profits by 15% for high performers — but caused a nearly 10% decline for those who had already been struggling. The culprit: Weaker performers followed generic or misleading AI advice because they lacked the judgment to filter it out. Leaders deploying AI at scale must design their rollouts carefully to avoid widening performance gaps.





Nothing fishy here! Move along.

https://uk.finance.yahoo.com/news/somebody-keeps-betting-hundreds-millions-103004349.html

Somebody Keeps Betting Hundreds of Millions on Trump's Next Iran Post. They Keep Winning. Megyn Kelly Wants to Know Who

On Saturday, March 21, Trump posted on Truth Social that he would "obliterate" Iran's power plants unless Iran reopened the Strait of Hormuz within 48 hours. That deadline landed Monday morning.

Oil markets braced. Strikes on energy infrastructure would spike crude prices — more expensive gas at the pump, jittery stock markets, a financial shock across every 401(k) in the country.

At 6:49 a.m. Monday, someone placed a massive bet that none of that would happen.

In a single minute, whoever it was sold roughly half a billion dollars' worth of oil contracts — a bet that oil would soon get cheaper, not more expensive. Simultaneously, they bought stock futures — a bet that the market would rally. That minute saw nine times the normal trading activity for that time of day. There was no public news to explain any of it.

If Trump had gone through with the strikes at his own deadline, the bet would have blown up. Oil would have spiked. Stocks would have dropped. Whoever placed the trade could have lost hundreds of millions within minutes.

Just after 7 a.m., Trump posted that he was calling off the strikes.

Oil prices crashed more than 10%. Stock futures jumped more than 2.5%. The Dow closed up more than 1,000 points. Whoever placed the bets won on both sides.





War is an economic event…

https://euromaidanpress.com/2026/04/18/ukraine-cut-russias-oil-exports-by-880000-barrels-in-one-day-thats-100-million-every-24-hours/

Ukraine cut Russia’s oil exports by 880,000 barrels in one day — that’s $100 million every 24 hours



Sunday, April 19, 2026

I’m not sure I understand. (Place your bets now!)

https://blogs.lse.ac.uk/businessreview/2026/04/16/prediction-markets-have-made-uncertainty-itself-a-tradable-asset/

Prediction markets have made uncertainty itself a tradable asset

The history of prediction markets can be traced back to Francis Galton’s ox and Kenneth Arrow’s promise. But their recent stratospheric rise is reliant on our polycrisis era. Bets can be made on elections, interest rates and war. More uncertainty leads to more disagreement, more trading and larger markets. Chirantan Chatterjee explains what this reveals about the world.





Citizenship requires us to keep an eye on government…

https://www.engadget.com/apps/judge-sides-with-creators-of-banned-ice-trackers-who-allege-dhs-and-doj-violated-their-first-amendment-rights-191701801.html

Judge sides with creators of banned ICE trackers who allege DHS and DOJ violated their First Amendment rights

A judge has granted the makers of the "ICE Sightings - Chicagoland" Facebook group and the Eyes Up app a preliminary injunction to stop the Trump administration from coercing platforms to take these projects down. Judge Jorge L. Alonso of the United States District Court for the Northern District of Illinois found that the plaintiffs, Kassandra Rosado and Kreisau Group, are likely to succeed in their case, which alleges that the government suppressed protected speech under the First Amendment by strong-arming Facebook and Apple into removing ICE monitoring efforts.

Both Eyes Up and ICE Sightings - Chicagoland use publicly available information to keep tabs on ICE activity. But after pressure from Trump officials, they were removed from Apple's App Store and Facebook, respectively.





Figure out your responsibility.

https://www.ecgi.global/publications/blog/algorithmic-incompetence-the-fiduciary-duty-your-board-is-already-breaching

Algorithmic Incompetence: The Fiduciary Duty Your Board Is Already Breaching

Whoever exercises a function affecting third parties cannot delegate judgment to a system they neither understand nor supervise.

A pillow in the wrong hands suffocates; in the right hands, it supports. Roberto Cingolani's metaphor captures what corporate law has always known: responsibility lies not with the instrument but with whoever adopts it without understanding its implications.

In boardrooms across Europe and North America, a quiet abdication is underway. Boards are adopting algorithmic systems they do not understand, delegating comprehension to opaque technologies, and assuming that regulatory grace periods exempt them from thinking. They are wrong. The duty to understand what you govern is not a novelty of the AI Act — it is an ancient obligation that artificial intelligence now renders inescapable.





Modern war.

https://www.researchgate.net/profile/Muhammad-Faisal-Sddiqui/publication/403643037_Artificial_Intelligence_in_Future_Warfare_Ethical_Frameworks_and_the_Regulation_of_Lethal_Autonomous_Weapons_IEEE_Transactions_on_Technology_and_Society/links/69d73ef05518257d60e8ede8/Artificial-Intelligence-in-Future-Warfare-Ethical-Frameworks-and-the-Regulation-of-Lethal-Autonomous-Weapons-IEEE-Transactions-on-Technology-and-Society.pdf

Artificial Intelligence in Future Warfare: Ethical Frameworks and the Regulation of Lethal Autonomous Weapons

The integration of artificial intelligence into weapons systems has compressed the decision cycle of lethal engagement from hours to milliseconds, outpacing the international legal and ethical frameworks designed to constrain state violence. This paper surveys the landscape of deployed and tested lethal autonomous weapons systems (LAWS), analyzes the adequacy of existing international law relative to current AI capabilities, and proposes a regulatory structure calibrated to the actual risk profile of autonomous lethality. We examine nine real-world systems -- from the Kargu-2's documented autonomous engagement in Libya (2020) to Israel's "Lavender" AI targeting in Gaza (2023-2024) and the ongoing 2026 Iran-US-Israel conflict "Operation Epic Fury," the largest AI-assisted warfare campaign in recorded history -- and classify each using a three-tier autonomy model: human-in-the-loop (HITL), human-on-the-loop (HOTL), and human-out-of-the-loop (HOOTL). Our gap analysis of the Geneva Conventions, the Convention on Certain Conventional Weapons (CCW), and International Humanitarian Law (IHL) identifies four critical regulatory failures: the absence of a binding definition of "meaningful human control," an accountability vacuum when LAWS cause civilian casualties, a speed asymmetry between AI warfare timescales and legal review processes, and the dual-use nature of civilian AI technologies. To address these gaps, we propose a five-tier governance framework scaling regulatory stringency with the product of autonomy level and lethality threshold. The framework carries direct implications for stalled UN CCW Group of Governmental Experts negotiations, offering a technically grounded basis for legally binding distinctions that current diplomatic language lacks.





The only good terrorist is…

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6549339

Human Rights related to AI in Counterterrorism

Counterterrorism outside armed conflict increasingly relies on Artificial Intelligence (AI). States use AI notably for detecting, predicting, and responding to terrorism. Despite acclamations of States and regional organizations that AI needs to be used in compliance with international human rights law, there is still insufficient clarity on how human rights law guides and governs legality in the use of AI in counterterrorism. Accordingly, this chapter analyses the key human rights that are relevant to - and which help to determine the lawful use of - AI in counterterrorism. This concerns, notably, the right to privacy; the rights to liberty and security; the principle of non-discrimination; the right to freedom of expression; the right to freedom of peaceful assembly; and the rights to life and to freedom from ill-treatment. The chapter assesses how these rights concern the use of AI in counterterrorism by relating them to the functions of AI applications. This is achieved through analysis of international and national rules and jurisprudence that are directly or indirectly pertinent.





I thought Trump still hated Musk? Does he hate the French more?

https://www.cnbc.com/2026/04/18/justice-department-france-probe-exlon-musk-x.html

Justice Department refuses to assist French probe into Musk’s X, WSJ reports

The U.S. Justice Department has told French law enforcement it will not assist with efforts to investigate tech billionaire Elon Musk’s social media platform X, The Wall Street Journal reported on Saturday, citing a letter from the DOJ’s Office of International Affairs, dated Friday.



Saturday, April 18, 2026

Privacy, y’all.

https://fpf.org/blog/the-alabama-personal-data-protection-act-brings-consumer-privacy-to-the-heart-of-dixie/

The Alabama Personal Data Protection Act Brings Consumer Privacy to the Heart of Dixie

We had to wait almost two years between when the 19th and 20th state comprehensive privacy laws were enacted, but the gap between the 20th and 21st proved to be a mere month. Governor Ivey signed HB 351, the Alabama Personal Data Protection Act (APDPA) into law on April 16. While this law is based on the popular Washington Privacy Act framework, it departs from that framework in a few ways (most notably in terms of what it is missing). For example, the law lacks a requirement to conduct data protection assessments and makes only passing references to authorized agents and opt-out preference signals. 

The APDPA will go into effect on May 1, 2027. This blog post provides an overview of the law’s scope, definitions, consumer rights, business obligations, and enforcement provisions. 





Sometimes the solutions to military questions are very similar to civilian ones. e.g. “Where is the enemy?” is similar to “Where are my bags?”

https://www.theregister.com/2026/04/17/dutch_navy_frigate_tracked/

Opsec oopsie: Dutch navy frigate location outed by mailing it a Bluetooth tracker

Militaries around the world spend countless hours training, developing policies, and implementing best operational security practices, so imagine the size of the egg on the face of the Dutch navy when journalists managed to track one of its warships for less than the cost of some hagelslag and a coffee.

The security snafu was reported by Dutch regional broadcaster Omroep Gelderland. In a Thursday report, Omroep Gelderland journalist Just Vervaart said the broadcaster was able to track HNLMS Evertsen, a Dutch air-defense frigate deployed to help protect France’s aircraft carrier Charles de Gaulle against missile threats, by mailing a Bluetooth tracker concealed in a postcard to the ship.