Not always a bad thing, but is one set of tech skills enough?
https://www.bespacific.com/opm-cuts-degree-requirements-for-government-tech-jobs-in-new-standards/
OPM cuts degree requirements for government tech jobs in new standards
Prefatory note, not referenced in this article: job applicants for federal employment are now asked to answer questions to determine loyalty to Trump and willingness to execute Trump’s Executive Orders. Via NextGov/FCW: “The Office of Personnel Management released new classification and qualification standards for technology employees on Monday that make it easier for those without higher education degrees to get government jobs. The update is meant to move the government from relying on strict requirements for higher education and years of experience when hiring and promoting workers to using assessments meant to actually test for the skills needed for a given job. The new standards for technology employees no longer include degree requirements, an OPM official, who spoke on the condition of anonymity, told Nextgov/FCW…OPM is now rewriting the standards for all 604 occupational series roles and looking to reduce the number of series, too. The agency aims to move from self-attestation of skills in government hiring to formal assessments to test for aptitude for a given job…”
Where were you on the night in question? (Within 1751 feet of this spot...)
Virginia enacts ban on precise geolocation data sales as momentum for similar prohibitions builds
Suzanne Smiley reports:
The governor of Virginia on Monday signed a law banning the sale of citizens’ precise geolocation data, a sign of growing momentum for such laws at the state level.
The legislation bars the sale of geolocation data within a 1,750-foot radius, a buffer large enough to keep data brokers from pinpointing where consumers live, work, worship, shop and otherwise travel.
The bill, which was passed as an amendment to Virginia’s existing comprehensive data privacy law, received unanimous bipartisan support in the state’s legislature and takes effect on July 1.
Read more at The Record.
Why AI bias is not obvious.
https://www.nature.com/articles/s41586-026-10319-8
Language models transmit behavioural traits through hidden signals in data
Large language models (LLMs) are increasingly used to generate data to train improved models [1–3], but it remains unclear what properties are transmitted in this model distillation [4,5]. Here we show that distillation can lead to subliminal learning—the transmission of behavioural traits through semantically unrelated data. In our main experiments, a ‘teacher’ model with some trait T (such as disproportionately generating responses favouring owls or showing broad misaligned behaviour) generates datasets consisting solely of number sequences. Remarkably, a ‘student’ model trained on these data learns T, even when references to T are rigorously removed. More realistically, we observe the same effect when the teacher generates math reasoning traces or code. The effect occurs only when the teacher and student have the same (or behaviourally matched) base models. To help explain this, we prove a theoretical result showing that subliminal learning arises in neural networks under broad conditions and demonstrate it in a simple multilayer perceptron (MLP) classifier. As artificial intelligence systems are increasingly trained on the outputs of one another, they may inherit properties not visible in the data. Safety evaluations may therefore need to examine not just behaviour, but the origins of models and training data and the processes used to create them.
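The intuition behind the shared-base-model condition can be sketched in a toy linear setting (this is my own illustrative construction, not the paper's code; the backbone/head names and all numbers are assumptions). A "teacher" shares a backbone with the "student"; the teacher's trait lives in one head, but the distillation data is generated only from a second, unrelated head. Because both heads write gradients into the shared backbone, one distillation step still moves the student's backbone in the teacher's trait direction:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, lr = 8, 500, 0.1

# Shared "base model": a linear backbone B0 feeding two linear heads.
B0 = rng.normal(size=(d, d))
trait_head = rng.normal(size=d)   # expresses the behavioural trait T
distil_head = rng.normal(size=d)  # produces the distillation data

# Teacher = base backbone plus a trait update delta.
delta = 0.5 * rng.normal(size=(d, d))
B_teacher = B0 + delta

# Distillation data: teacher outputs from the *unrelated* head only;
# the trait head never appears in the training targets.
X = rng.normal(size=(d, N))
y = distil_head @ B_teacher @ X

# Student starts from the SAME base backbone and takes one gradient
# step on mean-squared error against the teacher's distillation outputs.
resid = distil_head @ B0 @ X - y                              # shape (N,)
grad = np.outer(distil_head, (resid[None, :] * X).sum(axis=1)) / N
B_student = B0 - lr * grad

# Did the student's backbone move toward the teacher's trait direction,
# despite the data being "semantically unrelated" to the trait head?
alignment = float(np.sum((B_student - B0) * delta))
print(f"alignment with trait direction: {alignment:.4f}")
```

In this linear case the alignment is provably non-negative (the gradient step equals `lr * distil_head @ distil_head.T @ delta @ (X @ X.T / N)`, and its inner product with `delta` is a quadratic form in a positive-semidefinite matrix), which mirrors the paper's point that gradient descent from a shared initialization pulls the student toward the teacher even on trait-free data. A student with a different random backbone would get no such guarantee.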