Saturday, July 27, 2024

Adding insult to injury. We always check with primary sources, like the vendor’s website, rather than random emails.

https://www.makeuseof.com/dont-open-any-crowdstrike-repair-emails-theyre-all-fakes/

Don't Open Any CrowdStrike Repair Emails; They're All Fakes

Whenever something big happens in the technology world, scammers are not far behind. The 2024 CrowdStrike outage is no different, and while the issues have been mostly sorted out, scammers are hot on its heels, peddling their latest tricks.





Whenever management says “Oh shiny! Something we could do!” someone needs to add, “But we probably shouldn’t.”

https://www.ft.com/content/1e8f5778-a592-42fd-80f6-c5daa8851a21

Musk’s X faces questions from watchdog over AI data grab

Social media company’s move to automatically allow user data to train chatbot could breach European privacy rules

Europe’s data protection watchdog is “seeking clarity” on a decision by Elon Musk’s X to allow users’ data to automatically be fed into his artificial intelligence start-up xAI, placing fresh regulatory scrutiny on the social media platform.

X users discovered on Friday that they had been ‘opted-in’ to having their posts to the site, as well as their interactions with its Grok chatbot, used for “training and fine-tuning” xAI’s systems.

The move was made without first obtaining users’ explicit consent for data sharing. The setting can only be changed on the desktop version of X, so users are currently unable to opt out via its mobile apps.



Friday, July 26, 2024

Do we have your attention yet?

https://www.reuters.com/technology/meta-be-hit-with-first-eu-antitrust-fine-linking-marketplace-facebook-sources-2024-07-25/

Exclusive: Meta to be hit with first EU antitrust fine for linking Marketplace and Facebook, sources say

Meta Platforms is set to be hit in a few weeks with its first EU antitrust fine for tying classified advertisements service Marketplace with its Facebook social network, people with direct knowledge of the matter said.

Meta could face a fine of as much as $13.4 billion – or 10% of its 2023 global revenue – although EU sanctions are usually much lower than that cap.





Perspective.

https://www.bespacific.com/about-the-insurrection-act/

About the Insurrection Act

Brennan Center – Via Reddit – “I’m Joseph Nunn, counsel in the Liberty and National Security Program at the Brennan Center for Justice. Ask me anything about reforming the Insurrection Act, an outdated law that gives the president near limitless power to use the U.S. military as a domestic police force. The Insurrection Act is the most dangerous law in the United States. It gives the president nearly limitless discretion to use the U.S. military as a domestic police force, and it contains no meaningful safeguards against abuse. Congress, which has not updated the law in 150 years, urgently needs to clarify and limit when the president may invoke the Insurrection Act, restrict what the military can do once deployed under this powerful authority, and create mechanisms that will allow Congress and the courts to intervene to stop abuse.” The AMA, featuring Elizabeth Goitein, Hawa Allan, Jack L. Goldsmith, and Joseph Nunn, took place on July 25. Read the Q&A.





Perspective.

https://www.schneier.com/blog/archives/2024/07/the-crowdstrike-outage-and-market-driven-brittleness.html

The CrowdStrike Outage and Market-Driven Brittleness

The brittleness of modern society isn’t confined to tech. We can see it in many parts of our infrastructure, from food to electricity, from finance to transportation. This is often a result of globalization and consolidation, but not always. In information technology, brittleness also results from the fact that hundreds of companies, none of which you’ve heard of, each perform a small but essential role in keeping the internet running. CrowdStrike is one of those companies.

This brittleness is a result of market incentives. In enterprise computing—as opposed to personal computing—a company that provides computing infrastructure to enterprise networks is incentivized to be as integral as possible, to have as deep access into their customers’ networks as possible, and to run as leanly as possible.

Redundancies are unprofitable. Being slow and careful is unprofitable. Being less embedded in and less essential and having less access to the customers’ networks and machines is unprofitable—at least in the short term, by which these companies are measured. This is true for companies like CrowdStrike. It’s also true for CrowdStrike’s customers, who also didn’t have resilience, redundancy, or backup systems in place for failures such as this because they are also an expense that affects short-term profitability.



Thursday, July 25, 2024

I think this is a huge underestimation of the total...

https://www.theguardian.com/technology/article/2024/jul/24/crowdstrike-outage-companies-cost

CrowdStrike global outage to cost US Fortune 500 companies $5.4bn

Banking and healthcare firms, major airlines expected to suffer most losses, according to insurer Parametrix

Companies in banking and healthcare are expected to be hit the hardest, according to the insurer Parametrix, as well as major airlines. The total insured losses for the non-Microsoft Fortune 500 companies could be between $540m and $1.08bn.

A variety of industries are still struggling to rectify the damage from CrowdStrike’s outage, which grounded thousands of flights, caused turmoil at hospitals and crashed payment systems in what experts have described as the largest IT failure in history. The outage exposed how modern tech systems are built on precarious ground, with faulty code in a single update able to bring down operations around the world.

CrowdStrike – a Texas-based, multibillion-dollar company that has lost about 22% of its stock market value since the outage – has repeatedly apologized for causing the international tech crisis. The company issued a report on Wednesday detailing what went wrong in the update.

The primary cause of the failure stemmed from an update that CrowdStrike pushed to its flagship Falcon platform, which functions as a cloud-based service intended to protect businesses from cyber-attacks and disruptions. The update contained a bug which caused 8.5m Windows machines to crash en masse.

CrowdStrike stated in its postmortem that it plans to increase software testing before issuing updates in the future, and only roll out those updates gradually to prevent the widespread, simultaneous failures that took place last week. The company also plans to issue a more in-depth report on the causes of the outage in the coming weeks.

CrowdStrike is one of the world’s most prominent cybersecurity firms, and was valued at around $83bn before the outage. [$83bn × 0.22 = $18.26bn – Bob] It services about 538 of the Fortune 1000 companies, according to its website, and operates around the world. That ubiquity made the consequences of its botched update particularly severe, showcasing how many companies are reliant on the same products to keep operations running.
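Bob’s bracketed back-of-the-envelope check can be sketched as a two-line calculation (figures are the article’s: a ~$83bn pre-outage valuation and a ~22% drop):

```python
# Back-of-the-envelope check of the market-cap loss noted in the article.
pre_outage_value_bn = 83.0   # approximate valuation before the outage
drop_fraction = 0.22         # approximate stock decline since the outage

loss_bn = pre_outage_value_bn * drop_fraction
implied_value_bn = pre_outage_value_bn - loss_bn

print(f"Estimated loss: ${loss_bn:.2f}bn")      # → Estimated loss: $18.26bn
print(f"Implied value: ${implied_value_bn:.2f}bn")
```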





I thought this would have been obvious…

https://www.bespacific.com/ai-trained-on-ai-garbage-spits-out-ai-garbage/

AI trained on AI garbage spits out AI garbage

MIT Technology Review: “AI models work by training on huge swaths of data from the internet. But as AI is increasingly being used to pump out web pages filled with junk content, that process is in danger of being undermined. New research published in Nature shows that the quality of the model’s output gradually degrades when AI trains on AI-generated data. As subsequent models produce output that is then used as training data for future models, the effect gets worse. Ilia Shumailov, a computer scientist from the University of Oxford, who led the study, likens the process to taking photos of photos. “If you take a picture and you scan it, and then you print it, and you repeat this process over time, basically the noise overwhelms the whole process,” he says. “You’re left with a dark square.” The equivalent of the dark square for AI is called “model collapse,” he says, meaning the model just produces incoherent garbage. This research may have serious implications for the largest AI models of today, because they use the internet as their database. GPT-3, for example, was trained in part on data from Common Crawl, an online repository of over 3 billion web pages. And the problem is likely to get worse as an increasing number of AI-generated junk websites start cluttering up the internet…”
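Shumailov’s “photos of photos” analogy can be illustrated with a toy simulation (this is my own minimal sketch, not the Nature paper’s method): each “generation” fits a simple model to the previous generation’s samples, then produces purely synthetic data for the next generation to train on. With finite samples, the fitted parameters drift and the tails of the original distribution are gradually lost.

```python
import random
import statistics

# Toy model-collapse sketch: generation 0 is "real" data; every later
# generation trains only on the synthetic output of the previous model.
random.seed(42)
samples = [random.gauss(0.0, 1.0) for _ in range(200)]  # real data

for generation in range(1, 11):
    # "Train" a model: fit a normal distribution to the current data.
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    # Replace the training set with the model's own output.
    samples = [random.gauss(mu, sigma) for _ in range(200)]
    print(f"gen {generation:2d}: mean={mu:+.3f} stdev={sigma:.3f}")
```

Each fit inherits the sampling noise of the last, so the estimates wander further from the true mean of 0 and standard deviation of 1 as generations pass; that compounding of estimation error is the statistical core of “model collapse.”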





Tools & Techniques. (Could be a useful improvement.)

https://www.windowscentral.com/microsoft/microsoft-unveils-bing-generative-search-enhanced-with-ai-its-a-complete-overhaul-of-traditional-search

Microsoft unveils Bing Generative Search — enhanced with AI, it's a complete overhaul of traditional search

Microsoft has announced a major update to Bing Search that overhauls the search results page with AI at the heart of its experience. Currently available to a small subset of users, Bing Search now incorporates AI-generated answers in addition to traditional search results directly on the search page.

At the very top of the page will be an AI-generated answer created by large and small language models that have reviewed millions of sources to provide the most accurate answer. It will break down that answer into a document index that can provide more information about particular subjects within that search query if you'd like to learn more.

… The search page will also list the sources that the AI-generated text was created from below the answer, and will even present traditional search results in a sidebar on the right for those who are uninterested in Bing's curated AI experience.



Wednesday, July 24, 2024

Once again an assumption has unintended consequences.

https://www.theregister.com/2024/07/24/crowdstrike_preliminary_incident_report/

CrowdStrike blames a test software bug for that giant global mess it made

Whatever the Validator does or is supposed to do, it did not prevent the release of the July 19 Template Instance, despite it being a dud. That happened because CrowdStrike assumed that tests that passed the IPC Template Type delivered in March, and subsequent related IPC Template Instances, meant the July 19 release would be OK.

History tells us that was a very bad assumption. It "resulted in an out-of-bounds memory read triggering an exception."

"This unexpected exception could not be gracefully handled, resulting in a Windows operating system crash."

On around 8.5 million machines.
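The failure pattern described above — tests pass on the template type, so later template instances ship unvalidated and a malformed one triggers an out-of-bounds read — can be sketched in miniature. This is a hypothetical userland analogy, not CrowdStrike’s actual code; all names and field counts here are invented for illustration:

```python
# Hypothetical sketch of the failure mode: the validator checks only the
# template *type*, while the consumer indexes fields of each template
# *instance*. A short instance passes validation, then an unchecked read
# runs past its last field -- the analogue of the out-of-bounds memory
# read that crashed Windows.

EXPECTED_FIELDS = 21  # fields the consumer assumes every instance has

def validate_type(template_type: dict) -> bool:
    # Checks only the type definition, not instances shipped later.
    return template_type.get("field_count") == EXPECTED_FIELDS

def read_field(instance: list, index: int) -> str:
    return instance[index]  # no bounds check: trusts prior validation

template_type = {"name": "IPC", "field_count": 21}
assert validate_type(template_type)   # "March": the type passes tests

bad_instance = ["f"] * 20             # "July": the instance is a dud
try:
    read_field(bad_instance, 20)      # reads past the last field
except IndexError:
    print("unhandled read past end of instance -> crash")
```

In Python the bad read raises a catchable exception; in kernel-mode C code there is no such safety net, which is why the same class of mistake took down the whole operating system.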





Why would anyone want to open this can of worms?

https://therecord.media/ftc-surveillance-pricing-inquiry

FTC launches probe into how companies use data to tailor what each customer pays

The Federal Trade Commission (FTC) announced Tuesday that it has launched an inquiry into how companies surveil consumers to set individualized pricing for the same products and services based on private data from their financial profiles.

The profiles are built from consumer demographics as well as web browsing, credit and geolocation histories, but sometimes even harness real-time data to determine what the agency refers to as “surveillance pricing.”

Eight companies — including Fortune 500 firms Mastercard, JPMorgan Chase and Accenture as well as the consulting firm McKinsey and Co. — have been ordered to explain how they gather and use consumers’ “characteristics and behavior” to set pricing, potentially undermining consumer privacy and marketplace competition, according to an FTC announcement.





When educators fail to learn…

https://pogowasright.org/uk-essex-school-reprimanded-after-using-facial-recognition-technology-for-canteen-payments/

UK: Essex school reprimanded after using facial recognition technology for canteen payments

From the Information Commissioner’s Office:

We have issued a reprimand to a school that broke the law when it introduced facial recognition technology (FRT).
Chelmer Valley High School, in Chelmsford, Essex, first started using the technology in March 2023 to take cashless canteen payments from students.
FRT processes biometric data to uniquely identify people and is likely to result in high data protection risks. To use it legally and responsibly, organisations must have a data protection impact assessment (DPIA) in place. This is to identify and manage the higher risks that may arise from processing sensitive data.
Chelmer Valley High School, which has around 1,200 pupils aged 11-18, failed to carry out a DPIA before starting to use the FRT. This meant no prior assessment was made of the risks to the children’s information. The school had not properly obtained clear permission to process the students’ biometric information and the students were not given the opportunity to decide whether they did or didn’t want it used in this way.
[...]
In March 2023, a letter was sent to parents with a slip for them to return if they did not want their child to participate in the FRT. Affirmative ‘opt-in’ consent wasn’t sought at this time, meaning until November 2023 the school was wrongly relying on assumed consent. The law does not deem ‘opt out’ a valid form of consent and requires explicit permission. Our reprimand also notes most students were old enough to provide their own consent. Therefore, parental opt-out deprived students of the ability to exercise their rights and freedoms.
Ms Currie added:
“A DPIA is required by law – it’s not a tick-box exercise. It’s a vital tool that protects the rights of users, provides accountability and encourages organisations to think about data protection at the start of a project.”
We have provided Chelmer Valley High School with recommendations for the future.



Tuesday, July 23, 2024

Oh what a wicked web we weave…

https://www.theverge.com/2024/7/22/24203479/eu-meta-facebook-pay-or-consent-warning-consumer-protection-cooperation-cpc

EU threatens to fine Meta for saying Facebook is ‘free’

Meta’s “pay or consent” model, which was introduced last year, gives users a choice: pay as much as €12.99 per month to use Facebook and Instagram without ads or consent to letting the company collect and use personal data to serve personalized ads. The EU doesn’t like what it sees as privacy-violating data usage and has already hit Meta separately with Digital Markets Act charges over its model and record fines under the GDPR for transferring user data overseas.

They also say that calling Facebook and Instagram “free” is misleading, since using them still requires users to consent to the use of their data for targeted ads.





Perspective.

https://www.fastcompany.com/91159180/the-first-wave-of-ai-innovation-is-over-heres-what-comes-next

The first wave of AI innovation is over. Here’s what comes next

This is a cycle in innovation that repeats throughout history: For a long time, an almost undetectable amount of knowledge and craft builds up around an idea, like an invisible gas. Then, a spark. An explosion of innovation ensues but, of course, eventually stabilizes. This pattern is called an S-Curve.

… The AI revolution is following this curve. In a 1950 paper, Alan Turing was one of the first computer scientists to explore how to build a thinking machine, starting the slow buildup of knowledge. Seventy years later, the spark: A 2017 research paper, Attention Is All You Need, leads to OpenAI’s development of ChatGPT, which convincingly mimics human conversation, unleashing a global shock wave of innovation based upon generative AI technology.

… We believe the real breakthrough that will allow humanity to jump to the next S-Curve is data produced at work. Workplace data—e.g. product specifications, sales presentations, and customer support interactions—is of far higher quality than what’s left of public data for training purposes, especially compared to running the dregs of the internet through the transformer mill. (The results of which may be why a lot of AI-generated content is already being called “slop.”)



Monday, July 22, 2024

I do not plan to run. There, that should clear things up.

https://www.bespacific.com/a-week-of-nonstop-breaking-political-news-stumps-ai-chatbots/

A week of nonstop breaking political news stumps AI chatbots

Washington Post [unpaywalled]: “In the hour after President Biden announced he would withdraw from the 2024 campaign on Sunday, most popular AI chatbots seemed oblivious to the news. Asked directly whether he had dropped out, almost all said no or declined to give an answer. Asked who was running for president of the United States, they still listed his name. For the past week, we’ve tested AI chatbots’ approach to breaking political stories and found they were largely not able to keep up with consequential real-time news. Most didn’t have current information, gave incorrect answers, or declined to answer and pushed users to check news sources. Now, with just months left until the presidential election and bombshell political news dropping at a steady clip, AI chatbots are distancing themselves from politics and breaking news or refusing to answer at all. AI chatbot technology burst onto the scene two years ago, promising to revolutionize how we get information. Many of the top bots tout their access to recent information, and some have suggested using the tools to catch up on current events. But companies that make chatbots don’t appear ready for their AI to play a larger role in how people follow this election…”





You can never have too much information.

https://www.bespacific.com/the-foia-gov-search-tool-updated/

The FOIA.gov Search Tool Updated

“FOIA.gov, the government’s central resource for information about the Freedom of Information Act (FOIA), now includes additional functionality to help users locate commonly requested law enforcement and related records. The FOIA.gov Search Tool was updated to add a “Law Enforcement records” pre-defined user journey that helps the public more quickly locate commonly requested information. This user journey supplements the existing journeys that help users identify agencies with some of the most common types of requested records, including Immigration/Travel records, Tax records, Social Security records, Medical records, Personnel records, and Military records. The new Law Enforcement records user journey not only helps requesters identify the multitude of federal law enforcement agencies subject to the FOIA, but also provides useful guidance for those seeking state and local records. Since the launch of the Search Tool in October 2023, nearly 99,000 queries have been recorded. More than half of users enter a predefined user journey, making the predefined journeys instrumental in helping requesters identify agencies and documents of interest. In introducing the new Law Enforcement user journey, the machine learning functionality used to power the Search Tool was retrained and benefitted from recent advancements. As a result, search results are anticipated to be more accurate and precise. As users continue to interact with the Search Tool, we will analyze usage data to further improve the quality of the results. The latest update now allows users to provide targeted feedback on the results they receive.”



Sunday, July 21, 2024

It became necessary to destroy the town to save it. Have we given too much access for “fairness?”

https://www.wsj.com/tech/cybersecurity/microsoft-tech-outage-role-crowdstrike-50917b90?st=pkas1bzrhcoj0os&reflink=desktopwebshare_permalink

Blue Screens Everywhere Are Latest Tech Woe for Microsoft

A Microsoft spokesman said it cannot legally wall off its operating system in the same way Apple does because of an understanding it reached with the European Commission following a complaint. In 2009, Microsoft agreed it would give makers of security software the same level of access to Windows that Microsoft gets.



(Related)

https://www.nytimes.com/2024/07/19/us/politics/crowdstrike-outage.html?unlocked_article_code=1.8k0._ZDj.e5unf_bqIJNo&smid=url-share

What Happened to Digital Resilience?

With each cascade of digital disaster, new vulnerabilities emerge. The latest chaos wasn’t caused by an adversary, but it provided a road map of American vulnerabilities at a critical moment.





You can’t object to an AI?

https://scholarlycommons.law.case.edu/jolti/vol15/iss2/6/

Artificial Intelligence in the Courtroom: Forensic Machines, Expert Witnesses, and the Confrontation Clause

From traditional methods like ballistics and fingerprinting, to the probabilistic genotyping models of the twenty-first century, the forensic laboratory has evolved into a cutting-edge area of scientific exploration. This rapid growth in forensic technologies will not stop here. Considering recent developments in artificial intelligence (“AI”), future forensic tools will likely become increasingly sophisticated. To be sure, AI-enabled forensic tools are far from theoretical; AI applications in the forensic sciences have already emerged in practice. Machine learning-enabled acoustic gunshot detectors, facial recognition software, and a variety of pattern recognition learning models are already disrupting law enforcement operations across the country. Soon, criminal defendants will need to learn how to navigate a courtroom dominated by AI-enabled expert systems. Unfortunately, there is little guidance in the caselaw or in the Federal Rules of Evidence on how exactly criminal defendants should approach AI as evidence in the courtroom. Although a handful of scholars have taken up the task of exploring the intersection of AI and evidence law, these studies have primarily focused on issues in authentication or issues with applying the Daubert standard to AI evidence. This study contributes to this ongoing exploration of AI in the courtroom by providing an analysis of the rights of criminal defendants facing AI-generated testimony under the Confrontation Clause of the Sixth Amendment. This study will illustrate that, in a future where AI-enabled forensic tools are increasingly used to inculpate defendants in criminal prosecutions, the right to confrontation will become increasingly eroded. This is largely because courts have carved out a broad “machine-generated data” exception to the Confrontation Clause. Under this exception, data generated by a sufficiently autonomous machine will fall outside the ambit of constitutional protection. 
The rationale is that such transmissions are too autonomous to be attributed to any human actor, and the Confrontation Clause protects only statements made by a human rather than a machine learning model. This exception to the right to confrontation is significant. Practically, these limitations could have a measurable negative impact on a defendant’s capacity to test the reliability of an AI model in court. Normatively, this study illustrates that, in a world where AI algorithms proffer inculpatory evidence of criminal wrongdoing, the right to confrontation adds little value for criminal defendants. As courts and scholars reinterpret and refine the rules of evidence to better reflect technological realities, some attention should be given to the proper place of the right to confrontation.