Saturday, February 22, 2025

That’s one answer… (Expect users to find a wide variety of alternative encryption packages.)

https://gizmodo.com/apple-says-no-to-uk-backdoor-order-will-pull-e2e-cloud-encryption-instead-2000566862

Apple Says ‘No’ to UK Backdoor Order, Will Disable E2E Cloud Encryption Instead

Good work, Britain. Owners of Apple devices in the United Kingdom will be a little less safe moving forward as the company pulls its most secure end-to-end (E2E) encryption from the country. The move is in response to UK government demands that Apple build a backdoor into its iCloud encryption feature, one that would allow law enforcement to access the cloud data of any iPhone user around the world under the guise of national security.
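
If Apple's end-to-end option disappears in the UK, nothing stops users from encrypting files themselves before those files ever reach the cloud, which is presumably where the "alternative encryption packages" come in. A minimal sketch in Python, assuming the third-party cryptography package; the file names and paths are hypothetical:

from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_for_cloud(src: Path, dst: Path, key: bytes) -> None:
    # Encrypt src locally; only ciphertext ever reaches the synced folder.
    dst.write_bytes(Fernet(key).encrypt(src.read_bytes()))

def decrypt_from_cloud(src: Path, dst: Path, key: bytes) -> None:
    # Reverse: recover the plaintext from the synced ciphertext.
    dst.write_bytes(Fernet(key).decrypt(src.read_bytes()))

key = Fernet.generate_key()  # keep this key offline, NOT in the same cloud
encrypt_for_cloud(Path("taxes.pdf"), Path("iCloud Drive/taxes.pdf.enc"), key)

The trade-off is the one the provider used to handle for you: lose the key and the data is gone, and server-side conveniences like search and previews stop working.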





Useful summary?

https://www.jdsupra.com/legalnews/strategic-artificial-intelligence-4822723/

Strategic Artificial Intelligence Planning Alert: A State and Federal Regulatory Roadmap for 2025 Compliance

According to the World Economic Forum, 88 percent of C-suite executives say that adopting artificial intelligence (AI) in their companies is a key initiative for 2025.

Companies are pivoting from merely testing AI to expanding AI use cases in their business processes. While these new use cases can bring significant business benefits, they also introduce real contractual and legal risks that should be thoughtfully considered and potentially mitigated.

To help you reduce the barriers to using AI and understand the legal risks involved, Hinshaw's Privacy, Security, & Artificial Intelligence team has prepared this Strategic AI Planning Alert for 2025. Keep reading to learn about key AI laws, legislation, guidance, and orders that may impact your organization's AI governance and use.





Tools & Techniques.

https://www.zdnet.com/article/geminis-new-free-feature-can-save-you-hours-of-tedious-pdf-analysis/

Gemini's new free feature can save you hours of tedious PDF analysis

Those of you who use Google Gemini for free can now take advantage of a feature formerly limited to paid subscribers. On Thursday, the Google Gemini account on X announced that the document upload capability is now available to all Gemini users.

How it works

Using this option, free users can upload a variety of file types to Gemini for analysis. These include PDFs, text files, Word documents, PowerPoint presentations, and Google Docs files. After Gemini processes your uploaded file, you can request an AI-generated summary and ask questions about the content in the file.

The $20-per-month paid version of Gemini Advanced handles far more file types than the free version. With the paid version, you can also upload CSV files, Excel spreadsheets, CSS files, HTML files, JavaScript files, PHP files, and many other file types, several of them developer-oriented.
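
The article describes the consumer app, but the same upload-then-ask workflow is available to developers through Google's google-generativeai Python SDK. A rough sketch, assuming you have an API key; the model name and file name are illustrative, and model names change, so check the current docs:

import google.generativeai as genai  # pip install google-generativeai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key

doc = genai.upload_file("quarterly_report.pdf")    # upload the PDF
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model name

# First pass: an AI-generated summary of the uploaded file.
print(model.generate_content([doc, "Summarize this document in five bullets."]).text)

# Then ask questions about the content, as the article describes.
print(model.generate_content([doc, "What deadlines does this document mention?"]).text)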



Thursday, February 20, 2025

We don’t need no stinking laws! We have infallible AI lawyers!

https://www.bespacific.com/trump-cancels-the-secs-westlaw-subscription/

Trump Cancels The SEC’s Westlaw Subscription

Above the Law: “The new administration has made it clear that they don’t have much interest in enforcing the nation’s securities laws. Except maybe as an avenue to accuse companies of dishonoring shareholders by promoting ESG initiatives. The point is, Trump has a meme coin to sell and a media company stock to fluff, so SEC enforcement has to hit the bench for a few years. Meanwhile, shadow president Elon Musk still nurses a grudge toward the agency for having the audacity to enforce the voluntary settlement he signed curtailing the risk he might commit fraud at scale over social media. But that’s not actually the reason they just cut off the SEC’s Westlaw access. And while on the surface, the canceled contract looks like another ill-conceived cost-cutting measure from Musk’s bumbling speedrun through the federal budget, the reality appears to be much dumber… Earlier today, we learned that the word had gone out to executive agencies to purge all access to a variety of media that don’t meet the administration’s editorial standards…” [I doubt this is the only agency that was forced to cancel access to Westlaw and Lexis. Please share additional information so that I may track this accurately. Thank you.]





A book at my level…

https://www.bespacific.com/book-review-generative-ai-for-dummies/

Book Review: Generative AI For Dummies

Via LLRX – Book Review: Generative AI For Dummies – Jerry Lawson’s opinion of the new book, Generative AI for Dummies, is that it demystifies the complex world of generative AI for audiences from all walks of life. If you’re after a fast, engaging, and practical introduction to AI—and maybe even a little chuckle or two along the way—this book delivers.





Perspective.

https://fpf.org/blog/fpf-releases-infographic-highlighting-the-spectrum-of-ai-in-education/

FPF Releases Infographic Highlighting the Spectrum of AI in Education

To highlight the wide range of current use cases for Artificial Intelligence (AI) in education and future possibilities and constraints, the Future of Privacy Forum (FPF) today released a new infographic, Artificial Intelligence in Education: Key Concepts and Uses. While generative AI tools that can write essays, generate and alter images, and engage with students have brought increased attention to the topic, schools have been using AI-enabled applications for years.

The AI in Education infographic builds on FPF’s 2023 The Spectrum of Artificial Intelligence report and infographic, and illustrates a sample of the use cases these technologies support, tailored to the school environment.



Wednesday, February 19, 2025

Perspective.

https://www.bespacific.com/the-generative-ai-con/

The Generative AI Con

Where’s Your Ed At, Edward Zitron: “It’s been just over two years and two months since ChatGPT launched, and in that time we’ve seen Large Language Models (LLMs) blossom from a novel concept into one of the most craven cons of the 21st century — a cynical bubble inflated by OpenAI CEO Sam Altman built to sell into an economy run by people that have no concept of labor other than their desperation to exploit or replace it. I realize that Large Language Models like GPT-4o — the model that powers ChatGPT and a bunch of other apps — have use cases, and I’m fucking tired of having to write this sentence. There are people that really like using Large Language Models for coding (even if the code isn’t good or makes systems less secure and stable) or get something out of Retrieval-Augmented Generation (RAG)-powered search, or like using one of the various AI companions or journal apps. I get it. I get that there are people that use LLM-powered software, and I must be clear that anecdotal examples of some people using some software that they kind-of like is not evidence that generative AI is a sustainable or real industry at the trillion-dollar scale that many claim it is. I am so very bored of having this conversation, so I am now going to write out some counterpoints so that I don’t have to say them again…”





Perhaps we need separation of Health and State?

https://pogowasright.org/missouri-bill-proposes-registry-for-pregnant-mothers/

Missouri bill proposes registry for pregnant mothers

Megan Mueller reports:

A bill proposed by a state representative would create a list of expectant mothers in the state in an effort to “reduce the number of preventable abortions,” according to the bill.
House Bill 807, nicknamed the “Save MO Babies Act,” was proposed by Rep. Phil Amato, R-Arnold.
The bill summary states that if passed, the state would create a registry of every expectant mother in the state “who is at risk for seeking an abortion” through the Department of Social Services, the Division of Maternal and Child Services. It would go into effect July 1, 2026.
This registry would also include prospective adoptive parents who have completed certain screenings—including background checks, home studies, and other investigations, according to the bill.

Read more at Fox2 Now.



Tuesday, February 18, 2025

Good advice.

https://www.bespacific.com/back-up-everything-even-if-elon-musk-isnt-looking-at-it/

Back Up Everything. Even if Elon Musk Isn’t Looking at It.

The New York Times [unpaywalled] – “In recent weeks, Elon Musk and his aides have gained access to many federal agencies’ systems and unknown amounts of data. Many readers have written in to share their fears that the agencies — and the personal data they possess on hundreds of millions of taxpayers — are now vulnerable. When people tinker with vital systems, things can go wrong. New vulnerabilities can emerge that thieves could exploit, or existing tax or loan payments could disappear. And one wrong move can bring a whole website down for days or longer. The level of risk isn’t clear, and in uncertain situations, it’s tempting to do something to feel that you’re protecting yourself. That instinct is perfectly rational. But don’t just download your history of paying into Social Security or freeze access to your credit files because of the politics of now. Back up everything important, everywhere you can. Do this at least once a year or so. It’s just good hygiene. Having multiple copies of all of the things that help you run your life brings a certain kind of peace that lacks a perfect word in English, but it’s the quality or state of being well sorted. Here’s a guide for what to do.”
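
The advice is about habits rather than tools, but the once-a-year digital half can be a very small script. A minimal sketch; the folder choices here are hypothetical, so adjust the paths, and keep at least one copy on a different drive or service:

import shutil
from datetime import date
from pathlib import Path

SOURCES = [Path.home() / "Documents", Path.home() / "Pictures"]  # example picks
DEST = Path.home() / "Backups"
DEST.mkdir(exist_ok=True)

stamp = date.today().isoformat()
for src in SOURCES:
    # Writes e.g. ~/Backups/Documents-2025-02-18.zip
    shutil.make_archive(str(DEST / f"{src.name}-{stamp}"), "zip", root_dir=src)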





Surveillance is easy…

https://databreaches.net/2025/02/18/the-myth-of-jurisdictional-privacy/

The Myth of Jurisdictional Privacy

Understanding Global Surveillance

In discussions of online privacy, you’ll often hear passionate debates about jurisdiction, with particular focus on avoiding the “Five Eyes” intelligence alliance countries (USA, UK, Canada, Australia, and New Zealand). The argument goes that by choosing a service provider outside these nations, you can somehow escape their surveillance reach.

But let’s pause and think about that for a moment. In a world where digital information flows freely across borders, where undersea cables connect continents, and where global tech infrastructure is deeply interconnected, does it really make sense to think that physical jurisdiction offers meaningful protection from surveillance?





Perspective. (Interesting metric)

https://www.zdnet.com/article/knowledge-management-takes-center-stage-in-the-ai-journey/

Knowledge management takes center stage in the AI journey

According to the Ark Invest Big Ideas 2025 report, agents will increase enterprise productivity via software. Companies that deploy agents should be able to increase unit volume with the same workforce and optimize their workforce toward higher-value activities.

Artificial intelligence (AI) will also supercharge knowledge work. Through 2030, Ark expects the amount of software deployed per knowledge worker to grow considerably as businesses invest in productivity solutions. AI agents are poised to accelerate the adoption of digital applications and create an epochal shift in human-computer interaction.



Monday, February 17, 2025

How AI will make us all dumber…

https://nmn.gl/blog/ai-and-learning

New Junior Developers Can’t Actually Code

We’re at this weird inflection point in software development. Every junior dev I talk to has Copilot or Claude or GPT running 24/7. They’re shipping code faster than ever. But when I dig deeper into their understanding of what they’re shipping? That’s where things get concerning.

Sure, the code works, but ask why it works that way instead of another way? Crickets. Ask about edge cases? Blank stares.

The foundational knowledge that used to come from struggling through problems is just… missing.

We’re trading deep understanding for quick fixes, and while it feels great in the moment, we’re going to pay for this later.
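
A hypothetical illustration of the pattern the author describes: code that “works” on the happy path but was shipped without anyone asking about the edges.

def average(scores):
    # Fine on the demo input; raises ZeroDivisionError on an empty list.
    return sum(scores) / len(scores)

def average_safe(scores):
    # The fix is one conditional; knowing to ask "what if scores is empty?"
    # is the foundational habit the post argues is going missing.
    return sum(scores) / len(scores) if scores else 0.0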





A security perspective.

https://thehackernews.com/2025/02/cisos-expert-guide-to-ctem-and-why-it.html

CISO's Expert Guide To CTEM And Why It Matters

Cyber threats evolve—has your defense strategy kept up? A new free guide, linked above, explains why Continuous Threat Exposure Management (CTEM) is the smart approach to proactive cybersecurity.



Sunday, February 16, 2025

Worth thinking about.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5131058

Large Language Models and International Law

Large Language Models (LLMs) have the potential to transform public international lawyering. ChatGPT and similar LLMs can do so in at least five ways: (i) helping to identify the contents of international law; (ii) interpreting existing international law; (iii) formulating and drafting proposals for new legal instruments or negotiating positions; (iv) assessing the international legality of specific acts; and (v) collating and distilling large datasets for international courts, tribunals, and treaty bodies.

The article uses two case studies to show how LLMs may work in international legal practice. First, it uses LLMs to identify whether particular behavioral expectations rise to the level of customary international law. In doing so, it tests LLMs’ ability to identify persistent objectors and a more egalitarian collection of state practice, as well as their proclivity to produce orthogonal or inaccurate answers. Second, it explores how LLMs perform in producing draft treaty texts, ranging from a U.S.-China extradition treaty to a treaty banning the use of artificial intelligence in nuclear command and control systems.

Based on our analysis of the five potential functions and the two more detailed case studies, the article identifies four roles for LLMs in international law: as collaborator, confounder, creator, or corruptor. In some cases, LLMs will be collaborators, complementing existing international lawyering by drastically improving the scope and speed with which users can assemble and analyze materials and produce new texts. At the same time, without careful prompt engineering and curation of results, LLMs may generate confounding outcomes, leading international lawyers down inaccurate or ambiguous paths. This is particularly likely when LLMs fail to accurately explain or defend particular conclusions. Further, LLMs also hold surprising potential to help to create new law by offering inventive proposals for treaty language or negotiations.

Most importantly, we highlight the potential for LLMs to corrupt international law by fostering automation bias in users. That is, even where analog work by international lawyers would produce different results, LLM results may soon be perceived to accurately reflect the contents of international law. The implications of this potential are profound. LLMs could effectively realign the contents and contours of international law based on the datasets they employ. The widespread use of LLMs may even incentivize states and others to push their desired views into those datasets to corrupt LLM outputs. Such risks and rewards lead us to conclude with a call for further empirical and theoretical research on LLMs’ potential to assist, reshape, or redefine international legal practice and scholarship.
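
Function (i), identifying the contents of international law, is the easiest to picture concretely. A purely illustrative sketch using the OpenAI Python SDK, not the paper's method; the model name and prompt are assumptions, and the authors' warning about confounding and automation bias applies to exactly this kind of output:

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Does the prohibition on transboundary environmental harm rise to "
    "customary international law? Identify supporting state practice, "
    "opinio juris, and any persistent objectors, and cite your sources."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
# Treat this as a starting point for curation, never as settled law.
print(response.choices[0].message.content)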





Not sure I agree.

https://thejoas.com/index.php/thejoas/article/view/263

The Intersection of Ethics and Artificial Intelligence: A Philosophical Study

The rapid development of artificial intelligence (AI) has had a significant impact on many aspects of human life, from the economy and education to health. These advances also raise complex ethical challenges, such as privacy concerns, algorithmic bias, moral responsibility, and the potential misuse of technology. This research explores the intersection of ethics and artificial intelligence through a philosophical approach. The study uses a qualitative method based on literature review (library research), examining various classical and contemporary ethical theories and their application to AI development. The results show that AI presents new moral dilemmas that traditional ethical frameworks cannot fully answer. For example, the concept of responsibility in AI becomes blurred when decisions are made by autonomous systems without human intervention. Additionally, bias in AI training data indicates the need for strict ethical oversight in the design and implementation of this technology. The study also highlights the need for a multidisciplinary approach to drafting ethical guidelines that can accommodate future AI developments. The research thus aims to enrich the discourse on AI ethics and to offer a deeper philosophical perspective on the moral challenges it raises.





You only get out what you design in… (Garbage in, garbage out.)

https://www.livescience.com/technology/artificial-intelligence/older-ai-models-show-signs-of-cognitive-decline-study-shows

Older AI models show signs of cognitive decline, study shows

People increasingly rely on artificial intelligence (AI) for medical diagnoses because of how quickly and efficiently these tools can spot anomalies and warning signs in medical histories, X-rays and other datasets before they become obvious to the naked eye. But a new study published Dec. 20, 2024 in the BMJ raises concerns that AI technologies like large language models (LLMs) and chatbots, much like people, show signs of deteriorating cognitive abilities with age.

"These findings challenge the assumption that artificial intelligence will soon replace human doctors," the study's authors wrote in the paper, "as the cognitive impairment evident in leading chatbots may affect their reliability in medical diagnostics and undermine patients' confidence."