Saturday, January 11, 2025

Could significantly skew your understanding of your customer base.

https://www.zdnet.com/article/ai-agents-may-soon-surpass-people-as-primary-application-users/

AI agents may soon surpass people as primary application users

Tomorrow's application users may look quite different from what we know today -- and we're not just talking about more GenZers. Many users may actually be autonomous AI agents.

That's the word from a new set of predictions for the decade ahead issued by Accenture, which highlights how our future is being shaped by AI-powered autonomy. By 2030, agents -- not people -- will be the "primary users of most enterprises' internal digital systems," the study's co-authors state. By 2032, "interacting with agents surpasses apps in average consumer time spent on smart devices."



Friday, January 10, 2025

No doubt copied from the Stalker's Handbook. (Perhaps they should have listed the ones that don’t sell your data.)

https://www.bespacific.com/here-is-a-list-of-every-app-on-your-phone-selling-your-location-data/

Here is a list of every app on your phone selling your location data

Via Austin Corbett @austincorbett.bsky.social – Here is a list of every app on your phone selling your location data to advertisers, interested unknown 3rd parties, and the US government. Thanks to 404 Media and @josephcox.bsky.social

    • There are 12,373 apps on this Google doc as of today – the apps are used by children and adults and include: word games, puzzles, music, pets, sports, animals, solitaire, food, cooking, QR codes, gaming, news… the list goes on and on and on. No doubt you will find a few or dozens that impact your privacy.
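If you export the Google doc and a list of your installed apps, checking for overlap is a simple set intersection. A minimal sketch (file contents and app names here are hypothetical, not from the actual list):

```python
# Sketch: cross-reference your installed apps against the published list of
# apps selling location data. Names are matched case-insensitively.

def flagged_apps(installed, selling):
    """Return installed app names that also appear in the selling list."""
    selling_set = {name.strip().lower() for name in selling}
    return sorted(app for app in installed if app.strip().lower() in selling_set)

if __name__ == "__main__":
    # Hypothetical example data standing in for the two real lists.
    installed = ["Word Crush", "My Solitaire", "CalmCam"]
    selling = ["my solitaire", "QR Scanner Pro", "Word Crush"]
    print(flagged_apps(installed, selling))
```

In practice you would read the two lists from the exported spreadsheet and your device's app inventory rather than hard-coding them.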





Because you should…

https://www.zdnet.com/article/how-to-encrypt-any-email-in-outlook-gmail-and-other-popular-services/

How to encrypt any email - in Outlook, Gmail, and other popular services

Say, for example, you include sensitive information in an innocent email, only to discover that a bad actor intercepted the message, read the content of that email, and extracted the information for some nefarious purpose.

You don't want that. Even if it does require an extra bit of work on your part, being safe is so much better than being sorry.



Thursday, January 09, 2025

Still searching for the lowest common denominator.

https://pogowasright.org/state-attorneys-general-issue-guidance-on-privacy-artificial-intelligence/

State Attorneys General Issue Guidance On Privacy & Artificial Intelligence

Libbie Canter and Jayne Ponder of Covington & Burling write:

Attorneys General in Oregon and Connecticut issued guidance over the holiday interpreting their authority under their state comprehensive privacy statutes and related authorities. Specifically, the Oregon Attorney General’s guidance focuses on laws relevant for artificial intelligence (“AI”), and the Connecticut Attorney General’s guidance focuses on opt-out preference signals that go into effect on January 1, 2025 in the state.
Oregon Guidance on AI Systems
On December 24, Oregon Attorney General Ellen Rosenblum issued guidance, “What you should know about how Oregon’s laws may affect your company’s use of Artificial Intelligence,” which underscores that the state’s Unlawful Trade Practices Act (“Oregon UTPA”), Consumer Privacy Act (“OCPA”), Equality Act, and other legal authorities apply to AI.  After noting the opportunities for Oregon’s economy – from streamlining tasks to delivering personalized services – the guidance states that AI can involve concerns around privacy, discrimination, and accountability.

Read more at Inside Privacy.





Can you say, “evidence?”

https://www.wired.com/story/bee-ai-omi-always-listening-ai-wearables/

Your Next AI Wearable Will Listen to Everything All the Time

The latest crop of AI-enabled wearables like Bee AI and Omi listen to your conversations to help organize your life. They are also normalizing embedded microphones that are always on.





Who decides what technology is appropriate?

https://www.techradar.com/pro/how-to-beat-shadow-ai-across-your-organization

How to beat ‘shadow AI’ across your organization

According to Microsoft, 78% of knowledge workers regularly use their own AI tools to complete work, yet a huge 52% don't disclose this to employers. As a result, companies are exposed to a myriad of risks, including data breaches, compliance violations, and security threats.

Addressing these challenges requires a multi-faceted approach, comprising strong governance, clear communication, and versatile monitoring and management of AI tools, all without compromising staff freedom and flexibility.
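One small, concrete piece of such a governance program is an allowlist check against observed tool usage. A minimal sketch, assuming hypothetical tool names and a usage log you already collect:

```python
# Sketch: flag "shadow AI" by diffing observed AI tool usage against an
# approved-tools allowlist. Tool names here are illustrative only.

APPROVED_AI_TOOLS = {"corp-copilot", "internal-llm"}

def find_shadow_ai(observed_tools):
    """Return the deduplicated, sorted list of tools in use but not approved."""
    return sorted(set(observed_tools) - APPROVED_AI_TOOLS)

if __name__ == "__main__":
    usage_log = ["corp-copilot", "chatbot-x", "chatbot-x", "internal-llm"]
    print(find_shadow_ai(usage_log))
```

A real deployment would feed this from network or SaaS audit logs, but the core policy check is just this set difference.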





A brief economics refresher…

https://thedailyeconomy.org/article/debunking-the-three-best-arguments-for-tariffs/

Debunking the Three Best Arguments for Tariffs

Tariffs can’t bring back jobs and raise significant revenue at the same time. In fact, they’re unlikely to do either.



Wednesday, January 08, 2025

If you don’t write your own code, you must spend almost as great an effort to understand the code you buy.

https://www.theregister.com/2025/01/08/backdoored_backdoors/

Crims backdoored the backdoors they supplied to other miscreants. Then the domains lapsed

"Imagine you want to gain access to thousands of systems, but don't feel like investing the effort to identify and compromise systems yourself – or getting your hands dirty," he continued.

"Instead, you commandeer abandoned backdoors in regularly used backdoors to effectively 'steal the spoils' of someone else's work, giving you the same access to a compromised system as the person who put the effort into identifying the mechanism to compromise, and performing the compromise of said system in the first place."





Building in AI distortion.

https://www.bespacific.com/suspected-undeclared-use-of-artificial-intelligence-in-the-academic-literature/

Suspected Undeclared Use of Artificial Intelligence in the Academic Literature

Suspected Undeclared Use of Artificial Intelligence in the Academic Literature. An Analysis of the Academ-AI Dataset. Alex Glynn, MA, Kornhauser Health Sciences Library, University of Louisville, Louisville, KY. November 26, 2024.

Since generative artificial intelligence (AI) tools such as OpenAI’s ChatGPT became widely available, researchers have used them in the writing process. The consensus of the academic publishing community is that such usage must be declared in the published article. Academ-AI documents examples of suspected undeclared AI usage in the academic literature, discernible primarily due to the appearance in research papers of idiosyncratic verbiage characteristic of large language model (LLM)-based chatbots. This analysis of the first 500 examples collected reveals that the problem is widespread, penetrating the journals and conference proceedings of highly respected publishers. Undeclared AI seems to appear in journals with higher citation metrics and higher article processing charges (APCs), precisely those outlets that should theoretically have the resources and expertise to avoid such oversights. An extremely small minority of cases are corrected post publication, and the corrections are often insufficient to rectify the problem. The 500 examples analyzed here likely represent a small fraction of the undeclared AI present in the academic literature, much of which may be undetectable. Publishers must enforce their policies against undeclared AI usage in cases that are detectable; this is the best defense currently available to the academic publishing community against the proliferation of undisclosed AI.





Interesting but might it slow document creation?

https://www.zdnet.com/home-and-office/work-life/grammarly-just-made-it-easier-to-prove-the-sources-of-your-text-in-google-docs/

Grammarly just made it easier to prove the sources of your text in Google Docs

In this age of distrust, misinformation, and skepticism, you may wonder how to demonstrate your sources within a Google Document. Did you type it yourself, copy and paste it from a browser-based source, copy and paste it from an unknown source, or did it come from generative AI?

You may not think this is an important clarification, but if writing is a critical part of your livelihood or life, you will definitely want to demonstrate your sources.

That's where the new Grammarly feature comes in.

Authorship proactively tracks the writing process

The new feature is called Authorship, and according to Grammarly, "Grammarly Authorship is a set of features that helps users demonstrate their sources of text in a Google doc. When you activate Authorship within Google Docs, it proactively tracks the writing process as you write."





What the other guys are doing.

https://newsroom.ibm.com/2025-01-07-ibm-study-ai-spending-expected-to-surge-52-beyond-it-budgets-as-retail-brands-embrace-enterprise-wide-innovation

IBM Study: AI Spending Expected to Surge 52% Beyond IT Budgets as Retail Brands Embrace Enterprise-Wide Innovation

A new global study from the IBM Institute for Business Value found that surveyed retail and consumer product executives are dramatically shifting their focus toward artificial intelligence (AI), with responses indicating that participants project spending outside of traditional IT operations could surge by 52% in the next year. The report, titled "Embedding AI in Your Brand's DNA," reveals how brands are preparing for the next phase of AI-driven transformation across the enterprise.



Tuesday, January 07, 2025

A Trump-inspired change? (Or is that non-factual?)

https://www.makeuseof.com/facebook-ends-fact-checking-program/

Facebook Will No Longer Fact-Check Your Dumb Posts

In both an official press release and a series of posts on Threads (embedded below), Mark Zuckerberg has announced Meta's plans to kill its fact-checking program. Community Notes will replace the third-party fact-checkers, with Meta/Facebook believing that social media communities are capable of moderating themselves.

Meta's Community Notes will work in a similar way to X's Community Notes. So, rather than third-party fact-checkers deciding what is and isn't appropriate (and removing posts accordingly), the community will respond to claims and opinions, offering counter-claims or calling out misinformation. Community Notes will be phased in over the next couple of months, starting in the United States.





Another version…

https://thehackernews.com/2025/01/india-proposes-digital-data-rules-with.html

India Proposes Digital Data Rules with Tough Penalties and Cybersecurity Requirements

The Indian government has published a draft version of the Digital Personal Data Protection (DPDP) Rules for public consultation.

"Data fiduciaries must provide clear and accessible information about how personal data is processed, enabling informed consent," India's Press Information Bureau (PIB) said in a statement released Sunday.

"Citizens are empowered with rights to demand data erasure, appoint digital nominees, and access user-friendly mechanisms to manage their data."





As goes legal, so goes the rest of the world?

https://www.law.com/legaltechnews/2025/01/07/legal-techs-predictions-for-artificial-intelligence-in-2025/?slreturn=20250107101506

Legal Tech's Predictions for Artificial Intelligence in 2025

Many expect that the coming 12 months will also shed more light on just how gen AI will impact the business of law and the legal job market. However, more complex issues, such as aligning copyright laws with AI-created content, might not be resolved anytime soon.

Amid an expected shift in the U.S. regulatory landscape, experts also predict more state-level AI laws and the EU’s far-reaching AI Act becoming more of a standard-bearer. Still, AI innovation will likely accelerate, with much focus on AI agents that look to automate entire legal workflows instead of just tackling single, one-off tasks.

Here’s a look at experts’ predictions for how AI will evolve, impact the legal industry and be regulated (or not) in 2025:



Monday, January 06, 2025

A good example of AI reliance on “similar” data over logic. Worth studying!

https://www.makeuseof.com/easy-questions-chatgpt-cant-answer/

ChatGPT Still Can't Answer These 4 Easy Questions





Cable cutting incidents spread.

https://www.theregister.com/2025/01/06/taiwan_china_submarine_cable_claim/

Taiwan reportedly claims China-linked ship damaged one of its submarine cables

Taiwanese authorities have asserted that a China-linked ship entered its waters and damaged a submarine cable.

Local media reports, and a Financial Times report, say that a vessel named Shunxing 39 called at the Taiwanese port of Keelung last Friday and, as it left, damaged a submarine cable operated by Taiwanese carrier Chunghwa Telecom as it steamed towards South Korea.

Chunghwa Telecom has apparently said just four fibers were impacted, and its redundancy plans mean connectivity wasn’t disrupted.

Taiwanese media has quoted a local security expert who believes the incident was deliberate, and suggested the ship’s true owner is a Chinese national. Unnamed sources at Taiwan’s coast guard have reportedly supported that theory.





Schedule of coming articles.

https://sloanreview.mit.edu/article/gaining-real-business-benefits-from-genai-an-mit-smr-executive-guide/

Gaining Real Business Benefits From GenAI: An MIT SMR Executive Guide

For many organizations, the question isn’t whether to use generative artificial intelligence but how to use it to deliver the greatest business value. In this series, AI thought leaders weigh in with answers, advice, and examples.





Unlikely, given politicians’ ability to ignore unpleasant truths.

https://www.dailymaverick.co.za/opinionista/2025-01-03-rolling-in-the-deep-flip-the-script-and-imagine-if-ai-could-tell-politicians-when-theyre-talking-rubbish/

Rolling in the Deep: Flip the script and imagine if AI could tell politicians when they’re talking rubbish

Some of us fear the machines, but they’re our own creations, after all. Surely we can programme artificial intelligence to expose our own bullshit every time we’re tempted to vocalise it? Any malevolence that AI seeks to deploy would have been logically thought out by humans in the first place. Not so?



Sunday, January 05, 2025

If it works, will we trust AI as a legal thinker?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5072765

Generative AI and the Future of Legal Scholarship

Since ChatGPT's release in November 2022, legal scholars have grappled with generative AI's implications for the law, lawyers, and legal education. Articles have examined the technology's potential to transform the delivery of legal services, explored the attendant legal ethics concerns, identified legal and regulatory issues arising from generative AI’s widespread use, and discussed the impact of the technology on teaching and learning in law school.

By late 2024, generative AI has become so sophisticated that legal scholars now need to consider a new set of issues that relate to a core feature of the law professor's work: the production of legal scholarship itself.

To demonstrate the growing ability of generative AI to yield new insights and draft sophisticated scholarly text, the rest of this piece contains a new theory of legal scholarship drafted exclusively by ChatGPT. In other words, the article simultaneously articulates the way in which legal scholarship will change due to AI and uses the technology itself to demonstrate the point.

The entire piece, except for the epilogue, was created by ChatGPT (OpenAI o1) in December 2024. The full transcript of the prompts and outputs is available here, https://chatgpt.com/share/676cc449-af50-8002-9145-efbfdf8ebb02, but every word of the article was drafted by generative AI. Moreover, there was no effort to generate multiple responses and then publish the best ones, though ChatGPT had to be prompted in one instance to rewrite a section in narrative form rather than as an outline.

The methodology for generating the piece was intentionally simple and started with the following prompt:

"Develop a novel conception of the future of legal scholarship that rivals some of the leading conceptions of legal scholarship. The new conception should integrate developments in generative AI and explain how scholars might use it. It should end with a series of questions that legal scholars and law schools will need to address in light of this new conception."

After ChatGPT provided an extensive overview of its response, it was asked to generate each section of the piece using text “suitable for submission to a highly selective law review.” The first such prompt asked only for a draft of the introduction. The introduction identified four parts to the article, so ChatGPT was then asked to draft Parts I, II, III and IV in separate prompts until the entire piece was completed. Because of output limits that restrict how much content can be generated in response to a single prompt, each section of the article is relatively brief. A much more thorough version of the article could have been generated if ChatGPT had been prompted to create each sub-part of the article separately rather than prompting it to produce entire parts all at once.
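The section-by-section workflow described above can be sketched as a simple loop. This is not the author's actual tooling; the model call is abstracted behind a `generate` callable (stubbed here) so any chat API could be plugged in, and the prompt text is paraphrased from the abstract:

```python
# Sketch of a one-prompt-per-section drafting loop, with the model call
# behind a pluggable `generate` function.

def draft_article(generate, parts):
    """Ask the model for each part in a separate prompt; join the results."""
    sections = []
    for part in parts:
        prompt = (f"Draft {part} of the article, using text suitable for "
                  "submission to a highly selective law review.")
        sections.append(generate(prompt))
    return "\n\n".join(sections)

# Stub standing in for a real chat-completion API call.
def fake_generate(prompt):
    return f"[model output for: {prompt[:30]}...]"

if __name__ == "__main__":
    draft = draft_article(fake_generate,
                          ["the introduction", "Part I", "Part II"])
    print(draft)
```

Prompting sub-parts instead of whole parts, as the abstract notes, is just a finer-grained `parts` list in this loop, trading more calls for longer total output.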

The epilogue offers my own reflections on the resulting draft, which (in my view) demonstrates the creativity and linguistic sophistication of a competent legal scholar. Of course, as with any competent piece of scholarship, the article has gaps and flaws. In other words, it is far from perfect. But then again, very few pieces of legal scholarship are otherwise. Rather than focusing on these flaws, scholars should consider the profound implications of these new tools for the scholarly enterprise. I discuss some of those implications in the epilogue, but apropos of the theme of the piece, generative AI has some useful ideas for us to consider in this regard. 





AI is coming. Are we ready?

https://researchportal.vub.be/en/publications/specific-laws-governing-use-of-ai-in-criminal-procedure-and-atten

Specific Laws Governing Use of AI in Criminal Procedure and Attentive Criminal Judges: American Songbook for Global Listeners

Artificial Intelligence (AI) can be used in the criminal justice system to support human decision-making at various stages of the proceedings. Despite heavy criticism of AI’s opacity, complexity, non-contestability or unfair discrimination, such AI-implementations are often favoured, in light of alleged accuracy, effectiveness or efficiency in the overall decision-making process. After briefly recalling some key functions of AI in criminal procedures, the paper addresses whether and the degree to which AI-uses can comply with the United States (US) Federal Rules of Evidence and the constitutional rights to due process, equal protection and privacy. Recent case law (e.g., Puloka and Arteaga) and legal initiatives, such as the 2024 AI Policy and California’s 2024 Rules of Court, are also discussed. This paper ends with five important take-homes for global readers and regulators intending to introduce AI into their jurisdictions.





The rules, they keep a changing…

https://databreaches.net/2025/01/04/new-york-modifies-data-breach-law-heading-into-2025/

New York Modifies Data Breach Law Heading Into 2025

Liisa M. Thomas and Kathryn Smith of Sheppard Mullin write:

As 2024 came to a close, New York Gov. Hochul signed two bills (A8872A and S2376B) amending New York’s data breach law. The modifications change both what constitutes personal information under the law, as well as modifying notification timing. The notice modification is now in effect; the change to the definition of personal information does not take effect until March 21, 2025.
As amended, companies will now have 30 days from discovery of a breach to notify impacted individuals. Previously, the law required notice to individuals “in the most expedient time possible and without reasonable delay.”

There is still an exception to the deadline for the legitimate needs of law enforcement.

The regulator to notify has also changed. Previously, businesses needed to provide notice to the NY Attorney General, the Department of State, and the Division of State Police. A fourth group has been added. Now notice must also be sent to the New York Department of Financial Services. Notification to each agency can be done via form on the New York AG website.
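For compliance teams, the amended requirements reduce to a fixed deadline and a four-agency notification list. A minimal sketch, with the regulator names taken from the article and the structure purely illustrative:

```python
# Sketch: the amended NY breach-notice requirements as data plus a
# deadline helper (30 days from discovery, per the amendment).

from datetime import date, timedelta

NY_REGULATORS = [
    "NY Attorney General",
    "Department of State",
    "Division of State Police",
    "Department of Financial Services",  # the newly added fourth agency
]

def notice_deadline(discovery_date, days=30):
    """Individuals must be notified within `days` of breach discovery."""
    return discovery_date + timedelta(days=days)

if __name__ == "__main__":
    print(notice_deadline(date(2025, 1, 1)))  # 30 days after Jan 1
```

The law-enforcement exception noted above would extend this deadline, so any real tracker would carry that as a separate flag rather than baking it into the date math.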

Read more at The National Law Review.

S2376B/A4737B  also strengthens the protections for medical data and health insurance data by adding them to the definition of personal information in identity theft:

Section 1. Subdivision 2 of section 190.77 of the penal law is amended by adding two new paragraphs d and e to read as follows:
d. “medical information” means any information regarding an individual’s medical history, mental or physical condition, or medical treatment or diagnosis by a health care professional.
e. “health insurance information” means an individual’s health insurance policy number or subscriber identification number, any unique identifier used by a health insurer to identify the individual or any information in an individual’s application and claims history, including, but not limited to, appeals history.