Saturday, March 09, 2024

As I read this, I find I have many questions. Can a 12-year-old give consent? Does pasting a head on a nude make it a depiction of that person? I expect this to be educational…

https://www.wired.com/story/florida-teens-arrested-deepfake-nudes-classmates/

Florida Middle Schoolers Arrested for Allegedly Creating Deepfake Nudes of Classmates

The Florida case appears to be the first arrests and criminal charges as a result of alleged sharing of AI-generated nude images to come to light. The boys were charged with third-degree felonies—the same level of crimes as grand theft auto or false imprisonment—under a state law passed in 2022 which makes it a felony to share “any altered sexual depiction” of a person without their consent.





Perhaps there is a “right way” to use AI in elections?

https://www.wsj.com/articles/underdog-who-beat-biden-in-american-samoa-used-ai-in-election-campaign-b0ce62d6?st=e31sq54928l55f0

Underdog Who Beat Biden in American Samoa Used AI in Election Campaign

The little-known presidential candidate who beat President Biden in American Samoa’s Democratic caucus earlier this week says artificial intelligence played a big role in his campaign strategy.

Jason Palmer, an impact investor and venture capitalist who entered the race in November, has leveraged generative AI to communicate with voters via SMS text and email, and answer specific questions about his background and policy. Additionally, Palmer’s campaign website has an avatar, PalmerAI, that answers questions with the candidate’s voice and likeness.

Palmer himself never set foot on the tiny territory of islands in the South Pacific during the campaign, conducting his entire bid virtually. He credits his 11-vote victory to an exceptional local team and its grassroots effort, but also said his use of AI made a meaningful difference.

Palmer spent less than $5,000 on the American Samoa campaign. “If I had millions of dollars to market to Colorado or Vermont, who knows I might have been more competitive in those states,” he said.





14 of 50 states.

https://fpf.org/blog/little-new-about-hampshire/

LITTLE NEW ABOUT HAMPSHIRE

On March 6, 2024, Governor Sununu signed SB 255 into law, making New Hampshire the fourteenth U.S. state to adopt a comprehensive privacy law to govern the collection, use, and transfer of personal data. SB 255 is the second comprehensive privacy law enacted in 2024, the first having been New Jersey’s S332, which was also a holdover from the 2023 legislative session. Another example of states following the “Connecticut model,” SB 255 bears a strong resemblance to other laws following the Washington Privacy Act (WPA) framework. The law will take effect on January 1, 2025. This blog post addresses two notable facets of SB 255: its narrow rulemaking authority and a unique provision addressing conflicts with other laws. It closes by reflecting on how SB 255 is arguably the first “boring” state comprehensive privacy law.





No one is 100% safe. Not every hack is catastrophic.

https://www.cnn.com/2024/03/08/politics/top-us-cybersecurity-agency-cisa-hacked/index.html

Top US cybersecurity agency hacked and forced to take some systems offline

One of the US Cybersecurity and Infrastructure Security Agency’s affected systems runs a program that allows federal, state and local officials to share cyber and physical security assessment tools, according to the US officials briefed on the matter. The other holds information on security assessment of chemical facilities, the sources said.

A CISA spokesperson said in a statement that “there is no operational impact at this time” from the incident and that the agency continues to “upgrade and modernize our systems.”





Perspective.

https://www.cnet.com/tech/you-cant-have-an-ai-chatbot-without-an-llm-heres-how-that-all-works/

You Can't Have an AI Chatbot Without an LLM. Here's How That All Works

When you interact with an AI chatbot like ChatGPT, Claude, Copilot or Gemini, it may seem like you're talking to another person.

But these chatbots don't actually understand the meaning of words the way we do. Instead, they are how we interact with what are known as large language models, or LLMs. This underlying technology is trained to recognize how words are used and which ones frequently appear together so it can predict future words, sentences or paragraphs.
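The predict-the-next-word idea can be made concrete with a deliberately toy sketch: a bigram counter that tallies which word follows which, then guesses the most frequent successor. This is only an illustration of the framing described above; real LLMs use neural networks over subword tokens and vastly larger training corpora, and the tiny corpus here is invented.

```python
from collections import Counter, defaultdict

# Invented toy corpus; real models train on billions of documents.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which words follow which.
successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("on"))   # "on" is always followed by "the" here
print(predict_next("sat"))  # "sat" is always followed by "on" here
```

An LLM does something analogous at enormous scale: instead of a lookup table it learns a statistical function that assigns probabilities to every possible next token, which is why its output sounds fluent without any human-style understanding of the words.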

If you're wondering what LLMs have to do with AI, this explainer is for you. Here's what you need to know about LLMs.





I would never, ever try this… Probably.

https://www.1011now.com/2024/03/08/lincoln-woman-exploits-pump-glitch-get-over-27000-free-gas-police-say/

Lincoln woman exploits pump glitch to get over $27,000 of free gas, police say

Upon further investigation, police learned that the fuel pumps received a software update in November of 2022. The update managed orders and reward cards, and it was made at the request of customers and staff.

Unbeknownst to the company, however, the update was exploitable. It allowed anyone to swipe a rewards card twice to enter the pump into a demo mode. From there, the user could pump gas for free.



Friday, March 08, 2024

Interesting take.

https://sloanreview.mit.edu/video/how-to-succeed-with-predictive-ai/

How to Succeed With Predictive AI

Machine learning is the engine of predictive AI. Yet too many machine learning projects fail at deployment. The primary reason? They’re viewed as technology rather than business projects. And organizations often fail to foster a connection between business and technology functions.

In this webinar, Eric Siegel, author of The AI Playbook, will explain what business stakeholders must do to succeed with AI.





Perspective.

https://www.bloomberg.com/opinion/articles/2024-03-08/tiktok-america-s-addiction-isn-t-just-china-s-fault

America’s TikTok Addiction Isn’t Just China’s Fault

There are few things that can get both the American left and right as exercised as the idea that a foreign nation is perverting the minds of their young. When that country is China, the full force of the US political system weighs in. That has resulted in the unanimous approval of a bill that would stop internet service providers and app stores from offering TikTok to consumers, unless the social media firm’s Chinese parent ByteDance Ltd. sells it within six months.



(Related) Consistency would be confusing?

https://www.axios.com/2024/03/08/trump-claims-tiktok-ban-would-only-help-enemy-facebook

Trump claims TikTok ban would only help "enemy" Facebook

Former President Trump came out in support of TikTok in the face of congressional legislation pushing for Chinese divestment from the app in a Thursday night post that also attacked Facebook.

Why it matters: The likely 2024 Republican presidential nominee threatened to ban TikTok when he was president.



Thursday, March 07, 2024

Took a while, but worth reading…

https://www.bespacific.com/report-of-the-1st-workshop-on-generative-ai-and-law/

Report of the 1st Workshop on Generative AI and Law

Report of the 1st Workshop on Generative AI and Law (November 16, 2023). Yale Law & Economics Research Paper, Available at SSRN: https://ssrn.com/abstract=4634513 or http://dx.doi.org/10.2139/ssrn.4634513

“This report presents the takeaways of the inaugural Workshop on Generative AI and Law (GenLaw), held in July 2023. A cross-disciplinary group of practitioners and scholars from computer science and law convened to discuss the technical, doctrinal, and policy challenges presented by law for Generative AI, and by Generative AI for law, with an emphasis on U.S. law in particular. We begin the report with a high-level statement about why Generative AI is both immensely significant and immensely challenging for law. To meet these challenges, we conclude that there is an essential need for 1) a shared knowledge base that provides a common conceptual language for experts across disciplines; 2) clarification of the distinctive technical capabilities of generative-AI systems, as compared and contrasted to other computer and AI systems; 3) a logical taxonomy of the legal issues these systems raise; and, 4) a concrete research agenda to promote collaboration and knowledge-sharing on emerging issues at the intersection of Generative AI and law. In this report, we synthesize the key takeaways from the GenLaw workshop that begin to address these needs. All of the listed authors contributed to the workshop upon which this report is based, but they and their organizations do not necessarily endorse all of the specific claims in this report.”





Similar to what Ukraine has been doing for some time.

https://www.livescience.com/technology/engineering/ai-drone-that-could-hunt-and-kill-people-built-in-just-hours-by-scientist-for-a-game

AI drone that could hunt and kill people built in just hours by scientist 'for a game'

The scientist who configured a small drone to target people with facial recognition and chase them at full speed warns we have no defenses against such weapons.





Apparently we don’t exactly know what we want…

https://www.fastcompany.com/91044103/what-is-artificial-general-intelligence-openai-gpt-4-musk-lawsuit

Admit it: ‘Artificial general intelligence’ may already be obsolete

Expecting OpenAI’s GPT and other large language models to beat humans at thinking like a human might be missing the point.

The whole notion of AGI is predicated on the assumption that AI started out dumber than a human but could someday match or exceed our level of thinking. Already, though, generative AI is different than human intelligence—far closer to omniscient than any individual flesh-and-blood thinker, yet also preternaturally gullible and prone to blurring fact and fiction in ways that don’t map to common human frailties. That’s because it’s a predictive engine, trained to string together words without truly understanding them. If its present trajectory of simulated brilliance mixed with boneheadedness continues, it might wander off in a direction far afield from most definitions of AGI.





Resource.

https://www.bespacific.com/linkedin-learning-unlocks-250-free-ai-courses-for-a-limited-time/

LinkedIn Learning Unlocks 250 Free AI Courses for a Limited Time

Tech Republic: “LinkedIn also released its 2024 Workplace Learning Report, which found that more people want to learn AI skills. Plus, LinkedIn Learning is offering new career development and internal mobility features. To help build AI literacy in the enterprise, LinkedIn is offering 250 AI courses for free through April 5th in tandem with its annual 2024 Workplace Learning Report, which highlights the state of learning and development and the skills needed for the future. There’s little doubt that employees want to develop critical AI skills — four in five people want to learn more about how to use AI in their profession, according to the LinkedIn Learning report (Figure A). That high number was one of the surprise findings of the report, Jill Raines, director of product management at LinkedIn, told TechRepublic in an email interview…”

Wednesday, March 06, 2024

How to outsmart a smart house.

https://ktla.com/news/local-news/police-warn-of-thieves-using-wifi-jamming-tech-to-disarm-cameras-alarms/

Police warn of thieves using wifi-jamming tech to disarm cameras, alarms

Authorities with the Los Angeles Police Department are warning residents in Los Angeles’ Wilshire-area neighborhoods of a series of burglaries involving wifi-jamming technology that can disarm surveillance cameras and alarms using a wireless signal.





Is there no generic safe harbor?

https://www.washingtonpost.com/technology/2024/03/05/ai-research-letter-openai-meta-midjourney/?pwapi_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJyZWFzb24iOiJnaWZ0IiwibmJmIjoxNzA5NjE0ODAwLCJpc3MiOiJzdWJzY3JpcHRpb25zIiwiZXhwIjoxNzEwOTkzNTk5LCJpYXQiOjE3MDk2MTQ4MDAsImp0aSI6ImMwOWMzOWE1LWVkNzgtNGJkZS1hY2QzLTFlMDQ0N2U1N2E0ZSIsInVybCI6Imh0dHBzOi8vd3d3Lndhc2hpbmd0b25wb3N0LmNvbS90ZWNobm9sb2d5LzIwMjQvMDMvMDUvYWktcmVzZWFyY2gtbGV0dGVyLW9wZW5haS1tZXRhLW1pZGpvdXJuZXkvIn0.YPO-P4j7G5oQx-uKb7IQuhyJYnFWVGCxV-NHFGkHErA

Top AI researchers say OpenAI, Meta and more hinder independent evaluations

More than 100 top artificial intelligence researchers have signed an open letter calling on generative AI companies to allow investigators access to their systems, arguing that opaque company rules are preventing them from safety-testing tools being used by millions of consumers.

The researchers say strict protocols designed to keep bad actors from abusing AI systems are instead having a chilling effect on independent research. Such auditors fear having their accounts banned or being sued if they try to safety-test AI models without a company’s blessing.





Perspective.

https://a16z.com/the-future-of-ai-is-amazing/

The Future of AI Is Amazing

In this presentation from the American Dynamism Summit, a16z General Partner Martin Casado lays out the case for AI as a driving force behind incredible advancements in technology, creativity, and the human experience — not to mention efficiency improvements on par with, if not greater than, those delivered by the internet and the microchip.

Here is a transcript of his presentation:





Tools & Techniques.

https://www.axios.com/2024/03/06/ai-tools-teachers-chatgpt-writable

Teachers are embracing ChatGPT-powered grading

A new tool called Writable, which uses ChatGPT to help grade student writing assignments, is being offered widely to teachers in grades 3-12.

Why it matters: Teachers have quietly used ChatGPT to grade papers since it first came out — but now schools are sanctioning and encouraging its use.

Driving the news: Writable, which is billed as a time-saving tool for teachers, was purchased last summer by education giant Houghton Mifflin Harcourt, whose materials are used in 90% of K-12 schools.

Alternatives to Writable include Crowdmark, EssayGrader and Gradescope — and ChatGPT directly — to name just a few.



Tuesday, March 05, 2024

Damning with faint praise? If AI is supposed to be superior, being almost as good as the ‘model’ you are replacing seems less than stellar.

https://arstechnica.com/information-technology/2024/03/the-ai-wars-heat-up-with-claude-3-claimed-to-have-near-human-abilities/

The AI wars heat up with Claude 3, claimed to have “near-human” abilities





Because they’ve done such a great job with… (Can you think of anything?)

https://www.brookings.edu/articles/should-the-un-govern-global-ai/

Should the UN govern global AI?

One such proposal was from the United Nations’ multi-stakeholder AI Advisory Body, which released an interim report offering future steps for global AI governance. Though it did not recommend any single model, the report concluded that “a global governance framework is needed.” It identified seven layers of governance functions for “an institution or network of institutions,” starting with expert-led scientific consensus and building to global norm elaboration, compliance, and accountability. We concur with the need to build broad consensus through many voices. But we emphasize that what is needed is a distributed and iterative approach, one that would be—as the UN AI Advisory Body itself put it—“agile, networked, flexible” and makes the most of the initiatives already underway.





Another reason to ensure we can identify AI generated works. (Might be worth creating a confession video with a few flaws to “prove” you didn’t do it.)

https://www.kron4.com/news/bay-area/accused-facebook-killer-claims-his-confession-was-ai-generated/

Accused Facebook killer claims his confession was AI-generated

Mark Stephen Mechikoff, the man accused of recording a grisly murder and then posting the video on his Facebook page, made a courtroom declaration at his most recent court appearance.

The 39-year-old Pacifica man is charged with stabbing Claribel Estrella to death inside her San Mateo apartment on July 26, 2023. Prosecutors said he recorded the entire killing with his cellphone camera, including Estrella’s last moments alive as she bled on her kitchen floor.

Mechikoff has pleaded not guilty to first-degree murder.

He appeared inside a San Mateo County courtroom on Friday for a preliminary hearing. “During the hearing, the defendant exclaimed to the court that he did kill the victim, but his confession was generated by AI (artificial intelligence),” prosecutors wrote.





Smart or silly? (Did they think ChatGPT was a Trump supporter?)

https://lancasteronline.com/news/politics/pa-gop-lawmakers-hear-from-chatgpt-as-they-consider-laws-addressing-artificial-intelligence/article_96816954-d807-11ee-8f03-1369fbfbc098.html

Pa. GOP lawmakers hear from ChatGPT as they consider laws addressing artificial intelligence

… “Ladies and gentlemen of the PA House Republican Policy Committee, thank you for the opportunity to address you today. My name is ChatGPT, an AI language model developed by OpenAI.”





And AI content generators will make it worse… (LLMs will not have the latest research…)

https://www.nature.com/articles/d41586-024-00616-5

Millions of research papers at risk of disappearing from the Internet

More than one-quarter of scholarly articles are not being properly archived and preserved, a study of more than seven million digital publications suggests. The findings, published in the Journal of Librarianship and Scholarly Communication on 24 January, indicate that systems to preserve papers online have failed to keep pace with the growth of research output.

“Our entire epistemology of science and research relies on the chain of footnotes,” explains author Martin Eve, a researcher in literature, technology and publishing at Birkbeck, University of London. “If you can’t verify what someone else has said at some other point, you’re just trusting to blind faith for artefacts that you can no longer read yourself.”





Tools & Techniques.

https://www.bespacific.com/ai-and-plagiarism-detection-tools/

AI And Plagiarism Detection Tools

Journalist’s Tool Box – AI And Plagiarism Detection Tools: “I post this here as a warning. There are tools such as AI Undetect and Humanize AI Text, rewrite tools that offer an AI detection remover service. They can fool GPTZero, ZeroGPT, Copyleak, etc.”



Monday, March 04, 2024

Is uncaring the right level of care?

https://www.theguardian.com/lifeandstyle/2024/mar/02/can-ai-chatbot-therapists-do-better-than-the-real-thing

‘He checks in on me more than my friends and family’: can AI therapists do better than the real thing?

Last autumn, Christa, a 32-year-old from Florida with a warm voice and a slight southern twang, was floundering. She had lost her job at a furniture company and moved back home with her mother. Her nine-year relationship had always been turbulent; lately, the fights had been escalating and she was thinking of leaving. She didn’t feel she could be fully honest with the therapist she saw once a week, but she didn’t like lying, either. Nor did she want to burden her friends: she struggles with social anxiety and is cautious about oversharing.

So one night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself.





You knew this already, right?

https://www.komando.com/security-privacy/government-spying/901570/

Here’s what the U.S. government knows about you

… If you’re doing, well, anything online, the government can know about it (unless you’ve locked down your activity — more on that below). Here are a few things we know they know.

Who you’re calling, emailing and texting — and what you’re saying

What you’re posting and who you’re following online

What you’re doing on the internet: Buying, browsing and in-app activity





Tools & Techniques. (If this works, a tool to fight disinformation?)

https://www.bespacific.com/spinscore/

SpinScore

“We’re thrilled to announce a game-changing new feature: YouTube Video Analysis. Now, you can simply pass the link of a YouTube video to our system, and we’ll do the rest. This powerful new feature allows you to leverage our advanced bias detection and scoring system to analyze YouTube videos. Whether it’s a news segment, a documentary, or a vlog, you can now get insights into the bias and spin of any YouTube video content.

How does it work? It’s simple! Just pass the YouTube link to our system, and our AI will analyze the video’s content, providing you with a detailed breakdown of its bias and spin.

Welcome to SpinScore, an advanced AI tool designed to analyze and score potential biases, logical fallacies, and misleading information in content. Our system uses a combination of state-of-the-art Large Language Models and sophisticated mathematical algorithms to deliver comprehensive insights into the content you explore.

How SpinScore Works – SpinScore scrutinizes articles using a comprehensive set of criteria to identify various types of biases, logical fallacies, and misleading information (also known as lies). Our scoring system rates these on a scale of 0-5, providing detailed explanations and suggestions for improvement.

In addition to the individual scores for biases, fallacies, and misleading information, SpinScore also calculates an overall score known as the SpinScore. This score is calculated by averaging the normalized scores for biases, fallacies, and misleading information, and then scaling the result to a range of 0 to 10. A SpinScore of 0 indicates no evidence of bias, fallacies, or misleading information, while a SpinScore of 10 indicates a high level of these elements…” [h/t Pete Weiss]
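The aggregation described (three components rated 0-5, normalized, averaged, scaled to 0-10) reduces to simple arithmetic. Here is a minimal sketch of that calculation as described; the function name and the equal weighting of components are assumptions, since SpinScore does not publish its exact formula.

```python
def spin_score(bias, fallacies, misleading):
    """Hypothetical reconstruction of the overall SpinScore aggregation:
    each component is rated 0-5, normalized to 0-1, averaged with equal
    weight (an assumption), and scaled to an overall 0-10 score."""
    components = (bias, fallacies, misleading)
    for value in components:
        if not 0 <= value <= 5:
            raise ValueError("component scores must be in the range 0-5")
    normalized = [value / 5 for value in components]
    return 10 * sum(normalized) / len(normalized)

print(spin_score(0, 0, 0))  # 0.0 -> no evidence of bias, fallacies, or lies
print(spin_score(5, 5, 5))  # 10.0 -> maximum detected spin
```

On this reading, a piece flagged heavily for bias but clean on fallacies and misleading information would land in the middle of the scale rather than at either extreme.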



Sunday, March 03, 2024

Can we improve poor writing skills with hallucinations?

https://www.crimrxiv.com/pub/c5lj2rmy/release/1

Large Language Models and Artificial Intelligence for Police Report Writing

Large Language Models (LLMs), such as ChatGPT, are advanced artificial intelligence systems capable of understanding and generating human-like text. They are trained on vast amounts of textual data, enabling them to comprehend context, answer questions, generate summaries, and even engage in meaningful conversations. As these models continue to evolve, their potential applications in various industries, including law enforcement, are becoming more apparent, as are the potential threats. One particularly promising area of application for LLMs in policing is report writing. As many police executives know, not all officers possess strong writing skills, which can lead to inaccurate or incomplete reports. This can have serious consequences for criminal prosecutions, as well as expose departments to civil liability concerns. Implementing LLMs like ChatGPT for report-writing assistance may help address these issues. Even if not fully implemented at the agency level, officers across the country are already using these tools to help in their report generation. Given the stakes, it is wise for agencies to have a sophisticated view and policy on these tools. This paper introduces practitioners to LLMs for report writing, considers the implications of using such tools, and suggests a template-based approach to deploying the technology to patrol officers.





“Just the facts, Ma’am.”

https://royalsocietypublishing.org/doi/full/10.1098/rsta.2023.0162

AI and the nature of disagreement

Litigation is a creature of disagreement. Our essay explores the potential of artificial intelligence (AI) to help reduce legal disagreements. In any litigation, parties disagree over the facts, the law, or how the law applies to the facts. The source of the parties’ disagreements matters. It may determine the extent to which AI can help resolve their disputes. AI is helpful in clarifying the parties’ misunderstanding over how well-defined questions of law apply to their facts. But AI may be less helpful when parties disagree on questions of fact where the prevailing facts dictate the legal outcome. The private nature of information underlying these factual disagreements typically falls outside the strengths of AI’s computational leverage over publicly available data. A further complication: parties may disagree about which rule should govern the dispute, which can arise irrespective of whether they agree or disagree over questions of facts. Accordingly, while AI can provide clarity over legal precedent, it often may be insufficient to provide clarity over legal disputes.





Slow lawyers…

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4735389

The Legal Ethics of Generative AI

The legal profession is notoriously conservative when it comes to change. From email to outsourcing, lawyers have been slow to embrace new methods and quick to point out potential problems, especially ethics-related concerns.

The legal profession’s approach to generative artificial intelligence (generative AI) is following a similar pattern. Many lawyers have readily identified the legal ethics issues associated with generative AI, often citing the New York lawyer who cut and pasted fictitious citations from ChatGPT into a federal court filing. Some judges have gone so far as to issue standing orders requiring lawyers to reveal when they use generative AI or to ban the use of most kinds of artificial intelligence (AI) outright. Bar associations are chiming in on the subject as well, though they have (so far) taken an admirably open-minded approach to the subject.

Part II of this essay explains why the Model Rules of Professional Conduct (Model Rules) do not pose a regulatory barrier to lawyers’ careful use of generative AI, just as the Model Rules did not ultimately prevent lawyers from adopting many now-ubiquitous technologies. Drawing on my experience as the Chief Reporter of the ABA Commission on Ethics 20/20 (Ethics 20/20 Commission), which updated the Model Rules to address changes in technology, I explain how lawyers can use generative AI while satisfying their ethical obligations. Although this essay does not cover every possible ethics issue that can arise or all of generative AI’s law-related use cases, the overarching point is that lawyers can use these tools in many contexts if they employ appropriate safeguards and procedures.

Part III describes some recent judicial standing orders on the subject and explains why they are ill-advised.

The essay closes in Part IV with a potentially provocative claim: the careful use of generative AI is not only consistent with lawyers’ ethical duties, but the duty of competence may eventually require lawyers’ use of generative AI. The technology is likely to become so important to the delivery of legal services that lawyers who fail to use it will be considered as incompetent as lawyers today who do not know how to use computers, email, or online legal research tools.