Saturday, June 08, 2024

Resource.

https://www.schneier.com/blog/archives/2024/06/security-and-human-behavior-shb-2024.html

Security and Human Behavior (SHB) 2024

This week, I hosted the seventeenth Workshop on Security and Human Behavior at the Harvard Kennedy School. This is the first workshop since our co-founder, Ross Anderson, died unexpectedly.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security. The fifty or so attendees include psychologists, economists, computer security researchers, criminologists, sociologists, political scientists, designers, lawyers, philosophers, anthropologists, geographers, neuroscientists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

Our goal is always to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to six to eight minutes, with the rest of the time for open discussion. Short talks limit presenters’ ability to get into the boring details of their work, and the interdisciplinary audience discourages jargon.

Since the beginning, this workshop has been the most intellectually stimulating two days of my professional year. It influences my thinking in different and sometimes surprising ways—and has resulted in some new friendships and unexpected collaborations. This is why some of us have been coming back every year for over a decade.

This year’s schedule is here. This page lists the participants and includes links to some of their work. Kami Vaniea liveblogged both days.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, twelfth, thirteenth, fourteenth, fifteenth and sixteenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio/video recordings of the sessions. Ross maintained a good webpage of psychology and security resources—it’s still up for now.

Next year we will be in Cambridge, UK, hosted by Frank Stajano.



Friday, June 07, 2024

I want an App that pays my bills with someone else’s money…

https://www.inc.com/kit-eaton/need-a-quick-app-for-your-company-wix-has-new-ai-tool-thatll-do-it-just-say-what-you-need.html

AI Tool Lets Small Businesses Make Apps, Just by Telling It What They Need

In its predictions of 2024 workplace trends, website-building service Wix suggested that small business leaders should get serious about AI. Given the explosion in AI tech since then, that prediction clearly hit the target, so much so that the company may now profit from its own advice.

Wix plans this week to launch an AI tool that sounds genuinely useful for any company that needs to quickly build an app: Wix's tool lets you do this normally technical task just by telling the tool what you want. Those awkward meetings where a deeply non-technical executive waves their hands vaguely and demands an app that "just does this sort of thing" suddenly sound a lot less tricky.





What happens as we use increasingly hallucination-based data to train LLMs? (Perhaps we should start a data-generating company named ‘Infinite Monkeys’?)

https://apnews.com/article/ai-artificial-intelligence-training-data-running-out-9676145bac0d30ecce1513c20561b87d

AI ‘gold rush’ for chatbot training data could run out of human-written text

Artificial intelligence systems like ChatGPT could soon run out of what keeps making them smarter — the tens of trillions of words people have written and shared online.

A new study released Thursday by research group Epoch AI projects that tech companies will exhaust the supply of publicly available training data for AI language models by roughly the turn of the decade – sometime between 2026 and 2032.
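The arithmetic behind a projection like that is simple compound growth: a roughly fixed stock of public text set against training runs that grow by a multiple each year. Here is a back-of-envelope sketch in Python, where every number is an invented stand-in for illustration, not a figure from the Epoch AI study:

# Back-of-envelope sketch of a data-exhaustion projection. All numbers are
# illustrative assumptions, not estimates from the Epoch AI study.
STOCK_TOKENS = 3e14       # assumed stock of usable public human text (~300T tokens)
TOKENS_2024 = 1.5e13      # assumed tokens consumed by a frontier training run in 2024
GROWTH_PER_YEAR = 2.5     # assumed yearly growth factor in training-set size

year, used = 2024, TOKENS_2024
while used < STOCK_TOKENS:
    year += 1
    used *= GROWTH_PER_YEAR
print(f"Under these assumptions, training runs exhaust the stock around {year}.")

Because the growth is exponential, changing the assumed stock or growth rate by a few fold only moves the crossover by a couple of years in either direction, which is presumably why the study can bracket the answer between 2026 and 2032.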





Tools & Techniques.

https://www.wired.com/story/openai-offers-a-peek-inside-the-guts-of-chatgpt/

OpenAI Offers a Peek Inside the Guts of ChatGPT

Today, OpenAI released a new research paper apparently aimed at showing it is serious about tackling AI risk by making its models more explainable. In the paper, researchers from the company lay out a way to peer inside the AI model that powers ChatGPT. They devise a method of identifying how the model stores certain concepts—including those that might cause an AI system to misbehave.
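The technique described (reportedly a sparse autoencoder trained on the model’s internal activations) expands each activation vector into a much larger, mostly-zero feature space in which individual features tend to line up with recognizable concepts. A minimal PyTorch sketch follows; the dimensions and the top-k sparsity rule are illustrative assumptions, not the paper’s actual configuration:

# Minimal sketch of a sparse autoencoder for interpreting model activations.
# Dimensions and the top-k rule are illustrative, not the paper's settings.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=768, d_features=16384, k=32):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)
        self.k = k  # keep only the k strongest features per activation

    def forward(self, activations):
        latent = torch.relu(self.encoder(activations))
        # Top-k sparsity: zero all but the k largest feature activations,
        # so each input is explained by a handful of candidate "concepts".
        topk = torch.topk(latent, self.k, dim=-1)
        sparse = torch.zeros_like(latent).scatter_(-1, topk.indices, topk.values)
        return self.decoder(sparse), sparse

sae = SparseAutoencoder()
acts = torch.randn(4, 768)                   # stand-in for real model activations
recon, features = sae(acts)
loss = nn.functional.mse_loss(recon, acts)   # trained to reconstruct activations

The sparsity constraint is the design choice that matters: because only a few features may fire for any given activation, each feature is pushed to account for a distinct, nameable pattern, which is what makes concepts tied to misbehavior easier to locate.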



Thursday, June 06, 2024

What am I missing here? Are trackers that common?

https://www.fox2detroit.com/news/metro-detroit-dealerships-using-artificial-intelligence-scan-vehicles-damage-trackers

Metro Detroit dealerships using Artificial Intelligence to scan vehicles for damage, trackers

The technology originally made to detect bombs and other explosive threats may now be used to scan your car for dings and dents.

What was once designed by the U.S. Department of Homeland Security may now be used for a full-body scan of someone's vehicle. Artificial Intelligence is the brain behind UVeye, which is being installed in car dealerships around metro Detroit with the goal of helping consumers spot problems, both seen and unseen, on and in their vehicle.

… According to Kristie Risner, the OEM account manager for UVeye, it's detected tracking devices and vice grips under the vehicle.





Perspective.

https://neurosciencenews.com/ai-llm-reasoning-26248/

AI Reasoning Flaws: The Limits of Popular Large Language Models

Popular AI platforms like ChatGPT give inconsistent answers to reasoning tests and don’t improve with added context. The research highlights the need to understand how these models ‘think’ before relying on them for decision-making. Despite advances, AI models often make simple errors and fabricate information.
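The inconsistency finding is easy to probe yourself: ask a model the same reasoning question many times and measure how often its answers agree. A minimal sketch; ask_model is a hypothetical stand-in for whatever chat API you use, not a real library call:

# Sketch of a consistency probe: repeat one question, count answer agreement.
# `ask_model` is a hypothetical placeholder, not a real library function.
from collections import Counter

def ask_model(prompt: str) -> str:
    raise NotImplementedError("replace with a real chat-completion call")

def consistency(prompt: str, trials: int = 10) -> float:
    answers = Counter(ask_model(prompt) for _ in range(trials))
    # Fraction of runs producing the single most common answer:
    # 1.0 is perfectly consistent; near 1/trials is close to random.
    return answers.most_common(1)[0][1] / trials

# Example probe, once ask_model is wired up:
# score = consistency("A bat and a ball cost $1.10 in total. The bat costs "
#                     "$1.00 more than the ball. What does the ball cost?")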



Wednesday, June 05, 2024

Could be important.

https://www.insideprivacy.com/data-privacy/colorado-privacy-act-amended-to-include-biometric-data-provisions/

Colorado Privacy Act Amended To Include Biometric Data Provisions

On May 31, 2024, Colorado Governor Jared Polis signed HB 1130 into law. This legislation amends the Colorado Privacy Act to add specific requirements for the processing of an individual’s biometric data. This law does not have a private right of action.

Similarly to the Illinois Biometric Information Privacy Act (BIPA), this law requires controllers to provide notice and obtain consent prior to the collection or processing of a biometric identifier. The law also prohibits controllers from selling or disclosing biometric identifiers unless the customer consents or unless disclosure is necessary to fulfill the purpose of collection, to complete a financial transaction, or is required by law.





How much data needs to be removed before reality drifts away?

https://www.bespacific.com/if-google-kills-news-media-who-will-feed-the-ai-beast/

If Google Kills News Media, Who Will Feed the AI Beast?

Vanity Fair [unpaywalled] – “Summarization tools from OpenAI and Google offer a CliffsNotes version of journalism that may further dumb down public discourse and deliver a brutal blow to an already battered media business…we’re on the cusp of a similar phenomenon with the new wave of AI summarization tools being launched by OpenAI, Google, and Facebook. These tools, though impressive in their ability to distill information, are just a few steps away from creating an “Irtnog”-like reality, where the richness of human knowledge and depth of understanding are reduced to bite-size, and sometimes dangerously inaccurate, summaries for our little brains to consume on our tiny devices.

Case in point, this month Google launched several new AI-powered features for its search engine. One of the most notable additions is the AI Overviews feature, which provides AI-generated summaries at the top of search results. Essentially, that’s a fancy way of saying AI will summarize search results for you, because apparently reading anything that is not a summary is just too much effort these days.

For news publishers, this is—understandably!—quite worrisome. Over the past three decades, tech companies have systematically helped siphon off the advertising revenue that once supported robust journalism, as advertisers have flocked to the targeted offerings of social media and search platforms. At the same time, the proliferation of free news content aggregated by tech giants (ahem, Google News) has made it increasingly difficult for news outlets to attract and retain paying subscribers. As such, the publishing industry has been declining since the early 2000s, when the real tech companies were separated from the chaff of the dot-com bubble, with newspaper revenues falling by more than 50% over the past two decades…”





Perhaps the horror isn’t so horrible? (Will they ask the AI to take the stand?)

https://www.bespacific.com/11th-circuit-judge-admits-to-using-chatgpt-to-help-decide-a-case/

11th Circuit Judge Admits to Using ChatGPT to Help Decide a Case

e-discovery Team: Urges Other Judges and Lawyers to Follow Suit: “The Eleventh Circuit published a groundbreaking Concurring Opinion on May 28, 2024, by Judge Kevin C. Newsom on the use of generative AI to help decide contract interpretation issues. Snell v. United Specialty Ins. Co., 2024 U.S. App. LEXIS 12733 *; _ F.4th _ (11th Cir., 05/28/24). The case in question centered around interpretation of an insurance policy. Circuit Judge Kevin C. Newsom not only admits to using ChatGPT to help him make his decision, but praises its utility and urges other judges and lawyers to do so too. His analysis is impeccable and his writing is superb. That is bold judicial leadership – Good News. I love his opinion and bet that you will too. The only way to do the Concurring Opinion justice is to quote all of it, all 6,485 words. I know that’s a lot of words, but unlike ChatGPT, which is a good writer, Judge Newsom is a great writer. Judge Kevin C. Newsom, a Harvard law graduate from Birmingham, Alabama, is creative in his wise and careful use of AI. Judge Newsom added photos to his opinion and, as I have been doing recently in my articles, quoted in full the transcripts of the ChatGPT sessions he relied upon. He leads by doing and his analysis is correct, including especially his commentary on AI and human hallucinations…”





Perspective. How to benefit from lies about you even if they are true.

https://www.bespacific.com/the-liars-dividend-the-impact-of-deepfakes-and-fake-news-on-politician-support-and-trust-in-media/

The Liar’s Dividend: The Impact of Deepfakes and Fake News on Politician Support and Trust in Media

This project, The Liar’s Dividend: Can Politicians Claim Misinformation to Evade Accountability?, is joint work between the Georgia Institute of Technology and Emory University. While previous work has addressed the direct effects of misinformation, we propose to study the phenomenon of misinformation about misinformation, or politicians “crying wolf” over fake news. We argue that strategic and false allegations of misinformation (i.e., fake news and deepfakes) benefit politicians by helping them maintain support in the face of information damaging to their reputation. This concept is known as the “liar’s dividend” (Chesney and Citron 2018) and suggests that some politicians profit from an informational environment saturated with misinformation. While previous scholarship has demonstrated that the direct effects of misinformation may be overstated (Lazer et al. 2018, Little 2018), the more subtle indirect effects of misinformation may be even more concerning. Therefore, we aim to assess the extent of the harms to political accountability and trust in media posed by the liar’s dividend. Importantly, our study will also evaluate which “protective factors,” such as media literacy, help to insulate against this form of misinformation.

We posit that the payoffs from the liar’s dividend work through two theoretical channels. First, the allegation of a deepfake or fake news can produce informational uncertainty. After learning of a political scandal, a member of the public will be more likely to downgrade their evaluation of the politician or to think that the politician is a “bad type.” However, if the politician then issues a statement disclaiming the story and alleging foul play by the opposition in the form of a deepfake or fake news, then some members of the public may be more uncertain about what to believe. Compared to a counterfactual where the politician makes no such allegation, we think claims of a deepfake or fake news will result in a unidirectional shift in average evaluations of the politician in the positive direction, along with an associated increased variance (a reflection of increased uncertainty).

Second, an allegation of a deepfake or fake news can provide rhetorical cover. To avoid cognitive dissonance, core supporters or strong co-partisans may be looking for an “out” or a motivated reason (Taber and Lodge 2006) to maintain support for their preferred politician in the face of a damaging news story. This rhetorical strategy also employs a “devil shift” (Sabatier, Hunter and McLaughlin 1987) where politicians not only signal their own innocence, but also criticize political opponents and the media, prompting supporters to rally against the opposition.

To evaluate these potential impacts of the liar’s dividend and the channels through which it bestows its benefits, we use a survey experiment to randomly assign vignette treatments detailing embarrassing or scandalous information about American politicians to American citizens. Our study design, treatments, outcomes, covariates, estimands, and analysis strategy are described in more detail in our pre-analysis plan, which was pre-registered with EGAP/OSF.”
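The two statistical predictions in that design (a positive shift in average evaluations plus wider spread after a deepfake allegation) are easy to picture with a toy simulation; every number below is invented for illustration and has no connection to the study’s data:

# Toy illustration of the hypothesized "liar's dividend" effect: after a
# denial, mean evaluations recover somewhat and variance widens.
# All parameters are invented for illustration only.
import random
import statistics

random.seed(0)

def evaluations(mean, spread, n=10_000):
    return [random.gauss(mean, spread) for _ in range(n)]

scandal_only = evaluations(mean=40, spread=10)   # support drops after scandal
with_denial = evaluations(mean=45, spread=15)    # partial recovery, more uncertainty

print(statistics.mean(scandal_only), statistics.stdev(scandal_only))
print(statistics.mean(with_denial), statistics.stdev(with_denial))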



Tuesday, June 04, 2024

Tools & Techniques. For parents helping with homework… (Not sure how much AI is involved.)

https://www.makeuseof.com/best-ai-tools-to-help-solve-math-problems/

The 7 Best AI Tools to Help Solve Math Problems

While OpenAI's ChatGPT is one of the most widely known AI tools, there are numerous other platforms that students can use to improve their math skills.

I tested seven AI tools on two common math problems so you know what to expect from each platform and how to use each of them.



Monday, June 03, 2024

Every now and then, a hacker article…

https://www.bespacific.com/how-to-get-past-a-paywall-to-read-an-article-for-free-2/

How to Get Past a Paywall to Read an Article for Free

Lifehacker: “Over the past several years, countless websites have added paywalls. If you want to read their articles, you have to sign up and pay a monthly subscription cost. Some sites have a “metered” paywall—meaning you can read a certain number of articles for free before they ask for money—and others have a hard paywall, where you’ll have to pay to read even one article. Paywalls are mostly a thing with news websites, largely because relying on advertising income alone isn’t a viable strategy anymore, and news companies are pursuing more direct revenue sources, like monthly subscriptions.

Of course, paywalls aren’t entirely a bad thing—it’s worth it to support journalism you find valuable, so by all means, if you can afford to pay to read articles, you absolutely should. But whether you lost your password, haven’t saved it on your phone, are in a rush, or are just strapped for cash and promise yourself that you’ll subscribe later, there are several ways to bypass paywalls on the internet. You may be able to use some of these methods successfully today, but that could change in the future as websites clamp down on bypass methods. If nothing else, I hope you support the websites that you do read—especially your friendly local news outlet. But if you can’t right now, here are some of the best ways to bypass paywalls online.” [Note – 12ft.io was shut down]



Useful beyond the law school?

https://www.bespacific.com/ai-and-law-courses-bridging-theory-and-practice/

AI and Law Courses: Bridging Theory and Practice

Dennis Kennedy, Director of the Center for Law, Technology & Innovation, Michigan State University College of Law via YouTube: “This webinar will guide law professors through the process of crafting a comprehensive 3-credit course on AI and the Law, showcasing how to balance substantive knowledge with hands-on learning experiences. Learn how to enrich your course offerings with cutting-edge AI technologies, ensuring your students are well-equipped for the future of law. Perfect for educators seeking to pioneer AI integration in their legal curriculum. Based on Dennis Kennedy’s approach to AI and the Law classes at Michigan State University.”





I would use a couple of these. Knowing they exist is useful.

https://www.bespacific.com/14-free-online-tools-you-should-know-about/

14 Free Online Tools You Should Know About

Gizmodo: “When web browsers first began to support apps and interactivity, the functionality was basic and slow—but now online apps can do almost as much as desktop apps can, and, more importantly, these online tools are free. So, if you’ve got a quick computing job that needs doing, you can open up a web browser to get it done—there’s no need to pay for a Windows or macOS utility to download and install. Here are some of our free favorites when it comes to tasks you can quickly do inside a browser tab.”



 

Sunday, June 02, 2024

Chat away…

https://scholarship.law.slu.edu/lj/vol68/iss3/15/

Artificial Intelligence and the Practice of Law: A Chat With ChatGPT

In late 2022, OpenAI introduced ChatGPT to the world. At the time of writing this article, ChatGPT and other generative AI models were no longer used only to generate silly responses but were being considered for substantive work in our daily lives. Specifically, this article highlights how ChatGPT and other large language models can have a strong impact on the practice of law. Within this article, the uses of these forms of AI are explained on multiple levels: the individual attorney, the law firm, and the non-attorney. Along with its diverse applications, this article delves into potential ethical dilemmas and data privacy concerns an attorney should be mindful of when implementing this tool in their practice. In sum, this article provides a comprehensive overview of ChatGPT, demonstrating its potential as a tool for legal professionals while also emphasizing the need for careful consideration of its ethical implications.





Three strikes?

https://kiss.kstudy.com/Detail/Ar?key=4091807

Can AI become an Expert?

With the rapid development of artificial intelligence (AI), understanding its capabilities and limitations has become significant for mitigating unfounded anxiety and unwarranted optimism. As part of this endeavor, this study delves into the following question: Can AI become an expert? More precisely, should society confer the authority of experts on AI even if its decision-making process is highly opaque? Throughout the investigation, I aim to identify certain normative challenges in elevating current AI to a level comparable to that of human experts. First, I will narrow the scope by proposing the definition of an expert. Along the way, three normative components of experts (trust, explainability, and responsibility) will be presented. Subsequently, I will suggest why AI cannot become a trustee, successfully transmit knowledge, or take responsibility. Specifically, the arguments focus on how these factors regulate expert judgments, which are not made in isolation but within complex social connections and spontaneous dialogue. Finally, I will defend the plausibility of the presented criteria in response to a potential objection, the claim that some machine learning-based algorithms, such as AlphaGo, have already been recognized as experts.