Saturday, April 20, 2024

Do we need a chapter here?

https://www.geekwire.com/2024/seattle-tech-vet-calls-rapidly-growing-ai-tinkerers-meetups-the-new-homebrew-computer-club-for-ai/

Seattle tech vet calls rapidly growing ‘AI Tinkerers’ meetups the new ‘Homebrew Computer Club’ for AI

A first meetup in Seattle in November 2022 attracted 12 people. A second in Austin was led by GitHub Copilot creator Alex Graveley, who came up with the name “AI Tinkerers.”

Nearly a year-and-a-half later, Heitzeberg said the idea has taken off and is going global. In a LinkedIn post last week, he said eight cities — from Seattle to Chicago to Boston to Medellin, Colombia, and elsewhere — have AI Tinkerers meetups planned over the next month.

We are kind of the Homebrew Computer Club of AI,” Heitzeberg said, referencing the famed hobbyist group that gathered in Silicon Valley in the mid-1970s to mid-1980s and attracted the likes of Apple founders Steve Jobs and Steve Wozniak. “It was people trying stuff. It’s that for AI, and it’s really needed and really good for innovation.”



Friday, April 19, 2024

I worry that “force” might eventually include beating a password out of me.

https://www.bespacific.com/cops-can-force-suspect-to-unlock-phone-with-thumbprint-us-court-rules/

Cops can force suspect to unlock phone with thumbprint, US court rules

Ars Technica: “The US Constitution’s Fifth Amendment protection against self-incrimination does not prohibit police officers from forcing a suspect to unlock a phone with a thumbprint scan, a federal appeals court ruled yesterday. The ruling does not apply to all cases in which biometrics are used to unlock an electronic device but is a significant decision in an unsettled area of the law. The US Court of Appeals for the 9th Circuit had to grapple with the question of “whether the compelled use of Payne’s thumb to unlock his phone was testimonial,” the ruling in United States v. Jeremy Travis Payne said. “To date, neither the Supreme Court nor any of our sister circuits have addressed whether the compelled use of a biometric to unlock an electronic device is testimonial.” A three-judge panel at the 9th Circuit ruled unanimously against Payne, affirming a US District Court’s denial of Payne’s motion to suppress evidence. Payne was a California parolee who was arrested by California Highway Patrol (CHP) after a 2021 traffic stop and charged with possession with intent to distribute fentanyl, fluorofentanyl, and cocaine. There was a dispute in District Court over whether a CHP officer “forcibly used Payne’s thumb to unlock the phone.” But for the purposes of Payne’s appeal, the government “accepted the defendant’s version of the facts, i.e., ‘that defendant’s thumbprint was compelled.'” Payne’s Fifth Amendment claim “rests entirely on whether the use of his thumb implicitly related certain facts to officers such that he can avail himself of the privilege against self-incrimination,” the ruling said. Judges rejected his claim, holding “that the compelled use of Payne’s thumb to unlock his phone (which he had already identified for the officers) required no cognitive exertion, placing it firmly in the same category as a blood draw or fingerprint taken at booking.” “When Officer Coddington used Payne’s thumb to unlock his phone—which he could have accomplished even if Payne had been unconscious—he did not intrude on the contents of Payne’s mind,” the court also said…”





Perspective. Worth an hour of your time.

https://www.nationalreview.com/corner/the-rise-of-the-machines-john-etchemendy-and-fei-fei-li-on-our-ai-future/

The Rise of The Machines: John Etchemendy and Fei-Fei Li on Our AI Future

John Etchemendy and Fei-Fei Li are the co-directors of Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), founded in 2019 to “advance AI research, education, policy and practice to improve the human condition.” In this interview, they delve into the origins of the technology, its promise, and its potential threats. They also discuss what AI should be used for, where it should not be deployed, and why we as a society should — cautiously — embrace it.





Interesting story of an unlevel playing field.

https://lawrencekstimes.com/2024/04/18/lhs-journalists-dispute-gaggle/

Lawrence journalism students convince district to reverse course on AI surveillance they say violates freedom of press

Journalism students at Lawrence High School have convinced the school district to remove their files from the purview of a controversial artificial intelligence surveillance system after months of debate with administrators.

The AI software, called Gaggle, sifts through anything connected to the district’s Google Workspace — which includes Gmail, Drive and other products — and flags content it deems a safety risk, such as allusions to self-harm, depression, drug use and violence.



Thursday, April 18, 2024

Sorry for the short notice but I just found out myself. Privacy Foundation Seminar:

Artificial Intelligence and the Practice of Law

Friday April 19th 11:30 – 1:00

1 ethics CLE credit. Contact Kristen Dermyer 303-871-6487 <Kristen.Dermyer@du.edu> to register.





...and not just for lawyers.

https://www.bespacific.com/the-smartest-way-to-use-ai-at-work/

The Smartest Way to Use AI at Work

WSJ via MSN: “By by day, there’s growing pressure at the office. Do you respond to all those clients—or let AI do it? Do you attend that meeting—or do you send a bot? About 20% of employed adults said they have used OpenAI’s ChatGPT for work as of February 2024, up from 8% a year ago, according to Pew Research Center. The most popular uses for AI at work are research and brainstorming, writing first-draft emails and creating visuals and presentations, according to an Adobe survey. Productivity boosts from AI are estimated to be worth trillions of dollars over the next decade, say consultants. Many companies are encouraging their workers to embrace and learn the new tools. The industries that will benefit most are sales and marketing, customer care, software engineering and product development. For most workers, it can make your day-to-day a bit less annoying. “If you’re going to use it as a work tool,” said Lareina Yee, a senior partner at the consulting firm McKinsey and chair of its Technology Council, “you need to think of all the ways it can change your own productivity equation.” Using AI at work could get you fired—or at least in hot water. A judge last year sanctioned a lawyer who relied on fake cases generated by ChatGPT, and some companies have restricted AI’s usage. Other companies and bosses are pushing staff to do more with AI, but you’ll need to follow guidelines. Rule No. 1: Don’t put any company data into a tool without permission. And Rule No. 2: Only use AI to do work you can easily verify, and be sure to check its work…” Uses include: Email; Presentations; Summaries; Meetings.





Too many tools, too little time.

https://www.makeuseof.com/custom-gpts-that-make-chat-gpt-better/

10 Custom GPTs That Actually Make ChatGPT Better

ChatGPT on its own is great, but did you know that you can use custom GPTs to streamline its functionality? Custom GPTs can teach you how to code, plan trips, transcribe videos, and much, much more, and there are heaps for you to choose from.

So, here are the best custom GPTs that actually make ChatGPT a better tool for any situation.





Not sure I believe these numbers…

https://www.edweek.org/technology/see-which-types-of-teachers-are-the-early-adopters-of-ai/2024/04

See Which Types of Teachers Are the Early Adopters of AI

Among social studies and English/language arts teachers, the number of AI users was higher than the general teaching population. Twenty-seven percent of English teachers and social studies teachers use AI tools in their work. By comparison, 19 percent of teachers in STEM disciplines said they use AI, and 11 percent of elementary education teachers reported doing so.



Wednesday, April 17, 2024

I thought this sounded familiar…

https://sloanreview.mit.edu/article/ai-and-statistics-perfect-together/

AI and Statistics: Perfect Together

People are often unsure why artificial intelligence and machine learning algorithms work. More importantly, people can’t always anticipate when they won’t work. Ali Rahimi, an AI researcher at Google, received a standing ovation at a 2017 conference when he referred to much of what is done in AI as “alchemy,” meaning that developers don’t have solid grounds for predicting which algorithms will work and which won’t, or for choosing one AI architecture over another. To put it succinctly, AI lacks a basis for inference: a solid foundation on which to base predictions and decisions.

This makes AI decisions tough (or impossible) to explain and hurts trust in AI models and technologies — trust that is necessary for AI to reach its potential. As noted by Rahimi, this is an unsolved problem in AI and machine learning that keeps tech and business leaders up at night because it dooms many AI models to fail in deployment.

Fortunately, help for AI teams and projects is available from an unlikely source: classical statistics. This article will explore how business leaders can apply statistical methods and statistics experts to address the problem.





Clogging congress. (Or any organization that would take this seriously.)

https://www.schneier.com/blog/archives/2024/04/using-ai-generated-legislative-amendments-as-a-delaying-technique.html

Using AI-Generated Legislative Amendments as a Delaying Technique

Canadian legislators proposed 19,600 amendments —almost certainly AI-generated—to a bill in an attempt to delay its adoption.





Resource.

https://www.bespacific.com/free-guide-learn-how-to-use-chatgpt/

Free guide – Learn how to use ChatGPT

Ben’s Bites – Learn how to use ChatGPT. An introductory overview of ChatGPT, the AI assistant by OpenAI Designed for absolute beginners, this short course explores in simple terms how AI assistant ChatGPT works and how to get started using it.





Tools & Techniques. Could this be trained for other topics?

https://news.yale.edu/2024/04/16/student-developed-ai-chatbot-opens-yale-philosophers-works-all

Student-developed AI chatbot opens Yale philosopher’s works to all

LuFlot Bot, a generative AI chatbot trained on the works of Yale philosopher Luciano Floridi, answers questions on the ethics of digital technology.

Visit this link to converse with LuFlot about the ethics of digital technologies.



Tuesday, April 16, 2024

Is this the first step on the slippery slope to home defense drones armed with napalm and machine guns? (I can see where it would be very satisfying to paint ball a porch pirate!)

https://boingboing.net/2024/04/15/this-armed-security-camera-uses-ai-to-fire-paintballs-or-tear-gas-at-trespassers.html

This armed security camera uses AI to fire paintballs or tear gas at trespassers

PaintCam is an armed home/office security camera that uses AI to spot trespassers and fires paintballs or tear gas projectiles at them. The company's promotional video looks like a parody but apparently this "vigilant guardian that doesn't sleep, blink, or miss a beat" is a real product.

According to New Atlas, the system "uses automatic target marking, face recognition and AI-based decision making to identify unfamiliar visitors to your property, day or night.





When the demand for information is huge, providing anything must be profitable.

https://www.wired.com/story/iran-israel-attack-viral-fake-content/

Fake Footage of Iran’s Attack on Israel Is Going Viral

IN THE HOURS after Iran announced its drone and missile attack on Israel on April 13, fake and misleading posts went viral almost immediately on X. The Institute for Strategic Dialogue (ISD), a nonprofit think tank, found a number of posts that claimed to reveal the strikes and their impact, but that instead used AI-generated videos, photos, and repurposed footage from other conflicts which showed rockets launching into the night, explosions, and even President Joe Biden in military fatigues.

Just 34 of these misleading posts received more than 37 million views, according to ISD. Many of the accounts posting the misinformation were also verified, meaning they have paid X $8 per month for the “blue tick” and that their content is amplified by the platform’s algorithm. ISD also found that several of the accounts claimed to be open source intelligence (OSINT) experts, which has, in recent years, become another way of lending legitimacy to their posts.





I’m trying to get and stay current…

https://aiindex.stanford.edu/report/

Measuring trends in AI

Welcome to the seventh edition of the AI Index report. The 2024 Index is our most comprehensive to date and arrives at an important moment when AI’s influence on society has never been more pronounced. This year, we have broadened our scope to more extensively cover essential trends such as technical advancements in AI, public perceptions of the technology, and the geopolitical dynamics surrounding its development. Featuring more original data than ever before, this edition introduces new estimates on AI training costs, detailed analyses of the responsible AI landscape, and an entirely new chapter dedicated to AI’s impact on science and medicine.

DOWNLOAD THE FULL REPORT

DOWNLOAD INDIVIDUAL CHAPTERS





Tools & Techniques.

https://www.nsa.gov/Press-Room/Press-Releases-Statements/Press-Release-View/Article/3741371/nsa-publishes-guidance-for-strengthening-ai-system-security/

NSA Publishes Guidance for Strengthening AI System Security

The National Security Agency (NSA) is releasing a Cybersecurity Information Sheet (CSI) today, Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems.” The CSI is intended to support National Security System owners and Defense Industrial Base companies that will be deploying and operating AI systems designed and developed by an external entity.



Sunday, April 14, 2024

The evolution of computer crime.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4788909

Hacking Generative AI

Generative AI platforms, like ChatGPT, hold great promise in enhancing human creativity, productivity, and efficiency. However, generative AI platforms are prone to manipulation. Specifically, they are susceptible to a new type of attack called “prompt injection.” In prompt injection, attackers carefully craft their input prompt to manipulate AI into generating harmful, dangerous, or illegal content as output Examples of such outputs include instructions on how to build an improvised bomb, how to make meth, how to hotwire a car, and more. Researchers have also been able to make ChatGPT generate malicious code.

This article asks a basic question: do prompt injection attacks violate computer crime law, mainly the Computer Fraud and Abuse Act? This article argues that they do. Prompt injection attacks lead AI to disregard its own hard-coded content generation restrictions, which allows the attacker to access portions of the AI that are beyond what the system’s developers authorized. Therefore, this constitutes the criminal offense of accessing a computer in excess of authorization. Although prompt injection attacks could run afoul of the Computer Fraud and Abuse Act, this article offers ways to distinguish serious acts of AI manipulation from less serious ones, so that prosecution would only focus on a limited set of harmful and dangerous prompt injections.





Perspective.

https://www.ft.com/content/cde75f58-20b9-460c-89fb-e64fe06e24b9

ChatGPT essay cheats are a menace to us all

The other day I met a British academic who said something about artificial intelligence that made my jaw drop.

The number of students using AI tools like ChatGPT to write their papers was a much bigger problem than the public was being told, this person said.

AI cheating at their institution was now so rife that large numbers of students had been expelled for academic misconduct — to the point that some courses had lost most of a year’s intake. “I’ve heard similar figures from a few universities,” the academic told me.

Spotting suspicious essays could be easy, because when students were asked why they had included certain terms or data sources not mentioned on the course, they were baffled. “They have clearly never even heard of some of the terms that turn up in their essays.”