Saturday, July 22, 2023

Is it possible to determine how much Amazon made because of this violation? Do we ask Amazon for an estimate?

https://www.bleepingcomputer.com/news/technology/amazon-agrees-to-25-million-fine-for-alexa-children-privacy-violations/

Amazon agrees to $25 million fine for Alexa children privacy violations

The U.S. Justice Department and the Federal Trade Commission (FTC) announced that Amazon has agreed to pay a $25 million fine to settle alleged violations of children's privacy laws related to the company's Alexa voice assistant service.

Amazon has offered Alexa voice-activated products and services targeted at children under 13 years old since May 2018.





A sign of things to come?

https://www.pogowasright.org/examining-the-private-right-of-action-in-washingtons-my-health-my-data-act/

Examining the Private Right of Action in Washington’s My Health My Data Act

Alexander Vitruk and Andreas Kaltsounis of BakerHostetler write:

Washington’s groundbreaking “My Health My Data” Act (HB 1155) (the Act) was signed into law on April 27, 2023. This Act imposes new requirements on the processing and sale of consumer health data by organizations with a nexus to Washington state, as our earlier blog posts explain. In this blog post, we examine the private right of action available under the Act, including how it interacts with the state’s Consumer Protection Act and the risk of class actions.
The Private Right of Action’s Extensive Scope
The Act provides for a private right of action in Section 11 by establishing that a violation of the Act is an unfair or deceptive act under the Washington Consumer Protection Act (CPA). It is one of the most far-reaching private rights of action of any state privacy law, for several reasons:

Read more at DataCounsel.



Friday, July 21, 2023

I wonder if they asked ChatGPT?

https://www.darkreading.com/attacks-breaches/google-red-team-provides-insight-on-real-world-ai-attacks

Google Categorizes 6 Real-World AI Attacks to Prepare for Now

The company revealed in a report published this week that its dedicated AI red team has already uncovered various threats to the fast-growing technology, mainly based on how attackers can manipulate the large language models (LLMs) that drive generative AI products like ChatGPT, Google Bard, and more.

The attacks largely result in the technology producing unexpected or even malicious results, with outcomes ranging from the relatively benign, such as the average person's photos showing up on a celebrity photo website, to more serious consequences such as security-evasive phishing attacks or data theft.

Google's findings come on the heels of its release of the Secure AI Framework (SAIF), which the company said is aimed at getting out in front of the AI security issue before it's too late, as the technology already is experiencing rapid adoption, creating new security threats in its wake.





Similar to the Chinese model? If you don’t act like a good little communist, you don’t get an education, loans, or the right to travel?

https://neurosciencenews.com/social-norms-ai-23667/

AI System Detects Social Norm Violations

A pioneering AI system successfully identifies violations of social norms. Utilizing GPT-3, zero-shot text classification, and automatic rule discovery, the system categorizes social emotions into ten main types. It analyzes written situations and accurately determines if they are positive or negative based on these categories.

This initial study offers promising evidence that the approach can be expanded to encompass more social norms.
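
The pipeline pairs an LLM with zero-shot text classification. Below is a minimal sketch of the zero-shot step, using an open NLI model as a stand-in for the GPT-3 component the article describes; the sample situation and label set are illustrative, not the paper's actual ten categories.

# Minimal sketch: zero-shot classification of a written situation into
# social-norm categories. An open NLI model stands in for the GPT-3
# pipeline the paper describes; the labels below are illustrative only.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

situation = "He cut to the front of the pharmacy line and ignored the clerk."
labels = ["politeness", "fairness", "caring", "trust", "authority"]

result = classifier(situation, candidate_labels=labels)
print(f"Most salient category: {result['labels'][0]} "
      f"(score {result['scores'][0]:.2f})")

Judging whether the situation is a positive or negative instance of the top category would be a second classification step in the same style.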



(Related)

https://www.schneier.com/blog/archives/2023/07/ai-and-microdirectives.html

AI and Microdirectives

Imagine a future in which AIs automatically interpret—and enforce—laws.

All day and every day, you constantly receive highly personalized instructions for how to comply with the law, sent directly by your government and law enforcement. You’re told how to cross the street, how fast to drive on the way to work, and what you’re allowed to say or do online—if you’re in any situation that might have legal implications, you’re told exactly what to do, in real time.

Imagine that the computer system formulating these personal legal directives at mass scale is so complex that no one can explain how it reasons or works. But if you ignore a directive, the system will know, and it’ll be used as evidence in the prosecution that’s sure to follow.





A comment on anything government wants to suppress?

https://www.bespacific.com/dpla-launches-the-banned-book-club-to-ensure-access-to-banned-books/

Digital Public Library of America Launches The Banned Book Club to Ensure Access to Banned Books

PR Newswire: The Digital Public Library of America (DPLA) has launched The Banned Book Club to ensure that readers in communities affected by book bans can now access banned books for free via the Palace e-reader app. The Banned Book Club makes e-book versions of banned books available to readers in locations across the United States where titles have been banned. The e-books will be available to readers for free via the Palace e-reader app. “At DPLA, our mission is to ensure access to knowledge for all and we believe in the power of technology to further that access,” said John S. Bracken, executive director of Digital Public Library of America. “Today book bans are one of the greatest threats to our freedom, and we have created The Banned Book Club to leverage the dual powers of libraries and digital technology to ensure that every American can access the books they want to read.”





Worth a try?

https://www.cnbc.com/2023/07/20/3-steps-to-land-a-lucrative-ai-job-even-if-you-dont-work-in-tech.html

3 ways to build A.I. skills even if you don’t work in tech: ‘Suddenly your employability options go through the roof’



Thursday, July 20, 2023

I think ChatGPT is being contaminated by the articles it generates.

https://venturebeat.com/ai/not-just-in-your-head-chatgpts-behavior-is-changing-say-ai-researchers/

Not just in your head: ChatGPT’s behavior is changing, say AI researchers

Researchers at Stanford University and the University of California, Berkeley have published an unreviewed paper on the open-access preprint server arXiv.org, finding that the “performance and behavior” of OpenAI’s ChatGPT large language models (LLMs) changed between March and June 2023. The researchers concluded that their tests revealed “performance on some tasks have gotten substantially worse over time.”

Commenters on the ChatGPT subreddit and on Y Combinator’s Hacker News similarly took issue with the thresholds the researchers counted as failures, but other longtime users seemed comforted by evidence that the perceived changes in generative AI output weren’t merely in their heads.

This work brings to light a new area that business and enterprise operators need to be aware of when considering generative AI products. The researchers have dubbed the change in behavior “LLM drift” and cite understanding it as critical to interpreting results from popular chat AI models.
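
A practical takeaway for teams that depend on a hosted model: freeze a small evaluation set and re-run it on a schedule, so drift shows up as a number rather than a hunch. A minimal sketch follows; ask_model is a placeholder for whatever model API is in use, and the prime-number prompts merely echo the kind of task the researchers tested.

# Minimal drift-monitoring sketch: re-run a fixed evaluation set against two
# model snapshots and compare accuracy. `ask_model` is a placeholder for a
# real API call to a hosted chat model; the prompts below are illustrative.
from typing import Callable

EVAL_SET = [
    ("Is 17077 a prime number? Answer yes or no.", "yes"),
    ("Is 1001 a prime number? Answer yes or no.", "no"),
]

def accuracy(ask_model: Callable[[str], str]) -> float:
    correct = sum(
        ask_model(prompt).strip().lower().startswith(answer)
        for prompt, answer in EVAL_SET
    )
    return correct / len(EVAL_SET)

def drift(older_snapshot, newer_snapshot) -> float:
    """Positive value means performance degraded between snapshots."""
    return accuracy(older_snapshot) - accuracy(newer_snapshot)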





This might be useful for generating phishing examples; on the other hand, it might be good enough to turn the security staff into criminals.

https://nypost.com/2023/07/19/chatgpts-evil-twin-wormgpt-is-secretly-entering-emails-raiding-banks/

ChatGPT’s evil twin WormGPT is secretly entering emails, raiding banks

ChatGPT has an evil twin — and it wants to take your money.

WormGPT was created by a hacker and is designed for phishing attacks on a larger scale than ever before.

Cybersecurity firm SlashNext confirmed that the “sophisticated AI model” was developed purely with malevolent intent.

“This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities,” security researcher Daniel Kelley wrote on the website. “WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data.”

The SlashNext researchers played around with WormGPT to see its potential dangers and how extreme they might be, asking it to create phishing emails.

“The results were unsettling,” the cyber expert confirmed. “WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.”

“In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations,” Kelley chillingly added. [Does any AI have ethical boundaries? Bob]



Wednesday, July 19, 2023

Generals preparing to fight the last war…

https://www.bespacific.com/law-unlimited-welcome-to-the-re-envisioned-legal-profession/

Law Unlimited: Welcome to the re-envisioned legal profession

Via LLRX – Law Unlimited: Welcome to the re-envisioned legal profession. Will generative AI destroy law firms? Jordan Furlong argues this may happen only if lawyers are too fixed in their ways to see the possibilities that lie beyond who we’ve always been and what we’ve always done.





Keeping up.

https://www.pogowasright.org/webinar-the-new-breed-of-state-health-privacy-laws/

Webinar – The New Breed of State Health Privacy Laws

There are so many webinars each week that I generally don’t sign up for them or post links to them, but this one really caught my eye because there have been so many recent changes at the state level.

The New Breed of State Health Privacy Laws
Thursday, July 27, 2023 at 2 PM ET
The State of Washington passed the My Health My Data (MHMD) Law, a very broad and powerful health privacy law that might be the strictest privacy law in the U.S. This law was soon followed by state health privacy laws in Nevada and Connecticut. These laws have implications far beyond health. In this webinar, Daniel Solove discusses these new laws with Mike Hintze (Hintze Law).

More information and registration form at TeachPrivacy.





Some protection but possibly not enough?

https://www.bespacific.com/authorbots/

Authorbots

Bambauer, Derek E. and Surdeanu, Mihai, Authorbots (May 9, 2023). 3 Journal of Free Speech Law (forthcoming 2023), Arizona Legal Studies Discussion Paper No. 23-13, Available at SSRN: https://ssrn.com/abstract=4443714 – “ChatGPT has exploded into the popular consciousness in recent months, and the hype and concerns about the program have only grown louder with the release of GPT-4, a more powerful version of the software. Its deployment, including with applications such as Microsoft Office, has raised questions about whether the developers or distributors of code that includes ChatGPT, or similar generative pre-trained transformers, could face liability for tort claims such as defamation or false light. One important potential barrier to these claims is the immunity conferred by 47 U.S.C. § 230, popularly known as “Section 230.” In this Essay, we make two claims. First, Section 230 is likely to protect the creators, distributors, and hosts of online services that include ChatGPT in many cases. Users of those services, though, may be at greater legal risk than is commonly believed. Second, ChatGPT and its ilk make the analysis of the Section 230 safe harbor more complex, both substantively and procedurally. This is likely a negative consequence for the software’s developers and hosts, since complexity in law tends to generate uncertainty, which in turn creates cost. Nonetheless, we contend that Section 230 has more of a role to play in legal questions about ChatGPT than most commentators believe—including the principal legislative drafters of Section 230—and that this result is generally a desirable one.”





Who’da thunk it?

https://www.makeuseof.com/chatgpt-write-poetry-book-how-to/

How to Use ChatGPT to Write a Poetry Book



Tuesday, July 18, 2023

Who determines “possible criminal behavior?” (Is it behavior no innocent person could possibly blunder into?)

https://www.forbes.com/sites/thomasbrewster/2023/07/17/license-plate-reader-ai-criminal/?sh=7c07b0a43ccc

This AI Watches Millions Of Cars Daily And Tells Cops If You’re Driving Like A Criminal

In March of 2022, David Zayas was driving down the Hutchinson River Parkway in Scarsdale. His car, a gray Chevrolet, was entirely unremarkable, as was its speed. But to the Westchester County Police Department, the car was cause for concern and Zayas a possible criminal; its powerful new AI tool had identified the vehicle’s behavior as suspicious.

Searching through a database of 1.6 billion license plate records collected over the last two years from locations across New York State, the AI determined that Zayas’ car was on a journey typical of a drug trafficker. According to a Department of Justice prosecutor’s filing, it made nine trips from Massachusetts to different parts of New York between October 2020 and August 2021, following routes known to be used by narcotics traffickers and making conspicuously short stays. So on March 10 last year, Westchester PD pulled him over and searched his car, finding 112 grams of crack cocaine, a semiautomatic pistol, and $34,000 in cash inside, according to court documents. A year later, Zayas pleaded guilty to a drug trafficking charge.
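
The filing describes a pattern-of-life inference over plate reads: repeated interstate round trips with conspicuously short stays. Purely as illustration, and not the vendor's actual model, a trip-counting heuristic of that shape might look like the sketch below; every threshold and field name is hypothetical.

# Illustrative sketch only, not the actual system: flag plates whose sighting
# history shows many short out-of-home-state visits, the pattern described in
# the court filing. Thresholds and data layout are hypothetical.
from collections import defaultdict
from datetime import timedelta

def short_visit_count(events, home_state, max_stay):
    """Count runs of consecutive out-of-state sightings lasting <= max_stay.
    `events` is a time-sorted list of (timestamp, state) for one plate."""
    visits, away_since = 0, None
    for ts, state in events:
        if state != home_state:
            if away_since is None:
                away_since = ts  # first sighting away from home
        else:
            if away_since is not None and ts - away_since <= max_stay:
                visits += 1  # returned home after a conspicuously short stay
            away_since = None
    return visits

def flag_plates(sightings, home_state="NY",
                max_stay=timedelta(hours=12), min_visits=5):
    """`sightings` is an iterable of (plate, timestamp, state) tuples."""
    history = defaultdict(list)
    for plate, ts, state in sightings:
        history[plate].append((ts, state))
    return [plate for plate, ev in history.items()
            if short_visit_count(sorted(ev), home_state, max_stay) >= min_visits]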



(Related)

https://www.pogowasright.org/bill-that-would-force-internet-companies-to-spy-on-their-users-for-the-dea-headed-to-senate-floor/

Bill That Would Force Internet Companies to Spy on Their Users for the DEA Headed to Senate Floor

Lucas Ropek reported:

Internet drug sales have skyrocketed in recent years, allowing powerful narcotics to be peddled to American teenagers and adolescents. It’s a trend that’s led to an epidemic of overdoses and left countless young people dead. Now, a bill scheduled for a congressional vote seeks to tackle the problem, but it comes with a major catch. Critics worry that the legislative effort to crack down on the drug trade could convert large parts of the internet into a federal spying apparatus.
The Cooper Davis Act was introduced by Kansas Republican Sen. Roger Marshall and New Hampshire Democrat Sen. Jeanne Shaheen in March and has been under consideration by the Senate Judiciary Committee for weeks. Named after a 16-year-old Kansas boy who died of a fentanyl overdose two years ago, the bipartisan bill, which the committee is scheduled to vote on Thursday, has spurred intense debate. Proponents say it could help address a spiraling public health crisis; critics see it as a gateway to broad and indiscriminate internet surveillance.

Read more at Gizmodo.

On Friday, the bill was headed to the Senate Floor.





Likely that writing by a committee would look like writing generated from a large library of examples.

https://arstechnica.com/information-technology/2023/07/why-ai-detectors-think-the-us-constitution-was-written-by-ai/

Why AI detectors think the US Constitution was written by AI

If you feed America's most important legal document—the US Constitution—into a tool designed to detect text written by AI models like ChatGPT, it will tell you that the document was almost certainly written by AI. But unless James Madison was a time traveler, that can't be the case. Why do AI writing detection tools give false positives? We spoke to several experts—and the creator of AI writing detector GPTZero—to find out.

Among news stories of overzealous professors flunking an entire class due to the suspicion of AI writing tool use and kids falsely accused of using ChatGPT, generative AI has education in a tizzy. Some think it represents an existential crisis. Teachers relying on educational methods developed over the past century have been scrambling for ways to keep the status quo—the tradition of relying on the essay as a tool to gauge student mastery of a topic.

As tempting as it is to rely on AI tools to detect AI-generated writing, evidence so far has shown that they are not reliable. Due to false positives, AI writing detectors such as GPTZero, ZeroGPT, and OpenAI's Text Classifier cannot be trusted to detect text composed by large language models (LLMs) like ChatGPT.

If you feed GPTZero a section of the US Constitution, it says the text is "likely to be written entirely by AI." Several times over the past six months, screenshots of other AI detectors showing similar results have gone viral on social media, inspiring confusion and plenty of jokes about the founding fathers being robots. It turns out the same thing happens with selections from The Bible, which also show up as being AI-generated.
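
Detectors in this family typically score text by perplexity, that is, how predictable the passage is to a language model. Heavily quoted, formulaic text like the Constitution is extremely predictable, so it scores low and gets flagged. A minimal perplexity score using GPT-2 is sketched below; the scoring model is a stand-in, not GPTZero's actual implementation.

# Minimal perplexity sketch: measure how "predictable" a passage is to GPT-2.
# Famous, heavily quoted text tends to score low, which is why perplexity-based
# detectors can misread it as machine-generated. The model choice here is a
# stand-in, not any detector's actual internals.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids returns the mean cross-entropy loss;
        # exp(loss) is the perplexity.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

print(perplexity("We the People of the United States, in Order to form a "
                 "more perfect Union, establish Justice, insure domestic "
                 "Tranquility, provide for the common defence..."))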





So much for the “AI will replace programmers” idea…

https://www.bbc.com/news/technology-66178247

AI trend drives rise in students wanting to study computing

This year's application data showed 18-year-olds were increasingly inspired to study computing "thanks to the rise of digital and AI", UCAS chief executive Clare Marchant said.

Applications to study computing were up almost 10% compared to 2022.





Resource.

https://www.makeuseof.com/free-websites-to-learn-data-analytics/?newsletter_popup=1

7 Websites to Learn Data Analytics for Free

There are many data-analysis tools and processes that assist in manipulating data. These range from spreadsheets, reporting tools, and data visualization to data mining programs.

To use these tools, you need some essential data analysis skills. The following free-to-use websites will help you acquire the necessary analytical skills. You can use their courses to boost your data analytics skills.



Sunday, July 16, 2023

I have been waiting for an indication that the US has been using the technology France announced it will implement. (https://gizmodo.com/france-bill-allows-police-access-phones-camera-gps-1850609772 )

https://www.ft.com/content/6567e7f2-c5fb-4da4-bd95-bf7ceef54038

Thousands of Russian officials to give up iPhones over US spying fears

FSB enforces crackdown on use by state officials after claiming it uncovered an espionage operation using Apple devices

… The ban on iPhones, iPad tablets and other Apple devices at leading ministries and institutions reflects growing concern in the Kremlin and the Federal Security Service spy agency over a surge in espionage activity by US intelligence agencies against Russian state institutions.

“Security officials in ministries — these are FSB employees who hold civilian positions such as deputy ministers — announced that iPhones were no longer considered safe and that alternatives should be sought,” said a person close to a government agency that has banned Apple products.

… “Officials truly believe that Americans can use their equipment for wiretapping,” said Andrey Soldatov, a Russia security and intelligence services expert. “The FSB has long been concerned about the use of iPhones for professional contacts, but the presidential administration and other officials opposed [restrictions] simply because they liked iPhones.”





I want a robot butler.

https://www.researchgate.net/profile/Jochen-Wirtz/publication/372135992_How_Intelligent_Automation_Service_Robots_and_AI_Will_Reshape_Service_Products_and_Their_Delivery/links/64a62849b9ed6874a5fc7d48/How-Intelligent-Automation-Service-Robots-and-AI-Will-Reshape-Service-Products-and-Their-Delivery.pdf

How Intelligent Automation, Service Robots, and AI Will Reshape Service Products and Their Delivery

Intelligent automation in the form of robots, smart self-service technologies, wearable technologies, software and systems such as machine learning (ML), generative artificial intelligence (AI) such as ChatGPT, and the metaverse is increasingly being adopted in a wide range of customer-facing service settings. The shift toward robot- and AI-powered services will lead to improved customer experiences, service quality, and productivity all at the same time. However, these technologies also carry ethical, fairness, and privacy risks for customers and society. In this opinion piece, we discuss the implications of the service revolution for service firms, their marketing, and their customers, and provide avenues for future research opportunities.





A topic of interest.

https://journal.ijresm.com/index.php/ijresm/article/view/2749

How to Solve Unbreakable Codes: World War II Edition; Version 202.3

We cannot imagine a life without the Internet. Cyber data security has become a main concern for anyone connected to the web. The security and privacy of information are maintained by encryption. Decryption techniques are also equally important to check the robustness of encryption. So, this paper is an exploration of the feasibility of using current technologies like Machine Learning and Blockchain concepts to break the encryptions, with special reference to the Enigma encryption.