Friday, July 21, 2023

I wonder if they asked ChatGPT?

https://www.darkreading.com/attacks-breaches/google-red-team-provides-insight-on-real-world-ai-attacks

Google Categorizes 6 Real-World AI Attacks to Prepare for Now

The company revealed in a report published this week that its dedicated AI red team has already uncovered various threats to the fast-growing technology, mainly based on how attackers can manipulate the large language models (LLMs) that drive generative AI products like ChatGPT, Google Bard, and more.

The attacks largely cause the technology to produce unexpected or even malicious results, leading to outcomes ranging from the relatively benign, such as an average person's photos appearing on a celebrity photo website, to more serious consequences such as security-evasive phishing attacks or data theft.

Google's findings come on the heels of its release of the Secure AI Framework (SAIF), which the company says is aimed at getting ahead of AI security issues before it's too late, as the technology is already seeing rapid adoption and creating new security threats in its wake.





Similar to the Chinese model? If you don’t act like a good little communist you don’t get an education, loans or the right to travel?

https://neurosciencenews.com/social-norms-ai-23667/

AI System Detects Social Norm Violations

A pioneering AI system successfully identifies violations of social norms. Utilizing GPT-3, zero-shot text classification, and automatic rule discovery, the system categorizes social emotions into ten main types. It analyzes written situations and accurately determines if they are positive or negative based on these categories.

This initial study offers promising evidence that the approach can be expanded to encompass more social norms.
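The pipeline described above can be sketched in miniature: classify a written situation into a social-emotion category, then map that category to a binary positive/negative norm judgment. In the study the classifier is GPT-3 with zero-shot prompting; the keyword cues and the category subset below are purely illustrative stand-ins, not the researchers' actual rules.

```python
# Toy sketch of the described approach. The keyword cues and the category
# subset are hypothetical stand-ins for the GPT-3 zero-shot classifier and
# the ten emotion categories used in the study.

# Illustrative subset of social-emotion categories and their polarity.
CATEGORY_POLARITY = {
    "gratitude": "positive",
    "pride": "positive",
    "anger": "negative",
    "shame": "negative",
}

# Hypothetical keyword cues standing in for the learned classifier.
KEYWORD_CUES = {
    "thank": "gratitude",
    "proud": "pride",
    "yelled": "anger",
    "embarrassed": "shame",
}

def classify_situation(text: str) -> tuple[str, str]:
    """Return (emotion category, positive/negative) for a written situation."""
    lowered = text.lower()
    for cue, category in KEYWORD_CUES.items():
        if cue in lowered:
            return category, CATEGORY_POLARITY[category]
    return "neutral", "positive"  # default when no cue matches

print(classify_situation("He yelled at the waiter over a small mistake."))
# -> ('anger', 'negative')
```

A real implementation would replace the keyword lookup with a single zero-shot prompt asking the model which emotion category a situation evokes, which is what makes the approach extensible to new social norms without retraining.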



(Related)

https://www.schneier.com/blog/archives/2023/07/ai-and-microdirectives.html

AI and Microdirectives

Imagine a future in which AIs automatically interpret—and enforce—laws.

All day and every day, you constantly receive highly personalized instructions for how to comply with the law, sent directly by your government and law enforcement. You’re told how to cross the street, how fast to drive on the way to work, and what you’re allowed to say or do online—if you’re in any situation that might have legal implications, you’re told exactly what to do, in real time.

Imagine that the computer system formulating these personal legal directives at mass scale is so complex that no one can explain how it reasons or works. But if you ignore a directive, the system will know, and it’ll be used as evidence in the prosecution that’s sure to follow.





A comment on anything government wants to suppress?

https://www.bespacific.com/dpla-launches-the-banned-book-club-to-ensure-access-to-banned-books/

Digital Public Library of America Launches The Banned Book Club to Ensure Access to Banned Books

PR Newswire: The Digital Public Library of America (DPLA) has launched The Banned Book Club to ensure that readers in communities affected by book bans can access banned books for free. The club makes e-book versions of banned titles available via the Palace e-reader app to readers in locations across the United States where those titles have been banned.

“At DPLA, our mission is to ensure access to knowledge for all and we believe in the power of technology to further that access,” said John S. Bracken, executive director of the Digital Public Library of America. “Today book bans are one of the greatest threats to our freedom, and we have created The Banned Book Club to leverage the dual powers of libraries and digital technology to ensure that every American can access the books they want to read.”





Worth a try?

https://www.cnbc.com/2023/07/20/3-steps-to-land-a-lucrative-ai-job-even-if-you-dont-work-in-tech.html

3 ways to build A.I. skills even if you don’t work in tech: ‘Suddenly your employability options go through the roof’


