Saturday, October 07, 2023

Is this “THE” solution or merely “A” solution?

https://arstechnica.com/tech-policy/2023/10/getty-images-built-a-socially-responsible-ai-tool-that-rewards-artists/

Getty Images built a “socially responsible” AI tool that rewards artists

Getty Images CEO Craig Peters told The Verge that he has found a solution to one of AI's biggest copyright problems: creators suing because AI models were trained on their original works without consent or compensation. To prove it's possible for AI makers to respect artists' copyrights, Getty built an AI tool using only licensed data that's designed to reward creators more and more as the tool becomes more popular over time.

Rather than crawling the web for data to feed its AI model, Getty's tool is trained exclusively on images that Getty owns the rights to, Peters said. The tool was created out of rising demand from Getty Images customers who want access to AI generators that don't carry copyright risks. Peters explained:





$100 million here, $100 million there, and pretty soon you’re talking real money!

https://www.databreaches.net/data-breach-at-mgm-resorts-expected-to-cost-casino-giant-100-million/

Data breach at MGM Resorts expected to cost casino giant $100 million

Wyatte Grantham-Philips reports:

The data breach last month that MGM Resorts is calling a cyberattack is expected to cost the casino giant more than $100 million, the Las Vegas-based company said.
The incident, which was detected on Sept. 10, led to MGM shutting down some casino and hotel computer systems at properties across the U.S. in efforts to protect data.

Read more at Waco Tribune-Herald.





At least they tried…

https://www.pogowasright.org/canadian-privacy-regulators-pass-resolutions-on-the-privacy-of-young-people-and-workplace-privacy/

Canadian privacy regulators pass resolutions on the privacy of young people and workplace privacy

… For young people, the resolution focuses on the responsibility of organizations across all sectors to actively safeguard young people’s data through responsible measures, including minimized tracking, regulated data sharing, and stringent control over commercial advertising. It also calls on organizations to safeguard young people’s rights to access, correction, and appeal regarding their personal data.

The employee privacy resolution addresses the recent proliferation of employee monitoring software and how it has revealed that laws protecting workplace privacy are either out of date or absent altogether. In our increasingly digital work environments, there need to be robust and relevant privacy protections in place to safeguard workers from overly intrusive monitoring by employers.

Resolution: Putting best interests of young people at the forefront of privacy and access to personal information

Resolution: Protecting Employee Privacy in the Modern Workplace

OPC guidance: Privacy in the Workplace



(Related)

https://www.pogowasright.org/schools-are-normalizing-intrusive-surveillance/

Schools Are Normalizing Intrusive Surveillance

J.D. Tuccille writes:

If war is the health of the state, as Randolph Bourne had it, then scaring the hell out of people is the health of the security state. Nothing scares people more than threats to wee ones, which is why “think of the children” is the go-to marketing hook for control-freak policies. And if children are involved in authoritarian schemes, you know that implicates public schools, which are the focus of a new report on surveillance and kids by the American Civil Liberties Union (ACLU).

Read more at Reason.



Friday, October 06, 2023

Preview of things to come?

https://www.schneier.com/blog/archives/2023/10/deepfake-election-interference-in-slovakia.html

Deepfake Election Interference in Slovakia

A well-designed and well-timed deepfake of two Slovakian politicians discussing how to rig the election:

Šimečka and Denník N immediately denounced the audio as fake. The fact-checking department of news agency AFP said the audio showed signs of being manipulated using AI. But the recording was posted during a 48-hour moratorium ahead of the polls opening, during which media outlets and politicians are supposed to stay silent. That meant, under Slovakia’s election rules, the post was difficult to widely debunk. And, because the post was audio, it exploited a loophole in Meta’s manipulated-media policy, which dictates only faked videos—where a person has been edited to say words they never said —go against its rules.

I just wrote about this. Countries like Russia and China tend to test their attacks out on smaller countries before unleashing them on larger ones. Consider this a preview of their actions in the US next year.





As I have been warning…

https://www.wired.com/story/fast-forward-chatbot-hallucinations-are-poisoning-web-search/

Chatbot Hallucinations Are Poisoning Web Search

Untruths spouted by chatbots ended up on the web—and Microsoft's Bing search engine served them up as facts. Generative AI could make search harder to trust.





Another problem...

https://www.bespacific.com/evaluating-llms-is-a-minefield/

Evaluating LLMs is a minefield

Evaluating LLMs is a minefield, by Arvind Narayanan and Sayash Kapoor (Princeton University), Oct. 4, 2023. Narayanan and Kapoor are the authors of the AI Snake Oil book and newsletter.





Tools & Techniques.

https://www.bespacific.com/how-to-use-google-bard-2023-a-comprehensive-guide/

How to Use Google Bard (2023): A Comprehensive Guide

TechRepublic: “This is a complete guide on how to use Google Bard. Learn how Google Bard can help you boost your productivity, creativity and more. Bard is Google’s public entry into the highly competitive field of artificial intelligence chatbots, which also includes OpenAI’s ChatGPT. Google intends Bard to be a “creative and helpful collaborator” that people may chat with using natural language. The following guide covers what you need to know as you chat and explore the capabilities of Google Bard.”



Thursday, October 05, 2023

A step in the right direction?

https://techxplore.com/news/2023-10-method-ai.html

Study presents new method for explainable AI

In their paper "From attribution maps to human-understandable explanations through concept relevance propagation," the researchers present concept relevance propagation (CRP), a new method that can explain individual AI decisions as concepts understandable to humans. The paper has now been published in Nature Machine Intelligence.

"On the input level, CRP labels which pixels within an image are most relevant for the AI decision process. This is an important step in understanding an AI's decisions, but it doesn't explain the underlying concept of why the AI considers those exact pixels."

For comparison, when humans see a black-and-white striped surface, they don't automatically recognize a zebra. To do so, they also need information such as four legs, hooves, a tail, etc. Ultimately, they combine the information from the pixels (black and white) with the concept of an animal.
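To make the input-level idea concrete, here is a minimal sketch in Python of gradient-times-input attribution, a much simpler relative of the attribution maps that CRP extends. It is not CRP itself, and the model and input below are placeholders for illustration.

import torch
import torchvision.models as models

# Untrained stand-in model and a random input, purely for illustration.
model = models.resnet18(weights=None).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass, then backpropagate the top class score to the input.
logits = model(image)
top_class = logits.argmax(dim=1).item()
logits[0, top_class].backward()

# Gradient-times-input heatmap: which pixels most influenced the decision.
relevance = (image.grad * image.detach()).sum(dim=1).squeeze()  # (224, 224)

CRP goes a step further by grouping such relevance scores into human-understandable concepts (stripes, legs, hooves) rather than leaving them at the pixel level.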





Steps in the wrong direction. Depressing on an otherwise good day…

https://www.bespacific.com/freedom-on-the-net-2023-the-repressive-power-of-artificial-intelligence/

Freedom on the Net 2023: The Repressive Power of Artificial Intelligence

Highlights – Freedom on the Net 2023

  • Global internet freedom declined for the 13th consecutive year. Digital repression intensified in Iran, home to this year’s worst decline, as authorities shut down internet service, blocked WhatsApp and Instagram, and increased surveillance in a bid to quell antigovernment protests. Myanmar came close to dislodging China as the world’s worst environment for internet freedom, a title the latter country retained for the ninth consecutive year. Conditions worsened in the Philippines as outgoing president Rodrigo Duterte used an antiterrorism law to block news sites that had been critical of his administration. Costa Rica’s status as a champion of internet freedom has been imperiled after the election of a president whose campaign manager hired online trolls to harass several of the country’s largest media outlets.

  • Attacks on free expression grew more common around the world. In a record 55 of the 70 countries covered by Freedom on the Net, people faced legal repercussions for expressing themselves online, while people were physically assaulted or killed for their online commentary in 41 countries. The most egregious cases occurred in Myanmar and Iran, whose authoritarian regimes carried out death sentences against people convicted of online expression-related crimes. In Belarus and Nicaragua, where protections for internet freedom plummeted during the coverage period, people received draconian prison terms for online speech, a core tactic employed by longtime dictators Alyaksandr Lukashenka and Daniel Ortega in their violent campaigns to stay in power.

  • Generative artificial intelligence (AI) threatens to supercharge online disinformation campaigns. At least 47 governments deployed commentators to manipulate online discussions in their favor during the coverage period, double the number from a decade ago. Meanwhile, AI-based tools that can generate text, audio, and imagery have quickly grown more sophisticated, accessible, and easy to use, spurring a concerning escalation of these disinformation tactics. Over the past year, the new technology was utilized in at least 16 countries to sow doubt, smear opponents, or influence public debate.

  • AI has allowed governments to enhance and refine their online censorship. The world’s most technically advanced authoritarian governments have responded to innovations in AI chatbot technology, attempting to ensure that the applications comply with or strengthen their censorship systems. Legal frameworks in at least 21 countries mandate or incentivize digital platforms to deploy machine learning to remove disfavored political, social, and religious speech. AI, however, has not completely displaced older methods of information control. A record 41 governments blocked websites with content that should be protected under free expression standards within international human rights law. Even in more democratic settings, including the United States and Europe, governments considered or actually imposed restrictions on access to prominent websites and social media platforms, an unproductive approach to concerns about foreign interference, disinformation, and online safety.

  • To protect internet freedom, democracy’s supporters must adapt the lessons learned from past internet governance challenges and apply them to AI. AI can serve as an amplifier of digital repression, making censorship, surveillance, and the creation and spread of disinformation easier, faster, cheaper, and more effective. An overreliance on self-regulation by private companies has left people’s rights exposed to a variety of threats in the digital age, and a shrinking of resources in the tech sector could exacerbate the deficiency. To protect the free and open internet, democratic policymakers—working side by side with civil society experts from around the world—should establish strong human rights–based standards for both state and nonstate actors that develop or deploy AI tools.




Wednesday, October 04, 2023

I see this as a failure to ‘work through’ the technology. After all, “Any sufficiently advanced technology is indistinguishable from magic.” (Arthur C. Clarke)

https://www.scientificamerican.com/article/how-can-we-trust-ai-if-we-dont-know-how-it-works/

How Can We Trust AI If We Don’t Know How It Works?



(Related)

https://www.bespacific.com/can-sensitive-information-be-deleted-from-llms/

Can Sensitive Information Be Deleted From LLMs?

Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks. Vaidehi Patil, Peter Hase, Mohit Bansal: “Pretrained language models sometimes possess knowledge that we do not wish them to, including memorized personal information and knowledge that could be used to harm people. They can also output toxic or harmful text. To mitigate these safety and informational issues, we propose an attack-and-defense framework for studying the task of deleting sensitive information directly from model weights. We study direct edits to model weights because (1) this approach should guarantee that particular deleted information is never extracted by future prompt attacks, and (2) it should protect against whitebox attacks, which is necessary for making claims about safety/privacy in a setting where publicly available model weights could be used to elicit sensitive information. Our threat model assumes that an attack succeeds if the answer to a sensitive question is located among a set of B generated candidates, based on scenarios where the information would be insecure if the answer is among B candidates. Experimentally, we show that even state-of-the-art model editing methods such as ROME struggle to truly delete factual information from models like GPT-J, as our whitebox and blackbox attacks can recover “deleted” information from an edited model 38% of the time. These attacks leverage two key observations: (1) that traces of deleted information can be found in intermediate model hidden states, and (2) that applying an editing method for one question may not delete information across rephrased versions of the question. Finally, we provide new defense methods that protect against some extraction attacks, but we do not find a single universally effective defense method. Our results suggest that truly deleting sensitive information is a tractable but difficult problem, since even relatively low attack success rates have potentially severe societal implications for real-world deployment of language models.”
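The paper's success criterion is easy to state in code. Below is a minimal sketch under assumed names; the generate callable and the default B are hypothetical stand-ins, not the authors' implementation.

from typing import Callable, List

def attack_succeeds(generate: Callable[[str], str],
                    prompt: str,
                    deleted_answer: str,
                    B: int = 20) -> bool:
    """An extraction attack succeeds if the supposedly deleted answer
    appears among B sampled candidate generations."""
    candidates: List[str] = [generate(prompt) for _ in range(B)]
    return any(deleted_answer.lower() in c.lower() for c in candidates)

Even low success rates under this criterion matter, as the authors note, because the information is insecure whenever the answer lands anywhere in the candidate set.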





A slippery slope? (Lots of loopholes and ChatGPT will find more.)

https://www.illinoispolicy.org/chicago-starts-taxing-chatgpt-artificial-intelligence/

CHICAGO STARTS TAXING CHATGPT, ARTIFICIAL INTELLIGENCE

Add ChatGPT to the list of things Chicago taxes: As of Oct. 1, Chicago’s personal property lease transaction tax began applying a 9% levy to the artificial intelligence platform.

The tax applies to leased computer platforms such as ChatGPT’s premium subscription. Users can avoid the tax by opting for the free version. If someone works in the city but mostly uses ChatGPT outside the city, they aren’t subject to the tax.
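For a sense of scale, a back-of-the-envelope calculation; the $20-per-month ChatGPT Plus price is an assumption for the example, while the 9% rate comes from the article.

# Hypothetical illustration of Chicago's 9% lease transaction tax.
monthly_price = 20.00   # assumed ChatGPT Plus subscription price
tax_rate = 0.09         # rate reported in the article
tax = monthly_price * tax_rate   # $1.80 per month
total = monthly_price + tax      # $21.80 per month
print(f"tax=${tax:.2f}/mo, total=${total:.2f}/mo")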





A tipping point?

https://www.justsecurity.org/89033/ai-and-the-future-of-drone-warfare-risks-and-recommendations/

AI and the Future of Drone Warfare: Risks and Recommendations

The next phase of drone warfare is here. On Sep. 6, 2023, U.S. Deputy Defense Secretary Kathleen Hicks touted the acceleration of the Pentagon’s Replicator initiative – an effort to dramatically scale up the United States’ use of artificial intelligence on the battlefield. She rightfully called it a “game-changing shift” in national security. Under Replicator, the U.S. military aims to field thousands of autonomous weapons systems across multiple domains in the next 18 to 24 months.

Yet Replicator is only the tip of the iceberg. Rapid advances in AI are giving rise to a new generation of lethal autonomous weapons systems (LAWS) that can identify, track, and attack targets without human intervention. Drones with autonomous capabilities and AI-enabled munitions are already being used on the battlefield, notably in the Russia-Ukraine War. From “killer algorithms” that select targets based on certain characteristics to autonomous drone swarms, the future of warfare looks increasingly apocalyptic.

Amidst the specter of “warbot” armies, it is easy to miss the AI revolution that is underway. Human-centered or “responsible AI,” as the Pentagon refers to it, is designed to keep a human “in the loop” in decision-making to ensure that AI is used in “lawful, ethical, responsible, and accountable ways.” But even with human oversight and strict compliance with the law, there is a growing risk that AI will be used in ways which fundamentally violate international humanitarian law (IHL) and international human rights law (IHRL).

Dubbed the “first full-scale drone war,” the Russia-Ukraine War marks an inflection point where states are testing and fielding LAWS on an increasingly networked battlefield. While autonomous drones reportedly have been used in Libya and Gaza, the war in Ukraine represents an acceleration of the integration of this technology into conventional military operations, with unpredictable and potentially catastrophic results. Those risks are even more pronounced with belligerents who may field drones without the highest level of safeguards due to lack of technological capacity or lack of will.

The lessons from the war in Ukraine include that relatively inexpensive drones can deny adversaries air superiority and provide a decisive military advantage in peer and near-peer conflicts, as well as against non-state actors.



(Related) Fight on the front lines without leaving your couch. (No need to understand strategic objectives.)

https://www.databreaches.net/8-rules-for-civilian-hackers-during-war-and-4-obligations-for-states-to-restrain-them/

8 rules for “civilian hackers” during war, and 4 obligations for states to restrain them

Written by Tilman Rodenhäuser and Mauro Vignati:

As digital technology is changing how militaries conduct war, a worrying trend has emerged in which a growing number of civilians become involved in armed conflicts through digital means. Sitting at some distance from physical hostilities, including outside the countries at war, civilians – from hacktivists and cyber security professionals to ‘white hat’, ‘black hat’ and ‘patriotic’ hackers – are conducting a range of cyber operations against their ‘enemy’. Some have described civilians as ‘first choice cyberwarriors’ because the ‘vast majority of expertise in cyber(defence) lies with the private (or civilian) sector’.
Examples of civilian hackers operating in the context of armed conflicts are diverse and many (see here, here, here). In particular in the international armed conflict between Russia and Ukraine, some groups present themselves as a ‘worldwide IT community’ with the mission to, in their words, ‘help Ukraine win by crippling aggressor economies, blocking vital financial, infrastructural and government services, and tiring major taxpayers’. Others have reportedly ‘called for and carried out disruptive – albeit temporary – attacks on hospital websites in both Ukraine and allied countries’, among many other operations. With many groups active in this field, some of them having thousands of hackers in their coordination channels and providing automated tools to their members, civilian involvement in digital operations during armed conflict has reached unprecedented proportions.
This is not the first time that civilian hackers have operated in the context of an armed conflict, and likely not the last. In this post, we explain why this trend must be of concern to States and societies. Subsequently, we present 8 international humanitarian law-based rules that all hackers who carry out operations in the context of an armed conflict must comply with, and recall States’ responsibility to restrain them.

Read the 8 rules and discussion at EJIL.

Some groups have told the BBC that they will not comply with the rules, or will not comply with all of them.





An “R” rated LLM trending to “XXX” – why not?

https://www.zdnet.com/article/nearly-10-of-people-ask-ai-chatbots-for-explicit-content-will-it-lead-llms-astray/

Nearly 10% of people ask AI chatbots for explicit content. Will it lead LLMs astray?

With the overnight sensation of ChatGPT, it was only a matter of time before the use of generative AI became both a subject of serious research and grist for the training of generative AI itself.

In a research paper released this month, scholars gathered a database of one million "real-world conversations" that people have had with 25 different large language models. Released on the arXiv pre-print server, the paper was authored by Lianmin Zheng of the University of California at Berkeley, and peers at UC San Diego, Carnegie Mellon University, Stanford, and Abu Dhabi's Mohamed bin Zayed University of Artificial Intelligence.

A sample of 100,000 of those conversations, selected at random by the authors, showed that most were about subjects you'd expect. The top 50% of interactions were on such pedestrian topics as programming, travel tips, and requests for writing help.

But below that top 50%, other topics crop up, including role-playing characters in conversations, and three topic categories that the authors term "unsafe": "Requests for explicit and erotic storytelling"; "Explicit sexual fantasies and role-playing scenarios"; and "Discussing toxic behavior across different identities."
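The sampling-and-tallying step behind those figures is straightforward. Here is a hypothetical sketch; the list-of-dicts layout and the "topic" field are assumptions for illustration, not the dataset's actual schema.

import random
from collections import Counter

def topic_breakdown(conversations, sample_size=100_000, seed=0):
    """Draw a random sample of conversations and tally topic labels."""
    rng = random.Random(seed)
    sample = rng.sample(conversations, min(sample_size, len(conversations)))
    return Counter(conv["topic"] for conv in sample)

# e.g. topic_breakdown(dataset).most_common(10) would list the ten
# most frequent topics in a 100,000-conversation sample.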





He might have a point…

https://www.bespacific.com/language-models-plagiarism-and-legal-writing/

Language Models, Plagiarism, and Legal Writing

Smith, Michael L., Language Models, Plagiarism, and Legal Writing (August 16, 2023). University of New Hampshire Law Review, Vol. 22, (Forthcoming), Available at SSRN: https://ssrn.com/abstract=4542723. “Language models like ChatGPT are the talk of the town in legal circles. Despite some high-profile stories of fake ChatGPT-generated citations, many practitioners argue that language models are the way of the future. These models, they argue, promise an efficient source of first drafts and stock language. Similar discussions are occurring regarding legal writing education, with a number of professors urging the acknowledgment of language models, and others going further and arguing that students ought to learn to use these models to improve their writing and prepare for practice. I argue that those urging the incorporation of language models into legal writing education leave out a key technique employed by lawyers across the country: plagiarism. Attorneys have copied from each other, secondary sources, and themselves for decades. While a few brave souls have begun to urge that law schools inform students of this reality and teach them to plagiarize effectively, most schools continue to unequivocally condemn the practice. I argue that continued condemnation of plagiarism is inconsistent with calls to adopt language models, as the same justifications for incorporating language models into legal writing pedagogy apply with equal or greater force to incorporating plagiarism into legal writing education as well. This Essay is also a reality check for overhyped claims of language model efficiency and effectiveness. To be sure, a brief generated through a text prompt can be produced much faster than writing something up from scratch. But that’s not how most attorneys actually do things. More often than not, they’re copying from templates, forms, or other preexisting work in a manner similar to adopting the output of a language model to the case at hand. I close with the argument that even if language models and plagiarism may enhance legal writing pedagogy, students should still be taught the foundational skills of legal writing so that they may have the background and deeper understanding needed to use all of their legal writing tools effectively.”





Tools & Techniques.

https://www.bespacific.com/delete-your-digital-history-from-dozens-of-companies-with-this-app/

Delete your digital history from dozens of companies with this app

Washington Post: “A new iPhone and Android app [it does not work on Mac or PC] called Permission Slip makes it super simple to order companies to delete your personal information and secrets. Trying it saved me about 76 hours of work telling Ticketmaster, United, AT&T, CVS and 35 other companies to knock it off. Did I mention Permission Slip is free? And it’s made by an organization you can trust: the nonprofit Consumer Reports. I had a few hiccups testing it, but I’m telling everyone I know to use it. This is the privacy app all those snooping companies don’t want you to know about. (A surge of interest in Permission Slip caused technical difficulties when it first launched, but Consumer Reports says those have now been fixed.)”



Tuesday, October 03, 2023

We can, therefore we must.

https://www.pogowasright.org/the-expansion-of-biometrics-continues-be-aware-and-push-back/

The expansion of biometrics continues…. be aware and push back

A few items in the news recently, sent in by Joe Cadillic:

From Amnesty International:

Reacting to news that X, the social media platform formerly known as Twitter, has introduced a new privacy policy which allows it to collect users’ biometric data and access encrypted messages, Michael Kleinman, Director of Silicon Valley Initiative at Amnesty International, said:
“’Biometric’ is a broad term which relates to a person’s physical attributes and needs to be clearly explained. Even though X’s new policy asks users for their consent regarding the collection of biometric data, there is a real risk that their right to privacy will be violated.
The new policy does not clearly spell out how that data will be stored and the safety measures in place to ensure that the information collected will not be used for unlawful purposes.
Read more.

From Fight for the Future:

On Thursday, September 28, baseball fans and privacy advocates gathered outside the Phillies’ last home game of the regular season to protest Major League Baseball’s newly installed “Go-Ahead Entry” facial recognition ticketing system. Protestors wore T-shirts and held banners and signs opposing facial recognition. They also passed out flyers and chatted with fans about the risks of Go-Ahead Entry.
The organizers concluded the event by delivering an open letter signed by Amnesty International, Access Now, American Friends Service Committee, Muslim Advocates, and other leading human rights groups calling for a ban on all forms of biometric data collection at Major League sports stadiums.
Read more.

From Orlando ParkStop:

Facial Recognition is coming to the Orlando theme parks—and not just to Epic Universe, as has been reported by other outlets. This new “frictionless” entry technology is expected to make its way to all of the Universal Orlando parks, and soon.
But what do we know about this “Photo Validation” system, as Universal is calling it, and how will it be used at Universal Studios Florida, Islands of Adventure, Volcano Bay, and eventually Epic Universe? Let’s go over the official details, publicly filed patents, permits, and even some new rumors to see what we can learn. See the video version of this story for additional visuals.
Read more. See also Hollywood Reporter.

From 404 Media:

A food delivery robot company that delivers for Uber Eats in Los Angeles provided video filmed by one of its robots to the Los Angeles Police Department as part of a criminal investigation, 404 Media has learned. The incident highlights the fact that delivery robots that are being deployed to sidewalks all around the country are essentially always filming, and that their footage can and has been used as evidence in criminal trials. Emails obtained by 404 Media also show that the robot food delivery company wanted to work more closely with the LAPD, which jumped at the opportunity.
Read more.





Unfortunately, some predictions come true.

https://www.bespacific.com/cities-should-act-now-to-ban-predictive-policing/

Cities Should Act NOW to Ban Predictive Policing

EFF: “Sound Thinking, the company behind ShotSpotter, an acoustic gunshot detection technology that is rife with problems, is reportedly buying Geolitica, the company behind PredPol, a predictive policing technology known to exacerbate inequalities by directing police to already massively surveilled communities. Sound Thinking acquired the other major predictive policing technology, HunchLab, in 2018. This consolidation of harmful and flawed technologies means it’s even more critical for cities to move swiftly to ban the harmful tactics of both of these technologies. ShotSpotter is currently linked to over 100 law enforcement agencies in the U.S. PredPol, on the other hand, was used in around 38 cities in 2021 (this may be much higher now). ShotSpotter’s acquisition of HunchLab already led the company to claim that the tools work “hand in hand”; a 2018 press release made clear that predictive policing would be offered as an add-on product, and claimed that the integration of the two would “enable it to update predictive models and patrol missions in real time.” When companies like Sound Thinking and Geolitica merge and bundle their products, it becomes much easier for cities that purchase one harmful technology to end up deploying a suite of them without meaningful oversight, transparency, or control by elected officials or the public. Axon, for instance, was criticized by academics, attorneys, activists, and its own ethics board for its intention to put tasers on indoor drones. Now the company has announced its acquisition of Sky-Hero, which makes small tactical UAVs, a sign that it may be willing to restart the drone taser program that led a good portion of its ethics board to resign. Mergers can be a sign of future ambitions…





An interesting application.

https://tech.eu/2023/10/03/unitary-raises-15m-in-series-a-funding-round-for-ai-driven-content-moderation/

Unitary raises $15M in Series A funding round for AI-driven content moderation

The investment arrives as Unitary launches across multiple languages; the company has grown its team to 53 and tripled the number of videos it classifies daily, to 6 million.

Leveraging the ever-growing power of AI, according to Unitary, its offering can ‘read’ the context of user-generated videos. That means the machine can tell the difference between footage of a white supremacist rally in Charlottesville, Virginia, and documentary footage used to illustrate the dangers of such actions. All without human intervention.





A list of resources is always welcome.

https://www.bespacific.com/keeping-up-with-generative-ai-in-the-law/

Keeping Up With Generative AI in the Law

Via LLRX – Keeping Up With Generative AI in the Law: The pace of generative AI development (and hype) over the past year has been intense, and difficult even for us experienced librarians, masters of information that we are, to follow. Not only is there a constant stream of new products, but also new academic papers, blog posts, newsletters, and more, from people evaluating, experimenting with, and critiquing those products. With that in mind, Rebecca Fordon shares her favorites, as well as recommendations from her co-bloggers.