Defining privacy?
Invasion of the Data Snatchers: B.C. Court of Appeal Clarifies Possible Scope of Privacy Claims Against Data Custodians in Data Breaches
Lyann Danielak, Joshua Hutchinson, and Robin Reinertson of Blake, Cassels & Graydon LLP write:
On July 4, 2024, the B.C. Court of Appeal issued a duo of class action appeal decisions considering the potential scope of statutory and common law privacy claims against data custodians that fall victim to cyberattacks. In both G.D. v. South Coast British Columbia Transportation Authority (G.D.) and Campbell v. Capital One Financial Corporation (Campbell), the B.C. Court of Appeal affirmed that numerous causes of action may arguably be available even against data custodians innocent of any intentional wrongdoing, including the statutory tort of violation of privacy pursuant to the B.C. Privacy Act. These decisions follow the B.C. Court of Appeal’s decision earlier this year in Situmorang v. Google, LLC, in which the court left open the question of whether the tort of intrusion upon seclusion exists in B.C., in addition to the statutory tort of violation of privacy.
Read more at JDSupra.
Raising obfuscation to an art…
Ninth Circuit Signals That A Reasonable User Cannot Consent to Data Collection Via Confusing and Contradictory Privacy Disclosures
From EPIC.org:
Last week, the Ninth Circuit heard oral arguments in Calhoun v. Google, a case about whether users really consented to Google’s collecting and sharing their data when Google’s own published policies said contradictory things about those practices. EPIC’s amicus brief argued that Google cannot claim consumers reasonably consented to data practices that the company’s privacy policy said it would not engage in, whatever its contradictory general disclosure terms may have said. During oral argument, the judges signaled agreement with EPIC’s position.
In this case, plaintiffs sued because Google represented to Chrome users that it would not collect browsing history unless the users chose to sync that data to the cloud. But, in fact, Google did collect and transfer information about Chrome users’ browsing habits even when they did not choose to sync their data to the cloud. Google argued in its defense that these users had nevertheless consented to the collection and transfer of their sensitive browsing data based on general disclosures in its user agreement.
The Ninth Circuit judges seemed to agree with the plaintiffs and EPIC, noting that the federal district judge had needed an eight-hour evidentiary hearing just to understand Google’s data collection practices, and that no reasonable user could be held to that standard in order to consent to them. One judge also remarked that reading complicated Terms of Service online is like reading hieroglyphics.
Read more at EPIC.
Tools & Techniques. (Also creates more ‘bad data’ for the AI to rely on...)
A new tool for copyright holders can show if their work is in AI training data
MIT Technology Review [unpaywalled]: “Since the beginning of the generative AI boom, content creators have argued that their work has been scraped into AI models without their consent. But until now, it has been difficult to know whether specific text has actually been used in a training data set. Now they have a new way to prove it: “copyright traps” developed by a team at Imperial College London, pieces of hidden text that allow writers and publishers to subtly mark their work in order to later detect whether it has been used in AI models or not. The idea is similar to traps that have been used by copyright holders throughout history—strategies like including fake locations on a map or fake words in a dictionary. These AI copyright traps tap into one of the biggest fights in AI. A number of publishers and writers are in the middle of litigation against tech companies, claiming their intellectual property has been scraped into AI training data sets without their permission. The New York Times’ ongoing case against OpenAI is probably the most high-profile of these. The code to generate and detect traps is currently available on GitHub, but the team also intends to build a tool that allows people to generate and insert copyright traps themselves.”
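The detection side of this scheme is essentially a membership-inference test: a model that trained on a trap will be measurably less “surprised” by it than by comparable text it never saw. Below is a minimal illustrative sketch of that idea in Python, not the Imperial College team’s actual GitHub code; it assumes the torch and transformers packages and uses GPT-2 purely as a stand-in target model.

```python
# Sketch: plant a unique nonsense sentence ("trap") in your documents, then
# later compare a model's perplexity on the trap against perplexity on
# similar sentences that were never published. Markedly lower perplexity on
# the trap hints that the trap was in the model's training data.
import math, random, string

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def make_trap(n_words: int = 12, seed: int = 0) -> str:
    """Generate a high-entropy nonsense sentence to hide in documents."""
    rng = random.Random(seed)
    words = ["".join(rng.choices(string.ascii_lowercase, k=rng.randint(4, 9)))
             for _ in range(n_words)]
    return " ".join(words) + "."

def perplexity(model, tok, text: str) -> float:
    """Token-level perplexity of `text` under the model."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return math.exp(loss.item())

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

trap = make_trap(seed=42)                            # published, hidden in documents
controls = [make_trap(seed=s) for s in range(1, 6)]  # kept private, never published

ppl_trap = perplexity(model, tok, trap)
ppl_ctrl = sum(perplexity(model, tok, c) for c in controls) / len(controls)
print(f"trap perplexity {ppl_trap:.1f} vs. control average {ppl_ctrl:.1f}")
# A trap perplexity far below the controls' suggests the trap text
# appeared in the model's training data.
```

In the real scheme the trap sequence is repeated many times across documents and the comparison is made statistically against many reference sequences; a single perplexity gap like this one is only suggestive.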
Tools & Techniques. (Get ready for those election ads…)
https://www.schneier.com/blog/archives/2024/07/new-research-in-detecting-ai-generated-videos.html
New Research in Detecting AI-Generated Videos
The latest in what will be a continuing arms race between creating and detecting videos:
The new tool the research project is unleashing on deepfakes, called “MISLnet”, evolved from years of data derived from detecting fake images and video with tools that spot changes made to digital video or images. These may include the addition or movement of pixels between frames, manipulation of the speed of the clip, or the removal of frames.
Such tools work because a digital camera’s algorithmic processing creates relationships between pixel color values. Those relationships between values are very different in user-generated imagery or in images edited with apps like Photoshop.
But because AI-generated videos aren’t produced by a camera capturing a real scene or image, they don’t contain those telltale disparities between pixel values.
The Drexel team’s tools, including MISLnet, learn using a method called a constrained neural network, which can differentiate between normal and unusual values at the sub-pixel level of images or video clips, rather than searching for the common indicators of image manipulation like those mentioned above.
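For a concrete sense of what “constrained” means here, the sketch below shows a first convolutional layer whose filters are forced to be prediction-error filters, in the spirit of the constrained CNNs published by the Drexel MISL group (Bayar and Stamm). The actual MISLnet architecture may differ; treat this as an illustration of the constraint, not the paper’s implementation.

```python
# Hedged sketch of a constrained first conv layer: the center tap of each
# filter is fixed at -1 and the off-center taps are renormalized to sum to 1,
# so the layer computes a pixel-prediction residual instead of learning
# scene content. Assumes PyTorch.
import torch
import torch.nn as nn

class ConstrainedConv2d(nn.Conv2d):
    def constrain(self):
        with torch.no_grad():
            w = self.weight                       # shape (out_ch, in_ch, k, k)
            c = w.shape[-1] // 2                  # center tap index
            w[:, :, c, c] = 0.0                   # zero the center tap
            w /= w.sum(dim=(2, 3), keepdim=True)  # off-center taps sum to 1
            w[:, :, c, c] = -1.0                  # center tap fixed at -1

    def forward(self, x):
        self.constrain()                          # re-apply before each pass
        return super().forward(x)

layer = ConstrainedConv2d(1, 3, kernel_size=5, padding=2, bias=False)
residual = layer(torch.randn(1, 1, 64, 64))       # sub-pixel "noise" features
print(residual.shape)  # torch.Size([1, 3, 64, 64])
```

Because each filter predicts a pixel from its neighbors and subtracts it, the downstream layers learn forensic statistics from the residual, which is where camera-pipeline traces (or their absence in AI-generated video) live.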
Research paper.
(Related)
https://www.bespacific.com/fake-images-are-getting-harder-to-spot-heres-a-field-guide/
Fake images are getting harder to spot. Here’s a field guide.
Washington Post [unpaywalled]: “Photographs have a profound power to shape our understanding of the world. And it’s never been more important to be able to discern which ones are genuine and which are doctored to push an agenda, especially in the wake of dramatic or contentious moments. But advances in technology mean that spotting manipulated or even totally AI-generated imagery is only getting trickier. Take, for example, a photo of Catherine, Princess of Wales, issued by Kensington Palace in March. News organizations retracted it after experts noted some obvious manipulations. And some questioned whether images captured during the assassination attempt on former president Donald Trump were genuine. Here are a few things experts suggest the next time you come across an image that leaves you wondering…”