Saturday, August 03, 2024

A tiny weeny misdirection.

https://www.cnbc.com/2024/08/02/elon-musk-pac-voter-data-trump-harris.html

How an Elon Musk PAC is using voter data to help Trump beat Harris in 2024 election

If a user lives in a state that is not considered competitive in the presidential election, like California or Wyoming, they’ll be prompted to enter their email address and ZIP code and then quickly directed to a voter registration page for their state, or back to the original sign-up section.

But for users who enter a ZIP code that indicates they live in a battleground state, like Pennsylvania or Georgia, the process is very different.

Rather than being directed to their state’s voter registration page, they are sent to a highly detailed personal information form and prompted to enter their address, cellphone number and age.

If they agree to submit all that, the system still does not steer them to a voter registration page. Instead, it shows them a “thank you” page.

So that person who wanted help registering to vote? In the end, they got no help at all registering. But they did hand over priceless personal data to a political operation.



Friday, August 02, 2024

Probably not written by AI.

https://www.publishersweekly.com/pw/by-topic/industry-news/publisher-news/article/95621-u-s-copyright-office-releases-part-one-of-ai-report-calls-for-new-legislation.html

U.S. Copyright Office Releases Part One of AI Report, Calls for New Legislation

The U.S. Copyright Office has released part one of a wide-ranging report on the impact of the recent artificial intelligence boom—and “digital replicas,” more commonly known as deepfakes, are the first topic of concern.

From “AI-generated musical performances to robocall impersonations of political candidates to images in pornographic videos,” the report acknowledges that “a new era of sophisticated digital replicas has arrived.” And while the technology has long existed to produce such deepfakes, the report concludes that the rapid advancement of generative AI means new federal legislation is needed.



(Related)

https://www.billboard.com/pro/ai-firms-suno-udio-blast-lawsuit-music-labels-threat-to-market-share/

AI Firms Blast Lawsuit From Music Giants: ‘Labels See a Threat to Their Market Share’

AI music firms Suno and Udio are firing back with their first responses to sweeping lawsuits filed by the major record labels, arguing that they were free to use copyrighted songs to train their models and claiming the music industry is abusing intellectual property to crush competition.

In legal filings on Thursday, the two firms admitted to using proprietary materials to create their artificial intelligence, with Suno saying it was “no secret” that the company had ingested “essentially all music files of reasonable quality that are accessible on the open Internet.”

But both companies said that such use was clearly lawful under copyright’s fair use doctrine, which allows for the reuse of existing materials to create new works.





Interesting resource. It would be even better if I had a way to transcribe the documentaries.

https://www.bespacific.com/4000-free-movies-online/

4,000+ Free Movies Online

4,000+ Free Movies Online: Great Classics, Indies, Noir, Westerns, Documentaries & More. Watch 4,000+ movies free online. Includes classics, indies, film noir, documentaries and other films, created by some of our greatest actors, actresses and directors. The collection is divided into the following categories: Comedy & Drama; Film Noir, Horror & Hitchcock; Westerns (many with John Wayne); Martial Arts Movies; Silent Films; Documentaries; and Animation.





Tools & Techniques.

https://www.makeuseof.com/ai-powered-video-transcription-tools/

These 3 AI-Powered Video Transcription Tools Save Me Hours of Watching

AI video transcription tools turn hours of watching into minutes of quick info-finding. Here are three tools that have significantly reduced my video-watching time and improved my productivity.



Thursday, August 01, 2024

Help or handicap? Blocking what might be the best option?

https://www.darkreading.com/vulnerabilities-threats/would-making-ransom-payments-illegal-result-in-fewer-attacks

Would Making Ransom Payments Illegal Result in Fewer Attacks?

Ransomware and other malware attacks are among the top three types of security incidents that organizations experience, according to Netwrix's "2024 Hybrid Security Trends Report." In a bid to curb this menace, for several years now there have been discussions around a radical approach: making ransomware payments illegal. The rationale is straightforward. If paying a ransom is prohibited, organizations won't do it — thus eliminating the incentive for cybercriminals to launch ransomware attacks. Problem solved. Or is it?



(Related)

https://www.cnn.com/2024/07/31/politics/cyberattack-oneblood-blood-donation/

First on CNN: Cyberattack hits blood-donation nonprofit OneBlood

… OneBlood serves hospitals in Alabama, Florida, Georgia, and North and South Carolina, according to its website. In a statement, the nonprofit acknowledged the ransomware attack and said it was working closely with cybersecurity experts as well as law enforcement. The nonprofit is “operating at a significantly reduced capacity.”

“We have implemented manual processes and procedures to remain operational. Manual processes take significantly longer to perform and impacts inventory availability. In an effort to further manage the blood supply we have asked the more than 250 hospitals we serve to activate their critical blood shortage protocols and to remain in that status for the time being,” said Susan Forbes, a spokeswoman for the nonprofit.





Perspective.

https://sloanreview.mit.edu/article/seven-reasons-to-strengthen-your-customer-benefits-focus/

Seven Reasons to Strengthen Your Customer Benefits Focus

Harvard Business School professor Theodore Levitt emphasized the customer impact of benefits when he famously argued that people don’t want to buy a quarter-inch drill; they want a quarter-inch hole. Although his idea is straightforward, many companies still fail to appreciate how embracing a benefits-driven approach can help them unlock new opportunities — for innovation, customer satisfaction, and sustainable growth.



Wednesday, July 31, 2024

Is this the future of all government documents? If so, I have a few concerns…

https://www.reuters.com/technology/california-dmv-puts-42-million-car-titles-blockchain-fight-fraud-2024-07-30/

California DMV puts 42 million car titles on blockchain to fight fraud

California's Department of Motor Vehicles (DMV) has digitized 42 million car titles using blockchain technology in a bid to detect fraud and streamline the title transfer process, the agency's technology partners exclusively told Reuters on Tuesday.

The project, in collaboration with tech company Oxhead Alpha on the Avalanche blockchain, will allow California's more than 39 million residents to claim their vehicle titles through a mobile app, the first such move in the United States.





The value of good security keeps going up!

https://therecord.media/ibm-breach-report-cost-rise-to-5-million

IBM: Cost of a breach reaches nearly $5 million, with healthcare being hit the hardest

Businesses that fall victim to a data breach can expect a financial hit of nearly $5 million on average — a 10% increase compared to last year — according to IBM’s annual report on cybersecurity incidents.

The tech giant worked with the Ponemon Institute to study 604 organizations affected by data breaches between March 2023 and February 2024. The breaches — affecting 17 industries across 16 countries and regions — ranged from 2,100 to 113,000 individual records leaked. The researchers also interviewed 3,556 security and C-suite business leaders with firsthand knowledge of the data breach incidents at their organizations.

What stood out most to IBM was the jump in the global average cost of a data breach, which reached $4.88 million and was the biggest jump since the pandemic. In 2023 the cost was $4.45 million.





Think this could kick off a ‘largest settlement’ competition?

https://pogowasright.org/attorney-general-ken-paxton-secures-1-4-billion-settlement-with-meta-over-its-unauthorized-capture-of-personal-biomet/

Attorney General Ken Paxton Secures $1.4 Billion Settlement with Meta Over Its Unauthorized Capture of Personal Biometric Data In Largest Settlement Ever Obtained From An Action Brought By A Single State

Texas Attorney General Ken Paxton issued the following press release today:

Texas Attorney General Ken Paxton has secured a $1.4 billion settlement with Meta (formerly known as Facebook) to stop the company’s practice of capturing and using the personal biometric data of millions of Texans without the authorization required by law.
This settlement is the largest ever obtained from an action brought by a single State. Further, this is the largest privacy settlement an Attorney General has ever obtained, dwarfing the $390 million settlement a group of 40 states obtained in late 2022 from Google. This is the first lawsuit brought and first settlement obtained under Texas’s “Capture or Use of Biometric Identifier” Act and serves as a warning to any companies engaged in practices that violate Texans’ privacy rights.




Reasonable.

https://www.lawnext.com/2024/07/in-first-ethics-ruling-on-gen-ai-aba-says-lawyers-must-have-reasonable-understanding-of-the-technology-but-need-not-become-experts.html

In First Ethics Ruling on Gen AI, ABA Says Lawyers Must Have Reasonable Understanding of the Technology, But Need Not Become Experts

In its first major pronouncement on the ethics of using generative AI in law practice, the American Bar Association has issued an opinion saying that lawyers need not become experts in the technology, but must have a reasonable understanding of the capabilities and limitations of the specific generative AI technology the lawyer might use.

In Formal Opinion 512, issued yesterday, the ABA’s Standing Committee on Ethics and Professional Responsibility sought to identify some of the ethics issues lawyers face when using generative AI tools and offer guidance for lawyers in navigating this emerging landscape.

Acknowledging that the rapid development of gen AI makes it a fast-moving target, the committee said, “It is anticipated that this Committee and state and local bar association ethics committees will likely offer updated guidance on professional conduct issues relevant to specific GAI tools as they develop.”

The opinion offers no earth-shattering insights.



Tuesday, July 30, 2024

Will this change be reversed? Probably.

https://techcrunch.com/2024/07/29/us-border-agents-must-get-warrant-before-cell-phone-searches-federal-court-rules/?guccounter=1

US border agents must get warrant before cell phone searches, federal court rules

A federal district court in New York has ruled that U.S. border agents must obtain a warrant before searching the electronic devices of Americans and international travelers crossing the U.S. border.

The ruling on July 24 is the latest court opinion to upend the U.S. government’s long-standing legal argument, which asserts that federal border agents should be allowed to access the devices of travelers at ports of entry, like airports, seaports and land borders, without a court-approved warrant.





What is worse than a deepfake?

https://pogowasright.org/kansas-court-of-appeals-denies-that-ku-medical-center-has-a-duty-of-privacy-to-its-patients/

Kansas Court of Appeals Denies that KU Medical Center has a Duty of Privacy to its Patients

While looking for information on another breach, PogoWasRight stumbled over this blog post by McShane & Brady law firm in Kansas City:

McShane & Brady filed a lawsuit against the University of Kansas Medical Center (KUMC) for a breach of private medical information in which a doctor took a photograph of a patient’s genitals on her personal cell phone and texted the photo to medical students.
The case was filed in the Wyandotte County District Court and assigned to Judge Timothy Dupree. KUMC moved for the case to be dismissed claiming that it did not have a duty to keep patient information confidential. On March 29, 2023, Judge Timothy Dupree dismissed the case finding that the KUMC did not have a duty to keep patient information private. The decision of Judge Dupree was appealed to the Kansas Court of Appeals.
On July 5, 2024, the Kansas Court of Appeal affirmed the District Court’s ruling stating:
“We agree with the district court that Kansas does not recognize a common-law duty for a medical entity to protect the privacy and confidentiality of patients that would give rise to a private cause of action for the alleged breach of that duty.”
“The Court of Appeals has thrown medical privacy out the window,” said Maureen Brady.
McShane & Brady is filing a Petition for Review with the Kansas Supreme Court and is seeking justice for all patients who have been victims of the wrongful disclosure of medical information.
Fox4Kc conducted an interview with Maureen Brady to discuss the case. KU Medical Center is expected to comment today. Click here to view the story.

So this is a bit shocking. It’s one thing for HIPAA to have no private cause of action, but for there to be no private cause of action under state law, well, does that leave Kansas residents with any redress if their medical privacy has been violated? What law is left that protects them? From reading the opinion, it sounds like there is none. And although Ms. Brady claims the appellate court has “thrown medical privacy out the window,” it sounds like medical privacy was never in any window in Kansas.

It will be interesting to see what the state supreme court does.





Another reason for AI lawyers?

https://www.bespacific.com/the-race-against-time-to-reinvent-lawyers/

The race against time to reinvent lawyers

Via LLRX: The race against time to reinvent lawyers. Jordan Furlong is a leading analyst of the global legal market and forecaster of its future development. In this insightful article he contends that our legal education and licensing systems produce one kind of lawyer. The legal market of the near future will need another kind. If we can’t close this gap fast, we’ll have a very serious problem.





Something to consider. Is the software for self driving cars vulnerable to a Crowdstrike type failure?

https://www.schneier.com/blog/archives/2024/07/providing-security-updates-to-automobile-software.html

Providing Security Updates to Automobile Software

Auto manufacturers are just starting to realize the problems of supporting the software in older models:

Today’s phones are able to receive updates six to eight years after their purchase date. Samsung and Google provide Android OS updates and security updates for seven years. Apple halts servicing products seven years after they stop selling them.
That might not cut it in the auto world, where the average age of cars on US roads is only going up. A recent report found that cars and trucks just reached a new record average age of 12.6 years, up two months from 2023. That means the car software hitting the road today needs to work — and maybe even improve — beyond 2036. The average length of smartphone ownership is just 2.8 years.

I wrote about this in 2018, in Click Here to Kill Everybody, talking about patching as a security mechanism:

This won’t work with more durable goods. We might buy a new DVR every 5 or 10 years, and a refrigerator every 25 years. We drive a car we buy today for a decade, sell it to someone else who drives it for another decade, and that person sells it to someone who ships it to a Third World country, where it’s resold yet again and driven for yet another decade or two. Go try to boot up a 1978 Commodore PET computer, or try to run that year’s VisiCalc, and see what happens; we simply don’t know how to maintain 40-year-old [consumer] software.
Consider a car company. It might sell a dozen different types of cars with a dozen different software builds each year. Even assuming that the software gets updated only every two years and the company supports the cars for only two decades, the company needs to maintain the capability to update 20 to 30 different software versions. (For a company like Bosch that supplies automotive parts for many different manufacturers, the number would be more like 200.) The expense and warehouse size for the test vehicles and associated equipment would be enormous. Alternatively, imagine if car companies announced that they would no longer support vehicles older than five, or ten, years. There would be serious environmental consequences.

We really don’t have a good solution here. Agile updates are how we maintain security in a world where new vulnerabilities arise all the time, and we don’t have the economic incentive to secure things properly from the start.



Monday, July 29, 2024

Defining privacy?

https://pogowasright.org/invasion-of-the-data-snatchers-b-c-court-of-appeal-clarifies-possible-scope-of-privacy-claims-against-data-custodians-in-data-breaches/

Invasion of the Data Snatchers: B.C. Court of Appeal Clarifies Possible Scope of Privacy Claims Against Data Custodians in Data Breaches

Lyann Danielak, Joshua Hutchinson, and Robin Reinertson of Blake, Cassels & Graydon LLP write:

On July 4, 2024, the B.C. Court of Appeal issued a duo of class action appeal decisions considering the potential scope of statutory and common law privacy claims against data custodians that fall victim to cyberattacks in data breach cases. In both G.D. v. South Coast British Columbia Transportation Authority (G.D.) and Campbell v. Capital One Financial Corporation (Campbell), the B.C. Court of Appeal affirmed that numerous causes of action may arguably be available even against data custodians innocent of any intentional wrongdoing, including the statutory tort of violation of privacy pursuant to the B.C. Privacy Act. These decisions follow the B.C. Court of Appeal’s decision earlier this year in Situmorang v. Google, LLC, in which the court left open the question of whether the tort of intrusion upon seclusion exists in B.C., in addition to the statutory tort of violation of privacy.

Read more at JDSupra.





Raising obfuscation to an art…

https://pogowasright.org/ninth-circuit-signals-that-a-reasonable-user-cannot-consent-to-data-collection-via-confusing-and-contradictory-privacy-disclosures/

Ninth Circuit Signals That A Reasonable User Cannot Consent to Data Collection Via Confusing and Contradictory Privacy Disclosures

From EPIC.org:

Last week, the Ninth Circuit heard oral arguments in Calhoun v. Google, a case about whether users really consented to Google’s collecting and sharing their data when Google’s own published policies said contradictory things about those practices. EPIC’s amicus brief asserted that Google cannot argue that consumers reasonably consented to its data practices when the company’s privacy policy said it would not engage in those practices, even though Google disclaimed any liability in its contradictory general disclosure terms. During oral argument, the judges signaled agreement with EPIC’s position.
In this case, plaintiffs sued because Google represented to Chrome users that it would not collect browsing history unless the users chose to sync that data to the cloud. But, in fact, Google did collect and transfer information about Chrome users’ browsing habits even if they did not choose to sync their data to the cloud. Google argued in its defense that these users had nevertheless consented to the collection and transfer of their sensitive browsing data based on general disclosures in its user agreement.
The Ninth Circuit judges seemed to agree with the plaintiffs and EPIC, noting that the federal judge needed an eight-hour evidentiary hearing to understand the data collection practices, and that no reasonable user can be held to that standard when consenting to them. One judge also said that reading complicated Terms of Service online is like reading hieroglyphics.

Read more at EPIC.





Tools & Techniques. (Also creates more ‘bad data’ for the AI to rely on...)

https://www.bespacific.com/a-new-tool-for-copyright-holders-can-show-if-their-work-is-in-ai-training-data/

A new tool for copyright holders can show if their work is in AI training data

MIT Technology Review [unpaywalled]: “Since the beginning of the generative AI boom, content creators have argued that their work has been scraped into AI models without their consent. But until now, it has been difficult to know whether specific text has actually been used in a training data set. Now they have a new way to prove it: ‘copyright traps,’ developed by a team at Imperial College London — pieces of hidden text that allow writers and publishers to subtly mark their work in order to later detect whether it has been used in AI models or not. The idea is similar to traps that have been used by copyright holders throughout history — strategies like including fake locations on a map or fake words in a dictionary. These AI copyright traps tap into one of the biggest fights in AI. A number of publishers and writers are in the middle of litigation against tech companies, claiming their intellectual property has been scraped into AI training data sets without their permission. The New York Times’ ongoing case against OpenAI is probably the most high-profile of these. The code to generate and detect traps is currently available on GitHub, but the team also intends to build a tool that allows people to generate and insert copyright traps themselves.”
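The basic mechanism is simple enough to sketch: embed a unique, high-entropy marker phrase in a document, then test whether a scraped corpus (or memorized model output) contains it. This is only an illustration of the idea, not the Imperial College team's actual code — the marker format and helper names are invented for the example.

```python
import hashlib

def make_trap(owner_id: str, doc_id: str) -> str:
    # Derive a unique phrase that is vanishingly unlikely
    # to occur in natural text.
    digest = hashlib.sha256(f"{owner_id}:{doc_id}".encode()).hexdigest()[:16]
    return f"zq{digest}vx"

def embed_trap(text: str, trap: str) -> str:
    # Append the trap; a real system would hide it from readers
    # (tiny fonts, zero-width characters, white-on-white text).
    return f"{text}\n{trap}"

def contains_trap(corpus: str, trap: str) -> bool:
    # If the trap shows up in scraped training data or in model
    # output, the marked document was almost certainly ingested.
    return trap in corpus

trap = make_trap("publisher-42", "article-7")
marked = embed_trap("Original article text.", trap)
assert contains_trap(marked, trap)
assert not contains_trap("Unrelated corpus text.", trap)
```

In practice the hard part is the detection side — the team's published approach works by measuring how a model reacts to the trap sequence rather than simple substring search — but the embed-then-check lifecycle is the same.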





Tools & Techniques. (Get ready for those election ads…)

https://www.schneier.com/blog/archives/2024/07/new-research-in-detecting-ai-generated-videos.html

New Research in Detecting AI-Generated Videos

The latest in what will be a continuing arms race between creating and detecting videos:

The new tool the research project is unleashing on deepfakes, called “MISLnet”, evolved from years of data derived from detecting fake images and video with tools that spot changes made to digital video or images. These may include the addition or movement of pixels between frames, manipulation of the speed of the clip, or the removal of frames.
Such tools work because a digital camera’s algorithmic processing creates relationships between pixel color values. Those relationships between values are very different in user-generated content or in images edited with apps like Photoshop.
But because AI-generated videos aren’t produced by a camera capturing a real scene or image, they don’t contain those telltale disparities between pixel values.
The Drexel team’s tools, including MISLnet, learn using a method called a constrained neural network, which can differentiate between normal and unusual values at the sub-pixel level of images or video clips, rather than searching for the common indicators of image manipulation like those mentioned above.

Research paper.
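The “constrained neural network” idea can be illustrated with the first-layer constraint commonly used in forensic CNNs: each filter’s center weight is fixed at -1 while the remaining weights are rescaled to sum to 1, so the layer computes pixel-prediction residuals rather than image content. This is a minimal NumPy sketch of that constraint, not the actual MISLnet implementation.

```python
import numpy as np

def constrain_filter(w: np.ndarray) -> np.ndarray:
    """Project a square, odd-sized filter onto the prediction-error
    constraint: center weight = -1, remaining weights sum to 1."""
    w = w.astype(float).copy()
    c = w.shape[0] // 2
    w[c, c] = 0.0
    w /= w.sum()      # non-center weights now sum to 1
    w[c, c] = -1.0    # center subtracts the pixel being "predicted"
    return w

rng = np.random.default_rng(0)
f = constrain_filter(rng.random((5, 5)))  # positive weights, safe to normalize

# On a constant patch the neighbors predict the center pixel exactly,
# so the residual is zero: the filter responds only to deviations from
# what surrounding pixels predict -- the sub-pixel traces the article
# describes.
patch = np.full((5, 5), 7.0)
residual = float((f * patch).sum())
assert abs(residual) < 1e-9
```

During training, such a projection would be reapplied to the first-layer weights after each gradient step, keeping the network focused on forensic residuals instead of scene content.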



(Related)

https://www.bespacific.com/fake-images-are-getting-harder-to-spot-heres-a-field-guide/

Fake images are getting harder to spot. Here’s a field guide.

Washington Post [unpaywalled]: “Photographs have a profound power to shape our understanding of the world. And it’s never been more important to be able to discern which ones are genuine and which are doctored to push an agenda, especially in the wake of dramatic or contentious moments. But advances in technology mean that spotting manipulated or even totally AI-generated imagery is only getting trickier. Take for example a photo of Catherine, Princess of Wales, issued by Kensington Palace in March. News organizations retracted it after experts noted some obvious manipulations. And some questioned whether images captured during the assassination attempt on former president Donald Trump were genuine. Here are a few things experts suggest the next time you come across an image that leaves you wondering…”



Sunday, July 28, 2024

Perspective.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4872389

Technology and Totalitarianism: Artificial Intelligence a Tool for the Dictator?!

In an era where technology is advancing at an amazing speed, artificial intelligence (AI) has been positioned as one of the most important driving forces in these developments. This article provides an in-depth analysis of the effects of AI on political and social structures, with a particular focus on its role in fostering totalitarianism. We examine how authoritarian regimes can use AI as a tool to exercise control and surveillance, and what measures should be taken to prevent this technology from becoming a tool of repression. Also, the paper examines the ways in which democracies can use AI to enhance public participation and transparency. This research was inspired by the article “Is artificial intelligence in favor of autocracy or not?” and seeks to provide a comprehensive and balanced perspective on this topic.





Perspective.

https://www.tandfonline.com/doi/full/10.1080/15027570.2024.2378584

Does History Matter?

This issue of our journal – while asking crucial questions about today’s world and our future challenges – contains thoughtful pieces attending to history. It has been compiled while the war in Ukraine continues fiercely and frighteningly, and while violent unrest in Sudan, the Middle East, and several other theaters keeps shocking us.

In this rapidly changing and violent world, does history matter? Can we learn from long-dead figures – most of them, as we are frequently and correctly reminded, white men?

James Turner Johnson has been among the leading scholars to insist not only that we can learn from history, thinkers, and tradition, but that we must. Crucial terminology and theoretical categories as well as deep-seated moral convictions, after all, come from somewhere, and by taking seriously how they have emerged we can more clearly grasp their meaning and importance, in the past and today. Edward Erwin’s useful book review of a recent volume of Johnson’s landmark essays brings this into focus. Not least, it reminds us of the coherence of just-war and military-ethics categories over the centuries and across traditions, even as disagreements have been aired and moral views and contexts have changed. Also in this issue, Mihaly Boda helps us untangle differences and nuances within the history of medieval military ethics, adding insights into why human beings more than 1000 years ago as well as in the present have held some actions to be right and others to be deeply wrong.

As we do our best to insist on basic rules of decency and dignity during armed conflict, and as we ask big questions about the morality of nuclear deterrence or the ethical use of artificial intelligence, we should indeed look back to lessons from past thinkers and traditions. We do this not to copy them, nor to accept their teachings uncritically, but in order to develop our serious conversations about military ethics still further.