Saturday, October 12, 2024

The obvious outcome of self-surveillance.

https://gizmodo.com/hacked-robot-vacuums-across-the-us-started-yelling-slurs-2000511013

Hacked Robot Vacuums Across the U.S. Started Yelling Slurs

It’s a tale as old as… the Internet of Things era. Robot vacuums made by Ecovacs have been reported roving around people’s homes, yelling profanities at them through the onboard speakers after the company’s software was found to be vulnerable to intrusion.

One owner opened the vacuum’s app to find a stranger accessing its live camera feed and remote control feature, but assumed it might be an error. After resetting the password and rebooting the robot, the vacuum quickly started moving again:

This time, there was no ambiguity about what was coming out of the speaker. A voice was yelling racist obscenities, loud and clear, right in front of Mr Swenson’s son.
“F*** n******s,” screamed the voice, over and over again.





What would you build?

https://yaledailynews.com/blog/2024/10/11/yale-law-school-introduces-numerous-ai-focused-initiatives/

Yale Law School introduces numerous AI-focused initiatives

Through the Tsai Leadership Program, Scott Shapiro, professor of law and philosophy, heads an AI lab where students build AI tools for legal use. Currently, Shapiro is supervising a student building and coding a defamation detector. This AI model will be programmed to detect defamatory material and flag it for legal review.
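What might a first pass at such a flagging tool look like? A minimal sketch, entirely my own assumption and not the Yale lab's actual design: flag sentences that pair a named party with accusatory factual language, and queue them for human legal review. Every name and pattern here is hypothetical; a real system would use a trained model rather than a hand-written lexicon.

```python
# Toy "defamation detector" sketch (hypothetical design): flag sentences
# that pair a named party with accusatory language for legal review.
import re
from dataclasses import dataclass

# Hypothetical lexicon of terms that often appear in defamatory
# statements of fact. A real model would be learned, not hand-listed.
ACCUSATORY = re.compile(r"\b(fraud|embezzled|stole|lied|bribed|criminal)\b", re.I)
NAMED_PARTY = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")  # crude proper-name match

@dataclass
class Flag:
    sentence: str
    reason: str

def flag_for_review(text: str) -> list[Flag]:
    """Return sentences pairing a named party with an accusatory term."""
    flags = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if NAMED_PARTY.search(sentence) and ACCUSATORY.search(sentence):
            flags.append(Flag(sentence, "named party + accusatory term"))
    return flags

sample = ("John Doe embezzled funds from the charity. "
          "The weather was pleasant yesterday.")
for f in flag_for_review(sample):
    print(f.reason, "->", f.sentence)
```

The human-in-the-loop step is the point: the tool only flags material "for legal review," it does not decide.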





Perspective.

https://darioamodei.com/machines-of-loving-grace#basic-assumptions-and-framework

Machines of Loving Grace

In this essay I try to sketch out what that upside might look like—what a world with powerful AI might look like if everything goes right. Of course no one can know the future with any certainty or precision, and the effects of powerful AI are likely to be even more unpredictable than past technological changes, so all of this is unavoidably going to consist of guesses. But I am aiming for at least educated and useful guesses, which capture the flavor of what will happen even if most details end up being wrong. I’m including lots of details mainly because I think a concrete vision does more to advance discussion than a highly hedged and abstract one.



Friday, October 11, 2024

Perspective. (With implications for most businesses.)

https://warontherocks.com/2024/10/ai-and-intelligence-analysis-panacea-or-peril/

AI and Intelligence Analysis: Panacea or Peril?

In today’s chaotic world, professional intelligence analysts must contend with nearly endless data streams, which risk overwhelming them while also exacerbating the impact of cognitive biases. Is AI the answer, or do the flaws that currently afflict AI create yet more risks?

In fact, AI is neither a panacea nor a peril. Like other emerging technologies, AI is not an instant “out of the box” solution but rather a capability that continues to evolve. Today, AI can augment human capabilities and enhance the analysis process by tackling specific challenges. However, AI is not without issues. This means its value lies in serving as a complementary capability to the expertise and judgment of human intelligence analysts.

Before the wholesale adoption of AI in support of intelligence analysis, it is essential to understand the specific problems facing analysts: coping with large volumes of data; the acquisition of data from non-traditional sources; and, perhaps most vexing of all, the impacts of cognitive biases that impact the objectivity of intelligence assessments. AI can play a valuable role in alleviating these challenges, but only if humans are kept in the loop.





Perspective.

https://www.brookings.edu/articles/generative-ai-the-american-worker-and-the-future-of-work/

Generative AI, the American worker, and the future of work

The launch of ChatGPT-3.5 at the end of 2022 captured the world’s attention and illustrated the uncanny ability of generative artificial intelligence (AI) to produce a range of seemingly human-generated content, including text, video, audio, images, and code. The release, and the many eye-catching breakthroughs that quickly followed, have raised questions about what these fast-moving generative AI technologies might mean for work, workers, and livelihoods—now and in the future, as new models are released that are potentially much more powerful. Many U.S. workers are worried: According to a Pew Research Center poll, most Americans believe that generative AI will have a major impact on jobs—mainly negative—in the next two decades. 

Despite these widely shared concerns, however, there is little consensus on the nature and scale of generative AI’s potential impacts and how—or even whether—to respond. Fundamental questions remain unanswered: How do we ensure workers can proactively shape generative AI’s design and deployment? What will it take to make sure workers benefit meaningfully from its gains? And what guardrails are needed for workers to avoid harms as much as possible? 

These animating questions are the heart of this report and a new multiyear effort we have launched at Brookings with a wide range of external collaborators.



Thursday, October 10, 2024

No wonder they rely on AI to read for them.

https://www.theatlantic.com/magazine/archive/2024/11/the-elite-college-students-who-cant-read-books/679945/

The Elite College Students Who Can’t Read Books

This development puzzled Dames until one day during the fall 2022 semester, when a first-year student came to his office hours to share how challenging she had found the early assignments. Lit Hum often requires students to read a book, sometimes a very long and dense one, in just a week or two. But the student told Dames that, at her public high school, she had never been required to read an entire book. She had been assigned excerpts, poetry, and news articles, but not a single book cover to cover.

“My jaw dropped,” Dames told me. The anecdote helped explain the change he was seeing in his students: It’s not that they don’t want to do the reading. It’s that they don’t know how. Middle and high schools have stopped asking them to.





Are we slipping down the slope toward fully autonomous drones?

https://www.defenseone.com/technology/2024/10/new-ai-powered-strike-drone-shows-how-quickly-battlefield-autonomy-evolving/400179/

New AI-powered strike drone shows how quickly battlefield autonomy is evolving

Small drones have been changing modern warfare at least since 2015, when Russia and Ukraine began to use them to great effect for rapid targeting. The latest addition is a strike-and-intelligence quadcopter that its builder hopes will do more things with a lot less operator attention.

The point of the Bolt-M, revealed by Anduril today, is to make fewer demands on the operator and offer more information than easy-to-produce first-person-view strike drones, the type that Ukraine is producing by the hundreds of thousands. The U.S. Army, too, is looking at FPV drones for infantry platoons. But they require special training to use and come with a lot of operational limits. The Bolt-M, according to an Anduril statement, works “without requiring specialized operators.” The company has a contract from the U.S. Marine Corps’ Organic Precision Fires – Light, or OPF-L, program to develop a strike variant.



Wednesday, October 09, 2024

Inevitable?

https://www.eff.org/deeplinks/2024/10/germany-rushes-expand-biometric-surveillance

Germany Rushes to Expand Biometric Surveillance

Germany is a leader in privacy and data protection, with many Germans being particularly sensitive to the processing of their personal data – owing to the country’s totalitarian history and the role of surveillance in both Nazi Germany and East Germany.

So, it is disappointing that the German government is trying to push through Parliament, at record speed, a “security package” that would increase biometric surveillance at an unprecedented scale. The proposed measures contravene the government’s own coalition agreement, and undermine European law and the German constitution.

The German government wants to allow law enforcement authorities to identify suspects by comparing their biometric data (audio, video, and image data) to all data publicly available on the internet. Beyond the host of harms related to facial recognition software, this would mean that any photos or videos uploaded to the internet would become part of the government’s surveillance infrastructure.





This only applies if you’re young?

https://www.nbcnews.com/tech/tech-news/tiktok-sued-14-attorneys-general-rcna174395

Fourteen AGs sue TikTok, accusing it of harming children's mental health

Fourteen attorneys general, led by officials in New York and California, filed lawsuits Tuesday accusing the social media platform TikTok of damaging young users’ mental health and collecting their data without consent.

The legal broadside, organized by a bipartisan coalition of 14 law enforcement officers, alleges TikTok violated state laws by falsely claiming its service is safe for young people. The lawsuits were filed individually.





Perspective.

https://www.pewresearch.org/data-labs/2024/10/08/who-u-s-adults-follow-on-tiktok/

Who U.S. Adults Follow on TikTok

Adult TikTok users in the U.S. use the platform to follow pop culture and entertainment accounts much more than news and politics



Tuesday, October 08, 2024

What’s the worst that could happen? Put your iPhone to your ear and listen closely.

https://www.aei.org/op-eds/exploding-pagers-and-spy-chips-the-rising-risk-of-hardware-tampering/

Exploding Pagers and Spy Chips: The Rising Risk of Hardware Tampering

The explosives that Mossad slipped into thousands of Hizbollah pager batteries and detonated last month in Lebanon should send a jolt of fear through the otherwise staid world of global supply chain management. Surely adversaries of the west will have their own tactics to compromise our electronics hardware. Most companies think only about cyber and software vulnerabilities. It is time they take hardware security more seriously.

The Russians are already so nervous that complex electronics can be manipulated by opponents that they have created a special institute to test the veracity of western chips smuggled in for use in missile and drone manufacturing. History shows that they are probably right to worry. Though many cold war-era spy games are still concealed by classification, Politico recently uncovered a 1980s FBI scheme designed to tamper with chipmaking tools that the Soviets were illegally importing.

However, western security agencies may no longer have the opportunity to repeat such practices — even if they are as skilled today as they were during the cold war. The epicentre of electronics manufacturing has shifted from the US to Asia — in particular to China and in the case of chipmaking to Taiwan. The more products a country assembles, the more opportunities for malfeasance.





There is nothing satisfying about an “I told you so.” Would it help in a lawsuit? (Knowing these backdoors exist makes attempts to find them inevitable.)

https://techcrunch.com/2024/10/07/the-30-year-old-internet-backdoor-law-that-came-back-to-bite/

The 30-year-old internet backdoor law that came back to bite

News broke this weekend that China-backed hackers have compromised the wiretap systems of several U.S. telecom and internet providers, likely in an effort to gather intelligence on Americans.

The wiretap systems, as mandated under a 30-year-old U.S. federal law, are some of the most sensitive in a telecom or internet provider’s network, typically granting a select few employees nearly unfettered access to information about their customers, including their internet traffic and browsing histories.

But for the technologists who have for years sounded the alarm about the security risks of legally required backdoors, news of the compromises is the “told you so” moment they hoped would never come but knew one day would.

“I think it absolutely was inevitable,” Matt Blaze, a professor at Georgetown Law and expert on secure systems, told TechCrunch regarding the latest compromises of telecom and internet providers.



Sunday, October 06, 2024

It’s not science fiction, it just reads like that.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4966334

PREDICTABILITY, AI, AND JUDICIAL FUTURISM: WHY ROBOTS WILL RUN THE LAW AND TEXTUALISTS WILL LIKE IT

The question isn’t whether machines are going to replace judges and lawyers—they are. The question is whether that’s a good thing. If you’re a textualist, you have to answer yes. But you won’t—which means you’re not a textualist. Sorry.

Hypothetical: The year is 2030. AI has far eclipsed the median federal jurist as a textual interpreter. A new country is founded; it’s a democratic republic that uses human legislators to write laws and programs a state-sponsored Large Language Model called “Judge.AI” to apply those laws to facts. The model makes judicial decisions as to conduct on the back end, but can also provide advisory opinions on the front end; if a citizen types in his desired action and hits “enter,” Judge.AI will tell him, ex ante, exactly what it would decide ex post if the citizen were to perform the action and be prosecuted. The primary result is perfect predictability; secondary results include the abolition of case law, the death of common law, and the replacement of all judges—indeed, all lawyers—by a single machine. Don’t fight the hypothetical, assume it works. This article poses the question: Is that a utopia or a dystopia?
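The hypothetical's "perfect predictability" claim can be reduced to a toy sketch (my illustration, not the article's): if the ex ante advisory opinion and the ex post ruling are the same deterministic function, prediction and outcome can never diverge. The statute book and default rule below are invented for illustration.

```python
# Toy model of the Judge.AI hypothetical: one deterministic function
# serves both the ex ante advisory opinion and the ex post ruling.
RULES = {  # hypothetical statute book: conduct -> legality
    "drive 55 mph in a 65 zone": "lawful",
    "drive 90 mph in a 65 zone": "unlawful",
}

def judge(action: str) -> str:
    """Ex post ruling; unspecified conduct defaults to lawful (lenity)."""
    return RULES.get(action, "lawful")

def advisory_opinion(action: str) -> str:
    # Ex ante advice is literally the ex post ruling applied in advance,
    # so a citizen who queries first can never be surprised later.
    return judge(action)

print(advisory_opinion("drive 90 mph in a 65 zone"))
```

With no case law and no discretion in the loop, the model also shows what disappears: there is nothing for a common-law court to distinguish or a lawyer to argue.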

If you answer dystopia, you cannot be a textualist. Part I of this article establishes why: Because predictability is textualism’s only lodestar, and Judge.AI is substantially more predictable than any regime operating today. Part II-A dispatches rebuttals premised on positive nuances of the American system; such rebuttals forget that my hypothetical presumes a new nation and take for granted how much of our nation’s founding was premised on mitigating exactly the kinds of human error that Judge.AI would eliminate. And Part II-B dispatches normative rebuttals, which ultimately amount to moral arguments about objective good—which are none of the textualist’s business.

When the dust clears, you have only two choices: You’re a moralist, or you’re a formalist. If you’re the former, you’ll need a complete account of the objective good—which has evaded man for his entire existence. If you’re the latter, you should relish the fast-approaching day when all laws and all lawyers are usurped by a tin box. But you’re going to say you’re something in between. And you’re not.





A point!

https://gaexcellence.com/index.php/ijlgc/article/view/1686

URGENCY OF THE RIGHT TO BE FORGOTTEN AS A LEGAL PROTECTION FOR DEEPFAKE PORNOGRAPHY VICTIMS OF ARTIFICIAL INTELLIGENCE TECHNOLOGY ON SOCIAL MEDIA

The emergence of artificial intelligence (AI) poses the threat of abusive manipulated pornography known as deepfake pornography. Deepfake pornography is a form of online gender-based violence in which a perpetrator inserts someone’s face onto another person’s body. Because it can be made by anyone, anywhere, it readily produces victims, who suffer mental and emotional harm. To help deepfake pornography victims regain control, the right to be forgotten (RTBF) plays an important role as a form of protection. In Indonesia, the RTBF is currently regulated in Article 26(3) of UU ITE, under which victims may request that an electronic system operator remove their images or videos from its platforms. However, the RTBF is considered legally vague, so legal protection for deepfake pornography victims is not achieved. The research method is normative and qualitative, using primary, secondary, and tertiary literature. This study concludes that the RTBF is a promising means of protecting deepfake pornography victims in the digital era, but that regulations related to the RTBF must be strengthened to support victims’ recovery.





A summation?

https://webofjournals.com/index.php/9/article/view/1787

IMPACT OF ARTIFICIAL INTELLIGENCE ON THE FIELD OF LAW

This article examines the impact of artificial intelligence on the field of law. The article explores the significance of artificial intelligence in automating legal processes, providing legal advice, and processing documents. Furthermore, it discusses how AI can create opportunities for detecting and preventing crime, as well as predicting court decisions. However, the introduction of AI in the legal field also brings numerous ethical and legal challenges, particularly regarding transparency in decision-making, the reduction of human involvement, and issues such as data privacy, which are also discussed.