Saturday, July 13, 2024

Is this the best we can do?

https://techcrunch.com/2024/07/12/new-senate-bill-seeks-to-protect-artists-and-journalists-content-from-ai-use/

New Senate bill seeks to protect artists’ and journalists’ content from AI use

The bill would require companies that develop AI tools to allow users to attach content provenance information to their content within two years. Content provenance information refers to machine-readable information that documents the origin of digital content, such as photos and news articles. According to the bill, works with content provenance information could not be used to train AI models or generate AI content.
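Content provenance information of the kind the bill describes is, at bottom, machine-readable metadata cryptographically bound to the content it describes (standards such as C2PA formalize this). A minimal sketch of the idea, with illustrative field names that are not drawn from the bill or any standard:

```python
import hashlib
import json

def make_provenance_record(content: bytes, creator: str, source: str) -> dict:
    """Build a machine-readable provenance record bound to the content
    by its cryptographic hash (field names here are hypothetical)."""
    return {
        "creator": creator,
        "source": source,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_training_permitted": False,  # the opt-out the bill contemplates
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the record actually describes this content:
    any alteration to the bytes changes the hash and fails the check."""
    return record["content_sha256"] == hashlib.sha256(content).hexdigest()

article = b"Example news article text."
record = make_provenance_record(article, "Jane Reporter", "example-news.com")
print(json.dumps(record, indent=2))
print(verify_provenance(article, record))            # True
print(verify_provenance(b"tampered text", record))   # False
```

The hash binding is what makes "tampering with content provenance information" detectable, which is presumably why the bill attaches liability to it.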

The bill is designed to give content owners, such as journalists, newspapers, artists, songwriters and others the ability to protect their work, while also setting the terms of use for their content, including compensation. It also gives them the right to sue platforms that use their content without their permission or have tampered with content provenance information.



Friday, July 12, 2024

I’ll add my voice since I have no way to add pressure…

https://www.schneier.com/blog/archives/2024/07/the-nsa-has-a-long-lost-lecture-by-adm-grace-hopper.html

The NSA Has a Long-Lost Lecture by Adm. Grace Hopper

The NSA has a video recording of a 1982 lecture by Adm. Grace Hopper titled “Future Possibilities: Data, Hardware, Software, and People.” The agency is (so far) refusing to release it.

Basically, the recording is in an obscure video format. People at the NSA can’t easily watch it, so they can’t redact it. So they won’t do anything.

[...]

Surely we can put pressure on them somehow.





Perhaps I should try ransomware as my side hustle?

https://www.cnn.com/2024/07/11/business/cdk-hack-ransom-tweny-five-million-dollars/

How did the auto dealer outage end? CDK almost certainly paid a $25 million ransom

CDK Global, a software firm serving car dealerships across the US that was roiled by a cyberattack last month, appears to have paid a $25 million ransom to the hackers, multiple sources familiar with the matter told CNN.

The company has declined to discuss the matter. Pinpointing exactly who sends a cryptocurrency payment can be complicated by the relative anonymity that some crypto services offer. But data on the blockchain that underpins cryptocurrency payments also tells its own story.
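The "story" blockchain data tells is that every payment is a public edge in a transaction graph, so analysts can follow funds forward from a suspected ransom address even though the parties are pseudonymous. A toy sketch of that tracing step, with invented addresses and a hard-coded graph standing in for real chain data:

```python
from collections import deque

# Toy transaction graph: sender address -> list of (receiver, amount).
# Addresses and amounts are invented for illustration only.
transactions = {
    "victim_addr":  [("ransom_addr", 400.0)],
    "ransom_addr":  [("mixer_addr_1", 250.0), ("mixer_addr_2", 150.0)],
    "mixer_addr_1": [("exchange_addr", 250.0)],
}

def trace_funds(start: str) -> set[str]:
    """Follow outgoing payments from `start`, collecting every address
    the funds reach. Real chain analysis layers heuristics (change
    outputs, address clustering, mixer detection) on top of this
    basic graph walk."""
    seen, queue = set(), deque([start])
    while queue:
        addr = queue.popleft()
        for receiver, _amount in transactions.get(addr, []):
            if receiver not in seen:
                seen.add(receiver)
                queue.append(receiver)
    return seen

print(trace_funds("ransom_addr"))  # every address downstream of the ransom
```

When funds eventually reach an exchange that performs identity checks, the pseudonymity often ends, which is how investigators connect on-chain flows to real parties.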



Thursday, July 11, 2024

I did it my way…

https://www.theverge.com/24195235/scotus-netchoice-kosa-kids-safety-age-verification-tiktok-ban

The aftermath of the Supreme Court’s NetChoice ruling

The NetChoice decision states that tech platforms can exercise their First Amendment rights through their content moderation decisions and how they choose to display content on their services. That is a strong statement with clear ramifications for any law that attempts to regulate platforms’ algorithms in the name of kids’ online safety, and even for a pending lawsuit seeking to block a law that could ban TikTok from the US.

“When the platforms use their Standards and Guidelines to decide which third-party content those feeds will display, or how the display will be ordered and organized, they are making expressive choices,” Justice Elena Kagan wrote in the majority opinion, referring to Facebook’s News Feed and YouTube’s homepage. “And because that is true, they receive First Amendment protection.”

NetChoice isn’t a radical upheaval of existing First Amendment law, but until last week, there was no Supreme Court opinion that applied that existing framework to social media platforms. The justices didn’t rule on the merits of the cases, concluding, instead, that the lower courts hadn’t completed the necessary analysis for the kind of First Amendment challenge that had been brought.





Apparently we did good.

https://fpf.org/blog/a-first-for-ai-a-close-look-at-the-colorado-ai-act/

A FIRST FOR AI: A CLOSE LOOK AT THE COLORADO AI ACT

Colorado made history on May 17, 2024, when Governor Polis signed into law the Colorado Artificial Intelligence Act (“CAIA”), the first law in the United States to comprehensively regulate the development and deployment of high-risk artificial intelligence (“AI”) systems. The law will come into effect on February 1, 2026, preceding the March 2026 effective date of (most of) the European Union’s AI Act.

To help inform public understanding of the law, the Future of Privacy Forum released a Policy Brief summarizing and analyzing key CAIA elements, as well as identifying significant observations about the law.



Wednesday, July 10, 2024

Is there a downside here? What if AI can’t answer questions related to Brazil?

https://pogowasright.org/brazil-orders-meta-to-stop-training-its-ai-on-brazilian-personal-data/

Brazil orders Meta to stop training its AI on Brazilian personal data

The Human Rights Watch report mentioned in the earlier item on photos of Australian children has also had an impact in Brazil, where the government has now blocked Meta from training AI on Brazilian personal data. The Paypers reports:

Brazil’s data protection authority (ANPD) has blocked Meta from training its AI models on Brazilian personal data, citing the risks of serious damage and difficulty to users.
The decision follows an update to Meta’s privacy policy in which the social media company granted itself permission to use public Facebook, Messenger, and Instagram data from Brazil, including posts, images, and captions, for AI training.
It also follows a report by Human Rights Watch revealing that LAION-5B, a large image-caption dataset used for training AI models, includes personal, identifiable photos of children in Brazil, exposing them to the risk of deepfakes and other forms of exploitation.

Read more at The Paypers.





Are we all doomed?

https://www.bespacific.com/future-of-professionals-report/

Future of Professionals Report

Thomson Reuters – Future of Professionals Report. AI-powered technology & the forces shaping professional work. July 2024. “Key findings – First, the productivity benefits we have been promised are now becoming more apparent. As AI adoption has become widespread, professionals can more tangibly tell us about how they will use this transformative technology and the greater efficiency and value it will provide. The most common use cases for AI-powered technology thus far include drafting documents, summarizing information, and performing basic research. Second, there’s a tremendous sense of excitement about the value that new AI-powered technology can bring to the day-to-day lives of the professionals we surveyed. While more than half of professionals said they’re most excited about the benefits that new AI-powered technologies can bring in terms of time-savings, nearly 40% said the new value that will be brought is what excites them the most. This report highlights how AI could free up that precious commodity of time.”





If I destroy my old hard drive after updating my computer, could I be guilty of obstruction if, at some future time, law enforcement wanted to look at my data?

https://pogowasright.org/breaking-a-cell-phone-to-avoid-its-search-and-seizure-justified-obstruction-enhancement-under-federal-sentencing-guidelines/

Breaking a cell phone to avoid its search and seizure justified obstruction enhancement under federal sentencing guidelines

Damned if you do, damned if you don’t? Seen at FourthAmendment.com:

Defendant, attempting to thwart a search of the cell phones in his car, tried to break one such that it had to be forensically reviewed to get information off of it. He wasn’t under arrest. Still, his actions qualified for a 2-level obstruction enhancement under USSG § 3C1.1. United States v. Manning, 2024 U.S. App. LEXIS 16411 (8th Cir. July 5, 2024).

Other cases of enhancement for obstruction can be found at SentencingCases.com. Another from the Eighth Circuit held that falsely denying the existence of other devices justified a sentencing enhancement. U.S. v. Davenport, __ F.3d __ (8th Cir. Dec. 14, 2018), No. 17-3496.





Technology cannot be suppressed forever. If not Microsoft, someone else.

https://www.livescience.com/technology/artificial-intelligence/ai-speech-generator-reaches-human-parity-but-its-too-dangerous-to-release-scientists-say

AI speech generator 'reaches human parity' — but it's too dangerous to release, scientists say

Microsoft researchers said VALL-E 2 was capable of generating "accurate, natural speech in the exact voice of the original speaker, comparable to human performance," in a paper that appeared June 17 on the pre-print server arXiv. In other words, the new AI voice generator is convincing enough to be mistaken for a real person — at least, according to its creators.





Chatbots of historic writers? Will students take the time for a conversation? (I wonder what a hallucinating Shakespeare would say…)

https://news.harvard.edu/gazette/story/2024/07/a-modern-approach-to-teaching-classics/

A modern approach to teaching classics

Martin Puchner is using chatbots to bring to life Socrates, Shakespeare, and Thoreau





Tools & Techniques. (I use Feedly)

https://www.makeuseof.com/what-are-best-news-aggregators/

I Tried These 7 News Aggregators, and This Is My Favorite

We live in a time when news flows from multiple sources, including traditional news outlets, blogs, and social media platforms, to name a few. That's why, over the last few years, I've moved away from getting my daily dose of current events and updates from a single news outlet to using news aggregators.
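Under the hood, an aggregator like Feedly is mostly polling RSS/Atom feeds and merging the items into one reading list. A minimal stdlib-only sketch of that core step, run here against inline feed XML (with made-up titles and links) rather than live URLs:

```python
import xml.etree.ElementTree as ET

def parse_rss_items(rss_xml: str, source: str) -> list[dict]:
    """Extract title/link pairs from an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [
        {"source": source,
         "title": item.findtext("title", ""),
         "link": item.findtext("link", "")}
        for item in root.iter("item")
    ]

feed_a = """<rss><channel>
  <item><title>Senate AI bill</title><link>https://example.com/a1</link></item>
</channel></rss>"""
feed_b = """<rss><channel>
  <item><title>CDK ransom paid</title><link>https://example.com/b1</link></item>
  <item><title>NetChoice ruling</title><link>https://example.com/b2</link></item>
</channel></rss>"""

# Merge items from all followed feeds into one reading list.
reading_list = parse_rss_items(feed_a, "Feed A") + parse_rss_items(feed_b, "Feed B")
for item in reading_list:
    print(f'[{item["source"]}] {item["title"]}: {item["link"]}')
```

A real aggregator would fetch the XML over HTTP on a schedule, deduplicate by link, and sort by publication date, but the parse-and-merge loop above is the heart of it.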



Monday, July 08, 2024

Perspective.

https://www.bespacific.com/considering-the-ethics-of-ai-assistants/

Considering the Ethics of AI Assistants

Tech Policy Press: “…Just a couple of weeks before Pichai took the stage, in April, Google DeepMind published a paper that boasts 57 authors, including experts from a range of disciplines from different parts of Google, including DeepMind, Jigsaw, and Google Research, as well as researchers from academic institutions such as Oxford, University College London, Delft University of Technology, University of Edinburgh, and a think tank at Georgetown, the Center for Security and Emerging Technology. The paper speculates about the ethical and societal risks posed by the types of AI assistants Google and other tech firms want to build, which the authors say are “likely to have a profound impact on our individual and collective lives.” The paper considers the potential nature of the technology itself, giving a broad overview of these imagined AI assistants, their technical roots, and the wide array of potential applications. It delves into questions about values and safety, and how to guard against malicious uses. Then, it takes a closer look at how these imagined advanced AI assistants interact with individual users, discussing issues like manipulation, persuasion, anthropomorphism, trust, and privacy. The paper then moves on to the collective, examining the broader societal implications of deploying advanced AI assistants, including on cooperation, equity and access, misinformation, economic impact, environmental concerns, and methods for evaluating these technologies. The paper also offers a series of recommendations for researchers, developers, policymakers, and public stakeholders to consider…”





Another way to look at the spread of AI.

https://www.bespacific.com/generative-artificial-intelligence-patent-landscape-report/

Generative Artificial Intelligence Patent – Landscape Report

“In this WIPO Patent Landscape Report on Generative AI, discover the latest patent trends for GenAI with a comprehensive and up-to-date understanding of the GenAI patent landscape, alongside insights into its future applications and potential impact. The report explores patents relating to the different modes, models and industrial application areas of GenAI.”



Sunday, July 07, 2024

A judge by any other name…

https://escholarship.mcgill.ca/concern/theses/fb494g061

Automating the science and art of judging

This thesis is about the essence of judicial automation. It is an important matter because asking ourselves what we are doing when we automate judging should precede deliberations on how to do it. Key to my argument is the idea that judicial automation is both a historical event and a “theorizing” of judging. Theorizing because preliminary to the automation of something is a theoretical articulation of what this something is or of what it should be. The task to be automated – in our case, judging – must be formalized so that a machine can perform it. With this in mind, I proceed with a review of influential Western legal theories and approaches to judicial interpretation to investigate what judging has historically been theorized to be. I ground this review in a dual classification framework where “science” is distinct from “art”. These two are, I claim, archetypal frameworks for knowledge, whose spectres we perceive in influential legal theories of judging. Each archetype reflects a viewpoint on mind and reality relative to one another (i.e. a metaphysical approach) that makes a scientific or artistic outlook on law and judging possible. “Judging as science” and “judging as art” is how I refer to the two outlooks. We discover each outlook highlights a dimension of judging corresponding to a correlative dimension of human “knowing”. These two dimensions of knowing, which we can simplify as “cognition” and “emotion”, are in fact not so distinct according to contemporary psychology. Yet, my conclusion following a discussion about artificial intelligence and technology is that judicial automation strengthens judging as science while undermining judging as art. Judicial automation enables a rationalist and formalist approach to law, underplaying the role of “emotion” in judging. Max Weber allows us to conceive of judicial automation as relating to a larger historical transformation of law: the “rationalization of law”.
What does it mean for law, its “rationality”, and its grounding in history and society that judicial automation is a rationalization? In asking this question, we get closer to the essence of judicial automation. We find that judicial automation “reveals” law as formalizable knowledge, and judging as a formalizable task. The product of this revealing is what I call “technological justice”. Informing this part of my argument is the ancient Greek notion of technê, that helps us understand why science, art and technology belong in the same conversation. At this stage, we also engage with early critical works about technology. The thesis concludes with the sketching of two “scenarios”, or ways technological justice could alter our relationship to law, which help us picture how law, society and we may change because of judicial automation. We try to assess whether there is any hope of avoiding these scenarios.





AI as an expert witness?

https://stars.library.ucf.edu/hut2024/59/

The Implications of Artificial Intelligence in the Criminal Justice System

This thesis focuses on artificial intelligence's recent implications for the criminal justice system regarding its admissibility as evidence in civil and criminal cases. One of the main concerns surrounding artificial intelligence is determining the validity of AI application; application refers to the accuracy with which "AI measures, classifies, or predicts what it is designed to" (see "Artificial Intelligence as Evidence" by Paul W. Grimm, Maura R. Grossman & Gordon V. Cormack, https://scholarlycommons.law.northwestern.edu/cgi/viewcontent.cgi?article=1349&context=njtip). Privacy law will also be considered in this analysis. Is evidence recorded without the individual's consent or knowledge acceptable in determining an individual's guilt? This analysis will primarily focus on determining whether the introduced AI evidence is valid and if it can and should play a role in a civil or criminal case. Like any other system, the criminal justice system has many imperfections. The goal of this research is neither to negate nor to endorse what the criminal justice system is currently doing but rather to provide evidence for growth within the system. Through the research process, many wrongful convictions due to mishaps with AI have presented themselves. The continued growth of AI in the criminal justice system is inevitable. AI as evidence will continue to grow in the system and become more than evidence one day. The Florida Bar has passed a rule allowing the integration of AI into the legal system. The rule prohibits misleading information and ensures the client must be aware that they are not communicating with an attorney but rather an AI program. As AI continues to integrate into the legal system, court officials must do so harm-free, which is the goal of this research.





New words, new thinking?

https://repo.lib.duth.gr/jspui/handle/123456789/18946

Fully autonomous weapon systems (Doctoral thesis)

Autonomous technology is rapidly becoming a major fighting force in militaries worldwide. At the same time, scientists and armed forces around the globe have set their sights on manufacturing new types of robots – fully autonomous – that are intended to be classified as a whole new type of soldier. The development and deployment of such technology seems inevitable. This prospect, however, makes lawmakers and academics fear that reality may outdistance the existing laws and the current law-making procedures, exposing humanity to great dangers and threats. Matters of law compliance, accountability and the use of lethal force on humans by machines become the focal point of interest with the deployment of fully autonomous weapon systems in battlefields. This thesis acknowledges that the era of AI in warfare has come and ventures into the quest for the legal framework that could possibly apply to fully autonomous weapon systems, so that they can be incorporated safely into our societies and militaries, without their existence and actions creating legal gaps. Accordingly, the possibility of granting fully autonomous weapon systems the legal status of an artificial soldier is explored, along with the possibility of regulating them on the basis of the laws on military weapons. The findings indicate that it is too early for the international community to consider attributing the status of an artificial soldier to fully autonomous weapon systems. But there seems to be room for another development in the future. At the same time, the attempt to incorporate fully autonomous weapon systems into the legal framework which regulates military weapons is also unfruitful. Their technical characteristics, along with their unique feature of autonomy, block this route. As a way out of the impasse, this thesis proposes that fully autonomous weapons systems should serve humanity as “military servants”.
For the time being, it is safe to suggest that highly sophisticated military robots can serve the armies as defensive agents or contribute to military campaigns in the capacity of strategic advisors, operators, doctors etc. Currently, it is considered prohibitive to have these systems in the military vanguard.