Wednesday, November 06, 2024

Perspective.

https://www.eff.org/deeplinks/2024/11/ai-criminal-justice-trend-attorneys-need-know-about

AI in Criminal Justice Is the Trend Attorneys Need to Know About

The integration of artificial intelligence (AI) into our criminal justice system is one of the most worrying developments across policing and the courts, and EFF has been tracking it for years. EFF recently contributed a chapter on AI’s use by law enforcement to the American Bar Association’s annual publication, The State of Criminal Justice 2024.

The chapter describes some of the AI-enabled technologies being used by law enforcement, including some of the tools we feature in our Street-Level Surveillance hub, and discusses the threats AI poses to due process, privacy, and other civil liberties.





Tools & Techniques.

https://www.zdnet.com/article/the-best-open-source-ai-models-all-your-free-to-use-options-explained/

The best open-source AI models: All your free-to-use options explained

Here are the best open-source and free-to-use AI models for text, images, and audio, organized by type, application, and licensing considerations.



Tuesday, November 05, 2024

Perspective.

https://fpf.org/blog/u-s-legislative-trends-in-ai-generated-content-2024-and-beyond/

U.S. Legislative Trends in AI-Generated Content: 2024 and Beyond

Generative AI is a powerful tool, both in elections and more generally in people’s personal, professional, and social lives. In response, policymakers across the U.S. are exploring ways to mitigate risks associated with AI-generated content, also known as “synthetic” content. As generative AI makes it easier to create and distribute synthetic content that is indistinguishable from authentic or human-generated content, many are concerned about its potential growing use in political disinformation, scams, and abuse. Legislative proposals to address these risks often focus on disclosing the use of AI, increasing transparency around generative AI systems and content, and placing limitations on certain synthetic content. While these approaches may address some challenges with synthetic content, they also face a number of limitations and tradeoffs that policymakers should address going forward.





Tools & Techniques.

https://www.bespacific.com/how-to-use-images-from-your-phone-to-search-the-web/

How to Use Images From Your Phone to Search the Web

The New York Times [unpaywalled] – “If you’re not sure how to describe what you want with keywords, use your camera or photo library to get those search results. A picture is worth a thousand words, but you don’t need to type any of them to search the internet these days. Boosted by artificial intelligence, software on your phone can automatically analyze objects live in your camera view or in a photo (or video) to immediately round up a list of search results. And you don’t even need the latest phone model or third-party apps; current tools for Android and iOS can do the job with a screen tap or swipe. Here’s how…”



Sunday, November 03, 2024

If so, we don’t need lawyers…

https://webspace.science.uu.nl/~prakk101/pubs/oratieHPdefENG.pdf

Can Computers Argue Like a Lawyer?

My own research falls within two subfields of AI: AI & law and computational argumentation. It is therefore natural to discuss today the question of whether computers can argue like a lawyer. At first glance, the answer seems trivial, because if ChatGPT is asked to provide arguments for or against a legal claim, it will generate them. And even before ChatGPT, many knowledge-based AI systems could do the same. But the real question is of course: can computers argue as well as a good human lawyer can? And that is the question I want to discuss today.





Could we put AI in jail?

https://www.researchgate.net/profile/Khaled-Khwaileh/publication/385161726_Pakistan_Journal_of_Life_and_Social_Sciences_The_Criminal_Liability_of_Artificial_Intelligence_Entities/links/6718b48924a01038d0004e8b/Pakistan-Journal-of-Life-and-Social-Sciences-The-Criminal-Liability-of-Artificial-Intelligence-Entities.pdf

The Criminal Liability of Artificial Intelligence Entities

The rapid evolution of information technologies has led to the emergence of artificial intelligence (AI) entities capable of autonomous actions with minimal human intervention. While these AI entities offer remarkable advancements, they also pose significant risks by potentially harming individual and collective interests protected under criminal law. The behavior of AI, which operates with limited human oversight, raises complex questions about criminal liability and the need for legislative intervention. This article explores the profound transformations AI technologies have brought to various sectors, including economic, social, political, medical, and digital domains, and underscores the challenges they present to the legal framework. The primary aim is to model the development of criminal legislation that effectively addresses the unique challenges posed by AI, ensuring security and safety. The article concludes that existing legal frameworks are inadequate to address the complexities of AI-related crimes. It recommends the urgent development of new laws that establish clear criminal responsibility for AI entities, their manufacturers, and users. These laws should include specific penalties for misuse and encourage the responsible integration of AI across various sectors. A balanced approach is crucial to harness the benefits of AI while safeguarding public interests and maintaining justice in an increasingly AI-driven world.





Interesting. AI as a philosopher?

https://philpapers.org/rec/TSUPAL

Possibilities and Limitations of AI in Philosophical Inquiry Compared to Human Capabilities

Traditionally, philosophy has been strictly a human domain, with wide applications in science and ethics. However, with the rapid advancement of natural language processing technologies like ChatGPT, the question of whether artificial intelligence can engage in philosophical thinking is becoming increasingly important. This work first clarifies the meaning of philosophy based on its historical background, then explores the possibility of AI engaging in philosophy. We conclude that AI has reached a stage where it can engage in philosophical inquiry. The study also examines differences between AI and humans in terms of statistical processing, creativity, the frame problem, and intrinsic motivation, assessing whether AI can philosophize in a manner indistinguishable from humans. While AI can imitate many aspects of human philosophical inquiry, the lack of intrinsic motivation remains a significant limitation. Finally, the paper explores the potential for AI to offer unique philosophical insights through its diversity and limitless learning capacity, which could open new avenues for philosophical exploration far beyond conventional human perspectives.