Saturday, November 09, 2024

How it’s used isn’t harmful?

https://www.reuters.com/legal/litigation/openai-defeats-news-outlets-copyright-lawsuit-over-ai-training-now-2024-11-07/

OpenAI defeats news outlets' copyright lawsuit over AI training, for now

A New York federal judge on Thursday dismissed a lawsuit against artificial intelligence giant OpenAI that claimed it misused articles from news outlets Raw Story and AlterNet to train its large language models.

U.S. District Judge Colleen McMahon said that the outlets could not show enough harm to support the lawsuit but allowed them to file a new complaint, even though she said she was "skeptical" that they could "allege a cognizable injury."





Perspective.

https://theconversation.com/is-ai-dominance-inevitable-a-technology-ethicist-says-no-actually-240088

Is AI dominance inevitable? A technology ethicist says no, actually

Anyone following the rhetoric around artificial intelligence in recent years has heard one version or another of the claim that AI is inevitable. Common themes are that AI is already here, it is indispensable, and people who are bearish on it harm themselves.

In the business world, AI advocates tell companies and workers that they will fall behind if they fail to integrate generative AI into their operations. In the sciences, AI advocates promise that AI will aid in curing hitherto intractable diseases.

In higher education, AI promoters admonish teachers that students must learn how to use AI or risk becoming uncompetitive when the time comes to find a job.

And, in national security, AI’s champions say that either the nation invests heavily in AI weaponry, or it will be at a disadvantage vis-à-vis the Chinese and the Russians, who are already doing so.

The argument across these different domains is essentially the same: The time for AI skepticism has come and gone. The technology will shape the future, whether you like it or not. You have the choice to learn how to use it or be left out of that future. Anyone trying to stand in the technology’s way is as hopeless as the manual weavers who resisted the mechanical looms in the early 19th century.

In the past few years, my colleagues and I at UMass Boston’s Applied Ethics Center have been studying the ethical questions raised by the widespread adoption of AI, and I believe the inevitability argument is misleading.



Friday, November 08, 2024

No good deed goes unpunished.

https://www.404media.co/police-freak-out-at-iphones-mysteriously-rebooting-themselves-locking-cops-out/

Police Freak Out at iPhones Mysteriously Rebooting Themselves, Locking Cops Out

Law enforcement officers are warning other officials and forensic experts that iPhones which have been stored securely for forensic examination are somehow rebooting themselves, returning the devices to a state that makes them much harder to unlock, according to a law enforcement document obtained by 404 Media.

The exact reason for the reboots is unclear, but the document authors, who appear to be law enforcement officials in Detroit, Michigan, hypothesize that Apple may have introduced a new security feature in iOS 18 that tells nearby iPhones to reboot if they have been disconnected from a cellular network for some time. After being rebooted, iPhones are generally more secure against tools that aim to crack the password of and take data from the phone.



Thursday, November 07, 2024

The loss of ‘things everyone knows.’

https://www.bespacific.com/how-the-death-of-cursive-is-complicating-our-elections/

How the death of cursive is complicating our elections

Fast Company: “The death of cursive has become a problem for voters and election officials. Young people who vote by mail and were never taught cursive risk having their ballots tossed if the signature they sign on their mail-in ballot envelope doesn’t match the signature on file, which the state uses to verify their identity. That’s what happened in Nevada to some 28,000 voters on Election Day this past Tuesday. The voters have until November 12 to “signature cure” their ballots, or verify it’s their signature signed on the envelope. As Nevada Secretary of State Francisco Aguilar told The Washington Post, “more Nevadans than ever sign their names on digital screens that may look different than their pen-to-paper signatures,” especially young people who “may not have a set signature developed yet.” Cursive has been falling out of fashion for a while now, and it’s had a range of cultural implications. It’s prompted the federal government to seek volunteers who can transcribe historical documents written in cursive before it becomes as indecipherable to the average American as hieroglyphics. Cursive’s declining popularity has also prompted rebrands. Eddie Bauer retired its script logo, while in Maryland, Washington College changed its logo, which used George Washington’s signature, saying it “was difficult to read and not immediately recognizable for many prospective students,” and blaming the fact that cursive is no longer being taught universally in K-12 education. Nevada is one of 33 states plus the District of Columbia that lets voters know if their absentee or mail-in ballot has problems and allows them to “cure” them within a window of time after Election Day, according to the National Conference of State Legislatures. Still, more than 560,000 total ballots were rejected—typically due to minor errors—across the U.S. in the 2020 election. That’s about 1% of the vote…”





It never hurts to reconsider your security.

https://thehackernews.com/2024/11/a-hackers-guide-to-password-cracking.html

A Hacker's Guide to Password Cracking

Defending your organization's security is like fortifying a castle—you need to understand where attackers will strike and how they'll try to breach your walls. And hackers are always searching for weaknesses, whether it's a lax password policy or a forgotten backdoor. To build a stronger defense, you must think like a hacker and anticipate their moves. Read on to learn more about hackers' strategies to crack passwords, the vulnerabilities they exploit, and how you can reinforce your defenses to keep them at bay.



(Related)

https://www.makeuseof.com/quick-test-see-unique-browser-fingerprint/

Use This Quick Test to See How Unique Your Browser Fingerprint Is

Each browser has a level of uniqueness, which is where Am I Unique comes into play. This handy browser fingerprint detection website quickly details how unique your browser is and assigns you a score. All you have to do is press See My Fingerprint and let Am I Unique check out your browser configuration.



Wednesday, November 06, 2024

Perspective.

https://www.eff.org/deeplinks/2024/11/ai-criminal-justice-trend-attorneys-need-know-about

AI in Criminal Justice Is the Trend Attorneys Need to Know About

The integration of artificial intelligence (AI) into our criminal justice system is one of the most worrying developments across policing and the courts, and EFF has been tracking it for years. EFF recently contributed a chapter on AI’s use by law enforcement to the American Bar Association’s annual publication, The State of Criminal Justice 2024.

The chapter describes some of the AI-enabled technologies being used by law enforcement, including some of the tools we feature in our Street-Level Surveillance hub, and discusses the threats AI poses to due process, privacy, and other civil liberties.





Tools & Techniques.

https://www.zdnet.com/article/the-best-open-source-ai-models-all-your-free-to-use-options-explained/

The best open-source AI models: All your free-to-use options explained

Here are the best open-source and free-to-use AI models for text, images, and audio, organized by type, application, and licensing considerations.



Tuesday, November 05, 2024

Perspective.

https://fpf.org/blog/u-s-legislative-trends-in-ai-generated-content-2024-and-beyond/

U.S. Legislative Trends in AI-Generated Content: 2024 and Beyond

Generative AI is a powerful tool, both in elections and more generally in people’s personal, professional, and social lives. In response, policymakers across the U.S. are exploring ways to mitigate risks associated with AI-generated content, also known as “synthetic” content. As generative AI makes it easier to create and distribute synthetic content that is indistinguishable from authentic or human-generated content, many are concerned about its potential growing use in political disinformation, scams, and abuse. Legislative proposals to address these risks often focus on disclosing the use of AI, increasing transparency around generative AI systems and content, and placing limitations on certain synthetic content. While these approaches may address some challenges with synthetic content, they also face a number of limitations and tradeoffs that policymakers should address going forward.





Tools & Techniques

https://www.bespacific.com/how-to-use-images-from-your-phone-to-search-the-web/

How to Use Images From Your Phone to Search the Web

The New York Times [unpaywalled] – “If you’re not sure how to describe what you want with keywords, use your camera or photo library to get those search results. A picture is worth a thousand words, but you don’t need to type any of them to search the internet these days.  Boosted by artificial intelligence, software on your phone can automatically analyze objects live in your camera view or in a photo (or video) to immediately round up a list of search results. And you don’t even need the latest phone model or third-party apps; current tools for Android and iOS can do the job with a screen tap or swipe. Here’s how…”



Sunday, November 03, 2024

If so, we don’t need lawyers…

https://webspace.science.uu.nl/~prakk101/pubs/oratieHPdefENG.pdf

Can Computers Argue Like a Lawyer?

My own research falls within two subfields of AI: AI & law and computational argumentation. It is therefore natural to discuss today the question of whether computers can argue like a lawyer. At first glance, the answer seems trivial, because if ChatGPT is asked to provide arguments for or against a legal claim, it will generate them. And even before ChatGPT, many knowledge-based AI systems could do the same. But the real question is of course: can computers argue as well as a good human lawyer can? And that is the question I want to discuss today.





Could we put AI in jail?

https://www.researchgate.net/profile/Khaled-Khwaileh/publication/385161726_Pakistan_Journal_of_Life_and_Social_Sciences_The_Criminal_Liability_of_Artificial_Intelligence_Entities/links/6718b48924a01038d0004e8b/Pakistan-Journal-of-Life-and-Social-Sciences-The-Criminal-Liability-of-Artificial-Intelligence-Entities.pdf

The Criminal Liability of Artificial Intelligence Entities

The rapid evolution of information technologies has led to the emergence of artificial intelligence (AI) entities capable of autonomous actions with minimal human intervention. While these AI entities offer remarkable advancements, they also pose significant risks by potentially harming individual and collective interests protected under criminal law. The behavior of AI, which operates with limited human oversight, raises complex questions about criminal liability and the need for legislative intervention. This article explores the profound transformations AI technologies have brought to various sectors, including economic, social, political, medical, and digital domains, and underscores the challenges they present to the legal framework. The primary aim is to model the development of criminal legislation that effectively addresses the unique challenges posed by AI, ensuring security and safety. The article concludes that existing legal frameworks are inadequate to address the complexities of AI-related crimes. It recommends the urgent development of new laws that establish clear criminal responsibility for AI entities, their manufacturers, and users. These laws should include specific penalties for misuse and encourage the responsible integration of AI across various sectors. A balanced approach is crucial to harness the benefits of AI while safeguarding public interests and maintaining justice in an increasingly AI-driven world.





Interesting. AI as a philosopher?

https://philpapers.org/rec/TSUPAL

Possibilities and Limitations of AI in Philosophical Inquiry Compared to Human Capabilities

Traditionally, philosophy has been strictly a human domain, with wide applications in science and ethics. However, with the rapid advancement of natural language processing technologies like ChatGPT, the question of whether artificial intelligence can engage in philosophical thinking is becoming increasingly important. This work first clarifies the meaning of philosophy based on its historical background, then explores the possibility of AI engaging in philosophy. We conclude that AI has reached a stage where it can engage in philosophical inquiry. The study also examines differences between AI and humans in terms of statistical processing, creativity, the frame problem, and intrinsic motivation, assessing whether AI can philosophize in a manner indistinguishable from humans. While AI can imitate many aspects of human philosophical inquiry, the lack of intrinsic motivation remains a significant limitation. Finally, the paper explores the potential for AI to offer unique philosophical insights through its diversity and limitless learning capacity, which could open new avenues for philosophical exploration far beyond conventional human perspectives.