Wednesday, November 06, 2024

Perspective.

https://www.eff.org/deeplinks/2024/11/ai-criminal-justice-trend-attorneys-need-know-about

AI in Criminal Justice Is the Trend Attorneys Need to Know About

The integration of artificial intelligence (AI) into our criminal justice system is one of the most worrying developments across policing and the courts, and EFF has been tracking it for years. EFF recently contributed a chapter on AI’s use by law enforcement to the American Bar Association’s annual publication, The State of Criminal Justice 2024.

The chapter describes some of the AI-enabled technologies being used by law enforcement, including some of the tools we feature in our Street-Level Surveillance hub, and discusses the threats AI poses to due process, privacy, and other civil liberties.





Tools & Techniques.

https://www.zdnet.com/article/the-best-open-source-ai-models-all-your-free-to-use-options-explained/

The best open-source AI models: All your free-to-use options explained

Here are the best open-source and free-to-use AI models for text, images, and audio, organized by type, application, and licensing considerations.



Tuesday, November 05, 2024

Perspective.

https://fpf.org/blog/u-s-legislative-trends-in-ai-generated-content-2024-and-beyond/

U.S. Legislative Trends in AI-Generated Content: 2024 and Beyond

Generative AI is a powerful tool, both in elections and more generally in people’s personal, professional, and social lives. In response, policymakers across the U.S. are exploring ways to mitigate risks associated with AI-generated content, also known as “synthetic” content. As generative AI makes it easier to create and distribute synthetic content that is indistinguishable from authentic or human-generated content, many are concerned about its potential growing use in political disinformation, scams, and abuse. Legislative proposals to address these risks often focus on disclosing the use of AI, increasing transparency around generative AI systems and content, and placing limitations on certain synthetic content. While these approaches may address some challenges with synthetic content, they also face a number of limitations and tradeoffs that policymakers should address going forward.





Tools & Techniques

https://www.bespacific.com/how-to-use-images-from-your-phone-to-search-the-web/

How to Use Images From Your Phone to Search the Web

The New York Times [unpaywalled] – “If you’re not sure how to describe what you want with keywords, use your camera or photo library to get those search results. A picture is worth a thousand words, but you don’t need to type any of them to search the internet these days. Boosted by artificial intelligence, software on your phone can automatically analyze objects live in your camera view or in a photo (or video) to immediately round up a list of search results. And you don’t even need the latest phone model or third-party apps; current tools for Android and iOS can do the job with a screen tap or swipe. Here’s how…”



Sunday, November 03, 2024

If so, we don’t need lawyers…

https://webspace.science.uu.nl/~prakk101/pubs/oratieHPdefENG.pdf

Can Computers Argue Like a Lawyer?

My own research falls within two subfields of AI: AI & law and computational argumentation. It is therefore natural to discuss today the question whether computers can argue like a lawyer. At a first glance, the answer seems trivial, because if ChatGPT is asked to provide arguments for or against a legal claim, it will generate them. And even before ChatGPT, many knowledge-based AI systems could do the same. But the real question is of course: can computers argue as well as a good human lawyer can? And that is the question I want to discuss today.





Could we put AI in jail?

https://www.researchgate.net/profile/Khaled-Khwaileh/publication/385161726_Pakistan_Journal_of_Life_and_Social_Sciences_The_Criminal_Liability_of_Artificial_Intelligence_Entities/links/6718b48924a01038d0004e8b/Pakistan-Journal-of-Life-and-Social-Sciences-The-Criminal-Liability-of-Artificial-Intelligence-Entities.pdf

The Criminal Liability of Artificial Intelligence Entities

The rapid evolution of information technologies has led to the emergence of artificial intelligence (AI) entities capable of autonomous actions with minimal human intervention. While these AI entities offer remarkable advancements, they also pose significant risks by potentially harming individual and collective interests protected under criminal law. The behavior of AI, which operates with limited human oversight, raises complex questions about criminal liability and the need for legislative intervention. This article explores the profound transformations AI technologies have brought to various sectors, including economic, social, political, medical, and digital domains, and underscores the challenges they present to the legal framework. The primary aim is to model the development of criminal legislation that effectively addresses the unique challenges posed by AI, ensuring security and safety. The article concludes that existing legal frameworks are inadequate to address the complexities of AI-related crimes. It recommends the urgent development of new laws that establish clear criminal responsibility for AI entities, their manufacturers, and users. These laws should include specific penalties for misuse and encourage the responsible integration of AI across various sectors. A balanced approach is crucial to harness the benefits of AI while safeguarding public interests and maintaining justice in an increasingly AI-driven world.





Interesting. AI as a philosopher?

https://philpapers.org/rec/TSUPAL

Possibilities and Limitations of AI in Philosophical Inquiry Compared to Human Capabilities

Traditionally, philosophy has been strictly a human domain, with wide applications in science and ethics. However, with the rapid advancement of natural language processing technologies like ChatGPT, the question of whether artificial intelligence can engage in philosophical thinking is becoming increasingly important. This work first clarifies the meaning of philosophy based on its historical background, then explores the possibility of AI engaging in philosophy. We conclude that AI has reached a stage where it can engage in philosophical inquiry. The study also examines differences between AI and humans in terms of statistical processing, creativity, the frame problem, and intrinsic motivation, assessing whether AI can philosophize in a manner indistinguishable from humans. While AI can imitate many aspects of human philosophical inquiry, the lack of intrinsic motivation remains a significant limitation. Finally, the paper explores the potential for AI to offer unique philosophical insights through its diversity and limitless learning capacity, which could open new avenues for philosophical exploration far beyond conventional human perspectives.



Saturday, November 02, 2024

Perspective. AI is getting better at everything, including crime.

https://www.zdnet.com/article/anthropic-warns-of-ai-catastrophe-if-governments-dont-regulate-in-18-months/

Anthropic warns of AI catastrophe if governments don't regulate in 18 months

Only days away from the US presidential election, AI company Anthropic is advocating for its own regulation -- before it's too late. 

On Thursday, the company, which stands out in the industry for its focus on safety, released recommendations for governments to implement "targeted regulation" alongside potentially worrying data on the rise of what it calls "catastrophic" AI risks.

In a blog post, Anthropic noted how much progress AI models have made in coding and cyber offense in just one year. "On the SWE-bench software engineering task, models have improved from being able to solve 1.96% of a test set of real-world coding problems (Claude 2, October 2023) to 13.5% (Devin, March 2024) to 49% (Claude 3.5 Sonnet, October 2024)," the company wrote. "Internally, our Frontier Red Team has found that current models can already assist on a broad range of cyber offense-related tasks, and we expect that the next generation of models -- which will be able to plan over long, multi-step tasks -- will be even more effective."



Friday, November 01, 2024

Another way to look at the web.

https://www.bespacific.com/introducing-chatgpt-search/

Introducing ChatGPT search


OpenAI: “ChatGPT can now search the web in a much better way than before. You can get fast, timely answers with links to relevant web sources, which you would have previously needed to go to a search engine for. This blends the benefits of a natural language interface with the value of up-to-date sports scores, news, stock quotes, and more. ChatGPT will choose to search the web based on what you ask, or you can manually choose to search by clicking the web search icon. Search will be available at chatgpt.com, as well as on our desktop and mobile apps. All ChatGPT Plus and Team users, as well as SearchGPT waitlist users, will have access today. Enterprise and Edu users will get access in the next few weeks. We’ll roll out to all Free users over the coming months… Chats now include links to sources, such as news articles and blog posts, giving you a way to learn more. Click the Sources button below the response to open a sidebar with the references…”

See also MIT Technology Review: “At stake is the future of AI search—that is, chatbots that summarize information from across the web. If their growing popularity is any indication, these AI “answer engines” could replace traditional search engines as our default gateway to the internet. While ordinary AI chatbots can reproduce—often unreliably—information learned through training, AI search tools like Perplexity, Google’s Gemini, or OpenAI’s now-public SearchGPT aim to retrieve and repackage information from third-party websites. They return a short digest to users along with links to a handful of sources, ranging from research papers to Wikipedia articles and YouTube transcripts. The AI system does the reading and writing, but the information comes from outside… At its best, AI search can better infer a user’s intent, amplify quality content, and synthesize information from diverse sources. But if AI search becomes our primary portal to the web, it threatens to disrupt an already precarious digital economy. Today, the production of content online depends on a fragile set of incentives tied to virtual foot traffic: ads, subscriptions, donations, sales, or brand exposure. By shielding the web behind an all-knowing chatbot, AI search could deprive creators of the visits and “eyeballs” they need to survive…”





Tools & Techniques.

https://www.zdnet.com/article/claude-ai-adds-desktop-apps-and-dictation-mode-heres-how-to-use-them/

Claude AI adds desktop apps and dictation mode – here's how to use them

Anthropic is expanding its Claude AI beyond the web. On Thursday, the company unveiled new desktop applications for its popular chatbot. Designed for Windows and MacOS, the new apps work similarly to the website and are available for free users and paid subscribers.

To grab the apps, head to the Claude for Desktop site, where you'll find versions for Windows, Windows on ARM, and MacOS. For now, the apps are tagged with a beta label, which may indicate that Anthropic is still tweaking them. After downloading one of the apps, you'll be prompted to sign in using a Google account or an email link. From there, use Claude just as you would use the website.



Thursday, October 31, 2024

Imagine the consequences…

https://www.theregister.com/2024/10/31/canada_cybersec_threats/

Chinese attackers accessed Canadian government networks – for five years

India makes it onto list of likely threats for the first time

A report by Canada's Communications Security Establishment (CSE) revealed that state-backed actors have collected valuable information from government networks for five years.

The biennial National Cyber Threat Assessment described the People's Republic of China's (PRC) cyber operations against Canada as "second to none." Their purpose is to "serve high-level political and commercial objectives, including espionage, intellectual property (IP) theft, malign influence, and transnational repression."

The report also named Russia and Iran as significant hostile states – which isn't surprising.

The inclusion of India, named for the first time as an emerging threat, may be. Canada and India are, after all, both democracies and share membership of the UK-centric Commonwealth of Nations.





Should the people with the passwords also be posting things online?

https://www.reuters.com/world/us/colorado-voting-system-partial-passwords-accidentally-posted-government-website-2024-10-30/

Colorado voting system partial passwords accidentally posted on government website

Partial passwords to some parts of the state's voting systems that were accidentally posted online pose no threat to the Nov. 5 general election, the Colorado Department of State said on Tuesday.

The department said a spreadsheet located on its website "improperly" included a hidden tab including partial passwords to certain components of Colorado voting systems.





Tasteless. Seems trivial but could kill.

https://databreaches.net/2024/10/30/fbi-investigated-disney-world-cyberattack-after-restaurant-menus-were-changed/

FBI investigated Disney World cyberattack after restaurant menus were changed

Gabrielle Russon reports on your latest reminder of the insider threat:

A fired Disney World employee is accused of hacking into an online system and altering Disney World restaurant menus by changing fonts and prices, adding profanity and manipulating the food allergy warnings, according to new federal documents.
The cyberattack caused at least $150,000 in damage and has gotten the FBI involved. Disney printed the wrong menus but realized the mistake in time. The menus were not sent to restaurants or distributed to the public.
A criminal complaint against Michael Scheuer was filed last week in U.S. District Court’s Orlando division. He was arrested on Oct. 23.

Read more at Florida Politics.

Note that this allegedly vengeful former employee also risked public health and safety. By editing the menus to suggest that certain items were safe for people with peanut allergies when they weren’t, he risked people having life-threatening anaphylactic incidents. There is no allegation that anyone was actually harmed or injured, however, as Disney detected the alterations before menus could be sent out to restaurants.

There seems to be a lot more to this case, as the affidavit in support of the complaint refers to DDoS attacks and Scheuer allegedly “doxing” his victims.

DataBreaches reminds readers that a complaint is just unproven allegations at this point.





To be expected? AI algorithms generate formulaic speech.

https://techxplore.com/news/2024-10-text-ai-generated-figured-method.html

How can you tell if text is AI-generated? Researchers have figured out a new method

Have you ever looked at a piece of writing and thought something might be "off"? It might be hard to pinpoint exactly what it is. There might be too many adjectives or the sentence structure might be overly repetitious. It might get you thinking, "Did a human write this or was it generated by artificial intelligence?"

In a new paper, researchers at Northeastern University set out to make it a little easier to answer that question by analyzing the syntax, or sentence structure, in AI-generated text. What they found is that AI models tend to produce specific patterns of nouns, verbs and adjectives more frequently than humans.

The work is published on the arXiv preprint server.

"It empirically validates the sense that a lot of these generations are formulaic," says Byron Wallace, director of Northeastern's data science program and the Sy and Laurie Sternberg interdisciplinary associate professor. "Literally, they're formulaic."
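The paper's exact method isn't reproduced here, but the general idea of measuring formulaic syntax can be sketched: tag each word with its part of speech, count how often short POS sequences ("syntactic templates") recur, and compare how concentrated those counts are. The sketch below uses made-up, hand-tagged toy corpora (Penn Treebank-style tags) rather than a real tagger, purely to illustrate the counting step:

```python
from collections import Counter

def pos_ngram_profile(tagged_sentences, n=3):
    """Count part-of-speech n-grams ("syntactic templates") across
    a corpus of already-tagged sentences."""
    counts = Counter()
    for sent in tagged_sentences:
        tags = [tag for _, tag in sent]
        for i in range(len(tags) - n + 1):
            counts[tuple(tags[i:i + n])] += 1
    return counts

def repetition_rate(profile, top_k=3):
    """Share of all n-gram occurrences captured by the top_k most
    common templates -- higher means more formulaic syntax."""
    total = sum(profile.values())
    if total == 0:
        return 0.0
    return sum(c for _, c in profile.most_common(top_k)) / total

# Hypothetical toy corpora; in practice the tags would come from a
# real POS tagger run over large samples of AI and human text.
ai_like = [
    [("The", "DT"), ("robust", "JJ"), ("model", "NN"),
     ("delivers", "VBZ"), ("accurate", "JJ"), ("results", "NNS")],
    [("The", "DT"), ("powerful", "JJ"), ("system", "NN"),
     ("provides", "VBZ"), ("reliable", "JJ"), ("answers", "NNS")],
]
human_like = [
    [("I", "PRP"), ("never", "RB"), ("liked", "VBD"),
     ("that", "DT"), ("ending", "NN")],
    [("Rain", "NN"), ("hammered", "VBD"), ("the", "DT"),
     ("tin", "NN"), ("roof", "NN"), ("all", "DT"), ("night", "NN")],
]

print(repetition_rate(pos_ngram_profile(ai_like)))    # 0.75
print(repetition_rate(pos_ngram_profile(human_like))) # 0.5
```

The same few templates dominate the AI-like sample, so its repetition rate is higher; a detector built on this idea would compare such rates against distributions measured on known human and machine text.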





Perspective.

https://thehill.com/policy/energy-environment/4963246-ai-sentience-welfare-study/

Plans must be made for the welfare of sentient AI, animal consciousness researchers argue

Computer scientists need to grapple with the possibility that they will accidentally create sentient artificial intelligence (AI) — and to plan for those systems’ welfare, a new study argues.

The report published on Thursday comes from an unusual quarter: specialists in the frontier field of animal consciousness, several of whom were signatories of the New York Declaration on Animal Consciousness.

But while the probability of creating self-aware artificial life over the next decade might be “objectively low,” it’s high enough that developers need to at least give it thought, Sebo said.





Tools & Techniques. Because you can’t subscribe to everything?

https://www.bespacific.com/all-of-the-paywall-removers-in-one-place/

All of the paywall removers in one place

Archive Buttons – Simply enter the URL of the article and click the archive buttons to remove any paywall.




Wednesday, October 30, 2024

Social media unchecked...

https://www.404media.co/elon-musk-funded-pac-supercharges-progress-2028-democrat-impersonation-ad-campaign/

Elon Musk-Funded PAC Supercharges ‘Progress 2028’ Democrat Impersonation Ad Campaign

An Elon Musk-funded super PAC has expanded an advertising campaign in which it is impersonating Democrats and targeting registered Republicans with policies, unpopular with conservatives, that it says Kamala Harris will pass if she wins the election. The policies, which are not supported by the Harris campaign, include “mandatory” gun buy-back programs, allowing undocumented immigrants to vote, keeping parents out of decisions about gender-affirming care for minors, and imagining “a world without gas-powered vehicles.”

The campaign, called Progress 2028, is designed to look like it is the Democratic version of Project 2025 and lists a set of policies that the group says Harris would enact if elected president. In actuality, the entire scheme is being orchestrated and promoted by an Elon Musk-funded group called Building America’s Future, which registered to operate “Progress 2028” as a “fictitious name” under the PAC, according to documents uncovered by OpenSecrets, which investigates money in politics. Building America’s Future is the group we previously reported on, which is targeting Muslims in Michigan and Jewish people in Pennsylvania with opposing messages about Harris’s stance on Israel’s invasion of Palestine. 



(Related)

https://www.bbc.com/news/articles/cx2dpj485nno

How X users can earn thousands from US election misinformation and AI images

Some users on X who spend their days sharing content that includes election misinformation, AI-generated images and unfounded conspiracy theories say they are being paid "thousands of dollars" by the social media site.

The BBC identified networks of dozens of accounts that re-share each other's content multiple times a day - including a mix of true, unfounded, false and faked material - to boost their reach, and therefore, revenue on the site.

Some of these networks support Donald Trump, others Kamala Harris, and some are independent. Several of these profiles - which say they are not connected to official campaigns - have been contacted by US politicians, including congressional candidates, looking for supportive posts.