Saturday, November 02, 2024

Perspective. AI is getting better at everything, including crime.

https://www.zdnet.com/article/anthropic-warns-of-ai-catastrophe-if-governments-dont-regulate-in-18-months/

Anthropic warns of AI catastrophe if governments don't regulate in 18 months

Only days away from the US presidential election, AI company Anthropic is advocating for its own regulation -- before it's too late. 

On Thursday, the company, which stands out in the industry for its focus on safety, released recommendations for governments to implement "targeted regulation" alongside potentially worrying data on the rise of what it calls "catastrophic" AI risks.

In a blog post, Anthropic noted how much progress AI models have made in coding and cyber offense in just one year. "On the SWE-bench software engineering task, models have improved from being able to solve 1.96% of a test set of real-world coding problems (Claude 2, October 2023) to 13.5% (Devin, March 2024) to 49% (Claude 3.5 Sonnet, October 2024)," the company wrote. "Internally, our Frontier Red Team has found that current models can already assist on a broad range of cyber offense-related tasks, and we expect that the next generation of models -- which will be able to plan over long, multi-step tasks -- will be even more effective."



Friday, November 01, 2024

Another way to look at the web.

https://www.bespacific.com/introducing-chatgpt-search/

Introducing ChatGPT search

OpenAI: “ChatGPT can now search the web in a much better way than before. You can get fast, timely answers with links to relevant web sources, which you would have previously needed to go to a search engine for. This blends the benefits of a natural language interface with the value of up-to-date sports scores, news, stock quotes, and more. ChatGPT will choose to search the web based on what you ask, or you can manually choose to search by clicking the web search icon. Search will be available at chatgpt.com, as well as on our desktop and mobile apps. All ChatGPT Plus and Team users, as well as SearchGPT waitlist users, will have access today. Enterprise and Edu users will get access in the next few weeks. We’ll roll out to all Free users over the coming months… Chats now include links to sources, such as news articles and blog posts, giving you a way to learn more. Click the Sources button below the response to open a sidebar with the references…”

See also MIT Technology Review: “At stake is the future of AI search—that is, chatbots that summarize information from across the web. If their growing popularity is any indication, these AI “answer engines” could replace traditional search engines as our default gateway to the internet. While ordinary AI chatbots can reproduce—often unreliably—information learned through training, AI search tools like Perplexity, Google’s Gemini, or OpenAI’s now-public SearchGPT aim to retrieve and repackage information from third-party websites. They return a short digest to users along with links to a handful of sources, ranging from research papers to Wikipedia articles and YouTube transcripts. The AI system does the reading and writing, but the information comes from outside…. At its best, AI search can better infer a user’s intent, amplify quality content, and synthesize information from diverse sources. But if AI search becomes our primary portal to the web, it threatens to disrupt an already precarious digital economy. Today, the production of content online depends on a fragile set of incentives tied to virtual foot traffic: ads, subscriptions, donations, sales, or brand exposure. By shielding the web behind an all-knowing chatbot, AI search could deprive creators of the visits and “eyeballs” they need to survive…”





Tools & Techniques.

https://www.zdnet.com/article/claude-ai-adds-desktop-apps-and-dictation-mode-heres-how-to-use-them/

Claude AI adds desktop apps and dictation mode – here's how to use them

Anthropic is expanding its Claude AI beyond the web. On Thursday, the company unveiled new desktop applications for its popular chatbot. Designed for Windows and MacOS, the new apps work similarly to the website and are available for free users and paid subscribers.

To grab the apps, head to the Claude for Desktop site, where you'll find versions for Windows, Windows on ARM, and MacOS. For now, the apps are tagged with a beta label, which may indicate that Anthropic is still tweaking them. After downloading one of the apps, you'll be prompted to sign in using a Google account or an email link. From there, use Claude just as you would use the website.



Thursday, October 31, 2024

Imagine the consequences…

https://www.theregister.com/2024/10/31/canada_cybersec_threats/

Chinese attackers accessed Canadian government networks – for five years

India makes it onto list of likely threats for the first time

A report by Canada's Communications Security Establishment (CSE) revealed that state-backed actors have collected valuable information from government networks for five years.

The biennial National Cyber Threat Assessment described the People's Republic of China's (PRC) cyber operations against Canada as "second to none." Their purpose is to "serve high-level political and commercial objectives, including espionage, intellectual property (IP) theft, malign influence, and transnational repression."

The report also named Russia and Iran as significant hostile states – which isn't surprising.

The inclusion of India, named for the first time as an emerging threat, may be. Canada and India are, after all, both democracies and share membership of the UK-centric Commonwealth of Nations.





Should the people with the passwords also be posting things online?

https://www.reuters.com/world/us/colorado-voting-system-partial-passwords-accidentally-posted-government-website-2024-10-30/

Colorado voting system partial passwords accidentally posted on government website

Partial passwords to some parts of the state's voting systems that were accidentally posted online pose no threat to the Nov. 5 general election, the Colorado Department of State said on Tuesday.

The department said a spreadsheet located on its website "improperly" included a hidden tab including partial passwords to certain components of Colorado voting systems.
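The incident turns on a quirk worth knowing: a "hidden" spreadsheet tab is only a display flag, not a security control. A minimal stdlib-only sketch (hypothetical sheet names and values, not the actual Colorado file) shows that a hidden sheet's contents remain readable to anyone who opens the .xlsx as the ZIP-of-XML it really is:

```python
import io, re, zipfile

# An .xlsx file is just a ZIP of XML parts. "Hiding" a sheet only sets a
# visibility attribute in xl/workbook.xml -- the sheet's data stays in the file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("xl/workbook.xml",
        '<workbook><sheets>'
        '<sheet name="Public" sheetId="1" state="visible"/>'
        '<sheet name="Passwords" sheetId="2" state="hidden"/>'
        '</sheets></workbook>')
    z.writestr("xl/worksheets/sheet2.xml",
        '<worksheet><sheetData><row><c t="str"><v>hunter2-partial</v></c>'
        '</row></sheetData></worksheet>')

# Anyone with a copy of the file can list the parts and read the
# "hidden" sheet's values directly.
with zipfile.ZipFile(buf) as z:
    workbook = z.read("xl/workbook.xml").decode()
    hidden = re.findall(r'<sheet name="([^"]+)"[^>]*state="hidden"', workbook)
    leaked = re.findall(r"<v>([^<]+)</v>",
                        z.read("xl/worksheets/sheet2.xml").decode())

print(hidden)  # ['Passwords']
print(leaked)  # ['hunter2-partial']
```

The only safe fix is deleting the sensitive rows before publishing, not hiding the tab.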





Tasteless. Seems trivial but could kill.

https://databreaches.net/2024/10/30/fbi-investigated-disney-world-cyberattack-after-restaurant-menus-were-changed/

FBI investigated Disney World cyberattack after restaurant menus were changed

Gabrielle Russon reports on your latest reminder of the insider threat:

A fired Disney World employee is accused of hacking into an online system and altering Disney World restaurant menus by changing fonts and prices, adding profanity and manipulating the food allergy warnings, according to new federal documents.
The cyberattack caused at least $150,000 in damage and has gotten the FBI involved. Disney printed the wrong menus but realized the mistake in time. The menus were not sent to restaurants or distributed to the public.
A criminal complaint against Michael Scheuer was filed last week in U.S. District Court’s Orlando division. He was arrested on Oct. 23.

Read more at Florida Politics.

Note that this allegedly vengeful former employee also risked public health and safety. By editing the menus to suggest that certain items were safe for people with peanut allergies when they weren’t, he risked people having life-threatening anaphylactic incidents. There is no allegation that anyone was actually harmed or injured, however, as Disney detected the alterations before menus could be sent out to restaurants.

There seems to be a lot more to this case, as the affidavit in support of the complaint refers to DDoS attacks and Scheuer allegedly “doxing” his victims.

DataBreaches reminds readers that a complaint is just unproven allegations at this point.





To be expected? AI algorithms generate formulaic speech.

https://techxplore.com/news/2024-10-text-ai-generated-figured-method.html

How can you tell if text is AI-generated? Researchers have figured out a new method

Have you ever looked at a piece of writing and thought something might be "off"? It might be hard to pinpoint exactly what it is. There might be too many adjectives or the sentence structure might be overly repetitious. It might get you thinking, "Did a human write this or was it generated by artificial intelligence?"

In a new paper, researchers at Northeastern University set out to make it a little easier to answer that question by analyzing the syntax, or sentence structure, in AI-generated text. What they found is that AI models tend to produce specific patterns of nouns, verbs and adjectives more frequently than humans.

The work is published on the arXiv preprint server.

"It empirically validates the sense that a lot of these generations are formulaic," says Byron Wallace, director of Northeastern's data science program and the Sy and Laurie Sternberg interdisciplinary associate professor. "Literally, they're formulaic."
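The notion of "formulaic" syntax can be sketched with a toy metric (my illustration, not the Northeastern authors' actual method): profile the frequencies of part-of-speech n-grams and see how much probability mass the single most common pattern absorbs. The tag sequences below are hand-made for demonstration; real use would run a POS tagger over actual text first.

```python
from collections import Counter

def pos_ngram_profile(tags, n=3):
    """Relative frequency of each part-of-speech n-gram (tags pre-computed)."""
    grams = [tuple(tags[i:i + n]) for i in range(len(tags) - n + 1)]
    total = len(grams)
    return {g: c / total for g, c in Counter(grams).items()}

def repetitiveness(tags, n=3):
    """Share of n-grams taken by the most common pattern --
    a crude proxy for how 'formulaic' the syntax is."""
    return max(pos_ngram_profile(tags, n).values())

# Hypothetical tag sequences: varied syntax vs. a repeating ADJ-NOUN-VERB loop.
humanish = ["DET","NOUN","VERB","ADJ","NOUN","ADV","VERB","DET","ADJ","NOUN","VERB","NOUN"]
aish     = ["ADJ","NOUN","VERB","ADJ","NOUN","VERB","ADJ","NOUN","VERB","ADJ","NOUN","VERB"]

print(repetitiveness(humanish))  # 0.1  (all ten trigrams distinct)
print(repetitiveness(aish))      # 0.4  (one trigram dominates)
```

A higher score on the repetitive sequence matches the intuition the researchers report: machine text reuses a narrower set of syntactic patterns.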





Perspective.

https://thehill.com/policy/energy-environment/4963246-ai-sentience-welfare-study/

Plans must be made for the welfare of sentient AI, animal consciousness researchers argue

Computer scientists need to grapple with the possibility that they will accidentally create sentient artificial intelligence (AI) — and to plan for those systems’ welfare, a new study argues.

The report published on Thursday comes from an unusual quarter: specialists in the frontier field of animal consciousness, several of whom were signatories of the New York Declaration on Animal Consciousness.

But while the probability of creating self-aware artificial life over the next decade might be “objectively low,” it’s high enough that developers need to at least give it thought, said Jeff Sebo, one of the report’s authors.





Tools & Techniques. Because you can’t subscribe to everything?

https://www.bespacific.com/all-of-the-paywall-removers-in-one-place/

All of the paywall removers in one place

Archive Buttons – Simply enter the URL of the article and click the archive buttons to remove any paywall.




Wednesday, October 30, 2024

Social media unchecked...

https://www.404media.co/elon-musk-funded-pac-supercharges-progress-2028-democrat-impersonation-ad-campaign/

Elon Musk-Funded PAC Supercharges ‘Progress 2028’ Democrat Impersonation Ad Campaign

An Elon Musk-funded super PAC has expanded an advertising campaign in which it is impersonating Democrats and targeting registered Republicans with policies, unpopular with conservatives, that it claims Kamala Harris will pass if she wins the election. The policies, which are not supported by the Harris campaign, include “mandatory” gun buy-back programs, allowing undocumented immigrants to vote, keeping parents out of decisions about gender-affirming care for minors, and imagining “a world without gas-powered vehicles.”

The campaign, called Progress 2028, is designed to look like it is the Democratic version of Project 2025 and lists a set of policies that the group says Harris would enact if elected president. In actuality, the entire scheme is being orchestrated and promoted by an Elon Musk-funded group called Building America’s Future, which registered to operate “Progress 2028” as a “fictitious name” under the PAC, according to documents uncovered by OpenSecrets, which investigates money in politics. Building America’s Future is the group we previously reported on, which is targeting Muslims in Michigan and Jewish people in Pennsylvania with opposing messages about Harris’s stance on Israel’s invasion of Palestine. 



(Related)

https://www.bbc.com/news/articles/cx2dpj485nno

How X users can earn thousands from US election misinformation and AI images

Some users on X who spend their days sharing content that includes election misinformation, AI-generated images and unfounded conspiracy theories say they are being paid "thousands of dollars" by the social media site.

The BBC identified networks of dozens of accounts that re-share each other's content multiple times a day - including a mix of true, unfounded, false and faked material - to boost their reach, and therefore, revenue on the site.

Some of these networks support Donald Trump, others Kamala Harris, and some are independent. Several of these profiles - which say they are not connected to official campaigns - have been contacted by US politicians, including congressional candidates, looking for supportive posts.



Tuesday, October 29, 2024

You do everything right and the people paid to keep you safe rat you out…

https://www.theregister.com/2024/10/29/macron_location_strava/

Merde! Macron's bodyguards reveal his location by sharing Strava data

The French equivalent of the US Secret Service may have been letting their guard down, as an investigation showed they are easily trackable via the fitness app Strava.

An investigation by Le Monde has shown that members of the Security Group for the Presidency of the Republic (GSPR) have been openly displaying their location on the popular software during their workout sessions. Since they travel with President Emmanuel Macron, this makes it fairly easy to work out his location. A dozen of his bodyguards were leaking key information this way.

More disclosures are promised later, but it appears that both President Biden and Russia's Vladimir Putin are also vulnerable to this kind of tracking. In the latter case, it would be interesting if someone - say a Ukrainian drone operator - got hold of such information.
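The leak vector is mundane: fitness apps publish GPS traces, commonly as GPX files, and the first trackpoint of a publicly shared workout is usually where the athlete woke up that morning. A stdlib-only sketch with made-up coordinates (not anyone's real trace):

```python
import xml.etree.ElementTree as ET

# Hypothetical GPX export of the kind a fitness app might make public.
GPX = """<gpx xmlns="http://www.topografix.com/GPX/1/1">
  <trk><trkseg>
    <trkpt lat="48.8566" lon="2.3522"><time>2024-10-28T06:30:00Z</time></trkpt>
    <trkpt lat="48.8570" lon="2.3530"><time>2024-10-28T06:31:00Z</time></trkpt>
  </trkseg></trk>
</gpx>"""

NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

def start_point(gpx_xml):
    """First recorded trackpoint -- typically where the athlete began the run,
    e.g. a hotel near whoever they are protecting."""
    root = ET.fromstring(gpx_xml)
    pt = root.find(".//gpx:trkpt", NS)
    return float(pt.get("lat")), float(pt.get("lon"))

print(start_point(GPX))  # (48.8566, 2.3522)
```

Aggregating the start points of a dozen bodyguards' runs over time is enough to reconstruct a protectee's travel schedule, which is exactly what the Le Monde investigation describes.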





Perspective.

https://neurosciencenews.com/human-ai-colaboration-neuroscience-27953/

When Human-AI Teams Thrive and When They Don’t

Published today in Nature Human Behaviour, “When Combinations of Humans and AI Are Useful” is the first large-scale meta-analysis conducted to better understand when human-AI combinations are useful in task completion, and when they are not.

Surprisingly, the research has found that combining humans and AI to complete decision-making tasks often fell short; but human-AI teams showed much potential working in combination to perform creative tasks.





Tools & Techniques.

https://www.bespacific.com/deepfake-o-meter/

Deepfake-o-Meter

UB Media Forensic Lab – DEEPFAKE-O-METER – An Open Platform Integrating State-Of-The-Art Algorithms for DeepFake Image, Video, and Audio Detection. Free but requires login.



Monday, October 28, 2024

Use AI to analyze your data. If you know what to analyze.

https://www.zdnet.com/article/could-ai-make-data-science-obsolete/

Could AI make data science obsolete?

According to these experts, AI democratizes software development, but could eventually replace it altogether -- and change data science as we know it.

That's the word from Thomas Davenport of Babson College and Ian Barkin, a venture capitalist, in their latest book, All Hands on Tech: The AI-Powered Citizen Revolution. For starters, they point out that with low-code and no-code tools, robotic process automation, and now AI, the gates of software development are open to all. 



(Related)

https://databreaches.net/2024/10/27/in-legal-first-japan-convicts-man-of-abusing-ai-to-generate-ransomware/

In legal first, Japan convicts man of abusing AI to generate ransomware

Malay Mail reports:

 A 25-year-old man has become the first person in Japan to be convicted for criminal activities involving generative AI.
According to The Yomiuri Shimbun, the Tokyo District Court found Ryuki Hayashi guilty of creating a computer virus using interactive generative artificial intelligence.
He was sentenced to three years in prison, suspended for four years. Prosecutors had sought a four-year sentence.
The newspaper reported that Hayashi developed the ransomware-like virus at his home in Kawasaki around March 31, 2023, using illegal source code obtained with AI tools.

Read more at MSN.



Sunday, October 27, 2024

This could be useful…

https://dl.acm.org/doi/abs/10.1145/3691620.3695353

CompAi: A Tool for GDPR Completeness Checking of Privacy Policies using Artificial Intelligence

We introduce CompAI, a tool for checking the completeness of privacy policies against the General Data Protection Regulation (GDPR). CompAI facilitates the analysis of privacy policies to check their compliance with GDPR requirements. Since privacy policies serve as an agreement between a software system and its prospective users, a policy must fully capture such requirements to ensure that the collected personal data of individuals (or users) remains protected as specified by the GDPR. For a given privacy policy, CompAI semantically analyzes its textual content against a comprehensive conceptual model which captures all information types that might appear in any policy. Based on this analysis, alongside some input from the end user, CompAI can determine the potential incompleteness violations in the input policy with an accuracy of ≈96%. CompAI generates a detailed report that can be easily reviewed and validated by experts. The source code of CompAI is publicly available at https://figshare.com/articles/online_resource/CompAI/23676069, and a demo of the tool is available at https://youtu.be/zwa_tM3fXHU.
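The underlying completeness-checking idea can be illustrated, far more crudely than the paper's semantic analysis, with a keyword checklist. The topics and phrases below are my simplified hypotheticals, not the tool's conceptual model:

```python
# Toy screen for GDPR-mandated disclosures: each required topic maps to
# phrases whose presence counts as covering that topic. (Hypothetical,
# heavily simplified checklist -- real GDPR analysis needs far more.)
REQUIRED_TOPICS = {
    "controller identity": ["data controller", "we are"],
    "processing purposes": ["purpose", "why we collect"],
    "retention period": ["retain", "retention", "how long"],
    "right to erasure": ["erasure", "delete your data", "right to be forgotten"],
}

def completeness_report(policy_text):
    """Map each required topic to whether the policy appears to cover it."""
    text = policy_text.lower()
    return {topic: any(kw in text for kw in kws)
            for topic, kws in REQUIRED_TOPICS.items()}

policy = ("We are Example Corp, the data controller. We collect email addresses "
          "for the purpose of account management and retain them for two years.")
report = completeness_report(policy)
missing = [topic for topic, covered in report.items() if not covered]
print(missing)  # ['right to erasure']
```

A real checker works at the level of meaning rather than keywords, which is why the paper reports needing a conceptual model plus semantic analysis to reach its claimed ≈96% accuracy.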





AI as owner. Who gets paid?

https://scholarsarchive.byu.edu/byuplr/vol38/iss1/10/

Plagiarism or Progress?: An Inquiry into Generative AI and Copyright

The emergence of accessible artificial intelligence (AI) in 2022, including notable models such as ChatGPT, Google Bard, and Microsoft Bing Chat, has sparked significant discourse regarding their societal impact. This paper delves into the contrasting perspectives on the role of Large Language Models (LLMs) in society, particularly in the realm of copyright law. While pessimists express concerns about the potential stifling of human ingenuity and innovation, optimists embrace AI's creative potential and advocate for legal reforms to address copyright issues. Focusing on the legal domain, this paper argues for amending US copyright law to accommodate AI-generated content, proposing that the primary creator of such content should hold copyright, contingent upon user involvement. By examining the inherent challenges within the current legal framework and proposing reforms to uphold innovation and creativity, this article aims to contribute to a comprehensive understanding of AI's influence on copyright law.





Perspective.

https://digitalcommons.onu.edu/cgi/viewcontent.cgi?article=1367&context=onu_law_review

AI in Robes: Courts, Judges, and Artificial Intelligence

The legal system, and courts and judges in particular, are often criticized for being slow to address new technologies. That has not been the case with artificial intelligence (“AI”), especially since the public release of generative AI programs such as ChatGPT. In the last couple of years, court systems and individual courts have proactively taken steps to anticipate and prepare for issues created by AI. These actions include steps both to allow courts to take advantage of the benefits offered by AI and to be prepared to identify and mitigate the risks it creates. This rare technological activism by the courts reflects an understanding of the profound impacts that AI is likely to have on the legal system and society.
This Article reviews the actions that courts have taken to address AI. Part I examines the role of the courts in policing the inappropriate use of AI by attorneys. Part II describes the courts’ utilization of AI in their operations, both in administrative applications and in researching and drafting judicial opinions and orders. In supervising both attorneys’ use of AI and their own, courts have acted surprisingly proactively, spurred on by the rapid speed and powerful capabilities of emerging AI tools.
This Article reviews the actions that courts have taken to address AI.5 Part I examines the role of the courts in policing the inappropriate use of AI by attorneys.6 Part II describes the courts’ utilization of AI in their operations, both in administrative applications and in researching and drafting judicial opinions and orders.7 In both supervising attorneys’ and their own use of AI, courts have acted surprisingly proactively, spurred on by the rapid speed and powerful capabilities of emerging AI tools.8