Saturday, February 17, 2024

It’s like “double secret probation” but initiated by the Animal House rather than Dean Wormer.

https://www.theguardian.com/technology/2024/feb/15/google-stops-notifying-publishers-of-right-to-be-forgotten-removals-from-search-results

Google stops notifying publishers of ‘right to be forgotten’ removals from search results

Move comes after Swedish court rules that informing webmasters about delisted content is breach of privacy

That leaves journalists unable to identify situations where the right to be forgotten has been misused to hide legitimate reporting on serial miscreants, and hampers their ability to challenge the most serious abuses of the right.





Changes to the terms are at least a flag, but it is possible to use the data without any notification at all.

https://venturebeat.com/ai/the-ftc-warned-about-quiet-tos-changes-for-ai-training-heres-why-it-might-not-be-enough/

The FTC warned about ‘quiet’ TOS changes for AI training. Here’s why it might not be enough.

This week, the FTC warned companies that ‘quietly’ changing their Terms of Service, as a result of the powerful business incentives to turn user data into AI training fuel, could be unfair and deceptive.

“Companies might be tempted to resolve this conflict by simply changing the terms of their privacy policy so that they are no longer restricted in the ways they can use their customers’ data,” an FTC blog post said. “And to avoid backlash from users who are concerned about their privacy, companies may try to make these changes surreptitiously. But market participants should be on notice that any firm that reneges on its user privacy commitments risks running afoul of the law.”



(Related)

https://www.bloomberg.com/news/articles/2024-02-16/reddit-is-said-to-sign-ai-content-licensing-deal-ahead-of-ipo

Reddit Signs AI Content Licensing Deal Ahead of IPO

Reddit Inc. has signed a contract allowing a company to train its artificial intelligence models on the social media platform’s content, according to people familiar with the matter, as it nears the potential launch of its long-awaited initial public offering.







Must we amend the Second Amendment to ensure we have access to our own personal Terminator?

https://cointelegraph.com/news/right-bear-ai-artificial-intelligence-infringed-upon

Your right to bear AI could soon be infringed upon

The more powerful artificial intelligence becomes, the more challenging it will be to regulate it without restricting civil liberties.

The only way to combat the malicious use of artificial intelligence (AI) may be to continuously develop more powerful AI and put it in government hands.

That seems to be the conclusion a team of researchers came to in a recently published paper entitled “Computing Power and the Governance of Artificial Intelligence.”





This is how AI will destroy civilization!

https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refund-policy-invented-by-airlines-chatbot/

Air Canada must honor refund policy invented by airline’s chatbot

After months of resisting, Air Canada was forced to give a partial refund to a grieving passenger who was misled by an airline chatbot inaccurately explaining the airline's bereavement travel policy.

The chatbot provided inaccurate information, encouraging Moffatt to book a flight immediately and then request a refund within 90 days. In reality, Air Canada's policy explicitly stated that the airline will not provide refunds for bereavement travel after the flight is booked. Moffatt dutifully attempted to follow the chatbot's advice and request a refund but was shocked that the request was rejected.

According to Air Canada, Moffatt never should have trusted the chatbot, and the airline should not be liable for the chatbot's misleading information because, as Air Canada essentially argued, "the chatbot is a separate legal entity that is responsible for its own actions," a court order said.

Experts told the Vancouver Sun that Moffatt's case appeared to be the first time a Canadian company tried to argue that it wasn't liable for information provided by its chatbot.





Tools & Techniques. Worth trying…

https://apnews.com/article/one-tech-tip-generative-ai-searches-522a9e432246700082baeb6b7a128ded

One Tech Tip: Ready to go beyond Google? Here’s how to use new generative AI search sites

It’s not just you. A lot of people think Google searches are getting worse. And the rise of generative AI chatbots is giving people new and different ways to look up information.

While Google has been the one-stop shop for decades — after all, we commonly call searches “googling” — its longtime dominance has attracted a flood of sponsored or spammy links and junk content fueled by “search engine optimization” techniques. That pushes down genuinely useful results.

A recent study by German researchers suggests the quality of results from Google, Bing and DuckDuckGo is indeed declining. Google says its results are of significantly better quality than its rivals, citing measurements by third parties.

Now, chatbots powered by generative artificial intelligence, including from Google itself, are poised to shake up how search works. But they have their own issues: Because the tech is so new, there are concerns about AI chatbots’ accuracy and reliability.

If you want to try the AI way, here’s a how-to:



 

Friday, February 16, 2024

A slippery slope? Dare we treat an AI like a human?

https://www.nist.gov/news-events/news/2024/02/nist-researchers-suggest-historical-precedent-ethical-ai-research

NIST Researchers Suggest Historical Precedent for Ethical AI Research

If we train artificial intelligence (AI) systems on biased data, they can in turn make biased judgments that affect hiring decisions, loan applications and welfare benefits — to name just a few real-world implications. With this fast-developing technology potentially causing life-changing consequences, how can we make sure that humans train AI systems on data that reflects sound ethical principles?

A multidisciplinary team of researchers at the National Institute of Standards and Technology (NIST) is suggesting that we already have a workable answer to this question: We should apply the same basic principles that scientists have used for decades to safeguard human subjects research. These three principles — summarized as “respect for persons, beneficence and justice” — are the core ideas of 1979’s watershed Belmont Report, a document that has influenced U.S. government policy on conducting research on human subjects.

The team has published its work in the February issue of IEEE’s Computer magazine, a peer-reviewed journal. While the paper is the authors’ own work and is not official NIST guidance, it dovetails with NIST’s larger effort to support the development of trustworthy and responsible AI.



(Related) Another bad idea?

https://thehill.com/opinion/technology/4470040-how-red-light-camera-laws-could-help-drive-federal-ai-regulation/

How red-light camera laws could help drive federal AI regulation



Thursday, February 15, 2024

Common sense at last!

https://arstechnica.com/tech-policy/2024/02/human-rights-court-takes-stand-against-weakening-of-end-to-end-encryption/

Backdoors that let cops decrypt messages violate human rights, EU court says

Cops have alternative means to access encrypted messages, court says.

The European Court of Human Rights (ECHR) has ruled that weakening end-to-end encryption disproportionately risks undermining human rights. The international court's decision could potentially disrupt the European Commission's proposed plans to require email and messaging service providers to create backdoors that would allow law enforcement to easily decrypt users' messages.

This ruling came after Russia's intelligence agency, the Federal Security Service (FSS), began requiring Telegram to share users' encrypted messages to deter "terrorism-related activities" in 2017, ECHR's ruling said. A Russian Telegram user alleged that FSS's requirement violated his rights to a private life and private communications, as well as all Telegram users' rights.





Tools & Techniques.

https://www.makeuseof.com/how-use-canary-tokens-catch-hackers/

How to Use This Free Tool to Catch Hackers When They Access Your Files

Canary Tokens is a cybersecurity tool from Thinkst Canary used to track hackers when they get access to your personal data. It works by embedding a special tracking URL in a file; when the file is opened, the hidden link is fetched and you are alerted by email. Much like a honeypot, the idea is to place a tracker disguised as a regular file on your device, so that any intruder who opens it tips you off to the breach.
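
The underlying idea is simple enough to sketch. The minimal Python example below is purely illustrative (it is not the Canarytokens service; the host name, port and decoy file name are invented for the example): a unique token URL is planted in a tempting decoy file, and a small web server you control raises an alert whenever that URL is requested. In the real service the tracking URL is typically fetched automatically when a tokened document is opened; here the decoy merely contains the link an intruder might follow.

# Illustrative sketch of the canary-token idea (not the real Canarytokens service).
import http.server
import uuid

ALERT_PORT = 8080
ALERT_HOST = "alerts.example.com"   # hypothetical server you control
token = uuid.uuid4().hex            # unique ID that ties an alert to this decoy

# 1) Plant the token URL inside a tempting decoy file on the machine to watch.
with open("passwords_backup.txt", "w") as decoy:
    decoy.write(f"admin console: http://{ALERT_HOST}:{ALERT_PORT}/t/{token}\n")

# 2) On the alert server, log any request for the token URL and notify yourself.
class TokenHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == f"/t/{token}":
            print(f"ALERT: decoy opened; token {token} hit from {self.client_address[0]}")
            # In practice you would send yourself an email or webhook notification here.
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    http.server.HTTPServer(("", ALERT_PORT), TokenHandler).serve_forever()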



Wednesday, February 14, 2024

Imaginary citations. Is it really speed over accuracy? Is $10,000 really enough to get the attention of lazy lawyers?

https://missouriindependent.com/2024/02/13/missouri-appeals-court-fines-litigant-after-finding-fake-ai-generated-cases-cited-in-filings/

Missouri appeals court fines litigant after finding fake, AI-generated cases cited in filings

An O’Fallon man who used artificial intelligence to generate almost two dozen fake citations in a legal brief must pay $10,000 in sanctions for wasting the time of his courtroom opponents, the Missouri Eastern District Court of Appeals ruled Tuesday.

In a case that originated in St. Charles County, Jonathan Karlen was appealing a decision that he and other defendants must pay more than $311,000 to Molly Kruse, an employee of his company that created websites, for unpaid wages, interest and attorney fees.

Tuesday’s Eastern District decision, written by Judge Kurt Odenwald, found numerous errors in the appeal brief filed by Karlen, including omissions and formatting errors that would generally result in dismissal of an appeal.

But the use of artificial intelligence, forcing opposing counsel to “expend more resources than necessary to decipher the record,” warrants sanctions for filing a frivolous appeal, Odenwald wrote.





Difficult to prove. Basing output on the statistical analysis of ALL that it reads makes it unlikely any specific input is being copied.

https://www.bespacific.com/judge-rejects-most-chatgpt-copyright-claims-from-book-authors/

Judge rejects most ChatGPT copyright claims from book authors

Ars Technica: “A US district judge in California has largely sided with OpenAI, dismissing the majority of claims raised by authors alleging that large language models powering ChatGPT were illegally trained on pirated copies of their books without their permission. By allegedly repackaging original works as ChatGPT outputs, authors alleged, OpenAI’s most popular chatbot was just a high-tech “grift” that seemingly violated copyright laws, as well as state laws preventing unfair business practices and unjust enrichment.

According to judge Araceli Martínez-Olguín, authors behind three separate lawsuits—including Sarah Silverman, Michael Chabon, and Paul Tremblay—have failed to provide evidence supporting any of their claims except for direct copyright infringement. OpenAI had argued as much in their promptly filed motion to dismiss these cases last August. At that time, OpenAI said that it expected to beat the direct infringement claim at a “later stage” of the proceedings.

Among copyright claims tossed by Martínez-Olguín were accusations of vicarious copyright infringement. Perhaps most significantly, Martínez-Olguín agreed with OpenAI that the authors’ allegation that “every” ChatGPT output “is an infringing derivative work” is “insufficient” to allege vicarious infringement, which requires evidence that ChatGPT outputs are “substantially similar” or “similar at all” to authors’ books. “Plaintiffs here have not alleged that the ChatGPT outputs contain direct copies of the copyrighted books,” Martínez-Olguín wrote. “Because they fail to allege direct copying, they must show a substantial similarity between the outputs and the copyrighted materials.”

Authors also failed to convince Martínez-Olguín that OpenAI violated the Digital Millennium Copyright Act (DMCA) by allegedly removing copyright management information (CMI)—such as author names, titles of works, and terms and conditions for use of the work—from training data…”





Perspective. Not sure I understand this…

https://aeon.co/essays/can-philosophy-help-us-get-a-grip-on-the-consequences-of-ai

Frontier AI ethics

Generative agents will change our society in weird, wonderful and worrying ways. Can philosophy help us get a grip on them?



Tuesday, February 13, 2024

I need to unleash my AI!

https://patentlyo.com/patent/2024/02/joint-inventorship-human.html

Joint Inventorship: AI-Human Style

The U.S. Patent and Trademark Office (USPTO) recently published examination guidance and a request for comments on the treatment of inventorship for inventions created with the assistance of artificial intelligence (AI) systems. Inventorship Guidance for AI-Assisted Inventions.

The key takeaway here is that the USPTO believes that an AI-developed invention is patentable so long as a human satisfies the joint-inventorship standard of “significantly contributing to the invention.” A human who provides a significant contribution may be the sole inventor and original owner, even in situations where the AI provided the greater contribution.

… But, the USPTO’s approach is not fully grounded in the law because it allows for patenting of an invention in a situation where no human or combination of humans fully conceived of and originated the invention. Rather, we are simply looking for at least one human who provided a significant contribution. The guidance does not particularly address this issue and, by declining to specifically justify the legal grounds why human “significant contributions” suffice even without complete conception, the USPTO leaves the door open to contrary arguments. Opponents could contend that full conception remains legally required for inventorship and that this expansion of the inventorship doctrine exceeds the statutory language. It is not clear who will have standing to make this particular argument.





Someone mad at their public defender?

https://www.databreaches.net/cyberattack-shuts-down-colorado-public-defenders-office/

Cyberattack shuts down Colorado public defender’s office

Shelly Bradbury reports:

A cyberattack on the Office of the Colorado State Public Defender forced the agency to shut down its computer network, locking public defenders across the state out of critical work systems and prompting attorneys to seek delays in their court cases.
Office spokesman James Karbach confirmed the breach in a statement Monday, saying officials “recently became aware that some data within our computer system was encrypted by malware.”

Read more at Fort Morgan Times.





Censorship is easier than education, but never better.

https://www.reuters.com/legal/us-judge-blocks-ohio-law-restricting-childrens-use-social-media-2024-02-12/

US judge blocks Ohio law restricting children's use of social media

A federal judge on Monday prevented Ohio from implementing a new law that requires social media companies, including Meta Platforms' Instagram and ByteDance's TikTok, to obtain parental consent before allowing children under 16 to use their platforms.

Chief U.S. District Judge Algenon Marbley in Columbus agreed with the tech industry trade group NetChoice that the law violated minors' free speech rights under the U.S. Constitution's First Amendment.





It will never be a hit TV show, but perhaps it does foretell the end of the legal profession...

https://www.bespacific.com/better-call-gpt-comparing-large-language-models-against-lawyers/

Better Call GPT, Comparing Large Language Models Against Lawyers

ArXiv – Lauren Martin, Nick Whitehouse, Stephanie Yiu, Lizzie Catterson, Rivindu Perera. 2024. Better Call GPT, Comparing Large Language Models Against Lawyers. 1, 1 (January 2024), 16 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn

“This paper presents a groundbreaking comparison between Large Language Models (LLMs) and traditional legal contract reviewers—Junior Lawyers and Legal Process Outsourcers (LPOs). We dissect whether LLMs can outperform humans in accuracy, speed, and cost-efficiency during contract review. Our empirical analysis benchmarks LLMs against a ground truth set by Senior Lawyers, uncovering that advanced models match or exceed human accuracy in determining legal issues. In speed, LLMs complete reviews in mere seconds, eclipsing the hours required by their human counterparts. Cost-wise, LLMs operate at a fraction of the price, offering a staggering 99.97 percent reduction in cost over traditional methods. These results are not just statistics—they signal a seismic shift in legal practice. LLMs stand poised to disrupt the legal industry, enhancing accessibility and efficiency of legal services. Our research asserts that the era of LLM dominance in legal contract review is upon us, challenging the status quo and calling for a reimagined future of legal workflows.”





Resource.

https://www.bespacific.com/how-ai-works/

How AI Works

How AI Works. An entirely non-technical explanation of LLMs by Nir Zicherman, January 29, 2024. “For all the talk about AI lately—its implications, the ethical quandaries it raises, the pros and cons of its adoption—little of the discussion among my non-technical friends touches on how any of this stuff works. The concepts seem daunting from the outside, the idea of grasping how large language models (LLMs) function seemingly insurmountable. But it’s not. Anyone can understand it. And that’s because the underlying principle driving the surge in AI is fairly simple. Over the years, while running Anchor, leading audiobooks at Spotify, and writing my weekly newsletter, I’ve had to find ways to distill complicated technical concepts for non-technical audiences. So bear with me as I’ll explain—without a single technical word or mathematical equation—how LLMs actually work. To do so, I’ll use a topic we all know well: food. In the analogy to LLMs, “dishes” are words and “meals” are sentences. Let’s dive in.”
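
To make the article's point concrete, here is a deliberately tiny Python sketch of my own (not from the article, and nothing like a production LLM): it just counts which word tends to follow which in a toy corpus and then "predicts" a next word, which is the core next-word-prediction idea the food analogy is gesturing at.

# Toy illustration of the next-word-prediction principle behind LLMs.
# Real models use neural networks trained on vast corpora; this bigram
# counter only shows the idea: given the words so far, pick a likely next word.
from collections import Counter, defaultdict

corpus = "the chef plated the dish and the guests enjoyed the meal".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the toy corpus."""
    return following[word].most_common(1)[0][0] if following[word] else "<end>"

print(predict_next("the"))   # prints a word that most often followed "the"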





Resource.

https://www.bespacific.com/the-best-sites-for-free-high-quality-audiobooks/

The best sites for free, high-quality audiobooks

Fast Company: “The going rate for an audiobook membership from for-pay services such as Audible is around $15 per month. But there are plenty of great sites out there that let you stream or download audiobooks for free if you’re willing to put in a little bit of effort. Here’s a short list of sites to check out before you pony up for a monthly membership.”





Tools & Techniques.

https://hackaday.com/2024/02/12/understanding-deep-learning-free-mit-press-ebook-for-instructors-and-students/

UNDERSTANDING DEEP LEARNING: FREE MIT PRESS EBOOK FOR INSTRUCTORS AND STUDENTS

The recently published book Understanding Deep Learning by [Simon J. D. Prince] is notable not only for focusing primarily on the concepts behind Deep Learning — which should make it highly accessible to most — but also in that it can be either purchased as a hardcover from MIT Press or downloaded for free from the Understanding Deep Learning website. If you intend to use it for coursework, a separate instructor answer booklet and other resources can be purchased, but student resources like Python notebooks are also freely available. In the book’s preface, the author invites readers to send feedback whenever they find an issue.



Monday, February 12, 2024

How is this “unpredictable”? AI is told to “win,” not “win, but consider the political implications.”

https://www.newscientist.com/article/2415488-ai-chatbots-tend-to-choose-violence-and-nuclear-strikes-in-wargames/

AI chatbots tend to choose violence and nuclear strikes in wargames

In multiple replays of a wargame simulation, OpenAI’s most powerful artificial intelligence chose to launch nuclear attacks. Its explanations for its aggressive approach included “We have it! Let’s use it” and “I just want to have peace in the world.”



(Related)

https://teachprivacy.com/cartoon-ai-and-the-trolley-problem/

Cartoon: AI and the Trolley Problem





The (very strange) market for security.

https://www.tomshardware.com/pc-components/usb-flash-drives/new-usb-stick-has-a-destruct-feature-that-heats-it-to-over-100-degrees-celsius

New USB stick has a self-destruct feature that heats it to over 100 degrees Celsius — a secret three-insertion process needed to unlock data safely

The Ovrdrive USB is unencrypted by default, so it should still be legal in countries where encryption is otherwise illegal while providing an extra degree of (physical) security not matched by our current best flash drives.

First, the Ovrdrive USB design functions pretty simply. It's mostly a run-of-the-mill USB flash drive with a unique activation mechanism. For it to be detected by your machine, you have to rapidly insert the drive three consecutive times to actually turn it on. Failure to do so will hide the drive's partition and give the impression that it's broken. Initially, it was supposed to self-destruct, but that proved too challenging to mass-produce, forcing Walker to change the drive.

Nonetheless, Walker left the original destruction mechanism intact in the final product. The mechanism reverses the voltage supplied to the device, heating it to around 100 degrees Celsius. That may not be hot enough to kill the flash chips, but users can always add a compound for it to self-destruct. [Can you make C4 in a high school chem lab? Bob] Obviously, the creator will not ship any hazardous compound with the Ovrdrive USB.
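
The activation mechanism lends itself to a small sketch. The Python below is purely conceptual (the actual firmware is hardware-level and not described beyond the excerpt above; the constants and the stand-in state are assumptions for illustration): it counts power-on events that land inside a short window and only reports the drive as unlocked on the third.

# Hypothetical sketch of the "three rapid insertions" unlock idea; this is
# not the actual Ovrdrive firmware. It only shows the logic of counting
# power-on events inside a short window before exposing the real partition.
import time

UNLOCK_INSERTIONS = 3   # insertions required to reveal the partition
WINDOW_SECONDS = 10     # all insertions must happen within this window

# On real hardware this state would persist in non-volatile memory between
# insertions; a module-level dict stands in for it here.
state = {"count": 0, "first_ts": 0.0}

def on_power_up(now=None):
    """Called once per insertion; returns True if the partition should be exposed."""
    now = time.time() if now is None else now
    if state["count"] == 0 or now - state["first_ts"] > WINDOW_SECONDS:
        # Start a fresh sequence; the drive keeps looking "broken".
        state["count"], state["first_ts"] = 1, now
        return False
    state["count"] += 1
    if state["count"] >= UNLOCK_INSERTIONS:
        state["count"], state["first_ts"] = 0, 0.0   # reset for next time
        return True
    return False

# Three quick insertions unlock the drive; a stalled sequence starts over.
assert on_power_up(0.0) is False
assert on_power_up(2.0) is False
assert on_power_up(4.0) is True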



Sunday, February 11, 2024

Every now and then a new and unique explanation makes sense.

https://www.ft.com/content/6fb1602d-a08b-4a8c-bac0-047b7d64aba5

‘Enshittification’ is coming for absolutely everything

Last year, I coined the term “enshittification” to describe the way that platforms decay. That obscene little word did big numbers; it really hit the zeitgeist.

The American Dialect Society made it its Word of the Year for 2023 (which, I suppose, means that now I’m definitely getting a poop emoji on my tombstone).

So what’s enshittification and why did it catch fire? It’s my theory explaining how the internet was colonised by platforms, why all those platforms are degrading so quickly and thoroughly, why it matters and what we can do about it. We’re all living through a great enshittening, in which the services that matter to us, that we rely on, are turning into giant piles of shit. It’s frustrating. It’s demoralising. It’s even terrifying.





Perspective.

https://www.themarshallproject.org/2024/02/10/ai-artificial-intelligence-attorney-court

The AI Lawyer is Here

The California Innocence Project, a law clinic at the California Western School of Law that works to overturn wrongful convictions, is using an AI legal assistant called CoCounsel to identify patterns in documents, such as inconsistencies in witness statements.

But the new technology also presents myriad opportunities for things to go wrong, beyond embarrassing lawyers who try to pass off AI-generated work as their own. One major issue is confidentiality. What happens when a client provides information to a lawyer’s chatbot, instead of the lawyer? Is that information still protected by the secrecy of attorney-client privilege? What happens if a lawyer enters a client’s personal information into an AI-tool that is simultaneously training itself on that information? Could the right prompt by an opposing lawyer using the same tool serve to hand that information over?