Wednesday, October 25, 2023

Reawakens my concern that I will be viewed with great suspicion because I don’t use social media.

https://www.404media.co/inside-ices-database-derogatory-information-giant-oak-gost/

Inside ICE’s Database for Finding ‘Derogatory’ Online Speech

Immigration and Customs Enforcement (ICE) has used a system called Giant Oak Search Technology (GOST) to help the agency scrutinize social media posts, determine if they are “derogatory” to the U.S., and then use that information as part of immigration enforcement, according to a new cache of documents reviewed by 404 Media.

The documents peel back the curtain on a powerful system, both in a technological and a policy sense—how information is processed and used to decide who is allowed to remain in the country and who is not.

The government should not be using algorithms to scrutinize our social media posts and decide which of us is ‘risky.’ And agencies certainly shouldn't be buying this kind of black box technology in secret without any accountability. DHS needs to explain to the public how its systems determine whether someone is a ‘risk’ or not, and what happens to the people whose online posts are flagged by its algorithms,” Patrick Toomey, Deputy Director of the ACLU's National Security Project, told 404 Media in an email. The documents come from a Freedom of Information Act (FOIA) lawsuit brought by both the ACLU and the ACLU of Northern California. Toomey from the ACLU then shared the documents with 404 Media.





Suspicions confirmed. (How does the AI know what you want?)

https://cointelegraph.com/news/humans-ai-prefer-sycophantic-chatbot-answers-truth-study

Humans and AI often prefer sycophantic chatbot answers to the truth — Study

Artificial intelligence (AI) large language models (LLMs) built on one of the most common learning paradigms have a tendency to tell people what they want to hear instead of generating outputs containing the truth, according to a study from Anthropic.

In one of the first studies to delve this deeply into the psychology of LLMs, researchers at Anthropic have determined that both humans and AI prefer so-called sycophantic responses over truthful outputs at least some of the time.

Per the team’s research paper:

“Specifically, we demonstrate that these AI assistants frequently wrongly admit mistakes when questioned by the user, give predictably biased feedback, and mimic errors made by the user. The consistency of these empirical findings suggests sycophancy may indeed be a property of the way RLHF models are trained.”

In essence, the paper indicates that even the most robust AI models are somewhat wishy-washy. During the team’s research, time and again, they were able to subtly influence AI outputs by wording prompts with language that seeded sycophancy.





Perspective.

https://www.bespacific.com/lessig-on-why-ai-and-social-media-are-causing-a-free-speech-crisis-for-the-internet/

Lessig on why AI and social media are causing a free speech crisis for the internet

The Verge: Harvard Professor Lawrence Lessig After 30 years teaching law, the internet policy legend is as worried as you’d think about AI and TikTok and he has surprising thoughts about balancing free speech with protecting democracy. Nilay Patel: “…Larry and I talked about the current and recurring controversy around react videos on YouTube, not what they are but what they represent: the users of a platform trying to establish their own culture around what people can and cannot remix and reuse — their own speech regulations based in copyright law. That’s a fascinating cultural development. There’s a lot of approaches to create these types of speech regulations that get around the First Amendment, and I wanted to know how Larry felt about that as someone who has been writing about speech on the internet for so long. His answers really surprised me. Of course, we also had to talk about artificial intelligence. You’ll hear us pull apart two different types of AI that are really shaping our cultural experiences right now. There’s algorithmic AI, which runs the recommendation engines on social platforms and tries to keep you engaged. And then there’s the new world of generative AI, which everyone agrees is a huge risk for the spread of misinformation, both today and in the future, but which no two people seem to agree on how to tackle. Larry’s thoughts here were also surprising. Maybe, he says, we need to get all of politics offline if we’re going to solve this problem.”





Perspective. Proves that AI is everywhere!

https://time.com/collection/best-inventions-2023/

THE BEST INVENTIONS OF 2023



No comments: