Tuesday, March 14, 2023

I’d like to disagree, but my AI said, “Not so fast, Bob.”

https://techcrunch.com/2023/03/07/worldcoin-cofounded-by-sam-altman-is-betting-the-next-big-thing-in-ai-is-proving-you-are-human/

Worldcoin, co-founded by Sam Altman, is betting the next big thing in AI is proving you are human

Fake virtual identities are nothing new. The ability to so easily create them has been both a boon for social media platforms — more “users” — and a scourge, tied as they are to the spread of conspiracy theories, distorted discourse and other societal ills.

Still, Twitter bots are nothing compared with what the world is about to experience, as any time spent with ChatGPT illustrates. Flash forward a few years and it will be impossible to know if someone is communicating with another mortal or a neural network.



(Related)

https://www.theatlantic.com/technology/archive/2023/03/ai-chatbots-large-language-model-misinformation/673376/

Why Are We Letting the AI Crisis Just Happen?

Bad actors could seize on large language models to engineer falsehoods at unprecedented scale.

New AI systems such as ChatGPT, the overhauled Microsoft Bing search engine, and the reportedly soon-to-arrive GPT-4 have utterly captured the public imagination. ChatGPT is the fastest-growing online application, ever, and it’s no wonder why. Type in some text, and instead of getting back web links, you get well-formed, conversational responses on whatever topic you selected—an undeniably seductive vision.

But the public, and the tech giants, aren’t the only ones who have become enthralled with the Big Data–driven technology known as the large language model. Bad actors have taken note of the technology as well. At the extreme end, there’s Andrew Torba, the CEO of the far-right social network Gab, who said recently that his company is actively developing AI tools to “uphold a Christian worldview” and fight “the censorship tools of the Regime.” But even users who aren’t motivated by ideology will have their impact.



(Related)

https://thehill.com/opinion/technology/3890357-chatgpt-blues-the-coming-generative-ai-gerrymandering-of-the-american-mind/

ChatGPT blues: The coming generative AI gerrymandering of the American mind

ChatGPT has been trained on vastly more text than individual experts can ever hope to read. So, it is not surprising that ChatGPT is viewed as an objective oracle and friendly guide to any and all topics under the sun. In this giddy excitement, we are overlooking that it can gradually shape individual beliefs and shift social attitudes: As you rely on it more and more, this machine’s worldview easily could become your worldview. Vox AI, vox populi!

Indeed, it turns out that ChatGPT may be an influencer with an agenda. Early research shows consistent, left-of-center leanings of ChatGPT. Compared to conservative positions, it exhibits a positive sentiment and tone toward liberal politicians and policies. Ditto for the European Union, where ChatGPT responses align more closely with some political parties than others.





The pendulum swings?

https://thenextweb.com/news/uk-plans-replace-gdpr-data-protection-unleash-savings-cut-red-tape

New plans for a GDPR replacement have divided Britain’s tech sector

Critics fear privacy will be sacrificed for business benefits

The UK has finally unveiled plans for its GDPR replacement: the Data Protection and Digital Information Bill (DPDIB). Introduced in Parliament last week, the bill aims to boost economic growth while protecting privacy.

The proposed rules promise to reduce paperwork, slash costs, foster trade, and (please, Lord) cut down on cookie pop-ups. They also controversially claim to produce savings of more than £4 billion over 10 years (more on that later).





AIs have no compassion? Treating people as the statistical average...

https://www.statnews.com/2023/03/13/medicare-advantage-plans-denial-artificial-intelligence/

Denied by AI: How Medicare Advantage plans use algorithms to cut off care for seniors in need

An algorithm, not a doctor, predicted a rapid recovery for Frances Walter, an 85-year-old Wisconsin woman with a shattered left shoulder and an allergy to pain medicine. In 16.6 days, it estimated, she would be ready to leave her nursing home.

On the 17th day, her Medicare Advantage insurer, Security Health Plan, followed the algorithm and cut off payment for her care, concluding she was ready to return to the apartment where she lived alone. Meanwhile, medical notes in June 2019 showed Walter’s pain was maxing out the scales and that she could not dress herself, go to the bathroom, or even push a walker without help.

It would take more than a year for a federal judge to conclude the insurer’s decision was “at best, speculative” and that Walter was owed thousands of dollars for more than three weeks of treatment. While she fought the denial, she had to spend down her life savings and enroll in Medicaid just to progress to the point of putting on her shoes, her arm still in a sling.


