Thursday, July 06, 2023

Is this a license for journalistic surveillance?

https://www.pogowasright.org/law-against-secretly-recording-public-conversations-is-unconstitutional-ninth-circuit-rules/

Law Against Secretly Recording Public Conversations Is Unconstitutional, Ninth Circuit Rules

Avalon Zoppo reports:

The U.S. Court of Appeals for the Ninth Circuit on Monday struck down as unconstitutional an Oregon wiretapping law that bars secretly taping in-person conversations in public spaces, with a dissenting judge citing the rise of generative artificial intelligence “deepfakes” in support of a person’s right to have notice before being recorded.
The decision revives a lawsuit from Project Veritas, a conservative undercover media organization that claimed in a 2020 complaint that the law violated the First Amendment right to newsgathering. The group said the statute’s exceptions—one allowing the recording of life-endangering felonies and another the recording of police officers—favor recording some government officials over others.

Read more at Law.com.





This could be interesting, or maybe impossible.

https://www.nbcnews.com/tech/tech-news/nyc-companies-will-prove-ai-hiring-software-isnt-sexist-racist-rcna92336

In NYC, companies will have to prove their AI hiring software isn't sexist or racist

A new law, which takes effect Wednesday, is believed to be the first of its kind in the world. Under New York’s new rule, hiring software that relies on machine learning or artificial intelligence to help employers choose preferred candidates or weed out bad ones — called an automatic employment decision tool, or AEDT — must pass an audit by a third-party company to show it’s free of racist or sexist bias.

Companies that run AI hiring software must also publish those results. Businesses that use third-party AEDT software can no longer legally use such programs if they haven’t been audited.





Perspective.

https://www.bespacific.com/how-chat-based-large-language-models-replicate-the-mechanisms-of-a-psychics-con/

The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con

Out of the Software Crisis – Baldur Bjarnason: “For the past year or so I’ve been spending most of my time researching the use of language and diffusion models in software businesses. One of the issues during this research—one that has perplexed me—has been that many people are convinced that language models, or specifically chat-based language models, are intelligent. But there isn’t any mechanism inherent in large language models (LLMs) that would seem to enable this and, if real, it would be completely unexplained. LLMs are not brains and do not meaningfully share any of the mechanisms that animals or people use to reason or think. LLMs are a mathematical model of language tokens. You give an LLM text, and it will give you a mathematically plausible response to that text. There is no reason to believe that it thinks or reasons—indeed, every AI researcher and vendor to date has repeatedly emphasised that these models don’t think. There are two possible explanations for this effect:

  1. The tech industry has accidentally invented the initial stages of a completely new kind of mind, based on completely unknown principles, using completely unknown processes that have no parallel in the biological world.
  2. The intelligence illusion is in the mind of the user and not in the LLM itself.

Many AI critics, including myself, are firmly in the second camp. It’s why I titled my book on the risks of generative “AI” The Intelligence Illusion. For the past couple of months, I’ve been working on an idea that I think explains the mechanism of this intelligence illusion. I now believe that there is even less intelligence and reasoning in these LLMs than I thought before. Many of the proposed use cases now look like borderline fraudulent pseudoscience to me…”
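Bjarnason’s point that an LLM gives you “a mathematically plausible response” rather than a reasoned one can be illustrated with a toy sketch. The bigram model and corpus below are invented for illustration — real LLMs use vastly larger neural networks over enormous vocabularies — but the underlying idea is the same: a probability distribution over next tokens, with no thinking involved.

```python
import random
from collections import defaultdict

# Tiny invented corpus; real models train on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows which (a bigram model).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev, rng=random):
    """Sample the next token in proportion to observed frequency."""
    options = counts[prev]
    tokens = list(options)
    weights = [options[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# After "the", the model emits "cat", "mat", or "fish" -- weighted
# by how often each followed "the" in the corpus. Statistically
# plausible continuation, no reasoning anywhere in the mechanism.
print(next_token("the"))
```

The illusion Bjarnason describes arises when a reader attributes the plausibility of the output to a mind behind it, rather than to the statistics of the training text.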





Perspective.

https://venturebeat.com/ai/gartner-survey-most-corporate-strategists-find-ai-and-analytics-critical-to-success/

Gartner survey: Most corporate strategists find AI and analytics critical to success

A new survey conducted by Gartner has revealed that as many as 79% of global corporate strategists see AI, analytics and automation as critical drivers for success over the next two years.

Conducted between October 2022 and April 2023, the poll highlights enterprises’ growing focus on next-gen technologies. Now, companies are looking at advanced systems not only to handle repetitive or basic tasks but also high-value projects directly related to business growth, such as strategic planning and decision-making.





Resource.

https://mashable.com/uk/deals/ethical-hacking-free-courses

10 of the best ethical hacking courses you can take online for free

Ethical hacking is the practice of learning the skills of a hacker, but using those skills to highlight system vulnerabilities and implement robust cybersecurity protocols. It's like fighting fire with fire. By understanding the ways of a hacker, you can stay one step ahead of the bad guys.


