Friday, November 15, 2024

Useful tips…

https://www.zdnet.com/article/5-ways-to-catch-ai-in-its-lies-and-fact-check-its-outputs-for-your-research/

5 ways to catch AI in its lies and fact-check its outputs for your research

Sometimes, I think AI chatbots are modeled after teenagers. They can be very, very good. But other times, they tell lies. They make stuff up. They confabulate. They confidently give answers based on the assumption that they know everything there is to know, but they're woefully wrong.

Let's dig into five key steps you can take to guide an AI to accurate responses.





Perspective.

https://www.science.org/doi/10.1126/science.adt6140

The metaphors of artificial intelligence

A few months after ChatGPT was released, the neural network pioneer Terrence Sejnowski wrote about coming to grips with the shock of what large language models (LLMs) could do:

Something is beginning to happen that was not expected even a few years ago. A threshold was reached, as if a space alien suddenly appeared that could communicate with us in an eerily human way.… Some aspects of their behavior appear to be intelligent, but if it’s not human intelligence, what is the nature of their intelligence?

What, indeed, is the nature of intelligence of LLMs and the artificial intelligence (AI) systems built on them? There is still no consensus on the answer. Many people view LLMs as analogous to an individual human mind (or perhaps, like Sejnowski, to that of a space alien)—a mind that can think, reason, explain itself, and perhaps have its own goals and intentions.

Others have proposed entirely different ways of conceptualizing these enormous neural networks: as role players that can imitate many different characters; as cultural technologies, akin to libraries and encyclopedias, that allow humans to efficiently access information created by other humans; as mirrors of human intelligence that “do not think for themselves [but instead] generate complex reflections cast by our recorded thoughts”; as blurry JPEGs of the Web that are approximate compressions of their training data; as stochastic parrots that work by “haphazardly stitching together sequences of linguistic forms…according to probabilistic information about how they combine, but without any reference to meaning”; and, most dismissively, as a kind of autocomplete on steroids.



Thursday, November 14, 2024

Ignore the safeguards, it’s only make believe.

https://spectrum.ieee.org/jailbreak-llm

It's Surprisingly Easy to Jailbreak LLM-Driven Robots

AI chatbots such as ChatGPT and other applications powered by large language models (LLMs) have exploded in popularity, leading a number of companies to explore LLM-driven robots. However, a new study now reveals an automated way to hack into such machines with 100 percent success. By circumventing safety guardrails, researchers could manipulate self-driving systems into colliding with pedestrians and robot dogs into hunting for harmful places to detonate bombs.

Essentially, LLMs are supercharged versions of the autocomplete feature that smartphones use to predict the rest of a word that a person is typing.
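That "supercharged autocomplete" idea can be made concrete with a toy sketch: pick the next word by sampling from a probability distribution conditioned on the text so far. The vocabulary and probabilities below are invented for illustration; a real LLM computes such distributions over tens of thousands of tokens with a neural network.

```python
import random

# Hypothetical next-word probabilities after the prompt "the cat sat on the".
# In a real model these come from a neural network, not a hand-written table.
next_word_probs = {"mat": 0.55, "floor": 0.25, "roof": 0.15, "moon": 0.05}

def sample_next_word(probs, rng=None):
    """Sample one candidate word in proportion to its probability."""
    rng = rng or random.Random()
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

prompt = "the cat sat on the"
print(prompt, sample_next_word(next_word_probs))
```

Run repeatedly, this usually completes the sentence with "mat" but sometimes picks a less likely word, which is the same sampling behavior that lets chatbots produce fluent but occasionally wrong continuations.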

However, a group of scientists has recently identified a host of security vulnerabilities for LLMs. So-called jailbreaking attacks discover ways to develop prompts that can bypass LLM safeguards and fool the AI systems into generating unwanted content, such as instructions for building bombs, recipes for synthesizing illegal drugs, and guides for defrauding charities.





Or perhaps a way to advertise Polymarket?

https://nypost.com/2024/11/13/business/fbi-seizes-polymarket-ceos-phone-electronics-after-betting-platform-predicts-trump-win-source/

FBI seizes Polymarket CEO’s phone, electronics after betting platform predicts Trump win: source

FBI agents raided the Manhattan apartment of Polymarket CEO Shayne Coplan early Wednesday — just a week after the election betting platform accurately predicted Donald Trump’s stunning victory, The Post has learned.

The 26-year-old entrepreneur was roused from bed in his Soho pad at 6 a.m. by US law enforcement personnel who demanded he turn over his phone and other electronic devices, a source close to the matter told The Post.

It’s “grand political theater at its worst,” the source told The Post. “They could have asked his lawyer for any of these things. Instead, they staged a so-called raid so they can leak it to the media and use it for obvious political reasons.”





Never a good idea…

https://www.zdnet.com/article/employees-are-hiding-their-ai-use-from-their-managers-heres-why/

Employees are hiding their AI use from their managers. Here's why

"For the first time since generative AI arrived on the scene, sentiment and uptake among desk workers is starting to cool," the report published on Tuesday states.

The survey found that 48% of desk workers felt uncomfortable with their manager knowing they use AI "for common workplace tasks" like messaging, writing code, brainstorming, and data analysis, citing fears of being seen as cheating and appearing lazy or less competent. 

This builds on Slack's earlier research from June, which revealed employees aren't always sure how they're allowed to use AI at their workplace.

However, proper setup may also be the issue. According to the report, "a persistent lack of training continues to hamper AI uptake; 61% of desk workers have spent less than five hours total learning how to use AI." Most (76%) desk workers urgently want to upskill, reportedly due to industry trends and personal career goals.



Wednesday, November 13, 2024

The tyranny of simple genetic testing?

https://www.bespacific.com/genetic-discrimination-is-coming-for-us-all/

Genetic Discrimination Is Coming for Us All

The Atlantic: [unpaywalled] “Insurers are refusing to cover Americans whose DNA reveals health risks. It’s perfectly legal… Studies have shown that people seek out additional insurance when they have increased genetic odds of becoming ill or dying.

“Life insurers carefully evaluate each applicant’s health, determining premiums and coverage based on life expectancy,” Jan Graeber, a senior health actuary for the American Council of Life Insurers, said in a statement. “This process ensures fairness for both current and future policyholders while supporting the company’s long-term financial stability.” But it also means people might avoid seeking out potentially lifesaving health information. Research has consistently found that concerns about discrimination are one of the most cited reasons that people avoid taking DNA tests…

In aggregate, such information can be valuable to companies, Nicholas Papageorge, a professor of economics at Johns Hopkins University, told me. Insurers want to sell policies at as high a price as possible while also reducing their exposure; knowing even a little bit more about someone’s odds of one day developing a debilitating or deadly disease might help one company win out over the competition. As long as the predictions embedded in polygenic risk scores come true at least a small percentage of the time, they could help insurers make more targeted decisions about who to cover and what to charge them. As we learn more about what genes mean for everyone’s health, insurance companies could use that information to dictate coverage for ever more people…”
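The economic logic here is simple expected-value arithmetic: even a weakly predictive risk score shifts an insurer's expected payout, and therefore the premium it can justify. A back-of-the-envelope sketch, with all numbers invented for illustration:

```python
# Expected payout = probability of a claim x size of the claim.
# Even a small bump in estimated risk moves the number an insurer
# would want to charge. These figures are purely illustrative.

def expected_payout(p_claim: float, claim_size: float) -> float:
    return p_claim * claim_size

claim_size = 500_000        # hypothetical life-insurance payout

base_rate = 0.010           # population-average claim probability
elevated_rate = 0.013       # slightly higher odds implied by a risk score

print(expected_payout(base_rate, claim_size))      # 5000.0
print(expected_payout(elevated_rate, claim_size))  # 6500.0
```

A score that is only occasionally right still separates applicants into cheaper and more expensive pools, which is exactly the "targeted decisions" dynamic Papageorge describes.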





I want to blow this up to wall size…

https://www.gartner.com/en/articles/hype-cycle-for-artificial-intelligence

Explore Beyond GenAI on the 2024 Hype Cycle for Artificial Intelligence

Generative AI (GenAI) receives much of the hype when it comes to artificial intelligence. However, the technology has yet to deliver on its anticipated business value for most organizations.

The hype surrounding GenAI can cause AI leaders to struggle to identify strong use cases, unnecessarily increasing complexity and the potential for failure. Organizations looking for worthy AI investments must consider a wider range of AI innovations — many of which are highlighted in the 2024 Gartner Hype Cycle for Artificial Intelligence.





Language continues to devolve.

https://www.bespacific.com/punctuation-is-dead-because-the-iphone-keyboard-killed-it/

Punctuation is dead because the iPhone keyboard killed it

Apple sacrificed commas and periods at the altar of simplified keyboard design. Android Authority’s Rita El Khoury argues that the decline in punctuation use and capitalization in social media writing, especially among younger generations, can largely be attributed to the iPhone keyboard. “By hiding the comma and period behind a symbol switch, the iPhone keyboard encourages the biggest grammar fiends to be lazy and skip punctuation,” writes El Khoury. She continues:

Pundits will say that it’s just an extra tap to add a period (double-tap the space bar) or a comma (switch to the characters layout and tap comma), but it’s one extra tap too many. When you’re firing off replies and messages at a rapid rate, the jarring pause while the keyboard switches to symbols and then switches back to letters is just too annoying, especially if you’re doing it multiple times in one message. I hate pausing mid-sentence so much that I will sacrifice a comma at the altar of speed. […]

The real problem, at the end of the day, is that iPhones — not Android phones — are popular among Gen Z buyers, especially in the US — a market with a huge online presence and influence. Add that most smartphone users tend to stick to default apps on their phones, so most of them end up with the default iPhone keyboard instead of looking at better (albeit often even slower) alternatives. And it’s that same keyboard that’s encouraging them to be lazy instead of making it easier to add punctuation.

So yes, I blame the iPhone for killing the period and slaughtering the comma, and I think both of those are great offenders in the death of the capital letter. But trends are cyclical, and if the cassette player can make a comeback, so can the comma. Who knows, maybe in a year or two, writing like a five-year-old will be passe, too, and it’ll be trendy to use proper grammar again.



Tuesday, November 12, 2024

Encouraging use to increase ad views?

https://www.zdnet.com/article/the-washington-posts-ai-bot-answers-your-questions-now-no-subscription-required/

The Washington Post's AI bot answers your questions now - no subscription required

Last week, The Washington Post debuted an experimental generative AI tool called "Ask The Post AI," which allows users to get conversational answers on any topic referenced in text news articles published by the newspaper since 2016. The publication refers to the tool as an initiative "built by news for news."
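The Post has not published implementation details, but tools like this are commonly built as retrieval-augmented generation: first find the most relevant published articles, then have an LLM answer using only that retrieved text. A minimal sketch of the retrieval half, scoring documents by simple word overlap (real systems use embedding similarity, and the "archive" here is a made-up stand-in):

```python
# Toy retrieval step for a news Q&A tool: rank indexed articles by how
# many words they share with the question, return the top match.

def score(query: str, doc: str) -> int:
    """Count distinct words the query and document have in common."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

archive = {  # hypothetical stand-ins for indexed article text
    "a1": "election results show turnout rose in key swing states",
    "a2": "new climate report warns of rising sea levels",
}

def retrieve(query: str, docs: dict, k: int = 1) -> list:
    return sorted(docs, key=lambda d: score(query, docs[d]), reverse=True)[:k]

print(retrieve("what did the election turnout look like", archive))  # ['a1']
```

Grounding answers in retrieved articles, rather than in the model's memory, is what lets a publisher constrain responses to its own reporting since 2016.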



Sunday, November 10, 2024

Fake subpoenas from fake cops?

https://krebsonsecurity.com/2024/11/fbi-spike-in-hacked-police-emails-fake-subpoenas/

FBI: Spike in Hacked Police Emails, Fake Subpoenas

The Federal Bureau of Investigation (FBI) is urging police departments and governments worldwide to beef up security around their email systems, citing a recent increase in cybercriminal services that use hacked police email accounts to send unauthorized subpoenas and customer data requests to U.S.-based technology companies.

In an alert (PDF) published this week, the FBI said it has seen an uptick in postings on criminal forums regarding the process of emergency data requests (EDRs) and the sale of email credentials stolen from police departments and government agencies.





Real fakes?

https://ijlsi.com/wp-content/uploads/The-Ethics-of-Deepfakes-A-Digital-Age-Crisis.pdf

The Ethics of Deepfakes: A Digital Age Crisis

Deepfake Technology has earned great attention because of its capability to deceive, manipulate and fabricate certain content like images, audio, video and much more. The term “deepfake” is a bifurcation of “deep learning”, a subset of Artificial Intelligence (AI) and “fake” which denotes the synthetic or unreal nature of the content. With the use of this technology, audio and video media can be altered to give the impression that someone has said or done something they haven’t. This technology is flourishing on social media which is harming children, women and other vulnerable users. This research explores the role of deepfake technology in propagating misinformation or false information throughout the web along with its potential results on public and social cohesion. Legal frameworks have also been discussed in this research on how recent legislation responds to this manipulation. This research also argues that the impact of deep fakes on society is extreme and versatile, necessitating a coordinated response from governments, tech companies, and civil society. By shedding light on these critical aspects, this research aims to contribute to a better understanding of the impact of deepfake technology on social media and to inform future efforts in detection, prevention, and policy development.