Wednesday, November 20, 2024

Are hallucinations by AI worse than hallucinations by humans?

https://www.bespacific.com/artificial-intelligence-and-constitutional-interpretation/

Artificial Intelligence and Constitutional Interpretation

Coan, Andrew and Surden, Harry, Artificial Intelligence and Constitutional Interpretation (November 12, 2024). Arizona Legal Studies Discussion Paper No. 24-30, U of Colorado Law Legal Studies Research Paper No. 24-39, Available at SSRN: https://ssrn.com/abstract=5018779 or http://dx.doi.org/10.2139/ssrn.5018779

This Article examines the potential use of large language models (LLMs) like ChatGPT in constitutional interpretation. LLMs are extremely powerful tools, with significant potential to improve the quality and efficiency of constitutional analysis. But their outputs are highly sensitive to variations in prompts and counterarguments, illustrating the importance of human framing choices. As a result, using LLMs for constitutional interpretation implicates substantially the same theoretical issues that confront human interpreters. Two key implications emerge: First, it is crucial to attend carefully to particular use cases and institutional contexts. Relatedly, judges and lawyers must develop “AI literacy” to use LLMs responsibly. Second, there is no avoiding the burdens of judgment. For any given task, LLMs may be better or worse than humans, but the choice of whether and how to use them is itself a judgment requiring normative justification.





An old complaint that has been solved by most organizations...

https://www.theregister.com/2024/11/20/data_is_the_new_uranium/

Data is the new uranium – incredibly powerful and amazingly dangerous

CISOs are quietly wishing they had less data, because the cost of management sometimes exceeds its value

I recently got to play a 'fly on the wall' at a roundtable of chief information security officers. Beyond the expected griping and moaning about funding shortfalls and always-too-gullible users, I began to hear a new note: data has become a problem.

A generation ago we had hardly any data at all. In 2003 I took a tour of a new all-digital 'library' – the Australian Centre for the Moving Image (ACMI) – and marveled at its single petabyte of online storage. I'd never seen so much, and it pointed toward a future where we would all have all the storage capacity we ever needed.

That day arrived not many years later, when Amazon's S3 quickly made scale a non-issue. Today, plenty of enterprises manage multiple petabytes of storage, and we think nothing of moving a terabyte across the network or generating a few gigabytes of new media during a working day. Data is so common it has become nearly invisible.

Unless you're a CISO. For them, more data means more problems, because it's stored in so many systems. Most security execs know they have pools of data all over the place, that marketing departments have built massive data-gathering and analytics engines into all customer-facing systems, and that those systems acquire more data every day.





Keep America stupid? Why not learn to use the new tools?

https://www.bostonglobe.com/2024/11/15/opinion/ai-classroom-teaching-writing/

AI in the classroom could spare educators from having to teach writing

Of all the skills I teach my high school students, I’ve always thought writing was the most important — essential to their future academic success, useful in any profession. I’m no longer so sure.

Thanks to AI, writing’s place in the curriculum today is like that of arithmetic at the dawn of cheap and widely available calculators. The skills we currently think are essential — spelling, punctuation, subject-predicate agreement — may soon become superfluous, and schools will have to adapt.

But writing takes a lot of time to do well, and time is the most precious resource in education. Longer writing assignments, like essays or research papers, may no longer be the best use of it. In the workplace, it is becoming increasingly common for AI to write the first draft of any long-form document. More than half of professional workers used AI on the job in 2023, according to one study, and of those who used AI, 68 percent were using it to draft written content. Refining AI’s draft — making sure it conveys what is intended — becomes the real work. From a business perspective, this is an efficient division of labor: Humans come up with the question, AI answers it, and humans polish the AI output.

In schools, the same process is called cheating.



(Related)

https://techcrunch.com/2024/11/20/openai-releases-a-teachers-guide-to-chatgpt-but-some-educators-are-skeptical/

OpenAI releases a teacher’s guide to ChatGPT, but some educators are skeptical

OpenAI envisions teachers using its AI-powered tools to create lesson plans and interactive tutorials for students. But some educators are wary of the technology — and its potential to go awry.

Today, OpenAI released a free online course designed to help K-12 teachers learn how to bring ChatGPT, the company’s AI chatbot platform, into their classrooms. Created in collaboration with the nonprofit organization Common Sense Media, with which OpenAI has an active partnership, the one-hour, nine-module program covers the basics of AI and its pedagogical applications.



Tuesday, November 19, 2024

Let AI do the thinking?

https://www.bespacific.com/the-death-of-search/

The Death of Search

The Atlantic [unpaywalled]: AI is transforming how billions navigate the web. A lot will be lost in the process. “…Although ChatGPT and Perplexity and Google AI Overviews cite their sources with (small) footnotes or bars to click on, not clicking on those links is the entire point. OpenAI, in its announcement of its new search feature, wrote that “getting useful answers on the web can take a lot of effort. It often requires multiple searches and digging through links to find quality sources and the right information for you. Now, chat can get you to a better answer.” Google’s pitch is that its AI “will do the Googling for you.” Perplexity’s chief business officer told me this summer that “people don’t come to Perplexity to consume journalism,” and that the AI tool will provide less traffic than traditional search. For curious users, Perplexity suggests follow-up questions so that, instead of opening a footnote, you keep reading in Perplexity. The change will be the equivalent of going from navigating a library with the Dewey decimal system, and thus encountering related books on adjacent shelves, to requesting books for pickup through a digital catalog. It could completely reorient our relationship to knowledge, prioritizing rapid, detailed, abridged answers over a deep understanding and the consideration of varied sources and viewpoints. Much of what’s beautiful about searching the internet is jumping into ridiculous Reddit debates and developing unforeseen obsessions on the way to mastering a topic you’d first heard of six hours ago, via a different search; falling into clutter and treasure, all the time, without ever intending to. AI search may close off these avenues to not only discovery but its impetus, curiosity…”





A response to the US authorizing Ukraine’s use of long-range weapons, or the start of something larger?

https://www.cnn.com/2024/11/18/europe/undersea-cable-disrupted-germany-finland-intl/

Two undersea cables in Baltic Sea disrupted, sparking warnings of possible ‘hybrid warfare’

Two undersea internet cables in the Baltic Sea have been suddenly disrupted, according to local telecommunications companies, amid fresh warnings of possible Russian interference with global undersea infrastructure.

A communications cable between Lithuania and Sweden was cut on Sunday morning around 10:00 a.m. local time, a spokesperson from telecommunications company Telia Lithuania confirmed to CNN.

Another cable linking Finland and Germany was also disrupted, according to Cinia, the state-controlled Finnish company that runs the link. The C-Lion cable – the only direct connection of its kind between Finland and Central Europe – spans nearly 1,200 kilometers (730 miles), alongside other key pieces of infrastructure, including gas pipelines and power cables.

The area that was disrupted along the Finnish-German cable is roughly 60 to 65 miles away from the Lithuanian-Swedish cable that was cut, a CNN analysis of the undersea routes shows.





Civil defense: we don’t do that any more, do we?

https://www.theregister.com/2024/11/18/sweden_updates_war_guide/

Sweden's 'Doomsday Prep for Dummies' guide hits mailboxes today

Residents of Sweden are to receive a handy new guide this week that details how to prepare for various types of crisis situations or wartime should geopolitical events threaten the country.

The "If crisis or war comes" [PDF] guide received its first update in six years and its distribution to every Swedish household begins today. Citing factors such as war, terrorism, cyberattacks, and increasingly extreme weather events, the 32-page guide was commissioned by the government and calls for unity to secure the country's independence.



Monday, November 18, 2024

 Is this the best source for training AI?

https://archive.is/TmYqM#selection-905.16-913.25

The Hollywood AI Database

I can now say with absolute confidence that many AI systems have been trained on TV and film writers’ work. Not just on The Godfather and Alf, but on more than 53,000 other movies and 85,000 other TV episodes: Dialogue from all of it is included in an AI-training data set that has been used by Apple, Anthropic, Meta, Nvidia, Salesforce, Bloomberg, and other companies. I recently downloaded this data set, which I saw referenced in papers about the development of various large language models (or LLMs). It includes writing from every film nominated for Best Picture from 1950 to 2016, at least 616 episodes of The Simpsons, 170 episodes of Seinfeld, 45 episodes of Twin Peaks, and every episode of The Wire, The Sopranos, and Breaking Bad. It even includes prewritten “live” dialogue from Golden Globes and Academy Awards broadcasts. If a chatbot can mimic a crime-show mobster or a sitcom alien—or, more pressingly, if it can piece together whole shows that might otherwise require a room of writers—data like this are part of the reason why.





Those who do not study history are doomed to repeat it?

https://timesofindia.indiatimes.com/world/rest-of-world/when-machines-took-over-ais-sarcastic-take-on-industrial-revolution/articleshow/115399605.cms

When machines took over: AI’s sarcastic take on industrial revolution



Sunday, November 17, 2024

Perspective.

https://ieeexplore.ieee.org/abstract/document/10747739

From artificial intelligence to artificial mind: A paradigm shift

Considering the development of artificial intelligence (AI) in various fields, especially the closeness of its function to the human brain in terms of perception and understanding of sensory and emotional concepts, it can be concluded that this concept is cognitively evolving toward an artificial mind (AM). This article introduces the concept of AM as a more accurate interpretation of the future of AI. It explores the distinction between intelligence and mind, highlighting the holistic nature of the mind, which includes cognitive, psychological, and emotional dimensions. Various types of intelligence, from rational to emotional, are categorized to emphasize their role in shaping human abilities. The study evaluates the human mind, focusing on cognitive functions, logical thinking, emotional understanding, learning, and creativity. It encourages AI systems to understand contextual, emotional, and subjective aspects and aligns AI with human intelligence through advanced perception and emotional capabilities. The shift from AI to AM has significant implications, transforming work, education, and human-machine collaboration, and promises a future where AI systems integrate advanced perceptual and emotional functions. This narrative guides the conversation around AI terminology, emphasizing the convergence of artificial and human intelligence and acknowledging the social implications. Therefore, the term “artificial mind” appears as a more appropriate term than “artificial intelligence”, symbolizing the transformative technological change and its multifaceted impact on society.





Extermination by stress? I doubt it.

https://www.nature.com/articles/s41599-024-04018-w

The mental health implications of artificial intelligence adoption: the crucial role of self-efficacy

The rapid adoption of artificial intelligence (AI) in organizations has transformed the nature of work, presenting both opportunities and challenges for employees. This study utilizes several theories to investigate the relationships between AI adoption, job stress, burnout, and self-efficacy in AI learning. A three-wave time-lagged research design was used to collect data from 416 professionals in South Korea. Structural equation modeling was used to test the proposed mediation and moderation hypotheses. The results reveal that AI adoption does not directly influence employee burnout but exerts its impact through the mediating role of job stress. The results also show that AI adoption significantly increases job stress, thus increasing burnout. Furthermore, self-efficacy in AI learning was found to moderate the relationship between AI adoption and job stress, with higher self-efficacy weakening the positive relationship. These findings highlight the importance of considering the mediating and moderating mechanisms that shape employee experiences in the context of AI adoption. The results also suggest that organizations should proactively address the potential negative impact of AI adoption on employee well-being by implementing strategies to manage job stress and foster self-efficacy in AI learning. This study underscores the need for a human-centric approach to AI adoption that prioritizes employee well-being alongside technological advancement. Future research should explore additional factors that may influence the relationships between AI adoption, job stress, burnout, and self-efficacy across diverse contexts to inform the development of evidence-based strategies for supporting employees in AI-driven workplaces.
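The mediation and moderation structure the abstract describes is easy to sketch in code. Below is a minimal illustration on synthetic data, with plain OLS regressions standing in for the paper’s full structural equation model; the variable names, coefficients, and effect sizes are all invented for the sketch, not taken from the study.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the study's design: AI adoption -> job stress
# -> burnout, with self-efficacy moderating the adoption -> stress path.
rng = np.random.default_rng(0)
n = 416  # matches the study's sample size; the data are simulated
adoption = rng.normal(size=n)
efficacy = rng.normal(size=n)
stress = 0.5 * adoption - 0.3 * adoption * efficacy + rng.normal(size=n)
burnout = 0.6 * stress + rng.normal(size=n)
df = pd.DataFrame({"adoption": adoption, "efficacy": efficacy,
                   "stress": stress, "burnout": burnout})

# Moderation: a negative adoption:efficacy coefficient means higher
# self-efficacy weakens the adoption -> stress relationship.
m1 = smf.ols("stress ~ adoption * efficacy", data=df).fit()

# Mediation: stress should carry the effect on burnout, leaving
# adoption with little direct effect once stress is controlled for.
m2 = smf.ols("burnout ~ stress + adoption", data=df).fit()

print(m1.params)
print(m2.params)

In the study’s terms, a significant negative interaction in the first model is the moderation finding, and a stress coefficient that absorbs the adoption effect in the second is the mediation finding.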



 

Friday, November 15, 2024

Useful tips…

https://www.zdnet.com/article/5-ways-to-catch-ai-in-its-lies-and-fact-check-its-outputs-for-your-research/

5 ways to catch AI in its lies and fact-check its outputs for your research

Sometimes, I think AI chatbots are modeled after teenagers. They can be very, very good. But other times, they tell lies. They make stuff up. They confabulate. They confidently give answers based on the assumption that they know everything there is to know, but they're woefully wrong.

Let's dig into five key steps you can take to guide an AI to accurate responses.





Perspective.

https://www.science.org/doi/10.1126/science.adt6140

The metaphors of artificial intelligence

A few months after ChatGPT was released, the neural network pioneer Terrence Sejnowski wrote about coming to grips with the shock of what large language models (LLMs) could do:

Something is beginning to happen that was not expected even a few years ago. A threshold was reached, as if a space alien suddenly appeared that could communicate with us in an eerily human way.… Some aspects of their behavior appear to be intelligent, but if it’s not human intelligence, what is the nature of their intelligence?

What, indeed, is the nature of intelligence of LLMs and the artificial intelligence (AI) systems built on them? There is still no consensus on the answer. Many people view LLMs as analogous to an individual human mind (or perhaps, like Sejnowski, to that of a space alien)—a mind that can think, reason, explain itself, and perhaps have its own goals and intentions.

Others have proposed entirely different ways of conceptualizing these enormous neural networks: as role players that can imitate many different characters; as cultural technologies, akin to libraries and encyclopedias, that allow humans to efficiently access information created by other humans; as mirrors of human intelligence that “do not think for themselves [but instead] generate complex reflections cast by our recorded thoughts”; as blurry JPEGs of the Web that are approximate compressions of their training data; as stochastic parrots that work by “haphazardly stitching together sequences of linguistic forms…according to probabilistic information about how they combine, but without any reference to meaning”; and, most dismissively, as a kind of autocomplete on steroids.



Thursday, November 14, 2024

Ignore the safeguards; it’s only make-believe.

https://spectrum.ieee.org/jailbreak-llm

It's Surprisingly Easy to Jailbreak LLM-Driven Robots

AI chatbots such as ChatGPT and other applications powered by large language models (LLMs) have exploded in popularity, leading a number of companies to explore LLM-driven robots. However, a new study now reveals an automated way to hack into such machines with 100 percent success. By circumventing safety guardrails, researchers could manipulate self-driving systems into colliding with pedestrians and robot dogs into hunting for harmful places to detonate bombs.

Essentially, LLMs are supercharged versions of the autocomplete feature that smartphones use to predict the rest of a word that a person is typing.
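That autocomplete framing is easy to make concrete. Here is a toy next-word predictor in Python, a minimal sketch of the generation loop; real LLMs compute these probabilities with neural networks trained on vast corpora and operate on sub-word tokens rather than whole words, and the tiny probability table below is invented purely for illustration.

import random

# Hand-written stand-in for a language model: P(next word | current word).
bigram = {
    "the":   {"robot": 0.5, "dog": 0.5},
    "robot": {"walks": 0.7, "stops": 0.3},
    "dog":   {"barks": 0.8, "sleeps": 0.2},
}

def generate(word, steps=3):
    out = [word]
    for _ in range(steps):
        dist = bigram.get(word)
        if dist is None:  # no known continuation: stop generating
            break
        words, probs = zip(*dist.items())
        word = random.choices(words, weights=probs)[0]  # sample next word
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the robot walks"

An LLM’s generation loop has the same shape, just with a distribution computed by a neural network over tens of thousands of tokens instead of a three-entry table, which is part of why a cleverly framed prompt can steer the output somewhere its designers did not intend.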

However, a group of scientists has recently identified a host of security vulnerabilities for LLMs. So-called jailbreaking attacks discover ways to develop prompts that can bypass LLM safeguards and fool the AI systems into generating unwanted content, such as instructions for building bombs, recipes for synthesizing illegal drugs, and guides for defrauding charities.





Or perhaps a way to advertise Polymarket?

https://nypost.com/2024/11/13/business/fbi-seizes-polymarket-ceos-phone-electronics-after-betting-platform-predicts-trump-win-source/

FBI seizes Polymarket CEO’s phone, electronics after betting platform predicts Trump win: source

FBI agents raided the Manhattan apartment of Polymarket CEO Shayne Coplan early Wednesday — just a week after the election betting platform accurately predicted Donald Trump’s stunning victory, The Post has learned.

The 26-year-old entrepreneur was roused from bed in his Soho pad at 6 a.m. by US law enforcement personnel who demanded he turn over his phone and other electronic devices, a source close to the matter told The Post.

It’s “grand political theater at its worst,” the source told The Post. “They could have asked his lawyer for any of these things. Instead, they staged a so-called raid so they can leak it to the media and use it for obvious political reasons.”





Never a good idea…

https://www.zdnet.com/article/employees-are-hiding-their-ai-use-from-their-managers-heres-why/

Employees are hiding their AI use from their managers. Here's why

"For the first time since generative AI arrived on the scene, sentiment and uptake among desk workers is starting to cool," the report published on Tuesday states.

The survey found that 48% of desk workers felt uncomfortable with their manager knowing they use AI "for common workplace tasks" like messaging, writing code, brainstorming, and data analysis, citing fears of being seen as cheating and appearing lazy or less competent. 

This builds on Slack's earlier research from June, which revealed employees aren't always sure how they're allowed to use AI at their workplace.

However, inadequate preparation may also be an issue. According to the report, "a persistent lack of training continues to hamper AI uptake; 61% of desk workers have spent less than five hours total learning how to use AI." Most (76%) desk workers urgently want to upskill, reportedly due to industry trends and personal career goals.



Wednesday, November 13, 2024

The tyranny of simple genetic testing?

https://www.bespacific.com/genetic-discrimination-is-coming-for-us-all/

Genetic Discrimination Is Coming for Us All

The Atlantic: [unpaywalled] “Insurers are refusing to cover Americans whose DNA reveals health risks. It’s perfectly legal… Studies have shown that people seek out additional insurance when they have increased genetic odds of becoming ill or dying. “Life insurers carefully evaluate each applicant’s health, determining premiums and coverage based on life expectancy,” Jan Graeber, a senior health actuary for the American Council of Life Insurers, said in a statement. “This process ensures fairness for both current and future policyholders while supporting the company’s long-term financial stability.” But it also means people might avoid seeking out potentially lifesaving health information. Research has consistently found that concerns about discrimination are one of the most cited reasons that people avoid taking DNA tests… In aggregate, such information can be valuable to companies, Nicholas Papageorge, a professor of economics at Johns Hopkins University, told me. Insurers want to sell policies at as high a price as possible while also reducing their exposure; knowing even a little bit more about someone’s odds of one day developing a debilitating or deadly disease might help one company win out over the competition. As long as the predictions embedded in polygenic risk scores come true at least a small percentage of the time, they could help insurers make more targeted decisions about who to cover and what to charge them. As we learn more about what genes mean for everyone’s health, insurance companies could use that information to dictate coverage for ever more people…”
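The pricing incentive Papageorge describes is simple expected-value arithmetic. A toy sketch in Python (every number here is invented for illustration; real actuarial models are far more involved):

# Toy premium calculation: how a genetic risk score could shift pricing.
base_rate = 0.010        # assumed annual probability of a claim
relative_risk = 1.5      # assumed extra risk implied by a high score
payout = 500_000         # face value of the policy
loading = 1.2            # assumed margin for overhead and profit

def premium(claim_prob: float) -> float:
    return claim_prob * payout * loading

print(f"standard applicant:   ${premium(base_rate):,.0f}")
print(f"high-score applicant: ${premium(base_rate * relative_risk):,.0f}")
# Output: $6,000 vs. $9,000. Even a noisy score that is right only part
# of the time shifts the expected cost, and therefore the quoted price,
# for everyone it flags.

That is the aggregate effect the article warns about: the score does not need to be accurate for any individual, only informative on average.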





I want to blow this up to wall size…

https://www.gartner.com/en/articles/hype-cycle-for-artificial-intelligence

Explore Beyond GenAI on the 2024 Hype Cycle for Artificial Intelligence

Generative AI (GenAI) receives much of the hype when it comes to artificial intelligence. However, the technology has yet to deliver on its anticipated business value for most organizations.

The hype surrounding GenAI can cause AI leaders to struggle to identify strong use cases, unnecessarily increasing complexity and the potential for failure. Organizations looking for worthy AI investments must consider a wider range of AI innovations — many of which are highlighted in the 2024 Gartner Hype Cycle for Artificial Intelligence.





Language continues to devolve.

https://www.bespacific.com/punctuation-is-dead-because-the-iphone-keyboard-killed-it/

Punctuation is dead because the iPhone keyboard killed it

Apple sacrificed commas and periods at the altar of simplified keyboard design. Android Authority’s Rita El Khoury argues that the decline in punctuation use and capitalization in social media writing, especially among younger generations, can largely be attributed to the iPhone keyboard. “By hiding the comma and period behind a symbol switch, the iPhone keyboard encourages the biggest grammar fiends to be lazy and skip punctuation,” writes El Khoury. She continues: Pundits will say that it’s just an extra tap to add a period (double-tap the space bar) or a comma (switch to the characters layout and tap comma), but it’s one extra tap too many. When you’re firing off replies and messages at a rapid rate, the jarring pause while the keyboard switches to symbols and then switches back to letters is just too annoying, especially if you’re doing it multiple times in one message. I hate pausing mid-sentence so much that I will sacrifice a comma at the altar of speed. […] The real problem, at the end of the day, is that iPhones — not Android phones — are popular among Gen Z buyers, especially in the US — a market with a huge online presence and influence. Add that most smartphone users tend to stick to default apps on their phones, so most of them end up with the default iPhone keyboard instead of looking at better (albeit often even slower) alternatives. And it’s that same keyboard that’s encouraging them to be lazy instead of making it easier to add punctuation. So yes, I blame the iPhone for killing the period and slaughtering the comma, and I think both of those are great offenders in the death of the capital letter. But trends are cyclical, and if the cassette player can make a comeback, so can the comma. Who knows, maybe in a year or two, writing like a five-year-old will be passé, too, and it’ll be trendy to use proper grammar again.”