Saturday, November 23, 2024

In case of war, turn off the enemy’s lights. (and heat and water etc.)

https://www.reuters.com/technology/cybersecurity/chinese-hackers-preparing-conflict-says-us-cyber-official-2024-11-22/

Chinese hackers preparing for conflict, US cyber official says

Officials have warned that China-linked hackers have compromised IT networks and taken steps to carry out disruptive attacks in the event of a conflict. Their activities include gaining access to key networks to enable potential disruptions such as manipulating heating, ventilation and air-conditioning systems in server rooms, or disrupting critical energy and water controls, U.S. officials said earlier this year.





Perhaps AI has just become a better hallucinator?

https://www.zdnet.com/article/ai-isnt-hitting-a-wall-its-just-getting-too-smart-for-benchmarks-says-anthropic/

AI isn't hitting a wall, it's just getting too smart for benchmarks, says Anthropic

As their self-correction and self-reasoning improve, the latest LLMs find new capabilities at a rate that makes it harder to measure everything they can do.
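
A rough way to see the measurement problem: once frontier models cluster near a benchmark’s ceiling, their score differences sit inside sampling noise. A back-of-the-envelope check (the benchmark size and scores below are hypothetical):

```python
import math

def accuracy_se(p: float, n: int) -> float:
    """Standard error of an accuracy estimate on an n-question benchmark."""
    return math.sqrt(p * (1 - p) / n)

n = 500              # hypothetical benchmark size
a, b = 0.97, 0.98    # hypothetical scores for two frontier models

se_gap = math.sqrt(accuracy_se(a, n) ** 2 + accuracy_se(b, n) ** 2)
z = (b - a) / se_gap
print(f"gap = {b - a:.1%}, z = {z:.2f}")  # z is about 1.0: well inside the noise
```

At a z-score of roughly 1, a full percentage point of “improvement” is statistically indistinguishable from luck, which is one reason saturated benchmarks stop registering new capabilities.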





Perspective.

https://blogs.microsoft.com/on-the-issues/2024/11/21/the-3-8-trillion-opportunity-unlocking-the-economic-potential-of-the-us-generative-ai-ecosystem/

The $3.8 Trillion Opportunity: Unlocking the Economic Potential of the US Generative AI Ecosystem

To better understand this transformational potential, Microsoft and Accenture partnered to conduct an analysis of the AI opportunity in the US. Today, we are releasing a new paper that provides an overview of the current state of gen AI, its economic potential in the US, and the contours of its ecosystem, including the role that partnerships play in driving growth, innovation, and, ultimately, the broader adoption and diffusion of gen AI technology. The AI landscape is evolving so rapidly that we do not expect this paper to provide all the answers, but rather we see it as a catalyst and contribution to the dialogue as we move forward.  



Friday, November 22, 2024

I resemble that remark.

https://www.audacy.com/kywnewsradio/news/local/university-of-pennsylvania-expert-values-driven-artificial-intelligence

Shaping artificial intelligence: Penn expert calls for values-driven technology

The artificial intelligence of tomorrow depends on what we put into it today: our values, our priorities, our humanity. A researcher at the University of Pennsylvania says we should take deliberate action to ensure AI serves society — not just the bottom line.

Think about phone numbers. You remember your own. Probably the number to your parents’ house. But how many more of your family and close friends can you dial by heart? And how many digits will forever be known only to your phone’s contacts app?

"That is one illustration of the fact that we voluntarily delegate ever more to our devices in whichever form they may come," says Dr. Cornelia Walther, senior fellow at Penn.

She's making the point that — well, I’ll let her finish the thought.

"Let’s face it: The human being is lazy," she says.





Perspective.

https://fpf.org/blog/fpf-unveils-report-on-the-anatomy-of-state-comprehensive-privacy-law/

FPF Unveils Report on the Anatomy of State Comprehensive Privacy Law

Today, the Future of Privacy Forum (FPF) launched a new report—Anatomy of State Comprehensive Privacy Law: Surveying the State Privacy Law Landscape and Recent Legislative Trends.

https://fpf.org/wp-content/uploads/2024/11/REPORT-Anatomy-of-State-Comprehensive-Privacy-Law.pdf





Interesting.

https://hai.stanford.edu/news/global-ai-power-rankings-stanford-hai-tool-ranks-36-countries-ai

Global AI Power Rankings: Stanford HAI Tool Ranks 36 Countries in AI

The U.S. has the world’s most robust AI ecosystem and outperforms every other country by significant margins.

In recent years, there has been much focus on how the U.S. compares to China in AI. This tool indicates that while the two superpowers used to be competitors, the U.S. is quickly pulling away.





Follow-up. We don’t need AI rules?

https://www.reuters.com/world/us/massachusetts-students-punishment-ai-use-can-stand-us-judge-rules-2024-11-21/

Massachusetts student's punishment for AI use can stand, US judge rules

A federal judge has rejected a bid by the parents of a Massachusetts high school senior to force his school to expunge his disciplinary record and raise his history class grade after officials accused him of using an artificial intelligence program to cheat on a class assignment.

U.S. Magistrate Judge Paul Levenson in Boston on Wednesday ruled that officials at Hingham High School reasonably concluded that the use of the AI tool by Jennifer and Dale Harris' son to complete a class project violated academic integrity rules.





Is this an argument that AI would use?

https://theconversation.com/ai-could-soon-be-making-major-scientific-discoveries-a-machine-could-even-win-a-nobel-prize-one-day-243996

AI could soon be making major scientific discoveries. A machine could even win a Nobel Prize one day

It may sound strange, but future Nobel Prizes, and other scientific achievement awards, one day might well be given out to intelligent machines. It could come down just to technicalities and legalities.

What should we draw from the use of the term “person” in Alfred Nobel’s will? The Nobel peace prize can be awarded to institutions and associations, so could it include other non-human entities, such as an AI system?

Whether an AI is entitled to legal personhood is one important question in all this. Another is whether intelligent machines can make scientific contributions worthy of one of Nobel’s prestigious prizes.

I do not consider either condition to be impossible and I am not alone. A group of scientists at the UK’s Alan Turing Institute has already set this as a grand challenge for AI. They have said: “We invite the community to join us in… developing AI systems capable of making Nobel quality scientific discoveries.” According to the challenge, these advances by an AI would be made “highly autonomously at a level comparable, and possibly superior, to the best human scientists by 2050”.





Those who do not study history are doomed to repeat it.

https://www.bespacific.com/a-timeless-guide-to-subverting-any-organization-with-purposeful-stupidity/

A Timeless Guide to Subverting Any Organization with “Purposeful Stupidity”

Open Culture – Discover the CIA’s Simple Sabotage Field Manual: A Timeless Guide to Subverting Any Organization with “Purposeful Stupidity”  (1944): “…Now declassified and freely available on the CIA website, the manual that the agency describes as “surprisingly relevant” was once distributed to OSS officers abroad to assist them in training “citizen-saboteurs” in occupied countries like Norway and France. Such people, writes Rebecca Onion at Slate, “might already be sabotaging materials, machinery, or operations of their own initiative,” but may have lacked the devious talent for sowing chaos that only an intelligence agency can properly master. Genuine laziness, arrogance, and mindlessness may surely be endemic. But the Field Manual asserts that “purposeful stupidity is contrary to human nature” and requires a particular set of skills. The citizen-saboteur “frequently needs pressure, stimulation or assurance, and information and suggestions regarding feasible methods of simple sabotage.” You can read the full document here. Or find an easy-to-read version on Project Gutenberg here. To get a sense of just how “timeless”—according to the CIA itself—such instructions remain, see the abridged list below, courtesy of Business Insider. You will laugh ruefully, then maybe shudder a little as you recognize how much your own workplace, and many others, resemble the kind of dysfunctional mess the OSS meticulously planned during World War II…

Organizations and Conferences

  • Insist on doing everything through “channels.” Never permit short-cuts to be taken in order to expedite decisions.
  • Make “speeches.” Talk as frequently as possible and at great length. Illustrate your “points” by long anecdotes and accounts of personal experiences.
  • When possible, refer all matters to committees, for “further study and consideration.” Attempt to make the committee as large as possible — never less than five.
  • Bring up irrelevant issues as frequently as possible.
  • Haggle over precise wordings of communications, minutes, resolutions.
  • Refer back to matters decided upon at the last meeting and attempt to re-open the question of the advisability of that decision.
  • Advocate “caution.” Be “reasonable” and urge your fellow-conferees to be “reasonable” and avoid haste which might result in embarrassments or difficulties later on...”



Thursday, November 21, 2024

I thought we were past this stupidity.

https://minnesotareformer.com/2024/11/20/misinformation-expert-cites-non-existent-sources-in-minnesota-deep-fake-case/

Misinformation expert cites non-existent sources in Minnesota deep fake case

A leading misinformation expert is being accused of citing non-existent sources to defend Minnesota’s new law banning election misinformation.

Professor Jeff Hancock, founding director of the Stanford Social Media Lab, is “well-known for his research on how people use deception with technology,” according to his Stanford biography.

At the behest of Minnesota Attorney General Keith Ellison, Hancock recently submitted an affidavit supporting new legislation that bans the use of so-called “deep fake” technology to influence an election. The law is being challenged in federal court by a conservative YouTuber and Republican state Rep. Mary Franson of Alexandria for violating First Amendment free speech protections.

Hancock’s expert declaration in support of the deep fake law cites numerous academic works. But several of those sources do not appear to exist, and the lawyers challenging the law say they appear to have been made up by artificial intelligence software like ChatGPT.





Something else to worry about…

https://www.newyorker.com/news/news-desk/the-technology-the-trump-administration-could-use-to-hack-your-phone

The Technology the Trump Administration Could Use to Hack Your Phone

In September, the Department of Homeland Security (D.H.S.) signed a two-million-dollar contract with Paragon, an Israeli firm whose spyware product Graphite focusses on breaching encrypted-messaging applications such as Telegram and Signal. Wired first reported that the technology was acquired by Immigration and Customs Enforcement (ICE)—an agency within D.H.S. that will soon be involved in executing the Trump Administration’s promises of mass deportations and crackdowns on border crossings. A source at Paragon told me that the deal followed a vetting process, during which the company was able to demonstrate that it had robust tools to prevent other countries that purchase its spyware from hacking Americans—but that wouldn’t limit the U.S. government’s ability to target its own citizens. The technology is part of a booming multibillion-dollar market for intrusive phone-hacking software that is making government surveillance increasingly cheap and accessible. In recent years, a number of Western democracies have been roiled by controversies in which spyware has been used, apparently by defense and intelligence agencies, to target opposition politicians, journalists, and apolitical civilians caught up in Orwellian surveillance dragnets. Now Donald Trump and incoming members of his Administration will decide whether to curtail or expand the U.S. government’s use of this kind of technology. Privacy advocates have been in a state of high alarm about the colliding political and technological trend lines. “It’s just so evident—the impending disaster,” Emily Tucker, the executive director at the Center on Privacy and Technology at Georgetown Law, told me. “You may believe yourself not to be in one of the vulnerable categories, but you won’t know if you’ve ended up on a list for some reason or your loved ones have. Every single person should be worried.”



(Related)

https://www.schneier.com/blog/archives/2024/11/secret-service-tracking-peoples-locations-without-warrant.html

Secret Service Tracking People’s Locations without Warrant

This feels important:

The Secret Service has used a technology called Locate X which uses location data harvested from ordinary apps installed on phones. Because users agreed to an opaque terms of service page, the Secret Service believes it doesn’t need a warrant.





Could be useful.

https://venturebeat.com/ai/openscholar-the-open-source-a-i-thats-outperforming-gpt-4o-in-scientific-research/

OpenScholar: The open-source A.I. that’s outperforming GPT-4o in scientific research

Scientists are drowning in data. With millions of research papers published every year, even the most dedicated experts struggle to stay updated on the latest findings in their fields.

A new artificial intelligence system, called OpenScholar, is promising to rewrite the rules for how researchers access, evaluate, and synthesize scientific literature. Built by the Allen Institute for AI (Ai2) and the University of Washington, OpenScholar combines cutting-edge retrieval systems with a fine-tuned language model to deliver citation-backed, comprehensive answers to complex research questions.

“Scientific progress depends on researchers’ ability to synthesize the growing body of literature,” the OpenScholar researchers wrote in their paper. But that ability is increasingly constrained by the sheer volume of information. OpenScholar, they argue, offers a path forward—one that not only helps researchers navigate the deluge of papers but also challenges the dominance of proprietary AI systems like OpenAI’s GPT-4o.
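
The architecture described (a retrieval system feeding a fine-tuned model that answers with citations) follows the familiar retrieval-augmented generation pattern. A minimal sketch of that loop; every name here is an illustrative placeholder, not OpenScholar’s actual API:

```python
from dataclasses import dataclass

@dataclass
class Passage:
    paper_id: str
    text: str

def retrieve(query: str, index: list[Passage], k: int = 3) -> list[Passage]:
    """Toy lexical retriever: rank passages by overlap with the query terms.
    (Real systems use dense retrievers; this keeps the sketch dependency-free.)"""
    terms = set(query.lower().split())
    return sorted(index, key=lambda p: -len(terms & set(p.text.lower().split())))[:k]

def build_prompt(query: str, passages: list[Passage]) -> str:
    """Number the retrieved sources so the model can cite them inline as [1], [2], ..."""
    context = "\n".join(f"[{i}] ({p.paper_id}) {p.text}"
                        for i, p in enumerate(passages, 1))
    return f"Answer from the sources below, citing each claim.\n{context}\n\nQ: {query}\nA:"

index = [
    Passage("doe2023", "Retrieval grounding improves factual accuracy in scientific QA."),
    Passage("lee2024", "Citation-backed generation makes answers verifiable."),
]
prompt = build_prompt("Does retrieval make answers more verifiable?",
                      retrieve("retrieval verifiable answers", index))
print(prompt)  # in a full system, this prompt would go to the fine-tuned model
```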





Wednesday, November 20, 2024

Are hallucinations by AI worse than hallucinations by humans?

https://www.bespacific.com/artificial-intelligence-and-constitutional-interpretation/

Artificial Intelligence and Constitutional Interpretation

Coan, Andrew and Surden, Harry, Artificial Intelligence and Constitutional Interpretation (November 12, 2024). Arizona Legal Studies Discussion Paper No. 24-30, U of Colorado Law Legal Studies Research Paper No. 24-39, Available at SSRN: https://ssrn.com/abstract=5018779 or http://dx.doi.org/10.2139/ssrn.5018779

This Article examines the potential use of large language models (LLMs) like ChatGPT in constitutional interpretation. LLMs are extremely powerful tools, with significant potential to improve the quality and efficiency of constitutional analysis. But their outputs are highly sensitive to variations in prompts and counterarguments, illustrating the importance of human framing choices. As a result, using LLMs for constitutional interpretation implicates substantially the same theoretical issues that confront human interpreters. Two key implications emerge: First, it is crucial to attend carefully to particular use cases and institutional contexts. Relatedly, judges and lawyers must develop “AI literacy” to use LLMs responsibly. Second, there is no avoiding the burdens of judgment. For any given task, LLMs may be better or worse than humans, but the choice of whether and how to use them is itself a judgment requiring normative justification.
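
The prompt-sensitivity claim lends itself to a simple experiment: pose the same interpretive question under different framings and compare the outputs. A sketch, where `query_llm` is a hypothetical stand-in for any chat-completion API and the framings are invented examples:

```python
def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real chat-completion call; it echoes
    its framing here so the sketch runs without an API key."""
    return f"<answer conditioned on: {prompt.splitlines()[0]}>"

QUESTION = "Does the Fourth Amendment reach cell-site location records?"

FRAMINGS = [
    "You are an originalist judge.",
    "You are a living-constitutionalist judge.",
    "Present the strongest argument on each side, then conclude.",
]

for framing in FRAMINGS:
    print(framing, "->", query_llm(f"{framing}\n{QUESTION}"))

# Divergence across framings is the authors' point: the human's framing
# choice, not the model alone, shapes the interpretive output.
```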





An old complaint that has been solved by most organizations...

https://www.theregister.com/2024/11/20/data_is_the_new_uranium/

Data is the new uranium – incredibly powerful and amazingly dangerous

CISOs are quietly wishing they had less data, because the cost of management sometimes exceeds its value

I recently got to play a 'fly on the wall' at a roundtable of chief information security officers. Beyond the expected griping and moaning about funding shortfalls and always-too-gullible users, I began to hear a new note: data has become a problem.

A generation ago we had hardly any data at all. In 2003 I took a tour of a new all-digital 'library' – the Australian Centre for the Moving Image (ACMI) – and marveled at its single petabyte of online storage. I'd never seen so much, and it pointed toward a future where we would all have all the storage capacity we ever needed.

That day arrived not many years later when Amazon's S3 quickly made scale a non-issue. Today, plenty of enterprises manage multiple petabytes of storage and we think nothing about moving a terabyte across the network or generating a few gigabytes of new media during a working day. Data is so common it has become nearly invisible.

Unless you're a CISO. For them, more data means more problems, because it's stored in so many systems. Most security execs know they have pools of data all over the place, and that marketing departments have built massive data-gathering and analytics engines into all customer-facing systems, and acquire more data every day.





Keep America stupid? Why not learn to use the new tools?

https://www.bostonglobe.com/2024/11/15/opinion/ai-classroom-teaching-writing/

AI in the classroom could spare educators from having to teach writing

Of all the skills I teach my high school students, I’ve always thought writing was the most important — essential to their future academic success, useful in any profession. I’m no longer so sure.

Thanks to AI, writing’s place in the curriculum today is like that of arithmetic at the dawn of cheap and widely available calculators. The skills we currently think are essential — spelling, punctuation, subject-predicate agreement — may soon become superfluous, and schools will have to adapt.

But writing takes a lot of time to do well, and time is the most precious resource in education. Longer writing assignments, like essays or research papers, may no longer be the best use of it. In the workplace, it is becoming increasingly common for AI to write the first draft of any long-form document.  More than half of professional workers used AI on the job in 2023, according to one study, and of those who used AI, 68 percent were using it to draft written content. Refining AI’s draft — making sure it conveys what is intended — becomes the real work. From a business perspective, this is an efficient division of labor: Humans come up with the question, AI answers it, and humans polish the AI output.

In schools, the same process is called cheating.



(Related)

https://techcrunch.com/2024/11/20/openai-releases-a-teachers-guide-to-chatgpt-but-some-educators-are-skeptical/

OpenAI releases a teacher’s guide to ChatGPT, but some educators are skeptical

OpenAI envisions teachers using its AI-powered tools to create lesson plans and interactive tutorials for students. But some educators are wary of the technology — and its potential to go awry.

Today, OpenAI released a free online course designed to help K-12 teachers learn how to bring ChatGPT, the company’s AI chatbot platform, into their classrooms. Created in collaboration with the nonprofit organization Common Sense Media, with which OpenAI has an active partnership, the one-hour, nine-module program covers the basics of AI and its pedagogical applications.



Tuesday, November 19, 2024

Let AI do the thinking?

https://www.bespacific.com/the-death-of-search/

The Death of Search

The Atlantic (unpaywalled): AI is transforming how billions navigate the web. A lot will be lost in the process. “…Although ChatGPT and Perplexity and Google AI Overviews cite their sources with (small) footnotes or bars to click on, not clicking on those links is the entire point. OpenAI, in its announcement of its new search feature, wrote that “getting useful answers on the web can take a lot of effort. It often requires multiple searches and digging through links to find quality sources and the right information for you. Now, chat can get you to a better answer.” Google’s pitch is that its AI “will do the Googling for you.” Perplexity’s chief business officer told me this summer that “people don’t come to Perplexity to consume journalism,” and that the AI tool will provide less traffic than traditional search. For curious users, Perplexity suggests follow-up questions so that, instead of opening a footnote, you keep reading in Perplexity. The change will be the equivalent of going from navigating a library with the Dewey decimal system, and thus encountering related books on adjacent shelves, to requesting books for pickup through a digital catalog. It could completely reorient our relationship to knowledge, prioritizing rapid, detailed, abridged answers over a deep understanding and the consideration of varied sources and viewpoints. Much of what’s beautiful about searching the internet is jumping into ridiculous Reddit debates and developing unforeseen obsessions on the way to mastering a topic you’d first heard of six hours ago, via a different search; falling into clutter and treasure, all the time, without ever intending to. AI search may close off these avenues to not only discovery but its impetus, curiosity…”





A response to US authorization of long range weapons by Ukraine or the start of something larger?

https://www.cnn.com/2024/11/18/europe/undersea-cable-disrupted-germany-finland-intl/

Two undersea cables in Baltic Sea disrupted, sparking warnings of possible ‘hybrid warfare’

Two undersea internet cables in the Baltic Sea have been suddenly disrupted, according to local telecommunications companies, amid fresh warnings of possible Russian interference with global undersea infrastructure.

A communications cable between Lithuania and Sweden was cut on Sunday morning around 10:00 a.m. local time, a spokesperson from telecommunications company Telia Lithuania confirmed to CNN.

Another cable linking Finland and Germany was also disrupted, according to Cinia, the state-controlled Finnish company that runs the link. The C-Lion cable – the only direct connection of its kind between Finland and Central Europe – spans nearly 1,200 kilometers (730 miles), alongside other key pieces of infrastructure, including gas pipelines and power cables.

The area that was disrupted along the Finnish-German cable is roughly 60 to 65 miles away from the Lithuanian-Swedish cable that was cut, a CNN analysis of the undersea routes shows.
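
The reported separation is easy to sanity-check with a great-circle (haversine) calculation. The coordinates below are rough, assumed points in the Baltic chosen only to illustrate the math, not the actual damage sites:

```python
import math

def haversine_miles(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two lat/lon points, in statute miles."""
    r = 3958.8  # mean Earth radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Assumed, illustrative positions only (not the reported damage sites).
print(f"{haversine_miles(55.4, 15.5, 55.0, 14.2):.0f} miles")  # ~58 for these points
```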





Civil defense: we don’t do that any more, do we?

https://www.theregister.com/2024/11/18/sweden_updates_war_guide/

Sweden's 'Doomsday Prep for Dummies' guide hits mailboxes today

Residents of Sweden are to receive a handy new guide this week that details how to prepare for various types of crisis situations or wartime should geopolitical events threaten the country.

The "If crisis or war comes" [PDF] guide received its first update in six years and its distribution to every Swedish household begins today. Citing factors such as war, terrorism, cyberattacks, and increasingly extreme weather events, the 32-page guide was commissioned by the government and calls for unity to secure the country's independence.



Monday, November 18, 2024

Is this the best source for training AI?

https://archive.is/TmYqM#selection-905.16-913.25

The Hollywood AI Database

I can now say with absolute confidence that many AI systems have been trained on TV and film writers’ work. Not just on The Godfather and Alf, but on more than 53,000 other movies and 85,000 other TV episodes: Dialogue from all of it is included in an AI-training data set that has been used by Apple, Anthropic, Meta, Nvidia, Salesforce, Bloomberg, and other companies. I recently downloaded this data set, which I saw referenced in papers about the development of various large language models (or LLMs). It includes writing from every film nominated for Best Picture from 1950 to 2016, at least 616 episodes of The Simpsons, 170 episodes of Seinfeld, 45 episodes of Twin Peaks, and every episode of The Wire, The Sopranos, and Breaking Bad. It even includes prewritten “live” dialogue from Golden Globes and Academy Awards broadcasts. If a chatbot can mimic a crime-show mobster or a sitcom alien—or, more pressingly, if it can piece together whole shows that might otherwise require a room of writers—data like this are part of the reason why.





Those who do not study history are doomed to repeat it?

https://timesofindia.indiatimes.com/world/rest-of-world/when-machines-took-over-ais-sarcastic-take-on-industrial-revolution/articleshow/115399605.cms

When machines took over: AI’s sarcastic take on industrial revolution



Sunday, November 17, 2024

Perspective.

https://ieeexplore.ieee.org/abstract/document/10747739

From artificial intelligence to artificial mind: A paradigm shift

Considering the development of artificial intelligence (AI) in various fields, especially the closeness of their function to the human brain in terms of perception and understanding of sensory and emotional concepts, it can be concluded that this concept is cognitively evolving toward an artificial mind (AM). This article introduces the concept of AM as a more accurate interpretation of the future of AI. It explores the distinction between intelligence and mind, highlighting the holistic nature of the mind, which includes cognitive, psychological, and emotional dimensions. Various types of intelligence, from rational to emotional, are categorized to emphasize their role in shaping human abilities. The study evaluates the human mind, focusing on cognitive functions, logical thinking, emotional understanding, learning, and creativity. It encourages AI systems to understand contextual, emotional, and subjective aspects and aligns AI with human intelligence through advanced perception and emotional capabilities. The shift from AI to AM has significant implications, transforming work, education, and human-machine collaboration, and promises a future where AI systems integrate advanced perceptual and emotional functions. This narrative guides the conversation around AI terminology, emphasizing the convergence of artificial and human intelligence and acknowledging the social implications. Therefore, the term “artificial mind” appears as a more appropriate term than “artificial intelligence”, symbolizing the transformative technological change and its multifaceted impact on society.





Extermination by stress? I doubt it.

https://www.nature.com/articles/s41599-024-04018-w

The mental health implications of artificial intelligence adoption: the crucial role of self-efficacy

The rapid adoption of artificial intelligence (AI) in organizations has transformed the nature of work, presenting both opportunities and challenges for employees. This study utilizes several theories to investigate the relationships between AI adoption, job stress, burnout, and self-efficacy in AI learning. A three-wave time-lagged research design was used to collect data from 416 professionals in South Korea. Structural equation modeling was used to test the proposed mediation and moderation hypotheses. The results reveal that AI adoption does not directly influence employee burnout but exerts its impact through the mediating role of job stress. The results also show that AI adoption significantly increases job stress, thus increasing burnout. Furthermore, self-efficacy in AI learning was found to moderate the relationship between AI adoption and job stress, with higher self-efficacy weakening the positive relationship. These findings highlight the importance of considering the mediating and moderating mechanisms that shape employee experiences in the context of AI adoption. The results also suggest that organizations should proactively address the potential negative impact of AI adoption on employee well-being by implementing strategies to manage job stress and foster self-efficacy in AI learning. This study underscores the need for a human-centric approach to AI adoption that prioritizes employee well-being alongside technological advancement. Future research should explore additional factors that may influence the relationships between AI adoption, job stress, burnout, and self-efficacy across diverse contexts to inform the development of evidence-based strategies for supporting employees in AI-driven workplaces.
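
The mediation and moderation structure the abstract describes (AI adoption raises job stress, stress drives burnout, and self-efficacy dampens the adoption-to-stress path) can be sketched with ordinary regressions on synthetic data. This illustrates the method only; it is not the authors’ data or code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 416  # matches the paper's sample size; the data itself is simulated

adoption = rng.normal(size=n)
efficacy = rng.normal(size=n)
stress = 0.5 * adoption - 0.3 * adoption * efficacy + rng.normal(size=n)
burnout = 0.6 * stress + rng.normal(size=n)  # no direct adoption -> burnout path
df = pd.DataFrame({"adoption": adoption, "efficacy": efficacy,
                   "stress": stress, "burnout": burnout})

# Moderation: the adoption:efficacy interaction should come out negative,
# i.e., higher self-efficacy weakens the adoption -> stress link.
print(smf.ols("stress ~ adoption * efficacy", df).fit().params)

# Mediation: adoption predicts burnout on its own, but its coefficient
# should shrink toward zero once the mediator (stress) is controlled for.
print(smf.ols("burnout ~ adoption", df).fit().params)
print(smf.ols("burnout ~ adoption + stress", df).fit().params)
```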