Saturday, October 14, 2023

Changing how your company works requires human thought, not AI suggestions.

https://www.cio.com/article/655669/business-ai-will-change-the-way-businesses-are-run.html

Business AI will change the way businesses are run

Less than a year after most CIOs and business leaders first heard the expression “generative artificial intelligence,” this technology has set off a wave of innovation that will dramatically change how businesses are run.

However, we at SAP are not entering this race as newcomers. In fact, we have been at the forefront of embedding AI into our solutions for years. And as part of this, we recently launched SAP Joule, a generative AI assistant that will be embedded throughout the SAP cloud portfolio.

Joule will help our customers achieve business results faster by enabling them to access insights that are relevant for their business through natural conversation. Simply by asking a question in plain language, our customers will get smart answers drawn from a pool of data from across the SAP portfolio and third-party sources. [Can you control these sources? And what do they think about you using their data? Bob] Joule will continuously deliver new insights that get even more intelligent over time.



Friday, October 13, 2023

How many similar ‘fakes’ are getting by the judges? (You specialize in this area of law and you didn’t check citations you’d never seen before?)

https://laist.com/news/housing-homelessness/dennis-block-chatgpt-artificial-intelligence-ai-eviction-court-los-angeles-lawyer-sanction-housing-tenant-landlord

This Prolific LA Eviction Law Firm Was Caught Faking Cases In Court. Did They Misuse AI?

  • Dennis P. Block and Associates, which describes itself as California’s “leading eviction law firm,” was recently sanctioned by an L.A. County Superior Court judge over a court filing the judge found contained fake case law.

  • Block’s firm was ordered to pay $999 over the violation. That’s $1 below the threshold that would have required the firm to report the sanction to the state bar for further investigation and possible disciplinary action.





Imagine hundreds (thousands?) of these things overhead. And you thought we weren’t ready for driverless cars…

https://www.cnbc.com/2023/10/13/china-lets-ehang-operate-fully-autonomous-passenger-carrying-air-taxis.html

China gives Ehang the first industry approval for fully autonomous, passenger-carrying air taxis

U.S.-listed Ehang claims it’s the first in the world to get such a certificate, which allows it to fly passenger-carrying autonomous electric vertical take-off and landing (eVTOL) aircraft in China.

The certificate will also significantly simplify the company’s ability to get similar certificates for commercial operation in the U.S., Europe and Southeast Asia, CEO Huazhi Hu told CNBC in a video conference interview.

“Next year we should start to expand overseas,” he said, noting those regulators still need to establish a process for mutual recognition of the Chinese airworthiness certification. That’s according to a CNBC translation of his Mandarin-language remarks.

A significant difference between self-driving taxis and self-piloting drones is that while cars on the road must make turns at intersections, a drone flight is between two points in the air, Ehang’s CEO said.





Read this one with a grain of salt. I think the doom is overdone. Will some people be left behind? Sure. Some people haven’t learned to use computers yet.

https://news.harvard.edu/gazette/story/2023/10/a-tech-warning-ai-is-coming-fast-and-its-going-to-be-rough-ride/

A tech warning: AI is coming fast and it’s going to be a rough ride

… “It’s important for everybody to understand how fast this is going to change,” said Eric Schmidt, the former CEO and chairman of Google, during a conversation Wednesday evening with Graham Allison, Douglas Dillon Professor of Government at Harvard Kennedy School, about what’s just over the horizon in AI. “It’s going to happen so fast. People are not going to adapt.”

The pace of improvements is picking up. The high cost of training AI models, which is time-consuming, is coming down very quickly, and output quality will be far more accurate and fresher than it is today, said Schmidt.

But “the negatives are quite profound,” he added. AI firms still have no solutions for issues around algorithmic bias or attribution, or for copyright disputes now in litigation over the use of writing, books, images, film, and artworks in AI model training. Many other as yet unforeseen legal, ethical, and cultural questions are expected to arise across all kinds of military, medical, educational, and manufacturing uses.

Most concerning are the “extreme risks” of AI being used to enable massive loss of life if the four firms at the forefront of this innovation, OpenAI, Google, Microsoft, and Anthropic, are not constrained by guardrails and their financial incentives are “not aligned with human values,” said Schmidt, who served as executive chairman of Alphabet, Google’s parent company, from 2015 to 2018, and as technical adviser from 2018 to 2020 before leaving altogether.



Thursday, October 12, 2023

Interesting because I think Adam Smith is a good model for how AI works. Smith studied lists of prices and inventories and found relations between supply and demand. AI does the same thing with less structured data (and much faster). What AI cannot do is produce new concepts, as Einstein did in his Special Theory of Relativity. (There were no references to prior work in that paper.)

https://www.cnn.com/2023/10/12/economy/ai-impact-on-economists-jobs/index.html

How genAI is revolutionizing the field of economics

Anton Korinek, an economics professor at the University of Virginia, tells the students he advises nowadays that they should really begin to master a flourishing technology expected to transform the field of economics. That technology is generative artificial intelligence, or “genAI” for short.

Korinek expects it to “revolutionize research,” according to a paper he wrote that was accepted for publication by the Journal of Economic Literature.

“It’s a powerful technology and if you use it, you can solve economic problems that we face as a society, better and more productively. That’s what research is all about,” Korinek told CNN in an interview.





It seems that everyone – even kids – has access to the war-fighting tools Hamas found so useful.

https://www.politico.com/newsletters/digital-future-daily/2023/10/10/we-just-saw-the-future-of-war-00120788

We just saw the future of war

When Hamas militants shocked the world last weekend by launching the biggest and most violent attack on Israel in decades, it was almost equally shocking how they did it.

Hamas blasted through a super-high-tech, $1 billion security system on the Gaza border using little more than bulldozers, paragliders and a 2G cellular network, a remarkable upending of the two sides’ tech dynamic — as POLITICO’s Daniella Cheslow outlined in striking detail this morning.

… “Technology is changing warfare, but it isn’t necessarily changing it in the ways that most techno-optimists think it will,” Cronin said. “Because technologies are so accessible, you’ve got… groups like Hamas able to use everything from drones, to social media, to low-tech clusters of technology both high and low that can have an enormous impact.”

For Hamas, that took the form of staying off smartphones and preparing its propaganda in advance, as well as overwhelming the Israeli border so rapidly that its drone surveillance system failed. Cronin characterizes three key areas where lower-tech actors can, and do, overwhelm their counterparts: The democratization of media technology; the increase of physical reach allowed by cheap drones and rocketry; and systems integration, or the ability to communicate effectively within the group.



Wednesday, October 11, 2023

I’ve decided not to answer that question.

https://bigthink.com/the-future/free-will-required-true-artificial-general-intelligence/

Why free will is required for true artificial general intelligence

Artificial general intelligence will not arise in systems that only passively receive data. They need to be able to act back on the world.

Recent years have witnessed stunning advances in areas like image recognition, text prediction, speech recognition, and language translation. These were achieved mainly due to the development and application of deep learning, inspired by the massively parallel, multilevel architecture of the cerebral cortex. This approach is tailor-made for learning the statistical regularities in masses and masses of training data. The trained neural networks can then abstract higher-order patterns; for example, recognizing types of objects in images. Or they can predict what patterns will be most likely in new instances of similar data, as in the autocompletion of text messages or the prediction of the three-dimensional structures of proteins.

However, even the most sophisticated systems can quickly be flummoxed by the right kind of questioning, the kind that presents novel scenarios not represented in the training data that humans can handle quite easily. Thus, if these systems have any kind of “understanding” — based on the abstraction of statistical patterns in an unimaginably vast set of training data — it does not seem to be the kind that humans have.
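
To make the “statistical regularities” point concrete, here is a minimal sketch (my illustration, not the article’s): a toy bigram autocompleter in Python that predicts the most likely next word from patterns seen in training, and has nothing to say about inputs outside its training data.

from collections import Counter, defaultdict

# Toy training corpus: the model's entire "experience" of the world.
training_text = (
    "the model learns patterns the model predicts words "
    "the model completes text"
).split()

# Count which word follows which: the learned "statistical regularities".
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def autocomplete(word):
    """Return the statistically most likely next word, if one was seen."""
    if word not in follows:
        return "<no prediction: input not represented in the training data>"
    return follows[word].most_common(1)[0][0]

print(autocomplete("model"))    # a pattern seen in training -> "learns"
print(autocomplete("quantum"))  # a novel scenario: the model is flummoxed

Scaled up by many orders of magnitude, this is the sense in which such systems “understand”: frequency of co-occurrence, not concepts.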





My AI claims that it is not nuts.

https://www.psychologytoday.com/us/blog/the-digital-self/202310/dare-we-consider-ai-psychology

Dare We Consider AI Psychology?

In the lexicon of human experience, the term 'psychology' has often been reserved for the intricate maze of the human mind. It encapsulates our emotions, behaviors, perceptions, and much more. However, as we stand at the precipice of a technological revolution driven by artificial intelligence, particularly Large Language Models, the concept of psychology undergoes an interesting metamorphosis. Let’s step off the couch and over the line on a journey into AI Psychology.

Jean Baudrillard’s assertion that “The territory no longer precedes the map” draws a parallel to the current conundrum faced by advanced AI models. Are these systems simply mimicking the vast amounts of data they've been trained on, or is there a deeper level of "understanding"? The term 'hallucination,' commonly associated with cognitive anomalies in humans, is now applied to AI. Such anthropomorphic language leads us to question: Are we inadvertently crafting a unique psyche for these machines or is there some undercurrent of processing that gives rise to pathology?





Makes it easier to ask to be deleted. Maybe.

https://www.latimes.com/california/story/2023-10-10/newsom-bill-delete-online-personal-data

Newsom signs bill that would make it easier to delete online personal data

Californians will be able to make a single request asking that data brokers delete their personal information, under a bill Gov. Gavin Newsom signed into law Tuesday.

Senate Bill 362, also known as the Delete Act, directs the California Privacy Protection Agency to create this new tool by January 2026. [Why so long? Bob]





Is there any evidence of this?

https://www.reuters.com/legal/utah-sues-tiktok-over-impact-app-children-2023-10-10/

Utah sues TikTok, claiming app has harmful impact on children

Utah sued Chinese-owned app TikTok on Tuesday, accusing it of harming children by intentionally keeping young users on the short-video sharing platform for unhealthy amounts of time.

The Utah suit is the latest action challenging the popular app in the United States, with Indiana and Arkansas bringing similar suits.

Last month, a federal judge blocked California from enforcing a law meant to protect children when they use the Internet.



Tuesday, October 10, 2023

War today: anyone can play. Hack the ‘enemy,’ perhaps causing confusion, exposing information, or wasting time. All from your favorite couch.

https://www.wired.com/story/israel-hamas-war-hacktivism/

Activist Hackers Are Racing Into the Israel-Hamas War—for Both Sides

Since the conflict escalated, hackers have targeted dozens of government websites and media outlets with defacements and DDoS attacks, attempting to overload targets with junk traffic to bring them down.



(Related)

https://www.theinformation.com/articles/elon-musks-x-cut-disinformation-fighting-tool-ahead-of-israel-hamas-conflict

Elon Musk’s X Cut Disinformation-Fighting Tool Ahead of Israel-Hamas Conflict

Elon Musk’s X, in the months before conflict erupted in Gaza, stopped using a software tool that identified organized misinformation, which is now spreading across the platform formerly known as Twitter.





We asked AI to analyze data and reach a conclusion. Now we are asking it to assume its conclusion was incorrect and reach another conclusion. Sounds like the problem that caused HAL to go crazy.

https://bdtechtalks.com/2023/10/09/llm-self-correction-reasoning-failures/

LLMs can’t self-correct in reasoning tasks, DeepMind study finds

Scientists are inventing various strategies to enhance the accuracy and reasoning abilities of large language models (LLMs), such as retrieval augmentation and chain-of-thought reasoning.

Among these, “self-correction”—a technique where an LLM refines its own responses—has gained significant traction, demonstrating efficacy across numerous applications. However, the mechanics behind its success remain elusive.

A recent study conducted by Google DeepMind in collaboration with the University of Illinois at Urbana-Champaign reveals that LLMs often falter when self-correcting their responses without external feedback. In fact, the study suggests that self-correction can sometimes impair the performance of these models, challenging the prevailing understanding of this popular technique.
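
For readers who want to see what “self-correction” means mechanically, here is a minimal sketch under stated assumptions: call_llm() is a hypothetical stand-in for whatever model API you use, and the prompts are illustrative, not the study’s.

def call_llm(prompt):
    # Hypothetical stand-in: wire this to your model API of choice.
    raise NotImplementedError("plug in an LLM API here")

def self_correct(question, rounds=2):
    """Intrinsic self-correction: the model critiques and revises itself,
    with no external signal telling it whether the answer is wrong."""
    answer = call_llm(question)
    for _ in range(rounds):
        answer = call_llm(
            f"Question: {question}\n"
            f"Your previous answer: {answer}\n"
            "Review your answer for errors and give a corrected final answer."
        )
    # The DeepMind finding: without external feedback, this loop can just as
    # easily talk the model out of a correct first answer as fix a wrong one.
    return answer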





An alternative view…

https://www.msn.com/en-xl/news/other/high-schools-in-denmark-are-embracing-chatgpt-as-a-teaching-tool-rather-than-shunning-it/ar-AA1hVLFs

High schools in Denmark are embracing ChatGPT as a teaching tool rather than shunning it

"My experience was that the students would use it without any kind of thought, and in that way, it becomes an obstacle to learning, and learning is the whole project here," said Pedersen.

"But if we could change the way they use it so that it becomes a tool for learning, then we would have won a lot, both in terms of, well, giving the students a new tool for learning, but also in terms of the relationship with the students," she added.

"Because if we can have the conversation with them about how to use AI, then the whole idea that they can't talk to us about it because it's forbidden goes away.



Monday, October 09, 2023

Heads up:

The Fall Privacy Foundation Seminar is scheduled for Friday, October 27th.

Recent Developments in State Privacy Laws

*The Colorado Privacy Act

*Colorado’s privacy laws compared with those of states like California and Virginia

Our panel includes:

Corinne O'Doherty, Senior Legislative Aide for Rep. Meg Froelich

Jefferey Riester from Attorney General Phil Weiser's Office

Shelby Dolen from Husch Blackwell

Send comments to the Privacy Foundation at privacyfoundation@law.du.edu and copy John Soma at jsoma@law.du.edu.





Can the law evolve as fast as AI?

https://eprints.ugd.edu.mk/32279/

Civil liability for AI in EU: General remarks

Artificial Intelligence (AI) has become an increasingly prominent and influential technology in modern society, permeating various sectors and significantly impacting the way we live, work, and interact. AI systems possess the ability to analyze vast amounts of data, recognize patterns, and make complex decisions with remarkable speed and accuracy. As a result, AI has facilitated significant improvements in efficiency, productivity, and problem-solving capabilities across various industries. From autonomous vehicles enhancing transportation safety to intelligent virtual assistants streamlining everyday tasks, AI has showcased immense potential in transforming societal functions. Nevertheless, alongside its transformative power, AI presents inherent risks and potential for damage. As AI systems become increasingly autonomous and capable of independent decision-making, questions regarding liability arise. Tort law, with its focus on civil wrongs and compensation for damage, plays a crucial role in determining legal accountability when AI-related damages occur. The complexities surrounding liability in AI are magnified due to the intricate interplay between human agency and machine autonomy, raising challenging legal questions that demand careful consideration.

This contribution aims to present a general overview of the liability regimes currently in place in EU Member States and to determine whether they provide for an adequate distribution of all such risks. The starting idea of this research is that such cases in the EU will often have different outcomes due to peculiar features of these legal systems that may play a decisive role, especially in cases involving AI. Mainly, these legal regimes largely attribute liability to human actors, emphasizing concepts such as negligence or intentional misconduct. On the other hand, although strict liability regimes are in place in all European jurisdictions, under current legal theory many AI systems do not fall under them, and victims are left with the sole option of pursuing their claims via fault liability.





How does that work? An obvious question?

https://www.pnas.org/doi/abs/10.1073/pnas.2301842120

Interpretable algorithmic forensics

One of the most troubling trends in criminal investigations is the growing use of “black box” technology, in which law enforcement rely on artificial intelligence (AI) models or algorithms that are either too complex for people to understand or that simply conceal how they function. In criminal cases, black box systems have proliferated in forensic areas such as DNA mixture interpretation, facial recognition, and recidivism risk assessments. The champions and critics of AI argue, mistakenly, that we face a catch-22: while black box AI is not understandable by people, they assume that it produces more accurate forensic evidence. In this Article, we question this assertion, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how “glass box” AI—designed to be interpretable—can be more accurate than black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. Debunking the black box performance myth has implications for forensic evidence, constitutional criminal procedure rights, and legislative policy. Absent some compelling—or even credible—government interest in keeping AI as a black box, and given the constitutional rights and public safety interests at stake, we argue that a substantial burden rests on the government to justify black box AI in criminal cases. We conclude by calling for judicial rulings and legislation to safeguard a right to interpretable forensic AI.
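
The glass box vs. black box contrast can be made concrete with a toy sketch (mine, not the paper’s experiments): a shallow decision tree whose complete rule set can be printed and audited, trained alongside an opaque ensemble on the same data.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# "Glass box": every decision rule can be printed and cross-examined.
glass = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
# "Black box": hundreds of trees voting; no single inspectable rationale.
black = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("glass box accuracy:", glass.score(X_te, y_te))
print("black box accuracy:", black.score(X_te, y_te))
print(export_text(glass))  # the whole decision procedure, human-readable

On many tabular tasks the accuracy gap between the two is small or absent, which is the paper’s point: interpretability need not cost accuracy.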





It is sounding less and less likely that AI will replace lawyers. Darn.

https://techreg.org/article/view/17979

All Rise for the Honourable Robot Judge?

There is a rich literature on the challenges that AI poses to the legal order. But to what extent might such systems also offer part of the solution? China, which has among the least developed rules to regulate conduct by AI systems, is at the forefront of using that same technology in the courtroom. This is a double-edged sword, however, as its use implies a view of law that is instrumental, with parties to proceedings treated as means rather than ends. That, in turn, raises fundamental questions about the nature of law and authority: at base, whether law is reducible to code that can optimize the human condition, or if it must remain a site of contestation, of politics, and inextricably linked to institutions that are themselves accountable to a public. For many of the questions raised, the rational answer will be sufficient; but for others, what the answer is may be less important than how and why it was reached, and whom an affected population can hold to account for its consequences.





New approach?

https://www.research.unipd.it/handle/11577/3496800

Artificial Intelligence, the Public Space, and the Right to Be Ignored

AI is capable of occupying, patrolling, and even controlling physical space. The issue of Artificial Face Recognition is only the most visible aspect of a much broader phenomenon, as computer vision empowers private and public entities to capture a great deal of information that is available in the public sphere in an unprecedented way, challenging how public law conceives of public spaces. Despite significant differences, various legal orders are similarly concerned with this phenomenon, which is shifting the boundaries between what is private and what is public. This Chapter i) explains how, because of AI, public spaces are morphing into something new; ii) argues for the protection of anonymity; iii) demonstrates the failures of several contemporary theorizations of public places and that the standard privacy paradigm fails to provide sufficient protection; iv) proposes a new approach to the topic. Drawing from different legal orders, it ultimately argues for a reconceptualization of the public sphere in a way that mitigates the impact of AI on social life.