Saturday, November 18, 2023

Among other things…

https://www.pymnts.com/artificial-intelligence-2/2023/this-week-in-ai-accelerationism-agi-and-the-law/

This Week in AI: Accelerationism, AGI and the Law

Thomson Reuters on Wednesday (Nov. 15) launched a series of initiatives aimed at transforming the legal profession through the use of generative AI.

This comes as PYMNTS Intelligence in “The Confluence of Law and AI: An Inevitability Waiting to Happen,” a collaboration with AI-ID, finds that more than half of legal professionals are uncertain about the technology’s reliability, and nearly two in five do not trust it.

Consumers of legal services are not entirely won over either, with 55% of clients and potential clients expressing serious concerns about the use of AI within the legal profession.

Still, 62% of legal professionals believe that effective use of generative AI will differentiate successful firms from unsuccessful ones in as little as five years. An even higher share, 80%, agree that generative AI will introduce “transformative efficiencies” — a sentiment echoed by law firms and corporate legal departments.

Those potential benefits — transformative across not just law but all sectors — are a part of why Chinese tech giant Alibaba reportedly said Thursday it will not spin off its cloud intelligence business amid the ongoing focus on AI.





A new tool for open source intelligence in general.

https://hbr.org/2023/11/use-genai-to-uncover-new-insights-into-your-competitors

Use GenAI to Uncover New Insights into Your Competitors

This real example illustrates companies’ growing problem of information overload regarding markets and competitors, which often prevents the C-suite from making the best decisions available given the data at its disposal. Financial Times Stock Exchange 100 Index companies’ reports run, on average, more than 223 pages and contain in excess of 147,000 words — and are growing by almost nine pages a year. It is hardly surprising that company experts responsible for analyzing competitive information — in marketing intelligence, strategy, strategic management accounting (a common business function in European companies), etc. — only focus on the sections that are apparently relevant to their function, often missing genuinely relevant information in the rest of the document.
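One obvious application of generative AI here is mechanical: chunk a long filing and have a model flag the passages each team should actually read. A minimal Python sketch, assuming the OpenAI Python client and an API key in the environment; the model name, chunk size, and prompt wording are placeholder assumptions, not anything the HBR piece prescribes:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def chunks(text, size=8000):
    # Split a long report into roughly size-character pieces.
    return [text[i:i + size] for i in range(0, len(text), size)]

def flag_relevant(report, function="marketing intelligence"):
    # Ask the model, chunk by chunk, to quote passages the team should read.
    findings = []
    for piece in chunks(report):
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder; any chat model works
            messages=[{
                "role": "user",
                "content": (
                    f"You analyze competitor filings for a {function} team. "
                    "Quote any passage below the team should read and say why "
                    "in one sentence. Reply NONE if nothing applies.\n\n" + piece
                ),
            }],
        )
        answer = resp.choices[0].message.content.strip()
        if answer != "NONE":
            findings.append(answer)
    return findings

The point of the chunking loop is precisely the problem the article describes: no single analyst reads all 223 pages, but a model can be pointed at every one of them.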



Friday, November 17, 2023

I have nothing to hide, except for those texts with my lawyer. The tips from my brokers aren’t insider information, are they?

https://www.pogowasright.org/eff-to-supreme-court-fifth-amendment-protects-people-from-being-forced-to-enter-or-hand-over-cell-phone-passcodes-to-the-police/

EFF to Supreme Court: Fifth Amendment Protects People from Being Forced to Enter or Hand Over Cell Phone Passcodes to the Police

WASHINGTON, D.C.—The Electronic Frontier Foundation (EFF) today asked the Supreme Court to overturn a ruling undermining Fifth Amendment protections against self-incrimination and find that constitutional safeguards prevent police from forcing people to provide or use passcodes for their cell phones so officers can access the tremendous amount of private information on phones.

At stake is the fundamental principle that the government can’t force people to testify against themselves, including by revealing or using their passcodes.

“When the government demands someone turn over or enter their passcode, it is forcing that person to disclose the contents of their mind and provide a link in a chain of possibly incriminating evidence,” said EFF Surveillance Litigation Director Andrew Crocker. “Whenever the government calls on someone to use memorized information to aid in their own prosecution—whether it be a cellphone passcode, a combination to a safe, or even their birthdate—the Fifth Amendment applies.”

The Illinois Supreme Court in the case People v. Sneed erroneously ruled that the Fifth Amendment doesn’t apply to compelled entry of passcodes because they are just a string of numbers memorized by the phone’s owner with minimal independent value—and therefore not a form of testimony. The Illinois court erred further by ruling that the passcode at issue fell under the dubious “foregone conclusion exception” to the Fifth Amendment because the government agents already knew it existed and the defendant knew the code.

Federal and state courts are split on whether the Fifth Amendment prohibits police from compelling individuals to unlock their cell phones so prosecutors can look for incriminating evidence, and when and how the “foregone conclusion exception” applies. Only the Supreme Court can resolve this split, EFF said in a brief today.

“The Supreme Court should find that Fifth Amendment protection against self-incrimination extends to the digital age and applies to turning over or entering a passcode,” said EFF Staff Attorney Lisa Femia.* “Cell phones hold an unprecedented amount of our private information and device searches have become routine in law enforcement investigations. It’s imperative that the court make clear that the Fifth Amendment doesn’t allow the government to require people to hand over their passcodes and assist in their own prosecution.”

*Admitted in New York and Washington, D.C., only; not admitted in California

For the brief: https://www.eff.org/document/sneed-v-illinois-eff-brief

Source: EFF.





It’s a tool. Someone will apply it or misapply it whenever they see an opportunity.

https://www.bespacific.com/chatgpt-has-been-turned-into-a-social-media-surveillance-assistant/

ChatGPT Has Been Turned Into A Social Media Surveillance Assistant

Forbes [free to read]: “Social Links, a surveillance company that had thousands of accounts banned after Meta accused it of mass-scraping Facebook and Instagram, is now using ChatGPT to make sense of data its software grabs from social media. Most people use ChatGPT to answer simple queries, draft emails, or produce useful (and useless) code. But spyware companies are now exploring how to use it and other emerging AI tools to surveil people on social media. In a presentation at the Milipol homeland security conference in Paris on Tuesday, online surveillance company Social Links demonstrated ChatGPT performing “sentiment analysis,” where the AI assesses the mood of social media users or can highlight commonly discussed topics amongst a group. That can then help predict whether online activity will spill over into physical violence and require law enforcement action. Founded by Russian entrepreneur Andrey Kulikov in 2017, Social Links now has offices in the Netherlands and New York; previously, Meta dubbed the company a spyware vendor in late 2022, banning 3,700 Facebook and Instagram accounts it allegedly used to repeatedly scrape the social sites. It denies any link to those accounts and the Meta claim hasn’t harmed its reported growth: company sales executive Rob Billington said the company had more than 500 customers, half of which were based in Europe, with just over 100 in North America. That Social Links is using ChatGPT shows how OpenAI’s breakout tool of 2023 can empower a surveillance industry keen to tout artificial intelligence as a tool for public safety. But according to the American Civil Liberties Union’s senior policy analyst Jay Stanley, using AI tools like ChatGPT to augment social media surveillance will likely “scale up individualized monitoring in a way that could never be done with human monitors,” he told Forbes…”
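Social Links’ actual pipeline is proprietary, but the “sentiment analysis” demonstration described above is easy to approximate with public tools. A minimal sketch, assuming the OpenAI Python client; the model choice, prompt, and sample posts are illustrative only:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

posts = [  # stand-ins for scraped social media posts
    "Can't wait for the rally tomorrow, bring everyone you know!",
    "This policy is a disaster and somebody should pay for it.",
]

for post in posts:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[{
            "role": "user",
            "content": "Label the sentiment of this post as positive, "
                       "negative, or neutral, and name its main topic in a "
                       "few words:\n\n" + post,
        }],
    )
    print(resp.choices[0].message.content)

Looped over thousands of scraped posts, this is exactly the kind of automation that, as the ACLU’s Stanley warns, scales up monitoring far beyond what human analysts could do.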





Was the AI that passed the Bar exam trained on the same data as the AI that passed the Ethics exam? If not, we don’t yet have an AI lawyer; we have separate tools.

https://www.lawnext.com/2023/11/generative-ai-having-already-passed-the-bar-exam-now-passes-the-legal-ethics-exam.html

Generative AI, Having Already Passed the Bar Exam, Now Passes the Legal Ethics Exam

Well, it’s happened again: Generative AI has passed a critical test used to measure candidates’ fitness to be licensed as lawyers.

Back in March, OpenAI’s GPT-4 took the bar exam and passed with flying colors, scoring in roughly the top 10% of test takers.

Now, two of the leading large language models (LLMs) have passed a simulation of the Multistate Professional Responsibility Examination (MPRE), a test required in all but two U.S. jurisdictions to measure prospective lawyers’ knowledge of professional conduct rules.



Thursday, November 16, 2023

An excellent starting point.

https://www.pogowasright.org/to-address-online-harms-we-must-consider-privacy-first/

To Address Online Harms, We Must Consider Privacy First

Every year, we encounter new, often ill-conceived, bills written by state, federal, and international regulators to tackle a broad set of digital topics ranging from child safety to artificial intelligence. These scattershot proposals to correct online harm are often based on censorship and news cycles. Instead of this chaotic approach that rarely leads to the passage of good laws, we propose another solution in a new report: Privacy First: A Better Way to Address Online Harms.

In this report, we outline how many of the internet’s ills have one thing in common: they’re based on the business model of widespread corporate surveillance online. Dismantling this system would not only be a huge step forward for our digital privacy, it would raise the floor for serious discussions about the internet’s future.






Wish list?

https://www.state.gov/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy-3/

Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy

Today, the United States joined 45 endorsing states to launch the implementation of the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.

https://www.state.gov/wp-content/uploads/2023/10/Latest-Version-Political-Declaration-on-Responsible-Military-Use-of-AI-and-Autonomy.pdf



Wednesday, November 15, 2023

Oh great. Another AI threat, only this one is stealthy.

https://www.dailymaverick.co.za/article/2023-11-13-chatgpt-was-disruptive-swarms-of-ai-agents-will-be-revolutionary/

ChatGPT was disruptive, swarms of AI agents will be revolutionary

We face a future of networked swarms of AI agents, interacting, competing, autonomously negotiating with each other and — where necessary — with humans, to achieve their respective goals. This is an incredible moment in history for anyone with an entrepreneurial attitude.

Simply put, employers will increasingly face a choice: try to make their employees more productive with AI tools or replace many of them entirely. In May 2023 the firm Challenger, Gray & Christmas found that AI was the 7th leading cause of job losses in the US. At what point will AI become the leading cause of job losses? Five years? Two years?

At the time of writing, the website There’s an AI for that lists more than 9,000 available AI models for over 2,000 different everyday business tasks, from creating PowerPoint slides and building websites all the way to therapy bots and models that streamline the scientific process itself.





‘Bad Clearview’ becomes ‘Good Clearview,’ or at least ‘too useful to toss out with the bath water Clearview.’

https://time.com/6334176/ukraine-clearview-ai-russia/

Ukraine’s ‘Secret Weapon’ Against Russia Is a Controversial U.S. Tech Company

Leonid Tymchenko spent the first month of Russia’s invasion sitting in his dark government office after curfew. Unable to go home, Ukraine’s Deputy Minister of Internal Affairs scrolled through Telegram, looking at thousands of videos and images of advancing Russian soldiers. When Tymchenko was offered a chance to test a new facial-recognition tool, he uploaded some of the photos to try it out.

He could not believe the results. Every time Tymchenko added a photo of a Russian soldier, the software, made by the American facial-recognition company Clearview AI, seemed to come back with an exact hit, linking to pages that revealed the soldier’s name, hometown, and social-media profile. Even when he uploaded grainy photos of dead soldiers, some with their eyes closed or their faces partially burned, the software was often able to identify the person. “Every day we identified hundreds of Russians who came to Ukraine with weapons,” Tymchenko tells TIME in a video interview from his office in Kyiv.



(Related)

https://www.wired.com/story/social-media-ai-dead-bodies/

Social Media Sleuths, Armed With AI, Are Identifying Dead Bodies

Poverty, fentanyl, and lack of public funding mean morgues are overloaded with unidentified bodies. TikTok and Facebook pages are filling the gap—with AI proving a powerful and controversial new tool.



(Related)

https://www.newyorker.com/magazine/2023/11/20/does-a-i-lead-police-to-ignore-contradictory-evidence?currentPage=all

Does A.I. Lead Police to Ignore Contradictory Evidence?

Too often, a facial-recognition search represents virtually the entirety of a police investigation.





Perspective. (And yes, I might try it this way.)

https://www.scientificamerican.com/article/to-educate-students-about-ai-make-them-use-it/

To Educate Students about AI, Make Them Use It

To do so, I created an AI-powered class assignment. Each student was required to generate their own essay from ChatGPT and “grade” it according to my instructions. Students were asked to leave comments on the document, as though they were a professor assessing a student’s work. Then they answered questions I provided: Did ChatGPT confabulate any sources? If so, how did you find out? Did it use any sources correctly? Did it get any real sources wrong? Was its argument persuasive or shallow?

The results were eye-opening: Every one of the 63 essays contained confabulations and errors.





Soon to be in common use, as communicators in all fields will find it irresistible. When there are millions of these fakes on the Internet, how will we separate the wheat from the chaff?

https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2023/11/14/presidents-use-ai-voice-clones-and

AI Voice Clones and Deepfakes: The Latest Presidents’ Engagement Tools

The friendly but authoritative voice of Astrid Tuminez delivers a cybersecurity PSA, narrating over an animated version of herself. Throughout the cartoon, the voice of the Utah Valley University president warns of perils such as phishing and phone scams before delivering a final, surprise reveal: the voice that sounded like Tuminez was that of an artificial intelligence–enabled bot.

“Trust that gut feeling that does not feel right,” the voice says. “And here’s a twist: you’ve been listening to an AI clone of President Tuminez’s voice.”

Then the real Tuminez appears, saying, “Just as my voice can be mimicked, so can others’; always be vigilant.”





Tools & Techniques.

https://www.searchenginejournal.com/ai-search-engines/497061/#close

The 6 Best AI Search Engines To Try Right Now

AI search engines are the biggest challenge Google has faced in decades. Here are the best ones to try right now.



Tuesday, November 14, 2023

If this is a collection of failed privacy bills, what chance does it have of passing?

https://www.cpomagazine.com/data-privacy/new-us-privacy-bill-focuses-on-ending-domestic-government-surveillance-overreach-at-all-levels/

New US Privacy Bill Focuses On Ending Domestic Government Surveillance Overreach At All Levels

Drawing on terms first proposed in a series of stalled-out data privacy bills that date back to at least 2018, the Government Surveillance Reform Act of 2023 (GSRA) narrows the focus specifically to warrantless government interception at all levels from federal to local.

The main thrust of the bill is to establish warrant requirements for some ongoing forms of data access that do not presently require them, but the GSRA would also put an end to “zombie” elements of the now-defunct Patriot Act and would address law enforcement use of private data broker files.





Was this a test? Or is Russia angry with Denmark for some reason?

https://www.databreaches.net/denmark-hit-with-largest-cyberattack-on-record/

Denmark Hit With Largest Cyberattack on Record

Chris Riotta reports:

Hackers potentially linked to the Russian GRU Main Intelligence Directorate carried out a series of highly coordinated cyberattacks targeting Danish critical infrastructure in the nation’s largest cyber incident on record, according to a new report.
SektorCERT, a nonprofit cybersecurity center for critical sectors in Denmark, reported that attackers gained access to the systems of 22 companies overseeing various components of Danish energy infrastructure in May. The report published Sunday says hackers exploited zero-day vulnerabilities in Zyxel firewalls, which many Danish critical infrastructure operators use to protect their networks.

Read more at Bank InfoSecurity.





Perspective.

https://www.bespacific.com/generative-ai-and-libraries-7-contexts/

Generative AI and libraries: 7 contexts

LorcanDempsey.net: “Libraries are engaging with AI in their educational, service and policy work. This post discusses seven contexts in which that work is taking place. This is the third of four posts on Generative AI:

    1. Generative AI and large language models: background and contexts

    2. Generative AI, scholarly and cultural language models, and the return of content

    3. Generative AI and libraries: 7 contexts

    4. Generative AI and library services: some directions

It is now a year since the momentous appearance of ChatGPT, and so much has happened in that time, whether one measures by new product and feature announcements, business churn (investment, startups), or policy, safety, and ethical debate. Usage is increasingly integrated into daily applications. Much of this has become routine, some of it is tedious, and much still has the ability to surprise. Capacities continue to expand. See the recent inclusion of voice and image capabilities into ChatGPT for example, or the introduction of the confusingly named GPTs, which allow you to create and share custom versions of ChatGPT based on your own data (more below and NYT coverage here)…”



Monday, November 13, 2023

Interesting. I’m sure he has not identified everything, but this is a good start.

https://www.schneier.com/blog/archives/2023/11/ten-ways-ai-will-change-democracy.html

Ten Ways AI Will Change Democracy

Artificial intelligence will change so many aspects of society, largely in ways that we cannot conceive of yet. Democracy, and the systems of governance that surround it, will be no exception. In this short essay, I want to move beyond the “AI-generated disinformation” trope and speculate on some of the ways AI will change how democracy functions—in both large and small ways.

Some items on my list are still speculative, but none require science-fictional levels of technological advance. And we can see the first stages of many of them today. When reading about the successes and failures of AI systems, it’s important to differentiate between the fundamental limitations of AI as a technology, and the practical limitations of AI systems in the fall of 2023. Advances are happening quickly, and the impossible is becoming the routine. We don’t know how long this will continue, but my bet is on continued major technological advances in the coming years. Which means it’s going to be a wild ride.





A question that really needs an answer.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4623126

Criminal Liability of Artificial Intelligence

Artificial intelligence is a new and extremely quickly developing technology that is expected, and perhaps even feared, to bring enormous changes to every aspect of our society. Even though this technology is still comparatively underdeveloped, we already hand over a multitude of everyday tasks to it. For now, AI is mostly used to take over tasks that are often perceived as annoying or highly time-consuming; its purpose, in the first place, is to enhance productivity. It is expected to do many of these tasks even better than human beings, at least in the future. Some of these tasks, such as autonomous driving, are quite dangerous, bearing the potential to infringe people’s protected rights and even cause physical harm and death to human beings. Obviously, such technology needs a solid and reliable legal basis, especially in terms of liability, if the inevitable happens and the technology causes events that were not intended to happen. However, a well-developed set of rules should not concern private law alone. Especially when such technology causes harm or even death to human beings, the question of a criminal deed arises, in the sense of criminal negligence, for example. Future criminal law must be prepared, and probably adjusted, to effectively tackle any questions concerning the criminal liability of artificial intelligence.





Are we evolving toward an AI lawyer?

http://192.248.104.6/handle/345/6771

Impact of Artificial Intelligence on Legal Practice in Sri Lanka

Artificial Intelligence (AI), a machine-based system used to ease the human workload, has become popular globally, and its influence can be seen even in developing countries like Sri Lanka. Although it has dominated areas such as machine problem detection, calculation, and speech recognition, it is questionable whether this sophisticated technology can address the traditional roles of legal practice. The research aims to explore the positive and negative influence of AI in the legal field while determining the degree to which this technology should be incorporated into the legal sector in Sri Lanka. The research was carried out as a literature survey with a comparative analysis of other jurisdictions. Currently, many countries including the USA have used AI-based tools such as LawGeex, Ross Intelligence, eBrevia and Leverton in legal practice due to their efficiency, accuracy and ease of use. Findings revealed that AI can be used even in Sri Lanka for legal research, preliminary legal drafting and codification of law. But given the prevailing economic and social background of Sri Lanka, it would be discriminatory to rely totally on an AI-driven legal system, since it may create barriers to equal access to legal support for the common masses. Also, excessive dependency on AI would be a barrier to innovative legal actions such as public interest litigation, since it would not assess the humanitarian aspect. Hence, it is concluded that AI should be used in Sri Lankan legal practice with limitations.





Thoughtful. Something for Con-Law at last!

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4626235

AI Outputs and the Limited Reach of the First Amendment

Not all communications are “constitutional speech”: determining whether machine-generated outputs qualify for First Amendment protection requires some work. To do so, we first explore aspects of both linguistic and communication theories, and then consider under what circumstances communication can become First Amendment speech.

We reach the bounds of the First Amendment from two directions. Working from a linguistic definition of speech, we capture non-linguistic forms of protected speech. Using communication theory, we reach a divide between human-sender communication and non-human-sender communication. Together these approaches support the location of a constitutional frontier. Within it we find all instances of recognized First Amendment effectiveness. Outputs of non-human autonomous senders (e.g., AI) are outside it and constitute an unexamined case.

“Speech” under the First Amendment requires both a human sender and a human receiver. Concededly, many AI outputs will be speech, due to the human factor in the mix. But just because a human programmed the AI, or set its goals, does not mean the AI’s output is substantially the human’s message. Nor does the fact that a human receives the output, for listeners’ First Amendment rights arise only where actual speech occurs. Thus, we resist the claim that all AI outputs are necessarily speech. Indeed, most AI outputs are not speech.

For those who raise objections to the challenge we pose – determining which AI outputs are speech and which are not – we respectfully note that there will be additional constitutional work to be done. We are confident that our courts will be up to this challenge.

Whether AI outputs are First Amendment speech has profound implications. If they are, then state and federal regulation is severely hobbled, limited to the few categories of speech that have been excluded by the Supreme Court from strong constitutional protection.

With limited exception, neither the sponsors/developers of AI, the AI itself, nor the end users have rights under the First Amendment in the machine’s output. We express no opinion on other rights they may have or on what types of regulations state and federal governments should adopt. Only that they may constitutionally do so.



(Related) They may have put a finger on the problem. AI output is based on the data it has scanned.

https://ojs.journalsdg.org/jlss/article/view/1965

The Impact of Developments in Artificial Intelligence on Copyright and other Intellectual Property Laws

Objective: The objective of this study is to investigate the impact of AI breakthroughs on copyright and challenges faced by intellectual property legal protection systems. Specifically, the study aims to analyze the implications of AI-generated works in the context of copyright law in Indonesia.

Result: The research findings reveal that according to Law Number 28 of 2014 in Indonesia, AI-generated works do not meet the originality standards required for copyright protection.





This interests me because of the years I spent auditing computer systems.

https://link.springer.com/article/10.1007/s44206-023-00074-y

Auditing of AI: Legal, Ethical and Technical Approaches

AI auditing is a rapidly growing field of research and practice. This review article, which doubles as an editorial to Digital Society’s topical collection on ‘Auditing of AI’, provides an overview of previous work in the field. Three key points emerge from the review. First, contemporary attempts to audit AI systems have much to learn from how audits have historically been structured and conducted in areas like financial accounting, safety engineering and the social sciences. Second, both policymakers and technology providers have an interest in promoting auditing as an AI governance mechanism. Academic researchers can thus fill an important role by studying the feasibility and effectiveness of different AI auditing procedures. Third, AI auditing is an inherently multidisciplinary undertaking, to which substantial contributions have been made by computer scientists and engineers as well as social scientists, philosophers, legal scholars and industry practitioners. Reflecting this diversity of perspectives, different approaches to AI auditing have different affordances and constraints. Specifically, a distinction can be made between technology-oriented audits, which focus on the properties and capabilities of AI systems, and process-oriented audits, which focus on technology providers’ governance structures and quality management systems. The next step in the evolution of auditing as an AI governance mechanism, this article concludes, should be the interlinking of these available—and complementary—approaches into structured and holistic procedures to audit not only how AI systems are designed and used but also how they impact users, societies and the natural environment in applied settings over time.



(Related) You mean I can generate my own version of the evidence!

https://iplab.dmi.unict.it/mfs/user/pages/03.publications/2024_an%20Overview%20of%20Deepfake%20Technologies%20from%20Creation%20to%20Detection%20in%20Forensics.pdf

An Overview of Deepfake Technologies: from Creation to Detection in Forensics

Advancements in Artificial Intelligence (AI) techniques have given rise to significant challenges in the field of Multimedia Forensics, particularly with the emergence of the Deepfake phenomenon. Deepfakes are images, video and audio generated or altered by powerful generative models such as Generative Adversarial Networks (GANs) [5] and Diffusion Models (DMs) [12]. While GANs have long been recognized for their ability to generate high-quality images, DMs offer distinct advantages, providing better control over the generative process and the ability to create images with a wide range of styles and content [2]. In fact, DMs have shown the potential to produce even more realistic images than GANs. AI-generated content spans diverse domains, including films, photography, video games, and virtual reality productions.

A major concern of the Deepfake phenomenon is its application to important people such as politicians and celebrities to spread misinformation. However, the most alarming aspect is the misuse of GANs and DMs to create pornographic Deepfakes, posing a serious security threat. Notably, a staggering 96% of Deepfakes available on the internet fall into this pornographic category. The malicious use of Deepfakes extends to issues such as misinformation, cyberbullying, and privacy violation. In addition, Deepfakes have been applied in the fields of art and entertainment, sparking ethical discussions about the limits of creativity and authenticity.

To counteract the illicit use of this powerful technology, novel forensic detection techniques are required to identify whether multimedia data has been manipulated or altered using GANs and DMs. Among image Deepfake detection methods in the state of the art, the primary focus lies in binary detection, distinguishing between real and AI-generated images [14, 16]. Notably, some methods have already demonstrated the ability to effectively differentiate between various GAN architectures [4, 7, 6, 15] and several DM engines [13, 1, 9]. This research showed that generative models leave unique fingerprints in the generated multimedia data, which can be used not only to identify Deepfakes but also to recognize the specific architecture used during the creation process [11]. This can be extremely important in forensics in order to reconstruct the history of the multimedia data under analysis (forensic ballistics) [8].

In order to create increasingly sophisticated Deepfake detection solutions, several challenges have been proposed by the scientific community, such as the Deepfake Detection Challenge (DFDC) [3] and the Face Deepfake Detection Challenge [10]. The latter has also launched a new challenge among researchers in the field: reconstructing the original image from Deepfakes, a task that can be extremely important in forensics.
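The binary real-versus-fake task that dominates the detection literature is, at baseline, ordinary image classification. A minimal sketch, assuming PyTorch/torchvision and a labeled image folder (data/train/fake, data/train/real); this is a generic fine-tuning baseline for illustration, not any of the cited methods:

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train = datasets.ImageFolder("data/train", transform=tf)  # subfolders: fake/, real/
loader = DataLoader(train, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")  # pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)     # two classes: fake vs. real

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):            # a few epochs suffice for a rough baseline
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()

Published detectors go further than this, hunting for the model-specific “fingerprints” the authors describe rather than relying on whatever features a generic network happens to learn.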



Sunday, November 12, 2023

In other words, the instructions we use to tell AI what to do do not result in the AI doing what we tell it to do? Or is it the humans who don’t understand?

https://scitechdaily.com/the-illusion-of-understanding-mit-unmasks-the-myth-of-ais-formal-specifications/

The Illusion of Understanding: MIT Unmasks the Myth of AI’s Formal Specifications

As autonomous systems and artificial intelligence become increasingly common in daily life, new methods are emerging to help humans check that these systems are behaving as expected. One method, called formal specifications, uses mathematical formulas that can be translated into natural-language expressions. Some researchers claim that this method can be used to spell out decisions an AI will make in a way that is interpretable to humans.

MIT Lincoln Laboratory researchers wanted to check such claims of interpretability. Their findings point to the opposite: Formal specifications do not seem to be interpretable by humans. In the team’s study, participants were asked to check whether an AI agent’s plan would succeed in a virtual game. Presented with the formal specification of the plan, the participants were correct less than half of the time.
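For readers who have never seen one: a formal specification is typically a temporal-logic formula. A standard textbook example in linear temporal logic (an illustration, not a formula from the MIT study) is

\square(\mathit{request} \rightarrow \lozenge\,\mathit{grant})

which reads as “at every point in time, if a request occurs, a grant eventually follows.” The natural-language translations the study tested are of exactly this kind, and participants still judged plans against them correctly less than half the time.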



(Related)

https://thehill.com/opinion/congress-blog/4305486-either-the-law-will-govern-ai-or-ai-will-govern-the-law/

Either the law will govern AI, or AI will govern the law

Part of what makes AI so challenging to regulate is that the systems reach far beyond their technical components and specific products. The impact of AI, when seen as a knowledge structure, can be better understood as a philosophical force. Generative AI, machine learning, algorithms, and other subsets of AI do not operate absent the context through which they are developed and implemented. They are informed and learn by digesting collective narratives and can reflect existing hierarchies based on preexisting historical, philosophical, political and socioeconomic structures. Acknowledging this allows us to visualize how AI may perpetuate inequities and antidemocratic values that constitutional democracies have sought to correct for generations.

The U.S. Constitution is inspired by a philosophy of how to guarantee rights and constrain power. It separates and decentralizes power, and installs checks and balances, to avoid power abuses. AI must be viewed in much the same way. Both the Constitution and AI are highly philosophical. Putting them side-by-side allows us to understand how they might be in tension with each other on a philosophical level. If we look at AI as only a technology, we will miss how AI can transform into a governing philosophy that attempts to rival the governing philosophy of a constitutional democracy.