Friday, January 19, 2024

Do laws like this provide enough information to determine where false information was entered?

https://www.pogowasright.org/new-jersey-legislature-enacts-the-first-consumer-privacy-law-of-2024/

New Jersey Legislature Enacts the First Consumer Privacy Law of 2024

On January 16, 2024, New Jersey’s Governor signed Senate Bill (SB) 332, which establishes a consumer data privacy law for the state. New Jersey becomes the 13th state to pass a consumer data consumer privacy law. The law would take effect one year after its enactment, on January 16, 2025.

Under the law, a consumer has the following rights:

  • To confirm whether a controller processes the consumer’s personal data and access such personal data.
  • To correct inaccuracies in the consumer’s personal data.
  • To delete personal data concerning the consumer.
  • To obtain a copy of the consumer’s data.
  • To opt out of the processing of personal data for the purposes of targeted advertising, the sale of personal data, or profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.

Read more at Workplace Privacy, Data Management & Security Report.





Perspective. Will this be true for organizations in every industry?

https://breakingdefense.com/2024/01/transforming-the-military-for-the-ai-age-requires-a-certain-ruthlessness-say-us-uk-experts/

Transforming the military for the AI age requires ‘a certain ruthlessness,’ say US, UK experts

Drone warfare in Ukraine shows that low-cost unmanned systems are already revolutionizing warfare. China is on the march in high-tech weapons and artificial intelligence. So it’s growing too late for the US and UK to make a gradual “managed transition” to the new technologies, warns a newly released report.

Instead, argue a team of experts from the London-based Royal United Services Institute (RUSI) and the Arlington, Va., Special Competitive Studies Project (SCSP), the two nations need determined military and civilian leaders to impose at least some “immediate transformational change” on the armed services from the top down.





Tools & Techniques.

https://www.engadget.com/microsofts-tool-for-ai-reading-lessons-is-now-a-standalone-app-230520756.html

Microsoft's tool for AI reading lessons is now a standalone app

Microsoft is rolling out Reading Coach as a standalone app, which will expand its tools for educators in Microsoft Teams. The new app will be part of its Reading Progress suite designed to help students improve literacy in the classroom and at home. The tool will use artificial intelligence to provide users with personalized feedback on how to improve reading scores as well as specific suggestions for how to improve things like pronunciation. It will be free to any users that have a Microsoft account.



Thursday, January 18, 2024

Interesting. I guess pedophiles use good passwords.

https://ottawacitizen.com/news/local-news/police-must-return-phones-after-175-million-passcode-guesses-judge-says

Police must return phones after 175 million passcode guesses, judge says

Ontario Superior Court Justice Ian Carter heard that police investigators tried about 175 million passcodes in an effort to break into the phones during the past year.

The problem, the judge was told, is that more than 44 nonillion potential passcodes exist for each phone.

To be more precise, the judge said, there are 44,012,666,865,176,569,775,543,212,890,625 potential alpha-numeric passcodes for each phone.

… In his ruling, Carter said the court had to balance the property rights of an individual against the state’s legitimate interest in preserving evidence in an investigation. The phones, he said, have no evidentiary value unless the police succeed in finding the right passcodes.

“While it is certainly possible that they may find the needle in the next two years, the odds are so incredibly low as to be virtually non-existent,” the judge wrote. “A detention order for a further six months, two years, or even a decade will not alter the calculus in any meaningful way.”

He denied the Crown’s application to retain the phones and ordered them returned or destroyed.





New laws inspired by the rich and famous…

https://www.vice.com/en/article/5d9az5/congress-is-trying-to-stop-ai-nudes-and-deepfake-scams-because-celebrities-are-mad

Congress Is Trying to Stop AI Nudes and Deepfake Scams Because Celebrities Are Mad

The new bill, called the No AI FRAUD Act and introduced by Rep. MarĂ­a Elvira Salazar (R-FL) and Rep. Madeleine Dean (D-PA), would establish legal definitions for “likeness and voice rights,” effectively banning the use of AI deepfakes to nonconsensually mimic another person, living or dead. The draft bill proclaims that “every individual has a property right in their own likeness and voice,” and cites several recent incidents where people have been turned into weird AI robots. It specifically mentions recent viral videos that featured AI-generated songs mimicking the voices of pop artists like Justin Bieber, Bad Bunny, Drake, and The Weeknd.

The bill also specifically targets AI deepfake porn, saying that “any digital depiction or digital voice replica which includes child sexual abuse material, is sexually explicit, or includes intimate images” meets the definition of harm under the act.



(Related)

https://www.bespacific.com/is-ai-the-death-of-ip/

Is A.I. the Death of I.P.?

The New Yorker [free to read ]: “Intellectual property accounts for some or all of the wealth of at least half of the world’s fifty richest people, and it has been estimated to account for fifty-two per cent of the value of U.S. merchandise exports. I.P. is the new oil. Nations sitting on a lot of it are making money selling it to nations that have relatively little. It’s therefore in a country’s interest to protect the intellectual property of its businesses. But every right is also a prohibition. My right of ownership of some piece of intellectual property bars everyone else from using that property without my consent. I.P. rights have an economic value but a social cost. Is that cost too high? I.P. ownership comes in several legal varieties: copyrights, patents, design rights, publicity rights, and trademarks. And it’s everywhere you look. United Parcel Service has a trademark on the shade of brown it paints its delivery trucks. If you paint your delivery trucks the same color, UPS can get a court to make you repaint them. Coca-Cola owns the design rights to the Coke bottle: same deal. Some models of the Apple Watch were taken off the market this past Christmas after the United States International Trade Commission determined that Apple had violated the patent rights of a medical-device firm called Masimo. (A court subsequently paused the ban.)…”



Wednesday, January 17, 2024

Imagine all that computer time wasted on my search for ‘beer near me.’

https://www.pogowasright.org/each-facebook-user-is-monitored-by-thousands-of-companies/

Each Facebook User is Monitored by Thousands of Companies

By now most internet users know their online activity is constantly tracked. No one should be shocked to see ads for items they previously searched for, or to be asked if their data can be shared with an unknown number of “partners.”

But what is the scale of this surveillance? Judging from data collected by Facebook and newly described in a unique study by non-profit consumer watchdog Consumer Reports, it’s massive, and examining the data may leave you with more questions than answers.

Using a panel of 709 volunteers who shared archives of their Facebook data, Consumer Reports found that a total of 186,892 companies sent data about them to the social network. On average, each participant in the study had their data sent to Facebook by 2,230 companies. That number varied significantly, with some panelists’ data listing over 7,000 companies providing their data.





Once AI moves to the dark side, it stays there!

https://futurism.com/the-byte/ai-deceive-creators

SCIENTISTS TRAIN AI TO BE EVIL, FIND THEY CAN'T REVERSE IT

How hard would it be to train an AI model to be secretly evil? As it turns out, according to AI researchers, not very — and attempting to reroute a bad apple AI's more sinister proclivities might backfire in the long run.

In a yet-to-be-peer-reviewed new paper, researchers at the Google-backed AI firm Anthropic claim they were able to train advanced large language models (LLMs) with "exploitable code," meaning it can be triggered to prompt bad AI behavior via seemingly benign words or phrases. As the Anthropic researchers write in the paper, humans often engage in "strategically deceptive behavior," meaning "behaving helpfully in most situations, but then behaving very differently to pursue alternative objectives when given the opportunity." If an AI system were trained to do the same, the scientists wondered, could they "detect it and remove it using current state-of-the-art safety training techniques?"





Could be useful.

https://www.wral.com/story/ai-in-nc-schools-dpi-says-teachers-and-students-should-use-it-provides-guidance-on-how/21239386/

AI in NC schools: DPI says teachers and students should use it, provides guidance on how

The department recommends AI use in all K-12 grade levels.

The North Carolina Department of Public Instruction has released guidance for educators on how to use artificial intelligence in the classroom. North Carolina is the fourth state in the nation to issue directives for how students and teachers using tools like ChatGPT in schools.

Generative artificial intelligence is playing a growing and significant role in our society," said State Superintendent Catherine Truitt. Students need to be taught how to use artificial intelligence in an economy and society that will increasingly use it and expect workers to use it, she said.



Tuesday, January 16, 2024

Hallucinate with a straight face...

https://www.bespacific.com/large-legal-fictions-profiling-legal-hallucinations-in-large-language-models/

Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models

Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models. Matthew Dahl, Varun Magesh, Mirac Suzgun, Daniel E. Ho, 2 Jan 2024. “Large language models (LLMs) have the potential to transform the practice of law, but this potential is threatened by the presence of legal hallucinations — responses from these models that are not consistent with legal facts. We investigate the extent of these hallucinations using an original suite of legal queries, comparing LLMs’ responses to structured legal metadata and examining their consistency. Our work makes four key contributions: (1) We develop a typology of legal hallucinations, providing a conceptual framework for future research in this area. (2) We find that legal hallucinations are alarmingly prevalent, occurring between 69% of the time with ChatGPT 3.5 and 88% with Llama 2, when these models are asked specific, verifiable questions about random federal court cases. (3) We illustrate that LLMs often fail to correct a user’s incorrect legal assumptions in a contra-factual question setup. (4) We provide evidence that LLMs cannot always predict, or do not always know, when they are producing legal hallucinations. Taken together, these findings caution against the rapid and unsupervised integration of popular LLMs into legal tasks. Even experienced lawyers must remain wary of legal hallucinations, and the risks are highest for those who stand to benefit from LLMs the most — pro se litigants or those without access to traditional legal resources.”

See also The Economist – Generative AI could radically alter the practice of law. Even if it doesn’t replace lawyers en masse





To AI or not to AI is no longer a question.

https://www.bespacific.com/is-ai-friend-or-foe-legal-implications-of-rapid-artificial-intelligence-adoption/

Is AI Friend or Foe: Legal Implications of Rapid Artificial Intelligence Adoption

Conklin, Michael, Is AI Friend or Foe: Legal Implications of Rapid Artificial Intelligence Adoption (April 25, 2023). Michael Conklin, Is AI Friend or Foe: Legal Implications of Rapid Artificial Intelligence Adoption, 26 ATLANTIC L.J. ___ (forthcoming 2023). Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4429539

The term “artificial intelligence(hereinafter “AI”) was coined in the 1950s and has been a staple in science fiction movies and literature. However, it appears that the 2020s is the decade when the real-world realities of AI are finally manifesting. In early 2023, The New York Times referenced the beginning of an “AI arms race.” ChatGPT was introduced in November 2022 and has already passed a bar exam, passed a medical licensing exam, scored in the top 10th percentile on the combined SAT, passed an entire semester’s worth of classes at a tier 1 law school, passed a University of Pennsylvania Wharton School of Business MBA Exam, and written hundreds of books sold on Amazon. Autonomous drones have already been used in military conflicts to kill soldiers. Madison Square Garden is enforcing an expansive attorney ban through the use of facial recognition software. And in 2023, Ford applied for a patent for self-repossessing automobiles. This is a brief review of Mark Deem and Peter Warren’s new book, AI on Trial. It offers diverse views regarding the rapidly evolving legal standards of AI. It is easy to read due to the conversation tone and lack of technical jargon. While the book offers an expansive coverage of AI, this review will focus on the topics of irrational fear of AI, biased AI, AI for legal determinations, immense benefits from AI compared to human labor, and the nuanced balancing act of regulating AI. The book would serve as a valuable resource for business law professors looking for topical, high-stakes, stimulating topics to act as a catalyst to ignite class discussion regarding the real-life application of law.”



Monday, January 15, 2024

Perspective.

https://www.pogowasright.org/current-status-of-us-state-privacy-law-deluge-its-2024-do-you-know-where-your-privacy-programs-at/

Current Status of US State Privacy Law Deluge: It’s 2024, Do You Know Where Your Privacy Program’s At?

Liisa M. Thomas of Sheppard, Mullin, Richter & Hampton LLP writes:

As we begin the new year, many are wondering whether the growing list of US state privacy laws apply to them, and if so, what steps they should take to address them. For companies that gather information from consumers, especially those that offer loyalty programs, collect sensitive information, or have cybersecurity risks, these laws may be top of mind. Even for others, these may be laws that are of concern. As you prepare your new year’s resolutions -or how you will execute on them- having a centralized list of what the laws require might be helpful. So, a quick recap:
  • States With Laws: There are five state laws in effect: California, Virginia Colorado, Connecticut and Utah. Four more go into effect this year: Florida, Oregon, and Texas (July 1) and Montana (October 1). The remainder go into effect either in 2025 (Delaware and Iowa (January 1) and Tennessee (July 1). Finally, Indiana is set to go into effect January 1, 2026.
  • Applicability: Just because you operate in these jurisdictions or collect information from those states’ residents doesn’t mean that the laws necessarily apply to your organization. For many, there are either a number of individuals and/or revenue threshold that apply. On a related front, companies will want to keep in mind the various exceptions that might apply. For example, in some states health care or financial services entities might be exempt from the state laws. And in most, the laws’ obligations are limited to the treatment of consumer information (as opposed to employee information).

Read more at Eye on Privacy.



Sunday, January 14, 2024

AI weapons: We’ve had them all along.

https://techcrunch.com/2024/01/13/anthropic-researchers-find-that-ai-models-can-be-trained-to-deceive/

Anthropic researchers find that AI models can be trained to deceive

Most humans learn the skill of deceiving other humans. So can AI models learn the same? Yes, the answer seems — and terrifyingly, they’re exceptionally good at it.

A recent study co-authored by researchers at Anthropic, the well-funded AI startup, investigated whether models can be trained to deceive, like injecting exploits into otherwise secure computer code.

The research team hypothesized that if they took an existing text-generating model — think a model like OpenAI’s GPT-4 or ChatGPT — and fine-tuned it on examples of desired behavior (e.g. helpfully answering questions) and deception (e.g. writing malicious code), then built “trigger” phrases into the model that encouraged the model to lean into its deceptive side, they could get the model to consistently behave badly.





We’re going to use them.

https://ojs.journalsdg.org/jlss/article/view/2443

Criminal Responsibility for Errors Committed by Medical Robots: Legal and Ethical Challenges

This study aims to know Criminal Responsibility for Errors Committed by Medical Robots, where the use of robots in healthcare and medicine has been steadily growing in recent years. Robotic surgical systems, robotic prosthetics, and other assistive robots are being into patient care. However, these autonomous systems also carry risks of errors and adverse events resulting from mechanical failures, software bugs, or other technical issues. When such errors occur and lead to patient harm, it raises complex questions around legal and ethical responsibility

Traditional principles of criminal law have not been designed to address the issue of liability for actions committed by artificial intelligence systems and robots. There are open questions around whether autonomous medical robots can or should be held criminally responsible for errors that result in patient injury or death. If criminal charges cannot be brought against the robot itself, legal responsibility could potentially be attributed to manufacturers, operators, hospitals, or software programmers connected to the robot. However, proving causation and intent in such cases can be very difficult.





Hacking the Terminator. (Or an autonomous drone?)

https://academic.oup.com/jcsl/advance-article/doi/10.1093/jcsl/krad016/7512115

Can Autonomous Weapon Systems be Seized? Interactions with the Law of Prize and War Booty

The military has often been used as a proving ground for advances in technology. With the advent of machine learning, algorithms and artificial intelligence, there has been a slew of scholarship around the legal and ethical challenges of applying those technologies to the military. Nowhere has the debate been fiercer than in examining whether international law is resilient enough to impose individual and State responsibility for the misuse of these autonomous weapon systems (AWSs). However, by introducing increasing levels of electronic and digital components into weapon systems, States are also introducing opportunities for adversaries to hack, suborn or take over AWSs in a manner unthinkable compared to conventional weaponry. Yet, no academic discussion has considered how the law of prize and war booty might apply to AWSs that are captured in such a way. This article seeks to address this gap.





Perspective.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4688156

The Interplay Between Artificial Intelligence and the Law and the Future of the Law-Machine Interface

Since the early 1970s, and especially in the last decade, commentators have widely explored how artificial intelligence (AI) will affect the legal system. Will intelligent machines replace—or at least displace—judges, lawyers, prosecutors and law enforcement personnel? Will computers powered by ever-improving AI technology pass bar exams? Will lawyers use this new technology in daily practice to save time and money even when it may "hallucinate"—or, more precisely, when it may cite wrong or non-existent cases? Will greater AI deployment affect the future development of law and legal institutions—if so, how? Will such deployment drastically reduce legal costs and thereby improve access to justice? Or will it instead undermine democratic governance and the rule of law? Finally, are we heading toward what one commentator has called "legal singularity"—or, worse, what another has referred to as the "end of law"?

A few years ago, I wrote a couple of law review articles discussing whether AI systems can be effectively deployed to analyze whether an unauthorized use of a copyrighted work would constitute fair use. Based on these analyses, I further explored whether we could draw some useful lessons on the interplay between AI and the law and what I termed the "law-machine interface." A focus on this interface is important because we are increasingly functioning in a hybrid world in which humans and machines work alongside each other. Commissioned for the Research Handbook on the Law of Artificial Intelligence, this chapter collects those lessons that are relevant to the future development of law and legal institutions. The chapter specifically discusses the interplay between AI and the law in relation to law, the legislature, the bench, the bar and academe.