Friday, April 26, 2024

A useful overview?

https://www.bespacific.com/the-legal-ethics-of-generative-ai/

The Legal Ethics of Generative AI

Perlman, Andrew, The Legal Ethics of Generative AI (February 22, 2024). Suffolk University Law Review, Forthcoming, Available at SSRN: https://ssrn.com/abstract=4735389 or http://dx.doi.org/10.2139/ssrn.4735389

The legal profession is notoriously conservative when it comes to change. From email to outsourcing, lawyers have been slow to embrace new methods and quick to point out potential problems, especially ethics-related concerns. The legal profession’s approach to generative artificial intelligence (generative AI) is following a similar pattern. Many lawyers have readily identified the legal ethics issues associated with generative AI, often citing the New York lawyer who cut and pasted fictitious citations from ChatGPT into a federal court filing. Some judges have gone so far as to issue standing orders requiring lawyers to reveal when they use generative AI or to ban the use of most kinds of artificial intelligence (AI) outright. Bar associations are chiming in on the subject as well, though they have (so far) taken an admirably open-minded approach to the subject. Part II of this essay explains why the Model Rules of Professional Conduct (Model Rules) do not pose a regulatory barrier to lawyers’ careful use of generative AI, just as the Model Rules did not ultimately prevent lawyers from adopting many now-ubiquitous technologies. Drawing on my experience as the Chief Reporter of the ABA Commission on Ethics 20/20 (Ethics 20/20 Commission), which updated the Model Rules to address changes in technology, I explain how lawyers can use generative AI while satisfying their ethical obligations. Although this essay does not cover every possible ethics issue that can arise or all of generative AI’s law-related use cases, the overarching point is that lawyers can use these tools in many contexts if they employ appropriate safeguards and procedures. Part III describes some recent judicial standing orders on the subject and explains why they are ill-advised. The essay closes in Part IV with a potentially provocative claim: the careful use of generative AI is not only consistent with lawyers’ ethical duties, but the duty of competence may eventually require lawyers’ use of generative AI. The technology is likely to become so important to the delivery of legal services that lawyers who fail to use it will be considered as incompetent as lawyers today who do not know how to use computers, email, or online legal research tools.”





Real rules on deepfakes?

https://www.bespacific.com/deepfakes-in-the-courtroom/

Deepfakes in the courtroom

Ars Technica: “US judicial panel debates new AI evidence rules Panel of eight judges confronts deep-faking AI tech that may undermine legal trials. On Friday, a federal judicial panel convened in Washington, DC, to discuss the challenges of policing AI-generated evidence in court trials, according to a Reuters report. The US Judicial Conference’s Advisory Committee on Evidence Rules, an eight-member panel responsible for drafting evidence-related amendments to the Federal Rules of Evidence, heard from computer scientists and academics about the potential risks of AI being used to manipulate images and videos or create deepfakes that could disrupt a trial. The meeting took place amid broader efforts by federal and state courts nationwide to address the rise of generative AI models (such as those that power OpenAI’s ChatGPT or Stability AI’s Stable Diffusion ), which can be trained on large datasets with the aim of producing realistic text, images, audio, or videos. In the published 358-page agenda for the meeting, the committee offers up this definition of a deepfake and the problems AI-generated media may pose in legal trials..”





How should you regulate an AI that might grow into a person?

https://coloradosun.com/2024/04/25/colorado-generative-ai-artificial-intelligence-senate/

Colorado bill to regulate generative artificial intelligence clears its first hurdle at the Capitol

A Colorado bill that would require companies to alert consumers anytime artificial intelligence is used and to add more protections to the budding AI industry cleared its first legislative hurdle late Wednesday, even as critics testified it could stifle technological innovation in the state.

At the end of the evening, most sides seemed to agree: The bill still needs work.



Thursday, April 25, 2024

Congress is uncomfortable with TikTok.

https://www.theverge.com/2024/4/24/24139036/biden-signs-tiktok-ban-bill-divest-foreign-aid-package

Biden signs TikTok ‘ban’ bill into law, starting the clock for ByteDance to divest it



(Related) President Biden is comfortable with TikTok.

https://www.nbcnews.com/politics/joe-biden/biden-campaign-keep-using-tiktok-signed-ban-law-rcna149158

Biden campaign plans to keep using TikTok through the election





And then what? Do we trust it enough to send them the location and date of the next insurrection?

https://www.nationalreview.com/corner/good-news-ai-can-apparently-spot-conservatives-on-sight-via-facial-recognition-technology/

Good News: AI Can Apparently Spot Conservatives on Sight via Facial Recognition Technology



Wednesday, April 24, 2024

Consent is fiction.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4333743

Murky Consent: An Approach to the Fictions of Consent in Privacy Law

Consent plays a profound role in nearly all privacy laws. As Professor Heidi Hurd aptly said, consent works “moral magic” – it transforms things that would be illegal and immoral into lawful and legitimate activities. As to privacy, consent authorizes and legitimizes a wide range of data collection and processing.

There are generally two approaches to consent in privacy law. In the United States, the notice-and-choice approach predominates; organizations post a notice of their privacy practices and people are deemed to consent if they continue to do business with the organization or fail to opt out. In the European Union, the General Data Protection Regulation (GDPR) uses the express consent approach, where people must voluntarily and affirmatively consent.

Both approaches fail. The evidence of actual consent is non-existent under the notice-and-choice approach. Individuals are often pressured or manipulated, undermining the validity of their consent. The express consent approach also suffers from these problems – people are ill-equipped to decide about their privacy, and even experts cannot fully understand what algorithms will do with personal data. Express consent also is highly impractical; it inundates individuals with consent requests from thousands of organizations. Express consent cannot scale.

In this Article, I contend that most of the time, privacy consent is fictitious. Privacy law should take a new approach to consent that I call “murky consent.” Traditionally, consent has been binary – an on/off switch – but murky consent exists in the shadowy middle ground between full consent and no consent. Murky consent embraces the fact that consent in privacy is largely a set of fictions and is at best highly dubious.

Because it conceptualizes consent as mostly fictional, murky consent recognizes its lack of legitimacy. To return to Hurd’s analogy, murky consent is consent without magic. Rather than provide extensive legitimacy and power, murky consent should authorize only a very restricted and weak license to use data. Murky consent should be subject to extensive regulatory oversight with an ever-present risk that it could be deemed invalid. Murky consent should rest on shaky ground. Because the law pretends people are consenting, the law’s goal should be to ensure that what people are consenting to is good. Doing so promotes the integrity of the fictions of consent. I propose four duties to achieve this end: (1) duty to obtain consent appropriately; (2) duty to avoid thwarting reasonable expectations; (3) duty of loyalty; and (4) duty to avoid unreasonable risk. The law can’t make the tale of privacy consent less fictional, but with these duties, the law can ensure the story ends well.





Tools & Techniques. (Talking gooder to your AI)

https://www.makeuseof.com/ai-prompting-tips-and-tricks-that-actually-work/#explain-what-hasn-39-t-worked-when-you-39-ve-prompted-in-the-past

7 AI Prompting Tips and Tricks That Actually Work

A whole new world of prompt engineering is springing into life, all dedicated to crafting and perfecting the art of AI prompting. But you can skip the tricky bits and improve your AI prompting game with these tips and tricks.





Tools & Techniques. Soon, humans not required.

https://www.police1.com/police-products/police-technology/software/report-writing/axon-releases-draft-one-ai-powered-report-writing-software

Axon releases Draft One, AI-powered report-writing software

Axon has announced the release of Draft One, a new software product that drafts police report narratives in seconds based on auto-transcribed body-worn camera audio, according to a press release.

Reporting is a critical component of good police work, however, it has become a significant part of the job. Axon found that every week officers in the U.S. can spend up to 40% of their time — or 15 hours per week — on what is essentially data entry.





Tools & Techniques.

https://www.lawnext.com/2024/04/launching-today-the-first-meeting-bot-specifically-for-legal-professionals-for-use-in-depositions-hearings-and-more.html

Exclusive: Launching Today Is The First Meeting Bot Specifically for Legal Professionals, for Use In Depositions, Hearings, and More

You may have noticed of late that many of your video meetings have an unfamiliar attendee — a meeting bot, invited by one of the human participants, that produces a recording or transcript when the meeting is over. But while there are several such products on the market, none have been developed to meet the specific needs of legal professionals.

That changes today with the beta launch of CoCounsel.ai, the first legally nuanced meeting bot. It can join a legal event such as a deposition, hearing or arbitration, and it uses legal-specific AI speech-to-text to provide a legally formatted, highly accurate real-time transcript, along with features such as bookmarking, tagging and archiving.



Tuesday, April 23, 2024

This seems to be dominating the news, but I’m not going to spend much time with it.

https://www.bespacific.com/at-the-top-of-the-ticket-a-criminal-defendant/

At the Top of the Ticket, a Criminal Defendant

Greg Olear. Trump may well be a convicted felon by Election Day. He’s still the GOP nominee. “Yesterday, open statements were heard in the case of The People of the State of New York v. Donald J. Trump. The defendant—a fixture in the New York tabloids for decades, a former reality TV star, and, improbably, the 45th President of the United States—is accused of “the crime of FALSIFYING BUSINESS RECORDS IN THE FIRST DEGREE, in violation of Penal Law §175.10,” a Class E felony. There are 34 counts in the indictment, each one specifying a unique instance of Trump running afoul of the law… A Class E felony is as low-rung as it sounds. This isn’t instigating a coup against our democracy, or making off with top secret documents, or bullying Georgia election officials to ensure that an election went his way. In the grand scheme of things, these counts are minor crimes. All it takes is one intractable MAGA on the jury who thinks this is a Deep State conspiracy, or that Stormy Daniels is some vindictive gold-digger, and Trump will skate. Even so, a former POTUS is a criminal defendant. Let’s pause for a moment and—to use a phrase I abhor that was ubiquitous on Twitter seven years ago—let that sink in. None of the other 43 previous presidents (Grover Cleveland was 22 and 24) were indicted for even a single crime, Ulysses Grant’s need for speed notwithstanding. Nixon likely would have been but was pre-emptively pardoned, so we’ll never know. A FPOTUS indictment, therefore, is unprecedented. And this is just the first of Trump’s criminal trials. There are three more pending. Not one, not two, but three: four, altogether. Four! That doesn’t even take into account the civil fraud case, where the State of New York is poised to seize almost half a billion dollars in assets from Trump pending appeal—and that assumes that the bond he secured winds up being legit…”

See also Axios: New York Courts to release daily transcripts from Trump hush money trial



Yes and no. Some things change, some remain the same.

https://www.axios.com/2024/04/16/ai-top-secret-intelligence

"Top secret" is no longer the key to good intel in an AI world: report

… Today's intelligence systems cannot keep pace with the explosion of data now available, requiring "rapid" adoption of generative AI to keep an intelligence advantage over rival powers.

  • The U.S. intelligence community "risks surprise, intelligence failure, and even an attrition of its importance" unless it embraces AI's capacity to process floods of data, according to the report from the Special Competitive Studies Project.

  • The federal government needs to think more in terms of "national competitiveness" than "national security," given the wider range of technologies now used to attack U.S. interests.



Something I have been meaning to try. Could this turn Shakespeare into a graphic novel?

https://www.makeuseof.com/best-open-source-ai-image-generators/

The 5 Best Open-Source AI Image Generators

AI-based text-to-image generation models are everywhere and becoming easier to access daily. While it's easy just to visit a website and generate the image you're looking for, open-source text-to-image generators are your best bet if you want more control over the generation process.



 

Sunday, April 21, 2024

Long term implications? Pollution of the LLM corpus.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4771884

Do large language models have a legal duty to tell the truth?

Careless speech is a new type of harm created by large language models (LLM) that poses cumulative, long-term risks to science, education, and the development of shared social truths in democratic societies. LLMs produce responses that are plausible, helpful, and confident but that contain factual inaccuracies, inaccurate summaries, misleading references, and biased information. These subtle mistruths are poised to cause a severe cumulative degradation and homogenisation of knowledge over time. This article examines the existence and feasibility of a legal duty for LLM providers to create models that “tell the truth.” We argue that LLM providers should be required to mitigate careless speech and better align their models with truth through open, democratic processes. Careless speech is defined and contrasted with the simplified concept of “ground truth” in LLMs and prior discussion of related truth-related risks in LLMs including hallucinations, misinformation, and disinformation. The existence of truth-related obligations in EU law is then assessed, focusing on human rights law and liability frameworks for products and platforms. Current frameworks generally contain relatively limited, sector-specific truth duties. The article concludes by proposing a pathway to create a legal truth duty applicable to providers of both narrow- and general-purpose LLMs.





Law firms will use AI. How will they prepare?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4794225

Leveraging The Use of Artificial Intelligence In Legal Practice

The integration of Artificial Intelligence (AI) into legal practice has revolutionized the legal landscape, offering unprecedented opportunities for efficiency and accuracy. By embracing AI technologies and adapting to the evolving legal landscape, legal professionals can enhance efficiency, accuracy, and client satisfaction, ultimately shaping the future of the legal profession. However, the adoption of AI in legal practice also presents challenges, including ethical considerations, data privacy concerns, and the need for specialized training. As legal professionals embrace AI technologies, it becomes imperative to address these challenges proactively and ensure responsible and ethical use. This presentation explores the diverse applications of AI in legal practice and its implications for the legal profession.



Saturday, April 20, 2024

Do we need a chapter here?

https://www.geekwire.com/2024/seattle-tech-vet-calls-rapidly-growing-ai-tinkerers-meetups-the-new-homebrew-computer-club-for-ai/

Seattle tech vet calls rapidly growing ‘AI Tinkerers’ meetups the new ‘Homebrew Computer Club’ for AI

A first meetup in Seattle in November 2022 attracted 12 people. A second in Austin was led by GitHub Copilot creator Alex Graveley, who came up with the name “AI Tinkerers.”

Nearly a year-and-a-half later, Heitzeberg said the idea has taken off and is going global. In a LinkedIn post last week, he said eight cities — from Seattle to Chicago to Boston to Medellin, Colombia, and elsewhere — have AI Tinkerers meetups planned over the next month.

We are kind of the Homebrew Computer Club of AI,” Heitzeberg said, referencing the famed hobbyist group that gathered in Silicon Valley in the mid-1970s to mid-1980s and attracted the likes of Apple founders Steve Jobs and Steve Wozniak. “It was people trying stuff. It’s that for AI, and it’s really needed and really good for innovation.”



Friday, April 19, 2024

I worry that “force” might eventually include beating a password out of me.

https://www.bespacific.com/cops-can-force-suspect-to-unlock-phone-with-thumbprint-us-court-rules/

Cops can force suspect to unlock phone with thumbprint, US court rules

Ars Technica: “The US Constitution’s Fifth Amendment protection against self-incrimination does not prohibit police officers from forcing a suspect to unlock a phone with a thumbprint scan, a federal appeals court ruled yesterday. The ruling does not apply to all cases in which biometrics are used to unlock an electronic device but is a significant decision in an unsettled area of the law. The US Court of Appeals for the 9th Circuit had to grapple with the question of “whether the compelled use of Payne’s thumb to unlock his phone was testimonial,” the ruling in United States v. Jeremy Travis Payne said. “To date, neither the Supreme Court nor any of our sister circuits have addressed whether the compelled use of a biometric to unlock an electronic device is testimonial.” A three-judge panel at the 9th Circuit ruled unanimously against Payne, affirming a US District Court’s denial of Payne’s motion to suppress evidence. Payne was a California parolee who was arrested by California Highway Patrol (CHP) after a 2021 traffic stop and charged with possession with intent to distribute fentanyl, fluorofentanyl, and cocaine. There was a dispute in District Court over whether a CHP officer “forcibly used Payne’s thumb to unlock the phone.” But for the purposes of Payne’s appeal, the government “accepted the defendant’s version of the facts, i.e., ‘that defendant’s thumbprint was compelled.'” Payne’s Fifth Amendment claim “rests entirely on whether the use of his thumb implicitly related certain facts to officers such that he can avail himself of the privilege against self-incrimination,” the ruling said. Judges rejected his claim, holding “that the compelled use of Payne’s thumb to unlock his phone (which he had already identified for the officers) required no cognitive exertion, placing it firmly in the same category as a blood draw or fingerprint taken at booking.” “When Officer Coddington used Payne’s thumb to unlock his phone—which he could have accomplished even if Payne had been unconscious—he did not intrude on the contents of Payne’s mind,” the court also said…”





Perspective. Worth an hour of your time.

https://www.nationalreview.com/corner/the-rise-of-the-machines-john-etchemendy-and-fei-fei-li-on-our-ai-future/

The Rise of The Machines: John Etchemendy and Fei-Fei Li on Our AI Future

John Etchemendy and Fei-Fei Li are the co-directors of Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), founded in 2019 to “advance AI research, education, policy and practice to improve the human condition.” In this interview, they delve into the origins of the technology, its promise, and its potential threats. They also discuss what AI should be used for, where it should not be deployed, and why we as a society should — cautiously — embrace it.





Interesting story of an unlevel playing field.

https://lawrencekstimes.com/2024/04/18/lhs-journalists-dispute-gaggle/

Lawrence journalism students convince district to reverse course on AI surveillance they say violates freedom of press

Journalism students at Lawrence High School have convinced the school district to remove their files from the purview of a controversial artificial intelligence surveillance system after months of debate with administrators.

The AI software, called Gaggle, sifts through anything connected to the district’s Google Workspace — which includes Gmail, Drive and other products — and flags content it deems a safety risk, such as allusions to self-harm, depression, drug use and violence.



Thursday, April 18, 2024

Sorry for the short notice but I just found out myself. Privacy Foundation Seminar:

Artificial Intelligence and the Practice of Law

Friday April 19th 11:30 – 1:00

1 ethics CLE credit. Contact Kristen Dermyer 303-871-6487 <Kristen.Dermyer@du.edu> to register.





...and not just for lawyers.

https://www.bespacific.com/the-smartest-way-to-use-ai-at-work/

The Smartest Way to Use AI at Work

WSJ via MSN: “By by day, there’s growing pressure at the office. Do you respond to all those clients—or let AI do it? Do you attend that meeting—or do you send a bot? About 20% of employed adults said they have used OpenAI’s ChatGPT for work as of February 2024, up from 8% a year ago, according to Pew Research Center. The most popular uses for AI at work are research and brainstorming, writing first-draft emails and creating visuals and presentations, according to an Adobe survey. Productivity boosts from AI are estimated to be worth trillions of dollars over the next decade, say consultants. Many companies are encouraging their workers to embrace and learn the new tools. The industries that will benefit most are sales and marketing, customer care, software engineering and product development. For most workers, it can make your day-to-day a bit less annoying. “If you’re going to use it as a work tool,” said Lareina Yee, a senior partner at the consulting firm McKinsey and chair of its Technology Council, “you need to think of all the ways it can change your own productivity equation.” Using AI at work could get you fired—or at least in hot water. A judge last year sanctioned a lawyer who relied on fake cases generated by ChatGPT, and some companies have restricted AI’s usage. Other companies and bosses are pushing staff to do more with AI, but you’ll need to follow guidelines. Rule No. 1: Don’t put any company data into a tool without permission. And Rule No. 2: Only use AI to do work you can easily verify, and be sure to check its work…” Uses include: Email; Presentations; Summaries; Meetings.





Too many tools, too little time.

https://www.makeuseof.com/custom-gpts-that-make-chat-gpt-better/

10 Custom GPTs That Actually Make ChatGPT Better

ChatGPT on its own is great, but did you know that you can use custom GPTs to streamline its functionality? Custom GPTs can teach you how to code, plan trips, transcribe videos, and much, much more, and there are heaps for you to choose from.

So, here are the best custom GPTs that actually make ChatGPT a better tool for any situation.





Not sure I believe these numbers…

https://www.edweek.org/technology/see-which-types-of-teachers-are-the-early-adopters-of-ai/2024/04

See Which Types of Teachers Are the Early Adopters of AI

Among social studies and English/language arts teachers, the number of AI users was higher than the general teaching population. Twenty-seven percent of English teachers and social studies teachers use AI tools in their work. By comparison, 19 percent of teachers in STEM disciplines said they use AI, and 11 percent of elementary education teachers reported doing so.



Wednesday, April 17, 2024

I thought this sounded familiar…

https://sloanreview.mit.edu/article/ai-and-statistics-perfect-together/

AI and Statistics: Perfect Together

People are often unsure why artificial intelligence and machine learning algorithms work. More importantly, people can’t always anticipate when they won’t work. Ali Rahimi, an AI researcher at Google, received a standing ovation at a 2017 conference when he referred to much of what is done in AI as “alchemy,” meaning that developers don’t have solid grounds for predicting which algorithms will work and which won’t, or for choosing one AI architecture over another. To put it succinctly, AI lacks a basis for inference: a solid foundation on which to base predictions and decisions.

This makes AI decisions tough (or impossible) to explain and hurts trust in AI models and technologies — trust that is necessary for AI to reach its potential. As noted by Rahimi, this is an unsolved problem in AI and machine learning that keeps tech and business leaders up at night because it dooms many AI models to fail in deployment.

Fortunately, help for AI teams and projects is available from an unlikely source: classical statistics. This article will explore how business leaders can apply statistical methods and statistics experts to address the problem.





Clogging congress. (Or any organization that would take this seriously.)

https://www.schneier.com/blog/archives/2024/04/using-ai-generated-legislative-amendments-as-a-delaying-technique.html

Using AI-Generated Legislative Amendments as a Delaying Technique

Canadian legislators proposed 19,600 amendments —almost certainly AI-generated—to a bill in an attempt to delay its adoption.





Resource.

https://www.bespacific.com/free-guide-learn-how-to-use-chatgpt/

Free guide – Learn how to use ChatGPT

Ben’s Bites – Learn how to use ChatGPT. An introductory overview of ChatGPT, the AI assistant by OpenAI Designed for absolute beginners, this short course explores in simple terms how AI assistant ChatGPT works and how to get started using it.





Tools & Techniques. Could this be trained for other topics?

https://news.yale.edu/2024/04/16/student-developed-ai-chatbot-opens-yale-philosophers-works-all

Student-developed AI chatbot opens Yale philosopher’s works to all

LuFlot Bot, a generative AI chatbot trained on the works of Yale philosopher Luciano Floridi, answers questions on the ethics of digital technology.

Visit this link to converse with LuFlot about the ethics of digital technologies.



Tuesday, April 16, 2024

Is this the first step on the slippery slope to home defense drones armed with napalm and machine guns? (I can see where it would be very satisfying to paint ball a porch pirate!)

https://boingboing.net/2024/04/15/this-armed-security-camera-uses-ai-to-fire-paintballs-or-tear-gas-at-trespassers.html

This armed security camera uses AI to fire paintballs or tear gas at trespassers

PaintCam is an armed home/office security camera that uses AI to spot trespassers and fires paintballs or tear gas projectiles at them. The company's promotional video looks like a parody but apparently this "vigilant guardian that doesn't sleep, blink, or miss a beat" is a real product.

According to New Atlas, the system "uses automatic target marking, face recognition and AI-based decision making to identify unfamiliar visitors to your property, day or night.





When the demand for information is huge, providing anything must be profitable.

https://www.wired.com/story/iran-israel-attack-viral-fake-content/

Fake Footage of Iran’s Attack on Israel Is Going Viral

IN THE HOURS after Iran announced its drone and missile attack on Israel on April 13, fake and misleading posts went viral almost immediately on X. The Institute for Strategic Dialogue (ISD), a nonprofit think tank, found a number of posts that claimed to reveal the strikes and their impact, but that instead used AI-generated videos, photos, and repurposed footage from other conflicts which showed rockets launching into the night, explosions, and even President Joe Biden in military fatigues.

Just 34 of these misleading posts received more than 37 million views, according to ISD. Many of the accounts posting the misinformation were also verified, meaning they have paid X $8 per month for the “blue tick” and that their content is amplified by the platform’s algorithm. ISD also found that several of the accounts claimed to be open source intelligence (OSINT) experts, which has, in recent years, become another way of lending legitimacy to their posts.





I’m trying to get and stay current…

https://aiindex.stanford.edu/report/

Measuring trends in AI

Welcome to the seventh edition of the AI Index report. The 2024 Index is our most comprehensive to date and arrives at an important moment when AI’s influence on society has never been more pronounced. This year, we have broadened our scope to more extensively cover essential trends such as technical advancements in AI, public perceptions of the technology, and the geopolitical dynamics surrounding its development. Featuring more original data than ever before, this edition introduces new estimates on AI training costs, detailed analyses of the responsible AI landscape, and an entirely new chapter dedicated to AI’s impact on science and medicine.

DOWNLOAD THE FULL REPORT

DOWNLOAD INDIVIDUAL CHAPTERS





Tools & Techniques.

https://www.nsa.gov/Press-Room/Press-Releases-Statements/Press-Release-View/Article/3741371/nsa-publishes-guidance-for-strengthening-ai-system-security/

NSA Publishes Guidance for Strengthening AI System Security

The National Security Agency (NSA) is releasing a Cybersecurity Information Sheet (CSI) today, Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems.” The CSI is intended to support National Security System owners and Defense Industrial Base companies that will be deploying and operating AI systems designed and developed by an external entity.



Sunday, April 14, 2024

The evolution of computer crime.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4788909

Hacking Generative AI

Generative AI platforms, like ChatGPT, hold great promise in enhancing human creativity, productivity, and efficiency. However, generative AI platforms are prone to manipulation. Specifically, they are susceptible to a new type of attack called “prompt injection.” In prompt injection, attackers carefully craft their input prompt to manipulate AI into generating harmful, dangerous, or illegal content as output Examples of such outputs include instructions on how to build an improvised bomb, how to make meth, how to hotwire a car, and more. Researchers have also been able to make ChatGPT generate malicious code.

This article asks a basic question: do prompt injection attacks violate computer crime law, mainly the Computer Fraud and Abuse Act? This article argues that they do. Prompt injection attacks lead AI to disregard its own hard-coded content generation restrictions, which allows the attacker to access portions of the AI that are beyond what the system’s developers authorized. Therefore, this constitutes the criminal offense of accessing a computer in excess of authorization. Although prompt injection attacks could run afoul of the Computer Fraud and Abuse Act, this article offers ways to distinguish serious acts of AI manipulation from less serious ones, so that prosecution would only focus on a limited set of harmful and dangerous prompt injections.





Perspective.

https://www.ft.com/content/cde75f58-20b9-460c-89fb-e64fe06e24b9

ChatGPT essay cheats are a menace to us all

The other day I met a British academic who said something about artificial intelligence that made my jaw drop.

The number of students using AI tools like ChatGPT to write their papers was a much bigger problem than the public was being told, this person said.

AI cheating at their institution was now so rife that large numbers of students had been expelled for academic misconduct — to the point that some courses had lost most of a year’s intake. “I’ve heard similar figures from a few universities,” the academic told me.

Spotting suspicious essays could be easy, because when students were asked why they had included certain terms or data sources not mentioned on the course, they were baffled. “They have clearly never even heard of some of the terms that turn up in their essays.”



Saturday, April 13, 2024

Perspective.

https://abovethelaw.com/2024/04/artificial-intelligence-may-not-disrupt-the-legal-profession-for-a-while/

Artificial Intelligence May Not Disrupt The Legal Profession For A While

The work of artificial intelligence definitely still needs to be reviewed by a lawyer.

Ever since ChatGPT roared onto the scene over a year ago, everyone has been talking about how the world will change due to advances in artificial intelligence. Many commentators have singled out the legal industry as a sector that will be particularly impacted by artificial intelligence, since much of the rote work performed by associates can presumably be handled by artificial intelligence in the coming years. Initially, I also believed that artificial intelligence would have a huge impact on the legal profession in the short term, but it now seems that the legal profession will not be materially affected for at least several years, if not longer.



Friday, April 12, 2024

I might find this useful.

https://www.zdnet.com/article/google-and-mit-launch-a-free-generative-ai-course-for-teachers/

Google and MIT launch a free generative AI course for teachers

When considering generative AI in the classroom, many people think of its potential for students; however, teachers can benefit just as much from the technology, if not more. On Thursday, Google and MIT Responsible AI for Social Empowerment and Education (RAISE) unveiled a free Google Generative AI Educators course to help middle and high school teachers use generative AI tools to enhance their workflow and students' classroom experience.

The self-paced, two-hour course instructs teachers how to use generative AI to save time in everyday tasks such as writing emails, modifying content for different reading levels, building creative assessments, structuring activities to students' interests, and more, according to the press release. Teachers can even learn how to use generative AI to help with one of the most time-consuming tasks – lesson planning – by inputting their existing lesson plan into the generative AI models to get ideas on what to do next in the classroom.

https://skillshop.exceedlms.com/student/path/1176018