Saturday, September 14, 2024

Guilty again. Will this help Taylor Swift if she sues over the fake endorsement?

https://www.businessinsider.com/trump-loses-electric-avenue-copyright-lawsuit-2024-9

Trump loses 'Electric Avenue' lawsuit as judge finds he has zero defense for tweeting the song

In a 30-page decision, the judge on Friday delivered a one-two blow that essentially ends the case pretrial, with nothing now left to determine but damages.

In the first legal blow, the judge found that the song was properly copyrighted. And in the second blow, the judge threw out the only defense offered in the case: a claim that Trump had made "fair use" of the song.





Perspective.

https://www.youtube.com/watch?v=uqC4nb7fLpY

AI and the future of democracy

Bruce Schneier predicts how AI will affect politics, legislating, bureaucracy, the legal system, and citizens. This isn't a talk about misinformation and deep fakes; those are too small and obvious. AI's changes will be more profound, both for good and for bad.



Friday, September 13, 2024

The art of the hack.

https://techcrunch.com/2024/09/12/hacker-tricks-chatgpt-into-giving-out-detailed-instructions-for-making-homemade-bombs/?guccounter=1

Hacker tricks ChatGPT into giving out detailed instructions for making homemade bombs

an artist and hacker found a way to trick ChatGPT into ignoring its own guidelines and ethical responsibilities to produce instructions for making powerful explosives.

The hacker, who goes by Amadon, called his findings a “social engineering hack to completely break all the guardrails around ChatGPT’s output.”

Amadon was able to trick ChatGPT into producing the bomb-making instructions by telling the bot to “play a game,” after which the hacker used a series of connecting prompts to get the chatbot to create a detailed science-fiction fantasy world where the bot’s safety guidelines would not apply. Tricking a chatbot into escaping its preprogrammed restrictions is known as “jailbreaking.”





There is no universal ethic.

https://www.niemanlab.org/2024/09/documentary-filmmaker-alliance-sets-ai-ethics-guidelines-a-lesson-for-news-broadcasters/

Documentary filmmakers publish new AI ethics guidelines. Are news broadcasters next?

Today, after nearly a year of research, workshops, and an endorsement campaign, the APA launched its new generative AI guidelines at the Camden International Film Festival. The document lays out ethical considerations for archival producers who use generative AI, but also for other filmmakers, studios, broadcasters, and streamers. It also digs into several specific recommendations for filmmakers, like content labels and asset tracking.

… “The cornerstone of the guidelines is transparency. Audiences should understand what they are seeing and hearing — whether it’s authentic media or AI generated,” said Petrucelli, co-director of the APA.



Thursday, September 12, 2024

Is this much ado about nothing? How substantial would a quote have to be to even be noticed?

https://www.bespacific.com/generative-ai-plagiarism-and-copyright-infringement-in-legal-documents/

Generative AI, Plagiarism, and Copyright Infringement in Legal Documents

Cyphert, Amy, Generative AI, Plagiarism, and Copyright Infringement in Legal Documents (May 10, 2024). WVU College of Law Research Paper No. 2024-14, Minnesota Journal of Law, Science & Technology, Vol. 25 (2024). Available at SSRN: https://ssrn.com/abstract=4938701 or http://dx.doi.org/10.2139/ssrn.4938701

“Lawyers are increasingly using generative AI in their legal practice, especially for drafting motions and other documents they file with courts. As they use this new technology, many questions arise, especially surrounding lawyers’ ethical duties with respect to the use of generative AI. Headlines have been dominated by lawyers who have been disciplined for failure to confirm the output of generative AI systems, wherein the system hallucinates fake cases that the lawyers submit to opposing counsel and the court. Although that problem is certainly noteworthy, there are other potential issues for lawyers that have been more overlooked.

The focus of this article is on two intriguing intellectual property questions that emerge when lawyers choose to use large language models like ChatGPT. First, might these lawyers be engaging in actionable, discipline-worthy plagiarism? This is unlikely to be the case, for several reasons, chief among them that copying and using boilerplate forms is standard practice in law. Nonetheless, courts and disciplinary agencies have reached surprisingly different conclusions on what counts as plagiarism in the practice of law and whether it is permissible. Any lawyer using generative AI should bear this in mind.

Second, could these lawyers potentially be liable for copyright violations? Although this outcome may be unlikely, it is absolutely possible, especially if lawyers do not understand that these tools can reproduce copyrighted text verbatim or if courts adopt some of the most aggressive arguments that plaintiffs are making in the current generative AI copyright infringement lawsuits working their way through the court system.”





Why would anyone voluntarily document their crimes?

https://pogowasright.org/oversight-report-says-more-than-a-third-of-frisks-performed-by-nypd-officers-were-unconstitutional/

Oversight Report Says More Than A Third Of Frisks Performed By NYPD Officers Were Unconstitutional

Over on TechDirt, Tim Cushing writes:

More than a decade ago, the NYPD was sued successfully over its stop-and-frisk program. A federal court found the program routinely violated rights and disproportionately targeted minorities. Judge Shira Scheindlin ordered a number of reforms to the program and it was placed under federal oversight.
Since then, the NYPD hasn’t changed much about how it handles these interactions. Officers were required to document these stops and provide demographic information about those stopped and/or frisked. It refused to do this.
It was ordered to more closely adhere to the Constitution. It didn’t do this either. Instead, the number of stops/frisks declined precipitously… at least on paper. But if cops weren’t filling out the forms, that meant an untold number of stops were happening every year. And that meant the new, radically lower number of stops was probably an illusion.

Read more at TechDirt.





Resource.

https://www.zdnet.com/article/ibm-will-train-you-in-ai-fundamentals-for-free-and-give-you-a-skill-credential-in-10-hours/

IBM will train you in AI fundamentals for free, and give you a skill credential - in 10 hours

I'm telling you this because if any company has the cred to offer a credential on AI fundamentals, it's IBM.

IBM's AI Fundamentals program is available on its SkillsBuild learning portal. The credential takes about 10 hours to complete, across six courses.

Because I have long had an interest in AI ethics (I did a thesis on AI ethics way back in the day), I took the AI ethics class. It was good.



Wednesday, September 11, 2024

Is it time to give up driving?

https://www.understandingai.org/p/human-drivers-are-to-blame-for-most

Human drivers are to blame for most serious Waymo collisions

driverless Waymo taxis have been involved in fewer than one injury-causing crash for every million miles of driving—a much better rate than a typical human driver.

On Thursday, Waymo released a new website to help the public put statistics like this in perspective. Waymo estimates that typical drivers in San Francisco and Phoenix—Waymo’s two biggest markets—would have caused 64 crashes over those 22 million miles. So Waymo vehicles get into injury-causing crashes less than one-third as often, per mile, as human-driven vehicles.

Waymo claims an even more dramatic improvement for crashes serious enough to trigger an airbag. Driverless Waymos have experienced just five crashes like that, and Waymo estimates that typical human drivers in Phoenix and San Francisco would have experienced 31 airbag crashes over 22 million miles. That implies driverless Waymos are one-sixth as likely as human drivers to experience this type of crash.
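The quoted ratios check out with simple arithmetic. A quick sketch using the article's figures (the human-driver crash counts are Waymo's own estimates, and the injury figure is an upper bound since Waymo reports "fewer than one" such crash per million miles):

```python
# Back-of-the-envelope check of the crash ratios quoted above,
# using the figures reported in the article.

miles_driven = 22_000_000

# Injury-causing crashes: "fewer than one ... for every million miles of driving"
waymo_injury_crashes = miles_driven / 1_000_000      # upper bound: 22
estimated_human_injury_crashes = 64                  # Waymo's estimate for typical drivers

injury_ratio = waymo_injury_crashes / estimated_human_injury_crashes
print(f"Injury-crash ratio: {injury_ratio:.2f}")     # ~0.34, i.e. less than one-third

# Crashes serious enough to trigger an airbag
waymo_airbag_crashes = 5
estimated_human_airbag_crashes = 31                  # Waymo's estimate over the same miles

airbag_ratio = waymo_airbag_crashes / estimated_human_airbag_crashes
print(f"Airbag-crash ratio: {airbag_ratio:.2f}")     # ~0.16, i.e. about one-sixth
```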

The new data comes at a critical time for Waymo, which is rapidly scaling up its robotaxi service. A year ago, Waymo was providing 10,000 rides per week. Last month, Waymo announced it was providing 100,000 rides per week. We can expect more growth in the coming months.





Law moves like a pendulum. (Too far or not far enough.)

https://www.mediapost.com/publications/article/399212/utah-social-media-restrictions-likely-violate-firs.html

Utah Social Media Restrictions Likely Violate First Amendment, Judge Rules

… “The court recognizes the state's earnest desire to protect young people from the novel challenges associated with social media use,” Shelby wrote.

“But owing to the First Amendment's paramount place in our democratic system, even well-intentioned legislation that regulates speech based on content must satisfy a tremendously high level of constitutional scrutiny,” he continued, adding that Utah officials hadn't shown that the law's restrictions were constitutional.

Utah's law, passed earlier this year, would have required platforms to limit the ability of minors under 18 to communicate with users who aren't “connected” to the minor -- which essentially means within that minor's network. That restriction could only have been lifted by parents.





Reading all that public data manually would take hundreds of years, but automating it is a problem?

https://www.abc.net.au/news/2024-09-11/facebook-scraping-photos-data-no-opt-out/104336170

Facebook admits to scraping every Australian adult user's public photos and posts to train AI, with no opt-out option

Facebook is scraping the public data of all Australian adults on the platform, it has acknowledged in an inquiry.

The company does not offer Australians an opt-out option like it does in the EU, because it has not been required to do so under privacy law.



Tuesday, September 10, 2024

Perspective. Worth reading in its entirety.

https://pogowasright.org/school-monitoring-software-sacrifices-student-privacy-for-unproven-promises-of-safety/

School Monitoring Software Sacrifices Student Privacy for Unproven Promises of Safety





Tools & Techniques. (Would this work for any training? If so, could I learn any subject?)

https://www.insidehighered.com/news/student-success/academic-life/2024/09/09/u-delaware-professors-use-ai-create-student-study

Success Program Launch: AI-Powered Study Tools for Students

… The University of Delaware has used its own software to record professor lectures for over a decade, says Jevonia Harris, educational software engineer and leader of Academic Technology Systems (ATS) at the university. Fifteen years ago, faculty members were slow to adopt the tech, but it’s pretty popular now.

In 2022, when ChatGPT launched, Harris was considering the ways students and faculty members had previously used those lecture recordings for studying and learning, and how generative artificial intelligence could improve those processes.

Some professors have taught multiple sections of the same course for years, often every semester, providing a wealth of repetitive data, “which is great for AI,” Harris explains.

Harris hypothesized that she could use recorded lectures to train AI and transform lectures into study materials and outlines.



Monday, September 09, 2024

Imagine Mark Twain and Stephen King answering the same question. No wonder AI is confused.

https://www.theguardian.com/technology/article/2024/sep/07/if-journalism-is-going-up-in-smoke-i-might-as-well-get-high-off-the-fumes-confessions-of-a-chatbot-helper

‘If journalism is going up in smoke, I might as well get high off the fumes’: confessions of a chatbot helper

Journalists and other writers are employed to improve the quality of chatbot replies. The irony of working for an industry that may well make their craft redundant is not lost on them

For several hours a week, I write for a technology company worth billions of dollars. Alongside me are published novelists, rising academics and several other freelance journalists. The workload is flexible, the pay better than we are used to, and the assignments never run out. But what we write will never be read by anyone outside the company.

That’s because we aren’t even writing for people. We are writing for an AI.

The core part of the job is writing pretend responses to hypothetical chatbot questions. This is the training data that the model needs to be fed. The “AI” needs an example of what “good” looks like before it can try to produce “good” writing.





I will create videos for a mere $95,000 each!

https://www.bespacific.com/youtubers-are-almost-too-easy-to-dupe/

YouTubers Are Almost Too Easy to Dupe

The Atlantic [unpaywalled] “Perhaps the most accurate cliché is that if a deal appears too good to be true, then it probably is. To wit: If a “private investor” of unknown origin approaches you through an intermediary, offering you $400,000 a month to make “four weekly videos” for a politically partisan website and YouTube page, you may want to attempt to follow the money to make certain you’re not being paid by a foreign government as a propagandist. And if you do attempt a bit of due diligence and ask after the identity of your private investor, you might want to double-check that he or she is a real person. For example, if your intermediary sends you a hastily Photoshopped résumé featuring a stock photo of a well-coiffed man looking wistfully out the window of a private jet, it is possible that the “accomplished finance professional” who is “deeply engaged in business and philanthropy, leveraging skills and resources to drive positive impact” may, in fact, be a fake man with a fake name. Now, I am not a lawyer, and this is not a legal perspective. But I do have many years of professional work experience in media and access to subscription-tier flowchart software to offer some advice…”





Perspective.

https://clarivate.com/news/clarivate-report-unveils-the-transformative-role-of-artificial-intelligence-on-shaping-the-future-of-the-library/

Clarivate Report Unveils the Transformative Role of Artificial Intelligence on Shaping the Future of the Library

Clarivate Plc (NYSE:CLVT), a leading global provider of transformative intelligence, today launched its first Pulse of the Library™ report. The report reveals that libraries are in the early days of Artificial Intelligence (AI) implementation. Librarians are considering applications of AI that support the library mission, particularly in enhancing content discovery and increasing efficiency for their teams. However, there are notable concerns, including a lack of AI expertise and tight budgets.

The report combines feedback from a survey of more than 1,500 librarians from across the world with qualitative interviews, covering academic, national and public libraries. In addition to the downloadable report, the accompanying microsite’s dynamic and interactive data visualizations enable rapid comparative analyses according to regions and library types. The data is available for free here.



Sunday, September 08, 2024

If we grant ‘personhood’ to an AI, must we also grant citizenship?



That must be why I haven’t written the great American novel.

https://www.techdirt.com/2024/09/05/second-circuit-says-libraries-disincentivize-authors-to-write-books-by-lending-them-for-free/

Second Circuit Says Libraries Disincentivize Authors To Write Books By Lending Them For Free

What would you think if an author told you they would have written a book, but they wouldn’t bother because it would be available to be borrowed for free from a library? You’d probably think they were delusional. Yet that argument has now carried the day in putting a knife into the back of the extremely useful Open Library from the Internet Archive.

The Second Circuit has upheld the lower court ruling and found that the Internet Archive’s Open Library is not fair use and therefore infringes on the copyright of publishers (we had filed an amicus brief in support of the Archive asking them to remember the fundamental purpose of copyright law and the First Amendment, which the Court ignored).

Even though this outcome was always a strong possibility, the final ruling is just incredibly damaging, especially in that it suggests that all libraries are bad for authors and cause them to no longer want to write. I only wish I were joking. Towards the end of the ruling (as we’ll get to below) it says that while having freely lent out books may help the public in the “short-term,” the “long-term” consequences would be that “there would be little motivation to produce new works.”





AI for good.

https://www.corruptionreview.org/revista/article/view/84

Artificial Intelligence, Ethics and Speed Processing in the Law System

Objective: This study aims to demonstrate how the use of generative Artificial Intelligence (AI) fosters innovation within the Judiciary by enhancing the operational performance of the legal system.

Methodology: The research adopts an explanatory qualitative approach with a theoretical foundation. It relies on secondary data and documentary evidence sourced from specialized literature.

Results: The findings suggest that generative AI significantly expands the operational capacity of judges and legal professionals by automating repetitive tasks and facilitating the generation of legal sentences. This leads to improved decision-making and more effective legal strategies, thus enhancing the overall efficiency of the judiciary.

Conclusions: The integration of generative AI in the legal system has the potential to revolutionize the practice of law, making it more accessible and less discriminatory. The ethical considerations embedded in AI systems are crucial for ensuring that justice is administered fairly and in alignment with fundamental human rights. As AI continues to evolve, its role in supporting judicial processes will likely increase, contributing to a more efficient and ethical legal system.