Saturday, September 23, 2023

Not a fear of AI, a fear of how someone might use AI?

https://www.semafor.com/article/09/22/2023/white-house-could-force-cloud-companies-to-disclose-ai-customers

White House could force cloud companies to disclose AI customers

The White House is considering requiring cloud computing firms to report some information about their customers to the U.S. government, according to people familiar with an upcoming executive order on artificial intelligence.

The provision would direct the Commerce Department to write rules forcing cloud companies like Microsoft, Google, and Amazon to disclose when a customer purchases computing resources beyond a certain threshold. The order hasn’t been finalized and specifics of it could still change.

Similar “know-your-customer” policies already exist in the banking sector to prevent money laundering and other illegal activities, such as the rule requiring firms to report cash transactions exceeding $10,000.

In this case, the rules are intended to create a system that would allow the U.S. government to identify potential AI threats ahead of time, particularly those coming from entities in foreign countries. If a company in the Middle East began building a powerful large language model using Amazon Web Services, for example, the reporting requirement would theoretically give American authorities an early warning about it.

The policy proposal represents a potential step toward treating computing power — or the technical capacity AI systems need to perform tasks — like a national resource. Mining Bitcoin, developing video games, and running AI models like ChatGPT all require large amounts of compute.
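
Mechanically, the proposed rule is the compute analogue of that $10,000 cash-reporting trigger. A minimal sketch in Python, with the threshold figure and record fields invented for illustration (the draft order's actual criteria have not been published):

from dataclasses import dataclass

REPORTING_THRESHOLD_GPU_HOURS = 1_000_000  # invented figure; the draft order's threshold is unknown

@dataclass
class CloudPurchase:
    customer: str
    country: str
    gpu_hours: float  # compute bought in this transaction

def must_report(purchase: CloudPurchase) -> bool:
    """True if the purchase crosses the (hypothetical) reporting threshold."""
    return purchase.gpu_hours >= REPORTING_THRESHOLD_GPU_HOURS

if __name__ == "__main__":
    p = CloudPurchase(customer="ExampleLab", country="AE", gpu_hours=2_500_000)
    if must_report(p):
        print(f"Report to Commerce: {p.customer} ({p.country}), {p.gpu_hours:,.0f} GPU-hours")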





Perspective.

https://bigthink.com/the-well/mary-shelley-lessons-frankenstein-ai/

Mary Shelley’s Frankenstein can illuminate the debate over generative AI

Shelley’s dystopian tale has managed to stay relevant since its publication. It has a riddling, Zen koan-like quality that has edified and entertained readers for centuries, inspiring a range of interpretations. Recently, it has been making appearances in the heated debates over generative artificial intelligence, where it is often invoked as a cautionary tale about the dangers of scientific overreach. Some worry that in pursuing technologies like AI, we are recklessly consigning our species to Victor Frankenstein’s tragic fate. Our wonderchildren, our miraculous machines, might ultimately destroy us. This fear is an expression of what science fiction writer Isaac Asimov once called the “Frankenstein complex,” a Luddite fear of robots.



Friday, September 22, 2023

Not a business model I would have thought of. (Let me browse through your closet to see if I want any of your stuff.)

https://www.forbes.com/sites/alexkonrad/2023/09/21/with-resale-app-tiptop-postmates-founder-bastian-lehmann-wants-to-turn-your-junk-into-cash/?sh=67c7df453fcd

With Resale App TipTop, Postmates Founder Bastian Lehmann Wants To Turn Your Junk Into Cash

With TipTop, Lehmann is taking a big bet on a new approach that uses software to predict the value of a product to offer a consumer instant cash, sight unseen. Its first app, released across the U.S. today and called TipTop Cash, connects to your Gmail or Amazon accounts to scan for past purchases and predict a fair value to offer for each eligible item — then pay it out and send someone to pick it up. A second service launching in November, TipTop Pay, will move that process up-front, offering you a discount on how much you pay for something in exchange for the promise that you’ll return the product after a fixed period of time. TipTop will then resell that inventory to wholesalers or on other third-party marketplaces like eBay.
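
TipTop's pricing model is proprietary, but the flow the article describes (predict a fair resale value from purchase records, hold back a margin, pay cash up front) can be sketched with a naive depreciation model. Everything below, from the depreciation rate to the field names, is invented for illustration:

from datetime import date

def instant_offer(purchase_price: float, purchase_date: date,
                  annual_depreciation: float = 0.35,
                  resale_margin: float = 0.25) -> float:
    """Estimate a cash offer: depreciate the purchase price by age, then hold
    back a margin to cover pickup, refurbishing, and resale risk."""
    age_years = (date.today() - purchase_date).days / 365.25
    estimated_value = purchase_price * (1 - annual_depreciation) ** age_years
    return round(estimated_value * (1 - resale_margin), 2)

# A gadget bought for $300 two years ago.
print(instant_offer(300.00, date(2021, 9, 1)))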





Tools & Techniques.

https://www.makeuseof.com/ai-text-generators-for-writing-inspiration/

5 AI Text Generators for Writing Inspiration



Thursday, September 21, 2023

Humans are taking AI jobs!

https://restofworld.org/2023/ai-developers-fiction-poetry-scale-ai-appen/

Why Silicon Valley’s biggest AI developers are hiring poets

Training data companies are recruiting writers of fiction, drama, and poetry, as well as general humanities experts, to improve AI creative writing.

A string of job postings from high-profile training data companies, such as Scale AI and Appen, are recruiting poets, novelists, playwrights, or writers with a PhD or master’s degree. Dozens more seek general annotators with humanities degrees, or years of work experience in literary fields. The listings aren’t limited to English: Some are looking specifically for poets and fiction writers in Hindi and Japanese, as well as writers in languages less represented on the internet.

The companies say contractors will write short stories on a given topic to feed them into AI models. They will also use these workers to provide feedback on the literary quality of their current AI-generated text.





Unfortunate and probably not well-intentioned.

https://www.pogowasright.org/today-the-uk-parliament-undermined-the-privacy-security-and-freedom-of-all-internet-users/

Today The UK Parliament Undermined The Privacy, Security, And Freedom Of All Internet Users

Joe Mullin of EFF writes:

The U.K. Parliament has passed the Online Safety Bill (OSB), which says it will make the U.K. “the safest place” in the world to be online. In reality, the OSB will lead to a much more censored, locked-down internet for British users. The bill could empower the government to undermine not just the privacy and security of U.K. residents, but internet users worldwide.

A Backdoor That Undermines Encryption

A clause of the bill allows Ofcom, the British telecom regulator, to serve a notice requiring tech companies to scan their users, all of them, for child abuse content. This would affect even messages and files that are end-to-end encrypted to protect user privacy. As enacted, the OSB allows the government to force companies to build technology that can scan regardless of encryption; in other words, to build a backdoor.
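
To see why critics call this a backdoor, note where the scan happens: on the device, before encryption is applied, so end-to-end encryption never protects the flagged content. A minimal sketch of that ordering follows; the blocklist, the reporting channel, and the use of a cryptographic hash are all stand-ins (real proposals typically match perceptual hashes of images):

import hashlib

# Placeholder blocklist; real deployments ship databases of hashes of known illegal content.
BLOCKLIST = {hashlib.sha256(b"known prohibited content").hexdigest()}

def report_to_authority(digest: str) -> None:
    print(f"match reported before encryption: {digest}")  # stand-in for a real reporting channel

def scan_then_encrypt(plaintext: bytes, encrypt) -> bytes:
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in BLOCKLIST:
        report_to_authority(digest)  # fires before any encryption is applied
    return encrypt(plaintext)        # the ciphertext offers no protection against the scan above

toy_cipher = lambda message: message[::-1]  # stand-in for a real cipher
scan_then_encrypt(b"known prohibited content", toy_cipher)  # triggers a report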

These types of client-side scanning systems amount to “Bugs in Our Pockets,” and a group of leading computer security experts has reached the same conclusion as EFF: they undermine privacy and security for everyone. That’s why EFF has strongly opposed the OSB for years.

It’s a basic human right to have a private conversation. This right is even more important for the most vulnerable people. If the U.K. uses its new powers to scan people’s data, lawmakers will damage the security people need to protect themselves from harassers, data thieves, authoritarian governments, and others. Paradoxically, U.K. lawmakers have created these new risks in the name of online safety.

The U.K. government has made some recent statements indicating that it actually realizes that getting around end-to-end encryption isn’t compatible with protecting user privacy. But given the text of the law, neither the government’s private statements to tech companies, nor its weak public assurances, are enough to protect the human rights of British people or internet users around the world.

Censorship and Age-Gating

Online platforms will be expected to remove content that the U.K. government views as inappropriate for children. If they don’t, they’ll face heavy penalties. The problem is, in the U.K. as in the U.S., people do not agree about what type of content is harmful for kids. Putting that decision in the hands of government regulators will lead to politicized censorship decisions.

The OSB will also lead to harmful age-verification systems. This violates fundamental principles of anonymous and simple access that have existed since the beginning of the Internet. You shouldn’t have to show your ID to get online. Age-gating systems meant to keep out kids invariably lead to adults losing their rights to private and anonymous speech, which is sometimes necessary.

In the coming months, we’ll be watching what type of regulations the U.K. government publishes describing how it will use these new powers to regulate the internet. If the regulators claim their right to require the creation of dangerous backdoors in encrypted services, we expect encrypted messaging services to keep their promises and withdraw from the U.K. if that nation’s government compromises their ability to protect other users.

This article originally appeared on EFF.



Wednesday, September 20, 2023

Closer and closer to the Panopticon!

https://www.tampabay.com/news/florida/2023/09/19/florida-artificial-intelligence-prison-surveillance-leo-technologies-verus-calls-amazon/

Florida prisons use artificial intelligence to surveil calls

Florida is now using artificial intelligence to monitor and transcribe the phone conversations of the state’s 80,000-plus inmates.

The Florida Department of Corrections paid $2.5 million to California-based Leo Technologies to begin using its surveillance program, called Verus, beginning in August. The program scans incoming and outgoing calls, including calls to inmates’ friends and family, and automatically searches for keywords selected by prison officials and the technology company’s employees. It uses speech-to-text technology powered by Amazon to transcribe the content of conversations that include those keywords.

The contract, which lasts until June 30 of next year, allows prisons to record and scan up to 50 million minutes of conversations. The only calls that the company says are excluded from monitoring are communications with lawyers, doctors and spiritual advisers.
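
The keyword-flagging step the article describes is mechanically simple string matching over transcripts. A rough sketch follows; the keywords and call transcripts are invented, and Leo Technologies has not published how Verus actually combines keyword spotting with Amazon's speech-to-text service:

import re

KEYWORDS = {"escape", "contraband"}  # invented examples

def flag_call(transcript: str, keywords: set[str]) -> set[str]:
    """Return the keywords that appear as whole words in a call transcript."""
    words = set(re.findall(r"[a-z']+", transcript.lower()))
    return words & keywords

calls = {  # invented transcripts standing in for speech-to-text output
    "call-001": "they said the package arrives on tuesday",
    "call-002": "he talked about contraband in the yard",
}
for call_id, text in calls.items():
    hits = flag_call(text, KEYWORDS)
    if hits:
        print(call_id, "flagged for:", sorted(hits))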





What does ‘well trained’ mean in this context?

https://breakingdefense.com/2023/09/beyond-chatgpt-experts-say-generative-ai-should-write-but-not-execute-battle-plans/

Beyond ChatGPT: Experts say generative AI should write — but not execute — battle plans

Chatbots can now invent new recipes (with mixed success), plan vacations, or write a budget-conscious grocery list. So what’s stopping them from summarizing secret intelligence or drafting detailed military operations orders?

Nothing, in theory, said AI experts from the independent Special Competitive Studies Project. The Defense Department should definitely explore those possibilities, SCSP argues, lest China or some other unscrupulous competitor get there first. In practice, however, the project’s analysts emphasized in interviews with Breaking Defense, it’ll take a lot of careful prep work, as laid out in a recently released SCSP study.

And, they warned, you’ll always want at least one well-trained human checking the AI’s plan before you act on it, let alone wire the AI directly to a swarm of lethal drones.





Closer and closer to useful?

https://www.platformer.news/p/how-google-taught-ai-to-doubt-itself

How Google taught AI to doubt itself

From the day that the chatbots arrived last year, their makers warned us not to trust them. The text generated by tools like ChatGPT does not draw on a database of established facts. Instead, chatbots are predictive — making probabilistic guesses about which words seem right based on the massive corpus of text that their underlying large language models were trained on.

As a result, chatbots are often “confidently wrong,” to use the industry’s term.
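
“Predictive” is meant literally: at each step the model scores candidate tokens and samples one, and nothing in that loop consults a store of facts. A toy illustration of the sampling step, with made-up scores (real models produce scores over tens of thousands of tokens):

import math, random

def sample_next(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one token from softmax-normalized scores."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    peak = max(scaled.values())
    weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}
    r, acc = random.random() * sum(weights.values()), 0.0
    for tok, w in weights.items():
        acc += w
        if r <= acc:
            return tok
    return tok  # floating-point edge case: fall back to the last token

# Made-up scores for tokens that might follow "The capital of Australia is".
logits = {"Canberra": 2.1, "Sydney": 1.9, "Melbourne": 0.7}
print(sample_next(logits))  # usually "Canberra", but sometimes a confident "Sydney"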

… Starting today, though, Bard will do a bit more work on your behalf. After the chatbot answers one of your queries, hitting the Google button will “double check” your response. Here’s how the company explained it in a blog post:

When you click on the “G” icon, Bard will read the response and evaluate whether there is content across the web to substantiate it. When a statement can be evaluated, you can click the highlighted phrases and learn more about supporting or contradicting information found by Search.

Double-checking a query will turn many of the sentences within the response green or brown. Green-highlighted responses are linked to cited web pages; hover over one and Bard will show you the source of the information. Brown-highlighted responses indicate that Bard doesn’t know where the information came from, highlighting a likely mistake.
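
The green/brown scheme amounts to a three-way verdict per statement: corroborated, unsourced, or not checkable. A loose sketch of that classification logic follows; it is not Google's implementation, and the search function here is a stub:

def web_search(claim: str) -> list[str]:
    """Stub: would return URLs of pages that discuss the claim."""
    return []

def double_check(sentence: str) -> str:
    if not sentence.strip().endswith((".", "!", "?")):
        return "not-checkable"  # fragments, code, lists, etc.
    sources = web_search(sentence)
    return "green" if sources else "brown"  # brown: no supporting source found

for s in ["Canberra is the capital of Australia.", "TODO"]:
    print(double_check(s), "<-", s)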





Perspective. Some real AI applications.

https://mitsloan.mit.edu/ideas-made-to-matter/3-business-problems-data-analytics-can-help-solve

3 business problems data analytics can help solve

Each year, the MIT Sloan Master of Business Analytics Capstone Project partners students with companies that are looking to solve a business problem with data analytics. The program offers unique and up-close insight into what companies were grappling with at the beginning of 2023. This year, students worked on 41 different projects with 33 different companies. The winning projects looked at measuring innovation through patents for Accenture and using artificial intelligence to improve drug safety for Takeda.






Making lawyers into techies?

https://www.bespacific.com/artificial-intelligence-tools-and-tips/

Artificial Intelligence Tools and Tips

Via LLRX – Artificial Intelligence Tools and Tips: Jim Calloway, Director of the Oklahoma Bar Association’s Management Assistance Program, and Julie Bays, OBA Practice Management Advisor, who help attorneys use technology and other tools to manage their offices efficiently, recommend that now is a good time to experiment with specific AI-powered tools, and they suggest the best techniques for using them.



Tuesday, September 19, 2023

Interesting. It’s public data but we should treat it as private?

https://www.404media.co/inside-shadowdragon-ice-babycenter-pregnancy-fortnite-black-planet/

Inside ShadowDragon, The Tool That Lets ICE Monitor Pregnancy Tracking Sites and Fortnite Players

“Companies like ShadowDragon collect an extraordinary amount of information from social media and other websites about the activities of internet users. This type of mass surveillance, which is available to the government and other entities, creates a chilling effect on online activities,” Jeramie D. Scott, senior counsel & director of EPIC’s Project on Surveillance Oversight, told 404 Media in an email. “Our interactions, associations, words, habits, locations—in essence our entire digital lives—are being collected for scrutiny now and indefinitely into the future through expanding analytical tools of black box algorithms. The abuse of such tools is not an ‘if’ but a ‘when.’”





Perhaps the target was too easy? They knew going in what must be there…

https://www.bespacific.com/ai-and-casetext/

AI and Casetext

Via LinkedIn – Lawyer Carolyn Elefant on TikTok – her site is https://myshingle.com/

“Watch #AI parse the 500-page #DonaldTrumpdeposition and cherry-pick the instances where he arguably exaggerated his net worth, which is one of the fact issues in the case. #legalai #smalllawfirm #legaltech #futureoflaw. Casetext hoovered through the 500-page deposition of former President Donald Trump, extracting everything he said related to inflating his net worth, which is one of the NY AG’s principal claims in its civil fraud lawsuit. Casetext took 45 minutes to get through it. But this would easily be a day’s work for a paralegal or associate. Think of the time and cost savings in preparing for summary judgment or cross-examination at trial. Think of all the lawyers who can litigate against biglaw with a lean team. Think of all the clients served. It’s awesome. To view my full demo, click the link in comments.”
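
Casetext has not published how its extraction works. A crude stand-in for “find every passage about inflating net worth” is to chunk the transcript and rank chunks by overlap with the query terms; production systems would use embeddings and legal-tuned retrieval rather than word overlap, and the toy transcript below is invented:

def rank_passages(transcript: str, query: str, chunk_words: int = 120) -> list[str]:
    """Rank fixed-size chunks of a transcript by overlap with the query terms."""
    q_terms = set(query.lower().split())
    words = transcript.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    hits = [(len(q_terms & set(c.lower().split())), c) for c in chunks]
    return [c for n, c in sorted(hits, key=lambda t: t[0], reverse=True) if n]

deposition = ("Q. Did you ever inflate your net worth in statements to banks? "
              "A. My net worth fluctuates with my own feelings. "
              "Q. What time did you arrive? A. Around noon, I believe.")  # invented toy transcript
print(rank_passages(deposition, "inflate exaggerated net worth valuation", chunk_words=15))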





Suggesting that we are well past the tipping point?

https://www.gartner.com/en/newsroom/press-releases/2023-09-18-gartner-predicts-80percent-of-large-enterprise-finance-teams-will-use-internal-ai-platforms-by-2026

Gartner Predicts 80% of Large Enterprise Finance Teams Will Use Internal AI Platforms by 2026

… Gartner analysts are discussing the characteristics and best practices of finance organizations that have successfully introduced AI during the Gartner CFO & Finance Executive Conference which is taking place here through Tuesday.

“The recent entry of large, well-established companies into the generative AI market has kicked off a highly competitive race to see who can deliver revolutionary value first,” said Mark D. McDonald, senior director analyst in the Gartner Finance Practice. “Leadership teams do not want to fall behind peers; however, as the chief steward of an organization’s financial health, CFOs must balance the risks and rewards of tools like generative AI. There are three distinct conversations that CFOs should have across leadership circles to ensure that reasonable expectations are established and that the use of generative AI creates value without introducing unacceptable risks.”





Just because, and nevermore…

https://www.bespacific.com/the-collected-works-of-edgar-allan-poe/

The Collected Works of Edgar Allan Poe

This is a collection of Edgar Allan Poe’s works, some 130 pieces, presented with optimized legibility and no ads or trackers. It’s still early days, so expect more to come. Enjoy your stay. Kindly, Joshua Mauldin, Library Curator.



Monday, September 18, 2023

An article for Donald Trump’s lawyers?

https://www.bespacific.com/the-truth-about-hallucinations-in-legal-research-ai-how-to-avoid-them-and-trust-your-sources/

The Truth About Hallucinations in Legal Research AI: How to Avoid Them and Trust Your Sources

Rebecca Fordon – AI Law Librarians – “Hallucinations in generative AI are not a new topic. If you watch the news at all (or read the front page of the New York Times), you’ve heard of the two New York attorneys who used ChatGPT to create entire fake cases and then submitted them to the court. After that case, which resulted in a media frenzy and (somewhat mild) court sanctions, many attorneys are wary of using generative AI for legal research. But vendors are working to limit hallucinations and increase trust. And some legal tasks are less affected by hallucinations. Understanding how and why hallucinations occur can help us evaluate new products and identify lower-risk uses. A brief aside on the term “hallucinations”: Some commentators have cautioned against this term, arguing that it lets corporations shift the blame to the AI for the choices they’ve made about their models. They argue that AI isn’t hallucinating; it’s making things up, or producing errors or mistakes, or even just bullshitting. I’ll use the word hallucinations here, as the term is common in computer science, but I recognize it does minimize the issue. With all that in mind, let’s dive in…”





Maybe it’s not just the legal profession…

https://www.bespacific.com/thriving-not-just-surviving-at-the-chat-gpt-tipping-point/

Thriving, Not Just Surviving, at the Chat GPT Tipping Point

JD Supra – What Lawyers, Judges, and Clients Need to Know About Using Generative AI Right Now: “Generative AI and Chat GPT exploded in the past few months, causing businesses and legal professionals to reconsider their operational and administrative processes. The race to develop programs and applications using AI for legal professionals is proliferating faster than lawyers, judges, and employers can keep up, as they are occupied by day-to-day deadlines and challenges. Individuals at all levels are asking whether AI is coming to take their jobs. Worst-case scenarios are arising from cases where the use of ChatGPT has gone terribly wrong. Regulators, from governments to judges and organization managers, who want to keep AI use within safe guardrails are working against a moving target…”



Sunday, September 17, 2023

In the future, will everyone have their own personal surveillance drone?

https://link.springer.com/chapter/10.1007/978-3-031-40118-3_3

Facial Recognition Technology, Drones, and Digital Policing: Compatible with the Fundamental Right to Privacy?

Drones are the new gadget law enforcement agencies cannot get enough of. These agencies widely deploy drones for, among other things, search and rescue operations or in response to natural disasters. The benefits these drones offer are unquestionable. However, drones are increasingly being deployed for a less self-evident and legitimate purpose: surveillance. The recourse to drones for surveillance operations is highly problematic, given its intrusiveness on citizens’ fundamental right to privacy. Furthermore, this intrusiveness becomes even more worrisome when these drones are equipped with facial recognition technology. Consequently, this paper will critically examine law enforcement’s recourse to facial recognition technology in drones and the worrying consequences of such deployment for citizens’ fundamental right to privacy.





Lazy government. What else is new?

https://www.pogowasright.org/gao-reports-shortcomings-in-federal-law-enforcement-on-privacy-and-civil-liberties/

GAO reports shortcomings in federal law enforcement on privacy and civil liberties

Joe Cadillic sends along this concise write-up from Matthew Casey at KJZZ:


The first says a Homeland Security intelligence office that shares sensitive data with police and others has not done audits to make sure employees accessing the info have permission.
The second says a number of federal law enforcement agencies use facial recognition technology but don’t have a corresponding policy to protect civil rights and liberties.
And the last report says Homeland Security needs to do much more to protect the privacy of those whose info eventually goes into a delayed-and-over-budget system expected to store biometrics for hundreds of millions of people.





Could a stand-alone AI serve as the “house ethicist”?

https://www.sciencedirect.com/science/article/abs/pii/S1546084323001128

Socrates in the Machine: The “House Ethicist” in AI for Healthcare

The physical presence of an on-site ethics expert has emerged as one way of addressing current and future ethical issues associated with the use of artificial intelligence in healthcare. We describe and evaluate different roles of the “house ethicist,” a figure we consider beneficial to both artificial intelligence research and society. However, we also argue that the house ethicist is currently in need of further professionalization to meet a number of challenges outlined here and will require both individual and concerted community efforts to ensure the flourishing of the embedded ethics approach.





Redefining privacy.

https://kilthub.cmu.edu/articles/thesis/Reframing_Privacy_in_the_Digital_Age_The_Shift_to_Data_Use_Control/24097575

Reframing Privacy in the Digital Age: The Shift to Data Use Control

This dissertation is concerned with privacy: specifically, vulnerabilities to harm that stem from infringements on individuals’ privacy that are unique to or exacerbated by the modern power of predictive algorithms in the contemporary digital landscape. For one ubiquitous example, consider how facial recognition technology has evolved over the past decade and fundamentally altered the sense in which our personal image is exposed, searchable, and manipulable in digital spaces. Modern algorithms are capable, based on relatively few data points (often in the form of photos freely uploaded to online albums), of identifying individuals in pictures in a variety of contexts—in different settings, from different angles, even at vastly different ages, sometimes including baby pictures! Relatedly, reverse image search is now quite an effective tool, in many cases allowing anyone with access to easily ascertain someone’s identity from a single photograph. And of course, image manipulation has progressed by leaps and bounds in recent years, approaching a point where predictive algorithms can, for instance, generate false but eerily accurate portrayals of people in situations they may never have actually been in.
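
Technically, the identification step described here reduces to nearest-neighbor search over face embeddings: a model maps each photo to a vector, and a query face is matched to whichever enrolled vector lies closest. A toy sketch with made-up two-dimensional vectors (real systems use learned embeddings with hundreds of dimensions):

import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

enrolled = {            # name -> embedding derived from a few uploaded photos
    "alice": [0.9, 0.1],
    "bob":   [0.2, 0.8],
}

def identify(query: list[float], threshold: float = 0.9) -> str:
    """Match a query face embedding to the closest enrolled identity."""
    name, vec = max(enrolled.items(), key=lambda kv: cosine(query, kv[1]))
    return name if cosine(query, vec) >= threshold else "unknown"

print(identify([0.85, 0.15]))  # -> "alice"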