Saturday, June 15, 2024

This has got to be a bit confusing…

https://www.semafor.com/article/06/14/2024/microsoft-ai-ceo-mustafa-suleyman-audits-openais-code

Microsoft’s star AI chief peers into OpenAI’s code, highlighting an unusual rivalry

Lately, one of DeepMind’s founders, Mustafa Suleyman, has been doing the unthinkable: looking under the hood at OpenAI’s crown jewels — its secret algorithms behind foundation models like GPT-4, people familiar with the matter said.

That’s because Suleyman is now head of AI efforts at Microsoft, which has intellectual property rights to OpenAI’s software as part of its multibillion-dollar investment in the company.

His presence, though, has brought new attention to an unusual dynamic: Microsoft and OpenAI are inextricably linked; they are also competitors.





Don’t let the cute name fool you.

https://thehackernews.com/2024/06/new-attack-technique-sleepy-pickle.html

New Attack Technique 'Sleepy Pickle' Targets Machine Learning Models

The attack method, per Trail of Bits, weaponizes the ubiquitous format used to package and distribute machine learning (ML) models to corrupt the model itself, posing a severe supply chain risk to an organization's downstream customers.

"Sleepy Pickle is a stealthy and novel attack technique that targets the ML model itself rather than the underlying system," security researcher Boyan Milanov said.
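The risk comes from pickle’s design: unpickling a file can invoke arbitrary callables, which is what lets an attacker tamper with a model inside its serialization format. A minimal, harmless sketch of the mechanism (this is the generic `__reduce__` trick, not Trail of Bits’ actual exploit):

```python
import pickle

# A class whose __reduce__ tells the unpickler to call an arbitrary
# function with arbitrary arguments -- the core primitive abused by
# pickle-based model attacks. Here the "payload" is deliberately benign.
class Payload:
    def __reduce__(self):
        return (str.upper, ("payload executed",))

blob = pickle.dumps(Payload())

# Anyone who loads this blob runs str.upper("payload executed");
# a real attack would substitute code that patches model weights.
result = pickle.loads(blob)
print(result)
```

This is why loading pickled models from untrusted sources is dangerous, and why weight-only formats such as safetensors exist as a mitigation.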



Friday, June 14, 2024

I don’t know how common deals like this might be, but it would seem to depend on how likely Clearview is to survive.

https://www.nytimes.com/2024/06/13/business/clearview-ai-facial-recognition-settlement.html?unlocked_article_code=1.zk0.Y38c.x6rsOCxhWfmG

Clearview AI Used Your Face. Now You May Get a Stake in the Company.

The facial recognition start-up doesn’t have the funds to settle a class-action lawsuit, so lawyers are proposing equity for those whose faces were scraped from the internet.

A facial recognition start-up, accused of invasion of privacy in a class-action lawsuit, has agreed to a settlement, with a twist: Rather than cash payments, it would give a 23 percent stake in the company to Americans whose faces are in its database.



Thursday, June 13, 2024

Whatever sells best?

https://www.bespacific.com/google-sued-by-top-textbook-publishers-over-ads-for-pirated-e-books/

Google sued by top textbook publishers over ads for pirated e-books

“June 5 (Reuters) – Google was hit with a lawsuit on Wednesday by educational publishers Cengage, Macmillan Learning, McGraw Hill and Elsevier accusing the tech giant of promoting pirate copies of their textbooks. The publishers told the U.S. District Court for the Southern District of New York that Google has ignored thousands of copyright-infringement notices and continues to profit from the sale of pirated digital versions of textbooks advertised through its dominant search engine. Google representatives did not immediately respond to a request for comment on the lawsuit.

The publishers’ attorney Matt Oppenheim of Oppenheim + Zebrak told Reuters that Google had become a “thieves’ den” for textbook pirates. The complaint said that Google searches for the publishers’ work feature heavily discounted, pirated e-book versions at the top of the results. “The artificially low-priced infringing works drown out the regularly priced legitimate works,” the lawsuit said. “Of course, the pirate sellers can sell their infringing works at such low prices because they did nothing to create or license them; they just illegally made digital copies.”

According to the lawsuit, Google has made the piracy worse by restricting ads for licensed e-books. “As a result, the textbook market is upside down, as the world’s largest online advertising business advertises ebooks for pirates but rejects ebook ads for legitimate sellers,” the lawsuit said.

The lawsuit said that the publishers have been complaining to Google about the ads since 2021 to no avail. They accused Google of copyright and trademark infringement and deceptive trade practices, requesting an unspecified amount of monetary damages. The case is Cengage Learning Inc v. Google LLC, U.S. District Court for the Southern District of New York, No. 1:24-cv-04274.”





Perhaps we too can survive an AI election…

https://www.schneier.com/blog/archives/2024/06/ai-and-the-indian-election.html

AI and the Indian Election

As India concluded the world’s largest election on June 5, 2024, with over 640 million votes counted, observers could assess how the various parties and factions used artificial intelligence technologies—and what lessons that holds for the rest of the world.

The campaigns made extensive use of AI, including deepfake impersonations of candidates, celebrities and dead politicians. By some estimates, millions of Indian voters viewed deepfakes.

But, despite fears of widespread disinformation, for the most part the campaigns, candidates and activists used AI constructively in the election. They used AI for typical political activities, including mudslinging, but primarily to better connect with voters.



Wednesday, June 12, 2024

Is this an indication that Brazil does not have enough clerks (or paralegals or whatevers) to review the cases before default?

https://www.reuters.com/technology/artificial-intelligence/brazil-hires-openai-cut-costs-court-battles-2024-06-11/

Brazil hires OpenAI to cut costs of court battles

Brazil's government is hiring OpenAI to expedite the screening and analysis of thousands of lawsuits using artificial intelligence (AI), trying to avoid costly court losses that have weighed on the federal budget.

The AI service will flag to the government the need to act on lawsuits before final decisions, mapping trends and potential action areas for the solicitor general's office (AGU).

The government estimated it would spend 70.7 billion reais ($13.2 billion) next year on judicial decisions where it can no longer appeal.





Tools & Techniques. Is it truly ‘good enough’?

https://www.cnet.com/tech/services-and-software/how-to-use-ai-powered-grammarly-to-do-all-of-your-editing/

How to Use AI-Powered Grammarly to Do All of Your Editing

Mistake-free writing is within reach – and free.



Tuesday, June 11, 2024

Perhaps automating lawyers will have to wait.

https://www.bespacific.com/law-firms-start-training-summer-associates-on-using-generative-ai/

Law Firms Start Training Summer Associates on Using Generative AI

Bloomberg Law: “Some Big Law firms are now making summer associates learn the ins and outs of generative AI as they begin integrating what’s considered to be a game-changing technology for the profession. K&L Gates, Dechert, and Orrick Herrington & Sutcliffe have incorporated training on the technology for this year’s class of summer associates, teaching them how to use research and chatbot tools now being used by the firms.

The programs offer a window into what some firms believe artificial intelligence will mean for those now entering the profession. Future junior lawyers won’t be replaced by AI, as some fear, but they will need to harness it to be successful, said Brendan McDonnell, a K&L Gates partner and member of the firm’s AI solutions group. That includes understanding how to effectively interact with generative AI chatbots to unearth the most useful information for clients, he said.

“That’s the whole idea about the training program: You need to teach people how this is going to impact the way they come to work,” said McDonnell. While AI will automate many tasks, he said, it’s also going to open up new lines of legal practice while freeing up new professionals’ time to learn and master the complex work they went to law school for.

Most firms are still in an experimentation phase when it comes to deploying generative AI chatbot and research tools. Firms’ use of the tech is also dependent on clients’ openness to it. “We’re in a transition period,” said Alex Su, the chief revenue officer at Latitude Legal, a global flexible legal staffing firm. “It’s hard to say there’s going to be a huge impact in how law firms staff in the near-term.” Still, legal experts caution that future lawyers need to address the technology…”



(Related)

https://www.bespacific.com/ai-now/

AI Now

Perkins, Rachelle Holmes, AI Now (May 24, 2024). Temple Law Review, Vol. 97, Forthcoming, George Mason Legal Studies Research Paper No. LS 24-14, Available at SSRN: https://ssrn.com/abstract=4840481 or http://dx.doi.org/10.2139/ssrn.4840481

“Legal scholars have made important explorations into the opportunities and challenges of generative artificial intelligence within legal education and the practice of law. This Article adds to this literature by directly addressing members of the legal academy. As a collective, law professors, who are responsible for cultivating the knowledge and skills of the next generation of lawyers, are seemingly adopting a laissez faire posture towards the advent of generative artificial intelligence. In stark contrast to law practitioners, law professors generally have displayed a lack of urgency in responding to the repercussions of this emerging technology. This Article contends that all law professors have an inescapable duty to understand generative artificial intelligence. This obligation stems from the pivotal role faculty play on three distinct but interconnected dimensions: pedagogy, scholarship, and governance. No law faculty are exempt from this mandate. All are entrusted with responsibilities that intersect with at least one, if not all three dimensions, whether they are teaching, research, clinical, or administrative faculty. It is also not dependent on whether professors are inclined, or disinclined, to integrate artificial intelligence into their own courses or scholarship. The urgency of the mandate derives from the critical and complex role law professors have in the development of lawyers and architecture of the legal field.”





Lawyers: We don’t need no stinking rules!

https://www.reuters.com/legal/transactional/5th-circuit-scraps-plans-adopt-ai-rule-after-lawyers-object-2024-06-10/

5th Circuit scraps plans to adopt AI rule after lawyers object

… The 5th U.S. Circuit Court of Appeals said it had decided not to adopt a rule it first proposed in November after taking into consideration the use of AI in the legal practice and public comment from lawyers, which had been largely negative.

The proposed rule aimed to regulate lawyers' use of generative AI tools like OpenAI's ChatGPT and govern both attorneys and litigants appearing before the court without counsel.

It would have required them to certify that, to the extent an AI program was used to generate a filing, citations and legal analysis were reviewed for accuracy. Lawyers who misrepresented their compliance with the rule could face sanctions and the prospect of their filings being stricken.

… But members of the bar in public comments submitted to the 5th Circuit largely opposed its proposal, arguing that rules already on the books were good enough to deal with any issues with the technology, including ensuring the accuracy of court filings.





Who fools whom? Has AI fooled the CEO/BoD?

https://www.hklaw.com/en/insights/media-entities/2024/06/the-secs-intensified-focus-on-ai-washing-practices

The SEC’s Intensified Focus on AI Washing Practices

Litigation attorney Andrew Balthazor was a featured guest on the RiskWatch podcast hosted by Vcheck, where he discussed the growing concern of artificial intelligence (AI) washing. This deceptive practice involves companies exaggerating or misrepresenting their use of artificial intelligence to attract investor interest. Notably, the U.S. Securities and Exchange Commission (SEC) has recently taken steps against investment advisers for making false claims about their use of AI, leading to more explicit regulations and an anticipated increase in enforcement actions with stricter penalties. Throughout the episode, Mr. Balthazor emphasizes the need for caution in AI investing, highlights the importance of understanding a company's true AI capabilities and suggests practical due diligence measures to help cut through misleading misinformation.



(Related)

https://sloanreview.mit.edu/article/auditing-algorithmic-risk/

Auditing Algorithmic Risk

How do we know whether algorithmic systems are working as intended? A set of simple frameworks can help even nontechnical organizations check the functioning of their AI tools.



Monday, June 10, 2024

Tools & Techniques. (Don’t AI while angry.)

https://sloanreview.mit.edu/article/three-things-to-know-about-prompting-llms/

Three Things to Know About Prompting LLMs

These research-backed tips can help you improve your prompting strategies for better results from large language models.





Tools & Techniques. (When you want to be aggressive.)

https://www.bespacific.com/own-your-data-sort-of/

Own Your Data – Sort Of

YourDigitalRights.org – “Get organizations to delete your account or provide a copy of your personal information. Many organizations collect and sell your personal data, often without your consent. Use this free service to send them a data deletion or access request. Start by searching for an organization below…” [grain of salt etc.]



Sunday, June 09, 2024

Scary? Solutions often are…

https://teachprivacy.com/kafka-in-the-age-of-ai-and-the-futility-of-privacy-as-control-2/

Kafka in the Age of AI and the Futility of Privacy as Control

Although writing more than a century ago, Franz Kafka captured the core problem of digital technologies – how individuals are rendered powerless and vulnerable. During the past fifty years, and especially in the 21st century, privacy laws have been sprouting up around the world. These laws are often based heavily on an Individual Control Model that aims to empower individuals with rights to help them control the collection, use, and disclosure of their data.

In this Essay, we argue that although Kafka starkly shows us the plight of the disempowered individual, his work also paradoxically suggests that empowering the individual isn’t the answer to protecting privacy, especially in the age of artificial intelligence. In Kafka’s world, characters readily submit to authority, even when they aren’t forced and even when doing so leads to injury or death. The victims are blamed, and they even blame themselves.

Although Kafka’s view of human nature is exaggerated for darkly comedic effect, it nevertheless captures many truths that privacy law must reckon with. Even if dark patterns and dirty manipulative practices are cleaned up, people will still make bad decisions about privacy. Despite warnings, people will embrace the technologies that hurt them. When given control over their data, people will give it right back. And when people’s data is used in unexpected and harmful ways, people will often blame themselves.

Kafka provides key insights for regulating privacy in the age of AI. The law can’t empower individuals when it is the system that renders them powerless. Ultimately, privacy law’s primary goal should not be to give individuals control over their data. Instead, the law should focus on ensuring a societal structure that brings the collection, use, and disclosure of personal data under control.





There may be more than a snicker here. Remember, the porn industry is an early adopter.

https://scholarshare.temple.edu/handle/20.500.12613/10289

Sex robots at home: A political-economic analysis of a changing sex industry

The advent of interactive and humanistic sex robots signifies a shift in the sex technology industry. Where objects such as sex dolls require an imagined personality, sex robots operate through artificial intelligence systems, allowing the user to communicate with the robot and shape its personality more directly. Even as stigmatization and fear revolve around the emergence of sex robots, the technology has implications for social robots and companion technologies. Discourse surrounding sex robots manifests across institutions with stakeholders attempting to guide the industry toward their vision of the future. The sex robot industry remains niche and its cultural impact is unclear; yet, social and legal regulations may have farther-reaching implications. This political-economic study examines how corporate (RealDoll), advocacy (Campaign Against Porn Robots and Prostasia Foundation), and government (local, state, national, and international) stakeholders envision the current and future standing of sex robots and their place in society. The analysis demonstrates the ways stakeholders draw on moral, capitalist, and androcentric language to celebrate or condemn the sex robot industry. This study’s data includes a critical discourse analysis of business and marketing materials, press releases and interviews, ownership details, and government legislation, a total of 442 artifacts. Through this examination, I argue that moralism and absolutism dominate the discourse, while the robots’ sexual functions obfuscate the ramifications of robotic artificial intelligence. Contextualized by broader discourses on technology and feminist inquiry, I additionally argue that sex robots are utilized as a focal point to debate broader issues of child abuse, rape and objectification, sexual privacy, and loneliness. Through ownership and lobbying facets, data reveals interconnections between stakeholder segments, indicating power and influence outside of the sex industry. 
In particular, Realbotix, the technological avenue of RealDoll, is attempting to expand its bespoke social robot offerings, the Campaign Against Porn Robots and Prostasia continue to lobby U.S. legislators to ban and reduce restrictions respectively, all while U.S. states implement restrictions on childlike sex robots without any regulatory advice on the AI privacy risks. I conclude the study with policy recommendations to clarify Supreme Court precedent and fortify consumer data protections.





All the same but with AI?

https://ejournal.iain-manado.ac.id/index.php/since/article/view/923

The Potential Application of Artificial Intelligence by Criminals in Transnational Crimes

This paper aims to explain the relevance of artificial intelligence to the development of criminal law and how technological change can create new crimes. It is a qualitative study using an empirical juridical approach and a descriptive method of analysis. The results indicate that artificial intelligence has the potential both to increase the sophistication of conventional crimes and to facilitate entirely new ones. On this basis, such crimes can be classified as follows: first, crimes with artificial intelligence; second, crimes by artificial intelligence; and third, crimes against artificial intelligence.





Tools & Techniques.

https://www.howtogeek.com/how-i-use-ai-to-transcribe-and-organize-my-voice-notes/

How I Use AI to Transcribe and Organize My Voice Notes

I have a three-part system where I use free apps and tools to transcribe, refine, and organize my voice notes. Here's a step-by-step guide showcasing how I use it.
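A transcribe/refine/organize pipeline like the one the article describes can be sketched in plain Python. The transcription step itself needs a speech-to-text app or service, so this hypothetical sketch starts from already-transcribed strings and shows only the refine and organize stages (the function names and the `#tag` convention are my own assumptions, not the article's):

```python
import re
from collections import defaultdict

def refine(raw: str) -> str:
    """Clean up a raw transcript: drop filler words, collapse whitespace."""
    text = re.sub(r"\b(um+|uh+|you know)\b", "", raw, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", text).strip()

def organize(notes: list[str]) -> dict[str, list[str]]:
    """Group refined notes by the first '#tag' found, else under 'inbox'."""
    buckets = defaultdict(list)
    for note in notes:
        match = re.search(r"#(\w+)", note)
        buckets[match.group(1) if match else "inbox"].append(note)
    return dict(buckets)

raw_notes = [
    "um remember to uh email the report #work",
    "pick up groceries on the way home",
]
refined = [refine(n) for n in raw_notes]
organized = organize(refined)
print(organized)
```

In practice the transcription stage would feed this pipeline from whatever speech-to-text tool you prefer; the refine and organize stages are just text processing and work the same regardless of the source.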