Saturday, March 15, 2025

This could have been avoided if they had just RTFM'd! (Read the friendly manual.)

https://www.cleveland.com/news/2025/01/cleveland-police-used-ai-to-justify-a-search-warrant-it-has-derailed-a-murder-case.html?utm_source=substack&utm_medium=email

Cleveland police used AI to justify a search warrant. It has derailed a murder case

A jury may never see the gun that authorities say was used to kill Blake Story last year.

That’s because Cleveland police used a facial recognition program – one that explicitly says its results are not admissible in court – to obtain a search warrant, according to court documents.

The search turned up what police say is the murder weapon in the suspect’s home. But a Cuyahoga County judge tossed that evidence after siding with defense attorneys who argued that the search warrant affidavit was misleading and relied on inadmissible evidence.



(Related)

https://www.vpm.org/news/2025-03-14/cliff-hayes-general-assembly-ai-oversight-court-decisions-facial-recognition

Virginia legislation calls for human oversight of AI use in court decisions

Virginia lawmakers want to regulate the use of artificial intelligence-based tools in the criminal justice system.

During this year’s General Assembly session, Del. Cliff Hayes Jr. (D–Chesapeake) introduced a bill that would reinforce human oversight in the criminal justice system while allowing AI to play a supporting role.

Hayes’ bill would prohibit AI-generated recommendations from being used as the sole basis for key decisions related to pre-trial detention or release, prosecution, adjudication, sentencing, probation, parole, correctional supervision, or rehabilitation. It would also make any use of AI in those decisions subject to legal challenge or objection.



Friday, March 14, 2025

Might be interesting…

https://www.bespacific.com/vals-legal-ai-report/

Vals Legal AI Report

Vals Legal AI Report [free registration required] “This first-of-its-kind study evaluates how four legal AI tools perform across seven legal tasks, benchmarking their results against those produced by a lawyer control group (the Lawyer Baseline). The seven tasks evaluated in this study were Data Extraction, Document Q&A, Document Summarization, Redlining, Transcript Analysis, Chronology Generation, and EDGAR Research, representing a range of functions commonly performed by legal professionals. The evaluated tools were CoCounsel (from Thomson Reuters), Vincent AI (from vLex), Harvey Assistant (from Harvey), and Oliver (from Vecflow). Lexis+AI (from LexisNexis) was initially evaluated but withdrew from the sections studied in this report. The percentages below represent each tool’s accuracy or performance scores based on predefined evaluation criteria for each legal task. Higher percentages indicate stronger performance relative to other AI tools and the Lawyer Baseline. Some key takeaways include:

    • Harvey opted into six out of seven tasks. They received the top scores of the participating AI tools on five tasks and placed second on one task. In four tasks, they outperformed the Lawyer Baseline.

    • CoCounsel is the only other vendor whose AI tool received a top score. It consistently ranked among the top-performing tools for the four evaluated tasks, with scores ranging from 73.2% to 89.6%.

    • The Lawyer Baseline outperformed the AI tools on two tasks and matched the best-performing tool on one task. In the four remaining tasks, at least one AI tool surpassed the Lawyer Baseline.

Beyond these headline findings, a more detailed analysis of each tool’s performance reveals additional insights into their relative strengths, limitations, and areas for improvement.”





Perspective.

https://www.scrippsnews.com/science-and-tech/artificial-intelligence/half-of-americans-regularly-use-artificial-intelligence-tech-like-chatgpt-survey-says

Half of Americans regularly use artificial intelligence tech like ChatGPT, survey says

Half of Americans are now using artificial intelligence models like ChatGPT and Gemini, according to a new survey from researchers at Elon University.

"Younger, well-educated, relatively wealthy, and employed adults are somewhat more likely than others to be using LLMs now. Yet, it is also the case that half of those living in households earning less than $50,000 (53%) use the tools," the researchers said.

The technology is more popular among Hispanic adults (66%) and Black adults (57%) than White adults (47%), the survey found. It's also slightly more popular among women than men.

How often the LLMs are used varies: 34% said they use them at least once a day, 18% said they use them several times a week and 10% said they use the tools “almost constantly.”



Thursday, March 13, 2025

Interesting list.

https://www.bespacific.com/the-200-sites-an-ice-surveillance-contractor-is-monitoring/

404 Media has obtained a list of 200+ sites monitored by a contractor for ICE

404 Media [unpaywalled] – “A contractor for Immigration and Customs Enforcement (ICE) and many other U.S. government agencies has developed a tool that lets analysts more easily pull a target individual’s publicly available data from a wide array of sites, social networks, apps, and services across the web at once, including Amazon, Apple Music, BabyCenter, Bluesky, Facebook, Github, GoFundMe, OnlyFans, and Instagram, according to a leaked list of the sites obtained by 404 Media. In all, the list names more than 200 sites that the contractor, called ShadowDragon, pulls data from and makes available to its government clients, allowing them to map out a person’s activity, movements, and relationships.”

404 Media has uploaded the list here.





I did not know that. Perhaps I’ll take another look. What other industries use it this way?

https://www.bespacific.com/the-legal-professions-shift-to-linkedin-what-you-need-to-know/

The Legal Profession’s Shift to LinkedIn: What You Need to Know

Nicole Black – LinkedIn – “A decade ago, LinkedIn was little more than a digital resume. Today, it’s the primary networking platform for legal professionals. It’s come a long way since the book I co-authored about social media for lawyers was published by the American Bar Association in 2010. Back then, LinkedIn barely merited a mention. Interaction on the platform was minimal, and its primary benefit was assisting with job searches. At the time, Twitter and Facebook were the top social networks for legal professionals, and Instagram was in its infancy. If you’d suggested to me that one day LinkedIn would be my primary social media outlet, I’d have called you crazy… There are many recent feature updates that make the platform more engaging, such as newsletters, videos, and even daily puzzles. If you haven’t checked out LinkedIn in a while, it’s worth revisiting it. You’ll undoubtedly find notable updates from colleagues and will see some using it very creatively to highlight their law firms’ successes…”





Because…

https://mashable.com/article/free-ai-courses-march-2025

38 of the best AI courses you can take online for free

It's possible that AI is going to eventually take over the world, but we should have a few years before we get to the point of no return. So how should we approach those years? We may as well learn how to make the most out of AI before it deems that we're all obsolete.

A wide range of online courses on AI can be found on Udemy. And better yet, some of the best examples can be taken for free. We've checked out everything on offer and lined up a selection of standout courses to get you started.



Wednesday, March 12, 2025

Perspective.

https://apnews.com/article/ai-school-chromebook-surveillance-gaggle-investigation-takeaways-381fa82978f27eb85f20d03236820711

Takeaways from our investigation on AI-powered school surveillance

Thousands of American schools are turning to AI-powered surveillance technology for 24/7 monitoring of student accounts and school-issued devices like laptops and tablets.

The goal is to keep children safe, especially amid a mental health crisis and the threat of school shootings. Machine-learning algorithms detect potential indicators of problems like bullying, self-harm or suicide and then alert school officials.

But these tools raise serious questions about privacy and security. In fact, when The Seattle Times and The Associated Press partnered to investigate school surveillance, reporters inadvertently received access to almost 3,500 sensitive, unredacted student documents through a records request. The documents were stored without a password or firewall, and anyone with the link could read them.

Here are key takeaways from the investigation.





Tools & Techniques. (Free trial on desktop version.)

https://www.bespacific.com/diffchecker/

DiffChecker

“I recently discovered a pretty amazing website called DiffChecker. It compares files and visually highlights any differences. You can use it to compare texts you paste right into the browser window, or you can upload documents to compare. It accepts Word docs, PDFs, spreadsheets and image files. To find the differences between two versions of a website, first you’ll have to convert them into txt files. Find an old capture in the Wayback Machine, right-click to view page source, then save as a txt file. Then do the same for the live version of the site. A website’s html/css code may not include data files of course – those may be pulled from a background database you can’t access. I’m not saying it will work for every website, but it’s worth a try. The developers at DiffChecker are very responsive too; they quickly answer questions.” Via Marie Concannon, Head, Government Information & Data Archives, University of Missouri.

https://www.diffchecker.com/
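
For readers comfortable with a little scripting, the workflow described above (grab an old capture from the Wayback Machine, grab the live page, compare the sources) can be approximated with Python's standard library. This is only a rough sketch of that approach, not DiffChecker itself; the snapshot URL and target site below are placeholders you would swap for your own.

import difflib
import urllib.request

# Placeholder URLs: an archived capture from the Wayback Machine and the live page.
ARCHIVED = "https://web.archive.org/web/2023/https://example.com/"
LIVE = "https://example.com/"

def fetch_source(url: str) -> list[str]:
    # Download a page and return its source as a list of lines.
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace").splitlines()

old, new = fetch_source(ARCHIVED), fetch_source(LIVE)

# Print a unified diff of the two page sources -- the plain-text equivalent
# of DiffChecker's side-by-side highlighting.
for line in difflib.unified_diff(old, new, fromfile="archived", tofile="live", lineterm=""):
    print(line)

Pasting the two saved sources into DiffChecker gives a friendlier side-by-side view, but a unified diff like this is often enough to spot what changed.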



Tuesday, March 11, 2025

Tools & Techniques.

https://www.zdnet.com/article/duckduckgos-ai-beats-perplexity-in-one-big-way-and-its-free-to-use/

DuckDuckGo's AI beats Perplexity in one big way - and it's free to use

I've been a fan of DuckDuckGo for a long time. I find the search engine to be far more trustworthy than Google and I do enjoy my privacy. But when I heard that the company was dipping its webbed feet into the AI waters, my initial reaction was a roll of the eyes.

Then I gave Duck.ai a go -- and was immediately impressed. (DuckDuckGo's AI features launched in June 2024 and came out of beta last week.)

Duck.ai does something that other similar products don't -- it gives you a choice. You can choose between the proprietary GPT-4o mini, o3-mini, and Claude 3 services or go open-source with Llama 3.3 and Mistral Small 3. Duck.ai is also private: All of your queries are anonymized by DuckDuckGo, so you can be sure no third party will ever have access to your AI chats.



Monday, March 10, 2025

Government security, yes. Individual security, not so much.

https://www.theregister.com/2025/03/09/asia_tech_news_roundup/

India wants backdoors into clouds, email, SaaS, for tax inspectors

India’s government has proposed giving its tax authorities sweeping powers to access private email systems and applications.

The proposal emerged last month in the search and seizure provisions of a tax bill [PDF] which at section 247 requires citizens to provide tax authorities with access to their physical and digital records.

The section also gives tax authorities the power to “gain access by overriding the access code to any said computer system, or virtual digital space, where the access code thereof is not available.” That text appears in the same paragraph describing powers to break down doors or crack safes.



Sunday, March 09, 2025

Interesting take…

https://journals.irapa.org/index.php/JESTT/article/view/1013

Artificial Intelligence in Autonomous Weapon Systems: Legal Accountability and Ethical Challenges

Autonomous Weapon Systems (AWS) are reshaping modern warfare, offering enhanced operational efficiency but raising significant legal, ethical, and regulatory concerns. Their capacity to engage targets without human intervention creates an accountability gap, challenging the application of International Humanitarian Law (IHL). Current legal frameworks are ill-equipped to define meaningful human control, which complicates the attribution of responsibility when AWS violate human rights. Ethical challenges, including the dehumanization of warfare, algorithmic biases, and indiscriminate targeting, jeopardize civilian protection. Moreover, the proliferation of AWS amplifies global security risks, particularly with their potential misuse by non-state actors. This paper critically examines these challenges, evaluating current legal frameworks, ethical considerations, and regulatory inconsistencies. It proposes war torts, corporate accountability, transparency measures, and binding international treaties to address governance gaps. Support for international cooperation and oversight mechanisms is essential to ensure AWS comply with IHL and human rights law. This research contributes to the global discourse on autonomous warfare, offering practical policy recommendations for ethical and legal governance.





Automating the law of AI?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5166908

Legal Challenges in Protecting Personal Information in Big Data Environments

The rapid expansion of artificial intelligence (AI) and high-speed big data processing has raised significant legal challenges in safeguarding personal information. Traditional data protection frameworks struggle to address issues such as mass data collection, cross-border data transfers, and evolving cyber threats, particularly in AI-powered, high-speed data environments. This research examines key legal concerns, including compliance with privacy regulations, ethical considerations in AI-enhanced data processing, and enforcement limitations in large-scale data ecosystems. The study employs the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to systematically evaluate legal frameworks, case studies, and technological solutions for data protection. By applying PRISMA, the research ensures a structured approach to selecting, screening, and analyzing studies on data privacy regulations and their effectiveness. Additionally, AI-driven big data analytics present new challenges in balancing regulatory compliance with real-time, high-speed data processing demands. The study investigates how well-established legal frameworks—such as the California Consumer Privacy Act (CCPA) and the General Data Protection Regulation (GDPR)—address AI-enhanced risks of data breaches, unauthorized access, and personal information misuse. A structured data collection process was implemented using established databases such as Google Scholar, IEEE Xplore, PubMed, Westlaw, and LexisNexis. Quantitative analysis techniques, including descriptive statistics, chi-square tests, regression analysis, and meta-analysis, were applied to examine compliance rates, reported data breaches, monetary penalties, and response times to data incidents. The statistical analysis reveals significant inconsistencies in data privacy enforcement, as compliance rates vary widely (mean: 72.5%, SD: 12.3), and financial penalties under GDPR and CCPA range significantly (median: $1.1M, max: $5.2M). Furthermore, chi-square tests indicate a significant relationship between fines and compliance rates (p < 0.05), highlighting the impact of regulatory penalties on corporate adherence to data protection laws. As AI-powered high-speed data systems continue to evolve, there is an increasing need for adaptive legal frameworks that can address privacy risks while enabling technological innovation. This study emphasizes the necessity of AI-driven compliance mechanisms, automated regulatory monitoring, and real-time enforcement strategies to safeguard personal information in the era of high-speed big data processing.
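
For readers curious what the chi-square analysis mentioned in the abstract looks like in practice, here is a purely illustrative sketch in Python with SciPy. The contingency counts are invented for demonstration and are not the paper's data.

from scipy.stats import chi2_contingency

# Hypothetical counts: rows are organizations that were / were not fined,
# columns are compliant / non-compliant outcomes. Invented numbers only.
observed = [
    [45, 15],   # fined
    [30, 40],   # not fined
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A p-value below 0.05, as the abstract reports, would suggest an association
# between regulatory penalties and compliance rates.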





Thinking real thoughts about artificial people.

https://www.mlive.com/news/saginaw-bay-city/2025/03/do-androids-dream-of-electric-sheep-this-michigan-educators-classes-ponder-the-humanity-of-ai.html

Do androids dream of electric sheep? This Michigan educator’s classes ponder the humanity of A.I.

Matthew Katz knows you might be worried about “The Terminator.”

The Central Michigan University philosophy professor, though, also wants you to consider whether an android — a Terminator or something with less sinister intent — could one day “worry” about you.