Saturday, July 29, 2023

AI doesn’t need to attack openly; subtle works just fine.

https://www.pogowasright.org/why-doctors-using-chatgpt-are-unknowingly-violating-hipaa/

Why Doctors Using ChatGPT Are Unknowingly Violating HIPAA

Science Blog writes:

With the rise of artificial intelligence, clinicians are turning to chatbots like OpenAI’s ChatGPT to organize notes, produce medical records or write letters to health insurers. But clinicians deploying this new technology may be violating health privacy laws, according to Genevieve Kanter, an associate professor of public policy at the USC Sol Price School of Public Policy.
Kanter, who is also a senior fellow at the Leonard D. Schaeffer Center for Health Policy & Economics, a partner organization of the USC Price School, recently co-authored an article explaining the emerging issue in the Journal of the American Medical Association. To learn more, we spoke to Kanter about how clinicians are using chatbots and why they could run afoul of the Health Insurance Portability and Accountability Act (HIPAA). HIPPA (sic) is a federal law that protects patient health information from being disclosed without the patient’s permission.

Read more at Science Blog.

And to learn even more, read the “viewpoint” article co-authored by Kanter and Eric Packel, “Health Care Privacy Risks of AI Chatbots.”





Sometimes even a blind squirrel will find a nut. But should we rely on that level of security?

https://viewfromthewing.com/why-finally-know-why-the-tsa-is-cracking-down-on-clear-at-airport-security/

We Finally Know Why The TSA Is Cracking Down On CLEAR At Airport Security

CLEAR is a paid program that takes your biometrics and expedites security screening, mostly at airports. They are part-owned by Delta and United, and have a partnership with American Express.

Since you go through a biometric ID check, you usually don’t have to show ID at the security checkpoint, although you are randomly asked to do so.

Apparently last July “a man slipped through Clear’s screening lines at Reagan National Airport near Washington, before a government scan detected ammunition — which is banned in the cabin — in his possession.” And he’d “almost managed to board a flight under a false identity.” The TSA checkpoint found the ammunition, which is what it is supposed to do. This had nothing to do with his identity. There’s no suggestion that the passenger intended to do anything nefarious.



 

Friday, July 28, 2023

I’m shocked! Shocked I tell you!

https://www.bespacific.com/vulnerabilities-in-chatgpt-and-other-chatbots/

How researchers broke ChatGPT and what it could mean for future AI development

ZDNet: “As many of us grow accustomed to using artificial intelligence tools daily, it’s worth remembering to keep our questioning hats on. Nothing is completely safe and free from security vulnerabilities. Still, companies behind many of the most popular generative AI tools are constantly updating their safety measures to prevent the generation and proliferation of inaccurate and harmful content. Researchers at Carnegie Mellon University and the Center for AI Safety teamed up to find vulnerabilities in AI chatbots like ChatGPT, Google Bard, and Claude and they succeeded. In a research paper to examine the vulnerability of large language models (LLMs) to automated adversarial attacks, the authors demonstrated that even if a model is said to be resistant to attacks, it can still be tricked into bypassing content filters and providing harmful information, misinformation, and hate speech. This makes these models vulnerable, potentially leading to the misuse of AI.”





A long and useful post?

https://www.pogowasright.org/how-to-buy-ed-tech-that-isnt-evil/

How to Buy Ed Tech That Isn’t Evil

Four critical questions parents and educators should be asking

This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.



Thursday, July 27, 2023

Incentive to “pay attention” to security?  But is it enough? 

https://www.bespacific.com/sec-is-giving-companies-four-days-to-report-cyberattacks/

SEC is giving companies four days to report cyberattacks

Quartz:  “The US Securities and Exchange Commission (SEC) wants public companies to be more transparent and forthcoming about “material cybersecurity incidents,” the federal agency said yesterday (July 26).  Its new rules, passed by a 3-2 vote, dictate companies must disclose details of incidents and their effect on the bottomline in a section of the Form 8-K, a broad form companies use to notify shareholders of major events, within four days of a cybersecurity event.  A delay in filing will only be allowed if the US Attorney General determines that “immediate disclosure would pose a substantial risk to national security or public safety and notifies the Commission of such determination in writing,” the SEC said.  Final rules, which will be signed into the Federal Register later this year, will apply to big companies within 30 days.  Smaller companies will be given a more generous deadline—180 days—to comply.”

        ◦ SEC Adopts Rules on Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure by Public Companies 

        ◦ Final Rule 

        ◦ Fact Sheet 



Nothing is ever straightforward…

https://fpf.org/blog/old-laws-new-tech-as-courts-wrestle-with-tough-questions-under-us-biometric-laws-immersive-tech-raises-new-challenges/

OLD LAWS & NEW TECH: AS COURTS WRESTLE WITH TOUGH QUESTIONS UNDER US BIOMETRIC LAWS, IMMERSIVE TECH RAISES NEW CHALLENGES

Extended reality (XR) technologies often rely on users’ body-based data, particularly information about their eyes, hands, and body position, to create realistic, interactive experiences.  However, data derived from individuals’ bodies can pose serious privacy and data protection risks for people.  It can also create substantial liability risks for organizations, given the growing volume of lawsuits under the Illinois Biometric Information Privacy Act (BIPA) and scrutiny of biometric data practices by the Federal Trade Commission (“FTC” or “Commission”) in their recent Policy Statement.  At the same time, there is considerable debate and lack of consensus about what counts as biometric data under existing state privacy laws, creating significant uncertainty for regulators, individuals, and organizations developing XR services.

This blog post explores the intersection of US biometric data privacy laws and XR technologies, particularly whether and to what extent specific body-based data XR devices collect and use may be considered “biometric” under various data protection regimes.  We observe that:



The law, she is a-changing. 

https://iapp.org/news/a/third-party-liability-and-product-liability-for-ai-systems/

Third-party liability and product liability for AI systems

…   Traditionally, consumer protection law has been favorable for software vendors, limiting their liability to end users.  This has been particularly true for third-party vendors that have had liability managed by the judicious use of warranty disclaimers, contractual limitations of liability and limitations in the application of negligence law to such vendors.

However, recent U.S. case law signals an erosion of these traditional liability boundaries between vendors of software and their customers.

For example, in Connecticut Fair Housing Center v. Corelogic Rental Property Solutions, a 2019 case against a third-party vendor of tenant screening software, the U.S. District Court held that the vendor of the screening software was subject to the same nondiscrimination provisions of the Fair Housing Act as its landlord customers.  Tenant screening criteria, including criminal records, were made available to landlords through the software.  This could result in discrimination against those with criminal histories and violations of Department of Housing and Urban Development guidance regarding FHA protections.

The court rejected the vendor's argument that it is precluded from FHA liability because its customers have exclusive control over setting the screening criteria.  The court stressed the vendor had a duty to not sell a product which could cause a customer to either knowingly or unknowingly violate federal housing law and regulations.



Resource.

https://www.bespacific.com/research-guide-for-the-constitution-annotated/

Research Guide for the Constitution Annotated

In Custodia Legis – Mitch Ruhl, a paralegal specialist in the American Law Division of the Congressional Research Service.  “One of the challenges for any researcher tackling questions of constitutional interpretation is knowing where to start.  The Congressional Research Service’s (CRS) Constitution of the United States of America: Analysis and Interpretation (or “Constitution Annotated”) serves as the official legal treatise on the constitution, offering a comprehensive, authoritative, and nonpartisan analysis of the most important document in American history.  This year marks the publication of the latest decennial edition and the fourth anniversary of the Constitution Annotated website.  As part of this anniversary, CRS has produced a new research guide dedicated to helping the general reader navigate and understand the Constitution Annotated, whether they are congressional staffers, seasoned attorneys, university students, or anyone interested in the Constitution and how it relates to current issues.  This research guide walks the reader through the Constitution Annotated website; the methodology behind its component essays; additional resources created by CRS, including a comprehensive table of cases cited in all essays, a table of overruled Supreme Court decisions, sets of introductory essays, and a topical guide for each section of the Constitution and its amendments.  The Constitution Annotated research guide will be regularly updated as new essays and resources are added and edited.  Researchers of all backgrounds can use this research guide to delve into this unique and important treatise and further their understanding of how America’s founding document relates to current Supreme Court cases and discussions surrounding constitutional issues.” 


Wednesday, July 26, 2023

Similar to a book ban? How will students learn to use technology properly if schools won’t teach them how? (“We don’t understand it so we should pretend it doesn’t exist.”)

https://www.theguardian.com/world/2023/jul/26/put-learners-first-unesco-calls-for-global-ban-on-smartphones-in-schools

‘Put learners first’: Unesco calls for global ban on smartphones in schools

Smartphones should be banned from schools to tackle classroom disruption, improve learning and help protect children from cyberbullying, a UN report has recommended.

Unesco, the UN’s education, science and culture agency, said there was evidence that excessive mobile phone use was linked to reduced educational performance and that high levels of screen time had a negative effect on children’s emotional stability.

It said its call for a smartphone ban sent a clear message that digital technology as a whole, including artificial intelligence, should always be subservient to a “human-centred vision” of education, and never supplant face-to-face interaction with teachers.





A nugget? (Tools for eliminating lawyers?)

https://www.wfmz.com/news/area/lehighvalley/ai-in-the-lehigh-valley-here-are-the-pros-cons-of-using-artificial-intelligence-in/article_5fded754-2b2a-11ee-b258-1b4e638dc41b.html

AI in the Lehigh Valley: Here are the pros, cons of using artificial intelligence in law

Novick is using artificial intelligence in an ongoing legal battle with his landlord. It's saved him lots of money.

"A couple of thousand dollars," said Novick.

For the last year, he's not had to hire a lawyer.

"I know what the word means," Novick said as he looked at legal terms.

Novick experiments with different types of software, depending on what he's trying to do. Among his favorite websites: Legalese Decoder. It translates law talk into layman's terms.



Tuesday, July 25, 2023

Is this a mistrust of technology? Perhaps the AirTag is not valuable enough by itself?

https://9to5mac.com/2023/07/24/airtag-police-motorcycle/

Chicago man tracks down stolen motorcycle with AirTag, but police can’t help recover it

AirTags are great for finding lost bags, pets, and keys. Apple’s item tracker, however, is no match for property theft — especially when vehicles are involved.

Take this news bulletin out of Chicago, for example, where someone tracked down their stolen motorcycle using Find My.

The owner knows exactly where the bike was taken, thanks to the AirTag under the seat. However, police can only respond if the bike is seen out in the open.





For the auditor in me.

https://www.bespacific.com/tips-for-investigating-algorithm-harm-and-avoiding-ai-hype/

Tips for Investigating Algorithm Harm and Avoiding AI Hype

Rowan Philp, GIJN senior reporter: “…In a recent article for the Columbia Journalism Review, Schellmann, Kapoor, and Dallas Morning News reporter Ari Sen explained that AI “machine learning” systems are neither sentient nor independent. Instead, these systems differ from past computer models because, rather than following a set of digital rules, they can “recognize patterns in data.” “While details vary, supervised learning tools are essentially all just computers learning patterns from labeled data,” they wrote. They warned that futuristic-sounding processes like “self-supervised learning” — a technique used by ChatGPT — do not denote independent thinking, but merely automated labeling. “Performance of AI systems is systematically exaggerated… there are conflicts of interest, bias, and accountability issues to watch.” — Sayash Kapoor, Princeton University computer science Ph.D. candidate. So the data labels and annotations that train algorithms — a largely human-driven process that coaches the computer to find similar things — are a major source of questions for investigative reporters on this beat. Do the labels represent the whole population affected by the algorithm? Who entered those labels? Were they audited? Do the training labels embed historic discrimination? For instance, if you simply asked a basic hiring algorithm to evaluate job applicants for a long-standing engineering company, it would likely discriminate against female candidates, because the data it has for most prior hires would most likely overwhelmingly feature “male” labels…”
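The quoted hiring example can be made concrete with a toy sketch (hypothetical data, stdlib Python only — not any real hiring system): a “model” that merely memorizes the majority label for each feature pattern will faithfully reproduce whatever bias the historical labels contain.

```python
from collections import Counter, defaultdict

# Hypothetical historical hiring records: (gender, degree) -> label.
# The labels reflect past practice, not candidate quality.
history = [
    (("male", "engineering"), "hire"),
    (("male", "engineering"), "hire"),
    (("male", "engineering"), "hire"),
    (("female", "engineering"), "no_hire"),
    (("female", "engineering"), "no_hire"),
]

# "Training" = counting labels per feature pattern.
counts = defaultdict(Counter)
for features, label in history:
    counts[features][label] += 1

def predict(features):
    # Majority label for a seen pattern; overall majority otherwise.
    if features in counts:
        return counts[features].most_common(1)[0][0]
    overall = Counter(label for _, label in history)
    return overall.most_common(1)[0][0]

# Identical qualifications, different predictions -- the bias in the
# training labels has become the "pattern" the model recognizes.
print(predict(("male", "engineering")))    # hire
print(predict(("female", "engineering")))  # no_hire
```

The auditor’s questions quoted above (who entered the labels, were they audited, do they embed historic discrimination) are questions about the `history` list, not about the prediction code.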





An interesting new technology…

https://www.makeuseof.com/benefits-of-ipfs-that-make-it-the-future-of-web/

The 7 Benefits of IPFS That Make It the Future of the Web

The Interplanetary File System (IPFS) is a revolutionary protocol that mimics a blockchain design to decentralize data storage. Juan Benet created it to make Filecoin more open and faster, but over time it has found many applications in other niches.

1. Decentralization

Traditional data storage methods, which rely on centralized servers, are susceptible to outages. That's a challenge that has long plagued the current version of the internet. IPFS brings decentralization to data storage as it adopts a peer-to-peer model where each node in a network has a copy of data, just like on a blockchain.
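The mechanism that makes this peer-to-peer model work is content addressing: data is named by a hash of its bytes, so any node holding the content can serve it, and identical content dedupes to one identifier. A minimal stdlib-Python sketch of the idea (a plain SHA-256 digest stands in for IPFS's real multihash-based CIDs):

```python
import hashlib

def content_id(data: bytes) -> str:
    # IPFS derives CIDs from a multihash of the content; a plain
    # SHA-256 hex digest is a simplified stand-in here.
    return hashlib.sha256(data).hexdigest()

# A toy "node": a content-addressed block store.
store = {}

def put(data: bytes) -> str:
    cid = content_id(data)
    store[cid] = data  # identical content maps to the same key
    return cid

def get(cid: str) -> bytes:
    return store[cid]

cid = put(b"hello, decentralized web")
assert get(cid) == b"hello, decentralized web"
# Storing the same bytes again adds nothing -- deduplication for free.
assert put(b"hello, decentralized web") == cid
assert len(store) == 1
```

Because the identifier depends only on the content, it doesn't matter which node answers the request — a retrieved block can always be verified by re-hashing it.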



Monday, July 24, 2023

It’s not just “violence,” social media also makes kids crazy?

https://www.wsj.com/articles/schools-sue-social-media-platforms-over-alleged-harms-to-students-ebca91a5?mod=djemalertNEWS

Schools Sue Social-Media Platforms Over Alleged Harms to Students

Plaintiffs’ lawyers are pitching school boards throughout the country to file lawsuits against social-media companies on allegations that their apps cause classroom disciplinary problems and mental-health issues, diverting resources from education.

Nearly 200 school districts so far have joined the litigation against the parent companies of Facebook, TikTok, Snapchat and YouTube. The suits have been consolidated in the U.S. District Court in Oakland, Calif., along with hundreds of suits by families alleging harms to their children from social media.

The lawsuits face a test later this year when a judge is expected to consider a motion by the tech companies to dismiss the cases on grounds that the conduct allegedly causing the harm is protected under the internet liability shield known as Section 230.





This raises the question: Is this the best method for training AI? How can anyone vet this data?

https://www.bespacific.com/a-i-brings-shadow-libraries-into-the-spotlight/

A.I. brings shadow libraries into the spotlight

The New York Times [free link] – to see this text, scroll down the page: “Large language models, or L.L.M.s, the artificial intelligence systems that power tools like ChatGPT, are developed using enormous libraries of text. Books are considered especially useful training material, because they’re lengthy and (hopefully) well-written. But authors are starting to push back against their work being used this way. This week, more than 9,000 authors, including Margaret Atwood and James Patterson, called on tech executives to stop training their tools on writers’ work without compensation. That campaign has cast a spotlight on an arcane part of the internet: so-called shadow libraries, like Library Genesis, Z-Library or Bibliotik, that are obscure repositories storing millions of titles, in many cases without permission — and are often used as A.I. training data. A.I. companies have acknowledged in research papers that they rely on shadow libraries. OpenAI’s GPT-1 was trained on BookCorpus, which has over 7,000 unpublished titles scraped from the self-publishing platform Smashwords. To train GPT-3, OpenAI said that about 16 percent of the data it used came from two “internet-based books corpora” that it called “Books1” and “Books2.” According to a lawsuit by the comedian Sarah Silverman and two other authors against OpenAI, Books2 is most likely a “flagrantly illegal” shadow library. These sites have been under scrutiny for some time. The Authors Guild, which organized the authors’ open letter to tech executives, cited studies in 2016 and 2017 that suggested text piracy depressed legitimate book sales by as much as 14 percent. Efforts to shut down these sites have floundered. Last year, the F.B.I., with help from the Authors Guild, charged two people accused of running Z-Library with copyright infringement, fraud and money laundering. But afterward, some of these sites were moved to the dark web and torrent sites, making it harder to trace them.
And because many of these sites are run outside the United States and anonymously, actually punishing the operators is a tall task.”





Always interesting, never amusing.

https://newsroom.ibm.com/2023-07-24-IBM-Report-Half-of-Breached-Organizations-Unwilling-to-Increase-Security-Spend-Despite-Soaring-Breach-Costs

IBM Report: Half of Breached Organizations Unwilling to Increase Security Spend Despite Soaring Breach Costs

AI/Automation cut breach lifecycles by 108 days; $470,000 in extra costs for ransomware victims that avoid law enforcement; Only one-third of organizations detected the breach themselves

IBM Security today released its annual Cost of a Data Breach Report, showing the global average cost of a data breach reached $4.45 million in 2023 – an all-time high for the report and a 15% increase over the last 3 years. Detection and escalation costs jumped 42% over this same time frame, representing the highest portion of breach costs, and indicating a shift towards more complex breach investigations.

According to the 2023 IBM report, businesses are divided in how they plan to handle the increasing cost and frequency of data breaches. The study found that while 95% of studied organizations have experienced more than one breach, breached organizations were more likely to pass incident costs onto consumers (57%) than to increase security investments (51%).



Sunday, July 23, 2023

If we have the face of a criminal, should we be banned from trying to match it?

https://www.technologyreview.com/2023/07/20/1076539/face-recognition-massachusetts-test-police/

The movement to limit face recognition tech might finally get a win

Massachusetts lawmakers are currently thrashing out a bipartisan state bill that seeks to limit police use of the technology. Although it’s not a full ban, it would mean that only state police could use it, not all law enforcement agencies.

Law enforcement agencies in the state were now permitted access only to face recognition systems owned and operated by the Registry of Motor Vehicles (RMV), the state police, or the FBI. As a result, the universe of photos that police could query was much more limited than what was available through a system like Clearview, which gives users access to all public photos on the internet.

To hunt for someone’s image, police had to submit a written request and obtain a court order. That’s a lower bar than a warrant, but previously, they’d just been able to ask by emailing over a photo to search for suspects in misdemeanor and felony offenses including fraud, burglary, and identity theft.





Wow! One whole hour!

https://www.databreaches.net/attorneys-on-alert-for-cybersecurity-threats-new-yorks-new-cle-training-requirement/

Attorneys on alert for cybersecurity threats: New York’s new CLE training requirement

John Bandler reports:

July 1st was a cybersecurity milestone for every New York attorney who now needs to complete an hour of cybersecurity training before renewing their law license. New York Courts in their role supervising and licensing attorneys recognize the importance of cybersecurity, and the threat of cybercrime.

Cybercrime menaces every person and organization including attorneys, law firms and their clients. Attorneys have specialized duties that translate to cybersecurity obligations in addition to the general obligations that apply to other professions and sectors.

Read more at Reuters.





Could be interesting. What conclusions?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4507244

The Future of Cybercrime: AI and Emerging Technologies Are Creating a Cybercrime Tsunami

This paper reviews the impact of AI and emerging technologies on the future of cybercrime and the necessary strategies to combat it effectively. Society faces a pressing challenge as cybercrime proliferates through AI and emerging technologies. At the same time, law enforcement and regulators struggle to keep up. Our primary challenge is raising awareness as cybercrime operates within a distinct criminal ecosystem. We explore the hijacking of emerging technologies by criminals (CrimeTech) and their use in illicit activities, along with the tools and processes (InfoSec) to protect against future cybercrime. We also explore the role of AI and emerging technologies (DeepTech) in supporting law enforcement, regulation, and legal services (LawTech).





I need to think about this one…

http://www.collegepublications.co.uk/downloads/DEON00004.pdf#page=334

Inconsistent precedents and deontic logic

Computational models of legal precedent-based reasoning developed in the field of Artificial Intelligence and Law are typically based on the simplifying assumption that the background set of precedent cases is consistent. Besides being unrealistic in the legal domain, this assumption is problematic for recent promising applications of these models to the development of explainable Artificial Intelligence methods. In this paper I explore a model of legal precedent-based reasoning that, unlike existing models, does not rely on the assumption that the background set of precedent cases is consistent. The model is a generalization of the reason model of precedential constraint. I first show that the model supports an interesting deontic logic, where consistent obligations can be derived from inconsistent case bases. I then provide an explanation of this surprising result by proposing a reformulation of the model in terms of cases that support a new potential decision and cases that conflict with it.
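The “precedential constraint” the abstract builds on can be sketched roughly in code. The sketch below is a hypothetical, simplified a-fortiori reading of the reason model (invented factor names; it is not the paper's generalized model): a precedent for one side precludes deciding the other way in any new case that is at least as strong for that side, and an inconsistent case base can end up precluding both decisions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Case:
    # Factors favoring plaintiff ("p") and defendant ("d"), plus outcome.
    pro_p: frozenset
    pro_d: frozenset
    outcome: str  # "p" or "d"

def precludes(prec: Case, new_p: frozenset, new_d: frozenset, decision: str) -> bool:
    """A precedent precludes a contrary decision in a new case that is
    at least as strong for the precedent's winner (a-fortiori constraint)."""
    if prec.outcome == decision:
        return False
    if prec.outcome == "p":
        return prec.pro_p <= new_p and new_d <= prec.pro_d
    return prec.pro_d <= new_d and new_p <= prec.pro_p

def permitted(case_base, new_p, new_d, decision) -> bool:
    return not any(precludes(c, new_p, new_d, decision) for c in case_base)

# One precedent for plaintiff on factors {f1, f2} vs {g1}.
cb = [Case(frozenset({"f1", "f2"}), frozenset({"g1"}), "p")]
# New case: same plaintiff factors, fewer defendant factors ->
# deciding for the defendant is precluded, for the plaintiff is not.
assert not permitted(cb, frozenset({"f1", "f2"}), frozenset(), "d")
assert permitted(cb, frozenset({"f1", "f2"}), frozenset(), "p")

# Add a conflicting precedent on the very same facts: the case base is
# now inconsistent, and on those facts *both* decisions are precluded --
# the breakdown the paper's generalized model is designed to avoid.
cb2 = cb + [Case(frozenset({"f1", "f2"}), frozenset({"g1"}), "d")]
assert not permitted(cb2, frozenset({"f1", "f2"}), frozenset({"g1"}), "p")
assert not permitted(cb2, frozenset({"f1", "f2"}), frozenset({"g1"}), "d")
```

The paper's contribution is precisely that a suitably reformulated model can still derive consistent obligations from a case base like `cb2`, where this naive constraint collapses.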





Before you answer…

https://www.proquest.com/openview/c1a5c93054c97da9f3f79dce5b86e666/1?pq-origsite=gscholar&cbl=18750&diss=y

Is Wide-Area Persistent Surveillance by State and Local Governments Constitutional?

This dissertation addresses the following question: “Can wide-area persistent surveillance (WAPS) developed by the United States military and employed abroad as a tool in the Global War on Terror be employed domestically as a law enforcement tool without violating the US Constitution’s Fourth Amendment?” The most likely and controversial application of WAPS by state and local governments is for law enforcement. Aircraft will loiter over a city persistently taking high-definition photographs to capture locations of unidentified persons with the intent to identify persons and areas of interest for criminal investigations. Based on the Flyover Cases, aerial surveillance has few constitutional limitations, with which WAPS can be consistent. The key challenge in determining the constitutionality of WAPS depends on the Court’s interpretation of the Fourth Amendment concerning emerging technologies. Legal scholars have suggested various forms of the Mosaic Theory, which was introduced in two concurring opinions in Jones v. United States. The Supreme Court has been reticent to engage new technology’s constitutionality. WAPS is among the less intrusive tools when compared to other emerging technologies like digital information or facial recognition. This research argues why the Courts should view Personal Identifying Information (PII) as the line of reasonable expectations of privacy for WAPS and other emerging technologies. Aerial surveillance, by its nature, collects passive information; new data is not being created by photographing the happenings in public spaces from an aerial platform. In Carpenter v. United States, the Court ruled that warrantless surveillance of cell site location information (CSLI) for more than seven days was an unreasonable search. However, the court repeatedly referred to CSLI as “unique,” whereas “conventional surveillance and tools, such as security cameras,” are not.
WAPS should not be limited by the Constitution for the operational duration, time of day/night, camera resolution, location of collection, altitude, or any other variable at the collection stage of the operations. The analysis and exploitation of WAPS data encounters constitutional limits necessary to protect individuals’ PII absent probable cause standards.