Saturday, February 11, 2023

Imagine policing without computers? Oh the horror!

https://www.databreaches.net/hack-attack-forces-modesto-police-off-computers-back-onto-radio-report-says/

Hack attack forces Modesto Police off computers, back onto radio, report says.

Les Hubbard reports:

An attack on City of Modesto computer systems has left the Modesto Police Department embracing “old school policing” techniques to manage calls for service and transporting of criminals for a number of days.

Read more at The Sun.



(Related) Will the courts say, ‘take all the time you need!’

https://www.databreaches.net/cybersecurity-incident-shuts-down-biglaw-network/

Cybersecurity Incident Shuts Down Biglaw Network

Joe Patrice reports:

On the plus side, the cybersecurity incident at Troutman Pepper does not appear to have compromised any client data. So, in a sense, the system worked.
But as a damage recovery matter, leaving attorneys using personal email accounts and locally saved documents for over a day highlights that for all the talk about protecting data, the unheralded impact of a cyber breach tends to be leaving the firm technologically adrift for extended stretches while tech professionals perform clean up.

Read more at Above the Law.



Friday, February 10, 2023

Can you ask that all surveillance be turned off and then rely on that?

https://fox11online.com/news/local/very-serious-privacy-invasion-american-civil-liberties-union-aclu-analyst-green-bays-audio-surveillance-microphone-eric-genrich-jay-stanly-andre-jacque-bill-galvin-joanne-bungert

'Very serious privacy invasion': ACLU analyst on Green Bay's audio surveillance

A senior official with the American Civil Liberties Union tells FOX 11 audio surveillance at Green Bay's city hall is unlike anything he's heard of before.

“This is the first sort of city hall or political location that I've heard doing something like this,” said Jay Stanley, a senior policy analyst for the ACLU in Washington D.C., who has been with the nonprofit since five weeks before 9/11.

City officials say microphones were put on the hallway ceilings on the first and second levels, outside the city clerk’s office, the city council chambers, and the mayor’s office within the past two years due to threatening interactions involving members of the public and staff.

… “We have millions of video surveillance cameras all around this country and almost none of them have microphones on them and it's because wiretapping laws, federal and state wiretapping laws, make it very legally difficult to record audio conversations in public,” said Stanley.

There are no signs at city hall warning people of the audio recording devices. Some city council members were surprised and upset when they first learned about the devices during Tuesday’s city council meeting.

“To have a recording device that people might not be aware of, at such a location, is a serious threat to privacy and completely unjustified,” said Stanley.

State law requires that at least one party to a private conversation consent to the communication being recorded.





At last, Texas?

https://www.huntonprivacyblog.com/2023/02/09/texas-state-representative-introduces-comprehensive-state-privacy-bill-draft/

Texas State Representative Introduces Comprehensive State Privacy Bill Draft

On February 6, 2023, Texas State Representative Giovanni Capriglione submitted H.B. 1844, a comprehensive privacy bill modeled after the Virginia Consumer Data Protection Act (“VCDPA”). The bill could make Texas the sixth U.S. state to enact major privacy legislation, following California, Virginia, Colorado, Utah, and Connecticut. Although the bill closely follows the VCDPA, it departs from the Virginia law in several key areas, most notably in the definition of “personal data” and its applicability.



Thursday, February 09, 2023

Imagine all the major tech companies creating their own ChatGPT.

https://www.makeuseof.com/google-launching-bard-ai-compete-with-chatgpt/

Google Is Launching An AI Called Bard to Compete With ChatGPT

Can it take on the might of OpenAI's ChatGPT, or will it be another Google feature eventually consigned to history?

The launch of ChatGPT rattled several tech companies. Google, whose revenue is heavily dependent on its search business—something ChatGPT could eventually threaten—has been particularly concerned.

Now, less than three months into ChatGPT's existence, Google has announced the launch of a ChatGPT-styled AI called Bard to take on the seemingly unchallenged reign of ChatGPT. But how will Bard work? Will Bard be better than ChatGPT? Here's everything we know so far.



(Related)

https://www.bespacific.com/chatgpt-is-a-data-privacy-nightmare/

ChatGPT is a data privacy nightmare

The Conversation – If you’ve ever posted online, you ought to be concerned: “ChatGPT has taken the world by storm. Within two months of its release it reached 100 million active users, making it the fastest-growing consumer application ever launched. Users are attracted to the tool’s advanced capabilities and concerned by its potential to cause disruption in various sectors. A much less discussed implication is the privacy risks ChatGPT poses to each and every one of us. Just yesterday, Google unveiled its own conversational AI called Bard, and others will surely follow. Technology companies working on AI have well and truly entered an arms race. The problem is it’s fuelled by our personal data…”





A warlike act, in a time of peace?

https://www.databreaches.net/insurers-say-cyberattack-that-hit-merck-was-warlike-act-not-covered/

Insurers Say Cyberattack That Hit Merck Was Warlike Act, Not Covered

Richard Vanderford reports on another attempt by insurers to avoid covering the costs of a cyberattack by invoking the common war exclusion:

The costly NotPetya cyberattack, which the U.S. blamed on Russia, should be considered a “cyber nuclear attack,” insurers argued as they urged judges to overturn a legal win by Merck & Co. in a dispute that could have broad ramifications for business insurance.
Merck, which had an estimated $1.4 billion in losses after NotPetya invaded its computer systems in 2017, suffered the collateral damage of a warlike act not covered by insurance, lawyers for a group of carriers told judges Wednesday in a state appeals court in Trenton, N.J.

Read more at WSJ.



Wednesday, February 08, 2023

Perhaps GDPR isn’t as broad as I assumed it was?

https://www.pogowasright.org/cnil-weighs-in-on-gdpr-applicability-to-us-company/

CNIL Weighs in On GDPR Applicability to US Company

Liisa Thomas & Kathryn Smith of Sheppard Mullin write:

The French Data Protection Authority capped off 2022 by terminating an investigation into Lusha Systems, Inc.’s compliance with GDPR. CNIL concluded that the law did not apply to the US company’s activities. As many know, since GDPR was passed US companies have been concerned about the extent the law applies outside of the EU: it applies not only to those entities with operations in the EU, but also those outside of the region who are either offering goods or services to people in the EU or monitoring individuals in the EU. Here, CNIL concluded that Lusha was not offering goods or services to those in the EU, nor was it monitoring those in the EU.
The European Data Protection Board has issued guidance and examples on the territorial scope of GDPR. These include “monitoring” situations, perhaps the trickiest fact pattern. However, the guidance gives examples of when GDPR would apply but not situations where it would not apply. The Lusha case is thus helpful to companies as they consider GDPR applicability.

Read more at Eye on Privacy.





Not just no, hell no.

https://abovethelaw.com/2023/02/ais-impact-on-the-future-of-law-will-lawyers-survive/

AI’s Impact On The Future of Law: Will Lawyers Survive?

We suspect that many things we once thought impossible will be made possible by the new generation of AI.

Catchy title, right? Well, we must ‘fess up – OpenAI’s ChatGPT lent us a hand. We submitted this request: “Suggest several striking titles for an article about why lawyers are afraid of being replaced by AI.” We got 12 proposed titles in return, all of them credible as well as catchy.

Is it possible that AI will one day replace some lawyers?

When asked this question, the AI waffled a bit and said, “it is possible that AI could eventually replace some aspects of a lawyer’s job, such as document review, legal research and contract analysis.” Perhaps to make us feel better, it offered its opinion that “it is unlikely that AI will completely replace the role of lawyers as the legal profession requires a high degree of critical thinking, problem-solving and decision-making skills that are currently difficult for AI to replicate.” Note the word “currently.”

Ultimately, it opined that “it is more likely that AI will become a tool that lawyers use to augment their abilities, rather than a replacement for lawyers altogether.” Only partial comfort there . . .



Sunday, February 05, 2023

Should apply to other types of writing.

https://www.tandfonline.com/doi/full/10.1080/08989621.2023.2168535

Using AI to write scholarly publications

Artificial intelligence (AI) natural language processing (NLP) systems, such as OpenAI’s generative pre-trained transformer (GPT) model (https://openai.com) or Meta’s Galactica (https://galactica.org/) may soon be widely used in many forms of writing, including scientific and scholarly publications (Heaven 2022). While computer programs (such as Microsoft WORD and Grammarly) have incorporated automated text-editing features (such as checking for spelling and grammar) for many years, these programs are not designed to create content. However, new and emerging NLP systems are, which raises important issues for research ethics and research integrity.



(Related)

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4342909

AI-Assisted Authorship: How to Assign Credit in Synthetic Scholarship

This report proposes principles for determining when it is required to credit an artificial intelligence (AI) writer for its contributions to scholarly work. We begin by critiquing a policy recently published by the journal Nature, which forbids acknowledging AI writers as authors. We question the justification and breadth of this policy. We then suggest two fundamental considerations that we think are more relevant: continuity (how substantially are the contributions of AI writers carried through to the final product?), and creditworthiness (would this kind of product typically result in academic or professional credit for a human author?). We draw upon brief reflections on the nature and value of authorship to justify these considerations. This report provides a starting point for academics and the broader scholarly community in the emerging debate on determining when and how to credit AI writers’ contributions.



(Related)

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4338115

Best Practices for Disclosure and Citation When Using Artificial Intelligence Tools

This article is intended to be a best practices guide for disclosing the use of artificial intelligence tools in legal writing. The article focuses on using artificial intelligence tools that aid in drafting textual material, specifically in law review articles and law school courses. The article’s approach to disclosure and citation is intended to be a starting point for authors, institutions, and academic communities to tailor based on their own established norms and philosophies. Throughout the entire article, the author has used ChatGPT to provide examples of how artificial intelligence tools can be used in writing and how the output of artificial intelligence tools can be expressed in text, including examples of how that use and text should be disclosed and cited. The article will also include policies for professors to use in their classrooms and journals to use in their submission guidelines.