Thursday, January 26, 2023

Keeping score.

https://www.cpomagazine.com/data-protection/dla-piper-annual-gdpr-and-data-breach-report-2022-a-record-year-for-gdpr-fines-despite-drop-in-breach-count/

DLA Piper Annual GDPR and Data Breach Report: 2022 a Record Year for GDPR Fines Despite Drop in Breach Count

DLA Piper’s annual report covering EU data breaches and GDPR fines documents a record year in penalties, with a total of €2.92 billion levied across the bloc in 2022. This comes despite a small drop in the overall breach count, though it is worth remembering that fines are often assessed for complaints and cases initiated years earlier.

The report also indicates that the bloc’s regulators are making AI more of a priority, as concerns run rampant about everything from facial recognition tools to ChatGPT.





Guidelines or serious thought?

https://www.defenseone.com/policy/2023/01/when-may-robot-kill-new-dod-policy-tries-clarify/382215/

When May a Robot Kill? New DOD Policy Tries to Clarify

Did you think the Pentagon had a hard rule against using lethal autonomous weapons? It doesn’t. But it does have hoops to jump through before such a weapon might be deployed—and, as of Wednesday, a revised policy intended to clear up confusion.

The biggest change in the Defense Department’s new version of its 2012 doctrine on lethal autonomous weapons is a clearer statement that it is possible to build and deploy them safely and ethically but not without a lot of oversight.

That’s meant to clear up the popular perception that there’s some kind of a ban on such weapons. “No such requirement appears in [the 2012 policy] DODD 3000.09, nor any other DOD policy,” wrote Greg Allen, the director of the Artificial Intelligence Governance Project and a senior fellow in the Strategic Technologies Program at the Center for Strategic and International Studies.

What the 2012 doctrine actually says is that the military may make such weapons but only after a “senior level review process,” which no weapon has gone through yet, according to a 2019 Congressional Research Service report on the subject.





Tools & Techniques.

https://www.bespacific.com/nonprofits-release-free-tool-to-detect-ai-written-student-work/

Nonprofits release free tool to detect AI-written student work

Fast Company: “As concerns rise about students’ use of generative artificial intelligence like ChatGPT to complete schoolwork, a pair of education nonprofits have created a free system to help teachers detect AI-assisted essays. The tool, called AI Writing Check, was developed by the writing nonprofits Quill and CommonLit using an open-source AI model designed to detect the output of ChatGPT and related systems. It enables teachers (or anyone else) to copy and paste text and within a few seconds receive a determination on whether the work in question was written by ChatGPT. AI Writing Check, which the nonprofits began to develop in December, comes as surveys indicate growing concern among teachers over machine-generated essays. Other tools, including one called GPTZero, have also been released recently to detect automated writing…”
