Rules for facial recognition?
https://pogowasright.org/nz-commissioner-issues-biometric-processing-privacy-code/
NZ: Commissioner issues Biometric Processing Privacy Code
The Privacy Commissioner has issued a biometric Code that will create specific privacy rules for agencies (businesses and organisations) using biometric technologies to collect and process biometric information.
The Code, which is now law made under the Privacy Act, will help make sure agencies implementing biometric technologies are doing so safely and proportionately.
The Code comes into force on 3 November 2025, but agencies already using biometrics have a nine-month grace period to move to the new set of rules. That transition period ends on 3 August 2026.
Guidance is also being issued to support the Code.
Read a summary of the Biometric Processing Privacy Code
Read the Biometric Processing Privacy Code
See our factsheets for an overview of the Code
Read our guidance on the Code
Source: Privacy Commissioner of New Zealand
Now that is different.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5399463
Plagiarism, Copyright, and AI
Critics of generative AI often describe it as a “plagiarism machine.” They may be right, though not in the sense they mean. With rare exceptions, generative AI doesn’t copy someone else’s creative expression outright, producing outputs that infringe copyright. But it does get its ideas from somewhere. And it’s quite bad at identifying the source of those ideas. That means that students (and professors, and lawyers, and journalists) who use AI to produce their work generally aren’t engaged in copyright infringement. But they are often passing someone else’s work off as their own, whether or not they know it. While plagiarism is a problem in academic work generally, AI makes it much worse, because authors who use AI may be taking the ideas and words of someone else without knowing it.
Disclosing that the authors used AI isn’t a sufficient solution to the problem, because the people whose ideas are being used don’t get credit for those ideas. Whether or not a declaration that “AI came up with my ideas” is plagiarism, it is a bad academic practice.
We argue that AI plagiarism isn’t—and shouldn’t be—illegal. But it is still a problem in many contexts, particularly academic work, where proper credit is an essential part of the ecosystem. We suggest best practices to align academic and other writing with good scholarly norms in the AI environment.
I must have missed some of this…
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5395242
AI Training is Fair Use: The Beginning of the End of the Copyright Assault on Gen AI
Two federal courts overseeing claims against the developers of generative artificial intelligence (GenAI) have pointed the way to resolving these infringement actions by finding that the training of GenAI models is a transformative fair use under copyright law. While the two opinions differed in tone and scope, this article takes these rulings as the starting point for a discussion on resolving the ongoing copyright claims against AI developers, signaling what may be the beginning of the end of the copyright assault on GenAI.
The goal of this article is to inject urgency into resolving these matters. It asserts that uncertainty over the legal status of AI training is a drag on innovation and development in this vital economic sector. While massive investments are pouring into this field, the money flows to an extraordinarily small number of players whose resources allow them to run the risks posed by class actions and multi-party actions demanding damages that might cripple even the largest companies. With the threat of destruction by copyright infringement action removed, AI development could expand and flourish among even the smallest of innovators.
Ending the infringement actions requires more than just a recognition that indiscriminately drawing data from existing works without permission and without licensing to create a generative artificial intelligence expression machine is fundamentally transformative under factor one of the copyright fair use test. Plaintiffs have fought to sell a theory of the case that keeps AI developers in the defendants’ seats, even though the parties responsible for the production of outputs and for any resulting market harm are the end-users of the technology.
This article asserts that the proper theory of these infringement cases is that GenAI developers made a general-purpose technology that can create an infinite variety of new, original expression, but end-users of the technology can choose to use it to compete with the plaintiff artists and creators in their same style and in their same medium, at massively reduced costs and massively increased speeds. And sometimes end-users will use the technology to create infringing works. Far from being a unique 21st century high technology story, this story is the same as that of photocopy machines, Betamax and VCR devices, scanners, image-editing software, and internet search engines, all of which are capable of making duplicates of expressive works that can be put to uses that infringe on the original works and harm their markets. Yet, the designers of these copying technologies are not sued for copyright infringement because of the disconnect between the action of creating a useful tool and the action of an end-user who co-opts the tool for their own purposes.
The designers of these GenAI models made them powerful and extraordinarily fluent tools for creating new expression with a “further purpose or different character, altering the first with new expression, meaning, or message,” but in the end, GenAI systems are just tools. They are not artists or authors and do not automatically regurgitate infringing content. Rather, they are tools capable of being used by end-users who may act purposefully to create substantially similar and potentially infringing works that can be used to compete with the plaintiffs.