Saturday, May 20, 2023

A very vanilla statement?

https://www.huntonprivacyblog.com/2023/05/19/ftc-issues-policy-statement-on-biometric-information-and-section-5-of-the-ftc-act-to-address-concerns-about-misuse/

FTC Issues Policy Statement on Biometric Information and Section 5 of the FTC Act to Address Concerns about Misuse

On May 18, 2023, the Federal Trade Commission issued a policy statement on “Biometric Information and Section 5 of the Federal Trade Commission Act.” The statement warns that the use of consumer biometric information and related technologies raises “significant concerns” regarding privacy, data security, and bias and discrimination, and makes clear the FTC’s commitment to combatting unfair or deceptive acts and practices related to the collection and use of consumers’ biometric information and the marketing and use of biometric information technologies.





Tools & Techniques.

https://www.makeuseof.com/windows-whisper-desktop-guide/

How to Turn Your Voice to Text in Real Time With Whisper Desktop

The very same people behind ChatGPT have created another AI-based tool you can use today to boost your productivity. We're referring to Whisper, a voice-to-text solution that eclipsed all similar solutions that came before it.
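
Whisper Desktop is a GUI wrapper around OpenAI’s open-source Whisper model; the model itself is only a few lines of Python if you would rather script it. A minimal sketch (the file name is a placeholder):

# Minimal sketch of the open-source Whisper model that Whisper Desktop wraps.
# Requires: pip install openai-whisper (and ffmpeg on your PATH).
import whisper

model = whisper.load_model("base")          # larger models are slower but more accurate
result = model.transcribe("recording.mp3")  # placeholder file name
print(result["text"])                       # the recognized speech as plain text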



Friday, May 19, 2023

It’s not just images of your face. This addresses only one individual. Can it be generalized?

https://www.pogowasright.org/decision-by-the-austrian-sa-against-clearview-ai-infringements-of-articles-5-6-9-27-gdpr/

Decision by the Austrian SA against Clearview AI: Infringements of Articles 5, 6, 9, 27 GDPR

Summary of the Decision

Origin of the case

Following a complaint, the Austrian SA (DSB) issued a decision against the facial recognition company Clearview AI on 10 May 2023.

The company reportedly owns a database including over 30 billion facial images from all over the world, which are extracted from public web sources (media outlets, social media, online videos) via web scraping. It offers a sophisticated search service which, through AI systems, allows the creation of profiles on the basis of the biometric data extracted from the images. The profiles can be enriched by information linked to those images such as image tags and geolocation or the source web pages.

Through an access request, the complainant found out that his image data was also being processed by Clearview AI. He thereupon lodged a complaint with the Austrian SA.

Key Findings

The DSB found that Clearview AI infringed the following provisions of the GDPR:

Article 5(1)(a): The processing of the complainant’s personal data lacked lawfulness, fairness and transparency.

Article 5(1)(b): The processing carried out by Clearview AI serves a completely different purpose from the original publication of the complainant’s personal data (especially photographs).

Article 5(1)(c): The permanent storage of personal data also constitutes a breach of the data minimisation principle.

Article 9(1): The scanning of the complainant’s face, the extraction of his uniquely identifying facial features and the translation of these features into vectors constitute processing of special categories of personal data. An exception to the processing prohibition pursuant to Article 9(2) does not apply in this case, which is why the processing was carried out in violation of Article 9(1) GDPR.

To the extent that the complainant’s personal data did not constitute special categories of personal data and thus Art. 9 GDPR did not apply, the processing would be unlawful:

Article 6(1): The processing by Clearview AI could only have been covered by Article 6(1)(f) GDPR. After an extensive weighing of interests, the DSB came to the conclusion that, due to the serious intrusion into his privacy, the interests of the complainant clearly outweighed the purely commercial interests of Clearview AI.
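
The Article 9 finding above turns on the face-to-vector step. A minimal sketch of that kind of processing, using the open-source face_recognition library (an illustration only, not Clearview’s actual pipeline; the file name is a placeholder):

# Illustration of the face-to-vector processing the DSB classified as
# biometric (special category) data. Uses the open-source face_recognition
# library, not Clearview's proprietary system.
import face_recognition

image = face_recognition.load_image_file("scraped_photo.jpg")
encodings = face_recognition.face_encodings(image)  # one 128-d vector per detected face

if encodings:
    vector = encodings[0]
    print(len(vector))  # 128 numbers characterizing the face, ready to index and search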

Decision

The Austrian SA found that Clearview AI infringed the above provisions of the GDPR.

Clearview AI was ordered to erase the complainant’s personal data and to designate a representative within the European Union.

The news published here does not constitute official EDPB communication, nor an EDPB endorsement. This news item was originally published by the national supervisory authority and was published here at the request of the SA for information purposes. Any questions regarding this news item should be directed to the supervisory authority concerned.

Source: EDPB





Should be an amusing fight.

https://www.pogowasright.org/montana-governor-bans-tiktok-but-can-the-state-enforce-the-law/

Montana Governor Bans TikTok. But Can the State Enforce the Law?

AP reports:

Montana Gov. Greg Gianforte on Wednesday signed into law a first-of-its-kind bill that makes it illegal for TikTok to operate in the state, setting up a potential legal fight with the company amid a litany of questions over whether the state can even enforce the law.
The new rules in Montana will have more far-reaching effects than TikTok bans already in place on government-issued devices in nearly half the states and the U.S. federal government. There are 200,000 TikTok users in Montana as well as 6,000 businesses that use the video-sharing platform, according to company spokesperson Jamal Brown.

Read more at GVWire.

And read about a lawsuit challenging the law that has already been filed, at Courthouse News.





Here is a law that I would have happily ignored as a kid, if there had been such things as social media back then.

https://www.makeuseof.com/should-laws-prevent-kids-joining-social-media-parental-consent/

Should Laws Prevent Kids From Joining Social Media Without Parental Consent?

The first US state to pass a law addressing parental consent for social media was Utah in March 2023. The law also prevents minors from being on social media at certain late-night hours, and requires age verification, according to NPR.

Arkansas passed a law requiring social media companies to collect a photo ID of new users to determine their age. People under the age of 18 in the state will need their parents’ consent to create an account on social media sites, according to Vice.

Ohio, Texas, Louisiana, and New Jersey are considering similar laws. And there could be more coming at the federal level.





Would a manager let AI manage without him? Perhaps AI could point to areas where your current practices resulted in discrimination and help you resolve them? (If not, what good is AI?)

https://fortune.com/2023/05/18/bossware-ai-remote-workers-tracking-software-could-be-illegal-eeoc/

‘Bossware’ AI that tracks remote workers’ activities could break the law, government says

“What will happen is that there’s an algorithm that is looking for patterns that reflect patterns that it’s already familiar with,” she said. “It will be trained on data that comes from its existing employees. And if you have a non-diverse set of employees currently, you’re likely to end up with kicking out people inadvertently who don’t look like your current employees.”

Amazon, for instance, abandoned its own resume-scanning tool to recruit top talent after finding it favored men for technical roles — in part because it was comparing job candidates against the company’s own male-dominated tech workforce.
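
The mechanism she describes is easy to reproduce. A toy sketch (synthetic data, not the EEOC’s analysis): a screening model trained on historical hiring data in which resembling current staff mattered learns to penalize equally skilled candidates who don’t.

# Toy illustration (synthetic data) of how a screening model trained on a
# non-diverse workforce's hiring history replicates that workforce.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)                    # what we actually want to measure
resembles_staff = rng.integers(0, 2, size=n)  # proxy attribute shared with incumbents
# Historical "hired" labels: skill mattered, but resembling current staff mattered more.
hired = 0.5 * skill + 1.5 * resembles_staff + rng.normal(scale=0.5, size=n) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, resembles_staff]), hired)
# Two equally skilled candidates, differing only in the proxy attribute:
probs = model.predict_proba([[1.0, 1], [1.0, 0]])[:, 1]
print(probs)  # the candidate who resembles current staff scores far higher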





Clearly you can write rules. The problem is finding AI systems and determining how they implement the rules.

https://www.ft.com/content/8446842c-537a-4fc4-9e02-667d719526ae

Can AI be regulated?

For regulators trying to get their heads around the new generation of artificial intelligence systems such as ChatGPT, there are two very different problems to confront. One is when the technology does not work as intended. The other is when it does.





A resource worth considering?

https://www.latimes.com/california/story/2023-05-18/uc-berkeley-spreads-the-gospel-of-data-science-with-new-college-free-curriculum

UC Berkeley spreads the gospel of data science with new college, free curriculum

UC Berkeley’s faculty and students are marshaling the vast power of data science across myriad fields to address tough problems. And now the university is set to accelerate those efforts with a new college, its first in more than 50 years — and is providing free curriculum to help spread the gospel of data science to California community colleges, California State University and institutions across the nation and world.

The university has posted its curriculum online, complete with assignments, slides and readings, and shared it with more than 89 other campuses. Classes have launched or are set to begin this fall at six California community colleges, four Cal State campuses and other universities including Howard, Tuskegee, Cornell, Barnard and the United States Naval Academy.

[From the curriculum:

All materials for the course, including the textbook and assignments, are available for free online under a Creative Commons license.

Textbook: Computational and Inferential Thinking: The Foundations of Data Science is a free online textbook that includes interactive Jupyter notebooks and public data sets for all examples. The textbook source is maintained as an open source project.]
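
For a flavor of the materials: the textbook’s examples run on the course’s open-source datascience Python package. A minimal sketch in that style (the numbers are illustrative):

# A taste of the course style, using the open-source `datascience` package the
# textbook is built on (pip install datascience). Numbers are illustrative.
from datascience import Table

cities = Table().with_columns(
    "city", ["Berkeley", "Oakland", "Richmond"],
    "population", [124_321, 440_646, 116_448],
)
print(cities.sort("population", descending=True))  # table sorted largest-first
print(cities.column("population").mean())          # columns are NumPy arrays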





Tools & Techniques.

https://www.theverge.com/2023/5/18/23728703/openai-chatgpt-app-ios

OpenAI launches free ChatGPT app for iOS

OpenAI has launched an iOS app for ChatGPT, promising that an Android version is coming “soon.” The app is free to use, syncs chat history with the web, and features voice input, supported by OpenAI’s open-source speech recognition model Whisper. The app works on both iPhones and iPads and can be downloaded from the App Store. OpenAI says it’s rolling out the app in the US first and will expand to other countries “in the coming weeks.”



Thursday, May 18, 2023

When might this reach a tipping point? Or has France decided that Clearview is too useful to squash?

https://www.cpomagazine.com/data-protection/overdue-data-protection-fine-for-clearview-ai-facial-recognition-software-is-leading-to-big-penalties/

Overdue Data Protection Fine for Clearview AI Facial Recognition Software Is Leading to Big Penalties

In October 2022, French data privacy regulator CNIL fined Clearview AI €20 million for its scraping of social media profiles and public sources for biometric image fodder. The controversial facial recognition outfit was also ordered to stop this sort of data collection and delete the data it had already collected, within two months of the decision. After the company failed to pay the data protection fine or provide any proof of compliance, CNIL is now issuing an overdue payment penalty in the amount of €5.2 million.

France has not gone as far as to bar Clearview from offering its facial recognition services in the country, allowing it to keep its local website up, but the company appears to have been voluntarily steering clear of clients in the EU for several years now in a bid to avoid regulation. The company appears to do the vast majority of its business with US law enforcement agencies and, in the past, with some major retail chains in the country. In June 2020 the European Data Protection Board warned the company that its product was likely to be found illegal to use in the bloc.





Is China using Russia’s playbook? Might we see a second Ukraine? Can we support two?

https://thehackernews.com/2023/05/escalating-china-taiwan-tensions-fuel.html

Escalating China-Taiwan Tensions Fuel Alarming Surge in Cyber Attacks

The rising geopolitical tensions between China and Taiwan in recent months have sparked a noticeable uptick in cyber attacks on the East Asian island country.

"From malicious emails and URLs to malware, the strain between China's claim of Taiwan as part of its territory and Taiwan's maintained independence has evolved into a worrying surge in attacks," the Trellix Advanced Research Center said in a new report.

The attacks, which have targeted a variety of sectors in the region, are mainly designed to deliver malware and steal sensitive information, the cybersecurity firm said, adding it detected a four-fold jump in the volume of malicious emails between April 7 and April 10, 2023.





Those who do not study history…

https://www.bespacific.com/how-people-reacted-to-greatest-inventions-in-history/

How People Reacted to Greatest Inventions in History

Real Artificial – From Printing Press to Generative AI: “Since the dawn of history, humans have been reluctant to embrace new inventions, from the printing press to modern generative AI innovations. But eventually, they come around. In this post, we’ll explore some of the greatest inventions of all time that have, one by one, reshaped the way we consume information, learn, and communicate with each other. Spoiler: people did not always respond kindly to them at first… All in all, throughout history, people have been both excited and terrified by new inventions. Whether it’s the printing press, the telephone, the computer, or generative AI, each new technology has faced its own set of challenges and uncertainties. However, as we look back on these inventions today, it’s clear that they have all played a significant role in shaping the world we live in.”



Wednesday, May 17, 2023

Opinion as testimony. 

https://www.bespacific.com/oversight-of-a-i-rules-for-artificial-intelligence/

Oversight of A.I.: Rules for Artificial Intelligence

Senate Judiciary Committee Hearing. May 15, 2023 – Oversight of A.I.: Rules for Artificial Intelligence – Hearing video

    • Witnesses – Samuel Altman, CEO OpenAI San Francisco, CA – Download Testimony:  “…OpenAI is a leading developer of large language models (LLMs) and other AI tools.  Fundamentally, the current generation of AI models are large-scale statistical prediction machines – when a model is given a person’s request, it tries to predict a likely response.  These models operate similarly to auto-complete functions on modern smartphones, email, or word processing software, but on a much larger and more complex scale.  The model learns from reading or seeing data about the world, which improves its predictive abilities until it can perform tasks such as summarizing text, writing poetry, and crafting computer code.  Using variants of this technology, AI tools are also capable of learning statistical relationships between images and text descriptions and then generating new images based on natural language inputs…” (A toy sketch of this next-token prediction idea follows these excerpts.)

    • Christina Montgomery Chief Privacy & Trust Officer IBM Cortlandt Manor, NY – Download Testimony – “…IBM has strived for more than a century to bring powerful new technologies like artificial intelligence into the world responsibly, and with clear purpose.  We follow long-held principles of trust and transparency that make clear the role of AI is to augment, not replace, human expertise and judgement.  We were one of the first in our industry to establish an AI Ethics Board, which I co-chair, and whose experts work to ensure that our principles and commitments are upheld in our global business engagements… This period of focused public attention on AI is precisely the time to define and build the right guardrails to protect people and their interests.  It is my privilege to share with you IBM’s recommendations for those guardrails…”

    • Gary Marcus Professor Emeritus New York University Vancouver, BC, Canada – Download Testimony:  “…We all more or less agree on the values we would like for our AI systems to honor.  We want, for example, for our systems to be transparent, to protect our privacy, to be free of bias, and above all else to be safe.  But current systems are not in line with these values.  Current systems are not transparent, they do not adequately protect our privacy, and they continue to perpetuate bias.  Even their makers don’t entirely understand how they work.  Most of all, we cannot remotely guarantee they are safe…”

See also WSJ [free article] – ChatGPT’s Sam Altman Faces Senate Panel Examining Artificial Intelligence. Congress looks to impose AI regulations, if it can reach consensus
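
Altman’s “statistical prediction machine” framing is easy to demonstrate at toy scale. A deliberately tiny sketch (a word-level bigram counter, nothing like a production LLM):

# Toy next-token predictor: count which word follows which in a tiny corpus,
# then "auto-complete" with the most likely successor. Real LLMs do something
# analogous with neural networks and billions of parameters, not a count table.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and then the cat ate the fish".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word: str) -> str:
    # Most frequent word observed after `word` in the training text.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # 'cat' (seen twice after 'the', vs once each for 'mat', 'fish')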



We don’t have any?  Really?  

https://news.usni.org/2023/05/16/defense-primer-u-s-policy-on-lethal-autonomous-weapon-systems-2 

Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems

The following is the May 15, 2023, Congressional Research Service report, Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems.

…   Contrary to a number of news reports, U.S. policy does not prohibit the development or employment of LAWS.  Although the United States does not currently have LAWS in its inventory, some senior military and defense leaders have stated that the United States may be compelled to develop LAWS in the future if U.S. competitors choose to do so.  At the same time, a growing number of states and nongovernmental organizations are appealing to the international community for regulation of or a ban on LAWS due to ethical concerns.



Resource? 

https://fpf.org/blog/new-fpf-report-unlocking-data-protection-by-design-and-by-default-lessons-from-the-enforcement-of-article-25-gdpr/ 

NEW FPF REPORT: UNLOCKING DATA PROTECTION BY DESIGN AND BY DEFAULT: LESSONS FROM THE ENFORCEMENT OF ARTICLE 25 GDPR

On May 17, the Future of Privacy Forum launched a new report on enforcement of the EU’s GDPR Data Protection by Design and by Default (DPbD&bD) obligations, which are outlined in GDPR Article 25.  The Report draws from more than 92 data protection authority (DPA) cases, court rulings, and guidelines from 16 EEA member states, the UK, and the EDPB to provide an analysis of enforcement trends regarding Article 25.  The identified cases cover a spectrum of personal data processing activities, from accessing online services and platforms, to tools for educational and employment contexts, to “emotion recognition” AI systems for customer support, and many more.



Could be amusing to play with… 

https://www.bespacific.com/law-firm-newswire-launches-free-ai-press-release-writer-for-lawyers/

Law Firm Newswire Launches Free AI Press Release Writer for Lawyers

May 15, 2023 (Law Firm Newswire via COMTEX)  “Law Firm Newswire (LFN) has developed an AI press release writer that can help lawyers and law firms write quality press releases.  AI learns by example; thus LFN’s engineers used world-class legal journalism, court opinions, and some of LFN’s best-performing press releases as a foundation to create perfect outputs for lawyers.

Knowing How to Write for Lawyers

Law firm press releases are different from media announcements from other industries.  Thus, plugging a “write a press release for XYZ” prompt into OpenAI’s ChatGPT or Google Bard alone would result in basic outputs that don’t have the guidance needed to write an accurate press release about legal complexities.  When a court authorizes a settlement or a verdict is handed down from a jury, a marketing professional can drop that ruling into the AI writer and make it a press release.  This saves a lot of time because the AI press release writer can translate the legal wording of the document into a public-friendly, concise news announcement for the law firm.

This AI writer can also listen to input to write a better press release.  A marketing director or attorney can chat with the writer, telling it to come up with a more compelling title or to include additional case information or information about any lawyers they want to be featured in the press release.  With the ability to prompt the AI writer with new information and ask for rewrites, the press release will come out just the way you want it.  LFN’s AI writer will allow law firms to send out announcements about lawsuits and settlements instantly.  In one test case, one of LFN’s agency accounts took an approved class action settlement document and, using the AI writer, was able to have the press release to their client for approval within 30 minutes.  Lawyers can rely on the AI writer to produce high-quality outputs that accurately represent their news and their successes.

Earlier this year, Law Firm Newswire started testing AI models on which to build its writer.  The first version, which was quietly released with the new platform’s soft launch in April, was using OpenAI’s Davinci model.  But the new interactive writer is built on ChatGPT 3.5.”

    • Anyone can use the AI press release writer by creating a free account at https://lawfirmnewswire.com/join/.

    • Documentation on using the AI writer is available at https://lawfirmnewswire.com/learn/using-ai-writer/.
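
LFN hasn’t published its implementation beyond the model name. A minimal sketch of the general pattern such a tool would follow (hypothetical prompt text and function name), using the openai package’s ChatCompletion API as it existed in May 2023:

# Sketch of a press-release-writer wrapper (hypothetical prompts; LFN's actual
# implementation is not public). Requires: pip install openai (0.27.x era).
import openai

openai.api_key = "sk-..."  # your API key

def draft_press_release(case_summary: str) -> str:
    messages = [
        {"role": "system", "content": (
            "You are a legal press release writer. Translate legal wording "
            "into a concise, public-friendly news announcement.")},
        {"role": "user", "content": f"Write a press release based on:\n{case_summary}"},
    ]
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    return response.choices[0].message["content"]

The “chat to refine” feature described above maps naturally onto appending follow-up messages (“make the title more compelling”) and calling the API again.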

      

Tuesday, May 16, 2023

It’s like facial recognition, but without the need for your face. (Why are the cans of worms getting larger?) How do you prove that DNA collected at a crime scene didn’t just blow in on the wind?

https://www.cnn.com/2023/05/15/health/human-dna-captured-from-air-scn/index.html

Human DNA can now be pulled from thin air or a footprint on the beach. Here’s what that could mean

Footprints left on a beach. Air breathed in a busy room. Ocean water.

Scientists have been able to collect and analyze detailed genetic data from human DNA from all these places, raising thorny ethical questions about consent, privacy and security when it comes to our biological information.

The researchers from the University of Florida, who were using environmental DNA found in sand to study endangered sea turtles, said the DNA was of such high quality that the scientists could identify mutations associated with disease and determine the genetic ancestry of populations living nearby.

They could also match genetic information to individual participants who had volunteered to have their DNA recovered as part of the research that published in the scientific journal Nature Ecology & Evolution on Monday.

… However, the ability to capture human DNA from the environment could have a range of unintended consequences — both inadvertent and malicious, they added. These included privacy breaches, location tracking, data harvesting, and genetic surveillance of individuals or groups. It could lead to ethical hurdles for the approval of wildlife studies.





A simple argument: the first harm is the data breach.

https://www.databreaches.net/our-definition-of-harm-is-harmful/

Our Definition of Harm Is Harmful

Bill Fitzgerald writes:

In April 2023, the class action lawsuit against Illuminate Education was thrown out because the judge in the case determined that the people whose data was impacted by the breach could not show any harm, or any instances of identity theft, from the breach. This decision is both fully in line with past situations where companies have been let off the hook, and completely misrepresents and underestimates the various, different ways people get hurt by data breaches.
To put it in a different way: the judge’s decision shows how, in some cases, things that are defined as legal don’t come close to what is right. The way we define harm is harmful.

Read more at FunnyMonkey.com.





Resource.

https://thenextweb.com/news/openai-free-class-prompt-engineering-devs

OpenAI is offering a free class in prompt engineering for devs

… A short course in prompt engineering has been developed in partnership with OpenAI and is available via the DeepLearning.AI website. It’s delivered by OpenAI’s Isa Fulford alongside none other than Andrew Ng, a noted computer scientist who worked on AI at Google and Baidu before he founded DeepLearning.AI.

In just one hour, Ng and Fulford outline best practices in prompt engineering and give participants hands-on practice with the OpenAI API. The introductory course is aimed at developers but no previous experience with AI is required, just a basic understanding of Python. And for developers who have already started tinkering with large language models, the course will leave you with the instructions you need to build a chatbot of your own.

The course is currently free, but this will be for a limited period only. So now is a good time to grasp this opportunity and learn what makes this tech tool tick.
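
The course’s exercises revolve around a small helper in roughly this shape (a sketch of the pattern, not the course’s verbatim code; the prompt text is illustrative):

# Helper in the style the course teaches (sketch; exact course code may differ).
# Requires: pip install openai (the ChatCompletion API current in May 2023).
import openai

openai.api_key = "sk-..."  # your API key

def get_completion(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output suits instruction-following tasks
    )
    return response.choices[0].message["content"]

# One best practice taught early on: delimit the text the model should act on.
text = "Prompt engineering is the craft of writing clear model instructions."
print(get_completion(f"Summarize the text between triple quotes.\n'''{text}'''"))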



Monday, May 15, 2023

If TSA relies on facial recognition, is there any additional benefit in knowing my name? If my face does not match known terrorists or any other ‘no fly’ list, who cares what name I use? If TSA only has a terrorist’s name, what are the odds that he would use it to book a flight? Is this another government attempt to require a national ID card?

https://apnews.com/article/facial-recognition-airport-screening-tsa-d8b6397c02afe16602c8d34409d1451f

Are you who you say you are? TSA tests facial recognition technology to boost airport security

A passenger walks up to an airport security checkpoint, slips an ID card into a slot and looks into a camera atop a small screen. The screen flashes “Photo Complete” and the person walks through — all without having to hand over their identification to the TSA officer sitting behind the screen.

It’s all part of a pilot project by the Transportation Security Administration to assess the use of facial recognition technology at a number of airports across the country.

“What we are trying to do with this is aid the officers to actually determine that you are who you say who you are,” said Jason Lim, identity management capabilities manager, during a demonstration of the technology to reporters at Baltimore-Washington International Thurgood Marshall Airport.
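
The checkpoint step is one-to-one verification (does the live capture match the ID photo?), not one-to-many search. A minimal sketch with the open-source face_recognition library (an illustration, not TSA’s system; file names are placeholders):

# Illustration of 1:1 face verification (not TSA's actual system): compare the
# photo on an ID against a live camera capture.
import face_recognition

id_photo = face_recognition.load_image_file("id_photo.jpg")
live_capture = face_recognition.load_image_file("camera_frame.jpg")

id_encoding = face_recognition.face_encodings(id_photo)[0]
live_encoding = face_recognition.face_encodings(live_capture)[0]

# True if the two encodings fall within the library's default distance threshold (0.6).
match = face_recognition.compare_faces([id_encoding], live_encoding)[0]
print("Photo Complete" if match else "See officer")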





Even hackers know to go ‘where the money is.’

https://www.csoonline.com/article/3696350/insured-companies-more-likely-to-be-ransomware-victims-sometimes-more-than-once.html#tk.rss_all

Insured companies more likely to be ransomware victims, sometimes more than once

Back in 2019, fewer than 20% of enterprises suffered repeat ransomware attacks, while during the pandemic, the percentage rose to around 30%. And it didn’t stop with the pandemic: 38% of organizations surveyed in 2022 reported two or more successful ransomware attacks, meaning attacks in which attackers were able to lock systems, encrypt data, or exfiltrate information to demand a ransom, according to Barracuda’s report conducted by Vanson Bourne.

Companies with cyber insurance get targeted more

Cyber insurance plays a significant role in the numbers, as insured companies get targeted more, Barracuda Networks CTO Fleming Shi tells CSO. The survey found that 77% of organizations with cyber insurance were hit at least once, compared to 65% of organizations without insurance. In addition, of the companies that had cyber insurance, 39% paid the ransom.



(Related)

https://www.databreaches.net/ransomware-corrupts-data-so-backups-can-be-faster-and-cheaper-than-paying-up/

Ransomware corrupts data, so backups can be faster and cheaper than paying up

Simon Sharwood reports:

Ransomware actors aim to spend the shortest amount of time possible inside your systems, and that means the encryption they employ is shoddy and often corrupts your data. That in turn means restoration after paying ransoms is often a more expensive chore than just deciding not to pay and working from our own backups.
That’s the opinion of Richard Addiscott, a senior director analyst at Gartner.

Read more at The Register.

The statistics from Gartner are pretty striking and, of course, directly conflict with what ransomware groups assure their victims about recovery and other issues. According to Sharwood’s reporting of a talk Addiscott gave:

Restoring from corrupt data dumps delivered by crooks is not easy, Addiscott advised – and that’s if ransomware operators deliver all the data they promise. Plenty don’t – instead they use a ransom payment to open a new round of negotiations about the price of further releases.
That sort of wretched villainy means just four percent of ransomware victims recover all their data, he said. Only 61 percent recover data at all. And victims typically experience 25 days of disruption to their businesses.



Sunday, May 14, 2023

If I say no but my neighbor says yes, do I have any recourse?

https://www.context.news/digital-rights/privacy-or-safety-us-brings-surveillance-city-to-the-suburbs

Privacy or safety? U.S. brings 'surveillance city to the suburbs'

For the past year Martinez has been trying to convince owners of private surveillance cameras to enroll in a city-run program that can share control of those cameras with the police.

In 2019, the city of 100,000 became one of the first on the U.S. West Coast to roll out technology from Fusus, a U.S. security tech company that aims to boost public safety by making it easier for police to access privately owned surveillance cameras.

In Rialto, the police have access to over 150 livestreams across restaurants, gas stations, and private residential developments - a number they are hoping to increase through Martinez and others' outreach.





So, we’re good?

https://www.proquest.com/openview/5e113951bc604944150be636f6e739d5/1?pq-origsite=gscholar&cbl=18750&diss=y

Open-Source Intelligence by Law Enforcement: The Impacts of Legislation and Ethics on Investigations

Open-source intelligence (OSINT) is an established method for analyzing publicly available information law enforcement agencies (LEA) use during investigations. OSINT, in the present day, regards source information as widely accessible for the world to view and indexed on Internet search engines. OSINT’s analysis by LEA does not violate one’s reasonable expectation of privacy, can be obtained without a search warrant, and is freely open to the public at no additional cost. Due to the proliferation of the Internet and its use in the daily life of citizens, LEA has become inundated with available data. To combat the overabundance of OSINT, LEA has turned to artificial intelligence (AI) and machine learning software. However, privacy advocates have influenced the creation of new and emerging data privacy regulations, questioning LEA’s ethicality in uncovering OSINT. In turn, Internet platforms have complied with data privacy regulations, altering their terms of service and affecting the analysis of OSINT by LEA. This research details the impact of data privacy regulations on LEA’s ability to analyze OSINT efficiently.





This will work until the AI goes on strike…

https://www.proquest.com/openview/fdfb424b3c88e9b516cdb5c7d2a50026/1?pq-origsite=gscholar&cbl=44595

THE COPYRIGHT AUTHORSHIP CONUNDRUM FOR WORKS GENERATED BY ARTIFICIAL INTELLIGENCE: A PROPOSAL FOR STANDARDIZED INTERNATIONAL GUIDELINES IN THE WIPO COPYRIGHT TREATY

The increasing sophistication of artificial intelligence (AI) technology in recent decades has led legal scholars to question the implications of artificial intelligence in the realm of copyright law. Specifically, who is the copyright “author” of a work created with the assistance of artificial intelligence—the AI machine, the human programmer, or no one at all? (Since the finalization of this Note, chatGPT, an AI text-generator with remarkable responsiveness and thoroughness, has taken the world by storm, making resolution of the problems identified by this Note all the more urgent.) This Note recommends that the World Intellectual Property Organization (WIPO) resolve the confusion and inconsistency between various nation-specific approaches by adopting international guidelines that standardize how member-countries determine copyright authorship in AI-generated works. Since AI relies on human choices to create output, even if the final work seems autonomous or random to the average observer, this Note proposes that the human or corporate creators of AI machines are the copyright authors of AI-generated works. Therefore, the WIPO Copyright Treaty should adopt guidelines modeled after China’s approach, which attributes copyright authorship to the human or corporate entity responsible for making decisions that influence the originality and creative expression in AI-generated works.



(Related)

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4443714

Authorbots

ChatGPT has exploded into the popular consciousness in recent months, and the hype and concerns about the program have only grown louder with the release of GPT-4, a more powerful version of the software. Its deployment, including with applications such as Microsoft Office, has raised questions about whether the developers or distributors of code that includes ChatGPT, or similar generative pre-trained transformers, could face liability for tort claims such as defamation or false light. One important potential barrier to these claims is the immunity conferred by 47 U.S.C. § 230, popularly known as “Section 230.” In this Essay, we make two claims. First, Section 230 is likely to protect the creators, distributors, and hosts of online services that include ChatGPT in many cases. Users of those services, though, may be at greater legal risk than is commonly believed. Second, ChatGPT and its ilk make the analysis of the Section 230 safe harbor more complex, both substantively and procedurally. This is likely a negative consequence for the software’s developers and hosts, since complexity in law tends to generate uncertainty, which in turn creates cost. Nonetheless, we contend that Section 230 has more of a role to play in legal questions about ChatGPT than most commentators do—including the principal legislative drafters of Section 230—and that this result is generally a desirable one.





Good to see that someone is tracking this.

https://finance.yahoo.com/news/ai-faces-legal-limits-in-these-6-states-160929128.html

AI faces legal limits in these 6 states

Other parts of the world are accelerating laws designed to protect consumers from advanced artificial intelligence tools, including a chatbot that can replicate human tasks and biometric surveillance of faces in public spaces.

But federal legislation has stalled in the US, leaving the job of regulating OpenAI’s ChatGPT and other generative AI tools to local governments. How much protection consumers have in this country at the moment depends on where they live.

There are six states that have or will have laws on their books by the end of 2023 to prevent businesses from using AI to discriminate or deceive consumers and job applicants: California, Colorado, Connecticut, Illinois, Maryland, and Virginia.





Not sure I get this. But it seems to have potential. Perhaps we should do more?

https://journals.sagepub.com/doi/full/10.1177/00380385231169676

A Sociological Conversation with ChatGPT about AI Ethics, Affect and Reflexivity

This research note is a conversation between ChatGPT and a sociologist about the use of ChatGPT in knowledge production. ChatGPT is an artificial intelligence language model, programmed to analyse vast amounts of data, recognise patterns and generate human-like conversational responses based on that analysis. The research note takes an experimental form, following the shape of a dialogue, and was generated in real time, between the author and ChatGPT. The conversation reflects on, and is a reflexive contribution to, the study of artificial intelligence from a sociology of science perspective. It draws on the notion of reflexivity and adopts an ironic, parodic form to critically respond to the emergence of artificial intelligence language models, their affective and technical qualities, and thereby comments on their potential ethical, social and political significance within the humanities.