If I say no but my neighbor says yes, do I have any recourse?
https://www.context.news/digital-rights/privacy-or-safety-us-brings-surveillance-city-to-the-suburbs
Privacy or safety? U.S. brings 'surveillance city to the suburbs'
… For the past year, Martinez has been trying to convince owners of private surveillance cameras to enroll in a city-run program that can share control of those cameras with the police.
In 2019, the city of 100,000 became one of the first on the U.S. West Coast to roll out technology from Fusus, a U.S. security tech company that aims to boost public safety by making it easier for police to access privately owned surveillance cameras.
… In Rialto, the police have access to over 150 livestreams across restaurants, gas stations, and private residential developments - a number they hope to increase through the outreach of Martinez and others.
So, we’re good?
Open-Source Intelligence by Law Enforcement: The Impacts of Legislation and Ethics on Investigations
Open-source intelligence (OSINT) is an established method for analyzing publicly available information that law enforcement agencies (LEA) use during investigations. In the present day, OSINT source information is widely accessible for the world to view and is indexed on Internet search engines. OSINT analysis by LEA does not violate one's reasonable expectation of privacy, can be conducted without a search warrant, and draws on material freely open to the public at no additional cost. Due to the proliferation of the Internet and its use in the daily life of citizens, LEA have become inundated with available data. To manage this overabundance of OSINT, LEA have turned to artificial intelligence (AI) and machine learning software. However, privacy advocates have influenced the creation of new and emerging data privacy regulations, questioning the ethics of LEA's uncovering of OSINT. In turn, Internet platforms have complied with these data privacy regulations, altering their terms of service and affecting LEA's analysis of OSINT. This research details the impact of data privacy regulations on LEA's ability to analyze OSINT efficiently.
This will work until the AI goes on strike…
https://www.proquest.com/openview/fdfb424b3c88e9b516cdb5c7d2a50026/1?pq-origsite=gscholar&cbl=44595
THE COPYRIGHT AUTHORSHIP CONUNDRUM FOR WORKS GENERATED BY ARTIFICIAL INTELLIGENCE: A PROPOSAL FOR STANDARDIZED INTERNATIONAL GUIDELINES IN THE WIPO COPYRIGHT TREATY
The increasing sophistication of artificial intelligence (AI) technology in recent decades has led legal scholars to question the implications of artificial intelligence in the realm of copyright law. Specifically, who is the copyright “author” of a work created with the assistance of artificial intelligence—the AI machine, the human programmer, or no one at all? (Since the finalization of this Note, ChatGPT, an AI text generator with remarkable responsiveness and thoroughness, has taken the world by storm, making resolution of the problems identified by this Note all the more urgent.) This Note recommends that the World Intellectual Property Organization (WIPO) resolve the confusion and inconsistency between various nation-specific approaches by adopting international guidelines that standardize how member countries determine copyright authorship in AI-generated works. Since AI relies on human choices to create output, even if the final work seems autonomous or random to the average observer, this Note proposes that the human or corporate creators of AI machines are the copyright authors of AI-generated works. Therefore, the WIPO Copyright Treaty should adopt guidelines modeled after China’s approach, which attributes copyright authorship to the human or corporate entity responsible for making decisions that influence the originality and creative expression in AI-generated works.
(Related)
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4443714
Authorbots
ChatGPT has exploded into the popular consciousness in recent months, and the hype and concerns about the program have only grown louder with the release of GPT-4, a more powerful version of the software. Its deployment, including with applications such as Microsoft Office, has raised questions about whether the developers or distributors of code that includes ChatGPT, or similar generative pre-trained transformers, could face liability for tort claims such as defamation or false light. One important potential barrier to these claims is the immunity conferred by 47 U.S.C. § 230, popularly known as “Section 230.” In this Essay, we make two claims. First, Section 230 is likely to protect the creators, distributors, and hosts of online services that include ChatGPT in many cases. Users of those services, though, may be at greater legal risk than is commonly believed. Second, ChatGPT and its ilk make the analysis of the Section 230 safe harbor more complex, both substantively and procedurally. This is likely a negative consequence for the software’s developers and hosts, since complexity in law tends to generate uncertainty, which in turn creates cost. Nonetheless, we contend that Section 230 has more of a role to play in legal questions about ChatGPT than most commentators believe—including the principal legislative drafters of Section 230—and that this result is generally a desirable one.
Good to see that someone is tracking this.
https://finance.yahoo.com/news/ai-faces-legal-limits-in-these-6-states-160929128.html
AI faces legal limits in these 6 states
Other parts of the world are accelerating laws designed to protect consumers from advanced artificial intelligence tools, including chatbots that can replicate human tasks and biometric surveillance of faces in public spaces.
But federal legislation has stalled in the US, leaving the job of regulating OpenAI’s ChatGPT and other generative AI tools to local governments. How much protection consumers have in this country at the moment depends on where they live.
There are six states that have or will have laws on their books by the end of 2023 to prevent businesses from using AI to discriminate or deceive consumers and job applicants: California, Colorado, Connecticut, Illinois, Maryland, and Virginia.
Not sure I get this. But it seems to have potential. Perhaps we should do more?
https://journals.sagepub.com/doi/full/10.1177/00380385231169676
A Sociological Conversation with ChatGPT about AI Ethics, Affect and Reflexivity
This research note is a conversation between ChatGPT and a sociologist about the use of ChatGPT in knowledge production. ChatGPT is an artificial intelligence language model, programmed to analyse vast amounts of data, recognise patterns and generate human-like conversational responses based on that analysis. The research note takes an experimental form, following the shape of a dialogue, and was generated in real time, between the author and ChatGPT. The conversation reflects on, and is a reflexive contribution to, the study of artificial intelligence from a sociology of science perspective. It draws on the notion of reflexivity and adopts an ironic, parodic form to critically respond to the emergence of artificial intelligence language models, their affective and technical qualities, and thereby comments on their potential ethical, social and political significance within the humanities.