I’m waiting for an AI to write the definitive argument.
https://link.springer.com/article/10.1007/s40319-022-01213-7
Will Technology-Aided Creativity Force Us to Rethink Copyright’s Fundamentals? Highlights from the Platform Economy and Artificial Intelligence
The platform economy, the move towards artificial intelligence (AI) and the growing importance of new creative and transformative technologies such as 3D printing raise questions as to whether copyright law suffices in its present form. Our article argues that copyright law is malleable enough to fulfil some of its traditional functions in this new technology-aided (and technology-dominated) environment. However, certain adjustments and complementary instruments seem to be necessary to revitalise these functions. For example, moral rights could be more effectively harmonised at international level, and made more easily enforceable, to reflect the global reach of social media and to protect their essential reputational value in a digital economy that prioritises online exposure over remuneration opportunities. We also consider that creators’ rights are difficult, if not impossible, to license and enforce in an environment where contractual practices such as social media terms and conditions impose standard agreements that either do not compensate creators at all or compensate them only marginally. In this context, restoring the bargaining power of creators through the right of access to the platforms’ data seems to have become as important as copyright itself. Finally, doubts remain as to whether requirements such as authorship and originality can continue to apply and trigger copyright protection. To this end, we believe that the distinction between fully generative machines and other technologies that merely assist human creators is essential for the proper identification of “authorless” works. For such works we advocate the adoption of a very short right that would support computational creativity without stifling human ingenuity.
Would a reduction of communication ever be a good idea? This looks like “over-protecting” the President.
https://www.politico.com/news/2022/07/29/secret-service-may-disable-imessages-jan-6-00048780
Secret Service may disable iMessages to avoid repeat of Jan. 6 controversy
The Secret Service is considering turning off employees’ ability to send iMessages on their work-issued iPhones, hoping to head off repeats of the current controversy embroiling the agency over deleted text messages related to the Jan. 6 insurrection at the Capitol.
“This is actually something we are looking at very closely,” Secret Service spokesperson Anthony Guglielmi said. “Director James Murray has ordered a benchmarking study to further examine the feasibility of disabling iMessage and whether it could have any operational impacts.”
… On July 13, the DHS inspector general informed Congress that the Secret Service lost texts related to the attack while erasing its employees’ phones as part of a change to how it manages those devices. That revelation prompted the House committee investigating the attack to subpoena the agency for its records. The panel’s leaders suggested that the agency may have violated federal records laws by failing to preserve the messages.
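For readers curious about the mechanics, turning off iMessage on managed iPhones is normally done through mobile device management rather than anything on Apple’s side. Below is a minimal sketch, assuming a Python environment and Apple’s documented Restrictions (com.apple.applicationaccess) payload with its allowiMessage key, of how such a configuration profile could be generated. The identifiers are placeholders, the profile would still have to be signed and delivered by an MDM server to supervised devices, and nothing here reflects the agency’s actual implementation.

# Hypothetical sketch: build a .mobileconfig profile that disables iMessage
# via Apple's Restrictions payload (allowiMessage = false, supervised only).
import plistlib
import uuid

restrictions_payload = {
    "PayloadType": "com.apple.applicationaccess",
    "PayloadVersion": 1,
    "PayloadIdentifier": "example.org.restrictions.imessage",  # placeholder
    "PayloadUUID": str(uuid.uuid4()).upper(),
    "PayloadDisplayName": "Disable iMessage",
    "allowiMessage": False,  # restriction honored only on supervised devices
}

profile = {
    "PayloadType": "Configuration",
    "PayloadVersion": 1,
    "PayloadIdentifier": "example.org.profile.messaging",  # placeholder
    "PayloadUUID": str(uuid.uuid4()).upper(),
    "PayloadDisplayName": "Messaging Restrictions",
    "PayloadContent": [restrictions_payload],
}

# Write the profile; an MDM server would then push it to enrolled devices.
with open("disable_imessage.mobileconfig", "wb") as f:
    plistlib.dump(profile, f)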
Keeping up…
https://www.pogowasright.org/further-thoughts-on-adppa-the-federal-comprehensive-privacy-bill/
Further Thoughts on ADPPA, the Federal Comprehensive Privacy Bill
Law professor and privacy scholar Daniel Solove writes:
I recently wrote a post about my concerns about the American Data Privacy and Protection Act (ADPPA), a bill making its way through Congress that has progressed further than many other attempts at a comprehensive privacy law. Despite grading the law a B+, I was skeptical of it because it would preempt state laws, a provision I believe to be a Faustian bargain. Here’s an updated version of the ADPPA after markup.
Omer Tene (Goodwin Procter LLP) has a series of tweets expressing puzzlement at my reaction to the law. He thinks I should be dancing in the streets. He writes that he is “genuinely puzzled by the logic here. Dan argues against passage of a good federal privacy law (he gives it a B+) bc it might be outdated in 20 years.” He argues that my concerns will be the same with every federal law because there won’t be a federal law without preemption. “[W]hat’s the alternative?” Omer asks. “Having no federal law to update in 20 years? How’s that any better?” He further argues that “if the preferred option is state by state, it’s a very poor option. Dan and others have rightfully criticized the weak tea brewed by the states. ADPPA blows every one of the state laws out of the water.” He adds that the “ADPPA is *far* stronger than CPRA. Even in California. Not to mention it would also apply in 49 other states.”
Omer makes compelling arguments, and I want to respond to clarify and expand upon some things in my original post to better explain my position.
Read more at TeachPrivacy.
My AI doesn’t like me.
https://aisel.aisnet.org/treos_amcis2022/88/
Anthropomorphism, Privacy and Voice-based AI Systems
Intelligent personal assistants (digital voice assistants) are gaining popularity, with 135.6 million active voice assistant users in the United States alone. Because of their usability and convenience, voice-controlled artificial intelligence systems are employed in both personal and professional settings. Existing literature (Manikonda et al. 2018) shows that these devices are used for several purposes, ranging from answering queries to playing music and controlling lights. However, these devices have been constantly upgraded not only in their physical appearance (e.g., Alexa devices with screens) but also in their functionality (e.g., Alexa devices allowing Telehealth visits). Thus, sufficient care is required to ensure that privacy is taken seriously. Users may be unaware of the issues involved in using smart home devices, incorporating them into their daily lives because of their anthropomorphic characteristics. Research has shown that even when users know about certain risks, they overlook them because they judge the benefits of use to be worth it (Sebastian and Crossler 2019). We examine the auditory design aspect (human-like voices, both male and female), product attachment (such as viewing the AI system as a friend or a servant), and users' trust in such systems, and propose a theoretical model of why users continue to use voice-enabled intelligent personal assistants. The growing interest in using such devices, together with the constant upgrading of their functionality despite widely known privacy concerns, is our primary motivation for focusing on this problem. Personification, or anthropomorphism (attributing human characteristics to non-human things), is defined as the attribution of "human-like properties, characteristics or mental states to real or imagined non-human agents and objects" (Epley et al. 2007). Anthropomorphism is spontaneous as well as pervasive and powerful (Yuan and Dennis 2019). Humans are born with a tendency to anthropomorphize, and its characteristics can be divided into two main design factors: visual and auditory. We propose a theoretical model using factors including auditory manipulation, product attachment, trust placed in the product, and privacy concerns to measure an individual's willingness to use such a product. The research model will be tested using data collected through survey responses. Prior studies have looked at smart home devices, but not from a perspective where anthropomorphism and privacy concerns come together. The study also contributes to the existing literature by integrating anthropomorphism, the extended privacy calculus model, and Protection Motivation Theory to develop the research model. Furthermore, this study looks at how individuals perceive smart home devices and use them in their daily lives, despite the continued existence of privacy-related threats.