Your relationships are changing.
As companies race to add AI, terms of service changes are going to freak a lot of people out. Think twice before granting consent!
Jude Karabus reports:
WeTransfer this week denied claims it uses files uploaded to its ubiquitous cloud storage service to train AI, and rolled back changes it had introduced to its Terms of Service after they deeply upset users. The topic? Granting licensing permissions for an as-yet-unreleased LLM product.
Agentic AI, GenAI, AI service bots, AI assistants to legal clerks, and more are washing over the tech space like a giant wave as the industry paddles for its life hoping to surf on a neural networks breaker. WeTransfer is not the only tech giant refreshing its legal fine print – any new product that needs permissions-based data access – not just for AI – is going to require a change to its terms of service.
In the case of WeTransfer, the passage that aroused ire was:
You hereby grant us a perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licensable license to use your Content for the purposes of operating, developing, commercializing, and improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process, in accordance with the Privacy & Cookie Policy. (Emphasis ours.)
Read more at The Register.
Meanwhile, over on TechCrunch, in "Think twice before granting AI access to your personal data," Zack Whittaker writes:
There is a trend of AI apps that promise to save you time by transcribing your calls or work meetings, for example, but which require an AI assistant to access your real-time private conversations, your calendars, contacts, and more. Meta, too, has been testing the limits of what its AI apps can ask for access to, including tapping into the photos stored in a user’s camera roll that haven’t been uploaded yet.
Signal president Meredith Whittaker recently likened the use of AI agents and assistants to “putting your brain in a jar.” Whittaker explained how some AI products can promise to do all kinds of mundane tasks, like reserving a table at a restaurant or booking a ticket for a concert. But to do that, AI will say it needs your permission to open your browser to load the website (which can allow the AI access to your stored passwords, bookmarks, and your browsing history), a credit card to make the reservation, your calendar to mark the date, and it may also ask to open your contacts so you can share the booking with a friend.
No doubt incorporating Asimov’s three laws...
Should AI Write Your Constitution?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5351275
Artificial Intelligence (AI) now has the capacity to write a constitution for any country in the world. But should it? The immediate reaction is likely emphatically no—and understandably so, given that there is no greater exercise of popular sovereignty than the act of constituting oneself under higher law legitimated by the consent of the governed. But constitution-making is not a single act at a single moment. It is a series of discrete steps demanding varying degrees of popular participation to produce a text that enjoys legitimacy both in perception and reality. Some of these steps could prudently integrate human-AI collaboration or autonomous AI assistance—or so we argue in this first Article to explain and evaluate how constitutional designers not only could, but also should, harness the extraordinary potential of AI. We combine our expertise as innovators in the use and design of AI with our direct involvement as advisors in constitution-making processes around the world to map the terrain of opportunities and hazards in the next iteration of the continuing fusion of technology with governance. We ask and answer the most important question now confronting constitutional designers: how to use AI in making and reforming constitutions?
We make five major contributions to jumpstart the study of AI and constitutionalism. First, we unveil the results of the first Global Survey of Constitutional Experts on AI. How do constitutional experts view the risks and rewards of AI, would they use AI to write their own constitution, and what red lines would they impose around AI? Second, we introduce a novel spectrum of human control to classify and distinguish three types of tasks in constitution-making: high sensitivity tasks that should remain fully within the domain of human judgment and control, lower sensitivity tasks that are candidates for significant AI assistance or automation, and moderate sensitivity tasks that are ripe for human-AI collaboration. Third, we take readers through the key steps in the constitution-making process, from start to finish, to thoroughly explain how AI can assist with discrete tasks in constitution-making. Our objective here is to show scholars and practitioners how and when AI may be integrated into foundational democratic processes. Fourth, we construct a Democracy Shield—a set of specific practices, principles, and protocols—to protect constitutionalism and constitutional values from the real, perceived, and unanticipated risks that AI raises when merged into acts of national self-definition and popular reconstitution. Fifth, we make specific recommendations on how constitutional designers should use AI to make and reform constitutions, recognizing that openness to using AI in governance is likely to grow as human use and familiarity with AI increases over time, as we anticipate it will. This cutting-edge Article is therefore simultaneously descriptive, prescriptive, and normative.