I don’t think the money is enough to convince other states to enact biometric laws.
Instagram Settles Illinois Biometric Privacy Law Case for $68.5 Million
The lone strong biometric privacy law in the United States has struck again, this time taking $68.5 million from Instagram in a settlement for a class action first filed nearly three years ago.
Some other states have elements of biometric privacy law, but none are as comprehensive as Illinois or as demanding about express user consent for collecting such data. The suit is open to Instagram users who were active on the platform from August 10, 2015, to the present, and Meta has already paid to settle similar claims in the state over Facebook.
My guess is, Meta has a backup plan...
https://thenextweb.com/news/meta-eu-privacy-consent-for-targeted-ads
Meta succumbs to EU pressure, will seek user consent for targeted ads
Meta operates a highly targeted advertising model based on the swathes of personal data you share on its platforms, and it makes tens of billions of dollars off it each year.
While these tactics are unlikely to end altogether in the near future, the company could soon offer users in the EU the chance to “opt-in” to the ads, the Wall Street Journal reports.
Since April, Meta has offered users in Europe the chance to opt out from personalised ads but only if they complete a lengthy form on its help pages. That process has likely limited the number of people who have opted out.
An opt-in option, however, would give users protection by default. That doesn’t mean you won’t be targeted by generalised ads, based on broader demographic data, such as your age, but it would prevent highly personalised ads based on, for instance, the videos you watch or the posts you share. Under EU law, a user has to be able to access Meta’s platforms even if they opt out.
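To make the “protection by default” point concrete, here is a minimal sketch in Python. It is purely illustrative; the class and field names are my own assumptions, not anything from Meta’s actual systems.

```python
from dataclasses import dataclass

@dataclass
class User:
    age_bracket: str                  # broad demographic data, e.g. "25-34"
    behavioral_consent: bool = False  # opt-in: personalisation is OFF by default

def select_ads(user: User, behavioral_profile: dict) -> str:
    # With explicit opt-in consent, detailed behavioral data (videos
    # watched, posts shared) may drive personalisation.
    if user.behavioral_consent:
        return f"personalised ads from profile {behavioral_profile}"
    # Without consent, fall back to generalised ads based on broad
    # demographics. Access to the platform itself is never gated on consent.
    return f"generalised ads for age bracket {user.age_bracket}"

# Under an opt-in regime, a brand-new user sees only generalised ads:
print(select_ads(User(age_bracket="25-34"), {"videos": ["cooking"]}))
```

The design point is the default value: under today’s opt-out regime the flag would effectively start as True unless the user hunts down the form; under opt-in it starts as False.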
Meta said the change stems from an order in January by Ireland’s Data Protection Commission (DPC) to reassess the legal basis of how it targets ads.
The future is coming fast, hop on or get run over…
Harvard Business School A.I. guru on why every Main Street shop should start using ChatGPT
Every small business owner should be using some combination of generative AI tools, including OpenAI’s ChatGPT, Microsoft’s AI-powered Bing search engine, and Poe, says Harvard Business School professor Karim Lakhani.
Gen AI offers small business owners a cost-effective way to become more productive and efficient in scaling their company, communicating with customers, and generating marketing copy, social media content, and ideas for new products.
Lakhani says the adage about AI and fear of job losses holds for small businesses: the business owners who use AI will replace those who don’t.
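For a flavor of what that looks like in practice, here is a hedged sketch using OpenAI’s 2023-era Python client; the prompt and the bakery details are invented for the example, not taken from the article.

```python
# pip install "openai<1.0"  (this sketch uses the pre-1.0 ChatCompletion interface)
import openai

openai.api_key = "YOUR_API_KEY"  # replace with your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You write friendly marketing copy for small businesses."},
        {"role": "user",
         "content": "Write a two-sentence Instagram caption announcing that "
                    "our bakery now offers gluten-free sourdough on weekends."},
    ],
)
print(response.choices[0].message.content)
```

A few lines like this, run from a laptop, is roughly the level of effort Lakhani is arguing a Main Street shop can afford.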
(Related) Will this model work in other industries?
https://www.bespacific.com/embracing-artificial-intelligence-in-the-legal-landscape-the-blueprint/
Embracing Artificial Intelligence in the Legal Landscape: The Blueprint
Tąkiel, Maciej and Wagner, Dominik and Maksym, Błazej and Tarczyński, Tomasz, Embracing Artificial Intelligence in the Legal Landscape: The Blueprint (June 22, 2023). Available at SSRN: https://ssrn.com/abstract=4488199 or http://dx.doi.org/10.2139/ssrn.4488199
“This innovative case study outlines a blueprint for strategic transformation based on the example of a real-life law firm operating in Germany, using AI tools and digitalization. Leveraging Kotter’s 8-step change model, the research underscores the imperative to adopt AI due to pressing market competition and escalating internal costs. The paper articulates how AI can optimize legal processes and dramatically improve efficiency and client satisfaction, while addressing the firm’s readiness to adapt and potential resistance.”
This could be very useful…
https://www.bespacific.com/fighting-fake-facts-with-two-little-words/
Hopkins researchers discover a new technique to ground a large language model’s answers in reality
Johns Hopkins University Hub: “Asking ChatGPT for answers comes with a risk—it may offer you entirely made-up “facts” that sound legitimate, as a New York lawyer recently discovered. Despite having been trained on vast amounts of factual data, large language models, or LLMs, are prone to generating false information called hallucinations. This may happen when LLMs are tasked with generating text about a topic they have not encountered much, or when they mistakenly mix information from various sources. In the unfortunate attorney’s case, ChatGPT hallucinated imaginary judicial opinions and legal citations that he presented in court; the presiding judge was predictably displeased.”

“Imagine using your phone’s autocomplete function to finish the sentence ‘My favorite restaurant is…’ You’ll probably wind up with some reasonable-looking text that’s not necessarily accurate,” explains Marc Marone, a third-year doctoral candidate in the Whiting School of Engineering’s Department of Computer Science.

Marone and a team of researchers that included doctoral candidates Orion Weller and Nathaniel Weir and advisers Benjamin Van Durme, an associate professor of computer science and a member of the Center for Language and Speech Processing; Dawn Lawrie, a senior research scientist at the Human Language Technology Center of Excellence; and Daniel Khashabi, an assistant professor of computer science and also a member of CLSP, developed a method to reduce the likelihood that LLMs hallucinate. Inspired by a phrase commonly used in journalism, the researchers conducted a study on the impact of incorporating the words “according to” in LLM queries.

They found that “according to” prompts successfully directed language models to ground their responses against previously observed text; rather than hallucinating false answers, the models are more likely to directly quote the requested source—just like a journalist would, the team says… By using Data Portraits, a tool previously developed by Marone and Van Durme to quickly determine whether particular content is present in a training dataset without needing to download massive amounts of text, the team verified whether an LLM’s responses could be found in its original training data. In other words, they were able to determine whether the model was making things up or generating answers based on data it had already learned…
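Here is a rough sketch of the idea in Python, again using the pre-1.0 OpenAI client. The grounding phrasing is one plausible variant of the technique, and the toy n-gram check at the end only stands in for the researchers’ Data Portraits tool, which uses a precomputed membership index rather than this brute-force scan.

```python
import openai

openai.api_key = "YOUR_API_KEY"

QUESTION = "Which part of the brain is responsible for forming new memories?"

# A plain query vs. one that grounds the answer with an "according to" cue
baseline_prompt = QUESTION
grounded_prompt = f"{QUESTION} Respond according to Wikipedia."

def ask(prompt: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ngram_overlap(answer: str, corpus: str, n: int = 5) -> float:
    """Toy stand-in for Data Portraits: the fraction of the answer's word
    n-grams found verbatim in a reference text. Higher overlap suggests the
    model is quoting observed text rather than hallucinating."""
    words = answer.lower().split()
    grams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return sum(g in corpus.lower() for g in grams) / len(grams) if grams else 0.0

# `wiki_excerpt` stands in for the relevant source text the model trained on.
wiki_excerpt = "The hippocampus plays a major role in the forming of new memories ..."
for prompt in (baseline_prompt, grounded_prompt):
    answer = ask(prompt)
    print(f"{ngram_overlap(answer, wiki_excerpt):.2f}  {answer[:80]}")
```

If the grounding cue works as the study describes, the second prompt should score a higher overlap against the source text than the first.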