Friday, March 19, 2021

No privacy here.

https://www.pogowasright.org/law-firm-wins-fired-lawyers-suit-over-analyzing-cell-phone-data/

Law Firm Wins Fired Lawyer’s Suit Over Analyzing Cell Phone Data

Brian Flood reports:

The law firm Zukowski, Rogers, Flood & McArdle didn’t violate the Stored Communications Act by having data extracted from a fired attorney’s smart phone, because only the phone’s internal storage was accessed and copied, a federal court in Illinois said Thursday.
ZRFM paid for attorney David Loughnane’s cell phone and its associated bills. The firm had no written policies on attorneys’ use of their firm-funded phones, and Loughnane used his for both work and personal purposes.

Read more on Bloomberg. The case is Loughnane v. Zukowski Rogers Flood & McArdle, N.D. Ill., No. 1:19-cv-00086, 3/18/21.

[From the article:

An individual’s smartphone or personal computer doesn’t fall within the ambit of the Stored Communications Act when only the data on its local storage drive is accessed, and the device isn’t used to access any data stored on an external Internet-based account or server, the court said.

“While the Seventh Circuit has yet to address this issue, nearly every court to have done so has agreed with this view,” the federal court said. In enacting the law, “Congress was particularly concerned about data stored in the hands of electronic service providers, and did not set out to protect data stored in a personal computer,” the court added.]





What do you do when management mucks it up? Send in the AIuditors? (An article worth reading)

https://hbr.org/sponsored/2021/03/is-your-privacy-governance-ready-for-ai

Is Your Privacy Governance Ready for AI?

Effective oversight requires complex structures most organizations do not have—not just for developers, like data-science teams, but also across teams that may procure AI solutions, such as operations and HR, and core teams, like privacy, that traditionally perform a governance function.

Inadequate governance exposes organizations to unnecessary risks, especially when teams are unaware of which data is restricted under which law. This risk was recently demonstrated when several organizations were sued for violating the California Consumer Privacy Act (CCPA) after sharing data with a third-party fraud-profiling tool. While the CCPA protects the use of data for fraud-detection and security purposes, it does not protect the voluntary transfer of data to a third party.

AI applications require significant quantities of data to make robust decisions, and often require a balancing of benefits and risks given AI’s impact on stakeholders. By embedding ethics into privacy and data-protection practices, some organizations are putting increased responsibility on the privacy teams to oversee AI. This change requires privacy groups to have a basic understanding of how models are developed and tested so they can evaluate development practices such as bias mitigation.
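
The article doesn't show what such an evaluation looks like in practice. Purely as an illustration (not from HBR), one simple artifact a privacy team might ask a data-science team to produce is a disparate-impact ratio across a protected attribute; the toy data, group labels and the conventional 0.8 ("80% rule") threshold below are all hypothetical.

```python
# Illustrative only: a simple bias check a governance team might review.
def disparate_impact_ratio(outcomes, groups, protected, reference):
    """Ratio of positive-outcome rates: protected group vs. reference group."""
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(protected) / rate(reference)

# Toy data: 1 = model approved the applicant, 0 = model declined.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(outcomes, groups, protected="B", reference="A")
print(f"Disparate impact ratio: {ratio:.2f} (values below ~0.8 usually trigger a closer look)")
```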



(Related)

https://theconversation.com/ai-developers-often-ignore-safety-in-the-pursuit-of-a-breakthrough-so-how-do-we-regulate-them-without-blocking-progress-155825

AI developers often ignore safety in the pursuit of a breakthrough – so how do we regulate them without blocking progress?

Our recent research, carried out alongside our colleague Francisco C. Santos, sought to determine which AI races should be regulated for safety reasons, and which should be left unregulated to avoid stifling innovation. We did this using a game theory simulation.

The regulation of AI must consider the harms and the benefits of the technology. Harms that regulation might seek to legislate against include the potential for AI to discriminate against disadvantaged communities and the development of autonomous weapons. But the benefits of AI, like better cancer diagnosis and smart climate modelling, might not exist if AI regulation were too heavy-handed. Sensible AI regulation would maximise its benefits and mitigate its harms.
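
The paper's actual model isn't reproduced in the article. As a minimal sketch of the kind of game-theory reasoning involved, assuming entirely hypothetical payoffs (PRIZE, DISASTER_COST, P_DISASTER, ROUNDS), the toy Monte Carlo race below pits a SAFE developer against an UNSAFE one; comparing the expected payoffs under different parameters shows when cutting corners pays off and therefore when a race might warrant regulation.

```python
import random

# Toy Monte Carlo race, illustrative only; all parameters are hypothetical.
PRIZE = 100.0          # benefit of winning the race
DISASTER_COST = 500.0  # loss when an unsafe developer causes an accident
P_DISASTER = 0.05      # per-round accident probability for an UNSAFE developer
SPEED = {"SAFE": 1.0, "UNSAFE": 2.0}  # development progress per round
ROUNDS = 10            # race length; short races tend to reward speed over safety

def expected_payoff(strategy_a, strategy_b, trials=20_000):
    """Average payoff to developer A when A plays strategy_a and B plays strategy_b."""
    total = 0.0
    for _ in range(trials):
        progress_a = progress_b = 0.0
        payoff_a = 0.0
        for _ in range(ROUNDS):
            # An unsafe developer may trigger a disaster in any round.
            if strategy_a == "UNSAFE" and random.random() < P_DISASTER:
                payoff_a -= DISASTER_COST
                break
            if strategy_b == "UNSAFE" and random.random() < P_DISASTER:
                break  # the rival's disaster ends the race; no prize is awarded
            progress_a += SPEED[strategy_a]
            progress_b += SPEED[strategy_b]
        else:
            # The race ran its course: the faster developer takes the prize (ties split it).
            if progress_a > progress_b:
                payoff_a += PRIZE
            elif progress_a == progress_b:
                payoff_a += PRIZE / 2
        total += payoff_a
    return total / trials

if __name__ == "__main__":
    for a in ("SAFE", "UNSAFE"):
        for b in ("SAFE", "UNSAFE"):
            print(f"A={a:6} vs B={b:6}: expected payoff to A = {expected_payoff(a, b):8.2f}")
```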





Just the “leading edge” of government AI usage?

https://www.jpost.com/breaking-news/fears-of-digital-dictatorship-as-myanmar-deploys-ai-662449

Fears of 'digital dictatorship' as Myanmar deploys AI

Protesters in Myanmar fear they are being tracked with Chinese facial recognition technology, as spiraling violence and street surveillance spark fears of a "digital dictatorship" to replace ousted leader Aung San Suu Kyi.

Security forces have focused on stamping out dissent in cities including the capital Naypyidaw, Yangon and Mandalay, where hundreds of CCTV cameras had been installed as part of a drive to improve governance and curb crime.



(Related)

https://www.protocol.com/china/china-facial-recognition?utm_campaign=post-teaser&utm_content=2gqzffnb

China sours on facial recognition tech

State media and new regulations are going after dodgy company practices. Government still gets a free pass.



(Related) When doing business is more important than doing right?

https://www.wired.com/story/apple-russia-iphone-apps-law/

Apple Bent the Rules for Russia—and Other Countries Will Take Note

Russian iPhone buyers will soon be prompted to install software developed in that country, setting a precedent that other authoritarian governments may follow.





Harmless deep fake? Let’s see what the hackers can do with it.

https://www.marketingdive.com/news/lays-sends-soccer-fans-personalized-messages-from-star-messi-using-ai/596910/

Lay's sends soccer fans personalized messages from star Messi using AI

People can visit a Messi Messages website to generate a custom video, following a series of text prompts to guide what Messi says. The site's technology manipulates the movements of the star athlete's lips and then syncs the movements with voiceover audio in real time to make it appear as though Messi is actually speaking.

Lay's Messi Messages draw on a few different pieces of emerging tech, leveraging AI, lip-syncing and facial mapping to provide soccer fans with a synthetic piece of messaging from one of the most popular players in the world. The service summons comparisons to deepfakes, an application of AI that manipulates existing footage or images of people's faces to make it appear as though they're saying things. Deepfakes have raised serious ethical concerns, but also steadily leaked into advertising as brands experiment with new production methods during the pandemic.

The synthetic videos from Messi are informed by a series of text prompts that ask the user to share their name, a friend's name, an activity they want to do with their friend — such as watching a match — and when they should do that activity (i.e. "tonight" or "tomorrow"). After the site generates the video in seconds, users have the option to download the message and share it with others. An infomercial-like spot explains how the technology works and shows Messi saying "It's incredible" in several languages.

https://www.youtube.com/watch?v=mwWtxF7wKvQ&t=26s
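
Lay's has not published how the site works; the sketch below only illustrates the front half of the pipeline the article describes, turning the four text prompts into the script a synthetic-voice and lip-sync stage would consume. synthesize_speech and lip_sync_video are hypothetical placeholder stubs standing in for the real text-to-speech and facial-mapping models.

```python
# Illustrative only: Lay's pipeline is not public. The prompt-to-script step is
# real logic; synthesize_speech and lip_sync_video are hypothetical stubs.
from dataclasses import dataclass

@dataclass
class MessagePrompts:
    fan_name: str     # the user's name
    friend_name: str  # a friend's name
    activity: str     # e.g. "watch the match"
    when: str         # e.g. "tonight" or "tomorrow"

def build_script(p: MessagePrompts) -> str:
    """Fill a fixed template with the fan's answers; this is the text the synthetic voice speaks."""
    return (
        f"Hola {p.fan_name}! Why don't you and {p.friend_name} "
        f"{p.activity} {p.when}? It's incredible!"
    )

def synthesize_speech(script: str) -> bytes:
    """Placeholder for a real text-to-speech model; returns dummy audio bytes."""
    return script.encode("utf-8")

def lip_sync_video(base_footage: str, audio: bytes) -> bytes:
    """Placeholder for the lip-sync / facial-mapping stage; returns dummy video bytes."""
    return b"VIDEO(" + base_footage.encode() + b"):" + audio

def generate_message(p: MessagePrompts) -> bytes:
    script = build_script(p)
    audio = synthesize_speech(script)
    return lip_sync_video("messi_base_footage.mp4", audio)

if __name__ == "__main__":
    prompts = MessagePrompts("Ana", "Luis", "watch the match", "tonight")
    print(build_script(prompts))
    print(len(generate_message(prompts)), "bytes of (dummy) video")
```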





Perspective.

https://www.efinancialcareers.com/news/2021/03/jpmorgan-ai-lab-jobs

JPMorgan now has 50 people working in its rarefied AI labs

If you want an artificial intelligence job in an investment bank that combines elements of working in academia, you probably want to work in one of the "labs" that leading banks have set up to push the boundaries of AI applications in finance.

JPMorgan has one such lab, which it founded in 2018. It's led by Manuela Veloso, who's also the head of machine learning research at Carnegie Mellon University. In the past year, it's been adding staff to a new lab in London.

Cohen said the team is working on a visualizations project that looks at the way traders ingest data on their screens and attempts to derive actionable outcomes from the information it contains.

This sounds very similar to the 'Mondrian' project discussed by Veloso at a conference in 2019. By analyzing traders' gaze patterns with respect to trading time series images, Veloso said her team had been able to design a neural net that could decide whether to buy or sell a stock with a 95% accuracy rate.
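
JPMorgan has not published Mondrian's design, so the following is a speculative sketch of the general idea rather than the bank's model: a small convolutional network that takes a rendered price chart stacked with a gaze heat-map as a two-channel image and emits a buy/sell decision. The layer sizes, input resolution and channel layout are all assumptions.

```python
# Speculative sketch only; architecture and inputs are assumptions, not JPMorgan's model.
import torch
import torch.nn as nn

class GazeChartClassifier(nn.Module):
    """Buy/sell classifier over a 2-channel image: price chart + trader gaze heat-map."""
    def __init__(self, num_classes: int = 2):  # 0 = sell, 1 = buy
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1),  # ch 0: chart, ch 1: gaze heat-map
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 2, H, W) -- a rendered time-series chart stacked with a
        # normalised heat-map of where the trader looked.
        return self.head(self.features(x).flatten(1))

if __name__ == "__main__":
    model = GazeChartClassifier()
    dummy = torch.randn(4, 2, 128, 128)  # four synthetic chart/gaze pairs
    print(model(dummy).softmax(dim=1))   # per-sample buy/sell probabilities
```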

Two years later, the AI research team seems to have swelled considerably, but it doesn't seem that Mondrian is actually operational yet.





Perspective.

https://www.cnbc.com/2021/03/18/nfl-media-rights-deal-2023-2033-amazon-gets-exclusive-thursday-night.html

NFL finalizes new 11-year media rights deal, Amazon gets exclusive Thursday Night rights

The National Football League has finalized its new 11-year media rights agreement with a pact that will run through 2033 and could be worth over $100 billion.

The league announced Thursday it’s renewing TV rights with all of its existing broadcast partners and adding Amazon Prime Video as an exclusive partner for its Thursday Night Football package. It’s the first time a streaming service will carry a full package of games exclusively. Amazon is paying about $1 billion per year, according to people familiar with the matter. Amazon’s deal runs 10 years and begins in 2023.





Tools. There is a free version.

https://www.prnewswire.com/news-releases/quillbots-new-grammar-checker-uses-cutting-edge-ai-to-perfect-your-writing-301250410.html

QuillBot's New Grammar Checker Uses Cutting-Edge AI to Perfect Your Writing

QuillBot announced the release of its highly anticipated grammar checker today. QuillBot's AI-based writing platform now hosts a variety of time-saving tools that help make writing painless for 5 million global monthly active users. This new tool combines spelling, grammar, and punctuation correction tactics backed by powerful AI models, flagging errors and suggesting edits.

For more information, visit www.quillbot.com.





Proving fake data?

https://dilbert.com/strip/2021-03-19


