Personal liability. What a concept!
Facebook Investor Probing $5 Billion Privacy Payout Gets a Boost
Facebook Inc. lost a fight to withhold records in an investor lawsuit probing whether the company overpaid in a record $5 billion settlement with a government regulator in 2019 to protect founder Mark Zuckerberg in the wake of the Cambridge Analytica privacy scandal.
A Delaware judge ruled Wednesday that Facebook must furnish some internal files to Rhode Island’s public employee pension fund, which is questioning how the company came to terms with the U.S. Federal Trade Commission in the agency’s sweeping investigation of the misuse of consumer data.
The pension fund is looking into whether Facebook directors agreed to pay an additional $2 billion as part of the FTC settlement over the fallout from Cambridge Analytica to shield Zuckerberg from facing personal liability in the case. Delaware law gives investors access to internal files if they raise legitimate questions about mismanagement or self-dealing by directors.
Will no one rid us of this turbulent company?
Sweden’s data watchdog slaps police for unlawful use of Clearview AI
Sweden’s data protection authority, the IMY, has fined the local police authority €250,000 ($300k+) for unlawful use of the controversial facial recognition software, Clearview AI, in breach of the country’s Criminal Data Act.
As part of the enforcement, the police must also conduct further training and education of staff in order to avoid any future processing of personal data in breach of data protection rules and regulations.
(Related)
https://www.buzzfeednews.com/article/carolinehaskins1/facial-recognition-clearview-patent-dating
A Clearview AI Patent Application Describes Facial Recognition For Dating, And Identifying Drug Users And Homeless People
… The patent filing was made in August — three months after the company said in a federal court that it would take voluntary actions to “avoid transacting with non-governmental customers anywhere.” The patent application, however, describes ways to apply its facial recognition software to the private sector as well as to law enforcement and social work, where it says it could be used to possibly identify people who use drugs or people experiencing homelessness.
"In many instances, it may be desirable for an individual to know more about a person that they meet, such as through business, dating, or other relationship,” the application reads, outlining a means of running a rapid background check based on an image of a person’s face. “A strong need exists for an improved method and system to obtain information about a person.”
The document also describes several other possible uses for Clearview AI, such as to “grant or deny access for a person, a facility, a venue, or a device,” or for a public agency to accurately dispense social benefits and reduce fraud. It also says users could deploy Clearview to identify “a sex offender” or “homeless people,” or to determine whether someone has a “mental issue or handicap,” which could influence the way police respond to a situation.
One unique bio-marker is much like another…
AI can use the veins on your hand like fingerprints to identify you
The pattern of veins on the back of someone’s hand is as unique as their fingerprints and can be used to identify people even with a cheap commercial camera. The technique could be used in smart door locks or even to identify people from CCTV images.
I imagine there will be a few counter arguments.
https://thenextweb.com/neural/2021/02/12/doesnt-make-sense-ban-autonomous-weapons-syndication/
Why it doesn’t make sense to ban autonomous weapons
In May 2019, the Defense Advanced Research Projects Agency (DARPA) declared, “No AI currently exists that can outduel a human strapped into a fighter jet in a high-speed, high-G dogfight.”
Fast forward to August 2020, which saw an AI built by Heron Systems flawlessly beat top fighter pilots 5 to 0 at DARPA’s AlphaDogFight Trials. Time and time again Heron’s AI outmaneuvered human pilots as it pushed the boundaries of g-forces with unconventional tactics, lightning-fast decision-making, and deadly accuracy.
Former US Defense Secretary Mark Esper announced in September that the Air Combat Evolution (ACE) Program will deliver AI to the cockpit by 2024. Officials are very clear that the goal is to “assist” pilots rather than to “replace” them. It is difficult to imagine, however, how a human could reliably be kept in the loop in the heat of battle against other AI-enabled platforms, when humans are simply not fast enough.
On Tuesday, January 26, the National Security Commission on Artificial Intelligence met and recommended against banning AI for such applications. In fact, Vice Chairman Robert Work stated that AI could make fewer mistakes than its human counterparts. The Commission’s recommendations, which are expected to be delivered to Congress in March, stand in direct opposition to The Campaign to Stop Killer Robots, a coalition of 30 countries and numerous non-governmental organizations that has been advocating against autonomous weapons since 2013.