The next big privacy issue? Probably not.
As Amazon Ring Partners With Law Enforcement on Surveillance Video, Privacy Concerns Mount
While Amazon takes special care to position its Ring video doorbell product as a friendly, high-tech version of the traditional “neighborhood watch,” U.S. lawmakers and privacy advocates are becoming increasingly skeptical. As they see it, Amazon Ring is putting into place few if any safeguards to protect personal privacy and civil rights. Now that Amazon Ring is partnering with hundreds of law enforcement and police agencies around the nation to share surveillance video, these privacy concerns are only mounting.
… Currently, at least 630 police departments around the nation have some form of partnership agreement in place with Amazon Ring. That number has grown by more than 200 since August, and Amazon Ring appears to be on a massive outreach program to get even more police departments to sign on to its surveillance video partnerships.
Under the basic type of agreement with law enforcement agencies, police can keep and share surveillance videos with anyone they want, even when there is no evidence that a crime has taken place.
… Amazon Ring has put into place some privacy and civil liberties safeguards. For example, owners of Ring video doorbells are under no obligation to provide surveillance video, even when requested or suggested. And Amazon Ring specifically protects the identity of Ring video doorbell owners, such that local police departments cannot “retaliate” against anyone who refuses a surveillance video request. [“I see you have a Ring doorbell, citizen. Why not voluntarily give me the video?” Bob]
Without access to the same sources the author had, Facebook must rely on the State to tell it what is truth and what is fake. How 1984-ish...
Singapore tells Facebook to correct user's post in test of 'fake news' laws
Singapore instructed Facebook on Friday to publish a correction on a user’s social media post under a new “fake news” law, raising fresh questions about how the company will adhere to government requests to regulate content.
The government said in a statement that it had issued an order requiring Facebook “to publish a correction notice” on a Nov. 23 post which contained accusations about the arrest of a supposed whistleblower and election rigging.
Singapore said the allegations were “false” and “scurrilous” and initially ordered user Alex Tan, who runs the States Times Review blog, to issue the correction notice on the post. Tan, who does not live in Singapore and says he is an Australian citizen, refused, and authorities said he is now under investigation.
… Facebook often blocks content that governments allege violates local laws, with nearly 18,000 cases globally in the year to June, according to the company’s “transparency report.”
But the new Singapore law is the first to demand that Facebook publish corrections when directed to do so by the government, and it remains unclear how Facebook plans to respond to the order.
The case is the first big test for a law that was two years in the making and came into effect last month.
(Related)
You have to do this thousands of times each hour.
The context: The vast majority of Facebook’s moderation is now done automatically by the company’s machine-learning systems, reducing the amount of harrowing content its moderators have to review. In its latest community standards enforcement report, published earlier this month, the company claimed that 98% of terrorist videos and photos are removed before anyone has the chance to see them, let alone report them.
… Facebook’s AI uses two main approaches to look for dangerous content. One is to employ neural networks that look for features and behaviors of known objects and label them with varying percentages of confidence.
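To make those confidence labels concrete, here is a minimal Python sketch of how per-label scores might feed the remove-or-review decision described in the next paragraph. The policy labels and thresholds are hypothetical stand-ins invented for illustration; Facebook's actual models and cutoffs are not public.

```python
# Hypothetical sketch: route one piece of content by classifier confidence.
# Labels and thresholds are invented; they only illustrate the
# auto-remove / human-review split described in the article.

AUTO_REMOVE_THRESHOLD = 0.97   # assumed cutoff for automatic deletion
HUMAN_REVIEW_THRESHOLD = 0.60  # assumed cutoff for queueing to a reviewer

def route_content(label_scores):
    """Pick an action given {policy_label: model confidence}."""
    top_label, top_score = max(label_scores.items(), key=lambda kv: kv[1])
    if top_score >= AUTO_REMOVE_THRESHOLD:
        return "remove:" + top_label   # deleted before anyone sees it
    if top_score >= HUMAN_REVIEW_THRESHOLD:
        return "review:" + top_label   # sent to a human content reviewer
    return "allow"

print(route_content({"terrorism": 0.99, "nudity": 0.02}))  # remove:terrorism
print(route_content({"bullying": 0.70}))                   # review:bullying
print(route_content({"bullying": 0.10}))                   # allow
```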
… If the system decides that a video file contains problematic images or behavior, it can remove it automatically or send it to a human content reviewer. If it breaks the rules, Facebook can then create a hash—a unique string of numbers—to denote it and propagate that throughout the system so that other matching content will be automatically deleted if someone tries to re-upload it. These hashes can be shared with other social-media firms so they can also take down copies of the offending file.
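The hash-and-propagate step can be sketched in a few lines, too. Facebook has open-sourced perceptual-hash algorithms (PDQ for photos, TMK+PDQF for videos) so that re-encoded or lightly edited copies still match; the cryptographic SHA-256 used in this sketch, by contrast, only catches byte-identical re-uploads, and the database and file contents below are stand-ins.

```python
import hashlib

banned_hashes = set()  # stand-in for a shared industry hash database

def fingerprint(data):
    """Return a unique hex string denoting this file (SHA-256)."""
    return hashlib.sha256(data).hexdigest()

def ban(data):
    """Record an offending file; its hash can be shared with other firms."""
    h = fingerprint(data)
    banned_hashes.add(h)
    return h

def screen_upload(data):
    """True if the upload may proceed, False if it matches a banned hash."""
    return fingerprint(data) not in banned_hashes

video = b"offending video bytes"
ban(video)
print(screen_upload(video))          # False: identical re-upload auto-deleted
print(screen_upload(b"other file"))  # True: no match in the database
```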
… Facebook is still struggling to automate its understanding of the meaning, nuance, and context of language. That’s why the company relies on people to report the overwhelming majority of bullying and harassment posts that break its rules: just 16% of these posts are identified by its automated systems.
The Russians are doing it again. (Whatever “it” is)