What’s that journalistic rule… If it bleeds, it leads? Perhaps
we need a new philosophy.
Why AI is
still terrible at spotting violence online
Artificial intelligence can identify people in
pictures, find the next TV series you should binge watch on Netflix,
and even drive a car.
But on Friday, when a suspected terrorist in New
Zealand streamed live video to Facebook of a mass murder, the
technology was of no help. The gruesome broadcast went on for
at least 17 minutes until New
Zealand police reported it to the social network.
Recordings of the video and related posts about it rocketed
across social media while companies tried to keep up.
… Even if violence appears to be shown in a
video, it isn't always so straightforward that a human — let alone
a trained machine — can spot it or decide what best to do with it.
A weapon might not be visible in a video or photo, or what appears to
be violence could actually be a simulation.
Furthermore, factors like lighting or background
images can throw off a computer.
… It's not simply that using AI to glean
meaning out of one video is hard, she said. It's doing so with the
high volume of videos social networks see day after day. On YouTube,
for instance, users upload more
than 400 hours of video per minute — or more than 576,000 hours
per day.
"Hundreds of thousands of hours of video is
what these companies trade in," Roberts said. "That's
actually what they solicit, and what they want."
Welcome to extremely low probabilities in an
extremely large (global) population.
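A back-of-the-envelope sketch of why that combination of rare events and enormous volume breaks automated moderation. Every rate below is an illustrative assumption, not a platform figure:

# Back-of-the-envelope moderation math. All rates are assumptions.
UPLOAD_HOURS_PER_MIN = 400                      # YouTube figure cited above
hours_per_day = UPLOAD_HOURS_PER_MIN * 60 * 24  # = 576,000 hours/day

clips_per_day = 5_000_000     # hypothetical count of discrete uploads
violent_rate = 1e-6           # assume 1 in a million clips is truly violent
false_positive_rate = 0.001   # assume a 99.9%-specific classifier

true_events = clips_per_day * violent_rate           # ~5 real incidents
false_alarms = clips_per_day * false_positive_rate   # ~5,000 benign flags

print(f"{hours_per_day:,} hours uploaded per day")
print(f"~{true_events:.0f} true events vs. ~{false_alarms:,.0f} false alarms")
# Roughly 1,000 false alarms per real incident: the base-rate problem
# behind "extremely low probabilities in an extremely large population."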
Jargon
Watch: The Rising Danger of Stochastic Terrorism
Wired:
“Stochastic Terrorism n.
Acts of violence by random extremists, triggered by political
demagoguery. When President
Trump tweeted a video of himself body-slamming the CNN logo in
2017, most people took it as a stupid joke. For Cesar Sayoc, it may
have been a call to arms: Last October the avowed Trump fan allegedly
mailed a pipe bomb to CNN headquarters. No one told Sayoc to do it,
but the fact that it happened was really no surprise. In 2011, after
the shooting of US representative Gabby Giffords, a Daily
Kos blog warned of a new threat the writer called stochastic
terrorism: the use of mass media to incite attacks by random
nut jobs—acts that are “statistically
predictable but individually unpredictable.” The writer
had in mind right-wing radio and TV agitators, but in 2016, Rolling
Stone accused then-candidate Trump of using the same playbook
when he joked that “Second Amendment people” might “do”
something if Hillary Clinton won the election…”
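The arithmetic behind “statistically predictable but individually unpredictable” is worth making explicit. Both numbers below are assumptions chosen only for illustration:

# "Statistically predictable but individually unpredictable" in numbers.
population = 330_000_000   # roughly the US population
p_acts = 1e-7              # assumed chance any one person acts in a year

expected_incidents = population * p_acts
print(f"Expected incidents per year: ~{expected_incidents:.0f}")   # ~33

# The aggregate count is forecastable, yet the chance that any single,
# named individual is the attacker stays at one in ten million, so no
# specific act can be predicted in advance.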
If Cambridge Analytica was the cause, have we yet
found a cure?
Cambridge
Analytica was the Chernobyl of privacy
… We knew that in 2012 the re-election
campaign of Barack
Obama had built a voter contact system using Facebook and had
acquired personal data on millions of American voters. When we tried
to raise the alarm that no head of state should have so
much personal data on so many of his citizens – many of whom
opposed his candidacy – we were ignored because the dominant story
at the moment was how digitally savvy the Obama campaign was. No one
seemed concerned that the United States might some day have a
president who was unconcerned with niceties like the rule of law or
civil liberties.
… In December 2016 a Swiss news site called
Das
Magazin published a long account of how Cambridge Analytica had
worked with researchers at the University of Cambridge to gather
personal information on millions of Facebook users and deploy it to
position political advertisements on Facebook. Facebook users had
been persuaded to take a seemingly harmless personality quiz.
Few took note of the Das Magazin story until the
US-based news site Motherboard
translated it into English six weeks later, in January 2017.
… The fact is that Cambridge Analytica sold
snake oil to gullible political campaigns around the world. [CEO Alexander] Nix
boasted of the power of “psychometric profiling” of voters using
a complex set of personality descriptors. Nix somehow convinced
campaigns that this ability to stereotype voters could help them
precisely construct messages and target ads. There is no reason
to believe any of this.
… The fact is that if you want to target
political advertisements precisely to move voters who have expressed
interest in particular issues or share certain interests, there is an
ideal tool to use that does not rely on pseudoscience. It’s called
Facebook.
Buying an inexpensive ad on Facebook involves a
simple process of choosing the location, gender, occupation,
education level, hobbies, or professional affiliation of Facebook
users. You don’t need Cambridge Analytica when you have Facebook.
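To make that concrete, here is a hypothetical sketch of the kind of targeting spec a self-serve ad platform exposes. The field names are invented for illustration and are not the actual Facebook Marketing API schema:

# Hypothetical self-serve ad targeting spec. Field names are invented;
# this is not the real Facebook Marketing API.
ad_targeting = {
    "location":  {"country": "US", "state": "WI"},
    "age_range": (30, 65),
    "gender":    "all",
    "education": "college",
    "interests": ["gun rights", "border security"],
    "job_title": ["farmer", "small business owner"],
}

# The platform resolves these filters against its own profile data,
# which is exactly why a campaign needs no psychometric middleman.
for field, value in ad_targeting.items():
    print(f"{field:>10}: {value}")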
Can I copyright my face? Must the government get
a warrant to look at me? Take my picture?
The
Government Is Using the Most Vulnerable People to Test Facial
Recognition Software
If you thought IBM using “quietly
scraped” Flickr images to train facial recognition systems was
bad, it gets worse. Our research, which will be reviewed for
publication this summer, indicates that the U.S. government,
researchers, and corporations have used images of immigrants, abused
children, and dead people to test their facial recognition systems,
all without consent. The very group the U.S. government has tasked
with regulating the facial recognition industry is perhaps the worst
offender when it comes to using images sourced without the knowledge
of the people in the photographs.
(Related) Possible answer?
Use and
Fair Use: Statement on shared images in facial recognition AI
… While we do not have all the facts regarding
the IBM dataset, we are aware that fair use allows all types of
content to be used freely, and that all types of content are
collected and used every day to train and develop AI. CC licenses
were designed to address a specific constraint, which they do very
well: unlocking restrictive copyright. But
copyright is not a good tool to protect individual privacy, to
address research ethics in AI development, or to regulate the use of
surveillance tools employed online. Those issues rightly
belong in the public policy space, and good solutions will consider
both the law and the community norms of CC licenses and content
shared online in general.
If Arnold Schwarzenegger puts his face on my body
(everyone needs a ‘before’ image), is that as outrageous as me
putting my face on his body?
Coming Soon
to a Courtroom Near You? What Lawyers Should Know About Deepfake
Videos
The Recorder (Law.com / paywall) via free access on Yahoo: “Are rules that guard against forged or
tampered evidence enough to prevent deepfake videos from making their
way into court cases? … If you follow technology, it’s likely
you’re
in a panic over deepfakes—altered videos that employ artificial
intelligence and are nearly impossible to detect. Or else you’re
over
it already. For lawyers, a better course may lie somewhere in
between. We asked Riana
Pfefferkorn, associate director of surveillance and cybersecurity
at Stanford Law School’s Center for Internet and Society, to
explain (sans the alarmist rhetoric) why deepfakes should probably be
on your radar….”
For my Enterprise Architecture students.
With great
speed comes great responsibility: Software testing now a continuous
race
Continuous integration and continuous delivery are
giving us software updates every day in many cases. A recent survey
of 500 IT executives finds 58
percent of enterprises deploy a new build daily, and 26 percent at
least hourly. That's why Agile and DevOps are so
important. With great speed comes great responsibility. A constant
stream of software needs constant quality assurance. To make sure
things are functioning as they should, organizations are turning to
continuous testing.
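A minimal sketch of what a continuous-testing gate looks like in practice; the test command and the deploy step are assumptions for illustration, not any vendor's pipeline:

# Every build runs the automated suite; failures block the deploy.
import subprocess
import sys

def run_tests() -> bool:
    """Run the test suite; a non-zero exit code means failure."""
    return subprocess.run(["pytest", "--quiet"]).returncode == 0

def deploy_build(version: str) -> None:
    print(f"Deploying build {version}...")   # placeholder deploy step

if __name__ == "__main__":
    if run_tests():
        deploy_build("2019.03.18.1")
    else:
        sys.exit("Build rejected: tests failed.")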
I think I’ll make my students give more
presentations…
How long
will it take to read a speech or presentation?
“Convert words to time. Enter the word count into the tool below (or paste in text) to see how many minutes it will take you to read. Estimates the number of minutes based on a slow, average, or fast reading pace.” Great tool for presentations in any setting – in person or online.
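The arithmetic the tool performs is simple enough to sketch (minutes = words ÷ words per minute). The pace values below are common rules of thumb, not necessarily the site's exact figures:

# Words-to-time estimate at three assumed reading paces.
READING_SPEEDS_WPM = {"slow": 100, "average": 130, "fast": 160}

def speech_minutes(word_count: int) -> dict:
    """Estimated delivery time in minutes at each pace."""
    return {pace: round(word_count / wpm, 1)
            for pace, wpm in READING_SPEEDS_WPM.items()}

print(speech_minutes(1_000))
# {'slow': 10.0, 'average': 7.7, 'fast': 6.2}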