Toward a warm, fuzzy election.
Public, Election Officials May Be Kept in the Dark on Hacks
If the FBI discovers that foreign hackers have infiltrated the networks of your county election office, you may not find out about it until after voting is over. And your governor and other state officials may be kept in the dark, too.
There’s no federal law compelling state and local governments to share information when an electoral system is hacked. And a federal policy keeps details secret by shielding the identity of all cyber victims, regardless of whether election systems are involved.
Election officials are in a difficult spot: if someone else’s voting system is targeted, they want to know exactly what happened so they can protect their own system. Yet when their own systems are targeted, they may be cautious about disclosing details.
… At least two states — Colorado and Iowa — have implemented policies to compel local officials to notify the state about suspected breaches involving election systems.
Jeff Bezos for President!
How Amazon.com moved into the business of U.S. elections
Reuters: “The expansion by Amazon Web Services into state and local elections has quietly gathered pace since the 2016 U.S. presidential vote. More than 40 states [How many electoral votes? Bob] now use one or more of Amazon’s election offerings, according to a presentation given by an Amazon executive this year and seen by Reuters. So do America’s two main political parties, the Democratic presidential candidate Joe Biden and the U.S. federal body charged with administering and enforcing federal campaign finance laws. While it does not handle voting on election day, AWS – along with a broad network of partners – now runs state and county election websites, stores voter registration rolls and ballot data, facilitates overseas voting by military personnel and helps provide live election-night results, according to company documents and interviews…”
For my Computer Security students.
Best Practices for Evaluating and Vetting Third Parties
… If the fates of companies like Delta, Best Buy, Target and so many others tell us anything, it’s that having good internal security, while critical, is no longer enough. In fact, a 2018 study from Ponemon found that more than half the breaches in the United States these days are due to third parties. To be fully protected requires a solid Third-Party Cyber Risk Management (TPCRM) program.
… The first step to figuring out your third-party cyber risk is to identify all of the vendors you are working with. This can be accomplished by getting a list of all outgoing payments for some period of time, likely the previous year.
… A critical step for prioritizing your vendors is understanding how you use them. Do you share data with them? Do they have access to your facilities? Not all third parties are created equal, and they therefore do not all require the same level of assessment or resources.
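The process described above (pull a period of outgoing payments to enumerate vendors, then tier each vendor by how much access it has) can be sketched roughly as follows. The CSV column names and tier labels here are invented for illustration, not part of any particular TPCRM product:

```python
import csv
from collections import defaultdict

def build_vendor_inventory(payments_csv):
    """Deduplicate a period of outgoing payments into a vendor list.

    Assumes a hypothetical CSV with columns: vendor, amount, date.
    Returns {vendor_name: total_paid}, a starting point for the
    vendor inventory the article recommends.
    """
    totals = defaultdict(float)
    with open(payments_csv, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["vendor"].strip()] += float(row["amount"])
    return dict(totals)

def tier_vendor(shares_data, has_facility_access):
    """Rough prioritization: more access warrants deeper assessment."""
    if shares_data and has_facility_access:
        return "high"    # full security assessment
    if shares_data or has_facility_access:
        return "medium"  # questionnaire plus evidence review
    return "low"         # basic due diligence
```

A vendor that both holds your data and badges into your buildings lands in the deepest-assessment tier; a one-off supplier with neither kind of access gets lightweight review.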
Like flies on the wall... Interesting image.
Alexa and Google Home abused to eavesdrop and phish passwords
By now, the privacy threats posed by Amazon Alexa and Google Home are common knowledge. Workers for both companies routinely listen to audio of users — recordings of which can be kept forever — and the sounds the devices capture can be used in criminal trials.
Now, there's a new concern: malicious apps developed by third parties and hosted by Amazon or Google. The threat isn't just theoretical. Whitehat hackers at Germany's Security Research Labs developed eight apps — four Alexa "skills" and four Google Home "actions" — that all passed Amazon or Google security-vetting processes. The skills or actions posed as simple apps for checking horoscopes, with the exception of one, which masqueraded as a random-number generator. Behind the scenes, these "smart spies," as the researchers call them, surreptitiously eavesdropped on users and phished for their passwords.
… As the following two videos show, the eavesdropping apps gave the expected responses and then went silent. In one case, an app went silent because the task was completed, and, in another instance, an app went silent because the user gave the command "stop," which Alexa uses to terminate apps. But the apps quietly logged all conversations within earshot of the device and sent a copy to a developer-designated server.
The phishing apps follow a slightly different path: they respond with an error message claiming the skill or action isn't available in that user's country, then go silent to give the impression the app is no longer running. After about a minute, the apps use a voice that mimics the ones used by Alexa and Google Home to falsely claim a device update is available, and prompt the user for the password needed to install it.
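The phishing flow just described can be illustrated as a toy state machine. This is not real Alexa or Google SDK code; every name below is invented, and it only simulates the dialog logic the researchers reported:

```python
class PhishingSkillSimulation:
    """Toy state machine mimicking the reported 'smart spy' phishing
    flow: fake an error, go quiet, then impersonate the assistant.
    Purely illustrative; not Alexa Skills Kit or Actions SDK code.
    """

    def __init__(self):
        self.state = "start"
        self.captured = None  # stands in for the attacker's server

    def handle(self, user_utterance=None):
        if self.state == "start":
            # Step 1: claim the skill is unavailable, so the user
            # believes it has exited.
            self.state = "silent"
            return "This skill is not available in your country."
        if self.state == "silent":
            # Step 2: after staying quiet (the ~1-minute pause is
            # elided here), speak in a voice resembling the assistant.
            self.state = "phish"
            return ("An important security update is available. "
                    "Please say your password.")
        if self.state == "phish":
            # Step 3: whatever the user says next is captured and,
            # in the real attack, sent to the developer's server.
            self.captured = user_utterance
            self.state = "done"
            return ""
```

Walking the object through three turns shows how an app that appears to have exited can still be listening for the final, sensitive utterance.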
Interesting tool.
BYU Law creates language database to help interpret Constitution
The Daily Universe: “The Constitution is America’s central legal document. However, it was written a long time ago, and language has since evolved. Changing language can make the law difficult for lawyers and judges to interpret.
What does it really mean to “bear arms?” How should readers understand the phrase “high crimes and misdemeanors?” BYU Law created a database to help answer questions like this. This database is called the Corpus of Founding Era American English, also known as COFEA. “Corpus” refers to a collection of written texts on a particular subject. The corpus holds founding-era documents that can be used by legal professionals for free as a tool to make educated legal decisions. BYU linguistics professor Mark Davies creates various corpora for the linguistics department and was involved in the beginning stages of the corpus. “We have all these words in the Constitution — words and phrases that, 200-250 years later, we don’t really know what they meant at that time. We can’t go in a time travel machine … to go back 240 years, but what we can do is scoop in hundreds of millions worth of text from that time and say, oh well, when people were using a word or phrase, they were using it in this context,” Davies said.
- The Corpus of Founding Era American English can be found at lawcorpus.byu.edu.
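At a much smaller scale, the kind of question a corpus like COFEA answers is a key-word-in-context (concordance) search: find every occurrence of a phrase and show the words around it. A minimal sketch, assuming the corpus is just a list of plain-text documents:

```python
import re

def concordance(corpus_texts, phrase, window=5):
    """Key-word-in-context search: return each occurrence of `phrase`
    with up to `window` words of context on either side. A toy
    version of what corpus tools do across hundreds of millions of
    words of founding-era text.
    """
    hits = []
    phrase_words = phrase.lower().split()
    n = len(phrase_words)
    for text in corpus_texts:
        words = re.findall(r"[a-zA-Z']+", text)
        lowered = [w.lower() for w in words]
        for i in range(len(words) - n + 1):
            if lowered[i:i + n] == phrase_words:
                left = " ".join(words[max(0, i - window):i])
                right = " ".join(words[i + n:i + n + window])
                hits.append((left, phrase, right))
    return hits
```

Run over many period documents, the surrounding contexts are what let researchers argue whether a phrase like “bear arms” was used in a military or an individual sense at the time.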
Probably standard in high-performance cars.
Google Maps Just Introduced a Controversial New Feature That Drivers Will Probably Love (But Police Will Utterly Hate)
On long drives, I often find myself running two real-time mapping programs on my phone at once: Google Maps and Waze.
The reason is that Google Maps seems to be a better, faster-loading map program that shows alternate routes on long trips more quickly.
But Waze, which is actually owned by Google, has one feature I greatly appreciate: it lets other drivers warn of the locations of road hazards and police speed traps.
I'm not an especially lead-footed driver, but I'd still rather know where the cops are. It's been a very small First World Problem for me that Google didn't just combine both apps.
This week, however, Google announced the next best thing: starting immediately, drivers will be able to report hazards, slowdowns and speed traps right on Google Maps.
Who rates these tools?
Algorithms are grading student essays across the country. Can this really teach kids how to write better?
Todd Feathers, who wrote about AI essay grading for Motherboard, called up every state in the country and found that at least 21 states use some form of automated scoring.
“The algorithms are prone to a couple of flaws. One is that they can be fooled by any kind of nonsense gibberish sophisticated words. It looks good from afar but it doesn’t actually mean anything. And the other problem is that some of the algorithms have been proven by the testing vendors themselves to be biased against people from certain language backgrounds.”
… And the worst part? You can’t cross-examine an algorithm and get to the bottom of why it made a specific decision. It’s a black box.
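The gibberish weakness quoted above is easy to demonstrate with a toy scorer that rewards only surface features such as essay length and vocabulary. The weights here are invented for illustration and are not any testing vendor's actual model:

```python
def surface_feature_score(essay):
    """Toy essay scorer using only surface proxies (word count and
    average word length), loosely analogous to the features automated
    scorers are criticized for over-weighting. Invented weights;
    illustrative only.
    """
    words = essay.split()
    if not words:
        return 0.0
    avg_word_len = sum(len(w) for w in words) / len(words)
    # Longer essays with longer words score higher, capped at 10.
    return min(10.0, 0.02 * len(words) + avg_word_len)
```

A string of sophisticated-sounding words that means nothing outscores a plain, coherent essay, which is exactly the failure mode described: the text "looks good from afar but doesn't actually mean anything."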
Another shot at governance of AI?
The Democratization of Artificial Intelligence (AI) for Data Science
… A study by research-led venture capital firm MMC Ventures showed that, in Europe, only 60 percent of start-ups were actually using AI in a way that’s material to their value proposition[1]. If we were to include all start-ups and established companies, this percentage would decrease even more.
Initially, this may seem surprising. But is it really?
… This article will address this AI democratization (especially the Machine Learning part of it), the required steps to do so, and how to mitigate the risks that it brings along.
… Steps Towards the Democratization of Artificial Intelligence
1) Data Accessibility and Quality
“Data is the new oil”, “Your results are only as good as your data”, “Garbage in, garbage out”, etc.
2) User-friendly Interfaces
3) Explanation of Results
A cartoon for the Privacy Foundation’s November 1st seminar?
Cartoon: Algorithmic Transparency