Useful starting points?
https://www.theregister.com/2021/04/29/ransomware_task_force_offers_48/
48 ways you can avoid file-scrambling, data-stealing miscreants – or so says the Ransomware Task Force
The Institute for Security and Technology's Ransomware Task Force (RTF) on Thursday published an 81-page report presenting policy makers with 48 recommendations to disrupt the ransomware business and mitigate the effect of such attacks.
…
The report, provided in advance of publication to The Register and due to appear here, attempts to provide guidance for dealing with the alarmingly popular scourge of ransomware, which generally involves miscreants who obtain access to poorly secured systems and steal or encrypt system data, thereafter offering to restore it or keep quiet about the whole thing in exchange for a substantial payment.
We were taught to look for changes in a pattern. For example, changes in cash flow might indicate the start or end of the fraud.
https://www.schneier.com/blog/archives/2021/04/identifying-people-through-lack-of-cell-phone-use.html
Identifying People Through Lack of Cell Phone Use
In this entertaining story of French serial criminal Rédoine Faïd and his jailbreaking ways, there’s this bit about cell phone surveillance:
After Faïd’s helicopter breakout, 3,000 police officers took part in the manhunt. According to the 2019 documentary La Traque de Rédoine Faïd, detective units scoured records of cell phones used during his escape, isolating a handful of numbers active at the time that went silent shortly thereafter.
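The pattern-change idea from the cash-flow note above is the same trick, and a small sketch makes it concrete. This is purely illustrative Python (the records, numbers, timestamps, and window boundaries are invented, not drawn from the actual investigation); the point is just the set difference: numbers active during the event that go silent afterward.

    # Illustrative only: hypothetical call records, not real investigation data.
    from datetime import datetime

    # (phone_number, timestamp of observed activity near the escape)
    records = [
        ("num-A", datetime(2018, 7, 1, 11, 20)),
        ("num-A", datetime(2018, 7, 3, 9, 0)),    # still active later, so not interesting
        ("num-B", datetime(2018, 7, 1, 11, 25)),  # active during the window, then silent
        ("num-C", datetime(2018, 6, 30, 8, 0)),   # never active during the window
    ]

    escape_start = datetime(2018, 7, 1, 11, 0)   # hypothetical window
    escape_end   = datetime(2018, 7, 1, 12, 0)
    silence_from = datetime(2018, 7, 2, 0, 0)    # activity after this counts as "not silent"

    active_during = {n for n, t in records if escape_start <= t <= escape_end}
    active_after  = {n for n, t in records if t >= silence_from}

    # Numbers present during the event but absent afterward: the suspicious change in pattern.
    suspicious = active_during - active_after
    print(suspicious)  # {'num-B'}

The fraud analogy works the same way: rather than hunting for one damning record, you look for accounts or numbers whose behavior changes sharply around a point in time.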
Helped me understand a few things.
https://www.lawfareblog.com/machines-learn-brussels-writes-rules-eus-new-ai-regulation
Machines Learn That Brussels Writes the Rules: The EU’s New AI Regulation
The European Union’s proposed artificial intelligence (AI) regulation, released on April 21, is a direct challenge to Silicon Valley’s common view that law should leave emerging technology alone. The proposal sets out a nuanced regulatory structure that bans some uses of AI, heavily regulates high-risk uses and lightly regulates less risky AI systems.
The proposal would require providers and users of high-risk AI systems to comply with rules on data and data governance; documentation and record-keeping; transparency and provision of information to users; human oversight; and robustness, accuracy and security. Its major innovation, telegraphed in last year’s White Paper on Artificial Intelligence, is a requirement for ex-ante conformity assessments to establish that high-risk AI systems meet these requirements before they can be offered on the market or put into service. An additional important innovation is a mandate for a postmarket monitoring system to detect problems in use and to mitigate them.
No need to find these yourself; just jump when we point one out. (Who determines what is art or academic content?)
https://www.theverge.com/2021/4/29/22409306/eu-law-one-hour-terrorist-content-takedowns-passes-parliament?scrolla=5eb6d68b7fedc32c19ef33b4
EU adopts controversial law forcing one-hour takedowns of terrorist content
The European Parliament has formally adopted a law requiring internet companies to “remove or disable access to flagged terrorist content” within one hour after being notified by national authorities. Once issued, such takedown notices will apply across the EU, with countries able to levy financial penalties against firms that refuse to comply.
The legislation will come into force 12 months after it is published in the EU’s official journal, a standard step for all EU law. It will then have to be adopted by each member state.
… Notably, the legislation now explicitly excludes takedowns targeting terrorist content that’s part of any educational, artistic, journalistic, or academic material. It also includes no obligation for internet companies to preemptively monitor or filter their content.
Perhaps governments are not that anxious to eliminate bias.
https://thenextweb.com/news/black-man-says-racially-biased-ai-system-rejected-his-passport-photo-facial-recognition-tiktok
Black man says racially-biased AI system rejected his passport photo
… Joris Lechêne, a model and racial justice activist, said in a TikTok video that his photo met every rule in the application guidelines:
But lo and behold, that photo was rejected because the artificial intelligence software wasn’t designed with people of my phenotype in mind.
… Despite knowing about these biases for years, the government is still using the same face analysis algorithm. In March, the Passport Office told New Scientist that an update to the system had been available for more than a year, but still hadn’t been rolled out.
Interesting article.
https://www.lawfareblog.com/data-brokers-and-national-security
Data Brokers and National Security
In the worlds of data protection and privacy, too often there is a decoupling of national security issues and what might be termed non-national security issues despite the clear interplay between the two realms. Over the past decade, U.S. adversaries have vacuumed up the personal data of many Americans with one nation possibly being at the fore: the People’s Republic of China (PRC). The PRC was connected to the Office of Personnel Management and Equifax hacks, both of which provided massive troves of data the PRC has reportedly used to foil U.S. espionage and intelligence collection efforts abroad.
… California and Vermont have enacted laws requiring the registration of data brokers operating in those states, and legislation has been proposed in Congress to do the same.
…
Earlier this month, Justin Sherman discussed definitional problems and gaps with the California and Vermont statutes on Lawfare, arguing that getting these matters right in federal legislation is critical.
Probably should have happened years ago. (Will virtual lawyers wear pants?)
https://www.bespacific.com/zoom-court-is-changing-how-justice-is-served/
Zoom Court Is Changing How Justice Is Served
The Atlantic – “Last spring, as COVID-19 infections surged for the first time, many American courts curtailed their operations. As case backlogs swelled, courts moved online, at a speed that has amazed—and sometimes alarmed—judges, prosecutors, and defense attorneys. In the past year, U.S. courts have conducted millions of hearings, depositions, arraignments, settlement conferences, and even trials—nearly entirely in civil cases or for minor criminal offenses—over Zoom and other meeting platforms. As of late February, Texas, the state that’s moved online most aggressively, had held 1.1 million remote proceedings. “Virtual justice” (the preferred, if unsettling, term) is an emergency response to a dire situation. But it is also a vision some judicial innovators had long tried to realize. One leading booster, Michigan Chief Justice Bridget Mary McCormack, told me that going online can make courts not only safer but “more transparent, more accessible, and more convenient.” Witnesses, jurors, and litigants no longer need to miss hours of work and fight traffic. Attorneys with cases in multiple courts can jump from one to another by swiping on their phones. In July the Conference of Chief Justices and the Conference of State Court Administrators jointly endorsed a set of “Guiding Principles for Post-pandemic Court Technology” with a blunt message: The legal system should “move as many court processes as possible online,” and keep them there after the risk of infection passes. The pandemic, they wrote, “is not the disruption courts wanted, but it is the disruption that courts needed.”…
Perspective. This has never happened before, so it can’t happen now?
https://www.thedailybeast.com/artificial-intelligence-company-dataminr-warned-us-capitol-police-about-jan-6-riot
Even a Private AI Company Warned Capitol Police Ahead of Jan. 6 Riot
Capitol Police received warnings on Jan. 5 about social media posts discussing an attack on the U.S. Capitol, according to emails obtained by CNN. The warnings came from a rep for artificial intelligence company Dataminr, who said they had detected a number of troubling posts, including one on internet message board 8kun, which said “we will storm government buildings, kill cops, kill security guards, kill federal employees and agents.” Hours later, the same Dataminr rep got back in touch with Capitol Police to flag comments on Parler about storming the Capitol. However, internal communications indicate that Capitol Police didn’t consider the threats credible. According to one Senate source, after months of investigation, they are “stunned” at the way that Capitol Police ignored warning signs about the Jan. 6 insurrection. The heads of other law enforcement agencies have all blamed each other for dropping the ball.
It has always been possible to make guns in your home workshop. Expensive and time-consuming, yes. But possible. 3D printing is faster and cheaper.
https://apnews.com/article/courts-gun-politics-b94001d41109ac47dda41ac6f3583340
U.S. court says ‘ghost gun’ plans can be posted online
Plans for 3D-printed, self-assembled “ghost guns” can be posted online without U.S. State Department approval, a federal appeals court ruled Tuesday.
A divided panel of the 9th U.S. Circuit Court of Appeals in San Francisco reinstated a Trump administration order that permitted removal of the guns from the State Department’s Munitions List. Listed weapons need State Department approval for export.
In 2015, federal courts applied the requirement to weapons posted online and intended for production on 3D printers, the San Francisco Chronicle reported.