At last, Uncle Sam awakens.
https://gizmodo.com/doj-to-treat-ransomware-hacks-like-terrorism-now-heres-1847027610
DOJ to Treat Ransomware Hacks Like Terrorism Now: Here's the Full Memo
The U.S. Department of Justice plans to take a much harsher tack when pursuing cybercriminals involved in ransomware attacks—and will investigate them using strategies similar to those currently employed against foreign and domestic terrorists.
The new internal guidelines, previously reported by Reuters, were passed down to U.S. attorney’s offices throughout the country on Thursday, outlining a more coordinated approach to investigating attacks. The new guidance includes a stipulation that such investigations be “centrally coordinated” with the newly created task force on ransomware run by the Justice Department in Washington, DC. That task force, formed in April, is currently developing a “strategy that targets the entire criminal ecosystem around ransomware” by prioritizing “prosecutions, disruptions of ongoing attacks and curbs on services that support the attacks, such as online forums that advertise the sale of ransomware or hosting services that facilitate ransomware campaigns,” the Wall Street Journal previously reported.
[Memo on Scribd]
Use your access for evil? Does not violate this law! (But probably lots of other laws)
Diverse six-justice majority rejects broad reading of computer-fraud law
Ronald Mann writes:
The Supreme Court’s decision on Thursday in Van Buren v. United States provides the court’s first serious look at one of the most important criminal statutes involving computer-related crime, the federal Computer Fraud and Abuse Act. Justice Amy Coney Barrett’s opinion for a majority of six firmly rejected the broad reading of that statute that the Department of Justice has pressed in recent years.
Among other things, the CFAA criminalizes conduct that “exceeds authorized access” of a computer. Crucially, the statute defines that term as meaning “to access a computer with authorization and to use such access to obtain … information … that the accesser is not entitled so to obtain.” The question in Van Buren was whether users violate that statute by accessing information for improper purposes or instead whether users violate the statute only if they access information they were not entitled to obtain. In this case, for example, a Georgia police officer named Nathan Van Buren took a bribe to run a license-plate check. He was entitled to run license-plate checks, but not for illicit purposes. The lower courts upheld a conviction under the CFAA (because he was not entitled to check license-plate records for private purposes). The Supreme Court disagreed, adopting the narrower reading of the CFAA, under which it is a crime only if users access information they were not entitled to obtain.
Read more on SCOTUSblog.
For my Computer Security students.
https://www.schneier.com/blog/archives/2021/06/security-and-human-behavior-shb-2021.html
Security and Human Behavior (SHB) 2021
Today is the second day of the fourteenth Workshop on Security and Human Behavior. The University of Cambridge is the host, but we’re all on Zoom.
SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The forty or so attendees include psychologists, economists, computer security researchers, sociologists, political scientists, criminologists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.
Our goal is always to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to six to eight minutes, with the rest of the time for open discussion. The format translates well to Zoom, and we’re using random breakouts for the breaks between sessions.
I always find this workshop to be the most intellectually stimulating two days of my professional year. It influences my thinking in different, and sometimes surprising, ways.
This year’s schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks.
Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, twelfth, and thirteenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.
Worth noting.
11th Circuit Upholds Historic $380 Million Equifax Data-Breach Settlement
Izzy Kapnick reports:
A three-judge panel for the 11th Circuit on Thursday upheld the largest-ever U.S. class action settlement over a consumer data breach, rejecting a bevy of challenges to the $380 million deal.
Finalized in January 2020, the settlement compensates U.S. consumers whose personal information was exposed in a cyberattack on the credit bureau Equifax. The breach compromised an estimated 147 million people’s data, including social security numbers and addresses.
Read more on Courthouse News.
Win some…
https://mspoweruser.com/end-to-end-encryption-is-coming-to-microsoft-teams-calls-soon/
End to End Encryption is coming to Microsoft Teams Calls soon
… Microsoft expects to begin rolling this out in early July and expects the rollout to be completed by mid-July.
Lose some…
TikTok just gave itself permission to collect biometric data on US users, including ‘faceprints and voiceprints’
A change to TikTok’s U.S. privacy policy on Wednesday introduced a new section that says the social video app “may collect biometric identifiers and biometric information” from its users’ content. This includes things like “faceprints and voiceprints,” the policy explained. Reached for comment, TikTok could not confirm what product developments necessitated the addition of biometric data to its list of disclosures about the information it automatically collects from users, but said it would ask for consent in the case such data collection practices began.
The debate…
https://www.ft.com/content/d1990d60-082e-422c-9753-23ed395a58e4
As AI develops, so does the debate over profits and ethics
Here’s one question that even the smartest minds aided by the most powerful machines will struggle to answer: at what point do the societal costs of not exploiting a transformative technology outweigh the conspicuous risks of using it?
… The uses of AI are too varied and consequential for any one government, company or research organisation to determine. But the profit motive that currently directs so much research in the field risks distorting its outcomes. Public debate about where the balance lies between innovation and regulation may be raucous and messy, but it is both inevitable and good that it is growing louder.
(Related) Yes or no, probably, maybe? (Podcast)
https://www.nytimes.com/2021/06/04/opinion/ezra-klein-podcast-brian-christian.html
Is AI the problem? Or are we?
If you talk to many of the people working on the cutting edge of artificial intelligence research, you’ll hear that we are on the cusp of a technology that will be far more transformative than simply computers and the internet, one that could bring about a new industrial revolution and usher in a utopia — or perhaps pose the greatest threat in our species’s history.
Others, of course, will tell you those folks are nuts.
[You can listen to this episode of “The Ezra Klein Show” on Apple, Spotify or Google or wherever you get your podcasts.]
Making AI safe?
CPSC Publishes Report on Artificial Intelligence and Machine Learning
On May 21, 2021, the U.S. Consumer Product Safety Commission (“CPSC”) published a report on artificial intelligence (AI) and machine learning (ML) in consumer products. The report highlights recent CPSC staff activity concerning AI and ML, proposes a framework for evaluating the potential safety impact of AI and ML capabilities in consumer products, and recommends several actions the CPSC can take to identify and address related hazards.
Concerning staff activity, CPSC recently hired a Chief Technologist with a background in AI and ML to address the use of AI in consumer products. The CPSC also recently established an “AI/ML Working Group” and held a virtual forum on AI and ML in March 2021.
For your next programming class?
https://www.makeuseof.com/an-introduction-to-the-bubble-sort-algorithm/
An Introduction to the Bubble Sort Algorithm
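For students who want to see the idea before clicking through, here is a minimal bubble sort sketch in Python; the function name and sample list are my own illustration, not taken from the linked article.

```python
# A minimal bubble sort sketch: repeatedly swap adjacent out-of-order pairs
# until a full pass makes no swaps.

def bubble_sort(items):
    """Sort a list in place and return it."""
    n = len(items)
    for i in range(n - 1):
        swapped = False
        # After each pass the largest remaining element "bubbles" to the end,
        # so the inner loop can stop one position earlier each time.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:
            break  # no swaps on this pass: the list is already sorted
    return items


if __name__ == "__main__":
    print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

The early-exit flag keeps the best case linear on already-sorted input, though the algorithm remains quadratic in the worst case, which is why it is usually taught as an introduction rather than used in practice.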