Saturday, January 11, 2020


Something my Computer Security students must consider. Not just disruption of services but massive disclosure of data. Paying the ransom is no guarantee that this is over.
Maze Ransomware Publishes 14GB of Stolen Southwire Files
The Maze Ransomware operators have released an additional 14GB of files that they claim were stolen from one of their victims for not paying a ransom demand.
In December the Maze Ransomware operators attacked Southwire, a wire and cable manufacturer out of Georgia, and allegedly stole 120GB worth of files before encrypting 878 devices on the network.
Maze then demanded $6 million in bitcoins or they would publicly release Southwire's stolen files.
When Southwire did not make a payment, the Maze operators uploaded some of the company's files to a "News" site that they had created to shame non-paying victims.
This led to Southwire filing a lawsuit against Maze in Georgia courts and asking for an injunction in the courts of Ireland against a web hosting provider who was hosting the Maze news site. This injunction led to the site being taken down and Southwire's stolen data no longer being accessible.
Yesterday, the Maze operators released an additional 14.1GB of stolen files that they claim belong to Southwire on a Russian hacking forum. They further state that they will continue to release 10% of the data every week unless the ransom is paid.




Who do you want elected and by how much?
'Online and vulnerable': Experts find nearly three dozen U.S. voting systems connected to internet
It was an assurance designed to bolster public confidence in the way America votes: Voting machines “are not connected to the internet.”
Then-Acting Undersecretary for Cybersecurity and Communications at the Department of Homeland Security Jeanette Manfra said those words in 2017, testifying before Congress while she was responsible for the security of the nation's voting systems.
So many government officials like Manfra have said the same thing over the last few years that it is commonly accepted as gospel by most Americans. Behind it is the notion that if voting systems are not online, hackers will have a harder time compromising them.
But that is an overstatement, according to a team of 10 independent cybersecurity experts who specialize in voting systems and elections. While the voting machines themselves are not designed to be online, the larger voting systems in many states end up there, putting the voting process at risk.
… “We found over 35 [voting systems] had been left online and we’re still continuing to find more,” Kevin Skoglund, a senior technical advisor at the election security advocacy group National Election Defense Coalition, told NBC News.
The three largest voting machine manufacturers — Election Systems & Software, Dominion Voting Systems and Hart InterCivic — have acknowledged they all put modems in some of their tabulators and scanners. The reason? So that unofficial election results can more quickly be relayed to the public. Those modems connect to cell phone networks, which, in turn, are connected to the internet.




...and changing enterprise architecture.
The Internet of Things Is Changing the World
The Internet of Things has been a long time coming. Ubiquitous or pervasive computing, which is computing happening anytime and anywhere, dates to the 1990s, when devices and wireless networks were nowhere near where they are today.
The transformation brought by connected devices is about to go into overdrive, the Economist says in a recent issue: “One forecast is that by 2035 the world will have a trillion connected computers, built into everything from food packaging to bridges and clothes.”
IoT promises to bring many benefits, including a new generation of smart, connected products. In addition to mechanical and electrical components, these products use digital components such as microprocessors, sensors, data storage, software, and connectivity in a variety of ways.


(Related) Perspective.
About one-in-five Americans use a smart watch or fitness tracker
A fitness tracker can compile a variety of data about the wearer’s activities, depending on the complexity of the device. Users can monitor this data with a corresponding app, where they can manually input additional information about themselves and their lifestyle. As a result, the makers of fitness trackers amass a wealth of data on their users that can be used in many ways. Current privacy policies for many fitness tracking apps allow users’ data to be shared with others. Some researchers are already using data from these apps for health research…




Worth a look.
New Supplemental Materials for INFORMATION PRIVACY LAW Casebooks
I am pleased to announce that Professor Paul Schwartz and I have released new supplemental materials for our INFORMATION PRIVACY LAW casebooks:
(1) edited version of Carpenter v. US




Any indication that this technology is worth the investment?
San Diego’s massive, 7-year experiment with facial recognition technology appears to be a flop
Since 2012, the city’s law enforcement agencies have compiled over 65,000 face scans and tried to match them against a massive mugshot database. But it’s almost completely unclear how effective the initiative was, with one spokesperson saying they’re unaware of a single arrest or prosecution that stemmed from the program.




Part of my Security lectures.
What a Business AI Ethics Code Looks Like
By now, it’s safe to say that artificial intelligence (AI) has established itself in the mainstream, especially in the world of business. From customer service and marketing to fraud detection and automation, this particular technology has helped streamline operations in recent years.
Unfortunately, our dependence on AI also means that it holds so much of our personal information – whether it’s our family history, the things we buy, places we go to, or even our favourite songs. Essentially, we’re giving technology free access to our lives. As AI continues to develop (and ask for even more data), it’s raising a lot of serious concerns.
The AI code of ethics isn’t meant for the AI itself, but for the people who develop and use said technology. Last year, the UK government published a report that aims to inform the public about the ethical use of AI. All in all, the report can be summarised into five principles:
1. AI must be created and used for the benefit of all.
2. AI should not be used to diminish the data rights or privacy of individuals, families, and communities.
3. AI must operate within parameters understood by the human mind.
4. Everybody has the right to be educated on the nuances of AI.
5. Humans must be able to flourish mentally, emotionally, and economically alongside AI.




Probably not...
Samsung's Neon AI has an ethics problem, and it's as old as sci-fi canon
For decades, ethicists, philosophers and science fiction writers have wrestled with what seems increasingly like an inevitability in the evolution of humankind's technological discovery: the creation of a new species of artificial humanity. What better place for such a species' debutante ball than the Las Vegas consumer electronics frenzy, CES 2020? Enter stage right: the eerily realistic interactive CGI avatar, Neon. It's the literal brainchild of Samsung-funded Star Labs' Pranav Mistry, who also serves as CEO of the company he says is building "the first computerized artificial human."
"Neon is like a new kind of life," Mistry said when unveiling the technology this week at CES. "There are millions of species on our planet, and we hope to add one more."


