Do insurers reinsure themselves?
https://www.databreaches.net/cna-financial-paid-40-million-in-ransom-after-march-cyberattack/
CNA Financial Paid $40 Million in Ransom After March Cyberattack
Kartikay Mehrotra and William Turton report:
CNA Financial Corp., among the largest insurance companies in the U.S., paid $40 million in late March to regain control of its network after a ransomware attack, according to people with knowledge of the attack.
The Chicago-based company paid the hackers about two weeks after a trove of company data was stolen, and CNA officials were locked out of their network, according to two people familiar with the attack who asked not to be named because they weren’t authorized to discuss the matter publicly.
Read more on Bloomberg, including the question of whether the payment went to a sanctioned group.
Possible, but let’s not double-think ourselves into a corner. Security AI should recognize this pattern as easily as any other.
‘Data poisoning’ that leverages machine learning may be the next big attack vector
Data poisoning attacks against the machine learning used in security software may be attackers’ next big vector, said Johannes Ullrich, dean of research at the SANS Technology Institute.
Machine learning is based on pattern recognition in a pool of data. Data poisoning is the addition of intentionally misleading data to that pool, so that the trained model begins to misidentify its inputs.
… Ullrich noted that hackers could provide a stream of bad information by, say, flooding a target organization with malware designed to steer ML detection away from the techniques they actually plan to use for the main attack.
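To make the mechanism concrete, here is a minimal sketch of that kind of attack against a toy two-feature classifier. The features, sample counts, and the use of Python with scikit-learn are my own illustrative assumptions, not details from the article.

```python
# A minimal, self-contained sketch of the poisoning pattern Ullrich describes,
# using a toy two-feature "malware detector". Features, sample counts, and the
# use of scikit-learn's LogisticRegression are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training pool: benign samples cluster low, known malware clusters high
# on two invented features (think: entropy score, count of suspicious API calls).
benign = rng.normal(0.2, 0.1, size=(200, 2))
malware = rng.normal(0.8, 0.1, size=(200, 2))
X_clean = np.vstack([benign, malware])
y_clean = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = malicious

# Poisoning step: the attacker floods the defender with samples that resemble
# the technique they actually intend to use (features near 0.7) but that end up
# labeled benign, dragging the learned boundary away from that region.
poison = rng.normal(0.7, 0.05, size=(150, 2))
X_poisoned = np.vstack([X_clean, poison])
y_poisoned = np.concatenate([y_clean, np.zeros(150, dtype=int)])

clean_model = LogisticRegression().fit(X_clean, y_clean)
poisoned_model = LogisticRegression().fit(X_poisoned, y_poisoned)

# The real payload sits right where the poison was injected.
attack_sample = np.array([[0.70, 0.72]])
print("clean model   :", clean_model.predict(attack_sample)[0])
print("poisoned model:", poisoned_model.predict(attack_sample)[0])
```

With this setup the clean model flags the attacker's sample as malicious, while the poisoned model typically waves it through, which is exactly the drift the article warns about.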
Why not for everyone?
https://www.pogowasright.org/colorado-makes-doxxing-public-health-workers-illegal/
Colorado Makes Doxxing Public Health Workers Illegal
Anna Schaverien reports:
Colorado on Tuesday made it illegal to share the personal information of public health workers and their families online so that it can be used for purposes of harassment, responding to an increase in threats to such workers during the pandemic.
Known as doxxing, the practice of sharing a person’s sensitive information, such as a physical or email address or phone number, has long been used against law enforcement personnel, reporters, protesters and women speaking out about sexual abuse.
Read more on The New York Times.
I would think they have enough to determine the “missing” parts.
https://www.protocol.com/policy/social-media-data-act
Lawmakers want to force Big Tech to give researchers more data
Facebook's ad library allows researchers to see the content of ads that run on the platform and information on who those ads reach. But there is one key insight Facebook doesn't offer: information on how those ads were targeted.
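For a sense of what the ad library does expose, here is a rough sketch of a query against the Ad Library API using Python's requests module. The endpoint, parameters, and field names follow the publicly documented API as of roughly 2021 and may differ in current versions; the access token and version string are placeholders. Notably, nothing in the returned fields describes how the ad was targeted, which is the gap the bill is meant to close.

```python
# A rough sketch (not from the article) of pulling ads from Facebook's
# Ad Library API with the requests library. Endpoint, parameters, and field
# names follow the public Ad Library API documentation circa 2021 and may
# have changed; the token and version strings are placeholders.
import requests

ACCESS_TOKEN = "YOUR_AD_LIBRARY_ACCESS_TOKEN"  # placeholder
API_VERSION = "v12.0"                          # placeholder version

response = requests.get(
    f"https://graph.facebook.com/{API_VERSION}/ads_archive",
    params={
        "search_terms": "climate",
        "ad_reached_countries": "US",
        "ad_active_status": "ALL",
        # Researchers can request the ad's content and delivery/reach data...
        "fields": ",".join([
            "page_name",
            "ad_creative_body",
            "ad_delivery_start_time",
            "impressions",
            "demographic_distribution",
            "delivery_by_region",
            "spend",
        ]),
        "limit": 25,
        "access_token": ACCESS_TOKEN,
    },
    timeout=30,
)

for ad in response.json().get("data", []):
    # ...but no returned field describes the advertiser's targeting criteria,
    # which is the missing insight the article points to.
    print(ad.get("page_name"), ad.get("impressions"))
```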
Technology for evil or AI controlling the conversation.
https://www.washingtonpost.com/outlook/2021/05/20/ai-bots-grassroots-astroturf/
‘Grassroots’ bot campaigns are coming. Governments don’t have a plan to stop them.
Artificial intelligence software can easily pass for real public comments
This month, the New York state attorney general issued a report on a scheme by “U.S. Companies and Partisans [to] Hack Democracy.” This wasn’t another attempt by Republicans to make it harder for Black people and urban residents to vote. It was a concerted attack on another core element of U.S. democracy — the ability of citizens to express their voice to their political representatives. And it was carried out by generating millions of fake comments and fake emails purporting to come from real citizens.
In case I hadn’t mentioned this before.
https://www.executivegov.com/2021/05/nist-seeks-public-comments-on-proposed-model-for-ai-user-trust/
NIST Seeks Public Comments on Proposed Model for AI User Trust
The National Institute of Standards and Technology (NIST) has published a draft document outlining a list of nine factors that contribute to an individual’s potential trust in an artificial intelligence platform.
The draft document, titled “Artificial Intelligence and User Trust,” seeks to show how a person might weigh those factors depending on the task at hand and the risk involved in trusting an AI system’s decision; the work contributes to NIST’s efforts to advance the development of trustworthy AI tools, the agency said Wednesday.
https://nvlpubs.nist.gov/nistpubs/ir/2021/NIST.IR.8332-draft.pdf
So many books, so little time.
https://www.bespacific.com/amazon-publishing-dpla-ink-deal-to-lend-e-books-in-libraries/
Amazon Publishing, DPLA Ink Deal to Lend E-books in Libraries
Publishers Weekly: “The Digital Public Library of America (DPLA) today announced that it has signed a much-anticipated agreement with Amazon Publishing to make all of the roughly 10,000 Amazon Publishing e-books and digital audiobooks available to libraries, the first time that digital content from Amazon Publishing will be made available to libraries.
In a release today, DPLA officials said that lending will begin sometime this summer, with Amazon Publishing content to be made available for license via the DPLA Exchange, the DPLA’s not-for-profit, “library-centered” platform, and accessible to readers via the SimplyE app, a free, open-source library e-reader app developed by the New York Public Library and used by DPLA. Library users will not have to go through their Amazon accounts to access Amazon Publishing titles via the DPLA, and DPLA officials confirmed that, as with other publishers DPLA works with, Amazon will not receive any patron data.
The executed, long-awaited deal comes nearly six months after Amazon Publishing and the DPLA confirmed that they were in talks to make Amazon Publishing titles available to libraries for the first time. The deal represents a major step forward for the digital library market. Not only is Amazon Publishing finally making its digital content available to libraries; the deal also gives libraries a range of models through which they can license the content, offering the kind of flexibility librarians have long asked for from the major publishers.”