Thursday, September 05, 2019


Clearly a city with a recovery plan that works. The insurance-inspired detour bothers me.
Ransomware gang demands $5.3 million from New Bedford; city restores from backup instead
The hackers demanded the exorbitant sum of $5.3 million for the decryption keys, but city officials decided not to cave in. Instead, they made a counter-offer of $400,000, which the city’s insurer would have covered – likely at the insurer’s recommendation, since recovering from a ransomware attack the hard way typically ends up costing the same or more. The gang refused, however, and communication between city officials and the ransomware operators broke off.
IT administrators then recovered the lost data from backups. It isn’t immediately clear whether the city had backed up all the data encrypted by Ryuk. [Any gaps would be either data created since the last backup or files identified in backup planning as unnecessary. Bob]


(Related) What systems are required to teach?
School officials: Ransomware prompts school closure in Flagstaff
No specific details about the type of ransomware or any ransom demand, but schools are closed in Flagstaff, AZ today due to a ransomware attack that impacted a number of systems needed for day-to-day operations. Video news clip here: https://www.fox10phoenix.com/video/601919




Not enough detail to say it was a Facebook server or how long it had been unprotected.
Facebook Data On 419 Million Users Found On the Internet
Data on 419 million Facebook users was found online, affecting users from the U.K. to the U.S.
Sanyam Jain, a researcher from the GDI Foundation, discovered a database on a server that wasn't protected, TechCrunch reported. The data included phone numbers, Facebook IDs, user names, gender and the countries users were located in. It's not clear why the data was scraped from the social media network or who was behind it, TechCrunch reported.
In a statement to Engadget, a Facebook spokesperson said the dataset is old [Does that mean the hackers have had it for a long time? Bob] and contains information tied to a feature, removed last year, that let people use phone numbers to find other users. The spokesperson said the dataset has been taken down and there is "no evidence" Facebook accounts were impacted. [and no evidence accounts were not impacted. Bob]




We also need to know how we ‘opted in’ in the first place. Let’s hope they grow quickly!
Here’s a site that you may want to check out: https://simpleoptout.com/
From its home page:
Simple Opt Out is drawing attention to opt-out data sharing and marketing practices that many people aren’t aware of (and most people don’t want), then making it easier to opt out. For example:
  • Target “may share your personal information with other companies which are not part of Target.”
  • Chase may share your “account balances and transaction history … For nonaffiliates to market to you.”
  • Crate & Barrel may share “your customer information [name, postal address and email address, and transactions you conduct on our Website or offline] with other select companies.”
This site makes it easier to opt out of data sharing by 50+ companies (or add a company, or see opt-out tips). Enjoy!




Another list to get off.
US judge: 'Terrorist' watchlist violates constitutional rights
The United States government's watchlist of more than one million people identified as "known or suspected terrorists" violates the constitutional rights of those placed on it, a US federal judge ruled on Wednesday.
The ruling from District Judge Anthony Trenga in Virginia grants summary judgment to nearly two dozen Muslim US citizens who had challenged the watchlist with the help of the civil-rights group the Council on American-Islamic Relations (CAIR).
But the judge is seeking additional legal briefs before deciding what remedy to impose.
Trenga also wrote in his 31-page ruling that the case "presents unsettled issues."
Ultimately, Trenga ruled that the travel difficulties faced by plaintiffs - who say they were handcuffed at border crossings and frequently subjected to invasive secondary searches at airports - are significant, and that they have a right to due process when their constitutional rights are infringed.
He also said the concerns about erroneous placement on the list are legitimate.




The more a ‘super app’ can do, the more we will rely on it and give AI the opportunity to influence us.
Grab will invest US$150 million in AI to build regional super app
South-east Asian ride-hailing start-up Grab Holdings intends to invest US$150 million (S$207.5 million) in artificial intelligence research over the next year, accelerating its expanding business that now includes food delivery, digital payments and digital content.
Grab, in hot competition with local rival Gojek to become South-east Asia's do-it-all super app, outlined for the first time a blueprint for its use and deployment of AI.
At the heart of the company's global effort is an ambition to create an all-in-one "super app" akin to Tencent's WeChat for China. The company's GrabPay service already allows consumers to pick up the tab for rides and order food, and it's expanding into lending and insurance.
The company is also said to be considering applying for a digital banking licence if Singapore allows it.




If you can’t tell, has the bot passed the Turing test?
On the Internet, Nobody Knows You’re a Bot
Brian Friedberg is an investigative ethnographer whose work focuses on the impacts that alternative media, anonymous communities and popular cultures have on political communication and organization. Brian works with Dr. Joan Donovan, who heads one of the world’s leading teams focused on understanding and combating online disinformation and extremism, based at Harvard’s Shorenstein Center on Media, Politics and Public Policy. In this essay, Brian and Joan explore a challenge the Unreal has presented for the study of activism online: the question of whether an online actor is real or synthetic. They also explore what happens when politically motivated humans impersonate vulnerable people or populations online to exploit their voices, positionality and power.
See also the response, “The Dangers of Weaponized Truth,” in which Brandi Collins-Dexter of Color of Change responds to Friedberg & Donovan’s essay “On the Internet, Nobody Knows You’re a Bot.”




Will insurance companies offer AI-powered health assistants to monitor our bodies, adding drugs as needed and, when all else fails, turning us off?
Artificial intelligence in medicine raises legal and ethical concerns
AI in medicine also raises significant legal and ethical challenges, among them concerns about privacy, discrimination, psychological harm and the physician-patient relationship. In a forthcoming article, I argue that policymakers should establish a number of safeguards around AI, much as they did when genetic testing became commonplace.
Data broker industry giants such as LexisNexis and Acxiom are also mining personal data and engaging in AI activities. They could then sell medical predictions to any interested third parties, including marketers, employers, lenders, life insurers and others. Because these businesses are not health care providers or insurers, the HIPAA Privacy Rule does not apply to them. Therefore, they do not have to ask patients for permission to obtain their information and can freely disclose it.




For my geeks.


