Thursday, July 16, 2020


They didn’t hack their targets directly; they hacked the Twitter employees who already had access to those accounts.
Twitter says hacking of high-profile Twitter accounts was a "coordinated social engineering attack"
Some of the world's richest and most influential politicians, celebrities, tech moguls and companies were the subject of a massive Twitter hack on Wednesday. Elon Musk, Joe Biden, Jeff Bezos, Michael Bloomberg, Kim Kardashian West and Bill Gates were among the accounts pushing out tweets asking millions of followers to send money to a Bitcoin address.
Twitter said in a statement that the company detected what they believed to be "a coordinated social engineering attack by people who successfully targeted some of our employees with access to internal systems and tools."
Companies, including Apple and Uber, were apparently hacked as well. Following the incident, all of Apple's tweets appeared to have been deleted.


(Related) This is a bit of an overreaction – isn’t it?
A catastrophe at Twitter
After today it is no longer unthinkable, if it ever truly was, that someone could take over the account of a world leader and attempt to start a nuclear war. (A report on that subject from King’s College London came out just last week.)




Start securing your data…
TrojanNet – a simple yet effective attack on machine learning models
Injecting malicious backdoors into deep neural networks is easier than previously thought, a new study by researchers at Texas A&M University shows.
The threat of trojan attacks against AI systems has also drawn the attention of US government agencies.
“With the rapid commercialization of DNN-based products, trojan attacks would become a severe threat to society,” the Texas A&M researchers write.
Previous research suggested that hiding a trojan in a deep learning system is an arduous, costly, and time-consuming process.
But in their paper, titled ‘An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks,’ the Texas A&M researchers show that all it takes to weaponize a deep learning algorithm is a few tiny patches of pixels and a few seconds’ worth of computation resources.
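To make the idea concrete, here is a minimal, hypothetical sketch (not the paper’s actual TrojanNet code) of how a few pixels can serve as a backdoor trigger: a tiny patch is stamped into the input image, and a small detector fires whenever the patch is present, which is the cue a backdoored model would use to switch to an attacker-chosen label. The patch values, sizes, and function names below are illustrative assumptions.

import numpy as np

# Hypothetical 3x3 trigger pattern; the real attack also relies on similarly tiny patches.
PATCH = np.array([[0, 255, 0],
                  [255, 0, 255],
                  [0, 255, 0]], dtype=np.uint8)

def stamp_trigger(image: np.ndarray, top: int = 0, left: int = 0) -> np.ndarray:
    """Return a copy of a grayscale image with the trigger patch stamped in."""
    out = image.copy()
    h, w = PATCH.shape
    out[top:top + h, left:left + w] = PATCH
    return out

def trigger_present(image: np.ndarray, top: int = 0, left: int = 0) -> bool:
    """Toy stand-in for a trojan module: fires only when the exact patch is found."""
    h, w = PATCH.shape
    return np.array_equal(image[top:top + h, left:left + w], PATCH)

# Example: a clean 28x28 image versus the same image carrying the trigger.
clean = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)
poisoned = stamp_trigger(clean)
print(trigger_present(clean), trigger_present(poisoned))  # False True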




Should work as well as any other predictive policing.
Cities Turn to Software to Predict When Police Will Go Rogue
A startup selling tech to identify ‘bad apples’ shows the promise and challenges of using data to improve policing.




Another perspective.
An Ethics Guide for Tech Gets Rewritten With Workers in Mind
In 2018, Silicon Valley, like Hamlet’s engineer, was hoist with its own petard. Citizens were panicking about data privacy, researchers were sounding alarms about artificial intelligence, and even industry stakeholders rebelled against app addiction. Policymakers, meanwhile, seemed to take a renewed interest in breaking up big tech, as a string of congressional hearings put CEOs in the hot seat over the products they made. Everywhere, techies were grasping for answers to the unintended consequences of their own creations. So the Omidyar Network—a “philanthropic investment firm” created by eBay founder Pierre Omidyar—set out to provide them. Through the firm’s newly minted Tech and Society Solutions Lab, it issued a tool kit called the EthicalOS, to teach tech leaders how to think through the impact of their products ahead of time.
Two years later, some things have changed. But it’s not CEOs who are leading the charge. It’s the workers—engineers, designers, product managers—who have become the loudest voices for reform in the industry. So when it came time for the Omidyar Network to refresh its tool kit, it became clear that a new target audience was needed.
The kit includes a “field guide” for navigating eight risk zones: surveillance, disinformation, exclusion, algorithmic bias, addiction, data control, bad actors, and outsize power.




For anyone keeping score...
These Are the Highest Penalties under GDPR – Including Fines Issued to Private Individuals
PrivacyAffairs, a leading source of data privacy and cybersecurity research, has issued a report tallying fines issued under the 2018 General Data Protection Regulation (GDPR). It also lists the countries where the highest fines were levied, as well as the nations with the most punishable incidents.
According to the research firm, since its rollout in May 2018, the GDPR has claimed 340 ‘victims’ for unlawful data protection practices. The report notes that every single one of the 28 EU nations, including the now-Brexited United Kingdom, has issued at least one penalty under the new data protection legislation.




Two people out of a million (or more).
Amazon, Google, Microsoft sued over photos in facial recognition database
Amazon, Google parent Alphabet and Microsoft used people's photos to train their facial recognition technologies without obtaining the subjects' permission, in violation of an Illinois biometric privacy statute, a trio of federal lawsuits filed Tuesday allege.
The photos in question were part of IBM's Diversity in Faces database, which is designed to advance the study of fairness and accuracy in facial recognition by looking at more than just skin tone, age and gender. The data includes 1 million images of human faces, annotated with tags such as face symmetry, nose length and forehead height.
The two Illinois residents who brought the lawsuits, Steven Vance and Tim Janecyk, say their images were included in that data set without their permission, despite clearly identifying themselves as residents of Illinois. [Can that be accomplished when only the image is used? Bob]




But will all this activity result in a Covid vaccine?
Deep Dive Into Big Pharma AI Productivity: One Study Shaking The Pharmaceutical Industry
On June 15th, an article titled “The upside of being a digital pharma player” was accepted and quietly went online in Drug Discovery Today, a reputable peer-reviewed industry journal.
Upon closer look, it turned out to be not a perspective piece but a comprehensive research study offering a head-to-head comparison of pharmaceutical companies by their AI efforts in research and development.


