Friday, May 10, 2019


Disguising a backdoor as something that’s supposed to help you secure your data.
Cryptanalyzing a Pair of Russian Encryption Algorithms
A pair of Russian-designed cryptographic algorithms -- the Kuznyechik block cipher and the Streebog hash function -- share the same flawed S-box, which is almost certainly an intentional backdoor. It's just not the kind of mistake you make by accident, not in 2014.
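For readers who haven't met the term, an S-box is just a fixed substitution table that the data passes through at each round. A minimal sketch of the mechanism in Python -- the tiny 4-bit table below is an arbitrary example permutation, not the real Kuznyechik/Streebog pi table -- shows why one shared table with a hidden structure would weaken both algorithms at once:

# Toy byte-substitution step. TOY_SBOX is a small example permutation on
# 4-bit values, NOT the actual pi S-box used by Kuznyechik and Streebog.
# Both Russian algorithms run their state through the same fixed table,
# so a hidden structure in that table is inherited by both of them.
TOY_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
            0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def substitute(nibbles):
    """Apply the same fixed lookup table to every 4-bit value in the state."""
    return [TOY_SBOX[n] for n in nibbles]

print(substitute([0x1, 0xA, 0x3, 0xF]))  # -> [5, 15, 11, 2]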




Sounds like something I should try with my students.
Some assembly required: building an interdisciplinary superteam to tackle AI ethics
Harvard Business School Digital Initiative – “What do a communications studies professor, a politics PhD, a technology policy advisor, and a machine learning engineer have in common? They share deep expertise in the ethics and governance of artificial intelligence — and they’re members of the 2019 Assembly program. Hosted by the Berkman Klein Center for Internet & Society and the MIT Media Lab, Assembly brings together a small cohort of technologists, managers, policymakers, and other professionals to confront emerging problems related to the ethics and governance of AI.
AI technologies are increasingly embedded in our lives at home and work — powering our virtual assistants, moderating content on social networking platforms, and helping companies hire new employees. Yet, as AI technologies become more ubiquitous, applying them can raise serious ethical concerns. AI systems are trained using data from the past to make decisions or predictions about the future. This can pose serious risks as societal biases embedded in data get baked into new technical systems. Biased algorithmic outputs are opaque; sometimes even a system’s programmers aren’t sure how a prediction was made. In a world plagued by systemic bias, how do we create AI systems that reduce inequality, rather than perpetuate it? What frameworks can companies use to determine if the application of a machine learning system is unethical? How do we bring communities impacted by AI systems into conversations about AI design and use?..”
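The "biases embedded in data get baked into new technical systems" point is easy to demonstrate. Here is a minimal sketch with an invented dataset (the group/score/hired variables are hypothetical, and scikit-learn is just a convenient tool here): a model fit to historically skewed decisions reproduces the skew even when qualifications are identical.

# Minimal sketch of bias baked into training data, using invented data.
# Past decisions (the labels) penalized group B directly, so a model fit
# on those labels learns to do the same, even at equal qualification.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B (hypothetical)
score = rng.normal(0, 1, n)          # a legitimate qualification signal
hired = (score - 1.5 * group + rng.normal(0, 0.5, n)) > 0   # biased history

model = LogisticRegression().fit(np.column_stack([score, group]), hired)

# Same score, different group -> different predicted chance of being hired.
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])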


(Related)
The AI Boom: Why Trust Will Play a Critical Role
Artificial Intelligence is on the cusp of becoming the biggest technology of the information age, says Horacio Rozanski, president and CEO of Booz Allen Hamilton. However, we need to bake human judgement into it before it is too late, he writes in this opinion piece.


(Related) A useful comparison of ethical guidelines.
The Ethics of AI Ethics -- An Evaluation of Guidelines




Depressing news for my Privacy Lawyer friends?
The U.S. and Europe Are Approaching GDPR and Data Privacy Much Differently
Well, GDPR is not scaring anyone. In fact, it’s a lawyer’s dream come true. It’s becoming quite clear Europe and the U.S. are attacking GDPR compliance problems from different angles. In Europe, the compliance budget covers lawyering up, whereas on the other side of the pond the Americans are using their compliance budgets to solve the problems with automated solutions. That is the opposite of what we’d expect, given the litigious nature of the U.S. It seems the worm has turned.


(Related)
GDPR – The Work Ahead
… The effect of the GDPR has been noticeable, but in a subtle sort of way. However, it would be a huge mistake to think that the GDPR was just a fad or a failed attempt at helping privacy and data protection survive the 21st century. The true effect of the GDPR has yet to be felt, as the work to overcome its regulatory challenges has barely begun. So what are the important areas of focus to achieve GDPR compliance?
An essential ‘GDPR To Do’ list for the months ahead looks as follows:




Background. This is well done.
Machine learning algorithms explained
Recall that machine learning is a class of methods for automatically creating predictive models from data. Machine learning algorithms are the engines of machine learning, meaning it is the algorithms that turn a data set into a model. Which kind of algorithm works best (supervised, unsupervised, classification, regression, etc.) depends on the kind of problem you’re solving, the computing resources available, and the nature of the data.
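A quick sketch of that “algorithm turns a data set into a model” idea, using scikit-learn and toy data (my example, not the article’s): the same fit-then-predict workflow, once with a classifier for discrete labels and once with a regressor for a continuous target.

# Supervised classification: learn to predict a discrete label (iris species).
from sklearn.datasets import load_iris, make_regression
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression

X_cls, y_cls = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3).fit(X_cls, y_cls)   # algorithm -> model
print("classification accuracy (training data):", clf.score(X_cls, y_cls))

# Supervised regression: learn to predict a continuous value.
X_reg, y_reg = make_regression(n_samples=200, n_features=3, noise=10.0,
                               random_state=0)
reg = LinearRegression().fit(X_reg, y_reg)
print("regression R^2 (training data):", reg.score(X_reg, y_reg))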




Perspective.
Facebook is not a monopoly, and breaking it up would defy logic and set a bad precedent
Facebook co-founder Chris Hughes laid out his arguments for breaking up the company in a lengthy op-ed for The New York Times on Thursday.
The essence of his argument seems to be that a single person, Mark Zuckerberg, has too much control over the communications platforms, including Facebook, Instagram and WhatsApp, that billions of people use. Therefore, the government should force Facebook to divest its other communications platforms and create a new agency to regulate tech companies, particularly around privacy.
The break-up argument is compelling if you're predisposed to dislike Zuckerberg and Facebook after the last few years of blunders related to user data and misinformation, and Facebook's often tone-deaf or seemingly indifferent responses to these incidents.
It's also illogical, difficult and a waste of time.
Facebook is not a monopoly in its actual market — advertising — and the product it offers is not essential to the U.S. economy or society. Even worse, it's not clear that breaking Facebook up would solve the biggest problems with the platform, such as misinformation and data collection. Those problems would be better solved through targeted, strictly enforced regulation.


