Sunday, June 14, 2020


Once upon a time, I had my students build a Computer Security wiki. This one inspires me to do it again!
Jason Cronk has announced a new Privacy Wiki that you’ll want to bookmark. From Jason’s announcement:
And now for something different: a return to #privacy. I’d like to soft-announce the Privacy Wiki, a wiki dedicated to privacy laws and events. https://lnkd.in/dqiVAC4 Currently the wiki has 78 US Federal laws, over 200 US State laws and over 100 articles on privacy events. It is organized around the Solove taxonomy and other aspects of #privacybydesign. While most state laws have been entered, we’re looking for state legal editors to update them and keep them current. If you’re interested please contact our Chief Legal Editor Ece Gumusel, LL.M. on LinkedIn or through https://lnkd.in/d6q5d2Q. Currently we have NO state legal editor volunteers.




Confusing Congress?
Big tech to Congress: Your move on facial recognition
While the announcements are at least partially symbolic — Microsoft says it already wasn't selling those tools to police departments — the calls for congressional action mark an increasingly offensive posture for an industry that has faced heat from its own employees for dealing out tools that critics say facilitate mass surveillance and racial profiling by cops.
Other critics have noted the announcements don't encompass the full range of cutting-edge tools that the companies supply to law enforcement agencies, including home surveillance systems that lawmakers have sounded the alarm on. The moves could also create an opening for lesser-known tech companies that remain big sellers of facial recognition services to police agencies. They include Clearview AI, a U.S. company that allows hundreds of law enforcement agencies to search for individuals from a database of billions of photos scraped from online sources.
"I would love to be a fly on the wall at the board meeting where Amazon's government affairs team explains why it spent millions of dollars lobbying against a moratorium that it then decided to impose on itself," said the aide.




Rethink! If we can’t trust them, should we think of them as AP (artificial politicians)?
In AI We Trust: Ethics, Artificial Intelligence, and Reliability
One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-level Expert Group on AI (HLEG) have adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in human relationships, so proposing that AI should be trusted is a very serious claim. This paper will show that AI cannot be something that has the capacity to be trusted according to the most prevalent definitions of trust, because it does not possess emotive states, nor can it be held responsible for its actions—requirements of the affective and normative accounts of trust. While AI meets all of the requirements of the rational account of trust, it will be shown that this is not actually a type of trust at all, but is instead a form of reliance. Ultimately, even complex machines such as AI should not be viewed as trustworthy, as this undermines the value of interpersonal trust, anthropomorphises AI, and diverts responsibility from those developing and using them.




An interesting question.
These engineers are training AI to forge handwriting – why?
Researchers in Germany are developing a new artificial intelligence that can precisely imitate any kind of handwriting you feed it.
Rather than being intended for AI-enhanced forgeries, [Sure. Bob] the technology is initially intended to help people who can no longer write because of injury or some other impairment.
It's not the first time developers have tried to digitise human handwriting, and fonts available online already allow you to imitate the writing of people like Donald Trump and Greta Thunberg.
However, the system promises a far more sophisticated level of imitation, and rather than simply imitating individual letters, the new method works on the basis of generating entire lines of text, just as a human might.