Wednesday, June 27, 2018
So, why didn’t they do this years ago?
Twitter fights spam by requiring new users to confirm their email or phone number
Twitter has traditionally had laxer sign-up requirements than its oft-cited competitor Facebook. Users haven’t had to give their full name on Twitter, for example, which has made it easier for users to protect their identity but has also made it easy for spam accounts to take over the platform.
… In a blog post, Twitter’s Yoel Roth and Del Harvey said that new users will now have to confirm either an email address or phone number when they sign up for the platform. This change will be rolled out later this year, and the company says that its two-year-old Trust and Safety Council will also be working with NGOs to “ensure this change does not hurt someone in a high-risk environment where anonymity is important.” The company will also start “auditing existing accounts for signs of automated sign up.”
Before you try to be cool, be careful!
Education Scotland order hard reset on school social networking app following major security breach
An email distributed to headteachers and seen by The Courier has revealed how management called for all log-ins to be scrapped after it emerged children had been encouraged in schools to share credentials with their parents.
It means unauthorised users could have gained access to applications such as Yammer, a social networking tool that allows every schoolchild in Scotland – and by extension anyone with access to their log-in details – to privately send messages to one another.
The service, which is hosted on the Glow learning platform, also allows users, regardless of whether they go to the same school, to view each other’s full name, school, interests, friends and email address.
Access to the app was locked down temporarily after an investigation by The Courier revealed how the Scottish Government’s own impact report had concluded it was vulnerable to individuals looking to find children and “do them harm”.
Education Secretary John Swinney claimed last week the service was “closed to the general public” and had only ever been accessible to pupils and educators.
However, it has now emerged that was not the case.
In some instances, children as young as five years old were sent home with strips of paper containing log-in details to give to their parents.
… “As Education Scotland does not hold the contact details of parents, [Really? Bob] informing them of the decision to reset passwords for students using Glow has to be an action for the relevant local authority.”
Is there also a targeting bias? Is facial recognition used mostly on darker-skinned peoples? Is anyone keeping track?
Microsoft’s facial recognition can better identify people with darker skin tones
Microsoft says its facial recognition tools are getting better at identifying people with darker skin tones than before, according to a company blog post today. The error rates have been reduced by as much as 20 times for men and women with darker skin and by nine times for all women.
The company says it’s been training its AI tools with larger and more diverse datasets, which has led to the progress. “If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases,” said Hanna Wallach, a Microsoft senior researcher, in the blog post.
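Disparities like the ones Microsoft reports are typically measured by breaking a test set out by subgroup and comparing error rates. A minimal sketch with made-up data (not Microsoft’s evaluation or metrics):

```python
from collections import defaultdict

def error_rates_by_group(predictions, labels, groups):
    """Per-subgroup error rate: the fraction of examples the model got wrong."""
    wrong = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred != label:
            wrong[group] += 1
    return {g: wrong[g] / total[g] for g in total}

# Toy example: a classifier that errs far more often on one subgroup.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
labels = [1, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]
rates = error_rates_by_group(preds, labels, groups)
# rates["A"] == 0.0 while rates["B"] == 0.6 — the kind of gap the post describes
```

Wallach’s point follows directly: if group B is underrepresented or mislabeled in the training data, the model’s errors concentrate there, and only a grouped evaluation like this will surface it.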
Paul Ohm is always interesting.
The Broad Reach of Carpenter v. United States
Carpenter v. United States is an inflection point in the history of the Fourth Amendment. From now on, we’ll be talking about what the Fourth Amendment means in pre-Carpenter and post-Carpenter terms. It will be seen as being as important as Olmstead and Katz in the overall arc of technological privacy.
I hope to develop the argument that Carpenter is as important as Katz across two or three articles, but let me begin with my overall big picture: The holding and reasoning of Carpenter is breathtakingly broad and will be applied far beyond the facts of this case. (For a detailed overview of the facts and five opinions of this case, see this blog post written by my star student and recent Georgetown graduate, Sabrina McCubbin.)
(Related) Revolt isn’t the right word. Reaction to overreach?
Digital Searches, the Fourth Amendment, and the Magistrates’ Revolt
Berman, Emily, Digital Searches, the Fourth Amendment, and the Magistrates’ Revolt (May 30, 2018). Emory Law Journal, Forthcoming. Available at SSRN: https://ssrn.com/abstract=3187612
“Searches of electronically stored information present a Fourth Amendment challenge. It is often impossible for investigators to identify and collect, at the time a warrant is executed, only the specific data whose seizure is authorized. Instead, the government must seize the entire storage medium—e.g., a hard drive or a cell phone—and extract responsive information later. But investigators conducting that subsequent search inevitably will encounter vast amounts of non-responsive (and often intensely personal) information contained on the device. The challenge thus becomes how to balance the resulting privacy concerns with law enforcement’s legitimate need to investigate crime. Some magistrate judges have begun including in their warrants for digital searches limits on how those searches may be carried out—a development that some have referred to as a “magistrates’ revolt,” and which has both supporters and detractors. This Article argues that the magistrates’ “revolt” was actually no revolt at all. Instead, these judges simply adopted a time-honored tool—minimization—that is used to address a conceptually analogous privacy threat posed by foreign intelligence collection. I further argue that embracing both the practice and the label of “minimization” will yield at least two benefits: First, it will recast magistrates’ actions as a new instantiation of a legitimate judicial role, rather than a novel, potentially illegitimate practice. Second, it will allow magistrates to draw on lessons learned from the Foreign Intelligence Surveillance Court’s creative use of minimization to safeguard Fourth Amendment rights in the intelligence-collection context.”
Is this a real thing?
You can say no to Duke Energy's wireless meter. But you'll need a doctor's note.
North Carolina will start offering an unusual escape clause for the thousands of residents who complain that Duke Energy's two-way communication utility meters give them headaches, ear-ringing and a case of the "brain fog."
Residents who say they suffer from acute sensitivity to radio-frequency waves can say no to Duke's smart meters — as long as they have a notarized doctor's note to attest to their rare condition.
The N.C. Utilities Commission, which sets utility rates and rules, created the new standard on Friday, possibly making North Carolina the first state to limit the smart meter technology revolution by means of a medical opinion. It took the Utilities Commission two years to resolve the dispute — longer than it takes to review a complicated rate increase or to issue a permit to build a coal-burning power plant — after considering the warnings and denials of conflicting studies and feuding experts.
Charlotte-based Duke had proposed charging customers extra if they refused a smart meter. Duke wanted to charge an initial fee of $150 plus $11.75 a month to cover the expense of sending someone out to that customer's house to take a monthly meter reading. But the Utilities Commission opted to give the benefit of the doubt to customers with smart meter health issues until the Federal Communications Commission determines the health risks of the devices.
… "More than a dozen individuals, including a physician, stated that they have personally experienced debilitating health impacts from the cumulative impact of RF emissions," the Utilities Commission said in its ruling. "A few went so far as to assert that RF emissions from smart meters contribute to violence and homicides."
The commission received a statement from the director of the Institute for Health and the Environment at the University at Albany in New York, co-signed by four other scientists and doctors. The letter said the greatest risk of radio frequency wave exposure is cancer, but symptoms include memory loss and fatigue.
What part of this headline will get all the attention? Here’s a hint.
Bill Gates hails 'huge milestone' for AI as bots work in a team to destroy humans at video game 'Dota 2'
Microsoft founder Bill Gates has hailed what he sees as a turning point in the development of AI.
OpenAI, a company cofounded by Elon Musk, has created a team of five neural networks, called OpenAI Five, capable of playing the online multiplayer game "Dota 2."
Not only can the bots play as a team, they actually destroyed humans at the game during a number of battles.
(Related) And less inflammatory?
Artificial Intelligence: Emerging Opportunities, Challenges, and Implications for Policy and Research
Artificial Intelligence: Emerging Opportunities, Challenges, and Implications for Policy and Research; GAO-18-644T: Published: Jun 26, 2018. Publicly Released: Jun 26, 2018. “Artificial intelligence (AI) could improve human life and economic competitiveness—but it also poses new risks. The Comptroller General convened a Forum on AI to consider the policy and research implications of AI’s use in four areas with the potential to significantly affect daily life.
“Based on our March 2018 technology assessment, we testified that AI will have far-reaching effects on society—even if AI capabilities stop advancing today. We also testified about prospects for AI in the near future and areas where changes in policy and research may be needed.”
Not intrusive enough?
Facebook abandons its plans to build giant drones and lays off 16 employees
Facebook is scrapping its efforts to build passenger-jet-sized drones that provide wireless internet to the developing world, and laying off staff, it announced on Tuesday, a major retreat from what had been an ambitious and high-profile initiative at the company.
… "We've decided now is the right moment to focus on the next set of engineering and regulatory challenges for HAPS connectivity," Facebook's Yael Maguire wrote in a blog post. "This means we will no longer design and build our own aircraft, and, as a result, we've closed our facility in Bridgewater."
How Social Networks Set the Limits of What We Can Say Online
Content moderation is hard. This should be obvious, but it is easily forgotten. It is resource intensive and relentless; it requires making difficult and often untenable distinctions; it is wholly unclear what the standards should be, especially on a global scale; and one failure can incur enough public outrage to overshadow a million quiet successes. We as a society are partly responsible for having put platforms in this situation. We sometimes decry the intrusions of moderators, and sometimes decry their absence.
Even so, we have handed to private companies the power to set and enforce the boundaries of acceptable public speech. That is an enormous cultural power to be held by so few, and it is largely wielded behind closed doors, making it difficult for outsiders to inspect or challenge. Platforms frequently, and conspicuously, fail to live up to our expectations. In fact, given the enormity of the undertaking, most platforms' own definition of success includes failing users regularly.
Useful in several classes, I think.
Paper – Text as Data
Text as Data – Matthew Gentzkow, Stanford; Bryan T. Kelly, Yale and AQR Capital Management; Matt Taddy, Chicago Booth: “An ever increasing share of human interaction, communication, and culture is recorded as digital text. We provide an introduction to the use of text as an input to economic research. We discuss the features that make text different from other forms of data, offer a practical overview of relevant statistical methods, and survey a variety of applications.”
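The paper's starting point, representing text numerically so it can feed statistical models, can be illustrated with a simple bag-of-words count matrix. A sketch of the idea (not the authors' code):

```python
from collections import Counter

def term_count_matrix(docs):
    """Map each document to a vector of term counts over a shared vocabulary."""
    tokenized = [doc.lower().split() for doc in docs]
    vocab = sorted({tok for toks in tokenized for tok in toks})
    matrix = []
    for toks in tokenized:
        counts = Counter(toks)
        # One row per document, one column per vocabulary term.
        matrix.append([counts.get(term, 0) for term in vocab])
    return vocab, matrix

vocab, X = term_count_matrix(["rates rise again", "rates fall"])
# vocab == ['again', 'fall', 'rates', 'rise']
# X == [[1, 0, 1, 1], [0, 1, 1, 0]]
```

What makes text different from other data, as the survey discusses, shows up immediately here: the feature space (the vocabulary) is huge and sparse, and most of the statistical machinery the paper reviews is about taming that dimensionality.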
For my geeks.