Saturday, August 17, 2019
Doesn’t this suggest it is not a lone hacker?
Ransomware Attack Hits Local Governments In Texas
A coordinated ransomware attack has affected at least 20 local government entities in Texas, the Texas Department of Information Resources (DIR) said. It would not release information about which local governments have been affected.
The department said the Texas Division of Emergency Management is coordinating support from other state agencies through the Texas State Operations Center at DPS headquarters in Austin.
DIR said the Texas Military Department and the Texas A&M University System's Cyber-Response and Security Operations Center teams are deploying resources to "the most critically impacted jurisdictions."
I guess they don’t trust users to leave the scooters someplace safe?
China's Ninebot unveils scooters that drive themselves to charging stations
Segway-Ninebot Group, a Beijing-based electric scooter maker, on Friday unveiled a scooter that can return itself to charging stations without a driver, a potential boon for the burgeoning scooter-sharing industry.
Friday, August 16, 2019
Is it time for a politician who understands technology?
Cybersecurity Has Become a Political Issue for Americans, Survey Shows
Americans have a pragmatic view towards cybersecurity. For example, while 86% believe that paying ransoms merely encourages more attacks, 70% accept that when organizations do pay, it is because they had no choice. But politicians should consider: 87% believe that cybersecurity should be a top priority for government, but only 51% believe it is currently doing a good job.
With the 2020 elections approaching, threat intelligence firm Anomali commissioned Harris to survey American attitudes towards cybersecurity and government. The survey of more than 2,000 citizens aged 18 or more focuses on the single issue that is most understood and most experienced: ransomware.
More than one in five Americans have experienced a ransomware attack. Young adults (aged 18-34) are more likely to have been attacked than older Americans, and men more likely than women (27% vs. 16%). No reason is suggested, but it could be that women are generally more cautious with attachments while young men are more adventurous on the internet.
‘cause my students will have to do it. (An excuse for a discussion.)
Common Problems and Limitations Of Cyber Security Awareness Training
Cyber security has never been a bigger problem than it is right now, in the modern era of business. Banks are more likely to suffer phishing or ransomware attacks than to be robbed conventionally, and many employees won't even know what those two phrases mean. In an age of unlimited access to information, a worrying number of staff members at companies and businesses across the world are woefully unequipped to deal with the hackers and cyber attackers who target their workplaces every day. People are usually the 'weakest links' in these attacks, since most threats gain access to companies' networks through scams that employees have fallen for. But why isn't cyber security awareness training more common – and more effective?
So now I can connect Alexa to my ‘smart’ refrigerator and when I open the door, have it remind me that I’m breaking my diet. Again.
Since launching Alexa more than four years ago, customers have purchased more than 100 million Alexa-enabled devices, allowing them to interact with products in new and engaging ways. Today, we are excited to introduce new developer tools that enable you to connect gadgets, games, and smart toy products with immersive skill-based content—unlocking creative ways for customers to experience your product. This is made possible using Custom Interfaces, the newest feature available in the Alexa Gadgets Toolkit.
More a way to identify primitive AI.
The Skeptic's Guide To Assessing Artificial Intelligence
… all AI is machine learning, but not all machine learning is AI. To help mitigate this skepticism, I have outlined how businesses can distinguish between simple machine learning and actual AI capabilities, as well as how to vet providers for true AI.
… A good place to start is by focusing on these six key questions:
1. How does your product improve over time?
2. What decisions can technology make and adapt to over time?
3. What’s the feedback loop for the AI engine to learn?
4. Does it need human feedback?
5. What will my company be able to do with this AI engine?
6. How is this product going to help my human workforce make better, more informed decisions?
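The first and third questions both come down to a feedback loop: does the system actually change when it receives new signals? As a toy illustration (everything below is invented for this sketch, not from the article), here is a classifier whose weights improve as humans correct it:

```python
from collections import defaultdict

class FeedbackClassifier:
    """Toy spam scorer that improves as humans correct it (questions 1, 3, 4)."""
    def __init__(self):
        self.weights = defaultdict(float)  # word -> spam weight

    def predict(self, text):
        score = sum(self.weights[w] for w in text.lower().split())
        return "spam" if score > 0 else "ham"

    def learn(self, text, label):
        # The feedback loop: each human correction nudges the weights.
        delta = 1.0 if label == "spam" else -1.0
        for w in text.lower().split():
            self.weights[w] += delta

clf = FeedbackClassifier()
clf.learn("win free prize now", "spam")
clf.learn("meeting agenda attached", "ham")
print(clf.predict("free prize inside"))  # the model has adapted: "spam"
```

A product that cannot point to a concrete mechanism like this, however much more sophisticated, is the "simple machine learning" the author warns about.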
Interesting video interview. Youtube is the search engine of choice for kids? Transcript is available.
Danah Boyd on the Spread of Conspiracies and Hate Online
– “Danah Boyd is a senior researcher at Microsoft and founder of the research institute “Data & Society,” where she studies how media manipulation around mass shootings and other crisis events amplifies the spread of false information…”
Thursday, August 15, 2019
A ‘loss of Internet’ horror story. Could it happen here?
India Shut Down Kashmir’s Internet Access. Now, ‘We Cannot Do Anything.’
Pharmacists can’t restock medicines; workers aren’t being paid. But the government still loves to block the internet for “peace and tranquillity.”
… Shopkeepers said that vital supplies like insulin and baby food, which they typically ordered online, were running out. Cash was scarce, as metal shutters covered the doors and windows of banks and A.T.M.s, which relied on the internet for every transaction. Doctors said they could not communicate with their patients.
… While Prime Minister Narendra Modi has promoted the rapid adoption of the internet, particularly on smartphones, to modernize India and bring it out of poverty, the country is also the world leader in shutting down the internet.
The country has increasingly deployed communications and internet stoppages to suppress potential protests, prevent rumors from spreading on WhatsApp, conduct elections and even stop students from cheating on exams. Last year, India blocked the internet 134 times, compared with 12 shutdowns in Pakistan, the No. 2 country, according to Access Now, a global digital rights group, which said its data understates the number of occurrences.
… “Kashmir has become invisible even to itself,” said Gurshabad Grover, a senior policy officer at the Center for Internet and Society in Bangalore, writing in The Indian Express. The center has published research on internet shutdowns across India.
The United Nations has repeatedly condemned internet shutdowns as a violation of human rights.
But that has not deterred India from routinely using the tool. Under India’s laws, authorities at even the local level can easily shut down internet access in the name of ensuring “peace and tranquillity.”
(Related) Which raises the question…
How Sustainable are Russia’s Plans to Fully Control Internet?
The Russian government has been planning to tighten its control over the Web for quite some time, and now it has all but done so. Passed by the Duma this spring, the so-called “Independent Internet Law” takes effect on November 1. Though its pretext appears noble at first glance — protection against cyberattacks and foreign pressure — its real purpose is quite clear to both parties involved: the government and the Russian people.
The hacks should be interesting.
Alexa, time for class: How one university put an Echo Dot in every dorm room
… Saint Louis University -- the oldest university west of the Mississippi, in fact -- is the first to put smart speakers in dorm rooms.
… Each dorm room comes equipped with an SLU-emblazoned, second-gen Echo Dot and instructions on how to use it, what students can ask and what to do if there are technical issues.
The network of 2,300 Echo Dots is powered by Amazon's Alexa for Business platform. A private SLU skill built through Amazon Web Services is enabled on each Echo Dot. That skill can answer more than 135 questions about campus events, building hours, even nearby food options.
Students can stream music, podcasts and live radio through iHeartRadio and call any phone number, including contacts in SLU's directory of student services.
… there's no personally identifiable information recorded, stored or handed over to the SLU team. Each Echo Dot is labeled with a sticker containing the dorm room number and a MAC address, but there's no data gathered on which room is asking which questions.
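As a sketch of how a campus Q&A skill like SLU's might be wired up, here is a minimal Lambda-style handler. The intent names, answers, and request shape are simplified assumptions for illustration, not SLU's or Amazon's actual implementation:

```python
# Invented campus FAQ; a real skill would map far more intents.
CAMPUS_FAQ = {
    "LibraryHoursIntent": "The library is open 7am to midnight.",
    "DiningIntent": "The nearest dining hall opens at 7am.",
}

def handle_request(event):
    """Minimal Alexa-style handler: look up the intent, return speech."""
    intent = event["request"]["intent"]["name"]
    answer = CAMPUS_FAQ.get(intent, "Sorry, I don't know that one yet.")
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": answer},
            "shouldEndSession": True,
        },
    }

event = {"request": {"intent": {"name": "LibraryHoursIntent"}}}
print(handle_request(event)["response"]["outputSpeech"]["text"])
```

The interesting engineering is in the intent model and the campus data feed, not the routing; note that nothing in a handler like this needs to know which room asked.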
Interesting idea, but this will only work if you opt out of almost everything.
How Data Privacy Laws Can Fight Fake News
… Data privacy laws like the European General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are not intended to address harmful speech. Their main goal is giving users greater control over their personal data, allowing people to check what data has been stored, opt out of data sharing, or erase their data entirely. Personal data generally includes information directly or indirectly linking accounts to real-life individuals, like demographic characteristics, political beliefs, or biometric data.
By limiting access to the information that enables personalized ad targeting and polarization loops, data privacy laws can render disinformation a weapon without a target. Absent the detailed data on users’ political beliefs, age, location, and gender that currently guide ads and suggested content, disinformation has a higher chance of being lost in the noise.
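The mechanics are easy to sketch: micro-targeting only works when the profile attributes it filters on exist. In this toy example (invented data, not any platform's real targeting API), a user who has opted out under a GDPR/CCPA-style right simply never matches:

```python
# Invented user store: user 2 exercised a data-erasure opt-out.
users = [
    {"id": 1, "age": 62, "politics": "conservative", "state": "FL"},
    {"id": 2},  # profile attributes erased
]

def target(audience, criteria):
    """Return ids of users matching every targeting criterion."""
    return [u["id"] for u in audience
            if all(u.get(k) == v for k, v in criteria.items())]

# With no stored politics or state, the opted-out user never matches.
print(target(users, {"politics": "conservative", "state": "FL"}))  # [1]
```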
Worth a half-hour of your time?
Governance in the Age of AI
Artificial intelligence is a powerful technology with capabilities that are open to use by state and non-state actors. In this conversation Azeem Azhar, De Kai, and Joanna Bryson discuss how governance should adapt as our institutions are challenged by unintended consequences of the technology and its creators.
Joanna, De Kai, and Azeem also discuss:
- Why rule-based systems fall short of protecting us against the unintended consequences of technology.
- The value of cross-cultural dialogue in establishing common values to guide the governance of AI globally.
- The role of the leading technology companies in regulating the industry.
Some interesting stuff.
High-Profile Jitters Over AI
Also, Kroger to build automated warehouses and why weather forecasting will never be perfect.
Editor’s note: Elsewhere is a column that highlights ideas from other media platforms we believe are worth your attention.
How clever of them. Perhaps they should attend the Privacy Foundation’s November 1 seminar on “Artificial Intelligence and Ethics”
ABA Votes to Urge Legal Profession to Address Emerging Legal and Ethical Issues of AI
“The American Bar Association’s House of Delegates, its policy-making body, voted this week to approve a resolution urging courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence in the practice of law. Among the AI-related issues the profession should address, the ABA said, are bias, explainability, and transparency of automated decisions made by AI; ethical and beneficial usage of AI; and controls and oversight of AI and the vendors that provide AI…”
Wednesday, August 14, 2019
A list for the toolkit.
The Best Web Browsers for Privacy and Security
“Your web browser knows a lot about you, and tells the sites you visit a lot about you as well—if you let it. We’ve talked about this before, but in this guide, we’re going to focus on the browsers that you’ll want to use to better conceal everything you’re up to from all the advertisers that want to track your digital life…”
Worth a try?
Time for a Cyber-Attack Exception to the Foreign Sovereign Immunities Act
Recently, a federal judge in New York dismissed the Democratic National Committee’s (DNC) civil lawsuit against Russia, Wikileaks, and others stemming from the 2016 cyber-attack on the DNC. While much of the media attention has focused on the judge’s decision that, under the First Amendment, Wikileaks and other “second-level participants” could not be held liable for publishing documents stolen from the DNC, there has been scant attention paid to how and why the Russian government—the “primary wrongdoer,” according to the Judge—was found not legally liable for the cyberattack.
The decision should concern all Americans who care about protecting our nation from state-sponsored cyber-attacks. While the United States government has certain tools to punish state-sponsored cyberattacks against American targets—including sanctions, diplomatic action and sometimes criminal indictment—these options cannot force a foreign state to pay compensation for the damage caused by cyberattacks. Only civil liability for sovereign cyberattacks can impose monetary costs for such attacks.
In finding Russia not liable, the court relied on the Foreign Sovereign Immunities Act of 1976 (“FSIA”)—a statute enacted decades before the internet existed in its current form. The FSIA provides foreign states immunity from civil suits in U.S. courts in all but a few circumstances. It’s clear that we need to update and amend the FSIA to reflect the modern reality of state-sponsored hacking by adding a cyber-attack exception to sovereign immunity.
It’s what they don’t say that gets them in trouble.
Facebook’s human-AI blend for audio transcription is now facing privacy scrutiny in Europe
A page on Facebook’s site also includes a “note” saying “Voice to Text uses machine learning” — but does not say the feature is also powered by people working for Facebook listening in.
A spokesperson for Irish Data Protection Commission told us: “Further to our ongoing engagement with Apple and Microsoft in relation to the processing of personal data in the context of the manual transcription of audio recordings, we are now seeking detailed information from Facebook on the processing in question and how Facebook believes that such processing of data is compliant with their GDPR obligations.”
Unethical lawyers? Inconceivable!
Ethical 'Fails': Social Media Pitfalls and In-House Counsel
… perhaps the most important question in an era in which attorneys are just one viral post or inflammatory tweet away from the unemployment line or even the disciplinary board, is whether in-house lawyers are using social media ethically and responsibly. As with their counterparts in private firms and government entities, there is no shortage of “cautionary tales for the Digital Age” originating from corporate legal departments.
Tuesday, August 13, 2019
Kleptocracy only applies if you steal from your own people. North Korea is just organized crime?
UN Probing 35 North Korean Cyberattacks in 17 Countries
U.N. experts say they are investigating at least 35 instances in 17 countries of North Koreans using cyberattacks to illegally raise money for weapons of mass destruction programs — and they are calling for sanctions against ships providing gasoline and diesel to the country.
Last week, The Associated Press quoted a summary of a report from the experts which said that North Korea illegally acquired as much as $2 billion from its increasingly sophisticated cyber activities against financial institutions and cryptocurrency exchanges.
… The report cites three main ways that North Korean cyber hackers operate:
—Attacks through the Society for Worldwide Interbank Financial Telecommunication or SWIFT system used to transfer money between banks, “with bank employee computers and infrastructure accessed to send fraudulent messages and destroy evidence.”
—Theft of cryptocurrency “through attacks on both exchanges and users.”
—And “mining of cryptocurrency as a source of funds for a professional branch of the military.”
Data managers should periodically review who can access their data. Managers should periodically review what data their employees can access.
Almost half of employees have access to more data than they need
“A new study of over 700 full-time US employees reveals that 48 percent of employees have access to more company data than they need to perform their jobs, while 12 percent of employees say they have access to all company data. The survey, by a business app marketplace, also asked employees what classifications of data protection are in place at their company. No more than a third of businesses were found to use any one individual data classification. The lowest in use are Proprietary (15 percent) and Highly Confidential (18 percent). The most commonly used are Confidential — 33 percent of businesses use this classification, Internal — 30 percent, Public — 29 percent and Restricted/Sensitive — 25 percent…”
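The kind of review suggested above can be automated in a few lines. This sketch (role names and datasets are hypothetical) flags accounts whose grants exceed what their role requires:

```python
# Hypothetical role-to-dataset needs and actual grants.
ROLE_NEEDS = {
    "analyst": {"sales_db"},
    "hr": {"payroll_db", "employee_db"},
}
GRANTS = {
    "alice": ("analyst", {"sales_db", "payroll_db"}),  # too much
    "bob": ("hr", {"payroll_db"}),                     # within role
}

def over_provisioned(grants, needs):
    """Flag each user's datasets that exceed what their role requires."""
    flags = {}
    for user, (role, datasets) in grants.items():
        extra = datasets - needs.get(role, set())
        if extra:
            flags[user] = extra
    return flags

print(over_provisioned(GRANTS, ROLE_NEEDS))  # {'alice': {'payroll_db'}}
```

Run periodically against the real identity system, a report like this is exactly the "who can access what" review the commentary calls for.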
My students were rather convincing that they would call the police.
What do you do when you see a murder on the internet?
In today’s Big Story podcast: two weeks ago, four people were found dead in a Markham, Ontario home. Before the police had seen the bodies, another group of people had. The alleged killer shared a potential confession, as well as graphic evidence of the crimes, with some acquaintances he’d made while playing an online video game. So, around the world, while police had no knowledge of what was transpiring, this group of gamers faced an impossible dilemma.
The podcast: https://media.blubrry.com/thebigstory/p/chtbl.com/track/G9G45/rogers-aod.leanstream.co/rogers/thebigstory_dai/tbs_08132019.mp3
Eventually, we’ll figure it out.
European Parliament Publishes Study on Blockchain and the GDPR
On July 24, 2019, the European Parliament published a study entitled “Blockchain and the General Data Protection Regulation: Can distributed ledgers be squared with European data protection law?” The study explores the tension between blockchain technology and compliance with the General Data Protection Regulation (the “GDPR”), the EU’s data protection law. The study also explores how blockchain technology can be used as a tool to assist with GDPR compliance. Finally, it recommends the adoption of certain policies to address the tension between blockchain and the GDPR, to ensure that “innovation is not stifled and remains responsible”. This blog post highlights some of the key findings in the study and provides a summary of the recommended policy options.
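One mitigation often discussed in this context, sketched naively below, is to keep personal data off-chain and commit only a salted hash to the immutable ledger, so that "erasure" deletes the off-chain record and leaves the on-chain entry unlinkable. This is an illustration of the general pattern, not a recommendation drawn verbatim from the study:

```python
import hashlib, os

off_chain = {}  # mutable store: record_id -> (salt, personal data)
ledger = []     # stand-in for the append-only chain

def record(record_id, personal_data):
    """Commit only a salted hash on-chain; keep the data off-chain."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + personal_data.encode()).hexdigest()
    off_chain[record_id] = (salt, personal_data)
    ledger.append((record_id, digest))

def erase(record_id):
    # Deleting salt + data leaves the on-chain hash practically unlinkable.
    off_chain.pop(record_id, None)

record("user42", "alice@example.com")
erase("user42")
print("user42" in off_chain, len(ledger))  # False 1
```

Whether a salted hash of erased data still counts as "personal data" under the GDPR is precisely the kind of open question the study examines.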
Normally, these are the folks who tell you how to process data under the GDPR.
PwC will have to work to rebuild trust after shock GDPR fine
… The GDPR clearly establishes legal bases, under which personal data may be processed by controllers. Consent is one such basis, but it’s not the only one. And PwC’s choice of consent as a legal basis for processing personal data of its employees was not appropriate, the DPA found.
The data was processed in the course of the company’s commercial activities, and the employees were not informed about that. That kind of approach was found to be in violation of the GDPR’s fairness and transparency principles.
The accountability principle was also not complied with since the company failed to demonstrate appropriate compliance and transferred the burden to data subjects. As PwC was in this case a controller of personal data, such transfer was inappropriate.
Where did they cross the line? What do employers say?
Exclusive: Google's jobs search draws antitrust complaints from rivals
Google’s fast-growing tool for searching job listings has been a boon for employers and job boards starving for candidates, but several rival job-finding services contend anti-competitive behavior has fueled its rise and cost them users and profits.
… Similar to worldwide leader Indeed and other search services familiar to job seekers, Google’s tool links to postings aggregated from many employers. It lets candidates filter, save and get alerts about openings, though they must go elsewhere to apply.
Alphabet Inc’s Google places a large widget for the 2-year-old tool at the top of results for searches such as “call center jobs” in most of the world.
Some rivals allege that positioning is illegal because Google is using its dominance to attract users to its specialized search offering without the traditional marketing investments they have to make.
Other job technology firms say Google has restored industry innovation and competition.
Often helpful in a ‘least common denominator’ kind of way. Lots of references to other standards. (Definitions need some work)
A Plan for Federal Engagement in Developing Technical Standards and Related Tools
NIST – Prepared in response to Executive Order 13859 Submitted on August 9, 2019. “United States global leadership in AI depends upon the Federal government playing an active and purpose-driven role in AI standards development. That includes AI standards-related efforts needed by agencies to fulfill their missions by:
- supporting and conducting AI research and development,
- actively engaging in AI standards development,
- procuring and deploying standards-based products and services, and
- developing and implementing supportive policies, including regulatory policies where needed…”
…This plan identifies the following nine areas of focus for AI standards: Concepts and terminology; Data and knowledge; Human interactions; Metrics; Networking; Performance testing and reporting methodology; Safety; Risk management; Trustworthiness…”
When playing blackjack with a robot dealer, never say, “Hit me!”
Robots need a new philosophy to get a grip
Robots need to know the reason why they are doing a job if they are to effectively and safely work alongside people in the near future. In simple terms, this means machines need to understand motive the way humans do, and not just perform tasks blindly, without context.
According to a new article by the National Centre for Nuclear Robotics, based at the University of Birmingham, this could herald a profound change for the world of robotics, but one that is necessary.
… "Imagine asking a robot to pass you a screwdriver in a workshop. Based on current conventions the best way for a robot to pick up the tool is by the handle," he said. "Unfortunately, that could mean that a hugely powerful machine then thrusts a potentially lethal blade towards you, at speed. Instead, the robot needs to know what the end goal is, i.e.,to pass the screwdriver safely to its human colleague, in order to rethink its actions.
… Robotic manipulation and the role of the task in the metric of success, Nature Machine Intelligence (2019). DOI: 10.1038/s42256-019-0078-4
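The screwdriver example reduces to a toy decision rule: the grasp depends on the task goal, not on grip convenience. The tool names and grasp labels below are invented for illustration:

```python
# Invented grasp table: which end to grip, and what the human receives.
GRASPS = {
    "screwdriver": {"handle": "blade points at the human",
                    "shaft": "handle points at the human"},
}

def choose_grasp(tool, task):
    """Pick the grip from the task goal, not from grip convenience."""
    side = "shaft" if task == "handover" else "handle"
    return side, GRASPS[tool][side]

# Handing the tool over: grip the shaft so the human gets the handle.
print(choose_grasp("screwdriver", "handover"))
```

The paper's argument is that today's grasp planners optimize only the "how" (grip stability) and need the "why" (the task) as an input, as this rule makes explicit.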
Because I like lists. (Quantity not quality) How far down the list must you go to find a site you don’t recognize?
Ranking the Top 100 Websites in the World
– “As a greater portion of the world begins to live more of their life online, the world’s top 100 websites continue to see explosive growth in their traffic numbers. To claim even the 100th spot in this ranking, your website would need around 350 million visits in a single month. Using web-traffic data, we’ve visually mapped out the top 100 biggest websites on the internet. Examining the ranking reveals a lot about how people around the world search for information, which services they use, and how they spend time online…”
For my students.
… Sifting through large piles of resumes is often machine-assisted, and machines don’t care about your potential. Machines only care about which words you’ve used and the “value” that has been assigned to them. Jobscan knows this and tries to play into it.
Jobscan’s primary function is to read your resume and cover letter. From there, it will tell you how likely you are to make it past first-round filters based upon the language you’ve used in your application.
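A crude sketch of what such a first-round keyword filter computes: the share of a posting's meaningful terms that appear in the resume. This is an illustration of the general technique, not Jobscan's actual algorithm:

```python
def match_score(resume, posting):
    """Share of the posting's meaningful terms found in the resume."""
    stopwords = {"a", "and", "the", "of", "with"}
    resume_words = set(resume.lower().split())
    wanted = set(posting.lower().split()) - stopwords
    return len(wanted & resume_words) / len(wanted)

resume = "python developer with sql and cloud experience"
posting = "developer with python sql aws"
print(match_score(resume, posting))  # 0.75: "aws" is the missing keyword
```

Seeing the score move when a single keyword is added is exactly why tools like this encourage mirroring the posting's language.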
Monday, August 12, 2019
More questions than answers in this article.
FBI wants to monitor Facebook and Instagram for domestic threats in real time
The Federal Bureau of Investigation has quietly been searching for private contractors who could gather and feed to law enforcement tremendous amounts of user data straight from social media platforms such as Twitter, Facebook and Instagram.
The U.S. government needs "real-time access to a full range of social media exchanges" to better fight terrorist groups and domestic threats, the FBI said in its request for bids, which was first reported by the Wall Street Journal.
But the FBI's effort to gain far-reaching visibility into the social media activities of both Americans and foreigners risks clashing with other parts of the federal government that have sought to clamp down on Silicon Valley for data breaches, privacy violations, and other cases in which user information was shared without consent.
… Civil liberties advocates warned that the contract could be easily abused.
"This proposal invites dragnet surveillance that history shows will disproportionately harm immigrants, communities of color, and activists, and it invites profit-seeking firms to violate Facebook and Twitter rules designed to keep users safe," said Matt Cagle, an attorney for the American Civil Liberties Union of Northern California.
… A Twitter spokesperson said the company's terms for third parties prohibit developers from "allowing law enforcement — or any other entity — to use Twitter data for surveillance purposes. Period."
...which is a very polite way of saying they can’t do it.
The FTC Can Rise to the Privacy Challenge, but Not Without Help From Congress
Over at Lawfare, I have an essay co-authored by Chris Hoofnagle and Woodrow Hartzog called The FTC Can Rise to the Privacy Challenge, but Not Without Help From Congress. This piece is also posted at the Brookings Institution’s TechTank. The essay begins:
Facebook’s recent settlement with the Federal Trade Commission (FTC) has reignited debate over whether the agency is up to the task of protecting privacy. Many people, including some skeptics of the FTC’s ability to rein in Silicon Valley, lauded the settlement, or at least parts of it.
Sunday, August 11, 2019
Perhaps if they make it less effective they can use it?
Thomas J. Prohaska reports:
Studies and local critics question its effectiveness. The State Education Department won’t let it be turned on. And the State Legislature came close to banning it.
But the Lockport City School District continues to push to activate a facial recognition security system it installed last year in its 10 school buildings.
“We believe it does provide another layer of security for our students. We firmly believe in that,” Board of Education President John A. Linderman said.
Oh, they FIRMLY believe in that… well, then.
[From the article:
Wednesday night, the Board of Education, in its latest effort to win the state's approval to use the $2.75 million system, decided that photographs of suspended students will not be programmed into it.
Such students were to have been one of the categories of banned individuals whose presence, if detected by the 300 digital cameras the district installed last year, would trigger an alarm to local police and a small group of administrators.
Could this be extended to show how feeds influence actions?
FAIRY: A Framework for Understanding Relationships between Users' Actions and their Social Feeds
Users increasingly rely on social media feeds for consuming daily information. The items in a feed, such as news, questions, songs, etc., usually result from the complex interplay of a user's social contacts, her interests and her actions on the platform. The relationship of the user's own behavior and the received feed is often puzzling, and many users would like to have a clear explanation on why certain items were shown to them. Transparency and explainability are key concerns in the modern world of cognitive overload, filter bubbles, user tracking, and privacy risks. This paper presents FAIRY, a framework that systematically discovers, ranks, and explains relationships between users' actions and items in their social media feeds. We model the user's local neighborhood on the platform as an interaction graph, a form of heterogeneous information network constructed solely from information that is easily accessible to the concerned user.
… User studies on two social platforms demonstrate the practical viability and user benefits of the FAIRY method.
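The core idea can be sketched with a toy interaction graph: a feed item is "explained" by a path from the user, through her own actions and contacts, to the item. The graph below is invented; FAIRY's actual model is far richer, with typed nodes and learned rankings over candidate paths:

```python
from collections import deque

# Invented interaction graph: node -> outgoing neighbors.
graph = {
    "me": ["liked:page_A"],
    "liked:page_A": ["friend_B"],
    "friend_B": ["shared:item_X"],
    "shared:item_X": [],
}

def explain(start, feed_item):
    """BFS for a shortest path linking the user's actions to the item."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == feed_item:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(explain("me", "shared:item_X"))
```

Here the explanation reads: you liked page A, a friend who also engages with it shared item X, so item X surfaced in your feed.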
“Show me the ethics!” It’s easy to find bad examples (that’s what Google is for) but much harder to vet a company for ethical behavior.
The Techlash Has Come to Stanford
Even in the famed computer science program, students are no longer sure they’d go to work for Facebook or Google (and definitely not Palantir).
I’ve always had a hard time believing that the situation dictates the ethics.
Designing Normative Theories of Ethical Reasoning: Formal Framework, Methodology, and Tool Support
The area of formal ethics is experiencing a shift from a unique or standard approach to normative reasoning, as exemplified by so-called standard deontic logic, to a variety of application-specific theories. However, the adequate handling of normative concepts such as obligation, permission, prohibition, and moral commitment is challenging, as illustrated by the notorious paradoxes of deontic logic. In this article we introduce an approach to design and evaluate theories of normative reasoning. In particular, we present a formal framework based on higher-order logic, a design methodology, and we discuss tool support. Moreover, we illustrate the approach using an example of an implementation, we demonstrate different ways of using it, and we discuss how the design of normative theories is now made accessible to non-specialist users and developers.
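To make the operators concrete, here is a minimal possible-worlds reading of standard deontic logic (the worlds and propositions are invented, and this toy model omits the higher-order machinery the paper actually uses): obligation means true in every ideal world, permission means true in some, and the duality P(p) iff not O(not p) falls out:

```python
# Two invented "ideal" worlds; each maps a proposition to a truth value.
ideal_worlds = [
    {"tell_truth": True, "pay_tax": True},
    {"tell_truth": True, "pay_tax": False},
]

def O(pred):   # obligatory: pred holds in every ideal world
    return all(pred(w) for w in ideal_worlds)

def P(pred):   # permitted: pred holds in some ideal world
    return any(pred(w) for w in ideal_worlds)

tell_truth = lambda w: w["tell_truth"]
pay_tax = lambda w: w["pay_tax"]

print(O(tell_truth), P(pay_tax))                        # True True
print(P(pay_tax) == (not O(lambda w: not pay_tax(w))))  # duality: True
```

The paradoxes the abstract mentions arise precisely because this standard semantics behaves oddly on conditional obligations, which is what motivates the application-specific theories the authors design.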