- Many types of information disorder exist online, from fabricated videos to impersonated accounts to memes designed to manipulate genuine content.
- Automation and microtargeting tactics have made it easier for agents of disinformation to weaponize regular users of the social web to spread harmful messages.
- Much research is needed to understand the effects of disinformation and build safeguards against it.
Saturday, August 24, 2019
Some thoughts from Scientific American.
Misinformation Has Created a New World Disorder
Our willingness to share content without thinking is exploited to spread disinformation
(Related) One “fake” hack.
How Artist Imposters and Fake Songs Sneak Onto Streaming Services
When songs leak on Spotify and Apple Music, illegal uploads can generate substantial royalty payments—but for whom?
...and apparently they all have different ways of describing the “perfect” AI development process.
Meet the Researchers Working to Make Sure Artificial Intelligence Is a Force for Good
… To help ensure future AI is developed in humanity’s best interest, AI Now’s researchers have divided the challenges into rights and liberties; labor and automation; bias and inclusion; and safety and critical infrastructure. Rights and liberties pertains to the potential for AI to infringe on people’s civil liberties, as in cases of facial recognition technology in public spaces. Labor and automation encompasses how workers are impacted by AI-driven automation. Bias and inclusion has to do with the potential for AI systems to exacerbate historical discrimination against marginalized groups. Finally, safety and critical infrastructure looks at risks posed by incorporating AI into important systems like the energy grid.
… AI Now is far from the only research institute founded in recent years to study ethical issues in AI. At Stanford University, the Institute for Human-Centered Artificial Intelligence has put ethical and societal implications at the core of its thinking on AI development, while the University of Michigan’s new Center for Ethics, Society, and Computing (ESC) focuses on addressing technology’s potential to replicate and exacerbate inequality and discrimination. Harvard’s Berkman Klein Center concentrates in part on the challenges of ethics and governance in AI.
I’m not so pessimistic. My library has her book, so I may change my mind when I read it.
Futurist Amy Webb envisions how AI technology could go off the rails
… Webb’s latest book, The Big Nine, examines the development of AI and how the ‘big nine’ corporations – Amazon, Google, Facebook, Tencent, Baidu, Alibaba, Microsoft, IBM and Apple – have all taken control over the direction that development is heading. She says that the foundation upon which AI is built is fundamentally broken and that, within our lifetimes, AI will begin to behave unpredictably, to our detriment.
… One of the main issues is that corporations have a much greater incentive to push out this kind of technology quickly than they do to release it safely.
A geek lecture (45 minutes)
Computer Mathematics, AI and Functional Programming
For my students who might be slightly nervous about their presentations.
Friday, August 23, 2019
Definitely worth discussing.
When Ransomware Cripples a City, Who’s to Blame? This I.T. Chief Is Fighting Back
… The former information technology director of Lake City, the northern Florida city that was forced to pay out nearly half a million dollars after a ransomware attack this summer, was blamed for the breach, and for the long time it took to recover. But in a new lawsuit, Mr. Hawkins said he had warned the city about its vulnerability long ago — urging the purchase of an expensive, cloud-based backup system that might have averted the need to pay a ransom.
The error was in police software. Does US police software process the raw data before investigators/prosecutors see it?
Flaws in Cellphone Evidence Prompt Review of 10,000 Verdicts in Denmark
The authorities in Denmark say they plan to review over 10,000 court verdicts because of errors in cellphone tracking data offered as evidence.
The country’s director of public prosecutions on Monday also ordered a two-month halt in prosecutors’ use of cellphone data in criminal cases while the flaws and their potential consequences are investigated.
“It’s shaking our trust in the legal system,” Justice Minister Nick Haekkerup said in a statement.
The first error was found in an I.T. system that converts phone companies’ raw data into evidence that the police and prosecutors can use to place a person at the scene of a crime. During the conversions, the system omitted some data, creating a less-detailed image of a cellphone’s whereabouts. The error was fixed in March after the national police discovered it.
In a second problem, some cellphone tracking data linked phones to the wrong cellphone towers, potentially connecting innocent people to crime scenes, said Jan Reckendorff, the director of public prosecutions.
We can do this to Kazakhstan, but would any first-world government tolerate it?
Browsers Take a Stand Against Kazakhstan’s Invasive Internet Surveillance
Yesterday, Google Chrome, Mozilla Firefox, and Apple’s Safari browsers started blocking a security certificate previously used by Kazakh ISPs to compromise their users’ security and perform dragnet surveillance.
… The two-step of Kazakh ISPs deploying an untrusted certificate and users manually trusting that certificate allows the ISPs to read and even alter the online communication of any of their users, including sensitive user data, messages, emails, and passwords sent over the web.
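The browsers’ countermeasure can be sketched as a fingerprint blocklist: the offending root certificate is identified by its hash and rejected even if the user manually “trusted” it. A minimal illustration, with made-up certificate bytes standing in for real DER data:

```python
import hashlib

def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a (DER-encoded) certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

# Illustrative stand-ins only -- not real certificate data.
kazakh_root = b"---ISP-deployed interception root---"
normal_root = b"---ordinary public CA root---"

# Browsers ship a blocklist of known-bad certificate hashes,
# consulted even for certificates the user has manually trusted.
BLOCKLIST = {cert_fingerprint(kazakh_root)}

def accept_certificate(der_bytes: bytes) -> bool:
    """Reject any certificate whose fingerprint is on the blocklist."""
    return cert_fingerprint(der_bytes) not in BLOCKLIST

print(accept_certificate(normal_root))   # True
print(accept_certificate(kazakh_root))   # False
```

Blocklisting by hash means the decision cannot be overridden by the user’s local trust store, which is exactly why it defeats the ISP’s manual-trust workaround.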
This assumes the hospital’s controls were adequate.
Hospital found not liable for Facebook post about patient's STD
An Ohio hospital is not liable for a worker's Facebook post that included a screenshot of a patient's medical records showing she had a sexually transmitted disease, a judge ruled.
The University of Cincinnati Medical Center employee posted records in 2013 on a Facebook group with a name that includes a derogatory term for women considered promiscuous.
… Last year, the patient sued the hospital, her former boyfriend and the employee, who was fired a week after the post.
After looking into what transpired, the hospital found that the financial services employee had accessed the information, court documents show.
A Hamilton County Common Pleas Court judge on Monday found that the worker did not act within the scope of her employment and that the hospital needs to be dropped from the lawsuit, the Cincinnati Enquirer reported.
"(The hospital) had a policy. It was violated," Judge Jody Luebbers said. "It's tragic, but that's just how I see it."
It seems we need better solutions than this article suggests.
Singularity: how governments can halt the rise of unfriendly, unstoppable super-AI
… A super-AI raises two fundamental challenges for its inventors, as philosopher Nick Bostrom and others have pointed out. One is a control problem, which is how to make sure the super-AI has the same objectives as humanity. Without this, the intelligence could deliberately, accidentally or by neglect destroy humanity – an “AI disaster”.
The second is a political problem, which is how to ensure that the benefits of a super-intelligence do not go only to a small elite, causing massive social and wealth inequalities. If a super-AI arms race occurs, it could lead competing groups to ignore these problems in order to develop their technology more quickly. This could lead to a poor-quality or unfriendly super-AI.
Thursday, August 22, 2019
My students will have to figure this out.
How New A.I. Is Making the Law’s Definition of Hacking Obsolete
… In April this year, a research team at the Chinese tech giant Tencent showed that a Tesla Model S in autopilot mode could be tricked into following a bend in the road that didn’t exist simply by adding stickers to the road in a particular pattern. Earlier research in the U.S. had shown that small changes to a stop sign could cause a driverless car to mistakenly perceive it as a speed limit sign. Another study found that by playing tones indecipherable to a person, a malicious attacker could cause an Amazon Echo to order unwanted items.
These discoveries are part of a growing area of study known as adversarial machine learning. As more machines become artificially intelligent, computer scientists are learning that A.I. can be manipulated into perceiving the world in wrong, sometimes dangerous ways. And because these techniques “trick” the system instead of “hacking” it, federal laws and security standards may not protect us from these malicious new behaviors — and the serious consequences they can have.
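The “tricks” described above typically exploit the gradient of the model itself. Below is a toy, pure-Python sketch of the canonical adversarial technique, the Fast Gradient Sign Method, applied to a made-up logistic-regression “classifier”; the weights and input are illustrative, not from any real system:

```python
import math

# Toy "classifier": logistic regression over three features.
# Weights and input are illustrative, not from any real model.
w = [2.0, -1.5, 0.5]
b = 0.1

def predict(x):
    """Model's confidence that x belongs to the true class."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

def sign(v):
    return (v > 0) - (v < 0)

def fgsm(x, epsilon=0.3):
    """Fast Gradient Sign Method: move each feature a small step in
    the direction that increases the model's loss the fastest."""
    p = predict(x)
    # For logistic loss -log(p), the gradient w.r.t. x is -(1 - p) * w.
    grad = [-(1 - p) * wi for wi in w]
    return [xi + epsilon * sign(g) for xi, g in zip(x, grad)]

x = [1.0, 0.2, 0.5]
adv = fgsm(x)
print(round(predict(x), 3), round(predict(adv), 3))  # confidence drops
```

Each feature moves only a small amount (ε), yet the model’s confidence falls sharply, which is the essence of why a few stickers on a road or a stop sign can flip a perception system’s output.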
Are they being overly secretive or do they just not know?
Attackers Demand Millions in Texas Ransomware Incident
The cybercriminals behind the recent ransomware incident that impacted over 20 local governments in Texas are apparently demanding $2.5 million in exchange for restoring access to the encrypted data.
The incident took place on August 16, when 23 towns in Texas revealed they were targeted in a coordinated attack to infect their systems with ransomware.
… The City of Borger was one of the victims, with its business and financial operations and services impacted by ransomware, although basic and emergency services continued to be operational.
“Currently, Vital Statistics (birth and death certificates) remains offline, and the City is unable to take utility or other payments. Until such time as normal operations resume, no late fees will be assessed, and no services will be shut off,” the city said earlier this week (PDF).
The City of Keene was also affected, being unable to process utility payments.
Listen to other views and carefully consider. (Then burst out laughing?)
Political Confessional: The Man Who Thinks Mass Surveillance Can Work
This week we talked to Owen, a 37-year-old white man from the Bay Area in California. He wrote that he is “open to mass surveillance if it can lead to a world where a much higher percent of crimes are caught, leading to better public safety and, ideally, shorter [or] lighter sentences (because you don’t need as big a threat of punishment to deter people from crimes if the likelihood of catching them is very high).”
Creating the Terminator?
CRS Report to Congress on Lethal Autonomous Weapon Systems
The following is the August 16, 2019 Congressional Research Service In Focus report – “As technology, particularly artificial intelligence (AI), advances, lethal autonomous weapon systems (LAWS)—weapons designed to make decisions about using lethal force without manual human control—may soon make their appearance, raising a number of potential ethical, diplomatic, legal, and strategic concerns for Congress. By providing a brief overview of ongoing international discussions concerning LAWS, this In Focus seeks to assist Congress as it conducts oversight hearings on AI within the military (as the House and Senate Committees on Armed Services have done in recent years), guides U.S. foreign policy, and makes funding and authorization decisions related to LAWS…”
(Related) An alternate view...
Amazon, Microsoft May Be Putting World at Risk of Killer AI, Says Report
Amazon, Microsoft and Intel are among leading tech companies putting the world at risk through killer robot development, according to a report that surveyed major players from the sector about their stance on lethal autonomous weapons.
Dutch NGO Pax ranked 50 companies by three criteria: whether they were developing technology that could be relevant to deadly AI, whether they were working on related military projects, and whether they had committed to abstaining from contributing in the future.
"Why are companies like Microsoft and Amazon not denying that they're currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?" said Frank Slijper, lead author of the report published this week.
… The report noted that Microsoft employees had also voiced their opposition to a US Army contract for an augmented reality headset, HoloLens, that aims at "increasing lethality" on the battlefield.
Make the world safe from the Terminator?
IBM joins Linux Foundation AI to promote open source trusted AI workflows
AI is advancing rapidly within the enterprise -- by Gartner's count, many organizations already have at least one AI deployment in operation, and they're planning to substantially accelerate their AI adoption within the next few years. At the same time, the organizations building and deploying these tools have yet to really grapple with the flaws and shortcomings of AI – whether the models deployed are fair, ethical, secure or even explainable.
Before the world is overrun with flawed AI systems, IBM is aiming to rev up the development of open source trusted AI workflows. As part of that effort, the company is joining the Linux Foundation AI (LF AI) as a General Member.
… As a Linux Foundation project, LF AI provides a vendor-neutral space for the promotion of Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) open source projects. It's backed by major organizations like AT&T, Baidu, Ericsson, Nokia and Huawei.
… IBM has already spearheaded efforts on this front with a set of open source toolkits designed to help build trusted AI. The AI Fairness 360 toolkit helps developers and data scientists detect and mitigate unwanted bias in machine learning models and datasets. The Adversarial Robustness Toolbox is an open source library that helps researchers and developers defend deep neural networks from adversarial attacks. Meanwhile, the AI Explainability 360 toolkit provides a set of algorithms, code, guides, tutorials and demos to support the interpretability and explainability of machine learning models.
“We need ethics, we just don’t need them right now.” What can we agree on today?
International AI ethics panel must be independent
China wants to be the world’s leader in artificial intelligence (AI) by 2030. The United States has a strategic plan to retain the top spot, and, by some measures, already leads in influential papers, hardware and AI talent. Other wealthy nations are also jockeying for a place in the world AI league.
A kind of AI arms race is under way, and governments and corporations are pouring eye-watering sums into research and development. The prize, and it’s a big one, is that AI is forecast to add around US$15 trillion to the world economy by 2030 — more than four times the 2017 gross domestic product of Germany. That’s $15 trillion in new companies, jobs, products, ways of working and forms of leisure, and it explains why countries are competing so vigorously for a slice of the pie.
… Officials from Canada and France, meanwhile, have been working to establish an International Panel on Artificial Intelligence (IPAI), to be launched at the G7 summit of world leaders in Biarritz, France, from 24 to 26 August.
… To be credible, the IPAI has to be different. It needs the support of more countries, but it must also commit to openness and transparency. Scientific advice must be published in full. Meetings should be open to observers and the media. Reassuringly, the panel’s secretariat is described in documents as “independent”. That’s an important signal.
Data Management Law for the 2020s: The Lost Origins and the New Needs
Pałka, Przemysław, Data Management Law for the 2020s: The Lost Origins and the New Needs (August 10, 2019). Available at SSRN.
“In the data analytics society, each individual’s disclosure of personal information imposes costs on others. This disclosure enables companies, deploying novel forms of data analytics, to infer new knowledge about other people and to use this knowledge to engage in potentially harmful activities. These harms go beyond privacy and include difficult to detect price discrimination, preference manipulation, and even social exclusion. Currently existing, individual-focused, data protection regimes leave law unable to account for these social costs or to manage them. This Article suggests a way out, by proposing to re-conceptualize the problem of social costs of data analytics through the new frame of “data management law.” It offers a critical comparison of the two existing models of data governance: the American “notice and choice” approach and the European “personal data protection” regime (currently expressed in the GDPR). Tracing their origin to a single report issued in 1973, the article demonstrates how they developed differently under the influence of different ideologies (market-centered liberalism, and human rights, respectively). It also shows how both ultimately failed at addressing the challenges outlined already forty-five years ago. To tackle these challenges, this Article argues for three normative shifts. First, it proposes to go beyond “privacy” and towards “social costs of data management” as the framework for conceptualizing and mitigating negative effects of corporate data usage. Second, it argues to go beyond the individual interests, to account for collective ones, and to replace contracts with regulation as the means of creating norms governing data management. Third, it argues that the nature of the decisions about these norms is political, and so political means, in place of technocratic solutions, need to be employed.”
Wednesday, August 21, 2019
Looks like it will take a while for a comprehensive report on this attack.
Information Concerning the August 2019 Texas Cyber Incident
… Below is an update as of August 20, 2019, at approximately 3:00 p.m. central time.
- The number of confirmed impacted entities has been reduced to twenty-two.
- More than twenty-five percent of the impacted entities have transitioned from response and assessment to remediation and recovery, with a number of entities back to operations as usual.
Study: Americans won’t vote for candidates who approve ransomware payments
… A new survey by The Harris Poll reveals that 64% of registered voters will not vote for candidates who approve of making ransomware payments.
Law Enforcement To Flag & Spy On Future Criminals
… A recent Albuquerque Journal article revealed that law enforcement will flag people that they think might pose a potential risk.
… What types of things could Americans do that law enforcement would consider threatening?
Inside Sources revealed that police would be looking for "certain indicators."
State Police Chief Tim Johnson said, “I think it’s obviously important for all of the citizens of New Mexico to be on the lookout for certain indicators of these types of folks that would do this. And part of our job as government officials is to ensure that the citizens of the community understand what those indicators are so they can report them."
The Tampa Bay Times reports that police are looking for “certain critical threat indicators” in students’ social media posts and have even created their own FortifyFL app that allows anyone to secretly report suspicious behavior.
What these "indicators" are is anyone's guess.
Not in the US, yet.
You Can Finally See All Info Facebook Collected About You From Other Websites
– “…Facebook collects information about its users in two ways: first, through the information you input into its website and apps, and second, by tracking which websites you visit while you’re not on Facebook. That’s why, after you visit a clothing retailer’s website, you’ll likely see an ad for it in your Facebook News Feed or Instagram feed. Basically, Facebook monitors where you go, all across the internet, and uses your digital footprints to target you with ads. But Facebook users have never been able to view this external data Facebook collected about them, until now. Facebook collects this data via the “Login with Facebook” button, the “like” button, Facebook comments, and little bits of invisible code, called the Facebook pixel, embedded on other sites (including BuzzFeed News). Today the company will start to roll out a feature called “Off-Facebook Activity” that allows people to manage that external browsing data — finally delivering on a promise it made over a year ago when CEO Mark Zuckerberg announced at a company event that it would develop a feature then called “Clear History.”
The new tool will display a summary of those third-party websites that shared your visit with Facebook, and will allow you to disconnect that browsing history from your Facebook account. You can also opt out of future off-Facebook activity tracking, or selectively stop certain websites from sending your browsing activity to Facebook. Nearly a third of all websites include a Facebook tracker. Some people in Ireland, South Korea, and Spain will gain access to Off-Facebook Activity first. Facebook said it will continue rolling out the feature everywhere else over the coming months. The tool, found in account Settings > Off-Facebook Activity, includes an option allowing you to “clear” your browsing history…”
There is a big difference between, “Hey! We have this shiny new tool!” and “Hey! We know how to use this shiny new tool!”
Flawed Algorithms Are Grading Millions of Students’ Essays
Fooled by gibberish and highly susceptible to human bias, automated essay-scoring systems are being increasingly adopted, a Motherboard investigation has found
… Of those 21 states, three said every essay is also graded by a human. But in the remaining 18 states, only a small percentage of students’ essays—it varies from 5 to 20 percent—will be randomly selected for a human grader to double-check the machine’s work.
But research from psychometricians—professionals who study testing—and AI experts, as well as documents obtained by Motherboard, show that these tools are susceptible to a flaw that has repeatedly sprung up in the AI world: bias against certain demographic groups. And as a Motherboard experiment demonstrated, some of the systems can be fooled by nonsense essays with sophisticated vocabulary.
Fuel for an interesting discussion.
RPA And Machine Learning Brings Us The Autonomous Data Centre
… As we enter this new revolution in how businesses operate, it’s essential that every piece of data is handled and used appropriately to optimise its value. Without cost-effective storage and increasingly powerful hardware, digital transformation and the new business models associated with it wouldn’t be possible.
Experts have been predicting for some time that the automation technologies that are applied in factories worldwide would be applied to datacentres in the future. The truth is that we’re rapidly advancing this possibility with the application of Robotic Process Automation (RPA) and machine learning in the datacentre environment.
A Week in the Life of Popular YouTube Channels
– content featuring children received more views than other videos
“The media landscape was upended more than a decade ago when the video-sharing site YouTube was launched. The volume and variety of content posted on the site is staggering. The site’s popularity makes it a launchpad for performers, businesses and commentators on every conceivable subject. And like many platforms in the modern digital ecosystem, YouTube has in recent years become a flashpoint in ongoing debates over issues such as misinformation and the impact of technology on children. Amid this growing focus, and in an effort to continue demystifying the content of this popular source of information, Pew Research Center used its own custom methodology to assemble a list of popular YouTube channels (those with at least 250,000 subscribers) that existed as of late 2018, then conducted a large-scale analysis of the videos those channels produced in the first week of 2019. The Center identified a total of 43,770 of these high-subscriber channels using a process similar to the one used in our earlier research. This data collection produced a variety of insights into the nature of content on the platform: These popular channels alone posted nearly a quarter-million videos in the first seven days of 2019, totaling 48,486 hours of content. To put this figure in context, a single person watching videos for eight hours a day (with no breaks or days off) would need more than 16 years to watch all the content posted by just the most popular channels on the platform during a single week. The average video posted by these channels during this time period was roughly 12 minutes long and received 58,358 views during its first week on the site…”
Next, speech to sign?
Google's AI allows smartphones to translate sign language
Tuesday, August 20, 2019
Not sure I understand this one.
Al Restar reports:
The Australian court ruled that employees are allowed to refuse to provide biometric data to their employers. The ruling follows the lawsuit filed by Jeremy Lee after he was fired from his previous job due to his refusal to provide fingerprint samples for the company’s newly installed fingerprint login system.
Jeremy Lee from Queensland, Australia, won an unfair dismissal case after he was fired from his job at Superior Wood Pty Ltd, a lumber manufacturer, in February 2018 for refusing to provide his fingerprints to sign in and out of work.
From the article:
“If I were to submit to a fingerprint scan time clock, I would be allowing unknown individuals and groups to access my biometric data, the potential trading/acquisition of my biometric data by unknown individuals and groups, indefinitely,” reads Lee’s affidavit.
… “We accept Mr. Lee’s submission that once biometric information is digitized, it may be very difficult to contain its use by third parties, including for commercial purposes,” case documents state.
Lee’s case is a first in Australia. While it did not change the law, it opens a new perspective on the ownership of biometric information like fingerprints and facial recognition data, and on how privacy laws will apply to such data.
It’s a small step, but at least it’s a step.
Twitter Flexing its Muscles Against State Misinformation
Twitter first announced Monday, August 19, 2019, that it is updating its policy on state media advertising. "Going forward," it said, "we will not accept advertising from state-controlled news media entities. Any affected accounts will be free to continue to use Twitter to engage in public conversation, just not our advertising products."
This policy is global and not targeted at any specific nation or nations, but does not "apply to taxpayer-funded entities, including independent public broadcasters" (so organizations like the BBC -- were it to advertise -- should be okay). The organizations targeted are not banned from using Twitter to engage in organic conversation, but will not be allowed to advertise on the platform.
The immediate catalyst is almost certainly mainland China's propaganda campaign against the ongoing Hong Kong protest movement, but the change will reduce the capacity of all foreign countries to manipulate public opinion ahead of elections. The longer-term aim will be to help protect the U.S. 2020 elections from foreign influence, whether that comes from China, Russia, Iran or elsewhere.
Reading this, I think IT will have problems complying. Perhaps we should dedicate a lawyer to make the records and draft the notices? Can IT explain things to the lawyer in plain English?
Actionable takeaways from new Irish and Polish Data Protection Authorities' guidance on personal data breach notification under GDPR
The Irish and Polish data protection authorities both recently issued guidance on the notification requirements under GDPR in the event of a Personal Data Breach.
What is "become aware"?
- A controller should be regarded as having become ‘aware’ when they have a reasonable degree of certainty that a security incident has occurred and compromised personal data.
- Controllers should have a system in place for recording how and when they become aware of personal data breaches and how they assessed the potential risk posed by the breach.
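The “awareness” timestamp matters because, under GDPR Article 33(1), it starts a 72-hour clock for notifying the supervisory authority. A minimal sketch of the kind of internal breach register the guidance describes; the field names and example record are illustrative, not from the guidance itself:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

NOTIFICATION_WINDOW = timedelta(hours=72)  # GDPR Art. 33(1)

@dataclass
class BreachRecord:
    """One entry in an internal breach register: how and when the
    controller became aware of the breach, and the assessed risk."""
    description: str
    aware_at: datetime        # when "reasonable certainty" was reached
    how_detected: str         # e.g. "IDS alert", "employee report"
    risk_assessment: str      # e.g. "low", "high"

    def notify_deadline(self) -> datetime:
        # The supervisory authority must be notified within 72 hours
        # of awareness, unless the breach is unlikely to pose a risk.
        return self.aware_at + NOTIFICATION_WINDOW

rec = BreachRecord(
    description="Laptop with unencrypted customer list stolen",
    aware_at=datetime(2019, 8, 20, 9, 30),
    how_detected="employee report",
    risk_assessment="high",
)
print(rec.notify_deadline())  # 2019-08-23 09:30:00
```

Recording `how_detected` alongside `aware_at` addresses the guidance’s second point: the controller must be able to show both how and when it became aware, not just that it eventually notified.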
Amazon is passing along costs of a new digital tax to thousands of French sellers
… The reason the company cited was simple: a 3% digital tax passed by the French government in July.
Amazon’s move appears to directly conflict with the French government’s aim of leveling the playing field between Big Tech and small and medium-sized enterprises, and further complicates France’s effort to rein in companies like Amazon, Facebook and Google.
We covered Intellectual Property last week. Perhaps we should reopen the debate?
Inside Higher Education – “There is little dispute that Sci-Hub, the website that provides free access to millions of proprietary academic papers, is illegal. Yet, despite being successfully sued twice by major American academic publishers for massive copyright infringement, the site continues to operate. Some academics talk openly about their use of the repository — a small number even credit Sci-Hub founder Alexandra Elbakyan for her contribution to their research. Most academics who use the site, however, choose to do so discreetly, seemingly aware that drawing attention to their activities might be unwise. Just how careful academics should be about using Sci-Hub has become a topic of concern in recent weeks, with many questioning whether sharing links to Sci-Hub could in itself be considered illegal. The discussion started when the team behind Citationsy, a bibliography management tool based in Europe, tweeted that lawyers for Elsevier, a major publisher of academic journals, had threatened to pursue legal action if Citationsy did not remove a link to Sci-Hub from its website. The link formed part of a blog post titled “Hacking Education: Download Research Papers and Scientific Articles for Free.”
… What the site does is not permitted under the law, but in the academic world Sci-Hub is praised by many, in particular those who don’t have direct access to expensive journals but aspire to excel in their academic field.
This leads to a rather intriguing situation in which many of the ‘creators,’ the people who write academic articles, openly support the site. By doing so, they go directly against the major publishers, including the billion-dollar company Elsevier, which are the rightsholders.
Elsevier previously convinced the courts that Sci-Hub is a force of evil.
For the student toolkit? New to me.
English Language & Usage Stack Exchange