Saturday, September 07, 2019


“Recent” seems to be a bit of a stretch. Why so long to let anyone know about the breach?
Meridian Community College Provides Notice of Data Incident
Meridian Community College ("MCC") is providing notice of a recent event that may have affected the privacy of personal information of certain individuals.
In late January 2019, MCC became aware of a phishing incident that resulted in the compromise of certain user credentials. MCC immediately began reviewing this activity and commenced a diligent investigation, which included working with third party forensic investigators, to confirm the nature and scope of the activity. On April 12, 2019, this investigation determined that it could not forensically rule out access to certain employee email accounts. Additionally, the forensic investigation could not determine whether specific emails in the potentially affected accounts were subject to unauthorized access.
This review concluded on June 25, 2019. MCC then undertook a diligent effort to identify contact information for those individuals whose data was present in the relevant email accounts.




Investigation by Venn diagram. One circle is all the App downloaders. Where they overlap with other circles, you have your pool of suspects.
Thomas Brewster reports:
Own a rifle? Got a scope to go with it? The U.S. government might soon know who you are, where you live and how to reach you.
That’s because the government wants Apple and Google to hand over names, phone numbers and other identifying data of at least 10,000 users of a single gun scope app, Forbes has discovered. It’s an unprecedented move: Never before has a case been disclosed in which American investigators demanded personal data of users of a single app from Apple and Google.
Read more on Forbes.
[From the article:
The Immigration and Customs Enforcement (ICE) department is seeking information as part of a broad investigation into possible breaches of weapons export regulations. It’s looking into illegal exports of ATN’s scope, though the company itself isn’t under investigation, according to the order. As part of that, investigators are looking for a quick way to find out where the app is in use, as that will likely indicate where the hardware has been shipped. ICE has repeatedly intercepted illegal shipments of the scope, which is controlled under the International Traffic in Arms Regulation (ITAR), according to the government court filing. They included shipments to Canada, the Netherlands and Hong Kong where the necessary licenses hadn’t been obtained.




Must tolerate frustration.
Pentagon seeks 'ethicist' to oversee military artificial intelligence
Wanted: military “ethicist”. Skills: data crunching, machine learning, killer robots. Must have: cool head, moral compass and the will to say no to generals, scientists and even presidents.
The Pentagon is looking for the right person to help it navigate the morally murky waters of artificial intelligence (AI), billed as the battlefield of the 21st century.




Gain control early.
Google files patent for using A.I. to track a baby’s body and eye movements
According to a patent application filed last year and published on Thursday, Google is researching technology that could track a baby’s eyes, movements and sounds using “intelligent” audio and video. If the behavior seems abnormal, the cloud-based system would notify parents on their device.




Perspective. How critical is the Internet?
Thousands Of Optimum Customers Without Internet & TV Service, Police Say Stop Calling 911 To Report It




As change (the introduction of new technologies) accelerates, the lifespan of old training is reduced.
120 million workers will need to be retrained due to AI, says IBM study
Artificial Intelligence is apparently ready to get to work. Over the next three years, as many as 120 million workers from the world's 12 largest economies may need to be retrained because of advances in artificial intelligence and intelligent automation, according to a study released Friday by IBM's Institute for Business Value. However, less than half of CEOs surveyed by IBM said they had the resources needed to close the skills gap brought on by these new technologies.



Friday, September 06, 2019


What has Hong Kong triggered?
Growing backlash in China against A.I. and facial recognition
China’s seemingly unfettered push into facial recognition is getting some high-level pushback.
Face-swapping app Zao went viral last weekend, but it subsequently triggered a backlash from media — both state-run and private — over the apparent lack of data privacy protections.
“The future has come, artificial intelligence is not only a test for technological development, but a test for governance,” city newspaper The Beijing News wrote Sunday in Chinese, according to a CNBC translation.




For my Disaster Recovery class. Grab the PDF!
How to (Inadvertently) Sabotage Your Organization
In 1944, the Office of Strategic Services (OSS), the Central Intelligence Agency’s predecessor — headed by legendary William “Wild Bill” Donovan — put together a secret field manual for sabotaging enemy organizations. The manual encouraged “simple acts” of destruction that required no special training, tools, or equipment, with minimal “danger of injury, detection, and reprisal,” and that, crucially, could be executed by “ordinary citizens.”




What standards? What oversight?
More Than Half of U.S. Adults Trust Law Enforcement to Use Facial Recognition Responsibly
But the public is less accepting of facial recognition technology when used by advertisers or technology companies: “The ability of governments and law enforcement agencies to monitor the public using facial recognition was once the province of dystopian science fiction. But modern technology is increasingly bringing versions of these scenarios to life. A recent investigation found that U.S. law enforcement agencies are using state Department of Motor Vehicles records to identify individual Americans without their consent, including those with no criminal record. And countries such as China have made facial recognition technology a cornerstone of their strategies to police the behaviors and activities of their publics. Despite these high-profile examples from fiction and reality, a new Pew Research Center survey finds that a majority of Americans (56%) trust law enforcement agencies to use these technologies responsibly. A similar share of the public (59%) says it is acceptable for law enforcement to use facial recognition tools to assess security threats in public spaces…”


(Related)
An ICO spokesperson said:
“We will be reviewing the judgment carefully. We welcome the court’s finding that the police use of Live Facial Recognition (LFR) systems involves the processing of sensitive personal data of members of the public, requiring compliance with the Data Protection Act 2018. This new and intrusive technology has the potential, if used without the right privacy safeguards, to undermine rather than enhance confidence in the police.
Our investigation into the first police pilots of this technology has recently finished. We will now consider the court’s findings in finalising our recommendations and guidance to police forces about how to plan, authorise and deploy any future LFR systems.
In the meantime, any police forces or private organisations using these systems should be aware that existing data protection law and guidance still apply.”
So if you are not already aware of the High Court’s finding that the use of live (real-time) facial recognition systems by police is lawful, you can find the press summary here (pdf).




Conclusion: We better do something. Free ebook available.
Rand Report – Hostile Social Manipulation
Hostile Social Manipulation – Present Realities and Emerging Trends: “The role of information warfare in global strategic competition has become much more apparent in recent years. Today’s practitioners of what this report’s authors term hostile social manipulation employ targeted social media campaigns, sophisticated forgeries, cyberbullying and harassment of individuals, distribution of rumors and conspiracy theories, and other tools and approaches to cause damage to the target state. These emerging tools and techniques represent a potentially significant threat to U.S. and allied national interests. This report represents an effort to better define and understand the challenge by focusing on the activities of the two leading authors of such techniques — Russia and China.




Okay, why do humans matter?
Will AI replace university lecturers? Not if we make it clear why humans matter
… Forget robo-lecturers whirring away in front of whiteboards: AI teaching will mostly happen online, in 24/7 virtual classrooms. AI machines will learn to teach by ferreting out complex patterns in student behaviour – what you click, how long you watch, what mistakes you make, even what time of day you work best. This will then be linked to students’ “success”, which might be measured by exam marks, student satisfaction or employability.
AI edtech developers are nothing if not ambitious: this month, UK company Century Tech will partner the Flemish regional government to launch AI assistants in schools across half of Belgium.
Until now there’s been one big challenge to wholesale takeover by teaching machines: AI requires vast amounts of data to train on before it can spot patterns. But a large dataset now exists for student behaviour, thanks to the hundreds of thousands of students who have followed MOOCs (massive online open courses) over the past decade.
The big question mark around MOOCs was how they could survive by giving away course content for free. With uncomfortable echoes of recent data controversies, it may turn out that building the training database for AI teaching was the MOOC business plan all along.
Replacing all lecturers with AI is probably still some years off. The ethical and educational challenges, which include AI’s inbuilt biases, the importance of lecturers’ pastoral role amid increasing mental health concerns, and the idea that “consuming content” is equivalent to learning, are so unsettling I’d like to think we wouldn’t let it happen. But I worry that the combined pressures of technology and economics frequently prove irresistible. If machines can replace doctors, why not academics too?




Perspective.
Streaming makes up 80 percent of the music industry’s revenue
More people are streaming music through services like Apple Music and Spotify, and the record industry is seeing a major lift.
Revenue made from streaming services in the United States grew by 26 percent in the first six months of the year, according to trade group Recording Industry Association of America, as reported by The Wall Street Journal. That makes for a revenue of $4.3 billion, according to research conducted by the group, which represents approximately 80 percent of the music industry’s overall revenue.




Perspective. (Video)
The internet's second revolution | The Economist
The second half of humanity is joining the internet. People in countries like India will change the internet, and it will change them.




Capabilities.
Having just attended the Huawei keynote here at the IFA trade show, there were a couple of new features enabled through AI that were presented on stage that made the hair on the back of my neck stand on end. Part of it is just an impression of how quickly AI in hand-held devices is progressing, but the other part of it makes me think about how it can be misused.
"Real-Time Multi-Instance Segmentation"
Firstly, AI detection in photos is not new. Identifying objects isn’t new. But Huawei showed a use case where several people were playing musical instruments, and the smartphone camera could detect both the people from the background, and the people from each other. This allowed the software to change the background, from an indoor scene to an outdoor scene and such. What this also enabled was that individuals could be deleted, moved, or resized.
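Huawei's demo presumably runs a neural network, but the core idea of instance separation — assigning each connected foreground region its own label so it can be deleted, moved, or resized independently — can be illustrated with a toy flood-fill pass over a binary mask. Everything below (the function, the tiny 3x5 "scene") is an illustrative stand-in, not Huawei's method:

```python
from collections import deque

def label_instances(mask):
    """Label connected foreground regions in a binary mask.

    mask: list of lists of 0/1 (1 = foreground pixel).
    Returns a grid of the same shape where each 4-connected
    region ("instance") gets its own integer label, plus the
    number of instances found.
    """
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and labels[r][c] == 0:
                next_label += 1
                # Flood-fill this region with its own label.
                queue = deque([(r, c)])
                labels[r][c] = next_label
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return labels, next_label

# Two separate foreground blobs (stand-ins for two people) on a
# background of zeros.
scene = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
]
labels, count = label_instances(scene)
print(count)  # 2 distinct instances
```

Once each instance carries its own label, per-person edits (erase, move, resize) reduce to operating on the pixels that share a label — which is exactly what makes the demo unsettling.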
… Detecting Heart Rate with Cameras
The second feature was related to Health and AR. By using a pre-trained algorithm, Huawei showed the ability for your smartphone to detect your heart rate simply by the front facing camera (and assuming the rear facing camera too). It does this by looking at small facial movements between video frames, and works on the values it predicts per pixel to get an overall picture.
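The signal-processing idea behind camera-based pulse detection can be sketched in a few lines: reduce each frame to a single number, then find the dominant frequency in the plausible heart-rate band. The brute-force frequency probe and the synthetic 72 BPM brightness signal below are illustrative assumptions — Huawei's system reportedly works on facial micro-movements and per-pixel predictions, not raw average brightness:

```python
import math

def estimate_bpm(samples, fps, lo_bpm=40, hi_bpm=180):
    """Estimate heart rate from a per-frame signal (e.g. average
    pixel brightness) by probing candidate frequencies in the
    physiologically plausible band and picking the strongest."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]
    best_bpm, best_power = lo_bpm, -1.0
    for bpm in range(lo_bpm, hi_bpm + 1):
        f = bpm / 60.0  # beats per second
        re = sum(x * math.cos(2 * math.pi * f * i / fps)
                 for i, x in enumerate(centered))
        im = sum(x * math.sin(2 * math.pi * f * i / fps)
                 for i, x in enumerate(centered))
        power = re * re + im * im
        if power > best_power:
            best_power, best_bpm = power, bpm
    return best_bpm

# Simulate 10 seconds of 30 fps video: a faint 72 BPM pulse
# riding on a constant baseline brightness.
fps, true_bpm = 30, 72
signal = [128 + 0.5 * math.sin(2 * math.pi * (true_bpm / 60.0) * i / fps)
          for i in range(10 * fps)]
print(estimate_bpm(signal, fps))  # 72
```

Real video adds lighting drift, head motion and compression noise, which is where the pre-trained model earns its keep — but the recoverable signal is the same periodic one this toy probe finds.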



Thursday, September 05, 2019


Clearly a city with a recovery plan that works. The insurance inspired detour bothers me.
Ransomware gang demands $5.3 million from New Bedford; city restores from backup instead
The hackers demanded the exorbitant sum of $5.3 million for the decryption keys, but city officials decided not to cave in. Instead, they made a counter-offer of $400,000, which the city’s insurer would have covered – likely at the insurer’s recommendation, as recovering from a ransomware attack the hard way typically ends up costing the same, or more. However, the gang refused, and communication between city officials and the ransomware operators broke off.
IT administrators then proceeded to recover the lost data from backups. It isn’t immediately clear if the city had backed up all the data encrypted by Ryuk. [Either data created since the last backup or files identified in backup planning as unnecessary. Bob]


(Related) What systems are required to teach?
School officials: Ransomware prompts school closure in Flagstaff
No specific details about type of ransomware or any ransom demands, but schools are closed in Flagstaff, AZ today due to a ransomware attack that impacted a number of systems needed for day-to-day operations. Video news clip here: https://www.fox10phoenix.com/video/601919




Not enough detail to say it was a Facebook server or how long it had been unprotected.
Facebook Data On 419 Million Users Found On the Internet
Data on 419 million Facebook users were found online, impacting customers from the U.K. to the U.S.
Sanyam Jain, a researcher from the GDI Foundation, discovered a database on an unprotected server, TechCrunch reported. The data included phone numbers, Facebook IDs, user names, gender and the countries users were located in. It's not clear why the data was scraped from the social media network or who was behind it.
In a statement to Engadget, a Facebook spokesperson said the dataset is old [Does that mean the hackers have had it for a long time? Bob] and has information that was removed last year, including using phone numbers to find other users. The spokesperson said the dataset has been taken down and there is "no evidence" Facebook accounts were impacted. [and no evidence accounts were not impacted. Bob]




We also need to know how we ‘opted in’ in the first place. Let’s hope they grow quickly!
Here’s a site that you may want to check out: https://simpleoptout.com/
From its home page:
Simple Opt Out is drawing attention to opt-out data sharing and marketing practices that many people aren’t aware of (and most people don’t want), then making it easier to opt out. For example:
  • Target “may share your personal information with other companies which are not part of Target.”
  • Chase may share your “account balances and transaction history … For nonaffiliates to market to you.”
  • Crate & Barrel may share “your customer information [name, postal address and email address, and transactions you conduct on our Website or offline] with other select companies.”
This site makes it easier to opt out of data sharing by 50+ companies (or add a company, or see opt-out tips). Enjoy!




Another list to get off.
US judge: 'Terrorist' watchlist violates constitutional rights
The United States government's watchlist of more than one million people identified as "known or suspected terrorists'" violates the constitutional rights of those placed on it, a US federal judge ruled on Wednesday.
The ruling from District Judge Anthony Trenga in Virginia grants summary judgment to nearly two dozen Muslim US citizens who had challenged the watchlist with the help of the civil-rights group, the Council on American-Islamic Relations (CAIR).
But the judge is seeking additional legal briefs before deciding what remedy to impose.
Trenga also wrote in his 31-page ruling that the case "presents unsettled issues."
Ultimately, Trenga ruled that the travel difficulties faced by plaintiffs - who say they were handcuffed at border crossings and frequently subjected to invasive secondary searches at airports - are significant, and that they have a right to due process when their constitutional rights are infringed.
He also said the concerns about erroneous placement on the list are legitimate.




The more a ‘super app’ can do, the more we will rely on it and give AI the opportunity to influence us.
Grab will invest US$150 million in AI to build regional super app
South-east Asian ride-hailing start-up Grab Holdings intends to invest US$150 million (S$207.5 million) in artificial intelligence research over the next year, accelerating its expanding business that now includes food delivery, digital payments and digital content.
Grab, in hot competition with local rival Gojek to become South-east Asia's do-it-all super app, outlined for the first time a blueprint for its use and deployment of AI.
At the heart of the company's global effort is an ambition to create an all-in-one "super app" akin to Tencent's WeChat for China. The company's GrabPay service already allows consumers to pick up the tab for rides and order food, and it's expanding into lending and insurance.
The company is also said to be considering applying for a digital banking licence if Singapore allows it.




If you can’t tell, has the bot passed the Turing test?
On the Internet, Nobody Knows You’re a Bot
Brian Friedberg is an investigative ethnographer whose work focuses on the impacts that alternative media, anonymous communities and popular cultures have on political communication and organization. Brian works with Dr. Joan Donovan, who heads one of the world’s leading teams focused on understanding and combating online disinformation and extremism, based at Harvard’s Shorenstein Center on Media, Politics and Public Policy. In this essay, Brian and Joan explore a challenge the Unreal has presented for the study of activism online: the question of whether an online actor is real or synthetic, and what happens when politically motivated humans impersonate vulnerable people or populations online to exploit their voices, positionality and power.
See also – Response: “The Dangers of Weaponized Truth,” in which Brandi Collins-Dexter from Color of Change responds to Friedberg & Donovan’s essay “On the Internet Nobody Knows You’re a Bot.”




Will insurance companies offer AI powered health assistants to monitor our bodies, adding drugs as needed and when all else fails turning us off?
Artificial intelligence in medicine raises legal and ethical concerns
AI in medicine also raises significant legal and ethical challenges. Several of these are concerns about privacy, discrimination, psychological harm and the physician-patient relationship. In a forthcoming article, I argue that policymakers should establish a number of safeguards around AI, much as they did when genetic testing became commonplace.
Data broker industry giants such as LexisNexis and Acxiom are also mining personal data and engaging in AI activities. They could then sell medical predictions to any interested third parties, including marketers, employers, lenders, life insurers and others. Because these businesses are not health care providers or insurers, the HIPAA Privacy Rule does not apply to them. Therefore, they do not have to ask patients for permission to obtain their information and can freely disclose it.




For my geeks.



Wednesday, September 04, 2019


Keep up!
Round-Up of Recent Changes to U.S. State Data Breach Notification Laws




We should have this figured out in a few (Okay, 30) years.
Disinformation and the 2020 Election: How Social Media Industry Should Prepare
NYU Stern Center for Business and Human Rights – The role of social media in a democracy. “In our fourth report on online disinformation, the NYU Stern Center for Business and Human Rights explores risks to democracy and free speech posed by the expected spread of disinformation during the 2020 U.S. presidential election. The report outlines steps the social media companies should take to counter the coming wave of disinformation. Preparing for the fight against false and divisive content will not be cost-free. But investments in R&D and personnel ultimately will help social media platforms restore their brand reputations and slow demands for draconian government regulation.
Social media companies’ policies on disinformation often lack clarity and strategic foresight and have been enforced in an ad hoc fashion. To reduce the probability of governmental content regulation in the U.S., these companies should show they can close the governance gap when it comes to disinformation. Read our examination of how social media companies have reacted to politically oriented false content, and the disinformation tactics they will need to prepare for in 2020…”


(Related)
US plans for fake social media run afoul of Facebook rules
Facebook said Tuesday that the U.S. Department of Homeland Security would be violating the company’s rules if agents create fake profiles to monitor the social media of foreigners seeking to enter the country.
“Law enforcement authorities, like everyone else, are required to use their real names on Facebook and we make this policy clear,” Facebook spokeswoman Sarah Pollack told The Associated Press in a statement Tuesday. “Operating fake accounts is not allowed, and we will act on any violating accounts.”
Pollack said the company has communicated its concerns and its policies on the use of fake accounts to DHS. She said the company will shut down fake accounts, including those belonging to undercover law enforcement, when they are reported.




For discussion.
Russell Brandom reports on another case where law enforcement served Google with a search warrant,
...asking for data that would identify any Google user who had been within 100 feet of the bank during a half-hour block of time around the robbery. They were looking for the two men who had gone into the bank, as well as the driver who dropped off and picked up the crew, and would potentially be caught up in the same dragnet. It was an aggressive technique, scooping up every Android phone in the area and trusting police to find the right suspects in the mess of resulting data. But the court found it entirely legal, and it was returned as executed shortly after.
Read more about this type of reverse warrant on The Verge, and then think about whether you leave your cellphone’s default location setting ON or OFF.




Moving slowly is better than not moving at all.
Facebook will no longer scan user faces by default
Facebook is making facial recognition in photos opt-in by default. Starting today, it’s rolling out its Face Recognition privacy setting, which it first introduced in December 2017, to all users. If you have Face Recognition turned on, Facebook will notify you if someone uploads a photo of you, even if you aren’t tagged. You can then tag yourself, stay untagged, or report the photo if it’s something you want taken down. Facebook tells The Verge it expects to complete the rollout over the next several weeks.




Everything helps.
Transferring Data Under GDPR
We have found that beliefs about managing data transfers can be broad and confusing since the EU General Data Protection Regulation (GDPR) took effect in May 2018. Some believe no data transfers outside of the EU are allowed. Others believe if you have a legitimate business reason to transfer data, and an agreement with the customer, it is simply business as usual. The real answer often lives in between.
We will walk through the GDPR requirements for processing personal data to help you envision how the GDPR data transfer rules may apply to your organization and your customers.




Confusing, isn’t it?
German court decides that GDPR consent can be tied to receiving advertising
On June 27, 2019, the High Court of Frankfurt decided that a consent for data processing tied to a consent for receiving advertising can be considered as freely given under the GDPR.
The claimant’s consent had been obtained in connection with his participation in a sweepstakes contest. In order for the claimant to participate in the contest, he had to consent to receive advertising from partners of the sweepstakes company
In line with previous case law, the court decided that bundling consent for advertising with the participation in a sweepstakes contest does not prevent it from being “freely given”. According to the court, “freely given” consent is a consent that is given without “coercion” or “pressure”. The court decided that enticing a customer with a promise of a discount or the participation in a sweepstakes contest in exchange for the consent to process his data for advertising does not amount to such coercion or pressure. According to the court, “a consumer may and should decide himself or herself if the participation in the sweepstakes is worth his or her data”.




Do you have a secure procedure for forwarding email?
Beware of web beacons that can secretly monitor your email
Legal By the Bay – Joanna L. Storey: “A twist in the recent prosecution of a Navy Seal charged with killing a prisoner in Iraq in 2017 brought to the forefront an ethics issue that has been squarely addressed by several jurisdictions, but not yet in California: the unethical surreptitious tracking of emails sent to opposing counsel using software embedded in a logo or other image. Also known as a web beacon, the tracking software is an invisible image no larger than a pixel that is placed in an email and, once activated, monitors such actions as when the email was opened, for how long, how many times, where, and whether the email was forwarded. The sender’s goal may be to determine how seriously you are considering a settlement demand that he attached to an email – the more you view the email, the more you may be inclined to accept the demand. Or, the sender may want to know to where you forward the email (e.g., you may forward the email to a client whose location is unknown to opposing counsel)….”
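Mechanically, a web beacon is trivial to build, which is part of why the ethics question matters. A minimal sketch of what a sender embeds — the tracker domain, recipient ID and token scheme below are all hypothetical:

```python
import secrets
from urllib.parse import urlencode

# A 1x1 transparent GIF: the smallest valid image a mail client
# will fetch and render invisibly (this is what the tracking
# server returns for every beacon request).
TRANSPARENT_GIF = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
    b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
    b"\x00\x00\x02\x02D\x01\x00;"
)

def beacon_tag(base_url, recipient_id):
    """Build the <img> tag a sender would embed in an HTML email.

    Because the URL carries a per-recipient token, every open of
    the email (including opens by people it was forwarded to)
    triggers a GET that tells the server who opened it, when, and
    roughly where and on what device, via the request's IP address
    and User-Agent header.
    """
    token = secrets.token_urlsafe(8)
    query = urlencode({"r": recipient_id, "t": token})
    return f'<img src="{base_url}?{query}" width="1" height="1" alt="">'

tag = beacon_tag("https://tracker.example.com/open.gif", "opposing-counsel-42")
print(tag)
```

The practical defense is equally simple: configure your mail client to block remote images by default, so the GET never fires.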




Interesting article. Do my grad students know as much?
MIT developed a course to teach tweens about the ethics of AI
This summer, Blakeley Payne, a graduate student at MIT, ran a week-long course on ethics in artificial intelligence for 10-14 year olds. In one exercise, she asked the group what they thought YouTube’s recommendation algorithm was used for.
“To get us to see more ads,” one student replied.
“These kids know way more than we give them credit for,” Payne said.
Payne created an open source, middle-school AI ethics curriculum to make kids aware of how AI systems mediate their everyday lives, from YouTube and Amazon’s Alexa to Google search and social media.
“Kids today are not just digital natives, they are AI natives,” said Cynthia Breazeal, Payne’s advisor and the head of the personal robots group at the MIT Media Lab. Her group has developed an AI curriculum for preschoolers.




Training the next generation. Probably worth considering?
4 Ways to Avoid Having AI Release Consumers’ Inner Sociopath
“Alexa, you’re ugly. Alexa, you’re stupid. Alexa, you’re fat.”
This barrage of abuse came from my friend’s children, who were shouting at his Amazon device, trying to prompt a witty comeback from the AI assistant. What was just a game to the kids looked a lot like the worst kind of playground bullying, and as my friend unplugged the device, he scolded, “We don’t talk to people like that.”
But unfortunately, we do talk like that, especially to AI assistants and chatbots that are unable to establish the boundaries that humans do. After all, if you hit someone, they may hit you back. If you call your barista ugly, you should expect them to spit in your latte. In their inability to push back, virtual assistants and chatbots shield us from the consequences of bad behavior.


(Related)
Can Artificial Intelligence Help Prevent Mental Illness?
The company has developed a wearable device, an app and machine learning system to collect data and monitor users’ level of stress, before predicting when it could be the cause of a more serious or physical health condition.
Mental illness is one of the biggest medical challenges of the 21st century. According to the World Health Organization, around 450 million people globally are affected by mental illness.
But two-thirds of people with a known mental condition, such as anxiety, depression and co-occurring disorders, fail to seek help from medical professionals. This can be due to a number of factors, including stigma and discrimination.


(Related)
15 Social Challenges AI Could Help Solve




What could possibly go wrong?
Air Force-Affiliated Researchers Want to Let AI Launch Nukes
Air Force Institute of Technology associate dean Curtis McGiffin and Louisiana Tech Research Institute researcher Adam Lowther, also affiliated with the Air Force, co-wrote an article — with the ominous title “America Needs a ‘Dead Hand’” — arguing that the United States needs to develop “an automated strategic response system based on artificial intelligence.”
In other words, they want to give an AI the nuclear codes. And yes, as the authors admit, it sure sounds a lot like the “Doomsday Machine” from Stanley Kubrick’s 1964 satire “Dr. Strangelove.”
The “Dead Hand” referenced in the title refers to the Soviet Union’s semiautomated system that would have launched nuclear weapons if certain conditions were met, including the death of the Union’s leader.
This time, though, the AI-powered system suggested by Lowther and McGiffin wouldn’t even wait for a first strike against the U.S. to occur — it would know what to do ahead of time.




Dilbert offers a simple solution for bias.