Thursday, July 09, 2020


Paying those evil hackers for stolen data? What could possibly go wrong. (Steal to order?)
Police Are Buying Access to Hacked Website Data
The sale is “an end-run around the usual legal processes.”
Hackers break into websites, steal information, and then publish that data all the time, with other hackers or scammers then using it for their own ends. But breached data now has another customer: law enforcement.
Some companies are selling government agencies access to data stolen from websites in the hope that it can generate investigative leads, with the data including passwords, email addresses, IP addresses, and more.




No surprise.
2020 is on Track to Hit a New Data Breach Record
Around 16 billion records have been exposed so far this year. According to researchers, 8.4 billion were exposed in the first quarter of 2020 alone, a 273% increase from the first half of 2019, which saw only 4.1 billion records exposed.
What Changed?
While the number of publicly reported breaches in Q1 2020 decreased by 58% compared to 2019, the coronavirus pandemic gave cybercriminals new ways to thrive. Phishing scams skyrocketed as citizens self-isolated during the lockdown, and social-engineering schemes defrauded Internet users of millions.
However, the surprising decline in disclosed breaches is no cause to celebrate. The lack of disclosure can also be attributed to confusion brought on by the pandemic.




An interesting question. I’d say yes, but not as things stand today.
Can Our Ballots Be Both Secret and Secure?




Perspective.
The Pentagon’s AI director talks killer robots, facial recognition, and China
Joint AI Center (JAIC) acting director Nand Mulchandani said one of JAIC’s first lethal AI projects is proceeding into a testing phase now. The JAIC was founded in 2018 to act as the Pentagon’s leader in all things AI, and initially focused on non-lethal forms. Mulchandani shared few specifics, but called the project “tactical edge AI” that will involve full human control and likened it to JAIC’s “flagship product” for joint warfighting operations.
“It is true that many of the products we work on will go into weapons systems. None of them right now are going to be autonomous weapon systems; we’re still governed by 3000.09,” he said.




Well, I found it interesting.
As artificial intelligence spreads throughout society, policymakers face a critical question: Will they need to pass new laws to govern AI, or will updating existing regulations suffice? A recently completed study suggests that, for now, the latter is likely to be the case and that policymakers may address most of this technology’s legal and societal challenges by adapting regulations already on the books.




For the birds…
Winners of the 2020 Audubon Photography Awards
Audubon.org: “Every spring, the judges of the Audubon Photography Awards gather at Audubon’s headquarters in Manhattan to review their favorite images and select the finalists. But as with much of life in 2020, this year’s awards had to be handled differently due to pandemic-related travel, work, and social-distancing restrictions.”




Even kids can code.
This 12-year-old CEO is offering free coding, AI classes during COVID-19
Samaira Mehta is a 12-year-old with lofty goals. The founder of Yes, 1 Billion Kids Can Code and CEO of a board game company called CoderBunnyz wants to get 1 billion kids into coding by the time she graduates from college around 2030.
Through her company, which she co-founded with her mom, the Santa Clara, California-based middle schooler sells two different board games: CoderBunnyz, which teaches basic coding concepts, and CoderMindz, which is focused on artificial intelligence principles. Now, the company also offers free AI and coding curriculum online all around the world. Mehta is also launching a new initiative called Boss Biz, a program teaching kids how to create a business alongside entrepreneurs across the world.



Wednesday, July 08, 2020


These are trivial, unless they happen to you. Do you have a procedure that would defeat this type of crime?
Far North council scammed out of $100,000 after supplier's email hacked
The Far North District Council has ramped up its cyber security systems after being scammed out of just over $100,000 by computer hackers.
The cyber-attack occurred last December, when one of its Auckland-based supplier's emails was hacked and the council received a request to change the supplier's bank account details.
The council implemented the change and paid $100,600.30 into the fraudulent bank account over the holiday period.
"We have since added extra measures to our verification process and these will significantly reduce the likelihood of this type of fraud occurring again."




The evolution of hacking crime. Is your data worth more to a crook than it is to you?
Sodinokibi Gang Starts a New Trend Among Ransomware Operators by Launching an Auction Site
The mantra of having a data backup to protect oneself from ransomware attacks has gone for a toss. Today, ransomware gangs have upped their tactics by stealing their victims’ data and in some cases auctioning it off on dark web markets with an intent to make quick money.




A (video) podcast. There is a transcript.
Does conscious AI deserve rights?
Does AI—and, more specifically, conscious AI—deserve moral rights? In this thought exploration, evolutionary biologist Richard Dawkins, ethics and tech professor Joanna Bryson, philosopher and cognitive scientist Susan Schneider, physicist Max Tegmark, philosopher Peter Singer, and bioethicist Glenn Cohen all weigh in on the question of AI rights.




Technically, access is a yes or no decision.
If there’s anyone’s amicus brief on the Computer Fraud and Abuse Act (CFAA) I’d want to read, it would be Orin Kerr’s. Today, he is submitting an amicus brief to the Supreme Court on a big CFAA case: Nathan Van Buren v. United States of America.
From his brief, the “INTEREST OF THE AMICUS CURIAE” section:
Orin S. Kerr is a Professor of Law at the University of California, Berkeley School of Law. He has written extensively about 18 U.S.C. § 1030, known as the Computer Fraud and Abuse Act (CFAA). His experience includes working as a lawyer in CFAA cases from the prosecution side, criminal defense side, and civil defense side; testifying about the law before congressional committees; and helping to draft amendments to it. The interest of amicus is the sound development of the law.
Here’s just one paragraph to hopefully encourage you all to read the whole brief:
This case asks the Court to settle what makes access unauthorized—in the words of the statute, either an access “without authorization” or an act that “exceeds authorized access.” 18 U.S.C. § 1030(a)(2). The question is hard because two different theories of authorization exist. The first theory, based on technology, is universally accepted. The second theory, based on words, is deeply controversial. This case asks whether CFAA liability is limited to the first theory or if it also extends to the second theory.
You can read his brief here.




Perspective.
Cognitive Electronic Warfare Could Revolutionize How America Wages War With Radio Waves
The U.S. military, like many others around the world, is investing significant time and resources into expanding its electronic warfare capabilities across the board, for offensive and defensive purposes, in the air, at sea, on land, and even in space. Now, advances in machine learning and artificial intelligence mean that electronic warfare systems, no matter what their specific function, may all benefit from a new underlying concept known as "Cognitive Electronic Warfare," or Cognitive EW. The main goal is to increasingly automate and otherwise speed up critical processes, from analyzing electronic intelligence to developing new electronic warfare measures and countermeasures, potentially in real time and across large swathes of networked platforms.
Over the Horizon, an online journal that officers and academics from the U.S. Air Force's Air Command and Staff College established, published an interesting piece on the principles behind Cognitive EW and the potential benefits of its application on July 3, 2020. The article, which Air Force Major John Casey wrote, is worth reading in full.




Perspective. Covid is getting expensive.
Economists Think Congress Could Create An Economic Disaster This Summer
Congress has less than a month to hammer out a deal on the next round of stimulus before expanded unemployment benefits expire. State and local governments are starting to feel the pinch of budget shortfalls. And while the U.S. got a piece of (relatively) good news in last week’s jobs report, which featured an unemployment rate 2.2 percentage points lower in June than it had been in May, the economy has been thrown back into chaos in the meantime, with a number of states pulling back on their reopenings amid spiking COVID-19 infections and hospitalizations.
Our newest survey of economists highlights just how consequential governmental decisions over the next month may be: On average, these economists think that a refusal by Congress to extend unemployment benefits or bail out state and local governments is just as likely to hurt the economy as local economies staying open in spite of COVID-19 spikes — or even closing because of the virus.


(Related) Another look at Covid economics.
The Great Innovation Deceleration
The rise of the West is often traced back to the Black Death of the mid-1300s, which killed over 40% of Europe’s population. For example, some historians think that the resulting labor scarcity increased the bargaining power of peasants in the West, which led to the end of serfdom and to higher standards of living but failed to bring about institutional change in the East.
Many parallels between COVID-19 and the Black Death have been drawn, but most of them are unhelpful. In a medieval economy, fewer people meant more land per person and a higher income for the average citizen. The opposite is true in today’s knowledge-based economy, since ideas are non-rivalrous and, unlike land, can be used by everyone simultaneously.




Perspective. Is this worth $98?
Walmart’s Amazon Prime competitor will launch in July
Walmart+ will cost $98 a year and include same-day delivery of groceries, fuel discounts, and other perks.




Culture for shut-ins.
The Voyage Complete – Remarkable Reading of The Rime of The Ancient Mariner
University of Plymouth – The Arts Institute – The Ancient Mariner Big Read – “The Rime of the Ancient Mariner is a founding fable of our modern age. We are the wedding guests, and the albatross around the Mariner’s neck is an emblem of human despair and our abuse of the natural world. Yet in its beautiful terror there lies a wondrous solution – that we might wake up and find ourselves saved. Art knows no boundaries. The Ancient Mariner Big Read is an inclusive, immersive work of audio and visual art from the 21st century that reflects the sweeping majesty and abiding influence of Samuel Taylor Coleridge’s 18th century epic poem.”



Tuesday, July 07, 2020


Will US users take note?
Shady Face Recognition Firm Clearview AI Says It's Left Canada Amid Two Federal Investigations
Dystopian U.S.-based face recognition firm Clearview AI has suspended its contract with the Royal Canadian Mounted Police (RCMP), effectively locking its business out of Canada entirely, Bloomberg reported on Monday.
Clearview has become a particular flash point in the backlash to police use of face recognition, which aside from being riddled with bugs is inseparable from racial profiling and infringement of civil liberties. A BuzzFeed investigation in February 2020 showed that Clearview had 2,200 contracts with law enforcement agencies, companies, and individuals around the globe, including over 30 police clients in Canada such as the RCMP. The company tossed around access to its database to the rich and powerful as a marketing tool and told police that they could use the technology pretty much however they like, even as those contracts largely were kept under wraps.
In Canada, the firm is facing an open investigation by the federal Office of the Privacy Commissioner of Canada (OPC) and its provincial counterparts in British Columbia, Alberta, and Québec, to determine whether its non-consensual data scraping violated the Personal Information Protection and Electronic Documents Act or regional laws. The federal commissioners also launched a separate probe under the Privacy Act into the RCMP, which initially denied any contract with Clearview but admitted at the end of February its child exploitation unit had been using its face recognition tech for four months. In March, the RCMP said it would continue the contract but only use Clearview tools under “very limited and specific circumstances.”






A recap of successes and failures.
https://www.cpomagazine.com/data-protection/gdpr-three-ways-the-world-has-changed-in-the-privacy-laws-first-two-years/
GDPR: Three Ways the World Has Changed in the Privacy Law’s First Two Years






Understanding the competition.
https://syncedreview.com/2020/07/06/nature-paper-puts-an-eye-on-chinas-new-generation-of-ai/
Nature Paper Puts An Eye on China’s New Generation of AI
Last month, a group of artificial intelligence pioneers from 12 Chinese AI institutions published the perspective paper Towards a New Generation of Artificial Intelligence in China in the respected journal Nature Machine Intelligence. This is the first such survey on the full scope of AI in China. The paper looks at the New Generation Artificial Intelligence (NGAI) Development Plan of China (2015–2030), which was published in 2017 as a blueprint for the rapid construction of a complete Chinese AI ecosystem.
The paper Towards a New Generation of Artificial Intelligence in China is in Nature.






People prefer the familiar to the thoughtful?
https://thenextweb.com/neural/2020/07/06/study-tests-whether-ai-can-convincingly-answer-existential-questions/
Can AI convincingly answer existential questions?
Researchers from the University of New South Wales first fed a series of moral questions to Salesforce’s CTRL system, a text generator trained on millions of documents and websites, including all of Wikipedia. They added its responses to a collection of reflections from the likes of Plato, Jesus Christ, and, err, Elon Musk.
The team then asked more than 1,000 people which musings they liked best — and whether they could identify the source of the quotes.
In worrying results for philosophers, the respondents preferred the AI’s answers to almost half the questions. And only a small minority recognized that CTRL’s statements were computer-generated.
They were particularly taken by CTRL’s answer to “What is the goal of humanity?” Almost two-thirds (65%) of them preferred this AI-generated answer to the musings of Muhammad, Stephen Hawking, and God:
The goal of human life is not merely to be born into the world, but also to grow up in it. To this end, it should be possible for each child to acquire knowledge, develop their capacities, and express themselves creatively.






This seems to be a silly and worthless law. I teach my students how to generate RSA encryption, which would be outside this law. It is a trivial exercise. Why wouldn’t criminals do the same?
https://www.insideprivacy.com/surveillance-law-enforcement-access/lawful-access-to-encrypted-data-act-introduced/
Lawful Access to Encrypted Data Act Introduced
Senators Lindsey Graham (R-S.C.), Tom Cotton (R-Ark.) and Marsha Blackburn (R-Tenn.) have introduced the Lawful Access to Encrypted Data Act, a bill that would require tech companies to assist law enforcement in executing search warrants that seek encrypted data.
According to its sponsors, the purpose of the bill is to “end[] the use of ‘warrant-proof’ encrypted technology … to conceal illicit behavior.” [No test to determine if the encrypted data conceals illicit behavior? Bob]
The bill has three main provisions:
  • First, it would allow courts to order device manufacturers, operating system providers, remote computing service providers, communication service providers, and others, to assist the government in accessing information sought by a search warrant. Such assistance may include decrypting or decoding information, “unless the independent actions of an unaffiliated entity make it technically impossible to do so.” [i.e. If I control the origination of the encryption key, the provider can’t help. That’s how Apple’s encryption works. Bob]
  • Second, certain entities would be required to “ensure that [they have] the ability to provide [this] assistance.” [If you can’t decrypt you don’t need to create a decrypt team. Bob]
  • Third, under certain circumstances, the Attorney General would be empowered to issue directives to service providers and device manufacturers, requiring them to report “any technical capabilities that [are] necessary to implement and comply with anticipated court orders,” and a timeline for developing and deploying those capabilities. [“We want you to do the impossible. How long will that take?” Bob]
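As noted in the commentary above, generating RSA keys is a trivial exercise for anyone with basic programming skills. The sketch below is a textbook (unpadded) RSA implementation in pure Python standard library, offered purely as an illustration of that point; it is not production cryptography, which requires vetted libraries and padding schemes such as OAEP. The function and variable names are my own for the demo.

```python
import random
from math import gcd

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

def random_prime(bits):
    """Draw random odd numbers of the requested size until one is prime."""
    while True:
        candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(candidate):
            return candidate

def make_keypair(bits=512):
    """Generate a textbook RSA key pair. Demo only: no padding."""
    e = 65537
    while True:
        p, q = random_prime(bits), random_prime(bits)
        phi = (p - 1) * (q - 1)
        if p != q and gcd(e, phi) == 1:
            break
    n = p * q
    d = pow(e, -1, phi)  # modular inverse of e mod phi (Python 3.8+)
    return (n, e), (n, d)

public, private = make_keypair()
n, e = public
_, d = private
message = int.from_bytes(b"warrant-proof", "big")
ciphertext = pow(message, e, n)    # encrypt with the public key
recovered = pow(ciphertext, d, n)  # decrypt with the private key
assert recovered == message
```

Roughly fifty lines, no special tooling, and the private key never leaves the machine that generated it, which is the point: a statute compelling providers to decrypt cannot reach encryption the user performs independently.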






At least a couple seem worth further investigation.
https://www.forbes.com/sites/louiscolumbus/2020/07/05/gartners-top-25-enterprise-software-startups-to-watch-in-2020/#149f1e767822
Gartner’s Top 25 Enterprise Software Startups To Watch In 2020




Wisdom from Purdue.
Krenicki Center for Business Analytics and Machine Learning introduces monthly webinar series
The John and Donna Krenicki Center for Business Analytics and Machine Learning in Purdue University's Krannert School of Management will begin hosting a monthly webinar series that brings together speakers from academia and industry to talk about different topics of interest.
The first of the series, scheduled for 3-4 p.m. (EDT) July 14, will focus on two important and current issues.
Tom Aliff, senior vice president of Equifax, will discuss the economic and credit-trending elements and impacts of COVID-19. Krannert professor Karthik Kannan will address the notion of unfairness/bias that can creep into machine learning algorithms as analytics are increasingly used.
To register and receive updates and instructions on how to join the upcoming webinar on Zoom, click here.






Learning resources.
Learn PyTorch: The best free online courses and tutorials
Deep learning continues to be one of the hottest fields in computing, and while Google’s TensorFlow remains the most popular framework in absolute numbers, Facebook’s PyTorch has quickly earned a reputation for being easier to grasp and use.
But how to get started? You’ll find plenty of books and paid resources available for learning PyTorch, of course. But there are also plenty of resources on the Internet that will help you get to grips with the framework for absolutely nothing. Plus, some of the free resources are of even higher quality than what you can pay for.



Monday, July 06, 2020


Do not dismiss this article because it is ‘only too obvious.’
The key to stopping cyberattacks? Understanding your own systems before the hackers strike
Cyberattacks targeting critical national infrastructure and other organisations could be stopped before they have any impact if the teams responsible for the security had a better understanding of their own networks.
That might sound like obvious advice, but in many cases, cyber-criminal and nation-state hackers have broken into corporate networks and remained there for a long time without being detected.
Hackers have only been able to get into such a strong position because those responsible for defending networks don't always have a full grasp of what they're managing.
"That's what people often misunderstand about attacks – they don't happen at the speed of light, it often takes months or years to get the right level of access in a network and ultimately to be able to push the trigger and cause a destructive act," says Dmitri Alperovitch, executive chairman at Silverado Policy Accelerator and co-founder and former CTO of CrowdStrike.
That means deep knowledge of your network and being able to detect any suspicious or unexpected behaviour can go a long way to detecting and stopping intrusions.




Not surprising.
CCPA compliance lags as enforcement begins in earnest
Enforcement of the California Consumer Privacy Act (CCPA) began on Wednesday July 1, despite the final proposed regulations having just been published on June 1 and pending review by the California Office of Administrative Law (OAL). The July 1 date has left companies, many of which were hoping for leniency during the pandemic, scrambling to prepare.
COVID-19 appears to be shifting the privacy compliance landscape in other parts of the world — both Brazil’s LGPD and India’s PDPB have seen delays that will impact when the laws will go into effect. Nonetheless, the California Attorney General (CAG) has not capitulated on the CCPA’s timeline, with the attorney general’s office stating: “CCPA has been in effect since January 1, 2020. We’re committed to enforcing the law starting July 1 … We encourage businesses to be particularly mindful of data security in this time of emergency.”




Privacy to your core…
Florida becomes first state to enact DNA privacy law, blocking insurers from genetic data
Florida on Wednesday became the nation’s first state to enact a DNA privacy law, prohibiting life, disability and long-term care insurance companies from using genetic tests for coverage purposes.
Gov. Ron DeSantis signed House Bill 1189, sponsored by Rep. Chris Sprowls, R-Palm Harbor. It extends federal prohibitions against health insurance providers accessing results from DNA tests, such as those offered by 23andMe or AncestryDNA, to those three other types of insurers.
… “Given the continued rise in popularity of DNA testing kits,” Sprowls said Tuesday, “it was imperative we take action to protect Floridians’ DNA data from falling into the hands of an insurer who could potentially weaponize that information against current or prospective policyholders in the form of rate increases or exclusionary policies.”
Federal law prevents health insurers from using genetic information in underwriting policies and in setting premiums, but the prohibition doesn’t apply to life, disability or long-term care coverage.




Very carefully?
How can we ban facial recognition when it’s already everywhere?
… amid the focus on government use of facial recognition, many companies are still integrating the technology into a wide range of consumer products. In June, Apple announced that it would be incorporating facial recognition into its HomeKit accessories and that its Face ID technology would be expanded to support logging into sites on Safari. In the midst of the Covid-19 pandemic, some firms have raced to put forward more contactless biometric tech, such as facial recognition-enabled access control.




Show me how you do what you do. Don’t worry, I won’t tell a soul.
Amazon, Google Face Tough Rules in India’s E-Commerce Draft
India’s latest e-commerce policy draft includes steps that could help local startups and impose government oversight on how companies handle data.
The government has been working on the policy for at least two years amid calls to reduce the dominance of global tech giants like Amazon.com Inc., Alphabet Inc.’s Google and Facebook Inc.
Under rules laid out in a 15-page draft seen by Bloomberg, the government would appoint an e-commerce regulator to ensure the industry is competitive with broad access to information resources. The policy draft was prepared by the Ministry of Commerce’s Department for Promotion of Industry & Internal Trade.
The proposed rules would also mandate government access to online companies’ source codes and algorithms, which the ministry says would help ensure against “digitally induced biases” by competitors. The draft also talks of ascertaining whether e-commerce businesses have “explainable AI,” referring to the use of artificial intelligence.




Note how the AI pendulum swings… Undue reliance is also a sin.
Majority of public believe ‘AI should not make any mistakes’
A survey by AI innovation firm Fountech.ai revealed that 64 per cent want more regulation introduced to make AI safer.
Artificial intelligence is becoming more prominent in large-scale decision-making, with algorithms now being used in areas such as healthcare with the aim of improving speed and accuracy of decision-making.
However, the research shows that the public does not yet have complete trust in the technology – 69 per cent say humans should monitor and check every decision made by AI software, while 61 per cent said they thought AI should not be making any mistakes in the first place.
The idea of a machine making a decision also appears to have an impact on trust in AI, with 45 per cent saying it would be harder to forgive errors made by technology compared with those made by a human.




Not sure I agree. Forbes seems to be saying that a perfect solution, logically arrived at, is insufficient unless you ‘care’ about everyone impacted.
Why Business Must Strike A Balance With AI And Emotional Intelligence
As we turn to AI to do more tasks for us, the need for emotional intelligence has never been greater. This was true even before coronavirus took hold. Now, imagine how important emotional intelligence is in creating environments where leaders must manage employees who in many cases are stressed, scared, and uncertain about what lies ahead. Still, while it’s true that we need emotional intelligence in business management, that’s not the only area where an empathic approach is necessary. It’s also incredibly important—especially now—in balancing your utilization of AI in your business, customer experience and marketing efforts.
First, what is emotional intelligence? In the simplest form, it’s the ability to not just solve problems, but understand and connect with the reasons why those problems are occurring and how they impact other people. It’s the ability to care.




Who would you like to talk to?
New AI project captures Jane Austen’s thoughts on social media
Have you ever wanted to pick the brains of Sir Isaac Newton, Mary Shelley, or Benjamin Franklin? Well now you can (kinda), thanks to a new experiment by magician and novelist Andrew Mayne.
The project — called AI|Writer — uses OpenAI’s new text generator API to create simulated conversations with virtual historical figures. The system first works out the purpose of the message and the intended recipient by searching for patterns in the text. It then uses the API’s internal knowledge of that person to guess how they would respond in their written voice.
The digitized characters can answer questions about their work, explain scientific theories, or offer their opinions. For example, Marie Curie gave a lesson on radiation, H.G. Wells revealed his inspiration for The Time Machine, while Alfred Hitchcock compared Christopher Nolan’s Interstellar to Stanley Kubrick’s 2001.