Saturday, December 21, 2019


Encryption. The return of the “one-time pad.” Security depends on a unique key for each short message; reuse a key, or keep using the same one too long, and the messages can be cracked.
New "uncrackable" security system may make your VPN obsolete
Researchers at the University of St Andrews, King Abdullah University of Sciences and Technology (KAUST) and the Center for Unconventional Processes of Sciences (CUP Sciences) have developed a new uncrackable security system which is set to revolutionize communications privacy.
The international team of scientists has created optical chips that allow information to be sent from one user to another as one-time, unhackable communications that achieve 'perfect secrecy', meaning confidential data can be protected more securely than ever before.
The researchers' proposed system uses silicon chips that contain complex structures that are irreversibly changed in order to send information in a one-time key that can't be recreated or intercepted by an attacker.
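For readers who want the intuition behind 'perfect secrecy': a one-time pad is just an XOR (or modular addition) of the message with a truly random key that is as long as the message and never reused. A minimal Python sketch, purely illustrative and unrelated to the researchers' optical chips:

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # The key must be random, at least as long as the message, and used exactly once.
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so decryption is the same operation.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

message = b"meet at dawn"
key = secrets.token_bytes(len(message))   # a fresh key for every message
ciphertext = otp_encrypt(message, key)
assert otp_decrypt(ciphertext, key) == message
```

Reuse the key for a second message and the scheme collapses: XOR-ing the two ciphertexts cancels the key and leaks the XOR of the two plaintexts, which is exactly the kind of cracking the note above alludes to. The hard part, and the point of the chip research, is distributing such keys securely in the first place.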




A name I haven’t heard in twenty years.
Meet the Mad Scientist Who Wrote the Book on How to Hunt Hackers
Thirty years ago, Cliff Stoll published The Cuckoo's Egg, a book about his cat-and-mouse game with a KGB-sponsored hacker. Today, the internet is a far darker place—and Stoll has become a cybersecurity icon.




Next, let’s analyze politicians!
Artificial intelligence as behavioral analyst
… "To understand how the brain generates behavior, we need to know the "syllables," the building blocks of the behavior." Aided by artificial intelligence, Mearns and his colleagues from the Max Planck Institute of Neurobiology have broken down the hunting behavior of larval zebrafish into its basic building blocks. They show how these building blocks combine to form longer sequences.
… Catching prey is one such innate behavioral sequence, fine-tuned by experience. But how do neuronal circuits steer and combine the components of this behavior to produce a successful prey capture?
The neurobiologists from the Baier department developed a high-tech assay to investigate the details of the fish behavior. High-speed cameras recorded eye, tail and jaw movements of the fish while the animals roamed freely in a small bowl. Specially designed computer algorithms then evaluated the recorded images and assigned them to a computer-learned behavioral component. The results of thousands of fish movements revealed three components of the prey capture behavior: orientation, approach and capture.
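The article doesn't spell out the algorithms, but the pipeline it describes (turn each recorded movement bout into a feature vector, then let an unsupervised method group the bouts into recurring components) can be sketched roughly as follows. Feature names and numbers are hypothetical; scikit-learn is used for convenience:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical per-bout features extracted from the high-speed video:
# [eye convergence angle, tail-beat amplitude, tail-beat frequency, jaw opening]
bouts = np.array([
    [32.0, 0.12, 18.0, 0.02],
    [35.0, 0.10, 20.0, 0.01],
    [10.0, 0.45, 30.0, 0.03],
    [12.0, 0.40, 28.0, 0.02],
    [38.0, 0.08, 15.0, 0.55],
    [36.0, 0.09, 14.0, 0.60],
])

# Normalize the features, then cluster bouts into candidate behavioral components
# (three here, loosely corresponding to orientation, approach and capture).
X = StandardScaler().fit_transform(bouts)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)  # each bout is assigned to one learned component
```

Longer sequences are then simply the order in which those labels occur over time.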




In case you wondered.
How Artificial Intelligence Is Totally Changing Everything
Back in Oct. 1950, British techno-visionary Alan Turing published an article called "Computing Machinery and Intelligence," in the journal MIND that raised what at the time must have seemed to many like a science-fiction fantasy.
"May not machines carry out something which ought to be described as thinking but which is very different from what a man does?" Turing asked.
How Artificial Intelligence Works
"AI is a family of technologies that perform tasks that are thought to require intelligence if performed by humans," explains Vasant Honavar, a professor and director of the Artificial Intelligence Research Laboratory at Penn State University, in an email interview. "I say 'thought,' because nobody is really quite sure what intelligence is."
AI works by combining large amounts of data with intelligent algorithms — series of instructions — that allow the software to learn from patterns and features of the data, as this SAS primer on artificial intelligence explains.
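That "algorithms that learn from patterns and features of the data" description is ordinary supervised learning. A minimal, self-contained illustration in Python (toy dataset, not from the article):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A small labelled dataset, split into training and test portions.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Learning from patterns": the algorithm fits its parameters to the training data...
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...and is then judged on examples it has never seen.
print(f"accuracy on unseen data: {model.score(X_test, y_test):.2f}")
```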




Perspective. Why can’t your kid do this?
The Highest-Paid YouTube Stars of 2019: The Kids Are Killing It
Anastasia Radzinskaya is an unlikely media star. Born in southern Russia with cerebral palsy, her doctors feared she would never be able to speak. To document her development through treatments, her parents posted videos of her on YouTube so friends and relatives could see the progress.
The videos are typical kid stuff: playdates with dad, jumping around on an inflatable castle and playing with her cat, each video accompanied by catchy jingles and voice-over giggles. She soon gained followers around the world. Her biggest hit was a 2018 trip to the petting zoo with her father Yuri that featured the two dancing to child favorite “Baby Shark,” milking a pretend cow and eating ice cream. That video has garnered 767 million views, the top draw for a growing media business that has funneled $18 million to the Radzinskayas between June 1, 2018, and June 1, 2019.
Anastasia, who goes by “Nastya,” now has 107 million subscribers across her seven channels who have watched her videos 42 billion times. She is No. 3 on the Forbes Top-Earning YouTube Stars ranking for 2019, which tallies pretax income collected from advertisements, sponsored content, merchandise sales, tours and more.



Friday, December 20, 2019


Hire the elderly?
Cybersecurity's Weakest Link Grows Exponentially Due to Device Proliferation
It may surprise you to learn that individuals under the age of 30, often referred to as “digital natives”, are less likely to adopt cybersecurity best practice than those over the age of 30 with “acquired digital DNA”. That’s according to a recent report commissioned by NTT that involved 2,256 organizations in 17 sectors across 20 countries. For security professionals, the good news is that all that work raising awareness for cybersecurity and educating employees has paid off. The bad news is our challenges are mounting. Researchers found that younger people entering the workforce expect to use more of their own applications and devices while believing the responsibility for security rests solely with their employer.




Ignorance works two ways. “We didn’t know what they were doing.” vs. “We don’t want to spend too much time or money fixing things.”
Labels & Publishers Win $1 Billion Piracy Lawsuit Against Cox Communications
Cox Communications was found liable for the infringement of more than 10,000 musical works by a U.S. District Court jury in Virginia on Thursday (Dec. 19), which awarded $1 billion in statutory damages to plaintiffs Sony Music, Universal Music Group, Warner Music Group and EMI.
The labels and their publishing entities filed the lawsuit in July 2018, accusing the cable and internet service provider of turning a blind eye to pirates on its network. They alleged that Cox “deliberately refused to take reasonable measures” to combat copyright infringers, even after the company became aware of specific acts of infringement by its customers.
Cox was also accused of imposing an “arbitrary cap” on the number of infringement notices it would accept from copyright holders -- thereby allowing said infringement to continue -- and of failing to permanently terminate customers who were found to have pirated. The complaint noted that at least 20,000 Cox subscribers could be categorized as repeat infringers.
Cox was found liable on infringement claims for 10,017 pieces of work -- the full amount charged by plaintiffs -- and ordered to pay $99,830.29 per work.
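The per-work figure is simply the total award spread across the infringed works; a quick sanity check:

```python
works = 10_017
damages_per_work = 99_830.29
print(f"${works * damages_per_work:,.0f}")  # just over $1 billion, matching the jury's award
```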




Typical.
If you made a claim for $125 from Equifax, you’re not getting it after court awards nearly $80 million to attorneys
On Thursday, Dec. 19, a Georgia federal judge awarded $77.5 million to the attorneys representing the class of consumers against Equifax. That's over 20% of the roughly $380 million settlement fund Equifax agreed to set up to directly help consumers affected by the breach, according to the Hamilton Lincoln Law Institute, which houses the Center for Class Action Fairness and opposed the high fee award.
It’s also one more reason why the consumers who sought a cash settlement from Equifax won’t be getting the full $125 as initially expected. In fact, consumers were never going to get $125, says Ted Frank, director of litigation for Hamilton Lincoln. “That’s down to $6 or $7 [per consumer] now. Maybe even less than that,” he tells CNBC Make It.
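The "over 20%" figure checks out:

```python
fee = 77.5e6    # attorneys' fee award
fund = 380e6    # approximate consumer settlement fund
print(f"{fee / fund:.1%}")  # about 20.4% of the fund
```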




Would that the US agreed.
Glyn Moody writes:
One of the features of surveillance in Germany is the routine use of malware to spy on its citizens. The big advantage for the authorities is that this allows them to circumvent end-to-end encryption. By placing spy software on the user’s equipment, the police are able to see messages in an unencrypted form. Austrian police were due to start deploying malware in this way next year. But in a welcome win for digital rights, Austria’s top court has just ruled its use unconstitutional (in German). The Austrian Constitutional Court based its judgment on the European Convention on Human Rights (ECHR — pdf). The Web site of the Austrian national public service broadcaster ORF reported the court as ruling:
Read more on TechDirt.




Changing the data process.
Examining Industry Approaches to CCPA “Do Not Sell” Compliance
Over the past year, the online advertising (“ad tech”) industry has grappled with the practical challenges of complying with the new California Consumer Privacy Act (CCPA). Once the new law — the first of its kind in the United States — goes into effect on January 1, 2020, businesses operating in California will be required by law to provide California residents (“consumers”) with “explicit notice” and the opportunity to opt-out of the sale of their personal information, thus establishing powerful individual rights that represent a major step forward in US privacy legislation.
Practically speaking, however, the law’s notice and “Do Not Sell” obligations present unique structural challenges for ad tech companies, many of whom operate as intermediaries, lack a direct relationship with users, and may or may not have formal contractual relationships with data supply chain partners, including publisher properties where the personal information and insights about user activity are utilized to power data-driven advertising. In light of the imminent effective date of CCPA and with an aim to address these challenges, several key ad tech players have developed approaches to comply with specific CCPA requirements that demonstrate a variety of perspectives toward viable compliance solutions.
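Mechanically, most of the proposed approaches boil down to passing an opt-out signal along with each request and refusing to "sell" data when it is set. Here is a very rough sketch in the spirit of the IAB-style "US Privacy" string; the exact string format, field positions, and function names are illustrative assumptions, not a reference implementation:

```python
def sale_permitted(us_privacy: str) -> bool:
    """Decide whether personal data may be 'sold' downstream.

    Assumes a four-character US-Privacy-style signal: version, explicit notice,
    opt-out of sale, LSPA coverage (e.g. "1YNN"). Anything malformed, and any
    explicit opt-out, is treated as "do not sell".
    """
    if len(us_privacy) != 4 or us_privacy[0] != "1":
        return False                      # unknown or missing signal: fail closed
    return us_privacy[2] == "N"           # sell only if the consumer has NOT opted out

def forward_bid_request(request: dict) -> dict:
    # Intermediaries with no direct user relationship rely on the signal passed
    # along the supply chain rather than on their own consent records.
    if not sale_permitted(request.get("us_privacy", "")):
        request.pop("user_data", None)    # strip personal information before passing it on
    return request

print(forward_bid_request({"us_privacy": "1YYN", "user_data": {"id": "abc123"}}))
# -> user_data removed, because the third position ("Y") records an opt-out of sale
```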




Forbes (and Fortune) are writing a lot about AI, but with little real substance. It seems like they are telling the business world that there is something to all the AI hype, but they haven’t quite figured out what.
AI Will Transform The Field Of Law
Among the social sciences, law may come the closest to a system of formal logic. To oversimplify, legal rulings involve setting forth axioms derived from precedent, applying those axioms to the particular facts at hand, and reaching a conclusion accordingly. This logic-oriented methodology is exactly the type of activity to which machine intelligence can fruitfully be applied.
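Taken literally, that "axioms applied to facts" picture is just rule-based inference, which is easy to express in code. A toy illustration only; real legal-AI products rely far more on statistical models over text:

```python
# Toy "precedent" encoded as rules: each rule maps a set of required facts to a conclusion.
RULES = [
    ({"signed_contract", "failed_to_deliver"}, "breach of contract"),
    ({"breach of contract", "damages_shown"}, "plaintiff entitled to damages"),
]

def apply_rules(facts: set) -> set:
    # Repeatedly fire every rule whose premises are satisfied until nothing new follows.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived - facts

print(apply_rules({"signed_contract", "failed_to_deliver", "damages_shown"}))
# -> {'breach of contract', 'plaintiff entitled to damages'}
```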
Within the field of law, a few areas stand out as particularly promising for the application of AI. Exciting progress is already being made in each of these areas.
Contract Review
Contract Analytics
Litigation Prediction
Legal Research
Consider the main functional areas in a typical business: marketing, sales, customer success, finance, accounting, human resources, talent, legal.
In nearly all of these functions, billion-dollar-plus enterprise software businesses have been built in the past two decades to enhance productivity and workflows. To give a few examples: HubSpot (marketing); Salesforce (sales); Zendesk (customer success); Workday (finance); NetSuite (accounting); Gusto (HR); LinkedIn (talent).
The glaring exception is legal.




Kill now, explain later? (He was in the advanced stages of an incurable disease and his insurance had run out – so I nuked him.)
When Robots Can Decide Whether You Live or Die
Computers have gotten pretty good at making certain decisions for themselves. Automatic spam filters block most unwanted email. Some US clinics use artificial-intelligence-powered cameras to flag diabetes patients at risk of blindness. But can a machine ever be trusted to decide whether to kill a human being?
It’s a question taken up by the eighth episode of the Sleepwalkers podcast, which examines the AI revolution. Recent, rapid growth in the power of AI technology is causing some military experts to worry about a new generation of lethal weapons capable of independent and often opaque actions.




Mr. Zillman collects everything. Worth looking through the list carefully.
2020 Open Educational Resources (OER) Sources and Tools
Via LLRX – 2020 Open Educational Resources (OER) Sources and Tools: This is a comprehensive listing of Open Educational Resources (OER) sources and tools available in the United States and around the world, by Marcus P. Zillman. His guide includes references to: search engines, directories, initiatives, books, E-books, E-textbooks, free online seminars and webinars, subject guides, open and distance learning, open access papers and research, as well as related costs and metrics to identify and choose reliable, subject matter expert sources for free and open continuing education and research on the internet.



Thursday, December 19, 2019


Background for my students.
Comparing Privacy Laws: GDPR v. CCPA
In November 2018, OneTrust DataGuidance and FPF partnered to publish a guide to the key differences between the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act of 2018 (CCPA).
Since then, a series of bills, signed by the California Governor on 11 October 2019, amended the CCPA to exempt from its application certain categories of data and to provide different requirements for submission of consumer requests, among other things. The Guide has been updated to take into account these amendments.




Perspective. At least the UK’s perspective... Does the language used suggest they have no evidence to support their speculation?
UK CMA lifts the lid on digital giants
The UK Competition and Markets Authority (CMA) has published an update in its examination of online platforms and digital advertising, uncovering new detail about how the sector’s biggest names operate. The CMA’s interim report [12/18/19] has found that:
    • Last year, Google accounted for more than 90% of all revenues earned from search advertising in the UK, with revenues of around £6 billion
    • In the same year, Facebook accounted for almost half of all display advertising revenues in the UK, reaching more than £2 billion
“‘Big’ is not necessarily ‘bad’ and these platforms have brought very innovative and valuable products and services to the market. But the CMA is concerned that their position may have become entrenched with negative consequences for the people and businesses who use these services every day. A lack of real competition to Google and Facebook could mean people are already missing out on the next great new idea from a potential rival. It could also be resulting in a lack of proper choice for consumers and higher prices for advertisers that can mean cost rises for goods and services such as flights, electronics and insurance bought online. The market position of Google and Facebook may potentially be undermining the ability of newspapers and other publishers to produce valuable content as their share of revenues is squeezed by large platforms…”




How fast can you read?
London Review of Books rounds off 40th anniversary
The Bookseller: “The London Review of Books has launched a new website, rounding off its 40th anniversary celebrations with a comprehensive overhaul of the paper’s online presence, with its archive freely accessible for a month. The new website launched on Monday (16th December) with the entire LRB archive of almost 17,500 pieces—including writers such as Frank Kermode, Hilary Mantel, Oliver Sacks and Angela Carter—available to read for free until 15th January. The refreshed site makes it easier to navigate the archive and find articles, the literary journal said, with a “subjects” search function and curated “best of” lists featuring favourite pieces selected by the LRB editorial team, initially in the areas of Arts & Culture, Biography & Memoir, History & Classics, Literature & Criticism, Philosophy & Law, Politics & Economics, Psychology & Anthropology, and Science & Technology. Contributor pages now include articles about the contributor, as well as all pieces written by them…”



Wednesday, December 18, 2019


No doubt an unintended consequence.
More than 38,000 people will stand in line this week to get a new password
All of this is going on at the Justus Liebig University (JLU) in Gießen, a town north of Frankfurt, Germany.
The university suffered a malware infection last week. While the name or the nature of the malware strain was not disclosed, the university's IT staff considered the infection severe enough to take down its entire IT and server infrastructure.
Furthermore, JLU staff also believed the malware infection impacted the university's email server, and, as a precautionary measure, they reset all passwords for all email accounts, used by students and staff alike.
But in a bizarre twist of German law, the university couldn't send out the new passwords to the students' personal email accounts.
Instead, citing legal requirements imposed by the German National Research and Education Network (DFN), JLU professors and students had to pick up their passwords from the university's IT staff in person, after providing proof of their identity using an ID card.




Here we go again!
How India Plans to Protect Consumer Data
The Indian government looks set to legislate a Personal Data Protection Bill (DPB), which would control the collection, processing, storage, usage, transfer, protection, and disclosure of personal data of Indian residents. Despite its regional nature, DPB is an important development for global managers. The digital economy in India is expected to reach a valuation of $1 trillion by 2022 — and it will attract numerous global players who must comply with DPB.
Yet the Indian DPB carries additional provisions beyond the EU regulation. Because India is a nation state, it would treat the data generated by its citizens as a national asset, store and guard it within national boundaries, and reserve the right to use that data to safeguard its defense and strategic interests.
There are a number of features of the DPB that will require companies to change their business models, practices, and principles.
Ownership of personal data: In principle, DPB proposes that the data provider is the owner of their own personal data. While simple in idea, this notion could impose an enormous implementation burden for digital companies.
Three classes of data: DPB has identified three categories of data from which a principal can be identified: Sensitive data includes information on financials, health, sexual orientation, genetics, transgender status, caste, and religious belief. Critical data includes information that the government stipulates from time to time as extraordinarily important, such as military or national security data. The third is a general category, which is not defined but contains the remaining data. DPB prescribes specific requirements that data fiduciaries must follow for the storage and processing for each data class.
All sensitive and critical data must be stored in servers located in India.
Data sovereignty: DPB reserves the right to access the locally stored data to protect national interests.
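A compliance team could encode the three-class scheme and the localization rule from the excerpt above as a simple policy check. The class definitions and region codes below are illustrative assumptions based only on the summary here:

```python
from enum import Enum

class DataClass(Enum):
    SENSITIVE = "sensitive"   # financials, health, sexual orientation, genetics, caste, religion, ...
    CRITICAL = "critical"     # whatever the government notifies as critical from time to time
    GENERAL = "general"       # everything else

def storage_location_allowed(data_class: DataClass, region: str) -> bool:
    # As described: sensitive and critical data must sit on servers located in India.
    if data_class in (DataClass.SENSITIVE, DataClass.CRITICAL):
        return region == "in"
    return True   # general data carries no localization requirement in this sketch

assert storage_location_allowed(DataClass.SENSITIVE, "in")
assert not storage_location_allowed(DataClass.CRITICAL, "eu-west-1")
assert storage_location_allowed(DataClass.GENERAL, "us-east-1")
```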




Why we need AI lawyers! Why would this be different from today’s ‘reverse engineering?’
Researchers were about to solve AI’s black box problem, then the lawyers got involved
The downside of transparency
This is fine when we’re using black box AI to determine whether something is a hotdog or not, or when Instagram uses it to determine if you’re about to post something that might be offensive. It’s not fine when we can’t explain why an AI sentenced a black man with no priors to more time than a white man with a criminal history for the same offense.
The answer is transparency. If there is no black box, then we can tell where things went wrong. If our AI sentences black people to longer prison terms than white people because it’s over-reliant on external sentencing guidance, we can point to that problem and fix it in the system.
But there’s a huge downside to transparency: If the world can figure out how your AI works, it can figure out how to make it work without you. The companies making money off of black box AI – especially those like Palantir, Facebook, Amazon, and Google who have managed to entrench biased AI within government systems – don’t want to open the black box any more than they want their competitors to have access to their research. Transparency is expensive and, often, exposes just how unethical some companies’ use of AI is.
As legal expert Andrew Burt recently wrote in Harvard Business Review:
To start, companies attempting to utilize artificial intelligence need to recognize that there are costs associated with transparency. This is not, of course, to suggest that transparency isn’t worth achieving, simply that it also poses downsides that need to be fully understood. These costs should be incorporated into a broader risk model that governs how to engage with explainable models and the extent to which information about the model is available to others.




Why fast is not always best.
Are California's New Data Privacy Controls Even Legal?
A new paper raises constitutional questions about expansive state-level regulations that reach beyond their borders.
Data privacy hardliners [???] are pretty jazzed about the California Consumer Privacy Act (CCPA), which is slated to take effect on the first of next year. While many outside of the Golden State may not have heard of this bold foray into computing regulation, activists hope that it will soon effectively control how much of the country is allowed to process data. If they can't have a European Union-level General Data Protection Regulation (GDPR), then at least this state law can kind of regulate through the back door without the pesky need to go through Congress.
Of course any strong enough data controls imposed in California would inevitably affect everyone else in the US. Most technology companies are based there, and even those in other states would be fools to lock themselves out of California's population of almost 40 million.
And CCPA supporters know this. In fact, many of them see this as a feature.
A new Federalist Society Regulatory Transparency Project paper by my colleague Jennifer Huddleston and TechFreedom's Ian Adams suggests that state data controls like the CCPA raise serious legal questions about potential free speech and dormant commerce clause violations.
There's this thing called "the Constitution…"
In the rush to get a GDPR-style regulatory framework in place in California, no one seemed to stop and ask whether what they were doing was actually legal. Indeed, many of the controls enshrined in the European law are fundamentally at odds with American principles of permissionless innovation and open interstate commerce. Huddleston and Adams point out that state laws like the CCPA may run into constitutional problems concerning speech and interstate trade.
Data is often speech. Laws that regulate speech are subject to a high level of legal scrutiny because of our First Amendment protections. States don't get to ignore the First Amendment just because they really don't like Facebook. If they try to regulate data-as-speech, the courts may promptly strike them down.



Tuesday, December 17, 2019


Many IoT devices have this same flaw.
Smart lock security flaw could leave your door wide open
Consultants at the cybersecurity firm F-Secure have discovered an exploitable design flaw in one brand of smart lock that can allow an attacker to easily pick the device.
Since the smart lock itself is unable to receive new firmware updates, the manufacturer can't patch the device to mitigate the flaw, leaving users at risk unless they decide to physically uninstall their smart lock.




I predict failure. How significant remains unpredictable.
How New Voting Machines Could Hack Our Democracy
The United States has a disturbing habit of investing in unvetted new touchscreen voting machines that later prove disastrous. As we barrel toward what is set to be the most important election in a generation, Congress appears poised to fund another generation of risky touchscreen voting machines called universal use Ballot Marking Devices (or BMDs), which function as electronic pens, marking your selections on paper on your behalf. Although vendors, election officials, and others often refer to this paper as a “paper ballot,” it differs from a traditional hand-marked paper ballot in that it is marked by a machine, which can be hacked without detection in a manual recount or audit.




People held accountable? Inconceivable!
People should be held accountable for AI and algorithm errors, rights commissioner says
People need to be held accountable for the mistakes AI and algorithms make on their behalf, such as that seen in the government’s robodebt scandal, according to Australian human rights commissioner Ed Santow.
The proposal comes in a new discussion paper on the impact of new technologies on human rights in Australia, released by the commission on Tuesday.
After the Australian government backed down on the use of automatic debt notices based on income averaging, and had legislation for its facial recognition system rejected by a government-dominated parliamentary committee, Santow said it was time to set some rules to govern how these new technologies are used.
You can download the Human Rights and Technology Discussion Paper and make a submission at tech.humanrights.gov.au




Perspective. (One caught my eye.)
8 biggest AI trends of 2020, according to experts
Automated AI development
“In 2020, expect to see significant new innovations in the area of what IBM calls ‘AI for AI’: using AI to help automate the steps and processes involved in the life cycle of creating, deploying, managing, and operating AI models to help scale AI more widely into the enterprise,” said Sriram Raghavan, VP of IBM Research AI.




Perspective.
AI is outpacing Moore’s Law
AI compute is doubling every 3.4 months, a new report shows.
According to a new report produced by Stanford University, AI computational power is accelerating at a much higher rate than the development of processor chips.
“Prior to 2012, AI results closely tracked Moore’s Law, with compute doubling every two years,” the authors of the report wrote. “Post-2012, compute has been doubling every 3.4 months.”
Stanford’s AI Index 2019 annual report examined how AI algorithms have improved over time.
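The gap between the two doubling rates compounds quickly. A back-of-the-envelope comparison over two years:

```python
months = 24  # two years

moores_law = 2 ** (months / 24)    # compute doubling roughly every two years
post_2012 = 2 ** (months / 3.4)    # compute doubling every 3.4 months, per the AI Index

print(f"Moore's Law growth over {months} months: ~{moores_law:.0f}x")
print(f"Post-2012 AI compute growth over {months} months: ~{post_2012:.0f}x")
# -> roughly 2x versus roughly 130x
```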




Perspective. Facebook is number one, and number two and number three and number four…
A Look Back At the Top Apps & Games of the Decade
Looking at the most downloaded apps of the decade, Facebook has dominated the mobile space, accounting for the four most downloaded apps of the decade: Facebook, Facebook Messenger, WhatsApp and Instagram.




I am not going to speculate on the jobs my students seem destined for…
Robot career guidance: AI may soon be able to analyse your tweets to match you to a job
Our study, published today in the Proceedings of the National Academy of Sciences, found that different professions attract people with very different psychological characteristics.
When looking for a new career, you might visit a career adviser and answer a set of questions to identify your interests and strengths. These results are used to match you with a set of potential occupations.
However, this method relies on long surveys, and doesn’t account for the fact that many occupations are changing or disappearing as technology transforms the employment landscape.
We wondered if we could develop a data-driven approach to matching a person with a suitable profession, based on psychological traces they reveal online.
Studies have shown people leave traces of themselves through the language they post online and their online behaviours.
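As summarized, the approach amounts to inferring a psychological profile from someone's public posts and comparing it with typical profiles per occupation. A schematic sketch with entirely made-up trait scores (the real study's features and matching method will differ):

```python
import numpy as np

# Hypothetical Big Five-style trait vectors (openness, conscientiousness,
# extraversion, agreeableness, neuroticism), each scaled 0-1.
occupation_profiles = {
    "software developer": np.array([0.75, 0.70, 0.40, 0.55, 0.45]),
    "sales manager":      np.array([0.55, 0.65, 0.85, 0.60, 0.40]),
    "research scientist": np.array([0.85, 0.75, 0.35, 0.50, 0.50]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(person: np.ndarray) -> str:
    # Rank occupations by similarity between the person's traits and each profile.
    return max(occupation_profiles, key=lambda job: cosine(person, occupation_profiles[job]))

# In the study, a vector like this would be inferred from the person's online language.
print(best_match(np.array([0.88, 0.78, 0.30, 0.48, 0.52])))  # -> research scientist
```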




Also for my students.




For the student toolkit?
How to Use Zotero and Scrivener for Research-Driven Writing