Saturday, November 27, 2021

Hoist on their own petard. (I’ve always wanted to say that…)

https://www.wsj.com/articles/biometrics-smartphones-surveillance-cameras-pose-new-obstacles-for-u-s-spies-11638009002

Biometrics, Smartphones, Surveillance Cameras Pose New Obstacles for U.S. Spies

U.S., rivals seek ways to adapt spycraft to a changing world; being on the grid can blow your cover, but so can staying off

Operatives widely suspected of working for Israel’s Mossad spy service planned a stealthy operation to kill a Palestinian militant living in Dubai. The 2010 plan was a success except for the stealth part—closed-circuit cameras followed the team’s every move, even capturing them before and after they put on disguises.

In 2017, a suspected U.S. intelligence officer held a supposedly clandestine meeting with the half brother of North Korean leader Kim Jong Un, days before the latter was assassinated. That encounter also became public knowledge, thanks to a hotel’s security camera footage.


(Related) Surveillance: It’s not just for governments…

https://www.makeuseof.com/tag/3-effective-cell-phone-surveillance-apps/

The 6 Best Spy Phone Apps

Concerned about your children's safety? Install one of these cell phone surveillance apps on their Android device or iPhone.



When I started working with computers, all data (then available) was delivered to the mainframe.

https://venturebeat.com/2021/11/24/ai-will-soon-oversee-its-own-data-management/

AI will soon oversee its own data management

AI thrives on data. The more data it can access, and the more accurate and contextual that data is, the better the results will be.

The problem is that the data volumes currently being generated by the global digital footprint are so vast that it would take literally millions, if not billions, of data scientists to crunch it all — and it still would not happen fast enough to make a meaningful impact on AI-driven processes.

According to Dell’s 2021 Global Data Protection Index, the average enterprise is now managing ten times more data compared to five years ago, with the global load skyrocketing from “just” 1.45 petabytes in 2016 to 14.6 petabytes today. With data being generated in the datacenter, the cloud, the edge, and on connected devices around the world, we can expect this upward trend to continue well into the future.



Refining our perspective. I can explain it, can you understand it?

https://venturebeat.com/2021/11/26/what-is-explainable-ai-building-trust-in-ai-models/

What is explainable AI? Building trust in AI models

As AI-powered technologies proliferate in the enterprise, the term “explainable AI” (XAI) has entered mainstream vernacular. XAI is a set of tools, techniques, and frameworks intended to help users and designers of AI systems understand their predictions, including how and why the systems arrived at them.

A June 2020 IDC report found that business decision-makers believe explainability is a “critical requirement” in AI. To this end, explainability has been referenced as a guiding principle for AI development at DARPA, the European Commission’s High-level Expert Group on AI, and the National Institute of Standards and Technology.

Generally speaking, there are three types of explanations in XAI: global, local, and social influence (a short code sketch of the first two follows the list).

  • Global explanations shed light on what a system is doing as a whole as opposed to the processes that lead to a prediction or decision. They often include summaries of how a system uses a feature to make a prediction and “metainformation,” like the type of data used to train the system.

  • Local explanations provide a detailed description of how the model came up with a specific prediction. These might include information about how a model uses features to generate an output or how flaws in input data will influence the output.

  • Social influence explanations relate to the way that “socially relevant” others — i.e., users — behave in response to a system’s predictions. A system using this sort of explanation may show a report on model adoption statistics, or the ranking of the system by users with similar characteristics (e.g., people above a certain age).
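
To make the global/local distinction concrete, here is a minimal sketch using scikit-learn with a linear model. Everything in it is an illustrative assumption rather than anything from the article: the synthetic dataset, the made-up feature names, and the choice of logistic regression.

```python
# Minimal sketch of global vs. local explanations using scikit-learn.
# The dataset, feature names, and model are illustrative stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["age", "income", "tenure", "usage"]  # hypothetical names

model = LogisticRegression().fit(X, y)

# Global explanation: which features the model relies on overall
# (for a linear model, the learned coefficients).
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"global weight for {name}: {coef:+.3f}")

# Local explanation: per-feature contributions to one specific prediction
# (coefficient * feature value; summed with the intercept they give the logit).
x = X[0]
contributions = model.coef_[0] * x
logit = contributions.sum() + model.intercept_[0]
for name, c in zip(feature_names, contributions):
    print(f"local contribution of {name}: {c:+.3f}")
print(f"logit: {logit:.3f} -> P(y=1) = {1 / (1 + np.exp(-logit)):.3f}")
```

A linear model keeps the sketch honest: its coefficients are the global explanation, and coefficient times feature value gives an exact local one. For black-box models, tools such as permutation importance or SHAP play the analogous roles.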


Friday, November 26, 2021

This will save both the technically challenged and the forgetful expert. Should have been a law thirty or forty years ago. (Don’t think it’s a problem? Try a Google search for “default password list”)

https://www.bbc.com/news/technology-59400762

Huge fines and a ban on default passwords in new UK law

Default passwords for internet-connected devices will be banned, and firms which do not comply will face huge fines.

One expert said that it was an important "first step".

In 2017, for example, hackers stole data from a US casino via an internet-connected fish tank. There have also been reports of people accessing home webcams and speaking to family members.

The Product Security and Telecommunications Infrastructure Bill lays out three new rules:



Should Marketing dictate your security policy?

https://www.cpomagazine.com/cyber-security/do-companies-need-biometric-based-logins-to-survive-new-marketing-report-calls-for-end-to-passwords-mass-changes-to-identity-authentication/

Do Companies Need Biometric-Based Logins To Survive? New Marketing Report Calls for End to Passwords, Mass Changes to Identity Authentication



Is there safety in a goal-free AI? (Not yet in my local library.)

https://thenextweb.com/news/ai-own-goals-intelligent-syndication

AI must have its own goals to be truly intelligent

To Daeyeol Lee, professor of neuroscience at Johns Hopkins University, current AI systems are “surrogates of human intelligence” because they are designed to accomplish the goals of their human creators, not their own.

True intelligence, Lee argues in his book Birth of Intelligence: From RNA to Artificial Intelligence, is “the ability of life to solve complex problems in a variety of environments for its self-replication.” In other words, every living species that has passed the test of time and has been able to reproduce—from bacteria to trees, insects, fish, birds, mammals, and humans—is intelligent.



A bit over-optimistic?

https://news.un.org/en/story/2021/11/1106612

193 countries adopt the first global agreement on the Ethics of Artificial Intelligence

Artificial intelligence is present in everyday life, from booking flights and applying for loans to steering driverless cars. It is also used in specialized fields such as cancer screening or to help create inclusive environments for the disabled.

According to UNESCO, AI is also supporting the decision-making of governments and the private sector, as well as helping combat global problems such as climate change and world hunger.

“Until now, there were no universal standards to provide an answer to these issues”, UNESCO explained in a statement.

Considering this, the adopted text aims to guide the construction of the necessary legal infrastructure to ensure the ethical development of this technology.

… You can read the full text here.



A user’s guide? Might be something law school students could produce.

https://www.databreaches.net/overview-of-legislations-on-cybersecurity-personal-data-protection-and-computer-misuse/

Overview of Legislations on Cybersecurity, Personal Data Protection and Computer Misuse

The Cyber Security Agency of Singapore (CSA) had collaborated with the PDPC and Singapore Police Force (SPF) to develop a handbook covering an overview of the Cybersecurity Act, Computer Misuse Act and Personal Data Protection Act.
The handbook explains the three different legislations and how they work in tandem, illustrated through examples of data breaches. It also provides online resources to assist organisations in securing their IT systems and help individuals protect their data.
Access the handbook on Overview of Legislations on Cybersecurity, Personal Data Protection & Computer Misuse here.

Source: Personal Data Protection Commission of Singapore



What would the results be in the US?

https://www.unite.ai/ai-researchers-estimate-97-of-eu-websites-fail-gdpr-privacy-requirements-especially-user-profiling/

AI Researchers Estimate 97% Of EU Websites Fail GDPR Privacy Requirements – Especially User Profiling

Researchers in the US have used machine learning techniques to study the GDPR privacy policies of over a thousand representative websites based in the EU. They found that 97% of the sites studied failed to comply with at least one requirement of the European Union’s 2018 regulatory framework, and that they complied least of all with regulatory requirements around the practice of ‘user profiling’.

The paper states:

‘[Since] the privacy policy is the essential communication channel for users to understand and control their privacy, many companies updated their privacy policies after GDPR was enforced. However, most privacy policies are verbose, full of jargon, and vaguely describe companies’ data practices and users’ rights. Therefore, it is unclear if they comply with GDPR.’

It continues:

‘Our results show that even after GDPR went into effect, 97% of websites still fail to comply with at least one requirement of GDPR.’

The study is titled Automated Detection of GDPR Disclosure Requirements in Privacy Policies using Deep Active Learning, and comes from three researchers at the University of Virginia at Charlottesville.

The area of least compliance, according to the study, concerned GDPR’s stipulations about user profiling, with the authors stating that only 15.3% of the sites studied were in full compliance with this particular rule.
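
The study itself used deep active learning; as a rough illustration of just the active-learning half, here is a minimal uncertainty-sampling loop with a TF-IDF bag-of-words classifier standing in for the deep model. The sentences, labels, and query budget below are invented placeholders, not data from the paper.

```python
# Minimal uncertainty-sampling active-learning loop (scikit-learn).
# Tiny synthetic corpus: 1 = sentence discloses profiling, 0 = does not.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "we build profiles of users to personalize advertising",
    "we use automated decision making to evaluate preferences",
    "profiling is performed to predict your interests",
    "we never sell your personal data to third parties",
    "you may contact our data protection officer at any time",
    "cookies are used only to keep you logged in",
    "we analyze browsing behavior to create user profiles",
    "our newsletter can be cancelled at any time",
]
labels = np.array([1, 1, 1, 0, 0, 0, 1, 0])

X = TfidfVectorizer().fit_transform(texts)

# Start with two labeled examples; the rest form the unlabeled pool.
labeled = [0, 3]
pool = [i for i in range(len(texts)) if i not in labeled]

for round_ in range(3):
    clf = LogisticRegression().fit(X[labeled], labels[labeled])
    # Uncertainty sampling: query the pool item closest to P = 0.5.
    probs = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(probs - 0.5)))]
    print(f"round {round_}: querying annotator for: {texts[query]!r}")
    labeled.append(query)  # simulate the annotator supplying the label
    pool.remove(query)
```

The loop repeatedly asks a human to label the pool sentence the current model is least sure about, which is the core trick that lets a small annotation budget cover a large corpus of policies.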



“’Tis a puzzlement.” Will decisions be completely apolitical?

https://www.scmp.com/news/china/politics/article/3157559/chinese-city-officials-told-base-their-decisions-big-data-not

Chinese city officials told to base their decisions on big data, not experience

… He called for city officials across China to make the shift from “experience-driven” decisions to basing them on big data analysis, and to “use smart governance to improve capabilities and to warn of and deal with risks”, according to a statement on the commission’s WeChat account.

The smart governance pilot scheme – “modernising” governance with the use of big data and artificial intelligence, particularly surveillance technology – was introduced across 81 cities last year.



Perspective. The only constant is change. How could (competent) managers get into this position?

https://www.zdnet.com/article/tech-is-evolving-quickly-managers-are-worried-their-teams-cant-keep-up/

Managers are losing confidence in their tech team. That's bad news for everyone

IT functions are set to undergo radical changes in the coming years, and tech leaders are experiencing a crisis of confidence.

More than half (56%) of IT leaders surveyed said they were uncertain that their IT teams could bring about positive change in the department over the next five years, with almost one in five (17%) reporting either significant doubts or no confidence whatsoever.

Schuerman told ZDNet that "while some CIOs and IT leaders feel that the pandemic positively challenged them, they have also begun to realize that there is no end on the horizon for transformation – they are in a period of constant, accelerating change."

He added: "That means they need different technical skills and soft skills in their teams to succeed in the long term. For example, there isn't the depth [of knowledge] in DevOps/Agile, AI or native cloud capabilities. Filling this in requires a considerable upskilling that's hard to achieve."


Thursday, November 25, 2021

More GDPR-level thinking about security?

https://www.theregister.com/2021/11/25/product_security_telecoms_bill_parliament/

UK.gov emits draft IoT and smartphone security law for Parliamentary scrutiny

A new British IoT product security law is racing through the House of Commons, with the government boasting it will outlaw default admin passwords and more.

The Product Security and Telecommunications Infrastructure (PSTI) Bill was introduced yesterday and is intended to drive up security standards in consumer tech gadgetry, ranging from IoT devices to phones, fondleslabs, smart TVs, and so on.

As for enforcement of these new regs, UK.gov isn't messing around. A government statement said: "This new cyber security regime will be overseen by a regulator, which will be designated once the Bill comes into force, and will have the power to fine companies for non-compliance up to £10 million or four per cent of their global turnover, as well as up to £20,000 a day in the case of an ongoing contravention."

The draft bill can be viewed as a 72-page PDF on the Parliamentary website. It is now subject to normal Parliamentary debate and amendment, which The Register will be following.
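
The penalty structure in the government statement above reduces to simple arithmetic: the greater of a fixed cap or a turnover percentage, plus a daily amount while the contravention continues. A quick sketch; the example turnover and duration are invented.

```python
# Maximum PSTI penalty as described in the statement quoted above:
# greater of GBP 10m or 4% of global turnover, plus GBP 20k per day
# of ongoing contravention. Example inputs are invented.
def psti_max_fine(global_turnover_gbp: float, days_ongoing: int = 0) -> float:
    base = max(10_000_000, 0.04 * global_turnover_gbp)
    return base + 20_000 * days_ongoing

print(f"£{psti_max_fine(2_000_000_000, days_ongoing=30):,.0f}")
# -> £80,600,000 (4% of £2bn = £80m, plus 30 days at £20k)
```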


(Related) Can’t target political ads based on my political views?

https://www.wsj.com/articles/eu-pushes-to-limit-how-tech-companies-target-political-ads-11637839613?mod=djemalertNEWS

EU Pushes to Limit How Tech Companies Target Political Ads

The European Union is proposing a ban on media companies targeting political ads at people based on their religious views or sexual orientation, a new volley in the continent’s expansion of global tech regulation.

A bill proposed Thursday by the European Commission, the EU’s executive arm, would restrict online tech platforms from targeting political ads at individual users based on a list of categories that regulators deem sensitive, including their race, political beliefs and health status, without users’ explicit consent. But it stops short of a broader ban on so-called microtargeting based on personal information that some activists had demanded.

The bill would also impose broad new requirements on social-media companies to disclose more information about every political ad they run, including how widely viewed an ad is and what criteria are used to determine who sees it, including targeting via the use of third-party data.

Companies that fail to comply could face fines of as much as 5% of their annual global revenue—higher than EU fines for privacy violations.



Doom and gloom economics?

https://interestingengineering.com/will-ai-cause-more-harm-than-good

Here's Why an MIT Professor Thinks AI Will Cause More Harm Than Good

The problem isn’t AI itself, of course. It’s how we use it. According to professor Daron Acemoğlu, whose 2013 book Why Nations Fail was a Wall Street Journal bestseller, organizations that are good at applying statistical pattern recognition to large datasets gain too great of an advantage over consumers, workers, and competitors. Acemoğlu isn’t concerned with fairness for its own sake. He’s worried that AI, as it’s currently being developed, will lead to consequences that far outweigh the benefits of the technology in the long run.

The good news is that regulation can fix the problem, but only if we act soon. [Too optimistic? Bob]



Fear of robots before we had the word robot? I imagine Science Fiction started with someone saying, “Imagine what the world would be like if we could control fire!”

https://thereader.mitpress.mit.edu/the-ancient-history-of-intelligent-machines/

Surveillance, Companionship, and Entertainment: The Ancient History of Intelligent Machines

Artificial servants, autonomous killing machines, surveillance systems, and sex robots have been part of the human imagination for thousands of years.

… As early as 3,000 years ago we encounter interest in intelligent machines and AI that perform different servile functions. In the works of Homer (c. eighth century BCE) we find Hephaestus, the Greek god of smithing and craft, using automatic bellows to execute simple, repetitive labor. Golden handmaidens, endowed with characteristics of movement, perception, judgment, and speech, assist him in his work. In his “Odyssey,” Homer recounts how the ships of the Phaeacians perfectly obey their human captains, detecting and avoiding obstacles or threats, and moving “at the speed of thought.” Several centuries later, around 400 BCE, we meet Talos, the giant bronze sentry, created by Hephaestus, that patrolled the shores of Crete.

… Buddhist legends focused on north-eastern India from the fourth and third centuries BCE recount the army of automata that guarded Buddha’s relics


Wednesday, November 24, 2021

If you switch to self-driving mode (which is not actually self-driving), Tesla will record your accident so they can point out why you are at fault.

https://electrek.co/2021/11/23/tesla-asks-full-self-driving-beta-to-accept-being-recorded-crash-safety-risk/

Tesla asks Full Self-Driving Beta drivers to accept being recorded in case of a crash or ‘safety risk’

Tesla is now asking owners getting into the Full Self-Driving Beta program to accept that Tesla can use footage from both inside and outside the car in case of a safety risk or accident.

It’s the first time that Tesla will attach footage to specific individuals.

The automaker has updated the warning that comes with downloading a new version of the FSD Beta.

It includes all the much-needed warnings that were parts of previous releases, but Tesla added important new language:

“By enabling FSD Beta, I consent to Tesla’s collection of VIN-associated image data from the vehicle’s external cameras and Cabin Camera in the occurrence of a serious safety risk or a safety event like a collision.”

The important part is “VIN-associated,” which means that the footage collected will be associated with the owners’ vehicle.


(Related) Timely! What information should a car have?

https://theconversation.com/the-self-driving-trolley-problem-how-will-future-ai-systems-make-the-most-ethical-choices-for-all-of-us-170961

The self-driving trolley problem: how will future AI systems make the most ethical choices for all of us?

… Imagine a future with self-driving cars that are fully autonomous. If everything works as intended, the morning commute will be an opportunity to prepare for the day’s meetings, catch up on news, or sit back and relax.

But what if things go wrong? The car approaches a traffic light, but suddenly the brakes fail and the computer has to make a split-second decision. It can swerve into a nearby pole and kill the passenger, or keep going and kill the pedestrian ahead.

The computer controlling the car will only have access to limited information collected through car sensors, and will have to make a decision based on this. As dramatic as this may seem, we’re only a few years away from potentially facing such dilemmas.

Tesla does not yet produce fully autonomous cars, although it plans to. In collision situations, Tesla cars don’t automatically operate or deactivate the Automatic Emergency Braking (AEB) system if a human driver is in control.

In other words, the driver’s actions are not disrupted – even if they themselves are causing the collision. Instead, if the car detects a potential collision, it sends alerts to the driver to take action.

In “autopilot” mode, however, the car should automatically brake for pedestrians. Some argue if the car can prevent a collision, then there is a moral obligation for it to override the driver’s actions in every scenario. But would we want an autonomous car to make this decision?
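
Stripped of the ethics, the behaviour the article describes is a small piece of mode-dependent control logic. A toy sketch, with the mode names and responses invented for illustration (this is not any manufacturer's actual code):

```python
# Toy model of the mode-dependent collision behaviour described above:
# warn the human in manual mode, brake automatically in autopilot.
# Entirely invented for illustration.
from enum import Enum

class Mode(Enum):
    MANUAL = "manual"
    AUTOPILOT = "autopilot"

def on_collision_predicted(mode: Mode) -> str:
    if mode is Mode.AUTOPILOT:
        return "apply automatic emergency braking"
    # In manual mode the driver stays in control and is only warned.
    return "sound forward-collision alert for the driver"

for m in Mode:
    print(m.value, "->", on_collision_predicted(m))
```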



Perspective.

https://www.theregister.com/2021/11/24/aspi_chinese_internet_governance_report/

China trying to export its Great Firewall and governance model

China is actively trying to export its internal internet governance model, according to a paper from the International Cyber Policy Centre at the Australian Strategic Policy Institute.

Titled "China's cyber vision: How the Cyberspace Administration of China is building a new consensus on global internet governance", the paper outlines how China perceives sovereignty over its internet as having equivalent importance to sovereignty over its territory.

Recent data security initiatives that restrict Chinese data from going offshore, and crackdowns on tech giants, are both expressions of Beijing's desire to ensure that the Communist Party of China (CCP) can control the internet within China's borders.

Pervasive censorship with the Great Firewall is another element, as are the blizzard of new rules covering acceptable online content (another dropped yesterday, restricting how celebrities can behave online as they engage with fans).



I thought we had determined that if the security system allows you to access the data, you are “authorized” to access the data. Not enough detail in the article to determine if this was the case.

https://www.databreaches.net/little-rock-officer-arrested-for-unauthorized-access-of-personal-information/

Little Rock officer arrested for ‘unauthorized access’ of personal information

THV11 reports:

According to the Little Rock Police Department, Officer Miles McWayne has been arrested for “unauthorized access” of the personal information of a citizen.
The person filed a police report with the department in August after the reported incident took place in May.
McWayne was officially charged with the misdemeanor of accessing the information illegally.

Read more on THV11.



No Terminator in my lifetime? How boring…

https://www.weforum.org/agenda/2021/11/positive-artificial-intelligence-visions-for-the-future-of-work/

6 positive AI visions for the future of work

Current trends in AI are nothing if not remarkable. Day after day, we hear stories about systems and machines taking on tasks that, until very recently, we saw as the exclusive and permanent preserve of humankind: making medical diagnoses, drafting legal documents, designing buildings, and even composing music.

Our concern here, though, is with something even more striking: the prospect of high-level machine intelligence systems that outperform human beings at essentially every task. This is not science fiction. In a recent survey the median estimate among leading computer scientists reported a 50% chance that this technology would arrive within 45 years.

… What will constitute 'work' in a future

Participants were divided on this question. One camp thought that, freed from the shackles of traditional work, humans could use their new freedom to engage in exploration, self-improvement, volunteering, or whatever else they find satisfying. Proponents of this view usually supported some form of universal basic income (UBI), while acknowledging that our current system of education hardly prepares people to fashion their own lives, free of any economic constraints.

The second camp in our workshops and interviews believed the opposite: traditional work might still be essential. To them, UBI is an admission of failure – it assumes that most people will have nothing of economic value to contribute to society. They can be fed, housed, and entertained – mostly by machines – but otherwise left to their own devices.


(Related) An interactive chart…

https://intelligence.weforum.org/topics/a1Gb0000000pTDREA2/key-issues/a1Gb00000017LCAEA2

Artificial Intelligence: The Geopolitical Impacts of AI



An audit algorithm could look at each transaction as it occurs and determine how each will impact the bottom line.

https://www.journalofaccountancy.com/news/2021/nov/6-lessons-audit-experts-adopted-ai-early.html

6 lessons from audit experts who adopted AI early

In advance of the conference, they shared practical advice for other practitioners and firms. GRF started using an AI platform for audits about four years ago — an investment that they said is now delivering returns.

GRF uses MindBridge, a cloud-based platform that analyzes transactions and assigns a risk percentage based on 28 control points within the software. It represents a fundamental shift, the CPAs said: Instead of sampling transactions for review, the platform ingests and analyzes every transaction.
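
MindBridge's 28 control points are proprietary, but the underlying pattern (score every transaction against a battery of weighted checks instead of sampling) can be sketched in a few lines. The three checks, their weights, and the thresholds below are invented for illustration:

```python
# Sketch of control-point risk scoring over every transaction, in the
# spirit of the approach described above. The checks and weights here
# are invented; the real platform uses 28 proprietary control points.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    hour: int           # hour of day the entry was posted
    round_amount: bool  # e.g., exactly 10,000.00

def risk_score(t: Transaction) -> float:
    """Return a 0-1 risk score as a weighted sum of triggered checks."""
    checks = [
        (0.4, t.amount > 50_000),          # unusually large entry
        (0.3, t.hour < 6 or t.hour > 20),  # posted outside business hours
        (0.3, t.round_amount),             # suspiciously round amount
    ]
    return sum(w for w, hit in checks if hit)

ledger = [
    Transaction(120.50, 14, False),
    Transaction(75_000.00, 23, True),
]
for t in ledger:
    print(t, "->", f"{risk_score(t):.0%} risk")
```

The shift the CPAs describe is exactly this: because the scoring is cheap, it runs over the whole ledger rather than over a sample.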



Should we imagine the Terminator writing plays? (Worth reading for the Trump hair description)

https://www.nytimes.com/2021/11/24/books/review/shakespeare-cohere-natural-language-processing.html

The Algorithm That Could Take Us Inside Shakespeare’s Mind



A list I need to complete.

https://www.bespacific.com/vote-for-the-best-book/

Vote For the Best Book

The New York Times – “In October, editors at the Book Review asked you to help us choose the best book of the past 125 years. We received thousands of nominations — including novels, memoirs and poetry collections — from readers across the world. We narrowed those submissions down to a shortlist of 25 finalists. And now we’re ready to choose the winner. That’s where you come in. Scroll through the list to learn more about each title, including why readers suggested it and how The Times covered it in the past. You can choose up to three, and we’ll crown a winning book in December…” [Note: this is a terrific list and worth reading even if you do not vote.]



Zoom at the start of the pandemic?

https://dilbert.com/strip/2021-11-24


Tuesday, November 23, 2021

Probably not a problem in the US; we could sic the IRS on them.

https://www.cpomagazine.com/data-protection/issuing-gdpr-fines-is-one-thing-collecting-them-is-another-uk-ico-struggling-to-enforce-actions-as-74-of-penalties-remain-unpaid/

Issuing GDPR Fines Is One Thing, Collecting Them Is Another; UK ICO Struggling To Enforce Actions as 74% Of Penalties Remain Unpaid

The UK’s Information Commissioner’s Office (ICO) has not been afraid to issue some heavy General Data Protection Regulation (GDPR) fines to the likes of Google, British Airways and Marriott for their assorted data leaks and breaches in recent years. Issuing a GDPR fine is just the first step, however; at some point it needs to be collected, or the process is meaningless.

That second part is apparently where ICO is running into some serious difficulty, with 74% of the GDPR fines issued by the agency since the start of 2020 remaining unpaid. TheSMSWorks has collected numbers that indicate the problem is tilted more to smaller companies than larger ones, with SMS and phone spammers frequently dragging out the appeals process for years or simply outright refusing to pay.



The Feds must really want facial recognition tools. Who knew there were so many options?

https://www.nytimes.com/2021/11/23/technology/clearview-facial-recognition-accuracy.html

Clearview AI does well in another round of facial recognition accuracy tests.

After Clearview AI scraped billions of photos from the public web — from websites including Instagram, Venmo and LinkedIn — to create a facial recognition tool for law enforcement authorities, many concerns were raised about the company and its norm-breaking tool. Beyond the privacy implications and legality of what Clearview AI had done, there were questions about whether the tool worked as advertised: Could the company actually find one particular person’s face out of a database of billions?

In results announced on Monday, Clearview, which is based in New York, placed among the top 10 out of nearly 100 facial recognition vendors in a federal test intended to reveal which tools are best at finding the right face while looking through photos of millions of people. Clearview performed less well in another version of the test, which simulates using facial recognition for providing access to buildings, such as verifying that someone is an employee.

… The top performers were SenseTime, a Chinese company, and Cubox, from South Korea.



For your amusement…

https://www.bespacific.com/were-making-the-facebook-papers-public/

We’re Making the Facebook Papers Public

Gizmodo – Here’s Why and How – “Independent experts from NYU, UMass Amherst, Columbia, Marquette, and the ACLU are partnering with Gizmodo to responsibly publish this historic leak. In one of Silicon Valley’s largest leaks, a former Facebook product manager slipped financial regulators stacks of documents containing thousands of confidential memos, chat logs, and a veritable library of hidden research. The leak was designed to convince the feds that the gravity and scope of Facebook’s design flaws and misdeeds vastly exceed anything its executives ever divulged to their investors. The documents, captured by whistleblower Frances Haugen and first reported by the Wall Street Journal, were also handed to members of a Senate Commerce subcommittee chaired by Sen. Richard Blumenthal, a Democrat of Connecticut who last month called Instagram “a breeding ground for eating disorders and self harm.” And it’s from here that Gizmodo and some 300 other mostly Western journalists derived their access. We believe there’s a strong public need in making as many of the documents public as possible, as quickly as possible. To that end, we’ve partnered with a small group of independent monitors, who are joining us to establish guidelines for an accountable review of the documents prior to publication. The mission is to minimize any costs to individuals’ privacy or the furtherance of other harms while ensuring the responsible disclosure of the greatest amount of information in the public interest…”


Monday, November 22, 2021

The joys of technology.

https://www.pogowasright.org/showdown-at-the-second-circuit-on-the-standards-protecting-onine-anonymity/

Showdown at the Second Circuit on the Standards Protecting Online Anonymity

Paul Alan Levy writes about a case we should all be following:

An important case about anonymous online speech is hurtling toward a decision in the Second Circuit. The situation is worrisome because defendants are so unsympathetic and the plaintiff’s legal claims seem to me very strong. The danger is that the trial judge’s dismissive treatment of the right to speak anonymously could be addressed in a way that has serious implications for more legitimate speakers.
Everytown for Gun Safety Sues Gun Rights Activists for Trademark Infringement
The case arises from a dispute between advocates of broad access to firearms through the device of 3D printing of guns and the gun-safety policy group Everytown for Gun Safety Action Fund, as well as its subsidiary Moms Demand Action. The groups strongly oppose the 3D printing of guns, which they see as a serious threat to safety and as an evasion of sensible gun regulation.

Read more on Public Citizen.



An objective evaluation of AI in other fields (law, health) could be very useful.

https://www.lr.org/en/latest-news/lloyds-register-launches-industry-first-artificial-intelligence-register/

Lloyd’s Register launches industry-first Artificial Intelligence Register.

Lloyd’s Register (LR) has launched an Artificial Intelligence (AI) Register, a standardised digital register of LR certified AI providers and solutions, a first of its kind for the maritime industry.

AI technology, the engineered systems that have hardware and software elements that mimic human capacity for observing, understanding and decision-making, is continuing to grow in maritime with applications ranging from digital twins, virtual commissioning and autonomous navigation systems. To support this uptake in technology, LR’s AI Register has been developed to signpost proven and reliable AI technology to help maritime stakeholders benefit from the latest applications.



Tools & Techniques.

https://www.i-programmer.info/news/150-training-a-education/15028-take-stanfords-natural-language-processing-with-deep-learning-for-free.html

Take Stanford's Natural Language Processing with Deep Learning For Free

The content of CS224n Natural Language Processing with Deep Learning, a graduate level, one-semester course originally provided to Stanford University Computer Science students, has been made available for free to anyone in a self-paced version.

… In the free version, however, you get access to the course's resources and the full YouTube playlist of the recorded lectures without those limitations. That way you get a taste of what it would have been like taking the class as a Stanford student. Especially comforting when you consider that for attending there's a fee of $4,056 - $5,408.

CS224n: Natural Language Processing with Deep Learning

YouTube Playlist

WantWords


Sunday, November 21, 2021

Violating privacy by algorithm?

https://www.theguardian.com/society/2021/nov/21/dwp-urged-to-reveal-algorithm-that-targets-disabled-for-benefit

DWP urged to reveal algorithm that ‘targets’ disabled for benefit fraud

Disabled people are being subjected to stressful checks and months of frustrating bureaucracy after being identified as potential benefit fraudsters by an algorithm the government is refusing to disclose, according to a new legal challenge.

A group in Manchester has launched the action after mounting testimony from disabled people in the area that they were being disproportionately targeted for benefit fraud investigations. Some said they were living in “fear of the brown envelope” showing their case was being investigated. Others said they had received a phone call, without explanation as to why they had been flagged.

The Department for Work and Pensions (DWP) has previously conceded that it uses “cutting-edge artificial intelligence” to track possible fraud but has so far rebuffed attempts to explain how the algorithm behind the system was compiled. Campaigners say that once flagged, those being examined can face an invasive and humiliating investigation lasting up to a year.



Fertile ground for recruiting Privacy Lawyers?

https://www.theregister.com/2021/11/20/in_brief_ai/

AI surveillance software increasingly used to make sure contract lawyers are doing their jobs at home

Contract lawyers are increasingly working under the thumb of facial-recognition software as they continue to work from home during the COVID-19 pandemic.

The technology is hit-and-miss, judging from interviews with more than two dozen American attorneys conducted by the Washington Post. To make sure these contract lawyers, who take on short-term gigs, are working as expected and are handling sensitive information appropriately, their every move is followed by webcams.

The monitoring software is mandated by their employers, and is used to control access to the legal documents that need to be reviewed. If the system thinks someone else is looking at the files on the computer, or equipment has been set up to record information from the screen, the user is booted out.
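
The gating mechanism described above, revoking access unless the webcam sees exactly one face, can be approximated with OpenCV's stock face detector. This is a sketch of the idea only; the vendors' actual products, and whatever matching logic they use, are not public.

```python
# Minimal sketch of the presence-check idea using OpenCV's bundled Haar
# cascade: treat anything other than exactly one visible face as a
# policy violation. Illustration only, not any vendor's product.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)

while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) != 1:
        # Zero faces (user absent) or several (shoulder surfer): in the
        # monitoring tools described above, this would revoke access.
        print("access revoked: expected exactly one face, saw", len(faces))
        break
cam.release()
```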


(Related)

https://www.tribuneindia.com/news/jobs&careers/how-wearable-tech-can-reveal-your-performance-at-work-341035

How wearable tech can reveal your performance at work

Not just keeping you fit and healthy, data from fitness trackers and smart watches can also predict individual job performances as workers travel to and from the office wearing those devices, says a study.

Previous research on commuting indicates that stress, anxiety, and frustration from commuting can lead to a less efficient workforce and an increased counterproductive work behaviour.

Researchers from Dartmouth College in the US built mobile sensing machine learning (ML) models to accurately predict job performance via data derived from wearable devices.

… "Compared to low performers, high performers display greater consistency in the time they arrive and leave work," said Pino Audia, a co-author of the study.

"This dramatically reduces the negative impacts of commuting variability and suggests that the secret to high performance may lie in sticking to better routines." While high performers had physiological indicators that are consistent with physical fitness and stress resilience, low performers had higher stress levels in the times before, during, and after commutes.



When laws conflict?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3961863

Legal Opacity: Artificial Intelligence’s Sticky Wicket

Proponents of artificial intelligence (“AI”) transparency have carefully illustrated the many ways in which transparency may be beneficial to prevent safety and unfairness issues, to promote innovation, and to effectively provide recovery or support due process in lawsuits. However, impediments to transparency goals, described as opacity, or the “black-box” nature of AI, present significant issues for promoting these goals.

An undertheorized perspective on opacity is legal opacity, where competitive, and often discretionary legal choices, coupled with regulatory barriers create opacity. Although legal opacity does not specifically affect AI only, the combination of technical opacity in AI systems with legal opacity amounts to a nearly insurmountable barrier to transparency goals. Types of legal opacity, including trade secrecy status, contractual provisions that promote confidentiality and data ownership restrictions, and privacy law independently and cumulatively make the black box substantially opaquer.

The degree to which legal opacity should be limited or disincentivized depends on the specific sector and transparency goals of specific AI technologies, technologies which may dramatically affect people’s lives or may simply be introduced for convenience. This Response proposes a contextual approach to transparency: Legal opacity may be limited in situations where the individual or patient benefits, when data sharing and technology disclosure can be incentivized, or in a protected state when transparency and explanation are necessary.



Everything you ever wanted to know?

https://www.emerald.com/insight/content/doi/10.1108/S2398-601820210000008007/full/html

The Big Data World: Benefits, Threats and Ethical Challenges

Advances in Big Data, artificial Intelligence and data-driven innovation bring enormous benefits for the overall society and for different sectors. By contrast, their misuse can lead to data workflows bypassing the intent of privacy and data protection law, as well as of ethical mandates. It may be referred to as the ‘creep factor’ of Big Data, and needs to be tackled right away, especially considering that we are moving towards the ‘datafication’ of society, where devices to capture, collect, store and process data are becoming ever-cheaper and faster, whilst the computational power is continuously increasing.

If using Big Data in truly anonymisable ways, within an ethically sound and societally focussed framework, is capable of acting as an enabler of sustainable development, using Big Data outside such a framework poses a number of threats, potential hurdles and multiple ethical challenges. Some examples are the impact on privacy caused by new surveillance tools and data gathering techniques, including also group privacy, high-tech profiling, automated decision making and discriminatory practices.

In our society, everything can be given a score and critical life changing opportunities are increasingly determined by such scoring systems, often obtained through secret predictive algorithms applied to data to determine who has value. It is therefore essential to guarantee the fairness and accurateness of such scoring systems and that the decisions relying upon them are realised in a legal and ethical manner, avoiding the risk of stigmatisation capable of affecting individuals’ opportunities. Likewise, it is necessary to prevent the so-called ‘social cooling’. This represents the long-term negative side effects of the data-driven innovation, in particular of such scoring systems and of the reputation economy. It is reflected in terms, for instance, of self-censorship, risk-aversion and lack of exercise of free speech generated by increasingly intrusive Big Data practices lacking an ethical foundation.

Another key ethics dimension pertains to human-data interaction in Internet of Things (IoT) environments, which is increasing the volume of data collected, the speed of the process and the variety of data sources. It is urgent to further investigate aspects like the ‘ownership’ of data and other hurdles, especially considering that the regulatory landscape is developing at a much slower pace than IoT and the evolution of Big Data technologies.

These are only some examples of the issues and consequences that Big Data raise, which require adequate measures in response to the ‘data trust deficit’, moving not towards the prohibition of the collection of data but rather towards the identification and prohibition of their misuse and unfair behaviours and treatments, once government and companies have such data. At the same time, the debate should further investigate ‘data altruism’, deepening how the increasing amounts of data in our society can be concretely used for public good and the best implementation modalities.



Perhaps an AI detective agency?

https://journals.sagepub.com/doi/abs/10.1177/20322844211057019

Legal challenges in bringing AI evidence to the criminal courtroom

Artificial Intelligence (AI) is rapidly transforming the criminal justice system. One of the promising applications of AI in this field is the gathering and processing of evidence to investigate and prosecute crime. Despite its great potential, AI evidence also generates novel challenges to the requirements in the European criminal law landscape. This study aims to contribute to the burgeoning body of work on AI in criminal justice, elaborating upon an issue that has not received sufficient attention: the challenges triggered by AI evidence in criminal proceedings. The analysis is based on the norms and standards for evidence and fair trial, which are fleshed out in a large amount of European case law. Through the lens of AI evidence, this contribution aims to reflect on these issues and offer new perspectives, providing recommendations that would help address the identified concerns and ensure that the fair trial standards are effectively respected in the criminal courtroom.



Next article should discuss how to find a jury of AI peers.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3963422

The Legal Quandary When AI Is The Criminal

One assumption about AI is that there will always be a human held accountable for any bad acts that the AI perchance commits. Some though question this assumption and emphasize that the AI might presumably “act on its own” or that it will veer far from its programming or that the programmers that created the AI will be impossible to identify. [Or the programmers were themselves AI? Bob] A legal quandary is ostensibly raised via the advent of such AI that goes criminally bad (or was bad, to begin with).



Making new law…

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3963426

Core Principles of Justice And Respective AI Impacts

A vital question worth asking is what will happen to our venerated principles of justice due to the advent of AI in the law. To grapple with that crucial matter, we first clarify the precepts of justice to be considered and then stepwise analyze how AI will impact each of them.


(Related)

https://scholar.law.colorado.edu/cgi/viewcontent.cgi?article=2500&context=articles

The Law of AI

The question of whether new technology requires new law is central to the field of law and technology. From Frank Easterbrook’s “law of the horse” to Ryan Calo’s law of robotics, scholars have debated the what, why, and how of technological, social, and legal co-development and construction. Given how rarely lawmakers create new legal regimes around a particular technology, the EU’s proposed “AI Act” (Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence and Amending Certain Union Legislative Acts) should put tech-law scholars on high alert. Leaked early this spring and officially released in April 2021, the AI Act aims to establish a comprehensive European approach to AI risk-management and compliance, including bans on some AI systems.

In Demystifying the Draft EU Artificial Intelligence Act, Michael Veale and Frederik Zuiderveen Borgesius provide a helpful and evenhanded entrée into this “world-first attempt at horizontal regulation of AI systems.” On the one hand, they admire the Act’s “sensible” aspects, including its risk-based approach, prohibitions of certain systems, and attempts at establishing public transparency. On the other, they note its “severe weaknesses” including its reliance on “1980s product safety regulation” and “standardisation bodies with no fundamental rights experience.” For U.S. (and EU!) readers looking for a thoughtful overview and contextualization of a complex and somewhat inscrutable new legal system, this Article brings much to the table at a relatively concise length.



Obvious?

https://orbilu.uni.lu/bitstream/10993/48564/1/Blount%20RIDP%20PDF.pdf

APPLYING THE PRESUMPTION OF INNOCENCE TO POLICING WITH AI

This paper argues that predictive policing, which relies upon former arrest records, hinders the future application of the presumption of innocence. This is established by positing that predictive policing is comparable to traditional criminal investigations in substance and scope. Police records generally do not clarify whether former charges result in dismissal or acquittal, or conversely, conviction. Therefore, police as state actors may unlawfully act in reliance on an individual’s former arrest record, despite a favourable disposition. Accordingly, it is argued that the presumption of innocence as a fair trial right may be effectively nullified by predictive policing.


(Related) The next step…

https://orbi.uliege.be/handle/2268/264969

The Use of AI Tools in Criminal Courts: Justice Done and Seen To Be Done?

Artificial intelligence (hereafter: AI) is impacting all sectors of society these days, including the criminal justice area. AI has indeed become an important tool in this area, whether for citizens seeking justice, legal practitioners or police and judicial authorities. While there is already a large body of literature on the prediction and detection of crime, this article focuses on the current and future role of AI in the adjudication of criminal cases. A distinction will be made between AI systems that facilitate adjudication and those that could, in part or wholly, replace human judges. At each step, we will give some concrete examples and evaluate what are, or could be, the advantages and disadvantages of such systems when used in criminal courts.



AI is never cruel…

https://lexelectronica.openum.ca/files/sites/103/La-justice-dans-tous-ses-%C3%A9tats_Michael_Lang.pdf

REVIEWING ALGORITHMIC DECISION MAKING IN ADMINISTRATIVE LAW

Artificial intelligence is perhaps the most significant technological shift since the popularization of the Internet in the waning years of the 20th century. Artificial intelligence promises to affect most parts of the modern economy, from trucking and transportation to medical care and research. Our legal system has already begun to contemplate how artificially intelligent decision making systems are likely to affect procedural fairness and access to justice. These effects have been underexamined in the area of administrative law, in which artificially intelligent systems might be used to expedite decision making, ensure the relatively equal treatment of like cases, and ward against discrimination. But the adoption of artificially intelligent systems by administrative decision makers also raises serious questions. This essay focuses on one such question: whether the administrative decisions taken by artificially intelligent systems are capable of meeting the duty of procedural fairness owed to the subjects of such decisions. This essay is arranged in three sections. In the first, I briefly outline the increasing use of artificially intelligent systems in the administrative context. I focus primarily on machine learning algorithms and will describe the technical challenge of inexplicability that they raise. In the second section, I set out the duty of administrative decision makers to explain their reasoning in certain contexts. In the third section, I argue that administrative processes that use artificially intelligent systems will likely complicate the effective discharge of this duty. Individuals subject to certain kinds of administrative decisions may be deprived of the reasons to which they are entitled. I argue that artificial intelligence might prompt us to rethink reason giving practices in administrative law.



Ethical medicine. Take two tablets and call me in the morning?

https://ieeexplore.ieee.org/abstract/document/9597180

Regulatory Framework of Artificial Intelligence in Healthcare

This paper provides an overview of the application of artificial intelligence in healthcare and its implications, weighing the privacy this new technology must preserve against the availability of information it needs. We also discuss the regulatory framework in the most important jurisdictions, such as the United States and Europe, comparing the laws and strategies that organizations have used to preserve the security and control of artificial intelligence in healthcare. We then set out the ethical challenges posed by the entry of this new technology into our lives, and situate it in its current context: how it emerged and its history over the years. Finally, we draw conclusions and offer the authors’ own perspective on the issues discussed throughout the paper.



Backing into ethics?

https://ieeexplore.ieee.org/abstract/document/9611065

The Ethics of Artificial Intelligence

Chapter Abstract: The dramatic theoretical and practical progress of artificial intelligence in the past decade has raised serious concerns about its ethical consequences. In response, more than eighty organizations have proposed sets of principles for ethical artificial intelligence. The proposed principles overlap in their concern with values such as transparency, justice, fairness, human benefits, avoiding harm, responsibility, and privacy. But no substantive discussion of how principles for ethical AI can be analyzed, justified, and reconciled has taken place. Moreover, the values assumed by these principles have received little analysis and assessment. Perhaps issues about principles and values can be evaded by Stuart Russell's proposal that beneficial AI concerns people's preferences rather than their ethical principles and values.



Tools & Techniques

https://www.makeuseof.com/tag/best-walkie-talkie-app/

The Best Two-Way Walkie Talkie Apps for Android and iPhone