Monday, December 13, 2021

Today we use the pandemic to justify it. The technology can also track political dissidents.

https://www.reuters.com/world/asia-pacific/skorea-test-ai-powered-facial-recognition-track-covid-19-cases-2021-12-13/

S.Korea to test AI-powered facial recognition to track COVID-19 cases

South Korea will soon roll out a pilot project to use artificial intelligence, facial recognition and thousands of CCTV cameras to track the movement of people infected with the coronavirus, despite concerns about the invasion of privacy.

The nationally funded project in Bucheon, one of the country's most densely populated cities on the outskirts of Seoul, is due to become operational in January, a city official told Reuters.

The system uses AI algorithms and facial recognition technology to analyse footage gathered by more than 10,820 CCTV cameras and track an infected person’s movements, identify anyone they had close contact with, and determine whether they were wearing a mask, according to a 110-page business plan from the city submitted to the Ministry of Science and ICT (Information and Communications Technology), and provided to Reuters by a parliamentary lawmaker critical of the project.



The value of people who make bad choices?

https://insight.kellogg.northwestern.edu/article/podcast-why-you-need-a-working-knowledge-of-ai

Podcast: Why You Need a Working Knowledge of AI

What do Watermelon Oreos and Cheetos lip balm have in common? A customer you don’t want.

Using artificial intelligence, marketing professor Eric Anderson and a team of researchers learned that fans of these ultimately doomed products were “harbingers of failure,” in that they tended to really like items that were later discontinued. The fine print for businesses: you don’t want your product to be in their shopping carts.

Note: The Insightful Leader is produced for the ear and not meant to be read as a transcript. We encourage you to listen to the audio version above. However, a transcript of this episode is available here. https://insight.kellogg.northwestern.edu/content/uploads/Eric-Anderson-AI-podcast-transcription.pdf



What happens if the UK says no?

https://www.reuters.com/world/uk/uk-antitrust-regulator-looks-into-microsofts-16-bln-nuance-deal-2021-12-13/

UK antitrust regulator looks into Microsoft's $16 bln Nuance deal

The Competition and Markets Authority (CMA), which has been stepping up its regulation of Big Tech, said it was considering whether the deal would result in reduced competition in the UK market.

Microsoft announced it would buy Nuance in April to boost its presence in cloud services for healthcare. The deal has already received regulatory approval in the United States and Australia, with no remedies required.



Some minor changes to economic thought in the age of AI.

https://www.ft.com/content/d1bfa6d4-cee9-49db-9f79-eaf5ebfebf76

Health tech industry learns true value of medical data

In a medical artificial intelligence business, the quality of your algorithms — and therefore the value of your company — depends on your access to data. In this, the health tech sector is in some ways similar to advertising and internet search industries: it has quickly learnt that data is immensely valuable.

… Some 20 US healthcare systems recently formed a data company called Truveta, raising $200m to capitalise on the value of their combined patient records. In 2018, pharmaceutical company Roche valued US cancer patient data at almost $2bn, through its acquisition of Flatiron Health.

Hospitals and diagnostic labs are a rich source of this kind of health data for AI developers. Their databases of images and medical records are fodder for machine learning algorithms. These healthcare facilities typically seek patient consent for use of their data via a blanket “research use” provision that is a condition for using the medical service.


(Related)

https://www.inc.com/kevin-j-ryan/artificial-intelligence-social-justice-responsibility-microsoft-mary-gray-neurips.html

A Microsoft Researcher on the Power (and Perils) of Building AI

Harvard anthropologist Mary Gray explains why some of the biggest problems can arise in the earliest stages.

Relying on artificial intelligence in your business comes with some serious responsibilities. Just ask Mary Gray, a Harvard anthropologist and senior principal researcher at Microsoft Research, who this week stressed the importance of collecting data mindfully when building AI, and how failing to do so can result in social injustices. Gray was speaking at the Conference on Neural Information Processing Systems about the relationship between AI and social justice.

"Data," she cautioned to the online audience, "is power."

