Sunday, January 09, 2022

Another wave of technology we can try to understand.

https://www.liebertpub.com/doi/full/10.1089/cyber.2021.29234.editorial

Ready (or Not) Player One: Initial Musings on the Metaverse



“That’s obvious” is not a phrase understood by AI.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3998249

Overturned Legal Rulings Are Pivotal In Using Machine Learning And The Law

Much of the time, attorneys know that the law is relatively stable and predictable. This makes things easier for all concerned. At the same time, attorneys also know and anticipate that cases will be overturned. What would happen if we trained an AI but failed to point out that rulings are at times overruled? That’s the mess that some of those using machine learning are starting to appreciate.
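A minimal sketch of how that caveat might show up in a training pipeline, under assumptions of my own: I imagine a hypothetical corpus where each ruling carries an overruled_by field, and down-weight cases that were later overturned so a model does not treat stale precedent as settled law. The schema, field names, and weighting rule are illustrative, not taken from the paper.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Ruling:
    case_id: str
    text: str
    outcome: str                        # e.g. "affirmed" or "reversed"
    overruled_by: Optional[str] = None  # case_id of a later ruling, if any

def training_examples(corpus: list[Ruling]) -> list[dict]:
    """Build (text, label, weight) records, flagging rulings later overturned.

    Overturned rulings are kept but down-weighted, so the model can still see
    that the law changes rather than learning every ruling as permanent.
    """
    examples = []
    for r in corpus:
        examples.append({
            "text": r.text,
            "label": r.outcome,
            "weight": 0.2 if r.overruled_by else 1.0,  # down-weight stale precedent
        })
    return examples

corpus = [
    Ruling("A-1", "Contract enforceable despite ...", "affirmed"),
    Ruling("B-2", "Search held lawful because ...", "reversed", overruled_by="C-3"),
]
print(training_examples(corpus))
```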



What are the inputs? How much weight do you give a Trump Tweet?

https://interestingengineering.com/data-scientists-believe-algorithms-can-predict-political-unrest

Can Algorithms Predict Political Unrest? These Data Scientists Believe So

Who can forget the attack on the Capitol last January 6th? For those who do remember it well, there is an urgency to do something to avoid it ever happening again. One way to do that is to predict these events before they happen, just as you can predict weather patterns.

Some data scientists believe they can achieve exactly that, according to The Washington Post. “We now have the data — and opportunity — to pursue a very different path than we did before,” said Clayton Besaw, who helps run CoupCast, a machine-learning-driven program based at the University of Central Florida that predicts coups for a variety of countries.

This type of predictive modeling has been around for a while but has mostly focused on countries where political unrest is far more common. Now, the hope is that it can be redirected to other nations to help prevent events like that of January 6th. And so far, the firms working in this field have been quite successful.
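A toy sketch of what such a forecast might rest on, with entirely made-up features (protest counts, economic stress, inflammatory posts by prominent accounts, which is where the weighting question above comes in). This is not CoupCast's method, just a minimal logistic-regression example of turning country-month indicators into an unrest probability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical country-month features:
# [protest_events, economic_stress_index, inflammatory_posts_by_leaders]
X = np.array([
    [2,  0.1,  1],
    [15, 0.7, 40],
    [4,  0.2,  3],
    [22, 0.9, 55],
    [1,  0.1,  0],
    [18, 0.8, 30],
])
# 1 = unrest event in the following month, 0 = none (made-up labels)
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# The learned coefficients are one crude answer to the question of how much
# weight each input (including inflammatory posts) ends up carrying.
print(dict(zip(["protests", "econ_stress", "posts"], model.coef_[0].round(2))))

# Estimated probability of unrest for a new, hypothetical country-month
print(model.predict_proba([[10, 0.5, 20]])[0][1])
```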



Perhaps something like it: actions taken before we know why something appears to work.

https://www.theguardian.com/technology/2022/jan/09/are-we-witnessing-the-dawn-of-post-theory-science

Are we witnessing the dawn of post-theory science?

Isaac Newton apocryphally discovered his second law of motion after an apple fell on his head. Much experimentation and data analysis later, he realised there was a fundamental relationship between force, mass and acceleration. He formulated a theory to describe that relationship – one that could be expressed as an equation, F=ma – and used it to predict the behaviour of objects other than apples. His predictions turned out to be right (if not always precise enough for those who came later).

Contrast how science is increasingly done today. Facebook’s machine learning tools predict your preferences better than any psychologist. AlphaFold, a program built by DeepMind, has produced the most accurate predictions yet of protein structures based on the amino acids they contain. Both are completely silent on why they work: why you prefer this or that information; why this sequence generates that structure.
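A small illustration of the point, under obviously artificial assumptions: fit a black-box regressor to simulated (mass, acceleration) → force data generated from F=ma. The model predicts forces well without ever stating the equation, which is roughly the sense in which these systems are "silent on why they work."

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Simulated observations: masses (kg), accelerations (m/s^2), force from F = m * a
m = rng.uniform(0.5, 10.0, 500)
a = rng.uniform(0.5, 10.0, 500)
X = np.column_stack([m, a])
F = m * a

# A black-box model: it learns to predict force from examples,
# but nowhere does it represent the relationship as an equation.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, F)

m_new, a_new = 3.0, 4.0
print("model prediction:", model.predict([[m_new, a_new]])[0])
print("Newton's F = ma: ", m_new * a_new)
```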



My AI should read this...

https://openeducationalberta.ca/educationaltechnologyethics2/chapter/final-the-razors-edge-how-to-balance-risk-in-artificial-intelligence-machine-learning-and-big-data/

Chapter 6: The Razor’s Edge: How to Balance Risk in Artificial Intelligence, Machine Learning, and Big Data

This chapter is guided by the question: how can an educational system take advantage of rapid technological advances in a safe and socially responsible manner while still achieving its mandate of fostering and supporting learner success? Artificial intelligence (AI), machine learning (ML), and big data are examples of highly risky technologies that also hold vast potential for innovation (Floridi et al., 2018). In examining technological advances from an ethical perspective, one of the aims is to avoid harm and minimize risk. This is referred to as a consequentialist perspective (Farrow, 2016). The complexity of finding and maintaining a proper balance between advancing technological innovation and avoiding harm and minimizing risk cannot be overstated. This quest for an educational “sweet spot” is mired in a lack of understanding, inconsistent leadership, and simple human greed.

