Sunday, September 20, 2020

Interesting that this comes from Russia.

http://journals.rudn.ru/law/article/view/24569

COMPUTER TECHNOLOGIES FOR COMMITTING SABOTAGE AND TERRORISM

The article discusses the problems that arise in connection with crimes against state and public security committed through the use of computer and network technologies. The topic is increasingly relevant because some states have already experienced the effects of “combat” computer viruses, which can be regarded as waging war with cyber weapons. The best-known example is the Stuxnet attack on an Iranian uranium enrichment plant, a virus created specifically to disable industrial control systems. The use of unmanned ground and air vehicles to carry out terrorist acts poses a particular danger.

The destructive potential of cyberterrorism is determined by the widespread computerization of state and public life, the implementation of projects to create smart cities, including smart transportation, as well as the intensive development of the Internet of things. The purpose of the article is to analyze new criminal threats to state and public security, as well as to study high-tech ways of committing crimes such as sabotage, terrorist acts, and other crimes of a terrorist nature.

The article describes some of the techniques already used to commit crimes of sabotage and terrorism.





...and you will think it’s a human writing it.

https://www.theatlantic.com/ideas/archive/2020/09/future-propaganda-will-be-computer-generated/616400/

In the Future, Propaganda Will Be Computer-Generated

Disinformation campaigns used to require a lot of human effort, but artificial intelligence will take them to a whole new level.





Don’t wait for an AI professor…

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7487209/

Emerging challenges in AI and the need for AI ethics education

Artificial Intelligence (AI) is reshaping the world in profound ways; some of its impacts are certainly beneficial, but widespread and lasting harms can result from the technology as well. The integration of AI into various aspects of human life is underway, and the complex ethical concerns emerging from the design, deployment, and use of the technology serve as a reminder that it is time to revisit what future developers and designers, along with professionals, are learning when it comes to AI. It is of paramount importance to train future members of the AI community, and other stakeholders as well, to reflect on the ways in which AI might impact people’s lives and to embrace their responsibilities to enhance its benefits while mitigating its potential harms. This could occur in part through the fuller and more systematic inclusion of AI ethics into the curriculum. In this paper, we briefly describe different approaches to AI ethics and offer a set of recommendations related to AI ethics pedagogy.



(Related) If that last article was too complex…

https://dspace.mit.edu/handle/1721.1/127488

Can my algorithm be my opinion?: An AI + ethics curriculum for middle school students

Children of today can be considered "AI natives." In the same way that children of the 90s were considered digital natives, children of the early 2000s and 2010s have grown up in a world where much of their access to information is mediated by artificial intelligence systems. Furthermore, we expect their futures to be increasingly affected by AI, as consumers and designers. For this reason, there is a movement to teach AI concepts to K-12 students. Drawing on a tradition of scholarship in Science and Technology Studies and a surge in recent research on the ethical issues associated with the construction of AI systems, it is clear that students need not only a technical education in AI, but also an education that will allow them to become conscientious consumers and ethical designers of it. This thesis presents a set of standards which describe what every child should know about the ethics of artificial intelligence: that it is not an objective or morally neutral source of information and, given that, how to design AI systems with stakeholders in mind. It then describes a series of open-source, largely unplugged activities which address these standards by blending together ethical and technical content. Finally, it presents results from a pilot where students engaged with these activities. Findings about students' initial understanding of AI and the ethical dilemmas associated with it are presented, as are students' understandings after engaging with the curriculum. After participating, students moved from seeing AI as an objective tool to a tool that can be both objective and subjective. By the end of the curriculum, students were able to identify more stakeholders of technical systems and design their own systems according to the values of those stakeholders. This work shows that students can transform into conscientious consumers and ethical designers of AI.





Explaining “explaining.”

https://www.sciencedirect.com/science/article/abs/pii/S0004370220301375

Explanation in AI and law: Past, present and future

Explanation has been a central feature of AI systems for legal reasoning since their inception. Recently, the topic of explanation of decisions has taken on a new urgency, throughout AI in general, with the increasing deployment of AI tools and the need for lay users to be able to place trust in the decisions that the support tools are recommending. This paper provides a comprehensive review of the variety of techniques for explanation that have been developed in AI and Law. We summarise the early contributions and how these have since developed. We describe a number of notable current methods for automated explanation of legal reasoning and we also highlight gaps that must be addressed by future systems to ensure that accurate, trustworthy, unbiased decision support can be provided to legal professionals. We believe that insights from AI and Law, where explanation has long been a concern, may provide useful pointers for future development of explainable AI.





Another reason to grant AI personhood?

https://qz.com/1905712/when-ai-in-healthcare-goes-wrong-who-is-responsible-2/

When AI in healthcare goes wrong, who is responsible?

Artificial intelligence can be used to diagnose cancer, predict suicide, and assist in surgery. In all these cases, studies suggest AI outperforms human doctors in set tasks. But when something does go wrong, who is responsible?

There’s no easy answer, says Patrick Lin, director of Ethics and Emerging Sciences Group at California Polytechnic State University. At any point in the process of implementing AI in healthcare, from design to data and delivery, errors are possible. “This is a big mess,” says Lin. “It’s not clear who would be responsible because the details of why an error or accident happens matters. That event could happen anywhere along the value chain.”





I wonder if the President has other software targeted for his ‘blessing’?

https://www.npr.org/2020/09/20/914032065/tiktok-ban-averted-trump-gives-oracle-walmart-deal-his-blessing

TikTok Ban Averted: Trump Gives Oracle-Walmart Deal His 'Blessing'

President Trump has given tentative approval to a deal that will keep TikTok alive in the U.S., resolving a months-long confrontation between a hit app popularized by lip-syncing teens and White House officials who viewed the service as a national security risk.

As part of the deal rescuing TikTok, U.S. tech company Oracle is joining hands with Walmart to form a new entity called TikTok Global, which will be headquartered in the U.S.

That arrangement appears to satisfy the White House's concerns over the security of American user data, even though ByteDance is expected to hold its majority ownership position.



(Related) Just a “cost of doing business in the US?”

https://www.reuters.com/article/us-usa-china-tiktok-bytedance-idUSKCN26B039

ByteDance says not aware of $5 billion education fund in TikTok deal

TikTok owner ByteDance said in a social media post on Sunday that it had learned for the first time from news reports that it was setting up a $5 billion education fund in the United States.

U.S. President Donald Trump said he had approved a deal, which included a $5 billion education fund, to allow TikTok to continue to operate in the United States.





Perspective. An impressive Infographic returns!

https://www.visualcapitalist.com/every-minute-internet-2020/

Here’s What Happens Every Minute on the Internet in 2020

Grab it from:

https://web-assets.domo.com/blog/wp-content/uploads/2020/08/20-data-never-sleeps-8-final-01-Resize.jpg





Perhaps the greatest invention of the Pandemic Era!

https://dilbert.com/strip/2020-09-20


