Sunday, January 23, 2022

Got religion? Let us add you to our list… (Is claiming a religion the only ‘legal’ way to avoid vaccination?)

https://www.pogowasright.org/18-more-federal-agencies-eye-making-vaccine-religious-objector-lists/

18 More Federal Agencies Eye Making Vaccine Religious-Objector Lists

Sarah Parshall Perry and GianCarlo Canaparo of the Meese Center write:

This week, we revealed that an obscure federal agency plans to keep lists of the “personal religious information” of employees who had religious objections to the federal employee vaccine mandate.
As it turns out, the little-known Pre-trial Services Agency for the District of Columbia isn’t the only federal agency involved. As we feared, a whole-of-government effort looks to be underway.
A little digging at the Federal Register revealed that there are at least 19 total federal agencies—including five cabinet level agencies—that have created or proposed to create these tracking lists for religious-exemption requests from their employees.

Read more at The Heritage Foundation.

[From the article:

the federal government decrees that a citizen who seeks a medical exemption or a waiver based on a sincerely held religious belief has automatically consented to being entered in the Database. To put it plainly, invoking the legal right to exercise one’s religious faith risks simultaneously waiving that legal right.]

Related:

The Biden Administration Is Making Lists of Religious Vaccine Objectors:

https://www.dailysignal.com/2022/01/11/breaking-biden-administration-making-lists-of-religious-vaccine-objectors/

https://www.federalregister.gov/documents/2022/01/11/2021-28135/privacy-act-of-1974-system-of-records



Intentional but not hyped.

https://www.wsj.com/articles/the-nanotechnology-revolution-is-herewe-just-havent-noticed-yet-11642827640?mod=djemalertNEWS

The Nanotechnology Revolution Is Here—We Just Haven’t Noticed Yet

Before there was a “metaverse,” before there were crypto millionaires, before nearly every kid in America wanted to be an influencer, the most-hyped thing in tech was “nanotechnology.” “Nano-,” for those who could use a refresher, means “one billionth,” and nanotechnology generally refers to materials manipulated at an atomic or molecular scale.

In the more distant future, this technology might yet enable the vision physicist Richard Feynman laid out in his famous 1959 lecture “There’s Plenty of Room at the Bottom,” in which he hypothesized about a way to build three-dimensional structures one atom at a time. Achieving even a fraction of what he proposed would open up tantalizing possibilities, from sensors that can detect viruses in the air before we inhale them to quantum computers in our pockets.



Can we do it?

https://dl.acm.org/doi/abs/10.1145/3491209

Trustworthy Artificial Intelligence: A Review

Artificial intelligence (AI) and algorithmic decision making are having a profound impact on our daily lives. These systems are widely used in high-stakes applications like healthcare, business, government, education, and justice, moving us toward a more algorithmic society. However, despite the many advantages of these systems, they sometimes directly or indirectly cause harm to users and society. Therefore, it has become essential to make these systems safe, reliable, and trustworthy. Several requirements, such as fairness, explainability, accountability, reliability, and acceptance, have been proposed in this direction to make these systems trustworthy. This survey analyzes these requirements through the lens of the literature. It provides an overview of different approaches that can help mitigate AI risks and increase trust and acceptance of these systems among users and society. It also discusses existing strategies for validating and verifying these systems and the current standardization efforts for trustworthy AI. Finally, it presents a holistic view of recent advancements in trustworthy AI to help interested researchers grasp the crucial facets of the topic efficiently, and offers possible future research directions.
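The survey's requirements read as abstractions, but some of them (fairness in particular) can be checked with a few lines of code. As a loose illustration, not drawn from the paper itself, here is a minimal Python sketch of one common group-fairness measure, demographic parity difference, on made-up data; the function name and toy numbers are hypothetical.

# Minimal sketch of a group-fairness check (demographic parity difference).
# Toy data; names and numbers are illustrative, not from the surveyed paper.
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates across demographic groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Example: model decisions for applicants from two groups, A and B.
preds  = [1, 1, 1, 1, 0,  1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A",  "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_difference(preds, groups)
print(rates)                      # {'A': 0.8, 'B': 0.4}
print(f"parity gap: {gap:.2f}")   # 0.40, a gap this large would merit review

A single number like this obviously doesn't settle whether a system is fair, but it is the kind of concrete check the survey's "fairness" requirement ultimately cashes out into.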



Perspective. (Not yet in my library)

https://www.engadget.com/hitting-the-books-the-work-of-the-future-autor-mindell-reynolds-mit-press-163011680.html

Hitting the Books: What autonomous vehicles mean for tomorrow's workforce

In the face of daily pandemic-induced upheavals, "business as usual" can often seem a quaint and distant notion to today's workforce. But even before we all got stuck in never-ending Zoom meetings, the logistics and transportation sectors (like much of America's economy) were already subtly shifting in the face of continuing advances in robotics, machine learning and autonomous navigation technologies.

In their new book, The Work of the Future: Building Better Jobs in an Age of Intelligent Machines, an interdisciplinary team of MIT researchers (leveraging insights gleaned from MIT's multi-year Task Force on the Work of the Future) examines the disconnect between improvements in technology and the benefits workers derive from those advancements. It's not that America is rife with "low-skill workers," as New York's new mayor seems to believe, but rather that the nation is saturated with low-wage, low-quality positions — positions excluded from the ever-increasing perks and paychecks enjoyed by knowledge workers. The excerpt below examines the impact vehicular automation will have on rank-and-file employees, rather than the Musks of the world.



A very brief summary…

https://www.concordia.ca/content/dam/ginacody/research/spnet/Documents/BriefingNotes/EmergingTech-MilitaryApp/BN-62-Emerging-technology-and-military-application-May2021.pdf

VULNERABILITIES OF EMERGING TECHNOLOGIES: A SYSTEMATIC LITERATURE REVIEW

This note identifies and categorises the vulnerabilities and problems caused by emerging technologies through a literature review and offers suggestions for resolving them.

The technologies whose vulnerabilities are considered:

- Artificial Intelligence (AI), including but not limited to Machine Learning and Deep Learning
- Internet of Things (IoT)
- Smart Cities, including Smart Homes and Self-Driving Vehicles
- Blockchains
- Cloud Computing
- Quantum Computing
- Dark Web

The vulnerabilities being investigated include:

- Security
- Privacy
- Trust and confidence
- Fairness, equality and human rights
- Law and policy making



Warning. We don’t understand ethics.

https://new.cultureplex.ca/wp-content/uploads/2022/01/Ethical_Skills_We_Are_Not_Teaching_Report.pdf

The Ethical Skills We Are Not Teaching: An Evaluation of University Level Courses on Artificial Intelligence, Ethics, and Society


(Related) Similar questions…

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4001877

Bioethics and the Great Powers

Advances in medicine and technology stand poised to transform the way that humans relate to each other and to nature, even our own. Regimes like China are already enlisting the life sciences to implement troubling programs of biometric surveillance and population control. The bioethics wars are here. America isn’t ready. Democracies are equipped to deliberate on the ethics of cutting-edge approaches to public health, climate change, and biodiversity. But for the first time since 1974, the country lacks even a national authority to guide citizens and policymakers on the moral and social implications of interventions like genetically editing mosquitos to combat disease, or babies for COVID-19 immunity. Hard debates about values and consequences that spill over across borders -- that's precisely where autocracies often fall short. The United States must lead on bioethics abroad by shoring up the institutions that govern experimental research and clinical practice. Now is the time for bold measures to confront the controversies of our time.



Tools & Techniques.

https://www.makeuseof.com/python-developer-tools/

10 Useful Tools for Python Developers

Whether you need Python tools for data science, machine learning, web development, or anything in between, this list has you covered.
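The article's ten picks aren't excerpted here, but for a flavor of the category, here is a minimal sketch using two tools every Python developer already has on hand, the standard library's cProfile and timeit modules; the slow_sum workload is a made-up example, not taken from the article.

# Quick sketch: profiling and timing with Python's built-in tooling.
# slow_sum is a throwaway workload invented for this example.
import cProfile
import timeit

def slow_sum(n):
    """Deliberately naive loop to give the profiler something to measure."""
    total = 0
    for i in range(n):
        total += i * i
    return total

# One-off profile: prints per-function call counts and timings to stdout.
cProfile.run("slow_sum(100_000)")

# Repeated timing: average wall-clock cost of the same call over 50 runs.
elapsed = timeit.timeit("slow_sum(100_000)", globals=globals(), number=50)
print(f"avg per call: {elapsed / 50 * 1000:.2f} ms")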

