Hoist on their own petard. (I’ve always wanted to say that…)
Biometrics, Smartphones, Surveillance Cameras Pose New Obstacles for U.S. Spies
U.S., rivals seek ways to adapt spycraft to a changing world; being on the grid can blow your cover, but so can staying off
Operatives widely suspected of working for Israel’s Mossad spy service planned a stealthy operation to kill a Palestinian militant living in Dubai. The 2010 plan was a success except for the stealth part—closed-circuit cameras followed the team’s every move, even capturing them before and after they put on disguises.
In 2017, a suspected U.S. intelligence officer held a supposedly clandestine meeting with the half brother of North Korean leader Kim Jong Un, days before the half brother, Kim Jong Nam, was assassinated. That encounter also became public knowledge, thanks to a hotel’s security camera footage.
(Related) Surveillance: It’s not just for governments…
https://www.makeuseof.com/tag/3-effective-cell-phone-surveillance-apps/
The 6 Best Spy Phone Apps
Concerned about your children's safety? Install one of these cell phone surveillance apps on their Android device or iPhone.
When I started working with computers, all data (then available) was delivered to the mainframe.
https://venturebeat.com/2021/11/24/ai-will-soon-oversee-its-own-data-management/
AI will soon oversee its own data management
AI thrives on data. The more data it can access, and the more accurate and contextual that data is, the better the results will be.
The problem is that the data volumes currently being generated by the global digital footprint are so vast that it would take literally millions, if not billions, of data scientists to crunch it all — and it still would not happen fast enough to make a meaningful impact on AI-driven processes.
According to Dell’s 2021 Global Data Protection Index, the average enterprise is now managing ten times more data than it did five years ago, with the global load skyrocketing from “just” 1.45 petabytes in 2016 to 14.6 petabytes today. With data being generated in the datacenter, the cloud, the edge, and on connected devices around the world, we can expect this upward trend to continue well into the future.
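The quoted figures do in fact work out to roughly tenfold growth, as a quick check confirms (the numbers below are just the Dell figures from the paragraph above):

```python
# Sanity check on the growth quoted from Dell's 2021 Global Data
# Protection Index: from 1.45 PB (2016) to 14.6 PB (today).
start_pb = 1.45
today_pb = 14.6

growth = today_pb / start_pb  # ratio of today's load to 2016's
print(round(growth, 1))       # → 10.1, consistent with "ten times more data"
```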
Refining our perspective. I can explain it; can you understand it?
https://venturebeat.com/2021/11/26/what-is-explainable-ai-building-trust-in-ai-models/
What is explainable AI? Building trust in AI models
As AI-powered technologies proliferate in the enterprise, the term “explainable AI” (XAI) has entered mainstream vernacular. XAI is a set of tools, techniques, and frameworks intended to help users and designers of AI systems understand their predictions, including how and why the systems arrived at them.
A June 2020 IDC report found that business decision-makers believe explainability is a “critical requirement” in AI. To this end, explainability has been referenced as a guiding principle for AI development at DARPA, the European Commission’s High-level Expert Group on AI, and the National Institute of Standards and Technology.
… Generally speaking, there are three types of explanations in XAI: global, local, and social influence.
Global explanations shed light on what a system is doing as a whole as opposed to the processes that lead to a prediction or decision. They often include summaries of how a system uses a feature to make a prediction and “metainformation,” like the type of data used to train the system.
Local explanations provide a detailed description of how the model came up with a specific prediction. These might include information about how a model uses features to generate an output or how flaws in input data will influence the output.
Social influence explanations relate to the way that “socially relevant” others — i.e., users — behave in response to a system’s predictions. A system using this sort of explanation may show a report on model adoption statistics, or the ranking of the system by users with similar characteristics (e.g., people above a certain age).
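A "local" explanation of the kind described above is easiest to see for a linear model, where each feature's contribution to one prediction is simply its weight times its value. The sketch below is illustrative only; the loan-scoring feature names and weights are invented, not taken from any real system:

```python
def local_explanation(weights, bias, features):
    """Break one prediction of a linear model into per-feature contributions.

    The per-feature breakdown is the local explanation: it shows how
    each input feature pushed this specific output up or down.
    """
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    prediction = bias + sum(contributions.values())
    return prediction, contributions

# Hypothetical loan-scoring model (weights and features are made up).
weights = {"income": 0.4, "debt": -0.7, "years_employed": 0.2}
bias = 0.1
applicant = {"income": 5.0, "debt": 2.0, "years_employed": 3.0}

score, parts = local_explanation(weights, bias, applicant)
# 'parts' explains why THIS applicant got THIS score:
# income contributed +2.0, debt -1.4, years_employed +0.6
```

Real XAI tooling (e.g., SHAP or LIME) generalizes this idea to nonlinear models, where per-feature contributions must be estimated rather than read off the weights.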