Sunday, July 11, 2021

Government guidance collects the most obvious and least offensive security thinking. You had better not fall below this level!

https://www.cpomagazine.com/cyber-security/cisa-releases-ransomware-readiness-assessment-tool-for-assessing-organizations-cybersecurity-posture/

CISA Releases Ransomware Readiness Assessment Tool for Assessing Organizations’ Cybersecurity Posture

CISA strongly recommends that all organizations undertake the CSET Ransomware Readiness Assessment. The toolset is available for free download on CISA’s GitHub repository.





A secure investment?

https://news.crunchbase.com/news/funding-pours-into-cybersecurity-as-first-half-numbers-eclipse-last-years-total/

Funding Pours Into Cybersecurity As Mid-Year 2021 Numbers Eclipse Last Year’s Total

While global funding to startups has exploded this year, cybersecurity seems to be riding its own wave. Only halfway through the year, 2021 already has surpassed the record-breaking $7.8 billion raised by security companies last year.

According to Crunchbase data, $9 billion has flooded into the sector in 309 deals in the first six months of the year — more than double the $4.4 billion the industry realized in the first half of 2020. The second quarter alone saw $5.2 billion — compared to less than $2 billion for the same quarter last year.





Come to think of it, my AI says “Trust me!” a lot!

https://venturebeat.com/2021/07/10/how-cybersecurity-is-getting-ai-wrong/

How cybersecurity is getting AI wrong

The cybersecurity industry is rapidly embracing the notion of “zero trust”, where architectures, policies, and processes are guided by the principle that no one and nothing should be trusted.

However, in the same breath, the cybersecurity industry is incorporating a growing number of AI-driven security solutions that rely on some type of trusted “ground truth” as a reference point.

How can these two seemingly diametrically opposing philosophies coexist?

This is not a hypothetical discussion. Organizations are introducing AI models into their security practices that impact almost every aspect of their business, and one of the most urgent questions remains whether regulators, compliance officers, security professionals, and employees will be able to trust these security models at all.

Because AI models are sophisticated, obscure, automated, and oftentimes evolving, it is difficult to establish trust in an AI-dominant environment. Yet without trust and accountability, some of these models might be considered risk-prohibitive and so could eventually be under-utilized, marginalized, or banned altogether.





Because law is always mysterious.

https://osf.io/preprints/socarxiv/38p5f/

Demystifying the Draft EU Artificial Intelligence Act

In April 2021, the European Commission proposed a Regulation on Artificial Intelligence, known as the AI Act. We present an overview of the Act and analyse its implications, drawing on scholarship ranging from the study of contemporary AI practices to the structure of EU product safety regimes over the last four decades. Aspects of the AI Act, such as different rules for different risk-levels of AI, make sense. But we also find that some provisions of the draft AI Act have surprising legal implications, whilst others may be largely ineffective at achieving their stated goals. Several overarching aspects, including the enforcement regime and the effect of maximum harmonisation on the space for AI policy more generally, engender significant concern. These issues should be addressed as a priority in the legislative process.





When AI starts asking the tough questions…

https://journals.sagepub.com/doi/abs/10.1177/1037969X211029434

Access to algorithms post-Robodebt: Do Freedom of Information laws extend to automated systems?

This article analyses how current Freedom of Information (FOI) laws apply to automated decision-making systems. The authors argue that while current law may extend to automated systems, its application is unclear, both to practitioners and government. Instead, amendments to the FOI Act 1982 (Cth) could clarify how the law operates with respect to automated systems and better balance the underpinning objectives of the Act.





What should your self-driving car say to other self-driving cars?

https://link.springer.com/article/10.1007/s11227-021-03969-0

The internet-of-vehicle traffic condition system developed by artificial intelligence of things

An Internet-of-Vehicle (IoV) system primarily transmits traffic information and various kinds of emergency notices through Vehicle-to-Vehicle (V2V) or Vehicle-to-Infrastructure (V2I) communication; transmitting multimedia as well lets drivers better gauge route conditions, such as road obstacles and the extent of a construction site. Additionally, car accident investigations usually require video records of the scene; surrounding cars could transfer accident-scene videos to help the police reconstruct the situation in detail. These IoV multimedia messages must pass security verification and privacy protection before the system can instantly deliver push notifications and multimedia messages to social groups.

The study aims to construct an IoV traffic condition system built with the Artificial Intelligence of Things (AIoT); data are transmitted over the 6th Generation Network (6G Network), which offers high transmission speed and a Quality of Service (QoS) guarantee. The suggested system also employs federated learning to ensure message security and privacy. The features of the researched system are:

1. Use Faster Region-based Convolutional Neural Networks (Faster R-CNN) to recognize the objects seen by cameras and judge whether road obstacles or construction zones are present;
2. Capture car accident videos through federated learning and send the encrypted evidence to the relevant legal authorities;
3. Use push notifications to send multimedia messages to social groups instantly, marking locations and road conditions so drivers can keep track of their surroundings.

This study expects to deliver video and Global Positioning System (GPS) data for road condition recognition, improving driving safety. Unlike the IoV alarms presented in past research, which require drivers to enter messages to notify nearby cars, this research uses Faster R-CNN to recognize road conditions and transmit the information to base stations, which then pass it on to other vehicles. The federated learning technique in this article can also enhance the accuracy of each car’s Faster R-CNN model.
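The abstract names federated learning but doesn’t spell out the protocol. Below is a minimal sketch of federated averaging (FedAvg), the standard baseline for this kind of setup, not necessarily the paper’s exact method; the weight vectors, sample counts, and `local_update` step are hypothetical stand-ins for per-vehicle Faster R-CNN training.

```python
# Minimal federated-averaging (FedAvg) sketch in plain NumPy.
# The paper's exact protocol isn't given in the abstract; this only
# illustrates the general idea: each vehicle trains on its own footage
# and shares model weights (not raw video), and a base station averages
# them into the next global model.
import numpy as np

def local_update(weights, gradient, lr=0.01):
    """One hypothetical local SGD step on a vehicle's own data."""
    return weights - lr * gradient

def federated_average(vehicle_weights, sample_counts):
    """Weight each vehicle's model by how much local data it trained on."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(vehicle_weights, sample_counts))

# One toy round: three vehicles refine a shared detector locally.
rng = np.random.default_rng(0)
global_w = rng.normal(size=8)   # stand-in for Faster R-CNN weights
local_ws = [local_update(global_w, rng.normal(size=8)) for _ in range(3)]
global_w = federated_average(local_ws, sample_counts=[120, 300, 80])
print(global_w)
```

Weighting by sample count is the usual FedAvg choice: vehicles with more local footage pull the global model further, while raw video never leaves the car.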





Obvious? I think not.

https://www.proquest.com/openview/7af25d86e4e2231c7b33e57504be6b6b/1?pq-origsite=gscholar&cbl=18750&diss=y

Toward Ethical Applications of Artificial Intelligence: Understanding Current Uses of Facial Recognition Technology and Advancing Bias Mitigation Strategies

Facial recognition technology (FRT) is a biometric software-based tool that mathematically maps and analyzes an individual’s facial features in order to draw identifying conclusions from photographs and video. FRT is being implemented throughout society at a rapid rate, as the tool offers significant economic benefits for identification processes and policing. In spite of FRT’s benefits, its broadening implementation carries significant risk to society, as misuse, identification errors, and bias can lead to large-scale violations of individuals’ civil and human rights. The key risks of using FRT come from two sources. First, FRT is trained on curated facial datasets, and labeling errors and a lack of demographic diversity in those datasets have been shown to produce poorly trained, error-prone models with respect to underrepresented groups. Second, only limited regulatory frameworks and ethical standards of use exist for FRT, leading to situations where it is misused or extended beyond its practical utility, violating individual privacy and legal assembly rights and perpetuating cultural bias.

The legal and ethical issues surrounding FRT have come under scrutiny in recent years, following increased public awareness driven by mainstream media reports on the use of FRT at large-scale protest events and in law enforcement. Currently, there are a few examples of state-level regulation, and of industry self-regulation through guiding ethical principles, that restrict and monitor the use of FRT in both government and industry applications. These minimal and isolated forms of regulation leave tremendous gaps in the effective and ethical implementation of FRT, and ample room for unregulated and unethical use cases. This thesis primarily aims to advance promising bias mitigation strategies. The key recommendations are: 1) education for users and increased engagement by stakeholders, 2) comprehensive guidelines that can lead to federal regulation, and 3) a push toward explainable AI. Regulating FRT has become a controversial and increasingly challenging task; urgent regulation is needed now to halt the negative consequences of the technology as it currently exists.
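The thesis argues that biased training data produces unequal error rates across demographic groups. As a hedged illustration of how that shows up in practice, here is a per-group error audit sketch; the records and group names are entirely invented, and the thesis itself doesn’t prescribe any code. The point is only that aggregate accuracy can hide a much worse false-match rate for one group.

```python
# Per-group error audit for a hypothetical face matcher.
# Nothing here comes from the thesis: the records and group names are
# invented, purely to show how aggregate accuracy can hide much higher
# error rates for an underrepresented group.
from collections import defaultdict

# Each record: (demographic_group, model_said_match, truly_same_person)
records = [
    ("group_a", True,  True),  ("group_a", False, False),
    ("group_a", True,  True),  ("group_a", False, False),
    ("group_b", True,  False), ("group_b", True,  True),
    ("group_b", False, True),  ("group_b", True,  False),
]

stats = defaultdict(lambda: {"fp": 0, "fn": 0, "impostor": 0, "genuine": 0})
for group, predicted, actual in records:
    s = stats[group]
    if actual:                      # genuine pair: same person
        s["genuine"] += 1
        if not predicted:
            s["fn"] += 1            # false non-match: misses a true identity
    else:                           # impostor pair: different people
        s["impostor"] += 1
        if predicted:
            s["fp"] += 1            # false match: flags the wrong person

for group, s in sorted(stats.items()):
    fmr = s["fp"] / s["impostor"] if s["impostor"] else 0.0
    fnmr = s["fn"] / s["genuine"] if s["genuine"] else 0.0
    print(f"{group}: false-match rate {fmr:.2f}, "
          f"false-non-match rate {fnmr:.2f}")
```

On this toy data the matcher looks fine overall, yet group_b sees a 1.00 false-match rate, exactly the kind of disparity a per-group audit is meant to surface before deployment.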


