Interesting. I intend to read this carefully.
https://www.preprints.org/manuscript/202403.1086/v1
Intention Recognition in Digital Forensics: Systematic Review
In this comprehensive review, we delve into the realm of intention recognition within the context of digital forensics and cybercrime. The rise of cybercrime has become a major concern for individuals, organizations, and governments worldwide. Digital forensics is a field that deals with identifying, preserving, and analyzing digital evidence so that it can be used in a court of law. Intention recognition, by contrast, is a subfield of artificial intelligence that deals with identifying agents’ intentions based on their actions and changes of state. In the context of cybercrime, intention recognition can be used to identify the intentions of cybercriminals and even to predict their future actions. Employing a meticulous six-step systematic review approach, we curated research articles from reputable journals and categorized them into three distinct modeling approaches: logic-based, classical machine learning-based, and deep learning-based. Notably, intention recognition has transcended its historical confinement to network security, now addressing critical challenges across various subdomains, including social engineering attacks, AI black box vulnerabilities, and physical security. While deep learning emerges as the dominant paradigm, its inherent lack of transparency poses unique challenges in the digital forensics landscape. We advocate for hybrid solutions that blend deep learning’s power with interpretability. Furthermore, we propose the creation of a comprehensive taxonomy to precisely define intention recognition, paving the way for future advancements in this pivotal field.
Better get ready.
https://papers.academic-conferences.org/index.php/iccws/article/view/2099
Deepfakes: The Legal Implications
The development of deepfakes began in 2017, when a software developer on the Reddit online platform began posting his creations in which he swapped the faces of Hollywood celebrities onto the bodies of adult film performers, while in 2018, the comedic actor Jordan Peele posted a deepfake video of former U.S. President Obama insulting former U.S. President Trump and warning of the dangers of deepfake media. With the viral use of deepfakes by 2019, the U.S. House Intelligence Committee began hearings on the potential threats to U.S. security posed by deepfakes. Unfortunately, deepfakes have become even more sophisticated and difficult to detect. With easy access to deepfake applications, their usage has increased drastically over the last five years. Deepfakes are now designed to harass, intimidate, degrade, and threaten people, and often lead to the creation and dissemination of misinformation as well as confusion about important state and non-state issues. A deepfake may also breach IP rights, e.g., by unlawfully exploiting a specific line, trademark, or label. Furthermore, deepfakes may cause more severe problems such as violations of human rights, the right to privacy, and personal data protection rights, in addition to copyright infringement. While a few governments have approved AI regulations, the majority have not, owing to concerns around freedom of speech. And while most online platforms such as YouTube have implemented a number of legal mechanisms to control the content posted on their platforms, doing so remains a time-consuming and costly affair. A major challenge is that deepfakes often remain undetectable by the unaided human eye, which has led governments and private platforms to develop deepfake-detection technologies and regulations around their usage. This paper seeks to discuss the legal and ethical implications and responsibilities of the use of deepfake technologies, as well as to highlight the various social and legal challenges that both regulators and society face, while considering the potential role of online content dissemination platforms and governments in addressing deepfakes.
An unlikely solution?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4759742
AI's Hippocratic Oath
Diagnosing diseases, creating artwork, offering companionship, analyzing data, and securing our infrastructure—artificial intelligence (AI) does it all. But it does not always do it well. AI can be wrong, biased, and manipulative. It has convinced people to commit suicide, starve themselves, arrest innocent people, discriminate based on race, radicalize in support of terrorist causes, and spread misinformation. All without betraying how it functions or what went wrong.
A burgeoning body of scholarship enumerates AI harms and proposes solutions. This Article diverges from that scholarship to argue that the heart of the problem is not the technology but its creators: AI engineers who either don’t know how to, or are told not to, build better systems. Today, AI engineers act at the behest of self-interested companies pursuing profit, not safe, socially beneficial products. The government lacks the agility and expertise to address bad AI engineering practices on its best day. On its worst day, the government falls prey to industry’s siren song. Litigation doesn’t fare much better; plaintiffs have had little success challenging technology companies in court.
This Article proposes another way: professionalizing AI engineering. Require AI engineers to obtain licenses to build commercial AI products, push them to collaborate on scientifically supported, domain-specific technical standards, and charge them with policing themselves. This Article’s proposal addresses AI harms at their inception, influencing the very engineering decisions that give rise to them in the first place. By wresting control over information and system design away from companies and handing it to AI engineers, professionalization engenders trustworthy AI by design. Beyond recommending the specific policy solution of professionalization, this Article seeks to shift the discourse on AI away from an emphasis on light-touch, ex post solutions that address already-created products to a greater focus on ex ante controls that precede AI development. We’ve used this playbook before in fields requiring a high level of expertise where a duty to the public welfare must trump business motivations. What if, like doctors, AI engineers also vowed to do no harm?