When you get that email from God…
Business Email Compromise (BEC) and Cyberpsychology
This paper gives a brief introduction to what Business Email Compromise (BEC) is and why we should be concerned about it. In addition, it presents two examples, Ubiquiti and Peebles Media Group, chosen to analyse the phenomenon of BEC and to underscore how universal the BEC threat is for all companies. The psychology behind this scam is then studied. In particular, the Big Five personality framework is analysed to understand how personality traits play an important role in social-engineering-based attacks. Furthermore, Cialdini's six basic principles of influence are presented to show which strategies such scams adopt. The paper continues with an analysis of the impacts of BEC and an evaluation of the incidents, and concludes with a description of some precautions that companies should take in order to mitigate and reduce the likelihood of a Business Email Compromise.
...and if the ‘bot’ does not comply, we will arrest it!
People v. Robots: A Roadmap for Enforcing California's New Online Bot Disclosure Act
Bots
are software applications that complete tasks automatically. A bot's
communication is disembodied, so humans can mistake it for a real
person, and their misbelief can be exploited by the bot owner to
deploy malware or phish personal data. Bots also pose as consumers
posting online product reviews or spread (often fake) news, and a bot
owner can coordinate multiple social-network accounts to trick a
network's "trending" algorithms, boosting the visibility of
specific content, sowing and exacerbating controversy, or fabricating
an impression of mass individual consensus. California's 2019
Bolstering Online Transparency Act (the "CA Bot Act")
imposes conspicuous
disclosure requirements on bots when they communicate or
interact with humans in California. Call
it Isaac Asimov's fourth Rule of Robotics: A robot may not pretend to
be a human being. By requiring bots to "self-identify" as
such, the CA Bot Act is a pioneer in laws regulating artificial
intelligence. Most of its
criticism points to the act's lack of an enforcement mechanism to
incentivize compliance. Accordingly, this Article lays
out a map to sanction violations of the act with civil actions under
California's Unfair Competition Law and statutory tort law of
fraudulent deceit. It outlines what is prohibited, who can be sued,
and who has standing to sue, then addresses First Amendment limits on
unmasking John Doe defendants via subpoena. For many reasons,
attempts to hold CA Bot Act violators liable are most likely to
prevail in the commercial arena. But a willful use of bots to
undermine a political election or prevent voting might also be a
worthy target. Ultimately, the law could be strengthened with an
articulated enforcement provision. But if the CA Bot Act aims a
first salvo against malicious online bots, this Article hopes to
spark the powder.
AI on the board.
Smart Companies: Company & Board Members' Liability in the Age of AI
Artificial Intelligence, although in its infancy, is progressing at a fast pace. Its potential applications within the business structure have led economists and industry analysts to conclude that in the coming years it will become an integral part of the boardroom. This paper examines how AI can be used to augment the decision-making process of the board of directors, and the possible legal implications of its deployment in the fields of company law and corporate governance. After examining the three possible stages of AI use in the boardroom, based on a multidisciplinary approach, the advantages and pitfalls of using AI in the decision-making process are scrutinised. Moreover, since AI might be able to autonomously manage a company in the future, whether the legal appointment of an AI as a director is possible, and whether its actions would be enforceable, are tested. Concomitantly, a change in the corporate governance paradigm is proposed for Smart Companies. Finally, following a comparative analysis of company and securities law, possible adaptations to the current directors' liability scheme when AI is used to augment the decisions of the board are investigated, and future legal solutions are proposed for the legislator.