Yes, it’s complicated. Will we figure it out? Perhaps.
These new rules were meant to protect our privacy. They don’t work.
… For those of us living under the GDPR, what has really changed?
Before the GDPR came into effect last year, we faced an onslaught of emails from organisations asking whether we were happy to continue a relationship most of us never knew we were in, or whether we wanted them to delete our data and unsubscribe us from their data gathering.
While it was an opportunity for a digital spring
clean, informing people that their data is being collected is not the
same as preventing it from being collected in the first place. That
continues and is even increasing. The only difference is that now we
are forced to participate in our own privacy violation in a grotesque
game of “consent”.
… Under the GDPR, we gained the right to find
out what data is held on us and to request its deletion. Again, this
puts the onus on us, not the companies or the government, to do the
work. Again, most of us don’t. Yet the GDPR could have solved
this easily by making privacy the default and requiring us to opt in
if we want to have our data collected.
… Nor is the GDPR stopping the construction of
a surveillance society – in fact, it may even legalise it. The
collection of biometric data, which occurs with facial recognition
technology, is prohibited under the GDPR unless citizens give their
explicit consent. Yet there are exceptions when it is in the public
interest, such as fighting crime.
This is how an exception becomes the rule. After
all, who doesn’t want to fight crime? And since the security
services and police can use it, many companies and property owners
use it too.
Is AI smart enough to understand ethics?
Scenarios and Recommendations for Ethical Interpretive AI
Artificially intelligent systems, given a set of non-trivial ethical rules to follow, will inevitably face scenarios that call into question the scope of those rules. In such cases, human reasoners will typically engage in interpretive reasoning, in which interpretive arguments are used to support or attack claims that a rule should be understood in a certain way.
Artificially intelligent
reasoners, however, currently lack the ability to carry out
human-like interpretive reasoning, and we argue that bridging this
gulf is of tremendous importance to human-centered AI. In
order to better understand how future artificial reasoners capable of
human-like interpretive reasoning must be developed, we have
collected a dataset of ethical rules, scenarios designed to invoke
interpretive reasoning, and interpretations of those scenarios. We
perform a qualitative analysis of our dataset, and summarize our
findings in the form of practical recommendations.
When “your” AI asks for a lawyer.
LEGAL CAPACITY OF ARTIFICIAL INTELLIGENCE
Digitalization of the economy makes research in the field of artificial intelligence highly relevant. The introduction of robots into all spheres of human life raises problems of responsibility for the actions of artificial intelligence, for example, in the event of an accident involving an unmanned taxi. No less pressing is the problem of intellectual property in the results of intellectual activity: who should be considered the author of a work created by artificial intelligence, the robot itself or a human? These practical problems call for a scientific and theoretical understanding of the legal personality of the robot. The article explores the basic approaches to understanding artificial intelligence and its types, and examines how the degree of autonomy of artificial intelligence can determine its legal personality.