Too quick to trust?
Report: e-Learning Data Breach Exposes 1 Million College Students’ Data
A report by vpnMentor published on Thursday claims that OneClass, an online learning platform, experienced a serious data breach this week.
The report claims that a vulnerability in the OneClass platform “created a goldmine for criminal hackers” by offering them access to over 1 million private student records.
“It contained over 27 GB of data, totaling 8.9 million records, and exposed over 1 million individual OneClass users. The database contained different indices, each with different types of records related to students using the platform, those who had been rejected from joining, many university professors, and more,” the report reads.
The report suggests that the breach could put students at risk because young people are often more vulnerable to online schemes. Moreover, the breach may have exposed families to financial risk, particularly those parents who paid for the OneClass service with a credit card.
Would anyone be able to match my live-streamed face to my 10-year-old driver’s license photo?
French startup ubble completes €10 million seed funding round to bolster its online identity verification service
… ubble’s digital identity verification platform relies on live video streaming and AI to help companies verify the authenticity of a person trying to, e.g., open a bank account or sign a temporary work contract, and thus prevent fraud.
… ubble believes its technology – which prompts users to capture a live video of their face and ID documentation, and analyses the results in real time with the help of identity fraud experts – will make a dent.
In a press release announcing the completion of its seed round, ubble also promises to make the identity verification process “fun”, “interactive” and “enjoyable for everyone”. [No mention of ‘accurate’? Bob]
Automating compliance and when not to comply.
Automated Individual Decisions to Disclose Personal Data: Why GDPR Article 22 Should Not Apply
… Organizations of all types are increasingly adopting the tools of machine learning and artificial intelligence in a variety of applications. Such organizations must determine when and how the Article 22 restrictions on automated decision-making apply. Whether Article 22 applies broadly or narrowly will have dramatic impacts on a wide range of organizations.
… This paper will provide an overview of Article 22 and will examine several considerations that are important for determining its scope. It will argue that the scope of automated decision-making regulated by Article 22 is quite narrow, limited to those solely automated decisions where a legal or similarly significant effect is an inherent and direct result of the decision and where human intervention could be helpful and meaningful in protecting individual rights.
Precrime, à la Minority Report
Predictive Algorithms in Criminal Justice
The paper aims to offer an overview of the complex current and foreseeable interconnections between criminal law and developments in artificial intelligence systems. In particular, specific attention is paid to the risks arising from the application of predictive algorithms in criminal justice.
“Computers have rights!”
“No you doesn’t, you stupid machine.”
The Wave of Innovation: Artificial Intelligence and IP Rights
The paper provides a concise overview of the interplay between law and artificial intelligence. Based on an analysis of legal resources, it identifies key topics, organizes them systematically and describes them in general terms, essentially regardless of the specificities of individual jurisdictions. The paper depicts how artificial intelligence relates to copyright and patent law, and how law regulates artificial intelligence in light of developments in the field.
Avoiding “Ready, Fire, Aim”
Artificial intelligence in a crisis needs ethics with urgency
Artificial intelligence tools can help save lives in a pandemic. However, the need to implement technological solutions rapidly raises challenging ethical issues. We need new approaches for ethics with urgency, to ensure AI can be safely and beneficially used in the COVID-19 response and beyond.
It ain’t real until you can measure it! Would you trust software that is 51% ethical?
Measurement of Ethical Issues in Software Products
Ethics is a research field that is attracting more and more attention in Computer Science due to the proliferation of artificial intelligence software, machine learning algorithms, robot agents (like chatbots), and so on. Indeed, ethics research has so far produced a set of guidelines, such as ethical codes, to be followed by people involved in Computer Science. However, little effort has been spent on producing formal requirements to be included in the design process of software that is able to act ethically with users. In the paper, we investigate the issues that make a software product ethical and propose a set of metrics to quantitatively evaluate whether a software product can be considered ethical.
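The excerpt does not say what the paper’s metrics actually are, but a quantitative evaluation of this kind could, in principle, be as simple as a weighted average of per-issue scores. A minimal sketch follows; the metric names, weights, and values are hypothetical assumptions for illustration, not the paper’s actual proposal:

```python
# Hypothetical sketch: combine per-issue ethics metrics (each scored 0..1)
# into a single weighted score. Names, weights, and values are illustrative
# assumptions, not taken from the paper.

def ethics_score(metrics, weights):
    """Weighted average of per-metric scores, each expected in [0, 1]."""
    total_weight = sum(weights[name] for name in metrics)
    return sum(metrics[name] * weights[name] for name in metrics) / total_weight

metrics = {"transparency": 0.9, "privacy": 0.7, "fairness": 0.6}
weights = {"transparency": 1.0, "privacy": 2.0, "fairness": 2.0}

score = ethics_score(metrics, weights)
print(f"{score:.2f}")  # -> 0.70
```

Of course, whether a number like “51% ethical” means anything depends entirely on who picks the metrics and the weights, which is arguably the whole point of the quip above.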