Explaining
AI to GDPR regulators.
Companies
could be fined if they fail to explain decisions made by AI
Businesses
and other organisations could face multimillion-pound fines if they
are unable to explain decisions made by artificial intelligence,
under plans put forward by the UK’s data watchdog today.
The
Information Commissioner’s Office (ICO) said its new guidance was
vital because the UK is at a tipping
point where many firms are using AI to inform decisions
for the first time. This could include human resources departments
using machine learning to shortlist job applicants based on analysis
of their CVs. The regulator says it is the first in the world to put
forward rules on explaining choices taken by AI.
… The
guidance, which is out for consultation today, tells organisations
how to communicate explanations to people in a form they will
understand. Failure to do so could, in extreme cases, result in a
fine of up to 4 per cent of a company’s global turnover, under the
EU’s data protection law.
Not
having enough money or time to explain AI decisions won’t be an
acceptable excuse, says McDougall. “They have to be accountable
for their actions. If they don’t have the resources to properly
think through how they are going to use AI to make decisions, then
they should be reflecting on whether they should be using it at all.”
[The guidance: https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/ico-and-the-turing-consultation-on-explaining-ai-decisions-guidance/]
(Related)
Commission
Expert Group report on liability for emerging digital technologies
On
November 21, 2019, the European Commission’s Expert
Group on Liability and New Technologies –
New Technologies Formation (“NTF”)
published its Report
on Liability for Artificial Intelligence and other emerging
technologies.
The Commission tasked the NTF with establishing the extent to which
liability frameworks in the EU will continue to operate effectively
in relation to emerging digital technologies (including artificial
intelligence, the internet of things, and distributed ledger
technologies). This report presents the NTF’s findings and
recommendations.
Spiderman’s
bank?
WITH GREAT
POWER COMES GREAT RESPONSIBILITY: ARTIFICIAL INTELLIGENCE IN BANKING
… Arguably,
no large financial institution can afford not to integrate AI into
its business, but care should be taken to establish audit trails and
make the parameters of AI deployment transparent and available for
scrutiny. The opportunities AI offers to foster innovation and
promote growth in the industry are significant but must be pursued
responsibly in order to avoid serious harm.
The place
of AI in financial institutions
In broad
terms, the use of AI in financial institutions can be categorised
into four groups. The first is in customer interactions and
compliance, whether related to AML checks, fraud detection or
personalised customer engagement. The second is in the
context of financial systems and processes, such as payments and
treasury services. The third use is for the enhancement of
financial products and the financial institution’s business model.
This could involve faster loan-affordability checks, more
personalised insurance premiums informed by policyholder behaviour or
algorithmic trading in foreign-exchange markets. The final
use case is to assist with regulatory reporting or change, including
stress testing, ring-fencing in the United Kingdom or the transition
away from LIBOR (London Interbank Offered Rate) as a reference rate.
For a
variety of toolkits. Mr Zillman collects everything within
his scope.
2020 Guide
to Web Data Extractors
New on LLRX – 2020 Guide to Web Data Extractors – This guide by Marcus P. Zillman is a comprehensive listing of web data extractors, screen scraping, web scraping and crawling sources and sites for the Internet and the Deep Web.
These sources are useful for professionals who focus on competitive
intelligence, business intelligence and analysis, knowledge
management and research that requires collecting, reviewing,
monitoring and tracking data, metadata and text.
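For readers who want a sense of what these extractors do under the hood, the following is a minimal sketch of the fetch-and-parse step most of them automate. It assumes the third-party requests and BeautifulSoup libraries (neither is named in the guide) and uses a placeholder URL rather than any of the listed sources.

# Minimal web data extraction sketch: fetch a page and pull out its links.
import requests
from bs4 import BeautifulSoup

def extract_links(url: str) -> list[dict]:
    """Fetch a page and return its outbound links with their anchor text."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()  # surface HTTP errors rather than parsing an error page
    soup = BeautifulSoup(response.text, "html.parser")
    return [
        {"text": a.get_text(strip=True), "href": a["href"]}
        for a in soup.find_all("a", href=True)
    ]

if __name__ == "__main__":
    # Placeholder URL; substitute any page you are entitled to crawl.
    for link in extract_links("https://example.com"):
        print(link["text"], "->", link["href"])

Production extractors of the kind the guide catalogues layer crawling queues, politeness delays and structured output on top of this basic pattern.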
For your browser collection.
Alternative search engine provides Google results but with privacy
FastCompany –
“Picture
for a moment a version of Google Search that barely evolved from its
early years. Instead of a results page cluttered by informational
widgets, this one would primarily link out to other sites. And
instead of tracking your search history for ad targeting purposes,
this search engine would be decidedly impersonal. It turns out that
such a thing exists today in Startpage,
a Netherlands-based Google search alternative that emphasizes
privacy. While it’s not the only privacy-first search
engine—DuckDuckGo
is
a better-known example—Startpage is the only one whose search
results come from Google,
due to a unique and longstanding agreement in which Startpage pays
the search giant to get a feed of links for any search. The result
is a search engine that feels a lot like Google did before it leaned
into personalized search and advertising—and all of its requisite
data collection—about 15 years ago…”