A one-hour discussion (watch the video)
Microsoft’s President on Privacy, Artificial Intelligence, and Human Rights
Before a rapt, standing-room-only audience of more than 300 students, faculty, and other members of the Law School community, Microsoft President and Chief Legal Officer Brad Smith ’84 returned to campus on October 1 to discuss his new book, Tools and Weapons: The Promise and the Peril of the Digital Age (cowritten with Carol Ann Browne).
The event, a conversation with Gillian Lester, Dean and Lucy G. Moses Professor of Law at Columbia Law School, and Professor Tim Wu, a leading authority on antitrust law who advocates for breaking up Big Tech companies, was the season’s first installment of the Dean’s Distinguished Speaker Series.
… Does the tech sector need its own version of the Hippocratic oath? Smith thinks so. When Wu, the Julius Silver Professor of Law, Science and Technology, asked him about the idea, Smith pointed to his concerns about ethical questions raised by artificial intelligence. “I think we should consider [AI] to be the rapidly emerging dominant economic force of the next three decades,” he said. “Here we are fundamentally equipping machines with the power to make decisions that previously were only made by human beings. So you have to ask, ‘How do we want machines to make these decisions?’ And as soon as you ask that question, I think one of the things you realize is we probably want to make these decisions based on more than what people who study computer or data science learn in their disciplines.”
Sunday Reading: The Rise of Artificial Intelligence
We’re living through an extraordinary moment in technological history. In the past decade, the rise of artificial intelligence (both in theory and in practice) has revolutionized computer science and the workplace. As the field expands, many philosophers and academics are raising questions about what A.I. means for the future of human intelligence. This week, we’ve gathered a selection of pieces about the evolution of artificial intelligence and its impact on our lives.
Thinking about artificial intelligence can help clarify what makes us human, for better and for worse.
What happens when diagnosis is automated?
Will artificial intelligence bring us utopia or destruction?
Potential Liability for Physicians Using Artificial Intelligence
Medical AI may be trained in inappropriate environments, using imperfect techniques, or on incomplete data. Even when algorithms are trained as well as possible, they may, for example, miss a tumor in a radiological image or suggest an incorrect dose or an inappropriate drug. Sometimes, patients will be injured as a result. In this Viewpoint, we discuss when a physician is likely to be held liable under current law when using medical AI.