Do not dismiss this article because it is ‘only too obvious.’

The key to stopping cyberattacks? Understanding your own systems before the hackers strike
Cyberattacks targeting critical national infrastructure and other organisations could be stopped before they have any impact if the teams responsible for security had a better understanding of their own networks.
That might sound like obvious advice, but in many cases, cyber-criminal and nation-state hackers have broken into corporate networks and remained there for a long time without being detected.

… hackers have only been able to get into such a strong position because those responsible for defending networks don't always have a full grasp on what they're managing.
"That's what people often misunderstand about attacks – they don't happen at the speed of light, it often takes months or years to get the right level of access in a network and ultimately to be able to push the trigger and cause a destructive act," says Dmitri Alperovitch, executive chairman at Silverado Policy Accelerator and co-founder and former CTO of CrowdStrike.
That means deep knowledge of your network, and the ability to detect any suspicious or unexpected behaviour on it, can go a long way towards detecting and stopping intrusions.
Not surprising.

CCPA compliance lags as enforcement begins in earnest
Enforcement of the California Consumer Privacy Act (CCPA) began on Wednesday, July 1, despite the final proposed regulations having been published only on June 1 and still pending review by the California Office of Administrative Law (OAL). The July 1 date has left companies, many of which were hoping for leniency during the pandemic, scrambling to prepare.
COVID-19 appears to be shifting the privacy compliance landscape in other parts of the world — both Brazil’s LGPD and India’s PDPB have seen delays that will affect when the laws go into effect.
Nonetheless, the California Attorney General (CAG) has not capitulated on the CCPA’s timeline, with the attorney general’s office stating: “CCPA has been in effect since January 1, 2020. We’re committed to enforcing the law starting July 1 … We encourage businesses to be particularly mindful of data security in this time of emergency.”
Privacy to your core…

Florida becomes first state to enact DNA privacy law, blocking insurers from genetic data
Florida on Wednesday became the nation’s first state to enact a DNA privacy law, prohibiting life, disability and long-term care insurance companies from using genetic tests for coverage purposes.
Gov. Ron DeSantis signed House Bill 1189, sponsored by Rep. Chris Sprowls, R-Palm Harbor. It extends federal prohibitions against health insurance providers accessing results from DNA tests, such as those offered by 23andMe or AncestryDNA, to those three other types of insurers.
… “Given the continued rise in popularity of DNA testing kits,” Sprowls said Tuesday, “it was imperative we take action to protect Floridians’ DNA data from falling into the hands of an insurer who could potentially weaponize that information against current or prospective policyholders in the form of rate increases or exclusionary policies.”
Federal law prevents health insurers from using genetic information in underwriting policies and in setting premiums, but the prohibition doesn’t apply to life, disability or long-term care coverage.
Very carefully?

How can we ban facial recognition when it’s already everywhere?
… amid the focus on government use of facial recognition, many companies are still integrating the technology into a wide range of consumer products. In June, Apple announced that it would be incorporating facial recognition into its HomeKit accessories and that its Face ID technology would be expanded to support logging into sites on Safari.

In the midst of the Covid-19 pandemic, some firms have raced to put forward more contactless biometric tech, such as facial recognition-enabled access control.
Show me how you do what you do. Don’t worry, I won’t tell a soul.

Amazon, Google Face Tough Rules in India’s E-Commerce Draft
India’s latest e-commerce policy draft includes steps that could help local startups and impose government oversight on how companies handle data.
The government has been working on the policy for at least two years amid calls to reduce the dominance of global tech giants like Amazon.com Inc., Alphabet Inc.’s Google and Facebook Inc.
Under rules laid out in a 15-page draft seen by Bloomberg, the government would appoint an e-commerce regulator to ensure the industry is competitive with broad access to information resources. The policy draft was prepared by the Ministry of Commerce’s Department for Promotion of Industry & Internal Trade.
The proposed rules would also mandate government access to online companies’ source code and algorithms, which the ministry says would help guard against “digitally induced biases” by competitors. The draft also talks of ascertaining whether e-commerce businesses have “explainable AI,” referring to the use of artificial intelligence.
Note how the AI pendulum swings… Undue reliance is also a sin.

Majority of public believe ‘AI should not make any mistakes’
… A survey by AI innovation firm Fountech.ai revealed that 64 per cent want more regulation introduced to make AI safer.
Artificial intelligence is becoming more prominent in large-scale decision-making, with algorithms now being used in areas such as healthcare with the aim of improving the speed and accuracy of decisions.
However, the research shows that the public does not yet have complete trust in the technology – 69 per cent say humans should monitor and check every decision made by AI software, while 61 per cent said they thought AI should not be making any mistakes in the first place.
The idea of a machine making a decision also appears to have an impact on trust in AI, with 45 per cent saying it would be harder to forgive errors made by technology compared with those made by a human.
Not sure I agree. Forbes seems to be saying that a perfect solution, logically arrived at, is insufficient unless you ‘care’ about everyone impacted.

Why Business Must Strike A Balance With AI And Emotional Intelligence
As we turn to AI to do more tasks for us, the need for emotional intelligence has never been greater. This was true even before coronavirus took hold. Now, imagine how important emotional intelligence is in creating environments where leaders must manage employees who in many cases are stressed, scared, and uncertain about what lies ahead. Still, while it’s true that we need emotional intelligence in business management, that’s not the only area where an empathic approach is necessary. It’s also incredibly important—especially now—in balancing your utilization of AI in your business, customer experience and marketing efforts.
First, what is emotional intelligence? In the simplest form, it’s the ability not just to solve problems, but to understand and connect with the reasons why those problems are occurring and how they impact other people. It’s the ability to care.
Who would you like to talk to?

New AI project captures Jane Austen’s thoughts on social media
Have you ever wanted to pick the brains of Sir Isaac Newton, Mary Shelley, or Benjamin Franklin? Well now you can (kinda), thanks to a new experiment by magician and novelist Andrew Mayne.
The project — called AI|Writer — uses OpenAI’s new text generator API to create simulated conversations with virtual historical figures. The system first works out the purpose of the message and the intended recipient by searching for patterns in the text. It then uses the API’s internal knowledge of that person to guess how they would respond in their written voice.
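The core trick such a system relies on can be sketched in a few lines: frame the user's message as correspondence addressed to the historical figure, so that a completion-style language model naturally continues the text as that person's reply. This is a minimal illustration under stated assumptions — the function name and prompt wording are hypothetical, not AI|Writer's actual code, and the final API call is shown only as a comment since the project's internals aren't public.

```python
def build_prompt(recipient: str, message: str) -> str:
    """Frame a user message as a letter so that a completion-style
    language model will continue the text as the recipient's reply,
    drawing on whatever the model knows about that person."""
    return (
        f"The following is a letter to {recipient}, "
        f"followed by {recipient}'s thoughtful reply "
        "written in their own voice.\n\n"
        f"Letter:\n{message}\n\n"
        f"{recipient}'s reply:\n"
    )

# Construct a prompt asking a simulated Marie Curie about radiation.
prompt = build_prompt(
    "Marie Curie",
    "Could you explain radiation to a beginner?",
)

# The prompt would then be sent to a text-generation endpoint,
# e.g. (pseudocode): reply = text_generation_api.complete(prompt)
```

The "letter and reply" framing is what steers the model: because the prompt ends right where the reply begins, the most likely continuation is text in the named figure's voice.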
The digitized characters can answer questions about their work, explain scientific theories, or offer their opinions. For example, Marie Curie gave a lesson on radiation, H.G. Wells revealed his inspiration for The Time Machine, and Alfred Hitchcock compared Christopher Nolan’s Interstellar to Stanley Kubrick’s 2001.