The camera seems to see a lot!
Wyze leaks personal data for 2.4 million security camera users
You buy a home monitoring camera to improve your security, but Wyze customers might have wound up achieving the opposite. The company, which makes $20 security cameras to pepper around your home, has admitted that data on more than 2.4 million users has been exposed. A database was left publicly accessible, allowing anyone to view key pieces of personal data, although financial information was not included.
The issue was uncovered by consulting firm Twelve Security, which announced that sensitive user data had been left exposed on the internet. This included a staggering array of personal information: email addresses, lists of cameras in the house, WiFi SSIDs and even health data such as height, weight, gender, bone density and more.
… Wyze says it is investigating what happened and how the leak occurred, and that it plans to send an email notification to affected customers. In the meantime, if you have a Wyze account, it's a good idea to change your password and turn on two-factor authentication.
Will they also look for missed deductions? (Mais non, mon ami.)
French court clears social media tracking plan in tax crackdown
France’s government can pursue plans to trawl social media to detect tax avoidance, its Constitutional Court ruled on Friday, although it introduced limitations on what information can be collected following a privacy outcry.
… Customs and tax authorities will be allowed to review people’s profiles, posts and photographs on social media for evidence of undeclared income or inconsistencies.
Just to see if I agree with the list.
Top 10 Privacy Law Developments of the Decade 2010-2019
I told you I liked lists, even ones with some silly items.
52 things I learned in 2019
Emojis are starting to appear in evidence in court cases, and lawyers are worried: “When emoji symbols are strung together, we don’t have a reliable way of interpreting their meaning.” (In 2017, an Israeli judge had to decide if one emoji-filled message constituted a verbal contract.)
Placebos are so effective that placebo placebos work: A pain cream with no active ingredients worked even when not used by the patient. Just owning the cream was enough to reduce pain.
Mechanical devices to cheat your phone pedometer (for health insurance fraud or vanity) are now all over AliExpress.
Using machine learning, researchers can now predict how likely an individual is to be involved in a car accident by looking at the image of their home address on Google Street View.
To AI or not to AI...
When Is It Ethical to Not Replace Humans with AI?
There are legitimate questions about the ethics of employing AI in place of human workers. But what about when there's a moral imperative to automate?
It is by now well known that artificial intelligence will augment the human worker and, in some instances, outright take jobs once handled by humans. A 2019 report indicated that 36 million U.S. workers have “high exposure” to impending automation. For businesses, the opportunities of AI mean they must scrutinize which tasks would be more efficiently and cost-effectively performed by robots than by human employees, as well as which ones should combine human and AI resources.
… Based on my own experiences as an AI strategist, I can identify at least three broad areas where the ethics of employing AI are not only sound but imperative:
1. Physically dangerous jobs
2. Health care
3. Data-driven decision-making