Possibly the expiration date was encrypted?
https://news.yahoo.com/internet-goes-down-millions-tech-021400230.html
Internet goes down for millions, tech companies scramble as key encryption service expires
The expiration of a key digital encryption service on Thursday sent major tech companies nationwide scrambling to deal with internet outages that affected millions of online users.
Tech giants — such as Amazon, Google, Microsoft, and Cisco, as well as many smaller tech companies — were still battling an endless array of issues by the end of the night. The problems were caused by the forced expiration of a popular digital certificate that encrypts and protects the connection between devices and websites on the internet.
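When a certificate anywhere in a site's trust chain expires, clients that have not updated their trust stores begin rejecting otherwise valid connections, which is how a single expiry can ripple into widespread outages. As a minimal sketch of how an operator might monitor for this, using only Python's standard `ssl` and `socket` modules (the host name here is just an illustrative placeholder):

```python
import socket
import ssl
from datetime import datetime, timezone

def cert_expiry(host: str, port: int = 443) -> datetime:
    """Fetch a server's TLS certificate and return its notAfter expiry as UTC."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # notAfter looks like 'Sep 30 14:01:15 2021 GMT';
    # ssl.cert_time_to_seconds interprets it as UTC.
    ts = ssl.cert_time_to_seconds(cert["notAfter"])
    return datetime.fromtimestamp(ts, tz=timezone.utc)

if __name__ == "__main__":
    exp = cert_expiry("example.com")
    days_left = (exp - datetime.now(timezone.utc)).days
    print(f"Certificate expires {exp:%Y-%m-%d} ({days_left} days left)")
```

Note that this only inspects the leaf certificate the server presents; the outage described above involved expiry higher in the chain, which a fuller monitoring setup would also need to check.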
I read about the process and wonder: If I had been wandering by and the FBI noted that I do not have a smartphone, would I immediately leap to the top of their suspect list? Clearly I’m trying to hide my tracks…
https://www.wired.com/story/capitol-riot-google-geofence-warrant/
How a Secret Google Geofence Warrant Helped Catch the Capitol Riot Mob
Court documents suggest the FBI has been using controversial geofence search warrants at a scale not publicly seen before, collecting account information and location data on hundreds of devices inside the US Capitol during a deadly invasion by a right-wing mob on January 6.
(Related) Same thing, another angle.
https://www.pogowasright.org/when-the-fbi-seizes-your-messages-from-big-tech-you-may-not-know-it-for-years/
When the FBI seizes your messages from Big Tech, you may not know it for years
Jay Greene and Drew Harwell recently reported:
At first, Ryan Lackey thought the email was a scam. It arrived one morning in March, bearing news that Facebook had received an order from the Federal Bureau of Investigation to turn over data from personal accounts Lackey uses to chat with friends and exchange cat photos.
Even weirder, the email said Facebook had been forced to keep this intrusion secret. Six months later, Lackey, a computer security consultant in Puerto Rico, still has no idea what Facebook turned over to an FBI investigation that he believes may have started as early as 2019.
Read more on the Washington Post. I have heard that I am in a similar situation, but as yet I have no details on a gag order at Twitter that may have gone on for years.
For another aspect of law enforcement and tech, see It’s not easy to control police use of tech—even with a law by Sidney Fussell of Wired.com.
And in the most recent story about government surveillance, John Wright has a story on Raw Story: FBI used secret Google tracking data to nab Capitol rioters. It begins:
Federal prosecutors have cited secretive “geofence” warrants — which allow law enforcement to pinpoint cell-phone users’ precise locations over time — in 45 Capitol riot cases, including six where suspects had not previously been identified.
Geofence warrants, also known as reverse-location warrants, allow law enforcement to obtain data from Google to identify potential suspects.
Read more on Raw Story.
A right the Founding Fathers missed?
https://www.bespacific.com/discriminatory-ai-and-the-law-legal-standards-for-algorithmic-profiling/
Discriminatory AI and the Law – Legal Standards for Algorithmic Profiling
von Ungern-Sternberg, Antje, Discriminatory AI and the Law – Legal Standards for Algorithmic Profiling (June 29, 2021). Draft chapter, in: Silja Vöneky, Philipp Kellmeyer, Oliver Müller and Wolfram Burgard (eds.), Responsible AI, Cambridge University Press (forthcoming). Available at SSRN: https://ssrn.com/abstract=3876657
“Artificial Intelligence is increasingly used to assess people (profiling) and helps employers to find qualified employees, internet platforms to distribute information or to sell goods, and security authorities to single out suspects. Apart from being more efficient than humans in processing huge amounts of data, intelligent algorithms – which are free of human prejudices and stereotypes – would also prevent discriminatory decisions, or so the story goes. However, many studies show that the use of AI can lead to discriminatory outcomes. From a legal point of view, this raises the question whether the law as it stands prohibits objectionable forms of differential treatment and detrimental impact. In the legal literature dealing with automated profiling, some authors have suggested that we need a “right to reasonable inferences”, i.e. a certain methodology for AI algorithms affecting humans. This paper takes up this idea with respect to discriminatory AI and claims that such a right already exists in antidiscrimination law. It argues that the need to justify differential treatment and detrimental impact implies that profiling methods correspond to certain standards. It is now a major challenge for both lawyers and data and computer scientists to develop and establish those methodological standards in order to guarantee compliance with antidiscrimination law (and other legal regimes), as the paper outlines.”
Was customer demand high, or is this purely speculative?
https://www.ft.com/content/c2cf67d6-a143-4aff-9eb1-b7a4e93c3c73
Amazon’s Astro robot is a symbol of the surveillance age
When Amazon unveiled a domestic robot this week, it promised that the Astro is capable of “many delightful things”. Tellingly, the first practical example given by Dave Limp, the executive in charge, was checking whether his dogs were cheekily sleeping on the sofa while he was out of the house.
…
In 1967, the American novelist and poet Richard Brautigan imagined “a cybernetic ecology where we are free of our labours . . . and all watched over/by machines of loving grace.” Brautigan was prescient about one thing: the task for which Amazon’s robot is best suited is surveillance, loving or not.
… Astro’s most human talent is recognising its owners. Amazon has built into the device a screen and artificial intelligence, so that it can identify up to 10 family members, follow them around playing music or videos, blink its digital eyes and carry small items from one to another. In other words, it performs like a well-behaved toddler; it will even go away on command. Where Astro outperforms the toddler is on sentry duty. It can act like a miniature guard, patrolling while the occupants are out and checking on unexpected noises, such as burglar alarms or breaking windows. If it finds an intruder, it will track him and observe the crime, unless he kicks it over.
Reading this got me thinking. The first company to succeed because of strong ethics will change the world. Any idea how that would work?
https://venturebeat.com/2021/09/30/are-ai-ethics-teams-doomed-to-be-a-facade-the-women-who-pioneered-them-weigh-in/
Are AI ethics teams doomed to be a facade? Women who pioneered them weigh in
The concept of “ethical AI” hardly existed just a few years ago, but times have changed. After countless discoveries of AI systems causing real-world harm and a slew of professionals ringing the alarm, tech companies now know that all eyes — from customers to regulators — are on their AI. They also know this is something they need to have an answer for. That answer, in many cases, has been to establish in-house AI ethics teams.
Now present at companies including Google, Microsoft, IBM, Facebook, Salesforce, Sony, and more, such groups and boards were largely positioned as places to do important research and even act as safeguards against the companies’ own AI technologies. But after Google fired Timnit Gebru and Margaret Mitchell, leading voices in the space and the former co-leads of the company’s ethical AI lab, this past winter after Gebru refused to rescind a research paper on the risks of large language models, it felt as if the rug had been pulled out from under the whole concept. It doesn’t help that Facebook has also been criticized for steering its AI ethics team away from research into topics like misinformation, for fear it could impact user growth and engagement. Now, many in the industry are questioning whether these in-house teams are just a facade.
Perspective. Perhaps Facebook isn’t so bad? Shouldn’t management of any organization have confidence that they can deal with any issues that arise?
https://www.techdirt.com/articles/20210929/17352047662/facebooks-latest-scandals-banality-hubris-messiness-humanity.shtml
Facebook's Latest Scandals: The Banality Of Hubris; The Messiness Of Humanity
Over the last few weeks, the WSJ has run a series of posts generally called "The Facebook Files," which have exposed a variety of internal documents from Facebook that are somewhat embarrassing. I do think some of the reporting is overblown -- and, in rather typical fashion regarding the big news publications and their reporting on Facebook, presents everything in the worst possible light. For example, the report on how internal research showed that Instagram made teen girls feel bad about themselves downplays that the data actually shows a significantly higher percentage of teens indicated that Instagram made them feel better:
Perspective.
https://www.bespacific.com/pwc-offers-u-s-employees-full-time-remote-work/
PwC offers U.S. employees full-time remote work
Reuters: “Accounting and consulting firm PwC told Reuters on Thursday it will allow all its 40,000 U.S. client services employees to work virtually and live anywhere they want in perpetuity, making it one of the biggest employers to embrace permanent remote work. The policy is a departure from the accounting industry’s rigid attitudes, known for encouraging people to put in late nights at the office. Other major accounting firms, such as Deloitte and KPMG, have also been giving employees more choice to work remotely in the face of the COVID-19 pandemic. PwC’s deputy people leader, Yolanda Seals-Coffield, said in an interview that the firm was the first in its industry to make full-time virtual work available to client services employees. PwC’s support staff and employees in areas such as human resources and legal operations that do not face clients already had the option to work virtually full-time…”