Wednesday, February 17, 2021

Securing privacy.

https://www.zdnet.com/article/spy-pixels-in-emails-to-track-recipient-activity-are-now-an-endemic-privacy-concern/#ftag=RSSbaffb68

Tracker pixels in emails are now an ‘endemic’ privacy concern

This week, the Hey email service analyzed its traffic following a request from the BBC and discovered that roughly two-thirds of the emails sent to its users' private accounts contained what is known as a “spy pixel.”

Spy pixels, also known as tracking pixels or web beacons, are tiny, effectively invisible image files – including .PNGs and .GIFs – inserted into the body of an email.

They may be transparent, white, or colored to blend in with the surrounding content and go unseen by the recipient, and are often as small as a single 1x1 pixel.

The recipient of an email does not need to engage with the pixel in any way for it to track activity. When the email is opened, the tracking pixel is automatically downloaded – and that download tells a server, owned by a marketer, that the email has been read. The server may also record the number of times an email is opened, the IP address (and with it a user's approximate location), and the device used.
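To make the mechanism concrete, here is a minimal sketch in Python of the server side of a tracking pixel. The domain, port, and “uid” parameter are hypothetical placeholders rather than any real marketer's system; the point is that the mail client's automatic image fetch is itself the tracking event.

    # Minimal sketch: an HTTP server that serves a 1x1 transparent GIF and
    # logs each fetch as an "email opened" event. Hypothetical example only.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs
    from datetime import datetime

    # The classic spy-pixel payload: a 43-byte transparent 1x1 GIF.
    PIXEL_GIF = (
        b"GIF89a\x01\x00\x01\x00\x80\x00\x00"     # header, 1x1, 2-color palette
        b"\x00\x00\x00\xff\xff\xff"               # palette: black, white
        b"!\xf9\x04\x01\x00\x00\x00\x00"          # graphic control: transparent
        b",\x00\x00\x00\x00\x01\x00\x01\x00\x00"  # image descriptor
        b"\x02\x02D\x01\x00;"                     # pixel data + trailer
    )

    class PixelHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The fetch itself is the signal: log who opened, when, from where.
            uid = parse_qs(urlparse(self.path).query).get("uid", ["?"])[0]
            print(f"{datetime.now().isoformat()} open: uid={uid} "
                  f"ip={self.client_address[0]} "
                  f"client={self.headers.get('User-Agent', 'unknown')}")
            self.send_response(200)
            self.send_header("Content-Type", "image/gif")
            self.send_header("Content-Length", str(len(PIXEL_GIF)))
            self.end_headers()
            self.wfile.write(PIXEL_GIF)

    if __name__ == "__main__":
        HTTPServer(("", 8000), PixelHandler).serve_forever()

The email body would embed something like <img src="http://tracker.example:8000/p.gif?uid=recipient-42" width="1" height="1"> – the moment a mail client loads that image, the sketch above logs the open, the requester's IP address, and the client software, with no click required.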

In Europe, GDPR requires organizations to inform recipients that such pixels are in use. In practice, however, the transparency rules around pixel tracking are murky: consent is not always required, and when it is, it can be ‘obtained’ automatically when a user signs up to an email service and is asked to read a privacy notice published on a website.

It is possible to prevent tracking pixels from triggering by disabling the automatic loading of images in your email client or webmail settings, or by installing email and browser add-ons that block trackers.





Was this really a bad idea?

https://www.bespacific.com/the-troubling-new-practice-of-police-livestreaming-protests/

The Troubling New Practice of Police Livestreaming Protests

Slate – “This article is part of the Free Speech Project, a collaboration between Future Tense and the Tech, Law, & Security Program at American University Washington College of Law that examines the ways technology is influencing how we think about speech.

Last summer’s anti–police brutality protests represented the largest mass demonstration effort in American history. Since then, law enforcement departments nationwide have faced intense scrutiny for how they policed these historic protests. The repeated, egregious instances of violence against journalists and protesters are well documented and have driven widespread calls for systematic reform. These calls have focused in part on surveillance, after the police used sophisticated social media data monitoring, commandeered non-city camera networks, and tried other intrusive methods to identify suspects. [Does participation make you a ‘suspect?’ Bob]

But in Oregon, the Portland Police Bureau went a step further in its innovation: It broadcast its surveillance publicly, in real time, by livestreaming protests on social media. According to a lawsuit filed by the ACLU, PPB hosted a video on YouTube and on its official Twitter feed—which has more than 230,000 followers—on at least three occasions. PPB allegedly zoomed in to focus on individual protesters’ faces, making them easily identifiable and vulnerable to surveillance technologies such as facial recognition software, which law enforcement used to identify a protester in D.C.’s Lafayette Square and, reportedly, many of the insurrectionists who stormed the Capitol on Jan. 6.

PPB first justified its public livestreaming on the grounds that it was necessary to provide “situational awareness” and to record possible criminal activity, and later “so the community could understand what was occurring at the protest.” But an Oregon court quickly forbade the livestreams, based on Oregon law and a local consent decree…”





If a homeowner refused, were there consequences? How could they know in advance?

https://gizmodo.com/the-lapd-asked-ring-owners-to-hand-over-footage-of-blm-1846283117

The LAPD Asked Ring Owners to Hand Over Footage of BLM Protesters

On Tuesday, digital rights nonprofit the Electronic Frontier Foundation released the results of a Freedom of Information Act (FOIA) request it had sent to the LAPD. The EFF obtained emails showing that a detective with the LAPD—which has a partnership with Ring’s Neighbors community app—asked owners of the doorbell cams to submit footage picturing “recent protests” to the “Safe L.A. Task Force.” The timeline of the requests matches up with the nationwide protests following the killing of George Floyd by Minneapolis police, which drew countless thousands over the course of weeks in Los Angeles.





The implications are at least confusing. Watch the short video...

https://petapixel.com/2021/02/16/ai-can-now-turn-you-into-a-fully-digital-realistic-talking-clone/

AI Can Now Turn You Into a Fully Digital, Realistic Talking Clone

Hour One describes itself as a “video transformation company” that wants to replace cameras with code, and its latest creation is the ability for anyone to create a fully digital clone of themselves that can appear to speak “on camera” without a camera or audio inputs at all.

The company has debuted its digital clone technology in partnership with YouTuber Taryn Southern. In the video above, Southern is a fully digital creation, built as a collaborative experiment between Southern and Hour One. The company uses a proprietary AI-driven process to automate video creation, enabling presenter-led videos at scale without ever putting a person in front of a camera.





I’m betting that this won’t work either.

https://www.newstatesman.com/science-tech/2021/02/how-prevent-ai-taking-over-world

How to prevent AI from taking over the world

The best and most direct way to control AI is to ensure that its values are our values. By building human values into AI, we ensure that everything an AI does meets with our approval. But this is not simple. The so-called “Value Alignment Problem” – how to get AI to respect and conform to human values – is arguably the most important, if vexing, problem faced by AI developers today.

So far, this problem has been seen as one of uncertainty: if only we understood our values better, we could program AI to promote these values. Stuart Russell, a leading AI scientist at Berkeley, offers an intriguing solution. Let’s design AI so that its goals are unclear. We then allow it to fill in the gaps by observing human behaviour. By learning its values from humans, the AI’s goals will be our goals.
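The proposal is easier to see in miniature. Below is a toy Python sketch – my illustration of value learning under uncertainty, not Russell's actual formulation – in which an agent starts with a uniform prior over candidate value systems and performs a Bayesian update each time it observes a (noisily rational) human choice. The actions and hypotheses are invented for the example.

    # Toy sketch of value learning under uncertainty: the agent holds a
    # posterior over candidate reward functions and updates it from observed
    # human choices. All names here are illustrative placeholders.
    import math

    # Candidate "value systems": each assigns a reward to every action.
    HYPOTHESES = {
        "altruistic":  {"help_human": 1.0,  "hoard_resources": -1.0, "shut_down": 0.0},
        "acquisitive": {"help_human": -0.5, "hoard_resources": 1.0,  "shut_down": -1.0},
        "indifferent": {"help_human": 0.0,  "hoard_resources": 0.0,  "shut_down": 0.0},
    }

    def choice_likelihood(action, rewards, beta=2.0):
        """P(human picks `action`) under a noisily rational (softmax) model."""
        weights = {a: math.exp(beta * r) for a, r in rewards.items()}
        return weights[action] / sum(weights.values())

    def update(posterior, observed_action):
        """Bayes rule: reweight each hypothesis by how well it predicts the choice."""
        unnorm = {h: p * choice_likelihood(observed_action, HYPOTHESES[h])
                  for h, p in posterior.items()}
        z = sum(unnorm.values())
        return {h: p / z for h, p in unnorm.items()}

    if __name__ == "__main__":
        # Uniform prior: the AI starts out genuinely unsure what we value.
        posterior = {h: 1 / len(HYPOTHESES) for h in HYPOTHESES}
        for observed in ["help_human", "help_human", "shut_down"]:
            posterior = update(posterior, observed)
            print(observed, {h: round(p, 3) for h, p in posterior.items()})

After a handful of observed choices, the posterior concentrates on the hypothesis that best explains the behaviour – the sense in which the AI’s goals become our goals.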


