Saturday, August 10, 2019


Another one I’m not going to like. Do we all agree on what must be censored? Define bias.
White House proposal would have FCC and FTC police alleged social media censorship
A draft executive order from the White House could put the Federal Communications Commission in charge of shaping how Facebook, Twitter and other large tech companies curate what appears on their websites, according to multiple people familiar with the matter.
The draft order, a summary of which was obtained by CNN, calls for the FCC to develop new regulations clarifying how and when the law protects social media websites when they decide to remove or suppress content on their platforms. Although still in its early stages and subject to change, the Trump administration's draft order also calls for the Federal Trade Commission to take those new policies into account when it investigates or files lawsuits against misbehaving companies. Politico first reported the existence of the draft.
If put into effect, the order would reflect a significant escalation by President Trump in his frequent attacks against social media companies over an alleged but unproven systemic bias against conservatives by technology platforms. And it could lead to a significant reinterpretation of a law that, its authors have insisted, was meant to give tech companies broad freedom to handle content as they see fit.
The Trump administration's proposal seeks to significantly narrow the protections afforded to companies under Section 230 of the Communications Decency Act, a part of the Telecommunications Act of 1996. Under the current law, internet companies are not liable for most of the content that their users or other third parties post on their platforms. Tech platforms also qualify for broad legal immunity when they take down objectionable content, at least when they are acting "in good faith."


(Related?) Would this redefine a “Clear and present danger” test?
White House questions tech giants on ways to predict shootings from social media
Top officials in the Trump administration expressed interest in tools that might anticipate mass shootings or predict attackers by scanning social media posts, photos and videos during a meeting Friday with tech giants including Facebook, Google and Twitter.
In response, though, tech leaders expressed doubt that such technology is feasible, while raising concerns about the privacy risks that such a system might create for all users, two of the sources said.




Coming soon to your neighborhood.
Ring, the smart doorbell home security system Amazon bought for over $1 billion last year, is involved in some fairly unnerving arrangements with local law enforcement agencies. Wouldn’t you like to know if the cops in your town are among them?
That’s precisely what Shreyas Gandlur, an incoming senior studying electrical engineering at the University of Illinois at Urbana-Champaign, put together, using Amazon’s own demands for narrative control over the law enforcement agencies it works with to build an interactive map:
… Where Ring is concerned, FFTF’s map only includes about 50 cities, a far cry from the “more than 225” police departments reported by Gizmodo late last month. (Ring has declined to share the exact figure.) Finding the rest was, in a sense, trivial.
“Ring pre-writes almost all of the messages shared by police across social media, and attempts to legally obligate police to give the company final say on all statements about its products,” my colleague Dell Cameron wrote, a detail Gandlur seized on.
“I added a bunch of agencies I found by literally searching ‘excited to join neighbors by ring’ on Twitter and searching similar phrases on Google,” Gandlur said. “Nothing too complicated and it’s pretty funny that Ring controlling the content of police press releases came to my aid since basically every agency releases the same statement.” If Ring hoped to obfuscate which towns were using it for surveillance purposes, it clearly failed.
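The article doesn’t include any code, but the approach Gandlur describes is easy to picture. Here is a minimal, hypothetical sketch of that kind of phrase search, assuming the tweepy library (3.x) and Twitter API credentials; the placeholder keys, the 500-tweet cap, and the dictionary of handles are all illustrative, not anything from the article.

```python
# Hypothetical sketch: search Twitter for Ring's boilerplate announcement
# phrase and list the accounts posting it, roughly as described above.
# Assumes tweepy 3.x and valid Twitter API credentials (placeholders below).
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

PHRASE = '"excited to join neighbors by ring"'  # the pre-written statement text

agencies = {}
for tweet in tweepy.Cursor(api.search, q=PHRASE, tweet_mode="extended").items(500):
    user = tweet.user
    # Many hits are police/sheriff accounts reposting Ring's canned statement.
    agencies.setdefault(user.screen_name, user.location or "location not listed")

for handle, location in sorted(agencies.items()):
    print(f"@{handle}\t{location}")
```

Because Ring pre-writes the announcements, the same search string surfaces agency after agency, which is exactly why the company’s message control backfired here.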




AI and the GDPR
The Right to Human Intervention: Law, Ethics and Artificial Intelligence
The paper analyses the new right to human intervention in the use of information technology, automated processes and advanced algorithms in individual decision-making. Art. 22 of the new General Data Protection Regulation (GDPR) provides that the data subject has the right not to be subject to a fully automated decision on matters of legal importance to her interests; hence the data subject has a right to human intervention in this kind of decision.
[From the Conclusion]
As may be clarified, human intervention does not always lessen the danger of discrimination, while technology can help prevent bias, suggesting not only privacy by design but also fairness by design. This can be achieved through the application of the principle of justice to algorithms, which will prevent discrimination. We need not only human intervention but also algorithmic neutrality, or 'correct' policy-directed algorithms, since, as with human intervention, unfair factors may inappropriately affect decisions.
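The paper stays at the level of principle, but the Art. 22 requirement is easy to sketch as an architectural pattern: the model can score every case, yet decisions with legal or similarly significant effects are routed to a person rather than applied automatically. The following is a minimal, hypothetical illustration; the class, function and field names are my own assumptions, not anything from the paper or the regulation.

```python
# Hypothetical sketch of an Art. 22-style "human intervention" hook in an
# automated decision pipeline. All names and thresholds are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    subject_id: str
    score: float      # model output, e.g. estimated probability of default
    automated: bool   # False once a human has reviewed the case
    outcome: str      # "approve", "deny", etc.

def decide(subject_id: str,
           score: float,
           legally_significant: bool,
           human_review: Callable[[str, float], str]) -> Decision:
    """Apply the model, but defer legally significant adverse outcomes to a human."""
    proposed = "approve" if score < 0.5 else "deny"
    if legally_significant and proposed == "deny":
        # The data subject must not face a *solely* automated decision with
        # legal effects, so a human reviewer makes the final call here.
        outcome = human_review(subject_id, score)
        return Decision(subject_id, score, automated=False, outcome=outcome)
    return Decision(subject_id, score, automated=True, outcome=proposed)
```

The point of the hook is exactly the paper’s caveat: routing a case to a person does not by itself remove bias, so the automated part still needs fairness-by-design treatment of its own.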




How AI thinks.
Causal deep learning teaches AI to ask why
Most AI runs on pattern recognition, but as any high school student will tell you, correlation is not causation. Researchers are now looking at ways to help AI fathom this deeper level.
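The article’s one-liner is easy to make concrete. Below is a minimal sketch, not from the article, showing why pattern recognition alone misleads: a hidden confounder Z drives both X and Y, so they correlate strongly even though X has no causal effect on Y, and only an intervention (Pearl’s do-operator, simulated here by setting X independently of Z) reveals that. The coefficients and variable names are illustrative assumptions.

```python
# Correlation vs. causation: a confounder Z -> X and Z -> Y, with no X -> Y edge.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational world: X and Y both depend on the hidden cause Z.
z = rng.normal(size=n)
x_obs = 2.0 * z + rng.normal(size=n)
y_obs = 3.0 * z + rng.normal(size=n)
print("observational corr(X, Y):", np.corrcoef(x_obs, y_obs)[0, 1])  # strong (~0.85)

# Interventional world: do(X) breaks the Z -> X edge; Y still depends only on Z.
x_do = rng.normal(size=n)            # X set by the experimenter, independent of Z
y_do = 3.0 * z + rng.normal(size=n)
print("interventional corr(X, Y):", np.corrcoef(x_do, y_do)[0, 1])   # ~0
```

A purely correlational model would happily predict Y from X in the first world and fail in the second; teaching models to distinguish the two cases is the point of the causal deep learning work the article describes.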


