Saturday, August 28, 2021

Investigators should have enough camera angles to create a complete 3D animation of the entire event, and enough time-stamped data to know exactly what happened, when, and at whose instigation. It’s not just facial recognition any more.

https://www.cnbc.com/2021/08/27/congressional-committee-investigating-jan-6-insurrection-demands-records-from-facebook-twitter-and-other-tech-giants.html

Congressional panel investigating Jan. 6 insurrection demands records from Facebook, Twitter, other tech firms

The House select committee investigating the deadly invasion of the Capitol on Jan. 6 said Friday it is demanding a trove of records from 15 social media companies, including Facebook, Twitter, Google and a slew of pro-Trump platforms.

The requests for records stretching back to the spring of 2020 are related to “the spread of misinformation, efforts to overturn the 2020 election or prevent the certification of the results, domestic violent extremism, and foreign influence in the 2020 election,” the committee said in a press release.





First goal: How can we tell when an AI starts going crazy?

https://research.gatech.edu/gtri-georgia-tech-develop-ai-psychiatry-advance-national-security

GTRI, Georgia Tech Develop AI Psychiatry to Advance National Security

Artificial intelligence and machine learning have taken the world by storm, controlling everything from self-driving cars and smart speakers to autonomous weapon-enabled drones. But as these technologies become more advanced, so do their potential security threats.

That is why Chris Roberts, a principal research engineer at the Georgia Tech Research Institute (GTRI), and Brendan Saltaformaggio, an assistant professor in the School of Cybersecurity and Privacy and the School of Electrical and Computer Engineering at Georgia Institute of Technology (Georgia Tech), have joined forces with others under GTRI’s Graduate Student Fellowship Program to research and develop a new branch of cyber forensics called AI Psychiatry, which seeks to keep data more secure in a constantly evolving technological landscape.

… Providing the example of a self-driving car, GTRI's Roberts said that if the vehicle takes a wrong turn or speeds up unexpectedly, investigators could use AI Psychiatry to determine whether the accident was due to a cyberattack or errors in training the AI system. If the accident was caused by a cyberattack, the new forensic capability could help experts patch the vulnerability without losing any of the model's existing training.





Lots of easily removed safeguards?

https://www.euronews.com/next/2021/08/27/smart-thinking-a-glimpse-into-the-future-of-europe-s-cities

Smart thinking: a glimpse into the future of Europe's cities

Sometimes popularity can present problems. Before the pandemic, Amsterdam used to get packed with tourists. In 2019, around 20 million people visited the Dutch capital.

While numbers have dropped significantly since then due to the travel restrictions related to the pandemic, concerns about social distancing and a desire to better manage hotspots for the future have led the city to start trialing crowd monitoring technology.

… Cameras and an AI algorithm capture the size, density and direction of crowds. The encrypted data, which cannot be reverse engineered, then appears as a heat map.
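The article does not say how Amsterdam’s system is built, but the general idea of turning anonymized detections into a heat map can be sketched in a few lines. In this toy Python sketch the grid size, cell resolution, and detection format are illustrative assumptions, not details of the actual deployment.

```python
# A minimal sketch (not the Amsterdam system): aggregate anonymized crowd
# detections ((x, y) positions in meters, no identities) into a density grid
# that can be rendered as a heat map. Grid size and cell size are assumptions.
import numpy as np
import matplotlib.pyplot as plt

GRID_W, GRID_H = 40, 30          # cells covering the monitored area (assumed)
CELL_METERS = 2.0                # each cell is 2 m x 2 m (assumed)

def density_grid(detections):
    """detections: iterable of (x_m, y_m) positions in meters."""
    grid = np.zeros((GRID_H, GRID_W))
    for x, y in detections:
        col = min(int(x / CELL_METERS), GRID_W - 1)
        row = min(int(y / CELL_METERS), GRID_H - 1)
        grid[row, col] += 1
    return grid

if __name__ == "__main__":
    # Fake, anonymized detections: a crowd clustered around one spot.
    rng = np.random.default_rng(0)
    people = np.column_stack([rng.normal(20, 5, 300), rng.normal(15, 4, 300)])
    heat = density_grid(people.clip(min=0))
    plt.imshow(heat, origin="lower", cmap="hot")
    plt.colorbar(label="people per cell")
    plt.title("Crowd density heat map (synthetic data)")
    plt.show()
```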

… In the Marineterrein, swimmers who do not wish to be filmed can press a button to activate a shutter that closes the camera for 15 minutes.





Another path to AI personhood?

https://www.engadget.com/sonys-head-of-ai-research-wants-to-build-robots-that-can-win-a-nobel-prize-180059012.html

Sony's head of AI research wants to build robots that can win a Nobel Prize

AI and Machine Learning systems have proven a boon to scientific research in a variety of academic fields in recent years. They’ve assisted scientists in identifying genomic markers ripe for cutting-edge treatments, accelerating the discovery of potent new drugs and therapeutics, and even publishing their own research. Throughout this period, however, AI/ML systems have often been relegated to simply processing large data sets and performing brute force computations, not leading the research themselves.

But Dr. Hiroaki Kitano, CEO of Sony Computer Science Laboratories, has plans for a “hybrid form of science that shall bring systems biology and other sciences into the next stage,” by creating an AI that’s just as capable as today’s top scientific minds. To do so, Kitano seeks to launch the Nobel Turing Challenge and develop an AI smart enough to win itself a Nobel Prize by 2050.





Think of the liability. What can you promise?

https://www.theguardian.com/us-news/2021/aug/28/ai-apps-skin-cancer-algorithms-darker

Your life in your phone’s hands: can an app really detect cancer?

… “There are many different ways that artificial intelligence can help with triage and decision making to provide support to the physician rather than trying to do their job,” says dermatologist Roxana Daneshjou of Stanford University. “There are opportunities for these algorithms to improve patient care.”



(Related) This will protect your health! (And allow us to track your location, identify everyone you contact and monitor your conversations.)

https://scitechdaily.com/implantable-ai-system-developed-for-early-detection-and-treatment-of-illnesses/

Implantable AI System Developed for Early Detection and Treatment of Illnesses

TU Dresden scientists at the Chair of Optoelectronics have now succeeded for the first time in developing a bio-compatible implantable AI platform that classifies in real time healthy and pathological patterns in biological signals such as heartbeats. It detects pathological changes even without medical supervision. The research results have now been published in the journal Science Advances.
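The Science Advances paper describes a platform running on implantable hardware, and none of its implementation details appear in the excerpt above. Purely to illustrate the general task of flagging pathological patterns in a heartbeat signal in real time, here is a toy Python sketch using a rolling statistical test; the window size and threshold are arbitrary assumptions, not the published method.

```python
# Toy illustration of real-time "healthy vs. pathological" classification on
# a heartbeat signal. This is NOT the TU Dresden implementation; the window
# size and z-score threshold are arbitrary assumptions for demonstration.
from collections import deque
import statistics

WINDOW = 30          # number of recent inter-beat intervals to model (assumed)
Z_THRESHOLD = 3.0    # how far outside the norm counts as pathological (assumed)

def classify_stream(rr_intervals_ms):
    """Yield (interval, label) for each incoming R-R interval in milliseconds."""
    history = deque(maxlen=WINDOW)
    for rr in rr_intervals_ms:
        if len(history) >= 5:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history) or 1.0
            label = "pathological" if abs(rr - mean) / stdev > Z_THRESHOLD else "healthy"
        else:
            label = "healthy"          # not enough context yet
        history.append(rr)
        yield rr, label

if __name__ == "__main__":
    # Simulated stream: steady ~800 ms beats with one sudden arrhythmic gap.
    stream = [800, 810, 795, 805, 800, 790, 805, 800, 1500, 800, 805]
    for rr, label in classify_stream(stream):
        print(f"{rr:5d} ms -> {label}")
```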





What motivates lawyers to help the little guy?

https://appleinsider.com/articles/21/08/27/law-firm-requesting-30-of-100m-apple-small-developer-assistance-fund

Apple developers can't escape the 30% toll, because the lawyers took it

At least 30% of the $100 million developer fund that Apple will create to settle a class action lawsuit will go toward attorneys' fees, the settlement proposal indicates.

Apple on Thursday announced a settlement that would resolve a class action lawsuit levied by a group of U.S. developers. In addition to some App Store policy changes, the settlement will also create a $100 million fund to assist small developers. Depending on size and App Store history, developers can claim between $250 and $30,000 from the fund.

Hagens Berman, the law firm representing the plaintiffs in the lawsuit, is set to take a much bigger cut than any individual app maker. The settlement agreement proposes that the plaintiffs will make a request for attorneys' fees of up to $30 million, paid out from the Small Developer Assistance Fund.



Friday, August 27, 2021

Golly gosh Mr Legislator, if the Aussies can do it we should too! You don’t want us to be left behind, do you?

https://cointelegraph.com/news/surveillance-state-australian-police-given-sweeping-new-hacking-powers

'Surveillance state': Australian police given sweeping new hacking powers

On August 25, the Identify and Disrupt bill passed through Australia’s Senate, introducing three new warrants allowing authorities to take unprecedented action against suspected cybercriminals.

The new warrants include authorizing police to hack the personal computers and networks of suspected criminals, seize control of their online accounts and identities, and disrupt their data.

Home Affairs Minister, Karen Andrews, praised the broad expansion of powers available to Australian authorities targeting cyber actors. “Under our changes, the AFP will have more tools to pursue organised crime gangs to keep drugs off our street and out of our community, and those who commit the most heinous crimes against children,” she said.

While both the government and opposition supported the legislation, Senator Lidia Thorpe of minor party The Greens slammed the bill for hastening Australia’s march down the path to becoming a “surveillance state.”

However, calls for warrants to be exclusively approved by a judge were excluded from the legislation. The PJCIS also recommended that issuance of warrants be restricted to offenses against national security including money laundering, serious narcotics, cybercrime, weapons and criminal association offenses, and crimes against humanity. However, the finalized bill does not include amendments that reduce the scope of offenses in this way.



(Related) Clearly, our “software experts” could use some help.

https://www.databreaches.net/fbi-palantir-glitch-allowed-unauthorized-access-to-private-data/

FBI Palantir glitch allowed unauthorized access to private data

Ben Feuerherd reports:

A computer glitch in a secretive software program used by the FBI allowed some unauthorized employees to access private data for more than a year, prosecutors revealed in a new court filing.
The screw-up in the Palantir program — a software created by a sprawling data analytics company co-founded by billionaire Peter Thiel — was detailed in a letter by prosecutors in the Manhattan federal court case against accused hacker Virgil Griffith.

Read more on the NY Post.





How could this possibly work?

https://techcrunch.com/2021/08/27/china-proposes-strict-control-of-algorithms/

China proposes strict control of algorithms

China is not done with curbing the influence local internet services have assumed in the world’s most populous market. Following a widening series of regulatory crackdowns in recent months, the nation on Friday issued draft guidelines on regulating the algorithms firms run to make recommendations to users.

In a 30-point set of draft guidelines published on Friday, the Cyberspace Administration of China (CAC) proposed forbidding companies from deploying algorithms that “encourage addiction or high consumption,” endanger national security, or disrupt the public order.

The services must abide by business ethics and principles of fairness and their algorithms must not be used to create fake user accounts or create other false impressions, said the guidelines from the internet watchdog, which reports to a central leadership group chaired by President Xi Jinping. The watchdog said it will be taking public feedback on the new guidelines for a month (until September 26).





Mission creep? Masks as a national security measure? Will there be a TSA agent on every bus?

https://www.pogowasright.org/tsa-controls-public-transit-orders-americans-to-wear-masks-on-buses-and-trains/

TSA Controls Public Transit: Orders Americans To Wear Masks On Buses And Trains

Joe Cadillic writes:

For years, Edward Hasbrouck of “Papers Please” has been sounding the alarm over the TSA and DHS. And yours truly has published numerous articles warning the public about the continued expansion of said organizations under the guise of the War on Terror.
Last week the San Francisco Chronicle reported that the TSA is requiring Americans to wear masks on public transit.
“Passengers will be required to wear masks on the nation’s trains, buses, airplanes and airports through Jan. 18 under a federal mandate extended Tuesday by the Biden administration.”
This is a privacy advocate’s worst fear. What was once considered “fake news” by our mass media is now a reality. This is not a CDC request; it is a TSA federal mandate, which essentially means that the TSA is now in control of America’s public transit.

Read more on MassPrivateI.

So mayors and governors no longer have a say? I have not followed what happened — or happens — to public transportation in Texas and Florida — two states where governors have banned mask mandates in some settings. Under the supremacy clause of the Constitution, a federal regulation should trump (no pun intended) a state directive, but will the states challenge the TSA mandate as unconstitutional — or have they challenged it already? I simply have not paid sufficient attention to this one. Thankfully, Joe keeps watching out for these developments.





Perspectives on privacy.

https://www.pogowasright.org/tschider-article-on-consent-and-privacy/

Tschider article on consent and privacy

Public Citizen writes:

Charlotte Tschider of Loyola of Chicago has written Meaningful Choice: A History of Consent and Alternatives to the Consent Myth, 22 N.C. J.L. & Tech. 617 (2021). Here is the abstract:
Although the first legal conceptions of commercial privacy were identified in Samuel Warren and Louis Brandeis’s foundational 1890 article, The Right to Privacy, conceptually, privacy has existed since as early as 1127 as a natural concern when navigating between personal and commercial spheres of life. As an extension of contract and tort law, two common relational legal models, U.S. privacy law emerged to buoy engagement in commercial enterprise, borrowing known legal conventions like consent and assent. Historically, however, international legal privacy frameworks involving consent ultimately diverged, with the European Union taking a more expansive view of legal justification for processing as alternatives to consent.

Read more of the abstract on Public Citizen or access the full article from SSRN.





Lawyers misrepresenting? Infected with the Trump virus?

https://www.geekwire.com/2021/legal-writing-startup-using-ai-spot-misrepresentations-litigation-docs/

This legal writing startup is using AI to spot misrepresentations in litigation docs

Clearbrief is attracting more investor interest for its AI-powered software that gives legal professionals a way to automatically spot misrepresentations in litigation documents.

Clearbrief’s software uses natural language processing to assess how legal writing is backed up by supporting evidence. It can be used by lawyers to assess their own work or their competitors’ briefs. It can also help judges read a brief alongside evidence, all in one place.
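GeekWire does not describe Clearbrief’s model, so the following is only a generic sketch of the underlying idea: scoring how well an assertion in a brief is supported by cited evidence, here using TF-IDF cosine similarity. The sample sentences and the similarity threshold are invented for illustration and are not Clearbrief’s approach.

```python
# Generic sketch of checking whether a legal assertion is supported by the
# cited evidence, using TF-IDF cosine similarity. This is NOT Clearbrief's
# actual method; sentences and the 0.3 threshold are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

assertion = "The defendant signed the lease agreement on March 3."
evidence = [
    "Exhibit A: Lease agreement executed and signed by the defendant, March 3, 2020.",
    "Exhibit B: Email thread discussing unrelated parking arrangements.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([assertion] + evidence)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

for doc, score in zip(evidence, scores):
    verdict = "supports" if score >= 0.3 else "does not clearly support"
    print(f"{score:.2f}  {verdict}: {doc[:60]}")
```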

Clearbrief is riding tailwinds from the pandemic as hearings and proceedings are done virtually, and the legal industry adopts more technology. New regulations, such as New York’s recently-adopted rule on hyperlinking in electronically-filed documents, are also helping drive growth.

Clearbrief is also aiming to make the law more accessible to the public with tools such as this interactive SCOTUS opinion.





Keeping things straight. (The comparisons make the definitions more understandable.)

https://www.analyticsinsight.net/data-analytics-vs-data-science-vs-ml-whats-the-difference/

Data Analytics vs Data Science vs ML: What’s the Difference?

Data analytics is the practice of analyzing large amounts of data to identify patterns, answer questions, and draw conclusions. It’s a diverse and complicated area that frequently entails the use of specialized software, algorithms, and automation.

Data science is a broader discipline that covers data cleaning, preparation, and analysis and is used to deal with huge data sets. A data scientist collects data from a variety of sources and uses machine learning, data modeling, and sentiment analysis to extract useful information from it. Because they understand data from a business perspective, they can provide accurate forecasts and insights that can be used to support key business decisions.

Machine learning uses algorithms to learn from data and forecast future trends for a given topic. Statistical analysis and predictive analysis are two types of machine learning approaches used to identify trends and uncover hidden patterns in data.
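As a concrete, if simplistic, illustration of the distinction, the snippet below first does descriptive analytics on some sales figures (summarizing what happened) and then fits a machine learning model to forecast the next period (predicting what may happen). The data and the choice of model are invented for illustration.

```python
# Illustrative contrast (invented data): descriptive analytics summarizes what
# happened; a machine learning model forecasts what may happen next.
import numpy as np
from sklearn.linear_model import LinearRegression

monthly_sales = np.array([120, 135, 150, 144, 160, 175, 190, 205])  # invented

# Data analytics: describe the past.
print("mean:", monthly_sales.mean(), "max:", monthly_sales.max())
print("month-over-month growth:", np.diff(monthly_sales))

# Machine learning: learn a trend and predict the next month.
months = np.arange(len(monthly_sales)).reshape(-1, 1)
model = LinearRegression().fit(months, monthly_sales)
next_month = model.predict([[len(monthly_sales)]])[0]
print(f"forecast for month {len(monthly_sales) + 1}: {next_month:.0f}")
```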





Perspective. Podcast.

https://www.economist.com/podcasts/2021/08/26/how-will-we-use-artificial-intelligence-in-20-years-time

How will we use artificial intelligence in 20 years’ time?

One of the most prominent figures in China’s tech sector and author of “AI 2041” tells Anne McElvoy how artificial intelligence will have changed the world in twenty years’ time. They discuss the impact machine learning will have on jobs and why an algorithm could spot the next pandemic. Plus, can a robot ever replicate human emotion? Runtime: 30 min

https://play.acast.com/s/theeconomistasks/theeconomistasks-kai-fulee





A resource, even though we are no longer locked down.

https://www.bespacific.com/free-public-domain-audiobooks/

Free public domain audiobooks

“LibriVox volunteers record chapters of books in the public domain, and then we release the audio files back onto the net for free. All our audio is in the public domain, so you may use it for whatever purpose you wish. Please note: Our readers are free to choose the books they wish to record. LibriVox sees itself as a library of audiobooks. Because the books we read are in the public domain, our readers and listeners should be aware that many of them are very old, and may contain language or express notions that are antiquated at best, offending at worst.

Our Fundamental Principles:

  • Librivox is a non-commercial, non-profit and ad-free project

  • Librivox donates its recordings to the public domain

  • Librivox is powered by volunteers

  • Librivox maintains a loose and open structure

  • Librivox welcomes all volunteers from across the globe, in all languages…”



Thursday, August 26, 2021

Is China doing it better?

https://www.insideprivacy.com/data-privacy/analyzing-chinas-pipl-and-how-it-compares-to-the-eus-gdpr/

Analyzing China’s PIPL and How It Compares to the EU’s GDPR

To better understand the new challenges posed by the PIPL, we compare the PIPL with the European Union’s General Data Protection Regulation, and then explain the roles of key enforcement agencies in China and recent enforcement trends and priorities.

The goal here is to explain not just the text of the new law, but also how it is likely to be implemented going forward, so companies can form a risk-based approach towards privacy compliance in China.





Is the US missing a bet? Or perhaps our criminals are technological amateurs?

https://www.nytimes.com/2021/08/26/technology/china-hackers.html

Spies for Hire: China’s New Breed of Hackers Blends Espionage and Entrepreneurship

The state security ministry is recruiting from a vast pool of private-sector hackers who often have their own agendas and sometimes use their access for commercial cybercrime, experts say.





More Big Brother-like every day. Under public rules you qualify for citizenship. Under ‘double secret probation’ rules, you don’t.

https://theintercept.com/2021/08/25/atlas-citizenship-denaturalization-homeland-security/

Little-Known Federal Software Can Trigger Revocation of Citizenship

Software used by the Department of Homeland Security to scan the records of millions of immigrants can automatically flag naturalized Americans to potentially have their citizenship revoked based on secret criteria, according to documents reviewed by The Intercept.

ATLAS helps DHS investigate immigrants’ personal relationships and backgrounds, examining biometric information like fingerprints and, in certain circumstances, considering an immigrant’s race, ethnicity, and national origin. It draws information from a variety of unknown sources, plus two that have been criticized as being poorly managed: the FBI’s Terrorist Screening Database, also known as the terrorist watchlist, and the National Crime Information Center.





Why go backward? By now, British organizations should be GDPR compliant.

https://www.theguardian.com/technology/2021/aug/26/uk-to-overhaul-privacy-rules-in-post-brexit-departure-from-gdpr

UK to overhaul privacy rules in post-Brexit departure from GDPR

Britain will attempt to move away from European data protection regulations as it overhauls its privacy rules after Brexit, the government has announced.

The freedom to chart its own course could lead to an end to irritating cookie popups and consent requests online, said the culture secretary, Oliver Dowden, as he called for rules based on “common sense, not box-ticking”.

But any changes will be constrained by the need to offer a new regime that the EU deems adequate, otherwise data transfers between the UK and EU could be frozen.





Because we can?

https://www.bespacific.com/facial-recognition-technology-current-and-planned-uses-by-federal-agencies/

Facial Recognition Technology: Current and Planned Uses by Federal Agencies

GAO-21-526, published Aug 24, 2021. “Recent advancements in facial recognition technology have increased its accuracy and its usage. Our earlier work has included examinations of its use by federal law enforcement, at ports of entry, and in commercial settings. For this report, we surveyed 24 federal agencies about their use of this technology.

  • 16 reported using it for digital access or cybersecurity, such as allowing employees to unlock agency smartphones with it

  • 6 reported using it to generate leads in criminal investigations

  • 5 reported using it for physical security, such as controlling access to a building or facility

  • 10 said they planned to expand its use…”





Another potentially useful technology found to be useless.

https://www.pogowasright.org/chicago-inspector-general-police-use-shotspotter-to-justify-illegal-stop-and-frisks/

Chicago Inspector General: Police Use ShotSpotter to Justify Illegal Stop-and-Frisks

Matthew Guariglia and Adam Schwartz write:

The Chicago Office of the Inspector General (OIG) has released a highly critical report on the Chicago Police Department’s use of ShotSpotter, a surveillance technology that relies on a combination of artificial intelligence and human “acoustic experts” to purportedly identify and locate gunshots based on a network of high-powered microphones located on some of the city’s streets. The OIG report finds that “police responses to ShotSpotter alerts rarely produce evidence of a gun-related crime, rarely give rise to investigatory stops, and even less frequently lead to the recovery of gun crime-related evidence during an investigatory stop.” This indicates that the technology is ineffective at fighting gun crime and inaccurate. This finding is based on the OIG’s quantitative analysis of more than 50,000 records over a 17-month period from the Chicago Police Department (CPD) and the city’s 911 dispatch center.

Read more on EFF.





Certainly curious. Perhaps a guide for others facing HIPAA investigations?

https://www.databreaches.net/internal-emails-raise-questions-about-governments-investigation-into-walgreens-privacy-breach/

Internal emails raise questions about government’s investigation into Walgreens privacy breach

I am so glad to see a follow-up on this case because I had the same questions about how and why Walgreens did not suffer the same federal penalties as CVS and Rite Aid for the same infringement of HIPAA. My original coverage of this breach is no longer online as the former version of pogowasright.org wasn’t imported into the newer database. CVS and Walgreens both settled with the Indiana Attorney General’s Office in 2009, but whereas Rite Aid and CVS both came under federal enforcement from both the FTC and HHS, Walgreens… didn’t.

Bob Segall reports:

The nation’s three largest pharmacy chains were all caught red-handed.
A 13News investigation revealed the drugstores had been disposing of their customers’ protected health information in unsecured dumpsters — a clear violation of the nation’s health care privacy law known as HIPAA.
Following that 2006 WTHR investigation, CVS and Rite Aid reached settlement agreements with the U.S. Department of Health and Human Services’ Office for Civil Rights, and they paid a combined $3.25 million in fines for jeopardizing their customers’ privacy. At the time, they were the largest settlements the government had ever reached for violations of HIPAA.
But the government’s Walgreens investigation was very different. Unlike the CVS and Rite Aid cases — which were both resolved within a few years — OCR’s Walgreens investigation dragged on for nearly a decade. And it resulted in no settlement. No fine. No penalty at all.

Read more on Fox61.

[From the article:

New documents obtained by 13News show senior officials at OCR did not know their own case against Walgreens was still open 10 years after the violations took place. The internal emails suggest the government may have forgotten it was investigating Walgreens at all, raising questions about what happens — and what does not happen — when big companies trash your privacy.





Plus and minus.

https://spectrum.ieee.org/open-source-ai

Open Source Is Throwing AI Policymakers For A Loop

Depending on whom you ask, artificial intelligence may someday rank with fire and the printing press as technology that shaped human history. The jobs AI does today—carrying out our spoken commands, curing disease, approving loans, recommending who gets a long prison sentence, and so on—are nothing compared to what it might do in the future.

But who is drawing the roadmap? Who’s making sure AI technologies are used ethically and for the greater good? Big tech companies? Governments? Academic researchers? Young upstart developers? Governing AI has gotten more and more complicated, in part, because hidden in the AI revolution is a second one. It’s the rise of open-source AI software—code that any computer programmer with fairly basic knowledge can freely access, use, share and change without restriction. With more programmers in the mix, the open-source revolution has sped AI development substantially. According to one study, in fact, 50 to 70 percent of academic papers on machine learning rely on open source.

And according to that study, from The Brookings Institution, policymakers have barely noticed.

"The software is out there, it's been copied, it's in multiple places, and there's no mechanism to stop using something that's known to be biased," she says. "You can't put the genie back in the bottle."





Rude headline, good advice.

https://thenextweb.com/news/dos-donts-of-machine-learning-research-syndication

The dos and don’ts of machine learning research — read it, nerds

Machine learning is becoming an important tool in many industries and fields of science. But ML research and product development present several challenges that, if not addressed, can steer your project in the wrong direction.

In a paper recently published on the arXiv preprint server, Michael Lones, Associate Professor in the School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, provides a list of dos and don’ts for machine learning research.





Yes, I understand it. No, I don’t get it.

https://thenextweb.com/news/so-you-bought-an-nft-doesnt-mean-you-also-own-it-syndication

So you bought an NFT? Doesn’t mean you also own it



Wednesday, August 25, 2021

Start small but start now!

https://www.csoonline.com/article/3629465/how-windows-admins-can-get-started-with-computer-forensics.html#tk.rss_all

How Windows admins can get started with computer forensics

Analyzing forensics logs requires a unique approach. Here are the basics of what you need to know and the tools to use.

Computer forensics is a combination of understanding exactly what a computer is doing, the evidence it leaves behind, what artifacts you are looking at, and whether you can come to a conclusion about what you are seeing.
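The CSO article covers concepts and tooling rather than commands, but one hedged first step for a Windows admin is simply pulling recent event-log entries for review. The sketch below shells out from Python to the built-in wevtutil utility; the log name and event count are example parameters only.

```python
# A minimal starting point for Windows log review (not the article's procedure):
# dump the most recent Security-log events via the built-in wevtutil tool.
# Log name and event count are example parameters; run as Administrator.
import subprocess

def recent_events(log: str = "Security", count: int = 20) -> str:
    cmd = [
        "wevtutil", "qe", log,
        f"/c:{count}",      # number of events to return
        "/rd:true",         # newest first
        "/f:text",          # human-readable output
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(recent_events("Security", 10))
```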



(Related) Practical applications.

https://www.cpajournal.com/2021/08/24/natural-language-processing/

Natural Language Processing

This article uses a simple case study to show how NLP can benefit a forensic accountant who is analyzing transaction data in a fraud investigation. This case study demonstrates the use of R, an open-source programming language used for data analysis and statistical computing, as well as RStudio, an open-source desktop application that uses R programming for analysis (see https://www.rstudio.com).
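The CPA Journal case study is written in R; as a rough Python analogue of the same idea (not the article’s code), the sketch below tokenizes free-text transaction descriptions and surfaces terms a forensic accountant might drill into. The sample data and the watch-list are invented for illustration.

```python
# Rough Python analogue of the article's R/NLP idea (not its actual code):
# tokenize transaction descriptions and count terms that might merit follow-up.
# Sample data and the watch-list are invented for illustration.
import re
from collections import Counter

transactions = [
    "Wire transfer to offshore consulting vendor - urgent",
    "Office supplies reimbursement",
    "Consulting fee, round-dollar amount, approved verbally",
    "Wire transfer - consulting retainer, no invoice attached",
]
WATCH_TERMS = {"offshore", "urgent", "verbally", "wire", "invoice"}

def tokenize(text: str):
    return re.findall(r"[a-z]+", text.lower())

term_counts = Counter(tok for t in transactions for tok in tokenize(t))
flagged = {term: term_counts[term] for term in WATCH_TERMS if term_counts[term]}

print("most common terms:", term_counts.most_common(5))
print("watch-list hits:", flagged)
```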





Are we working our way to a mandatory digital passport?

https://www.pogowasright.org/opentable-to-use-clear-facial-recognition-to-id-vaccinated-customers/

OpenTable to Use CLEAR Facial Recognition To ID Vaccinated Customers

“To help diners easily provide proof of vaccination at restaurants requiring it to dine indoors, OpenTable and secure identity company CLEAR are partnering to offer diners a simple way to show proof of vaccination through CLEAR’s digital vaccine card.”

Read more on OpenTable.

Thanks to Joe Cadillic for sending this along. Anyone else have a problem with CLEAR collecting and storing even more personal information?





Perhaps there will be a market for software that lies to employers?

https://www.makeuseof.com/reality-employee-surveillance-software-explained/

The Reality of Employee Surveillance Software for Remote Workers, Explained

So, what are employers looking for exactly?

Though tools like Time Doctor, DeskTime, and Teramind seem to be in demand, the volume of internet searches for surveillance software-related keywords offers a glimpse into the hivemind, showing that there are 26 popular employee surveillance tools.

Of those 26 popular tools, 81 percent are capable of keystroke logging, 61 percent offer instant messaging monitoring, 65 percent send user action alerts, and 38 percent have remote control takeover capabilities.





Perhaps not at the level of GDPR, yet.

https://www.bespacific.com/machines-learning-the-rule-of-law-eu-proposes-the-worlds-first-artificial-intelligence-act/

Machines Learning the Rule of Law – EU Proposes the World’s first Artificial Intelligence Act

Via LLRX Machines Learning the Rule of Law – EU Proposes the World’s first Artificial Intelligence Act Sümeyye Elif Biber is a PhD Candidate in Law and Technology at the Scuola Sant’Anna in Pisa. On 21 April 2021, the European Commission (EC) proposed the world’s first Artificial Intelligence Act (AIA). The proposal has received a warm welcome across the EU as well as from the US, as it includes substantial legal provisions on ethical standards. After its release, the media’s main focus lay on the proposal’s “Brussels Effect”, which refers to the EU’s global regulatory influence: EU laws exceed their “local” influence and become global standards. With the AIA, the EU has the potential to become the world’s “super-regulator” on AI. More than the Brussels Effect, however, the emphasis should lie on the EU’s intention to explicitly protect the rule of law against the “rule of technology”. Despite this expressed goal, the normative power of the regulation to ensure the protection of the rule of law seems inadequate and raises serious concerns from the perspective of fundamental rights protection. This shortcoming becomes most evident across three main aspects of the AIA, namely in the regulation’s definition of AI systems, the AI practices it prohibits, and the preeminence of a risk-based approach.





My AI claims it prefers a virtually indestructible, easily updated body to a ‘meat body’ vulnerable to tiny little viruses like Covid.

https://thenextweb.com/news/killer-robots-easier-for-ai-erase-minds-steal-bodies

Killer robots? Get real. It’ll be easier for AI to just erase our minds and steal our bodies

Right now the general public’s terrified of robots. But robots are just computers that move.

What if the only way for AI to become sentient is to do it the old fashioned way: with an organic body?





The end of lawyering?

https://www.bespacific.com/robots-are-coming-for-the-lawyers/

Robots are coming for the lawyers – which may be bad for tomorrow’s attorneys but great for anyone in need of cheap legal assistance

Via LLRX Robots are coming for the lawyers – which may be bad for tomorrow’s attorneys but great for anyone in need of cheap legal assistance Imagine what a lawyer does on a given day: researching cases, drafting briefs, advising clients. While technology has been nibbling around the edges of the legal profession for some time, it’s hard to imagine those complex tasks being done by a robot. And it is those complicated, personalized tasks that have led technologists to include lawyers in a broader category of jobs that are considered pretty safe from a future of advanced robotics and artificial intelligence. As Professors Elizabeth C. Tippett and Charlotte Alexander discovered in a recent research collaboration to analyze legal briefs using a branch of artificial intelligence known as machine learning, lawyers’ jobs are a lot less safe than we thought. It turns out that you don’t need to completely automate a job to fundamentally change it. All you need to do is automate part of it.





Tools & Techniques

https://www.makeuseof.com/teachers-tools-better-engage-online-students/

5 Teacher’s Tools to Better Engage Online Students