Saturday, May 21, 2022

Providing both the tools and rules that enabled an authorized user to “win” millions of dollars could make the lawsuits a bit confusing.

https://www.bloomberg.com/news/features/2022-05-19/crypto-platform-hack-rocks-blockchain-community

The Math Prodigy Whose Hack Upended DeFi Won’t Give Back His Millions

An 18-year-old graduate student exploited a weakness in Indexed Finance’s code and opened a legal conundrum that’s still rocking the blockchain community. Then he disappeared.

Medjedovic hasn’t officially responded to either suit; he told me he doesn’t even have a lawyer in Ontario. But in our email exchanges, he argued that he’d executed a perfectly legal series of trades. Nothing he did “involves getting access to a system I was not allowed access into,” he said. “I did not steal anyone’s private keys. I interacted with the smart contract according to its very own publicly available rules. The people who lost internet tokens in this trade were other people seeking to use the smart contract to their own advantage and taking on risky trading positions that they, apparently, did not fully understand.” Medjedovic added that he’d taken on “substantial risk” in pursuing this strategy. If he’d failed he would have lost “a pretty large chunk of my portfolio.”

The case raises several tricky questions about how people should be allowed to interact with code on the blockchain. For instance, the plaintiffs allege that Medjedovic made a “false representation” by manipulating the value of the tokens in the pools. But did Medjedovic do this, or did the algorithm? Barry Sookman, a lawyer in Toronto specializing in information technology, says it’s a distinction without a difference: “Individuals are responsible for the activities of technologies they control.”

And if Medjedovic was engaged in deception, who was being deceived? That’s one basis on which Andrew Lin, a Dallas-based lawyer who advises Medjedovic but isn’t formally involved in the Ontario cases, rejects the false representation argument. “It’s unclear who he made a misrepresentation to,” Lin says. “He set forth lines of code. The code itself is neither true nor false.”





Always useful to know who is playing for the other side.

https://www.databreaches.net/major-cyber-organizations-of-the-russian-intelligence-services/

Major Cyber Organizations of the Russian Intelligence Services

The HHS Office of Information Security (“Securing One HHS”) and the Health Sector Cybersecurity Coordination Center (HC3) have released slides from:

Major Cyber Organizations of the Russian Intelligence Services (pdf, 27 pp) TLP: WHITE, ID# 202205191300 May 19, 2022

• Russian Intelligence Services’ Structure

• Russian Intelligence Services’ Mandates





Is this something a small country (or a US state) could cheerfully ignore?

https://www.cpomagazine.com/cyber-security/could-a-cyber-attack-overthrow-a-government-conti-ransomware-group-now-threatening-to-topple-costa-rican-government-if-ransom-not-paid/

Could a Cyber Attack Overthrow a Government? Conti Ransomware Group Now Threatening To Topple Costa Rican Government if Ransom Not Paid

The spate of ransomware attacks on critical infrastructure companies in 2021 was seen as a major escalation by cyber criminal groups. The Conti ransomware gang appears to be attempting to skip several steps by threatening to overthrow the government of Costa Rica, having established a presence throughout its national agencies.

The threat is almost certainly hollow, but it showcases the boldness with which major ransomware groups are operating even after international law enforcement operations took out previous line-crossers REvil and DarkSide among others.





You can provide all the Privacy and Security features you advertise as long as you don’t really provide all those Privacy and Security features. Encryption is Okay as long as we get copies of the plaintext.

https://www.cpomagazine.com/data-privacy/vpn-providers-ordered-by-indian-government-to-hold-all-customer-data-for-five-years-hand-over-to-government-upon-request/

VPN Providers Ordered by Indian Government To Hold All Customer Data for Five Years, Hand Over to Government Upon Request

Virtual private networks (VPNs) sell themselves on their ability to anonymize traffic and protect user identities from prying eyes. A new order from the Indian government could essentially undermine the business of VPN providers in the country, requiring the personal information of all users to be collected and this profile of customer data to be held for up to five years.

The country’s Computer Emergency Response Team (CERT-In), an office of the Ministry of Electronics and Information Technology tasked with taking point on cybersecurity threats, would also require VPN providers to grant it access to this customer data upon request.





Rethinking war. Why would anyone think that nothing would change?

https://breakingdefense.com/2022/05/ukraine-shows-that-city-hopping-is-the-new-era-of-defensive-warfare/

Ukraine shows that city hopping is the ‘new era’ of defensive warfare

The future of land warfare may not be hordes of missiles raining down on an opposing force, crushing it and giving the attacker the advantage. Instead, the war in Ukraine may demonstrate that the advantage has swung to the defender, who can strike from hiding using tactical weapons in part because of the power of drone surveillance.

Maj. Gen. Scott Winter, commander of Australia’s 1st Division, told the more than 2,000 attendees at AUSA’s Pacific Land Warfare Conference that land warfare now increasingly resembles the island hopping strategy America followed in the Pacific during World War II. Drones create what he called “massive no-man’s lands,” stretching thousands of kilometers. Major attacking forces then get struck by smaller units hiding in urban areas, and suffering losses and disruptions to their crucial supply lines as they move between cities, tracked all the way by unmanned cameras in the sky.



Friday, May 20, 2022

This is not a “Get out of jail free” card for my Ethical Hackers. More a “Stay out of jail, IF ...” card.

https://www.theregister.com/2022/05/20/cfaa_rule_change/

US won’t prosecute ‘good faith’ security researchers under CFAA

The US Justice Department has directed prosecutors not to charge "good-faith security researchers" with violating the Computer Fraud and Abuse Act (CFAA) if their reasons for hacking are ethical — things like bug hunting, responsible vulnerability disclosure, or above-board penetration testing.

Good-faith, according to the policy [PDF], means using a computer "solely for purposes of good-faith testing, investigation, and/or correction of a security flaw or vulnerability."





Illustrating complexity for my students.

https://www.cpomagazine.com/data-protection/data-privacy-conundrum-when-different-states-play-by-different-rules/

Data Privacy Conundrum: When Different States Play by Different Rules…

It’s been less than two and a half years since the California Consumer Privacy Act, also known as CCPA, went into effect, but the influence of that signature legislation is already incalculable. Like the General Data Protection Regulation (GDPR), the European mandate that came before it, this set of wide-ranging regulations has fundamentally changed the conversation on data privacy and reset the clock on what government can and should do to protect consumers’ personal information.

Even CCPA won’t be CCPA much longer—when 2024 arrives, it’ll be CPRA, or the California Privacy Rights Act, which encompasses its predecessor while establishing more stringent measures (and enforcement bodies to make sure they stick). However, there are even bigger changes on the horizon, and they potentially affect every company doing business in every state.





To Bio or not to Bio? (And other interesting questions)

https://fpf.org/blog/when-is-a-biometric-no-longer-a-biometric/

WHEN IS A BIOMETRIC NO LONGER A BIOMETRIC?

In October 2021, the White House Office of Science and Technology Policy (OSTP) published a Request for Information (RFI) regarding uses, harms, and recommendations for biometric technologies. Over 130 entities responded to the RFI, including advocacy organizations, scientists, experts in healthcare, lawyers, and technology companies. While most commenters agreed on core concepts of biometric technologies used to identify or verify identity (with differences in how to address it in policy), there was clear division as to what extent the law should apply to emerging technologies used for physical detection and characterization (such as skin cancer detection or diagnostic tools). These comments reveal that there is no general consensus on what “biometrics” should entail and thus what the applicable scope of law should be.





...and humans shall have the rights AI shall grant them, and no more.

https://www.bespacific.com/human-rights-and-algorithmic-opacity/

Data Privacy, Human Rights, and Algorithmic Opacity

Lu, Sylvia Si-Wei, Data Privacy, Human Rights, and Algorithmic Opacity (May 6, 2022). California Law Review, Vol. 110, 2022 Forthcoming, Available at SSRN: https://ssrn.com/abstract=4004716

Decades ago, it was difficult to imagine a reality in which artificial intelligence (AI) could penetrate every corner of our lives to monitor our innermost selves for commercial interests. Within a few decades, the private sector has seen a wild proliferation of AI systems, many of which are more powerful and penetrating than anticipated. In many cases, machine-learning-based AI systems have become “the power behind the throne,” tracking user activities and making fateful decisions through predictive analysis of personal information. However, machine-learning algorithms can be technically complex and legally claimed as trade secrets, creating an opacity that hinders oversight of AI systems. Accordingly, many AI-based services and products have been found to be invasive, manipulative, and biased, eroding privacy rules and human rights in modern society. The emergence of advanced AI systems thus generates a deeper tension between algorithmic secrecy and data privacy. Yet, in today’s policy debate, algorithmic transparency in a privacy context is an issue that is equally important but managerially disregarded, commercially evasive, and legally unactualized. This Note illustrates how regulators should rethink strategies regarding transparency for privacy protection through the interplay of human rights, disclosure regulations, and whistleblowing systems. It discusses how machine-learning algorithms threaten privacy protection through algorithmic opacity, assesses the effectiveness of the EU’s response to privacy issues raised by opaque AI systems, demonstrates the GDPR’s inadequacy in addressing privacy issues caused by algorithmic opacity, and proposes new algorithmic transparency strategies toward privacy protection, along with a broad array of policy implications and suggested moves.
The analytical results indicate that in a world where algorithmic opacity has become a strategic tool for firms to escape accountability, regulators in the EU, the US, and elsewhere should adopt a human-rights-based approach to impose a social transparency duty on firms deploying high-risk AI techniques.





Perspective.

https://www.techrepublic.com/article/ai-remains-priority-ceos-gartner-survey/

AI remains priority for CEOs, according to new Gartner survey

For the third year running, AI is the top priority for CEOs, according to a survey of CEOs and senior executives released by Gartner on Wednesday.

The survey, “2022 CEO Survey — The Year Perspectives Changed,” gauged the opinions of CEOs and top executives on a range of issues from the workforce to the environment and digitalization. The findings also revealed that the metaverse, which has received a lot of hype in the last year, especially since the rebranding of Facebook to Meta, is not as relevant to business leaders – 63% say that they do not see the metaverse as a key technology for their organization.





For my students.

https://insights.dice.com/2022/05/20/are-there-a-lot-of-artificial-intelligence-a-i-jobs-right-now/

Are There a Lot of Artificial Intelligence (A.I.) Jobs Right Now?

Interested in a career in machine learning and artificial intelligence (A.I.)? Curious about the number of opportunities out there? A new breakdown shows that A.I. remains a highly specialized field with relatively few job openings—but that will almost certainly change in coming years.

CompTIA’s monthly Tech Jobs Report reveals that states with the largest tech hubs—including California, Texas, Washington, and Massachusetts—lead when it comes to A.I.-related job postings.



Thursday, May 19, 2022

It’s called collateral damage. This is not the first time it has happened.

https://www.businessinsider.com/russian-cyberattacks-on-ukraine-may-have-gotten-out-of-hand-2022-5?r=US&IR=T

Cyberattacks quietly launched by Russia before its invasion of Ukraine may have been more damaging than intended

Russian hackers went after a variety of Ukrainian targets in the private and public sectors, but one cyber weapon aimed at a specific military target spilled over and affected tens of thousands of devices outside Ukraine.

A few hours before the Russian invasion began on February 24, Russian hackers launched a cyberweapon against Viasat, an American satellite communications company that has been providing communication services to the Ukrainian military.

Named "AcidRain," the cyberweapon was a kind of malware known as a "wiper" that targeted Viasat modems and routers and erased all their data before permanently disabling them.

However, the Russian hackers appear to have let AcidRain run amok, either not able or not caring to limit the attack to Ukrainian devices.





Interesting language to describe a “research” vessel...

https://www.scmp.com/news/china/science/article/3178382/chinas-world-first-drone-carrier-new-marine-species-using-ai

China’s world-first drone carrier is a new ‘marine species’ using AI for unmanned maritime intelligence

On Wednesday, China launched the world’s first drone carrier capable of operating on its own.

The unmanned ship, which can be controlled remotely and navigate autonomously in open water, will be a powerful tool for the nation to carry out marine scientific research and observation, according to the state-run Science and Technology Daily.

It comes as artificial intelligence plays an increasingly important role in maintaining maritime security, controlling sea lanes and competing for marine resources. China aims to use AI technology to expand its maritime influence.

The wide deck of the ship can carry dozens of unmanned vehicles, including drones, unmanned ships and submersibles, and the equipment will be able to form a network to observe targets, according to the report.

Last year, Zhuhai Yunzhou Intelligence Technology Co, a leading developer of unmanned surface vehicles, announced the company had developed an unmanned high-speed vessel, a breakthrough in its “dynamic cooperation confrontation technology”, according to the state-owned Global Times.

The report said the vessel could quickly intercept, besiege and expel invasive targets and it marked a milestone in the development of unmanned maritime intelligence equipment.





Do lawyers often use such pretexts to extract data? Just asking, because there may be something profitable here…

https://www.pogowasright.org/a-sham-website-chhabria-questions-legitimacy-of-plaintiff-in-subpoena-to-unveil-anonymous-twitter-user/

‘A Sham Website’?: Chhabria Questions Legitimacy of Plaintiff in Subpoena to Unveil Anonymous Twitter User

Meghann M. Cuniff reports:

A federal judge has said he’s ready to quash a subpoena to Twitter over an anonymous user after pressing for more information about the limited liability company behind it, accusing its website of being a “sham” and suggesting its attorney doesn’t want an investigation into the people behind it.
Lawrence Hadley, a Glaser Weil Fink Howard Avchen & Shapiro attorney representing Bayside LLC, told U.S. District Judge Vince Chhabria of the Northern District of California he doesn’t wish to submit further evidence in support of the subpoena, but Chhabria wondered if he can push for it, mentioning his ability to issue sanctions and suggesting he has “an independent duty to explore whether Bayside has abused the judicial process.”

Read more at The Recorder.





Governance or another layer of bureaucracy?

https://www.airforcemag.com/new-pentagon-office-overseeing-data-and-ai-nearing-foc/

New Pentagon Office Overseeing Data and AI Nearing FOC

As the Defense Department looks to accelerate use of artificial intelligence and to connect its sensors and shooters into one massive data network, a new office overseeing those efforts will reach full operating capability in the coming weeks.

The Office of the Chief Data and Artificial Intelligence Officer (CDAO) will reach FOC by June 1, John Sherman, the Pentagon’s chief information officer and acting CDAO, told lawmakers May 18.

In the meantime, those already in the office are working to define its structure. To this point, AI projects across the Pentagon have formed a massive sprawling enterprise—there are more than 600 efforts currently underway, Defense Secretary Lloyd J. Austin III has said—making consolidation a key point.

And it’s not just the larger DOD-wide offices and efforts that need to be coordinated—the services have their own AI ambitions. The Department of the Air Force, in particular, has already named its new chief data and AI officer, Brig. Gen. John M. Olson, and pursued projects to integrate AI into unmanned autonomous aircraft and target identification.

… “Just as when the world came to terms with the horrors of chemical weapons in World War I, and the Geneva Convention was the result, I think this is a second Geneva Convention moment,” said Moulton, who served as an officer in the Marine Corps. “… I get that this basically falls under the State Department. But I don’t think enough people in State appreciate how important this is, and as one of the leaders in our government on the use and employment of AI, I would strongly encourage you to help mount an effort to work on this broader problem.”

Palmieri agreed with Moulton and revealed that DOD is “in the last few weeks of coordination” in developing a strategy for responsible AI.





The IRS is looking at a “face tax?”

https://krebsonsecurity.com/2022/05/senators-urge-ftc-to-probe-id-me-over-selfie-data/

Senators Urge FTC to Probe ID.me Over Selfie Data

Some of the more tech-savvy Democrats in the U.S. Senate are asking the Federal Trade Commission (FTC) to investigate identity-proofing company ID.me for “deceptive statements” the company and its founder allegedly made over how they handle facial recognition data collected on behalf of the Internal Revenue Service, which until recently required anyone seeking a new IRS account online to provide a live video selfie to ID.me.





Resources. Completing my SciFi collection.

https://www.makeuseof.com/best-websites-second-hand-books/

The 5 Best Websites to Buy Second-Hand Books

If you're looking to score a bargain on expanding your book collection, second-hand sites are a good way to go. Here are five of the best.





Perspective.

https://www.bespacific.com/robophobia/

Robophobia

University of Colorado Law Review > Printed > Volume 93 > Issue 1 > Robophobia by Andrew Keane Woods

Robots—machines, algorithms, artificial intelligence—play an increasingly important role in society, often supplementing or even replacing human judgment. Scholars have rightly become concerned with the fairness, accuracy, and humanity of these systems. Indeed, anxiety about machine bias is at a fever pitch. While these concerns are important, they nearly all run in one direction: we worry about robot bias against humans; we rarely worry about human bias against robots. This is a mistake. Not because robots deserve, in some deontological sense, to be treated fairly—although that may be true—but because our bias against nonhuman deciders is bad for us. For example, it would be a mistake to reject self-driving cars merely because they cause a single fatal accident. Yet all too often this is what we do. We tolerate enormous risk from our fellow humans but almost none from machines. A substantial literature—almost entirely ignored by legal scholars concerned with algorithmic bias—suggests that we routinely prefer worse-performing humans over better-performing robots. We do this on our roads, in our courthouses, in our military, and in our hospitals. Our bias against robots is costly, and it will only get more so as robots become more capable. This Article catalogs the many different forms of antirobot bias and suggests some reforms to curtail the harmful effects of that bias. The Article’s descriptive contribution is to develop a taxonomy of robophobia. Its normative contribution is to offer some reasons to be less biased against robots. The stakes could hardly be higher. We are entering an age when one of the most important policy questions will be how and where to deploy machine decision-makers.





Perspective. (Lawyer automation)

https://www.jdsupra.com/legalnews/legal-ai-series-chapter-nine-early-case-9773382/

Legal AI Series [Chapter Nine]: Early Case Assessment Software: AI’s “Inner Eye” to Discovery Processes

Artificial intelligence has given legal professionals an arsenal of tools to help them tackle the challenges of ESI and its unprecedented growth in the modern world. However, so far in our AI Legal Revolution series, the tools we’ve discussed have largely been reactive; solutions that attempt to resolve problems instead of anticipate them.

In other words, an alley-oop to attorneys who are desperately scrambling to play catch up.

Don’t get us wrong, these “catch up” tools are a much-needed boost over some of document review’s biggest hurdles. But what if AI software could do more than just react… what if, instead, it could act?

With early case assessment (ECA) software, attorneys now have the ability to do just that.

Here’s a closer look at the clairvoyant powers of ECA software, and how this technology can be used to improve discovery processes for legal professionals around the globe.



Wednesday, May 18, 2022

Interesting question to ask before implementing an AI system, “Can we explain this to a jury?” (Can your AI Expert explain it?)

https://www.bespacific.com/the-right-to-contest-ai/

The Right to Contest AI

Kaminski, Margot E. and Urban, Jennifer M., The Right to Contest AI (November 16, 2021). Columbia Law Review, Vol. 121, No. 7, 2021, U of Colorado Law Legal Studies Research Paper No. 21-30, Available at SSRN: https://ssrn.com/abstract=3965041

Artificial intelligence (AI) is increasingly used to make important decisions, from university admissions selections to loan determinations to the distribution of COVID-19 vaccines. These uses of AI raise a host of concerns about discrimination, accuracy, fairness, and accountability. In the United States, recent proposals for regulating AI focus largely on ex ante and systemic governance. This Article argues instead—or really, in addition—for an individual right to contest AI decisions, modeled on due process but adapted for the digital age. The European Union, in fact, recognizes such a right, and a growing number of institutions around the world now call for its establishment. This Article argues that despite considerable differences between the United States and other countries, establishing the right to contest AI decisions here would be in keeping with a long tradition of due process theory. This Article then fills a gap in the literature, establishing a theoretical scaffolding for discussing what a right to contest should look like in practice. This Article establishes four contestation archetypes that should serve as the bases of discussions of contestation both for the right to contest AI and in other policy contexts. The contestation archetypes vary along two axes: from contestation rules to standards and from emphasizing procedure to establishing substantive rights. This Article then discusses four processes that illustrate these archetypes in practice, including the first in-depth consideration of the GDPR’s right to contestation for a U.S. audience. Finally, this Article integrates findings from these investigations to develop normative and practical guidance for establishing a right to contest AI.



(Related) The first wave of contests?

https://www.bespacific.com/feds-warn-employers-against-discriminatory-hiring-algorithms/

Feds Warn Employers Against Discriminatory Hiring Algorithms

Wired: As companies increasingly involve AI in their hiring processes, advocates, lawyers, and researchers have continued to sound the alarm. Algorithms have been found to automatically assign job candidates different scores based on arbitrary criteria like whether they wear glasses or a headscarf or have a bookshelf in the background. Hiring algorithms can penalize applicants for having a Black-sounding name, mentioning a women’s college, and even submitting their résumé using certain file types. They can disadvantage people who stutter or have a physical disability that limits their ability to interact with a keyboard. All of this has gone widely unchecked. But now, the US Department of Justice and the Equal Employment Opportunity Commission have offered guidance on what businesses and government agencies must do to ensure their use of AI in hiring complies with the Americans with Disabilities Act. “We cannot let these tools become a high-tech pathway to discrimination,” said EEOC chair Charlotte Burrows in a briefing with reporters on Thursday. The EEOC instructs employers to disclose to applicants not only when algorithmic tools are being used to evaluate them but what traits those algorithms assess. “Today we are sounding an alarm regarding the dangers tied to blind reliance on AI and other technologies that we are seeing increasingly used by employers,” assistant attorney general for civil rights Kristen Clarke told reporters in the same press conference. “Today we are making clear that we must do more to eliminate the barriers faced by people with disabilities, and no doubt: The use of AI is compounding the long-standing discrimination that job seekers with disabilities face.”





Keeping current.

https://www.theregister.com/2022/05/18/fraud_economy_booms/

State of internet crime in Q1 2022: Bot traffic on the rise, and more

The fraud industry, in some respects, grew in the first quarter of the year, with crooks putting more human resources into some attacks while increasingly relying on bots to carry out things like credential stuffing and fake account creation.

That's according to Arkose Labs, which claimed in its latest State of Fraud and Account Security report that one in four online accounts created in Q1 2022 were fake and used for fraud, scams, and the like.





If I can sign in with a photo, can I hack in the same way?

https://www.cnbc.com/2022/05/17/mastercard-launches-tech-that-lets-you-pay-with-your-face-or-hand.html

Mastercard launches tech that lets you pay with your face or hand in stores

Mastercard is piloting new technology that lets shoppers make payments with just their face or hand at the checkout point.

The program has already gone live in five St Marche grocery stores in Sao Paulo, Brazil. Mastercard says it plans to roll it out globally later this year.

To sign up on Mastercard, you take a picture of your face or scan your fingerprint to register it with an app. This is done either on your smartphone or at a payment terminal. You can then add a credit card, which gets linked to your biometric data.
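Mastercard hasn’t published the protocol behind this flow, so the enrollment pattern the article describes can only be sketched. Everything below (the class and method names, storing a hash of the biometric template, and using a random card token in place of the card number) is an illustrative assumption, not Mastercard’s actual design:

```python
import hashlib
import secrets

class BiometricWallet:
    """Toy sketch: link a biometric template to a tokenized card, never storing the raw card number."""

    def __init__(self):
        self._profiles = {}  # hash of biometric template -> card token

    def enroll(self, template: bytes, card_number: str) -> str:
        # Store only a digest of the template, plus a stand-in for a
        # network-issued payment token (the real card number is discarded).
        template_hash = hashlib.sha256(template).hexdigest()
        card_token = secrets.token_hex(16)
        self._profiles[template_hash] = card_token
        return card_token

    def pay(self, template: bytes):
        # At checkout, the captured biometric is looked up against enrolled profiles.
        return self._profiles.get(hashlib.sha256(template).hexdigest())
```

One deliberate simplification matters here: real biometric matching is fuzzy (two captures of the same face never produce identical bytes), so production systems compare templates with a similarity score rather than an exact hash lookup, and that fuzziness is exactly where the “can I hack in with a photo?” question above gets interesting.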





Face it, we still have a lot to learn about the use of faces.

https://www.pogowasright.org/letter-to-the-standing-committee-on-access-to-information-privacy-and-ethics-on-their-study-of-the-use-and-impact-of-facial-recognition-technology/

Letter to the Standing Committee on Access to Information, Privacy and Ethics on their Study of the Use and Impact of Facial Recognition Technology

The Privacy Commissioner of Canada, Daniel Therrien has sent the following letter to the Standing Committee on Access to Information, Privacy and Ethics to provide information requested during his appearance before the Committee on May 2, 2022.

[…]
Recommended legal framework for police use of facial recognition technology
During the appearance, I undertook to provide the committee with a copy of our Recommended legal framework for police agencies’ use of facial recognition, which was issued jointly by Federal, Provincial and Territorial Privacy Commissioners on May 2, 2022. Our recommended framework sets out our views on changes needed to ensure appropriate regulation of police use of facial recognition technology (FRT) in Canada. A future framework should, we believe, establish clearly and explicitly the circumstances in which police use of FRT is acceptable – and when it is not. It should include privacy protections that are specific to FRT use, and it should ensure appropriate oversight when the technology is deployed. While developed specifically for the policing context, there are many elements of our proposed framework that could be leveraged beyond this context.
Best practices for FRT regulation
The committee requested that I provide examples of best practices for regulating FRT from jurisdictions where regulatory frameworks have been enacted or proposed. Several international jurisdictions have enacted or proposed regulatory frameworks for FRT specifically, or biometrics more broadly that would also apply to FRT, which could inspire Canada’s approach. In particular, I would draw your attention to a number of notable measures worthy of consideration:

Read the full letter at the Office of the Privacy Commissioner of Canada.





My AI says, “Probably not so.”

https://finance.yahoo.com/news/game-over-google-deepmind-says-133304193.html

‘The Game is Over’: Google’s DeepMind says it is on the verge of achieving human-level AI

Human-level artificial intelligence is close to finally being achieved, according to a lead researcher at Google’s DeepMind AI division.

Dr Nando de Freitas said “the game is over” in the decades-long quest to realise artificial general intelligence (AGI) after DeepMind unveiled an AI system capable of completing a wide range of complex tasks, from stacking blocks to writing poetry.

Described as a “generalist agent”, DeepMind’s new Gato AI needs to just be scaled up in order to create an AI capable of rivalling human intelligence, Dr de Freitas said.

Responding to an opinion piece written in The Next Web that claimed “humans will never achieve AGI”, DeepMind’s research director wrote that it was his opinion that such an outcome is an inevitability.

“It’s all about scale now! The Game is Over!” he wrote on Twitter.

“It’s all about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, innovative data, on/offline... Solving these challenges is what will deliver AGI.”

When asked by machine learning researcher Alex Dimikas how far he believed the Gato AI was from passing a real Turing test – a measure of computer intelligence that requires a human to be unable to distinguish a machine from another human – Dr de Freitas replied: “Far still.”





Closer to safe self-driving cars or a new way to reduce employee headcount?

https://www.cnbc.com/2022/05/17/argo-ai-robotaxis-ditch-human-safety-drivers-in-miami-and-austin.html

Ford-backed robotaxi start-up Argo AI is ditching its human safety drivers in Miami and Austin

Robotaxi start-up Argo AI said Tuesday it has begun operating its autonomous test vehicles without human safety drivers in two U.S. cities — Miami and Austin, Texas — a major milestone for the Ford- and Volkswagen-backed company.

For now, those driverless vehicles won't be carrying paying customers. But they will be operating in daylight, during business hours, in dense urban neighborhoods, shuttling Argo AI employees who can summon the vehicles via a test app.





After that first ethical question…

https://news.mit.edu/2022/living-better-algorithms-sarah-cen-0518

Living better with algorithms

Laboratory for Information and Decision Systems (LIDS) student Sarah Cen remembers the lecture that sent her down the track to an upstream question.

At a talk on ethical artificial intelligence, the speaker brought up a variation on the famous trolley problem, which outlines a philosophical choice between two undesirable outcomes.

The speaker’s scenario: Say a self-driving car is traveling down a narrow alley with an elderly woman walking on one side and a small child on the other, and no way to thread between both without a fatality. Who should the car hit?

Then the speaker said: Let’s take a step back. Is this the question we should even be asking?

That’s when things clicked for Cen. Instead of considering the point of impact, a self-driving car could have avoided choosing between two bad outcomes by making a decision earlier on — the speaker pointed out that, when entering the alley, the car could have determined that the space was narrow and slowed to a speed that would keep everyone safe.

… . In one such project, Cen studies options for regulating social media. Her recent work provides a method for translating human-readable regulations into implementable audits.

To get a sense of what this means, suppose that regulators require that any public health content — for example, on vaccines — not be vastly different for politically left- and right-leaning users. How should auditors check that a social media platform complies with this regulation? Can a platform be made to comply with the regulation without damaging its bottom line? And how does compliance affect the actual content that users do see?

Designing an auditing procedure is difficult in large part because there are so many stakeholders when it comes to social media. Auditors have to inspect the algorithm without accessing sensitive user data. They also have to work around tricky trade secrets, which can prevent them from getting a close look at the very algorithm that they are auditing because these algorithms are legally protected. Other considerations come into play as well, such as balancing the removal of misinformation with the protection of free speech.
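As a toy illustration of what an “implementable audit” might look like, the parity requirement above (public-health content not “vastly different” for left- and right-leaning users) can be operationalized as a distance between the content distributions served to the two groups. The metric choice (total variation distance) and the threshold are assumptions for illustration, not Cen’s actual method:

```python
from collections import Counter

def total_variation(feed_a: list, feed_b: list) -> float:
    """Total variation distance between the item-frequency distributions of two feeds."""
    ca, cb = Counter(feed_a), Counter(feed_b)
    items = set(ca) | set(cb)
    return 0.5 * sum(abs(ca[i] / len(feed_a) - cb[i] / len(feed_b)) for i in items)

def passes_audit(feed_left: list, feed_right: list, threshold: float = 0.2) -> bool:
    # Flag the platform if the two groups' recommended-content mixes
    # differ by more than the (assumed) regulatory threshold.
    return total_variation(feed_left, feed_right) <= threshold
```

An auditor running a check like this needs only the lists of recommended item IDs for sampled users in each group, not the platform’s algorithm or sensitive user data, which matches the constraint described above.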





Would you merge your boss’s face with Gandhi or Donald Trump?

https://www.makeuseof.com/tag/morphthing/

How to Morph Faces Online and Create Face Merges With MorphThing

You can have a lot of fun with face mashup tools. Here are some ways to morph two faces online and share them with friends.





Tools & Techniques. I have some spare time, perhaps I’ll write a symphony…

https://www.makeuseof.com/best-tools-write-musical-notation/

The 4 Best Online Tools to Write Musical Notation