I assume it is all about the assumptions programmed in.
Understanding the errors introduced by military AI applications
On March 22, 2003, two days into the U.S.-led invasion of Iraq, American troops fired a Patriot interceptor missile at what they assumed was an Iraqi anti-radiation missile designed to destroy air-defense systems. Acting on the recommendation of their computer-powered weapon, the Americans fired in self-defense, believing they were shooting down a missile aimed at their outpost. What the Patriot missile system had identified as an incoming missile was in fact a UK Tornado fighter jet, and when the Patriot struck the aircraft, it instantly killed the two crew members on board. The deaths were the first losses suffered by the Royal Air Force in the war and the tragic result of friendly fire.
A subsequent RAF Board of Inquiry investigation concluded that the shoot-down resulted from a combination of factors: how the Patriot missile classified targets, the rules for firing the missiles, the autonomous operation of Patriot missile batteries, and several other technical and procedural factors, such as the Tornado not broadcasting its identification friend or foe (IFF) signal at the time of the friendly fire. The destruction of Tornado ZG710, the report concluded, was a tragic error enabled by the missile’s computer routines.
Another opportunity for AI-generated errors.
5 ways to address regulations around AI-enabled hiring and employment
In November, the New York City Council passed the first bill in the U.S. to broadly address the use of AI in hiring and employment. It requires hiring vendors to conduct annual bias audits of the artificial intelligence (AI) used in the city’s hiring processes and tools.
But that was just the beginning of proposed regulations on the use of AI tools in employment. The European Commission recently drafted proposals that would protect gig workers from AI-enabled monitoring. And this past April, California introduced the Workplace Technology Accountability Act, or Assembly Bill 1651, which would require that employees be notified before data is collected, monitoring tools are used, or algorithms are deployed, and would give them the right to review and correct collected data. It would limit monitoring technologies to job-related use cases and valid business practices and would require employers to conduct impact assessments of their use of algorithms and data collection.
This kind of legislation around the use of AI in hiring and employment is becoming more common, Beena Ammanath, executive director of the Global Deloitte AI Institute, told VentureBeat. The question is, what should HR departments and technical decision-makers be thinking about as AI regulation evolves?
Interesting, but I don’t think it will spread as completely as they hope.
How Apple, Google, and Microsoft will kill passwords and phishing in one stroke
… The program that Apple, Google, and Microsoft are rolling out will finally organize the current disarray of MFA services in some significant ways. Once it’s fully implemented, I’ll be able to use my iPhone to store a single token that will authenticate me on any of those three companies' services (and, one expects, many more follow-on services). The same credential can also be stored on a device running Android or Windows.
By presenting a facial scan or fingerprint to the device, I’ll be able to log in without having to type a password, which is faster and much more convenient. Equally important, the credential can be stored online so that it’s available when I replace or lose my current phone, solving another problem that has plagued some MFA users: the risk of being locked out of accounts when phones are lost or stolen. The recovery process works by using an already authenticated device to download the credential, with no password required.
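Passwordless schemes like the one described are typically built on public-key challenge-response: the device holds a private key that a biometric check unlocks, the server stores only the matching public key, and login means signing a fresh random challenge. The sketch below illustrates that core idea in Go using stdlib Ed25519; it is a simplified illustration of my own, not the actual FIDO/WebAuthn wire protocol these companies use, and it simulates enrollment, challenge, and verification all in one process.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

func main() {
	// Enrollment: the device creates a key pair and sends only the
	// public key to the server. (On a real device, the private key
	// stays in secure hardware and is unlocked by a fingerprint or
	// facial scan rather than a typed password.)
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	// Login: the server issues a fresh random challenge, so a
	// captured response can't be replayed later.
	challenge := make([]byte, 32)
	if _, err := rand.Read(challenge); err != nil {
		panic(err)
	}

	// The device signs the challenge after the user passes the
	// biometric check; no shared secret is ever transmitted.
	signature := ed25519.Sign(priv, challenge)

	// The server checks the signature against the enrolled public key.
	if ed25519.Verify(pub, challenge, signature) {
		fmt.Println("login accepted")
	} else {
		fmt.Println("login rejected")
	}
}
```

Because the server keeps only a public key, a database breach leaks nothing a phisher can use, which is the property that makes this approach resistant to the attacks passwords invite.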
Tools & Techniques.
https://www.makeuseof.com/tag/how-to-trace-your-emails-back-to-the-source/
How to Trace Emails Back to Their Source IP Address