Saturday, May 22, 2021

Remember, it’s ‘security researchers’ not hackers.

https://www.makeuseof.com/browser-extensions-security-researchers/

10 Browser Extensions for Security Researchers

Browser extensions make a lot of things easier. They aren't limited to general browsing; they can also come in handy for cybersecurity professionals.

They save security researchers time when quickly analyzing a website or online service, whether the goal is spotting potential security issues or simply doing a background check.

Here are some of the best browser extensions that cybersecurity researchers, ethical hackers, or penetration testers find useful. Even if you are not one, you can still use these extensions to find out more information about the websites you visit.





Could an AI be tried by a jury of its peers? Will AI law be identical to human law?

https://fpf.org/blog/south-korea-the-first-case-where-the-personal-information-protection-act-was-applied-to-an-ai-system/

SOUTH KOREA: THE FIRST CASE WHERE THE PERSONAL INFORMATION PROTECTION ACT WAS APPLIED TO AN AI SYSTEM

As AI regulation is being considered in the European Union, privacy commissioners and data protection authorities around the world are starting to apply existing comprehensive data protection laws to AI systems and how they process personal information. On April 28th, the South Korean Personal Information Protection Commission (PIPC) imposed sanctions and a fine of KRW 103.3 million (USD 92,900) on ScatterLab, Inc., developer of the chatbot “Iruda,” for eight violations of the Personal Information Protection Act (PIPA). This is the first time the PIPC has sanctioned an AI technology company for indiscriminate personal information processing.

“Iruda” caused considerable controversy in South Korea in early January after complaints that the chatbot used vulgar and discriminatory racist, homophobic, and ableist language in conversations with users. The chatbot, which assumed the persona of a 20-year-old college student named “Iruda” (Lee Luda), attracted more than 750,000 users on Facebook Messenger less than a month after release. The media reports prompted the PIPC to launch an official investigation on January 12th, soliciting input from industry, law, academia, and civil society groups on personal information processing and on legal and technical perspectives on AI development and services.





This seems to be a rather simplified response, even for Harvard.

https://hbr.org/2021/05/5-rules-to-manage-ais-unintended-consequences

5 Rules to Manage AI’s Unintended Consequences

Summary: Companies are increasingly using “reinforcement-learning agents,” a type of AI that rapidly improves through trial and error as it single-mindedly pursues its goal, often with unintended and even dangerous consequences. The weaponization of polarizing content on social media platforms is an extreme example of what can happen when RL agents aren’t properly constrained. To prevent their RL agents from causing harm, leaders should abide by five rules as they integrate this AI into their strategy execution.
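
To make that "single-minded pursuit" concrete, here is a minimal, purely illustrative sketch (not from the HBR piece, and not any real platform's system): an epsilon-greedy bandit agent that learns by trial and error which post type yields the most engagement. The action names and reward numbers are hypothetical; the point is that the unconstrained agent converges on the highest-paying action, even a harmful one, unless a rule explicitly blocks it.

import random

ACTIONS = ["neutral_post", "helpful_post", "polarizing_post"]
# Hypothetical average engagement per action; polarizing content pays the most.
TRUE_REWARD = {"neutral_post": 0.3, "helpful_post": 0.5, "polarizing_post": 0.9}
BLOCKED = {"polarizing_post"}  # the "rule": actions the agent may never take

def run(steps=5000, epsilon=0.1, constrained=False):
    totals = {a: 0.0 for a in ACTIONS}   # cumulative reward per action
    counts = {a: 0 for a in ACTIONS}     # times each action was tried
    allowed = [a for a in ACTIONS if not (constrained and a in BLOCKED)]
    for _ in range(steps):
        if random.random() < epsilon:
            action = random.choice(allowed)  # explore
        else:                                # exploit the best average so far
            action = max(allowed, key=lambda a: totals[a] / counts[a] if counts[a] else 0.0)
        reward = TRUE_REWARD[action] + random.gauss(0, 0.1)  # noisy engagement signal
        totals[action] += reward
        counts[action] += 1
    return counts

print("unconstrained:", run())                  # concentrates on polarizing_post
print("constrained:  ", run(constrained=True))  # never touches it

Run both ways, the unconstrained agent ends up choosing the polarizing action for the vast majority of steps, while the constrained run never does. The constraint is enforced outside the reward signal rather than trusted to emerge from it, which is essentially what "properly constrained" means in the summary above.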





Even if I can leave the house, I may want to read.

https://www.makeuseof.com/curators-to-find-the-best-articles-worth-reading-on-the-internet/

5 Curators to Find the Best Articles Worth Reading on the Internet

Want to see the pick of the best writing worth reading on the web? Follow these curators who recommend only the best articles.


