Imagine hackers taking control of a police robot…
Are robots too insecure for lethal use by law enforcement?
In late November, the San Francisco Board of Supervisors voted 8-3 to give the police the option to launch potentially lethal, remote-controlled robots in emergencies, creating an international outcry over law enforcement use of “killer robots.” The San Francisco Police Department (SFPD), which was behind the proposal, said it would deploy robots equipped with explosive charges “to contact, incapacitate, or disorient violent, armed, or dangerous suspects” only when lives are at stake.
Missing from the mounds of media coverage is any mention of how digitally secure the lethal robots would be — whether an unpatched vulnerability or a malicious threat actor could interfere with the machine’s functioning, no matter how skilled the robot operator, with tragic consequences. Experts caution that robots are frequently insecure and subject to exploitation and, for those reasons alone, should not be used with the intent to harm human beings.
Any reason to suspect that the ‘bad guys’ will comply?
China bans AI-generated media without watermarks
China's Cyberspace Administration recently issued regulations prohibiting the creation of AI-generated media without clear labels, such as watermarks—among other policies—reports The Register. The new rules come as part of China's evolving response to the generative AI trend that has swept the tech world in 2022, and they will take effect on January 10, 2023.
(Related)
https://techcrunch.com/2022/12/10/openais-attempts-to-watermark-ai-text-hit-limits/
OpenAI’s attempts to watermark AI text hit limits
Did a human write that, or ChatGPT? It can be hard to tell — perhaps too hard, its creator OpenAI thinks, which is why it is working on a way to “watermark” AI-generated content.
In a lecture at the University of Texas at Austin, computer science professor Scott Aaronson, currently a guest researcher at OpenAI, revealed that OpenAI is developing a tool for “statistically watermarking the outputs of a text [AI system].” Whenever a system — say, ChatGPT — generates text, the tool would embed an “unnoticeable secret signal” indicating where the text came from.
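OpenAI has not published how its tool works, but the general idea of statistical watermarking can be sketched with a toy scheme (all names and parameters below are hypothetical, not OpenAI’s): a secret key pseudorandomly splits the vocabulary into a “green list” at each step, generation is nudged toward green tokens, and a detector holding the same key checks whether green tokens occur far more often than chance would predict.

```python
import hashlib
import random

SECRET_KEY = b"demo-key"  # hypothetical shared secret, known only to the watermarker
VOCAB = [f"tok{i}" for i in range(100)]  # toy vocabulary standing in for a real tokenizer

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign roughly half the vocabulary to a 'green list'
    that depends on the secret key and the previous token."""
    digest = hashlib.sha256(SECRET_KEY + prev_token.encode() + token.encode()).digest()
    return digest[0] % 2 == 0

def generate(length: int, seed: int = 0) -> list[str]:
    """Toy 'model': sample candidate tokens, preferring green ones.
    A real system would bias the model's probability distribution instead."""
    rng = random.Random(seed)
    out = ["<s>"]
    for _ in range(length):
        candidates = rng.sample(VOCAB, 5)  # stand-in for the model's top-k choices
        green = [t for t in candidates if is_green(out[-1], t)]
        out.append(green[0] if green else candidates[0])
    return out[1:]

def green_fraction(tokens: list[str]) -> float:
    """Detector: with the key, count what fraction of tokens are green."""
    prev, hits = "<s>", 0
    for t in tokens:
        hits += is_green(prev, t)
        prev = t
    return hits / len(tokens)

text = generate(200)
print(green_fraction(text))  # well above the ~0.5 expected for unwatermarked text
```

Without the key, the output is statistically indistinguishable from ordinary text, which is why the signal is “unnoticeable”; with the key, a green fraction far above one half marks the text as machine-generated. As the TechCrunch piece notes, the limits are real: paraphrasing or editing the text dilutes the signal.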
More “we don’t need lawyers” tech.
https://techcrunch.com/2022/12/12/digip/
Digip digitizes the process of applying for trademarks
For businesses, protecting trademarks is often a lengthy and expensive process, especially if they have multiple brands. Digip digitizes much of the process, helping its customers file trademarks by themselves instead of going to law firms.
… To file trademarks, businesses usually ask a lawyer to conduct trademark searches. They are billed per search, which adds up quickly if a business has multiple brands to trademark. Then they have to pay a lawyer to file the trademark applications. But the process doesn’t end there: businesses also have to monitor their trademarks in the markets where they own them, and that is yet another charge.
Digip combines all these steps into one online workflow. Instead of charging for different parts of the process, its customers pay a flat monthly or yearly subscription fee, plus application fees charged by trademark offices.
Tools & Techniques. It’s not just for teachers…
https://www.freetech4teachers.com/2022/12/get-your-free-copy-of-2022-23-practical.html
Get Your Free Copy of The 2022-23 Practical Ed Tech Handbook
If you didn't get your copy earlier this school year, The Practical Ed Tech Handbook is now available for free to anyone who is subscribed to The Practical Ed Tech Newsletter or who registers for it here.