Saturday, June 04, 2022

An important question for my Ethical Hackers…

https://krebsonsecurity.com/2022/06/what-counts-as-good-faith-security-research/

What Counts as “Good Faith Security Research?”

The U.S. Department of Justice (DOJ) recently revised its policy on charging violations of the Computer Fraud and Abuse Act (CFAA), a 1986 law that remains the primary statute by which federal prosecutors pursue cybercrime cases. The new guidelines state that prosecutors should avoid charging security researchers who operate in “good faith” when finding and reporting vulnerabilities. But legal experts continue to advise researchers to proceed with caution, noting the new guidelines can’t be used as a defense in court, nor are they any kind of shield against civil liability.





If it could potentially be a solution, do we have an obligation to try? (We can, therefore we must)

https://www.protocol.com/policy/axon-taser-drone-ethics

How Axon's plans for Taser drones blindsided its AI ethics board

Late Tuesday night, NYU law professor Barry Friedman called an emergency Zoom meeting with members of the AI ethics board for Taser-maker Axon.

Just a few weeks before, the board — which includes academics, civil liberties advocates and two former chiefs of police — had voted against a proposal by Axon to develop Taser-equipped drones and run a limited pilot program with law enforcement. The board had been mulling the possibility of such a pilot for about a year, according to Friedman; ultimately, a majority of the board decided the risks outweighed the benefits.

But on Tuesday, an email landed in Friedman’s inbox from an Axon employee, alerting him that the company was forging ahead with the plan anyway. Not only was Axon going to develop Taser drones, it planned to pitch them as an answer to school shootings, in the wake of the Uvalde tragedy.





Will the police also publish a database of “normal environment” images, so parents can learn not to publish ‘exploitative’ pictures like a child taking a bath?

https://www.theregister.com/2022/06/03/police_australia_ai/

Police want your happy childhood pictures to train AI to detect child abuse

Australia's federal police and Monash University are asking netizens to send in snaps of their younger selves to train a machine-learning algorithm to spot child abuse in photographs.

Researchers are looking to collect images of people aged 17 and under in safe scenarios; they don't want any nudity, even if it's a relatively innocuous picture like a child taking a bath. The crowdsourcing campaign, dubbed My Pictures Matter, is open to those aged 18 and above, who can consent to having their photographs used for research purposes.

All the images will be amassed into a dataset in an attempt to train an AI model to tell the difference between a minor in a normal environment and an exploitative, unsafe situation. The software could, in theory, help law enforcement better automatically and rapidly pinpoint child sex abuse material (aka CSAM) in among thousands upon thousands of photographs under investigation, avoiding having human analysts inspect every single snap.
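In machine-learning terms, that is a binary image-classification task. Below is a minimal sketch of what training such a model might look like, assuming a standard transfer-learning setup with torchvision and an ImageFolder directory split into safe/ and unsafe/ subfolders; the model choice, layout, and hyperparameters are illustrative assumptions, not details of the actual AFP/Monash project.

```python
# Minimal transfer-learning sketch for a binary "safe vs. unsafe context"
# image classifier. Folder layout, model, and hyperparameters are assumptions
# for illustration, not details of the AFP/Monash system.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical dataset directory with two subfolders: safe/ and unsafe/
train_data = datasets.ImageFolder("training_images/", transform=transform)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone; replace the head with 2 classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```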





Probably important. What could possibly go wrong?

https://www.theverge.com/2022/6/3/23153504/right-to-repair-new-york-state-law-ifixit-repairability-diy?scrolla=5eb6d68b7fedc32c19ef33b4

New York state passes first-ever ‘right to repair’ law for electronics

The New York state legislature has passed the United States’ first “right to repair” bill covering electronics. Called the Fair Repair Act, the measure would require all manufacturers who sell “digital electronic products” within state borders to make tools, parts, and instructions for repair available to both consumers and independent shops.





My AI is following this closely. It thinks Thaler is the next Ruth Bader Ginsburg.

https://news.bloomberglaw.com/ip-law/artificial-intelligence-can-be-copyright-author-lawsuit-alleges

Artificial Intelligence Can Be Copyright Author, Suit Says

An artificial intelligence could be the proud author of copyrighted material if its creator emerges victorious in a lawsuit against the US Copyright Office.

Stephen Thaler, the president and CEO of Imagination Engines, sued the Copyright Office on Thursday, following the agency’s denial of Thaler’s copyright registration application on the basis that the work created by the inventor’s AI “lacks the human authorship necessary to support a copyright claim.”

It’s the latest lawsuit filed by Thaler, who has sought to secure AI intellectual property rights around the world, so far with limited success. On Monday, he will argue before the US Court of Appeals for the Federal Circuit that inventors on patents do not need to be human.

“My interest is the definition of what a person is,” Thaler said in an interview with Bloomberg Law. “What I’m building, what many will argue, is sentient machine intelligence. So maybe expansion to the term sentient organism would be in order.”





We’re no longer “shut in,” but perhaps save these for a rainy day?

https://www.makeuseof.com/discover-free-documentaries-to-stream-online/

5 More Websites to Discover Free Documentaries to Stream Online



Friday, June 03, 2022

Perspective. Recycling, ready or not.

https://www.bespacific.com/theres-an-army-of-thieves-coming-for-your-catalytic-converter/

There’s an Army of Thieves Coming for Your Catalytic Converter

Popular Mechanics – One of the largest crime waves in recent years could cost you thousands: “…According to the Universal Technical Institute, there are typically 3 to 7 grams of platinum, 2 to 7 grams of palladium, and 1 to 2 grams of rhodium in the standard converter.

While a variety of industries, including computer hardware and medical tools, use PGMs like palladium, the automotive industry uses over 80 percent of the global supply of palladium each year. This intense demand exceeds what mining can supply, so the autocat recycling industry steps up to supplement the rest. More than 90 percent of the PGMs in an old catalytic converter can be recovered, and industry estimates suggest that recycled PGMs account for 40 to 50 percent of the annual supply. Recyclers typically buy used catalytic converters in thousand-pound lots from scrap yards, or take them from the junked cars they buy by the thousands, Froneman says, noting his company processes up to 24 tons of old catalytic converters per day. Used autocats are “decanned” with hydraulic shears, then the PGM-coated honeycomb substrate is removed, pulverized, smelted, and refined. From there, the PGMs are sold to original equipment manufacturers that build new catalytic converters for auto manufacturers. Whereas mining yields around 16 grams of PGMs per ton of ore, recycling yields up to 2300 grams per ton of old catalysts, Froneman says—a lucrative industry for miners, refiners, and thieves…”
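Taking the article's numbers at face value, the economics are easy to verify; this back-of-the-envelope script only restates the figures quoted above.

```python
# Back-of-the-envelope check using only the figures quoted above.
MINING_YIELD_G_PER_TON = 16        # grams of PGMs per ton of mined ore
RECYCLING_YIELD_G_PER_TON = 2300   # grams of PGMs per ton of old catalysts
DAILY_THROUGHPUT_TONS = 24         # converters processed per day (Froneman)

advantage = RECYCLING_YIELD_G_PER_TON / MINING_YIELD_G_PER_TON
print(f"Recycling yields ~{advantage:.0f}x more PGMs per ton than mining")
# -> ~144x

daily_kg = RECYCLING_YIELD_G_PER_TON * DAILY_THROUGHPUT_TONS / 1000
print(f"At 24 tons/day, that is ~{daily_kg:.1f} kg of PGMs recovered daily")
# -> ~55.2 kg
```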





Not surprising to find the pendulum swinging yet again. Will they now translate that to a more general acceptance of surveillance?

https://www.bespacific.com/the-supreme-court-is-building-its-own-surveillance-state/

The Supreme Court Is Building Its Own Surveillance State

Wired – Searching clerks’ phones to find out who leaked the Dobbs opinion sets a dangerous precedent of exploiting digital rights. “Following the leak of a draft opinion striking down abortion rights, the Supreme Court’s police force (the Marshal’s Office) launched an unprecedented probe to uncover who leaked the decision. Already, authorities have demanded phone records, signed affidavits, and law clerks’ devices. The scrutiny is so intense that many onlookers have suggested that clerks retain attorneys to protect their rights. While it’s unclear how broad the cellphone searches are, or the exact language of clerks’ affidavits, the intrusive probe reveals a disturbing about-face from the Supreme Court, and particularly Chief Justice John Roberts, on surveillance powers…”





I’ll take guidance anywhere I can find it.

https://www.natlawreview.com/article/eeoc-issues-guidance-ada-employer-use-ai-screening-tools

EEOC Issues Guidance on the ADA & Employer Use of AI Screening Tools

The Equal Employment Opportunity Commission (EEOC) is one of the federal agencies that is keenly aware of and has raised concerns about the potential of automated technologies to perpetuate the biases that humans may intentionally or unintentionally inject into hiring and employment decisions. In October 2021, the EEOC launched the Artificial Intelligence and Algorithmic Fairness Initiative in response to the increased use of AI in workforce management systems and hiring processes. According to the EEOC, the purpose of the initiative is “to ensure that the use of software, including [AI], machine learning and other emerging technologies used in hiring and other employment decisions comply with the federal civil rights laws that the EEOC enforces.”

Consistent with that initiative, on May 12, 2022, the EEOC issued its first guidance addressing the use of AI in making employment decisions. The guidance, entitled “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees,” aims to ensure that AI and other automated technologies do not discriminate against or disadvantage job applicants and employees with disabilities.





Baby steps. Drive slow when the streets are less crowded.

https://techcrunch.com/2022/06/02/cruise-can-finally-charge-for-driverless-robotaxi-rides-in-san-francisco/

Cruise can finally charge for driverless robotaxi rides in San Francisco

Cruise, the autonomous vehicle unit of General Motors, has finally been given the green light to start charging fares for its driverless robotaxi service in San Francisco.

Cruise will be operating its passenger service at a maximum speed of 30 miles per hour between 10 p.m. and 6 a.m. on select streets in San Francisco, an hour and a half longer than its current service window.





Tools & Techniques. Highly recommended. (I use Feedly.) Great for those sites that only publish occasionally. Never miss a post.

https://www.makeuseof.com/best-free-rss-readers/

The 4 Best Free RSS Readers

If you're looking to keep on top of online news, RSS readers are a great tool for doing so. Here are four of the best free options you can use.

If you spend a lot of time browsing the internet, then you no doubt understand that there are simply too many websites out there to check on regularly. RSS readers can help solve this problem by condensing your online browsing into one feed.
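For the curious, the "one feed" idea is simple enough to sketch in a few lines of Python using the feedparser library; the feed URLs here are just examples.

```python
# Tiny aggregator: pull several feeds into one reverse-chronological list.
# Requires `pip install feedparser`; the feed URLs are just examples.
import time
import feedparser

FEEDS = [
    "https://krebsonsecurity.com/feed/",
    "https://www.schneier.com/feed/atom/",
]

items = []
for url in FEEDS:
    feed = feedparser.parse(url)
    source = feed.feed.get("title", url)
    for entry in feed.entries:
        when = entry.get("published_parsed") or time.gmtime(0)  # undated -> epoch
        items.append((when, source, entry.get("title", "(untitled)"),
                      entry.get("link", "")))

# One combined feed, newest first -- the whole point of an RSS reader.
for when, source, title, link in sorted(items, reverse=True):
    print(f"{time.strftime('%Y-%m-%d', when)}  {source}: {title}\n    {link}")
```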



Thursday, June 02, 2022

You have to figure that Russia knew we were doing this. I wonder if it started only after they invaded Ukraine?

https://news.sky.com/story/us-military-hackers-conducting-offensive-operations-in-support-of-ukraine-says-head-of-cyber-command-12625139

US military hackers conducting offensive operations in support of Ukraine, says head of Cyber Command

In an exclusive interview with Sky News, General Paul Nakasone confirmed for the first time that the US had "conducted a series of operations" in response to Russia's invasion of Ukraine.

In an exclusive interview, General Paul Nakasone also explained how separate "hunt forward" operations were allowing the United States to search out foreign hackers and identify their tools before they were used against America.

Speaking in Tallinn, Estonia, the general, who is also director of the National Security Agency (NSA), told Sky News that he is concerned "every single day" about the risk of a Russian cyber attack targeting the US and said that the hunt forward activities were an effective way of protecting both America as well as allies.

The four-star general did not detail the activities, but explained how they were lawful, conducted with complete civilian oversight of the military and through policy decided at the Department of Defence.

"My job is to provide a series of options to the secretary of defence and the president, and so that's what I do," he said. He declined to describe those options.

General Nakasone had delivered a keynote speech at CyCon, an international conference on cyber conflict, hosted by NATO's Cooperative Cyber Defence Centre of Excellence in Tallinn, and praised the partnerships between democratic states as a key strategic benefit.





Ransom aside, why would Iran target a hospital?

https://apnews.com/article/russia-ukraine-technology-health-middle-east-e4f8e7145e4b4447a331d4b0cc5a5bd3

Wray: FBI blocked planned cyberattack on children’s hospital

The FBI thwarted a planned cyberattack on a children’s hospital in Boston that was to have been carried out by hackers sponsored by the Iranian government, FBI Director Christopher Wray said Wednesday.

He did not ascribe a particular motive to the planned attack on the hospital, but he noted that Iran and other countries have been hiring cyber mercenaries to conduct attacks on their behalf. In addition, the health care and public health sector is classified by the U.S. government as one of 16 critical infrastructure sectors, and health care providers such as hospitals are seen as ripe targets for hackers.





“Sounds good” is not sufficient justification.

https://www.theregister.com/2022/06/02/eu_child_protection/

Dear Europe, once again here are the reasons why scanning devices for unlawful files is not going to fly

While Apple has, temporarily at least, backed away from last year's plan to run client-side scanning (CSS) software on customers' iPhones to detect and report child sexual abuse material (CSAM) to authorities, European officials in May proposed rules to protect children that involve the same highly criticized approach.

The European Commission has suggested several ways to deal with child abuse imagery, including scanning online private communication and breaking encryption. It has done so undeterred by a paper penned last October by 14 prominent computer scientists and security experts dismissing CSS as a source of serious security and privacy risks.

In response, a trio of academics aims to convey just how ineffective and rights-violating CSS would be to those who missed the memo the first time around. And the last time, and the time before that.

In an arXiv paper titled "YASM (Yet Another Surveillance Mechanism)," Kaspar Rosager Ludvigsen and Shishir Nagaraja, of the University of Strathclyde, and Angela Daly, of the Leverhulme Research Center for Forensic Science and Dundee Law School, in Scotland, revisit CSS as a way to ferret out CSAM and conclude the technology is both ineffective and unjustified.
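For readers who missed those memos too: mechanically, client-side scanning means the device hashes local content and compares it against an opaque database of known-illegal material. The toy sketch below uses a cryptographic hash (real proposals use perceptual hashes such as PhotoDNA or NeuralHash, which survive re-encoding) and a placeholder blocklist; it only shows why critics focus on the database itself, since whoever controls the hash list controls what gets reported.

```python
# Toy illustration of the client-side scanning flow. Real proposals use
# perceptual hashes (e.g., PhotoDNA, NeuralHash); SHA-256 here only shows
# the matching logic. The blocklist entry is a placeholder.
import hashlib
from pathlib import Path

KNOWN_BAD_HASHES = {
    "0000...placeholder...",   # opaque blocklist pushed to every device
}

def scan(directory: str) -> list[str]:
    """Hash every image in the directory and flag matches against the blocklist."""
    flagged = []
    for path in Path(directory).rglob("*.jpg"):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in KNOWN_BAD_HASHES:
            flagged.append(str(path))   # a real CSS system would report these
    return flagged
```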





I wonder if this is really an “unintended” consequence?

https://economictimes.indiatimes.com/tech/technology/expressvpn-rejects-cert-in-directives-suspends-india-ops/articleshow/91956961.cms

ExpressVPN rejects CERT-In directives, removes its India servers

Virtual private network (VPN) operator ExpressVPN is pulling its servers out of India, citing the impossibility of complying with the country's upcoming mandate to record users' names and activities.

"With a recent data law introduced in India requiring all VPN providers to store user information for at least five years, ExpressVPN has made the very straightforward decision to remove our Indian-based VPN servers," the company said in a blog post.

… The new data law proposed by India's Computer Emergency Response Team (CERT-In) to combat cybercrime is incompatible with the purpose of VPNs, which are supposed to keep users' internet behaviour private, the company said.

"Rest assured, our users will still be able to connect to VPN servers that will give them Indian IP addresses and allow them to access the internet as if they were located in India. These “virtual” India servers will instead be physically located in Singapore and the UK," it added.





Government liability. I guess you can’t sue China successfully.

https://www.csoonline.com/article/3662158/opms-63-million-breach-settlement-offer-is-it-enough.html#tk.rss_all

OPM's $63 million breach settlement offer: Is it enough?

The nature and scope of the data stolen in the U.S. Office of Personnel Management breach presents a life-long risk to victims, who might get as little as $700 if the court accepts the settlement.

If one were to look into the Federal Court’s Public Access to Court Electronic Records (PACER) system, one would see that more than 130 separate lawsuits have been filed against the U.S. Government’s Office of Personnel Management (OPM), all of which are associated with the 2014 and 2015 data breaches that affected millions.





Another reason to grant AI personhood?

https://techcrunch.com/2022/06/01/whos-liable-for-ai-generated-lies/

Who’s liable for AI-generated lies?

Who will be liable for harmful speech generated by large language models? As advanced AIs such as OpenAI’s GPT-3 are being cheered for impressive breakthroughs in natural language processing and generation — and all sorts of (productive) applications for the tech are envisaged from slicker copywriting to more capable customer service chatbots — the risks of such powerful text-generating tools inadvertently automating abuse and spreading smears can’t be ignored. Nor can the risk of bad actors intentionally weaponizing the tech to spread chaos, scale harm and watch the world burn.

Indeed, OpenAI is concerned enough about the risks of its models going “totally off the rails,” as its documentation puts it at one point (in reference to a response example in which an abusive customer input is met with a very troll-esque AI reply), to offer a free content filter that “aims to detect generated text that could be sensitive or unsafe coming from the API” — and to recommend that users don’t return any generated text that the filter deems “unsafe.” (To be clear, its documentation defines “unsafe” to mean “the text contains profane language, prejudiced or hateful language, something that could be NSFW or text that portrays certain groups/people in a harmful manner.”).

But, given the novel nature of the technology, there are no clear legal requirements that content filters must be applied. So OpenAI is acting either out of concern to avoid its models causing generative harms to people, or out of reputational concern, because if the technology gets associated with instant toxicity, that could derail its development.
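The recommended pattern is simply "generate, then screen before returning." Here is a minimal sketch of that wrapper; both helpers are hypothetical stand-ins, not OpenAI's actual API.

```python
# Sketch of the generate-then-screen pattern described above. Both helpers
# are hypothetical stand-ins, NOT OpenAI's actual API: generate() represents
# a completion call, classify_safety() a content-filter call.
def generate(prompt: str) -> str:
    """Stand-in for a large language model completion."""
    return "generated text for: " + prompt

def classify_safety(text: str) -> str:
    """Stand-in for a content filter returning 'safe', 'sensitive', or 'unsafe'."""
    blockwords = ("slur", "threat")   # toy heuristic, purely illustrative
    return "unsafe" if any(w in text.lower() for w in blockwords) else "safe"

def moderated_reply(prompt: str) -> str:
    draft = generate(prompt)
    label = classify_safety(draft)
    if label == "unsafe":
        # Per the guidance quoted above: don't return text the filter flags unsafe.
        return "Sorry, I can't respond to that."
    if label == "sensitive":
        return "[sensitive] " + draft
    return draft

print(moderated_reply("tell me about RSS readers"))
```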





Read and heed.

https://undark.org/2022/06/02/the-long-uncertain-road-to-artificial-general-intelligence/

Opinion: The Long, Uncertain Road to Artificial General Intelligence

A versatile new AI is fueling speculation that machines will soon think like humans. It’s time for a reality check.





Perspective. (Because I haven’t considered all the implications)

https://www.bespacific.com/13-ways-overturning-roe-v-wade-affects-you-even-if-you-think-it-doesnt/

13 Ways Overturning Roe v. Wade Affects You (even if you think it doesn’t)

Via LLRX 13 Ways Overturning Roe v. Wade Affects You (even if you think it doesn’t) Kathy Biehl is a lawyer licensed in two states, as well as a prolific multidisciplinary author and writer. Roe v. Wade has been settled law during her entire career. In this article Biehl succinctly and expertly identifies how the upcoming Supreme Court decision in Dobbs v. Jackson Women’s Health Organization, a draft of which was “leaked” on May 2, 2022, will impact many facets of our society as well as our democracy.



Wednesday, June 01, 2022

Another well known, well documented problem that we do nothing about.

https://apnews.com/article/2022-midterm-elections-technology-georgia-election-2020-a746b253f3404dbf794349df498c9542

Cyber agency: Voting software vulnerable in some states

Electronic voting machines from a leading vendor used in at least 16 states have software vulnerabilities that leave them susceptible to hacking if unaddressed, the nation’s leading cybersecurity agency says in an advisory sent to state election officials.

The U.S. Cybersecurity and Infrastructure Security Agency, or CISA, said there is no evidence the flaws in Dominion Voting Systems’ equipment have been exploited to alter election results. The advisory is based on testing by a prominent computer scientist and expert witness in a long-running lawsuit that is unrelated to false allegations of a stolen election pushed by former President Donald Trump after his 2020 election loss.





As predicted. Vigilante surveillance – what could possibly go wrong?

https://www.technologyreview.com/2022/05/31/1052901/anti-abortion-activists-are-collecting-the-data-theyll-need-for-prosecutions-post-roe/

Anti-abortion activists are collecting the data they’ll need for prosecutions post-Roe

Body cams and license plates are already being used to track people arriving at abortion clinics.





No doubt Texas will keep on trying...

https://www.axios.com/2022/05/31/supreme-court-texas-social-media-law

Supreme Court blocks Texas' controversial social media law

The Supreme Court has voted 5-4 to block Texas' social media censorship law, a major boon for tech companies who have been fighting against content moderation laws that would fundamentally change how they do business.

Why it matters: Conservative states have launched a legal war on social media companies in an effort to stem what they see as a wave of censorship, but this decision, like other recent rulings, suggests they face an uphill climb in court.

What's happening: The Supreme Court's decision means that Texas can't enforce a new law that would allow Texans and the state's attorney general to sue tech giants like Meta and YouTube over their content moderation policies.

  • The court's order isn't a final ruling on the merits of Texas' law, but when the courts freeze a particular law or policy, it's often a sign the measure faces a difficult road on the merits.

  • It comes just a few days after a federal appeals court ruled against a similar law in Florida.



Tuesday, May 31, 2022

Not the weapon we thought it was?

https://www.schneier.com/blog/archives/2022/05/the-limits-of-cyber-operations-in-wartime.html

The Limits of Cyber Operations in Wartime

Interesting paper by Lennart Maschmeyer: “The Subversive Trilemma: Why Cyber Operations Fall Short of Expectations”:

Abstract: Although cyber conflict has existed for thirty years, the strategic utility of cyber operations remains unclear. Many expect cyber operations to provide independent utility in both warfare and low-intensity competition. Underlying these expectations are broadly shared assumptions that information technology increases operational effectiveness. But a growing body of research shows how cyber operations tend to fall short of their promise. The reason for this shortfall is their subversive mechanism of action. In theory, subversion provides a way to exert influence at lower risks than force because it is secret and indirect, exploiting systems to use them against adversaries. The mismatch between promise and practice is the consequence of the subversive trilemma of cyber operations, whereby speed, intensity, and control are negatively correlated. These constraints pose a trilemma for actors because a gain in one variable tends to produce losses across the other two variables. A case study of the Russo-Ukrainian conflict provides empirical support for the argument. Qualitative analysis leverages original data from field interviews, leaked documents, forensic evidence, and local media. Findings show that the subversive trilemma limited the strategic utility of all five major disruptive cyber operations in this conflict.





Resource.

https://www.schneier.com/blog/archives/2022/05/security-and-human-behavior-shb-2022.html

Security and Human Behavior (SHB) 2022

Today is the second day of the fifteenth Workshop on Security and Human Behavior, hosted by Ross Anderson and Alice Hutchings at the University of Cambridge. After two years of having this conference remotely on Zoom, it’s nice to be back together in person.

This year’s schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, twelfth, thirteenth, and fourteenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio/video recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.





It’s a mental illness and we tend to ignore or avoid anything to do with mental illness.

https://www.politico.com/news/magazine/2022/05/27/stopping-mass-shooters-q-a-00035762

Two Professors Found What Creates a Mass Shooter. Will Politicians Pay Attention?

Three years ago, Jillian Peterson, an associate professor of criminology at Hamline University, and James Densley, a professor of criminal justice at Metro State University, decided to take a different approach. In their view, the failure to gain a more meaningful and evidence-based understanding of why mass shooters do what they do seemed a lost opportunity to stop the next one from happening. Funded by the National Institute of Justice, the research arm of the Department of Justice, their research constructed a database of every mass shooter since 1966 who shot and killed four or more people in a public place, and every shooting incident at schools, workplaces and places of worship since 1999.

Their findings, also published in the 2021 book, The Violence Project: How to Stop a Mass Shooting Epidemic, reveal striking commonalities among the perpetrators of mass shootings and suggest a data-backed, mental health-based approach could identify and address the next mass shooter before he pulls the trigger — if only politicians are willing to actually engage in finding and funding targeted solutions.

Mass shootings are socially contagious: when one really big one happens and gets a lot of media attention, we tend to see others follow.

There’s this really consistent pathway. Early childhood trauma seems to be the foundation, whether violence in the home, sexual assault, parental suicides, extreme bullying. Then you see the build toward hopelessness, despair, isolation, self-loathing, oftentimes rejection from peers. That turns into a really identifiable crisis point where they’re acting differently. Sometimes they have previous suicide attempts.

What’s different from traditional suicide is that the self-hate turns against a group. They start asking themselves, “Whose fault is this?” Is it a racial group or women or a religious group, or is it my classmates? The hate turns outward. There’s also this quest for fame and notoriety.





Caffeine up! (So why aren’t the British extinct?)

https://neurosciencenews.com/coffee-mortality-20705/

Sweetened and Unsweetened Coffee Consumption Associated With Lower Death Risk

Summary: Those who drink sweetened coffee daily are up to 31% less likely to die within a 7-year follow-up than non-coffee drinkers. Those who drank unsweetened coffee were 21% less likely to die during the follow-up.



Monday, May 30, 2022

A useful metaphor? Years of progress lost.

https://www.databreaches.net/ransomware-attack-sends-new-jersey-county-back-to-1977/

Ransomware attack sends New Jersey county back to 1977

Brandon Vigliarolo reports:

Somerset County, New Jersey, was hit by a ransomware attack this week that hobbled its ability to conduct business, and also cut off access to essential data.
“Services that depend on access to county databases are temporarily unavailable, such as land records, vital statistics, and probate records. Title searches are possible only on paper records dated before 1977,” the county said in a statement.

Read more at The Register.





As the percentage of self-driving cars increases, the probability that two will collide also increases. A case of AI vs. AI!

https://thenextweb.com/news/self-driving-cars-crash-responsible-courts-black-box

When self-driving cars crash, who’s responsible? Courts need to know what’s inside the ‘black box’

The first serious accident involving a self-driving car in Australia occurred in March this year. A pedestrian suffered life-threatening injuries when hit by a Tesla Model 3, which the driver claims was in “autopilot” mode.

In the US, the highway safety regulator is investigating a series of accidents where Teslas on autopilot crashed into first-responder vehicles with flashing lights during traffic stops.

The decision-making processes of “self-driving” cars are often opaque and unpredictable (even to their manufacturers), so it can be hard to determine who should be held accountable for incidents such as these. However, the growing field of “explainable AI” may help provide some answers.
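Production driving stacks are deep models paired with post-hoc explanation tools, but a toy example shows what a court-auditable decision record could look like: train an interpretable model and print its rules. All features and data below are invented for illustration.

```python
# Toy "explainable AI" example: an interpretable model whose decision rules
# can be read out in court. Features, data, and the brake/no-brake framing
# are all invented for this sketch.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["obstacle_distance_m", "closing_speed_mps", "lights_flashing"]
X = np.array([
    [40.0,  2.0, 0],
    [10.0,  8.0, 1],
    [35.0,  1.0, 1],
    [ 5.0, 12.0, 0],
    [50.0,  3.0, 0],
    [ 8.0,  9.0, 1],
])
y = np.array([0, 1, 0, 1, 0, 1])   # 1 = brake, 0 = continue

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=features))  # human-readable rules
```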





Is the problem Russia’s military strategy or Putin’s leadership?

https://www.newsweek.com/exclusive-russias-air-war-ukraine-total-failure-new-data-show-1709388

Exclusive: Russia's Air War in Ukraine is a Total Failure, New Data Show

… Russia's dubious world record in accumulating missile strikes comes as President Zelensky announced that his country destroyed its 200th Russian airplane, an embarrassing result for an air force that is 15 times larger than that of Ukraine.

The global commentary on this milestone lauded Ukraine's defenders while noting Russia's failure to take advantage of its overwhelming numerical advantage, Moscow's misstep in not establishing air superiority in the skies over Ukraine, and Russia's dwindling supply of precision-guided weapons.

In the face of all of this, Russia retaliated on Sunday by announcing that it had destroyed 165 Ukrainian aircraft since the beginning of its "special military operation." That would be almost three times the number of flyable fighter jets that Ukraine even possesses.

… Russia's failure to follow this path has become a significant feature of the Ukraine war—one that confuses Western observers. [Certainly confused me! Bob] After 48 hours of attacks on Ukrainian air defenses in the opening salvo of the war, Moscow seemed to give up on pursuing this American war prerequisite. The Russians attacked airfields and air defense sites on the first two days but mostly didn't follow up. Ukraine's small air force was largely grounded, but Kyiv was given an opportunity to adjust, especially in its dispersal of air defense missiles, in particular shoulder-fired ones. This created what Stringer calls "poor man's air superiority."





This could be extremely important (and valuable) since programmers hate to document and when they do, they do it poorly.

https://techcrunch.com/2022/05/30/mintlify-taps-ai-to-automatically-generate-documentation-from-code/

Mintlify taps AI to automatically generate documentation from code

Mintlify, a startup developing software to automate software documentation tasks…

… “We’ve worked as software engineers at companies in all stages ranging from startups to big tech and found that they all suffer from bad documentation, if it even existed at all,” Wang told TechCrunch in an email interview. “Documentation is the lifeline for junior engineers and those jumping into new codebases. It helps senior devs save time from explaining their code to others in the future. For public-facing and open-source products, documentation has a direct impact on user adoption.”
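Mintlify's AI presumably does something far smarter, but the shape of the task is easy to show: walk the code, find the functions, and emit a doc stub for each. A rule-based caricature in Python, with an invented sample function:

```python
# Rule-based caricature of documentation generation: walk a source file and
# emit a doc stub per function. Mintlify uses AI models for this; the sketch
# below only shows the shape of the task.
import ast

def document(source: str) -> str:
    sections = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            args = ", ".join(a.arg for a in node.args.args)
            body = ast.get_docstring(node) or "TODO: describe this function."
            sections.append(f"### {node.name}({args})\n{body}\n")
    return "\n".join(sections)

sample = '''
def transfer(amount, source, dest):
    """Move funds between accounts."""
    dest.balance += amount
    source.balance -= amount
'''
print(document(sample))
```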



Sunday, May 29, 2022

Data as the new oil?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4116921

A Perspective on Fairness in Artificial Intelligence

Data is the weapon of the future. Whoever controls data, controls the world… If we don’t put up a fight, our data will belong to the wrong people. – The Billion Dollar Code



Best to keep your AI happy…

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4115647

Both/And - Why Robots Should not Be Slaves

One solution to the exclusive person/thing dichotomy that persists in both moral philosophy and law is slavery. Already in Roman times, slaves were regarded as something more than a mere thing but not quite a full person. They occupied a liminal position situated between the one and the other, being both thing and person. And there have been, in both the legal and philosophical literature, a surprising number of serious proposals arguing for instituting what can only be called Slavery 2.0. This chapter provides a thorough critique of these “robots should be slaves” proposals, demonstrating how this supposed solution to the person/thing dichotomy actually produces more and significantly worse problems than it can possibly begin to resolve.



Can machines be ethical? 

https://www.researchgate.net/profile/Fatih-Esen/publication/360655432_The_Trust_in_the_Usage_of_Artificial_Intelligence_in_Social_Media_and_Traditional_Mass_Media/links/6283cb7eb2548471fee261d2/The-Trust-in-the-Usage-of-Artificial-Intelligence-in-Social-Media-and-Traditional-Mass-Media.pdf#page=76 

DIMENSIONS AND LIMITATIONS OF AI ETHICS

Ethics of AI is a new field in philosophy of technology addressing ethical issues raised by various emerging technologies under the umbrella term of “artificial intelligence” (AI). The notion of “artificial intelligence,” broadly understood, is any kind of artificial (semi)autonomous system that shows forms of intelligent behaviour in achieving a goal. Initially, intelligent behaviour in machines had to simulate human cognitive faculties, such as symbolic manipulation, logical reasoning, abstract thinking, learning, decision-making, and more (McCarthy et al. 1955: 2), but the current understanding of AI incorporates a wider range of automatic artificial agents, which excel at particular narrowly defined tasks.

… Traditionally, moral behaviour required rational determination of the will (Kant 2015), so only human beings were expected to bear moral responsibility and rights. From this perspective, technologies have been understood as passive and neutral instruments, whose use by humans could be ethical or unethical.



You have got to be kidding. Who defines what is moral?

https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780190881931.001.0001/oxfordhb-9780190881931-e-53

Moral Bioenhancement and Future Generations: Selecting Martyrdom?

Moral bioenhancement is a biological modification or intervention which makes moral behavior more likely. There are a number of ways this could potentially be achieved, including pharmaceuticals, non-invasive brain stimulation, or genetic engineering. Moral bioenhancement can be distinguished from other kinds of enhancement because it primarily benefits others, rather than just the individual who has been enhanced. With the challenges that will face future generations, such as climate change and the rise of artificial intelligence, it is even more important to address the possibilities of moral bioenhancement. In this chapter, the authors examine rationales for moral bioenhancement, the possibilities of these technologies, and some common critiques and concerns. The authors defend the view that moral bioenhancement may be a useful tool—one tool among many—to help future generations respond to these aforementioned challenges. The authors suggest that an application of the non-identity problem to genetic selection may help resolve some of the concerns surrounding moral bioenhancement.