Saturday, May 22, 2021

Remember, it’s ‘security researchers’ not hackers.

https://www.makeuseof.com/browser-extensions-security-researchers/

10 Browser Extensions for Security Researchers

Browser extensions make a lot of things easier. They aren't limited to general browsing; they can also come in handy for cybersecurity professionals.

They save security researchers time when quickly analyzing a website or online service, whether they are looking for potential security issues or just doing a background check.

Here are some of the best browser extensions that cybersecurity researchers, ethical hackers, or penetration testers find useful. Even if you are not one, you can still use these extensions to find out more information about the websites you visit.





Could an AI be tried by a jury of its peers? Will AI law be identical to human law?

https://fpf.org/blog/south-korea-the-first-case-where-the-personal-information-protection-act-was-applied-to-an-ai-system/

SOUTH KOREA: THE FIRST CASE WHERE THE PERSONAL INFORMATION PROTECTION ACT WAS APPLIED TO AN AI SYSTEM

As AI regulation is being considered in the European Union, privacy commissioners and data protection authorities around the world are starting to apply existing comprehensive data protection laws to AI systems and the way they process personal information. On April 28th, the South Korean Personal Information Protection Commission (PIPC) imposed sanctions and a fine of KRW 103.3 million (USD 92,900) on ScatterLab, Inc., developer of the chatbot “Iruda,” for eight violations of the Personal Information Protection Act (PIPA). This is the first time the PIPC has sanctioned an AI technology company for indiscriminate personal information processing.

“Iruda” caused considerable controversy in South Korea in early January after complaints of the chatbot using vulgar and discriminatory racist, homophobic, and ableist language in conversations with users. The chatbot, which assumed the persona of a 20-year-old college student named “Iruda” (Lee Luda), attracted more than 750,000 users on Facebook Messenger less than a month after release. The media reports prompted PIPC to launch an official investigation on January 12th, soliciting input from industry, law, academia, and civil society groups on personal information processing and legal and technical perspectives on AI development and services.





This seems to be a rather simplified response, even for Harvard.

https://hbr.org/2021/05/5-rules-to-manage-ais-unintended-consequences

5 Rules to Manage AI’s Unintended Consequences

Summary: Companies are increasingly using “reinforcement-learning agents,” a type of AI that rapidly improves through trial and error as it single-mindedly pursues its goal, often with unintended and even dangerous consequences. The weaponization of polarizing content on social media platforms is an extreme example of what can happen when RL agents aren’t properly constrained. To prevent their RL agents from causing harm, leaders should abide by five rules as they integrate this AI into their strategy execution.
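The core mechanism is simple enough to sketch in a few lines. Below is a minimal, purely illustrative epsilon-greedy bandit in Python (my own example, not from the HBR piece): the agent maximizes clicks by trial and error, and if a hypothetical "polarizing" option earns more clicks, it learns to serve little else, which is exactly the unintended consequence the authors warn about.

import random

# Illustrative click-through rates; the "polarizing" arm pays better.
arms = {"neutral": 0.05, "polarizing": 0.12}
estimates = {arm: 0.0 for arm in arms}   # running estimate of each arm's reward
counts = {arm: 0 for arm in arms}
epsilon = 0.1                            # fraction of the time the agent explores

for step in range(10_000):
    if random.random() < epsilon:
        choice = random.choice(list(arms))               # explore at random
    else:
        choice = max(estimates, key=estimates.get)       # exploit the best-known arm
    reward = 1 if random.random() < arms[choice] else 0  # simulated click
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(counts)      # the agent ends up serving mostly "polarizing" content
print(estimates)   # its learned estimates converge toward the true rates

The article's rules amount to changing that reward signal or constraining the set of allowed actions, rather than trusting the agent to restrain itself.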





Even if I can leave the house, I may want to read.

https://www.makeuseof.com/curators-to-find-the-best-articles-worth-reading-on-the-internet/

5 Curators to Find the Best Articles Worth Reading on the Internet

Want to see the pick of the best writing worth reading on the web? Follow these curators who recommend only the best articles.



Friday, May 21, 2021

Do insurers reinsure themselves?

https://www.databreaches.net/cna-financial-paid-40-million-in-ransom-after-march-cyberattack/

CNA Financial Paid $40 Million in Ransom After March Cyberattack

Kartikay Mehrotra and William Turton report:

CNA Financial Corp., among the largest insurance companies in the U.S., paid $40 million in late March to regain control of its network after a ransomware attack, according to people with knowledge of the attack.
The Chicago-based company paid the hackers about two weeks after a trove of company data was stolen, and CNA officials were locked out of their network, according to two people familiar with the attack who asked not to be named because they weren’t authorized to discuss the matter publicly.

Read more on Bloomberg, including the question of whether or not the payment went to a sanctioned group.





Possible, but let’s not double-think ourselves into a corner. Security AI should recognize this pattern as easily as any other.

https://www.scmagazine.com/home/2021-rsa-conference/data-poisoning-that-leverage-machine-learning-may-be-the-next-big-attack-vector/

‘Data poisoning’ that leverages machine learning may be the next big attack vector

Data poisoning attacks against the machine learning used in security software may be attackers’ next big vector, said Johannes Ullrich, dean of research at the SANS Technology Institute.

Machine learning is based on pattern recognition in a pool of data. Data poisoning means adding intentionally misleading data to that pool so that the model begins to misidentify its inputs.

… Ullrich noted that hackers could provide a stream of bad information by, say, flooding a target organization with malware designed to refine ML detection away from the techniques they actually plan to use for the main attack.
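The mechanics are easy to demonstrate on a toy model. The sketch below is my own illustration (not from the article or Ullrich's talk) of a label-flipping poisoning attack in Python with scikit-learn: an attacker who can influence the training pool relabels a chunk of malicious samples as benign, and the retrained detector's recall on real malware drops. The dataset, model, and numbers are all hypothetical.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "telemetry": class 1 = malicious, class 0 = benign.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def malware_recall(train_labels):
    # Retrain the detector and measure how much real malware it still catches.
    model = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    preds = model.predict(X_test)
    return (preds[y_test == 1] == 1).mean()

print("recall on clean training data:   ", malware_recall(y_train))

# Poisoning: flip half of the malicious training labels to "benign" before
# the next retraining cycle, as an attacker feeding bad data might.
poisoned = y_train.copy()
malicious_idx = np.where(poisoned == 1)[0]
flipped = np.random.RandomState(0).choice(malicious_idx,
                                          size=len(malicious_idx) // 2,
                                          replace=False)
poisoned[flipped] = 0
print("recall on poisoned training data:", malware_recall(poisoned))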





Why not for everyone?

https://www.pogowasright.org/colorado-makes-doxxing-public-health-workers-illegal/

Colorado Makes Doxxing Public Health Workers Illegal

Anna Schaverien reports:

Colorado on Tuesday made it illegal to share the personal information of public health workers and their families online so that it can be used for purposes of harassment, responding to an increase in threats to such workers during the pandemic.
Known as doxxing, the practice of sharing a person’s sensitive information, such as a physical or email address or phone number, has long been used against law enforcement personnel, reporters, protesters and women speaking out about sexual abuse.

Read more on The New York Times.





I would think they have enough to determine the “missing” parts.

https://www.protocol.com/policy/social-media-data-act

Lawmakers want to force Big Tech to give researchers more data

Facebook's ad library allows researchers to see the content of ads that run on the platform and information on who those ads reach. But there is one key insight Facebook doesn't offer: information on how those ads were targeted.





Technology for evil or AI controlling the conversation.

https://www.washingtonpost.com/outlook/2021/05/20/ai-bots-grassroots-astroturf/

‘Grassroots’ bot campaigns are coming. Governments don’t have a plan to stop them.

Artificial intelligence software can easily pass for real public comments

This month, the New York state attorney general issued a report on a scheme by “U.S. Companies and Partisans [to] Hack Democracy.” This wasn’t another attempt by Republicans to make it harder for Black people and urban residents to vote. It was a concerted attack on another core element of U.S. democracy — the ability of citizens to express their voice to their political representatives. And it was carried out by generating millions of fake comments and fake emails purporting to come from real citizens.





In case I hadn’t mentioned this before.

https://www.executivegov.com/2021/05/nist-seeks-public-comments-on-proposed-model-for-ai-user-trust/

NIST Seeks Public Comments on Proposed Model for AI User Trust

The National Institute of Standards and Technology (NIST) has published a draft document outlining a list of nine factors that contribute to an individual’s potential trust in an artificial intelligence platform.

The draft document, titled “Artificial Intelligence and User Trust,” seeks to show how a person may weigh those factors based on the task at hand and the risk involved in trusting an AI system’s decision. It contributes to NIST’s efforts to advance the development of trustworthy AI tools, NIST said Wednesday.

https://nvlpubs.nist.gov/nistpubs/ir/2021/NIST.IR.8332-draft.pdf





So many books, so little time.

https://www.bespacific.com/amazon-publishing-dpla-ink-deal-to-lend-e-books-in-libraries/

Amazon Publishing, DPLA Ink Deal to Lend E-books in Libraries

Publishers Weekly: “The Digital Public Library of America (DPLA) today announced that it has signed a much-anticipated agreement with Amazon Publishing to make all of the roughly 10,000 Amazon Publishing e-books and digital audiobooks available to libraries, the first time that digital content from Amazon Publishing will be made available to libraries. In a release today, DPLA officials said that lending will begin sometime this summer, with Amazon Publishing content to be made available for license via the DPLA Exchange, the DPLA’s not-for-profit, “library-centered” platform, and accessible to readers via the SimplyE app, a free, open source library e-reader app developed by the New York Public Library and used by DPLA. Library users will not have to go through their Amazon accounts to access Amazon Publishing titles via the DPLA, and DPLA officials confirmed that, as with other publishers DPLA works with, Amazon will not receive any patron data. The executed, long-awaited deal comes nearly six months after Amazon Publishing and the DPLA confirmed that they were in talks to make Amazon Publishing titles available to libraries for the first time. The deal represents a major step forward for the digital library market. Not only is Amazon Publishing finally making its digital content available to libraries, the deal gives libraries a range of models through which they can license the content, offering libraries the kind of flexibility librarians have long asked for from the major publishers.”



Thursday, May 20, 2021

Apparently, hackers are not slowed by Covid.

https://www.verizon.com/business/resources/reports/dbir/

2021 Data Breach Investigations Report

Reduce risks with insights from more than 5,250 confirmed breaches.





Remember, in a cyberwar every sector will be attacked simultaneously.

https://www.databreaches.net/cyber-attack-has-caused-enormous-risk-hse-official/

Cyber attack has caused ‘enormous risk’ – HSE official

RTÉ reports:

The Health Service Executive’s National Clinical Adviser for Acute Operations has said there is an “enormous risk” across health services following the cyber attack last week which forced a shutdown of the HSE’s IT systems.
Speaking on RTÉ’s Morning Ireland, Dr Vida Hamilton said it is a “major disaster” and described it as a stressful time in hospitals.
“There is enormous risk in the system and everything has to be done so slowly and carefully to mitigate that risk,” Dr Hamilton said.
She said 90% of acute hospitals are substantially impacted by this cyber attack and it is affecting every aspect of patient care.

Read more on RTÉ.

So this is exactly the type of impact we have often cautioned could happen with an attack on the healthcare sector. The HSE incident seems to be getting more media coverage than other similar attacks, perhaps because it is national, but the risks have been known for years now.

So when all is said and done, when it comes time for the forensics, what was HSE’s security like prior to the attack? What was their backup system like? Had they really followed “best practices”? Yes, the blame belongs to the criminals, but had HSE deployed reasonable security given the times?

And will this be the incident that puts so much heat on Conti and other ransomware groups that Conti ducks for cover and other groups now exclude healthcare as carefully as they have excluded Russian or CIS entities? Right now, it doesn’t seem that way. They may not get the $20 million they have demanded, but unless something changes, they will live to extort another day.





Would the loss of sales/recovery costs/fines have been greater if they did not pay the ransom?

https://www.databreaches.net/colonial-pipeline-confirms-it-paid-4-4-million-to-hackers/

Colonial Pipeline confirms it paid $4.4 million to hackers

Cathy Bussewitz of AP reports:

The operator of the nation’s largest fuel pipeline confirmed it paid $4.4 million to a gang of hackers who broke into its computer systems.
Colonial Pipeline said Wednesday that after it learned of the May 7 ransomware attack, the company took its pipeline system offline and needed to do everything in its power to restart it quickly and safely, and made the decision then to pay the ransom.

Read more on WSOC-TV.



(Related)

https://www.wsj.com/articles/colonial-pipeline-ceo-tells-why-he-paid-hackers-a-4-4-million-ransom-11621435636?mod=djemalertNEWS

Colonial Pipeline CEO Tells Why He Paid Hackers a $4.4 Million Ransom

Joseph Blount, CEO of Colonial Pipeline Co., told The Wall Street Journal that he authorized the ransom payment of $4.4 million because executives were unsure how badly the cyberattack had breached its systems, and consequently, how long it would take to bring the pipeline back.

… “I know that’s a highly controversial decision,” Mr. Blount said in his first public remarks since the crippling hack. “I didn’t make it lightly. I will admit that I wasn’t comfortable seeing money go out the door to people like this.”

“But it was the right thing to do for the country,” he added.





Did the city take any action against the employee who screwed up? Did they change any procedures?

https://www.databreaches.net/city-pays-350000-after-suing-hackers-for-opening-dropbox-link-it-sent-them/

City pays $350,000 after suing “hackers” for opening Dropbox link it sent them

When is a “hack” not a “hack”? When a government entity mistakenly gives journalists access to files that, just maybe, it didn’t intend to give them access to…

Tim De Chant reports:

The city of Fullerton, California, has agreed to pay $350,000 to settle a lawsuit it brought against two bloggers it accused of hacking the city’s Dropbox account.
Joshua Ferguson and David Curlee frequently made public record requests in the course of covering city government for a local blog, Friends for Fullerton’s Future. The city used Dropbox to fulfill large file requests, and in response to a June 6, 2019, request for records related to police misconduct, Ferguson and Curlee were sent a link to a Dropbox folder containing a password-protected zip file.
But a city employee also sent them a link to a more general “Outbox” shared folder that contained potential records request documents that had not yet been reviewed by the city attorney.

Read more on Ars Technica.

[From the article:

As the case made its way through the courts, both the Electronic Frontier Foundation and the Reporters Committee for Freedom of the Press filed amicus briefs earlier this year in support of the bloggers. The EFF’s brief was particularly pointed. “The City’s interpretation would permit public officials to decide—after making records publicly available online (through their own fault or otherwise)—that accessing those records was illegal,” the group wrote. “The City proposes that journalists perusing a website used to disclose public records must guess whether particular documents are intended for them or not, intuit the City’s intentions in posting those documents, and then politely look the other way—or be criminally liable.”

The city of Fullerton faced increasingly long odds of winning the lawsuit, and last week, the city council voted 3-2 to settle the suit. Under the terms of the settlement, the city will pay the defendants $230,000 in attorneys’ costs and $60,000 each in damages. The city will also post a public apology on its website.





Another example the US won’t bother to follow.

https://www.huntonprivacyblog.com/2021/05/19/ecuador-approves-data-protection-law/

Ecuador Approves Data Protection Law

The Data Protection Law is based on the EU General Data Protection Regulation (the “GDPR”) and requires data controllers to implement safeguards to protect personal data, appoint a data protection officer and provide notice to individuals before processing certain personal data. The Data Protection Law also (1) establishes a national data protection authority; (2) regulates cross-border data transfers; and (3) provides Ecuadorians with the rights to request access to, amendment of, and deletion of their personal data.

[The law in Spanish: https://privacyblogfullservice.huntonwilliamsblogs.com/wp-content/uploads/sites/28/2019/09/Anteproyecto-de-Ley-Orga%CC%81nica-de-Proteccio%CC%81n-de-Datos-Personales.pdf





This could be useful.

https://i-sight.com/resources/a-practical-guide-to-data-privacy-laws-by-country/

A Practical Guide to Data Privacy Laws by Country [2021]

Privacy laws have never been as important as they are today, now that data travels the world through borderless networks. Over 130 jurisdictions now have data privacy laws, as of January 2021.





Podcast with full transcript.

https://www.technologyreview.com/2021/05/19/1025016/embracing-the-rapid-pace-of-ai/

Embracing the rapid pace of AI

In a recent survey, “2021 Thriving in an AI World,” KPMG found that across every industry—manufacturing to technology to retail—the adoption of artificial intelligence (AI) is increasing year over year. Part of the reason is that digital transformation is moving faster, which helps companies start to move exponentially faster. But, as Cliff Justice, US leader for enterprise innovation at KPMG, posits, “Covid-19 has accelerated the pace of digital in many ways, across many types of technologies.” Justice continues, “This is where we are starting to experience such a rapid pace of exponential change that it’s very difficult for most people to understand the progress.” But understand it they must because “artificial intelligence is evolving at a very rapid pace.”

Justice challenges us to think about AI in a different way, “more like a relationship with technology, as opposed to a tool that we program,” because he says, “AI is something that evolves and learns and develops the more it gets exposed to humans.”

Show notes and links: “2021 Thriving in an AI World,” KPMG



Wednesday, May 19, 2021

As they say in Moscow, “Это может повредить” (“This could hurt”). I agree.

https://krebsonsecurity.com/2021/05/try-this-one-weird-trick-russian-hackers-hate/

Try This One Weird Trick Russian Hackers Hate

In a Twitter discussion last week on ransomware attacks, KrebsOnSecurity noted that virtually all ransomware strains have a built-in failsafe designed to cover the backsides of the malware purveyors: They simply will not install on a Microsoft Windows computer that already has one of many types of virtual keyboards installed — such as Russian or Ukrainian. So many readers had questions in response to the tweet that I thought it was worth a blog post exploring this one weird cyber defense trick.

Will installing one of these languages keep your Windows computer safe from all malware? Absolutely not. There is plenty of malware that doesn’t care where in the world you are. And there is no substitute for adopting a defense-in-depth posture, and avoiding risky behaviors online.

But is there really a downside to taking this simple, free, prophylactic approach? None that I can see, other than perhaps a sinking feeling of capitulation. The worst that could happen is that you accidentally toggle the language settings and all your menu options are in Russian.
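For the curious, the check itself is trivial for malware to perform. The following Python sketch is my own approximation of the idea (it is not code from any ransomware family, and it only runs on Windows): it enumerates the installed keyboard layouts through the Win32 API and looks for the language IDs such strains reportedly treat as off-limits. The IDs listed are a partial, illustrative set.

import ctypes

# Low-word language identifiers for a few CIS-region layouts (illustrative subset):
# 0x0419 Russian, 0x0422 Ukrainian, 0x0423 Belarusian.
CIS_LANG_IDS = {0x0419, 0x0422, 0x0423}

user32 = ctypes.windll.user32
count = user32.GetKeyboardLayoutList(0, None)        # first call: how many layouts?
layouts = (ctypes.c_void_p * count)()
user32.GetKeyboardLayoutList(count, layouts)         # second call: fill the buffer

installed_lang_ids = {(handle or 0) & 0xFFFF for handle in layouts}
if installed_lang_ids & CIS_LANG_IDS:
    print("A CIS-language keyboard layout is installed; such ransomware would bail out.")
else:
    print("No CIS-language layout found.")

Adding such a layout takes a minute under the Windows language settings, and, as Krebs notes, you never have to actually switch to it.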





Always amusing.

https://thenextweb.com/news/how-much-your-stolen-personal-data-is-worth-on-the-dark-web-syndication?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+TheNextWeb+%28The+Next+Web+All+Stories%29

Here’s how much your stolen personal data is worth on the dark web





What to expect when an AI runs for office?

https://www.axios.com/gpt-3-disinformation-artificial-intelligence-c6ea11f7-b7eb-474d-b577-14731ffdbfa4.html

The disinformation threat from text-generating AI

A new report lays out the ways that cutting-edge text-generating AI models could be used to aid disinformation campaigns.

Why it matters: In the wrong hands, text-generating systems could be used to scale up state-sponsored disinformation efforts — and humans would struggle to know when they're being lied to.

How it works: Text-generating models like OpenAI's leading GPT-3 are trained on vast volumes of internet data, and learn to write eerily life-like text off human prompts.

  • In their new report released this morning, researchers from Georgetown's Center for Security and Emerging Technology (CSET) examined how GPT-3 might be used to turbocharge disinformation campaigns like the one carried out by Russia's Internet Research Agency (IRA) during the 2016 election.
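To make the "how it works" line concrete: the same prompt-to-text pattern can be reproduced with the small, openly available GPT-2 model via the Hugging Face transformers library (GPT-3 itself sits behind OpenAI's API). The snippet below is my own illustration, not from the CSET report, and the prompt is hypothetical.

# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Dear Senator, as a lifelong resident of your district, I am writing to say"
outputs = generator(prompt, max_length=60, num_return_sequences=3, do_sample=True)

# Each continuation reads like a distinct "constituent" comment -- the property
# that makes generated text attractive for astroturfing at scale.
for result in outputs:
    print(result["generated_text"])
    print("---")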



(Related) Learning to lie or detect lies. Either could be useful.

https://www.bespacific.com/mit-detect-political-fakes/

MIT – Detect Political Fakes

MIT Media Lab: “Did he say that? At Detect Political Fakes, we will show you a variety of media snippets (transcripts, audio, and videos). Half of the media snippets are real statements made by Joseph Biden and Donald Trump. The other half of the media snippets are fabricated. The media snippets that are fabricated are produced using deepfake technology. We are asking you to share how confident you are that a media snippet is real or fabricated.

Instructions – We will show you a variety of media snippets including transcripts, audio files, and videos. Sometimes, we include subtitles. Sometimes, the video is silent. You can watch the videos as many times as you would like. Please share how confident you are that the individual really said what we show. If you have seen the video before today, please select the checkbox that says “I’ve already seen this video.” And remember, half of the media snippets that we present are statements that the individual actually said…”





Is it worth noting that even the lawyers are confused?

https://abovethelaw.com/2021/05/just-calling-a-product-artificial-intelligence-isnt-good-enough/

Just Calling A Product ‘Artificial Intelligence’ Isn’t Good Enough

The many, many definitions of AI Contract Review.





Ignore all those primary sources, let your AI tell you what’s what.

https://www.bespacific.com/rethinking-search-making-experts-out-of-dilettantes/

Rethinking Search: Making Experts out of Dilettantes

MIT Technology Review: “…a team of Google researchers has published a proposal for a radical redesign that throws out the ranking approach and replaces it with a single large AI language model—a future version of BERT or GPT-3. The idea is that instead of searching for information in a vast list of web pages, users would ask questions and have a language model trained on those pages answer them directly. The approach could change not only how search engines work, but how we interact with them. Many issues with existing language models will need to be fixed first. For a start, these AIs can sometimes generate biased and toxic responses to queries—a problem that researchers at Google and elsewhere have pointed out... Metzler and his colleagues are interested in a search engine that behaves like a human expert. It should produce answers in natural language, synthesized from more than one document, and back up its answers with references to supporting evidence, as Wikipedia articles aim to do. ..”

Source – Cornell University arXiv:2105.02274 – Rethinking Search: Making Experts out of Dilettantes, Authors: Donald Metzler, Yi Tay, Dara Bahri, Marc Najork: Abstract – When experiencing an information need, users want to engage with an expert, but often turn to an information retrieval system, such as a search engine, instead. Classical information retrieval systems do not answer information needs directly, but instead provide references to (hopefully authoritative) answers. Successful question answering systems offer a limited corpus created on-demand by human experts, which is neither timely nor scalable. Large pre-trained language models, by contrast, are capable of directly generating prose that may be responsive to an information need, but at present they are dilettantes rather than experts – they do not have a true understanding of the world, they are prone to hallucinating, and crucially they are incapable of justifying their utterances by referring to supporting documents in the corpus they were trained over. This paper examines how ideas from classical information retrieval and large pre-trained language models can be synthesized and evolved into systems that truly deliver on the promise of expert advice.”





Who knew dogs were such techies?

https://www.theregister.com/2021/05/19/woof_woof_whos_a_good/

Australian Federal Police hiring digital evidence retrieval specialists: Being a very good boy and paws required

Hounds can sniff out SIM cards that a human might miss



Tuesday, May 18, 2021

Mi casa es su casa may not always work.

https://www.cpomagazine.com/cyber-security/51-of-organizations-experienced-a-third-party-data-breach-after-overlooking-external-access-privileges/

51% Of Organizations Experienced a Third-Party Data Breach After Overlooking External Access Privileges

SecureLink released its third-party data breach report with the Ponemon Institute, highlighting the gap between perceived third-party access threats and the security mitigations adopted.

The report titled “A Crisis in Third-party Remote Access Security” found that organizations were not implementing the necessary security measures to mitigate third-party remote access risks.





To an individual home owner, these are merely rather cool security devices that allow you to see who is at the door from anywhere in the world. It is Amazon’s ability to link all of them together that provides ‘near perfect’ surveillance.

https://www.theguardian.com/commentisfree/2021/may/18/amazon-ring-largest-civilian-surveillance-network-us

Amazon’s Ring is the largest civilian surveillance network the US has ever seen

One in 10 US police departments can now access videos from millions of privately-owned home security cameras without a warrant

Ring video doorbells, Amazon’s signature home security product, pose a serious threat to a free and democratic society. Not only is Ring’s surveillance network spreading rapidly, it is extending the reach of law enforcement into private property and expanding the surveillance of everyday life. What’s more, once Ring users agree to release video content to law enforcement, there is no way to revoke access and few limitations on how that content can be used, stored, and with whom it can be shared.



(Related)

https://www.makeuseof.com/eufy-urges-users-log-out-reset-cameras/

Eufy Urges All Users to Log Out and Reset Their Cameras

A major privacy breach was discovered in Eufy security cameras that allowed one to view the live and recorded camera feeds of strangers. Eufy users also had complete access to the other person's account and could control their camera's pan and tilt positions.



(Related)

https://www.bespacific.com/alexa-what-other-devices-are-listening-to-me/

Alexa, what other devices are listening to me?

CNN Business: “More and more, the devices in your home are listening to you, your friends and family. It sounds Orwellian. It’s billed as convenient. As the Internet of Things proliferates, it creates a world in which everyday devices are interconnected via a web of sensors, apps, software and Wi-Fi. That means you can lower your thermostat on the drive home while your refrigerator orders a dozen eggs after sensing the supply is low. Your hackable home – Devices with various types of voice technology are also becoming more common. With a simple hands-free utterance, an Amazon- or Google-run personal assistant can stream your favorite Gap Band playlist or find a solid recipe for macaroons. But it also raises concerns about privacy – not just hacking but also how companies protect your data… So, what devices are listening? And why? Here’s a quick rundown of some popular contraptions, along with links to their privacy policies, so you can see what the parent companies can and can’t do with the data they collect…





The possibilities are disturbing. Toward a complete Deep Fake tool?

https://techcrunch.com/2021/05/17/reface-now-lets-users-face-swap-into-pics-they-upload/

Reface now lets users face-swap into pics and GIFs they upload

Buzzy face-swapping video app Reface is expanding its reality-shifting potential beyond selfies by letting users upload more of their own content for its AI to bring to life.

Users of its iOS and Android apps still can’t upload their own user generated video but the latest feature — which it calls Swap Animation — lets them upload images of humanoid stuff (monuments, memes, fine art portraits, or — indeed — photos of other people) which they want animated, choosing from a selection of in-app song snippets and poems for the AI-incarnate version to appear to speak/sing etc.

Reface’s freemium app has, thus far, taken a tightly curated approach to the content users can animate, only letting you face swap a selfie into a pre-set selection of movie and music video snippets (plus memes, GIFs, red carpet celeb shots, salon hair-dos and more).

But the new feature — which similarly relies on GAN (generative adversarial network) algorithms to work its reality-bending effects — expands the expressive potential of the app by letting users supply their own source material to face swap/animate.

Some rival apps do already offer this kind of functionality — so there’s an element of Reface catching up to apps like Avatarify, Wombo and Deep Nostalgia.





Useful tool?

https://www.freetech4teachers.com/2021/05/brainstormer-collaborative.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+freetech4teachers/cGEY+(Free+Technology+for+Teachers)

Brainstormer - A Collaborative Brainstorming and Voting Tool

Brainstormer is a new online brainstorming tool that is easy to use and helps solve the "what do we do now?" problem that often arises at the end of group brainstorming sessions. Brainstormer solves that problem by letting members of the brainstorming session vote for their favorite ideas.

Brainstormer is quick and easy to use. Registration is not required in order to host or participate in a Brainstormer session. To get started, simply head to the site and click "setup brainstorm." The next screen will prompt you to write a question or problem to brainstorm about. After writing your prompt you'll enter your name, and on the next screen you'll get a link to share with the people you want to join your session. Participants will join by just clicking the link you share with them.

In a Brainstormer session you can set a time limit of five, ten, fifteen, twenty, or thirty minutes. You can reset the timer if you need more time and you can end the session early if the group has run out of ideas. Whenever your Brainstormer session ends a voting screen appears and all group members can vote for their favorite ideas.

Participants in Brainstormer sessions can write and submit as many ideas as they like. All submitted ideas appear as sticky notes on the screen. Participants' screen names do not appear on the voting page.