Oregon State University finds citizen scientists’ contributions a boon to snowpack modeling

Data gathered by backcountry skiers, avalanche forecasters, and other snow recreationists and professionals has the potential to greatly improve snowpack modeling, research by the Oregon State University College of Engineering indicates.

(Snowpack research photo by Kendra Sharp)

Findings, published in the journal Hydrology and Earth System Sciences, stem from a NASA-funded project known as Community Snow Observations, or CSO, part of NASA’s Citizen Science for Earth Systems program.

The paper is the first to document CSO’s power to improve snowpack modeling through “organic, opportunistic” data – a notable outcome, said researcher David Hill.

“We have shown citizen scientist contributions are very valuable and that we can do great things in the absence of observational network infrastructure,” said Hill, professor of civil engineering at OSU. “In this study, we used a new data set collected by CSO participants in coastal Alaska to improve snow depth and snow-water equivalent outputs from a snow process model.”

In western North America, snow’s role in ecosystem function and water resource management is critical, the scientists say, and around the world, more than a billion people live in watersheds where snow is a major component of the hydrologic system.

“Snowpack dynamics in the mountains have a big role in connecting atmospheric processes and the hydrologic cycle with downstream water users,” said Chris Cosgrove, an OSU graduate student during the research. “At our Alaska field site, hydroelectric power generation is the principal concern, but in the lower 48, many agricultural producers and municipal water systems rely on seasonal snow.”

In 2017, NASA enlisted Hill and doctoral student Ryan Crumley, as well as researchers at the University of Washington, the University of Alaska Fairbanks, and the Alaska Division of Geological & Geophysical Surveys, to recruit citizen scientists and incorporate their data into supercomputer models that generate important snowpack information for scientists, engineers and land and watershed managers.

Community Snow Observations kicked off in February 2017 and since then thousands of data entries have been made. Led by Hill, Gabe Wolken of the University of Alaska Fairbanks, and Anthony Arendt of the University of Washington, the project first focused primarily on Alaskan snowpacks. Researchers then recruited citizen scientists in the Pacific Northwest and the Rocky Mountain region.

The work is ongoing and getting involved in Community Snow Observations is easy. A smartphone, the free Mountain Hub application, and an avalanche probe with graduated markings in centimeters are the only tools needed.

As citizen scientists make their way through the mountains, they use their avalanche probes to take snow depth readings that they then upload into Mountain Hub, an app for the outdoor community.

That’s all there is to it.
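The article doesn’t specify how Mountain Hub stores these readings; purely as an illustration, a single crowdsourced observation could be represented as a small record like this (the field names and values here are hypothetical, not the app’s actual schema):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SnowObservation:
    """One crowdsourced snow depth reading (hypothetical schema)."""
    timestamp: datetime   # when the probe reading was taken
    latitude: float       # decimal degrees
    longitude: float      # decimal degrees
    depth_cm: float       # avalanche-probe reading, in centimeters

# Example: an observation near Thompson Pass, Alaska (illustrative values)
obs = SnowObservation(
    timestamp=datetime(2017, 2, 15, 10, 30, tzinfo=timezone.utc),
    latitude=61.13,
    longitude=-145.75,
    depth_cm=215.0,
)
print(obs.depth_cm)  # 215.0
```

The essential point is how little each contribution requires: a position, a time, and one probe depth.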

“We’ve now taken our modeling work operational,” Hill said. “We serve up real-time grids of snow information at many sites across the United States, including the central Cascades in Oregon, at mountainsnow.org. The general public can go there and view real-time information on snow, snow changes, and other things like satellite measurements of snow.”

In the recently published research, Hill and Crumley, who’s now at the Los Alamos National Laboratory, teamed with Wolken, Arendt, Cosgrove, and OSU graduate student Christina Aragon to look at how snowpack models for the Thompson Pass region of Alaska’s Chugach Mountains improved when citizen science measurements were incorporated.

“Improvements were seen in 62% to 78% of the simulations depending on the model year,” Aragon said. “Our results suggest that even modest measurement efforts by citizen scientists have the potential to improve efforts to model snowpack processes in high mountain environments.”

Information about snow distribution reaches scientists from many sources, including telemetry stations and remote sensing via light detection and ranging, or LIDAR, but the simplicity of the citizen science data gathering approach allows for many gaps to be filled, the scientists say.

“Snow depth measurements can be made accurately and quickly by anyone with a measuring device,” Crumley said. “The potential of mobilizing a new type of data set collected by people like snowshoers and snow machiners is significant because those folks often go to remote mountain environments where so far there haven’t been many observations recorded. All of those people can gather data at scales much greater than the capacity of a small group of scientists.”

(ISC)² cybersecurity workforce study sheds new light on talent demand amid a lingering pandemic

New 2021 data finds a continued resilient growth trajectory for the cybersecurity profession and offers practical solutions for closing the gap

(ISC)² has released the findings of its 2021 (ISC)² Cybersecurity Workforce Study. The study reveals updated figures for both the Cybersecurity Workforce Estimate and the Cybersecurity Workforce Gap in 2021, provides key insights into the makeup of the profession, and explores the challenges and opportunities that exist for professionals and hiring organizations.

The study reveals a decrease in the global workforce shortage for the second consecutive year, from 3.12 million down to 2.72 million cybersecurity professionals. There are two significant contributing factors to this year's workforce gap estimate. The first is that 700,000 new entrants joined the field since 2020, contributing to a sharp increase in the available supply, now up to 4.19 million people. The second is that the workforce gap for every region other than Asia-Pacific increased. Data suggests that slower economic recovery from the pandemic and its impact on small businesses and critical sectors like IT services (a major cybersecurity employer in the region) are contributing to the relative softness in demand for cybersecurity professionals compared to North America, Europe, and Latin America. However, Asia-Pacific still has the largest regional workforce gap of 1.42 million.

Even with 700,000 new entrants, demand continues to outpace the supply of talent. The global cybersecurity workforce needs to grow 65% to effectively defend organizations' critical assets.
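The 65% figure follows directly from the numbers quoted above: the gap (2.72 million unfilled roles) as a share of the current workforce (4.19 million). A quick arithmetic check:

```python
# Figures quoted in the (ISC)² study, in millions of people
supply = 4.19  # current global cybersecurity workforce
gap = 2.72     # estimated unfilled positions

# Required growth to close the gap, as a share of the current workforce
growth_needed = gap / supply
print(f"{growth_needed:.0%}")  # prints 65%
```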

"Any increase in the global supply of cybersecurity professionals is encouraging, but let's be realistic about what we still need and the urgency of the task before us," said Clar Rosso, CEO, (ISC)². "The study tells us where talent is needed most and that traditional hiring practices are insufficient. We must put people before technology, invest in their development, and embrace remote work as an opportunity. And perhaps most importantly, organizations must adopt meaningful diversity, equity, and inclusion practices to meet employee expectations and close the gap."

How Organizations Overcome Their Gap
This year's research provides fresh perspectives into how organizations are overcoming their workforce gaps. Study participants shared their organizations' planned talent and technology investments, including:

  • More training (36%); providing more flexible working conditions (33%); and investing in diversity, equity, and inclusion (DEI) initiatives (29%)
  • Using cloud service providers (38%); deploying intelligence and automation for manual tasks (37%); involving cybersecurity staff earlier in third-party relationships (32%)

The study uncovered the avoidable consequences that occur when cybersecurity staff is stretched too thin. Participants said they experienced misconfigured systems (32%); not enough time for proper risk assessment and management (30%); slowly patched critical systems (29%); rushed deployments (27%).

Participants also offered opinions on what specialized skills and roles their teams lack, aligned with the roles outlined in the U.S. government's National Initiative for Cybersecurity Education (NICE) Framework. They cited categories such as Securely Provision (48%); Analyze (47%); and Protect and Defend (47%) as the top areas of need, but the data also shows a strong need for help across all roles.

Lasting Pandemic Impact
The percentage of cybersecurity professionals working remotely in some capacity due to the pandemic remains unchanged at 85%; however, 37% report they must now come to the office at times compared to 31% in 2020. In addition to the advantages of remote work as a public health measure, organizations cited improved workplace flexibility (53%); accelerated innovation and digital transformation efforts (37%); and stronger collaboration (34%) as some of the ways the pandemic has changed their organizations for the better.

Security challenges arising from remote workforces included the rapid deployment of new collaboration tools (31%); lack of security awareness among remote workers (30%); and rising concern for the physical security of distributed assets (29%). 

Additional highlighted findings include:

  • Cybersecurity professionals have consistently expressed very high levels of job satisfaction over the last four years—a record 77% of respondents reported they are satisfied or extremely satisfied with their jobs.
  • More cybersecurity professionals are getting their start outside of IT – 17% transitioned from unrelated career fields, 15% gained access through cybersecurity education and 15% explored cybersecurity concepts independently. Alternate points of entry are more common for women than men – only 38% of female participants started their careers in IT compared to 50% of male participants.
  • The average salary of a cybersecurity professional before taxes is U.S. $90,900—up from U.S. $83,000 among respondents in 2020. Salaries of certified cybersecurity professionals are U.S. $33,000 higher than those with no certifications.
  • Cloud computing security is once again the top priority for cybersecurity professionals' skills development in the next two years.

MIT neuroscientists build AI that sheds light on how the brain processes language

Neuroscientists find the internal workings of next-word prediction models resemble those of language-processing centers in the brain.

In the past few years, artificial intelligence models of language have become very good at certain tasks. Most notably, they excel at predicting the next word in a string of text; this technology helps search engines and texting apps predict the next word you are going to type.

The most recent generation of predictive language models also appears to learn something about the underlying meaning of language. These models can not only predict the word that comes next, but also perform tasks that seem to require some degree of genuine understanding, such as question answering, document summarization, and story completion. 

Such models were designed to optimize performance for the specific function of predicting text, without attempting to mimic anything about how the human brain performs this task or understands the language. But a new study from MIT neuroscientists suggests the underlying function of these models resembles the function of language-processing centers in the human brain.

Computer models that perform well on other types of language tasks do not show this similarity to the human brain, offering evidence that the human brain may use next-word prediction to drive language processing.

“The better the model is at predicting the next word, the more closely it fits the human brain,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines (CBMM), and an author of the new study. “It’s amazing that the models fit so well, and it very indirectly suggests that maybe what the human language system is doing is predicting what’s going to happen next.”

Joshua Tenenbaum, a professor of computational cognitive science at MIT and a member of CBMM and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL); and Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience and a member of the McGovern Institute, are the senior authors of the study. Martin Schrimpf, an MIT graduate student who works in CBMM, is the first author of the paper.

Making predictions

The new, high-performing next-word prediction models belong to a class of models called deep neural networks. These networks contain computational “nodes” that form connections of varying strength, and layers that pass information between each other in prescribed ways.
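The article doesn’t include any model code; purely as an illustration of “nodes,” connection strengths, and layers passing information forward, a toy network in NumPy (not the architecture used in the study) might look like this:

```python
import numpy as np

def forward(x, layers):
    """Pass an input through successive layers: each layer is a weight
    matrix (connection strengths) followed by a nonlinearity at the nodes."""
    for w in layers:
        x = np.maximum(0.0, x @ w)  # ReLU: each node's activation
    return x

rng = np.random.default_rng(0)
# Two layers: 4 inputs -> 16 hidden nodes -> 8 output nodes
layers = [rng.normal(size=(4, 16)), rng.normal(size=(16, 8))]
out = forward(rng.normal(size=(1, 4)), layers)
print(out.shape)  # (1, 8)
```

Real next-word prediction models are vastly larger and use more elaborate layer types, but the layered, weighted information flow is the same basic idea.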

Over the past decade, scientists have used deep neural networks to create models of vision that can recognize objects as well as the primate brain does. Research at MIT has also shown that the underlying function of visual object recognition models matches the organization of the primate visual cortex, even though those computer models were not specifically designed to mimic the brain.

In the new study, the MIT team used a similar approach to compare language-processing centers in the human brain with language-processing models. The researchers analyzed 43 different language models, including several that are optimized for next-word prediction. These include a model called GPT-3 (Generative Pre-trained Transformer 3), which, given a prompt, can generate text similar to what a human would produce. Other models were designed to perform different language tasks, such as filling in a blank in a sentence.

As each model was presented with a string of words, the researchers measured the activity of the nodes that make up the network. They then compared these patterns to activity in the human brain, measured in subjects performing three language tasks: listening to stories, reading sentences one at a time, and reading sentences in which one word is revealed at a time. These human datasets included functional magnetic resonance (fMRI) data and intracranial electrocorticographic measurements taken in people undergoing brain surgery for epilepsy.
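The study’s actual analysis pipeline is more involved, but the core idea can be sketched simply: fit a linear map from a model’s node activations to recorded brain responses on some stimuli, then score how well it predicts responses to held-out stimuli. The sketch below uses entirely synthetic data and ordinary least squares, just to show the shape of the comparison:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: node activations of a model for 200 word strings,
# and brain responses (e.g., fMRI voxels) to the same strings.
n_stimuli, n_nodes, n_voxels = 200, 50, 10
activations = rng.normal(size=(n_stimuli, n_nodes))
true_map = rng.normal(size=(n_nodes, n_voxels))
brain = activations @ true_map + 0.5 * rng.normal(size=(n_stimuli, n_voxels))

# Fit a linear map on half the stimuli, predict the held-out half.
train, test = slice(0, 100), slice(100, 200)
w, *_ = np.linalg.lstsq(activations[train], brain[train], rcond=None)
pred = activations[test] @ w

# Score: mean Pearson correlation between predicted and actual responses.
scores = [np.corrcoef(pred[:, v], brain[test][:, v])[0, 1]
          for v in range(n_voxels)]
print(round(float(np.mean(scores)), 2))
```

A model whose internal activations linearly predict held-out brain responses well is, in this sense, a better match to the brain.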

They found that the best-performing next-word prediction models had activity patterns that very closely resembled those seen in the human brain. Activity in those same models was also highly correlated with human behavioral measures, such as how fast people were able to read the text.

“We found that the models that predict the neural responses well also tend to best predict human behavior responses, in the form of reading times. And then both of these are explained by the model performance on next-word prediction. This triangle really connects everything together,” Schrimpf says.

Game changer

One of the key computational features of predictive models such as GPT-3 is an element known as a forward one-way predictive transformer. This kind of transformer can make predictions of what is going to come next, based on previous sequences. A significant feature of this transformer is that it can make predictions based on a very long prior context (hundreds of words), not just the last few words.
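The “one-way” property means each position in the text can attend only to earlier positions, so predictions depend solely on prior context. A minimal NumPy sketch of the masking that enforces this (illustrative only, not any specific model’s code):

```python
import numpy as np

def causal_attention(q, k, v):
    """Scaled dot-product attention where position i may only see
    positions <= i, so each prediction uses only prior context."""
    t, d = q.shape
    scores = q @ k.T / np.sqrt(d)           # (t, t) pairwise similarities
    mask = np.triu(np.ones((t, t)), k=1)    # 1s above diagonal = the future
    scores = np.where(mask == 1, -np.inf, scores)
    # Row-wise softmax; exp(-inf) = 0, so future positions get zero weight
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 8))                 # 5 tokens, 8-dim features
out = causal_attention(x, x, x)
print(out.shape)  # (5, 8)
```

Because the first position can attend only to itself, its output is just its own value vector; later positions mix in progressively longer prior context, which is what lets such models condition on hundreds of preceding words.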

Scientists have not found any brain circuits or learning mechanisms that correspond to this type of processing, Tenenbaum says. However, the new findings are consistent with hypotheses that have been previously proposed that prediction is one of the key functions in language processing, he says.

“One of the challenges of language processing is the real-time aspect of it,” he says. “Language comes in, and you have to keep up with it and be able to make sense of it in real-time.”

The researchers now plan to build variants of these language processing models to see how small changes in their architecture affect their performance and their ability to fit human neural data.

“For me, this result has been a game-changer,” Fedorenko says. “It’s totally transforming my research program because I would not have predicted that in my lifetime we would get to these computationally explicit models that capture enough about the brain so that we can actually leverage them in understanding how the brain works.”

The researchers also plan to try to combine these high-performing language models with some computer models Tenenbaum’s lab has previously developed that can perform other kinds of tasks such as constructing perceptual representations of the physical world.

“If we’re able to understand what these language models do and how they can connect to models which do things that are more like perceiving and thinking, then that can give us more integrative models of how things work in the brain,” Tenenbaum says. “This could take us toward better artificial intelligence models, as well as giving us better models of how more of the brain works and how general intelligence emerges than we’ve had in the past.”