University of Illinois supercomputing shows how to boost soybean yields by adapting photosynthesis to fleeting shadows

Komorebi is a Japanese word that describes how light filters through leaves—creating shifting, dappled “sun flecks” that illustrate plants’ ever-changing light environment. Crops harness light energy to fix carbon dioxide into food via photosynthesis. In a special issue of The Plant Journal, a team from the University of Illinois reports a new mathematical model, run on a supercomputer, that reveals how much yield is lost as soybean crops grapple with minute-by-minute light fluctuations on cloudy and sunny days.

“Soybean is the fourth most important crop in terms of overall production, but it is the top source of vegetable protein globally,” said Yu Wang, a postdoctoral researcher at Illinois, who led this work for Realizing Increased Photosynthetic Efficiency (RIPE). “We found that soybean plants may lose as much as 13 percent of their productivity because they cannot adjust quickly enough to the changes in light intensity that are standard in any crop field. It may not sound like much, but in terms of the global yield—this is massive.”

Pictured: Postdoctoral Researcher Yu Wang (left) and Ikenberry Endowed Professor Stephen Long (right).

RIPE is an international research project that aims to improve photosynthesis to equip farmers worldwide with higher-yielding crops needed to ensure everyone has enough food to lead a healthy, productive life. RIPE is sponsored by the Bill & Melinda Gates Foundation, the U.S. Foundation for Food and Agriculture Research (FFAR), and the U.K. Government’s Department for International Development (DFID).

Past models have only examined hour-by-hour changes in light intensity. For this study, the team created a dynamic computational ray-tracing model that predicts light levels to the millimeter across every leaf, for every minute of the day, in a flowering soybean crop. The model also takes into account two critical factors: photoprotection and Rubisco activase.

Photoprotection protects plants from sun damage. Triggered by high light levels, this process dissipates excess light energy safely as heat. But, when light levels drop, it can take minutes to hours for photoprotection to relax, or stop—costing the plant potential yield. The team evaluated 41 varieties of soybean to determine the fastest, slowest, and average rates of induction and relaxation of photoprotection. Relaxation in under 30 minutes is considered “short-term” photoprotection, and anything longer is “long-term.”
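The cost of slow relaxation can be illustrated with a toy calculation (an illustrative sketch only, not the team's model): if photoprotection is assumed to decay exponentially after light drops, the light energy still being shed as heat during a shaded spell is roughly the area under that decay curve.

```python
import numpy as np

# Toy illustration (not the RIPE model): after a cloud shades the canopy,
# photoprotection is assumed to relax exponentially with time constant tau.
# While it remains engaged, absorbed light is still shed as heat instead of
# driving carbon fixation, so slower relaxation wastes more potential yield.

def wasted_fraction(tau_minutes, shade_minutes=30.0, dt=0.01):
    """Mean fraction of potential photosynthesis lost during a shaded
    period while photoprotection relaxes from fully engaged (1.0)."""
    t = np.arange(0.0, shade_minutes, dt)
    engagement = np.exp(-t / tau_minutes)        # exponential relaxation
    return engagement.sum() * dt / shade_minutes  # time-averaged engagement

fast = wasted_fraction(tau_minutes=2.0)   # fast-relaxing variety
slow = wasted_fraction(tau_minutes=20.0)  # slow-relaxing variety
print(f"fast variety wastes {fast:.0%}, slow variety wastes {slow:.0%}")
```

In this caricature, the slow-relaxing variety wastes roughly half the available light during a 30-minute shaded spell, while the fast-relaxing one wastes only a few percent—which is why relaxation speed matters for yield.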

Using this new model, the team simulated a sunny and a cloudy day in Champaign, Illinois. On a sunny day, long-term photoprotection was the most significant limitation on photosynthesis. On a cloudy day, photosynthesis was limited most by short-term photoprotection and by Rubisco activase, a helper enzyme—triggered by light—that activates Rubisco to fix carbon into sugar.

The RIPE project has already begun to address photoprotection limitations in soybean and other crops, including cassava, cowpea, and rice. In 2016, the team published a study in Science in which they increased the levels of three proteins involved in photoprotection to boost the productivity of a model crop by 14 to 20 percent. In addition, the RIPE team from the Lancaster Environment Centre at Lancaster University is seeking better forms of Rubisco activase in soybean and cowpea.

The RIPE project and its sponsors are committed to ensuring Global Access and making these technologies available to the farmers who need them the most.

“Models like these are critical to uncovering barriers—and solutions—to attain this crop’s full potential,” said RIPE Director Stephen Long, Ikenberry Endowed University Chair of Plant Biology and Crop Sciences at Illinois’ Carl R. Woese Institute for Genomic Biology. “We’ve already begun to address these bottlenecks and seen significant gains, but this study shows us that there is still room for improvement.” 

University of Illinois develops AI algorithm to better predict corn yield

With some reports predicting the precision agriculture market will reach $12.9 billion by 2027, there is an increasing need for sophisticated data-analysis solutions that can guide management decisions in real time. A new study from an interdisciplinary research group at the University of Illinois offers a promising approach to efficiently and accurately process precision ag data.

"We're trying to change how people run agronomic research. Instead of establishing a small field plot, running statistics, and publishing the means, what we're trying to do involves the farmer far more directly. We are running experiments with farmers' machinery in their own fields. We can detect site-specific responses to different inputs. And we can see whether there's a response in different parts of the field," says Nicolas Martin, assistant professor in the Department of Crop Sciences at Illinois and co-author of the study.

He adds, "We developed a methodology using deep learning to generate yield predictions. It incorporates information from different topographic variables, soil electroconductivity, as well as nitrogen and seed rate treatments we applied throughout nine Midwestern cornfields."

Caption: New research from the University of Illinois demonstrates the promise of a convolutional neural network algorithm for crop yield prediction. Credit: L. Brian Stauffer, University of Illinois.

Martin and his team worked with 2017 and 2018 data from the Data-Intensive Farm Management project, in which seeds and nitrogen fertilizer were applied at varying rates across 226 fields in the Midwest, Brazil, Argentina, and South Africa. On-ground measurements were paired with high-resolution satellite images from Planet Labs to predict yield.

Fields were digitally broken down into 5-meter (approximately 16-foot) squares. Data on soil, elevation, nitrogen application rate, and seed rate were fed into the computer for each square, with the goal of learning how the factors interact to predict yield in that square.
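As a sketch of how such gridded data might be organized (the variable names, grid dimensions, and values here are hypothetical), each 5-meter square becomes one cell in a grid, and each input variable becomes one channel stacked into a tensor—the natural input shape for a convolutional network:

```python
import numpy as np

# Hypothetical example: rasterize one field into 5 m squares, with one
# channel per input variable, then stack channels into a single tensor.
rows, cols = 40, 60                      # e.g. a 200 m x 300 m field
rng = np.random.default_rng(0)

soil_ec   = rng.normal(30, 5,  (rows, cols))           # soil electroconductivity
elevation = rng.normal(220, 2, (rows, cols))           # meters above sea level
nitrogen  = rng.choice([150, 180, 210], (rows, cols))  # kg/ha treatment rate
seed_rate = rng.choice([75, 90, 105],   (rows, cols))  # thousand seeds/ha

# Shape (channels, rows, cols): 4 input channels for each 5 m square.
field_tensor = np.stack([soil_ec, elevation, nitrogen, seed_rate])
print(field_tensor.shape)
```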

The researchers approached their analysis with a type of machine learning or artificial intelligence known as a convolutional neural network (CNN). Some types of machine learning start with patterns and ask the computer to fit new bits of data into those existing patterns. Convolutional neural networks are blind to existing patterns. Instead, they take bits of data and learn the patterns that organize them, similar to the way humans organize new information through neural networks in the brain. The CNN process, which predicted yield with high accuracy, was also compared to other machine learning algorithms and traditional statistical techniques.
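The core operation a CNN applies to such a grid can be sketched in a few lines (a minimal illustration, not the study's actual network): a small kernel slides across the field so each square's prediction can draw on its neighbors' values. In a real CNN the kernel weights are learned from observed yields; here they are fixed to a simple average for clarity.

```python
import numpy as np

def conv2d(grid, kernel):
    """Slide a kernel over a 2D grid, producing one weighted sum of the
    local neighborhood per position (no padding, stride 1)."""
    kh, kw = kernel.shape
    out = np.zeros((grid.shape[0] - kh + 1, grid.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(grid[i:i + kh, j:j + kw] * kernel)
    return out

field = np.arange(25.0).reshape(5, 5)  # one input channel, 5x5 squares
kernel = np.full((3, 3), 1.0 / 9.0)    # fixed 3x3 averaging kernel
smoothed = conv2d(field, kernel)
print(smoothed.shape)  # each output cell summarizes a 3x3 neighborhood
```

With learned rather than fixed weights, stacks of such layers let the network pick up the hidden spatial patterns Martin describes.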

"We don't really know what is causing differences in yield responses to inputs across a field. Sometimes people have an idea that a certain spot should respond really strongly to nitrogen and it doesn't or vice versa. The CNN can pick up on hidden patterns that may be causing a response," Martin says. "And when we compared several methods, we found out that CNN was working very well to explain yield variation."

Using artificial intelligence to untangle data from precision agriculture is still relatively new, but Martin says his experiment merely grazes the tip of the iceberg in terms of CNN's potential applications. "Eventually, we could use it to come up with optimum recommendations for a given combination of inputs and site constraints."

University of Valencia's Todolí investigates the occupational health risks that Artificial Intelligence can pose

A study by Adrián Todolí, professor at the Faculty of Law and co-director of the Chair of Collaborative Economy and Digital Transformation of the University of Valencia, highlights the health risks that algorithms and artificial intelligence can pose as work management becomes progressively automated. Published in the journal Labour & Law Issues, the article warns that machines can treat humans as if they were other machines, and proposes that algorithms be designed with existing occupational hazards in mind.

The use of new technologies in the workplace is constantly growing, increasingly introducing algorithms or artificial intelligence systems into companies' work management. To optimize productivity, these programs analyze data and routines to evaluate performance and effectiveness; establish shifts and production times; design and assign tasks; and even analyze the information of applicants for a position and select the candidates who best fit the company's criteria.

In his research, Adrián Todolí, professor in the Department of Labour Law and Social Security, details the risks to which a company's personnel are exposed when the management of human resources and labor relations is automated. These include constant monitoring through technologies such as GPS or wearable devices, and the intensification of effort to keep up with the pace set by the algorithm, which can generate stress, anxiety, discouragement, and even depression.

“New technologies must be programmed to prevent and reduce these risks, taking into account specific factors such as the transparency of their operation, adaptation to the abilities of each staff member, and the margin of autonomy to make decisions and self-organize,” said the researcher of the Faculty of Law.

Pictured: Adrián Todolí, professor of the Faculty of Law of the University of Valencia.

The expert also analyses other risk factors, such as possible discrimination through the use of pre-existing, non-representative data or statistical inferences that violate professional ethics and the right to privacy, as well as malfunctions and cyberattacks. In addition, he cites the depersonalization and lack of empathy of the machines, which carry out their activities without taking personal circumstances into account. Among other cases, Todolí cites the “big brother” effect—the feeling of being observed at all times—and burnout syndrome caused by lack of privacy or invasive technological control.

Algorithms and artificial intelligence in work management can therefore cause physical and mental health problems: increased stress, anxiety, and frustration; decreased self-esteem and, in more extreme cases, depression; and reduced human contact and a worsening work environment. They can also cause personality changes related to the dehumanization that the algorithm can exert.

Regulation

Todolí therefore emphasizes the importance of regulating these algorithms so that occupational hazards are taken into account, rather than being addressed only after development. In this sense, it is important to respect privacy and non-discrimination, and to have a trained person supervise the actions of the algorithms and maintain contact and communication with employees.

The professor at the University of Valencia recalls that these new tools, in addition to enabling efficient management of the company and its human resources, can offer positive aspects: better evaluation and prevention methods, the detection of risks through audiovisual and auditory sensors, and alerts that increase protection for the workforce.