Local stress in the bridge structure just as lateral displacements get drastically large (deformations have been magnified five times). CREDIT Tokyo Metropolitan University

Japanese researchers use pushover analyses with nonlinear FEM to study earthquake damage to bridges

Detailed model highlights how important girder end design is for improving resilience

Researchers from Tokyo Metropolitan University have carried out a detailed simulation showing how a common type of bridge fails during large-scale earthquakes. They modeled “I-shaped girder” bridges, looking at the step-by-step mechanism by which they yield and deform under lateral forces, starting at the girder ends. Reinforcing ribs were shown to be effective against lateral forces and to improve load-bearing capacity. Their work points bridge engineers toward rational design strategies for more resilient infrastructure.

Major earthquakes can have a devastating impact on infrastructure. The effects of a severely damaged bridge, for example, are not limited to the tragedy that befalls the people on it; the loss of access also affects emergency services, evacuation efforts, and the transport of crucial supplies. Understanding how seismic activity affects common bridge structures is therefore crucial, not only for building bridges that can withstand strong quakes but also for preventing the failure of existing ones through effective reinforcement. Numerical models exist for assessing the resilience of bridge superstructures, but there are very few examples that examine how each part of the whole bridge structure behaves during large-scale earthquakes.

As different parts of the structure yield, the displacement in the lateral direction grows more quickly with increased force. CREDIT Tokyo Metropolitan University

A team led by Professor Jun Murakoshi of Tokyo Metropolitan University has been studying detailed models which accurately reflect the actual behavior of entire structures, with a focus on how they might inform new design strategies. They looked at the failure process and impact on load-bearing capacity caused by lateral shaking of an “I-shaped girder” bridge, a common bridge type with a span length of 30 m, in which steel girders with a cross-section shaped like a capital “I” carry a flat “deck slab” over which cars and people can pass. They subjected their model bridge to the lateral forces commonly seen during quakes, considering the response when the force was applied in the longitudinal and transverse directions to the girders.

The model revealed a detailed picture of how the bridge yields and deforms. For example, when the force was applied in the transverse direction, the first part to be affected was the lower portion of the vertical stiffeners at the supports, followed by yielding of the diagonal members of the end cross frame. The vertical stiffeners then continue to yield until, finally, the gusset plate (a steel plate that connects lateral members) starts to deform. Though this does not cause bridges to fail outright, there are already reports of such deformations impeding the passage of emergency vehicles after large-scale earthquakes.

The numerical model constructed by the team includes the girders, lateral members, supports, and the deck on top. CREDIT Tokyo Metropolitan University

The question now becomes how we might prevent this from happening. The team went on to study the effect of reinforcing ribs on the structure: a model with reinforcing ribs showed how stress acting on the girders and the end cross frame connecting them was reduced. The team’s work thus provides rational insight into how bridge structures may be designed and reinforced to make our infrastructure safer, as well as better strategies to assess their safety.

This work was supported by the Japan Iron and Steel Federation.

 

Norwegian-built model makes bike-sharing work

Solving the "first-mile/last-mile" problem with a new optimization model

They’re everywhere, from Berlin to Beijing: brightly colored bicycles you can borrow to move around the city without a car. These systems, along with e-scooters, offer people a quick and convenient way to travel around urban areas. And as cities scramble to find ways to meet their climate goals, they’re a welcome tool for urban planners. But what’s the best way to juggle the balance between available bikes and available parking places over the course of a busy day?

There are plenty of bikes here for riders, and even places to return them. Jens Gunnar H. Ellingsen, who works for Trondheim Bysykkel/UiP drift, thinks about this problem every day as he shifts bicycles around the city. CREDIT Nancy Bazilchuk/NTNU


Making sure the bikes and e-scooters are on hand can be something of a challenge — but it’s also key to the success of the offer, says Steffen Bakker, a researcher at NTNU’s Department of Industrial Economics and Technology Management who studies ways to make transport greener and more efficient.

“If a system like this is going to be successful, then we need to have user satisfaction,” Bakker said. “People want the bikes to be there when they want to use them, and they will only want to use the system if it’s a good service.”

Bakker was a co-author of a recent paper that describes an optimization model to help cities and companies do a better job keeping their bike-sharing customers happy.

Like shooting a moving target

Consider the challenges of providing bikes or scooters where and when people will want them.


Researchers describe the problem as dynamic, because it is always changing, and stochastic, because it changes in random and often difficult-to-predict ways, Bakker said.

“Bike-sharing system users pick up bikes in one place, and they move them somewhere else. And then the state of the system changes because all of a sudden, the bikes are not where they started, which is the dynamic part,” he said. “But then on top of that, you don’t know when the customers will pick up the bikes and where they will put them.  That’s the stochastic part. So if you want to plan at the start of the day, you don’t know what is going to happen.”

Bakker and his colleagues can use the enormous treasure trove of data collected by bikes and e-scooters when they are in use to make predictions. But there’s no guarantee that the way bikes were used last Tuesday, for example, will be the same the following Tuesday, he said.

“You must adjust for things that occur during the day,” he said. “Maybe all of a sudden, there’s an event happening or the weather changes, and then people don’t use the service, and the demand pattern changes, which impacts the planning.”

Putting the pieces together

What Bakker and his colleagues have developed is an optimization model that can give recommendations about what the service operators should do.

This includes what service vehicles should do at the station they’re currently at — whether they should drop off or pick up bikes, or swap out batteries for e-bikes and scooters — and where to go next. The underlying calculations are based on what has happened so far during the day, and what is expected to happen shortly.


The group’s research has been funded as part of a NOK 10 million project financed by the Research Council of Norway called the Future of Micromobility (FOMO), with the company Urban Sharing AS as the lead business on the grant.

“Through Pilot-T, we plan to use existing city bike systems as test bases, and by developing new decision support tools, the aim is to increase the efficiency of the rebalancing teams by 30% and the lifetime of the bikes by 20%,” said Jasmina Vele, project manager at Urban Sharing. “This can be realized through better decisions related to rebalancing and preventive maintenance, and this will correspond to a large cost reduction in existing city bicycle systems.”

Moving bikes in the most efficient way

The process of collecting and moving bikes from one bike parking station to another is called “rebalancing.” Using the optimization model, which is still in its development phase, allows the drivers to be sent a new plan every time they arrive at a bicycle station.

“You don’t make just one plan at the start of the day, but what we do is we make a new plan every time a vehicle arrives at a bicycle station,” he said.  “And when the car arrives at the station we’ll tell them, ‘Okay, pick up this many bikes or drop off this many bikes’.”
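The re-planning rule Bakker describes — deciding at each station arrival how many bikes to drop off or pick up — can be sketched in a few lines. This is a minimal illustration, not the NTNU optimization model; the function name, the target fill level, and the greedy rule are all assumptions made for the example.

```python
# Minimal sketch (not the NTNU model): re-plan at each station arrival.
# A naive rule: move the visited station toward a target fill level,
# limited by what the service vehicle can carry or still hold.

def replan(station_bikes, target, vehicle_load, vehicle_capacity):
    """Return the signed number of bikes to drop off (+) or pick up (-)."""
    deficit = target - station_bikes  # > 0 means the station needs bikes
    if deficit > 0:
        return min(deficit, vehicle_load)  # drop off what we carry
    # Station has a surplus: pick up as much as fits in the vehicle.
    return -min(-deficit, vehicle_capacity - vehicle_load)

# Station with 2 bikes, target 8, vehicle carrying 5 of 20:
print(replan(2, 8, 5, 20))   # → 5 (drop off five bikes)
# Station with 12 bikes, target 8:
print(replan(12, 8, 5, 20))  # → -4 (pick up four bikes)
```

The real model additionally weighs expected demand at other stations over the coming hour, which is exactly the trade-off Bakker describes next.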

But here’s where the tricky part comes in. It’s important not to be too myopic by just focusing on the current state of the system, Bakker says, especially if it’s expected that certain stations will have more demand within the next hour or so.

“It’s very complex because it’s a big system,” he said.  “Maybe there’s going to be a lot of demand at the station in one hour. So you already want to bring some bicycles there. But at the same time, there may be stations now that are almost empty, and they need some bicycles. So you need to figure out this trade-off.”

It’s also important to coordinate pickups and drop-offs between the different vehicles that are servicing the bike-sharing network, he said.

Digital twins and computational time

Bakker and his colleagues are working with NTNU’s Department of Computer Science to create a “digital twin”, or a computer simulation, of the systems they are modeling, so they can try out different approaches without actually having to test them in the real world.

Initial tests showed that the model the group generated can reduce the number of problems (meaning either not enough bikes where the user wants one, or too many bikes so the user can’t park the bike)  by 41 per cent compared to not doing any rebalancing at all.

Compared to the current rebalancing practices of Oslo City Bikes, which is also a collaborator in the NFR grant, the number of problems was reduced by 24 per cent.  Bakker says newer versions of the model show even more potential.

Simpler approaches are possible too

Not surprisingly, the kinds of calculations needed to make the model work are complex,  and researchers need to fine-tune the different parameters affecting the performance of the model.

Bakker and his colleagues have also worked on one component of the optimization model called criticality scores, which is a little simpler and can be used independently of the larger optimization model.

A criticality score is basically a score given to each bike-sharing parking station based on the number of bikes it currently contains or needs. These scores are relatively simple to calculate and can be provided to drivers as they travel around the city rebalancing the number of bikes at each station.

“It’s a score that tells the driver which station is most critical to visit,” Bakker said. “If you can present that to the person driving the car and say these are the stations with the highest criticality score, we can provide something that is not the best, but it’s probably good, and much better than what bike-sharing companies do now.”
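A score of this kind might combine how far a station is from a comfortable fill level with how much demand is expected soon. The formula, weights, and station names below are illustrative assumptions, not the published scoring method.

```python
# Hypothetical criticality score (weights and formula are illustrative,
# not the published method): stations far from a half-full state, with
# lopsided expected demand, rank as most critical to visit.

def criticality(bikes, capacity, expected_pickups, expected_returns):
    target = capacity / 2                        # aim for a half-full station
    imbalance = abs(bikes - target) / capacity   # 0 (balanced) .. ~0.5
    pressure = abs(expected_pickups - expected_returns) / max(capacity, 1)
    return imbalance + pressure

stations = {
    "Central": criticality(bikes=1, capacity=20,
                           expected_pickups=9, expected_returns=2),
    "Harbour": criticality(bikes=10, capacity=20,
                           expected_pickups=3, expected_returns=3),
}
# The driver is sent to the most critical station first.
print(max(stations, key=stations.get))  # → Central
```

Because each station's score is independent of the others, the list can be recomputed cheaply and pushed to drivers without running the full optimization model.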

Urban Sharing’s Vele says using these kinds of optimization models can help make bike-sharing an important component in urban transport.

“Urban Sharing’s vision for future mobility is a transport system that is responsive and adaptive. By using data and machine learning/optimization algorithms, we can combine the best of both traditional and modern transport systems, and create a resource-efficient system that responds to demand and adapts to users’ individual needs,” she said.

Brazilian researchers use AI to define priority areas for action to combat deforestation in the Amazon

A study using satellite imagery and machine learning techniques shows that many deforestation hotspots lie outside the 11 municipalities currently monitored by the Brazilian federal government under its Amazon Plan 2021/2022.

Using a method based on satellite images and artificial intelligence, Brazilian researchers have shown that the priority area for actions to combat illegal deforestation could comprise 27.8% less territory than the 11 municipalities monitored by the federal government under the current strategy, known as the Amazon Plan 2021/2022. This monitoring ignores new deforestation frontiers outside the targeted areas.

According to an article by the researchers, published in June in Conservation Letters, a journal of the Society for Conservation Biology, areas of the Amazon classified as a high priority for having the highest deforestation rates totaled 414,603 square kilometers (km2) this year, while the total area targeted by the plan for the 11 municipalities is 574,724 km2. In other words, the area to be monitored could be reduced by 160,000 km2, which is about the size of Suriname.

However, while the deforestation hotspots identified by the researchers accounted for 66% of the average annual deforestation rate, the 11 municipalities targeted by the plan represented 37% of the deforestation rate for the last three years (2019-21).

In the article, scientists affiliated with Brazil’s National Space Research Institute (INPE) and universities in the United States conclude that the proposed method would give monitoring and law enforcement a tighter focus. Furthermore, they stress, it reveals new deforestation frontiers outside the priority area and hence not covered by the official monitoring plan.

“Using this new approach, we concluded that prioritizing areas with higher deforestation rates would be more effective than limiting the monitoring to certain municipalities. This is an important finding, given that the agencies responsible for law enforcement, in this case mainly IBAMA and ICMBio, have had their budgets and staffing steadily whittled down. Some of these deforestation hotspots are in the 11 municipalities, but others are in the vicinity and constitute new frontiers,” Guilherme Augusto Verola Mataveli, corresponding author of the article, told Agência FAPESP. Mataveli is a researcher in INPE’s Earth Observation and Geoinformatics Division.

The study was supported by FAPESP via four projects (19/25701-8, 19/21662-8, 21/07382-2, and 16/02018-2).

The National Council for Legal Amazonia (CNAL), which oversees the Amazon Plan 2021/2022, responded as follows to Agência FAPESP's request for comment: “The aim [of the plan] was to focus on where the occurrence of illegal environmental activities had the most impact on the results of Brazil’s environmental management without neglecting the need to act in other areas of Legal Amazonia.” 

Legal Amazonia is an area of more than 5 million km2 comprising the states of Acre, Amapá, Amazonas, Maranhão, Mato Grosso, Pará, Rondônia, Roraima, and Tocantins. It was created by federal laws dating back to 1953 to promote special protection and development policies for the area. 

According to CNAL, “the 11 municipalities were chosen because they had the largest deforested area and the highest incidence of fires, with the possibility of including others to be mapped by the Center for Management and Operations of the Amazon Protection System [Censipam]”.

The council also stated that INPE was one of the “leading institutions in the process of choosing priorities”, and that the scientists who conducted the research “could have contributed in an institutional manner as the opportunity arose”. 

“CNAL always works with official information managed, processed, and analyzed by official government bodies,” its statement said.

Advances in data processing

The authors of the article note that deforestation in the 11 municipalities targeted by the plan has been significant in recent years and that this is grounds for monitoring but not sufficient to prioritize only these areas, which are as follows: São Félix do Xingu, Altamira, Novo Progresso, Pacajá, Portel, Itaituba and Rurópolis (Pará); Apuí and Lábrea (Amazonas); Colniza (Mato Grosso); and Porto Velho (Rondônia).

They also note that despite the concentration on these areas for monitoring and law enforcement, deforestation increased 105% between February and April 2021 compared with the average for the same period between 2017 and 2021. DETER, Brazil’s official deforestation alert program, pointed to 524.89 km2 of new deforestation sites in these areas.

“The study validates the importance of INPE, which for 60 years has trained outstanding researchers, producing science and technology from satellite data for society and national development. The advances in data processing embodied in the use of artificial intelligence for the planning of actions to combat deforestation are critical to mitigate the country’s environmental problems and construct a national sustainable development plan,” said Luiz Aragão, the last author of the article. Aragão heads INPE’s Earth Observation and Geoinformatics Division.

Priority areas

The data sources for the study included INPE’s Legal Amazonia Deforestation Satellite Monitoring Service (PRODES), which produces the annual deforestation statistics used by the Brazilian government in formulating public policy for the region. PRODES focuses on cut-and-burn rates and has used the same methodology since 1988.

According to its latest report, the areas deforested in the region totaled 13,235 km2 between August 2020 and July 2021. This was a year-over-year increase of 22%, the largest since 2006 (more at: terrabrasilis.dpi.inpe.br/app/dashboard/deforestation/biomes/legal_amazon/rates).

“The idea for the article came up in February 2021 when the Amazon Plan 2021/2022 was announced,” Mataveli said. “Deforestation in the 11 municipalities was said to account for 70% of total deforestation detected in the Amazon, but the PRODES number was different. When we enhanced the model, we found it to be a useful tool to focus monitoring and law enforcement more effectively.”

To establish the priority areas, the researchers first defined what they call grid cells measuring 25 km by 25 km and regularly distributed across the Amazon. Using the Random Forest machine learning algorithm to predict deforestation hotspots in the following year based on sets of multivariate regressions, they placed each cell in a high, medium, or low priority class. According to the article, the method identified a larger proportion of areas at risk of deforestation in terms of total size and public plots where clearing trees is illegal.

The model considered five predictors: deforestation in previous years, distance to grid cells with high cumulative deforestation in previous years, distance to infrastructures such as roads and waterways, the total area protected in grid cells, and the number of active fires. 

The three priority classes were based on predicted deforestation, with values below the 70th percentile classified as low, values between the 70th and 90th percentiles as medium, and values above the 90th percentile as high. The grid cells classified as high were used to map priority areas for 2022 totaling 414,603 km2.
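The percentile-based classification described above is straightforward to reproduce. The sketch below is illustrative only: the predicted values are invented, and the nearest-rank percentile convention is an assumption (the authors do not specify which convention they used).

```python
# Sketch of the article's percentile rule: predicted deforestation below
# the 70th percentile is "low", between the 70th and 90th "medium", and
# above the 90th "high". Input values here are made up for illustration.

def percentile(values, p):
    """Nearest-rank percentile (one simple convention among several)."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

def classify(predictions):
    p70 = percentile(predictions, 70)
    p90 = percentile(predictions, 90)
    return ["high" if v > p90 else "medium" if v > p70 else "low"
            for v in predictions]

# Hypothetical predicted deforestation (km2) for ten 25 km x 25 km cells:
predicted_km2 = [0.1, 0.3, 0.2, 5.0, 1.2, 0.8, 2.5, 0.05, 9.7, 0.4]
print(classify(predicted_km2))
```

Only the cells labeled "high" would be merged into the priority map, which is how the method arrives at a monitoring area smaller than the 11 whole municipalities.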

The authors also note that their method prioritizes actions in boundary areas of the 11 priority municipalities where deforestation activities are concentrated, captures other areas of increasing deforestation not monitored by the plan, determines priorities based on the land cleared in the previous year, and does not depend on geopolitical frontiers such as municipalities. 

“Prioritizing these 11 municipalities will be insufficient for Brazil to achieve its international commitments, including the pledge to reduce illegal deforestation to zero by 2028 announced at COP-26 [the 2021 UN Climate Change Conference],” Mataveli said. “Moreover, the plan aims to reduce deforestation by 8,719 km2 per year, but a 2018 decree set a far lower target of 3,925 km2 per year after 2020.”

This was a reference to Decree 9578 (2018), which consolidated the National Climate Change Policy and set a goal of cutting deforestation in the Amazon by 80% compared with the average for 1996-2005. This is one of the actions to which Brazil has committed in order to contain greenhouse gas emissions.

Besides its 2028 zero-deforestation pledge, Brazil also announced at COP-26 that it would cut greenhouse gas emissions by half compared to 2005 levels by 2030 and achieve climate neutrality by 2050. Rising deforestation in the Amazon contrasts with these promises: about 11% of greenhouse gas emissions are due to forest and land use mismanagement, including deforestation and fire.

When the Amazon Plan 2021/2022 was announced, experts criticized the targets it set as insufficient because they were based on the average deforestation rate for the period 2016-20, which was already 35% higher than the average for the previous ten years.

Call for complementary actions

The article argues for several complementary actions to combat deforestation, beyond direct methods for setting public policy targets. These should include environmental education and awareness raising; identifying and holding accountable actors who infringe environmental protection laws and profit from illegal deforestation; incentivizing projects that invest in the green economy and in maintaining the standing forest; and regularizing public and Indigenous land holdings.

“We used open-source code to create the model and define priority areas,” Mataveli said. “We’re talking to the Terra Brasilis platform to include these areas in the information available to all those who want to access it, so that it can be used in practice by any state or municipal governments interested.”