Can AI predict your thermal comfort from the layout of a room?

Scientists develop novel machine learning-based approach to study the impact of spatial parameters on indoor thermal comfort

Researchers at XJTLU have developed an artificial neural network-based system to predict personal thermal comfort in different areas of a room, based on spatial parameters such as the position and direction of windows.

We spend more than 90% of our time indoors, so buildings must be designed to maximize our comfort, particularly when it comes to maintaining an ambient temperature. Predicting whether a person will be hot or cold in a room is not only important in designing a comfortable space, it can also be useful for reducing energy consumption.

Currently, 55% of a building’s operational energy is spent on heating, ventilation, and air-conditioning (HVAC). Machine learning-based methods are often used to predict individuals’ thermal comfort from various factors, but the impact of architectural spatial features, such as the position of doors and windows, has not been included in previous models.

A fresh perspective

In a recent study published in the Journal of Building Engineering, a team of scientists, including Dr. Cheng Zhang and Dr. Bing Chen, both from Xi’an Jiaotong-Liverpool University, China, sought to address this knowledge gap.

“Our research set out to determine exactly how we can map the thermal comfort of different areas of a room, and how factors such as sunlight exposure, windows, and HVAC positioning affect each area,” explains Dr. Zhang. “To this end, we developed an artificial neural network (ANN)-based system to predict personal thermal comfort based on these factors.”

Developing a new model

The team developed the ANN-based model using three main categories of input parameters: personal parameters (age, gender, clothing, body mass index, etc.), environmental parameters (mean indoor temperature, humidity, weather conditions, etc.), and spatial parameters (such as the position of windows and doors). While the first two have been considered in many personal thermal comfort models before, accounting for spatial parameters is an entirely novel approach to modeling personal thermal comfort.
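To make the idea concrete, here is a minimal sketch (not the authors’ code; all feature names and data are illustrative) of how the three parameter groups could be concatenated into a single input vector for a small feed-forward neural network:

```python
# Illustrative only: combine personal, environmental, and spatial features
# into one input matrix and fit a small feed-forward network on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 200

# Personal parameters (e.g. age, BMI, clothing insulation) -- hypothetical
personal = rng.normal(size=(n, 3))
# Environmental parameters (e.g. indoor temperature, humidity) -- hypothetical
environmental = rng.normal(size=(n, 2))
# Spatial parameters (e.g. distance to window, distance to HVAC outlet) -- hypothetical
spatial = rng.normal(size=(n, 2))

X = np.hstack([personal, environmental, spatial])
# Synthetic thermal-comfort target on a continuous scale
y = X @ rng.normal(size=X.shape[1]) + 0.1 * rng.normal(size=n)

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X, y)
pred = model.predict(X)
print(pred.shape)  # one comfort prediction per sample
```

The point of the sketch is the feature layout, not the architecture: dropping the `spatial` block from `X` is the kind of ablation that would reveal how much those features contribute.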

Dr. Chen says: “Most models only cover physical environmental information, with little attention being paid to users’ behavior within the environment. We intended to incorporate human-related factors and behavior into the existing models.”

Collecting data

As with any machine learning-based approach, the scientists had to train and validate their ANN-based model with real data on all the input parameters and the resulting thermal comfort. The team conducted field experiments in both summer and winter, in five experimental office rooms with different layouts. They also recruited participants who, after staying at a given position in the room for a predetermined time, had to self-assess their thermal comfort. Their forehead temperature was also recorded. The collected data was used to train, validate, and finally test the ANN.
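The train/validate/test workflow described above can be sketched as follows (placeholder data; the study’s field-experiment data is not public, and the split sizes here are arbitrary):

```python
# Illustrative three-way split: hold out a test set, then carve a
# validation set from the remaining data.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 7))      # personal + environmental + spatial features
y = rng.integers(-3, 4, size=500)  # e.g. a 7-point thermal sensation vote

# Hold out 75 samples for testing, then 75 of the remainder for validation.
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=75, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=75, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 350 75 75
```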

Promising results

The trained ANN model mapped the thermal comfort of a room with high accuracy. The data showed that spatial parameters have a significant impact on the model’s prediction accuracy and revealed which spatial factors most influence an individual’s thermal comfort.

A path for the future

This research highlights the importance of including architectural spatial features in models to predict thermal comfort and reduce energy consumption, something that has been largely overlooked in previous research efforts.

“Our findings pave the way for future researchers to explore more relationships between indoor spatial layout and individual thermal comfort using AI technology, to provide a better environment using less energy,” remarks Dr. Zhang. “This may ultimately lead to an acceptable trade-off between thermal comfort and energy consumption achieved through spatial design.”

So next time you feel chilly in your living room, perhaps consider moving your sofa before you switch your heating on.

UK prof uses AI on the eye as a window into heart disease

Scientists have developed an artificial intelligence (AI) system that can analyze eye scans taken during a routine visit to an optician or eye clinic and identify patients at a high risk of a heart attack.

Image caption: A graphical representation of the idea of using a scan of the eye as a window into heart health.

Doctors have recognized that changes to the tiny blood vessels in the retina are indicators of broader vascular disease, including problems with the heart. 

In the research, led by the University of Leeds, deep learning techniques were used to train the AI system to automatically read retinal scans and identify those people who, over the following year, were likely to have a heart attack.  

Deep learning is a complex series of algorithms that enable computers to identify patterns in data and make predictions. 

The researchers report that the AI system had an accuracy of between 70% and 80% and could be used as a second referral mechanism for in-depth cardiovascular investigation.  

The use of deep learning in the analysis of retinal scans could revolutionize the way patients are regularly screened for signs of heart disease. 

Professor Alex Frangi, who holds the Diamond Jubilee Chair in Computational Medicine at the University of Leeds and is a Turing Fellow at the Alan Turing Institute, supervised the research. He said: “Cardiovascular diseases, including heart attacks, are the leading cause of early death worldwide and the second-largest killer in the UK. This causes chronic ill-health and misery worldwide. 

“This technique opens up the possibility of revolutionizing the screening of cardiac disease. Retinal scans are comparatively cheap and routinely used in many optician practices. As a result of automated screening, patients who are at high risk of becoming ill could be referred to specialist cardiac services. 

“The scans could also be used to track the early signs of heart disease.” 

The study involved a worldwide collaboration of scientists, engineers, and clinicians from the University of Leeds; Leeds Teaching Hospitals NHS Trust; the University of York; the Cixi Institute of Biomedical Imaging in Ningbo, part of the Chinese Academy of Sciences; the University of Cote d’Azur, France; the National Center for Biotechnology Information and the National Eye Institute, both part of the National Institutes of Health in the US; and KU Leuven in Belgium. 

The UK Biobank provided data for the study. 

Chris Gale, Professor of Cardiovascular Medicine at the University of Leeds and a Consultant Cardiologist at Leeds Teaching Hospitals NHS Trust, was one of the authors of the research paper. 

He said: “The AI system has the potential to identify individuals attending routine eye screening who are at higher future risk of cardiovascular disease, whereby preventative treatments could be started earlier to prevent premature cardiovascular disease.” 

Deep learning 

During the deep learning process, the AI system analyzed the retinal scans and cardiac scans of more than 5,000 people. The AI system identified associations between pathology in the retina and changes in the patient’s heart.  

Once the image patterns were learned, the AI system could estimate the size and pumping efficiency of the left ventricle, one of the heart’s four chambers, from retinal scans alone. An enlarged ventricle is linked with an increased risk of heart disease.  

With the estimated size and pumping efficiency of the left ventricle combined with basic demographic data about the patient (their age and sex), the AI system could predict their risk of a heart attack over the subsequent 12 months.  
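The two-stage idea described above can be sketched schematically (this is illustrative only, not the Leeds system; the deep-learning image model is replaced here by simple linear models on synthetic features):

```python
# Schematic two-stage pipeline: stage 1 estimates a left-ventricle measure
# from retinal-image features; stage 2 combines that estimate with age and
# sex to score 12-month heart-attack risk. All data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 400
retinal_features = rng.normal(size=(n, 10))  # stand-in for learned image features
lv_size = retinal_features @ rng.normal(size=10) + 0.2 * rng.normal(size=n)
age = rng.uniform(40, 80, size=n)
sex = rng.integers(0, 2, size=n)

# Stage 1: learn left-ventricle size from retinal features.
stage1 = LinearRegression().fit(retinal_features, lv_size)
lv_est = stage1.predict(retinal_features)

# Stage 2: logistic risk model on the estimated LV size plus demographics.
risk_X = np.column_stack([lv_est, age, sex])
score = lv_size + 0.02 * age
event = (score + rng.normal(size=n) > np.median(score)).astype(int)
stage2 = LogisticRegression(max_iter=1000).fit(risk_X, event)
risk = stage2.predict_proba(risk_X)[:, 1]  # probability of an event per patient
```

The design point is that stage 2 never sees the true cardiac measurements, only the retinal-scan-derived estimate, which is exactly what makes the approach usable where echocardiography or cardiac MRI is unavailable.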

Currently, details about the size and pumping efficiency of a patient’s left ventricle can only be determined if they have diagnostic tests such as echocardiography or magnetic resonance imaging of the heart. Those diagnostic tests can be expensive and are often only available in a hospital setting, making them inaccessible for people in countries with less well-resourced healthcare systems - or unnecessarily increasing healthcare costs and waiting times in developed countries. 

Sven Plein, British Heart Foundation Professor of Cardiovascular Imaging at the University of Leeds and one of the authors of the research paper, said: “The AI system is an excellent tool for unraveling the complex patterns that exist in nature, and that is what we have found here – the intricate pattern of changes in the retina linked to changes in the heart.” 

Norwegian researchers improve climate model projections of carbon, heat uptake in the Antarctic Ocean

In a new study, Norwegian researchers unveil a new relationship for climate models linking both carbon and heat uptake with the water-column stability in the Southern Ocean – also called the Antarctic Ocean.

Image caption: Cumulative uptake of anthropogenic carbon simulated by climate models during the 1850–2100 period under the high-emission scenario (RCP8.5). Red areas depict high uptake of carbon by the ocean. On the left, the weakly stratified models show a stronger carbon uptake between 30°S and 55°S than the strongly stratified models on the right. Adapted from Bourgeois et al. (2022).

This climate model projections study was conducted by researchers from NORCE Norwegian Research Centre and the Bjerknes Centre for Climate Research in Bergen, Norway. The ocean is a powerful mitigator of climate change: it absorbs about 25 percent of the CO2 emitted by humans into the atmosphere (anthropogenic carbon) and 90 percent of the excess heat caused by global warming. One of the hotspots of anthropogenic carbon and excess heat uptake is the Southern Ocean. 

In this region, a key mechanism called “ocean subduction”, located mainly between 30°S and 55°S, permits the efficient transfer of anthropogenic carbon and excess heat from the surface to the depths of the ocean, where they can be stored for centuries. Climate model projections of these carbon and heat sinks, however, remain highly uncertain. Reducing such uncertainties is required to effectively guide the development of climate mitigation policies for meeting ambitious climate targets.

“Climate models have improved significantly in many respects over the last decades, yet they still show some biases that we must reduce,” says lead author Timothée Bourgeois.

Models with a more unstable water column have an efficient subduction mechanism and a strengthened carbon and heat uptake in the region of interest, Bourgeois and his colleagues Nadine Goris, Jörg Schwinger, and Jerry F. Tjiputra point out. By contrast, models with a stable, or “stratified”, water column show reduced subduction and reduced uptake.

Using a recent and powerful statistical methodology, together with observational data describing today’s water-column stability, the new relationship reduces the uncertainty of future estimates of the anthropogenic carbon uptake by up to 53 percent and of the excess heat uptake by 28 percent. These results highlight that, for this region, an improved representation of water-column stability in climate models is key to improving climate change projections.
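The statistical idea at work here is an emergent constraint: across an ensemble of models, a present-day observable (water-column stability) correlates with the projected quantity (cumulative uptake), so an observation of the former narrows the latter. A minimal sketch with synthetic numbers (not the Bourgeois et al. data):

```python
# Emergent-constraint sketch: regress projected uptake on a present-day
# stratification index across a synthetic model ensemble, then use an
# "observed" stratification value to narrow the projection.
import numpy as np

rng = np.random.default_rng(0)
n_models = 30
strat = rng.normal(1.0, 0.3, size=n_models)              # stratification index per model
uptake = 120 - 40 * strat + rng.normal(0, 5, n_models)   # weak stratification -> more uptake

# Linear fit across the ensemble
slope, intercept = np.polyfit(strat, uptake, 1)
residual_sd = np.std(uptake - (slope * strat + intercept))

strat_obs = 1.05  # hypothetical observed present-day stratification
constrained = slope * strat_obs + intercept

unconstrained_sd = np.std(uptake)
print(f"unconstrained spread {unconstrained_sd:.1f}, constrained spread {residual_sd:.1f}")
```

The constrained spread (scatter around the fit at the observed value) is smaller than the raw ensemble spread, which is the sense in which the study’s 53 percent and 28 percent uncertainty reductions are computed.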

Canadian prof builds models to simulate effects of pregnancy on kidneys

In Ontario, Canada, University of Waterloo researchers are using supercomputer simulations to better understand the impacts pregnancy can have on kidneys.

The new research will help medical practitioners better understand the physiology of the kidneys during pregnancy and develop appropriate patient care and treatments to improve health outcomes.

The researchers are interested in how the kidneys change during a typical pregnancy and how increased strain on the kidneys can lead to gestational diseases. The kidneys can also be affected by preeclampsia - unusually high blood pressure during pregnancy that may lead to organ damage.

“One thing that happens during pregnancy is that plasma volume expands to supply a developing fetus and placenta,” said Melissa Stadt, a master’s researcher in applied mathematics at the University of Waterloo. “There’s also retention of extra sodium and potassium, which are essential electrolytes during pregnancy. Everything about pregnancy means a lot more work for the kidneys.”

The research team built computational models representing kidney function during mid- and late pregnancy and ran them on supercomputers. These in-silico experiments, so called because they are essentially conducted in the silicon of computer chips, provide a way to simulate different kinds of strain on the kidneys that would otherwise be impossible to test in live pregnancies without substantial risk. 

Because of the risks associated with human pregnancies, medical researchers often use other mammals like rats for research. Although computational models do not require any live test subjects, the research team still modeled rat pregnancies so they could incorporate more of the existing scientific data into their study.

“What’s powerful about computational modeling is that we can do trials that we could never do in live experiments,” said Anita Layton, professor of applied mathematics and Canada 150 Research Chair in mathematical biology and medicine at the University of Waterloo. “We can easily change one parameter and see the implications. Once we have the working model, we can see how these changes affect pregnancy.”
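The “change one parameter and see the implications” workflow Layton describes can be illustrated with a toy sweep (purely a sketch, not the Waterloo model; the function and its exponent are hypothetical):

```python
# Toy in-silico experiment: vary one parameter -- plasma volume -- and
# observe its effect on a simple stand-in for glomerular filtration rate.
import numpy as np

def toy_gfr(plasma_volume_l, baseline_gfr=120.0, baseline_volume_l=3.0):
    """Hypothetical relation: GFR scales sublinearly with plasma volume."""
    return baseline_gfr * (plasma_volume_l / baseline_volume_l) ** 0.5

# Plasma volume expands substantially over pregnancy; sweep a plausible range.
volumes = np.linspace(3.0, 4.5, 7)
gfrs = [toy_gfr(v) for v in volumes]
for v, g in zip(volumes, gfrs):
    print(f"plasma volume {v:.2f} L -> GFR {g:.1f} mL/min")
```

A real model couples many such relations (filtration, sodium and potassium transport, blood pressure), but the experimental pattern is the same: fix everything, sweep one parameter, read off the downstream effect.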

While computational models of organs like the kidneys are only ever approximations of what may happen in a specific individual case, they are a safe, cost-effective, and timely way to conduct trials, not just of the various impacts pregnancy may have on the kidneys, but also of potential treatments and medications. 

“If things go wrong in pregnancy, it can affect the mother for the rest of their life, and the growing fetus is very sensitive to any complications that affect the mother’s organs,” said Layton. “That’s where our models come in. Unfortunately, there’s a big gap in medical research related to all the changes in the kidneys of pregnant women. So our research is trying to make some progress and help improve health outcomes during pregnancy.”

Rice shows how iron catalyzes corrosion in 'inert' carbon dioxide

The iron that rusts in water theoretically shouldn’t corrode in contact with an “inert” supercritical fluid of carbon dioxide. But it does. 

The reason has eluded materials scientists until now, but a team at Rice University has a theory that could contribute to new strategies for protecting iron from the environment.

Image caption: Iron (blue) can react with trace amounts of water to produce corrosive chemicals despite being bathed in “inert” supercritical fluids of carbon dioxide. Atomistic simulations carried out at Rice University show how this reaction happens. (Credit: Evgeni Penev/Rice University)

Materials theorist Boris Yakobson and his colleagues at Rice’s George R. Brown School of Engineering found through atom-level simulations that iron itself plays a role in its corrosion when exposed to supercritical CO2 (sCO2) and trace amounts of water by promoting the formation of reactive species in the fluid that come back to attack it. 

In their research, published in the Cell Press journal Matter, they conclude that thin hydrophobic layers of 2D materials like graphene or hexagonal boron nitride could be employed as a barrier between iron atoms and the reactive elements of sCO2. 

Rice graduate student Qin-Kun Li and research scientist Alex Kutana are co-lead authors of the paper. Rice assistant research professor Evgeni Penev is a co-author.

Supercritical fluids are materials held at a temperature and pressure that keep them roughly between phases: not entirely liquid, but not yet entirely gas. The properties of sCO2 make it an ideal working fluid because, according to the researchers, it is “essentially inert,” noncorrosive, and low-cost. 

“Eliminating corrosion is a constant challenge, and it’s on a lot of people’s minds right now as the government prepares to invest heavily in infrastructure,” said Yakobson, the Karl F. Hasselmann Professor of Materials Science and NanoEngineering and a professor of chemistry. “Iron is a pillar of infrastructure from ancient times, but only now are we able to get an atomistic understanding of how it corrodes.”

The Rice lab’s simulations reveal the devil’s in the details. Previous studies have attributed corrosion to the presence of bulk water and other contaminants in the supercritical fluid, but that isn’t necessarily the case, Yakobson said.

“Water, as the primary impurity in sCO2, provides a hydrogen bond network to trigger interfacial reactions with CO2 and other impurities like nitrous oxide and to form corrosive acid detrimental to iron,” Li said.

The simulations also showed that the iron itself acts as a catalyst, lowering the reaction energy barriers at the interface between iron and sCO2, ultimately leading to the formation of a host of corrosive species: oxygen, hydroxide, carboxylic acid, and nitrous acid.

To the researchers, the study illustrates the power of theoretical modeling to solve complicated chemistry problems, in this case predicting thermodynamic reactions and estimating corrosion rates at the interface between iron and sCO2. They also showed that all bets are off if there is more than a trace of water in the supercritical fluid, which accelerates corrosion.
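Why does a catalyst lowering reaction barriers matter so much for corrosion rates? A back-of-envelope Arrhenius estimate (illustrative only, with hypothetical barrier heights, not values from the Rice study) shows the exponential sensitivity:

```python
# Arrhenius rate k = A * exp(-Ea / (kB * T)): a modest reduction in the
# energy barrier Ea produces an enormous increase in reaction rate.
import math

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def arrhenius_rate(ea_ev, temp_k=400.0, prefactor=1.0e13):
    """Reaction rate (1/s) for a barrier of ea_ev eV at temperature temp_k."""
    return prefactor * math.exp(-ea_ev / (KB_EV * temp_k))

# Hypothetical barriers: uncatalyzed vs. lowered at the iron surface.
k_bare = arrhenius_rate(1.0)
k_catalyzed = arrhenius_rate(0.7)
print(f"rate enhancement: {k_catalyzed / k_bare:.2e}")
```

Lowering a barrier by just 0.3 eV at 400 K multiplies the rate by thousands, which is why surface catalysis by the iron itself can dominate the corrosion chemistry even when reactive impurities are present only in traces.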