
In physics, the conundrum known as the "few-body problem," the question of how three or more interacting particles behave, has bedeviled scientists for centuries. The equations that describe few-body systems are usually unsolvable, and the methods used to find solutions are unstable. There aren't many models that can probe the wide spectrum of possible few-particle dynamics. A new family of mathematical models for mixtures of quantum particles could help light the way.

"These mathematical models of interacting quantum particles are like lanterns, or islands of simplicity in a sea of complexity and possible dynamics," said Nathan Harshman, American University associate professor of physics and an expert in symmetry and quantum mechanics, who along with his peers created the new models. "They give us something to grip onto to explore the surrounding chaos."

Harshman and his peers describe the work in a paper published in Physical Review X. Theoretical physicists like Harshman work at the atomic level, aiming to solve the mysteries of the basic building blocks of energy, motion and matter. The new models exhibit a broad array of quantum particle interactions, from stable to chaotic, simple to complex, controllable to uncontrollable, and persistent to transitory. If these models could be constructed in a laboratory, the control and coherence provided by the special, solvable cases could serve as a tool in the next generation of quantum information processing devices, such as quantum sensors and quantum computers.

In the last decade or so, physicists have been able to make one-dimensional optical traps for ultracold atoms in the lab. (Only at low temperatures do quantum dynamics emerge.) This led to a flurry of theoretical analyses, as researchers discovered they could make progress on understanding three-dimensional problems by thinking about solutions in terms of simpler, one-dimensional systems.

The researchers' key insight is to work in abstract, higher dimensions. The models describe a few ultracold atoms trapped and bouncing back and forth in a one-dimensional trap. The equation describing four quantum particles in one dimension is mathematically equivalent to the equation describing one particle in four dimensions: each position of this fictional single particle corresponds to a specific arrangement of the four real particles. The breakthrough, Harshman explained, is to use mathematical results about symmetry to find new, solvable few-body systems.

By moving particles to a higher dimensional space and choosing the right coordinates, some symmetries become more obvious and more useful. Then, these symmetries can be used to map a system from the higher dimension back into a simpler model in a lower (but abstract) dimension.
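
To make the mapping concrete (our illustration, not the paper's notation), the Hamiltonian for N trapped particles in one dimension,

$$H=\sum_{i=1}^{N}\Big(\frac{p_i^{2}}{2m_i}+\frac{1}{2}m_i\omega^{2}x_i^{2}\Big)+\sum_{i<j}V(x_i-x_j),$$

is also the Hamiltonian of a single fictitious particle with position \(\mathbf{x}=(x_1,\dots,x_N)\) moving in an N-dimensional harmonic well, with each interaction term \(V(x_i-x_j)\) concentrated on the hyperplane \(x_i=x_j\) where two particles meet. Roughly speaking, a hard-core collision between two particles corresponds to the fictitious particle reflecting off one of these hyperplanes; with unequal masses, the natural mass-scaled coordinates tilt the planes relative to one another, and the solvable cases are those whose reflections close into one of Coxeter's reflection groups.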

Harshman calls these symmetric few-body systems Coxeter models, after the mathematician H.S.M. Coxeter, and they can be defined for any number of particles. The particles can have different masses, which sets them apart from previous solvable models that could only describe particles of equal mass. In particular, when the particle masses and their ordering are chosen correctly, the system exhibits integrable (exactly solvable) dynamics, with as many conserved quantities, like energy and momentum, as degrees of freedom.

So far, solvable few-body systems have only rarely had experimental applications. What comes next is to implement the Coxeter models in a lab. Harshman and his colleagues are talking with experimentalists about how to construct mixed-mass systems that come as close as possible to the integrable cases. Because integrable systems allow for greater coherence, the systems they construct could help unravel some of the most complex concepts in physics, like quantum entanglement. Other proposals include using chains of solitons (stable clumps of atoms), because the masses of solitons can be controlled in an experiment.

This artist’s impression shows how ULAS J1120+0641, a very distant quasar powered by a black hole with a mass 2 billion times that of the sun, may have looked. This quasar is the most distant yet found and is seen as it was just 770 million years after the Big Bang. (European Southern Observatory/M. Kornmesser Photo)

A University of Wyoming researcher played a key role in a study that suggests a newly developed computer model can more accurately explain the diversity of quasar broad emission line regions, the clouds of hot, ionized gas that surround the supermassive black holes feeding in the centers of galaxies.

“We are trying to get at more detailed questions about spectral broad-line regions that help us diagnose the black hole mass,” says Michael Brotherton, a UW professor in the Department of Physics and Astronomy. “People don’t know where these broad emission line regions come from or the nature of this gas.” 

The new study, titled “Tidally Disrupted Dusty Clumps as the Origin of Broad Emission Lines in Active Galactic Nuclei,” was published earlier this month in Nature Astronomy, a monthly, online-only, multidisciplinary journal that publishes the most significant research, review and comment at the cutting edge of astronomy, astrophysics and planetary science. 

Jian-Min Wang, from the Chinese Academy of Sciences, was the paper’s lead author. Other contributing authors were from the Key Laboratory for Particle Astrophysics at the Institute of High Energy Physics, the National Astronomical Observatories of China and the School of Astronomy and Space Science, all at the Chinese Academy of Sciences; and the School of Astronomy and Space Science at Nanjing University in Nanjing, China.

Brotherton says most current computer models look at symmetrical lines in the spectral broad emission line region in active galactic nuclei (AGN), whereas the new model he helped develop looks at real lines, which are often asymmetrical.

“We see and try to reach a deeper understanding of the broad emission line region, where it comes from, its structure and how it can lead to a better understanding of quasars themselves,” he says. “Our model tries to explain the full range of quasars,” which Brotherton describes with humor as “the fire-breathing, bat-winged, vampire rainbow zebra unicorns of astronomical phenomena.”

The black hole’s gravity accelerates the surrounding gas in these quasars to extremely high velocities, Brotherton explains. The gas heats up and, in turn, outshines the entire surrounding galaxy.

“People think, ‘It’s a black hole. Why is it so bright?’ A black hole is still dark,” he says. “The discs reach such high temperatures that they put out radiation across the electromagnetic spectrum, which includes gamma rays, X-rays, UV, infrared and radio waves. The accreting gas that the black hole is feeding on is the fuel that turns on the quasar.”

The gases, like wispy fires, put out colors of light, described by Brotherton as similar to “giant neon signs in space.” The gases move at thousands of kilometers per second, with the blue-shifted gases moving toward us and the red-shifted gases moving away from us. This effect broadens the lines but doesn’t actually make the gases red or blue, he says.
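
As a rough worked example (ours, not from the study), the non-relativistic Doppler shift is

$$\frac{\Delta\lambda}{\lambda}\approx\frac{v}{c},$$

so gas moving at v = 3,000 km/s gives \(\Delta\lambda/\lambda \approx 3\times10^{3}/3\times10^{5} = 0.01\). The Hβ line at 4861 Å would therefore be smeared over roughly ±49 Å, which is why the “broad” emission lines are so much wider than lines from slower-moving gas elsewhere in the galaxy.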

At the broad emission line region, those separate colors become a spiral of colors, a measure of the velocity of surrounding dust clouds.

The model includes what Brotherton terms “a swarming donut of dusty gas.” Dusty clouds or clumps are contained in this donut that surrounds the quasar discs.

“What we propose happens is these dusty clumps are moving. Some bang into each other and merge, and change velocity,” he says. “Maybe they move into the quasar, where the black hole lives. Some of the clumps spin in from the broad-line region. Some get kicked out.”

The research was supported by the National Key Program for Science and Technology Research and Development, and the Key Research Program of Frontier Sciences at the Chinese Academy of Sciences.

“It is an important first step forward in looking at these emission lines that we use to determine the black hole mass,” Brotherton says.

A new early warning system to alert farmers to the risk of disease among their young cattle stock is being developed by experts at The University of Nottingham.

The innovation, dubbed Y-Ware, could save the UK farming industry millions of pounds, while improving the health and welfare of animals and reducing the use of antimicrobials to treat these diseases.

The £1.13 million project is a partnership with farming digitalisation specialists PrognostiX and BT, and is supported by a grant from Innovate UK, the UK Government-funded innovation agency.

Dr Jasmeet Kaler, Associate Professor of Epidemiology and Farm Animal Health, who currently leads ruminant population research in the University’s School of Veterinary Medicine and Science, is the academic lead on the project.

She said: “Improving youngstock health on cattle farms is a key priority for the cattle industry and has also been identified by the industry task force RUMA (Responsible Use of Medicines in Agriculture Alliance) as one of the key targets, released last week, for antibiotic reduction on cattle farms, especially beef. Use of innovative and precision health technologies offers a great solution in this direction. Whilst there has been an increase in the availability of various technologies for livestock over the past decade, there are none that target youngstock health, and overall very few precision livestock technologies have been validated in the field or combine various sources of data with multiple transmission protocols to develop algorithms for livestock health and welfare. Our group does impactful, cutting-edge research into the health and welfare of UK cattle and sheep, with a special focus on endemic disease in populations.

“In this project, we are leading the data analytics, working alongside our partners. We will combine our domain knowledge of disease biology and epidemiology with various machine learning approaches applied to the data gathered via sensors. Our overall aim is to develop an innovative technology that combines different formats of data, uses the Internet of Things and advanced analytics for early detection of disease in youngstock, and thus allows targeted use of antibiotics.”

Cattle farmers are facing major challenges in remaining profitable while maintaining the high standards of animal welfare demanded by retailers and consumers.

Every year, of the 2.5 million calves born in the UK, eight per cent are dead at birth or die within 24 hours, and a further 15 per cent die during rearing from diarrhoea and pneumonia, costing the UK cattle industry £80 million. A pneumonia outbreak costs £81 per calf, and a diarrhoea outbreak £57 per calf.

Bolus sensors, which sit in an animal’s gut and monitor body temperature or pH, are in widespread use in cattle, but are currently only available for adult cows. In addition, many of the technologies already on farms don’t talk to each other, which limits the predictive value of the data they collect.

The Y-Ware project aims to develop a bolus sensor that could be used in calves as young as 14 weeks, along with a dashboard that uses machine learning techniques to give farmers an early warning system for health problems. The dashboard will combine the bolus sensor information with data about the animal collected from a range of additional sources, including building temperature, humidity, farm and vet records, and weight.

All the information would be used to produce baseline data and a specific ‘signature’ for the animal. Unusual changes to this signature, for example an unexpected rise in body temperature, could allow farmers to spot the signs of disease, treat early and quarantine the animal to prevent wider outbreaks among the herd.
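
The press release does not spell out the algorithm, but a minimal sketch of the kind of baseline-and-deviation alert it describes might look like the following (the window length, threshold and sampling rate are illustrative assumptions of ours, not details of Y-Ware):

```python
import numpy as np

def temperature_alerts(temps, window=48, z_threshold=3.0):
    """Flag readings that deviate sharply from an animal's own rolling baseline.

    temps       : body-temperature readings for one animal (e.g. every 30 min from a bolus)
    window      : number of past readings used as the baseline "signature"
    z_threshold : how many standard deviations above baseline counts as unusual
    """
    temps = np.asarray(temps, dtype=float)
    alerts = []
    for i in range(window, len(temps)):
        baseline = temps[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and (temps[i] - mu) / sigma > z_threshold:
            alerts.append(i)  # index of the suspicious reading
    return alerts

# Example: a stable calf temperature around 38.8 C with a fever-like spike at the end
readings = [38.8 + 0.1 * np.sin(i / 5) for i in range(200)] + [40.2, 40.5, 40.6]
print(temperature_alerts(readings))  # flags the spiked readings at the end
```

In practice Y-Ware plans to combine many more inputs (humidity, weight, farm and vet records) and to use machine learning rather than a fixed threshold; this only illustrates the “signature plus unusual change” idea.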

The development will allow farmers to target the use of antibiotics to treat these diseases more effectively. That, in turn, will tackle overuse of the drugs, which is contributing to the problem of antibiotic resistance in both animals and humans, who are exposed to increasing levels through the food chain.

Y-Ware will develop an Internet of Things (IoT)-based data collection solution including:

  • A specific real-time, 24/7 temperature sensor with combined tamper-proof animal ID verification
  • Easy collection of data from a range of incompatible sensors (both wearable and non-wearable) on youngstock via wireless technology
  • A fully automated weighing platform to collect data on cattle weight without the need for human intervention
  • A communications hub to collect and process the remote data
  • A web dashboard offering access to customisable reports that will provide farmers and vets with essential information on individuals and groups of animals. This will provide an early warning system for disease, a ‘welfare score’ and detailed antibiotic usage records.

The consortium is made up of specialists in engineering technology, software development, veterinary epidemiology, cattle health and data science, cloud computing and data analytics.

Alan Beynon, who is a Director of PrognostiX, Director of St David’s Poultry Team and Managing Director of Molecare Farm Vets, said: “This is a very exciting time for veterinarians in practice in all sectors of agriculture, as the pressure to reduce antimicrobials is current and pressing. The use of real-time data to make clinical decisions, alongside better diagnostic facilities, is an integral part of where the future lies. We are delighted to be working alongside our dynamic partners Nottingham University and British Telecom.”

Martin Tufft, IoT Director at BT, said: “We’re providing expertise around data science and analytics, exploring the data generated from multiple sensors with a view to developing unique algorithms and machine learning techniques to support the project. The application of advanced data analytics is key to the success of IoT solutions, and we look forward to helping this project provide valuable information for the farming industry.”

This is an example of the software’s performance. (top) Atomic displacement of the model structure as a function of depth. (bottom) Scattered X-ray intensity profiles calculated from the model structure (demo data, open circles), the initial structural model (blue curve) and the result of the refinement (red curve). The analysis shown here uses demo data to demonstrate the accuracy of the method; an analysis of an experimentally obtained dataset is also reported.

Osaka University-led researchers develop a Bayesian probability-based computer program to help work out the structure of perovskite oxides at their interfaces

Perovskites are a type of mineral and class of materials, and have been attracting a great deal of attention for their potential applications to technologies such as those used in solar cells. These unique materials have well-ordered structures and show many interesting properties that could be useful in other areas of electronics. Such a variety of properties in the same structural backbone allows different kinds of perovskites, with different properties, to be evenly joined together without breaking lattice coherency. Being able to examine the structures at these interfaces is important for researchers studying perovskites, but currently used techniques have insufficient resolution or produce complex results that are very difficult to analyze.

Now, Osaka University-led researchers have found a way to model perovskite oxide interfaces with great precision and accuracy using a new supercomputerized approach to picking out the correct structure from X-ray data. They recently reported their findings in the Journal of Applied Crystallography.

"Using typical scanning transmission electron microscopy on perovskite oxides requires samples to be cut, which can damage the surface and affect the resolution," study lead author Masato Anada says. "Surface X-ray di?raction approaches avoid these effects but analyzing the data is complex, so few people are using this method. Our Monte Carlo-based refinement method provides a fast way to search for the most probable structure from X-ray data, and is versatile enough to be applied to more variable interfaces."

Monte Carlo methods help predict what the structure of an interface probably looks like. By making small changes, with certain restrictions, many different possible structures can be randomly simulated.

Applying this technique to the interface between perovskites and comparing simulated X-ray data with real measurements allows the researchers to rapidly identify the most likely perovskite structures.
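
The published refinement is considerably more sophisticated, but the Metropolis-style loop at its core can be sketched as follows (the toy "simulator", step size and temperature are placeholders of ours, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def chi2(params, q, intensity_obs, simulate):
    """Misfit between measured intensities and those simulated from a trial structure."""
    return np.sum((simulate(params, q) - intensity_obs) ** 2)

def monte_carlo_refine(params0, q, intensity_obs, simulate,
                       n_steps=10000, step_size=0.01, temperature=1.0):
    """Randomly perturb structural parameters, keeping changes that fit the data
    better (and occasionally worse ones, to escape local minima)."""
    params = np.array(params0, dtype=float)
    best = params.copy()
    cost = best_cost = chi2(params, q, intensity_obs, simulate)
    for _ in range(n_steps):
        trial = params + rng.normal(scale=step_size, size=params.shape)
        trial_cost = chi2(trial, q, intensity_obs, simulate)
        # Metropolis criterion: always accept improvements, sometimes accept worse fits
        if trial_cost < cost or rng.random() < np.exp((cost - trial_cost) / temperature):
            params, cost = trial, trial_cost
            if cost < best_cost:
                best, best_cost = params.copy(), cost
    return best, best_cost

# Toy usage with a stand-in "simulator" (a damped oscillation in q); in the real
# method this would be the X-ray scattering calculation for the trial structure.
q = np.linspace(0.1, 2.0, 100)
simulate = lambda p, q: p[0] * np.exp(-p[1] * q) * np.cos(p[2] * q) ** 2
data = simulate([1.0, 0.5, 6.0], q)
fit, misfit = monte_carlo_refine([0.8, 0.3, 5.5], q, data, simulate)
```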

They tested their new method on a simulated X-ray dataset from a realistic interface structure between two types of perovskite oxides, and the final structure refined by their modelling was very close to the actual structure of the interface.

"Features of perovskite interfaces are ideal for testing out certain theories in condensed matter physics and for making new types of electronic materials system," coauthor Yusuke Wakabayashi says. "Our approach makes analyzing the complex structural data of these interfaces much easier, and it's also robust for uneven interfacial structures. This approach should be useful for anyone currently investigating these structures."

Rajeev Prabhakar

Researchers at the University of Miami identify binding site on amyloid beta peptide, learn to modify its structure

A probe that lights up when it binds to a misfolded amyloid beta peptide, the kind suspected of causing Alzheimer's disease, has identified a specific binding site on the protein that could facilitate better drugs to treat the disease. Even better, the lab has discovered that when the metallic probe is illuminated, it catalyzes oxidation of the protein in a way they believe might keep it from aggregating in the brains of patients.

The study done on long amyloid fibrils backs up supercomputer simulations by the Center for Computational Science at the University of Miami that predicted the photoluminescent metal complex would attach itself to the amyloid peptide near a hydrophobic (water-avoiding) cleft that appears on the surface of the fibril aggregate. That cleft presents a new target for drugs.

Finding the site was relatively simple once the lab of Angel Martí used its rhenium-based complexes to target fibrils. The light-switching complex glows when hit with ultraviolet light, but when it binds to the fibril it becomes more than 100 times brighter and causes oxidation of the amyloid peptide.

"It's like walking on the beach," Marti said. "You can see that someone was there before you by looking at footprints in the sand. While we cannot see the rhenium complex, we can find the oxidation (footprint) it produces on the amyloid peptide.

"That oxidation only happens right next to the place where it binds," he said. "The real importance of this research is that allows us to see with a high degree of certainty where molecules can interact with amyloid beta fibrils."

"The binding sites of the rhenium complex on fibrils were not known experimentally. That's where our computational techniques became invaluable," said University of Miami Chemist Rajeev Prabhakar. "Our computer modeling predicted binding sites and the modes of interactions of these molecules at the atomic level." 

The study appears in the journal Chem.

"We believe this hydrophobic cleft is a general binding site (on amyloid beta) for molecules," Martí said. "This is important because amyloid beta aggregation has been associated with the onset of Alzheimer's disease. We know that fibrillar insoluble amyloid beta is toxic to cell cultures. Soluble amyloid oligomers that are made of several misfolded units of amyloid beta are also toxic to cells, probably even more than fibrillar.

"There's an interest in finding medications that will quench the deleterious effects of amyloid beta aggregates," he said. "But to create drugs for these, we first need to know how drugs or molecules in general can bind and interact with these fibrils, and this was not well-known. Now we have a better idea of what the molecule needs to interact with these fibrils."

When amyloid peptides fold properly, they hide their hydrophobic residues while exposing their hydrophilic (water-attracting) residues to water. That makes the proteins soluble, Martí said. But when amyloid beta misfolds, it leaves two hydrophobic residues, known as Valine 18 and Phenylalanine 20, exposed to create the hydrophobic cleft. This binding site was proposed in the Prabhakar lab and was experimentally confirmed in the Martí lab.

"It's perfect, because then molecules with hydrophobic domains are driven to bind there," Martí said. "They are compatible with this hydrophobic cleft and associate with the fibril, forming a strong interaction."

These results can lead to the development of photodynamic therapy for Alzheimer's disease.

"We found multiple oxygen binding locations adjacent to the oxidation site," said Prabhakar. "We were quite surprised by the ability of these fibrils to trap oxygen molecules."

If the resulting oxidation keeps the fibrils from aggregating further into the sticky substance found in the brains of Alzheimer's patients, it may be the start of a useful strategy to stop aggregation before symptoms of the disease appear.

"It's a very attractive system because it uses light, which is a cheap resource," Martí said. "If we can modify complexes so they absorb red light, which is transparent to tissue, we might be able to perform these photochemical modifications in living animals, and maybe someday in humans."

He said light activation allows the researchers to have "exquisite control" of oxidation.

"We imagine it might be possible someday to prevent symptoms of Alzheimer's by targeting amyloid beta in the same way we treat cholesterol in people now to prevent cardiovascular disease," Martí said. "That would be wonderful."

CAPTION Scientists from Boston College and Harvard turned to copper to create a first-of-its-kind iridate -- Cu2IrO3 -- where the natural magnetic order is disrupted, a state known as geometric frustration.

Honeycomb lattice meets elusive standards of the Kitaev model

Researchers from Boston College and Harvard have created an elusive honeycomb-structured material capable of frustrating the magnetic properties within it in order to produce a chemical entity known as "spin liquid," long theorized as a gateway to the free-flowing properties of quantum computing, according to a new report in the Journal of the American Chemical Society.

The first-of-its-kind copper iridate metal oxide - Cu2IrO3 - is one where the natural magnetic order is disrupted, a state known as geometric frustration, said Boston College Assistant Professor of Physics Fazel Tafti, a lead author of the study, titled Cu2IrO3: a new magnetically frustrated honeycomb iridate.

The copper iridate is an insulator - its electrons are immobilized in the solid - but they can still transport a magnetic moment known as "spin". The transport of free spins in the material allows for a flow of quantum information.

The Kitaev model, proposed in 2006 by Caltech Professor of Physics Alexei Kitaev, holds that a hexagonal honeycomb structure offers a promising route to geometric frustration and, therefore, to a quantum spin liquid.

Only two honeycomb lattices had previously been developed in attempts to fulfill Kitaev's model: a lithium iridate (Li2IrO3) and a sodium iridate (Na2IrO3). Yet both fell short of achieving an ideal spin liquid due to magnetic ordering, said Tafti, who co-authored the paper with Boston College post-doctoral researchers Mykola Abramchuk and Jason W. Krizan, BC Adjunct Professor of Chemistry and Director of Advanced Chemistry Laboratories Kenneth R. Metz, and Harvard's David C. Bell and Cigdem Ozsoy-Keskinbora.

Tafti and his team turned to copper due to its ideal atomic size, which is between lithium and sodium. Their studies in x-ray crystallography found subtle flaws in the honeycombs formed in the lithium and sodium iridates. The team swapped copper for sodium in what Tafti termed a relatively simple "exchange" reaction. The effort produced the first oxide of copper and iridium, Tafti said.

"Copper is ideally suited to the honeycomb structure," said Tafti. "There is almost no distortion in the honeycomb structure."

A decade after the original prediction of quantum spin liquid on a honeycomb lattice by Kitaev, the young team of scientists from Boston College succeeded in making a material that almost exactly corresponds to the Kitaev model, Tafti said.

Tafti's lab will pursue the "exchange" chemistry path to make new forms of honeycomb materials with more exotic magnetic properties, he said.

A new electronics-cooling technique relies on microchannels, just a few microns wide, embedded within the chip itself. The device was built at Purdue University’s Birck Nanotechnology Center. (Purdue University photo/ Kevin P. Drummond)

Researchers have developed a new type of cooling system for high-performance radars and supercomputers that circulates a liquid coolant directly into electronic chips through an intricate series of tiny microchannels.

Conventional chip-cooling methods use finned metal plates called heat sinks, which are attached to computer chips to dissipate heat. Such attachment methods, however, do not remove heat efficiently enough for an emerging class of high-performance electronics, said Suresh V. Garimella, who is principal investigator for the project and the Goodson Distinguished Professor of Mechanical Engineering at Purdue University.

New advanced cooling technologies will be needed for high-performance electronics that contain three-dimensional stacks of processing chips instead of a single, flat-profile chip. Too much heat hinders the performance of electronic chips or damages the tiny circuitry, especially in small "hot spots."

“You can pack only so much computing power into a single chip, so stacking chips on top of each other is one way of increasing performance,” said Justin A. Weibel, a research associate professor in Purdue’s School of Mechanical Engineering, and co-investigator on the project. “This presents a cooling challenge because if you have layers of many chips, normally each one of these would have its own system attached on top of it to draw out heat. As soon as you have even two chips stacked on top of each other the bottom one has to operate with significantly less power because it can’t be cooled directly.”

The solution is to create a cooling system that is embedded within the stack of chips.

The work has been funded with a four-year grant issued in 2013 totaling around $2 million from the U.S. Defense Advanced Research Projects Agency (DARPA). New findings are detailed in a paper appearing on Oct. 12 in the International Journal of Heat and Mass Transfer.

 “I think for the first time we have shown a proof of concept for embedded cooling for Department of Defense and potential commercial applications,” Garimella said. “This transformative approach has great promise for use in radar electronics, as well as in high-performance supercomputers. In this paper, we have demonstrated the technology and the unprecedented performance it provides.”

A fundamental requirement stipulated by DARPA is the ability to handle chips generating a kilowatt of heat per square centimeter, more than 10 times greater than in conventional high-performance computers.

“This number of 1,000 watts per square centimeter is sort of a Holy Grail of microcooling, and we’ve demonstrated this capability in a functioning system with an electrically insulated liquid,” Garimella said.

Purdue doctoral student Kevin Drummond led much of the research. (Purdue University photo/ Jared Pike)

Much of the integration and testing of the system was performed by Purdue doctoral student Kevin Drummond. Key to fabrication of the devices used in the demonstration were teams led by co-investigators David Janes, a professor of electrical and computer engineering, and Dimitrios Peroulis, a professor of electrical and computer engineering and Deputy Director of the Birck Nanotechnology Center in Purdue’s Discovery Park.

The team has presented preliminary findings in several conference papers during the course of the project. The researchers received a best paper award last year in the emerging technologies category, and additional papers will be published, Garimella said.

The system uses a commercial refrigerant called HFE-7100, a dielectric, or electrically insulating fluid, meaning it won’t cause short circuits in the electronics. As the fluid circulates over the heat source, it boils inside the microchannels.

“Allowing the liquid to boil dramatically increases how much heat can be removed, compared to simply heating a liquid to below its boiling point,” he said.
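
A back-of-the-envelope comparison shows why boiling helps (the HFE-7100 property values below are approximate handbook figures, and the 20 K of sensible heating is an assumption of ours, not a number from the Purdue work):

```python
# Approximate properties of HFE-7100 near its boiling point (~61 C)
latent_heat = 112e3      # J/kg, heat absorbed by vaporizing the liquid (approx.)
specific_heat = 1180.0   # J/(kg*K), heat absorbed by warming the liquid (approx.)
subcooling = 20.0        # K of sensible heating available before boiling (assumed)

sensible = specific_heat * subcooling   # J per kg of coolant without boiling
with_boiling = sensible + latent_heat   # J per kg if the liquid also vaporizes

print(f"sensible only : {sensible/1e3:.0f} kJ/kg")
print(f"with boiling  : {with_boiling/1e3:.0f} kJ/kg  "
      f"(~{with_boiling/sensible:.1f}x more heat per kg of coolant)")
```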

The team created an elaborate testing apparatus that simulates the heat generated by real devices. An array of heaters and temperature sensors allow the researchers to test the system under a range of conditions, including the effects of hot spots. The testing system was fabricated at the Birck Nanotechnology Center.

The new approach improves efficiency by eliminating the need to attach cooling devices to chips.

 “Any time you are attaching heat sinks to the chip there are a lot of resistances and inefficiencies associated with that interface,” Garimella said.

This interfacial, or “parasitic,” thermal resistance limits the performance of heat sinks.

“We are going to a technology that eliminates those interfaces because the cooling is occurring inside the chips,” Weibel said.

Using ultra-small channels allows for high performance.

“It’s been known for a long time that the smaller the channel the higher the heat-transfer performance,” Drummond said. “We are going down to 15 or 10 microns in channel width, which is about 10 times smaller than what is typical for microchannel cooling technologies.”

The new design solves one major obstacle to perfecting such systems: although using ultra-small channels increases the cooling performance, it is difficult to pump the required rates of liquid flow through the tiny microchannels. The Purdue team overcame this problem by designing a system of short, parallel channels instead of long channels stretching across the entire length of the chip. A special “hierarchical” manifold distributes the flow of coolant through these channels.

“So, instead of a channel being 5,000 microns in length, we shorten it to 250 microns long,” Garimella said. “The total length of the channel is the same, but it is now fed in discrete segments, and this prevents major pressure drops. So this represents a different paradigm.”
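
A simplified laminar-flow estimate illustrates the scaling (each deep, narrow channel is treated as a parallel-plate gap; the velocity and viscosity values are assumptions of ours, not the paper's):

```python
# Laminar pressure drop between parallel plates: dp = 12 * mu * L * u / w**2
mu = 5.8e-4   # Pa*s, approximate viscosity of liquid HFE-7100 (assumed)
u = 1.0       # m/s, assumed mean coolant velocity in a channel
w = 15e-6     # m, channel width (the narrow dimension)

def pressure_drop(length_m):
    return 12 * mu * length_m * u / w**2   # Pa

long_channel = pressure_drop(5000e-6)   # one channel spanning the chip
short_segment = pressure_drop(250e-6)   # one segment fed by the manifold

print(f"5000-um channel : {long_channel/1e3:.0f} kPa")
print(f"250-um segment  : {short_segment/1e3:.1f} kPa "
      f"({long_channel/short_segment:.0f}x lower per segment)")
```

At a given flow velocity, laminar pressure drop grows linearly with channel length, so feeding twenty 250-micron segments in parallel instead of one 5,000-micron channel cuts the pressure each segment must be driven across by roughly a factor of 20.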

Peroulis and his students handled fabrication of the channels, a task made especially difficult by the need for “high aspect ratios,” meaning the microscopic grooves are far deeper than they are wide. The channels were etched in silicon with a width of about 15 microns but a depth of up to 300 microns.

“So, they are about 20 times as deep as they are wide, which is a non-trivial challenge from a fabrication perspective, particularly for repeatable and low-cost manufacturing processes,” Peroulis said.

Janes and his students designed and built the intricate heating and sensing portions of the testing apparatus.

“It is a complex task to be able to simulate the generation of hotspots and different heating scenarios while simultaneously having an accurate measure of the temperatures,” Janes said.

Other members of the team focused on computational models to describe the physics of the cooling technology.

The new journal paper was authored by Drummond; doctoral student Doosan Back; Michael D. Sinanis, a manufacturing engineer and process development manager; Janes; Peroulis; Weibel and Garimella. Although the team has recently completed the DARPA-funded project, the overall research is ongoing.

The technology has evolved from work originating in the Purdue-based National Science Foundation Cooling Technologies Research Center. The center, formed in 1999, is a consortium of corporations, government laboratories and the university working to overcome heat-generation problems in electronic systems by developing new compact cooling technologies. More than 60 undergraduate students and about 100 graduate students have performed research through the center. The center also has involved about 15 Purdue faculty members from a variety of fields, from electrical engineering to chemistry.

In 2011, Garimella received the NSF Industry/University Cooperative Research Center Association's Alexander Schwarzkopf Prize for Technological Innovation on behalf of the cooling-research center.  Earlier, Indiana's 21st Century Research and Technology Fund provided $3.8 million to help commercialize an advanced cooling system for hybrid and electric cars. Research in the center is conducted in partnership with Toyota.

Classifying sickle cell disease (SCD) red blood cells in an automated manner with high accuracy, based on a deep convolutional neural network, for eight SCD patients (over 7,000 single RBC images), covering both oxygenated and deoxygenated RBCs.

Deep learning approach could aid in sickle cell disease monitoring

Using a computational approach known as deep learning, scientists have developed a new system to classify the shapes of red blood cells in a patient's blood. The findings, published in PLOS Computational Biology, could potentially help doctors monitor people with sickle cell disease.

A person with sickle cell disease produces abnormally shaped, stiff red blood cells that can build up and block blood vessels, causing pain and sometimes death. The disease is named after sickle-shaped (crescent-like) red blood cells, but it also results in many other shapes, such as oval or elongated red blood cells. The particular shapes found in a given patient can hold clues to the severity of their disease, but it is difficult to manually classify these shapes.

To automate the process of identifying red blood cell shape, Mengjia Xu of Northeastern University, China, and colleagues developed a computational framework that employs a machine-learning tool known as a deep convolutional neural network (CNN).

The new framework uses three steps to classify the shapes of red blood cells in microscopic images of blood. First, it distinguishes red blood cells from the background of each image and from each other. Then, for each cell detected, it zooms in or out until all cell images are a uniform size. Finally, it uses deep CNNs to categorize the cells by shape.
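
The paper's network is more elaborate, but the three-step pipeline maps onto a short sketch like the following (the 128-pixel patch size, layer sizes and five shape classes are illustrative assumptions of ours, and the segmentation and resizing steps are represented only by a placeholder batch):

```python
import numpy as np
import tensorflow as tf

NUM_CLASSES = 5   # number of red-blood-cell shape categories (assumed)
PATCH = 128       # all segmented cell images are resized to PATCH x PATCH

def build_classifier():
    """Small convolutional network that maps a single-cell patch to a shape class."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(PATCH, PATCH, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

# Steps 1 and 2 (segmenting cells out of the microscopy image and resizing each one
# to PATCH x PATCH) are assumed to have produced the `cells` batch below.
cells = np.random.rand(32, PATCH, PATCH, 1).astype("float32")   # placeholder batch
labels = np.random.randint(0, NUM_CLASSES, size=32)             # placeholder labels

model = build_classifier()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(cells, labels, epochs=1, verbose=0)   # step 3: classify cells by shape
```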

The researchers validated their new tool using 7,000 microscopy images from eight sickle cell disease patients. They found that the automated method successfully classified red blood cell shape for both oxygenated and deoxygenated cells (red blood cells transport oxygen to tissues throughout the body).

"We have developed the first deep learning tool that can automatically identify and classify red blood cell alteration, hence providing direct quantitative evidence of the severity of the disease," says study co-author George Karniadakis.

The research team plans to further improve their deep CNN tool and test it in other blood diseases that alter the shape and size of red blood cells, such as diabetes and HIV. They also plan to explore its usefulness in characterizing cancer cells.

Naveen Vaidya

Some 37 million people around the world live with human immunodeficiency virus (HIV), which is responsible for roughly 1.1 million deaths each year from AIDS-related conditions.

The virus replicates by inserting itself into the genetic code of CD4+ memory T-cells, human immune cells essential to the body’s immune response. While antiretroviral therapy (ART) can interfere with this replication process, complete elimination of the virus is a challenge, since HIV maintains latent viral reservoirs within the body that can help re-establish infection. Viral reservoirs exist within resting CD4+ memory T-cells that maintain replication-competent HIV for extended time periods, allowing viral persistence even in the face of immune surveillance or antiretroviral therapy.

As Naveen Vaidya, a mathematics professor at San Diego State University, explains, “Currently there is no cure for HIV, presumably due to the establishment of latently infected cells that cannot be destroyed by available antiretroviral therapy. Hence, the primary focus of current HIV research has been to destroy HIV latent infections, and as part of this effort, initiating antiretroviral therapy early in infection to avoid the formation of latently infected cells has been considered a potential means of achieving an HIV cure.”

While early treatment has been shown to limit and possibly even eradicate the virus in the case of pre-exposure and post-exposure prophylaxis, some studies have shown mixed results with regard to its effect on viral rebound. Additionally, the factors determining the success of early treatment are poorly understood, and the exact timing of latent reservoir establishment in humans after infection is not known. This necessitates the development of effective strategies to control latently infected cells. The pharmacodynamic properties of drugs and their effects on treatment success have so far received little attention.

In a paper publishing this week in the SIAM Journal on Applied Mathematics, Vaidya and Libin Rong, a mathematics professor at the University of Florida, propose a mathematical model that investigates the effects of drug parameters and dosing schedules on HIV latent reservoirs and viral load dynamics.

“Our research uses mathematical modeling to gain deeper insights into the effects of antiretroviral therapy on HIV latent infections, and highlights that the pharmacodynamics of drugs—and thus, the choice of drugs—used in treatment regimens can be a determinant factor for successful therapy,” Vaidya says.

While previous mathematical models have helped analyze dynamics of latently-infected cells, studies exploring antiretroviral therapy and the resulting pharmacodynamics in latent reservoir dynamics are lacking.

“We have developed theories of infection threshold that help identify values of drug-related parameters for avoiding latent infections,” Vaidya says. “Our results on detailed analysis of pharmacodynamics can contribute significantly to the study of drug-related parameters for controlling HIV latent infection and possibly HIV cure.”

Their model specifically focuses on the impact of antiretroviral therapy early in treatment to control latently infected cells. Using a realistic periodic drug intake scenario to obtain a periodic model system, the authors study local as well as global properties of infection dynamics, described via differential equations. The model takes into account uninfected target cells, productively infected cells, latently infected cells, and free virus concentrations as mutually exclusive compartments.

Currently available antiretroviral therapy demonstrates antiviral activity either by reducing the infection rate or viral production rate. Based on a classical dose-response relationship, the authors formulate residual viral infectivity and residual viral production during antiretroviral therapy.
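
To give a sense of the structure, a standard formulation of this class of models (not necessarily the authors' exact equations) reads

$$\frac{dT}{dt}=\lambda-d_T T-(1-\varepsilon(t))\,\beta T V,\qquad \frac{dL}{dt}=\eta\,(1-\varepsilon(t))\,\beta T V-(a+d_L)L,$$
$$\frac{dI}{dt}=(1-\eta)(1-\varepsilon(t))\,\beta T V+aL-\delta I,\qquad \frac{dV}{dt}=(1-\varepsilon_p(t))\,p I-cV,$$

where T, L, I and V are the uninfected target cells, latently infected cells, productively infected cells and free virus, η is the fraction of new infections that become latent, and a is the rate at which latent cells activate. The residual infectivity \(1-\varepsilon(t)\) and residual viral production \(1-\varepsilon_p(t)\) follow the classical dose-response (Hill) form

$$\varepsilon(t)=\frac{C(t)^{m}}{C(t)^{m}+\mathrm{IC}_{50}^{m}},$$

with C(t) the periodically varying drug concentration; the slope m, the ratio of peak concentration to IC50, the drug half-life and the dosing interval are precisely the pharmacodynamic quantities whose influence the analysis examines.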

Variations in specific drug parameters are shown to generate either an infection-free steady state or persistent infection. A viral invasion threshold, derived based on the model, is seen to govern the global stability of the infection-free steady state and viral persistence.

“Pharmacodynamic parameters and dosing schedule can have significant impact on outcomes of infection dynamics in HIV patients. This effect is particularly pronounced in early and preventive therapy,” Vaidya points out. The authors show that the invasion threshold is highly dependent on a few pharmacodynamic parameters. Vaidya continues, “These parameters can determine whether latent infection will establish or not; in general, treatment regimens containing drugs with a larger slope of the dose-response curve, a higher ratio of the maximum dosage to the 50% inhibitory concentration, a longer half-life and a smaller dosing interval, have the potential to prevent or postpone the establishment of viral infection. Thus, choice of drugs is key to successful cure via early therapy.”

Vaidya’s results demonstrate that prophylaxis or very early treatment using drugs with a good pharmacodynamic profile can potentially prevent or postpone establishment of viral infection. Only drugs with proper pharmacodynamic properties given at proper intervals can successfully combat infection. “However, once the latent infection is established, the pharmacodynamic parameters have less effect on the latent reservoir and virus dynamics,” Vaidya says. “This is because the latent reservoir can be maintained by homeostasis of latently infected cells or other mechanisms rather than ongoing residual viral replication.”

Efforts to maximize the impact of HIV therapy and the performance of different treatment regimens are essential to curtail the disease’s burden on public health. Mathematical modeling offers a theoretical framework to evaluate drug pharmacodynamics and antiviral effects on HIV dynamics.

“Mathematical models can be used to analyze and simulate a large number of treatment scenarios, which are often impossible or extremely difficult to study in in vivo or in vitro experimental settings,” Vaidya explains. “The results from these models can also provide novel themes for further experiments. For example, our modeling results in this study suggest that drugs with a larger slope of the dose-response curve, such as protease inhibitors, are more effective in controlling latent infections, and thus such drugs should be included in treatment regimens in further experimental studies.”

While these theoretical results offer useful ideas for developing treatment protocols, in vivo and in vitro experimental studies are needed to properly design treatment regimens for successful control of latent infections. “Our group and collaborators will continue to develop mathematical models to study the effects of pharmacodynamics on latent HIV infection, including models with the emergence of drug resistance,” says Vaidya. “Furthermore, our future modeling work will include the effects of drug pharmacodynamics on treatment outcomes in HIV patients under conditions of drugs of abuse, and will identify optimal control regimens for successful reduction of latent infections.”

