CU physicists' simulations suggest the sun has a dual personality

Researchers at CU Boulder have discovered hints that humanity's favorite star may have a dual personality, with intriguing discrepancies in its magnetic fields that could hold clues to the sun's own "internal clock."

Physicists Loren Matilsky and Juri Toomre developed a supercomputer simulation of the sun's interior as a means of capturing the roiling turmoil inside the star. In the process, the team spotted something unexpected: On rare occasions, the sun's internal dynamics may jolt out of their normal routines and switch to an alternate state--a bit like a superhero trading the cape and cowl for civilian clothes.

While the findings are only preliminary, Matilsky said, they may line up with real observations of the sun dating back to the 19th century.

He added that the existence of such a solar alter ego could provide physicists with new clues to the processes that govern the sun's internal clock--a cycle in which the sun switches from periods of high activity to low activity about once every 11 years.

"We don't know what is setting the cycle period for the sun or why some cycles are more violent than others," said Matilsky, a graduate student at JILA. "Our ultimate goal is to map what we're seeing in the model to the sun's surface so that we can then make predictions."

He will present the team's findings at a press briefing today at the 234th meeting of the American Astronomical Society in St. Louis.

The study takes a deep look at a phenomenon that scientists call the solar "dynamo," essentially a concentration of the star's magnetic energy. This dynamo is formed by the spinning and twisting of the hot gases inside the sun and can have big impacts--an especially active solar dynamo can generate large numbers of sunspots and solar flares, or globs of energy that blast out from the surface.

But that dynamo isn't easy to study, Matilsky said. That's because it mainly forms and evolves within the sun's interior, far out of range of most scientific instruments.

"We can't dive into the interior, which makes the sun's internal magnetism a few steps removed from real observations," he said.

To get around that limitation, many solar physicists use massive supercomputers to try to recreate what's occurring inside the sun.

Matilsky and Toomre's simulation examines activity in the outer third of that interior, which Matilsky likens to "a spherical pot of boiling water."

And, he said, this model delivered some interesting results. When the researchers ran their simulation, they first found that the solar dynamo formed to the north and south of the sun's equator. Following a regular cycle, that dynamo moved toward the equator and stopped, then reset in close agreement with actual observations of the sun.
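For a feel for that pattern, the toy sketch below draws a "butterfly diagram" in which magnetic bands appear at mid-latitudes in each hemisphere and drift toward the equator on a nominal 11-year cycle. It is purely illustrative--it is not the team's simulation, and every number in it is made up.

```python
# Toy, purely illustrative "butterfly diagram" -- NOT the Matilsky/Toomre
# simulation, which is a full 3D model of turbulent convection. This only
# visualizes the cycle described above: magnetic bands appear at
# mid-latitudes in each hemisphere, drift toward the equator, then reset.
import numpy as np
import matplotlib.pyplot as plt

years = np.linspace(0, 44, 600)           # four nominal 11-year cycles
lats = np.linspace(-60, 60, 200)          # latitude in degrees
T, L = np.meshgrid(years, lats)

cycle = 11.0                              # nominal cycle period (years)
phase = (T % cycle) / cycle               # 0 -> 1 within each cycle
band_lat = 35.0 * (1.0 - phase)           # bands drift from +/-35 deg to 0

polarity = np.where((T // cycle) % 2 == 0, 1.0, -1.0)   # flips each cycle
amp = np.sin(np.pi * phase)               # waxes and wanes within a cycle
field = polarity * amp * (np.exp(-(L - band_lat) ** 2 / 50.0)
                          - np.exp(-(L + band_lat) ** 2 / 50.0))

plt.pcolormesh(T, L, field, cmap="RdBu_r", shading="auto")
plt.xlabel("time (years)")
plt.ylabel("latitude (degrees)")
plt.title("Toy dynamo bands migrating toward the equator")
plt.colorbar(label="toy field strength")
plt.show()
```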

But that regular churn wasn't the whole picture. Roughly twice every 100 years, the simulated sun did something different.

In those strange cases, the solar dynamo didn't follow that same cycle but, instead, clustered in one hemisphere over the other.

"That additional dynamo cycle would kind of wander," Matilsky said. "It would stay in one hemisphere over a few cycles, then move into the other one. Eventually, the solar dynamo would return to its original state."

That pattern could be a fluke of the model, Matilsky said, but it might also point to real, and previously unknown, behavior of the solar dynamo. He added that astronomers have, on rare occasions, seen sunspots congregating in one hemisphere of the sun more than the other, an observation that matches the CU Boulder team's findings.

Matilsky said that the group will need to develop its model further to see if the dual dynamo pans out. But he said that the team's results could, one day, help to explain the cause of the peaks and dips in the sun's activity--patterns that have huge implications for climate and technological societies on Earth.

"It gives us clues to how the sun might shut off its dynamo and turn itself back on again," he said.

Zhou Group develops AirSurf-Lettuce machine learning platform for crop optimisation in the UK

At Earlham Institute (EI), artificial intelligence-based techniques such as machine learning are moving from being merely an exciting premise to having real-life applications where they're needed most: improving efficiency and precision on the farm.

Researchers in the Zhou Group at EI, in cooperation with Ely-based G's Growers, have developed a machine learning platform, AirSurf-Lettuce, which works with computer vision and ultra-scale images taken from the air to help categorise lettuce crops in fields.

The software measures crop quantity and size and pinpoints location, helping farmers harvest with precision and get the crop to market in the most efficient way possible. Importantly, this technology can be applied to other crops, widening the scope for positive impact across the food chain.

Lettuce is big business, especially in East Anglia, with 122,000 tonnes produced in the UK each year. Up to 30% of yield can be lost to inefficiencies in the growing process as well as harvest strategies - losses that, if recouped, could provide a significant economic boost.

CAPTION: Transplanting lettuce at G's Growers plantation field, near Ely, UK. CREDIT: G's Growers

It's very important that farmers and growers understand precisely when crops will become harvest-ready, so that they can set in motion the planning of logistics, trading and marketing their produce further along the chain.

Traditionally, however, measuring crops in fields has been time-consuming, labour-intensive and prone to error; novel AI solutions based on aerial images can therefore provide a much more robust and effective method.

Another barrier to efficiency is inclement weather, which has become more frequent in recent years and can throw off harvesting times significantly, as crops take different lengths of time to mature.

The AirSurf technology - developed by members of the Zhou Group, including first authors of the paper on the project, Alan Bauer and Aaron Bostrom - uses 'deep learning' (a deep structured machine learning technique) combined with sophisticated, ultra-wide-scale imaging analysis to measure iceberg lettuce in a high-throughput mode. This is able to identify the precise quantity and location of lettuce plants, with the additional advantage of recognising crop quality, i.e. small, medium or large lettuce heads.
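As a rough illustration of this kind of counting-and-sizing pipeline, the Python sketch below detects bright, roughly circular heads in a hypothetical aerial image and bins them by size. It is not the AirSurf-Lettuce code: a simple brightness threshold and connected-component labelling stand in for the trained deep-learning detector, and the file name and size thresholds are assumptions.

```python
# Rough sketch of a counting-and-sizing pipeline -- not the AirSurf code.
# A brightness threshold plus connected-component labelling stands in for
# the trained deep-learning detector described in the article.
import numpy as np
from scipy import ndimage
from skimage.io import imread

image = imread("aerial_field.png", as_gray=True)    # hypothetical image

# Stand-in for the learned per-pixel "lettuce" score: bright, roughly
# circular heads show up against darker soil in greyscale aerial imagery.
mask = image > 0.6

# Group foreground pixels into individual plants.
labels, n_plants = ndimage.label(mask)
idx = np.arange(1, n_plants + 1)
sizes = ndimage.sum(mask, labels, index=idx)        # pixel area per head
centroids = ndimage.center_of_mass(mask, labels, idx)  # feeds the GPS step

# Bin each head into small/medium/large (thresholds made up).
category = np.digitize(sizes, bins=[200, 600])      # 0=small, 1=med, 2=large
for name, code in (("small", 0), ("medium", 1), ("large", 2)):
    print(f"{name}: {int((category == code).sum())} heads")
print(f"total plants detected: {n_plants}")
```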

Combining this system with GPS allows farmers to precisely track the size distribution of lettuce in their fields, which helps increase the precision and effectiveness of farming practice, including the timing of harvest.
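A minimal sketch of that GPS step, assuming a north-aligned aerial image whose corner coordinates are known (the values below, near Ely, are hypothetical), might linearly interpolate each detected centroid to latitude and longitude:

```python
# Minimal sketch, assuming a north-aligned image with known corner
# coordinates (hypothetical values near Ely). Real systems would read the
# geotransform from the drone's flight log or an orthomosaic instead.
def pixel_to_gps(row, col, shape, top_left, bottom_right):
    """Linearly interpolate a pixel position to (latitude, longitude)."""
    n_rows, n_cols = shape
    lat0, lon0 = top_left
    lat1, lon1 = bottom_right
    return (lat0 + (row / n_rows) * (lat1 - lat0),
            lon0 + (col / n_cols) * (lon1 - lon0))

# Example: a lettuce head detected at pixel (420, 310) in a 1000x1500 image.
print(pixel_to_gps(420, 310, (1000, 1500),
                   top_left=(52.40, 0.26), bottom_right=(52.39, 0.28)))
```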

First author, Alan Bauer at EI, said: "This cross-disciplinary collaboration integrates computer vision and machine learning with the lettuce growing business to demonstrate how we can improve crop yields using machine learning."

Group Leader at EI, Dr Ji Zhou, said: "My lab is keen to seek every possible approach to translate our public funded research in algorithm design, machine learning, computer vision, and crop phenomics to techniques and tools that can be used by academic and industrial partners to address challenging problems in crop research and crop production.

"Utilising our research work supported by BBSRC and other public and industry jointly funded projects, we have partnered with G's, leading vegetable growers in the UK, to equip our Agri-Food sector with smart and precise crop surveillance and analytical methods, for which we are confident that better crop management decisions and enhanced crop marketability could be achieved through our joint efforts".

Industry partner at G's Growers, Innovation Manager Jacob Kirwan, added: "Farming at a large scale means that precision is essential when ensuring that we are producing crops in an environmentally and economically sustainable way. Using technology like AirSurf means that growers are able to understand the variability in their fields and crops at a much higher level of detail than was previously possible.

"The decisions that can then be taken from this information, such as varying applications of inputs and irrigation; changing harvest strategies and planning the optimum time to sell crop, will all contribute towards increasing on farm yields and improving farm productivity."

Stanford AI tool helps radiologists detect brain aneurysms

Doctors could soon get some help from an artificial intelligence tool when diagnosing brain aneurysms - bulges in blood vessels in the brain that can leak or burst open, potentially leading to stroke, brain damage or death.

The AI tool, developed by researchers at Stanford University and detailed in a paper published June 7 in JAMA Network Open, highlights areas of a brain scan that are likely to contain an aneurysm.

"There's been a lot of concern about how machine learning will actually work within the medical field," said Allison Park, a Stanford graduate student in statistics and co-lead author of the paper. "This research is an example of how humans stay involved in the diagnostic process, aided by an artificial intelligence tool."

This tool, which is built around an algorithm called HeadXNet, improved clinicians' ability to correctly identify aneurysms at a level equivalent to finding six more aneurysms in 100 scans that contain aneurysms. It also improved consensus among the interpreting clinicians. While the success of HeadXNet in these experiments is promising, the team of researchers - who have expertise in machine learning, radiology and neurosurgery - cautions that further investigation is needed to evaluate the generalizability of the AI tool before real-time clinical deployment, given differences in scanner hardware and imaging protocols across hospital centers. The researchers plan to address such problems through multi-center collaboration.

CAPTION: HeadXNet team members (from left to right, Andrew Ng, Kristen Yeom, Christopher Chute, Pranav Rajpurkar and Allison Park) looking at a brain scan. Scans like this were used to train and test their artificial intelligence tool, which helps identify brain aneurysms. CREDIT: L.A. Cicero/Stanford News Service

Augmented expertise

Combing brain scans for signs of an aneurysm can mean scrolling through hundreds of images. Aneurysms come in many sizes and shapes and balloon out at tricky angles - some register as no more than a blip within the movie-like succession of images.

"Search for an aneurysm is one of the most labor-intensive and critical tasks radiologists undertake," said Kristen Yeom, associate professor of radiology and co-senior author of the paper. "Given inherent challenges of complex neurovascular anatomy and potential fatal outcome of a missed aneurysm, it prompted me to apply advances in computer science and vision to neuroimaging."

Yeom brought the idea to the AI for Healthcare Bootcamp run by Stanford's Machine Learning Group, which is led by Andrew Ng, adjunct professor of computer science and co-senior author of the paper. The central challenge was creating an artificial intelligence tool that could accurately process these large stacks of 3D images and complement clinical diagnostic practice.

To train their algorithm, Yeom worked with Park and Christopher Chute, a graduate student in computer science, and outlined clinically significant aneurysms detectable on 611 computerized tomography (CT) angiogram head scans.

"We labelled, by hand, every voxel - the 3D equivalent to a pixel - with whether or not it was part of an aneurysm," said Chute, who is also co-lead author of the paper. "Building the training data was a pretty grueling task and there were a lot of data."

Following the training, the algorithm decides for each voxel of a scan whether there is an aneurysm present. The end result of the HeadXNet tool is the algorithm's conclusions overlaid as a semi-transparent highlight on top of the scan. This representation of the algorithm's decision makes it easy for the clinicians to still see what the scans look like without HeadXNet's input.
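That overlay idea can be sketched in a few lines of Python. The mask below is synthetic rather than a real HeadXNet prediction; the point is only how a semi-transparent highlight is drawn over a slice without hiding the underlying image.

```python
# Minimal sketch of the overlay idea, with a synthetic mask standing in
# for HeadXNet's real per-voxel prediction: a semi-transparent highlight
# is drawn over one slice so the underlying image stays visible.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
scan_slice = rng.normal(0.5, 0.1, size=(256, 256))  # stand-in scan slice

mask = np.zeros_like(scan_slice)                    # synthetic prediction
mask[120:140, 90:115] = 1.0                         # pretend "aneurysm"

plt.imshow(scan_slice, cmap="gray")
# Masking the zeros keeps non-flagged voxels fully transparent, so only
# the predicted region is tinted.
plt.imshow(np.ma.masked_where(mask == 0, mask),
           cmap="autumn", alpha=0.4, interpolation="none")
plt.axis("off")
plt.show()
```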

"We were interested how these scans with AI-added overlays would improve the performance of clinicians," said Pranav Rajpurkar, a graduate student in computer science and co-lead author of the paper. "Rather than just having the algorithm say that a scan contained an aneurysm, we were able to bring the exact locations of the aneurysms to the clinician's attention."

Eight clinicians tested HeadXNet by evaluating a set of 115 brain scans for aneurysm, once with the help of HeadXNet and once without. With the tool, the clinicians correctly identified more aneurysms, and therefore reduced the "miss" rate, and the clinicians were more likely to agree with one another. HeadXNet did not influence how long it took the clinicians to decide on a diagnosis or their ability to correctly identify scans without aneurysms - a guard against telling someone they have an aneurysm when they don't.
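Those two figures - the miss rate on scans that do contain an aneurysm, and the ability to correctly clear scans that don't - correspond to sensitivity and specificity. The toy sketch below computes both from made-up reader calls; it is not the study's data.

```python
# Toy illustration with made-up reader calls -- not the study's data.
import numpy as np

y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0])  # 1 = scan has an aneurysm
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])  # one reader's calls

tp = np.sum((y_pred == 1) & (y_true == 1))
fn = np.sum((y_pred == 0) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
fp = np.sum((y_pred == 1) & (y_true == 0))

print(f"sensitivity: {tp / (tp + fn):.2f}")  # 1 - miss rate
print(f"specificity: {tn / (tn + fp):.2f}")  # correctly cleared scans
```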

To other tasks and institutions

The machine learning methods at the heart of HeadXNet could likely be trained to identify other diseases inside and outside the brain. For example, Yeom imagines a future version could focus on speeding up the identification of aneurysms after they have burst, saving precious time in an urgent situation. But a considerable hurdle remains in integrating any artificial intelligence medical tool with daily clinical workflow in radiology across hospitals.

Current scan viewers aren't designed to work with deep learning assistance, so the researchers had to custom-build tools to integrate HeadXNet within scan viewers. Similarly, variations in real-world data - as opposed to the data on which the algorithm is tested and trained - could reduce model performance. If the algorithm processes data from different kinds of scanners or imaging protocols, or a patient population that wasn't part of its original training, it might not work as expected.

"Because of these issues, I think deployment will come faster not with pure AI automation, but instead with AI and radiologists collaborating," said Ng. "We still have technical and non-technical work to do, but we as a community will get there and AI-radiologist collaboration is the most promising path."