Unveiling the future of mosquito repellents: Machine learning leads the way

In an innovative blend of technology and entomology, researchers at the University of California, Riverside, are utilizing machine learning to enhance the effectiveness of mosquito repellents.

The Mosquito Menace

Mosquitoes are more than just a nuisance; they carry deadly diseases like malaria and dengue fever. Traditional repellents like DEET, while effective, have drawbacks—they can be expensive, require frequent reapplication, and may not provide a pleasant user experience. Furthermore, the widespread use of pyrethroid-based spatial repellents is facing challenges due to increasing resistance in mosquito populations.

Enter Machine Learning

Professor Anandasankar Ray and his team are at the forefront of this innovation, having developed a machine-learning-based cheminformatics approach. This cutting-edge method has screened over 10 million compounds to identify potential new mosquito repellents and insecticides. Importantly, they have discovered effective and pleasantly scented repellent molecules derived from ordinary food and flavoring sources.
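The article does not disclose the team's actual pipeline, but a common cheminformatics baseline for this kind of screen is ranking a compound library by Tanimoto similarity to known-active molecules, using binary structural fingerprints. The sketch below is purely illustrative: the fingerprints are toy bit sets, and the compound names and threshold are made up.

```python
# Illustrative cheminformatics screen (not the UCR team's actual method):
# rank candidate compounds by Tanimoto similarity to known repellents.

def tanimoto(a, b):
    """Tanimoto similarity between two fingerprint bit sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def screen(library, actives, threshold=0.6):
    """Keep library compounds similar to at least one known active."""
    hits = []
    for name, fp in library.items():
        score = max(tanimoto(fp, act) for act in actives)
        if score >= threshold:
            hits.append((name, round(score, 2)))
    return sorted(hits, key=lambda h: -h[1])

# Toy data: integers stand in for structural features (e.g. functional groups).
known_repellents = [{1, 4, 7, 9}, {2, 4, 8}]
candidates = {
    "compound_A": {1, 4, 7, 10},   # close to the first active
    "compound_B": {3, 5, 6},       # dissimilar to both actives
    "compound_C": {2, 4, 8, 11},   # close to the second active
}
print(screen(candidates, known_repellents))
# → [('compound_C', 0.75), ('compound_A', 0.6)]
```

In a real pipeline the bit sets would come from a fingerprinting library (e.g. Morgan/circular fingerprints), and a trained model would replace the raw similarity threshold; the ranking idea is the same.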

A Four-Pronged Strategy

The research team concentrates on four key areas:

1. Improved Topical Repellents: Developing formulations that provide long-lasting protection (12-24 hours) with a desirable scent.
2. Spatial Repellents: Creating solutions to protect areas like backyards and homes from mosquito intrusion.
3. Long-Lasting Pyrethroid Analogs: Designing new molecules that are effective against resistant mosquito strains and suitable for use in bed nets and clothing.
4. Enhanced Spatial Pyrethroid Formulations: Increasing the efficacy of repellents against mosquitoes that exhibit knockdown resistance.

The Road Ahead

With a $2.5 million five-year grant from the National Institutes of Health, Ray’s team is set to further explore the identification of novel spatial mosquito repellents and to understand their mechanisms. They aim to provide safe, affordable, and highly effective mosquito control solutions that could significantly reduce human exposure to disease vectors, thereby improving the quality of life for at-risk populations.

As machine learning reveals new possibilities, the vision of a world less burdened by mosquito-borne diseases becomes increasingly achievable.

NVIDIA sales grow 78% on AI demand

NVIDIA has reported impressive financial results for the fourth quarter and fiscal year 2025, demonstrating significant advancements in AI supercomputing. The company's Q4 revenue reached an all-time high of $39.3 billion, marking a 78% increase compared to the previous year. The Data Center segment alone contributed $35.6 billion, reflecting a remarkable 93% surge year-over-year. This growth is primarily attributed to the Blackwell AI supercomputers' successful launch and large-scale production, which generated billions in sales during their first quarter. CEO Jensen Huang highlighted the extraordinary demand for Blackwell, emphasizing its critical role in enhancing AI capabilities across various industries.

In contrast, AMD reported a record Q4 2024 revenue of $7.7 billion, with the Data Center segment achieving $3.9 billion, a 69% increase year-over-year. This growth was driven by the increased adoption of EPYC processors and over $5 billion in Instinct accelerator sales for the year. CEO Dr. Lisa Su expressed optimism for continued expansion, citing the strength of AMD's product portfolio and the rising demand for high-performance computing solutions.

Intel, meanwhile, reported Q4 2024 Data Center and AI segment revenue of $3.4 billion, with an operating income of $200 million. While Intel remains a significant player in the industry, its data center revenue trails both NVIDIA's and AMD's, highlighting a competitive landscape in the AI and supercomputing sectors.
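The year-over-year percentages quoted above can be sanity-checked with simple arithmetic: if current revenue grew by g percent, the prior-year figure is current / (1 + g/100). The implied baselines below are derived, not stated in the article.

```python
# Sanity-check the quoted YoY growth figures by computing the
# implied prior-year revenue (in billions of dollars).

def implied_prior(current_billions, growth_pct):
    """Revenue a year earlier, implied by current revenue and YoY growth."""
    return current_billions / (1 + growth_pct / 100)

# NVIDIA Q4 revenue: $39.3B, up 78% YoY
print(round(implied_prior(39.3, 78), 1))   # → 22.1
# NVIDIA Data Center: $35.6B, up 93% YoY
print(round(implied_prior(35.6, 93), 1))   # → 18.4
# AMD Data Center: $3.9B, up 69% YoY
print(round(implied_prior(3.9, 69), 1))    # → 2.3
```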

NVIDIA's leadership in AI supercomputing is further reinforced by its involvement in the $500 billion Stargate Project and its collaborations with major cloud service providers like AWS, Google Cloud, and Microsoft Azure. These partnerships address the growing demand for AI capabilities, positioning NVIDIA at the forefront of technological innovation.

As the AI and supercomputing markets continue to expand, NVIDIA's strong financial performance and strategic initiatives underscore its pivotal role in shaping the future of technology.

Caltech's landmark breakthrough in quantum networking: A true revolution or just theoretical hype?

Caltech scientists claim a significant advancement in quantum networking with a method for "multiplexing entanglement," which could improve the efficiency of quantum communication systems. They suggest this technique might lead to faster, more scalable quantum networks—potentially paving the way for a "quantum internet." But is this a practical breakthrough or just another case of quantum hype?

The researchers demonstrated a technique for distributing quantum entanglement among multiple users, likened to the way conventional networks use multiplexing to send multiple signals over a single channel. However, the details of this method remain abstract, and its real-world implications are unclear.
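The comparison the researchers draw is to classical multiplexing, where several signals share one channel and are cleanly separated at the far end. The sketch below shows only that classical idea (round-robin time-division multiplexing); the quantum scheme itself works very differently and is not modeled here.

```python
# Classical analogy only: time-division multiplexing interleaves several
# streams over one shared channel, round-robin, and the receiver
# de-interleaves them. The quantum entanglement scheme is NOT this code.

def multiplex(streams):
    """Interleave equal-length streams into one shared channel."""
    channel = []
    for frame in zip(*streams):
        channel.extend(frame)
    return channel

def demultiplex(channel, n_streams):
    """Recover the original streams from the shared channel."""
    return [channel[i::n_streams] for i in range(n_streams)]

alice = ["a0", "a1", "a2"]
bob = ["b0", "b1", "b2"]
line = multiplex([alice, bob])
print(line)  # → ['a0', 'b0', 'a1', 'b1', 'a2', 'b2']
assert demultiplex(line, 2) == [alice, bob]
```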

Theory vs. Reality

Quantum entanglement is challenging to maintain over long distances. While Caltech asserts its multiplexing method could enhance scalability, it provides no evidence that it will function outside lab conditions. Moreover, established internet infrastructure relies on classical physics, whereas quantum communication needs a different framework that is not yet in place.

The Quantum Internet Mirage

Although the "quantum internet" promises secure communication, many skeptics doubt it will become operational soon. Theoretically, quantum networks are immune to eavesdropping, but practical applications remain experimental. Despite significant investments from governments and companies like Google and IBM, a functional quantum internet seems distant.

Limited Real-World Application

Even if multiplexing entanglement proves helpful, it's uncertain who would benefit—businesses, governments, or consumers—because the researchers do not indicate when this technology might be deployed beyond experimental labs. Until quantum networks can reliably transmit data at scale, announcements like these are merely theoretical milestones.

The "quantum internet" is still more buzzword than reality. While Caltech's research is technically impressive, not all breakthroughs lead to revolutions. Enthusiasts may feel hopeful, but the broader community should remain cautious until these advancements show practical benefits beyond academic contexts.

Breakthrough or hype? Questions arise over 'low-cost' computer claims

Swedish researchers at the University of Gothenburg have announced a potential breakthrough in creating a low-cost computer to make high-performance computing more accessible. However, whether this represents a true revolution in affordable computing or merely an academic project is unclear.

The university claims this innovative microchip technology significantly reduces production costs while achieving low energy consumption. Yet, the term "low-cost" is subjective. Are we talking about a product for the mass market or just a slight cost decrease? The announcement lacks concrete pricing comparisons with options like Raspberry Pi or low-end Chromebooks.

Moreover, academic advancements frequently do not lead to commercial success, and it remains uncertain who would manufacture or distribute these computers at scale. The energy efficiency claims must also be validated against industry standards. Without supporting data, it is difficult to assess whether this innovation stands out or is merely incremental.

Software compatibility is another vital concern. A low-cost computer only succeeds if it can run essential applications. Will it rely on existing operating systems or require custom software that limits adoption? Many similar projects have struggled with these challenges.

While the research is intriguing, tangible proof of performance and a clear route to market are essential if this "breakthrough" is to be more than an academic exercise. Until then, the tech world should remain skeptical, as the promise of a low-cost computer revolution is yet to be substantiated.

USC shows a more accurate picture of brain aging

In a groundbreaking development, researchers at the USC Leonard Davis School of Gerontology have unveiled an innovative artificial intelligence model designed to measure the rate at which our brains age. This cutting-edge tool estimates an individual's brain age and provides profound insights into neurocognitive changes, potentially revolutionizing our understanding of neurological health.

The model utilizes deep learning techniques to analyze neuroimaging data, accurately predicting brain age by identifying patterns associated with aging. Such precise estimations are invaluable, as discrepancies between chronological and brain age can indicate accelerated aging or neurological disorders.

A comprehensive review titled "Deep Learning for Brain Age Estimation: A Systematic Review" highlights the significance of these AI-driven approaches. The study emphasizes that machine learning models have been successfully employed to predict brain age, with deviations from typical aging patterns linked to brain abnormalities. The review also underscores the importance of accurate diagnostic techniques for reliable brain age estimations.

However, this journey does not end here. The field is rapidly evolving, with researchers continually refining AI models to enhance their accuracy and applicability across diverse populations. The ultimate goal is to integrate these tools into clinical settings, providing personalized assessments and interventions to maintain cognitive health throughout aging.

As we stand on the cusp of this exciting frontier, the fusion of artificial intelligence and neuroscience promises to unlock more profound mysteries of the human brain, paving the way for a future where cognitive decline is not an inevitable part of aging but a challenge we are equipped to understand and address.