
Science News in Review: Sept. 23

The field of artificial intelligence (AI) continues to advance rapidly, and this week's review highlights some of the most notable developments in AI foundation models and their interdisciplinary applications: a breakthrough in neuromorphic hardware that could improve AI's energy efficiency, an AI model that outperforms most humans at predicting odor, Google's progress in tracing AI-manipulated images, and promising uses of large language models (LLMs) for debunking conspiracy theories.

Neuromorphic hardware could potentially revolutionize computing

A research team at the University of Limerick’s Bernal Institute has published a new study in Nature demonstrating the feasibility of designing molecules that could revolutionize computing.

Inspired by the folded appearance of the human brain, the team developed a computing platform that uses the natural movement of atoms within a crystal lattice to process and store information. The design allows for numerous memory states, each smaller than an atom, significantly improving energy efficiency and space economy compared with traditional silicon-based systems.

The innovation overcomes the longstanding challenge of achieving high computational resolution in neuromorphic platforms, which had previously been effective only for low-accuracy tasks. The breakthrough has potential applications in energy-efficient data centers, digital mapping and online gaming, marking a major advance in computing technology.
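
The density advantage of many-state memory elements is easy to quantify: a device that can reliably hold n distinguishable states stores log2(n) bits, so far fewer devices are needed for the same capacity. The back-of-the-envelope comparison below is a general illustration of that principle, not a calculation from the Nature paper; the state counts are hypothetical.

```python
import math

def devices_needed(total_bits: int, states_per_device: int) -> int:
    """Number of memory devices needed to store `total_bits`,
    if each device can hold `states_per_device` distinguishable states."""
    bits_per_device = math.log2(states_per_device)
    return math.ceil(total_bits / bits_per_device)

capacity = 1_000_000  # one megabit of storage (illustrative)

# Conventional binary cell: 2 states -> 1 bit per device.
print(devices_needed(capacity, 2))    # 1,000,000 devices

# Hypothetical multi-state element with 64 states -> 6 bits per device.
print(devices_needed(capacity, 64))   # ~166,667 devices
```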

AI helps predict odor from molecular structure

The physical properties of an odor molecule often provide little insight into how it will actually smell. This makes it difficult to predict odors or relate them to one another, unlike vision, where a clear spectrum exists with colors acting as intermediates between foundational hues.

A team in Cambridge, Mass., led by Alex Wiltschko at Osmo set out to tackle this challenge by digitizing smell with an AI algorithm. As part of the effort, they created a novel molecule, numbered 533, which illustrated just how unpredictable odor prediction can be even with advanced technology.

However, their AI model — trained on thousands of molecular structures and corresponding scent labels — outperformed most human participants in accurately predicting aromas. This breakthrough opens the door to applications such as disease diagnostics, fragrance design and more effective insect repellents.
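
Osmo's actual model is not described in detail here, but the general recipe, mapping a molecular representation to a set of odor descriptors, can be sketched in a few lines. The example below is a hypothetical, minimal illustration: it featurizes molecules from SMILES strings with RDKit Morgan fingerprints and fits a small multi-label classifier on toy data. The molecules, descriptor labels and model choice are placeholders, not Osmo's data or architecture.

```python
# Minimal, illustrative structure-to-odor classifier (not Osmo's model).
# Requires: rdkit, scikit-learn, numpy.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

# Toy training set: SMILES strings with hand-picked odor descriptors (placeholders).
train = [
    ("CC(=O)OCC",       {"fruity", "sweet"}),   # ethyl acetate
    ("CCCCCC(=O)OCC",   {"fruity"}),            # ethyl hexanoate
    ("c1ccc(cc1)C=O",   {"almond", "sweet"}),   # benzaldehyde
    ("CC(C)CC=O",       {"pungent"}),           # isovaleraldehyde
    ("OCC(O)CO",        {"odorless"}),          # glycerol
]
labels = sorted({d for _, descs in train for d in descs})

def featurize(smiles: str) -> np.ndarray:
    """Morgan fingerprint (radius 2, 1024 bits) as a numpy vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)
    return np.array(list(fp), dtype=np.uint8)

X = np.stack([featurize(s) for s, _ in train])
Y = np.array([[1 if lab in descs else 0 for lab in labels] for _, descs in train])

model = MultiOutputClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(X, Y)

# Predict descriptors for an unseen molecule (vanillin).
pred = model.predict(featurize("COc1cc(C=O)ccc1O").reshape(1, -1))[0]
print([lab for lab, flag in zip(labels, pred) if flag])
```

A production system would use far richer data and a learned molecular representation (for example, a graph neural network) rather than fixed fingerprints, but the input-output framing is the same.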

Google Search will now help trace the origins of AI-manipulated images

AI-manipulated images have caused confusion and deception by leading people to believe false information. The harm is particularly acute when the images target celebrities or are used to sway election outcomes.

In response, major search engines are ramping up efforts to label AI-generated content. Google — in collaboration with the Coalition for Content Provenance and Authenticity (C2PA) — is at the forefront of this effort. Google has recently upgraded its “About this image” tool to include a global standard for tracing the origins of AI-altered images.

The initiative focuses on standardizing AI certification and detection through a verification technology known as “Content Credentials.” By integrating C2PA’s new 2.1 standard into platforms like Google Search and Google Ads, users can now trace the origins of images through embedded metadata. 
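
Full Content Credentials verification involves parsing the embedded C2PA manifest and checking its cryptographic signatures, which is best done with an official C2PA SDK or tool. As a rough, hypothetical illustration of where that metadata lives, the sketch below simply walks a JPEG's application segments looking for C2PA/JUMBF data; finding it suggests a manifest is present, but it says nothing about whether the credentials are valid.

```python
# Crude check for an embedded C2PA (Content Credentials) manifest in a JPEG.
# This only detects APP11/JUMBF data labelled "c2pa"; it does NOT verify
# signatures. Use a C2PA SDK or tool for real verification.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):       # not a JPEG
        return False
    pos = 2
    while pos + 4 <= len(data):
        if data[pos] != 0xFF:                  # lost sync with the marker stream
            break
        marker = data[pos + 1]
        if marker == 0xDA:                     # start of scan: no more metadata segments
            break
        seg_len = struct.unpack(">H", data[pos + 2:pos + 4])[0]
        segment = data[pos + 4:pos + 2 + seg_len]
        if marker == 0xEB and b"c2pa" in segment:   # APP11 segment carrying JUMBF/C2PA
            return True
        pos += 2 + seg_len
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```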

TikTok has already implemented C2PA’s system, and YouTube is poised to follow suit. These measures are part of Google’s broader strategy to combat AI misinformation, which also includes tools like SynthID, its digital watermarking solution.

AI chatbot shows promise in talking people out of conspiracy theories

A recent study published in Science has shown that conversations with AI can effectively reduce belief in conspiracy theories. In the study, 2,190 participants described a conspiracy theory they believed in, and the AI, powered by GPT-4 Turbo, refuted their claims with personalized, evidence-based arguments.

Results showed that the intervention reduced belief in conspiracies by approximately 20%, with effects persisting for at least two months. The AI-driven conversations also produced a general reduction in belief in unrelated conspiracy theories, challenging the idea that such beliefs are impervious to change.

The study hypothesized that earlier attempts to debunk conspiracy theories failed because they lacked the depth and personalization needed to engage entrenched believers. In contrast, GPT-4 Turbo’s ability to provide personalized counterarguments demonstrated that, when believers are presented with compelling evidence tailored to their reasoning, they are more likely to change their views. The approach succeeded across a wide range of conspiracies, including both topical and deeply ingrained beliefs.
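
The study's own prompts and protocol are not reproduced here, but the basic interaction pattern, an LLM given a participant's stated belief and instructed to respond with tailored, evidence-based counterarguments over several turns, can be sketched with a standard chat API. The snippet below is a hypothetical outline using the OpenAI Python client; the system prompt, turn count and loop structure are illustrative choices, not the study's materials.

```python
# Illustrative multi-turn "debunking" dialogue with a chat model.
# Not the study's protocol; the system prompt and flow are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "The user will describe a conspiracy theory they find convincing and the "
    "evidence they rely on. Respond respectfully with specific, factual, "
    "evidence-based counterarguments tailored to their stated reasons."
)

def debunking_session(initial_statement: str, rounds: int = 3) -> None:
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": initial_statement},
    ]
    for _ in range(rounds):
        reply = client.chat.completions.create(
            model="gpt-4-turbo",   # the study reports using GPT-4 Turbo
            messages=messages,
        )
        answer = reply.choices[0].message.content
        print(answer, "\n")
        messages.append({"role": "assistant", "content": answer})
        follow_up = input("Your response (or press Enter to stop): ").strip()
        if not follow_up:
            break
        messages.append({"role": "user", "content": follow_up})

if __name__ == "__main__":
    debunking_session(input("Describe the theory and why you believe it: "))
```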

Beyond challenging the notion that conspiracy beliefs cannot be shifted, the finding further establishes AI’s potential to mitigate societal conflict and promote more informed public discourse.


