Xiaoliang Luo
The inevitability and superfluousness of cell types in spatial cognition
Discoveries of functional cell types, exemplified by the cataloging of spatial cells in the hippocampal formation, are heralded as scientific breakthroughs. We question whether the identification of cell types based on human intuitions has…
Author response: The inevitability and superfluousness of cell types in spatial cognition
Contour-based instance segmentation method of road scene
Near-zero nonlinear error pressure sensor based on piezoresistor sensitivity matching for wind tunnel pressure test
Coordinating multiple mental faculties during learning
Complex behavior is supported by the coordination of multiple brain regions. How do brain regions coordinate absent a homunculus? We propose coordination is achieved by a controller-peripheral architecture in which peripherals (e.g., the v…
DSSNet: An Anchor-Free Rotated Object Detection Network With Dynamic Sample Selection for Remote Sensing Images
Object detection in remote sensing imagery requires precise localisation and identification of targets under challenging conditions. Facing the challenges of arbitrary target orientations, wide-scale variations, dense distributions, and sm…
Beyond Human-Like Processing: Large Language Models Perform Equivalently on Forward and Backward Scientific Text
The impressive performance of large language models (LLMs) has led to their consideration as models of human language processing. Instead, we suggest that the success of LLMs arises from the flexibility of the transformer learning architec…
Confidence-weighted integration of human and machine judgments for superior decision-making
Large language models (LLMs) can surpass humans in certain forecasting tasks. What role does this leave for humans in the overall decision process? One possibility is that humans, despite performing worse than LLMs, can still add value whe…
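As a rough illustration of the idea behind confidence-weighted integration (a minimal sketch only; the function and weighting scheme are illustrative assumptions, not the paper's actual method), two probability judgments can be blended in proportion to their reported confidences:

```python
# Illustrative sketch: blend a human and a model probability estimate for a
# binary outcome, weighting each by its reported confidence in [0, 1].
# This is an assumed toy scheme, not the published method.

def combine(p_human: float, c_human: float,
            p_model: float, c_model: float) -> float:
    """Return a confidence-weighted average of the two probabilities."""
    w_human = c_human / (c_human + c_model)  # normalized human weight
    return w_human * p_human + (1 - w_human) * p_model

# An unsure human (confidence 0.2) paired with a confident model (0.8):
# the combined estimate leans toward the model's judgment.
print(combine(0.6, 0.2, 0.9, 0.8))  # 0.84
```

Under such a scheme, even a weaker judge shifts the final estimate when it is relatively confident, which is one way a human could add value despite lower average accuracy.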
Matching domain experts by training from scratch on domain knowledge
Recently, large language models (LLMs) have outperformed human experts in predicting the results of neuroscience experiments (Luo et al., 2024). What is the basis for this performance? One possibility is that statistical patterns in that s…
Large language models surpass human experts in predicting neuroscience results
Scientific discoveries often hinge on synthesizing decades of research, a task that potentially outstrips human information processing capacities. Large language models (LLMs) offer a solution. LLMs trained on the vast scientific literatur…
Adaptive stretching of representations across brain regions and deep learning model layers
Prefrontal cortex (PFC) is known to modulate the visual system to favor goal-relevant information by accentuating task-relevant stimulus dimensions. Does the brain broadly re-configure itself to optimize performance by stretching visual r…
A controller-peripheral architecture and costly energy principle for learning
Complex behavior is supported by the coordination of multiple brain regions. How do brain regions coordinate absent a homunculus? We propose coordination is achieved by a controller-peripheral architecture in which peripherals (e.g., the v…
Characterizing the Fracture Forming Limit Curve from Shear to Uniaxial Tension in Ultrathin Sheet Metals Using Specimens with Cutouts
A too-good-to-be-true prior to reduce shortcut reliance
Despite their impressive performance in object recognition and other tasks under standard testing conditions, deep networks often fail to generalize to out-of-distribution (o.o.d.) samples. One cause for this shortcoming is that modern arc…
Effects of Inorganic Passivators on Gas Production and Heavy Metal Passivation Performance during Anaerobic Digestion of Pig Manure and Corn Straw
The treatment of livestock manure produced by China's expanding breeding industry has attracted wide attention. Heavy metals in pig manure can pollute soil and water and even transfer to crops, posing harm to humans through the f…
Optimized Shear Specimen with Cutouts for Shear Fracture Characterization of Ultrathin Sheet Metal
A deep learning account of how language affects thought
How can words shape meaning? Shared labels highlight commonalities between concepts whereas contrasting labels make differences apparent. To address such findings, we propose a deep learning account that spans perception to decision (i.e.…
Understanding top-down attention using task-oriented ablation design
Top-down attention allows neural networks, both artificial and biological, to focus on the information most relevant for a given task. This is known to enhance performance in visual perception. But it remains unclear how attention brings a…
Note on the acquisition of French vowels by Chinese learners with English as an L2 (original title: "Note sur l'acquisition des voyelles du français par les apprenants chinois avec la L2 anglais")
https://www.shs-conferences.org/articles/shsconf/pdf/2020/06/shsconf_cmlf2020_07003.pdf
The perceptual boost of visual attention is task-dependent in naturalistic settings
Top-down attention allows people to focus on task-relevant visual information. Is the resulting perceptual boost task-dependent in naturalistic settings? We aim to answer this with a large-scale computational experiment. First, we design a…
The Costs and Benefits of Goal-Directed Attention in Deep Convolutional Neural Networks
People deploy top-down, goal-directed attention to accomplish tasks, such as finding lost keys. By tuning the visual system to relevant information sources, object recognition can become more efficient (a benefit) and more biased toward th…