Arnout Devos
Apertus: Democratizing Open and Compliant LLMs for Global Language Environments
We present Apertus, a fully open suite of large language models (LLMs) designed to address two systemic shortcomings in today's open model ecosystem: data compliance and multilingual representation. Unlike many prior models that release we…
Bridging the gap between scientists and clinicians: addressing collaboration challenges in clinical AI integration
This article explores challenges in bridging the gap between scientists and healthcare professionals in artificial intelligence (AI) integration. It highlights barriers, the role of interdisciplinary research centers, and the importance of…
AI and inclusion in simulation education and leadership: a global cross-sectional evaluation of diversity
Background Simulation-based medical education (SBME) is a critical training tool in healthcare, shaping learners’ skills, professional identities, and inclusivity. Leadership demographics in SBME, including age, gender, race/ethnicity, and…
Gender Disparities in Artificial Intelligence–Generated Images of Hospital Leadership in the United States
Artificial intelligence text-to-image models reflect and amplify systemic biases, overrepresenting men and White leaders, while underrepresenting diversity. Ethical AI practices, including diverse training data sets and fairness-aware algo…
A comparison of large language model-generated and published perioperative neurocognitive disorder recommendations: a cross-sectional web-based analysis
Large language models can generate perioperative neurocognitive disorder recommendations that align closely with published guidelines. However, further validation and integration of clinician feedback are required before clinical applicati…
My-This-Your-That - Interpretable Identification of Systematic Bias in Federated Learning for Biomedical Images
Deep learning has the potential to improve and even automate the interpretation of biomedical images, making it more accessible, particularly in low-resource settings where human experts are often lacking. The privacy concerns of these ima…
A meta-learning approach for genomic survival analysis
RNA sequencing has emerged as a promising approach in cancer prognosis as sequencing data becomes more easily and affordably accessible. However, it remains challenging to build good predictive models especially when the sample size is lim…
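As a rough point of reference for the kind of model such a pipeline would adapt, the sketch below fits a penalized Cox proportional hazards baseline on synthetic gene-expression features, assuming the lifelines package; the meta-learning transfer across cancer cohorts described in the paper is not reproduced here, and all data and column names are illustrative.

```python
# Minimal Cox proportional hazards baseline on synthetic RNA-seq-like features.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n_samples, n_genes = 120, 5                      # tiny stand-in for an RNA-seq cohort
X = rng.normal(size=(n_samples, n_genes))
risk = X @ rng.normal(size=n_genes)              # synthetic linear risk score
time = rng.exponential(scale=np.exp(-risk))      # survival times driven by the risk
event = (rng.random(n_samples) < 0.7).astype(int)  # ~70% observed events

df = pd.DataFrame(X, columns=[f"gene_{i}" for i in range(n_genes)])
df["duration"], df["event"] = time, event

cph = CoxPHFitter(penalizer=0.1)                 # ridge penalty helps with small sample sizes
cph.fit(df, duration_col="duration", event_col="event")

# Concordance index: a higher partial hazard should mean shorter survival,
# so negate the hazard when scoring.
hazard = cph.predict_partial_hazard(df)
print("c-index:", concordance_index(df["duration"], -hazard, df["event"]))
```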
Model-Agnostic Learning to Meta-Learn
In this paper, we propose a learning algorithm that enables a model to quickly exploit commonalities among related tasks from an unseen task distribution, before quickly adapting to specific tasks from that same distribution. We investigat…
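To make the inner/outer optimization pattern behind this line of work concrete, here is a minimal first-order MAML-style sketch in NumPy on toy 1-D linear regression tasks; the task distribution, step sizes, and linear model are illustrative assumptions and not the paper's exact algorithm.

```python
# First-order MAML-style meta-learning on toy linear regression tasks.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A task is a random linear function y = a*x + c."""
    a, c = rng.uniform(-2, 2, size=2)
    def make_batch(n):
        x = rng.uniform(-3, 3, size=n)
        return x, a * x + c
    return make_batch

def grads(w, b, x, y):
    """Gradient of mean squared error for the linear model y_hat = w*x + b."""
    err = w * x + b - y
    return 2 * np.mean(err * x), 2 * np.mean(err)

w, b = 0.0, 0.0                  # meta-initialization to be learned
alpha, beta = 0.05, 0.01         # inner / outer learning rates

for step in range(2000):
    gw_sum, gb_sum = 0.0, 0.0
    for _ in range(8):                                   # meta-batch of tasks
        batch = sample_task()
        xs, ys = batch(5)                                # small support set
        xq, yq = batch(20)                               # query set
        gw, gb = grads(w, b, xs, ys)
        w_adapt, b_adapt = w - alpha * gw, b - alpha * gb   # inner adaptation step
        qgw, qgb = grads(w_adapt, b_adapt, xq, yq)           # first-order outer gradient
        gw_sum += qgw
        gb_sum += qgb
    w -= beta * gw_sum / 8                               # meta-update
    b -= beta * gb_sum / 8

print("meta-learned initialization:", w, b)
```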
Self-Supervised Prototypical Transfer Learning for Few-Shot Classification
Recent advances in transfer learning and few-shot learning largely rely on annotated data related to the goal task during (pre-)training. However, collecting sufficiently similar and annotated data is often infeasible. Building on advances…
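For readers unfamiliar with the prototypical classification step this method builds on, the following NumPy sketch shows the core idea: class prototypes are mean embeddings of the support set and queries go to the nearest prototype. The self-supervised pre-training stage of the paper is not shown; the embeddings here are random stand-ins for encoder outputs.

```python
# Nearest-prototype few-shot classification on stand-in embeddings.
import numpy as np

rng = np.random.default_rng(0)
n_way, k_shot, dim = 5, 3, 64

support = rng.normal(size=(n_way, k_shot, dim))    # as if produced by a pre-trained encoder
queries = rng.normal(size=(10, dim))

prototypes = support.mean(axis=1)                  # (n_way, dim) class means
dists = ((queries[:, None, :] - prototypes[None]) ** 2).sum(-1)  # squared Euclidean distances
pred = dists.argmin(axis=1)                        # nearest-prototype labels
print(pred)
```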
Self-Supervised Prototypical Transfer Learning for Few-Shot Classification
Most approaches in few-shot learning rely on costly annotated data related to the goal task domain during (pre-)training. Recently, unsupervised meta-learning methods have exchanged the annotation requirement for a reduction in few-shot cl…
Revisiting Few-Shot Learning for Facial Expression Recognition
Most of the existing deep neural nets on automatic facial expression recognition focus on a set of predefined emotion classes, where the amount of training data has the biggest impact on performance. However, in the standard setting over-p…
Regression Networks for Meta-Learning Few-Shot Classification
We propose regression networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each class. In high dimensional embedding s…
[Re] Meta-learning with differentiable closed-form solvers
Replication
[Re] Reproducing Meta-learning with differentiable closed-form solvers
TensorFlow implementation of R2D2 from "Meta-learning with differentiable closed-form solvers" (ICLR 2019)
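As a rough sketch of the ridge-regression base learner at the heart of R2D2-style few-shot classification, the snippet below solves for class weights in closed form on support embeddings and then scores queries. It is NumPy only for brevity; the referenced repository uses TensorFlow, and the learned scale/bias calibration of the full method is omitted.

```python
# Closed-form ridge-regression head for few-shot classification (R2D2-style sketch).
import numpy as np

rng = np.random.default_rng(0)
n_way, k_shot, dim, lam = 5, 3, 64, 1.0

support = rng.normal(size=(n_way * k_shot, dim))   # stand-in support embeddings
labels = np.repeat(np.arange(n_way), k_shot)
Y = np.eye(n_way)[labels]                          # one-hot targets

# Closed-form ridge solution: W = (X^T X + lam*I)^-1 X^T Y.
W = np.linalg.solve(support.T @ support + lam * np.eye(dim), support.T @ Y)

queries = rng.normal(size=(10, dim))
pred = (queries @ W).argmax(axis=1)                # class scores -> predicted labels
print(pred)
```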
Area and Power Efficient Ultra-Wideband Transmitter Based on Active Inductor
This paper presents the design of an impulse radio ultra-wideband (IR-UWB) transmitter for low-power, short-range, and high-data-rate applications such as high-density neural recording interfaces. The IR-UWB transmitter pulses are generate…
Profit Maximizing Logistic Regression Modeling for Credit Scoring
Multiple classification techniques have been employed for different business applications. In the particular case of credit scoring, a classifier which maximizes the total profit is preferable. The recently proposed expected maximum profit…
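The following sketch, assuming scikit-learn, illustrates the general idea of profit-oriented credit scoring: fit a plain logistic regression and pick the cutoff that maximizes total profit under an illustrative cost/benefit matrix. The EMP measure and the paper's profit-maximizing training objective are not reproduced; the profit figures here are made up for demonstration.

```python
# Logistic regression credit scoring with a profit-maximizing acceptance cutoff.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]             # estimated probability of default (class 1)

# Illustrative profit matrix: small gain for accepting a good loan, large loss for a bad one.
GAIN_GOOD, LOSS_BAD = 0.1, 1.0

best_cut, best_profit = None, -np.inf
for cut in np.linspace(0, 1, 101):
    accept = scores < cut                           # accept applicants below the default cutoff
    profit = GAIN_GOOD * np.sum(accept & (y_te == 0)) - LOSS_BAD * np.sum(accept & (y_te == 1))
    if profit > best_profit:
        best_cut, best_profit = cut, profit

print(f"profit-maximizing cutoff: {best_cut:.2f}, total profit: {best_profit:.1f}")
```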
Multiphase digitally controlled oscillator for future 5G phased arrays in 90 nm CMOS
This paper reports a low noise Digitally Controlled Oscillator (DCO) with multiphase outputs, suitable for next generation phased arrays. The DCO core is implemented using an 8-stage Rotary Traveling Wave Oscillator (RTWO) topology. Sim…