Marek Herde
scikit-activeml: A Comprehensive and User-friendly Active Learning Library
scikit-activeml is a user-friendly open-source Python library for active learning on top of scikit-learn. Included are implementations of a large collection of query strategies, models, and visualization tools in pool- and stream-based act…
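A minimal pool-based usage sketch, following the library's documented API (class names such as SklearnClassifier, UncertaintySampling, and the MISSING_LABEL placeholder are taken from the public package and may differ between versions):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from skactiveml.classifier import SklearnClassifier
from skactiveml.pool import UncertaintySampling
from skactiveml.utils import MISSING_LABEL

# Toy pool of samples; true labels are only revealed when queried.
X, y_true = make_blobs(n_samples=200, centers=4, random_state=0)
y = np.full(shape=y_true.shape, fill_value=MISSING_LABEL)

# Wrap a scikit-learn estimator so it can handle partially labeled data.
clf = SklearnClassifier(LogisticRegression(), classes=np.unique(y_true))
qs = UncertaintySampling(method="entropy")

# Basic pool-based cycle: query the most uncertain sample, label it, refit.
for _ in range(20):
    query_idx = qs.query(X=X, y=y, clf=clf, batch_size=1)
    y[query_idx] = y_true[query_idx]  # simulate the annotator
    clf.fit(X, y)
```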
No Free Lunch in Active Learning: LLM Embedding Quality Dictates Query Strategy Success
The advent of large language models (LLMs) capable of producing general-purpose representations lets us revisit the practicality of deep active learning (AL): By leveraging frozen LLM embeddings, we can mitigate the computational costs of …
crowd-hpo: Realistic Hyperparameter Optimization and Benchmarking for Learning from Crowds with Noisy Labels
Crowdworking is a cost-efficient solution for acquiring class labels. Since these labels are subject to noise, various approaches to learning from crowds have been proposed. Typically, these approaches are evaluated with default hyperparam…
Annot-Mix: Learning with Noisy Class Labels from Multiple Annotators via a Mixup Extension
Training with noisy class labels impairs neural networks’ generalization performance. In this context, mixup is a popular regularization technique to improve training robustness by making memorizing false class labels more difficult. Howev…
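For context, vanilla mixup, on which annot-mix builds, trains on convex combinations of input pairs and their label distributions. The sketch below illustrates only this standard technique, not the multi-annotator extension itself:

```python
import numpy as np

def mixup_batch(X, Y, alpha=0.2, rng=None):
    """Vanilla mixup: convex combinations of inputs and one-hot labels.

    X: (n, d) feature batch, Y: (n, c) one-hot (or soft) label batch.
    The mixing coefficient lambda is drawn from Beta(alpha, alpha).
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(X))
    X_mix = lam * X + (1.0 - lam) * X[perm]
    Y_mix = lam * Y + (1.0 - lam) * Y[perm]
    return X_mix, Y_mix

# Example: mix a batch of 4 samples with 3 classes (one-hot labels).
X = np.arange(8.0).reshape(4, 2)
Y = np.eye(3)[[0, 1, 2, 0]]
X_mix, Y_mix = mixup_batch(X, Y, rng=np.random.default_rng(0))
```

Because each mixed target blends two labels, the network cannot reduce its loss by memorizing any single, possibly false, label, which is the robustness effect referred to above.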
Systematic Evaluation of Uncertainty Calibration in Pretrained Object Detectors
In the field of deep learning based computer vision, the development of deep object detection has led to unique paradigms (e.g., two-stage or set-based) and architectures (e.g., Faster-RCNN or DETR) which enable outstanding performance on…
dopanim: A Dataset of Doppelganger Animals with Noisy Annotations from Multiple Humans
Human annotators typically provide annotated data for training machine learning models, such as neural networks. Yet, human annotations are subject to noise, impairing generalization performance. Methodological research on approaches coun…
Fast Fishing: Approximating BAIT for Efficient and Scalable Deep Active Image Classification
Deep active learning (AL) seeks to minimize the annotation costs for training deep neural networks. BAIT, a recently proposed AL strategy based on the Fisher Information, has demonstrated impressive performance across various datasets. How…
Active Label Refinement for Semantic Segmentation of Satellite Images
Remote sensing through semantic segmentation of satellite images contributes to the understanding and utilisation of the earth's surface. For this purpose, semantic segmentation networks are typically trained on large sets of labelled sate…
Multi-annotator Deep Learning: A Probabilistic Framework for Classification
Solving complex classification tasks using deep neural networks typically requires large amounts of annotated data. However, corresponding class labels are noisy when provided by error-prone annotators, e.g., crowdworkers. Training standar…
Efficient Bayesian Updates for Deep Learning via Laplace Approximations
Since training deep neural networks takes significant computational resources, extending the training dataset with new data is difficult, as it typically requires complete retraining. Moreover, specific applications do not allow costly ret…
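The notion of a Bayesian update can be illustrated with a linear-Gaussian model, where it is exact: the posterior obtained on old data serves as the prior for new data, so no pass over the old data is required. The sketch below shows only this generic update rule, not the paper's Laplace-approximation method for deep networks:

```python
import numpy as np

def bayes_linreg_update(mu0, S0_inv, X, y, noise_var=1.0):
    """One Bayesian update for linear regression with a Gaussian prior.

    mu0, S0_inv: prior mean and prior precision matrix.
    Returns the posterior mean and precision after observing (X, y).
    """
    S_inv = S0_inv + X.T @ X / noise_var
    mu = np.linalg.solve(S_inv, S0_inv @ mu0 + X.T @ y / noise_var)
    return mu, S_inv

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)

# The posterior from the first 50 samples becomes the prior for the next 50.
mu, S_inv = bayes_linreg_update(np.zeros(3), np.eye(3), X[:50], y[:50])
mu, S_inv = bayes_linreg_update(mu, S_inv, X[50:], y[50:])

# Identical (up to numerics) to a single fit on all 100 samples.
mu_all, _ = bayes_linreg_update(np.zeros(3), np.eye(3), X, y)
assert np.allclose(mu, mu_all)
```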
A Review of Uncertainty Calibration in Pretrained Object Detectors
In the field of deep learning based computer vision, the development of deep object detection has led to unique paradigms (e.g., two-stage or set-based) and architectures (e.g., Faster-RCNN or DETR) which enable outstanding performance on …
Toward optimal probabilistic active learning using a Bayesian approach
Gathering labeled data to train well-performing machine learning models is one of the critical challenges in many applications. Active learning aims at reducing the labeling costs by an efficient and effective allocation of costly labeling…
scikit-activeml: A Library and Toolbox for Active Learning Algorithms
Machine learning applications often need large amounts of training data to perform well. Whereas unlabeled data can be easily gathered, the labeling process is difficult, time-consuming, or expensive in most applications. Active learning c…
A Survey on Cost Types, Interaction Schemes, and Annotator Performance Models in Selection Algorithms for Active Learning in Classification
Pool-based active learning (AL) aims to optimize the annotation process (i.e., labeling) as the acquisition of annotations is often time-consuming and therefore expensive. For this purpose, an AL strategy queries annotations intelligent…
Collaborative Interactive Learning -- A clarification of terms and a differentiation from other research fields
The field of collaborative interactive learning (CIL) aims at developing and investigating the technological foundations for a new generation of smart systems that support humans in their everyday life. While the concept of CIL has already…
Automated Active Learning with a Robot
In the field of automated processes in industry, a major goal is for robots to solve new tasks without costly adaptions. Therefore, it is of advantage if the robot can perform new tasks independently while the learning process is intuitive…