Optimized FreeMark Post-Training White-Box Watermarking of Tiny Neural Networks
· 2025 · Open Access
· DOI: https://doi.org/10.3390/electronics14214237
· OA: W4415680243
Neural networks are powerful, high-accuracy systems whose trained parameters constitute valuable intellectual property. Building models that reach top-tier performance is a complex undertaking that requires substantial investments of time and money, so protecting these assets is increasingly important. Extensive research has been carried out on neural network watermarking, which explores inserting a recognizable marker into a host model, either as a concealed bit-string or as a characteristic output, so that network ownership can be confirmed even in the presence of malicious attempts to erase the embedded marker. This study examines the applicability of Opt-FreeMark, a non-invasive post-training white-box watermarking technique obtained by modifying and optimizing the existing state-of-the-art FreeMark scheme for tiny neural networks. Here, "tiny" refers to models intended for ultra-low-power deployments, such as those running on edge devices like sensors and microcontrollers. Watermark robustness is demonstrated by simulating common model-modification attacks that attempt to remove the watermark while preserving performance; the results presented in the paper indicate that the watermarking scheme effectively protects the networks against these manipulations.
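To make the white-box, non-invasive idea concrete, below is a minimal sketch of one way a post-training watermark can be derived without modifying any weights: a secret random projection of the frozen parameters yields a bit-string that the owner registers, and verification re-extracts the bits from a suspect model. All names (`register_watermark`, `verify_watermark`) and the projection scheme are illustrative assumptions, not the Opt-FreeMark algorithm itself.

```python
import numpy as np

def register_watermark(weights: np.ndarray, n_bits: int = 64, seed: int = 0) -> np.ndarray:
    """Derive an n_bits bit-string from the model's weights without modifying
    them: project the flattened weights with a secret random matrix
    (recreated from `seed`) and keep the signs. The seed and the resulting
    bit-string form the owner's registered key pair."""
    w = weights.ravel().astype(np.float64)
    rng = np.random.default_rng(seed)             # `seed` acts as the secret key
    proj = rng.standard_normal((n_bits, w.size))  # secret projection matrix
    return (proj @ w > 0).astype(np.uint8)        # projection signs -> bit-string

def verify_watermark(suspect: np.ndarray, bits: np.ndarray,
                     seed: int = 0, threshold: float = 0.9) -> bool:
    """Re-extract the bit-string from a suspect model and claim ownership
    if the match rate with the registered bits exceeds `threshold`."""
    extracted = register_watermark(suspect, n_bits=bits.size, seed=seed)
    return float(np.mean(extracted == bits)) >= threshold

# Usage: a random vector stands in for a tiny model's flattened parameters.
w = np.random.default_rng(1).standard_normal(1000).astype(np.float32)
bits = register_watermark(w)
assert verify_watermark(w, bits)  # the unmodified model always verifies
```

Because nothing is written into the model, accuracy is untouched; the security rests on the projection matrix staying secret, which is why such schemes typically register the key pair with the owner or a trusted third party.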
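The robustness evaluation simulates model-modification attacks of this kind; magnitude pruning is one common example (assumed here for illustration rather than taken from the paper) and can be sketched as follows, reusing `w`, `bits`, and the functions from the snippet above.

```python
def prune_attack(weights: np.ndarray, sparsity: float = 0.3) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights, a standard
    compression step an adversary might also apply to try to destroy a
    watermark while preserving the model's accuracy."""
    w = weights.ravel().copy()
    k = int(sparsity * w.size)
    w[np.argsort(np.abs(w))[:k]] = 0.0
    return w.reshape(weights.shape)

attacked = prune_attack(w, sparsity=0.3)
# Only small-magnitude weights are removed, so the secret projections change
# little and most extracted bits should still match the registered ones.
print(verify_watermark(attacked, bits))
```

A robust scheme keeps the bit-match rate above the decision threshold under such attacks; a successful attack would have to flip enough projection signs to break verification, which in this sketch requires perturbations large enough to also degrade the model's performance.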