Abstract:
We introduce FLightNER, a Federated Learning approach that extends LightNER, a state-of-the-art Named-Entity Recognition (NER) model based on prompt-tuning. FLightNER aggregates only the trainable parameters of LightNER without degrading model accuracy, saving 10 GB per client and enabling more clients to join a federation without expanding the central server's memory. We evaluate our approach against two baselines using three diverse datasets with different distributions across up to seven clients in a federation. We empirically show that FLightNER outperforms the centrally trained LightNER model by 19% on an unbalanced medical dataset and matches it on two balanced datasets, CoNLL and I2B2. Furthermore, we use and evaluate two memory-saving techniques: the AdaFactor optimizer and Automatic Mixed Precision. Our findings enable owners of sensitive data, such as healthcare practitioners, to collaboratively train a NER model with low memory requirements while keeping their data on premises.
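To make the parameter-selective aggregation concrete, the sketch below shows a FedAvg-style server update restricted to the trainable (prompt-tuning) parameters, which is how we read the abstract's claim that only LightNER's trainable parameters are aggregated. It is a minimal illustration, not the authors' implementation; helper names such as get_trainable_state and aggregate are hypothetical.

```python
# Minimal sketch (not the authors' code): FedAvg-style aggregation over only the
# trainable parameters (e.g. prompt parameters), leaving the frozen backbone local.
from typing import Dict, List
import torch


def get_trainable_state(model: torch.nn.Module) -> Dict[str, torch.Tensor]:
    """Extract only the parameters that require gradients (the prompt-tuned subset)."""
    return {name: p.detach().clone()
            for name, p in model.named_parameters() if p.requires_grad}


def aggregate(client_states: List[Dict[str, torch.Tensor]],
              client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    """Weighted average of the clients' trainable parameters (FedAvg-style)."""
    total = float(sum(client_sizes))
    agg = {name: torch.zeros_like(t) for name, t in client_states[0].items()}
    for state, n in zip(client_states, client_sizes):
        for name, t in state.items():
            agg[name] += t * (n / total)
    return agg


def load_trainable_state(model: torch.nn.Module,
                         state: Dict[str, torch.Tensor]) -> None:
    """Copy the aggregated trainable parameters back into a local model."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in state:
                p.copy_(state[name])
```

Because clients transmit and the server stores only this small trainable subset rather than the full model state, the per-client memory and communication footprint drops accordingly, which is the mechanism behind the reported 10 GB savings per client.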